| id | title | user | state | labels | comments | author_association | body | is_title |
|---|---|---|---|---|---|---|---|---|
2,772,853,949
|
[PrivateUse1] Support parseDispatchKey with modified PrivateUse1
|
fmo-mt
|
closed
|
[
"triaged",
"open source",
"module: dispatch",
"Merged",
"ciflow/trunk",
"topic: not user facing",
"module: PrivateUse1"
] | 16
|
CONTRIBUTOR
|
PyTorch now supports many PrivateUse1-based backend names, such as `AutogradPrivateUse1` or `QuantizedPrivateUse1`, not to mention the original `PrivateUse1` backend.
However, users who implement `PrivateUse1` functionality typically rename the backend by calling `torch.utils.rename_privateuse1_backend("my_backend")`; in that case, the `PrivateUse1` backend string is no longer found when we call other functions related to it. For example, when we use `torch.library` to register some custom functions for our new backend, we use "my_backend" as the backend name instead of "PrivateUse1", and the following error is thrown:
```
could not parse dispatch key 'my_backend'
```
So, this PR changes the function `c10::DispatchKey parseDispatchKey(const std::string& k)`: it now double checks whether `PrivateUse1` has been renamed, and if so, it rewrites `k` to account for the new backend name and looks it up again.
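For illustration, a minimal sketch of the user scenario this PR targets (the op name and implementation below are hypothetical, not from the PR):
```python
import torch

# Rename the PrivateUse1 backend to a custom name.
torch.utils.rename_privateuse1_backend("my_backend")

# Register a custom op and give it an implementation for the renamed backend.
lib = torch.library.Library("my_ops", "DEF")
lib.define("my_add(Tensor a, Tensor b) -> Tensor")

def my_add_impl(a, b):
    return a + b

# Before this PR, using "my_backend" here failed with
# "could not parse dispatch key 'my_backend'".
lib.impl("my_add", my_add_impl, "my_backend")
```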
cc @NmomoN @mengpenghui @fwenguang @cdzhan @1274085042 @PHLens
| true
|
2,772,829,610
|
FSDP: How to support w8a8 quantization?
|
Lenan22
|
open
|
[
"triaged",
"module: fsdp",
"oncall: pt2"
] | 2
|
NONE
|
### 🐛 Describe the bug
I replaced nn.Linear with QuantLinear, substituting the nn.Linear operator with an int8 quantized operator:
```python
act_tensor_int8, pertoken_scale = torch_npu.npu_dynamic_quant(x)
quant_out = torch_npu.npu_quant_matmul(act_tensor_int8,
                                        self.weight.to(torch.int8),
                                        self.weight_scale,  # weight scale
                                        offset=None,
                                        bias=self.bias,
                                        pertoken_scale=pertoken_scale,
                                        output_dtype=torch.bfloat16)
```
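For context, a sketch of how such a QuantLinear module might be structured (the module layout is an assumption for illustration; only the torch_npu calls are taken from the snippet above):
```python
# Sketch only: wrap the quantized matmul from the report in an nn.Module so it
# can replace nn.Linear. The structure is assumed, not the reporter's code.
import torch
import torch.nn as nn
import torch_npu


class QuantLinear(nn.Module):
    def __init__(self, weight, weight_scale, bias=None):
        super().__init__()
        self.weight = nn.Parameter(weight, requires_grad=False)              # quantized weight
        self.weight_scale = nn.Parameter(weight_scale, requires_grad=False)  # weight scale
        self.bias = bias

    def forward(self, x):
        # Dynamically quantize the activation, then run the int8 matmul.
        act_tensor_int8, pertoken_scale = torch_npu.npu_dynamic_quant(x)
        return torch_npu.npu_quant_matmul(act_tensor_int8,
                                          self.weight.to(torch.int8),
                                          self.weight_scale,
                                          offset=None,
                                          bias=self.bias,
                                          pertoken_scale=pertoken_scale,
                                          output_dtype=torch.bfloat16)
```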
This change has achieved performance gains on a single GPU. However, when wrapped with FSDP (Fully Sharded Data Parallel) on multiple GPUs,
```python
model_fsdp = FullyShardedDataParallel(model, **settings)
```
it fails to run because FSDP performs parameter sharding and cannot handle this quantized operator. The error message is as follows:
```
[rank4]: RuntimeError: call aclnnQuantMatmulV4 failed, detail:E69999: Inner Error!
[rank4]: E69999: [PID: 1182939] 2025-01-07-17:15:19.281.742 op[QuantBatchMatmulV3], [InferShape] dimensions a(12608) and b(128) must be equal[FUNC:InferNDimWithBias][FILE:matmul_infer_fns.cc][LINE:322]
```
Do you have any good solutions for this issue?
cc @zhaojuanmao @mrshenli @rohan-varma @awgu @fegin @kwen2501 @chauhang @penguinwu
| true
|
2,772,653,582
|
[Easy] Fix linalg.norm hint message typo
|
zeshengzong
|
closed
|
[
"open source",
"Merged",
"ciflow/trunk",
"release notes: linalg_frontend"
] | 9
|
CONTRIBUTOR
|
Fixes #136454
**Test Result**
**Before**
```python
>>> import torch
>>> from torch import linalg
>>>
>>> my_tensor = torch.tensor([[[8., -3., 0., 1.]]])
>>> # ↓ ↓ ↓ ↓ ↓
>>> linalg.norm(input=my_tensor, ord='fro', dim=(0, 1, 2)) # Error
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
RuntimeError: linalg.norm: If dim is specified, it mut be of length 1 or 2. Got [0, 1, 2]
>>> # ↓ ↓ ↓ ↓ ↓
>>> linalg.norm(input=my_tensor, ord='nuc', dim=(0, 1, 2)) # Error
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
RuntimeError: linalg.norm: If dim is specified, it mut be of length 1 or 2. Got [0, 1, 2]
```
**After**
```python
>>> import torch
>>> from torch import linalg
>>>
>>> my_tensor = torch.tensor([[[8., -3., 0., 1.]]])
>>> # ↓ ↓ ↓ ↓ ↓
>>> linalg.norm(input=my_tensor, ord='fro', dim=(0, 1, 2)) # Error
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
RuntimeError: linalg.norm: If dim is specified, it must be of length 1 or 2. Got [0, 1, 2]
>>> # ↓ ↓ ↓ ↓ ↓
>>> linalg.norm(input=my_tensor, ord='nuc', dim=(0, 1, 2)) # Error
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
RuntimeError: linalg.norm: If dim is specified, it must be of length 1 or 2. Got [0, 1, 2]
```
cc @soulitzer
| true
|
2,772,627,057
|
added `__add__` and `__mul__` hints to torch.Size
|
randolf-scholz
|
closed
|
[
"oncall: distributed",
"module: typing",
"open source",
"Merged",
"ciflow/trunk",
"release notes: python_frontend"
] | 9
|
CONTRIBUTOR
|
Fixes #144218
`Size` returns `Size`, whereas `tuple` returns `tuple`: https://github.com/python/typeshed/blob/9f28171658b9ca6c32a7cb93fbb99fc92b17858b/stdlib/builtins.pyi#L985-L988
- Use `SupportsIndex` instead of `int` in `__getitem__` (supported at runtime)
- `Size.__add__` overrides `tuple.__add__`; the latter also supports adding tuples of non-integral element types (see the sketch below).
- Added typing unit tests.
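A rough, stub-style sketch of the kind of hints being added (illustrative only, not the exact diff from this PR):
```python
# Illustrative stub of the hints described above; the exact overloads in the
# PR may differ.
from typing import SupportsIndex, overload


class Size(tuple):
    @overload
    def __getitem__(self, key: SupportsIndex) -> int: ...
    @overload
    def __getitem__(self, key: slice) -> "Size": ...
    def __getitem__(self, key): ...  # runtime behaviour unchanged
    def __add__(self, other: tuple) -> "Size": ...  # tuple.__add__ would return tuple
    def __mul__(self, n: SupportsIndex) -> "Size": ...
```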
cc @H-Huang @awgu @kwen2501 @wanchaol @fegin @fduwjj @wz337 @wconstab @d4l3k @c-p-i-o @ezyang @malfet @xuzhao9 @gramster
| true
|
2,772,327,034
|
Mismatch between PyTorch nn.Linear in float16 and equivalent NumPy implementation
|
AyoubMDL
|
closed
|
[] | 4
|
NONE
|
### 🐛 Describe the bug
I’m trying to replicate the behavior of PyTorch’s nn.Linear operation in float16 using NumPy, but I’m unable to get exactly the same outputs. Specifically, I’ve implemented two NumPy versions of the linear operation, but neither matches the results produced by PyTorch’s nn.Linear when using torch.float16. According to the PyTorch documentation, intermediate operations in nn.Linear are performed in float32 precision, even when the inputs are in float16. However, even after accounting for this by performing accumulation in float32, I was unable to match the outputs.
```python
import numpy as np
import torch
import torch.nn as nn
import argparse
def numpy_dense_from_scratch(x, weights):
    # Convert to float32 for accumulation
    x_fp32 = x.astype(np.float32)
    weights_fp32 = weights.astype(np.float32)
    result_fp16 = np.zeros((x.shape[0], weights.shape[0]), dtype=np.float16)
    for i in range(x_fp32.shape[0]):
        for j in range(weights_fp32.shape[0]):
            # Accumulate the result in float32 for better precision
            sum_fp32 = 0.0
            for k in range(x_fp32.shape[1]):
                sum_fp32 += x_fp32[i, k] * weights_fp32[j, k]
            # Store the final result in float16 after accumulation
            result_fp16[i, j] = np.float16(sum_fp32)
    return result_fp16


def numpy_dense(x, weights):
    x = x.astype(np.float32)
    weights = weights.astype(np.float32)
    res = np.matmul(x, weights.T, dtype=np.float32)
    return res.astype(np.float16)


class Dense(nn.Linear):
    def __init__(self, in_features, out_features):
        super().__init__(in_features=in_features, out_features=out_features,
                         bias=False, device="cpu", dtype=torch.float16)
        self.weight.requires_grad = False

    def forward(self, input):
        return super().forward(input)


def compare_outputs(pytorch_model, inputs):
    def _to_numpy(tensor):
        return tensor.cpu().numpy()

    # Torch inference
    pytorch_model.eval()
    torch_outputs = [_to_numpy(pytorch_model(inputs))]
    # Numpy outputs
    numpy_outputs = [numpy_dense(_to_numpy(inputs), _to_numpy(pytorch_model.weight))]
    # Numpy from scratch outputs
    numpy_from_scratch_outputs = [numpy_dense_from_scratch(
        _to_numpy(inputs), _to_numpy(pytorch_model.weight))]

    # Both tests fail
    np.testing.assert_array_equal(torch_outputs, numpy_from_scratch_outputs)
    np.testing.assert_array_equal(torch_outputs, numpy_outputs)


def main():
    parser = argparse.ArgumentParser(add_help=False)
    parser.add_argument("--full_range", action="store_true")
    args = parser.parse_args()

    torch.manual_seed(0)
    # Create random inputs either between [0, 1] or between [fp16min, fp16max]
    size = (64, 256)
    x_rand_tensor = torch.rand(size, requires_grad=False, dtype=torch.float32)
    f16_min = torch.finfo(torch.float16).min + 1
    f16_max = torch.finfo(torch.float16).max - 1

    # Inputs for test
    scale_factor = 1
    offset = 0
    if args.full_range:
        scale_factor = (f16_max - f16_min)
        offset = f16_min
    x = (x_rand_tensor * scale_factor + offset).to(torch.float16)

    # Create the model
    dense_model = Dense(256, 1024)
    compare_outputs(dense_model, x)


if __name__ == "__main__":
    main()
```
### Versions
PyTorch version: 2.5.1+cu124
Is debug build: False
CUDA used to build PyTorch: 12.4
ROCM used to build PyTorch: N/A
OS: Ubuntu 24.04.1 LTS (x86_64)
GCC version: (Ubuntu 13.3.0-6ubuntu2~24.04) 13.3.0
Clang version: Could not collect
CMake version: version 3.31.2
Libc version: glibc-2.39
Python version: 3.12.3 (main, Nov 6 2024, 18:32:19) [GCC 13.2.0] (64-bit runtime)
Python platform: Linux-5.15.167.4-microsoft-standard-WSL2-x86_64-with-glibc2.39
Is CUDA available: False
CUDA runtime version: No CUDA
CUDA_MODULE_LOADING set to: N/A
GPU models and configuration: No CUDA
Nvidia driver version: No CUDA
cuDNN version: No CUDA
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 39 bits physical, 48 bits virtual
Byte Order: Little Endian
CPU(s): 16
On-line CPU(s) list: 0-15
Vendor ID: GenuineIntel
Model name: 13th Gen Intel(R) Core(TM) i7-1360P
CPU family: 6
Model: 186
Thread(s) per core: 2
Core(s) per socket: 8
Socket(s): 1
Stepping: 2
BogoMIPS: 5222.42
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ss ht syscall nx pdpe1gb rdtscp lm constant_tsc rep_good nopl xtopology tsc_reliable nonstop_tsc cpuid pni pclmulqdq vmx ssse3 fma cx16 pcid sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand hypervisor lahf_lm abm 3dnowprefetch invpcid_single ssbd ibrs ibpb stibp ibrs_enhanced tpr_shadow vnmi ept vpid ept_ad fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid rdseed adx smap clflushopt clwb sha_ni xsaveopt xsavec xgetbv1 xsaves avx_vnni umip waitpkg gfni vaes vpclmulqdq rdpid movdiri movdir64b fsrm md_clear serialize flush_l1d arch_capabilities
Virtualization: VT-x
Hypervisor vendor: Microsoft
Virtualization type: full
L1d cache: 384 KiB (8 instances)
L1i cache: 256 KiB (8 instances)
L2 cache: 10 MiB (8 instances)
L3 cache: 18 MiB (1 instance)
Vulnerability Gather data sampling: Not affected
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Not affected
Vulnerability Reg file data sampling: Mitigation; Clear Register File
Vulnerability Retbleed: Mitigation; Enhanced IBRS
Vulnerability Spec rstack overflow: Not affected
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl and seccomp
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Enhanced / Automatic IBRS; IBPB conditional; RSB filling; PBRSB-eIBRS SW sequence; BHI BHI_DIS_S
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Versions of relevant libraries:
[pip3] numpy==2.2.0
[pip3] nvidia-cublas-cu12==12.4.5.8
[pip3] nvidia-cuda-cupti-cu12==12.4.127
[pip3] nvidia-cuda-nvrtc-cu12==12.4.127
[pip3] nvidia-cuda-runtime-cu12==12.4.127
[pip3] nvidia-cudnn-cu12==9.1.0.70
[pip3] nvidia-cufft-cu12==11.2.1.3
[pip3] nvidia-curand-cu12==10.3.5.147
[pip3] nvidia-cusolver-cu12==11.6.1.9
[pip3] nvidia-cusparse-cu12==12.3.1.170
[pip3] nvidia-nccl-cu12==2.21.5
[pip3] nvidia-nvjitlink-cu12==12.4.127
[pip3] nvidia-nvtx-cu12==12.4.127
[pip3] onnx==1.17.0
[pip3] onnxconverter-common==1.14.0
[pip3] onnxmltools==1.13.0
[pip3] onnxruntime==1.21.0
[pip3] onnxruntime_extensions==0.13.0
[pip3] onnxscript==0.1.0.dev20241223
[pip3] torch==2.5.1
[pip3] triton==3.1.0
[conda] Could not collect
| true
|
2,772,216,474
|
[AMD] SDPA internal changes
|
xw285cornell
|
closed
|
[
"fb-exported",
"Merged",
"ciflow/trunk",
"topic: not user facing"
] | 5
|
CONTRIBUTOR
|
Summary: All the internal changes needed to enable flash attention w/ SDPA in fbcode.
Test Plan:
```
TORCH_ROCM_FA_PREFER_CK=1 buck run -m rocm621 mode/opt-amd-gpu scripts/xdwang/example:sdpa
+--------------+-------------------+---------+------------+-------------------+------------------+----------------------+----------------+---------------+-------------------+------------------------+---------------------------+----------------------+-------------------+
| Batch Size | Sequence Length | Heads | Head Dim | Flash Time (µs) | Math Time (µs) | xformers Time (µs) | Flash TFlops | Math TFlops | xformers TFlops | Speedup (Flash/Math) | Speedup (xformers/Math) | xformers trace_url | Flash trace_url |
+==============+===================+=========+============+===================+==================+======================+================+===============+===================+========================+===========================+======================+===================+
| 1 | 4096 | 32 | 64 | 455.552 | 7748.76 | 513.449 | 301.698 | 17.7369 | 267.678 | 17.0096 | 15.0916 | | |
+--------------+-------------------+---------+------------+-------------------+------------------+----------------------+----------------+---------------+-------------------+------------------------+---------------------------+----------------------+-------------------+
| 1 | 4096 | 16 | 128 | 329.971 | 4741.11 | 386.049 | 416.519 | 28.9888 | 356.014 | 14.3683 | 12.2811 | | |
+--------------+-------------------+---------+------------+-------------------+------------------+----------------------+----------------+---------------+-------------------+------------------------+---------------------------+----------------------+-------------------+
| 1 | 8192 | 32 | 64 | 1455.76 | 31869.6 | 1665.49 | 377.642 | 17.2501 | 330.087 | 21.8921 | 19.1353 | | |
+--------------+-------------------+---------+------------+-------------------+------------------+----------------------+----------------+---------------+-------------------+------------------------+---------------------------+----------------------+-------------------+
| 1 | 8192 | 16 | 128 | 1265.77 | 18972.8 | 1479.48 | 434.325 | 28.976 | 371.588 | 14.9891 | 12.824 | | |
+--------------+-------------------+---------+------------+-------------------+------------------+----------------------+----------------+---------------+-------------------+------------------------+---------------------------+----------------------+-------------------+
| 1 | 16384 | 32 | 64 | 5732.99 | 121861 | 6816.77 | 383.573 | 18.0453 | 322.59 | 21.2562 | 17.8767 | | |
+--------------+-------------------+---------+------------+-------------------+------------------+----------------------+----------------+---------------+-------------------+------------------------+---------------------------+----------------------+-------------------+
| 1 | 16384 | 16 | 128 | 4749.69 | 73776.4 | 5404.03 | 462.982 | 29.8066 | 406.923 | 15.5329 | 13.6521 | | |
+--------------+-------------------+---------+------------+-------------------+------------------+----------------------+----------------+---------------+-------------------+------------------------+---------------------------+----------------------+-------------------+
+--------------+-------------------+---------+------------+-------------------+------------------+----------------------+----------------+---------------+-------------------+------------------------+---------------------------+----------------------+-------------------+
| Batch Size | Sequence Length | Heads | Head Dim | Flash Time (µs) | Math Time (µs) | xformers Time (µs) | Flash TFlops | Math TFlops | xformers TFlops | Speedup (Flash/Math) | Speedup (xformers/Math) | xformers trace_url | Flash trace_url |
+==============+===================+=========+============+===================+==================+======================+================+===============+===================+========================+===========================+======================+===================+
| 1 | 4096 | 32 | 64 | 1615.41 | 8342.67 | 1822.72 | 212.7 | 41.1855 | 188.508 | 5.16443 | 4.57705 | | |
+--------------+-------------------+---------+------------+-------------------+------------------+----------------------+----------------+---------------+-------------------+------------------------+---------------------------+----------------------+-------------------+
| 1 | 4096 | 16 | 128 | 1357.97 | 5943.53 | 1432.34 | 253.022 | 57.8104 | 239.886 | 4.37676 | 4.14953 | | |
+--------------+-------------------+---------+------------+-------------------+------------------+----------------------+----------------+---------------+-------------------+------------------------+---------------------------+----------------------+-------------------+
| 1 | 8192 | 32 | 64 | 5556.5 | 31726.7 | 6502.17 | 247.348 | 43.3197 | 211.374 | 5.70984 | 4.8794 | | |
+--------------+-------------------+---------+------------+-------------------+------------------+----------------------+----------------+---------------+-------------------+------------------------+---------------------------+----------------------+-------------------+
| 1 | 8192 | 16 | 128 | 5186 | 22529.4 | 5590.36 | 265.019 | 61.0044 | 245.85 | 4.34427 | 4.03004 | | |
+--------------+-------------------+---------+------------+-------------------+------------------+----------------------+----------------+---------------+-------------------+------------------------+---------------------------+----------------------+-------------------+
| 1 | 16384 | 32 | 64 | 22527.7 | 130413 | 26527.6 | 244.035 | 42.155 | 207.239 | 5.789 | 4.91613 | | |
+--------------+-------------------+---------+------------+-------------------+------------------+----------------------+----------------+---------------+-------------------+------------------------+---------------------------+----------------------+-------------------+
| 1 | 16384 | 16 | 128 | 18347.9 | 87553.2 | 20358 | 299.628 | 62.791 | 270.044 | 4.77184 | 4.30068 | | |
+--------------+-------------------+---------+------------+-------------------+------------------+----------------------+----------------+---------------+-------------------+------------------------+---------------------------+----------------------+-------------------+
```
Reviewed By: leitian, feikou, yoyoyocmu, sijiac
Differential Revision: D67262726
| true
|
2,772,212,190
|
[aot] don't dce aten rng nodes
|
xmfan
|
closed
|
[
"Merged",
"ciflow/trunk",
"release notes: fx",
"fx",
"module: dynamo",
"ciflow/inductor"
] | 5
|
MEMBER
|
FIXES https://github.com/pytorch/pytorch/issues/143431
For aot_eager backend, we dce twice in aot. The first dce errs on the side of caution and provides a restrictive dce function: https://github.com/pytorch/pytorch/blob/2e1ea8598f477322965c28fb52e6e5f53876d8dd/torch/fx/experimental/proxy_tensor.py#L1173
The second one is more aggressive: https://github.com/pytorch/pytorch/blob/2e1ea8598f477322965c28fb52e6e5f53876d8dd/torch/_functorch/_aot_autograd/dispatch_and_compile_graph.py#L185
But this deviates from eager accuracy when rand ops are dce'd.
The repro doesn't work for inductor, but that's a separate issue.
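For illustration, a hypothetical sketch of the kind of divergence dce'ing rng nodes can cause (not the exact repro from the linked issue): an unused aten rng op still consumes RNG state, so eliminating it shifts every later random value.
```python
# Hypothetical illustration; the function and values are made up for this sketch.
import torch


def f(x):
    _ = torch.rand(4)        # result unused, but it still advances the RNG state
    return x + torch.rand(4)


torch.manual_seed(0)
eager = f(torch.zeros(4))

torch.manual_seed(0)
compiled = torch.compile(f, backend="aot_eager")(torch.zeros(4))

# If the unused torch.rand(4) were dead-code-eliminated from the AOT graph,
# `compiled` would no longer match `eager`.
print(torch.allclose(eager, compiled))
```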
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #144319
cc @ezyang @SherlockNoMad @EikanWang @jgong5 @wenzhe-nrv @voznesenskym @penguinwu @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @jiayisunx @chenyang78 @kadeng @chauhang @amjames
| true
|
2,772,186,843
|
[Quant][Inductor][X86] Separate binary post op fusion and lowering for qconv
|
Xia-Weiwen
|
closed
|
[
"open source",
"Merged",
"ciflow/trunk",
"topic: not user facing",
"intel",
"module: inductor",
"ciflow/inductor"
] | 3
|
COLLABORATOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #144318
* #144312
* #144224
**Summary**
The current implementation fuses quantized ops with their post ops and lowers the fused op to the cpp backend in the same pass. It is better to separate post op fusion and lowering because
- it looks better in terms of design
- we need the post op fusion pass for PT2E quantization eager mode
As one of a series of PRs which do the separation, this PR moves binary post op fusion of qconv out of the lowering pass to after the weight-prepack pass. The workflow is
1. Weight prepack for qconv so that `dq - conv` patterns are replaced by `onednn.qconv2d_pointwise`
2. Fuse `onednn.qconv2d_pointwise` and post ops
3. Lower to cpp backend
This PR adds additional `PatternMatcherPass`'s to handle the post op fusion. Pattern matchers used for fusion are reused.
**Test plan**
It is covered by existing UTs in `test_mkldnn_pattern_matcher.py` for post op fusion.
cc @jgong5 @mingfeima @XiaobingSuper @sanchitintel @ashokei @jingxu10 @voznesenskym @penguinwu @EikanWang @Guobing-Chen @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @ColinPeppler @amjames @desertfire @chauhang @aakhundov
| true
|
2,772,058,095
|
Move Windows arm64 scripts from pytorch/builder
|
iremyux
|
closed
|
[
"open source",
"Merged",
"ciflow/trunk",
"release notes: build"
] | 6
|
COLLABORATOR
|
This PR moves the Windows Arm64 scripts from the builder repository to the main repository. The corresponding PR to pytorch/builder that removes them is here: https://github.com/pytorch/builder/pull/2058
| true
|
2,772,054,569
|
[CD] Enable profiling for XPU Windows nightly wheels
|
chuanqi129
|
closed
|
[
"module: windows",
"open source",
"Merged",
"ciflow/trunk",
"release notes: releng",
"topic: not user facing",
"ciflow/binaries_wheel",
"module: xpu"
] | 7
|
COLLABORATOR
|
PR https://github.com/pytorch/pytorch/pull/144034 added profiling support for the torch XPU Windows binary; this PR enables it in the PyTorch XPU Windows CD.
Works for https://github.com/pytorch/pytorch/issues/114850
cc @peterjc123 @mszhanyi @skyline75489 @nbcsm @iremyux @Blackhex @gujinghui @EikanWang @fengyuan14 @guangyey
| true
|
2,772,049,829
|
refactor benchmarking to use dynamo_timed
|
nmacchioni
|
closed
|
[
"Merged",
"topic: not user facing",
"module: inductor",
"ciflow/inductor"
] | 6
|
CONTRIBUTOR
|
use dynamo_timed for all our wrapped calls, instead of our custom timer
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #144507
* #144505
* #144501
* #144353
* #133287
* #144365
* #133121
* #133058
* __->__ #144315
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @ColinPeppler @amjames @desertfire @chauhang @aakhundov
| true
|
2,772,012,953
|
[inductor] [dtype propogation] `conv1d,2d,3d` pass the check when handling `uint8,16,32,64` while eager throws the error
|
shaoyuyoung
|
open
|
[
"oncall: pt2",
"oncall: cpu inductor"
] | 7
|
CONTRIBUTOR
|
### 🐛 Describe the bug
related to #144310
```python
import torch
import torch.nn.functional as F
torch._dynamo.config.recompile_limit = 12
def run_test(dim, dtype):
    input_shape = [1, 8] + [64] * dim
    input = torch.randn(input_shape).to(dtype).cuda()
    kernel_size = 4
    stride = 4
    padding = 2
    conv_kernel = (torch.ones(8, 1, *([kernel_size] * dim)) / (kernel_size ** dim)).cuda()
    op = F.conv2d if dim == 2 else F.conv1d if dim == 1 else F.conv3d
    try:
        output = op(input, conv_kernel, stride=stride, padding=padding, groups=8)
        print(f"succeed on eager for dim={dim}, dtype={dtype}")
    except Exception as e:
        print(f"failed on eager for dim={dim}, dtype={dtype}: {e}")
    try:
        cf = torch.compile(op)
        output = cf(input, conv_kernel, stride=stride, padding=padding, groups=8)
        print(f"succeed on inductor for dim={dim}, dtype={dtype}")
    except Exception as e:
        print(f"failed on inductor for dim={dim}, dtype={dtype}: {e}")


for dim in (1, 2, 3):
    for dtype in (torch.uint8, torch.uint16, torch.uint32, torch.uint64):
        run_test(dim, dtype)
```
### Error logs
```
failed on eager for dim=1, dtype=torch.uint8: "conv_depthwise2d_forward_cuda" not implemented for 'Byte'
succeed on inductor for dim=1, dtype=torch.uint8
failed on eager for dim=1, dtype=torch.uint16: "conv_depthwise2d_forward_cuda" not implemented for 'UInt16'
succeed on inductor for dim=1, dtype=torch.uint16
failed on eager for dim=1, dtype=torch.uint32: "conv_depthwise2d_forward_cuda" not implemented for 'UInt32'
succeed on inductor for dim=1, dtype=torch.uint32
failed on eager for dim=1, dtype=torch.uint64: "conv_depthwise2d_forward_cuda" not implemented for 'UInt64'
succeed on inductor for dim=1, dtype=torch.uint64
failed on eager for dim=2, dtype=torch.uint8: "conv_depthwise2d_forward_cuda" not implemented for 'Byte'
succeed on inductor for dim=2, dtype=torch.uint8
failed on eager for dim=2, dtype=torch.uint16: "conv_depthwise2d_forward_cuda" not implemented for 'UInt16'
succeed on inductor for dim=2, dtype=torch.uint16
failed on eager for dim=2, dtype=torch.uint32: "conv_depthwise2d_forward_cuda" not implemented for 'UInt32'
succeed on inductor for dim=2, dtype=torch.uint32
failed on eager for dim=2, dtype=torch.uint64: "conv_depthwise2d_forward_cuda" not implemented for 'UInt64'
succeed on inductor for dim=2, dtype=torch.uint64
failed on eager for dim=3, dtype=torch.uint8: "conv_depthwise3d" not implemented for 'Byte'
succeed on inductor for dim=3, dtype=torch.uint8
failed on eager for dim=3, dtype=torch.uint16: "conv_depthwise3d" not implemented for 'UInt16'
succeed on inductor for dim=3, dtype=torch.uint16
failed on eager for dim=3, dtype=torch.uint32: "conv_depthwise3d" not implemented for 'UInt32'
succeed on inductor for dim=3, dtype=torch.uint32
failed on eager for dim=3, dtype=torch.uint64: "conv_depthwise3d" not implemented for 'UInt64'
succeed on inductor for dim=3, dtype=torch.uint64
```
### Versions
nightly 20250105
cc @chauhang @penguinwu
| true
|
2,771,995,311
|
[inductor] [bug fix] align `avg_pool` with eager when handling `uint`
|
shaoyuyoung
|
closed
|
[
"triaged",
"open source",
"Merged",
"ciflow/trunk",
"topic: not user facing",
"module: inductor"
] | 9
|
CONTRIBUTOR
|
Fixes #144310
~~We just need to add a check in lowering~~
Updated: we add the error checking in the `meta registration`.
### UT
```
pytest -s -v test/inductor/test_torchinductor.py -k test_avg_pool_errors_with_uint
```
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @ColinPeppler @amjames @desertfire @chauhang @aakhundov
| true
|
2,771,987,829
|
[Quant][Inductor][X86] Separate unary post op fusion and lowering for qconv
|
Xia-Weiwen
|
closed
|
[
"open source",
"Merged",
"ciflow/trunk",
"topic: not user facing",
"intel",
"module: inductor",
"ciflow/inductor"
] | 3
|
COLLABORATOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #144318
* __->__ #144312
* #144224
**Summary**
The current implementation fuses quantized ops with their post ops and lowers the fused op to the cpp backend in the same pass. It is better to separate post op fusion and lowering because
- it looks better in terms of design
- we need the post op fusion pass for PT2E quantization eager mode
As one of a series of PRs which do the separation, this PR moves unary post op fusion of qconv out of the lowering pass to after the weight-prepack pass. The workflow is
1. Weight prepack for qconv so that `dq - conv` patterns are replaced by `onednn.qconv2d_pointwise`
2. Fuse `onednn.qconv2d_pointwise` and post ops
3. Lower to cpp backend
This PR adds additional `PatternMatcherPass`'s to handle the post op fusion. Pattern matchers used for fusion are reused.
**Test plan**
It is covered by existing UTs in `test_mkldnn_pattern_matcher.py` for post op fusion.
cc @jgong5 @mingfeima @XiaobingSuper @sanchitintel @ashokei @jingxu10 @voznesenskym @penguinwu @EikanWang @Guobing-Chen @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @ColinPeppler @amjames @desertfire @chauhang @aakhundov @BoyuanFeng
| true
|
2,771,974,790
|
Preload cuda runtime via python import.
|
oraluben
|
open
|
[
"triaged",
"open source",
"Stale",
"release notes: cuda",
"topic: bug fixes"
] | 10
|
CONTRIBUTOR
|
This PR uses Python's import mechanism to detect the CUDA runtime libraries installed from PyPI and tries to preload them first on supported platforms.
Fixes https://github.com/pytorch/pytorch/issues/101314
Fixes https://github.com/pytorch/pytorch/issues/121207
There are also recent issues like #138324, #138460, #140797 whose root cause is the current behaviour: load global deps and fall back to PyPI by searching. There has been a very ad hoc workaround that reads `/proc/self/maps` with two nested `try` blocks to fix recent issues, which is not straightforward to understand, maintain, or test.
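For reference, a minimal sketch of the preload-via-import idea (the package and library names here are illustrative assumptions, not the PR's exact code):
```python
# Sketch only: locate a pip-installed CUDA runtime through Python's import
# machinery and preload it with RTLD_GLOBAL before falling back to searching.
import ctypes
import importlib.util
import os


def preload_pypi_cuda_runtime():
    spec = importlib.util.find_spec("nvidia.cuda_runtime")
    if spec is None or not spec.submodule_search_locations:
        return  # not installed from PyPI; keep the existing fallback behaviour
    libdir = os.path.join(list(spec.submodule_search_locations)[0], "lib")
    for name in os.listdir(libdir):
        if name.startswith("libcudart.so"):
            ctypes.CDLL(os.path.join(libdir, name), mode=ctypes.RTLD_GLOBAL)
```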
| true
|
2,771,940,501
|
[inductor] [dtype propogation] `avg_pool1d,2d,3d` pass the check when handling `uint8,16,32,64` while eager throws the error
|
shaoyuyoung
|
closed
|
[
"oncall: pt2",
"oncall: cpu inductor"
] | 1
|
CONTRIBUTOR
|
### 🐛 Describe the bug
When handling uint, CPU inductor passes the check while eager and triton throw the error.
**device**: both on CPU and cuda
```python
import torch
torch._dynamo.config.recompile_limit = 12
def run_test(dim, dtype):
    x = torch.randn([2] * (dim + 2)).to(dtype)
    op = eval(f"torch.nn.functional.avg_pool{dim}d")
    try:
        op(x, kernel_size=2, stride=2)
        print("succeed on eager")
    except Exception as e:
        print(e)
    try:
        torch.compile(op)(x, kernel_size=2, stride=2)
        print("succeed on inductor")
    except Exception as e:
        print(e)


for dim in (1, 2, 3):
    for dtype in (torch.uint8, torch.uint16, torch.uint32, torch.uint64):
        run_test(dim, dtype)
```
### Error logs
```
"avg_pool2d" not implemented for 'Byte'
succeed on inductor
"avg_pool2d" not implemented for 'UInt16'
succeed on inductor
"avg_pool2d" not implemented for 'UInt32'
succeed on inductor
"avg_pool2d" not implemented for 'UInt64'
succeed on inductor
"avg_pool2d" not implemented for 'Byte'
succeed on inductor
"avg_pool2d" not implemented for 'UInt16'
succeed on inductor
"avg_pool2d" not implemented for 'UInt32'
succeed on inductor
"avg_pool2d" not implemented for 'UInt64'
succeed on inductor
"avg_pool3d_out_frame" not implemented for 'Byte'
succeed on inductor
"avg_pool3d_out_frame" not implemented for 'UInt16'
succeed on inductor
"avg_pool3d_out_frame" not implemented for 'UInt32'
succeed on inductor
"avg_pool3d_out_frame" not implemented for 'UInt64'
succeed on inductor
```
### Versions
nightly
cc @chauhang @penguinwu
| true
|
2,771,935,548
|
[Dynamo] Add functorch C++ bindings as in graph functions
|
yanboliang
|
closed
|
[
"Merged",
"ciflow/trunk",
"topic: not user facing",
"module: dynamo",
"ciflow/inductor"
] | 3
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #144309
* #144308
* #144307
* #144306
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @chauhang @amjames
| true
|
2,771,935,470
|
[Dynamo] Inline functions in torch._ops
|
yanboliang
|
closed
|
[
"Merged",
"topic: not user facing",
"module: dynamo",
"ciflow/inductor"
] | 1
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #144309
* __->__ #144308
* #144307
* #144306
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @chauhang @amjames
| true
|
2,771,935,396
|
[Dynamo] Inline functions in torch._functorch.pyfunctorch
|
yanboliang
|
closed
|
[
"Merged",
"topic: not user facing",
"module: dynamo",
"ciflow/inductor"
] | 1
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #144309
* #144308
* __->__ #144307
* #144306
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @chauhang @amjames
| true
|
2,771,935,348
|
[Dynamo] Inline functions in torch._functorch.autograd_function
|
yanboliang
|
closed
|
[
"Merged",
"topic: not user facing",
"module: dynamo",
"ciflow/inductor"
] | 1
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #144309
* #144308
* #144307
* __->__ #144306
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @chauhang @amjames
| true
|
2,771,929,300
|
Set RUNPATH on CUDA and XPU tests
|
oraluben
|
closed
|
[
"triaged",
"open source",
"Merged",
"ciflow/trunk",
"topic: not user facing"
] | 18
|
CONTRIBUTOR
|
#136627 almost fixed the issue of test binaries' runpath not being set correctly, with a few cases left.
This PR fixes the rest.
The binaries are found by `auditwheel repair` a wheel built with `BUILD_TEST=1`.
@malfet
| true
|
2,771,888,232
|
Fix PythonMod printing
|
pytorchbot
|
closed
|
[
"module: cpu",
"open source",
"module: inductor",
"ciflow/inductor"
] | 7
|
COLLABORATOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #144078
cc @jgong5 @mingfeima @XiaobingSuper @sanchitintel @ashokei @jingxu10 @voznesenskym @penguinwu @EikanWang @Guobing-Chen @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @ColinPeppler @amjames @desertfire @chauhang @aakhundov
Fixes #144075
| true
|
2,771,780,136
|
[WIP][Windows Inductor] Enable Inductor XPU backend on Windows.
|
etaf
|
closed
|
[
"module: cpu",
"open source",
"release notes: releng",
"module: inductor",
"module: dynamo",
"ciflow/inductor"
] | 1
|
COLLABORATOR
|
As title.
cc @jgong5 @mingfeima @XiaobingSuper @sanchitintel @ashokei @jingxu10 @voznesenskym @penguinwu @EikanWang @Guobing-Chen @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov @ColinPeppler @desertfire
| true
|
2,771,774,132
|
Cannot get torch_python symbol
|
dilililiwhy
|
closed
|
[
"module: bc-breaking",
"module: cpp",
"triaged"
] | 5
|
CONTRIBUTOR
|
### 🐛 Describe the bug
It seems that torch_python symbols are hidden in the nightly build of torch 2.7; these symbols could be used by out-of-tree extensions.
For example, THP_FUNCTION_DEFAULT_PROPERTIES will be involved during code generation if the extension reuses the gen_autograd_functions module.
```
# undefined symbol: _ZN5torch8autograd29THPCppFunction_next_functionsEP7_objectPv
THP_FUNCTION_DEFAULT_PROPERTIES
torch::autograd::THPCppFunction_next_functions(_object*, void*)
# undefined symbol: _ZN5torch30installCapturedTracebackPythonEv
torch::installCapturedTracebackPython()
# undefined symbol: _ZN10THPPointerI12PyCodeObjectE4freeEv
THPPointer<PyCodeObject>::free()
...
```
If torch_python symbol is hidden by default in the future, **is adding TORCH_PYTHON_API the only way?**
### Versions
PyTorch version: 2.7.0.dev20250105+cpu
Is debug build: False
CUDA used to build PyTorch: None
ROCM used to build PyTorch: N/A
OS: AlmaLinux 8.10 (Cerulean Leopard) (x86_64)
GCC version: (GCC) 11.2.1 20220127 (Red Hat 11.2.1-9)
Clang version: Could not collect
CMake version: version 3.18.4
Libc version: glibc-2.28
Python version: 3.9.21 (main, Dec 17 2024, 07:34:47) [GCC 14.2.1 20240801 (Red Hat 14.2.1-1)] (64-bit runtime)
Python platform: Linux-3.10.0-1160.119.1.el7.x86_64-x86_64-with-glibc2.28
Is CUDA available: False
CUDA runtime version: No CUDA
CUDA_MODULE_LOADING set to: N/A
GPU models and configuration: No CUDA
Nvidia driver version: No CUDA
cuDNN version: No CUDA
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Byte Order: Little Endian
CPU(s): 32
On-line CPU(s) list: 0-31
Thread(s) per core: 2
Core(s) per socket: 16
Socket(s): 1
NUMA node(s): 1
Vendor ID: GenuineIntel
CPU family: 6
Model: 85
Model name: Intel(R) Xeon(R) Gold 6266C CPU @ 3.00GHz
Stepping: 7
CPU MHz: 3000.000
BogoMIPS: 6000.00
Hypervisor vendor: KVM
Virtualization type: full
L1d cache: 32K
L1i cache: 32K
L2 cache: 1024K
L3 cache: 30976K
NUMA node0 CPU(s): 0-31
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ss ht syscall nx pdpe1gb rdtscp lm constant_tsc rep_good nopl xtopology nonstop_tsc eagerfpu pni pclmulqdq ssse3 fma cx16 pcid sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand hypervisor lahf_lm abm 3dnowprefetch invpcid_single ssbd ibrs ibpb stibp ibrs_enhanced fsgsbase tsc_adjust bmi1 hle avx2 smep bmi2 erms invpcid rtm mpx avx512f avx512dq rdseed adx smap clflushopt clwb avx512cd avx512bw avx512vl xsaveopt xsavec xgetbv1 arat avx512_vnni md_clear spec_ctrl intel_stibp flush_l1d arch_capabilities
Versions of relevant libraries:
[pip3] numpy==1.24.4
[pip3] torch==2.7.0.dev20250105+cpu
[conda] No relevant packages
cc @ezyang @gchanan @jbschlosser @seemethere @malfet @osalpekar @atalman
| true
|
2,771,732,375
|
[RFC] Fix CudaEventCache for dangling references
|
fduwjj
|
closed
|
[
"oncall: distributed",
"release notes: distributed (c10d)",
"topic: bug fixes"
] | 5
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #144301
As reported in https://github.com/pytorch/pytorch/issues/143470, we have dangling references in CudaEventCache, so we want to fix them.
1. We add a unit test to reproduce the problem mentioned in the issue.
2. We try converting the variables to shared pointers, as suggested by @suo in the issue.
cc @H-Huang @awgu @kwen2501 @wanchaol @fegin @wz337 @wconstab @d4l3k @c-p-i-o
| true
|
2,771,730,369
|
[RFC] Fix race for CudaEventCache when launching collective not from main
|
fduwjj
|
closed
|
[
"oncall: distributed",
"release notes: distributed (c10d)"
] | 1
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #144300
* #144299
cc @H-Huang @awgu @kwen2501 @wanchaol @fegin @wz337 @wconstab @d4l3k @c-p-i-o
| true
|
2,771,730,252
|
Fix CudaEventCache for dangling references
|
fduwjj
|
closed
|
[
"oncall: distributed",
"topic: not user facing"
] | 1
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #144300
* __->__ #144299
| true
|
2,771,715,682
|
[mps/inductor] Add support for sign().
|
dcci
|
closed
|
[
"Merged",
"topic: not user facing",
"module: mps",
"ciflow/mps",
"module: inductor",
"ciflow/inductor"
] | 3
|
MEMBER
|
Drive-by fix of a test name while I was at it.
cc @kulinseth @albanD @malfet @DenisVieriu97 @jhavukainen @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @ColinPeppler @amjames @desertfire @chauhang @aakhundov
| true
|
2,771,686,992
|
operator benchmark change parsing from regex based to manual
|
apakbin
|
closed
|
[
"module: rocm",
"triaged",
"open source",
"Merged",
"module: benchmark",
"ciflow/trunk",
"release notes: benchmark"
] | 7
|
CONTRIBUTOR
|
The regex-based parser would erroneously split on commas inside nested brackets. For example, it would produce the following parse, which is wrong:
`'M: [(32, 16), (64, 32)], ZPB: 2' -> ['M: [(32, 16)', ' (64, 32)]', 'ZPB: 2']`
The new manual parser handles this situation the right way:
`'M: [(32, 16), (64, 32)], ZPB: 2' -> ['M: [(32, 16), (64, 32)]', 'ZPB: 2']`
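A minimal sketch of a bracket-aware splitter in the spirit of this change (illustrative; not the PR's exact implementation):
```python
# Split a config string on top-level commas only, ignoring commas nested
# inside (), [] or {}.
def split_top_level(s: str) -> list:
    parts, depth, start = [], 0, 0
    for i, ch in enumerate(s):
        if ch in "([{":
            depth += 1
        elif ch in ")]}":
            depth -= 1
        elif ch == "," and depth == 0:
            parts.append(s[start:i].strip())
            start = i + 1
    parts.append(s[start:].strip())
    return parts


# split_top_level('M: [(32, 16), (64, 32)], ZPB: 2')
# -> ['M: [(32, 16), (64, 32)]', 'ZPB: 2']
```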
cc @jeffdaily @sunway513 @jithunnair-amd @pruthvistony @ROCmSupport @dllehr-amd @jataylo @hongxiayang @naromero77amd
| true
|
2,771,679,096
|
Fix lint in `test_provenance_tracing.py`
|
malfet
|
closed
|
[
"Merged",
"topic: not user facing",
"module: inductor",
"ciflow/inductor"
] | 6
|
CONTRIBUTOR
|
Regression introduced by https://github.com/pytorch/pytorch/pull/143684/ that somehow did not surface on PR CI.
IMO this also makes the two branches of the test (compile vs aoti) more readable.
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @ColinPeppler @amjames @desertfire @chauhang @aakhundov
| true
|
2,771,676,078
|
[export] Fix sym_bool serialization
|
yiming0416
|
closed
|
[
"fb-exported",
"Merged",
"ciflow/trunk",
"ciflow/inductor",
"release notes: export"
] | 10
|
CONTRIBUTOR
|
Summary:
When there is a `torch._check()` that checks whether a sym_int is equal to some constant, it will generate 3 nodes in the graph with targets `operator.ge`, `operator.le` and `operator.eq`. These operators belong to `_SYM_BOOL_OPS`, but the `meta_val` of these nodes is `bool` instead of `torch.SymBool`.
Similar things can happen to `torch.SymInt`, where a `node.target` belongs to `_SYM_INT_OPS` but `node.meta["val"]` is an `int` instead of `torch.SymInt`.
Therefore, we need to check both `meta_val` type and `node.target` type during serialization.
Test Plan:
```
buck2 run @mode/dev-nosan caffe2/test:test_export -- -r test_sym_bool_torch_check_equal
buck2 run @mode/dev-nosan caffe2/test:test_export -- -r test_sym_int_torch_check_equal
```
Differential Revision: D67883754
| true
|
2,771,672,323
|
tuned_addmm: fix CK
|
coconutruben
|
closed
|
[
"module: rocm",
"fb-exported",
"Stale",
"topic: not user facing",
"module: inductor",
"ciflow/inductor"
] | 3
|
CONTRIBUTOR
|
Summary:
# Background
The CK backend on addmm seems to fail because during autotuning we pass in (B, X, W) whereas the kernel is expecting (W,X,B) as its arguments.
# What
This change fixes that by passing through the modified ordering *for the choices during autotuning only*, as the graph already seems to handle passing the args through correctly in regular invocation.
# Why
## at all
without this, addmm is hard to use on the CK backend
## not use `input_reorder`
There is a lot of overloading going on in the ROCm code (and in general) with templates and input_reorder. It's never used, and since it's never used it's partially broken. In particular, scaled_mm, addmm, and mm all share code that in various places refers to arguments as X, W, B, alpha, beta, scales, etc., and adjusts accordingly.
There is probably a cleaner way to do this that requires a medium-sized tracing and refactoring effort, but for now this accomplishes two main things:
1. addmm works with the CK backend without crashing
2. we explicitly tell the choices about argument reordering right when we pass in the arguments in different order(s), making it more readable
Test Plan:
added unit test
```
buck2 test mode/dev-nosan-amd-gpu -c fbcode.re_gpu_tests=False fbcode//caffe2/test/inductor:test_ck_backend -- --exact 'caffe2/test/inductor:test_ck_backend - test_addmm (caffe2.test.inductor.test_ck_backend.TestCKBackend)'
```
with the change
```
Buck UI: https://www.internalfb.com/buck2/c293fe0d-2a8b-45ff-b904-3a8a6331fbdd
Test UI: https://www.internalfb.com/intern/testinfra/testrun/6755399687317126
Network: Up: 0B Down: 10MiB (reSessionID-bbfca263-51ca-4cf6-8629-fe01f77f2e0a)
Analyzing targets. Remaining 0/66491
Executing actions. Remaining 0/426985 3.9s exec time total
Command: test. Finished 1 local, 1 cache (50% hit) 3.7s exec time cached (95%)
Time elapsed: 39.8s
Tests finished: Pass 1. Fail 0. Fatal 0. Skip 0. Build failure 0
```
without the change
```
Memory access fault by GPU node-2 (Agent handle: 0x7f1c8e140e00) on address 0x7edbfcb00000. Reason: Unknown.
GPU core dump failed
Test was never completed. The test process might have crashed.
Buck UI: https://www.internalfb.com/buck2/2c739b18-a5ea-4f42-b5ef-f5244e707d5c
Test UI: https://www.internalfb.com/intern/testinfra/testrun/2533275055276985
Network: Up: 7.5KiB Down: 3.6KiB (reSessionID-a1a6f21e-63b6-40dc-84c9-3699640b626e)
Analyzing targets. Remaining 0/66491
Executing actions. Remaining 0/426985 0.5s exec time total
Command: test. Finished 2 local
Time elapsed: 40.9s
Tests finished: Pass 0. Fail 1. Fatal 0. Skip 0. Build failure 0
```
Differential Revision: D67882293
cc @jeffdaily @sunway513 @jithunnair-amd @pruthvistony @ROCmSupport @dllehr-amd @jataylo @hongxiayang @naromero77amd @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @ColinPeppler @amjames @desertfire @chauhang @aakhundov @BoyuanFeng
| true
|
2,771,662,179
|
codecache: Remove cpp_prefix.h duplication per build, then precompile it
|
benjaminglass1
|
open
|
[
"open source",
"ciflow/trunk",
"topic: not user facing",
"module: inductor",
"ciflow/inductor"
] | 8
|
COLLABORATOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #149961
* #148773
* __->__ #144293
Prior to this PR, `_inductor/codegen/cpp_prefix.h` was copied into a new temporary directory on every inductor run utilizing the CPP backend (i.e. CPU-only), then included in the output source code. Instead, this PR puts it in an appropriate place in the torch includes, and includes it from there. This allows us to precompile it in cpp_wrapper and AOT inductor mode, saving significant compilation time.
Due to difficulties getting this to work in FBCode, the precompilation itself is only enabled in OSS PyTorch.
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @ColinPeppler @amjames @desertfire @chauhang @aakhundov
Differential Revision: [D69420620](https://our.internmc.facebook.com/intern/diff/D69420620)
| true
|
2,771,658,994
|
codecache: Remove cpp_prefix.h duplication per build, then precompile it
|
benjaminglass1
|
closed
|
[
"topic: not user facing",
"module: inductor",
"ciflow/inductor"
] | 3
|
COLLABORATOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #144292
* #144002
* #143909
Prior to this PR, `_inductor/codegen/cpp_prefix.h` was copied into a new temporary directory on every inductor run utilizing the CPP backend, then included in the output source code. Instead, this puts it in an appropriate place in the torch includes, and includes it from there. This allows us to precompile it in cpp_wrapper and AOT inductor mode, saving significant compilation time.
Note for reviewers: I believe I've traced down all the implications of this change for FBCode here on the OSS side, but someone will need to run this through tests on the FB side to be sure.
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @ColinPeppler @amjames @desertfire @chauhang @aakhundov
| true
|
2,771,658,186
|
codecache: Remove cpp_prefix.h duplication per build, then precompile it
|
benjaminglass1
|
closed
|
[
"module: inductor",
"ciflow/inductor"
] | 1
|
COLLABORATOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* (to be filled)
Prior to this PR, `_inductor/codegen/cpp_prefix.h` was copied into a new temporary directory on every inductor run utilizing the CPP backend, then included in the output source code. Instead, this puts it in an appropriate place in the torch includes, and includes it from there. This allows us to precompile it in cpp_wrapper and AOT inductor mode, saving significant compilation time.
Note for reviewers: I believe I've traced down all the implications of this change for FBCode here on the OSS side, but someone will need to run this through tests on the FB side to be sure.
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @ColinPeppler @amjames @desertfire @chauhang @aakhundov
| true
|
2,771,657,973
|
codecache: Remove cpp_prefix.h duplication per build, then precompile it
|
benjaminglass1
|
closed
|
[
"module: inductor",
"ciflow/inductor"
] | 2
|
COLLABORATOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* (to be filled)
Prior to this PR, `_inductor/codegen/cpp_prefix.h` was copied into a new temporary directory on every inductor run utilizing the CPP backend, then included in the output source code. Instead, this puts it in an appropriate place in the torch includes, and includes it from there. This allows us to precompile it in cpp_wrapper and AOT inductor mode, saving significant compilation time.
Note for reviewers: I believe I've traced down all the implications of this change for FBCode here on the OSS side, but someone will need to run this through tests on the FB side to be sure.
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @ColinPeppler @amjames @desertfire @chauhang @aakhundov
| true
|
2,771,656,197
|
[FSDP2] model weights are not DTensor after forward pass
|
qsh-zh
|
closed
|
[
"triaged",
"module: fsdp"
] | 5
|
NONE
|
### 🐛 Describe the bug
When using FSDP2, certain parameters are not DTensor before the backward pass, but appear as DTensor after the backward pass.
This inconsistency surprises users. For example, when I need to copy the Exponential Moving Average (EMA) model weights to the regular model after the regular model has run some forward steps for evaluation, it causes errors because the EMA model weights are DTensors while some parameters of the regular model are not.
```python
"""
torchrun --standalone --nproc_per_node=2 test.py
"""
import os
import torch
import torch.distributed as dist
from torch.distributed._tensor.api import DTensor
from torch.distributed._composable.fsdp import fully_shard, MixedPrecisionPolicy
from torch.testing._internal.distributed._tensor.common_dtensor import (
ModelArgs,
Transformer,
TransformerBlock,
)
def main():
    torch.manual_seed(42)
    model_args = ModelArgs(
        n_layers=12,
        vocab_size=50304,
        n_heads=32,
        dim=2048,
        max_seq_len=2048,
        dropout_p=0.0,
    )
    model = Transformer(model_args)
    mp_policy = MixedPrecisionPolicy(param_dtype=torch.bfloat16)
    fsdp_cfg = {"mp_policy": mp_policy}
    for module in model.modules():
        if isinstance(module, TransformerBlock):
            fully_shard(module, **fsdp_cfg)
    fully_shard(model, **fsdp_cfg)
    optim = torch.optim.AdamW(model.parameters(), lr=1e-2)

    inp = torch.randint(0, model_args.vocab_size, (8, 1024), device="cuda")
    loss = model(inp).sum()
    for name, param in model.named_parameters():
        if not isinstance(param, DTensor):
            if dist.get_rank() == 0:
                print(f"Before backward Non-DTensor: {name}")
    loss.backward()
    for name, param in model.named_parameters():
        if not isinstance(param, DTensor):
            if dist.get_rank() == 0:
                print(f"After backward Non-DTensor: {name}")
    optim.step()


if __name__ == "__main__":
    dist.init_process_group(backend="nccl")
    gpu_id = int(os.environ["LOCAL_RANK"])
    device = f"cuda:{gpu_id}"
    torch.cuda.set_device(device)
    rank = gpu_id
    main()
    dist.destroy_process_group()
```
output
```shell
Before backward Non-DTensor: tok_embeddings.weight
Before backward Non-DTensor: pos_embeddings.weight
Before backward Non-DTensor: norm.weight
Before backward Non-DTensor: norm.bias
```
Shouldn't all parameters be DTensors?
### Versions
```
PyTorch version: 2.5.1+cu124
Is debug build: False
CUDA used to build PyTorch: 12.4
ROCM used to build PyTorch: N/A
OS: Ubuntu 22.04.4 LTS (x86_64)
GCC version: (Ubuntu 11.4.0-1ubuntu1~22.04) 11.4.0
Clang version: Could not collect
CMake version: version 3.30.2
Libc version: glibc-2.35
Python version: 3.10.12 (main, Jul 29 2024, 16:56:48) [GCC 11.4.0] (64-bit runtime)
Python platform: Linux-5.15.0-1055-aws-x86_64-with-glibc2.35
Is CUDA available: True
CUDA runtime version: 12.6.68
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration:
GPU 0: NVIDIA H100 80GB HBM3
GPU 1: NVIDIA H100 80GB HBM3
GPU 2: NVIDIA H100 80GB HBM3
GPU 3: NVIDIA H100 80GB HBM3
GPU 4: NVIDIA H100 80GB HBM3
GPU 5: NVIDIA H100 80GB HBM3
GPU 6: NVIDIA H100 80GB HBM3
GPU 7: NVIDIA H100 80GB HBM3
Nvidia driver version: 550.54.15
cuDNN version: Probably one of the following:
/usr/lib/x86_64-linux-gnu/libcudnn.so.9.4.0
/usr/lib/x86_64-linux-gnu/libcudnn_adv.so.9.4.0
/usr/lib/x86_64-linux-gnu/libcudnn_cnn.so.9.4.0
/usr/lib/x86_64-linux-gnu/libcudnn_engines_precompiled.so.9.4.0
/usr/lib/x86_64-linux-gnu/libcudnn_engines_runtime_compiled.so.9.4.0
/usr/lib/x86_64-linux-gnu/libcudnn_graph.so.9.4.0
/usr/lib/x86_64-linux-gnu/libcudnn_heuristic.so.9.4.0
/usr/lib/x86_64-linux-gnu/libcudnn_ops.so.9.4.0
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 48 bits physical, 48 bits virtual
Byte Order: Little Endian
CPU(s): 192
On-line CPU(s) list: 0-191
Vendor ID: AuthenticAMD
Model name: AMD EPYC 7R13 Processor
CPU family: 25
Model: 1
Thread(s) per core: 2
Core(s) per socket: 48
Socket(s): 2
Stepping: 1
BogoMIPS: 5299.99
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ht syscall nx mmxext fxsr_opt pdpe1gb rdtscp lm constant_tsc rep_good nopl nonstop_tsc cpuid extd_apicid aperfmperf tsc_known_freq pni pclmulqdq monitor ssse3 fma cx16 pcid sse4_1 sse4_2 x2apic movbe popcnt aes xsave avx f16c rdrand hypervisor lahf_lm cmp_legacy cr8_legacy abm sse4a misalignsse 3dnowprefetch topoext perfctr_core invpcid_single ssbd ibrs ibpb stibp vmmcall fsgsbase bmi1 avx2 smep bmi2 invpcid rdseed adx smap clflushopt clwb sha_ni xsaveopt xsavec xgetbv1 clzero xsaveerptr rdpru wbnoinvd arat npt nrip_save vaes vpclmulqdq rdpid
Hypervisor vendor: KVM
Virtualization type: full
L1d cache: 3 MiB (96 instances)
L1i cache: 3 MiB (96 instances)
L2 cache: 48 MiB (96 instances)
L3 cache: 384 MiB (12 instances)
NUMA node(s): 2
NUMA node0 CPU(s): 0-47,96-143
NUMA node1 CPU(s): 48-95,144-191
Vulnerability Gather data sampling: Not affected
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Not affected
Vulnerability Retbleed: Not affected
Vulnerability Spec rstack overflow: Vulnerable, no microcode
Vulnerability Spec store bypass: Vulnerable
Vulnerability Spectre v1: Vulnerable: __user pointer sanitization and usercopy barriers only; no swapgs barriers
Vulnerability Spectre v2: Vulnerable, IBPB: disabled, STIBP: disabled, PBRSB-eIBRS: Not affected
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Versions of relevant libraries:
[pip3] DISTS-pytorch==0.1
[pip3] flake8==7.1.1
[pip3] mypy-extensions==1.0.0
[pip3] numpy==1.24.4
[pip3] nvidia-cublas-cu12==12.4.5.8
[pip3] nvidia-cuda-cupti-cu12==12.4.127
[pip3] nvidia-cuda-nvrtc-cu12==12.4.127
[pip3] nvidia-cuda-runtime-cu12==12.4.127
[pip3] nvidia-cudnn-cu12==9.1.0.70
[pip3] nvidia-cudnn-frontend==1.6.0
[pip3] nvidia-cufft-cu12==11.2.1.3
[pip3] nvidia-curand-cu12==10.3.5.147
[pip3] nvidia-cusolver-cu12==11.6.1.9
[pip3] nvidia-cusparse-cu12==12.3.1.170
[pip3] nvidia-nccl-cu12==2.21.5
[pip3] nvidia-nvjitlink-cu12==12.4.127
[pip3] nvidia-nvtx-cu12==12.4.127
[pip3] nvidia-pytriton==0.5.11
[pip3] nvtx==0.2.10
[pip3] onnx==1.16.2
[pip3] onnx-graphsurgeon==0.5.2
[pip3] onnxruntime==1.19.2
[pip3] open_clip_torch==2.20.0
[pip3] optree==0.12.1
[pip3] pynvjitlink==0.3.0
[pip3] pytorch-lightning==2.4.0
[pip3] pytorch-ranger==0.1.1
[pip3] pytorch-triton==3.0.0+dedb7bdf3
[pip3] slangtorch==1.2.6
[pip3] torch==2.5.1
[pip3] torch_automated_profiler==1.10.0
[pip3] torch-fidelity==0.3.0
[pip3] torch-optimizer==0.3.0
[pip3] torch_tensorrt==2.5.0a0
[pip3] torchmetrics==1.4.2
[pip3] torchvision==0.20.0a0
[pip3] triton==3.1.0
[pip3] tritonclient==2.50.0
[conda] Could not collect
```
cc @zhaojuanmao @mrshenli @rohan-varma @awgu @fegin @kwen2501 @chauhang
| true
|
2,771,646,530
|
[inductor] Only call triton.compile in worker processes
|
jansel
|
closed
|
[
"Stale",
"module: inductor",
"ciflow/inductor",
"release notes: inductor"
] | 4
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #144288
Before this PR, parallel compile would call triton.compile once in a subprocess (to warm the disk cache) and then again in the parent to load the result with a cache hit. This PR calls triton.compile once in the subprocess and then pickles the result back to the parent.
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @ColinPeppler @amjames @desertfire @chauhang @aakhundov
| true
|
2,771,643,569
|
[BE] Actually suppress vmap warning from gradcheck
|
janeyx99
|
closed
|
[
"Merged",
"ciflow/trunk",
"release notes: autograd"
] | 8
|
CONTRIBUTOR
|
This is the much safer change compared to https://github.com/pytorch/pytorch/pull/144283
Before:
```
PYTORCH_TEST_CUDA_MEM_LEAK_CHECK=1 PYTORCH_TEST_WITH_SLOW_GRADCHECK=1 python test/test_optim.py -k TestDifferentiableOptimizer.test_sgd
/data/users/janeyx/pytorch/torch/autograd/gradcheck.py:1156: FutureWarning: Please use torch.vmap instead of torch._vmap_internals.vmap.
result = vmap(vjp)(torch.stack(grad_outputs))
/data/users/janeyx/pytorch/torch/autograd/gradcheck.py:1156: FutureWarning: Please use torch.vmap instead of torch._vmap_internals.vmap.
result = vmap(vjp)(torch.stack(grad_outputs))
.
----------------------------------------------------------------------
Ran 1 test in 0.028s
```
(the env vars aren't necessary)
After:
```
PYTORCH_TEST_CUDA_MEM_LEAK_CHECK=1 PYTORCH_TEST_WITH_SLOW_GRADCHECK=1 python test/test_optim.py -k TestDifferentiableOptimizer.test_sgd
.
----------------------------------------------------------------------
Ran 1 test in 0.028s
```
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #144287
| true
|
2,771,625,343
|
[DTensor] Add sharding strategy to aten.view.dtype
|
cassanof
|
closed
|
[
"oncall: distributed",
"module: dtensor"
] | 1
|
CONTRIBUTOR
|
### 🚀 The feature, motivation and pitch
It would be great to have a sharding strategy for aten.view.dtype. Currently, when calling .view(dtype) on a DTensor, you get the following error:
```
Operator aten.view.dtype does not have a sharding strategy registered.
```
This has caused issues for FSDP2 + stochastic rounding for me. See: https://github.com/pytorch/ao/issues/1505
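A hypothetical repro sketch (not taken from the linked issue) — it assumes a 2-GPU setup launched with torchrun, and only uses the public DTensor APIs; nothing below is specific to the FSDP2 case:
```python
# Launch with e.g.: torchrun --nproc-per-node=2 repro.py (assumed setup)
import torch
from torch.distributed.device_mesh import init_device_mesh
from torch.distributed.tensor import distribute_tensor, Shard

mesh = init_device_mesh("cuda", (2,))
x = distribute_tensor(torch.randn(8, 8, dtype=torch.bfloat16), mesh, [Shard(0)])
# bfloat16 -> int16 keeps the element size, so the view itself is legal,
# but DTensor dispatch hits aten.view.dtype, which has no sharding strategy.
x.view(torch.int16)
```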
### Alternatives
_No response_
### Additional context
_No response_
cc @H-Huang @awgu @kwen2501 @wanchaol @fegin @fduwjj @wz337 @wconstab @d4l3k @c-p-i-o @tianyu-l @XilunWu
| true
|
2,771,572,647
|
[CD] Aarch64 builds should not override `OVERRIDE_PACKAGE_VERSION` envvar
|
atalman
|
closed
|
[
"Merged",
"release notes: releng",
"topic: binaries"
] | 10
|
CONTRIBUTOR
|
Currently our nightly aarch64 binaries have the correct suffixes (+cpu or +cu126), but release binaries are missing them. Hence, to correct this and make sure our nightly and release binaries are consistent, I propose this change.
I see that the override is already set correctly in the release workflow:
https://github.com/pytorch/pytorch/actions/runs/12383179841/job/34565381200
For CPU:
```
OVERRIDE_PACKAGE_VERSION="2.6.0+cpu"
```
For CUDA:
```
OVERRIDE_PACKAGE_VERSION="2.6.0+cu126"
```
The removed code would set `OVERRIDE_PACKAGE_VERSION="2.6.0"` for both CUDA and CPU release binaries.
cc @tinglvv
| true
|
2,771,560,932
|
[reland][export] don't decompose custom triton op when exporting
|
ydwu4
|
closed
|
[
"fb-exported",
"Merged",
"ciflow/trunk",
"release notes: export",
"ci-no-td"
] | 11
|
CONTRIBUTOR
|
Summary:
A reland of https://github.com/pytorch/pytorch/pull/142426.
Copying the description over here:
For torch.export (strict and non-strict), we don't do functional decomposition. Instead, we preserve the custom triton ops as custom ops. This is because we want the exported program to be high-level and serializable.
The alternative:
If we decompose the custom op to a functional hop and make it a node in the exported program, we need to figure out ways of serializing the hop and its arguments, which can be triton.jit-ed Python functions and triton dtypes. This is undesirable because:
it can be tedious to maintain a layer that serializes the jitted function (e.g. with a string) and dtypes.
changes to triton or the serialization logic for triton arguments can be BC-breaking.
the exported program would expose an implementation detail (i.e. triton source code) for a specific backend (GPU) to users, which mixes levels of abstraction.
Future plans:
After this PR, in the short term, we expect users to have a separate aot_compile stage that compiles the exported program into a Cubin file on the same machine where they call export; this does autotuning, removes the triton dependency, and serves the model with the Cubin. This guarantees that triton changes won't break BC.
In the long term, we may export multiple cubins for the triton op directly.
Test Plan: see new tests.
Differential Revision: D67879685
| true
|
2,771,558,100
|
[BE] stop using deprecated _vmap in our library
|
janeyx99
|
closed
|
[
"release notes: autograd",
"topic: improvements"
] | 5
|
CONTRIBUTOR
|
This was causing warnings for those using gradcheck among other APIs potentially, thanks @EmmettBicker for raising this to our awareness in https://github.com/pytorch/pytorch/pull/143938!
Tests should pass as this should be a functional no-op, and there should be fewer printouts. => I was wrong. Jeffrey is right to be scared of this change. I've opened #144287 instead.
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #144283
| true
|
2,771,445,451
|
[EZ][BE] Fix E226 flake8 violation
|
malfet
|
closed
|
[
"Merged",
"topic: not user facing",
"module: inductor",
"ciflow/inductor"
] | 4
|
CONTRIBUTOR
|
Not sure why CI did not complain about it, but in my local runs it clearly says
```
Advice (FLAKE8) E226
missing whitespace around arithmetic operator
See https://www.flake8rules.com/rules/E226.html
268 | with code.indent():
269 | if len(idx_var_names) > 1:
270 | for idx, name in enumerate(idx_var_names):
>>> 271 | code.writeline(f"auto {name} = thread_pos.{chr(120+idx)};")
```
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @ColinPeppler @amjames @desertfire @chauhang @aakhundov
| true
|
2,771,443,024
|
[MPSInductor] Add `nan` constant generation
|
malfet
|
closed
|
[
"Merged",
"topic: not user facing",
"ciflow/mps",
"module: inductor",
"ciflow/inductor"
] | 3
|
CONTRIBUTOR
|
If val is not equal to itself, it's a NaN (which is spelled `NAN` in Metal)
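For reference, a quick Python illustration of the self-inequality property the codegen relies on:
```python
nan = float("nan")
print(nan != nan)  # True -- NaN is the only value that compares unequal to itself
```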
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @ColinPeppler @amjames @desertfire @chauhang @aakhundov
| true
|
2,771,366,317
|
Fix broken links in markdown files
|
AlexanderDokuchaev
|
closed
|
[
"oncall: jit",
"module: mkldnn",
"open source",
"Stale",
"topic: not user facing"
] | 5
|
NONE
|
cc @EikanWang @jgong5 @wenzhe-nrv @sanchitintel @gujinghui @PenghuiCheng @XiaobingSuper @jianyuh @mingfeima @ashokei @jingxu10 @min-jean-cho @yanbing-j @Guobing-Chen @Xia-Weiwen @snadampal
| true
|
2,771,339,048
|
`kl_divergence` can produce incorrect results
|
randolf-scholz
|
closed
|
[
"module: nn",
"triaged"
] | 0
|
CONTRIBUTOR
|
### 🐛 Describe the bug
```python
from torch.distributions import *
p = Uniform(0,1)
q = Beta(1,1) # special case, equal to Uniform(0,1)
kl_divergence(p, q) # tensor(nan) ❌
kl_divergence(p, p) # tensor(0) ✅
```
This is caused by an incorrect implementation of the mathematical function $f(x) = x \cdot \log(x)$; it should use `torch.special.xlogy` to get the correct value at $x=0$.
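A minimal illustration of the difference at the boundary (the fix itself lives in the KL registration, not shown here):
```python
import torch

x = torch.tensor(0.0)
print(x * torch.log(x))           # tensor(nan) -- 0 * log(0) evaluates to nan
print(torch.special.xlogy(x, x))  # tensor(0.)  -- xlogy returns 0 where the first argument is 0
```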
### Versions
<details>
Collecting environment information...
PyTorch version: 2.7.0a0+git82b08a2
Is debug build: False
CUDA used to build PyTorch: N/A
ROCM used to build PyTorch: 6.1.40093-bd86f1708
OS: Ubuntu 22.04.5 LTS (x86_64)
GCC version: (Ubuntu 11.4.0-1ubuntu1~22.04) 11.4.0
Clang version: 14.0.0-1ubuntu1.1
CMake version: version 3.31.0
Libc version: glibc-2.35
Python version: 3.10.12 (main, Nov 6 2024, 20:22:13) [GCC 11.4.0] (64-bit runtime)
Python platform: Linux-6.8.0-50-generic-x86_64-with-glibc2.35
Is CUDA available: True
CUDA runtime version: 12.6.85
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration: NVIDIA GeForce RTX 3090 (NVIDIA GeForce RTX 3090)
Nvidia driver version: 565.57.01
cuDNN version: Probably one of the following:
/usr/lib/x86_64-linux-gnu/libcudnn.so.8.9.7
/usr/lib/x86_64-linux-gnu/libcudnn_adv_infer.so.8.9.7
/usr/lib/x86_64-linux-gnu/libcudnn_adv_train.so.8.9.7
/usr/lib/x86_64-linux-gnu/libcudnn_cnn_infer.so.8.9.7
/usr/lib/x86_64-linux-gnu/libcudnn_cnn_train.so.8.9.7
/usr/lib/x86_64-linux-gnu/libcudnn_ops_infer.so.8.9.7
/usr/lib/x86_64-linux-gnu/libcudnn_ops_train.so.8.9.7
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 43 bits physical, 48 bits virtual
Byte Order: Little Endian
CPU(s): 24
On-line CPU(s) list: 0-23
Vendor ID: AuthenticAMD
Model name: AMD Ryzen 9 3900X 12-Core Processor
CPU family: 23
Model: 113
Thread(s) per core: 2
Core(s) per socket: 12
Socket(s): 1
Stepping: 0
Frequency boost: enabled
CPU max MHz: 4672,0698
CPU min MHz: 2200,0000
BogoMIPS: 7599.56
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ht syscall nx mmxext fxsr_opt pdpe1gb rdtscp lm constant_tsc rep_good nopl nonstop_tsc cpuid extd_apicid aperfmperf rapl pni pclmulqdq monitor ssse3 fma cx16 sse4_1 sse4_2 x2apic movbe popcnt aes xsave avx f16c rdrand lahf_lm cmp_legacy svm extapic cr8_legacy abm sse4a misalignsse 3dnowprefetch osvw ibs skinit wdt tce topoext perfctr_core perfctr_nb bpext perfctr_llc mwaitx cpb cat_l3 cdp_l3 hw_pstate ssbd mba ibpb stibp vmmcall fsgsbase bmi1 avx2 smep bmi2 cqm rdt_a rdseed adx smap clflushopt clwb sha_ni xsaveopt xsavec xgetbv1 cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local clzero irperf xsaveerptr rdpru wbnoinvd arat npt lbrv svm_lock nrip_save tsc_scale vmcb_clean flushbyasid decodeassists pausefilter pfthreshold avic v_vmsave_vmload vgif v_spec_ctrl umip rdpid overflow_recov succor smca sev sev_es
Virtualization: AMD-V
L1d cache: 384 KiB (12 instances)
L1i cache: 384 KiB (12 instances)
L2 cache: 6 MiB (12 instances)
L3 cache: 64 MiB (4 instances)
NUMA node(s): 1
NUMA node0 CPU(s): 0-23
Vulnerability Gather data sampling: Not affected
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Not affected
Vulnerability Reg file data sampling: Not affected
Vulnerability Retbleed: Mitigation; untrained return thunk; SMT enabled with STIBP protection
Vulnerability Spec rstack overflow: Mitigation; Safe RET
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Retpolines; IBPB conditional; STIBP always-on; RSB filling; PBRSB-eIBRS Not affected; BHI Not affected
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Versions of relevant libraries:
[pip3] flake8==6.1.0
[pip3] flake8-bugbear==23.3.23
[pip3] flake8-comprehensions==3.15.0
[pip3] flake8-executable==2.1.3
[pip3] flake8-logging-format==0.9.0
[pip3] flake8-pyi==23.3.1
[pip3] flake8-simplify==0.19.3
[pip3] mypy==1.13.0
[pip3] mypy-extensions==1.0.0
[pip3] numpy==1.26.4
[pip3] optree==0.13.0
[pip3] torch==2.7.0a0+git82b08a2
[conda] Could not collect
</details>
cc @albanD @mruberry @jbschlosser @walterddr @mikaylagawarecki
| true
|
2,771,336,244
|
fix non-strict placeholder naming with kwargs
|
avikchaudhuri
|
closed
|
[
"fb-exported",
"Merged",
"ciflow/trunk",
"ciflow/inductor",
"release notes: export"
] | 10
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #144278
Fixes https://github.com/pytorch/pytorch/issues/143732
Differential Revision: [D67872055](https://our.internmc.facebook.com/intern/diff/D67872055/)
| true
|
2,771,334,829
|
[Inductor] [bc-breaking] Node Level provenance tracking
|
yushangdi
|
closed
|
[
"fb-exported",
"Merged",
"ciflow/trunk",
"release notes: fx",
"fx",
"module: inductor",
"ciflow/inductor",
"suppress-bc-linter"
] | 6
|
CONTRIBUTOR
|
Summary:
- use GraphTransformObserver + replace_node hooks to track node sources when they are replaced
- add pre_grad_graph tracking to tlparse
- add the node provenance information to post_grad_graph tlparse. This is for the frontend to create a mapping between pre_grad and post_grad graph. See an example frontend (this is just a prototype) here: https://drive.google.com/file/d/1cMHH_0y4FJUSS9tATwGQvA72O0Lth8eh/view?usp=sharing
- change "action" of NodeSource from a single action to a list of actions.
- It's BC-Breaking because we removed `GraphTransformObserver`'s class methods `on_node_erase` and `on_node_erase` .
https://docs.google.com/document/d/1dGh9myqNhywmbfP0Quzx_f04bghDFlj8cawj8MopiO8/edit?tab=t.0
The front-end code that takes in the tlparse result is in https://github.com/yushangdi/compiler_explorer.
ghstack-source-id: 260390519
Test Plan:
```
buck2 run mode/dev-nosan fbcode//caffe2/test:fx -- -r test_graph_transform_observer
buck run mode/dev-nosan fbcode//caffe2/test:fx -- -r node_source
buck run mode/dev-nosan fbcode//caffe2/test:fx -- -r graph_provenance
```
Front-end example screenshots on a real model, 93% coverage rate between pre_grad_graph and post_grad_graph
{F1973584210}{F1973584209}
```
buck2 build --show-output mode/opt -c=python.package_style=inplace -c fbcode.enable_gpu_sections=true -c fbcode.platform=platform010 -c fbcode.split-dwarf=true -c fbcode.nvcc_arch=a100,h100 caffe2/torch/fb/model_transform/experimental/benchmark:mts_gpu_benchmark
MODEL_ENTITY_ID=644688112
SNAPSHOT_ID=32
MODULE=merge
TORCH_COMPILE_DEBUG=1 CUDA_VISIBLE_DEVICES=7 TORCH_LOGS="+inductor,+schedule,output_code,graph_code" TORCHINDUCTOR_MAX_AUTOTUNE=1 TORCHINDUCTOR_UNIQUE_KERNEL_NAMES=1 ../buck-out/v2/gen/fbcode/ec86b05dd59e84db/caffe2/torch/fb/model_transform/experimental/benchmark/__mts_gpu_benchmark__/mts_gpu_benchmark.par --local-model /home/bahuang/models/${MODEL_ENTITY_ID}/${SNAPSHOT_ID}/gpu_lowering/input.predictor.disagg.gpu.merge --lower-backend AOT_INDUCTOR_EP --gpu-trace --aot-inductor-config="{'max_autotune':
True}"
buck2 run mode/dev-nosan fbcode//caffe2/test/inductor:auto_functionalize
```
Differential Revision: D65006709
cc @ezyang @SherlockNoMad @EikanWang @jgong5 @wenzhe-nrv @voznesenskym @penguinwu @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @ColinPeppler @amjames @desertfire @chauhang @aakhundov
| true
|
2,771,320,134
|
[functorch] clean up asserts in `test_dims.py`
|
eqy
|
closed
|
[
"open source",
"Merged",
"ciflow/trunk",
"topic: not user facing",
"ciflow/periodic",
"module: functorch",
"module: python version"
] | 9
|
COLLABORATOR
|
For better debuggability of issues encountered in, e.g., #141730 when trying to migrate to Python 3.12/3.13.
cc @zou3519 @Chillee @samdow @kshitij12345
| true
|
2,771,301,328
|
Dynamo fails on `types.UnionType`s
|
ar0ck
|
open
|
[
"triaged",
"oncall: pt2",
"module: dynamo",
"oncall: export"
] | 2
|
CONTRIBUTOR
|
### 🐛 Describe the bug
```python
import torch
class Mod(torch.nn.Module):
def forward(self):
int | float
torch.export.export(Mod(), ())
```
### Error logs
```python
Traceback (most recent call last):
File "bug.py", line 7, in <module>
torch.export.export(Mod(), ())
File "/.../lib/python3.12/site-packages/torch/export/__init__.py", line 368, in export
return _export(
^^^^^^^^
File "/.../lib/python3.12/site-packages/torch/export/_trace.py", line 1031, in wrapper
raise e
File "/.../lib/python3.12/site-packages/torch/export/_trace.py", line 1004, in wrapper
ep = fn(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^
File "/.../lib/python3.12/site-packages/torch/export/exported_program.py", line 128, in wrapper
return fn(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^
File "/.../lib/python3.12/site-packages/torch/export/_trace.py", line 1961, in _export
return _export_for_training(
^^^^^^^^^^^^^^^^^^^^^
File "/.../lib/python3.12/site-packages/torch/export/_trace.py", line 1031, in wrapper
raise e
File "/.../lib/python3.12/site-packages/torch/export/_trace.py", line 1004, in wrapper
ep = fn(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^
File "/.../lib/python3.12/site-packages/torch/export/exported_program.py", line 128, in wrapper
return fn(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^
File "/.../lib/python3.12/site-packages/torch/export/_trace.py", line 1825, in _export_for_training
export_artifact = export_func( # type: ignore[operator]
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/.../lib/python3.12/site-packages/torch/export/_trace.py", line 1283, in _strict_export_lower_to_aten_ir
gm_torch_level = _export_to_torch_ir(
^^^^^^^^^^^^^^^^^^^^
File "/.../lib/python3.12/site-packages/torch/export/_trace.py", line 667, in _export_to_torch_ir
gm_torch_level, _ = torch._dynamo.export(
^^^^^^^^^^^^^^^^^^^^^
File "/.../lib/python3.12/site-packages/torch/_dynamo/eval_frame.py", line 1583, in inner
result_traced = opt_f(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^
File "/.../lib/python3.12/site-packages/torch/nn/modules/module.py", line 1739, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/.../lib/python3.12/site-packages/torch/nn/modules/module.py", line 1750, in _call_impl
return forward_call(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/.../lib/python3.12/site-packages/torch/_dynamo/eval_frame.py", line 576, in _fn
return fn(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^
File "/.../lib/python3.12/site-packages/torch/nn/modules/module.py", line 1739, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/.../lib/python3.12/site-packages/torch/nn/modules/module.py", line 1750, in _call_impl
return forward_call(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/.../lib/python3.12/site-packages/torch/_dynamo/convert_frame.py", line 1400, in __call__
return self._torchdynamo_orig_callable(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/.../lib/python3.12/site-packages/torch/_dynamo/convert_frame.py", line 565, in __call__
return _compile(
^^^^^^^^^
File "/.../lib/python3.12/site-packages/torch/_dynamo/convert_frame.py", line 997, in _compile
guarded_code = compile_inner(code, one_graph, hooks, transform)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/.../lib/python3.12/site-packages/torch/_dynamo/convert_frame.py", line 725, in compile_inner
return _compile_inner(code, one_graph, hooks, transform)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/.../lib/python3.12/site-packages/torch/_utils_internal.py", line 95, in wrapper_function
return function(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^
File "/.../lib/python3.12/site-packages/torch/_dynamo/convert_frame.py", line 760, in _compile_inner
out_code = transform_code_object(code, transform)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/.../lib/python3.12/site-packages/torch/_dynamo/bytecode_transformation.py", line 1414, in transform_code_object
transformations(instructions, code_options)
File "/.../lib/python3.12/site-packages/torch/_dynamo/convert_frame.py", line 236, in _fn
return fn(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^
File "/.../lib/python3.12/site-packages/torch/_dynamo/convert_frame.py", line 680, in transform
tracer.run()
File "/.../lib/python3.12/site-packages/torch/_dynamo/symbolic_convert.py", line 2906, in run
super().run()
File "/.../lib/python3.12/site-packages/torch/_dynamo/symbolic_convert.py", line 1076, in run
while self.step():
^^^^^^^^^^^
File "/.../lib/python3.12/site-packages/torch/_dynamo/symbolic_convert.py", line 986, in step
self.dispatch_table[inst.opcode](self, inst)
File "/.../lib/python3.12/site-packages/torch/_dynamo/symbolic_convert.py", line 2317, in BINARY_OP
return _binary_op_lookup[inst.arg](self, inst)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/.../lib/python3.12/site-packages/torch/_dynamo/symbolic_convert.py", line 339, in impl
self.push(fn_var.call_function(self, self.popn(nargs), {}))
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/.../lib/python3.12/site-packages/torch/_dynamo/variables/builtin.py", line 1003, in call_function
return handler(tx, args, kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^
File "/.../lib/python3.12/site-packages/torch/_dynamo/variables/builtin.py", line 851, in builtin_dispatch
rv = fn(tx, args, kwargs)
^^^^^^^^^^^^^^^^^^^^
File "/.../lib/python3.12/site-packages/torch/_dynamo/variables/builtin.py", line 831, in constant_fold_handler
return VariableTracker.build(tx, res)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/.../lib/python3.12/site-packages/torch/_dynamo/variables/base.py", line 452, in build
return builder.SourcelessBuilder.create(tx, value)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/.../lib/python3.12/site-packages/torch/_dynamo/variables/builder.py", line 2996, in create
unimplemented(
File "/.../lib/python3.12/site-packages/torch/_dynamo/exc.py", line 356, in unimplemented
raise Unsupported(msg, case_name=case_name)
torch._dynamo.exc.Unsupported: Unexpected type in sourceless builder types.UnionType
from user code:
File "bug.py", line 5, in forward
int | float
Set TORCH_LOGS="+dynamo" and TORCHDYNAMO_VERBOSE=1 for more information
```
### Versions
```
Collecting environment information...
PyTorch version: 2.7.0.dev20250106+cpu
Is debug build: False
CUDA used to build PyTorch: None
ROCM used to build PyTorch: N/A
OS: Ubuntu 24.04.1 LTS (x86_64)
GCC version: (Ubuntu 13.3.0-6ubuntu2~24.04) 13.3.0
Clang version: Could not collect
CMake version: version 3.28.3
Libc version: glibc-2.39
Python version: 3.12.3 (main, Nov 6 2024, 18:32:19) [GCC 13.2.0] (64-bit runtime)
Python platform: Linux-6.8.0-45-generic-x86_64-with-glibc2.39
Is CUDA available: False
CUDA runtime version: No CUDA
CUDA_MODULE_LOADING set to: N/A
GPU models and configuration: No CUDA
Nvidia driver version: No CUDA
cuDNN version: No CUDA
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 46 bits physical, 48 bits virtual
Byte Order: Little Endian
CPU(s): 20
On-line CPU(s) list: 0-19
Vendor ID: GenuineIntel
Model name: 13th Gen Intel(R) Core(TM) i7-1370P
CPU family: 6
Model: 186
Thread(s) per core: 2
Core(s) per socket: 14
Socket(s): 1
Stepping: 2
CPU(s) scaling MHz: 13%
CPU max MHz: 5200.0000
CPU min MHz: 400.0000
BogoMIPS: 4377.60
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc art arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc cpuid aperfmperf tsc_known_freq pni pclmulqdq dtes64 monitor ds_cpl vmx smx est tm2 ssse3 sdbg fma cx16 xtpr pdcm pcid sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm 3dnowprefetch cpuid_fault epb ssbd ibrs ibpb stibp ibrs_enhanced tpr_shadow flexpriority ept vpid ept_ad fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid rdseed adx smap clflushopt clwb intel_pt sha_ni xsaveopt xsavec xgetbv1 xsaves split_lock_detect user_shstk avx_vnni dtherm ida arat pln pts hwp hwp_notify hwp_act_window hwp_epp hwp_pkg_req hfi vnmi umip pku ospke waitpkg gfni vaes vpclmulqdq tme rdpid movdiri movdir64b fsrm md_clear serialize pconfig arch_lbr ibt flush_l1d arch_capabilities
Virtualization: VT-x
L1d cache: 544 KiB (14 instances)
L1i cache: 704 KiB (14 instances)
L2 cache: 11.5 MiB (8 instances)
L3 cache: 24 MiB (1 instance)
NUMA node(s): 1
NUMA node0 CPU(s): 0-19
Vulnerability Gather data sampling: Not affected
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Not affected
Vulnerability Reg file data sampling: Mitigation; Clear Register File
Vulnerability Retbleed: Not affected
Vulnerability Spec rstack overflow: Not affected
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Enhanced / Automatic IBRS; IBPB conditional; RSB filling; PBRSB-eIBRS SW sequence; BHI BHI_DIS_S
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Versions of relevant libraries:
[pip3] numpy==2.2.1
[pip3] torch==2.7.0.dev20250106+cpu
[conda] Could not collect
```
cc @chauhang @penguinwu @voznesenskym @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @amjames @avikchaudhuri @gmagogsfm @zhxchen17 @tugsbayasgalan @angelayi @suo @ydwu4
| true
|
2,771,275,741
|
update expected results
|
laithsakka
|
closed
|
[
"Merged",
"ciflow/trunk",
"topic: not user facing",
"module: dynamo",
"ciflow/inductor"
] | 8
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #144274
This PR https://github.com/pytorch/pytorch/commit/f6488d85a013e0ec9d5415c29d78ec3f93b3c0ec made it +1.3%, which is below the 1.5% threshold.
Once we have the API from dev infra and change the test, this won't be happening.
<img width="364" alt="Screenshot 2025-01-06 at 11 01 15 AM" src="https://github.com/user-attachments/assets/401b2d11-e400-49d6-b6f9-8e10ca141cb0" />
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @chauhang @amjames
| true
|
2,771,248,350
|
The axis name set by `torch.export.Dim` is not in ExportedProgram
|
titaiwangms
|
closed
|
[
"oncall: pt2",
"oncall: export"
] | 4
|
COLLABORATOR
|
### 🐛 Describe the bug
```python
import torch
class Model(torch.nn.Module):
def forward(self, x, y):
return x + y
dim = torch.export.Dim("batch", min=1, max=6)
ep = torch.export.export(
Model(),
(torch.randn(2, 3), torch.randn(2, 3)),
dynamic_shapes=[{0: dim}, {0: dim}],
)
```
```
ExportedProgram:
class GraphModule(torch.nn.Module):
def forward(self, x: "f32[s0, 3]", y: "f32[s0, 3]"):
# File: /home/titaiwang/pytorch/test_export.py:5 in forward, code: return x + y
add: "f32[s0, 3]" = torch.ops.aten.add.Tensor(x, y); x = y = None
return (add,)
Graph signature: ExportGraphSignature(input_specs=[InputSpec(kind=<InputKind.USER_INPUT: 1>, arg=TensorArgument(name='x'), target=None, persistent=None), InputSpec(kind=<InputKind.USER_INPUT: 1>, arg=TensorArgument(name='y'), target=None, persistent=None)], output_specs=[OutputSpec(kind=<OutputKind.USER_OUTPUT: 1>, arg=TensorArgument(name='add'), target=None)])
Range constraints: {s0: VR[1, 6]}
```
The graph module and range constraints still use s0 to represent the dynamic axis.
### Versions
Nightly
Versions of relevant libraries:
[pip3] flake8==6.1.0
[pip3] flake8-bugbear==23.3.23
[pip3] flake8-coding==1.3.3
[pip3] flake8-comprehensions==3.15.0
[pip3] flake8-executable==2.1.3
[pip3] flake8-logging-format==0.9.0
[pip3] flake8-pyi==23.3.1
[pip3] flake8-simplify==0.19.3
[pip3] model-explorer-onnx==0.3.0
[pip3] mypy==1.13.0
[pip3] mypy-extensions==1.0.0
[pip3] numpy==1.26.4
[pip3] onnx==1.17.0
[pip3] onnxruntime==1.20.1
[pip3] onnxscript==0.1.0.dev20241216
[pip3] optree==0.13.0
[pip3] pytorch-sphinx-theme==0.0.24
[pip3] pytorch-triton==3.1.0+cf34004b8a
[pip3] torch==2.6.0a0+gitb5b1e94
[pip3] torchaudio==2.5.0a0+b4a286a
[pip3] torchmetrics==1.5.2
[pip3] torchvision==0.20.0a0+945bdad
[pip3] triton==3.2.0
[conda] mkl 2023.1.0 h213fc3f_46344
[conda] mkl-include 2023.1.0 h06a4308_46344
[conda] mkl-service 2.4.0 py311h5eee18b_1
[conda] numpy 1.26.4 pypi_0 pypi
[conda] optree 0.13.0 pypi_0 pypi
[conda] pytorch-sphinx-theme 0.0.24 dev_0 <develop>
[conda] pytorch-triton 3.1.0+cf34004b8a pypi_0 pypi
[conda] torch 2.6.0a0+gitb5b1e94 dev_0 <develop>
[conda] torchaudio 2.5.0a0+b4a286a dev_0 <develop>
[conda] torchfix 0.4.0 pypi_0 pypi
[conda] torchmetrics 1.5.2 pypi_0 pypi
[conda] torchvision 0.20.0a0+945bdad dev_0 <develop>
[conda] triton 3.2.0 pypi_0 pypi
cc @chauhang @penguinwu @avikchaudhuri @gmagogsfm @zhxchen17 @tugsbayasgalan @angelayi @suo @ydwu4
| true
|
2,771,238,000
|
add inductor_triton_kernel_mapping_post_grad.json to tlparse
|
YUNQIUGUO
|
closed
|
[
"fb-exported",
"Stale",
"ciflow/trunk",
"topic: not user facing",
"module: inductor",
"module: dynamo",
"ciflow/inductor"
] | 17
|
CONTRIBUTOR
|
Summary: Add debug trace artifact to inductor_triton_kernel_mapping_post_grad.json to tlparse as well
Test Plan: https://manifold.edge.x2p.facebook.net/v0/read/tree/logs/.tmpvUPL3i/index.html?bucketName=tlparse_reports&apiKey=tlparse_reports-key&withPayload=1&timeoutMsec=100
Differential Revision: D67612181
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @ColinPeppler @amjames @desertfire @chauhang @aakhundov
| true
|
2,771,195,720
|
fix torch.compile + ddp + non-reentrant AC pack hook firing count
|
xmfan
|
closed
|
[
"module: activation checkpointing",
"module: ddp",
"Merged",
"ciflow/trunk",
"release notes: distributed (ddp)",
"module: dynamo",
"ciflow/inductor",
"release notes: dynamo"
] | 15
|
MEMBER
|
FIXES https://github.com/pytorch/pytorch/issues/144035
In order to preserve hook firing semantics, we disabled pack/unpack hooks for torch.compile: https://github.com/pytorch/pytorch/pull/123196. In DDP under torch.compile, there's another callsite where we need to disable hooks.
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #144271
cc @soulitzer @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @chauhang @amjames
| true
|
2,771,159,599
|
[BE][Ez]: Fix docs recommending inefficient tensor op order
|
Skylion007
|
closed
|
[
"open source",
"better-engineering",
"Merged",
"ciflow/trunk",
"topic: not user facing"
] | 4
|
COLLABORATOR
|
`detach().clone()` is faster than `.clone().detach()` since the gradients are not cloned. Let's update all the documentation and tests so that users do not use the inefficient op ordering.
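A small sanity-check sketch of the two orderings (both produce the same detached result; only the recommended one avoids cloning inside the autograd graph):
```python
import torch

x = torch.randn(1024, 1024, requires_grad=True)

# Recommended: detach first, then clone -- the copy happens outside autograd.
y = x.detach().clone()

# Discouraged ordering: clone inside the autograd graph, then detach.
z = x.clone().detach()

assert torch.equal(y, z) and not y.requires_grad and not z.requires_grad
```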
| true
|
2,771,145,943
|
[BE]: Remove redundant copy in torch chunk shard
|
Skylion007
|
closed
|
[
"oncall: distributed",
"open source",
"better-engineering",
"Merged",
"ciflow/trunk",
"topic: not user facing"
] | 3
|
COLLABORATOR
|
Fixes an issue noticed in a recent all_gather PR. Some parts of the codebase have a double copy with `clone().contiguous()`, which could be fused into a single copy op.
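A minimal sketch of the kind of pattern being fused (not the exact change in this PR):
```python
import torch

t = torch.randn(4, 8).t()  # non-contiguous view

# Two copies: clone() preserves the strided layout, contiguous() copies again.
a = t.clone().contiguous()

# One copy: clone directly into contiguous memory.
b = t.clone(memory_format=torch.contiguous_format)

assert torch.equal(a, b) and b.is_contiguous()
```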
cc @H-Huang @awgu @kwen2501 @wanchaol @fegin @fduwjj @wz337 @wconstab @d4l3k @c-p-i-o
| true
|
2,771,141,346
|
[pytorch][fdx] fix TSAN complains in multi-threaded environment
|
ms-meta
|
closed
|
[
"oncall: jit",
"fb-exported",
"module: amp (automated mixed precision)",
"Stale",
"ciflow/trunk",
"release notes: jit"
] | 5
|
NONE
|
TSAN finds that some `pyTorch***` components read and write a few global variables from different threads:
ThreadSanitizer: data race fbcode/caffe2/torch/csrc/jit/passes/autocast.cpp:513 in torch::jit::setAutocastMode(bool)
ThreadSanitizer: data race fbcode/caffe2/torch/csrc/jit/operator_upgraders/version_map.cpp:103 in torch::jit::get_operator_version_map[abi:cxx11]()
Declaring the variables `autocast_enabled` and `isVersionMapSorted` as `std::atomic<bool>` instead of plain `bool` makes TSAN happy.
Differential Revision: D67568986
cc @EikanWang @jgong5 @wenzhe-nrv @sanchitintel @mcarilli @ptrblck @leslie-fang-intel
| true
|
2,771,091,212
|
Make all-reduce input contiguous in `distributed.nn.all_reduce`
|
awgu
|
closed
|
[
"oncall: distributed",
"Merged",
"ciflow/trunk",
"release notes: distributed (c10d)"
] | 3
|
COLLABORATOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #144267
Fixes https://github.com/pytorch/pytorch/issues/144060
I confirmed that the unit test fails without the `.contiguous()` fix.
cc @H-Huang @kwen2501 @wanchaol @fegin @fduwjj @wz337 @wconstab @d4l3k @c-p-i-o
| true
|
2,771,058,327
|
Update #graph breaks for moco benchmark
|
guilhermeleobas
|
closed
|
[
"open source",
"Merged",
"ciflow/trunk",
"topic: not user facing",
"module: dynamo",
"ciflow/inductor"
] | 3
|
COLLABORATOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #144266
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @chauhang @amjames
| true
|
2,771,020,351
|
Migrate from Tuple -> tuple in torch/ao
|
bobrenjc93
|
closed
|
[
"Merged",
"ciflow/trunk",
"release notes: quantization",
"topic: not user facing",
"fx",
"release notes: AO frontend"
] | 9
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #144265
cc @ezyang @SherlockNoMad @EikanWang @jgong5 @wenzhe-nrv
| true
|
2,771,020,218
|
Migrate from Tuple -> tuple in torch/_inductor
|
bobrenjc93
|
closed
|
[
"module: rocm",
"Merged",
"ciflow/trunk",
"topic: not user facing",
"module: inductor",
"ciflow/inductor"
] | 11
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #144264
cc @jeffdaily @sunway513 @jithunnair-amd @pruthvistony @ROCmSupport @dllehr-amd @jataylo @hongxiayang @naromero77amd @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @ColinPeppler @amjames @desertfire @chauhang @aakhundov
| true
|
2,771,020,072
|
Migrate from Tuple -> tuple in torch/_functorch
|
bobrenjc93
|
closed
|
[
"ciflow/trunk",
"topic: not user facing",
"ciflow/inductor",
"release notes: AO frontend"
] | 11
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #144263
| true
|
2,771,019,932
|
Migrate from Tuple -> tuple in torch/_export
|
bobrenjc93
|
closed
|
[
"Merged",
"ciflow/trunk",
"topic: not user facing",
"ciflow/inductor",
"release notes: export"
] | 3
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #144265
* #144264
* #144263
* __->__ #144262
| true
|
2,771,019,797
|
Migrate from Tuple -> tuple in torch/_dynamo
|
bobrenjc93
|
closed
|
[
"Merged",
"ciflow/trunk",
"topic: not user facing",
"module: dynamo",
"ciflow/inductor",
"module: compiled autograd"
] | 13
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #144261
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @chauhang @amjames @xmfan @yf225
| true
|
2,771,019,681
|
Migrate from Tuple -> tuple in torch/_decomp
|
bobrenjc93
|
closed
|
[
"Merged",
"Reverted",
"ciflow/trunk",
"topic: not user facing",
"ciflow/inductor",
"ci-no-td"
] | 19
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #144260
| true
|
2,771,019,546
|
Migrate from Tuple -> tuple in benchmarks
|
bobrenjc93
|
closed
|
[
"Merged",
"ciflow/trunk",
"topic: not user facing",
"module: dynamo",
"ciflow/inductor"
] | 9
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #144265
* #144264
* #144263
* #144262
* #144261
* #144260
* __->__ #144259
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @chauhang @amjames
| true
|
2,771,019,430
|
Migrate from Tuple -> tuple in torch/distributed
|
bobrenjc93
|
closed
|
[
"oncall: distributed",
"Merged",
"ciflow/trunk",
"release notes: distributed (sharded)",
"topic: not user facing",
"ciflow/inductor"
] | 23
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #144258
cc @H-Huang @awgu @kwen2501 @wanchaol @fegin @fduwjj @wz337 @wconstab @d4l3k @c-p-i-o @LucasLLC @MeetVadakkanchery @mhorowitz @pradeepfn @ekr0
| true
|
2,771,019,273
|
Migrate from Tuple -> tuple in torch/profiler
|
bobrenjc93
|
closed
|
[
"Merged",
"ciflow/trunk",
"topic: not user facing"
] | 3
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #144265
* #144264
* #144263
* #144262
* #144261
* #144260
* #144259
* #144258
* __->__ #144257
| true
|
2,771,019,158
|
Migrate from Tuple -> tuple in torch/testing
|
bobrenjc93
|
closed
|
[
"oncall: distributed",
"oncall: jit",
"Merged",
"ciflow/trunk",
"release notes: distributed (rpc)",
"topic: not user facing"
] | 12
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #144256
cc @H-Huang @awgu @kwen2501 @wanchaol @fegin @fduwjj @wz337 @wconstab @d4l3k @c-p-i-o @EikanWang @jgong5 @wenzhe-nrv @sanchitintel
| true
|
2,771,019,033
|
Migrate from Tuple -> tuple in torch/utils/data
|
bobrenjc93
|
closed
|
[
"Merged",
"ciflow/trunk",
"release notes: dataloader",
"topic: not user facing"
] | 6
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #144255
| true
|
2,771,018,912
|
Migrate from Tuple -> tuple in test/distributed/_composable
|
bobrenjc93
|
closed
|
[
"oncall: distributed",
"Merged",
"ciflow/trunk",
"topic: not user facing"
] | 14
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #144254
cc @H-Huang @awgu @kwen2501 @wanchaol @fegin @fduwjj @wz337 @wconstab @d4l3k @c-p-i-o
| true
|
2,771,018,775
|
Migrate from Tuple -> tuple in benchmarks/instruction_counts/core
|
bobrenjc93
|
closed
|
[
"Merged",
"ciflow/trunk",
"topic: not user facing"
] | 9
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #144253
| true
|
2,770,992,888
|
Update torch to py39 type annotations
|
aorenste
|
closed
|
[
"oncall: distributed",
"oncall: jit",
"module: rocm",
"module: cpu",
"module: amp (automated mixed precision)",
"release notes: quantization",
"release notes: releng",
"fx",
"module: inductor",
"module: dynamo",
"ciflow/inductor",
"module: compiled autograd"
] | 1
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #144252
cc @H-Huang @awgu @kwen2501 @wanchaol @fegin @fduwjj @wz337 @wconstab @d4l3k @c-p-i-o @EikanWang @jgong5 @wenzhe-nrv @sanchitintel @jeffdaily @sunway513 @jithunnair-amd @pruthvistony @ROCmSupport @dllehr-amd @jataylo @hongxiayang @naromero77amd @mingfeima @XiaobingSuper @ashokei @jingxu10 @mcarilli @ptrblck @leslie-fang-intel @ezyang @SherlockNoMad @voznesenskym @penguinwu @Guobing-Chen @zhuhaozhe @blzheng @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @ColinPeppler @amjames @desertfire @chauhang @aakhundov @LucasLLC @MeetVadakkanchery @mhorowitz @pradeepfn @ekr0 @xmfan
| true
|
2,770,958,158
|
[MPS] Fix bitwise shifts for uint8
|
malfet
|
closed
|
[
"Merged",
"topic: bug fixes",
"release notes: mps",
"ciflow/mps"
] | 5
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #144251
* #144250
* #144249
Previously all bitwise operations were aliased to the same type, but this is wrong for shift ops.
Rather than building overly complex logic, let's just instantiate using the shared `scalarToMetalTypeString` helper function.
Fixes https://github.com/pytorch/pytorch/issues/144190
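A quick check of the behavior this targets (hedged sketch; requires an MPS device, and the printed values are the expected results after the fix):
```python
import torch

x = torch.tensor([1, 2, 4], dtype=torch.uint8, device="mps")
print(x << 1)  # expected: tensor([2, 4, 8], device='mps:0', dtype=torch.uint8)
print(x >> 1)  # expected: tensor([0, 1, 2], device='mps:0', dtype=torch.uint8)
```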
| true
|
2,770,958,039
|
[BE] Fix + parametrize `test_min_max_nan_propagation`
|
malfet
|
closed
|
[
"Merged",
"topic: not user facing",
"ciflow/mps"
] | 3
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #144251
* __->__ #144250
* #144249
- `dtype` was not passed as an argument to `torch.rand` before
- Condition bfloat16 testing on MacOS14+
| true
|
2,770,957,908
|
[BE] Parametrize `test_min_max`
|
malfet
|
closed
|
[
"Merged",
"topic: not user facing",
"ciflow/mps"
] | 6
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #144251
* #144250
* __->__ #144249
It's better to have one unit test per dtype rather than a combined one
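A hedged sketch of the per-dtype parametrization pattern (illustrative only, not the actual MPS test body), using the parametrization helpers in `torch.testing._internal.common_utils`:
```python
import torch
from torch.testing._internal.common_utils import (
    TestCase, instantiate_parametrized_tests, parametrize, run_tests,
)

class TestMinMax(TestCase):
    @parametrize("dtype", [torch.float32, torch.float16, torch.int32])
    def test_min_max(self, dtype):
        x = torch.arange(6, dtype=dtype)
        self.assertEqual(x.min(), x[0])
        self.assertEqual(x.max(), x[-1])

instantiate_parametrized_tests(TestMinMax)

if __name__ == "__main__":
    run_tests()
```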
| true
|
2,770,800,583
|
[inductor][cpu] Fix bmm b_index for dynamic expressions in inductor autotuner
|
pytorchbot
|
closed
|
[
"open source",
"module: inductor",
"ciflow/inductor"
] | 2
|
COLLABORATOR
|
Fixes #143102
Addresses 2 problems relating to dynamic batch size in the BMM autotuner:
1. With dynamic batch size, the input can be a sympy Mult expression such as `s0*8`, which occurs in many dynamo benchmark models. We address this by using `size_hints` to solve for any such expressions (see the sketch after this list). This is safe since this section of the code is only called to generate inputs for benchmarking.
2. Some epilogue nodes may use the dynamic batch size as part of the codegen, for example when an input to the epilogue node is transposed and has the dynamic batch size in the strides of other dimensions. When these epilogue nodes exist and the sizevar is not already present in `kernel.args`, a new sizevar with a fresh name is created. Subsequent calls to `def_kernel` could overwrite this variable name, so to avoid this we pass all the sizevars as `extra_sizevars` to the calls to `def_kernel` for the GEMM functions, so no variable renaming happens later in the BMM definition.
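A hedged sketch of what "using size hints to solve the expression" boils down to (variable names are illustrative, not the autotuner's actual code):
```python
import sympy

s0 = sympy.Symbol("s0")
batch_expr = s0 * 8          # dynamic batch expression coming from the graph
size_hints = {s0: 4}         # assumed hint for the dynamic dimension

# Substitute the hint to get a concrete value, used only to build benchmark inputs.
concrete_batch = int(batch_expr.subs(size_hints))
print(concrete_batch)        # 32
```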
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @ColinPeppler @amjames @desertfire @chauhang @aakhundov
| true
|
2,770,643,439
|
[inductor][cpu] torch.bitwise_and/or/xor incorrectly accepts float32 tensors
|
maybeLee
|
open
|
[
"module: error checking",
"oncall: pt2",
"oncall: cpu inductor"
] | 2
|
CONTRIBUTOR
|
### 🐛 Describe the bug
It's not a severe problem, but I'm reporting this issue anyway since the actual behavior deviates from what is expected in the documentation.
According to the documentation (e.g., https://pytorch.org/docs/stable/generated/torch.bitwise_not.html), `torch.bitwise_and/or/xor` only accepts boolean or integer tensors.
However, after `torch.compile`, these three APIs also accept float32 tensors when running on CPU. In contrast, if I run the code with torch.compile on CUDA or in eager mode, float tensors are rejected by these APIs.
```
import torch
for op in [torch.bitwise_and, torch.bitwise_not, torch.bitwise_or, torch.bitwise_xor]:
cf = torch.compile(op)
for dtype in [torch.float16, torch.float32, torch.float64]:
input = torch.tensor([-1, -2, 3], dtype=dtype)
other = torch.tensor([1, 0, 3], dtype=dtype)
try:
res = cf(input, other)
print(f"[torch.compile] OP: {op.__name__} accepts dtype: {dtype}")
except:
print(f"[torch.compile] OP: {op.__name__} does not accept dtype: {dtype}")
```
Actual output:
```
[torch.compile] OP: bitwise_and does not accept dtype: torch.float16
[torch.compile] OP: bitwise_and accepts dtype: torch.float32 # incorrectly accept float32
[torch.compile] OP: bitwise_and does not accept dtype: torch.float64
[torch.compile] OP: bitwise_not does not accept dtype: torch.float16
[torch.compile] OP: bitwise_not does not accept dtype: torch.float32
[torch.compile] OP: bitwise_not does not accept dtype: torch.float64
[torch.compile] OP: bitwise_or does not accept dtype: torch.float16
[torch.compile] OP: bitwise_or accepts dtype: torch.float32 # incorrectly accept float32
[torch.compile] OP: bitwise_or does not accept dtype: torch.float64
[torch.compile] OP: bitwise_xor does not accept dtype: torch.float16
[torch.compile] OP: bitwise_xor accepts dtype: torch.float32 # incorrectly accept float32
[torch.compile] OP: bitwise_xor does not accept dtype: torch.float64
```
### Versions
PyTorch version: 2.7.0.dev20250106+cu124
Is debug build: False
CUDA used to build PyTorch: 12.4
ROCM used to build PyTorch: N/A
OS: AlmaLinux 9.4 (Seafoam Ocelot) (x86_64)
GCC version: (GCC) 11.4.1 20231218 (Red Hat 11.4.1-3)
Clang version: 17.0.6 (AlmaLinux OS Foundation 17.0.6-5.el9)
CMake version: version 3.26.5
Libc version: glibc-2.34
Python version: 3.11.11 (main, Dec 11 2024, 16:28:39) [GCC 11.2.0] (64-bit runtime)
Python platform: Linux-5.14.0-427.37.1.el9_4.x86_64-x86_64-with-glibc2.34
Is CUDA available: True
CUDA runtime version: Could not collect
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration:
GPU 0: NVIDIA GeForce RTX 3090
GPU 1: NVIDIA GeForce RTX 3090
GPU 2: NVIDIA GeForce RTX 3090
GPU 3: NVIDIA GeForce RTX 3090
Nvidia driver version: 560.35.03
cuDNN version: Could not collect
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 43 bits physical, 48 bits virtual
Byte Order: Little Endian
CPU(s): 64
On-line CPU(s) list: 0-63
Vendor ID: AuthenticAMD
Model name: AMD Ryzen Threadripper PRO 3975WX 32-Cores
CPU family: 23
Model: 49
Thread(s) per core: 2
Core(s) per socket: 32
Socket(s): 1
Stepping: 0
Frequency boost: enabled
CPU(s) scaling MHz: 81%
CPU max MHz: 4368.1641
CPU min MHz: 2200.0000
BogoMIPS: 7000.73
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ht syscall nx mmxext fxsr_opt pdpe1gb rdtscp lm constant_tsc rep_good nopl nonstop_tsc cpuid extd_apicid aperfmperf rapl pni pclmulqdq monitor ssse3 fma cx16 sse4_1 sse4_2 x2apic movbe popcnt aes xsave avx f16c rdrand lahf_lm cmp_legacy svm extapic cr8_legacy abm sse4a misalignsse 3dnowprefetch osvw ibs skinit wdt tce topoext perfctr_core perfctr_nb bpext perfctr_llc mwaitx cpb cat_l3 cdp_l3 hw_pstate ssbd mba ibpb stibp vmmcall fsgsbase bmi1 avx2 smep bmi2 cqm rdt_a rdseed adx smap clflushopt clwb sha_ni xsaveopt xsavec xgetbv1 xsaves cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local clzero irperf xsaveerptr rdpru wbnoinvd amd_ppin arat npt lbrv svm_lock nrip_save tsc_scale vmcb_clean flushbyasid decodeassists pausefilter pfthreshold avic v_vmsave_vmload vgif v_spec_ctrl umip rdpid overflow_recov succor smca sev sev_es
Virtualization: AMD-V
L1d cache: 1 MiB (32 instances)
L1i cache: 1 MiB (32 instances)
L2 cache: 16 MiB (32 instances)
L3 cache: 128 MiB (8 instances)
NUMA node(s): 1
NUMA node0 CPU(s): 0-63
Vulnerability Gather data sampling: Not affected
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Not affected
Vulnerability Retbleed: Mitigation; untrained return thunk; SMT enabled with STIBP protection
Vulnerability Spec rstack overflow: Mitigation; Safe RET
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Retpolines, IBPB conditional, STIBP always-on, RSB filling, PBRSB-eIBRS Not affected
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Versions of relevant libraries:
[pip3] numpy==1.26.2
[pip3] nvidia-cublas-cu12==12.4.5.8
[pip3] nvidia-cuda-cupti-cu12==12.4.127
[pip3] nvidia-cuda-nvrtc-cu12==12.4.127
[pip3] nvidia-cuda-runtime-cu12==12.4.127
[pip3] nvidia-cudnn-cu12==9.1.0.70
[pip3] nvidia-cufft-cu12==11.2.1.3
[pip3] nvidia-curand-cu12==10.3.5.147
[pip3] nvidia-cusolver-cu12==11.6.1.9
[pip3] nvidia-cusparse-cu12==12.3.1.170
[pip3] nvidia-cusparselt-cu12==0.6.2
[pip3] nvidia-nccl-cu12==2.21.5
[pip3] nvidia-nvjitlink-cu12==12.4.127
[pip3] nvidia-nvtx-cu12==12.4.127
[pip3] optree==0.13.1
[pip3] pytorch-triton==3.2.0+git0d4682f0
[pip3] torch==2.7.0.dev20250106+cu124
[pip3] torchaudio==2.6.0.dev20250106+cu124
[conda] numpy 1.26.2 pypi_0 pypi
[conda] nvidia-cublas-cu12 12.4.5.8 pypi_0 pypi
[conda] nvidia-cuda-cupti-cu12 12.4.127 pypi_0 pypi
[conda] nvidia-cuda-nvrtc-cu12 12.4.127 pypi_0 pypi
[conda] nvidia-cuda-runtime-cu12 12.4.127 pypi_0 pypi
[conda] nvidia-cudnn-cu12 9.1.0.70 pypi_0 pypi
[conda] nvidia-cufft-cu12 11.2.1.3 pypi_0 pypi
[conda] nvidia-curand-cu12 10.3.5.147 pypi_0 pypi
[conda] nvidia-cusolver-cu12 11.6.1.9 pypi_0 pypi
[conda] nvidia-cusparse-cu12 12.3.1.170 pypi_0 pypi
[conda] nvidia-cusparselt-cu12 0.6.2 pypi_0 pypi
[conda] nvidia-nccl-cu12 2.21.5 pypi_0 pypi
[conda] nvidia-nvjitlink-cu12 12.4.127 pypi_0 pypi
[conda] nvidia-nvtx-cu12 12.4.127 pypi_0 pypi
[conda] optree 0.13.1 pypi_0 pypi
[conda] pytorch-triton 3.2.0+git0d4682f0 pypi_0 pypi
[conda] torch 2.7.0.dev20250106+cu124 pypi_0 pypi
[conda] torchaudio 2.6.0.dev20250106+cu124 pypi_0 pypi
cc @malfet @chauhang @penguinwu
| true
|
2,770,509,828
|
[Inductor] Unify the data type propagation between Triton and CPP Backend
|
leslie-fang-intel
|
closed
|
[
"oncall: pt2",
"oncall: cpu inductor"
] | 6
|
COLLABORATOR
|
### 🚀 The feature, motivation and pitch
Previously, `dtype` was an attribute of `CppCSEVariable` but not of `CSEVariable`, and thus the CPP backend had its own data type propagation mechanism in `CppCSEVariable.update_on_args`: https://github.com/pytorch/pytorch/blob/d85ae4be734cfd53f5b893240894381ac65fe8b4/torch/_inductor/codegen/cpp_utils.py#L218
Now the Triton backend has also introduced data type propagation in https://github.com/pytorch/pytorch/blob/d85ae4be734cfd53f5b893240894381ac65fe8b4/torch/_inductor/codegen/common.py#L1825-L1866. We may consolidate these 2 mechanisms to remove redundant code.
### Alternatives
No
### Additional context
No
cc @soulitzer @chauhang @penguinwu
| true
|
2,770,403,577
|
F.interpolate returns NAN on MPS if align_corner is True.
|
benHeid
|
closed
|
[
"module: nn",
"triaged",
"module: correctness (silent)",
"module: mps"
] | 3
|
NONE
|
### 🐛 Describe the bug
When using interpolate on MPS with align_corners=True, the result consists only of NaN values, which is inconsistent with the CPU implementation.
You can replicate this by the following code snippet:
```python
import torch
import torch.nn.functional as F
test = torch.Tensor([[1],[2],[4]]).to("mps")
result = F.interpolate(test.unsqueeze(1), 3, mode="linear", align_corners=True).squeeze(1)
print(result)
# tensor([[nan, nan, nan],
# [nan, nan, nan],
# [nan, nan, nan]], device='mps:0')
test = torch.Tensor([[1],[2],[4]]).to("cpu")
result = F.interpolate(test.unsqueeze(1), 3, mode="linear", align_corners=True).squeeze(1)
print(result)
# tensor([[1., 1., 1.],
# [2., 2., 2.],
# [4., 4., 4.]])
```
### Versions
Collecting environment information...
PyTorch version: 2.5.1
Is debug build: False
CUDA used to build PyTorch: None
ROCM used to build PyTorch: N/A
OS: macOS 15.1 (arm64)
GCC version: Could not collect
Clang version: 16.0.0 (clang-1600.0.26.4)
CMake version: Could not collect
Libc version: N/A
Python version: 3.11.9 (main, Jun 29 2024, 14:01:21) [Clang 15.0.0 (clang-1500.1.0.2.5)] (64-bit runtime)
Python platform: macOS-15.1-arm64-arm-64bit
Is CUDA available: False
CUDA runtime version: No CUDA
CUDA_MODULE_LOADING set to: N/A
GPU models and configuration: No CUDA
Nvidia driver version: No CUDA
cuDNN version: No CUDA
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Apple M2
Versions of relevant libraries:
[pip3] flake8==7.1.1
[pip3] mypy==1.11.2
[pip3] mypy-extensions==1.0.0
[pip3] numpy==1.26.4
[pip3] pytorch-forecasting==1.1.1
[pip3] pytorch-lightning==2.4.0
[pip3] pytorch_optimizer==2.12.0
[pip3] torch==2.5.1
[pip3] torchmetrics==1.4.1
[pip3] torchvision==0.19.0
[conda] Could not collect
cc @albanD @mruberry @jbschlosser @walterddr @mikaylagawarecki @kulinseth @malfet @DenisVieriu97 @jhavukainen
| true
|
2,770,306,177
|
ImportError: Undefined symbol in libcusparse.so.12 with PyTorch 2.5.1 and CUDA 12.2
|
Hmiru
|
closed
|
[
"module: binaries",
"module: cuda"
] | 1
|
NONE
|
### 🐛 Describe the bug
### Problem Description
I encountered an `ImportError` when running a PyTorch script with CUDA 12.2 and PyTorch 2.5.1. The error indicates an undefined symbol in `libcusparse.so.12`.
### Error
```bash
python -c "import torch; print('CUDA Available:', torch.cuda.is_available())"
Traceback (most recent call last):
File "<string>", line 1, in <module>
File "/root/.cache/pypoetry/virtualenvs/soccer-eventpred-YF5lB--H-py3.10/lib/python3.10/site-packages/torch/__init__.py", line 367, in <module>
from torch._C import * # noqa: F403
ImportError: /root/.cache/pypoetry/virtualenvs/soccer-eventpred-YF5lB--H-py3.10/lib/python3.10/site-packages/torch/lib/../../nvidia/cusparse/lib/libcusparse.so.12: undefined symbol: __nvJitLinkComplete_12_4, version libnvJitLink.so.12
```
### Environment
- **PyTorch version**: 2.5.1
- **CUDA version**: 12.2
- **cudnn version**: 8.9.4
- **NVIDIA driver version**: 535.161.07
- **Python version**: 3.10
- **OS**: Ubuntu 22.04
- **Virtual Environment**: Poetry
- **GPU**: 3x NVIDIA RTX A6000
### What I Did
- Verified CUDA installation with `nvcc --version` (CUDA 12.2 is installed); the machine has 3x RTX A6000 GPUs.
- Checked NVIDIA driver compatibility using `nvidia-smi` (Driver version: 535.161.07).
- Confirmed that `libcusparse.so.12` exists in `/usr/local/cuda-12.2/lib64/`.
I would greatly appreciate it if you could provide some guidance on this.
### Versions
PyTorch version: N/A
Is debug build: N/A
CUDA used to build PyTorch: N/A
ROCM used to build PyTorch: N/A
OS: Ubuntu 22.04.5 LTS (x86_64)
GCC version: (Ubuntu 11.4.0-1ubuntu1~22.04) 11.4.0
Clang version: Could not collect
CMake version: version 3.30.5
Libc version: glibc-2.35
Python version: 3.10.12 (main, Nov 6 2024, 20:22:13) [GCC 11.4.0] (64-bit runtime)
Python platform: Linux-6.2.0-26-generic-x86_64-with-glibc2.35
Is CUDA available: N/A
CUDA runtime version: 12.2.91
CUDA_MODULE_LOADING set to: N/A
GPU models and configuration:
GPU 0: NVIDIA RTX A6000
GPU 1: NVIDIA RTX A6000
GPU 2: NVIDIA RTX A6000
Nvidia driver version: 535.161.07
cuDNN version: Probably one of the following:
/usr/local/cuda-12.2/targets/x86_64-linux/lib/libcudnn.so.8.9.4
/usr/local/cuda-12.2/targets/x86_64-linux/lib/libcudnn_adv_infer.so.8.9.4
/usr/local/cuda-12.2/targets/x86_64-linux/lib/libcudnn_adv_train.so.8.9.4
/usr/local/cuda-12.2/targets/x86_64-linux/lib/libcudnn_cnn_infer.so.8.9.4
/usr/local/cuda-12.2/targets/x86_64-linux/lib/libcudnn_cnn_train.so.8.9.4
/usr/local/cuda-12.2/targets/x86_64-linux/lib/libcudnn_ops_infer.so.8.9.4
/usr/local/cuda-12.2/targets/x86_64-linux/lib/libcudnn_ops_train.so.8.9.4
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: N/A
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 46 bits physical, 48 bits virtual
Byte Order: Little Endian
CPU(s): 80
On-line CPU(s) list: 0-79
Vendor ID: GenuineIntel
Model name: Intel(R) Xeon(R) Gold 5218R CPU @ 2.10GHz
CPU family: 6
Model: 85
Thread(s) per core: 2
Core(s) per socket: 20
Socket(s): 2
Stepping: 7
CPU max MHz: 4000.0000
CPU min MHz: 800.0000
BogoMIPS: 4200.00
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc art arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc cpuid aperfmperf pni pclmulqdq dtes64 monitor ds_cpl vmx smx est tm2 ssse3 sdbg fma cx16 xtpr pdcm pcid dca sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm 3dnowprefetch cpuid_fault epb cat_l3 cdp_l3 invpcid_single intel_ppin ssbd mba ibrs ibpb stibp ibrs_enhanced tpr_shadow vnmi flexpriority ept vpid ept_ad fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid cqm mpx rdt_a avx512f avx512dq rdseed adx smap clflushopt clwb intel_pt avx512cd avx512bw avx512vl xsaveopt xsavec xgetbv1 xsaves cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local dtherm ida arat pln pts hwp hwp_act_window hwp_epp hwp_pkg_req pku ospke avx512_vnni md_clear flush_l1d arch_capabilities
Virtualization: VT-x
L1d cache: 1.3 MiB (40 instances)
L1i cache: 1.3 MiB (40 instances)
L2 cache: 40 MiB (40 instances)
L3 cache: 55 MiB (2 instances)
NUMA node(s): 2
NUMA node0 CPU(s): 0-19,40-59
NUMA node1 CPU(s): 20-39,60-79
Vulnerability Itlb multihit: KVM: Mitigation: VMX disabled
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Mitigation; Clear CPU buffers; SMT vulnerable
Vulnerability Retbleed: Mitigation; Enhanced IBRS
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Enhanced IBRS, IBPB conditional, RSB filling, PBRSB-eIBRS SW sequence
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Mitigation; TSX disabled
Versions of relevant libraries:
[pip3] flake8==6.1.0
[pip3] mypy==1.5.1
[pip3] mypy-extensions==1.0.0
[pip3] numpy==1.26.0
[pip3] nvidia-cublas-cu12==12.4.5.8
[pip3] nvidia-cuda-cupti-cu12==12.4.127
[pip3] nvidia-cuda-nvrtc-cu12==12.4.127
[pip3] nvidia-cuda-runtime-cu12==12.4.127
[pip3] nvidia-cudnn-cu12==9.1.0.70
[pip3] nvidia-cufft-cu12==11.2.1.3
[pip3] nvidia-curand-cu12==10.3.5.147
[pip3] nvidia-cusolver-cu12==11.6.1.9
[pip3] nvidia-cusparse-cu12==12.3.1.170
[pip3] nvidia-nccl-cu12==2.21.5
[pip3] nvidia-nvjitlink-cu12==12.4.127
[pip3] nvidia-nvtx-cu12==12.4.127
[pip3] pytorch-lightning==2.0.9
[pip3] torch==2.5.1
[pip3] torchmetrics==1.2.0
[pip3] triton==3.1.0
[conda] numpy 2.1.2 py311h71ddf71_0 conda-forge
[conda] nvidia-cublas-cu12 12.4.5.8 pypi_0 pypi
[conda] nvidia-cuda-cupti-cu12 12.4.127 pypi_0 pypi
[conda] nvidia-cuda-nvrtc-cu12 12.4.127 pypi_0 pypi
[conda] nvidia-cuda-runtime-cu12 12.4.127 pypi_0 pypi
[conda] nvidia-cudnn-cu12 9.1.0.70 pypi_0 pypi
[conda] nvidia-cufft-cu12 11.2.1.3 pypi_0 pypi
[conda] nvidia-curand-cu12 10.3.5.147 pypi_0 pypi
[conda] nvidia-cusolver-cu12 11.6.1.9 pypi_0 pypi
[conda] nvidia-cusparse-cu12 12.3.1.170 pypi_0 pypi
[conda] nvidia-nccl-cu12 2.21.5 pypi_0 pypi
[conda] nvidia-nvjitlink-cu12 12.4.127 pypi_0 pypi
[conda] nvidia-nvtx-cu12 12.4.127 pypi_0 pypi
[conda] optree 0.13.0 pypi_0 pypi
[conda] torch 2.5.1+cu124 pypi_0 pypi
[conda] torchaudio 2.5.1+cu124 pypi_0 pypi
[conda] torchelastic 0.2.2 pypi_0 pypi
[conda] torchvision 0.20.1+cu124 pypi_0 pypi
[conda] triton 3.1.0 pypi_0 pypi
cc @seemethere @malfet @osalpekar @atalman @ptrblck @msaroufim @eqy
| true
|
2,770,295,254
|
[Inductor][CPP] Fix outer loop fusion buffer removed
|
leslie-fang-intel
|
closed
|
[
"open source",
"Merged",
"ciflow/trunk",
"topic: not user facing",
"module: inductor",
"ciflow/inductor"
] | 6
|
COLLABORATOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #144243
**Summary**
Fix issue: https://github.com/pytorch/pytorch/issues/144186. For the test case reported in the issue, we saw some nodes with the following `LoopNest`s:
- `LoopNest(loops=[LoopLevel(var=x0, size=8, offset=0, tiled_size=0, steps=1, parallel=0, simd_omp=False, simd_vec=False, collapsed=False, is_reduction=False), LoopLevel(var=x1, size=8, offset=0, tiled_size=0, steps=1, parallel=0, simd_omp=False, simd_vec=False, collapsed=False, is_reduction=True)], kernel=<torch._inductor.codegen.cpp.CppKernelProxy object at 0x7fc724426680>)`
- `LoopNest(loops=[LoopLevel(var=x0, size=8, offset=0, tiled_size=0, steps=16, parallel=0, simd_omp=False, simd_vec=True, collapsed=False, is_reduction=False), LoopLevel(var=x1, size=8, offset=0, tiled_size=0, steps=16, parallel=0, simd_omp=False, simd_vec=True, collapsed=False, is_reduction=True)], kernel=<torch._inductor.codegen.cpp.CppKernelProxy object at 0x7fc75c2cae60>)`
Although these 2 `LoopNest`s have the same `range` and `var`, they have different `steps` (1 and 16), so they will fail to be merged with outer loops. And since we already removed the global buffers when localizing the buffer, we need to restore the status of `V.graph.removed_buffers` before falling back to codegen without outer loop fusion, as sketched below.
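To make the fallback concrete, here is a minimal sketch of the snapshot-and-restore pattern, assuming `removed_buffers` behaves like a set; the two codegen helpers and the exception are placeholders standing in for the real Inductor code paths, not the actual implementation.
```python
# Illustrative only: stand-ins for V.graph.removed_buffers and the codegen paths.
class OuterLoopFusionFailed(Exception):
    pass

def try_outer_loop_fusion(removed_buffers):
    removed_buffers.add("buf0")       # localization drops a global buffer
    raise OuterLoopFusionFailed       # steps mismatch (1 vs 16) -> loops cannot be merged

def codegen_without_fusion(removed_buffers):
    return f"plain codegen, removed_buffers={sorted(removed_buffers)}"

def codegen(removed_buffers):
    saved = set(removed_buffers)      # snapshot before attempting outer loop fusion
    try:
        return try_outer_loop_fusion(removed_buffers)
    except OuterLoopFusionFailed:
        removed_buffers.clear()       # restore the pre-fusion status
        removed_buffers.update(saved)
        return codegen_without_fusion(removed_buffers)

print(codegen(set()))                 # -> plain codegen, removed_buffers=[]
```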
**Test Plan**
```
python -u -m pytest -s -v test/inductor/test_cpu_repro.py -k test_outer_loop_fusion_buffer_remove
```
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @ColinPeppler @amjames @desertfire @chauhang @aakhundov
| true
|
2,770,268,197
|
Exporting the operator 'aten::_transformer_encoder_layer_fwd' to ONNX opset version 17 is not supported
|
jayakommuru
|
open
|
[
"module: onnx",
"triaged"
] | 24
|
NONE
|
### 🐛 Describe the bug
I am using the following code to convert a torch model (torch.nn.Module) to ONNX, but I am getting `Exporting the operator 'aten::_transformer_encoder_layer_fwd' to ONNX opset version 17 is not supported`:
```
input_names = ["input__0", "input__1", "input__2"]
output_names = ["output__0"]
with torch.no_grad():
torch.onnx.export(model_sigmoid, (entity_emb_1, entity_emb_2, raw_data),
f="/tmp/rough.onnx",
verbose=False,
do_constant_folding=True,
input_names=input_names,
output_names=output_names,
export_params=True,
dynamic_axes={'input__0' : {0 : 'batch_size'},
'input__1' : {0 : 'batch_size'},
'input__2' : {0 : 'batch_size'},
'output__0' : {0 : 'batch_size'}}
)
```
I have also tried the dynamo export by setting `dynamo=True` in the above code, but it did not help; I get the following error with dynamo:
```
ConstraintViolationError: Constraints violated (batch_size)! For more information, run with TORCH_LOGS="+dynamic".
- Not all values of batch_size = L['entity_emb_1'].size()[0] in the specified range are valid because batch_size was inferred to be a constant (1).
- Not all values of batch_size = L['entity_emb_2'].size()[0] in the specified range are valid because batch_size was inferred to be a constant (1).
- Not all values of batch_size = L['data'].size()[0] in the specified range are valid because batch_size was inferred to be a constant (1).
Suggested fixes:
batch_size = 1
During handling of the above exception, another exception occurred:
```
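A possible workaround is to disable the transformer fast path before exporting so the encoder decomposes into ops the exporter can usually handle. Below is only a hedged sketch with a toy model (not the reporter's `model_sigmoid`), and it assumes `torch.backends.mha.set_fastpath_enabled` is available in this PyTorch build.
```python
import torch
import torch.nn as nn

# Toy model just to illustrate the export; the real model and inputs differ.
layer = nn.TransformerEncoderLayer(d_model=16, nhead=2, batch_first=True)
toy_model = nn.TransformerEncoder(layer, num_layers=1).eval()
example = torch.randn(2, 4, 16)

torch.backends.mha.set_fastpath_enabled(False)  # avoid aten::_transformer_encoder_layer_fwd (assumed API)
with torch.no_grad():
    torch.onnx.export(
        toy_model, (example,), "/tmp/encoder_no_fastpath.onnx",
        input_names=["input__0"], output_names=["output__0"],
        dynamic_axes={"input__0": {0: "batch_size"}},
    )
```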
### Versions
Collecting environment information...
PyTorch version: 2.5.1+cu124
Is debug build: False
CUDA used to build PyTorch: 12.4
ROCM used to build PyTorch: N/A
OS: Ubuntu 22.04.5 LTS (x86_64)
GCC version: (Ubuntu 11.4.0-1ubuntu1~22.04) 11.4.0
Clang version: Could not collect
CMake version: version 3.28.1
Libc version: glibc-2.35
Python version: 3.10.12 (main, Sep 11 2024, 15:47:36) [GCC 11.4.0] (64-bit runtime)
Python platform: Linux-6.1.100+-x86_64-with-glibc2.35
Is CUDA available: True
CUDA runtime version: Could not collect
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration: GPU 0: NVIDIA A100-SXM4-40GB
Nvidia driver version: 535.183.01
cuDNN version: Probably one of the following:
/usr/lib/x86_64-linux-gnu/libcudnn.so.8.9.0
/usr/lib/x86_64-linux-gnu/libcudnn_adv_infer.so.8.9.0
/usr/lib/x86_64-linux-gnu/libcudnn_adv_train.so.8.9.0
/usr/lib/x86_64-linux-gnu/libcudnn_cnn_infer.so.8.9.0
/usr/lib/x86_64-linux-gnu/libcudnn_cnn_train.so.8.9.0
/usr/lib/x86_64-linux-gnu/libcudnn_ops_infer.so.8.9.0
/usr/lib/x86_64-linux-gnu/libcudnn_ops_train.so.8.9.0
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 46 bits physical, 48 bits virtual
Byte Order: Little Endian
CPU(s): 12
On-line CPU(s) list: 0-11
Vendor ID: GenuineIntel
Model name: Intel(R) Xeon(R) CPU @ 2.20GHz
CPU family: 6
Model: 85
Thread(s) per core: 2
Core(s) per socket: 6
Socket(s): 1
Stepping: 7
BogoMIPS: 4400.39
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ss ht syscall nx pdpe1gb rdtscp lm constant_tsc rep_good nopl xtopology nonstop_tsc cpuid tsc_known_freq pni pclmulqdq ssse3 fma cx16 pcid sse4_1 sse4_2 x2apic movbe popcnt aes xsave avx f16c rdrand hypervisor lahf_lm abm 3dnowprefetch invpcid_single ssbd ibrs ibpb stibp ibrs_enhanced fsgsbase tsc_adjust bmi1 hle avx2 smep bmi2 erms invpcid rtm mpx avx512f avx512dq rdseed adx smap clflushopt clwb avx512cd avx512bw avx512vl xsaveopt xsavec xgetbv1 xsaves arat avx512_vnni md_clear arch_capabilities
Hypervisor vendor: KVM
Virtualization type: full
L1d cache: 192 KiB (6 instances)
L1i cache: 192 KiB (6 instances)
L2 cache: 6 MiB (6 instances)
L3 cache: 38.5 MiB (1 instance)
NUMA node(s): 1
NUMA node0 CPU(s): 0-11
Vulnerability Gather data sampling: Not affected
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Vulnerable: Clear CPU buffers attempted, no microcode; SMT Host state unknown
Vulnerability Reg file data sampling: Not affected
Vulnerability Retbleed: Mitigation; Enhanced IBRS
Vulnerability Spec rstack overflow: Not affected
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Enhanced / Automatic IBRS; IBPB conditional; RSB filling; PBRSB-eIBRS SW sequence; BHI SW loop, KVM SW loop
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Vulnerable: Clear CPU buffers attempted, no microcode; SMT Host state unknown
Versions of relevant libraries:
[pip3] mypy-extensions==0.4.3
[pip3] numpy==1.23.5
[pip3] nvidia-cublas-cu12==12.4.5.8
[pip3] nvidia-cuda-cupti-cu12==12.4.127
[pip3] nvidia-cuda-nvrtc-cu12==12.4.127
[pip3] nvidia-cuda-runtime-cu12==12.4.127
[pip3] nvidia-cudnn-cu12==9.1.0.70
[pip3] nvidia-cufft-cu12==11.2.1.3
[pip3] nvidia-curand-cu12==10.3.5.147
[pip3] nvidia-cusolver-cu12==11.6.1.9
[pip3] nvidia-cusparse-cu12==12.3.1.170
[pip3] nvidia-nccl-cu12==2.21.5
[pip3] nvidia-nvjitlink-cu12==12.4.127
[pip3] nvidia-nvtx-cu12==12.4.127
[pip3] onnx==1.17.0
[pip3] onnxscript==0.1.0.dev20250106
[pip3] torch==2.5.1
[pip3] torcheval==0.0.7
[pip3] torchlars==0.1.2
[pip3] torchmetrics==1.6.1
[pip3] torchvision==0.15.2+cu118
[pip3] triton==3.1.0
[conda] Could not collect
| true
|
2,770,237,255
|
Core dumped happens when avg_pool1d with torch.compile receives uint tensor with specific tensor shape
|
maybeLee
|
open
|
[
"triaged",
"oncall: pt2",
"module: inductor"
] | 7
|
CONTRIBUTOR
|
### 🐛 Describe the bug
Program crashes when running the following code:
```python
# toy.py
import torch
torch.random.manual_seed(0)
input = torch.randn(1,8,1).to(torch.uint8)
kernel_size = 4
stride = 46
padding = 2
f = torch.nn.functional.avg_pool1d
cf = torch.compile(f)
cf(input,kernel_size,stride,padding)
```
The error message is non-deterministic (it may be related to my system state).
```shell
>>> python toy.py
free(): invalid next size (fast)
Aborted (core dumped)
>>> python toy.py
corrupted size vs. prev_size
Aborted (core dumped)
>>> python toy.py
free(): invalid pointer
Aborted (core dumped)
>>> python toy.py
free(): invalid next size (fast)
Aborted (core dumped)
```
- This issue only occurs on CPU. The code raises a regular exception (`BackendCompilerFailed`) instead of crashing if I run it on GPU.
### Versions
Collecting environment information...
PyTorch version: 2.6.0a0+gitace645a
Is debug build: False
CUDA used to build PyTorch: 12.4
ROCM used to build PyTorch: N/A
OS: Ubuntu 22.04.4 LTS (x86_64)
GCC version: (Ubuntu 11.4.0-1ubuntu1~22.04) 11.4.0
Clang version: Could not collect
CMake version: version 3.31.2
Libc version: glibc-2.35
Python version: 3.11.10 | packaged by conda-forge | (main, Oct 16 2024, 01:27:36) [GCC 13.3.0] (64-bit runtime)
Python platform: Linux-5.14.0-427.37.1.el9_4.x86_64-x86_64-with-glibc2.35
Is CUDA available: True
CUDA runtime version: 12.4.131
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration:
GPU 0: NVIDIA GeForce RTX 3090
GPU 1: NVIDIA GeForce RTX 3090
GPU 2: NVIDIA GeForce RTX 3090
GPU 3: NVIDIA GeForce RTX 3090
Nvidia driver version: 560.35.03
cuDNN version: Could not collect
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 43 bits physical, 48 bits virtual
Byte Order: Little Endian
CPU(s): 64
On-line CPU(s) list: 0-63
Vendor ID: AuthenticAMD
Model name: AMD Ryzen Threadripper PRO 3975WX 32-Cores
CPU family: 23
Model: 49
Thread(s) per core: 2
Core(s) per socket: 32
Socket(s): 1
Stepping: 0
Frequency boost: enabled
CPU max MHz: 4368.1641
CPU min MHz: 2200.0000
BogoMIPS: 7000.73
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ht syscall nx mmxext fxsr_opt pdpe1gb rdtscp lm constant_tsc rep_good nopl nonstop_tsc cpuid extd_apicid aperfmperf rapl pni pclmulqdq monitor ssse3 fma cx16 sse4_1 sse4_2 x2apic movbe popcnt aes xsave avx f16c rdrand lahf_lm cmp_legacy svm extapic cr8_legacy abm sse4a misalignsse 3dnowprefetch osvw ibs skinit wdt tce topoext perfctr_core perfctr_nb bpext perfctr_llc mwaitx cpb cat_l3 cdp_l3 hw_pstate ssbd mba ibpb stibp vmmcall fsgsbase bmi1 avx2 smep bmi2 cqm rdt_a rdseed adx smap clflushopt clwb sha_ni xsaveopt xsavec xgetbv1 xsaves cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local clzero irperf xsaveerptr rdpru wbnoinvd amd_ppin arat npt lbrv svm_lock nrip_save tsc_scale vmcb_clean flushbyasid decodeassists pausefilter pfthreshold avic v_vmsave_vmload vgif v_spec_ctrl umip rdpid overflow_recov succor smca sev sev_es
Virtualization: AMD-V
L1d cache: 1 MiB (32 instances)
L1i cache: 1 MiB (32 instances)
L2 cache: 16 MiB (32 instances)
L3 cache: 128 MiB (8 instances)
NUMA node(s): 1
NUMA node0 CPU(s): 0-63
Vulnerability Gather data sampling: Not affected
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Not affected
Vulnerability Retbleed: Mitigation; untrained return thunk; SMT enabled with STIBP protection
Vulnerability Spec rstack overflow: Mitigation; Safe RET
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Retpolines, IBPB conditional, STIBP always-on, RSB filling, PBRSB-eIBRS Not affected
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Versions of relevant libraries:
[pip3] mypy==1.13.0
[pip3] mypy-extensions==1.0.0
[pip3] numpy==1.26.2
[pip3] nvidia-cublas-cu12==12.4.5.8
[pip3] nvidia-cuda-cupti-cu12==12.4.127
[pip3] nvidia-cuda-nvrtc-cu12==12.4.127
[pip3] nvidia-cuda-runtime-cu12==12.4.127
[pip3] nvidia-cudnn-cu12==9.1.0.70
[pip3] nvidia-cufft-cu12==11.2.1.3
[pip3] nvidia-curand-cu12==10.3.5.147
[pip3] nvidia-cusolver-cu12==11.6.1.9
[pip3] nvidia-cusparse-cu12==12.3.1.170
[pip3] nvidia-nccl-cu12==2.21.5
[pip3] nvidia-nvjitlink-cu12==12.4.127
[pip3] nvidia-nvtx-cu12==12.4.127
[pip3] onnx==1.17.0
[pip3] onnxscript==0.1.0.dev20240817
[pip3] optree==0.13.0
[pip3] torch==2.6.0a0+gitace645a
[pip3] triton==3.1.0
[conda] numpy 1.26.2 pypi_0 pypi
[conda] nvidia-cublas-cu12 12.4.5.8 pypi_0 pypi
[conda] nvidia-cuda-cupti-cu12 12.4.127 pypi_0 pypi
[conda] nvidia-cuda-nvrtc-cu12 12.4.127 pypi_0 pypi
[conda] nvidia-cuda-runtime-cu12 12.4.127 pypi_0 pypi
[conda] nvidia-cudnn-cu12 9.1.0.70 pypi_0 pypi
[conda] nvidia-cufft-cu12 11.2.1.3 pypi_0 pypi
[conda] nvidia-curand-cu12 10.3.5.147 pypi_0 pypi
[conda] nvidia-cusolver-cu12 11.6.1.9 pypi_0 pypi
[conda] nvidia-cusparse-cu12 12.3.1.170 pypi_0 pypi
[conda] nvidia-nccl-cu12 2.21.5 pypi_0 pypi
[conda] nvidia-nvjitlink-cu12 12.4.127 pypi_0 pypi
[conda] nvidia-nvtx-cu12 12.4.127 pypi_0 pypi
[conda] optree 0.13.0 pypi_0 pypi
[conda] torch 2.6.0a0+gitf611e8c pypi_0 pypi
[conda] triton 3.1.0 pypi_0 pypi
cc @chauhang @penguinwu @voznesenskym @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @ColinPeppler @amjames @desertfire @aakhundov @BoyuanFeng @gujinghui @fengyuan14 @guangyey
| true
|
2,770,148,047
|
[Intel GPU] add tf32 support for matmul on XPU
|
jianyizh
|
open
|
[
"module: cpu",
"triaged",
"open source",
"topic: not user facing",
"release notes: xpu"
] | 32
|
CONTRIBUTOR
|
Support XPU tf32 matmul using `torch.backends.mkldnn.allow_tf32`; we will discuss in the future whether we need a new API to control matmul only.
~~Support xpu tf32 matmul using torch.set_float32_matmul_precision. For conv, check https://github.com/pytorch/pytorch/pull/137570
We decide not following torch.backends.cuda.matmul.allow_tf32 because this API actually calls setAllowTF32CuBLAS to set matmul_precison to high. We also avoid other related tf32 changes (i.e. in inductor) by not introducing new API.~~
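As a rough usage sketch (assuming an XPU-enabled build and that this PR's behavior lands as described), matmuls would pick up TF32 via the existing mkldnn flag:
```python
import torch

# Sketch only: requires a PyTorch build with XPU support and this change applied.
torch.backends.mkldnn.allow_tf32 = True   # existing flag, reused here to also cover XPU matmul

a = torch.randn(1024, 1024, device="xpu")
b = torch.randn(1024, 1024, device="xpu")
c = a @ b                                 # expected to run as a TF32 matmul under this PR
```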
cc @jgong5 @mingfeima @XiaobingSuper @sanchitintel @ashokei @jingxu10 @jerryzh168 @gujinghui @EikanWang @fengyuan14 @guangyey
| true
|
2,770,124,096
|
Integrate ULFM and FTI for OpenMPI (torch.distributed)
|
Belegkarnil
|
open
|
[
"oncall: distributed",
"feature"
] | 1
|
NONE
|
### 🚀 The feature, motivation and pitch
I am working on a distributed framework and I want to be able to manage node failures. Is it possible to integrate the use of the _User Level Failure Mitigation_ ([ULFM](https://github.com/ICLDisco/ulfm-testing)) and _Fault Tolerance Interface_ ([FTI](https://github.com/leobago/fti)) libraries, please?
### Alternatives
Create a manual checkpoint with `torch.save` or with Berkeley Lab Checkpoint/Restart (BLCR).
### Additional context
_No response_
cc @H-Huang @awgu @kwen2501 @wanchaol @fegin @fduwjj @wz337 @wconstab @d4l3k @c-p-i-o
| true
|
2,770,074,566
|
[Break XPU][Inductor UT] Remove excepted failure for aoti test_fft_c2c
|
etaf
|
closed
|
[
"open source",
"Merged",
"ciflow/trunk",
"topic: not user facing",
"module: inductor",
"ciflow/inductor",
"ciflow/xpu"
] | 3
|
COLLABORATOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #144238
Since #143223 enabled runtime dispatch for fft_c2c in AOTI mode, on XPU we can now fall back fft_c2c, which has no XPU implementation, to CPU and pass the case.
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @ColinPeppler @amjames @desertfire @chauhang @aakhundov
| true
|
2,770,045,309
|
[Profiler] Fix device setting error of other backends in torch.profiler
|
fmo-mt
|
closed
|
[
"open source",
"oncall: profiler",
"Merged",
"ciflow/trunk",
"release notes: profiler"
] | 8
|
CONTRIBUTOR
|
In the earlier implementation, if `self.use_device != "cuda"` and `device is None`, we would get `device = "cpu"` from line 401, which is not as expected.
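As a rough before/after sketch of the device resolution described above (placeholder function names, not the actual patch):
```python
# Illustrative only; `use_device` / `device` mirror the profiler attributes
# mentioned above, and the "after" behavior is an assumption about the fix.
def resolve_device_before(use_device, device):
    # old behavior: any backend other than CUDA silently falls back to "cpu"
    return device if device is not None else "cpu"

def resolve_device_after(use_device, device):
    # intended behavior: fall back to the configured backend instead
    return device if device is not None else (use_device or "cpu")

print(resolve_device_before("xpu", None))  # "cpu" -> wrong for other backends
print(resolve_device_after("xpu", None))   # "xpu"
```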
cc @robieta @chaekit @guotuofeng @guyang3532 @dzhulgakov @davidberard98 @briancoutinho @sraikund16 @sanrise
| true
|
2,770,042,945
|
Update slow tests
|
pytorchupdatebot
|
closed
|
[
"open source",
"Merged",
"ciflow/trunk",
"topic: not user facing",
"ciflow/slow",
"ci-no-td"
] | 3
|
COLLABORATOR
|
This PR is auto-generated weekly by [this action](https://github.com/pytorch/pytorch/blob/main/.github/workflows/weekly.yml).
Update the list of slow tests.
| true
|
2,769,996,663
|
[TreeSpec] Support enum in defaultdict
|
henryhu6
|
closed
|
[
"fb-exported",
"Merged",
"ciflow/trunk",
"topic: not user facing",
"module: pytree"
] | 8
|
CONTRIBUTOR
|
Summary: Follow-up to D66269157; add support for enum in defaultdict.
Test Plan: Added unit test
Differential Revision: D67832100
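A rough illustration of the case this covers, assuming the serialization round-trip below is the path the diff exercises:
```python
import collections
import enum
import torch.utils._pytree as pytree

class Color(enum.Enum):
    RED = 1
    BLUE = 2

dd = collections.defaultdict(list, {Color.RED: [1, 2], Color.BLUE: [3]})
leaves, spec = pytree.tree_flatten(dd)
dumped = pytree.treespec_dumps(spec)   # assumption: this is the step that previously failed for enum keys
restored = pytree.tree_unflatten(leaves, pytree.treespec_loads(dumped))
print(restored)                        # defaultdict with the enum keys preserved
```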
cc @zou3519 @XuehaiPan
| true
|
2,769,883,522
|
increase max value for triton block check
|
mengluy0125
|
closed
|
[
"fb-exported",
"Stale",
"module: inductor",
"ciflow/inductor",
"release notes: inductor"
] | 4
|
CONTRIBUTOR
|
Differential Revision: D67844450
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @ColinPeppler @amjames @desertfire @chauhang @aakhundov
| true
|
2,769,847,614
|
remove allow-untyped-defs from torch/_prims/executor.py
|
bobrenjc93
|
closed
|
[
"Merged",
"ciflow/trunk",
"topic: not user facing"
] | 4
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #144233
| true
|
2,769,847,557
|
remove allow-untyped-defs from ao/nn/sparse/quantized/utils.py
|
bobrenjc93
|
closed
|
[
"Merged",
"ciflow/trunk",
"topic: not user facing",
"release notes: AO frontend"
] | 6
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #144233
* __->__ #144232
| true
|
2,769,847,486
|
remove allow-untyped-defs from torch/nn/utils/_deprecation_utils.py
|
bobrenjc93
|
closed
|
[
"Merged",
"ciflow/trunk",
"topic: not user facing"
] | 12
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #144231
| true
|
2,769,847,424
|
remove allow-untyped-defs from torch/export/_remove_auto_functionalized_pass.py
|
bobrenjc93
|
closed
|
[
"Merged",
"ciflow/trunk",
"topic: not user facing",
"release notes: export"
] | 15
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #144230
| true
|
2,769,798,593
|
Issues with onnx package for pytorch build in WIndows 11
|
jenetscaria-mcw
|
open
|
[
"module: onnx",
"module: build",
"module: windows",
"triaged"
] | 2
|
NONE
|
### 🐛 Describe the bug
PyTorch 2.5.1 built with clang fails on Windows 11 with unknown type name errors for various ONNX submodules.
```
C:/Users/mcw/Documents/Jenet/pytorch/third_party/onnx/onnx/checker.h:148:27: error: unknown type name 'SequenceProto'
148 | void check_sequence(const SequenceProto& sequence, const CheckerContext&);
| ^
C:/Users/mcw/Documents/Jenet/pytorch/third_party/onnx/onnx/checker.h:149:22: error: unknown type name 'MapProto'
149 | void check_map(const MapProto& map, const CheckerContext&);
| ^
C:/Users/mcw/Documents/Jenet/pytorch/third_party/onnx/onnx/checker.h:150:27: error: unknown type name 'OptionalProto'
150 | void check_optional(const OptionalProto& opt, const CheckerContext&);
| ^
```
Versions:
Pytorch : 2.5.1
Clang: 19.1.3
ONNX module: 1.16.2
### Versions
```
Collecting environment information...
PyTorch version: N/A
Is debug build: N/A
CUDA used to build PyTorch: N/A
ROCM used to build PyTorch: N/A
OS: Microsoft Windows 11 Home (10.0.22631 64-bit)
GCC version: (x86_64-win32-seh-rev0, Built by MinGW-Builds project) 14.2.0
Clang version: 19.1.3 (https://github.com/llvm/llvm-project.git ab51eccf88f5321e7c60591c5546b254b6afab99)
CMake version: version 3.31.0-rc2
Libc version: N/A
Python version: 3.11.7 | packaged by Anaconda, Inc. | (main, Dec 15 2023, 18:05:47) [MSC v.1916 64 bit (AMD64)] (64-bit runtime)
Python platform: Windows-10-10.0.22631-SP0
Is CUDA available: N/A
CUDA runtime version: Could not collect
CUDA_MODULE_LOADING set to: N/A
GPU models and configuration: Could not collect
Nvidia driver version: Could not collect
cuDNN version: Could not collect
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: N/A
CPU:
Name: AMD Ryzen 5 9600X 6-Core Processor
Manufacturer: AuthenticAMD
Family: 107
Architecture: 9
ProcessorType: 3
DeviceID: CPU0
CurrentClockSpeed: 3900
MaxClockSpeed: 3900
L2CacheSize: 6144
L2CacheSpeed: None
Revision: 17408
Versions of relevant libraries:
[pip3] flake8==6.0.0
[pip3] mypy==1.8.0
[pip3] mypy-extensions==1.0.0
[pip3] numpy==1.24.4
[pip3] numpydoc==1.5.0
[pip3] onnx==1.16.0
[pip3] optree==0.13.1
[pip3] torch==2.1.2
[conda] _anaconda_depends 2024.02 py311_mkl_1
[conda] blas 1.0 mkl
[conda] mkl 2023.1.0 h6b88ed4_46358
[conda] mkl-include 2025.0.1 pypi_0 pypi
[conda] mkl-service 2.4.0 py311h2bbff1b_1
[conda] mkl-static 2025.0.1 pypi_0 pypi
[conda] mkl_fft 1.3.8 py311h2bbff1b_0
[conda] mkl_random 1.2.4 py311h59b6b97_0
[conda] numpy 1.24.4 pypi_0 pypi
[conda] numpydoc 1.5.0 py311haa95532_0
[conda] optree 0.13.1 pypi_0 pypi
[conda] torch 2.1.2 pypi_0 pypi
```
cc @malfet @seemethere @peterjc123 @mszhanyi @skyline75489 @nbcsm @iremyux @Blackhex
| true
|
2,769,759,041
|
grad_fn function disobeys broadcast rules
|
Girafboy
|
open
|
[
"module: autograd",
"triaged",
"needs research"
] | 0
|
NONE
|
### grad_fn function disobeys broadcast rules
In the following code, `z.grad_fn` is `MulBackward0`. It should be the inverse of multiplication. However, the shapes of `x` and `x_` differ.
```
import torch
x = torch.randn(2, 1, requires_grad=True)
y = torch.randn(2, 3, requires_grad=True)
z = x * y
x_, y_ = z.grad_fn(z)
print(x_.shape, y_.shape) # Output: torch.Size([2, 3]) torch.Size([2, 3])
```
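For comparison, when the same graph is differentiated through the autograd API, the gradient for the broadcast input is reduced back to its original shape; calling the node directly appears to skip that reduction step:
```python
import torch

x = torch.randn(2, 1, requires_grad=True)
y = torch.randn(2, 3, requires_grad=True)
z = x * y

# Going through the engine sums the broadcast dimension back down for x.
gx, gy = torch.autograd.grad(z, (x, y), grad_outputs=torch.ones_like(z))
print(gx.shape, gy.shape)  # torch.Size([2, 1]) torch.Size([2, 3])
```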
### Versions
Collecting environment information...
PyTorch version: 2.5.1+cu124
Is debug build: False
CUDA used to build PyTorch: 12.4
ROCM used to build PyTorch: N/A
OS: Rocky Linux release 8.10 (Green Obsidian) (x86_64)
GCC version: (GCC) 8.5.0 20210514 (Red Hat 8.5.0-22)
Clang version: Could not collect
CMake version: version 3.26.5
Libc version: glibc-2.28
Python version: 3.11.10 (main, Nov 5 2024, 07:57:54) [GCC 8.5.0 20210514 (Red Hat 8.5.0-22)] (64-bit runtime)
Python platform: Linux-4.18.0-553.27.1.el8_10.x86_64-x86_64-with-glibc2.28
Is CUDA available: False
CUDA runtime version: No CUDA
CUDA_MODULE_LOADING set to: N/A
GPU models and configuration: No CUDA
Nvidia driver version: No CUDA
cuDNN version: No CUDA
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Byte Order: Little Endian
CPU(s): 128
On-line CPU(s) list: 0-127
Thread(s) per core: 1
Core(s) per socket: 64
Socket(s): 2
NUMA node(s): 8
Vendor ID: AuthenticAMD
CPU family: 25
Model: 1
Model name: AMD EPYC 7763 64-Core Processor
Stepping: 1
CPU MHz: 3243.397
BogoMIPS: 4890.99
Virtualization: AMD-V
L1d cache: 32K
L1i cache: 32K
L2 cache: 512K
L3 cache: 32768K
NUMA node0 CPU(s): 0-15
NUMA node1 CPU(s): 16-31
NUMA node2 CPU(s): 32-47
NUMA node3 CPU(s): 48-63
NUMA node4 CPU(s): 64-79
NUMA node5 CPU(s): 80-95
NUMA node6 CPU(s): 96-111
NUMA node7 CPU(s): 112-127
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ht syscall nx mmxext fxsr_opt pdpe1gb rdtscp lm constant_tsc rep_good nopl nonstop_tsc cpuid extd_apicid aperfmperf pni pclmulqdq monitor ssse3 fma cx16 pcid sse4_1 sse4_2 movbe popcnt aes xsave avx f16c rdrand lahf_lm cmp_legacy svm extapic cr8_legacy abm sse4a misalignsse 3dnowprefetch osvw ibs skinit wdt tce topoext perfctr_core perfctr_nb bpext perfctr_llc mwaitx cpb cat_l3 cdp_l3 invpcid_single hw_pstate ssbd mba ibrs ibpb stibp vmmcall fsgsbase bmi1 avx2 smep bmi2 erms invpcid cqm rdt_a rdseed adx smap clflushopt clwb sha_ni xsaveopt xsavec xgetbv1 xsaves cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local clzero irperf xsaveerptr wbnoinvd amd_ppin brs arat npt lbrv svm_lock nrip_save tsc_scale vmcb_clean flushbyasid decodeassists pausefilter pfthreshold v_vmsave_vmload vgif v_spec_ctrl umip pku ospke vaes vpclmulqdq rdpid overflow_recov succor smca fsrm
Versions of relevant libraries:
[pip3] numpy==2.1.3
[pip3] nvidia-cublas-cu12==12.4.5.8
[pip3] nvidia-cuda-cupti-cu12==12.4.127
[pip3] nvidia-cuda-nvrtc-cu12==12.4.127
[pip3] nvidia-cuda-runtime-cu12==12.4.127
[pip3] nvidia-cudnn-cu12==9.1.0.70
[pip3] nvidia-cufft-cu12==11.2.1.3
[pip3] nvidia-curand-cu12==10.3.5.147
[pip3] nvidia-cusolver-cu12==11.6.1.9
[pip3] nvidia-cusparse-cu12==12.3.1.170
[pip3] nvidia-nccl-cu12==2.21.5
[pip3] nvidia-nvjitlink-cu12==12.4.127
[pip3] nvidia-nvtx-cu12==12.4.127
[pip3] onnx==1.17.0
[pip3] torch==2.5.1
[pip3] torchvision==0.20.1
[pip3] triton==3.1.0
[conda] No relevant packages
cc @ezyang @albanD @gqchen @pearu @nikitaved @soulitzer @Varal7 @xmfan
| true
|
2,769,738,907
|
[mps/inductor] MPSBasicTests.test_max_min fails on macOS
|
dcci
|
closed
|
[
"needs reproduction",
"triaged",
"module: mps",
"module: inductor"
] | 6
|
MEMBER
|
### 🐛 Describe the bug
Log:
```
% python test/inductor/test_mps_basic.py MPSBasicTests.test_max_min
/Users/davidino/pytorch/pytorch/torch/utils/_config_module.py:440: UserWarning: Skipping serialization of skipfiles_inline_module_allowlist value {}
warnings.warn(
Finline_call []
stats [('calls_captured', 6), ('unique_graphs', 3)]
inductor [('fxgraph_cache_miss', 2), ('fxgraph_cache_hit', 1)]
aot_autograd [('total', 3), ('ok', 3), ('autograd_cache_miss', 2), ('autograd_cache_saved', 2), ('autograd_cache_hit', 1)]
======================================================================
FAIL: test_max_min (__main__.MPSBasicTests.test_max_min)
----------------------------------------------------------------------
Traceback (most recent call last):
File "/Users/davidino/pytorch/pytorch/torch/testing/_internal/common_utils.py", line 3114, in wrapper
method(*args, **kwargs)
File "/Users/davidino/pytorch/pytorch/test/inductor/test_torchinductor.py", line 1401, in test_max_min
self.common(fn, (t1, t2))
File "/opt/homebrew/anaconda3/lib/python3.12/contextlib.py", line 81, in inner
return func(*args, **kwds)
^^^^^^^^^^^^^^^^^^^
File "/Users/davidino/pytorch/pytorch/test/inductor/test_torchinductor.py", line 619, in check_model_gpu
check_model(
File "/Users/davidino/pytorch/pytorch/test/inductor/test_torchinductor.py", line 501, in check_model
self.assertEqual(
File "/Users/davidino/pytorch/pytorch/torch/testing/_internal/common_utils.py", line 4022, in assertEqual
raise error_metas.pop()[0].to_error(
AssertionError: Tensor-likes are not close!
Mismatched elements: 2 / 8 (25.0%)
Greatest absolute difference: nan at index (0,) (up to 1e-05 allowed)
Greatest relative difference: nan at index (0,) (up to 1.3e-06 allowed)
The failure occurred for item [0]
To execute this test, run the following from the base repo dir:
python test/inductor/test_mps_basic.py MPSBasicTests.test_max_min
This message can be suppressed by setting PYTORCH_PRINT_REPRO_ON_FAILURE=0
----------------------------------------------------------------------
Ran 1 test in 0.941s
FAILED (failures=1)
```
It passed yesterday so it must be a recent regression, I think. I can bisect.
### Versions
Collecting environment information...
PyTorch version: N/A
Is debug build: N/A
CUDA used to build PyTorch: N/A
ROCM used to build PyTorch: N/A
OS: macOS 15.1.1 (arm64)
GCC version: Could not collect
Clang version: 16.0.0 (clang-1600.0.26.4)
CMake version: Could not collect
Libc version: N/A
Python version: 3.10.9 (v3.10.9:1dd9be6584, Dec 6 2022, 14:37:36) [Clang 13.0.0 (clang-1300.0.29.30)] (64-bit runtime)
Python platform: macOS-15.1.1-arm64-arm-64bit
Is CUDA available: N/A
CUDA runtime version: Could not collect
CUDA_MODULE_LOADING set to: N/A
GPU models and configuration: Could not collect
Nvidia driver version: Could not collect
cuDNN version: Could not collect
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: N/A
CPU:
Apple M1 Pro
Versions of relevant libraries:
[pip3] flake8==6.1.0
[pip3] flake8-bugbear==23.3.23
[pip3] flake8-comprehensions==3.15.0
[pip3] flake8-executable==2.1.3
[pip3] flake8-logging-format==0.9.0
[pip3] flake8-pyi==23.3.1
[pip3] flake8-simplify==0.19.3
[pip3] mypy==1.13.0
[pip3] mypy-extensions==1.0.0
[pip3] numpy==1.26.4
[pip3] optree==0.13.0
[conda] Could not collect
cc @kulinseth @albanD @malfet @DenisVieriu97 @jhavukainen @chauhang @penguinwu @voznesenskym @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @ColinPeppler @amjames @desertfire @aakhundov
| true
|
2,769,723,124
|
Enable bugprone-unchecked-optional-access
|
cyyever
|
closed
|
[
"oncall: distributed",
"triaged",
"open source",
"Merged",
"ciflow/trunk",
"release notes: cpp"
] | 6
|
COLLABORATOR
|
We can actually enable bugprone-unchecked-optional-access without the risk of a hang.
cc @H-Huang @awgu @kwen2501 @wanchaol @fegin @fduwjj @wz337 @wconstab @d4l3k @c-p-i-o
| true
|