| id (int64, 2.74B-3.05B) | title (string, 1-255 chars) | user (string, 2-26 chars) | state (2 classes) | labels (list, 0-24 items) | comments (int64, 0-206) | author_association (4 classes) | body (string, 7-62.5k chars, nullable) | is_title (bool, 1 class) |
|---|---|---|---|---|---|---|---|---|
2,869,337,941
|
torch.sort: Optimize memory usage with (dtype_indices: ScalarType, dynamic_indices_dtype: bool) options
|
voidbag
|
open
|
[
"module: cpu",
"triaged",
"open source",
"release notes: mps",
"module: inductor"
] | 14
|
NONE
|
Fixes #147628
The indices returned by torch.sort always use the static dtype kLong (64-bit), which consumes excessive memory.
This PR reduces memory usage by determining the dtype of the indices dynamically (a sketch of the selection logic follows the list below).
The dtype is one of Byte, UInt16, UInt32, or UInt64.
- ~This PR makes at::arange support uint16, uint32 and uint64 (`at::arange( uint(16|32|64) )->uint(16|32|64)`)~
- ~It also changes existing behavior of at::arange, `at::arange(uint8)->int64` into `at::arange(uint8)->uint8`~
- This PR makes at::linspace support uint16, uint32 and uint64
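A minimal sketch of the dtype selection described above (the helper name is hypothetical; the PR presumably implements this in C++ inside the sort path):
```python
import torch

def smallest_index_dtype(dim_size: int) -> torch.dtype:
    """Hypothetical helper: the narrowest unsigned dtype able to index dim_size elements.

    Assumes torch.uint16/uint32/uint64 (available with limited support in recent
    PyTorch releases) are acceptable index dtypes, as proposed above.
    """
    if dim_size <= 2**8:
        return torch.uint8
    if dim_size <= 2**16:
        return torch.uint16
    if dim_size <= 2**32:
        return torch.uint32
    return torch.uint64

# Sorting along a dimension of size 10,677 only needs indices up to 10,676,
# so uint16 suffices instead of the default int64.
print(smallest_index_dtype(10_677))  # torch.uint16
```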
cc @jgong5 @mingfeima @XiaobingSuper @sanchitintel @ashokei @jingxu10 @voznesenskym @penguinwu @EikanWang @Guobing-Chen @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
| true
|
2,869,324,770
|
Optimize memory usage of torch.sort significantly, with dynamic dtype indices
|
voidbag
|
open
|
[
"triaged",
"enhancement",
"module: python frontend"
] | 3
|
NONE
|
### 🚀 The feature, motivation and pitch
I optimized torch.sort to return indices with a dynamic dtype instead of the fixed **64-bit** torch.long.
This proposal can reduce GPU memory usage significantly.
Example:
A boolean matrix can represent a graph.
`ret = torch.sort(torch.zeros((69878, 10677), dtype=torch.bool, device="cuda:0"))`
Number of elements: 746,087,406
1. Existing torch return value:
- `ret = torch.sort(torch.zeros((69878, 10677), dtype=torch.bool, device="cuda:0"))`
- Peak GPU memory usage: **19,517MiB** (data + cache)
- Final GPU memory usage: **6,707MiB** (data)
- values: ((69878, 10677), dtype=torch.bool)
- indices: ((69878, 10677), dtype=**torch.long**)
2. Proposed result:
- `ret = torch.sort(torch.zeros((69878, 10677), dtype=torch.bool, device="cuda:0"), dynamic_indices_type=True)`
- Peak GPU memory usage: **6,709MiB** (data + cache)
- Final GPU memory usage: **2,437MiB** (data)
- values: ((69878, 10677), dtype=torch.bool)
- indices: ((69878, 10677), dtype=**torch.uint16**)
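A quick back-of-the-envelope check of the indices portion of those numbers (the values tensor is unchanged):
```python
# Indices-only memory for a (69878, 10677) sort result.
n = 69878 * 10677            # 746,087,406 elements
print(n * 8 / 2**20)         # int64 indices:  ~5,692 MiB
print(n * 2 / 2**20)         # uint16 indices: ~1,423 MiB
```
The ~4,269 MiB difference roughly matches the reported drop in final memory usage (6,707 MiB -> 2,437 MiB).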
### Alternatives
_No response_
### Additional context
_No response_
cc @albanD
| true
|
2,869,292,323
|
Enforce full FIPS compliance with hashlib - ruff rule S324 on python 3.9+
|
Skylion007
|
closed
|
[
"good first issue",
"module: lint",
"triaged",
"enhancement",
"actionable"
] | 0
|
COLLABORATOR
|
### 🚀 The feature, motivation and pitch
This is to more broadly address the issue described here. For compliance reasons, we need to add a special flag indicating that the hashlib function is not being used for cryptographic purposes. More details can be found here: https://github.com/pytorch/pytorch/issues/147236. In most builds of Python, this will be a no-op.
We can enforce FIPS compliance of hashlib usage codebase-wide pretty easily with ruff rule [S324](https://docs.astral.sh/ruff/rules/hashlib-insecure-hash-function/). We just need someone to go through and manually add usedforsecurity=False to all the hashlib calls in the codebase. The documentation for the rule doesn't mention the usedforsecurity suppression, so I opened an issue to improve the ruff docs: https://github.com/astral-sh/ruff/issues/16188
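For reference, the change the rule asks for is a one-argument addition at each non-cryptographic call site (available since Python 3.9):
```python
import hashlib

# Non-cryptographic use (e.g. a cache key): declaring it explicitly keeps
# FIPS-enabled Python builds from rejecting the call and silences S324.
digest = hashlib.md5(b"some-cache-key", usedforsecurity=False).hexdigest()
print(digest)
```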
Steps to resolve this issue:
* Enable S324 in the pyproject.toml for RUFF
* Scan the codebase for all flagged issues. Double-check that they are not in fact being used for a cryptographically secure application. If not, add usedforsecurity=False; if so, please flag it.
* Open a PR with the ruff rule enabled, and all the fixes linking back to this issue.
### Alternatives
Leave as is.
### Additional context
_No response_
| true
|
2,869,258,327
|
HIP error: invalid device function on ROCm RX 7600XT
|
JackBinary
|
closed
|
[
"module: binaries",
"module: rocm",
"triaged"
] | 5
|
NONE
|
### 🐛 Describe the bug
#### **Issue Summary**
When attempting to perform any GPU compute task using PyTorch with the ROCm/HIP backend, I encounter the following error:
```
RuntimeError: HIP error: invalid device function
HIP kernel errors might be asynchronously reported at some other API call, so the stacktrace below might be incorrect.
For debugging consider passing AMD_SERIALIZE_KERNEL=3
Compile with TORCH_USE_HIP_DSA to enable device-side assertions.
```
This occurs on the first attempt to allocate a tensor on the GPU.
#### **Minimal Reproducible Example (PyTorch)**
```python
import torch
print("PyTorch detects GPU:", torch.cuda.is_available())
device = torch.device("cuda")
print("Allocating tensors on GPU...")
a = torch.randn((1000, 1000), device=device, dtype=torch.float32)
b = torch.randn((1000, 1000), device=device, dtype=torch.float32)
print("Running matrix multiplication...")
result = torch.matmul(a, b)
torch.cuda.synchronize()
print("✅ PyTorch HIP execution successful!")
```
This fails immediately when attempting to allocate a tensor on the GPU.
#### **Full Stack Trace**
```
PyTorch detects GPU: True
Allocating tensors on GPU...
Traceback (most recent call last):
File "/home/jgarland/test_gpu.py", line 7, in <module>
a = torch.randn((1000, 1000), device=device, dtype=torch.float32)
RuntimeError: HIP error: invalid device function
HIP kernel errors might be asynchronously reported at some other API call, so the stacktrace below might be incorrect.
For debugging consider passing AMD_SERIALIZE_KERNEL=3
Compile with `TORCH_USE_HIP_DSA` to enable device-side assertions.
```
---
#### **Expected Behavior**
PyTorch should correctly allocate and compute tensors on the AMD GPU using the HIP backend.
#### **Actual Behavior**
- PyTorch detects the GPU (`torch.cuda.is_available()` returns `True`).
- The first attempt to allocate a tensor on the GPU results in an **invalid device function** error.
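One quick diagnostic (an assumption on my part, not verified here) is to check whether the installed wheel actually ships kernels for this GPU's architecture (gfx1102), since missing gfx targets are a common cause of `invalid device function`:
```python
import torch

# On ROCm builds this lists the gfx targets the binary was compiled for.
# If gfx1102 (or a compatible gfx11xx target) is absent, device kernels
# cannot be resolved for the RX 7600 XT.
print(torch.cuda.get_arch_list())
```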
---
#### **Comparison with HIP Native Code**
The following equivalent HIP C++ program runs without issue, indicating that the GPU and ROCm environment are functioning correctly:
```cpp
#include <hip/hip_runtime.h>
#include <iostream>
#include <vector>
#define MATRIX_SIZE 1024
__global__ void matmul(const float* A, const float* B, float* C, int width) {
int row = blockIdx.y * blockDim.y + threadIdx.y;
int col = blockIdx.x * blockDim.x + threadIdx.x;
if (row < width && col < width) {
float sum = 0.0f;
for (int k = 0; k < width; k++) {
sum += A[row * width + k] * B[k * width + col];
}
C[row * width + col] = sum;
}
}
int main() {
int matrix_size = MATRIX_SIZE * MATRIX_SIZE;
int data_size = matrix_size * sizeof(float);
float *d_A, *d_B, *d_C;
hipMalloc((void**)&d_A, data_size);
hipMalloc((void**)&d_B, data_size);
hipMalloc((void**)&d_C, data_size);
hipLaunchKernelGGL(matmul, dim3(32, 32), dim3(32, 32), 0, 0, d_A, d_B, d_C, MATRIX_SIZE);
hipDeviceSynchronize();
std::cout << "✅ HIP matrix multiplication successful!" << std::endl;
}
```
---
### Versions
```
PyTorch version: 2.5.1+rocm6.2
Is debug build: False
CUDA used to build PyTorch: N/A
ROCM used to build PyTorch: 6.2.41133-dd7f95766
OS: Fedora Linux 41 (Forty One) (x86_64)
GCC version: (GCC) 14.2.1 20250110 (Red Hat 14.2.1-7)
Clang version: Could not collect
CMake version: version 3.30.7
Libc version: glibc-2.40
Python version: 3.10.16 (main, Feb 10 2025, 00:00:00) [GCC 14.2.1 20250110 (Red Hat 14.2.1-7)] (64-bit runtime)
Python platform: Linux-6.12.15-200.fc41.x86_64-x86_64-with-glibc2.40
Is CUDA available: True
CUDA runtime version: Could not collect
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration: AMD Radeon RX 7600 XT (gfx1102)
Nvidia driver version: Could not collect
cuDNN version: Could not collect
HIP runtime version: 6.2.41133
MIOpen runtime version: 3.2.0
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 48 bits physical, 48 bits virtual
Byte Order: Little Endian
CPU(s): 24
On-line CPU(s) list: 0-23
Vendor ID: AuthenticAMD
Model name: AMD Ryzen 9 7900X 12-Core Processor
CPU family: 25
Model: 97
Thread(s) per core: 2
Core(s) per socket: 12
Socket(s): 1
Stepping: 2
Frequency boost: enabled
CPU(s) scaling MHz: 45%
CPU max MHz: 5733.0000
CPU min MHz: 400.0000
BogoMIPS: 9381.75
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ht syscall nx mmxext fxsr_opt pdpe1gb rdtscp lm constant_tsc rep_good amd_lbr_v2 nopl xtopology nonstop_tsc cpuid extd_apicid aperfmperf rapl pni pclmulqdq monitor ssse3 fma cx16 sse4_1 sse4_2 x2apic movbe popcnt aes xsave avx f16c rdrand lahf_lm cmp_legacy svm extapic cr8_legacy abm sse4a misalignsse 3dnowprefetch osvw ibs skinit wdt tce topoext perfctr_core perfctr_nb bpext perfctr_llc mwaitx cpb cat_l3 cdp_l3 hw_pstate ssbd mba perfmon_v2 ibrs ibpb stibp ibrs_enhanced vmmcall fsgsbase bmi1 avx2 smep bmi2 erms invpcid cqm rdt_a avx512f avx512dq rdseed adx smap avx512ifma clflushopt clwb avx512cd sha_ni avx512bw avx512vl xsaveopt xsavec xgetbv1 xsaves cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local user_shstk avx512_bf16 clzero irperf xsaveerptr rdpru wbnoinvd cppc arat npt lbrv svm_lock nrip_save tsc_scale vmcb_clean flushbyasid decodeassists pausefilter pfthreshold avic vgif x2avic v_spec_ctrl vnmi avx512vbmi umip pku ospke avx512_vbmi2 gfni vaes vpclmulqdq avx512_vnni avx512_bitalg avx512_vpopcntdq rdpid overflow_recov succor smca fsrm flush_l1d amd_lbr_pmc_freeze
Virtualization: AMD-V
L1d cache: 384 KiB (12 instances)
L1i cache: 384 KiB (12 instances)
L2 cache: 12 MiB (12 instances)
L3 cache: 64 MiB (2 instances)
NUMA node(s): 1
NUMA node0 CPU(s): 0-23
Vulnerability Gather data sampling: Not affected
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Not affected
Vulnerability Reg file data sampling: Not affected
Vulnerability Retbleed: Not affected
Vulnerability Spec rstack overflow: Mitigation; Safe RET
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Enhanced / Automatic IBRS; IBPB conditional; STIBP always-on; RSB filling; PBRSB-eIBRS Not affected; BHI Not affected
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Versions of relevant libraries:
[pip3] numpy==2.1.2
[pip3] pytorch-triton-rocm==3.1.0
[pip3] torch==2.5.1+rocm6.2
[pip3] torchaudio==2.5.1+rocm6.2
[pip3] torchvision==0.20.1+rocm6.2
[conda] Could not collect
```
cc @seemethere @malfet @osalpekar @atalman @jeffdaily @sunway513 @jithunnair-amd @pruthvistony @ROCmSupport @dllehr-amd @jataylo @hongxiayang @naromero77amd
| true
|
2,869,203,135
|
aten.grid_sampler_3d.default is missing a c-shim implementation, using proxy executor as fallback
|
bhack
|
open
|
[
"good first issue",
"triaged",
"oncall: pt2",
"module: inductor",
"oncall: export",
"module: aotinductor"
] | 4
|
CONTRIBUTOR
|
### 🐛 Describe the bug
Do we need any action item here?
### Error logs
```python
site-packages/torch/_inductor/ir.py:6638] [0/0] aten.grid_sampler_3d.default is missing a c-shim implementation, using proxy executor as fallback
```
### Versions
nightly
cc @chauhang @penguinwu @voznesenskym @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @chenyang78 @kadeng @muchulee8 @amjames @aakhundov @avikchaudhuri @gmagogsfm @zhxchen17 @tugsbayasgalan @angelayi @suo @ydwu4 @desertfire @yushangdi @benjaminglass1 @yf225
| true
|
2,869,162,898
|
[MPS] Rand is broken for 5D+ tensors
|
andreabosisio
|
closed
|
[
"high priority",
"triage review",
"module: random",
"module: mps"
] | 2
|
NONE
|
### 🐛 Describe the bug
While trying to generate different samples with a diffusion model, I noticed the following problem with the `torch.randn` function when using MPS:
```python
import torch as th
rand_cpu_5d = th.randn((2, 1, 32, 32, 32), device="cpu")
print(th.allclose(rand_cpu_5d[0], rand_cpu_5d[1])) # False, as desired (tested multiple times)
rand_mps_5d = th.randn((2, 1, 32, 32, 32), device="mps")
print(th.allclose(rand_mps_5d[0], rand_mps_5d[1])) # True, not desired (tested multiple times)
```
But, at the same time:
```python
rand_cpu_4d = th.randn((2, 32, 32, 32), device="cpu")
print(th.allclose(rand_cpu_4d[0], rand_cpu_4d[1])) # False, as desired (tested multiple times)
rand_mps_4d = th.randn((2, 32, 32, 32), device="mps")
print(th.allclose(rand_mps_4d[0], rand_mps_4d[1])) # False, as desired (tested multiple times)
```
### Versions
Collecting environment information...
PyTorch version: 2.5.1
Is debug build: False
CUDA used to build PyTorch: None
ROCM used to build PyTorch: N/A
OS: macOS 15.2 (arm64)
GCC version: Could not collect
Clang version: 16.0.0 (clang-1600.0.26.6)
CMake version: Could not collect
Libc version: N/A
Python version: 3.10.8 | packaged by conda-forge | (main, Nov 22 2022, 08:25:29) [Clang 14.0.6 ] (64-bit runtime)
Python platform: macOS-15.2-arm64-arm-64bit
Is CUDA available: False
CUDA runtime version: No CUDA
CUDA_MODULE_LOADING set to: N/A
GPU models and configuration: No CUDA
Nvidia driver version: No CUDA
cuDNN version: No CUDA
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Apple M1 Pro
Versions of relevant libraries:
[pip3] numpy==2.2.2
[pip3] pytorch3d==0.7.8
[pip3] torch==2.5.1
[pip3] torchaudio==2.5.1
[pip3] torchvision==0.20.1
[conda] numpy 2.2.2 pypi_0 pypi
[conda] pytorch 2.5.1 py3.10_0 pytorch
[conda] pytorch3d 0.7.8 pypi_0 pypi
[conda] torchaudio 2.5.1 py310_cpu pytorch
[conda] torchvision 0.20.1 py310_cpu pytorch
cc @ezyang @gchanan @zou3519 @kadeng @msaroufim @pbelevich @kulinseth @albanD @malfet @DenisVieriu97 @jhavukainen
| true
|
2,869,016,582
|
torch.export.export creates guards that deny exporting.
|
JibAxelera
|
closed
|
[
"oncall: pt2",
"export-triaged",
"oncall: export"
] | 6
|
NONE
|
### 🐛 Describe the bug
### Problem
I am trying to export a convolutional neural network using torch.export.export. If the model and the input tensors are on the GPU and the model contains a batchnorm layer, export creates guards that make it fail in every case.
### Standalone code to reproduce
Just run the following Python code:
```python
import torch
import torch.nn as nn
from torch.export import Dim
class ConvNN(nn.Module):
def __init__(self, params):
super(ConvNN, self).__init__()
self.num_output = 0
self.num_classes = params['num_classes']
self.confidence_threshold = params['confidence_threshold']
self.relu = nn.ReLU()
self.conv1 = nn.Conv2d(3,64, kernel_size=3,padding=1, stride=1)
self.batch_norm1 = nn.BatchNorm2d((64))
self.conv2 = nn.Conv2d(64,64, kernel_size=3,padding=1, stride=1)
self.batch_norm2 = nn.BatchNorm2d((64))
self.maxpool1 = nn.MaxPool2d(kernel_size=2)
self.early_exit1 = nn.Linear(16384, self.num_classes)
def forward(self, x):
x= self.conv1(x)
x=self.batch_norm1(x)
x= self.relu(x)
x= self.conv2(x)
x=self.batch_norm2(x)
x= self.maxpool1(x)
x= self.relu(x)
return x, nn.functional.softmax(self.early_exit1(x.clone().view(x.size(0), -1)), dim=1)
###-- Main
def main():
x = torch.rand(32,3,32,32).to('cuda')
params = {}
params['num_classes'] = 10
params['confidence_threshold'] = 0.5
model = ConvNN(params)
model.cuda()
model.eval()
batch = Dim("batch")
dynamic_shapes = {"x": {0: batch}}
torch.export.export(model, (x,) , dynamic_shapes=dynamic_shapes)
if __name__ == '__main__':
main()
```
This outputs the following error:
```
assert op == "==", t
^^^^^^^^^^
AssertionError: batch < 2147483647/65536
```
I believe these numbers will depend on the kind of GPU you have (2147483647/65536 ≈ 32768).
Making the batch size bigger than this number does not resolve the error, as I then get:
```
assert op == "==", t
^^^^^^^^^^
AssertionError: 2147483647/65536 <= batch
```
With torch dynamo logs enabled, I can see:
```
E0221 13:10:09.298000 131711 site-packages/torch/_guards.py:295] [0/0] - Not all values of batch = L['x'].size()[0] in the specified range satisfy the generated guard 65536*L['x'].size()[0] >= 2147483647.
E0221 13:10:09.298000 131711 site-packages/torch/_guards.py:295] [0/0] - Not all values of batch = L['x'].size()[0] in the specified range satisfy the generated guard 2 <= L['x'].size()[0] and L['x'].size()[0] <= 65535
```
### Additional information
If I remove the batch norm layer or put my model and input sample on CPU, the error is gone.
### Versions
Versions of relevant libraries:
[pip3] numpy==2.2.2
[pip3] nvidia-cublas-cu12==12.4.5.8
[pip3] nvidia-cuda-cupti-cu12==12.4.127
[pip3] nvidia-cuda-nvrtc-cu12==12.4.127
[pip3] nvidia-cuda-runtime-cu12==12.4.127
[pip3] nvidia-cudnn-cu12==9.1.0.70
[pip3] nvidia-cufft-cu12==11.2.1.3
[pip3] nvidia-curand-cu12==10.3.5.147
[pip3] nvidia-cusolver-cu12==11.6.1.9
[pip3] nvidia-cusparse-cu12==12.3.1.170
[pip3] nvidia-cusparselt-cu12==0.6.2
[pip3] nvidia-nccl-cu12==2.21.5
[pip3] nvidia-nvjitlink-cu12==12.4.127
[pip3] nvidia-nvtx-cu12==12.4.127
[pip3] onnx==1.16.1
[pip3] onnxoptimizer==0.3.13
[pip3] onnxruntime==1.20.1
[pip3] onnxruntime-gpu==1.19.0
[pip3] onnxscript==0.1.0
[pip3] torch==2.6.0
[pip3] torch-mlir==20250127.357
[pip3] torchvision==0.21.0
[pip3] triton==3.2.0
[conda] numpy 2.2.2 pypi_0 pypi
[conda] nvidia-cublas-cu12 12.4.5.8 pypi_0 pypi
[conda] nvidia-cuda-cupti-cu12 12.4.127 pypi_0 pypi
[conda] nvidia-cuda-nvrtc-cu12 12.4.127 pypi_0 pypi
[conda] nvidia-cuda-runtime-cu12 12.4.127 pypi_0 pypi
[conda] nvidia-cudnn-cu12 9.1.0.70 pypi_0 pypi
[conda] nvidia-cufft-cu12 11.2.1.3 pypi_0 pypi
[conda] nvidia-curand-cu12 10.3.5.147 pypi_0 pypi
[conda] nvidia-cusolver-cu12 11.6.1.9 pypi_0 pypi
[conda] nvidia-cusparse-cu12 12.3.1.170 pypi_0 pypi
[conda] nvidia-cusparselt-cu12 0.6.2 pypi_0 pypi
[conda] nvidia-nccl-cu12 2.21.5 pypi_0 pypi
[conda] nvidia-nvjitlink-cu12 12.4.127 pypi_0 pypi
[conda] nvidia-nvtx-cu12 12.4.127 pypi_0 pypi
[conda] torch 2.6.0 pypi_0 pypi
[conda] torch-mlir 20250127.357 pypi_0 pypi
[conda] torchvision 0.21.0 pypi_0 pypi
[conda] triton 3.2.0 pypi_0 pypi
cc @chauhang @penguinwu @avikchaudhuri @gmagogsfm @zhxchen17 @tugsbayasgalan @angelayi @suo @ydwu4
| true
|
2,869,014,377
|
Build a storage reader/writer to write checkpoints in HF format
|
ankitageorge
|
closed
|
[
"oncall: distributed",
"fb-exported",
"Merged",
"Reverted",
"ciflow/trunk",
"topic: new features",
"topic: not user facing",
"ci-no-td",
"oncall: distributed checkpointing"
] | 12
|
CONTRIBUTOR
|
Title: we want to write checkpoints in HF format with DCP; this diff enables that for the non-distributed use case.
Copy of [D68444967](https://www.internalfb.com/diff/D68444967) (https://github.com/pytorch/pytorch/pull/146352). That diff got reverted because of lint errors. The lint error was due to imports of uninstalled libraries. This was on purpose, because we don't want to install safetensors and huggingface; this new diff explicitly ignores that lint so that we don't hit the error.
cc @H-Huang @awgu @kwen2501 @wanchaol @fegin @fduwjj @wz337 @wconstab @d4l3k @c-p-i-o @LucasLLC @MeetVadakkanchery @mhorowitz @pradeepfn @ekr0
| true
|
2,868,996,465
|
[Inductor] Update `set_driver_to_gpu` code to avoid backend re-initialization with new Triton
|
anmyachev
|
closed
|
[
"triaged",
"open source",
"Merged",
"ciflow/trunk",
"topic: not user facing",
"module: inductor"
] | 7
|
COLLABORATOR
|
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
| true
|
2,868,888,671
|
Enabled force_shape_pad for triton tests in test_kernel_benchmark
|
iupaikov-amd
|
closed
|
[
"open source",
"Merged",
"ciflow/trunk",
"topic: not user facing",
"module: inductor",
"rocm"
] | 18
|
CONTRIBUTOR
|
During ROCm runs, those tests naturally show that the padding path would be slower for our archs, so pad_mm chooses to opt out of padding, which fails those tests.
My understanding is that those tests don't check IF the operation should be padded in the first place, but HOW it is padded and whether it is done correctly. Moreover, the tests shouldn't really be hardware-dependent or carry hardware-specific conditions.
Similar PR for reference: https://github.com/pytorch/pytorch/pull/141768
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
| true
|
2,868,774,546
|
[Triton 3.3] [ROCm] Enabled split_scan support for ROCm builds
|
iupaikov-amd
|
closed
|
[
"module: rocm",
"open source",
"Merged",
"ciflow/trunk",
"release notes: rocm",
"module: inductor",
"ciflow/inductor",
"ciflow/rocm"
] | 9
|
CONTRIBUTOR
|
Fixes issue https://github.com/pytorch/pytorch/issues/133228
Enabled split_scan support for ROCm builds.
This must be handled in a non-BC-breaking way, so the functionality is enabled conditionally based on the Triton version.
cc @jeffdaily @sunway513 @jithunnair-amd @pruthvistony @ROCmSupport @dllehr-amd @jataylo @hongxiayang @naromero77amd @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
| true
|
2,868,664,454
|
Document patched podman build for s390x runners
|
AlekseiNikiforovIBM
|
closed
|
[
"triaged",
"open source",
"Merged",
"ciflow/trunk",
"topic: not user facing"
] | 12
|
COLLABORATOR
|
Podman patches from upstream are needed to resolve a couple of issues hit when using it.
This documents an automated podman build with those patches applied.
| true
|
2,868,554,982
|
[ONNX] GNN model inaccuracy: scatter_reduce need to be fixed
|
canon-cmre-kamil-jacek
|
closed
|
[
"module: onnx",
"triaged"
] | 11
|
NONE
|
### 🐛 Describe the bug
A pytorch-geometric model (GAT) produces different results after conversion to ONNX.
I would not mind minor differences, but depending on the input data, they can be very large.
Code to reproduce:
```
import logging
import onnxruntime
import numpy as np
import torch
from torch_geometric.nn import GAT
logger = logging.getLogger(__name__)
logger.info("Prepare model")
num_features = 23
num_classes = 12
torch_path = "model.txt"
onnx_path = "model.onnx"
model = GAT(in_channels=num_features, out_channels=num_classes, heads=4,
hidden_channels=16, num_layers=1, v2=True, dropout=0.0)
best_model_ckpt = torch.load(torch_path, weights_only=False)
model.load_state_dict(best_model_ckpt)
model.eval()
device = torch.device("cpu")
model = model.to(device)
logger.info("Generating dummy data for ONNX exporter")
num_segments = 30
x = torch.randn(num_segments, num_features).to(device)
edge_index = torch.randint(num_segments, size=(2, 58)).to(device)
logger.info("Running torch model on dummy data")
with torch.no_grad():
result_torch = model(x, edge_index).numpy()
logger.info("Exporting")
opset_version = 16
dynamic_axes = {'x': {0: 'dynamic_input_features'}, 'edge_index': {1: 'dynamic_input_edge_connection'},
'output': {0: 'dynamic_output_segment'}}
torch.onnx.export(model,
(x, edge_index),
onnx_path,
verify=True,
output_names=['output'],
input_names=['x', 'edge_index'],
dynamic_axes=dynamic_axes,
opset_version=opset_version,
dynamo=True,
report=True
)
logger.info("Running ONNX inference")
ort_session = onnxruntime.InferenceSession(onnx_path, providers=['CPUExecutionProvider'])
inputs = {'x': x.numpy(), 'edge_index': edge_index.numpy()}
result = ort_session.run(['output'], inputs)
result_onnx = torch.Tensor(result[0]).numpy()
diff = result_torch - result_onnx
logger.warning(f"Results difference: {diff}")
logger.warning(f"Max, Min: {np.max(diff)}, {np.min(diff)}")
```
Output:
```
C:\Projects\ia_gnn_coronary_labelling\env\python.exe C:\Projects\ia_gnn_coronary_labelling\onnx_export_issue.py
C:\Projects\ia_gnn_coronary_labelling\env\lib\site-packages\onnxscript\converter.py:823: FutureWarning: 'onnxscript.values.Op.param_schemas' is deprecated in version 0.1 and will be removed in the future. Please use '.op_signature' instead.
param_schemas = callee.param_schemas()
C:\Projects\ia_gnn_coronary_labelling\env\lib\site-packages\onnxscript\converter.py:823: FutureWarning: 'onnxscript.values.OnnxFunction.param_schemas' is deprecated in version 0.1 and will be removed in the future. Please use '.op_signature' instead.
param_schemas = callee.param_schemas()
C:\Projects\ia_gnn_coronary_labelling\env\lib\site-packages\torch\onnx\_internal\exporter\_compat.py:271: UserWarning: # 'dynamic_axes' is not recommended when dynamo=True, and may lead to 'torch._dynamo.exc.UserError: Constraints violated.' Supply the 'dynamic_shapes' argument instead if export is unsuccessful.
warnings.warn(
[torch.onnx] Obtain model graph for `GAT(23, 12, num_layers=1)` with `torch.export.export(..., strict=False)`...
[torch.onnx] Obtain model graph for `GAT(23, 12, num_layers=1)` with `torch.export.export(..., strict=False)`... ✅
[torch.onnx] Run decomposition...
[torch.onnx] Run decomposition... ✅
[torch.onnx] Translate the graph into ONNX...
[torch.onnx] Translate the graph into ONNX... ✅
[torch.onnx] Check the ONNX model...
[torch.onnx] Check the ONNX model... ✅
[torch.onnx] Execute the model with ONNX Runtime...
[torch.onnx] Execute the model with ONNX Runtime... ✅
[torch.onnx] Verify output accuracy...
[torch.onnx] Verify output accuracy... ❌
W0221 09:13:06.817510 16848 env\Lib\site-packages\torch\onnx\_internal\exporter\_core.py:1518] Output 'output' has a large absolute difference of 3.925521.
W0221 09:13:06.817510 16848 env\Lib\site-packages\torch\onnx\_internal\exporter\_core.py:1525] Output 'output' has a large relative difference of 6.541924.
[torch.onnx] Export report has been saved to 'onnx_export_2025-02-21_09-13-05-590746_accuracy.md'.
Results difference: [[ 1.1920929e-07 9.5367432e-07 -9.5367432e-07 1.1920929e-07
-3.5762787e-07 -1.9073486e-06 1.9073486e-06 -4.7683716e-07
-2.3841858e-07 2.3841858e-07 2.3841858e-07 0.0000000e+00]
[-5.9604645e-07 0.0000000e+00 2.3841858e-07 -5.9604645e-07
0.0000000e+00 0.0000000e+00 4.7683716e-07 0.0000000e+00
0.0000000e+00 0.0000000e+00 -2.3841858e-07 -2.3841858e-07]
[ 0.0000000e+00 -7.1525574e-07 3.5762787e-07 -2.3841858e-07
0.0000000e+00 9.5367432e-07 0.0000000e+00 -9.5367432e-07
0.0000000e+00 0.0000000e+00 -1.1920929e-07 -2.3841858e-07]
[ 0.0000000e+00 -5.9604645e-08 0.0000000e+00 0.0000000e+00
0.0000000e+00 1.1920929e-07 0.0000000e+00 0.0000000e+00
-4.7683716e-07 -2.3841858e-07 0.0000000e+00 2.3841858e-07]
[ 0.0000000e+00 0.0000000e+00 0.0000000e+00 0.0000000e+00
0.0000000e+00 0.0000000e+00 0.0000000e+00 0.0000000e+00
0.0000000e+00 0.0000000e+00 0.0000000e+00 0.0000000e+00]
[ 0.0000000e+00 0.0000000e+00 0.0000000e+00 0.0000000e+00
0.0000000e+00 0.0000000e+00 0.0000000e+00 0.0000000e+00
0.0000000e+00 0.0000000e+00 0.0000000e+00 0.0000000e+00]
[ 0.0000000e+00 0.0000000e+00 0.0000000e+00 0.0000000e+00
0.0000000e+00 0.0000000e+00 0.0000000e+00 0.0000000e+00
0.0000000e+00 0.0000000e+00 0.0000000e+00 0.0000000e+00]
[-9.5367432e-07 2.3841858e-07 7.1525574e-07 2.1457672e-06
1.0728836e-06 0.0000000e+00 -9.5367432e-07 9.5367432e-07
-2.3841858e-07 -2.8610229e-06 0.0000000e+00 -9.5367432e-07]
[ 1.9073486e-06 8.3446503e-07 -1.1920929e-06 9.5367432e-07
2.3841858e-07 -1.4305115e-06 0.0000000e+00 9.5367432e-07
4.7683716e-07 0.0000000e+00 1.4305115e-06 1.9073486e-06]
[ 0.0000000e+00 0.0000000e+00 0.0000000e+00 0.0000000e+00
0.0000000e+00 0.0000000e+00 0.0000000e+00 2.3841858e-07
0.0000000e+00 0.0000000e+00 0.0000000e+00 0.0000000e+00]
[ 9.5367432e-07 0.0000000e+00 9.5367432e-07 -1.9073486e-06
-3.8146973e-06 9.5367432e-07 1.9073486e-06 -4.7683716e-07
0.0000000e+00 9.5367432e-07 2.3841858e-07 1.0132790e-06]
[ 4.7683716e-07 0.0000000e+00 -2.3841858e-07 1.7881393e-07
0.0000000e+00 0.0000000e+00 0.0000000e+00 0.0000000e+00
2.3841858e-07 0.0000000e+00 0.0000000e+00 0.0000000e+00]
[ 0.0000000e+00 0.0000000e+00 0.0000000e+00 0.0000000e+00
0.0000000e+00 0.0000000e+00 0.0000000e+00 0.0000000e+00
0.0000000e+00 0.0000000e+00 0.0000000e+00 0.0000000e+00]
[ 0.0000000e+00 0.0000000e+00 0.0000000e+00 0.0000000e+00
0.0000000e+00 0.0000000e+00 0.0000000e+00 0.0000000e+00
0.0000000e+00 0.0000000e+00 0.0000000e+00 0.0000000e+00]
[ 0.0000000e+00 -2.3841858e-07 -2.3841858e-07 -4.7683716e-07
-2.3841858e-07 4.7683716e-07 4.7683716e-07 -8.9406967e-08
-1.7881393e-07 0.0000000e+00 -4.7683716e-07 2.3841858e-07]
[ 0.0000000e+00 0.0000000e+00 -4.7683716e-07 3.5762787e-07
-2.3841858e-07 -4.7683716e-07 1.9073486e-06 0.0000000e+00
4.7683716e-07 4.7683716e-07 4.7683716e-07 9.5367432e-07]
[ 0.0000000e+00 -1.9073486e-06 4.7683716e-07 1.4305115e-06
2.3841858e-06 0.0000000e+00 -3.8146973e-06 9.5367432e-07
4.7683716e-07 1.9073486e-06 1.7285347e-06 1.4305115e-06]
[-1.9073486e-06 -4.7683716e-07 1.4305115e-06 -3.5762787e-07
0.0000000e+00 9.5367432e-07 -1.9073486e-06 4.7683716e-07
-2.3841858e-07 0.0000000e+00 -5.9604645e-07 -7.1525574e-07]
[ 2.3841858e-07 -9.5367432e-07 -2.3841858e-07 2.3841858e-07
-5.9604645e-08 0.0000000e+00 0.0000000e+00 0.0000000e+00
0.0000000e+00 -4.7683716e-07 4.7683716e-07 7.1525574e-07]
[ 2.6105111e+00 1.5927434e-01 -1.8088472e+00 2.3992982e+00
1.2503064e+00 -3.1788063e-01 -5.5680227e-01 -2.1198511e-01
4.1892463e-01 3.6400440e-01 -3.5966778e-01 2.0951438e-01]
[ 0.0000000e+00 0.0000000e+00 0.0000000e+00 0.0000000e+00
0.0000000e+00 0.0000000e+00 0.0000000e+00 0.0000000e+00
0.0000000e+00 0.0000000e+00 0.0000000e+00 0.0000000e+00]
[-8.5174835e-01 -3.1561234e+00 -2.8051062e+00 -7.7328217e-01
2.6446767e+00 2.5995994e+00 5.1092625e-01 1.3140759e+00
1.5741320e+00 2.0076184e+00 2.7780089e+00 3.9255209e+00]
[ 0.0000000e+00 9.5367432e-07 -4.7683716e-07 0.0000000e+00
-1.1920929e-07 -9.5367432e-07 1.1920929e-06 2.3841858e-07
4.7683716e-07 9.5367432e-07 2.3841858e-07 0.0000000e+00]
[ 0.0000000e+00 0.0000000e+00 0.0000000e+00 0.0000000e+00
0.0000000e+00 0.0000000e+00 0.0000000e+00 0.0000000e+00
0.0000000e+00 0.0000000e+00 0.0000000e+00 0.0000000e+00]
[ 0.0000000e+00 0.0000000e+00 0.0000000e+00 0.0000000e+00
0.0000000e+00 0.0000000e+00 0.0000000e+00 0.0000000e+00
0.0000000e+00 0.0000000e+00 0.0000000e+00 0.0000000e+00]
[-9.5367432e-07 0.0000000e+00 0.0000000e+00 5.9604645e-08
0.0000000e+00 0.0000000e+00 0.0000000e+00 0.0000000e+00
0.0000000e+00 0.0000000e+00 -2.3841858e-07 -9.5367432e-07]
[ 0.0000000e+00 0.0000000e+00 0.0000000e+00 0.0000000e+00
-1.1920929e-07 0.0000000e+00 0.0000000e+00 0.0000000e+00
0.0000000e+00 0.0000000e+00 0.0000000e+00 0.0000000e+00]
[-4.7683716e-07 9.5367432e-07 0.0000000e+00 -1.1920929e-07
4.7683716e-07 0.0000000e+00 4.7683716e-07 2.3841858e-07
4.7683716e-07 0.0000000e+00 1.1920929e-07 -4.7683716e-07]
[ 0.0000000e+00 0.0000000e+00 0.0000000e+00 0.0000000e+00
0.0000000e+00 0.0000000e+00 0.0000000e+00 0.0000000e+00
0.0000000e+00 0.0000000e+00 0.0000000e+00 0.0000000e+00]
[-3.3378601e-06 7.1525574e-07 1.0728836e-06 -1.4305115e-06
-2.3841858e-07 0.0000000e+00 0.0000000e+00 -4.7683716e-07
-1.1920929e-07 0.0000000e+00 -9.5367432e-07 -3.8146973e-06]]
Max, Min: 3.925520896911621, -3.156123399734497
```
Report: https://gist.github.com/canon-cmre-kamil-jacek/10739f707815aa2439e55462cd200cc0
Model: [model.txt](https://github.com/user-attachments/files/18904471/model.txt) - without it differences are minimal.
### Versions
Collecting environment information...
PyTorch version: 2.7.0.dev20250127+cu118
Is debug build: False
CUDA used to build PyTorch: 11.8
ROCM used to build PyTorch: N/A
OS: Microsoft Windows 10 Enterprise (10.0.19045 64-bit)
GCC version: Could not collect
Clang version: 17.0.6
CMake version: version 3.30.5
Libc version: N/A
Python version: 3.9.21 (main, Dec 11 2024, 16:35:24) [MSC v.1929 64 bit (AMD64)] (64-bit runtime)
Python platform: Windows-10-10.0.19045-SP0
Is CUDA available: True
CUDA runtime version: 11.6.55
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration: GPU 0: NVIDIA T1200 Laptop GPU
Nvidia driver version: 511.23
cuDNN version: Could not collect
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Name: Intel(R) Xeon(R) W-11955M CPU @ 2.60GHz
Manufacturer: GenuineIntel
Family: 179
Architecture: 9
ProcessorType: 3
DeviceID: CPU0
CurrentClockSpeed: 2611
MaxClockSpeed: 2611
L2CacheSize: 10240
L2CacheSpeed: None
Revision: None
Versions of relevant libraries:
[pip3] numpy==2.0.2
[pip3] onnx==1.17.0
[pip3] onnxruntime==1.19.2
[pip3] onnxscript==0.2.0
[pip3] pytorch-lightning==2.5.0.post0
[pip3] torch==2.7.0.dev20250127+cu118
[pip3] torch-geometric==2.6.1
[pip3] torchaudio==2.6.0.dev20250128+cu118
[pip3] torchmetrics==1.6.1
[pip3] torchvision==0.22.0.dev20250128+cu118
[conda] blas 1.0 mkl
[conda] mkl 2023.1.0 h6b88ed4_46358
[conda] mkl-service 2.4.0 py39h827c3e9_2
[conda] mkl_fft 1.3.11 py39h827c3e9_0
[conda] mkl_random 1.2.8 py39hc64d2fc_0
[conda] numpy 2.0.2 py39h055cbcc_0
[conda] numpy-base 2.0.2 py39h65a83cf_0
[conda] pytorch-lightning 2.5.0.post0 pypi_0 pypi
[conda] torch 2.7.0.dev20250127+cu118 pypi_0 pypi
[conda] torch-geometric 2.6.1 pypi_0 pypi
[conda] torchaudio 2.6.0.dev20250128+cu118 pypi_0 pypi
[conda] torchmetrics 1.6.1 pypi_0 pypi
[conda] torchvision 0.22.0.dev20250128+cu118 pypi_0 pypi
| true
|
2,868,504,276
|
Remove useless options for third-party ONNX build
|
cyyever
|
closed
|
[
"triaged",
"open source",
"Merged",
"ciflow/trunk",
"topic: not user facing",
"ciflow/periodic"
] | 8
|
COLLABORATOR
|
Treat ONNX CMake targets properly and remove unneeded options.
| true
|
2,868,486,048
|
Update merge rules for oneDNN part
|
EikanWang
|
closed
|
[
"open source",
"Merged",
"ciflow/trunk",
"topic: not user facing"
] | 6
|
COLLABORATOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #147615
| true
|
2,868,473,116
|
[Intel GPU] Enable SDPA on XPU
|
DDEle
|
closed
|
[
"module: cpu",
"open source",
"Merged",
"ciflow/trunk",
"topic: not user facing",
"module: inductor",
"keep-going",
"ciflow/xpu"
] | 20
|
CONTRIBUTOR
|
Motivation
===
This PR is part of the OneDNN upstreaming plan, as stated in #114848 [(comment)](https://github.com/pytorch/pytorch/issues/114848#issuecomment-2451553203). SDPA is supported via the overridable variant on the XPU backend. Besides the added `Attention.cpp` file, `Graph.h` is added to hold utilities for the OneDNN graph, including those for kernel/compiled-graph caching. In addition, a selection of test cases in `test/test_transformers.py` are copied into the new `test/xpu/test_transformers.py` and modified accordingly to provide additional tests beyond `./third_party/torch-xpu-ops/test/xpu/test_ops_xpu.py`.
Depends on OneDNN version v3.7 upgrade in #147498
Depends on BUILD_GRAPH switch in #147608
cc @jgong5 @mingfeima @XiaobingSuper @sanchitintel @ashokei @jingxu10 @voznesenskym @penguinwu @EikanWang @Guobing-Chen @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
| true
|
2,868,465,777
|
torch.nn.AvgPool2d fails with stride >= 2^31 on CUDA
|
jiren-the-gray
|
open
|
[
"module: nn",
"module: cuda",
"triaged",
"module: 64-bit",
"module: pooling",
"topic: fuzzer"
] | 2
|
NONE
|
### 🐛 Describe the bug
Running `torch.nn.AvgPool2d` with a stride of 2^31 or larger fails on CUDA but works on CPU. [colab](https://colab.research.google.com/drive/1n27_nl_NrOtP0H2qAngBVE2jQcvGi4Pa?usp=sharing)
Minimal reproduction:
```python
import torch
m = torch.nn.AvgPool2d(3, stride=2**31)
input = torch.randn(20, 16, 50, 32)
out_cpu = m(input) # No error
out_gpu = m.cuda()(input.cuda()) # RuntimeError: integer out of range
```
PS: Might be related to https://github.com/pytorch/pytorch/issues/113833
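For context, 2**31 is one past the maximum signed 32-bit integer, which would be consistent with the `integer out of range` message if the CUDA path narrows the stride to int32 (an assumption, not verified here):
```python
import torch

print(torch.iinfo(torch.int32).max)  # 2147483647
print(2**31)                         # 2147483648, one past int32 max
```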
### Versions
Collecting environment information...
PyTorch version: 2.5.1+cu124
Is debug build: False
CUDA used to build PyTorch: 12.4
ROCM used to build PyTorch: N/A
OS: Ubuntu 22.04.4 LTS (x86_64)
GCC version: (Ubuntu 11.4.0-1ubuntu1~22.04) 11.4.0
Clang version: 14.0.0-1ubuntu1.1
CMake version: version 3.31.4
Libc version: glibc-2.35
Python version: 3.11.11 (main, Dec 4 2024, 08:55:07) [GCC 11.4.0] (64-bit runtime)
Python platform: Linux-6.1.85+-x86_64-with-glibc2.35
Is CUDA available: True
CUDA runtime version: 12.5.82
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration: GPU 0: Tesla T4
Nvidia driver version: 550.54.15
cuDNN version: Probably one of the following:
/usr/lib/x86_64-linux-gnu/libcudnn.so.9.2.1
/usr/lib/x86_64-linux-gnu/libcudnn_adv.so.9.2.1
/usr/lib/x86_64-linux-gnu/libcudnn_cnn.so.9.2.1
/usr/lib/x86_64-linux-gnu/libcudnn_engines_precompiled.so.9.2.1
/usr/lib/x86_64-linux-gnu/libcudnn_engines_runtime_compiled.so.9.2.1
/usr/lib/x86_64-linux-gnu/libcudnn_graph.so.9.2.1
/usr/lib/x86_64-linux-gnu/libcudnn_heuristic.so.9.2.1
/usr/lib/x86_64-linux-gnu/libcudnn_ops.so.9.2.1
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 46 bits physical, 48 bits virtual
Byte Order: Little Endian
CPU(s): 2
On-line CPU(s) list: 0,1
Vendor ID: GenuineIntel
Model name: Intel(R) Xeon(R) CPU @ 2.00GHz
CPU family: 6
Model: 85
Thread(s) per core: 2
Core(s) per socket: 1
Socket(s): 1
Stepping: 3
BogoMIPS: 4000.24
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ss ht syscall nx pdpe1gb rdtscp lm constant_tsc rep_good nopl xtopology nonstop_tsc cpuid tsc_known_freq pni pclmulqdq ssse3 fma cx16 pcid sse4_1 sse4_2 x2apic movbe popcnt aes xsave avx f16c rdrand hypervisor lahf_lm abm 3dnowprefetch invpcid_single ssbd ibrs ibpb stibp fsgsbase tsc_adjust bmi1 hle avx2 smep bmi2 erms invpcid rtm mpx avx512f avx512dq rdseed adx smap clflushopt clwb avx512cd avx512bw avx512vl xsaveopt xsavec xgetbv1 xsaves arat md_clear arch_capabilities
Hypervisor vendor: KVM
Virtualization type: full
L1d cache: 32 KiB (1 instance)
L1i cache: 32 KiB (1 instance)
L2 cache: 1 MiB (1 instance)
L3 cache: 38.5 MiB (1 instance)
NUMA node(s): 1
NUMA node0 CPU(s): 0,1
Vulnerability Gather data sampling: Not affected
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Mitigation; PTE Inversion
Vulnerability Mds: Vulnerable; SMT Host state unknown
Vulnerability Meltdown: Vulnerable
Vulnerability Mmio stale data: Vulnerable
Vulnerability Reg file data sampling: Not affected
Vulnerability Retbleed: Vulnerable
Vulnerability Spec rstack overflow: Not affected
Vulnerability Spec store bypass: Vulnerable
Vulnerability Spectre v1: Vulnerable: __user pointer sanitization and usercopy barriers only; no swapgs barriers
Vulnerability Spectre v2: Vulnerable; IBPB: disabled; STIBP: disabled; PBRSB-eIBRS: Not affected; BHI: Vulnerable (Syscall hardening enabled)
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Vulnerable
Versions of relevant libraries:
[pip3] numpy==1.26.4
[pip3] optree==0.14.0
[pip3] torch==2.5.1+cu124
[pip3] torchaudio==2.5.1+cu124
[pip3] torchsummary==1.5.1
[pip3] torchvision==0.20.1+cu124
[pip3] triton==3.1.0
[conda] Could not collect
cc @albanD @mruberry @jbschlosser @walterddr @mikaylagawarecki @ptrblck @msaroufim @eqy
| true
|
2,868,465,485
|
[Intel GPU] Add SDPA implementation on XPU with OneDNN
|
DDEle
|
closed
|
[
"open source",
"Merged",
"ciflow/trunk",
"topic: not user facing",
"ciflow/xpu"
] | 9
|
CONTRIBUTOR
|
Add an XPU implementation of the OneDNN-based SDPA operator. It will be integrated and enabled later.
Depends on BUILD_GRAPH switch in #147608
cc @jgong5 @mingfeima @XiaobingSuper @sanchitintel @ashokei @jingxu10
| true
|
2,868,457,345
|
[Minor] Fix minor mistake in docstring of replace_pattern
|
xwu99
|
closed
|
[
"open source",
"Merged",
"ciflow/trunk",
"release notes: fx",
"fx"
] | 4
|
NONE
|
Fixes #147610
cc @ezyang @SherlockNoMad @EikanWang @jgong5 @wenzhe-nrv
| true
|
2,868,455,320
|
Minor mistake in docstring of replace_pattern in torch/fx/subgraph_rewriter.py
|
xwu99
|
closed
|
[
"module: docs",
"triaged"
] | 0
|
NONE
|
    def pattern(w1, w2):
        return torch.cat([w1, w2]).sum()

    def replacement(w1, w2):
        return torch.stack([w1, w2])

It should not have the extra `sum()`, according to the following generated code:

    def forward(self, x, w1, w2):
        stack_1 = torch.stack([w1, w2])
        sum_1 = stack_1.sum()
        stack_2 = torch.stack([w1, w2])
        sum_2 = stack_2.sum()
        max_1 = torch.max(sum_1)
        add_1 = x + max_1
        max_2 = torch.max(sum_2)
        add_2 = add_1 + max_2
        return add_2
cc @svekars @brycebortree @sekyondaMeta @AlannaBurke
| true
|
2,868,446,431
|
Adapt test_misc.py to HPUs
|
amathewc
|
closed
|
[
"triaged",
"open source",
"topic: not user facing",
"module: dynamo"
] | 3
|
CONTRIBUTOR
|
This PR is related to https://github.com/pytorch/pytorch/pull/145476. That PR had two files (test_functions.py and test_misc.py); test_functions.py was causing CI/rebase/merge issues and hence was removed for now. This PR contains only test_misc.py.
This is a continuation of https://github.com/pytorch/pytorch/pull/144387 .
# MOTIVATION
We recently integrated support for Intel Gaudi devices (identified as 'hpu') into the common_device_type framework via the pull request at https://github.com/pytorch/pytorch/pull/126970. This integration allows tests to be automatically instantiated for Gaudi devices upon loading the relevant library. Building on this development, the current pull request extends the utility of these hooks by adapting selected CUDA tests to operate on Gaudi devices. Additionally, we have confirmed that these modifications do not interfere with the existing tests on CUDA devices.
Other accelerators can also extend the functionality by adding their device to the devices list (e.g., xpu).
# CHANGES
- Create a separate class for test functions running on CUDA devices
- Extend the functionality of these tests to include HPUs
- Use instantiate_device_type_tests with targeted attributes to generate device-specific test instances within the new classes (see the sketch below)
- Apply the skipIfHPU decorator to bypass tests that are not yet compatible with HPU devices
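A minimal sketch of that pattern (illustrative class and test names, not the actual test file):
```python
import torch
from torch.testing._internal.common_device_type import instantiate_device_type_tests
from torch.testing._internal.common_utils import TestCase, run_tests

class MiscTestsDevice(TestCase):
    # A generic device test; `device` is injected per instantiated device type.
    def test_add(self, device):
        x = torch.ones(2, device=device)
        self.assertEqual((x + x).sum().item(), 4.0)

# Instantiate only for the targeted device types; other accelerators
# (e.g. "xpu") can be appended to this tuple.
devices = ("cuda", "hpu")
instantiate_device_type_tests(MiscTestsDevice, globals(), only_for=devices)

if __name__ == "__main__":
    run_tests()
```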
cc: @ankurneog , @EikanWang , @yanboliang , @guangyey
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @chauhang @amjames
| true
|
2,868,374,447
|
[Intel GPU] Enable BUILD_GRAPH for xpu_mkldnn
|
DDEle
|
closed
|
[
"module: mkldnn",
"open source",
"Merged",
"ciflow/trunk",
"topic: not user facing",
"ciflow/xpu"
] | 4
|
CONTRIBUTOR
|
In preparation for enabling OneDNN-based XPU SDPA.
cc @gujinghui @PenghuiCheng @XiaobingSuper @jianyuh @jgong5 @mingfeima @sanchitintel @ashokei @jingxu10 @min-jean-cho @yanbing-j @Guobing-Chen @Xia-Weiwen @snadampal
| true
|
2,868,317,440
|
Deprecate sm70 for cuda 12.8 binary
|
tinglvv
|
closed
|
[
"triaged",
"open source",
"Merged",
"ciflow/binaries",
"ciflow/trunk",
"topic: not user facing"
] | 9
|
COLLABORATOR
|
Follow-up to https://github.com/pytorch/pytorch/pull/146265/files, dropping sm_70 as well, since "Architecture support for Maxwell, Pascal, and Volta is considered feature-complete and will be frozen in an upcoming release."
https://github.com/pytorch/pytorch/issues/145570
cc @ptrblck @atalman @nWEIdia
| true
|
2,868,275,720
|
[ONNX] aten_pow_scalar failure on dynamo export with dynamic shapes
|
borisfom
|
closed
|
[
"module: onnx",
"triaged"
] | 13
|
CONTRIBUTOR
|
### 🐛 Describe the bug
I encountered this error when trying to export a DiffusionTransformer module. The same module exported fine without dynamic shapes:
```
Traceback (most recent call last):
File "/usr/local/lib/python3.12/dist-packages/torch/onnx/_internal/exporter/_core.py", line 519, in _handle_call_function_node_with_lowering
outputs = onnx_function(*onnx_args, **onnx_kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.12/dist-packages/onnxscript/values.py", line 635, in __call__
return self.func(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.12/dist-packages/onnxscript/function_libs/torch_lib/ops/core.py", line 6632, in aten_pow_scalar
return op.Pow(op.Cast(self, to=exponent.dtype), exponent)
^^^^^^^^^^^^^^
AttributeError: 'int' object has no attribute 'dtype'
```
Dynamo report is attached.
Here are my dynamic shapes :
```
seq_dim = torch.export.Dim("seq_len") # , min=16, max=768)
st = torch.export.Dim.STATIC
token_dynamic_shapes = {
'a' : {1:seq_dim},
's' : {1:seq_dim},
'z' : {1:seq_dim, 2:seq_dim},
'mask' : {1:seq_dim, 2:st},
'multiplicity' : None,
'model_cache' : None,
}
```
Also, possibly related:
1. onnx.export() made me define keys for all inputs in the dynamic shapes dict, even non-tensor ones (see the None entries above).
2. Even though the 'mask' input is only 2-dimensional, I was forced to add a static Dim for a third dimension (the 2:st key) in its dict; otherwise onnx.export failed even earlier with the error below, trying to access the dict entry for key 2:
```
def make_constraints(
fake_mode: FakeTensorMode,
gm: torch.fx.GraphModule,
combined_args: dict[str, Any],
dynamic_shapes: Union[dict[str, Any], tuple[Any], list[Any], None],
num_lifted_inputs: int,
):
....
for i, d in enumerate(node.meta["val"].shape):
if isinstance(d, torch.SymInt) and not d.node.expr.is_number:
# Look up the range constraint for the symbol corresponding to this shape dimension
# and store it indexed by the symbolic expression corresponding to it.
# NOTE(avik): Use node._expr instead of node.expr for the lookup here because
# we want the symbol, not its replacement, which could be an expression. Maybe
# there's a better way to do this, e.g., by (re)computing value ranges for expressions?
> dim = shape_spec[i] if shape_spec else None
E KeyError: 2
```
The repro is convoluted; hopefully the report is enough to start with?
@justinchuby @xadupre
[conversion.md](https://github.com/user-attachments/files/18902915/conversion.md)
### Versions
Pytorch nightly
| true
|
2,868,264,783
|
[Docs] Add `OpDTypes.any_common_cpu_cuda_one`
|
shink
|
closed
|
[
"triaged",
"open source",
"Merged",
"ciflow/trunk",
"topic: not user facing"
] | 7
|
CONTRIBUTOR
|
Fixes #ISSUE_NUMBER
| true
|
2,868,248,550
|
RuntimeError when running profiler in a loop
|
HardysJin
|
open
|
[
"oncall: profiler"
] | 0
|
NONE
|
### 🐛 Describe the bug
Hi,
This bug does not always happen, but if I use the profiler to export a Chrome trace in a loop, it is likely to occur.
Code:
```
import vllm
import torch
import time
def add_requests(llm, num_tokens=4096, batch_size=1, max_out_tokens=8 ):
print(f"start inference for batch_size[{batch_size}], num_tokens[{num_tokens}], max_out_tokens[{max_out_tokens}]")
words = ['test'] * num_tokens
sample_prompt = ' '.join(words)
inp_050 = {
'prompt_token_ids': [0] * num_tokens
}
sampling_params = vllm.SamplingParams(
n=1,
temperature=0.0,
top_p=1.0,
ignore_eos=True,
max_tokens=max_out_tokens,
)
for _ in range(batch_size):
if vllm.__version__.startswith('0.5.0'):
llm._add_request(inputs=inp_050, params=sampling_params)
else:
llm._add_request(prompt=sample_prompt, params=sampling_params )
def calculate_latency(llm, num_tokens=4096, batch_size=1, max_out_tokens=8, ):
add_requests(llm, num_tokens, batch_size, max_out_tokens)
step_times = []
chunk_times = []
while llm.llm_engine.has_unfinished_requests():
st = time.time()
step_outputs = llm.llm_engine.step()
chunk_times.append(time.time() - st)
if all(out.finished for out in step_outputs):
step_times.append(chunk_times)
chunk_times = []
# torch.cuda.empty_cache()
# torch.cuda.reset_peak_memory_stats()
# torch.cuda.synchronize()
return step_times
model = vllm.LLM(
model="/nvmedata/model/Qwen2-72B-Instruct",
dtype=torch.float16,
tensor_parallel_size = 4,
trust_remote_code = True,
download_dir = None,
load_format = 'dummy', # initialize the weights with random values, which is mainly for profiling.
gpu_memory_utilization = 0.9, # args.gpu_memory_utilization,
enforce_eager = True,
max_num_batched_tokens = 4096,
max_model_len = 4096,
)
input_combo = [(1, 4096), (1, 4095)]
for i in range(10):
with torch.profiler.profile(
activities=[
torch.profiler.ProfilerActivity.CPU,
torch.profiler.ProfilerActivity.CUDA,
],
on_trace_ready=torch.profiler.tensorboard_trace_handler("trace_result"),
with_stack=True,
) as prof:
for b_size, n_toks in input_combo:
with torch.no_grad():
step_times = calculate_latency(model, num_tokens=n_toks,
batch_size=b_size, max_out_tokens=1)
```
Error:
```
---------------------------------------------------------------------------
RuntimeError Traceback (most recent call last)
Cell In[3], line 4
      2 import gc
      3 for i in range(10):
----> 4     with torch.profiler.profile(
      5         activities=[
      6             torch.profiler.ProfilerActivity.CPU,
      7             torch.profiler.ProfilerActivity.CUDA,
      8         ],
      9         on_trace_ready=torch.profiler.tensorboard_trace_handler("trace_result"),
     10         with_stack=True,
     11     ) as prof:
     12         for b_size, n_toks in input_combo:
     13             with torch.no_grad():
File ~/anaconda3/envs/py310/lib/python3.10/site-packages/torch/profiler/profiler.py:748, in profile.__exit__(self, exc_type, exc_val, exc_tb)
    747 def __exit__(self, exc_type, exc_val, exc_tb):
--> 748     self.stop()
    749     prof.KinetoStepTracker.erase_step_count(PROFILER_STEP_NAME)
    750     if self.execution_trace_observer:
File ~/anaconda3/envs/py310/lib/python3.10/site-packages/torch/profiler/profiler.py:764, in profile.stop(self)
    762 if self.record_steps and self.step_rec_fn:
    763     self.step_rec_fn.__exit__(None, None, None)
...
--> 359 self.kineto_results = _disable_profiler()
    360 t1 = perf_counter_ns()
    361 self._stats.profiler_disable_call_duration_us = int((t1 - t0) / 1000)
RuntimeError: !stack.empty() INTERNAL ASSERT FAILED at "../torch/csrc/autograd/profiler_python.cpp":981, please report a bug to PyTorch. Python replay stack is empty.
```
### Versions
Collecting environment information...
PyTorch version: 2.5.1+cu124
Is debug build: False
CUDA used to build PyTorch: 12.4
ROCM used to build PyTorch: N/A
OS: Ubuntu 22.04.4 LTS (x86_64)
GCC version: (Ubuntu 11.4.0-1ubuntu1~22.04) 11.4.0
Clang version: Could not collect
CMake version: version 3.22.1
Libc version: glibc-2.35
Python version: 3.10.16 | packaged by conda-forge | (main, Dec 5 2024, 14:16:10) [GCC 13.3.0] (64-bit runtime)
Python platform: Linux-6.8.0-52-generic-x86_64-with-glibc2.35
Is CUDA available: True
CUDA runtime version: Could not collect
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration:
GPU 0: NVIDIA GeForce RTX 4080 SUPER
GPU 1: NVIDIA GeForce RTX 4080 SUPER
Nvidia driver version: 550.120
cuDNN version: Probably one of the following:
/usr/lib/x86_64-linux-gnu/libcudnn.so.9.5.1
/usr/lib/x86_64-linux-gnu/libcudnn_adv.so.9.5.1
/usr/lib/x86_64-linux-gnu/libcudnn_cnn.so.9.5.1
/usr/lib/x86_64-linux-gnu/libcudnn_engines_precompiled.so.9.5.1
/usr/lib/x86_64-linux-gnu/libcudnn_engines_runtime_compiled.so.9.5.1
/usr/lib/x86_64-linux-gnu/libcudnn_graph.so.9.5.1
/usr/lib/x86_64-linux-gnu/libcudnn_heuristic.so.9.5.1
/usr/lib/x86_64-linux-gnu/libcudnn_ops.so.9.5.1
/usr/local/cuda-12.5/targets/x86_64-linux/lib/libcudnn.so.8.9.0
/usr/local/cuda-12.5/targets/x86_64-linux/lib/libcudnn_adv_infer.so.8.9.0
/usr/local/cuda-12.5/targets/x86_64-linux/lib/libcudnn_adv_train.so.8.9.0
/usr/local/cuda-12.5/targets/x86_64-linux/lib/libcudnn_cnn_infer.so.8.9.0
/usr/local/cuda-12.5/targets/x86_64-linux/lib/libcudnn_cnn_train.so.8.9.0
/usr/local/cuda-12.5/targets/x86_64-linux/lib/libcudnn_ops_infer.so.8.9.0
/usr/local/cuda-12.5/targets/x86_64-linux/lib/libcudnn_ops_train.so.8.9.0
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 39 bits physical, 48 bits virtual
Byte Order: Little Endian
CPU(s): 32
On-line CPU(s) list: 0-31
Vendor ID: GenuineIntel
Model name: Intel(R) Core(TM) i9-14900KF
CPU family: 6
Model: 183
Thread(s) per core: 2
Core(s) per socket: 24
Socket(s): 1
Stepping: 1
CPU max MHz: 6000.0000
CPU min MHz: 800.0000
BogoMIPS: 6374.40
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc art arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc cpuid aperfmperf tsc_known_freq pni pclmulqdq dtes64 monitor ds_cpl vmx est tm2 ssse3 sdbg fma cx16 xtpr pdcm pcid sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm 3dnowprefetch cpuid_fault epb ssbd ibrs ibpb stibp ibrs_enhanced tpr_shadow flexpriority ept vpid ept_ad fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid rdseed adx smap clflushopt clwb intel_pt sha_ni xsaveopt xsavec xgetbv1 xsaves split_lock_detect user_shstk avx_vnni dtherm ida arat pln pts hwp hwp_notify hwp_act_window hwp_epp hwp_pkg_req hfi vnmi umip pku ospke waitpkg gfni vaes vpclmulqdq rdpid movdiri movdir64b fsrm md_clear serialize arch_lbr ibt flush_l1d arch_capabilities
Virtualization: VT-x
L1d cache: 896 KiB (24 instances)
L1i cache: 1.3 MiB (24 instances)
L2 cache: 32 MiB (12 instances)
L3 cache: 36 MiB (1 instance)
NUMA node(s): 1
NUMA node0 CPU(s): 0-31
Vulnerability Gather data sampling: Not affected
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Not affected
Vulnerability Reg file data sampling: Mitigation; Clear Register File
Vulnerability Retbleed: Not affected
Vulnerability Spec rstack overflow: Not affected
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Enhanced / Automatic IBRS; IBPB conditional; RSB filling; PBRSB-eIBRS SW sequence; BHI BHI_DIS_S
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Versions of relevant libraries:
[pip3] flake8==7.1.2
[pip3] mypy==1.15.0
[pip3] mypy-extensions==1.0.0
[pip3] numpy==1.26.4
[pip3] numpydoc==1.8.0
[pip3] nvidia-cublas-cu12==12.4.5.8
[pip3] nvidia-cuda-cupti-cu12==12.4.127
[pip3] nvidia-cuda-nvrtc-cu12==12.4.127
[pip3] nvidia-cuda-runtime-cu12==12.4.127
[pip3] nvidia-cudnn-cu12==9.1.0.70
[pip3] nvidia-cufft-cu12==11.2.1.3
[pip3] nvidia-curand-cu12==10.3.5.147
[pip3] nvidia-cusolver-cu12==11.6.1.9
[pip3] nvidia-cusparse-cu12==12.3.1.170
[pip3] nvidia-nccl-cu12==2.21.5
[pip3] nvidia-nvjitlink-cu12==12.4.127
[pip3] nvidia-nvtx-cu12==12.4.127
[pip3] torch==2.5.1
[pip3] torchaudio==2.5.1
[pip3] torchvision==0.20.1
[pip3] triton==3.1.0
[conda] cuda-cudart-dev_linux-64 12.8.57 h3f2d84a_1 conda-forge
[conda] cuda-cudart-static_linux-64 12.8.57 h3f2d84a_1 conda-forge
[conda] cuda-cudart_linux-64 12.8.57 h3f2d84a_1 conda-forge
[conda] cuda-nvrtc 12.8.61 hbd13f7d_0 conda-forge
[conda] libcublas 12.8.3.14 h9ab20c4_0 conda-forge
[conda] libcufft 11.3.3.41 hbd13f7d_0 conda-forge
[conda] libcurand 10.3.9.55 hbd13f7d_0 conda-forge
[conda] libcusolver 11.7.2.55 h9ab20c4_0 conda-forge
[conda] libcusparse 12.5.7.53 hbd13f7d_0 conda-forge
[conda] libnvjitlink 12.8.61 hbd13f7d_0 conda-forge
[conda] mkl 2024.2.2 ha957f24_16 conda-forge
[conda] numpy 1.26.4 pypi_0 pypi
[conda] numpydoc 1.8.0 pypi_0 pypi
[conda] nvidia-cublas-cu12 12.4.5.8 pypi_0 pypi
[conda] nvidia-cuda-cupti-cu12 12.4.127 pypi_0 pypi
[conda] nvidia-cuda-nvrtc-cu12 12.4.127 pypi_0 pypi
[conda] nvidia-cuda-runtime-cu12 12.4.127 pypi_0 pypi
[conda] nvidia-cudnn-cu12 9.1.0.70 pypi_0 pypi
[conda] nvidia-cufft-cu12 11.2.1.3 pypi_0 pypi
[conda] nvidia-curand-cu12 10.3.5.147 pypi_0 pypi
[conda] nvidia-cusolver-cu12 11.6.1.9 pypi_0 pypi
[conda] nvidia-cusparse-cu12 12.3.1.170 pypi_0 pypi
[conda] nvidia-nccl-cu12 2.21.5 pypi_0 pypi
[conda] nvidia-nvjitlink-cu12 12.4.127 pypi_0 pypi
[conda] nvidia-nvtx-cu12 12.4.127 pypi_0 pypi
[conda] torch 2.5.1 pypi_0 pypi
[conda] torchaudio 2.5.1 pypi_0 pypi
[conda] torchvision 0.20.1 pypi_0 pypi
[conda] triton 3.1.0 pypi_0 pypi
cc @robieta @chaekit @guotuofeng @guyang3532 @dzhulgakov @davidberard98 @briancoutinho @sraikund16 @sanrise
| true
|
2,868,226,483
|
[dtensor][cp] experiment: try e2e cp flex_attention
|
XilunWu
|
open
|
[
"oncall: distributed",
"topic: not user facing",
"ciflow/inductor"
] | 1
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #147603
* #147517
* #147516
* #147515
* #147514
* #145353
cc @H-Huang @awgu @kwen2501 @wanchaol @fegin @fduwjj @wz337 @wconstab @d4l3k @c-p-i-o
| true
|
2,868,207,478
|
UnsupportedOperatorError: Exporting the operator 'aten::_make_per_tensor_quantized_tensor ' to ONNX opset version 11
|
wangqianscu
|
open
|
[
"module: onnx",
"triaged"
] | 3
|
NONE
|
### 🐛 Describe the bug
When I export the torch model to ONNX via torch.onnx.export(...), it raises the error: UnsupportedOperatorError: Exporting the operator 'aten::_make_per_tensor_quantized_tensor' to ONNX opset version 11.
I also tried opsets 12 and 17, which are not supported either.
Then I tried to use a custom op:
```
def make_per(g, x, scale, zp):
return g.op('QuantizeLinear', x, scale, zp, dtype_i=torch.onnx.symbolic_helper.cast_pytorch_to_onnx['Byte'])
torch.onnx.register_custom_op_symbolic('aten::_make_per_tensor_quantized_tensor', make_per, 11)
torch.onnx.export(...)
```
it raised the error: "RuntimeError: ArrayRef: invalid index Index = 11; Length = 11".
How should I deal with this?
### Versions
ONNX versions 1.12.0 and 1.14.0 both did not work. torch version: 2.2
| true
|
2,868,164,569
|
[CI] Reduce the AOT target list to reduce build time
|
chuanqi129
|
closed
|
[
"open source",
"Merged",
"ciflow/trunk",
"topic: not user facing",
"ciflow/xpu"
] | 6
|
COLLABORATOR
|
Fixes #ISSUE_NUMBER
| true
|
2,868,150,722
|
DISABLED test_sdpa_rewriter_14_cuda (__main__.SDPAPatternRewriterCudaDynamicTests)
|
pytorch-bot[bot]
|
open
|
[
"module: rocm",
"triaged",
"module: flaky-tests",
"skipped",
"oncall: pt2",
"module: inductor"
] | 1
|
NONE
|
Platforms: rocm
This test was disabled because it is failing in CI. See [recent examples](https://hud.pytorch.org/flakytest?name=test_sdpa_rewriter_14_cuda&suite=SDPAPatternRewriterCudaDynamicTests&limit=100) and the most recent trunk [workflow logs](https://github.com/pytorch/pytorch/runs/37581378760).
Over the past 3 hours, it has been determined flaky in 3 workflow(s) with 3 failures and 3 successes.
**Debugging instructions (after clicking on the recent samples link):**
DO NOT ASSUME THINGS ARE OKAY IF THE CI IS GREEN. We now shield flaky tests from developers so CI will thus be green but it will be harder to parse the logs.
To find relevant log snippets:
1. Click on the workflow logs linked above
2. Click on the Test step of the job so that it is expanded. Otherwise, the grepping will not work.
3. Grep for `test_sdpa_rewriter_14_cuda`
4. There should be several instances run (as flaky tests are rerun in CI) from which you can study the logs.
<details><summary>Sample error message</summary>
```
Traceback (most recent call last):
File "/var/lib/jenkins/pytorch/test/inductor/test_fused_attention.py", line 707, in _test_sdpa_rewriter_14
self._check_common(dot_prod_attention)
File "/var/lib/jenkins/pytorch/test/inductor/test_fused_attention.py", line 85, in _check_common
self.assertGreaterEqual(counters["inductor"]["fuse_attention"], 1)
File "/opt/conda/envs/py_3.10/lib/python3.10/unittest/case.py", line 1250, in assertGreaterEqual
self.fail(self._formatMessage(msg, standardMsg))
File "/opt/conda/envs/py_3.10/lib/python3.10/unittest/case.py", line 675, in fail
raise self.failureException(msg)
AssertionError: 0 not greater than or equal to 1
To execute this test, run the following from the base repo dir:
PYTORCH_TEST_WITH_ROCM=1 python test/inductor/test_fused_attention.py SDPAPatternRewriterCudaDynamicTests.test_sdpa_rewriter_14_cuda
This message can be suppressed by setting PYTORCH_PRINT_REPRO_ON_FAILURE=0
```
</details>
Test file path: `inductor/test_fused_attention.py`
cc @jeffdaily @sunway513 @jithunnair-amd @pruthvistony @ROCmSupport @dllehr-amd @jataylo @hongxiayang @naromero77amd @clee2000 @wdvr @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
| true
|
2,868,144,188
|
Fixed abnormal behavior of LazyLinear when using LayzLinear and load_state together
|
FFFrog
|
closed
|
[
"open source",
"Merged",
"ciflow/trunk",
"topic: not user facing"
] | 15
|
COLLABORATOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #147599
Update Points:
- Update the logic of ``initialize_parameters``
- Add new testcases
The ISSUE Related:
https://github.com/pytorch/pytorch/issues/147389
| true
|
2,868,123,392
|
Turn onnx functions into static
|
cyyever
|
closed
|
[
"oncall: jit",
"open source",
"Merged",
"release notes: jit"
] | 6
|
COLLABORATOR
|
To avoid exposing ONNX symbols.
cc @EikanWang @jgong5 @wenzhe-nrv @sanchitintel
| true
|
2,868,111,928
|
[Dynamo] WeakRefVariable doesn't use the most updated python referent when call_function is executed
|
yanboliang
|
open
|
[
"triaged",
"oncall: pt2",
"module: dynamo"
] | 3
|
CONTRIBUTOR
|
### 🐛 Describe the bug
Repro:
```
import torch
import weakref
@torch.compile(backend="eager", fullgraph=True)
def fn(y):
obj = torch.tensor([1.0, 2.0])
weak_ref = weakref.ref(obj)
if weak_ref() is None:
a = y + 1
else:
a = y - 1
del obj
if weak_ref() is None:
b = y + 1
else:
b = y - 1
return a, b
y = torch.ones(2, 3)
print(fn(y))
```
Compiled output:
```
(tensor([[0., 0., 0.],
[0., 0., 0.]]), tensor([[0., 0., 0.],
[0., 0., 0.]]))
```
Eager output:
```
(tensor([[0., 0., 0.],
[0., 0., 0.]]), tensor([[2., 2., 2.],
[2., 2., 2.]]))
```
I think the root cause is, we should check the original python referent every time when ```WeakRefVariable.call_function``` is executed.
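For reference, here is the eager-Python semantics the compiled graph should reproduce (a minimal standalone sketch, using a plain object instead of a tensor):
```python
import weakref

class Obj:
    pass

obj = Obj()
ref = weakref.ref(obj)
print(ref() is None)  # False: the referent is still alive
del obj               # last reference dropped; CPython collects it immediately
print(ref() is None)  # True: the weak reference is now dead
```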
### Versions
main
cc @chauhang @penguinwu @voznesenskym @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @amjames
| true
|
2,868,089,014
|
The performance of model with TP worse than without TP in CPU
|
jiqing-feng
|
closed
|
[
"oncall: distributed",
"triaged"
] | 21
|
NONE
|
### 🐛 Describe the bug
model: meta-llama/Llama-3.1-8B-Instruct
input shape: [1, 512]
latency is forward latency
instance: Intel 4th Gen Xeon SPR (1 numa node for 1 socket)
base image: gar-registry.caas.intel.com/pytorch/pytorch-ipex-spr:cpu-device
torch version:
intel_extension_for_pytorch 2.6.0
torch 2.6.0+cpu
CCL needs this [PR](https://github.com/intel-innersource/frameworks.ai.pytorch.torch-ccl/pull/231)
transformers needs this [PR](https://github.com/huggingface/transformers/pull/36299)
To build CCL, it is better to use Intel oneAPI:
```
apt-get update && apt-get install -y gpg-agent
wget -O- https://apt.repos.intel.com/intel-gpg-keys/GPG-PUB-KEY-INTEL-SW-PRODUCTS.PUB | gpg --dearmor | tee /usr/share/keyrings/oneapi-archive-keyring.gpg > /dev/null && echo "deb [signed-by=/usr/share/keyrings/oneapi-archive-keyring.gpg] https://apt.repos.intel.com/oneapi all main" | tee /etc/apt/sources.list.d/oneAPI.list
apt-get update
apt-get install -y numactl build-essential python3-dev git
apt-get install -y intel-basekit
source /opt/intel/oneapi/setvars.sh
```
CMD:
2TP: `OMP_NUM_THREADS=56 numactl -C 0-55 -m 0 torchrun --nnodes=2 --node_rank=0 --master_addr="127.0.0.1" --master_port=29500 --nproc-per-node 1 tp_hf.py & OMP_NUM_THREADS=56 numactl -C 56-111 -m 1 torchrun --nnodes=2 --node_rank=1 --master_addr="127.0.0.1" --master_port=29500 --nproc-per-node 1 tp_hf.py & wait`
no TP: `OMP_NUM_THREADS=56 numactl -C 0-55 -m 0 python tp_hf.py`
```python
import os
import torch.distributed as dist
from transformers import AutoTokenizer, AutoModelForCausalLM, AutoModel
import time
import torch
import torch.profiler
import oneccl_bindings_for_pytorch
from pyinstrument import Profiler
import torch
import os
import torch.backends.mkldnn
import torch.backends.openmp
print(f"Using {torch.get_num_threads()} threads (PyTorch)")
print(f"OMP_NUM_THREADS={os.getenv('OMP_NUM_THREADS')}")
# # Ensure PyTorch respects the OMP setting
# torch.set_num_threads(int(os.getenv("OMP_NUM_THREADS", "56")))
# print(f"Now using {torch.get_num_threads()} threads after setting manually")
model_id = "meta-llama/Llama-3.1-8B-Instruct"
def main(is_tp, rank, world_size) -> None:
backend = "ccl"
print(is_tp)
if is_tp:
dist.init_process_group(backend)
model_kwargs = dict(torch_dtype=torch.bfloat16)
if is_tp:
model_kwargs["tp_plan"] = "auto"
else:
model_kwargs["device_map"] = "cpu"
# Retrieve tensor parallel model
model = AutoModelForCausalLM.from_pretrained(model_id, **model_kwargs)
print(model.dtype)
# Prepare input tokens
tokenizer = AutoTokenizer.from_pretrained(model_id)
prompt = "Can I help" * 200
inputs = tokenizer(prompt, return_tensors="pt", max_length=512).input_ids.to(model.device)
print(f"inpu shape is {inputs.shape}")
# model = torch.compile(model)
# warm-up
if is_tp:
dist.barrier()
for i in range(5):
with torch.no_grad():
outputs = model(inputs)
if is_tp:
dist.barrier()
# profiler = Profiler()
# profiler.start()
# with torch.profiler.profile(
# activities=[
# torch.profiler.ProfilerActivity.CPU,
# torch.profiler.ProfilerActivity.CUDA,
# ],
# ) as prof:
for i in range(5):
with torch.no_grad():
start = time.time()
outputs = model(inputs)
end = time.time()
print(f"time cost {(end-start)*1000} ms")
# print(prof.key_averages().table(sort_by="cpu_time_total", row_limit=10))
# profiler.stop()
# with open(f"profile_tp_{is_tp}_backend_{backend}_rank_{rank}.html", "w") as f:
# f.write(profiler.output_html())
count = 0
for name, parameter in model.named_parameters():
if isinstance(parameter.data, torch.distributed.tensor.DTensor):
print(f"name: {name}\nparameter: {parameter}")
original_shape = parameter.data.shape
shape = parameter.data.to_local().shape
print(f"paramater local shape is {shape}")
print(f"paramater original shape is {original_shape}")
count += 1
if count > 2:
break
print(outputs)
if __name__ == "__main__":
rank = int(os.environ["RANK"]) if "RANK" in os.environ else 0
world_size = int(os.environ["WORLD_SIZE"]) if "WORLD_SIZE" in os.environ else 1
is_tp = "RANK" in os.environ
main(is_tp, rank, world_size)
```
```
| tp_size | no | 2 |
| speed-up | 1.0x | 0.29x |
```
### Versions
```
Collecting environment information...
PyTorch version: 2.6.0+cpu
Is debug build: False
CUDA used to build PyTorch: None
ROCM used to build PyTorch: N/A
OS: Ubuntu 22.04.5 LTS (x86_64)
GCC version: (Ubuntu 12.3.0-1ubuntu1~22.04) 12.3.0
Clang version: Could not collect
CMake version: version 3.31.4
Libc version: glibc-2.35
Python version: 3.10.16 | packaged by conda-forge | (main, Dec 5 2024, 14:16:10) [GCC 13.3.0] (64-bit runtime)
Python platform: Linux-4.18.0-553.16.1.el8_10.x86_64-x86_64-with-glibc2.35
Is CUDA available: False
CUDA runtime version: No CUDA
CUDA_MODULE_LOADING set to: N/A
GPU models and configuration: No CUDA
Nvidia driver version: No CUDA
cuDNN version: No CUDA
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 52 bits physical, 57 bits virtual
Byte Order: Little Endian
CPU(s): 224
On-line CPU(s) list: 0-223
Vendor ID: GenuineIntel
BIOS Vendor ID: Intel(R) Corporation
Model name: Intel(R) Xeon(R) Platinum 8480L
BIOS Model name: Intel(R) Xeon(R) Platinum 8480L
CPU family: 6
Model: 143
Thread(s) per core: 2
Core(s) per socket: 56
Socket(s): 2
Stepping: 7
CPU max MHz: 3800.0000
CPU min MHz: 800.0000
BogoMIPS: 4000.00
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc art arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc cpuid aperfmperf tsc_known_freq pni pclmulqdq dtes64 monitor ds_cpl smx est tm2 ssse3 sdbg fma cx16 xtpr pdcm pcid dca sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm 3dnowprefetch cpuid_fault epb cat_l3 cat_l2 cdp_l3 invpcid_single intel_ppin cdp_l2 ssbd mba ibrs ibpb stibp ibrs_enhanced fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid cqm rdt_a avx512f avx512dq rdseed adx smap avx512ifma clflushopt clwb intel_pt avx512cd sha_ni avx512bw avx512vl xsaveopt xsavec xgetbv1 xsaves cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local split_lock_detect avx_vnni avx512_bf16 wbnoinvd dtherm ida arat pln pts hwp hwp_act_window hwp_epp hwp_pkg_req avx512vbmi umip pku ospke waitpkg avx512_vbmi2 gfni vaes vpclmulqdq avx512_vnni avx512_bitalg tme avx512_vpopcntdq la57 rdpid bus_lock_detect cldemote movdiri movdir64b enqcmd fsrm md_clear serialize tsxldtrk pconfig arch_lbr amx_bf16 avx512_fp16 amx_tile amx_int8 flush_l1d arch_capabilities
L1d cache: 5.3 MiB (112 instances)
L1i cache: 3.5 MiB (112 instances)
L2 cache: 224 MiB (112 instances)
L3 cache: 210 MiB (2 instances)
NUMA node(s): 2
NUMA node0 CPU(s): 0-55,112-167
NUMA node1 CPU(s): 56-111,168-223
Vulnerability Gather data sampling: Not affected
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Not affected
Vulnerability Reg file data sampling: Not affected
Vulnerability Retbleed: Not affected
Vulnerability Spec rstack overflow: Not affected
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Enhanced / Automatic IBRS; IBPB conditional; RSB filling; PBRSB-eIBRS SW sequence; BHI BHI_DIS_S
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Versions of relevant libraries:
[pip3] intel_extension_for_pytorch==2.6.0
[pip3] numpy==2.1.3
[pip3] torch==2.6.0+cpu
[pip3] torchaudio==2.6.0+cpu
[pip3] torchvision==0.21.0+cpu
[conda] intel-extension-for-pytorch 2.6.0 pypi_0 pypi
[conda] mkl 2025.0.1 pypi_0 pypi
[conda] numpy 2.1.3 pypi_0 pypi
[conda] torch 2.6.0+cpu pypi_0 pypi
[conda] torchaudio 2.6.0+cpu pypi_0 pypi
[conda] torchvision 0.21.0+cpu pypi_0 pypi
```
cc @H-Huang @awgu @kwen2501 @wanchaol @fegin @fduwjj @wz337 @wconstab @d4l3k @c-p-i-o
| true
|
2,868,089,001
|
Non-Determinism in Faster R-CNN Despite Setting All Deterministic Flags
|
mbar0075
|
open
|
[
"triaged",
"module: determinism"
] | 0
|
NONE
|
### 🐛 Describe the bug
I am encountering a `RuntimeError` when running Faster R-CNN with `torch.use_deterministic_algorithms(True)`. Despite setting all known deterministic flags, the following error persists:
```
RuntimeError: roi_align_backward_kernel does not have a deterministic implementation, but you set 'torch.use_deterministic_algorithms(True)'
```
Here is the code I use to enforce determinism:
```python
import torch
import os
import random
import numpy as np
seed = 42
torch.manual_seed(seed)
torch.cuda.manual_seed(seed)
torch.cuda.manual_seed_all(seed) # For multi-GPU
torch.use_deterministic_algorithms(True)
os.environ['CUBLAS_WORKSPACE_CONFIG'] = ':4096:8'
os.environ['CUDNN_DETERMINISTIC'] = '1'
torch.backends.cudnn.deterministic = True
torch.backends.cudnn.benchmark = False # Ensures full determinism
random.seed(seed)
np.random.seed(seed)
```
Despite applying these settings, the script still throws the `roi_align_backward_kernel` error, indicating that the RoI Align backward pass lacks a deterministic implementation. I am using PyTorch 2.5.1+cu124. Are there any known workarounds for enforcing determinism while using Faster R-CNN?
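One possible mitigation, sketched here under the assumption that a warning (rather than a hard error) is acceptable for ops such as `roi_align_backward` that have no deterministic kernel, is the `warn_only` flag:
```python
import torch

# Keep deterministic algorithms on, but downgrade the error for ops that
# lack a deterministic implementation (e.g. roi_align_backward) to a warning.
torch.use_deterministic_algorithms(True, warn_only=True)
```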
### Versions
Non-Determinism in Faster R-CNN Despite Setting All Deterministic Flags
Thank you in advance for your help.
cc @mruberry @kurtamohler
| true
|
2,868,069,405
|
[Dtensor] Pass device information in OffsetBasedRNGTracker
|
ankurneog
|
open
|
[
"oncall: distributed",
"triaged",
"open source",
"Stale",
"topic: not user facing"
] | 8
|
CONTRIBUTOR
|
Fixes https://github.com/pytorch/pytorch/issues/147584
```OffsetBasedRNGTracker``` called without arguments will set default device type to cuda
https://github.com/pytorch/pytorch/blob/533b884870acd951e684e0bf551eb76904dec047/torch/distributed/tensor/_random.py#L105
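A minimal sketch of the intended change (the constructor argument is assumed from the traceback in the linked issue, where `__init__` forwards a `device_type`):
```python
from torch.distributed.tensor import _random as dtensor_random

def init_rng_tracker(device_mesh) -> None:
    # Hedged sketch: pass the mesh's device type explicitly instead of
    # relying on the CUDA default of OffsetBasedRNGTracker().
    if dtensor_random._rng_tracker is None:
        dtensor_random._rng_tracker = dtensor_random.OffsetBasedRNGTracker(
            device_mesh.device_type
        )
```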
cc @H-Huang @awgu @kwen2501 @wanchaol @fegin @fduwjj @wz337 @wconstab @d4l3k @c-p-i-o
| true
|
2,868,046,768
|
Define USE_C10D_XCCL and USE_XCCL in pytorch
|
Chao1Han
|
open
|
[
"open source",
"release notes: xpu"
] | 23
|
CONTRIBUTOR
|
### Motivation:
Add `USE_XCCL` and `USE_C10D_XCCL` to enable support of XCCL backend building in stock PyTorch, similar to `USE_NCCL` and `USE_C10D_NCCL`.
By default, `USE_XCCL` is OFF and can be explicitly set to ON.
| true
|
2,868,019,075
|
Fix log2, PowByNatural printing
|
isuruf
|
closed
|
[
"module: cpu",
"open source",
"Merged",
"ciflow/trunk",
"module: inductor",
"ciflow/inductor",
"release notes: inductor"
] | 6
|
COLLABORATOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #147592
cc @jgong5 @mingfeima @XiaobingSuper @sanchitintel @ashokei @jingxu10 @voznesenskym @penguinwu @EikanWang @Guobing-Chen @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
| true
|
2,867,991,595
|
[outdated][experimental] delayed compile
|
bobrenjc93
|
closed
|
[
"ciflow/trunk",
"release notes: fx",
"fx",
"module: dynamo",
"ciflow/inductor"
] | 3
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #147591
cc @ezyang @SherlockNoMad @EikanWang @jgong5 @wenzhe-nrv @voznesenskym @penguinwu @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @jiayisunx @chenyang78 @kadeng @chauhang @amjames
Differential Revision: [D69996869](https://our.internmc.facebook.com/intern/diff/D69996869)
| true
|
2,867,990,997
|
[cutlass backend] cache_clear algorithm select cache on fresh inductor cache
|
henrylhtsang
|
closed
|
[
"fb-exported",
"Merged",
"ciflow/trunk",
"topic: not user facing",
"module: inductor",
"ciflow/inductor"
] | 8
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #147590
Differential Revision: [D69959917](https://our.internmc.facebook.com/intern/diff/D69959917/)
AlgorithmSelectorCache is a cache. The expectation is that when we force-disable caches + clear inductor caches, it would be cleared. However, that is not the case.
The reason why this is a problem can be seen by following this repro:
What we will see is
```
SingleProcess AUTOTUNE benchmarking takes 6.2202 seconds and 46.0568 seconds precompiling for 36 choices
SingleProcess AUTOTUNE benchmarking takes 492.3141 seconds and 0.0010 seconds precompiling for 36 choices
```
The root cause is that precompiling is skipped (AlgorithmSelectorCache still holds the entries), while autotuning isn't skipped, since we force-disable caches.
repro:
```
import logging
import os
os.environ["TORCH_LOGS"] = "+output_code,+benchmarking,+inductor"
import torch
import torch._inductor.config
from torch._inductor.utils import clear_inductor_caches
torch._inductor.config.max_autotune = True
torch._inductor.config.force_disable_caches = True
torch._inductor.config.autotune_num_choices_displayed = None
torch._inductor.config.max_autotune_gemm_backends = "CUTLASS"
torch._inductor.config.autotune_fallback_to_aten = False
torch._inductor.config.cuda.cutlass_instantiation_level = "0001"
def main():
M, N, K = 2048, 2048, 2048
dtype = torch.bfloat16
A = torch.randn(M, K, device="cuda", dtype=dtype)
B = torch.randn(K, N, device="cuda", dtype=dtype)
for _ in range(2):
torch._dynamo.reset()
clear_inductor_caches()
compiled_model = torch.compile(torch.mm, fullgraph=True)
_ = compiled_model(A, B)
print("done")
if __name__ == "__main__":
main()
```
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
| true
|
2,867,986,203
|
check if force_disable_caches before using precompile cache
|
henrylhtsang
|
closed
|
[
"fb-exported",
"topic: not user facing",
"module: inductor",
"ciflow/inductor"
] | 2
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #147589
Differential Revision: [D69966889](https://our.internmc.facebook.com/intern/diff/D69966889/)
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
| true
|
2,867,982,584
|
Also support non-contiguous activation for torch._weight_int8pack_mm on CPU
|
sanchitintel
|
closed
|
[
"module: cpu",
"open source",
"Merged",
"ciflow/trunk",
"topic: not user facing",
"intel"
] | 9
|
COLLABORATOR
|
### Problem
Non-contiguous activation for `torch._weight_int8pack_mm` is unsupported on CPU.
So, with int8 weight-only quantization (WoQ) with BF16 activation in torchao, for batch size 2 and above, an assertion is hit because non-contiguous A is unsupported. Such an issue was encountered with LLaMA models.
### Solution
Also support non-contiguous activation for `torch._weight_int8pack_mm`, so long as it's contiguous on the last dimension & remove the assertion that requires contiguous activation.
### Alternative solutions considered
Could modify LLaMA model in transformers library to call `contiguous` after obtaining the final hidden state, just before computing logits with the LM head. However, [it](https://github.com/huggingface/transformers/pull/36078) might cause some regression for other users of that code.
Another aspect to this issue is - is latency always lower if we make an activation tensor contiguous before linear or `torch._weight_int8pack_mm` is called on CPU? I guess we need some data-points to analyze this part, although I think the performance should be good enough with this patch, since the first cache lines of rows of A are being explicitly prefetched in the existing code (and it also avoids copy, which a `contiguous` call would do).
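For illustration, a small standalone sketch (not code from this patch) of the relaxed condition described above: the activation may be non-contiguous overall as long as its last dimension is dense.
```python
import torch

A = torch.randn(4, 8, dtype=torch.bfloat16).t()           # transposed: last dim is not dense
print(A.is_contiguous(), A.stride(-1) == 1)                # False False -> still rejected

B = torch.randn(2, 4, 8, dtype=torch.bfloat16)[:, 0, :]    # sliced: last dim is dense
print(B.is_contiguous(), B.stride(-1) == 1)                # False True -> accepted under the relaxed check
```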
cc @jgong5 @mingfeima @XiaobingSuper @ashokei @jingxu10
| true
|
2,867,981,679
|
Add unique kernel name support for user defined triton kernel
|
muchulee8
|
closed
|
[
"fb-exported",
"Merged",
"ciflow/trunk",
"topic: not user facing",
"module: inductor",
"ciflow/inductor"
] | 6
|
CONTRIBUTOR
|
Summary:
Add unique_user_kernel_names, which mimics what unique_kernel_names does, but for user-defined Triton kernels.
This does rewrite the copied kernel source and modifies non-Inductor-generated code, so we split it out from unique_kernel_names, where we have more control over all naming and generation.
Test Plan: Only used for debug purpose
Differential Revision: D69966608
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @amjames @chauhang @aakhundov
| true
|
2,867,975,674
|
[cutlass backend] clear_on_fresh_inductor_cache when generatings cutlass ops
|
henrylhtsang
|
closed
|
[
"fb-exported",
"Merged",
"ciflow/trunk",
"topic: not user facing",
"module: inductor",
"ciflow/inductor"
] | 4
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #147586
Differential Revision: [D69966732](https://our.internmc.facebook.com/intern/diff/D69966732/)
This is needed if we want to generate cutlass ops with different instantiation level in one session.
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
| true
|
2,867,958,778
|
[SDPA backward error]Error detected in ScaledDotProductEfficientAttentionBackward0 when input seqlen is very long and with attn_mask
|
tianyan01
|
open
|
[
"triaged",
"module: sdpa"
] | 0
|
NONE
|
### 🐛 Describe the bug
Here is a minimal example. When I set seqlen=53936 and pass an attn_mask, it raises the error "Error detected in ScaledDotProductEfficientAttentionBackward0". But when I set seqlen=46344, or remove the attn_mask, it runs fine. The threshold is seqlen=46344; once seqlen > 46344, it fails.
```
import torch
import torch.nn.functional as F
from torch.nn.attention import SDPBackend, sdpa_kernel
torch.autograd.set_detect_anomaly(True)
dtype = torch.bfloat16
seqlen = 53936 # seqlen = 46344 # no error
query = torch.randn(1, 3, seqlen, 128, device="cuda", dtype=dtype, requires_grad=True)
key = torch.randn(1, 3, seqlen, 128, device="cuda", dtype=dtype, requires_grad=True)
value = torch.randn(1, 3, seqlen, 128, device="cuda", dtype=dtype, requires_grad=True)
condition_sequence_length = 256
latent_sequence_length = seqlen - condition_sequence_length
attention_mask = torch.zeros(
1, seqlen, seqlen, device=query.device, dtype=torch.bool
)
effective_condition_sequence_length = 255 # suppose
effective_sequence_length = latent_sequence_length + effective_condition_sequence_length
attention_mask[0, : effective_sequence_length, : effective_sequence_length] = True
attention_mask = attention_mask.unsqueeze(1) # attention_mask = None # no error
res = F.scaled_dot_product_attention(
query, key, value, attn_mask=attention_mask, dropout_p=0.0, is_causal=False
)
loss = res.mean()
loss.backward()
```
Error message:
```
lib/python3.10/site-packages/torch/autograd/graph.py:825: UserWarning: Error detected in ScaledDotProductEfficientAttentionBackward0.
....
RuntimeError: CUDA error: an illegal memory access was encountered
CUDA kernel errors might be asynchronously reported at some other API call, so the stacktrace below might be incorrect.
For debugging consider passing CUDA_LAUNCH_BLOCKING=1
Compile with `TORCH_USE_CUDA_DSA` to enable device-side assertions.
```
### Versions
PyTorch version: 2.5.1+cu124
Is debug build: False
CUDA used to build PyTorch: 12.4
ROCM used to build PyTorch: N/A
OS: Ubuntu 20.04.5 LTS (x86_64)
GCC version: (GCC) 12.2.0
Clang version: 3.8.0 (tags/RELEASE_380/final)
CMake version: Could not collect
Libc version: glibc-2.31
Python version: 3.10.16 (main, Dec 11 2024, 16:24:50) [GCC 11.2.0] (64-bit runtime)
Python platform: Linux-5.10.0-2.0.0.1-x86_64-with-glibc2.31
Is CUDA available: True
CUDA runtime version: 12.4.99
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration:
GPU 0: CF-NG-HZZ1-O
GPU 1: CF-NG-HZZ1-O
GPU 2: CF-NG-HZZ1-O
GPU 3: CF-NG-HZZ1-O
GPU 4: CF-NG-HZZ1-O
GPU 5: CF-NG-HZZ1-O
GPU 6: CF-NG-HZZ1-O
GPU 7: CF-NG-HZZ1-O
Nvidia driver version: 535.161.08
cuDNN version: Probably one of the following:
/usr/lib/libcudnn.so.8.9.1
/usr/lib/libcudnn_adv_infer.so.8.9.1
/usr/lib/libcudnn_adv_train.so.8.9.1
/usr/lib/libcudnn_cnn_infer.so.8.9.1
/usr/lib/libcudnn_cnn_train.so.8.9.1
/usr/lib/libcudnn_ops_infer.so.8.9.1
/usr/lib/libcudnn_ops_train.so.8.9.1
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Byte Order: Little Endian
Address sizes: 52 bits physical, 57 bits virtual
CPU(s): 192
On-line CPU(s) list: 0-191
Thread(s) per core: 2
Core(s) per socket: 48
Socket(s): 2
NUMA node(s): 2
Vendor ID: GenuineIntel
CPU family: 6
Model: 143
Model name: Intel(R) Xeon(R) Platinum 8468V
Stepping: 8
CPU MHz: 2900.000
CPU max MHz: 3800.0000
CPU min MHz: 800.0000
BogoMIPS: 4800.00
Virtualization: VT-x
L1d cache: 4.5 MiB
L1i cache: 3 MiB
L2 cache: 192 MiB
L3 cache: 195 MiB
NUMA node0 CPU(s): 0-47,96-143
NUMA node1 CPU(s): 48-95,144-191
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl and seccomp
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Enhanced IBRS, IBPB conditional, RSB filling
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc art arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc cpuid aperfmperf tsc_known_freq pni pclmulqdq dtes64 monitor ds_cpl vmx smx est tm2 ssse3 sdbg fma cx16 xtpr pdcm pcid dca sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm 3dnowprefetch cpuid_fault epb cat_l3 cat_l2 cdp_l3 invpcid_single cdp_l2 ssbd mba ibrs ibpb stibp ibrs_enhanced tpr_shadow vnmi flexpriority ept vpid ept_ad fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid cqm rdt_a avx512f avx512dq rdseed adx smap avx512ifma clflushopt clwb intel_pt avx512cd sha_ni avx512bw avx512vl xsaveopt xsavec xgetbv1 xsaves cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local split_lock_detect avx_vnni avx512_bf16 wbnoinvd dtherm ida arat pln pts hfi avx512vbmi umip pku waitpkg avx512_vbmi2 gfni vaes vpclmulqdq avx512_vnni avx512_bitalg tme avx512_vpopcntdq rdpid bus_lock_detect cldemote movdiri movdir64b enqcmd fsrm md_clear serialize tsxldtrk pconfig arch_lbr amx_bf16 avx512_fp16 amx_tile amx_int8 flush_l1d arch_capabilities
Versions of relevant libraries:
[pip3] numpy==2.2.1
[pip3] nvidia-cublas-cu12==12.4.5.8
[pip3] nvidia-cuda-cupti-cu12==12.4.127
[pip3] nvidia-cuda-nvrtc-cu12==12.4.127
[pip3] nvidia-cuda-runtime-cu12==12.4.127
[pip3] nvidia-cudnn-cu12==9.3.0.75
[pip3] nvidia-cufft-cu12==11.2.1.3
[pip3] nvidia-curand-cu12==10.3.5.147
[pip3] nvidia-cusolver-cu12==11.6.1.9
[pip3] nvidia-cusparse-cu12==12.3.1.170
[pip3] nvidia-nccl-cu12==2.21.5
[pip3] nvidia-nvjitlink-cu12==12.4.127
[pip3] nvidia-nvtx-cu12==12.4.127
[pip3] torch==2.5.1
[pip3] torchao==0.7.0
[pip3] torchvision==0.19.1
[pip3] triton==3.1.0
[conda] numpy 2.2.1 pypi_0 pypi
[conda] nvidia-cublas-cu12 12.4.5.8 pypi_0 pypi
[conda] nvidia-cuda-cupti-cu12 12.4.127 pypi_0 pypi
[conda] nvidia-cuda-nvrtc-cu12 12.4.127 pypi_0 pypi
[conda] nvidia-cuda-runtime-cu12 12.4.127 pypi_0 pypi
[conda] nvidia-cudnn-cu12 9.3.0.75 pypi_0 pypi
[conda] nvidia-cufft-cu12 11.2.1.3 pypi_0 pypi
[conda] nvidia-curand-cu12 10.3.5.147 pypi_0 pypi
[conda] nvidia-cusolver-cu12 11.6.1.9 pypi_0 pypi
[conda] nvidia-cusparse-cu12 12.3.1.170 pypi_0 pypi
[conda] nvidia-nccl-cu12 2.21.5 pypi_0 pypi
[conda] nvidia-nvjitlink-cu12 12.4.127 pypi_0 pypi
[conda] nvidia-nvtx-cu12 12.4.127 pypi_0 pypi
[conda] torch 2.5.1 pypi_0 pypi
[conda] torchao 0.7.0 pypi_0 pypi
[conda] torchvision 0.19.1 pypi_0 pypi
[conda] triton 3.1.0 pypi_0 pypi
| true
|
2,867,942,439
|
[Distributed Tensor]OffsetBasedRNGTracker instantiation always try to create with CUDA backend
|
dayanandav
|
closed
|
[
"oncall: distributed",
"triaged",
"module: dtensor"
] | 2
|
NONE
|
### 🐛 Describe the bug
OffsetBasedRNGTracker is always created with the CUDA backend, which causes problems when trying to create it with another backend (HPU).
[random._rng_tracker = random.OffsetBasedRNGTracker()](https://github.com/pytorch/pytorch/blob/5ef94ca8162c541bced46ecd4e31dfd9d524ac51/torch/distributed/tensor/_api.py#L1028): this line of code always tries to allocate the CUDA backend, which causes problems with other backends.
StackTrace:
E File "/root/repos/pytorch-training-tests/tests/pytorch/v2.6.0/distributed_hpu/tensor/test_random_ops.py", line 390, in test_deterministic_rand_1d
E dtensor = fn(size, device_mesh=device_mesh, placements=[Shard(1)])
E File "/usr/local/lib/python3.10/dist-packages/torch/distributed/tensor/_api.py", line 1195, in rand
E return _dtensor_init_helper(
E File "/usr/local/lib/python3.10/dist-packages/torch/distributed/tensor/_api.py", line 1004, in _dtensor_init_helper
E random._rng_tracker = random.OffsetBasedRNGTracker()
E File "/usr/local/lib/python3.10/dist-packages/torch/distributed/tensor/_random.py", line 165, in __init__
E super().__init__(device_type)
E File "/usr/local/lib/python3.10/dist-packages/torch/distributed/tensor/_random.py", line 109, in __init__
E raise RuntimeError(
E RuntimeError: OffsetBasedRNGTracker instantiation requires the presence of CUDA/CUDA-like device
### Versions
Collecting environment information...
PyTorch version: 2.6.0+hpu_1.21.0-138.git5f97358
Is debug build: False
CUDA used to build PyTorch: None
ROCM used to build PyTorch: N/A
OS: Ubuntu 22.04.5 LTS (x86_64)
GCC version: (Ubuntu 11.4.0-1ubuntu1~22.04) 11.4.0
Clang version: Could not collect
CMake version: version 3.22.1
Libc version: glibc-2.35
Python version: 3.10.12 (main, Jan 17 2025, 14:35:34) [GCC 11.4.0] (64-bit runtime)
Python platform: Linux-5.15.0-126-generic-x86_64-with-glibc2.35
Is CUDA available: False
CUDA runtime version: No CUDA
CUDA_MODULE_LOADING set to: N/A
GPU models and configuration: No CUDA
Nvidia driver version: No CUDA
cuDNN version: No CUDA
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 46 bits physical, 57 bits virtual
Byte Order: Little Endian
CPU(s): 160
On-line CPU(s) list: 0-159
Vendor ID: GenuineIntel
Model name: Intel(R) Xeon(R) Platinum 8380 CPU @ 2.30GHz
CPU family: 6
Model: 106
Thread(s) per core: 2
Core(s) per socket: 40
Socket(s): 2
Stepping: 6
CPU max MHz: 3400.0000
CPU min MHz: 800.0000
BogoMIPS: 4600.00
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc art arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc cpuid aperfmperf pni pclmulqdq dtes64 monitor ds_cpl vmx smx est tm2 ssse3 sdbg fma cx16 xtpr pdcm pcid dca sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm 3dnowprefetch cpuid_fault epb cat_l3 invpcid_single intel_ppin ssbd mba ibrs ibpb stibp ibrs_enhanced tpr_shadow vnmi flexpriority ept vpid ept_ad fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid cqm rdt_a avx512f avx512dq rdseed adx smap avx512ifma clflushopt clwb intel_pt avx512cd sha_ni avx512bw avx512vl xsaveopt xsavec xgetbv1 xsaves cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local split_lock_detect wbnoinvd dtherm ida arat pln pts hwp hwp_act_window hwp_epp hwp_pkg_req avx512vbmi umip pku ospke avx512_vbmi2 gfni vaes vpclmulqdq avx512_vnni avx512_bitalg tme avx512_vpopcntdq la57 rdpid fsrm md_clear pconfig flush_l1d arch_capabilities
Virtualization: VT-x
L1d cache: 3.8 MiB (80 instances)
L1i cache: 2.5 MiB (80 instances)
L2 cache: 100 MiB (80 instances)
L3 cache: 120 MiB (2 instances)
NUMA node(s): 2
NUMA node0 CPU(s): 0-39,80-119
NUMA node1 CPU(s): 40-79,120-159
Vulnerability Gather data sampling: Mitigation; Microcode
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Mitigation; Clear CPU buffers; SMT vulnerable
Vulnerability Reg file data sampling: Not affected
Vulnerability Retbleed: Not affected
Vulnerability Spec rstack overflow: Not affected
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl and seccomp
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Enhanced / Automatic IBRS; IBPB conditional; RSB filling; PBRSB-eIBRS SW sequence; BHI SW loop, KVM SW loop
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Versions of relevant libraries:
[pip3] habana-torch-dataloader==1.21.0.138
[pip3] habana-torch-plugin==1.21.0.138
[pip3] numpy==1.23.5
[pip3] pytorch-lightning==2.5.0.post0
[pip3] torch==2.6.0+hpu.1.21.0.138.git5f97358
[pip3] torch_tb_profiler==0.4.0
[pip3] torchaudio==2.6.0+cpu
[pip3] torchdata==0.10.1+cpu
[pip3] torchmetrics==1.6.1
[pip3] torchtext==0.18.0+cpu
[pip3] torchvision==0.21.0+cpu
[conda] Could not collect
cc @H-Huang @awgu @kwen2501 @wanchaol @fegin @fduwjj @wz337 @wconstab @d4l3k @c-p-i-o @tianyu-l @XilunWu
| true
|
2,867,903,495
|
[import][inductor] Simplify grid handling
|
jansel
|
open
|
[
"module: rocm",
"fb-exported",
"Merged",
"Reverted",
"Stale",
"ciflow/trunk",
"topic: not user facing",
"ciflow/mps",
"skip-pr-sanity-checks",
"module: inductor",
"ciflow/inductor",
"ciflow/xpu",
"ci-no-td",
"ciflow/inductor-rocm"
] | 28
|
CONTRIBUTOR
|
Before this PR, calling a triton kernel would look like:
```py
kernel.run(a, b, xnumel, grid=grid(xnumel), stream=stream0)
```
where the `grid=` was passed as a callable (function closure) arg. This PR removes the grid arg:
```py
kernel.run(a, b, xnumel, stream=stream0)
```
instead now the grid computation is included in the kernel launcher, with something like:
```py
def launcher(in_ptr0, out_ptr0, xnumel, stream):
grid_0 = ((xnumel + 1023) >> 10)
grid_1 = 1
grid_2 = 1
runner(grid_0, grid_1, grid_2, stream, function, metadata, None, launch_enter_hook, launch_exit_hook, in_ptr0, out_ptr0, xnumel)
```
This should be faster, since we remove multiple function/dict calls and are able to specialize the grid computation for each `triton.Config`.
It also allows us to unify the handling of grids between the Python and C++ wrapper code. Before this, C++ wrapper code didn't actually support dynamic grid sizes and instead burned in a static grid.
This unification allows this PR to be a net deletion of code.
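For reference, a small standalone sketch (not code from this PR) of the ceil-division arithmetic shown in the launcher above, assuming a 1D kernel with XBLOCK = 1024:
```python
def grid_1d(xnumel: int) -> int:
    # (xnumel + 1023) >> 10 is ceil(xnumel / 1024) for non-negative xnumel
    return (xnumel + 1023) >> 10

for n in (1, 1023, 1024, 1025, 4096):
    assert grid_1d(n) == -(-n // 1024)  # ceil division
```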
Note the attached diff contains some minor fbcode-only changes.
cc @jeffdaily @sunway513 @jithunnair-amd @pruthvistony @ROCmSupport @dllehr-amd @jataylo @hongxiayang @naromero77amd @albanD @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
| true
|
2,867,854,134
|
Refactor typing: Replace Any with ParamSpec for better type safety
|
devsashidhar
|
open
|
[
"oncall: distributed",
"triaged",
"open source",
"Stale",
"topic: not user facing"
] | 3
|
NONE
|
Description
This PR refactors function signatures by replacing *args: Any and **kwargs: Any with ParamSpec to improve type safety and preserve argument information. This enhances the ability of static type checkers like mypy to provide better error detection and improves code maintainability.
Motivation
Many functions in PyTorch currently use Any for variable-length arguments (*args and **kwargs), which erases type information. Using typing_extensions.ParamSpec allows better type inference, reducing accidental type mismatches and improving developer experience.
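As an illustration of the pattern (a generic sketch, not code copied from this PR):
```python
from typing import Callable, TypeVar

from typing_extensions import ParamSpec

P = ParamSpec("P")
R = TypeVar("R")

def log_calls(fn: Callable[P, R]) -> Callable[P, R]:
    # The wrapper keeps the decorated function's exact signature for type checkers.
    def wrapper(*args: P.args, **kwargs: P.kwargs) -> R:
        print(f"calling {fn.__name__}")
        return fn(*args, **kwargs)
    return wrapper

@log_calls
def add(x: int, y: int) -> int:
    return x + y

add(1, 2)       # OK
# add("a", 2)   # rejected by a type checker, unlike with *args: Any
```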
Changes Made
Replaced *args: Any, **kwargs: Any with P.args and P.kwargs where applicable.
Removed legacy _F = TypeVar("_F", bound=Callable[..., Any]) usage where redundant.
Ensured decorators preserve function signatures for better introspection.
Issue Reference
Fixes #146018
cc @H-Huang @awgu @kwen2501 @wanchaol @fegin @fduwjj @wz337 @wconstab @d4l3k @c-p-i-o
| true
|
2,867,827,953
|
cpp libtorch transformerimpl lack some parameter between with python pytorch
|
mullerhai
|
open
|
[
"module: cpp",
"module: nn",
"triaged"
] | 0
|
NONE
|
### 🐛 Describe the bug
Hi,
I find that some layer implementations in libtorch are not the same as in Python PyTorch, e.g. the transformer layer:
https://pytorch.org/docs/stable/generated/torch.nn.Transformer.html
In Python:
`torch.nn.Transformer(d_model=512, nhead=8, num_encoder_layers=6, num_decoder_layers=6, dim_feedforward=2048, dropout=0.1, activation=<function relu>, custom_encoder=None, custom_decoder=None, layer_norm_eps=1e-05, batch_first=False, norm_first=False, bias=True, device=None, dtype=None)`
In C++, TransformerImpl does not provide the three parameters layer_norm_eps=1e-05, batch_first=False, and norm_first=False, so how can I pass these parameters in C++?
Thanks
### Versions
libtorch latest pytorch 2.6
cc @jbschlosser @albanD @mruberry @walterddr @mikaylagawarecki
| true
|
2,867,798,923
|
Fix issue #146018: Improve CachingAutotuner handling
|
devsashidhar
|
open
|
[
"triaged",
"open source",
"Stale",
"topic: not user facing"
] | 5
|
NONE
|
Fixes #146018
### Summary:
This PR addresses issue #146018 where `CachingAutotuner` fails when running on the `meta` device due to size inference issues. The fix ensures that dynamic shape handling works correctly when multiple calls with different tensor sizes are made.
### Changes:
- Improved handling of `CachingAutotuner` when using meta tensors.
- Ensured shape inference logic doesn't break when multiple calls use different shapes.
### Testing:
- Ran tests to verify functionality.
- Ensured behavior remains consistent on `cuda` and other devices.
| true
|
2,867,782,426
|
[Easy][optim] Add LBFGS params optional desc
|
zeshengzong
|
closed
|
[
"open source",
"Merged",
"ciflow/trunk",
"release notes: optim"
] | 3
|
CONTRIBUTOR
|
The [LBFGS docs](https://pytorch.org/docs/stable/generated/torch.optim.LBFGS.html#torch.optim.LBFGS) are missing the `optional` description for their parameters, compared with other optimizer docs such as [Adam](https://pytorch.org/docs/stable/generated/torch.optim.Adam.html).
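For context, a minimal usage sketch showing that everything except `params` already has a default:
```python
import torch

model = torch.nn.Linear(2, 1)
opt = torch.optim.LBFGS(model.parameters())  # only `params` is required; lr, max_iter, ... are optional

def closure():
    opt.zero_grad()
    loss = model(torch.randn(4, 2)).pow(2).mean()
    loss.backward()
    return loss

opt.step(closure)
```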
## Test Result
### Before

### After

| true
|
2,867,740,968
|
[distributed] Register sharding strategy for aten.amax.default to support float8 rowwise scaling
|
danielvegamyhre
|
closed
|
[
"oncall: distributed",
"triaged",
"module: dtensor"
] | 0
|
CONTRIBUTOR
|
**Summary**
While debugging an [issue](https://github.com/pytorch/torchtitan/issues/864) in torchtitan related to float8 with rowwise scaling + async TP + torch.compile, I found a different issue:
With eager mode + float8 rowwise + vanilla TP, we get a different error:
`Operator aten.amax.default does not have a sharding strategy registered.`
See root cause analysis [here](https://github.com/pytorch/torchtitan/issues/864#issuecomment-2672953761).
TL;DR is we need to register a sharding strategy for `aten.amax.default` in Dtensor.
cc @H-Huang @awgu @kwen2501 @wanchaol @fegin @fduwjj @wz337 @wconstab @d4l3k @c-p-i-o @tianyu-l @XilunWu
| true
|
2,867,697,013
|
demo myst_nb with compile tutorial
|
williamwen42
|
open
|
[
"Stale"
] | 4
|
MEMBER
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #147577
| true
|
2,867,689,045
|
[ONNX][demo] Rotary embedding
|
justinchuby
|
open
|
[
"open source",
"Stale",
"release notes: onnx"
] | 4
|
COLLABORATOR
|
This change gives users the ability to use onnx ops directly with `torch.ops.onnx.*` and showcases an implementation for RotaryEmbedding. The operators are native pytorch which play well with the ecosystem.
| true
|
2,867,680,043
|
ncclUnhandledCudaError
|
youreternity1997
|
closed
|
[
"oncall: distributed",
"module: c10d"
] | 0
|
NONE
|
### 🐛 Describe the bug
| true
|
2,867,671,792
|
[export] don't use unbacked_renamings in export
|
pianpwk
|
closed
|
[
"fb-exported",
"Merged",
"ciflow/trunk",
"fx",
"ciflow/inductor",
"release notes: export"
] | 8
|
CONTRIBUTOR
|
Plan: avoid the use of unbacked renamings, and introduce a pass run in `_produce_aten_artifact` that recomputes unbacked bindings. Decided to do this because we don't serialize unbacked renamings (or any ShapeEnv state), so this used to compose poorly with de/serialization. This hopefully establishes the invariant that the unbacked binding keys are always in sync with the example values (i.e. same indices, and removed if the symbol is replaced / specialized).
For de/serialization, we don't store unbacked bindings, and just rerun the pass.
Involved a refactor of compute_unbacked_bindings.
cc @ezyang @SherlockNoMad @EikanWang @jgong5 @wenzhe-nrv
| true
|
2,867,667,255
|
export method
|
avikchaudhuri
|
closed
|
[
"fb-exported",
"Merged",
"ciflow/trunk",
"ciflow/inductor",
"release notes: export"
] | 6
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #147573
The `export` API takes a `nn.Module` and traces its `forward` method. However, sometimes it is useful to export different methods of a `nn.Module`, either as a one-off for debugging or as a set of methods that are called in some sequence outside `export` (e.g., `encode` / `decode`). When multiple methods of the same module instance are exported, they should share the state of the common module instance.
This PR adds a couple of utils in `torch._export.utils` for this workflow.
The `wrap_method` util wraps a method as a `nn.Module` that can then be exported. See included test. We recommend using the same module instance to export multiple methods on that instance, in which case they are guaranteed to share state. On serde, this state sharing is lost, so we provide another util, `sync_state`, to re-sync the state.
These utils are meant to be eventually replaced by API-level changes, but for now this can unblock users who need this workflow. In particular, in the future we can accept one or multiple method entrypoints, with their own args / kwargs / dynamic shape specifications, which can create a variant of `ExportedProgram` with multiple graphs that share state; then we can automatically ensure that the state sharing is preserved through serde.
Differential Revision: [D69960801](https://our.internmc.facebook.com/intern/diff/D69960801/)
@diff-train-skip-merge
| true
|
2,867,661,880
|
[dynamo] Support reads to global/captured tensors in `nonstrict_trace`-ed function
|
StrongerXi
|
closed
|
[
"Merged",
"ciflow/trunk",
"topic: not user facing",
"module: dynamo",
"ciflow/inductor"
] | 3
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #147572
* #147571
* #146950
* #146367
* #146714
As title. Without this patch we get the following error:
Tweaking the `allow_non_fake_inputs` flag on tensor mode doesn't quite
work for AOTAutograd, which also needs to fake-tensor-propagate the
`nonstrict_trace`-ed function, but that's _after_ Dynamo has handled the
`nonstrict_trace` processing and put the `flat_apply(...)` node into the graph.
So we can't easily temporarily enable the `allow_non_fake_inputs`
flag on current fake mode, when AOTAutograd processes a `flat_apply`
node from Dynamo's `nonstrict_trace` handling. And after discussing
with zou3519, I decided to add a global `FakeTensorTLS` that contains a
`allow_non_fake_inputs_override` flag, and patch the `nonstrict_trace`-ed
function to temporarily tweak this flag during its execution.
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @chauhang @amjames
| true
|
2,867,661,811
|
[dynamo] Support `nonstrict_trace` on class method
|
StrongerXi
|
closed
|
[
"Merged",
"topic: not user facing",
"module: dynamo",
"ciflow/inductor"
] | 2
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #147572
* __->__ #147571
* #146950
* #146367
* #146714
As title, also see
1. new test `test_nonstrict_trace_on_method` for example.
2. newly added comments for why we need special treatment on methods.
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @chauhang @amjames
| true
|
2,867,646,768
|
`view()` + modify-in-place fails silently with DTensor
|
ad8e
|
open
|
[
"oncall: distributed",
"triaged",
"module: dtensor"
] | 4
|
CONTRIBUTOR
|
### 🐛 Describe the bug
Run command on 2-GPU machine: `torchrun --standalone --nnodes=1 --nproc-per-node=2 my_file.py`
```
import torch
import torch.nn as nn
from torch.distributed._tensor import DTensor, Shard, Replicate, distribute_tensor, distribute_module, init_device_mesh
from torch.distributed._composable.fsdp import fully_shard, MixedPrecisionPolicy
import os
import torch.distributed as dist
import datetime
world_size = int(os.getenv("WORLD_SIZE", None))
local_rank = int(os.getenv("LOCAL_RANK", None))
global_rank = int(os.getenv("RANK", None))
print(f"world_size: {world_size}, local_rank: {local_rank}, global_rank: {global_rank}")
dist.init_process_group(
backend="cuda:nccl",
init_method=None,
world_size=world_size,
rank=global_rank,
device_id=torch.device(f"cuda:{local_rank}"),
timeout=datetime.timedelta(seconds=120),
)
torch.cuda.set_device(local_rank)
device_mesh = init_device_mesh("cuda", mesh_shape=(2,), mesh_dim_names=("fsdp",))
placements = (Shard(dim=0),)
works = False
with torch.device('meta'):
model = torch.nn.Linear(9, 8, bias=False)
fully_shard(model, mesh=device_mesh)
model.to_empty(device=local_rank)
nn.init.ones_(model.weight)
print("weight", model.weight)
with torch.no_grad():
if works:
multiplier = distribute_tensor(torch.arange(4, device=f"cuda:{local_rank}", dtype=torch.float32).view(4, 1), device_mesh=device_mesh, placements=placements)
model.weight.view(4, 18).mul_(multiplier)
else:
multiplier = distribute_tensor(torch.arange(3, device=f"cuda:{local_rank}", dtype=torch.float32).view(3, 1), device_mesh=device_mesh, placements=placements)
model.weight.view(3, 24).mul_(multiplier)
print("weight2", model.weight)
```
Output:
```
world_size: 2, local_rank: 1, global_rank: 1
world_size: 2, local_rank: 0, global_rank: 0
weight weight DTensor(local_tensor=tensor([[1., 1., 1., 1., 1., 1., 1., 1., 1.],
[1., 1., 1., 1., 1., 1., 1., 1., 1.],
[1., 1., 1., 1., 1., 1., 1., 1., 1.],
[1., 1., 1., 1., 1., 1., 1., 1., 1.]], device='cuda:0'), device_mesh=DeviceMesh('cuda', [0, 1], mesh_dim_names=('fsdp',)), placements=(Shard(dim=0),))
DTensor(local_tensor=tensor([[1., 1., 1., 1., 1., 1., 1., 1., 1.],
[1., 1., 1., 1., 1., 1., 1., 1., 1.],
[1., 1., 1., 1., 1., 1., 1., 1., 1.],
[1., 1., 1., 1., 1., 1., 1., 1., 1.]], device='cuda:1'), device_mesh=DeviceMesh('cuda', [0, 1], mesh_dim_names=('fsdp',)), placements=(Shard(dim=0),))
weight2 weight2 DTensor(local_tensor=tensor([[1., 1., 1., 1., 1., 1., 1., 1., 1.],
[1., 1., 1., 1., 1., 1., 1., 1., 1.],
[1., 1., 1., 1., 1., 1., 1., 1., 1.],
[1., 1., 1., 1., 1., 1., 1., 1., 1.]], device='cuda:0'), device_mesh=DeviceMesh('cuda', [0, 1], mesh_dim_names=('fsdp',)), placements=(Shard(dim=0),))
DTensor(local_tensor=tensor([[1., 1., 1., 1., 1., 1., 1., 1., 1.],
[1., 1., 1., 1., 1., 1., 1., 1., 1.],
[1., 1., 1., 1., 1., 1., 1., 1., 1.],
[1., 1., 1., 1., 1., 1., 1., 1., 1.]], device='cuda:1'), device_mesh=DeviceMesh('cuda', [0, 1], mesh_dim_names=('fsdp',)), placements=(Shard(dim=0),))
```
weight2 is all-ones, which is wrong. This signifies that `model.weight.view(3, 24).mul_(multiplier)` did nothing. If you set `works = True`, you'll see it produces numbers 0 to 3:
```
weight2 weight2 DTensor(local_tensor=tensor([[0., 0., 0., 0., 0., 0., 0., 0., 0.],
[0., 0., 0., 0., 0., 0., 0., 0., 0.],
[1., 1., 1., 1., 1., 1., 1., 1., 1.],
[1., 1., 1., 1., 1., 1., 1., 1., 1.]], device='cuda:0'), device_mesh=DeviceMesh('cuda', [0, 1], mesh_dim_names=('fsdp',)), placements=(Shard(dim=0),))
DTensor(local_tensor=tensor([[2., 2., 2., 2., 2., 2., 2., 2., 2.],
[2., 2., 2., 2., 2., 2., 2., 2., 2.],
[3., 3., 3., 3., 3., 3., 3., 3., 3.],
[3., 3., 3., 3., 3., 3., 3., 3., 3.]], device='cuda:1'), device_mesh=DeviceMesh('cuda', [0, 1], mesh_dim_names=('fsdp',)), placements=(Shard(dim=0),))
```
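For contrast, a minimal plain-tensor sketch of the semantics expected here: `view()` aliases the same storage, so an in-place `mul_` through the view must be visible in the original tensor.
```python
import torch

w = torch.ones(4, 9)
w.view(3, 12).mul_(torch.arange(3, dtype=torch.float32).view(3, 1))
print(w)  # first 12 elements are 0, next 12 are 1, last 12 are 2 -- not all-ones
```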
### Versions
```
PyTorch version: 2.6.0
Is debug build: False
CUDA used to build PyTorch: 12.8
ROCM used to build PyTorch: N/A
OS: Ubuntu 22.04.5 LTS (x86_64)
GCC version: (Ubuntu 11.4.0-1ubuntu1~22.04) 11.4.0
Clang version: Could not collect
CMake version: version 3.31.4
Libc version: glibc-2.35
Python version: 3.10.12 (main, Jan 17 2025, 14:35:34) [GCC 11.4.0] (64-bit runtime)
Python platform: Linux-6.5.13-650-3434-22042-coreweave-1-x86_64-with-glibc2.35
Is CUDA available: True
CUDA runtime version: 12.8.61
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration:
GPU 0: NVIDIA H100 80GB HBM3
GPU 1: NVIDIA H100 80GB HBM3
Nvidia driver version: 535.216.01
cuDNN version: Probably one of the following:
/usr/lib/x86_64-linux-gnu/libcudnn.so.9.7.0
/usr/lib/x86_64-linux-gnu/libcudnn_adv.so.9.7.0
/usr/lib/x86_64-linux-gnu/libcudnn_cnn.so.9.7.0
/usr/lib/x86_64-linux-gnu/libcudnn_engines_precompiled.so.9.7.0
/usr/lib/x86_64-linux-gnu/libcudnn_engines_runtime_compiled.so.9.7.0
/usr/lib/x86_64-linux-gnu/libcudnn_graph.so.9.7.0
/usr/lib/x86_64-linux-gnu/libcudnn_heuristic.so.9.7.0
/usr/lib/x86_64-linux-gnu/libcudnn_ops.so.9.7.0
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 46 bits physical, 57 bits virtual
Byte Order: Little Endian
CPU(s): 128
On-line CPU(s) list: 0-127
Vendor ID: GenuineIntel
Model name: Intel(R) Xeon(R) Platinum 8462Y+
CPU family: 6
Model: 143
Thread(s) per core: 2
Core(s) per socket: 32
Socket(s): 2
Stepping: 8
CPU max MHz: 4100.0000
CPU min MHz: 800.0000
BogoMIPS: 5600.00
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc art arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc cpuid aperfmperf tsc_known_freq pni pclmulqdq dtes64 monitor ds_cpl vmx smx est tm2 ssse3 sdbg fma cx16 xtpr pdcm pcid dca sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm 3dnowprefetch cpuid_fault epb cat_l3 cat_l2 cdp_l3 invpcid_single cdp_l2 ssbd mba ibrs ibpb stibp ibrs_enhanced tpr_shadow flexpriority ept vpid ept_ad fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid cqm rdt_a avx512f avx512dq rdseed adx smap avx512ifma clflushopt clwb intel_pt avx512cd sha_ni avx512bw avx512vl xsaveopt xsavec xgetbv1 xsaves cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local split_lock_detect avx_vnni avx512_bf16 wbnoinvd dtherm ida arat pln pts hfi vnmi avx512vbmi umip pku ospke waitpkg avx512_vbmi2 gfni vaes vpclmulqdq avx512_vnni avx512_bitalg tme avx512_vpopcntdq la57 rdpid bus_lock_detect cldemote movdiri movdir64b enqcmd fsrm md_clear serialize tsxldtrk pconfig arch_lbr ibt amx_bf16 avx512_fp16 amx_tile amx_int8 flush_l1d arch_capabilities
Virtualization: VT-x
L1d cache: 3 MiB (64 instances)
L1i cache: 2 MiB (64 instances)
L2 cache: 128 MiB (64 instances)
L3 cache: 120 MiB (2 instances)
NUMA node(s): 2
NUMA node0 CPU(s): 0,2,4,6,8,10,12,14,16,18,20,22,24,26,28,30,32,34,36,38,40,42,44,46,48,50,52,54,56,58,60,62,64,66,68,70,72,74,76,78,80,82,84,86,88,90,92,94,96,98,100,102,104,106,108,110,112,114,116,118,120,122,124,126
NUMA node1 CPU(s): 1,3,5,7,9,11,13,15,17,19,21,23,25,27,29,31,33,35,37,39,41,43,45,47,49,51,53,55,57,59,61,63,65,67,69,71,73,75,77,79,81,83,85,87,89,91,93,95,97,99,101,103,105,107,109,111,113,115,117,119,121,123,125,127
Vulnerability Gather data sampling: Not affected
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Not affected
Vulnerability Retbleed: Not affected
Vulnerability Spec rstack overflow: Not affected
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Enhanced / Automatic IBRS; IBPB conditional; RSB filling; PBRSB-eIBRS SW sequence; BHI BHI_DIS_S
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Versions of relevant libraries:
[pip3] clip-anytorch==2.6.0
[pip3] dctorch==0.1.2
[pip3] DISTS-pytorch==0.1
[pip3] gpytorch==1.14
[pip3] lovely-numpy==0.2.13
[pip3] mypy-extensions==1.0.0
[pip3] natten==0.17.4+torch260cu128
[pip3] numpy==1.24.4
[pip3] torch==2.6.0
[pip3] torchaudio==2.6.0
[pip3] torchdiffeq==0.2.5
[pip3] torchsde==0.2.6
[pip3] torchvision==0.21.0
[pip3] triton==3.2.0+git35c6c7c6
[pip3] welford-torch==0.2.5
[conda] Could not collect
```
cc @H-Huang @awgu @kwen2501 @wanchaol @fegin @fduwjj @wz337 @wconstab @d4l3k @c-p-i-o @tianyu-l @XilunWu
| true
|
2,867,636,366
|
constexpr all the things in irange.h
|
swolchok
|
closed
|
[
"fb-exported",
"ciflow/trunk",
"topic: not user facing"
] | 7
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #147569
I got complaints while irangeifying some files in ExecuTorch
that irange could not be used in a constexpr function. This made the
complaints go away.
I added a constexpr function in irange_test that used to fail to build
with `error: variable of non-literal type 'iterator' (aka
'integer_iterator<int, true>') cannot be defined in a constexpr
function before C++23` and now builds fine.
Differential Revision: [D69959614](https://our.internmc.facebook.com/intern/diff/D69959614/)
| true
|
2,867,626,074
|
`copy_()` fails with HSDP in FSDP2
|
ad8e
|
open
|
[
"oncall: distributed",
"triaged",
"module: fsdp",
"module: dtensor"
] | 4
|
CONTRIBUTOR
|
### 🐛 Describe the bug
Run on a 2-GPU machine: `torchrun --standalone --nnodes=1 --nproc-per-node=2 this_file.py`
```
import torch
from torch.distributed._tensor import DTensor, Shard, Replicate, distribute_tensor, distribute_module, init_device_mesh
from torch.distributed._composable.fsdp import fully_shard, MixedPrecisionPolicy
import os
import torch.distributed as dist
import datetime
world_size = int(os.getenv("WORLD_SIZE", None))
local_rank = int(os.getenv("LOCAL_RANK", None))
global_rank = int(os.getenv("RANK", None))
print(f"world_size: {world_size}, local_rank: {local_rank}, global_rank: {global_rank}")
dist.init_process_group(
backend="cuda:nccl",
init_method=None,
world_size=world_size,
rank=global_rank,
device_id=torch.device(f"cuda:{local_rank}"),
timeout=datetime.timedelta(seconds=120),
)
torch.cuda.set_device(local_rank)
# this crashes
device_mesh = init_device_mesh("cuda", mesh_shape=(1, 2), mesh_dim_names=("dp", "fsdp"))
placements = (Replicate(), Shard(dim=0))
# this works
# device_mesh = init_device_mesh("cuda", mesh_shape=(2,), mesh_dim_names=("fsdp",))
# placements = (Shard(dim=0),)
@torch.no_grad()
def init_broadcast(tensor):
full_tensor = torch.zeros(tensor.shape, dtype=tensor.dtype, device=tensor.device)
tensor.copy_(distribute_tensor(full_tensor, device_mesh=tensor.device_mesh, placements=tensor.placements))
with torch.device('meta'):
model = torch.nn.Linear(8, 8, bias=False)
fully_shard(model, mesh=device_mesh)
model.to_empty(device=local_rank)
init_broadcast(model.weight)
out = model(torch.randn(8, 8, device=f"cuda:{local_rank}"))
print(f"out {global_rank} {out}")
```
Output:
```
[kevin-h100-0:55567:0:55567] Caught signal 11 (Segmentation fault: address not mapped to object at address 0x78b35)
[kevin-h100-0:55568:0:55568] Caught signal 11 (Segmentation fault: address not mapped to object at address 0x78b35)
==== backtrace (tid: 55567) ====
0 0x0000000000042520 __sigaction() ???:0
1 0x000000000005f530 pncclGetUniqueId() ???:0
2 0x000000000005b385 pncclRedOpDestroy() ???:0
3 0x0000000000056e27 pncclResetDebugInit() ???:0
4 0x000000000004b0e7 pncclBroadcast() ???:0
5 0x000000000004b6bd pncclBcast() ???:0
6 0x000000005f4a9f36 c10d::ProcessGroupNCCL::broadcast() ???:0
7 0x0000000007bde936 c10d::ops::(anonymous namespace)::broadcast_CUDA() Ops.cpp:0
8 0x0000000007bf2986 c10::impl::make_boxed_from_unboxed_functor<c10::impl::detail::WrapFunctionIntoRuntimeFunctor_<std::tuple<std::vector<at::Tensor, std::allocator<at::Tensor> >, c10::intrusive_ptr<c10d::Work, c10::detail::intrusive_target_default_null_type<c10d::Work> > > (*)(c10::ArrayRef<at::Tensor>, c10::intrusive_ptr<c10d::ProcessGroup, c10::detail::intrusive_target_default_null_type<c10d::ProcessGroup> > const&, long, long, bool, long), std::tuple<std::vector<at::Tensor, std::allocator<at::Tensor> >, c10::intrusive_ptr<c10d::Work, c10::detail::intrusive_target_default_null_type<c10d::Work> > >, c10::guts::typelist::typelist<c10::ArrayRef<at::Tensor>, c10::intrusive_ptr<c10d::ProcessGroup, c10::detail::intrusive_target_default_null_type<c10d::ProcessGroup> > const&, long, long, bool, long> >, false>::call() Ops.cpp:0
9 0x0000000007150286 torch::autograd::basicAutogradNotImplementedFallbackImpl() autograd_not_implemented_fallback.cpp:0
10 0x0000000007bfefbc c10::impl::BoxedKernelWrapper<std::tuple<std::vector<at::Tensor, std::allocator<at::Tensor> >, c10::intrusive_ptr<c10d::Work, c10::detail::intrusive_target_default_null_type<c10d::Work> > > (c10::ArrayRef<at::Tensor>, c10::intrusive_ptr<c10d::ProcessGroup, c10::detail::intrusive_target_default_null_type<c10d::ProcessGroup> > const&, long, long, bool, long), void>::call() ProcessGroup.cpp:0
11 0x0000000007c10de7 c10d::ProcessGroup::broadcast() ProcessGroup.cpp:0
...
```
Fails in 2.6, succeeds in 2.5.1.
### Versions
```
PyTorch version: 2.6.0
Is debug build: False
CUDA used to build PyTorch: 12.8
ROCM used to build PyTorch: N/A
OS: Ubuntu 22.04.5 LTS (x86_64)
GCC version: (Ubuntu 11.4.0-1ubuntu1~22.04) 11.4.0
Clang version: Could not collect
CMake version: version 3.31.4
Libc version: glibc-2.35
Python version: 3.10.12 (main, Jan 17 2025, 14:35:34) [GCC 11.4.0] (64-bit runtime)
Python platform: Linux-6.5.13-650-3434-22042-coreweave-1-x86_64-with-glibc2.35
Is CUDA available: True
CUDA runtime version: 12.8.61
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration:
GPU 0: NVIDIA H100 80GB HBM3
GPU 1: NVIDIA H100 80GB HBM3
Nvidia driver version: 535.216.01
cuDNN version: Probably one of the following:
/usr/lib/x86_64-linux-gnu/libcudnn.so.9.7.0
/usr/lib/x86_64-linux-gnu/libcudnn_adv.so.9.7.0
/usr/lib/x86_64-linux-gnu/libcudnn_cnn.so.9.7.0
/usr/lib/x86_64-linux-gnu/libcudnn_engines_precompiled.so.9.7.0
/usr/lib/x86_64-linux-gnu/libcudnn_engines_runtime_compiled.so.9.7.0
/usr/lib/x86_64-linux-gnu/libcudnn_graph.so.9.7.0
/usr/lib/x86_64-linux-gnu/libcudnn_heuristic.so.9.7.0
/usr/lib/x86_64-linux-gnu/libcudnn_ops.so.9.7.0
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 46 bits physical, 57 bits virtual
Byte Order: Little Endian
CPU(s): 128
On-line CPU(s) list: 0-127
Vendor ID: GenuineIntel
Model name: Intel(R) Xeon(R) Platinum 8462Y+
CPU family: 6
Model: 143
Thread(s) per core: 2
Core(s) per socket: 32
Socket(s): 2
Stepping: 8
CPU max MHz: 4100.0000
CPU min MHz: 800.0000
BogoMIPS: 5600.00
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc art arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc cpuid aperfmperf tsc_known_freq pni pclmulqdq dtes64 monitor ds_cpl vmx smx est tm2 ssse3 sdbg fma cx16 xtpr pdcm pcid dca sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm 3dnowprefetch cpuid_fault epb cat_l3 cat_l2 cdp_l3 invpcid_single cdp_l2 ssbd mba ibrs ibpb stibp ibrs_enhanced tpr_shadow flexpriority ept vpid ept_ad fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid cqm rdt_a avx512f avx512dq rdseed adx smap avx512ifma clflushopt clwb intel_pt avx512cd sha_ni avx512bw avx512vl xsaveopt xsavec xgetbv1 xsaves cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local split_lock_detect avx_vnni avx512_bf16 wbnoinvd dtherm ida arat pln pts hfi vnmi avx512vbmi umip pku ospke waitpkg avx512_vbmi2 gfni vaes vpclmulqdq avx512_vnni avx512_bitalg tme avx512_vpopcntdq la57 rdpid bus_lock_detect cldemote movdiri movdir64b enqcmd fsrm md_clear serialize tsxldtrk pconfig arch_lbr ibt amx_bf16 avx512_fp16 amx_tile amx_int8 flush_l1d arch_capabilities
Virtualization: VT-x
L1d cache: 3 MiB (64 instances)
L1i cache: 2 MiB (64 instances)
L2 cache: 128 MiB (64 instances)
L3 cache: 120 MiB (2 instances)
NUMA node(s): 2
NUMA node0 CPU(s): 0,2,4,6,8,10,12,14,16,18,20,22,24,26,28,30,32,34,36,38,40,42,44,46,48,50,52,54,56,58,60,62,64,66,68,70,72,74,76,78,80,82,84,86,88,90,92,94,96,98,100,102,104,106,108,110,112,114,116,118,120,122,124,126
NUMA node1 CPU(s): 1,3,5,7,9,11,13,15,17,19,21,23,25,27,29,31,33,35,37,39,41,43,45,47,49,51,53,55,57,59,61,63,65,67,69,71,73,75,77,79,81,83,85,87,89,91,93,95,97,99,101,103,105,107,109,111,113,115,117,119,121,123,125,127
Vulnerability Gather data sampling: Not affected
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Not affected
Vulnerability Retbleed: Not affected
Vulnerability Spec rstack overflow: Not affected
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Enhanced / Automatic IBRS; IBPB conditional; RSB filling; PBRSB-eIBRS SW sequence; BHI BHI_DIS_S
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Versions of relevant libraries:
[pip3] clip-anytorch==2.6.0
[pip3] dctorch==0.1.2
[pip3] DISTS-pytorch==0.1
[pip3] gpytorch==1.14
[pip3] lovely-numpy==0.2.13
[pip3] mypy-extensions==1.0.0
[pip3] natten==0.17.4+torch260cu128
[pip3] numpy==1.24.4
[pip3] torch==2.6.0
[pip3] torchaudio==2.6.0
[pip3] torchdiffeq==0.2.5
[pip3] torchsde==0.2.6
[pip3] torchvision==0.21.0
[pip3] triton==3.2.0+git35c6c7c6
[pip3] welford-torch==0.2.5
[conda] Could not collect
```
cc @H-Huang @awgu @kwen2501 @wanchaol @fegin @fduwjj @wz337 @wconstab @d4l3k @c-p-i-o @zhaojuanmao @mrshenli @rohan-varma @chauhang @tianyu-l @XilunWu
| true
|
2,867,624,771
|
[cond] support mismatched output in inductor
|
ydwu4
|
closed
|
[
"Merged",
"ciflow/trunk",
"topic: not user facing",
"module: inductor",
"ciflow/inductor"
] | 6
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #147567
In this PR, we extract `codegen_unbacked_symbol_defs` of FallbackKernel out as a `codegen_unbacked_symbol_defs_for_outputs` method on the wrapper. With it, HOPs can support the case where the subgraph returns a tensor with unbacked symints. This PR only does it for cond; we'll have follow-up PRs for others (e.g. while_loop) as well.
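For illustration, a minimal sketch (an assumed example, not a test from this PR) of the kind of program this targets — cond branches whose outputs carry unbacked symints from a data-dependent op:
```python
import torch

def true_fn(x):
    return x.nonzero()        # output size is data-dependent (unbacked symint)

def false_fn(x):
    return (x + 1).nonzero()

@torch.compile(fullgraph=True)
def f(pred, x):
    return torch.cond(pred, true_fn, false_fn, (x,))

# f(torch.tensor(True), torch.randint(0, 2, (8,)))  # whether this exact call compiles
#                                                   # depends on the branch-output checks
```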
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
| true
|
2,867,590,155
|
Add support for non functional collectives under FakeTensorMode and fake_pg for memory tracking
|
sanketpurandare
|
closed
|
[
"oncall: distributed",
"triaged",
"open source",
"Merged",
"release notes: distributed (c10d)"
] | 5
|
CONTRIBUTOR
|
This PR adds support for non-functional collectives under `FakeTensorMode` and `fake_pg`. It helps eliminate the patching of collectives for memory and runtime estimation.
It also modifies the `ModTracker` to enable the post-backward hook call for modules whose inputs don't require gradients but parameters do.
For memory tracking, we now enable the DTensor dispatcher for custom dispatch functions like `entropy_loss`.
The dispatcher is only enabled for the memory tracking part and is disabled as soon as it is done.
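As a rough illustration (a sketch, not a test from this PR; the fake-store import path is an assumption), this is the kind of setup the change targets — an in-place, non-functional collective issued under `FakeTensorMode` with a fake process group:
```python
import torch
import torch.distributed as dist
from torch._subclasses.fake_tensor import FakeTensorMode
from torch.testing._internal.distributed.fake_pg import FakeStore

# Single-process "fake" process group: no real communication happens.
dist.init_process_group("fake", store=FakeStore(), rank=0, world_size=2)

with FakeTensorMode():
    t = torch.empty(1024, 1024)
    dist.all_reduce(t)  # non-functional (in-place) collective on a fake tensor
```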
cc @H-Huang @awgu @kwen2501 @wanchaol @fegin @fduwjj @wz337 @wconstab @d4l3k @c-p-i-o @weifengpy
| true
|
2,867,589,930
|
[dynamo][checkpoint] non-reentrant checkpoint + ambient saved tensor hooks is silently incorrect
|
xmfan
|
open
|
[
"module: activation checkpointing",
"triaged",
"oncall: pt2",
"module: dynamo",
"module: higher order operators",
"module: pt2-dispatcher"
] | 0
|
MEMBER
|
### 🐛 Describe the bug
```python
# test/test_autograd.py:test_save_on_cpu_and_checkpoint
a = torch.randn(2, 2, requires_grad=True)
with torch.autograd.graph.save_on_cpu():
h = a.pow(2)
h = checkpoint(lambda x: x.pow(2).pow(2), h, use_reentrant=False)
# h = checkpoint(torch.compile(lambda x: x.pow(2).pow(2), backend="aot_eager"), h, use_reentrant=False)
c = h.pow(2)
c.sum().backward()
c_grad = a.grad.clone()
a.grad.zero_()
```
adding some logging to the involved pack/unpack hooks:
```
Eager Compile
pack_to_cpu pack_to_cpu
pack_to_cpu pack_to_cpu
pack_to_cpu pack_to_cpu
cp pack hook cp pack hook
cp pack hook pack_to_cpu
pack_to_cpu unpack_from_cpu
unpack_from_cpu cp unpack hook
cp unpack hook unpack_from_cpu
unpack_from_cpu unpack_from_cpu
unpack_from_cpu unpack_from_cpu
cp unpack hook unpack_from_cpu
unpack_from_cpu
```
### Versions
main
cc @soulitzer @chauhang @penguinwu @voznesenskym @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @amjames @zou3519 @ydwu4 @bdhirsh
| true
|
2,867,527,259
|
[Inductor][NFC] Remove unused functions from `compile_tasks.py`
|
anmyachev
|
closed
|
[
"open source",
"Merged",
"ciflow/trunk",
"topic: not user facing",
"module: inductor"
] | 11
|
COLLABORATOR
|
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
| true
|
2,867,507,261
|
torch.distributed.elastic.multiprocessing.start_process description does not reflect API
|
ntw-au
|
open
|
[
"oncall: distributed",
"triaged",
"module: elastic"
] | 1
|
NONE
|
### 📚 The doc issue
The `tee` parameter to `torch.distributed.elastic.multiprocessing.start_process()` was removed in #120691 and released in PyTorch 2.3.0. However, the [2.3 documentation](https://pytorch.org/docs/2.3/elastic/multiprocessing.html#torch.distributed.elastic.multiprocessing.start_processes) (and subsequent versions) still refers to the `tee` parameter in the description on the web page and function documentation in code. This outdated documentation still exists in the latest stable documentation and `HEAD`.
### Suggest a potential alternative/fix
Remove references to `tee` in PyTorch 2.3+ and replace with modern API usage, which presumably is to use the `logs_specs` parameter.
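For reference, a sketch of what the modern call might look like (the argument values and the exact `LogsSpecs` construction are assumptions):
```python
from torch.distributed.elastic.multiprocessing import DefaultLogsSpecs, start_processes

def trainer(msg):
    print(f"hello from {msg}")

ctx = start_processes(
    name="trainer",
    entrypoint=trainer,
    args={0: ("local rank 0",)},
    envs={0: {}},
    logs_specs=DefaultLogsSpecs(log_dir="/tmp/elastic_logs"),  # replaces the removed tee/redirects args
)
ctx.wait()
```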
cc @H-Huang @awgu @kwen2501 @wanchaol @fegin @fduwjj @wz337 @wconstab @d4l3k @c-p-i-o @dzhulgakov
| true
|
2,867,485,736
|
[dynamo] Save/restore system random state more carefully [attempt 3]
|
williamwen42
|
open
|
[
"module: dynamo",
"ciflow/inductor"
] | 3
|
MEMBER
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #147562
Attempt 3 at https://github.com/pytorch/pytorch/issues/145329
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @chauhang @amjames
| true
|
2,867,471,025
|
[partitioner] always ban compiler-driven recompute of collectives by default
|
bdhirsh
|
closed
|
[
"oncall: distributed",
"Merged",
"ciflow/trunk",
"ciflow/inductor",
"release notes: distributed (miscellaneous)"
] | 6
|
CONTRIBUTOR
|
This should fix the hang in https://fb.workplace.com/groups/1075192433118967/permalink/1603268720311333/
The argument here is that:
(1) in general, it is not safe for the partitioner to sometimes choose to recompute collectives in the backward. Why? If we are running a distributed job, where many ranks are compiling at the same time, we need every rank to make a consistent decision about which collectives are recomputed for backward. If we let each compiler instance make its own choice without any cross-rank communication, they can make different choices and cause NCCL hangs (see the link above)
(2) later on, we'll want an `spmd_mode` flag that causes the compiler to issue collectives and communicate info across ranks. Once we have such a config, then turning it on should make it safe for the partitioner to potentially choose to recompute collectives (and agree on the binary "recompute-or-save" choice across all ranks)
(3) even without an `spmd_mode`, users can override this choice by using `torch.utils.checkpoint()` in their user code. User checkpointing generally always overrides the partitioner, and this should be safe because we expect the user to apply checkpointing consistently across ranks
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #133044
* #148922
* __->__ #147561
cc @H-Huang @awgu @kwen2501 @wanchaol @fegin @fduwjj @wz337 @wconstab @d4l3k @c-p-i-o
| true
|
2,867,465,431
|
Fix import of getArtifactLogger for ir_pre_fusion and ir_post_fusion
|
dulinriley
|
closed
|
[
"Merged",
"ciflow/trunk",
"topic: not user facing",
"module: inductor",
"ciflow/inductor"
] | 5
|
CONTRIBUTOR
|
Fixes #147002
There was an issue with the previous PR https://github.com/pytorch/pytorch/pull/147248 that didn't show up in CI: a logging import in torch/_inductor/debug.py was not fully set up before the module was imported.
This only happened if someone imported that file directly, without performing any other imports first.
These logging artifacts were also set to off_by_default by request, to reduce log spew.
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
| true
|
2,867,460,817
|
[inductor][subgraph] Plumbing to get ShapeAsConstantBuffer from subgraph to main graph output
|
anijain2305
|
closed
|
[
"Merged",
"ciflow/trunk",
"topic: not user facing",
"module: inductor",
"ciflow/inductor"
] | 4
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #139325
* __->__ #147559
I am unable to create a test case that fails without the next PR. The idea is to have a symint which is returned by the inner subgraph and then returned by the forward graph after partitioning.
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
| true
|
2,867,425,685
|
[export] Remove report from draft-export output
|
angelayi
|
closed
|
[
"fb-exported",
"Merged",
"ciflow/trunk",
"release notes: export"
] | 6
|
CONTRIBUTOR
|
Summary: This matches the export API. To print the report, people can just do `print(ep._report)`. This information is also displayed in the terminal after the draft_export call.
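A sketch of the resulting usage (the import path and arguments are assumptions):
```python
# Hypothetical example: draft_export now returns only the ExportedProgram.
from torch.export import draft_export  # import path assumed

ep = draft_export(mod, example_args)
print(ep._report)  # the report is still reachable here
```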
Test Plan: CI
Reviewed By: SherlockNoMad
Differential Revision: D69689154
| true
|
2,867,414,339
|
use statically_known_true instead of guard_size_oblivious in pattern matcher
|
bobrenjc93
|
closed
|
[
"fb-exported",
"Merged",
"ciflow/trunk",
"topic: not user facing",
"module: inductor",
"ciflow/inductor"
] | 14
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #147557
We shouldn't add guards here. Use statically_known_true instead. Internal xref: https://fb.workplace.com/groups/1075192433118967/?multi_permalinks=1609560723015466&comment_id=1610040026300869¬if_id=1740082892544333¬if_t=work_feedback_reaction_generic&ref=notif
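A small sketch of the distinction (not the actual pattern-matcher code):
```python
from torch.fx.experimental.symbolic_shapes import statically_known_true

def can_apply_pattern(sym_size):
    # guard_size_oblivious(sym_size == 1) could install a new guard on the shape env;
    # statically_known_true only returns True when the fact is provable without adding
    # guards, and returns False (rather than guarding) when the answer is unknown.
    return statically_known_true(sym_size == 1)
```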
Differential Revision: [D69950122](https://our.internmc.facebook.com/intern/diff/D69950122/)
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
| true
|
2,867,371,825
|
[caffe2] Ignore compiler option when building using clang
|
Nicoshev
|
closed
|
[
"module: cpu",
"fb-exported",
"Merged",
"ciflow/trunk",
"topic: not user facing",
"merging"
] | 10
|
CONTRIBUTOR
|
Summary:
Skip adding the unrecognized option `optimize("-fno-tree-loop-vectorize")` when building with clang.
This piece of code began to be compiled after armv9a was set as the default compilation profile.
Test Plan: buck2 run mode/opt -c python.package_style=inplace -c fbcode.enable_gpu_sections=true -c fbcode.platform010_cuda_version=12 lego/scripts:lego_cli -- run-locally --model_entity_id ${MODEL} --config_version ${CONFIG_VERSION} --disable_generate_new_checkpoint --checkpoint_version 0 --publish_context OFFLINE_PUBLISH --lego_pipeline aiplatform.modelstore.model_generation.lego.lego_pipeline_builder.gmpp_lego_pipeline --gmpp_config '{"gmpp_pipeline_descriptor": "aiplatform.modelstore.model_generation.v1.ads_pipelines.aimp_pyper_pipeline.model_generation_pipeline", "worker_process_number":12, "worker_thread_per_process_number": 6, "use_work_assignment": true}' 2>&1 | tee aimp_697790515.log
Reviewed By: andrewjcg
Differential Revision: D69947027
cc @jgong5 @mingfeima @XiaobingSuper @sanchitintel @ashokei @jingxu10
| true
|
2,867,360,966
|
[codemod] Fix unused-value issue in caffe2/aten/src/ATen/cuda/detail/CUDAHooks.cpp +4
|
r-barnes
|
closed
|
[
"oncall: distributed",
"fb-exported",
"Merged",
"ciflow/trunk",
"release notes: cpp",
"topic: improvements",
"topic: not user facing"
] | 13
|
CONTRIBUTOR
|
Summary:
LLVM has a warning `-Wunused-value` which we treat as an error because it's so often diagnostic of a code issue. Unused values often indicate a programming mistake, but can also just be unnecessary cruft that harms readability and performance.
For questions/comments, contact r-barnes.
- If you approve of this diff, please use the "Accept & Ship" button :-)
Test Plan: Sandcastle
Differential Revision: D69945678
cc @H-Huang @awgu @kwen2501 @wanchaol @fegin @fduwjj @wz337 @wconstab @d4l3k @c-p-i-o
| true
|
2,867,347,134
|
[cutlass backend] Fix standalone runner test after swizzle became a runtime parameter
|
henrylhtsang
|
closed
|
[
"fb-exported",
"Merged",
"ciflow/trunk",
"topic: not user facing",
"module: inductor",
"ciflow/inductor"
] | 4
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #147554
Differential Revision: [D69945114](https://our.internmc.facebook.com/intern/diff/D69945114/)
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
| true
|
2,867,337,196
|
ROCm MX-FP8 Gemm
|
petrex
|
open
|
[
"module: rocm",
"module: mkldnn",
"open source"
] | 3
|
CONTRIBUTOR
|
TLDR: MX-FP8 matrix multiplications through hipblaslt (requires AMD gfx950 and ROCm 6.5+)
This pull request introduces several changes to enhance support for the MX format on ROCm, particularly for the gfx950 device. Key changes include adding validation for matrix dimensions and setting block sizes for the MX format, as well as updating the scaling logic to accommodate new requirements.
### Enhancements for MX format on ROCm:
* [`aten/src/ATen/cuda/CUDABlas.cpp`](diffhunk://#diff-74fcb26047c1df4024105d36ce22a36b77cf8cc93c28631d743e639b3d6066aeR1552-R1569): Added validation for matrix dimensions and set block sizes for MX format when using ROCm version 6.5 or later on gfx950 devices. [[1]](diffhunk://#diff-74fcb26047c1df4024105d36ce22a36b77cf8cc93c28631d743e639b3d6066aeR1552-R1569) [[2]](diffhunk://#diff-74fcb26047c1df4024105d36ce22a36b77cf8cc93c28631d743e639b3d6066aeR1606-R1617)
* [`aten/src/ATen/cuda/tunable/GemmHipblaslt.h`](diffhunk://#diff-bfa1a3b5d4bef1892bf50338775f3b0fd8cd31fc1868148f3968b98aefb68e3fL512-R530): Included validation and block size settings for MX format in the `HipblasltGemmOp` class.
* [`aten/src/ATen/native/cuda/Blas.cpp`](diffhunk://#diff-e8a569efee1e650172f120a0fdcda024fe3e4703a4ee3336425c8f685af6b3abL1210-R1260): Added validation for MX format requirements and updated scaling logic for block-wise scaling on gfx950 devices. [[1]](diffhunk://#diff-e8a569efee1e650172f120a0fdcda024fe3e4703a4ee3336425c8f685af6b3abL1210-R1260) [[2]](diffhunk://#diff-e8a569efee1e650172f120a0fdcda024fe3e4703a4ee3336425c8f685af6b3abR1403-R1423)
### Refactoring and utility functions:
* [`aten/src/ATen/cuda/tunable/GemmMxUtils.h`](diffhunk://#diff-5e7883d306b6944d1f413707847c4a7d599f77f608e7f621e1f45ec0a4897d35R1-R37): Introduced helper functions `IsGfx950Device` and `ValidateMXFormatRequirements` to cache device properties and validate MX format requirements.
* [`aten/src/ATen/native/cuda/Blas.cpp`](diffhunk://#diff-e8a569efee1e650172f120a0fdcda024fe3e4703a4ee3336425c8f685af6b3abR1096-R1112): Added a helper function `IsGfx950Device` to cache device properties and updated the `_scaled_mm_out_cuda` function to include MX format validation. [[1]](diffhunk://#diff-e8a569efee1e650172f120a0fdcda024fe3e4703a4ee3336425c8f685af6b3abR1096-R1112) [[2]](diffhunk://#diff-e8a569efee1e650172f120a0fdcda024fe3e4703a4ee3336425c8f685af6b3abR1339-R1344)
### Other changes:
* [`torch/testing/_internal/common_cuda.py`](diffhunk://#diff-fe348e24069d43bc7c6913174b038fcc5880a3281bdc0e8e217cf210bd0935e5L105-R112): Updated platform support check for MX GEMM to include gfx950 devices on ROCm.
* [`torch/utils/hipify/cuda_to_hip_mappings.py`](diffhunk://#diff-85bd10d67a85149584e7d7a8cba533241f7ad14450e5d54ffec23da34032429aR7325-R7327): Added mappings for new MX format attributes and scaling modes.
cc @jeffdaily @sunway513 @jithunnair-amd @pruthvistony @ROCmSupport @dllehr-amd @jataylo @hongxiayang @naromero77amd @gujinghui @PenghuiCheng @XiaobingSuper @jianyuh @jgong5 @mingfeima @sanchitintel @ashokei @jingxu10 @min-jean-cho @yanbing-j @Guobing-Chen @Xia-Weiwen @snadampal
| true
|
2,867,329,276
|
Fix sympy float printing
|
isuruf
|
closed
|
[
"module: cpu",
"open source",
"Merged",
"ciflow/trunk",
"module: dynamic shapes",
"module: inductor",
"ciflow/inductor",
"release notes: inductor"
] | 6
|
COLLABORATOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #147552
cc @jgong5 @mingfeima @XiaobingSuper @sanchitintel @ashokei @jingxu10 @ezyang @penguinwu @bobrenjc93 @voznesenskym @EikanWang @Guobing-Chen @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
Fixes https://github.com/pytorch/pytorch/pull/147261
| true
|
2,867,308,530
|
FlexAttention compiled has illegal memory access or device-side assert even though all tensors are contiguous
|
leijurv
|
open
|
[
"triaged",
"oncall: pt2",
"module: higher order operators",
"module: pt2-dispatcher",
"module: flex attention"
] | 19
|
NONE
|
### 🐛 Describe the bug
```python
import torch
import torch.nn.attention.flex_attention
flex_compiled = torch.compile(torch.nn.attention.flex_attention.flex_attention)
torch.set_default_device("cuda")
print(torch.__version__)
if False:
# these params trigger device-side assert:
BATCH = 64
HEADS = 64
SEQ_LEN = 64
D_HEAD = 64
BAND = 64
else:
# these params trigger illegal memory access:
BATCH = 64
HEADS = 64
SEQ_LEN = 256
D_HEAD = 64
BAND = 64
Q = torch.randn((BATCH, HEADS, SEQ_LEN, D_HEAD), requires_grad=True)
K = torch.randn((BATCH, HEADS, SEQ_LEN, D_HEAD), requires_grad=True)
V = torch.randn((BATCH, HEADS, SEQ_LEN, D_HEAD), requires_grad=True)
rel_bias = torch.randn((BATCH, HEADS, SEQ_LEN, BAND), requires_grad=True)
def score_mod(score, b, h, q_idx, kv_idx):
return score + rel_bias[b, h, q_idx, q_idx - kv_idx]
def local_mask(b, h, q_idx, kv_idx):
causal_mask = q_idx >= kv_idx
window_mask = q_idx - kv_idx < BAND
return causal_mask & window_mask
flex_compiled(Q, K, V, score_mod=score_mod,
block_mask=torch.nn.attention.flex_attention.create_block_mask(
local_mask, B=BATCH, H=HEADS, Q_LEN=SEQ_LEN, KV_LEN=SEQ_LEN
),
)
```
Run with `CUDA_LAUNCH_BLOCKING=1`
```
2.7.0.dev20250220+cu124
Traceback (most recent call last):
File "/home/ubuntu/repro2.py", line 35, in <module>
flex_compiled(Q, K, V, score_mod=score_mod,
File "/home/ubuntu/.local/lib/python3.10/site-packages/torch/_dynamo/eval_frame.py", line 585, in _fn
return fn(*args, **kwargs)
File "/home/ubuntu/.local/lib/python3.10/site-packages/torch/nn/attention/flex_attention.py", line 1161, in flex_attention
def flex_attention(
File "/home/ubuntu/.local/lib/python3.10/site-packages/torch/_dynamo/eval_frame.py", line 764, in _fn
return fn(*args, **kwargs)
File "/home/ubuntu/.local/lib/python3.10/site-packages/torch/_functorch/aot_autograd.py", line 1199, in forward
return compiled_fn(full_args)
File "/home/ubuntu/.local/lib/python3.10/site-packages/torch/_functorch/_aot_autograd/runtime_wrappers.py", line 312, in runtime_wrapper
all_outs = call_func_at_runtime_with_args(
File "/home/ubuntu/.local/lib/python3.10/site-packages/torch/_functorch/_aot_autograd/utils.py", line 126, in call_func_at_runtime_with_args
out = normalize_as_list(f(args))
File "/home/ubuntu/.local/lib/python3.10/site-packages/torch/_functorch/_aot_autograd/utils.py", line 100, in g
return f(*args)
File "/home/ubuntu/.local/lib/python3.10/site-packages/torch/autograd/function.py", line 575, in apply
return super().apply(*args, **kwargs) # type: ignore[misc]
File "/home/ubuntu/.local/lib/python3.10/site-packages/torch/_functorch/_aot_autograd/runtime_wrappers.py", line 1847, in forward
fw_outs = call_func_at_runtime_with_args(
File "/home/ubuntu/.local/lib/python3.10/site-packages/torch/_functorch/_aot_autograd/utils.py", line 126, in call_func_at_runtime_with_args
out = normalize_as_list(f(args))
File "/home/ubuntu/.local/lib/python3.10/site-packages/torch/_functorch/_aot_autograd/runtime_wrappers.py", line 492, in wrapper
return compiled_fn(runtime_args)
File "/home/ubuntu/.local/lib/python3.10/site-packages/torch/_functorch/_aot_autograd/runtime_wrappers.py", line 686, in inner_fn
outs = compiled_fn(args)
File "/home/ubuntu/.local/lib/python3.10/site-packages/torch/_inductor/output_code.py", line 460, in __call__
return self.current_callable(inputs)
File "/home/ubuntu/.local/lib/python3.10/site-packages/torch/_inductor/utils.py", line 2348, in run
return model(new_inputs)
File "/tmp/torchinductor_ubuntu/hl/chlfd7hdtv2xyfko4hkkkgsy232itg4iclcanvyruwr6r2ortqxu.py", line 548, in call
triton_tem_fused_0.run(primals_1, primals_2, primals_3, buf0, primals_5, primals_4, primals_7, primals_8, primals_6, buf1, grid=torch._inductor.kernel.flex_attention.flex_attention_grid(64, 64, 256, 64, meta0), stream=stream0)
File "/home/ubuntu/.local/lib/python3.10/site-packages/torch/_inductor/runtime/triton_heuristics.py", line 956, in run
return launcher(
File "<string>", line 6, in launcher
File "/home/ubuntu/.local/lib/python3.10/site-packages/triton/backends/nvidia/driver.py", line 444, in __call__
self.launch(*args, **kwargs)
RuntimeError: Triton Error [CUDA]: an illegal memory access was encountered
```
If I add extra padding to either side of `rel_bias`, like this: `rel_bias = torch.randn((BATCH+2, HEADS, SEQ_LEN, BAND), requires_grad=True)[1:-1]`, it "fixes" the illegal memory access case, but not the device-side assert case.
Additionally, even with that `rel_bias` padding workaround, these parameters cause the illegal memory access to occur now in the backward pass (just add `.sum().backward()`):
```
BATCH = 128
HEADS = 6
SEQ_LEN = 128
D_HEAD = 16
BAND = 16
```
<details><summary>Backward pass log</summary>
```
2.7.0.dev20250220+cu124
Traceback (most recent call last):
File "/home/ubuntu/repro2.py", line 45, in <module>
).sum().backward()
File "/home/ubuntu/.local/lib/python3.10/site-packages/torch/_tensor.py", line 639, in backward
return handle_torch_function(
File "/home/ubuntu/.local/lib/python3.10/site-packages/torch/overrides.py", line 1721, in handle_torch_function
result = mode.__torch_function__(public_api, types, args, kwargs)
File "/home/ubuntu/.local/lib/python3.10/site-packages/torch/utils/_device.py", line 104, in __torch_function__
return func(*args, **kwargs)
File "/home/ubuntu/.local/lib/python3.10/site-packages/torch/_tensor.py", line 648, in backward
torch.autograd.backward(
File "/home/ubuntu/.local/lib/python3.10/site-packages/torch/autograd/__init__.py", line 353, in backward
_engine_run_backward(
File "/home/ubuntu/.local/lib/python3.10/site-packages/torch/autograd/graph.py", line 824, in _engine_run_backward
return Variable._execution_engine.run_backward( # Calls into the C++ engine to run the backward pass
File "/home/ubuntu/.local/lib/python3.10/site-packages/torch/autograd/function.py", line 307, in apply
return user_fn(self, *args)
File "/home/ubuntu/.local/lib/python3.10/site-packages/torch/_functorch/_aot_autograd/runtime_wrappers.py", line 1982, in backward
return impl_fn()
File "/home/ubuntu/.local/lib/python3.10/site-packages/torch/_functorch/_aot_autograd/runtime_wrappers.py", line 1968, in impl_fn
out = CompiledFunction._backward_impl(ctx, all_args)
File "/home/ubuntu/.local/lib/python3.10/site-packages/torch/_functorch/_aot_autograd/runtime_wrappers.py", line 2088, in _backward_impl
out = call_func_at_runtime_with_args(
File "/home/ubuntu/.local/lib/python3.10/site-packages/torch/_functorch/_aot_autograd/utils.py", line 126, in call_func_at_runtime_with_args
out = normalize_as_list(f(args))
File "/home/ubuntu/.local/lib/python3.10/site-packages/torch/_dynamo/eval_frame.py", line 764, in _fn
return fn(*args, **kwargs)
File "/home/ubuntu/.local/lib/python3.10/site-packages/torch/_inductor/output_code.py", line 460, in __call__
return self.current_callable(inputs)
File "/home/ubuntu/.local/lib/python3.10/site-packages/torch/_inductor/utils.py", line 2348, in run
return model(new_inputs)
File "/tmp/torchinductor_ubuntu/gh/cgh4rpnwzkagkm7cznka43dnr5bxpncxulpr3n2tdnbcrst2hpqc.py", line 1027, in call
triton_tem_fused_zeros_2.run(primals_1, primals_2, primals_3, getitem_1, buf2, tangents_1, buf4, buf5, primals_5, primals_4, primals_9, primals_10, primals_7, primals_8, primals_11, primals_12, primals_6, buf0, buf6, grid=torch._inductor.kernel.flex_attention.flex_attention_backward_grid(128, 6, 128, 16, 6, 128, meta0), stream=stream0)
File "/home/ubuntu/.local/lib/python3.10/site-packages/torch/_inductor/runtime/triton_heuristics.py", line 956, in run
return launcher(
File "<string>", line 6, in launcher
File "/home/ubuntu/.local/lib/python3.10/site-packages/triton/backends/nvidia/driver.py", line 444, in __call__
self.launch(*args, **kwargs)
RuntimeError: Triton Error [CUDA]: an illegal memory access was encountered
```
</details>
### Versions
<details><summary>Env</summary>
```
Collecting environment information...
PyTorch version: 2.7.0.dev20250220+cu124
Is debug build: False
CUDA used to build PyTorch: 12.4
ROCM used to build PyTorch: N/A
OS: Ubuntu 22.04.5 LTS (x86_64)
GCC version: (Ubuntu 11.4.0-1ubuntu1~22.04) 11.4.0
Clang version: Could not collect
CMake version: version 3.31.2
Libc version: glibc-2.35
Python version: 3.10.12 (main, Jan 17 2025, 14:35:34) [GCC 11.4.0] (64-bit runtime)
Python platform: Linux-6.8.0-52-generic-x86_64-with-glibc2.35
Is CUDA available: True
CUDA runtime version: 12.4.131
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration:
GPU 0: NVIDIA H100 80GB HBM3
GPU 1: NVIDIA H100 80GB HBM3
Nvidia driver version: 565.57.01
cuDNN version: Could not collect
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 52 bits physical, 57 bits virtual
Byte Order: Little Endian
CPU(s): 52
On-line CPU(s) list: 0-51
Vendor ID: GenuineIntel
Model name: Intel(R) Xeon(R) Platinum 8480+
CPU family: 6
Model: 143
Thread(s) per core: 2
Core(s) per socket: 26
Socket(s): 1
Stepping: 8
BogoMIPS: 4000.00
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ss ht syscall nx pdpe1gb rdtscp lm constant_tsc arch_perfmon rep_good nopl xtopology cpuid tsc_known_freq pni pclmulqdq vmx ssse3 fma cx16 pdcm pcid sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand hypervisor lahf_lm abm 3dnowprefetch cpuid_fault ssbd ibrs ibpb stibp ibrs_enhanced tpr_shadow flexpriority ept vpid ept_ad fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid avx512f avx512dq rdseed adx smap avx512ifma clflushopt clwb avx512cd sha_ni avx512bw avx512vl xsaveopt xsavec xgetbv1 xsaves avx_vnni avx512_bf16 wbnoinvd arat vnmi avx512vbmi umip pku ospke waitpkg avx512_vbmi2 gfni vaes vpclmulqdq avx512_vnni avx512_bitalg avx512_vpopcntdq la57 rdpid bus_lock_detect cldemote movdiri movdir64b fsrm md_clear serialize tsxldtrk avx512_fp16 arch_capabilities
Virtualization: VT-x
Hypervisor vendor: KVM
Virtualization type: full
L1d cache: 1.6 MiB (52 instances)
L1i cache: 1.6 MiB (52 instances)
L2 cache: 104 MiB (26 instances)
L3 cache: 16 MiB (1 instance)
NUMA node(s): 1
NUMA node0 CPU(s): 0-51
Vulnerability Gather data sampling: Not affected
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Unknown: No mitigations
Vulnerability Reg file data sampling: Not affected
Vulnerability Retbleed: Not affected
Vulnerability Spec rstack overflow: Not affected
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Enhanced / Automatic IBRS; IBPB conditional; RSB filling; PBRSB-eIBRS SW sequence; BHI SW loop, KVM SW loop
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Mitigation; TSX disabled
Versions of relevant libraries:
[pip3] flake8==4.0.1
[pip3] numpy==1.21.5
[pip3] nvidia-cublas-cu12==12.4.5.8
[pip3] nvidia-cuda-cupti-cu12==12.4.127
[pip3] nvidia-cuda-nvrtc-cu12==12.4.127
[pip3] nvidia-cuda-runtime-cu12==12.4.127
[pip3] nvidia-cudnn-cu12==9.1.0.70
[pip3] nvidia-cufft-cu12==11.2.1.3
[pip3] nvidia-curand-cu12==10.3.5.147
[pip3] nvidia-cusolver-cu12==11.6.1.9
[pip3] nvidia-cusparse-cu12==12.3.1.170
[pip3] nvidia-cusparselt-cu12==0.6.2
[pip3] nvidia-nccl-cu12==2.25.1
[pip3] nvidia-nvjitlink-cu12==12.4.127
[pip3] nvidia-nvtx-cu12==12.4.127
[pip3] pytorch-triton==3.2.0+git4b3bb1f8
[pip3] torch==2.7.0.dev20250220+cu124
[pip3] triton==3.2.0+gitb99a3006
[conda] Could not collect
```
</details>
cc @chauhang @penguinwu @zou3519 @ydwu4 @bdhirsh @Chillee @drisspg @yanboliang @BoyuanFeng
| true
|
2,867,304,220
|
Define `__all__` for `torch.utils.tensorboard`
|
ringohoffman
|
closed
|
[
"triaged",
"open source",
"Merged",
"ciflow/trunk",
"release notes: python_frontend"
] | 8
|
CONTRIBUTOR
|
Fixes the issue:
```python
import torch.utils.tensorboard
torch.utils.tensorboard.FileWriter # pyright: "FileWriter" is not exported from module "torch.utils.tensorboard"
torch.utils.tensorboard.RecordWriter # pyright: "RecordWriter" is not exported from module "torch.utils.tensorboard"
torch.utils.tensorboard.SummaryWriter # pyright: "SummaryWriter" is not exported from module "torch.utils.tensorboard"
```
The [docs page for `torch.utils.tensorboard`](https://pytorch.org/docs/stable/tensorboard.html)
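Roughly the shape of the fix, assuming it lands in `torch/utils/tensorboard/__init__.py` (the exact name list is inferred from the errors above):
```python
# Declare the public names so type checkers treat them as re-exported
# from torch.utils.tensorboard.
__all__ = ["FileWriter", "RecordWriter", "SummaryWriter"]
```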
| true
|
2,867,289,003
|
Enable strobelight profiling specific compile frame ids using COMPILE_STROBELIGHT_FRAME_FILTER
|
laithsakka
|
closed
|
[
"Merged",
"ciflow/trunk",
"topic: not user facing"
] | 3
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #147549
* #147547
running python test/strobelight/examples/compile_time_profile_example.py
```
strobelight_compile_time_profiler, line 123, 2025-02-20 14:08:08,409, INFO: compile time strobelight profiling enabled
strobelight_compile_time_profiler, line 159, 2025-02-20 14:08:08,409, INFO: Unique sample tag for this run is: 2025-02-20-14:08:081656673devgpu005.nha1.facebook.com
strobelight_compile_time_profiler, line 160, 2025-02-20 14:08:09,124, INFO: URL to access the strobelight profile at the end of the run: https://fburl.com/scuba/pyperf_experimental/on_demand/9felqj0i
strobelight_compile_time_profiler, line 205, 2025-02-20 14:08:12,436, INFO: profiling frame 0/0 is skipped due to frame_id_filter 1/.*
strobelight_compile_time_profiler, line 205, 2025-02-20 14:08:15,553, INFO: profiling frame 0/0 is skipped due to frame_id_filter 1/.*
strobelight_compile_time_profiler, line 205, 2025-02-20 14:08:16,170, INFO: profiling frame 0/0 is skipped due to frame_id_filter 1/.*
strobelight_compile_time_profiler, line 214, 2025-02-20 14:08:16,877, INFO: profiling frame 1/0
strobelight_function_profiler, line 247, 2025-02-20 14:08:19,416, INFO: strobelight run id is: 4015948658689996
strobelight_function_profiler, line 249, 2025-02-20 14:08:21,546, INFO: strobelight profiling running
strobelight_function_profiler, line 289, 2025-02-20 14:08:25,964, INFO: work function took 4.417063233006047 seconds
strobelight_function_profiler, line 230, 2025-02-20 14:08:28,310, INFO: strobelight profiling stopped
strobelight_function_profiler, line 221, 2025-02-20 14:08:44,308, INFO: Total samples: 119
strobelight_function_profiler, line 221, 2025-02-20 14:08:44,308, INFO: GraphProfiler (python stack): https://fburl.com/scuba/pyperf_experimental/on_demand/73h2f7ur
strobelight_function_profiler, line 221, 2025-02-20 14:08:44,308, INFO: Icicle view (python stack): https://fburl.com/scuba/pyperf_experimental/on_demand/zs06fi9e
strobelight_compile_time_profiler, line 167, 2025-02-20 14:08:44,308, INFO: 1 strobelight success runs out of 1 non-recursive compilation events.
```
| true
|
2,867,288,053
|
torch._scaled_mm with MXFP8
|
vkuzo
|
closed
|
[
"module: cuda",
"Merged",
"Reverted",
"ciflow/trunk",
"topic: not user facing",
"ci-no-td"
] | 27
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #147548
# summary
Add blockwise MXFP8 support to `torch._scaled_mm` on CUDA capability 10.0 and higher devices. If the scales for A and B are of dtype `torch.float8_e8m0fnu`, we dispatch to the blockwise kernel from cuBLAS.
This is a skeleton PR where we test basic functionality (numerics of various simple matrices, as well as one end to end quantization + gemm).
- Scales are flipped based on transpose_result
- Handles boundary conditions
Note that MXFP4 is not added in this PR - we can tackle that in a future PR.
This PR was created by taking https://github.com/pytorch/pytorch/pull/145562, switching e8m0 to in-core dtype, removing fp4 for now, and adding test cases.
# test plan
```
pytest test/test_matmul_cuda.py -k blockwise_mxfp8 -s
```
cc @ptrblck @msaroufim @eqy
| true
|
2,867,277,682
|
move _strobelight/example to avoid graph breaks
|
laithsakka
|
closed
|
[
"Merged",
"topic: not user facing"
] | 2
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #147549
* __->__ #147547
| true
|
2,867,270,879
|
Add continuous run for cachebench
|
oulgen
|
closed
|
[
"Merged",
"ciflow/trunk",
"release notes: releng"
] | 7
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #147546
* #147537
This PR adds a continuous run for cache bench.
| true
|
2,867,269,167
|
[MPS] fix attention for >4d tensors
|
Isalia20
|
closed
|
[
"triaged",
"open source",
"Merged",
"ciflow/trunk",
"topic: bug fixes",
"release notes: mps",
"ciflow/mps"
] | 3
|
COLLABORATOR
|
Fixes #147443
and adds tests for >4d tensors
| true
|
2,867,234,530
|
Adding Small Epsilon in linalg_eig_backward to Improve Numerical Stability on GPU
|
alexanderlerner
|
closed
|
[
"module: autograd",
"triaged",
"module: linear algebra"
] | 6
|
NONE
|
### 🚀 The feature, motivation and pitch
Hi PyTorch Team,
My team and I work on physics-inspired ML models where we use torch.linalg.eigh to get the eigenvector corresponding to the lowest eigenvalue of a Hermitian matrix. We sometimes run into numerical issues during backpropagation as we repeat training iterations over time. Specifically we see:
`linalg_eigh_backward: The eigenvectors in the complex case are specified up to multiplication by e^{i phi}. The specified loss function depends on this quantity, so it is ill-defined`
Upon investigation, there are not actually phase dependencies in our case. After digging into `linalg_eig_backward`, we found the issue actually might come from the backward formula that divides by `lambda_j - lambda_i` when computing `Econj` [here](https://github.com/pytorch/pytorch/blob/382fbcc1e43ae5d46ec148bdfd8dcfb73da81b77/torch/csrc/autograd/FunctionsManual.cpp#L3757C1-L3765C1). For near-degenerate eigenvalues, we suspect the small denominator might trigger unstable gradients or NaNs in lower-precision (e.g., `complex64`) calculations as we accumulate these small errors over time.
Our fix has been to add a tiny epsilon to the denominator, something like:
`const float eps = 1e-8; auto ret = VhgV.div_(Econj + eps);`
We found that this small regularizer effectively removes NaNs and stabilizes training without altering our models (which are just subclasses of `nn.Module`).
We would like to potentially contribute a PR to PyTorch that conditionally adds this epsilon—potentially factoring in tensor precision (e.g., using a smaller epsilon for double precision).
Note that this only occurs on GPU; I'm running torch version `2.6.0` and CUDA `12.3`. On CPU, the behavior shown actually throws a different error, which makes more sense:
`torch._C._LinAlgError: linalg.eigh: (Batch element 0): The algorithm failed to converge because the input matrix is ill-conditioned or has too many repeated eigenvalues (error code: 7).`
Adding this epsilon in either case seemed to fix these issues -- some of our models are proprietary and we can't quite share an example, but might anyone know why we see this divergence in behavior between GPU and CPU, or at least what sorts of regimes might create this phenomenon in general?
We’re new to contributing to PyTorch and would appreciate any guidance. Thank you so much for your time, and we look forward to your feedback!
### Alternatives
_No response_
### Additional context
Some past issues referencing this same check:
https://github.com/pytorch/pytorch/pull/70528
cc @ezyang @albanD @gqchen @pearu @nikitaved @soulitzer @Varal7 @xmfan @jianyuh @mruberry @walterddr @xwang233 @Lezcano
| true
|
2,867,180,616
|
No gradient for `residuals` in the return value of `torch.linalg.lstsq`
|
Bichidian
|
closed
|
[
"module: autograd",
"triaged",
"module: linear algebra"
] | 6
|
CONTRIBUTOR
|
The return value of `torch.linalg.lstsq` is a named tuple `(solution, residuals, rank, singular_values)`. I find that `solution` has gradient but `residuals` does not. Is this expected? I'm using `gels` driver.
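A minimal sketch of the observation (shapes are arbitrary):
```python
import torch

A = torch.randn(6, 3, requires_grad=True)
B = torch.randn(6, 2)
out = torch.linalg.lstsq(A, B, driver="gels")
print(out.solution.grad_fn)   # has a grad_fn
print(out.residuals.grad_fn)  # None in my runs, i.e. no gradient flows through residuals
```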
cc @ezyang @albanD @gqchen @pearu @nikitaved @soulitzer @Varal7 @xmfan @jianyuh @mruberry @walterddr @xwang233 @Lezcano
| true
|
2,867,136,691
|
Increase memory for linux binary builds
|
jeanschmidt
|
closed
|
[
"Merged",
"Reverted",
"ciflow/trunk",
"topic: not user facing",
"ci-no-td"
] | 11
|
CONTRIBUTOR
|
Recently I detected that some linux manywheel builds are flaky ([ex](https://github.com/pytorch/pytorch/actions/runs/13438309056/job/37555475510)).
After investigating the runner logs, available disk space, network usage and CPU load, I could not find any issue. Unfortunately, memory information is not available.
But given the symptoms, the likelihood of this being an OOM problem is high.
So, moving those build jobs from a `linux.12xlarge.ephemeral` to `linux.12xlarge.memory.ephemeral`.
This change depends on https://github.com/pytorch/test-infra/pull/6316
| true
|
2,867,125,401
|
Add XPU to is_compile_supported to support roi_align op in torchvision
|
frost-intel
|
closed
|
[
"open source",
"Merged",
"ciflow/trunk",
"topic: not user facing",
"module: dynamo",
"ciflow/inductor",
"release notes: xpu"
] | 6
|
COLLABORATOR
|
Part of the required fix for https://github.com/intel/torch-xpu-ops/issues/1264.
To support `roi_align`, torchvision uses `is_compile_supported` in `torch/_dynamo/utils.py` to compile a non-deterministic version of the op for backwards passes. This PR adds XPU device to the supported compile devices.
The `is_compile_supported()` util function has extremely limited usage, only being used in `torchvision.ops.roi_align` and `torch.utils._content_store.has_storage()`.
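For context, the torchvision-side check is roughly of this shape (a sketch, not the torchvision source):
```python
from torch._dynamo.utils import is_compile_supported

def lazy_compile_backward(device_type: str):
    if is_compile_supported(device_type):  # now returns True for "xpu" as well
        ...  # compile the non-deterministic backward kernel for roi_align
```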
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @chauhang @amjames
| true
|
2,867,109,918
|
Update ruff linter for PEP585
|
aorenste
|
closed
|
[
"Merged",
"ciflow/trunk",
"release notes: onnx",
"topic: not user facing",
"module: inductor",
"module: dynamo",
"ciflow/inductor"
] | 3
|
CONTRIBUTOR
|
This turns on PEP585 enforcement in RUFF.
- Updates the target python version
- Stops ignoring UP006 warnings (PEP585)
- Fixes a few issues which crept into the tree in the last day
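An example of the UP006 change being enforced:
```python
# Before (PEP 484 typing aliases, now flagged by UP006):
# from typing import Dict, List
# def f(xs: List[int]) -> Dict[str, int]: ...

# After (PEP 585 builtin generics):
def f(xs: list[int]) -> dict[str, int]:
    return {str(x): x for x in xs}
```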
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #147540
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
| true
|
2,867,100,386
|
GRU does not return reverse hidden states when bidirectional=True
|
amitportnoy
|
closed
|
[] | 0
|
NONE
|
### (Non-issue)
In the code below, `output, (h_n, c_n) = gru(x)` is my bug, since GRU does not return `c_n`.
Closing this.
### 🐛 Describe the bug
Using `torch==2.5.1`, GRU with `bidirectional=True`, does not return the reverse direction hidden state in `h_n`.
(LSTM will return those states, the issue is with GRU specifically)
See reproduction code below. Thank you!
```python
import torch
gru = torch.nn.GRU(10, 40, bidirectional=True)
x = torch.randn(30, 5, 10)
output, (h_n, c_n) = gru(x)
assert list(output.shape) == [30, 5, 80] # pass
# WILL FAIL h_n.shape=torch.Size([5, 40])
assert list(h_n.shape) == [2, 5, 40]
# LSTM works fine
output, (h_n, c_n) = torch.nn.LSTM(10, 40, bidirectional=True)(x)
assert list(h_n.shape) == [2, 5, 40] # pass
```
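For completeness, the corrected unpacking (GRU returns `(output, h_n)` rather than `(output, (h_n, c_n))`):
```python
import torch

gru = torch.nn.GRU(10, 40, bidirectional=True)
x = torch.randn(30, 5, 10)
output, h_n = gru(x)                      # GRU has no cell state
assert list(output.shape) == [30, 5, 80]  # both directions concatenated
assert list(h_n.shape) == [2, 5, 40]      # one final hidden state per direction
```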
### Versions
```
Collecting environment information...
PyTorch version: 2.5.1+cu118
Is debug build: False
CUDA used to build PyTorch: 11.8
ROCM used to build PyTorch: N/A
OS: Microsoft Windows 11 Enterprise (10.0.22631 64-bit)
GCC version: Could not collect
Clang version: Could not collect
CMake version: Could not collect
Libc version: N/A
Python version: 3.10.11 (tags/v3.10.11:7d4cc5a, Apr 5 2023, 00:38:17) [MSC v.1929 64 bit (AMD64)] (64-bit runtime)
Python platform: Windows-10-10.0.22631-SP0
Is CUDA available: True
CUDA runtime version: 12.8.61
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration: GPU 0: NVIDIA GeForce RTX 3080 Ti Laptop GPU
Nvidia driver version: 571.96
cuDNN version: Could not collect
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Name: 12th Gen Intel(R) Core(TM) i9-12900H
Manufacturer: GenuineIntel
Family: 207
Architecture: 9
ProcessorType: 3
DeviceID: CPU0
CurrentClockSpeed: 2500
MaxClockSpeed: 2500
L2CacheSize: 11776
L2CacheSpeed: None
Revision: None
Versions of relevant libraries:
[pip3] numpy==2.2.1
[pip3] onnx==1.16.1
[pip3] onnxruntime==1.19.2
[pip3] onnxruntime-gpu==1.20.1
[pip3] onnxscript==0.1.0.dev20250107
[pip3] pytorch-lightning==2.5.0.post0
[pip3] torch==2.5.1+cu118
[pip3] torch-tb-profiler==0.4.3
[pip3] torchmetrics==1.6.1
[pip3] torchvision==0.20.1+cu118
[conda] Could not collect
```
| true
|
2,867,088,021
|
[fx] demote node prepend to self log from warning to debug
|
xmfan
|
closed
|
[
"Merged",
"ciflow/trunk",
"release notes: fx",
"fx"
] | 3
|
MEMBER
|
FIXES https://github.com/pytorch/pytorch/issues/147175
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #147538
This is harmless, not sure why this is a user warning. Writing reordering graph passes is more concise when we ignore this warning.
cc @ezyang @SherlockNoMad @EikanWang @jgong5 @wenzhe-nrv
| true
|
2,867,065,999
|
Add cachebench
|
oulgen
|
closed
|
[
"Merged",
"ciflow/trunk",
"release notes: benchmark",
"module: dynamo",
"ciflow/inductor"
] | 3
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #147546
* __->__ #147537
This PR adds a new benchmark called cachebench in order to measure/demonstrate the prowess of PT2 caching.
```
python benchmarks/dynamo/cachebench.py --output="result.json"
```
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @chauhang @amjames
| true
|
2,867,064,519
|
Fix PEP585 update
|
aorenste
|
closed
|
[
"fb-exported",
"Merged",
"ciflow/trunk",
"release notes: dataloader",
"topic: not user facing"
] | 11
|
CONTRIBUTOR
|
Summary: D69920347 causes a pyre failure due to changing a base object from typing.Iterable to abc.Iterable. For now revert that change until it can be dealt with on its own.
Test Plan:
failures from D69920347 pass locally
unit tests pass
Reviewed By: oulgen
Differential Revision: D69936518
| true
|
2,867,062,781
|
reland "[sigmoid] Test OSS model runner with test_export.py"
|
zhxchen17
|
closed
|
[
"fb-exported",
"Merged",
"ciflow/trunk",
"release notes: export",
"ci-no-td"
] | 5
|
CONTRIBUTOR
|
Summary: There are ~260 tests covering the corner cases of export in test_export.py; we utilize them to test sigmoid in the OSS setting.
Test Plan: buck test mode/opt caffe2/test:test_export -- -r _sigmoid
Differential Revision: D69937387
| true
|
2,867,029,542
|
specify only some dimensions in shapes collection
|
avikchaudhuri
|
closed
|
[
"fb-exported",
"Merged",
"ciflow/trunk",
"release notes: export"
] | 4
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #147534
Differential Revision: [D69936316](https://our.internmc.facebook.com/intern/diff/D69936316/)
| true
|
2,867,009,291
|
Fix register constant to be usable in exportz
|
tugsbayasgalan
|
closed
|
[
"Merged",
"ciflow/trunk",
"release notes: fx",
"fx"
] | 8
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #147533
cc @ezyang @SherlockNoMad @EikanWang @jgong5 @wenzhe-nrv
Differential Revision: [D69939737](https://our.internmc.facebook.com/intern/diff/D69939737)
@diff-train-skip-merge
| true
|
2,867,009,203
|
better error message
|
tugsbayasgalan
|
closed
|
[
"Merged",
"ciflow/trunk",
"ciflow/inductor",
"release notes: export"
] | 5
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #147533
* __->__ #147532
Differential Revision: [D69939736](https://our.internmc.facebook.com/intern/diff/D69939736)
| true
|
2,866,960,408
|
FSDP wrapped module cannot be called with zero arguments
|
gkanwar
|
closed
|
[
"oncall: distributed",
"triaged",
"module: fsdp"
] | 1
|
NONE
|
### 🐛 Describe the bug
When calling an FSDP-wrapped torch module with zero arguments, an index error is thrown.
Reproducer code, which should be launched with an appropriate `torchrun`:
```
import torch
import torch.distributed
from torch.distributed.fsdp import FullyShardedDataParallel as FSDP
import os
class Model(torch.nn.Module):
def __init__(self):
super().__init__()
self.a = torch.nn.Parameter(torch.tensor([1.0]))
self.batch_size = 128
def forward(self):
return torch.randn(self.batch_size) * self.a
def main():
local_rank = int(os.environ['LOCAL_RANK'])
torch.distributed.init_process_group()
device = f'cuda:{local_rank}'
model = Model().to(device)
model = FSDP(model)
print(model())
if __name__ == '__main__':
main()
```
Error:
```
[rank0]: Traceback (most recent call last):
[rank0]: File "/home/lqcd/gurtej/repro_fsdp_bug.py", line 23, in <module>
[rank0]: main()
[rank0]: File "/home/lqcd/gurtej/repro_fsdp_bug.py", line 20, in main
[rank0]: print(model())
[rank0]: ^^^^^^^
[rank0]: File "/work/lqcd/d20a/users/gurtej/anaconda3/envs/torch2.5/lib/python3.12/site-packages/torch/nn/modules/module.py", line 1736, in _wrapped_call_impl
[rank0]: return self._call_impl(*args, **kwargs)
[rank0]: ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
[rank0]: File "/work/lqcd/d20a/users/gurtej/anaconda3/envs/torch2.5/lib/python3.12/site-packages/torch/nn/modules/module.py", line 1747, in _call_impl
[rank0]: return forward_call(*args, **kwargs)
[rank0]: ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
[rank0]: File "/work/lqcd/d20a/users/gurtej/anaconda3/envs/torch2.5/lib/python3.12/site-packages/torch/distributed/fsdp/fully_sharded_data_parallel.py", line 848, in forward
[rank0]: args, kwargs = _root_pre_forward(self, self, args, kwargs)
[rank0]: ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
[rank0]: File "/work/lqcd/d20a/users/gurtej/anaconda3/envs/torch2.5/lib/python3.12/site-packages/torch/distributed/fsdp/_runtime_utils.py", line 592, in _root_pre_forward
[rank0]: args = args_tuple[0]
[rank0]: ~~~~~~~~~~^^^
[rank0]: IndexError: tuple index out of range
```
This error occurs because the code here assumes exactly one argument:
https://github.com/pytorch/pytorch/blob/main/torch/distributed/fsdp/_runtime_utils.py#L599
Is there any reason this assumption needs to be made here? It seems these lines could simply be dropped to support arbitrary numbers of args and kwargs.
There is an obvious workaround of passing one dummy argument to such modules, as sketched below. We do this at the moment, but it would of course be much more natural not to have to hack the argument list to run with FSDP.
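For reference, a minimal sketch of that dummy-argument workaround, reusing the `FSDP` import and `device` setup from the reproducer above (`_dummy` is just an illustrative name, nothing FSDP requires):
```
class Model(torch.nn.Module):
    def __init__(self):
        super().__init__()
        self.a = torch.nn.Parameter(torch.tensor([1.0]))
        self.batch_size = 128

    def forward(self, _dummy=None):
        # _dummy exists only so that FSDP's _root_pre_forward sees a
        # non-empty args tuple.
        return torch.randn(self.batch_size, device=self.a.device) * self.a

model = FSDP(Model().to(device))  # device as in the reproducer
out = model(None)                 # single dummy positional argument
```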
### Versions
```
Collecting environment information...
PyTorch version: 2.5.0
Is debug build: False
CUDA used to build PyTorch: 11.8
ROCM used to build PyTorch: N/A
OS: Ubuntu 22.04 LTS (x86_64)
GCC version: (Ubuntu 11.4.0-1ubuntu1~22.04) 11.4.0
Clang version: Could not collect
CMake version: version 3.11.3
Libc version: glibc-2.35
Python version: 3.12.7 | packaged by Anaconda, Inc. | (main, Oct 4 2024, 13:27:36) [GCC 11.2.0] (64-bit runtime)
Python platform: Linux-5.15.0-84-generic-x86_64-with-glibc2.35
Is CUDA available: True
CUDA runtime version: Could not collect
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration:
GPU 0: NVIDIA GeForce RTX 2080 Ti
GPU 1: NVIDIA GeForce RTX 2080 Ti
GPU 2: NVIDIA GeForce RTX 2080 Ti
GPU 3: NVIDIA GeForce RTX 2080 Ti
GPU 4: NVIDIA GeForce RTX 2080 Ti
GPU 5: NVIDIA GeForce RTX 2080 Ti
GPU 6: NVIDIA GeForce RTX 2080 Ti
GPU 7: NVIDIA GeForce RTX 2080 Ti
Nvidia driver version: 525.125.06
cuDNN version: Could not collect
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 46 bits physical, 48 bits virtual
Byte Order: Little Endian
CPU(s): 64
On-line CPU(s) list: 0-63
Vendor ID: GenuineIntel
Model name: Intel(R) Xeon(R) Gold 5218 CPU @ 2.30GHz
CPU family: 6
Model: 85
Thread(s) per core: 2
Core(s) per socket: 16
Socket(s): 2
Stepping: 7
CPU max MHz: 3900.0000
CPU min MHz: 1000.0000
BogoMIPS: 4600.00
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc art arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc cpuid aperfmperf pni pclmulqdq dtes64 monitor ds_cpl vmx smx est tm2 ssse3 sdbg fma cx16 xtpr pdcm pcid dca sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm 3dnowprefetch cpuid_fault epb cat_l3 cdp_l3 invpcid_single intel_ppin ssbd mba ibrs ibpb stibp ibrs_enhanced tpr_shadow vnmi flexpriority ept vpid ept_ad fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid cqm mpx rdt_a avx512f avx512dq rdseed adx smap clflushopt clwb intel_pt avx512cd avx512bw avx512vl xsaveopt xsavec xgetbv1 xsaves cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local dtherm ida arat pln pts hwp hwp_act_window hwp_epp hwp_pkg_req pku ospke avx512_vnni md_clear flush_l1d arch_capabilities
Virtualization: VT-x
L1d cache: 1 MiB (32 instances)
L1i cache: 1 MiB (32 instances)
L2 cache: 32 MiB (32 instances)
L3 cache: 44 MiB (2 instances)
NUMA node(s): 2
NUMA node0 CPU(s): 0-15,32-47
NUMA node1 CPU(s): 16-31,48-63
Vulnerability Gather data sampling: Mitigation; Microcode
Vulnerability Itlb multihit: KVM: Mitigation: VMX disabled
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Mitigation; Clear CPU buffers; SMT vulnerable
Vulnerability Retbleed: Mitigation; Enhanced IBRS
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl and seccomp
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Enhanced IBRS, IBPB conditional, RSB filling, PBRSB-eIBRS SW sequence
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Mitigation; TSX disabled
Versions of relevant libraries:
[pip3] numpy==1.26.4
[pip3] torch==2.5.0
[pip3] triton==3.1.0
[conda] blas 1.0 mkl
[conda] cuda-cudart 11.8.89 0 nvidia
[conda] cuda-cupti 11.8.87 0 nvidia
[conda] cuda-libraries 11.8.0 0 nvidia
[conda] cuda-nvrtc 11.8.89 0 nvidia
[conda] cuda-nvtx 11.8.86 0 nvidia
[conda] cuda-runtime 11.8.0 0 nvidia
[conda] libcublas 11.11.3.6 0 nvidia
[conda] libcufft 10.9.0.58 0 nvidia
[conda] libcurand 10.3.7.77 0 nvidia
[conda] libcusolver 11.4.1.48 0 nvidia
[conda] libcusparse 11.7.5.86 0 nvidia
[conda] mkl 2023.1.0 h213fc3f_46344
[conda] mkl-service 2.4.0 py312h5eee18b_1
[conda] mkl_fft 1.3.10 py312h5eee18b_0
[conda] mkl_random 1.2.7 py312h526ad5a_0
[conda] numpy 1.26.4 py312hc5e2394_0
[conda] numpy-base 1.26.4 py312h0da6c21_0
[conda] pytorch 2.5.0 py3.12_cuda11.8_cudnn9.1.0_0 pytorch
[conda] pytorch-cuda 11.8 h7e8668a_6 pytorch
[conda] pytorch-mutex 1.0 cuda pytorch
[conda] torchtriton 3.1.0 py312 pytorch
```
cc @H-Huang @awgu @kwen2501 @wanchaol @fegin @fduwjj @wz337 @wconstab @d4l3k @c-p-i-o @zhaojuanmao @mrshenli @rohan-varma @chauhang
| true
|
2,866,891,445
|
[fx][dynamo][be] Don't allow arbitrary dataclass in the graph
|
StrongerXi
|
open
|
[
"triaged",
"oncall: pt2",
"module: dynamo"
] | 0
|
CONTRIBUTOR
|
### 🐛 Describe the bug
Right now Dynamo and fx tracing allow dataclass instances in the graph, represented as `call_function(dataclass_ctor, args...)`. Relevant PRs:
- #99576
- #134846
The issue is that a dataclass constructor can contain arbitrary user code (see the sketch below for what such a node looks like).
More context: https://docs.google.com/document/d/1rgm7_tnS1Uj2srLMatu092Or3_4bUGFoYjGMOoG2az4/edit?pli=1&disco=AAABdgcYZkg.
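For illustration only (not code from the PRs above), here is a minimal sketch of such a node built by hand with the `torch.fx.Graph` API; `Pair` is a made-up dataclass:
```
import dataclasses
import torch
import torch.fx as fx

@dataclasses.dataclass
class Pair:
    a: torch.Tensor
    b: torch.Tensor

graph = fx.Graph()
x = graph.placeholder("x")
y = graph.placeholder("y")
# The dataclass constructor is an ordinary call_function target, so any user
# code in __init__/__post_init__ runs whenever the graph is executed.
pair = graph.call_function(Pair, args=(x, y))
graph.output(pair)
print(graph)
```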
### Versions
main
cc @chauhang @penguinwu @voznesenskym @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @amjames
| true
|