| id | title | user | state | labels | comments | author_association | body | is_title |
|---|---|---|---|---|---|---|---|---|
3,001,967,148
|
[inductor][cpu] pytorch_CycleGAN_and_pix2pix AMP/AMP_FP16 multiple thread performance regression in 2025-04-07 nightly release
|
zxd1997066
|
open
|
[
"oncall: pt2",
"oncall: cpu inductor"
] | 0
|
CONTRIBUTOR
|
### 🐛 Describe the bug
<p>AMP dynamic shape default wrapper</p><table border="1" class="dataframe table">
<thead>
<tr style="text-align: right;">
<th>suite</th>
<th>name</th>
<th>thread</th>
<th>batch_size_new</th>
<th>speed_up_new</th>
<th>inductor_new</th>
<th>eager_new</th>
<th>compilation_latency_new</th>
<th>batch_size_old</th>
<th>speed_up_old</th>
<th>inductor_old</th>
<th>eager_old</th>
<th>compilation_latency_old</th>
<th>Ratio Speedup(New/old)</th>
<th>Eager Ratio(old/new)</th>
<th>Inductor Ratio(old/new)</th>
<th>Compilation_latency_Ratio(old/new)</th>
</tr>
</thead>
<tbody>
<tr>
<td>torchbench</td>
<td>pytorch_CycleGAN_and_pix2pix</td>
<td>multiple</td>
<td>1</td>
<td>2.081768</td>
<td>0.018581576</td>
<td>0.03868253030636799</td>
<td>29.278569</td>
<td>1</td>
<td>2.379611</td>
<td>0.016544913</td>
<td>0.039370456968843004</td>
<td>29.383404</td>
<td>0.87</td>
<td>1.02</td>
<td>0.89</td>
<td>1.0</td>
</tr>
</tbody>
</table>
<p>AMP_FP16 dynamic shape default wrapper</p><table border="1" class="dataframe table">
<thead>
<tr style="text-align: right;">
<th>suite</th>
<th>name</th>
<th>thread</th>
<th>batch_size_new</th>
<th>speed_up_new</th>
<th>inductor_new</th>
<th>eager_new</th>
<th>compilation_latency_new</th>
<th>batch_size_old</th>
<th>speed_up_old</th>
<th>inductor_old</th>
<th>eager_old</th>
<th>compilation_latency_old</th>
<th>Ratio Speedup(New/old)</th>
<th>Eager Ratio(old/new)</th>
<th>Inductor Ratio(old/new)</th>
<th>Compilation_latency_Ratio(old/new)</th>
</tr>
</thead>
<tbody>
<tr>
<td>torchbench</td>
<td>pytorch_CycleGAN_and_pix2pix</td>
<td>multiple</td>
<td>1</td>
<td>2.070508</td>
<td>0.019373711000000002</td>
<td>0.040113423615188</td>
<td>29.4651</td>
<td>1</td>
<td>2.226575</td>
<td>0.016641866</td>
<td>0.03705436278895</td>
<td>29.256727</td>
<td>0.93</td>
<td>0.92</td>
<td>0.86</td>
<td>0.99</td>
</tr>
</tbody>
</table>
the bad commit: 5cb5675f1390474781c0b9cfdeb7bdcc45f89c8e
```
/workspace/pytorch# bash inductor_single_run.sh multiple inference performance torchbench pytorch_CycleGAN_and_pix2pix amp first dynamic
Testing with dynamic shapes.
Testing with inductor.
multi-threads testing....
loading model: 0it [00:00, ?it/s]
cpu eval pytorch_CycleGAN_and_pix2pix
running benchmark: 100%|████████████████████████████████████████████████████████████████████████████████████| 50/50 [00:04<00:00, 12.38it/s]
2.404x
WARNING:common:Trying to call the empty_gpu_cache for device: cpu, which is not in list [cuda, xpu]
dev,name,batch_size,speedup,abs_latency,compilation_latency,compression_ratio,eager_peak_mem,dynamo_peak_mem,calls_captured,unique_graphs,graph_breaks,unique_graph_breaks,autograd_captures,autograd_compiles,cudagraph_skips
cpu,pytorch_CycleGAN_and_pix2pix,1,2.404379,23.334240,59.428802,0.877266,117.663744,134.125568,93,1,0,0,0,0,1
```
the last good commit: 0f12951fc2005cd5b3ee13a877567215eb5f4425
```
/workspace/pytorch# bash inductor_single_run.sh multiple inference performance torchbench pytorch_CycleGAN_and_pix2pix amp first dynamic
Testing with dynamic shapes.
Testing with inductor.
multi-threads testing....
loading model: 0it [00:00, ?it/s]
cpu eval pytorch_CycleGAN_and_pix2pix
running benchmark: 100%|████████████████████████████████████████████████████████████████████████████████████| 50/50 [00:03<00:00, 13.35it/s]
2.572x
WARNING:common:Trying to call the empty_gpu_cache for device: cpu, which is not in list [cuda, xpu]
dev,name,batch_size,speedup,abs_latency,compilation_latency,compression_ratio,eager_peak_mem,dynamo_peak_mem,calls_captured,unique_graphs,graph_breaks,unique_graph_breaks,autograd_captures,autograd_compiles,cudagraph_skips
cpu,pytorch_CycleGAN_and_pix2pix,1,2.571518,21.150980,50.992881,0.917315,117.857075,128.480461,93,1,0,0,0,0,1
```
### Versions
<p>SW info</p><table border="1" class="dataframe table">
<thead>
<tr style="text-align: right;">
<th>name</th>
<th>target_branch</th>
<th>target_commit</th>
<th>refer_branch</th>
<th>refer_commit</th>
</tr>
</thead>
<tbody>
<tr>
<td>torchbench</td>
<td>main</td>
<td>373ffb19</td>
<td>main</td>
<td>373ffb19</td>
</tr>
<tr>
<td>torch</td>
<td>main</td>
<td>d98575806ba3f2b67439c241e980df8f98923f44</td>
<td>main</td>
<td>f80bee4934dc2d6c8031f481d699cd4832a1a932</td>
</tr>
<tr>
<td>torchvision</td>
<td>main</td>
<td>0.19.0a0+d23a6e1</td>
<td>main</td>
<td>0.19.0a0+d23a6e1</td>
</tr>
<tr>
<td>torchtext</td>
<td>main</td>
<td>0.16.0a0+b0ebddc</td>
<td>main</td>
<td>0.16.0a0+b0ebddc</td>
</tr>
<tr>
<td>torchaudio</td>
<td>main</td>
<td>2.6.0a0+bccaa45</td>
<td>main</td>
<td>2.6.0a0+c670ad8</td>
</tr>
<tr>
<td>torchdata</td>
<td>main</td>
<td>0.7.0a0+11bb5b8</td>
<td>main</td>
<td>0.7.0a0+11bb5b8</td>
</tr>
<tr>
<td>dynamo_benchmarks</td>
<td>main</td>
<td>nightly</td>
<td>main</td>
<td>nightly</td>
</tr>
</tbody>
</table>
Repro:
[inductor_single_run.sh](https://github.com/chuanqi129/inductor-tools/blob//main/scripts/modelbench/inductor_single_run.sh)
bash inductor_single_run.sh multiple inference performance torchbench pytorch_CycleGAN_and_pix2pix amp first dynamic
Suspected guilty commit: https://github.com/pytorch/pytorch/commit/5cb5675f1390474781c0b9cfdeb7bdcc45f89c8e
[torchbench-pytorch_CycleGAN_and_pix2pix-inference-amp-dynamic-default-multiple-performance-drop_guilty_commit.log](https://github.com/user-attachments/files/19791971/torchbench-pytorch_CycleGAN_and_pix2pix-inference-amp-dynamic-default-multiple-performance-drop_guilty_commit.log)
cc @chauhang @penguinwu @chuanqi129
| true
|
3,001,908,347
|
Cuda error on RTX 5090d: ImportError: ImportError: cannot import name 'EPOCH_OUTPUT' from 'pytorch_lightning.utilities.types'
|
paomian001
|
closed
|
[] | 2
|
NONE
|
### 🐛 Describe the bug
ImportError: cannot import name 'EPOCH_OUTPUT' from 'pytorch_lightning.utilities.types'

### Versions
PyTorch version: 2.8.0.dev20250416+cu128
Is debug build: False
CUDA used to build PyTorch: 12.8
ROCM used to build PyTorch: N/A
OS: Ubuntu 22.04.5 LTS (x86_64)
GCC version: (Ubuntu 11.4.0-1ubuntu1~22.04) 11.4.0
Clang version: Could not collect
CMake version: Could not collect
Libc version: glibc-2.35
Python version: 3.9.21 (main, Dec 11 2024, 16:24:11) [GCC 11.2.0] (64-bit runtime)
Python platform: Linux-6.8.0-57-generic-x86_64-with-glibc2.35
Is CUDA available: True
CUDA runtime version: Could not collect
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration: GPU 0: NVIDIA GeForce RTX 5090 D
Nvidia driver version: 570.133.07
cuDNN version: Could not collect
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 39 bits physical, 48 bits virtual
Byte Order: Little Endian
CPU(s): 32
On-line CPU(s) list: 0-31
Vendor ID: GenuineIntel
Model name: Intel(R) Core(TM) i9-14900KF
CPU family: 6
Model: 183
Thread(s) per core: 2
Core(s) per socket: 24
Socket(s): 1
Stepping: 1
CPU max MHz: 6000.0000
CPU min MHz: 800.0000
BogoMIPS: 6374.40
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc art arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc cpuid aperfmperf tsc_known_freq pni pclmulqdq dtes64 monitor ds_cpl vmx est tm2 ssse3 sdbg fma cx16 xtpr pdcm sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm 3dnowprefetch cpuid_fault ssbd ibrs ibpb stibp ibrs_enhanced tpr_shadow flexpriority ept vpid ept_ad fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid rdseed adx smap clflushopt clwb intel_pt sha_ni xsaveopt xsavec xgetbv1 xsaves split_lock_detect user_shstk avx_vnni dtherm ida arat pln pts hwp hwp_notify hwp_act_window hwp_epp hwp_pkg_req hfi vnmi umip pku ospke waitpkg gfni vaes vpclmulqdq rdpid movdiri movdir64b fsrm md_clear serialize arch_lbr ibt flush_l1d arch_capabilities
Virtualization: VT-x
L1d cache: 896 KiB (24 instances)
L1i cache: 1.3 MiB (24 instances)
L2 cache: 32 MiB (12 instances)
L3 cache: 36 MiB (1 instance)
NUMA node(s): 1
NUMA node0 CPU(s): 0-31
Vulnerability Gather data sampling: Not affected
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Not affected
Vulnerability Reg file data sampling: Mitigation; Clear Register File
Vulnerability Retbleed: Not affected
Vulnerability Spec rstack overflow: Not affected
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Enhanced / Automatic IBRS; IBPB conditional; RSB filling; PBRSB-eIBRS SW sequence; BHI BHI_DIS_S
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Versions of relevant libraries:
[pip3] numpy==1.26.4
[pip3] nvidia-cublas-cu12==12.8.3.14
[pip3] nvidia-cuda-cupti-cu12==12.8.57
[pip3] nvidia-cuda-nvrtc-cu12==12.8.61
[pip3] nvidia-cuda-runtime-cu12==12.8.57
[pip3] nvidia-cudnn-cu12==9.8.0.87
[pip3] nvidia-cufft-cu12==11.3.3.41
[pip3] nvidia-curand-cu12==10.3.9.55
[pip3] nvidia-cusolver-cu12==11.7.2.55
[pip3] nvidia-cusparse-cu12==12.5.7.53
[pip3] nvidia-cusparselt-cu12==0.6.3
[pip3] nvidia-nccl-cu12==2.26.2
[pip3] nvidia-nvjitlink-cu12==12.8.61
[pip3] nvidia-nvtx-cu12==12.8.55
[pip3] pytorch-lightning==2.5.1
[pip3] pytorch-msssim==1.0.0
[pip3] pytorch-triton==3.3.0+git96316ce5
[pip3] torch==2.8.0.dev20250416+cu128
[pip3] torch-geometric==2.6.1
[pip3] torchaudio==2.6.0.dev20250416+cu128
[pip3] torchmetrics==1.7.1
[pip3] torchvision==0.22.0.dev20250416+cu128
[conda] numpy 1.26.4 pypi_0 pypi
[conda] nvidia-cublas-cu12 12.8.3.14 pypi_0 pypi
[conda] nvidia-cuda-cupti-cu12 12.8.57 pypi_0 pypi
[conda] nvidia-cuda-nvrtc-cu12 12.8.61 pypi_0 pypi
[conda] nvidia-cuda-runtime-cu12 12.8.57 pypi_0 pypi
[conda] nvidia-cudnn-cu12 9.8.0.87 pypi_0 pypi
[conda] nvidia-cufft-cu12 11.3.3.41 pypi_0 pypi
[conda] nvidia-curand-cu12 10.3.9.55 pypi_0 pypi
[conda] nvidia-cusolver-cu12 11.7.2.55 pypi_0 pypi
[conda] nvidia-cusparse-cu12 12.5.7.53 pypi_0 pypi
[conda] nvidia-cusparselt-cu12 0.6.3 pypi_0 pypi
[conda] nvidia-nccl-cu12 2.26.2 pypi_0 pypi
[conda] nvidia-nvjitlink-cu12 12.8.61 pypi_0 pypi
[conda] nvidia-nvtx-cu12 12.8.55 pypi_0 pypi
[conda] pytorch-lightning 2.5.1 pypi_0 pypi
[conda] pytorch-msssim 1.0.0 pypi_0 pypi
[conda] pytorch-triton 3.3.0+git96316ce5 pypi_0 pypi
[conda] torch 2.8.0.dev20250416+cu128 pypi_0 pypi
[conda] torch-geometric 2.6.1 pypi_0 pypi
[conda] torchaudio 2.6.0.dev20250416+cu128 pypi_0 pypi
[conda] torchmetrics 1.7.1 pypi_0 pypi
[conda] torchvision 0.22.0.dev20250416+cu128 pypi_0 pypi
| true
|
3,001,789,741
|
Fix `InstanceNorm` wrong suggestion in warning message
|
zeshengzong
|
open
|
[
"triaged",
"open source"
] | 5
|
CONTRIBUTOR
|
Fixes #109652
## Changes
- Change the misleading suggestion in the warning message of `_InstanceNorm`
## Test Result
```python
import torch
m = torch.nn.InstanceNorm1d(64)
input = torch.randn(4, 80, 300)
output = m(input)
/home/zong/code/pytorch/torch/nn/modules/instancenorm.py:115: UserWarning: input's size at dim=1 does not match num_features. Since affine=False, num_features is not used in the normalization process. You can safely ignore this warning.
```
| true
|
3,001,767,845
|
[Intel GPU] Enable XPU depthwise convolution
|
ZhiweiYan-96
|
open
|
[
"module: cpu",
"open source",
"topic: not user facing",
"ciflow/inductor",
"ciflow/xpu",
"module: xpu"
] | 7
|
COLLABORATOR
|
# Motivation
This PR enables XPU depthwise convolution by using the overrideable backend implemented in `aten/src/ATen/native/mkldnn/xpu/Conv.cpp`. The implementation treats depthwise convolution as a regular convolution with `groups=channels_in`.
# Verification
```
DNNL_VERBOSE=1 python test/xpu/test_conv.py TestConvolutionNNDeviceTypeXPU -k test_Conv2d_depthwise_naive_groups_xpu
```
```
onednn_verbose,v1,primitive,exec,gpu:0,convolution,jit:ir,forward_training,src:f32::blocked:abcd::f0 wei:f32::blocked:abcde::f0 bia:f32::blocked:a::f0 dst:f32::blocked:abcd::f0,attr-scratchpad:user,alg:convolution_direct,g2mb2_ic2oc4_ih6oh4kh3sh1dh0ph0_iw6ow4kw3sw1dw0pw0,0.117188
onednn_verbose,v1,primitive,exec,gpu:0,convolution,jit:ir,backward_data,src:f32::blocked:abcd::f0 wei:f32::blocked:abcde::f0 bia:undef::undef::: dst:f32::blocked:abcd::f0,attr-scratchpad:user,alg:convolution_direct,g2mb2_ic2oc4_ih6oh4kh3sh1dh0ph0_iw6ow4kw3sw1dw0pw0,0.165039
```
`g2mb2_ic2` shows that the group count equals the number of input channels, which matches the definition of depthwise convolution.
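For illustration, a minimal sketch of the depthwise case exercised above, where `groups` equals the number of input channels (shapes mirror the `g2mb2_ic2oc4_ih6oh4kh3` entry in the verbose log; the XPU availability check is an assumption about the local build):
```python
import torch
import torch.nn as nn

# Depthwise convolution: groups == in_channels, so each input channel
# is convolved with its own set of filters.
in_channels = 2
conv = nn.Conv2d(in_channels, out_channels=4, kernel_size=3, groups=in_channels)

# Fall back to CPU if this build has no XPU device.
device = "xpu" if (hasattr(torch, "xpu") and torch.xpu.is_available()) else "cpu"
conv = conv.to(device)
x = torch.randn(2, in_channels, 6, 6, device=device)

y = conv(x)
print(y.shape)  # torch.Size([2, 4, 4, 4])
```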
FIX #151308
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #151533
cc @jgong5 @mingfeima @XiaobingSuper @sanchitintel @ashokei @jingxu10 @jerryzh168 @gujinghui @EikanWang @fengyuan14 @guangyey
| true
|
3,001,744,590
|
[Easy] Optimize `clip_grad` param description
|
zeshengzong
|
closed
|
[
"triaged",
"open source",
"Merged",
"ciflow/trunk",
"release notes: nn",
"topic: docs"
] | 5
|
CONTRIBUTOR
|
Fix missing optional description in `clip_grad_norm_` and `clip_grad_value_`
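For reference, a minimal usage sketch of the two functions whose docs are touched here (the optional arguments shown reflect the current public signatures, not this PR's diff):
```python
import torch
import torch.nn as nn

model = nn.Linear(4, 2)
model(torch.randn(8, 4)).sum().backward()  # populate gradients

# norm_type, error_if_nonfinite and foreach are the optional arguments.
total_norm = nn.utils.clip_grad_norm_(
    model.parameters(), max_norm=1.0, norm_type=2.0, error_if_nonfinite=False, foreach=None
)
nn.utils.clip_grad_value_(model.parameters(), clip_value=0.5, foreach=None)
print(total_norm)
```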
## Test Result
### Before


### After


| true
|
3,001,676,707
|
[WIP] Deprecate getPinnedMemoryAllocator use getHostAllocator instead
|
guangyey
|
open
|
[
"open source",
"release notes: cpp"
] | 2
|
COLLABORATOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #151531
* #151916
* #151913
| true
|
3,001,670,073
|
Add pack support and use micro gemm for Half flex attention on CPU
|
CaoE
|
closed
|
[
"open source",
"Merged",
"ciflow/trunk",
"topic: not user facing",
"module: inductor",
"ciflow/inductor"
] | 9
|
COLLABORATOR
|
Add pack support and use micro gemm for the second gemm to improve the performance of Half flex attention on CPU.
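For context, a minimal sketch of the workload this PR targets (shapes are illustrative and the score_mod is the standard relative-position example; assumes a build where CPU flex attention accepts float16 inputs):
```python
import torch
from torch.nn.attention.flex_attention import flex_attention

# Illustrative shapes: (batch, heads, seq_len, head_dim) in half precision on CPU.
q, k, v = (torch.randn(1, 4, 128, 64, dtype=torch.half) for _ in range(3))

def relative_positional(score, b, h, q_idx, kv_idx):
    # Common score_mod example: add a relative-position bias to the raw score.
    return score + (q_idx - kv_idx)

# When compiled, Inductor lowers flex attention; on CPU this is the path
# whose second GEMM the PR switches to packed micro-GEMM kernels.
compiled_flex = torch.compile(flex_attention)
out = compiled_flex(q, k, v, score_mod=relative_positional)
print(out.shape, out.dtype)
```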
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
| true
|
3,001,621,435
|
Add hint message when parameters is empty in clip_grad_norm_
|
zeshengzong
|
open
|
[
"triaged",
"open source",
"topic: not user facing"
] | 4
|
CONTRIBUTOR
|
Fixes #148259
## Changes
- Print a warning message when the `parameters` generator is exhausted
## Test Result
### print warning
```python
import torch
import torch.nn as nn
import torch.optim as optim
class SimpleModel(nn.Module):
    def __init__(self):
        super(SimpleModel, self).__init__()
        self.fc = nn.Linear(10, 1)

    def forward(self, x):
        return self.fc(x)
model = SimpleModel()
criterion = nn.MSELoss()
optimizer = optim.SGD(model.parameters(), lr=0.01)
inputs = torch.randn(16, 10)
targets = torch.randn(16, 1)
outputs = model(inputs)
loss = criterion(outputs, targets)
optimizer.zero_grad()
loss.backward()
params_to_clip = model.parameters()
for p in params_to_clip:
    print(p.shape)
max_norm = 1.0
norm_type = 2.0
total_norm = nn.utils.clip_grad_norm_(params_to_clip, max_norm, norm_type)
print(f"total_norm: {total_norm}")
```
```bash
/home/zong/code/pytorch/torch/nn/utils/clip_grad.py:222: UserWarning: `parameters` is an empty generator, no gradient clipping will occur.
warnings.warn(
total_norm: 0.0
```
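The warning fires because `model.parameters()` returns a generator that the `for` loop above has already exhausted by the time `clip_grad_norm_` runs. A sketch of the usual workaround, reusing the repro's names, is to materialize the parameters first:
```python
# Materialize the generator once so it can be iterated more than once.
params_to_clip = list(model.parameters())
for p in params_to_clip:
    print(p.shape)

# The list still holds the parameters here, so clipping sees their gradients.
total_norm = nn.utils.clip_grad_norm_(params_to_clip, max_norm=1.0, norm_type=2.0)
print(f"total_norm: {total_norm}")
```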
### UT
```bash
pytest test/test_nn.py -k test_clip_grad_norm
```

| true
|
3,001,582,503
|
[Inductor] Suppress cuda init error for CPU only Inductor
|
leslie-fang-intel
|
closed
|
[
"open source",
"Merged",
"ciflow/trunk",
"topic: not user facing",
"module: inductor",
"ciflow/inductor"
] | 3
|
COLLABORATOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #150669
* __->__ #151528
**Summary**
After https://github.com/pytorch/pytorch/pull/151255, invoking `torch.compile` on a non-CUDA device prints the following error:
`E0416 23:39:55.953000 418833 torch/_inductor/codegen/cuda/cuda_env.py:22] Error getting cuda arch: Torch not compiled with CUDA enabled.`
This PR updates the code to initialize `PRESETS` only when CUDA is available, preventing this error message from being printed.
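As an illustration of the general pattern (the PR's actual `PRESETS`/arch-lookup code is paraphrased here, not reproduced), the query is guarded by a CUDA availability check so CPU-only runs never hit the error path:
```python
import torch

def get_cuda_arch():
    # Only touch the CUDA runtime when a CUDA build and device are present;
    # on CPU-only setups return None instead of logging an error.
    if not torch.cuda.is_available():
        return None
    major, minor = torch.cuda.get_device_capability()
    return f"{major}{minor}"

arch = get_cuda_arch()
# Hypothetical lazy initialization: build the preset table only when CUDA exists.
PRESETS = {arch: {}} if arch is not None else {}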
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
| true
|
3,001,530,205
|
Use device agnostic APIs and variable names for dtensor
|
amathewc
|
open
|
[
"oncall: distributed",
"triaged",
"open source",
"ciflow/trunk",
"topic: not user facing"
] | 9
|
CONTRIBUTOR
|
This PR contains the original three files that were added and approved in https://github.com/pytorch/pytorch/pull/148876 . During the rebase, other unrelated files were added to that PR by mistake, so it was closed before merging.
## MOTIVATION
To generalize DTensor test cases for non-CUDA devices, we are replacing certain APIs with device-agnostic alternatives. Additionally, we are refactoring the code to improve modularity.
Please refer to this RFC as well: https://github.com/pytorch/rfcs/pull/66
## CHANGES
**common_dtensor.py**
- Use APIs like `torch.get_device_module` and `dist.get_default_backend_for_device` to dynamically determine the device and backend based on the environment (see the sketch after this list).
- Replace hardcoded device names with generic identifiers such as `self.device_type`.
- In the wrapper function, use `DEVICE_COUNT`, which is set via `DEVICE_MODULE.device_count`, instead of `torch.accelerator.device_count()`, as the latter does not support out-of-tree devices.

**test_random_ops.py & test_dtensor_config.py**
- Replace hardcoded device names with `self.device_type`.
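A minimal sketch of the device-agnostic pattern described above (assumes a recent PyTorch where `torch.get_device_module`, `torch.accelerator`, and `dist.get_default_backend_for_device` exist; `DEVICE_MODULE`/`DEVICE_COUNT` mirror the PR's description rather than its exact code):
```python
import torch
import torch.distributed as dist

# Resolve the device type from the environment (e.g. "cuda", "xpu", or "cpu")
# instead of hardcoding it; the DTensor tests read this from self.device_type.
device_type = (
    torch.accelerator.current_accelerator().type if torch.accelerator.is_available() else "cpu"
)

# Device module (torch.cuda, torch.xpu, ...) and the matching default backend.
DEVICE_MODULE = torch.get_device_module(device_type)
DEVICE_COUNT = DEVICE_MODULE.device_count()
backend = dist.get_default_backend_for_device(device_type) if dist.is_available() else None

print(f"device={device_type} count={DEVICE_COUNT} backend={backend}")
```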
@ankurneog , @EikanWang , @cyyever , @guangyey
cc @H-Huang @awgu @wanchaol @fegin @fduwjj @wz337 @wconstab @d4l3k
| true
|
3,001,510,417
|
Extend the error type for dynamo logging
|
houseroad
|
closed
|
[
"fb-exported",
"ciflow/trunk",
"topic: not user facing",
"module: inductor",
"ciflow/inductor"
] | 13
|
MEMBER
|
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
| true
|
3,001,467,744
|
[inductor] [cpu] [silent incorrectness] `nn.LazyConvTranspose2d-torch.randn-F.linear-torch.argmax` output incorrect results on CPU inductor
|
shaoyuyoung
|
open
|
[
"oncall: pt2",
"oncall: cpu inductor"
] | 2
|
CONTRIBUTOR
|
### 🐛 Describe the bug
**symptom**: `nn.LazyConvTranspose2d-torch.randn-F.linear-torch.argmax` outputs incorrect results on CPU inductor
**device backend**: only CPP
```python
import torch
import torch.nn as nn
import torch.nn.functional as F
from torch._inductor import config
config.fallback_random = True
torch.set_grad_enabled(False)
torch.manual_seed(0)
class Model(torch.nn.Module):
    def __init__(self):
        super().__init__()
        self.conv_transpose = torch.nn.LazyConvTranspose2d(out_channels=8, kernel_size=3, stride=2)

    def forward(self, x):
        x = self.conv_transpose(x)
        y = torch.randn(64, x.numel() // x.shape[0], dtype=x.dtype)
        x = F.linear(x.flatten(1), y)
        x = torch.argmax(x, dim=1)
        return x


model = Model()
x = torch.randn(1, 3, 16, 16)
inputs = [x]


def run_test(model, inputs, backend):
    if backend != "eager":
        model = torch.compile(model, backend=backend)
    torch.manual_seed(0)
    output = model(*inputs)
    return output
output = run_test(model, inputs, 'eager')
c_output = run_test(model, inputs, 'inductor')
print(torch.allclose(output, c_output, rtol=1e-3, atol=1e-3))
print(torch.max(torch.abs(c_output - output)))
fp64 = run_test(model.to(dtype=torch.float64), [x.to(dtype=torch.float64) for x in inputs], 'eager')
print(torch._dynamo.utils.same(output, c_output, fp64))
```
### Error logs
CPP
```
False
tensor(49)
E0417 13:36:22.737000 1459462 site-packages/torch/_dynamo/utils.py:2946] Accuracy failed: allclose not within tol=0.0001
False
```
triton
```
True
tensor(0, device='cuda:0')
True
```
### Versions
nightly 20250414
cc @chauhang @penguinwu
| true
|
3,001,424,823
|
[inductor] [silent incorrectness] Multiple internal `torch.rand` can lead to inconsistent results with eager
|
shaoyuyoung
|
open
|
[
"high priority",
"triaged",
"oncall: pt2",
"module: inductor"
] | 1
|
CONTRIBUTOR
|
### 🐛 Describe the bug
**symptom**: If `torch.rand` is called only once in the forward function, the output matches eager. However, the outputs diverge once `torch.rand` is called at least twice; the repeated internal `torch.rand` calls do not seem to respect `fallback_random` (?)
**device backend**: both CPP and triton
```python
import torch
import torch.nn as nn
import torch.nn.functional as F
from torch._inductor import config
config.fallback_random = True
torch.set_grad_enabled(False)
torch.manual_seed(0)
class Model(nn.Module):
    def __init__(self):
        super().__init__()

    def forward(self):
        x = torch.rand(1)
        x = torch.rand(1)
        return x


model = Model()
inputs = []


def run_test(model, inputs, backend):
    if backend != "eager":
        model = torch.compile(model, backend=backend)
    torch.manual_seed(0)
    output = model()
    return output
output = run_test(model, inputs, 'eager')
c_output = run_test(model, inputs, 'inductor')
print(output)
print(c_output)
```
### Error logs
```
tensor([0.7682])
tensor([0.4963])
```
### Versions
nightly 20250414
cc @ezyang @gchanan @zou3519 @kadeng @msaroufim @chauhang @penguinwu @voznesenskym @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @chenyang78 @muchulee8 @amjames @aakhundov
| true
|
3,001,400,241
|
[inductor] [cpu] `nn.Conv2d-F.hardshrink-.view-torch.mv` throws `CppCompileError` on CPU inductor
|
shaoyuyoung
|
closed
|
[
"oncall: pt2",
"oncall: cpu inductor"
] | 4
|
CONTRIBUTOR
|
### 🐛 Describe the bug
**symptom**: `nn.Conv2d-F.hardshrink-.view-torch.mv` throws `CppCompileError` on CPU inductor
**device backend**: only CPP
```python
import torch
import torch.nn as nn
import torch.nn.functional as F
from torch._inductor import config
config.fallback_random = True
torch.set_grad_enabled(False)
import os
os.environ['TORCHDYNAMO_VERBOSE'] = '1'
class Model(torch.nn.Module):
    def __init__(self):
        super().__init__()
        self.conv = torch.nn.Conv2d(in_channels=3, out_channels=16, kernel_size=(1, 7), stride=(2, 1), padding=0)

    def forward(self, x, weight):
        x = self.conv(x)
        x = F.hardshrink(x, lambd=0)
        x = x.view(x.size(0), -1)
        x = torch.mv(weight, x[0])
        return x


model = Model()
x = torch.randn(2, 3, 127, 255)
weight = torch.randn(10, 254976)
inputs = [x, weight]


def run_test(model, inputs, backend):
    torch.manual_seed(0)
    if backend != "eager":
        model = torch.compile(model, backend=backend)
    try:
        output = model(*inputs)
        print(f"succeed on {backend}")
    except Exception as e:
        print(e)
run_test(model, inputs, 'eager')
run_test(model, inputs, 'inductor')
```
### Error logs
```
succeed on eager
CppCompileError: C++ compile error
```
### Versions
nightly 20250414
cc @chauhang @penguinwu
| true
|
3,001,366,652
|
[inductor] `.to_sparse()-.to_dense()` throws `LoweringException: NotImplementedError:`
|
shaoyuyoung
|
closed
|
[
"high priority",
"triaged",
"oncall: pt2"
] | 1
|
CONTRIBUTOR
|
### 🐛 Describe the bug
**symptom**: `.to_sparse()-.to_dense()` throws `LoweringException: NotImplementedError:` while eager executes successfully.
**device backend**: both CPP and triton
**repro**
```python
import torch
import torch.nn as nn
import torch.nn.functional as F
from torch._inductor import config
config.fallback_random = True
torch.set_grad_enabled(False)
class Model(torch.nn.Module):
    def __init__(self):
        super().__init__()

    def forward(self, x):
        x_sparse = x.to_sparse()
        # print(x_sparse) # using `print` can eliminate crash
        x_dense = x_sparse.to_dense()
        return x_dense


model = Model()
x = torch.tensor([[1.0]])
inputs = [x]


def run_test(model, inputs, backend):
    torch.manual_seed(0)
    if backend != "eager":
        model = torch.compile(model, backend=backend)
    try:
        output = model(*inputs)
        print(f"succeed on {backend}")
    except Exception as e:
        print(e)
run_test(model, inputs, 'eager')
run_test(model, inputs, 'inductor')
```
### Error logs
```
succeed on eager
LoweringException: NotImplementedError: could not find kernel for aten._to_dense.default at dispatch key DispatchKey.CPU
target: aten._to_dense.default
args[0]: TensorBox(StorageBox(
MultiOutput(
python_kernel_name=None,
name=buf1,
layout=FixedLayout('cpu', torch.float32, size=[1, 1], stride=[0, 0]),
inputs=[FallbackKernel(
python_kernel_name='torch.ops.aten._to_sparse.default',
name=buf0,
layout=MultiOutputLayout(device=device(type='cpu')),
inputs=[InputBuffer(name='arg0_1', layout=FixedLayout('cpu', torch.float32, size=[1, 1], stride=[1, 1]))],
constant_args=(),
kwargs={},
output_view=None,
python_kernel_name=torch.ops.aten._to_sparse.default,
cpp_kernel_name=None,
ordered_kwargs_for_cpp_kernel=['layout', 'blocksize', 'dense_dim'],
op_overload=aten._to_sparse.default,
arg_properties=[{'name': 'self', 'type': Tensor, 'default_value': None}],
kwarg_properties=None,
unbacked_bindings=None,
mutation_outputs=[],
origin_node=_to_sparse,
origins=OrderedSet([_to_sparse])
)],
constant_args=(),
kwargs={},
output_view=None,
python_kernel_name=None,
cpp_kernel_name=None,
ordered_kwargs_for_cpp_kernel=(),
op_overload=None,
arg_properties=[{}],
kwarg_properties=None,
unbacked_bindings={},
mutation_outputs=[],
origin_node=_to_sparse,
origins=OrderedSet([_to_sparse])
)
))
```
### Versions
nightly 20250414
cc @ezyang @gchanan @zou3519 @kadeng @msaroufim @chauhang @penguinwu
| true
|
3,001,348,753
|
[FlexAttention] Remove old constraint that was causing assert failure
|
drisspg
|
closed
|
[
"module: inductor",
"ciflow/inductor",
"release notes: inductor",
"module: flex attention"
] | 4
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #151521
* #151846
# Summary
Fixes: https://github.com/pytorch/pytorch/issues/148827
This one is strange, I could have sworn this was a real constraint, but I verified and did some performance checks and this constraint isn't required.
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov @Chillee @yanboliang @BoyuanFeng
| true
|
3,001,333,591
|
DISABLED test_builtin_score_mods_float16_score_mod3_cuda_float16 (__main__.TestFlexAttentionCUDA)
|
pytorch-bot[bot]
|
closed
|
[
"triaged",
"module: flaky-tests",
"skipped",
"oncall: pt2",
"module: inductor"
] | 2
|
NONE
|
Platforms: linux
This test was disabled because it is failing in CI. See [recent examples](https://hud.pytorch.org/flakytest?name=test_builtin_score_mods_float16_score_mod3_cuda_float16&suite=TestFlexAttentionCUDA&limit=100) and the most recent trunk [workflow logs](https://github.com/pytorch/pytorch/runs/40676327185).
Over the past 3 hours, it has been determined flaky in 10 workflow(s) with 20 failures and 10 successes.
**Debugging instructions (after clicking on the recent samples link):**
DO NOT ASSUME THINGS ARE OKAY IF THE CI IS GREEN. We now shield flaky tests from developers so CI will thus be green but it will be harder to parse the logs.
To find relevant log snippets:
1. Click on the workflow logs linked above
2. Click on the Test step of the job so that it is expanded. Otherwise, the grepping will not work.
3. Grep for `test_builtin_score_mods_float16_score_mod3_cuda_float16`
4. There should be several instances run (as flaky tests are rerun in CI) from which you can study the logs.
<details><summary>Sample error message</summary>
```
Traceback (most recent call last):
File "/var/lib/jenkins/workspace/test/inductor/test_flex_attention.py", line 1109, in test_builtin_score_mods
self.run_test(score_mod, dtype, device=device)
File "/var/lib/jenkins/workspace/test/inductor/test_flex_attention.py", line 491, in run_test
golden_out.backward(backward_grad.to(torch.float64))
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_tensor.py", line 648, in backward
torch.autograd.backward(
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/autograd/__init__.py", line 353, in backward
_engine_run_backward(
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/autograd/graph.py", line 824, in _engine_run_backward
return Variable._execution_engine.run_backward( # Calls into the C++ engine to run the backward pass
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/autograd/function.py", line 307, in apply
return user_fn(self, *args)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_higher_order_ops/flex_attention.py", line 679, in backward
) = flex_attention_backward(
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_higher_order_ops/flex_attention.py", line 132, in __call__
return super().__call__(
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_ops.py", line 490, in __call__
return wrapper()
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_ops.py", line 486, in wrapper
return self.dispatch(
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_ops.py", line 346, in dispatch
return kernel(*args, **kwargs)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_ops.py", line 320, in maybe_run_autograd
return self(*args, **kwargs)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_higher_order_ops/flex_attention.py", line 132, in __call__
return super().__call__(
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_ops.py", line 490, in __call__
return wrapper()
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_ops.py", line 486, in wrapper
return self.dispatch(
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_ops.py", line 346, in dispatch
return kernel(*args, **kwargs)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_higher_order_ops/flex_attention.py", line 873, in sdpa_dense_backward
grad_scores = grad_scores * scale
torch.OutOfMemoryError: CUDA out of memory. Tried to allocate 1024.00 MiB. GPU 0 has a total capacity of 22.05 GiB of which 460.12 MiB is free. Including non-PyTorch memory, this process has 21.59 GiB memory in use. Of the allocated memory 6.69 GiB is allocated by PyTorch, and 14.65 GiB is reserved by PyTorch but unallocated. If reserved but unallocated memory is large try setting PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True to avoid fragmentation. See documentation for Memory Management (https://pytorch.org/docs/stable/notes/cuda.html#environment-variables)
To execute this test, run the following from the base repo dir:
python test/inductor/test_flex_attention.py TestFlexAttentionCUDA.test_builtin_score_mods_float16_score_mod3_cuda_float16
This message can be suppressed by setting PYTORCH_PRINT_REPRO_ON_FAILURE=0
```
</details>
Test file path: `inductor/test_flex_attention.py`
cc @clee2000 @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
| true
|
3,001,333,590
|
DISABLED test_builtin_score_mods_different_block_size_float32_score_mod7_BLOCK_SIZE_256_cuda_float32 (__main__.TestFlexAttentionCUDA)
|
pytorch-bot[bot]
|
closed
|
[
"triaged",
"module: flaky-tests",
"skipped",
"oncall: pt2",
"module: inductor"
] | 2
|
NONE
|
Platforms: linux
This test was disabled because it is failing in CI. See [recent examples](https://hud.pytorch.org/flakytest?name=test_builtin_score_mods_different_block_size_float32_score_mod7_BLOCK_SIZE_256_cuda_float32&suite=TestFlexAttentionCUDA&limit=100) and the most recent trunk [workflow logs](https://github.com/pytorch/pytorch/runs/40688574924).
Over the past 3 hours, it has been determined flaky in 10 workflow(s) with 20 failures and 10 successes.
**Debugging instructions (after clicking on the recent samples link):**
DO NOT ASSUME THINGS ARE OKAY IF THE CI IS GREEN. We now shield flaky tests from developers so CI will thus be green but it will be harder to parse the logs.
To find relevant log snippets:
1. Click on the workflow logs linked above
2. Click on the Test step of the job so that it is expanded. Otherwise, the grepping will not work.
3. Grep for `test_builtin_score_mods_different_block_size_float32_score_mod7_BLOCK_SIZE_256_cuda_float32`
4. There should be several instances run (as flaky tests are rerun in CI) from which you can study the logs.
Test file path: `inductor/test_flex_attention.py`
cc @clee2000 @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
| true
|
3,001,333,351
|
DISABLED test_builtin_score_mods_different_block_size_float32_score_mod4_BLOCK_SIZE2_cuda_float32 (__main__.TestFlexAttentionCUDA)
|
pytorch-bot[bot]
|
closed
|
[
"triaged",
"module: flaky-tests",
"skipped",
"oncall: pt2",
"module: inductor"
] | 2
|
NONE
|
Platforms: linux
This test was disabled because it is failing in CI. See [recent examples](https://hud.pytorch.org/flakytest?name=test_builtin_score_mods_different_block_size_float32_score_mod4_BLOCK_SIZE2_cuda_float32&suite=TestFlexAttentionCUDA&limit=100) and the most recent trunk [workflow logs](https://github.com/pytorch/pytorch/runs/40676327185).
Over the past 3 hours, it has been determined flaky in 10 workflow(s) with 20 failures and 10 successes.
**Debugging instructions (after clicking on the recent samples link):**
DO NOT ASSUME THINGS ARE OKAY IF THE CI IS GREEN. We now shield flaky tests from developers so CI will thus be green but it will be harder to parse the logs.
To find relevant log snippets:
1. Click on the workflow logs linked above
2. Click on the Test step of the job so that it is expanded. Otherwise, the grepping will not work.
3. Grep for `test_builtin_score_mods_different_block_size_float32_score_mod4_BLOCK_SIZE2_cuda_float32`
4. There should be several instances run (as flaky tests are rerun in CI) from which you can study the logs.
<details><summary>Sample error message</summary>
```
Traceback (most recent call last):
File "/var/lib/jenkins/workspace/test/inductor/test_flex_attention.py", line 1201, in test_builtin_score_mods_different_block_size
self.run_test(score_mod, dtype, block_mask=block_mask, device=device)
File "/var/lib/jenkins/workspace/test/inductor/test_flex_attention.py", line 491, in run_test
golden_out.backward(backward_grad.to(torch.float64))
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_tensor.py", line 648, in backward
torch.autograd.backward(
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/autograd/__init__.py", line 353, in backward
_engine_run_backward(
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/autograd/graph.py", line 824, in _engine_run_backward
return Variable._execution_engine.run_backward( # Calls into the C++ engine to run the backward pass
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/autograd/function.py", line 307, in apply
return user_fn(self, *args)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_higher_order_ops/flex_attention.py", line 679, in backward
) = flex_attention_backward(
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_higher_order_ops/flex_attention.py", line 132, in __call__
return super().__call__(
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_ops.py", line 490, in __call__
return wrapper()
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_ops.py", line 486, in wrapper
return self.dispatch(
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_ops.py", line 346, in dispatch
return kernel(*args, **kwargs)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_ops.py", line 320, in maybe_run_autograd
return self(*args, **kwargs)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_higher_order_ops/flex_attention.py", line 132, in __call__
return super().__call__(
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_ops.py", line 490, in __call__
return wrapper()
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_ops.py", line 486, in wrapper
return self.dispatch(
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_ops.py", line 346, in dispatch
return kernel(*args, **kwargs)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_higher_order_ops/flex_attention.py", line 873, in sdpa_dense_backward
grad_scores = grad_scores * scale
torch.OutOfMemoryError: CUDA out of memory. Tried to allocate 1024.00 MiB. GPU 0 has a total capacity of 22.05 GiB of which 492.12 MiB is free. Including non-PyTorch memory, this process has 21.56 GiB memory in use. Of the allocated memory 6.77 GiB is allocated by PyTorch, and 14.52 GiB is reserved by PyTorch but unallocated. If reserved but unallocated memory is large try setting PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True to avoid fragmentation. See documentation for Memory Management (https://pytorch.org/docs/stable/notes/cuda.html#environment-variables)
To execute this test, run the following from the base repo dir:
python test/inductor/test_flex_attention.py TestFlexAttentionCUDA.test_builtin_score_mods_different_block_size_float32_score_mod4_BLOCK_SIZE2_cuda_float32
This message can be suppressed by setting PYTORCH_PRINT_REPRO_ON_FAILURE=0
```
</details>
Test file path: `inductor/test_flex_attention.py`
cc @clee2000 @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
| true
|
3,001,333,320
|
DISABLED test_non_equal_head_dims_score_mod2_bfloat16_head_dims1_cuda_bfloat16 (__main__.TestFlexAttentionCUDA)
|
pytorch-bot[bot]
|
closed
|
[
"triaged",
"module: flaky-tests",
"skipped",
"oncall: pt2",
"module: inductor"
] | 2
|
NONE
|
Platforms: linux
This test was disabled because it is failing in CI. See [recent examples](https://hud.pytorch.org/flakytest?name=test_non_equal_head_dims_score_mod2_bfloat16_head_dims1_cuda_bfloat16&suite=TestFlexAttentionCUDA&limit=100) and the most recent trunk [workflow logs](https://github.com/pytorch/pytorch/runs/40674676520).
Over the past 3 hours, it has been determined flaky in 10 workflow(s) with 20 failures and 10 successes.
**Debugging instructions (after clicking on the recent samples link):**
DO NOT ASSUME THINGS ARE OKAY IF THE CI IS GREEN. We now shield flaky tests from developers so CI will thus be green but it will be harder to parse the logs.
To find relevant log snippets:
1. Click on the workflow logs linked above
2. Click on the Test step of the job so that it is expanded. Otherwise, the grepping will not work.
3. Grep for `test_non_equal_head_dims_score_mod2_bfloat16_head_dims1_cuda_bfloat16`
4. There should be several instances run (as flaky tests are rerun in CI) from which you can study the logs.
Test file path: `inductor/test_flex_attention.py`
cc @clee2000 @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
| true
|
3,001,333,257
|
DISABLED test_builtin_score_mods_float16_score_mod0_cuda_float16 (__main__.TestFlexAttentionCUDA)
|
pytorch-bot[bot]
|
closed
|
[
"triaged",
"module: flaky-tests",
"skipped",
"oncall: pt2",
"module: inductor"
] | 2
|
NONE
|
Platforms: linux
This test was disabled because it is failing in CI. See [recent examples](https://hud.pytorch.org/flakytest?name=test_builtin_score_mods_float16_score_mod0_cuda_float16&suite=TestFlexAttentionCUDA&limit=100) and the most recent trunk [workflow logs](https://github.com/pytorch/pytorch/runs/40688574924).
Over the past 3 hours, it has been determined flaky in 10 workflow(s) with 20 failures and 10 successes.
**Debugging instructions (after clicking on the recent samples link):**
DO NOT ASSUME THINGS ARE OKAY IF THE CI IS GREEN. We now shield flaky tests from developers so CI will thus be green but it will be harder to parse the logs.
To find relevant log snippets:
1. Click on the workflow logs linked above
2. Click on the Test step of the job so that it is expanded. Otherwise, the grepping will not work.
3. Grep for `test_builtin_score_mods_float16_score_mod0_cuda_float16`
4. There should be several instances run (as flaky tests are rerun in CI) from which you can study the logs.
<details><summary>Sample error message</summary>
```
Traceback (most recent call last):
File "/var/lib/jenkins/workspace/test/inductor/test_flex_attention.py", line 1109, in test_builtin_score_mods
self.run_test(score_mod, dtype, device=device)
File "/var/lib/jenkins/workspace/test/inductor/test_flex_attention.py", line 491, in run_test
golden_out.backward(backward_grad.to(torch.float64))
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_tensor.py", line 648, in backward
torch.autograd.backward(
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/autograd/__init__.py", line 353, in backward
_engine_run_backward(
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/autograd/graph.py", line 824, in _engine_run_backward
return Variable._execution_engine.run_backward( # Calls into the C++ engine to run the backward pass
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/autograd/function.py", line 307, in apply
return user_fn(self, *args)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_higher_order_ops/flex_attention.py", line 679, in backward
) = flex_attention_backward(
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_higher_order_ops/flex_attention.py", line 132, in __call__
return super().__call__(
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_ops.py", line 490, in __call__
return wrapper()
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_ops.py", line 486, in wrapper
return self.dispatch(
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_ops.py", line 346, in dispatch
return kernel(*args, **kwargs)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_ops.py", line 320, in maybe_run_autograd
return self(*args, **kwargs)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_higher_order_ops/flex_attention.py", line 132, in __call__
return super().__call__(
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_ops.py", line 490, in __call__
return wrapper()
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_ops.py", line 486, in wrapper
return self.dispatch(
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_ops.py", line 346, in dispatch
return kernel(*args, **kwargs)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_higher_order_ops/flex_attention.py", line 881, in sdpa_dense_backward
grad_scores = torch.where(
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_dynamo/_trace_wrapped_higher_order_op.py", line 142, in __torch_function__
return func(*args, **(kwargs or {}))
torch.OutOfMemoryError: CUDA out of memory. Tried to allocate 1024.00 MiB. GPU 0 has a total capacity of 22.05 GiB of which 272.12 MiB is free. Including non-PyTorch memory, this process has 21.77 GiB memory in use. Of the allocated memory 6.73 GiB is allocated by PyTorch, and 14.78 GiB is reserved by PyTorch but unallocated. If reserved but unallocated memory is large try setting PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True to avoid fragmentation. See documentation for Memory Management (https://pytorch.org/docs/stable/notes/cuda.html#environment-variables)
To execute this test, run the following from the base repo dir:
python test/inductor/test_flex_attention.py TestFlexAttentionCUDA.test_builtin_score_mods_float16_score_mod0_cuda_float16
This message can be suppressed by setting PYTORCH_PRINT_REPRO_ON_FAILURE=0
```
</details>
Test file path: `inductor/test_flex_attention.py`
cc @clee2000 @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
| true
|
3,001,333,256
|
DISABLED test_builtin_score_mods_different_block_size_float32_score_mod5_BLOCK_SIZE_128_cuda_float32 (__main__.TestFlexAttentionCUDA)
|
pytorch-bot[bot]
|
closed
|
[
"triaged",
"module: flaky-tests",
"skipped",
"oncall: pt2",
"module: inductor"
] | 2
|
NONE
|
Platforms: linux
This test was disabled because it is failing in CI. See [recent examples](https://hud.pytorch.org/flakytest?name=test_builtin_score_mods_different_block_size_float32_score_mod5_BLOCK_SIZE_128_cuda_float32&suite=TestFlexAttentionCUDA&limit=100) and the most recent trunk [workflow logs](https://github.com/pytorch/pytorch/runs/40672879519).
Over the past 3 hours, it has been determined flaky in 10 workflow(s) with 20 failures and 10 successes.
**Debugging instructions (after clicking on the recent samples link):**
DO NOT ASSUME THINGS ARE OKAY IF THE CI IS GREEN. We now shield flaky tests from developers so CI will thus be green but it will be harder to parse the logs.
To find relevant log snippets:
1. Click on the workflow logs linked above
2. Click on the Test step of the job so that it is expanded. Otherwise, the grepping will not work.
3. Grep for `test_builtin_score_mods_different_block_size_float32_score_mod5_BLOCK_SIZE_128_cuda_float32`
4. There should be several instances run (as flaky tests are rerun in CI) from which you can study the logs.
Test file path: `inductor/test_flex_attention.py`
cc @clee2000 @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
| true
|
3,001,333,173
|
DISABLED test_builtin_score_mods_different_block_size_float16_score_mod2_BLOCK_SIZE_256_cuda_float16 (__main__.TestFlexAttentionCUDA)
|
pytorch-bot[bot]
|
closed
|
[
"triaged",
"module: flaky-tests",
"skipped",
"oncall: pt2",
"module: inductor"
] | 2
|
NONE
|
Platforms: linux
This test was disabled because it is failing in CI. See [recent examples](https://hud.pytorch.org/flakytest?name=test_builtin_score_mods_different_block_size_float16_score_mod2_BLOCK_SIZE_256_cuda_float16&suite=TestFlexAttentionCUDA&limit=100) and the most recent trunk [workflow logs](https://github.com/pytorch/pytorch/runs/40688574924).
Over the past 3 hours, it has been determined flaky in 10 workflow(s) with 20 failures and 10 successes.
**Debugging instructions (after clicking on the recent samples link):**
DO NOT ASSUME THINGS ARE OKAY IF THE CI IS GREEN. We now shield flaky tests from developers so CI will thus be green but it will be harder to parse the logs.
To find relevant log snippets:
1. Click on the workflow logs linked above
2. Click on the Test step of the job so that it is expanded. Otherwise, the grepping will not work.
3. Grep for `test_builtin_score_mods_different_block_size_float16_score_mod2_BLOCK_SIZE_256_cuda_float16`
4. There should be several instances run (as flaky tests are rerun in CI) from which you can study the logs.
Test file path: `inductor/test_flex_attention.py`
cc @clee2000 @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
| true
|
3,001,333,102
|
DISABLED test_non_equal_head_dims_score_mod2_float32_head_dims0_cuda_float32 (__main__.TestFlexAttentionCUDA)
|
pytorch-bot[bot]
|
closed
|
[
"triaged",
"module: flaky-tests",
"skipped",
"oncall: pt2",
"module: inductor"
] | 2
|
NONE
|
Platforms: linux
This test was disabled because it is failing in CI. See [recent examples](https://hud.pytorch.org/flakytest?name=test_non_equal_head_dims_score_mod2_float32_head_dims0_cuda_float32&suite=TestFlexAttentionCUDA&limit=100) and the most recent trunk [workflow logs](https://github.com/pytorch/pytorch/runs/40692843112).
Over the past 3 hours, it has been determined flaky in 10 workflow(s) with 20 failures and 10 successes.
**Debugging instructions (after clicking on the recent samples link):**
DO NOT ASSUME THINGS ARE OKAY IF THE CI IS GREEN. We now shield flaky tests from developers so CI will thus be green but it will be harder to parse the logs.
To find relevant log snippets:
1. Click on the workflow logs linked above
2. Click on the Test step of the job so that it is expanded. Otherwise, the grepping will not work.
3. Grep for `test_non_equal_head_dims_score_mod2_float32_head_dims0_cuda_float32`
4. There should be several instances run (as flaky tests are rerun in CI) from which you can study the logs.
<details><summary>Sample error message</summary>
```
Traceback (most recent call last):
File "/var/lib/jenkins/workspace/test/inductor/test_flex_attention.py", line 2159, in test_non_equal_head_dims
self.run_test(score_mod, dtype, B, H, S, qk_d, B, H, S, V_D=v_d, device=device)
File "/var/lib/jenkins/workspace/test/inductor/test_flex_attention.py", line 491, in run_test
golden_out.backward(backward_grad.to(torch.float64))
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_tensor.py", line 648, in backward
torch.autograd.backward(
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/autograd/__init__.py", line 353, in backward
_engine_run_backward(
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/autograd/graph.py", line 824, in _engine_run_backward
return Variable._execution_engine.run_backward( # Calls into the C++ engine to run the backward pass
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/autograd/function.py", line 307, in apply
return user_fn(self, *args)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_higher_order_ops/flex_attention.py", line 679, in backward
) = flex_attention_backward(
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_higher_order_ops/flex_attention.py", line 132, in __call__
return super().__call__(
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_ops.py", line 490, in __call__
return wrapper()
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_ops.py", line 486, in wrapper
return self.dispatch(
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_ops.py", line 346, in dispatch
return kernel(*args, **kwargs)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_ops.py", line 320, in maybe_run_autograd
return self(*args, **kwargs)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_higher_order_ops/flex_attention.py", line 132, in __call__
return super().__call__(
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_ops.py", line 490, in __call__
return wrapper()
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_ops.py", line 486, in wrapper
return self.dispatch(
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_ops.py", line 346, in dispatch
return kernel(*args, **kwargs)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_higher_order_ops/flex_attention.py", line 870, in sdpa_dense_backward
grad_scores, _, _, _, _, *grad_score_mod_captured = joint_score_mod(
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_functorch/apis.py", line 202, in wrapped
return vmap_impl(
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_functorch/vmap.py", line 334, in vmap_impl
return _flat_vmap(
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_functorch/vmap.py", line 484, in _flat_vmap
batched_outputs = func(*batched_inputs, **kwargs)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_functorch/apis.py", line 202, in wrapped
return vmap_impl(
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_functorch/vmap.py", line 334, in vmap_impl
return _flat_vmap(
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_functorch/vmap.py", line 484, in _flat_vmap
batched_outputs = func(*batched_inputs, **kwargs)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_functorch/apis.py", line 202, in wrapped
return vmap_impl(
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_functorch/vmap.py", line 334, in vmap_impl
return _flat_vmap(
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_functorch/vmap.py", line 484, in _flat_vmap
batched_outputs = func(*batched_inputs, **kwargs)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_functorch/apis.py", line 202, in wrapped
return vmap_impl(
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_functorch/vmap.py", line 334, in vmap_impl
return _flat_vmap(
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_functorch/vmap.py", line 484, in _flat_vmap
batched_outputs = func(*batched_inputs, **kwargs)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/fx/graph_module.py", line 833, in call_wrapped
return self._wrapped_call(self, *args, **kwargs)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/fx/graph_module.py", line 409, in __call__
raise e
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/fx/graph_module.py", line 396, in __call__
return super(self.cls, obj).__call__(*args, **kwargs) # type: ignore[misc]
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1751, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1762, in _call_impl
return forward_call(*args, **kwargs)
File "<eval_with_key>.1683 from /opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/fx/experimental/proxy_tensor.py:1265 in wrapped", line 7, in forward
mul_2 = torch.ops.aten.mul.Tensor(arg5_1, arg0_1); arg5_1 = arg0_1 = None
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_ops.py", line 795, in __call__
return self._op(*args, **kwargs)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_dynamo/_trace_wrapped_higher_order_op.py", line 142, in __torch_function__
return func(*args, **(kwargs or {}))
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_ops.py", line 795, in __call__
return self._op(*args, **kwargs)
torch.OutOfMemoryError: CUDA out of memory. Tried to allocate 1024.00 MiB. GPU 0 has a total capacity of 22.05 GiB of which 190.12 MiB is free. Including non-PyTorch memory, this process has 21.85 GiB memory in use. Of the allocated memory 6.79 GiB is allocated by PyTorch, and 14.79 GiB is reserved by PyTorch but unallocated. If reserved but unallocated memory is large try setting PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True to avoid fragmentation. See documentation for Memory Management (https://pytorch.org/docs/stable/notes/cuda.html#environment-variables)
To execute this test, run the following from the base repo dir:
python test/inductor/test_flex_attention.py TestFlexAttentionCUDA.test_non_equal_head_dims_score_mod2_float32_head_dims0_cuda_float32
This message can be suppressed by setting PYTORCH_PRINT_REPRO_ON_FAILURE=0
```
</details>
Test file path: `inductor/test_flex_attention.py`
cc @clee2000 @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
| true
|
3,001,333,064
|
DISABLED test_remove_noop_view_default_cpu (__main__.CpuTests)
|
pytorch-bot[bot]
|
open
|
[
"triaged",
"module: flaky-tests",
"skipped",
"module: fx"
] | 7
|
NONE
|
Platforms: mac, macos, rocm, asan, linux, slow
This test was disabled because it is failing in CI. See [recent examples](https://hud.pytorch.org/flakytest?name=test_remove_noop_view_default_cpu&suite=CpuTests&limit=100) and the most recent trunk [workflow logs](https://github.com/pytorch/pytorch/runs/40693605627).
Over the past 3 hours, it has been determined flaky in 4 workflow(s) with 8 failures and 4 successes.
**Debugging instructions (after clicking on the recent samples link):**
DO NOT ASSUME THINGS ARE OKAY IF THE CI IS GREEN. We now shield flaky tests from developers so CI will thus be green but it will be harder to parse the logs.
To find relevant log snippets:
1. Click on the workflow logs linked above
2. Click on the Test step of the job so that it is expanded. Otherwise, the grepping will not work.
3. Grep for `test_remove_noop_view_default_cpu`
4. There should be several instances run (as flaky tests are rerun in CI) from which you can study the logs.
Test file path: `inductor/test_compile_subprocess.py`
cc @clee2000 @ezyang @SherlockNoMad @EikanWang @jgong5 @wenzhe-nrv
| true
|
3,001,333,014
|
DISABLED test_remove_noop_view_default_cuda (__main__.GPUTests)
|
pytorch-bot[bot]
|
open
|
[
"triaged",
"module: flaky-tests",
"skipped",
"module: fx"
] | 6
|
NONE
|
Platforms: rocm, linux, slow
This test was disabled because it is failing in CI. See [recent examples](https://hud.pytorch.org/flakytest?name=test_remove_noop_view_default_cuda&suite=GPUTests&limit=100) and the most recent trunk [workflow logs](https://github.com/pytorch/pytorch/runs/40693605634).
Over the past 3 hours, it has been determined flaky in 3 workflow(s) with 6 failures and 3 successes.
**Debugging instructions (after clicking on the recent samples link):**
DO NOT ASSUME THINGS ARE OKAY IF THE CI IS GREEN. We now shield flaky tests from developers so CI will thus be green but it will be harder to parse the logs.
To find relevant log snippets:
1. Click on the workflow logs linked above
2. Click on the Test step of the job so that it is expanded. Otherwise, the grepping will not work.
3. Grep for `test_remove_noop_view_default_cuda`
4. There should be several instances run (as flaky tests are rerun in CI) from which you can study the logs.
<details><summary>Sample error message</summary>
```
Traceback (most recent call last):
File "/var/lib/jenkins/pytorch/test/inductor/test_torchinductor.py", line 13355, in new_test
return value(self)
File "/var/lib/jenkins/pytorch/test/inductor/test_torchinductor.py", line 13186, in test_remove_noop_view_default
self.assertExpectedInline(
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 3097, in assertExpectedInline
return super().assertExpectedInline(actual if isinstance(actual, str) else str(actual), expect, skip + 1)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/expecttest/__init__.py", line 413, in assertExpectedInline
assert_expected_inline(
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/expecttest/__init__.py", line 378, in assert_expected_inline
assert_eq(expect, actual, msg=help_text)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/expecttest/__init__.py", line 450, in assertMultiLineEqualMaybeCppStack
self.assertMultiLineEqual(expect, actual, *args, **kwargs)
File "/opt/conda/envs/py_3.10/lib/python3.10/unittest/case.py", line 1226, in assertMultiLineEqual
self.fail(self._formatMessage(msg, standardMsg))
File "/opt/conda/envs/py_3.10/lib/python3.10/unittest/case.py", line 675, in fail
raise self.failureException(msg)
AssertionError: 'def forward(self, arg0_1: "f32[2, 3, 2][6[155 chars]te,)' != ''
- def forward(self, arg0_1: "f32[2, 3, 2][6, 2, 1]cuda:0"):
- permute: "f32[2, 2, 3][6, 1, 2]cuda:0" = torch.ops.aten.permute.default(arg0_1, [0, 2, 1]); arg0_1 = None
- return (permute,) : To accept the new output, re-run test with envvar EXPECTTEST_ACCEPT=1 (we recommend staging/committing your changes before doing this)
To execute this test, run the following from the base repo dir:
PYTORCH_TEST_WITH_ROCM=1 python test/inductor/test_compile_subprocess.py GPUTests.test_remove_noop_view_default_cuda
This message can be suppressed by setting PYTORCH_PRINT_REPRO_ON_FAILURE=0
```
</details>
Test file path: `inductor/test_compile_subprocess.py`
cc @jeffdaily @sunway513 @jithunnair-amd @pruthvistony @ROCmSupport @dllehr-amd @jataylo @hongxiayang @naromero77amd @clee2000 @ezyang @SherlockNoMad @EikanWang @jgong5 @wenzhe-nrv
| true
|
3,001,249,322
|
[inductor] [cpu] [edge case] When processing `torch.nan_to_num-.long()`, inductor outputs the `reciprocal` of eager
|
shaoyuyoung
|
open
|
[
"oncall: pt2",
"oncall: cpu inductor"
] | 3
|
CONTRIBUTOR
|
### 🐛 Describe the bug
**symptom**: using `torch.nan_to_num` to map `float("inf")` produces the correct result on its own, but after converting the dtype with `.long()`, the CPU inductor backend returns the **reciprocal** (opposite extreme) of eager's output.
**device backend**: only CPP
**repro**
```python
import torch
import torch.nn as nn
import torch.nn.functional as F
from torch._inductor import config
config.fallback_random = True
torch.set_grad_enabled(False)
torch.manual_seed(0)
class Model(torch.nn.Module):
def __init__(self):
super().__init__()
def forward(self, x):
x = torch.nan_to_num(x, nan=0, posinf=torch.iinfo(torch.int64).max, neginf=torch.iinfo(torch.int64).min)
x = x.long()
return x
model = Model()
x = torch.tensor([[float("inf")]])
inputs = [x]
def run_test(model, inputs, backend):
if backend != "eager":
model = torch.compile(model, backend=backend)
torch.manual_seed(0)
output = model(*inputs)
return output
output = run_test(model, inputs, 'eager')
c_output = run_test(model, inputs, 'inductor')
print(output)
print(c_output)
```
### Error logs
CPP
```
tensor([[-9223372036854775808]])
tensor([[9223372036854775807]])
```
triton
```
tensor([[9223372036854775807]], device='cuda:0')
tensor([[9223372036854775807]], device='cuda:0')
```
### Versions
nightly 20250414
cc @chauhang @penguinwu
| true
|
3,001,234,292
|
[Inductor] Remove singleton tiling splits when prefer_nd_tiling=True
|
blaine-rister
|
closed
|
[
"topic: not user facing"
] | 2
|
CONTRIBUTOR
|
# Issue
Users who want block pointers are likely to use the config settings `{"triton.use_block_ptr": True, "triton.prefer_nd_tiling": True, "triton.max_tiles": 3}`. Among other things, these settings allow us to generate 3D block pointers for broadcasts. However, broadcasts often end up introducing a superfluous tiling dimension of size 1.
For example, given this function with elementwise multiplication:
```
def foo(x, y, z):
a = x * y
b = 128.0
c = a * b
d = a * z
e = x * z
return a, c, d, e
inps = [
torch.randn((8, 11, 128), device=self.device),
torch.randn((128,), device=self.device),
torch.randn((8, 11, 128), device=self.device),
]
torch.compile(foo)(*inps)
```
We get the following Triton kernels:
```
@triton.jit
def triton_poi_fused_mul_0(in_ptr0, in_ptr1, out_ptr0, znumel, ynumel, xnumel, ZBLOCK : tl.constexpr, YBLOCK : tl.constexpr, XBLOCK : tl.constexpr):
znumel = 88
ynumel = 1
xnumel = 128
zoffset = tl.program_id(2) * ZBLOCK
zindex = zoffset + tl.arange(0, ZBLOCK)[:, None, None]
zmask = zindex < znumel
yoffset = tl.program_id(1) * YBLOCK
yindex = yoffset + tl.arange(0, YBLOCK)[None, :, None]
ymask = tl.full([ZBLOCK, YBLOCK, XBLOCK], True, tl.int1)
xoffset = tl.program_id(0) * XBLOCK
xindex = xoffset + tl.arange(0, XBLOCK)[None, None, :]
xmask = xindex < xnumel
x1 = xindex
z0 = zindex
tmp0 = tl.load(tl.make_block_ptr(in_ptr0, shape=[88, 128], strides=[128, 1], block_shape=[ZBLOCK, XBLOCK], order=[1, 0], offsets=[zoffset, xoffset]), boundary_check=[0, 1], eviction_policy='evict_last')[:, None, :]
tmp1 = tl.load(tl.make_block_ptr(in_ptr1, shape=[128], strides=[1], block_shape=[XBLOCK], order=[0], offsets=[xoffset]), boundary_check=[0], eviction_policy='evict_last')[None, None, :]
tmp2 = tmp0 * tmp1
tl.store(tl.make_block_ptr(out_ptr0, shape=[88, 128], strides=[128, 1], block_shape=[ZBLOCK, XBLOCK], order=[1, 0], offsets=[zoffset, xoffset]), tl.reshape(tl.broadcast_to(tmp2, [ZBLOCK, YBLOCK, XBLOCK]), [ZBLOCK, XBLOCK]).to(tl.float32), boundary_check=[0, 1])
''', device_str='cuda')
@triton.jit
def triton_poi_fused_mul_1(in_ptr0, in_ptr1, in_ptr2, out_ptr0, out_ptr1, out_ptr2, xnumel, XBLOCK : tl.constexpr):
xnumel = 11264
xoffset = tl.program_id(0) * XBLOCK
xindex = xoffset + tl.arange(0, XBLOCK)[:]
xmask = xindex < xnumel
x0 = xindex
tmp0 = tl.load(tl.make_block_ptr(in_ptr0, shape=[11264], strides=[1], block_shape=[XBLOCK], order=[0], offsets=[xoffset]), boundary_check=[0])
tmp3 = tl.load(tl.make_block_ptr(in_ptr1, shape=[11264], strides=[1], block_shape=[XBLOCK], order=[0], offsets=[xoffset]), boundary_check=[0])
tmp5 = tl.load(tl.make_block_ptr(in_ptr2, shape=[11264], strides=[1], block_shape=[XBLOCK], order=[0], offsets=[xoffset]), boundary_check=[0])
tmp1 = 128.0
tmp2 = tmp0 * tmp1
tmp4 = tmp0 * tmp3
tmp6 = tmp5 * tmp3
tl.store(tl.make_block_ptr(out_ptr0, shape=[11264], strides=[1], block_shape=[XBLOCK], order=[0], offsets=[xoffset]), tl.broadcast_to(tmp2, [XBLOCK]).to(tl.float32), boundary_check=[0])
tl.store(tl.make_block_ptr(out_ptr1, shape=[11264], strides=[1], block_shape=[XBLOCK], order=[0], offsets=[xoffset]), tl.broadcast_to(tmp4, [XBLOCK]).to(tl.float32), boundary_check=[0])
tl.store(tl.make_block_ptr(out_ptr2, shape=[11264], strides=[1], block_shape=[XBLOCK], order=[0], offsets=[xoffset]), tl.broadcast_to(tmp6, [XBLOCK]).to(tl.float32), boundary_check=[0])
''', device_str='cuda')
```
Note that one kernel has `ynumel=1`. The extra dimension results in more expensive address calculations, and also seems to prevent fusion.
# Fix
To fix this, this PR filters out any splits of size 1 from the `prefer_nd_tiling` algorithm. This results in the following fused kernel, with 2D tiling:
```
@triton.jit
def triton_poi_fused_mul_0(in_ptr0, in_ptr1, in_ptr2, out_ptr0, out_ptr1, out_ptr2, out_ptr3, ynumel, xnumel, YBLOCK : tl.constexpr, XBLOCK : tl.constexpr):
ynumel = 88
xnumel = 128
yoffset = tl.program_id(1) * YBLOCK
yindex = yoffset + tl.arange(0, YBLOCK)[:, None]
ymask = yindex < ynumel
xoffset = tl.program_id(0) * XBLOCK
xindex = xoffset + tl.arange(0, XBLOCK)[None, :]
xmask = xindex < xnumel
x1 = xindex
y0 = yindex
tmp0 = tl.load(tl.make_block_ptr(in_ptr0, shape=[88, 128], strides=[128, 1], block_shape=[YBLOCK, XBLOCK], order=[1, 0], offsets=[yoffset, xoffset]), boundary_check=[0, 1], eviction_policy='evict_last')
tmp1 = tl.load(tl.make_block_ptr(in_ptr1, shape=[128], strides=[1], block_shape=[XBLOCK], order=[0], offsets=[xoffset]), boundary_check=[0], eviction_policy='evict_last')[None, :]
tmp5 = tl.load(tl.make_block_ptr(in_ptr2, shape=[88, 128], strides=[128, 1], block_shape=[YBLOCK, XBLOCK], order=[1, 0], offsets=[yoffset, xoffset]), boundary_check=[0, 1], eviction_policy='evict_last')
tmp2 = tmp0 * tmp1
tmp3 = 128.0
tmp4 = tmp2 * tmp3
tmp6 = tmp2 * tmp5
tmp7 = tmp0 * tmp5
tl.store(tl.make_block_ptr(out_ptr0, shape=[88, 128], strides=[128, 1], block_shape=[YBLOCK, XBLOCK], order=[1, 0], offsets=[yoffset, xoffset]), tl.broadcast_to(tmp2, [YBLOCK, XBLOCK]).to(tl.float32), boundary_check=[0, 1])
tl.store(tl.make_block_ptr(out_ptr1, shape=[88, 128], strides=[128, 1], block_shape=[YBLOCK, XBLOCK], order=[1, 0], offsets=[yoffset, xoffset]), tl.broadcast_to(tmp4, [YBLOCK, XBLOCK]).to(tl.float32), boundary_check=[0, 1])
tl.store(tl.make_block_ptr(out_ptr2, shape=[88, 128], strides=[128, 1], block_shape=[YBLOCK, XBLOCK], order=[1, 0], offsets=[yoffset, xoffset]), tl.broadcast_to(tmp6, [YBLOCK, XBLOCK]).to(tl.float32), boundary_check=[0, 1])
tl.store(tl.make_block_ptr(out_ptr3, shape=[88, 128], strides=[128, 1], block_shape=[YBLOCK, XBLOCK], order=[1, 0], offsets=[yoffset, xoffset]), tl.broadcast_to(tmp7, [YBLOCK, XBLOCK]).to(tl.float32), boundary_check=[0, 1])
''', device_str='cuda')
```
# Test plan
Added the test case above to CI. Checked that a single kernel is generated with 2D tiling.
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
| true
|
3,001,225,119
|
[Inductor] Remove singleton tiling splits when prefer_nd_tiling=True
|
blaine-rister
|
closed
|
[
"Merged",
"ciflow/trunk",
"topic: not user facing",
"module: inductor",
"ciflow/inductor"
] | 4
|
CONTRIBUTOR
|
# Issue
Users who want block pointers are likely to use the config settings `{"triton.use_block_ptr": True, "triton.prefer_nd_tiling": True, "triton.max_tiles": 3}`. Among other things, these settings allow us to generate 3D block pointers for broadcasts. However, broadcasts which don't truly require 3D often end up introducing a superfluous tiling dimension of size 1.
For example, given this function with elementwise multiplication:
```
def foo(x, y, z):
a = x * y
b = 128.0
c = a * b
d = a * z
e = x * z
return a, c, d, e
inps = [
torch.randn((8, 11, 128), device=self.device),
torch.randn((128,), device=self.device),
torch.randn((8, 11, 128), device=self.device),
]
torch.compile(foo)(*inps)
```
We get the following Triton kernels:
```
@triton.jit
def triton_poi_fused_mul_0(in_ptr0, in_ptr1, out_ptr0, znumel, ynumel, xnumel, ZBLOCK : tl.constexpr, YBLOCK : tl.constexpr, XBLOCK : tl.constexpr):
znumel = 88
ynumel = 1
xnumel = 128
zoffset = tl.program_id(2) * ZBLOCK
zindex = zoffset + tl.arange(0, ZBLOCK)[:, None, None]
zmask = zindex < znumel
yoffset = tl.program_id(1) * YBLOCK
yindex = yoffset + tl.arange(0, YBLOCK)[None, :, None]
ymask = tl.full([ZBLOCK, YBLOCK, XBLOCK], True, tl.int1)
xoffset = tl.program_id(0) * XBLOCK
xindex = xoffset + tl.arange(0, XBLOCK)[None, None, :]
xmask = xindex < xnumel
x1 = xindex
z0 = zindex
tmp0 = tl.load(tl.make_block_ptr(in_ptr0, shape=[88, 128], strides=[128, 1], block_shape=[ZBLOCK, XBLOCK], order=[1, 0], offsets=[zoffset, xoffset]), boundary_check=[0, 1], eviction_policy='evict_last')[:, None, :]
tmp1 = tl.load(tl.make_block_ptr(in_ptr1, shape=[128], strides=[1], block_shape=[XBLOCK], order=[0], offsets=[xoffset]), boundary_check=[0], eviction_policy='evict_last')[None, None, :]
tmp2 = tmp0 * tmp1
tl.store(tl.make_block_ptr(out_ptr0, shape=[88, 128], strides=[128, 1], block_shape=[ZBLOCK, XBLOCK], order=[1, 0], offsets=[zoffset, xoffset]), tl.reshape(tl.broadcast_to(tmp2, [ZBLOCK, YBLOCK, XBLOCK]), [ZBLOCK, XBLOCK]).to(tl.float32), boundary_check=[0, 1])
''', device_str='cuda')
@triton.jit
def triton_poi_fused_mul_1(in_ptr0, in_ptr1, in_ptr2, out_ptr0, out_ptr1, out_ptr2, xnumel, XBLOCK : tl.constexpr):
xnumel = 11264
xoffset = tl.program_id(0) * XBLOCK
xindex = xoffset + tl.arange(0, XBLOCK)[:]
xmask = xindex < xnumel
x0 = xindex
tmp0 = tl.load(tl.make_block_ptr(in_ptr0, shape=[11264], strides=[1], block_shape=[XBLOCK], order=[0], offsets=[xoffset]), boundary_check=[0])
tmp3 = tl.load(tl.make_block_ptr(in_ptr1, shape=[11264], strides=[1], block_shape=[XBLOCK], order=[0], offsets=[xoffset]), boundary_check=[0])
tmp5 = tl.load(tl.make_block_ptr(in_ptr2, shape=[11264], strides=[1], block_shape=[XBLOCK], order=[0], offsets=[xoffset]), boundary_check=[0])
tmp1 = 128.0
tmp2 = tmp0 * tmp1
tmp4 = tmp0 * tmp3
tmp6 = tmp5 * tmp3
tl.store(tl.make_block_ptr(out_ptr0, shape=[11264], strides=[1], block_shape=[XBLOCK], order=[0], offsets=[xoffset]), tl.broadcast_to(tmp2, [XBLOCK]).to(tl.float32), boundary_check=[0])
tl.store(tl.make_block_ptr(out_ptr1, shape=[11264], strides=[1], block_shape=[XBLOCK], order=[0], offsets=[xoffset]), tl.broadcast_to(tmp4, [XBLOCK]).to(tl.float32), boundary_check=[0])
tl.store(tl.make_block_ptr(out_ptr2, shape=[11264], strides=[1], block_shape=[XBLOCK], order=[0], offsets=[xoffset]), tl.broadcast_to(tmp6, [XBLOCK]).to(tl.float32), boundary_check=[0])
''', device_str='cuda')
```
Note that one kernel has `ynumel=1`. The extra dimension results in more expensive address calculations, and also seems to prevent fusion.
# Fix
To fix this, this PR filters out any splits of size 1 from the `prefer_nd_tiling` algorithm. This results in the following fused kernel, with 2D tiling:
```
@triton.jit
def triton_poi_fused_mul_0(in_ptr0, in_ptr1, in_ptr2, out_ptr0, out_ptr1, out_ptr2, out_ptr3, ynumel, xnumel, YBLOCK : tl.constexpr, XBLOCK : tl.constexpr):
ynumel = 88
xnumel = 128
yoffset = tl.program_id(1) * YBLOCK
yindex = yoffset + tl.arange(0, YBLOCK)[:, None]
ymask = yindex < ynumel
xoffset = tl.program_id(0) * XBLOCK
xindex = xoffset + tl.arange(0, XBLOCK)[None, :]
xmask = xindex < xnumel
x1 = xindex
y0 = yindex
tmp0 = tl.load(tl.make_block_ptr(in_ptr0, shape=[88, 128], strides=[128, 1], block_shape=[YBLOCK, XBLOCK], order=[1, 0], offsets=[yoffset, xoffset]), boundary_check=[0, 1], eviction_policy='evict_last')
tmp1 = tl.load(tl.make_block_ptr(in_ptr1, shape=[128], strides=[1], block_shape=[XBLOCK], order=[0], offsets=[xoffset]), boundary_check=[0], eviction_policy='evict_last')[None, :]
tmp5 = tl.load(tl.make_block_ptr(in_ptr2, shape=[88, 128], strides=[128, 1], block_shape=[YBLOCK, XBLOCK], order=[1, 0], offsets=[yoffset, xoffset]), boundary_check=[0, 1], eviction_policy='evict_last')
tmp2 = tmp0 * tmp1
tmp3 = 128.0
tmp4 = tmp2 * tmp3
tmp6 = tmp2 * tmp5
tmp7 = tmp0 * tmp5
tl.store(tl.make_block_ptr(out_ptr0, shape=[88, 128], strides=[128, 1], block_shape=[YBLOCK, XBLOCK], order=[1, 0], offsets=[yoffset, xoffset]), tl.broadcast_to(tmp2, [YBLOCK, XBLOCK]).to(tl.float32), boundary_check=[0, 1])
tl.store(tl.make_block_ptr(out_ptr1, shape=[88, 128], strides=[128, 1], block_shape=[YBLOCK, XBLOCK], order=[1, 0], offsets=[yoffset, xoffset]), tl.broadcast_to(tmp4, [YBLOCK, XBLOCK]).to(tl.float32), boundary_check=[0, 1])
tl.store(tl.make_block_ptr(out_ptr2, shape=[88, 128], strides=[128, 1], block_shape=[YBLOCK, XBLOCK], order=[1, 0], offsets=[yoffset, xoffset]), tl.broadcast_to(tmp6, [YBLOCK, XBLOCK]).to(tl.float32), boundary_check=[0, 1])
tl.store(tl.make_block_ptr(out_ptr3, shape=[88, 128], strides=[128, 1], block_shape=[YBLOCK, XBLOCK], order=[1, 0], offsets=[yoffset, xoffset]), tl.broadcast_to(tmp7, [YBLOCK, XBLOCK]).to(tl.float32), boundary_check=[0, 1])
''', device_str='cuda')
```
# Test plan
Added the test case above to CI. Checked that a single kernel is generated with 2D tiling.
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
| true
|
3,001,217,597
|
[BE] follow autoformatting and linter
|
XilunWu
|
closed
|
[
"oncall: distributed",
"Merged",
"ciflow/trunk",
"topic: not user facing",
"ciflow/inductor"
] | 11
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #151507
cc @H-Huang @awgu @wanchaol @fegin @fduwjj @wz337 @wconstab @d4l3k
| true
|
3,001,210,864
|
[inductor][test] Skip triton tests for MPS as well, also change reason for skipping SM89 to not IS_BIG_GPU
|
henrylhtsang
|
closed
|
[
"Merged",
"Reverted",
"topic: not user facing",
"module: inductor",
"ciflow/inductor",
"ci-no-td"
] | 9
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #148622
* __->__ #151506
Differential Revision:
[D73162091](https://our.internmc.facebook.com/intern/diff/D73162091/)
Combining / improving https://github.com/pytorch/pytorch/pull/150485 and https://github.com/pytorch/pytorch/pull/150343
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
| true
|
3,001,208,223
|
FSDP2 tutorial outline
|
weifengpy
|
open
|
[
"oncall: distributed",
"triaged"
] | 5
|
CONTRIBUTOR
|
### 📚 The doc issue
draft the FSDP2 tutorial, similar to FSDP1's https://pytorch.org/tutorials/intermediate/FSDP_tutorial.html, with code examples in [pytorch/examples](https://github.com/pytorch/examples)
basics using [Transformer model](https://github.com/pytorch/pytorch/blob/f5851efed99db3f3509982dda8680c1b60882c6e/torch/testing/_internal/distributed/_tensor/common_dtensor.py#L197) (a minimal sketch follows this list)
* model init: nested wrapping, dim-0 sharding, AC
* load state dict: DTensor version, DCP version
* forward/backward: implicit prefetch and explicit prefetch, reshard_after_forward=False/Int, mixed precision, cpu offloading
* gradient clipping, gradient scaler, optimizer with DTensor
* save state dict: DTensor version, DCP version
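A minimal sketch of the "basics" flow above (nested wrapping plus mixed precision), assuming the FSDP2 `fully_shard` / `MixedPrecisionPolicy` entry points exported from `torch.distributed.fsdp` in recent releases; the toy `Block` module is a placeholder for illustration, not part of the planned tutorial:
```python
import torch
import torch.nn as nn
from torch.distributed.fsdp import MixedPrecisionPolicy, fully_shard  # FSDP2 entry points

class Block(nn.Module):
    def __init__(self, dim: int = 256):
        super().__init__()
        self.mlp = nn.Sequential(nn.Linear(dim, 4 * dim), nn.GELU(), nn.Linear(4 * dim, dim))

    def forward(self, x):
        return x + self.mlp(x)

def shard_model(model: nn.Module) -> nn.Module:
    # bf16 compute, fp32 gradient reduction
    mp = MixedPrecisionPolicy(param_dtype=torch.bfloat16, reduce_dtype=torch.float32)
    # nested wrapping: shard each block first, then the root module
    for block in model.children():
        fully_shard(block, mp_policy=mp, reshard_after_forward=True)
    fully_shard(model, mp_policy=mp)
    return model

# Requires an initialized process group / device mesh (e.g. launched via torchrun) before sharding:
# model = shard_model(nn.Sequential(*[Block() for _ in range(4)]))
```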
advanced topics
* torchrec DMP + DDP + ignored parameters for recommendation models
* HSDP
* tensor subclass extension point: float8 example
* dim-i sharding
* composability with TP
* gradient accumulation and composability with PP
* AG/RS buffer in memory pool and symmetric memory
* inter-stream fragmentation: AG/RS in default pool vs separate pool
* AMD and new accelerator support
FSDP1-to-FSDP2 migration guide
* moving things from https://github.com/pytorch/torchtitan/blob/main/docs/fsdp.md
### Suggest a potential alternative/fix
_No response_
cc @H-Huang @awgu @wanchaol @fegin @fduwjj @wz337 @wconstab @d4l3k
| true
|
3,001,203,942
|
[inductor][test] Skip triton tests for MPS as well, also
|
henrylhtsang
|
closed
|
[
"fb-exported",
"topic: not user facing",
"module: inductor",
"ciflow/inductor"
] | 3
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #151504
Differential Revision: [D73162091](https://our.internmc.facebook.com/intern/diff/D73162091/)
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
| true
|
3,001,155,646
|
[DDP] add one option to allow skipping all reduce unused parameters
|
zhaojuanmao
|
closed
|
[
"oncall: distributed",
"fb-exported",
"Merged",
"ciflow/trunk",
"release notes: distributed (c10d)",
"release notes: distributed (checkpoint)"
] | 9
|
CONTRIBUTOR
|
Summary: add an option to allow skipping the all-reduce of unused parameters; this could significantly improve training throughput when the model has a large number of unused parameters.
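A sketch of the scenario this targets, for illustration only; the new option's name is not given in this summary, so only today's behavior with `find_unused_parameters=True` is shown:
```python
import os
import torch
import torch.distributed as dist
import torch.nn as nn
from torch.nn.parallel import DistributedDataParallel as DDP

class Net(nn.Module):
    def __init__(self):
        super().__init__()
        self.used = nn.Linear(8, 8)
        self.unused = nn.Linear(8, 8)   # never hit in forward -> "unused parameters"

    def forward(self, x):
        return self.used(x)

# single-process gloo setup just to make the sketch self-contained
os.environ.setdefault("MASTER_ADDR", "127.0.0.1")
os.environ.setdefault("MASTER_PORT", "29501")
dist.init_process_group("gloo", rank=0, world_size=1)

model = DDP(Net(), find_unused_parameters=True)  # today: unused-parameter grads still get all-reduced
model(torch.randn(2, 8)).sum().backward()
dist.destroy_process_group()
```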
Test Plan: unit tests, CI
Differential Revision: D72282069
cc @H-Huang @awgu @wanchaol @fegin @fduwjj @wz337 @wconstab @d4l3k
| true
|
3,001,154,526
|
[standalone_compile] Some misc fixes
|
zou3519
|
closed
|
[
"Merged",
"ciflow/trunk",
"topic: not user facing",
"module: inductor",
"ciflow/inductor"
] | 3
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #151502
* #151551
* #151501
This PR fixes two things.
The first problem is that, in the vLLM-style usage, standalone_compile is called from within a custom torch.compile backend. If there is already a FakeTensorMode (which there is), we shouldn't create a new FakeTensorMode with the same shape_env; instead we should just reuse the existing FakeTensorMode.
The second fix is that compile_fx can mutate the passed-in gm, so we deepcopy it (since standalone_compile should be standalone).
Test Plan:
- new test
- updated old tests
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
| true
|
3,001,154,465
|
[standalone_compile] Don't check if path is directory if it doesn't exist
|
zou3519
|
closed
|
[
"Merged",
"topic: not user facing",
"module: inductor",
"ciflow/inductor"
] | 2
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #151502
* #151551
* __->__ #151501
`os.path.isdir(path)` will return False if the path doesn't exist.
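A quick illustration of the stdlib behavior this relies on (not the PR's code):
```python
import os

# isdir already returns False for paths that don't exist, so a separate
# existence check before calling it is redundant.
print(os.path.isdir("/this/path/does/not/exist"))  # False
```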
Test Plan:
- new test
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
| true
|
3,001,127,472
|
deferring unbacked floats runtime assertions not working!
|
laithsakka
|
open
|
[
"triaged",
"oncall: pt2"
] | 2
|
CONTRIBUTOR
|
repro:
```
import torch
from torch._inductor.utils import fresh_inductor_cache

torch._dynamo.config.capture_scalar_outputs = True

@torch.compile(fullgraph=True)
def func(a, b):
    torch._check(b.item()*2 == 11)
    return b*10

with fresh_inductor_cache():
    func(torch.tensor([100]), torch.tensor([5.5]))  # 5.5 * 2 == 11, check passes
    # expected: the deferred runtime assertion should fire here (1.8 * 2 != 11), but it does not
    func(torch.tensor([5]), torch.tensor([1.8]))
```
cc @chauhang @penguinwu
| true
|
3,001,124,705
|
[ez] fix code owners typo
|
bobrenjc93
|
closed
|
[
"Merged",
"ciflow/trunk",
"topic: not user facing"
] | 3
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #151422
* #151421
* __->__ #151499
| true
|
3,001,119,517
|
[SymmMem] Add all-to-all
|
kwen2501
|
closed
|
[
"oncall: distributed",
"Merged",
"ciflow/trunk",
"release notes: distributed (c10d)"
] | 3
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #151993
* #151819
* __->__ #151498
* #151261
Add an all-to-all impl based on NVSHMEM's on-stream API `nvshmemx_alltoallmem_on_stream`.
cc @H-Huang @awgu @wanchaol @fegin @fduwjj @wz337 @wconstab @d4l3k
| true
|
3,001,119,495
|
[cp] dispatch flex_attention to CP impl in TorchDispatchMode
|
XilunWu
|
open
|
[
"oncall: distributed",
"ciflow/inductor",
"module: context parallel",
"release notes: context parallel"
] | 2
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #152311
* __->__ #151497
## Test
`pytest test/distributed/tensor/test_attention.py -s -k test_ring_flex_attention`
cc @H-Huang @awgu @wanchaol @fegin @fduwjj @wz337 @wconstab @d4l3k
| true
|
3,001,119,456
|
[BE] follow autoformatting and linter
|
XilunWu
|
closed
|
[
"oncall: distributed",
"ciflow/inductor"
] | 3
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #151497
* __->__ #151496
* #151495
cc @H-Huang @awgu @wanchaol @fegin @fduwjj @wz337 @wconstab @d4l3k
| true
|
3,001,119,415
|
[dtensor][view_op] add as_strided op support to DTensor in FakeTensorMode
|
XilunWu
|
open
|
[
"oncall: distributed",
"topic: not user facing",
"ciflow/inductor",
"module: dtensor"
] | 2
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #151495
## Introduction
`flex_attention`'s FakeTensor propagation `flex_attention_fake_impl` [permutes](https://github.com/pytorch/pytorch/blob/fb6ac2f16132f7953711ce6924bc2ee4a033228c/torch/_higher_order_ops/flex_attention.py#L459) the stride of `out` (the attention score) based on `query`'s stride. To enable calling `flex_attention` on DTensor, this requires us to add `as_strided` support on DTensor in `FakeTensorMode`.
## Limited Support
Due to the complexity of supporting actual `as_strided` on DTensor, I chose to only enable a limited subset (a plain-tensor sketch of the allowed pattern follows this list):
1. `as_strided` only works correctly in `FakeTensorMode`, i.e. shape and stride propagation.
2. `as_strided` is only allowed in the case where `size == input.shape`, because this PR specifically unblocks the use case of `flex_attention_fake_impl`.
3. `as_strided` requires `storage_offset=None` because the other case is not defined for DTensor.
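A plain-tensor sketch of the only pattern allowed here, for illustration only (it does not use DTensor or FakeTensorMode):
```python
import torch

# Mimic flex_attention_fake_impl: give `out` the same (permuted) stride as `query`,
# with size == input.shape and no storage_offset.
query = torch.empty(2, 4, 8, 16).permute(0, 2, 1, 3)   # non-contiguous strides
out = torch.empty(query.shape)                          # contiguous buffer, same shape
out = out.as_strided(query.shape, query.stride())       # allowed: size == input.shape
assert out.stride() == query.stride()
```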
## Test
`pytest test/distributed/tensor/test_view_ops.py -s -k test_as_strided`
cc @H-Huang @awgu @wanchaol @fegin @fduwjj @wz337 @wconstab @d4l3k @tianyu-l
| true
|
3,001,086,170
|
Do not do proper const fold during tensorify_python_scalars
|
laithsakka
|
closed
|
[
"Merged",
"ciflow/trunk",
"release notes: fx",
"fx",
"module: inductor",
"ciflow/inductor"
] | 6
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #151494
After chatting with Bob: the goal of this is to const-fold the floats that were tensorified, by calling guard_scalar(val) on them and then replacing their usages with their values.
Hence we do not need to do this for nodes with no float symbols.
We do not want to do proper const folding because we need to preserve statements that deferred runtime asserts depend on (see the added test).
cc @ezyang @SherlockNoMad @EikanWang @jgong5 @wenzhe-nrv @voznesenskym @penguinwu @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @jiayisunx @ipiszy @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
| true
|
3,001,062,536
|
[executorch hash update] update the pinned executorch hash
|
pytorchupdatebot
|
closed
|
[
"open source",
"Merged",
"ciflow/trunk",
"topic: not user facing",
"ciflow/inductor"
] | 3
|
COLLABORATOR
|
This PR is auto-generated nightly by [this action](https://github.com/pytorch/pytorch/blob/main/.github/workflows/nightly.yml).
Update the pinned executorch hash.
| true
|
3,001,045,667
|
Fix has_free_symbols
|
laithsakka
|
closed
|
[
"Merged",
"ciflow/trunk",
"release notes: fx",
"fx",
"ciflow/inductor"
] | 4
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #151604
* #151494
* __->__ #151492
* #151171
* #151170
Used to fail for `self.assertFalse(has_free_symbols(sympy.S.true))`.
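A standalone restatement of that check, assuming `has_free_symbols` is importable from `torch.fx.experimental.symbolic_shapes` like the other symbolic-shape helpers in this dump:
```python
import sympy
from torch.fx.experimental.symbolic_shapes import has_free_symbols

# sympy.S.true is a boolean singleton with no free symbols; this call used to fail.
assert not has_free_symbols(sympy.S.true)
```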
cc @ezyang @SherlockNoMad @EikanWang @jgong5 @wenzhe-nrv
| true
|
3,001,043,324
|
[dynamic shapes] data-dependent error when backed + unbacked expression resolves statically
|
pianpwk
|
open
|
[
"triaged",
"oncall: pt2",
"module: dynamic shapes"
] | 1
|
CONTRIBUTOR
|
### 🐛 Describe the bug
Reported by @ColinPeppler
We get the log below, which suggests the expression can be simplified to False with the backed hint, yet it still raises a data-dependent error:
```
torch.fx.experimental.symbolic_shapes.GuardOnDataDependentSymNode: Could not guard on data-dependent expression False (unhinted: Ne(Mod(18*u0, ((s58*u0)//8)), 0)). (Size-like symbols: none)
Caused by: (_refs/__init__.py:3806 in _reshape_view_helper)
For more information, run with TORCH_LOGS="dynamic"
For extended logs when we create symbols, also add TORCHDYNAMO_EXTENDED_DEBUG_CREATE_SYMBOL=""
If you suspect the guard was triggered from C++, add TORCHDYNAMO_EXTENDED_DEBUG_CPP=1
For more debugging help, see https://docs.google.com/document/d/1HSuTTVvYH1pTew89Rtpeu84Ht3nQEFTYhAX3Ypa_xJs/edit?usp=sharing
For C++ stack trace, run with TORCHDYNAMO_EXTENDED_DEBUG_CPP=1
The following call raised this error:
File "/data/users/pianpwk/pytorch/custom_tests/test_s0_u0.py", line 11, in forward
return y.view(-1, 144)
To fix the error, insert one of the following checks before this call:
1. torch._check(False)
2. torch._check(True)
```
Repro:
```
import torch
from torch.export import export, Dim
class Foo(torch.nn.Module):
def forward(self, a, b):
u0 = a.item()
y = torch.zeros(u0, 18, b.shape[0])
torch._check((u0 * 18 * b.shape[0]) // 144 != u0)
torch._check(u0 % ((u0 * 18 * b.shape[0]) // 144) != 0)
return y.view(-1, 144)
ep = export(
Foo(),
(torch.tensor([6]), torch.randn(8)),
dynamic_shapes={
"a": None,
"b": (Dim.DYNAMIC,),
},
)
```
### Versions
latest nightly
cc @chauhang @penguinwu @ezyang @bobrenjc93
| true
|
3,001,026,302
|
faster gather implementation
|
ngimel
|
closed
|
[
"oncall: distributed",
"Merged",
"Reverted",
"ciflow/trunk",
"release notes: cuda",
"ciflow/rocm",
"ci-no-td"
] | 12
|
COLLABORATOR
|
So far it's only for `gather`, but we'll move index_select and index to this implementation too. Torchtitan and fbgemm have noticed that gather/index_select perf is bad; this PR brings the core implementation on par with those customized implementations. Added benefits: all dtypes are supported, and it is a bit less strict on tensor dimensions/contiguity because we pick the fast path after TensorIterator has collapsed the dimensions.
Biggest part of this PR is not even the kernel (it's dumb, just vectorized loads are enough), but moving utilities for vectorized loads and stores from SymmetricMemory to be generally accessible in MemoryAccess.cuh.
Additional tests are coming to make sure this implementation doesn't break anything.
`gather` is equivalent to `x[indices]` for 1d indices via
```
def fn_gather(x, indices):
return torch.gather(x, dim=0, index=indices.unsqueeze(1).expand(-1, x.shape[1]))
def fn_index(x, indices):
return x[indices]
```
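A rough way to compare the two paths, for illustration only; the sizes, dtype, and use of `torch.utils.benchmark` are my own choices, not from this PR:
```python
import torch
from torch.utils.benchmark import Timer

x = torch.randn(1_000_000, 256, device="cuda", dtype=torch.bfloat16)
indices = torch.randint(0, x.shape[0], (100_000,), device="cuda")

def fn_gather(x, indices):
    return torch.gather(x, dim=0, index=indices.unsqueeze(1).expand(-1, x.shape[1]))

def fn_index(x, indices):
    return x[indices]

for fn in (fn_gather, fn_index):
    t = Timer(stmt="fn(x, indices)", globals={"fn": fn, "x": x, "indices": indices})
    print(fn.__name__, t.timeit(100))
```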
cc @H-Huang @awgu @wanchaol @fegin @fduwjj @wz337 @wconstab @d4l3k
| true
|
3,001,006,754
|
Use reusable binary docker build action for manywheel
|
clee2000
|
closed
|
[
"Merged",
"topic: not user facing"
] | 3
|
CONTRIBUTOR
|
This is part of splitting up https://github.com/pytorch/pytorch/pull/150558 into smaller chunks, please see that for more context
Similar to https://github.com/pytorch/pytorch/pull/151483 but for manywheel
Changed the job name
s390x doesn't have access to AWS ECR, so it doesn't use the action. The manylinuxs390x-builder ECR repo doesn't exist on Docker Hub, so I don't know why the image name is that.
Testing:
Can't really test, since PRs don't have the credentials to push to docker.io, which is the registry used for everything, including PRs, right now
| true
|
3,000,962,991
|
Use reusable binary docker build action for libtorch
|
clee2000
|
closed
|
[
"Merged",
"topic: not user facing"
] | 3
|
CONTRIBUTOR
|
This is part of splitting up https://github.com/pytorch/pytorch/pull/150558 into smaller chunks, please see that for more context
Similar to https://github.com/pytorch/pytorch/pull/151483 but for libtorch
Changed the job name
Testing:
Can't really test, since PRs don't have the credentials to push to docker.io, which is the registry used for everything, including PRs, right now
| true
|
3,000,934,385
|
Add option to use mempool on OOM
|
dsjohns2
|
closed
|
[
"oncall: distributed",
"Merged",
"ciflow/trunk",
"release notes: distributed (c10d)"
] | 16
|
CONTRIBUTOR
|
MemPool is a separate pool of memory handled by the caching allocator. This PR adds an option to let the caching allocator try to use this pool as a last resort instead of OOMing, by associating a `use_on_oom` bool with each MemPool.
Usage:
Users can optionally specify a ``use_on_oom`` bool (which is False by default) during MemPool creation. If true, then the CUDACachingAllocator will be able to use memory in this pool as a last resort instead of OOMing.
```
pool = torch.cuda.MemPool(allocator, use_on_oom=True)
with torch.cuda.use_mem_pool(pool):
a = torch.randn(40 * 1024 * 1024, dtype=torch.uint8, device="cuda")
del a
# at the memory limit, this will succeed by using pool's memory in order to avoid the oom
b = torch.randn(40 * 1024 * 1024, dtype=torch.uint8, device="cuda")
```
Testing:
```
python test/test_cuda.py -k test_mempool_limited_memory_with_allocator
```
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #151487
cc @H-Huang @awgu @wanchaol @fegin @fduwjj @wz337 @wconstab @d4l3k
| true
|
3,000,910,675
|
RuntimeError: d.is_cuda() INTERNAL ASSERT FAILED at "/pytorch/c10/cuda/impl/CUDAGuardImpl.h"
|
Javen-W
|
open
|
[
"module: dataloader",
"module: cuda",
"triaged"
] | 0
|
NONE
|
### 🐛 Describe the bug
I'm trying to train a Diffusion model but I'm inconsistently encountering segfaults, system crashes, or the following RuntimeError specifically in the `train_one_epoch()` and `mix_data()` functions, preventing any progress from being made. I've tried different versions of Python, PyTorch, Linux kernels, Nvidia drivers, Conda, but no success. I have attached a zip with all the relevant project files. The cache cleaning, synchronization calls, the redundant `.to(device)` calls, and debugging prints were added in response to this persistent issue.
```
$ python3 code/uncond_gen.py
Creating dataset...
Using device: cuda
Torch version=2.6.0+cu124, cuda_available=True
Initial memory: 670.21 MB
Loading data...
Data loaded: torch.Size([5000, 2]), took 0.00s
Memory after load: 670.48 MB
Computing noise schedule...
Noise schedule computed, took 0.04s
Memory after schedule: 748.07 MB
Initializing dataset...
Precomputing 2500000 noisy samples...
Processing sample 0/2500000, memory: 750.43 MB, GPU memory: 70.38 MB
Processing sample 100000/2500000, memory: 783.89 MB, GPU memory: 70.38 MB
Processing sample 200000/2500000, memory: 783.89 MB, GPU memory: 70.38 MB
Processing sample 300000/2500000, memory: 783.89 MB, GPU memory: 70.38 MB
Processing sample 400000/2500000, memory: 783.89 MB, GPU memory: 70.38 MB
Processing sample 500000/2500000, memory: 783.89 MB, GPU memory: 70.38 MB
Processing sample 600000/2500000, memory: 784.14 MB, GPU memory: 70.38 MB
Processing sample 700000/2500000, memory: 784.14 MB, GPU memory: 70.38 MB
Processing sample 800000/2500000, memory: 784.14 MB, GPU memory: 70.38 MB
Processing sample 900000/2500000, memory: 784.14 MB, GPU memory: 70.38 MB
Processing sample 1000000/2500000, memory: 784.14 MB, GPU memory: 70.38 MB
Processing sample 1100000/2500000, memory: 784.14 MB, GPU memory: 70.38 MB
Processing sample 1200000/2500000, memory: 784.14 MB, GPU memory: 70.38 MB
Processing sample 1300000/2500000, memory: 784.14 MB, GPU memory: 70.38 MB
Processing sample 1400000/2500000, memory: 784.14 MB, GPU memory: 70.38 MB
Processing sample 1500000/2500000, memory: 784.14 MB, GPU memory: 70.38 MB
Processing sample 1600000/2500000, memory: 784.14 MB, GPU memory: 70.38 MB
Processing sample 1700000/2500000, memory: 784.14 MB, GPU memory: 70.38 MB
Processing sample 1800000/2500000, memory: 784.14 MB, GPU memory: 70.38 MB
Processing sample 1900000/2500000, memory: 784.14 MB, GPU memory: 70.38 MB
Processing sample 2000000/2500000, memory: 784.14 MB, GPU memory: 70.38 MB
Processing sample 2100000/2500000, memory: 784.14 MB, GPU memory: 70.38 MB
Processing sample 2200000/2500000, memory: 784.14 MB, GPU memory: 70.38 MB
Processing sample 2300000/2500000, memory: 784.14 MB, GPU memory: 70.38 MB
Processing sample 2400000/2500000, memory: 784.14 MB, GPU memory: 70.38 MB
Dataset initialized, took 72.63s
Memory after mix_data: 784.14 MB
Dataset created
Train epoch 1/5: 0%|▌ | 1/250
Error in epoch 1: d.is_cuda() INTERNAL ASSERT FAILED at "/pytorch/c10/cuda/impl/CUDAGuardImpl.h":34, please report a bug to PyTorch. 0%| | 0/5 [00:05<?, ?it/s]
Traceback (most recent call last):
File "/home/javen/Projects/CSE849/homework/hw5/code/uncond_gen.py", line 118, in <module>
train_loss = train_one_epoch(e)
^^^^^^^^^^^^^^^^^^
File "/home/javen/Projects/CSE849/homework/hw5/code/uncond_gen.py", line 68, in train_one_epoch
for batch in tqdm(train_loader, leave=False, desc=f"Train epoch {epoch + 1}/{n_epochs}"):
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/javen/Projects/CSE849/homework/hw5/venv/lib/python3.12/site-packages/tqdm/std.py", line 1181, in __iter__
for obj in iterable:
^^^^^^^^
File "/home/javen/Projects/CSE849/homework/hw5/venv/lib/python3.12/site-packages/torch/utils/data/dataloader.py", line 708, in __next__
data = self._next_data()
^^^^^^^^^^^^^^^^^
File "/home/javen/Projects/CSE849/homework/hw5/venv/lib/python3.12/site-packages/torch/utils/data/dataloader.py", line 764, in _next_data
data = self._dataset_fetcher.fetch(index) # may raise StopIteration
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/javen/Projects/CSE849/homework/hw5/venv/lib/python3.12/site-packages/torch/utils/data/_utils/fetch.py", line 52, in fetch
data = [self.dataset[idx] for idx in possibly_batched_index]
~~~~~~~~~~~~^^^^^
File "/home/javen/Projects/CSE849/homework/hw5/code/data.py", line 44, in __getitem__
return (self.all_data[idx],
~~~~~~~~~~~~~^^^^^
RuntimeError: d.is_cuda() INTERNAL ASSERT FAILED at "/pytorch/c10/cuda/impl/CUDAGuardImpl.h":34, please report a bug to PyTorch.
```
Project files:
[pytorch-code.zip](https://github.com/user-attachments/files/19785358/pytorch-code.zip)
### Versions
[collect_env.txt](https://github.com/user-attachments/files/19785460/collect_env.txt)
cc @andrewkho @divyanshk @SsnL @VitalyFedyunin @dzhulgakov @ptrblck @msaroufim @eqy @jerryzh168
| true
|
3,000,885,440
|
c10d/Store: add nonblocking mode to queue_pop
|
d4l3k
|
closed
|
[
"oncall: distributed",
"Merged",
"ciflow/trunk",
"release notes: distributed (c10d)"
] | 18
|
MEMBER
|
This adds a non-blocking mode to queue_pop. It allows workers to poll whether work is ready without blocking the main loop, which is useful when you want to keep a GPU at maximum utilization while items are only periodically sent on the queue.
We also expose a `torch.distributed.QueueEmptyError` so users can catch the error and handle it accordingly.
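A minimal polling sketch of how this might be used; the `queue_push` producer call and the exact name of the non-blocking flag (`block=False` here) are assumptions for illustration, not taken from this description:
```python
import torch.distributed as dist

# single-process store just to make the sketch self-contained
store = dist.TCPStore("localhost", 29500, world_size=1, is_master=True)
store.queue_push("work", b"payload")                   # assumed producer-side API

while True:
    try:
        item = store.queue_pop("work", block=False)    # assumed name for the non-blocking flag
    except dist.QueueEmptyError:
        # nothing queued yet: keep the GPU busy with other work, then poll again
        continue
    print(item)
    break
```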
Test plan:
```
pytest test/distributed/test_store.py -k queue -v -s -x
```
cc @H-Huang @awgu @wanchaol @fegin @fduwjj @wz337 @wconstab
| true
|
3,000,873,749
|
Add torch.cuda._compile_kernel()
|
msaroufim
|
closed
|
[
"module: cuda",
"Merged",
"ciflow/trunk",
"release notes: cuda"
] | 15
|
MEMBER
|
Followup work on top https://github.com/pytorch/pytorch/pull/149480
Wrapper on top of nvrtc inspired by https://gist.github.com/malfet/2c9a25976dd7396430c38af603f791da from @malfet
Compiling toy kernels with this setup takes 0.01s vs 90s using `load_inline()` on my local H100. This was primarily motivated by the timeouts I was seeing in the popcorn leaderboard, but it would also be useful to integrate into KernelBench.
This PR is in the same spirit as https://github.com/pytorch/pytorch/pull/148972, which was a similar UX for Metal.
For now we are planning on landing this as a private function because we expect to iterate on both the user-facing API and the internal implementation; we will open up a separate issue to discuss the path towards making this work public and give a broader overview of the state of custom CUDA kernel authoring in PyTorch.
cc @ptrblck @eqy @jerryzh168
Future work, as a prereq to making the work public
* divup primitive
* support multiple kernels
* Expose _get_nvrtc_version from native code
* interop with torch.compile
* AMD support
| true
|
3,000,858,764
|
Use reusable binary docker build action for almalinux, clean up script
|
clee2000
|
closed
|
[
"Merged",
"topic: not user facing"
] | 3
|
CONTRIBUTOR
|
This is part of splitting up https://github.com/pytorch/pytorch/pull/150558 into smaller chunks, please see that for more context
Use the binary docker build action from https://github.com/pytorch/pytorch/pull/151471
Change the workflow trigger to be all of .ci/docker so it will make a new image + tag whenever it changes.
build script:
* change to be independent of the CUDA_VERSION env var, since all the info should be in the imagename:tag
* remove docker push parts since that will happen during the workflow
* clean up a bit
* make the build script more like the CI build script (use a temp image name)
I don't think this image is actually used anywhere
Also push the docker image to imagename:tag; I got rid of it in the PR that made the reusable workflow since I thought it was not in the original scripts, but it actually is there.
| true
|
3,000,841,141
|
[MegaCache] Rename the PGO artifact when used between different jobs
|
oulgen
|
closed
|
[
"Merged",
"ciflow/trunk",
"topic: not user facing",
"module: inductor",
"module: dynamo",
"ciflow/inductor"
] | 9
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #151482
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
| true
|
3,000,822,982
|
inductor.config.descriptive_names = False is not actually supported (#145523) (#146051)
|
exclamaforte
|
open
|
[
"fb-exported",
"Merged",
"Reverted",
"ciflow/trunk",
"topic: deprecation",
"module: inductor",
"ciflow/inductor",
"release notes: inductor",
"ci-no-td"
] | 11
|
CONTRIBUTOR
|
Summary:
This config is not supported (it throws an error when set), and doesn't really make sense imo.
Approved by: https://github.com/eellison
Test Plan: contbuild & OSS CI, see https://hud.pytorch.org/commit/pytorch/pytorch/edf266e9bbbf6063f7c4a336ffb50234e11a0a82
Reviewed By: masnesral
Differential Revision: D68846308
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
| true
|
3,000,812,947
|
Some way to branch on dynamic vs static shapes in user code
|
zou3519
|
closed
|
[
"triaged",
"oncall: pt2",
"module: dynamic shapes",
"vllm-compile"
] | 0
|
CONTRIBUTOR
|
Motivation: Sometimes users want to say "if the shape is provably greater than X, use an optimized kernel; otherwise fall back to a more general kernel". When they compile such code with dynamic shapes, it's fine to fall back to the general kernel. When they compile with static shapes, they want the best perf, and the optimized kernel should apply.
It sounds like `statically_known_true` is the right API for this (or even `is_concrete_int`); we should expose all of these so they can be called from user code.
Repro:
```py
import torch
from torch.fx.experimental.symbolic_shapes import statically_known_true
@torch.compile(fullgraph=True)
def f(x):
if statically_known_true(x.shape[0] > 50): # causes graph_break
return x + 1
else:
return x + 2
x = torch.zeros(51)
torch._dynamo.mark_dynamic(x, 0)
result = f(x)
print(result)
```
cc @chauhang @penguinwu @ezyang @bobrenjc93
| true
|
3,000,812,284
|
[map] defer importing AOTConfig and create_joint dependency
|
ydwu4
|
closed
|
[
"fb-exported",
"Merged",
"ciflow/trunk",
"topic: not user facing"
] | 8
|
CONTRIBUTOR
|
Summary:
We reverted D72896450 due to a weird error that happens in a seemingly unrelated test: "buck2 run apf/data/tests:preproc_state_serializer_test -- --filter-text "test_load_artifact""
I did some investigation and found that moving the import of AOTConfig and create_joint inside create_fw_bw_graph delays importing the modules they recursively import from test-construction time to test-running time. The path.exists mock gets called multiple times due to the inspect.getsource calls in multiple places of torch.
Specifically, we set a breakpoint at the side effect of the mocked os.path.exists. P1787425831 shows the importing stack trace before the change. P1787431638 shows the importing stack trace after the change.
The notable difference is that in the second paste, we trigger an os.path.exists when somewhere in triton we call inspect.getsourcelines while constructing OnDiskPreprocStateSerializer, which gets recorded by the mock.
Looking at the test, it seems what the test actually wants to test is the deserialize step, so we reset_mock before that step to avoid mocking things that happened at import time.
Test Plan:
buck2 run apf/data/tests:preproc_state_serializer_test -- --filter-text "test_load_artifact"
and existing tests for map.
Differential Revision: D73138415
| true
|
3,000,803,372
|
[PT2] torch.layer_norm errors in eager but runs fine in backend=aot_eager_decomp_partition
|
weifengpy
|
open
|
[
"module: error checking",
"triaged",
"enhancement",
"oncall: pt2",
"module: decompositions"
] | 2
|
CONTRIBUTOR
|
### 🚀 The feature, motivation and pitch
torch.layer_norm throws an error when input and weight are in different dtypes. However, it runs fine with backend=aot_eager_decomp_partition because of the decomposition of torch.layer_norm into fp32 ops.
We ran into this because the online job disables pt2, while offline training requires pt2. Ideally we want the same behavior across eager and compile.
cc @ezyang @gchanan @zou3519 @kadeng @msaroufim @malfet @chauhang @penguinwu @SherlockNoMad @bdhirsh
```
# python test_layer_norm.py
import torch
def forward(input):
normalized_shape = (4, )
weight = torch.ones(4, device="cuda")
bias = torch.ones(4, device="cuda")
eps = 0.1
output = torch.layer_norm(
input, normalized_shape, weight, bias, eps, torch.backends.cudnn.enabled
)
return output
x = torch.tensor([[1.0, 2.0, 3.0, 4.0],
[2.0, 4.0, 6.0, 8.0]], device="cuda")
# no error
forward_compiled = torch.compile(forward, backend="aot_eager_decomp_partition")
forward_compiled(x.to(torch.bfloat16))
# error
forward_compiled = torch.compile(forward, backend="aot_eager")
forward_compiled(x.to(torch.bfloat16))
# error
# forward(x.to(torch.bfloat16))
```
error
```
RuntimeError: expected scalar type BFloat16 but found Float
While executing %native_layer_norm : [num_users=1] = call_function[target=torch.ops.aten.native_layer_norm.default](args = (%arg0_1, [4], %ones, %ones_1, 0.1), kwargs = {})
GraphModule: class <lambda>(torch.nn.Module):
def forward(self, arg0_1: "bf16[2, 4][4, 1]"):
# File: /data/users/weif/pytorch/test_layer_norm.py:5 in forward, code: weight = torch.ones(4, device="cuda")
ones: "f32[4][1]" = torch.ops.aten.ones.default([4], device = device(type='cuda'), pin_memory = False)
# File: /data/users/weif/pytorch/test_layer_norm.py:6 in forward, code: bias = torch.ones(4, device="cuda")
ones_1: "f32[4][1]" = torch.ops.aten.ones.default([4], device = device(type='cuda'), pin_memory = False)
# File: /data/users/weif/pytorch/test_layer_norm.py:8 in forward, code: output = torch.layer_norm(
native_layer_norm = torch.ops.aten.native_layer_norm.default(arg0_1, [4], ones, ones_1, 0.1); arg0_1 = ones = ones_1 = None
getitem: "bf16[2, 4][4, 1]" = native_layer_norm[0]; native_layer_norm = None
return (getitem,)
```
### Alternatives
_No response_
### Additional context
_No response_
| true
|
3,000,776,759
|
[fake tensor cache] Support index with non bool/int8 indices
|
anijain2305
|
closed
|
[
"Merged",
"Reverted",
"ciflow/trunk",
"topic: not user facing",
"ciflow/inductor",
"ci-no-td"
] | 19
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #152062
* #151961
* #151957
* __->__ #151477
* #151633
* #151409
| true
|
3,000,760,837
|
[export] export doesn't save custom meta for constant tensors
|
angelayi
|
closed
|
[
"oncall: pt2",
"export-triaged",
"oncall: export"
] | 2
|
CONTRIBUTOR
|
### 🐛 Describe the bug
```python
def test_run_decomp_custom_constant(self):
class M(torch.nn.Module):
def __init__(self):
super().__init__()
self.b = torch.ones(3, 3)
def forward(self, x):
return self.b + x
ep = torch.export.export(M(), (torch.ones(3, 3), ))
print(ep)
for node in ep.graph.nodes:
node.meta["custom"] = {"moo": "moo"}
for node in ep.graph.nodes:
print(node, node.meta.get("custom"))
decomp = ep.run_decompositions()
for node in decomp.graph.nodes:
print(node, node.meta.get("custom"))
```
cc @chauhang @penguinwu @avikchaudhuri @gmagogsfm @zhxchen17 @tugsbayasgalan @suo @ydwu4 @lucylq
### Versions
main
| true
|
3,000,729,945
|
[c10d][fr] Fix script for uneven reduce scatter and update test cases
|
fduwjj
|
closed
|
[
"oncall: distributed",
"Merged",
"ciflow/trunk",
"topic: not user facing"
] | 5
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #151475
Somehow the type string for reduce scatter is "REDUCE_SCATTER", not "REDUCESCATTER". This PR fixes it and adds more test cases.
cc @H-Huang @awgu @wanchaol @fegin @wz337 @wconstab @d4l3k
Differential Revision: [D73141245](https://our.internmc.facebook.com/intern/diff/D73141245)
| true
|
3,000,721,229
|
Use more efficient row/col computation
|
aartbik
|
closed
|
[
"triaged",
"open source",
"Merged",
"ciflow/trunk",
"topic: not user facing"
] | 6
|
COLLABORATOR
|
This change addresses the first/second time/mem "spike" observed in
https://github.com/pytorch/pytorch/issues/151351
Fixes #151351
| true
|
3,000,720,162
|
Update README.md - James has the wrong github link.
|
ebetica
|
open
|
[
"triaged",
"open source",
"topic: not user facing",
"merging"
] | 5
|
CONTRIBUTOR
|
Unless I'm wrong, the James on the pytorch paper is not the account linked to in the README.md.
| true
|
3,000,711,150
|
[MegaCache] Encode key in base64
|
oulgen
|
closed
|
[
"Merged",
"ciflow/trunk",
"topic: not user facing"
] | 4
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #151472
I have noticed that there are some errors like
```
UnicodeDecodeError: 'utf-8' codec can't decode byte 0x95 in position 169302: invalid start byte
```
I haven't been able to repro this locally yet, but this change should fix the encoding issues. A short illustration of why base64 avoids the decode error is below.
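A minimal illustration of why base64 sidesteps this class of error (plain stdlib, not the MegaCache code itself):
```python
import base64

raw_key = b"\x95\x00some-binary-cache-key"            # bytes that are not valid UTF-8
# raw_key.decode("utf-8") would raise UnicodeDecodeError, like the log above
encoded = base64.b64encode(raw_key).decode("ascii")   # always plain ASCII text
assert base64.b64decode(encoded) == raw_key           # round-trips losslessly
```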
| true
|
3,000,709,486
|
Action for building docker binary builds
|
clee2000
|
closed
|
[
"Merged",
"topic: not user facing"
] | 4
|
CONTRIBUTOR
|
This is part of splitting up https://github.com/pytorch/pytorch/pull/150558 into smaller chunks, please see that for more context
Uses calculate docker image with the new custom tag prefix, so the naming convention of the docker images is slightly different for images built on PR
based off of https://github.com/pytorch/pytorch/blob/a582f046084d1ea49b2a253ece15a4d6157f2579/.github/workflows/build-manywheel-images.yml#L101
Also moves the push of the docker images from inside the build scripts to inside the workflow
Currently not used anywhere, but the binary docker builds are very similar so I'm going to change them to use this instead
| true
|
3,000,676,892
|
Key error for _tensorify_python_scalars
|
BoyuanFeng
|
open
|
[
"triaged",
"oncall: pt2",
"module: dynamic shapes"
] | 2
|
CONTRIBUTOR
|
Repro:
```python
import torch
torch._dynamo.config.capture_scalar_outputs = True
def f(x, y):
x1 = x + 1
y_scalar = y.item()
z = x1 + y_scalar
return z, y_scalar
f = torch.compile(f)
f(torch.randn(2,3, device='cuda'), torch.tensor(3.0, device='cuda'))
```
Error:
```
/data/users/boyuan/pytorch/torch/_dynamo/pgo.py:465: UserWarning: dynamo_pgo force disabled by torch._inductor.config.force_disable_caches
warn_once(
Traceback (most recent call last):
File "/home/boyuan/playground/graph_partition/reorder/weak_dep.py", line 15, in <module>
f(torch.randn(2,3, device='cuda'), torch.tensor(3.0, device='cuda'))
File "/data/users/boyuan/pytorch/torch/_dynamo/eval_frame.py", line 675, in _fn
raise e.remove_dynamo_frames() from None # see TORCHDYNAMO_VERBOSE=1
File "/data/users/boyuan/pytorch/torch/_dynamo/output_graph.py", line 1568, in _call_user_compiler
raise BackendCompilerFailed(
File "/data/users/boyuan/pytorch/torch/_dynamo/output_graph.py", line 1543, in _call_user_compiler
compiled_fn = compiler_fn(gm, self.example_inputs())
File "/data/users/boyuan/pytorch/torch/_dynamo/repro/after_dynamo.py", line 150, in __call__
compiled_gm = compiler_fn(gm, example_inputs)
File "/data/users/boyuan/pytorch/torch/__init__.py", line 2365, in __call__
return compile_fx(model_, inputs_, config_patches=self.config)
File "/data/users/boyuan/pytorch/torch/_inductor/compile_fx.py", line 2168, in compile_fx
return aot_autograd(
File "/data/users/boyuan/pytorch/torch/_dynamo/backends/common.py", line 106, in __call__
cg = aot_module_simplified(gm, example_inputs, **self.kwargs)
File "/data/users/boyuan/pytorch/torch/_functorch/aot_autograd.py", line 1176, in aot_module_simplified
compiled_fn = dispatch_and_compile()
File "/data/users/boyuan/pytorch/torch/_functorch/aot_autograd.py", line 1150, in dispatch_and_compile
compiled_fn, _ = create_aot_dispatcher_function(
File "/data/users/boyuan/pytorch/torch/_functorch/aot_autograd.py", line 574, in create_aot_dispatcher_function
return _create_aot_dispatcher_function(
File "/data/users/boyuan/pytorch/torch/_functorch/aot_autograd.py", line 824, in _create_aot_dispatcher_function
compiled_fn, fw_metadata = compiler_fn(
File "/data/users/boyuan/pytorch/torch/_functorch/_aot_autograd/jit_compile_runtime_wrappers.py", line 239, in aot_dispatch_base
tensorify_python_scalars(fw_module, fake_mode.shape_env, fake_mode)
File "/data/users/boyuan/pytorch/torch/fx/passes/_tensorify_python_scalars.py", line 257, in tensorify_python_scalars
proxy = _sympy_interp(zf.node.expr)
File "/data/users/boyuan/pytorch/torch/fx/passes/_tensorify_python_scalars.py", line 145, in _sympy_interp
expr_to_sym_proxy[expr]
torch._dynamo.exc.BackendCompilerFailed: backend='inductor' raised:
KeyError: zuf0
Set TORCHDYNAMO_VERBOSE=1 for the internal stack trace (please do this especially if you're reporting a bug to PyTorch). For even more developer context, set TORCH_LOGS="+dynamo"
```
cc @chauhang @penguinwu @ezyang @bobrenjc93
| true
|
3,000,667,037
|
Include post grad gm and fx runnable in cache artifacts for tlparse
|
oulgen
|
closed
|
[
"Merged",
"ciflow/trunk",
"topic: not user facing",
"module: inductor",
"module: dynamo",
"ciflow/inductor"
] | 3
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #151469
Fixed #151462
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
| true
|
3,000,625,237
|
Bug At `d\\dependencies\\libtorch\\include\\ATen\\core\\jit_type_base.h":289`
|
tslever
|
open
|
[
"oncall: jit"
] | 0
|
NONE
|
### 🐛 Describe the bug
I receive the following log when running https://github.com/tslever/Settlers_Of_Catan/tree/main/back_end as of commit `cb8e64063c91ef9c9b78829f577d5e52816b0623` in Debug mode.
`[INFO][Wed Apr 16 13:52:59 2025]d\\dependencies\\libtorch\\include\\ATen\\core\\jit_type_base.h":289, please report a bug to PyTorch.`
### Versions
Collecting environment information...
PyTorch version: N/A
Is debug build: N/A
CUDA used to build PyTorch: N/A
ROCM used to build PyTorch: N/A
OS: Microsoft Windows 11 Home (10.0.26100 64-bit)
GCC version: Could not collect
Clang version: Could not collect
CMake version: Could not collect
Libc version: N/A
Python version: 3.13.1 (tags/v3.13.1:0671451, Dec 3 2024, 19:06:28) [MSC v.1942 64 bit (AMD64)] (64-bit runtime)
Python platform: Windows-11-10.0.26100-SP0
Is CUDA available: N/A
CUDA runtime version: Could not collect
CUDA_MODULE_LOADING set to: N/A
GPU models and configuration: Could not collect
Nvidia driver version: Could not collect
cuDNN version: Could not collect
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: N/A
CPU:
Name: 11th Gen Intel(R) Core(TM) i5-1135G7 @ 2.40GHz
Manufacturer: GenuineIntel
Family: 205
Architecture: 9
ProcessorType: 3
DeviceID: CPU0
CurrentClockSpeed: 2419
MaxClockSpeed: 2419
L2CacheSize: 5120
L2CacheSpeed: None
Revision: None
Versions of relevant libraries:
[pip3] numpy==2.2.1
[conda] Could not collect
cc @EikanWang @jgong5 @wenzhe-nrv @sanchitintel
| true
|
3,000,594,091
|
[nativert] Add utility function to convert strings into numbers.
|
zhxchen17
|
closed
|
[
"fb-exported",
"Merged",
"ciflow/trunk",
"topic: not user facing"
] | 29
|
CONTRIBUTOR
|
Summary:
nativert RFC: https://github.com/zhxchen17/rfcs/blob/master/RFC-0043-torch-native-runtime.md
To land the runtime into PyTorch core, we will gradually land logical parts of the code on GitHub and get each piece properly reviewed.
This diff adds a small library to convert strings into numbers which will later be used for parsing graph IR.
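For illustration only, here is a rough Python sketch of the conversion semantics such a utility typically provides (the utility in this diff is C++ and its API is not shown here; the helper name below is hypothetical):
```python
def parse_number(s: str):
    """Convert a string to an int when possible, otherwise to a float.

    Raises ValueError for strings that are not valid numbers, which is the
    strictness a graph-IR parser generally wants.
    """
    s = s.strip()
    try:
        return int(s, 10)   # exact integers first, e.g. "42" or "-7"
    except ValueError:
        return float(s)     # fall through to floats, e.g. "1.5e-3"


assert parse_number("42") == 42
assert parse_number("-1.25e2") == -125.0
```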
Differential Revision: D73133034
## Test Plan
c10 unittests
| true
|
3,000,542,953
|
Use /var/tmp instead of /tmp for torch cache directory on fbcode
|
oulgen
|
closed
|
[
"fb-exported",
"Merged",
"ciflow/trunk",
"topic: not user facing",
"module: inductor",
"ciflow/inductor"
] | 8
|
CONTRIBUTOR
|
Summary:
We've been noticing that the cache directory has been getting cleaned underneath us; let's use /var/tmp, which is supposed to be cleaned less frequently.
https://fb.workplace.com/groups/257735836456307/posts/883428143887070
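Tangentially, for anyone who wants to control the location outside fbcode, the cache directory can also be overridden explicitly via the `TORCHINDUCTOR_CACHE_DIR` environment variable; a minimal sketch (example path, assuming a working `torch.compile` toolchain):
```python
import os

# Redirect the inductor compile cache to a location that is cleaned less
# aggressively than /tmp (example path, adjust as needed).
os.environ["TORCHINDUCTOR_CACHE_DIR"] = "/var/tmp/torchinductor_cache"

import torch


@torch.compile
def double_plus_one(x):
    return x * 2 + 1


print(double_plus_one(torch.randn(4)))
```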
Test Plan: unit tests
Reviewed By: masnesral
Differential Revision: D73008663
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
| true
|
3,000,535,393
|
[ROCm] Initial plumbing for CK Gemm Perf Improvement
|
alugorey
|
open
|
[
"module: rocm",
"triaged",
"open source",
"ciflow/rocm",
"ciflow/rocm-mi300"
] | 2
|
CONTRIBUTOR
|
Re-organizes the CK gemm code into its own folder and adds logic to call CK gemm with specific templates based on the size of the input tensors.
The logic for gemm selection was pulled directly from:
https://github.com/pytorch/FBGEMM/blob/main/fbgemm_gpu/experimental/gen_ai/src/gemm/ck_extensions.hip#L197-L210
cc @jeffdaily @sunway513 @jithunnair-amd @pruthvistony @ROCmSupport @dllehr-amd @jataylo @hongxiayang @naromero77amd
| true
|
3,000,522,564
|
[ez] Don't always pass HF token to fsspec
|
ankitageorge
|
closed
|
[
"oncall: distributed",
"fb-exported",
"Merged",
"ciflow/trunk",
"topic: not user facing",
"release notes: distributed (checkpoint)"
] | 8
|
CONTRIBUTOR
|
Summary: The HF storage reader/writer component can in theory work with any back-end, so we shouldn't force the token to be passed into the fsspec reader/writer, because the specific fsspec implementation may not handle tokens. Specifically, manifold doesn't accept a token arg, but we're always passing one in, which throws an error.
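As a sketch of the intended behavior (hypothetical helper, not the actual DCP code): only forward the token when it was actually provided, so back-ends that don't understand it never see the kwarg.
```python
def make_fsspec_kwargs(token=None):
    # Some fsspec implementations (manifold, for example) reject an
    # unexpected "token" keyword, so only include it when it is set.
    kwargs = {}
    if token is not None:
        kwargs["token"] = token
    return kwargs


assert make_fsspec_kwargs() == {}
assert make_fsspec_kwargs("hf_abc") == {"token": "hf_abc"}
```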
Test Plan: signals
Differential Revision: D73130679
cc @H-Huang @awgu @wanchaol @fegin @fduwjj @wz337 @wconstab @d4l3k
| true
|
3,000,513,810
|
Inconsistent Output from nn.Conv2d with padding_mode='circular' When Using MKLDNN on a AVX2_Processor
|
alino93
|
open
|
[
"module: cpu",
"triaged",
"module: mkldnn"
] | 0
|
NONE
|
### 🐛 Describe the bug
I have encountered an issue with the `PyTorch nn.Conv2d` layer when using `padding_mode='circular'`. The output from the convolution operation differs depending on the state of `torch.backends.mkldnn.enabled`. Specifically, the outputs are inconsistent when MKLDNN is enabled versus when it is disabled on a machine with AVX2 support. The issue is not reproducible on a machine with AVX512 support.
Steps to Reproduce:
```python
import torch
import torch.nn as nn

# Set the random seed and backend configurations for deterministic behavior
torch.manual_seed(0)
torch.backends.cudnn.deterministic = True
torch.backends.cudnn.benchmark = False
# Define a convolutional layer with circular padding
conv = nn.Conv2d(13, 1, (1, 2), padding_mode='circular')
model = nn.Sequential(conv)
torchIn = torch.ones(1, 13, 32, 50)
#Enable MKLDNN and compute the output:
torch.backends.mkldnn.enabled = True
out1 = model(torchIn)
#Disable MKLDNN and compute the output again:
torch.backends.mkldnn.enabled = False
out2 = model(torchIn)
```
Compare out1 and out2. They differ on a machine with AVX2.
```
out1 = tensor([[[[-0.2386, -0.2386, -0.2386, ..., -0.2386, -0.2386, -0.2386],
[-0.2386, -0.2386, -0.2386, ..., -0.2386, -0.2386, -0.2386],
[-0.2386, -0.2386, -0.2386, ..., -0.2386, -0.2386, -0.2386],
...,
[-0.2386, -0.2386, -0.2386, ..., -0.2386, -0.2386, -0.2386],
[-0.2386, -0.2386, -0.2386, ..., -0.2386, -0.2386, -0.2386],
[-0.2386, -0.2386, -0.2386, ..., -0.2386, -0.2386, -0.2386]]]],
grad_fn=<ConvolutionBackward0>)
out2 = tensor([[[[-0.0434, -0.0434, -0.0434, ..., -0.0434, -0.0434, -0.0434],
[-0.0434, -0.0434, -0.0434, ..., -0.0434, -0.0434, -0.0434],
[-0.0434, -0.0434, -0.0434, ..., -0.0434, -0.0434, -0.0434],
...,
[-0.0434, -0.0434, -0.0434, ..., -0.0434, -0.0434, -0.0434],
[-0.0434, -0.0434, -0.0434, ..., -0.0434, -0.0434, -0.0434],
[-0.0434, -0.0434, -0.0434, ..., -0.0434, -0.0434, -0.0434]]]],
grad_fn=<ConvolutionBackward0>)
```
Using different input sizes might or might not reproduce the issue. With a smaller input size like `(1,1,29,49)` the outputs were consistent, but a larger input size like `(1,13,37,57)` is inconsistent.
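To make the shape dependence easier to see, here is a small extension of the repro (not from the original report) that sweeps a few input shapes and prints the maximum absolute difference between the MKLDNN and non-MKLDNN outputs:
```python
import torch
import torch.nn as nn

torch.manual_seed(0)
conv = nn.Conv2d(13, 1, (1, 2), padding_mode='circular')

for shape in [(1, 13, 32, 50), (1, 13, 29, 49), (1, 13, 37, 57)]:
    x = torch.ones(shape)
    torch.backends.mkldnn.enabled = True
    out_mkldnn = conv(x)
    torch.backends.mkldnn.enabled = False
    out_ref = conv(x)
    # On an affected AVX2 machine the difference is large; it should be ~0.
    print(shape, (out_mkldnn - out_ref).abs().max().item())
```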
### Versions
```
Collecting environment information...
PyTorch version: 2.5.1+cpu
Is debug build: False
CUDA used to build PyTorch: None
ROCM used to build PyTorch: N/A
OS: Microsoft Windows 11 Enterprise (10.0.26100 64-bit)
GCC version: (GCC) 7.3.0
Clang version: Could not collect
CMake version: Could not collect
Libc version: N/A
Python version: 3.11.5 (main, Aug 26 2023, 05:44:50) [MSC v.1929 64 bit (AMD64)] (64-bit runtime)
Python platform: Windows-10-10.0.26100
Is CUDA available: False
CUDA runtime version: No CUDA
CUDA_MODULE_LOADING set to: N/A
GPU models and configuration: No CUDA
Nvidia driver version: No CUDA
cuDNN version: No CUDA
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
----------------------
Name: AMD EPYC 7513 32-Core Processor
Manufacturer: AuthenticAMD
Family: 1
Architecture: 9
ProcessorType: 3
DeviceID: CPU0
CurrentClockSpeed: 2600
MaxClockSpeed: 2600
L2CacheSize: 1024
L2CacheSpeed: None
Revision: 257
----------------------
Name: AMD EPYC 7513 32-Core Processor
Manufacturer: AuthenticAMD
Family: 1
Architecture: 9
ProcessorType: 3
DeviceID: CPU1
CurrentClockSpeed: 2600
MaxClockSpeed: 2600
L2CacheSize: 1024
L2CacheSpeed: None
Revision: 257
Versions of relevant libraries:
[pip3] No relevant packages
[conda] Could not collect
```
cc @jgong5 @mingfeima @XiaobingSuper @sanchitintel @ashokei @jingxu10 @jerryzh168 @gujinghui @PenghuiCheng @jianyuh @min-jean-cho @yanbing-j @Guobing-Chen @Xia-Weiwen @snadampal
| true
|
3,000,508,514
|
inductor post_grad graphs are missing from tlparse on an FxGraphCache hit
|
bdhirsh
|
closed
|
[
"oncall: pt2"
] | 1
|
CONTRIBUTOR
|
here's an example tlparse i'm trying to debug: https://manifold.edge.x2p.facebook.net/v0/read/tree/logs/f710880237-TrainingApplication_D9U2F/attempt_2/version_0/rank_0/index.html?bucketName=tlparse_reports&apiKey=tlparse_reports-key&withPayload=1&timeoutMsec=10000
I would really like to see what the post_grad graphs look like for debugging purposes. It looks like when we hit the FxGraphCache, though, we *do* ensure that the inductor `output_code` shows up in tlparse, but we don't give the same treatment to other intermediate artifacts, like inductor's `post_grad` graphs.
cc @chauhang @penguinwu
| true
|
3,000,493,291
|
CreateBlockMask producing invalid XBLOCK shape
|
drisspg
|
open
|
[
"high priority",
"triaged",
"oncall: pt2",
"module: inductor",
"vllm-compile"
] | 0
|
CONTRIBUTOR
|
# Summary
Similar to: https://github.com/pytorch/pytorch/issues/145074
Repro:
https://github.com/vllm-project/vllm/pull/16078
``` Python
VLLM_USE_V1=1 VLLM_ATTENTION_BACKEND=FLEX_ATTENTION_VLLM_V1 VLLM_ENABLE_V1_MULTIPROCESSING=0 python benchmarks/benchmark_throughput.py --input-len 1024
```
Produces:
```Shell
[rank0]: mod = _reload_python_module(key, path, set_sys_modules=in_toplevel)
[rank0]: ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
[rank0]: File "/home/drisspg/.conda/envs/vllm_main/lib/python3.12/site-packages/torch/_inductor/runtime/compile_tasks.py", line 31, in _reload_python_module
[rank0]: exec(code, mod.__dict__, mod.__dict__)
[rank0]: File "/home/drisspg/.cache/vllm/torch_compile_cache/e26aa097bc/rank_0_0/inductor_cache/wp/cwpx74wfjy6gw7i2gfh5al7swsf7s2oykor4o5gbbihhyfperymz.py", line 11, in <module>
[rank0]: @triton_heuristics.pointwise(
[rank0]: ^^^^^^^^^^^^^^^^^^^^^^^^^^^^
[rank0]: File "/home/drisspg/.conda/envs/vllm_main/lib/python3.12/site-packages/torch/_inductor/runtime/triton_heuristics.py", line 2118, in pointwise
[rank0]: triton_config_with_settings(
[rank0]: File "/home/drisspg/.conda/envs/vllm_main/lib/python3.12/site-packages/torch/_inductor/runtime/triton_heuristics.py", line 1940, in triton_config
[rank0]: check_max_block(cfg)
[rank0]: File "/home/drisspg/.conda/envs/vllm_main/lib/python3.12/site-packages/torch/_inductor/runtime/triton_heuristics.py", line 1811, in check_max_block
[rank0]: assert val <= max_block, (
[rank0]: ^^^^^^^^^^^^^^^^
[rank0]: torch._inductor.exc.InductorError: AssertionError: 'XBLOCK' too large. Maximum: 4096. Actual: 8192.
[rank0]: Set TORCHDYNAMO_VERBOSE=1 for the internal stack trace (please do this especially if you're reporting a bug to PyTorch). For even more developer context, set TORCH_LOGS="+dynamo"
```
Kernel
```Py
import triton
import triton.language as tl
from triton.compiler.compiler import AttrsDescriptor
from torch._inductor.runtime import triton_helpers, triton_heuristics
from torch._inductor.runtime.triton_helpers import libdevice, math as tl_math
from torch._inductor.runtime.hints import AutotuneHint, ReductionHint, TileHint, DeviceProperties
triton_helpers.set_driver_to_gpu()
@triton_heuristics.pointwise(
size_hints={'x': 34359738368},
filename=__file__,
triton_meta={'signature': {'in_ptr0': '*i32', 'in_ptr1': '*i64', 'in_ptr2': '*i32', 'in_ptr3': '*i32', 'in_ptr4': '*i32', 'out_ptr0': '*i1', 'ks0': 'i64', 'ks1': 'i64', 'ks2': 'i64', 'ks3': 'i64', 'ks4': 'i64', 'ks5': 'i64', 'xnumel': 'i64'}, 'device': DeviceProperties(type='cuda', index=0, multi_processor_count=132, cc=90, major=9, regs_per_multiprocessor=65536, max_threads_per_multi_processor=2048, warp_size=32), 'constants': {}, 'configs': [AttrsDescriptor.from_dict({'arg_properties': {'tt.divisibility': (0, 1, 2, 3, 4, 5, 12), 'tt.equal_to': ()}, 'cls': 'AttrsDescriptor'})]},
inductor_meta={'grid_type': 'Grid1D', 'autotune_hints': set(), 'kernel_name': 'triton_poi_fused_constant_pad_nd_0', 'mutated_arg_names': [], 'optimize_mem': True, 'no_x_dim': False, 'num_load': 1, 'num_reduction': 0, 'backend_hash': 'A0D3A2B50857E9501D843044B01F725922648D76E6D26323B14F8A4EA4473D1B', 'are_deterministic_algorithms_enabled': False, 'assert_indirect_indexing': True, 'autotune_local_cache': True, 'autotune_pointwise': True, 'autotune_remote_cache': None, 'force_disable_caches': False, 'dynamic_scale_rblock': True, 'max_autotune': False, 'max_autotune_pointwise': False, 'min_split_scan_rblock': 256, 'spill_threshold': 16, 'store_cubin': False},
min_elem_per_thread=0
)
@triton.jit
def triton_poi_fused_constant_pad_nd_0(in_ptr0, in_ptr1, in_ptr2, in_ptr3, in_ptr4, out_ptr0, ks0, ks1, ks2, ks3, ks4, ks5, xnumel, XBLOCK : tl.constexpr):
xoffset = tl.program_id(0).to(tl.int64) * XBLOCK
xindex = xoffset + tl.arange(0, XBLOCK)[:].to(tl.int64)
xmask = tl.full([XBLOCK], True, tl.int1)
x1 = xindex // 2408832
x0 = (xindex % 2408832)
x2 = xindex
tmp0 = x1
tmp1 = ks0
tmp2 = tmp0 < tmp1
tmp3 = x0
tmp4 = tl.full([1], 2408768, tl.int64)
tmp5 = tmp3 < tmp4
tmp6 = tmp2 & tmp5
tl.device_assert((x1 < ks1) | ~(tmp6), "index out of bounds: x1 < ks1")
tmp8 = tl.load(in_ptr0 + (x1), tmp6, eviction_policy='evict_last', other=0.0)
tmp9 = tl.broadcast_to(ks2, [XBLOCK])
tmp10 = tmp8 + tmp9
tmp11 = tmp8 < 0
tmp12 = tl.where(tmp11, tmp10, tmp8)
tl.device_assert(((0 <= tl.broadcast_to(tmp12, [XBLOCK])) & (tl.broadcast_to(tmp12, [XBLOCK]) < ks2)) | ~(tmp6), "index out of bounds: 0 <= tl.broadcast_to(tmp12, [XBLOCK]) < ks2")
tl.device_assert((x0 // 16 < 150548) | ~(tmp6), "index out of bounds: x0 // 16 < 150548")
tmp15 = tl.load(in_ptr1 + (150548*tmp12 + (x0 // 16)), tmp6, eviction_policy='evict_last', other=0.0)
tmp16 = tl.full([1], 0, tl.int64)
tmp17 = tmp15 >= tmp16
tmp18 = tl.full([1], 16, tl.int64)
tmp19 = tmp15 * tmp18
tmp20 = (x2 % 16)
tmp21 = tmp19 + tmp20
tmp22 = tl.broadcast_to(ks3, [XBLOCK])
tmp23 = tmp8 + tmp22
tmp24 = tl.where(tmp11, tmp23, tmp8)
tl.device_assert(((0 <= tl.broadcast_to(tmp24, [XBLOCK])) & (tl.broadcast_to(tmp24, [XBLOCK]) < ks3)) | ~(tmp6), "index out of bounds: 0 <= tl.broadcast_to(tmp24, [XBLOCK]) < ks3")
tmp26 = tl.load(in_ptr2 + (tl.broadcast_to(tmp24, [XBLOCK])), tmp6, eviction_policy='evict_last', other=0.0)
tmp27 = tmp26.to(tl.int64)
tmp28 = tmp21 < tmp27
tmp29 = tmp17 & tmp28
tmp30 = tmp21 >= tmp16
tmp31 = tmp29 & tmp30
tmp32 = tl.broadcast_to(ks4, [XBLOCK])
tmp33 = tmp8 + tmp32
tmp34 = tl.where(tmp11, tmp33, tmp8)
tl.device_assert(((0 <= tl.broadcast_to(tmp34, [XBLOCK])) & (tl.broadcast_to(tmp34, [XBLOCK]) < ks4)) | ~(tmp6), "index out of bounds: 0 <= tl.broadcast_to(tmp34, [XBLOCK]) < ks4")
tmp36 = tl.load(in_ptr3 + (tl.broadcast_to(tmp34, [XBLOCK])), tmp6, eviction_policy='evict_last', other=0.0)
tmp37 = tmp36.to(tl.int64)
tmp38 = x1
tmp39 = tmp38 - tmp37
tmp40 = tl.broadcast_to(ks5, [XBLOCK])
tmp41 = tmp8 + tmp40
tmp42 = tl.where(tmp11, tmp41, tmp8)
tl.device_assert(((0 <= tl.broadcast_to(tmp42, [XBLOCK])) & (tl.broadcast_to(tmp42, [XBLOCK]) < ks5)) | ~(tmp6), "index out of bounds: 0 <= tl.broadcast_to(tmp42, [XBLOCK]) < ks5")
tmp44 = tl.load(in_ptr4 + (tl.broadcast_to(tmp42, [XBLOCK])), tmp6, eviction_policy='evict_last', other=0.0)
tmp45 = tmp44.to(tl.int64)
tmp46 = tmp39 + tmp45
tmp47 = tmp46 >= tmp21
tmp48 = tl.full([1], False, tl.int1)
tmp49 = tl.where(tmp31, tmp47, tmp48)
tmp50 = tl.full(tmp49.shape, False, tmp49.dtype)
tmp51 = tl.where(tmp6, tmp49, tmp50)
tl.store(out_ptr0 + (x2), tmp51, None)
```
cc @ezyang @gchanan @zou3519 @kadeng @msaroufim @chauhang @penguinwu @voznesenskym @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @chenyang78 @muchulee8 @amjames @aakhundov
| true
|
3,000,447,622
|
[MPS] Migrate `bitwise_not` to unary operator
|
malfet
|
closed
|
[
"Merged",
"topic: bug fixes",
"release notes: mps",
"ciflow/mps"
] | 6
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #150661
* __->__ #151460
That kills two birds with one stone:
- Makes the implementations more standardized (and faster for strided inputs/outputs)
- Fixes a bug in strided in-place `bitwise_not`
I.e. before this change
```python
import torch
x=torch.arange(32, device="mps")
x[::2].bitwise_not_()
print(x)
```
produced
```
tensor([ -1, -2, -3, -4, -5, -6, -7, -8, -9, -10, -11, -12, -13, -14,
-15, -16, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27,
28, 29, 30, 31], device='mps:0')
```
after this change, it generates the correct output
```
tensor([ -1, 1, -3, 3, -5, 5, -7, 7, -9, 9, -11, 11, -13, 13,
-15, 15, -17, 17, -19, 19, -21, 21, -23, 23, -25, 25, -27, 27,
-29, 29, -31, 31], device='mps:0')
```
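A quick cross-check against the CPU reference (assuming an MPS-capable machine), which passes after this change:
```python
import torch

x_cpu = torch.arange(32)
x_mps = x_cpu.to("mps")

# Apply the strided in-place op on both devices and compare.
x_cpu[::2].bitwise_not_()
x_mps[::2].bitwise_not_()
assert torch.equal(x_cpu, x_mps.cpu())
```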
| true
|
3,000,328,931
|
FlexAttention add decorator for large test cases
|
drisspg
|
closed
|
[
"Merged",
"ciflow/trunk",
"topic: not user facing",
"module: inductor",
"ciflow/inductor"
] | 4
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #151459
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
| true
|
3,000,220,727
|
Simplify symints before passing to FXGraphCache
|
jamesjwu
|
closed
|
[
"module: inductor",
"module: dynamo",
"ciflow/inductor"
] | 2
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #151458
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
| true
|
3,000,102,812
|
When distributed.destroy_process_group() is executed, new memory usage will be generated on device 0, which may cause OOM under extreme conditions and thus abnormal exit.
|
Staten-Wang
|
open
|
[
"oncall: distributed",
"triaged"
] | 1
|
NONE
|
### 🐛 Describe the bug
Adjust max_i so that mem_consumer almost completely fills GPU memory.
Under that condition, executing distributed.destroy_process_group() may cause OOM.
From what I've observed, this seems to be related to the order in which the different ranks execute destroy_process_group.
Note that I commented out the two sleep statements in the code below.
When destroy_process_group is executed in rank order, i.e. with sleep(local_rank), no OOM occurs.
When a process with a non-zero rank is forced to execute destroy_process_group first, OOM always occurs.
From my observations, the other ranks seem to create an additional memory footprint on the first device when executing destroy_process_group.
```python
import os
from time import sleep  # used by the commented-out ordering experiments below

import torch
import torch.distributed as distributed
import torch.multiprocessing as mp


def proc_main(local_rank):
torch.cuda.set_device(local_rank)
backend = 'nccl' if distributed.is_nccl_available() else 'gloo'
print(f'backend is {backend}')
distributed.init_process_group(
backend=backend,
init_method='env://',
world_size=torch.cuda.device_count(),
rank=local_rank,
)
distributed.barrier()
max_i = 5900
mem_consumer = []
i = 0
while True:
mem_consumer.append(torch.zeros(1024 * 1024, device=local_rank))
i += 1
if i > max_i:
break
distributed.barrier()
# sleep(-local_rank+5)
# sleep(local_rank)
distributed.destroy_process_group()
print(f'local_rank {local_rank} destroy_process_group ---------------------')
def main():
if distributed.is_available():
os.environ['MASTER_ADDR'] = '127.0.0.1'
os.environ['MASTER_PORT'] = '8800'
mp.spawn(proc_main, nprocs=torch.cuda.device_count())
else:
raise RuntimeError("pytorch's torch.distributed.is_available() returns false, "
"check why your pytorch does not support distributed, and fix it.")
if __name__ == '__main__':
main()
```
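To make the extra device-0 usage visible, one option (not part of the original repro) is to log free memory on device 0 from every rank right before and after the teardown:
```python
import torch


def report_device0_memory(tag: str, local_rank: int) -> None:
    # mem_get_info returns (free_bytes, total_bytes) for the given device;
    # watching device 0 from every rank shows the unexpected allocation.
    free, total = torch.cuda.mem_get_info(0)
    print(f"[rank {local_rank}] {tag}: device 0 free "
          f"{free / 2**20:.0f} / {total / 2**20:.0f} MiB")


# Usage inside proc_main, around the teardown:
#   report_device0_memory("before destroy", local_rank)
#   distributed.destroy_process_group()
#   report_device0_memory("after destroy", local_rank)
```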
### Versions
Versions of relevant libraries:
[pip3] numpy==2.2.3
[pip3] nvidia-cublas-cu12==12.4.5.8
[pip3] nvidia-cuda-cupti-cu12==12.4.127
[pip3] nvidia-cuda-nvrtc-cu12==12.4.127
[pip3] nvidia-cuda-runtime-cu12==12.4.127
[pip3] nvidia-cudnn-cu12==9.1.0.70
[pip3] nvidia-cufft-cu12==11.2.1.3
[pip3] nvidia-curand-cu12==10.3.5.147
[pip3] nvidia-cusolver-cu12==11.6.1.9
[pip3] nvidia-cusparse-cu12==12.3.1.170
[pip3] nvidia-cusparselt-cu12==0.6.2
[pip3] nvidia-nccl-cu12==2.21.5
[pip3] nvidia-nvjitlink-cu12==12.4.127
[pip3] nvidia-nvtx-cu12==12.4.127
[pip3] torch==2.6.0
[pip3] torchaudio==2.6.0
[pip3] torchvision==0.21.0
[pip3] triton==3.2.0
[conda] numpy 2.2.3 pypi_0 pypi
[conda] nvidia-cublas-cu12 12.4.5.8 pypi_0 pypi
[conda] nvidia-cuda-cupti-cu12 12.4.127 pypi_0 pypi
[conda] nvidia-cuda-nvrtc-cu12 12.4.127 pypi_0 pypi
[conda] nvidia-cuda-runtime-cu12 12.4.127 pypi_0 pypi
[conda] nvidia-cudnn-cu12 9.1.0.70 pypi_0 pypi
[conda] nvidia-cufft-cu12 11.2.1.3 pypi_0 pypi
[conda] nvidia-curand-cu12 10.3.5.147 pypi_0 pypi
[conda] nvidia-cusolver-cu12 11.6.1.9 pypi_0 pypi
[conda] nvidia-cusparse-cu12 12.3.1.170 pypi_0 pypi
[conda] nvidia-cusparselt-cu12 0.6.2 pypi_0 pypi
[conda] nvidia-nccl-cu12 2.21.5 pypi_0 pypi
[conda] nvidia-nvjitlink-cu12 12.4.127 pypi_0 pypi
[conda] nvidia-nvtx-cu12 12.4.127 pypi_0 pypi
[conda] torch 2.6.0 pypi_0 pypi
[conda] torchaudio 2.6.0 pypi_0 pypi
[conda] torchvision 0.21.0 pypi_0 pypi
[conda] triton 3.2.0 pypi_0 pypi
cc @H-Huang @awgu @wanchaol @fegin @fduwjj @wz337 @wconstab @d4l3k
| true
|
3,000,078,957
|
Add OIDC permissions to bazel workflow
|
zxiiro
|
open
|
[
"triaged",
"open source",
"Merged",
"Reverted",
"ciflow/trunk",
"topic: not user facing",
"ci-no-td"
] | 11
|
COLLABORATOR
|
Update workflow to use OIDC authentication to access AWS resources rather than assuming the runner's default role. This is part of the multicloud effort to prepare jobs to support being run in non-AWS clouds.
The JWT ID token requires `id-token: write` in order to create the token for the job. See: https://docs.github.com/en/actions/security-for-github-actions/security-hardening-your-deployments/configuring-openid-connect-in-cloud-providers#adding-permissions-settings
Ref: pytorch-fdn/multicloud-ci-infra#3
| true
|
2,999,997,751
|
Add OIDC permissions to xpu workflow
|
zxiiro
|
closed
|
[
"triaged",
"open source",
"Merged",
"ciflow/trunk",
"topic: not user facing",
"ciflow/xpu"
] | 8
|
COLLABORATOR
|
The reusable workflow requires OIDC authentication to work and is configured via its only caller, xpu.yml; however, it is also set here to clarify that it is required. This setting also flags jobs that call this workflow without the required permissions, reminding them that the permission needs to be set.
The JWT ID token requires `id-token: write` permissions, as documented here: https://docs.github.com/en/actions/security-for-github-actions/security-hardening-your-deployments/configuring-openid-connect-in-cloud-providers#adding-permissions-settings
Ref: pytorch-fdn/multicloud-ci-infra#3
| true
|
2,999,996,794
|
Broken Links GHA
|
sekyondaMeta
|
closed
|
[
"module: docs",
"release notes: releng"
] | 1
|
CONTRIBUTOR
|
Adding a GitHub Action that runs monthly and checks for broken links in the repo. If broken links exist, it creates an issue listing them.
<img width="1319" alt="Screenshot 2025-04-15 at 13 39 51" src="https://github.com/user-attachments/assets/d42ba1ee-83ce-422c-8ac4-f5267e887b52" />
cc @svekars @AlannaBurke
| true
|
2,999,927,479
|
[BE] Remove outdated script to check namespace BC
|
janeyx99
|
closed
|
[
"Merged",
"ciflow/trunk",
"topic: not user facing",
"suppress-bc-linter"
] | 6
|
CONTRIBUTOR
|
Now that we have bc_lint in CI, this script is no longer needed (nor has it ever been conclusive). I've already updated the Runbook to not need this script.
Suppressing bc_lint since this script is not shipped as part of torch; it is not user facing! For context, this script is (rarely) used by the release notes manager to ensure BC across releases. It had been broken since at least 2.6.
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #151569
* __->__ #151453
| true
|
2,999,913,231
|
Add default value for `serialization_format` in `_write_item` function for better compatibility
|
BestJuly
|
open
|
[
"oncall: distributed",
"triaged",
"open source",
"release notes: distributed (checkpoint)"
] | 2
|
NONE
|
The [861d2cc](https://github.com/pytorch/pytorch/commit/861d2cc02cce860d789cfda644a366abb95b53a5) commit by @ankitageorge introduced the `serialization_format` argument to replace the original `safe_tensors` argument in the `_write_item` function. That is fine within pytorch itself. However, for many other projects, e.g. the widely-used LLM training framework [Megatron-LM](https://github.com/NVIDIA/Megatron-LM/tree/main), which calls [_write_item](https://github.com/NVIDIA/Megatron-LM/blob/cd974f8d6bd3f528b0afc29355fce244a4addd3d/megatron/core/dist_checkpointing/strategies/filesystem_async.py#L320) directly as `_write_item(*transform_list, stream, data, write_item, storage_key)`, this change causes errors, because before this commit the previous argument `safe_tensors` had a default value while the new argument does not.
Therefore, I think this PR is a good-to-have change that is friendlier to community projects that use these torch functions. Thank you for your consideration.
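A minimal, self-contained illustration of the compatibility concern (the function names and values below are made up and do not reflect the real `_write_item` signature):
```python
# Before: callers could omit the flag entirely.
def write_item_v1(stream, data, safe_tensors=False):
    return ("v1", safe_tensors)


# After, without a default: the same call sites now raise TypeError.
def write_item_v2_breaking(stream, data, serialization_format):
    return ("v2", serialization_format)


# After, with a default: existing call sites keep working unchanged.
def write_item_v2_compatible(stream, data, serialization_format="torch_save"):
    return ("v2", serialization_format)


write_item_v1(None, b"x")              # ok
write_item_v2_compatible(None, b"x")   # ok
try:
    write_item_v2_breaking(None, b"x")  # missing required argument
except TypeError as e:
    print("breaks existing callers:", e)
```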
cc @H-Huang @awgu @wanchaol @fegin @fduwjj @wz337 @wconstab @d4l3k
| true
|
2,999,822,361
|
update fx.Interpreter error logging to check if submodules are GraphModules
|
bdhirsh
|
open
|
[
"fb-exported",
"release notes: fx",
"fx"
] | 2
|
CONTRIBUTOR
|
Summary: update fx.Interpreter error logging to check if submodules are GraphModules
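As a rough illustration of the check described in the summary (hypothetical helper, not the code in this diff): only `GraphModule` submodules have an FX graph that is useful to dump in error logs.
```python
import torch
import torch.fx as fx


def describe_module(mod: torch.nn.Module) -> str:
    # GraphModules carry a .graph worth printing; fall back to repr otherwise.
    if isinstance(mod, fx.GraphModule):
        return str(mod.graph)
    return repr(mod)


gm = fx.symbolic_trace(torch.nn.ReLU())
print(describe_module(gm))               # dumps the FX graph
print(describe_module(torch.nn.ReLU()))  # plain module, just the repr
```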
Test Plan: CI
Differential Revision: D73069078
cc @ezyang @SherlockNoMad @EikanWang @jgong5 @wenzhe-nrv
| true
|
2,999,768,273
|
Compilation of the post-training quantized model using Nvidia ModelOpt is failing with the error: Unsupported — 'inline in skipfiles: QuantLinearConvBase.quantize_weight
|
abhayaps
|
open
|
[
"triaged",
"oncall: pt2",
"module: dynamo"
] | 1
|
NONE
|
### 🐛 Describe the bug
Hi Team,
I’ve been experimenting with NVIDIA’s `modelopt` library for post-training quantization of the [Feature Tokenizer Transformer model](https://github.com/yandex-research/rtdl). I was able to successfully quantize the model using the following setup and code snippets:
---
### **Environment Setup**
Dependencies used:
```bash
!pip install setuptools==70.0
!pip install torch==2.6.0
!pip install tensorrt==10.7.0.post1 --extra-index-url https://pypi.nvidia.com
!pip install torch-tensorrt==2.6.0+cu124 --index-url https://download.pytorch.org/whl/cu124
!pip uninstall torchvision -y
!pip install torchvision
!pip install "nvidia-modelopt[all]" -U --extra-index-url https://pypi.nvidia.com
!pip install delu
```
---
### **Quantization Code**
**Section 1: Applying Quantization**
```python
import torch
import delu
import modelopt.torch.quantization as mtq
config = mtq.INT8_SMOOTHQUANT_CFG
batch_size = 500
device = torch.device('cuda')
data_loader = delu.data.IndexLoader(1000, batch_size, device=device)
model = load_model()
model = model.to(device)
def forward_loop(model):
for iteration, batch_idx in enumerate(data_loader):
x_num_batch = X_num['val'][batch_idx].to(device)
x_cat_batch = X_cat['val'][batch_idx].to(device)
model(x_num_batch, x_cat_batch)
model_q = mtq.quantize(model, config, forward_loop)
```
**Output:**
```
Inserted 57 quantizers
Smoothed 19 modules
```
---
**Section 2: Printing Quantization Summary**
```python
mtq.print_quant_summary(model_q)
```
**Output:**
```
transformer.blocks.0.attention.W_q.input_quantizer TensorQuantizer(8 bit fake per-tensor ...)
transformer.blocks.0.attention.W_q.output_quantizer TensorQuantizer(disabled)
transformer.blocks.0.attention.W_q.weight_quantizer TensorQuantizer(8 bit fake axis=0 ...)
...
```
---
### **Issue: TRT Compilation Failure**
Although quantization was successful, compiling the quantized model with TensorRT is failing. The error message indicates:
```
Unsupported: 'inline in skipfiles: QuantLinearConvBase.quantize_weight'
```
(I have attached the full stack trace and TORCH_TRACE for reference.)
---
### **Compilation Code (Works for non-quantized model)**
**Section 3: TensorRT Compilation Attempt**
```python
import torch_tensorrt
model_q = model_q.eval().cuda()
numeric_features_len = 97
cat_features_len = 7
sample_dynamic_inputs = [
torch_tensorrt.Input(
min_shape=(1, numeric_features_len),
opt_shape=(30, numeric_features_len),
max_shape=(80, numeric_features_len),
dtype=torch.float32),
torch_tensorrt.Input(
min_shape=(1, cat_features_len),
opt_shape=(30, cat_features_len),
max_shape=(80, cat_features_len),
dtype=torch.int64)
]
compiled_model_q = torch_tensorrt.compile(model_q, ir="dynamo", inputs=sample_dynamic_inputs)
torch_tensorrt.save(compiled_model_q, "trt_ptq.ep", inputs=sample_dynamic_inputs)
```
This works fine for the original (non-quantized) model but fails when applied to the quantized version.
### Error logs
```
Traceback (most recent call last):
File "/home/ec2-user/SageMaker/model_ptq.py", line 1968, in <module>
compiled_model = torch_tensorrt.compile(model, ir="dynamo", inputs=sample_dynamic_inputs)
File "/home/ec2-user/anaconda3/envs/pytorch_p310/lib/python3.10/site-packages/torch_tensorrt/_compile.py", line 286, in compile
exp_program = dynamo_trace(
File "/home/ec2-user/anaconda3/envs/pytorch_p310/lib/python3.10/site-packages/torch_tensorrt/dynamo/_tracer.py", line 83, in trace
exp_program = export(
File "/home/ec2-user/anaconda3/envs/pytorch_p310/lib/python3.10/site-packages/torch/export/__init__.py", line 368, in export
return _export(
File "/home/ec2-user/anaconda3/envs/pytorch_p310/lib/python3.10/site-packages/torch/export/_trace.py", line 1035, in wrapper
raise e
File "/home/ec2-user/anaconda3/envs/pytorch_p310/lib/python3.10/site-packages/torch/export/_trace.py", line 1008, in wrapper
ep = fn(*args, **kwargs)
File "/home/ec2-user/anaconda3/envs/pytorch_p310/lib/python3.10/site-packages/torch/export/exported_program.py", line 128, in wrapper
return fn(*args, **kwargs)
File "/home/ec2-user/anaconda3/envs/pytorch_p310/lib/python3.10/site-packages/torch/export/_trace.py", line 1970, in _export
return _export_for_training(
File "/home/ec2-user/anaconda3/envs/pytorch_p310/lib/python3.10/site-packages/torch/export/_trace.py", line 1035, in wrapper
raise e
File "/home/ec2-user/anaconda3/envs/pytorch_p310/lib/python3.10/site-packages/torch/export/_trace.py", line 1008, in wrapper
ep = fn(*args, **kwargs)
File "/home/ec2-user/anaconda3/envs/pytorch_p310/lib/python3.10/site-packages/torch/export/exported_program.py", line 128, in wrapper
return fn(*args, **kwargs)
File "/home/ec2-user/anaconda3/envs/pytorch_p310/lib/python3.10/site-packages/torch/export/_trace.py", line 1834, in _export_for_training
export_artifact = export_func( # type: ignore[operator]
File "/home/ec2-user/anaconda3/envs/pytorch_p310/lib/python3.10/site-packages/torch/export/_trace.py", line 1283, in _strict_export_lower_to_aten_ir
gm_torch_level = _export_to_torch_ir(
File "/home/ec2-user/anaconda3/envs/pytorch_p310/lib/python3.10/site-packages/torch/export/_trace.py", line 662, in _export_to_torch_ir
gm_torch_level, _ = torch._dynamo.export(
File "/home/ec2-user/anaconda3/envs/pytorch_p310/lib/python3.10/site-packages/torch/_dynamo/eval_frame.py", line 1569, in inner
result_traced = opt_f(*args, **kwargs)
File "/home/ec2-user/anaconda3/envs/pytorch_p310/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1739, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
File "/home/ec2-user/anaconda3/envs/pytorch_p310/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1750, in _call_impl
return forward_call(*args, **kwargs)
File "/home/ec2-user/anaconda3/envs/pytorch_p310/lib/python3.10/site-packages/torch/_dynamo/eval_frame.py", line 574, in _fn
return fn(*args, **kwargs)
File "/home/ec2-user/anaconda3/envs/pytorch_p310/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1739, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
File "/home/ec2-user/anaconda3/envs/pytorch_p310/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1750, in _call_impl
return forward_call(*args, **kwargs)
File "/home/ec2-user/anaconda3/envs/pytorch_p310/lib/python3.10/site-packages/torch/_dynamo/convert_frame.py", line 1380, in __call__
return self._torchdynamo_orig_callable(
File "/home/ec2-user/anaconda3/envs/pytorch_p310/lib/python3.10/site-packages/torch/_dynamo/convert_frame.py", line 547, in __call__
return _compile(
File "/home/ec2-user/anaconda3/envs/pytorch_p310/lib/python3.10/site-packages/torch/_dynamo/convert_frame.py", line 986, in _compile
guarded_code = compile_inner(code, one_graph, hooks, transform)
File "/home/ec2-user/anaconda3/envs/pytorch_p310/lib/python3.10/site-packages/torch/_dynamo/convert_frame.py", line 715, in compile_inner
return _compile_inner(code, one_graph, hooks, transform)
File "/home/ec2-user/anaconda3/envs/pytorch_p310/lib/python3.10/site-packages/torch/_utils_internal.py", line 95, in wrapper_function
return function(*args, **kwargs)
File "/home/ec2-user/anaconda3/envs/pytorch_p310/lib/python3.10/site-packages/torch/_dynamo/convert_frame.py", line 750, in _compile_inner
out_code = transform_code_object(code, transform)
File "/home/ec2-user/anaconda3/envs/pytorch_p310/lib/python3.10/site-packages/torch/_dynamo/bytecode_transformation.py", line 1361, in transform_code_object
transformations(instructions, code_options)
File "/home/ec2-user/anaconda3/envs/pytorch_p310/lib/python3.10/site-packages/torch/_dynamo/convert_frame.py", line 231, in _fn
return fn(*args, **kwargs)
File "/home/ec2-user/anaconda3/envs/pytorch_p310/lib/python3.10/site-packages/torch/_dynamo/convert_frame.py", line 662, in transform
tracer.run()
File "/home/ec2-user/anaconda3/envs/pytorch_p310/lib/python3.10/site-packages/torch/_dynamo/symbolic_convert.py", line 2868, in run
super().run()
File "/home/ec2-user/anaconda3/envs/pytorch_p310/lib/python3.10/site-packages/torch/_dynamo/symbolic_convert.py", line 1052, in run
while self.step():
File "/home/ec2-user/anaconda3/envs/pytorch_p310/lib/python3.10/site-packages/torch/_dynamo/symbolic_convert.py", line 962, in step
self.dispatch_table[inst.opcode](self, inst)
File "/home/ec2-user/anaconda3/envs/pytorch_p310/lib/python3.10/site-packages/torch/_dynamo/symbolic_convert.py", line 659, in wrapper
return inner_fn(self, inst)
File "/home/ec2-user/anaconda3/envs/pytorch_p310/lib/python3.10/site-packages/torch/_dynamo/symbolic_convert.py", line 1658, in CALL_FUNCTION
self.call_function(fn, args, {})
File "/home/ec2-user/anaconda3/envs/pytorch_p310/lib/python3.10/site-packages/torch/_dynamo/symbolic_convert.py", line 897, in call_function
self.push(fn.call_function(self, args, kwargs)) # type: ignore[arg-type]
File "/home/ec2-user/anaconda3/envs/pytorch_p310/lib/python3.10/site-packages/torch/_dynamo/variables/nn_module.py", line 443, in call_function
return tx.inline_user_function_return(
File "/home/ec2-user/anaconda3/envs/pytorch_p310/lib/python3.10/site-packages/torch/_dynamo/symbolic_convert.py", line 903, in inline_user_function_return
return InliningInstructionTranslator.inline_call(self, fn, args, kwargs)
File "/home/ec2-user/anaconda3/envs/pytorch_p310/lib/python3.10/site-packages/torch/_dynamo/symbolic_convert.py", line 3072, in inline_call
return cls.inline_call_(parent, func, args, kwargs)
File "/home/ec2-user/anaconda3/envs/pytorch_p310/lib/python3.10/site-packages/torch/_dynamo/symbolic_convert.py", line 3198, in inline_call_
tracer.run()
File "/home/ec2-user/anaconda3/envs/pytorch_p310/lib/python3.10/site-packages/torch/_dynamo/symbolic_convert.py", line 1052, in run
while self.step():
File "/home/ec2-user/anaconda3/envs/pytorch_p310/lib/python3.10/site-packages/torch/_dynamo/symbolic_convert.py", line 962, in step
self.dispatch_table[inst.opcode](self, inst)
File "/home/ec2-user/anaconda3/envs/pytorch_p310/lib/python3.10/site-packages/torch/_dynamo/symbolic_convert.py", line 659, in wrapper
return inner_fn(self, inst)
File "/home/ec2-user/anaconda3/envs/pytorch_p310/lib/python3.10/site-packages/torch/_dynamo/symbolic_convert.py", line 1736, in CALL_FUNCTION_EX
self.call_function(fn, argsvars.items, kwargsvars)
File "/home/ec2-user/anaconda3/envs/pytorch_p310/lib/python3.10/site-packages/torch/_dynamo/symbolic_convert.py", line 897, in call_function
self.push(fn.call_function(self, args, kwargs)) # type: ignore[arg-type]
File "/home/ec2-user/anaconda3/envs/pytorch_p310/lib/python3.10/site-packages/torch/_dynamo/variables/functions.py", line 378, in call_function
return super().call_function(tx, args, kwargs)
File "/home/ec2-user/anaconda3/envs/pytorch_p310/lib/python3.10/site-packages/torch/_dynamo/variables/functions.py", line 317, in call_function
return super().call_function(tx, args, kwargs)
File "/home/ec2-user/anaconda3/envs/pytorch_p310/lib/python3.10/site-packages/torch/_dynamo/variables/functions.py", line 118, in call_function
return tx.inline_user_function_return(self, [*self.self_args(), *args], kwargs)
File "/home/ec2-user/anaconda3/envs/pytorch_p310/lib/python3.10/site-packages/torch/_dynamo/symbolic_convert.py", line 903, in inline_user_function_return
return InliningInstructionTranslator.inline_call(self, fn, args, kwargs)
File "/home/ec2-user/anaconda3/envs/pytorch_p310/lib/python3.10/site-packages/torch/_dynamo/symbolic_convert.py", line 3072, in inline_call
return cls.inline_call_(parent, func, args, kwargs)
File "/home/ec2-user/anaconda3/envs/pytorch_p310/lib/python3.10/site-packages/torch/_dynamo/symbolic_convert.py", line 3198, in inline_call_
tracer.run()
File "/home/ec2-user/anaconda3/envs/pytorch_p310/lib/python3.10/site-packages/torch/_dynamo/symbolic_convert.py", line 1052, in run
while self.step():
File "/home/ec2-user/anaconda3/envs/pytorch_p310/lib/python3.10/site-packages/torch/_dynamo/symbolic_convert.py", line 962, in step
self.dispatch_table[inst.opcode](self, inst)
File "/home/ec2-user/anaconda3/envs/pytorch_p310/lib/python3.10/site-packages/torch/_dynamo/symbolic_convert.py", line 659, in wrapper
return inner_fn(self, inst)
File "/home/ec2-user/anaconda3/envs/pytorch_p310/lib/python3.10/site-packages/torch/_dynamo/symbolic_convert.py", line 1736, in CALL_FUNCTION_EX
self.call_function(fn, argsvars.items, kwargsvars)
File "/home/ec2-user/anaconda3/envs/pytorch_p310/lib/python3.10/site-packages/torch/_dynamo/symbolic_convert.py", line 897, in call_function
self.push(fn.call_function(self, args, kwargs)) # type: ignore[arg-type]
File "/home/ec2-user/anaconda3/envs/pytorch_p310/lib/python3.10/site-packages/torch/_dynamo/variables/nn_module.py", line 443, in call_function
return tx.inline_user_function_return(
File "/home/ec2-user/anaconda3/envs/pytorch_p310/lib/python3.10/site-packages/torch/_dynamo/symbolic_convert.py", line 903, in inline_user_function_return
return InliningInstructionTranslator.inline_call(self, fn, args, kwargs)
File "/home/ec2-user/anaconda3/envs/pytorch_p310/lib/python3.10/site-packages/torch/_dynamo/symbolic_convert.py", line 3072, in inline_call
return cls.inline_call_(parent, func, args, kwargs)
File "/home/ec2-user/anaconda3/envs/pytorch_p310/lib/python3.10/site-packages/torch/_dynamo/symbolic_convert.py", line 3198, in inline_call_
tracer.run()
File "/home/ec2-user/anaconda3/envs/pytorch_p310/lib/python3.10/site-packages/torch/_dynamo/symbolic_convert.py", line 1052, in run
while self.step():
File "/home/ec2-user/anaconda3/envs/pytorch_p310/lib/python3.10/site-packages/torch/_dynamo/symbolic_convert.py", line 962, in step
self.dispatch_table[inst.opcode](self, inst)
File "/home/ec2-user/anaconda3/envs/pytorch_p310/lib/python3.10/site-packages/torch/_dynamo/symbolic_convert.py", line 659, in wrapper
return inner_fn(self, inst)
File "/home/ec2-user/anaconda3/envs/pytorch_p310/lib/python3.10/site-packages/torch/_dynamo/symbolic_convert.py", line 1736, in CALL_FUNCTION_EX
self.call_function(fn, argsvars.items, kwargsvars)
File "/home/ec2-user/anaconda3/envs/pytorch_p310/lib/python3.10/site-packages/torch/_dynamo/symbolic_convert.py", line 897, in call_function
self.push(fn.call_function(self, args, kwargs)) # type: ignore[arg-type]
File "/home/ec2-user/anaconda3/envs/pytorch_p310/lib/python3.10/site-packages/torch/_dynamo/variables/functions.py", line 378, in call_function
return super().call_function(tx, args, kwargs)
File "/home/ec2-user/anaconda3/envs/pytorch_p310/lib/python3.10/site-packages/torch/_dynamo/variables/functions.py", line 317, in call_function
return super().call_function(tx, args, kwargs)
File "/home/ec2-user/anaconda3/envs/pytorch_p310/lib/python3.10/site-packages/torch/_dynamo/variables/functions.py", line 118, in call_function
return tx.inline_user_function_return(self, [*self.self_args(), *args], kwargs)
File "/home/ec2-user/anaconda3/envs/pytorch_p310/lib/python3.10/site-packages/torch/_dynamo/symbolic_convert.py", line 903, in inline_user_function_return
return InliningInstructionTranslator.inline_call(self, fn, args, kwargs)
File "/home/ec2-user/anaconda3/envs/pytorch_p310/lib/python3.10/site-packages/torch/_dynamo/symbolic_convert.py", line 3072, in inline_call
return cls.inline_call_(parent, func, args, kwargs)
File "/home/ec2-user/anaconda3/envs/pytorch_p310/lib/python3.10/site-packages/torch/_dynamo/symbolic_convert.py", line 3198, in inline_call_
tracer.run()
File "/home/ec2-user/anaconda3/envs/pytorch_p310/lib/python3.10/site-packages/torch/_dynamo/symbolic_convert.py", line 1052, in run
while self.step():
File "/home/ec2-user/anaconda3/envs/pytorch_p310/lib/python3.10/site-packages/torch/_dynamo/symbolic_convert.py", line 962, in step
self.dispatch_table[inst.opcode](self, inst)
File "/home/ec2-user/anaconda3/envs/pytorch_p310/lib/python3.10/site-packages/torch/_dynamo/symbolic_convert.py", line 659, in wrapper
return inner_fn(self, inst)
File "/home/ec2-user/anaconda3/envs/pytorch_p310/lib/python3.10/site-packages/torch/_dynamo/symbolic_convert.py", line 1658, in CALL_FUNCTION
self.call_function(fn, args, {})
File "/home/ec2-user/anaconda3/envs/pytorch_p310/lib/python3.10/site-packages/torch/_dynamo/symbolic_convert.py", line 897, in call_function
self.push(fn.call_function(self, args, kwargs)) # type: ignore[arg-type]
File "/home/ec2-user/anaconda3/envs/pytorch_p310/lib/python3.10/site-packages/torch/_dynamo/variables/nn_module.py", line 443, in call_function
return tx.inline_user_function_return(
File "/home/ec2-user/anaconda3/envs/pytorch_p310/lib/python3.10/site-packages/torch/_dynamo/symbolic_convert.py", line 903, in inline_user_function_return
return InliningInstructionTranslator.inline_call(self, fn, args, kwargs)
File "/home/ec2-user/anaconda3/envs/pytorch_p310/lib/python3.10/site-packages/torch/_dynamo/symbolic_convert.py", line 3072, in inline_call
return cls.inline_call_(parent, func, args, kwargs)
File "/home/ec2-user/anaconda3/envs/pytorch_p310/lib/python3.10/site-packages/torch/_dynamo/symbolic_convert.py", line 3198, in inline_call_
tracer.run()
File "/home/ec2-user/anaconda3/envs/pytorch_p310/lib/python3.10/site-packages/torch/_dynamo/symbolic_convert.py", line 1052, in run
while self.step():
File "/home/ec2-user/anaconda3/envs/pytorch_p310/lib/python3.10/site-packages/torch/_dynamo/symbolic_convert.py", line 962, in step
self.dispatch_table[inst.opcode](self, inst)
File "/home/ec2-user/anaconda3/envs/pytorch_p310/lib/python3.10/site-packages/torch/_dynamo/symbolic_convert.py", line 659, in wrapper
return inner_fn(self, inst)
File "/home/ec2-user/anaconda3/envs/pytorch_p310/lib/python3.10/site-packages/torch/_dynamo/symbolic_convert.py", line 1736, in CALL_FUNCTION_EX
self.call_function(fn, argsvars.items, kwargsvars)
File "/home/ec2-user/anaconda3/envs/pytorch_p310/lib/python3.10/site-packages/torch/_dynamo/symbolic_convert.py", line 897, in call_function
self.push(fn.call_function(self, args, kwargs)) # type: ignore[arg-type]
File "/home/ec2-user/anaconda3/envs/pytorch_p310/lib/python3.10/site-packages/torch/_dynamo/variables/functions.py", line 378, in call_function
return super().call_function(tx, args, kwargs)
File "/home/ec2-user/anaconda3/envs/pytorch_p310/lib/python3.10/site-packages/torch/_dynamo/variables/functions.py", line 317, in call_function
return super().call_function(tx, args, kwargs)
File "/home/ec2-user/anaconda3/envs/pytorch_p310/lib/python3.10/site-packages/torch/_dynamo/variables/functions.py", line 118, in call_function
return tx.inline_user_function_return(self, [*self.self_args(), *args], kwargs)
File "/home/ec2-user/anaconda3/envs/pytorch_p310/lib/python3.10/site-packages/torch/_dynamo/symbolic_convert.py", line 903, in inline_user_function_return
return InliningInstructionTranslator.inline_call(self, fn, args, kwargs)
File "/home/ec2-user/anaconda3/envs/pytorch_p310/lib/python3.10/site-packages/torch/_dynamo/symbolic_convert.py", line 3072, in inline_call
return cls.inline_call_(parent, func, args, kwargs)
File "/home/ec2-user/anaconda3/envs/pytorch_p310/lib/python3.10/site-packages/torch/_dynamo/symbolic_convert.py", line 3198, in inline_call_
tracer.run()
File "/home/ec2-user/anaconda3/envs/pytorch_p310/lib/python3.10/site-packages/torch/_dynamo/symbolic_convert.py", line 1052, in run
while self.step():
File "/home/ec2-user/anaconda3/envs/pytorch_p310/lib/python3.10/site-packages/torch/_dynamo/symbolic_convert.py", line 962, in step
self.dispatch_table[inst.opcode](self, inst)
File "/home/ec2-user/anaconda3/envs/pytorch_p310/lib/python3.10/site-packages/torch/_dynamo/symbolic_convert.py", line 659, in wrapper
return inner_fn(self, inst)
File "/home/ec2-user/anaconda3/envs/pytorch_p310/lib/python3.10/site-packages/torch/_dynamo/symbolic_convert.py", line 1658, in CALL_FUNCTION
self.call_function(fn, args, {})
File "/home/ec2-user/anaconda3/envs/pytorch_p310/lib/python3.10/site-packages/torch/_dynamo/symbolic_convert.py", line 897, in call_function
self.push(fn.call_function(self, args, kwargs)) # type: ignore[arg-type]
File "/home/ec2-user/anaconda3/envs/pytorch_p310/lib/python3.10/site-packages/torch/_dynamo/variables/functions.py", line 378, in call_function
return super().call_function(tx, args, kwargs)
File "/home/ec2-user/anaconda3/envs/pytorch_p310/lib/python3.10/site-packages/torch/_dynamo/variables/functions.py", line 317, in call_function
return super().call_function(tx, args, kwargs)
File "/home/ec2-user/anaconda3/envs/pytorch_p310/lib/python3.10/site-packages/torch/_dynamo/variables/functions.py", line 118, in call_function
return tx.inline_user_function_return(self, [*self.self_args(), *args], kwargs)
File "/home/ec2-user/anaconda3/envs/pytorch_p310/lib/python3.10/site-packages/torch/_dynamo/symbolic_convert.py", line 903, in inline_user_function_return
return InliningInstructionTranslator.inline_call(self, fn, args, kwargs)
File "/home/ec2-user/anaconda3/envs/pytorch_p310/lib/python3.10/site-packages/torch/_dynamo/symbolic_convert.py", line 3072, in inline_call
return cls.inline_call_(parent, func, args, kwargs)
File "/home/ec2-user/anaconda3/envs/pytorch_p310/lib/python3.10/site-packages/torch/_dynamo/symbolic_convert.py", line 3116, in inline_call_
result = InliningInstructionTranslator.check_inlineable(func)
File "/home/ec2-user/anaconda3/envs/pytorch_p310/lib/python3.10/site-packages/torch/_dynamo/symbolic_convert.py", line 3093, in check_inlineable
unimplemented(
File "/home/ec2-user/anaconda3/envs/pytorch_p310/lib/python3.10/site-packages/torch/_dynamo/exc.py", line 317, in unimplemented
raise Unsupported(msg, case_name=case_name)
torch._dynamo.exc.Unsupported: 'inline in skipfiles: QuantLinearConvBase.quantize_weight | helper /home/ec2-user/anaconda3/envs/pytorch_p310/lib/python3.10/contextlib.py, skipped according trace_rules.lookup SKIP_DIRS'
from user code:
File "/home/ec2-user/SageMaker/model_ptq.py", line 1530, in forward
x = self.transformer(x)
File "/home/ec2-user/anaconda3/envs/pytorch_p310/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1750, in _call_impl
return forward_call(*args, **kwargs)
File "/home/ec2-user/SageMaker/model_ptq.py", line 1188, in forward
x_residual, _ = layer['attention'](
File "/home/ec2-user/anaconda3/envs/pytorch_p310/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1750, in _call_impl
return forward_call(*args, **kwargs)
File "/home/ec2-user/SageMaker/model_ptq.py", line 927, in forward
q, k, v = self.W_q(x_q), self.W_k(x_kv), self.W_v(x_kv)
File "/home/ec2-user/anaconda3/envs/pytorch_p310/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1750, in _call_impl
return forward_call(*args, **kwargs)
File "/home/ec2-user/anaconda3/envs/pytorch_p310/lib/python3.10/site-packages/modelopt/torch/quantization/nn/modules/quant_module.py", line 83, in forward
with self.quantize_weight():
Set TORCH_LOGS="+dynamo" and TORCHDYNAMO_VERBOSE=1 for more information
```
[dedicated_log_torch_trace_dade9978.log](https://github.com/user-attachments/files/19778380/dedicated_log_torch_trace_dade9978.log)
### Versions
Output of python3 collect_env.py
```
[pip3] numpy==1.26.4
[pip3] nvidia-cublas-cu12==12.4.5.8
[pip3] nvidia-cuda-cupti-cu12==12.4.127
[pip3] nvidia-cuda-nvrtc-cu12==12.4.127
[pip3] nvidia-cuda-runtime-cu12==12.4.127
[pip3] nvidia-cudnn-cu12==9.1.0.70
[pip3] nvidia-cufft-cu12==11.2.1.3
[pip3] nvidia-curand-cu12==10.3.5.147
[pip3] nvidia-cusolver-cu12==11.6.1.9
[pip3] nvidia-cusparse-cu12==12.3.1.170
[pip3] nvidia-cusparselt-cu12==0.6.2
[pip3] nvidia-nccl-cu12==2.21.5
[pip3] nvidia-nvjitlink-cu12==12.4.127
[pip3] nvidia-nvtx-cu12==12.4.127
[pip3] onnx==1.16.2
[pip3] onnx_graphsurgeon==0.5.8
[pip3] onnxruntime==1.16.3
[pip3] onnxruntime_extensions==0.14.0
[pip3] torch==2.6.0+cu124
[pip3] torch-model-archiver==0.7.1b20230208
[pip3] torch_tensorrt==2.6.0+cu124
[pip3] torch-workflow-archiver==0.2.15b20240930
[pip3] torchaudio==2.2.2
[pip3] torchserve==0.11.0b20240516
[pip3] torchtext==0.17.2
[pip3] torchvision==0.21.0
[pip3] triton==3.2.0
[conda] blas 1.0 mkl conda-forge
[conda] cuda-cudart 12.1.105 0 nvidia
[conda] cuda-cupti 12.1.105 0 nvidia
[conda] cuda-libraries 12.1.0 0 nvidia
[conda] cuda-nvrtc 12.1.105 0 nvidia
[conda] cuda-nvtx 12.1.105 0 nvidia
[conda] cuda-opencl 12.6.77 0 nvidia
[conda] cuda-runtime 12.1.0 0 nvidia
[conda] ffmpeg 4.3 hf484d3e_0 pytorch
[conda] libblas 3.9.0 16_linux64_mkl conda-forge
[conda] libcblas 3.9.0 16_linux64_mkl conda-forge
[conda] libcublas 12.1.0.26 0 nvidia
[conda] libcufft 11.0.2.4 0 nvidia
[conda] libcurand 10.3.7.77 0 nvidia
[conda] libcusolver 11.4.4.55 0 nvidia
[conda] libcusparse 12.0.2.55 0 nvidia
[conda] libjpeg-turbo 2.0.0 h9bf148f_0 pytorch
[conda] liblapack 3.9.0 16_linux64_mkl conda-forge
[conda] libnvjitlink 12.1.105 0 nvidia
[conda] mkl 2022.2.1 h84fe81f_16997 conda-forge
[conda] numpy 1.26.4 pypi_0 pypi
[conda] nvidia-cublas-cu12 12.4.5.8 pypi_0 pypi
[conda] nvidia-cuda-cupti-cu12 12.4.127 pypi_0 pypi
[conda] nvidia-cuda-nvrtc-cu12 12.4.127 pypi_0 pypi
[conda] nvidia-cuda-runtime-cu12 12.4.127 pypi_0 pypi
[conda] nvidia-cudnn-cu12 9.1.0.70 pypi_0 pypi
[conda] nvidia-cufft-cu12 11.2.1.3 pypi_0 pypi
[conda] nvidia-curand-cu12 10.3.5.147 pypi_0 pypi
[conda] nvidia-cusolver-cu12 11.6.1.9 pypi_0 pypi
[conda] nvidia-cusparse-cu12 12.3.1.170 pypi_0 pypi
[conda] nvidia-cusparselt-cu12 0.6.2 pypi_0 pypi
[conda] nvidia-nccl-cu12 2.21.5 pypi_0 pypi
[conda] nvidia-nvjitlink-cu12 12.4.127 pypi_0 pypi
[conda] nvidia-nvtx-cu12 12.4.127 pypi_0 pypi
[conda] pytorch-cuda 12.1 ha16c6d3_6 pytorch
[conda] pytorch-mutex 1.0 cuda pytorch
[conda] torch 2.6.0+cu124 pypi_0 pypi
[conda] torch-model-archiver 0.7.1 py310_0 pytorch
[conda] torch-tensorrt 2.6.0+cu124 pypi_0 pypi
[conda] torch-workflow-archiver 0.2.15 py310_0 pytorch
[conda] torchaudio 2.2.2 py310_cu121 pytorch
[conda] torchserve 0.11.0 py310_0 pytorch
[conda] torchtext 0.17.2 py310 pytorch
[conda] torchvision 0.21.0 pypi_0 pypi
[conda] triton 3.2.0 pypi_0 pypi
```
cc @chauhang @penguinwu @voznesenskym @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @amjames
| true
|
2,999,759,565
|
[MPSInductor] Add pow, log2 and FloorToInt ops
|
malfet
|
closed
|
[
"Merged",
"topic: improvements",
"release notes: mps",
"ciflow/mps",
"module: inductor",
"ciflow/inductor"
] | 3
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #151449
That enables `test_pow_by_natural_log2_dynamic_shapes_mps`
Not sure why log2 printer function suffix is `OpaqueUnaryFn_log2`, rather than just `log2`
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
| true
|
2,999,672,402
|
CUDA error on RTX 5090: no kernel image is available for execution on the device
|
thecangel
|
closed
|
[] | 3
|
NONE
|
### Summary
When trying to use PyTorch with an NVIDIA RTX 5090 GPU and CUDA 12.1, I receive the following error:
`RuntimeError: CUDA error: no kernel image is available for execution on the device`
This happens even when using the latest PyTorch Nightly builds:
### System Info
- GPU: NVIDIA RTX 5090 (sm_90)
- OS: Windows 11
- Python: 3.11.7
- PyTorch: 2.3.1 +cu121 (tried Nightly as well)
- Installation method: pip with CUDA 12.1 wheel
### What I expected
PyTorch should support newer GPUs like the RTX 5090 with the latest available CUDA builds.
### Additional Info
Please let us know if official support for RTX 5090 (compute capability `sm_90` or newer) is planned in upcoming PyTorch versions or if a special build is required.
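For reference, a minimal check (assuming a CUDA build of PyTorch) shows which GPU architectures the installed wheel was compiled for and the device's compute capability; this error generally means the device's capability is missing from that list:
```python
import torch

print("torch:", torch.__version__, "| built for CUDA:", torch.version.cuda)
print("compiled arch list:", torch.cuda.get_arch_list())
if torch.cuda.is_available():
    print("device capability:", torch.cuda.get_device_capability(0))
```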
Thanks a lot in advance for your help!
| true
|
2,999,626,711
|
Allow to byteswap data when reading saved torch jit data
|
AlekseiNikiforovIBM
|
open
|
[
"oncall: jit",
"triaged",
"open source",
"release notes: jit"
] | 4
|
COLLABORATOR
|
It looks like some pickled data is endian-dependent. Byteswap such data when needed.
Add test cases.
Fixes #151428
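A minimal sketch of the kind of fix-up involved (illustrative only; the actual change is in the C++ unpickling path): data written in one byte order must be byteswapped, or decoded with an explicit byte order, on hosts of the other endianness.
```python
import struct

# Four float32 values as a little-endian writer would serialize them.
raw = struct.pack("<4f", 1.0, 2.0, 3.5, -1.25)

# Decoding with an explicit "<" byte order is correct on any host; using
# native order on a big-endian machine would read garbage instead.
values = struct.unpack("<4f", raw)
assert values == (1.0, 2.0, 3.5, -1.25)

# Byteswapping converts between the two on-disk representations.
big_endian_raw = struct.pack(">4f", *values)
assert struct.unpack(">4f", big_endian_raw) == values
```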
cc @EikanWang @jgong5 @wenzhe-nrv @sanchitintel
| true
|
2,999,571,695
|
DISABLED test_max_autotune_cuda (__main__.TestFlexAttentionCUDA)
|
pytorch-bot[bot]
|
closed
|
[
"triaged",
"module: flaky-tests",
"skipped",
"oncall: pt2",
"module: inductor"
] | 3
|
NONE
|
Platforms: linux
This test was disabled because it is failing in CI. See [recent examples](https://hud.pytorch.org/flakytest?name=test_max_autotune_cuda&suite=TestFlexAttentionCUDA&limit=100) and the most recent trunk [workflow logs](https://github.com/pytorch/pytorch/runs/40636603929).
Over the past 3 hours, it has been determined flaky in 3 workflow(s) with 6 failures and 3 successes.
**Debugging instructions (after clicking on the recent samples link):**
DO NOT ASSUME THINGS ARE OKAY IF THE CI IS GREEN. We now shield flaky tests from developers so CI will thus be green but it will be harder to parse the logs.
To find relevant log snippets:
1. Click on the workflow logs linked above
2. Click on the Test step of the job so that it is expanded. Otherwise, the grepping will not work.
3. Grep for `test_max_autotune_cuda`
4. There should be several instances run (as flaky tests are rerun in CI) from which you can study the logs.
<details><summary>Sample error message</summary>
```
Traceback (most recent call last):
File "/var/lib/jenkins/workspace/test/inductor/test_flex_attention.py", line 2134, in test_max_autotune
self.run_test(score_mod, device=device)
File "/var/lib/jenkins/workspace/test/inductor/test_flex_attention.py", line 491, in run_test
golden_out.backward(backward_grad.to(torch.float64))
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_tensor.py", line 648, in backward
torch.autograd.backward(
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/autograd/__init__.py", line 353, in backward
_engine_run_backward(
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/autograd/graph.py", line 824, in _engine_run_backward
return Variable._execution_engine.run_backward( # Calls into the C++ engine to run the backward pass
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/autograd/function.py", line 307, in apply
return user_fn(self, *args)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_higher_order_ops/flex_attention.py", line 679, in backward
) = flex_attention_backward(
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_higher_order_ops/flex_attention.py", line 132, in __call__
return super().__call__(
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_ops.py", line 490, in __call__
return wrapper()
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_ops.py", line 486, in wrapper
return self.dispatch(
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_ops.py", line 346, in dispatch
return kernel(*args, **kwargs)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_ops.py", line 320, in maybe_run_autograd
return self(*args, **kwargs)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_higher_order_ops/flex_attention.py", line 132, in __call__
return super().__call__(
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_ops.py", line 490, in __call__
return wrapper()
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_ops.py", line 486, in wrapper
return self.dispatch(
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_ops.py", line 346, in dispatch
return kernel(*args, **kwargs)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_higher_order_ops/flex_attention.py", line 873, in sdpa_dense_backward
grad_scores = grad_scores * scale
torch.OutOfMemoryError: CUDA out of memory. Tried to allocate 1024.00 MiB. GPU 0 has a total capacity of 21.95 GiB of which 698.12 MiB is free. Process 94732 has 21.26 GiB memory in use. Of the allocated memory 6.83 GiB is allocated by PyTorch, and 14.17 GiB is reserved by PyTorch but unallocated. If reserved but unallocated memory is large try setting PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True to avoid fragmentation. See documentation for Memory Management (https://pytorch.org/docs/stable/notes/cuda.html#environment-variables)
To execute this test, run the following from the base repo dir:
python test/inductor/test_flex_attention.py TestFlexAttentionCUDA.test_max_autotune_cuda
This message can be suppressed by setting PYTORCH_PRINT_REPRO_ON_FAILURE=0
```
</details>
Test file path: `inductor/test_flex_attention.py`
cc @clee2000 @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
| true
|
2,999,571,279
|
DISABLED test_non_equal_head_dims_score_mod1_bfloat16_head_dims0_cuda_bfloat16 (__main__.TestFlexAttentionCUDA)
|
pytorch-bot[bot]
|
closed
|
[
"triaged",
"module: flaky-tests",
"skipped",
"oncall: pt2",
"module: inductor"
] | 3
|
NONE
|
Platforms: linux
This test was disabled because it is failing in CI. See [recent examples](https://hud.pytorch.org/flakytest?name=test_non_equal_head_dims_score_mod1_bfloat16_head_dims0_cuda_bfloat16&suite=TestFlexAttentionCUDA&limit=100) and the most recent trunk [workflow logs](https://github.com/pytorch/pytorch/runs/40636603929).
Over the past 3 hours, it has been determined flaky in 3 workflow(s) with 6 failures and 3 successes.
**Debugging instructions (after clicking on the recent samples link):**
DO NOT ASSUME THINGS ARE OKAY IF THE CI IS GREEN. We now shield flaky tests from developers, so CI will be green, but it will be harder to parse the logs.
To find relevant log snippets:
1. Click on the workflow logs linked above
2. Click on the Test step of the job so that it is expanded. Otherwise, the grepping will not work.
3. Grep for `test_non_equal_head_dims_score_mod1_bfloat16_head_dims0_cuda_bfloat16`
4. There should be several instances run (as flaky tests are rerun in CI) from which you can study the logs.
<details><summary>Sample error message</summary>
```
Traceback (most recent call last):
File "/var/lib/jenkins/workspace/test/inductor/test_flex_attention.py", line 2159, in test_non_equal_head_dims
self.run_test(score_mod, dtype, B, H, S, qk_d, B, H, S, V_D=v_d, device=device)
File "/var/lib/jenkins/workspace/test/inductor/test_flex_attention.py", line 491, in run_test
golden_out.backward(backward_grad.to(torch.float64))
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_tensor.py", line 648, in backward
torch.autograd.backward(
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/autograd/__init__.py", line 353, in backward
_engine_run_backward(
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/autograd/graph.py", line 824, in _engine_run_backward
return Variable._execution_engine.run_backward( # Calls into the C++ engine to run the backward pass
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/autograd/function.py", line 307, in apply
return user_fn(self, *args)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_higher_order_ops/flex_attention.py", line 679, in backward
) = flex_attention_backward(
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_higher_order_ops/flex_attention.py", line 132, in __call__
return super().__call__(
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_ops.py", line 490, in __call__
return wrapper()
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_ops.py", line 486, in wrapper
return self.dispatch(
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_ops.py", line 346, in dispatch
return kernel(*args, **kwargs)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_ops.py", line 320, in maybe_run_autograd
return self(*args, **kwargs)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_higher_order_ops/flex_attention.py", line 132, in __call__
return super().__call__(
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_ops.py", line 490, in __call__
return wrapper()
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_ops.py", line 486, in wrapper
return self.dispatch(
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_ops.py", line 346, in dispatch
return kernel(*args, **kwargs)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_higher_order_ops/flex_attention.py", line 873, in sdpa_dense_backward
grad_scores = grad_scores * scale
torch.OutOfMemoryError: CUDA out of memory. Tried to allocate 1024.00 MiB. GPU 0 has a total capacity of 21.95 GiB of which 864.12 MiB is free. Process 110069 has 21.10 GiB memory in use. Of the allocated memory 6.75 GiB is allocated by PyTorch, and 13.71 GiB is reserved by PyTorch but unallocated. If reserved but unallocated memory is large try setting PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True to avoid fragmentation. See documentation for Memory Management (https://pytorch.org/docs/stable/notes/cuda.html#environment-variables)
To execute this test, run the following from the base repo dir:
python test/inductor/test_flex_attention.py TestFlexAttentionCUDA.test_non_equal_head_dims_score_mod1_bfloat16_head_dims0_cuda_bfloat16
This message can be suppressed by setting PYTORCH_PRINT_REPRO_ON_FAILURE=0
```
</details>
Test file path: `inductor/test_flex_attention.py`
cc @clee2000 @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
| true
|
2,999,571,278
|
DISABLED test_builtin_score_mods_different_block_size_float32_score_mod6_BLOCK_SIZE_128_cuda_float32 (__main__.TestFlexAttentionCUDA)
|
pytorch-bot[bot]
|
closed
|
[
"triaged",
"module: flaky-tests",
"skipped",
"oncall: pt2",
"module: inductor"
] | 3
|
NONE
|
Platforms: linux
This test was disabled because it is failing in CI. See [recent examples](https://hud.pytorch.org/flakytest?name=test_builtin_score_mods_different_block_size_float32_score_mod6_BLOCK_SIZE_128_cuda_float32&suite=TestFlexAttentionCUDA&limit=100) and the most recent trunk [workflow logs](https://github.com/pytorch/pytorch/runs/40636603929).
Over the past 3 hours, it has been determined flaky in 3 workflow(s) with 6 failures and 3 successes.
**Debugging instructions (after clicking on the recent samples link):**
DO NOT ASSUME THINGS ARE OKAY IF THE CI IS GREEN. We now shield flaky tests from developers, so CI will be green, but it will be harder to parse the logs.
To find relevant log snippets:
1. Click on the workflow logs linked above
2. Click on the Test step of the job so that it is expanded. Otherwise, the grepping will not work.
3. Grep for `test_builtin_score_mods_different_block_size_float32_score_mod6_BLOCK_SIZE_128_cuda_float32`
4. There should be several instances run (as flaky tests are rerun in CI) from which you can study the logs.
Test file path: `inductor/test_flex_attention.py`
cc @clee2000 @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
| true
|
2,999,411,947
|
isin() on MPS backend raises error with mixed dtypes, unlike CPU/CUDA
|
manueldeprada
|
closed
|
[
"triaged",
"module: mps"
] | 2
|
NONE
|
### 🐛 Describe the bug
The MPS implementation of `torch.isin()` is not consistent with the CPU or CUDA behavior when input tensors have different but compatible dtypes (e.g., `int64` and `int32`).
```
> torch.isin(torch.tensor([1,2,3], dtype=torch.int64), torch.tensor(1,dtype=torch.int32))
tensor([ True, False, False])
> torch.isin(torch.tensor([1,2,3], dtype=torch.int64).to("mps"), torch.tensor(1,dtype=torch.int32).to("mps"))
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
RuntimeError: Expected elements.dtype() == test_elements.dtype() to be true, but got false. (Could this error message be improved? If so, please report an enhancement request to PyTorch.)
```
This raises a RuntimeError on MPS due to a strict dtype check, whereas CPU and CUDA gracefully handle the dtype mismatch.
The error originates from this line in the MPS backend: https://github.com/pytorch/pytorch/blob/c7400d0026ef17fdeff9d4ceba72de2e47a18dae/aten/src/ATen/native/mps/operations/TensorCompare.mm#L297
### Expected behavior:
MPS should follow the same behavior as CPU and CUDA by allowing dtype promotion or implicit casting where safe.
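Until this is fixed, a possible user-side workaround is to promote both inputs to a common dtype before calling `isin` on MPS. A minimal sketch (the helper name is made up, and promoting to the common dtype is assumed to be safe here):
```python
import torch

def isin_mps_safe(elements, test_elements):
    # Hypothetical helper: mirror the implicit promotion CPU/CUDA perform
    # by casting both inputs to their common dtype first.
    common = torch.promote_types(elements.dtype, test_elements.dtype)
    return torch.isin(elements.to(common), test_elements.to(common))

# isin_mps_safe(torch.tensor([1, 2, 3], dtype=torch.int64, device="mps"),
#               torch.tensor(1, dtype=torch.int32, device="mps"))
```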
Tagging relevant reviewers and original PR #124896 authors for visibility: @jhavukainen @kulinseth @malfet
Thanks!
### Versions
Tested extensively on PyTorch 2.5.1 and 2.6.0.
cc @kulinseth @albanD @malfet @DenisVieriu97 @jhavukainen
| true
|
2,999,397,919
|
Some PyTorch tensor functions silently change the default locale encoding
|
youkaichao
|
open
|
[
"module: cuda",
"triaged",
"module: jiterator"
] | 1
|
COLLABORATOR
|
### 🐛 Describe the bug
A minimal reproducible example:
```python
import locale
import torch
def main():
print(locale.getpreferredencoding())
x = torch.tensor(1., device='cuda')
x.erfinv_()
print(locale.getpreferredencoding())
if __name__ == '__main__':
main()
```
Calling `erfinv_` changes the preferred encoding of the process, and a later `open` will fail if it does not specify `utf-8` encoding.
There is a known similar issue, https://github.com/pytorch/pytorch/issues/111480 , and the reason is clear: https://stackoverflow.com/questions/74044994/nvrtc-nvrtccompileprogram-is-changing-the-output-of-locale-getpreferredencoding
So this is an nvcc bug, and it is fixed in nvcc 12.7: https://github.com/NVIDIA/cuda-python/issues/29#issuecomment-2678474727
However, it is surprising here because I didn't use `torch.jit.script` at all. After some investigation, I found that `erfinv_`, even though it is called in eager mode, is jit-compiled: https://github.com/pytorch/pytorch/blob/c7400d0026ef17fdeff9d4ceba72de2e47a18dae/aten/src/ATen/native/cuda/UnarySpecialOpsKernel.cu#L285C7-L285C24
Since this is an nvcc bug, there is not much we can do on the PyTorch side. But perhaps we can document the behavior better by noting that some operators are always jit-compiled upon the first call?
In addition, I am not sure whether `nvrtc` is statically or dynamically linked. If it is dynamically linked, users may be able to fix it by installing a newer nvrtc library and putting it on `LD_LIBRARY_PATH`.
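In the meantime, a user-side mitigation sketch (the guard helper is hypothetical, and it assumes restoring `LC_CTYPE` is enough to undo whatever nvrtc changes; always passing `encoding="utf-8"` to `open` is the simpler alternative):
```python
import locale
import torch

def call_with_locale_guard(fn, *args, **kwargs):
    # Save the current LC_CTYPE setting, run the (possibly jit-compiled)
    # op, then restore the setting afterwards.
    saved = locale.setlocale(locale.LC_CTYPE)
    try:
        return fn(*args, **kwargs)
    finally:
        locale.setlocale(locale.LC_CTYPE, saved)

x = torch.tensor(1.0, device="cuda")
call_with_locale_guard(x.erfinv_)
print(locale.getpreferredencoding())  # should be unchanged
```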
### Versions
PyTorch 2.0 and 2.6, both tested
cc @ptrblck @msaroufim @eqy @jerryzh168 @mruberry
| true
|
2,999,314,370
|
Implement fast exp for AVX2 and AVX512 for the flash attention
|
timocafe
|
open
|
[
"module: cpu",
"triaged",
"open source",
"topic: not user facing",
"module: sdpa"
] | 4
|
NONE
|
**Implement fexp for AVX2 and AVX512**
Malossi et al. propose a clever exp that exploits the IEEE floating-point representation with fine control of the precision, which is especially useful for mixed-precision computation in flash attention.
- Reference: "Fast Exponential Computation on SIMD Architectures", A. Cristiano I. Malossi, Yves Ineichen, Costas Bekas, and Alessandro Curioni
- AVX2 and AVX512, float only; up to 20% faster than the current implementation for mixed-precision flash attention.
- The legacy implementation is kept for the other types.
**Precision**
1 ULP, valid only in hybrid fp32 -> fp16 mode due to the cast during the store operation in flash attention.
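For readers unfamiliar with the approach, here is a scalar Python sketch of the classic Schraudolph-style bit trick that this family of fast-exp kernels builds on; the constants are the well-known single-precision ones and give only a rough approximation (a few percent relative error), not the refined variant implemented in this PR:
```python
import numpy as np

def fast_exp_f32(x):
    # Build the IEEE-754 bit pattern of exp(x) = 2**(x / ln 2) directly:
    # 12102203 ~= 2**23 / ln(2) scales x into the exponent field, and
    # 1064866805 is the exponent bias term (127 * 2**23) minus an
    # error-minimizing correction.
    x = np.asarray(x, dtype=np.float32)
    bits = (x * np.float32(12102203.0) + np.float32(1064866805.0)).astype(np.int32)
    return bits.view(np.float32)

xs = np.linspace(-4.0, 4.0, 5, dtype=np.float32)
print(fast_exp_f32(xs))  # rough approximation
print(np.exp(xs))        # reference
```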
**Benchmark**
Machine: Xeon 6972P; results in TOPs; Python forward pass of flash attention.
Num. heads 16, head dimension 64
|Seq. L.| PT | fexp |
|-------|------|------|
| 512 | 0.8 | 1.3 |
| 1024 | 1.7 | 1.7 |
| 2048 | 6 | 6.1 |
| 4096 | 16 | 16.8 |
| 8192 | 30.6 | 32.3 |
| 16384 | 40 | 40.8 |
| 32768 | 44.9 | 51.4 |
| 65536 | 45.8 | 54.4 |
Num. heads 16, head dimension 128
|Seq. L.| PT | fexp |
|-------|------|------|
| 512 | 2.5 | 4.1 |
| 1024 | 3.3 | 4 |
| 2048 | 11.4 | 10.5 |
| 4096 | 27.4 | 28.4 |
| 8192 | 44.4 | 46 |
| 16384 | 64.2 | 68.1 |
| 32768 | 77.8 | 83 |
| 65536 | 82.1 | 88.1 |
Num. heads 16, head dimension 256
|Seq. L.| PT | fexp |
|-------|------|------|
| 512 | 1.7 | 3.4 |
| 1024 | 4.2 | 6.5 |
| 2048 | 14.6 | 16.1 |
| 4096 | 30.1 | 31.1 |
| 8192 | 60 | 62 |
| 16384 | 83.3 | 87.3 |
| 32768 | 98.7 | 106 |
| 65536 | 102.2| 107.1|
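The PR description does not include the benchmarking harness, so the following is only a guess at how such TOPs numbers could be reproduced on CPU (bfloat16 inputs and the `4*B*H*S*S*D` FLOP count are assumptions):
```python
import time
import torch
import torch.nn.functional as F

B, H, S, D = 1, 16, 4096, 64  # one of the shapes from the tables above
q, k, v = (torch.randn(B, H, S, D, dtype=torch.bfloat16) for _ in range(3))

for _ in range(3):  # warm-up
    F.scaled_dot_product_attention(q, k, v)

iters = 10
start = time.perf_counter()
for _ in range(iters):
    F.scaled_dot_product_attention(q, k, v)
elapsed = (time.perf_counter() - start) / iters

flops = 4 * B * H * S * S * D  # QK^T and PV matmuls, 2 FLOPs per MAC each
print(f"{flops / elapsed / 1e12:.2f} TOPs")
```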
cc @jgong5 @mingfeima @XiaobingSuper @sanchitintel @ashokei @jingxu10 @jerryzh168
| true
|
2,999,292,982
|
Inexact results of VMap operation due to optimization in linalg.solve
|
Flamefire
|
open
|
[
"module: numerical-stability",
"triaged",
"module: linear algebra",
"module: vmap",
"module: functorch"
] | 6
|
COLLABORATOR
|
### 🐛 Describe the bug
I've investigated #151113 and #114868 and traced the issue to `_linalg_solve_ex_out`.
At the scale that fails the test, it only happens on AMD CPUs, not on Intel CPUs. It happens with both OpenBLAS and MKL, although the observed differences vary slightly between the two.
There is an ["optimization"](https://github.com/pytorch/pytorch/blob/0a489f924db080c13bff61b9b80cc074834d2ba6/aten/src/ATen/native/BatchLinearAlgebra.cpp#L1948-L1952) using a transposed input in some cases.
TLDR: Disabling the optimization and the [other side](https://github.com/pytorch/pytorch/blob/0a489f924db080c13bff61b9b80cc074834d2ba6/aten/src/ATen/functorch/BatchRulesLinearAlgebra.cpp#L389) of it resolved both issues.
The test cases run the `linalg.(tensor_)solve` function twice: first directly, and then with the same input duplicated as a batch of 2 via `vmap`.
- `linalg_solve_ex_out` is called in both cases with the same inputs (except for the batched duplication in the vmap case)
- this first calls `linalg_lu_factor_ex_out` and then `linalg_lu_solve_out`
- The result is supposed to be the same but there are slight differences, e.g. (regular vs vmap):
```diff
- -15.8471, -12.4022, -17.0307, -12.6871, 29.1342, -13.0953, -6.9707, -14.4058, 24.0526, 5.87875, 2.9288, -7.22714,
+ -15.8453, -12.4006, -17.0288, -12.6856, 29.1309, -13.0939, -6.96982, -14.4041, 24.0499, 5.87819, 2.92857, -7.22624,
```
This then causes larger differences later on, e.g. the largest absolute difference is in an element, `492.4144 != 492.3525`, which fails the test that allows a difference of at most `1e-4`.
I think the optimization can be safely removed as it is seemingly outdated.
> Possible optimization: Compute the LU factorization of A^T if A is contiguous
> Then we solve A^T X = B with adjoint=True
> This saves a copy as A doesn't need to be copied into an F-contig matrix in lu_factor
But in `linalg_lu_factor_ex_out` the only copy is done when `!LU.is_same(A)`, and LU is a new Tensor (at least in this code path); even if it were not, I don't think `A.mT()` can be the same as LU, can it?
There is another [potential copy](https://github.com/pytorch/pytorch/blob/0a489f924db080c13bff61b9b80cc074834d2ba6/aten/src/ATen/native/BatchLinearAlgebra.cpp#L2191) being done in `linalg_lu_solve_out` conditioned on `LU.mT().is_contiguous()`. But in all tests cases of this test with and without the optimization `LU.mT()` is always contiguous.
If this is the case in general, or at least "usually", that "optimization" can be removed to ensure better results.
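A minimal sketch of the kind of comparison the failing tests perform (random inputs are an assumption; since the divergence is only reported on AMD CPUs, the printed difference may well be ~0 elsewhere):
```python
import torch

torch.manual_seed(0)
A = torch.randn(6, 6, dtype=torch.float32)
b = torch.randn(6, dtype=torch.float32)

direct = torch.linalg.solve(A, b)
# Duplicate the same system as a batch of 2 and solve it under vmap,
# which routes through the batching rule discussed above.
batched = torch.vmap(torch.linalg.solve)(A.expand(2, 6, 6), b.expand(2, 6))
print((direct - batched[0]).abs().max())
```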
### Versions
Pretty much all recent-ish PyTorch versions independent of other versions, but only on AMD CPUs
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 43 bits physical, 48 bits virtual
Byte Order: Little Endian
CPU(s): 256
On-line CPU(s) list: 0-255
Vendor ID: AuthenticAMD
Model name: AMD EPYC 7702 64-Core Processor
CPU family: 23
Model: 49
Thread(s) per core: 2
Core(s) per socket: 64
Socket(s): 2
Stepping: 0
Frequency boost: enabled
CPU(s) scaling MHz: 69%
CPU max MHz: 2183.5930
CPU min MHz: 1500.0000
BogoMIPS: 4000.22
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ht syscall nx mmxext fxsr_opt pdpe1gb rdtscp lm constant_tsc rep_good nopl nonstop_tsc cpuid extd_apicid aperfmperf pni pclmulqdq monitor ssse3 fma cx16 sse4_1 sse4_2 x2apic movbe popcnt aes xsave avx f16c rdrand lahf_lm cmp_legacy svm extapic cr8_legacy abm sse4a misalignsse 3dnowprefetch osvw ibs skinit wdt tce topoext perfctr_core perfctr_nb bpext perfctr_llc mwaitx cpb cat_l3 cdp_l3 hw_pstate ssbd mba ibrs ibpb stibp vmmcall fsgsbase bmi1 avx2 smep bmi2 cqm rdt_a rdseed adx smap clflushopt clwb sha_ni xsaveopt xsavec xgetbv1 xsaves cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local clzero irperf xsaveerptr wbnoinvd arat npt lbrv svm_lock nrip_save tsc_scale vmcb_clean flushbyasid decodeassists pausefilter pfthreshold avic v_vmsave_vmload vgif v_spec_ctrl umip rdpid overflow_recov succor smca sme sev sev_es
Virtualization: AMD-V
cc @jianyuh @nikitaved @mruberry @walterddr @xwang233 @Lezcano @zou3519 @Chillee @samdow @kshitij12345
| true
|
2,999,250,051
|
Add is_pinned to host allocator
|
guangyey
|
open
|
[
"open source",
"ciflow/trunk",
"release notes: cpp",
"ciflow/mps",
"ciflow/rocm",
"ciflow/xpu"
] | 2
|
COLLABORATOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #151439
# Motivation
This PR aims to add `is_pinned` functionality to the `HostAllocator` class, which enables centralized pinned-memory verification through calls like `at::getHostAllocator(at::kCUDA)->is_pinned(ptr)`.
Benefits include:
- Consistent host memory handling across all device backends
- Groups similar functionality together to enhance code modularity
This architecture makes the system more maintainable and extensible for future device support.
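For context, a small Python-side illustration of the property this query centralizes (assuming a CUDA build, where pinned host memory is available):
```python
import torch

pinned = torch.empty(4, pin_memory=True)  # page-locked host memory
plain = torch.empty(4)                    # ordinary pageable host memory

print(pinned.is_pinned())  # True
print(plain.is_pinned())   # False
```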
# Additional Context
It's difficult to deprecate `isPinnedPtr` in `AcceleratorHooksInterface` because some backends (such as `mps`, `hpu`, `privateuser1`) may not register their own host allocator using the `REGISTER_HOST_ALLOCATOR` mechanism, which was introduced in [#151431](https://github.com/pytorch/pytorch/pull/151431).
| true
|
2,999,214,821
|
add split sizes info dump for uneven all2all bw calculation
|
sanshang-nv
|
closed
|
[
"oncall: distributed",
"triaged",
"open source",
"Merged",
"ciflow/trunk",
"release notes: distributed (c10d)",
"skip-url-lint"
] | 24
|
CONTRIBUTOR
|
Add split sizes info to the dumped execution trace and kineto trace for bandwidth calculation of uneven all2all.
Take the input data from the case below as an example: although we know the input size of Rank-0 is 50 elements, the actual data size that Rank-0 sends out is (12+13+14)=39 elements. Rank-0 doesn't send the 1st chunk of 11 elements to peers. But we don't know this information today, because the "in split size" field is empty.


cc @H-Huang @awgu @wanchaol @fegin @fduwjj @wz337 @wconstab @d4l3k
| true
|
2,999,205,462
|
Deprecate host allocator legacy APIs
|
guangyey
|
closed
|
[
"open source",
"Merged",
"ciflow/trunk",
"topic: not user facing",
"ciflow/xpu"
] | 4
|
COLLABORATOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #151531
* #151439
* __->__ #151437
* #151431
# Motivation
This PR aims to deprecate the legacy host allocator APIs and recommend that users use the unified `getHostAllocator(device_type)` API, such as:
```cpp
at::getHostAllocator(device_type)->allocate(...);
at::getHostAllocator(device_type)->empty_cache();
at::getHostAllocator(device_type)->record_event(...);
at::getHostAllocator(device_type)->get_stats();
at::getHostAllocator(device_type)->reset_accumulated_stats();
at::getHostAllocator(device_type)->reset_peak_stats();
```
# Additional Context
TODO:
- [ ] Move `is_pinned` from `AcceleratorHooksInterface` to `HostAllocator`
- [ ] Deprecate `getPinnedMemoryAllocator` inside `AcceleratorHooksInterface` and recommend using `getHostAllocator` instead.
| true
|