| id (int64) | title (string) | user (string) | state (string) | labels (list) | comments (int64) | author_association (string) | body (string, nullable) | is_title (bool) |
|---|---|---|---|---|---|---|---|---|
2,999,191,210
|
Fix implicit state dict modification
|
tugsbayasgalan
|
closed
|
[
"fb-exported",
"Merged",
"ciflow/trunk",
"release notes: export"
] | 5
|
CONTRIBUTOR
|
Summary: Previously we were modifying ep.state_dict while running decomp, which we shouldn't do.
Test Plan: CI
Fixes: https://github.com/pytorch/pytorch/issues/151366
Differential Revision: D73102315
| true
|
2,999,179,420
|
[inductor] Reduce runtime of CPU OpInfo tests
|
benjaminglass1
|
closed
|
[
"open source",
"Merged",
"ciflow/trunk",
"topic: not user facing",
"module: inductor",
"ciflow/inductor"
] | 7
|
COLLABORATOR
|
`has_triton()` returns True if Triton is present on the system and supports _any_ backend we care about. In this case, that means we _always_ check gradients, even though the intended behavior is to skip gradients when testing on CPU.
Fixes a bug from #146911.
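As a minimal illustration of the intended behavior (a hypothetical helper, not this PR's actual code), the gradient check should be gated on the device under test rather than on `has_triton()`:
```python
# Hypothetical sketch only: gate gradient checking on the test device instead
# of on has_triton(), which is True whenever Triton supports *any* backend.
def should_check_gradients(device: str) -> bool:
    # CPU OpInfo tests are meant to skip gradient checks entirely.
    return device.startswith("cuda")
```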
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
| true
|
2,999,167,770
|
Implement fexp for avx2 and avx512
|
timocafe
|
closed
|
[
"oncall: distributed",
"module: cpu",
"module: mkldnn",
"open source",
"module: amp (automated mixed precision)",
"NNC",
"release notes: quantization",
"release notes: releng",
"module: inductor",
"module: dynamo",
"release notes: distributed (checkpoint)",
"module: compiled autograd"
] | 3
|
NONE
|
**Optimization: Flash Attention for x86 with F16 support**
Malossi et al. published a paper in 2015 that achieves fine control over precision by using a clever trick on the binary representation of floating-point numbers for the exp function, which is the bottleneck of flash attention. I implemented this fexp in the vector class and wired it into the flash attention selector (a minimal sketch of the bit trick follows after the list below).
- Implements "Fast Exponential Computation on SIMD Architectures", A. Cristiano I. Malossi, Yves Ineichen, Costas Bekas, and Alessandro Curioni.
- AVX2 and AVX512, float only; up to 20% faster than the current implementation for mixed-precision flash attention.
- Other types keep the legacy implementation.
Precision: 1 ULP; only valid in hybrid mode fp32 -> f16 due to the cast during the store operation in flash attention.
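For illustration only, here is a scalar Python sketch of the Schraudolph-style bit trick this family of fast-exp methods builds on (my own simplification, not the PR's AVX2/AVX512 code; the paper's tuned constants and error control are not reproduced):
```python
import struct

def fast_exp(x: float) -> float:
    # Build the IEEE-754 float32 bit pattern of ~exp(x) directly:
    # exp(x) = 2**(x / ln 2); scaling by 2**23 / ln 2 (~12102203) places that
    # power of two into the exponent field, and 127 << 23 is the exponent bias.
    # Rough approximation, valid only for moderate |x| (no clamping here).
    i = int(x * 12102203.0) + (127 << 23)
    return struct.unpack("f", struct.pack("I", i & 0xFFFFFFFF))[0]
```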
**Benchmark**
Machine: Xeon 6972P, results in TOPs, Python forward pass of flash attention
numhead 16, Head dimension 64
|Seq. L.| PT | fexp |
|-------|------|------|
| 512 | 0.8 | 1.3 |
| 1024 | 1.7 | 1.7 |
| 2048 | 6 | 6.1 |
| 4096 | 16 | 16.8 |
| 8192 | 30.6 | 32.3 |
| 16384 | 40 | 40.8 |
| 32768 | 44.9 | 51.4 |
| 65536 | 45.8 | 54.4 |
numhead 16, Head dimension 128
|Seq. L.| PT | fexp |
|-------|------|------|
| 512 | 2.5 | 4.1 |
| 1024 | 3.3 | 4 |
| 2048 | 11.4 | 10.5 |
| 4096 | 27.4 | 28.4 |
| 8192 | 44.4 | 46 |
| 16384 | 64.2 | 68.1 |
| 32768 | 77.8 | 83 |
| 65536 | 82.1 | 88.1 |
numhead 16, Head dimension 256
|Seq. L.| PT | fexp |
|-------|------|------|
| 512 | 1.7 | 3.4 |
| 1024 | 4.2 | 6.5 |
| 2048 | 14.6 | 16.1 |
| 4096 | 30.1 | 31.1 |
| 8192 | 60 | 62 |
| 16384 | 83.3 | 87.3 |
| 32768 | 98.7 | 106 |
| 65536 | 102.2| 107.1|
cc @H-Huang @awgu @wanchaol @fegin @fduwjj @wz337 @wconstab @d4l3k @jgong5 @mingfeima @XiaobingSuper @sanchitintel @ashokei @jingxu10 @gujinghui @PenghuiCheng @jianyuh @min-jean-cho @yanbing-j @Guobing-Chen @Xia-Weiwen @snadampal @mcarilli @ptrblck @leslie-fang-intel @EikanWang @voznesenskym @penguinwu @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov @xmfan
| true
|
2,999,017,738
|
The compare interface of the memory snapshot visualization tool “_memory_viz.py” has a bug
|
zhangxixi1993
|
open
|
[
"module: cuda",
"module: memory usage",
"triaged",
"oncall: profiler"
] | 1
|
NONE
|
### 🐛 Describe the bug
```python
def compare(before, after, format_flamegraph=format_flamegraph):
    def _seg_key(seg):
        return (seg['address'], seg['total_size'])  # !!! panic: string indices must be integers. `seg` is a string, not a dict
    def _seg_info(seg):
        return f'stream_{seg["stream"]};seg_{seg["address"]}'
    before_segs = {_seg_key(seg) for seg in before}  # `before` is a dict, so `seg` iterates over its keys, which are strings
    after_segs = {_seg_key(seg) for seg in after}
    for seg in before:
        if _seg_key(seg) not in after_segs:
            _write_blocks(f, f'only_before;{_seg_info(seg)}', seg['blocks'])
    for seg in after:
        if _seg_key(seg) not in before_segs:
            _write_blocks(f, f'only_after;{_seg_info(seg)}', seg['blocks'])
```
### Versions
Maybe it should be like this:
`before_segs = {_seg_key(seg) for seg in before["segments"]}`
`after_segs = {_seg_key(seg) for seg in after["segments"]}`
cc @ptrblck @msaroufim @eqy @robieta @chaekit @guotuofeng @guyang3532 @dzhulgakov @davidberard98 @briancoutinho @sraikund16 @sanrise
| true
|
2,998,884,920
|
[inductor] `proxy_tensor.py` throws `SyntaxError` when using `.random_`
|
shaoyuyoung
|
open
|
[
"high priority",
"triaged",
"oncall: pt2",
"module: pt2-dispatcher",
"dynamo-triage-jan2025"
] | 2
|
CONTRIBUTOR
|
### 🐛 Describe the bug
**symptom**: `proxy_tensor.py` throws `SyntaxError` when using `.random_`
**device backend**: both CPP and triton
**repro**
```python
import torch
import torch.nn as nn
import torch.nn.functional as F
from torch._inductor import config
config.fallback_random = True
torch.set_grad_enabled(False)
class Model(torch.nn.Module):
    def __init__(self):
        super().__init__()

    def forward(self, x):
        x = x * x.random_(0, 2)
        return x

model = Model()
x = torch.randn(4, 8)
inputs = [x]

def run_test(model, inputs, backend):
    torch.manual_seed(0)
    if backend != "eager":
        model = torch.compile(model, backend=backend)
    try:
        output = model(*inputs)
        print(f"succeed on {backend}")
    except Exception as e:
        print(e)
run_test(model, inputs, 'eager')
run_test(model, inputs, 'inductor')
```
### Error logs
eager
```
succeed on eager
```
inductor
```
SyntaxError: invalid syntax (proxy_tensor.py:1265 in wrapped, line 5)
```
### Versions
nightly 20250414
cc @ezyang @gchanan @zou3519 @kadeng @msaroufim @chauhang @penguinwu @bdhirsh @voznesenskym @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @chenyang78 @muchulee8 @amjames @aakhundov
| true
|
2,998,869,315
|
Add HostAllocator as the unified parent class
|
guangyey
|
closed
|
[
"open source",
"Merged",
"ciflow/trunk",
"release notes: cpp",
"ciflow/xpu",
"module: accelerator"
] | 4
|
COLLABORATOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #151531
* #151439
* #151437
* __->__ #151431
# Motivation
This PR introduces a unified parent class `HostAllocator` with the following goals:
1. Enable backend-specific host allocator registration, including support for out-of-tree backends.
2. Provide a unified and extensible API surface for host memory management across all backends, especially accelerators.
The new interface includes:
- `at::getHostAllocator()->allocate`
- `at::getHostAllocator()->empty_cache`
- `at::getHostAllocator()->record_event`
- `at::getHostAllocator()->get_stats`
- `at::getHostAllocator()->reset_accumulated_stats`
- `at::getHostAllocator()->reset_peak_stats`
# Additional Context
We plan to deprecate legacy APIs such as `at::cuda::CachingHostAllocator_emptyCache` and recommend users migrate to the new backend-specific API, for example:
```cpp
at::getHostAllocator(at::kCUDA)->empty_cache();
```
This refactor will help standardize host memory management across devices and simplify backend integration in the future.
Another key improvement I plan to make is moving the `is_pinned` functionality into the `HostAllocator` class, which enables centralized pinned memory verification through calls like `at::getHostAllocator(at::kCUDA)->is_pinned(ptr)`.
Benefits include:
- Consistent host memory handling across all device backends
- Decouple pinned memory functionality from `AcceleratorHooksInterface` in a more modular way
- Clearer separation between device memory allocation and pinned host memory management
This architecture makes the system more maintainable and extensible for future device support.
cc @albanD @EikanWang
| true
|
2,998,830,388
|
Onnx Export doesn't acknowledge dynamic_dict for 2D Tensors (Cannot use dynamic dimensions for Points and Labels in Sam)
|
FabianSchuetze
|
closed
|
[
"oncall: pt2"
] | 2
|
CONTRIBUTOR
|
### 🐛 Describe the bug
The following code shows that torch.onnx.export doesn't produce a graph with variable input sizes for 2D tensors:
```python
import torch

class Module(torch.nn.Module):
    def forward(self, points, labels, image):
        return points

if __name__ == "__main__":
    model = Module()
    image = torch.rand(1, 3, 100, 200)
    points = torch.tensor([[20, 10]])
    labels = torch.tensor([[1]])
    input_names = ["points", "labels", "image"]
    inp_args = (points, labels, image)
    dynamic_dict = {
        "points": {0: "axis_0"},
        "labels": {0: "axis_0"},
        "image": {2: "axis_2", 3: "axis_3"},
    }
    onnx_model = torch.onnx.export(
        model,
        inp_args,
        "/tmp/model.onnx",
        dynamo=True,
        report=True,
        input_names=input_names,
        dynamic_shapes=dynamic_dict,
    )
```
The output graph shows:
```
In [4]: onnx_model
Out[4]:
ONNXProgram(
model=
<
ir_version=10,
opset_imports={'pkg.onnxscript.torch_lib.common': 1, '': 18},
producer_name='pytorch',
producer_version='2.8.0.dev20250409+cu118',
domain=None,
model_version=None,
>
graph(
name=main_graph,
inputs=(
%"points"<INT64,[1,2]>,
%"labels"<INT64,[1,1]>,
%"image"<FLOAT,[1,3,axis_2,axis_3]>
),
outputs=(
%"points"<INT64,[1,2]>
),
) {
return %"points"<INT64,[1,2]>
}
,
exported_program=
ExportedProgram:
class GraphModule(torch.nn.Module):
def forward(self, points: "i64[1, 2]", labels: "i64[1, 1]", image: "f32[1, 3, s48, s41]"):
return (points,)
Graph signature:
# inputs
points: USER_INPUT
labels: USER_INPUT
image: USER_INPUT
# outputs
points: USER_OUTPUT
Range constraints: {s48: VR[2, int_oo], s41: VR[2, int_oo]}
)
```
I am surprised that dims 2 and 3 of the `image` argument are dynamic, but not dim 0 of the `points` and `labels` arguments. What can I do to use a dynamic number of points and labels?
The example above is a MWE covering problems encountered while exporting SAM2. The output size is a function of the input image size.
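Not a confirmed fix, just an untested idea to try: size-1 example dimensions are often specialized to constants during export, so exporting with more than one point/label in the example inputs and marking the shared axis with `torch.export.Dim` objects in `dynamic_shapes` might keep dim 0 dynamic. A rough sketch (assumes the same `Module` as above):
```python
import torch

# Untested sketch: example inputs with more than one entry along the axis that
# should stay dynamic, plus torch.export.Dim objects instead of plain strings.
num_points = torch.export.Dim("num_points")
points = torch.tensor([[20, 10], [30, 15]])
labels = torch.tensor([[1], [1]])
image = torch.rand(1, 3, 100, 200)
dynamic_shapes = {
    "points": {0: num_points},
    "labels": {0: num_points},
    "image": {2: torch.export.Dim("axis_2"), 3: torch.export.Dim("axis_3")},
}
```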
### Error logs
_No response_
### Versions
Collecting environment information...
/home/fabian/.local/lib/python3.12/site-packages/torch/cuda/__init__.py:174: UserWarning: CUDA initialization: CUDA unknown error - this may be due to an incorrectly set up environment, e.g. changing env variable CUDA_VISIBLE_DEVICES after program start. Setting the available devices to be zero. (Triggered internally at /pytorch/c10/cuda/CUDAFunctions.cpp:109.)
return torch._C._cuda_getDeviceCount() > 0
PyTorch version: 2.8.0.dev20250409+cu118
Is debug build: False
CUDA used to build PyTorch: 11.8
ROCM used to build PyTorch: N/A
OS: Ubuntu 24.04.2 LTS (x86_64)
GCC version: (Ubuntu 14.2.0-4ubuntu2~24.04) 14.2.0
Clang version: 19.1.1 (1ubuntu1~24.04.2)
CMake version: version 3.28.3
Libc version: glibc-2.39
Python version: 3.12.3 (main, Feb 4 2025, 14:48:35) [GCC 13.3.0] (64-bit runtime)
Python platform: Linux-6.11.0-21-generic-x86_64-with-glibc2.39
Is CUDA available: False
CUDA runtime version: 12.0.140
CUDA_MODULE_LOADING set to: N/A
GPU models and configuration: GPU 0: NVIDIA RTX 500 Ada Generation Laptop GPU
Nvidia driver version: 550.120
cuDNN version: Could not collect
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 46 bits physical, 48 bits virtual
Byte Order: Little Endian
CPU(s): 22
On-line CPU(s) list: 0-21
Vendor ID: GenuineIntel
Model name: Intel(R) Core(TM) Ultra 7 155H
CPU family: 6
Model: 170
Thread(s) per core: 2
Core(s) per socket: 16
Socket(s): 1
Stepping: 4
CPU(s) scaling MHz: 25%
CPU max MHz: 4800.0000
CPU min MHz: 400.0000
BogoMIPS: 5990.40
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc art arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc cpuid aperfmperf tsc_known_freq pni pclmulqdq dtes64 monitor ds_cpl vmx smx est tm2 ssse3 sdbg fma cx16 xtpr pdcm pcid sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm 3dnowprefetch cpuid_fault epb intel_ppin ssbd ibrs ibpb stibp ibrs_enhanced tpr_shadow flexpriority ept vpid ept_ad fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid rdseed adx smap clflushopt clwb intel_pt sha_ni xsaveopt xsavec xgetbv1 xsaves split_lock_detect user_shstk avx_vnni dtherm ida arat pln pts hwp hwp_notify hwp_act_window hwp_epp hwp_pkg_req hfi vnmi umip pku ospke waitpkg gfni vaes vpclmulqdq rdpid bus_lock_detect movdiri movdir64b fsrm md_clear serialize arch_lbr ibt flush_l1d arch_capabilities
Virtualization: VT-x
L1d cache: 544 KiB (14 instances)
L1i cache: 896 KiB (14 instances)
L2 cache: 18 MiB (9 instances)
L3 cache: 24 MiB (1 instance)
NUMA node(s): 1
NUMA node0 CPU(s): 0-21
Vulnerability Gather data sampling: Not affected
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Not affected
Vulnerability Reg file data sampling: Not affected
Vulnerability Retbleed: Not affected
Vulnerability Spec rstack overflow: Not affected
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Enhanced / Automatic IBRS; IBPB conditional; RSB filling; PBRSB-eIBRS Not affected; BHI BHI_DIS_S
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Versions of relevant libraries:
[pip3] fast_pytorch_kmeans==0.2.2
[pip3] flake8==7.1.2
[pip3] mypy==1.15.0
[pip3] mypy-extensions==1.0.0
[pip3] numpy==1.26.4
[pip3] nvidia-cublas-cu11==11.11.3.6
[pip3] nvidia-cublas-cu12==12.6.4.1
[pip3] nvidia-cuda-cupti-cu11==11.8.87
[pip3] nvidia-cuda-cupti-cu12==12.6.80
[pip3] nvidia-cuda-nvrtc-cu11==11.8.89
[pip3] nvidia-cuda-nvrtc-cu12==12.6.77
[pip3] nvidia-cuda-runtime-cu11==11.8.89
[pip3] nvidia-cuda-runtime-cu12==12.6.77
[pip3] nvidia-cudnn-cu11==9.1.0.70
[pip3] nvidia-cudnn-cu12==9.5.1.17
[pip3] nvidia-cufft-cu11==10.9.0.58
[pip3] nvidia-cufft-cu12==11.3.0.4
[pip3] nvidia-curand-cu11==10.3.0.86
[pip3] nvidia-curand-cu12==10.3.7.77
[pip3] nvidia-cusolver-cu11==11.4.1.48
[pip3] nvidia-cusolver-cu12==11.7.1.2
[pip3] nvidia-cusparse-cu11==11.7.5.86
[pip3] nvidia-cusparse-cu12==12.5.4.2
[pip3] nvidia-cusparselt-cu12==0.6.3
[pip3] nvidia-nccl-cu11==2.21.5
[pip3] nvidia-nccl-cu12==2.26.2
[pip3] nvidia-nvjitlink-cu12==12.6.85
[pip3] nvidia-nvtx-cu11==11.8.86
[pip3] nvidia-nvtx-cu12==12.6.77
[pip3] onnx==1.17.0
[pip3] onnxruntime==1.20.1
[pip3] onnxscript==0.2.2
[pip3] pytorch-triton==3.3.0+git96316ce5
[pip3] torch==2.8.0.dev20250409+cu118
[pip3] torchprofile==0.0.4
[pip3] torchvision==0.22.0.dev20250410+cu118
[pip3] triton==3.2.0
[pip3] types-flake8-2020==1.8
[pip3] types-flake8-bugbear==23.9.16
[pip3] types-flake8-builtins==2.2
[pip3] types-flake8-docstrings==1.7
[pip3] types-flake8-plugin-utils==1.3
[pip3] types-flake8-rst-docstrings==0.3
[pip3] types-flake8-simplify==0.21
[pip3] types-flake8-typing-imports==1.15
[pip3] types-mypy-extensions==1.0
[conda] Could not collect
cc @chauhang @penguinwu
| true
|
2,998,792,484
|
Update docker image names for s390x release
|
AlekseiNikiforovIBM
|
open
|
[
"open source",
"topic: not user facing",
"ciflow/binaries_wheel"
] | 1
|
COLLABORATOR
|
Disable switching tags for s390x docker images, and keep it that way until they are published. Otherwise there is no way to determine in advance which docker image names are needed for building s390x binaries.
This is a copy of https://github.com/pytorch/pytorch/pull/151426 for release branch.
| true
|
2,998,766,717
|
TorchScript Model Saved on x86 Returns NaNs When Loaded on s390x
|
KanakaPathivada
|
open
|
[
"oncall: jit"
] | 1
|
NONE
|
### 🐛 Describe the bug
I trained a simple LSTM model on the Iris dataset using PyTorch on an **x86 (little-endian)** system. I then saved the model using both `torch.jit.trace()` and `torch.jit.script()`.
When loading this `.pt` scripted model on an **s390x (big-endian)** architecture, the model loads **without error**, but the output is **entirely NaN**, even when using `set_default_load_endianness(LoadEndianness.LITTLE)` before loading.
This issue does **not occur** when loading the model on x86 again. Only the s390x platform exhibits this behavior.
**Reproduction Script**
- Save the model on x86_64
```
import torch
import torch.nn as nn
import torch.nn.functional as F
from sklearn.datasets import load_iris
from sklearn.preprocessing import StandardScaler
# Set deterministic behavior
torch.manual_seed(0)
# Define a simple LSTM model
class LSTMClassifier(nn.Module):
    def __init__(self, input_size, hidden_size, num_classes):
        super(LSTMClassifier, self).__init__()
        self.lstm = nn.LSTM(input_size, hidden_size, batch_first=True)
        self.fc = nn.Linear(hidden_size, num_classes)

    def forward(self, x):
        x = x.unsqueeze(1)  # (batch, seq_len=1, input_size)
        lstm_out, _ = self.lstm(x)
        return self.fc(lstm_out[:, -1, :])  # logits
# Load data
X = load_iris().data
X = StandardScaler().fit_transform(X)
X_tensor = torch.tensor(X, dtype=torch.float32)
# Train dummy model
model = LSTMClassifier(input_size=4, hidden_size=16, num_classes=3)
criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=0.01)
y = torch.randint(0, 3, (150,))
for epoch in range(10):
    out = model(X_tensor)
    loss = criterion(out, y)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
# Save with torch.jit.script
scripted_model = torch.jit.script(model)
torch.jit.save(scripted_model, "lstm_script_model.pt")
# Save with torch.jit.trace
traced_model = torch.jit.trace(model, torch.randn(1, 4))
torch.jit.save(traced_model, "lstm_traced_model.pt")
```
- Load the model on s390x (zLinux)
```
import torch
from torch.serialization import set_default_load_endianness
from torch.utils.serialization.config import LoadEndianness
# Set little-endian since model was saved on x86
set_default_load_endianness(LoadEndianness.LITTLE)
# Load the TorchScript model
model = torch.jit.load("lstm_script_model.pt")
model.eval()
# Test input
input_data = torch.tensor([
    [0.1, -0.2, 0.3, 0.4],
    [-1.1, 0.9, -0.3, 0.7],
    [1.2, -1.2, 1.3, -1.1],
    [0.3, 0.5, -0.4, 0.2]
], dtype=torch.float32)
# Inference
with torch.no_grad():
    output = model(input_data)
print("Output:")
print(output)
# Load the TorchScript traced model
model = torch.jit.load("lstm_traced_model.pt")  # matches the filename saved above
model.eval()
# Inference
with torch.no_grad():
    output = model(input_data)
print("Output:")
print(output)
```
**Observed Behavior**
```
tensor([[nan, nan, nan],
[nan, nan, nan],
[nan, nan, nan],
[nan, nan, nan]])
```
Even though the model loads successfully and is in eval() mode, the inference result is NaN across all outputs.
**Expected Behavior**
The model should return consistent logits as it does on x86.
```
tensor([[-4.2518, 6.0927, -1.8566],
[ 6.4866, -2.5016, -3.5421],
[-3.9846, -3.5373, 6.0734],
[-4.0324, 4.9734, -1.0101]])
```
**Request**
Would appreciate help in:
- Is this a known TorchScript limitation across different architectures (x86 vs s390x)?
- Are there any workarounds (e.g., manually loading LSTM weights)? A sketch of that idea follows below.
- Could this be a bug in how TorchScript handles LSTM/any other model's weight serialization across architectures?
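On the manual-weights workaround mentioned above, a rough, unverified sketch (my own, not a confirmed fix): ship only the `state_dict` across architectures and rebuild/re-script the module natively on s390x, so the TorchScript archive itself never crosses the endianness boundary.
```python
import torch

# On x86 (after training), save just the weights instead of the scripted model:
#   torch.save(model.state_dict(), "lstm_weights.pt")

# On s390x, rebuild the module from the same LSTMClassifier class used above,
# load the weights, and script it locally if TorchScript is needed.
model = LSTMClassifier(input_size=4, hidden_size=16, num_classes=3)
state = torch.load("lstm_weights.pt", map_location="cpu")
model.load_state_dict(state)
model.eval()
scripted = torch.jit.script(model)  # scripted natively on the big-endian host
```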
### Versions
```
PyTorch version: 2.5.0
Is debug build: False
CUDA used to build PyTorch: None
ROCM used to build PyTorch: N/A
OS: Ubuntu 22.04.5 LTS (s390x)
GCC version: (Ubuntu 11.4.0-1ubuntu1~22.04) 11.4.0
Clang version: Could not collect
CMake version: Could not collect
Libc version: glibc-2.35
Python version: 3.10.12 (main, Feb 4 2025, 14:57:36) [GCC 11.4.0] (64-bit runtime)
Is CUDA available: False
CUDA runtime version: No CUDA
CUDA_MODULE_LOADING set to: N/A
GPU models and configuration: No CUDA
Nvidia driver version: No CUDA
cuDNN version: No CUDA
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: False
CPU:
Architecture: s390x
CPU op-mode(s): 32-bit, 64-bit
Byte Order: Big Endian
Versions of relevant libraries:
[pip3] numpy==1.26.4
[pip3] torch==2.5.0
```
cc @EikanWang @jgong5 @wenzhe-nrv @sanchitintel
| true
|
2,998,760,786
|
[Easy][Building] Fix the warning of int4mm.cu when building
|
FFFrog
|
closed
|
[
"open source",
"Merged",
"ciflow/trunk",
"release notes: cuda"
] | 7
|
COLLABORATOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #151302
* __->__ #151427
As the title states.
**Before this change:**
```C++
[999/1526] Building CUDA object caffe2/CMakeFiles/torch_cuda.dir/__/aten/src/ATen/native/cuda/int4mm.cu.o
/root/Git.d/pytorch/pytorch/aten/src/ATen/native/cuda/int4mm.cu(142): warning #177-D: variable "at::native::kWarpSize" was declared but never referenced
constexpr int32_t kWarpSize = 32;
^
Remark: The warnings can be suppressed with "-diag-suppress <warning-number>"
```
| true
|
2,998,755,966
|
Update docker image names for s390x
|
AlekseiNikiforovIBM
|
closed
|
[
"open source",
"Merged",
"ciflow/trunk",
"topic: not user facing",
"ciflow/binaries_wheel"
] | 3
|
COLLABORATOR
|
Disable switching tags for s390x docker images, and keep it that way until they are published. Otherwise there is no way to determine in advance which docker image names are needed for building s390x binaries.
| true
|
2,998,752,960
|
`version.txt` mismatch with tags in release branch
|
generspooler
|
open
|
[
"module: binaries",
"oncall: releng",
"low priority",
"triaged",
"enhancement"
] | 3
|
NONE
|
### 🐛 Describe the bug
For tag v2.5.1, the git log shows the version is v2.5.1, but its `version.txt` still contains 2.5.0a0.
This mismatch makes the compiled wheel become torch-2.5.0a0+xxxxx.whl. After installing this wheel, the torch version in the environment is 2.5.0a0, which is wrong (the correct version, I think, should be 2.5.1).
After manually changing 2.5.0a0 to 2.5.1 in version.txt, the compiled wheel is named torch-2.5.1+xxxx.whl again and everything looks correct.
### Versions
Versions of relevant libraries:
[pip3] numpy==1.26.4
[pip3] torch==2.5.1+git1ca95c8
[pip3] torch-npu==2.5.1+git14fa78c
[pip3] torchvision==0.16.0
[conda] gpytorch 1.12 <pip>
[conda] modelarts-pytorch-model-server 1.0.6 <pip>
[conda] numpy 1.26.4 <pip>
[conda] optree 0.13.1 <pip>
[conda] torch 2.5.1+gita8d6afb <pip>
[conda] torch 2.1.0a0+gitf55e5f3 <pip>
[conda] torch-npu 2.1.0.post11+git9cf5934 <pip>
[conda] torchvision 0.16.0 <pip>
cc @seemethere @malfet @osalpekar @atalman
| true
|
2,998,747,886
|
[Inductor] fix torch._inductor.exc.InductorError: KeyError
|
jianyizh
|
closed
|
[
"triaged",
"open source",
"Merged",
"ciflow/trunk",
"topic: not user facing",
"module: inductor"
] | 7
|
CONTRIBUTOR
|
Fixes #151423, which is a regression after #150845
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
| true
|
2,998,739,820
|
[inductor] dynamo benchmark model dm_nfnet_f0 fails with torch._inductor.exc.InductorError: KeyError: 'op566'
|
jianyizh
|
closed
|
[
"triaged",
"module: inductor"
] | 0
|
CONTRIBUTOR
|
### 🐛 Describe the bug
After #150845, the CI log https://ossci-raw-job-status.s3.amazonaws.com/log/40563911423 shows that training the timm model dm_nfnet_f0 fails.
2025-04-15T10:14:44.3504086Z loading model: 0it [00:00, ?it/s]
2025-04-15T10:14:44.3504436Z loading model: 0it [00:02, ?it/s]
2025-04-15T10:14:44.3504749Z cuda train dm_nfnet_f0
2025-04-15T10:15:27.7967917Z ERROR:common:Backend dynamo failed in warmup()
2025-04-15T10:15:27.7968391Z Traceback (most recent call last):
2025-04-15T10:15:27.7969315Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/common.py", line 2533, in warmup
2025-04-15T10:15:27.7969857Z fn(model, example_inputs)
2025-04-15T10:15:27.7970432Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_dynamo/eval_frame.py", line 671, in _fn
2025-04-15T10:15:27.7971149Z raise e.remove_dynamo_frames() from None # see TORCHDYNAMO_VERBOSE=1
2025-04-15T10:15:27.7971921Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_inductor/compile_fx.py", line 766, in _compile_fx_inner
2025-04-15T10:15:27.7972662Z raise InductorError(e, currentframe()).with_traceback(
2025-04-15T10:15:27.7973394Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_inductor/compile_fx.py", line 750, in _compile_fx_inner
2025-04-15T10:15:27.7974193Z mb_compiled_graph = fx_codegen_and_compile(
2025-04-15T10:15:27.7974913Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_inductor/compile_fx.py", line 1356, in fx_codegen_and_compile
2025-04-15T10:15:27.7975793Z return scheme.codegen_and_compile(gm, example_inputs, inputs_to_check, graph_kwargs)
2025-04-15T10:15:27.7976659Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_inductor/compile_fx.py", line 1245, in codegen_and_compile
2025-04-15T10:15:27.7977368Z compiled_module = graph.compile_to_module()
2025-04-15T10:15:27.7978026Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_inductor/graph.py", line 2205, in compile_to_module
2025-04-15T10:15:27.7978679Z return self._compile_to_module()
2025-04-15T10:15:27.7979316Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_inductor/graph.py", line 2213, in _compile_to_module
2025-04-15T10:15:27.7980099Z self.codegen_with_cpp_wrapper() if self.cpp_wrapper else self.codegen()
2025-04-15T10:15:27.7980826Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_inductor/graph.py", line 2150, in codegen
2025-04-15T10:15:27.7981441Z self.scheduler.codegen()
2025-04-15T10:15:27.7982034Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_inductor/scheduler.py", line 4309, in codegen
2025-04-15T10:15:27.7982649Z else self._codegen(self.nodes)
2025-04-15T10:15:27.7983255Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_inductor/scheduler.py", line 4445, in _codegen
2025-04-15T10:15:27.7983909Z self.get_backend(device).codegen_node(node)
2025-04-15T10:15:27.7984680Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_inductor/codegen/cuda_combined_scheduling.py", line 104, in codegen_node
2025-04-15T10:15:27.7985469Z return self._triton_scheduling.codegen_node(node)
2025-04-15T10:15:27.7986172Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_inductor/codegen/simd.py", line 1318, in codegen_node
2025-04-15T10:15:27.7986838Z return self.codegen_node_schedule(
2025-04-15T10:15:27.7987542Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_inductor/codegen/simd.py", line 1359, in codegen_node_schedule
2025-04-15T10:15:27.7988558Z self.codegen_node_schedule_with_kernel(node_schedule, kernel)
2025-04-15T10:15:27.7989397Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_inductor/codegen/simd.py", line 1439, in codegen_node_schedule_with_kernel
2025-04-15T10:15:27.7990150Z node.decide_inplace_update()
2025-04-15T10:15:27.7990800Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_inductor/scheduler.py", line 550, in decide_inplace_update
2025-04-15T10:15:27.7991506Z and single_index_in_fused_node(input_buf)
2025-04-15T10:15:27.7992484Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_inductor/scheduler.py", line 484, in single_index_in_fused_node
2025-04-15T10:15:27.7993249Z buf_to_be_inplaced.scheduler.get_fused_node(user_node)
2025-04-15T10:15:27.7993944Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_inductor/scheduler.py", line 2989, in get_fused_node
2025-04-15T10:15:27.7996226Z return self.name_to_fused_node[node.get_first_name()]
2025-04-15T10:15:27.7996734Z torch._inductor.exc.InductorError: KeyError: 'op566'
2025-04-15T10:15:27.7997104Z warmup_failed
2025-04-15T10:15:31.8469481Z Run failed with return code: 255
2025-04-15T10:15:31.8469861Z Output: None
2025-04-15T10:15:31.8470092Z Error: None
2025-04-15T10:15:35.6188853Z
### Versions
This error is on current main, 40ce4fb24a536d175348df876f61956d4945778e, see https://hud.pytorch.org/benchmark/timm_models/inductor_no_cudagraphs?dashboard=torchinductor&startTime=Sat,%2001%20Mar%202025%2007:14:57%20GMT&stopTime=Wed,%2016%20Apr%202025%2007:14:57%20GMT&granularity=hour&mode=training&model=dm_nfnet_f0&dtype=amp&deviceName=cuda%20(a100)&lBranch=main&lCommit=ccfce9ae868131cc87dd99584ab79e316c14e7d4&rBranch=main&rCommit=ccfce9ae868131cc87dd99584ab79e316c14e7d4
cc @chauhang @penguinwu @voznesenskym @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @chenyang78 @kadeng @muchulee8 @amjames @aakhundov
| true
|
2,998,709,439
|
[WIP] multi graph compile
|
bobrenjc93
|
closed
|
[
"module: dynamo",
"ciflow/inductor",
"release notes: AO frontend"
] | 2
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #151422
* #151421
* #151499
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @chauhang @amjames
| true
|
2,998,709,332
|
[ez] Rewrite comment to be more friendly to non haskellers
|
bobrenjc93
|
open
|
[
"topic: not user facing",
"ciflow/inductor"
] | 2
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #151422
* __->__ #151421
* #151499
| true
|
2,998,708,850
|
Enable skipIfXpu to support class-level skipping
|
EikanWang
|
open
|
[
"open source",
"topic: not user facing",
"ciflow/xpu"
] | 2
|
COLLABORATOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #151420
* #151315
| true
|
2,998,639,351
|
DISABLED test_builtin_score_mods_different_block_size_float32_score_mod7_BLOCK_SIZE_128_cuda_float32 (__main__.TestFlexAttentionCUDA)
|
pytorch-bot[bot]
|
closed
|
[
"triaged",
"module: flaky-tests",
"skipped",
"oncall: pt2",
"module: inductor"
] | 1
|
NONE
|
Platforms: linux
This test was disabled because it is failing in CI. See [recent examples](https://hud.pytorch.org/flakytest?name=test_builtin_score_mods_different_block_size_float32_score_mod7_BLOCK_SIZE_128_cuda_float32&suite=TestFlexAttentionCUDA&limit=100) and the most recent trunk [workflow logs](https://github.com/pytorch/pytorch/runs/40622264900).
Over the past 3 hours, it has been determined flaky in 4 workflow(s) with 8 failures and 4 successes.
**Debugging instructions (after clicking on the recent samples link):**
DO NOT ASSUME THINGS ARE OKAY IF THE CI IS GREEN. We now shield flaky tests from developers so CI will thus be green but it will be harder to parse the logs.
To find relevant log snippets:
1. Click on the workflow logs linked above
2. Click on the Test step of the job so that it is expanded. Otherwise, the grepping will not work.
3. Grep for `test_builtin_score_mods_different_block_size_float32_score_mod7_BLOCK_SIZE_128_cuda_float32`
4. There should be several instances run (as flaky tests are rerun in CI) from which you can study the logs.
<details><summary>Sample error message</summary>
```
Traceback (most recent call last):
File "/var/lib/jenkins/workspace/test/inductor/test_flex_attention.py", line 1201, in test_builtin_score_mods_different_block_size
self.run_test(score_mod, dtype, block_mask=block_mask, device=device)
File "/var/lib/jenkins/workspace/test/inductor/test_flex_attention.py", line 491, in run_test
golden_out.backward(backward_grad.to(torch.float64))
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_tensor.py", line 648, in backward
torch.autograd.backward(
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/autograd/__init__.py", line 353, in backward
_engine_run_backward(
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/autograd/graph.py", line 824, in _engine_run_backward
return Variable._execution_engine.run_backward( # Calls into the C++ engine to run the backward pass
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/autograd/function.py", line 307, in apply
return user_fn(self, *args)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_higher_order_ops/flex_attention.py", line 679, in backward
) = flex_attention_backward(
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_higher_order_ops/flex_attention.py", line 132, in __call__
return super().__call__(
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_ops.py", line 490, in __call__
return wrapper()
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_ops.py", line 486, in wrapper
return self.dispatch(
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_ops.py", line 346, in dispatch
return kernel(*args, **kwargs)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_ops.py", line 320, in maybe_run_autograd
return self(*args, **kwargs)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_higher_order_ops/flex_attention.py", line 132, in __call__
return super().__call__(
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_ops.py", line 490, in __call__
return wrapper()
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_ops.py", line 486, in wrapper
return self.dispatch(
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_ops.py", line 346, in dispatch
return kernel(*args, **kwargs)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_higher_order_ops/flex_attention.py", line 881, in sdpa_dense_backward
grad_scores = torch.where(
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_dynamo/_trace_wrapped_higher_order_op.py", line 142, in __torch_function__
return func(*args, **(kwargs or {}))
torch.OutOfMemoryError: CUDA out of memory. Tried to allocate 1024.00 MiB. GPU 0 has a total capacity of 21.95 GiB of which 832.12 MiB is free. Process 57746 has 21.13 GiB memory in use. Of the allocated memory 6.77 GiB is allocated by PyTorch, and 14.10 GiB is reserved by PyTorch but unallocated. If reserved but unallocated memory is large try setting PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True to avoid fragmentation. See documentation for Memory Management (https://pytorch.org/docs/stable/notes/cuda.html#environment-variables)
To execute this test, run the following from the base repo dir:
python test/inductor/test_flex_attention.py TestFlexAttentionCUDA.test_builtin_score_mods_different_block_size_float32_score_mod7_BLOCK_SIZE_128_cuda_float32
This message can be suppressed by setting PYTORCH_PRINT_REPRO_ON_FAILURE=0
```
</details>
Test file path: `inductor/test_flex_attention.py`
cc @clee2000 @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
| true
|
2,998,639,262
|
DISABLED test_builtin_score_mods_different_block_size_float16_score_mod2_BLOCK_SIZE_128_cuda_float16 (__main__.TestFlexAttentionCUDA)
|
pytorch-bot[bot]
|
closed
|
[
"triaged",
"module: flaky-tests",
"skipped",
"oncall: pt2",
"module: inductor"
] | 2
|
NONE
|
Platforms: linux
This test was disabled because it is failing in CI. See [recent examples](https://hud.pytorch.org/flakytest?name=test_builtin_score_mods_different_block_size_float16_score_mod2_BLOCK_SIZE_128_cuda_float16&suite=TestFlexAttentionCUDA&limit=100) and the most recent trunk [workflow logs](https://github.com/pytorch/pytorch/runs/40622264900).
Over the past 3 hours, it has been determined flaky in 4 workflow(s) with 8 failures and 4 successes.
**Debugging instructions (after clicking on the recent samples link):**
DO NOT ASSUME THINGS ARE OKAY IF THE CI IS GREEN. We now shield flaky tests from developers so CI will thus be green but it will be harder to parse the logs.
To find relevant log snippets:
1. Click on the workflow logs linked above
2. Click on the Test step of the job so that it is expanded. Otherwise, the grepping will not work.
3. Grep for `test_builtin_score_mods_different_block_size_float16_score_mod2_BLOCK_SIZE_128_cuda_float16`
4. There should be several instances run (as flaky tests are rerun in CI) from which you can study the logs.
Test file path: `inductor/test_flex_attention.py`
cc @clee2000 @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
| true
|
2,998,639,153
|
DISABLED test_non_equal_head_dims_score_mod1_float16_head_dims1_cuda_float16 (__main__.TestFlexAttentionCUDA)
|
pytorch-bot[bot]
|
closed
|
[
"triaged",
"module: flaky-tests",
"skipped",
"oncall: pt2",
"module: inductor"
] | 1
|
NONE
|
Platforms: linux
This test was disabled because it is failing in CI. See [recent examples](https://hud.pytorch.org/flakytest?name=test_non_equal_head_dims_score_mod1_float16_head_dims1_cuda_float16&suite=TestFlexAttentionCUDA&limit=100) and the most recent trunk [workflow logs](https://github.com/pytorch/pytorch/runs/40622167717).
Over the past 3 hours, it has been determined flaky in 4 workflow(s) with 8 failures and 4 successes.
**Debugging instructions (after clicking on the recent samples link):**
DO NOT ASSUME THINGS ARE OKAY IF THE CI IS GREEN. We now shield flaky tests from developers so CI will thus be green but it will be harder to parse the logs.
To find relevant log snippets:
1. Click on the workflow logs linked above
2. Click on the Test step of the job so that it is expanded. Otherwise, the grepping will not work.
3. Grep for `test_non_equal_head_dims_score_mod1_float16_head_dims1_cuda_float16`
4. There should be several instances run (as flaky tests are rerun in CI) from which you can study the logs.
Test file path: `inductor/test_flex_attention.py`
cc @clee2000 @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
| true
|
2,998,639,152
|
DISABLED test_non_equal_head_dims_score_mod2_bfloat16_head_dims0_cuda_bfloat16 (__main__.TestFlexAttentionCUDA)
|
pytorch-bot[bot]
|
closed
|
[
"triaged",
"module: flaky-tests",
"skipped",
"oncall: pt2",
"module: inductor"
] | 4
|
NONE
|
Platforms: linux
This test was disabled because it is failing in CI. See [recent examples](https://hud.pytorch.org/flakytest?name=test_non_equal_head_dims_score_mod2_bfloat16_head_dims0_cuda_bfloat16&suite=TestFlexAttentionCUDA&limit=100) and the most recent trunk [workflow logs](https://github.com/pytorch/pytorch/runs/40623309256).
Over the past 3 hours, it has been determined flaky in 4 workflow(s) with 8 failures and 4 successes.
**Debugging instructions (after clicking on the recent samples link):**
DO NOT ASSUME THINGS ARE OKAY IF THE CI IS GREEN. We now shield flaky tests from developers so CI will thus be green but it will be harder to parse the logs.
To find relevant log snippets:
1. Click on the workflow logs linked above
2. Click on the Test step of the job so that it is expanded. Otherwise, the grepping will not work.
3. Grep for `test_non_equal_head_dims_score_mod2_bfloat16_head_dims0_cuda_bfloat16`
4. There should be several instances run (as flaky tests are rerun in CI) from which you can study the logs.
Test file path: `inductor/test_flex_attention.py`
cc @clee2000 @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
| true
|
2,998,638,940
|
DISABLED test_builtin_score_mods_dynamic_float16_score_mask_mod1_cuda_float16 (__main__.TestFlexAttentionCUDA)
|
pytorch-bot[bot]
|
closed
|
[
"triaged",
"module: flaky-tests",
"skipped",
"oncall: pt2",
"module: inductor"
] | 1
|
NONE
|
Platforms: linux
This test was disabled because it is failing in CI. See [recent examples](https://hud.pytorch.org/flakytest?name=test_builtin_score_mods_dynamic_float16_score_mask_mod1_cuda_float16&suite=TestFlexAttentionCUDA&limit=100) and the most recent trunk [workflow logs](https://github.com/pytorch/pytorch/runs/40623309256).
Over the past 3 hours, it has been determined flaky in 4 workflow(s) with 8 failures and 4 successes.
**Debugging instructions (after clicking on the recent samples link):**
DO NOT ASSUME THINGS ARE OKAY IF THE CI IS GREEN. We now shield flaky tests from developers so CI will thus be green but it will be harder to parse the logs.
To find relevant log snippets:
1. Click on the workflow logs linked above
2. Click on the Test step of the job so that it is expanded. Otherwise, the grepping will not work.
3. Grep for `test_builtin_score_mods_dynamic_float16_score_mask_mod1_cuda_float16`
4. There should be several instances run (as flaky tests are rerun in CI) from which you can study the logs.
<details><summary>Sample error message</summary>
```
Traceback (most recent call last):
File "/var/lib/jenkins/workspace/test/inductor/test_flex_attention.py", line 1156, in test_builtin_score_mods_dynamic
self.run_dynamic_test(score_mask_mod, dtype, device=device)
File "/var/lib/jenkins/workspace/test/inductor/test_flex_attention.py", line 831, in run_dynamic_test
golden_out1.backward(backward_grad1.to(torch.float64))
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_tensor.py", line 648, in backward
torch.autograd.backward(
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/autograd/__init__.py", line 353, in backward
_engine_run_backward(
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/autograd/graph.py", line 824, in _engine_run_backward
return Variable._execution_engine.run_backward( # Calls into the C++ engine to run the backward pass
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/autograd/function.py", line 307, in apply
return user_fn(self, *args)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_higher_order_ops/flex_attention.py", line 679, in backward
) = flex_attention_backward(
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_higher_order_ops/flex_attention.py", line 132, in __call__
return super().__call__(
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_ops.py", line 490, in __call__
return wrapper()
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_ops.py", line 486, in wrapper
return self.dispatch(
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_ops.py", line 346, in dispatch
return kernel(*args, **kwargs)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_ops.py", line 320, in maybe_run_autograd
return self(*args, **kwargs)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_higher_order_ops/flex_attention.py", line 132, in __call__
return super().__call__(
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_ops.py", line 490, in __call__
return wrapper()
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_ops.py", line 486, in wrapper
return self.dispatch(
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_ops.py", line 346, in dispatch
return kernel(*args, **kwargs)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_higher_order_ops/flex_attention.py", line 873, in sdpa_dense_backward
grad_scores = grad_scores * scale
torch.OutOfMemoryError: CUDA out of memory. Tried to allocate 1024.00 MiB. GPU 0 has a total capacity of 21.95 GiB of which 128.12 MiB is free. Process 126453 has 21.82 GiB memory in use. Of the allocated memory 6.68 GiB is allocated by PyTorch, and 14.88 GiB is reserved by PyTorch but unallocated. If reserved but unallocated memory is large try setting PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True to avoid fragmentation. See documentation for Memory Management (https://pytorch.org/docs/stable/notes/cuda.html#environment-variables)
To execute this test, run the following from the base repo dir:
python test/inductor/test_flex_attention.py TestFlexAttentionCUDA.test_builtin_score_mods_dynamic_float16_score_mask_mod1_cuda_float16
This message can be suppressed by setting PYTORCH_PRINT_REPRO_ON_FAILURE=0
```
</details>
Test file path: `inductor/test_flex_attention.py`
cc @clee2000 @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
| true
|
2,998,638,859
|
DISABLED test_builtin_score_mods_different_block_size_float32_score_mod3_BLOCK_SIZE_128_cuda_float32 (__main__.TestFlexAttentionCUDA)
|
pytorch-bot[bot]
|
closed
|
[
"triaged",
"module: flaky-tests",
"skipped",
"oncall: pt2",
"module: inductor"
] | 1
|
NONE
|
Platforms: linux
This test was disabled because it is failing in CI. See [recent examples](https://hud.pytorch.org/flakytest?name=test_builtin_score_mods_different_block_size_float32_score_mod3_BLOCK_SIZE_128_cuda_float32&suite=TestFlexAttentionCUDA&limit=100) and the most recent trunk [workflow logs](https://github.com/pytorch/pytorch/runs/40623309256).
Over the past 3 hours, it has been determined flaky in 4 workflow(s) with 8 failures and 4 successes.
**Debugging instructions (after clicking on the recent samples link):**
DO NOT ASSUME THINGS ARE OKAY IF THE CI IS GREEN. We now shield flaky tests from developers so CI will thus be green but it will be harder to parse the logs.
To find relevant log snippets:
1. Click on the workflow logs linked above
2. Click on the Test step of the job so that it is expanded. Otherwise, the grepping will not work.
3. Grep for `test_builtin_score_mods_different_block_size_float32_score_mod3_BLOCK_SIZE_128_cuda_float32`
4. There should be several instances run (as flaky tests are rerun in CI) from which you can study the logs.
<details><summary>Sample error message</summary>
```
Traceback (most recent call last):
File "/var/lib/jenkins/workspace/test/inductor/test_flex_attention.py", line 1201, in test_builtin_score_mods_different_block_size
self.run_test(score_mod, dtype, block_mask=block_mask, device=device)
File "/var/lib/jenkins/workspace/test/inductor/test_flex_attention.py", line 491, in run_test
golden_out.backward(backward_grad.to(torch.float64))
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_tensor.py", line 648, in backward
torch.autograd.backward(
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/autograd/__init__.py", line 353, in backward
_engine_run_backward(
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/autograd/graph.py", line 824, in _engine_run_backward
return Variable._execution_engine.run_backward( # Calls into the C++ engine to run the backward pass
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/autograd/function.py", line 307, in apply
return user_fn(self, *args)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_higher_order_ops/flex_attention.py", line 679, in backward
) = flex_attention_backward(
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_higher_order_ops/flex_attention.py", line 132, in __call__
return super().__call__(
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_ops.py", line 490, in __call__
return wrapper()
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_ops.py", line 486, in wrapper
return self.dispatch(
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_ops.py", line 346, in dispatch
return kernel(*args, **kwargs)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_ops.py", line 320, in maybe_run_autograd
return self(*args, **kwargs)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_higher_order_ops/flex_attention.py", line 132, in __call__
return super().__call__(
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_ops.py", line 490, in __call__
return wrapper()
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_ops.py", line 486, in wrapper
return self.dispatch(
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_ops.py", line 346, in dispatch
return kernel(*args, **kwargs)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_higher_order_ops/flex_attention.py", line 873, in sdpa_dense_backward
grad_scores = grad_scores * scale
torch.OutOfMemoryError: CUDA out of memory. Tried to allocate 1024.00 MiB. GPU 0 has a total capacity of 21.95 GiB of which 396.12 MiB is free. Process 90683 has 21.56 GiB memory in use. Of the allocated memory 6.77 GiB is allocated by PyTorch, and 14.52 GiB is reserved by PyTorch but unallocated. If reserved but unallocated memory is large try setting PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True to avoid fragmentation. See documentation for Memory Management (https://pytorch.org/docs/stable/notes/cuda.html#environment-variables)
To execute this test, run the following from the base repo dir:
python test/inductor/test_flex_attention.py TestFlexAttentionCUDA.test_builtin_score_mods_different_block_size_float32_score_mod3_BLOCK_SIZE_128_cuda_float32
This message can be suppressed by setting PYTORCH_PRINT_REPRO_ON_FAILURE=0
```
</details>
Test file path: `inductor/test_flex_attention.py`
cc @clee2000 @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
| true
|
2,998,638,761
|
DISABLED test_builtin_score_mods_different_block_size_float32_score_mod6_BLOCK_SIZE3_cuda_float32 (__main__.TestFlexAttentionCUDA)
|
pytorch-bot[bot]
|
closed
|
[
"triaged",
"module: flaky-tests",
"skipped",
"oncall: pt2",
"module: inductor"
] | 1
|
NONE
|
Platforms: linux
This test was disabled because it is failing in CI. See [recent examples](https://hud.pytorch.org/flakytest?name=test_builtin_score_mods_different_block_size_float32_score_mod6_BLOCK_SIZE3_cuda_float32&suite=TestFlexAttentionCUDA&limit=100) and the most recent trunk [workflow logs](https://github.com/pytorch/pytorch/runs/40622264900).
Over the past 3 hours, it has been determined flaky in 4 workflow(s) with 8 failures and 4 successes.
**Debugging instructions (after clicking on the recent samples link):**
DO NOT ASSUME THINGS ARE OKAY IF THE CI IS GREEN. We now shield flaky tests from developers so CI will thus be green but it will be harder to parse the logs.
To find relevant log snippets:
1. Click on the workflow logs linked above
2. Click on the Test step of the job so that it is expanded. Otherwise, the grepping will not work.
3. Grep for `test_builtin_score_mods_different_block_size_float32_score_mod6_BLOCK_SIZE3_cuda_float32`
4. There should be several instances run (as flaky tests are rerun in CI) from which you can study the logs.
Test file path: `inductor/test_flex_attention.py`
cc @clee2000 @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
| true
|
2,998,638,671
|
DISABLED test_builtin_score_mods_different_block_size_float32_score_mod3_BLOCK_SIZE_256_cuda_float32 (__main__.TestFlexAttentionCUDA)
|
pytorch-bot[bot]
|
closed
|
[
"triaged",
"module: flaky-tests",
"skipped",
"oncall: pt2",
"module: inductor"
] | 1
|
NONE
|
Platforms: linux
This test was disabled because it is failing in CI. See [recent examples](https://hud.pytorch.org/flakytest?name=test_builtin_score_mods_different_block_size_float32_score_mod3_BLOCK_SIZE_256_cuda_float32&suite=TestFlexAttentionCUDA&limit=100) and the most recent trunk [workflow logs](https://github.com/pytorch/pytorch/runs/40622167717).
Over the past 3 hours, it has been determined flaky in 4 workflow(s) with 8 failures and 4 successes.
**Debugging instructions (after clicking on the recent samples link):**
DO NOT ASSUME THINGS ARE OKAY IF THE CI IS GREEN. We now shield flaky tests from developers so CI will thus be green but it will be harder to parse the logs.
To find relevant log snippets:
1. Click on the workflow logs linked above
2. Click on the Test step of the job so that it is expanded. Otherwise, the grepping will not work.
3. Grep for `test_builtin_score_mods_different_block_size_float32_score_mod3_BLOCK_SIZE_256_cuda_float32`
4. There should be several instances run (as flaky tests are rerun in CI) from which you can study the logs.
Test file path: `inductor/test_flex_attention.py`
cc @clee2000 @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
| true
|
2,998,631,323
|
[Easy][torch.Event] Fix and improve the docs of torch.Event
|
FFFrog
|
closed
|
[
"open source",
"Merged",
"Reverted",
"ciflow/trunk",
"release notes: python_frontend",
"ci-no-td"
] | 20
|
COLLABORATOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #151226
* __->__ #151411
* #151221
* #151404
**Changes:**
- Add detailed function and class signatures
- Fix the incorrect display of torch.Event.wait and torch.Event.record
| true
|
2,998,490,346
|
[invoke_subgraph] fake tensor caching for None output
|
anijain2305
|
closed
|
[
"ciflow/inductor"
] | 2
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #151620
* #150704
* __->__ #151410
* #151409
* #151756
* #151633
* #151477
* #151357
* #151256
* #151330
| true
|
2,998,490,213
|
[invoke_subgraph] Compile time traces
|
anijain2305
|
closed
|
[
"Merged",
"ciflow/trunk",
"topic: not user facing"
] | 15
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #152062
* #151961
* #151957
* #151477
* #151633
* __->__ #151409
| true
|
2,998,484,963
|
Fix #150472 torch.library.custom_op doesn't handle single element tuples returns
|
jijiew
|
open
|
[
"release notes: composability"
] | 3
|
CONTRIBUTOR
|
Fixes #150472
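For context, a minimal sketch of the kind of op the title describes (hypothetical op name and schema, based only on the issue title, not on the actual fix): a `torch.library.custom_op` whose return annotation is a single-element tuple. Before the fix this case may error or be mishandled; it is shown only to illustrate the scenario.
```python
import torch
from torch import Tensor

# Hypothetical example op ("mylib::scale" is made up): the single-element
# tuple return is the case reported in #150472.
@torch.library.custom_op("mylib::scale", mutates_args=())
def scale(x: Tensor) -> tuple[Tensor]:
    return (x * 2,)

out = scale(torch.randn(3))  # expected: a one-element tuple of Tensors
```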
| true
|
2,998,439,928
|
[ez] Make relaxed constraint error message more user friendly
|
bobrenjc93
|
closed
|
[
"Merged",
"Reverted",
"ciflow/trunk",
"release notes: fx",
"topic: not user facing",
"fx",
"module: dynamo",
"ciflow/inductor",
"ci-no-td"
] | 29
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #151407
Fixes #151356
cc @ezyang @SherlockNoMad @EikanWang @jgong5 @wenzhe-nrv @voznesenskym @penguinwu @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @jiayisunx @chenyang78 @kadeng @chauhang @amjames
Differential Revision: D73833827
| true
|
2,998,387,348
|
[Cutlass] Add epilogue inputs/outputs to def_kernel
|
mlazos
|
closed
|
[
"Merged",
"ciflow/trunk",
"module: inductor",
"ciflow/inductor",
"release notes: inductor"
] | 3
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #152815
* #150907
* __->__ #151406
* #150906
* #152733
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
| true
|
2,998,387,274
|
[Cutlass] Fixes for e2e compilation in arg rendering
|
mlazos
|
closed
|
[
"Merged",
"module: inductor",
"ciflow/inductor",
"release notes: inductor"
] | 6
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #150910
* #152390
* #150909
* #150908
* #150907
* #151406
* #150906
* #151713
* __->__ #151405
* #150905
* #152306
* #152305
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
| true
|
2,998,348,719
|
[Easy] Add more check for elapsedTime of torch.xxx.Event and torch.Event
|
FFFrog
|
closed
|
[
"open source",
"Merged",
"Reverted",
"ciflow/trunk",
"topic: not user facing",
"ci-no-td"
] | 48
|
COLLABORATOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #151226
* #151411
* #151221
* __->__ #151404
As the title states.
**Changes:**
- Add **record**, **query**, and **enable_timing** checks
- Add related tests
| true
|
2,998,274,857
|
Refine host caching allocator
|
guangyey
|
closed
|
[
"open source",
"Merged",
"ciflow/trunk",
"release notes: cpp",
"topic: improvements",
"ciflow/xpu"
] | 3
|
COLLABORATOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #151439
* #151437
* #151431
* __->__ #151403
# Motivation
This stack of PRs aims to generalize and improve PyTorch host allocator code.
This PR introduces a `DeleterFnPtr` template parameter to `CachingHostAllocatorInterface` to resolve circular dependency issues. This change allows for better code reuse and simplifies the implementation of host allocators.
# Additional Context
TODO:
- [ ] Unify host allocator related API
- [ ] Deprecate those device-specific legacy API
- [ ] Move `is_pinned` to host allocator
| true
|
2,998,262,649
|
RuntimeError: curPos <= (kUpperBound - kAppendInterval) INTERNAL ASSERT FAILED at "../torch/csrc/jit/ir/ir.cpp":698, please report a bug to PyTorch.
|
Mingbo-Lee
|
open
|
[
"module: onnx",
"triaged"
] | 0
|
NONE
|
### 🐛 Describe the bug
I encountered an internal assertion failure when trying to export my custom Compound ResNet18 model to ONNX format. The error occurs during the PyTorch tracing process when handling my custom tropical algebra convolution layers (CompoundMinMaxPlusSumConv2d2p), specifically during the execution of the maxplus_conv2d function.
```python
model = compound_type1_resnet18()
dummy_input = torch.randn(BATCH_SIZE, 3, 32, 32)
torch.onnx.export(
model,
dummy_input,
"resnet18_pytorch.onnx",
input_names=["input"],
output_names=["output"],
dynamic_axes={"input": {0: "batch_size"}, "output": {0: "batch_size"}},
opset_version=11,
)
```
I think the error comes from my CUDA operator implementation. The operators are custom ones that I defined and implemented myself (including forward and backward propagation), and they can train the neural network normally. However, when I try to convert the model to ONNX format, these operators appear to be incompatible with PyTorch's tracing, which causes the error.
Error Message:
```
File ~/projects/tropical-embedding-cuda/tropical_algebra/layers/functions.py:95, in MaxplusConv2dFunction.forward(ctx, input, weight, stride)
[93](https://vscode-remote+ssh-002dremote-002bluoyegroup.vscode-resource.vscode-cdn.net/home/limingbo/projects/trop-quant/demo/onnx/~/projects/tropical-embedding-cuda/tropical_algebra/layers/functions.py:93) ctx.save_for_backward(input, weight)
[94](https://vscode-remote+ssh-002dremote-002bluoyegroup.vscode-resource.vscode-cdn.net/home/limingbo/projects/trop-quant/demo/onnx/~/projects/tropical-embedding-cuda/tropical_algebra/layers/functions.py:94) ctx.stride = stride
---> [95](https://vscode-remote+ssh-002dremote-002bluoyegroup.vscode-resource.vscode-cdn.net/home/limingbo/projects/trop-quant/demo/onnx/~/projects/tropical-embedding-cuda/tropical_algebra/layers/functions.py:95) return _C.maxplus_conv2d_forward(input, weight, stride)
RuntimeError: curPos <= (kUpperBound - kAppendInterval) INTERNAL ASSERT FAILED at "../torch/csrc/jit/ir/ir.cpp":698, please report a bug to PyTorch.
```
I'd like to provide some code snippets to help with debugging:
minplus_maxplus_cuda.cu
```cpp
__global__ void maxplus_conv2d_forward_kernel(
const float *__restrict__ input,
const float *__restrict__ weight,
float *__restrict__ output,
int batch_size, int in_channels, int out_channels,
int in_height, int in_width,
int kernel_height, int kernel_width,
int out_height, int out_width,
int stride_h, int stride_w)
{
const int batch_idx = blockIdx.x;
const int out_ch_idx = blockIdx.y;
const int out_h_idx = (blockIdx.z / out_width);
const int out_w_idx = (blockIdx.z % out_width);
const int in_ch_idx = threadIdx.x;
if (batch_idx < batch_size && out_ch_idx < out_channels &&
out_h_idx < out_height && out_w_idx < out_width && in_ch_idx < in_channels)
{
float max_val = -FLT_MAX;
for (int kh = 0; kh < kernel_height; ++kh)
{
for (int kw = 0; kw < kernel_width; ++kw)
{
int in_h = out_h_idx * stride_h + kh;
int in_w = out_w_idx * stride_w + kw;
if (in_h < in_height && in_w < in_width)
{
float input_val = input[(batch_idx * in_channels + in_ch_idx) * in_height * in_width + in_h * in_width + in_w];
float weight_val = weight[(out_ch_idx * in_channels + in_ch_idx) * kernel_height * kernel_width + kh * kernel_width + kw];
max_val = fmaxf(max_val, input_val + weight_val);
}
}
}
// 5D output tensor indexing (batch, out_ch, in_ch, h, w)
int output_idx = ((batch_idx * out_channels + out_ch_idx) * in_channels + in_ch_idx) * out_height * out_width +
out_h_idx * out_width + out_w_idx;
output[output_idx] = max_val;
}
}
// MaxPlus 2D convolution forward function
torch::Tensor maxplus_conv2d_cuda_forward(torch::Tensor input, torch::Tensor weight, std::vector<int64_t> stride)
{
CHECK_INPUT(input);
CHECK_INPUT(weight);
// Get dimensions
const auto batch_size = input.size(0);
const auto in_channels = input.size(1);
const auto in_height = input.size(2);
const auto in_width = input.size(3);
const auto out_channels = weight.size(0);
const auto kernel_height = weight.size(2);
const auto kernel_width = weight.size(3);
const auto stride_h = stride[0];
const auto stride_w = stride[1];
const auto out_height = (in_height - kernel_height) / stride_h + 1;
const auto out_width = (in_width - kernel_width) / stride_w + 1;
// Create 5D output tensor with explicit dimensions to match test expectations
auto output = torch::empty({batch_size, out_channels, in_channels, out_height, out_width},
torch::TensorOptions()
.dtype(input.dtype())
.device(input.device()));
// Configure kernel launch
const dim3 blocks(batch_size, out_channels, out_height * out_width);
const dim3 threads(in_channels, 1, 1); // Using threads for in_channels
// Launch kernel
maxplus_conv2d_forward_kernel<<<blocks, threads>>>(
input.data_ptr<float>(),
weight.data_ptr<float>(),
output.data_ptr<float>(),
batch_size, in_channels, out_channels,
in_height, in_width,
kernel_height, kernel_width,
out_height, out_width,
stride_h, stride_w);
return output;
}
```
minplus_maxplus_cpp.cpp:
```cpp
torch::Tensor maxplus_conv2d_cpu_forward(torch::Tensor input, torch::Tensor weight, std::vector<int64_t> stride)
{
// input: [B, C_in, H, W], weight: [C_out, C_in, kH, kW], stride: [sH, sW]
auto B = input.size(0);
auto C_in = input.size(1);
auto H = input.size(2);
auto W = input.size(3);
auto C_out = weight.size(0);
auto kH = weight.size(2);
auto kW = weight.size(3);
auto sH = stride[0];
auto sW = stride[1];
auto H_out = (H - kH) / sH + 1;
auto W_out = (W - kW) / sW + 1;
// output shape: [B, C_out, C_in, H_out, W_out]
auto output = torch::full({B, C_out, C_in, H_out, W_out}, -std::numeric_limits<float>::max(), input.options());
for (int b = 0; b < B; ++b)
{
for (int oc = 0; oc < C_out; ++oc)
{
for (int ic = 0; ic < C_in; ++ic)
{
for (int i = 0; i < H_out; ++i)
{
for (int j = 0; j < W_out; ++j)
{
float maxval = -std::numeric_limits<float>::max();
for (int u = 0; u < kH; ++u)
{
for (int v = 0; v < kW; ++v)
{
float val = input[b][ic][i * sH + u][j * sW + v].item<float>() + weight[oc][ic][u][v].item<float>();
if (val > maxval)
maxval = val;
}
}
output[b][oc][ic][i][j] = maxval;
}
}
}
}
}
return output;
}
torch::Tensor maxplus_conv2d_forward(torch::Tensor input, torch::Tensor weight, std::vector<int64_t> stride)
{
if (input.device().is_cuda())
{
return maxplus_conv2d_cuda_forward(input, weight, stride);
}
else
{
return maxplus_conv2d_cpu_forward(input, weight, stride);
}
}
```
functions.py
```python
import torch
from torch import Tensor
from torch.autograd import Function
from torch.utils.cpp_extension import load
_C = load(
name="tropical_algebra_cpp",
sources=[
"/home/limingbo/projects/tropical-embedding-cuda/src/minplus_maxplus_cpp.cpp",
"/home/limingbo/projects/tropical-embedding-cuda/src/minplus_maxplus_cuda.cu",
],
)
class MaxplusConv2dFunction(Function):
@staticmethod
def forward(ctx, input, weight, stride):
ctx.save_for_backward(input, weight)
ctx.stride = stride
return _C.maxplus_conv2d_forward(input, weight, stride)
@staticmethod
def backward(ctx, grad_output):
input, weight = ctx.saved_tensors
stride = ctx.stride
grad_input = _C.maxplus_conv2d_backward(grad_output, input, weight, stride)
grad_weight = _C.maxplus_conv2d_weight_backward(
grad_output, input, weight, stride
)
return grad_input, grad_weight, None
```
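A workaround that is often suggested for exporting custom autograd Functions with the TorchScript-based exporter is to add a `symbolic` staticmethod so the tracer emits a custom ONNX node instead of tracing into the extension call. Below is a rough sketch only: the domain/op name `custom_ops::maxplus_conv2d` is a placeholder, a matching custom op would still be needed on the ONNX Runtime side, and I am not sure this avoids the internal assert above:
```python
# Sketch of a change to functions.py above (Function and _C as defined there).
class MaxplusConv2dFunction(Function):
    @staticmethod
    def forward(ctx, input, weight, stride):
        ctx.save_for_backward(input, weight)
        ctx.stride = stride
        return _C.maxplus_conv2d_forward(input, weight, stride)

    # backward unchanged (omitted)

    @staticmethod
    def symbolic(g, input, weight, stride):
        # Emit a single custom ONNX node instead of tracing into the
        # C++ extension; "custom_ops" is a placeholder domain name.
        return g.op("custom_ops::maxplus_conv2d", input, weight, stride_i=stride)
```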
### Versions
PyTorch version: 2.4.1+cu121
Is debug build: False
CUDA used to build PyTorch: 12.1
ROCM used to build PyTorch: N/A
OS: Ubuntu 20.04.6 LTS (x86_64)
GCC version: (GCC) 11.2.0
Clang version: 10.0.0-4ubuntu1
CMake version: version 3.16.3
Libc version: glibc-2.31
Python version: 3.9.21 (main, Dec 11 2024, 16:24:11) [GCC 11.2.0] (64-bit runtime)
Python platform: Linux-5.4.0-193-generic-x86_64-with-glibc2.31
Is CUDA available: True
CUDA runtime version: 12.8.93
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration:
GPU 0: Tesla T4
GPU 1: Tesla T4
Nvidia driver version: 570.124.06
cuDNN version: Probably one of the following:
/usr/lib/libcudnn.so.8
/usr/lib/x86_64-linux-gnu/libcudnn.so.9.6.0
/usr/lib/x86_64-linux-gnu/libcudnn_adv.so.9.6.0
/usr/lib/x86_64-linux-gnu/libcudnn_cnn.so.9.6.0
/usr/lib/x86_64-linux-gnu/libcudnn_engines_precompiled.so.9.6.0
/usr/lib/x86_64-linux-gnu/libcudnn_engines_runtime_compiled.so.9.6.0
/usr/lib/x86_64-linux-gnu/libcudnn_graph.so.9.6.0
/usr/lib/x86_64-linux-gnu/libcudnn_heuristic.so.9.6.0
/usr/lib/x86_64-linux-gnu/libcudnn_ops.so.9.6.0
/usr/local/cuda-12.2/targets/x86_64-linux/lib/libcudnn.so.9
/usr/local/cuda-12.2/targets/x86_64-linux/lib/libcudnn_adv.so.9
/usr/local/cuda-12.2/targets/x86_64-linux/lib/libcudnn_cnn.so.9
/usr/local/cuda-12.2/targets/x86_64-linux/lib/libcudnn_engines_precompiled.so.9
/usr/local/cuda-12.2/targets/x86_64-linux/lib/libcudnn_engines_runtime_compiled.so.9
/usr/local/cuda-12.2/targets/x86_64-linux/lib/libcudnn_graph.so.9
/usr/local/cuda-12.2/targets/x86_64-linux/lib/libcudnn_heuristic.so.9
/usr/local/cuda-12.2/targets/x86_64-linux/lib/libcudnn_ops.so.9
/usr/local/cuda-12.2/targets/x86_64-linux/lib/libcudnn_ops_infer.so.8
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Byte Order: Little Endian
Address sizes: 46 bits physical, 48 bits virtual
CPU(s): 40
On-line CPU(s) list: 0-39
Thread(s) per core: 2
Core(s) per socket: 10
Socket(s): 2
NUMA node(s): 2
Vendor ID: GenuineIntel
CPU family: 6
Model: 85
Model name: Intel(R) Xeon(R) Silver 4210R CPU @ 2.40GHz
Stepping: 7
CPU MHz: 1000.005
CPU max MHz: 3200.0000
CPU min MHz: 1000.0000
BogoMIPS: 4800.00
Virtualization: VT-x
L1d cache: 640 KiB
L1i cache: 640 KiB
L2 cache: 20 MiB
L3 cache: 27.5 MiB
NUMA node0 CPU(s): 0-9,20-29
NUMA node1 CPU(s): 10-19,30-39
Vulnerability Gather data sampling: Mitigation; Microcode
Vulnerability Itlb multihit: KVM: Mitigation: Split huge pages
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Mitigation; Clear CPU buffers; SMT vulnerable
Vulnerability Retbleed: Mitigation; Enhanced IBRS
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl and seccomp
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Enhanced / Automatic IBRS; IBPB conditional; RSB filling; PBRSB-eIBRS SW sequence; BHI SW loop, KVM SW loop
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Mitigation; TSX disabled
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc art arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc cpuid aperfmperf pni pclmulqdq dtes64 ds_cpl vmx smx est tm2 ssse3 sdbg fma cx16 xtpr pdcm pcid dca sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm 3dnowprefetch cpuid_fault epb cat_l3 cdp_l3 invpcid_single ssbd mba ibrs ibpb stibp ibrs_enhanced tpr_shadow vnmi flexpriority ept vpid ept_ad fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid cqm mpx rdt_a avx512f avx512dq rdseed adx smap clflushopt clwb intel_pt avx512cd avx512bw avx512vl xsaveopt xsavec xgetbv1 xsaves cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local dtherm ida arat pln pts pku ospke avx512_vnni md_clear flush_l1d arch_capabilities
Versions of relevant libraries:
[pip3] mypy-extensions==1.0.0
[pip3] numpy==1.23.0
[pip3] nvidia-cublas-cu11==11.11.3.6
[pip3] nvidia-cublas-cu12==12.1.3.1
[pip3] nvidia-cuda-cupti-cu12==12.1.105
[pip3] nvidia-cuda-nvrtc-cu12==12.1.105
[pip3] nvidia-cuda-runtime-cu11==11.8.89
[pip3] nvidia-cuda-runtime-cu12==12.1.105
[pip3] nvidia-cudnn-cu11==9.6.0.74
[pip3] nvidia-cudnn-cu12==9.1.0.70
[pip3] nvidia-cufft-cu12==11.0.2.54
[pip3] nvidia-curand-cu12==10.3.2.106
[pip3] nvidia-cusolver-cu12==11.4.5.107
[pip3] nvidia-cusparse-cu12==12.1.0.106
[pip3] nvidia-nccl-cu12==2.20.5
[pip3] nvidia-nvjitlink-cu12==12.4.127
[pip3] nvidia-nvtx-cu12==12.1.105
[pip3] onnx==1.17.0
[pip3] onnxruntime-gpu==1.19.2
[pip3] torch==2.4.1+cu121
[pip3] torchaudio==2.4.1+cu121
[pip3] torchmetrics==1.6.1
[pip3] torchprofile==0.0.4
[pip3] torchvision==0.19.1+cu121
[pip3] triton==3.0.0
[conda] numpy 1.23.0 pypi_0 pypi
[conda] nvidia-cublas-cu11 11.11.3.6 pypi_0 pypi
[conda] nvidia-cublas-cu12 12.1.3.1 pypi_0 pypi
[conda] nvidia-cuda-cupti-cu12 12.1.105 pypi_0 pypi
[conda] nvidia-cuda-nvrtc-cu12 12.1.105 pypi_0 pypi
[conda] nvidia-cuda-runtime-cu11 11.8.89 pypi_0 pypi
[conda] nvidia-cuda-runtime-cu12 12.1.105 pypi_0 pypi
[conda] nvidia-cudnn-cu11 9.6.0.74 pypi_0 pypi
[conda] nvidia-cudnn-cu12 9.1.0.70 pypi_0 pypi
[conda] nvidia-cufft-cu12 11.0.2.54 pypi_0 pypi
[conda] nvidia-curand-cu12 10.3.2.106 pypi_0 pypi
[conda] nvidia-cusolver-cu12 11.4.5.107 pypi_0 pypi
[conda] nvidia-cusparse-cu12 12.1.0.106 pypi_0 pypi
[conda] nvidia-nccl-cu12 2.20.5 pypi_0 pypi
[conda] nvidia-nvtx-cu12 12.1.105 pypi_0 pypi
[conda] torch 1.10.2+cu113 pypi_0 pypi
[conda] torchaudio 2.4.1+cu121 pypi_0 pypi
[conda] torchmetrics 1.6.1 pypi_0 pypi
[conda] torchprofile 0.0.4 pypi_0 pypi
[conda] torchvision 0.19.1+cu121 pypi_0 pypi
[conda] triton 3.0.0 pypi_0 pypi
| true
|
2,998,195,834
|
[inductor] [aot] `torch.linalg.lu` can't accept `slice operation`, behaving differently with eager
|
shaoyuyoung
|
open
|
[
"high priority",
"triaged",
"oncall: pt2",
"module: dynamo"
] | 0
|
CONTRIBUTOR
|
### 🐛 Describe the bug
**symptom**: `torch.linalg.lu` can't accept a **slice operation** and behaves differently from eager. As you can see, I use **[:2]** to get **P, L**. This works in eager, but `aot_eager` throws a `dynamic_attributes` error.
**device backend**: both CPP and triton
**repro**
```python
import torch
import torch.nn as nn
import torch.nn.functional as F
from torch._inductor import config
config.fallback_random = True
torch.set_grad_enabled(False)
class Model(torch.nn.Module):
def __init__(self):
super().__init__()
def forward(self, x):
P, L = torch.linalg.lu(x)[:2]
return P, L
model = Model()
x = torch.randn(2, 4, 3, 3)
inputs = [x]
def run_test(model, inputs, backend):
torch.manual_seed(0)
if backend != "eager":
model = torch.compile(model, backend=backend)
try:
output = model(*inputs)
print(f"succeed on {backend}")
except Exception as e:
print(e)
run_test(model, inputs, 'eager')
run_test(model, inputs, 'aot_eager')
```
### Error logs
eager
```
succeed on eager
```
aot_eager
```
TypeError: VariableTracker.__init__() got an unexpected keyword argument 'dynamic_attributes'
```
### Versions
nightly 20250414
cc @ezyang @gchanan @zou3519 @kadeng @msaroufim @chauhang @penguinwu @voznesenskym @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @amjames
| true
|
2,998,172,950
|
[inductor] [cpu] `torch.outer` outputs inconsistent res when input tensor is very large
|
shaoyuyoung
|
open
|
[
"oncall: pt2",
"oncall: cpu inductor"
] | 1
|
CONTRIBUTOR
|
### 🐛 Describe the bug
**symptom**: when applying `torch.outer` to a vector and a `ones_like` vector, the result differs noticeably from eager. Note that the input tensor needs to be very large to trigger this.
**device backend**: only CPP
**repro**
```python
import torch
import torch.nn as nn
import torch.nn.functional as F
from torch._inductor import config
config.fallback_random = True
torch.set_grad_enabled(False)
torch.manual_seed(0)
class Model(torch.nn.Module):
def __init__(self):
super().__init__()
def forward(self, x):
vec = x.flatten()
vec_one = torch.ones_like(vec)
x = torch.outer(vec, vec_one)
return torch.mean(x, dim=1)
model = Model()
x = torch.randn(3, 8, 64, 64) # error will be amplified as the input tensor gets larger
inputs = [x]
def run_test(model, inputs, backend):
if backend != "eager":
model = torch.compile(model, backend=backend)
torch.manual_seed(0)
output = model(*inputs)
return output
output = run_test(model, inputs, 'eager')
c_output = run_test(model, inputs, 'inductor')
fp64 = run_test(model.to(dtype=torch.float64), [inputs[0].to(dtype=torch.float64)], 'eager')
print(torch.allclose(output, c_output, rtol=1e-3, atol=1e-3))
print(torch.max(torch.abs(c_output - output)))
print(torch._dynamo.utils.same(output, c_output, fp64))
```
### Error logs
CPP
```
False
tensor(0.0052)
False
```
triton
```
True
tensor(0., device='cuda:0')
True
```
### Versions
nightly 20240414
cc @chauhang @penguinwu
| true
|
2,998,088,196
|
[Inductor] Broadcast to range tree shape before block pointer store
|
blaine-rister
|
closed
|
[
"Merged",
"ciflow/trunk",
"topic: not user facing",
"module: inductor",
"ciflow/inductor"
] | 5
|
CONTRIBUTOR
|
# Feature
This fixes a bug related to block pointer stores. Since Triton's block pointer stores don't support implicit broadcasting, in certain cases we need to generate a `reshape->broadcast->reshape` pattern to ensure that the tensor being stored has the same shape as the block pointer. This happens when the block indexing expression involves strides of 0 or dimensions of 1, both of which we eliminate from the block pointer.
The existing logic missed an important edge case. We may need a broadcast prior to the first `reshape` of this pattern, in case the tensor comes from a load with implicit broadcasting. For example, if the range trees have shape `[YBLOCK, XBLOCK]`, but the load has a shape `[1, XBLOCK]`, we need to broadcast this to `[YBLOCK, XBLOCK]` prior to storing. See the example kernel below, which comes from `expand` -> `clone` with 3D tiling. The load has an implicit broadcast, and the store has a reshape. Thus, we need to insert an explicit broadcast between them.
```
@triton.jit
def triton_poi_fused_clone_0(in_ptr0, out_ptr0, znumel, ynumel, xnumel, ZBLOCK : tl.constexpr, YBLOCK : tl.constexpr, XBLOCK : tl.constexpr):
znumel = 32
ynumel = 1
xnumel = 32
zoffset = tl.program_id(2) * ZBLOCK
zindex = zoffset + tl.arange(0, ZBLOCK)[:, None, None]
zmask = zindex < znumel
yoffset = tl.program_id(1) * YBLOCK
yindex = yoffset + tl.arange(0, YBLOCK)[None, :, None]
ymask = tl.full([ZBLOCK, YBLOCK, XBLOCK], True, tl.int1)
xoffset = tl.program_id(0) * XBLOCK
xindex = xoffset + tl.arange(0, XBLOCK)[None, None, :]
xmask = xindex < xnumel
x1 = xindex
z0 = zindex
tmp0 = tl.load(tl.make_block_ptr(in_ptr0, shape=[32], strides=[1], block_shape=[XBLOCK], order=[0], offsets=[xoffset]), boundary_check=[0], eviction_policy='evict_last')[None, None, :]
tl.store(tl.make_block_ptr(out_ptr0, shape=[32, 32], strides=[32, 1], block_shape=[ZBLOCK, XBLOCK], order=[1, 0], offsets=[zoffset, xoffset]), tl.reshape(tl.broadcast_to(tmp0, [ZBLOCK, YBLOCK, XBLOCK]), [ZBLOCK, XBLOCK]).to(tl.float32), boundary_check=[0, 1])
''', device_str='cuda')
```
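For reference, here is a rough sketch of the kind of `expand` -> `clone` pattern described above; the shapes are illustrative, and whether this exact kernel (with block pointers and 3D tiling) is emitted depends on the inductor config and tiling heuristics:
```python
import torch

# Illustrative repro sketch (assumes a CUDA device and block-pointer
# codegen enabled): the load of the 1D input is implicitly broadcast,
# while the store needs the full [ZBLOCK, XBLOCK] shape.
def f(x):
    return x.expand(32, 1, 32).clone()

compiled = torch.compile(f)
out = compiled(torch.randn(32, device="cuda"))
```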
The tricky part is that we don't want to emit redundant broadcasts in the store. This PR reworks the logic a bit to make sure we don't emit a second broadcast unless it actually changes the shape.
# Test plan
Added a CI test for this case, which would fail on trunk. Checked that only one broadcast was emitted.
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
| true
|
2,998,034,612
|
DISABLED test_builtin_score_mods_different_block_size_float32_score_mod5_BLOCK_SIZE2_cuda_float32 (__main__.TestFlexAttentionCUDA)
|
pytorch-bot[bot]
|
closed
|
[
"triaged",
"module: flaky-tests",
"skipped",
"oncall: pt2",
"module: inductor"
] | 2
|
NONE
|
Platforms: linux
This test was disabled because it is failing in CI. See [recent examples](https://hud.pytorch.org/flakytest?name=test_builtin_score_mods_different_block_size_float32_score_mod5_BLOCK_SIZE2_cuda_float32&suite=TestFlexAttentionCUDA&limit=100) and the most recent trunk [workflow logs](https://github.com/pytorch/pytorch/runs/40611969619).
Over the past 3 hours, it has been determined flaky in 11 workflow(s) with 22 failures and 11 successes.
**Debugging instructions (after clicking on the recent samples link):**
DO NOT ASSUME THINGS ARE OKAY IF THE CI IS GREEN. We now shield flaky tests from developers so CI will thus be green but it will be harder to parse the logs.
To find relevant log snippets:
1. Click on the workflow logs linked above
2. Click on the Test step of the job so that it is expanded. Otherwise, the grepping will not work.
3. Grep for `test_builtin_score_mods_different_block_size_float32_score_mod5_BLOCK_SIZE2_cuda_float32`
4. There should be several instances run (as flaky tests are rerun in CI) from which you can study the logs.
Test file path: `inductor/test_flex_attention.py`
cc @clee2000 @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
| true
|
2,998,034,610
|
DISABLED test_njt_causal_bfloat16_cuda_bfloat16 (__main__.TestFlexAttentionCUDA)
|
pytorch-bot[bot]
|
closed
|
[
"triaged",
"module: flaky-tests",
"skipped",
"oncall: pt2",
"module: inductor"
] | 5
|
NONE
|
Platforms: linux
This test was disabled because it is failing in CI. See [recent examples](https://hud.pytorch.org/flakytest?name=test_njt_causal_bfloat16_cuda_bfloat16&suite=TestFlexAttentionCUDA&limit=100) and the most recent trunk [workflow logs](https://github.com/pytorch/pytorch/runs/40613351998).
Over the past 3 hours, it has been determined flaky in 11 workflow(s) with 11 failures and 11 successes.
**Debugging instructions (after clicking on the recent samples link):**
DO NOT ASSUME THINGS ARE OKAY IF THE CI IS GREEN. We now shield flaky tests from developers so CI will thus be green but it will be harder to parse the logs.
To find relevant log snippets:
1. Click on the workflow logs linked above
2. Click on the Test step of the job so that it is expanded. Otherwise, the grepping will not work.
3. Grep for `test_njt_causal_bfloat16_cuda_bfloat16`
4. There should be several instances run (as flaky tests are rerun in CI) from which you can study the logs.
<details><summary>Sample error message</summary>
```
Traceback (most recent call last):
File "/var/lib/jenkins/workspace/test/inductor/test_flex_attention.py", line 2116, in test_njt_causal
self.run_test_with_paged_attention(causal_njt, dtype, device=device)
File "/var/lib/jenkins/workspace/test/inductor/test_flex_attention.py", line 698, in run_test_with_paged_attention
compiled_out, compiled_lse = self.run_paged_attention(
File "/var/lib/jenkins/workspace/test/inductor/test_flex_attention.py", line 641, in run_paged_attention
compiled_out, compiled_lse = compiled_sdpa(
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_dynamo/eval_frame.py", line 658, in _fn
return fn(*args, **kwargs)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/nn/attention/flex_attention.py", line 1153, in flex_attention
def flex_attention(
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_dynamo/eval_frame.py", line 850, in _fn
return fn(*args, **kwargs)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_functorch/aot_autograd.py", line 1207, in forward
return compiled_fn(full_args)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_functorch/_aot_autograd/runtime_wrappers.py", line 331, in runtime_wrapper
all_outs = call_func_at_runtime_with_args(
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_functorch/_aot_autograd/utils.py", line 126, in call_func_at_runtime_with_args
out = normalize_as_list(f(args))
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_functorch/_aot_autograd/runtime_wrappers.py", line 692, in inner_fn
outs = compiled_fn(args)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_functorch/_aot_autograd/runtime_wrappers.py", line 498, in wrapper
return compiled_fn(runtime_args)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_inductor/output_code.py", line 561, in __call__
return self.current_callable(inputs)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_inductor/utils.py", line 2444, in run
return model(new_inputs)
File "/tmp/tmp9o95m5rl/j5/cj55rghfffehibmg46tyrvy4eeuenpx56jjtc3ibxdrzojme55dn.py", line 609, in call
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_inductor/runtime/triton_heuristics.py", line 1059, in run
return launcher(
File "<string>", line 5, in launcher
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/triton/backends/nvidia/driver.py", line 529, in __call__
self.launch(gridX, gridY, gridZ, stream, function, self.launch_cooperative_grid, global_scratch, *args)
RuntimeError: Triton Error [CUDA]: out of memory
To execute this test, run the following from the base repo dir:
python test/inductor/test_flex_attention.py TestFlexAttentionCUDA.test_njt_causal_bfloat16_cuda_bfloat16
This message can be suppressed by setting PYTORCH_PRINT_REPRO_ON_FAILURE=0
```
</details>
Test file path: `inductor/test_flex_attention.py`
cc @clee2000 @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
| true
|
2,998,034,262
|
DISABLED test_builtin_score_mods_different_block_size_float32_score_mod1_BLOCK_SIZE_256_cuda_float32 (__main__.TestFlexAttentionCUDA)
|
pytorch-bot[bot]
|
closed
|
[
"triaged",
"module: flaky-tests",
"skipped",
"oncall: pt2",
"module: inductor"
] | 4
|
NONE
|
Platforms: linux
This test was disabled because it is failing in CI. See [recent examples](https://hud.pytorch.org/flakytest?name=test_builtin_score_mods_different_block_size_float32_score_mod1_BLOCK_SIZE_256_cuda_float32&suite=TestFlexAttentionCUDA&limit=100) and the most recent trunk [workflow logs](https://github.com/pytorch/pytorch/runs/40609417022).
Over the past 3 hours, it has been determined flaky in 11 workflow(s) with 22 failures and 11 successes.
**Debugging instructions (after clicking on the recent samples link):**
DO NOT ASSUME THINGS ARE OKAY IF THE CI IS GREEN. We now shield flaky tests from developers so CI will thus be green but it will be harder to parse the logs.
To find relevant log snippets:
1. Click on the workflow logs linked above
2. Click on the Test step of the job so that it is expanded. Otherwise, the grepping will not work.
3. Grep for `test_builtin_score_mods_different_block_size_float32_score_mod1_BLOCK_SIZE_256_cuda_float32`
4. There should be several instances run (as flaky tests are rerun in CI) from which you can study the logs.
<details><summary>Sample error message</summary>
```
Traceback (most recent call last):
File "/var/lib/jenkins/workspace/test/inductor/test_flex_attention.py", line 1201, in test_builtin_score_mods_different_block_size
self.run_test(score_mod, dtype, block_mask=block_mask, device=device)
File "/var/lib/jenkins/workspace/test/inductor/test_flex_attention.py", line 491, in run_test
golden_out.backward(backward_grad.to(torch.float64))
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_tensor.py", line 648, in backward
torch.autograd.backward(
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/autograd/__init__.py", line 353, in backward
_engine_run_backward(
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/autograd/graph.py", line 824, in _engine_run_backward
return Variable._execution_engine.run_backward( # Calls into the C++ engine to run the backward pass
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/autograd/function.py", line 307, in apply
return user_fn(self, *args)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_higher_order_ops/flex_attention.py", line 679, in backward
) = flex_attention_backward(
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_higher_order_ops/flex_attention.py", line 132, in __call__
return super().__call__(
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_ops.py", line 471, in __call__
return wrapper()
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_ops.py", line 467, in wrapper
return self.dispatch(
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_ops.py", line 327, in dispatch
return kernel(*args, **kwargs)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_higher_order_ops/utils.py", line 72, in inner
return autograd_not_implemented_inner(op, deferred_error, *args, **kwargs)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_higher_order_ops/utils.py", line 45, in autograd_not_implemented_inner
result = operator(*args, **kwargs)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_higher_order_ops/flex_attention.py", line 132, in __call__
return super().__call__(
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_ops.py", line 471, in __call__
return wrapper()
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_ops.py", line 467, in wrapper
return self.dispatch(
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_ops.py", line 327, in dispatch
return kernel(*args, **kwargs)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_higher_order_ops/flex_attention.py", line 872, in sdpa_dense_backward
grad_scores = grad_scores * scale
torch.OutOfMemoryError: CUDA out of memory. Tried to allocate 1024.00 MiB. GPU 0 has a total capacity of 21.95 GiB of which 928.12 MiB is free. Process 112675 has 21.04 GiB memory in use. Of the allocated memory 6.76 GiB is allocated by PyTorch, and 14.02 GiB is reserved by PyTorch but unallocated. If reserved but unallocated memory is large try setting PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True to avoid fragmentation. See documentation for Memory Management (https://pytorch.org/docs/stable/notes/cuda.html#environment-variables)
To execute this test, run the following from the base repo dir:
python test/inductor/test_flex_attention.py TestFlexAttentionCUDA.test_builtin_score_mods_different_block_size_float32_score_mod1_BLOCK_SIZE_256_cuda_float32
This message can be suppressed by setting PYTORCH_PRINT_REPRO_ON_FAILURE=0
```
</details>
Test file path: `inductor/test_flex_attention.py`
cc @clee2000 @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
| true
|
2,998,034,193
|
DISABLED test_non_equal_head_dims_score_mod7_float32_head_dims1_cuda_float32 (__main__.TestFlexAttentionCUDA)
|
pytorch-bot[bot]
|
closed
|
[
"triaged",
"module: flaky-tests",
"skipped",
"oncall: pt2",
"module: inductor"
] | 1
|
NONE
|
Platforms: linux
This test was disabled because it is failing in CI. See [recent examples](https://hud.pytorch.org/flakytest?name=test_non_equal_head_dims_score_mod7_float32_head_dims1_cuda_float32&suite=TestFlexAttentionCUDA&limit=100) and the most recent trunk [workflow logs](https://github.com/pytorch/pytorch/runs/40611779558).
Over the past 3 hours, it has been determined flaky in 11 workflow(s) with 22 failures and 11 successes.
**Debugging instructions (after clicking on the recent samples link):**
DO NOT ASSUME THINGS ARE OKAY IF THE CI IS GREEN. We now shield flaky tests from developers so CI will thus be green but it will be harder to parse the logs.
To find relevant log snippets:
1. Click on the workflow logs linked above
2. Click on the Test step of the job so that it is expanded. Otherwise, the grepping will not work.
3. Grep for `test_non_equal_head_dims_score_mod7_float32_head_dims1_cuda_float32`
4. There should be several instances run (as flaky tests are rerun in CI) from which you can study the logs.
<details><summary>Sample error message</summary>
```
Traceback (most recent call last):
File "/var/lib/jenkins/workspace/test/inductor/test_flex_attention.py", line 2159, in test_non_equal_head_dims
self.run_test(score_mod, dtype, B, H, S, qk_d, B, H, S, V_D=v_d, device=device)
File "/var/lib/jenkins/workspace/test/inductor/test_flex_attention.py", line 491, in run_test
golden_out.backward(backward_grad.to(torch.float64))
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_tensor.py", line 648, in backward
torch.autograd.backward(
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/autograd/__init__.py", line 353, in backward
_engine_run_backward(
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/autograd/graph.py", line 824, in _engine_run_backward
return Variable._execution_engine.run_backward( # Calls into the C++ engine to run the backward pass
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/autograd/function.py", line 307, in apply
return user_fn(self, *args)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_higher_order_ops/flex_attention.py", line 679, in backward
) = flex_attention_backward(
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_higher_order_ops/flex_attention.py", line 132, in __call__
return super().__call__(
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_ops.py", line 471, in __call__
return wrapper()
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_ops.py", line 467, in wrapper
return self.dispatch(
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_ops.py", line 327, in dispatch
return kernel(*args, **kwargs)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_higher_order_ops/utils.py", line 72, in inner
return autograd_not_implemented_inner(op, deferred_error, *args, **kwargs)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_higher_order_ops/utils.py", line 45, in autograd_not_implemented_inner
result = operator(*args, **kwargs)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_higher_order_ops/flex_attention.py", line 132, in __call__
return super().__call__(
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_ops.py", line 471, in __call__
return wrapper()
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_ops.py", line 467, in wrapper
return self.dispatch(
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_ops.py", line 327, in dispatch
return kernel(*args, **kwargs)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_higher_order_ops/flex_attention.py", line 880, in sdpa_dense_backward
grad_scores = torch.where(
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_dynamo/_trace_wrapped_higher_order_op.py", line 142, in __torch_function__
return func(*args, **(kwargs or {}))
torch.OutOfMemoryError: CUDA out of memory. Tried to allocate 1024.00 MiB. GPU 0 has a total capacity of 21.95 GiB of which 224.12 MiB is free. Process 103369 has 21.72 GiB memory in use. Of the allocated memory 6.81 GiB is allocated by PyTorch, and 14.65 GiB is reserved by PyTorch but unallocated. If reserved but unallocated memory is large try setting PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True to avoid fragmentation. See documentation for Memory Management (https://pytorch.org/docs/stable/notes/cuda.html#environment-variables)
To execute this test, run the following from the base repo dir:
python test/inductor/test_flex_attention.py TestFlexAttentionCUDA.test_non_equal_head_dims_score_mod7_float32_head_dims1_cuda_float32
This message can be suppressed by setting PYTORCH_PRINT_REPRO_ON_FAILURE=0
```
</details>
Test file path: `inductor/test_flex_attention.py`
cc @clee2000 @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
| true
|
2,998,034,167
|
DISABLED test_builtin_score_mods_different_block_size_float32_score_mod2_BLOCK_SIZE_256_cuda_float32 (__main__.TestFlexAttentionCUDA)
|
pytorch-bot[bot]
|
closed
|
[
"triaged",
"module: flaky-tests",
"skipped",
"oncall: pt2",
"module: inductor"
] | 4
|
NONE
|
Platforms: linux
This test was disabled because it is failing in CI. See [recent examples](https://hud.pytorch.org/flakytest?name=test_builtin_score_mods_different_block_size_float32_score_mod2_BLOCK_SIZE_256_cuda_float32&suite=TestFlexAttentionCUDA&limit=100) and the most recent trunk [workflow logs](https://github.com/pytorch/pytorch/runs/40612807146).
Over the past 3 hours, it has been determined flaky in 11 workflow(s) with 22 failures and 11 successes.
**Debugging instructions (after clicking on the recent samples link):**
DO NOT ASSUME THINGS ARE OKAY IF THE CI IS GREEN. We now shield flaky tests from developers so CI will thus be green but it will be harder to parse the logs.
To find relevant log snippets:
1. Click on the workflow logs linked above
2. Click on the Test step of the job so that it is expanded. Otherwise, the grepping will not work.
3. Grep for `test_builtin_score_mods_different_block_size_float32_score_mod2_BLOCK_SIZE_256_cuda_float32`
4. There should be several instances run (as flaky tests are rerun in CI) from which you can study the logs.
Test file path: `inductor/test_flex_attention.py`
cc @clee2000 @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
| true
|
2,998,034,076
|
DISABLED test_builtin_score_mods_different_block_size_float16_score_mod2_BLOCK_SIZE3_cuda_float16 (__main__.TestFlexAttentionCUDA)
|
pytorch-bot[bot]
|
closed
|
[
"triaged",
"module: flaky-tests",
"skipped",
"oncall: pt2",
"module: inductor"
] | 4
|
NONE
|
Platforms: linux
This test was disabled because it is failing in CI. See [recent examples](https://hud.pytorch.org/flakytest?name=test_builtin_score_mods_different_block_size_float16_score_mod2_BLOCK_SIZE3_cuda_float16&suite=TestFlexAttentionCUDA&limit=100) and the most recent trunk [workflow logs](https://github.com/pytorch/pytorch/runs/40609417022).
Over the past 3 hours, it has been determined flaky in 11 workflow(s) with 22 failures and 11 successes.
**Debugging instructions (after clicking on the recent samples link):**
DO NOT ASSUME THINGS ARE OKAY IF THE CI IS GREEN. We now shield flaky tests from developers so CI will thus be green but it will be harder to parse the logs.
To find relevant log snippets:
1. Click on the workflow logs linked above
2. Click on the Test step of the job so that it is expanded. Otherwise, the grepping will not work.
3. Grep for `test_builtin_score_mods_different_block_size_float16_score_mod2_BLOCK_SIZE3_cuda_float16`
4. There should be several instances run (as flaky tests are rerun in CI) from which you can study the logs.
Test file path: `inductor/test_flex_attention.py`
cc @clee2000 @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
| true
|
2,998,034,055
|
DISABLED test_njt_causal_float16_cuda_float16 (__main__.TestFlexAttentionCUDA)
|
pytorch-bot[bot]
|
closed
|
[
"triaged",
"module: flaky-tests",
"skipped",
"oncall: pt2",
"module: inductor"
] | 3
|
NONE
|
Platforms: linux
This test was disabled because it is failing in CI. See [recent examples](https://hud.pytorch.org/flakytest?name=test_njt_causal_float16_cuda_float16&suite=TestFlexAttentionCUDA&limit=100) and the most recent trunk [workflow logs](https://github.com/pytorch/pytorch/runs/40612807146).
Over the past 3 hours, it has been determined flaky in 11 workflow(s) with 22 failures and 11 successes.
**Debugging instructions (after clicking on the recent samples link):**
DO NOT ASSUME THINGS ARE OKAY IF THE CI IS GREEN. We now shield flaky tests from developers so CI will thus be green but it will be harder to parse the logs.
To find relevant log snippets:
1. Click on the workflow logs linked above
2. Click on the Test step of the job so that it is expanded. Otherwise, the grepping will not work.
3. Grep for `test_njt_causal_float16_cuda_float16`
4. There should be several instances run (as flaky tests are rerun in CI) from which you can study the logs.
Test file path: `inductor/test_flex_attention.py`
cc @clee2000 @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
| true
|
2,998,033,968
|
DISABLED test_builtin_score_mods_different_block_size_float32_score_mod5_BLOCK_SIZE_256_cuda_float32 (__main__.TestFlexAttentionCUDA)
|
pytorch-bot[bot]
|
closed
|
[
"triaged",
"module: flaky-tests",
"skipped",
"oncall: pt2",
"module: inductor"
] | 4
|
NONE
|
Platforms: linux
This test was disabled because it is failing in CI. See [recent examples](https://hud.pytorch.org/flakytest?name=test_builtin_score_mods_different_block_size_float32_score_mod5_BLOCK_SIZE_256_cuda_float32&suite=TestFlexAttentionCUDA&limit=100) and the most recent trunk [workflow logs](https://github.com/pytorch/pytorch/runs/40611884651).
Over the past 3 hours, it has been determined flaky in 11 workflow(s) with 22 failures and 11 successes.
**Debugging instructions (after clicking on the recent samples link):**
DO NOT ASSUME THINGS ARE OKAY IF THE CI IS GREEN. We now shield flaky tests from developers so CI will thus be green but it will be harder to parse the logs.
To find relevant log snippets:
1. Click on the workflow logs linked above
2. Click on the Test step of the job so that it is expanded. Otherwise, the grepping will not work.
3. Grep for `test_builtin_score_mods_different_block_size_float32_score_mod5_BLOCK_SIZE_256_cuda_float32`
4. There should be several instances run (as flaky tests are rerun in CI) from which you can study the logs.
Test file path: `inductor/test_flex_attention.py`
cc @clee2000 @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
| true
|
2,998,033,923
|
DISABLED test_builtin_score_mods_different_block_size_float32_score_mod7_BLOCK_SIZE2_cuda_float32 (__main__.TestFlexAttentionCUDA)
|
pytorch-bot[bot]
|
closed
|
[
"triaged",
"module: flaky-tests",
"skipped",
"oncall: pt2",
"module: inductor"
] | 4
|
NONE
|
Platforms: linux
This test was disabled because it is failing in CI. See [recent examples](https://hud.pytorch.org/flakytest?name=test_builtin_score_mods_different_block_size_float32_score_mod7_BLOCK_SIZE2_cuda_float32&suite=TestFlexAttentionCUDA&limit=100) and the most recent trunk [workflow logs](https://github.com/pytorch/pytorch/runs/40613351998).
Over the past 3 hours, it has been determined flaky in 11 workflow(s) with 22 failures and 11 successes.
**Debugging instructions (after clicking on the recent samples link):**
DO NOT ASSUME THINGS ARE OKAY IF THE CI IS GREEN. We now shield flaky tests from developers so CI will thus be green but it will be harder to parse the logs.
To find relevant log snippets:
1. Click on the workflow logs linked above
2. Click on the Test step of the job so that it is expanded. Otherwise, the grepping will not work.
3. Grep for `test_builtin_score_mods_different_block_size_float32_score_mod7_BLOCK_SIZE2_cuda_float32`
4. There should be several instances run (as flaky tests are rerun in CI) from which you can study the logs.
<details><summary>Sample error message</summary>
```
Traceback (most recent call last):
File "/var/lib/jenkins/workspace/test/inductor/test_flex_attention.py", line 1201, in test_builtin_score_mods_different_block_size
self.run_test(score_mod, dtype, block_mask=block_mask, device=device)
File "/var/lib/jenkins/workspace/test/inductor/test_flex_attention.py", line 491, in run_test
golden_out.backward(backward_grad.to(torch.float64))
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_tensor.py", line 648, in backward
torch.autograd.backward(
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/autograd/__init__.py", line 353, in backward
_engine_run_backward(
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/autograd/graph.py", line 824, in _engine_run_backward
return Variable._execution_engine.run_backward( # Calls into the C++ engine to run the backward pass
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/autograd/function.py", line 307, in apply
return user_fn(self, *args)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_higher_order_ops/flex_attention.py", line 679, in backward
) = flex_attention_backward(
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_higher_order_ops/flex_attention.py", line 132, in __call__
return super().__call__(
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_ops.py", line 471, in __call__
return wrapper()
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_ops.py", line 467, in wrapper
return self.dispatch(
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_ops.py", line 327, in dispatch
return kernel(*args, **kwargs)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_higher_order_ops/utils.py", line 72, in inner
return autograd_not_implemented_inner(op, deferred_error, *args, **kwargs)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_higher_order_ops/utils.py", line 45, in autograd_not_implemented_inner
result = operator(*args, **kwargs)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_higher_order_ops/flex_attention.py", line 132, in __call__
return super().__call__(
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_ops.py", line 471, in __call__
return wrapper()
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_ops.py", line 467, in wrapper
return self.dispatch(
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_ops.py", line 327, in dispatch
return kernel(*args, **kwargs)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_higher_order_ops/flex_attention.py", line 880, in sdpa_dense_backward
grad_scores = torch.where(
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_dynamo/_trace_wrapped_higher_order_op.py", line 142, in __torch_function__
return func(*args, **(kwargs or {}))
torch.OutOfMemoryError: CUDA out of memory. Tried to allocate 1024.00 MiB. GPU 0 has a total capacity of 21.95 GiB of which 832.12 MiB is free. Process 59337 has 21.13 GiB memory in use. Of the allocated memory 6.77 GiB is allocated by PyTorch, and 14.10 GiB is reserved by PyTorch but unallocated. If reserved but unallocated memory is large try setting PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True to avoid fragmentation. See documentation for Memory Management (https://pytorch.org/docs/stable/notes/cuda.html#environment-variables)
To execute this test, run the following from the base repo dir:
python test/inductor/test_flex_attention.py TestFlexAttentionCUDA.test_builtin_score_mods_different_block_size_float32_score_mod7_BLOCK_SIZE2_cuda_float32
This message can be suppressed by setting PYTORCH_PRINT_REPRO_ON_FAILURE=0
```
</details>
Test file path: `inductor/test_flex_attention.py`
cc @clee2000 @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
| true
|
2,998,033,885
|
DISABLED test_index_weird2_cuda (__main__.TestFlexAttentionCUDA)
|
pytorch-bot[bot]
|
open
|
[
"triaged",
"module: flaky-tests",
"skipped",
"oncall: pt2",
"module: inductor"
] | 3
|
NONE
|
Platforms: linux
This test was disabled because it is failing in CI. See [recent examples](https://hud.pytorch.org/flakytest?name=test_index_weird2_cuda&suite=TestFlexAttentionCUDA&limit=100) and the most recent trunk [workflow logs](https://github.com/pytorch/pytorch/runs/40608545535).
Over the past 3 hours, it has been determined flaky in 11 workflow(s) with 22 failures and 11 successes.
**Debugging instructions (after clicking on the recent samples link):**
DO NOT ASSUME THINGS ARE OKAY IF THE CI IS GREEN. We now shield flaky tests from developers so CI will thus be green but it will be harder to parse the logs.
To find relevant log snippets:
1. Click on the workflow logs linked above
2. Click on the Test step of the job so that it is expanded. Otherwise, the grepping will not work.
3. Grep for `test_index_weird2_cuda`
4. There should be several instances run (as flaky tests are rerun in CI) from which you can study the logs.
Test file path: `inductor/test_flex_attention.py`
cc @clee2000 @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
| true
|
2,998,033,832
|
DISABLED test_load_from_bias_seq_batch_float16_cuda_float16 (__main__.TestFlexAttentionCUDA)
|
pytorch-bot[bot]
|
closed
|
[
"triaged",
"module: flaky-tests",
"skipped",
"oncall: pt2",
"module: inductor"
] | 1
|
NONE
|
Platforms: linux
This test was disabled because it is failing in CI. See [recent examples](https://hud.pytorch.org/flakytest?name=test_load_from_bias_seq_batch_float16_cuda_float16&suite=TestFlexAttentionCUDA&limit=100) and the most recent trunk [workflow logs](https://github.com/pytorch/pytorch/runs/40608545535).
Over the past 3 hours, it has been determined flaky in 11 workflow(s) with 22 failures and 11 successes.
**Debugging instructions (after clicking on the recent samples link):**
DO NOT ASSUME THINGS ARE OKAY IF THE CI IS GREEN. We now shield flaky tests from developers so CI will thus be green but it will be harder to parse the logs.
To find relevant log snippets:
1. Click on the workflow logs linked above
2. Click on the Test step of the job so that it is expanded. Otherwise, the grepping will not work.
3. Grep for `test_load_from_bias_seq_batch_float16_cuda_float16`
4. There should be several instances run (as flaky tests are rerun in CI) from which you can study the logs.
Test file path: `inductor/test_flex_attention.py`
cc @clee2000 @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
| true
|
2,998,033,804
|
DISABLED test_multiple_mask_calls_cuda (__main__.TestFlexAttentionCUDA)
|
pytorch-bot[bot]
|
closed
|
[
"module: rocm",
"triaged",
"module: flaky-tests",
"skipped",
"oncall: pt2",
"module: inductor"
] | 7
|
NONE
|
Platforms: rocm
This test was disabled because it is failing in CI. See [recent examples](https://hud.pytorch.org/flakytest?name=test_multiple_mask_calls_cuda&suite=TestFlexAttentionCUDA&limit=100) and the most recent trunk [workflow logs](https://github.com/pytorch/pytorch/runs/40607419311).
Over the past 3 hours, it has been determined flaky in 4 workflow(s) with 4 failures and 4 successes.
**Debugging instructions (after clicking on the recent samples link):**
DO NOT ASSUME THINGS ARE OKAY IF THE CI IS GREEN. We now shield flaky tests from developers so CI will thus be green but it will be harder to parse the logs.
To find relevant log snippets:
1. Click on the workflow logs linked above
2. Click on the Test step of the job so that it is expanded. Otherwise, the grepping will not work.
3. Grep for `test_multiple_mask_calls_cuda`
4. There should be several instances run (as flaky tests are rerun in CI) from which you can study the logs.
<details><summary>Sample error message</summary>
```
Traceback (most recent call last):
File "/var/lib/jenkins/pytorch/test/inductor/test_flex_attention.py", line 1790, in test_multiple_mask_calls
torch.testing.assert_close(grad, grad_compiled, atol=3e-2, rtol=3e-2)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_comparison.py", line 1519, in assert_close
raise error_metas[0].to_error(msg)
AssertionError: Tensor-likes are not close!
Mismatched elements: 191 / 131072 (0.1%)
Greatest absolute difference: 2.662456512451172 at index (0, 3, 390, 48) (up to 0.03 allowed)
Greatest relative difference: 18.222545623779297 at index (0, 3, 392, 49) (up to 0.03 allowed)
To execute this test, run the following from the base repo dir:
PYTORCH_TEST_WITH_ROCM=1 python test/inductor/test_flex_attention.py TestFlexAttentionCUDA.test_multiple_mask_calls_cuda
This message can be suppressed by setting PYTORCH_PRINT_REPRO_ON_FAILURE=0
```
</details>
Test file path: `inductor/test_flex_attention.py`
cc @jeffdaily @sunway513 @jithunnair-amd @pruthvistony @ROCmSupport @dllehr-amd @jataylo @hongxiayang @naromero77amd @clee2000 @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
| true
|
2,998,033,789
|
DISABLED test_index_propagation_nested_indirect_indexing_mps (__main__.GPUTests)
|
pytorch-bot[bot]
|
open
|
[
"triaged",
"module: flaky-tests",
"module: macos",
"skipped",
"oncall: pt2",
"module: inductor"
] | 4
|
NONE
|
Platforms: mac, macos
This test was disabled because it is failing in CI. See [recent examples](https://hud.pytorch.org/flakytest?name=test_index_propagation_nested_indirect_indexing_mps&suite=GPUTests&limit=100) and the most recent trunk [workflow logs](https://github.com/pytorch/pytorch/runs/40608617293).
Over the past 3 hours, it has been determined flaky in 15 workflow(s) with 47 failures and 15 successes.
**Debugging instructions (after clicking on the recent samples link):**
DO NOT ASSUME THINGS ARE OKAY IF THE CI IS GREEN. We now shield flaky tests from developers so CI will thus be green but it will be harder to parse the logs.
To find relevant log snippets:
1. Click on the workflow logs linked above
2. Click on the Test step of the job so that it is expanded. Otherwise, the grepping will not work.
3. Grep for `test_index_propagation_nested_indirect_indexing_mps`
4. There should be several instances run (as flaky tests are rerun in CI) from which you can study the logs.
<details><summary>Sample error message</summary>
```
Traceback (most recent call last):
File "/Users/ec2-user/runner/_work/pytorch/pytorch/test/inductor/test_torchinductor.py", line 1637, in test_index_propagation_nested_indirect_indexing
self.assertEqual(expect, actual)
File "/Users/ec2-user/runner/_work/_temp/conda_environment_14479486848/lib/python3.9/site-packages/torch/testing/_internal/common_utils.py", line 4095, in assertEqual
raise error_metas.pop()[0].to_error( # type: ignore[index]
AssertionError: Tensor-likes are not close!
Mismatched elements: 1792 / 1920 (93.3%)
Greatest absolute difference: 3.397368907928467 at index (15, 8) (up to 1e-05 allowed)
Greatest relative difference: inf at index (2, 0) (up to 1.3e-06 allowed)
To execute this test, run the following from the base repo dir:
python test/inductor/test_torchinductor.py GPUTests.test_index_propagation_nested_indirect_indexing_mps
This message can be suppressed by setting PYTORCH_PRINT_REPRO_ON_FAILURE=0
```
</details>
Test file path: `inductor/test_torchinductor.py`
cc @clee2000 @malfet @albanD @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
| true
|
2,998,033,703
|
DISABLED test_index_propagation_nested_indirect_indexing_mps (__main__.GPUTests)
|
pytorch-bot[bot]
|
closed
|
[
"triaged",
"module: flaky-tests",
"module: macos",
"skipped",
"oncall: pt2",
"module: inductor"
] | 2
|
NONE
|
Platforms: mac, macos
This test was disabled because it is failing in CI. See [recent examples](https://hud.pytorch.org/flakytest?name=test_index_propagation_nested_indirect_indexing_mps&suite=GPUTests&limit=100) and the most recent trunk [workflow logs](https://github.com/pytorch/pytorch/runs/40614349031).
Over the past 3 hours, it has been determined flaky in 15 workflow(s) with 15 failures and 15 successes.
**Debugging instructions (after clicking on the recent samples link):**
DO NOT ASSUME THINGS ARE OKAY IF THE CI IS GREEN. We now shield flaky tests from developers so CI will thus be green but it will be harder to parse the logs.
To find relevant log snippets:
1. Click on the workflow logs linked above
2. Click on the Test step of the job so that it is expanded. Otherwise, the grepping will not work.
3. Grep for `test_index_propagation_nested_indirect_indexing_mps`
4. There should be several instances run (as flaky tests are rerun in CI) from which you can study the logs.
<details><summary>Sample error message</summary>
```
Traceback (most recent call last):
File "/Users/ec2-user/runner/_work/pytorch/pytorch/test/inductor/test_torchinductor.py", line 1627, in test_index_propagation_nested_indirect_indexing
self.assertEqual(expect, actual)
File "/Users/ec2-user/runner/_work/_temp/conda_environment_14476990843/lib/python3.9/site-packages/torch/testing/_internal/common_utils.py", line 4095, in assertEqual
raise error_metas.pop()[0].to_error( # type: ignore[index]
AssertionError: Tensor-likes are not close!
Mismatched elements: 1792 / 1920 (93.3%)
Greatest absolute difference: 3.397368907928467 at index (15, 8) (up to 1e-05 allowed)
Greatest relative difference: inf at index (2, 0) (up to 1.3e-06 allowed)
To execute this test, run the following from the base repo dir:
python test/inductor/test_torchinductor.py GPUTests.test_index_propagation_nested_indirect_indexing_mps
This message can be suppressed by setting PYTORCH_PRINT_REPRO_ON_FAILURE=0
```
</details>
Test file path: `inductor/test_torchinductor.py`
cc @clee2000 @malfet @albanD @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
| true
|
2,998,033,653
|
DISABLED test_remove_noop_slice_cpu (__main__.CpuTests)
|
pytorch-bot[bot]
|
open
|
[
"triaged",
"module: flaky-tests",
"skipped",
"module: fx"
] | 9
|
NONE
|
Platforms: asan, linux, rocm, mac, macos
This test was disabled because it is failing in CI. See [recent examples](https://hud.pytorch.org/flakytest?name=test_remove_noop_slice_cpu&suite=CpuTests&limit=100) and the most recent trunk [workflow logs](https://github.com/pytorch/pytorch/runs/40609661180).
Over the past 3 hours, it has been determined flaky in 75 workflow(s) with 150 failures and 75 successes.
**Debugging instructions (after clicking on the recent samples link):**
DO NOT ASSUME THINGS ARE OKAY IF THE CI IS GREEN. We now shield flaky tests from developers so CI will thus be green but it will be harder to parse the logs.
To find relevant log snippets:
1. Click on the workflow logs linked above
2. Click on the Test step of the job so that it is expanded. Otherwise, the grepping will not work.
3. Grep for `test_remove_noop_slice_cpu`
4. There should be several instances run (as flaky tests are rerun in CI) from which you can study the logs.
Test file path: `inductor/test_compile_subprocess.py`
cc @clee2000 @ezyang @SherlockNoMad @EikanWang @jgong5 @wenzhe-nrv
| true
|
2,998,033,625
|
DISABLED test_remove_noop_slice_cuda (__main__.GPUTests)
|
pytorch-bot[bot]
|
open
|
[
"triaged",
"module: flaky-tests",
"skipped",
"module: fx"
] | 4
|
NONE
|
Platforms: linux, rocm, slow
This test was disabled because it is failing in CI. See [recent examples](https://hud.pytorch.org/flakytest?name=test_remove_noop_slice_cuda&suite=GPUTests&limit=100) and the most recent trunk [workflow logs](https://github.com/pytorch/pytorch/runs/40611884660).
Over the past 3 hours, it has been determined flaky in 20 workflow(s) with 40 failures and 20 successes.
**Debugging instructions (after clicking on the recent samples link):**
DO NOT ASSUME THINGS ARE OKAY IF THE CI IS GREEN. We now shield flaky tests from developers so CI will thus be green but it will be harder to parse the logs.
To find relevant log snippets:
1. Click on the workflow logs linked above
2. Click on the Test step of the job so that it is expanded. Otherwise, the grepping will not work.
3. Grep for `test_remove_noop_slice_cuda`
4. There should be several instances run (as flaky tests are rerun in CI) from which you can study the logs.
<details><summary>Sample error message</summary>
```
Traceback (most recent call last):
File "/var/lib/jenkins/workspace/test/inductor/test_torchinductor.py", line 13293, in new_test
return value(self)
File "/var/lib/jenkins/workspace/test/inductor/test_torchinductor.py", line 6386, in test_remove_noop_slice
self.assertExpectedInline(
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 3097, in assertExpectedInline
return super().assertExpectedInline(actual if isinstance(actual, str) else str(actual), expect, skip + 1)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/expecttest/__init__.py", line 413, in assertExpectedInline
assert_expected_inline(
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/expecttest/__init__.py", line 378, in assert_expected_inline
assert_eq(expect, actual, msg=help_text)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/expecttest/__init__.py", line 450, in assertMultiLineEqualMaybeCppStack
self.assertMultiLineEqual(expect, actual, *args, **kwargs)
File "/opt/conda/envs/py_3.10/lib/python3.10/unittest/case.py", line 1226, in assertMultiLineEqual
self.fail(self._formatMessage(msg, standardMsg))
File "/opt/conda/envs/py_3.10/lib/python3.10/unittest/case.py", line 675, in fail
raise self.failureException(msg)
AssertionError: 'def forward(self, arg0_1: "Sym(s77)", arg[333 chars]_9,)' != ''
- def forward(self, arg0_1: "Sym(s77)", arg1_1: "Sym(s27)", arg2_1: "Sym(s53)", arg3_1: "f32[s77, s27, s53][s27*s53, s53, 1]cuda:0"):
- add: "f32[s77, s27, s53][s27*s53, s53, 1]cuda:0" = torch.ops.aten.add.Tensor(arg3_1, 1); arg3_1 = None
- add_9: "f32[s77, s27, s53][s27*s53, s53, 1]cuda:0" = torch.ops.aten.add.Tensor(add, 1); add = None
- return (add_9,) : To accept the new output, re-run test with envvar EXPECTTEST_ACCEPT=1 (we recommend staging/committing your changes before doing this)
To execute this test, run the following from the base repo dir:
python test/inductor/test_compile_subprocess.py GPUTests.test_remove_noop_slice_cuda
This message can be suppressed by setting PYTORCH_PRINT_REPRO_ON_FAILURE=0
```
</details>
Test file path: `inductor/test_compile_subprocess.py`
cc @clee2000 @ezyang @SherlockNoMad @EikanWang @jgong5 @wenzhe-nrv
| true
|
2,998,033,592
|
DISABLED test_remove_noop_slice_scatter_cpu (__main__.CpuTests)
|
pytorch-bot[bot]
|
open
|
[
"triaged",
"module: flaky-tests",
"skipped",
"module: fx"
] | 8
|
NONE
|
Platforms: asan, linux, mac, macos, rocm
This test was disabled because it is failing in CI. See [recent examples](https://hud.pytorch.org/flakytest?name=test_remove_noop_slice_scatter_cpu&suite=CpuTests&limit=100) and the most recent trunk [workflow logs](https://github.com/pytorch/pytorch/runs/40610240284).
Over the past 3 hours, it has been determined flaky in 89 workflow(s) with 178 failures and 89 successes.
**Debugging instructions (after clicking on the recent samples link):**
DO NOT ASSUME THINGS ARE OKAY IF THE CI IS GREEN. We now shield flaky tests from developers so CI will thus be green but it will be harder to parse the logs.
To find relevant log snippets:
1. Click on the workflow logs linked above
2. Click on the Test step of the job so that it is expanded. Otherwise, the grepping will not work.
3. Grep for `test_remove_noop_slice_scatter_cpu`
4. There should be several instances run (as flaky tests are rerun in CI) from which you can study the logs.
Test file path: `inductor/test_compile_subprocess.py`
cc @clee2000 @ezyang @SherlockNoMad @EikanWang @jgong5 @wenzhe-nrv
| true
|
2,998,033,561
|
DISABLED test_remove_noop_slice1_cuda (__main__.GPUTests)
|
pytorch-bot[bot]
|
open
|
[
"triaged",
"module: flaky-tests",
"skipped",
"module: fx"
] | 5
|
NONE
|
Platforms: linux, rocm, slow
This test was disabled because it is failing in CI. See [recent examples](https://hud.pytorch.org/flakytest?name=test_remove_noop_slice1_cuda&suite=GPUTests&limit=100) and the most recent trunk [workflow logs](https://github.com/pytorch/pytorch/runs/40611206448).
Over the past 3 hours, it has been determined flaky in 19 workflow(s) with 38 failures and 19 successes.
**Debugging instructions (after clicking on the recent samples link):**
DO NOT ASSUME THINGS ARE OKAY IF THE CI IS GREEN. We now shield flaky tests from developers so CI will thus be green but it will be harder to parse the logs.
To find relevant log snippets:
1. Click on the workflow logs linked above
2. Click on the Test step of the job so that it is expanded. Otherwise, the grepping will not work.
3. Grep for `test_remove_noop_slice1_cuda`
4. There should be several instances run (as flaky tests are rerun in CI) from which you can study the logs.
<details><summary>Sample error message</summary>
```
Traceback (most recent call last):
File "/var/lib/jenkins/workspace/test/inductor/test_torchinductor.py", line 13293, in new_test
return value(self)
File "/var/lib/jenkins/workspace/test/inductor/test_torchinductor.py", line 6410, in test_remove_noop_slice1
self.assertExpectedInline(
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 3097, in assertExpectedInline
return super().assertExpectedInline(actual if isinstance(actual, str) else str(actual), expect, skip + 1)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/expecttest/__init__.py", line 413, in assertExpectedInline
assert_expected_inline(
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/expecttest/__init__.py", line 378, in assert_expected_inline
assert_eq(expect, actual, msg=help_text)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/expecttest/__init__.py", line 450, in assertMultiLineEqualMaybeCppStack
self.assertMultiLineEqual(expect, actual, *args, **kwargs)
File "/opt/conda/envs/py_3.10/lib/python3.10/unittest/case.py", line 1226, in assertMultiLineEqual
self.fail(self._formatMessage(msg, standardMsg))
File "/opt/conda/envs/py_3.10/lib/python3.10/unittest/case.py", line 675, in fail
raise self.failureException(msg)
AssertionError: 'def forward(self, arg0_1: "Sym(s77)", arg[416 chars]_9,)' != ''
- def forward(self, arg0_1: "Sym(s77)", arg1_1: "Sym(s27)", arg2_1: "f32[s77, s27, 2][2*s27, 2, 1]cuda:0"):
- add: "f32[s77, s27, 2][2*s27, 2, 1]cuda:0" = torch.ops.aten.add.Tensor(arg2_1, 1); arg2_1 = None
- slice_1: "f32[s77, s27, 1][2*s27, 2, 1]cuda:0" = torch.ops.aten.slice.Tensor(add, -1, 0, -1); add = None
- add_9: "f32[s77, s27, 1][s27, 1, 1]cuda:0" = torch.ops.aten.add.Tensor(slice_1, 1); slice_1 = None
- return (add_9,) : To accept the new output, re-run test with envvar EXPECTTEST_ACCEPT=1 (we recommend staging/committing your changes before doing this)
To execute this test, run the following from the base repo dir:
python test/inductor/test_compile_subprocess.py GPUTests.test_remove_noop_slice1_cuda
This message can be suppressed by setting PYTORCH_PRINT_REPRO_ON_FAILURE=0
```
</details>
Test file path: `inductor/test_compile_subprocess.py`
cc @clee2000 @ezyang @SherlockNoMad @EikanWang @jgong5 @wenzhe-nrv
| true
|
2,998,033,557
|
DISABLED test_einsum_cpu (__main__.TestUnbackedSymintsCPU)
|
pytorch-bot[bot]
|
closed
|
[
"triaged",
"module: flaky-tests",
"module: macos",
"skipped",
"oncall: pt2",
"module: inductor"
] | 2
|
NONE
|
Platforms: mac, macos
This test was disabled because it is failing in CI. See [recent examples](https://hud.pytorch.org/flakytest?name=test_einsum_cpu&suite=TestUnbackedSymintsCPU&limit=100) and the most recent trunk [workflow logs](https://github.com/pytorch/pytorch/runs/40612679742).
Over the past 3 hours, it has been determined flaky in 5 workflow(s) with 6 failures and 5 successes.
**Debugging instructions (after clicking on the recent samples link):**
DO NOT ASSUME THINGS ARE OKAY IF THE CI IS GREEN. We now shield flaky tests from developers so CI will thus be green but it will be harder to parse the logs.
To find relevant log snippets:
1. Click on the workflow logs linked above
2. Click on the Test step of the job so that it is expanded. Otherwise, the grepping will not work.
3. Grep for `test_einsum_cpu`
4. There should be several instances run (as flaky tests are rerun in CI) from which you can study the logs.
<details><summary>Sample error message</summary>
```
Traceback (most recent call last):
File "/Users/ec2-user/runner/_work/pytorch/pytorch/test/inductor/test_unbacked_symints.py", line 466, in test_einsum
torch.testing.assert_close(actual, expected)
File "/Users/ec2-user/runner/_work/_temp/conda_environment_14477982802/lib/python3.9/site-packages/torch/testing/_comparison.py", line 1519, in assert_close
raise error_metas[0].to_error(msg)
AssertionError: Tensor-likes are not close!
Mismatched elements: 12144640 / 31457280 (38.6%)
Greatest absolute difference: nan at index (0, 0, 0, 0) (up to 1e-05 allowed)
Greatest relative difference: nan at index (0, 0, 0, 0) (up to 1.3e-06 allowed)
The failure occurred for item [0]
To execute this test, run the following from the base repo dir:
python test/inductor/test_unbacked_symints.py TestUnbackedSymintsCPU.test_einsum_cpu
This message can be suppressed by setting PYTORCH_PRINT_REPRO_ON_FAILURE=0
```
</details>
Test file path: `inductor/test_unbacked_symints.py`
cc @clee2000 @malfet @albanD @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
| true
|
2,998,033,527
|
DISABLED test_remove_noop_slice1_cpu (__main__.CpuTests)
|
pytorch-bot[bot]
|
open
|
[
"triaged",
"module: flaky-tests",
"skipped",
"module: fx"
] | 6
|
NONE
|
Platforms: asan, linux, mac, macos, rocm
This test was disabled because it is failing in CI. See [recent examples](https://hud.pytorch.org/flakytest?name=test_remove_noop_slice1_cpu&suite=CpuTests&limit=100) and the most recent trunk [workflow logs](https://github.com/pytorch/pytorch/runs/40609417035).
Over the past 3 hours, it has been determined flaky in 88 workflow(s) with 176 failures and 88 successes.
**Debugging instructions (after clicking on the recent samples link):**
DO NOT ASSUME THINGS ARE OKAY IF THE CI IS GREEN. We now shield flaky tests from developers so CI will thus be green but it will be harder to parse the logs.
To find relevant log snippets:
1. Click on the workflow logs linked above
2. Click on the Test step of the job so that it is expanded. Otherwise, the grepping will not work.
3. Grep for `test_remove_noop_slice1_cpu`
4. There should be several instances run (as flaky tests are rerun in CI) from which you can study the logs.
Test file path: `inductor/test_compile_subprocess.py`
cc @clee2000 @ezyang @SherlockNoMad @EikanWang @jgong5 @wenzhe-nrv
| true
|
2,998,033,477
|
DISABLED test_remove_noop_slice_scatter_cuda (__main__.GPUTests)
|
pytorch-bot[bot]
|
open
|
[
"triaged",
"module: flaky-tests",
"skipped",
"module: fx"
] | 5
|
NONE
|
Platforms: linux, rocm, slow
This test was disabled because it is failing in CI. See [recent examples](https://hud.pytorch.org/flakytest?name=test_remove_noop_slice_scatter_cuda&suite=GPUTests&limit=100) and the most recent trunk [workflow logs](https://github.com/pytorch/pytorch/runs/40609879619).
Over the past 3 hours, it has been determined flaky in 21 workflow(s) with 42 failures and 21 successes.
**Debugging instructions (after clicking on the recent samples link):**
DO NOT ASSUME THINGS ARE OKAY IF THE CI IS GREEN. We now shield flaky tests from developers so CI will thus be green but it will be harder to parse the logs.
To find relevant log snippets:
1. Click on the workflow logs linked above
2. Click on the Test step of the job so that it is expanded. Otherwise, the grepping will not work.
3. Grep for `test_remove_noop_slice_scatter_cuda`
4. There should be several instances run (as flaky tests are rerun in CI) from which you can study the logs.
<details><summary>Sample error message</summary>
```
Traceback (most recent call last):
File "/var/lib/jenkins/workspace/test/inductor/test_torchinductor.py", line 13293, in new_test
return value(self)
File "/var/lib/jenkins/workspace/test/inductor/test_torchinductor.py", line 6440, in test_remove_noop_slice_scatter
self.assertExpectedInline(
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 3097, in assertExpectedInline
return super().assertExpectedInline(actual if isinstance(actual, str) else str(actual), expect, skip + 1)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/expecttest/__init__.py", line 413, in assertExpectedInline
assert_expected_inline(
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/expecttest/__init__.py", line 378, in assert_expected_inline
assert_eq(expect, actual, msg=help_text)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/expecttest/__init__.py", line 450, in assertMultiLineEqualMaybeCppStack
self.assertMultiLineEqual(expect, actual, *args, **kwargs)
File "/opt/conda/envs/py_3.10/lib/python3.10/unittest/case.py", line 1226, in assertMultiLineEqual
self.fail(self._formatMessage(msg, standardMsg))
File "/opt/conda/envs/py_3.10/lib/python3.10/unittest/case.py", line 675, in fail
raise self.failureException(msg)
AssertionError: 'def forward(self, arg0_1: "Sym(s77)", arg[738 chars]13,)' != ''
- def forward(self, arg0_1: "Sym(s77)", arg1_1: "Sym(s27)", arg2_1: "Sym(s53)", arg3_1: "f32[s77, s27, s53][s27*s53, s53, 1]cuda:0"):
- empty: "f32[s77, s27, s53][s27*s53, s53, 1]cuda:0" = torch.ops.aten.empty.memory_format([arg0_1, arg1_1, arg2_1], dtype = torch.float32, layout = torch.strided, device = device(type='cuda', index=0), pin_memory = False); arg0_1 = arg1_1 = arg2_1 = None
- permute: "f32[s77, s27, s53][s27*s53, s53, 1]cuda:0" = torch.ops.aten.permute.default(empty, [0, 1, 2]); empty = permute = None
- add: "f32[s77, s27, s53][s27*s53, s53, 1]cuda:0" = torch.ops.aten.add.Tensor(arg3_1, 1); arg3_1 = None
- add_13: "f32[s77, s27, s53][s27*s53, s53, 1]cuda:0" = torch.ops.aten.add.Tensor(add, 1); add = None
- return (add_13,) : To accept the new output, re-run test with envvar EXPECTTEST_ACCEPT=1 (we recommend staging/committing your changes before doing this)
To execute this test, run the following from the base repo dir:
python test/inductor/test_compile_subprocess.py GPUTests.test_remove_noop_slice_scatter_cuda
This message can be suppressed by setting PYTORCH_PRINT_REPRO_ON_FAILURE=0
```
</details>
Test file path: `inductor/test_compile_subprocess.py`
cc @clee2000 @ezyang @SherlockNoMad @EikanWang @jgong5 @wenzhe-nrv
| true
|
2,998,017,841
|
Do not propagate real tensor in extern kernel
|
yushangdi
|
closed
|
[
"fb-exported",
"Merged",
"ciflow/trunk",
"topic: not user facing",
"module: inductor",
"ciflow/inductor"
] | 5
|
CONTRIBUTOR
|
Summary: See internal Diff for more details.
In ExternKernel, the FakeTensors do not have associated real tensors, because they are just created from ir.Node's shape and stride.
Test Plan:
```
buck run fbcode//mode/dev-nosan //caffe2/test/inductor:test_aot_inductor -- -r aoti_data_dependent_ex
buck2 run mode/dev-nosan fbcode//caffe2/test/inductor:aot_inductor_arrayref_cpu -- -r data_dependent_extern_kernel_op
```
Differential Revision: D73002775
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
| true
|
2,998,004,288
|
Request Pytorch Support for RTX 5000-Series GPU's and CUDA sm_120 Capabilities
|
jvossler
|
closed
|
[
"module: cuda",
"triaged"
] | 7
|
NONE
|
### 🚀 The feature, motivation and pitch
Request PyTorch support for RTX 5000-series GPUs and CUDA sm_120 capabilities.
### Alternatives
_No response_
### Additional context
```python
import torch

# Check if CUDA is available
print(f"CUDA available: {torch.cuda.is_available()}")

# Check CUDA version PyTorch was built with
if torch.cuda.is_available():
    print(f"CUDA version: {torch.version.cuda}")
    # Check how many GPUs are available
    print(f"GPU count: {torch.cuda.device_count()}")
    # Get name of GPU
    print(f"GPU name: {torch.cuda.get_device_name(0)}")
    # Run a simple tensor operation on GPU
    x = torch.tensor([1.0, 2.0, 3.0], device='cuda')
    try:
        y = x * 2
        print(f"GPU computation test: {y}")
        print("GPU test passed successfully!")
    except Exception as e:
        print(f"Error in GPU test: {e}")
else:
    print("CUDA is not available. PyTorch will run on CPU only.")

###

# Check basic PyTorch installation
print(f"PyTorch version: {torch.__version__}")
print(f"CUDA available but not compatible: {torch.cuda.is_available()}")
if torch.cuda.is_available():
    print(f"CUDA version: {torch.version.cuda}")
    print(f"GPU count: {torch.cuda.device_count()}")
    print(f"GPU name: {torch.cuda.get_device_name(0)}")
    print("Note: This GPU is detected but not compatible with current PyTorch")

# Force CPU operations
print("\nRunning CPU tests instead:")
x = torch.tensor([1.0, 2.0, 3.0], device='cpu')
y = x * 2
print(f"CPU computation test: {y}")

# More comprehensive CPU test
start_time = torch.cuda.Event(enable_timing=True) if torch.cuda.is_available() else None
end_time = torch.cuda.Event(enable_timing=True) if torch.cuda.is_available() else None

import time

cpu_start = time.time()
a = torch.randn(1000, 1000)
b = torch.randn(1000, 1000)
c = torch.matmul(a, b)
cpu_end = time.time()
print(f"CPU matrix multiplication time: {cpu_end - cpu_start:.4f} seconds")
print("CPU tests passed successfully!")
```
```
(practical_deep_learning) ubuntu@DESKTOP-PECCAOU:/workspaces/AI_ML/Practical_Deep_Learning_for_Coders/practical_deep_learning$ python gpu_pytorch_test.py
CUDA available: True
CUDA version: 12.4
GPU count: 1
/workspaces/AI_ML/Practical_Deep_Learning_for_Coders/practical_deep_learning/.magic/envs/default/lib/python3.12/site-packages/torch/cuda/__init__.py:235: UserWarning:
NVIDIA GeForce RTX 5080 with CUDA capability sm_120 is not compatible with the current PyTorch installation.
The current PyTorch install supports CUDA capabilities sm_50 sm_60 sm_61 sm_70 sm_75 sm_80 sm_86 sm_90.
If you want to use the NVIDIA GeForce RTX 5080 GPU with PyTorch, please check the instructions at https://pytorch.org/get-started/locally/
warnings.warn(
GPU name: NVIDIA GeForce RTX 5080
Error in GPU test: CUDA error: no kernel image is available for execution on the device
CUDA kernel errors might be asynchronously reported at some other API call, so the stacktrace below might be incorrect.
For debugging consider passing CUDA_LAUNCH_BLOCKING=1
Compile with TORCH_USE_CUDA_DSA to enable device-side assertions.
PyTorch version: 2.6.0.dev20241112
CUDA available but not compatible: True
CUDA version: 12.4
GPU count: 1
GPU name: NVIDIA GeForce RTX 5080
Note: This GPU is detected but not compatible with current PyTorch
Running CPU tests instead:
CPU computation test: tensor([2., 4., 6.])
CPU matrix multiplication time: 0.0177 seconds
CPU tests passed successfully!
(practical_deep_learning) ubuntu@DESKTOP-PECCAOU:/workspaces/AI_ML/Practical_Deep_Learning_for_Coders/practical_deep_learning$
```
cc @ptrblck @msaroufim @eqy
| true
|
2,997,982,400
|
Add ccode for CeilToInt and IntTrueDiv
|
sidt-meta
|
closed
|
[
"module: cpu",
"fb-exported",
"Merged",
"ciflow/trunk",
"topic: not user facing"
] | 6
|
CONTRIBUTOR
|
Summary: As titled
Test Plan: Test in D73052653 -- shape calculator generates successfully
Differential Revision: D73073845
cc @jgong5 @mingfeima @XiaobingSuper @sanchitintel @ashokei @jingxu10
| true
|
2,997,959,927
|
[CI][NoOp] Update skip reason for argmin_with_nan
|
malfet
|
closed
|
[
"Merged",
"topic: not user facing",
"ciflow/mps",
"module: inductor",
"ciflow/inductor"
] | 3
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #151374
Which is https://github.com/pytorch/pytorch/issues/130295 (i.e., torch.compile produces correct results, but eager does not)
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
| true
|
2,997,942,452
|
[C10D] avoid computing global_rank when group_rank is used
|
wconstab
|
closed
|
[
"oncall: distributed",
"Merged",
"ciflow/trunk",
"release notes: distributed (c10d)"
] | 3
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #151373
Collective APIs accept either a group rank or a global rank for src/dst rank.
We provide a helper `_canonicalize_group_rank` which converts from maybe
group or maybe global to one particular format (defined by the kwarg
return_global: bool=False).
In this PR we stop performing the mapping lookup that converts group to
global or global to group in the case that the caller wants us to return
the same value that was passed in. The PR should be functionally
equivalent, except in cases where the mapping itself would raise an
exception but the mapping was not necessary in the first place.
This has come up in cases where people create new process groups outside
of 'init_process_group' APIs and group-specific ranks may not have a
valid mapping to the 'global' rank.
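A minimal sketch of the short-circuit, assuming a simplified signature (the real helper is `_canonicalize_group_rank`; `dist.get_group_rank`/`dist.get_global_rank` stand in for the internal mapping):
```python
import torch.distributed as dist

def _canonicalize_rank_sketch(group, rank, rank_is_global, return_global=False):
    # Caller already passed the format it wants back: skip the mapping lookup
    # entirely, so groups without a valid global-rank mapping still work.
    if rank_is_global == return_global:
        return rank
    if return_global:
        return dist.get_global_rank(group, rank)  # group rank -> global rank
    return dist.get_group_rank(group, rank)       # global rank -> group rank
```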
cc @H-Huang @awgu @wanchaol @fegin @fduwjj @wz337 @d4l3k
| true
|
2,997,939,915
|
Use more efficient mask to index computation
|
aartbik
|
closed
|
[
"triaged",
"open source",
"Merged",
"ciflow/trunk",
"topic: not user facing"
] | 5
|
COLLABORATOR
|
This change addresses the third time/mem "spike" observed in
https://github.com/pytorch/pytorch/issues/151351
The change seems to perform better (time/mem) for both very sparse and very dense cases. It runs faster and claims less memory, as observed on both CPU and GPU. It even avoids OOM for larger cases.
| true
|
2,997,893,841
|
[ONNX] Add a comment for handling bf16/fp8 tensor to numpy conversion
|
justinchuby
|
closed
|
[
"open source",
"Merged",
"release notes: onnx"
] | 3
|
COLLABORATOR
|
Follow up of https://github.com/pytorch/pytorch/pull/151259
| true
|
2,997,889,966
|
cd: S390x defaults to main not release
|
seemethere
|
closed
|
[
"ciflow/binaries",
"topic: not user facing"
] | 1
|
MEMBER
|
This is an oversight on our part, but the s390x images don't have a release version of the manylinux builders.
I also can't find these images on Docker Hub, which leads me to believe they only exist on the nodes themselves and can't be reproduced.
| true
|
2,997,882,148
|
test
|
laithsakka
|
closed
|
[] | 2
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #151369
| true
|
2,997,871,650
|
[ROCm] Upgrade ROCm CI to ROCm6.4
|
jithunnair-amd
|
open
|
[
"oncall: distributed",
"module: rocm",
"open source",
"Merged",
"Reverted",
"ciflow/trunk",
"topic: not user facing",
"ciflow/periodic",
"module: inductor",
"ciflow/inductor",
"keep-going",
"ciflow/rocm",
"ci-no-td",
"ciflow/inductor-rocm",
"ciflow/rocm-mi300",
"ciflow/periodic-rocm-mi300",
"ciflow/pull"
] | 42
|
COLLABORATOR
|
cc @H-Huang @awgu @wanchaol @fegin @fduwjj @wz337 @wconstab @d4l3k @jeffdaily @sunway513 @pruthvistony @ROCmSupport @dllehr-amd @jataylo @hongxiayang @naromero77amd @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
| true
|
2,997,868,744
|
[BE] Fix extra-semi warning in attention.cpp
|
malfet
|
closed
|
[
"Merged",
"topic: not user facing"
] | 3
|
CONTRIBUTOR
|
Introduced by https://github.com/pytorch/pytorch/pull/149512
Before this change, following warning was generated
```
/Users/nshulga/git/pytorch/pytorch/aten/src/ATen/native/transformers/attention.cpp:452:71: warning: extra ';' outside of a function is incompatible with C++98 [-Wc++98-compat-extra-semi]
452 | REGISTER_HPU_DISPATCH(_fused_sdp_choice_stub, &_fused_sdp_choice_meta);
| ^
1 warning generated.
```
| true
|
2,997,861,389
|
ep.module() error out after ep.run_decomposition
|
yushangdi
|
closed
|
[
"oncall: pt2",
"export-triaged",
"oncall: export"
] | 0
|
CONTRIBUTOR
|
### 🐛 Describe the bug
The repro:
```python
import torch
from torch.testing._internal.custom_tensor import CustomTensorPlainOut
class Foo(torch.nn.Module):
def __init__(self):
super().__init__()
self.p1 = torch.nn.Parameter(torch.ones(3, 4))
self.p2 = torch.nn.Parameter(
CustomTensorPlainOut(
torch.ones(3, 4),
torch.ones(3, 4),
)
)
def forward(self, x):
a = (2 * self.p1 + self.p2).sum()
return x + a
model = Foo()
example_inputs = (torch.randn(3, 4),)
ep = torch.export.export(model, example_inputs, strict=False)
ep.run_decompositions()
ep.module()
```
The error:
```
KeyError: 'p2'
---------------------------------------------------------------------------
KeyError Traceback (most recent call last)
Cell In[68], line 105
103 ep = torch.export.export(model, example_inputs, strict=False)
104 ep.run_decompositions()
--> 105 ep.module()
File /data/users/shangdiy/.bento/kernels/bento_kernel_pytorch/2532/bento_kernel_pytorch_binary-inplace#link-tree/torch/export/exported_program.py:1297, in module(self)
File /data/users/shangdiy/.bento/kernels/bento_kernel_pytorch/2532/bento_kernel_pytorch_binary-inplace#link-tree/torch/export/_unlift.py:420, in _unlift_exported_program_lifted_states(ep)
File /data/users/shangdiy/.bento/kernels/bento_kernel_pytorch/2532/bento_kernel_pytorch_binary-inplace#link-tree/torch/export/_unlift.py:265, in _register_attrs_to_new_gm(new_gm, graph_signature, state_dict, constants)
KeyError: 'p2'
```
ep's state_dict before we run decomposition:
```
{'p1': Parameter containing:
tensor([[1., 1., 1., 1.],
[1., 1., 1., 1.],
[1., 1., 1., 1.]], requires_grad=True), 'p2': CustomTensorPlainOut(tensor([[1., 1., 1., 1.],
[1., 1., 1., 1.],
[1., 1., 1., 1.]]), tensor([[1., 1., 1., 1.],
[1., 1., 1., 1.],
[1., 1., 1., 1.]]))}
```
The state_dict of ep after:
```
{'p1': Parameter containing:
tensor([[1., 1., 1., 1.],
[1., 1., 1., 1.],
[1., 1., 1., 1.]], requires_grad=True),
'parametrizations.p2.original0': Parameter containing:
tensor([[1., 1., 1., 1.],
[1., 1., 1., 1.],
[1., 1., 1., 1.]], requires_grad=True),
'parametrizations.p2.original1': Parameter containing:
tensor([[1., 1., 1., 1.],
[1., 1., 1., 1.],
[1., 1., 1., 1.]], requires_grad=True)}
```
cc @chauhang @penguinwu @avikchaudhuri @gmagogsfm @zhxchen17 @tugsbayasgalan @angelayi @suo @ydwu4
### Versions
14.15 warm
| true
|
2,997,858,395
|
flex_attention error in torch.compile
|
jjh42
|
closed
|
[
"oncall: pt2",
"module: pt2-dispatcher",
"module: flex attention"
] | 2
|
CONTRIBUTOR
|
### 🐛 Describe the bug
I haven't got a good, standalone reproduction of the issue yet.
Using flex_attention + torch.compile, I get an error that "Query must be contiguous in the last dimension" (full stack trace below).
If I run in eager mode, `q.stride()` always reports 1 for the last dimension. Even more mystifying to me, this is triggered by some preprocessing code far away from where the error occurs (which is one of the reasons I haven't managed to write a simple reproduction). Adding `q = q.contiguous()` doesn't make any difference.
```
File "/tmp/elefant-uv-env/lib/python3.12/site-packages/torch/_inductor/kernel/flex_attention.py", line 1419, in flex_attention
assert q_strides[-1] == 1, "Query must be contiguous in the last dimension"
^^^^^^^^^^^^^^^^^^
torch._inductor.exc.InductorError: LoweringException: AssertionError: Query must be contiguous in the last dimension
target: flex_attention
args[0]: TensorBox(StorageBox(
ComputedBuffer(name='buf408', layout=FixedLayout('cuda:0', torch.bfloat16, size=[25, 8, 1200, 32], stride=[307200, 1, 256, 8]), data=Pointwise(device=device(type='cuda', index=0), dtype=torch.bfloat16, inner_fn=<function ReinterpretView.make_loader.<locals>.loader at 0x788a740e4400>, ranges=[25, 8, 1200, 32]))
))
args[1]: TensorBox(StorageBox(
ComputedBuffer(name='buf410', layout=FixedLayout('cuda:0', torch.bfloat16, size=[25, 8, 1200, 32], stride=[307200, 1, 256, 8]), data=Pointwise(device=device(type='cuda', index=0), dtype=torch.bfloat16, inner_fn=<function ReinterpretView.make_loader.<locals>.loader at 0x788a740cd080>, ranges=[25, 8, 1200, 32]))
))
args[2]: TensorBox(
ReinterpretView(
StorageBox(
ExternKernelOut(
python_kernel_name='extern_kernels.mm',
name=buf406,
layout=FixedLayout('cuda:0', torch.bfloat16, size=[30000, 768], stride=[768, 1]),
inputs=[ReinterpretView(
StorageBox(
ComputedBuffer(name='buf405', layout=FixedLayout('cuda:0', torch.bfloat16, size=[25, 1200, 256], stride=[307200, 256, 1]), data=Pointwise(device=device(type='cuda', index=0), dtype=torch.bfloat16, inner_fn=<function make_pointwise.<locals>.inner.<locals>.inner_fn at 0x788a717945e0>, ranges=[25, 1200, 256]))
),
FixedLayout('cuda:0', torch.bfloat16, size=[30000, 256], stride=[256, 1]),
origins=OrderedSet([mm])
), ComputedBuffer(name='buf404', layout=FixedLayout('cuda:0', torch.bfloat16, size=[256, 768], stride=[1, 256]), data=Pointwise(device=device(type='cuda', index=0), dtype=torch.bfloat16, inner_fn=<function BaseView.make_loader.<locals>.loader at 0x788a71794d60>, ranges=[256, 768]))],
constant_args=(),
kwargs={},
output_view=None,
python_kernel_name=extern_kernels.mm,
cpp_kernel_name=at::mm_out,
ordered_kwargs_for_cpp_kernel=(),
op_overload=None,
arg_properties=[{}, {}],
kwarg_properties=None,
unbacked_bindings={},
mutation_outputs=[],
origin_node=mm,
origins=OrderedSet([mm])
)
),
FixedLayout('cuda:0', torch.bfloat16, size=[25, 8, 1200, 32], stride=[921600, 32, 768, 1], offset=512),
origins=OrderedSet([permute_5])
)
)
args[3]: Subgraph(name='sdpa_score0', graph_module=<lambda>(), graph=None)
args[4]: (1200, 1200, TensorBox(StorageBox(
InputBuffer(name='primals_173', layout=FixedLayout('cuda:0', torch.int32, size=[1, 1, 10], stride=[10, 10, 1]))
)), TensorBox(StorageBox(
InputBuffer(name='primals_172', layout=FixedLayout('cuda:0', torch.int32, size=[1, 1, 10, 10], stride=[100, 100, 10, 1]))
)), TensorBox(StorageBox(
InputBuffer(name='primals_174', layout=FixedLayout('cuda:0', torch.int32, size=[1, 1, 10], stride=[10, 10, 1]))
)), TensorBox(StorageBox(
InputBuffer(name='primals_175', layout=FixedLayout('cuda:0', torch.int32, size=[1, 1, 10, 10], stride=[100, 100, 10, 1]))
)), TensorBox(StorageBox(
InputBuffer(name='primals_176', layout=FixedLayout('cuda:0', torch.int32, size=[1, 1, 10], stride=[10, 10, 1]))
)), TensorBox(StorageBox(
InputBuffer(name='primals_177', layout=FixedLayout('cuda:0', torch.int32, size=[1, 1, 10, 10], stride=[100, 100, 10, 1]))
)), TensorBox(StorageBox(
InputBuffer(name='primals_178', layout=FixedLayout('cuda:0', torch.int32, size=[1, 1, 10], stride=[10, 10, 1]))
)), TensorBox(StorageBox(
InputBuffer(name='primals_179', layout=FixedLayout('cuda:0', torch.int32, size=[1, 1, 10, 10], stride=[100, 100, 10, 1]))
)), 128, 128, Subgraph(name='sdpa_mask0', graph_module=<lambda>(), graph=None))
args[5]: 0.17677669529663687
args[6]: {'PRESCALE_QK': False, 'ROWS_GUARANTEED_SAFE': False, 'BLOCKS_ARE_CONTIGUOUS': False, 'WRITE_DQ': True, 'OUTPUT_LOGSUMEXP': True}
args[7]: ()
args[8]: ()
```
### Versions
python 3.12.3
pytorch nightly (April 15)
cc @chauhang @penguinwu @zou3519 @bdhirsh @Chillee @drisspg @yanboliang @BoyuanFeng @ydwu4
| true
|
2,997,800,678
|
[bazel] Build flatbuffers within bazel
|
jhance
|
closed
|
[
"triaged",
"open source",
"Merged",
"ciflow/trunk",
"release notes: build",
"topic: bug fixes",
"topic: not user facing"
] | 8
|
CONTRIBUTOR
|
This is similar to how we handle protobufs, and it makes it more convenient for Bazel users to manage their version of flatbuffers. (Flatbuffers is very picky about the generated code matching the runtime.) Instead of using the checked-in generated code, we generate it on the fly.
This is relevant to https://github.com/pytorch/pytorch/issues/112903, because having the version of flatbuffers tied to pytorch will make pytorch difficult to use as an external workspace.
| true
|
2,997,787,368
|
fix test_einsum: use initialized values
|
ColinPeppler
|
closed
|
[
"fb-exported",
"Merged",
"ciflow/trunk",
"topic: not user facing",
"module: inductor",
"ciflow/inductor"
] | 16
|
CONTRIBUTOR
|
Summary: `empty` returns uninitialized values, which could be NaNs; as a result, `assert_close` kept failing in fbcode.
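For context, a minimal illustration of why `empty` makes the comparison flaky (variable names are just for illustration):
```python
import torch

a = torch.empty(4, 4)   # uninitialized memory: may contain NaNs or garbage,
                        # so eager vs. compiled comparisons are not meaningful
b = torch.randn(4, 4)   # initialized values give a stable comparison
```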
Differential Revision: D73067722
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
| true
|
2,997,778,206
|
[dynamo] support fb internal bytecode EAGER_IMPORT_NAME
|
williamwen42
|
closed
|
[
"Merged",
"ciflow/trunk",
"topic: not user facing",
"module: dynamo",
"ciflow/inductor",
"module: graph breaks"
] | 4
|
MEMBER
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #151362
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @chauhang @amjames
Differential Revision: [D73127097](https://our.internmc.facebook.com/intern/diff/D73127097)
| true
|
2,997,765,675
|
[c10] helpers for runtime c10::alias re-use
|
dolpm
|
closed
|
[
"fb-exported",
"Merged",
"ciflow/trunk",
"topic: not user facing"
] | 17
|
CONTRIBUTOR
|
Summary: we need these to check whether the input tensor was re-sized/strided between executions when choosing to alias
Test Plan: CI
Reviewed By: henryoier
Differential Revision: D73061676
| true
|
2,997,735,036
|
ROCm mx-fp4 Support
|
petrex
|
open
|
[
"module: rocm",
"triaged",
"open source"
] | 2
|
CONTRIBUTOR
|
TLDR: ROCm mx-fp4 support on gfx950
This pull request includes updates to support new data types and versions for CUDA and ROCm in various files. The most important changes include adding support for ROCm 6.5 and above for specific data types and updating the `hipify` mappings to include new attributes.
### Support for new data types and versions:
* [`aten/src/ATen/cuda/CUDABlas.cpp`](diffhunk://#diff-74fcb26047c1df4024105d36ce22a36b77cf8cc93c28631d743e639b3d6066aeL1606-R1617): Updated conditions to support `torch.float8_e8m0fnu` and `torch.float8_e4m3fn` scales for CUDA 12.8 or ROCm 6.5 and above.
* [`aten/src/ATen/cuda/tunable/GemmHipblaslt.h`](diffhunk://#diff-bfa1a3b5d4bef1892bf50338775f3b0fd8cd31fc1868148f3968b98aefb68e3fR88-R96): Added support for `c10::Float4_e2m1fn_x2` data type for ROCm 6.5 and above.
* [`aten/src/ATen/native/cuda/Blas.cpp`](diffhunk://#diff-e8a569efee1e650172f120a0fdcda024fe3e4703a4ee3336425c8f685af6b3abR1174-R1188): Added checks to ensure `Float4_e2m1fn_x2`, `Float8_e5m2`, and `Float8_e4m3fn` data types are only used with ROCm 6.5 and above.
### Updates to `hipify` mappings:
* [`torch/utils/hipify/cuda_to_hip_mappings.py`](diffhunk://#diff-85bd10d67a85149584e7d7a8cba533241f7ad14450e5d54ffec23da34032429aR7342-R7345): Added mappings for `CUBLASLT_MATMUL_DESC_A_SCALE_MODE`, `CUBLASLT_MATMUL_DESC_B_SCALE_MODE`, `CUBLASLT_MATMUL_MATRIX_SCALE_VEC32_UE8M0`, and `CUBLASLT_MATMUL_MATRIX_SCALE_VEC16_UE4M3`.
cc @jeffdaily @sunway513 @jithunnair-amd @pruthvistony @ROCmSupport @dllehr-amd @jataylo @hongxiayang @naromero77amd
| true
|
2,997,731,385
|
FlexDecode not guarding on GQA groups correctly
|
drisspg
|
open
|
[
"triaged",
"bug",
"oncall: pt2",
"module: higher order operators",
"module: pt2-dispatcher",
"module: flex attention"
] | 1
|
CONTRIBUTOR
|
# Summary
When attempting to run FlexAttention in flex_decode settings with HQ = 12 and HKV = 2 (i.e., 6 shared query heads per KV head), we hit the following error:
`torch._inductor.exc.InductorError: LoweringException: ValueError: Number of shared query heads sharing the same KV head must be power of 2. `
We should ideally remove this restriction, but at the very least we should correctly update our flex_decode dispatch.
Seen for Qwen2 1.5B.
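A hypothetical repro sketch of this configuration (shapes, dtype, and device are assumptions, not taken from the Qwen2 run):
```python
import torch
from torch.nn.attention.flex_attention import flex_attention

# Decode-style query (seq_len 1): HQ=12 query heads share HKV=2 KV heads -> 6 groups,
# which is not a power of 2.
B, HQ, HKV, S, D = 1, 12, 2, 1024, 64
q = torch.randn(B, HQ, 1, D, device="cuda", dtype=torch.bfloat16)
k = torch.randn(B, HKV, S, D, device="cuda", dtype=torch.bfloat16)
v = torch.randn(B, HKV, S, D, device="cuda", dtype=torch.bfloat16)

out = torch.compile(flex_attention)(q, k, v, enable_gqa=True)  # hits the power-of-2 groups check
```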
cc @chauhang @penguinwu @zou3519 @ydwu4 @bdhirsh @Chillee @yanboliang @BoyuanFeng
| true
|
2,997,725,694
|
FlexAttention ModIndex misses cache hit for autograd func
|
drisspg
|
open
|
[
"triaged",
"oncall: pt2",
"module: higher order operators",
"module: pt2-dispatcher",
"module: flex attention"
] | 0
|
CONTRIBUTOR
|
# Summary
While working on https://github.com/vllm-project/vllm/pull/16078, Richard and I noticed that we miss the cache on repeated runs of "compile_block_mask" because of the mod_index autograd func:
https://github.com/pytorch/pytorch/blob/21c2565f35f1d5034c3244066b61e58eb5148781/torch/_dynamo/_trace_wrapped_higher_order_op.py#L141
The fix is to check whether grad mode is enabled and x requires grad: if so, run the autograd function; otherwise, call the contents of forward directly.
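A minimal sketch of that dispatch, using illustrative placeholder names (not the actual symbols in `_trace_wrapped_higher_order_op.py`):
```python
import torch

class _ModIndexSketch(torch.autograd.Function):
    @staticmethod
    def forward(ctx, x, indices):
        return torch.ops.aten.index(x, indices)

    @staticmethod
    def backward(ctx, grad):
        raise NotImplementedError  # the real backward lives in the actual ModIndex

def mod_index(x, indices):
    # Only route through the autograd.Function when gradients are actually needed;
    # otherwise call the forward's contents directly, so repeated block-mask
    # compilation does not miss the cache because of the autograd wrapper.
    if torch.is_grad_enabled() and x.requires_grad:
        return _ModIndexSketch.apply(x, indices)
    return torch.ops.aten.index(x, indices)
```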
cc @chauhang @penguinwu @zou3519 @ydwu4 @bdhirsh @Chillee @yanboliang @BoyuanFeng
| true
|
2,997,717,526
|
[compile][compile time traces] Add more dynamo traces
|
anijain2305
|
open
|
[
"Merged",
"Reverted",
"ciflow/trunk",
"topic: not user facing",
"module: inductor",
"module: dynamo",
"ciflow/inductor",
"ci-no-td"
] | 7
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #151410
* #151409
* #150704
* #150717
* __->__ #151357
* #151256
* #151330
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
| true
|
2,997,714,384
|
Improve Error Message for Dynamic Shape Constraint Violation
|
drisspg
|
closed
|
[
"oncall: pt2",
"module: dynamic shapes"
] | 0
|
CONTRIBUTOR
|
# Description
When a dynamic shape constraint is violated due to specialization, the current error message isn't helpful
## Current behavior
The error message shows that a constraint violation occurred but doesn't provide clear guidance on why the specialization happened:
```
torch.fx.experimental.symbolic_shapes.ConstraintViolationError: Constraints violated (L['query'].size()[2])! For more information, run with TORCH_LOGS="+dynamic".
- Not all values of RelaxedUnspecConstraint(L['query'].size()[2]) are valid because L['query'].size()[2] was inferred to be a constant (22).
```
I have no idea why this constraint violation is here or what it even means. The message should say something along the lines of: "something is forcing the value you are trying to mark dynamic to be static; run with TORCH_LOGS=+dynamic to see what that thing is, and if you think that is wrong, open an issue with a repro."
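A hypothetical minimal repro of this failure mode (not the original model), to show the kind of situation the message should explain:
```python
import torch

def f(query):
    if query.size(2) == 22:   # this branch forces size(2) to specialize to 22
        return query + 1
    return query - 1

query = torch.randn(2, 4, 22, 8)
torch._dynamo.mark_dynamic(query, 2)   # ask for a dynamic dim...
torch.compile(f)(query)                # ...and get a ConstraintViolationError like the one above
```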
cc @chauhang @penguinwu @ezyang @bobrenjc93
| true
|
2,997,699,850
|
[ROCm] upgrade nightly wheels to rocm6.4
|
jithunnair-amd
|
closed
|
[
"module: rocm",
"open source",
"Merged",
"ciflow/binaries",
"topic: not user facing",
"ciflow/rocm",
"no-runner-experiments"
] | 13
|
COLLABORATOR
|
cc @jeffdaily @sunway513 @pruthvistony @ROCmSupport @dllehr-amd @jataylo @hongxiayang @naromero77amd
| true
|
2,997,614,965
|
[test] New calculate docker image action
|
clee2000
|
closed
|
[
"module: rocm",
"topic: not user facing"
] | 1
|
CONTRIBUTOR
|
Testing for https://github.com/pytorch/test-infra/pull/6499/files
cc @jeffdaily @sunway513 @jithunnair-amd @pruthvistony @ROCmSupport @dllehr-amd @jataylo @hongxiayang @naromero77amd
| true
|
2,997,597,627
|
Don't retry on permission denied errors for ECR
|
ZainRizvi
|
open
|
[
"module: ci",
"triaged"
] | 0
|
CONTRIBUTOR
|
A permission-denied error still resulted in retries for 3 hours, even though such a failure would never succeed on retry.
Splitting out an issue from https://github.com/pytorch/pytorch/issues/148771
cc @seemethere @malfet @pytorch/pytorch-dev-infra
| true
|
2,997,585,745
|
[inductor] Implicitly emulate precision casts when intermediates are saved
|
jansel
|
open
|
[
"triaged",
"oncall: pt2",
"module: inductor"
] | 0
|
CONTRIBUTOR
|
This is related to a numerics conversation we had in the inductor meeting.
We have the config:
https://github.com/pytorch/pytorch/blob/f98150fc8e621e76da8fbe101bc56025ca4b7981/torch/_inductor/config.py#L616-L618
When False, it improves numerics relative to eager, but we were seeing a case with a recomputed RMSNorm where it causes differences between forward and backward numerics, where the forward is done in fp32 and the backward in fp16.
One possible fix would be to implicitly apply truncation whenever we write out an intermediate.
For example, suppose you have:
```py
tmp0 = ops.load(<some fp16 tensor>)
tmp1 = <compute something based on tmp0 in fp32>
ops.store(..., tmp1) # save tmp1 in fp16 for use in backwards
tmp2 = <compute something based on tmp1 in fp32>
...
```
This can cause a difference because the ops.store (used in backwards) is fp16, while tmp1 is fp32.
We could instead to:
```py
tmp0 = ops.load(<some fp16 tensor>)
tmp1 = <compute something based on tmp0 in fp32>
tmp1 = tmp1.to(fp16).to(fp32) # NEW: truncate before the store so that tmp2 is computed with fp16 to match backwards
ops.store(..., tmp1) # save tmp1 in fp16 for use in backwards
tmp2 = <compute something based on tmp1, now truncated to fp16>
...
```
This is the same as what `emulate_precision_casts` does, but we *only* do it when we save the intermediate, since saving the intermediate is a signal that the value will be used elsewhere with lower precision.
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
| true
|
2,997,566,101
|
Sparse tensor conversion performance issues (CPU/GPU)
|
aartbik
|
closed
|
[
"module: sparse",
"triaged"
] | 7
|
COLLABORATOR
|
### 🐛 Describe the bug
I was investigating opportunities for improving "activation" sparsity as follows
```
import time
import torch
d = 1024 * 32
A = -torch.ones(d, d)
A[0, 0] = 111
A[10, 10] = 222 # most entries in A are < 0
T = torch.relu(A) # materializes very sparse T as dense first
S = T.to_sparse_csr() # would be nice to have something like S = torch.sparse_relu(A)
# but that is not the point of this bug yet
```
where I really would like to have a "sparsifying" relu to avoid materializing T as a dense intermediate first.
However, while pondering that, I noticed a huge performance (time and memory) difference between converting to COO and converting to CSR. Take the following annotated code
```
.. construct A..
time.sleep(10) # INTERVAL 1
T = torch.relu(A) # materializes very sparse T as dense first
time.sleep(10) # INTERVAL 2
S = T.to_sparse() # to COO
time.sleep(10) # INTERVAL 3
```
and compare memory tracking for COO (to_sparse) with CSR (to_sparse_csr). Attached are the two plots. Both are running on CPU (but the problem occurs for GPU as well, same code path).
It is clear that COO behaves more or less as expected (first bump to get A, second bump to get T, then nothing more for S; again, my initial goal was to avoid the second bump for T, but read on). Then looking at CSR, we get a similar first bump to get A, second bump to T, but then a huge increase in memory to get to S. Also, the time increases substantially (notice the extra time in between INTERVAL 2 and INTERVAL 3).


### Versions
all versions
cc @alexsamardzic @nikitaved @pearu @cpuhrsch @amjames @bhosmer @jcaip
| true
|
2,997,531,397
|
`@pytorchbot rebase -s` can result in a confusing warning
|
malfet
|
closed
|
[
"low priority",
"module: ci",
"triaged",
"enhancement"
] | 4
|
CONTRIBUTOR
|
### 🐛 Describe the bug
Not sure what happened in https://github.com/pytorch/pytorch/pull/146273
but apparently a single pytorchbot command
https://github.com/pytorch/pytorch/pull/146273#issuecomment-2806801272
somehow spawned two concurrent rebase workflows (hat tip to @ZainRizvi for the investigation)
https://github.com/pytorch/pytorch/actions/runs/14474744160
and
https://github.com/pytorch/pytorch/actions/runs/14474738110
Which resulted in https://github.com/pytorch/pytorch/pull/146273#issuecomment-2806807974
To the best of my knowledge, this should not have happened, as one command should translate to one workflow_dispatch :)
### Versions
N/A
cc @seemethere @pytorch/pytorch-dev-infra
| true
|
2,997,528,933
|
[dynamo] Guard serialization for HASATTR
|
zhxchen17
|
closed
|
[
"fb-exported",
"Merged",
"ciflow/trunk",
"topic: not user facing",
"module: dynamo",
"ciflow/inductor"
] | 17
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #151349
* #151343
* #151318
Adding guard serialization for type HASATTR
Differential Revision: [D73059073](https://our.internmc.facebook.com/intern/diff/D73059073/)
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @chauhang @amjames
| true
|
2,997,520,368
|
Infra for handling builtin ops (min, max, math.pow)
|
tugsbayasgalan
|
closed
|
[
"fb-exported",
"Merged",
"ciflow/trunk",
"module: dynamo",
"ciflow/inductor",
"release notes: export"
] | 14
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #151348
* #151347
Reapply of https://github.com/pytorch/pytorch/pull/150003
Differential Revision: [D73050801](https://our.internmc.facebook.com/intern/diff/D73050801/)
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @chauhang @amjames
| true
|
2,997,520,177
|
Don't specialize min/max
|
tugsbayasgalan
|
closed
|
[
"fb-exported",
"Merged",
"ciflow/trunk",
"release notes: fx",
"fx",
"module: dynamo",
"ciflow/inductor"
] | 10
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #151348
* __->__ #151347
address https://github.com/pytorch/pytorch/issues/149635
Differential Revision: [D73041489](https://our.internmc.facebook.com/intern/diff/D73041489/)
cc @ezyang @SherlockNoMad @EikanWang @jgong5 @wenzhe-nrv @voznesenskym @penguinwu @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @jiayisunx @chenyang78 @kadeng @chauhang @amjames
| true
|
2,997,520,038
|
Support C++ statically_known_true
|
tugsbayasgalan
|
closed
|
[
"fb-exported",
"Merged",
"ciflow/trunk",
"release notes: fx",
"fx",
"ciflow/inductor"
] | 8
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #151348
* #151347
* __->__ #151346
Differential Revision: [D73040543](https://our.internmc.facebook.com/intern/diff/D73040543/)
cc @ezyang @SherlockNoMad @EikanWang @jgong5 @wenzhe-nrv
| true
|
2,997,510,903
|
[ROCm][CI/CD] Create ROCm6.4 magma tarball
|
jithunnair-amd
|
closed
|
[
"module: rocm",
"open source",
"Merged",
"topic: not user facing"
] | 3
|
COLLABORATOR
|
cc @jeffdaily @sunway513 @pruthvistony @ROCmSupport @dllehr-amd @jataylo @hongxiayang @naromero77amd
| true
|
2,997,471,765
|
Fix tensorpipe compilation with clang-17
|
malfet
|
closed
|
[
"Merged",
"ciflow/trunk",
"release notes: build",
"topic: bug fixes"
] | 5
|
CONTRIBUTOR
|
Fixed by suppressing the `missing-template-arg-list-after-template-kw` warning, which seems to be required to compile Google's libnop, a project that is now in a semi-abandoned state.
```
In file included from /Users/malfet/git/pytorch/pytorch/third_party/tensorpipe/third_party/libnop/include/nop/base/variant.h:21:
/Users/malfet/git/pytorch/pytorch/third_party/tensorpipe/third_party/libnop/include/nop/types/variant.h:241:30: error: a template argument list is expected after a name prefixed by the template keyword [-Wmissing-template-arg-list-after-template-kw]
241 | index_ = value_.template Construct(std::forward<Args>(args)...);
| ^
/Users/malfet/git/pytorch/pytorch/third_party/tensorpipe/third_party/libnop/include/nop/types/variant.h:258:26: error: a template argument list is expected after a name prefixed by the template keyword [-Wmissing-template-arg-list-after-template-kw]
258 | if (!value_.template Assign(TypeTag<T>{}, index_, std::forward<U>(value))) {
| ^
/Users/malfet/git/pytorch/pytorch/third_party/tensorpipe/third_party/libnop/include/nop/types/variant.h:265:26: error: a template argument list is expected after a name prefixed by the template keyword [-Wmissing-template-arg-list-after-template-kw]
265 | if (!value_.template Assign(index_, std::forward<T>(value))) {
| ^
3 errors generated.
```
Fixes https://github.com/pytorch/pytorch/issues/151316
| true
|
2,997,466,487
|
[dynamo] Guard serialization for NOT_PRESENT_IN_GENERIC_DICT
|
zhxchen17
|
closed
|
[
"fb-exported",
"Merged",
"ciflow/trunk",
"topic: not user facing",
"module: dynamo",
"ciflow/inductor"
] | 11
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #151349
* __->__ #151343
* #151318
Adding guard serialization for type NOT_PRESENT_IN_GENERIC_DICT
Differential Revision: [D73057304](https://our.internmc.facebook.com/intern/diff/D73057304/)
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @chauhang @amjames
| true
|
2,997,296,798
|
DISABLED test_builtin_score_mods_different_block_size_float32_score_mod4_BLOCK_SIZE3_cuda_float32 (__main__.TestFlexAttentionCUDA)
|
pytorch-bot[bot]
|
closed
|
[
"triaged",
"module: flaky-tests",
"skipped",
"oncall: pt2",
"module: inductor"
] | 4
|
NONE
|
Platforms: linux
This test was disabled because it is failing in CI. See [recent examples](https://hud.pytorch.org/flakytest?name=test_builtin_score_mods_different_block_size_float32_score_mod4_BLOCK_SIZE3_cuda_float32&suite=TestFlexAttentionCUDA&limit=100) and the most recent trunk [workflow logs](https://github.com/pytorch/pytorch/runs/40585278657).
Over the past 3 hours, it has been determined flaky in 3 workflow(s) with 6 failures and 3 successes.
**Debugging instructions (after clicking on the recent samples link):**
DO NOT ASSUME THINGS ARE OKAY IF THE CI IS GREEN. We now shield flaky tests from developers so CI will thus be green but it will be harder to parse the logs.
To find relevant log snippets:
1. Click on the workflow logs linked above
2. Click on the Test step of the job so that it is expanded. Otherwise, the grepping will not work.
3. Grep for `test_builtin_score_mods_different_block_size_float32_score_mod4_BLOCK_SIZE3_cuda_float32`
4. There should be several instances run (as flaky tests are rerun in CI) from which you can study the logs.
<details><summary>Sample error message</summary>
```
Traceback (most recent call last):
File "/var/lib/jenkins/workspace/test/inductor/test_flex_attention.py", line 1201, in test_builtin_score_mods_different_block_size
self.run_test(score_mod, dtype, block_mask=block_mask, device=device)
File "/var/lib/jenkins/workspace/test/inductor/test_flex_attention.py", line 491, in run_test
golden_out.backward(backward_grad.to(torch.float64))
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_tensor.py", line 648, in backward
torch.autograd.backward(
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/autograd/__init__.py", line 353, in backward
_engine_run_backward(
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/autograd/graph.py", line 824, in _engine_run_backward
return Variable._execution_engine.run_backward( # Calls into the C++ engine to run the backward pass
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/autograd/function.py", line 307, in apply
return user_fn(self, *args)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_higher_order_ops/flex_attention.py", line 679, in backward
) = flex_attention_backward(
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_higher_order_ops/flex_attention.py", line 132, in __call__
return super().__call__(
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_ops.py", line 471, in __call__
return wrapper()
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_ops.py", line 467, in wrapper
return self.dispatch(
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_ops.py", line 327, in dispatch
return kernel(*args, **kwargs)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_higher_order_ops/utils.py", line 72, in inner
return autograd_not_implemented_inner(op, deferred_error, *args, **kwargs)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_higher_order_ops/utils.py", line 45, in autograd_not_implemented_inner
result = operator(*args, **kwargs)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_higher_order_ops/flex_attention.py", line 132, in __call__
return super().__call__(
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_ops.py", line 471, in __call__
return wrapper()
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_ops.py", line 467, in wrapper
return self.dispatch(
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_ops.py", line 327, in dispatch
return kernel(*args, **kwargs)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_higher_order_ops/flex_attention.py", line 872, in sdpa_dense_backward
grad_scores = grad_scores * scale
torch.OutOfMemoryError: CUDA out of memory. Tried to allocate 1024.00 MiB. GPU 0 has a total capacity of 21.95 GiB of which 572.12 MiB is free. Process 161493 has 21.38 GiB memory in use. Of the allocated memory 6.73 GiB is allocated by PyTorch, and 14.40 GiB is reserved by PyTorch but unallocated. If reserved but unallocated memory is large try setting PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True to avoid fragmentation. See documentation for Memory Management (https://pytorch.org/docs/stable/notes/cuda.html#environment-variables)
To execute this test, run the following from the base repo dir:
python test/inductor/test_flex_attention.py TestFlexAttentionCUDA.test_builtin_score_mods_different_block_size_float32_score_mod4_BLOCK_SIZE3_cuda_float32
This message can be suppressed by setting PYTORCH_PRINT_REPRO_ON_FAILURE=0
```
</details>
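The allocator hint in the error above can be tried before rerunning the repro command; a minimal sketch, assuming the environment variable is set before the test process starts (both the flag and the repro command are taken verbatim from the error message):
```python
# Minimal sketch: follow the hint in the OOM message by enabling expandable
# segments in the CUDA caching allocator, then rerun the repro command from
# the base repo dir in a fresh process.
import os
import subprocess

env = dict(os.environ)
env["PYTORCH_CUDA_ALLOC_CONF"] = "expandable_segments:True"  # from the error message above

subprocess.run(
    [
        "python",
        "test/inductor/test_flex_attention.py",
        "TestFlexAttentionCUDA.test_builtin_score_mods_different_block_size_float32_score_mod4_BLOCK_SIZE3_cuda_float32",
    ],
    env=env,
    check=False,
)
```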
Test file path: `inductor/test_flex_attention.py`
cc @clee2000 @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
| true
|
2,997,295,715
|
DISABLED test_builtin_score_mods_different_block_size_float32_score_mod2_BLOCK_SIZE3_cuda_float32 (__main__.TestFlexAttentionCUDA)
|
pytorch-bot[bot]
|
closed
|
[
"triaged",
"module: flaky-tests",
"skipped",
"oncall: pt2",
"module: inductor"
] | 4
|
NONE
|
Platforms: linux
This test was disabled because it is failing in CI. See [recent examples](https://hud.pytorch.org/flakytest?name=test_builtin_score_mods_different_block_size_float32_score_mod2_BLOCK_SIZE3_cuda_float32&suite=TestFlexAttentionCUDA&limit=100) and the most recent trunk [workflow logs](https://github.com/pytorch/pytorch/runs/40585278657).
Over the past 3 hours, it has been determined to be flaky in 3 workflow(s) with 6 failures and 3 successes.
**Debugging instructions (after clicking on the recent samples link):**
DO NOT ASSUME THINGS ARE OKAY IF THE CI IS GREEN. We now shield flaky tests from developers, so CI will be green, but it will be harder to parse the logs.
To find relevant log snippets:
1. Click on the workflow logs linked above
2. Click on the Test step of the job so that it is expanded. Otherwise, the grepping will not work.
3. Grep for `test_builtin_score_mods_different_block_size_float32_score_mod2_BLOCK_SIZE3_cuda_float32`
4. There should be several instances run (as flaky tests are rerun in CI) from which you can study the logs.
Test file path: `inductor/test_flex_attention.py`
cc @clee2000 @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
| true
|
2,997,295,624
|
DISABLED test_builtin_score_mods_different_block_size_float32_score_mod2_BLOCK_SIZE2_cuda_float32 (__main__.TestFlexAttentionCUDA)
|
pytorch-bot[bot]
|
closed
|
[
"triaged",
"module: flaky-tests",
"skipped",
"oncall: pt2",
"module: inductor"
] | 5
|
NONE
|
Platforms: linux
This test was disabled because it is failing in CI. See [recent examples](https://hud.pytorch.org/flakytest?name=test_builtin_score_mods_different_block_size_float32_score_mod2_BLOCK_SIZE2_cuda_float32&suite=TestFlexAttentionCUDA&limit=100) and the most recent trunk [workflow logs](https://github.com/pytorch/pytorch/runs/40586885149).
Over the past 3 hours, it has been determined to be flaky in 3 workflow(s) with 6 failures and 3 successes.
**Debugging instructions (after clicking on the recent samples link):**
DO NOT ASSUME THINGS ARE OKAY IF THE CI IS GREEN. We now shield flaky tests from developers, so CI will be green, but it will be harder to parse the logs.
To find relevant log snippets:
1. Click on the workflow logs linked above
2. Click on the Test step of the job so that it is expanded. Otherwise, the grepping will not work.
3. Grep for `test_builtin_score_mods_different_block_size_float32_score_mod2_BLOCK_SIZE2_cuda_float32`
4. There should be several instances run (as flaky tests are rerun in CI) from which you can study the logs.
Test file path: `inductor/test_flex_attention.py`
cc @clee2000 @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
| true
|
2,997,295,623
|
DISABLED test_builtin_score_mods_different_block_size_float16_score_mod2_BLOCK_SIZE2_cuda_float16 (__main__.TestFlexAttentionCUDA)
|
pytorch-bot[bot]
|
closed
|
[
"triaged",
"module: flaky-tests",
"skipped",
"oncall: pt2",
"module: inductor"
] | 4
|
NONE
|
Platforms: linux
This test was disabled because it is failing in CI. See [recent examples](https://hud.pytorch.org/flakytest?name=test_builtin_score_mods_different_block_size_float16_score_mod2_BLOCK_SIZE2_cuda_float16&suite=TestFlexAttentionCUDA&limit=100) and the most recent trunk [workflow logs](https://github.com/pytorch/pytorch/runs/40585278657).
Over the past 3 hours, it has been determined to be flaky in 3 workflow(s) with 6 failures and 3 successes.
**Debugging instructions (after clicking on the recent samples link):**
DO NOT ASSUME THINGS ARE OKAY IF THE CI IS GREEN. We now shield flaky tests from developers, so CI will be green, but it will be harder to parse the logs.
To find relevant log snippets:
1. Click on the workflow logs linked above
2. Click on the Test step of the job so that it is expanded. Otherwise, the grepping will not work.
3. Grep for `test_builtin_score_mods_different_block_size_float16_score_mod2_BLOCK_SIZE2_cuda_float16`
4. There should be several instances run (as flaky tests are rerun in CI) from which you can study the logs.
Test file path: `inductor/test_flex_attention.py`
cc @clee2000 @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
| true
|
2,997,295,097
|
DISABLED test_non_equal_head_dims_score_mod2_float32_head_dims1_cuda_float32 (__main__.TestFlexAttentionCUDA)
|
pytorch-bot[bot]
|
closed
|
[
"triaged",
"module: flaky-tests",
"skipped",
"oncall: pt2",
"module: inductor"
] | 4
|
NONE
|
Platforms: linux
This test was disabled because it is failing in CI. See [recent examples](https://hud.pytorch.org/flakytest?name=test_non_equal_head_dims_score_mod2_float32_head_dims1_cuda_float32&suite=TestFlexAttentionCUDA&limit=100) and the most recent trunk [workflow logs](https://github.com/pytorch/pytorch/runs/40585278657).
Over the past 3 hours, it has been determined to be flaky in 3 workflow(s) with 6 failures and 3 successes.
**Debugging instructions (after clicking on the recent samples link):**
DO NOT ASSUME THINGS ARE OKAY IF THE CI IS GREEN. We now shield flaky tests from developers, so CI will be green, but it will be harder to parse the logs.
To find relevant log snippets:
1. Click on the workflow logs linked above
2. Click on the Test step of the job so that it is expanded. Otherwise, the grepping will not work.
3. Grep for `test_non_equal_head_dims_score_mod2_float32_head_dims1_cuda_float32`
4. There should be several instances run (as flaky tests are rerun in CI) from which you can study the logs.
<details><summary>Sample error message</summary>
```
Traceback (most recent call last):
File "/var/lib/jenkins/workspace/test/inductor/test_flex_attention.py", line 2159, in test_non_equal_head_dims
self.run_test(score_mod, dtype, B, H, S, qk_d, B, H, S, V_D=v_d, device=device)
File "/var/lib/jenkins/workspace/test/inductor/test_flex_attention.py", line 491, in run_test
golden_out.backward(backward_grad.to(torch.float64))
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_tensor.py", line 648, in backward
torch.autograd.backward(
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/autograd/__init__.py", line 353, in backward
_engine_run_backward(
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/autograd/graph.py", line 824, in _engine_run_backward
return Variable._execution_engine.run_backward( # Calls into the C++ engine to run the backward pass
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/autograd/function.py", line 307, in apply
return user_fn(self, *args)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_higher_order_ops/flex_attention.py", line 679, in backward
) = flex_attention_backward(
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_higher_order_ops/flex_attention.py", line 132, in __call__
return super().__call__(
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_ops.py", line 471, in __call__
return wrapper()
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_ops.py", line 467, in wrapper
return self.dispatch(
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_ops.py", line 327, in dispatch
return kernel(*args, **kwargs)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_higher_order_ops/utils.py", line 72, in inner
return autograd_not_implemented_inner(op, deferred_error, *args, **kwargs)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_higher_order_ops/utils.py", line 45, in autograd_not_implemented_inner
result = operator(*args, **kwargs)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_higher_order_ops/flex_attention.py", line 132, in __call__
return super().__call__(
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_ops.py", line 471, in __call__
return wrapper()
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_ops.py", line 467, in wrapper
return self.dispatch(
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_ops.py", line 327, in dispatch
return kernel(*args, **kwargs)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_higher_order_ops/flex_attention.py", line 869, in sdpa_dense_backward
grad_scores, _, _, _, _, *grad_score_mod_captured = joint_score_mod(
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_functorch/apis.py", line 202, in wrapped
return vmap_impl(
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_functorch/vmap.py", line 334, in vmap_impl
return _flat_vmap(
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_functorch/vmap.py", line 484, in _flat_vmap
batched_outputs = func(*batched_inputs, **kwargs)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_functorch/apis.py", line 202, in wrapped
return vmap_impl(
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_functorch/vmap.py", line 334, in vmap_impl
return _flat_vmap(
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_functorch/vmap.py", line 484, in _flat_vmap
batched_outputs = func(*batched_inputs, **kwargs)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_functorch/apis.py", line 202, in wrapped
return vmap_impl(
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_functorch/vmap.py", line 334, in vmap_impl
return _flat_vmap(
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_functorch/vmap.py", line 484, in _flat_vmap
batched_outputs = func(*batched_inputs, **kwargs)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_functorch/apis.py", line 202, in wrapped
return vmap_impl(
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_functorch/vmap.py", line 334, in vmap_impl
return _flat_vmap(
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_functorch/vmap.py", line 484, in _flat_vmap
batched_outputs = func(*batched_inputs, **kwargs)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/fx/graph_module.py", line 833, in call_wrapped
return self._wrapped_call(self, *args, **kwargs)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/fx/graph_module.py", line 409, in __call__
raise e
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/fx/graph_module.py", line 396, in __call__
return super(self.cls, obj).__call__(*args, **kwargs) # type: ignore[misc]
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1751, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1762, in _call_impl
return forward_call(*args, **kwargs)
File "<eval_with_key>.1092 from /opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/fx/experimental/proxy_tensor.py:1265 in wrapped", line 8, in forward
add = torch.ops.aten.add.Tensor(mul_2, mul_1); mul_2 = mul_1 = None
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_ops.py", line 776, in __call__
return self._op(*args, **kwargs)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_dynamo/_trace_wrapped_higher_order_op.py", line 142, in __torch_function__
return func(*args, **(kwargs or {}))
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_ops.py", line 776, in __call__
return self._op(*args, **kwargs)
torch.OutOfMemoryError: CUDA out of memory. Tried to allocate 1024.00 MiB. GPU 0 has a total capacity of 21.95 GiB of which 224.12 MiB is free. Process 131319 has 21.72 GiB memory in use. Of the allocated memory 7.79 GiB is allocated by PyTorch, and 13.66 GiB is reserved by PyTorch but unallocated. If reserved but unallocated memory is large try setting PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True to avoid fragmentation. See documentation for Memory Management (https://pytorch.org/docs/stable/notes/cuda.html#environment-variables)
To execute this test, run the following from the base repo dir:
python test/inductor/test_flex_attention.py TestFlexAttentionCUDA.test_non_equal_head_dims_score_mod2_float32_head_dims1_cuda_float32
This message can be suppressed by setting PYTORCH_PRINT_REPRO_ON_FAILURE=0
```
</details>
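The 1024 MiB allocation that fails in this traceback is consistent with the eager reference backward (`sdpa_dense_backward`) materializing a full `(B, H, S, S)` score tensor in float64 for the golden-reference path. A quick back-of-the-envelope check, where the shapes B=2, H=16, S=2048 are illustrative assumptions and not values read from the test:
```python
# Back-of-the-envelope check: size of a dense float64 score tensor of shape
# (B, H, S, S), as materialized by the eager reference backward. The shapes
# below are illustrative assumptions, not the actual test parameters.
B, H, S = 2, 16, 2048
bytes_per_elem = 8  # float64
score_bytes = B * H * S * S * bytes_per_elem
print(f"{score_bytes / 2**20:.0f} MiB")  # -> 1024 MiB, matching the failed allocation
```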
Test file path: `inductor/test_flex_attention.py`
cc @clee2000 @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
| true
|
2,997,295,095
|
DISABLED test_fully_masked_out_rows_compile_True_cuda (__main__.TestFlexAttentionCUDA)
|
pytorch-bot[bot]
|
closed
|
[
"triaged",
"module: flaky-tests",
"skipped",
"oncall: pt2",
"module: inductor"
] | 4
|
NONE
|
Platforms: linux
This test was disabled because it is failing in CI. See [recent examples](https://hud.pytorch.org/flakytest?name=test_fully_masked_out_rows_compile_True_cuda&suite=TestFlexAttentionCUDA&limit=100) and the most recent trunk [workflow logs](https://github.com/pytorch/pytorch/runs/40585278657).
Over the past 3 hours, it has been determined to be flaky in 3 workflow(s) with 6 failures and 3 successes.
**Debugging instructions (after clicking on the recent samples link):**
DO NOT ASSUME THINGS ARE OKAY IF THE CI IS GREEN. We now shield flaky tests from developers, so CI will be green, but it will be harder to parse the logs.
To find relevant log snippets:
1. Click on the workflow logs linked above
2. Click on the Test step of the job so that it is expanded. Otherwise, the grepping will not work.
3. Grep for `test_fully_masked_out_rows_compile_True_cuda`
4. There should be several instances run (as flaky tests are rerun in CI) from which you can study the logs.
Test file path: `inductor/test_flex_attention.py`
cc @clee2000 @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
| true
|