| id (int64, 2.74B–3.05B) | title (string, 1–255 chars) | user (string, 2–26 chars) | state (string, 2 classes) | labels (list, 0–24 items) | comments (int64, 0–206) | author_association (string, 4 classes) | body (string, 7–62.5k chars, nullable) | is_title (bool, 1 class) |
|---|---|---|---|---|---|---|---|---|
2,899,255,863
|
[Windows][Inductor UT] Fix for tempfile.NamedTemporaryFile(delete=True) not working on Windows.
|
etaf
|
closed
|
[
"open source",
"Merged",
"ciflow/trunk",
"topic: not user facing",
"module: inductor",
"ciflow/inductor",
"ciflow/xpu"
] | 10
|
COLLABORATOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #148632
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
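For context, a minimal sketch of the usual Windows-safe workaround for this pitfall (an illustration of the problem being fixed, not necessarily the change in this PR): with `delete=True`, the temporary file stays open with an exclusive handle on Windows and cannot be reopened by name, so code typically switches to `delete=False` plus manual cleanup.
```python
import os
import tempfile

def roundtrip(data: bytes) -> bytes:
    # delete=True would keep the file open on Windows, so reopening it by
    # name below would fail; delete=False plus manual unlink avoids that.
    f = tempfile.NamedTemporaryFile(delete=False)
    try:
        f.write(data)
        f.close()  # close before reopening by name (required on Windows)
        with open(f.name, "rb") as reopened:
            return reopened.read()
    finally:
        os.unlink(f.name)  # manual cleanup takes the place of delete=True

assert roundtrip(b"hello") == b"hello"
```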
| true
|
2,899,250,725
|
DISABLED test_sdpa_rewriter_11_cuda (__main__.SDPAPatternRewriterCudaDynamicTests)
|
pytorch-bot[bot]
|
open
|
[
"module: rocm",
"triaged",
"module: flaky-tests",
"skipped",
"oncall: pt2",
"module: inductor"
] | 1
|
NONE
|
Platforms: rocm
This test was disabled because it is failing in CI. See [recent examples](https://hud.pytorch.org/flakytest?name=test_sdpa_rewriter_11_cuda&suite=SDPAPatternRewriterCudaDynamicTests&limit=100) and the most recent trunk [workflow logs](https://github.com/pytorch/pytorch/runs/38274939794).
Over the past 3 hours, it has been determined flaky in 4 workflow(s) with 4 failures and 4 successes.
**Debugging instructions (after clicking on the recent samples link):**
DO NOT ASSUME THINGS ARE OKAY IF THE CI IS GREEN. We now shield flaky tests from developers so CI will thus be green but it will be harder to parse the logs.
To find relevant log snippets:
1. Click on the workflow logs linked above
2. Click on the Test step of the job so that it is expanded. Otherwise, the grepping will not work.
3. Grep for `test_sdpa_rewriter_11_cuda`
4. There should be several instances run (as flaky tests are rerun in CI) from which you can study the logs.
<details><summary>Sample error message</summary>
```
Traceback (most recent call last):
File "/var/lib/jenkins/pytorch/test/inductor/test_fused_attention.py", line 582, in _test_sdpa_rewriter_11
self._check_common(dot_prod_attention)
File "/var/lib/jenkins/pytorch/test/inductor/test_fused_attention.py", line 85, in _check_common
self.assertGreaterEqual(counters["inductor"]["fuse_attention"], 1)
File "/opt/conda/envs/py_3.10/lib/python3.10/unittest/case.py", line 1250, in assertGreaterEqual
self.fail(self._formatMessage(msg, standardMsg))
File "/opt/conda/envs/py_3.10/lib/python3.10/unittest/case.py", line 675, in fail
raise self.failureException(msg)
AssertionError: 0 not greater than or equal to 1
To execute this test, run the following from the base repo dir:
PYTORCH_TEST_WITH_ROCM=1 python test/inductor/test_fused_attention.py SDPAPatternRewriterCudaDynamicTests.test_sdpa_rewriter_11_cuda
This message can be suppressed by setting PYTORCH_PRINT_REPRO_ON_FAILURE=0
```
</details>
Test file path: `inductor/test_fused_attention.py`
cc @jeffdaily @sunway513 @jithunnair-amd @pruthvistony @ROCmSupport @dllehr-amd @jataylo @hongxiayang @naromero77amd @clee2000 @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
| true
|
2,899,112,366
|
[inductor] lowering for fractional_max_pool3d
|
isuruf
|
open
|
[
"open source",
"module: inductor",
"ciflow/inductor",
"release notes: inductor"
] | 3
|
COLLABORATOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #148630
Also adds a lowering with a reduction for large window_sizes for fractional_max_pool2d.
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
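For reference, a minimal sketch of the op this lowering targets (shapes and the `torch.compile` call are illustrative assumptions, not taken from the PR):
```python
import torch
import torch.nn.functional as F

x = torch.randn(1, 3, 8, 8, 8)
# Fractional max pooling over a 5-D input; output_ratio picks the output size.
out = F.fractional_max_pool3d(x, kernel_size=2, output_ratio=(0.5, 0.5, 0.5))
print(out.shape)  # torch.Size([1, 3, 4, 4, 4])

# Under torch.compile with the inductor backend, calls like this would
# exercise the new lowering (assumption: illustrative usage only).
compiled = torch.compile(
    lambda t: F.fractional_max_pool3d(t, kernel_size=2, output_ratio=(0.5, 0.5, 0.5))
)
out_compiled = compiled(x)
```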
| true
|
2,899,065,459
|
[ONNX] dynamic dims are not exported with the specified names
|
yuanyao-nv
|
open
|
[
"module: onnx",
"triaged"
] | 32
|
NONE
|
### 🐛 Describe the bug
For the following export script:
```
import torch
import torch.nn as nn
import torch.onnx
class AddModel(nn.Module):
def __init__(self):
super(AddModel, self).__init__()
def forward(self, x, y):
return x + y
# Instantiate the model
model = AddModel()
# Set the model to evaluation mode
model.eval()
# Create dynamic input tensors
x = torch.randn(2, 3)
y = torch.randn(2, 3)
# Define dynamic axes for ONNX export
batch_size = torch.export.Dim("batch_size")
features = torch.export.Dim("features")
dynamic_axes = {
"input1": {0: batch_size, 1: features},
"input2": {0: batch_size, 1: features},
"output": {0: batch_size, 1: features}
}
# Export the model to ONNX
onnx_filename = "add_model.onnx"
torch.onnx.export(
model,
(x, y),
onnx_filename,
input_names=["input1", "input2"],
output_names=["output"],
dynamic_shapes=dynamic_axes,
dynamo=True,
)
```
The exported model has dynamic dimensions named "s0" "s1" instead of "batch_size", "features". This is different from the torchscript exporter's behavior.

The Dynamo exporter does not export the dynamic dimensions with the specified names.
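One way to confirm which names ended up in the exported model is to inspect the graph inputs with the `onnx` package (a quick check sketch; the file name comes from the script above):
```python
import onnx

model = onnx.load("add_model.onnx")
for inp in model.graph.input:
    dims = [d.dim_param or d.dim_value for d in inp.type.tensor_type.shape.dim]
    # Reportedly prints s0/s1 here rather than batch_size/features.
    print(inp.name, dims)
```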
### Versions
PyTorch version: 2.7.0.dev20250305+cu126
Is debug build: False
CUDA used to build PyTorch: 12.6
ROCM used to build PyTorch: N/A
OS: Ubuntu 24.04.1 LTS (x86_64)
GCC version: (Ubuntu 13.3.0-6ubuntu2~24.04) 13.3.0
Clang version: Could not collect
CMake version: version 3.31.4
Libc version: glibc-2.39
Python version: 3.12.3 (main, Jan 17 2025, 18:03:48) [GCC 13.3.0] (64-bit runtime)
Python platform: Linux-6.8.0-52-generic-x86_64-with-glibc2.39
Is CUDA available: True
CUDA runtime version: 12.8.61
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration: GPU 0: NVIDIA RTX A5000
Nvidia driver version: 560.28.03
cuDNN version: Probably one of the following:
/usr/lib/x86_64-linux-gnu/libcudnn.so.9.7.1
/usr/lib/x86_64-linux-gnu/libcudnn_adv.so.9.7.1
/usr/lib/x86_64-linux-gnu/libcudnn_cnn.so.9.7.1
/usr/lib/x86_64-linux-gnu/libcudnn_engines_precompiled.so.9.7.1
/usr/lib/x86_64-linux-gnu/libcudnn_engines_runtime_compiled.so.9.7.1
/usr/lib/x86_64-linux-gnu/libcudnn_graph.so.9.7.1
/usr/lib/x86_64-linux-gnu/libcudnn_heuristic.so.9.7.1
/usr/lib/x86_64-linux-gnu/libcudnn_ops.so.9.7.1
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 46 bits physical, 48 bits virtual
Byte Order: Little Endian
CPU(s): 20
On-line CPU(s) list: 0-19
Vendor ID: GenuineIntel
Model name: Intel(R) Core(TM) i9-9820X CPU @ 3.30GHz
CPU family: 6
Model: 85
Thread(s) per core: 2
Core(s) per socket: 10
Socket(s): 1
Stepping: 4
CPU(s) scaling MHz: 30%
CPU max MHz: 4200.0000
CPU min MHz: 1200.0000
BogoMIPS: 6599.98
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc art arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc cpuid aperfmperf pni pclmulqdq dtes64 monitor ds_cpl vmx est tm2 ssse3 sdbg fma cx16 xtpr pdcm pcid dca sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm 3dnowprefetch cpuid_fault epb cat_l3 cdp_l3 pti ssbd mba ibrs ibpb stibp tpr_shadow flexpriority ept vpid ept_ad fsgsbase tsc_adjust bmi1 hle avx2 smep bmi2 erms invpcid rtm cqm mpx rdt_a avx512f avx512dq rdseed adx smap clflushopt clwb intel_pt avx512cd avx512bw avx512vl xsaveopt xsavec xgetbv1 xsaves cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local dtherm ida arat pln pts hwp hwp_act_window hwp_epp hwp_pkg_req vnmi md_clear flush_l1d arch_capabilities
Virtualization: VT-x
L1d cache: 320 KiB (10 instances)
L1i cache: 320 KiB (10 instances)
L2 cache: 10 MiB (10 instances)
L3 cache: 16.5 MiB (1 instance)
NUMA node(s): 1
NUMA node0 CPU(s): 0-19
Vulnerability Gather data sampling: Mitigation; Microcode
Vulnerability Itlb multihit: KVM: Mitigation: VMX disabled
Vulnerability L1tf: Mitigation; PTE Inversion; VMX conditional cache flushes, SMT vulnerable
Vulnerability Mds: Mitigation; Clear CPU buffers; SMT vulnerable
Vulnerability Meltdown: Mitigation; PTI
Vulnerability Mmio stale data: Mitigation; Clear CPU buffers; SMT vulnerable
Vulnerability Reg file data sampling: Not affected
Vulnerability Retbleed: Mitigation; IBRS
Vulnerability Spec rstack overflow: Not affected
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; IBRS; IBPB conditional; STIBP conditional; RSB filling; PBRSB-eIBRS Not affected; BHI Not affected
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Mitigation; Clear CPU buffers; SMT vulnerable
Versions of relevant libraries:
[pip3] mypy-extensions==1.0.0
[pip3] numpy==1.26.4
[pip3] nvidia-cublas-cu12==12.6.4.1
[pip3] nvidia-cuda-cupti-cu12==12.6.80
[pip3] nvidia-cuda-nvrtc-cu12==12.6.77
[pip3] nvidia-cuda-runtime-cu12==12.6.77
[pip3] nvidia-cudnn-cu12==9.5.1.17
[pip3] nvidia-cudnn-frontend==1.10.0
[pip3] nvidia-cufft-cu12==11.3.0.4
[pip3] nvidia-curand-cu12==10.3.7.77
[pip3] nvidia-cusolver-cu12==11.7.1.2
[pip3] nvidia-cusparse-cu12==12.5.4.2
[pip3] nvidia-cusparselt-cu12==0.6.3
[pip3] nvidia-nccl-cu12==2.25.1
[pip3] nvidia-nvjitlink-cu12==12.6.85
[pip3] nvidia-nvtx-cu12==12.6.77
[pip3] nvtx==0.2.5
[pip3] onnx==1.17.0
[pip3] onnxscript==0.2.2
[pip3] optree==0.14.0
[pip3] pynvjitlink==0.3.0
[pip3] pytorch-triton==3.2.0+git4b3bb1f8
[pip3] torch==2.7.0.dev20250305+cu126
[pip3] torch_geometric==2.5.3
[pip3] torch_tensorrt==2.6.0a0
[pip3] torchaudio==2.6.0.dev20250305+cu126
[pip3] torchprofile==0.0.4
[pip3] torchvision==0.22.0.dev20250305+cu126
[conda] Could not collect
onnxscript 0.2.2
| true
|
2,899,061,844
|
Adjust CMake code for Eigen
|
cyyever
|
open
|
[
"triaged",
"open source",
"topic: not user facing"
] | 13
|
COLLABORATOR
|
There are some CMake changes introduced:
1. CAFFE2_USE_EIGEN_FOR_BLAS is removed because it was used only in the removed Caffe2 code.
2. Link to Eigen only when `BLAS` value is `EIGEN`.
| true
|
2,899,025,088
|
[ONNX] Use torch export to get dynamic shapes for JIT convert strategy
|
justinchuby
|
closed
|
[
"module: onnx",
"open source",
"Merged",
"ciflow/trunk",
"release notes: onnx",
"topic: improvements"
] | 6
|
COLLABORATOR
|
Use torch export to get dynamic shapes for JIT converted graph. I just realized we can retrace a converted jit graph with `torch.export` and produce dynamic shapes using `torch.export`.
- **Prior:** The exporter would **silently produce a static graph** even when `dynamic_shapes` were provided.
- **Proposed:** When `dynamic_shapes` is provided and the strategy is able to handle it, the export succeeds with the requested dynamic shapes.
## Why are we still keeping the JIT strategy?
It is useful when users want to convert JIT modules or `.pt` files into ONNX via the new path, and it is sometimes also useful when there are JIT-scripted modules inside the nn module.
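A rough sketch of the idea (the module, shapes, and dim name are made up for illustration, and the exact entry point is an assumption): the JIT artifact is retraced with `torch.export`, so the provided `dynamic_shapes` can actually take effect instead of silently producing a static graph.
```python
import torch

class Add(torch.nn.Module):
    def forward(self, x, y):
        return x + y

scripted = torch.jit.script(Add())  # stands in for a JIT module / .pt file

batch = torch.export.Dim("batch")
onnx_program = torch.onnx.export(
    scripted,
    (torch.randn(4, 8), torch.randn(4, 8)),
    dynamic_shapes=({0: batch}, {0: batch}),
    dynamo=True,  # the new export path described in this PR
)
```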
| true
|
2,899,015,281
|
[triton 3.3] test_triton_kernel_constants fix
|
davidberard98
|
closed
|
[
"Merged",
"ciflow/trunk",
"topic: not user facing",
"module: inductor",
"ciflow/inductor"
] | 10
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #148626
Thanks @FindHao who did the initial version of this PR: https://github.com/pytorch/pytorch/pull/148505
TL;DR is that https://github.com/triton-lang/triton/pull/5961 deprecates `tl.constexpr` annotations - you're supposed to wrap the constexpr value in `tl.constexpr()` instead.
This just updates the tests to wrap with `tl.constexpr()` (and leaves the annotations - that way the old triton versions will still pass).
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
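A minimal sketch of the pattern change described above (simplified; not one of the actual tests):
```python
import triton
import triton.language as tl

@triton.jit
def scale_kernel(x_ptr, out_ptr, BLOCK: tl.constexpr):
    # Old style relied on the annotation alone:  scale: tl.constexpr = 2.0
    # Newer Triton expects the value itself to be wrapped in tl.constexpr();
    # keeping the annotation as well lets older Triton versions still pass.
    scale: tl.constexpr = tl.constexpr(2.0)
    offs = tl.arange(0, BLOCK)
    tl.store(out_ptr + offs, tl.load(x_ptr + offs) * scale)

# Usage sketch (assumes a CUDA device):
# x = torch.zeros(64, device="cuda"); out = torch.empty_like(x)
# scale_kernel[(1,)](x, out, BLOCK=64)
```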
| true
|
2,899,010,794
|
Remove Cuda 12.4 from nightly Binaries
|
tinglvv
|
closed
|
[
"triaged",
"open source",
"Merged",
"Reverted",
"ciflow/binaries",
"topic: not user facing",
"ci-no-td"
] | 10
|
COLLABORATOR
|
https://github.com/pytorch/pytorch/issues/145570
removes cuda 12.4 nightly builds
cc @atalman @malfet @nWEIdia @ptrblck
| true
|
2,899,002,229
|
DISABLED test_return_captured_var_used_multiple_times_dynamic_shapes (__main__.DynamicShapesHigherOrderOpTests)
|
pytorch-bot[bot]
|
closed
|
[
"triaged",
"module: flaky-tests",
"skipped",
"oncall: pt2",
"module: dynamo"
] | 8
|
NONE
|
Platforms: asan, linux, rocm, mac, macos
This test was disabled because it is failing in CI. See [recent examples](https://hud.pytorch.org/flakytest?name=test_return_captured_var_used_multiple_times_dynamic_shapes&suite=DynamicShapesHigherOrderOpTests&limit=100) and the most recent trunk [workflow logs](https://github.com/pytorch/pytorch/runs/38271253731).
Over the past 3 hours, it has been determined flaky in 7 workflow(s) with 14 failures and 7 successes.
**Debugging instructions (after clicking on the recent samples link):**
DO NOT ASSUME THINGS ARE OKAY IF THE CI IS GREEN. We now shield flaky tests from developers so CI will thus be green but it will be harder to parse the logs.
To find relevant log snippets:
1. Click on the workflow logs linked above
2. Click on the Test step of the job so that it is expanded. Otherwise, the grepping will not work.
3. Grep for `test_return_captured_var_used_multiple_times_dynamic_shapes`
4. There should be several instances run (as flaky tests are rerun in CI) from which you can study the logs.
<details><summary>Sample error message</summary>
```
Traceback (most recent call last):
File "/var/lib/jenkins/pytorch/test/dynamo/test_higher_order_ops.py", line 288, in test_return_captured_var_used_multiple_times
self._test_wrap_simple(fn, default_args_generator((x,)), arg_count, 3)
File "/var/lib/jenkins/pytorch/test/dynamo/test_higher_order_ops.py", line 191, in _test_wrap_simple
self.assertEqual(len(wrap_node.args), expected_num_wrap_args)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 4091, in assertEqual
raise error_metas.pop()[0].to_error( # type: ignore[index]
AssertionError: Scalars are not equal!
Expected 4 but got 5.
Absolute difference: 1
Relative difference: 0.25
To execute this test, run the following from the base repo dir:
PYTORCH_TEST_WITH_ROCM=1 python test/dynamo/test_dynamic_shapes.py DynamicShapesHigherOrderOpTests.test_return_captured_var_used_multiple_times_dynamic_shapes
This message can be suppressed by setting PYTORCH_PRINT_REPRO_ON_FAILURE=0
```
</details>
Test file path: `dynamo/test_dynamic_shapes.py`
cc @jeffdaily @sunway513 @jithunnair-amd @pruthvistony @ROCmSupport @dllehr-amd @jataylo @hongxiayang @naromero77amd @clee2000 @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @chauhang @amjames
| true
|
2,898,880,349
|
[mm_logs] follow up to add count info based on shape for inductor `aten.mm`s
|
YUNQIUGUO
|
closed
|
[
"fb-exported",
"Merged",
"ciflow/trunk",
"topic: not user facing",
"module: inductor",
"ciflow/inductor"
] | 9
|
CONTRIBUTOR
|
Summary:
as title.
When `TORCH_LOGS="+inductor"` is enabled, you get logs at the end such as:
stats [('calls_captured', 1), ('unique_graphs', 1)]
inductor [('pattern_matcher_count', 2), ('pattern_matcher_nodes', 2), ('benchmarking.TritonBenchmarker.benchmark_gpu', 2), **(('aten_addmm', (16, 6, 16)), 1)**, ('extern_calls', 1), ('async_compile_cache_miss', 1)]
graph_break []
Test Plan: follow up to add proper logging test.
Differential Revision: D70665104
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
| true
|
2,898,833,035
|
stage 2 of deprecating the silent fallback of tuning gemm
|
henrylhtsang
|
closed
|
[
"fb-exported",
"Merged",
"Reverted",
"ciflow/trunk",
"topic: not user facing",
"module: inductor",
"ciflow/inductor",
"ci-no-td"
] | 43
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #148622
* #151506
context: https://github.com/pytorch/pytorch/issues/147479
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
| true
|
2,898,832,481
|
update get_default_device to also respect torch.device ctx manager
|
kshitij12345
|
open
|
[
"triaged",
"open source",
"release notes: python_frontend"
] | 2
|
COLLABORATOR
|
Fixes https://github.com/pytorch/pytorch/issues/131328
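A small sketch of the behavior this change targets (the expected output is an assumption based on the linked issue, not verified here):
```python
import torch

print(torch.get_default_device())  # device(type='cpu')

with torch.device("meta"):
    # After this change, the active torch.device(...) context manager should
    # be reflected here too, e.g. device(type='meta') instead of cpu.
    print(torch.get_default_device())

torch.set_default_device("meta")
print(torch.get_default_device())  # meta (already respected today)
```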
| true
|
2,898,792,839
|
fix 142457, fixes double free corruption by adding TORCH_CHECK to ensure weights have the proper size
|
AmalDevHaridevan
|
open
|
[
"triaged",
"open source",
"topic: not user facing"
] | 3
|
NONE
|
Fixes #142457
# Problem
`slow_conv_transpose3d_shape_check` currently does not enforce the constraint that `weight` has size `n_input_plane x n_output_plane x kernel_depth x kernel_height x kernel_width`. This causes the undefined behavior seen in issue 142457. This can be fixed by enforcing that the sizes of `weight` at dims `3`, `4`, and `5` equal the sizes of the kernel at dims `1`, `2`, and `3`.
# Fix
Added 3 `TORCH_CHECK`s to enforce the above constraint.
# Test
## Reproduction code
```python
import torch
self = torch.full((1, 2, 4, 5, 4,), 0.5, dtype=torch.double)
weight = torch.full((2, 3, 2, 3, 2,), 0.5, dtype=torch.double)
kernel_size = [1, 1, 1]
bias = torch.full((3,), 0.5, dtype=torch.double)
stride = [1, 1, 1]
padding = [2, 2, 2]
output_padding = [2, 2, 2]
dilation = [1879048192, 1879048192, 1879048192]
torch.ops.aten.slow_conv_transpose3d(self, weight, kernel_size, bias, stride, padding, output_padding, dilation)
```
## Before fix
```bash
double free or corruption (!prev)
Aborted (core dumped)
```
## After fix
```bash
Traceback (most recent call last):
File "/home/system/Desktop/pytorch_contrib/pytorch/../test.py", line 11, in <module>
torch.ops.aten.slow_conv_transpose3d(self, weight, kernel_size, bias, stride, padding, output_padding, dilation)
File "/home/system/Desktop/pytorch_contrib/pytorch/torch/_ops.py", line 1158, in __call__
return self._op(*args, **(kwargs or {}))
RuntimeError: Expected weight to have size 1 at dimension 3 but got 2
```
# More verification tests
```python
import torch
self = torch.full((1, 2, 4, 5, 4,), 0.5, dtype=torch.double)
weight = torch.full((2, 3, 2, 3, 2,), 0.5, dtype=torch.double)
kernel_size = [2, 3, 2,]
bias = torch.full((3,), 0.5, dtype=torch.double)
stride = [1, 1, 1]
padding = [2, 2, 2]
output_padding = [0, 0, 0]
dilation = [1, 1, 1]
res1 = torch.ops.aten.slow_conv_transpose3d(self, weight, kernel_size, bias, stride, padding, output_padding, dilation)
module = torch.nn.ConvTranspose3d(2, 3, kernel_size=kernel_size, stride=stride, padding=padding, output_padding=output_padding, dilation=dilation, bias=True)
module.weight = torch.nn.Parameter(weight)
module.bias = torch.nn.Parameter(bias)
res2 = module(self)
assert torch.allclose(res1, res2)
print("Success")
```
| true
|
2,898,780,339
|
[ONNX] Support complex comparison when verify=True
|
titaiwangms
|
closed
|
[
"open source",
"Merged",
"ciflow/trunk",
"release notes: onnx",
"topic: improvements"
] | 4
|
COLLABORATOR
|
Previously, the comparison of complex numbers was not supported when `verify=True`.
NOTE: This PR can be extended to support more complex comparison cases if other places in the onnx codebase need to be changed.
| true
|
2,898,760,708
|
[dynamo] Don't affect stack traces under TORCHDYNAMO_DISABLE
|
xmfan
|
open
|
[
"module: dynamo",
"ciflow/inductor",
"release notes: dynamo"
] | 8
|
MEMBER
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #148618
Follow-up to https://fb.workplace.com/groups/1286739428954016/permalink/1446477902980167/. We don't want to wrap eager code when TORCHDYNAMO_DISABLE/JustKnobs are flipped, because it was confusing model owners to still see their exception stack traces go through eval_frame.py.
We already had it supported for torch.compile, this PR adds support for disable and run.
```python
# err.py
import torch
@torch._dynamo.disable
def fn():
raise Exception("hi")
fn()
# Before
> TORCHDYNAMO_DISABLE="1" python err.py
Traceback (most recent call last):
File "/home/xmfan/core/a/pytorch/err.py", line 7, in <module>
fn()
File "/home/xmfan/core/a/pytorch/torch/_dynamo/eval_frame.py", line 828, in _fn
return fn(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^
File "/home/xmfan/core/a/pytorch/err.py", line 5, in fn
raise Exception("hi")
Exception: hi
# After
> TORCHDYNAMO_DISABLE="1" python err.py
Traceback (most recent call last):
File "/home/xmfan/core/a/pytorch/err.py", line 7, in <module>
fn()
File "/home/xmfan/core/a/pytorch/err.py", line 5, in fn
raise Exception("hi")
Exception: hi
```
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @chauhang @amjames
| true
|
2,898,718,575
|
[ONNX] Update saved exported program in debugging report if the exporting passes run_decomposition()
|
titaiwangms
|
closed
|
[
"open source",
"Merged",
"ciflow/trunk",
"release notes: onnx",
"topic: improvements"
] | 5
|
COLLABORATOR
|
Prior to this PR, if the export goes through run_decomposition(), the report still shows the exported program before decomposition, which makes it harder for users to check the exported program that is actually used to translate to the ONNX graph.
The following example is what we see before this PR:
# PyTorch ONNX Conversion Report
```
✅ Obtain model graph with `torch.export.export(..., strict=False)`
⚪ Obtain model graph with `torch.export.export(..., strict=True)`
⚪ Obtain model graph with `torch.jit.trace`
✅ Decompose operators for ONNX compatibility
❌ Translate the graph into ONNX
⚪ Run `onnx.checker` on the ONNX model
⚪ Execute the model with ONNX Runtime
⚪ Validate model output accuracy
```
## Error messages
```pytb
Traceback (most recent call last):
File "/home/titaiwang/pytorch/torch/onnx/_internal/exporter/_core.py", line 707, in _translate_fx_graph
_handle_call_function_node_with_lowering(
File "/home/titaiwang/pytorch/torch/onnx/_internal/exporter/_core.py", line 486, in _handle_call_function_node_with_lowering
raise _errors.DispatchError(
torch.onnx._internal.exporter._errors.DispatchError: No ONNX function found for <OpOverload(op='aten.slice', overload='Tensor')>. Failure message: No decompositions registered for the complex-valued input
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "/home/titaiwang/pytorch/torch/onnx/_internal/exporter/_core.py", line 1371, in export
onnx_program = _exported_program_to_onnx_program(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/titaiwang/pytorch/torch/onnx/_internal/exporter/_core.py", line 1007, in _exported_program_to_onnx_program
values = _translate_fx_graph(
^^^^^^^^^^^^^^^^^^^^
File "/home/titaiwang/pytorch/torch/onnx/_internal/exporter/_core.py", line 733, in _translate_fx_graph
raise _errors.ConversionError(
torch.onnx._internal.exporter._errors.ConversionError: Error when translating node %slice_1 : [num_users=1] = call_function[target=torch.ops.aten.slice.Tensor](args = (%_to_copy, 0, 0, 9223372036854775807), kwargs = {}). See the stack trace for more information.
```
## Exported program
```python
ExportedProgram:
class GraphModule(torch.nn.Module):
def forward(self, x: "f32[3, 4]"):
# File: /home/titaiwang/pytorch/test_slice_complex.py:6 in forward, code: x_complex = x.to(torch.complex64)
to: "c64[3, 4]" = torch.ops.aten.to.dtype(x, torch.complex64); x = None
# File: /home/titaiwang/pytorch/test_slice_complex.py:8 in forward, code: return x_complex[:, :2]
slice_1: "c64[3, 4]" = torch.ops.aten.slice.Tensor(to, 0, 0, 9223372036854775807); to = None
slice_2: "c64[3, 2]" = torch.ops.aten.slice.Tensor(slice_1, 1, 0, 2); slice_1 = None
return (slice_2,)
Graph signature: ExportGraphSignature(input_specs=[InputSpec(kind=<InputKind.USER_INPUT: 1>, arg=TensorArgument(name='x'), target=None, persistent=None)], output_specs=[OutputSpec(kind=<OutputKind.USER_OUTPUT: 1>, arg=TensorArgument(name='slice_2'), target=None)])
Range constraints: {}
```
## Analysis
PyTorch ONNX Conversion Analysis
## Model Information
The model has 0 parameters and 0 buffers (non-trainable parameters).
Number of parameters per dtype:
```python
defaultdict(<class 'int'>, {})
```
Number of buffers per dtype:
```python
defaultdict(<class 'int'>, {})
```
Inputs:
- `x`: `TensorMetadata(shape=torch.Size([3, 4]), dtype=torch.float32, requires_grad=False, stride=(4, 1), memory_format=torch.contiguous_format, is_quantized=False, qparams={})`
Outputs:
- `slice_2`: `TensorMetadata(shape=torch.Size([3, 2]), dtype=torch.complex64, requires_grad=False, stride=(4, 1), memory_format=None, is_quantized=False, qparams={})`
The FX graph has 5 nodes in total. Number of FX nodes per op:
- `placeholder`: 1
- `call_function`: 3
- `output`: 1
Of the call_function nodes, the counts of operators used are:
- `aten.slice.Tensor`: 2
- `aten.to.dtype`: 1
## ONNX Conversion Information
The model contains operators the dispatcher could not find registered ONNX decompositions for. This may be due to missing implementations, decompositions not registered correctly, or a bug in the dispatcher.
Errors grouped by operator:
- `aten.to.dtype`: No decompositions registered for the real-valued input. Example node: `%to : [num_users=1] = call_function[target=torch.ops.aten.to.dtype](args = (%x, torch.complex64), kwargs = {})`. All nodes: `[to]`
- `aten.slice.Tensor`: No decompositions registered for the complex-valued input. Example node: `%slice_1 : [num_users=1] = call_function[target=torch.ops.aten.slice.Tensor](args = (%to, 0, 0, 9223372036854775807), kwargs = {})`. All nodes: `[slice_1, slice_2]`
## Decomposition comparison
Ops exist only in the ExportedProgram before decomposition: `['aten.to.dtype']`
Ops exist only in the ExportedProgram after decomposition: `['aten._to_copy.default']`
| true
|
2,898,703,848
|
Optimize AOTInductor: Caching, Reduced Decompositions, and Improved JSON Handling
|
devsashidhar
|
open
|
[
"oncall: distributed",
"triaged",
"open source",
"topic: not user facing",
"module: inductor"
] | 2
|
NONE
|
PR Description:
This PR improves AOTInductor's compilation performance by introducing caching, limiting unnecessary decompositions, and optimizing JSON handling.
Changes:
Added persistent caching to avoid redundant recompilation.
Restricted decompositions to only necessary operators (aten::add, aten::mul).
Optimized JSON metadata updates to prevent unnecessary file writes.
Impact:
Reduces compilation time for repeated runs.
Improves efficiency by only updating metadata when needed.
Helps prevent excessive decompositions, leading to better overall performance.
Testing:
Ran pytest test/inductor to check for regressions.
Verified that AOT compilation is significantly faster on repeated runs.
cc @H-Huang @awgu @kwen2501 @wanchaol @fegin @fduwjj @wz337 @wconstab @d4l3k @c-p-i-o @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
| true
|
2,898,685,504
|
Bump to AOTriton 0.9.2 to fix version strings
|
xinyazhang
|
closed
|
[
"triaged",
"open source",
"ciflow/trunk",
"topic: not user facing",
"rocm",
"ciflow/rocm"
] | 4
|
COLLABORATOR
|
AOTriton 0.9.1 does not bump the version strings in either the .comment section or the file name, which may cause confusion if it slips into the final release.
A new point release is made to address this confusion and unify the version strings.
| true
|
2,898,678,530
|
[while_loop] enforce stride to be the same for subgraph's input and output
|
ydwu4
|
closed
|
[
"topic: not user facing",
"module: inductor",
"ciflow/inductor"
] | 1
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #148614
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
| true
|
2,898,677,542
|
[For Discussion][Dynamo] Avoiding skipping module.py inner() frame, to keep forward hooks and forward in the same graph
|
yf225
|
closed
|
[
"module: dynamo",
"ciflow/inductor"
] | 5
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #148613
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @chauhang @amjames
| true
|
2,898,650,654
|
[CI] [inductor] Add cu126 inductor jobs and move away cu124
|
tinglvv
|
closed
|
[
"open source",
"Merged",
"topic: not user facing",
"module: dynamo",
"ciflow/inductor",
"keep-going",
"ciflow/inductor-perf-test-nightly",
"ciflow/inductor-perf-compare",
"ciflow/inductor-micro-benchmark",
"ciflow/inductor-periodic"
] | 6
|
COLLABORATOR
|
https://github.com/pytorch/pytorch/issues/145570
Breaking https://github.com/pytorch/pytorch/pull/140793 into eager and inductor benchmarks to unblock.
It seems many inductor yml files were added after the initial change was prepared.
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @chauhang @amjames @atalman @malfet @nWEIdia @ptrblck
| true
|
2,898,629,447
|
Remove warnings on non-buffer tensor constants (#148483)
|
tugsbayasgalan
|
closed
|
[
"Stale",
"ciflow/trunk",
"release notes: fx",
"fx"
] | 3
|
CONTRIBUTOR
|
Summary:
Export already registers tensor constants directly in the graph, and this is also true for Torchbind objects. This removes the warning that pollutes the output.
cc ezyang SherlockNoMad EikanWang jgong5 wenzhe-nrv
imported-using-ghimport
Test Plan: Imported from OSS
Reviewed By: zou3519
Differential Revision: D70577856
Pulled By: tugsbayasgalan
cc @ezyang @SherlockNoMad @EikanWang @jgong5 @wenzhe-nrv
| true
|
2,898,620,337
|
Optimize shard_dim_alltoall to use alltoall_single
|
wanchaol
|
closed
|
[
"oncall: distributed",
"release notes: distributed (c10d)"
] | 2
|
COLLABORATOR
|
As titled: previously shard_dim_alltoall used `all_to_all`, which could incur lots of copies if the tensor becomes non-contiguous during the splits, and alltoall itself also incurs copies.
This PR uses alltoall_single instead, so that we minimize tensor copies.
Tested on all the shard-dim change tests; it works properly.
Fixes #ISSUE_NUMBER
cc @H-Huang @awgu @kwen2501 @fegin @fduwjj @wz337 @wconstab @d4l3k @c-p-i-o
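A simplified sketch of the primitive switch described above (assumptions: equal-sized contiguous chunks and an already-initialized process group; this is not the actual DTensor implementation):
```python
import torch
import torch.distributed as dist

def shard_dim_alltoall_sketch(t: torch.Tensor, group=None) -> torch.Tensor:
    # all_to_all_single exchanges equal-sized contiguous chunks in one call,
    # avoiding the per-chunk copies a list-based all_to_all can incur when
    # the splits leave the tensor non-contiguous.
    out = torch.empty_like(t)
    dist.all_to_all_single(out, t.contiguous(), group=group)
    return out
```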
| true
|
2,898,608,933
|
Optimize shard_dim_alltoall to use alltoall_single
|
wanchaol
|
closed
|
[
"oncall: distributed",
"release notes: distributed (c10d)"
] | 2
|
COLLABORATOR
|
As titled: previously shard_dim_alltoall used `all_to_all`, which could incur lots of copies if the tensor becomes non-contiguous during the splits, and alltoall itself also incurs copies.
This PR uses alltoall_single instead, so that we minimize tensor copies.
Tested on all the shard-dim change tests; it works properly.
Fixes #ISSUE_NUMBER
cc @H-Huang @awgu @kwen2501 @fegin @fduwjj @wz337 @wconstab @d4l3k @c-p-i-o
| true
|
2,898,588,532
|
[MPS] fix crash for mse loss with 0 numel inputs
|
Isalia20
|
closed
|
[
"open source",
"Merged",
"topic: bug fixes",
"module: mps",
"release notes: mps",
"ciflow/mps"
] | 3
|
COLLABORATOR
|
Fixes #148589
cc @kulinseth @albanD @malfet @DenisVieriu97 @jhavukainen
| true
|
2,898,585,211
|
DISABLED test_host_memory_stats (__main__.TestCuda)
|
pytorch-bot[bot]
|
closed
|
[
"module: cuda",
"triaged",
"module: flaky-tests",
"skipped"
] | 5
|
NONE
|
Platforms: linux, rocm, slow
This test was disabled because it is failing in CI. See [recent examples](https://hud.pytorch.org/flakytest?name=test_host_memory_stats&suite=TestCuda&limit=100) and the most recent trunk [workflow logs](https://github.com/pytorch/pytorch/runs/38260272155).
Over the past 3 hours, it has been determined flaky in 8 workflow(s) with 16 failures and 8 successes.
**Debugging instructions (after clicking on the recent samples link):**
DO NOT ASSUME THINGS ARE OKAY IF THE CI IS GREEN. We now shield flaky tests from developers so CI will thus be green but it will be harder to parse the logs.
To find relevant log snippets:
1. Click on the workflow logs linked above
2. Click on the Test step of the job so that it is expanded. Otherwise, the grepping will not work.
3. Grep for `test_host_memory_stats`
4. There should be several instances run (as flaky tests are rerun in CI) from which you can study the logs.
<details><summary>Sample error message</summary>
```
Traceback (most recent call last):
File "/var/lib/jenkins/pytorch/test/test_cuda.py", line 202, in test_host_memory_stats
check_stats(expected)
File "/var/lib/jenkins/pytorch/test/test_cuda.py", line 188, in check_stats
self.assertEqual(v, stats[k])
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 4091, in assertEqual
raise error_metas.pop()[0].to_error( # type: ignore[index]
AssertionError: Scalars are not equal!
Expected 50333269 but got 0.
Absolute difference: 50333269
Relative difference: 1.0
To execute this test, run the following from the base repo dir:
PYTORCH_TEST_WITH_ROCM=1 python test/test_cuda.py TestCuda.test_host_memory_stats
This message can be suppressed by setting PYTORCH_PRINT_REPRO_ON_FAILURE=0
```
</details>
Test file path: `test_cuda.py`
cc @ptrblck @msaroufim @eqy @clee2000
| true
|
2,898,584,165
|
DISABLED test_nested_tuple_output_dynamic_shapes (__main__.DynamicShapesHigherOrderOpTests)
|
pytorch-bot[bot]
|
closed
|
[
"triaged",
"module: flaky-tests",
"skipped",
"oncall: pt2",
"module: dynamo"
] | 5
|
NONE
|
Platforms: asan, linux, rocm, mac, macos
This test was disabled because it is failing in CI. See [recent examples](https://hud.pytorch.org/flakytest?name=test_nested_tuple_output_dynamic_shapes&suite=DynamicShapesHigherOrderOpTests&limit=100) and the most recent trunk [workflow logs](https://github.com/pytorch/pytorch/runs/38260273171).
Over the past 3 hours, it has been determined flaky in 41 workflow(s) with 82 failures and 41 successes.
**Debugging instructions (after clicking on the recent samples link):**
DO NOT ASSUME THINGS ARE OKAY IF THE CI IS GREEN. We now shield flaky tests from developers so CI will thus be green but it will be harder to parse the logs.
To find relevant log snippets:
1. Click on the workflow logs linked above
2. Click on the Test step of the job so that it is expanded. Otherwise, the grepping will not work.
3. Grep for `test_nested_tuple_output_dynamic_shapes`
4. There should be several instances run (as flaky tests are rerun in CI) from which you can study the logs.
<details><summary>Sample error message</summary>
```
Traceback (most recent call last):
File "/var/lib/jenkins/workspace/test/dynamo/test_higher_order_ops.py", line 2587, in test_nested_tuple_output
graph = self._test_wrap_simple(
File "/var/lib/jenkins/workspace/test/dynamo/test_higher_order_ops.py", line 191, in _test_wrap_simple
self.assertEqual(len(wrap_node.args), expected_num_wrap_args)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 4091, in assertEqual
raise error_metas.pop()[0].to_error( # type: ignore[index]
AssertionError: Scalars are not equal!
Expected 4 but got 5.
Absolute difference: 1
Relative difference: 0.25
To execute this test, run the following from the base repo dir:
python test/dynamo/test_dynamic_shapes.py DynamicShapesHigherOrderOpTests.test_nested_tuple_output_dynamic_shapes
This message can be suppressed by setting PYTORCH_PRINT_REPRO_ON_FAILURE=0
```
</details>
Test file path: `dynamo/test_dynamic_shapes.py`
cc @clee2000 @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @chauhang @amjames
| true
|
2,898,568,947
|
[cuda] Add new faster gammabeta backward kernel
|
ahmadsharif1
|
open
|
[
"Merged",
"Reverted",
"ciflow/trunk",
"release notes: cuda",
"ci-no-td"
] | 24
|
CONTRIBUTOR
|
This PR adds a new kernel for producing gamma and beta values for the backward pass in a performant way.
To test the performance against the baseline, I measured the backward pass of layernorm while sweeping over the following variables:
1. dtype in {half, float}
2. M in `2**k, 2**k - 1, 2**k + 1 for k in range(...)`
3. N in `2**k, 2**k - 1, 2**k + 1 for k in range(...)`
4. Whether we flush the L2 cache before running the backward pass
Summary: The new code performs better than the old code, especially for powers of 2. For M >> N case, it performs very well (kernel itself can be 30x faster and the overall backward pass can be 5-10x faster).
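For reference, a minimal timing sketch in the spirit of this sweep (shapes, dtype handling, and timing are simplified; this is not the author's benchmark harness):
```python
import torch
import torch.nn.functional as F

def time_layernorm_backward(M, N, dtype=torch.half, device="cuda"):
    x = torch.randn(M, N, device=device, dtype=dtype, requires_grad=True)
    w = torch.randn(N, device=device, dtype=dtype, requires_grad=True)
    b = torch.randn(N, device=device, dtype=dtype, requires_grad=True)
    y = F.layer_norm(x, (N,), w, b)
    grad = torch.randn_like(y)
    start = torch.cuda.Event(enable_timing=True)
    end = torch.cuda.Event(enable_timing=True)
    torch.cuda.synchronize()
    start.record()
    y.backward(grad)  # produces dgamma/dbeta via the kernel in question
    end.record()
    torch.cuda.synchronize()
    return start.elapsed_time(end)  # milliseconds

# e.g. sweep M, N over powers of two +/- 1 as described above:
# for M in (2**14 - 1, 2**14, 2**14 + 1): print(M, time_layernorm_backward(M, 2048))
```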
In order to visualize results of the kernel when choosing different values of M, N and dtype, I wrote some code to generate a heatmap. The heatmap has N on the x-axis, M on the y-axis and color-coded points where green shows performance improvement and red shows regressions. For example, `m=32 n=2048 1.42x` in the heatmap would indicate the normalized shape had 32 elements. The leading dimensions' product was 2048 elements and the new kernel resulted in the *backward pass* being 1.42x faster than the old *backward pass*.
Important note: This heatmap shows the total backward pass time as seen by the user. The kernel time difference can be sometimes very large while the total backward pass time is not that high. For example, for dtype=torch.half, M=32 N=2048, flush_l2_cache=True case, the heatmap shows a speedup of 1.42x, while ncu tells me the new kernel is 2.5x faster than the old:
M=32 N=2048 dtype=half flush_l2=True Old Kernel NCU summary:
```
----------------------- ----------- ------------
Metric Name Metric Unit Metric Value
----------------------- ----------- ------------
DRAM Frequency Ghz 1.59
SM Frequency Ghz 1.35
Elapsed Cycles cycle 27,526
Memory Throughput % 2.21
DRAM Throughput % 0.54
Duration us 20.42
L1/TEX Cache Throughput % 4.31
L2 Cache Throughput % 2.62
SM Active Cycles cycle 1,475.02
Compute (SM) Throughput % 0.29
----------------------- ----------- ------------
```
M=32 N=2048 dtype=half flush_l2=True New Kernel NCU summary:
```
----------------------- ----------- ------------
Metric Name Metric Unit Metric Value
----------------------- ----------- ------------
DRAM Frequency Ghz 1.59
SM Frequency Ghz 1.34
Elapsed Cycles cycle 10,920
Memory Throughput % 5.64
DRAM Throughput % 1.35
Duration us 8.13
L1/TEX Cache Throughput % 1.92
L2 Cache Throughput % 6.89
SM Active Cycles cycle 3,554.41
Compute (SM) Throughput % 0.67
----------------------- ----------- ------------
```
Let's look at some rows from the heatmap. For dtype=float16 flush_l2_cache=True and when input shapes are powers of 2, we get the following:
<img width="1508" alt="image" src="https://github.com/user-attachments/assets/06179599-b2f0-4a45-8664-247a1067950b" />
There are 3 columns -- the first shows all data points, the second shows speedups only and the 3rd column shows regressions only. We can see that there are dramatic speedups for M >> N cases and the regressions are not that high (less than 1%, which could just be measurement noise). Here is a small guide I made:

For dtype=float32, we get a similar chart:
<img width="1499" alt="image" src="https://github.com/user-attachments/assets/c4d31a76-03b0-426c-9114-e1bfad29b530" />
The new code performs especially well for m >> n cases, and also where m and n are small. The m >> n case is special because we run 2 reduction kernels back to back and parallelize in the "M" dimension (the older kernel only parallelized in the "N" dimension).
The new code can sometimes have regressions for non-powers of 2. That is because the old code was using block sizes of {16, 32} while we have `threads.x = 32`. For example when N=33, the old code would have 3 blocks and we will have 2 blocks. I wrote some code to specialize for this case, but I think it will add complexity and @ngimel mentioned that non-powers of 2 are rare enough.
I am including the regressions here for completeness' sake:
<img width="1500" alt="image" src="https://github.com/user-attachments/assets/31c17cfb-ed9b-4106-b9c8-5c359751f530" />
To see this better:
1. Click the image
2. Right click the expanded image and open in a new tab
3. Go to that tab and left click once to zoom in
If you want to see the full data, here it is:

I also measured binary size and compile time since those are important for developers:
Binary size comparison

```
# Original
-rwxr-xr-x 1 ahmads users 307193112 Mar 6 08:46 ./torch/lib/libtorch_cuda.so
# This PR
-rwxr-xr-x 1 ahmads users 307193112 Mar 6 08:46 ./torch/lib/libtorch_cuda.so
```
The diff in bytes is 302kB which is about a 0.1% increase.
Compile time difference:
```
# Original
real 0m10.931s
user 0m9.676s
sys 0m1.004s
# this PR
real 0m16.720s
user 0m15.514s
sys 0m1.066s
# Command I ran
time /usr/local/cuda/bin/nvcc -forward-unknown-to-host-compiler -DAT_PER_OPERATOR_HEADERS -DFLASHATTENTION_DISABLE_ALIBI -DFLASHATTENTION_DISABLE_SOFTCAP -DFLASH_NAMESPACE=pytorch_flash -DFMT_HEADER_ONLY=1 -DHAVE_MALLOC_USABLE_SIZE=1 -DHAVE_MMAP=1 -DHAVE_SHM_OPEN=1 -DHAVE_SHM_UNLINK=1 -DMINIZ_DISABLE_ZIP_READER_CRC32_CHECKS -DONNXIFI_ENABLE_EXT=1 -DONNX_ML=1 -DONNX_NAMESPACE=onnx_torch -DTORCH_CUDA_BUILD_MAIN_LIB -DTORCH_CUDA_USE_NVTX3 -DUNFUSE_FMA -DUSE_C10D_GLOO -DUSE_C10D_NCCL -DUSE_CUDA -DUSE_CUFILE -DUSE_DISTRIBUTED -DUSE_EXTERNAL_MZCRC -DUSE_FLASH_ATTENTION -DUSE_MEM_EFF_ATTENTION -DUSE_NCCL -DUSE_RPC -DUSE_TENSORPIPE -D_FILE_OFFSET_BITS=64 -Dtorch_cuda_EXPORTS -I/home/ahmads/personal/pytorch/build/aten/src -I/home/ahmads/personal/pytorch/aten/src -I/home/ahmads/personal/pytorch/build -I/home/ahmads/personal/pytorch -I/home/ahmads/personal/pytorch/cmake/../third_party/benchmark/include -I/home/ahmads/personal/pytorch/third_party/onnx -I/home/ahmads/personal/pytorch/build/third_party/onnx -I/home/ahmads/personal/pytorch/nlohmann -I/home/ahmads/personal/pytorch/third_party/flash-attention/csrc/flash_attn/src -I/home/ahmads/personal/pytorch/aten/src/THC -I/home/ahmads/personal/pytorch/aten/src/ATen/cuda -I/home/ahmads/personal/pytorch/third_party/fmt/include -I/home/ahmads/personal/pytorch/aten/src/ATen/../../../third_party/cutlass/include -I/home/ahmads/personal/pytorch/aten/src/ATen/../../../third_party/cutlass/tools/util/include -I/home/ahmads/personal/pytorch/build/caffe2/aten/src -I/home/ahmads/personal/pytorch/aten/src/ATen/.. -I/home/ahmads/personal/pytorch/build/nccl/include -I/home/ahmads/personal/pytorch/c10/cuda/../.. -I/home/ahmads/personal/pytorch/c10/.. -I/home/ahmads/personal/pytorch/third_party/tensorpipe -I/home/ahmads/personal/pytorch/build/third_party/tensorpipe -I/home/ahmads/personal/pytorch/third_party/tensorpipe/third_party/libnop/include -I/home/ahmads/personal/pytorch/torch/csrc/api -I/home/ahmads/personal/pytorch/torch/csrc/api/include -isystem /home/ahmads/personal/pytorch/build/third_party/gloo -isystem /home/ahmads/personal/pytorch/cmake/../third_party/gloo -isystem /home/ahmads/personal/pytorch/cmake/../third_party/tensorpipe/third_party/libuv/include -isystem /home/ahmads/personal/pytorch/cmake/../third_party/googletest/googlemock/include -isystem /home/ahmads/personal/pytorch/cmake/../third_party/googletest/googletest/include -isystem /home/ahmads/personal/pytorch/third_party/protobuf/src -isystem /home/ahmads/personal/pytorch/third_party/XNNPACK/include -isystem /home/ahmads/personal/pytorch/third_party/ittapi/include -isystem /home/ahmads/personal/pytorch/cmake/../third_party/eigen -isystem /usr/local/cuda/include -isystem /home/ahmads/personal/pytorch/third_party/ideep/mkl-dnn/include/oneapi/dnnl -isystem /home/ahmads/personal/pytorch/third_party/ideep/include -isystem /home/ahmads/personal/pytorch/INTERFACE -isystem /home/ahmads/personal/pytorch/third_party/nlohmann/include -isystem /home/ahmads/personal/pytorch/third_party/NVTX/c/include -isystem /home/ahmads/personal/pytorch/cmake/../third_party/cudnn_frontend/include -DLIBCUDACXX_ENABLE_SIMPLIFIED_COMPLEX_OPERATIONS -D_GLIBCXX_USE_CXX11_ABI=1 -Xfatbin -compress-all -DONNX_NAMESPACE=onnx_torch -gencode arch=compute_90,code=sm_90 -Xcudafe 
--diag_suppress=cc_clobber_ignored,--diag_suppress=field_without_dll_interface,--diag_suppress=base_class_has_different_dll_interface,--diag_suppress=dll_interface_conflict_none_assumed,--diag_suppress=dll_interface_conflict_dllexport_assumed,--diag_suppress=bad_friend_decl --expt-relaxed-constexpr --expt-extended-lambda -Wno-deprecated-gpu-targets --expt-extended-lambda -DCUB_WRAPPED_NAMESPACE=at_cuda_detail -DCUDA_HAS_FP16=1 -D__CUDA_NO_HALF_OPERATORS__ -D__CUDA_NO_HALF_CONVERSIONS__ -D__CUDA_NO_HALF2_OPERATORS__ -D__CUDA_NO_BFLOAT16_CONVERSIONS__ -O3 -DNDEBUG -std=c++17 -Xcompiler=-fPIC -DTORCH_USE_LIBUV -DCAFFE2_USE_GLOO -Xcompiler -Wall -Wextra -Wdeprecated -Wno-unused-parameter -Wno-missing-field-initializers -Wno-array-bounds -Wno-unknown-pragmas -Wno-strict-overflow -Wno-strict-aliasing -Wunused-function -Wunused-variable -Wunused-but-set-variable -Wno-maybe-uninitialized -MD -MT caffe2/CMakeFiles/torch_cuda.dir/__/aten/src/ATen/native/cuda/layer_norm_kernel.cu.o -MF caffe2/CMakeFiles/torch_cuda.dir/__/aten/src/ATen/native/cuda/layer_norm_kernel.cu.o.d -x cu -c /home/ahmads/personal/pytorch/aten/src/ATen/native/cuda/layer_norm_kernel.cu -o caffe2/CMakeFiles/torch_cuda.dir/__/aten/src/ATen/native/cuda/layer_norm_kernel.cu.o
```
So the new PR adds about 6 seconds of compile time.
| true
|
2,898,564,048
|
Clear triton kernels after parent make_launcher
|
jamesjwu
|
closed
|
[
"fb-exported",
"Merged",
"ciflow/trunk",
"topic: not user facing",
"module: inductor",
"module: dynamo",
"ciflow/inductor"
] | 5
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #148604
Before, we were clearing the cache only after inductor compile. But inductor may not **always** compile, e.g. on an AOTAutogradCache hit.
So instead, we should clear it when the future is consumed. This is a more robust fix for the issue in D69476856
Differential Revision: [D70646281](https://our.internmc.facebook.com/intern/diff/D70646281/)
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
| true
|
2,898,549,021
|
[ONNX] Expose verification utilities
|
justinchuby
|
closed
|
[
"module: onnx",
"triaged",
"open source",
"Merged",
"ciflow/trunk",
"release notes: onnx",
"topic: new features"
] | 12
|
COLLABORATOR
|
Expose verification utilities to public documentation.
| true
|
2,898,498,813
|
[CI][CUDA] Move away from cuda12.4, Add cuda12.6 eager CI tests
|
tinglvv
|
closed
|
[
"triaged",
"open source",
"Merged",
"ciflow/trunk",
"topic: not user facing",
"ciflow/periodic",
"keep-going",
"ciflow/slow"
] | 7
|
COLLABORATOR
|
https://github.com/pytorch/pytorch/issues/145570
breaking https://github.com/pytorch/pytorch/pull/140793/ into eager and inductor benchmarks to unblock
cc @atalman @malfet @nWEIdia @ptrblck
| true
|
2,898,452,325
|
Fix for AOTI + CUDAGraphs when calling from Python
|
jbschlosser
|
closed
|
[
"Merged",
"ciflow/trunk",
"topic: improvements",
"module: inductor",
"ciflow/inductor",
"release notes: inductor"
] | 15
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #148601
**Background**: I've been comparing performance of torch.compile vs. torch.export + AOTI (specifically, loaded from Python) on the Flux model and found a ~1.4% performance decrease with the latter. The trace shows that CUDAGraphs are not utilized for torch.export + AOTI, leading to higher overhead.
When trying to manually CUDAGraph the loaded, previously exported + AOTIed model (thanks to @eellison for the logic here), I get:
```
Error: operation not permitted when stream is capturing
```
@desertfire confirms that this is due to multi-threading logic on the AOTI runtime side (in `AOTIModelContainer` / `AOTIModel`) conflicting with the use of CUDAGraphs.
**Fix**: This PR takes the approach of providing an alternate, single-threaded method for running loaded models with the AOTI runtime. Details:
* Python side introduces a new flag to enable this behavior (needs a better name): `torch._inductor.package.load_package(..., run_single_threaded=False)`
* This flag is passed down to the C++ side's `AOTIModelPackageLoader`, which passes it to the `CreateAOTIModelRunnerFunc` during `AOTIModelContainerRunner` construction.
* C++ side introduces single-threaded alternatives to model running and model container running:
* `AOTIModelContainer.run_single_threaded()` / `AOTIModel.run_single_threaded()`. The interfaces match those of `run()`, but the synchronization logic has been removed.
* Introduces `AOTInductorModelContainerRunSingleThreaded` to AOTI's `interface.h`; this is invoked by the `AOTIModelContainerRunner` utility class when `run_single_threaded=true`.
I've verified on both a small repro and my real-world use case that I can manually CUDAGraph a loaded model that was previously exported + AOTIed.
**Future work:**
* Flip default value to `run_single_threaded=True` as Python-side inference doesn't take advantage of the AOTI runtime thread pool
* There are some BC concerns here - models need to be re-serialized so the .so contains the new `AOTInductorModelContainerRunSingleThreaded` interface func. We can flip the default value and warn (instead of crashing) if the `AOTInductorModelContainerRunSingleThreaded` symbol does not exist.
* Compose with cudagraph trees as opposed to manual cuda graph wrapping
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
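A usage sketch of the flag described above (the API path and flag come from this description; the package path, shapes, and warmup loop are made up for illustration):
```python
import torch

# Load the previously exported + AOT-compiled package with the
# single-threaded runner so CUDA graph capture is permitted.
model = torch._inductor.package.load_package("model.pt2", run_single_threaded=True)

static_inp = torch.randn(8, 64, device="cuda")
for _ in range(3):  # warmup before capture
    model(static_inp)
torch.cuda.synchronize()

g = torch.cuda.CUDAGraph()
with torch.cuda.graph(g):
    static_out = model(static_inp)

static_inp.copy_(torch.randn(8, 64, device="cuda"))
g.replay()  # reuses the captured kernels; results land in static_out
```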
| true
|
2,898,448,515
|
[pytorch] Update flexattention bwd config generation
|
mandroid6
|
closed
|
[
"fb-exported",
"Merged",
"ciflow/trunk",
"topic: not user facing",
"module: inductor",
"ciflow/inductor"
] | 7
|
CONTRIBUTOR
|
Summary: Currently the `flex_attention` template's backward config generation returns values for every case. This change instead stores intermediate values in `bwd_config`, which is returned at the end.
Test Plan: CI. Existing tests.
Differential Revision: D70649316
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
| true
|
2,898,435,031
|
torch.conj behaves differently on cpu and mps
|
arandono
|
closed
|
[
"triaged",
"module: complex",
"module: mps"
] | 3
|
NONE
|
### 🐛 Describe the bug
torch.conj appears to behave differently on cpu vs mps devices. On cpu, when combined with matrix multiplication, it behaves as expected. On mps devices, it does not perform the conjugation before the matrix multiplication. Here's an example:
```
import torch

N = 2  # matrix size used for the repro
a = torch.rand(N, N, dtype=torch.cfloat)
A = a.to("mps")
b = torch.rand(N, N, dtype=torch.cfloat)
B = b.to("mps")
ab1 = torch.mm(a, b)
AB1 = torch.mm(A, B)
ab2 = torch.mm(a, torch.conj(b))
AB2 = torch.mm(A, torch.conj(B))
ab3 = torch.mm(a, torch.conj_physical(b))
AB3 = torch.mm(A, torch.conj_physical(B))
print(ab1)
print(AB1)
print(ab2)
print(AB2)
print(ab3)
print(AB3)
```
We should have ab1=AB1 and ab2=AB2=ab3=AB3. But note that ab2≠AB2; instead, AB2=AB1, suggesting the conjugation was not executed properly on mps devices. However, torch.conj_physical appears to work as expected.
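As a possible interim workaround sketch (an assumption based on conj_physical behaving correctly here: resolve_conj materializes the lazy conjugation in the same way), continuing the snippet above:
```python
# Materialize the conjugation before the mm so the MPS kernel sees real data.
AB2_workaround = torch.mm(A, torch.conj(B).resolve_conj())
```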
### Versions
Collecting environment information...
PyTorch version: 2.6.0
Is debug build: False
CUDA used to build PyTorch: None
ROCM used to build PyTorch: N/A
OS: macOS 14.5 (arm64)
GCC version: Could not collect
Clang version: 16.0.0 (clang-1600.0.26.6)
CMake version: Could not collect
Libc version: N/A
Python version: 3.11.5 (main, Sep 11 2023, 08:31:25) [Clang 14.0.6 ] (64-bit runtime)
Python platform: macOS-14.5-arm64-arm-64bit
Is CUDA available: False
CUDA runtime version: No CUDA
CUDA_MODULE_LOADING set to: N/A
GPU models and configuration: No CUDA
Nvidia driver version: No CUDA
cuDNN version: No CUDA
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Apple M1 Pro
Versions of relevant libraries:
[pip3] numpy==1.26.0
[pip3] torch==2.6.0
[pip3] torchaudio==2.6.0
[pip3] torchvision==0.21.0
[conda] numpy 1.26.0 py311he598dae_0
[conda] numpy-base 1.26.0 py311hfbfe69c_0
[conda] torch 2.6.0 pypi_0 pypi
[conda] torchaudio 2.6.0 pypi_0 pypi
[conda] torchvision 0.21.0 pypi_0 pypi
cc @ezyang @anjali411 @dylanbespalko @mruberry @nikitaved @amjames @kulinseth @albanD @malfet @DenisVieriu97 @jhavukainen
| true
|
2,898,408,639
|
Test
|
zxiiro
|
closed
|
[
"open source",
"topic: not user facing"
] | 1
|
COLLABORATOR
|
Fixes #ISSUE_NUMBER
| true
|
2,898,365,204
|
Clear triton kernels after parent make_launcher
|
jamesjwu
|
closed
|
[
"fb-exported",
"topic: not user facing",
"module: inductor",
"module: dynamo",
"ciflow/inductor"
] | 6
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #148597
Before, we were clearing the cache only after inductor compile. But inductor may not **always** compile, e.g. on an AOTAutogradCache hit.
So instead, we should clear it when the future is consumed. This is a more robust fix for the issue in D69476856
Differential Revision: [D70646281](https://our.internmc.facebook.com/intern/diff/D70646281/)
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
| true
|
2,898,316,383
|
[c10d] Make getDefaultBackend more fault tolerant
|
kwen2501
|
closed
|
[
"oncall: distributed",
"Merged",
"ciflow/trunk",
"release notes: distributed (c10d)"
] | 11
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #148596
This is a forward fix for #135338.
It hits error like this:
```
"distributed_c10d.py", line 2156, in destroy_process_group
if type(pg) == ProcessGroup and pg._has_hooks():
RuntimeError: Could not find the default backend type 0 for Process Group with name undefined.
```
When users call `init_process_group(nothing)`, default backend is not set, or set to `undefined`. Thus the above signature. Triggered by the `_has_hooks()` call.
The fix wraps `getDefaultBackend` with a try-catch.
cc @H-Huang @awgu @wanchaol @fegin @fduwjj @wz337 @wconstab @d4l3k @c-p-i-o
| true
|
2,898,311,828
|
[Inductor][Triton] Fix test_autotune_inplace_kernel to work with newer Triton version
|
PaulZhang12
|
closed
|
[
"Merged",
"ciflow/trunk",
"topic: not user facing",
"module: inductor",
"ciflow/inductor"
] | 15
|
CONTRIBUTOR
|
For new Triton version 3.3, constexpr are included as part of the signature. Update failing test to reflect this change, additional context in https://github.com/pytorch/pytorch/pull/145051.
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
| true
|
2,898,311,261
|
[CUDA Graphs][NCCL] Set event queries to happen under thread-local mode in `ProcessGroupNCCL.cpp`
|
eqy
|
closed
|
[
"oncall: distributed",
"module: cuda",
"module: nccl",
"open source",
"Merged",
"module: cuda graphs",
"ciflow/trunk",
"topic: not user facing",
"ciflow/periodic",
"rocm"
] | 7
|
COLLABORATOR
|
Should mean we don't need to coordinate the watchdog with CUDAGraph captures anymore
cc @H-Huang @awgu @kwen2501 @wanchaol @fegin @fduwjj @wz337 @wconstab @d4l3k @c-p-i-o @ptrblck @msaroufim @mcarilli @ezyang @eellison @penguinwu @BoyuanFeng
| true
|
2,898,267,447
|
Add XPU device to nested_layer_norm
|
min-jean-cho
|
closed
|
[
"open source",
"Merged",
"ciflow/trunk",
"topic: not user facing",
"ciflow/xpu",
"release notes: xpu",
"module: xpu"
] | 15
|
COLLABORATOR
|
Work with https://github.com/intel/torch-xpu-ops/pull/1416 .
cc @gujinghui @EikanWang @fengyuan14 @guangyey
| true
|
2,898,251,150
|
[AOTI] Switch to local cpp compile for fbcode
|
zoranzhao
|
closed
|
[
"fb-exported",
"Merged",
"ciflow/trunk",
"topic: not user facing",
"module: inductor",
"ciflow/inductor"
] | 15
|
MEMBER
|
Summary: as title; otherwise we cannot find lamdhip64
Test Plan: https://www.internalfb.com/phabricator/paste/view/P1747104431
Differential Revision: D70637798
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
| true
|
2,898,237,706
|
Move broken job to unstable workflow: `trunk / libtorch-linux-focal-cuda12.4-py3.10-gcc9-debug / build`
|
ZainRizvi
|
closed
|
[
"topic: not user facing"
] | 3
|
CONTRIBUTOR
|
The job `trunk / libtorch-linux-focal-cuda12.4-py3.10-gcc9-debug / build` is currently broken and marked as unstable: https://github.com/pytorch/pytorch/issues/148495
Why is this needed?
* Using Issues to mark jobs as unstable is only meant for short term use, and this job is taking longer to fix.
* The tooling that upgrades the `viable/strict` commit does not respect issues that marks a job as unstable, so the `viable/strict` branch is lagging `main` by two days now
This change can be reverted once the job becomes healthy again
| true
|
2,898,168,333
|
[PGNCCL] Launch kernel on current stream & remove `record_stream` entirely
|
kwen2501
|
closed
|
[
"oncall: distributed",
"Merged",
"Reverted",
"ciflow/trunk",
"release notes: distributed (c10d)",
"keep-going",
"ci-no-td"
] | 50
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #148590
This PR has multiple changes to `ProcessGroupNCCL` (which unfortunately are related):
1. When async_op=False, we directly launch the collective on the "current" stream, instead of using a trampoline stream and joining back (see the sketch after this list).
- Resolves #147729
- Resolves #146881
- Also saves two event syncs (which have overhead in case of HIP) and one pybind when we call `work.wait()` in distributed_c10d.py on behalf of user.
2. Entirely remove `record_stream` and use CPU-side stashing for managing tensor lifetime against recycling.
- Resolves #147168
3. Remove tensor life management when async_op=False; only use it when async_op=True.
4. To guard against user not calling `work.wait()`, we ask watchdog to unstash tensors after detecting completion of collectives, to prevent us from holding reference to tensors forever. This is a safety net, rather than a service guarantee, see discussion [here](https://github.com/pytorch/pytorch/issues/147168#issuecomment-2660142460).
5. Profiles in async_op=False mode would look different -- collective kernels would show up on the same line as compute kernels.
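A sketch of the user-visible semantics described in items 1-4 (not the internal implementation):
```python
import torch.distributed as dist

def allreduce_sync(t):
    # async_op=False: launched on the current stream; no trampoline stream,
    # no record_stream bookkeeping for `t`.
    dist.all_reduce(t, async_op=False)
    return t

def allreduce_async(t):
    # async_op=True: the input is stashed until work.wait() (or, as a safety net,
    # until the watchdog sees the collective complete).
    work = dist.all_reduce(t, async_op=True)
    work.wait()
    return t
```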
Joint work with @cenzhaometa who wants to remove the event sync overhead.
Cc: @ngimel @awgu @Aidyn-A @skyw @wconstab @leonardo0lyj
cc @H-Huang @awgu @wanchaol @fegin @fduwjj @wz337 @wconstab @d4l3k @c-p-i-o
@diff-train-skip-merge
Differential Revision: [D71652868](https://our.internmc.facebook.com/intern/diff/D71652868)
| true
|
2,898,164,996
|
RuntimeError on MPS: [srcBuf length] > 0 INTERNAL ASSERT FAILED – Placeholder tensor is empty using huggingface model ibm-granite/granite-timeseries-ttm-r2
|
Arjein
|
closed
|
[
"module: crash",
"triaged",
"module: mps"
] | 2
|
NONE
|
### 🐛 Describe the bug
**Describe the bug**
I encounter the following error during inference when running a model on MPS with PyTorch 2.6.0 on macOS 15.3.1 (Apple M4):
RuntimeError: [srcBuf length] > 0 INTERNAL ASSERT FAILED at "/Users/runner/work/pytorch/pytorch/pytorch/aten/src/ATen/native/mps/OperationUtils.mm":530, please report a bug to PyTorch. Placeholder tensor is empty!
I’m running a model on MPS with PyTorch 2.6.0 on macOS 15.3.1 (Apple M4). During inference, I see this crash repeatedly. When I change the device to 'cpu', the error does not occur. It seems this issue is specific to the MPS backend.
**To Reproduce**
1. pip install torch==2.6.0
2. Use the following script (simplified):
```python
import torch
from tsfm_public import TimeSeriesForecastingPipeline
from ibm_granite import TinyTimeMixerForPrediction # Adjust import if necessary
# Assume input_df is a properly formatted DataFrame with a datetime column 'date'
# and an identifier column 'item_id', plus at least one target column 'close'
# For example:
# input_df = pd.read_csv('path_to_your_csv')
# input_df['date'] = pd.to_datetime(input_df['date'])
timestamp_column = "date"
target_columns = ['close']
context_length = 512
zeroshot_model = TinyTimeMixerForPrediction.from_pretrained(
"ibm-granite/granite-timeseries-ttm-r2", # Using huggingface model
num_input_channels=len(target_columns),
)
pipeline = TimeSeriesForecastingPipeline(
zeroshot_model,
timestamp_column=timestamp_column, # Column dtype = DateTime
id_columns=['item_id'], # Column dtype = String
target_columns=target_columns, # Column Type = float
explode_forecasts=False,
freq="5min",
device="mps", # Setting device to MPS
)
# Trigger inference
zeroshot_forecast = pipeline(input_df)
print(zeroshot_forecast.tail())
```
3. Run the script with the device set to 'mps'.
**Expected behavior**
I expect the model to perform inference without throwing an error on the MPS backend.
Environment
• macOS 15.3.1 (Apple M4)
• Python (version used in your environment)
• PyTorch 2.6.0
• Using the huggingface model: ibm-granite/granite-timeseries-ttm-r2
• (Include full output from python -m torch.utils.collect_env if possible)
Additional context
• The error only occurs when using device "mps". On CPU, the inference works as expected.
• This issue appears to be related to the MPS backend implementation in PyTorch.
Please let me know if any additional information is required. Thanks for your help!
### Versions
Collecting environment information...
PyTorch version: 2.6.0
Is debug build: False
CUDA used to build PyTorch: None
ROCM used to build PyTorch: N/A
OS: macOS 15.3.1 (arm64)
GCC version: Could not collect
Clang version: 16.0.0 (clang-1600.0.26.6)
CMake version: Could not collect
Libc version: N/A
Python version: 3.12.9 (main, Feb 4 2025, 14:38:38) [Clang 16.0.0 (clang-1600.0.26.6)] (64-bit runtime)
Python platform: macOS-15.3.1-arm64-arm-64bit
Is CUDA available: False
CUDA runtime version: No CUDA
CUDA_MODULE_LOADING set to: N/A
GPU models and configuration: No CUDA
Nvidia driver version: No CUDA
cuDNN version: No CUDA
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Apple M4
Versions of relevant libraries:
[pip3] numpy==1.26.4
[pip3] torch==2.6.0
[conda] Could not collect
cc @kulinseth @albanD @malfet @DenisVieriu97 @jhavukainen
| true
|
2,898,153,150
|
[CI][CUDA] Update test_unary_ufuncs.py to workaround #148143
|
nWEIdia
|
closed
|
[
"open source",
"topic: not user facing"
] | 5
|
COLLABORATOR
|
Workaround: #148143
test_reference_numerics_small has the following:
if dtype in (torch.bool,):
raise self.skipTest("bool has no small values")
Does bool have "normal" values?
Fixes #148143
cc @atalman @malfet @eqy @tinglvv @ptrblck
| true
|
2,898,147,484
|
[AOTI] build CPU CPP kernels at O3, and all other code at O1
|
benjaminglass1
|
closed
|
[
"open source",
"Merged",
"ciflow/trunk",
"topic: not user facing",
"module: inductor",
"ciflow/inductor"
] | 4
|
COLLABORATOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #148587
In the future, we may also want to add LTO linking to further optimize the results (while still hopefully netting compile time benefits).
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
Differential Revision: [D70641543](https://our.internmc.facebook.com/intern/diff/D70641543)
| true
|
2,898,147,482
|
Enable a fast path for (static) qlinear for AArch64 through ACL directly.
|
fadara01
|
closed
|
[
"module: cpu",
"open source",
"release notes: quantization"
] | 2
|
COLLABORATOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #148586
* #148585
* #148584
This enables a fast path for eager mode static quantization for AArch64 through Arm Compute Library (ACL) directly.
PR #145942 addressed the high overhead in qlinear_dynamic on AArch64 (due to redundant weight pretranspositions and reductions) by enabling a path that calls ACL directly.
This does the same thing but for (static) qlinear.
cc @jgong5 @mingfeima @XiaobingSuper @sanchitintel @ashokei @jingxu10
| true
|
2,898,147,367
|
Enable fast qlinear static/dynamic path for AArch64 through ACL directly
|
fadara01
|
closed
|
[
"module: cpu",
"open source",
"module: arm",
"Merged",
"release notes: quantization",
"ciflow/linux-aarch64",
"arm priority"
] | 12
|
COLLABORATOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #148653
* __->__ #148585
This enables a fast path for eager mode static/dynamic quantization for AArch64 through Arm Compute Library (ACL) directly.
Context: PRs #126687, #139887 enabled an optimized implementation for `qlinear` and `qlinear_dynamic` for aarch64 through `ideep → oneDNN → ACL` which improved performance by ~10x compared to the previous implementation.
However, the current `qlinear` and `qlinear_dynamic` path (`ideep → oneDNN → ACL`) suffers from high overhead due to the API friction between the stateless oneDNN API and the stateful ACL low-precision GEMM (`lowp_gemm`) API - for example, ACL's `lowp_gemm` objects cache information like weights reduction or weights in optimized memory format which oneDNN does not allow due to its stateless nature.
Hence, ACL currently runs a (redundant) sum of columns and pre-transposition (to the gemm kernel's optimal format) for each GEMM operation.
This PR addresses the sub-optimalities above by integrating ACL directly with `qlinear` and `qlinear_dynamic`.
- **For `qlinear_dynamic` (dynamically quantized matmuls):**
This PR yields an **average speedup** (averaged over context_lengths of 2^3 up to 2^9) of ~**50%** for `bert-base-uncased`, `bert-large-uncased`, `roberta-base`, and `distilbert-base-uncased` with 16 threads on a Neoverse-V1 (with transformers==4.48) for the benchmarking script below:
```
# SPDX-FileCopyrightText: Copyright 2025 Arm Limited and/or its affiliate <open-source-office@arm.com>
# SPDX-License-Identifier: BSD-3-Clause
import torch
from transformers import AutoModel, AutoConfig
import time
import numpy as np
from argparse import ArgumentParser
class ModelArgumentParser(ArgumentParser):
def __init__(self) -> None:
super().__init__(description="huggingface model")
self.add_argument("--context_length",
help="context length - number of input tokens",
type=int,
default=64
)
self.add_argument("--model",
help="model checkpoint - i.e. 'bert-base-uncased'",
type=str,
default=None)
self.add_argument("--iters",
help="benchmark iterations",
default=500)
if __name__ == "__main__":
parser = ModelArgumentParser()
args = parser.parse_args()
model_name = args.model
config = AutoConfig.from_pretrained(model_name)
batch_size = 1
model = AutoModel.from_pretrained(model_name)
model = torch.quantization.quantize_dynamic(model, {torch.nn.Linear}, dtype=torch.qint8)
model.eval()
inputs = torch.randint(config.vocab_size, (batch_size, args.context_length), dtype=torch.long, device="cpu")
times = []
with torch.no_grad():
# warmup
for _ in range(10):
model(inputs)
# benchmark
for _ in range(args.iters):
s = time.time_ns()
model(inputs)
times.append((time.time_ns() - s) / 1e6)
print("Model = ", model_name)
print("Context Length = ", args.context_length)
print("Min (ms) = ", min(times))
print("Mean (ms) = ", np.mean(times))
```
- **For `qlinear` (statically quantized matmuls):**
This PR yields an **average speedup of 2x for signed activations (`s8s8s8`) and 95x for unsigned activations (u8s8u8)** on a Neoverse-V1 with 16 threads for the benchmarking script below.
The averages are over all combinations of `M = [8, 16, ..., 512]`, `K = [768, 1024, 2048, 4096]`, `N = [768, 1024, 2048, 4096]`.
The astronomical speedup for unsigned activation is because oneDNN v3.7 does not have an optimized implementation for `u8s8u8` on AArch64.
```
# SPDX-FileCopyrightText: Copyright 2025 Arm Limited and/or its affiliate <open-source-office@arm.com>
# SPDX-License-Identifier: BSD-3-Clause
import torch
import torch.nn as nn
from torch.quantization import QConfig
from torch.ao.quantization.observer import HistogramObserver, default_weight_observer
import torch
import torch.nn as nn
import numpy as np
import random
from argparse import ArgumentParser
import time
class ModelArgumentParser(ArgumentParser):
def __init__(self) -> None:
super().__init__()
self.add_argument("--M",
help="M dimension",
type=int,
default=64
)
self.add_argument("--K",
help="K dimension",
type=int,
default=64
)
self.add_argument("--N",
help="N dimension",
type=int,
default=64
)
self.add_argument("--signed_input",
help="Use (signed) torch.qint8 for inputs instead of (unsigned) torch.quint8",
action="store_true"
)
self.add_argument("--seed",
help="Random seed",
type=int,
default=42
)
self.add_argument("--iters",
help="benchmark iterations",
default=500)
def set_seed(seed):
random.seed(seed)
np.random.seed(seed)
torch.manual_seed(seed)
class LinearModel(nn.Module):
def __init__(self, K, N):
super(LinearModel, self).__init__()
self.quant = torch.quantization.QuantStub()
self.fc = nn.Linear(K, N)
self.dequant = torch.quantization.DeQuantStub()
def forward(self, x):
x = self.quant(x)
x = self.fc(x)
x = self.dequant(x)
return x
def quantize_model(model, args):
qconfig = QConfig(
activation=HistogramObserver.with_args(reduce_range=False,
dtype=torch.qint8 if args.signed_input else torch.quint8),
weight=default_weight_observer,
)
# Prepare the model for static quantization
# Specify quantization configurations
model.qconfig = qconfig
model_prepared = torch.quantization.prepare(model)
# Calibrate the model with sample inputs
# Example input data for calibration
with torch.no_grad():
sample_data = torch.randn(args.M, args.K)
model_prepared(sample_data)
# Convert the prepared model to a quantized model
model_quantized = torch.quantization.convert(model_prepared)
return model_quantized
if __name__ == "__main__":
parser = ModelArgumentParser()
args = parser.parse_args()
set_seed(args.seed)
model_fp32 = LinearModel(args.K, args.N)
model_quantized = quantize_model(model_fp32, args)
inputs = torch.randn(args.M, args.K)
times = []
with torch.no_grad():
# warmup
for _ in range(10):
model_quantized(inputs)
# benchmark
for _ in range(args.iters):
s = time.time_ns()
model_quantized(inputs)
times.append((time.time_ns() - s) / 1e6)
print("M,K,N,signed = ", args.M, args.K, args.N, args.signed_input)
print("Min Times (ms) = ", min(times))
print("Mean Times (ms) = ", np.mean(times))
```
cc @jgong5 @mingfeima @XiaobingSuper @sanchitintel @ashokei @jingxu10 @malfet @snadampal @milpuz01
| true
|
2,898,147,220
|
Enable Direct Use of Arm Compute Library (ACL) in ATen
|
fadara01
|
closed
|
[
"open source",
"module: arm",
"Merged",
"topic: not user facing",
"ciflow/linux-aarch64"
] | 6
|
COLLABORATOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #148653
* #148585
* __->__ #148584
ACL is already built with PyTorch as a shared library when USE_MKLDNN_ACL is set.
Currently, it is only used indirectly in ATen via oneDNN for AArch64 targets. However there are cases where it makes sense to utilize ACL directly without oneDNN as an intermediary - e.g. quantization. See #145942, #147337, #146620.
This patch enables such use cases by exposing ACL to ATen
cc @malfet @snadampal @milpuz01
| true
|
2,898,142,167
|
Enable fast qlinear static/dynamic path for AArch64 through ACL directly
|
fadara01
|
closed
|
[
"module: cpu",
"open source",
"release notes: quantization"
] | 2
|
COLLABORATOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #148583
* #148582
This enables a fast path for eager mode dynamic quantization for AArch64 through Arm Compute Library (ACL) directly.
Context: PR #126687 enabled an optimized implementation for qlinear_dynamic for aarch64 through ideep → oneDNN → ACL which improved performance by ~10x compared to the previous implementation.
However, the current qlinear_dynamic path (ideep → oneDNN → ACL) suffers from high overhead due to the API friction between the stateless oneDNN API and the stateful ACL low-precision GEMM (lowp_gemm) API - for example, ACL's lowp_gemm objects cache information like weights reduction or weights in optimized memory format which oneDNN does not allow due to its stateless nature.
Hence, ACL currently runs a (redundant) sum of columns and pre-transposition (to the gemm kernel's optimal format) for each GEMM operation.
This PR addresses the sub-optimalities above by integrating ACL directly with qlinear_dynamic. This approach yields an average speedup (averaged over context_lengths of 2^3 up to 2^9) of ~ 50% for bert-base-uncased, bert-large-uncased, roberta-base, distilbert-base-uncased with 16 threads on a Neoverse-V1 (with transformers==4.48).
To achieve this, we introduce PackedLinearWeightsACL (as a subclass of PackedLinearWeightsOnednn) with an implementation of qlinear_dynamic that uses ACL directly, while qlinear still follows the oneDNN path.
cc @jgong5 @mingfeima @XiaobingSuper @sanchitintel @ashokei @jingxu10
| true
|
2,898,142,032
|
Enable Direct Use of Arm Compute Library (ACL) in ATen
|
fadara01
|
closed
|
[
"open source"
] | 3
|
COLLABORATOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #148583
* __->__ #148582
ACL is already built with PyTorch as a shared library when USE_MKLDNN_ACL is set.
Currently, it is only used indirectly in ATen via oneDNN for AArch64 targets. However there are cases where it makes sense to utilize ACL directly without oneDNN as an intermediary - e.g. quantization. See #145942, #147337, #146620.
This patch enables such use cases by exposing ACL to ATen
| true
|
2,898,138,311
|
Enable Direct Use of Arm Compute Library (ACL) in ATen
|
fadara01
|
closed
|
[
"open source"
] | 3
|
COLLABORATOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #148581
ACL is already built with PyTorch as a shared library when USE_MKLDNN_ACL is set.
Currently, it is only used indirectly in ATen via oneDNN for AArch64 targets. However there are cases where it makes sense to utilize ACL directly without oneDNN as an intermediary - e.g. quantization. See #145942, #147337, #146620.
This patch enables such use cases by exposing ACL to ATen
| true
|
2,898,127,106
|
[inductor]lowering scan to while_loop
|
ydwu4
|
closed
|
[
"Merged",
"ciflow/trunk",
"topic: not user facing",
"module: inductor",
"ciflow/inductor"
] | 7
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #148580
This PR adds a pass in post_grad that lowers scan to while_loop. See the comment before the pass for how this is implemented.
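A rough, eager-mode illustration of the lowering idea (not the actual post_grad pass): a scan becomes a while_loop that carries an index and the running state, and writes each step's output into a preallocated buffer.
```python
import torch

def scan_as_while_loop(combine_fn, init, xs):
    n = xs.shape[0]
    ys = torch.empty((n, *init.shape), dtype=init.dtype)
    i, carry = 0, init
    while i < n:  # stand-in for the while_loop HOP
        carry = combine_fn(carry, xs[i])
        ys[i] = carry
        i += 1
    return carry, ys

# e.g. a cumulative sum:
# scan_as_while_loop(lambda c, x: c + x, torch.zeros(()), torch.arange(5.0))
```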
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
| true
|
2,898,102,028
|
[AOTI] build CPU CPP kernels at O3, and all other code at O1
|
benjaminglass1
|
closed
|
[
"open source",
"ciflow/trunk",
"topic: not user facing",
"module: inductor",
"ciflow/inductor"
] | 1
|
COLLABORATOR
|
In the future, we may also want to add LTO linking to further optimize the results (while still hopefully netting compile time benefits).
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
| true
|
2,898,058,501
|
[CI][CUDA][Distributed]Update test_composability.py
|
nWEIdia
|
closed
|
[
"oncall: distributed",
"triaged",
"open source",
"Merged",
"ciflow/trunk",
"topic: not user facing"
] | 17
|
COLLABORATOR
|
`world_size = int(os.getenv("WORLD_SIZE", 4))` in subsequent lines indicates that the tests in this file do not only require > 1 GPU, but at least 4 GPUs. `skip_if_lt_x_gpu(4)` does not properly skip this on a platform with 2 GPUs.
`skip_if_lt_x_gpu` being broken is potentially related to a similar issue: https://github.com/pytorch/pytorch/issues/146094
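For reference, a minimal usage sketch of the decorator in question (assuming the usual `common_distributed` helper); the point is that the skip threshold must match the world size the test actually needs:
```python
from torch.testing._internal.common_distributed import skip_if_lt_x_gpu

class MyCompositionTest:  # hypothetical test class
    @skip_if_lt_x_gpu(4)  # must reflect WORLD_SIZE=4, not just "more than one GPU"
    def test_needs_four_gpus(self):
        ...
```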
cc @H-Huang @awgu @kwen2501 @wanchaol @fegin @fduwjj @wz337 @wconstab @d4l3k @c-p-i-o @tinglvv @eqy @ptrblck @atalman @malfet
| true
|
2,898,024,412
|
[DCP] Save Plan Caching: Fix the missing all_plans update in the cache.
|
saumishr
|
closed
|
[
"oncall: distributed",
"fb-exported",
"Merged",
"ciflow/trunk",
"release notes: distributed (checkpoint)",
"oncall: distributed checkpointing"
] | 10
|
CONTRIBUTOR
|
Summary: Save Plan Caching: Fix the missing all_plans update in the cache.
Test Plan:
```
buck2 test //aiplatform/modelstore/experimental/integration_tests/tests/nosan:checkpoint_dist_save_load_test
```
https://www.internalfb.com/intern/testinfra/testrun/17451448626323264
Reviewed By: MeetVadakkanchery
Differential Revision: D70229019
cc @LucasLLC @pradeepfn @H-Huang @awgu @kwen2501 @wanchaol @fegin @fduwjj @wz337 @wconstab @d4l3k @c-p-i-o
| true
|
2,898,022,193
|
[Easy/Profiler] Add last entry to truncated values
|
sraikund16
|
closed
|
[
"enhancement",
"fb-exported",
"Merged",
"ciflow/trunk",
"release notes: profiler"
] | 6
|
CONTRIBUTOR
|
Summary: Since the ranks of a PG are usually in a consecutive range, it is useful to also print the last entry when truncating metadata.
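A toy illustration of the described behavior (not the profiler code itself): when truncating a long list of ranks, keep the last entry so a consecutive range stays recognizable.
```python
def truncate_ranks(ranks, limit=2):
    if len(ranks) <= limit:
        return ", ".join(map(str, ranks))
    # keep the first `limit` entries plus the last one
    return ", ".join(map(str, ranks[:limit])) + f", ..., {ranks[-1]}"

# truncate_ranks([0, 1, 2, 3]) -> "0, 1, ..., 3"
```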
Test Plan:
Manually changed truncate length to 2 and ran 4 gpu graph to get the following trace:
https://www.internalfb.com/intern/perfdoctor/trace_view?filepath=tree/traces/dynocli/devgpu003.rva5.facebook.com/rank-1.Mar_05_09_48_21.1280355.pt.trace.json.gz&bucket=gpu_traces
Differential Revision: D70637461
| true
|
2,897,946,847
|
[BE] Relax sympy dependency to 1.13.3 or newer
|
malfet
|
closed
|
[
"Merged",
"ciflow/trunk",
"release notes: releng",
"topic: improvements",
"topic: not user facing"
] | 3
|
CONTRIBUTOR
|
Fixes https://github.com/pytorch/pytorch/issues/145225
| true
|
2,897,931,924
|
[Inductor][Triton] Fix test_autotune_inplace_kernel to work with newer Triton version
|
PaulZhang12
|
closed
|
[
"ciflow/trunk",
"topic: not user facing",
"module: inductor",
"ciflow/inductor"
] | 4
|
CONTRIBUTOR
|
For the new Triton version 3.3, constexprs are included as part of the signature. Update the failing test to reflect this change; additional context in https://github.com/pytorch/pytorch/pull/145051.
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
| true
|
2,897,929,892
|
Re-enable test_torchinductor:test_buffer_batch_norm
|
masnesral
|
closed
|
[
"Merged",
"ciflow/trunk",
"topic: not user facing",
"module: inductor",
"ciflow/inductor"
] | 3
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #148573
Summary: Per https://github.com/pytorch/pytorch/issues/128198, it seems like this is working now.
Fixes #128198
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
| true
|
2,897,914,098
|
AOTI takes very long time to compile (1:40 hours)
|
tugsbayasgalan
|
open
|
[
"oncall: pt2",
"export-triage-review",
"oncall: export",
"module: aotinductor"
] | 1
|
CONTRIBUTOR
|
### 🐛 Describe the bug
```python
import requests
import torch
from PIL import Image
from transformers import BlipForQuestionAnswering, BlipProcessor
processor = BlipProcessor.from_pretrained("Salesforce/blip-vqa-base")
model = BlipForQuestionAnswering.from_pretrained("Salesforce/blip-vqa-base").to("cuda")
img_url = "https://storage.googleapis.com/sfr-vision-language-research/BLIP/demo.jpg"
raw_image = Image.open(requests.get(img_url, stream=True).raw).convert("RGB")
question = "how many dogs are in the picture?"
inputs = processor(raw_image, question, return_tensors="pt").to("cuda")
out = model.generate(**inputs)
from torch._export.utils import wrap_method
with torch.inference_mode():
query_dim = torch.export.Dim("query", max=511)
dynamic_shapes = {"pixel_values":{}, "input_ids": {1: query_dim}, "attention_mask": {1: query_dim}}
ep = torch.export.export(wrap_method(model.generate), (), dict(**inputs), dynamic_shapes=dynamic_shapes, strict=False)
mod = ep.run_decompositions({}).module()
fmodel_path = torch._inductor.aot_compile(mod, (), dict(**inputs))
artifact = torch._export.aot_load(fmodel_path, "cuda")
out = artifact(**inputs)
```
### Versions
main
cc @chauhang @penguinwu @avikchaudhuri @gmagogsfm @zhxchen17 @angelayi @suo @ydwu4 @desertfire @chenyang78 @yushangdi
| true
|
2,897,911,568
|
[c10d] Move record param for init to the right place
|
fduwjj
|
closed
|
[
"oncall: distributed",
"Merged",
"ciflow/trunk",
"release notes: distributed (c10d)"
] | 3
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #148571
The place where we record the init params does not look correct. We move it to the beginning of comm init.
cc @H-Huang @awgu @kwen2501 @wanchaol @fegin @wz337 @wconstab @d4l3k @c-p-i-o
| true
|
2,897,866,046
|
[dynamo] ctx_manager.py: replace unimplemented with unimplemented_v2
|
zou3519
|
closed
|
[
"Merged",
"ciflow/trunk",
"module: dynamo",
"ciflow/inductor",
"release notes: dynamo"
] | 3
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #148570
* #148454
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @chauhang @amjames
| true
|
2,897,865,112
|
[BE][pytree] cleanup parameterized pytree tests
|
XuehaiPan
|
open
|
[
"open source",
"topic: not user facing",
"module: pytree",
"module: dynamo",
"ciflow/inductor"
] | 4
|
COLLABORATOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #138214
* #113258
* __->__ #148569
Changes:
1. Rename import `py_pytree` -> `python_pytree`. We will add a new test for `generic_pytree` in a follow-up PR after #137400
2. Reuse the parametrize test marker:
```python
parametrize_pytree_module = parametrize(
"pytree",
[
subtest(python_pytree, name="python"),
*([subtest(cxx_pytree, name="cxx")] if not IS_FBCODE else []),
subtest(pytree, name="generic"),
],
)
```
cc @zou3519 @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @chauhang @amjames
| true
|
2,897,863,560
|
Suggested fixes sometimes not enough in export
|
tugsbayasgalan
|
open
|
[
"triaged",
"oncall: pt2",
"oncall: export"
] | 2
|
CONTRIBUTOR
|
### 🐛 Describe the bug
```python
import requests
import torch
from PIL import Image
from transformers import BlipForQuestionAnswering, BlipProcessor
processor = BlipProcessor.from_pretrained("Salesforce/blip-vqa-base")
model = BlipForQuestionAnswering.from_pretrained("Salesforce/blip-vqa-base").to("cuda")
img_url = "https://storage.googleapis.com/sfr-vision-language-research/BLIP/demo.jpg"
raw_image = Image.open(requests.get(img_url, stream=True).raw).convert("RGB")
question = "how many dogs are in the picture?"
inputs = processor(raw_image, question, return_tensors="pt").to("cuda")
from torch._export.utils import wrap_method
with torch.inference_mode():
query_dim = torch.export.Dim("query")
dynamic_shapes = {"pixel_values":{}, "input_ids": {1: query_dim}, "attention_mask": {1: query_dim}}
ep = torch.export.export(wrap_method(model.generate), (), dict(**inputs), dynamic_shapes=dynamic_shapes, strict=False)
```
Outputs:
```
suggested_fix => query_dim = torch.export.Dim("query", max=512)
```
Then when I use this constraint, I get:
```
torch._dynamo.exc.UserError: Constraints violated (query)! For more information, run with TORCH_LOGS="+dynamic".
- Not all values of query = L['args'][1]['input_ids'].size()[1] in the specified range query <= 512 satisfy the generated guard L['args'][1]['input_ids'].size()[1] != 512.
```
And it doesn't suggest anything else. I worked around it by setting max=511.
### Versions
main
cc @chauhang @penguinwu @avikchaudhuri @gmagogsfm @zhxchen17 @angelayi @suo @ydwu4
| true
|
2,897,826,278
|
tc
|
clee2000
|
closed
|
[
"topic: not user facing"
] | 1
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #148567
* #148566
| true
|
2,897,826,108
|
[no ci]
|
clee2000
|
closed
|
[
"topic: not user facing"
] | 1
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #148567
* __->__ #148566
| true
|
2,897,799,274
|
[MTIA] Use "ieee" instead of "tf32" for MTIA's default precision in FlexAttention
|
PatriceVignola
|
closed
|
[
"fb-exported",
"Merged",
"ciflow/trunk",
"topic: not user facing",
"module: inductor",
"ciflow/inductor"
] | 7
|
CONTRIBUTOR
|
Summary: MTIA supports ieee but not tf32, so we set the default precision of MTIA to ieee similar to how it's done for AMD.
Test Plan: CI
Reviewed By: mortzur
Differential Revision: D70072064
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
| true
|
2,897,798,818
|
[Feature Request] Dynamic shapes API requires spec for all arguments.
|
tugsbayasgalan
|
open
|
[
"feature",
"triaged",
"oncall: pt2",
"module: dynamic shapes"
] | 0
|
CONTRIBUTOR
|
### 🐛 Describe the bug
```python
import torch
class DummyModel(torch.nn.Module):
def __init__(self):
super().__init__()
self.a = torch.ones(4, 4)
def forward(self, start, end):
return start.sum() + end.sum()
f = DummyModel()
kwargs = {"start": torch.ones(4, 4), "end": torch.ones(4, 4)}
dynamic_shapes = {"end": {0: torch.export.Dim("end_dim")}}
ep = torch.export.export(f, (), kwargs, dynamic_shapes=dynamic_shapes)
```
This throws:
```
torch._dynamo.exc.UserError: When `dynamic_shapes` is specified as a dict, its top-level keys must be the arg names ['start', 'end'] of `inputs`, but here they are ['end']. Alternatively, you could also ignore arg names entirely and specify `dynamic_shapes` as a list/tuple matching `inputs`. For more information about this error, see: https://pytorch.org/docs/main/generated/exportdb/index.html#dynamic-shapes-validation
```
I am not sure why we need to specify "start" in the dynamic_shapes API. Is it possible to assume that a missing arg means it should be static?
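For what it's worth, a workaround sketch that satisfies the top-level-keys check today is to list every argument and mark the static one explicitly (my understanding is that `None` means "treat as static"):
```python
dynamic_shapes = {"start": None, "end": {0: torch.export.Dim("end_dim")}}
ep = torch.export.export(f, (), kwargs, dynamic_shapes=dynamic_shapes)
```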
### Versions
Main
cc @chauhang @penguinwu @ezyang @bobrenjc93
| true
|
2,897,784,561
|
[ROCm][Windows] Enable hipblaslt for Windows
|
m-gallus
|
closed
|
[
"module: rocm",
"triaged",
"open source",
"Merged",
"ciflow/trunk",
"topic: not user facing",
"ciflow/rocm"
] | 6
|
CONTRIBUTOR
|
This PR adds the hipblaslt library as one of the Windows dependencies. `rocBLAS` is added too, since certain symbols aren't detected with `hipblas` alone on Windows.
cc @jeffdaily @sunway513 @jithunnair-amd @pruthvistony @ROCmSupport @dllehr-amd @jataylo @hongxiayang @naromero77amd
| true
|
2,897,780,719
|
[ROCm] add gfx12 to nightly wheels
|
alugorey
|
closed
|
[
"module: rocm",
"open source",
"Merged",
"ciflow/binaries",
"topic: not user facing"
] | 4
|
CONTRIBUTOR
|
Adds gfx1200 and gfx1201 to PYTORCH_ROCM_ARCH for wheels and libtorch.
cc @jeffdaily @sunway513 @jithunnair-amd @pruthvistony @ROCmSupport @dllehr-amd @jataylo @hongxiayang @naromero77amd
| true
|
2,897,754,286
|
[RFC] First version of statically compiled launcher for triton compiled CUDA kernels
|
jamesjwu
|
closed
|
[
"fb-exported",
"Merged",
"Reverted",
"ciflow/trunk",
"topic: not user facing",
"module: inductor",
"ciflow/inductor",
"ci-no-td"
] | 57
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #148561
Putting this up for a first pass review, though I will likely make a bunch of changes before landing to add more features, etc.
This diff implements a first version of a static CUDA kernel launcher in `torch._C`. The goal here is to take a cubin file and some metadata from a CompiledKernel from `triton`, and launch the cubin file directly.
Background doc: https://docs.google.com/document/d/1rjRcHl6MfauHG30nCoQX-9UKvKyIs4WWMy_GsGyqb9g/edit?tab=t.0#heading=h.ut5lf39lzq66
Normally, using triton's CompiledKernel.make_launcher(), we would pay the cost of codegenning C++ and running it at compile time. With this new approach, we can use one statically compiled library to launch the kernel.
The tradeoff here is that this new kernel launcher will not be able to use codegen to deal with different lengths/types of arguments. So we use templating to handle up to 10 arguments for now. We also allocate 8 bytes on the stack per argument no matter the argument type, which can take more memory than codegenning. On the other hand, we improve compile time on cold and warm start by not having to call the C++ compiler at all.
This diff does not add the launcher to torch, but introduces a basic test suite.
A list of TODOs that are not yet complete, will do in separate diff:
- Handle `nvTmaDesc` and `cuTensorMap`, which triton handles
- Embed the grid logic instead of passing in gridX,Y,Z. With https://github.com/pytorch/pytorch/pull/147583, we should be able to handle all of the grid logic directly in _StaticCudaLauncher.launch_kernel, and get rid of the python evaluation.
- Handle launch_enter and exit hooks? (Not sure if inductor has these)
- Benchmarking to see if there's runtime performance loss
- Hooking it up with a config to inductor
- Testing harness to test against torch generated triton kernels
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
Differential Revision: [D71230742](https://our.internmc.facebook.com/intern/diff/D71230742)
| true
|
2,897,748,486
|
[ROCm][Windows] Fix ROCm/HIP version header
|
m-gallus
|
closed
|
[
"module: rocm",
"triaged",
"open source",
"Merged",
"ciflow/trunk",
"topic: not user facing",
"ciflow/rocm"
] | 4
|
CONTRIBUTOR
|
On Windows, ROCm libraries do not have a `<rocm-core/rocm_version.h>` header, which causes the compilation to fail. This PR resolves this problem by utilising `<hip/hip_version.h>` from HIP SDK.
cc @jeffdaily @sunway513 @jithunnair-amd @pruthvistony @ROCmSupport @dllehr-amd @jataylo @hongxiayang @naromero77amd
| true
|
2,897,737,223
|
[BE] Remove `onlyCPU` decorator from test_local_scalar_dense
|
malfet
|
closed
|
[
"Merged",
"ciflow/trunk",
"topic: not user facing",
"ciflow/mps"
] | 9
|
CONTRIBUTOR
|
Follow-up from https://github.com/pytorch/pytorch/pull/145717; not sure why the author thinks those tests should be limited to one architecture.
Also fixed similar crashes for CUDA and MPS.
| true
|
2,897,693,863
|
DISABLED test_internal_nonlocal_dynamic_shapes (__main__.DynamicShapesHigherOrderOpTests)
|
pytorch-bot[bot]
|
closed
|
[
"triaged",
"module: flaky-tests",
"skipped",
"oncall: pt2",
"module: dynamo"
] | 6
|
NONE
|
Platforms: asan, linux, rocm, mac, macos
This test was disabled because it is failing in CI. See [recent examples](https://hud.pytorch.org/flakytest?name=test_internal_nonlocal_dynamic_shapes&suite=DynamicShapesHigherOrderOpTests&limit=100) and the most recent trunk [workflow logs](https://github.com/pytorch/pytorch/runs/38227769452).
Over the past 3 hours, it has been determined flaky in 18 workflow(s) with 36 failures and 18 successes.
**Debugging instructions (after clicking on the recent samples link):**
DO NOT ASSUME THINGS ARE OKAY IF THE CI IS GREEN. We now shield flaky tests from developers so CI will thus be green but it will be harder to parse the logs.
To find relevant log snippets:
1. Click on the workflow logs linked above
2. Click on the Test step of the job so that it is expanded. Otherwise, the grepping will not work.
3. Grep for `test_internal_nonlocal_dynamic_shapes`
4. There should be several instances run (as flaky tests are rerun in CI) from which you can study the logs.
<details><summary>Sample error message</summary>
```
Traceback (most recent call last):
File "/var/lib/jenkins/pytorch/test/dynamo/test_higher_order_ops.py", line 2725, in test_internal_nonlocal
self._test_wrap_simple(h, default_args_generator((x, y)), arg_count)
File "/var/lib/jenkins/pytorch/test/dynamo/test_higher_order_ops.py", line 191, in _test_wrap_simple
self.assertEqual(len(wrap_node.args), expected_num_wrap_args)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 4091, in assertEqual
raise error_metas.pop()[0].to_error( # type: ignore[index]
AssertionError: Scalars are not equal!
Expected 4 but got 9.
Absolute difference: 5
Relative difference: 1.25
To execute this test, run the following from the base repo dir:
PYTORCH_TEST_WITH_ROCM=1 python test/dynamo/test_dynamic_shapes.py DynamicShapesHigherOrderOpTests.test_internal_nonlocal_dynamic_shapes
This message can be suppressed by setting PYTORCH_PRINT_REPRO_ON_FAILURE=0
```
</details>
Test file path: `dynamo/test_dynamic_shapes.py`
cc @clee2000 @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @chauhang @amjames
| true
|
2,897,679,351
|
AOTI is OOM-ing when eager doesn't
|
tugsbayasgalan
|
open
|
[
"oncall: pt2",
"export-triage-review",
"oncall: export",
"module: aotinductor"
] | 1
|
CONTRIBUTOR
|
### 🐛 Describe the bug
Repro instruction:
1. git clone https://github.com/zhxchen17/torchnative/blob/main/wip/flux_aoti.py
2. install flux
3. python flux_aoti.py (This will output a python command to run)
4. Modify https://github.com/zhxchen17/torchnative/blob/main/wip/flux_test.py#L56 to 4096 and 4096.
cc: @desertfire
### Versions
Main
cc @chauhang @penguinwu @avikchaudhuri @gmagogsfm @zhxchen17 @angelayi @suo @ydwu4 @desertfire @chenyang78 @yushangdi
| true
|
2,897,602,379
|
[WIP] First version of StaticCudaLauncher
|
jamesjwu
|
closed
|
[
"fb-exported",
"topic: not user facing",
"module: inductor",
"ciflow/inductor"
] | 2
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #148556
Putting this up for a first pass review, though I will likely make a bunch of changes before landing to add more features, etc.
This diff implements a first version of a static CUDA kernel launcher in `torch._C`. The goal here is to take a cubin file and some metadata from a CompiledKernel from `triton`, and launch the cubin file directly.
Background doc: https://docs.google.com/document/d/1rjRcHl6MfauHG30nCoQX-9UKvKyIs4WWMy_GsGyqb9g/edit?tab=t.0#heading=h.ut5lf39lzq66
Normally, using triton's CompiledKernel.make_launcher(), we would pay the cost of codegenning C++ and running it at compile time. With this new approach, we can use one statically compiled library to launch the kernel.
The tradeoff here is that this new kernel launcher will not be able to use codegen to deal with different lengths/types of arguments. So we use templating to handle up to 10 arguments for now. We also allocate 8 bytes on the stack per argument no matter the argument type, which can take more memory than codegenning. On the other hand, we improve compile time on cold and warm start by not having to call the C++ compiler at all.
This diff does not add the launcher to torch, but introduces a basic test suite.
A list of TODOs that are not yet complete:
- Handle `nvTmaDesc` and `cuTensorMap`, which triton handles
- Embed the grid logic instead of passing in gridX,Y,Z
- Handle launch_enter and exit hooks? (Not sure if inductor has these)
- Benchmarking to see if there's runtime performance loss
- Probably lots of features of the triton C++ generated code that I haven't handled yet.
Differential Revision: [D69926783](https://our.internmc.facebook.com/intern/diff/D69926783/)
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
| true
|
2,897,456,274
|
[custom_ops][perf] Move expensive pytree traversals of tensors to C++
|
IvanKobzarev
|
closed
|
[
"Merged",
"Reverted",
"ciflow/trunk",
"topic: not user facing",
"ci-no-td"
] | 11
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #148555
(benchmark for 1 call)
Before:
```
└─ $ python ~/task_custom_ops_perf/test_custom_ops_perf_repro.py
DO_BENCH mutate: 77.72445678710938 us PROFILE:/home/ivankobzarev/task_custom_ops_perf/mutate.json
DO_BENCH no_mutate: 64.61143493652344 us PROFILE:/home/ivankobzarev/task_custom_ops_perf/no_mutate.json
DO_BENCH direct_mutate: 11.682510375976562 us PROFILE:/home/ivankobzarev/task_custom_ops_perf/direct_mutate.json
DO_BENCH direct_no_mutate: 18.596649169921875 us PROFILE:/home/ivankobzarev/task_custom_ops_perf/direct_no_mutate.json
```
After:
```
└─ $ python ~/task_custom_ops_perf/test_custom_ops_perf_repro.py
DO_BENCH mutate: 47.6837158203125 us PROFILE:/home/ivankobzarev/task_custom_ops_perf/mutate.json
DO_BENCH no_mutate: 31.709671020507812 us PROFILE:/home/ivankobzarev/task_custom_ops_perf/no_mutate.json
DO_BENCH direct_mutate: 10.967254638671875 us PROFILE:/home/ivankobzarev/task_custom_ops_perf/direct_mutate.json
DO_BENCH direct_no_mutate: 10.728836059570312 us PROFILE:/home/ivankobzarev/task_custom_ops_perf/direct_no_mutate.json
```
| true
|
2,897,446,332
|
[BE] format `test/inductor/s429861_repro.py`
|
XuehaiPan
|
closed
|
[
"open source",
"better-engineering",
"Merged",
"ciflow/trunk",
"topic: not user facing",
"skip-pr-sanity-checks",
"module: inductor",
"ciflow/inductor"
] | 9
|
COLLABORATOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #144556
* #148186
* __->__ #148554
Split from #148186
The diff can be re-generated with the following code in the repo root directory on main branch:
```python
import re
from pathlib import Path
def replace(m: re.Match) -> str:
s = m.group()
if '\n' not in s:
return s
indent = m.group("indent")
varnames = s.removesuffix("None").replace("=", "").replace("(", "").replace(")", "").split()
return "\n".join(
[
f"{indent}(",
*(f"{indent} {varname}," for varname in varnames),
f"{indent}) = (None,) * {len(varnames)}",
]
)
file = Path('test/inductor/s429861_repro.py')
content = file.read_text(encoding='utf-8')
new_content = re.sub(
r"^(?P<indent> *)\w+ *=(\s*(\(\s*\w+\s*\)|\w+)\s*=\s*)+None$",
replace,
content,
flags=re.MULTILINE,
)
file.write_text(new_content, encoding='utf-8')
```
cc @albanD @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
| true
|
2,897,329,063
|
remove TORCH_NCCL_AVOID_RECORD_STREAMS,use stashed_for_allocator_safety_ to save the input ref
|
taozhiwei
|
closed
|
[
"oncall: distributed",
"triaged",
"open source",
"release notes: distributed (c10d)"
] | 4
|
CONTRIBUTOR
|
Thoroughly solves the following problem: [https://discuss.pytorch.org/t/cuda-allocation-lifetime-for-inputs-to-distributed-all-reduce/191573](https://discuss.pytorch.org/t/cuda-allocation-lifetime-for-inputs-to-distributed-all-reduce/191573)
`recordStream` can cause additional performance loss and result in memory not being released in a timely manner; saving input tensor references to `stashed_for_allocator_safety_` ensures the input tensors are not freed before their usages on ncclStreams finish.
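A conceptual Python-level sketch of the stashing idea (the real change is in ProcessGroupNCCL's C++; the names here are illustrative): hold a reference to the inputs until their collective's work object reports completion, instead of calling `record_stream`.
```python
class StashForAllocatorSafety:
    def __init__(self):
        self._stash = []  # list of (work, [tensors]) pairs

    def hold(self, work, tensors):
        # keep the inputs alive while the collective may still read them
        self._stash.append((work, list(tensors)))

    def reap(self):
        # drop references once the corresponding collective has finished
        self._stash = [(w, ts) for (w, ts) in self._stash if not w.is_completed()]
```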
cc @H-Huang @awgu @kwen2501 @wanchaol @fegin @fduwjj @wz337 @wconstab @d4l3k @c-p-i-o
| true
|
2,897,258,704
|
Return type annotation of `Tensor.long()` etc is not narrowed down to dtype-specific names `LongTensor` etc
|
lkct
|
open
|
[
"module: typing",
"triaged"
] | 0
|
CONTRIBUTOR
|
### 🐛 Describe the bug
Sometimes we want dtype-specific typing for `Tensor`, e.g. `LongTensor`, but conversion via `Tensor.long()` (and similar methods) is still annotated as returning `Tensor`, so the following code fails mypy.
```python
from torch import LongTensor, Tensor
def foo(x: Tensor) -> LongTensor:
return x.long()
```
```log
a.py:5: error: Incompatible return value type (got "Tensor", expected "LongTensor") [return-value]
```
In fact, it seems no function/method is annotated with `LongTensor`, and the only way to obtain a `LongTensor` for typing is to call the constructor `LongTensor()` (and, of course, `typing.cast`).
A simple solution is to change the dtype conversion methods to return corresponding class names.
A more comprehensive solution could be to make `Tensor` generic on dtype (similar to numpy), so that methods with `dtype=` kwarg can also work. However, this might involve too much work.
Yet I would also like to confirm if there's a specific reason that we shouldn't fix it in either way. Still taking `LongTensor` as an example, it appears many times in the docstrings (e.g. `argmax` should be another way to create a `LongTensor`), but never really used in type annotations, so I wonder if it is designed so on purpose.
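Until the stubs change, a minimal local workaround (using the `typing.cast` route mentioned above) looks like this:
```python
from typing import cast

import torch

def as_long(x: torch.Tensor) -> torch.LongTensor:
    # runtime behavior is unchanged; the cast only narrows the static type
    return cast(torch.LongTensor, x.long())
```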
### Versions
nightly torch-2.7.0.dev20250305
cc @ezyang @malfet @xuzhao9 @gramster
| true
|
2,897,235,029
|
Issue with torch.compile
|
1peng
|
closed
|
[
"triaged",
"oncall: pt2",
"module: inductor"
] | 8
|
NONE
|
### 🐛 Describe the bug
When using fish-speech, if I compile (with torch.compile) and run, I get the error below.
### Error logs
/home/eason/anaconda3/envs/fish-speech/lib/python3.10/contextlib.py:103: FutureWarning: `torch.backends.cuda.sdp_kernel()` is deprecated. In the future, this context manager will be removed. Please see `torch.nn.attention.sdpa_kernel()` for the new context manager, with updated signature.
self.gen = func(*args, **kwds)
'sm_120' is not a recognized processor for this target (ignoring processor)
'sm_120' is not a recognized processor for this target (ignoring processor)
'sm_120' is not a recognized processor for this target (ignoring processor)
'sm_120' is not a recognized processor for this target (ignoring processor)
'sm_120' is not a recognized processor for this target (ignoring processor)
'sm_120' is not a recognized processor for this target (ignoring processor)
'sm_120' is not a recognized processor for this target (ignoring processor)
'sm_120' is not a recognized processor for this target (ignoring processor)
'sm_120' is not a recognized processor for this target (ignoring processor)
'sm_120' is not a recognized processor for this target (ignoring processor)
'sm_120' is not a recognized processor for this target (ignoring processor)
'sm_120' is not a recognized processor for this target (ignoring processor)
E0305 20:23:38.779000 745 site-packages/torch/_inductor/runtime/triton_heuristics.py:533] Triton compilation failed: triton_poi_fused_lift_fresh_0
E0305 20:23:38.779000 745 site-packages/torch/_inductor/runtime/triton_heuristics.py:533] def triton_poi_fused_lift_fresh_0(in_ptr0, out_ptr0, xnumel, XBLOCK : tl.constexpr):
E0305 20:23:38.779000 745 site-packages/torch/_inductor/runtime/triton_heuristics.py:533] xnumel = 1024
E0305 20:23:38.779000 745 site-packages/torch/_inductor/runtime/triton_heuristics.py:533] xoffset = tl.program_id(0) * XBLOCK
E0305 20:23:38.779000 745 site-packages/torch/_inductor/runtime/triton_heuristics.py:533] xindex = xoffset + tl.arange(0, XBLOCK)[:]
E0305 20:23:38.779000 745 site-packages/torch/_inductor/runtime/triton_heuristics.py:533] xmask = xindex < xnumel
E0305 20:23:38.779000 745 site-packages/torch/_inductor/runtime/triton_heuristics.py:533] x0 = xindex
E0305 20:23:38.779000 745 site-packages/torch/_inductor/runtime/triton_heuristics.py:533] tmp0 = tl.load(in_ptr0 + (x0), xmask)
E0305 20:23:38.779000 745 site-packages/torch/_inductor/runtime/triton_heuristics.py:533] tl.store(out_ptr0 + (x0), tmp0, xmask)
E0305 20:23:38.779000 745 site-packages/torch/_inductor/runtime/triton_heuristics.py:533]
E0305 20:23:38.779000 745 site-packages/torch/_inductor/runtime/triton_heuristics.py:533] metadata: {'signature': {'in_ptr0': '*i32', 'out_ptr0': '*i32', 'xnumel': 'i32'}, 'device': 0, 'constants': {'XBLOCK': 256}, 'configs': [AttrsDescriptor.from_dict({'arg_properties': {'tt.divisibility': (0, 1, 2), 'tt.equal_to': ()}, 'cls': 'AttrsDescriptor'})], 'device_type': 'cuda', 'num_warps': 4, 'num_stages': 1, 'debug': True, 'cc': 120}
E0305 20:23:38.779000 745 site-packages/torch/_inductor/runtime/triton_heuristics.py:533] Traceback (most recent call last):
E0305 20:23:38.779000 745 site-packages/torch/_inductor/runtime/triton_heuristics.py:533] File "/home/eason/anaconda3/envs/fish-speech/lib/python3.10/site-packages/triton/backends/nvidia/compiler.py", line 356, in make_cubin
E0305 20:23:38.779000 745 site-packages/torch/_inductor/runtime/triton_heuristics.py:533] subprocess.run(ptxas_cmd, check=True, close_fds=False, stderr=flog)
E0305 20:23:38.779000 745 site-packages/torch/_inductor/runtime/triton_heuristics.py:533] File "/home/eason/anaconda3/envs/fish-speech/lib/python3.10/subprocess.py", line 526, in run
E0305 20:23:38.779000 745 site-packages/torch/_inductor/runtime/triton_heuristics.py:533] raise CalledProcessError(retcode, process.args,
E0305 20:23:38.779000 745 site-packages/torch/_inductor/runtime/triton_heuristics.py:533] subprocess.CalledProcessError: Command '['/home/eason/anaconda3/envs/fish-speech/lib/python3.10/site-packages/triton/backends/nvidia/bin/ptxas', '-lineinfo', '-v', '--gpu-name=sm_120', '/tmp/tmp3969dnyb.ptx', '-o', '/tmp/tmp3969dnyb.ptx.o']' returned non-zero exit status 255.
E0305 20:23:38.779000 745 site-packages/torch/_inductor/runtime/triton_heuristics.py:533]
E0305 20:23:38.779000 745 site-packages/torch/_inductor/runtime/triton_heuristics.py:533] During handling of the above exception, another exception occurred:
E0305 20:23:38.779000 745 site-packages/torch/_inductor/runtime/triton_heuristics.py:533]
E0305 20:23:38.779000 745 site-packages/torch/_inductor/runtime/triton_heuristics.py:533] Traceback (most recent call last):
E0305 20:23:38.779000 745 site-packages/torch/_inductor/runtime/triton_heuristics.py:533] File "/home/eason/anaconda3/envs/fish-speech/lib/python3.10/site-packages/torch/_inductor/runtime/triton_heuristics.py", line 531, in _precompile_config
E0305 20:23:38.779000 745 site-packages/torch/_inductor/runtime/triton_heuristics.py:533] binary = triton.compile(*compile_args, **compile_kwargs)
E0305 20:23:38.779000 745 site-packages/torch/_inductor/runtime/triton_heuristics.py:533] File "/home/eason/anaconda3/envs/fish-speech/lib/python3.10/site-packages/triton/compiler/compiler.py", line 279, in compile
E0305 20:23:38.779000 745 site-packages/torch/_inductor/runtime/triton_heuristics.py:533] next_module = compile_ir(module, metadata)
E0305 20:23:38.779000 745 site-packages/torch/_inductor/runtime/triton_heuristics.py:533] File "/home/eason/anaconda3/envs/fish-speech/lib/python3.10/site-packages/triton/backends/nvidia/compiler.py", line 389, in <lambda>
E0305 20:23:38.779000 745 site-packages/torch/_inductor/runtime/triton_heuristics.py:533] stages["cubin"] = lambda src, metadata: self.make_cubin(src, metadata, options, self.capability)
E0305 20:23:38.779000 745 site-packages/torch/_inductor/runtime/triton_heuristics.py:533] File "/home/eason/anaconda3/envs/fish-speech/lib/python3.10/site-packages/triton/backends/nvidia/compiler.py", line 374, in make_cubin
E0305 20:23:38.779000 745 site-packages/torch/_inductor/runtime/triton_heuristics.py:533] raise RuntimeError(f'{error}\n'
E0305 20:23:38.779000 745 site-packages/torch/_inductor/runtime/triton_heuristics.py:533] RuntimeError: Internal Triton PTX codegen error
E0305 20:23:38.779000 745 site-packages/torch/_inductor/runtime/triton_heuristics.py:533] `ptxas` stderr:
E0305 20:23:38.779000 745 site-packages/torch/_inductor/runtime/triton_heuristics.py:533] ptxas fatal : Value 'sm_120' is not defined for option 'gpu-name'
E0305 20:23:38.779000 745 site-packages/torch/_inductor/runtime/triton_heuristics.py:533]
E0305 20:23:38.779000 745 site-packages/torch/_inductor/runtime/triton_heuristics.py:533] Repro command: /home/eason/anaconda3/envs/fish-speech/lib/python3.10/site-packages/triton/backends/nvidia/bin/ptxas -lineinfo -v --gpu-name=sm_120 /tmp/tmp3969dnyb.ptx -o /tmp/tmp3969dnyb.ptx.o
E0305 20:23:38.779000 745 site-packages/torch/_inductor/runtime/triton_heuristics.py:533]
'sm_120' is not a recognized processor for this target (ignoring processor)
'sm_120' is not a recognized processor for this target (ignoring processor)
'sm_120' is not a recognized processor for this target (ignoring processor)
'sm_120' is not a recognized processor for this target (ignoring processor)
'sm_120' is not a recognized processor for this target (ignoring processor)
'sm_120' is not a recognized processor for this target (ignoring processor)
'sm_120' is not a recognized processor for this target (ignoring processor)
'sm_120' is not a recognized processor for this target (ignoring processor)
'sm_120' is not a recognized processor for this target (ignoring processor)
'sm_120' is not a recognized processor for this target (ignoring processor)
'sm_120' is not a recognized processor for this target (ignoring processor)
'sm_120' is not a recognized processor for this target (ignoring processor)
'sm_120' is not a recognized processor for this target (ignoring processor)
'sm_120' is not a recognized processor for this target (ignoring processor)
'sm_120' is not a recognized processor for this target (ignoring processor)
'sm_120' is not a recognized processor for this target (ignoring processor)
'sm_120' is not a recognized processor for this target (ignoring processor)
'sm_120' is not a recognized processor for this target (ignoring processor)
LLVM ERROR: Cannot select: intrinsic %llvm.nvvm.shfl.sync.bfly.i32
'sm_120' is not a recognized processor for this target (ignoring processor)
'sm_120' is not a recognized processor for this target (ignoring processor)
'sm_120' is not a recognized processor for this target (ignoring processor)
'sm_120' is not a recognized processor for this target (ignoring processor)
'sm_120' is not a recognized processor for this target (ignoring processor)
'sm_120' is not a recognized processor for this target (ignoring processor)
'sm_120' is not a recognized processor for this target (ignoring processor)
'sm_120' is not a recognized processor for this target (ignoring processor)
'sm_120' is not a recognized processor for this target (ignoring processor)
'sm_120' is not a recognized processor for this target (ignoring processor)
'sm_120' is not a recognized processor for this target (ignoring processor)
'sm_120' is not a recognized processor for this target (ignoring processor)
'sm_120' is not a recognized processor for this target (ignoring processor)
'sm_120' is not a recognized processor for this target (ignoring processor)
'sm_120' is not a recognized processor for this target (ignoring processor)
'sm_120' is not a recognized processor for this target (ignoring processor)
'sm_120' is not a recognized processor for this target (ignoring processor)
'sm_120' is not a recognized processor for this target (ignoring processor)
LLVM ERROR: Cannot select: intrinsic %llvm.nvvm.shfl.sync.bfly.i32
'sm_120' is not a recognized processor for this target (ignoring processor)
'sm_120' is not a recognized processor for this target (ignoring processor)
'sm_120' is not a recognized processor for this target (ignoring processor)
'sm_120' is not a recognized processor for this target (ignoring processor)
'sm_120' is not a recognized processor for this target (ignoring processor)
'sm_120' is not a recognized processor for this target (ignoring processor)
LLVM ERROR: Cannot select: intrinsic %llvm.nvvm.shfl.sync.bfly.i32
'sm_120' is not a recognized processor for this target (ignoring processor)
'sm_120' is not a recognized processor for this target (ignoring processor)
'sm_120' is not a recognized processor for this target (ignoring processor)
'sm_120' is not a recognized processor for this target (ignoring processor)
'sm_120' is not a recognized processor for this target (ignoring processor)
'sm_120' is not a recognized processor for this target (ignoring processor)
'sm_120' is not a recognized processor for this target (ignoring processor)
'sm_120' is not a recognized processor for this target (ignoring processor)
'sm_120' is not a recognized processor for this target (ignoring processor)
'sm_120' is not a recognized processor for this target (ignoring processor)
'sm_120' is not a recognized processor for this target (ignoring processor)
'sm_120' is not a recognized processor for this target (ignoring processor)
E0305 20:23:38.866000 747 site-packages/torch/_inductor/runtime/triton_heuristics.py:533] Triton compilation failed: triton_poi_fused_stack_1
E0305 20:23:38.866000 747 site-packages/torch/_inductor/runtime/triton_heuristics.py:533] def triton_poi_fused_stack_1(in_ptr0, in_ptr1, out_ptr0, xnumel, XBLOCK : tl.constexpr):
E0305 20:23:38.866000 747 site-packages/torch/_inductor/runtime/triton_heuristics.py:533] xnumel = 8192
E0305 20:23:38.866000 747 site-packages/torch/_inductor/runtime/triton_heuristics.py:533] xoffset = tl.program_id(0) * XBLOCK
E0305 20:23:38.866000 747 site-packages/torch/_inductor/runtime/triton_heuristics.py:533] xindex = xoffset + tl.arange(0, XBLOCK)[:]
E0305 20:23:38.866000 747 site-packages/torch/_inductor/runtime/triton_heuristics.py:533] xmask = tl.full([XBLOCK], True, tl.int1)
E0305 20:23:38.866000 747 site-packages/torch/_inductor/runtime/triton_heuristics.py:533] x1 = xindex // 1024
E0305 20:23:38.866000 747 site-packages/torch/_inductor/runtime/triton_heuristics.py:533] x0 = (xindex % 1024)
E0305 20:23:38.866000 747 site-packages/torch/_inductor/runtime/triton_heuristics.py:533] x2 = xindex
E0305 20:23:38.866000 747 site-packages/torch/_inductor/runtime/triton_heuristics.py:533] tmp0 = x1
E0305 20:23:38.866000 747 site-packages/torch/_inductor/runtime/triton_heuristics.py:533] tmp1 = tl.full([1], 0, tl.int64)
E0305 20:23:38.866000 747 site-packages/torch/_inductor/runtime/triton_heuristics.py:533] tmp2 = tmp0 >= tmp1
E0305 20:23:38.866000 747 site-packages/torch/_inductor/runtime/triton_heuristics.py:533] tmp3 = tl.full([1], 1, tl.int64)
E0305 20:23:38.866000 747 site-packages/torch/_inductor/runtime/triton_heuristics.py:533] tmp4 = tmp0 < tmp3
E0305 20:23:38.866000 747 site-packages/torch/_inductor/runtime/triton_heuristics.py:533] tmp5 = tl.load(in_ptr0 + (1))
E0305 20:23:38.866000 747 site-packages/torch/_inductor/runtime/triton_heuristics.py:533] tmp6 = tl.broadcast_to(tmp5, [XBLOCK])
E0305 20:23:38.866000 747 site-packages/torch/_inductor/runtime/triton_heuristics.py:533] tmp7 = tl.where(tmp4, tmp6, 0)
E0305 20:23:38.866000 747 site-packages/torch/_inductor/runtime/triton_heuristics.py:533] tmp8 = tl.full([1], 0, tl.int32)
E0305 20:23:38.866000 747 site-packages/torch/_inductor/runtime/triton_heuristics.py:533] tmp9 = tmp7 + tmp8
E0305 20:23:38.866000 747 site-packages/torch/_inductor/runtime/triton_heuristics.py:533] tmp10 = tl.full([XBLOCK], 8192, tl.int32)
E0305 20:23:38.866000 747 site-packages/torch/_inductor/runtime/triton_heuristics.py:533] tmp11 = tmp9 + tmp10
E0305 20:23:38.866000 747 site-packages/torch/_inductor/runtime/triton_heuristics.py:533] tmp12 = tmp9 < 0
E0305 20:23:38.866000 747 site-packages/torch/_inductor/runtime/triton_heuristics.py:533] tmp13 = tl.where(tmp12, tmp11, tmp9)
E0305 20:23:38.866000 747 site-packages/torch/_inductor/runtime/triton_heuristics.py:533] tl.device_assert(((0 <= tl.broadcast_to(tmp13, [XBLOCK])) & (tl.broadcast_to(tmp13, [XBLOCK]) < 8192)) | ~(tmp4), "index out of bounds: 0 <= tl.broadcast_to(tmp13, [XBLOCK]) < 8192")
E0305 20:23:38.866000 747 site-packages/torch/_inductor/runtime/triton_heuristics.py:533] tmp15 = tl.load(in_ptr1 + (x0 + 1024*tmp13), tmp4, other=0.0).to(tl.float32)
E0305 20:23:38.866000 747 site-packages/torch/_inductor/runtime/triton_heuristics.py:533] tmp16 = tmp0 >= tmp3
E0305 20:23:38.866000 747 site-packages/torch/_inductor/runtime/triton_heuristics.py:533] tmp17 = tl.full([1], 2, tl.int64)
E0305 20:23:38.866000 747 site-packages/torch/_inductor/runtime/triton_heuristics.py:533] tmp18 = tmp0 < tmp17
E0305 20:23:38.866000 747 site-packages/torch/_inductor/runtime/triton_heuristics.py:533] tmp19 = tmp16 & tmp18
E0305 20:23:38.866000 747 site-packages/torch/_inductor/runtime/triton_heuristics.py:533] tmp20 = tl.load(in_ptr0 + (2))
E0305 20:23:38.866000 747 site-packages/torch/_inductor/runtime/triton_heuristics.py:533] tmp21 = tl.broadcast_to(tmp20, [XBLOCK])
E0305 20:23:38.866000 747 site-packages/torch/_inductor/runtime/triton_heuristics.py:533] tmp22 = tl.where(tmp19, tmp21, 0)
E0305 20:23:38.866000 747 site-packages/torch/_inductor/runtime/triton_heuristics.py:533] tmp23 = tl.full([1], 1024, tl.int32)
E0305 20:23:38.866000 747 site-packages/torch/_inductor/runtime/triton_heuristics.py:533] tmp24 = tmp22 + tmp23
E0305 20:23:38.866000 747 site-packages/torch/_inductor/runtime/triton_heuristics.py:533] tmp25 = tl.full([XBLOCK], 8192, tl.int32)
E0305 20:23:38.866000 747 site-packages/torch/_inductor/runtime/triton_heuristics.py:533] tmp26 = tmp24 + tmp25
E0305 20:23:38.866000 747 site-packages/torch/_inductor/runtime/triton_heuristics.py:533] tmp27 = tmp24 < 0
E0305 20:23:38.866000 747 site-packages/torch/_inductor/runtime/triton_heuristics.py:533] tmp28 = tl.where(tmp27, tmp26, tmp24)
E0305 20:23:38.866000 747 site-packages/torch/_inductor/runtime/triton_heuristics.py:533] tl.device_assert(((0 <= tl.broadcast_to(tmp28, [XBLOCK])) & (tl.broadcast_to(tmp28, [XBLOCK]) < 8192)) | ~(tmp19), "index out of bounds: 0 <= tl.broadcast_to(tmp28, [XBLOCK]) < 8192")
E0305 20:23:38.866000 747 site-packages/torch/_inductor/runtime/triton_heuristics.py:533] tmp30 = tl.load(in_ptr1 + (x0 + 1024*tmp28), tmp19, other=0.0).to(tl.float32)
E0305 20:23:38.866000 747 site-packages/torch/_inductor/runtime/triton_heuristics.py:533] tmp31 = tmp0 >= tmp17
E0305 20:23:38.866000 747 site-packages/torch/_inductor/runtime/triton_heuristics.py:533] tmp32 = tl.full([1], 3, tl.int64)
E0305 20:23:38.866000 747 site-packages/torch/_inductor/runtime/triton_heuristics.py:533] tmp33 = tmp0 < tmp32
E0305 20:23:38.866000 747 site-packages/torch/_inductor/runtime/triton_heuristics.py:533] tmp34 = tmp31 & tmp33
E0305 20:23:38.866000 747 site-packages/torch/_inductor/runtime/triton_heuristics.py:533] tmp35 = tl.load(in_ptr0 + (3))
E0305 20:23:38.866000 747 site-packages/torch/_inductor/runtime/triton_heuristics.py:533] tmp36 = tl.broadcast_to(tmp35, [XBLOCK])
E0305 20:23:38.866000 747 site-packages/torch/_inductor/runtime/triton_heuristics.py:533] tmp37 = tl.where(tmp34, tmp36, 0)
E0305 20:23:38.866000 747 site-packages/torch/_inductor/runtime/triton_heuristics.py:533] tmp38 = tl.full([1], 2048, tl.int32)
E0305 20:23:38.866000 747 site-packages/torch/_inductor/runtime/triton_heuristics.py:533] tmp39 = tmp37 + tmp38
E0305 20:23:38.866000 747 site-packages/torch/_inductor/runtime/triton_heuristics.py:533] tmp40 = tl.full([XBLOCK], 8192, tl.int32)
E0305 20:23:38.866000 747 site-packages/torch/_inductor/runtime/triton_heuristics.py:533] tmp41 = tmp39 + tmp40
E0305 20:23:38.866000 747 site-packages/torch/_inductor/runtime/triton_heuristics.py:533] tmp42 = tmp39 < 0
E0305 20:23:38.866000 747 site-packages/torch/_inductor/runtime/triton_heuristics.py:533] tmp43 = tl.where(tmp42, tmp41, tmp39)
E0305 20:23:38.866000 747 site-packages/torch/_inductor/runtime/triton_heuristics.py:533] tl.device_assert(((0 <= tl.broadcast_to(tmp43, [XBLOCK])) & (tl.broadcast_to(tmp43, [XBLOCK]) < 8192)) | ~(tmp34), "index out of bounds: 0 <= tl.broadcast_to(tmp43, [XBLOCK]) < 8192")
E0305 20:23:38.866000 747 site-packages/torch/_inductor/runtime/triton_heuristics.py:533] tmp45 = tl.load(in_ptr1 + (x0 + 1024*tmp43), tmp34, other=0.0).to(tl.float32)
E0305 20:23:38.866000 747 site-packages/torch/_inductor/runtime/triton_heuristics.py:533] tmp46 = tmp0 >= tmp32
E0305 20:23:38.866000 747 site-packages/torch/_inductor/runtime/triton_heuristics.py:533] tmp47 = tl.full([1], 4, tl.int64)
E0305 20:23:38.866000 747 site-packages/torch/_inductor/runtime/triton_heuristics.py:533] tmp48 = tmp0 < tmp47
E0305 20:23:38.866000 747 site-packages/torch/_inductor/runtime/triton_heuristics.py:533] tmp49 = tmp46 & tmp48
E0305 20:23:38.866000 747 site-packages/torch/_inductor/runtime/triton_heuristics.py:533] tmp50 = tl.load(in_ptr0 + (4))
E0305 20:23:38.866000 747 site-packages/torch/_inductor/runtime/triton_heuristics.py:533] tmp51 = tl.broadcast_to(tmp50, [XBLOCK])
E0305 20:23:38.866000 747 site-packages/torch/_inductor/runtime/triton_heuristics.py:533] tmp52 = tl.where(tmp49, tmp51, 0)
E0305 20:23:38.866000 747 site-packages/torch/_inductor/runtime/triton_heuristics.py:533] tmp53 = tl.full([1], 3072, tl.int32)
E0305 20:23:38.866000 747 site-packages/torch/_inductor/runtime/triton_heuristics.py:533] tmp54 = tmp52 + tmp53
E0305 20:23:38.866000 747 site-packages/torch/_inductor/runtime/triton_heuristics.py:533] tmp55 = tl.full([XBLOCK], 8192, tl.int32)
E0305 20:23:38.866000 747 site-packages/torch/_inductor/runtime/triton_heuristics.py:533] tmp56 = tmp54 + tmp55
E0305 20:23:38.866000 747 site-packages/torch/_inductor/runtime/triton_heuristics.py:533] tmp57 = tmp54 < 0
E0305 20:23:38.866000 747 site-packages/torch/_inductor/runtime/triton_heuristics.py:533] tmp58 = tl.where(tmp57, tmp56, tmp54)
E0305 20:23:38.866000 747 site-packages/torch/_inductor/runtime/triton_heuristics.py:533] tl.device_assert(((0 <= tl.broadcast_to(tmp58, [XBLOCK])) & (tl.broadcast_to(tmp58, [XBLOCK]) < 8192)) | ~(tmp49), "index out of bounds: 0 <= tl.broadcast_to(tmp58, [XBLOCK]) < 8192")
E0305 20:23:38.866000 747 site-packages/torch/_inductor/runtime/triton_heuristics.py:533] tmp60 = tl.load(in_ptr1 + (x0 + 1024*tmp58), tmp49, other=0.0).to(tl.float32)
E0305 20:23:38.866000 747 site-packages/torch/_inductor/runtime/triton_heuristics.py:533] tmp61 = tmp0 >= tmp47
E0305 20:23:38.866000 747 site-packages/torch/_inductor/runtime/triton_heuristics.py:533] tmp62 = tl.full([1], 5, tl.int64)
E0305 20:23:38.866000 747 site-packages/torch/_inductor/runtime/triton_heuristics.py:533] tmp63 = tmp0 < tmp62
E0305 20:23:38.866000 747 site-packages/torch/_inductor/runtime/triton_heuristics.py:533] tmp64 = tmp61 & tmp63
E0305 20:23:38.866000 747 site-packages/torch/_inductor/runtime/triton_heuristics.py:533] tmp65 = tl.load(in_ptr0 + (5))
E0305 20:23:38.866000 747 site-packages/torch/_inductor/runtime/triton_heuristics.py:533] tmp66 = tl.broadcast_to(tmp65, [XBLOCK])
E0305 20:23:38.866000 747 site-packages/torch/_inductor/runtime/triton_heuristics.py:533] tmp67 = tl.where(tmp64, tmp66, 0)
E0305 20:23:38.866000 747 site-packages/torch/_inductor/runtime/triton_heuristics.py:533] tmp68 = tl.full([1], 4096, tl.int32)
E0305 20:23:38.866000 747 site-packages/torch/_inductor/runtime/triton_heuristics.py:533] tmp69 = tmp67 + tmp68
E0305 20:23:38.866000 747 site-packages/torch/_inductor/runtime/triton_heuristics.py:533] tmp70 = tl.full([XBLOCK], 8192, tl.int32)
E0305 20:23:38.866000 747 site-packages/torch/_inductor/runtime/triton_heuristics.py:533] tmp71 = tmp69 + tmp70
E0305 20:23:38.866000 747 site-packages/torch/_inductor/runtime/triton_heuristics.py:533] tmp72 = tmp69 < 0
E0305 20:23:38.866000 747 site-packages/torch/_inductor/runtime/triton_heuristics.py:533] tmp73 = tl.where(tmp72, tmp71, tmp69)
E0305 20:23:38.866000 747 site-packages/torch/_inductor/runtime/triton_heuristics.py:533] tl.device_assert(((0 <= tl.broadcast_to(tmp73, [XBLOCK])) & (tl.broadcast_to(tmp73, [XBLOCK]) < 8192)) | ~(tmp64), "index out of bounds: 0 <= tl.broadcast_to(tmp73, [XBLOCK]) < 8192")
E0305 20:23:38.866000 747 site-packages/torch/_inductor/runtime/triton_heuristics.py:533] tmp75 = tl.load(in_ptr1 + (x0 + 1024*tmp73), tmp64, other=0.0).to(tl.float32)
E0305 20:23:38.866000 747 site-packages/torch/_inductor/runtime/triton_heuristics.py:533] tmp76 = tmp0 >= tmp62
E0305 20:23:38.866000 747 site-packages/torch/_inductor/runtime/triton_heuristics.py:533] tmp77 = tl.full([1], 6, tl.int64)
E0305 20:23:38.866000 747 site-packages/torch/_inductor/runtime/triton_heuristics.py:533] tmp78 = tmp0 < tmp77
E0305 20:23:38.866000 747 site-packages/torch/_inductor/runtime/triton_heuristics.py:533] tmp79 = tmp76 & tmp78
E0305 20:23:38.866000 747 site-packages/torch/_inductor/runtime/triton_heuristics.py:533] tmp80 = tl.load(in_ptr0 + (6))
E0305 20:23:38.866000 747 site-packages/torch/_inductor/runtime/triton_heuristics.py:533] tmp81 = tl.broadcast_to(tmp80, [XBLOCK])
E0305 20:23:38.866000 747 site-packages/torch/_inductor/runtime/triton_heuristics.py:533] tmp82 = tl.where(tmp79, tmp81, 0)
E0305 20:23:38.866000 747 site-packages/torch/_inductor/runtime/triton_heuristics.py:533] tmp83 = tl.full([1], 5120, tl.int32)
E0305 20:23:38.866000 747 site-packages/torch/_inductor/runtime/triton_heuristics.py:533] tmp84 = tmp82 + tmp83
E0305 20:23:38.866000 747 site-packages/torch/_inductor/runtime/triton_heuristics.py:533] tmp85 = tl.full([XBLOCK], 8192, tl.int32)
E0305 20:23:38.866000 747 site-packages/torch/_inductor/runtime/triton_heuristics.py:533] tmp86 = tmp84 + tmp85
E0305 20:23:38.866000 747 site-packages/torch/_inductor/runtime/triton_heuristics.py:533] tmp87 = tmp84 < 0
E0305 20:23:38.866000 747 site-packages/torch/_inductor/runtime/triton_heuristics.py:533] tmp88 = tl.where(tmp87, tmp86, tmp84)
E0305 20:23:38.866000 747 site-packages/torch/_inductor/runtime/triton_heuristics.py:533] tl.device_assert(((0 <= tl.broadcast_to(tmp88, [XBLOCK])) & (tl.broadcast_to(tmp88, [XBLOCK]) < 8192)) | ~(tmp79), "index out of bounds: 0 <= tl.broadcast_to(tmp88, [XBLOCK]) < 8192")
E0305 20:23:38.866000 747 site-packages/torch/_inductor/runtime/triton_heuristics.py:533] tmp90 = tl.load(in_ptr1 + (x0 + 1024*tmp88), tmp79, other=0.0).to(tl.float32)
E0305 20:23:38.866000 747 site-packages/torch/_inductor/runtime/triton_heuristics.py:533] tmp91 = tmp0 >= tmp77
E0305 20:23:38.866000 747 site-packages/torch/_inductor/runtime/triton_heuristics.py:533] tmp92 = tl.full([1], 7, tl.int64)
E0305 20:23:38.866000 747 site-packages/torch/_inductor/runtime/triton_heuristics.py:533] tmp93 = tmp0 < tmp92
E0305 20:23:38.866000 747 site-packages/torch/_inductor/runtime/triton_heuristics.py:533] tmp94 = tmp91 & tmp93
E0305 20:23:38.866000 747 site-packages/torch/_inductor/runtime/triton_heuristics.py:533] tmp95 = tl.load(in_ptr0 + (7))
E0305 20:23:38.866000 747 site-packages/torch/_inductor/runtime/triton_heuristics.py:533] tmp96 = tl.broadcast_to(tmp95, [XBLOCK])
E0305 20:23:38.866000 747 site-packages/torch/_inductor/runtime/triton_heuristics.py:533] tmp97 = tl.where(tmp94, tmp96, 0)
E0305 20:23:38.866000 747 site-packages/torch/_inductor/runtime/triton_heuristics.py:533] tmp98 = tl.full([1], 6144, tl.int32)
E0305 20:23:38.866000 747 site-packages/torch/_inductor/runtime/triton_heuristics.py:533] tmp99 = tmp97 + tmp98
E0305 20:23:38.866000 747 site-packages/torch/_inductor/runtime/triton_heuristics.py:533] tmp100 = tl.full([XBLOCK], 8192, tl.int32)
E0305 20:23:38.866000 747 site-packages/torch/_inductor/runtime/triton_heuristics.py:533] tmp101 = tmp99 + tmp100
E0305 20:23:38.866000 747 site-packages/torch/_inductor/runtime/triton_heuristics.py:533] tmp102 = tmp99 < 0
E0305 20:23:38.866000 747 site-packages/torch/_inductor/runtime/triton_heuristics.py:533] tmp103 = tl.where(tmp102, tmp101, tmp99)
E0305 20:23:38.866000 747 site-packages/torch/_inductor/runtime/triton_heuristics.py:533] tl.device_assert(((0 <= tl.broadcast_to(tmp103, [XBLOCK])) & (tl.broadcast_to(tmp103, [XBLOCK]) < 8192)) | ~(tmp94), "index out of bounds: 0 <= tl.broadcast_to(tmp103, [XBLOCK]) < 8192")
E0305 20:23:38.866000 747 site-packages/torch/_inductor/runtime/triton_heuristics.py:533] tmp105 = tl.load(in_ptr1 + (x0 + 1024*tmp103), tmp94, other=0.0).to(tl.float32)
E0305 20:23:38.866000 747 site-packages/torch/_inductor/runtime/triton_heuristics.py:533] tmp106 = tmp0 >= tmp92
E0305 20:23:38.866000 747 site-packages/torch/_inductor/runtime/triton_heuristics.py:533] tmp107 = tl.full([1], 8, tl.int64)
E0305 20:23:38.866000 747 site-packages/torch/_inductor/runtime/triton_heuristics.py:533] tmp108 = tmp0 < tmp107
E0305 20:23:38.866000 747 site-packages/torch/_inductor/runtime/triton_heuristics.py:533] tmp109 = tl.load(in_ptr0 + (8))
E0305 20:23:38.866000 747 site-packages/torch/_inductor/runtime/triton_heuristics.py:533] tmp110 = tl.broadcast_to(tmp109, [XBLOCK])
E0305 20:23:38.866000 747 site-packages/torch/_inductor/runtime/triton_heuristics.py:533] tmp111 = tl.where(tmp106, tmp110, 0)
E0305 20:23:38.866000 747 site-packages/torch/_inductor/runtime/triton_heuristics.py:533] tmp112 = tl.full([1], 7168, tl.int32)
E0305 20:23:38.866000 747 site-packages/torch/_inductor/runtime/triton_heuristics.py:533] tmp113 = tmp111 + tmp112
E0305 20:23:38.866000 747 site-packages/torch/_inductor/runtime/triton_heuristics.py:533] tmp114 = tl.full([XBLOCK], 8192, tl.int32)
E0305 20:23:38.866000 747 site-packages/torch/_inductor/runtime/triton_heuristics.py:533] tmp115 = tmp113 + tmp114
E0305 20:23:38.866000 747 site-packages/torch/_inductor/runtime/triton_heuristics.py:533] tmp116 = tmp113 < 0
E0305 20:23:38.866000 747 site-packages/torch/_inductor/runtime/triton_heuristics.py:533] tmp117 = tl.where(tmp116, tmp115, tmp113)
E0305 20:23:38.866000 747 site-packages/torch/_inductor/runtime/triton_heuristics.py:533] tl.device_assert(((0 <= tl.broadcast_to(tmp117, [XBLOCK])) & (tl.broadcast_to(tmp117, [XBLOCK]) < 8192)) | ~(tmp106), "index out of bounds: 0 <= tl.broadcast_to(tmp117, [XBLOCK]) < 8192")
E0305 20:23:38.866000 747 site-packages/torch/_inductor/runtime/triton_heuristics.py:533] tmp119 = tl.load(in_ptr1 + (x0 + 1024*tmp117), tmp106, other=0.0).to(tl.float32)
E0305 20:23:38.866000 747 site-packages/torch/_inductor/runtime/triton_heuristics.py:533] tmp120 = tl.where(tmp94, tmp105, tmp119)
E0305 20:23:38.866000 747 site-packages/torch/_inductor/runtime/triton_heuristics.py:533] tmp121 = tl.where(tmp79, tmp90, tmp120)
E0305 20:23:38.866000 747 site-packages/torch/_inductor/runtime/triton_heuristics.py:533] tmp122 = tl.where(tmp64, tmp75, tmp121)
E0305 20:23:38.866000 747 site-packages/torch/_inductor/runtime/triton_heuristics.py:533] tmp123 = tl.where(tmp49, tmp60, tmp122)
E0305 20:23:38.866000 747 site-packages/torch/_inductor/runtime/triton_heuristics.py:533] tmp124 = tl.where(tmp34, tmp45, tmp123)
E0305 20:23:38.866000 747 site-packages/torch/_inductor/runtime/triton_heuristics.py:533] tmp125 = tl.where(tmp19, tmp30, tmp124)
E0305 20:23:38.866000 747 site-packages/torch/_inductor/runtime/triton_heuristics.py:533] tmp126 = tl.where(tmp4, tmp15, tmp125)
E0305 20:23:38.866000 747 site-packages/torch/_inductor/runtime/triton_heuristics.py:533] tl.store(out_ptr0 + (x2), tmp126, None)
E0305 20:23:38.866000 747 site-packages/torch/_inductor/runtime/triton_heuristics.py:533]
E0305 20:23:38.866000 747 site-packages/torch/_inductor/runtime/triton_heuristics.py:533] metadata: {'signature': {'in_ptr0': '*i32', 'in_ptr1': '*bf16', 'out_ptr0': '*bf16', 'xnumel': 'i32'}, 'device': 0, 'constants': {'XBLOCK': 256}, 'configs': [AttrsDescriptor.from_dict({'arg_properties': {'tt.divisibility': (0, 1, 2, 3), 'tt.equal_to': ()}, 'cls': 'AttrsDescriptor'})], 'device_type': 'cuda', 'num_warps': 4, 'num_stages': 1, 'debug': True, 'cc': 120}
E0305 20:23:38.866000 747 site-packages/torch/_inductor/runtime/triton_heuristics.py:533] Traceback (most recent call last):
E0305 20:23:38.866000 747 site-packages/torch/_inductor/runtime/triton_heuristics.py:533] File "/home/eason/anaconda3/envs/fish-speech/lib/python3.10/site-packages/triton/backends/nvidia/compiler.py", line 356, in make_cubin
E0305 20:23:38.866000 747 site-packages/torch/_inductor/runtime/triton_heuristics.py:533] subprocess.run(ptxas_cmd, check=True, close_fds=False, stderr=flog)
E0305 20:23:38.866000 747 site-packages/torch/_inductor/runtime/triton_heuristics.py:533] File "/home/eason/anaconda3/envs/fish-speech/lib/python3.10/subprocess.py", line 526, in run
E0305 20:23:38.866000 747 site-packages/torch/_inductor/runtime/triton_heuristics.py:533] raise CalledProcessError(retcode, process.args,
E0305 20:23:38.866000 747 site-packages/torch/_inductor/runtime/triton_heuristics.py:533] subprocess.CalledProcessError: Command '['/home/eason/anaconda3/envs/fish-speech/lib/python3.10/site-packages/triton/backends/nvidia/bin/ptxas', '-lineinfo', '-v', '--gpu-name=sm_120', '/tmp/tmp3b9ld5ez.ptx', '-o', '/tmp/tmp3b9ld5ez.ptx.o']' returned non-zero exit status 255.
E0305 20:23:38.866000 747 site-packages/torch/_inductor/runtime/triton_heuristics.py:533]
E0305 20:23:38.866000 747 site-packages/torch/_inductor/runtime/triton_heuristics.py:533] During handling of the above exception, another exception occurred:
E0305 20:23:38.866000 747 site-packages/torch/_inductor/runtime/triton_heuristics.py:533]
E0305 20:23:38.866000 747 site-packages/torch/_inductor/runtime/triton_heuristics.py:533] Traceback (most recent call last):
E0305 20:23:38.866000 747 site-packages/torch/_inductor/runtime/triton_heuristics.py:533] File "/home/eason/anaconda3/envs/fish-speech/lib/python3.10/site-packages/torch/_inductor/runtime/triton_heuristics.py", line 531, in _precompile_config
E0305 20:23:38.866000 747 site-packages/torch/_inductor/runtime/triton_heuristics.py:533] binary = triton.compile(*compile_args, **compile_kwargs)
E0305 20:23:38.866000 747 site-packages/torch/_inductor/runtime/triton_heuristics.py:533] File "/home/eason/anaconda3/envs/fish-speech/lib/python3.10/site-packages/triton/compiler/compiler.py", line 279, in compile
E0305 20:23:38.866000 747 site-packages/torch/_inductor/runtime/triton_heuristics.py:533] next_module = compile_ir(module, metadata)
E0305 20:23:38.866000 747 site-packages/torch/_inductor/runtime/triton_heuristics.py:533] File "/home/eason/anaconda3/envs/fish-speech/lib/python3.10/site-packages/triton/backends/nvidia/compiler.py", line 389, in <lambda>
E0305 20:23:38.866000 747 site-packages/torch/_inductor/runtime/triton_heuristics.py:533] stages["cubin"] = lambda src, metadata: self.make_cubin(src, metadata, options, self.capability)
E0305 20:23:38.866000 747 site-packages/torch/_inductor/runtime/triton_heuristics.py:533] File "/home/eason/anaconda3/envs/fish-speech/lib/python3.10/site-packages/triton/backends/nvidia/compiler.py", line 374, in make_cubin
E0305 20:23:38.866000 747 site-packages/torch/_inductor/runtime/triton_heuristics.py:533] raise RuntimeError(f'{error}\n'
E0305 20:23:38.866000 747 site-packages/torch/_inductor/runtime/triton_heuristics.py:533] RuntimeError: Internal Triton PTX codegen error
E0305 20:23:38.866000 747 site-packages/torch/_inductor/runtime/triton_heuristics.py:533] `ptxas` stderr:
E0305 20:23:38.866000 747 site-packages/torch/_inductor/runtime/triton_heuristics.py:533] ptxas fatal : Value 'sm_120' is not defined for option 'gpu-name'
E0305 20:23:38.866000 747 site-packages/torch/_inductor/runtime/triton_heuristics.py:533]
E0305 20:23:38.866000 747 site-packages/torch/_inductor/runtime/triton_heuristics.py:533] Repro command: /home/eason/anaconda3/envs/fish-speech/lib/python3.10/site-packages/triton/backends/nvidia/bin/ptxas -lineinfo -v --gpu-name=sm_120 /tmp/tmp3b9ld5ez.ptx -o /tmp/tmp3b9ld5ez.ptx.o
E0305 20:23:38.866000 747 site-packages/torch/_inductor/runtime/triton_heuristics.py:533]
'sm_120' is not a recognized processor for this target (ignoring processor)
E0305 20:23:38.890000 755 site-packages/torch/_inductor/runtime/triton_heuristics.py:533] Triton compilation failed: triton_poi_fused__to_copy_mul_5
E0305 20:23:38.890000 755 site-packages/torch/_inductor/runtime/triton_heuristics.py:533] def triton_poi_fused__to_copy_mul_5(in_ptr0, in_ptr1, in_ptr2, out_ptr0, xnumel, XBLOCK : tl.constexpr):
E0305 20:23:38.890000 755 site-packages/torch/_inductor/runtime/triton_heuristics.py:533] xnumel = 1024
E0305 20:23:38.890000 755 site-packages/torch/_inductor/runtime/triton_heuristics.py:533] xoffset = tl.program_id(0) * XBLOCK
E0305 20:23:38.890000 755 site-packages/torch/_inductor/runtime/triton_heuristics.py:533] xindex = xoffset + tl.arange(0, XBLOCK)[:]
E0305 20:23:38.890000 755 site-packages/torch/_inductor/runtime/triton_heuristics.py:533] xmask = xindex < xnumel
E0305 20:23:38.890000 755 site-packages/torch/_inductor/runtime/triton_heuristics.py:533] x2 = xindex
E0305 20:23:38.890000 755 site-packages/torch/_inductor/runtime/triton_heuristics.py:533] x0 = (xindex % 64)
E0305 20:23:38.890000 755 site-packages/torch/_inductor/runtime/triton_heuristics.py:533] x1 = xindex // 64
E0305 20:23:38.890000 755 site-packages/torch/_inductor/runtime/triton_heuristics.py:533] tmp0 = (x2 % 2)
E0305 20:23:38.890000 755 site-packages/torch/_inductor/runtime/triton_heuristics.py:533] tmp1 = tl.full([1], 0, tl.int64)
E0305 20:23:38.890000 755 site-packages/torch/_inductor/runtime/triton_heuristics.py:533] tmp2 = tmp0 >= tmp1
E0305 20:23:38.890000 755 site-packages/torch/_inductor/runtime/triton_heuristics.py:533] tmp3 = tl.full([1], 1, tl.int64)
E0305 20:23:38.890000 755 site-packages/torch/_inductor/runtime/triton_heuristics.py:533] tmp4 = tmp0 < tmp3
E0305 20:23:38.890000 755 site-packages/torch/_inductor/runtime/triton_heuristics.py:533] tmp5 = tl.load(in_ptr0 + (2*(x0 // 2) + 64*x1), xmask & tmp4, eviction_policy='evict_last', other=0.0)
E0305 20:23:38.890000 755 site-packages/torch/_inductor/runtime/triton_heuristics.py:533] tmp6 = tmp5.to(tl.float32)
E0305 20:23:38.890000 755 site-packages/torch/_inductor/runtime/triton_heuristics.py:533] tmp7 = tmp6.to(tl.float32)
E0305 20:23:38.890000 755 site-packages/torch/_inductor/runtime/triton_heuristics.py:533] tmp8 = tl.load(in_ptr1 + (0))
E0305 20:23:38.890000 755 site-packages/torch/_inductor/runtime/triton_heuristics.py:533] tmp9 = tl.broadcast_to(tmp8, [XBLOCK])
E0305 20:23:38.890000 755 site-packages/torch/_inductor/runtime/triton_heuristics.py:533] tmp10 = tl.where(tmp4, tmp9, 0)
E0305 20:23:38.890000 755 site-packages/torch/_inductor/runtime/triton_heuristics.py:533] tmp11 = tl.full([XBLOCK], 8192, tl.int32)
E0305 20:23:38.890000 755 site-packages/torch/_inductor/runtime/triton_heuristics.py:533] tmp12 = tmp10 + tmp11
E0305 20:23:38.890000 755 site-packages/torch/_inductor/runtime/triton_heuristics.py:533] tmp13 = tmp10 < 0
E0305 20:23:38.890000 755 site-packages/torch/_inductor/runtime/triton_heuristics.py:533] tmp14 = tl.where(tmp13, tmp12, tmp10)
E0305 20:23:38.890000 755 site-packages/torch/_inductor/runtime/triton_heuristics.py:533] tl.device_assert(((0 <= tl.broadcast_to(tmp14, [XBLOCK])) & (tl.broadcast_to(tmp14, [XBLOCK]) < 8192)) | ~(xmask & tmp4), "index out of bounds: 0 <= tl.broadcast_to(tmp14, [XBLOCK]) < 8192")
E0305 20:23:38.890000 755 site-packages/torch/_inductor/runtime/triton_heuristics.py:533] tmp16 = tl.load(in_ptr2 + (2*(x0 // 2) + 64*tmp14), xmask & tmp4, eviction_policy='evict_last', other=0.0).to(tl.float32)
E0305 20:23:38.890000 755 site-packages/torch/_inductor/runtime/triton_heuristics.py:533] tmp17 = tmp16.to(tl.float32)
E0305 20:23:38.890000 755 site-packages/torch/_inductor/runtime/triton_heuristics.py:533] tmp18 = tmp7 * tmp17
E0305 20:23:38.890000 755 site-packages/torch/_inductor/runtime/triton_heuristics.py:533] tmp19 = tl.load(in_ptr0 + (1 + 2*(x0 // 2) + 64*x1), xmask & tmp4, eviction_policy='evict_last', other=0.0)
E0305 20:23:38.890000 755 site-packages/torch/_inductor/runtime/triton_heuristics.py:533] tmp20 = tmp19.to(tl.float32)
E0305 20:23:38.890000 755 site-packages/torch/_inductor/runtime/triton_heuristics.py:533] tmp21 = tmp20.to(tl.float32)
E0305 20:23:38.890000 755 site-packages/torch/_inductor/runtime/triton_heuristics.py:533] tmp22 = tl.load(in_ptr2 + (1 + 2*(x0 // 2) + 64*tmp14), xmask & tmp4, eviction_policy='evict_last', other=0.0).to(tl.float32)
E0305 20:23:38.890000 755 site-packages/torch/_inductor/runtime/triton_heuristics.py:533] tmp23 = tmp22.to(tl.float32)
E0305 20:23:38.890000 755 site-packages/torch/_inductor/runtime/triton_heuristics.py:533] tmp24 = tmp21 * tmp23
E0305 20:23:38.890000 755 site-packages/torch/_inductor/runtime/triton_heuristics.py:533] tmp25 = tmp18 - tmp24
E0305 20:23:38.890000 755 site-packages/torch/_inductor/runtime/triton_heuristics.py:533] tmp26 = tl.full(tmp25.shape, 0.0, tmp25.dtype)
E0305 20:23:38.890000 755 site-packages/torch/_inductor/runtime/triton_heuristics.py:533] tmp27 = tl.where(tmp4, tmp25, tmp26)
E0305 20:23:38.890000 755 site-packages/torch/_inductor/runtime/triton_heuristics.py:533] tmp28 = tmp0 >= tmp3
E0305 20:23:38.890000 755 site-packages/torch/_inductor/runtime/triton_heuristics.py:533] tmp29 = tl.full([1], 2, tl.int64)
E0305 20:23:38.890000 755 site-packages/torch/_inductor/runtime/triton_heuristics.py:533] tmp30 = tmp0 < tmp29
E0305 20:23:38.890000 755 site-packages/torch/_inductor/runtime/triton_heuristics.py:533] tmp31 = tl.load(in_ptr0 + (1 + 2*(x0 // 2) + 64*x1), xmask & tmp28, eviction_policy='evict_last', other=0.0)
E0305 20:23:38.890000 755 site-packages/torch/_inductor/runtime/triton_heuristics.py:533] tmp32 = tmp31.to(tl.float32)
E0305 20:23:38.890000 755 site-packages/torch/_inductor/runtime/triton_heuristics.py:533] tmp33 = tmp32.to(tl.float32)
E0305 20:23:38.890000 755 site-packages/torch/_inductor/runtime/triton_heuristics.py:533] tmp34 = tl.load(in_ptr1 + (0))
E0305 20:23:38.890000 755 site-packages/torch/_inductor/runtime/triton_heuristics.py:533] tmp35 = tl.broadcast_to(tmp34, [XBLOCK])
E0305 20:23:38.890000 755 site-packages/torch/_inductor/runtime/triton_heuristics.py:533] tmp36 = tl.where(tmp28, tmp35, 0)
E0305 20:23:38.890000 755 site-packages/torch/_inductor/runtime/triton_heuristics.py:533] tmp37 = tl.full([XBLOCK], 8192, tl.int32)
E0305 20:23:38.890000 755 site-packages/torch/_inductor/runtime/triton_heuristics.py:533] tmp38 = tmp36 + tmp37
E0305 20:23:38.890000 755 site-packages/torch/_inductor/runtime/triton_heuristics.py:533] tmp39 = tmp36 < 0
E0305 20:23:38.890000 755 site-packages/torch/_inductor/runtime/triton_heuristics.py:533] tmp40 = tl.where(tmp39, tmp38, tmp36)
E0305 20:23:38.890000 755 site-packages/torch/_inductor/runtime/triton_heuristics.py:533] tl.device_assert(((0 <= tl.broadcast_to(tmp40, [XBLOCK])) & (tl.broadcast_to(tmp40, [XBLOCK]) < 8192)) | ~(xmask & tmp28), "index out of bounds: 0 <= tl.broadcast_to(tmp40, [XBLOCK]) < 8192")
E0305 20:23:38.890000 755 site-packages/torch/_inductor/runtime/triton_heuristics.py:533] tmp42 = tl.load(in_ptr2 + (2*(x0 // 2) + 64*tmp40), tmp28 & xmask, eviction_policy='evict_last', other=0.0).to(tl.float32)
E0305 20:23:38.890000 755 site-packages/torch/_inductor/runtime/triton_heuristics.py:533] tmp43 = tmp42.to(tl.float32)
E0305 20:23:38.890000 755 site-packages/torch/_inductor/runtime/triton_heuristics.py:533] tmp44 = tmp33 * tmp43
E0305 20:23:38.890000 755 site-packages/torch/_inductor/runtime/triton_heuristics.py:533] tmp45 = tl.load(in_ptr0 + (2*(x0 // 2) + 64*x1), xmask & tmp28, eviction_policy='evict_last', other=0.0)
E0305 20:23:38.890000 755 site-packages/torch/_inductor/runtime/triton_heuristics.py:533] tmp46 = tmp45.to(tl.float32)
E0305 20:23:38.890000 755 site-packages/torch/_inductor/runtime/triton_heuristics.py:533] tmp47 = tmp46.to(tl.float32)
E0305 20:23:38.890000 755 site-packages/torch/_inductor/runtime/triton_heuristics.py:533] tmp48 = tl.load(in_ptr2 + (1 + 2*(x0 // 2) + 64*tmp40), tmp28 & xmask, eviction_policy='evict_last', other=0.0).to(tl.float32)
E0305 20:23:38.890000 755 site-packages/torch/_inductor/runtime/triton_heuristics.py:533] tmp49 = tmp48.to(tl.float32)
E0305 20:23:38.890000 755 site-packages/torch/_inductor/runtime/triton_heuristics.py:533] tmp50 = tmp47 * tmp49
E0305 20:23:38.890000 755 site-packages/torch/_inductor/runtime/triton_heuristics.py:533] tmp51 = tmp44 + tmp50
E0305 20:23:38.890000 755 site-packages/torch/_inductor/runtime/triton_heuristics.py:533] tmp52 = tl.full(tmp51.shape, 0.0, tmp51.dtype)
E0305 20:23:38.890000 755 site-packages/torch/_inductor/runtime/triton_heuristics.py:533] tmp53 = tl.where(tmp28, tmp51, tmp52)
E0305 20:23:38.890000 755 site-packages/torch/_inductor/runtime/triton_heuristics.py:533] tmp54 = tl.where(tmp4, tmp27, tmp53)
E0305 20:23:38.890000 755 site-packages/torch/_inductor/runtime/triton_heuristics.py:533] tmp55 = tmp54.to(tl.float32)
E0305 20:23:38.890000 755 site-packages/torch/_inductor/runtime/triton_heuristics.py:533] tmp56 = tmp55.to(tl.float32)
E0305 20:23:38.890000 755 site-packages/torch/_inductor/runtime/triton_heuristics.py:533] tmp57 = 0.3535533905932738
E0305 20:23:38.890000 755 site-packages/torch/_inductor/runtime/triton_heuristics.py:533] tmp58 = tmp56 * tmp57
E0305 20:23:38.890000 755 site-packages/torch/_inductor/runtime/triton_heuristics.py:533] tl.store(out_ptr0 + (x2), tmp58, xmask)
E0305 20:23:38.890000 755 site-packages/torch/_inductor/runtime/triton_heuristics.py:533]
E0305 20:23:38.890000 755 site-packages/torch/_inductor/runtime/triton_heuristics.py:533] metadata: {'signature': {'in_ptr0': '*fp32', 'in_ptr1': '*i32', 'in_ptr2': '*bf16', 'out_ptr0': '*fp32', 'xnumel': 'i32'}, 'device': 0, 'constants': {'XBLOCK': 256}, 'configs': [AttrsDescriptor.from_dict({'arg_properties': {'tt.divisibility': (0, 1, 2, 3, 4), 'tt.equal_to': ()}, 'cls': 'AttrsDescriptor'})], 'device_type': 'cuda', 'num_warps': 4, 'num_stages': 1, 'debug': True, 'cc': 120}
E0305 20:23:38.890000 755 site-packages/torch/_inductor/runtime/triton_heuristics.py:533] Traceback (most recent call last):
E0305 20:23:38.890000 755 site-packages/torch/_inductor/runtime/triton_heuristics.py:533] File "/home/eason/anaconda3/envs/fish-speech/lib/python3.10/site-packages/triton/backends/nvidia/compiler.py", line 356, in make_cubin
E0305 20:23:38.890000 755 site-packages/torch/_inductor/runtime/triton_heuristics.py:533] subprocess.run(ptxas_cmd, check=True, close_fds=False, stderr=flog)
E0305 20:23:38.890000 755 site-packages/torch/_inductor/runtime/triton_heuristics.py:533] File "/home/eason/anaconda3/envs/fish-speech/lib/python3.10/subprocess.py", line 526, in run
E0305 20:23:38.890000 755 site-packages/torch/_inductor/runtime/triton_heuristics.py:533] raise CalledProcessError(retcode, process.args,
E0305 20:23:38.890000 755 site-packages/torch/_inductor/runtime/triton_heuristics.py:533] subprocess.CalledProcessError: Command '['/home/eason/anaconda3/envs/fish-speech/lib/python3.10/site-packages/triton/backends/nvidia/bin/ptxas', '-lineinfo', '-v', '--gpu-name=sm_120', '/tmp/tmpr8yxx0lh.ptx', '-o', '/tmp/tmpr8yxx0lh.ptx.o']' returned non-zero exit status 255.
E0305 20:23:38.890000 755 site-packages/torch/_inductor/runtime/triton_heuristics.py:533]
E0305 20:23:38.890000 755 site-packages/torch/_inductor/runtime/triton_heuristics.py:533] During handling of the above exception, another exception occurred:
E0305 20:23:38.890000 755 site-packages/torch/_inductor/runtime/triton_heuristics.py:533]
E0305 20:23:38.890000 755 site-packages/torch/_inductor/runtime/triton_heuristics.py:533] Traceback (most recent call last):
E0305 20:23:38.890000 755 site-packages/torch/_inductor/runtime/triton_heuristics.py:533] File "/home/eason/anaconda3/envs/fish-speech/lib/python3.10/site-packages/torch/_inductor/runtime/triton_heuristics.py", line 531, in _precompile_config
E0305 20:23:38.890000 755 site-packages/torch/_inductor/runtime/triton_heuristics.py:533] binary = triton.compile(*compile_args, **compile_kwargs)
E0305 20:23:38.890000 755 site-packages/torch/_inductor/runtime/triton_heuristics.py:533] File "/home/eason/anaconda3/envs/fish-speech/lib/python3.10/site-packages/triton/compiler/compiler.py", line 279, in compile
E0305 20:23:38.890000 755 site-packages/torch/_inductor/runtime/triton_heuristics.py:533] next_module = compile_ir(module, metadata)
E0305 20:23:38.890000 755 site-packages/torch/_inductor/runtime/triton_heuristics.py:533] File "/home/eason/anaconda3/envs/fish-speech/lib/python3.10/site-packages/triton/backends/nvidia/compiler.py", line 389, in <lambda>
E0305 20:23:38.890000 755 site-packages/torch/_inductor/runtime/triton_heuristics.py:533] stages["cubin"] = lambda src, metadata: self.make_cubin(src, metadata, options, self.capability)
E0305 20:23:38.890000 755 site-packages/torch/_inductor/runtime/triton_heuristics.py:533] File "/home/eason/anaconda3/envs/fish-speech/lib/python3.10/site-packages/triton/backends/nvidia/compiler.py", line 374, in make_cubin
E0305 20:23:38.890000 755 site-packages/torch/_inductor/runtime/triton_heuristics.py:533] raise RuntimeError(f'{error}\n'
E0305 20:23:38.890000 755 site-packages/torch/_inductor/runtime/triton_heuristics.py:533] RuntimeError: Internal Triton PTX codegen error
E0305 20:23:38.890000 755 site-packages/torch/_inductor/runtime/triton_heuristics.py:533] `ptxas` stderr:
E0305 20:23:38.890000 755 site-packages/torch/_inductor/runtime/triton_heuristics.py:533] ptxas fatal : Value 'sm_120' is not defined for option 'gpu-name'
E0305 20:23:38.890000 755 site-packages/torch/_inductor/runtime/triton_heuristics.py:533]
E0305 20:23:38.890000 755 site-packages/torch/_inductor/runtime/triton_heuristics.py:533] Repro command: /home/eason/anaconda3/envs/fish-speech/lib/python3.10/site-packages/triton/backends/nvidia/bin/ptxas -lineinfo -v --gpu-name=sm_120 /tmp/tmpr8yxx0lh.ptx -o /tmp/tmpr8yxx0lh.ptx.o
E0305 20:23:38.890000 755 site-packages/torch/_inductor/runtime/triton_heuristics.py:533]
'sm_120' is not a recognized processor for this target (ignoring processor)
LLVM ERROR: Cannot select: intrinsic %llvm.nvvm.shfl.sync.bfly.i32
'sm_120' is not a recognized processor for this target (ignoring processor)
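As a side note, a quick way to see which `ptxas` Triton is actually invoking and which CUDA release it comes from (a hedged diagnostic sketch based only on the paths visible in the log above; sm_120 support depends on that bundled binary, not on the system toolkit):
```python
# Diagnostic sketch (assumption: the relevant ptxas is the one bundled with
# Triton, at the backends/nvidia/bin path shown in the traceback above).
import os
import subprocess

import torch
import triton

major, minor = torch.cuda.get_device_capability(0)
print("requested arch:", f"sm_{major}{minor}")

ptxas = os.path.join(os.path.dirname(triton.__file__), "backends", "nvidia", "bin", "ptxas")
print(subprocess.run([ptxas, "--version"], capture_output=True, text=True).stdout)
```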
### Versions
Collecting environment information...
PyTorch version: 2.7.0.dev20250305+cu128
Is debug build: False
CUDA used to build PyTorch: 12.8
ROCM used to build PyTorch: N/A
OS: Ubuntu 24.04.2 LTS (x86_64)
GCC version: (Ubuntu 13.3.0-6ubuntu2~24.04) 13.3.0
Clang version: Could not collect
CMake version: version 3.28.3
Libc version: glibc-2.39
Python version: 3.10.16 (main, Dec 11 2024, 16:24:50) [GCC 11.2.0] (64-bit runtime)
Python platform: Linux-5.15.167.4-microsoft-standard-WSL2-x86_64-with-glibc2.39
Is CUDA available: True
CUDA runtime version: Could not collect
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration: GPU 0: NVIDIA GeForce RTX 5090 D
Nvidia driver version: 572.60
cuDNN version: Could not collect
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 46 bits physical, 48 bits virtual
Byte Order: Little Endian
CPU(s): 24
On-line CPU(s) list: 0-23
Vendor ID: GenuineIntel
Model name: Intel(R) Core(TM) Ultra 9 285K
CPU family: 6
Model: 198
Thread(s) per core: 1
Core(s) per socket: 24
Socket(s): 1
Stepping: 2
BogoMIPS: 7372.79
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ss ht syscall nx pdpe1gb rdtscp lm constant_tsc rep_good nopl xtopology tsc_reliable nonstop_tsc cpuid pni pclmulqdq vmx ssse3 fma cx16 pcid sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand hypervisor lahf_lm abm 3dnowprefetch invpcid_single ssbd ibrs ibpb stibp ibrs_enhanced tpr_shadow vnmi ept vpid ept_ad fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid rdseed adx smap clflushopt clwb sha_ni xsaveopt xsavec xgetbv1 xsaves avx_vnni umip waitpkg gfni vaes vpclmulqdq rdpid movdiri movdir64b fsrm md_clear serialize flush_l1d arch_capabilities
Virtualization: VT-x
Hypervisor vendor: Microsoft
Virtualization type: full
L1d cache: 1.1 MiB (24 instances)
L1i cache: 1.5 MiB (24 instances)
L2 cache: 72 MiB (24 instances)
L3 cache: 36 MiB (1 instance)
Vulnerability Gather data sampling: Not affected
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Not affected
Vulnerability Reg file data sampling: Not affected
Vulnerability Retbleed: Mitigation; Enhanced IBRS
Vulnerability Spec rstack overflow: Not affected
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl and seccomp
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Enhanced / Automatic IBRS; IBPB conditional; RSB filling; PBRSB-eIBRS SW sequence; BHI BHI_DIS_S
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Versions of relevant libraries:
[pip3] numpy==2.1.2
[pip3] nvidia-cublas-cu12==12.8.3.14
[pip3] nvidia-cuda-cupti-cu12==12.8.57
[pip3] nvidia-cuda-nvrtc-cu12==12.8.61
[pip3] nvidia-cuda-runtime-cu12==12.8.57
[pip3] nvidia-cudnn-cu12==9.7.1.26
[pip3] nvidia-cufft-cu12==11.3.3.41
[pip3] nvidia-curand-cu12==10.3.9.55
[pip3] nvidia-cusolver-cu12==11.7.2.55
[pip3] nvidia-cusparse-cu12==12.5.7.53
[pip3] nvidia-cusparselt-cu12==0.6.3
[pip3] nvidia-nccl-cu12==2.25.1
[pip3] nvidia-nvjitlink-cu12==12.8.61
[pip3] nvidia-nvtx-cu12==12.8.55
[pip3] onnxruntime==1.20.1
[pip3] pytorch-lightning==2.5.0.post0
[pip3] pytorch-triton==3.2.0+git4b3bb1f8
[pip3] pytorch-wpe==0.0.1
[pip3] torch==2.7.0.dev20250305+cu128
[pip3] torch-complex==0.4.4
[pip3] torchaudio==2.6.0.dev20250305+cu128
[pip3] torchmetrics==1.6.2
[pip3] torchvision==0.22.0.dev20250305+cu128
[pip3] vector-quantize-pytorch==1.14.24
[conda] numpy 2.1.2 pypi_0 pypi
[conda] nvidia-cublas-cu12 12.8.3.14 pypi_0 pypi
[conda] nvidia-cuda-cupti-cu12 12.8.57 pypi_0 pypi
[conda] nvidia-cuda-nvrtc-cu12 12.8.61 pypi_0 pypi
[conda] nvidia-cuda-runtime-cu12 12.8.57 pypi_0 pypi
[conda] nvidia-cudnn-cu12 9.7.1.26 pypi_0 pypi
[conda] nvidia-cufft-cu12 11.3.3.41 pypi_0 pypi
[conda] nvidia-curand-cu12 10.3.9.55 pypi_0 pypi
[conda] nvidia-cusolver-cu12 11.7.2.55 pypi_0 pypi
[conda] nvidia-cusparse-cu12 12.5.7.53 pypi_0 pypi
[conda] nvidia-cusparselt-cu12 0.6.3 pypi_0 pypi
[conda] nvidia-nccl-cu12 2.25.1 pypi_0 pypi
[conda] nvidia-nvjitlink-cu12 12.8.61 pypi_0 pypi
[conda] nvidia-nvtx-cu12 12.8.55 pypi_0 pypi
[conda] pytorch-lightning 2.5.0.post0 pypi_0 pypi
[conda] pytorch-triton 3.2.0+git4b3bb1f8 pypi_0 pypi
[conda] pytorch-wpe 0.0.1 pypi_0 pypi
[conda] torch 2.7.0.dev20250305+cu128 pypi_0 pypi
[conda] torch-complex 0.4.4 pypi_0 pypi
[conda] torchaudio 2.6.0.dev20250305+cu128 pypi_0 pypi
[conda] torchmetrics 1.6.2 pypi_0 pypi
[conda] torchvision 0.22.0.dev20250305+cu128 pypi_0 pypi
[conda] vector-quantize-pytorch 1.14.24 pypi_0 pypi
cc @chauhang @penguinwu @voznesenskym @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @amjames @aakhundov
| true
|
2,897,219,901
|
Name 'equal_valued' cannot be imported in pytorch 2.5.0
|
ByulEEEEE
|
closed
|
[] | 1
|
NONE
|
### 🐛 Describe the bug
Name 'equal_valued' cannot be imported when using PyTorch 2.5.0, although it can be imported with versions 1.13 and 2.5.1.
```
Traceback (most recent call last):
  File "D:\PycharmProjects\pythonProject2\test.py", line 4, in <module>
    from sympy.core.numbers import equal_valued
ImportError: cannot import name 'equal_valued' from 'sympy.core.numbers' (D:\mydownload\anaconda3\Lib\site-packages\sympy\core\numbers.py)
```
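Since the failing import is `from sympy.core.numbers import equal_valued`, this looks like it may be driven by the SymPy version installed alongside that PyTorch build rather than by PyTorch itself (an assumption on my part). A minimal check/fallback sketch, assuming only rational/float values need comparing:
```python
# Minimal sketch (assumption: the problem is an older SymPy that predates
# `equal_valued`, not PyTorch itself). Print the resolved SymPy version and
# fall back to an exact-rational comparison if the import is unavailable.
import sympy
print("sympy version:", sympy.__version__)

try:
    from sympy.core.numbers import equal_valued
except ImportError:
    def equal_valued(x, y):
        # Rough stand-in for rational/float inputs only; not a full replacement.
        return bool(sympy.Rational(x) == sympy.Rational(y))

print(equal_valued(sympy.Float(0.5), 0.5))  # expected: True
```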
### Versions
PyTorch version: 2.5.0
Python version: 3.11.1
cc @EikanWang @jgong5 @wenzhe-nrv @sanchitintel
| true
|
2,897,218,480
|
Name 'equal_valued' cannot be imported in pytorch 2.5.0
|
ByulEEEEE
|
open
|
[
"triaged",
"oncall: pt2",
"module: dynamic shapes"
] | 0
|
NONE
|
### 🐛 Describe the bug
Name 'equal_valued' cannot be imported when using PyTorch 2.5.0, although it can be imported with versions 1.13 and 2.5.1.
```
Traceback (most recent call last):
  File "D:\PycharmProjects\pythonProject2\test.py", line 4, in <module>
    from sympy.core.numbers import equal_valued
ImportError: cannot import name 'equal_valued' from 'sympy.core.numbers' (D:\mydownload\anaconda3\Lib\site-packages\sympy\core\numbers.py)
```
### Versions
PyTorch version: 2.5.0
Python version: 3.11.1
cc @chauhang @penguinwu @ezyang @bobrenjc93
| true
|
2,897,196,745
|
Issue with Sparse Tensor Matrix Multiplication and Broadcasting
|
pigu163
|
open
|
[
"module: sparse",
"triaged"
] | 0
|
NONE
|
### 🐛 Describe the bug
I’m encountering a NotImplementedError when trying to perform matrix multiplication with sparse COO tensors that involves broadcasting. Here’s a minimal reproducible example:
```python
x = torch.matmul(torch.rand(327, 36).to_sparse_coo(), torch.rand(1, 36, 1))
torch.matmul(torch.rand(2000, 327).to_sparse_coo(), x)
```
`x.shape` --> `torch.Size([1, 327, 1])`
This produces the following error:
```
---------------------------------------------------------------------------
NotImplementedError                       Traceback (most recent call last)
Cell In[252], line 2
      1 x = torch.matmul(torch.rand(327, 36).to_sparse_coo(),torch.rand(1, 36, 1))
----> 2 torch.matmul(torch.rand(2000, 327).to_sparse_coo(), x)

NotImplementedError: Could not run 'aten::as_strided' with arguments from the 'SparseCPU' backend. This could be because the operator doesn't exist for this backend, or was omitted during the selective/custom build process (if using custom build). If you are a Facebook employee using PyTorch on mobile, please visit https://fburl.com/ptmfixes for possible resolutions. 'aten::as_strided' is only available for these backends: [CPU, CUDA, Meta, QuantizedCPU, QuantizedCUDA, BackendSelect, Python, FuncTorchDynamicLayerBackMode, Functionalize, Named, Conjugate, Negative, ZeroTensor, ADInplaceOrView, AutogradOther, AutogradCPU, AutogradCUDA, AutogradHIP, AutogradXLA, AutogradMPS, AutogradIPU, AutogradXPU, AutogradHPU, AutogradVE, AutogradLazy, AutogradMeta, AutogradMTIA, AutogradPrivateUse1, AutogradPrivateUse2, AutogradPrivateUse3, AutogradNestedTensor, Tracer, AutocastCPU, AutocastCUDA, FuncTorchBatched, FuncTorchVmapMode, Batched, VmapMode, FuncTorchGradWrapper, PythonTLSSnapshot, FuncTorchDynamicLayerFrontMode, PythonDispatcher].
```
**It seems the first torch.matmul() on the sparse-dense pair raises no error, but the second one does.**
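A possible interim workaround, offered only as a sketch under the assumption that the broadcasted batch dimension of size 1 can be squeezed out before the second product and restored afterwards:
```python
# Hypothetical workaround sketch (assumption: the batch dim of size 1 can be
# squeezed out so the second product is a plain 2-D sparse @ 2-D dense matmul).
import torch

a = torch.rand(327, 36).to_sparse_coo()
b = torch.rand(1, 36, 1)
w = torch.rand(2000, 327).to_sparse_coo()

x = torch.matmul(a, b)              # works, dense result of shape (1, 327, 1)
y = torch.matmul(w, x.squeeze(0))   # sparse (2000, 327) @ dense (327, 1)
y = y.unsqueeze(0)                  # restore the broadcast shape (1, 2000, 1)
print(y.shape)
```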
### Versions
Collecting environment information...
PyTorch version: 2.0.1+cu118
Is debug build: False
CUDA used to build PyTorch: 11.8
ROCM used to build PyTorch: N/A
OS: Ubuntu 22.04.4 LTS (x86_64)
GCC version: (Ubuntu 11.4.0-1ubuntu1~22.04) 11.4.0
Clang version: 14.0.0-1ubuntu1.1
CMake version: version 3.25.0
Libc version: glibc-2.35
Python version: 3.11.3 | packaged by conda-forge | (main, Apr 6 2023, 08:57:19) [GCC 11.3.0] (64-bit runtime)
Python platform: Linux-5.15.0-119-generic-x86_64-with-glibc2.35
Is CUDA available: True
CUDA runtime version: 11.5.119
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration: GPU 0: Quadro RTX 4000
Nvidia driver version: 535.183.06
cuDNN version: Could not collect
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 46 bits physical, 48 bits virtual
Byte Order: Little Endian
CPU(s): 64
On-line CPU(s) list: 0-63
Vendor ID: GenuineIntel
Model name: Intel(R) Xeon(R) Gold 6226R CPU @ 2.90GHz
CPU family: 6
Model: 85
Thread(s) per core: 2
Core(s) per socket: 16
Socket(s): 2
Stepping: 7
CPU max MHz: 3900.0000
CPU min MHz: 1200.0000
BogoMIPS: 5800.00
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc art arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc cpuid aperfmperf pni pclmulqdq dtes64 monitor ds_cpl vmx smx est tm2 ssse3 sdbg fma cx16 xtpr pdcm pcid dca sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm 3dnowprefetch cpuid_fault epb cat_l3 cdp_l3 invpcid_single intel_ppin ssbd mba ibrs ibpb stibp ibrs_enhanced tpr_shadow vnmi flexpriority ept vpid ept_ad fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid cqm mpx rdt_a avx512f avx512dq rdseed adx smap clflushopt clwb intel_pt avx512cd avx512bw avx512vl xsaveopt xsavec xgetbv1 xsaves cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local dtherm ida arat pln pts hwp hwp_act_window hwp_pkg_req pku ospke avx512_vnni md_clear flush_l1d arch_capabilities
Virtualization: VT-x
L1d cache: 1 MiB (32 instances)
L1i cache: 1 MiB (32 instances)
L2 cache: 32 MiB (32 instances)
L3 cache: 44 MiB (2 instances)
NUMA node(s): 2
NUMA node0 CPU(s): 0-15,32-47
NUMA node1 CPU(s): 16-31,48-63
Vulnerability Gather data sampling: Mitigation; Microcode
Vulnerability Itlb multihit: KVM: Mitigation: VMX disabled
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Mitigation; Clear CPU buffers; SMT vulnerable
Vulnerability Reg file data sampling: Not affected
Vulnerability Retbleed: Mitigation; Enhanced IBRS
Vulnerability Spec rstack overflow: Not affected
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl and seccomp
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Enhanced / Automatic IBRS; IBPB conditional; RSB filling; PBRSB-eIBRS SW sequence; BHI SW loop, KVM SW loop
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Mitigation; TSX disabled
Versions of relevant libraries:
[pip3] numpy==1.26.4
[pip3] nvidia-cublas-cu11==11.11.3.6
[pip3] nvidia-cuda-cupti-cu11==11.8.87
[pip3] nvidia-cuda-nvrtc-cu11==11.8.89
[pip3] nvidia-cuda-runtime-cu11==11.8.89
[pip3] nvidia-cudnn-cu11==9.1.0.70
[pip3] nvidia-cufft-cu11==10.9.0.58
[pip3] nvidia-curand-cu11==10.3.0.86
[pip3] nvidia-cusolver-cu11==11.4.1.48
[pip3] nvidia-cusparse-cu11==11.7.5.86
[pip3] nvidia-nccl-cu11==2.21.5
[pip3] nvidia-nvtx-cu11==11.8.86
[pip3] pytorch-lightning==2.0.0
[pip3] torch==2.0.1+cu118
[pip3] torch-cluster==1.6.3+pt20cu118
[pip3] torch-geometric==2.6.1
[pip3] torch-scatter==2.1.2+pt20cu118
[pip3] torch-sparse==0.6.18+pt20cu118
[pip3] torch_spatiotemporal==0.9.5
[pip3] torch-spline-conv==1.2.2+pt20cu118
[pip3] torchaudio==2.0.2+cu118
[pip3] torchmetrics==1.6.0
[pip3] torchvision==0.15.2+cu118
[pip3] triton==2.0.0
[conda] numpy 1.26.4 pypi_0 pypi
[conda] nvidia-cublas-cu11 11.11.3.6 pypi_0 pypi
[conda] nvidia-cuda-cupti-cu11 11.8.87 pypi_0 pypi
[conda] nvidia-cuda-nvrtc-cu11 11.8.89 pypi_0 pypi
[conda] nvidia-cuda-runtime-cu11 11.8.89 pypi_0 pypi
[conda] nvidia-cudnn-cu11 9.1.0.70 pypi_0 pypi
[conda] nvidia-cufft-cu11 10.9.0.58 pypi_0 pypi
[conda] nvidia-curand-cu11 10.3.0.86 pypi_0 pypi
[conda] nvidia-cusolver-cu11 11.4.1.48 pypi_0 pypi
[conda] nvidia-cusparse-cu11 11.7.5.86 pypi_0 pypi
[conda] nvidia-nccl-cu11 2.21.5 pypi_0 pypi
[conda] nvidia-nvtx-cu11 11.8.86 pypi_0 pypi
[conda] pytorch-lightning 2.0.0 pypi_0 pypi
[conda] torch 2.0.1+cu118 pypi_0 pypi
[conda] torch-cluster 1.6.3+pt20cu118 pypi_0 pypi
[conda] torch-geometric 2.6.1 pypi_0 pypi
[conda] torch-scatter 2.1.2+pt20cu118 pypi_0 pypi
[conda] torch-sparse 0.6.18+pt20cu118 pypi_0 pypi
[conda] torch-spatiotemporal 0.9.5 pypi_0 pypi
[conda] torch-spline-conv 1.2.2+pt20cu118 pypi_0 pypi
[conda] torchaudio 2.0.2+cu118 pypi_0 pypi
[conda] torchmetrics 1.6.0 pypi_0 pypi
[conda] torchvision 0.15.2+cu118 pypi_0 pypi
[conda] triton 2.0.0 pypi_0 pypi
cc @alexsamardzic @nikitaved @pearu @cpuhrsch @amjames @bhosmer @jcaip
| true
|
2,897,131,481
|
The operator 'aten::_linalg_solve_ex.result' is not currently implemented for the MPS device
|
tuwenbo0120
|
closed
|
[
"triaged",
"module: linear algebra",
"module: mps"
] | 2
|
NONE
|
### 🚀 The feature, motivation and pitch
Subject: Missing Operator Support for MPS in PyTorch
Dear PyTorch Development Team,
I am writing to bring to your attention an issue I encountered while using PyTorch with the MPS device on my macOS system.
When running my code, I received the following error message: "The operator 'aten::_linalg_solve_ex.result' is not currently implemented for the MPS device. If you want this op to be considered for addition please comment on https://github.com/pytorch/pytorch/issues/141287 and mention use - case, that resulted in missing op as well as commit hash 2236df1770800ffea5697b11b0bb0d910b2e59e1."
My use-case is the following: I am working on a machine learning project where I am solving a system of linear equations using PyTorch tensors on an M1 Mac. The code that triggers this error is part of a custom neural network layer implementation that requires solving a set of linear equations for each forward pass.
The commit hash of my current PyTorch version is 2236df1770800ffea5697b11b0bb0d910b2e59e1. I believe adding support for this operator on the MPS device would significantly improve the performance and usability of PyTorch for macOS users with MPS-enabled hardware, especially those working on numerical computing and machine learning tasks that rely on solving linear equations.
Thank you for your attention to this matter, and I look forward to seeing the support for this operator in future releases.
Best regards,
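(A possible interim workaround, only a sketch under the assumption that a CPU round-trip is acceptable: run the solve on the CPU and move the result back to MPS, or set `PYTORCH_ENABLE_MPS_FALLBACK=1` before importing torch so unsupported MPS ops fall back automatically.)
```python
# Sketch of a CPU fallback for the unsupported MPS solve (an assumption, not an
# official resolution). PYTORCH_ENABLE_MPS_FALLBACK=1 is the env-var alternative.
import torch

device = torch.device("mps")
A = torch.rand(4, 4, device=device)
b = torch.rand(4, 1, device=device)

# Solve on the CPU, then move the result back to the MPS device.
x = torch.linalg.solve(A.cpu(), b.cpu()).to(device)
print(x.device, x.shape)
```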
### Alternatives
_No response_
### Additional context
_No response_
cc @jianyuh @nikitaved @pearu @mruberry @walterddr @xwang233 @Lezcano @kulinseth @albanD @malfet @DenisVieriu97 @jhavukainen
| true
|
2,897,095,074
|
Pytorch nightly broken Flash Attention 3 compile with sycl commit - TypeError: _write_ninja_file() got an unexpected keyword argument 'sycl_cflags'
|
FurkanGozukara
|
open
|
[
"module: build",
"triaged"
] | 4
|
NONE
|
Here is more information:
`torch.__version__ = 2.7.0.dev20250228+cu128`
https://github.com/Dao-AILab/flash-attention/issues/1524
https://github.com/Dao-AILab/flash-attention/issues/1524#issuecomment-2699309947
The original error was the first one below. The authors then made some commits, and now I get the second error shown further down.
Previous error:
`TypeError: _write_ninja_file() got an unexpected keyword argument 'sycl_cflags'`
```
(venv) C:\a\d\hopper>python setup.py build_ext --parallel 8 bdist_wheel
torch.__version__ = 2.7.0.dev20250228+cu128
C:\a\venv\lib\site-packages\setuptools\dist.py:530: UserWarning: Normalizing '3.0.0.b1' to '3.0.0b1'
warnings.warn(tmpl.format(**locals()))
running build_ext
building 'flash_attn_3_cuda' extension
Emitting ninja build file C:\a\d\hopper\build\temp.win-amd64-cpython-310\Release\build.ninja...
Traceback (most recent call last):
File "C:\a\d\hopper\setup.py", line 618, in <module>
setup(
File "C:\a\venv\lib\site-packages\setuptools\__init__.py", line 87, in setup
return distutils.core.setup(**attrs)
File "C:\a\venv\lib\site-packages\setuptools\_distutils\core.py", line 185, in setup
return run_commands(dist)
File "C:\a\venv\lib\site-packages\setuptools\_distutils\core.py", line 201, in run_commands
dist.run_commands()
File "C:\a\venv\lib\site-packages\setuptools\_distutils\dist.py", line 968, in run_commands
self.run_command(cmd)
File "C:\a\venv\lib\site-packages\setuptools\dist.py", line 1217, in run_command
super().run_command(command)
File "C:\a\venv\lib\site-packages\setuptools\_distutils\dist.py", line 987, in run_command
cmd_obj.run()
File "C:\a\venv\lib\site-packages\setuptools\command\build_ext.py", line 84, in run
_build_ext.run(self)
File "C:\a\venv\lib\site-packages\setuptools\_distutils\command\build_ext.py", line 346, in run
self.build_extensions()
File "C:\a\venv\lib\site-packages\torch\utils\cpp_extension.py", line 1007, in build_extensions
build_ext.build_extensions(self)
File "C:\a\venv\lib\site-packages\setuptools\_distutils\command\build_ext.py", line 464, in build_extensions
self._build_extensions_parallel()
File "C:\a\venv\lib\site-packages\setuptools\_distutils\command\build_ext.py", line 487, in _build_extensions_parallel
fut.result()
File "C:\Python310\lib\concurrent\futures\_base.py", line 458, in result
return self.__get_result()
File "C:\Python310\lib\concurrent\futures\_base.py", line 403, in __get_result
raise self._exception
File "C:\Python310\lib\concurrent\futures\thread.py", line 58, in run
result = self.fn(*self.args, **self.kwargs)
File "C:\a\venv\lib\site-packages\setuptools\command\build_ext.py", line 246, in build_extension
_build_ext.build_extension(self, ext)
File "C:\a\venv\lib\site-packages\setuptools\_distutils\command\build_ext.py", line 547, in build_extension
objects = self.compiler.compile(
File "C:\a\venv\lib\site-packages\torch\utils\cpp_extension.py", line 975, in win_wrap_ninja_compile
_write_ninja_file_and_compile_objects(
File "C:\a\venv\lib\site-packages\torch\utils\cpp_extension.py", line 2107, in _write_ninja_file_and_compile_objects
_write_ninja_file(
TypeError: _write_ninja_file() got an unexpected keyword argument 'sycl_cflags'
(venv) C:\a\d\hopper>
```
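As a quick diagnostic (my own sketch, not from the original report), you can check whether the installed torch build's private `_write_ninja_file` helper accepts the `sycl_cflags` keyword that the Windows compile path passes; since it is a private helper, its signature can differ between nightlies:
```python
import inspect

import torch
from torch.utils import cpp_extension

print(torch.__version__)
# If "sycl_cflags" is missing from the signature, the TypeError above is expected.
sig = inspect.signature(cpp_extension._write_ninja_file)
print("sycl_cflags" in sig.parameters)
```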
The error I get after their fix (it still doesn't work on Windows):
```
(venv) C:\a\d\hopper>python setup.py build_ext --parallel 8 bdist_wheel
Submodule 'csrc/cutlass' (https://github.com/NVIDIA/cutlass.git) registered for path '../csrc/cutlass'
Cloning into 'C:/a/d/csrc/cutlass'...
Submodule path '../csrc/cutlass': checked out 'afa1772203677c5118fcd82537a9c8fefbcc7008'
torch.__version__ = 2.7.0.dev20250228+cu128
C:\a\venv\lib\site-packages\setuptools\dist.py:530: UserWarning: Normalizing '3.0.0.b1' to '3.0.0b1'
warnings.warn(tmpl.format(**locals()))
running build_ext
building 'flash_attn_3_cuda' extension
creating C:\a\d\hopper\build
creating C:\a\d\hopper\build\temp.win-amd64-cpython-310
creating C:\a\d\hopper\build\temp.win-amd64-cpython-310\Release
creating C:\a\d\hopper\build\temp.win-amd64-cpython-310\Release\instantiations
Emitting ninja build file C:\a\d\hopper\build\temp.win-amd64-cpython-310\Release\build.ninja...
Compiling objects...
Allowing ninja to set a default number of workers... (overridable by setting the environment variable MAX_JOBS=N)
[1/133] C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v12.8\bin\nvcc --generate-dependencies-with-compile --dependency-output C:\a\d\hopper\build\temp.win-amd64-cpython-310\Release\flash_prepare_scheduler.obj.d -std=c++17 --use-local-env -Xcompiler /MD -Xcompiler /wd4819 -Xcompiler /wd4251 -Xcompiler /wd4244 -Xcompiler /wd4267 -Xcompiler /wd4275 -Xcompiler /wd4018 -Xcompiler /wd4190 -Xcompiler /wd4624 -Xcompiler /wd4067 -Xcompiler /wd4068 -Xcompiler /EHsc -Xcudafe --diag_suppress=base_class_has_different_dll_interface -Xcudafe --diag_suppress=field_without_dll_interface -Xcudafe --diag_suppress=dll_interface_conflict_none_assumed -Xcudafe --diag_suppress=dll_interface_conflict_dllexport_assumed -IC:\a\d\hopper -IC:\a\d\csrc\cutlass\include -IC:\a\venv\lib\site-packages\torch\include -IC:\a\venv\lib\site-packages\torch\include\torch\csrc\api\include "-IC:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v12.8\include" -IC:\a\venv\include -IC:\Python310\include -IC:\Python310\Include "-IC:\Program Files\Microsoft Visual Studio\2022\Community\VC\Tools\MSVC\14.42.34433\include" "-IC:\Program Files\Microsoft Visual Studio\2022\Community\VC\Tools\MSVC\14.42.34433\ATLMFC\include" "-IC:\Program Files\Microsoft Visual Studio\2022\Community\VC\Auxiliary\VS\include" "-IC:\Program Files (x86)\Windows Kits\10\include\10.0.26100.0\ucrt" "-IC:\Program Files (x86)\Windows Kits\10\\include\10.0.26100.0\\um" "-IC:\Program Files (x86)\Windows Kits\10\\include\10.0.26100.0\\shared" "-IC:\Program Files (x86)\Windows Kits\10\\include\10.0.26100.0\\winrt" "-IC:\Program Files (x86)\Windows Kits\10\\include\10.0.26100.0\\cppwinrt" "-IC:\Program Files (x86)\Windows Kits\NETFXSDK\4.8\include\um" -c C:\a\d\hopper\flash_prepare_scheduler.cu -o C:\a\d\hopper\build\temp.win-amd64-cpython-310\Release\flash_prepare_scheduler.obj -D__CUDA_NO_HALF_OPERATORS__ -D__CUDA_NO_HALF_CONVERSIONS__ -D__CUDA_NO_BFLOAT16_CONVERSIONS__ -D__CUDA_NO_HALF2_OPERATORS__ --expt-relaxed-constexpr --threads 2 -O3 -std=c++17 --ftemplate-backtrace-limit=0 --use_fast_math --resource-usage -lineinfo -DCUTE_SM90_EXTENDED_MMA_SHAPES_ENABLED -DCUTLASS_DEBUG_TRACE_LEVEL=0 -DNDEBUG -D_USE_MATH_DEFINES -Xcompiler=/Zc:__cplusplus -gencode arch=compute_90a,code=sm_90a -DTORCH_API_INCLUDE_EXTENSION_H -DTORCH_EXTENSION_NAME=flash_attn_3_cuda -D_GLIBCXX_USE_CXX11_ABI=0 -gencode arch=compute_80,code=sm_80
flash_prepare_scheduler.cu
flash_prepare_scheduler.cu
flash_prepare_scheduler.cu
flash_prepare_scheduler.cu
ptxas info : 8 bytes gmem, 64 bytes cmem[4]
ptxas info : Compiling entry function '_ZN5flash32prepare_varlen_num_blocks_kernelEiiiPKiS1_S1_S1_S1_S1_iiiiiN7cutlass10FastDivmodES3_PiS4_S4_S4_' for 'sm_80'
ptxas info : Function properties for _ZN5flash32prepare_varlen_num_blocks_kernelEiiiPKiS1_S1_S1_S1_S1_iiiiiN7cutlass10FastDivmodES3_PiS4_S4_S4_
0 bytes stack frame, 0 bytes spill stores, 0 bytes spill loads
ptxas info : Used 22 registers, used 1 barriers, 4 bytes smem, 496 bytes cmem[0]
ptxas info : Compile time = 0.000 ms
ptxas info : 8 bytes gmem
ptxas info : Compiling entry function '_ZN5flash32prepare_varlen_num_blocks_kernelEiiiPKiS1_S1_S1_S1_S1_iiiiiN7cutlass10FastDivmodES3_PiS4_S4_S4_' for 'sm_90a'
ptxas info : Function properties for _ZN5flash32prepare_varlen_num_blocks_kernelEiiiPKiS1_S1_S1_S1_S1_iiiiiN7cutlass10FastDivmodES3_PiS4_S4_S4_
0 bytes stack frame, 0 bytes spill stores, 0 bytes spill loads
ptxas info : Used 26 registers, used 1 barriers, 4 bytes smem
ptxas info : Compile time = 0.000 ms
tmpxft_00008768_00000000-7_flash_prepare_scheduler.compute_80.cudafe1.cpp
[2/133] cl /showIncludes /nologo /O2 /W3 /GL /DNDEBUG /MD /MD /wd4819 /wd4251 /wd4244 /wd4267 /wd4275 /wd4018 /wd4190 /wd4624 /wd4067 /wd4068 /EHsc -IC:\a\d\hopper -IC:\a\d\csrc\cutlass\include -IC:\a\venv\lib\site-packages\torch\include -IC:\a\venv\lib\site-packages\torch\include\torch\csrc\api\include "-IC:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v12.8\include" -IC:\a\venv\include -IC:\Python310\include -IC:\Python310\Include "-IC:\Program Files\Microsoft Visual Studio\2022\Community\VC\Tools\MSVC\14.42.34433\include" "-IC:\Program Files\Microsoft Visual Studio\2022\Community\VC\Tools\MSVC\14.42.34433\ATLMFC\include" "-IC:\Program Files\Microsoft Visual Studio\2022\Community\VC\Auxiliary\VS\include" "-IC:\Program Files (x86)\Windows Kits\10\include\10.0.26100.0\ucrt" "-IC:\Program Files (x86)\Windows Kits\10\\include\10.0.26100.0\\um" "-IC:\Program Files (x86)\Windows Kits\10\\include\10.0.26100.0\\shared" "-IC:\Program Files (x86)\Windows Kits\10\\include\10.0.26100.0\\winrt" "-IC:\Program Files (x86)\Windows Kits\10\\include\10.0.26100.0\\cppwinrt" "-IC:\Program Files (x86)\Windows Kits\NETFXSDK\4.8\include\um" -c C:\a\d\hopper\flash_api.cpp /FoC:\a\d\hopper\build\temp.win-amd64-cpython-310\Release\flash_api.obj -O3 -std=c++17 -DTORCH_API_INCLUDE_EXTENSION_H -DTORCH_EXTENSION_NAME=flash_attn_3_cuda -D_GLIBCXX_USE_CXX11_ABI=0 /std:c++17
FAILED: C:/a/d/hopper/build/temp.win-amd64-cpython-310/Release/flash_api.obj
cl /showIncludes /nologo /O2 /W3 /GL /DNDEBUG /MD /MD /wd4819 /wd4251 /wd4244 /wd4267 /wd4275 /wd4018 /wd4190 /wd4624 /wd4067 /wd4068 /EHsc -IC:\a\d\hopper -IC:\a\d\csrc\cutlass\include -IC:\a\venv\lib\site-packages\torch\include -IC:\a\venv\lib\site-packages\torch\include\torch\csrc\api\include "-IC:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v12.8\include" -IC:\a\venv\include -IC:\Python310\include -IC:\Python310\Include "-IC:\Program Files\Microsoft Visual Studio\2022\Community\VC\Tools\MSVC\14.42.34433\include" "-IC:\Program Files\Microsoft Visual Studio\2022\Community\VC\Tools\MSVC\14.42.34433\ATLMFC\include" "-IC:\Program Files\Microsoft Visual Studio\2022\Community\VC\Auxiliary\VS\include" "-IC:\Program Files (x86)\Windows Kits\10\include\10.0.26100.0\ucrt" "-IC:\Program Files (x86)\Windows Kits\10\\include\10.0.26100.0\\um" "-IC:\Program Files (x86)\Windows Kits\10\\include\10.0.26100.0\\shared" "-IC:\Program Files (x86)\Windows Kits\10\\include\10.0.26100.0\\winrt" "-IC:\Program Files (x86)\Windows Kits\10\\include\10.0.26100.0\\cppwinrt" "-IC:\Program Files (x86)\Windows Kits\NETFXSDK\4.8\include\um" -c C:\a\d\hopper\flash_api.cpp /FoC:\a\d\hopper\build\temp.win-amd64-cpython-310\Release\flash_api.obj -O3 -std=c++17 -DTORCH_API_INCLUDE_EXTENSION_H -DTORCH_EXTENSION_NAME=flash_attn_3_cuda -D_GLIBCXX_USE_CXX11_ABI=0 /std:c++17
cl : Command line warning D9002 : ignoring unknown option '-O3'
cl : Command line warning D9002 : ignoring unknown option '-std=c++17'
C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v12.8\include\cusparse.h(434): warning C4996: 'csric02Info_t': The type will be removed in the next major release
C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v12.8\include\cusparse.h(438): warning C4996: 'csric02Info_t': The type will be removed in the next major release
C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v12.8\include\cusparse.h(442): warning C4996: 'bsric02Info_t': The type will be removed in the next major release
C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v12.8\include\cusparse.h(446): warning C4996: 'bsric02Info_t': The type will be removed in the next major release
C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v12.8\include\cusparse.h(450): warning C4996: 'csrilu02Info_t': The type will be removed in the next major release
C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v12.8\include\cusparse.h(454): warning C4996: 'csrilu02Info_t': The type will be removed in the next major release
C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v12.8\include\cusparse.h(458): warning C4996: 'bsrilu02Info_t': The type will be removed in the next major release
C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v12.8\include\cusparse.h(462): warning C4996: 'bsrilu02Info_t': The type will be removed in the next major release
C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v12.8\include\cusparse.h(466): warning C4996: 'bsrsv2Info_t': The type will be removed in the next major release
C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v12.8\include\cusparse.h(470): warning C4996: 'bsrsv2Info_t': The type will be removed in the next major release
C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v12.8\include\cusparse.h(474): warning C4996: 'bsrsm2Info_t': The type will be removed in the next major release
C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v12.8\include\cusparse.h(478): warning C4996: 'bsrsm2Info_t': The type will be removed in the next major release
C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v12.8\include\cusparse.h(482): warning C4996: 'csru2csrInfo_t': The type will be removed in the next major release
C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v12.8\include\cusparse.h(486): warning C4996: 'csru2csrInfo_t': The type will be removed in the next major release
C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v12.8\include\cusparse.h(490): warning C4996: 'cusparseColorInfo_t': The type will be removed in the next major release
C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v12.8\include\cusparse.h(494): warning C4996: 'cusparseColorInfo_t': The type will be removed in the next major release
C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v12.8\include\cusparse.h(498): warning C4996: 'pruneInfo_t': The type will be removed in the next major release
C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v12.8\include\cusparse.h(502): warning C4996: 'pruneInfo_t': The type will be removed in the next major release
C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v12.8\include\cusparse.h(767): warning C4996: 'bsrsv2Info_t': The type will be removed in the next major release
C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v12.8\include\cusparse.h(782): warning C4996: 'bsrsv2Info_t': The type will be removed in the next major release
C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v12.8\include\cusparse.h(797): warning C4996: 'bsrsv2Info_t': The type will be removed in the next major release
C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v12.8\include\cusparse.h(812): warning C4996: 'bsrsv2Info_t': The type will be removed in the next major release
C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v12.8\include\cusparse.h(827): warning C4996: 'bsrsv2Info_t': The type will be removed in the next major release
C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v12.8\include\cusparse.h(842): warning C4996: 'bsrsv2Info_t': The type will be removed in the next major release
C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v12.8\include\cusparse.h(857): warning C4996: 'bsrsv2Info_t': The type will be removed in the next major release
C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v12.8\include\cusparse.h(872): warning C4996: 'bsrsv2Info_t': The type will be removed in the next major release
C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v12.8\include\cusparse.h(887): warning C4996: 'bsrsv2Info_t': The type will be removed in the next major release
C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v12.8\include\cusparse.h(902): warning C4996: 'bsrsv2Info_t': The type will be removed in the next major release
C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v12.8\include\cusparse.h(903): warning C4996: 'cusparseSolvePolicy_t': The type will be removed in the next major release
C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v12.8\include\cusparse.h(918): warning C4996: 'bsrsv2Info_t': The type will be removed in the next major release
C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v12.8\include\cusparse.h(919): warning C4996: 'cusparseSolvePolicy_t': The type will be removed in the next major release
C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v12.8\include\cusparse.h(934): warning C4996: 'bsrsv2Info_t': The type will be removed in the next major release
C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v12.8\include\cusparse.h(935): warning C4996: 'cusparseSolvePolicy_t': The type will be removed in the next major release
C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v12.8\include\cusparse.h(950): warning C4996: 'bsrsv2Info_t': The type will be removed in the next major release
C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v12.8\include\cusparse.h(951): warning C4996: 'cusparseSolvePolicy_t': The type will be removed in the next major release
C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v12.8\include\cusparse.h(967): warning C4996: 'bsrsv2Info_t': The type will be removed in the next major release
C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v12.8\include\cusparse.h(970): warning C4996: 'cusparseSolvePolicy_t': The type will be removed in the next major release
C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v12.8\include\cusparse.h(986): warning C4996: 'bsrsv2Info_t': The type will be removed in the next major release
C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v12.8\include\cusparse.h(989): warning C4996: 'cusparseSolvePolicy_t': The type will be removed in the next major release
C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v12.8\include\cusparse.h(1005): warning C4996: 'bsrsv2Info_t': The type will be removed in the next major release
C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v12.8\include\cusparse.h(1008): warning C4996: 'cusparseSolvePolicy_t': The type will be removed in the next major release
C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v12.8\include\cusparse.h(1024): warning C4996: 'bsrsv2Info_t': The type will be removed in the next major release
C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v12.8\include\cusparse.h(1027): warning C4996: 'cusparseSolvePolicy_t': The type will be removed in the next major release
C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v12.8\include\cusparse.h(1125): warning C4996: 'bsrsm2Info_t': The type will be removed in the next major release
C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v12.8\include\cusparse.h(1142): warning C4996: 'bsrsm2Info_t': The type will be removed in the next major release
C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v12.8\include\cusparse.h(1159): warning C4996: 'bsrsm2Info_t': The type will be removed in the next major release
C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v12.8\include\cusparse.h(1176): warning C4996: 'bsrsm2Info_t': The type will be removed in the next major release
C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v12.8\include\cusparse.h(1193): warning C4996: 'bsrsm2Info_t': The type will be removed in the next major release
C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v12.8\include\cusparse.h(1210): warning C4996: 'bsrsm2Info_t': The type will be removed in the next major release
C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v12.8\include\cusparse.h(1227): warning C4996: 'bsrsm2Info_t': The type will be removed in the next major release
C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v12.8\include\cusparse.h(1244): warning C4996: 'bsrsm2Info_t': The type will be removed in the next major release
C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v12.8\include\cusparse.h(1261): warning C4996: 'bsrsm2Info_t': The type will be removed in the next major release
C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v12.8\include\cusparse.h(1278): warning C4996: 'bsrsm2Info_t': The type will be removed in the next major release
C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v12.8\include\cusparse.h(1279): warning C4996: 'cusparseSolvePolicy_t': The type will be removed in the next major release
C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v12.8\include\cusparse.h(1296): warning C4996: 'bsrsm2Info_t': The type will be removed in the next major release
C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v12.8\include\cusparse.h(1297): warning C4996: 'cusparseSolvePolicy_t': The type will be removed in the next major release
C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v12.8\include\cusparse.h(1314): warning C4996: 'bsrsm2Info_t': The type will be removed in the next major release
C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v12.8\include\cusparse.h(1315): warning C4996: 'cusparseSolvePolicy_t': The type will be removed in the next major release
C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v12.8\include\cusparse.h(1332): warning C4996: 'bsrsm2Info_t': The type will be removed in the next major release
C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v12.8\include\cusparse.h(1333): warning C4996: 'cusparseSolvePolicy_t': The type will be removed in the next major release
C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v12.8\include\cusparse.h(1351): warning C4996: 'bsrsm2Info_t': The type will be removed in the next major release
C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v12.8\include\cusparse.h(1356): warning C4996: 'cusparseSolvePolicy_t': The type will be removed in the next major release
C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v12.8\include\cusparse.h(1374): warning C4996: 'bsrsm2Info_t': The type will be removed in the next major release
C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v12.8\include\cusparse.h(1379): warning C4996: 'cusparseSolvePolicy_t': The type will be removed in the next major release
C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v12.8\include\cusparse.h(1397): warning C4996: 'bsrsm2Info_t': The type will be removed in the next major release
C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v12.8\include\cusparse.h(1402): warning C4996: 'cusparseSolvePolicy_t': The type will be removed in the next major release
C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v12.8\include\cusparse.h(1420): warning C4996: 'bsrsm2Info_t': The type will be removed in the next major release
C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v12.8\include\cusparse.h(1425): warning C4996: 'cusparseSolvePolicy_t': The type will be removed in the next major release
C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v12.8\include\cusparse.h(1435): warning C4996: 'csrilu02Info_t': The type will be removed in the next major release
C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v12.8\include\cusparse.h(1443): warning C4996: 'csrilu02Info_t': The type will be removed in the next major release
C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v12.8\include\cusparse.h(1451): warning C4996: 'csrilu02Info_t': The type will be removed in the next major release
C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v12.8\include\cusparse.h(1459): warning C4996: 'csrilu02Info_t': The type will be removed in the next major release
C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v12.8\include\cusparse.h(1467): warning C4996: 'csrilu02Info_t': The type will be removed in the next major release
C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v12.8\include\cusparse.h(1479): warning C4996: 'csrilu02Info_t': The type will be removed in the next major release
C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v12.8\include\cusparse.h(1491): warning C4996: 'csrilu02Info_t': The type will be removed in the next major release
C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v12.8\include\cusparse.h(1503): warning C4996: 'csrilu02Info_t': The type will be removed in the next major release
C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v12.8\include\cusparse.h(1515): warning C4996: 'csrilu02Info_t': The type will be removed in the next major release
C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v12.8\include\cusparse.h(1527): warning C4996: 'csrilu02Info_t': The type will be removed in the next major release
C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v12.8\include\cusparse.h(1539): warning C4996: 'csrilu02Info_t': The type will be removed in the next major release
C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v12.8\include\cusparse.h(1551): warning C4996: 'csrilu02Info_t': The type will be removed in the next major release
C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v12.8\include\cusparse.h(1563): warning C4996: 'csrilu02Info_t': The type will be removed in the next major release
C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v12.8\include\cusparse.h(1575): warning C4996: 'csrilu02Info_t': The type will be removed in the next major release
C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v12.8\include\cusparse.h(1576): warning C4996: 'cusparseSolvePolicy_t': The type will be removed in the next major release
C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v12.8\include\cusparse.h(1588): warning C4996: 'csrilu02Info_t': The type will be removed in the next major release
C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v12.8\include\cusparse.h(1589): warning C4996: 'cusparseSolvePolicy_t': The type will be removed in the next major release
C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v12.8\include\cusparse.h(1601): warning C4996: 'csrilu02Info_t': The type will be removed in the next major release
C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v12.8\include\cusparse.h(1602): warning C4996: 'cusparseSolvePolicy_t': The type will be removed in the next major release
C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v12.8\include\cusparse.h(1614): warning C4996: 'csrilu02Info_t': The type will be removed in the next major release
C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v12.8\include\cusparse.h(1615): warning C4996: 'cusparseSolvePolicy_t': The type will be removed in the next major release
C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v12.8\include\cusparse.h(1627): warning C4996: 'csrilu02Info_t': The type will be removed in the next major release
C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v12.8\include\cusparse.h(1628): warning C4996: 'cusparseSolvePolicy_t': The type will be removed in the next major release
C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v12.8\include\cusparse.h(1640): warning C4996: 'csrilu02Info_t': The type will be removed in the next major release
C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v12.8\include\cusparse.h(1641): warning C4996: 'cusparseSolvePolicy_t': The type will be removed in the next major release
C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v12.8\include\cusparse.h(1653): warning C4996: 'csrilu02Info_t': The type will be removed in the next major release
C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v12.8\include\cusparse.h(1654): warning C4996: 'cusparseSolvePolicy_t': The type will be removed in the next major release
C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v12.8\include\cusparse.h(1666): warning C4996: 'csrilu02Info_t': The type will be removed in the next major release
C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v12.8\include\cusparse.h(1667): warning C4996: 'cusparseSolvePolicy_t': The type will be removed in the next major release
C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v12.8\include\cusparse.h(1673): warning C4996: 'bsrilu02Info_t': The type will be removed in the next major release
C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v12.8\include\cusparse.h(1681): warning C4996: 'bsrilu02Info_t': The type will be removed in the next major release
C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v12.8\include\cusparse.h(1689): warning C4996: 'bsrilu02Info_t': The type will be removed in the next major release
C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v12.8\include\cusparse.h(1697): warning C4996: 'bsrilu02Info_t': The type will be removed in the next major release
C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v12.8\include\cusparse.h(1705): warning C4996: 'bsrilu02Info_t': The type will be removed in the next major release
C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v12.8\include\cusparse.h(1719): warning C4996: 'bsrilu02Info_t': The type will be removed in the next major release
C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v12.8\include\cusparse.h(1733): warning C4996: 'bsrilu02Info_t': The type will be removed in the next major release
C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v12.8\include\cusparse.h(1747): warning C4996: 'bsrilu02Info_t': The type will be removed in the next major release
C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v12.8\include\cusparse.h(1761): warning C4996: 'bsrilu02Info_t': The type will be removed in the next major release
C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v12.8\include\cusparse.h(1775): warning C4996: 'bsrilu02Info_t': The type will be removed in the next major release
C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v12.8\include\cusparse.h(1789): warning C4996: 'bsrilu02Info_t': The type will be removed in the next major release
C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v12.8\include\cusparse.h(1803): warning C4996: 'bsrilu02Info_t': The type will be removed in the next major release
C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v12.8\include\cusparse.h(1817): warning C4996: 'bsrilu02Info_t': The type will be removed in the next major release
C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v12.8\include\cusparse.h(1831): warning C4996: 'bsrilu02Info_t': The type will be removed in the next major release
C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v12.8\include\cusparse.h(1832): warning C4996: 'cusparseSolvePolicy_t': The type will be removed in the next major release
C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v12.8\include\cusparse.h(1846): warning C4996: 'bsrilu02Info_t': The type will be removed in the next major release
C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v12.8\include\cusparse.h(1847): warning C4996: 'cusparseSolvePolicy_t': The type will be removed in the next major release
C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v12.8\include\cusparse.h(1861): warning C4996: 'bsrilu02Info_t': The type will be removed in the next major release
C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v12.8\include\cusparse.h(1862): warning C4996: 'cusparseSolvePolicy_t': The type will be removed in the next major release
C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v12.8\include\cusparse.h(1876): warning C4996: 'bsrilu02Info_t': The type will be removed in the next major release
C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v12.8\include\cusparse.h(1877): warning C4996: 'cusparseSolvePolicy_t': The type will be removed in the next major release
C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v12.8\include\cusparse.h(1891): warning C4996: 'bsrilu02Info_t': The type will be removed in the next major release
C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v12.8\include\cusparse.h(1892): warning C4996: 'cusparseSolvePolicy_t': The type will be removed in the next major release
C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v12.8\include\cusparse.h(1906): warning C4996: 'bsrilu02Info_t': The type will be removed in the next major release
C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v12.8\include\cusparse.h(1907): warning C4996: 'cusparseSolvePolicy_t': The type will be removed in the next major release
C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v12.8\include\cusparse.h(1921): warning C4996: 'bsrilu02Info_t': The type will be removed in the next major release
C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v12.8\include\cusparse.h(1922): warning C4996: 'cusparseSolvePolicy_t': The type will be removed in the next major release
C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v12.8\include\cusparse.h(1936): warning C4996: 'bsrilu02Info_t': The type will be removed in the next major release
C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v12.8\include\cusparse.h(1937): warning C4996: 'cusparseSolvePolicy_t': The type will be removed in the next major release
C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v12.8\include\cusparse.h(1943): warning C4996: 'csric02Info_t': The type will be removed in the next major release
C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v12.8\include\cusparse.h(1955): warning C4996: 'csric02Info_t': The type will be removed in the next major release
C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v12.8\include\cusparse.h(1967): warning C4996: 'csric02Info_t': The type will be removed in the next major release
C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v12.8\include\cusparse.h(1979): warning C4996: 'csric02Info_t': The type will be removed in the next major release
C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v12.8\include\cusparse.h(1991): warning C4996: 'csric02Info_t': The type will be removed in the next major release
C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v12.8\include\cusparse.h(2003): warning C4996: 'csric02Info_t': The type will be removed in the next major release
C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v12.8\include\cusparse.h(2015): warning C4996: 'csric02Info_t': The type will be removed in the next major release
C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v12.8\include\cusparse.h(2027): warning C4996: 'csric02Info_t': The type will be removed in the next major release
C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v12.8\include\cusparse.h(2039): warning C4996: 'csric02Info_t': The type will be removed in the next major release
C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v12.8\include\cusparse.h(2051): warning C4996: 'csric02Info_t': The type will be removed in the next major release
C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v12.8\include\cusparse.h(2052): warning C4996: 'cusparseSolvePolicy_t': The type will be removed in the next major release
C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v12.8\include\cusparse.h(2064): warning C4996: 'csric02Info_t': The type will be removed in the next major release
C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v12.8\include\cusparse.h(2065): warning C4996: 'cusparseSolvePolicy_t': The type will be removed in the next major release
C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v12.8\include\cusparse.h(2077): warning C4996: 'csric02Info_t': The type will be removed in the next major release
C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v12.8\include\cusparse.h(2078): warning C4996: 'cusparseSolvePolicy_t': The type will be removed in the next major release
C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v12.8\include\cusparse.h(2090): warning C4996: 'csric02Info_t': The type will be removed in the next major release
C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v12.8\include\cusparse.h(2091): warning C4996: 'cusparseSolvePolicy_t': The type will be removed in the next major release
C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v12.8\include\cusparse.h(2103): warning C4996: 'csric02Info_t': The type will be removed in the next major release
C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v12.8\include\cusparse.h(2104): warning C4996: 'cusparseSolvePolicy_t': The type will be removed in the next major release
C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v12.8\include\cusparse.h(2116): warning C4996: 'csric02Info_t': The type will be removed in the next major release
C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v12.8\include\cusparse.h(2117): warning C4996: 'cusparseSolvePolicy_t': The type will be removed in the next major release
C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v12.8\include\cusparse.h(2129): warning C4996: 'csric02Info_t': The type will be removed in the next major release
C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v12.8\include\cusparse.h(2130): warning C4996: 'cusparseSolvePolicy_t': The type will be removed in the next major release
C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v12.8\include\cusparse.h(2142): warning C4996: 'csric02Info_t': The type will be removed in the next major release
C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v12.8\include\cusparse.h(2143): warning C4996: 'cusparseSolvePolicy_t': The type will be removed in the next major release
C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v12.8\include\cusparse.h(2149): warning C4996: 'bsric02Info_t': The type will be removed in the next major release
C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v12.8\include\cusparse.h(2163): warning C4996: 'bsric02Info_t': The type will be removed in the next major release
C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v12.8\include\cusparse.h(2177): warning C4996: 'bsric02Info_t': The type will be removed in the next major release
C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v12.8\include\cusparse.h(2191): warning C4996: 'bsric02Info_t': The type will be removed in the next major release
C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v12.8\include\cusparse.h(2205): warning C4996: 'bsric02Info_t': The type will be removed in the next major release
C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v12.8\include\cusparse.h(2219): warning C4996: 'bsric02Info_t': The type will be removed in the next major release
C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v12.8\include\cusparse.h(2233): warning C4996: 'bsric02Info_t': The type will be removed in the next major release
C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v12.8\include\cusparse.h(2247): warning C4996: 'bsric02Info_t': The type will be removed in the next major release
C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v12.8\include\cusparse.h(2261): warning C4996: 'bsric02Info_t': The type will be removed in the next major release
C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v12.8\include\cusparse.h(2275): warning C4996: 'bsric02Info_t': The type will be removed in the next major release
C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v12.8\include\cusparse.h(2276): warning C4996: 'cusparseSolvePolicy_t': The type will be removed in the next major release
C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v12.8\include\cusparse.h(2290): warning C4996: 'bsric02Info_t': The type will be removed in the next major release
C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v12.8\include\cusparse.h(2291): warning C4996: 'cusparseSolvePolicy_t': The type will be removed in the next major release
C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v12.8\include\cusparse.h(2305): warning C4996: 'bsric02Info_t': The type will be removed in the next major release
C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v12.8\include\cusparse.h(2306): warning C4996: 'cusparseSolvePolicy_t': The type will be removed in the next major release
C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v12.8\include\cusparse.h(2320): warning C4996: 'bsric02Info_t': The type will be removed in the next major release
C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v12.8\include\cusparse.h(2321): warning C4996: 'cusparseSolvePolicy_t': The type will be removed in the next major release
C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v12.8\include\cusparse.h(2335): warning C4996: 'bsric02Info_t': The type will be removed in the next major release
C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v12.8\include\cusparse.h(2336): warning C4996: 'cusparseSolvePolicy_t': The type will be removed in the next major release
C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v12.8\include\cusparse.h(2350): warning C4996: 'bsric02Info_t': The type will be removed in the next major release
C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v12.8\include\cusparse.h(2351): warning C4996: 'cusparseSolvePolicy_t': The type will be removed in the next major release
C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v12.8\include\cusparse.h(2366): warning C4996: 'bsric02Info_t': The type will be removed in the next major release
C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v12.8\include\cusparse.h(2367): warning C4996: 'cusparseSolvePolicy_t': The type will be removed in the next major release
C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v12.8\include\cusparse.h(2381): warning C4996: 'bsric02Info_t': The type will be removed in the next major release
C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v12.8\include\cusparse.h(2382): warning C4996: 'cusparseSolvePolicy_t': The type will be removed in the next major release
C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v12.8\include\cusparse.h(3055): warning C4996: 'cusparseColorInfo_t': The type will be removed in the next major release
C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v12.8\include\cusparse.h(3070): warning C4996: 'cusparseColorInfo_t': The type will be removed in the next major release
C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v12.8\include\cusparse.h(3085): warning C4996: 'cusparseColorInfo_t': The type will be removed in the next major release
C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v12.8\include\cusparse.h(3100): warning C4996: 'cusparseColorInfo_t': The type will be removed in the next major release
C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v12.8\include\cusparse.h(4181): warning C4996: 'csru2csrInfo_t': The type will be removed in the next major release
C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v12.8\include\cusparse.h(4193): warning C4996: 'csru2csrInfo_t': The type will be removed in the next major release
C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v12.8\include\cusparse.h(4205): warning C4996: 'csru2csrInfo_t': The type will be removed in the next major release
C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v12.8\include\cusparse.h(4217): warning C4996: 'csru2csrInfo_t': The type will be removed in the next major release
C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v12.8\include\cusparse.h(4230): warning C4996: 'csru2csrInfo_t': The type will be removed in the next major release
C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v12.8\include\cusparse.h(4243): warning C4996: 'csru2csrInfo_t': The type will be removed in the next major release
C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v12.8\include\cusparse.h(4256): warning C4996: 'csru2csrInfo_t': The type will be removed in the next major release
C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v12.8\include\cusparse.h(4269): warning C4996: 'csru2csrInfo_t': The type will be removed in the next major release
C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v12.8\include\cusparse.h(4282): warning C4996: 'csru2csrInfo_t': The type will be removed in the next major release
C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v12.8\include\cusparse.h(4295): warning C4996: 'csru2csrInfo_t': The type will be removed in the next major release
C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v12.8\include\cusparse.h(4308): warning C4996: 'csru2csrInfo_t': The type will be removed in the next major release
C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v12.8\include\cusparse.h(4321): warning C4996: 'csru2csrInfo_t': The type will be removed in the next major release
C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v12.8\include\cusparse.h(4630): warning C4996: 'pruneInfo_t': The type will be removed in the next major release
C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v12.8\include\cusparse.h(4648): warning C4996: 'pruneInfo_t': The type will be removed in the next major release
C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v12.8\include\cusparse.h(4664): warning C4996: 'pruneInfo_t': The type will be removed in the next major release
C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v12.8\include\cusparse.h(4681): warning C4996: 'pruneInfo_t': The type will be removed in the next major release
C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v12.8\include\cusparse.h(4697): warning C4996: 'pruneInfo_t': The type will be removed in the next major release
C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v12.8\include\cusparse.h(4712): warning C4996: 'pruneInfo_t': The type will be removed in the next major release
C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v12.8\include\cusparse.h(4729): warning C4996: 'pruneInfo_t': The type will be removed in the next major release
C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v12.8\include\cusparse.h(4746): warning C4996: 'pruneInfo_t': The type will be removed in the next major release
C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v12.8\include\cusparse.h(4761): warning C4996: 'pruneInfo_t': The type will be removed in the next major release
C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v12.8\include\cusparse.h(4782): warning C4996: 'pruneInfo_t': The type will be removed in the next major release
C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v12.8\include\cusparse.h(4803): warning C4996: 'pruneInfo_t': The type will be removed in the next major release
C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v12.8\include\cusparse.h(4822): warning C4996: 'pruneInfo_t': The type will be removed in the next major release
C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v12.8\include\cusparse.h(4842): warning C4996: 'pruneInfo_t': The type will be removed in the next major release
C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v12.8\include\cusparse.h(4862): warning C4996: 'pruneInfo_t': The type will be removed in the next major release
C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v12.8\include\cusparse.h(4880): warning C4996: 'pruneInfo_t': The type will be removed in the next major release
C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v12.8\include\cusparse.h(4900): warning C4996: 'pruneInfo_t': The type will be removed in the next major release
C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v12.8\include\cusparse.h(4920): warning C4996: 'pruneInfo_t': The type will be removed in the next major release
C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v12.8\include\cusparse.h(4938): warning C4996: 'pruneInfo_t': The type will be removed in the next major release
C:\a\d\hopper\flash_api.cpp(264): error C2121: '#': invalid character: possibly the result of a macro expansion
C:\a\d\hopper\flash_api.cpp(366): error C3409: empty att
```
cc @malfet @seemethere
| true
|
2,896,838,470
|
F.scaled_dot_product_attention calculation output is nan when in dynamic dim under torch.compile mode
|
HiIcy
|
open
|
[
"triaged",
"oncall: pt2",
"module: sdpa"
] | 0
|
NONE
|
### 🐛 Describe the bug
I have the following piece of code. When executing the scaled_dot_product_attention calculation under torch.compile, if the input shapes of two consecutive cases are different, the second case outputs NaN values. There is no such issue in eager mode. Any help or analysis would be appreciated, thank you.
```
import torch
import math
import torch._dynamo
#from flash_attn import flash_attn_func
import torch.nn.functional as F
from torch._dynamo.testing import rand_strided
#torch._inductor.config.debug = True
#torch._dynamo.config.cache_size_limit=1
#torch._dynamo.config.replay_record_enabled = True
#from torch.nn.attention import sdpa_kernel, SDPBackend
#torch._dynamo.config.suppress_errors = True

#@torch.compile(backend="aot_eager")
#@torch.compile
@torch.compile(backend="aot_eager")
def pytorch_func(q, k, v, causal=True):
    o = torch.nn.functional.scaled_dot_product_attention(q, k, v)
    return o

def init_seeds(seed=0):
    import torch.backends.cudnn as cudnn
    import numpy as np
    import random
    random.seed(seed)
    np.random.seed(seed)
    torch.manual_seed(seed)
    torch.cuda.manual_seed(seed)
    cudnn.benchmark, cudnn.deterministic = (False, True)

init_seeds(62)
bsz = 16
sql = 1
nh = 24
hd = 128
ie = [
    [8, 900, 20, 64],
    [1, 1536, 24, 128],
    [3, 25, 46, 128],
    [2, 900, 20, 64],
    [2, 900, 20, 1],
    [1, 24, 1536, 128],
    [4, 2, 16, 32],
    #[4, 2, 16, 32],
    #[2,8,4096,40]
]
for bsz, nh, sql, hd in ie[-2:]:
    dtype = torch.float16
    print(f"shape:(bsz, sql, nh, hd) ({bsz}, {sql}, {nh}, {hd}), dtype: {dtype}, causal: false")
    print(f"shape:(bsz, nh, sql, hd) ({bsz}, {nh}, {sql}, {hd}), dtype: {dtype}, causal: false")
    q = torch.randn((bsz, nh, sql, hd)).to("cuda", dtype)
    print("q: ", q.shape, " ", q.stride())
    print(q.is_contiguous())
    q.requires_grad_()
    k = torch.rand_like(q)
    print(k.is_contiguous())
    k.requires_grad_()
    v = torch.rand_like(q)
    print(v.is_contiguous())
    v.requires_grad_()
    o = pytorch_func(q, k, v)
    #print(o.stride())
    print(q)
    print(".......")
    print(o)
    import pdb
    s = o.sum()
    #s.backward()
    print("size: ", o.shape, " ", o.stride())
    #print(q.grad)
    #pdb.set_trace()
print("bye")
```
output:
```
[nan, nan, nan, ..., nan, nan, nan]],
[[nan, nan, nan, ..., nan, nan, nan],
[nan, nan, nan, ..., nan, nan, nan],
[nan, nan, nan, ..., nan, nan, nan],
...,
[nan, nan, nan, ..., nan, nan, nan],
[nan, nan, nan, ..., nan, nan, nan],
[nan, nan, nan, ..., nan, nan, nan]]],
[[[nan, nan, nan, ..., nan, nan, nan],
[nan, nan, nan, ..., nan, nan, nan],
[nan, nan, nan, ..., nan, nan, nan],
...,
[nan, nan, nan, ..., nan, nan, nan],
[nan, nan, nan, ..., nan, nan, nan],
[nan, nan, nan, ..., nan, nan, nan]],
[[nan, nan, nan, ..., nan, nan, nan],
[nan, nan, nan, ..., nan, nan, nan],
[nan, nan, nan, ..., nan, nan, nan],
...,
[nan, nan, nan, ..., nan, nan, nan],
[nan, nan, nan, ..., nan, nan, nan],
[nan, nan, nan, ..., nan, nan, nan]]]], device='cuda:0',
dtype=torch.float16, grad_fn=<CompiledFunctionBackward>)
```
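As a workaround sketch of my own (not part of the original report), you can check whether the dynamic-shape path is at fault by disabling dynamic shapes or by resetting dynamo between differently shaped calls:
```python
import torch
import torch._dynamo

# Option 1: dynamic=False makes each new input shape trigger a separate
# static compilation instead of reusing a dynamic-dim kernel.
compiled_static = torch.compile(
    torch.nn.functional.scaled_dot_product_attention, dynamic=False
)

# Option 2: reset dynamo between differently shaped calls so no guards or
# compiled kernels are carried over from the previous shape.
torch._dynamo.reset()
```
If the NaNs disappear with either option, the problem is isolated to the dynamic-dim compilation path rather than SDPA itself.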
### Versions
env info:
```
PyTorch version: 2.4.0a0+07cecf4168.nv24.05
Is debug build: False
CUDA used to build PyTorch: 12.4
ROCM used to build PyTorch: N/A
OS: Ubuntu 22.04.4 LTS (x86_64)
GCC version: (Ubuntu 11.4.0-1ubuntu1~22.04) 11.4.0
Clang version: Could not collect
CMake version: version 3.29.2
Libc version: glibc-2.35
Python version: 3.10.12 (main, Nov 20 2023, 15:14:05) [GCC 11.4.0] (64-bit runtime)
Python platform: Linux-4.18.0-513.24.1.el8_9.x86_64-x86_64-with-glibc2.35
Is CUDA available: True
CUDA runtime version: 12.4.131
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration:
GPU 0: NVIDIA A800 80GB PCIe
GPU 1: NVIDIA A800 80GB PCIe
GPU 2: NVIDIA A800 80GB PCIe
GPU 3: NVIDIA A800 80GB PCIe
GPU 4: NVIDIA A800 80GB PCIe
GPU 5: NVIDIA A800 80GB PCIe
Nvidia driver version: 535.54.03
cuDNN version: Probably one of the following:
/usr/lib/x86_64-linux-gnu/libcudnn.so.9.1.0
/usr/lib/x86_64-linux-gnu/libcudnn_adv.so.9.1.0
/usr/lib/x86_64-linux-gnu/libcudnn_cnn.so.9.1.0
/usr/lib/x86_64-linux-gnu/libcudnn_engines_precompiled.so.9.1.0
/usr/lib/x86_64-linux-gnu/libcudnn_engines_runtime_compiled.so.9.1.0
/usr/lib/x86_64-linux-gnu/libcudnn_graph.so.9.1.0
/usr/lib/x86_64-linux-gnu/libcudnn_heuristic.so.9.1.0
/usr/lib/x86_64-linux-gnu/libcudnn_ops.so.9.1.0
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 52 bits physical, 57 bits virtual
Byte Order: Little Endian
CPU(s): 176
On-line CPU(s) list: 0-175
Vendor ID: GenuineIntel
BIOS Vendor ID: Intel(R) Corporation
Model name: Intel(R) Xeon(R) Platinum 8458P
BIOS Model name: Intel(R) Xeon(R) Platinum 8458P
CPU family: 6
Model: 143
Thread(s) per core: 2
Core(s) per socket: 44
Socket(s): 2
Stepping: 8
CPU max MHz: 3800.0000
CPU min MHz: 800.0000
BogoMIPS: 5400.00
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc art arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc cpuid aperfmperf tsc_known_freq pni pclmulqdq dtes64 ds_cpl vmx smx est tm2 ssse3 sdbg fma cx16 xtpr pdcm pcid dca sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm 3dnowprefetch cpuid_fault epb cat_l3 cat_l2 cdp_l3 invpcid_single intel_ppin cdp_l2 ssbd mba ibrs ibpb stibp ibrs_enhanced tpr_shadow vnmi flexpriority ept vpid ept_ad fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid cqm rdt_a avx512f avx512dq rdseed adx smap avx512ifma clflushopt clwb intel_pt avx512cd sha_ni avx512bw avx512vl xsaveopt xsavec xgetbv1 xsaves cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local split_lock_detect avx_vnni avx512_bf16 wbnoinvd dtherm ida arat pln pts hfi avx512vbmi umip pku ospke waitpkg avx512_vbmi2 gfni vaes vpclmulqdq avx512_vnni avx512_bitalg tme avx512_vpopcntdq la57 rdpid bus_lock_detect cldemote movdiri movdir64b enqcmd fsrm md_clear serialize tsxldtrk pconfig arch_lbr amx_bf16 avx512_fp16 amx_tile amx_int8 flush_l1d arch_capabilities
Virtualization: VT-x
L1d cache: 4.1 MiB (88 instances)
L1i cache: 2.8 MiB (88 instances)
L2 cache: 176 MiB (88 instances)
L3 cache: 165 MiB (2 instances)
NUMA node(s): 2
NUMA node0 CPU(s): 0-43,88-131
NUMA node1 CPU(s): 44-87,132-175
Vulnerability Gather data sampling: Not affected
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Not affected
Vulnerability Retbleed: Not affected
Vulnerability Spec rstack overflow: Not affected
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Enhanced / Automatic IBRS, IBPB conditional, RSB filling, PBRSB-eIBRS SW sequence
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Versions of relevant libraries:
[pip3] numpy==1.24.4
[pip3] onnx==1.16.0
[pip3] optree==0.11.0
[pip3] pytorch-quantization==2.1.2
[pip3] pytorch-triton==3.0.0+989adb9a2
[pip3] torch==2.4.0a0+07cecf4168.nv24.5
[pip3] torch-tensorrt==2.4.0a0
[pip3] torchvision==0.19.0a0
[conda] Could not collect
```
cc @chauhang @penguinwu
| true
|
2,896,794,855
|
DISABLED test_inlined_functions_dynamic_shapes (__main__.DynamicShapesHigherOrderOpTests)
|
pytorch-bot[bot]
|
closed
|
[
"triaged",
"module: flaky-tests",
"skipped",
"oncall: pt2",
"module: dynamo"
] | 6
|
NONE
|
Platforms: asan, linux, rocm, mac, macos
This test was disabled because it is failing in CI. See [recent examples](https://hud.pytorch.org/flakytest?name=test_inlined_functions_dynamic_shapes&suite=DynamicShapesHigherOrderOpTests&limit=100) and the most recent trunk [workflow logs](https://github.com/pytorch/pytorch/runs/38209835007).
Over the past 3 hours, it has been determined flaky in 30 workflow(s) with 60 failures and 30 successes.
**Debugging instructions (after clicking on the recent samples link):**
DO NOT ASSUME THINGS ARE OKAY IF THE CI IS GREEN. We now shield flaky tests from developers so CI will thus be green but it will be harder to parse the logs.
To find relevant log snippets:
1. Click on the workflow logs linked above
2. Click on the Test step of the job so that it is expanded. Otherwise, the grepping will not work.
3. Grep for `test_inlined_functions_dynamic_shapes`
4. There should be several instances run (as flaky tests are rerun in CI) from which you can study the logs.
Test file path: `dynamo/test_dynamic_shapes.py`
cc @clee2000 @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @chauhang @amjames
| true
|
2,896,697,376
|
When using torch::jit::Module in a UE env, there is a structured exception caused by calling module.forward()
|
rjzhou06
|
open
|
[
"oncall: jit"
] | 0
|
NONE
|
### 🐛 Describe the bug
It only occurs in the UE environment; when I run the same case in a standalone C++ project, there is no error or exception.

```
// © 2023 Kaya Adrian.

#include "Layers/Network/AtumNeuralNetworkJit.h"

#include "FunctionLibraries/AtumLibraryTensors.h"
#include "torch/script.h"
#include "Tensors/IAtumTensor.h"

#include <excpt.h>
#include <sstream>

bool UAtumNeuralNetworkJit::LoadJitFile(FString jit)
{
    if (!this->m_networkJit.IsValid()) {
        this->m_load_from = TCHAR_TO_UTF8(*jit);
        torch::jit::Module module_ = torch::jit::load(this->m_load_from);
        this->m_networkJit = MakeShared<torch::jit::Module>(module_);
        this->m_networkJit->to(AtumEnums::Cast(IAtumTensor::GetDefaultDeviceType()));
        return true;
    }
    return false;
}

bool UAtumNeuralNetworkJit::ForwardTest()
{
    torch::Tensor input = torch::zeros({128});
    torch::jit::Stack stack;
    stack.push_back(input.to(c10::kFloat));

    c10::IValue ivalue;
    ivalue = this->m_networkJit->forward(stack);
    if (ivalue.isTensor()) {
        std::stringstream ss;
        ss << ivalue.toTensor();
        UE_LOG(LogTemp, Warning, TEXT("%s"), UTF8_TO_TCHAR(ss.str().c_str()));
        return true;
    }
    return false;
}
```
This exception cannot be caught by try-catch.
### Versions
UE ver: 5.4.2
libtorch ver: 2.5.1+cu124
cuda ver: 12.6
windows ver: 11
cc @EikanWang @jgong5 @wenzhe-nrv @sanchitintel
| true
|
2,896,694,432
|
Enable Direct Use of Arm Compute Library (ACL) in ATen
|
fadara01
|
closed
|
[
"module: cpu",
"triaged",
"open source",
"module: arm",
"topic: not user facing",
"ciflow/linux-aarch64",
"arm priority"
] | 7
|
COLLABORATOR
|
ACL is already built with PyTorch as a shared library when USE_MKLDNN_ACL is set.
Currently, it is only used indirectly in ATen via oneDNN for AArch64 targets. However, there are cases where it makes sense to utilize ACL directly without oneDNN as an intermediary, e.g. quantization. See #145942, #147337, #146620.
This patch enables such use cases by exposing ACL to ATen
Fixes #ISSUE_NUMBER
cc @jgong5 @mingfeima @XiaobingSuper @sanchitintel @ashokei @jingxu10 @malfet @snadampal @milpuz01
| true
|
2,896,667,516
|
[triton 3.3] inductor/test_torchinductor_opinfo.py::TestInductorOpInfoCUDA::test_comprehensive_floor_divide_cuda_float16
|
davidberard98
|
closed
|
[
"oncall: pt2"
] | 3
|
CONTRIBUTOR
|
### 🐛 Describe the bug
```
PYTORCH_TEST_WITH_INDUCTOR=1 python inductor/test_torchinductor_opinfo.py -k test_comprehensive_floor_divide_cuda_float16
```
From https://hud.pytorch.org/pr/pytorch/pytorch/148492#38210074518
### Versions
https://github.com/pytorch/pytorch/pull/148492
cc @chauhang @penguinwu
| true
|
2,896,663,736
|
[triton 3.3] inductor/test_aot_inductor.py::AOTInductorTestABICompatibleGpu::test_none_args_aot_codegen_cuda
|
davidberard98
|
closed
|
[
"oncall: pt2"
] | 3
|
CONTRIBUTOR
|
### 🐛 Describe the bug
from https://github.com/pytorch/pytorch/pull/148492
### Versions
```PYTORCH_TEST_WITH_INDUCTOR=1 python inductor/test_aot_inductor.py -k test_none_args_aot_codegen_cuda```
https://hud.pytorch.org/pr/pytorch/pytorch/148492#38210074518
cc @chauhang @penguinwu
| true
|
2,896,611,452
|
exported modules using custom autograd functions will ignore custom backward function
|
Gajoo
|
closed
|
[
"module: autograd",
"triaged",
"oncall: pt2",
"export-triaged",
"oncall: export"
] | 2
|
NONE
|
### 🐛 Describe the bug
I was experimenting with a gradient function that is not mathematically accurate, and for a while I thought it was working well until I hit this issue. Simply speaking, if a model is exported, only the forward implementation of the custom function is considered, and the backward pass is derived automatically from the forward pass. This is different from the default behavior, where the backward pass goes through the custom backward implementation. Below is a reproduction of this bug.
```
import torch
class bad_func(torch.autograd.Function):
@staticmethod
def forward(ctx, x):
return x
@staticmethod
def backward(ctx, g):
return g * 0.5
class Model(torch.nn.Module):
def __init__(self):
super().__init__()
self.param = torch.nn.Parameter(torch.tensor(1, dtype=torch.float))
def forward(self, x):
return bad_func.apply(x * self.param)
m = Model()
t = torch.tensor([1.0, -1.0], dtype=torch.float)
def check_grad(model):
sm = model(t).square().sum()
print(sm)
sm.backward()
print(type(model), model.param.grad)
model.param.grad = None
check_grad(m)
m_c = torch.export.export(m, (t,))
check_grad(m_c.module())
```
This code prints the following output, which is unexpected:
```
tensor(2., grad_fn=<SumBackward0>)
<class '__main__.Model'> tensor(2.)
tensor(2., grad_fn=<SumBackward0>)
<class 'torch.fx.graph_module.GraphModule.__new__.<locals>.GraphModuleImpl'> tensor(4.)
```
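For reference, a minimal sketch of the arithmetic behind the two printed gradients (this is not part of the original report, only a worked check): eager mode applies the custom `backward`'s 0.5 factor and yields 2.0, while the exported graph differentiates the identity forward and yields 4.0.
```
import torch

t = torch.tensor([1.0, -1.0])
param = torch.tensor(1.0)

# loss = sum((x * param)^2); differentiating through the identity forward gives
# sum(2 * (x * param) * x) = 4.0, which is what the exported module reports.
upstream = 2.0 * (t * param)              # gradient of sum(y^2) w.r.t. y
autograd_grad = (upstream * t).sum()      # chain rule without the custom backward
custom_grad = (0.5 * upstream * t).sum()  # custom backward scales the gradient by 0.5
print(autograd_grad.item(), custom_grad.item())  # 4.0 2.0
```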
### Versions
Collecting environment information...
PyTorch version: 2.6.0+cu124
Is debug build: False
CUDA used to build PyTorch: 12.4
ROCM used to build PyTorch: N/A
OS: Ubuntu 24.04 LTS (x86_64)
GCC version: (Ubuntu 13.3.0-6ubuntu2~24.04) 13.3.0
Clang version: Could not collect
CMake version: Could not collect
Libc version: glibc-2.39
Python version: 3.12.3 (main, Feb 4 2025, 14:48:35) [GCC 13.3.0] (64-bit runtime)
Python platform: Linux-5.15.167.4-microsoft-standard-WSL2-x86_64-with-glibc2.39
Is CUDA available: True
CUDA runtime version: Could not collect
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration:
GPU 0: NVIDIA GeForce RTX 2080
GPU 1: NVIDIA GeForce RTX 4080 SUPER
Nvidia driver version: 560.94
cuDNN version: Could not collect
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 48 bits physical, 48 bits virtual
Byte Order: Little Endian
CPU(s): 24
On-line CPU(s) list: 0-23
Vendor ID: AuthenticAMD
Model name: AMD Ryzen Threadripper 2920X 12-Core Processor
CPU family: 23
Model: 8
Thread(s) per core: 2
Core(s) per socket: 12
Socket(s): 1
Stepping: 2
BogoMIPS: 7000.11
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ht syscall nx mmxext fxsr_opt pdpe1gb rdtscp lm constant_tsc rep_good nopl tsc_reliable nonstop_tsc cpuid extd_apicid pni pclmulqdq ssse3 fma cx16 sse4_1 sse4_2 movbe popcnt aes xsave avx f16c rdrand hypervisor lahf_lm cmp_legacy svm cr8_legacy abm sse4a misalignsse 3dnowprefetch osvw topoext perfctr_core ssbd ibpb vmmcall fsgsbase bmi1 avx2 smep bmi2 rdseed adx smap clflushopt sha_ni xsaveopt xsavec xgetbv1 clzero xsaveerptr virt_ssbd arat npt nrip_save tsc_scale vmcb_clean flushbyasid decodeassists pausefilter pfthreshold v_vmsave_vmload
Virtualization: AMD-V
Hypervisor vendor: Microsoft
Virtualization type: full
L1d cache: 384 KiB (12 instances)
L1i cache: 768 KiB (12 instances)
L2 cache: 6 MiB (12 instances)
L3 cache: 8 MiB (1 instance)
Vulnerability Gather data sampling: Not affected
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Not affected
Vulnerability Reg file data sampling: Not affected
Vulnerability Retbleed: Mitigation; untrained return thunk; SMT vulnerable
Vulnerability Spec rstack overflow: Mitigation; safe RET
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl and seccomp
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Retpolines; IBPB conditional; STIBP disabled; RSB filling; PBRSB-eIBRS Not affected; BHI Not affected
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Versions of relevant libraries:
[pip3] numpy==1.26.4
[pip3] nvidia-cublas-cu12==12.4.5.8
[pip3] nvidia-cuda-cupti-cu12==12.4.127
[pip3] nvidia-cuda-nvrtc-cu12==12.4.127
[pip3] nvidia-cuda-runtime-cu12==12.4.127
[pip3] nvidia-cudnn-cu12==9.1.0.70
[pip3] nvidia-cufft-cu12==11.2.1.3
[pip3] nvidia-curand-cu12==10.3.5.147
[pip3] nvidia-cusolver-cu12==11.6.1.9
[pip3] nvidia-cusparse-cu12==12.3.1.170
[pip3] nvidia-cusparselt-cu12==0.6.2
[pip3] nvidia-nccl-cu12==2.21.5
[pip3] nvidia-nvjitlink-cu12==12.4.127
[pip3] nvidia-nvtx-cu12==12.4.127
[pip3] torch==2.6.0+cu124
[pip3] torchviz==0.0.2
[pip3] triton==3.2.0
[conda] Could not collect
cc @ezyang @albanD @gqchen @pearu @nikitaved @soulitzer @Varal7 @xmfan @chauhang @penguinwu @avikchaudhuri @gmagogsfm @zhxchen17 @tugsbayasgalan @angelayi @suo @ydwu4
| true
|
2,896,572,096
|
[Break XPU] Add test/kernel.errors.txt to .gitignore.
|
etaf
|
closed
|
[
"open source",
"Merged",
"ciflow/trunk",
"topic: not user facing"
] | 3
|
COLLABORATOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #148323
* #147727
* __->__ #148538
* #148534
The Intel GPU user-mode driver may generate kernel.errors.txt files in the
current working directory in certain scenarios. These files include diagnostic
information but do not necessarily indicate an issue with the
application. This is a known issue and will be fixed in a newer version of the driver.
| true
|
2,896,557,998
|
[dtensor] add CuDNN SDPA op support to DTensor
|
XilunWu
|
closed
|
[
"oncall: distributed",
"Merged",
"ciflow/trunk",
"ciflow/inductor",
"release notes: distributed (dtensor)",
"module: context parallel",
"release notes: context parallel"
] | 6
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #148537
### Summary
This PR adds `_scaled_dot_product_cudnn_attention` and `_scaled_dot_product_cudnn_attention_backward` to DTensor ops
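A minimal sketch of how SDPA over DTensors might be exercised with the cuDNN backend; this is not taken from the PR or its tests, and it assumes an already-initialized process group (e.g. via torchrun), a CUDA build where cuDNN attention supports the chosen shapes/dtype, and illustrative tensor sizes.
```
import torch
import torch.distributed as dist
import torch.nn.functional as F
from torch.distributed.device_mesh import init_device_mesh
from torch.distributed.tensor import Shard, distribute_tensor
from torch.nn.attention import SDPBackend, sdpa_kernel

# One-dimensional mesh over all ranks; shard the batch dimension of q/k/v.
mesh = init_device_mesh("cuda", (dist.get_world_size(),))
q, k, v = (
    torch.randn(8, 16, 128, 64, device="cuda", dtype=torch.bfloat16)
    for _ in range(3)
)
dq, dk, dv = (distribute_tensor(x, mesh, [Shard(0)]) for x in (q, k, v))

# Force the cuDNN backend so the call routes through the op added in this PR.
with sdpa_kernel(SDPBackend.CUDNN_ATTENTION):
    out = F.scaled_dot_product_attention(dq, dk, dv)
print(out.shape)  # global shape matches q: (8, 16, 128, 64)
```
Since the backward op is also registered, autograd through this call is supported on DTensors as well.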
### Test
`pytest test/distributed/tensor/test_attention.py`
cc @H-Huang @awgu @kwen2501 @wanchaol @fegin @fduwjj @wz337 @wconstab @d4l3k @c-p-i-o
| true
|
2,896,537,403
|
Computing only the n first rows of a distance matrix with pdist
|
cyrilmory
|
open
|
[
"triaged",
"enhancement",
"module: python frontend"
] | 1
|
NONE
|
### 🚀 The feature, motivation and pitch
Would it be possible to add an argument to pdist so that it computes and returns only the first n rows of the upper-triangular distance matrix?
This need arises when trying to determine whether the points of a set are within a certain distance D of each other. If the dataset is too large, one approach is to split it into chunks (e.g. the cells of a rectangular grid), then, for each cell, compute the distance matrix between the points in that cell and the points in the cells within distance D (including the current cell); a sketch of this chunked computation is shown below.
This can currently be achieved in two ways:
- running pdist and throwing away most rows of the upper-triangular matrix it returns. It works but can be a huge waste, especially when only 10% or less of the rows are actually required
- running pairwise_distance(v1, v2), where v1 and v2 are as long as the number of indices in the first n rows of the upper-triangular distance matrix, with n the number of points in the current cell. Having to build v1 and v2 significantly slows the computation and increases its memory requirements
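As an illustration, a small sketch of the chunked computation described above using `torch.cdist`; the tensors, sizes, and threshold are placeholders, not a proposed API.
```
import torch

D = 0.5                              # distance threshold (illustrative value)
cell_pts = torch.rand(100, 3)        # points in the current cell (placeholder data)
neighbor_pts = torch.rand(1000, 3)   # points in this cell plus its neighbouring cells

# Only len(cell_pts) rows are computed, instead of the full upper-triangular pdist output.
dists = torch.cdist(cell_pts, neighbor_pts)   # shape (100, 1000)
# In practice self-pairs would need to be masked out, since cell_pts is part of neighbor_pts.
has_close_neighbor = (dists <= D).any(dim=1)
```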
### Alternatives
_No response_
### Additional context
_No response_
cc @albanD
| true
|
2,896,512,830
|
[inductor][cpu]speech_transformer failure in 2025-03-02 nightly release
|
LifengWang
|
closed
|
[
"oncall: pt2",
"oncall: cpu inductor"
] | 3
|
CONTRIBUTOR
|
### 🐛 Describe the bug
speech_transformer failure
the bad commit: ce2f680e0009550ef0dc594f375d542662fcb7e5
the suspected guilty commit: 276dfe8150f228496e65b29c626a38eb1ef1dcde
[torchbench-speech_transformer-inference-float32-dynamic-default-single-performance-crash_guilty_commit.log](https://inteltf-jenk.sh.intel.com/job/inductor_local_guilty_commit_search/2045/artifact/2025_03_04/inductor_log/torchbench-speech_transformer-inference-float32-dynamic-default-single-performance-crash_guilty_commit.log)
Reproduce script:
[inductor_single_run.sh](https://github.com/chuanqi129/inductor-tools/blob//main/scripts/modelbench/inductor_single_run.sh)
bash inductor_single_run.sh multiple inference performance torchbench speech_transformer float32 first dynamic default
Error log
```
# bash inductor_single_run.sh multiple inference performance torchbench speech_transformer float32 first dunamic default
Testing with inductor.
multi-threads testing....
loading model: 0it [00:01, ?it/s]
cpu eval speech_transformer
ERROR:common:Backend dynamo failed in warmup()
Traceback (most recent call last):
File "/workspace/pytorch/benchmarks/dynamo/common.py", line 2554, in warmup
fn(model, example_inputs)
File "/workspace/pytorch/torch/_dynamo/eval_frame.py", line 586, in _fn
return fn(*args, **kwargs)
File "/workspace/pytorch/benchmarks/dynamo/torchbench.py", line 451, in forward_pass
return mod(*inputs)
File "/workspace/pytorch/torch/nn/modules/module.py", line 1751, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
File "/workspace/pytorch/torch/nn/modules/module.py", line 1762, in _call_impl
return forward_call(*args, **kwargs)
File "/workspace/benchmark/torchbenchmark/models/speech_transformer/speech_transformer/transformer/transformer.py", line 27, in forward
encoder_padded_outputs, *_ = self.encoder(padded_input, input_lengths)
File "/workspace/pytorch/torch/nn/modules/module.py", line 1751, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
File "/workspace/pytorch/torch/nn/modules/module.py", line 1762, in _call_impl
return forward_call(*args, **kwargs)
File "/workspace/benchmark/torchbenchmark/models/speech_transformer/speech_transformer/transformer/encoder.py", line 61, in forward
non_pad_mask = get_non_pad_mask(padded_input, input_lengths=input_lengths)
File "/workspace/benchmark/torchbenchmark/models/speech_transformer/speech_transformer/transformer/encoder.py", line 63, in torch_dynamo_resume_in_forward_at_61
slf_attn_mask = get_attn_pad_mask(padded_input, input_lengths, length)
File "/workspace/pytorch/torch/_dynamo/convert_frame.py", line 1422, in __call__
return self._torchdynamo_orig_callable(
File "/workspace/pytorch/torch/_dynamo/convert_frame.py", line 1203, in __call__
result = self._inner_convert(
File "/workspace/pytorch/torch/_dynamo/convert_frame.py", line 594, in __call__
return _compile(
File "/workspace/pytorch/torch/_dynamo/convert_frame.py", line 1053, in _compile
guarded_code = compile_inner(code, one_graph, hooks, transform)
File "/workspace/pytorch/torch/_utils_internal.py", line 97, in wrapper_function
return function(*args, **kwargs)
File "/workspace/pytorch/torch/_dynamo/convert_frame.py", line 755, in compile_inner
return _compile_inner(code, one_graph, hooks, transform)
File "/workspace/pytorch/torch/_dynamo/convert_frame.py", line 900, in _compile_inner
check_fn = CheckFunctionManager(
File "/workspace/pytorch/torch/_dynamo/guards.py", line 2510, in __init__
raise AssertionError(f"Guard check failed: {reasons}")
AssertionError: Guard check failed: 7/0: ___check_obj_id(___from_numpy(L['self']._modules['layer_stack']._modules['0']._modules['slf_attn']._modules['attention'].temperature), 140582834324320)
warmup_failed
```
### Versions
PyTorch version: 2.7.0a0+gitce2f680
Is debug build: False
CUDA used to build PyTorch: None
ROCM used to build PyTorch: N/A
OS: Ubuntu 22.04.5 LTS (x86_64)
GCC version: (Ubuntu 11.4.0-1ubuntu1~22.04) 11.4.0
Clang version: Could not collect
CMake version: version 3.26.4
Libc version: glibc-2.35
Python version: 3.10.16 | packaged by conda-forge | (main, Dec 5 2024, 14:16:10) [GCC 13.3.0] (64-bit runtime)
Python platform: Linux-5.15.0-134-generic-x86_64-with-glibc2.35
Is CUDA available: False
CUDA runtime version: No CUDA
CUDA_MODULE_LOADING set to: N/A
GPU models and configuration: No CUDA
Nvidia driver version: No CUDA
cuDNN version: No CUDA
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 52 bits physical, 57 bits virtual
Byte Order: Little Endian
CPU(s): 384
On-line CPU(s) list: 0-383
Vendor ID: GenuineIntel
Model name: Intel(R) Xeon(R) 6972P
CPU family: 6
Model: 173
Thread(s) per core: 2
Core(s) per socket: 96
Socket(s): 2
Stepping: 1
CPU max MHz: 3900.0000
CPU min MHz: 800.0000
BogoMIPS: 4800.00
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc art arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc cpuid aperfmperf tsc_known_freq pni pclmulqdq dtes64 monitor ds_cpl vmx smx est tm2 ssse3 sdbg fma cx16 xtpr pdcm pcid dca sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm 3dnowprefetch cpuid_fault epb cat_l3 cat_l2 cdp_l3 invpcid_single cdp_l2 ssbd mba ibrs ibpb stibp ibrs_enhanced tpr_shadow vnmi flexpriority ept vpid ept_ad fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid cqm rdt_a avx512f avx512dq rdseed adx smap avx512ifma clflushopt clwb intel_pt avx512cd sha_ni avx512bw avx512vl xsaveopt xsavec xgetbv1 xsaves cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local split_lock_detect avx_vnni avx512_bf16 wbnoinvd dtherm ida arat pln pts hwp hwp_act_window hwp_epp hwp_pkg_req avx512vbmi umip pku ospke waitpkg avx512_vbmi2 gfni vaes vpclmulqdq avx512_vnni avx512_bitalg tme avx512_vpopcntdq la57 rdpid bus_lock_detect cldemote movdiri movdir64b enqcmd fsrm md_clear serialize tsxldtrk pconfig arch_lbr amx_bf16 avx512_fp16 amx_tile amx_int8 flush_l1d arch_capabilities
Virtualization: VT-x
L1d cache: 9 MiB (192 instances)
L1i cache: 12 MiB (192 instances)
L2 cache: 384 MiB (192 instances)
L3 cache: 960 MiB (2 instances)
NUMA node(s): 2
NUMA node0 CPU(s): 0-95,192-287
NUMA node1 CPU(s): 96-191,288-383
Vulnerability Gather data sampling: Not affected
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Not affected
Vulnerability Reg file data sampling: Not affected
Vulnerability Retbleed: Not affected
Vulnerability Spec rstack overflow: Not affected
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl and seccomp
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Enhanced / Automatic IBRS; IBPB conditional; RSB filling; PBRSB-eIBRS Not affected; BHI BHI_DIS_S
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Versions of relevant libraries:
[pip3] bert_pytorch==0.0.1a4
[pip3] functorch==1.14.0a0+b71aa0b
[pip3] mypy==1.15.0
[pip3] mypy-extensions==1.0.0
[pip3] numpy==1.26.4
[pip3] onnx==1.17.0
[pip3] torch==2.7.0a0+gitce2f680
[pip3] torch_geometric==2.4.0
[pip3] torchaudio==2.6.0a0+c670ad8
[pip3] torchdata==0.7.0a0+11bb5b8
[pip3] torchmultimodal==0.1.0b0
[pip3] torchtext==0.16.0a0+b0ebddc
[pip3] torchvision==0.19.0a0+d23a6e1
[conda] bert-pytorch 0.0.1a4 dev_0 <develop>
[conda] functorch 1.14.0a0+b71aa0b pypi_0 pypi
[conda] mkl 2024.2.2 ha957f24_16 conda-forge
[conda] mkl-include 2025.0.1 hf2ce2f3_20 conda-forge
[conda] numpy 1.26.4 pypi_0 pypi
[conda] torch 2.7.0a0+gitce2f680 dev_0 <develop>
[conda] torch-geometric 2.4.0 pypi_0 pypi
[conda] torchaudio 2.6.0a0+c670ad8 pypi_0 pypi
[conda] torchdata 0.7.0a0+11bb5b8 pypi_0 pypi
[conda] torchmultimodal 0.1.0b0 pypi_0 pypi
[conda] torchtext 0.16.0a0+b0ebddc pypi_0 pypi
[conda] torchvision 0.19.0a0+d23a6e1 pypi_0 pypi
cc @chauhang @penguinwu @chuanqi129
| true
|
2,896,343,273
|
[Break XPU][Inductor UT] Generalize device-bias code introduced by #146866.
|
etaf
|
closed
|
[
"open source",
"Merged",
"ciflow/trunk",
"topic: not user facing",
"module: inductor",
"ciflow/inductor"
] | 3
|
COLLABORATOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #148323
* #147727
* #148538
* __->__ #148534
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
| true
|
2,896,205,375
|
Skip buffer in dense update
|
zoranzhao
|
closed
|
[
"fb-exported",
"Merged",
"ciflow/trunk",
"topic: not user facing",
"ciflow/inductor"
] | 17
|
MEMBER
|
Summary:
As title.
PyTorch module buffers will not be published in delta publishing. In Quinn's previous diff, constant type annotations were introduced.
In addition to skipping constants, we also need to skip a buffer if it is not found in the user-provided delta weights list.
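A minimal sketch of the skip condition described above; the function and parameter names are illustrative only and are not taken from the actual implementation.
```
# Illustrative only: not the actual implementation.
def should_skip_for_delta_update(
    name: str, is_constant: bool, is_buffer: bool, delta_weight_names: set
) -> bool:
    # Constants are always skipped; buffers are skipped unless the user explicitly
    # listed them in the delta weights to be updated.
    if is_constant:
        return True
    if is_buffer and name not in delta_weight_names:
        return True
    return False
```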
Test Plan: https://docs.google.com/document/d/1wiqUo0PyZ4g6YJIJlL_LE084ZEuE74iu74gZjqGGjWY/edit?tab=t.0#heading=h.dby6cwiw1xrn
Differential Revision: D69553929
| true
|