| id (int64, 2.74B–3.05B) | title (string, 1–255 chars) | user (string, 2–26 chars) | state (string, 2 classes) | labels (list, 0–24 items) | comments (int64, 0–206) | author_association (string, 4 classes) | body (string, 7–62.5k chars, ⌀ nullable) | is_title (bool, 1 class) |
|---|---|---|---|---|---|---|---|---|
2,858,279,205
|
torch.isin does not support scalar `test_element` under torch.compile
|
meetmul
|
closed
|
[
"triaged",
"oncall: pt2",
"module: decompositions"
] | 3
|
NONE
|
### 🐛 Describe the bug
According to the doc (https://pytorch.org/docs/stable/generated/torch.isin.html), `test_elements` can be either a tensor or a scalar, but this API raises an exception when it receives a scalar `test_elements` under torch.compile.
Please run the following code to reproduce this issue:
```python
import torch
cf = torch.compile(torch.isin)
elements = torch.tensor([1,2,3,4])
test_elements = 1
cf(elements,test_elements)
```
I also tried this code in eager mode and found that `torch.isin` works normally without raising an exception.
It would be nice if this API's behavior under torch.compile were consistent with the documentation.
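A possible workaround (a sketch only, not verified against this nightly): wrapping the scalar test element in a one-element tensor gives the same result as the scalar and may avoid the failing scalar path.
```python
import torch

cf = torch.compile(torch.isin)
elements = torch.tensor([1, 2, 3, 4])
test_elements = 1
# Wrap the scalar in a 1-element tensor before calling the compiled function.
cf(elements, torch.tensor([test_elements]))
```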
### Error logs
```
torch._inductor.exc.InductorError: AssertionError: size=[], stride=[1]
```
### Versions
[pip3] pytorch-triton==3.2.0+git4b3bb1f8
[pip3] torch==2.7.0.dev20250217+cu124
[pip3] torchaudio==2.6.0.dev20250217+cu124
[pip3] torchvision==0.22.0.dev20250217+cu124
cc @chauhang @penguinwu @SherlockNoMad
| true
|
2,858,207,623
|
Comm reordering can make Inductor use variable before its definition
|
lw
|
open
|
[
"triaged",
"oncall: pt2",
"module: inductor"
] | 3
|
CONTRIBUTOR
|
### 🐛 Describe the bug
When using PyTorch 2.6.0 with some code that uses tensor parallelism I encountered an issue that manifests as follows:
```
File "/tmp/torchinductor_lcw/5r/c5rh5j7ln7q5ww6b23zxiuv7bdxprxg7iwjsed32bcix7r7helem.py", line 2815, in call
buf40 = torch.ops._c10d_functional.all_gather_into_tensor.default(buf39, 2, '2')
^^^^^
UnboundLocalError: cannot access local variable 'buf39' where it is not associated with a value
```
With the help of @yifuwang and @yf225 we found that it had already been fixed in main in https://github.com/pytorch/pytorch/pull/142822.
We should backport that PR into v2.6.1!
### Versions
PyTorch 2.6.0 from PyPI
cc @chauhang @penguinwu @voznesenskym @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @amjames @aakhundov
| true
|
2,858,179,950
|
Never ending compile
|
bhack
|
closed
|
[
"triaged",
"oncall: pt2",
"module: inductor",
"oncall: export",
"module: aotinductor"
] | 2
|
CONTRIBUTOR
|
### 🐛 Describe the bug
I'm forking this ticket from https://github.com/pytorch/pytorch/issues/147323
I assumed it was just a long-running compile session, but it seems it is never going to end:
`100.0 3.6 122:34.20 cc1plus`
How are we going to debug these cases?
### Error logs
_No response_
### Versions
nightly
cc @chauhang @penguinwu @voznesenskym @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @amjames @aakhundov @avikchaudhuri @gmagogsfm @zhxchen17 @tugsbayasgalan @angelayi @suo @ydwu4 @desertfire @yushangdi
| true
|
2,858,112,087
|
torch.export.export fails when one input is a class inheriting from torch.nn.Module
|
xadupre
|
open
|
[
"oncall: pt2",
"export-triaged",
"oncall: export"
] | 1
|
COLLABORATOR
|
### 🐛 Describe the bug
``transformers.cache_utils.DynamicCache`` inherits from ``torch.nn.Module``. It seems to confuse ``torch.export.export`` and gives the following error:
```text
File ".../site-packages/torch/export/_trace.py", line 1697, in _export_to_aten_ir_make_fx
raise UserError(UserErrorType.CONSTRAINT_VIOLATION, str(e)) # noqa: B904
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
torch._dynamo.exc.UserError: Constraints violated (batch)! For more information, run with TORCH_LOGS="+dynamic".
- Not all values of batch = L['args'][0][0].size()[0] in the specified range batch <= 1024 are valid because batch was inferred to be a constant (3).
- Not all values of batch = L['args'][0][1]['key_cache'][0].size()[0] in the specified range batch <= 1024 are valid because batch was inferred to be a constant (3).
- Not all values of batch = L['args'][0][1]['value_cache'][0].size()[0] in the specified range batch <= 1024 are valid because batch was inferred to be a constant (3).
Suggested fixes:
batch = 3
```
After the base class is removed, ``torch.export.export`` works.
```python
class BaseDummyClass:
pass
DynamicCache.__bases__ = (BaseDummyClass,)
```
Full example:
```python
from typing import Any, Dict, List, Tuple
import torch
import transformers
from transformers.cache_utils import DynamicCache
def registers_dynamic_cache():
def flatten_dynamic_cache(
dynamic_cache: DynamicCache,
) -> Tuple[List[Any], torch.utils._pytree.Context]:
"""Serialize a DynamicCache with python objects for ``torch.export.export``."""
flat = [
(k, getattr(dynamic_cache, k))
for k in ["key_cache", "value_cache"]
if hasattr(dynamic_cache, k)
]
return [f[1] for f in flat], [f[0] for f in flat]
def flatten_with_keys_dynamic_cache(d: Dict[Any, Any]) -> Tuple[
List[Tuple[torch.utils._pytree.KeyEntry, Any]], torch.utils._pytree.Context,
]:
"""Serialize a DynamicCache with python objects for ``torch.export.export``."""
import torch
values, context = flatten_dynamic_cache(d)
return [(torch.utils._pytree.MappingKey(k), v) for k, v in zip(context, values)], context
def unflatten_dynamic_cache(
values: List[Any],
context: torch.utils._pytree.Context,
output_type=None,
) -> DynamicCache:
"""Restore a DynamicCache from python objects."""
from transformers.cache_utils import DynamicCache
cache = DynamicCache()
values = dict(zip(context, values))
for k, v in values.items():
setattr(cache, k, v)
return cache
# Register subclasses as pytree nodes.
# This is necessary to export a model using DynamicCache as inputs.
torch.utils._pytree.register_pytree_node(
DynamicCache,
flatten_dynamic_cache,
unflatten_dynamic_cache,
serialized_type_name=f"{DynamicCache.__module__}.{DynamicCache.__name__}",
flatten_with_keys_fn=flatten_with_keys_dynamic_cache,
)
torch.fx._pytree.register_pytree_flatten_spec(
DynamicCache, lambda x, _: [x.key_cache, x.value_cache]
)
def change_base_class_for_dynamic_cache():
"""
:class:`transformers.cache_utils.DynamicCache` inherits from
:class:`torch.nn.Module`. It seems to confuse :func:`torch.export.export`
and gives the following error:
::
File ".../site-packages/torch/export/_trace.py", line 1697, in _export_to_aten_ir_make_fx
raise UserError(UserErrorType.CONSTRAINT_VIOLATION, str(e)) # noqa: B904
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
torch._dynamo.exc.UserError: Constraints violated (batch)! For more information, run with TORCH_LOGS="+dynamic".
- Not all values of batch = L['args'][0][0].size()[0] in the specified range batch <= 1024 are valid because batch was inferred to be a constant (3).
- Not all values of batch = L['args'][0][1]['key_cache'][0].size()[0] in the specified range batch <= 1024 are valid because batch was inferred to be a constant (3).
- Not all values of batch = L['args'][0][1]['value_cache'][0].size()[0] in the specified range batch <= 1024 are valid because batch was inferred to be a constant (3).
Suggested fixes:
batch = 3
After the base class is removed, :func:`torch.export.export` works.
"""
class BaseDummyClass:
pass
DynamicCache.__bases__ = (BaseDummyClass,)
registers_dynamic_cache()
# See the documentation of this function.
change_base_class_for_dynamic_cache()
class ModelTakingDynamicCacheAsInput(torch.nn.Module):
def forward(self, x, dc):
kc = torch.cat(dc.key_cache, axis=1)
vc = torch.cat(dc.value_cache, axis=1)
length = dc.get_seq_length() if dc is not None else 0
ones = torch.zeros(
(
dc.key_cache[0].shape[0],
dc.key_cache[0].shape[1],
length,
dc.key_cache[0].shape[-1],
)
)
w = vc + kc + ones
y = w.sum(axis=2, keepdim=True)
return x + y
x = torch.randn(3, 8, 7, 1)
cache = DynamicCache(1)
cache.update(torch.ones((3, 8, 5, 6)), (torch.ones((3, 8, 5, 6)) * 2), 0)
model = ModelTakingDynamicCacheAsInput()
expected = model(x, cache)
batch = torch.export.Dim("batch", min=1, max=1024)
clength = torch.export.Dim("clength", min=1, max=1024)
ep = torch.export.export(
model,
(x, cache),
dynamic_shapes=({0: batch}, [[{0: batch, 2: clength}], [{0: batch, 2: clength}]]),
strict=False,
)
mod = ep.module()
torch.testing.assert_close(expected, mod(x, cache))
```
### Versions
```
Collecting environment information...
PyTorch version: 2.7.0.dev20250214+cu126
Is debug build: False
CUDA used to build PyTorch: 12.6
ROCM used to build PyTorch: N/A
OS: Ubuntu 22.04.4 LTS (x86_64)
GCC version: (Ubuntu 11.4.0-1ubuntu1~22.04) 11.4.0
Clang version: Could not collect
CMake version: version 3.31.5
Libc version: glibc-2.35
Python version: 3.12.8 (main, Dec 4 2024, 08:54:12) [GCC 11.4.0] (64-bit runtime)
Python platform: Linux-5.15.167.4-microsoft-standard-WSL2-x86_64-with-glibc2.35
Is CUDA available: True
CUDA runtime version: 12.6.68
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration: GPU 0: NVIDIA GeForce RTX 4060 Laptop GPU
Nvidia driver version: 538.92
cuDNN version: Probably one of the following:
/usr/lib/x86_64-linux-gnu/libcudnn.so.9.3.0
/usr/lib/x86_64-linux-gnu/libcudnn_adv.so.9.3.0
/usr/lib/x86_64-linux-gnu/libcudnn_cnn.so.9.3.0
/usr/lib/x86_64-linux-gnu/libcudnn_engines_precompiled.so.9.3.0
/usr/lib/x86_64-linux-gnu/libcudnn_engines_runtime_compiled.so.9.3.0
/usr/lib/x86_64-linux-gnu/libcudnn_graph.so.9.3.0
/usr/lib/x86_64-linux-gnu/libcudnn_heuristic.so.9.3.0
/usr/lib/x86_64-linux-gnu/libcudnn_ops.so.9.3.0
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 46 bits physical, 48 bits virtual
Byte Order: Little Endian
CPU(s): 20
On-line CPU(s) list: 0-19
Vendor ID: GenuineIntel
Model name: 13th Gen Intel(R) Core(TM) i7-13800H
CPU family: 6
Model: 186
Thread(s) per core: 2
Core(s) per socket: 10
Socket(s): 1
Stepping: 2
BogoMIPS: 5836.80
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ss ht syscall nx pdpe1gb rdtscp lm constant_tsc rep_good nopl xtopology tsc_reliable nonstop_tsc cpuid pni pclmulqdq vmx ssse3 fma cx16 pcid sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand hypervisor lahf_lm abm 3dnowprefetch invpcid_single ssbd ibrs ibpb stibp ibrs_enhanced tpr_shadow vnmi ept vpid ept_ad fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid rdseed adx smap clflushopt clwb sha_ni xsaveopt xsavec xgetbv1 xsaves avx_vnni umip waitpkg gfni vaes vpclmulqdq rdpid movdiri movdir64b fsrm md_clear serialize flush_l1d arch_capabilities
Virtualization: VT-x
Hypervisor vendor: Microsoft
Virtualization type: full
L1d cache: 480 KiB (10 instances)
L1i cache: 320 KiB (10 instances)
L2 cache: 12.5 MiB (10 instances)
L3 cache: 24 MiB (1 instance)
Vulnerability Gather data sampling: Not affected
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Not affected
Vulnerability Reg file data sampling: Mitigation; Clear Register File
Vulnerability Retbleed: Mitigation; Enhanced IBRS
Vulnerability Spec rstack overflow: Not affected
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl and seccomp
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Enhanced / Automatic IBRS; IBPB conditional; RSB filling; PBRSB-eIBRS SW sequence; BHI BHI_DIS_S
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Versions of relevant libraries:
[pip3] mypy-extensions==1.0.0
[pip3] numpy==2.2.3
[pip3] nvidia-cublas-cu12==12.6.4.1
[pip3] nvidia-cuda-cupti-cu12==12.6.80
[pip3] nvidia-cuda-nvrtc-cu12==12.6.77
[pip3] nvidia-cuda-runtime-cu12==12.6.77
[pip3] nvidia-cudnn-cu12==9.5.1.17
[pip3] nvidia-cufft-cu12==11.3.0.4
[pip3] nvidia-curand-cu12==10.3.7.77
[pip3] nvidia-cusolver-cu12==11.7.1.2
[pip3] nvidia-cusparse-cu12==12.5.4.2
[pip3] nvidia-cusparselt-cu12==0.6.3
[pip3] nvidia-nccl-cu12==2.21.5
[pip3] nvidia-nvjitlink-cu12==12.6.85
[pip3] nvidia-nvtx-cu12==12.6.77
[pip3] onnx==1.18.0
[pip3] onnx-extended==0.3.0
[pip3] onnxruntime_extensions==0.13.0
[pip3] onnxruntime-training==1.21.0+cu126
[pip3] optree==0.14.0
[pip3] pytorch-triton==3.2.0+git4b3bb1f8
[pip3] torch==2.7.0.dev20250214+cu126
[pip3] torch_geometric==2.4.0
[pip3] torchaudio==2.6.0.dev20250214+cu126
[pip3] torchvision==0.22.0.dev20250214+cu126
[conda] Could not collect
```
cc @chauhang @penguinwu @avikchaudhuri @gmagogsfm @zhxchen17 @tugsbayasgalan @angelayi @suo @ydwu4
| true
|
2,858,073,038
|
[MPS] Fix incorrect size for uint3 arg
|
blawrence-ont
|
open
|
[
"triaged",
"open source",
"release notes: mps"
] | 3
|
CONTRIBUTOR
|
With the metal validation layer enabled I get the following error:
> validateComputeFunctionArguments:844: failed assertion `Compute
> Function(naive_matmul_half): argument sizes[0] from buffer(4) with
> offset(0) and length(12) has space for 12 bytes, but argument has a
> length(16).'
The spec (https://developer.apple.com/metal/Metal-Shading-Language-Specification.pdf) states that a `uint3` is 16 bytes in size but we only provide a buffer of size 12 here (3 x uint32_t). The other uses of `uint3` (Quantized.mm and Indexing.mm) pad this to 16 bytes correctly, so do the same here.
I did a quick spot check and couldn't find any other places where we use the wrong size.
| true
|
2,858,071,944
|
[MPS] Fix metallib embedding in static builds
|
blawrence-ont
|
open
|
[
"triaged",
"open source",
"release notes: mps"
] | 4
|
CONTRIBUTOR
|
`-sectcreate` doesn't have any effect on static libraries, so when building as such we have to let the client do that part.
This fixes static builds since they currently trigger this exception: https://github.com/pytorch/pytorch/blob/71855a1cad1346a27a83984e245bbd16a7b56f53/aten/src/ATen/native/mps/OperationUtils.mm#L953
| true
|
2,858,009,026
|
[compile] Modularize very long compilation
|
bhack
|
closed
|
[
"triaged",
"oncall: pt2",
"module: inductor",
"oncall: export",
"module: aotinductor"
] | 33
|
CONTRIBUTOR
|
### 🚀 The feature, motivation and pitch
On a model export/compile I see that there is a very, very long stage (more than 1 hour) compiling a single generated final C++ file that is more than 78K lines long:
```bash
g++ /tmp/torchinductor_root/<hash>/<hash>.cpp -D TORCH_INDUCTOR_CPP_WRAPPER -D STANDALONE_TORCH_HEADER -D C10_USING_CUSTOM_GENERATED_MACROS -D CPU_CAPABILITY_AVX5+..
```
This really requires a lot of time, without any intermediate progress, and uses only a single core at 100%.
Why isn't it possible to modularize and parallelize the compilation of this monolithic generated C++ file?
### Alternatives
_No response_
### Additional context
_No response_
cc @chauhang @penguinwu @voznesenskym @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @amjames @aakhundov @avikchaudhuri @gmagogsfm @zhxchen17 @tugsbayasgalan @angelayi @suo @ydwu4 @desertfire @yushangdi @malfet @seemethere
| true
|
2,857,924,311
|
Add NEON implementation for 8 bit quantized embedding bag on aarch64
|
annop-w
|
closed
|
[
"module: cpu",
"open source",
"module: arm",
"Merged",
"ciflow/trunk",
"release notes: quantization",
"topic: performance",
"ciflow/linux-aarch64",
"arm priority"
] | 6
|
CONTRIBUTOR
|
This improves performance by ~5.5x on NeoverseV1 cores using the following benchmarking script:
```
import torch
import torch.nn as nn
import numpy as np
import torch.autograd.profiler as profiler
np.random.seed(0)
torch.manual_seed(0)
class SimpleEmbeddingBagModel(nn.Module):
def __init__(self, num_embeddings, embedding_dim):
super(SimpleEmbeddingBagModel, self).__init__()
weights = torch.from_numpy((np.random.random_sample((num_embeddings, embedding_dim)) + 1).astype(np.float32))
obs = torch.ao.quantization.PerChannelMinMaxObserver(dtype=torch.quint8, qscheme=torch.per_channel_affine_float_qparams, ch_axis=0)
obs(weights)
qparams = obs.calculate_qparams()
qweight = torch.quantize_per_channel(weights, qparams[0], qparams[1], axis=0, dtype=torch.quint8)
# Defining the EmbeddingBag layer
self.qembedding_bag = torch.ao.nn.quantized.EmbeddingBag(num_embeddings, embedding_dim, _weight=qweight,
mode='sum', include_last_offset=True, dtype=torch.quint8)
def forward(self, input, offsets):
# Forward pass through the EmbeddingBag layer
result = self.qembedding_bag(input, offsets, per_sample_weights=None)
return result
num_embeddings = 40000000
embedding_dim = 128
model = SimpleEmbeddingBagModel(num_embeddings=num_embeddings, embedding_dim=embedding_dim)
model.eval()
multi_hot = 100
batch_size = 400
input_tensor = torch.randint(0, num_embeddings, (batch_size * multi_hot,), dtype=torch.long)
offsets = torch.tensor(range(0, batch_size * multi_hot + 1, multi_hot))
with torch.no_grad():
# warm up
_ = model(input_tensor, offsets)
with profiler.profile(with_stack=True, profile_memory=False, record_shapes=True) as prof:
for i in range(100):
_ = model(input_tensor, offsets)
print(prof.key_averages(group_by_input_shape=True).table(sort_by='self_cpu_time_total', row_limit=50))
```
cc @jgong5 @mingfeima @XiaobingSuper @sanchitintel @ashokei @jingxu10 @malfet @snadampal @milpuz01
| true
|
2,857,861,529
|
[FSDP] Moving module's view tensor to device
|
mieshkiwrk
|
open
|
[
"oncall: distributed",
"triaged",
"module: fsdp"
] | 4
|
NONE
|
### 🐛 Describe the bug
When wrapping `nn.Module` with `FSDP` it's moving tensors to device using `tensor.data = tensor.to(device)` instead of `tensor = tensor.to(device)`, source: [torch/distributed/fsdp/_init_utils.py#L1025](https://github.com/pytorch/pytorch/blob/main/torch/distributed/fsdp/_init_utils.py#L1025)
Due to that, when a module contains a view tensor, the view is reported as being on the device while its base tensor is still on the CPU, which, as far as I understand, is wrong and not expected behavior.
The experiment below shows that performing an in-place op on the base tensor does not affect the view's data.
Is it expected to move the tensor using the `.data` field instead of the whole tensor?
```python
import torch
import torch.nn as nn
device = 'cuda'
def debug_tensor(t, name, prefix=''):
def print_data():
for val in t.flatten():
print(val.item(), end=' ')
print(f'\n')
print(f'{prefix}# Tensor {name}')
print(f'{prefix}\t Shape: {t.shape}')
print(f'{prefix}\t Is view: {"yes" if t._is_view() else "no"}')
print(f'{prefix}\t Dtype: {t.dtype}')
print(f'{prefix}\t Device: {t.device}')
print(f'{prefix}\t Data: ', end='')
print_data()
if t._base is not None:
debug_tensor(t._base, f'{name}._base', '\t')
class DummyModule(nn.Module):
def __init__(self):
super().__init__()
self.t = torch.arange(2).expand((1,-1))
def forward(self):
self.t._base.add_(2)
return self.t
mod = DummyModule()
buffer = mod.t
buffer.data = buffer.to(device)
res = mod()
debug_tensor(res, 'res')
```
Output:
```
# Tensor res
Shape: torch.Size([1, 2])
Is view: yes
Dtype: torch.int64
Device: cuda:0
Data: 0 1
# Tensor res._base
Shape: torch.Size([2])
Is view: no
Dtype: torch.int64
Device: cpu
Data: 2 3
```
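For contrast, a minimal sketch (hypothetical, assuming a CUDA device is available) of the two move styles; the first mirrors the FSDP behavior and the output shown above, the second moves the data by reassignment:
```python
import torch

base = torch.arange(2)
view = base.expand((1, -1))

# FSDP-style move: only `.data` is swapped; per the output above, the base stays on the CPU.
view.data = view.to("cuda")
print(view.device, view._base.device)

# Moving by reassignment instead: the resulting tensor's data actually lives on the device.
moved = base.expand((1, -1)).to("cuda")
print(moved.device)
```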
### Versions
PT 2.6
cc @H-Huang @awgu @wanchaol @fegin @fduwjj @wz337 @wconstab @d4l3k @zhaojuanmao @mrshenli @rohan-varma @chauhang @mori360 @kwen2501 @c-p-i-o
| true
|
2,857,806,740
|
[TESTING] [NO MERGE] Testing new triton commit for release/2.7
|
jataylo
|
closed
|
[
"open source",
"ciflow/trunk",
"topic: not user facing",
"module: inductor",
"ciflow/inductor",
"keep-going",
"ciflow/unstable",
"ciflow/rocm",
"ci-no-td",
"ciflow/inductor-rocm",
"ciflow/rocm-mi300",
"ciflow/inductor-perf-test-nightly-rocm"
] | 66
|
COLLABORATOR
|
testing only
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
| true
|
2,857,647,461
|
Fix the tiny doc descriptions
|
FFFrog
|
closed
|
[
"open source",
"Merged",
"ciflow/trunk",
"topic: not user facing"
] | 5
|
COLLABORATOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #147319
As the title stated
| true
|
2,857,554,472
|
Torch.export.export produces a graph with inplace operations
|
anzr299
|
closed
|
[
"oncall: pt2",
"oncall: export"
] | 3
|
NONE
|
### 🐛 Describe the bug
The GraphModule produced by torch.export.export is supposed to contain no in-place operations, as far as I can tell from the [docs](https://pytorch.org/docs/stable/export.html#an-example). The following code shows an example of this case (it also applies to the torchvision.models.swin_v2_s model, for example).
```python
import torch
import torch.nn as nn

class sample(nn.Module):
def __init__(self):
super().__init__()
self.linear = nn.Linear(3, 3)
def forward(self, x: torch.Tensor):
        # Emulating part of the torchvision SWIN masks generation implementation
y = x.new_zeros((3, 3))
y[1, :] = 2.0
y[2, :] = 3.0
return self.linear(x) + y
ex_input = torch.ones((1, 3, 3, 3))
ep = torch.export.export(sample(), args=(ex_input,)).module() # produces a graph with inplace ops like fill_
```
Below is GraphModule.code for reference.
```
def forward(self, x):
x, = fx_pytree.tree_flatten_spec(([x], {}), self._in_spec)
linear_weight = self.linear.weight
linear_bias = self.linear.bias
lifted_tensor_0 = self.lifted_tensor_0
lifted_tensor_1 = self.lifted_tensor_1
new_zeros = torch.ops.aten.new_zeros.default(x, [3, 3], pin_memory = False)
lift_fresh_copy = torch.ops.aten.lift_fresh_copy.default(lifted_tensor_0); lifted_tensor_0 = None
select = torch.ops.aten.select.int(new_zeros, 0, 1)
slice_1 = torch.ops.aten.slice.Tensor(select, 0, 0, 9223372036854775807); select = None
fill_ = torch.ops.aten.fill_.Tensor(slice_1, lift_fresh_copy); slice_1 = lift_fresh_copy = fill_ = None
lift_fresh_copy_1 = torch.ops.aten.lift_fresh_copy.default(lifted_tensor_1); lifted_tensor_1 = None
select_1 = torch.ops.aten.select.int(new_zeros, 0, 2)
slice_2 = torch.ops.aten.slice.Tensor(select_1, 0, 0, 9223372036854775807); select_1 = None
fill__1 = torch.ops.aten.fill_.Tensor(slice_2, lift_fresh_copy_1); slice_2 = lift_fresh_copy_1 = fill__1 = None
linear = torch.ops.aten.linear.default(x, linear_weight, linear_bias); x = linear_weight = linear_bias = None
add = torch.ops.aten.add.Tensor(linear, new_zeros); linear = new_zeros = None
return pytree.tree_unflatten((add,), self._out_spec)
```
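For reference, a small sketch (assuming the `ep` GraphModule from the snippet above) that scans the exported graph for in-place ATen overloads, i.e. those whose schema name ends with an underscore:
```python
inplace_nodes = [
    node
    for node in ep.graph.nodes
    if node.op == "call_function"
    and hasattr(node.target, "_schema")
    and node.target._schema.name.endswith("_")
]
print(inplace_nodes)  # lists the aten.fill_ calls shown above
```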
### Versions
```
PyTorch version: 2.6.0+cu124
Is debug build: False
CUDA used to build PyTorch: 12.4
ROCM used to build PyTorch: N/A
OS: Ubuntu 22.04.5 LTS (x86_64)
GCC version: (Ubuntu 11.4.0-1ubuntu1~22.04) 11.4.0
Clang version: Could not collect
CMake version: version 3.26.0
Libc version: glibc-2.35
Python version: 3.10.12 (main, Jan 17 2025, 14:35:34) [GCC 11.4.0] (64-bit runtime)
Python platform: Linux-6.8.0-51-generic-x86_64-with-glibc2.35
Is CUDA available: True
CUDA runtime version: Could not collect
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration:
GPU 0: NVIDIA GeForce RTX 3090
GPU 1: NVIDIA GeForce RTX 3090
Nvidia driver version: 560.28.03
cuDNN version: Could not collect
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 46 bits physical, 48 bits virtual
Byte Order: Little Endian
CPU(s): 36
On-line CPU(s) list: 0-35
Vendor ID: GenuineIntel
Model name: Intel(R) Core(TM) i9-10980XE CPU @ 3.00GHz
CPU family: 6
Model: 85
Thread(s) per core: 2
Core(s) per socket: 18
Socket(s): 1
Stepping: 7
CPU max MHz: 4800.0000
CPU min MHz: 1200.0000
BogoMIPS: 6000.00
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc art arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc cpuid aperfmperf pni pclmulqdq dtes64 monitor ds_cpl vmx est tm2 ssse3 sdbg fma cx16 xtpr pdcm pcid dca sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm 3dnowprefetch cpuid_fault epb cat_l3 cdp_l3 ssbd mba ibrs ibpb stibp ibrs_enhanced tpr_shadow flexpriority ept vpid ept_ad fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid cqm mpx rdt_a avx512f avx512dq rdseed adx smap clflushopt clwb intel_pt avx512cd avx512bw avx512vl xsaveopt xsavec xgetbv1 xsaves cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local dtherm ida arat pln pts vnmi avx512_vnni md_clear flush_l1d arch_capabilities
Virtualization: VT-x
L1d cache: 576 KiB (18 instances)
L1i cache: 576 KiB (18 instances)
L2 cache: 18 MiB (18 instances)
L3 cache: 24.8 MiB (1 instance)
NUMA node(s): 1
NUMA node0 CPU(s): 0-35
Vulnerability Gather data sampling: Mitigation; Microcode
Vulnerability Itlb multihit: KVM: Mitigation: VMX disabled
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Mitigation; Clear CPU buffers; SMT vulnerable
Vulnerability Reg file data sampling: Not affected
Vulnerability Retbleed: Mitigation; Enhanced IBRS
Vulnerability Spec rstack overflow: Not affected
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Enhanced / Automatic IBRS; IBPB conditional; RSB filling; PBRSB-eIBRS SW sequence; BHI SW loop, KVM SW loop
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Mitigation; TSX disabled
Versions of relevant libraries:
[pip3] efficientnet-pytorch==0.7.1
[pip3] numpy==1.26.4
[pip3] nvidia-cublas-cu12==12.4.5.8
[pip3] nvidia-cuda-cupti-cu12==12.4.127
[pip3] nvidia-cuda-nvrtc-cu12==12.4.127
[pip3] nvidia-cuda-runtime-cu12==12.4.127
[pip3] nvidia-cudnn-cu12==9.1.0.70
[pip3] nvidia-cufft-cu12==11.2.1.3
[pip3] nvidia-curand-cu12==10.3.5.147
[pip3] nvidia-cusolver-cu12==11.6.1.9
[pip3] nvidia-cusparse-cu12==12.3.1.170
[pip3] nvidia-cusparselt-cu12==0.6.2
[pip3] nvidia-nccl-cu12==2.21.5
[pip3] nvidia-nvjitlink-cu12==12.4.127
[pip3] nvidia-nvtx-cu12==12.4.127
[pip3] onnx==1.17.0
[pip3] onnxruntime==1.19.2
[pip3] torch==2.6.0
[pip3] torchaudio==2.6.0+cu126
[pip3] torchmetrics==1.0.1
[pip3] torchvision==0.21.0
[pip3] triton==3.2.0
```
CC: @pianpwk @angelayi
cc @chauhang @penguinwu @avikchaudhuri @gmagogsfm @zhxchen17 @tugsbayasgalan @angelayi @suo @ydwu4
| true
|
2,857,536,338
|
Tensorboard `add_video()` broken for `moviepy>=2.0`
|
araffin
|
open
|
[
"triaged",
"module: tensorboard"
] | 1
|
NONE
|
### 🐛 Describe the bug
Since https://github.com/Zulko/moviepy/pull/1340 (and [release 2.0](https://github.com/Zulko/moviepy/releases/tag/v2.0.0)), moviepy exposes its classes directly from the top-level package.
So the current
https://github.com/pytorch/pytorch/blob/e8b20f6ef39e006e6da90de736ae85a1ba55c159/torch/utils/tensorboard/summary.py#L658-L664
does not work and needs to be replaced by `from moviepy import ImageSequenceClip`.
Downgrading to `moviepy==1.0.3` also solves the issue.
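A minimal compatibility sketch (not the actual patch to `summary.py`, and assuming the class name is unchanged across versions): try the moviepy>=2.0 import first and fall back to the pre-2.0 path.
```python
try:
    from moviepy import ImageSequenceClip  # moviepy >= 2.0
except ImportError:
    from moviepy.editor import ImageSequenceClip  # moviepy < 2.0
```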
### Versions
PyTorch version: 2.6.0+cpu
Is debug build: False
CUDA used to build PyTorch: None
ROCM used to build PyTorch: N/A
OS: Ubuntu 24.04.2 LTS (x86_64)
GCC version: (Ubuntu 13.3.0-6ubuntu2~24.04) 13.3.0
Clang version: Could not collect
CMake version: Could not collect
Libc version: glibc-2.39
Python version: 3.10.14 | packaged by conda-forge | (main, Mar 20 2024, 12:45:18) [GCC 12.3.0] (64-bit runtime)
Python platform: Linux-6.11.0-17-generic-x86_64-with-glibc2.39
Is CUDA available: False
CUDA runtime version: No CUDA
CUDA_MODULE_LOADING set to: N/A
GPU models and configuration: No CUDA
Nvidia driver version: No CUDA
cuDNN version: No CUDA
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 48 bits physical, 48 bits virtual
Byte Order: Little Endian
CPU(s): 16
On-line CPU(s) list: 0-15
Vendor ID: AuthenticAMD
Model name: AMD Ryzen 7 7840U w/ Radeon 780M Graphics
CPU family: 25
Model: 116
Thread(s) per core: 2
Core(s) per socket: 8
Socket(s): 1
Stepping: 1
Frequency boost: enabled
CPU(s) scaling MHz: 35%
CPU max MHz: 5132.0000
CPU min MHz: 400.0000
BogoMIPS: 6587.93
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ht syscall nx mmxext fxsr_opt pdpe1gb rdtscp lm constant_tsc rep_good amd_lbr_v2 nopl xtopology nonstop_tsc cpuid extd_apicid aperfmperf rapl pni pclmulqdq monitor ssse3 fma cx16 sse4_1 sse4_2 x2apic movbe popcnt aes xsave avx f16c rdrand lahf_lm cmp_legacy svm extapic cr8_legacy abm sse4a misalignsse 3dnowprefetch osvw ibs skinit wdt tce topoext perfctr_core perfctr_nb bpext perfctr_llc mwaitx cpb cat_l3 cdp_l3 hw_pstate ssbd mba perfmon_v2 ibrs ibpb stibp ibrs_enhanced vmmcall fsgsbase bmi1 avx2 smep bmi2 erms invpcid cqm rdt_a avx512f avx512dq rdseed adx smap avx512ifma clflushopt clwb avx512cd sha_ni avx512bw avx512vl xsaveopt xsavec xgetbv1 xsaves cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local user_shstk avx512_bf16 clzero irperf xsaveerptr rdpru wbnoinvd cppc arat npt lbrv svm_lock nrip_save tsc_scale vmcb_clean flushbyasid decodeassists pausefilter pfthreshold vgif x2avic v_spec_ctrl vnmi avx512vbmi umip pku ospke avx512_vbmi2 gfni vaes vpclmulqdq avx512_vnni avx512_bitalg avx512_vpopcntdq rdpid overflow_recov succor smca fsrm flush_l1d amd_lbr_pmc_freeze
Virtualization: AMD-V
L1d cache: 256 KiB (8 instances)
L1i cache: 256 KiB (8 instances)
L2 cache: 8 MiB (8 instances)
L3 cache: 16 MiB (1 instance)
NUMA node(s): 1
NUMA node0 CPU(s): 0-15
Vulnerability Gather data sampling: Not affected
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Not affected
Vulnerability Reg file data sampling: Not affected
Vulnerability Retbleed: Not affected
Vulnerability Spec rstack overflow: Mitigation; Safe RET
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Enhanced / Automatic IBRS; IBPB conditional; STIBP always-on; RSB filling; PBRSB-eIBRS Not affected; BHI Not affected
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Versions of relevant libraries:
[pip3] mypy==1.15.0
[pip3] mypy-extensions==1.0.0
[pip3] numpy==2.2.3
[pip3] onnxruntime==1.20.0
[pip3] torch==2.6.0+cpu
[pip3] torchaudio==2.5.1+cpu
[pip3] torchvision==0.20.1+cpu
[conda] numpy 2.2.3 pypi_0 pypi
[conda] torch 2.6.0+cpu pypi_0 pypi
[conda] torchaudio 2.5.1+cpu pypi_0 pypi
[conda] torchvision 0.20.1+cpu pypi_0 pypi
| true
|
2,857,435,790
|
[ROCm][Windows] Fix unrecognized constexpr std::memcpy for HIP-clang
|
m-gallus
|
closed
|
[
"module: rocm",
"open source",
"Merged",
"ciflow/trunk",
"topic: not user facing"
] | 15
|
CONTRIBUTOR
|
Since `memcpy` is not defined as a constexpr function in MSVC's 2019/2022 STL implementation, the HIP clang compiler on Windows cannot evaluate the following memcpy as one that can be resolved at compile time. To resolve this, `__builtin_memcpy`, which doesn't have this limitation, is used instead.
cc @jeffdaily @sunway513 @jithunnair-amd @pruthvistony @ROCmSupport @dllehr-amd @jataylo @hongxiayang @naromero77amd
| true
|
2,857,375,810
|
[ROCm] Introduce AMD specific inductor gemm tuning
|
jataylo
|
closed
|
[
"module: rocm",
"triaged",
"open source",
"Merged",
"ciflow/trunk",
"release notes: rocm",
"module: inductor",
"ciflow/inductor",
"ciflow/rocm",
"ciflow/inductor-rocm",
"ciflow/inductor-periodic"
] | 16
|
COLLABORATOR
|
Replaces https://github.com/pytorch/pytorch/pull/143286
Adds ROCm-specific MM configs for max-autotune, incorporating ROCm-specific Triton tuning kernel args such as waves_per_eu, kpack, and matrix_instr_nonkdim. This PR also introduces behavior to allow tuning GROUP_M in the Triton GEMM case.
Dynamo huggingface inference benchmarks:
`TORCHINDUCTOR_MAX_AUTOTUNE=1 TORCHINDUCTOR_MAX_AUTOTUNE_GEMM_BACKENDS="TRITON" python huggingface.py --performance --inference --bfloat16 --backend=inductor`
GEOMEAN speedup (before): 1.35x
GEOMEAN speedup (after): 1.42x
name | Eager - abs latency | old - abs_latency | old - speedup | new - abs_latency | new - speedup
-- | -- | -- | -- | -- | --
AlbertForMaskedLM | 26.22 | 26.52 | 98.86% | 24.58 | 106.67%
AlbertForQuestionAnswering | 25.96 | 26.40 | 98.33% | 24.10 | 107.73%
AllenaiLongformerBase | 21.03 | 10.65 | 197.50% | 10.49 | 200.58%
BartForCausalLM | 7.77 | 9.76 | 79.63% | 8.79 | 88.46%
BartForConditionalGeneration | 14.44 | 12.86 | 112.26% | 11.96 | 120.70%
BertForMaskedLM | 8.10 | 8.82 | 91.89% | 8.57 | 94.53%
BertForQuestionAnswering | 6.82 | 7.32 | 93.20% | 7.10 | 96.18%
BlenderbotForCausalLM | 10.97 | 11.39 | 96.34% | 10.10 | 108.65%
BlenderbotSmallForCausalLM | 5.91 | 5.44 | 108.72% | 4.82 | 122.67%
BlenderbotSmallForConditionalGeneration | 12.64 | 9.65 | 130.94% | 9.11 | 138.83%
CamemBert | 8.35 | 9.15 | 91.24% | 8.86 | 94.27%
DebertaForMaskedLM | 10.92 | 6.09 | 179.44% | 5.90 | 185.05%
DebertaForQuestionAnswering | 14.29 | 7.70 | 185.59% | 7.26 | 196.75%
DebertaV2ForMaskedLM | 15.47 | 10.22 | 151.32% | 9.34 | 165.55%
DebertaV2ForQuestionAnswering | 14.98 | 6.11 | 245.28% | 6.28 | 238.40%
DistilBertForMaskedLM | 8.37 | 8.70 | 96.30% | 8.22 | 101.92%
DistilBertForQuestionAnswering | 10.21 | 10.54 | 96.88% | 10.39 | 98.36%
DistillGPT2 | 8.77 | 6.78 | 129.40% | 6.31 | 138.88%
ElectraForCausalLM | 10.32 | 4.70 | 219.45% | 4.60 | 224.29%
ElectraForQuestionAnswering | 11.48 | 5.62 | 204.20% | 5.44 | 210.95%
GPT2ForSequenceClassification | 6.21 | 5.72 | 108.50% | 5.58 | 111.26%
GoogleFnet | 26.51 | 20.81 | 127.37% | 19.91 | 133.11%
LayoutLMForMaskedLM | 12.09 | 7.99 | 151.28% | 7.66 | 157.80%
LayoutLMForSequenceClassification | 10.62 | 6.49 | 163.67% | 6.25 | 169.95%
M2M100ForConditionalGeneration | 14.98 | 10.20 | 146.79% | 9.89 | 151.42%
MBartForCausalLM | 7.67 | 9.78 | 78.44% | 8.87 | 86.55%
MBartForConditionalGeneration | 13.45 | 12.69 | 105.99% | 12.03 | 111.82%
MT5ForConditionalGeneration | 19.96 | 5.32 | 375.37% | 5.08 | 393.01%
MegatronBertForCausalLM | 13.22 | 7.86 | 168.07% | 7.18 | 184.01%
MegatronBertForQuestionAnswering | 15.62 | 11.81 | 132.21% | 11.02 | 141.68%
MobileBertForMaskedLM | 26.63 | 10.82 | 245.99% | 11.95 | 222.73%
MobileBertForQuestionAnswering | 23.53 | 7.55 | 311.51% | 9.53 | 247.03%
OPTForCausalLM | 7.33 | 7.64 | 95.93% | 7.56 | 96.90%
PLBartForCausalLM | 8.73 | 7.63 | 114.40% | 7.37 | 118.58%
PLBartForConditionalGeneration | 10.46 | 8.50 | 122.98% | 8.16 | 128.13%
PegasusForCausalLM | 7.18 | 7.37 | 97.42% | 6.64 | 108.22%
PegasusForConditionalGeneration | 16.47 | 16.66 | 98.87% | 14.18 | 116.13%
RobertaForCausalLM | 10.30 | 9.95 | 103.52% | 9.52 | 108.25%
RobertaForQuestionAnswering | 6.37 | 7.13 | 89.28% | 6.79 | 93.87%
T5ForConditionalGeneration | 12.40 | 6.72 | 184.51% | 6.48 | 191.16%
T5Small | 12.02 | 6.66 | 180.55% | 6.32 | 190.33%
TrOCRForCausalLM | 14.12 | 13.31 | 106.11% | 12.45 | 113.41%
XGLMForCausalLM | 16.48 | 6.23 | 264.52% | 6.35 | 259.51%
XLNetLMHeadModel | 74.87 | 62.23 | 120.32% | 57.95 | 129.19%
YituTechConvBert | 20.21 | 10.50 | 192.48% | 9.97 | 202.72%
We are also seeing a ~9% improvement on an internal addmm benchmark.
This PR will also slightly reduce compilation time for AMD max-autotune: before this change we assessed every config with matrix_instr_nonkdim in [0, 16], but with this update we remove that and use 16 for all configs.
There is currently no CI to test max-autotune perf, but this will be enabled via https://github.com/pytorch/pytorch/pull/148672, after which we can investigate more tuning updates and config pruning.
cc @jeffdaily @sunway513 @jithunnair-amd @pruthvistony @ROCmSupport @dllehr-amd @hongxiayang @naromero77amd @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
| true
|
2,857,196,966
|
[Inductor][CPP] Eliminate the overhead of BRGEMM fetching for Half micro gemm on CPU Inductor
|
CaoE
|
open
|
[
"open source",
"Stale",
"ciflow/trunk",
"module: inductor",
"ciflow/inductor",
"release notes: inductor"
] | 2
|
COLLABORATOR
|
Split `brgemm` method into `brgemm_create` and `brgemm_execute` to avoid the overhead of key hashing and fetching for oneDNN BRGEMM object. Such overhead is not negligible when the gemm is very fast, e.g., on small shapes.
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
| true
|
2,857,099,062
|
[pt2-benchmarks] Compiler reset on every run
|
anijain2305
|
closed
|
[
"Merged",
"ciflow/trunk",
"topic: not user facing",
"module: dynamo",
"ciflow/inductor"
] | 7
|
CONTRIBUTOR
|
Internal benchmarks call `run` in a loop. Compiler reset gives a clean env
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @chauhang @amjames
| true
|
2,857,090,118
|
return value of Work.exception() can't be used in Python
|
sanshang-nv
|
open
|
[
"oncall: distributed",
"triaged"
] | 0
|
CONTRIBUTOR
|
### 🐛 Describe the bug
I can't get the error message in Python code with the `Work.exception()` function. [link](https://github.com/pytorch/pytorch/blob/2b30e94fc04878066b60554682368ee0b92d0128/torch/_C/_distributed_c10d.pyi#L263)
Or what's the right way to use this API?
```python
import os
import torch
import torch.distributed as dist

def do_comm_test(group):
rank = torch.distributed.get_rank()
local_rank = int(os.getenv('LOCAL_RANK'))
elem_num = 1024*1024*1024
device = f'cuda:{local_rank % 8}'
output_tensor_list = [
torch.zeros(elem_num, dtype=torch.int64, device=device)
for _ in range(dist.get_world_size(group))
]
input_tensor = torch.arange(elem_num, dtype=torch.int64, device=device) + rank * elem_num
work = torch.distributed.all_gather(
output_tensor_list,
input_tensor,
group,
True
)
work.wait()
exc = work.exception()
if exc is not None:
raise exc
```
Error message:
```
[rank0]: TypeError: Unregistered type : std::__exception_ptr::exception_ptr
[rank0]: The above exception was the direct cause of the following exception:
[rank0]: Traceback (most recent call last):
[rank0]: File "/workspace/distributedpulse/example/demo.py", line 73, in <module>
[rank0]: do_comm()
[rank0]: File "/workspace/distributedpulse/example/demo.py", line 65, in do_comm
[rank0]: do_comm_test(group)
[rank0]: File "/workspace/distributedpulse/example/demo.py", line 29, in do_comm_test
[rank0]: exc = work.exception()
[rank0]: TypeError: Unable to convert function return value to a Python type! The signature was
[rank0]: (self: torch._C._distributed_c10d.Work) -> std::__exception_ptr::exception_ptr
[rank0]: Did you forget to `#include <pybind11/stl.h>`? Or <pybind11/complex.h>,
[rank0]: <pybind11/functional.h>, <pybind11/chrono.h>, etc. Some automatic
[rank0]: conversions are optional and require extra headers to be included
[rank0]: when compiling your pybind11 module.
```
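A minimal workaround sketch (reusing the variables from the snippet above, and assuming the failure surfaces through `wait()`, which raises a Python exception when the collective fails), so `work.exception()` does not need to be called at all:
```python
work = torch.distributed.all_gather(
    output_tensor_list,
    input_tensor,
    group,
    True,
)
try:
    work.wait()  # raises if the collective failed
except RuntimeError as err:
    print(f"all_gather failed: {err}")
```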
### Versions
Collecting environment information...
PyTorch version: 2.5.0a0+b465a5843b.nv24.09
Is debug build: False
CUDA used to build PyTorch: 12.6
ROCM used to build PyTorch: N/A
OS: Ubuntu 22.04.4 LTS (x86_64)
GCC version: (Ubuntu 11.4.0-1ubuntu1~22.04) 11.4.0
Clang version: Could not collect
CMake version: version 3.30.2
Libc version: glibc-2.35
Python version: 3.10.12 (main, Jul 29 2024, 16:56:48) [GCC 11.4.0] (64-bit runtime)
Python platform: Linux-5.15.0-1030-nvidia-x86_64-with-glibc2.35
Is CUDA available: True
CUDA runtime version: 12.6.68
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration:
GPU 0: NVIDIA H100 80GB HBM3
GPU 1: NVIDIA H100 80GB HBM3
GPU 2: NVIDIA H100 80GB HBM3
GPU 3: NVIDIA H100 80GB HBM3
GPU 4: NVIDIA H100 80GB HBM3
GPU 5: NVIDIA H100 80GB HBM3
GPU 6: NVIDIA H100 80GB HBM3
GPU 7: NVIDIA H100 80GB HBM3
Nvidia driver version: 535.216.03
cuDNN version: Probably one of the following:
/usr/lib/x86_64-linux-gnu/libcudnn.so.9.4.0
/usr/lib/x86_64-linux-gnu/libcudnn_adv.so.9.4.0
/usr/lib/x86_64-linux-gnu/libcudnn_cnn.so.9.4.0
/usr/lib/x86_64-linux-gnu/libcudnn_engines_precompiled.so.9.4.0
/usr/lib/x86_64-linux-gnu/libcudnn_engines_runtime_compiled.so.9.4.0
/usr/lib/x86_64-linux-gnu/libcudnn_graph.so.9.4.0
/usr/lib/x86_64-linux-gnu/libcudnn_heuristic.so.9.4.0
/usr/lib/x86_64-linux-gnu/libcudnn_ops.so.9.4.0
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 46 bits physical, 57 bits virtual
Byte Order: Little Endian
CPU(s): 128
On-line CPU(s) list: 0-127
Vendor ID: GenuineIntel
Model name: Intel(R) Xeon(R) Platinum 8462Y+
CPU family: 6
Model: 143
Thread(s) per core: 2
Core(s) per socket: 32
Socket(s): 2
Stepping: 8
Frequency boost: enabled
CPU max MHz: 2801.0000
CPU min MHz: 800.0000
BogoMIPS: 5600.00
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall n
q avx512_vnni avx512_bitalg tme avx512_vpopcntdq la57 rdpid bus_lock_detect cldemote movdiri movdir64b enqcmd fsrm md_clear serialize tsxldtrk pconfig arch_lbr am
Virtualization: VT-x
L1d cache: 3 MiB (64 instances)
L1i cache: 2 MiB (64 instances)
L2 cache: 128 MiB (64 instances)
L3 cache: 120 MiB (2 instances)
NUMA node(s): 2
NUMA node0 CPU(s): 0-31,64-95
NUMA node1 CPU(s): 32-63,96-127
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Not affected
Vulnerability Retbleed: Not affected
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl and seccomp
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Enhanced IBRS, IBPB conditional, RSB filling, PBRSB-eIBRS SW sequence
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
cc @H-Huang @awgu @kwen2501 @wanchaol @fegin @fduwjj @wz337 @wconstab @d4l3k @c-p-i-o
| true
|
2,857,069,718
|
Fix test_device_memory_allocated
|
Stonepia
|
closed
|
[
"open source",
"Merged",
"module: testing",
"ciflow/trunk",
"topic: not user facing",
"ciflow/xpu",
"module: xpu"
] | 5
|
CONTRIBUTOR
|
Fixes #147310
The tensor allocated by `torch.ones` is released immediately, so the following assertion fails.
This PR stores it into a temp variable to fix it.
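A minimal sketch of the fix pattern (variable names here are illustrative, not the actual test code): keeping a reference to the allocation prevents it from being freed before the assertion runs.
```python
current_alloc = torch.xpu.memory_allocated(0)
tmp = torch.ones(1024, device="xpu:0")  # keep a reference so the allocation stays alive
assert torch.xpu.memory_allocated(0) > current_alloc
del tmp
```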
cc @gujinghui @EikanWang @fengyuan14 @guangyey
| true
|
2,857,060,067
|
[XPU] test_device_memory_allocated failed
|
Stonepia
|
closed
|
[
"triaged",
"module: testing",
"module: xpu"
] | 1
|
CONTRIBUTOR
|
### 🐛 Describe the bug
The following test failed:
```Bash
pytest -k test_device_memory_allocated test/test_xpu.py
```
Failure message:
```Bash
File "/home/pytorch/test/test_xpu.py", line 483, in test_device_memory_allocated
self.assertGreater(torch.xpu.memory_allocated(0), current_alloc[0])
AssertionError: 0 not greater than 0
```
This is because the allocated memory is released immediately. So we need a PR to fix it.
### Versions
[conda] pytorch-triton-xpu 3.2.0+gite98b6fcb pypi_0 pypi
[conda] torch 2.7.0.dev20250212+xpu pypi_0 pypi
cc @gujinghui @EikanWang @fengyuan14 @guangyey
| true
|
2,857,035,354
|
Why doesn't work.wait work?
|
FieeFlip
|
closed
|
[
"oncall: distributed"
] | 4
|
NONE
|
### 🐛 Describe the bug
```Python
import os
import torch
import torch.distributed as dist
import time
def main():
    # Initialize the process group (NCCL backend for optimized GPU communication)
dist.init_process_group(backend="nccl")
    # Get info about the current process
rank = dist.get_rank()
local_rank = int(os.environ["LOCAL_RANK"])
world_size = dist.get_world_size()
    # Set the current GPU device
torch.cuda.set_device(local_rank)
    # Create a different initial tensor per process (each GPU generates different data)
tensor = torch.randn((10000,10000), device=f'cuda:{local_rank}') * (rank + 1)
#print(f"Rank {rank} 初始数据: {tensor.cpu().numpy()}")
# 执行all_reduce求和操作(同步所有GPU数据)
# dist.barrier()
for i in range(3):
a = time.time()
work = dist.all_reduce(tensor, op=dist.ReduceOp.SUM, async_op=True)
work.wait()
b = time.time()
#time.sleep(3)
        if work.is_completed():  # check whether the work has completed
print(f"rank:{rank}, duration:{work._get_duration()}ms")
else:
print("Work did not succeed!")
#print(f"Rank {rank} 聚合结果: {tensor.cpu().numpy()}")
print(f"setp: {i}, Rank {rank} AllReduce耗时: {(b - a)*1000:.4f}ms")
time.sleep(3)
if local_rank == 0:
print("-------------------------------------------------------------------------------------")
    # Clean up the process group
dist.destroy_process_group()
if __name__ == "__main__":
main()
```
```Python
import os
import torch
os.environ['TORCH_NCCL_ENABLE_TIMING'] = "1"
torch.__version__ # '2.4.1+cu121'
```
exec command:
```bash
torchrun --nproc_per_node=2 /kaggle/working/distributed_allreduce.py
```
The result is:
```bash
Work did not succeed!Work did not succeed!
step: 0, Rank 1 AllReduce time: 349.9851msstep: 0, Rank 0 AllReduce time: 349.4253ms
-------------------------------------------------------------------------------------
Work did not succeed!
step: 1, Rank 0 AllReduce time: 0.2873ms
Work did not succeed!
step: 1, Rank 1 AllReduce time: 0.2887ms
-------------------------------------------------------------------------------------
Work did not succeed!
step: 2, Rank 0 AllReduce time: 0.2966ms
Work did not succeed!
step: 2, Rank 1 AllReduce time: 0.2232ms
-------------------------------------------------------------------------------------
```
Does this mean that when `async_op=True`, `work.wait` also doesn't work?
### Versions
Collecting environment information...
PyTorch version: 2.4.1+cu121
Is debug build: False
CUDA used to build PyTorch: 12.1
ROCM used to build PyTorch: N/A
OS: Ubuntu 22.04.3 LTS (x86_64)
GCC version: (Ubuntu 11.4.0-1ubuntu1~22.04) 11.4.0
Clang version: 14.0.0-1ubuntu1.1
CMake version: version 3.30.3
Libc version: glibc-2.35
Python version: 3.10.12 (main, Sep 11 2024, 15:47:36) [GCC 11.4.0] (64-bit runtime)
Python platform: Linux-6.6.56+-x86_64-with-glibc2.35
Is CUDA available: True
CUDA runtime version: 12.2.140
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration:
GPU 0: Tesla T4
GPU 1: Tesla T4
Nvidia driver version: 560.35.03
cuDNN version: Probably one of the following:
/usr/lib/x86_64-linux-gnu/libcudnn.so.8.9.6
/usr/lib/x86_64-linux-gnu/libcudnn_adv_infer.so.8.9.6
/usr/lib/x86_64-linux-gnu/libcudnn_adv_train.so.8.9.6
/usr/lib/x86_64-linux-gnu/libcudnn_cnn_infer.so.8.9.6
/usr/lib/x86_64-linux-gnu/libcudnn_cnn_train.so.8.9.6
/usr/lib/x86_64-linux-gnu/libcudnn_ops_infer.so.8.9.6
/usr/lib/x86_64-linux-gnu/libcudnn_ops_train.so.8.9.6
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 46 bits physical, 48 bits virtual
Byte Order: Little Endian
CPU(s): 4
On-line CPU(s) list: 0-3
Vendor ID: GenuineIntel
Model name: Intel(R) Xeon(R) CPU @ 2.00GHz
CPU family: 6
Model: 85
Thread(s) per core: 2
Core(s) per socket: 2
Socket(s): 1
Stepping: 3
BogoMIPS: 4000.26
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ss ht syscall nx pdpe1gb rdtscp lm constant_tsc rep_good nopl xtopology nonstop_tsc cpuid tsc_known_freq pni pclmulqdq ssse3 fma cx16 pcid sse4_1 sse4_2 x2apic movbe popcnt aes xsave avx f16c rdrand hypervisor lahf_lm abm 3dnowprefetch pti ssbd ibrs ibpb stibp fsgsbase tsc_adjust bmi1 hle avx2 smep bmi2 erms invpcid rtm mpx avx512f avx512dq rdseed adx smap clflushopt clwb avx512cd avx512bw avx512vl xsaveopt xsavec xgetbv1 xsaves arat md_clear arch_capabilities
Hypervisor vendor: KVM
Virtualization type: full
L1d cache: 64 KiB (2 instances)
L1i cache: 64 KiB (2 instances)
L2 cache: 2 MiB (2 instances)
L3 cache: 38.5 MiB (1 instance)
NUMA node(s): 1
NUMA node0 CPU(s): 0-3
Vulnerability Gather data sampling: Not affected
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Mitigation; PTE Inversion
Vulnerability Mds: Mitigation; Clear CPU buffers; SMT Host state unknown
Vulnerability Meltdown: Mitigation; PTI
Vulnerability Mmio stale data: Vulnerable: Clear CPU buffers attempted, no microcode; SMT Host state unknown
Vulnerability Reg file data sampling: Not affected
Vulnerability Retbleed: Mitigation; IBRS
Vulnerability Spec rstack overflow: Not affected
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; IBRS; IBPB conditional; STIBP conditional; RSB filling; PBRSB-eIBRS Not affected; BHI SW loop, KVM SW loop
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Mitigation; Clear CPU buffers; SMT Host state unknown
Versions of relevant libraries:
[pip3] jaxlib==0.4.26+cuda12.cudnn89
[pip3] mypy-extensions==1.0.0
[pip3] numpy==1.26.4
[pip3] nvidia-nccl-cu12==2.23.4
[pip3] nvtx==0.2.10
[pip3] onnx==1.17.0
[pip3] optree==0.12.1
[pip3] pynvjitlink-cu12==0.3.0
[pip3] pytorch-ignite==0.5.1
[pip3] pytorch-lightning==2.4.0
[pip3] torch==2.4.1+cu121
[pip3] torchaudio==2.4.1+cu121
[pip3] torchinfo==1.8.0
[pip3] torchmetrics==1.6.0
[pip3] torchsummary==1.5.1
[pip3] torchtune==0.4.0
[pip3] torchvision==0.19.1+cu121
[conda] Could not collect
cc @H-Huang @awgu @kwen2501 @wanchaol @fegin @fduwjj @wz337 @wconstab @d4l3k @c-p-i-o
| true
|
2,857,029,125
|
Update slow tests
|
pytorchupdatebot
|
closed
|
[
"open source",
"Merged",
"ciflow/trunk",
"topic: not user facing",
"ciflow/slow",
"ci-no-td"
] | 3
|
COLLABORATOR
|
This PR is auto-generated weekly by [this action](https://github.com/pytorch/pytorch/blob/main/.github/workflows/weekly.yml).
Update the list of slow tests.
| true
|
2,857,015,371
|
Match view node and _unsafe_view node, as they have same schema
|
pralay-das
|
closed
|
[
"triaged",
"open source",
"topic: not user facing",
"module: inductor"
] | 7
|
CONTRIBUTOR
|
**Description:** We have observed that in some cases the pattern creates a `view` node, while the original model has an `_unsafe_view` node in its place. Because both schemas are the same, we match these nodes and proceed further.
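For reference, a quick sketch (not part of this PR) to inspect the two schemas side by side:
```python
import torch

# The two signatures match apart from the alias annotation on `view`.
print(torch.ops.aten.view.default._schema)
print(torch.ops.aten._unsafe_view.default._schema)
```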
Fixes #ISSUE_NUMBER
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
| true
|
2,856,957,670
|
Allow XPU device for validating the arguments to sparse compressed tensor factory functions
|
xytintel
|
closed
|
[
"open source",
"Merged",
"ciflow/trunk",
"release notes: sparse",
"ciflow/xpu",
"release notes: xpu"
] | 3
|
CONTRIBUTOR
|
During Sparse tensor conversion, a validity check is performed. We need to allow XPU to pass this check.
| true
|
2,856,934,034
|
[Dynamo] Allow dynamo to handle 'or' operator between two dicts
|
shink
|
closed
|
[
"triaged",
"open source",
"Merged",
"ciflow/trunk",
"topic: not user facing",
"module: dynamo"
] | 22
|
CONTRIBUTOR
|
Fixes #146538
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @chauhang @amjames
| true
|
2,856,910,042
|
Optimize `Sequential` methods description
|
zeshengzong
|
closed
|
[
"triaged",
"open source",
"Merged",
"ciflow/trunk",
"release notes: nn",
"topic: docs"
] | 10
|
CONTRIBUTOR
|
Fixes #146892
Add method descriptions and examples to the [`Sequential` documentation](https://pytorch.org/docs/stable/generated/torch.nn.Sequential.html)
## Test Result
### Before

### After



| true
|
2,856,808,094
|
fp16 channels_last created Nan in batchnorm backward
|
jthakurH
|
closed
|
[
"module: cpu",
"triaged",
"bug"
] | 1
|
NONE
|
### 🐛 Describe the bug
If the input uses the channels_last memory format, there are NaNs in the output of backward:
```
import torch
op = torch.nn.BatchNorm2d(num_features=3, affine=True, track_running_stats=True)
ifm = torch.empty(size=[16, 3, 224, 224]).uniform_(0, 1).to(dtype=torch.float16)
ifm = ifm.contiguous(memory_format=torch.channels_last) # this creates Nan in output
ifm = ifm.requires_grad_()
res = op(ifm)
bwd_tensor = torch.empty(size=res.shape).uniform_(0, 1).to(dtype=torch.float16)
res.backward(bwd_tensor)
print(ifm.grad)
```
Output
```
tensor([[[[nan, nan, nan, ..., nan, nan, nan],
[nan, nan, nan, ..., nan, nan, nan],
[nan, nan, nan, ..., nan, nan, nan],
...,
[nan, nan, nan, ..., nan, nan, nan],
[nan, nan, nan, ..., nan, nan, nan],
[nan, nan, nan, ..., nan, nan, nan]],
[[nan, nan, nan, ..., nan, nan, nan],
[nan, nan, nan, ..., nan, nan, nan],
[nan, nan, nan, ..., nan, nan, nan],
```
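For comparison, a small sanity-check sketch (hypothetical, not a fix): the same repro kept in float32 with channels_last, to confirm whether the NaNs are specific to fp16.
```python
import torch

op = torch.nn.BatchNorm2d(num_features=3, affine=True, track_running_stats=True)
ifm = torch.empty(size=[16, 3, 224, 224]).uniform_(0, 1)  # float32, no fp16 cast
ifm = ifm.contiguous(memory_format=torch.channels_last).requires_grad_()
res = op(ifm)
res.backward(torch.empty(size=res.shape).uniform_(0, 1))
print(torch.isnan(ifm.grad).any())
```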
### Versions
[pip3] numpy==1.23.5
[pip3] torch==2.6.0
cc @jgong5 @mingfeima @XiaobingSuper @sanchitintel @ashokei @jingxu10
| true
|
2,856,804,638
|
Update torch-xpu-ops commit pin
|
xytintel
|
closed
|
[
"open source",
"Merged",
"ciflow/trunk",
"topic: not user facing",
"keep-going",
"ciflow/xpu"
] | 6
|
CONTRIBUTOR
|
Update the torch-xpu-ops commit to [b421032c8fed40df5eaee395c2e7f5f8a7bcc815](https://github.com/intel/torch-xpu-ops/commit/b421032c8fed40df5eaee395c2e7f5f8a7bcc815), includes:
- Correct int4 weight pack implementation
- Enhance build system: only build one shared library for the user
| true
|
2,856,628,135
|
Remove deprecated method and attribute in `LRScheduler`
|
zeshengzong
|
closed
|
[
"triaged",
"open source",
"Merged",
"ciflow/trunk",
"suppress-bc-linter",
"release notes: optim"
] | 14
|
CONTRIBUTOR
|
Following the [#99270 suggestion](https://github.com/pytorch/pytorch/issues/99270#issuecomment-1511656408), remove the deprecated method `LRScheduler.print_lr`
_____
# BC-breaking note
**`LRScheduler.print_lr()` along with the `verbose` kwarg to the LRScheduler constructor has been deprecated since release 2.2. Please use `LRScheduler.get_last_lr()` to access the learning rate instead.**
`print_lr` and `verbose` were confusing, not properly documented and were little used, as described in #99270, so we deprecated them in 2.2. Now, we complete the deprecation by removing them completely. To access and print the learning rate of a LRScheduler:
In 2.6.0
```
optim = ...
lrsched = torch.optim.lr_scheduler.ReduceLROnPlateau(optim, verbose=True)
# lrsched will internally call print_lr
```
In 2.7.0
```
optim = ...
lrsched = torch.optim.lr_scheduler.ReduceLROnPlateau(optim)
print(lrsched.get_last_lr())
```
| true
|
2,856,610,589
|
switch from deprecated `find_package(CUDA)` to `find_package(CUDAToolkit)`
|
h-vetinari
|
open
|
[
"module: mkldnn",
"open source"
] | 2
|
CONTRIBUTOR
|
Towards #76082
In conda-forge we've recently started running into hard problems related to #76082; the vast majority of our builds (pytorch itself excluded now, but true for most of its dependents, e.g. `torch{vision,audio,...}`) get built on agents without a physical GPU. It's sufficient to have a complete toolchain (without the GPU) available to build the packages correctly, but pytorch's use of the long-deprecated `find_package(CUDA)` breaks this; see https://github.com/conda-forge/cuda-feedstock/issues/59, https://github.com/conda-forge/pytorch-cpu-feedstock/issues/333, etc.
To help get the builds for 2.5 & 2.6 back on track, I came up with the following hack job of a patch. It's the (intentionally) minimal set of changes I could come up with - while it's really not upstreamable as-is, I thought I'd at least "throw over the fence" what ended up working for us.
pytorch has a sprawling set of CMake files, and (for example) the double-loading of CUDA and CUDAToolkit is also something I don't pretend to understand, so this will need some support from people who are knowledgeable about this. As an example of where my intention of keeping things simple for the patch we carry on the feedstock stands in contrast to being able to merge this:
`torch_cuda_get_nvcc_gencode_flag` relies on functionality (`cuda_select_nvcc_arch_flags`) that's not available anymore when switching to `find_package(CUDAToolkit)` - AFAIU, because it should be replaced by `CMAKE_CUDA_ARCHITECTURES`.
To avoid auditing and rewriting all the call-sites of `torch_cuda_get_nvcc_gencode_flag`, I just vendored `cuda_select_nvcc_arch_flags` [from CMake](https://github.com/Kitware/CMake/blob/master/Modules/FindCUDA/select_compute_arch.cmake). A proper solution would of course mean deeper surgery on this.
PS. at least one of the vendored submodules (`tensorpipe`) also [contains](https://github.com/pytorch/tensorpipe/blob/52791a2fd214b2a9dc5759d36725909c1daa7f2e/tensorpipe/CMakeLists.txt#L237) calls to `find_package(CUDA)`. I don't know how an update there would even be possible, given that https://github.com/pytorch/tensorpipe has been archived. (edit: nevermind, this is discussed in the issue already; either tensorpipe needs to be dropped or the submodule needs to be pointed to a remote that contains https://github.com/pytorch/tensorpipe/pull/454).
cc @gujinghui @PenghuiCheng @XiaobingSuper @jianyuh @jgong5 @mingfeima @sanchitintel @ashokei @jingxu10 @min-jean-cho @yanbing-j @Guobing-Chen @Xia-Weiwen @snadampal
| true
|
2,856,571,238
|
[MPS][BE] Turn `exec_unary_kernel` as MetalShaderLibrary method
|
malfet
|
closed
|
[
"Merged",
"release notes: mps",
"ciflow/mps"
] | 6
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #147299
* #147297
* #147296
Also delete the duplicate implementations from SpecialOps and UnaryKernel.
Change the input and output argument order for SpecialOps kernels to match those of UnaryOps
Fixes https://github.com/pytorch/pytorch/issues/146770
| true
|
2,856,544,973
|
[Inductor][CPP] Add the legalize low fp support for index expr
|
leslie-fang-intel
|
closed
|
[
"open source",
"Merged",
"ciflow/trunk",
"topic: not user facing",
"module: inductor",
"ciflow/inductor"
] | 4
|
COLLABORATOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #147298
**Summary**
Fix issue: https://github.com/pytorch/pytorch/issues/147279. The test case produced a low-precision floating-point value using `ops.index_expr`, but the CPP backend did not handle its legalization. This PR adds support for it.
**Test Plan**
```
python -u -m pytest -s -v test/inductor/test_cpu_repro.py -k test_low_fp_index_expr_issue_147279
```
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
| true
|
2,856,543,455
|
[BE] Make `exec_unary_kernel` take TensorIterator as argument
|
malfet
|
closed
|
[
"Merged",
"topic: not user facing",
"release notes: mps",
"ciflow/mps"
] | 3
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #147299
* __->__ #147297
* #147296
| true
|
2,856,543,329
|
[BE] Switch all structured funcs to stubs
|
malfet
|
closed
|
[
"Merged",
"topic: not user facing",
"release notes: mps",
"ciflow/mps"
] | 2
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #147299
* #147297
* __->__ #147296
No need to have a separate foobar_out_mps when registering a dispatch to foobar_stub will do the job.
This also makes `exec_unary_kernel`, as defined in UnaryKernel.mm and
SpecialOps.mm, look very similar
| true
|
2,856,475,051
|
What new features does libtorch 2.6 contain for C++?
|
mullerhai
|
closed
|
[] | 1
|
NONE
|
### 🚀 The feature, motivation and pitch
Hi,
I use libtorch from C++ code. PyTorch 2.6 has been released, so what new features does libtorch 2.6 contain?
### Alternatives
_No response_
### Additional context
_No response_
| true
|
2,856,435,153
|
[executorch hash update] update the pinned executorch hash
|
pytorchupdatebot
|
closed
|
[
"open source",
"Merged",
"ciflow/trunk",
"topic: not user facing",
"ciflow/inductor"
] | 3
|
COLLABORATOR
|
This PR is auto-generated nightly by [this action](https://github.com/pytorch/pytorch/blob/main/.github/workflows/nightly.yml).
Update the pinned executorch hash.
| true
|
2,856,383,844
|
Fix overflow in checkInBoundsForStorage
|
mikaylagawarecki
|
closed
|
[] | 2
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #147293
| true
|
2,856,335,055
|
Fix arvr macOS buck pytorch builds
|
stepanhruda
|
closed
|
[
"module: cpu",
"fb-exported",
"Merged",
"ciflow/trunk",
"release notes: quantization"
] | 7
|
CONTRIBUTOR
|
Summary:
X-link: https://github.com/ctrl-labs/src2/pull/42453
buck arvr macOS builds had a few issues that needed fixing.
Test Plan: build with buck
Differential Revision: D69722372
cc @jgong5 @mingfeima @XiaobingSuper @sanchitintel @ashokei @jingxu10
| true
|
2,856,320,479
|
Do not use username for inductor default_cache_dir
|
cocktailpeanut
|
open
|
[
"triaged",
"open source",
"Stale",
"topic: not user facing",
"module: inductor"
] | 4
|
NONE
|
The existing approach of using the session username fails when:
1. There is no username for the current session
2. The username includes one or more spaces (compile fails)
Unless there's an important reason why this needs to be based on username, it seems much cleaner to just use "torchinductor_cache_dir", which fixes all of the above issues.
If there is an important reason for this, please let me know.
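For illustration, a rough sketch of the kind of fallback being proposed (the function name and layout here are hypothetical, not the actual inductor code):
```python
import os
import tempfile

def default_cache_dir():
    # No username component, so it still works when getpass fails
    # or when the username contains spaces.
    return os.path.join(tempfile.gettempdir(), "torchinductor_cache_dir")
```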
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
| true
|
2,856,201,558
|
[BE]: Enable ruff rule SIM113
|
Skylion007
|
closed
|
[
"oncall: distributed",
"open source",
"better-engineering",
"Merged",
"ciflow/trunk",
"release notes: onnx",
"release notes: quantization",
"topic: not user facing",
"fx",
"module: dynamo",
"ciflow/inductor"
] | 6
|
COLLABORATOR
|
Lint rule that tells the user to avoid keeping track of their own counter and to use the builtin enumerate when possible.
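For context, a minimal illustration (not taken from the PR diff) of the pattern SIM113 flags:
```python
items = ["a", "b", "c"]

# Flagged: manually maintained counter
idx = 0
for item in items:
    print(idx, item)
    idx += 1

# Preferred: builtin enumerate
for idx, item in enumerate(items):
    print(idx, item)
```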
cc @H-Huang @awgu @kwen2501 @wanchaol @fegin @fduwjj @wz337 @wconstab @d4l3k @c-p-i-o @ezyang @SherlockNoMad @EikanWang @jgong5 @wenzhe-nrv @voznesenskym @penguinwu @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @jiayisunx @chenyang78 @kadeng @chauhang @amjames
| true
|
2,856,195,976
|
fixed optimizer load_state_dict
|
egg-west
|
open
|
[
"triaged",
"open source",
"release notes: optim"
] | 2
|
CONTRIBUTOR
|
Fixes #147288
| true
|
2,856,195,820
|
Calling optimizer.load_state_dict causes Tensors lost
|
egg-west
|
open
|
[
"module: optimizer",
"triaged"
] | 5
|
CONTRIBUTOR
|
# Description
`Optimizer.load_state_dict(state_dict: StateDict)` will overwrite its own Tensors with the provided `state_dict` instead of copying the Tensor values from `state_dict` to the existing tensors.
Here is example code showing the issue:
```python
import numpy as np
import torch
from torch import nn
"""Test whether the load_dict of an optimizer create new Tensors"""
# A toy problem that maps a 3-dim zeros to 3-dim ones
y = torch.zeros((1, 3))
x = torch.ones((1, 3))
def get_first_optim_state(opt):
opt_state = None
for group in opt.param_groups:
for p in group["params"]:
if "exp_avg" in opt.state[p]:
opt_state = opt.state[p]['exp_avg']
break
break
return opt_state
class Net(nn.Module):
def __init__(self):
super().__init__()
self.a = nn.Linear(3, 2)
self.b = nn.Linear(2, 3)
def forward(self, x):
return self.b(self.a(x))
# Adam initializes its states when taking steps
# Step 1. Train a_model for 2 steps
a_model = Net()
a_optim = torch.optim.Adam(a_model.parameters())
a_optim_state_id = 0
for step_id in range(2):
y_ = a_model(x)
loss = ((y_ - y)**2).mean()
a_optim.zero_grad()
loss.backward()
a_optim.step()
a_optim_state = get_first_optim_state(a_optim)
a_optim_state_id = id(a_optim_state)
print(f"{a_optim_state=}, {a_optim_state_id=}\n\n")
# Step 2. Train b_model for 2 steps
b_model = Net()
b_optim = torch.optim.Adam(b_model.parameters())
b_optim_state_id = 0
for step_id in range(2):
y_ = b_model(x)
loss = ((y_ - y)**2).mean()
b_optim.zero_grad()
loss.backward()
b_optim.step()
b_optim_state = get_first_optim_state(b_optim)
b_optim_state_id = id(b_optim_state)
print(f"{b_optim_state=}, {b_optim_state_id=}\n\n")
a_optim.load_state_dict(b_optim.state_dict())
new_a_optim_state = get_first_optim_state(a_optim)
new_a_optim_id = id(new_a_optim_state)
# check `load_state_dict` on values
print(f"{new_a_optim_state=}\n")
# check `load_state_dict` on memory
print(f"{new_a_optim_id=}")
print(f"{(new_a_optim_id == a_optim_state_id)=}") # False
print(f"{(new_a_optim_id == b_optim_state_id)=}") # True
```
# Motivation for the change
- enable `optimizer.load_state_dict` to behave consistently with `Tensor.load_state_dict` and `Module.load_state_dict` (they copy values)
- disable unexpected Tensor creation
- Say we create a reference for the state of an optimizer. After the optimizer loads some state_dict, this reference fails to locate the up-to-date optimizer states.
# Solution
Please see the PR below. The `state_dict` of an optimizer is customized by the creators, hence we need to support customized data types (e.g. an instance of a customized class, where some elements might be Tensors).
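A minimal sketch of the copy-in-place semantics being proposed (this is not the actual PR implementation; it assumes both optimizers were built over identically ordered parameters and their states are already initialized):
```python
import torch

def load_state_inplace_(dst_optim, src_optim):
    for dst_group, src_group in zip(dst_optim.param_groups, src_optim.param_groups):
        for dst_p, src_p in zip(dst_group["params"], src_group["params"]):
            dst_state, src_state = dst_optim.state[dst_p], src_optim.state[src_p]
            for k, v in src_state.items():
                if torch.is_tensor(v) and torch.is_tensor(dst_state.get(k)):
                    dst_state[k].copy_(v)  # keep the existing tensor object alive
                else:
                    dst_state[k] = v       # non-tensor entries, e.g. step counters
```
With a helper like this, `load_state_inplace_(a_optim, b_optim)` would leave `id(get_first_optim_state(a_optim))` unchanged while updating its values.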
cc @vincentqb @jbschlosser @albanD @janeyx99 @crcrpar
| true
|
2,856,132,762
|
[inductor][fuzzer] legacy `torch.native_batch_norm` should be removed
|
WLFJ
|
open
|
[
"triaged",
"oncall: pt2",
"module: decompositions"
] | 4
|
NONE
|
### 🐛 Describe the bug
Eager mode works fine, but Inductor hits an assertion error.
Reproduction example:
```python
import torch
@torch.compile
def f(*args):
sym_0, sym_1, sym_2, sym_3, sym_4, sym_5, sym_6, sym_7 = args
var_442 = torch.empty_permuted(size=sym_0, physical_layout=sym_1)
var_647 = torch.triu_indices(row=sym_2, col=sym_3, offset=sym_4)
return torch.native_batch_norm(input=var_442, weight=var_647, bias=None, running_mean=None, running_var=None, training=sym_5, momentum=sym_6, eps=sym_7)
res = f((0,), (0,), 1024, 1, 0, False, 1., 1.,)
print(res)
```
### Error logs
```
File "/home/yvesw/miniconda3/envs/torch-preview/lib/python3.11/site-packages/torch/_decomp/decompositions.py", line 1939, in _native_batch_norm_legit_no_stats
output, save_mean, save_rstd, _, _ = native_batch_norm_helper(
^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/yvesw/miniconda3/envs/torch-preview/lib/python3.11/site-packages/torch/_decomp/decompositions.py", line 1774, in native_batch_norm_helper
assert running_mean is not None and running_var is not None
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
torch._dynamo.exc.TorchRuntimeError: Failed running call_function <built-in method native_batch_norm of type object at 0x7f1255f8ef60>(*(), **{'input': FakeTensor(..., size=(0,)), 'weight': FakeTensor(..., size=(2, 1), dtype=torch.int64), 'bias': None, 'running_mean': None, 'running_var': None, 'training': False, 'momentum': 1.0, 'eps': 1.0}):
from user code:
File "/home/yvesw/reborn2-expr/250216-bugs/test-11.py", line 9, in f
return torch.native_batch_norm(input=var_442, weight=var_647, bias=None, running_mean=None, running_var=None, training=sym_5, momentum=sym_6, eps=sym_7)
Set TORCH_LOGS="+dynamo" and TORCHDYNAMO_VERBOSE=1 for more information
```
### Versions
PyTorch 2.7.0.dev20250209+cu124
cc @chauhang @penguinwu @SherlockNoMad
| true
|
2,856,038,349
|
[MPS][BE] Use stubs for floor/ceil/round/trunc
|
malfet
|
closed
|
[
"Merged",
"topic: not user facing",
"release notes: mps",
"ciflow/mps"
] | 4
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #147266
* __->__ #147286
To avoid duplicating the logic that those ops are no-ops for integral dtypes
(and in preparation for adding `round_decimals`, which calls round_stub when decimals is 0).
Tested the corner cases by manually invoking `round`, `trunc`, `floor` and `ceil` for int dtypes
| true
|
2,856,037,907
|
[MPS][BE] Use stubs for floor/ceil/round/trunc
|
malfet
|
closed
|
[
"topic: not user facing",
"release notes: mps",
"ciflow/mps"
] | 2
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* (to be filled)
To avoid duplicating logic that those ops are no-ops for integral dtypes
(And in preparation of adding `round_decimals` that calls round_stub if decimals are 0)
| true
|
2,855,994,427
|
torch.softmax gives weird results under a specific condition on CUDA double tensors with multiple rows and 781 columns.
|
Xenadon
|
open
|
[
"module: nn",
"module: cuda",
"triaged"
] | 4
|
NONE
|
### 🐛 Describe the bug
`torch.softmax(torch.zeros(c,781, dtype=torch.float64, device='cuda'),dim=1)` gives a wrong answer for `c > 1`.
```python
import torch
print(torch.softmax(torch.zeros(1,781, dtype=torch.float64, device='cuda'),dim=1)) # correct
print(torch.softmax(torch.zeros(2,781, dtype=torch.float64, device='cuda'),dim=1)) # wrong
print(torch.softmax(torch.zeros(3,781, dtype=torch.float64, device='cuda'),dim=1)) # wrong
print(torch.softmax(torch.zeros(4,781, dtype=torch.float64, device='cuda'),dim=1)) # wrong
print(torch.softmax(torch.zeros(5,781, dtype=torch.float64, device='cuda'),dim=1)) # wrong
print(torch.softmax(torch.zeros(2,780, dtype=torch.float64, device='cuda'),dim=1)) # correct
print(torch.softmax(torch.zeros(2,782, dtype=torch.float64, device='cuda'),dim=1)) # correct
```
output:
```
tensor([[0.0013, 0.0013, 0.0013, 0.0013, 0.0013, 0.0013, 0.0013, 0.0013, 0.0013,
# ... (many lines of 0.0013)
0.0013, 0.0013, 0.0013, 0.0013, 0.0013, 0.0013, 0.0013, 0.0013, 0.0013,
0.0013, 0.0013, 0.0013, 0.0013, 0.0013, 0.0013, 0.0013]],
device='cuda:0', dtype=torch.float64)
tensor([[0.0013, 0.0013, 0.0013, ..., 0.0006, 0.0006, 0.0006],
[0.0006, 0.0006, 0.0006, ..., 0.0006, 0.0006, 0.0006]],
device='cuda:0', dtype=torch.float64)
tensor([[0.0013, 0.0013, 0.0013, ..., 0.0006, 0.0006, 0.0006],
[0.0006, 0.0006, 0.0006, ..., 0.0006, 0.0006, 0.0006],
[0.0006, 0.0006, 0.0006, ..., 0.0013, 0.0013, 0.0013]],
device='cuda:0', dtype=torch.float64)
tensor([[0.0013, 0.0013, 0.0013, ..., 0.0006, 0.0006, 0.0006],
[0.0006, 0.0006, 0.0006, ..., 0.0006, 0.0006, 0.0006],
[0.0006, 0.0006, 0.0006, ..., 0.0006, 0.0006, 0.0006],
[0.0006, 0.0006, 0.0006, ..., 0.0006, 0.0006, 0.0006]],
device='cuda:0', dtype=torch.float64)
tensor([[0.0013, 0.0013, 0.0013, ..., 0.0006, 0.0006, 0.0006],
[0.0006, 0.0006, 0.0006, ..., 0.0006, 0.0006, 0.0006],
[0.0006, 0.0006, 0.0006, ..., 0.0006, 0.0006, 0.0006],
[0.0006, 0.0006, 0.0006, ..., 0.0006, 0.0006, 0.0006],
[0.0006, 0.0006, 0.0006, ..., 0.0013, 0.0013, 0.0013]],
device='cuda:0', dtype=torch.float64)
tensor([[0.0013, 0.0013, 0.0013, ..., 0.0013, 0.0013, 0.0013],
[0.0013, 0.0013, 0.0013, ..., 0.0013, 0.0013, 0.0013]],
device='cuda:0', dtype=torch.float64)
tensor([[0.0013, 0.0013, 0.0013, ..., 0.0013, 0.0013, 0.0013],
[0.0013, 0.0013, 0.0013, ..., 0.0013, 0.0013, 0.0013]],
device='cuda:0', dtype=torch.float64)
```
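For what it's worth, a small cross-check sketch that uses the CPU result as the reference (assuming a CUDA device is available); every entry of a zero-input row should be 1/781 (about 0.00128) and each row should sum to 1:
```python
import torch

x = torch.zeros(2, 781, dtype=torch.float64)
cpu = torch.softmax(x, dim=1)
gpu = torch.softmax(x.cuda(), dim=1).cpu()
print(torch.allclose(cpu, gpu))        # reportedly False on the affected setup
print(cpu.sum(dim=1), gpu.sum(dim=1))  # each row should sum to 1.0
```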
### Versions
```
Collecting environment information...
PyTorch version: 2.5.1+cu124
Is debug build: False
CUDA used to build PyTorch: 12.4
ROCM used to build PyTorch: N/A
OS: Ubuntu 22.04.5 LTS (x86_64)
GCC version: (Ubuntu 11.4.0-1ubuntu1~22.04) 11.4.0
Clang version: Could not collect
CMake version: Could not collect
Libc version: glibc-2.35
Python version: 3.12.8 | packaged by Anaconda, Inc. | (main, Dec 11 2024, 16:31:09) [GCC 11.2.0] (64-bit runtime)
Python platform: Linux-5.15.167.4-microsoft-standard-WSL2-x86_64-with-glibc2.35
Is CUDA available: True
CUDA runtime version: 12.5.82
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration: GPU 0: NVIDIA GeForce RTX 4060 Laptop GPU
Nvidia driver version: 556.13
cuDNN version: Could not collect
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 39 bits physical, 48 bits virtual
Byte Order: Little Endian
CPU(s): 32
On-line CPU(s) list: 0-31
Vendor ID: GenuineIntel
Model name: Intel(R) Core(TM) i9-14900HX
CPU family: 6
Model: 183
Thread(s) per core: 2
Core(s) per socket: 16
Socket(s): 1
Stepping: 1
BogoMIPS: 4838.39
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ss ht syscall nx pdpe1gb rdtscp lm constant_tsc rep_good nopl xtopology tsc_reliable nonstop_tsc cpuid pni pclmulqdq vmx ssse3 fma cx16 pcid sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand hypervisor lahf_lm abm 3dnowprefetch invpcid_single ssbd ibrs ibpb stibp ibrs_enhanced tpr_shadow vnmi ept vpid ept_ad fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid rdseed adx smap clflushopt clwb sha_ni xsaveopt xsavec xgetbv1 xsaves avx_vnni umip waitpkg gfni vaes vpclmulqdq rdpid movdiri movdir64b fsrm md_clear serialize flush_l1d arch_capabilities
Virtualization: VT-x
Hypervisor vendor: Microsoft
Virtualization type: full
L1d cache: 768 KiB (16 instances)
L1i cache: 512 KiB (16 instances)
L2 cache: 32 MiB (16 instances)
L3 cache: 36 MiB (1 instance)
Vulnerability Gather data sampling: Not affected
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Not affected
Vulnerability Reg file data sampling: Mitigation; Clear Register File
Vulnerability Retbleed: Mitigation; Enhanced IBRS
Vulnerability Spec rstack overflow: Not affected
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl and seccomp
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Enhanced / Automatic IBRS; IBPB conditional; RSB filling; PBRSB-eIBRS SW sequence; BHI BHI_DIS_S
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Versions of relevant libraries:
[pip3] equitorch==0.1
[pip3] numpy==2.2.2
[pip3] nvidia-cublas-cu12==12.4.5.8
[pip3] nvidia-cuda-cupti-cu12==12.4.127
[pip3] nvidia-cuda-nvrtc-cu12==12.4.127
[pip3] nvidia-cuda-runtime-cu12==12.4.127
[pip3] nvidia-cudnn-cu12==9.1.0.70
[pip3] nvidia-cufft-cu12==11.2.1.3
[pip3] nvidia-curand-cu12==10.3.5.147
[pip3] nvidia-cusolver-cu12==11.6.1.9
[pip3] nvidia-cusparse-cu12==12.3.1.170
[pip3] nvidia-nccl-cu12==2.21.5
[pip3] nvidia-nvjitlink-cu12==12.4.127
[pip3] nvidia-nvtx-cu12==12.4.127
[pip3] torch==2.5.1
[pip3] torch_cluster==1.6.3+pt25cu124
[pip3] torch-geometric==2.6.1
[pip3] torch_scatter==2.1.2+pt25cu124
[pip3] torch_sparse==0.6.18+pt25cu124
[pip3] torch_spline_conv==1.2.2+pt25cu124
[pip3] triton==3.2.0
[conda] equitorch 0.1 pypi_0 pypi
[conda] numpy 2.2.2 pypi_0 pypi
[conda] nvidia-cublas-cu12 12.4.5.8 pypi_0 pypi
[conda] nvidia-cuda-cupti-cu12 12.4.127 pypi_0 pypi
[conda] nvidia-cuda-nvrtc-cu12 12.4.127 pypi_0 pypi
[conda] nvidia-cuda-runtime-cu12 12.4.127 pypi_0 pypi
[conda] nvidia-cudnn-cu12 9.1.0.70 pypi_0 pypi
[conda] nvidia-cufft-cu12 11.2.1.3 pypi_0 pypi
[conda] nvidia-curand-cu12 10.3.5.147 pypi_0 pypi
[conda] nvidia-cusolver-cu12 11.6.1.9 pypi_0 pypi
[conda] nvidia-cusparse-cu12 12.3.1.170 pypi_0 pypi
[conda] nvidia-nccl-cu12 2.21.5 pypi_0 pypi
[conda] nvidia-nvjitlink-cu12 12.4.127 pypi_0 pypi
[conda] nvidia-nvtx-cu12 12.4.127 pypi_0 pypi
[conda] torch 2.5.1 pypi_0 pypi
[conda] torch-cluster 1.6.3+pt25cu124 pypi_0 pypi
[conda] torch-geometric 2.6.1 pypi_0 pypi
[conda] torch-scatter 2.1.2+pt25cu124 pypi_0 pypi
[conda] torch-sparse 0.6.18+pt25cu124 pypi_0 pypi
[conda] torch-spline-conv 1.2.2+pt25cu124 pypi_0 pypi
[conda] triton 3.2.0 pypi_0 pypi
```
cc @albanD @mruberry @jbschlosser @walterddr @mikaylagawarecki @ptrblck @msaroufim @eqy
| true
|
2,855,993,524
|
SerializeError for ScriptObject in AOTInductor
|
vbharadwaj-bk
|
closed
|
[
"triaged",
"oncall: pt2",
"oncall: export",
"module: aotinductor"
] | 5
|
NONE
|
### 🐛 Describe the bug
I am trying to export a model that uses a custom C++ class (a `_C.ScriptObject`) as a component of the model state. The export runs fine, but I get a SerializeError with `aoti_compile_and_package`. Minimal code is below (the C++ class is called `Test` and holds a single integer as its state).
The problem comes from `_export/serde/serialize.py`, where TorchBindObject is not recognized (see the error string below). On the other hand, TorchBindObject.value is a _C.ScriptObject that the serializer can handle. Perhaps there is a missing call to `.value` somewhere?
Error output:
```bash
...
File "/global/homes/v/vbharadw/.local/perlmutter/python-3.11/lib/python3.11/site-packages/torch/_export/serde/serialize.py", line 713, in serialize_inputs
arg=self.serialize_input(args[i], schema_arg.type),
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/global/homes/v/vbharadw/.local/perlmutter/python-3.11/lib/python3.11/site-packages/torch/_export/serde/serialize.py", line 1006, in serialize_input
raise SerializeError(f"Unsupported argument type: {type(arg)} with schema arg_type {arg_type}")
torch._export.serde.serialize.SerializeError: Unsupported argument type: <class 'torch._inductor.ir.TorchBindObject'> with schema arg_type __torch__.torch.classes.torch_wrapper.Test
```
Minimal Python binding code
```python
torch.ops.load_library(torch_wrapper.__file__)
@torch._library.register_fake_class("torch_wrapper::Test")
class FakeTest:
def __init__(
self,
x: int) -> None:
self.x = x
@classmethod
def __obj_unflatten__(cls, flattened_test):
return cls(**dict(flattened_test))
def __len__(self):
return 0
def __setstate__(self, state_dict):
self.x = state_dict["x"]
@torch.library.register_fake("torch_wrapper::add_constant")
def fake_add_constant(test, x):
return x.new_empty(*x.shape)
test = torch.classes.torch_wrapper.Test(5)
x = torch.ones(2, 2)
class Mod(torch.nn.Module):
def __init__(self) -> None:
super().__init__()
def forward(self, x: torch.Tensor):
return torch.ops.torch_wrapper.add_constant(test, x)
exported_program = torch.export.export(Mod(), args=(x,), strict=False)
print(exported_program)
output_path = torch._inductor.aoti_compile_and_package( # Fails here
exported_program,
package_path=os.path.join(os.getcwd(), "model.pt2"),
)
```
Export Result:
```bash
ExportedProgram:
class GraphModule(torch.nn.Module):
def forward(self, obj_lifted_custom_0, x: "f32[2, 2]"):
# File: /global/cfs/cdirs/m1982/vbharadw/equivariant_spmm/openequivariance/extlib/__init__.py:54 in forward, code: return torch.ops.torch_wrapper.add_constant(test, x)
add_constant: "f32[2, 2]" = torch.ops.torch_wrapper.add_constant.default(obj_lifted_custom_0, x); obj_lifted_custom_0 = x = None
return (add_constant,)
```
C++ Code
```c++
#include <pybind11/pybind11.h>
#include <ATen/Operators.h>
#include <torch/all.h>
#include <torch/library.h>
#include <iostream>
using namespace std;
class __attribute__ ((visibility ("default"))) Test : public torch::CustomClassHolder {
public:
int64_t x;
Test(int64_t x_in) {
x = x_in;
}
std::tuple<std::tuple<std::string, int64_t>> __obj_flatten__() {
return std::tuple(std::tuple("x", this->x));
}
};
torch::Tensor add_constant(
const c10::intrusive_ptr<Test>& instance, torch::Tensor x) {
torch::Tensor result = torch::zeros(x.sizes(), x.options());
const float* x_ptr = x.data_ptr<float>();
float* result_ptr = result.data_ptr<float>();
for(int64_t i = 0; i < x.numel(); i++) {
result_ptr[i] = ((float) instance->x) + x_ptr[i];
}
return result;
}
TORCH_LIBRARY_FRAGMENT(torch_wrapper, m) {
m.class_<Test>("Test")
.def(torch::init<int64_t>())
.def("__obj_flatten__", &Test::__obj_flatten__)
.def("__len__", [](const c10::intrusive_ptr<Test>& test) -> int64_t {
return 0;
})
.def_pickle(
// __getstate__
[](const c10::intrusive_ptr<Test>& self)
-> int64_t {
return self->x;
},
// __setstate__
[](int64_t state)
-> c10::intrusive_ptr<Test> {
return c10::make_intrusive<Test>(state);
});
m.def("add_constant(__torch__.torch.classes.torch_wrapper.Test test, Tensor x) -> Tensor");
};
TORCH_LIBRARY_IMPL(torch_wrapper, CPU, m) {
m.impl("add_constant", &add_constant);
};
PYBIND11_MODULE(torch_wrapper, m) {};
```
### Versions
Collecting environment information...
PyTorch version: 2.6.0+cu124
Is debug build: False
CUDA used to build PyTorch: 12.4
ROCM used to build PyTorch: N/A
OS: SUSE Linux Enterprise Server 15 SP4 (x86_64)
GCC version: (SUSE Linux) 12.3.0
Clang version: Could not collect
CMake version: version 3.20.4
Libc version: glibc-2.31
Python version: 3.11.7 | packaged by conda-forge | (main, Dec 23 2023, 14:43:09) [GCC 12.3.0] (64-bit runtime)
Python platform: Linux-5.14.21-150400.24.111_12.0.91-cray_shasta_c-x86_64-with-glibc2.31
Is CUDA available: True
CUDA runtime version: 12.2.91
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration: GPU 0: NVIDIA A100-PCIE-40GB
Nvidia driver version: 550.127.08
cuDNN version: Could not collect
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 48 bits physical, 48 bits virtual
Byte Order: Little Endian
CPU(s): 256
On-line CPU(s) list: 0-255
Vendor ID: AuthenticAMD
Model name: AMD EPYC 7713 64-Core Processor
CPU family: 25
Model: 1
Thread(s) per core: 2
Core(s) per socket: 64
Socket(s): 2
Stepping: 1
BogoMIPS: 3992.88
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ht syscall nx mmxext fxsr_opt pdpe1gb rdtscp lm constant_tsc rep_good nopl nonstop_tsc cpuid extd_apicid aperfmperf rapl pni pclmulqdq monitor ssse3 fma cx16 pcid sse4_1 sse4_2 x2apic movbe popcnt aes xsave avx f16c rdrand lahf_lm cmp_legacy svm extapic cr8_legacy abm sse4a misalignsse 3dnowprefetch osvw ibs skinit wdt tce topoext perfctr_core perfctr_nb bpext perfctr_llc mwaitx cpb cat_l3 cdp_l3 invpcid_single hw_pstate ssbd mba ibrs ibpb stibp vmmcall fsgsbase bmi1 avx2 smep bmi2 erms invpcid cqm rdt_a rdseed adx smap clflushopt clwb sha_ni xsaveopt xsavec xgetbv1 xsaves cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local clzero irperf xsaveerptr rdpru wbnoinvd amd_ppin arat npt lbrv svm_lock nrip_save tsc_scale vmcb_clean flushbyasid decodeassists pausefilter pfthreshold v_vmsave_vmload vgif v_spec_ctrl umip pku ospke vaes vpclmulqdq rdpid overflow_recov succor smca fsrm
Virtualization: AMD-V
L1d cache: 4 MiB (128 instances)
L1i cache: 4 MiB (128 instances)
L2 cache: 64 MiB (128 instances)
L3 cache: 512 MiB (16 instances)
NUMA node(s): 8
NUMA node0 CPU(s): 0-15,128-143
NUMA node1 CPU(s): 16-31,144-159
NUMA node2 CPU(s): 32-47,160-175
NUMA node3 CPU(s): 48-63,176-191
NUMA node4 CPU(s): 64-79,192-207
NUMA node5 CPU(s): 80-95,208-223
NUMA node6 CPU(s): 96-111,224-239
NUMA node7 CPU(s): 112-127,240-255
Vulnerability Gather data sampling: Not affected
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Not affected
Vulnerability Reg file data sampling: Not affected
Vulnerability Retbleed: Not affected
Vulnerability Spec rstack overflow: Mitigation; Safe RET
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl and seccomp
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Retpolines, IBPB conditional, IBRS_FW, STIBP always-on, RSB filling, PBRSB-eIBRS Not affected
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Versions of relevant libraries:
[pip3] cuequivariance-ops-torch-cu11==0.2.0
[pip3] cuequivariance-ops-torch-cu12==0.2.0
[pip3] cuequivariance-torch==0.2.0
[pip3] numpy==1.26.3
[pip3] nvidia-cublas-cu12==12.4.5.8
[pip3] nvidia-cuda-cupti-cu12==12.4.127
[pip3] nvidia-cuda-nvrtc-cu12==12.4.127
[pip3] nvidia-cuda-runtime-cu12==12.4.127
[pip3] nvidia-cudnn-cu12==9.1.0.70
[pip3] nvidia-cufft-cu12==11.2.1.3
[pip3] nvidia-curand-cu12==10.3.5.147
[pip3] nvidia-cusolver-cu12==11.6.1.9
[pip3] nvidia-cusparse-cu12==12.3.1.170
[pip3] nvidia-cusparselt-cu12==0.6.2
[pip3] nvidia-nccl-cu12==2.21.5
[pip3] nvidia-nvjitlink-cu12==12.4.127
[pip3] nvidia-nvtx-cu12==12.4.127
[pip3] torch==2.6.0
[pip3] torch-ema==0.3
[pip3] torch-geometric==2.6.1
[pip3] torchmetrics==1.5.0
[pip3] triton==3.2.0
[conda] libblas 3.9.0 20_linux64_mkl conda-forge
[conda] libcblas 3.9.0 20_linux64_mkl conda-forge
[conda] liblapack 3.9.0 20_linux64_mkl conda-forge
[conda] mkl 2023.2.0 h84fe81f_50496 conda-forge
[conda] numpy 1.26.3 py311h64a7726_0 conda-forge
cc @chauhang @penguinwu @avikchaudhuri @gmagogsfm @zhxchen17 @tugsbayasgalan @angelayi @suo @ydwu4 @desertfire @chenyang78 @yushangdi
| true
|
2,855,992,412
|
assert fails to trigger inside torch.compile
|
ad8e
|
open
|
[
"triaged",
"oncall: pt2",
"module: decompositions",
"module: inductor",
"internal ramp-up task"
] | 4
|
CONTRIBUTOR
|
### 🐛 Describe the bug
```
import torch
@torch.compile
def func():
a = torch.tensor([1.0, -2.0])
test = torch.all(a > 0)
print(test)
assert test, "should throw"
print("should not run")
func()
```
Output:
```
tensor(False)
should not run
```
Removing the torch.compile causes the assert to work.
Searching finds https://github.com/pytorch/pytorch/issues/120409, no idea if related
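A possible workaround sketch while the in-graph assert is not honored: return the condition from the compiled function and assert on it in plain Python, which does raise as expected:
```python
import torch

@torch.compile
def compute():
    a = torch.tensor([1.0, -2.0])
    return torch.all(a > 0)

test = compute()
assert test, "should throw"  # raises AssertionError
```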
### Error logs
_No response_
### Versions
```
PyTorch version: 2.5.1
Is debug build: False
CUDA used to build PyTorch: 12.6
ROCM used to build PyTorch: N/A
OS: Ubuntu 22.04.5 LTS (x86_64)
GCC version: (Ubuntu 11.4.0-1ubuntu1~22.04) 11.4.0
Clang version: Could not collect
CMake version: version 3.31.2
Libc version: glibc-2.35
Python version: 3.10.12 (main, Nov 6 2024, 20:22:13) [GCC 11.4.0] (64-bit runtime)
Python platform: Linux-6.5.13-650-3434-22042-coreweave-1-x86_64-with-glibc2.35
Is CUDA available: True
CUDA runtime version: 12.6.85
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration: GPU 0: NVIDIA H100 80GB HBM3
Nvidia driver version: 535.216.01
cuDNN version: Probably one of the following:
/usr/lib/x86_64-linux-gnu/libcudnn.so.9.5.1
/usr/lib/x86_64-linux-gnu/libcudnn_adv.so.9.5.1
/usr/lib/x86_64-linux-gnu/libcudnn_cnn.so.9.5.1
/usr/lib/x86_64-linux-gnu/libcudnn_engines_precompiled.so.9.5.1
/usr/lib/x86_64-linux-gnu/libcudnn_engines_runtime_compiled.so.9.5.1
/usr/lib/x86_64-linux-gnu/libcudnn_graph.so.9.5.1
/usr/lib/x86_64-linux-gnu/libcudnn_heuristic.so.9.5.1
/usr/lib/x86_64-linux-gnu/libcudnn_ops.so.9.5.1
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 46 bits physical, 57 bits virtual
Byte Order: Little Endian
CPU(s): 128
On-line CPU(s) list: 0-127
Vendor ID: GenuineIntel
Model name: Intel(R) Xeon(R) Platinum 8462Y+
CPU family: 6
Model: 143
Thread(s) per core: 2
Core(s) per socket: 32
Socket(s): 2
Stepping: 8
CPU max MHz: 4100.0000
CPU min MHz: 800.0000
BogoMIPS: 5600.00
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc art arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc cpuid aperfmperf tsc_known_freq pni pclmulqdq dtes64 monitor ds_cpl vmx smx est tm2 ssse3 sdbg fma cx16 xtpr pdcm pcid dca sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm 3dnowprefetch cpuid_fault epb cat_l3 cat_l2 cdp_l3 invpcid_single cdp_l2 ssbd mba ibrs ibpb stibp ibrs_enhanced tpr_shadow flexpriority ept vpid ept_ad fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid cqm rdt_a avx512f avx512dq rdseed adx smap avx512ifma clflushopt clwb intel_pt avx512cd sha_ni avx512bw avx512vl xsaveopt xsavec xgetbv1 xsaves cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local split_lock_detect avx_vnni avx512_bf16 wbnoinvd dtherm ida arat pln pts hfi vnmi avx512vbmi umip pku ospke waitpkg avx512_vbmi2 gfni vaes vpclmulqdq avx512_vnni avx512_bitalg tme avx512_vpopcntdq la57 rdpid bus_lock_detect cldemote movdiri movdir64b enqcmd fsrm md_clear serialize tsxldtrk pconfig arch_lbr ibt amx_bf16 avx512_fp16 amx_tile amx_int8 flush_l1d arch_capabilities
Virtualization: VT-x
L1d cache: 3 MiB (64 instances)
L1i cache: 2 MiB (64 instances)
L2 cache: 128 MiB (64 instances)
L3 cache: 120 MiB (2 instances)
NUMA node(s): 2
NUMA node0 CPU(s): 0,2,4,6,8,10,12,14,16,18,20,22,24,26,28,30,32,34,36,38,40,42,44,46,48,50,52,54,56,58,60,62,64,66,68,70,72,74,76,78,80,82,84,86,88,90,92,94,96,98,100,102,104,106,108,110,112,114,116,118,120,122,124,126
NUMA node1 CPU(s): 1,3,5,7,9,11,13,15,17,19,21,23,25,27,29,31,33,35,37,39,41,43,45,47,49,51,53,55,57,59,61,63,65,67,69,71,73,75,77,79,81,83,85,87,89,91,93,95,97,99,101,103,105,107,109,111,113,115,117,119,121,123,125,127
Vulnerability Gather data sampling: Not affected
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Not affected
Vulnerability Retbleed: Not affected
Vulnerability Spec rstack overflow: Not affected
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Enhanced / Automatic IBRS; IBPB conditional; RSB filling; PBRSB-eIBRS SW sequence; BHI BHI_DIS_S
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Versions of relevant libraries:
[pip3] clip-anytorch==2.6.0
[pip3] dctorch==0.1.2
[pip3] DISTS-pytorch==0.1
[pip3] gpytorch==1.13
[pip3] lovely-numpy==0.2.13
[pip3] mypy-extensions==1.0.0
[pip3] natten==0.17.4.dev0+torch251cu126
[pip3] numpy==1.24.4
[pip3] torch==2.5.1
[pip3] torchaudio==2.5.0
[pip3] torchdiffeq==0.2.5
[pip3] torchsde==0.2.6
[pip3] torchvision==0.20.0
[pip3] triton==3.1.0
[pip3] welford-torch==0.2.4
[conda] Could not collect
```
cc @chauhang @penguinwu @SherlockNoMad @voznesenskym @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @amjames @aakhundov
| true
|
2,855,983,879
|
[inductor][dynamo][fuzzer] `torch.bucketize` causes dynamo maximum recursion depth exceeded error
|
WLFJ
|
open
|
[
"triaged",
"bug",
"oncall: pt2",
"module: dynamo"
] | 0
|
NONE
|
### 🐛 Describe the bug
Reproduction example:
```python
import torch
@torch.compile
def f(*args):
var_92, sym_1 = args
return torch.bucketize(sym_1, var_92)
var_92 = torch.randn(size=(1000,))
res = f(var_92, 1)
print(res)
```
This example works fine in eager mode, but hits a maximum-recursion-depth-exceeded error with `torch.compile`.
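A hedged workaround sketch (not verified against this exact nightly): pass the value as a 0-dim tensor so the tensor overload is traced instead of the scalar one:
```python
import torch

@torch.compile
def f(boundaries, value):
    return torch.bucketize(value, boundaries)

boundaries = torch.randn(1000).sort().values  # bucketize expects sorted boundaries
print(f(boundaries, torch.tensor(1.0)))
```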
### Error logs
The error log is too long; please see [this gist](https://gist.github.com/WLFJ/43e30aea69f4be1876bd9ad00660f207).
### Versions
PyTorch 2.7.0.dev20250209+cu124
cc @chauhang @penguinwu @voznesenskym @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @amjames
| true
|
2,855,930,735
|
[inductor][fuzzer] Inconsistent Behavior with `aten.linalg_tensorinv`
|
WLFJ
|
open
|
[
"triaged",
"oncall: pt2",
"topic: fuzzer"
] | 3
|
NONE
|
### 🐛 Describe the bug
# Description
When executing the following PyTorch program using `torch.compile`, the result is inconsistent with the expected mathematical behavior.
```python
import torch
@torch.compile
def f(*args):
sym_5, sym_6, sym_7, sym_8 = args
var_714 = torch.ops.aten.fft_fftfreq(n=sym_5, d=sym_6, pin_memory=sym_7)
return torch.ops.aten.linalg_tensorinv(self=var_714, ind=sym_8)
res = f(1, 1.0, 0, 1)
print(res)
```
# Unexpected Behavior
The program prints:
```
tensor([inf])
```
However, the matrix described in the code should be singular, making inversion mathematically impossible.
Falling back to eager mode reveals this:
```
Traceback (most recent call last):
File "/home/yvesw/reborn2-expr/250216-bugs/test-9.py", line 9, in <module>
res = f(1, 1.0, 0, 1)
^^^^^^^^^^^^^^^
File "/home/yvesw/reborn2-expr/250216-bugs/test-9.py", line 7, in f
return torch.ops.aten.linalg_tensorinv(self=var_714, ind=sym_8)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/yvesw/miniconda3/envs/torch-preview/lib/python3.11/site-packages/torch/_ops.py", line 1156, in __call__
return self._op(*args, **(kwargs or {}))
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
torch._C._LinAlgError: inv: The diagonal element 1 is zero, the inversion could not be completed because the input matrix is singular.
```
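For reference, a small eager-only check (using the same ops as the report) showing that the input really is a singular 1x1 matrix:
```python
import torch

var_714 = torch.ops.aten.fft_fftfreq(n=1, d=1.0)
print(var_714)  # tensor([0.]), i.e. the 1x1 matrix [[0.]], which is singular
try:
    torch.ops.aten.linalg_tensorinv(self=var_714, ind=1)
except torch._C._LinAlgError as e:
    print("eager raises:", e)
```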
### Error logs
This is a silent inconsistency; feel free to ask me for any further logs you need!
### Versions
PyTorch 2.7.0.dev20250209+cu124
cc @chauhang @penguinwu
| true
|
2,855,915,051
|
[inductor][fuzzer] Compilation Error in `torch.arange` + `torch.sum` with `torch.float16`
|
WLFJ
|
closed
|
[
"oncall: pt2",
"oncall: cpu inductor"
] | 2
|
NONE
|
### 🐛 Describe the bug
Reproduction example:
```python
import torch
def f(*args):
sym_0, sym_1, sym_2, sym_3 = args
var_228 = torch.arange(start=sym_0, end=sym_1, dtype=sym_2)
return torch.sum(var_228, dim=sym_3)
res = f(300, 1024, torch.float16, (0,))
print(res)
res = torch.compile(f)(300, 1024, torch.float16, (0,))
print(res)
```
### Error logs
```
tensor(inf, dtype=torch.float16)
Traceback (most recent call last):
File "/home/yvesw/reborn2-expr/250216-bugs/test-7.py", line 12, in <module>
res = torch.compile(f)(300, 1024, torch.float16, (0,))
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/yvesw/miniconda3/envs/torch-preview/lib/python3.11/site-packages/torch/_dynamo/eval_frame.py", line 574, in _fn
raise e.remove_dynamo_frames() from None # see TORCHDYNAMO_VERBOSE=1
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/yvesw/miniconda3/envs/torch-preview/lib/python3.11/site-packages/torch/_inductor/compile_fx.py", line 752, in _compile_fx_inner
raise InductorError(e, currentframe()).with_traceback(
File "/home/yvesw/miniconda3/envs/torch-preview/lib/python3.11/site-packages/torch/_inductor/compile_fx.py", line 737, in _compile_fx_inner
mb_compiled_graph = fx_codegen_and_compile(
^^^^^^^^^^^^^^^^^^^^^^^
File "/home/yvesw/miniconda3/envs/torch-preview/lib/python3.11/site-packages/torch/_inductor/compile_fx.py", line 1402, in fx_codegen_and_compile
return scheme.codegen_and_compile(gm, example_inputs, inputs_to_check, graph_kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/yvesw/miniconda3/envs/torch-preview/lib/python3.11/site-packages/torch/_inductor/compile_fx.py", line 1122, in codegen_and_compile
compiled_fn = graph.compile_to_module().call
^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/yvesw/miniconda3/envs/torch-preview/lib/python3.11/site-packages/torch/_inductor/graph.py", line 1986, in compile_to_module
return self._compile_to_module()
^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/yvesw/miniconda3/envs/torch-preview/lib/python3.11/site-packages/torch/_inductor/graph.py", line 2028, in _compile_to_module
mod = PyCodeCache.load_by_key_path(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/yvesw/miniconda3/envs/torch-preview/lib/python3.11/site-packages/torch/_inductor/codecache.py", line 2757, in load_by_key_path
mod = _reload_python_module(key, path)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/yvesw/miniconda3/envs/torch-preview/lib/python3.11/site-packages/torch/_inductor/runtime/compile_tasks.py", line 51, in _reload_python_module
exec(code, mod.__dict__, mod.__dict__)
File "/tmp/torchinductor_yvesw/ws/cwsa64tecdx44pmb4n6sft6qbyqsezxmv4rtcrkwgeiagsqdketk.py", line 66, in <module>
async_compile.wait(globals())
File "/home/yvesw/miniconda3/envs/torch-preview/lib/python3.11/site-packages/torch/_inductor/async_compile.py", line 346, in wait
scope[key] = result.result()
^^^^^^^^^^^^^^^
File "/home/yvesw/miniconda3/envs/torch-preview/lib/python3.11/site-packages/torch/_inductor/codecache.py", line 3251, in result
return self.result_fn()
^^^^^^^^^^^^^^^^
File "/home/yvesw/miniconda3/envs/torch-preview/lib/python3.11/site-packages/torch/_inductor/codecache.py", line 2251, in future
result = get_result()
^^^^^^^^^^^^
File "/home/yvesw/miniconda3/envs/torch-preview/lib/python3.11/site-packages/torch/_inductor/codecache.py", line 2041, in load_fn
future.result()
File "/home/yvesw/miniconda3/envs/torch-preview/lib/python3.11/concurrent/futures/_base.py", line 456, in result
return self.__get_result()
^^^^^^^^^^^^^^^^^^^
File "/home/yvesw/miniconda3/envs/torch-preview/lib/python3.11/concurrent/futures/_base.py", line 401, in __get_result
raise self._exception
File "/home/yvesw/miniconda3/envs/torch-preview/lib/python3.11/concurrent/futures/thread.py", line 58, in run
result = self.fn(*self.args, **self.kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/yvesw/miniconda3/envs/torch-preview/lib/python3.11/site-packages/torch/_inductor/codecache.py", line 2082, in _worker_compile_cpp
cpp_builder.build()
File "/home/yvesw/miniconda3/envs/torch-preview/lib/python3.11/site-packages/torch/_inductor/cpp_builder.py", line 1530, in build
run_compile_cmd(build_cmd, cwd=_build_tmp_dir)
File "/home/yvesw/miniconda3/envs/torch-preview/lib/python3.11/site-packages/torch/_inductor/cpp_builder.py", line 347, in run_compile_cmd
_run_compile_cmd(cmd_line, cwd)
File "/home/yvesw/miniconda3/envs/torch-preview/lib/python3.11/site-packages/torch/_inductor/cpp_builder.py", line 342, in _run_compile_cmd
raise exc.CppCompileError(cmd, output) from e
torch._inductor.exc.InductorError: CppCompileError: C++ compile error
Command:
g++ /tmp/torchinductor_yvesw/ho/chole63jz4of2nowqa7bme4sza2oknwb4jzyrongdynezyxsv2uk.cpp -D TORCH_INDUCTOR_CPP_WRAPPER -D STANDALONE_TORCH_HEADER -D C10_USING_CUSTOM_GENERATED_MACROS -D CPU_CAPABILITY_AVX2 -shared -fPIC -O3 -DNDEBUG -fno-trapping-math -funsafe-math-optimizations -ffinite-math-only -fno-signed-zeros -fno-math-errno -fexcess-precision=fast -fno-finite-math-only -fno-unsafe-math-optimizations -ffp-contract=off -fno-tree-loop-vectorize -march=native -Wall -std=c++17 -Wno-unused-variable -Wno-unknown-pragmas -fopenmp -I/home/yvesw/miniconda3/envs/torch-preview/include/python3.11 -I/home/yvesw/miniconda3/envs/torch-preview/lib/python3.11/site-packages/torch/include -I/home/yvesw/miniconda3/envs/torch-preview/lib/python3.11/site-packages/torch/include/torch/csrc/api/include -mavx2 -mfma -mf16c -D_GLIBCXX_USE_CXX11_ABI=1 -ltorch -ltorch_cpu -ltorch_python -lgomp -L/home/yvesw/miniconda3/envs/torch-preview/lib -L/home/yvesw/miniconda3/envs/torch-preview/lib/python3.11/site-packages/torch/lib -o /tmp/torchinductor_yvesw/ho/chole63jz4of2nowqa7bme4sza2oknwb4jzyrongdynezyxsv2uk.so
Output:
/tmp/torchinductor_yvesw/ho/chole63jz4of2nowqa7bme4sza2oknwb4jzyrongdynezyxsv2uk.cpp: In function ‘void kernel(half*)’:
/tmp/torchinductor_yvesw/ho/chole63jz4of2nowqa7bme4sza2oknwb4jzyrongdynezyxsv2uk.cpp:17:53: error: no match for ‘operator+’ (operand types are ‘at::vec::CPU_CAPABILITY::VectorizedN<float, 2>’ and ‘at::vec::CPU_CAPABILITY::Vectorized<c10::Half>’)
17 | tmp_acc0_vec = tmp_acc0_vec + tmp2;
| ~~~~~~~~~~~~ ^ ~~~~
| | |
| | at::vec::CPU_CAPABILITY::Vectorized<c10::Half>
| at::vec::CPU_CAPABILITY::VectorizedN<float, 2>
In file included from /home/yvesw/miniconda3/envs/torch-preview/lib/python3.11/site-packages/torch/include/ATen/NumericUtils.h:14,
from /tmp/torchinductor_yvesw/3b/c3bi5gk6mslf6u4iaqafhxm64z6u65e3eain4xlary5blqnvv6xx.h:19,
from /tmp/torchinductor_yvesw/ho/chole63jz4of2nowqa7bme4sza2oknwb4jzyrongdynezyxsv2uk.cpp:2:
/home/yvesw/miniconda3/envs/torch-preview/lib/python3.11/site-packages/torch/include/c10/util/complex.h:353:22: note: candidate: ‘template<class T> constexpr c10::complex<U> c10::operator+(const complex<U>&)’
353 | constexpr complex<T> operator+(const complex<T>& val) {
| ^~~~~~~~
/home/yvesw/miniconda3/envs/torch-preview/lib/python3.11/site-packages/torch/include/c10/util/complex.h:353:22: note: candidate expects 1 argument, 2 provided
/home/yvesw/miniconda3/envs/torch-preview/lib/python3.11/site-packages/torch/include/c10/util/complex.h:363:22: note: candidate: ‘template<class T> constexpr c10::complex<U> c10::operator+(const complex<U>&, const complex<U>&)’
363 | constexpr complex<T> operator+(const complex<T>& lhs, const complex<T>& rhs) {
| ^~~~~~~~
/home/yvesw/miniconda3/envs/torch-preview/lib/python3.11/site-packages/torch/include/c10/util/complex.h:363:22: note: template argument deduction/substitution failed:
/tmp/torchinductor_yvesw/ho/chole63jz4of2nowqa7bme4sza2oknwb4jzyrongdynezyxsv2uk.cpp:17:55: note: ‘at::vec::CPU_CAPABILITY::VectorizedN<float, 2>’ is not derived from ‘const c10::complex<U>’
17 | tmp_acc0_vec = tmp_acc0_vec + tmp2;
| ^~~~
/home/yvesw/miniconda3/envs/torch-preview/lib/python3.11/site-packages/torch/include/c10/util/complex.h:369:22: note: candidate: ‘template<class T> constexpr c10::complex<U> c10::operator+(const complex<U>&, const T&)’
369 | constexpr complex<T> operator+(const complex<T>& lhs, const T& rhs) {
| ^~~~~~~~
/home/yvesw/miniconda3/envs/torch-preview/lib/python3.11/site-packages/torch/include/c10/util/complex.h:369:22: note: template argument deduction/substitution failed:
/tmp/torchinductor_yvesw/ho/chole63jz4of2nowqa7bme4sza2oknwb4jzyrongdynezyxsv2uk.cpp:17:55: note: ‘at::vec::CPU_CAPABILITY::VectorizedN<float, 2>’ is not derived from ‘const c10::complex<U>’
17 | tmp_acc0_vec = tmp_acc0_vec + tmp2;
| ^~~~
/home/yvesw/miniconda3/envs/torch-preview/lib/python3.11/site-packages/torch/include/c10/util/complex.h:375:22: note: candidate: ‘template<class T> constexpr c10::complex<U> c10::operator+(const T&, const complex<U>&)’
375 | constexpr complex<T> operator+(const T& lhs, const complex<T>& rhs) {
| ^~~~~~~~
/home/yvesw/miniconda3/envs/torch-preview/lib/python3.11/site-packages/torch/include/c10/util/complex.h:375:22: note: template argument deduction/substitution failed:
/tmp/torchinductor_yvesw/ho/chole63jz4of2nowqa7bme4sza2oknwb4jzyrongdynezyxsv2uk.cpp:17:55: note: ‘at::vec::CPU_CAPABILITY::Vectorized<c10::Half>’ is not derived from ‘const c10::complex<U>’
17 | tmp_acc0_vec = tmp_acc0_vec + tmp2;
| ^~~~
/home/yvesw/miniconda3/envs/torch-preview/lib/python3.11/site-packages/torch/include/c10/util/complex.h:443:28: note: candidate: ‘template<class fT, class iT, typename std::enable_if<(is_floating_point_v<fT> && is_integral_v<iT>), int>::type <anonymous> > constexpr c10::complex<U> c10::operator+(const complex<U>&, const iT&)’
443 | constexpr c10::complex<fT> operator+(const c10::complex<fT>& a, const iT& b) {
| ^~~~~~~~
/home/yvesw/miniconda3/envs/torch-preview/lib/python3.11/site-packages/torch/include/c10/util/complex.h:443:28: note: template argument deduction/substitution failed:
/tmp/torchinductor_yvesw/ho/chole63jz4of2nowqa7bme4sza2oknwb4jzyrongdynezyxsv2uk.cpp:17:55: note: ‘at::vec::CPU_CAPABILITY::VectorizedN<float, 2>’ is not derived from ‘const c10::complex<U>’
17 | tmp_acc0_vec = tmp_acc0_vec + tmp2;
| ^~~~
/home/yvesw/miniconda3/envs/torch-preview/lib/python3.11/site-packages/torch/include/c10/util/complex.h:448:28: note: candidate: ‘template<class fT, class iT, typename std::enable_if<(is_floating_point_v<fT> && is_integral_v<iT>), int>::type <anonymous> > constexpr c10::complex<U> c10::operator+(const iT&, const complex<U>&)’
448 | constexpr c10::complex<fT> operator+(const iT& a, const c10::complex<fT>& b) {
| ^~~~~~~~
/home/yvesw/miniconda3/envs/torch-preview/lib/python3.11/site-packages/torch/include/c10/util/complex.h:448:28: note: template argument deduction/substitution failed:
/tmp/torchinductor_yvesw/ho/chole63jz4of2nowqa7bme4sza2oknwb4jzyrongdynezyxsv2uk.cpp:17:55: note: ‘at::vec::CPU_CAPABILITY::Vectorized<c10::Half>’ is not derived from ‘const c10::complex<U>’
17 | tmp_acc0_vec = tmp_acc0_vec + tmp2;
| ^~~~
In file included from /home/yvesw/miniconda3/envs/torch-preview/lib/python3.11/site-packages/torch/include/ATen/cpu/vec/vec256/vec256.h:8,
from /home/yvesw/miniconda3/envs/torch-preview/lib/python3.11/site-packages/torch/include/ATen/cpu/vec/vec.h:7,
from /home/yvesw/miniconda3/envs/torch-preview/lib/python3.11/site-packages/torch/include/ATen/cpu/vec/functional_base.h:6,
from /home/yvesw/miniconda3/envs/torch-preview/lib/python3.11/site-packages/torch/include/ATen/cpu/vec/functional.h:3,
from /tmp/torchinductor_yvesw/3b/c3bi5gk6mslf6u4iaqafhxm64z6u65e3eain4xlary5blqnvv6xx.h:39:
/home/yvesw/miniconda3/envs/torch-preview/lib/python3.11/site-packages/torch/include/ATen/cpu/vec/vec_base.h:660:41: note: candidate: ‘template<class T> at::vec::CPU_CAPABILITY::Vectorized<T> at::vec::CPU_CAPABILITY::operator+(const Vectorized<T>&, const Vectorized<T>&)’
660 | template <class T> Vectorized<T> inline operator+(const Vectorized<T> &a, const Vectorized<T> &b) {
| ^~~~~~~~
/home/yvesw/miniconda3/envs/torch-preview/lib/python3.11/site-packages/torch/include/ATen/cpu/vec/vec_base.h:660:41: note: template argument deduction/substitution failed:
/tmp/torchinductor_yvesw/ho/chole63jz4of2nowqa7bme4sza2oknwb4jzyrongdynezyxsv2uk.cpp:17:55: note: ‘at::vec::CPU_CAPABILITY::VectorizedN<float, 2>’ is not derived from ‘const at::vec::CPU_CAPABILITY::Vectorized<T>’
17 | tmp_acc0_vec = tmp_acc0_vec + tmp2;
| ^~~~
In file included from /home/yvesw/miniconda3/envs/torch-preview/lib/python3.11/site-packages/torch/include/ATen/cpu/vec/vec_base.h:1195:
/home/yvesw/miniconda3/envs/torch-preview/lib/python3.11/site-packages/torch/include/ATen/cpu/vec/vec_n.h:351:37: note: candidate: ‘template<class T, int N> at::vec::CPU_CAPABILITY::VectorizedN<T, N> at::vec::CPU_CAPABILITY::operator+(const VectorizedN<T, N>&, const VectorizedN<T, N>&)’
351 | VECTORIZEDN_DEFINE_BINARY_OP_GLOBAL(operator+)
| ^~~~~~~~
/home/yvesw/miniconda3/envs/torch-preview/lib/python3.11/site-packages/torch/include/ATen/cpu/vec/vec_n.h:320:28: note: in definition of macro ‘VECTORIZEDN_DEFINE_BINARY_OP_GLOBAL’
320 | inline VectorizedN<T, N> op( \
| ^~
/home/yvesw/miniconda3/envs/torch-preview/lib/python3.11/site-packages/torch/include/ATen/cpu/vec/vec_n.h:351:37: note: template argument deduction/substitution failed:
351 | VECTORIZEDN_DEFINE_BINARY_OP_GLOBAL(operator+)
| ^~~~~~~~
/home/yvesw/miniconda3/envs/torch-preview/lib/python3.11/site-packages/torch/include/ATen/cpu/vec/vec_n.h:320:28: note: in definition of macro ‘VECTORIZEDN_DEFINE_BINARY_OP_GLOBAL’
320 | inline VectorizedN<T, N> op( \
| ^~
/tmp/torchinductor_yvesw/ho/chole63jz4of2nowqa7bme4sza2oknwb4jzyrongdynezyxsv2uk.cpp:17:55: note: ‘at::vec::CPU_CAPABILITY::Vectorized<c10::Half>’ is not derived from ‘const at::vec::CPU_CAPABILITY::VectorizedN<T, N>’
17 | tmp_acc0_vec = tmp_acc0_vec + tmp2;
| ^~~~
In file included from /home/yvesw/miniconda3/envs/torch-preview/lib/python3.11/site-packages/torch/include/c10/util/BFloat16.h:126,
from /home/yvesw/miniconda3/envs/torch-preview/lib/python3.11/site-packages/torch/include/ATen/NumericUtils.h:8:
/home/yvesw/miniconda3/envs/torch-preview/lib/python3.11/site-packages/torch/include/c10/util/BFloat16-inl.h:86:1: note: candidate: ‘c10::BFloat16 c10::operator+(const BFloat16&, const BFloat16&)’
86 | operator+(const BFloat16& a, const BFloat16& b) {
| ^~~~~~~~
/home/yvesw/miniconda3/envs/torch-preview/lib/python3.11/site-packages/torch/include/c10/util/BFloat16-inl.h:86:27: note: no known conversion for argument 1 from ‘at::vec::CPU_CAPABILITY::VectorizedN<float, 2>’ to ‘const c10::BFloat16&’
86 | operator+(const BFloat16& a, const BFloat16& b) {
| ~~~~~~~~~~~~~~~~^
/home/yvesw/miniconda3/envs/torch-preview/lib/python3.11/site-packages/torch/include/c10/util/BFloat16-inl.h:146:30: note: candidate: ‘float c10::operator+(BFloat16, float)’
146 | inline C10_HOST_DEVICE float operator+(BFloat16 a, float b) {
| ^~~~~~~~
/home/yvesw/miniconda3/envs/torch-preview/lib/python3.11/site-packages/torch/include/c10/util/BFloat16-inl.h:146:49: note: no known conversion for argument 1 from ‘at::vec::CPU_CAPABILITY::VectorizedN<float, 2>’ to ‘c10::BFloat16’
146 | inline C10_HOST_DEVICE float operator+(BFloat16 a, float b) {
| ~~~~~~~~~^
/home/yvesw/miniconda3/envs/torch-preview/lib/python3.11/site-packages/torch/include/c10/util/BFloat16-inl.h:159:30: note: candidate: ‘float c10::operator+(float, BFloat16)’
159 | inline C10_HOST_DEVICE float operator+(float a, BFloat16 b) {
| ^~~~~~~~
/home/yvesw/miniconda3/envs/torch-preview/lib/python3.11/site-packages/torch/include/c10/util/BFloat16-inl.h:159:46: note: no known conversion for argument 1 from ‘at::vec::CPU_CAPABILITY::VectorizedN<float, 2>’ to ‘float’
159 | inline C10_HOST_DEVICE float operator+(float a, BFloat16 b) {
| ~~~~~~^
/home/yvesw/miniconda3/envs/torch-preview/lib/python3.11/site-packages/torch/include/c10/util/BFloat16-inl.h:187:31: note: candidate: ‘double c10::operator+(BFloat16, double)’
187 | inline C10_HOST_DEVICE double operator+(BFloat16 a, double b) {
| ^~~~~~~~
/home/yvesw/miniconda3/envs/torch-preview/lib/python3.11/site-packages/torch/include/c10/util/BFloat16-inl.h:187:50: note: no known conversion for argument 1 from ‘at::vec::CPU_CAPABILITY::VectorizedN<float, 2>’ to ‘c10::BFloat16’
187 | inline C10_HOST_DEVICE double operator+(BFloat16 a, double b) {
| ~~~~~~~~~^
/home/yvesw/miniconda3/envs/torch-preview/lib/python3.11/site-packages/torch/include/c10/util/BFloat16-inl.h:200:31: note: candidate: ‘double c10::operator+(double, BFloat16)’
200 | inline C10_HOST_DEVICE double operator+(double a, BFloat16 b) {
| ^~~~~~~~
/home/yvesw/miniconda3/envs/torch-preview/lib/python3.11/site-packages/torch/include/c10/util/BFloat16-inl.h:200:48: note: no known conversion for argument 1 from ‘at::vec::CPU_CAPABILITY::VectorizedN<float, 2>’ to ‘double’
200 | inline C10_HOST_DEVICE double operator+(double a, BFloat16 b) {
| ~~~~~~~^
/home/yvesw/miniconda3/envs/torch-preview/lib/python3.11/site-packages/torch/include/c10/util/BFloat16-inl.h:215:33: note: candidate: ‘c10::BFloat16 c10::operator+(BFloat16, int)’
215 | inline C10_HOST_DEVICE BFloat16 operator+(BFloat16 a, int b) {
| ^~~~~~~~
/home/yvesw/miniconda3/envs/torch-preview/lib/python3.11/site-packages/torch/include/c10/util/BFloat16-inl.h:215:52: note: no known conversion for argument 1 from ‘at::vec::CPU_CAPABILITY::VectorizedN<float, 2>’ to ‘c10::BFloat16’
215 | inline C10_HOST_DEVICE BFloat16 operator+(BFloat16 a, int b) {
| ~~~~~~~~~^
/home/yvesw/miniconda3/envs/torch-preview/lib/python3.11/site-packages/torch/include/c10/util/BFloat16-inl.h:228:33: note: candidate: ‘c10::BFloat16 c10::operator+(int, BFloat16)’
228 | inline C10_HOST_DEVICE BFloat16 operator+(int a, BFloat16 b) {
| ^~~~~~~~
/home/yvesw/miniconda3/envs/torch-preview/lib/python3.11/site-packages/torch/include/c10/util/BFloat16-inl.h:228:47: note: no known conversion for argument 1 from ‘at::vec::CPU_CAPABILITY::VectorizedN<float, 2>’ to ‘int’
228 | inline C10_HOST_DEVICE BFloat16 operator+(int a, BFloat16 b) {
| ~~~~^
/home/yvesw/miniconda3/envs/torch-preview/lib/python3.11/site-packages/torch/include/c10/util/BFloat16-inl.h:243:33: note: candidate: ‘c10::BFloat16 c10::operator+(BFloat16, int64_t)’
243 | inline C10_HOST_DEVICE BFloat16 operator+(BFloat16 a, int64_t b) {
| ^~~~~~~~
/home/yvesw/miniconda3/envs/torch-preview/lib/python3.11/site-packages/torch/include/c10/util/BFloat16-inl.h:243:52: note: no known conversion for argument 1 from ‘at::vec::CPU_CAPABILITY::VectorizedN<float, 2>’ to ‘c10::BFloat16’
243 | inline C10_HOST_DEVICE BFloat16 operator+(BFloat16 a, int64_t b) {
| ~~~~~~~~~^
/home/yvesw/miniconda3/envs/torch-preview/lib/python3.11/site-packages/torch/include/c10/util/BFloat16-inl.h:256:33: note: candidate: ‘c10::BFloat16 c10::operator+(int64_t, BFloat16)’
256 | inline C10_HOST_DEVICE BFloat16 operator+(int64_t a, BFloat16 b) {
| ^~~~~~~~
/home/yvesw/miniconda3/envs/torch-preview/lib/python3.11/site-packages/torch/include/c10/util/BFloat16-inl.h:256:51: note: no known conversion for argument 1 from ‘at::vec::CPU_CAPABILITY::VectorizedN<float, 2>’ to ‘int64_t’ {aka ‘long int’}
256 | inline C10_HOST_DEVICE BFloat16 operator+(int64_t a, BFloat16 b) {
| ~~~~~~~~^
In file included from /home/yvesw/miniconda3/envs/torch-preview/lib/python3.11/site-packages/torch/include/c10/util/Float8_e4m3fn.h:240,
from /home/yvesw/miniconda3/envs/torch-preview/lib/python3.11/site-packages/torch/include/ATen/NumericUtils.h:9:
/home/yvesw/miniconda3/envs/torch-preview/lib/python3.11/site-packages/torch/include/c10/util/Float8_e4m3fn-inl.h:34:1: note: candidate: ‘c10::Float8_e4m3fn c10::operator+(const Float8_e4m3fn&, const Float8_e4m3fn&)’
34 | operator+(const Float8_e4m3fn& a, const Float8_e4m3fn& b) {
| ^~~~~~~~
/home/yvesw/miniconda3/envs/torch-preview/lib/python3.11/site-packages/torch/include/c10/util/Float8_e4m3fn-inl.h:34:32: note: no known conversion for argument 1 from ‘at::vec::CPU_CAPABILITY::VectorizedN<float, 2>’ to ‘const c10::Float8_e4m3fn&’
34 | operator+(const Float8_e4m3fn& a, const Float8_e4m3fn& b) {
| ~~~~~~~~~~~~~~~~~~~~~^
/home/yvesw/miniconda3/envs/torch-preview/lib/python3.11/site-packages/torch/include/c10/util/Float8_e4m3fn-inl.h:88:30: note: candidate: ‘float c10::operator+(Float8_e4m3fn, float)’
88 | inline C10_HOST_DEVICE float operator+(Float8_e4m3fn a, float b) {
| ^~~~~~~~
/home/yvesw/miniconda3/envs/torch-preview/lib/python3.11/site-packages/torch/include/c10/util/Float8_e4m3fn-inl.h:88:54: note: no known conversion for argument 1 from ‘at::vec::CPU_CAPABILITY::VectorizedN<float, 2>’ to ‘c10::Float8_e4m3fn’
88 | inline C10_HOST_DEVICE float operator+(Float8_e4m3fn a, float b) {
| ~~~~~~~~~~~~~~^
/home/yvesw/miniconda3/envs/torch-preview/lib/python3.11/site-packages/torch/include/c10/util/Float8_e4m3fn-inl.h:102:30: note: candidate: ‘float c10::operator+(float, Float8_e4m3fn)’
102 | inline C10_HOST_DEVICE float operator+(float a, Float8_e4m3fn b) {
| ^~~~~~~~
/home/yvesw/miniconda3/envs/torch-preview/lib/python3.11/site-packages/torch/include/c10/util/Float8_e4m3fn-inl.h:102:46: note: no known conversion for argument 1 from ‘at::vec::CPU_CAPABILITY::VectorizedN<float, 2>’ to ‘float’
102 | inline C10_HOST_DEVICE float operator+(float a, Float8_e4m3fn b) {
| ~~~~~~^
/home/yvesw/miniconda3/envs/torch-preview/lib/python3.11/site-packages/torch/include/c10/util/Float8_e4m3fn-inl.h:131:31: note: candidate: ‘double c10::operator+(Float8_e4m3fn, double)’
131 | inline C10_HOST_DEVICE double operator+(Float8_e4m3fn a, double b) {
| ^~~~~~~~
/home/yvesw/miniconda3/envs/torch-preview/lib/python3.11/site-packages/torch/include/c10/util/Float8_e4m3fn-inl.h:131:55: note: no known conversion for argument 1 from ‘at::vec::CPU_CAPABILITY::VectorizedN<float, 2>’ to ‘c10::Float8_e4m3fn’
131 | inline C10_HOST_DEVICE double operator+(Float8_e4m3fn a, double b) {
| ~~~~~~~~~~~~~~^
/home/yvesw/miniconda3/envs/torch-preview/lib/python3.11/site-packages/torch/include/c10/util/Float8_e4m3fn-inl.h:145:31: note: candidate: ‘double c10::operator+(double, Float8_e4m3fn)’
145 | inline C10_HOST_DEVICE double operator+(double a, Float8_e4m3fn b) {
| ^~~~~~~~
/home/yvesw/miniconda3/envs/torch-preview/lib/python3.11/site-packages/torch/include/c10/util/Float8_e4m3fn-inl.h:145:48: note: no known conversion for argument 1 from ‘at::vec::CPU_CAPABILITY::VectorizedN<float, 2>’ to ‘double’
145 | inline C10_HOST_DEVICE double operator+(double a, Float8_e4m3fn b) {
| ~~~~~~~^
/home/yvesw/miniconda3/envs/torch-preview/lib/python3.11/site-packages/torch/include/c10/util/Float8_e4m3fn-inl.h:161:38: note: candidate: ‘c10::Float8_e4m3fn c10::operator+(Float8_e4m3fn, int)’
161 | inline C10_HOST_DEVICE Float8_e4m3fn operator+(Float8_e4m3fn a, int b) {
| ^~~~~~~~
/home/yvesw/miniconda3/envs/torch-preview/lib/python3.11/site-packages/torch/include/c10/util/Float8_e4m3fn-inl.h:161:62: note: no known conversion for argument 1 from ‘at::vec::CPU_CAPABILITY::VectorizedN<float, 2>’ to ‘c10::Float8_e4m3fn’
161 | inline C10_HOST_DEVICE Float8_e4m3fn operator+(Float8_e4m3fn a, int b) {
| ~~~~~~~~~~~~~~^
/home/yvesw/miniconda3/envs/torch-preview/lib/python3.11/site-packages/torch/include/c10/util/Float8_e4m3fn-inl.h:174:38: note: candidate: ‘c10::Float8_e4m3fn c10::operator+(int, Float8_e4m3fn)’
174 | inline C10_HOST_DEVICE Float8_e4m3fn operator+(int a, Float8_e4m3fn b) {
| ^~~~~~~~
/home/yvesw/miniconda3/envs/torch-preview/lib/python3.11/site-packages/torch/include/c10/util/Float8_e4m3fn-inl.h:174:52: note: no known conversion for argument 1 from ‘at::vec::CPU_CAPABILITY::VectorizedN<float, 2>’ to ‘int’
174 | inline C10_HOST_DEVICE Float8_e4m3fn operator+(int a, Float8_e4m3fn b) {
| ~~~~^
/home/yvesw/miniconda3/envs/torch-preview/lib/python3.11/site-packages/torch/include/c10/util/Float8_e4m3fn-inl.h:189:38: note: candidate: ‘c10::Float8_e4m3fn c10::operator+(Float8_e4m3fn, int64_t)’
189 | inline C10_HOST_DEVICE Float8_e4m3fn operator+(Float8_e4m3fn a, int64_t b) {
| ^~~~~~~~
/home/yvesw/miniconda3/envs/torch-preview/lib/python3.11/site-packages/torch/include/c10/util/Float8_e4m3fn-inl.h:189:62: note: no known conversion for argument 1 from ‘at::vec::CPU_CAPABILITY::VectorizedN<float, 2>’ to ‘c10::Float8_e4m3fn’
189 | inline C10_HOST_DEVICE Float8_e4m3fn operator+(Float8_e4m3fn a, int64_t b) {
| ~~~~~~~~~~~~~~^
/home/yvesw/miniconda3/envs/torch-preview/lib/python3.11/site-packages/torch/include/c10/util/Float8_e4m3fn-inl.h:202:38: note: candidate: ‘c10::Float8_e4m3fn c10::operator+(int64_t, Float8_e4m3fn)’
202 | inline C10_HOST_DEVICE Float8_e4m3fn operator+(int64_t a, Float8_e4m3fn b) {
| ^~~~~~~~
/home/yvesw/miniconda3/envs/torch-preview/lib/python3.11/site-packages/torch/include/c10/util/Float8_e4m3fn-inl.h:202:56: note: no known conversion for argument 1 from ‘at::vec::CPU_CAPABILITY::VectorizedN<float, 2>’ to ‘int64_t’ {aka ‘long int’}
202 | inline C10_HOST_DEVICE Float8_e4m3fn operator+(int64_t a, Float8_e4m3fn b) {
| ~~~~~~~~^
In file included from /home/yvesw/miniconda3/envs/torch-preview/lib/python3.11/site-packages/torch/include/c10/util/Float8_e4m3fnuz.h:139,
from /home/yvesw/miniconda3/envs/torch-preview/lib/python3.11/site-packages/torch/include/ATen/NumericUtils.h:10:
/home/yvesw/miniconda3/envs/torch-preview/lib/python3.11/site-packages/torch/include/c10/util/Float8_e4m3fnuz-inl.h:35:1: note: candidate: ‘c10::Float8_e4m3fnuz c10::operator+(const Float8_e4m3fnuz&, const Float8_e4m3fnuz&)’
35 | operator+(const Float8_e4m3fnuz& a, const Float8_e4m3fnuz& b) {
| ^~~~~~~~
/home/yvesw/miniconda3/envs/torch-preview/lib/python3.11/site-packages/torch/include/c10/util/Float8_e4m3fnuz-inl.h:35:34: note: no known conversion for argument 1 from ‘at::vec::CPU_CAPABILITY::VectorizedN<float, 2>’ to ‘const c10::Float8_e4m3fnuz&’
35 | operator+(const Float8_e4m3fnuz& a, const Float8_e4m3fnuz& b) {
| ~~~~~~~~~~~~~~~~~~~~~~~^
/home/yvesw/miniconda3/envs/torch-preview/lib/python3.11/site-packages/torch/include/c10/util/Float8_e4m3fnuz-inl.h:89:30: note: candidate: ‘float c10::operator+(Float8_e4m3fnuz, float)’
89 | inline C10_HOST_DEVICE float operator+(Float8_e4m3fnuz a, float b) {
| ^~~~~~~~
/home/yvesw/miniconda3/envs/torch-preview/lib/python3.11/site-packages/torch/include/c10/util/Float8_e4m3fnuz-inl.h:89:56: note: no known conversion for argument 1 from ‘at::vec::CPU_CAPABILITY::VectorizedN<float, 2>’ to ‘c10::Float8_e4m3fnuz’
89 | inline C10_HOST_DEVICE float operator+(Float8_e4m3fnuz a, float b) {
| ~~~~~~~~~~~~~~~~^
/home/yvesw/miniconda3/envs/torch-preview/lib/python3.11/site-packages/torch/include/c10/util/Float8_e4m3fnuz-inl.h:103:30: note: candidate: ‘float c10::operator+(float, Float8_e4m3fnuz)’
103 | inline C10_HOST_DEVICE float operator+(float a, Float8_e4m3fnuz b) {
| ^~~~~~~~
/home/yvesw/miniconda3/envs/torch-preview/lib/python3.11/site-packages/torch/include/c10/util/Float8_e4m3fnuz-inl.h:103:46: note: no known conversion for argument 1 from ‘at::vec::CPU_CAPABILITY::VectorizedN<float, 2>’ to ‘float’
103 | inline C10_HOST_DEVICE float operator+(float a, Float8_e4m3fnuz b) {
| ~~~~~~^
/home/yvesw/miniconda3/envs/torch-preview/lib/python3.11/site-packages/torch/include/c10/util/Float8_e4m3fnuz-inl.h:132:31: note: candidate: ‘double c10::operator+(Float8_e4m3fnuz, double)’
132 | inline C10_HOST_DEVICE double operator+(Float8_e4m3fnuz a, double b) {
| ^~~~~~~~
/home/yvesw/miniconda3/envs/torch-preview/lib/python3.11/site-packages/torch/include/c10/util/Float8_e4m3fnuz-inl.h:132:57: note: no known conversion for argument 1 from ‘at::vec::CPU_CAPABILITY::VectorizedN<float, 2>’ to ‘c10::Float8_e4m3fnuz’
132 | inline C10_HOST_DEVICE double operator+(Float8_e4m3fnuz a, double b) {
| ~~~~~~~~~~~~~~~~^
/home/yvesw/miniconda3/envs/torch-preview/lib/python3.11/site-packages/torch/include/c10/util/Float8_e4m3fnuz-inl.h:146:31: note: candidate: ‘double c10::operator+(double, Float8_e4m3fnuz)’
146 | inline C10_HOST_DEVICE double operator+(double a, Float8_e4m3fnuz b) {
| ^~~~~~~~
/home/yvesw/miniconda3/envs/torch-preview/lib/python3.11/site-packages/torch/include/c10/util/Float8_e4m3fnuz-inl.h:146:48: note: no known conversion for argument 1 from ‘at::vec::CPU_CAPABILITY::VectorizedN<float, 2>’ to ‘double’
146 | inline C10_HOST_DEVICE double operator+(double a, Float8_e4m3fnuz b) {
| ~~~~~~~^
/home/yvesw/miniconda3/envs/torch-preview/lib/python3.11/site-packages/torch/include/c10/util/Float8_e4m3fnuz-inl.h:162:40: note: candidate: ‘c10::Float8_e4m3fnuz c10::operator+(Float8_e4m3fnuz, int)’
162 | inline C10_HOST_DEVICE Float8_e4m3fnuz operator+(Float8_e4m3fnuz a, int b) {
| ^~~~~~~~
/home/yvesw/miniconda3/envs/torch-preview/lib/python3.11/site-packages/torch/include/c10/util/Float8_e4m3fnuz-inl.h:162:66: note: no known conversion for argument 1 from ‘at::vec::CPU_CAPABILITY::VectorizedN<float, 2>’ to ‘c10::Float8_e4m3fnuz’
162 | inline C10_HOST_DEVICE Float8_e4m3fnuz operator+(Float8_e4m3fnuz a, int b) {
| ~~~~~~~~~~~~~~~~^
/home/yvesw/miniconda3/envs/torch-preview/lib/python3.11/site-packages/torch/include/c10/util/Float8_e4m3fnuz-inl.h:175:40: note: candidate: ‘c10::Float8_e4m3fnuz c10::operator+(int, Float8_e4m3fnuz)’
175 | inline C10_HOST_DEVICE Float8_e4m3fnuz operator+(int a, Float8_e4m3fnuz b) {
| ^~~~~~~~
/home/yvesw/miniconda3/envs/torch-preview/lib/python3.11/site-packages/torch/include/c10/util/Float8_e4m3fnuz-inl.h:175:54: note: no known conversion for argument 1 from ‘at::vec::CPU_CAPABILITY::VectorizedN<float, 2>’ to ‘int’
175 | inline C10_HOST_DEVICE Float8_e4m3fnuz operator+(int a, Float8_e4m3fnuz b) {
| ~~~~^
/home/yvesw/miniconda3/envs/torch-preview/lib/python3.11/site-packages/torch/include/c10/util/Float8_e4m3fnuz-inl.h:190:40: note: candidate: ‘c10::Float8_e4m3fnuz c10::operator+(Float8_e4m3fnuz, int64_t)’
190 | inline C10_HOST_DEVICE Float8_e4m3fnuz operator+(Float8_e4m3fnuz a, int64_t b) {
| ^~~~~~~~
/home/yvesw/miniconda3/envs/torch-preview/lib/python3.11/site-packages/torch/include/c10/util/Float8_e4m3fnuz-inl.h:190:66: note: no known conversion for argument 1 from ‘at::vec::CPU_CAPABILITY::VectorizedN<float, 2>’ to ‘c10::Float8_e4m3fnuz’
190 | inline C10_HOST_DEVICE Float8_e4m3fnuz operator+(Float8_e4m3fnuz a, int64_t b) {
| ~~~~~~~~~~~~~~~~^
/home/yvesw/miniconda3/envs/torch-preview/lib/python3.11/site-packages/torch/include/c10/util/Float8_e4m3fnuz-inl.h:203:40: note: candidate: ‘c10::Float8_e4m3fnuz c10::operator+(int64_t, Float8_e4m3fnuz)’
203 | inline C10_HOST_DEVICE Float8_e4m3fnuz operator+(int64_t a, Float8_e4m3fnuz b) {
| ^~~~~~~~
/home/yvesw/miniconda3/envs/torch-preview/lib/python3.11/site-packages/torch/include/c10/util/Float8_e4m3fnuz-inl.h:203:58: note: no known conversion for argument 1 from ‘at::vec::CPU_CAPABILITY::VectorizedN<float, 2>’ to ‘int64_t’ {aka ‘long int’}
203 | inline C10_HOST_DEVICE Float8_e4m3fnuz operator+(int64_t a, Float8_e4m3fnuz b) {
| ~~~~~~~~^
In file included from /home/yvesw/miniconda3/envs/torch-preview/lib/python3.11/site-packages/torch/include/c10/util/Half.h:419,
from /home/yvesw/miniconda3/envs/torch-preview/lib/python3.11/site-packages/torch/include/c10/util/Float8_e5m2.h:17,
from /home/yvesw/miniconda3/envs/torch-preview/lib/python3.11/site-packages/torch/include/ATen/NumericUtils.h:11:
/home/yvesw/miniconda3/envs/torch-preview/lib/python3.11/site-packages/torch/include/c10/util/Half-inl.h:107:29: note: candidate: ‘c10::Half c10::operator+(const Half&, const Half&)’
107 | inline C10_HOST_DEVICE Half operator+(const Half& a, const Half& b) {
| ^~~~~~~~
/home/yvesw/miniconda3/envs/torch-preview/lib/python3.11/site-packages/torch/include/c10/util/Half-inl.h:107:51: note: no known conversion for argument 1 from ‘at::vec::CPU_CAPABILITY::VectorizedN<float, 2>’ to ‘const c10::Half&’
107 | inline C10_HOST_DEVICE Half operator+(const Half& a, const Half& b) {
| ~~~~~~~~~~~~^
/home/yvesw/miniconda3/envs/torch-preview/lib/python3.11/site-packages/torch/include/c10/util/Half-inl.h:157:30: note: candidate: ‘float c10::operator+(Half, float)’
157 | inline C10_HOST_DEVICE float operator+(Half a, float b) {
| ^~~~~~~~
/home/yvesw/miniconda3/envs/torch-preview/lib/python3.11/site-packages/torch/include/c10/util/Half-inl.h:157:45: note: no known conversion for argument 1 from ‘at::vec::CPU_CAPABILITY::VectorizedN<float, 2>’ to ‘c10::Half’
157 | inline C10_HOST_DEVICE float operator+(Half a, float b) {
| ~~~~~^
/home/yvesw/miniconda3/envs/torch-preview/lib/python3.11/site-packages/torch/include/c10/util/Half-inl.h:171:30: note: candidate: ‘float c10::operator+(float, Half)’
171 | inline C10_HOST_DEVICE float operator+(float a, Half b) {
| ^~~~~~~~
/home/yvesw/miniconda3/envs/torch-preview/lib/python3.11/site-packages/torch/include/c10/util/Half-inl.h:171:46: note: no known conversion for argument 1 from ‘at::vec::CPU_CAPABILITY::VectorizedN<float, 2>’ to ‘float’
171 | inline C10_HOST_DEVICE float operator+(float a, Half b) {
| ~~~~~~^
/home/yvesw/miniconda3/envs/torch-preview/lib/python3.11/site-packages/torch/include/c10/util/Half-inl.h:200:31: note: candidate: ‘double c10::operator+(Half, double)’
200 | inline C10_HOST_DEVICE double operator+(Half a, double b) {
| ^~~~~~~~
/home/yvesw/miniconda3/envs/torch-preview/lib/python3.11/site-packages/torch/include/c10/util/Half-inl.h:200:46: note: no known conversion for argument 1 from ‘at::vec::CPU_CAPABILITY::VectorizedN<float, 2>’ to ‘c10::Half’
200 | inline C10_HOST_DEVICE double operator+(Half a, double b) {
| ~~~~~^
/home/yvesw/miniconda3/envs/torch-preview/lib/python3.11/site-packages/torch/include/c10/util/Half-inl.h:214:31: note: candidate: ‘double c10::operator+(double, Half)’
214 | inline C10_HOST_DEVICE double operator+(double a, Half b) {
| ^~~~~~~~
/home/yvesw/miniconda3/envs/torch-preview/lib/python3.11/site-packages/torch/include/c10/util/Half-inl.h:214:48: note: no known conversion for argument 1 from ‘at::vec::CPU_CAPABILITY::VectorizedN<float, 2>’ to ‘double’
214 | inline C10_HOST_DEVICE double operator+(double a, Half b) {
| ~~~~~~~^
/home/yvesw/miniconda3/envs/torch-preview/lib/python3.11/site-packages/torch/include/c10/util/Half-inl.h:230:29: note: candidate: ‘c10::Half c10::operator+(Half, int)’
230 | inline C10_HOST_DEVICE Half operator+(Half a, int b) {
| ^~~~~~~~
/home/yvesw/miniconda3/envs/torch-preview/lib/python3.11/site-packages/torch/include/c10/util/Half-inl.h:230:44: note: no known conversion for argument 1 from ‘at::vec::CPU_CAPABILITY::VectorizedN<float, 2>’ to ‘c10::Half’
230 | inline C10_HOST_DEVICE Half operator+(Half a, int b) {
| ~~~~~^
/home/yvesw/miniconda3/envs/torch-preview/lib/python3.11/site-packages/torch/include/c10/util/Half-inl.h:243:29: note: candidate: ‘c10::Half c10::operator+(int, Half)’
243 | inline C10_HOST_DEVICE Half operator+(int a, Half b) {
| ^~~~~~~~
/home/yvesw/miniconda3/envs/torch-preview/lib/python3.11/site-packages/torch/include/c10/util/Half-inl.h:243:43: note: no known conversion for argument 1 from ‘at::vec::CPU_CAPABILITY::VectorizedN<float, 2>’ to ‘int’
243 | inline C10_HOST_DEVICE Half operator+(int a, Half b) {
| ~~~~^
/home/yvesw/miniconda3/envs/torch-preview/lib/python3.11/site-packages/torch/include/c10/util/Half-inl.h:258:29: note: candidate: ‘c10::Half c10::operator+(Half, int64_t)’
258 | inline C10_HOST_DEVICE Half operator+(Half a, int64_t b) {
| ^~~~~~~~
/home/yvesw/miniconda3/envs/torch-preview/lib/python3.11/site-packages/torch/include/c10/util/Half-inl.h:258:44: note: no known conversion for argument 1 from ‘at::vec::CPU_CAPABILITY::VectorizedN<float, 2>’ to ‘c10::Half’
258 | inline C10_HOST_DEVICE Half operator+(Half a, int64_t b) {
| ~~~~~^
/home/yvesw/miniconda3/envs/torch-preview/lib/python3.11/site-packages/torch/include/c10/util/Half-inl.h:271:29: note: candidate: ‘c10::Half c10::operator+(int64_t, Half)’
271 | inline C10_HOST_DEVICE Half operator+(int64_t a, Half b) {
| ^~~~~~~~
/home/yvesw/miniconda3/envs/torch-preview/lib/python3.11/site-packages/torch/include/c10/util/Half-inl.h:271:47: note: no known conversion for argument 1 from ‘at::vec::CPU_CAPABILITY::VectorizedN<float, 2>’ to ‘int64_t’ {aka ‘long int’}
271 | inline C10_HOST_DEVICE Half operator+(int64_t a, Half b) {
| ~~~~~~~~^
In file included from /home/yvesw/miniconda3/envs/torch-preview/lib/python3.11/site-packages/torch/include/c10/util/Float8_e5m2.h:148:
/home/yvesw/miniconda3/envs/torch-preview/lib/python3.11/site-packages/torch/include/c10/util/Float8_e5m2-inl.h:42:1: note: candidate: ‘c10::Float8_e5m2 c10::operator+(const Float8_e5m2&, const Float8_e5m2&)’
42 | operator+(const Float8_e5m2& a, const Float8_e5m2& b) {
| ^~~~~~~~
/home/yvesw/miniconda3/envs/torch-preview/lib/python3.11/site-packages/torch/include/c10/util/Float8_e5m2-inl.h:42:30: note: no known conversion for argument 1 from ‘at::vec::CPU_CAPABILITY::VectorizedN<float, 2>’ to ‘const c10::Float8_e5m2&’
42 | operator+(const Float8_e5m2& a, const Float8_e5m2& b) {
| ~~~~~~~~~~~~~~~~~~~^
/home/yvesw/miniconda3/envs/torch-preview/lib/python3.11/site-packages/torch/include/c10/util/Float8_e5m2-inl.h:96:30: note: candidate: ‘float c10::operator+(Float8_e5m2, float)’
96 | inline C10_HOST_DEVICE float operator+(Float8_e5m2 a, float b) {
| ^~~~~~~~
/home/yvesw/miniconda3/envs/torch-preview/lib/python3.11/site-packages/torch/include/c10/util/Float8_e5m2-inl.h:96:52: note: no known conversion for argument 1 from ‘at::vec::CPU_CAPABILITY::VectorizedN<float, 2>’ to ‘c10::Float8_e5m2’
96 | inline C10_HOST_DEVICE float operator+(Float8_e5m2 a, float b) {
| ~~~~~~~~~~~~^
/home/yvesw/miniconda3/envs/torch-preview/lib/python3.11/site-packages/torch/include/c10/util/Float8_e5m2-inl.h:110:30: note: candidate: ‘float c10::operator+(float, Float8_e5m2)’
110 | inline C10_HOST_DEVICE float operator+(float a, Float8_e5m2 b) {
| ^~~~~~~~
/home/yvesw/miniconda3/envs/torch-preview/lib/python3.11/site-packages/torch/include/c10/util/Float8_e5m2-inl.h:110:46: note: no known conversion for argument 1 from ‘at::vec::CPU_CAPABILITY::VectorizedN<float, 2>’ to ‘float’
110 | inline C10_HOST_DEVICE float operator+(float a, Float8_e5m2 b) {
| ~~~~~~^
/home/yvesw/miniconda3/envs/torch-preview/lib/python3.11/site-packages/torch/include/c10/util/Float8_e5m2-inl.h:139:31: note: candidate: ‘double c10::operator+(Float8_e5m2, double)’
139 | inline C10_HOST_DEVICE double operator+(Float8_e5m2 a, double b) {
| ^~~~~~~~
/home/yvesw/miniconda3/envs/torch-preview/lib/python3.11/site-packages/torch/include/c10/util/Float8_e5m2-inl.h:139:53: note: no known conversion for argument 1 from ‘at::vec::CPU_CAPABILITY::VectorizedN<float, 2>’ to ‘c10::Float8_e5m2’
139 | inline C10_HOST_DEVICE double operator+(Float8_e5m2 a, double b) {
| ~~~~~~~~~~~~^
/home/yvesw/miniconda3/envs/torch-preview/lib/python3.11/site-packages/torch/include/c10/util/Float8_e5m2-inl.h:153:31: note: candidate: ‘double c10::operator+(double, Float8_e5m2)’
153 | inline C10_HOST_DEVICE double operator+(double a, Float8_e5m2 b) {
| ^~~~~~~~
/home/yvesw/miniconda3/envs/torch-preview/lib/python3.11/site-packages/torch/include/c10/util/Float8_e5m2-inl.h:153:48: note: no known conversion for argument 1 from ‘at::vec::CPU_CAPABILITY::VectorizedN<float, 2>’ to ‘double’
153 | inline C10_HOST_DEVICE double operator+(double a, Float8_e5m2 b) {
| ~~~~~~~^
/home/yvesw/miniconda3/envs/torch-preview/lib/python3.11/site-packages/torch/include/c10/util/Float8_e5m2-inl.h:169:36: note: candidate: ‘c10::Float8_e5m2 c10::operator+(Float8_e5m2, int)’
169 | inline C10_HOST_DEVICE Float8_e5m2 operator+(Float8_e5m2 a, int b) {
| ^~~~~~~~
/home/yvesw/miniconda3/envs/torch-preview/lib/python3.11/site-packages/torch/include/c10/util/Float8_e5m2-inl.h:169:58: note: no known conversion for argument 1 from ‘at::vec::CPU_CAPABILITY::VectorizedN<float, 2>’ to ‘c10::Float8_e5m2’
169 | inline C10_HOST_DEVICE Float8_e5m2 operator+(Float8_e5m2 a, int b) {
| ~~~~~~~~~~~~^
/home/yvesw/miniconda3/envs/torch-preview/lib/python3.11/site-packages/torch/include/c10/util/Float8_e5m2-inl.h:182:36: note: candidate: ‘c10::Float8_e5m2 c10::operator+(int, Float8_e5m2)’
182 | inline C10_HOST_DEVICE Float8_e5m2 operator+(int a, Float8_e5m2 b) {
| ^~~~~~~~
/home/yvesw/miniconda3/envs/torch-preview/lib/python3.11/site-packages/torch/include/c10/util/Float8_e5m2-inl.h:182:50: note: no known conversion for argument 1 from ‘at::vec::CPU_CAPABILITY::VectorizedN<float, 2>’ to ‘int’
182 | inline C10_HOST_DEVICE Float8_e5m2 operator+(int a, Float8_e5m2 b) {
| ~~~~^
/home/yvesw/miniconda3/envs/torch-preview/lib/python3.11/site-packages/torch/include/c10/util/Float8_e5m2-inl.h:197:36: note: candidate: ‘c10::Float8_e5m2 c10::operator+(Float8_e5m2, int64_t)’
197 | inline C10_HOST_DEVICE Float8_e5m2 operator+(Float8_e5m2 a, int64_t b) {
| ^~~~~~~~
/home/yvesw/miniconda3/envs/torch-preview/lib/python3.11/site-packages/torch/include/c10/util/Float8_e5m2-inl.h:197:58: note: no known conversion for argument 1 from ‘at::vec::CPU_CAPABILITY::VectorizedN<float, 2>’ to ‘c10::Float8_e5m2’
197 | inline C10_HOST_DEVICE Float8_e5m2 operator+(Float8_e5m2 a, int64_t b) {
| ~~~~~~~~~~~~^
/home/yvesw/miniconda3/envs/torch-preview/lib/python3.11/site-packages/torch/include/c10/util/Float8_e5m2-inl.h:210:36: note: candidate: ‘c10::Float8_e5m2 c10::operator+(int64_t, Float8_e5m2)’
210 | inline C10_HOST_DEVICE Float8_e5m2 operator+(int64_t a, Float8_e5m2 b) {
| ^~~~~~~~
/home/yvesw/miniconda3/envs/torch-preview/lib/python3.11/site-packages/torch/include/c10/util/Float8_e5m2-inl.h:210:54: note: no known conversion for argument 1 from ‘at::vec::CPU_CAPABILITY::VectorizedN<float, 2>’ to ‘int64_t’ {aka ‘long int’}
210 | inline C10_HOST_DEVICE Float8_e5m2 operator+(int64_t a, Float8_e5m2 b) {
| ~~~~~~~~^
In file included from /home/yvesw/miniconda3/envs/torch-preview/lib/python3.11/site-packages/torch/include/c10/util/Float8_e5m2fnuz.h:138,
from /home/yvesw/miniconda3/envs/torch-preview/lib/python3.11/site-packages/torch/include/ATen/NumericUtils.h:12:
/home/yvesw/miniconda3/envs/torch-preview/lib/python3.11/site-packages/torch/include/c10/util/Float8_e5m2fnuz-inl.h:39:1: note: candidate: ‘c10::Float8_e5m2fnuz c10::operator+(const Float8_e5m2fnuz&, const Float8_e5m2fnuz&)’
39 | operator+(const Float8_e5m2fnuz& a, const Float8_e5m2fnuz& b) {
| ^~~~~~~~
/home/yvesw/miniconda3/envs/torch-preview/lib/python3.11/site-packages/torch/include/c10/util/Float8_e5m2fnuz-inl.h:39:34: note: no known conversion for argument 1 from ‘at::vec::CPU_CAPABILITY::VectorizedN<float, 2>’ to ‘const c10::Float8_e5m2fnuz&’
39 | operator+(const Float8_e5m2fnuz& a, const Float8_e5m2fnuz& b) {
| ~~~~~~~~~~~~~~~~~~~~~~~^
/home/yvesw/miniconda3/envs/torch-preview/lib/python3.11/site-packages/torch/include/c10/util/Float8_e5m2fnuz-inl.h:93:30: note: candidate: ‘float c10::operator+(Float8_e5m2fnuz, float)’
93 | inline C10_HOST_DEVICE float operator+(Float8_e5m2fnuz a, float b) {
| ^~~~~~~~
/home/yvesw/miniconda3/envs/torch-preview/lib/python3.11/site-packages/torch/include/c10/util/Float8_e5m2fnuz-inl.h:93:56: note: no known conversion for argument 1 from ‘at::vec::CPU_CAPABILITY::VectorizedN<float, 2>’ to ‘c10::Float8_e5m2fnuz’
93 | inline C10_HOST_DEVICE float operator+(Float8_e5m2fnuz a, float b) {
| ~~~~~~~~~~~~~~~~^
/home/yvesw/miniconda3/envs/torch-preview/lib/python3.11/site-packages/torch/include/c10/util/Float8_e5m2fnuz-inl.h:107:30: note: candidate: ‘float c10::operator+(float, Float8_e5m2fnuz)’
107 | inline C10_HOST_DEVICE float operator+(float a, Float8_e5m2fnuz b) {
| ^~~~~~~~
/home/yvesw/miniconda3/envs/torch-preview/lib/python3.11/site-packages/torch/include/c10/util/Float8_e5m2fnuz-inl.h:107:46: note: no known conversion for argument 1 from ‘at::vec::CPU_CAPABILITY::VectorizedN<float, 2>’ to ‘float’
107 | inline C10_HOST_DEVICE float operator+(float a, Float8_e5m2fnuz b) {
| ~~~~~~^
/home/yvesw/miniconda3/envs/torch-preview/lib/python3.11/site-packages/torch/include/c10/util/Float8_e5m2fnuz-inl.h:136:31: note: candidate: ‘double c10::operator+(Float8_e5m2fnuz, double)’
136 | inline C10_HOST_DEVICE double operator+(Float8_e5m2fnuz a, double b) {
| ^~~~~~~~
/home/yvesw/miniconda3/envs/torch-preview/lib/python3.11/site-packages/torch/include/c10/util/Float8_e5m2fnuz-inl.h:136:57: note: no known conversion for argument 1 from ‘at::vec::CPU_CAPABILITY::VectorizedN<float, 2>’ to ‘c10::Float8_e5m2fnuz’
136 | inline C10_HOST_DEVICE double operator+(Float8_e5m2fnuz a, double b) {
| ~~~~~~~~~~~~~~~~^
/home/yvesw/miniconda3/envs/torch-preview/lib/python3.11/site-packages/torch/include/c10/util/Float8_e5m2fnuz-inl.h:150:31: note: candidate: ‘double c10::operator+(double, Float8_e5m2fnuz)’
150 | inline C10_HOST_DEVICE double operator+(double a, Float8_e5m2fnuz b) {
| ^~~~~~~~
/home/yvesw/miniconda3/envs/torch-preview/lib/python3.11/site-packages/torch/include/c10/util/Float8_e5m2fnuz-inl.h:150:48: note: no known conversion for argument 1 from ‘at::vec::CPU_CAPABILITY::VectorizedN<float, 2>’ to ‘double’
150 | inline C10_HOST_DEVICE double operator+(double a, Float8_e5m2fnuz b) {
| ~~~~~~~^
/home/yvesw/miniconda3/envs/torch-preview/lib/python3.11/site-packages/torch/include/c10/util/Float8_e5m2fnuz-inl.h:166:40: note: candidate: ‘c10::Float8_e5m2fnuz c10::operator+(Float8_e5m2fnuz, int)’
166 | inline C10_HOST_DEVICE Float8_e5m2fnuz operator+(Float8_e5m2fnuz a, int b) {
| ^~~~~~~~
/home/yvesw/miniconda3/envs/torch-preview/lib/python3.11/site-packages/torch/include/c10/util/Float8_e5m2fnuz-inl.h:166:66: note: no known conversion for argument 1 from ‘at::vec::CPU_CAPABILITY::VectorizedN<float, 2>’ to ‘c10::Float8_e5m2fnuz’
166 | inline C10_HOST_DEVICE Float8_e5m2fnuz operator+(Float8_e5m2fnuz a, int b) {
| ~~~~~~~~~~~~~~~~^
/home/yvesw/miniconda3/envs/torch-preview/lib/python3.11/site-packages/torch/include/c10/util/Float8_e5m2fnuz-inl.h:179:40: note: candidate: ‘c10::Float8_e5m2fnuz c10::operator+(int, Float8_e5m2fnuz)’
179 | inline C10_HOST_DEVICE Float8_e5m2fnuz operator+(int a, Float8_e5m2fnuz b) {
| ^~~~~~~~
/home/yvesw/miniconda3/envs/torch-preview/lib/python3.11/site-packages/torch/include/c10/util/Float8_e5m2fnuz-inl.h:179:54: note: no known conversion for argument 1 from ‘at::vec::CPU_CAPABILITY::VectorizedN<float, 2>’ to ‘int’
179 | inline C10_HOST_DEVICE Float8_e5m2fnuz operator+(int a, Float8_e5m2fnuz b) {
| ~~~~^
/home/yvesw/miniconda3/envs/torch-preview/lib/python3.11/site-packages/torch/include/c10/util/Float8_e5m2fnuz-inl.h:194:40: note: candidate: ‘c10::Float8_e5m2fnuz c10::operator+(Float8_e5m2fnuz, int64_t)’
194 | inline C10_HOST_DEVICE Float8_e5m2fnuz operator+(Float8_e5m2fnuz a, int64_t b) {
| ^~~~~~~~
/home/yvesw/miniconda3/envs/torch-preview/lib/python3.11/site-packages/torch/include/c10/util/Float8_e5m2fnuz-inl.h:194:66: note: no known conversion for argument 1 from ‘at::vec::CPU_CAPABILITY::VectorizedN<float, 2>’ to ‘c10::Float8_e5m2fnuz’
194 | inline C10_HOST_DEVICE Float8_e5m2fnuz operator+(Float8_e5m2fnuz a, int64_t b) {
| ~~~~~~~~~~~~~~~~^
/home/yvesw/miniconda3/envs/torch-preview/lib/python3.11/site-packages/torch/include/c10/util/Float8_e5m2fnuz-inl.h:207:40: note: candidate: ‘c10::Float8_e5m2fnuz c10::operator+(int64_t, Float8_e5m2fnuz)’
207 | inline C10_HOST_DEVICE Float8_e5m2fnuz operator+(int64_t a, Float8_e5m2fnuz b) {
| ^~~~~~~~
/home/yvesw/miniconda3/envs/torch-preview/lib/python3.11/site-packages/torch/include/c10/util/Float8_e5m2fnuz-inl.h:207:58: note: no known conversion for argument 1 from ‘at::vec::CPU_CAPABILITY::VectorizedN<float, 2>’ to ‘int64_t’ {aka ‘long int’}
207 | inline C10_HOST_DEVICE Float8_e5m2fnuz operator+(int64_t a, Float8_e5m2fnuz b) {
| ~~~~~~~~^
In file included from /home/yvesw/miniconda3/envs/torch-preview/lib/python3.11/site-packages/torch/include/ATen/cpu/vec/vec256/vec256.h:14:
/home/yvesw/miniconda3/envs/torch-preview/lib/python3.11/site-packages/torch/include/ATen/cpu/vec/vec256/vec256_bfloat16.h:681:29: note: candidate: ‘at::vec::CPU_CAPABILITY::Vectorized<c10::BFloat16> at::vec::CPU_CAPABILITY::operator+(const Vectorized<c10::BFloat16>&, const Vectorized<c10::BFloat16>&)’
681 | Vectorized<BFloat16> inline operator+(const Vectorized<BFloat16>& a, const Vectorized<BFloat16>& b) {
| ^~~~~~~~
/home/yvesw/miniconda3/envs/torch-preview/lib/python3.11/site-packages/torch/include/ATen/cpu/vec/vec256/vec256_bfloat16.h:681:67: note: no known conversion for argument 1 from ‘at::vec::CPU_CAPABILITY::VectorizedN<float, 2>’ to ‘const at::vec::CPU_CAPABILITY::Vectorized<c10::BFloat16>&’
681 | Vectorized<BFloat16> inline operator+(const Vectorized<BFloat16>& a, const Vectorized<BFloat16>& b) {
| ~~~~~~~~~~~~~~~~~~~~~~~~~~~~^
/home/yvesw/miniconda3/envs/torch-preview/lib/python3.11/site-packages/torch/include/ATen/cpu/vec/vec256/vec256_bfloat16.h:885:25: note: candidate: ‘at::vec::CPU_CAPABILITY::Vectorized<c10::Half> at::vec::CPU_CAPABILITY::operator+(const Vectorized<c10::Half>&, const Vectorized<c10::Half>&)’
885 | Vectorized<Half> inline operator+(const Vectorized<Half>& a, const Vectorized<Half>& b) {
| ^~~~~~~~
/home/yvesw/miniconda3/envs/torch-preview/lib/python3.11/site-packages/torch/include/ATen/cpu/vec/vec256/vec256_bfloat16.h:885:59: note: no known conversion for argument 1 from ‘at::vec::CPU_CAPABILITY::VectorizedN<float, 2>’ to ‘const at::vec::CPU_CAPABILITY::Vectorized<c10::Half>&’
885 | Vectorized<Half> inline operator+(const Vectorized<Half>& a, const Vectorized<Half>& b) {
| ~~~~~~~~~~~~~~~~~~~~~~~~^
/tmp/torchinductor_yvesw/ho/chole63jz4of2nowqa7bme4sza2oknwb4jzyrongdynezyxsv2uk.cpp:24:57: error: no matching function for call to ‘sum_masked_reduce(at::vec::CPU_CAPABILITY::VectorizedN<float, 2>&, at::vec::CPU_CAPABILITY::Vectorized<c10::Half>&, int64_t)’
24 | tmp_acc0_vec = sum_masked_reduce(tmp_acc0_vec, tmp2, static_cast<int64_t>(4L));
| ~~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
/tmp/torchinductor_yvesw/3b/c3bi5gk6mslf6u4iaqafhxm64z6u65e3eain4xlary5blqnvv6xx.h:170:3: note: candidate: ‘template<class T> T sum_masked_reduce(const T&, const T&, int64_t)’
170 | T sum_masked_reduce(const T& a, const T& b, const int64_t tail_size) {
| ^~~~~~~~~~~~~~~~~
/tmp/torchinductor_yvesw/3b/c3bi5gk6mslf6u4iaqafhxm64z6u65e3eain4xlary5blqnvv6xx.h:170:3: note: template argument deduction/substitution failed:
/tmp/torchinductor_yvesw/ho/chole63jz4of2nowqa7bme4sza2oknwb4jzyrongdynezyxsv2uk.cpp:24:57: note: deduced conflicting types for parameter ‘const T’ (‘at::vec::CPU_CAPABILITY::VectorizedN<float, 2>’ and ‘at::vec::CPU_CAPABILITY::Vectorized<c10::Half>’)
24 | tmp_acc0_vec = sum_masked_reduce(tmp_acc0_vec, tmp2, static_cast<int64_t>(4L));
| ~~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Set TORCH_LOGS="+dynamo" and TORCHDYNAMO_VERBOSE=1 for more information
```
The generated C++ code is available [here](https://gist.github.com/WLFJ/49e67c5b9c99769daed25bb9272d740b).
### Versions
PyTorch 2.7.0.dev20250209+cu124
cc @chauhang @penguinwu
| true
|
2,855,905,611
|
[inductor][fuzzer] `ZeroDivisionError` in `torch.unsafe_split` when input empty size tensor with zero `split_size`
|
WLFJ
|
open
|
[
"triaged",
"oncall: pt2",
"module: inductor"
] | 0
|
NONE
|
### 🐛 Describe the bug
Reproduce example:
```python
import torch
def f(*args):
sym_0, sym_1, sym_2 = args
var_485 = torch.ones(sym_0)
return torch.unsafe_split(var_485, split_size=sym_1, dim=sym_2)
res = f((0,), 0, -1,)
print(res) # (tensor([]), )
res = torch.compile(f)((0,), 0, -1,) # ZeroDivisionError
print(res)
```
The behavior differs between eager mode and Inductor.
Additionally, if `split_size` is larger than the size of the input dimension, eager mode does not treat it as an error, but Inductor raises a `list assignment index out of range` error; more range checking of the input parameters is needed.
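For the second failure mode, a minimal sketch (the `(3,)` shape and `split_size=5` are illustrative values chosen here, not taken from the report):
```python
import torch

def g(x_len, split_size):
    x = torch.ones(x_len)
    return torch.unsafe_split(x, split_size=split_size, dim=0)

# Eager mode returns a single chunk when split_size exceeds the dimension size.
print(g(3, 5))                 # (tensor([1., 1., 1.]),)
# Under torch.compile this reportedly fails with "list assignment index out of range".
print(torch.compile(g)(3, 5))
```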
### Error logs
```
Traceback (most recent call last):
File "/home/yvesw/reborn2-expr/250216-bugs/test-5.py", line 12, in <module>
res = torch.compile(f)((0,), 0, -1,)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/yvesw/miniconda3/envs/torch-preview/lib/python3.11/site-packages/torch/_dynamo/eval_frame.py", line 574, in _fn
raise e.remove_dynamo_frames() from None # see TORCHDYNAMO_VERBOSE=1
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/yvesw/miniconda3/envs/torch-preview/lib/python3.11/site-packages/torch/_inductor/compile_fx.py", line 752, in _compile_fx_inner
raise InductorError(e, currentframe()).with_traceback(
File "/home/yvesw/miniconda3/envs/torch-preview/lib/python3.11/site-packages/torch/_inductor/compile_fx.py", line 737, in _compile_fx_inner
mb_compiled_graph = fx_codegen_and_compile(
^^^^^^^^^^^^^^^^^^^^^^^
File "/home/yvesw/miniconda3/envs/torch-preview/lib/python3.11/site-packages/torch/_inductor/compile_fx.py", line 1402, in fx_codegen_and_compile
return scheme.codegen_and_compile(gm, example_inputs, inputs_to_check, graph_kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/yvesw/miniconda3/envs/torch-preview/lib/python3.11/site-packages/torch/_inductor/compile_fx.py", line 1057, in codegen_and_compile
graph.run(*example_inputs)
File "/home/yvesw/miniconda3/envs/torch-preview/lib/python3.11/site-packages/torch/_inductor/graph.py", line 851, in run
return super().run(*args)
^^^^^^^^^^^^^^^^^^
File "/home/yvesw/miniconda3/envs/torch-preview/lib/python3.11/site-packages/torch/fx/interpreter.py", line 171, in run
self.env[node] = self.run_node(node)
^^^^^^^^^^^^^^^^^^^
File "/home/yvesw/miniconda3/envs/torch-preview/lib/python3.11/site-packages/torch/_inductor/graph.py", line 1436, in run_node
result = super().run_node(n)
^^^^^^^^^^^^^^^^^^^
File "/home/yvesw/miniconda3/envs/torch-preview/lib/python3.11/site-packages/torch/fx/interpreter.py", line 236, in run_node
return getattr(self, n.op)(n.target, args, kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/yvesw/miniconda3/envs/torch-preview/lib/python3.11/site-packages/torch/_inductor/graph.py", line 1139, in call_function
raise LoweringException(e, target, args, kwargs).with_traceback(
File "/home/yvesw/miniconda3/envs/torch-preview/lib/python3.11/site-packages/torch/_inductor/graph.py", line 1129, in call_function
out = lowerings[target](*args, **kwargs) # type: ignore[index]
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/yvesw/miniconda3/envs/torch-preview/lib/python3.11/site-packages/torch/_inductor/lowering.py", line 462, in wrapped
out = decomp_fn(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/yvesw/miniconda3/envs/torch-preview/lib/python3.11/site-packages/torch/_inductor/lowering.py", line 1798, in split
FloorDiv(x_size + sizes - 1, sizes)
File "/home/yvesw/miniconda3/envs/torch-preview/lib/python3.11/site-packages/sympy/core/cache.py", line 72, in wrapper
retval = cfunc(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^
File "/home/yvesw/miniconda3/envs/torch-preview/lib/python3.11/site-packages/sympy/core/function.py", line 466, in __new__
result = super().__new__(cls, *args, **options)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/yvesw/miniconda3/envs/torch-preview/lib/python3.11/site-packages/sympy/core/cache.py", line 72, in wrapper
retval = cfunc(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^
File "/home/yvesw/miniconda3/envs/torch-preview/lib/python3.11/site-packages/sympy/core/function.py", line 307, in __new__
evaluated = cls.eval(*args)
^^^^^^^^^^^^^^^
File "/home/yvesw/miniconda3/envs/torch-preview/lib/python3.11/site-packages/torch/utils/_sympy/functions.py", line 223, in eval
raise ZeroDivisionError("division by zero")
torch._inductor.exc.InductorError: LoweringException: ZeroDivisionError: division by zero
target: aten.split.Tensor
args[0]: TensorBox(StorageBox(
Pointwise(
'cpu',
torch.float32,
def inner_fn(index):
i0 = index
tmp0 = ops.constant(1, torch.float32)
return tmp0
,
ranges=[0],
origin_node=full_default,
origins=OrderedSet([full_default])
)
))
args[1]: 0
args[2]: -1
Set TORCH_LOGS="+dynamo" and TORCHDYNAMO_VERBOSE=1 for more information
```
### Versions
PyTorch 2.7.0.dev20250209+cu124
cc @chauhang @penguinwu @voznesenskym @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @amjames @aakhundov
| true
|
2,855,901,075
|
[inductor][fuzzer] decomposition failed on `torch.unsafe_chunk` with empty size input tensor
|
WLFJ
|
open
|
[
"triaged",
"oncall: pt2",
"module: decompositions"
] | 0
|
NONE
|
### 🐛 Describe the bug
Inductor crashes when `torch.unsafe_chunk` is applied to a zero-element tensor.
```python
import torch
print(torch.__version__)
def f(*args):
sym_0, sym_1 = args
var_6 = torch.zeros(sym_0)
return torch.unsafe_chunk(var_6, chunks=sym_1, dim=0)
res = f((0,), 4,)
print('eager', res)
res = torch.compile(f)((0,), 4,) # crashed!
print('inductor', res)
```
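For context, the traceback below bottoms out in the decomposition at `split_sizes = [split_size for _ in chunks]`, where `chunks` is the integer argument. A minimal illustration of why that pattern raises (`range(chunks)` is shown only as the usual iteration idiom, not as the confirmed fix):
```python
chunks = 4
try:
    [0 for _ in chunks]            # TypeError: 'int' object is not iterable
except TypeError as e:
    print(e)

print([0 for _ in range(chunks)])  # [0, 0, 0, 0] -- iterating over range(chunks) instead
```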
### Error logs
```
2.7.0.dev20250209+cu124
eager (tensor([]), tensor([]), tensor([]), tensor([]))
Traceback (most recent call last):
File "/home/yvesw/reborn2-expr/250216-bugs/test-4.py", line 14, in <module>
res = torch.compile(f)((0,), 4,)
^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/yvesw/miniconda3/envs/torch-preview/lib/python3.11/site-packages/torch/_dynamo/eval_frame.py", line 570, in _fn
return fn(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^
File "/home/yvesw/miniconda3/envs/torch-preview/lib/python3.11/site-packages/torch/_dynamo/convert_frame.py", line 1372, in __call__
return self._torchdynamo_orig_callable(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/yvesw/miniconda3/envs/torch-preview/lib/python3.11/site-packages/torch/_dynamo/convert_frame.py", line 1156, in __call__
result = self._inner_convert(
^^^^^^^^^^^^^^^^^^^^
File "/home/yvesw/miniconda3/envs/torch-preview/lib/python3.11/site-packages/torch/_dynamo/convert_frame.py", line 564, in __call__
return _compile(
^^^^^^^^^
File "/home/yvesw/miniconda3/envs/torch-preview/lib/python3.11/site-packages/torch/_dynamo/convert_frame.py", line 1000, in _compile
guarded_code = compile_inner(code, one_graph, hooks, transform)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/yvesw/miniconda3/envs/torch-preview/lib/python3.11/site-packages/torch/_utils_internal.py", line 95, in wrapper_function
return function(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/yvesw/miniconda3/envs/torch-preview/lib/python3.11/site-packages/torch/_dynamo/convert_frame.py", line 725, in compile_inner
return _compile_inner(code, one_graph, hooks, transform)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/yvesw/miniconda3/envs/torch-preview/lib/python3.11/site-packages/torch/_dynamo/convert_frame.py", line 759, in _compile_inner
out_code = transform_code_object(code, transform)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/yvesw/miniconda3/envs/torch-preview/lib/python3.11/site-packages/torch/_dynamo/bytecode_transformation.py", line 1404, in transform_code_object
transformations(instructions, code_options)
File "/home/yvesw/miniconda3/envs/torch-preview/lib/python3.11/site-packages/torch/_dynamo/convert_frame.py", line 235, in _fn
return fn(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^
File "/home/yvesw/miniconda3/envs/torch-preview/lib/python3.11/site-packages/torch/_dynamo/convert_frame.py", line 679, in transform
tracer.run()
File "/home/yvesw/miniconda3/envs/torch-preview/lib/python3.11/site-packages/torch/_dynamo/symbolic_convert.py", line 2984, in run
super().run()
File "/home/yvesw/miniconda3/envs/torch-preview/lib/python3.11/site-packages/torch/_dynamo/symbolic_convert.py", line 1118, in run
while self.step():
^^^^^^^^^^^
File "/home/yvesw/miniconda3/envs/torch-preview/lib/python3.11/site-packages/torch/_dynamo/symbolic_convert.py", line 1028, in step
self.dispatch_table[inst.opcode](self, inst)
File "/home/yvesw/miniconda3/envs/torch-preview/lib/python3.11/site-packages/torch/_dynamo/symbolic_convert.py", line 714, in wrapper
return inner_fn(self, inst)
^^^^^^^^^^^^^^^^^^^^
File "/home/yvesw/miniconda3/envs/torch-preview/lib/python3.11/site-packages/torch/_dynamo/symbolic_convert.py", line 2456, in CALL
self._call(inst)
File "/home/yvesw/miniconda3/envs/torch-preview/lib/python3.11/site-packages/torch/_dynamo/symbolic_convert.py", line 2450, in _call
self.call_function(fn, args, kwargs)
File "/home/yvesw/miniconda3/envs/torch-preview/lib/python3.11/site-packages/torch/_dynamo/symbolic_convert.py", line 952, in call_function
self.push(fn.call_function(self, args, kwargs)) # type: ignore[arg-type]
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/yvesw/miniconda3/envs/torch-preview/lib/python3.11/site-packages/torch/_dynamo/variables/torch.py", line 1019, in call_function
tensor_variable = wrap_fx_proxy(
^^^^^^^^^^^^^^
File "/home/yvesw/miniconda3/envs/torch-preview/lib/python3.11/site-packages/torch/_dynamo/variables/builder.py", line 2190, in wrap_fx_proxy
return wrap_fx_proxy_cls(target_cls=TensorVariable, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/yvesw/miniconda3/envs/torch-preview/lib/python3.11/site-packages/torch/_dynamo/variables/builder.py", line 2256, in wrap_fx_proxy_cls
return _wrap_fx_proxy(
^^^^^^^^^^^^^^^
File "/home/yvesw/miniconda3/envs/torch-preview/lib/python3.11/site-packages/torch/_dynamo/variables/builder.py", line 2352, in _wrap_fx_proxy
example_value = get_fake_value(proxy.node, tx, allow_non_graph_fake=True)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/yvesw/miniconda3/envs/torch-preview/lib/python3.11/site-packages/torch/_dynamo/utils.py", line 3086, in get_fake_value
raise TorchRuntimeError(str(e)).with_traceback(e.__traceback__) from None
File "/home/yvesw/miniconda3/envs/torch-preview/lib/python3.11/site-packages/torch/_dynamo/utils.py", line 3021, in get_fake_value
ret_val = wrap_fake_exception(
^^^^^^^^^^^^^^^^^^^^
File "/home/yvesw/miniconda3/envs/torch-preview/lib/python3.11/site-packages/torch/_dynamo/utils.py", line 2554, in wrap_fake_exception
return fn()
^^^^
File "/home/yvesw/miniconda3/envs/torch-preview/lib/python3.11/site-packages/torch/_dynamo/utils.py", line 3022, in <lambda>
lambda: run_node(tx.output, node, args, kwargs, nnmodule)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/yvesw/miniconda3/envs/torch-preview/lib/python3.11/site-packages/torch/_dynamo/utils.py", line 3162, in run_node
raise RuntimeError(make_error_message(e)).with_traceback(
File "/home/yvesw/miniconda3/envs/torch-preview/lib/python3.11/site-packages/torch/_dynamo/utils.py", line 3138, in run_node
return node.target(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/yvesw/miniconda3/envs/torch-preview/lib/python3.11/site-packages/torch/_decomp/decompositions.py", line 1885, in unsafe_chunk_py_impl
split_sizes = [split_size for _ in chunks]
^^^^^^^^^^^^^^^^^^^^^^^^^^^^
torch._dynamo.exc.TorchRuntimeError: Failed running call_function <built-in method unsafe_chunk of type object at 0x7f066e98ef60>(*(FakeTensor(..., size=(0,)),), **{'chunks': 4, 'dim': 0}):
'int' object is not iterable
from user code:
File "/home/yvesw/reborn2-expr/250216-bugs/test-4.py", line 9, in f
return torch.unsafe_chunk(var_6, chunks=sym_1, dim=0)
Set TORCH_LOGS="+dynamo" and TORCHDYNAMO_VERBOSE=1 for more information
```
### Versions
2.7.0.dev20250209+cu124
cc @chauhang @penguinwu @SherlockNoMad @voznesenskym @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @amjames
| true
|
2,855,892,635
|
[Pyper] Enable GQA in PMA module
|
mengluy0125
|
open
|
[
"fb-exported"
] | 7
|
CONTRIBUTOR
|
Summary: Add an option to use GQA instead of PMA
Test Plan:
# How to set GQA
```
def enable_gqa(job):
job = job.set_arg_path("arch.mtml_model.shared_arch.pytorch_interformer.interformer.interformer_config.megaformer_config.use_gqa", True)
return job
```
# local reproduce
```
CUDA_VISIBLE_DEVICES=5 buck2 run mode/opt //aps_models/ads/ecosystem/tooling/tools/efficient_module_suite/pyper_models:pyper_model_perf_benchmark -- --flow_id 688722956 --shrink_model --mfu_profile_module "impl.shared_arch.pytorch_dhen.interformer" --use_synthetic_data
```
| Metric | Value |
|:-------------------|:-----------|
| Batch size | 10 |
| GPU type | H100 |
| Latency | 124.48 ms |
| Model size | 2035.85 MB |
| Flops | 35.04 G |
| Flops/example | 3.50 G |
| TFLOPS/sec | 0.28 |
| MFU | 0.04% |
| Activation/example | 201.19 MB |
Trace link: https://our.intern.facebook.com/intern/perfdoctor/trace_view?filepath=tree/traces/efficient_module_suite/mtml_link_click_model.Feb_14_14_35_39_trace.json.gz&bucket=pyper_traces
snapshot link: https://www.internalfb.com/manifold/explorer/ai_efficiency/tree/gpu_snapshot/mtml_link_click_model.Feb_14_14_35_39.snapshot.pickle
# E2E
Differential Revision: D69557675
| true
|
2,855,833,080
|
[inductor][fuzzer] `torch.ops.aten.lift` causes internal assertion `isFunctionalTensor` fail
|
WLFJ
|
open
|
[
"module: crash",
"triaged",
"oncall: pt2",
"module: empty tensor",
"topic: fuzzer"
] | 1
|
NONE
|
### 🐛 Describe the bug
# Bug Description
`torch.ops.aten.lift` works fine in eager mode, but causes the internal assertion `TORCH_INTERNAL_ASSERT(!at::functionalization::impl::isFunctionalTensor(self));` to fail under Inductor.
For example:
```python
import torch
print(torch.__version__)
def f(sym_0, sym_1, sym_2):
var_365 = torch.randint(low= sym_0, high= sym_1, size= sym_2)
return torch.ops.aten.lift(var_365)
res = f(0, 100, [2, 3])
print(res)
res = torch.compile(f)(0, 100, [2, 3])
print(res)
```
Running result:
```
2.7.0.dev20250209+cu124
tensor([[40, 80, 11],
[42, 51, 88]])
Traceback (most recent call last):
File "/home/yvesw/reborn2-expr/250216-bugs/lift.py", line 12, in <module>
res = torch.compile(f)(0, 100, [2, 3])
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/yvesw/miniconda3/envs/torch-preview/lib/python3.11/site-packages/torch/_dynamo/eval_frame.py", line 574, in _fn
raise e.remove_dynamo_frames() from None # see TORCHDYNAMO_VERBOSE=1
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/yvesw/miniconda3/envs/torch-preview/lib/python3.11/site-packages/torch/_dynamo/output_graph.py", line 1487, in _call_user_compiler
raise BackendCompilerFailed(
File "/home/yvesw/miniconda3/envs/torch-preview/lib/python3.11/site-packages/torch/_dynamo/output_graph.py", line 1466, in _call_user_compiler
compiled_fn = compiler_fn(gm, self.example_inputs())
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/yvesw/miniconda3/envs/torch-preview/lib/python3.11/site-packages/torch/_dynamo/repro/after_dynamo.py", line 131, in __call__
compiled_gm = compiler_fn(gm, example_inputs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/yvesw/miniconda3/envs/torch-preview/lib/python3.11/site-packages/torch/__init__.py", line 2339, in __call__
return compile_fx(model_, inputs_, config_patches=self.config)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/yvesw/miniconda3/envs/torch-preview/lib/python3.11/site-packages/torch/_inductor/compile_fx.py", line 2163, in compile_fx
return aot_autograd(
^^^^^^^^^^^^^
File "/home/yvesw/miniconda3/envs/torch-preview/lib/python3.11/site-packages/torch/_dynamo/backends/common.py", line 83, in __call__
cg = aot_module_simplified(gm, example_inputs, **self.kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/yvesw/miniconda3/envs/torch-preview/lib/python3.11/site-packages/torch/_functorch/aot_autograd.py", line 1158, in aot_module_simplified
compiled_fn = AOTAutogradCache.load(
^^^^^^^^^^^^^^^^^^^^^^
File "/home/yvesw/miniconda3/envs/torch-preview/lib/python3.11/site-packages/torch/_functorch/_aot_autograd/autograd_cache.py", line 779, in load
compiled_fn = dispatch_and_compile()
^^^^^^^^^^^^^^^^^^^^^^
File "/home/yvesw/miniconda3/envs/torch-preview/lib/python3.11/site-packages/torch/_functorch/aot_autograd.py", line 1143, in dispatch_and_compile
compiled_fn, _ = create_aot_dispatcher_function(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/yvesw/miniconda3/envs/torch-preview/lib/python3.11/site-packages/torch/_functorch/aot_autograd.py", line 570, in create_aot_dispatcher_function
return _create_aot_dispatcher_function(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/yvesw/miniconda3/envs/torch-preview/lib/python3.11/site-packages/torch/_functorch/aot_autograd.py", line 671, in _create_aot_dispatcher_function
fw_metadata = run_functionalized_fw_and_collect_metadata(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/yvesw/miniconda3/envs/torch-preview/lib/python3.11/site-packages/torch/_functorch/_aot_autograd/collect_metadata_analysis.py", line 197, in inner
flat_f_outs = f(*flat_f_args)
^^^^^^^^^^^^^^^
File "/home/yvesw/miniconda3/envs/torch-preview/lib/python3.11/site-packages/torch/_functorch/_aot_autograd/traced_function_transforms.py", line 875, in functional_call
out = PropagateUnbackedSymInts(mod).run(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/yvesw/miniconda3/envs/torch-preview/lib/python3.11/site-packages/torch/fx/interpreter.py", line 171, in run
self.env[node] = self.run_node(node)
^^^^^^^^^^^^^^^^^^^
File "/home/yvesw/miniconda3/envs/torch-preview/lib/python3.11/site-packages/torch/fx/experimental/symbolic_shapes.py", line 7053, in run_node
result = super().run_node(n)
^^^^^^^^^^^^^^^^^^^
File "/home/yvesw/miniconda3/envs/torch-preview/lib/python3.11/site-packages/torch/fx/interpreter.py", line 236, in run_node
return getattr(self, n.op)(n.target, args, kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/yvesw/miniconda3/envs/torch-preview/lib/python3.11/site-packages/torch/fx/interpreter.py", line 316, in call_function
return target(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^
File "/home/yvesw/miniconda3/envs/torch-preview/lib/python3.11/site-packages/torch/_ops.py", line 1156, in __call__
return self._op(*args, **(kwargs or {}))
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/yvesw/miniconda3/envs/torch-preview/lib/python3.11/site-packages/torch/_subclasses/functional_tensor.py", line 527, in __torch_dispatch__
outs_unwrapped = func._op_dk(
^^^^^^^^^^^^
torch._dynamo.exc.BackendCompilerFailed: backend='inductor' raised:
RuntimeError: !at::functionalization::impl::isFunctionalTensor(self) INTERNAL ASSERT FAILED at "/pytorch/aten/src/ATen/FunctionalizeFallbackKernel.cpp":190, please report a bug to PyTorch.
While executing %lift : [num_users=1] = call_function[target=torch.ops.aten.lift](args = (%var_365,), kwargs = {})
GraphModule: class GraphModule(torch.nn.Module):
def forward(self):
# File: /home/yvesw/reborn2-expr/250216-bugs/lift.py:6 in f, code: var_365 = torch.randint(low= sym_0, high= sym_1, size= sym_2)
var_365: "i64[2, 3][3, 1]" = torch.randint(low = 0, high = 100, size = [2, 3])
# File: /home/yvesw/reborn2-expr/250216-bugs/lift.py:7 in f, code: return torch.ops.aten.lift(var_365)
lift: "i64[2, 3][3, 1]" = torch.ops.aten.lift(var_365); var_365 = None
return (lift,)
Original traceback:
File "/home/yvesw/reborn2-expr/250216-bugs/lift.py", line 7, in f
return torch.ops.aten.lift(var_365)
Set TORCH_LOGS="+dynamo" and TORCHDYNAMO_VERBOSE=1 for more information
```
### Versions
PyTorch 2.7.0.dev20250209+cu124
cc @chauhang @penguinwu
| true
|
2,855,821,215
|
PaddleOCR PyTorch conflicts
|
monkeycc
|
open
|
[
"module: binaries",
"module: windows",
"triaged"
] | 3
|
NONE
|
```
python -m pip install paddlepaddle-gpu==2.6.2 -i https://www.paddlepaddle.org.cn/packages/stable/cu118/
pip install paddleocr
pip3 install torch torchvision torchaudio --index-url https://download.pytorch.org/whl/cu118
```
```python
from paddleocr import PaddleOCR
import torch
```
paddleocr 2.9.1
pytorch 2.6.0
The two packages cannot be imported in the same process, but each works normally on its own.
```python
import torch
File "D:\anaconda3\envs\xxxx\lib\site-packages\torch\__init__.py", line 137, in <module>
raise err
OSError: [WinError 127] Cannot find the specified program. Error loading "D:\anaconda3\envs\xxxx\lib\site-packages\torch\lib\shm.dll" or one of its dependencies
```
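A minimal probe of the failure, assuming it is a Windows DLL search-order conflict between the two packages (the `os.add_dll_directory` call is a speculative workaround, not a confirmed fix):
```python
import importlib.util
import os
import sys

# Speculatively register torch's bundled DLL directory before anything else
# loads a conflicting dependency (Windows, Python 3.8+ only).
spec = importlib.util.find_spec("torch")
if sys.platform == "win32" and spec is not None and spec.origin is not None:
    torch_lib = os.path.join(os.path.dirname(spec.origin), "lib")
    if os.path.isdir(torch_lib):
        os.add_dll_directory(torch_lib)

import torch                     # importing torch before paddleocr also probes load order
from paddleocr import PaddleOCR

print(torch.__version__)
```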
cc @seemethere @malfet @osalpekar @atalman @peterjc123 @mszhanyi @skyline75489 @nbcsm @iremyux @Blackhex
| true
|
2,855,788,371
|
[executorch hash update] update the pinned executorch hash
|
pytorchupdatebot
|
closed
|
[
"open source",
"Merged",
"ciflow/trunk",
"topic: not user facing",
"ciflow/inductor"
] | 3
|
COLLABORATOR
|
This PR is auto-generated nightly by [this action](https://github.com/pytorch/pytorch/blob/main/.github/workflows/nightly.yml).
Update the pinned executorch hash.
| true
|
2,855,781,151
|
[experimental][fbcode] delayed compile
|
bobrenjc93
|
closed
|
[
"ciflow/trunk",
"module: dynamo",
"ciflow/inductor"
] | 5
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #147272
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @chauhang @amjames
Differential Revision: [D69728173](https://our.internmc.facebook.com/intern/diff/D69728173)
| true
|
2,855,780,471
|
[experimental] delayed compile
|
bobrenjc93
|
closed
|
[] | 2
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #147271
* #147270
| true
|
2,855,780,449
|
[experimental] delayed compile
|
bobrenjc93
|
closed
|
[
"module: dynamo",
"ciflow/inductor"
] | 1
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #147271
* __->__ #147270
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @chauhang @amjames
| true
|
2,855,755,887
|
[experimental] delayed export
|
bobrenjc93
|
closed
|
[
"module: dynamo",
"ciflow/inductor"
] | 2
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #147269
* #147265
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @chauhang @amjames
| true
|
2,855,714,246
|
flex_attention throws `CUDA error: an illegal memory access was encountered`
|
mauriceweiler
|
closed
|
[
"triaged",
"oncall: pt2",
"module: higher order operators",
"module: pt2-dispatcher",
"module: flex attention"
] | 2
|
NONE
|
### 🐛 Describe the bug
When calling `flex_attention` compiled with `torch.compile` after a `max_pool2d` operation, I encounter an error:
```
RuntimeError: CUDA error: an illegal memory access was encountered
```
To align the data layout expected by `flex_attention` and `max_pool2d` I am using `.reshape` and `.moveaxis` operations.
I observed that the issue is fixed when calling `.contiguous` after both `.moveaxis` ops but reoccurs when calling `.clone()` afterwards.
To reproduce this issue, run `python reproducer.py --setting N`, where N=0,...,3:
- [N=0]: This version uses `.contiguous` and runs without error.
- [N=1]: Here we switch off `.contiguous` after `.moveaxis`, resulting in the illegal memory access error.
- [N=2]: If we `.clone` the feature tensor after the pooling operation it breaks again, despite using `.contiguous`.
- [N=3]: If we only use the first `.contiguous`, there is yet another error `torch._inductor.exc.InductorError: LoweringException: AssertionError: Query must be contiguous in the last dimension`
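For the N=3 case, the error message corresponds to a simple property: the query's last-dimension stride must be 1. A minimal check of that property, using illustrative shapes rather than the reproducer's exact sizes:
```python
import torch

feat = torch.randn(512, 28, 28, 64).moveaxis(-1, 1)    # channels-first view, as before max_pool2d
print(feat.stride(-1))               # 64 -> not contiguous in the last dimension
print(feat.contiguous().stride(-1))  # 1  -> satisfies flex_attention's requirement
```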
`reproducer.py`:
``` python
import argparse
import torch
from torch.nn.attention.flex_attention import flex_attention, create_block_mask
flex_attention_compiled = torch.compile(flex_attention)
def pool(feat, kernel_size, stride, padding, CONTIGUOUS_1, CONTIGUOUS_2, CLONE):
B,H,N,C = feat.shape
X = int(N**.5)
feat = feat.reshape(B*H,X,X,C) # reshape to square pixel grid, treat heads as batch dimension
feat = feat.moveaxis(-1,1) # (BH,C,X,X), as required for pytorch grid ops like pool2d
if CONTIGUOUS_1: # <<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<
feat = feat.contiguous() # <<< REQUIRED TO PREVENT ILLEGAL MEMORY ACCESS ERROR !!! <<<<<
feat = torch.nn.functional.max_pool2d(feat, kernel_size, stride, padding)
feat = feat.moveaxis(1,-1) # (BH,C,X',X')
if CONTIGUOUS_2: # <<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<
feat = feat.contiguous() # <<< REQUIRED TO PREVENT ILLEGAL MEMORY ACCESS ERROR !!! <<<<<
feat = feat.reshape(B,H,-1,C) # (B,H,N',C), as required by flex_attention
if CLONE: # <<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<
feat = feat.clone() # <<< REINTRODUCES ERROR EVEN WHEN .contiguous IS USED !!! <<<<
return feat
if __name__ == '__main__':
parser = argparse.ArgumentParser()
parser.add_argument('--setting',
help = 'switches between experimental settings 0,...,3',
type = int)
args = parser.parse_args()
# Everything works out when using .contiguous after both .moveaxis in pool
if args.setting == 0:
CONTIGUOUS_1 = True
CONTIGUOUS_2 = True
CLONE = False
# Switching off .contiguous yields:
# RuntimeError: CUDA error: an illegal memory access was encountered
# (this works when not compiling flex_attention)
elif args.setting == 1:
CONTIGUOUS_1 = False
CONTIGUOUS_2 = False
CLONE = False
# Adding a .clone() before passing features into flex_attention again leads to an illegal memory
# error despite using .contiguous.
elif args.setting == 2:
CONTIGUOUS_1 = True
CONTIGUOUS_2 = True
CLONE = True
# When using only the first .contiguous, we get yet another error:
# torch._inductor.exc.InductorError: LoweringException: AssertionError: Query must be contiguous in the last dimension
elif args.setting == 3:
CONTIGUOUS_1 = True
CONTIGUOUS_2 = False
CLONE = True
else:
raise ValueError('Invalid setting passed, should be in 0,...,3.')
B = 64
H = 8
C = 64
N = 32**2
feat = torch.randn(B,H,N,C).cuda()
feat = pool(feat, kernel_size=5, stride=1, padding=0,
CONTIGUOUS_1=CONTIGUOUS_1, CONTIGUOUS_2=CONTIGUOUS_2, CLONE=CLONE)
print('pool:', feat.shape, feat[0,0,0,0])
feat = flex_attention_compiled(feat, feat, feat)
print('attn:', feat.shape, feat[0,0,0,0]) # accessing feat is required to surface the error
```
@BoyuanFeng
### Error logs
Illegal memory access error for `python reproducer.py --setting 1` and `python reproducer.py --setting 2`:
```
File ".../reproducer_illegal_memory_error.py", line 71, in <module>
print('attn:', feat.shape, feat[0,0,0,0]) # accessing feat is required to surface the error
~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File ".../.venv/lib/python3.13/site-packages/torch/_tensor.py", line 590, in __repr__
return torch._tensor_str._str(self, tensor_contents=tensor_contents)
~~~~~~~~~~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File ".../.venv/lib/python3.13/site-packages/torch/_tensor_str.py", line 702, in _str
return _str_intern(self, tensor_contents=tensor_contents)
File ".../.venv/lib/python3.13/site-packages/torch/_tensor_str.py", line 621, in _str_intern
tensor_str = _tensor_str(self, indent)
File ".../.venv/lib/python3.13/site-packages/torch/_tensor_str.py", line 353, in _tensor_str
formatter = _Formatter(get_summarized_data(self) if summarize else self)
File ".../.venv/lib/python3.13/site-packages/torch/_tensor_str.py", line 145, in __init__
nonzero_finite_vals = torch.masked_select(
tensor_view, torch.isfinite(tensor_view) & tensor_view.ne(0)
)
RuntimeError: CUDA error: an illegal memory access was encountered
CUDA kernel errors might be asynchronously reported at some other API call, so the stacktrace below might be incorrect.
For debugging consider passing CUDA_LAUNCH_BLOCKING=1
Compile with `TORCH_USE_CUDA_DSA` to enable device-side assertions.
```
For `python reproducer.py --setting 3`, we get another error `torch._inductor.exc.InductorError: LoweringException: AssertionError: Query must be contiguous in the last dimension`:
```
Traceback (most recent call last):
File ".../reproducer_illegal_memory_error.py", line 70, in <module>
feat = flex_attention_compiled(feat, feat, feat)
File ".../.venv/lib/python3.13/site-packages/torch/_dynamo/eval_frame.py", line 589, in _fn
raise e.remove_dynamo_frames() from None # see TORCHDYNAMO_VERBOSE=1
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File ".../.venv/lib/python3.13/site-packages/torch/_inductor/compile_fx.py", line 752, in _compile_fx_inner
raise InductorError(e, currentframe()).with_traceback(
e.__traceback__
) from None
File ".../.venv/lib/python3.13/site-packages/torch/_inductor/compile_fx.py", line 737, in _compile_fx_inner
mb_compiled_graph = fx_codegen_and_compile(
gm, example_inputs, inputs_to_check, **graph_kwargs
)
File ".../.venv/lib/python3.13/site-packages/torch/_inductor/compile_fx.py", line 1405, in fx_codegen_and_compile
return scheme.codegen_and_compile(gm, example_inputs, inputs_to_check, graph_kwargs)
~~~~~~~~~~~~~~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File ".../.venv/lib/python3.13/site-packages/torch/_inductor/compile_fx.py", line 1060, in codegen_and_compile
graph.run(*example_inputs)
~~~~~~~~~^^^^^^^^^^^^^^^^^
File ".../.venv/lib/python3.13/site-packages/torch/_inductor/graph.py", line 855, in run
return super().run(*args)
~~~~~~~~~~~^^^^^^^
File ".../.venv/lib/python3.13/site-packages/torch/fx/interpreter.py", line 171, in run
self.env[node] = self.run_node(node)
~~~~~~~~~~~~~^^^^^^
File ".../.venv/lib/python3.13/site-packages/torch/_inductor/graph.py", line 1440, in run_node
result = super().run_node(n)
File ".../.venv/lib/python3.13/site-packages/torch/fx/interpreter.py", line 236, in run_node
return getattr(self, n.op)(n.target, args, kwargs)
~~~~~~~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^
File ".../.venv/lib/python3.13/site-packages/torch/_inductor/graph.py", line 1143, in call_function
raise LoweringException(e, target, args, kwargs).with_traceback(
e.__traceback__
) from None
File ".../.venv/lib/python3.13/site-packages/torch/_inductor/graph.py", line 1133, in call_function
out = lowerings[target](*args, **kwargs) # type: ignore[index]
File ".../.venv/lib/python3.13/site-packages/torch/_inductor/lowering.py", line 462, in wrapped
out = decomp_fn(*args, **kwargs)
File ".../.venv/lib/python3.13/site-packages/torch/_inductor/kernel/flex_attention.py", line 1260, in flex_attention
assert q_strides[-1] == 1, "Query must be contiguous in the last dimension"
^^^^^^^^^^^^^^^^^^
torch._inductor.exc.InductorError: LoweringException: AssertionError: Query must be contiguous in the last dimension
target: flex_attention
args[0]: TensorBox(StorageBox(
InputBuffer(name='arg0_1', layout=FixedLayout('cuda:0', torch.float32, size=[64, 8, 784, 64], stride=[401408, 50176, 1, 784]))
))
args[1]: TensorBox(StorageBox(
InputBuffer(name='arg0_1', layout=FixedLayout('cuda:0', torch.float32, size=[64, 8, 784, 64], stride=[401408, 50176, 1, 784]))
))
args[2]: TensorBox(StorageBox(
InputBuffer(name='arg0_1', layout=FixedLayout('cuda:0', torch.float32, size=[64, 8, 784, 64], stride=[401408, 50176, 1, 784]))
))
args[3]: Subgraph(name='sdpa_score0', graph_module=<lambda>(), graph=None)
args[4]: (1, 1, TensorBox(StorageBox(
ComputedBuffer(name='buf2', layout=FlexibleLayout('cuda:0', torch.int32, size=[1, 1, 1], stride=[1, 1, 1]), data=Pointwise(device=device(type='cuda', index=0), dtype=torch.int32, inner_fn=<function _full.<locals>.inner_fn at 0x7f29d51ee660>, ranges=[1, 1, 1]))
)), TensorBox(StorageBox(
ComputedBuffer(name='buf3', layout=FlexibleLayout('cuda:0', torch.int32, size=[1, 1, 1, 1], stride=[1, 1, 1, 1]), data=Pointwise(device=device(type='cuda', index=0), dtype=torch.int32, inner_fn=<function _full.<locals>.inner_fn at 0x7f29d51efba0>, ranges=[1, 1, 1, 1]))
)), None, None, TensorBox(StorageBox(
ComputedBuffer(name='buf4', layout=FlexibleLayout('cuda:0', torch.int32, size=[1, 1, 1], stride=[1, 1, 1]), data=Pointwise(device=device(type='cuda', index=0), dtype=torch.int32, inner_fn=<function make_pointwise.<locals>.inner.<locals>.inner_fn at 0x7f29d52087c0>, ranges=[1, 1, 1]))
)), TensorBox(StorageBox(
ComputedBuffer(name='buf5', layout=FlexibleLayout('cuda:0', torch.int32, size=[1, 1, 1, 1], stride=[1, 1, 1, 1]), data=Pointwise(device=device(type='cuda', index=0), dtype=torch.int32, inner_fn=<function make_pointwise.<locals>.inner.<locals>.inner_fn at 0x7f29d5208ea0>, ranges=[1, 1, 1, 1]))
)), None, None, 1073741824, 1073741824, Subgraph(name='sdpa_mask0', graph_module=<lambda>(), graph=None))
args[5]: 0.125
args[6]: {'PRESCALE_QK': False, 'ROWS_GUARANTEED_SAFE': False, 'BLOCKS_ARE_CONTIGUOUS': False, 'WRITE_DQ': True, 'OUTPUT_LOGSUMEXP': True}
args[7]: ()
args[8]: ()
Set TORCH_LOGS="+dynamo" and TORCHDYNAMO_VERBOSE=1 for more information
```
### Versions
```
Collecting environment information...
PyTorch version: 2.7.0.dev20250213+cu126
Is debug build: False
CUDA used to build PyTorch: 12.6
ROCM used to build PyTorch: N/A
OS: Ubuntu 24.04.2 LTS (x86_64)
GCC version: (Ubuntu 13.3.0-6ubuntu2~24.04) 13.3.0
Clang version: Could not collect
CMake version: Could not collect
Libc version: glibc-2.39
Python version: 3.13.1 (main, Dec 19 2024, 14:32:25) [Clang 18.1.8 ] (64-bit runtime)
Python platform: Linux-6.8.0-52-generic-x86_64-with-glibc2.39
Is CUDA available: True
CUDA runtime version: Could not collect
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration:
GPU 0: NVIDIA A100 80GB PCIe
GPU 1: NVIDIA A100 80GB PCIe
GPU 2: NVIDIA A100 80GB PCIe
GPU 3: NVIDIA A100 80GB PCIe
GPU 4: NVIDIA A100 80GB PCIe
GPU 5: NVIDIA A100 80GB PCIe
GPU 6: NVIDIA A100 80GB PCIe
GPU 7: NVIDIA A100 80GB PCIe
Nvidia driver version: 550.120
cuDNN version: Could not collect
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 43 bits physical, 48 bits virtual
Byte Order: Little Endian
CPU(s): 128
On-line CPU(s) list: 0-127
Vendor ID: AuthenticAMD
Model name: AMD EPYC 7532 32-Core Processor
CPU family: 23
Model: 49
Thread(s) per core: 2
Core(s) per socket: 32
Socket(s): 2
Stepping: 0
Frequency boost: enabled
CPU(s) scaling MHz: 119%
CPU max MHz: 2400.0000
CPU min MHz: 1500.0000
BogoMIPS: 4799.85
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ht syscall nx mmxext fxsr_opt pdpe1gb rdtscp lm constant_tsc rep_good nopl nonstop_tsc cpuid extd_apicid aperfmperf rapl pni pclmulqdq monitor ssse3 fma cx16 sse4_1 sse4_2 movbe popcnt aes xsave avx f16c rdrand lahf_lm cmp_legacy svm extapic cr8_legacy abm sse4a misalignsse 3dnowprefetch osvw ibs skinit wdt tce topoext perfctr_core perfctr_nb bpext perfctr_llc mwaitx cpb cat_l3 cdp_l3 hw_pstate ssbd mba ibrs ibpb stibp vmmcall fsgsbase bmi1 avx2 smep bmi2 cqm rdt_a rdseed adx smap clflushopt clwb sha_ni xsaveopt xsavec xgetbv1 xsaves cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local clzero irperf xsaveerptr rdpru wbnoinvd amd_ppin arat npt lbrv svm_lock nrip_save tsc_scale vmcb_clean flushbyasid decodeassists pausefilter pfthreshold avic v_vmsave_vmload vgif v_spec_ctrl umip rdpid overflow_recov succor smca sev sev_es
Virtualization: AMD-V
L1d cache: 2 MiB (64 instances)
L1i cache: 2 MiB (64 instances)
L2 cache: 32 MiB (64 instances)
L3 cache: 512 MiB (32 instances)
NUMA node(s): 2
NUMA node0 CPU(s): 0-31,64-95
NUMA node1 CPU(s): 32-63,96-127
Vulnerability Gather data sampling: Not affected
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Not affected
Vulnerability Reg file data sampling: Not affected
Vulnerability Retbleed: Mitigation; untrained return thunk; SMT enabled with STIBP protection
Vulnerability Spec rstack overflow: Mitigation; Safe RET
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Retpolines; IBPB conditional; STIBP always-on; RSB filling; PBRSB-eIBRS Not affected; BHI Not affected
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Versions of relevant libraries:
[pip3] numpy==2.2.3
[pip3] nvidia-cublas-cu12==12.6.4.1
[pip3] nvidia-cuda-cupti-cu12==12.6.80
[pip3] nvidia-cuda-nvrtc-cu12==12.6.77
[pip3] nvidia-cuda-runtime-cu12==12.6.77
[pip3] nvidia-cudnn-cu12==9.5.1.17
[pip3] nvidia-cufft-cu12==11.3.0.4
[pip3] nvidia-curand-cu12==10.3.7.77
[pip3] nvidia-cusolver-cu12==11.7.1.2
[pip3] nvidia-cusparse-cu12==12.5.4.2
[pip3] nvidia-cusparselt-cu12==0.6.3
[pip3] nvidia-nccl-cu12==2.21.5
[pip3] nvidia-nvjitlink-cu12==12.6.85
[pip3] nvidia-nvtx-cu12==12.6.77
[pip3] pytorch-triton==3.2.0+git4b3bb1f8
[pip3] torch==2.7.0.dev20250213+cu126
[pip3] torchaudio==2.6.0.dev20250213+cu126
[pip3] torchvision==0.22.0.dev20250213+cu126
[conda] Could not collect
```
cc @chauhang @penguinwu @zou3519 @ydwu4 @bdhirsh @Chillee @drisspg @yanboliang @BoyuanFeng
| true
|
2,855,702,037
|
flex_attention with N<128 tokens throws `CUDA error: device-side assert triggered`
|
mauriceweiler
|
closed
|
[
"triaged",
"oncall: pt2",
"module: higher order operators",
"module: pt2-dispatcher",
"module: flex attention"
] | 4
|
NONE
|
### 🐛 Describe the bug
When using torch.compiled flex_attention with N<128 tokens, I get the following error:
```
RuntimeError: Triton Error [CUDA]: device-side assert triggered
```
It seems to be related to using a BlockMask with `H>1`. For `H=1` everything works out.
To reproduce this issue, run `python reproducer.py --setting N`, where N=0,...,2:
- [N=0]: This version uses N=129>=128 and runs without issues.
- [N=1]: When using N=127<128, I get the above-mentioned error.
- [N=2]: I found that the error no longer arises when using H=1 instead of H=H in create_block_mask.
`reproducer.py`:
``` python
import argparse
import torch
from torch.nn.attention.flex_attention import flex_attention, create_block_mask
create_block_mask_compiled = torch.compile(create_block_mask)
flex_attention_compiled = torch.compile(flex_attention)
def causal_mask_mod(b_idx, h_idx, q_idx, kv_idx):
return q_idx >= kv_idx
def attn(feat, H_BlockMask):
B,H,N,C = feat.shape
block_mask = create_block_mask_compiled(mask_mod = causal_mask_mod,
B = B,
H = H_BlockMask,
Q_LEN = N,
KV_LEN = N,
device = 'cuda',
BLOCK_SIZE = 128)
feat = flex_attention_compiled(feat, feat, feat, block_mask=block_mask)
return feat
if __name__ == '__main__':
parser = argparse.ArgumentParser()
parser.add_argument('--setting',
help = 'switches between experimental settings 0,...,2',
type = int)
args = parser.parse_args()
B = 64
H = 8
C = 32
# For N>=128, flex_attention works as expected.
if args.setting == 0:
N = 129
H_BlockMask = H
# For N<128, we get RuntimeError: Triton Error [CUDA]: device-side assert triggered
elif args.setting == 1:
N = 127
H_BlockMask = H
# When setting H=1 in create_block_mask, everything works despite N<128.
elif args.setting == 2:
N = 127
H_BlockMask = 1
else:
raise ValueError('Invalid setting passed, should be in 0,...,2.')
feat = torch.randn(B,H,N,C).cuda()
feat = attn(feat, H_BlockMask)
print('attn:', feat.shape, feat[0,0,0,0])
```
@BoyuanFeng
### Error logs
```
...
/tmp/torchinductor_/w5/cw5zbz6dpqb6xzefogmkyteasgcf35expdzhnx4kxx2aadexgphx.py:119: unknown: block: [61,0,0], thread: [30,0,0] Assertion failed.
/tmp/torchinductor_/w5/cw5zbz6dpqb6xzefogmkyteasgcf35expdzhnx4kxx2aadexgphx.py:119: unknown: block: [61,0,0], thread: [31,0,0] Assertion failed.
Traceback (most recent call last):
File ".../reproducer_device-side_assert.py", line 56, in <module>
print('attn:', feat.shape, feat[0,0,0,0])
~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File ".../.venv/lib/python3.13/site-packages/torch/_tensor.py", line 590, in __repr__
return torch._tensor_str._str(self, tensor_contents=tensor_contents)
~~~~~~~~~~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File ".../.venv/lib/python3.13/site-packages/torch/_tensor_str.py", line 702, in _str
return _str_intern(self, tensor_contents=tensor_contents)
File ".../.venv/lib/python3.13/site-packages/torch/_tensor_str.py", line 621, in _str_intern
tensor_str = _tensor_str(self, indent)
File ".../.venv/lib/python3.13/site-packages/torch/_tensor_str.py", line 353, in _tensor_str
formatter = _Formatter(get_summarized_data(self) if summarize else self)
File ".../.venv/lib/python3.13/site-packages/torch/_tensor_str.py", line 146, in __init__
tensor_view, torch.isfinite(tensor_view) & tensor_view.ne(0)
~~~~~~~~~~~~~~^^^^^^^^^^^^^
RuntimeError: CUDA error: device-side assert triggered
CUDA kernel errors might be asynchronously reported at some other API call, so the stacktrace below might be incorrect.
For debugging consider passing CUDA_LAUNCH_BLOCKING=1
Compile with `TORCH_USE_CUDA_DSA` to enable device-side assertions.
```
### Versions
```
Collecting environment information...
PyTorch version: 2.7.0.dev20250213+cu126
Is debug build: False
CUDA used to build PyTorch: 12.6
ROCM used to build PyTorch: N/A
OS: Ubuntu 24.04.2 LTS (x86_64)
GCC version: (Ubuntu 13.3.0-6ubuntu2~24.04) 13.3.0
Clang version: Could not collect
CMake version: Could not collect
Libc version: glibc-2.39
Python version: 3.13.1 (main, Dec 19 2024, 14:32:25) [Clang 18.1.8 ] (64-bit runtime)
Python platform: Linux-6.8.0-52-generic-x86_64-with-glibc2.39
Is CUDA available: True
CUDA runtime version: Could not collect
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration:
GPU 0: NVIDIA A100 80GB PCIe
GPU 1: NVIDIA A100 80GB PCIe
GPU 2: NVIDIA A100 80GB PCIe
GPU 3: NVIDIA A100 80GB PCIe
GPU 4: NVIDIA A100 80GB PCIe
GPU 5: NVIDIA A100 80GB PCIe
GPU 6: NVIDIA A100 80GB PCIe
GPU 7: NVIDIA A100 80GB PCIe
Nvidia driver version: 550.120
cuDNN version: Could not collect
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 43 bits physical, 48 bits virtual
Byte Order: Little Endian
CPU(s): 128
On-line CPU(s) list: 0-127
Vendor ID: AuthenticAMD
Model name: AMD EPYC 7532 32-Core Processor
CPU family: 23
Model: 49
Thread(s) per core: 2
Core(s) per socket: 32
Socket(s): 2
Stepping: 0
Frequency boost: enabled
CPU(s) scaling MHz: 119%
CPU max MHz: 2400.0000
CPU min MHz: 1500.0000
BogoMIPS: 4799.85
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ht syscall nx mmxext fxsr_opt pdpe1gb rdtscp lm constant_tsc rep_good nopl nonstop_tsc cpuid extd_apicid aperfmperf rapl pni pclmulqdq monitor ssse3 fma cx16 sse4_1 sse4_2 movbe popcnt aes xsave avx f16c rdrand lahf_lm cmp_legacy svm extapic cr8_legacy abm sse4a misalignsse 3dnowprefetch osvw ibs skinit wdt tce topoext perfctr_core perfctr_nb bpext perfctr_llc mwaitx cpb cat_l3 cdp_l3 hw_pstate ssbd mba ibrs ibpb stibp vmmcall fsgsbase bmi1 avx2 smep bmi2 cqm rdt_a rdseed adx smap clflushopt clwb sha_ni xsaveopt xsavec xgetbv1 xsaves cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local clzero irperf xsaveerptr rdpru wbnoinvd amd_ppin arat npt lbrv svm_lock nrip_save tsc_scale vmcb_clean flushbyasid decodeassists pausefilter pfthreshold avic v_vmsave_vmload vgif v_spec_ctrl umip rdpid overflow_recov succor smca sev sev_es
Virtualization: AMD-V
L1d cache: 2 MiB (64 instances)
L1i cache: 2 MiB (64 instances)
L2 cache: 32 MiB (64 instances)
L3 cache: 512 MiB (32 instances)
NUMA node(s): 2
NUMA node0 CPU(s): 0-31,64-95
NUMA node1 CPU(s): 32-63,96-127
Vulnerability Gather data sampling: Not affected
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Not affected
Vulnerability Reg file data sampling: Not affected
Vulnerability Retbleed: Mitigation; untrained return thunk; SMT enabled with STIBP protection
Vulnerability Spec rstack overflow: Mitigation; Safe RET
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Retpolines; IBPB conditional; STIBP always-on; RSB filling; PBRSB-eIBRS Not affected; BHI Not affected
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Versions of relevant libraries:
[pip3] numpy==2.2.3
[pip3] nvidia-cublas-cu12==12.6.4.1
[pip3] nvidia-cuda-cupti-cu12==12.6.80
[pip3] nvidia-cuda-nvrtc-cu12==12.6.77
[pip3] nvidia-cuda-runtime-cu12==12.6.77
[pip3] nvidia-cudnn-cu12==9.5.1.17
[pip3] nvidia-cufft-cu12==11.3.0.4
[pip3] nvidia-curand-cu12==10.3.7.77
[pip3] nvidia-cusolver-cu12==11.7.1.2
[pip3] nvidia-cusparse-cu12==12.5.4.2
[pip3] nvidia-cusparselt-cu12==0.6.3
[pip3] nvidia-nccl-cu12==2.21.5
[pip3] nvidia-nvjitlink-cu12==12.6.85
[pip3] nvidia-nvtx-cu12==12.6.77
[pip3] pytorch-triton==3.2.0+git4b3bb1f8
[pip3] torch==2.7.0.dev20250213+cu126
[pip3] torchaudio==2.6.0.dev20250213+cu126
[pip3] torchvision==0.22.0.dev20250213+cu126
[conda] Could not collect
```
cc @chauhang @penguinwu @zou3519 @ydwu4 @bdhirsh @Chillee @drisspg @yanboliang @BoyuanFeng
| true
|
2,855,694,472
|
[MPS] Implement and test round.decimals
|
malfet
|
closed
|
[
"Merged",
"release notes: mps",
"ciflow/mps"
] | 4
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #147266
* #147286
If inductor can do it, why not eager
| true
|
2,855,656,718
|
[experimental] delayed compile
|
bobrenjc93
|
closed
|
[
"ciflow/trunk",
"module: dynamo",
"ciflow/inductor"
] | 3
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #147269
* __->__ #147265
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @chauhang @amjames
Differential Revision: [D69708204](https://our.internmc.facebook.com/intern/diff/D69708204)
| true
|
2,855,649,776
|
`Dim.DYNAMIC` inferred to be constant
|
bhack
|
open
|
[
"triaged",
"oncall: pt2",
"module: dynamic shapes"
] | 1
|
CONTRIBUTOR
|
### 🐛 Describe the bug
```python
example_frames = torch.randn(1, num_frames, H, W, 3, device=device, dtype=input_dtype)
dynamic_shapes = {
"video": {1: Dim.DYNAMIC},
"query_points": {0: torch.export.Dim.STATIC},
}
```
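For context, the export call is roughly the following; the model is the TAPIR model from the tapnet repo (its construction is elided here), and the concrete shape values are taken from the log below, so treat this as an illustrative sketch rather than the exact script:
```python
import torch
from torch.export import Dim

device, input_dtype = "cuda", torch.float32
num_frames, H, W = 250, 512, 512  # values matching the log below

example_frames = torch.randn(1, num_frames, H, W, 3, device=device, dtype=input_dtype)
query_points = torch.zeros(1, 50, 3, device=device, dtype=input_dtype)  # (1, 50, 3) as in the log

dynamic_shapes = {
    "video": {1: Dim.DYNAMIC},
    "query_points": {0: Dim.STATIC},
}

# model = TAPIR instance (construction elided)
ep = torch.export.export(model, (example_frames, query_points), dynamic_shapes=dynamic_shapes)
```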
### Error logs
```python
I0215 18:22:50.860000 11289 site-packages/torch/fx/experimental/symbolic_shapes.py:3299] [0/0] create_env
I0215 18:22:50.927000 11289 site-packages/torch/fx/experimental/symbolic_shapes.py:4562] [0/0] create_symbol s0 = 250 for L['video'].size()[1] [2, int_oo] (_dynamo/variables/builder.py:2913 in <lambda>), for more info run with TORCHDYNAMO_EXTENDED_DEBUG_CREATE_SYMBOL="s0" or to suppress this message run with TORCHDYNAMO_EXTENDED_ADVICE="0"
V0215 18:22:50.929000 11289 site-packages/torch/fx/experimental/symbolic_shapes.py:6922] [0/0] runtime_assert True == True [statically known]
V0215 18:22:51.012000 11289 site-packages/torch/fx/experimental/symbolic_shapes.py:6698] [0/0] eval size_oblivious(Ne(s0, 1)) == True [statically known]
V0215 18:22:51.018000 11289 site-packages/torch/fx/experimental/symbolic_shapes.py:6698] [0/0] eval size_oblivious(Ne(3*s0, 3)) == True [statically known]
V0215 18:22:51.025000 11289 site-packages/torch/fx/experimental/symbolic_shapes.py:6922] [0/0] runtime_assert True == True [statically known]
V0215 18:22:51.027000 11289 site-packages/torch/fx/experimental/symbolic_shapes.py:6698] [0/0] eval size_oblivious(Eq(s0, 1)) == False [statically known]
I0215 18:22:51.108000 11289 site-packages/torch/fx/experimental/symbolic_shapes.py:6560] [0/0] eval 3*s0 >= 16 [guard added] x = F.interpolate(x, size=resolution, mode='bilinear', align_corners=False) # orkspace/tapnet/tapnet/torch/utils.py:39 in bilinear (_decomp/decompositions.py:3846 in _upsample_linear), for more info run with TORCHDYNAMO_EXTENDED_DEBUG_GUARD_ADDED="3*s0 >= 16"
V0215 18:22:51.137000 11289 site-packages/torch/fx/experimental/symbolic_shapes.py:6698] [0/0] eval size_oblivious(Ne(Mod(1, s0), 0)) == True [statically known]
I0215 18:22:51.146000 11289 site-packages/torch/fx/experimental/symbolic_shapes.py:6560] [0/0] eval Eq(s0, 250) [guard added] for start_idx in range(0, video_resize.shape[0], chunk_size): # orkspace/tapnet/tapnet/torch/tapir_model.py:239 in get_feature_grids (_dynamo/variables/tensor.py:1242 in evaluate_expr), for more info run with TORCHDYNAMO_EXTENDED_DEBUG_GUARD_ADDED="Eq(s0, 250)"
V0215 18:22:51.147000 11289 site-packages/torch/fx/experimental/symbolic_shapes.py:6000] [0/0] _update_var_to_range s0 = VR[250, 250] (update)
I0215 18:22:51.148000 11289 site-packages/torch/fx/experimental/symbolic_shapes.py:6163] [0/0] set_replacement s0 = 250 (range_refined_to_singleton) VR[250, 250]
V0215 18:22:57.454000 11289 site-packages/torch/fx/experimental/symbolic_shapes.py:6687] [0/0] eval 250 [trivial]
I0215 18:23:10.997000 11289 site-packages/torch/fx/experimental/symbolic_shapes.py:4690] [0/0] produce_guards
V0215 18:23:10.997000 11289 site-packages/torch/fx/experimental/symbolic_shapes.py:4910] [0/0] track_symint L['video'].size()[0] 1 None
V0215 18:23:10.998000 11289 site-packages/torch/fx/experimental/symbolic_shapes.py:4910] [0/0] track_symint L['video'].size()[1] 250 RelaxedUnspecConstraint(warn_only=False)
V0215 18:23:10.998000 11289 site-packages/torch/fx/experimental/symbolic_shapes.py:4910] [0/0] track_symint L['video'].size()[2] 512 None
V0215 18:23:10.998000 11289 site-packages/torch/fx/experimental/symbolic_shapes.py:4910] [0/0] track_symint L['video'].size()[3] 512 None
V0215 18:23:10.998000 11289 site-packages/torch/fx/experimental/symbolic_shapes.py:4910] [0/0] track_symint L['video'].size()[4] 3 None
V0215 18:23:10.999000 11289 site-packages/torch/fx/experimental/symbolic_shapes.py:4910] [0/0] track_symint L['video'].stride()[0] 196608000 None
V0215 18:23:10.999000 11289 site-packages/torch/fx/experimental/symbolic_shapes.py:4910] [0/0] track_symint L['video'].stride()[1] 786432 None
V0215 18:23:10.999000 11289 site-packages/torch/fx/experimental/symbolic_shapes.py:4910] [0/0] track_symint L['video'].stride()[2] 1536 None
V0215 18:23:10.999000 11289 site-packages/torch/fx/experimental/symbolic_shapes.py:4910] [0/0] track_symint L['video'].stride()[3] 3 None
V0215 18:23:10.999000 11289 site-packages/torch/fx/experimental/symbolic_shapes.py:4910] [0/0] track_symint L['video'].stride()[4] 1 None
V0215 18:23:10.999000 11289 site-packages/torch/fx/experimental/symbolic_shapes.py:4910] [0/0] track_symint L['video'].storage_offset() 0 None
V0215 18:23:11.000000 11289 site-packages/torch/fx/experimental/symbolic_shapes.py:4910] [0/0] track_symint L['query_points'].size()[0] 1 None
V0215 18:23:11.000000 11289 site-packages/torch/fx/experimental/symbolic_shapes.py:4910] [0/0] track_symint L['query_points'].size()[1] 50 None
V0215 18:23:11.000000 11289 site-packages/torch/fx/experimental/symbolic_shapes.py:4910] [0/0] track_symint L['query_points'].size()[2] 3 None
V0215 18:23:11.000000 11289 site-packages/torch/fx/experimental/symbolic_shapes.py:4910] [0/0] track_symint L['query_points'].stride()[0] 150 None
V0215 18:23:11.000000 11289 site-packages/torch/fx/experimental/symbolic_shapes.py:4910] [0/0] track_symint L['query_points'].stride()[1] 3 None
V0215 18:23:11.000000 11289 site-packages/torch/fx/experimental/symbolic_shapes.py:4910] [0/0] track_symint L['query_points'].stride()[2] 1 None
V0215 18:23:11.000000 11289 site-packages/torch/fx/experimental/symbolic_shapes.py:4910] [0/0] track_symint L['query_points'].storage_offset() 0 None
E0215 18:23:11.001000 11289 site-packages/torch/_guards.py:358] [0/0] Error while creating guard:
E0215 18:23:11.001000 11289 site-packages/torch/_guards.py:358] [0/0] Name: ''
E0215 18:23:11.001000 11289 site-packages/torch/_guards.py:358] [0/0] Source: shape_env
E0215 18:23:11.001000 11289 site-packages/torch/_guards.py:358] [0/0] Create Function: SHAPE_ENV
E0215 18:23:11.001000 11289 site-packages/torch/_guards.py:358] [0/0] Guard Types: None
E0215 18:23:11.001000 11289 site-packages/torch/_guards.py:358] [0/0] Code List: None
E0215 18:23:11.001000 11289 site-packages/torch/_guards.py:358] [0/0] Object Weakref: None
E0215 18:23:11.001000 11289 site-packages/torch/_guards.py:358] [0/0] Guarded Class Weakref: None
E0215 18:23:11.001000 11289 site-packages/torch/_guards.py:358] [0/0] Traceback (most recent call last):
E0215 18:23:11.001000 11289 site-packages/torch/_guards.py:358] [0/0] File "/opt/conda/lib/python3.11/site-packages/torch/_guards.py", line 356, in create
E0215 18:23:11.001000 11289 site-packages/torch/_guards.py:358] [0/0] return self.create_fn(builder, self)
E0215 18:23:11.001000 11289 site-packages/torch/_guards.py:358] [0/0] ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
E0215 18:23:11.001000 11289 site-packages/torch/_guards.py:358] [0/0] File "/opt/conda/lib/python3.11/site-packages/torch/_dynamo/guards.py", line 1938, in SHAPE_ENV
E0215 18:23:11.001000 11289 site-packages/torch/_guards.py:358] [0/0] python_code_parts, verbose_code_parts = _get_code_parts(
E0215 18:23:11.001000 11289 site-packages/torch/_guards.py:358] [0/0] ^^^^^^^^^^^^^^^^
E0215 18:23:11.001000 11289 site-packages/torch/_guards.py:358] [0/0] File "/opt/conda/lib/python3.11/site-packages/torch/_dynamo/guards.py", line 1921, in _get_code_parts
E0215 18:23:11.001000 11289 site-packages/torch/_guards.py:358] [0/0] return output_graph.shape_env.produce_guards_verbose(
E0215 18:23:11.001000 11289 site-packages/torch/_guards.py:358] [0/0] ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
E0215 18:23:11.001000 11289 site-packages/torch/_guards.py:358] [0/0] File "/opt/conda/lib/python3.11/site-packages/torch/fx/experimental/symbolic_shapes.py", line 5360, in produce_guards_verbose
E0215 18:23:11.001000 11289 site-packages/torch/_guards.py:358] [0/0] raise ConstraintViolationError(
E0215 18:23:11.001000 11289 site-packages/torch/_guards.py:358] [0/0] torch.fx.experimental.symbolic_shapes.ConstraintViolationError: Constraints violated (L['video'].size()[1])! For more information, run with TORCH_LOGS="+dynamic".
E0215 18:23:11.001000 11289 site-packages/torch/_guards.py:358] [0/0] - Not all values of RelaxedUnspecConstraint(L['video'].size()[1]) are valid because L['video'].size()[1] was inferred to be a constant (250).
E0215 18:23:11.005000 11289 site-packages/torch/_guards.py:360] [0/0] Created at:
E0215 18:23:11.005000 11289 site-packages/torch/_guards.py:360] [0/0] File "/opt/conda/lib/python3.11/site-packages/torch/_dynamo/convert_frame.py", line 679, in transform
E0215 18:23:11.005000 11289 site-packages/torch/_guards.py:360] [0/0] tracer = InstructionTranslator(
E0215 18:23:11.005000 11289 site-packages/torch/_guards.py:360] [0/0] File "/opt/conda/lib/python3.11/site-packages/torch/_dynamo/symbolic_convert.py", line 2863, in __init__
E0215 18:23:11.005000 11289 site-packages/torch/_guards.py:360] [0/0] output=OutputGraph(
E0215 18:23:11.005000 11289 site-packages/torch/_guards.py:360] [0/0] File "/opt/conda/lib/python3.11/site-packages/torch/_dynamo/output_graph.py", line 356, in __init__
E0215 18:23:11.005000 11289 site-packages/torch/_guards.py:360] [0/0] self.init_ambient_guards()
E0215 18:23:11.005000 11289 site-packages/torch/_guards.py:360] [0/0] File "/opt/conda/lib/python3.11/site-packages/torch/_dynamo/output_graph.py", line 505, in init_ambient_guards
E0215 18:23:11.005000 11289 site-packages/torch/_guards.py:360] [0/0] self.guards.add(ShapeEnvSource().make_guard(GuardBuilder.SHAPE_ENV))
Traceback (most recent call last):
File "/opt/conda/lib/python3.11/site-packages/torch/export/_trace.py", line 694, in _export_to_torch_ir
gm_torch_level, _ = torch._dynamo.export(
^^^^^^^^^^^^^^^^^^^^^
File "/opt/conda/lib/python3.11/site-packages/torch/_dynamo/eval_frame.py", line 1647, in inner
raise constraint_violation_error
File "/opt/conda/lib/python3.11/site-packages/torch/_dynamo/eval_frame.py", line 1602, in inner
result_traced = opt_f(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^
File "/opt/conda/lib/python3.11/site-packages/torch/nn/modules/module.py", line 1751, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/conda/lib/python3.11/site-packages/torch/nn/modules/module.py", line 1762, in _call_impl
return forward_call(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/conda/lib/python3.11/site-packages/torch/_dynamo/eval_frame.py", line 585, in _fn
return fn(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^
File "/opt/conda/lib/python3.11/site-packages/torch/nn/modules/module.py", line 1751, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/conda/lib/python3.11/site-packages/torch/nn/modules/module.py", line 1762, in _call_impl
return forward_call(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/conda/lib/python3.11/site-packages/torch/_dynamo/convert_frame.py", line 1392, in __call__
return self._torchdynamo_orig_callable(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/conda/lib/python3.11/site-packages/torch/_dynamo/convert_frame.py", line 584, in __call__
return _compile(
^^^^^^^^^
File "/opt/conda/lib/python3.11/site-packages/torch/_dynamo/convert_frame.py", line 1020, in _compile
guarded_code = compile_inner(code, one_graph, hooks, transform)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/conda/lib/python3.11/site-packages/torch/_utils_internal.py", line 95, in wrapper_function
return function(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/conda/lib/python3.11/site-packages/torch/_dynamo/convert_frame.py", line 745, in compile_inner
return _compile_inner(code, one_graph, hooks, transform)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/conda/lib/python3.11/site-packages/torch/_dynamo/convert_frame.py", line 881, in _compile_inner
check_fn = CheckFunctionManager(
^^^^^^^^^^^^^^^^^^^^^
File "/opt/conda/lib/python3.11/site-packages/torch/_dynamo/guards.py", line 2463, in __init__
guard.create(builder)
File "/opt/conda/lib/python3.11/site-packages/torch/_guards.py", line 356, in create
return self.create_fn(builder, self)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/conda/lib/python3.11/site-packages/torch/_dynamo/guards.py", line 1938, in SHAPE_ENV
python_code_parts, verbose_code_parts = _get_code_parts(
^^^^^^^^^^^^^^^^
File "/opt/conda/lib/python3.11/site-packages/torch/_dynamo/guards.py", line 1921, in _get_code_parts
return output_graph.shape_env.produce_guards_verbose(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/conda/lib/python3.11/site-packages/torch/fx/experimental/symbolic_shapes.py", line 5360, in produce_guards_verbose
raise ConstraintViolationError(
torch.fx.experimental.symbolic_shapes.ConstraintViolationError: Constraints violated (L['video'].size()[1])! For more information, run with TORCH_LOGS="+dynamic".
- Not all values of RelaxedUnspecConstraint(L['video'].size()[1]) are valid because L['video'].size()[1] was inferred to be a constant (250).
```
### Versions
nightly
cc @chauhang @penguinwu @ezyang @bobrenjc93
| true
|
2,855,426,096
|
How to trigger several independent communications simultaneously?
|
Ind1x1
|
open
|
[
"oncall: distributed",
"triaged"
] | 0
|
NONE
|
For example, when training with 4 GPUs, I divide the GPUs into pairs and create two communication groups: group1 = dist.new_group([0, 1]) and group2 = dist.new_group([2, 3]). When I run independent dist.all_gather operations in both communication groups simultaneously, I get the error below. How should this be implemented correctly? A minimal sketch of what I am trying is included after the log.
```
File "/home/yeleyi/anaconda3/envs/torch/lib/python3.10/site-packages/deepspeed/comm/torch.py", line 209, in all_gather
return torch.distributed.all_gather(tensor_list=tensor_list, tensor=tensor, group=group, async_op=async_op)
File "/home/yeleyi/anaconda3/envs/torch/lib/python3.10/site-packages/torch/distributed/c10d_logger.py", line 72, in wrapper
return func(*args, **kwargs)
File "/home/yeleyi/anaconda3/envs/torch/lib/python3.10/site-packages/torch/distributed/distributed_c10d.py", line 2617, in all_gather
work = group.allgather([tensor_list], [tensor])
torch.distributed.DistBackendError: NCCL error in: ../torch/csrc/distributed/c10d/ProcessGroupNCCL.cpp:1691, unhandled system error (run with NCCL_DEBUG=INFO for details), NCCL version 2.19.3
ncclSystemError: System call (e.g. socket, malloc) or external library call failed or device error.
Last error:
socketStartConnect: Connect to 192.168.1.91<48217> failed : Software caused connection abort
node06:1913795:1914481 [2] NCCL INFO Setting affinity for GPU 2 to 0fffff,ff000000,0fffffff
node06:1913796:1914482 [3] NCCL INFO Setting affinity for GPU 3 to 0fffff,ff000000,0fffffff
node06:1913795:1914481 [2] NCCL INFO Channel 00/04 : 0 1
node06:1913795:1914481 [2] NCCL INFO Channel 01/04 : 0 1
node06:1913795:1914481 [2] NCCL INFO Channel 02/04 : 0 1
node06:1913795:1914481 [2] NCCL INFO Channel 03/04 : 0 1
node06:1913795:1914481 [2] NCCL INFO Trees [0] 1/-1/-1->0->-1 [1] -1/-1/-1->0->1 [2] 1/-1/-1->0->-1 [3] -1/-1/-1->0->1
node06:1913795:1914481 [2] NCCL INFO P2P Chunksize set to 131072
node06:1913796:1914482 [3] NCCL INFO Trees [0] -1/-1/-1->1->0 [1] 0/-1/-1->1->-1 [2] -1/-1/-1->1->0 [3] 0/-1/-1->1->-1
node06:1913796:1914482 [3] NCCL INFO P2P Chunksize set to 131072
node06:1913795:1914481 [2] NCCL INFO Channel 00/0 : 0[2] -> 1[3] via P2P/CUMEM
node06:1913796:1914482 [3] NCCL INFO Channel 00/0 : 1[3] -> 0[2] via P2P/CUMEM
node06:1913795:1914481 [2] NCCL INFO Channel 01/0 : 0[2] -> 1[3] via P2P/CUMEM
node06:1913796:1914482 [3] NCCL INFO Channel 01/0 : 1[3] -> 0[2] via P2P/CUMEM
node06:1913795:1914481 [2] NCCL INFO Channel 02/0 : 0[2] -> 1[3] via P2P/CUMEM
node06:1913796:1914482 [3] NCCL INFO Channel 02/0 : 1[3] -> 0[2] via P2P/CUMEM
node06:1913795:1914481 [2] NCCL INFO Channel 03/0 : 0[2] -> 1[3] via P2P/CUMEM
node06:1913796:1914482 [3] NCCL INFO Channel 03/0 : 1[3] -> 0[2] via P2P/CUMEM
node06:1913796:1914482 [3] NCCL INFO Connected all rings
node06:1913796:1914482 [3] NCCL INFO Connected all trees
node06:1913796:1914482 [3] NCCL INFO threadThresholds 8/8/64 | 16/8/64 | 512 | 512
node06:1913795:1914481 [2] NCCL INFO Connected all rings
node06:1913796:1914482 [3] NCCL INFO 4 coll channels, 0 nvls channels, 4 p2p channels, 2 p2p channels per peer
node06:1913795:1914481 [2] NCCL INFO Connected all trees
node06:1913795:1914481 [2] NCCL INFO threadThresholds 8/8/64 | 16/8/64 | 512 | 512
node06:1913795:1914481 [2] NCCL INFO 4 coll channels, 0 nvls channels, 4 p2p channels, 2 p2p channels per peer
node06:1913795:1914481 [2] NCCL INFO comm 0x1a9590b0 rank 0 nranks 2 cudaDev 2 nvmlDev 2 busId 6c000 commId 0xdd736563a6f28c07 - Init COMPLETE
node06:1913796:1914482 [3] NCCL INFO comm 0x1931a220 rank 1 nranks 2 cudaDev 3 nvmlDev 3 busId 6d000 commId 0xdd736563a6f28c07 - Init COMPLETE
```
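For reference, here is a minimal sketch of the pattern I am trying to implement (launched with `torchrun --nproc_per_node=4`; the rank-to-group assignment is illustrative):
```python
import torch
import torch.distributed as dist

dist.init_process_group("nccl")
rank = dist.get_rank()
torch.cuda.set_device(rank)

# every rank creates both groups, in the same order
group1 = dist.new_group([0, 1])
group2 = dist.new_group([2, 3])
my_group = group1 if rank in (0, 1) else group2

x = torch.full((4,), float(rank), device="cuda")
gathered = [torch.empty_like(x) for _ in range(2)]
dist.all_gather(gathered, x, group=my_group)  # each pair gathers independently
print(rank, [t[0].item() for t in gathered])

dist.destroy_process_group()
```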
cc @H-Huang @awgu @kwen2501 @wanchaol @fegin @fduwjj @wz337 @wconstab @d4l3k @c-p-i-o
| true
|
2,855,311,984
|
Add the memory and dispatch to the logging module.
|
jokercw147
|
open
|
[
"triaged",
"open source",
"Stale"
] | 7
|
NONE
|
We want to print logs for memory and dispatch separately; therefore, this PR adds memory and dispatch log modules.
| true
|
2,855,306,692
|
`F.interpolate()` + `torch.compile(dynamic=True)` produces wrong shape
|
gau-nernst
|
closed
|
[
"triaged",
"oncall: pt2",
"module: dynamic shapes"
] | 0
|
NONE
|
### 🐛 Describe the bug
```python
import torch
import torch.nn.functional as F
@torch.compile(dynamic=True)
def f(x):
return F.interpolate(x, scale_factor=1 / 300, mode="linear")
f(torch.randn(1, 8, 396 * 300)).shape # torch.Size([1, 8, 395]) -> wrong shape, should be (1, 8, 396)
```
- This does not happen for static shape compile.
- Replacing 300 with 100 does not produce the issue.
- `mode="nearest"` also returns the wrong shape.
cc @chauhang @penguinwu @ezyang @bobrenjc93, you might be interested. I discovered this bug while trying to compile Kokoro with dynamic shapes.
A workaround is to explicitly calculate output shape
```python
@torch.compile(dynamic=True)
def f(x):
return F.interpolate(x, size=(x.shape[-1] // 300,), mode="linear")
```
### Error logs
_No response_
### Versions
2.7.0.dev20250214+cu126
| true
|
2,855,300,233
|
add PrivateUse1 backend in fsdp collectives
|
zqwenn
|
closed
|
[
"oncall: distributed",
"triaged",
"open source",
"Merged",
"ciflow/trunk",
"release notes: distributed (fsdp)"
] | 11
|
CONTRIBUTOR
|
add PrivateUse1 backend in fsdp collectives
cc @H-Huang @awgu @kwen2501 @wanchaol @fegin @fduwjj @wz337 @wconstab @d4l3k @c-p-i-o
| true
|
2,855,274,524
|
Unable to export to ONNX | The serialized model is larger than the 2GiB limit imposed by the protobuf library.
|
NSTiwari
|
closed
|
[
"module: onnx",
"triaged"
] | 5
|
NONE
|
### 🐛 Describe the bug
I'm trying to convert and export the PaliGemma 2 model to ONNX using a custom script; however, it fails with the following error:
RuntimeError: The serialized model is larger than the 2GiB limit imposed by the protobuf library. Therefore the output file must be a file path, so that the ONNX external data can be written to the same directory. Please specify the output file name.
I've tried to override `onnx_shape_inference` by adding `GLOBALS.onnx_shape_inference = False` but that didn't help either.

I've seen a similar [issue](https://github.com/pytorch/pytorch/issues/146591) that was resolved by upgrading to PyTorch 2.6. However, upgrading causes compatibility issues with the Transformers library.
Here are the libraries installed:
```
!pip install -q --upgrade git+https://github.com/huggingface/transformers.git
!pip install -q datasets lightning
!pip install -q peft accelerate bitsandbytes
!pip install -q --upgrade wandb
!pip install Pillow
!pip install tensorboardX
!pip install optimum[exporters]
!pip install onnxslim
```
Below is the Python code.
```
import os
import torch
import torch.nn as nn
from transformers import (
AutoProcessor,
PaliGemmaForConditionalGeneration,
DynamicCache,
)
model_id="google/paligemma2-3b-pt-224"
# model_id="google/paligemma2-3b-ft-docci-448"
# model_id="google/paligemma2-3b-pt-448"
# model_id="google/paligemma2-3b-pt-896"
def new_len(self: torch.Tensor):
return self.shape[0]
torch.Tensor.__len__ = new_len
class VisionEncoder(nn.Module):
def __init__(self, paligemma_model):
super().__init__()
self.config = paligemma_model.config
self.vision_tower = paligemma_model.vision_tower
self.multi_modal_projector = paligemma_model.multi_modal_projector
def forward(self, pixel_values: torch.FloatTensor):
"""
Obtains image last hidden states from the vision tower and apply multimodal projection.
Args:
pixel_values (`torch.FloatTensor]` of shape `(batch_size, channels, height, width)`)
The tensors corresponding to the input images.
Returns:
image_features (`torch.Tensor`): Image feature tensor of shape `(num_images, image_length, embed_dim)`).
"""
image_outputs = self.vision_tower(pixel_values)
selected_image_feature = image_outputs.last_hidden_state
image_features = self.multi_modal_projector(selected_image_feature)
image_features = image_features / (self.config.text_config.hidden_size**0.5)
return image_features
class PatchedPaliGemmaForConditionalGeneration(PaliGemmaForConditionalGeneration):
def forward(self, *args):
inputs_embeds, position_ids, *past_key_values_args = args
config = model.config.text_config
# Convert past_key_values list to DynamicCache
if len(past_key_values_args) == 0:
past_key_values = None
else:
past_key_values = DynamicCache(config.num_hidden_layers)
for i in range(config.num_hidden_layers):
key = past_key_values_args.pop(0)
value = past_key_values_args.pop(0)
past_key_values.update(key_states=key, value_states=value, layer_idx=i)
batch_size = inputs_embeds.shape[0]
o = self.language_model.forward(
inputs_embeds=inputs_embeds,
# Create a 4D attention mask of all zeros (attend to everything)
attention_mask=torch.zeros(
batch_size,
1, # num_attention_heads (1 -> expand to num_attention_heads)
1, # sequence_length (1 -> expand to sequence_length)
1, # total_sequence_length (1 -> expand to total_sequence_length)
dtype=torch.float32,
),
position_ids=position_ids,
past_key_values=past_key_values,
)
flattened_past_key_values_outputs = {
"logits": o.logits,
}
output_past_key_values: DynamicCache = o.past_key_values
for i, (key, value) in enumerate(
zip(output_past_key_values.key_cache, output_past_key_values.value_cache)
):
flattened_past_key_values_outputs[f"present.{i}.key"] = key
flattened_past_key_values_outputs[f"present.{i}.value"] = value
return flattened_past_key_values_outputs
# Constants
OUTPUT_FOLDER = os.path.join("output", model_id)
TEXT_MODEL_NAME = "decoder_model_merged.onnx"
VISION_MODEL_NAME = "vision_encoder.onnx"
EMBED_MODEL_NAME = "embed_tokens.onnx"
TEMP_MODEL_OUTPUT_FOLDER = os.path.join(OUTPUT_FOLDER, "temp")
FINAL_MODEL_OUTPUT_FOLDER = os.path.join(OUTPUT_FOLDER, "onnx")
# Load model and processor
model = PatchedPaliGemmaForConditionalGeneration.from_pretrained(
model_id,
).eval()
vision_model = VisionEncoder(model)
embed_layer = model.language_model.model.embed_tokens
processor = AutoProcessor.from_pretrained(model_id)
# Save model configs and processor
model.config.save_pretrained(OUTPUT_FOLDER)
model.generation_config.save_pretrained(OUTPUT_FOLDER)
processor.save_pretrained(OUTPUT_FOLDER)
os.makedirs(TEMP_MODEL_OUTPUT_FOLDER, exist_ok=True)
# Configuration values
## Text model
text_config = model.config.text_config
num_attention_heads = text_config.num_attention_heads
num_key_value_heads = text_config.num_key_value_heads
head_dim = text_config.head_dim
num_layers = text_config.num_hidden_layers
hidden_size = text_config.hidden_size
# Dummy input sizes
batch_size = 2
sequence_length = 32
past_sequence_length = 8
## Text inputs
dummy_past_key_values_kwargs = {
f"past_key_values.{i}.{key}": torch.zeros(
batch_size,
num_key_value_heads,
past_sequence_length,
head_dim,
dtype=torch.float32,
)
for i in range(num_layers)
for key in ["key", "value"]
}
inputs_embeds = torch.randn(
(batch_size, sequence_length, hidden_size),
)
total_sequence_length = sequence_length + past_sequence_length
position_ids = torch.arange(1, sequence_length + 1, dtype=torch.int64).expand(batch_size, sequence_length)
text_inputs = dict(
inputs_embeds=inputs_embeds,
position_ids=position_ids,
**dummy_past_key_values_kwargs,
)
text_inputs_positional = tuple(text_inputs.values())
text_outputs = model.forward(*text_inputs_positional) # Test forward pass
## Vision inputs
size = processor.image_processor.size
w, h = size['width'], size['height']
pixel_values = torch.randn(2, 3, h, w, requires_grad=True)
vision_inputs = dict(pixel_values=pixel_values)
vision_inputs_positional = tuple(vision_inputs.values())
vision_outputs = vision_model.forward(*vision_inputs_positional) # Test forward pass
# ONNX Exports
from torch.onnx._globals import GLOBALS
GLOBALS.onnx_shape_inference = False # Bug in pytorch
## Text model
TEXT_MODEL_OUTPUT_PATH=os.path.join(TEMP_MODEL_OUTPUT_FOLDER, TEXT_MODEL_NAME)
torch.onnx.export(
model,
args=text_inputs_positional,
f=TEXT_MODEL_OUTPUT_PATH,
export_params=True,
opset_version=14,
do_constant_folding=True,
input_names=list(text_inputs.keys()),
output_names=["logits"]
+ [f"present.{i}.{key}" for i in range(num_layers) for key in ["key", "value"]],
dynamic_axes={
"inputs_embeds": {0: "batch_size", 1: "sequence_length"},
"position_ids": {0: "batch_size", 1: "sequence_length"},
**{
f"past_key_values.{i}.{key}": {0: "batch_size", 2: "past_sequence_length"}
for i in range(num_layers)
for key in ["key", "value"]
},
"logits": {0: "batch_size", 1: "sequence_length"},
**{
f"present.{i}.{key}": {0: "batch_size", 2: "total_sequence_length"}
for i in range(num_layers)
for key in ["key", "value"]
},
},
)
## Vision model
VISION_MODEL_OUTPUT_PATH = os.path.join(TEMP_MODEL_OUTPUT_FOLDER, VISION_MODEL_NAME)
torch.onnx.export(
vision_model,
args=vision_inputs_positional,
f=VISION_MODEL_OUTPUT_PATH,
export_params=True,
opset_version=14,
do_constant_folding=True,
input_names=['pixel_values'],
output_names=['image_features'],
dynamic_axes={
'pixel_values': {0: 'batch_size'},
'image_features': {0: 'batch_size'}
},
)
input_ids = torch.randint(0, embed_layer.num_embeddings, (batch_size, sequence_length))
## Embedding model
EMBED_MODEL_OUTPUT_PATH = os.path.join(TEMP_MODEL_OUTPUT_FOLDER, EMBED_MODEL_NAME)
torch.onnx.export(
embed_layer,
args=(input_ids,),
f=EMBED_MODEL_OUTPUT_PATH,
export_params=True,
opset_version=14,
do_constant_folding=True,
input_names=['input_ids'],
output_names=['inputs_embeds'],
dynamic_axes={
'input_ids': {0: 'batch_size', 1: 'sequence_length'},
'inputs_embeds': {0: 'batch_size', 1: 'sequence_length'}
},
)
# Post-processing
import onnx
import onnxslim
from optimum.onnx.graph_transformations import check_and_save_model
os.makedirs(FINAL_MODEL_OUTPUT_FOLDER, exist_ok=True)
for name in (TEXT_MODEL_NAME, VISION_MODEL_NAME, EMBED_MODEL_NAME):
temp_model_path = os.path.join(TEMP_MODEL_OUTPUT_FOLDER, name)
onnx.shape_inference.infer_shapes_path(temp_model_path, check_type=True, strict_mode=True)
## Attempt to optimize the model with onnxslim
try:
onnx_model = onnxslim.slim(temp_model_path)
except Exception as e:
print(f"Failed to slim {temp_model_path}: {e}")
onnx_model = onnx.load(temp_model_path)
## Save model
final_model_path = os.path.join(FINAL_MODEL_OUTPUT_FOLDER, name)
check_and_save_model(onnx_model, final_model_path)
# Minify tokenizer.json
import json
tokenizer_path = os.path.join(OUTPUT_FOLDER, "tokenizer.json")
with open(tokenizer_path, "r") as f:
tokenizer = json.load(f)
with open(tokenizer_path, "w") as f:
json.dump(tokenizer, f) # No need for indenting
# Add head_dim and num_image_tokens to config.json
config_path = os.path.join(OUTPUT_FOLDER, "config.json")
with open(config_path, "r") as f:
config = json.load(f)
config["text_config"]["head_dim"] = head_dim
config["num_image_tokens"] = config["text_config"]["num_image_tokens"]
with open(config_path, "w") as f:
json.dump(config, f, indent=2)
## Cleanup
import shutil
shutil.rmtree(TEMP_MODEL_OUTPUT_FOLDER)
```
### Versions
PyTorch version: 2.5.1+cu124
Is debug build: False
CUDA used to build PyTorch: 12.4
ROCM used to build PyTorch: N/A
OS: Ubuntu 22.04.4 LTS (x86_64)
GCC version: (Ubuntu 11.4.0-1ubuntu1~22.04) 11.4.0
Clang version: 14.0.0-1ubuntu1.1
CMake version: version 3.31.4
Libc version: glibc-2.35
Python version: 3.11.11 (main, Dec 4 2024, 08:55:07) [GCC 11.4.0] (64-bit runtime)
Python platform: Linux-6.1.85+-x86_64-with-glibc2.35
Is CUDA available: True
CUDA runtime version: 12.5.82
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration: GPU 0: NVIDIA A100-SXM4-40GB
Nvidia driver version: 550.54.15
cuDNN version: Probably one of the following:
/usr/lib/x86_64-linux-gnu/libcudnn.so.9.2.1
/usr/lib/x86_64-linux-gnu/libcudnn_adv.so.9.2.1
/usr/lib/x86_64-linux-gnu/libcudnn_cnn.so.9.2.1
/usr/lib/x86_64-linux-gnu/libcudnn_engines_precompiled.so.9.2.1
/usr/lib/x86_64-linux-gnu/libcudnn_engines_runtime_compiled.so.9.2.1
/usr/lib/x86_64-linux-gnu/libcudnn_graph.so.9.2.1
/usr/lib/x86_64-linux-gnu/libcudnn_heuristic.so.9.2.1
/usr/lib/x86_64-linux-gnu/libcudnn_ops.so.9.2.1
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 46 bits physical, 48 bits virtual
Byte Order: Little Endian
CPU(s): 12
On-line CPU(s) list: 0-11
Vendor ID: GenuineIntel
Model name: Intel(R) Xeon(R) CPU @ 2.20GHz
CPU family: 6
Model: 85
Thread(s) per core: 2
Core(s) per socket: 6
Socket(s): 1
Stepping: 7
BogoMIPS: 4400.22
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ss ht syscall nx pdpe1gb rdtscp lm constant_tsc rep_good nopl xtopology nonstop_tsc cpuid tsc_known_freq pni pclmulqdq ssse3 fma cx16 pcid sse4_1 sse4_2 x2apic movbe popcnt aes xsave avx f16c rdrand hypervisor lahf_lm abm 3dnowprefetch invpcid_single ssbd ibrs ibpb stibp ibrs_enhanced fsgsbase tsc_adjust bmi1 hle avx2 smep bmi2 erms invpcid rtm mpx avx512f avx512dq rdseed adx smap clflushopt clwb avx512cd avx512bw avx512vl xsaveopt xsavec xgetbv1 xsaves arat avx512_vnni md_clear arch_capabilities
Hypervisor vendor: KVM
Virtualization type: full
L1d cache: 192 KiB (6 instances)
L1i cache: 192 KiB (6 instances)
L2 cache: 6 MiB (6 instances)
L3 cache: 38.5 MiB (1 instance)
NUMA node(s): 1
NUMA node0 CPU(s): 0-11
Vulnerability Gather data sampling: Not affected
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Vulnerable
Vulnerability Reg file data sampling: Not affected
Vulnerability Retbleed: Vulnerable
Vulnerability Spec rstack overflow: Not affected
Vulnerability Spec store bypass: Vulnerable
Vulnerability Spectre v1: Vulnerable: __user pointer sanitization and usercopy barriers only; no swapgs barriers
Vulnerability Spectre v2: Vulnerable; IBPB: disabled; STIBP: disabled; PBRSB-eIBRS: Vulnerable; BHI: Vulnerable (Syscall hardening enabled)
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Vulnerable
Versions of relevant libraries:
[pip3] numpy==1.26.4
[pip3] nvidia-cublas-cu12==12.4.5.8
[pip3] nvidia-cuda-cupti-cu12==12.4.127
[pip3] nvidia-cuda-nvrtc-cu12==12.4.127
[pip3] nvidia-cuda-runtime-cu12==12.4.127
[pip3] nvidia-cudnn-cu12==9.1.0.70
[pip3] nvidia-cufft-cu12==11.2.1.3
[pip3] nvidia-curand-cu12==10.3.5.147
[pip3] nvidia-cusolver-cu12==11.6.1.9
[pip3] nvidia-cusparse-cu12==12.3.1.170
[pip3] nvidia-nccl-cu12==2.21.5
[pip3] nvidia-nvjitlink-cu12==12.4.127
[pip3] nvidia-nvtx-cu12==12.4.127
[pip3] nvtx==0.2.10
[pip3] onnx==1.17.0
[pip3] onnxruntime==1.20.1
[pip3] onnxslim==0.1.48
[pip3] optree==0.14.0
[pip3] pynvjitlink-cu12==0.5.0
[pip3] pytorch-lightning==2.5.0.post0
[pip3] torch==2.5.1+cu124
[pip3] torchaudio==2.5.1+cu124
[pip3] torchmetrics==1.6.1
[pip3] torchsummary==1.5.1
[pip3] torchvision==0.20.1+cu124
[pip3] triton==3.1.0
[conda] Could not collect
| true
|
2,855,202,831
|
t2
|
henrylhtsang
|
closed
|
[
"topic: not user facing",
"module: inductor",
"ciflow/inductor"
] | 2
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #147258
* #147257
Summary:
Test Plan:
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
| true
|
2,855,202,489
|
t1
|
henrylhtsang
|
closed
|
[
"topic: not user facing",
"module: inductor",
"ciflow/inductor"
] | 2
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #147258
* __->__ #147257
Summary:
Test Plan:
| true
|
2,855,186,400
|
[inductor] [dtype checking] `nn.LayerNorm` loses the check for `dtype=complex`
|
shaoyuyoung
|
open
|
[
"triaged",
"oncall: pt2",
"module: aotdispatch",
"module: pt2-dispatcher"
] | 2
|
CONTRIBUTOR
|
### 🐛 Describe the bug
**symptom**: When using `LayerNorm` with `dtype=complex`, eager throws errors on both CPP and CUDA, but inductor passes the check for them.
**device backend**: both on CPP and triton
**exposed area**: `complex32`, `complex64`, `complex128`
**repro**
```python
import torch
import torch.nn as nn
import torch.nn.functional as F
from torch._inductor import config
config.fallback_random = True
torch.set_grad_enabled(False)
class Model(nn.Module):
def __init__(self):
super(Model, self).__init__()
self.layernorm = nn.LayerNorm([10])
def forward(self, x):
x = self.layernorm(x)
return x
model = Model()
x = torch.randn(1, 1, 10, dtype=torch.complex64)
inputs = [x]
def run_test(model, inputs, backend):
torch.manual_seed(0)
if backend != "eager":
model = torch.compile(model, backend=backend)
try:
c_output = model(*inputs)
print(c_output)
except Exception as e:
print(e)
run_test(model, inputs, 'eager')
run_test(model, inputs, 'inductor')
```
### Error logs
CPU eager
```
mixed dtype (CPU): all inputs must share same datatype.
```
cuda eager
```
"LayerNormKernelImpl" not implemented for 'ComplexFloat'
```
inductor
```
tensor([[[-0.0324+0.3427j, 0.9617-0.2186j, -0.9683-0.5282j, 0.6549+0.2520j,
-0.8931+0.3126j, 0.8974-1.0744j, -0.9643+0.1278j, -1.0817-0.4584j,
0.9849+0.8212j, 0.4409+0.4234j]]])
```
### Versions
PyTorch version: 2.7.0.dev20250211+cu124
OS: Ubuntu 20.04.6 LTS (x86_64)
CPU: Intel(R) Xeon(R) Gold 6248 CPU @ 2.50GHz
GPU: Tesla V100-SXM2-32GB
<details>
<summary>click here for detailed env</summary>
```
PyTorch version: 2.7.0.dev20250211+cu124
Is debug build: False
CUDA used to build PyTorch: 12.4
ROCM used to build PyTorch: N/A
OS: Ubuntu 20.04.6 LTS (x86_64)
GCC version: (Ubuntu 9.4.0-1ubuntu1~20.04.2) 9.4.0
Clang version: 16.0.1
CMake version: version 3.26.0
Libc version: glibc-2.31
Python version: 3.12.7 | packaged by Anaconda, Inc. | (main, Oct 4 2024, 13:27:36) [GCC 11.2.0] (64-bit runtime)
Python platform: Linux-5.4.0-205-generic-x86_64-with-glibc2.31
Is CUDA available: True
CUDA runtime version: 12.6.68
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration:
GPU 0: Tesla V100-SXM2-32GB
GPU 1: Tesla V100-SXM2-32GB
GPU 2: Tesla V100-SXM2-32GB
GPU 3: Tesla V100-SXM2-32GB
Nvidia driver version: Could not collect
cuDNN version: Probably one of the following:
/usr/lib/x86_64-linux-gnu/libcudnn.so.9.6.0
/usr/lib/x86_64-linux-gnu/libcudnn_adv.so.9.6.0
/usr/lib/x86_64-linux-gnu/libcudnn_cnn.so.9.6.0
/usr/lib/x86_64-linux-gnu/libcudnn_engines_precompiled.so.9.6.0
/usr/lib/x86_64-linux-gnu/libcudnn_engines_runtime_compiled.so.9.6.0
/usr/lib/x86_64-linux-gnu/libcudnn_graph.so.9.6.0
/usr/lib/x86_64-linux-gnu/libcudnn_heuristic.so.9.6.0
/usr/lib/x86_64-linux-gnu/libcudnn_ops.so.9.6.0
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Byte Order: Little Endian
Address sizes: 40 bits physical, 48 bits virtual
CPU(s): 20
On-line CPU(s) list: 0-19
Thread(s) per core: 1
Core(s) per socket: 20
Socket(s): 1
NUMA node(s): 1
Vendor ID: GenuineIntel
CPU family: 6
Model: 85
Model name: Intel(R) Xeon(R) Gold 6248 CPU @ 2.50GHz
Stepping: 7
CPU MHz: 2499.994
BogoMIPS: 4999.98
Hypervisor vendor: KVM
Virtualization type: full
L1d cache: 640 KiB
L1i cache: 640 KiB
L2 cache: 80 MiB
L3 cache: 16 MiB
NUMA node0 CPU(s): 0-19
Vulnerability Gather data sampling: Unknown: Dependent on hypervisor status
Vulnerability Itlb multihit: KVM: Vulnerable
Vulnerability L1tf: Mitigation; PTE Inversion
Vulnerability Mds: Vulnerable: Clear CPU buffers attempted, no microcode; SMT Host state unknown
Vulnerability Meltdown: Mitigation; PTI
Vulnerability Mmio stale data: Vulnerable: Clear CPU buffers attempted, no microcode; SMT Host state unknown
Vulnerability Retbleed: Mitigation; IBRS
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl and seccomp
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; IBRS; IBPB conditional; STIBP disabled; RSB filling; PBRSB-eIBRS Not affected; BHI SW loop, KVM SW loop
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Vulnerable: Clear CPU buffers attempted, no microcode; SMT Host state unknown
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ss ht syscall nx pdpe1gb rdtscp lm constant_tsc arch_perfmon rep_good nopl xtopology cpuid tsc_known_freq pni pclmulqdq ssse3 fma cx16 pcid sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand hypervisor lahf_lm abm 3dnowprefetch topoext cpuid_fault invpcid_single pti ssbd ibrs ibpb fsgsbase tsc_adjust bmi1 hle avx2 smep bmi2 erms invpcid rtm mpx avx512f avx512dq rdseed adx smap clflushopt clwb avx512cd avx512bw avx512vl xsaveopt xsavec xgetbv1 xsaves arat umip pku ospke avx512_vnni
Versions of relevant libraries:
[pip3] numpy==1.26.4
[pip3] nvidia-cublas-cu12==12.4.5.8
[pip3] nvidia-cuda-cupti-cu12==12.4.127
[pip3] nvidia-cuda-nvrtc-cu12==12.4.127
[pip3] nvidia-cuda-runtime-cu12==12.4.127
[pip3] nvidia-cudnn-cu12==9.1.0.70
[pip3] nvidia-cufft-cu12==11.2.1.3
[pip3] nvidia-curand-cu12==10.3.5.147
[pip3] nvidia-cusolver-cu12==11.6.1.9
[pip3] nvidia-cusparse-cu12==12.3.1.170
[pip3] nvidia-cusparselt-cu12==0.6.2
[pip3] nvidia-nccl-cu12==2.21.5
[pip3] nvidia-nvjitlink-cu12==12.4.127
[pip3] nvidia-nvtx-cu12==12.4.127
[pip3] onnx==1.17.0
[pip3] onnxruntime==1.20.1
[pip3] onnxscript==0.1.0.dev20241205
[pip3] optree==0.13.1
[pip3] pytorch-triton==3.2.0+git4b3bb1f8
[pip3] torch==2.7.0.dev20250211+cu124
[pip3] torchaudio==2.6.0.dev20250211+cu124
[pip3] torchvision==0.22.0.dev20250211+cu124
[pip3] triton==3.0.0
[conda] numpy 1.26.4 pypi_0 pypi
[conda] nvidia-cublas-cu12 12.4.5.8 pypi_0 pypi
[conda] nvidia-cuda-cupti-cu12 12.4.127 pypi_0 pypi
[conda] nvidia-cuda-nvrtc-cu12 12.4.127 pypi_0 pypi
[conda] nvidia-cuda-runtime-cu12 12.4.127 pypi_0 pypi
[conda] nvidia-cudnn-cu12 9.1.0.70 pypi_0 pypi
[conda] nvidia-cufft-cu12 11.2.1.3 pypi_0 pypi
[conda] nvidia-curand-cu12 10.3.5.147 pypi_0 pypi
[conda] nvidia-cusolver-cu12 11.6.1.9 pypi_0 pypi
[conda] nvidia-cusparse-cu12 12.3.1.170 pypi_0 pypi
[conda] nvidia-cusparselt-cu12 0.6.2 pypi_0 pypi
[conda] nvidia-nccl-cu12 2.21.5 pypi_0 pypi
[conda] nvidia-nvjitlink-cu12 12.4.127 pypi_0 pypi
[conda] nvidia-nvtx-cu12 12.4.127 pypi_0 pypi
[conda] optree 0.13.1 pypi_0 pypi
[conda] pytorch-triton 3.2.0+git4b3bb1f8 pypi_0 pypi
[conda] torch 2.7.0.dev20250211+cu124 pypi_0 pypi
[conda] torchaudio 2.6.0.dev20250211+cu124 pypi_0 pypi
[conda] torchvision 0.22.0.dev20250211+cu124 pypi_0 pypi
[conda] triton 3.0.0 pypi_0 pypi
```
</details>
cc @chauhang @penguinwu @zou3519 @bdhirsh @voznesenskym @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @amjames @aakhundov
| true
|
2,855,153,472
|
[inductor] [cpu] `torch.nn.RReLU()` doesn't respect `fallback_random` flag
|
shaoyuyoung
|
open
|
[
"triaged",
"oncall: pt2",
"module: inductor",
"oncall: cpu inductor"
] | 6
|
CONTRIBUTOR
|
### 🐛 Describe the bug
**symptom**: a backend ablation shows `aot_eager_decomp_partition` is the first backend that produces incorrect results
**codegen backend**: only the CPP backend.
**repro**
```python
import torch
import torch.nn as nn
import torch.nn.functional as F
from torch._inductor import config
config.fallback_random = True
torch.set_grad_enabled(False)
torch.manual_seed(0)
class Model(torch.nn.Module):
def __init__(self):
super(Model, self).__init__()
self.rrelu = torch.nn.RReLU()
def forward(self, x):
x = self.rrelu(x)
return x
model = Model()
x = torch.randn(1, 1, 100, 100)
inputs = [x]
def run_test(model, inputs, device, backend):
torch.manual_seed(0)
model = model.to(device)
inputs = [x.to(device) for x in inputs]
if backend != "eager":
model = torch.compile(model, backend=backend)
torch.manual_seed(0)
output = model(*inputs)
return output
device = 'cpu'
output = run_test(model, inputs, device, 'eager')
c_output = run_test(model, inputs, device, 'aot_eager_decomp_partition')
fp64 = run_test(model.to(torch.float64), [x.to(torch.float64) for x in inputs], device, "eager")
print(torch.allclose(output, c_output, 1e-3, 1e-3))
print(torch.max(torch.abs(output - c_output)))
print(torch._dynamo.utils.same(output, c_output, fp64))
```
### Error logs
CPP
```
False
tensor(0.8208)
E0429 17:53:31.189000 6900 site-packages/torch/_dynamo/utils.py:2939] RMSE (res-fp64): 0.06023, (ref-fp64): 0.00000 and shape=torch.Size([1, 1, 100, 100]). res.dtype: torch.float32, multiplier: 2.000000, tol: 0.000100, use_larger_multiplier_for_smaller_tensor: 0
False
```
Triton
```
True
tensor(1.1921e-07, device='cuda:0')
True
```
### Versions
nightly 20250418
cc @chauhang @penguinwu @voznesenskym @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @amjames @aakhundov
| true
|
2,855,096,094
|
[TorchRec][PT2] disable contextlib in PT2 train pipeline
|
TroyGarden
|
closed
|
[
"oncall: distributed",
"internals",
"fb-exported",
"Reverted",
"ciflow/trunk",
"topic: not user facing",
"ci-no-td"
] | 14
|
CONTRIBUTOR
|
Summary:
X-link: https://github.com/pytorch/torchrec/pull/2730
Pull Request resolved: https://github.com/pytorch/torchrec/pull/2596
# context
* more details in the [post](https://fb.workplace.com/groups/1075192433118967/permalink/1587079018596970/)
* disable contextlib with PT2
Test Plan:
* run command
```
TORCH_SHOW_CPP_STACKTRACES=1 TORCHDYNAMO_EXTENDED_DEBUG_CPP=1 TORCH_LOGS="+dynamo,+graph_code,output_code,dynamic,aot,guards,verbose_guards,recompiles,graph_breaks" TORCH_TRACE=/var/tmp/tt buck2 run fbcode//mode/opt fbcode//aps_models/ads/icvr:icvr_launcher_live -- mode=fmc/local_ig_fm_ultra_mini training.pipeline_type=pt2 data_loader.dataset.table_ds=[2024-12-02] 2>&1 | tee -a output.log
```
* old tlparse
https://manifold.edge.x2p.facebook.net/v0/read/tree/logs/.tmpYYAS3o/index.html?bucketName=tlparse_reports&apiKey=tlparse_reports-key&withPayload=1&timeoutMsec=100
* new tlparse
https://manifold.edge.x2p.facebook.net/v0/read/tree/logs/.tmpUJhCGZ/index.html?bucketName=tlparse_reports&apiKey=tlparse_reports-key&withPayload=1&timeoutMsec=100
Differential Revision: D68480678
cc @H-Huang @awgu @kwen2501 @wanchaol @fegin @fduwjj @wz337 @wconstab @d4l3k @c-p-i-o @ezyang @bhosmer @smessmer @ljk53 @bdhirsh
| true
|
2,855,074,316
|
Fix clang-tidy warnings in torch/jit
|
cyyever
|
open
|
[
"oncall: jit",
"triaged",
"open source",
"NNC",
"release notes: jit"
] | 4
|
COLLABORATOR
|
Fixes #ISSUE_NUMBER
cc @EikanWang @jgong5 @wenzhe-nrv @sanchitintel
| true
|
2,855,074,118
|
utils: Update md5 call to be fips compliant
|
seemethere
|
closed
|
[
"Merged",
"ciflow/trunk",
"release notes: python_frontend",
"topic: not user facing"
] | 8
|
MEMBER
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #147252
Updates the md5 call to be FIPS compliant according to this issue:
* https://github.com/pytorch/pytorch/issues/147236
Not going to add a conditional here because the minimum Python version that we support is already 3.9.
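A small sketch of the resulting call pattern (assuming Python >= 3.9, per the note above; the exact call site changed by this PR is not reproduced here):
```python
import hashlib

# Before: plain md5 is rejected on FIPS-enforcing systems
# digest = hashlib.md5(data).hexdigest()

# After: the digest is declared non-security use, so FIPS mode allows it
def content_hash(data: bytes) -> str:
    return hashlib.md5(data, usedforsecurity=False).hexdigest()

print(content_hash(b"hello"))
```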
Signed-off-by: Eli Uriegas <eliuriegas@meta.com>
| true
|
2,855,063,681
|
[inductor] Simplify grid handling
|
jansel
|
closed
|
[
"topic: not user facing",
"ciflow/mps",
"skip-pr-sanity-checks",
"module: inductor",
"ciflow/inductor",
"ciflow/inductor-rocm"
] | 2
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #147251
Before this PR, calling a triton kernel would look like:
```py
kernel.run(a, b, xnumel, grid=grid(xnumel), stream=stream0)
```
where the `grid=` was passed as a callable (function closure) arg. This PR removes the grid arg:
```py
kernel.run(a, b, xnumel, stream=stream0)
```
instead now the grid computation is included in the kernel launcher, with something like:
```py
def launcher(in_ptr0, out_ptr0, xnumel, stream):
grid_0 = ((xnumel + 1023) >> 10)
grid_1 = 1
grid_2 = 1
runner(grid_0, grid_1, grid_2, stream, function, metadata, None, launch_enter_hook, launch_exit_hook, in_ptr0, out_ptr0, xnumel)
```
This should be faster, since we remove multiple function/dict calls and are able to specialize the grid computation for each `triton.Config`.
It also allows us to unify the handling of grids between the Python and C++ wrapper code. Before this, C++ wrapper code didn't actually support dynamic grid sizes and instead burned in a static grid.
This unification allows this PR to be a net deletion of code.
cc @albanD @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
| true
|
2,855,043,474
|
Hipify: use usedforsecurity=False for MD5
|
JBlitzar
|
closed
|
[
"open source",
"topic: not user facing"
] | 4
|
NONE
|
Fixes #147236
CCing people in the original issue
cc @malfet @seemethere @jeffdaily @sunway513 @jithunnair-amd @pruthvistony @ROCmSupport @dllehr-amd @jataylo @hongxiayang @naromero77amd @Legends0
| true
|
2,855,026,820
|
[Inductor] Fix 3D tiling with permute
|
blaine-rister
|
closed
|
[
"Merged",
"ciflow/trunk",
"topic: not user facing",
"module: inductor",
"ciflow/inductor"
] | 6
|
CONTRIBUTOR
|
This PR adds a test case and tiny fix for 3D tiling. Before this PR, tiling would crash because one of the candidates lacked a `"y"` dimension. Now, when we're calculating 3D tiling candidates, we assume the y size is 1 if it's missing.
The test case implements a 3D permute using block pointers.
```
@triton.jit
def triton_poi_fused_add_0(in_ptr0, out_ptr0, znumel, ynumel, xnumel, ZBLOCK : tl.constexpr, YBLOCK : tl.constexpr, XBLOCK : tl.constexpr):
znumel = 51
ynumel = 51
xnumel = 51
zoffset = tl.program_id(2) * ZBLOCK
zindex = zoffset + tl.arange(0, ZBLOCK)[None, None, :]
zmask = zindex < znumel
yoffset = tl.program_id(1) * YBLOCK
yindex = yoffset + tl.arange(0, YBLOCK)[None, :, None]
ymask = yindex < ynumel
xoffset = tl.program_id(0) * XBLOCK
xindex = xoffset + tl.arange(0, XBLOCK)[:, None, None]
xmask = xindex < xnumel
x2 = xindex
y1 = yindex
z0 = zindex
tmp0 = tl.load(tl.make_block_ptr(in_ptr0, shape=[51, 51, 51], strides=[1, 51, 2601], block_shape=[XBLOCK, YBLOCK, ZBLOCK], order=[2, 1, 0], offsets=[xoffset, yoffset, zoffset]), boundary_check=[0, 1, 2])
tmp1 = tl.load(tl.make_block_ptr(in_ptr0, shape=[51, 51, 51], strides=[51, 1, 2601], block_shape=[XBLOCK, YBLOCK, ZBLOCK], order=[2, 1, 0], offsets=[xoffset, yoffset, zoffset]), boundary_check=[0, 1, 2])
tmp2 = tmp0 + tmp1
tmp3 = tmp0 + tmp0
tmp4 = tmp2 + tmp3
tl.store(tl.make_block_ptr(out_ptr0, shape=[51, 51, 51], strides=[1, 51, 2601], block_shape=[XBLOCK, YBLOCK, ZBLOCK], order=[2, 1, 0], offsets=[xoffset, yoffset, zoffset]), tl.broadcast_to(tmp4, [XBLOCK, YBLOCK, ZBLOCK]).to(tl.float32), boundary_check=[0, 1, 2])
```
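For reference, a hedged eager-level sketch of the kind of pattern that could lower to the kernel above (the exact test body is an assumption, reconstructed from the block-pointer loads/stores shown):
```python
import torch

def f(x):
    # one contiguous read plus one read with the last two dims swapped,
    # mirroring the two block-pointer loads in the kernel above
    return (x + x.transpose(1, 2)) + (x + x)

x = torch.randn(51, 51, 51, device="cuda")
out = torch.compile(f)(x)
torch.testing.assert_close(out, f(x))
```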
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
| true
|
2,855,011,729
|
Move ir_pre_fusion.txt and ir_post_fusion.txt to TORCH_LOGS
|
dulinriley
|
closed
|
[
"Merged",
"ciflow/trunk",
"topic: not user facing",
"module: inductor",
"module: dynamo",
"ciflow/inductor"
] | 17
|
CONTRIBUTOR
|
Fixes #147002
Moves ir_{pre, post}_fusion.txt to be controlled by TORCH_LOGS instead of TORCH_COMPILE_DEBUG.
Updated tests of these logs as well.
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
| true
|
2,855,009,860
|
Remove CAFFE2_USE_EXCEPTION_PTR
|
cyyever
|
closed
|
[
"triaged",
"open source",
"Merged",
"ciflow/trunk",
"topic: not user facing"
] | 8
|
COLLABORATOR
|
The check is for older compilers and is now always true.
| true
|
2,855,007,793
|
dynamo: Don't crash when encountering a object with no __name__
|
c00w
|
closed
|
[
"Merged",
"ciflow/trunk",
"topic: not user facing",
"module: dynamo",
"ciflow/inductor"
] | 3
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #147246
This was triggering on ScriptFunctions. Note that, other than badly implemented C functions, this seems to be almost impossible to trigger, so I wrote a smaller unit test rather than a full repro. Let me know if people feel strongly and want a full reproduction.
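For illustration, a hedged sketch of the defensive lookup this boils down to (the helper name and the exact Dynamo call site are assumptions, not code from this PR):
```python
def safe_name(obj) -> str:
    # Fall back to the type name when the object has no __name__
    # (e.g. some C-implemented callables / ScriptFunction-like objects).
    return getattr(obj, "__name__", type(obj).__name__)

print(safe_name(len))       # "len"
print(safe_name(object()))  # "object"
```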
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @chauhang @amjames
| true
|
2,854,999,369
|
Update lintrunner sympy version to 1.13.3
|
henrylhtsang
|
closed
|
[
"topic: not user facing"
] | 1
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #147245
| true
|
2,854,996,263
|
Add SmallVectorImpl move constructor and other fixes
|
cyyever
|
open
|
[
"triaged",
"open source",
"Stale",
"topic: not user facing"
] | 4
|
COLLABORATOR
|
Fixes #ISSUE_NUMBER
| true
|
2,854,977,439
|
[ROCm] [TunableOp] Track top solutions during tuning process
|
naromero77amd
|
closed
|
[
"module: rocm",
"triaged",
"open source",
"Merged",
"ciflow/trunk",
"topic: not user facing",
"ciflow/rocm"
] | 9
|
COLLABORATOR
|
For each set of GEMM parameters that is evaluated by TunableOp, keep track of the top 5 solutions. Print the top 5 solutions when `PYTORCH_TUNABLEOP_VERBOSE=2`.
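For context, a minimal sketch of how the extra verbosity can be exercised; the env-var driven setup below is the usual TunableOp workflow, while the top-5 printout itself is what this PR adds:
```python
import os
os.environ["PYTORCH_TUNABLEOP_ENABLED"] = "1"   # turn tuning on
os.environ["PYTORCH_TUNABLEOP_VERBOSE"] = "2"   # request the per-GEMM solution listing

import torch

a = torch.randn(1024, 1024, device="cuda", dtype=torch.half)
b = torch.randn(1024, 1024, device="cuda", dtype=torch.half)
c = a @ b  # each new GEMM shape is tuned; the top candidates are printed
```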
cc @jeffdaily @sunway513 @jithunnair-amd @pruthvistony @ROCmSupport @dllehr-amd @jataylo @hongxiayang
| true
|
2,854,965,074
|
[ca] trace saved variable unpacking
|
xmfan
|
closed
|
[
"Merged",
"Reverted",
"module: inductor",
"module: dynamo",
"ciflow/inductor",
"release notes: dynamo",
"module: compiled autograd",
"ci-no-td"
] | 6
|
MEMBER
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #147891
* #147804
* #147796
* __->__ #147242
## Before
Previously, CA would always unpack all saved variables stored in the autograd graph before executing it. This meant that we couldn't capture unpack hooks as part of the CA graph, and they would fire out of order with respect to other backward hooks. For memory-saving APIs built on top of saved tensor hooks, like non-reentrant checkpointing and offloading, we couldn't achieve any savings because all activations would be recomputed/loaded and active at the same time, resulting in a no-op.
## After
We add unpack hooks into the CA graph so that they can be executed progressively. The Python hook and the hook input themselves are wrapped by non-traceable code, so CA polyfills the wrapping as:
```python
# pseudocode
class SavedVariable:
def unpack(self):
if self.hook:
return self.hook(self.packed_data)
else:
return self.packed_data
# This approach won't directly work when we add support for Forward AD or double-backward.
```
When directly executing the CA graph (without torch.compile-ing it) under checkpointing/offloading, the memory profile is expected to stay the same as with the eager autograd engine. If an AOT backward is in the autograd graph, the memory profile is expected to be better than with the eager autograd engine, since we can now delay unpacking of saved activations until the AOT backward's execution.
All tests pass when running the CA graph directly; the remaining issues are in Dynamo.
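As a concrete illustration, a hedged sketch of the user-level pattern this enables; the offloading hooks and the compiled-autograd entry point below are assumptions about typical usage, not code from this PR:
```python
import torch

def pack(t):
    return t.cpu()   # offload the saved activation to host memory

def unpack(t):
    return t.cuda()  # reload it lazily when backward actually needs it

x = torch.randn(2048, 2048, device="cuda", requires_grad=True)

with torch.autograd.graph.saved_tensors_hooks(pack, unpack):
    loss = (x @ x).sum()

# With this change the unpack hook becomes a node in the CA graph, instead of
# being run eagerly for every saved tensor before execution starts.
with torch._dynamo.compiled_autograd.enable(torch.compile):
    loss.backward()
```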
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
| true
|
2,854,956,823
|
[executorch hash update] update the pinned executorch hash
|
pytorchupdatebot
|
closed
|
[
"open source",
"Merged",
"ciflow/trunk",
"topic: not user facing",
"ciflow/inductor"
] | 3
|
COLLABORATOR
|
This PR is auto-generated nightly by [this action](https://github.com/pytorch/pytorch/blob/main/.github/workflows/nightly.yml).
Update the pinned executorch hash.
| true
|
2,854,950,703
|
[symbolic shapes] Add replacement for backed symints
|
angelayi
|
closed
|
[
"Merged",
"ciflow/trunk",
"release notes: fx",
"fx",
"ciflow/inductor"
] | 3
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #147240
* #146939
cc @ezyang @SherlockNoMad @EikanWang @jgong5 @wenzhe-nrv
| true
|
2,854,947,874
|
`DeviceCopy in input program` source hint
|
bhack
|
open
|
[
"triaged",
"oncall: pt2"
] | 2
|
CONTRIBUTOR
|
### 🚀 The feature, motivation and pitch
Can we give a hint about where this is coming from in the code? It would be useful if we want to keep the warning, so that the user has a concrete action to take.
### Alternatives
_No response_
### Additional context
nightly
cc @chauhang @penguinwu
| true
|
2,854,946,005
|
[apf] Fix input adapter
|
angelayi
|
closed
|
[
"fb-exported",
"Merged",
"ciflow/trunk",
"release notes: export"
] | 6
|
CONTRIBUTOR
|
Summary: Add support for inputs that no longer exist in `input_fields` but are not actually used by the original program. In this case, we just give them a dummy input based on the node's metadata.
Test Plan: Verified for S488841
Differential Revision: D69328093
| true
|
2,854,889,308
|
[export] Loosen symint input serialization
|
angelayi
|
closed
|
[
"Merged",
"ciflow/trunk",
"ciflow/inductor",
"release notes: export"
] | 8
|
CONTRIBUTOR
|
Fixes #ISSUE_NUMBER
| true
|
2,854,865,676
|
Enforce FIPS compliance on Pytorch on python 3.9+
|
Legends0
|
closed
|
[
"module: build",
"module: rocm",
"good first issue",
"triaged",
"actionable"
] | 10
|
NONE
|
Since Python 3.9, when FIPS compliance is enforced, `hashlib.md5()` may not be usable without the `usedforsecurity` parameter set to `False` (https://docs.python.org/3/library/hashlib.html).
The hipify utility is not currently able to operate on a FIPS system without an error:
https://github.com/pytorch/pytorch/blob/1224765286343d897ed17a078057c3f9a356e4c4/torch/utils/hipify/hipify_python.py#L684
This line can produce this error message on systems enforcing FIPS modules:
`ValueError: [digital envelope routines] unsupported`
The line should be changed to the following for Python 3.9 and newer:
`hashlib.md5(usedforsecurity=False)`
cc @malfet @seemethere @jeffdaily @sunway513 @jithunnair-amd @pruthvistony @ROCmSupport @dllehr-amd @jataylo @hongxiayang @naromero77amd
| true
|
2,854,862,989
|
logging: close handler after removing it
|
tebartsch
|
open
|
[
"triaged",
"open source",
"Stale",
"topic: not user facing"
] | 7
|
NONE
|
Fixes
```python
import unittest
import os
import tempfile
import torch
import tracemalloc
tracemalloc.start(10)
class Test(unittest.TestCase):
def test(self):
with tempfile.TemporaryDirectory() as temp_dir:
os.environ["TORCH_LOGS_OUT"] = f"{temp_dir}/test.log"
torch._logging._init_logs()
del os.environ["TORCH_LOGS_OUT"]
torch._logging._init_logs()
if __name__ == "__main__":
unittest.main()
```
which currently prints
```
/home/user/venv/lib/python3.12/site-packages/torch/_logging/_internal.py:907: ResourceWarning: unclosed file <_io.TextIOWrapper name='/var/folders/nw/7yk3mnls5tvcvnyvn9x4q5480000gp/T/tmpvhd39_5f/test.log' mode='a' encoding='UTF-8'>
_clear_handlers(log)
Object allocated at (most recent call last):
File "/opt/homebrew/Cellar/python@3.12/3.12.9/Frameworks/Python.framework/Versions/3.12/lib/python3.12/unittest/suite.py", lineno 122
test(result)
File "/opt/homebrew/Cellar/python@3.12/3.12.9/Frameworks/Python.framework/Versions/3.12/lib/python3.12/unittest/case.py", lineno 690
return self.run(*args, **kwds)
File "/opt/homebrew/Cellar/python@3.12/3.12.9/Frameworks/Python.framework/Versions/3.12/lib/python3.12/unittest/case.py", lineno 634
self._callTestMethod(testMethod)
File "/opt/homebrew/Cellar/python@3.12/3.12.9/Frameworks/Python.framework/Versions/3.12/lib/python3.12/unittest/case.py", lineno 589
if method() is not None:
File "/home/user/tmp.py", lineno 13
torch._logging._init_logs()
File "/home/user/venv/lib/python3.12/site-packages/torch/_logging/_internal.py", lineno 963
_setup_handlers(
File "/home/user/venv/lib/python3.12/site-packages/torch/_logging/_internal.py", lineno 874
debug_handler = _track_handler(create_handler_fn())
File "/home/user/venv/lib/python3.12/site-packages/torch/_logging/_internal.py", lineno 964
lambda: logging.FileHandler(log_file_name),
File "/opt/homebrew/Cellar/python@3.12/3.12.9/Frameworks/Python.framework/Versions/3.12/lib/python3.12/logging/__init__.py", lineno 1231
StreamHandler.__init__(self, self._open())
File "/opt/homebrew/Cellar/python@3.12/3.12.9/Frameworks/Python.framework/Versions/3.12/lib/python3.12/logging/__init__.py", lineno 1263
return open_func(self.baseFilename, self.mode,
```
| true
|
2,854,849,568
|
Allow strobelight profiling of a specific frame id, e.g. [27/*]
|
laithsakka
|
open
|
[
"oncall: profiler"
] | 0
|
CONTRIBUTOR
|
title.
cc @robieta @chaekit @guotuofeng @guyang3532 @dzhulgakov @davidberard98 @briancoutinho @sraikund16 @sanrise
| true
|
2,854,840,765
|
[inductor] Don't leak pointers to cpp_wrapper with lru_cache
|
jansel
|
closed
|
[
"Merged",
"ciflow/trunk",
"topic: not user facing",
"module: inductor",
"ciflow/inductor"
] | 3
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #147251
* __->__ #147233
Putting lru_cache on methods will keep pointers to the `self` objects
alive forever and leak memory.
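A minimal sketch of the leak pattern being removed (illustrative only; the real cpp_wrapper methods are not reproduced):
```python
import functools

class CppWrapperLike:
    @functools.lru_cache(maxsize=None)  # the cache holds a strong ref to `self`
    def codegen(self, key):
        return f"{id(self)}:{key}"

w = CppWrapperLike()
w.codegen("kernel0")
del w  # the instance is not collected: the lru_cache entry still references it
```
A common alternative is to key the cache per instance (e.g. a dict stored on the object) so the cached data is collected together with the wrapper.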
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
| true
|
2,854,799,114
|
[Inductor][ROCm][CK] Unhardcode kernel shapes for ck_conv_template codegen
|
AviralGoelAMD
|
closed
|
[
"module: rocm",
"triaged",
"open source",
"topic: not user facing",
"module: inductor"
] | 3
|
CONTRIBUTOR
|
## [Inductor][ROCm][CK] Parameterize `ck_conv_template` Codegen
### Description
Previously, ROCm CK kernel codegen templates were hardcoded with fixed values for convolution parameters:
- `index_t GroupCount`
- `index_t NBatch`
- `index_t NOutChannels`
- `index_t NInChannels`
- `vector<index_t> FilterSize`
- `vector<index_t> InputSize`
- `vector<index_t> ConvolutionStrides`
- `vector<index_t> Dilations`
- `vector<index_t> LeftPads`
- `vector<index_t> RightPads`
This PR updates `ck_conv_template` to accept these parameters dynamically from Inductor. By doing so, we reduce the number of generated templates, improving flexibility and maintainability.
### Testing
- Verified correctness by running the relevant test cases, i.e. test/inductor/test_ck_backend.py
- Ensured the generated kernels reflect the updated parameterization, i.e. the generated templates in /tmp/torchinductor_root/
cc @jeffdaily @sunway513 @jithunnair-amd @pruthvistony @ROCmSupport @dllehr-amd @jataylo @hongxiayang @naromero77amd @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
| true
|
2,854,795,072
|
Ensure conj/neg flags are set in destination for CUDA->CPU copies
|
amjames
|
open
|
[
"open source",
"topic: not user facing"
] | 2
|
COLLABORATOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #147231
* #149226
Fixes #146286
| true
|
2,854,747,811
|
Code Refactoring for getting start and stride from global ranks
|
shengfukevin
|
closed
|
[
"oncall: distributed",
"fb-exported",
"Merged",
"ciflow/trunk",
"release notes: distributed (c10d)"
] | 13
|
CONTRIBUTOR
|
Summary: Refactor the code for getting start and stride from global ranks; this function can be used in different collective backends.
Differential Revision: D69555405
cc @H-Huang @awgu @kwen2501 @wanchaol @fegin @fduwjj @wz337 @wconstab @d4l3k @c-p-i-o
| true
|