| id (int64) | title (string) | user (string) | state (string) | labels (list) | comments (int64) | author_association (string) | body (string) | is_title (bool) |
|---|---|---|---|---|---|---|---|---|
2,841,770,803
|
[Feature Request] Include sequence "add ()" method similar to Keras
|
jobs-git
|
closed
|
[] | 3
|
NONE
|
### 🚀 The feature, motivation and pitch
Many models are sequential, or at least have large sequential parts.
In Keras, we can build up the layers as simply as this:
```python
model = Sequential()
model.add(Input(...))
model.add(Conv2D(...))
...
```
This is important when chaining layers in Blueprint-like interfaces. Chaining existing architectures or pre-trained models onto the sequence in this manner would also make model development easier and more seamless, i.e. the same approach irrespective of the source.
See the "add()" method section of https://keras.io/guides/sequential_model/.
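For reference, something close to this is already possible today with `torch.nn.Sequential`, which can be built incrementally via `append`/`add_module`. A minimal sketch (the layers are illustrative):
```python
import torch.nn as nn

model = nn.Sequential()
model.append(nn.Conv2d(3, 16, kernel_size=3))  # roughly analogous to Keras model.add(...)
model.append(nn.ReLU())
model.append(nn.Flatten())
# An existing pre-trained nn.Module could be appended the same way, e.g.:
# model.append(pretrained_backbone)  # hypothetical module from torchvision or elsewhere
```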
### Alternatives
_No response_
### Additional context
_No response_
| true
|
2,841,731,191
|
On Linux, passing torch.Generator to multiprocessing.Process crashes for forkserver and spawn start method
|
foxik
|
open
|
[
"high priority",
"module: multiprocessing",
"triaged",
"module: random"
] | 11
|
CONTRIBUTOR
|
### 🐛 Describe the bug
On Linux, when the multiprocessing method is `forkserver` or `spawn`, passing `torch.Generator` to a new process via `multiprocessing.Process` causes a crash. Consider the following example:
```python
import time
import torch
def worker(*args):
print("Worker started with", *args, flush=True)
if __name__ == '__main__':
torch.multiprocessing.set_start_method("forkserver") # or "spawn"
generator = torch.Generator()
process = torch.multiprocessing.Process(target=worker, args=(generator,))
process.start() # process.run() does not cause a crash
for i in range(10):
print("Main", i)
time.sleep(1)
```
The output (on two different machines, one is Debian Bookworm and the other Ubuntu Jammy) is:
```
Main 0
Main 1
Main 2
Traceback (most recent call last):
File "/opt/python/3.12.0/lib/python3.12/multiprocessing/forkserver.py", line 274, in main
code = _serve_one(child_r, fds,
^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/python/3.12.0/lib/python3.12/multiprocessing/forkserver.py", line 313, in _serve_one
code = spawn._main(child_r, parent_sentinel)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/python/3.12.0/lib/python3.12/multiprocessing/spawn.py", line 132, in _main
self = reduction.pickle.load(from_parent)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/user/venv-312/lib/python3.12/site-packages/torch/multiprocessing/reductions.py", line 546, in rebuild_storage_fd
storage = cls._new_shared_fd_cpu(fd, size)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
RuntimeError: unable to resize file <filename not specified> to the right size: Invalid argument (22)
```
The problem:
- happens with any of Python 3.9, Python 3.11, Python 3.12
- happens with torch 2.6.0 and also with the current nightly 2.7.0.dev20250209
- happens with the `forkserver` and `spawn` start methods
- does not happen with `fork`
- does not happen on macOS
- does not happen when `process.run()` instead of `process.start()` is used
- does not happen when a regular `torch.tensor` is passed instead of the `torch.Generator`
Note that `forkserver` will become the default multiprocessing start method on Linux in Python 3.14.
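A possible workaround for the snippet above (a sketch, not a fix for the underlying bug): pass a plain seed to the worker and rebuild the `torch.Generator` inside the child process, so the Generator object itself never has to be pickled across the process boundary.
```python
import torch

def worker(seed):
    # Recreate the generator in the child instead of receiving it from the parent.
    generator = torch.Generator().manual_seed(seed)
    print("Worker started with seed", generator.initial_seed(), flush=True)

if __name__ == "__main__":
    torch.multiprocessing.set_start_method("forkserver")  # or "spawn"
    process = torch.multiprocessing.Process(target=worker, args=(42,))
    process.start()
    process.join()
```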
**Real-world usage**
The above snippet is for reproducibility; the real-world example of this failure is a dataset containing a torch.Generator passed to a dataloader:
```python
import torch
class Dataset(): # a toy dataset for demonstration
def __init__(self):
self._generator = torch.Generator().manual_seed(42)
def __len__(self):
return 32
def __getitem__(self, index):
return index
if __name__ == '__main__':
torch.multiprocessing.set_start_method("forkserver") # or "spawn"
dataset = Dataset()
dataloader = torch.utils.data.DataLoader(dataset, batch_size=4, num_workers=1)
list(dataloader)
```
which fails with an analogous error:
```
Traceback (most recent call last):
File "/opt/python/3.12.0/lib/python3.12/multiprocessing/forkserver.py", line 274, in main
code = _serve_one(child_r, fds,
^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/python/3.12.0/lib/python3.12/multiprocessing/forkserver.py", line 313, in _serve_one
code = spawn._main(child_r, parent_sentinel)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/python/3.12.0/lib/python3.12/multiprocessing/spawn.py", line 132, in _main
self = reduction.pickle.load(from_parent)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/user/venv-312/lib/python3.12/site-packages/torch/multiprocessing/reductions.py", line 546, in rebuild_storage_fd
storage = cls._new_shared_fd_cpu(fd, size)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
RuntimeError: unable to resize file <filename not specified> to the right size: Invalid argument (22)
```
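A similar mitigation for the DataLoader case (again only a sketch, not part of the report): keep the seed in the dataset and create the Generator lazily, so each worker process builds its own Generator instead of unpickling one.
```python
import torch

class Dataset:  # a toy dataset mirroring the example above
    def __init__(self, seed=42):
        self._seed = seed          # plain int, pickles fine
        self._generator = None     # created lazily, per process

    def _gen(self):
        if self._generator is None:
            self._generator = torch.Generator().manual_seed(self._seed)
        return self._generator

    def __len__(self):
        return 32

    def __getitem__(self, index):
        # use self._gen() wherever the generator is needed
        return torch.randint(0, 10, (1,), generator=self._gen())

if __name__ == "__main__":
    torch.multiprocessing.set_start_method("forkserver")  # or "spawn"
    dataloader = torch.utils.data.DataLoader(Dataset(), batch_size=4, num_workers=1)
    list(dataloader)
```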
### Versions
Ubuntu Jammy with torch nightly:
```
PyTorch version: 2.7.0.dev20250209+cpu
Is debug build: False
CUDA used to build PyTorch: None
ROCM used to build PyTorch: N/A
OS: Ubuntu 22.04.5 LTS (x86_64)
GCC version: (Ubuntu 11.4.0-1ubuntu1~22.04) 11.4.0
Clang version: 14.0.0-1ubuntu1.1
CMake version: version 3.22.1
Libc version: glibc-2.35
Python version: 3.12.0 (main, May 15 2024, 14:10:54) [GCC 11.4.0] (64-bit runtime)
Python platform: Linux-5.15.35-1-pve-x86_64-with-glibc2.35
Is CUDA available: False
CUDA runtime version: No CUDA
CUDA_MODULE_LOADING set to: N/A
GPU models and configuration: No CUDA
Nvidia driver version: No CUDA
cuDNN version: No CUDA
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 48 bits physical, 48 bits virtual
Byte Order: Little Endian
CPU(s): 64
On-line CPU(s) list: 0-63
Vendor ID: AuthenticAMD
Model name: AMD EPYC 7313 16-Core Processor
CPU family: 25
Model: 1
Thread(s) per core: 2
Core(s) per socket: 16
Socket(s): 2
Stepping: 1
Frequency boost: enabled
CPU max MHz: 3000.0000
CPU min MHz: 1500.0000
BogoMIPS: 5988.85
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ht syscall nx mmxext fxsr_opt pdpe1gb rdtscp lm constant_tsc rep_good nopl nonstop_tsc cpuid extd_apicid aperfmperf rapl pni pclmulqdq monitor ssse3 fma cx16 pcid sse4_1 sse4_2 movbe popcnt aes xsave avx f16c rdrand lahf_lm cmp_legacy svm extapic cr8_legacy abm sse4a misalignsse 3dnowprefetch osvw ibs skinit wdt tce topoext perfctr_core perfctr_nb bpext perfctr_llc mwaitx cpb cat_l3 cdp_l3 invpcid_single hw_pstate ssbd mba ibrs ibpb stibp vmmcall fsgsbase bmi1 avx2 smep bmi2 invpcid cqm rdt_a rdseed adx smap clflushopt clwb sha_ni xsaveopt xsavec xgetbv1 xsaves cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local clzero irperf xsaveerptr rdpru wbnoinvd amd_ppin arat npt lbrv svm_lock nrip_save tsc_scale vmcb_clean flushbyasid decodeassists pausefilter pfthreshold v_vmsave_vmload vgif v_spec_ctrl umip pku ospke vaes vpclmulqdq rdpid overflow_recov succor smca
Virtualization: AMD-V
L1d cache: 1 MiB (32 instances)
L1i cache: 1 MiB (32 instances)
L2 cache: 16 MiB (32 instances)
L3 cache: 256 MiB (8 instances)
NUMA node(s): 2
NUMA node0 CPU(s): 0-15,32-47
NUMA node1 CPU(s): 16-31,48-63
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl and seccomp
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Retpolines, IBPB conditional, IBRS_FW, STIBP always-on, RSB filling
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Versions of relevant libraries:
[pip3] numpy==2.2.2
[pip3] torch==2.7.0.dev20250209+cpu
[conda] Could not collect
```
Debian Bookworm with torch 2.6.0
```
PyTorch version: 2.6.0+cpu
Is debug build: False
CUDA used to build PyTorch: None
ROCM used to build PyTorch: N/A
OS: Debian GNU/Linux 12 (bookworm) (x86_64)
GCC version: (Debian 12.2.0-14) 12.2.0
Clang version: 14.0.6
CMake version: version 3.25.1
Libc version: glibc-2.36
Python version: 3.11.2 (main, Nov 30 2024, 21:22:50) [GCC 12.2.0] (64-bit runtime)
Python platform: Linux-6.11.5+bpo-amd64-x86_64-with-glibc2.36
Is CUDA available: False
CUDA runtime version: No CUDA
CUDA_MODULE_LOADING set to: N/A
GPU models and configuration: No CUDA
Nvidia driver version: No CUDA
cuDNN version: No CUDA
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 39 bits physical, 48 bits virtual
Byte Order: Little Endian
CPU(s): 12
On-line CPU(s) list: 0-11
Vendor ID: GenuineIntel
Model name: 12th Gen Intel(R) Core(TM) i5-1235U
CPU family: 6
Model: 154
Thread(s) per core: 2
Core(s) per socket: 10
Socket(s): 1
Stepping: 4
CPU(s) scaling MHz: 51%
CPU max MHz: 1300.0000
CPU min MHz: 400.0000
BogoMIPS: 4992.00
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc art arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc cpuid aperfmperf tsc_known_freq pni pclmulqdq dtes64 monitor ds_cpl smx est tm2 ssse3 sdbg fma cx16 xtpr pdcm sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm 3dnowprefetch cpuid_fault epb ssbd ibrs ibpb stibp ibrs_enhanced fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid rdseed adx smap clflushopt clwb intel_pt sha_ni xsaveopt xsavec xgetbv1 xsaves split_lock_detect avx_vnni dtherm ida arat pln pts hwp hwp_notify hwp_act_window hwp_epp hwp_pkg_req hfi umip pku ospke waitpkg gfni vaes vpclmulqdq rdpid movdiri movdir64b fsrm md_clear serialize arch_lbr ibt flush_l1d arch_capabilities
L1d cache: 352 KiB (10 instances)
L1i cache: 576 KiB (10 instances)
L2 cache: 6.5 MiB (4 instances)
L3 cache: 12 MiB (1 instance)
NUMA node(s): 1
NUMA node0 CPU(s): 0-11
Vulnerability Gather data sampling: Not affected
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Not affected
Vulnerability Reg file data sampling: Mitigation; Clear Register File
Vulnerability Retbleed: Not affected
Vulnerability Spec rstack overflow: Not affected
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Enhanced / Automatic IBRS; IBPB conditional; RSB filling; PBRSB-eIBRS SW sequence; BHI BHI_DIS_S
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Versions of relevant libraries:
[pip3] numpy==2.2.2
[pip3] torch==2.6.0+cpu
[pip3] torchaudio==2.6.0+cpu
[pip3] torchmetrics==1.6.1
[pip3] torchvision==0.21.0+cpu
[conda] Could not collect
```
cc @ezyang @gchanan @zou3519 @kadeng @msaroufim @VitalyFedyunin @albanD @pbelevich
| true
|
2,841,718,641
|
[Inductor] add mkldnn_max_pool2d support for CPU inductor
|
CaoE
|
closed
|
[
"open source",
"ciflow/trunk",
"module: inductor",
"ciflow/inductor",
"release notes: inductor"
] | 1
|
COLLABORATOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #146827
* #146826
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
| true
|
2,841,718,319
|
add mkldnn maxpool support on CPU dispatch
|
CaoE
|
closed
|
[
"module: cpu",
"module: mkldnn",
"open source",
"ciflow/trunk",
"topic: not user facing",
"ciflow/inductor",
"ciflow/linux-aarch64"
] | 5
|
COLLABORATOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #146827
* __->__ #146826
Add `mkldnn_max_pool2d` support on CPU dispatch: the ATen kernels lack a CPU version that does not compute indices, and their performance is much worse than that of oneDNN max pool, with a gap of up to 10x.
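For context, a rough way to eyeball the CPU max-pool timings with and without indices (a sketch; the shape, kernel size, and resulting numbers are illustrative and not taken from this PR):
```python
import time
import torch
import torch.nn.functional as F

x = torch.randn(32, 64, 112, 112)  # illustrative NCHW input

def bench(fn, iters=20):
    fn()  # warm-up
    start = time.perf_counter()
    for _ in range(iters):
        fn()
    return (time.perf_counter() - start) / iters * 1e3  # ms per iteration

no_idx = bench(lambda: F.max_pool2d(x, kernel_size=3, stride=2))
with_idx = bench(lambda: F.max_pool2d(x, kernel_size=3, stride=2, return_indices=True))
print(f"max_pool2d: {no_idx:.2f} ms without indices, {with_idx:.2f} ms with indices")
```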
cc @jgong5 @mingfeima @XiaobingSuper @sanchitintel @ashokei @jingxu10 @gujinghui @PenghuiCheng @jianyuh @min-jean-cho @yanbing-j @Guobing-Chen @Xia-Weiwen @snadampal
| true
|
2,841,701,008
|
[func] move rearrange to torch.func
|
shingjan
|
closed
|
[
"triaged",
"open source",
"topic: not user facing"
] | 5
|
CONTRIBUTOR
|
Fixes #92675
Basically, this moves `functorch.rearrange` to `torch.func.rearrange`.
| true
|
2,841,665,365
|
Inductor-CPU might load (and store) fewer elements than the vector-width
|
sanchitintel
|
open
|
[
"oncall: pt2",
"oncall: cpu inductor"
] | 2
|
COLLABORATOR
|
### 🐛 Describe the bug
## Problem
While working on an Inductor-CPP templated GEMM, I discovered that only 16 FP16 elements might be copied (loaded & stored) at a time from a local buffer to the output buffer, instead of 32, even if the machine has ZMM registers.
[Codegened code link](https://gist.github.com/sanchitintel/43eb5327c6f81fa9ed087bab48b294dc#file-fp16_compute_accum_gemm-py-L286-L299)
I suspect this issue is currently not a problem: this special case (FP16 accumulation in a GEMM with FP16 activations and int8 weights converted to FP16, while also fusing the application of the scale) runs the risk of overflow (depending on the magnitudes and input shapes of the activations, weights, and weight scale it may not overflow, but it is not worth the risk) and should not be used in PyTorch. So we can probably ignore this example for now and instead reason about the implementation with respect to some other, realistic example.
Which of these approaches would perform better?
1. (Current approach) Loading half a vector width of FP16/BF16 elements, performing the intermediate computations in FP32, and using only one ZMM register for those BF16/FP16 elements converted to FP32.
2. Loading a full vector width of BF16/FP16 elements and using 2 ZMM registers for those elements converted to FP32. In this case, we would also have to use 2x the number of ZMM registers for the other inputs and intermediate outputs used in the epilogue computations, but it is unlikely that all 32 ZMM registers per core would be needed, since we can discard intermediate outputs and inputs that are not needed further.
If `1` performs better, we should retain the current implementation; if `2` performs better, we may need to revise it (especially in the future when FP8 GEMMs are used, as their accumulation dtype would likely be BF16 and we may encounter more such scenarios).
Thanks!
### Versions
Main branch (the example used is not representative of the main branch, though)
cc @chauhang @penguinwu
| true
|
2,841,636,522
|
Use mkldnn_max_pool2d for max_pool2d when indices is not needed
|
CaoE
|
closed
|
[
"module: cpu",
"module: mkldnn",
"open source",
"ciflow/trunk",
"ciflow/periodic",
"module: inductor",
"ciflow/inductor",
"release notes: inductor",
"ciflow/linux-aarch64"
] | 3
|
COLLABORATOR
|
cc @jgong5 @mingfeima @XiaobingSuper @sanchitintel @ashokei @jingxu10 @gujinghui @PenghuiCheng @jianyuh @min-jean-cho @yanbing-j @Guobing-Chen @Xia-Weiwen @snadampal @voznesenskym @penguinwu @EikanWang @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
| true
|
2,841,590,341
|
Update slow tests
|
pytorchupdatebot
|
closed
|
[
"open source",
"Merged",
"ciflow/trunk",
"topic: not user facing",
"ciflow/slow",
"ci-no-td"
] | 6
|
COLLABORATOR
|
This PR is auto-generated weekly by [this action](https://github.com/pytorch/pytorch/blob/main/.github/workflows/weekly.yml).
Update the list of slow tests.
| true
|
2,841,581,160
|
Deprecate DataLoader pin_memory_device param
|
zeshengzong
|
open
|
[
"triaged",
"open source",
"release notes: dataloader"
] | 15
|
CONTRIBUTOR
|
Following the [suggestion in #131858](https://github.com/pytorch/pytorch/pull/131858#pullrequestreview-2517760602) to optimize the DataLoader code.
cc @albanD
| true
|
2,841,579,789
|
ImportError: cannot import name 'DiagnosticOptions' from 'torch.onnx._internal.exporter'
|
ashok-arora
|
closed
|
[
"module: onnx",
"triaged"
] | 11
|
NONE
|
### 🐛 Describe the bug
Unable to run any model for inference.
Traceback:
```bash
---------------------------------------------------------------------------
ImportError Traceback (most recent call last)
Cell In[15], line 1
----> 1 results = model('./hallucinated.png')
File /opt/anaconda3/lib/python3.12/site-packages/ultralytics/engine/model.py:181, in Model.__call__(self, source, stream, **kwargs)
152 def __call__(
153 self,
154 source: Union[str, Path, int, Image.Image, list, tuple, np.ndarray, torch.Tensor] = None,
155 stream: bool = False,
156 **kwargs: Any,
157 ) -> list:
158 """
159 Alias for the predict method, enabling the model instance to be callable for predictions.
160
(...)
179 ... print(f"Detected {len(r)} objects in image")
180 """
--> 181 return self.predict(source, stream, **kwargs)
File /opt/anaconda3/lib/python3.12/site-packages/ultralytics/engine/model.py:559, in Model.predict(self, source, stream, predictor, **kwargs)
557 if prompts and hasattr(self.predictor, "set_prompts"): # for SAM-type models
558 self.predictor.set_prompts(prompts)
--> 559 return self.predictor.predict_cli(source=source) if is_cli else self.predictor(source=source, stream=stream)
File /opt/anaconda3/lib/python3.12/site-packages/ultralytics/engine/predictor.py:175, in BasePredictor.__call__(self, source, model, stream, *args, **kwargs)
173 return self.stream_inference(source, model, *args, **kwargs)
174 else:
--> 175 return list(self.stream_inference(source, model, *args, **kwargs))
File /opt/anaconda3/lib/python3.12/site-packages/torch/utils/_contextlib.py:35, in _wrap_generator.<locals>.generator_context(*args, **kwargs)
32 try:
33 # Issuing `None` to a generator fires it up
34 with ctx_factory():
---> 35 response = gen.send(None)
37 while True:
38 try:
39 # Forward the response to our caller and get its next request
File /opt/anaconda3/lib/python3.12/site-packages/ultralytics/engine/predictor.py:241, in BasePredictor.stream_inference(self, source, model, *args, **kwargs)
239 # Warmup model
240 if not self.done_warmup:
--> 241 self.model.warmup(imgsz=(1 if self.model.pt or self.model.triton else self.dataset.bs, 3, *self.imgsz))
242 self.done_warmup = True
244 self.seen, self.windows, self.batch = 0, [], None
File /opt/anaconda3/lib/python3.12/site-packages/ultralytics/nn/autobackend.py:765, in AutoBackend.warmup(self, imgsz)
758 def warmup(self, imgsz=(1, 3, 640, 640)):
759 """
760 Warm up the model by running one forward pass with a dummy input.
761
762 Args:
763 imgsz (tuple): The shape of the dummy input tensor in the format (batch_size, channels, height, width)
764 """
--> 765 import torchvision # noqa (import here so torchvision import time not recorded in postprocess time)
767 warmup_types = self.pt, self.jit, self.onnx, self.engine, self.saved_model, self.pb, self.triton, self.nn_module
768 if any(warmup_types) and (self.device.type != "cpu" or self.triton):
File /opt/anaconda3/lib/python3.12/site-packages/torchvision/__init__.py:6
3 from modulefinder import Module
5 import torch
----> 6 from torchvision import _meta_registrations, datasets, io, models, ops, transforms, utils
8 from .extension import _HAS_OPS
10 try:
File /opt/anaconda3/lib/python3.12/site-packages/torchvision/models/__init__.py:2
1 from .alexnet import *
----> 2 from .convnext import *
3 from .densenet import *
4 from .efficientnet import *
File /opt/anaconda3/lib/python3.12/site-packages/torchvision/models/convnext.py:8
5 from torch import nn, Tensor
6 from torch.nn import functional as F
----> 8 from ..ops.misc import Conv2dNormActivation, Permute
9 from ..ops.stochastic_depth import StochasticDepth
10 from ..transforms._presets import ImageClassification
File /opt/anaconda3/lib/python3.12/site-packages/torchvision/ops/__init__.py:1
----> 1 from ._register_onnx_ops import _register_custom_op
2 from .boxes import (
3 batched_nms,
4 box_area,
(...)
13 remove_small_boxes,
14 )
15 from .ciou_loss import complete_box_iou_loss
File /opt/anaconda3/lib/python3.12/site-packages/torchvision/ops/_register_onnx_ops.py:5
2 import warnings
4 import torch
----> 5 from torch.onnx import symbolic_opset11 as opset11
6 from torch.onnx.symbolic_helper import parse_args
8 _ONNX_OPSET_VERSION_11 = 11
File /opt/anaconda3/lib/python3.12/site-packages/torch/onnx/__init__.py:46
33 from .errors import CheckerError # Backwards compatibility
34 from .utils import (
35 _optimize_graph,
36 _run_symbolic_function,
(...)
43 unregister_custom_op_symbolic,
44 )
---> 46 from ._internal.exporter import ( # usort:skip. needs to be last to avoid circular import
47 DiagnosticOptions,
48 ExportOptions,
49 ONNXProgram,
50 ONNXProgramSerializer,
51 ONNXRuntimeOptions,
52 InvalidExportOptionsError,
53 OnnxExporterError,
54 OnnxRegistry,
55 dynamo_export,
56 enable_fake_mode,
57 )
59 from ._internal.onnxruntime import (
60 is_onnxrt_backend_supported,
61 OrtBackend as _OrtBackend,
62 OrtBackendOptions as _OrtBackendOptions,
63 OrtExecutionProvider as _OrtExecutionProvider,
64 )
66 __all__ = [
67 # Modules
68 "symbolic_helper",
(...)
114 "is_onnxrt_backend_supported",
115 ]
ImportError: cannot import name 'DiagnosticOptions' from 'torch.onnx._internal.exporter' (/opt/anaconda3/lib/python3.12/site-packages/torch/onnx/_internal/exporter/__init__.py)
```
### Versions
```
Collecting environment information...
PyTorch version: 2.3.0
Is debug build: False
CUDA used to build PyTorch: None
ROCM used to build PyTorch: N/A
OS: macOS 15.0 (arm64)
GCC version: Could not collect
Clang version: 16.0.0 (clang-1600.0.26.4)
CMake version: version 3.31.3
Libc version: N/A
Python version: 3.12.4 | packaged by Anaconda, Inc. | (main, Jun 18 2024, 10:07:17) [Clang 14.0.6 ] (64-bit runtime)
Python platform: macOS-15.0-arm64-arm-64bit
Is CUDA available: False
CUDA runtime version: No CUDA
CUDA_MODULE_LOADING set to: N/A
GPU models and configuration: No CUDA
Nvidia driver version: No CUDA
cuDNN version: No CUDA
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Apple M1
Versions of relevant libraries:
[pip3] flake8==7.0.0
[pip3] mypy==1.10.0
[pip3] mypy-extensions==1.0.0
[pip3] numpy==1.26.4
[pip3] numpydoc==1.7.0
[pip3] optree==0.14.0
[pip3] torch==2.3.0
[pip3] torchvision==0.18.1a0
[conda] numpy 1.26.4 py312h7f4fdc5_0
[conda] numpy-base 1.26.4 py312he047099_0
[conda] numpydoc 1.7.0 py312hca03da5_0
[conda] optree 0.14.0 pypi_0 pypi
[conda] pytorch 2.3.0 cpu_py312h9fb2a2f_0
[conda] torch 2.6.0 pypi_0 pypi
[conda] torchvision 0.21.0 pypi_0 pypi
```
| true
|
2,841,577,933
|
[dynamo] Support list subclasses and fix dict subclasses mutation bugs
|
anijain2305
|
closed
|
[
"Merged",
"ciflow/trunk",
"topic: not user facing",
"module: dynamo",
"ciflow/inductor"
] | 8
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #146995
* __->__ #146819
This PR adds support for list subclasses. Among other things, this involves:
1) Tracking the mutations on internal vts like `_dict_vt` and `_list_vt` using sources. This helps identify if there was a mutation in the underlying data structures, and we need to reconstruct it.
2) `UserDefinedObjectVariable` now has a new method - `is_modified` which `side_effect` infra relies upon to check mutations in the underlying vts (like `_dict_vt`).
3) The `reconstruction` logic ensures that we use the `dict.__getitem__` and `list.__getitem__` methods. This is super important because we don't want to call the overridden `__getitem__` methods (see the sketch below).
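As a small illustration of point 3 (not code from this PR), calling the unbound `list.__getitem__` skips a user override on a `list` subclass:
```python
class MyList(list):
    def __getitem__(self, idx):
        print("user override called")       # side effect reconstruction must not trigger
        return super().__getitem__(idx)

xs = MyList([1, 2, 3])
xs[0]                      # goes through the override, prints the message
list.__getitem__(xs, 0)    # bypasses the override; no side effect
```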
If this PR is hard to review, please let me know. I can break it into several small PRs.
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @chauhang @amjames
| true
|
2,841,573,786
|
[mps] Implement eager support for spherical_bessel_j0
|
dcci
|
closed
|
[
"Merged",
"topic: not user facing",
"module: mps",
"release notes: mps",
"ciflow/mps",
"module: inductor"
] | 4
|
MEMBER
|
cc @kulinseth @albanD @malfet @DenisVieriu97 @jhavukainen @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
| true
|
2,841,493,698
|
BF16 linear(matmul) operator 100x slower on odd matrix dimension sizes on A100
|
piubwd
|
open
|
[
"module: performance",
"module: cuda",
"triaged",
"module: cublas",
"module: linear algebra",
"matrix multiplication"
] | 3
|
NONE
|
### 🐛 Describe the bug
This is another reproduction of issues #106469 and #106485 under a newer version of PyTorch (torch 2.6.0+cu126).
When performing a linear (matrix multiplication) operation in bf16 on an A100, if one dimension length is an odd number (I tried 3, 5, 101), the operation is 136x~283x slower than with the nearest even dimension sizes.
For example, running
```bash
python reproduction_code.py bf16 3
```
takes 68 seconds, while
```bash
python reproduction_code.py bf16 2
```
takes 570 ms.
Here, `reproduction_code.py` is:
```python
import torch
from torch.profiler import profile
import torch.amp as amp
import torch.nn.functional as F
def build_matrix(shape_row, shape_col):
return torch.randn((shape_row, shape_col)).cuda()
def profile_aten_mm(shape_row_1, shape_col_1, shape_col_2, forward_dtype):
mat1 = build_matrix(shape_row=shape_row_1, shape_col=shape_col_1)
mat2 = build_matrix(shape_row=shape_col_1, shape_col=shape_col_2).T
forward_dtype
print(f"mat1.shape={mat1.shape} mat2.shape={mat2.shape}")
with profile(with_flops=True, profile_memory=True, record_shapes=True) as prof:
with torch.autocast(device_type="cuda", dtype=forward_dtype):
for epoch in range(100):
# mat3 = mat1 @ mat2
mat3 = F.linear(mat1, mat2)
print(f"mat3.shape={mat3.shape}")
print(f"mat3.dtype={mat3.dtype}")
print(prof.key_averages(group_by_input_shape=True).table(row_limit=100000, max_name_column_width=114514))
if __name__ == "__main__":
import argparse
parser = argparse.ArgumentParser("qwq")
parser.add_argument("precision")
parser.add_argument("N")
args = parser.parse_args()
N = int(args.N)
pre = args.precision
print(f"(only mm) Profiling with configuration {pre} {N}")
if pre == "bf16":
forward_dtype = torch.bfloat16
elif pre == "16":
forward_dtype = torch.float16
elif pre == "32":
forward_dtype = torch.float32
print(f"forward_dtype={forward_dtype}")
print(f"{torch.__version__}")
profile_aten_mm(16, int(3e7), N, forward_dtype=forward_dtype)
```
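A hypothetical mitigation sketch (untested for this case): pad the odd output dimension of the weight to the next even size before `F.linear` and slice the result, so the matmul never sees the odd leading dimension.
```python
import torch
import torch.nn.functional as F

def linear_even_padded(mat1, weight):
    # weight has shape (N, K); pad N to the next even value if it is odd.
    n = weight.shape[0]
    if n % 2 == 1:
        weight = F.pad(weight, (0, 0, 0, 1))  # (N, K) -> (N + 1, K)
        return F.linear(mat1, weight)[..., :n]
    return F.linear(mat1, weight)
```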
### Versions
```
Collecting environment information...
PyTorch version: 2.6.0+cu126
Is debug build: False
CUDA used to build PyTorch: 12.6
ROCM used to build PyTorch: N/A
OS: Ubuntu 20.04.6 LTS (x86_64)
GCC version: (Debian 9.5.0-3) 9.5.0
Clang version: Could not collect
CMake version: version 3.28.3
Libc version: glibc-2.31
Python version: 3.10.4 | packaged by conda-forge | (main, Mar 24 2022, 17:39:04) [GCC 10.3.0] (64-bit runtime)
Python platform: Linux-5.4.0-169-generic-x86_64-with-glibc2.31
Is CUDA available: True
CUDA runtime version: 10.1.243
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration:
GPU 0: NVIDIA A100-SXM4-80GB
GPU 1: NVIDIA A100 80GB PCIe
GPU 2: NVIDIA A100 80GB PCIe
GPU 3: NVIDIA A100 80GB PCIe
GPU 4: NVIDIA A100-SXM4-80GB
GPU 5: NVIDIA A100-SXM4-80GB
GPU 6: NVIDIA A100-SXM4-80GB
GPU 7: NVIDIA A100-SXM4-80GB
Nvidia driver version: 545.23.08
cuDNN version: Could not collect
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Byte Order: Little Endian
Address sizes: 52 bits physical, 57 bits virtual
CPU(s): 384
On-line CPU(s) list: 0-383
Thread(s) per core: 2
Core(s) per socket: 96
Socket(s): 2
NUMA node(s): 2
Vendor ID: AuthenticAMD
CPU family: 25
Model: 17
Model name: AMD EPYC 9684X 96-Core Processor
Stepping: 2
Frequency boost: enabled
CPU MHz: 2651.914
CPU max MHz: 2550.0000
CPU min MHz: 1500.0000
BogoMIPS: 5092.43
Virtualization: AMD-V
L1d cache: 6 MiB
L1i cache: 6 MiB
L2 cache: 192 MiB
L3 cache: 2.3 GiB
NUMA node0 CPU(s): 0-95,192-287
NUMA node1 CPU(s): 96-191,288-383
Vulnerability Gather data sampling: Not affected
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Not affected
Vulnerability Retbleed: Not affected
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl and seccomp
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Retpolines, IBPB conditional, IBRS_FW, STIBP always-on, RSB filling, PBRSB-eIBRS Not affected
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ht syscall nx mmxext fxsr_opt pdpe1gb rdtscp lm constant_tsc rep_good nopl nonstop_tsc cpuid extd_apicid aperfmperf pni pclmulqdq monitor ssse3 fma cx16 pcid sse4_1 sse4_2 x2apic movbe popcnt aes xsave avx f16c rdrand lahf_lm cmp_legacy svm extapic cr8_legacy abm sse4a misalignsse 3dnowprefetch osvw ibs skinit wdt tce topoext perfctr_core perfctr_nb bpext perfctr_llc mwaitx cpb cat_l3 cdp_l3 invpcid_single hw_pstate ssbd mba ibrs ibpb stibp vmmcall fsgsbase bmi1 avx2 smep bmi2 erms invpcid cqm rdt_a avx512f avx512dq rdseed adx smap avx512ifma clflushopt clwb avx512cd sha_ni avx512bw avx512vl xsaveopt xsavec xgetbv1 xsaves cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local avx512_bf16 clzero irperf xsaveerptr wbnoinvd arat npt lbrv svm_lock nrip_save tsc_scale vmcb_clean flushbyasid decodeassists pausefilter pfthreshold avic v_vmsave_vmload vgif avx512vbmi umip pku ospke avx512_vbmi2 gfni vaes vpclmulqdq avx512_vnni avx512_bitalg avx512_vpopcntdq rdpid overflow_recov succor smca flush_l1d
Versions of relevant libraries:
[pip3] mypy-extensions==1.0.0
[pip3] mypy-protobuf==3.3.0
[pip3] numpy==1.23.5
[pip3] numpydoc==1.8.0
[pip3] nvidia-cublas-cu12==12.6.4.1
[pip3] nvidia-cuda-cupti-cu12==12.6.80
[pip3] nvidia-cuda-nvrtc-cu12==12.6.77
[pip3] nvidia-cuda-runtime-cu12==12.6.77
[pip3] nvidia-cudnn-cu12==9.5.1.17
[pip3] nvidia-cufft-cu12==11.3.0.4
[pip3] nvidia-curand-cu12==10.3.7.77
[pip3] nvidia-cusolver-cu12==11.7.1.2
[pip3] nvidia-cusparse-cu12==12.5.4.2
[pip3] nvidia-cusparselt-cu12==0.6.3
[pip3] nvidia-nccl-cu12==2.21.5
[pip3] nvidia-nvjitlink-cu12==12.6.85
[pip3] nvidia-nvtx-cu12==12.6.77
[pip3] pytorch-lightning==2.2.1
[pip3] torch==2.6.0+cu126
[pip3] torch-ema==0.3
[pip3] torchmetrics==1.4.1
[pip3] triton==3.2.0
[conda] Could not collect
```
cc @msaroufim @ptrblck @eqy @csarofeen @xwang233 @jianyuh @nikitaved @pearu @mruberry @walterddr @Lezcano
| true
|
2,841,481,865
|
Optimize dataloader Self typing
|
zeshengzong
|
closed
|
[
"triaged",
"open source",
"Merged",
"ciflow/trunk",
"release notes: dataloader"
] | 5
|
CONTRIBUTOR
|
Optimize `dataloader.py` method return type with Self typing
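For illustration (not the actual diff), a minimal sketch of the pattern, with hypothetical `BaseLoader`/`with_batch_size` names:
```python
from typing import Self  # Python 3.11+; use typing_extensions.Self on older versions

class BaseLoader:
    def with_batch_size(self, batch_size: int) -> Self:  # instead of -> "BaseLoader"
        self.batch_size = batch_size
        return self

class CustomLoader(BaseLoader):
    pass

loader = CustomLoader().with_batch_size(8)  # static type stays CustomLoader, not BaseLoader
```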
| true
|
2,841,448,666
|
Use __qualname__ in add_safe_globals and update Unpickling error raised for Unsupported GLOBAL
|
hanson-hschang
|
closed
|
[
"triaged",
"open source",
"Merged",
"ciflow/trunk",
"topic: not user facing"
] | 10
|
CONTRIBUTOR
|
- Fixes #146814
Change
```python
for f in _marked_safe_globals_set:
module, name = f.__module__, f.__name__
```
to
```python
for f in _marked_safe_globals_set:
module, name = f.__module__, f.__qualname__
```
to avoid different classes overwriting each other under the same key string.
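A quick illustration of the collision that `__qualname__` avoids (not code from the PR):
```python
class ClassA:
    class Nested: ...

class ClassB:
    class Nested: ...

# __name__ collides for nested classes, __qualname__ does not:
assert ClassA.Nested.__name__ == ClassB.Nested.__name__ == "Nested"
assert ClassA.Nested.__qualname__ == "ClassA.Nested"
assert ClassB.Nested.__qualname__ == "ClassB.Nested"
```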
A test is also added.
```
python test/test_serialization.py TestSerialization.test_serialization_nested_class
```
- Fixes #146886
| true
|
2,841,436,844
|
Problem of same name nested class in serialization
|
hanson-hschang
|
closed
|
[
"module: serialization",
"triaged"
] | 2
|
CONTRIBUTOR
|
### 🐛 Describe the bug
The current implementation of `_get_user_allowed_globals` in `_weights_only_unpickler.py` runs into trouble when same-named nested classes are added to the safe globals through `torch.serialization.add_safe_globals`. The code that reproduces the problem is as follows:
```python
import torch
class ClassAMock:
class Nested:
pass
class ClassBMock:
class Nested:
pass
def test_nested_class() -> None:
torch.save(
dict(
a_nested=ClassAMock.Nested(),
b_nested=ClassBMock.Nested(),
),
'nested_class.pth'
)
torch.serialization.add_safe_globals(
[ClassAMock, ClassBMock, getattr, ClassAMock.Nested, ClassBMock.Nested]
)
torch.load('nested_class.pth')
test_nested_class()
```
The error message is as follows:
```
_pickle.UnpicklingError: Weights only load failed. In PyTorch 2.6, we changed the default value of the `weights_only` argument in `torch.load` from `False` to `True`. Re-running `torch.load` with `weights_only` set to `False` will likely succeed, but it can result in arbitrary code execution. Do it only if you got the file from a trusted source.
Please file an issue with the following so that we can make `weights_only=True` compatible with your use case: WeightsUnpickler error: Can only create new object for nn.Parameter or classes allowlisted via `add_safe_globals` but got <class '__main__.ClassBMock.Nested'>
Check the documentation of torch.load to learn more about types accepted by default with weights_only https://pytorch.org/docs/stable/generated/torch.load.html.
```
### Versions
Collecting environment information...
PyTorch version: 2.7.0.dev20250209
Is debug build: False
CUDA used to build PyTorch: None
ROCM used to build PyTorch: N/A
OS: macOS 14.6.1 (arm64)
GCC version: Could not collect
Clang version: 14.0.3 (clang-1403.0.22.14.1)
CMake version: version 3.31.4
Libc version: N/A
Python version: 3.12.9 (main, Feb 4 2025, 14:38:38) [Clang 16.0.0 (clang-1600.0.26.6)] (64-bit runtime)
Python platform: macOS-14.6.1-arm64-arm-64bit
Is CUDA available: False
CUDA runtime version: No CUDA
CUDA_MODULE_LOADING set to: N/A
GPU models and configuration: No CUDA
Nvidia driver version: No CUDA
cuDNN version: No CUDA
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Apple M2
Versions of relevant libraries:
[pip3] mypy==1.13.0
[pip3] mypy-extensions==1.0.0
[pip3] numpy==1.26.2
[pip3] onnx==1.17.0
[pip3] onnxscript==0.1.0
[pip3] optree==0.13.0
[conda] numpy 2.2.1 pypi_0 pypi
[conda] numpydoc 1.5.0 py311hca03da5_0
cc @mruberry @mikaylagawarecki
| true
|
2,841,403,744
|
Oneshot AllReduce not being triggered when there's nested intra- and inter-node process groups
|
donglinz
|
open
|
[
"oncall: distributed"
] | 1
|
NONE
|
### 🐛 Describe the bug
I am testing with 2 H100 nodes with 8 GPUs each. I initialize a world process group of size 16 and then create intra-node process groups with ```torch.distributed.split_group```.
I noticed that the one-shot all-reduce op is not being triggered for intra-node process-group all-reduces.
Inspecting the logs, it looks like ```IntraNodeComm::rendezvous``` is called when the world process group is initialized, rather than being associated with the intra-node process group, and this seems to be why the intra-node communication path is not triggered.
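For reference, the setup looks roughly like this (a sketch of the reported configuration, not the actual script; the `split_ranks` keyword and the surrounding details are assumptions):
```python
import torch
import torch.distributed as dist

def run():
    dist.init_process_group("nccl")                    # world group, size 16 (2 nodes x 8 GPUs)
    rank, world = dist.get_rank(), dist.get_world_size()
    gpus_per_node = 8                                  # per the report
    node_ranks = [list(range(n * gpus_per_node, (n + 1) * gpus_per_node))
                  for n in range(world // gpus_per_node)]
    intra = dist.split_group(split_ranks=node_ranks)   # assumed signature; returns this rank's group
    t = torch.ones(1, device=f"cuda:{rank % gpus_per_node}")
    dist.all_reduce(t, group=intra)                    # expected to hit the one-shot intra-node path
```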
Node 0:
```
2025-02-10 05:40:08.923 | INFO | __mp_main__:run_shard:37 - Initializing process group with world size 16
2025-02-10 05:40:22.059 | INFO | __mp_main__:run_shard:50 - Initialized process group with world size 16
[rank3]:[W210 05:40:22.070992151 intra_node_comm.cpp:160] Aborting IntraNodeComm::rendezvous because some participants are not on the same host (node0, node1)
[rank4]:[W210 05:40:22.071007373 intra_node_comm.cpp:160] Aborting IntraNodeComm::rendezvous because some participants are not on the same host (node0, node1)
[rank7]:[W210 05:40:22.071020369 intra_node_comm.cpp:160] Aborting IntraNodeComm::rendezvous because some participants are not on the same host (node0, node1)
[rank6]:[W210 05:40:22.071024166 intra_node_comm.cpp:160] Aborting IntraNodeComm::rendezvous because some participants are not on the same host (node0, node1)
[rank1]:[W210 05:40:22.071056049 intra_node_comm.cpp:160] Aborting IntraNodeComm::rendezvous because some participants are not on the same host (node0, node1)
[rank0]:[W210 05:40:22.071083300 intra_node_comm.cpp:160] Aborting IntraNodeComm::rendezvous because some participants are not on the same host (node0, node1)
[rank5]:[W210 05:40:22.071092767 intra_node_comm.cpp:160] Aborting IntraNodeComm::rendezvous because some participants are not on the same host (node0, node1)
[rank2]:[W210 05:40:22.071129152 intra_node_comm.cpp:160] Aborting IntraNodeComm::rendezvous because some participants are not on the same host (node0, node1)
2025-02-10 05:40:34.695 | INFO | __mp_main__:run_shard:63 - RANK 0/16 intra-node group ranks: [0, 1, 2, 3, 4, 5, 6, 7]
2025-02-10 05:40:34.695 | INFO | __mp_main__:run_shard:63 - RANK 1/16 intra-node group ranks: [0, 1, 2, 3, 4, 5, 6, 7]
2025-02-10 05:40:34.695 | INFO | __mp_main__:run_shard:63 - RANK 3/16 intra-node group ranks: [0, 1, 2, 3, 4, 5, 6, 7]
2025-02-10 05:40:34.695 | INFO | __mp_main__:run_shard:63 - RANK 5/16 intra-node group ranks: [0, 1, 2, 3, 4, 5, 6, 7]
2025-02-10 05:40:34.695 | INFO | __mp_main__:run_shard:63 - RANK 2/16 intra-node group ranks: [0, 1, 2, 3, 4, 5, 6, 7]
2025-02-10 05:40:34.696 | INFO | __mp_main__:run_shard:63 - RANK 7/16 intra-node group ranks: [0, 1, 2, 3, 4, 5, 6, 7]
2025-02-10 05:40:34.695 | INFO | __mp_main__:run_shard:63 - RANK 4/16 intra-node group ranks: [0, 1, 2, 3, 4, 5, 6, 7]
2025-02-10 05:40:34.695 | INFO | __mp_main__:run_shard:63 - RANK 6/16 intra-node group ranks: [0, 1, 2, 3, 4, 5, 6, 7]
```
Node 1:
```
2025-02-10 05:33:23.274 | INFO | __mp_main__:run_shard:38 - Initializing process group with world size 16
[W210 05:33:33.142215289 socket.cpp:200] [c10d] The hostname of the client socket cannot be retrieved. err=-3
[W210 05:33:33.237101969 socket.cpp:200] [c10d] The hostname of the client socket cannot be retrieved. err=-3
[W210 05:33:33.368061684 socket.cpp:200] [c10d] The hostname of the client socket cannot be retrieved. err=-3
[W210 05:33:33.426134871 socket.cpp:200] [c10d] The hostname of the client socket cannot be retrieved. err=-3
[W210 05:33:33.464914105 socket.cpp:200] [c10d] The hostname of the client socket cannot be retrieved. err=-3
[W210 05:33:33.466004272 socket.cpp:200] [c10d] The hostname of the client socket cannot be retrieved. err=-3
[W210 05:33:33.494283115 socket.cpp:200] [c10d] The hostname of the client socket cannot be retrieved. err=-3
[W210 05:33:33.494914592 socket.cpp:200] [c10d] The hostname of the client socket cannot be retrieved. err=-3
[rank10]:[W210 05:33:33.621687382 intra_node_comm.cpp:160] Aborting IntraNodeComm::rendezvous because some participants are not on the same host (node1, node0)
[rank12]:[W210 05:33:33.622218903 intra_node_comm.cpp:160] Aborting IntraNodeComm::rendezvous because some participants are not on the same host (node1, node0)
[rank9]:[W210 05:33:33.622220701 intra_node_comm.cpp:160] Aborting IntraNodeComm::rendezvous because some participants are not on the same host (node1, node0)
[rank11]:[W210 05:33:33.622301429 intra_node_comm.cpp:160] Aborting IntraNodeComm::rendezvous because some participants are not on the same host (node1, node0)
[rank13]:[W210 05:33:33.622320304 intra_node_comm.cpp:160] Aborting IntraNodeComm::rendezvous because some participants are not on the same host (node1, node0)
[rank15]:[W210 05:33:33.622354237 intra_node_comm.cpp:160] Aborting IntraNodeComm::rendezvous because some participants are not on the same host (node1, node0)
[rank14]:[W210 05:33:33.622554512 intra_node_comm.cpp:160] Aborting IntraNodeComm::rendezvous because some participants are not on the same host (node1, node0)
2025-02-10 05:33:33.884 | INFO | __mp_main__:run_shard:51 - Initialized process group with world size 16
[rank8]:[W210 05:33:33.643654232 intra_node_comm.cpp:160] Aborting IntraNodeComm::rendezvous because some participants are not on the same host (node1, node0)
2025-02-10 05:33:46.434 | INFO | __mp_main__:run_shard:64 - RANK 10/16 intra-node group ranks: [8, 9, 10, 11, 12, 13, 14, 15]
2025-02-10 05:33:46.434 | INFO | __mp_main__:run_shard:64 - RANK 8/16 intra-node group ranks: [8, 9, 10, 11, 12, 13, 14, 15]
2025-02-10 05:33:46.434 | INFO | __mp_main__:run_shard:64 - RANK 15/16 intra-node group ranks: [8, 9, 10, 11, 12, 13, 14, 15]
2025-02-10 05:33:46.434 | INFO | __mp_main__:run_shard:64 - RANK 11/16 intra-node group ranks: [8, 9, 10, 11, 12, 13, 14, 15]
2025-02-10 05:33:46.434 | INFO | __mp_main__:run_shard:64 - RANK 9/16 intra-node group ranks: [8, 9, 10, 11, 12, 13, 14, 15]
2025-02-10 05:33:46.434 | INFO | __mp_main__:run_shard:64 - RANK 12/16 intra-node group ranks: [8, 9, 10, 11, 12, 13, 14, 15]
2025-02-10 05:33:46.434 | INFO | __mp_main__:run_shard:64 - RANK 13/16 intra-node group ranks: [8, 9, 10, 11, 12, 13, 14, 15]
2025-02-10 05:33:46.434 | INFO | __mp_main__:run_shard:64 - RANK 14/16 intra-node group ranks: [8, 9, 10, 11, 12, 13, 14, 15]
```
### Versions
Collecting environment information...
PyTorch version: 2.7.0.dev20250209+cu126
Is debug build: False
CUDA used to build PyTorch: 12.6
ROCM used to build PyTorch: N/A
OS: Ubuntu 22.04.4 LTS (x86_64)
GCC version: (Ubuntu 11.4.0-1ubuntu1~22.04) 11.4.0
Clang version: Could not collect
CMake version: Could not collect
Libc version: glibc-2.35
Python version: 3.12.3 | packaged by conda-forge | (main, Apr 15 2024, 18:38:13) [GCC 12.3.0] (64-bit runtime)
Python platform: Linux-5.15.0-119-generic-x86_64-with-glibc2.35
Is CUDA available: True
CUDA runtime version: 12.4.99
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration:
GPU 0: NVIDIA H100 80GB HBM3
GPU 1: NVIDIA H100 80GB HBM3
GPU 2: NVIDIA H100 80GB HBM3
GPU 3: NVIDIA H100 80GB HBM3
GPU 4: NVIDIA H100 80GB HBM3
GPU 5: NVIDIA H100 80GB HBM3
GPU 6: NVIDIA H100 80GB HBM3
GPU 7: NVIDIA H100 80GB HBM3
Nvidia driver version: 555.42.06
cuDNN version: Could not collect
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 46 bits physical, 57 bits virtual
Byte Order: Little Endian
CPU(s): 96
On-line CPU(s) list: 0-95
Vendor ID: GenuineIntel
Model name: Intel(R) Xeon(R) Platinum 8468
CPU family: 6
Model: 143
Thread(s) per core: 1
Core(s) per socket: 48
Socket(s): 2
Stepping: 8
BogoMIPS: 4200.00
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc art arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc cpuid aperfmperf tsc_known_freq pni pclmulqdq dtes64 monitor ds_cpl smx est tm2 ssse3 sdbg fma cx16 xtpr pdcm pcid dca sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm 3dnowprefetch cpuid_fault epb cat_l3 cat_l2 cdp_l3 invpcid_single cdp_l2 ssbd mba ibrs ibpb stibp ibrs_enhanced fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid cqm rdt_a avx512f avx512dq rdseed adx smap avx512ifma clflushopt clwb intel_pt avx512cd sha_ni avx512bw avx512vl xsaveopt xsavec xgetbv1 xsaves cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local split_lock_detect avx_vnni avx512_bf16 wbnoinvd dtherm ida arat pln pts avx512vbmi umip pku ospke waitpkg avx512_vbmi2 gfni vaes vpclmulqdq avx512_vnni avx512_bitalg tme avx512_vpopcntdq la57 rdpid bus_lock_detect cldemote movdiri movdir64b enqcmd fsrm md_clear serialize tsxldtrk pconfig arch_lbr amx_bf16 avx512_fp16 amx_tile amx_int8 flush_l1d arch_capabilities
L1d cache: 4.5 MiB (96 instances)
L1i cache: 3 MiB (96 instances)
L2 cache: 192 MiB (96 instances)
L3 cache: 210 MiB (2 instances)
NUMA node(s): 8
NUMA node0 CPU(s): 0-11
NUMA node1 CPU(s): 12-23
NUMA node2 CPU(s): 24-35
NUMA node3 CPU(s): 36-47
NUMA node4 CPU(s): 48-59
NUMA node5 CPU(s): 60-71
NUMA node6 CPU(s): 72-83
NUMA node7 CPU(s): 84-95
Vulnerability Gather data sampling: Not affected
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Not affected
Vulnerability Reg file data sampling: Not affected
Vulnerability Retbleed: Not affected
Vulnerability Spec rstack overflow: Not affected
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl and seccomp
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Enhanced / Automatic IBRS; IBPB conditional; RSB filling; PBRSB-eIBRS SW sequence; BHI BHI_DIS_S
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Versions of relevant libraries:
[pip3] mypy-protobuf==3.5.0
[pip3] numpy==1.26.4
[pip3] nvidia-cublas-cu12==12.6.4.1
[pip3] nvidia-cuda-cupti-cu12==12.6.80
[pip3] nvidia-cuda-nvrtc-cu12==12.6.77
[pip3] nvidia-cuda-runtime-cu12==12.6.77
[pip3] nvidia-cudnn-cu12==9.5.1.17
[pip3] nvidia-cufft-cu12==11.3.0.4
[pip3] nvidia-curand-cu12==10.3.7.77
[pip3] nvidia-cusolver-cu12==11.7.1.2
[pip3] nvidia-cusparse-cu12==12.5.4.2
[pip3] nvidia-cusparselt-cu12==0.6.3
[pip3] nvidia-nccl-cu12==2.21.5
[pip3] nvidia-nvjitlink-cu12==12.6.85
[pip3] nvidia-nvtx-cu12==12.6.77
[pip3] pytorch-triton==3.2.0+git4b3bb1f8
[pip3] torch==2.7.0.dev20250209+cu126
[pip3] triton==3.1.0
[conda] blas 2.116 mkl conda-forge
[conda] blas-devel 3.9.0 16_linux64_mkl conda-forge
[conda] cuda-cudart 12.4.99 0 nvidia/label/cuda-12.4.0
[conda] cuda-cudart-dev 12.4.99 0 nvidia/label/cuda-12.4.0
[conda] cuda-cudart-static 12.4.99 0 nvidia/label/cuda-12.4.0
[conda] cuda-cupti 12.4.99 0 nvidia/label/cuda-12.4.0
[conda] cuda-cupti-static 12.4.99 0 nvidia/label/cuda-12.4.0
[conda] cuda-libraries 12.4.0 0 nvidia/label/cuda-12.4.0
[conda] cuda-libraries-dev 12.4.0 0 nvidia/label/cuda-12.4.0
[conda] cuda-libraries-static 12.4.0 0 nvidia/label/cuda-12.4.0
[conda] cuda-nvrtc 12.4.99 0 nvidia/label/cuda-12.4.0
[conda] cuda-nvrtc-dev 12.4.99 0 nvidia/label/cuda-12.4.0
[conda] cuda-nvrtc-static 12.4.99 0 nvidia/label/cuda-12.4.0
[conda] cuda-nvtx 12.4.99 0 nvidia/label/cuda-12.4.0
[conda] cuda-opencl 12.4.99 0 nvidia/label/cuda-12.4.0
[conda] cuda-opencl-dev 12.4.99 0 nvidia/label/cuda-12.4.0
[conda] cuda-runtime 12.4.0 0 nvidia
[conda] libblas 3.9.0 16_linux64_mkl conda-forge
[conda] libcblas 3.9.0 16_linux64_mkl conda-forge
[conda] libcublas 12.4.5.8 0 nvidia
[conda] libcublas-dev 12.4.2.65 0 nvidia/label/cuda-12.4.0
[conda] libcublas-static 12.4.2.65 0 nvidia/label/cuda-12.4.0
[conda] libcufft 11.2.1.3 0 nvidia
[conda] libcufft-dev 11.2.0.44 0 nvidia/label/cuda-12.4.0
[conda] libcufft-static 11.2.0.44 0 nvidia/label/cuda-12.4.0
[conda] libcurand 10.3.5.119 0 nvidia/label/cuda-12.4.0
[conda] libcurand-dev 10.3.5.119 0 nvidia/label/cuda-12.4.0
[conda] libcurand-static 10.3.5.119 0 nvidia/label/cuda-12.4.0
[conda] libcusolver 11.6.1.9 0 nvidia
[conda] libcusolver-dev 11.6.0.99 0 nvidia/label/cuda-12.4.0
[conda] libcusolver-static 11.6.0.99 0 nvidia/label/cuda-12.4.0
[conda] libcusparse 12.3.1.170 0 nvidia
[conda] libcusparse-dev 12.3.0.142 0 nvidia/label/cuda-12.4.0
[conda] libcusparse-static 12.3.0.142 0 nvidia/label/cuda-12.4.0
[conda] liblapack 3.9.0 16_linux64_mkl conda-forge
[conda] liblapacke 3.9.0 16_linux64_mkl conda-forge
[conda] libnvjitlink 12.4.127 0 nvidia
[conda] libnvjitlink-dev 12.4.99 0 nvidia/label/cuda-12.4.0
[conda] mkl 2022.1.0 h84fe81f_915 conda-forge
[conda] mkl-devel 2022.1.0 ha770c72_916 conda-forge
[conda] mkl-include 2022.1.0 h84fe81f_915 conda-forge
[conda] numpy 1.26.4 pypi_0 pypi
[conda] nvidia-cublas-cu12 12.6.4.1 pypi_0 pypi
[conda] nvidia-cuda-cupti-cu12 12.6.80 pypi_0 pypi
[conda] nvidia-cuda-nvrtc-cu12 12.6.77 pypi_0 pypi
[conda] nvidia-cuda-runtime-cu12 12.6.77 pypi_0 pypi
[conda] nvidia-cudnn-cu12 9.5.1.17 pypi_0 pypi
[conda] nvidia-cufft-cu12 11.3.0.4 pypi_0 pypi
[conda] nvidia-curand-cu12 10.3.7.77 pypi_0 pypi
[conda] nvidia-cusolver-cu12 11.7.1.2 pypi_0 pypi
[conda] nvidia-cusparse-cu12 12.5.4.2 pypi_0 pypi
[conda] nvidia-cusparselt-cu12 0.6.3 pypi_0 pypi
[conda] nvidia-nccl-cu12 2.21.5 pypi_0 pypi
[conda] nvidia-nvjitlink-cu12 12.6.85 pypi_0 pypi
[conda] nvidia-nvtx-cu12 12.6.77 pypi_0 pypi
[conda] pytorch-cuda 12.4 hc786d27_7 pytorch-nightly
[conda] pytorch-mutex 1.0 cuda pytorch-nightly
[conda] pytorch-triton 3.2.0+git4b3bb1f8 pypi_0 pypi
[conda] torch 2.7.0.dev20250209+cu126 pypi_0 pypi
[conda] torchtriton 3.1.0+cf34004b8a py312 pytorch-nightly
cc @H-Huang @awgu @kwen2501 @wanchaol @fegin @fduwjj @wz337 @wconstab @d4l3k @c-p-i-o
| true
|
2,841,227,955
|
fix #145064 , added error checking for empty tensor in _pdist_forward
|
AmalDevHaridevan
|
closed
|
[
"oncall: distributed",
"module: cpu",
"triaged",
"module: mkldnn",
"open source",
"NNC",
"ciflow/trunk",
"release notes: quantization",
"topic: not user facing",
"module: inductor",
"module: dynamo",
"module: compiled autograd"
] | 5
|
NONE
|
Fixes #145064
Added a `TORCH_CHECK` to prevent iterating over a nullptr and causing a segfault.
We can verify this by running the following simple test:
```python
import torch
print(torch.__version__)
input = torch.rand((11, 15,3))
print("Running test with non empty tensor")
print("="*50)
print(torch.ops.aten._pdist_forward(input, p=2.0))
print("="*50)
print("Running test with empty tensor")
print("="*50)
input = torch.rand((11, 15, 0))
print(torch.ops.aten._pdist_forward(input, p=2.0))
```
# Before fix:
```
2.7.0a0+git464e572
Running test with non empty tensor
==================================================
tensor([1.2083, 1.4906, 1.2710, 1.4653, 1.6329, 1.5641, 1.6864, 1.3509, 1.3771,
1.8574, 0.9800, 1.5987, 1.4999, 1.4619, 1.6616, 1.7614, 1.3761, 1.3119,
1.3935, 1.4656, 1.6993, 1.3452, 1.4604, 1.0390, 1.2662, 1.6565, 1.5740,
1.3851, 1.8369, 1.6037, 1.5965, 1.3896, 1.1114, 1.4699, 1.6736, 1.5287,
1.2168, 1.5095, 1.6844, 1.4027, 1.7431, 1.2226, 1.4504, 1.1963, 1.5279,
1.2033, 1.1480, 1.2056, 1.0587, 1.3939, 1.3022, 1.5384, 1.3645, 1.6349,
1.2800])
==================================================
Running test with empty tensor
==================================================
Segmentation fault (core dumped)
```
# After fix
```
2.7.0a0+git464e572
Running test with non empty tensor
==================================================
tensor([1.5208, 1.5068, 1.2832, 1.4650, 1.9227, 1.9052, 1.9649, 1.9571, 1.8125,
1.7174, 1.8387, 1.6939, 1.6634, 1.8099, 1.3245, 1.7073, 1.4311, 1.8628,
1.6667, 1.6101, 1.8348, 1.4548, 1.3954, 1.5973, 1.7277, 1.8505, 1.3647,
1.6524, 1.6583, 0.9928, 1.2633, 1.5329, 1.7163, 1.2425, 1.3743, 2.0104,
1.8953, 1.4519, 1.8834, 1.5887, 2.0280, 1.1968, 1.2921, 1.4689, 1.5236,
1.7794, 1.4897, 1.5896, 1.6168, 1.6176, 1.6705, 1.8576, 1.5708, 1.2780,
1.3247])
==================================================
Running test with empty tensor
==================================================
Traceback (most recent call last):
File "/home/harid/test.py", line 12, in <module>
print(torch.ops.aten._pdist_forward(input, p=2.0))
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/harid/pytorch/torch/_ops.py", line 1156, in __call__
return self._op(*args, **(kwargs or {}))
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
RuntimeError: Input tensor is empty
```
cc @H-Huang @awgu @kwen2501 @wanchaol @fegin @fduwjj @wz337 @wconstab @d4l3k @c-p-i-o @jgong5 @mingfeima @XiaobingSuper @sanchitintel @ashokei @jingxu10 @gujinghui @PenghuiCheng @jianyuh @min-jean-cho @yanbing-j @Guobing-Chen @Xia-Weiwen @snadampal @EikanWang @voznesenskym @penguinwu @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov @LucasLLC @MeetVadakkanchery @mhorowitz @pradeepfn @ekr0 @xmfan
| true
|
2,841,219,771
|
Added error checking for empty Tensor in _pdist_forward
|
AmalDevHaridevan
|
closed
|
[
"module: inductor"
] | 2
|
NONE
|
Fixes #145064
Added a `TORCH_CHECK` to prevent iterating over a nullptr and causing a segfault.
We can verify this by running the following simple test:
```python
import torch
print(torch.__version__)
input = torch.rand((11, 15,3))
print("Running test with non empty tensor")
print("="*50)
print(torch.ops.aten._pdist_forward(input, p=2.0))
print("="*50)
print("Running test with empty tensor")
print("="*50)
input = torch.rand((11, 15, 0))
print(torch.ops.aten._pdist_forward(input, p=2.0))
```
# Before fix:
```
2.7.0a0+git464e572
Running test with non empty tensor
==================================================
tensor([1.2083, 1.4906, 1.2710, 1.4653, 1.6329, 1.5641, 1.6864, 1.3509, 1.3771,
1.8574, 0.9800, 1.5987, 1.4999, 1.4619, 1.6616, 1.7614, 1.3761, 1.3119,
1.3935, 1.4656, 1.6993, 1.3452, 1.4604, 1.0390, 1.2662, 1.6565, 1.5740,
1.3851, 1.8369, 1.6037, 1.5965, 1.3896, 1.1114, 1.4699, 1.6736, 1.5287,
1.2168, 1.5095, 1.6844, 1.4027, 1.7431, 1.2226, 1.4504, 1.1963, 1.5279,
1.2033, 1.1480, 1.2056, 1.0587, 1.3939, 1.3022, 1.5384, 1.3645, 1.6349,
1.2800])
==================================================
Running test with empty tensor
==================================================
Segmentation fault (core dumped)
```
# After fix
```
2.7.0a0+git464e572
Running test with non empty tensor
==================================================
tensor([1.5208, 1.5068, 1.2832, 1.4650, 1.9227, 1.9052, 1.9649, 1.9571, 1.8125,
1.7174, 1.8387, 1.6939, 1.6634, 1.8099, 1.3245, 1.7073, 1.4311, 1.8628,
1.6667, 1.6101, 1.8348, 1.4548, 1.3954, 1.5973, 1.7277, 1.8505, 1.3647,
1.6524, 1.6583, 0.9928, 1.2633, 1.5329, 1.7163, 1.2425, 1.3743, 2.0104,
1.8953, 1.4519, 1.8834, 1.5887, 2.0280, 1.1968, 1.2921, 1.4689, 1.5236,
1.7794, 1.4897, 1.5896, 1.6168, 1.6176, 1.6705, 1.8576, 1.5708, 1.2780,
1.3247])
==================================================
Running test with empty tensor
==================================================
Traceback (most recent call last):
File "/home/harid/test.py", line 12, in <module>
print(torch.ops.aten._pdist_forward(input, p=2.0))
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/harid/pytorch/torch/_ops.py", line 1156, in __call__
return self._op(*args, **(kwargs or {}))
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
RuntimeError: Input tensor is empty
```
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
| true
|
2,841,186,601
|
DISABLED test_insignificant_strides (__main__.SDPAPatternRewriterCudaTests)
|
pruthvistony
|
closed
|
[
"module: rocm",
"triaged",
"skipped"
] | 2
|
COLLABORATOR
|
Platforms: rocm
This test was disabled because it is failing on main branch ([recent examples](https://torch-ci.com/failure?failureCaptures=%5B%22inductor%2Ftest_fused_attention.py%3A%3ASDPAPatternRewriterCudaTests%3A%3Atest_insignificant_strides%22%5D)).
cc @jeffdaily @sunway513 @jithunnair-amd @ROCmSupport @dllehr-amd @jataylo @hongxiayang @naromero77amd
| true
|
2,841,159,931
|
Memory access fault by GPU node when training on a 7900XTX
|
mesalon
|
closed
|
[] | 2
|
NONE
|
### 🐛 Describe the bug
When running a basic model trainer, I get this error.
```
(venv) mesalon@desktop-mesalon:~/markov/gpt2$ python3 trainer.py
Loaded pretrained model.
loss_type=None` was set in the config but it is unrecognised.Using the default loss: `ForCausalLMLoss`.
Training Epoch 1: 25%|█████████████████████▉ | 4430/17762 [00:59<03:01, 73.34it/s, loss=3.08]
Memory access fault by GPU node-1 (Agent handle: 0x5d8b398786a0) on address 0x774546a00000. Reason: Page not present or supervisor privilege.
Aborted (core dumped)
```
Here is the code I use to train the LLM.
https://gist.github.com/Mesalon/f4482131fccc7a210f87a784cda0786f
Please help.
### Versions
```
PyTorch version: 2.7.0.dev20250208+rocm6.3
Is debug build: False
CUDA used to build PyTorch: N/A
ROCM used to build PyTorch: 6.3.42131-fa1d09cbd
OS: Linux Mint 21.3 (x86_64)
GCC version: (Ubuntu 11.4.0-1ubuntu1~22.04) 11.4.0
Clang version: Could not collect
CMake version: version 3.25.0
Libc version: glibc-2.35
Python version: 3.10.12 (main, Jan 17 2025, 14:35:34) [GCC 11.4.0] (64-bit runtime)
Python platform: Linux-6.8.0-51-generic-x86_64-with-glibc2.35
Is CUDA available: True
CUDA runtime version: Could not collect
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration: Radeon RX 7900 XTX (gfx1100)
Nvidia driver version: Could not collect
cuDNN version: Could not collect
HIP runtime version: 6.3.42131
MIOpen runtime version: 3.3.0
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 48 bits physical, 48 bits virtual
Byte Order: Little Endian
CPU(s): 16
On-line CPU(s) list: 0-15
Vendor ID: AuthenticAMD
Model name: AMD Ryzen 7 7800X3D 8-Core Processor
CPU family: 25
Model: 97
Thread(s) per core: 2
Core(s) per socket: 8
Socket(s): 1
Stepping: 2
CPU max MHz: 5050.0000
CPU min MHz: 400.0000
BogoMIPS: 8384.52
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ht syscall nx mmxext fxsr_opt pdpe1gb rdtscp lm constant_tsc rep_good amd_lbr_v2 nopl nonstop_tsc cpuid extd_apicid aperfmperf rapl pni pclmulqdq monitor ssse3 fma cx16 sse4_1 sse4_2 movbe popcnt aes xsave avx f16c rdrand lahf_lm cmp_legacy svm extapic cr8_legacy abm sse4a misalignsse 3dnowprefetch osvw ibs skinit wdt tce topoext perfctr_core perfctr_nb bpext perfctr_llc mwaitx cpb cat_l3 cdp_l3 hw_pstate ssbd mba perfmon_v2 ibrs ibpb stibp ibrs_enhanced vmmcall fsgsbase bmi1 avx2 smep bmi2 erms invpcid cqm rdt_a avx512f avx512dq rdseed adx smap avx512ifma clflushopt clwb avx512cd sha_ni avx512bw avx512vl xsaveopt xsavec xgetbv1 xsaves cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local user_shstk avx512_bf16 clzero irperf xsaveerptr rdpru wbnoinvd cppc arat npt lbrv svm_lock nrip_save tsc_scale vmcb_clean flushbyasid decodeassists pausefilter pfthreshold avic v_vmsave_vmload vgif x2avic v_spec_ctrl vnmi avx512vbmi umip pku ospke avx512_vbmi2 gfni vaes vpclmulqdq avx512_vnni avx512_bitalg avx512_vpopcntdq rdpid overflow_recov succor smca fsrm flush_l1d
Virtualization: AMD-V
L1d cache: 256 KiB (8 instances)
L1i cache: 256 KiB (8 instances)
L2 cache: 8 MiB (8 instances)
L3 cache: 96 MiB (1 instance)
NUMA node(s): 1
NUMA node0 CPU(s): 0-15
Vulnerability Gather data sampling: Not affected
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Not affected
Vulnerability Reg file data sampling: Not affected
Vulnerability Retbleed: Not affected
Vulnerability Spec rstack overflow: Mitigation; Safe RET
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Enhanced / Automatic IBRS; IBPB conditional; STIBP always-on; RSB filling; PBRSB-eIBRS Not affected; BHI Not affected
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Versions of relevant libraries:
[pip3] numpy==2.2.2
[pip3] nvidia-cublas-cu12==12.4.5.8
[pip3] nvidia-cuda-cupti-cu12==12.4.127
[pip3] nvidia-cuda-nvrtc-cu12==12.4.127
[pip3] nvidia-cuda-runtime-cu12==12.4.127
[pip3] nvidia-cudnn-cu12==9.1.0.70
[pip3] nvidia-cufft-cu12==11.2.1.3
[pip3] nvidia-curand-cu12==10.3.5.147
[pip3] nvidia-cusolver-cu12==11.6.1.9
[pip3] nvidia-cusparse-cu12==12.3.1.170
[pip3] nvidia-cusparselt-cu12==0.6.2
[pip3] nvidia-nccl-cu12==2.21.5
[pip3] nvidia-nvjitlink-cu12==12.4.127
[pip3] nvidia-nvtx-cu12==12.4.127
[pip3] pytorch-triton-rocm==3.2.0+git4b3bb1f8
[pip3] torch==2.7.0.dev20250208+rocm6.3
[pip3] torchaudio==2.6.0.dev20250209+rocm6.3
[pip3] torchvision==0.22.0.dev20250209+rocm6.3
[pip3] triton==3.2.0
[conda] Could not collect
```
| true
|
2,841,120,246
|
Generalize mixed precision in DDP
|
zhangxiaoli73
|
closed
|
[
"oncall: distributed",
"open source",
"Merged",
"ciflow/trunk",
"release notes: distributed (ddp)"
] | 9
|
CONTRIBUTOR
|
**Motivation:**
1. Generalize mixed precision in DDP.
2. Enable `SyncBatchNorm` for XPU device.
cc @H-Huang @awgu @kwen2501 @wanchaol @fegin @fduwjj @wz337 @wconstab @d4l3k @c-p-i-o @gujinghui @guangyey
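For context, a minimal sketch of the `SyncBatchNorm` conversion path this touches; the `"xpu"` device string and the toy model below are illustrative assumptions, not part of this PR:
```python
import torch

# Assumes torch.distributed has already been initialized with a backend that
# supports the target device; the "xpu" string below is an illustrative assumption.
model = torch.nn.Sequential(
    torch.nn.Conv2d(3, 8, kernel_size=3),
    torch.nn.BatchNorm2d(8),
)
# Replaces every BatchNorm*d layer with SyncBatchNorm before wrapping in DDP.
model = torch.nn.SyncBatchNorm.convert_sync_batchnorm(model)
model = model.to("xpu")
```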
| true
|
2,841,088,234
|
_is_gcc Function Incorrectly Classifies clang++ as g++
|
AmalDevHaridevan
|
closed
|
[
"open source",
"topic: not user facing",
"module: inductor"
] | 3
|
NONE
|
Fixes #146712
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
| true
|
2,841,082,377
|
DISABLED test_inductor_all_gather_into_tensor_coalesced (__main__.CompileTest)
|
pytorch-bot[bot]
|
open
|
[
"triaged",
"module: flaky-tests",
"skipped",
"module: c10d"
] | 86
|
NONE
|
Platforms: linux, rocm, inductor
This test was disabled because it is failing in CI. See [recent examples](https://hud.pytorch.org/flakytest?name=test_inductor_all_gather_into_tensor_coalesced&suite=CompileTest&limit=100) and the most recent trunk [workflow logs](https://github.com/pytorch/pytorch/runs/36922272925).
Over the past 3 hours, it has been determined flaky in 7 workflow(s) with 14 failures and 7 successes.
**Debugging instructions (after clicking on the recent samples link):**
DO NOT ASSUME THINGS ARE OKAY IF THE CI IS GREEN. We now shield flaky tests from developers so CI will thus be green but it will be harder to parse the logs.
To find relevant log snippets:
1. Click on the workflow logs linked above
2. Click on the Test step of the job so that it is expanded. Otherwise, the grepping will not work.
3. Grep for `test_inductor_all_gather_into_tensor_coalesced`
4. There should be several instances run (as flaky tests are rerun in CI) from which you can study the logs.
<details><summary>Sample error message</summary>
```
Traceback (most recent call last):
File "/var/lib/jenkins/workspace/test/distributed/test_c10d_functional_native.py", line 647, in setUp
dist.init_process_group(
~~~~~~~~~~~~~~~~~~~~~~~^
backend="fake",
^^^^^^^^^^^^^^^
...<2 lines>...
store=store,
^^^^^^^^^^^^
)
^
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/distributed/c10d_logger.py", line 81, in wrapper
return func(*args, **kwargs)
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/distributed/c10d_logger.py", line 95, in wrapper
func_return = func(*args, **kwargs)
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/distributed/distributed_c10d.py", line 1637, in init_process_group
raise ValueError("trying to initialize the default process group twice!")
ValueError: trying to initialize the default process group twice!
```
</details>
Test file path: `distributed/test_c10d_functional_native.py`
cc @clee2000 @wdvr
| true
|
2,841,024,597
|
chore: fix typos in error messages in FSDP
|
universome
|
closed
|
[
"oncall: distributed",
"open source",
"Merged",
"ciflow/trunk",
"release notes: distributed (fsdp)"
] | 7
|
CONTRIBUTOR
|
Fixes two small typos in FSDP error messages
cc @H-Huang @awgu @kwen2501 @wanchaol @fegin @fduwjj @wz337 @wconstab @d4l3k @c-p-i-o
| true
|
2,840,988,308
|
`torch.library.register_fake` respects only positional order, but not kwargs order
|
HanGuo97
|
open
|
[
"triaged",
"module: library",
"oncall: pt2",
"module: pt2-dispatcher"
] | 3
|
CONTRIBUTOR
|
### 🐛 Describe the bug
It seems like `torch.library.register_fake` requires the _order_ of arguments in the fake implementation to exactly match the function being registered, while the argument names themselves can be arbitrary.
```python
import torch
import numpy as np
from torch import Tensor
# Example 1: an operator without data-dependent output shape
@torch.library.custom_op("mylib::custom_linear", mutates_args=())
def custom_linear(x: Tensor, weight: Tensor, bias: Tensor) -> Tensor:
    raise NotImplementedError("Implementation goes here")
@torch.library.register_fake("mylib::custom_linear")
def _(x, weight, bias):
    print(f"weight: {weight}, bias: {bias}")
    assert x.dim() == 2
    assert weight.dim() == 2
    assert bias.dim() == 1
    assert x.shape[1] == weight.shape[1]
    assert weight.shape[0] == bias.shape[0]
    assert x.device == weight.device
    return (x @ weight.t()) + bias
with torch._subclasses.fake_tensor.FakeTensorMode():
    x = torch.randn(2, 3)
    w = torch.randn(3, 3)
    b = torch.randn(3)
    y = torch.ops.mylib.custom_linear(x, w, b)
# ===> we swap the order of bias and weight
@torch.library.register_fake("mylib::custom_linear")
def _(x, bias, weight):
    print(f"Swapped bias and weight")
    print(f"weight: {weight}, bias: {bias}")
    assert x.dim() == 2
    assert weight.dim() == 2
    assert bias.dim() == 1
    assert x.shape[1] == weight.shape[1]
    assert weight.shape[0] == bias.shape[0]
    assert x.device == weight.device
    return (x @ weight.t()) + bias
with torch._subclasses.fake_tensor.FakeTensorMode():
    x = torch.randn(2, 3)
    w = torch.randn(3, 3)
    b = torch.randn(3)
    y = torch.ops.mylib.custom_linear(x, w, b)
```
The above will print
```
weight: FakeTensor(..., size=(3, 3)), bias: FakeTensor(..., size=(3,))
Swapped bias and weight
weight: FakeTensor(..., size=(3,)), bias: FakeTensor(..., size=(3, 3))
```
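If this positional binding is indeed the contract, a defensive pattern is to mirror the schema order in the fake impl and treat the parameter names as documentation only. A minimal sketch (the `mylib::custom_linear2` op and its meta implementation below are illustrative, not taken from the report):
```python
import torch
from torch import Tensor

@torch.library.custom_op("mylib::custom_linear2", mutates_args=())
def custom_linear2(x: Tensor, weight: Tensor, bias: Tensor) -> Tensor:
    raise NotImplementedError("real kernel goes here")

# Fake impl: parameters listed in exactly the schema's order (x, weight, bias),
# returning an empty tensor with the expected output shape.
@torch.library.register_fake("mylib::custom_linear2")
def _(x, weight, bias):
    return x.new_empty(x.shape[0], weight.shape[0])
```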
### Versions
NA
cc @anjali411 @chauhang @penguinwu @zou3519 @bdhirsh @yf225
| true
|
2,840,974,180
|
`Illegal Instruction` Error on Raspberry Pi 4 with `torch.nn.functional.interpolate` and `recompute_scale_factor=True` (Torch 2.6.0)
|
Chizkiyahu
|
closed
|
[
"high priority",
"triage review",
"module: onnx",
"module: regression",
"module: arm"
] | 2
|
CONTRIBUTOR
|
### 🐛 Describe the bug
# Description
When using `torch.nn.functional.interpolate` with `recompute_scale_factor=True` on a **Raspberry Pi 4**, PyTorch 2.6.0 causes an **Illegal Instruction error** during ONNX export.
# Code
```python
import torch
class Module(torch.nn.Module):
    def forward(self, x):
        # this line gives an Illegal instruction error when:
        # - exported via torch.onnx.export
        # - torch 2.6.0
        # - raspberry pi 4
        # - recompute_scale_factor=True
        return torch.nn.functional.interpolate(x, scale_factor=0.5, recompute_scale_factor=True)
model = Module()
shape = (1, 3, 10, 10)
dummy_inputs = tuple([torch.randn(*shape).reshape(*shape)])
# Running the model works fine
res = model(*dummy_inputs)
# Exporting to ONNX causes core dump
torch.onnx.export(model, opset_version=20, f="./m.onnx", args=dummy_inputs)
```
# **Error Output**
```
Illegal instruction (core dumped)
```
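Since the description above ties the crash to `recompute_scale_factor=True`, one way to narrow it down (untested on the Pi 4, purely a comparison sketch) is to export the same downscale with an explicit output size so that code path is never taken:
```python
import torch

class Module(torch.nn.Module):
    def forward(self, x):
        # Same 0.5x downscale of the 10x10 dummy input, expressed as an explicit
        # size instead of scale_factor + recompute_scale_factor=True.
        return torch.nn.functional.interpolate(x, size=(5, 5))

model = Module()
dummy_inputs = (torch.randn(1, 3, 10, 10),)
torch.onnx.export(model, opset_version=20, f="./m_alt.onnx", args=dummy_inputs)
```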
## **Device and Environment Details**
| Device | PyTorch Version | Execution Type | Status |
|----------------------------|----------------|----------------|---------|
| MacBook Pro M4 (native) | 2.6.0 | Native | ✅ Works |
| MacBook Pro M4 (Docker) | 2.6.0 | Docker | ✅ Works |
| Raspberry Pi 4 (native) | 2.5.1 | Native | ✅ Works |
| Raspberry Pi 4 (Docker) | 2.5.1 | Docker | ✅ Works |
| Raspberry Pi 4 (native) | 2.6.0 | Native | ❌ **Fails** |
| Raspberry Pi 4 (Docker) | 2.6.0 | Docker | ❌ **Fails** |
| Raspberry Pi 5 (native) | 2.6.0 | Native | ✅ Works |
# raspi 4 vs 5 cpu Features
running `cat /proc/cpuinfo | grep 'Fe' | uniq`
## raspi 4
```bash
Features : fp asimd evtstrm crc32 cpuid
```
## raspi 5
```bash
Features : fp asimd evtstrm aes pmull sha1 sha2 crc32 atomics fphp asimdhp cpuid asimdrdm lrcpc dcpop asimddp
```
# Similar bug
https://github.com/pytorch/pytorch/issues/146792
### Versions
Collecting environment information...
PyTorch version: 2.6.0+cpu
Is debug build: False
CUDA used to build PyTorch: None
ROCM used to build PyTorch: N/A
OS: Debian GNU/Linux 12 (bookworm) (aarch64)
GCC version: Could not collect
Clang version: Could not collect
CMake version: Could not collect
Libc version: glibc-2.36
Python version: 3.11.11 (main, Feb 4 2025, 13:44:55) [GCC 12.2.0] (64-bit runtime)
Python platform: Linux-5.15.32-v8+-aarch64-with-glibc2.36
Is CUDA available: False
CUDA runtime version: No CUDA
CUDA_MODULE_LOADING set to: N/A
GPU models and configuration: No CUDA
Nvidia driver version: No CUDA
cuDNN version: No CUDA
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: aarch64
CPU op-mode(s): 32-bit, 64-bit
Byte Order: Little Endian
CPU(s): 4
On-line CPU(s) list: 0-3
Vendor ID: ARM
Model name: Cortex-A72
Model: 3
Thread(s) per core: 1
Core(s) per cluster: 4
Socket(s): -
Cluster(s): 1
Stepping: r0p3
CPU(s) scaling MHz: 100%
CPU max MHz: 1800.0000
CPU min MHz: 600.0000
BogoMIPS: 108.00
Flags: fp asimd evtstrm crc32 cpuid
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Spec store bypass: Vulnerable
Vulnerability Spectre v1: Mitigation; __user pointer sanitization
Vulnerability Spectre v2: Vulnerable
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Versions of relevant libraries:
[pip3] numpy==1.26.4
[pip3] onnx==1.16.1
[pip3] onnxruntime==1.19.2
[pip3] onnxruntime_extensions==0.13.0
[pip3] torch==2.6.0
[pip3] torchvision==0.21.0
[pip3] uni_pytorch==0.0.0
[conda] Could not collect
cc @ezyang @gchanan @zou3519 @kadeng @msaroufim @malfet @snadampal @milpuz01
| true
|
2,840,955,121
|
AttributeError: partially initialized module 'torch._dynamo' has no attribute 'optimize'
|
fzimmermann89
|
closed
|
[
"oncall: pt2"
] | 1
|
CONTRIBUTOR
|
### 🐛 Describe the bug
In a fresh conda/pip CPU-only torch 2.6 environment,
```
conda create -n dynamo python=3.12 -c conda-forge
conda activate dynamo
pip install --upgrade --index-url=https://download.pytorch.org/whl/cpu --extra-index-url https://pypi.org/simple/ einops "torch>=2.6" torchvision
```
trying to use torch.compile
```
import torch
def test(x: torch.Tensor):
    return x
torch.compile(test)
```
results in an AttributeError: partially initialized module 'torch._dynamo' has no attribute 'optimize' (most likely due to a circular import)
What am I doing wrong here?
### Error logs
Traceback (most recent call last):
File "/home/zimmer08/code/mrpro/../profile.py", line 28, in <module>
torch.compile(test)
File "/home/zimmer08/envs/envs/dynamoerror/lib/python3.12/site-packages/torch/__init__.py", line 2565, in compile
return torch._dynamo.optimize(
^^^^^^^^^^^^^
File "/home/zimmer08/envs/envs/dynamoerror/lib/python3.12/site-packages/torch/__init__.py", line 2679, in __getattr__
return importlib.import_module(f".{name}", __name__)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/zimmer08/envs/envs/dynamoerror/lib/python3.12/importlib/__init__.py", line 90, in import_module
return _bootstrap._gcd_import(name[level:], package, level)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "<frozen importlib._bootstrap>", line 1387, in _gcd_import
File "<frozen importlib._bootstrap>", line 1360, in _find_and_load
File "<frozen importlib._bootstrap>", line 1331, in _find_and_load_unlocked
File "<frozen importlib._bootstrap>", line 935, in _load_unlocked
File "<frozen importlib._bootstrap_external>", line 999, in exec_module
File "<frozen importlib._bootstrap>", line 488, in _call_with_frames_removed
File "/home/zimmer08/envs/envs/dynamoerror/lib/python3.12/site-packages/torch/_dynamo/__init__.py", line 3, in <module>
from . import convert_frame, eval_frame, resume_execution
File "/home/zimmer08/envs/envs/dynamoerror/lib/python3.12/site-packages/torch/_dynamo/convert_frame.py", line 6, in <module>
import cProfile
File "/home/zimmer08/envs/envs/dynamoerror/lib/python3.12/cProfile.py", line 12, in <module>
import profile as _pyprofile
File "/home/zimmer08/code/profile.py", line 28, in <module>
torch.compile(test)
File "/home/zimmer08/envs/envs/dynamoerror/lib/python3.12/site-packages/torch/__init__.py", line 2565, in compile
return torch._dynamo.optimize(
^^^^^^^^^^^^^^^^^^^^^^
AttributeError: partially initialized module 'torch._dynamo' has no attribute 'optimize' (most likely due to a circular import)
### Versions
```
conda create -n dynamo python=3.12 -c conda-forge
conda activate dynamo
pip install --upgrade --index-url=https://download.pytorch.org/whl/cpu --extra-index-url https://pypi.org/simple/ einops "torch>=2.6" torchvision
```
Package Version
----------------- ----------
einops 0.8.1
filelock 3.17.0
fsspec 2025.2.0
Jinja2 3.1.5
MarkupSafe 3.0.2
mpmath 1.3.0
networkx 3.4.2
numpy 2.2.2
pillow 11.1.0
pip 25.0
setuptools 75.8.0
sympy 1.13.1
torch 2.6.0+cpu
torchvision 0.21.0+cpu
typing_extensions 4.12.2
wheel 0.45.1
Name Version Build Channel
────────────────────────────────────────────────────────────────
_libgcc_mutex 0.1 conda_forge conda-forge
_openmp_mutex 4.5 2_gnu conda-forge
bzip2 1.0.8 h4bc722e_7 conda-forge
ca-certificates 2025.1.31 hbcca054_0 conda-forge
ld_impl_linux-64 2.43 h712a8e2_2 conda-forge
libexpat 2.6.4 h5888daf_0 conda-forge
libffi 3.4.2 h7f98852_5 conda-forge
libgcc 14.2.0 h77fa898_1 conda-forge
libgcc-ng 14.2.0 h69a702a_1 conda-forge
libgomp 14.2.0 h77fa898_1 conda-forge
liblzma 5.6.4 hb9d3cd8_0 conda-forge
libnsl 2.0.1 hd590300_0 conda-forge
libsqlite 3.48.0 hee588c1_1 conda-forge
libuuid 2.38.1 h0b41bf4_0 conda-forge
libxcrypt 4.4.36 hd590300_1 conda-forge
libzlib 1.3.1 hb9d3cd8_2 conda-forge
ncurses 6.5 h2d0b736_3 conda-forge
openssl 3.4.0 h7b32b05_1 conda-forge
pip 25.0 pyh8b19718_0 conda-forge
python 3.12.8 h9e4cc4f_1_cpython conda-forge
readline 8.2 h8228510_1 conda-forge
setuptools 75.8.0 pyhff2d567_0 conda-forge
tk 8.6.13 noxft_h4845f30_101 conda-forge
tzdata 2025a h78e105d_0 conda-forge
wheel 0.45.1 pyhd8ed1ab_1 conda-forge
PyTorch version: 2.6.0+cpu
Is debug build: False
CUDA used to build PyTorch: None
ROCM used to build PyTorch: N/A
OS: Ubuntu 22.04.5 LTS (x86_64)
GCC version: (Ubuntu 11.4.0-1ubuntu1~22.04) 11.4.0
Clang version: Could not collect
CMake version: Could not collect
Libc version: glibc-2.35
Python version: 3.12.8 | packaged by conda-forge | (main, Dec 5 2024, 14:24:40) [GCC 13.3.0] (64-bit runtime)
Python platform: Linux-5.15.153.2-microsoft-standard-WSL2-x86_64-with-glibc2.35
Is CUDA available: False
CUDA runtime version: No CUDA
CUDA_MODULE_LOADING set to: N/A
GPU models and configuration: No CUDA
Nvidia driver version: No CUDA
cuDNN version: No CUDA
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 39 bits physical, 48 bits virtual
Byte Order: Little Endian
CPU(s): 8
On-line CPU(s) list: 0-7
Vendor ID: GenuineIntel
Model name: 12th Gen Intel(R) Core(TM) i5-1235U
CPU family: 6
Model: 154
Thread(s) per core: 2
Core(s) per socket: 4
Socket(s): 1
Stepping: 4
BogoMIPS: 4991.99
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ss ht syscall nx pdpe1gb rdtscp lm constant_tsc rep_good nopl xtopology cpuid pni pclmulqdq ssse3 fma cx16 sse4_1 sse4_2 movbe popcnt aes xsave avx f16c rdrand hypervisor lahf_lm abm 3dnowprefetch ssbd ibrs ibpb stibp ibrs_enhanced fsgsbase bmi1 avx2 smep bmi2 erms invpcid rdseed adx smap clflushopt clwb sha_ni xsaveopt xsavec xgetbv1 xsaves umip gfni vaes vpclmulqdq rdpid fsrm md_clear flush_l1d arch_capabilities
Hypervisor vendor: Microsoft
Virtualization type: full
L1d cache: 192 KiB (4 instances)
L1i cache: 128 KiB (4 instances)
L2 cache: 5 MiB (4 instances)
L3 cache: 12 MiB (1 instance)
Vulnerability Gather data sampling: Not affected
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Not affected
Vulnerability Retbleed: Mitigation; Enhanced IBRS
Vulnerability Spec rstack overflow: Not affected
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl and seccomp
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Enhanced IBRS, IBPB conditional, RSB filling, PBRSB-eIBRS SW sequence
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Versions of relevant libraries:
[pip3] numpy==2.2.2
[pip3] torch==2.6.0+cpu
[pip3] torchvision==0.21.0+cpu
[conda] numpy 2.2.2 pypi_0 pypi
[conda] torch 2.6.0+cpu pypi_0 pypi
[conda] torchvision 0.21.0+cpu pypi_0 pypi
cc @chauhang @penguinwu
| true
|
2,840,927,035
|
Add mechanism for small intra-kernel reductions
|
drisspg
|
closed
|
[
"module: inductor",
"ciflow/inductor"
] | 2
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #146801
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
| true
|
2,840,908,250
|
[inductor] Remove _get_grid_fn_str
|
jansel
|
closed
|
[
"Merged",
"ciflow/trunk",
"topic: not user facing",
"module: inductor",
"ciflow/inductor"
] | 3
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #146800
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
| true
|
2,840,877,532
|
[MPS] cholesky ex version
|
Isalia20
|
closed
|
[
"triaged",
"open source",
"Merged",
"topic: improvements",
"release notes: mps",
"ciflow/mps"
] | 6
|
COLLABORATOR
|
PR #145701 didn't have the experimental version of cholesky. This PR adds that version.
| true
|
2,840,719,237
|
Torch 2.6 Unexpected Graph Break with SubConfigProxy
|
chengzeyi
|
open
|
[
"triaged",
"module: regression",
"oncall: pt2",
"module: graph breaks",
"module: compile ux"
] | 4
|
CONTRIBUTOR
|
### 🐛 Describe the bug
When I run the following code, which checks a value from a custom config module (similar to `torch._inductor.config`), I encounter an unexpected graph break with the latest torch 2.6.0 that does not occur with torch 2.5.0. This causes a severe performance regression when running FLUX models with ParaAttention.
```python
with unittest.mock.patch.object(
    torch_ring_attention,
    "_convert_to_f32",
    not para_attn.config.attention.allow_reduced_precision_compute,
    create=True,
):
    ...  # body elided; see the linked source below
```
From
https://github.com/chengzeyi/ParaAttention/blob/3b85ae1e53f88d5995c58a6b439b452d33f61aab/src/para_attn/para_attn_interface.py#L161
```
Graph break in user code at /home/zeyi/repos/ParaAttention/src/para_attn/para_attn_interface.py:142
Reason: Unsupported: 'inline in skipfiles: SubConfigProxy.__getattr__ | __getattr__ /home/zeyi/pyvenv/default/lib/python3.10/site-packages/torch/utils/_config_module.py, skipped according trace_rules.lookup SKIP_DIRS'
User code traceback:
File "/home/zeyi/repos/ParaAttention/src/para_attn/para_attn_interface.py", line 216, in ring_attn_func
return RingAttnFunc.apply(
File "/home/zeyi/repos/ParaAttention/src/para_attn/para_attn_interface.py", line 142, in forward
not para_attn.config.attention.allow_reduced_precision_compute,
Traceback (most recent call last):
File "/home/zeyi/pyvenv/default/lib/python3.10/site-packages/torch/_dynamo/variables/user_defined.py", line 1053, in var_getattr
subobj = self._getattr_static(name)
File "/home/zeyi/pyvenv/default/lib/python3.10/site-packages/torch/_dynamo/variables/user_defined.py", line 1001, in _getattr_static
subobj = self.value.__getattribute__(name)
AttributeError: 'SubConfigProxy' object has no attribute 'allow_reduced_precision_compute'
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/home/zeyi/pyvenv/default/lib/python3.10/site-packages/torch/_dynamo/symbolic_convert.py", line 659, in wrapper
return inner_fn(self, inst)
File "/home/zeyi/pyvenv/default/lib/python3.10/site-packages/torch/_dynamo/symbolic_convert.py", line 1658, in CALL_FUNCTION
self.call_function(fn, args, {})
File "/home/zeyi/pyvenv/default/lib/python3.10/site-packages/torch/_dynamo/symbolic_convert.py", line 897, in call_function
self.push(fn.call_function(self, args, kwargs)) # type: ignore[arg-type]
File "/home/zeyi/pyvenv/default/lib/python3.10/site-packages/torch/_dynamo/variables/misc.py", line 1022, in call_function
return self.obj.call_method(tx, self.name, args, kwargs)
File "/home/zeyi/pyvenv/default/lib/python3.10/site-packages/torch/_dynamo/variables/misc.py", line 759, in call_method
return self.call_apply(tx, args, kwargs)
File "/home/zeyi/pyvenv/default/lib/python3.10/site-packages/torch/_dynamo/variables/misc.py", line 708, in call_apply
return variables.UserFunctionVariable(fn, source=source).call_function(
File "/home/zeyi/pyvenv/default/lib/python3.10/site-packages/torch/_dynamo/variables/functions.py", line 317, in call_function
return super().call_function(tx, args, kwargs)
File "/home/zeyi/pyvenv/default/lib/python3.10/site-packages/torch/_dynamo/variables/functions.py", line 118, in call_function
return tx.inline_user_function_return(self, [*self.self_args(), *args], kwargs)
File "/home/zeyi/pyvenv/default/lib/python3.10/site-packages/torch/_dynamo/symbolic_convert.py", line 903, in inline_user_function_return
return InliningInstructionTranslator.inline_call(self, fn, args, kwargs)
File "/home/zeyi/pyvenv/default/lib/python3.10/site-packages/torch/_dynamo/symbolic_convert.py", line 3072, in inline_call
return cls.inline_call_(parent, func, args, kwargs)
File "/home/zeyi/pyvenv/default/lib/python3.10/site-packages/torch/_dynamo/symbolic_convert.py", line 3198, in inline_call_
tracer.run()
File "/home/zeyi/pyvenv/default/lib/python3.10/site-packages/torch/_dynamo/symbolic_convert.py", line 1052, in run
while self.step():
File "/home/zeyi/pyvenv/default/lib/python3.10/site-packages/torch/_dynamo/symbolic_convert.py", line 962, in step
self.dispatch_table[inst.opcode](self, inst)
File "/home/zeyi/pyvenv/default/lib/python3.10/site-packages/torch/_dynamo/symbolic_convert.py", line 1800, in LOAD_ATTR
self._load_attr(inst)
File "/home/zeyi/pyvenv/default/lib/python3.10/site-packages/torch/_dynamo/symbolic_convert.py", line 1790, in _load_attr
result = BuiltinVariable(getattr).call_function(
File "/home/zeyi/pyvenv/default/lib/python3.10/site-packages/torch/_dynamo/variables/builtin.py", line 1004, in call_function
return handler(tx, args, kwargs)
File "/home/zeyi/pyvenv/default/lib/python3.10/site-packages/torch/_dynamo/variables/builtin.py", line 852, in builtin_dispatch
rv = fn(tx, args, kwargs)
File "/home/zeyi/pyvenv/default/lib/python3.10/site-packages/torch/_dynamo/variables/builtin.py", line 772, in call_self_handler
result = self_handler(tx, *args, **kwargs)
File "/home/zeyi/pyvenv/default/lib/python3.10/site-packages/torch/_dynamo/variables/builtin.py", line 1704, in call_getattr
return obj.var_getattr(tx, name)
File "/home/zeyi/pyvenv/default/lib/python3.10/site-packages/torch/_dynamo/variables/user_defined.py", line 1076, in var_getattr
).call_function(tx, [ConstantVariable.create(name)], {})
File "/home/zeyi/pyvenv/default/lib/python3.10/site-packages/torch/_dynamo/variables/functions.py", line 378, in call_function
return super().call_function(tx, args, kwargs)
File "/home/zeyi/pyvenv/default/lib/python3.10/site-packages/torch/_dynamo/variables/functions.py", line 317, in call_function
return super().call_function(tx, args, kwargs)
File "/home/zeyi/pyvenv/default/lib/python3.10/site-packages/torch/_dynamo/variables/functions.py", line 118, in call_function
return tx.inline_user_function_return(self, [*self.self_args(), *args], kwargs)
File "/home/zeyi/pyvenv/default/lib/python3.10/site-packages/torch/_dynamo/symbolic_convert.py", line 903, in inline_user_function_return
return InliningInstructionTranslator.inline_call(self, fn, args, kwargs)
File "/home/zeyi/pyvenv/default/lib/python3.10/site-packages/torch/_dynamo/symbolic_convert.py", line 3072, in inline_call
return cls.inline_call_(parent, func, args, kwargs)
File "/home/zeyi/pyvenv/default/lib/python3.10/site-packages/torch/_dynamo/symbolic_convert.py", line 3116, in inline_call_
result = InliningInstructionTranslator.check_inlineable(func)
File "/home/zeyi/pyvenv/default/lib/python3.10/site-packages/torch/_dynamo/symbolic_convert.py", line 3093, in check_inlineable
unimplemented(
File "/home/zeyi/pyvenv/default/lib/python3.10/site-packages/torch/_dynamo/exc.py", line 317, in unimplemented
raise Unsupported(msg, case_name=case_name)
torch._dynamo.exc.Unsupported: 'inline in skipfiles: SubConfigProxy.__getattr__ | __getattr__ /home/zeyi/pyvenv/default/lib/python3.10/site-packages/torch/utils/_config_module.py, skipped according trace_rules.lookup SKIP_DIRS'
```
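One possible workaround direction, sketched generically rather than as the ParaAttention fix: if the config value is effectively constant for a run, read it into a plain Python constant before anything is traced, so the compiled region never calls `SubConfigProxy.__getattr__`. The `_Cfg` class below is a stand-in for `para_attn.config.attention`:
```python
import torch

class _Cfg:
    allow_reduced_precision_compute = False  # stand-in for para_attn.config.attention

config = _Cfg()

# Read the config eagerly, outside anything torch.compile traces...
CONVERT_TO_F32 = not config.allow_reduced_precision_compute

@torch.compile
def attn_like(q):
    # ...and branch on the captured plain bool inside the compiled region.
    if CONVERT_TO_F32:
        q = q.float()
    return q * 2

print(attn_like(torch.randn(4, dtype=torch.bfloat16)))
```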
### Versions
```
Collecting environment information...
PyTorch version: 2.6.0+cu126
Is debug build: False
CUDA used to build PyTorch: 12.6
ROCM used to build PyTorch: N/A
OS: Ubuntu 22.04.5 LTS (x86_64)
GCC version: (Ubuntu 11.4.0-1ubuntu1~22.04) 11.4.0
Clang version: Could not collect
CMake version: Could not collect
Libc version: glibc-2.35
Python version: 3.10.12 (main, Jan 17 2025, 14:35:34) [GCC 11.4.0] (64-bit runtime)
Python platform: Linux-5.15.0-124-generic-x86_64-with-glibc2.35
Is CUDA available: True
CUDA runtime version: 12.6.20
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration:
GPU 0: NVIDIA H200
GPU 1: NVIDIA H200
GPU 2: NVIDIA H200
GPU 3: NVIDIA H200
GPU 4: NVIDIA H200
GPU 5: NVIDIA H200
GPU 6: NVIDIA H200
GPU 7: NVIDIA H200
Nvidia driver version: 560.35.05
cuDNN version: Could not collect
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 52 bits physical, 57 bits virtual
Byte Order: Little Endian
CPU(s): 192
On-line CPU(s) list: 0-191
Vendor ID: AuthenticAMD
Model name: AMD EPYC 9654 96-Core Processor
CPU family: 25
Model: 17
Thread(s) per core: 1
Core(s) per socket: 96
Socket(s): 2
Stepping: 1
Frequency boost: enabled
CPU max MHz: 3707.8120
CPU min MHz: 1500.0000
BogoMIPS: 4800.19
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ht syscall nx mmxext fxsr_opt pdpe1gb rdtscp lm constant_tsc rep_good nopl nonstop_tsc cpuid extd_apicid aperfmperf rapl pni pclmulqdq monitor ssse3 fma cx16 pcid sse4_1 sse4_2 movbe popcnt aes xsave avx f16c rdrand lahf_lm cmp_legacy svm extapic cr8_legacy abm sse4a misalignsse 3dnowprefetch osvw ibs skinit wdt tce topoext perfctr_core perfctr_nb bpext perfctr_llc mwaitx cpb cat_l3 cdp_l3 invpcid_single hw_pstate ssbd mba ibrs ibpb stibp ibrs_enhanced vmmcall fsgsbase bmi1 avx2 smep bmi2 erms invpcid cqm rdt_a avx512f avx512dq rdseed adx smap avx512ifma clflushopt clwb avx512cd sha_ni avx512bw avx512vl xsaveopt xsavec xgetbv1 xsaves cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local avx512_bf16 clzero irperf xsaveerptr rdpru wbnoinvd amd_ppin cppc arat npt lbrv svm_lock nrip_save tsc_scale vmcb_clean flushbyasid decodeassists pausefilter pfthreshold avic v_vmsave_vmload vgif v_spec_ctrl avx512vbmi umip pku ospke avx512_vbmi2 gfni vaes vpclmulqdq avx512_vnni avx512_bitalg avx512_vpopcntdq la57 rdpid overflow_recov succor smca fsrm flush_l1d
Virtualization: AMD-V
L1d cache: 6 MiB (192 instances)
L1i cache: 6 MiB (192 instances)
L2 cache: 192 MiB (192 instances)
L3 cache: 768 MiB (24 instances)
NUMA node(s): 2
NUMA node0 CPU(s): 0-95
NUMA node1 CPU(s): 96-191
Vulnerability Gather data sampling: Not affected
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Not affected
Vulnerability Reg file data sampling: Not affected
Vulnerability Retbleed: Not affected
Vulnerability Spec rstack overflow: Mitigation; safe RET
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl and seccomp
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Enhanced / Automatic IBRS; IBPB conditional; STIBP disabled; RSB filling; PBRSB-eIBRS Not affected; BHI Not affected
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Versions of relevant libraries:
[pip3] numpy==1.26.4
[pip3] nvidia-cublas-cu12==12.6.4.1
[pip3] nvidia-cuda-cupti-cu12==12.6.80
[pip3] nvidia-cuda-nvrtc-cu12==12.6.77
[pip3] nvidia-cuda-runtime-cu12==12.6.77
[pip3] nvidia-cudnn-cu12==9.5.1.17
[pip3] nvidia-cufft-cu12==11.3.0.4
[pip3] nvidia-curand-cu12==10.3.7.77
[pip3] nvidia-cusolver-cu12==11.7.1.2
[pip3] nvidia-cusparse-cu12==12.5.4.2
[pip3] nvidia-cusparselt-cu12==0.6.3
[pip3] nvidia-nccl-cu12==2.21.5
[pip3] nvidia-nvjitlink-cu12==12.6.85
[pip3] nvidia-nvtx-cu12==12.6.77
[pip3] pytorch-triton==3.2.0+git4b3bb1f8
[pip3] torch==2.6.0+cu126
[pip3] torchaudio==2.6.0+cu126
[pip3] torchvision==0.21.0+cu126
[pip3] triton==3.2.0
[conda] Could not collect
```
cc @ezyang @gchanan @zou3519 @kadeng @msaroufim @chauhang @penguinwu
| true
|
2,840,715,145
|
Torch 2.6 Unexpected Graph Break with contextmanager
|
chengzeyi
|
closed
|
[] | 1
|
CONTRIBUTOR
|
### 🐛 Describe the bug
When I use the following context manager, I encounter an unexpected graph break with the latest torch 2.6.0 that does not occur with torch 2.5.0. This causes a severe performance regression when running `FLUX` models with `ParaAttention`.
```python
class UnifiedAttnMode(TorchFunctionMode):
    disabled = False
    @torch.compiler.disable()
    def __init__(self, mesh=None):
        super().__init__()
        self._parallel_method = "ulysses"
        if mesh is None:
            self._ulysses_mesh = DP.get_default_group()
            self._ring_mesh = None
        else:
            if isinstance(mesh, dist.ProcessGroup):
                self._ulysses_mesh = mesh
                self._ring_mesh = None
            else:
                assert isinstance(mesh, dist.DeviceMesh), "mesh must be a ProcessGroup or DeviceMesh"
                if "ulysses" in mesh.mesh_dim_names:
                    self._ulysses_mesh = mesh["ulysses"]
                else:
                    self._ulysses_mesh = None
                if "ring" in mesh.mesh_dim_names:
                    self._ring_mesh = mesh["ring"]
                else:
                    self._ring_mesh = None
                assert (
                    self._ulysses_mesh is not None or self._ring_mesh is not None
                ), "mesh must have ulysses or ring dim"
    def __torch_function__(self, func, types, args=(), kwargs=None):
        kwargs = {} if kwargs is None else kwargs
        if UnifiedAttnMode.disabled:
            return func(*args, **kwargs)
        if func is F.scaled_dot_product_attention:
            parallel_method = self._parallel_method
            if parallel_method == "ulysses":
                with self._set_parallel_method("ring"), self:
                    if self._ulysses_mesh is None:
                        return func(*args, **kwargs)
                    return ulysses_attn_func(*args, **kwargs, mesh=self._ulysses_mesh)
            elif parallel_method == "ring":
                with self._set_parallel_method("none"), self:
                    if self._ring_mesh is None:
                        return func(*args, **kwargs)
                    return ring_attn_func(*args, **kwargs, mesh=self._ring_mesh)
            elif parallel_method == "none":
                if para_attn.config.attention.force_dispatch_to_custom_ops:
                    return para_attn_ops.attention_forward(*args, **kwargs)
                return func(*args, **kwargs)
            else:
                raise ValueError(f"Unknown parallel method: {parallel_method}")
        return func(*args, **kwargs)
    @torch.compiler.disable()
    def __enter__(self):
        super().__enter__()
    @torch.compiler.disable()
    def __exit__(self, *args):
        super().__exit__(*args)
    @classmethod
    @contextlib.contextmanager
    def disable(cls):
        old_disabled = cls._set_disabled(True)
        try:
            yield
        finally:
            cls._set_disabled(old_disabled)
    @classmethod
    @torch.compiler.disable()
    def _set_disabled(cls, value):
        old_disabled = cls.disabled
        cls.disabled = value
        return old_disabled
    @contextlib.contextmanager
    def _set_parallel_method(self, method):
        old_parallel_method = self._parallel_method
        self._parallel_method = method
        try:
            yield
        finally:
            self._parallel_method = old_parallel_method
```
From
https://github.com/chengzeyi/ParaAttention/blob/3b85ae1e53f88d5995c58a6b439b452d33f61aab/src/para_attn/para_attn_interface.py#L461
```
Graph break in user code at /home/zeyi/repos/ParaAttention/src/para_attn/para_attn_interface.py:418
Reason: Unsupported: 'inline in skipfiles: UnifiedAttnMode._set_parallel_method | helper /usr/lib/python3.10/contextlib.py, skipped according trace_rules.lookup SKIP_DIRS'
User code traceback:
File "/home/zeyi/repos/ParaAttention/src/para_attn/context_parallel/diffusers_adapters/flux.py", line 61, in torch_dynamo_resume_in_new_forward_at_60
output = original_forward(
File "/home/zeyi/pyvenv/default/lib/python3.10/site-packages/diffusers/models/transformers/transformer_flux.py", line 522, in forward
encoder_hidden_states, hidden_states = block(
File "/home/zeyi/pyvenv/default/lib/python3.10/site-packages/diffusers/models/transformers/transformer_flux.py", line 180, in forward
attention_outputs = self.attn(
File "/home/zeyi/pyvenv/default/lib/python3.10/site-packages/diffusers/models/attention_processor.py", line 588, in forward
return self.processor(
File "/home/zeyi/pyvenv/default/lib/python3.10/site-packages/diffusers/models/attention_processor.py", line 2321, in __call__
hidden_states = F.scaled_dot_product_attention(
File "/home/zeyi/repos/ParaAttention/src/para_attn/para_attn_interface.py", line 418, in __torch_function__
with self._set_parallel_method("ring"), self:
Traceback (most recent call last):
File "/home/zeyi/pyvenv/default/lib/python3.10/site-packages/torch/_dynamo/symbolic_convert.py", line 659, in wrapper
return inner_fn(self, inst)
File "/home/zeyi/pyvenv/default/lib/python3.10/site-packages/torch/_dynamo/symbolic_convert.py", line 1736, in CALL_FUNCTION_EX
self.call_function(fn, argsvars.items, kwargsvars)
File "/home/zeyi/pyvenv/default/lib/python3.10/site-packages/torch/_dynamo/symbolic_convert.py", line 897, in call_function
self.push(fn.call_function(self, args, kwargs)) # type: ignore[arg-type]
File "/home/zeyi/pyvenv/default/lib/python3.10/site-packages/torch/_dynamo/variables/lazy.py", line 170, in realize_and_forward
return getattr(self.realize(), name)(*args, **kwargs)
File "/home/zeyi/pyvenv/default/lib/python3.10/site-packages/torch/_dynamo/variables/functions.py", line 378, in call_function
return super().call_function(tx, args, kwargs)
File "/home/zeyi/pyvenv/default/lib/python3.10/site-packages/torch/_dynamo/variables/functions.py", line 317, in call_function
return super().call_function(tx, args, kwargs)
File "/home/zeyi/pyvenv/default/lib/python3.10/site-packages/torch/_dynamo/variables/functions.py", line 118, in call_function
return tx.inline_user_function_return(self, [*self.self_args(), *args], kwargs)
File "/home/zeyi/pyvenv/default/lib/python3.10/site-packages/torch/_dynamo/symbolic_convert.py", line 903, in inline_user_function_return
return InliningInstructionTranslator.inline_call(self, fn, args, kwargs)
File "/home/zeyi/pyvenv/default/lib/python3.10/site-packages/torch/_dynamo/symbolic_convert.py", line 3072, in inline_call
return cls.inline_call_(parent, func, args, kwargs)
File "/home/zeyi/pyvenv/default/lib/python3.10/site-packages/torch/_dynamo/symbolic_convert.py", line 3198, in inline_call_
tracer.run()
File "/home/zeyi/pyvenv/default/lib/python3.10/site-packages/torch/_dynamo/symbolic_convert.py", line 1052, in run
while self.step():
File "/home/zeyi/pyvenv/default/lib/python3.10/site-packages/torch/_dynamo/symbolic_convert.py", line 962, in step
self.dispatch_table[inst.opcode](self, inst)
File "/home/zeyi/pyvenv/default/lib/python3.10/site-packages/torch/_dynamo/symbolic_convert.py", line 659, in wrapper
return inner_fn(self, inst)
File "/home/zeyi/pyvenv/default/lib/python3.10/site-packages/torch/_dynamo/symbolic_convert.py", line 1748, in CALL_FUNCTION_KW
self.call_function(fn, args, kwargs)
File "/home/zeyi/pyvenv/default/lib/python3.10/site-packages/torch/_dynamo/symbolic_convert.py", line 897, in call_function
self.push(fn.call_function(self, args, kwargs)) # type: ignore[arg-type]
File "/home/zeyi/pyvenv/default/lib/python3.10/site-packages/torch/_dynamo/variables/nn_module.py", line 914, in call_function
return variables.UserFunctionVariable(fn, source=source).call_function(
File "/home/zeyi/pyvenv/default/lib/python3.10/site-packages/torch/_dynamo/variables/functions.py", line 317, in call_function
return super().call_function(tx, args, kwargs)
File "/home/zeyi/pyvenv/default/lib/python3.10/site-packages/torch/_dynamo/variables/functions.py", line 118, in call_function
return tx.inline_user_function_return(self, [*self.self_args(), *args], kwargs)
File "/home/zeyi/pyvenv/default/lib/python3.10/site-packages/torch/_dynamo/symbolic_convert.py", line 903, in inline_user_function_return
return InliningInstructionTranslator.inline_call(self, fn, args, kwargs)
File "/home/zeyi/pyvenv/default/lib/python3.10/site-packages/torch/_dynamo/symbolic_convert.py", line 3072, in inline_call
return cls.inline_call_(parent, func, args, kwargs)
File "/home/zeyi/pyvenv/default/lib/python3.10/site-packages/torch/_dynamo/symbolic_convert.py", line 3198, in inline_call_
tracer.run()
File "/home/zeyi/pyvenv/default/lib/python3.10/site-packages/torch/_dynamo/symbolic_convert.py", line 1052, in run
while self.step():
File "/home/zeyi/pyvenv/default/lib/python3.10/site-packages/torch/_dynamo/symbolic_convert.py", line 962, in step
self.dispatch_table[inst.opcode](self, inst)
File "/home/zeyi/pyvenv/default/lib/python3.10/site-packages/torch/_dynamo/symbolic_convert.py", line 659, in wrapper
return inner_fn(self, inst)
File "/home/zeyi/pyvenv/default/lib/python3.10/site-packages/torch/_dynamo/symbolic_convert.py", line 1736, in CALL_FUNCTION_EX
self.call_function(fn, argsvars.items, kwargsvars)
File "/home/zeyi/pyvenv/default/lib/python3.10/site-packages/torch/_dynamo/symbolic_convert.py", line 897, in call_function
self.push(fn.call_function(self, args, kwargs)) # type: ignore[arg-type]
File "/home/zeyi/pyvenv/default/lib/python3.10/site-packages/torch/_dynamo/variables/lazy.py", line 170, in realize_and_forward
return getattr(self.realize(), name)(*args, **kwargs)
File "/home/zeyi/pyvenv/default/lib/python3.10/site-packages/torch/_dynamo/variables/nn_module.py", line 914, in call_function
return variables.UserFunctionVariable(fn, source=source).call_function(
File "/home/zeyi/pyvenv/default/lib/python3.10/site-packages/torch/_dynamo/variables/functions.py", line 317, in call_function
return super().call_function(tx, args, kwargs)
File "/home/zeyi/pyvenv/default/lib/python3.10/site-packages/torch/_dynamo/variables/functions.py", line 118, in call_function
return tx.inline_user_function_return(self, [*self.self_args(), *args], kwargs)
File "/home/zeyi/pyvenv/default/lib/python3.10/site-packages/torch/_dynamo/symbolic_convert.py", line 903, in inline_user_function_return
return InliningInstructionTranslator.inline_call(self, fn, args, kwargs)
File "/home/zeyi/pyvenv/default/lib/python3.10/site-packages/torch/_dynamo/symbolic_convert.py", line 3072, in inline_call
return cls.inline_call_(parent, func, args, kwargs)
File "/home/zeyi/pyvenv/default/lib/python3.10/site-packages/torch/_dynamo/symbolic_convert.py", line 3198, in inline_call_
tracer.run()
File "/home/zeyi/pyvenv/default/lib/python3.10/site-packages/torch/_dynamo/symbolic_convert.py", line 1052, in run
while self.step():
File "/home/zeyi/pyvenv/default/lib/python3.10/site-packages/torch/_dynamo/symbolic_convert.py", line 962, in step
self.dispatch_table[inst.opcode](self, inst)
File "/home/zeyi/pyvenv/default/lib/python3.10/site-packages/torch/_dynamo/symbolic_convert.py", line 659, in wrapper
return inner_fn(self, inst)
File "/home/zeyi/pyvenv/default/lib/python3.10/site-packages/torch/_dynamo/symbolic_convert.py", line 1736, in CALL_FUNCTION_EX
self.call_function(fn, argsvars.items, kwargsvars)
File "/home/zeyi/pyvenv/default/lib/python3.10/site-packages/torch/_dynamo/symbolic_convert.py", line 897, in call_function
self.push(fn.call_function(self, args, kwargs)) # type: ignore[arg-type]
File "/home/zeyi/pyvenv/default/lib/python3.10/site-packages/torch/_dynamo/variables/lazy.py", line 170, in realize_and_forward
return getattr(self.realize(), name)(*args, **kwargs)
File "/home/zeyi/pyvenv/default/lib/python3.10/site-packages/torch/_dynamo/variables/user_defined.py", line 960, in call_function
return self.call_method(tx, "__call__", args, kwargs)
File "/home/zeyi/pyvenv/default/lib/python3.10/site-packages/torch/_dynamo/variables/user_defined.py", line 815, in call_method
return UserMethodVariable(method, self, source=source).call_function(
File "/home/zeyi/pyvenv/default/lib/python3.10/site-packages/torch/_dynamo/variables/functions.py", line 378, in call_function
return super().call_function(tx, args, kwargs)
File "/home/zeyi/pyvenv/default/lib/python3.10/site-packages/torch/_dynamo/variables/functions.py", line 317, in call_function
return super().call_function(tx, args, kwargs)
File "/home/zeyi/pyvenv/default/lib/python3.10/site-packages/torch/_dynamo/variables/functions.py", line 118, in call_function
return tx.inline_user_function_return(self, [*self.self_args(), *args], kwargs)
File "/home/zeyi/pyvenv/default/lib/python3.10/site-packages/torch/_dynamo/symbolic_convert.py", line 903, in inline_user_function_return
return InliningInstructionTranslator.inline_call(self, fn, args, kwargs)
File "/home/zeyi/pyvenv/default/lib/python3.10/site-packages/torch/_dynamo/symbolic_convert.py", line 3072, in inline_call
return cls.inline_call_(parent, func, args, kwargs)
File "/home/zeyi/pyvenv/default/lib/python3.10/site-packages/torch/_dynamo/symbolic_convert.py", line 3198, in inline_call_
tracer.run()
File "/home/zeyi/pyvenv/default/lib/python3.10/site-packages/torch/_dynamo/symbolic_convert.py", line 1052, in run
while self.step():
File "/home/zeyi/pyvenv/default/lib/python3.10/site-packages/torch/_dynamo/symbolic_convert.py", line 962, in step
self.dispatch_table[inst.opcode](self, inst)
File "/home/zeyi/pyvenv/default/lib/python3.10/site-packages/torch/_dynamo/symbolic_convert.py", line 659, in wrapper
return inner_fn(self, inst)
File "/home/zeyi/pyvenv/default/lib/python3.10/site-packages/torch/_dynamo/symbolic_convert.py", line 1748, in CALL_FUNCTION_KW
self.call_function(fn, args, kwargs)
File "/home/zeyi/pyvenv/default/lib/python3.10/site-packages/torch/_dynamo/symbolic_convert.py", line 897, in call_function
self.push(fn.call_function(self, args, kwargs)) # type: ignore[arg-type]
File "/home/zeyi/pyvenv/default/lib/python3.10/site-packages/torch/_dynamo/variables/torch.py", line 886, in call_function
return dispatch_torch_function(tx, self, args, kwargs)
File "/home/zeyi/pyvenv/default/lib/python3.10/site-packages/torch/_dynamo/variables/torch_function.py", line 543, in dispatch_torch_function
res = tx.symbolic_torch_function_state.call_torch_function_mode(
File "/home/zeyi/pyvenv/default/lib/python3.10/site-packages/torch/_dynamo/variables/torch_function.py", line 274, in call_torch_function_mode
return cur_mode.call_torch_function(tx, fn, types, args, kwargs)
File "/home/zeyi/pyvenv/default/lib/python3.10/site-packages/torch/_dynamo/variables/torch_function.py", line 392, in call_torch_function
return call_torch_function(
File "/home/zeyi/pyvenv/default/lib/python3.10/site-packages/torch/_dynamo/variables/torch_function.py", line 506, in call_torch_function
return tx.inline_user_function_return(torch_function_var, tf_args, {})
File "/home/zeyi/pyvenv/default/lib/python3.10/site-packages/torch/_dynamo/symbolic_convert.py", line 903, in inline_user_function_return
return InliningInstructionTranslator.inline_call(self, fn, args, kwargs)
File "/home/zeyi/pyvenv/default/lib/python3.10/site-packages/torch/_dynamo/symbolic_convert.py", line 3072, in inline_call
return cls.inline_call_(parent, func, args, kwargs)
File "/home/zeyi/pyvenv/default/lib/python3.10/site-packages/torch/_dynamo/symbolic_convert.py", line 3198, in inline_call_
tracer.run()
File "/home/zeyi/pyvenv/default/lib/python3.10/site-packages/torch/_dynamo/symbolic_convert.py", line 1052, in run
while self.step():
File "/home/zeyi/pyvenv/default/lib/python3.10/site-packages/torch/_dynamo/symbolic_convert.py", line 962, in step
self.dispatch_table[inst.opcode](self, inst)
File "/home/zeyi/pyvenv/default/lib/python3.10/site-packages/torch/_dynamo/symbolic_convert.py", line 659, in wrapper
return inner_fn(self, inst)
File "/home/zeyi/pyvenv/default/lib/python3.10/site-packages/torch/_dynamo/symbolic_convert.py", line 1658, in CALL_FUNCTION
self.call_function(fn, args, {})
File "/home/zeyi/pyvenv/default/lib/python3.10/site-packages/torch/_dynamo/symbolic_convert.py", line 897, in call_function
self.push(fn.call_function(self, args, kwargs)) # type: ignore[arg-type]
File "/home/zeyi/pyvenv/default/lib/python3.10/site-packages/torch/_dynamo/variables/functions.py", line 378, in call_function
return super().call_function(tx, args, kwargs)
File "/home/zeyi/pyvenv/default/lib/python3.10/site-packages/torch/_dynamo/variables/functions.py", line 317, in call_function
return super().call_function(tx, args, kwargs)
File "/home/zeyi/pyvenv/default/lib/python3.10/site-packages/torch/_dynamo/variables/functions.py", line 118, in call_function
return tx.inline_user_function_return(self, [*self.self_args(), *args], kwargs)
File "/home/zeyi/pyvenv/default/lib/python3.10/site-packages/torch/_dynamo/symbolic_convert.py", line 903, in inline_user_function_return
return InliningInstructionTranslator.inline_call(self, fn, args, kwargs)
File "/home/zeyi/pyvenv/default/lib/python3.10/site-packages/torch/_dynamo/symbolic_convert.py", line 3072, in inline_call
return cls.inline_call_(parent, func, args, kwargs)
File "/home/zeyi/pyvenv/default/lib/python3.10/site-packages/torch/_dynamo/symbolic_convert.py", line 3116, in inline_call_
result = InliningInstructionTranslator.check_inlineable(func)
File "/home/zeyi/pyvenv/default/lib/python3.10/site-packages/torch/_dynamo/symbolic_convert.py", line 3093, in check_inlineable
unimplemented(
File "/home/zeyi/pyvenv/default/lib/python3.10/site-packages/torch/_dynamo/exc.py", line 317, in unimplemented
raise Unsupported(msg, case_name=case_name)
torch._dynamo.exc.Unsupported: 'inline in skipfiles: UnifiedAttnMode._set_parallel_method | helper /usr/lib/python3.10/contextlib.py, skipped according trace_rules.lookup SKIP_DIRS'
```
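One possible direction, sketched generically and not verified against ParaAttention: replacing the `@contextlib.contextmanager` helper with an explicit `__enter__`/`__exit__` object removes the need for Dynamo to inline `contextlib` internals, which is what the skipfile message above points at. The class name below is hypothetical:
```python
# Hypothetical replacement for the _set_parallel_method contextmanager above.
class _SetParallelMethod:
    def __init__(self, mode, method):
        self._mode = mode
        self._method = method

    def __enter__(self):
        self._old = self._mode._parallel_method
        self._mode._parallel_method = self._method
        return self._mode

    def __exit__(self, exc_type, exc, tb):
        self._mode._parallel_method = self._old
        return False

# Usage inside __torch_function__ would then read:
#     with _SetParallelMethod(self, "ring"), self:
#         ...
```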
### Versions
```
Collecting environment information...
PyTorch version: 2.6.0+cu126
Is debug build: False
CUDA used to build PyTorch: 12.6
ROCM used to build PyTorch: N/A
OS: Ubuntu 22.04.5 LTS (x86_64)
GCC version: (Ubuntu 11.4.0-1ubuntu1~22.04) 11.4.0
Clang version: Could not collect
CMake version: Could not collect
Libc version: glibc-2.35
Python version: 3.10.12 (main, Jan 17 2025, 14:35:34) [GCC 11.4.0] (64-bit runtime)
Python platform: Linux-5.15.0-124-generic-x86_64-with-glibc2.35
Is CUDA available: True
CUDA runtime version: 12.6.20
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration:
GPU 0: NVIDIA H200
GPU 1: NVIDIA H200
GPU 2: NVIDIA H200
GPU 3: NVIDIA H200
GPU 4: NVIDIA H200
GPU 5: NVIDIA H200
GPU 6: NVIDIA H200
GPU 7: NVIDIA H200
Nvidia driver version: 560.35.05
cuDNN version: Could not collect
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 52 bits physical, 57 bits virtual
Byte Order: Little Endian
CPU(s): 192
On-line CPU(s) list: 0-191
Vendor ID: AuthenticAMD
Model name: AMD EPYC 9654 96-Core Processor
CPU family: 25
Model: 17
Thread(s) per core: 1
Core(s) per socket: 96
Socket(s): 2
Stepping: 1
Frequency boost: enabled
CPU max MHz: 3707.8120
CPU min MHz: 1500.0000
BogoMIPS: 4800.19
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ht syscall nx mmxext fxsr_opt pdpe1gb rdtscp lm constant_tsc rep_good nopl nonstop_tsc cpuid extd_apicid aperfmperf rapl pni pclmulqdq monitor ssse3 fma cx16 pcid sse4_1 sse4_2 movbe popcnt aes xsave avx f16c rdrand lahf_lm cmp_legacy svm extapic cr8_legacy abm sse4a misalignsse 3dnowprefetch osvw ibs skinit wdt tce topoext perfctr_core perfctr_nb bpext perfctr_llc mwaitx cpb cat_l3 cdp_l3 invpcid_single hw_pstate ssbd mba ibrs ibpb stibp ibrs_enhanced vmmcall fsgsbase bmi1 avx2 smep bmi2 erms invpcid cqm rdt_a avx512f avx512dq rdseed adx smap avx512ifma clflushopt clwb avx512cd sha_ni avx512bw avx512vl xsaveopt xsavec xgetbv1 xsaves cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local avx512_bf16 clzero irperf xsaveerptr rdpru wbnoinvd amd_ppin cppc arat npt lbrv svm_lock nrip_save tsc_scale vmcb_clean flushbyasid decodeassists pausefilter pfthreshold avic v_vmsave_vmload vgif v_spec_ctrl avx512vbmi umip pku ospke avx512_vbmi2 gfni vaes vpclmulqdq avx512_vnni avx512_bitalg avx512_vpopcntdq la57 rdpid overflow_recov succor smca fsrm flush_l1d
Virtualization: AMD-V
L1d cache: 6 MiB (192 instances)
L1i cache: 6 MiB (192 instances)
L2 cache: 192 MiB (192 instances)
L3 cache: 768 MiB (24 instances)
NUMA node(s): 2
NUMA node0 CPU(s): 0-95
NUMA node1 CPU(s): 96-191
Vulnerability Gather data sampling: Not affected
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Not affected
Vulnerability Reg file data sampling: Not affected
Vulnerability Retbleed: Not affected
Vulnerability Spec rstack overflow: Mitigation; safe RET
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl and seccomp
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Enhanced / Automatic IBRS; IBPB conditional; STIBP disabled; RSB filling; PBRSB-eIBRS Not affected; BHI Not affected
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Versions of relevant libraries:
[pip3] numpy==1.26.4
[pip3] nvidia-cublas-cu12==12.6.4.1
[pip3] nvidia-cuda-cupti-cu12==12.6.80
[pip3] nvidia-cuda-nvrtc-cu12==12.6.77
[pip3] nvidia-cuda-runtime-cu12==12.6.77
[pip3] nvidia-cudnn-cu12==9.5.1.17
[pip3] nvidia-cufft-cu12==11.3.0.4
[pip3] nvidia-curand-cu12==10.3.7.77
[pip3] nvidia-cusolver-cu12==11.7.1.2
[pip3] nvidia-cusparse-cu12==12.5.4.2
[pip3] nvidia-cusparselt-cu12==0.6.3
[pip3] nvidia-nccl-cu12==2.21.5
[pip3] nvidia-nvjitlink-cu12==12.6.85
[pip3] nvidia-nvtx-cu12==12.6.77
[pip3] pytorch-triton==3.2.0+git4b3bb1f8
[pip3] torch==2.6.0+cu126
[pip3] torchaudio==2.6.0+cu126
[pip3] torchvision==0.21.0+cu126
[pip3] triton==3.2.0
[conda] Could not collect
```
| true
|
2,840,623,887
|
Segmentation Fault in `torch.ops.aten.matrix_exp_backward`
|
WLFJ
|
open
|
[
"module: crash",
"module: error checking",
"triaged",
"module: empty tensor",
"topic: fuzzer"
] | 0
|
NONE
|
### 🐛 Describe the bug
example:
```python
import torch
def f(*args):
    sym_0, sym_1, sym_2, sym_3, sym_4, sym_5, sym_6 = args
    var_976 = torch.ops.aten.blackman_window(window_length=sym_0, periodic=sym_1)
    var_956 = torch.ops.aten.special_logsumexp(self=var_976, dim=sym_2, keepdim=sym_3)
    var_781 = torch.ops.aten.randint(low=sym_4, high=sym_5, size=sym_6)
    print(var_956, var_781)
    return torch.ops.aten.matrix_exp_backward(self=var_956, grad=var_781)
f(358, False, (-1,), False, -1, 0, (1,))
```
result:
```
tensor(6.3650) tensor([-1])
[W209 20:23:45.571710835 TensorShape.cpp:4475] Warning: Tensor.mH is deprecated on 0-D tensors. Consider using x.conj(). (function operator())
fish: Job 2, 'python3 sigsegv-matrix_exp_back…' terminated by signal SIGSEGV (Address boundary error)
```
### Versions
pytorch 2.7.0.dev20250209+cu124
cc @malfet
| true
|
2,840,620,433
|
Floating Point Exception in `torch.ops.aten.pixel_shuffle` with Large `upscale_factor`
|
WLFJ
|
open
|
[
"module: crash",
"module: error checking",
"triaged",
"topic: fuzzer"
] | 0
|
NONE
|
### 🐛 Describe the bug
example:
```python
import torch
def f(sym_3):
    return torch.ops.aten.pixel_shuffle(
        self=torch.randn((1, 1363, 1)), upscale_factor=sym_3
    )
f(8070450532247928832)
```
result:
```
fish: Job 3, 'python3 sigsegv-pixel_shuffle.py' terminated by signal SIGFPE (Floating point exception)
```
### Versions
pytorch 2.7.0.dev20250209+cu124
cc @malfet
| true
|
2,840,619,252
|
Segmentation Fault in `torch.as_strided_copy` with Large `storage_offset`
|
WLFJ
|
open
|
[
"module: crash",
"module: error checking",
"triaged",
"topic: fuzzer"
] | 0
|
NONE
|
### 🐛 Describe the bug
example:
```python
from torch import eye, as_strided_copy
def f(*args):
    sym_0, sym_1, sym_2, sym_3, sym_4 = args
    var_964 = eye(sym_0, sym_1)
    return as_strided_copy(var_964, sym_2, sym_3, sym_4)
f(0, 1, (4,), (1,), 7546629512955761371)
```
result:
```
fish: Job 3, 'python3 sigsegv-as_strided_copy…' terminated by signal SIGSEGV (Address boundary error)
```
### Versions
pytorch 2.7.0.dev20250209+cu124
cc @malfet
| true
|
2,840,612,528
|
Segmentation Fault in `torch.ops.aten.as_strided` with Large `storage_offset`
|
WLFJ
|
open
|
[
"module: crash",
"module: error checking",
"triaged",
"topic: fuzzer"
] | 0
|
NONE
|
### 🐛 Describe the bug
example:
```python
import torch
def f(sym_1, sym_2, sym_3):
    var_564 = torch.ops.aten.as_strided(self=torch.tensor([True]), size=sym_1, stride=sym_2, storage_offset=sym_3)
    return var_564
res = f((4096,), (0,), 9223372036854775807)
print(res)
```
result:
```
fish: Job 3, 'python3 sigsegv-as_strided.py' terminated by signal SIGSEGV (Address boundary error)
```
### Versions
pytorch 2.7.0.dev20250209+cu124
cc @malfet
| true
|
2,840,612,159
|
`Illegal instruction (core dumped)` on Raspberry Pi 4 when exporting ONNX with `torch 2.6.0`
|
Chizkiyahu
|
closed
|
[
"high priority",
"module: crash",
"triaged",
"module: regression",
"module: arm"
] | 13
|
CONTRIBUTOR
|
### 🐛 Describe the bug
#### **Description**
On Raspberry Pi 4, `torch.onnx.export` fails with `Illegal instruction (core dumped)` in `torch 2.6.0`. The same code works fine on `torch 2.5.1`. The issue occurs when using `x.expand(x.shape[0], -1, -1)` inside a `torch.nn.Module`. The crash happens **only during ONNX export**, not during regular inference.
#### **Code to Reproduce**
```python
import torch
class Module(torch.nn.Module):
    def forward(self, x):
        return x.expand(x.shape[0], -1, -1)  # Crashes here during ONNX export
model = Module()
dummy_inputs = tuple(torch.randn(1, 1, 192))
# Running the model works fine
res = model(*dummy_inputs)
# Exporting to ONNX causes core dump
torch.onnx.export(model, opset_version=20, f="./m.onnx", args=dummy_inputs)
```
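A side note on the repro, in case it matters when reproducing: `tuple(torch.randn(1, 1, 192))` iterates over the tensor's first dimension, so `dummy_inputs` ends up holding a single `(1, 192)` tensor rather than the intended 3-D input (whether this affects the crash is not verified). An explicit 1-tuple keeps the shape:
```python
import torch

dummy_inputs = (torch.randn(1, 1, 192),)  # explicit 1-tuple preserves the (1, 1, 192) input
```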
#### **Error Output**
```
Illegal instruction (core dumped)
```
#### **Device and Environment Details**
| Device | PyTorch Version | Execution Type | Status |
|----------------------------|----------------|----------------|---------|
| MacBook Pro M4 (native) | 2.6.0 | Native | ✅ Works |
| MacBook Pro M4 (Docker) | 2.6.0 | Docker | ✅ Works |
| Raspberry Pi 4 (native) | 2.5.1 | Native | ✅ Works |
| Raspberry Pi 4 (Docker) | 2.5.1 | Docker | ✅ Works |
| Raspberry Pi 4 (native) | 2.6.0 | Native | ❌ **Fails** |
| Raspberry Pi 4 (Docker) | 2.6.0 | Docker | ❌ **Fails** |
| Raspberry Pi 5 (native) | 2.6.0 | Native | ✅ Works |
# raspi 4 vs 5 cpu Features
running `cat /proc/cpuinfo | grep 'Fe' | uniq`
## raspi 4
```bash
Features : fp asimd evtstrm crc32 cpuid
```
## raspi 5
```bash
Features : fp asimd evtstrm aes pmull sha1 sha2 crc32 atomics fphp asimdhp cpuid asimdrdm lrcpc dcpop asimddp
```
### Versions
Collecting environment information...
PyTorch version: 2.6.0+cpu
Is debug build: False
CUDA used to build PyTorch: None
ROCM used to build PyTorch: N/A
OS: Debian GNU/Linux 12 (bookworm) (aarch64)
GCC version: Could not collect
Clang version: Could not collect
CMake version: Could not collect
Libc version: glibc-2.36
Python version: 3.11.11 (main, Feb 4 2025, 13:44:55) [GCC 12.2.0] (64-bit runtime)
Python platform: Linux-5.15.32-v8+-aarch64-with-glibc2.36
Is CUDA available: False
CUDA runtime version: No CUDA
CUDA_MODULE_LOADING set to: N/A
GPU models and configuration: No CUDA
Nvidia driver version: No CUDA
cuDNN version: No CUDA
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: aarch64
CPU op-mode(s): 32-bit, 64-bit
Byte Order: Little Endian
CPU(s): 4
On-line CPU(s) list: 0-3
Vendor ID: ARM
Model name: Cortex-A72
Model: 3
Thread(s) per core: 1
Core(s) per cluster: 4
Socket(s): -
Cluster(s): 1
Stepping: r0p3
CPU(s) scaling MHz: 100%
CPU max MHz: 1800.0000
CPU min MHz: 600.0000
BogoMIPS: 108.00
Flags: fp asimd evtstrm crc32 cpuid
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Spec store bypass: Vulnerable
Vulnerability Spectre v1: Mitigation; __user pointer sanitization
Vulnerability Spectre v2: Vulnerable
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Versions of relevant libraries:
[pip3] numpy==1.26.4
[pip3] onnx==1.16.1
[pip3] onnxruntime==1.19.2
[pip3] onnxruntime_extensions==0.13.0
[pip3] torch==2.6.0
[pip3] torchvision==0.21.0
[pip3] uni_pytorch==0.0.0
[conda] Could not collect
cc @ezyang @gchanan @zou3519 @kadeng @msaroufim @malfet @snadampal @milpuz01
| true
|
2,840,611,363
|
Floating Point Exception in `torch.ops.aten.unfold_backward` with Specific Input
|
WLFJ
|
open
|
[
"module: crash",
"module: error checking",
"triaged",
"topic: fuzzer"
] | 0
|
NONE
|
### 🐛 Describe the bug
example:
```python
import torch
def f(*args):
sym_0, sym_1, sym_2, sym_3, sym_4, sym_5, sym_6 = args
var_789 = torch.ones(sym_0, dtype=sym_1, layout=sym_2)
return torch.ops.aten.unfold_backward(var_789, sym_3, sym_4, sym_5, sym_6)
f((2309,), torch.bool, torch.strided, (1531,), -1, 844, 0)
```
result:
```
fish: Job 3, 'python3 sigfpe-unfold_backward.…' terminated by signal SIGFPE (Floating point exception)
```
### Versions
pytorch 2.7.0.dev20250209+cu124
cc @malfet
| true
|
2,840,610,656
|
Segmentation Fault in `torch.ops.aten.multi_margin_loss_backward` with Empty `grad_output`
|
WLFJ
|
open
|
[
"module: crash",
"module: error checking",
"triaged",
"topic: fuzzer"
] | 0
|
NONE
|
### 🐛 Describe the bug
example:
```python
import torch
sym_16 = 2
sym_17 = True
sym_18 = 0
grad_output = torch.tensor([])
self = torch.tensor([64.])
target = torch.tensor([0])
torch.ops.aten.multi_margin_loss_backward(grad_output=grad_output, self=self, target=target, p=sym_16, margin=sym_17, weight=None, reduction=sym_18)
```
result:
```
fish: Job 3, 'python3 sigsegv-multi_margin_lo…' terminated by signal SIGSEGV (Address boundary error)
```
### Versions
pytorch 2.7.0.dev20250209+cu124
cc @malfet
| true
|
2,840,609,383
|
Segmentation Fault in `torch.ops.aten.linalg_eigvals` After Invalid `unfold_copy`
|
WLFJ
|
open
|
[
"module: crash",
"triaged",
"module: linear algebra",
"topic: fuzzer"
] | 0
|
NONE
|
### 🐛 Describe the bug
example:
```python
import torch
sym_0 = 512
sym_1 = False
sym_2 = 1.7976931348623157e+308
sym_3 = -1
sym_4 = 65
sym_5 = 9223372036854775807
sym_6 = 1
sym_7 = 33
sym_8 = 1
var_547 = torch.ops.aten.hamming_window(window_length=sym_0, periodic=sym_1, alpha=sym_2)
var_462 = torch.ops.aten.unfold_copy(self=var_547, dimension=sym_3, size=sym_4, step=sym_5)
var_583 = torch.ops.aten.unfold_copy(self=var_462, dimension=sym_6, size=sym_7, step=sym_8)
torch.ops.aten.linalg_eigvals(self=var_583)
```
result:
```
Intel oneMKL ERROR: Parameter 3 was incorrect on entry to SGEBAL.
fish: Job 3, 'python3 sigsegv-linalg_eigvals.…' terminated by signal SIGSEGV (Address boundary error)
```
### Versions
pytorch: 2.7.0.dev20250209+cu124
cc @jianyuh @nikitaved @pearu @mruberry @walterddr @xwang233 @Lezcano
| true
|
2,840,608,418
|
Segmentation Fault in `torch.choose_qparams_optimized` with Invalid Parameters
|
WLFJ
|
open
|
[
"module: crash",
"oncall: quantization",
"topic: fuzzer"
] | 0
|
NONE
|
### 🐛 Describe the bug
example:
```python
import torch
sym_3 = 0
sym_4 = -1
sym_5 = 1.7976931348623157e+308
sym_6 = 0
res = torch.choose_qparams_optimized(input=torch.tensor([]), numel=sym_3, n_bins=sym_4, ratio=sym_5, bit_width=sym_6)
print(res)
```
result:
```
fish: Job 3, 'python3 sigsegv-choose_qparams_…' terminated by signal SIGSEGV (Address boundary error)
```
### Versions
pytorch: 2.7.0.dev20250209+cu124
cc @jerryzh168 @jianyuh @raghuramank100 @jamesr66a @vkuzo @jgong5 @Xia-Weiwen @leslie-fang-intel @msaroufim
| true
|
2,840,607,175
|
Floating Point Exception in `torch.ops.aten.native_channel_shuffle` with `groups=0`
|
WLFJ
|
open
|
[
"module: crash",
"module: error checking",
"triaged",
"module: empty tensor",
"topic: fuzzer"
] | 0
|
NONE
|
### 🐛 Describe the bug
example:
```python
import torch
print(torch.__version__)
sym_7 = 0
var_471 = torch.ops.aten.native_channel_shuffle(torch.tensor([[[0.]]]), groups=sym_7)
print(var_471)
```
result:
```
fish: Job 3, 'python3 sigfpe-native_channel_s…' terminated by signal SIGFPE (Floating point exception)
```
### Versions
pytorch: 2.7.0.dev20250209+cu124
cc @malfet
| true
|
2,840,592,411
|
Installing CPU-only PyTorch results in unnecessary CUDA dependencies during Docker build.
|
devroopsaha744
|
closed
|
[] | 2
|
NONE
|
### 🐛 Describe the bug
#### **Issue:**
I am using the standard PyTorch version (`torch`) inside a Docker container, but CUDA dependencies (e.g., `nvidia-cublas`, `nvidia-cusparse`) are still being installed, even though I only need the CPU version of PyTorch.
#### **Steps to Reproduce:**
1. Create a Dockerfile with a base image (e.g., `python:3.10`).
2. In the `requirements.txt`, include `torch` (without specifying CUDA) and other dependencies.
3. Build the Docker image using `docker build -t my-fastapi-app .`.
4. CUDA dependencies are being installed during the build, even though I only want the CPU version.
#### **Expected Behavior:**
Only the CPU version of PyTorch should be installed, without CUDA dependencies.
#### **Actual Behavior:**
CUDA dependencies are being installed, increasing image size and pulling unnecessary libraries.
#### **Dockerfile**
```dockerfile
# Dockerfile
FROM python:3.10
WORKDIR /app
COPY requirements.txt /app/
RUN pip install --no-cache-dir -r requirements.txt
COPY . .
CMD ["uvicorn", "main:app", "--host", "0.0.0.0", "--port", "8000"]
```
#### **Requirements.txt:**
```txt
# requirements.txt
torch
fastapi
uvicorn
transformers
pydantic
```
#### **Environment:**
- Docker version: 27.4.0
- Python version: 3.10
- OS: [Windows/Linux/Mac]
- PyTorch version: Latest (`torch`)
#### **Additional Notes:**
- I've tried using the `torch+cpu` method in `requirements.txt` but CUDA dependencies are still being installed during the Docker build.
- I only need the **CPU version** of PyTorch, and CUDA support is not required.
#### **How to fix it?**
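For reference, one approach that typically avoids pulling the CUDA wheels (a sketch under the assumption that the official CPU wheel index is acceptable; not verified in this exact setup) is to resolve `torch` from PyTorch's CPU wheel index and pin the `+cpu` build:
```txt
# requirements.txt (sketch; the exact version tag is an assumption)
--extra-index-url https://download.pytorch.org/whl/cpu
torch==2.6.0+cpu
fastapi
uvicorn
transformers
pydantic
```
Alternatively, `torch` can be removed from `requirements.txt` and installed in its own Dockerfile step with `pip install torch --index-url https://download.pytorch.org/whl/cpu`.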
| true
|
2,840,558,414
|
AttributeError: '_OpNamespace' '_C' object has no attribute 'silu_and_mul'
|
mrblenderTBS
|
closed
|
[] | 4
|
NONE
|
### 🐛 Describe the bug
```python
if current_platform.is_cuda_alike() or current_platform.is_cpu():
    self.op = torch.ops._C.silu_and_mul
```
### Versions
When trying to run a model based on vLLM, this error appears. This error frankly baffled me: while other errors could at least be found on other forums, this one does not seem to have been encountered by anyone. I have reinstalled torch many times and even replaced `_C` with the one from version 2.3.0, but nothing changed; it is as if this op did not exist at all. Please help.
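For context, ops under `torch.ops._C` only become visible after the compiled extension that registers them has been loaded; in vLLM's case that is vLLM's own C extension, which is unrelated to PyTorch's `torch._C`. A minimal diagnostic sketch (the `vllm._C` module name is an assumption about vLLM's layout and may differ between versions):
```python
import torch

# Before the registering extension is imported, the op should be absent.
print(hasattr(torch.ops._C, "silu_and_mul"))

try:
    import vllm._C  # assumption: vLLM's compiled extension registers silu_and_mul
except ImportError as e:
    print("vLLM C extension failed to import:", e)

# True only if the extension built for this environment actually loaded.
print(hasattr(torch.ops._C, "silu_and_mul"))
```
If the import fails, the problem is likely the vLLM build for this environment rather than the installed torch version.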
| true
|
2,840,493,112
|
[export] cache unflatten forward module
|
pianpwk
|
open
|
[
"fb-exported",
"Stale",
"release notes: export"
] | 3
|
CONTRIBUTOR
|
Differential Revision: D69361235
| true
|
2,840,461,658
|
[4/N] Remove unnecessary once flag usage
|
cyyever
|
closed
|
[
"oncall: distributed",
"triaged",
"open source",
"Merged",
"ciflow/trunk",
"release notes: distributed (c10d)"
] | 6
|
COLLABORATOR
|
Fixes #ISSUE_NUMBER
cc @H-Huang @awgu @kwen2501 @wanchaol @fegin @fduwjj @wz337 @wconstab @d4l3k @c-p-i-o
| true
|
2,840,458,736
|
Suggestion: integration of einops test suite
|
arogozhnikov
|
open
|
[
"module: ci",
"module: tests",
"triaged",
"module: linear algebra"
] | 1
|
NONE
|
Hi torch team,
Starting from einops 0.8.1, you can test torch against einops with:
```shell
# install numpy, einops, pytest and torch
python -m einops.tests.run_tests numpy torch
```
and I suggest having this in torch's CI.
There are a couple of motivations:
1. einops tests have actually revealed regressions in frameworks (this has happened several times, though not in torch)
2. it is hard within einops to test against more advanced features like torch.compile, because most of the engineering/regressions happen on the torch side. If tests fail within einops: 1) that's late, and the problem should have been caught earlier; 2) there is not much I can do. See [this issue](https://github.com/arogozhnikov/einops/issues/315) for a motivating example. This should simplify work on your side too.
Ready to answer questions / discuss concerns. Tests currently take ~12 seconds.
cc @seemethere @malfet @pytorch/pytorch-dev-infra @mruberry @ZainRizvi @jianyuh @nikitaved @pearu @walterddr @xwang233 @Lezcano
| true
|
2,840,455,864
|
[Inductor-CPU] FP16 X int8 WoQ GEMM for M <= 4 with FP16 accum & compute
|
sanchitintel
|
open
|
[
"module: cpu",
"open source",
"Stale",
"module: inductor",
"module: dynamo",
"ciflow/inductor"
] | 3
|
COLLABORATOR
|
## Summary
For GEMMs with FP16 activations and frozen int8 weights, where the M dimension (batch size x sequence length) is <= 4, the implementation in this PR is faster than the current Inductor implementation and should accelerate next-token generation during LLM inference. The int8 weight-only-quantization scale is applied within the micro-kernel.
## Details
AVX512_FP16 ISA (available on Xeon SP 4th gen & above) has an FMA instruction with FP16 accumulation.
There are AVX512 intrinsics for converting int8 to FP16 via the `int8 -> int16 -> fp16` route, which is faster than having to convert `int8 -> int32 -> fp32 -> fp16`.
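As a plain-torch sanity check of the numerics (not the intrinsics themselves), the shorter route yields identical results, since every int8 value is exactly representable in FP16; the benefit is purely in instruction count:
```python
import torch

w = torch.randint(-128, 128, (64,), dtype=torch.int8)
via_fp32 = w.to(torch.int32).to(torch.float32).to(torch.float16)   # int8 -> int32 -> fp32 -> fp16
via_int16 = w.to(torch.int16).to(torch.float16)                    # int8 -> int16 -> fp16
assert torch.equal(via_fp32, via_int16)
```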
This PR essentially copies [a GEMM micro-kernel from Intel Extension for PyTorch](https://github.com/intel/intel-extension-for-pytorch/blob/5a7c60cce265b158276326c3aef2b0db55bf9a58/csrc/cpu/aten/utils/woq.h#L685) with 3 modifications:
1. The IPEX micro-kernel code might even try using more than 32 ZMM registers (although the compiler would ensure register-spill doesn't happen, so it isn't a problem in practice).
2. The IPEX micro-kernel [uses a complex way to do forced loop-unrolling](https://github.com/intel/intel-extension-for-pytorch/blob/5a7c60cce265b158276326c3aef2b0db55bf9a58/csrc/cpu/aten/utils/woq.h#L797-L808), but, [it's exactly same as simplified forced unrolling in this PR](https://godbolt.org/z/esTar1bsj).
3. Explicit cache-line prefetching didn't help for the input shapes I tested (LLaMA2 & LLaMA3), so I removed the prefetching code, but it can be added back.
## Accuracy & Speedup
Accuracy is expected to be lower than using FP32 accumulation.
## Benchmarking script
[GitHub gist link](https://gist.github.com/sanchitintel/ed268229989ebbe930eabd050f2b979d)
cc @jgong5 @mingfeima @XiaobingSuper @ashokei @jingxu10 @jerryzh168 @voznesenskym @penguinwu @EikanWang @Guobing-Chen @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov @yf225 @leslie-fang-intel @Xia-Weiwen @chunyuan-w
| true
|
2,840,452,802
|
TypeError when using torch.compile with RegionViT under torch.inference_mode()
|
hassonofer
|
open
|
[
"triaged",
"oncall: pt2",
"module: inductor"
] | 0
|
NONE
|
### 🐛 Describe the bug
## Description
`torch.compile()` fails with a TypeError when running inference on a RegionViT model, specifically when using `torch.inference_mode()`. The same code works successfully:
- Without `torch.inference_mode()`
- During training
- When debug prints are added to the code
I've tried both PyTorch 2.5.1 and 2.6.0
## Reproduction Steps
1. Install required packages:
```sh
pip install birder torch torchvision torchaudio
```
2. Run the following minimal example:
```python
from birder.model_registry import registry
import torch
net = registry.net_factory("regionvit_t", input_channels=3, num_classes=1000)
net.to(torch.device("cuda"))
net.eval()
net = torch.compile(net)
with torch.inference_mode():
net(torch.rand(1, 3, 256, 256, device=torch.device("cuda")))
```
## Complete Error Traceback
<details>
<summary>Click to expand traceback</summary>
```
Python 3.11.2 (main, Sep 14 2024, 03:00:30) [GCC 12.2.0]
Type 'copyright', 'credits' or 'license' for more information
IPython 8.32.0 -- An enhanced Interactive Python. Type '?' for help.
In [1]: from birder.model_registry import registry
...: import torch
...: net = registry.net_factory("regionvit_t", input_channels=3, num_classes=1000)
...: net.to(torch.device("cuda"))
...: net.eval()
...: net = torch.compile(net)
...: with torch.inference_mode():
...: net(torch.rand(1, 3, 256, 256, device=torch.device("cuda")))
...:
~/Programming/birder/.venv/lib/python3.11/site-packages/torch/_inductor/compile_fx.py:167: UserWarning: TensorFloat32 tensor cores for float32 matrix multiplication available but not enabled. Consider setting `torch.set_float32_matmul_precision('high')` for better performance.
warnings.warn(
---------------------------------------------------------------------------
TypeError Traceback (most recent call last)
File ~/Programming/birder/.venv/lib/python3.11/site-packages/torch/_dynamo/output_graph.py:1446, in OutputGraph._call_user_compiler(self, gm)
1445 compiler_fn = WrapperBackend(compiler_fn)
-> 1446 compiled_fn = compiler_fn(gm, self.example_inputs())
1447 _step_logger()(logging.INFO, f"done compiler function {name}")
File ~/Programming/birder/.venv/lib/python3.11/site-packages/torch/_dynamo/repro/after_dynamo.py:129, in WrapBackendDebug.__call__(self, gm, example_inputs, **kwargs)
128 else:
--> 129 compiled_gm = compiler_fn(gm, example_inputs)
131 return compiled_gm
File ~/Programming/birder/.venv/lib/python3.11/site-packages/torch/__init__.py:2234, in _TorchCompileInductorWrapper.__call__(self, model_, inputs_)
2232 from torch._inductor.compile_fx import compile_fx
-> 2234 return compile_fx(model_, inputs_, config_patches=self.config)
File ~/Programming/birder/.venv/lib/python3.11/site-packages/torch/_inductor/compile_fx.py:1521, in compile_fx(model_, example_inputs_, inner_compile, config_patches, decompositions)
1516 with V.set_fake_mode(fake_mode), torch._guards.tracing(
1517 tracing_context
1518 ), compiled_autograd.disable(), functorch_config.patch(
1519 unlift_effect_tokens=True
1520 ):
-> 1521 return aot_autograd(
1522 fw_compiler=fw_compiler,
1523 bw_compiler=bw_compiler,
1524 inference_compiler=inference_compiler,
1525 decompositions=decompositions,
1526 partition_fn=partition_fn,
1527 keep_inference_input_mutations=True,
1528 cudagraphs=cudagraphs,
1529 )(model_, example_inputs_)
File ~/Programming/birder/.venv/lib/python3.11/site-packages/torch/_dynamo/backends/common.py:72, in AotAutograd.__call__(self, gm, example_inputs, **kwargs)
71 with enable_aot_logging(), patch_config:
---> 72 cg = aot_module_simplified(gm, example_inputs, **self.kwargs)
73 counters["aot_autograd"]["ok"] += 1
File ~/Programming/birder/.venv/lib/python3.11/site-packages/torch/_functorch/aot_autograd.py:1071, in aot_module_simplified(mod, args, fw_compiler, bw_compiler, partition_fn, decompositions, keep_inference_input_mutations, inference_compiler, cudagraphs)
1070 else:
-> 1071 compiled_fn = dispatch_and_compile()
1073 if isinstance(mod, torch._dynamo.utils.GmWrapper):
1074 # This function is called by the flatten_graph_inputs wrapper, which boxes
1075 # the inputs so that they can be freed before the end of this scope.
1076 # For overhead reasons, this is not the default wrapper, see comment:
1077 # https://github.com/pytorch/pytorch/pull/122535/files#r1560096481
File ~/Programming/birder/.venv/lib/python3.11/site-packages/torch/_functorch/aot_autograd.py:1056, in aot_module_simplified.<locals>.dispatch_and_compile()
1055 with compiled_autograd.disable():
-> 1056 compiled_fn, _ = create_aot_dispatcher_function(
1057 functional_call,
1058 fake_flat_args,
1059 aot_config,
1060 fake_mode,
1061 shape_env,
1062 )
1063 return compiled_fn
File ~/Programming/birder/.venv/lib/python3.11/site-packages/torch/_functorch/aot_autograd.py:522, in create_aot_dispatcher_function(flat_fn, fake_flat_args, aot_config, fake_mode, shape_env)
521 with dynamo_timed("create_aot_dispatcher_function"):
--> 522 return _create_aot_dispatcher_function(
523 flat_fn, fake_flat_args, aot_config, fake_mode, shape_env
524 )
File ~/Programming/birder/.venv/lib/python3.11/site-packages/torch/_functorch/aot_autograd.py:759, in _create_aot_dispatcher_function(flat_fn, fake_flat_args, aot_config, fake_mode, shape_env)
757 compiler_fn = choose_dispatcher(needs_autograd, aot_config)
--> 759 compiled_fn, fw_metadata = compiler_fn(
760 flat_fn,
761 _dup_fake_script_obj(fake_flat_args),
762 aot_config,
763 fw_metadata=fw_metadata,
764 )
765 return compiled_fn, fw_metadata
File ~/Programming/birder/.venv/lib/python3.11/site-packages/torch/_functorch/_aot_autograd/jit_compile_runtime_wrappers.py:179, in aot_dispatch_base(flat_fn, flat_args, aot_config, fw_metadata)
178 with TracingContext.report_output_strides() as fwd_output_strides:
--> 179 compiled_fw = compiler(fw_module, updated_flat_args)
181 if fakified_out_wrapper.needs_post_compile:
File ~/Programming/birder/.venv/lib/python3.11/site-packages/torch/_inductor/compile_fx.py:1350, in compile_fx.<locals>.fw_compiler_base(model, example_inputs, is_inference)
1349 with dynamo_utils.dynamo_timed("compile_fx.<locals>.fw_compiler_base"):
-> 1350 return _fw_compiler_base(model, example_inputs, is_inference)
File ~/Programming/birder/.venv/lib/python3.11/site-packages/torch/_inductor/compile_fx.py:1421, in compile_fx.<locals>._fw_compiler_base(model, example_inputs, is_inference)
1413 user_visible_outputs = dict.fromkeys(
1414 n.name
1415 for n in model_outputs[
(...)
1418 if isinstance(n, torch.fx.Node)
1419 )
-> 1421 return inner_compile(
1422 model,
1423 example_inputs,
1424 static_input_idxs=get_static_input_idxs(fixed),
1425 cudagraphs=cudagraphs,
1426 graph_id=graph_id,
1427 is_inference=is_inference,
1428 boxed_forward_device_index=forward_device,
1429 user_visible_outputs=user_visible_outputs,
1430 )
File ~/Programming/birder/.venv/lib/python3.11/site-packages/torch/_inductor/compile_fx.py:475, in compile_fx_inner(*args, **kwargs)
473 stack.enter_context(DebugContext())
--> 475 return wrap_compiler_debug(_compile_fx_inner, compiler_name="inductor")(
476 *args, **kwargs
477 )
File ~/Programming/birder/.venv/lib/python3.11/site-packages/torch/_dynamo/repro/after_aot.py:85, in wrap_compiler_debug.<locals>.debug_wrapper(gm, example_inputs, **kwargs)
82 try:
83 # Call the compiler_fn - which is either aot_autograd or inductor
84 # with fake inputs
---> 85 inner_compiled_fn = compiler_fn(gm, example_inputs)
86 except Exception as e:
87 # TODO: Failures here are troublesome because no real inputs,
88 # need a different serialization strategy
File ~/Programming/birder/.venv/lib/python3.11/site-packages/torch/_inductor/compile_fx.py:661, in _compile_fx_inner(gm, example_inputs, cudagraphs, static_input_idxs, is_backward, graph_id, cpp_wrapper, aot_mode, is_inference, boxed_forward_device_index, user_visible_outputs, layout_opt, extern_node_serializer)
659 input._is_inductor_static = True # type: ignore[attr-defined]
--> 661 compiled_graph = FxGraphCache.load(
662 codegen_and_compile,
663 gm,
664 example_inputs,
665 graph_kwargs,
666 inputs_to_check,
667 local=config.fx_graph_cache,
668 remote=fx_graph_remote_cache,
669 )
670 else:
File ~/Programming/birder/.venv/lib/python3.11/site-packages/torch/_inductor/codecache.py:1334, in FxGraphCache.load(compile_fx_fn, gm, example_inputs, fx_kwargs, inputs_to_check, local, remote)
1333 cache_event_time = start_time
-> 1334 compiled_graph = compile_fx_fn(
1335 gm, example_inputs, inputs_to_check, fx_kwargs
1336 )
1337 compiled_graph._time_taken_ns = time_ns() - start_time
File ~/Programming/birder/.venv/lib/python3.11/site-packages/torch/_inductor/compile_fx.py:570, in _compile_fx_inner.<locals>.codegen_and_compile(gm, example_inputs, inputs_to_check, fx_kwargs)
566 """
567 This function calls fx_codegen_and_compile and also adds some extra metadata to the resulting
568 compiled fx graph. The metadata is saved to FXGraphCache.
569 """
--> 570 compiled_graph = fx_codegen_and_compile(gm, example_inputs, **fx_kwargs)
571 if isinstance(compiled_graph, str):
572 # We only return a string in aot mode, in which case we don't
573 # need to do any post-compilation steps: we just return the string,
574 # which is the filename of the compiled code.
File ~/Programming/birder/.venv/lib/python3.11/site-packages/torch/_inductor/compile_fx.py:878, in fx_codegen_and_compile(gm, example_inputs, cudagraphs, static_input_idxs, is_backward, graph_id, cpp_wrapper, aot_mode, is_inference, user_visible_outputs, layout_opt, extern_node_serializer)
877 _check_triton_bf16_support(graph)
--> 878 compiled_fn = graph.compile_to_fn()
879 num_bytes, nodes_num_elem, node_runtimes = graph.count_bytes()
File ~/Programming/birder/.venv/lib/python3.11/site-packages/torch/_inductor/graph.py:1913, in GraphLowering.compile_to_fn(self)
1912 else:
-> 1913 return self.compile_to_module().call
File ~/Programming/birder/.venv/lib/python3.11/site-packages/torch/_inductor/graph.py:1839, in GraphLowering.compile_to_module(self)
1836 with dynamo_timed(
1837 "GraphLowering.compile_to_module", phase_name="code_gen", fwd_only=False
1838 ):
-> 1839 return self._compile_to_module()
File ~/Programming/birder/.venv/lib/python3.11/site-packages/torch/_inductor/graph.py:1845, in GraphLowering._compile_to_module(self)
1842 from .codecache import PyCodeCache
1844 code, linemap = (
-> 1845 self.codegen_with_cpp_wrapper() if self.cpp_wrapper else self.codegen()
1846 )
1848 GraphLowering.save_output_code(code)
File ~/Programming/birder/.venv/lib/python3.11/site-packages/torch/_inductor/graph.py:1780, in GraphLowering.codegen(self)
1778 self.init_wrapper_code()
-> 1780 self.scheduler = Scheduler(self.operations)
1781 V.debug.draw_orig_fx_graph(self.orig_gm, self.scheduler.nodes)
File ~/Programming/birder/.venv/lib/python3.11/site-packages/torch/_inductor/scheduler.py:1731, in Scheduler.__init__(self, nodes)
1730 with dynamo_timed("Scheduler.__init__"):
-> 1731 self._init(nodes)
File ~/Programming/birder/.venv/lib/python3.11/site-packages/torch/_inductor/scheduler.py:1749, in Scheduler._init(self, nodes)
1741 self.available_buffer_names = OrderedSet(
1742 [
1743 *V.graph.graph_inputs.keys(),
(...)
1746 ]
1747 )
-> 1749 self.nodes = [self.create_scheduler_node(n) for n in nodes]
1750 self.update_zero_dim_cpu_tensor()
File ~/Programming/birder/.venv/lib/python3.11/site-packages/torch/_inductor/scheduler.py:1749, in <listcomp>(.0)
1741 self.available_buffer_names = OrderedSet(
1742 [
1743 *V.graph.graph_inputs.keys(),
(...)
1746 ]
1747 )
-> 1749 self.nodes = [self.create_scheduler_node(n) for n in nodes]
1750 self.update_zero_dim_cpu_tensor()
File ~/Programming/birder/.venv/lib/python3.11/site-packages/torch/_inductor/scheduler.py:1856, in Scheduler.create_scheduler_node(self, node)
1855 elif isinstance(node, (ir.ComputedBuffer, ir.TemplateBuffer)):
-> 1856 return SchedulerNode(self, node)
1857 elif isinstance(node, ir.ExternKernel):
File ~/Programming/birder/.venv/lib/python3.11/site-packages/torch/_inductor/scheduler.py:833, in SchedulerNode.__init__(self, scheduler, node)
832 self._init_from_node(node)
--> 833 self._compute_attrs()
File ~/Programming/birder/.venv/lib/python3.11/site-packages/torch/_inductor/scheduler.py:841, in SchedulerNode._compute_attrs(self, extra_indexing_constraints, recompute_sizes_body_func)
840 assert isinstance(self.node, (ir.ComputedBuffer, ir.TemplateBuffer))
--> 841 self._sizes, self._body = self.node.simplify_and_reorder(
842 extra_indexing_constraints=extra_indexing_constraints,
843 recompute_sizes_body_func=recompute_sizes_body_func,
844 )
846 group_fn = self.scheduler.get_backend(self.node.get_device()).group_fn
File ~/Programming/birder/.venv/lib/python3.11/site-packages/torch/_inductor/ir.py:3747, in ComputedBuffer.simplify_and_reorder(self, extra_indexing_constraints, recompute_sizes_body_func)
3726 """
3727 This is a main place where we do loop transformations in a
3728 backend-agnostic way.
(...)
3741 on the default body. This can be useful to append additional loop transformations.
3742 """
3743 (
3744 (index_size, reduce_size),
3745 body,
3746 (index_vars, reduce_vars),
-> 3747 ) = self.get_default_sizes_body()
3749 if recompute_sizes_body_func:
File ~/Programming/birder/.venv/lib/python3.11/site-packages/torch/_inductor/utils.py:472, in cache_on_self.<locals>.wrapper(self)
471 if not hasattr(self, key):
--> 472 setattr(self, key, fn(self))
473 return getattr(self, key)
File ~/Programming/birder/.venv/lib/python3.11/site-packages/torch/_inductor/ir.py:3700, in ComputedBuffer.get_default_sizes_body(self)
3699 with patch.object(ConstantBuffer, "override_device", self.get_device()):
-> 3700 body = LoopBody(
3701 self.get_store_function(),
3702 (args if self.get_reduction_type() else args[:1]),
3703 var_ranges,
3704 *args,
3705 )
3706 index_vars = []
File ~/Programming/birder/.venv/lib/python3.11/site-packages/torch/_inductor/loop_body.py:96, in LoopBody.__init__(self, fn, args, var_ranges, iter_vars, reduce_vars)
95 else:
---> 96 self._init_with_tracing(fn, args)
98 self.indexing = None
File ~/Programming/birder/.venv/lib/python3.11/site-packages/torch/_inductor/loop_body.py:109, in LoopBody._init_with_tracing(self, fn, args)
108 self.memory_usage = {t: [] for t in MemoryUsageType}
--> 109 self.root_block = LoopBodyBlock(self, fn, args) # traces
110 del self.indexing_exprs_name
File ~/Programming/birder/.venv/lib/python3.11/site-packages/torch/_inductor/loop_body.py:566, in LoopBodyBlock.__init__(self, body, fn, args)
563 with V.set_ops_handler(handler):
564 # This indirection is just a cute way to get IndexPropagation to
565 # unwrap the return value.
--> 566 ops.output(fn(*args))
567 self.graph = tracer.graph
File ~/Programming/birder/.venv/lib/python3.11/site-packages/torch/_inductor/ir.py:1532, in WelfordReduction.store_reduction(self, output_name, indexer, vars, reduction_vars)
1527 def store_reduction(self, output_name, indexer, vars, reduction_vars):
1528 values = ops.reduction(
1529 self.dtype,
1530 self.src_dtype,
1531 self.reduction_type,
-> 1532 self.inner_fn(vars, reduction_vars),
1533 )
1534 value = values[self.output_index]
File ~/Programming/birder/.venv/lib/python3.11/site-packages/torch/_inductor/lowering.py:5079, in _make_reduction_inner.<locals>.loader(index, reduction_index)
5078 new_index[idx] = var
-> 5079 return inner_loader(new_index)
File ~/Programming/birder/.venv/lib/python3.11/site-packages/torch/_inductor/ir.py:2191, in BaseView.make_loader.<locals>.loader(idx)
2190 def loader(idx):
-> 2191 return inner(reindex(idx))
File ~/Programming/birder/.venv/lib/python3.11/site-packages/torch/_inductor/ir.py:2191, in BaseView.make_loader.<locals>.loader(idx)
2190 def loader(idx):
-> 2191 return inner(reindex(idx))
File ~/Programming/birder/.venv/lib/python3.11/site-packages/torch/_inductor/lowering.py:1114, in pointwise_cat.<locals>.inner_fn(idx)
1111 idx_load[dim] = Identity(idx_load[dim] - inputs_ranges[i][0])
1113 masked_loads.append(
-> 1114 ops.masked(
1115 mask,
1116 lambda: inputs_loaders[i](idx_load),
1117 0.0, # this value should be unused
1118 ),
1119 )
1121 next_val = masked_loads[-1]
File ~/Programming/birder/.venv/lib/python3.11/site-packages/torch/_inductor/virtualized.py:265, in OpsWrapper.__getattr__.<locals>.inner(*args, **kwargs)
264 new_kwargs = {k: OpsWrapper._unwrap(v) for k, v in kwargs.items()}
--> 265 return OpsWrapper._wrap(getattr(_ops, name)(*new_args, **new_kwargs))
File ~/Programming/birder/.venv/lib/python3.11/site-packages/torch/_inductor/index_propagation.py:294, in IndexPropagation.__getattr__.<locals>.inner(*args, **kwargs)
293 if not hasattr(SymPyOps, name):
--> 294 return self.fallback(name, args, kwargs)
296 var_arguments = [
297 a
298 for a in itertools.chain(args, kwargs.values())
299 if isinstance(a, IndexPropVar)
300 ]
File ~/Programming/birder/.venv/lib/python3.11/site-packages/torch/_inductor/index_propagation.py:267, in IndexPropagation.fallback(self, name, args, kwargs)
266 new_kwargs = {k: self.unwrap(v) for k, v in kwargs.items()}
--> 267 return self.wrap(getattr(self._inner, name)(*new_args, **new_kwargs))
File ~/Programming/birder/.venv/lib/python3.11/site-packages/torch/_inductor/loop_body.py:491, in LoopBodyBlock.__init__.<locals>.CaptureIndexing.masked(mask_proxy, masked_body, other_proxy)
490 self.body.submodules[name] = self.body.bind_masked_shim(name)
--> 491 self.body.subblocks[name] = LoopBodyBlock(self.body, masked_body, [])
492 return tracer.create_proxy(
493 "call_module", name, (mask_proxy, other_proxy), {}
494 )
File ~/Programming/birder/.venv/lib/python3.11/site-packages/torch/_inductor/loop_body.py:566, in LoopBodyBlock.__init__(self, body, fn, args)
563 with V.set_ops_handler(handler):
564 # This indirection is just a cute way to get IndexPropagation to
565 # unwrap the return value.
--> 566 ops.output(fn(*args))
567 self.graph = tracer.graph
File ~/Programming/birder/.venv/lib/python3.11/site-packages/torch/_inductor/lowering.py:1116, in pointwise_cat.<locals>.inner_fn.<locals>.<lambda>()
1111 idx_load[dim] = Identity(idx_load[dim] - inputs_ranges[i][0])
1113 masked_loads.append(
1114 ops.masked(
1115 mask,
-> 1116 lambda: inputs_loaders[i](idx_load),
1117 0.0, # this value should be unused
1118 ),
1119 )
1121 next_val = masked_loads[-1]
File ~/Programming/birder/.venv/lib/python3.11/site-packages/torch/_inductor/ir.py:2191, in BaseView.make_loader.<locals>.loader(idx)
2190 def loader(idx):
-> 2191 return inner(reindex(idx))
File ~/Programming/birder/.venv/lib/python3.11/site-packages/torch/_inductor/ir.py:2191, in BaseView.make_loader.<locals>.loader(idx)
2190 def loader(idx):
-> 2191 return inner(reindex(idx))
[... skipping similar frames: BaseView.make_loader.<locals>.loader at line 2191 (2 times)]
File ~/Programming/birder/.venv/lib/python3.11/site-packages/torch/_inductor/ir.py:2191, in BaseView.make_loader.<locals>.loader(idx)
2190 def loader(idx):
-> 2191 return inner(reindex(idx))
File ~/Programming/birder/.venv/lib/python3.11/site-packages/torch/_inductor/lowering.py:3094, in index_impl_helper.<locals>.inner_fn(idx)
3093 def inner_fn(idx):
-> 3094 return x_loader(index_inner_fn(idx))
File ~/Programming/birder/.venv/lib/python3.11/site-packages/torch/_inductor/lowering.py:3038, in index_output_size_and_inner_fn.<locals>.fn(idx)
3035 size = indexed_size[i]
3036 new_index.append(
3037 ops.indirect_indexing(
-> 3038 loader(idx[start_offset : start_offset + rank]),
3039 size,
3040 check=check,
3041 )
3042 )
3043 new_index = [
3044 *new_index,
3045 *idx[next_idx:],
3046 ]
File ~/Programming/birder/.venv/lib/python3.11/site-packages/torch/_inductor/ir.py:2191, in BaseView.make_loader.<locals>.loader(idx)
2190 def loader(idx):
-> 2191 return inner(reindex(idx))
File ~/Programming/birder/.venv/lib/python3.11/site-packages/torch/_inductor/ir.py:2191, in BaseView.make_loader.<locals>.loader(idx)
2190 def loader(idx):
-> 2191 return inner(reindex(idx))
File ~/Programming/birder/.venv/lib/python3.11/site-packages/torch/_inductor/ir.py:2191, in BaseView.make_loader.<locals>.loader(idx)
2190 def loader(idx):
-> 2191 return inner(reindex(idx))
File ~/Programming/birder/.venv/lib/python3.11/site-packages/torch/_inductor/lowering.py:513, in make_pointwise.<locals>.inner.<locals>.inner_fn(index)
512 for load in loaders:
--> 513 out = load(index)
514 if emulate_precision_casts:
File ~/Programming/birder/.venv/lib/python3.11/site-packages/torch/_inductor/ir.py:2191, in BaseView.make_loader.<locals>.loader(idx)
2190 def loader(idx):
-> 2191 return inner(reindex(idx))
File ~/Programming/birder/.venv/lib/python3.11/site-packages/torch/_inductor/ir.py:2191, in BaseView.make_loader.<locals>.loader(idx)
2190 def loader(idx):
-> 2191 return inner(reindex(idx))
File ~/Programming/birder/.venv/lib/python3.11/site-packages/torch/_inductor/lowering.py:2451, in iota.<locals>.fn(index)
2450 def fn(index):
-> 2451 return ops.index_expr(step * index[0] + start, dtype=dtype)
File ~/Programming/birder/.venv/lib/python3.11/site-packages/torch/_inductor/virtualized.py:265, in OpsWrapper.__getattr__.<locals>.inner(*args, **kwargs)
264 new_kwargs = {k: OpsWrapper._unwrap(v) for k, v in kwargs.items()}
--> 265 return OpsWrapper._wrap(getattr(_ops, name)(*new_args, **new_kwargs))
File ~/Programming/birder/.venv/lib/python3.11/site-packages/torch/_inductor/index_propagation.py:304, in IndexPropagation.__getattr__.<locals>.inner(*args, **kwargs)
302 return self.fallback(name, args, kwargs)
--> 304 return self.propagate_sympy(name, args, kwargs)
File ~/Programming/birder/.venv/lib/python3.11/site-packages/torch/_inductor/index_propagation.py:280, in IndexPropagation.propagate_sympy(self, name, args, kwargs)
279 new_kwargs = {k: unwrap(v) for k, v in kwargs.items()}
--> 280 new_expr = getattr(SymPyOps, name)(*new_args, **new_kwargs)
281 is_valid_expr = new_expr is not NotImplemented and (
282 # Inductor doesn't expect floating point in sympy expressions, but
283 # allow floating point constants to be propagated
284 new_expr.is_constant()
285 or new_expr.expr.is_integer
286 )
File ~/Programming/birder/.venv/lib/python3.11/site-packages/torch/_inductor/index_propagation.py:86, in SymPyOps.index_expr(value, dtype)
84 @staticmethod
85 def index_expr(value: Union[sympy.Expr, int], dtype: torch.dtype) -> TypedExpr:
---> 86 return TypedExpr(value, dtype)
File <string>:5, in __init__(self, expr, dtype)
File ~/Programming/birder/.venv/lib/python3.11/site-packages/torch/_inductor/index_propagation.py:65, in TypedExpr.__post_init__(self)
64 if _is_constant(self.expr):
---> 65 self.expr = dtype_to_type(self.dtype)(self.expr)
File ~/Programming/birder/.venv/lib/python3.11/site-packages/sympy/core/expr.py:308, in Expr.__int__(self)
307 raise TypeError("Cannot convert symbols to int")
--> 308 r = self.round(2)
309 if not r.is_Number:
File ~/Programming/birder/.venv/lib/python3.11/site-packages/sympy/core/expr.py:3838, in Expr.round(self, n)
3837 if not pure_complex(x.n(2), or_real=True):
-> 3838 raise TypeError(
3839 'Expected a number but got %s:' % func_name(x))
3840 elif x in _illegal:
TypeError: Expected a number but got ModularIndexing:
The above exception was the direct cause of the following exception:
BackendCompilerFailed Traceback (most recent call last)
Cell In[1], line 8
6 net = torch.compile(net)
7 with torch.inference_mode():
----> 8 net(torch.rand(1, 3, 256, 256, device=torch.device("cuda")))
File ~/Programming/birder/.venv/lib/python3.11/site-packages/torch/nn/modules/module.py:1736, in Module._wrapped_call_impl(self, *args, **kwargs)
1734 return self._compiled_call_impl(*args, **kwargs) # type: ignore[misc]
1735 else:
-> 1736 return self._call_impl(*args, **kwargs)
File ~/Programming/birder/.venv/lib/python3.11/site-packages/torch/nn/modules/module.py:1747, in Module._call_impl(self, *args, **kwargs)
1742 # If we don't have any hooks, we want to skip the rest of the logic in
1743 # this function, and just call forward.
1744 if not (self._backward_hooks or self._backward_pre_hooks or self._forward_hooks or self._forward_pre_hooks
1745 or _global_backward_pre_hooks or _global_backward_hooks
1746 or _global_forward_hooks or _global_forward_pre_hooks):
-> 1747 return forward_call(*args, **kwargs)
1749 result = None
1750 called_always_called_hooks = set()
File ~/Programming/birder/.venv/lib/python3.11/site-packages/torch/_dynamo/eval_frame.py:465, in _TorchDynamoContext.__call__.<locals>._fn(*args, **kwargs)
460 saved_dynamic_layer_stack_depth = (
461 torch._C._functorch.get_dynamic_layer_stack_depth()
462 )
464 try:
--> 465 return fn(*args, **kwargs)
466 finally:
467 # Restore the dynamic layer stack depth if necessary.
468 torch._C._functorch.pop_dynamic_layer_stack_and_undo_to_depth(
469 saved_dynamic_layer_stack_depth
470 )
File ~/Programming/birder/.venv/lib/python3.11/site-packages/torch/nn/modules/module.py:1736, in Module._wrapped_call_impl(self, *args, **kwargs)
1734 return self._compiled_call_impl(*args, **kwargs) # type: ignore[misc]
1735 else:
-> 1736 return self._call_impl(*args, **kwargs)
File ~/Programming/birder/.venv/lib/python3.11/site-packages/torch/nn/modules/module.py:1747, in Module._call_impl(self, *args, **kwargs)
1742 # If we don't have any hooks, we want to skip the rest of the logic in
1743 # this function, and just call forward.
1744 if not (self._backward_hooks or self._backward_pre_hooks or self._forward_hooks or self._forward_pre_hooks
1745 or _global_backward_pre_hooks or _global_backward_hooks
1746 or _global_forward_hooks or _global_forward_pre_hooks):
-> 1747 return forward_call(*args, **kwargs)
1749 result = None
1750 called_always_called_hooks = set()
File ~/Programming/birder/birder/net/base.py:132, in BaseNet.forward(self, x)
129 def classify(self, x: torch.Tensor) -> torch.Tensor:
130 return self.classifier(x)
--> 132 def forward(self, x: torch.Tensor) -> torch.Tensor:
133 x = self.embedding(x)
134 return self.classify(x)
File ~/Programming/birder/birder/net/regionvit.py:509, in RegionViT.embedding(self, x)
506 for param in module.parameters():
507 param.requires_grad = False
--> 509 def embedding(self, x: torch.Tensor) -> torch.Tensor:
510 o_x = x
511 x = self.patch_embed(x)
File ~/Programming/birder/.venv/lib/python3.11/site-packages/torch/nn/modules/module.py:1736, in Module._wrapped_call_impl(self, *args, **kwargs)
1734 return self._compiled_call_impl(*args, **kwargs) # type: ignore[misc]
1735 else:
-> 1736 return self._call_impl(*args, **kwargs)
File ~/Programming/birder/.venv/lib/python3.11/site-packages/torch/nn/modules/module.py:1747, in Module._call_impl(self, *args, **kwargs)
1742 # If we don't have any hooks, we want to skip the rest of the logic in
1743 # this function, and just call forward.
1744 if not (self._backward_hooks or self._backward_pre_hooks or self._forward_hooks or self._forward_pre_hooks
1745 or _global_backward_pre_hooks or _global_backward_hooks
1746 or _global_forward_hooks or _global_forward_pre_hooks):
-> 1747 return forward_call(*args, **kwargs)
1749 result = None
1750 called_always_called_hooks = set()
File ~/Programming/birder/birder/net/regionvit.py:119, in SequentialWithTwo.forward(self, cls_tokens, patch_tokens)
115 def forward( # pylint: disable=arguments-differ
116 self, cls_tokens: torch.Tensor, patch_tokens: torch.Tensor
117 ) -> tuple[torch.Tensor, torch.Tensor]:
118 for module in self:
--> 119 (cls_tokens, patch_tokens) = module(cls_tokens, patch_tokens)
121 return (cls_tokens, patch_tokens)
File ~/Programming/birder/.venv/lib/python3.11/site-packages/torch/nn/modules/module.py:1736, in Module._wrapped_call_impl(self, *args, **kwargs)
1734 return self._compiled_call_impl(*args, **kwargs) # type: ignore[misc]
1735 else:
-> 1736 return self._call_impl(*args, **kwargs)
File ~/Programming/birder/.venv/lib/python3.11/site-packages/torch/nn/modules/module.py:1747, in Module._call_impl(self, *args, **kwargs)
1742 # If we don't have any hooks, we want to skip the rest of the logic in
1743 # this function, and just call forward.
1744 if not (self._backward_hooks or self._backward_pre_hooks or self._forward_hooks or self._forward_pre_hooks
1745 or _global_backward_pre_hooks or _global_backward_hooks
1746 or _global_forward_hooks or _global_forward_pre_hooks):
-> 1747 return forward_call(*args, **kwargs)
1749 result = None
1750 called_always_called_hooks = set()
File ~/Programming/birder/.venv/lib/python3.11/site-packages/torch/_dynamo/convert_frame.py:1269, in CatchErrorsWrapper.__call__(self, frame, cache_entry, frame_state)
1263 return hijacked_callback(
1264 frame, cache_entry, self.hooks, frame_state
1265 )
1267 with compile_lock, _disable_current_modes():
1268 # skip=1: skip this frame
-> 1269 return self._torchdynamo_orig_callable(
1270 frame, cache_entry, self.hooks, frame_state, skip=1
1271 )
File ~/Programming/birder/.venv/lib/python3.11/site-packages/torch/_dynamo/convert_frame.py:1064, in ConvertFrame.__call__(self, frame, cache_entry, hooks, frame_state, skip)
1062 counters["frames"]["total"] += 1
1063 try:
-> 1064 result = self._inner_convert(
1065 frame, cache_entry, hooks, frame_state, skip=skip + 1
1066 )
1067 counters["frames"]["ok"] += 1
1068 return result
File ~/Programming/birder/.venv/lib/python3.11/site-packages/torch/_dynamo/convert_frame.py:526, in ConvertFrameAssert.__call__(self, frame, cache_entry, hooks, frame_state, skip)
510 compile_id = CompileId(frame_id, frame_compile_id)
512 signpost_event(
513 "dynamo",
514 "_convert_frame_assert._compile",
(...)
523 },
524 )
--> 526 return _compile(
527 frame.f_code,
528 frame.f_globals,
529 frame.f_locals,
530 frame.f_builtins,
531 self._torchdynamo_orig_callable,
532 self._one_graph,
533 self._export,
534 self._export_constraints,
535 hooks,
536 cache_entry,
537 cache_size,
538 frame,
539 frame_state=frame_state,
540 compile_id=compile_id,
541 skip=skip + 1,
542 )
File ~/Programming/birder/.venv/lib/python3.11/site-packages/torch/_dynamo/convert_frame.py:924, in _compile(code, globals, locals, builtins, compiler_fn, one_graph, export, export_constraints, hooks, cache_entry, cache_size, frame, frame_state, compile_id, skip)
922 guarded_code = None
923 try:
--> 924 guarded_code = compile_inner(code, one_graph, hooks, transform)
925 return guarded_code
926 except Exception as e:
File ~/Programming/birder/.venv/lib/python3.11/site-packages/torch/_dynamo/convert_frame.py:666, in _compile.<locals>.compile_inner(code, one_graph, hooks, transform)
664 with dynamo_timed("_compile.compile_inner", phase_name="entire_frame_compile"):
665 with CompileTimeInstructionCounter.record():
--> 666 return _compile_inner(code, one_graph, hooks, transform)
File ~/Programming/birder/.venv/lib/python3.11/site-packages/torch/_utils_internal.py:87, in compile_time_strobelight_meta.<locals>.compile_time_strobelight_meta_inner.<locals>.wrapper_function(*args, **kwargs)
84 kwargs["skip"] = kwargs["skip"] + 1
86 if not StrobelightCompileTimeProfiler.enabled:
---> 87 return function(*args, **kwargs)
89 return StrobelightCompileTimeProfiler.profile_compile_time(
90 function, phase_name, *args, **kwargs
91 )
File ~/Programming/birder/.venv/lib/python3.11/site-packages/torch/_dynamo/convert_frame.py:699, in _compile.<locals>._compile_inner(code, one_graph, hooks, transform)
697 CompileContext.get().attempt = attempt
698 try:
--> 699 out_code = transform_code_object(code, transform)
700 break
701 except exc.RestartAnalysis as e:
File ~/Programming/birder/.venv/lib/python3.11/site-packages/torch/_dynamo/bytecode_transformation.py:1322, in transform_code_object(code, transformations, safe)
1319 instructions = cleaned_instructions(code, safe)
1320 propagate_line_nums(instructions)
-> 1322 transformations(instructions, code_options)
1323 return clean_and_assemble_instructions(instructions, keys, code_options)[1]
File ~/Programming/birder/.venv/lib/python3.11/site-packages/torch/_dynamo/convert_frame.py:219, in preserve_global_state.<locals>._fn(*args, **kwargs)
215 exit_stack.enter_context(
216 torch.fx._symbolic_trace._maybe_revert_all_patches()
217 )
218 try:
--> 219 return fn(*args, **kwargs)
220 finally:
221 cleanup.close()
File ~/Programming/birder/.venv/lib/python3.11/site-packages/torch/_dynamo/convert_frame.py:634, in _compile.<locals>.transform(instructions, code_options)
632 try:
633 with tracing(tracer.output.tracing_context), tracer.set_current_tx():
--> 634 tracer.run()
635 except exc.UnspecializeRestartAnalysis:
636 speculation_log.clear()
File ~/Programming/birder/.venv/lib/python3.11/site-packages/torch/_dynamo/symbolic_convert.py:2796, in InstructionTranslator.run(self)
2795 def run(self):
-> 2796 super().run()
File ~/Programming/birder/.venv/lib/python3.11/site-packages/torch/_dynamo/symbolic_convert.py:983, in InstructionTranslatorBase.run(self)
981 try:
982 self.output.push_tx(self)
--> 983 while self.step():
984 pass
985 except BackendCompilerFailed:
File ~/Programming/birder/.venv/lib/python3.11/site-packages/torch/_dynamo/symbolic_convert.py:895, in InstructionTranslatorBase.step(self)
892 self.update_block_stack(inst)
894 try:
--> 895 self.dispatch_table[inst.opcode](self, inst)
896 return not self.output.should_exit
897 except exc.ObservedException as e:
File ~/Programming/birder/.venv/lib/python3.11/site-packages/torch/_dynamo/symbolic_convert.py:580, in break_graph_if_unsupported.<locals>.decorator.<locals>.wrapper(self, inst)
578 if speculation.failed:
579 assert speculation.reason is not None
--> 580 return handle_graph_break(self, inst, speculation.reason)
581 try:
582 return inner_fn(self, inst)
File ~/Programming/birder/.venv/lib/python3.11/site-packages/torch/_dynamo/symbolic_convert.py:649, in break_graph_if_unsupported.<locals>.decorator.<locals>.handle_graph_break(self, inst, reason)
644 def handle_graph_break(
645 self: "InstructionTranslatorBase",
646 inst: Instruction,
647 reason: GraphCompileReason,
648 ):
--> 649 self.output.compile_subgraph(self, reason=reason)
650 cg = PyCodegen(self)
651 cleanup: List[Instruction] = []
File ~/Programming/birder/.venv/lib/python3.11/site-packages/torch/_dynamo/output_graph.py:1142, in OutputGraph.compile_subgraph(self, tx, partial_convert, reason)
1139 output = []
1140 if count_calls(self.graph) != 0 or len(pass2.graph_outputs) != 0:
1141 output.extend(
-> 1142 self.compile_and_call_fx_graph(tx, pass2.graph_output_vars(), root)
1143 )
1145 if len(pass2.graph_outputs) != 0:
1146 output.append(pass2.create_store(graph_output_var))
File ~/Programming/birder/.venv/lib/python3.11/site-packages/torch/_dynamo/output_graph.py:1369, in OutputGraph.compile_and_call_fx_graph(self, tx, rv, root)
1366 self.tracing_context.fake_mode = backend_fake_mode
1368 with self.restore_global_state():
-> 1369 compiled_fn = self.call_user_compiler(gm)
1371 from torch.fx._lazy_graph_module import _LazyGraphModule
1373 if isinstance(compiled_fn, _LazyGraphModule) or (
1374 isinstance(getattr(compiled_fn, "__self__", None), _LazyGraphModule)
1375 and compiled_fn.__name__ == "_lazy_forward" # type: ignore[attr-defined]
(...)
1379 # this is a _LazyGraphModule. This makes it easier for dynamo to
1380 # optimize a _LazyGraphModule.
File ~/Programming/birder/.venv/lib/python3.11/site-packages/torch/_dynamo/output_graph.py:1416, in OutputGraph.call_user_compiler(self, gm)
1412 def call_user_compiler(self, gm: fx.GraphModule) -> CompiledFn:
1413 with dynamo_timed(
1414 "OutputGraph.call_user_compiler", phase_name="backend_compile"
1415 ):
-> 1416 return self._call_user_compiler(gm)
File ~/Programming/birder/.venv/lib/python3.11/site-packages/torch/_dynamo/output_graph.py:1465, in OutputGraph._call_user_compiler(self, gm)
1463 raise e
1464 except Exception as e:
-> 1465 raise BackendCompilerFailed(self.compiler_fn, e) from e
1467 signpost_event(
1468 "dynamo",
1469 "OutputGraph.call_user_compiler",
(...)
1475 },
1476 )
1478 return compiled_fn
BackendCompilerFailed: backend='inductor' raised:
TypeError: Expected a number but got ModularIndexing:
Set TORCH_LOGS="+dynamo" and TORCHDYNAMO_VERBOSE=1 for more information
You can suppress this exception and fall back to eager by setting:
import torch._dynamo
torch._dynamo.config.suppress_errors = True
```
</details>
## Initial Analysis
* Issue appears to be related to reshape operations at lines [308](https://gitlab.com/birder/birder/-/blob/3f9312fa0b0f39ef814caaffbfcc17610ae26b48/birder/net/regionvit.py#L308) and [311](https://gitlab.com/birder/birder/-/blob/3f9312fa0b0f39ef814caaffbfcc17610ae26b48/birder/net/regionvit.py#L311) in the model code
* Removing the attention line at 309 doesn't resolve the issue
* Removing all reshape operations allows successful compilation
* Adding print statements anywhere in the code makes the compilation succeed, making debugging challenging
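A possible interim workaround (an untested sketch, based only on the observation above that the failure is specific to `torch.inference_mode()`) is to run the compiled model under `torch.no_grad()` instead, which still avoids gradient tracking:
```python
# Continuing from the reproduction snippet above; whether no_grad sidesteps the
# inference_mode-specific failure is an assumption, not something verified here.
with torch.no_grad():
    net(torch.rand(1, 3, 256, 256, device=torch.device("cuda")))
```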
### Versions
Collecting environment information...
PyTorch version: 2.5.1+cu124
Is debug build: False
CUDA used to build PyTorch: 12.4
ROCM used to build PyTorch: N/A
OS: Debian GNU/Linux 12 (bookworm) (x86_64)
GCC version: (Debian 12.2.0-14) 12.2.0
Clang version: Could not collect
CMake version: version 3.25.1
Libc version: glibc-2.36
Python version: 3.11.2 (main, Sep 14 2024, 03:00:30) [GCC 12.2.0] (64-bit runtime)
Python platform: Linux-6.1.0-28-amd64-x86_64-with-glibc2.36
Is CUDA available: True
CUDA runtime version: 12.4.131
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration:
GPU 0: NVIDIA RTX A5000
GPU 1: NVIDIA RTX A5000
Nvidia driver version: 565.57.01
cuDNN version: Could not collect
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 48 bits physical, 48 bits virtual
Byte Order: Little Endian
CPU(s): 32
On-line CPU(s) list: 0-31
Vendor ID: AuthenticAMD
Model name: AMD Ryzen 9 7950X3D 16-Core Processor
CPU family: 25
Model: 97
Thread(s) per core: 2
Core(s) per socket: 16
Socket(s): 1
Stepping: 2
Frequency boost: enabled
CPU(s) scaling MHz: 75%
CPU max MHz: 5758.5928
CPU min MHz: 3000.0000
BogoMIPS: 8400.52
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ht syscall nx mmxext fxsr_opt pdpe1gb rdtscp lm constant_tsc rep_good amd_lbr_v2 nopl nonstop_tsc cpuid extd_apicid aperfmperf rapl pni pclmulqdq monitor ssse3 fma cx16 sse4_1 sse4_2 x2apic movbe popcnt aes xsave avx f16c rdrand lahf_lm cmp_legacy svm extapic cr8_legacy abm sse4a misalignsse 3dnowprefetch osvw ibs skinit wdt tce topoext perfctr_core perfctr_nb bpext perfctr_llc mwaitx cpb cat_l3 cdp_l3 hw_pstate ssbd mba perfmon_v2 ibrs ibpb stibp ibrs_enhanced vmmcall fsgsbase bmi1 avx2 smep bmi2 erms invpcid cqm rdt_a avx512f avx512dq rdseed adx smap avx512ifma clflushopt clwb avx512cd sha_ni avx512bw avx512vl xsaveopt xsavec xgetbv1 xsaves cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local avx512_bf16 clzero irperf xsaveerptr rdpru wbnoinvd cppc arat npt lbrv svm_lock nrip_save tsc_scale vmcb_clean flushbyasid decodeassists pausefilter pfthreshold avic v_vmsave_vmload vgif x2avic v_spec_ctrl avx512vbmi umip pku ospke avx512_vbmi2 gfni vaes vpclmulqdq avx512_vnni avx512_bitalg avx512_vpopcntdq rdpid overflow_recov succor smca fsrm flush_l1d
Virtualization: AMD-V
L1d cache: 512 KiB (16 instances)
L1i cache: 512 KiB (16 instances)
L2 cache: 16 MiB (16 instances)
L3 cache: 128 MiB (2 instances)
NUMA node(s): 1
NUMA node0 CPU(s): 0-31
Vulnerability Gather data sampling: Not affected
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Not affected
Vulnerability Reg file data sampling: Not affected
Vulnerability Retbleed: Not affected
Vulnerability Spec rstack overflow: Mitigation; safe RET, no microcode
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Enhanced / Automatic IBRS; IBPB conditional; STIBP always-on; RSB filling; PBRSB-eIBRS Not affected; BHI Not affected
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Versions of relevant libraries:
[pip3] flake8==7.1.1
[pip3] flake8-pep585==0.1.7
[pip3] mypy==1.15.0
[pip3] mypy-extensions==1.0.0
[pip3] numpy==2.1.3
[pip3] nvidia-cublas-cu12==12.4.5.8
[pip3] nvidia-cuda-cupti-cu12==12.4.127
[pip3] nvidia-cuda-nvrtc-cu12==12.4.127
[pip3] nvidia-cuda-runtime-cu12==12.4.127
[pip3] nvidia-cudnn-cu12==9.1.0.70
[pip3] nvidia-cufft-cu12==11.2.1.3
[pip3] nvidia-curand-cu12==10.3.5.147
[pip3] nvidia-cusolver-cu12==11.6.1.9
[pip3] nvidia-cusparse-cu12==12.3.1.170
[pip3] nvidia-cusparselt-cu12==0.6.2
[pip3] nvidia-nccl-cu12==2.21.5
[pip3] nvidia-nvjitlink-cu12==12.4.127
[pip3] nvidia-nvtx-cu12==12.4.127
[pip3] onnx==1.17.0
[pip3] onnxscript==0.1.0
[pip3] torch==2.5.1+cu124
[pip3] torch-model-archiver==0.12.0
[pip3] torch-workflow-archiver==0.2.15
[pip3] torchaudio==2.5.1+cu124
[pip3] torchinfo==1.8.0
[pip3] torchmetrics==1.6.1
[pip3] torchprofile==0.0.4
[pip3] torchserve==0.12.0
[pip3] torchvision==0.20.1+cu124
[pip3] triton==3.1.0
[conda] Could not collect
cc @chauhang @penguinwu @voznesenskym @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @chenyang78 @kadeng @muchulee8 @amjames @aakhundov
| true
|
2,840,428,303
|
[not for commit] Add assert that is_parallel is true
|
jamesjwu
|
closed
|
[
"Stale",
"module: inductor",
"ciflow/inductor"
] | 3
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #146779
* #146417
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
| true
|
2,840,423,603
|
[torch.jit] INTERNAL ASSERT FAILED at "/pytorch/torch/csrc/jit/mobile/register_ops_common_utils.cpp":34, please report a bug to PyTorch.
|
cybersupersoap
|
open
|
[
"oncall: jit"
] | 0
|
NONE
|
### 🐛 Describe the bug
An INTERNAL ASSERT error will be raised when using TorchScript modules and `torch.jit.annotate`. The code is as follows:
```python
import inspect
from typing import Dict, Iterator, List, Optional, Tuple, Any
import torch
import torch.testing._internal.jit_utils
from torch.testing._internal.common_utils import enable_profiling_mode_for_profiling_tests, ProfilingMode
import textwrap
def get_frame_vars(frames_up):
frame = inspect.currentframe()
if not frame:
raise RuntimeError("failed to inspect frame")
i = 0
while i < frames_up + 1:
frame = frame.f_back
if not frame:
raise RuntimeError("failed to get frame")
i += 1
defined_vars: Dict[str, Any] = {}
defined_vars.update(frame.f_locals)
defined_vars.update(frame.f_globals)
return defined_vars
def execWrapper(code, glob, loc):
exec(code, glob, loc)
def checkScript(script,
inputs,
name='func',
optimize=True,
inputs_requires_grad=False,
capture_output=False,
frames_up=1,
profiling=ProfilingMode.PROFILING,
atol=None,
rtol=None):
with torch.jit.optimized_execution(optimize):
with enable_profiling_mode_for_profiling_tests():
extra_profile_runs = any(isinstance(x, torch.Tensor) and x.requires_grad for x in inputs)
if isinstance(script, str):
cu = torch.jit.CompilationUnit(script, _frames_up=frames_up)
frame = get_frame_vars(frames_up)
the_locals: Dict[str, Any] = {}
execWrapper(script, glob=frame, loc=the_locals)
frame.update(the_locals)
scripted_fn = getattr(cu, name)
else:
source = textwrap.dedent(inspect.getsource(script))
checkScript(
source,
inputs,
script.__name__,
optimize=optimize,
inputs_requires_grad=inputs_requires_grad,
capture_output=capture_output,
profiling=profiling,
frames_up=2)
# Continue checking the Python frontend
scripted_fn = torch.jit.script(script, _frames_up=1)
# profiling run
script_outputs = scripted_fn(*inputs)
if inputs_requires_grad or extra_profile_runs:
opt_script_outputs = scripted_fn(*inputs)
opt_script_outputs = scripted_fn(*inputs)
def to_list_float_1D(x: torch.Tensor) -> List[float]:
li = torch.jit.annotate(List[float], x.tolist())
return li
checkScript(to_list_float_1D, (torch.randn(5, dtype=torch.float16),))
```
Error messages:
```
RuntimeError: The following operation failed in the TorchScript interpreter.
Traceback of TorchScript (most recent call last):
RuntimeError: scalar_ty == at::ScalarType::Float || scalar_ty == at::ScalarType::Double INTERNAL ASSERT FAILED at "/pytorch/torch/csrc/jit/mobile/register_ops_common_utils.cpp":34, please report a bug to PyTorch. Unexpected scalar type for Tensor
```
The error is reproducible with the nightly-build version `2.7.0.dev20250208+cpu`.
Please find the [gist](https://colab.research.google.com/drive/1VjGcZhuy09VoInlNA_B3fPsfWlutZy9m?usp=sharing) here for reference.
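For triage, the harness above can likely be reduced; a minimal sketch that is expected to hit the same assert (assuming the failure is simply `Tensor.tolist()` on a float16 tensor inside TorchScript; not independently verified):
```python
import torch
from typing import List

@torch.jit.script
def to_list_float_1D(x: torch.Tensor) -> List[float]:
    # tolist() requires an explicit element-type annotation in TorchScript
    return torch.jit.annotate(List[float], x.tolist())

to_list_float_1D(torch.randn(5, dtype=torch.float16))
```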
### Versions
GPU 1: NVIDIA GeForce RTX 3090
Nvidia driver version: 525.116.03
cuDNN version: Could not collect
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
NUMA node1 CPU(s): 16-31,48-63
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl and seccomp
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Enhanced IBRS, IBPB conditional, RSB filling
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Versions of relevant libraries:
[pip3] numpy==1.26.4
[pip3] nvidia-cublas-cu11==11.11.3.6
[pip3] nvidia-cublas-cu12==12.4.5.8
[pip3] nvidia-cuda-cupti-cu11==11.8.87
[pip3] nvidia-cuda-cupti-cu12==12.4.127
[pip3] nvidia-cuda-nvrtc-cu11==11.8.89
[pip3] nvidia-cuda-nvrtc-cu12==12.4.127
[pip3] nvidia-cuda-runtime-cu11==11.8.89
[pip3] nvidia-cuda-runtime-cu12==12.4.127
[pip3] nvidia-cudnn-cu11==9.1.0.70
[pip3] nvidia-cudnn-cu12==9.1.0.70
[pip3] nvidia-cufft-cu11==10.9.0.58
[pip3] nvidia-cufft-cu12==11.2.1.3
[pip3] nvidia-curand-cu11==10.3.0.86
[pip3] nvidia-curand-cu12==10.3.5.147
[pip3] nvidia-cusolver-cu11==11.4.1.48
[pip3] nvidia-cusolver-cu12==11.6.1.9
[pip3] nvidia-cusparse-cu11==11.7.5.86
[pip3] nvidia-cusparse-cu12==12.3.1.170
[pip3] nvidia-cusparselt-cu12==0.6.2
[pip3] nvidia-nccl-cu11==2.20.5
[pip3] nvidia-nccl-cu12==2.21.5
[pip3] nvidia-nvjitlink-cu12==12.4.127
[pip3] nvidia-nvtx-cu11==11.8.86
[pip3] nvidia-nvtx-cu12==12.4.127
[pip3] optree==0.12.1
[pip3] pytorch-triton==3.2.0+git4b3bb1f8
[pip3] torch==2.7.0.dev20250208+cu124
[pip3] torchaudio==2.6.0.dev20250208+cu124
[pip3] torchelastic==0.2.2
[pip3] torchvision==0.22.0.dev20250208+cu124
[pip3] triton==3.0.0
[conda] No relevant packages
cc @EikanWang @jgong5 @wenzhe-nrv @sanchitintel
| true
|
2,840,423,330
|
Enable explicitly vectorized `_weight_int8pack_mm` op for FP16 dtype on x86_64 CPU
|
sanchitintel
|
open
|
[
"module: cpu",
"triaged",
"open source",
"ciflow/trunk",
"intel",
"release notes: intel"
] | 4
|
COLLABORATOR
|
## Summary
Currently, `_weight_int8pack_mm` is only explicitly vectorized for BF16 activations on x86_64 CPUs, and has different AVX2 & AVX512 implementations.
This PR unifies the separate AVX512 & AVX2 implementations, and also makes the implementation common across Float/BFloat16/Half activation dtypes, which is feasible since compute & accumulation happen in FP32 even in the case of FP16/BF16 activations.
Most of the code added in this PR has been copy-pasted from Inductor-CPP FP32 GEMM micro-kernel template (so, credits to the original authors).
There's no performance regression. The input shapes (M, N, K) benchmarked are:
[1, 4096, 4096], [1, 4096, 11008], [1, 11008, 4096], [4, 4096, 4096], [4, 4096, 11008], [4, 11008, 4096], [1, 4096, 14336], [1, 14336, 4096], [4, 4096, 14336], [4, 14336, 4096]
Intel OpenMP & tcmalloc were preloaded for benchmarking.
Now the non-vectorized (not explicitly vectorized) micro-kernel would only be used when:
1. `ATEN_CPU_CAPABILITY` is default.
2. MSVC builds on x86_64 CPUs.
3. aarch64 builds with `C10_MOBILE` true (not sure if such builds exist on PyTorch CI).
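For reference, a minimal sketch of exercising the op with FP16 activations on a recent build; the shapes, dtypes, and per-channel scale layout here are assumptions for illustration, not taken from the benchmark setup:
```python
import torch

M, K, N = 4, 4096, 4096
x = torch.randn(M, K, dtype=torch.float16)              # FP16 activations (the path vectorized here)
w = torch.randint(-128, 127, (N, K), dtype=torch.int8)  # int8 weight, one row per output channel
scales = torch.rand(N, dtype=torch.float16)             # assumed per-channel dequantization scales
out = torch.ops.aten._weight_int8pack_mm(x, w, scales)
print(out.shape)  # torch.Size([4, 4096])
```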
cc @jgong5 @mingfeima @XiaobingSuper @ashokei @jingxu10 @jerryzh168
| true
|
2,840,413,799
|
INTERNAL ASSERT FAILED at "/pytorch/torch/csrc/jit/testing/file_check.cpp":607, please report a bug to PyTorch
|
cybersupersoap
|
open
|
[
"oncall: jit",
"module: testing"
] | 0
|
NONE
|
### 🐛 Describe the bug
An INTERNAL ASSERT error will be raised when using `torch.testing.FileCheck.check_count`:
```python
from torch.testing import FileCheck
FileCheck().check_count('is being compiled', 0).run("")
```
Error messages:
```
RuntimeError Traceback (most recent call last)
<ipython-input-3-214611a61ccb> in <cell line: 0>()
1 from torch.testing import FileCheck
----> 2 FileCheck().check_count('is being compiled', 0).run("")
RuntimeError: count != 0 || exactly INTERNAL ASSERT FAILED at "/pytorch/torch/csrc/jit/testing/file_check.cpp":607, please report a bug to PyTorch. Count == 0 && !exactly doesn't do anything
```
The error is reproducible with the nightly-build version `2.7.0.dev20250208+cpu` .
Please find the [gist](https://colab.research.google.com/drive/1tE36xGF4JtyxUfh18P2_sCjd8GUgrHVs?usp=sharing) here for reference.
### Versions
GPU 1: NVIDIA GeForce RTX 3090
Nvidia driver version: 525.116.03
cuDNN version: Could not collect
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
NUMA node1 CPU(s): 16-31,48-63
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl and seccomp
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Enhanced IBRS, IBPB conditional, RSB filling
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Versions of relevant libraries:
[pip3] numpy==1.26.4
[pip3] nvidia-cublas-cu11==11.11.3.6
[pip3] nvidia-cublas-cu12==12.4.5.8
[pip3] nvidia-cuda-cupti-cu11==11.8.87
[pip3] nvidia-cuda-cupti-cu12==12.4.127
[pip3] nvidia-cuda-nvrtc-cu11==11.8.89
[pip3] nvidia-cuda-nvrtc-cu12==12.4.127
[pip3] nvidia-cuda-runtime-cu11==11.8.89
[pip3] nvidia-cuda-runtime-cu12==12.4.127
[pip3] nvidia-cudnn-cu11==9.1.0.70
[pip3] nvidia-cudnn-cu12==9.1.0.70
[pip3] nvidia-cufft-cu11==10.9.0.58
[pip3] nvidia-cufft-cu12==11.2.1.3
[pip3] nvidia-curand-cu11==10.3.0.86
[pip3] nvidia-curand-cu12==10.3.5.147
[pip3] nvidia-cusolver-cu11==11.4.1.48
[pip3] nvidia-cusolver-cu12==11.6.1.9
[pip3] nvidia-cusparse-cu11==11.7.5.86
[pip3] nvidia-cusparse-cu12==12.3.1.170
[pip3] nvidia-cusparselt-cu12==0.6.2
[pip3] nvidia-nccl-cu11==2.20.5
[pip3] nvidia-nccl-cu12==2.21.5
[pip3] nvidia-nvjitlink-cu12==12.4.127
[pip3] nvidia-nvtx-cu11==11.8.86
[pip3] nvidia-nvtx-cu12==12.4.127
[pip3] optree==0.12.1
[pip3] pytorch-triton==3.2.0+git4b3bb1f8
[pip3] torch==2.7.0.dev20250208+cu124
[pip3] torchaudio==2.6.0.dev20250208+cu124
[pip3] torchelastic==0.2.2
[pip3] torchvision==0.22.0.dev20250208+cu124
[pip3] triton==3.0.0
[conda] No relevant packages
cc @EikanWang @jgong5 @wenzhe-nrv @sanchitintel
| true
|
2,840,408,692
|
[torch.jit] Crash would be raised when using torch.jit.script
|
cybersupersoap
|
open
|
[
"oncall: jit"
] | 1
|
NONE
|
### 🐛 Describe the bug
Segmentation fault would be triggered when using `torch.jit.script` and inserting a constant into the graph. The code is as follows:
```python
import torch
@torch.jit.script
def foo(inp):
    x = inp + 1
    y = x / 2
    z = y * y
    return z

with foo.graph.insert_point_guard(foo.graph.findNode('aten::summary.create_file_writer')):
    foo.graph.insertConstant('bad_logdir', [1, 2, 3])
```
Error messages:
```
Segmentation fault (core dumped)
```
The error is reproducible with the nightly-build version `2.7.0.dev20250208+cpu` .
Please find the [gist](https://colab.research.google.com/drive/12hvDFDAShiBFGJ9tyAlJ4Hyo9ZDvMEyr?usp=sharing) here for reference.
### Versions
GPU 1: NVIDIA GeForce RTX 3090
Nvidia driver version: 525.116.03
cuDNN version: Could not collect
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
NUMA node1 CPU(s): 16-31,48-63
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl and seccomp
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Enhanced IBRS, IBPB conditional, RSB filling
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Versions of relevant libraries:
[pip3] numpy==1.26.4
[pip3] nvidia-cublas-cu11==11.11.3.6
[pip3] nvidia-cublas-cu12==12.4.5.8
[pip3] nvidia-cuda-cupti-cu11==11.8.87
[pip3] nvidia-cuda-cupti-cu12==12.4.127
[pip3] nvidia-cuda-nvrtc-cu11==11.8.89
[pip3] nvidia-cuda-nvrtc-cu12==12.4.127
[pip3] nvidia-cuda-runtime-cu11==11.8.89
[pip3] nvidia-cuda-runtime-cu12==12.4.127
[pip3] nvidia-cudnn-cu11==9.1.0.70
[pip3] nvidia-cudnn-cu12==9.1.0.70
[pip3] nvidia-cufft-cu11==10.9.0.58
[pip3] nvidia-cufft-cu12==11.2.1.3
[pip3] nvidia-curand-cu11==10.3.0.86
[pip3] nvidia-curand-cu12==10.3.5.147
[pip3] nvidia-cusolver-cu11==11.4.1.48
[pip3] nvidia-cusolver-cu12==11.6.1.9
[pip3] nvidia-cusparse-cu11==11.7.5.86
[pip3] nvidia-cusparse-cu12==12.3.1.170
[pip3] nvidia-cusparselt-cu12==0.6.2
[pip3] nvidia-nccl-cu11==2.20.5
[pip3] nvidia-nccl-cu12==2.21.5
[pip3] nvidia-nvjitlink-cu12==12.4.127
[pip3] nvidia-nvtx-cu11==11.8.86
[pip3] nvidia-nvtx-cu12==12.4.127
[pip3] optree==0.12.1
[pip3] pytorch-triton==3.2.0+git4b3bb1f8
[pip3] torch==2.7.0.dev20250208+cu124
[pip3] torchaudio==2.6.0.dev20250208+cu124
[pip3] torchelastic==0.2.2
[pip3] torchvision==0.22.0.dev20250208+cu124
[pip3] triton==3.0.0
[conda] No relevant packages
cc @EikanWang @jgong5 @wenzhe-nrv @sanchitintel
| true
|
2,840,404,538
|
[cuda] Simplify the sinc function a bit.
|
dcci
|
closed
|
[
"Merged",
"ciflow/trunk",
"topic: not user facing"
] | 6
|
MEMBER
|
`else` after `return` can be removed & the indentation can be reduced, for readability.
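The pattern in question, sketched in Python for illustration only (the actual change is in the CUDA `sinc` kernel):
```python
import math

# Before: 'else' after 'return' adds an extra indentation level.
def sinc_before(x):
    if x == 0.0:
        return 1.0
    else:
        return math.sin(math.pi * x) / (math.pi * x)

# After: early return, flatter body, same behavior.
def sinc_after(x):
    if x == 0.0:
        return 1.0
    return math.sin(math.pi * x) / (math.pi * x)
```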
| true
|
2,840,401,346
|
INTERNAL ASSERT FAILED at "/pytorch/aten/src/ATen/native/quantized/cpu/qsigmoid.cpp":65, please report a bug to PyTorch.
|
cybersupersoap
|
open
|
[
"oncall: jit",
"oncall: quantization"
] | 0
|
NONE
|
### 🐛 Describe the bug
An INTERNAL ASSERT error would be raised when using a quantized tensor and `torch.jit.trace`. The code is as follows:
```python
import torch
torch.backends.quantized.engine = "qnnpack"
def qpt(t, scale, zero_point, dtype=torch.quint8):
    t = torch.tensor(t)
    return torch.quantize_per_tensor(t, scale, zero_point, dtype)

class UnaryModule(torch.nn.Module):
    def forward(self, arg):
        return torch.sigmoid(arg)

torch.jit.trace(UnaryModule(), qpt(torch.tensor([-1.0, 1.0]), 0, 0))
```
Error messages:
```
<ipython-input-4-d9728d615b76> in forward(self, arg)
6 class UnaryModule(torch.nn.Module):
7 def forward(self, arg):
----> 8 return torch.sigmoid(arg)
9 torch.jit.trace(UnaryModule(), qpt(torch.tensor([-1.0, 1.0]), 0, 0))
RuntimeError: createStatus == pytorch_qnnp_status_success INTERNAL ASSERT FAILED at "/pytorch/aten/src/ATen/native/quantized/cpu/qsigmoid.cpp":65, please report a bug to PyTorch. failed to create QNNPACK sigmoid operator
```
The error is reproducible with the nightly-build version `2.7.0.dev20250208+cpu` .
Please find the [gist](https://colab.research.google.com/drive/1STOJ_LfxjDAnPHJ_XRLjvoXrWMLnFtB2?usp=sharing) here for reference.
### Versions
GPU 1: NVIDIA GeForce RTX 3090
Nvidia driver version: 525.116.03
cuDNN version: Could not collect
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
NUMA node1 CPU(s): 16-31,48-63
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl and seccomp
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Enhanced IBRS, IBPB conditional, RSB filling
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Versions of relevant libraries:
[pip3] numpy==1.26.4
[pip3] nvidia-cublas-cu11==11.11.3.6
[pip3] nvidia-cublas-cu12==12.4.5.8
[pip3] nvidia-cuda-cupti-cu11==11.8.87
[pip3] nvidia-cuda-cupti-cu12==12.4.127
[pip3] nvidia-cuda-nvrtc-cu11==11.8.89
[pip3] nvidia-cuda-nvrtc-cu12==12.4.127
[pip3] nvidia-cuda-runtime-cu11==11.8.89
[pip3] nvidia-cuda-runtime-cu12==12.4.127
[pip3] nvidia-cudnn-cu11==9.1.0.70
[pip3] nvidia-cudnn-cu12==9.1.0.70
[pip3] nvidia-cufft-cu11==10.9.0.58
[pip3] nvidia-cufft-cu12==11.2.1.3
[pip3] nvidia-curand-cu11==10.3.0.86
[pip3] nvidia-curand-cu12==10.3.5.147
[pip3] nvidia-cusolver-cu11==11.4.1.48
[pip3] nvidia-cusolver-cu12==11.6.1.9
[pip3] nvidia-cusparse-cu11==11.7.5.86
[pip3] nvidia-cusparse-cu12==12.3.1.170
[pip3] nvidia-cusparselt-cu12==0.6.2
[pip3] nvidia-nccl-cu11==2.20.5
[pip3] nvidia-nccl-cu12==2.21.5
[pip3] nvidia-nvjitlink-cu12==12.4.127
[pip3] nvidia-nvtx-cu11==11.8.86
[pip3] nvidia-nvtx-cu12==12.4.127
[pip3] optree==0.12.1
[pip3] pytorch-triton==3.2.0+git4b3bb1f8
[pip3] torch==2.7.0.dev20250208+cu124
[pip3] torchaudio==2.6.0.dev20250208+cu124
[pip3] torchelastic==0.2.2
[pip3] torchvision==0.22.0.dev20250208+cu124
[pip3] triton==3.0.0
[conda] No relevant packages
cc @EikanWang @jgong5 @wenzhe-nrv @sanchitintel @jerryzh168 @jianyuh @raghuramank100 @jamesr66a @vkuzo @Xia-Weiwen @leslie-fang-intel @msaroufim
| true
|
2,840,392,339
|
Torch showing tensors are not equal, even though they are equal
|
Tylersuard
|
closed
|
[] | 2
|
NONE
|
### 🐛 Describe the bug
I create 2 tensors that should be identical, but PyTorch is saying they are not equal. I even print the two tensors out and they are identical.
```python
import torch

first_tensor = torch.tensor([0.1, 0.2, 0.3]) + torch.tensor([0.4, 0.5, 0.6])
print(first_tensor)
second_tensor = torch.tensor([0.5, 0.7, 0.9])
print(second_tensor)
are_tensors_equal = torch.equal(first_tensor, second_tensor)
if are_tensors_equal:
    print("The two tensors are equal")
else:
    print("The two tensors are NOT equal")
```
### Versions
How can we fix this bug?
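A minimal sketch of a tolerance-based comparison, assuming the mismatch comes from floating-point rounding (decimal fractions such as 0.3 and 0.6 are not exactly representable in binary floating point, so their sum can differ from the literal 0.9 by one bit):
```python
import torch

first_tensor = torch.tensor([0.1, 0.2, 0.3]) + torch.tensor([0.4, 0.5, 0.6])
second_tensor = torch.tensor([0.5, 0.7, 0.9])

print(torch.equal(first_tensor, second_tensor))     # False: bitwise-exact comparison
print((first_tensor - second_tensor).abs().max())   # rounding difference on the order of 1e-07
print(torch.allclose(first_tensor, second_tensor))  # True: comparison within tolerance
```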
| true
|
2,840,386,336
|
[mps] Add a shader for spherical_bessel_j0.
|
dcci
|
closed
|
[
"Merged",
"topic: not user facing",
"module: mps",
"ciflow/mps",
"module: inductor"
] | 4
|
MEMBER
|
In preparation for adding the operation to inductor/eager.
Adapted from the CUDA version of the shader.
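For reference, a minimal sketch of the op on CPU; the MPS path is what this PR prepares, and the input values are made up:
```python
import torch

x = torch.linspace(0.0, 10.0, steps=5)
print(torch.special.spherical_bessel_j0(x))  # sin(x)/x, with the x = 0 limit handled as 1
```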
cc @kulinseth @albanD @malfet @DenisVieriu97 @jhavukainen @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
| true
|
2,840,381,285
|
There should be a single version of exec_unary_kernel()
|
dcci
|
closed
|
[
"triaged",
"module: mps"
] | 3
|
MEMBER
|
### 🐛 Describe the bug
Filing this one so I don't forget (and in case someone else wants to take a look)
```
davidino@davidino-mbp operations % git grep unary_kernel
SpecialOps.mm:static void unary_kernel_mps(TensorIteratorBase& iter, const std::string& name) {
SpecialOps.mm: unary_kernel_mps(iter, "i0");
SpecialOps.mm: unary_kernel_mps(iter, "i1");
UnaryKernel.mm:static void exec_unary_kernel(const Tensor& self, const Tensor& output_, const std::string& name) {
UnaryKernel.mm: exec_unary_kernel(self, output_, "erfinv");
UnaryKernel.mm: exec_unary_kernel(self, output_, "exp");
UnaryKernel.mm: exec_unary_kernel(self, output_, "sinc");
UnaryKernel.mm: exec_unary_kernel(self, output_, "tanh");
```
There are two versions of the unary kernel execution helper. I believe it would be better/easier if we had one, maybe moved to a util file, and also provided a variant for two inputs (maybe stealing/moving it from BinaryKernels.metal).
### Versions
N/A
cc @kulinseth @albanD @malfet @DenisVieriu97 @jhavukainen
| true
|
2,840,368,800
|
MPS Error on sequoia 15.3: NDArray dimension length > INT_MAX'
|
fatemark
|
open
|
[
"needs reproduction",
"triaged",
"module: mps"
] | 9
|
NONE
|
### 🐛 Describe the bug
I get this error in ComfyUI on Sequoia 15.3. The error only occurs beyond a certain size of the image I'm working with.
/AppleInternal/Library/BuildRoots/d187755d-b9a3-11ef-83e5-aabfac210453/Library/Caches/com.apple.xbs/Sources/MetalPerformanceShaders/MPSCore/Types/MPSNDArray.mm:829: failed assertion `[MPSNDArray initWithDevice:descriptor:isTextureBacked:] Error: NDArray dimension length > INT_MAX'
### Versions
/Library/Frameworks/Python.framework/Versions/3.10/lib/python3.10/site-packages/torch/utils/_pytree.py:185: FutureWarning: optree is installed but the version is too old to support PyTorch Dynamo in C++ pytree. C++ pytree support is disabled. Please consider upgrading optree using `python3 -m pip install --upgrade 'optree>=0.13.0'`.
warnings.warn(
Collecting environment information...
PyTorch version: 2.6.0
Is debug build: False
CUDA used to build PyTorch: None
ROCM used to build PyTorch: N/A
OS: macOS 15.3 (arm64)
GCC version: Could not collect
Clang version: 16.0.0 (clang-1600.0.26.6)
CMake version: version 3.30.2
Libc version: N/A
Python version: 3.10.11 (v3.10.11:7d4cc5aa85, Apr 4 2023, 19:05:19) [Clang 13.0.0 (clang-1300.0.29.30)] (64-bit runtime)
Python platform: macOS-15.3-arm64-arm-64bit
Is CUDA available: False
CUDA runtime version: No CUDA
CUDA_MODULE_LOADING set to: N/A
GPU models and configuration: No CUDA
Nvidia driver version: No CUDA
cuDNN version: No CUDA
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Apple M2 Max
Versions of relevant libraries:
[pip3] mypy-extensions==1.0.0
[pip3] numpy==1.26.4
[pip3] numpy-quaternion==2023.0.3
[pip3] onnx==1.16.2
[pip3] onnxruntime==1.18.1
[pip3] open_clip_torch==2.26.1
[pip3] optree==0.12.1
[pip3] pytorch-lightning==2.4.0
[pip3] rotary-embedding-torch==0.8.6
[pip3] torch==2.6.0
[pip3] torchao==0.8.0
[pip3] torchaudio==2.6.0
[pip3] torchdiffeq==0.2.5
[pip3] torchmetrics==1.3.2
[pip3] torchsde==0.2.6
[pip3] torchvision==0.21.0
[conda] Could not collect
cc @kulinseth @albanD @malfet @DenisVieriu97 @jhavukainen
| true
|
2,840,292,145
|
[EZ] Add logic to build Metal shader with debug info
|
malfet
|
closed
|
[
"Merged",
"topic: not user facing"
] | 3
|
CONTRIBUTOR
|
By appending `-frecord-sources -gline-tables-only` to the compilation command
Helpful when debugging shaders compiled into libtorch
Test plan: Run
`python ../tools/build_with_debinfo.py ../aten/src/ATen/native/mps/kernels/UpSample.metal ../aten/src/ATen/native/mps/operations/UpSample.mm`
And then run following to capture shader and check that it contains debug info
```python
import torch
import os
os.environ["MTL_CAPTURE_ENABLED"]="1"
inp = torch.rand(size=(6, 3, 10, 20), device="mps", dtype=torch.float32)
with torch.mps.profiler.metal_capture("bilinear2d"):
    out = torch.nn.functional.interpolate(inp, scale_factor=(1.7, 0.9), mode="bilinear")
```
<img width="769" alt="image" src="https://github.com/user-attachments/assets/e0316c1c-07a4-4da5-97b9-886c56857c1d" />
| true
|
2,840,285,726
|
Tensor Parallel (TP) broken on 2.6 (cannot `parallelize_module` correctly)
|
Cyrilvallez
|
closed
|
[
"oncall: distributed"
] | 5
|
NONE
|
### 🐛 Describe the bug
Hey! It looks like Tensor Parallel (TP) is broken in v2.6. Running the below simple snippet with `torchrun --nproc-per-node 4 test.py` would yield the following error:
`torch.distributed.DistBackendError: Attempt to perform collective on tensor not on device passed to init_process_group`
But as you can see, the model was correctly moved to the correct device beforehand, so it should not be an issue.
The same snippet runs perfectly fine on previous versions.
If this is an oversight/mistake on my side, please let me know, I might have missed it (docs/resources on TP are still a bit scarce). But I don't think it is!
Anyway, amazing work with TP, we are starting to rely on it in [transformers](https://github.com/huggingface/transformers), both for our direct users, and when using our modelings as a backend in vLLM or TGI!
```py
import torch
import os
from torch.distributed.tensor.parallel import ColwiseParallel
class Dummy(torch.nn.Module):
    def __init__(self):
        super().__init__()
        self.x = torch.nn.Linear(1000, 1000)

    def forward(self, x):
        return self.y(self.x(x))
rank = int(os.environ["RANK"])
world_size = int(os.environ["WORLD_SIZE"])
device = torch.device(f"cuda:{rank}")
torch.distributed.init_process_group("nccl", device_id=device)
dummy = Dummy().to(device)
tp_plan = {"x": ColwiseParallel()}
device_mesh = torch.distributed.init_device_mesh("cuda", (world_size,))
torch.distributed.barrier()
torch.distributed.tensor.parallel.parallelize_module(
dummy,
device_mesh=device_mesh,
parallelize_plan=tp_plan,
)
torch.distributed.barrier()
torch.distributed.destroy_process_group()
```
### Versions
Collecting environment information...
PyTorch version: 2.6.0+cu124
Is debug build: False
CUDA used to build PyTorch: 12.4
ROCM used to build PyTorch: N/A
OS: Ubuntu 20.04.6 LTS (x86_64)
GCC version: (Ubuntu 9.4.0-1ubuntu1~20.04.2) 9.4.0
Clang version: 10.0.0-4ubuntu1
CMake version: version 3.16.3
Libc version: glibc-2.31
Python version: 3.11.11 | packaged by conda-forge | (main, Dec 5 2024, 14:17:24) [GCC 13.3.0] (64-bit runtime)
Python platform: Linux-5.4.0-166-generic-x86_64-with-glibc2.31
Is CUDA available: True
CUDA runtime version: 12.3.52
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration:
GPU 0: NVIDIA A100-SXM4-80GB
GPU 1: NVIDIA A100-SXM4-80GB
GPU 2: NVIDIA A100-SXM4-80GB
GPU 3: NVIDIA DGX Display
GPU 4: NVIDIA A100-SXM4-80GB
Nvidia driver version: 535.129.03
cuDNN version: Could not collect
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Byte Order: Little Endian
Address sizes: 43 bits physical, 48 bits virtual
CPU(s): 128
On-line CPU(s) list: 0-127
Thread(s) per core: 2
Core(s) per socket: 64
Socket(s): 1
NUMA node(s): 1
Vendor ID: AuthenticAMD
CPU family: 23
Model: 49
Model name: AMD EPYC 7742 64-Core Processor
Stepping: 0
Frequency boost: enabled
CPU MHz: 1838.820
CPU max MHz: 2250,0000
CPU min MHz: 1500,0000
BogoMIPS: 4491.60
Virtualization: AMD-V
L1d cache: 2 MiB
L1i cache: 2 MiB
L2 cache: 32 MiB
L3 cache: 256 MiB
NUMA node0 CPU(s): 0-127
Vulnerability Gather data sampling: Not affected
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Not affected
Vulnerability Retbleed: Vulnerable
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl and seccomp
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Retpolines, IBPB conditional, IBRS_FW, STIBP conditional, RSB filling, PBRSB-eIBRS Not affected
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ht syscall nx mmxext fxsr_opt pdpe1gb rdtscp lm constant_tsc rep_good nopl nonstop_tsc cpuid extd_apicid aperfmperf pni pclmulqdq monitor ssse3 fma cx16 sse4_1 sse4_2 movbe popcnt aes xsave avx f16c rdrand lahf_lm cmp_legacy svm extapic cr8_legacy abm sse4a misalignsse 3dnowprefetch osvw ibs skinit wdt tce topoext perfctr_core perfctr_nb bpext perfctr_llc mwaitx cpb cat_l3 cdp_l3 hw_pstate ssbd mba ibrs ibpb stibp vmmcall fsgsbase bmi1 avx2 smep bmi2 cqm rdt_a rdseed adx smap clflushopt clwb sha_ni xsaveopt xsavec xgetbv1 xsaves cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local clzero irperf xsaveerptr wbnoinvd arat npt lbrv svm_lock nrip_save tsc_scale vmcb_clean flushbyasid decodeassists pausefilter pfthreshold avic v_vmsave_vmload vgif umip rdpid overflow_recov succor smca sme sev sev_es
Versions of relevant libraries:
[pip3] numpy==2.2.2
[pip3] nvidia-cublas-cu12==12.4.5.8
[pip3] nvidia-cuda-cupti-cu12==12.4.127
[pip3] nvidia-cuda-nvrtc-cu12==12.4.127
[pip3] nvidia-cuda-runtime-cu12==12.4.127
[pip3] nvidia-cudnn-cu12==9.1.0.70
[pip3] nvidia-cufft-cu12==11.2.1.3
[pip3] nvidia-curand-cu12==10.3.5.147
[pip3] nvidia-cusolver-cu12==11.6.1.9
[pip3] nvidia-cusparse-cu12==12.3.1.170
[pip3] nvidia-cusparselt-cu12==0.6.2
[pip3] nvidia-nccl-cu12==2.21.5
[pip3] nvidia-nvjitlink-cu12==12.4.127
[pip3] nvidia-nvtx-cu12==12.4.127
[pip3] torch==2.6.0
[pip3] triton==3.2.0
[conda] numpy 2.2.2 pypi_0 pypi
[conda] nvidia-cublas-cu12 12.4.5.8 pypi_0 pypi
[conda] nvidia-cuda-cupti-cu12 12.4.127 pypi_0 pypi
[conda] nvidia-cuda-nvrtc-cu12 12.4.127 pypi_0 pypi
[conda] nvidia-cuda-runtime-cu12 12.4.127 pypi_0 pypi
[conda] nvidia-cudnn-cu12 9.1.0.70 pypi_0 pypi
[conda] nvidia-cufft-cu12 11.2.1.3 pypi_0 pypi
[conda] nvidia-curand-cu12 10.3.5.147 pypi_0 pypi
[conda] nvidia-cusolver-cu12 11.6.1.9 pypi_0 pypi
[conda] nvidia-cusparse-cu12 12.3.1.170 pypi_0 pypi
[conda] nvidia-cusparselt-cu12 0.6.2 pypi_0 pypi
[conda] nvidia-nccl-cu12 2.21.5 pypi_0 pypi
[conda] nvidia-nvjitlink-cu12 12.4.127 pypi_0 pypi
[conda] nvidia-nvtx-cu12 12.4.127 pypi_0 pypi
[conda] torch 2.6.0 pypi_0 pypi
[conda] triton 3.2.0 pypi_0 pypi
cc @H-Huang @awgu @kwen2501 @wanchaol @fegin @fduwjj @wz337 @wconstab @d4l3k @c-p-i-o
| true
|
2,840,238,335
|
object of type 'SymInt' has no len() when split is called with tensor of specific dynamic sizes.
|
laithsakka
|
open
|
[
"needs reproduction",
"triaged",
"oncall: pt2",
"module: dynamic shapes"
] | 1
|
CONTRIBUTOR
|
Seen multiple times on an internal model when dynamic = True, in different places.
Seems like an issue in one of the split implementations.
No local repro yet.
1) example 1
aps-no_break2-de8c3fc544
```
return self._abstract_fn(*args, **kwargs)
File "/packages/aps.ads.icvr/icvr_launcher#link-tree/ads_mkl/ops/triton/triton_highway_self_gating.py", line 258, in _triton_highway_self_gating
weight1, weight2 = weight.split(N, dim=-1)
torch._dynamo.exc.TorchRuntimeError: Failed running call_function ads_mkl.XXXXXt(*(FakeTensor(..., device='cuda:0', size=(s0*s1, s2), dtype=torch.bfloat16,
grad_fn=<ViewBackward0>), Parameter(FakeTensor(..., device='cuda:0', size=(s3, s4), dtype=torch.bfloat16,
requires_grad=True)), Parameter(FakeTensor(..., device='cuda:0', size=(s3,), dtype=torch.bfloat16,
requires_grad=True))), **{'use_torch_bwd': False}):
object of type 'SymInt' has no len()
from user code:
File "/packages/aps.ads.icvr/icvr_lau
```
and
2) example 2 [link](https://manifold.edge.x2p.facebook.net/v0/read/tree/logs/aps-omnifmv1-5_32_test_with_autotune_disable_all-b12a923903/attempt_0/version_0/rank_0/index.html?bucketName=tlparse_reports&apiKey=tlparse_reports-key&withPayload=1&timeoutMsec=100#[26/0])
```
torch._dynamo.exc.TorchRuntimeError: Failed running call_function <function split at 0x7f87a917eb90>(*(FakeTensor(..., device='cuda:0', size=(s0, s1, s2), dtype=torch.bfloat16,
grad_fn=<Error>), s3), **{'dim': 1}):
object of type 'SymInt' has no len()
```
cc @chauhang @penguinwu @ezyang @bobrenjc93
| true
|
2,840,200,114
|
Automatically resolve tensor mismatch issues, tensor conversion, and moving tensors to devices
|
Tylersuard
|
open
|
[
"triaged",
"module: python frontend"
] | 1
|
NONE
|
### 🚀 The feature, motivation and pitch
I love PyTorch, but if I ever have any problems, it's one of these 3:
1. Tensor dimensions mismatch
2. Numpy array not converted to tensor
3. Tensor is on the wrong device
It would be really cool if PyTorch could automatically resolve these. For number 1, it could silently create an interface layer that transforms the tensor to the correct dimensions. For number 2, it could automatically convert anything that is used in a torch function or used with a torch tensor. It would also be awesome if I didn't have to do the .to(device) for tensors and models.
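For reference, this is roughly the boilerplate the request would make unnecessary (a sketch, not part of the proposal):
```python
import numpy as np
import torch

device = "cuda" if torch.cuda.is_available() else "cpu"
model = torch.nn.Linear(3, 2).to(device)   # (3) model moved to the device by hand
arr = np.ones((4, 3), dtype=np.float32)
x = torch.from_numpy(arr).to(device)       # (2) NumPy -> tensor, (3) tensor moved by hand
y = model(x)                               # (1) shapes must already line up: [4, 3] @ [3, 2]
print(y.shape)
```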
### Alternatives
_No response_
### Additional context
_No response_
cc @albanD
| true
|
2,840,161,030
|
Fix standalone runner for CUTLASS auto-tuning backend
|
alexsamardzic
|
closed
|
[
"open source",
"Merged",
"ciflow/trunk",
"topic: not user facing",
"module: inductor",
"ciflow/inductor"
] | 9
|
COLLABORATOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #146764
* #146755
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
| true
|
2,840,114,632
|
[Break XPU] Align meta calculation for fft_r2c with _fft_r2c_mkl
|
etaf
|
closed
|
[
"open source",
"Merged",
"ciflow/trunk",
"topic: not user facing",
"ciflow/inductor",
"ciflow/xpu"
] | 3
|
COLLABORATOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #146763
* #146880
* #145248
* #146762
Fix #146761
| true
|
2,840,114,609
|
[Break XPU][Inductor UT] Fix XPU Inductor UT failures introduced from community.
|
etaf
|
closed
|
[
"open source",
"Merged",
"topic: not user facing",
"module: inductor",
"ciflow/inductor"
] | 1
|
COLLABORATOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #146763
* #146880
* #145248
* __->__ #146762
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
| true
|
2,840,089,812
|
[Break XPU][Inductor] The PR #145080 introduce wrong fft_r2c result on XPU.
|
etaf
|
closed
|
[
"triaged",
"module: xpu"
] | 0
|
COLLABORATOR
|
### 🐛 Describe the bug
I found XPU CI failure after the PR #145080 landed:
https://github.com/pytorch/pytorch/actions/runs/13158392419/job/36759585266
There are many FFT-related op failures in test_torchinductor_opinfo.py, for example:
```
=================================== FAILURES ===================================
2025-02-06T04:47:18.5772834Z ________ TestInductorOpInfoXPU.test_comprehensive_fft_hfftn_xpu_float32 ________
2025-02-06T04:47:18.5773110Z Traceback (most recent call last):
2025-02-06T04:47:18.5773485Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/testing/_internal/common_device_type.py", line 1156, in test_wrapper
2025-02-06T04:47:18.5773863Z return test(*args, **kwargs)
2025-02-06T04:47:18.5774218Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/testing/_internal/common_device_type.py", line 1444, in only_fn
2025-02-06T04:47:18.5774588Z return fn(self, *args, **kwargs)
2025-02-06T04:47:18.5774937Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/testing/_internal/common_utils.py", line 2262, in wrapper
2025-02-06T04:47:18.5775293Z fn(*args, **kwargs)
2025-02-06T04:47:18.5775630Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/testing/_internal/common_device_type.py", line 1236, in dep_fn
2025-02-06T04:47:18.5775996Z return fn(slf, *args, **kwargs)
2025-02-06T04:47:18.5776354Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/testing/_internal/common_device_type.py", line 1236, in dep_fn
2025-02-06T04:47:18.5776717Z return fn(slf, *args, **kwargs)
2025-02-06T04:47:18.5777071Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/testing/_internal/common_device_type.py", line 1236, in dep_fn
2025-02-06T04:47:18.5777433Z return fn(slf, *args, **kwargs)
2025-02-06T04:47:18.5777777Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/testing/_internal/common_utils.py", line 1620, in wrapper
2025-02-06T04:47:18.5778122Z fn(*args, **kwargs)
2025-02-06T04:47:18.5778450Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/testing/_internal/common_utils.py", line 1542, in wrapper
2025-02-06T04:47:18.5778800Z fn(*args, **kwargs)
2025-02-06T04:47:18.5779046Z File "/opt/conda/envs/py_3.9/lib/python3.9/unittest/mock.py", line 1336, in patched
2025-02-06T04:47:18.5779332Z return func(*newargs, **newkeywargs)
2025-02-06T04:47:18.5779602Z File "/opt/conda/envs/py_3.9/lib/python3.9/contextlib.py", line 79, in inner
2025-02-06T04:47:18.5779885Z return func(*args, **kwds)
2025-02-06T04:47:18.5780154Z File "/opt/conda/envs/py_3.9/lib/python3.9/contextlib.py", line 79, in inner
2025-02-06T04:47:18.5780460Z return func(*args, **kwds)
2025-02-06T04:47:18.5780797Z File "/var/lib/jenkins/pytorch/test/inductor/test_torchinductor_opinfo.py", line 949, in inner
2025-02-06T04:47:18.5781130Z raise e
2025-02-06T04:47:18.5781470Z File "/var/lib/jenkins/pytorch/test/inductor/test_torchinductor_opinfo.py", line 941, in inner
2025-02-06T04:47:18.5781828Z fn(self, device, dtype, op)
2025-02-06T04:47:18.5782212Z File "/var/lib/jenkins/pytorch/test/inductor/test_torchinductor_opinfo.py", line 1188, in test_comprehensive
2025-02-06T04:47:18.5782604Z raise e
2025-02-06T04:47:18.5782942Z File "/var/lib/jenkins/pytorch/test/inductor/test_torchinductor_opinfo.py", line 1148, in test_comprehensive
2025-02-06T04:47:18.5783330Z self.check_model_gpu(
2025-02-06T04:47:18.5783609Z File "/opt/conda/envs/py_3.9/lib/python3.9/contextlib.py", line 79, in inner
2025-02-06T04:47:18.5783899Z return func(*args, **kwds)
2025-02-06T04:47:18.5784247Z File "/var/lib/jenkins/pytorch/test/inductor/test_torchinductor.py", line 629, in check_model_gpu
2025-02-06T04:47:18.5784597Z check_model(
2025-02-06T04:47:18.5784905Z File "/var/lib/jenkins/pytorch/test/inductor/test_torchinductor.py", line 587, in check_model
2025-02-06T04:47:18.5785258Z self.assertEqual(
2025-02-06T04:47:18.5785640Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/testing/_internal/common_utils.py", line 4042, in assertEqual
2025-02-06T04:47:18.5786123Z raise error_metas.pop()[0].to_error( # type: ignore[index]
2025-02-06T04:47:18.5786431Z AssertionError: Tensor-likes are not close!
2025-02-06T04:47:18.5786583Z
2025-02-06T04:47:18.5786676Z Mismatched elements: 200 / 210 (95.2%)
2025-02-06T04:47:18.5787017Z Greatest absolute difference: 0.6952186822891235 at index (1, 0, 3) (up to 1.5e-05 allowed)
2025-02-06T04:47:18.5787461Z Greatest relative difference: 285.22479248046875 at index (3, 5, 5) (up to 1.3e-05 allowed)
2025-02-06T04:47:18.5787711Z
2025-02-06T04:47:18.5787798Z The failure occurred for item [0]
```
**Root cause:**
I found that all the failed test cases use fft_r2c. Since PR #145080 updated the meta calculation for fft_r2c, I compared it against the MKL implementation in `aten/src/ATen/native/mkl/SpectralOps.cpp`.
The part https://github.com/pytorch/pytorch/blob/46e83bb6377ad11c475fafc93c9ea15433056573/torch/_meta_registrations.py#L372-L375
is not the same as: https://github.com/pytorch/pytorch/blob/46e83bb6377ad11c475fafc93c9ea15433056573/aten/src/ATen/native/mkl/SpectralOps.cpp#L541-L559
I think we should align them to correct the fft_r2c meta calculation.
### Versions
PyTorch version: 2.7.0a0+git9c78fb92
Is debug build: False
CUDA used to build PyTorch: None
ROCM used to build PyTorch: N/A
OS: Ubuntu 22.04.4 LTS (x86_64)
GCC version: (Ubuntu 11.4.0-1ubuntu1~22.04) 11.4.0
Clang version: Could not collect
CMake version: version 3.31.2
Libc version: glibc-2.35
Python version: 3.10.15 | packaged by conda-forge | (main, Oct 16 2024, 01:24:24) [GCC 13.3.0] (64-bit runtime)
Python platform: Linux-5.15.0-127-generic-x86_64-with-glibc2.35
Is CUDA available: False
CUDA runtime version: No CUDA
CUDA_MODULE_LOADING set to: N/A
GPU models and configuration: No CUDA
Nvidia driver version: No CUDA
cuDNN version: No CUDA
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
cc @gujinghui @EikanWang @fengyuan14 @guangyey
| true
|
2,840,073,144
|
[torch.jit] INTERNAL ASSERT FAILED at "../aten/src/ATen/core/ivalue_inl.h":1967, please report a bug to PyTorch.
|
cybersupersoap
|
open
|
[
"oncall: jit"
] | 0
|
NONE
|
### 🐛 Describe the bug
An INTERNAL ASSERT error will be raised when using `torch.jit.script` and `torch.jit.freeze`. The code is as follows:
```python
import torch
from torch import nn
from torch.testing._internal.jit_utils import clear_class_registry
clear_class_registry()
conv1 = torch.nn.Conv2d(3, 64, kernel_size=(7, 7), stride=(2, 2), padding=(3, 3), bias=False)
max_pool = torch.nn.MaxPool2d(kernel_size=3.1, stride=2, padding=1, dilation=1, ceil_mode=False)
conv2 = nn.Conv2d(64, 128, kernel_size=(3, 3), stride=(2, 2), padding=(1, 1), bias=False)
mod = torch.jit.freeze(torch.jit.script(nn.Sequential(conv1, max_pool, conv2).eval()))
```
Error messages:
```
RuntimeError Traceback (most recent call last)
<ipython-input-4-c916c0278861> in <cell line: 0>()
6 max_pool = torch.nn.MaxPool2d(kernel_size=3.1, stride=2, padding=1, dilation=1, ceil_mode=False)
7 conv2 = nn.Conv2d(64, 128, kernel_size=(3, 3), stride=(2, 2), padding=(1, 1), bias=False)
----> 8 mod = torch.jit.freeze(torch.jit.script(nn.Sequential(conv1, max_pool, conv2).eval()))
7 frames
/usr/local/lib/python3.11/dist-packages/torch/jit/_script.py in script(obj, optimize, _frames_up, _rcb, example_inputs)
1427 prev = _TOPLEVEL
1428 _TOPLEVEL = False
-> 1429 ret = _script_impl(
1430 obj=obj,
1431 optimize=optimize,
/usr/local/lib/python3.11/dist-packages/torch/jit/_script.py in _script_impl(obj, optimize, _frames_up, _rcb, example_inputs)
1145 if isinstance(obj, torch.nn.Module):
1146 obj = call_prepare_scriptable_func(obj)
-> 1147 return torch.jit._recursive.create_script_module(
1148 obj, torch.jit._recursive.infer_methods_to_compile
1149 )
/usr/local/lib/python3.11/dist-packages/torch/jit/_recursive.py in create_script_module(nn_module, stubs_fn, share_types, is_tracing)
555 if not is_tracing:
556 AttributeTypeIsSupportedChecker().check(nn_module)
--> 557 return create_script_module_impl(nn_module, concrete_type, stubs_fn)
558
559
/usr/local/lib/python3.11/dist-packages/torch/jit/_recursive.py in create_script_module_impl(nn_module, concrete_type, stubs_fn)
628
629 # Actually create the ScriptModule, initializing it with the function we just defined
--> 630 script_module = torch.jit.RecursiveScriptModule._construct(cpp_module, init_fn)
631
632 # Compile methods if necessary
/usr/local/lib/python3.11/dist-packages/torch/jit/_script.py in _construct(cpp_module, init_fn)
648 """
649 script_module = RecursiveScriptModule(cpp_module)
--> 650 init_fn(script_module)
651
652 # Finalize the ScriptModule: replace the nn.Module state with our
/usr/local/lib/python3.11/dist-packages/torch/jit/_recursive.py in init_fn(script_module)
604 else:
605 # always reuse the provided stubs_fn to infer the methods to compile
--> 606 scripted = create_script_module_impl(
607 orig_value, sub_concrete_type, stubs_fn
608 )
/usr/local/lib/python3.11/dist-packages/torch/jit/_recursive.py in create_script_module_impl(nn_module, concrete_type, stubs_fn)
632 # Compile methods if necessary
633 if concrete_type not in concrete_type_store.methods_compiled:
--> 634 create_methods_and_properties_from_stubs(
635 concrete_type, method_stubs, property_stubs
636 )
/usr/local/lib/python3.11/dist-packages/torch/jit/_recursive.py in create_methods_and_properties_from_stubs(concrete_type, method_stubs, property_stubs)
464 property_rcbs = [p.resolution_callback for p in property_stubs]
465
--> 466 concrete_type._create_methods_and_properties(
467 property_defs, property_rcbs, method_defs, method_rcbs, method_defaults
468 )
RuntimeError: isIntList() INTERNAL ASSERT FAILED at "../aten/src/ATen/core/ivalue_inl.h":1967, please report a bug to PyTorch. Expected IntList but got Int
```
The error is reproducible with the nightly-build version `2.7.0.dev20250208+cpu` .
Please find the [gist](https://colab.research.google.com/drive/1s7cLKhGLvQZzEZ09snKVvV-f4waPWc7u?usp=sharing) here for reference.
### Versions
GPU 1: NVIDIA GeForce RTX 3090
Nvidia driver version: 525.116.03
cuDNN version: Could not collect
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
NUMA node1 CPU(s): 16-31,48-63
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl and seccomp
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Enhanced IBRS, IBPB conditional, RSB filling
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Versions of relevant libraries:
[pip3] numpy==1.26.4
[pip3] nvidia-cublas-cu11==11.11.3.6
[pip3] nvidia-cublas-cu12==12.4.5.8
[pip3] nvidia-cuda-cupti-cu11==11.8.87
[pip3] nvidia-cuda-cupti-cu12==12.4.127
[pip3] nvidia-cuda-nvrtc-cu11==11.8.89
[pip3] nvidia-cuda-nvrtc-cu12==12.4.127
[pip3] nvidia-cuda-runtime-cu11==11.8.89
[pip3] nvidia-cuda-runtime-cu12==12.4.127
[pip3] nvidia-cudnn-cu11==9.1.0.70
[pip3] nvidia-cudnn-cu12==9.1.0.70
[pip3] nvidia-cufft-cu11==10.9.0.58
[pip3] nvidia-cufft-cu12==11.2.1.3
[pip3] nvidia-curand-cu11==10.3.0.86
[pip3] nvidia-curand-cu12==10.3.5.147
[pip3] nvidia-cusolver-cu11==11.4.1.48
[pip3] nvidia-cusolver-cu12==11.6.1.9
[pip3] nvidia-cusparse-cu11==11.7.5.86
[pip3] nvidia-cusparse-cu12==12.3.1.170
[pip3] nvidia-cusparselt-cu12==0.6.2
[pip3] nvidia-nccl-cu11==2.20.5
[pip3] nvidia-nccl-cu12==2.21.5
[pip3] nvidia-nvjitlink-cu12==12.4.127
[pip3] nvidia-nvtx-cu11==11.8.86
[pip3] nvidia-nvtx-cu12==12.4.127
[pip3] optree==0.12.1
[pip3] pytorch-triton==3.2.0+git4b3bb1f8
[pip3] torch==2.7.0.dev20250208+cu124
[pip3] torchaudio==2.6.0.dev20250208+cu124
[pip3] torchelastic==0.2.2
[pip3] torchvision==0.22.0.dev20250208+cu124
[pip3] triton==3.0.0
[conda] No relevant packages
cc @EikanWang @jgong5 @wenzhe-nrv @sanchitintel
| true
|
2,840,069,298
|
INTERNAL ASSERT FAILED at "../torch/csrc/jit/ir/alias_analysis.cpp":617, please report a bug to PyTorch.
|
cybersupersoap
|
open
|
[
"oncall: jit"
] | 0
|
NONE
|
### 🐛 Describe the bug
An INTERNAL ASSERT error will be raised when using `alias_db`. The code is as follows:
```python
from torch._C import parse_ir
graph_str = '\n graph(%a.1 : Tensor, %b.1 : Tensor):\n %11 : NoneType = prim::Constant()\n %8 : int = prim::Constant[value=0]()\n %7 : int = prim::Constant[value=1]()\n %x.1 : Tensor = aten::add(%a.1, %b.1, %7)\n %y.1 : Tensor[] = aten::split(%x.1, %x.1, %8)\n return ()\n '
graph = parse_ir(graph_str)
alias_db = graph.alias_db()
```
Error messages:
```
RuntimeError Traceback (most recent call last)
<ipython-input-1-af6e79f0e704> in <cell line: 0>()
2 graph_str = '\n graph(%a.1 : Tensor, %b.1 : Tensor):\n %11 : NoneType = prim::Constant()\n %8 : int = prim::Constant[value=0]()\n %7 : int = prim::Constant[value=1]()\n %x.1 : Tensor = aten::add(%a.1, %b.1, %7)\n %y.1 : Tensor[] = aten::split(%x.1, %x.1, %8)\n return ()\n '
3 graph = parse_ir(graph_str)
----> 4 alias_db = graph.alias_db()
RuntimeError: 0 INTERNAL ASSERT FAILED at "../torch/csrc/jit/ir/alias_analysis.cpp":617, please report a bug to PyTorch. We don't have an op for aten::split but it isn't a special case. Argument types: Tensor, Tensor, int,
```
The error is reproducible with the nightly-build version `2.7.0.dev20250208+cpu` .
Please find the [gist](https://colab.research.google.com/drive/1ECqf2I9IP3nAzt8J20jg-9w-OhlYDfxn?usp=sharing) here for reference.
### Versions
GPU 1: NVIDIA GeForce RTX 3090
Nvidia driver version: 525.116.03
cuDNN version: Could not collect
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
NUMA node1 CPU(s): 16-31,48-63
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl and seccomp
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Enhanced IBRS, IBPB conditional, RSB filling
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Versions of relevant libraries:
[pip3] numpy==1.26.4
[pip3] nvidia-cublas-cu11==11.11.3.6
[pip3] nvidia-cublas-cu12==12.4.5.8
[pip3] nvidia-cuda-cupti-cu11==11.8.87
[pip3] nvidia-cuda-cupti-cu12==12.4.127
[pip3] nvidia-cuda-nvrtc-cu11==11.8.89
[pip3] nvidia-cuda-nvrtc-cu12==12.4.127
[pip3] nvidia-cuda-runtime-cu11==11.8.89
[pip3] nvidia-cuda-runtime-cu12==12.4.127
[pip3] nvidia-cudnn-cu11==9.1.0.70
[pip3] nvidia-cudnn-cu12==9.1.0.70
[pip3] nvidia-cufft-cu11==10.9.0.58
[pip3] nvidia-cufft-cu12==11.2.1.3
[pip3] nvidia-curand-cu11==10.3.0.86
[pip3] nvidia-curand-cu12==10.3.5.147
[pip3] nvidia-cusolver-cu11==11.4.1.48
[pip3] nvidia-cusolver-cu12==11.6.1.9
[pip3] nvidia-cusparse-cu11==11.7.5.86
[pip3] nvidia-cusparse-cu12==12.3.1.170
[pip3] nvidia-cusparselt-cu12==0.6.2
[pip3] nvidia-nccl-cu11==2.20.5
[pip3] nvidia-nccl-cu12==2.21.5
[pip3] nvidia-nvjitlink-cu12==12.4.127
[pip3] nvidia-nvtx-cu11==11.8.86
[pip3] nvidia-nvtx-cu12==12.4.127
[pip3] optree==0.12.1
[pip3] pytorch-triton==3.2.0+git4b3bb1f8
[pip3] torch==2.7.0.dev20250208+cu124
[pip3] torchaudio==2.6.0.dev20250208+cu124
[pip3] torchelastic==0.2.2
[pip3] torchvision==0.22.0.dev20250208+cu124
[pip3] triton==3.0.0
[conda] No relevant packages
cc @EikanWang @jgong5 @wenzhe-nrv @sanchitintel
| true
|
2,840,049,335
|
INTERNAL ASSERT FAILED at "/pytorch/torch/csrc/autograd/functions/utils.h":74, please report a bug to PyTorch
|
cybersupersoap
|
open
|
[
"oncall: jit"
] | 0
|
NONE
|
### 🐛 Describe the bug
An INTERNAL ASSERT error will be raised when running a model compiled with `torch.compile`. The code is as follows:
```python
import torch
class CustomLinear(torch.nn.Module):
    def __init__(self, a, b):
        super().__init__()
        self.weight = torch.nn.Parameter(torch.randn(a, b))

    def forward(self, x):
        return torch.mm(x, self.weight)

class ToyModel(torch.nn.Module):
    def __init__(self) -> None:
        super().__init__()

    def forward(self, x):
        return torch.nn.Sequential(*[CustomLinear(10, 10)] + [CustomLinear(10, 10000)] + [CustomLinear(10000, 5)])(x)

model = ToyModel()
model = torch.compile(model, backend='aot_eager')
x = torch.randn((20, 10)).type(torch.int64)
pred = model(x)
```
Error messages:
```
/usr/local/lib/python3.10/dist-packages/torch/_dynamo/utils.py in run_node(tracer, node, args, kwargs, nnmodule)
3136 try:
3137 if op == "call_function":
-> 3138 return node.target(*args, **kwargs)
3139 elif op == "call_method":
3140 if not hasattr(args[0], node.target):
TorchRuntimeError: Failed running call_function <built-in method mm of type object at 0x7ed84baf6f20>(*(FakeTensor(..., size=(20, 10), dtype=torch.int64), Parameter(FakeTensor(..., size=(10, 10), requires_grad=True))), **{}):
isDifferentiableType(variable.scalar_type()) INTERNAL ASSERT FAILED at "/pytorch/torch/csrc/autograd/functions/utils.h":74, please report a bug to PyTorch.
```
The error is reproducible with the nightly-build version `2.7.0.dev20250208+cpu` .
Please find the [gist](https://colab.research.google.com/drive/1vkMHBA8aZdmUNjOM-q7eKiak99SLKyf-?usp=sharing) here for reference.
### Versions
GPU 1: NVIDIA GeForce RTX 3090
Nvidia driver version: 525.116.03
cuDNN version: Could not collect
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
NUMA node1 CPU(s): 16-31,48-63
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl and seccomp
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Enhanced IBRS, IBPB conditional, RSB filling
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Versions of relevant libraries:
[pip3] numpy==1.26.4
[pip3] nvidia-cublas-cu11==11.11.3.6
[pip3] nvidia-cublas-cu12==12.4.5.8
[pip3] nvidia-cuda-cupti-cu11==11.8.87
[pip3] nvidia-cuda-cupti-cu12==12.4.127
[pip3] nvidia-cuda-nvrtc-cu11==11.8.89
[pip3] nvidia-cuda-nvrtc-cu12==12.4.127
[pip3] nvidia-cuda-runtime-cu11==11.8.89
[pip3] nvidia-cuda-runtime-cu12==12.4.127
[pip3] nvidia-cudnn-cu11==9.1.0.70
[pip3] nvidia-cudnn-cu12==9.1.0.70
[pip3] nvidia-cufft-cu11==10.9.0.58
[pip3] nvidia-cufft-cu12==11.2.1.3
[pip3] nvidia-curand-cu11==10.3.0.86
[pip3] nvidia-curand-cu12==10.3.5.147
[pip3] nvidia-cusolver-cu11==11.4.1.48
[pip3] nvidia-cusolver-cu12==11.6.1.9
[pip3] nvidia-cusparse-cu11==11.7.5.86
[pip3] nvidia-cusparse-cu12==12.3.1.170
[pip3] nvidia-cusparselt-cu12==0.6.2
[pip3] nvidia-nccl-cu11==2.20.5
[pip3] nvidia-nccl-cu12==2.21.5
[pip3] nvidia-nvjitlink-cu12==12.4.127
[pip3] nvidia-nvtx-cu11==11.8.86
[pip3] nvidia-nvtx-cu12==12.4.127
[pip3] optree==0.12.1
[pip3] pytorch-triton==3.2.0+git4b3bb1f8
[pip3] torch==2.7.0.dev20250208+cu124
[pip3] torchaudio==2.6.0.dev20250208+cu124
[pip3] torchelastic==0.2.2
[pip3] torchvision==0.22.0.dev20250208+cu124
[pip3] triton==3.0.0
[conda] No relevant packages
cc @EikanWang @jgong5 @wenzhe-nrv @sanchitintel
| true
|
2,840,038,308
|
[torch.jit.script] INTERNAL ASSERT FAILED at "./torch/csrc/jit/ir/ir.h":505, please report a bug to PyTorch
|
cybersupersoap
|
open
|
[
"oncall: jit"
] | 0
|
NONE
|
### 🐛 Describe the bug
An INTERNAL ASSERT error will be raised when using torch.jit.script. The code is as follows:
```python
import torch
@torch.jit.script
def foo(i: int, z):
    y = z.view([z.size(i), 3, 2, z.size(i)])
    return y

view = foo.graph.findNode('aten::view').input()
```
Error messages:
```
RuntimeError Traceback (most recent call last)
<ipython-input-6-ba0bb8d89b23> in <cell line: 0>()
8 else:
9 return y
---> 10 view = foo.graph.findNode('aten::view').input().type().symbolic_sizes()
RuntimeError: inputs_.size() == 1 INTERNAL ASSERT FAILED at "/pytorch/torch/csrc/jit/ir/ir.h":505, please report a bug to PyTorch.
```
The error is reproducible with the nightly-build version `2.7.0.dev20250208+cpu` .
Please find the [gist](https://colab.research.google.com/drive/140Al2CqTcYYdRftwdlFRcPCPcYYis2j1?usp=sharing) here for reference.
### Versions
GPU 1: NVIDIA GeForce RTX 3090
Nvidia driver version: 525.116.03
cuDNN version: Could not collect
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
NUMA node1 CPU(s): 16-31,48-63
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl and seccomp
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Enhanced IBRS, IBPB conditional, RSB filling
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Versions of relevant libraries:
[pip3] numpy==1.26.4
[pip3] nvidia-cublas-cu11==11.11.3.6
[pip3] nvidia-cublas-cu12==12.4.5.8
[pip3] nvidia-cuda-cupti-cu11==11.8.87
[pip3] nvidia-cuda-cupti-cu12==12.4.127
[pip3] nvidia-cuda-nvrtc-cu11==11.8.89
[pip3] nvidia-cuda-nvrtc-cu12==12.4.127
[pip3] nvidia-cuda-runtime-cu11==11.8.89
[pip3] nvidia-cuda-runtime-cu12==12.4.127
[pip3] nvidia-cudnn-cu11==9.1.0.70
[pip3] nvidia-cudnn-cu12==9.1.0.70
[pip3] nvidia-cufft-cu11==10.9.0.58
[pip3] nvidia-cufft-cu12==11.2.1.3
[pip3] nvidia-curand-cu11==10.3.0.86
[pip3] nvidia-curand-cu12==10.3.5.147
[pip3] nvidia-cusolver-cu11==11.4.1.48
[pip3] nvidia-cusolver-cu12==11.6.1.9
[pip3] nvidia-cusparse-cu11==11.7.5.86
[pip3] nvidia-cusparse-cu12==12.3.1.170
[pip3] nvidia-cusparselt-cu12==0.6.2
[pip3] nvidia-nccl-cu11==2.20.5
[pip3] nvidia-nccl-cu12==2.21.5
[pip3] nvidia-nvjitlink-cu12==12.4.127
[pip3] nvidia-nvtx-cu11==11.8.86
[pip3] nvidia-nvtx-cu12==12.4.127
[pip3] optree==0.12.1
[pip3] pytorch-triton==3.2.0+git4b3bb1f8
[pip3] torch==2.7.0.dev20250208+cu124
[pip3] torchaudio==2.6.0.dev20250208+cu124
[pip3] torchelastic==0.2.2
[pip3] torchvision==0.22.0.dev20250208+cu124
[pip3] triton==3.0.0
[conda] No relevant packages
cc @EikanWang @jgong5 @wenzhe-nrv @sanchitintel
| true
|
2,840,014,281
|
[Inductor][CPU] Add GEMM templates for _weight_int4pack_mm_for_cpu with AVX512
|
Xia-Weiwen
|
closed
|
[
"open source",
"Merged",
"ciflow/trunk",
"topic: not user facing",
"intel",
"module: inductor",
"ciflow/inductor"
] | 7
|
COLLABORATOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #146756
**Summary**
It's part of the task to enable max-autotune with GEMM template for WoQ INT4 GEMM on CPU.
This PR adds GEMM templates for `torch.ops.aten._weight_int4pack_mm_for_cpu`. The micro kernel used for the templates is based on AVX512 and it's a copy of the ATen implementation of `torch.ops.aten._weight_int4pack_mm_for_cpu` with minor changes.
Due to better blocking and loop schedule, the GEMM template based implementation outperforms the ATen implementation in all cases we tested.
**Test plan**
```
python test/inductor/test_cpu_select_algorithm.py -k test_int4_woq_mm_avx512
```
cc @jgong5 @mingfeima @XiaobingSuper @sanchitintel @ashokei @jingxu10 @voznesenskym @penguinwu @EikanWang @Guobing-Chen @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
| true
|
2,839,878,192
|
Fix CUTLASS 2.x kernels for auto-tuning
|
alexsamardzic
|
closed
|
[
"open source",
"Merged",
"ciflow/trunk",
"topic: not user facing",
"module: inductor",
"ciflow/inductor",
"merging"
] | 4
|
COLLABORATOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #146764
* __->__ #146755
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
| true
|
2,839,766,409
|
[MPS] fix inverse bug for N>1024
|
Isalia20
|
closed
|
[
"triaged",
"open source",
"Merged",
"module: mps",
"release notes: mps",
"ciflow/mps"
] | 12
|
COLLABORATOR
|
Fixes #138200
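A quick sanity check in the spirit of the linked issue; the size and tolerances here are assumptions, not taken from #138200:
```python
import torch

n = 2048  # > 1024, the regime this PR fixes
A = torch.eye(n) + 0.01 * torch.randn(n, n)
inv_cpu = torch.linalg.inv(A)
inv_mps = torch.linalg.inv(A.to("mps")).cpu()
torch.testing.assert_close(inv_mps, inv_cpu, rtol=1e-3, atol=1e-3)
```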
cc @kulinseth @albanD @malfet @DenisVieriu97 @jhavukainen
| true
|
2,839,704,716
|
[MPS] fix lu factor for large tensors with bs>1
|
Isalia20
|
closed
|
[
"open source",
"Merged",
"topic: bug fixes",
"release notes: mps",
"ciflow/mps"
] | 3
|
COLLABORATOR
|
Try this:
```python
import torch
batch_size = 2
A = torch.eye(256, device="mps")[None, :, :].expand(batch_size, -1, -1) + 0.1 * torch.randn((batch_size, 256, 256), device="mps")
A_cpu = A.cpu()
LU_cpu, pivots_cpu = torch.linalg.lu_factor(A_cpu)
LU, pivots = torch.linalg.lu_factor(A)
torch.testing.assert_close(LU.cpu(), LU_cpu)
```
You'll get a huge difference in the LU tensors.
<img width="706" alt="Screenshot 2025-02-08 at 12 14 39" src="https://github.com/user-attachments/assets/b45f2b3c-e0a5-49c8-aa07-42792150b781" />
| true
|
2,839,670,894
|
realize stride symbols in estimate_runtime
|
laithsakka
|
closed
|
[
"Merged",
"ciflow/trunk",
"topic: not user facing",
"ciflow/inductor"
] | 6
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #146752
Unfortunately, I could not create a local repro or a unit test.
Fixes https://github.com/pytorch/pytorch/issues/146686
| true
|
2,839,665,509
|
[MTIA] (4/n) Implement PyTorch APIs to query/reset device peak memory usage
|
chaos5958
|
closed
|
[
"fb-exported",
"Merged",
"ciflow/trunk",
"topic: not user facing"
] | 10
|
CONTRIBUTOR
|
Summary: Public summary (shared with Github): This diff updates the unit test for the PyTorch API "reset_peak_memory_stats".
Test Plan:
```
buck2 test //mtia/host_runtime/torch_mtia/tests:test_torch_mtia_api -- -r test_reset_peak_memory_stats
```
https://www.internalfb.com/intern/testinfra/testrun/9007199321947161
Reviewed By: yuhc
Differential Revision: D68989900
| true
|
2,839,643,802
|
Update instructions about faster linker
|
oraluben
|
closed
|
[
"triaged",
"open source",
"Merged",
"ciflow/trunk",
"topic: not user facing"
] | 8
|
CONTRIBUTOR
|
This PR adds instructions to specify linker via cmake env `CMAKE_LINKER_TYPE` and also adds `mold` as a linker alternative.
Since 3.29, cmake introduced [`CMAKE_LINKER_TYPE`](https://cmake.org/cmake/help/latest/variable/CMAKE_LINKER_TYPE.html) that can specify linker without overwriting `ld` file or changing build script.
`mold` is already stable and **the fastest** (afaict) linker out there, and also easier to install compared with `lld`. So I added it here. After switching to `mold`, the time of linking `libtorch_cuda.so` has been reduced from ~7s to ~0.6s locally.
Also note `gold` has been marked deprecated recently[1].
[1] https://lwn.net/Articles/1007541/
| true
|
2,839,639,588
|
dest = zeros_like(source, dtype=DTYPE) changes source's DTensor dtype
|
janeyx99
|
closed
|
[
"high priority",
"triage review",
"oncall: distributed",
"module: correctness (silent)",
"module: dtensor"
] | 4
|
CONTRIBUTOR
|
### 🐛 Describe the bug
Calling zeros_like on a DTensor should not have side effects on the source tensor, but it does. Specifically, the dtype recorded as a part of the DTensor spec is changed, which is wrong.
Example.
```
import torch
import torch.nn as nn
from torch.distributed.fsdp import fully_shard
lin1 = nn.Linear(2,2, bias=False)
fully_shard(lin1)
print(f"BEFORE, the param has dtype fp32 {lin1.weight=} {lin1.weight._spec.tensor_meta}")
t = torch.zeros_like(lin1.weight, dtype=torch.bfloat16)
print(f"AFTER, the param has dtype bf16????? {lin1.weight=} {lin1.weight._spec.tensor_meta}")
```
While the local tensor for source remains the right dtype, the metadata stored in DTensor is now mismatched and will cause propagation to be wrong further down. I noticed this when attempting to enable a POC for mixed precision optim #146640 with FSDP and spent the last few hours debugging to find this surprising behavior.
### Versions
on source
cc @ezyang @gchanan @zou3519 @kadeng @msaroufim @H-Huang @awgu @kwen2501 @wanchaol @fegin @fduwjj @wz337 @wconstab @d4l3k @c-p-i-o @tianyu-l @XilunWu
| true
|
2,839,557,379
|
Update strided test to float32
|
drisspg
|
closed
|
[
"Merged",
"ciflow/trunk",
"topic: not user facing",
"module: inductor",
"ciflow/inductor"
] | 8
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #146748
Fixes #146377
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
| true
|
2,839,514,607
|
Add hint message for `pack_padded_sequence`
|
zeshengzong
|
closed
|
[
"triaged",
"open source",
"Merged",
"ciflow/trunk",
"topic: not user facing"
] | 12
|
CONTRIBUTOR
|
Fixes #144207
Add a truncation hint message to the docs of [torch.nn.utils.rnn.pack_padded_sequence](https://pytorch.org/docs/stable/generated/torch.nn.utils.rnn.pack_padded_sequence.html).
## Test Result

| true
|
2,839,465,359
|
[Inductor] Fix the lowering of squeeze when input is not contiguous
|
leslie-fang-intel
|
closed
|
[
"open source",
"Merged",
"ciflow/trunk",
"topic: not user facing",
"module: inductor",
"ciflow/inductor"
] | 5
|
COLLABORATOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #146746
**Summary**
Fix issue https://github.com/pytorch/pytorch/issues/143498. The issue happens when lowering `select = torch.ops.aten.select.int(cat, 1, 0)`.
For example, when `cat` is contiguous with size [2, 2] and stride [2, 1]:
- in eager mode, it returns a view with size [2] and stride [2] (see the eager sketch after the IR dump below)
- the Inductor lowering returns the wrong stride, 1 instead of 2:
```
TensorBox(
ReinterpretView(
StorageBox(
ConcatKernel(name='buf10', layout=FixedLayout('cpu', torch.int64, size=[u0, 2], stride=[2, 1]), inputs=[ComputedBuffer(name='buf8', layout=NonOwningLayout('cpu', torch.int64, size=[u0, 1], stride=[2, 1]), data=Pointwise(device=device(type='cpu'), dtype=torch.int64, inner_fn=<function ReinterpretView.make_loader.<locals>.loader at 0x7f6b856449d0>, ranges=[u0, 1])), ComputedBuffer(name='buf9', layout=NonOwningLayout('cpu', torch.int64, size=[u0, 1], stride=[2, 1]), data=Pointwise(device=device(type='cpu'), dtype=torch.int64, inner_fn=<function ReinterpretView.make_loader.<locals>.loader at 0x7f6b85644790>, ranges=[u0, 1]))])
),
FixedLayout('cpu', torch.int64, size=[u0], stride=[**1**]),
origins=OrderedSet([select])
)
)
```
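For reference, a minimal eager-mode sketch of the stride the lowering should preserve (a plain contiguous tensor here, not the unbacked-symint case from the original repro):
```python
import torch

cat = torch.zeros(2, 2, dtype=torch.int64)   # contiguous: size [2, 2], stride [2, 1]
out = torch.ops.aten.select.int(cat, 1, 0)   # select index 0 along dim 1
print(out.size(), out.stride())              # torch.Size([2]) (2,) -- stride 2, not 1
```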
To fix this issue, we produce the right stride in the lowering of `squeeze`.
**Test Plan**
```
python -u -m pytest -s -v test/inductor/test_unbacked_symints.py -k test_issue_143498
```
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
| true
|
2,839,464,754
|
[Flex Attention] Errors with Dynamic Shapes (Cannot determine truth value of Relational)
|
ChenlongDeng
|
closed
|
[
"oncall: pt2",
"module: higher order operators",
"module: pt2-dispatcher",
"module: flex attention"
] | 4
|
NONE
|
### 🐛 Describe the bug
Thanks for the team's great work! But it seems that the latest version (torch==2.6.0) still hasn't resolved the issue with dynamic-shape inputs. I can easily reproduce this problem with a few lines of chunked-prefill code. I am curious whether this is the same issue reported in https://github.com/pytorch/pytorch/issues/139064 and how to solve it.
I have narrowed the issue down to whether the `create_block_mask` function is compiled or not. **If this function is not compiled, the program runs normally.** However, for longer sequence masks (e.g., 64K x 64K), not compiling `create_block_mask` leads to huge GPU memory overhead, causing OOM. I'm not sure if this is because a full bf16 mask tensor is materialized in the background. But if I compile this function, the same `LoweringException: TypeError: cannot determine truth value of Relational` error as in https://github.com/pytorch/pytorch/issues/139064 occurs.
You can easily reproduce this with the following code:
```python
from torch.nn.attention.flex_attention import flex_attention, create_block_mask, _DEFAULT_SPARSE_BLOCK_SIZE
import torch
import argparse
import math
from tqdm import tqdm
parser = argparse.ArgumentParser()
parser.add_argument("--seq_len", type=int, default=32*1024)
parser.add_argument("--head_num", type=int, default=32)
parser.add_argument("--head_dim", type=int, default=128)
parser.add_argument("--chunk_size", type=int, default=2*1024)
args = parser.parse_args()
flex_attention = torch.compile(flex_attention, dynamic=False, mode="max-autotune")
def get_dynamic_mod(recent_token_num):
    def get_mask(b, h, q_idx, kv_idx):
        recent_mask = kv_idx < recent_token_num
        real_kv_idx = kv_idx - recent_token_num
        casual_mask = q_idx >= real_kv_idx
        return recent_mask | casual_mask
    return get_mask

@torch.no_grad
def main():
    q = torch.randn(1, args.head_num, args.seq_len, args.head_dim, dtype=torch.bfloat16).cuda()
    k = torch.randn(1, args.head_num, args.seq_len, args.head_dim, dtype=torch.bfloat16).cuda()
    v = torch.randn(1, args.head_num, args.seq_len, args.head_dim, dtype=torch.bfloat16).cuda()
    iter_num = math.ceil(args.seq_len / args.chunk_size)
    num_past_tokens = 0
    for i in tqdm(range(iter_num)):
        query_states = q[:, :, i*args.chunk_size:(i+1)*args.chunk_size, :]
        key_states = k[:, :, i*args.chunk_size-num_past_tokens:(i+1)*args.chunk_size, :]
        value_states = v[:, :, i*args.chunk_size-num_past_tokens:(i+1)*args.chunk_size, :]
        print(query_states.shape, key_states.shape, value_states.shape)
        mask_mod = get_dynamic_mod(num_past_tokens)
        # whether to use `_compile=True` here is important!
        block_mask = create_block_mask(mask_mod, 1, 1, args.chunk_size, args.chunk_size+num_past_tokens, device="cuda", BLOCK_SIZE=(128, 64), _compile=True)
        attn_output = flex_attention(query_states, key_states, value_states, block_mask=block_mask)
        num_past_tokens = args.chunk_size * (i+1)
        # num_past_tokens = 0

if __name__ == "__main__":
    main()
```
### Versions
torch==2.6.0
GPU: Nvidia A100-40G SXM
cc @chauhang @penguinwu @zou3519 @ydwu4 @bdhirsh @yf225 @Chillee @drisspg @yanboliang @BoyuanFeng
| true
|
2,839,439,912
|
`torch.nn.utils.rnn.pack_padded_sequence` need better check for `input` dim
|
zeshengzong
|
closed
|
[] | 0
|
CONTRIBUTOR
|
### 🐛 Describe the bug
In the [`torch.nn.utils.rnn.pack_padded_sequence`](https://pytorch.org/docs/stable/generated/torch.nn.utils.rnn.pack_padded_sequence.html) docs, there's an assumption that `T` is the longest sequence length:
> The returned Tensor’s data will be of size T x B x * (if batch_first is False) or B x T x * (if batch_first is True) , where **T is the length of the longest sequence and B is the batch size**.
There seems to be no check for this, and the error message does not clearly suggest to users what is wrong.
```python
import torch
from torch.nn.utils.rnn import pack_padded_sequence, pad_packed_sequence
input_tensor = torch.randn(3, 5, 3)
# Note: the first declared length (4) exceeds the actual max sequence length T = 3
lengths = [4, 2, 3]
packed = pack_padded_sequence(input_tensor, lengths, batch_first=False, enforce_sorted=False)
unpacked, unpacked_lengths = pad_packed_sequence(packed, batch_first=False)
# Outputs: (3, 4, 3)
print("Unpacked Sequence Shape:", unpacked.shape)
# Outputs: [4, 2, 3]
print("Unpacked Lengths:", unpacked_lengths)
print("Original Sequence:", input_tensor)
# Note: the last sequence length index has been truncated
print("Unpacked Sequence:", unpacked)
Traceback (most recent call last):
File "/home/zong/code/rnn2.py", line 10, in <module>
unpacked, unpacked_lengths = pad_packed_sequence(packed, batch_first=False)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/zong/code/pytorch/torch/nn/utils/rnn.py", line 397, in pad_packed_sequence
padded_output, lengths = _VF._pad_packed_sequence(
^^^^^^^^^^^^^^^^^^^^^^^^^
RuntimeError: shape '[1, 1, 3]' is invalid for input of size 0 # Not clear about where does [1, 1, 3] comes from.
```
### Versions
PyTorch version: 2.7.0a0+git9feba2a
Is debug build: True
CUDA used to build PyTorch: 12.6
ROCM used to build PyTorch: N/A
OS: Ubuntu 22.04.5 LTS (x86_64)
GCC version: (Ubuntu 11.4.0-1ubuntu1~22.04) 11.4.0
Clang version: 14.0.0-1ubuntu1.1
CMake version: version 3.26.4
Libc version: glibc-2.35
Python version: 3.12.0 | packaged by Anaconda, Inc. | (main, Oct 2 2023, 17:29:18) [GCC 11.2.0] (64-bit runtime)
Python platform: Linux-5.15.0-76-generic-x86_64-with-glibc2.35
Is CUDA available: True
CUDA runtime version: 12.6.68
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration:
GPU 0: Tesla T4
GPU 1: Tesla T4
Nvidia driver version: 560.35.03
cuDNN version: Could not collect
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 46 bits physical, 48 bits virtual
Byte Order: Little Endian
CPU(s): 16
On-line CPU(s) list: 0-15
Vendor ID: GenuineIntel
Model name: Intel(R) Xeon(R) Gold 6151 CPU @ 3.00GHz
CPU family: 6
Model: 85
Thread(s) per core: 2
Core(s) per socket: 8
Socket(s): 1
Stepping: 4
BogoMIPS: 6000.00
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ss ht syscall nx pdpe1gb rdtscp lm constant_tsc rep_good nopl xtopology nonstop_tsc cpuid tsc_known_freq pni pclmulqdq ssse3 fma cx16 pcid sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand hypervisor lahf_lm abm 3dnowprefetch invpcid_single pti ssbd ibrs ibpb stibp fsgsbase tsc_adjust bmi1 hle avx2 smep bmi2 erms invpcid rtm mpx avx512f avx512dq rdseed adx smap clflushopt clwb avx512cd avx512bw avx512vl xsaveopt xsavec xgetbv1 arat md_clear flush_l1d arch_capabilities
Hypervisor vendor: KVM
Virtualization type: full
L1d cache: 256 KiB (8 instances)
L1i cache: 256 KiB (8 instances)
L2 cache: 8 MiB (8 instances)
L3 cache: 24.8 MiB (1 instance)
NUMA node(s): 1
NUMA node0 CPU(s): 0-15
Vulnerability Itlb multihit: KVM: Mitigation: VMX unsupported
Vulnerability L1tf: Mitigation; PTE Inversion
Vulnerability Mds: Mitigation; Clear CPU buffers; SMT Host state unknown
Vulnerability Meltdown: Mitigation; PTI
Vulnerability Mmio stale data: Mitigation; Clear CPU buffers; SMT Host state unknown
Vulnerability Retbleed: Mitigation; IBRS
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl and seccomp
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; IBRS, IBPB conditional, STIBP conditional, RSB filling, PBRSB-eIBRS Not affected
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Mitigation; Clear CPU buffers; SMT Host state unknown
Versions of relevant libraries:
[pip3] flake8==6.1.0
[pip3] flake8-bugbear==23.3.23
[pip3] flake8-comprehensions==3.15.0
[pip3] flake8-executable==2.1.3
[pip3] flake8-logging-format==0.9.0
[pip3] flake8-pyi==23.3.1
[pip3] flake8-simplify==0.19.3
[pip3] mypy==1.13.0
[pip3] mypy-extensions==1.0.0
[pip3] numpy==2.1.0
[pip3] optree==0.13.0
[pip3] pytorch_openreg==1.0
[pip3] torch==2.7.0a0+git9feba2a
[pip3] triton==3.1.0
[conda] mkl-include 2024.2.2 pypi_0 pypi
[conda] mkl-static 2024.2.2 pypi_0 pypi
[conda] numpy 2.1.0 pypi_0 pypi
[conda] optree 0.13.0 pypi_0 pypi
[conda] pytorch-openreg 1.0 dev_0 <develop>
[conda] torch 2.7.0a0+git9feba2a dev_0 <develop>
[conda] torchfix 0.4.0 pypi_0 pypi
[conda] triton 3.1.0 pypi_0 pypi
| true
|
2,839,389,191
|
[cutlass backend][BE] refactor tests to remove duplicate logic
|
henrylhtsang
|
closed
|
[
"Merged",
"ciflow/trunk",
"topic: not user facing",
"module: inductor",
"ciflow/inductor"
] | 4
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #147173
* #147169
* #147158
* #147148
* __->__ #146743
Doing many things here:
* remove duplicate hip checking logic
* check for CUDA in setup
* remove the CUTLASS_DIR setting; it is no longer needed when building from source or in fbcode
* fix some typing errors
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
| true
|
2,839,384,955
|
[Dynamo][autograd.Function] Relax backward speculation strict mode: support .requires_grad
|
yanboliang
|
closed
|
[
"Merged",
"ciflow/trunk",
"topic: not user facing",
"module: dynamo",
"ciflow/inductor"
] | 6
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #146742
* #146741
* #146571
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @chauhang @amjames
| true
|
2,839,384,923
|
[Dynamo][autograd.Function] Relax backward speculation strict mode: support .data
|
yanboliang
|
closed
|
[
"Merged",
"topic: not user facing",
"module: dynamo",
"ciflow/inductor"
] | 1
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #146742
* __->__ #146741
* #146571
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @chauhang @amjames
| true
|
2,839,362,824
|
PyTorch Compilation on AGX Xavier Error with -march=armv8.2-a+bf16 in KleidiAI
|
MaTwickenham
|
closed
|
[
"module: build",
"triaged",
"module: arm"
] | 4
|
NONE
|
### 🐛 Describe the bug
I am trying to compile PyTorch on my Jetson AGX Xavier, but I encounter the following error when compiling the third party lib `kleidiai`:
```
FAILED: third_party/kleidiai/CMakeFiles/kleidiai.dir/kai/ukernels/matmul/pack/kai_lhs_quant_pack_bf16p_f32_neon.c.o
/usr/bin/cc -DONNXIFI_ENABLE_EXT=1 -DONNX_ML=1 -DONNX_NAMESPACE=onnx_torch -I/home/walker/workspace/llm-serving/pytorch/cmake/../third_party/benchmark/include -I/home/walker/workspace/llm-serving/pytorch/third_party/onnx -I/home/walker/workspace/llm-serving/pytorch/build/third_party/onnx -I/home/walker/workspace/llm-serving/pytorch/third_party/kleidiai/. -isystem /home/walker/workspace/llm-serving/pytorch/cmake/../third_party/googletest/googlemock/include -isystem /home/walker/workspace/llm-serving/pytorch/cmake/../third_party/googletest/googletest/include -isystem /home/walker/workspace/llm-serving/pytorch/third_party/protobuf/src -isystem /home/walker/workspace/llm-serving/pytorch/third_party/XNNPACK/include -isystem /home/walker/workspace/llm-serving/pytorch/cmake/../third_party/eigen -isystem /usr/local/cuda-11.4/include -ffunction-sections -fdata-sections -DNDEBUG -O3 -DNDEBUG -DNDEBUG -std=c99 -fPIC -D__NEON__ -Wall -Wdisabled-optimization -Werror -Wextra -Wformat-security -Wformat=2 -Winit-self -Wno-ignored-attributes -Wno-misleading-indentation -Wno-overlength-strings -Wstrict-overflow=2 -Wswitch-default -march=armv8.2-a+bf16 -MD -MT third_party/kleidiai/CMakeFiles/kleidiai.dir/kai/ukernels/matmul/pack/kai_lhs_quant_pack_bf16p_f32_neon.c.o -MF third_party/kleidiai/CMakeFiles/kleidiai.dir/kai/ukernels/matmul/pack/kai_lhs_quant_pack_bf16p_f32_neon.c.o.d -o third_party/kleidiai/CMakeFiles/kleidiai.dir/kai/ukernels/matmul/pack/kai_lhs_quant_pack_bf16p_f32_neon.c.o -c /home/walker/workspace/llm-serving/pytorch/third_party/kleidiai/kai/ukernels/matmul/pack/kai_lhs_quant_pack_bf16p_f32_neon.c
cc1: error: invalid feature modifier ‘bf16’ in ‘-march=armv8.2-a+bf16’
cc1: note: valid arguments are: fp simd crypto crc lse fp16 rcpc rdma dotprod aes sha2 sha3 sm4 fp16fml sve profile rng memtag sb ssbs predres;
```
My spec is:

My compile options are:
```
# get source code
git clone --recursive https://github.com/pytorch/pytorch.git
cd pytorch
# set build options
export USE_NCCL=1
export USE_DISTRIBUTED=1
export USE_QNNPACK=0
export USE_PYTORCH_QNNPACK=0
export TORCH_CUDA_ARCH_LIST="7.2"
export PYTORCH_BUILD_VERSION=2.3.1
export PYTORCH_BUILD_NUMBER=1
export MAX_JOBS=16
pip install -r requirements.txt
python setup.py bdist_wheel
```
As far as I know, Jetson AGX Xavier does not support bf16, so I want to disable the compilation option related to bf16. How can I do this? :D
### Versions
PyTorch version: N/A
Is debug build: N/A
CUDA used to build PyTorch: N/A
ROCM used to build PyTorch: N/A
OS: Ubuntu 20.04.6 LTS (aarch64)
GCC version: (Ubuntu 9.4.0-1ubuntu1~20.04.2) 9.4.0
Clang version: Could not collect
CMake version: version 3.30.0-rc3
Libc version: glibc-2.31
Python version: 3.10.16 (main, Dec 11 2024, 16:18:56) [GCC 11.2.0] (64-bit runtime)
Python platform: Linux-5.10.192-tegra-aarch64-with-glibc2.31
Is CUDA available: N/A
CUDA runtime version: 11.4.315
CUDA_MODULE_LOADING set to: N/A
GPU models and configuration: Could not collect
Nvidia driver version: Could not collect
cuDNN version: Probably one of the following:
/usr/lib/aarch64-linux-gnu/libcudnn.so.8.6.0
/usr/lib/aarch64-linux-gnu/libcudnn_adv_infer.so.8.6.0
/usr/lib/aarch64-linux-gnu/libcudnn_adv_train.so.8.6.0
/usr/lib/aarch64-linux-gnu/libcudnn_cnn_infer.so.8.6.0
/usr/lib/aarch64-linux-gnu/libcudnn_cnn_train.so.8.6.0
/usr/lib/aarch64-linux-gnu/libcudnn_ops_infer.so.8.6.0
/usr/lib/aarch64-linux-gnu/libcudnn_ops_train.so.8.6.0
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: N/A
CPU:
Architecture: aarch64
CPU op-mode(s): 32-bit, 64-bit
Byte Order: Little Endian
CPU(s): 8
On-line CPU(s) list: 0-7
Thread(s) per core: 1
Core(s) per socket: 2
Socket(s): 4
Vendor ID: Nvidia
Model: 0
Model name: ARMv8 Processor rev 0 (v8l)
Stepping: 0x0
CPU max MHz: 2265.6001
CPU min MHz: 115.2000
BogoMIPS: 62.50
L1d cache: 512 KiB
L1i cache: 1 MiB
L2 cache: 8 MiB
L3 cache: 4 MiB
Vulnerability Gather data sampling: Not affected
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Not affected
Vulnerability Retbleed: Not affected
Vulnerability Spec rstack overflow: Not affected
Vulnerability Spec store bypass: Not affected
Vulnerability Spectre v1: Mitigation; __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Branch predictor hardening, but not BHB
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Flags: fp asimd evtstrm aes pmull sha1 sha2 crc32 atomics fphp asimdhp cpuid asimdrdm dcpop
Versions of relevant libraries:
[pip3] numpy==2.2.2
[pip3] optree==0.14.0
[conda] numpy 2.2.2 pypi_0 pypi
[conda] optree 0.14.0 pypi_0 pypi
cc @ptrblck @msaroufim @eqy @malfet @snadampal @milpuz01 @seemethere
| true
|
2,839,340,321
|
Testing
|
mikaylagawarecki
|
closed
|
[
"release notes: releng",
"ciflow/binaries_wheel"
] | 1
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #146739
* #145748
This reverts commit 5cd5b4d2d54c0220b92ee488dd36d789c9b60af3.
| true
|
2,839,333,662
|
[audio hash update] update the pinned audio hash
|
pytorchupdatebot
|
closed
|
[
"open source",
"Merged",
"ciflow/trunk",
"topic: not user facing",
"ciflow/inductor"
] | 18
|
COLLABORATOR
|
This PR is auto-generated nightly by [this action](https://github.com/pytorch/pytorch/blob/main/.github/workflows/nightly.yml).
Update the pinned audio hash.
| true
|
2,839,332,048
|
[dynamo][user-defined] Unify standard and non-standard __new__ codebase
|
anijain2305
|
closed
|
[
"Merged",
"ciflow/trunk",
"topic: not user facing",
"module: dynamo",
"ciflow/inductor",
"keep-going"
] | 9
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #146819
* __->__ #146737
* #146677
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @chauhang @amjames
| true
|
2,839,327,914
|
Document dynamo
|
Raymo111
|
closed
|
[
"better-engineering",
"Merged",
"ciflow/trunk",
"topic: not user facing",
"module: inductor",
"module: dynamo",
"ciflow/inductor",
"module: compiled autograd"
] | 6
|
MEMBER
|
Many files in dynamo are currently lacking file/module-level documentation, which makes it hard to know what they do at a glance and without digging into the code. This fixes that.
Note: the documentation was AI-generated and could be incorrect; please review carefully.
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov @StrongerXi @xmfan @svekars @brycebortree @sekyondaMeta @AlannaBurke
| true
|
2,839,297,293
|
[ca] log graph before reodering passes
|
xmfan
|
closed
|
[
"Merged",
"topic: not user facing",
"module: dynamo",
"ciflow/inductor",
"module: compiled autograd"
] | 1
|
MEMBER
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #147021
* #146875
* __->__ #146735
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @chauhang @amjames @StrongerXi @yf225
| true
|
2,839,287,326
|
[CUDA][CUDNN][SDPA] Pass dropout seed and offset to cuDNN in `int64`
|
eqy
|
closed
|
[
"module: cudnn",
"module: cuda",
"triaged",
"open source",
"Merged",
"ciflow/trunk",
"topic: not user facing",
"module: sdpa"
] | 12
|
COLLABORATOR
|
Workaround for a limitation in cuDNN, which does not accept the dropout seed/offset as `int32` for SM 10.0 kernels.
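A minimal sketch of how the affected path is exercised (assumes a CUDA build where the cuDNN SDPA backend is available; backend selection may fall back silently on other setups):
```python
import torch
from torch.nn.attention import sdpa_kernel, SDPBackend

# dropout_p > 0 is what routes a dropout seed/offset to cuDNN on this path.
q = torch.randn(2, 8, 128, 64, device="cuda", dtype=torch.float16)
k, v = torch.randn_like(q), torch.randn_like(q)
with sdpa_kernel(SDPBackend.CUDNN_ATTENTION):
    out = torch.nn.functional.scaled_dot_product_attention(q, k, v, dropout_p=0.1)
```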
cc @csarofeen @ptrblck @xwang233 @msaroufim
| true
|
2,839,286,302
|
[CUDA][SDPA] Don't dispatch to mem eff attn for batch_size >= 65536
|
eqy
|
open
|
[
"module: cuda",
"open source",
"Stale",
"topic: not user facing",
"module: sdpa"
] | 3
|
COLLABORATOR
|
#146704
cc @ptrblck @msaroufim
| true
|
2,839,274,170
|
increase lwork/rwork sizes for all float->int conversions
|
wdvr
|
open
|
[
"triaged",
"module: linear algebra"
] | 0
|
CONTRIBUTOR
|
This is a follow-up to https://github.com/pytorch/pytorch/issues/145801 and https://github.com/pytorch/pytorch/pull/146456.
To do:
- extract the solution in https://github.com/pytorch/pytorch/pull/146456 into a method
- call that method in all LAPACK functions
cc @jianyuh @nikitaved @pearu @mruberry @walterddr @xwang233 @Lezcano @malfet
| true
|
2,839,234,257
|
dont specialize symints when testing truthiness
|
bdhirsh
|
closed
|
[
"Merged",
"ciflow/trunk",
"release notes: composability",
"module: dynamo",
"ciflow/inductor"
] | 6
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #133044
* __->__ #146731
* #146729
* #146642
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @chauhang @amjames
| true
|
2,839,226,304
|
[BaseHOP] change hop(subgraph, operands) to hop(subgraph, *operands)
|
zou3519
|
closed
|
[
"Merged",
"ciflow/trunk",
"release notes: foreach_frontend",
"module: inductor",
"module: dynamo",
"ciflow/inductor"
] | 10
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #146730
Our three main users are OK with this, with two of them (foreach_map,
invoke_quant) preferring it like this.
I was originally worried about BC issues (this now means you cannot add
any positional args) but I think that's not a concern -- one can always
add kwonly args.
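A hypothetical sketch of the calling-convention change (plain Python stand-ins, not real HOPs):
```python
import torch

# Before: operands were passed as a single tuple.
def hop_old(subgraph, operands):
    return subgraph(*operands)

# After: operands are unpacked as positional arguments.
def hop_new(subgraph, *operands):
    return subgraph(*operands)

subgraph = lambda a, b: a + b
x, y = torch.ones(2), torch.ones(2)
assert torch.equal(hop_old(subgraph, (x, y)), hop_new(subgraph, x, y))
```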
Test Plan
- tests
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
| true
|