| id | title | user | state | labels | comments | author_association | body | is_title |
|---|---|---|---|---|---|---|---|---|
2,754,588,032
|
[inductor][gpu] torch.fft.fft outputs incorrect results when `n>1`
|
maybeLee
|
closed
|
[
"triaged",
"module: fft",
"oncall: pt2",
"module: inductor"
] | 0
|
CONTRIBUTOR
|
### 🐛 Describe the bug
I found this issue when running on a GPU (RTX 3090). After `torch.compile`, `torch.fft.fft` outputs incorrect results. Here is the code to reproduce:
```python
import torch
print(torch.__version__)
@torch.compile
def fft(input, n=None, dim=-1, norm=None):
return torch.fft.fft(input, n, dim, norm)
input = torch.tensor([[[1.3703]]])
input = input.to('cuda')
n = 2
dim = -1
print(f"[CUDA] FFT in compiled mode: {fft(input, n, dim)}")
print(f"[CUDA] FFT in eager mode: {torch.fft.fft(input, n, dim)}")
input = input.cpu()
print(f"[CPU] FFT in compiled mode: {fft(input, n, dim)}")
print(f"[CPU] FFT in eager mode: {torch.fft.fft(input, n, dim)}")
```
Output:
```
2.6.0a0+gite15442a
[CUDA] FFT in compiled mode: tensor([[[2.7406+0.j, 0.0000+0.j]]], device='cuda:0')
[CUDA] FFT in eager mode: tensor([[[1.3703+0.j, 1.3703+0.j]]], device='cuda:0')
[CPU] FFT in compiled mode: tensor([[[1.3703+0.j, 1.3703+0.j]]])
[CPU] FFT in eager mode: tensor([[[1.3703+0.j, 1.3703+0.j]]])
```
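For reference, here is a minimal NumPy cross-check (my addition, not part of the original report); the DFT of `[1.3703]` zero-padded to length 2 matches the eager output, not the inductor one:
```python
# Cross-check against NumPy: fft([1.3703], n=2) zero-pads to [1.3703, 0.0],
# whose 2-point DFT is [1.3703, 1.3703] -- the eager result above.
import numpy as np

x = np.array([1.3703])
print(np.fft.fft(x, n=2))  # [1.3703+0.j 1.3703+0.j]
```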
### Versions
Collecting environment information...
PyTorch version: 2.6.0a0+gite15442a
Is debug build: False
CUDA used to build PyTorch: 12.4
ROCM used to build PyTorch: N/A
OS: Ubuntu 22.04.4 LTS (x86_64)
GCC version: (Ubuntu 11.4.0-1ubuntu1~22.04) 11.4.0
Clang version: Could not collect
CMake version: version 3.31.2
Libc version: glibc-2.35
Python version: 3.11.10 | packaged by conda-forge | (main, Oct 16 2024, 01:27:36) [GCC 13.3.0] (64-bit runtime)
Python platform: Linux-5.14.0-427.37.1.el9_4.x86_64-x86_64-with-glibc2.35
Is CUDA available: True
CUDA runtime version: 12.4.131
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration:
GPU 0: NVIDIA GeForce RTX 3090
GPU 1: NVIDIA GeForce RTX 3090
GPU 2: NVIDIA GeForce RTX 3090
GPU 3: NVIDIA GeForce RTX 3090
Nvidia driver version: 560.35.03
cuDNN version: Could not collect
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 43 bits physical, 48 bits virtual
Byte Order: Little Endian
CPU(s): 64
On-line CPU(s) list: 0-63
Vendor ID: AuthenticAMD
Model name: AMD Ryzen Threadripper PRO 3975WX 32-Cores
CPU family: 23
Model: 49
Thread(s) per core: 2
Core(s) per socket: 32
Socket(s): 1
Stepping: 0
Frequency boost: enabled
CPU max MHz: 4368.1641
CPU min MHz: 2200.0000
BogoMIPS: 7000.73
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ht syscall nx mmxext fxsr_opt pdpe1gb rdtscp lm constant_tsc rep_good nopl nonstop_tsc cpuid extd_apicid aperfmperf rapl pni pclmulqdq monitor ssse3 fma cx16 sse4_1 sse4_2 x2apic movbe popcnt aes xsave avx f16c rdrand lahf_lm cmp_legacy svm extapic cr8_legacy abm sse4a misalignsse 3dnowprefetch osvw ibs skinit wdt tce topoext perfctr_core perfctr_nb bpext perfctr_llc mwaitx cpb cat_l3 cdp_l3 hw_pstate ssbd mba ibpb stibp vmmcall fsgsbase bmi1 avx2 smep bmi2 cqm rdt_a rdseed adx smap clflushopt clwb sha_ni xsaveopt xsavec xgetbv1 xsaves cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local clzero irperf xsaveerptr rdpru wbnoinvd amd_ppin arat npt lbrv svm_lock nrip_save tsc_scale vmcb_clean flushbyasid decodeassists pausefilter pfthreshold avic v_vmsave_vmload vgif v_spec_ctrl umip rdpid overflow_recov succor smca sev sev_es
Virtualization: AMD-V
L1d cache: 1 MiB (32 instances)
L1i cache: 1 MiB (32 instances)
L2 cache: 16 MiB (32 instances)
L3 cache: 128 MiB (8 instances)
NUMA node(s): 1
NUMA node0 CPU(s): 0-63
Vulnerability Gather data sampling: Not affected
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Not affected
Vulnerability Retbleed: Mitigation; untrained return thunk; SMT enabled with STIBP protection
Vulnerability Spec rstack overflow: Mitigation; Safe RET
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Retpolines, IBPB conditional, STIBP always-on, RSB filling, PBRSB-eIBRS Not affected
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Versions of relevant libraries:
[pip3] mypy==1.13.0
[pip3] mypy-extensions==1.0.0
[pip3] numpy==1.26.2
[pip3] nvidia-cublas-cu12==12.4.5.8
[pip3] nvidia-cuda-cupti-cu12==12.4.127
[pip3] nvidia-cuda-nvrtc-cu12==12.4.127
[pip3] nvidia-cuda-runtime-cu12==12.4.127
[pip3] nvidia-cudnn-cu12==9.1.0.70
[pip3] nvidia-cufft-cu12==11.2.1.3
[pip3] nvidia-curand-cu12==10.3.5.147
[pip3] nvidia-cusolver-cu12==11.6.1.9
[pip3] nvidia-cusparse-cu12==12.3.1.170
[pip3] nvidia-nccl-cu12==2.21.5
[pip3] nvidia-nvjitlink-cu12==12.4.127
[pip3] nvidia-nvtx-cu12==12.4.127
[pip3] onnx==1.17.0
[pip3] onnxscript==0.1.0.dev20240817
[pip3] optree==0.13.0
[pip3] torch==2.6.0a0+gite15442a
[pip3] triton==3.1.0
[conda] numpy 1.26.2 pypi_0 pypi
[conda] nvidia-cublas-cu12 12.4.5.8 pypi_0 pypi
[conda] nvidia-cuda-cupti-cu12 12.4.127 pypi_0 pypi
[conda] nvidia-cuda-nvrtc-cu12 12.4.127 pypi_0 pypi
[conda] nvidia-cuda-runtime-cu12 12.4.127 pypi_0 pypi
[conda] nvidia-cudnn-cu12 9.1.0.70 pypi_0 pypi
[conda] nvidia-cufft-cu12 11.2.1.3 pypi_0 pypi
[conda] nvidia-curand-cu12 10.3.5.147 pypi_0 pypi
[conda] nvidia-cusolver-cu12 11.6.1.9 pypi_0 pypi
[conda] nvidia-cusparse-cu12 12.3.1.170 pypi_0 pypi
[conda] nvidia-nccl-cu12 2.21.5 pypi_0 pypi
[conda] nvidia-nvjitlink-cu12 12.4.127 pypi_0 pypi
[conda] nvidia-nvtx-cu12 12.4.127 pypi_0 pypi
[conda] optree 0.13.0 pypi_0 pypi
[conda] torch 2.6.0a0+gite15442a pypi_0 pypi
[conda] triton 3.1.0 pypi_0 pypi
cc @mruberry @chauhang @penguinwu @voznesenskym @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @ColinPeppler @amjames @desertfire @aakhundov
| true
|
2,754,565,536
|
[ROCm] [STILL NOT FIXED] (Stable diffusion LoRA training, sd-scripts) ModuleNotFoundError: No module named 'triton.ops'
|
devsantiagoweb
|
closed
|
[
"module: rocm",
"triaged"
] | 5
|
NONE
|
### 🐛 Describe the bug
ROCm 6.2.4 + Linux Ubuntu 22.04.5 LTS, using the latest PyTorch Preview (Nightly) version.
AMD® Radeon graphics / AMD® Radeon RX 6700 XT
```
Traceback (most recent call last):
File "/home/santi-linux/trainer_kohya_ss/sd-scripts/library/train_util.py", line 4161, in get_optimizer
import bitsandbytes as bnb
File "/home/santi-linux/.local/lib/python3.10/site-packages/bitsandbytes/__init__.py", line 69, in <module>
from .nn import modules
File "/home/santi-linux/.local/lib/python3.10/site-packages/bitsandbytes/nn/__init__.py", line 21, in <module>
from .triton_based_modules import (
File "/home/santi-linux/.local/lib/python3.10/site-packages/bitsandbytes/nn/triton_based_modules.py", line 7, in <module>
from bitsandbytes.triton.int8_matmul_mixed_dequantize import (
File "/home/santi-linux/.local/lib/python3.10/site-packages/bitsandbytes/triton/int8_matmul_mixed_dequantize.py", line 12, in <module>
from triton.ops.matmul_perf_model import early_config_prune, estimate_matmul_time
ModuleNotFoundError: No module named 'triton.ops'
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/home/santi-linux/trainer_kohya_ss/sd-scripts/sdxl_train_network.py", line 185, in <module>
trainer.train(args)
File "/home/santi-linux/trainer_kohya_ss/sd-scripts/train_network.py", line 367, in train
optimizer_name, optimizer_args, optimizer = train_util.get_optimizer(args, trainable_params)
File "/home/santi-linux/trainer_kohya_ss/sd-scripts/library/train_util.py", line 4163, in get_optimizer
raise ImportError("No bitsandbytes / bitsandbytesがインストールされていないようです")
ImportError: No bitsandbytes / bitsandbytesがインストールされていないようです
Traceback (most recent call last):
File "/home/santi-linux/trainer_kohya_ss/sd-scripts/library/train_util.py", line 4161, in get_optimizer
import bitsandbytes as bnb
File "/home/santi-linux/.local/lib/python3.10/site-packages/bitsandbytes/__init__.py", line 69, in <module>
from .nn import modules
File "/home/santi-linux/.local/lib/python3.10/site-packages/bitsandbytes/nn/__init__.py", line 21, in <module>
from .triton_based_modules import (
File "/home/santi-linux/.local/lib/python3.10/site-packages/bitsandbytes/nn/triton_based_modules.py", line 7, in <module>
from bitsandbytes.triton.int8_matmul_mixed_dequantize import (
File "/home/santi-linux/.local/lib/python3.10/site-packages/bitsandbytes/triton/int8_matmul_mixed_dequantize.py", line 12, in <module>
from triton.ops.matmul_perf_model import early_config_prune, estimate_matmul_time
ModuleNotFoundError: No module named 'triton.ops'
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/home/santi-linux/trainer_kohya_ss/sd-scripts/sdxl_train_network.py", line 185, in <module>
trainer.train(args)
File "/home/santi-linux/trainer_kohya_ss/sd-scripts/train_network.py", line 367, in train
optimizer_name, optimizer_args, optimizer = train_util.get_optimizer(args, trainable_params)
File "/home/santi-linux/trainer_kohya_ss/sd-scripts/library/train_util.py", line 4163, in get_optimizer
raise ImportError("No bitsandbytes / bitsandbytesがインストールされていないようです")
ImportError: No bitsandbytes / bitsandbytesがインストールされていないようです
Traceback (most recent call last):
File "/home/santi-linux/.local/bin/accelerate", line 8, in <module>
sys.exit(main())
File "/home/santi-linux/.local/lib/python3.10/site-packages/accelerate/commands/accelerate_cli.py", line 46, in main
args.func(args)
File "/home/santi-linux/.local/lib/python3.10/site-packages/accelerate/commands/launch.py", line 1082, in launch_command
simple_launcher(args)
File "/home/santi-linux/.local/lib/python3.10/site-packages/accelerate/commands/launch.py", line 688, in simple_launcher
raise subprocess.CalledProcessError(returncode=process.returncode, cmd=cmd)
subprocess.CalledProcessError: Command '['/usr/bin/python3', 'sdxl_train_network.py', '--config_file=/home/santi-linux/trainer_kohya_ss/train_network_SDXL_AdamW.toml']' returned non-zero exit status 1.
Press Enter to continue...
```
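As a quick sanity check (my addition, not from the reporter), one can confirm whether the installed Triton still ships the `triton.ops` module that bitsandbytes tries to import; newer Triton releases may no longer include it:
```python
# Diagnostic sketch: print the Triton version and whether `triton.ops` is importable.
import importlib.util
import triton

print(triton.__version__)
print(importlib.util.find_spec("triton.ops"))  # None means the module is not shipped
```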
### Versions
OS: Ubuntu 22.04.5 LTS (x86_64)
GCC version: (Ubuntu 11.4.0-1ubuntu1~22.04) 11.4.0
Clang version: Could not collect
CMake version: version 3.22.1
Libc version: glibc-2.35
Python version: 3.10.12 (main, Nov 6 2024, 20:22:13) [GCC 11.4.0] (64-bit runtime)
Python platform: Linux-6.8.0-49-generic-x86_64-with-glibc2.35
Is CUDA available: False
CUDA runtime version: No CUDA
CUDA_MODULE_LOADING set to: N/A
GPU models and configuration: No CUDA
Nvidia driver version: No CUDA
cuDNN version: No CUDA
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Arquitectura: x86_64
modo(s) de operación de las CPUs: 32-bit, 64-bit
Address sizes: 48 bits physical, 48 bits virtual
Orden de los bytes: Little Endian
CPU(s): 16
Lista de la(s) CPU(s) en línea: 0-15
ID de fabricante: AuthenticAMD
Nombre del modelo: AMD Ryzen 7 5700G with Radeon Graphics
Familia de CPU: 25
Modelo: 80
Hilo(s) de procesamiento por núcleo: 2
Núcleo(s) por «socket»: 8
«Socket(s)» 1
Revisión: 0
CPU MHz máx.: 4673,0000
CPU MHz mín.: 400,0000
BogoMIPS: 7600.24
Indicadores: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ht syscall nx mmxext fxsr_opt pdpe1gb rdtscp lm constant_tsc rep_good nopl nonstop_tsc cpuid extd_apicid aperfmperf rapl pni pclmulqdq monitor ssse3 fma cx16 sse4_1 sse4_2 x2apic movbe popcnt aes xsave avx f16c rdrand lahf_lm cmp_legacy svm extapic cr8_legacy abm sse4a misalignsse 3dnowprefetch osvw ibs skinit wdt tce topoext perfctr_core perfctr_nb bpext perfctr_llc mwaitx cpb cat_l3 cdp_l3 hw_pstate ssbd mba ibrs ibpb stibp vmmcall fsgsbase bmi1 avx2 smep bmi2 erms invpcid cqm rdt_a rdseed adx smap clflushopt clwb sha_ni xsaveopt xsavec xgetbv1 xsaves cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local user_shstk clzero irperf xsaveerptr rdpru wbnoinvd cppc arat npt lbrv svm_lock nrip_save tsc_scale vmcb_clean flushbyasid decodeassists pausefilter pfthreshold avic v_vmsave_vmload vgif v_spec_ctrl umip pku ospke vaes vpclmulqdq rdpid overflow_recov succor smca fsrm debug_swap
Virtualización: AMD-V
Caché L1d: 256 KiB (8 instances)
Caché L1i: 256 KiB (8 instances)
Caché L2: 4 MiB (8 instances)
Caché L3: 16 MiB (1 instance)
Modo(s) NUMA: 1
CPU(s) del nodo NUMA 0: 0-15
Vulnerability Gather data sampling: Not affected
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Not affected
Vulnerability Reg file data sampling: Not affected
Vulnerability Retbleed: Not affected
Vulnerability Spec rstack overflow: Vulnerable: Safe RET, no microcode
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Retpolines; IBPB conditional; IBRS_FW; STIBP always-on; RSB filling; PBRSB-eIBRS Not affected; BHI Not affected
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Versions of relevant libraries:
[pip3] numpy==2.2.0
[pip3] nvidia-cublas-cu12==12.4.5.8
[pip3] nvidia-cuda-cupti-cu12==12.4.127
[pip3] nvidia-cuda-nvrtc-cu12==12.4.127
[pip3] nvidia-cuda-runtime-cu12==12.4.127
[pip3] nvidia-cudnn-cu12==9.1.0.70
[pip3] nvidia-cufft-cu12==11.2.1.3
[pip3] nvidia-curand-cu12==10.3.5.147
[pip3] nvidia-cusolver-cu12==11.6.1.9
[pip3] nvidia-cusparse-cu12==12.3.1.170
[pip3] nvidia-nccl-cu12==2.21.5
[pip3] nvidia-nvjitlink-cu12==12.4.127
[pip3] nvidia-nvtx-cu12==12.4.127
[pip3] torch==2.5.1
[pip3] triton==3.1.0
cc @jeffdaily @sunway513 @jithunnair-amd @pruthvistony @ROCmSupport @dllehr-amd @jataylo @hongxiayang @naromero77amd
| true
|
2,754,558,089
|
[Fake Tensor] [aot_eager] `.div` pass the check on inductor when divisor is zero
|
shaoyuyoung
|
open
|
[
"triaged",
"oncall: pt2",
"module: fakeTensor",
"module: aotdispatch",
"module: pt2-dispatcher"
] | 4
|
CONTRIBUTOR
|
### 🐛 Describe the bug
An error is raised when the divisor is 0 in eager mode. However, inductor passes the check and outputs the maximum value of Long.
This occurs on both CPU and CUDA.
**aot_eager** is where it starts to return bad results.
```python
import torch
import torch.nn as nn
class Model(nn.Module):
def __init__(self):
super(Model, self).__init__()
def forward(self, x, y):
x.div_(y)
return x
x = torch.tensor([1])
y = torch.tensor([0])
inputs = [x, y]
def run_test(inputs, device, mode):
model = Model()
if mode == "aot_eager":
model = torch.compile(model, backend="aot_eager")
if device == "cuda":
inputs = [x.cuda() for x in inputs]
model = model.cuda()
try:
output = model(*inputs)
print(f"{mode} succeeds: {output}")
except Exception as e:
print(e)
run_test(inputs, "cuda", "eager")
run_test(inputs, "cuda", "aot_eager")
run_test(inputs, "cpu", "eager")
run_test(inputs, "cpu", "aot_eager")
```
### Error logs
```
result type Float can't be cast to the desired output type Long
inductor succeeds: tensor([9223372036854775807], device='cuda:0')
result type Float can't be cast to the desired output type Long
inductor succeeds: tensor([-9223372036854775808])
```
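For comparison, a standalone eager reference (my addition) shows the check that the compiled path skips:
```python
# In eager mode, in-place integer division by an integer zero is rejected because
# the floating-point result of true division cannot be cast back to Long.
import torch

try:
    torch.tensor([1]).div_(torch.tensor([0]))
except RuntimeError as e:
    print(e)  # result type Float can't be cast to the desired output type Long
```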
### Versions
PyTorch version: 2.6.0.dev20241218+cu126
OS: Ubuntu 20.04.6 LTS (x86_64)
CPU: Intel(R) Xeon(R) Gold 6248 CPU @ 2.50GHz
GPU: V100
<details>
<summary>click for detailed env</summary>
```
PyTorch version: 2.6.0.dev20241218+cu126
Is debug build: False
CUDA used to build PyTorch: 12.6
ROCM used to build PyTorch: N/A
OS: Ubuntu 20.04.6 LTS (x86_64)
GCC version: (Ubuntu 9.4.0-1ubuntu1~20.04.2) 9.4.0
Clang version: 16.0.1
CMake version: version 3.26.0
Libc version: glibc-2.31
Python version: 3.12.7 | packaged by Anaconda, Inc. | (main, Oct 4 2024, 13:27:36) [GCC 11.2.0] (64-bit runtime)
Python platform: Linux-5.4.0-202-generic-x86_64-with-glibc2.31
Is CUDA available: True
CUDA runtime version: 12.6.68
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration:
GPU 0: Tesla V100-SXM2-32GB
GPU 1: Tesla V100-SXM2-32GB
GPU 2: Tesla V100-SXM2-32GB
GPU 3: Tesla V100-SXM2-32GB
Nvidia driver version: 560.35.03
cuDNN version: Probably one of the following:
/usr/lib/x86_64-linux-gnu/libcudnn.so.9.6.0
/usr/lib/x86_64-linux-gnu/libcudnn_adv.so.9.6.0
/usr/lib/x86_64-linux-gnu/libcudnn_cnn.so.9.6.0
/usr/lib/x86_64-linux-gnu/libcudnn_engines_precompiled.so.9.6.0
/usr/lib/x86_64-linux-gnu/libcudnn_engines_runtime_compiled.so.9.6.0
/usr/lib/x86_64-linux-gnu/libcudnn_graph.so.9.6.0
/usr/lib/x86_64-linux-gnu/libcudnn_heuristic.so.9.6.0
/usr/lib/x86_64-linux-gnu/libcudnn_ops.so.9.6.0
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Byte Order: Little Endian
Address sizes: 40 bits physical, 48 bits virtual
CPU(s): 20
On-line CPU(s) list: 0-19
Thread(s) per core: 1
Core(s) per socket: 20
Socket(s): 1
NUMA node(s): 1
Vendor ID: GenuineIntel
CPU family: 6
Model: 85
Model name: Intel(R) Xeon(R) Gold 6248 CPU @ 2.50GHz
Stepping: 7
CPU MHz: 2499.996
BogoMIPS: 4999.99
Hypervisor vendor: KVM
Virtualization type: full
L1d cache: 640 KiB
L1i cache: 640 KiB
L2 cache: 80 MiB
L3 cache: 16 MiB
NUMA node0 CPU(s): 0-19
Vulnerability Gather data sampling: Unknown: Dependent on hypervisor status
Vulnerability Itlb multihit: KVM: Vulnerable
Vulnerability L1tf: Mitigation; PTE Inversion
Vulnerability Mds: Vulnerable: Clear CPU buffers attempted, no microcode; SMT Host state unknown
Vulnerability Meltdown: Mitigation; PTI
Vulnerability Mmio stale data: Vulnerable: Clear CPU buffers attempted, no microcode; SMT Host state unknown
Vulnerability Retbleed: Mitigation; IBRS
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl and seccomp
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; IBRS; IBPB conditional; STIBP disabled; RSB filling; PBRSB-eIBRS Not affected; BHI SW loop, KVM SW loop
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Vulnerable: Clear CPU buffers attempted, no microcode; SMT Host state unknown
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ss ht syscall nx pdpe1gb rdtscp lm constant_tsc arch_perfmon rep_good nopl xtopology cpuid tsc_known_freq pni pclmulqdq ssse3 fma cx16 pcid sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand hypervisor lahf_lm abm 3dnowprefetch topoext cpuid_fault invpcid_single pti ssbd ibrs ibpb fsgsbase tsc_adjust bmi1 hle avx2 smep bmi2 erms invpcid rtm mpx avx512f avx512dq rdseed adx smap clflushopt clwb avx512cd avx512bw avx512vl xsaveopt xsavec xgetbv1 xsaves arat umip pku ospke avx512_vnni
Versions of relevant libraries:
[pip3] numpy==1.26.4
[pip3] nvidia-cublas-cu12==12.6.4.1
[pip3] nvidia-cuda-cupti-cu12==12.6.80
[pip3] nvidia-cuda-nvrtc-cu12==12.6.77
[pip3] nvidia-cuda-runtime-cu12==12.6.77
[pip3] nvidia-cudnn-cu12==9.5.1.17
[pip3] nvidia-cufft-cu12==11.3.0.4
[pip3] nvidia-curand-cu12==10.3.7.77
[pip3] nvidia-cusolver-cu12==11.7.1.2
[pip3] nvidia-cusparse-cu12==12.5.4.2
[pip3] nvidia-cusparselt-cu12==0.6.3
[pip3] nvidia-nccl-cu12==2.21.5
[pip3] nvidia-nvjitlink-cu12==12.6.85
[pip3] nvidia-nvtx-cu12==12.6.77
[pip3] onnx==1.17.0
[pip3] onnxruntime==1.20.1
[pip3] onnxscript==0.1.0.dev20241205
[pip3] optree==0.13.1
[pip3] pytorch-triton==3.2.0+gitf9cdf582
[pip3] torch==2.6.0.dev20241218+cu126
[pip3] torchaudio==2.6.0.dev20241218+cu126
[pip3] torchvision==0.22.0.dev20241218+cu126
[pip3] triton==3.0.0
[conda] numpy 1.26.4 pypi_0 pypi
[conda] nvidia-cublas-cu12 12.6.4.1 pypi_0 pypi
[conda] nvidia-cuda-cupti-cu12 12.6.80 pypi_0 pypi
[conda] nvidia-cuda-nvrtc-cu12 12.6.77 pypi_0 pypi
[conda] nvidia-cuda-runtime-cu12 12.6.77 pypi_0 pypi
[conda] nvidia-cudnn-cu12 9.5.1.17 pypi_0 pypi
[conda] nvidia-cufft-cu12 11.3.0.4 pypi_0 pypi
[conda] nvidia-curand-cu12 10.3.7.77 pypi_0 pypi
[conda] nvidia-cusolver-cu12 11.7.1.2 pypi_0 pypi
[conda] nvidia-cusparse-cu12 12.5.4.2 pypi_0 pypi
[conda] nvidia-cusparselt-cu12 0.6.3 pypi_0 pypi
[conda] nvidia-nccl-cu12 2.21.5 pypi_0 pypi
[conda] nvidia-nvjitlink-cu12 12.6.85 pypi_0 pypi
[conda] nvidia-nvtx-cu12 12.6.77 pypi_0 pypi
[conda] optree 0.13.1 pypi_0 pypi
[conda] pytorch-triton 3.2.0+gitf9cdf582 pypi_0 pypi
[conda] torch 2.6.0.dev20241218+cu126 pypi_0 pypi
[conda] torchaudio 2.6.0.dev20241218+cu126 pypi_0 pypi
[conda] torchvision 0.22.0.dev20241218+cu126 pypi_0 pypi
[conda] triton 3.0.0 pypi_0 pypi
```
</details>
cc @chauhang @penguinwu @eellison @zou3519 @bdhirsh @yf225 @voznesenskym @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @chenyang78 @kadeng @muchulee8 @ColinPeppler @amjames @desertfire @aakhundov
| true
|
2,754,527,910
|
[2/N] Add Intel GPU Support to Torch Test Cases
|
daisyden
|
closed
|
[
"oncall: distributed",
"open source",
"release notes: distributed (fsdp)",
"module: inductor",
"module: dynamo"
] | 4
|
NONE
|
This PR is merged with https://github.com/pytorch/pytorch/pull/141479 for testing purposes.
For RFC https://github.com/pytorch/pytorch/issues/142029, this PR makes the op_db general for all GPU devices defined in the GPU_TYPES list.
cc @H-Huang @awgu @kwen2501 @wanchaol @fegin @fduwjj @wz337 @wconstab @d4l3k @c-p-i-o @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @ColinPeppler @amjames @desertfire @chauhang @aakhundov
| true
|
2,754,432,320
|
Unused var python
|
cyyever
|
closed
|
[
"open source",
"Stale",
"topic: not user facing"
] | 3
|
COLLABORATOR
|
Fixes #ISSUE_NUMBER
| true
|
2,754,431,854
|
[16/N] Fix extra warnings brought by clang-tidy-17
|
cyyever
|
closed
|
[
"triaged",
"open source",
"Merged",
"ciflow/trunk",
"release notes: cpp",
"module: inductor",
"module: dynamo",
"ciflow/inductor"
] | 6
|
COLLABORATOR
|
Fixes #ISSUE_NUMBER
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @ColinPeppler @amjames @desertfire @chauhang @aakhundov
| true
|
2,754,430,060
|
Upgrading torch 2.5.0+xpu to torch 2.6.0+xpu breaks import torch on Ubuntu 24.04.1 / Python 3.12
|
bmilde
|
open
|
[
"needs reproduction",
"module: binaries",
"triaged",
"module: regression",
"module: xpu"
] | 4
|
NONE
|
### 🐛 Describe the bug
Installing the new 2.6.0 xpu torch version from https://download.pytorch.org/whl/test/xpu on Ubuntu 24.04.1 / Python 3.12 breaks
`import torch`
for me with an undefined symbol error. This error does not happen with version 2.5.0+xpu, where I can successfully import torch on the same system and use the xpu backend on my Intel N100 iGPU.
```
:: initializing oneAPI environment ...
-bash: BASH_VERSION = 5.2.21(1)-release
args: Using "$@" for oneapi-vars.sh arguments:
:: compiler -- processing etc/compiler/vars.sh
:: debugger -- processing etc/debugger/vars.sh
:: dpl -- processing etc/dpl/vars.sh
:: mkl -- processing etc/mkl/vars.sh
:: tbb -- processing etc/tbb/vars.sh
:: oneAPI environment initialized ::
Python 3.12.3 (main, Nov 6 2024, 18:32:19) [GCC 13.2.0] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> import torch
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/home/me/.local/lib/python3.12/site-packages/torch/__init__.py", line 379, in <module>
from torch._C import * # noqa: F403
^^^^^^^^^^^^^^^^^^^^^^
ImportError: /home/me/.local/lib/python3.12/site-packages/torch/lib/../../../../libsycl.so.8: undefined symbol: urBindlessImagesImportExternalMemoryExp, version LIBUR_LOADER_0.10
>>>
```
### Versions
Collecting environment information...
PyTorch version: N/A
Is debug build: N/A
CUDA used to build PyTorch: N/A
ROCM used to build PyTorch: N/A
OS: Ubuntu 24.04.1 LTS (x86_64)
GCC version: (Ubuntu 13.3.0-6ubuntu2~24.04) 13.3.0
Clang version: 18.1.3 (1ubuntu1)
CMake version: version 3.31.2
Libc version: glibc-2.39
Python version: 3.12.3 (main, Nov 6 2024, 18:32:19) [GCC 13.2.0] (64-bit runtime)
Python platform: Linux-6.12.6-zabbly+-x86_64-with-glibc2.39
Is CUDA available: N/A
CUDA runtime version: Could not collect
CUDA_MODULE_LOADING set to: N/A
GPU models and configuration: Could not collect
Nvidia driver version: Could not collect
cuDNN version: Could not collect
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: N/A
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 39 bits physical, 48 bits virtual
Byte Order: Little Endian
CPU(s): 4
On-line CPU(s) list: 0-3
Vendor ID: GenuineIntel
Model name: Intel(R) N100
CPU family: 6
Model: 190
Thread(s) per core: 1
Core(s) per socket: 4
Socket(s): 1
Stepping: 0
CPU(s) scaling MHz: 95%
CPU max MHz: 3400.0000
CPU min MHz: 700.0000
BogoMIPS: 1612.80
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc art arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc cpuid aperfmperf tsc_known_freq pni pclmulqdq dtes64 monitor ds_cpl vmx est tm2 ssse3 sdbg fma cx16 xtpr pdcm sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm 3dnowprefetch cpuid_fault epb cat_l2 cdp_l2 ssbd ibrs ibpb stibp ibrs_enhanced tpr_shadow flexpriority ept vpid ept_ad fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid rdt_a rdseed adx smap clflushopt clwb intel_pt sha_ni xsaveopt xsavec xgetbv1 xsaves split_lock_detect avx_vnni dtherm ida arat pln pts hwp hwp_notify hwp_act_window hwp_epp hwp_pkg_req vnmi umip pku ospke waitpkg gfni vaes vpclmulqdq rdpid movdiri movdir64b fsrm md_clear serialize arch_lbr ibt flush_l1d arch_capabilities
Virtualization: VT-x
L1d cache: 128 KiB (4 instances)
L1i cache: 256 KiB (4 instances)
L2 cache: 2 MiB (1 instance)
L3 cache: 6 MiB (1 instance)
NUMA node(s): 1
NUMA node0 CPU(s): 0-3
Vulnerability Gather data sampling: Not affected
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Not affected
Vulnerability Reg file data sampling: Mitigation; Clear Register File
Vulnerability Retbleed: Not affected
Vulnerability Spec rstack overflow: Not affected
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Enhanced / Automatic IBRS; IBPB conditional; RSB filling; PBRSB-eIBRS Not affected; BHI BHI_DIS_S
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Versions of relevant libraries:
[pip3] intel_extension_for_pytorch==2.5.0
[pip3] numpy==2.0.2
[pip3] onnx==1.17.0
[pip3] onnxruntime==1.20.1
[pip3] onnxruntime-tools==1.7.0
[pip3] optree==0.13.1
[pip3] pytorch-lamb==1.0.0
[pip3] pytorch-triton==3.2.0+git35c6c7c6
[pip3] pytorch-triton-xpu==3.2.0
[pip3] pytorch-wpe==0.0.1
[pip3] torch==2.6.0+xpu
[pip3] torch-complex==0.4.4
[pip3] torchaudio==2.6.0+xpu
[pip3] torchvision==0.21.0+xpu
[pip3] triton==3.2.0
[conda] Could not collect
cc @seemethere @malfet @osalpekar @atalman @gujinghui @EikanWang @fengyuan14 @guangyey
| true
|
2,754,349,896
|
Make functionalization `ViewMeta` serializable with pickle.
|
ysiraichi
|
open
|
[
"open source",
"Merged",
"Reverted",
"ciflow/trunk",
"topic: not user facing",
"module: dynamo",
"ciflow/inductor",
"suppress-bc-linter",
"ci-no-td"
] | 18
|
COLLABORATOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #143712
Fix: #141974
This PR makes the `ViewMeta` sequence, present in functional tensors,
serializable with pickle. In order to accomplish that, it makes
`ViewMeta` an abstract class with overridable `forward` and `reverse`
functions. In this context, each operation that once instantiated
`ViewMeta` should now create a new specialized class that inherits from
`ViewMeta`. Therefore, this PR also uses codegen for creating these
specializations.
In summary, these are the changes this PR introduces:
- `ViewMeta` is turned into an abstract class (see
_FunctionalStorageImpl.cpp_). `forward` and `reverse` are pure virtual
functions that need to be implemented. `to_out_index` should be
implemented by operations that might return more than 1 output.
- New `ViewMeta` specializations for `resize_` and `_unsafe_view` are
created (see _FunctionalizeFallbackKernel.h_).
- New templates _ViewMetaClasses.{cpp,h}_ are created. They hold the
declaration and definition of the `ViewMeta` specializations, which
are automatically generated in the ATen codegen (see _gen.py_).
- New `_functionalization` Python sub-module is created (see
_Module.cpp_). It serves as namespace for the `ViewMeta`
specializations and `InverseReturnMode` enum.
- New template _ViewMetaClassesPythonBinding.cpp_ is created. It holds
the automatically generated Python bindings for the `ViewMeta`
specialization, which are generated in the torch codegen (see
_generate_code.py_).
Note that this PR makes use of codegen at 2 different moments:
- ATen codegen (_gen.py_): generates the `ViewMeta` specialized classes.
- Torch codegen (_generate_code.py_): generates the Python bindings for
them.
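To make the intent concrete, here is a small Python-level sketch of the pattern (my own illustration; the real implementation is C++ plus codegen, and all names below are hypothetical):
```python
# Pickle works once each "ViewMeta" variant is a named, module-level class that can be
# reconstructed from its constructor arguments, instead of an ad-hoc instance holding
# closures.
import pickle
from abc import ABC, abstractmethod

class ViewMetaBase(ABC):             # hypothetical stand-in for the C++ ViewMeta
    @abstractmethod
    def forward(self, base): ...
    @abstractmethod
    def reverse(self, base, view): ...

class SelectViewMeta(ViewMetaBase):  # hypothetical specialization for one view op
    def __init__(self, dim, index):
        self.dim, self.index = dim, index
    def forward(self, base):
        return base.select(self.dim, self.index)
    def reverse(self, base, view):
        return base                  # placeholder inverse, just for the sketch

meta = pickle.loads(pickle.dumps(SelectViewMeta(0, 1)))
print(meta.dim, meta.index)          # 0 1
```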
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @chauhang @amjames
| true
|
2,754,331,431
|
DistNetworkError when using multiprocessing_context parameter in pytorch dataloader
|
forestbat
|
closed
|
[
"oncall: distributed",
"module: dataloader"
] | 3
|
NONE
|
### 🐛 Describe the bug
For particular reasons I want to use the `spawn` method to create workers in PyTorch's `DataLoader`; here is a demo:
```python
import torch
import torch.nn as nn
import torch.optim as optim
from torch.utils.data import DataLoader
from torch.utils.data import TensorDataset
import lightning
fabric = lightning.Fabric(devices=[0, 2], num_nodes=1, strategy='ddp')
fabric.launch()
class LinearModel(nn.Module):
def __init__(self):
super().__init__()
self.linear = nn.Linear(10, 2)
def forward(self, x):
return self.linear(x)
if __name__ == '__main__':
x = torch.randn(100, 10)
y = torch.rand(100, 2)
dataset = TensorDataset(x, y)
# crashed because of multiprocessing_context='spawn', 'forkserver' has same problem
train_loader = fabric.setup_dataloaders(DataLoader(dataset, batch_size=10, shuffle=True,
num_workers=1, multiprocessing_context='spawn'))
model = LinearModel()
crit = nn.MSELoss()
model, optimizer = fabric.setup(model, optim.Adam(model.parameters(), lr=0.01))
for epoch in range(0, 10):
print(f'Epoch {epoch}')
for xs, ys in train_loader:
output = model(xs)
loss = crit(output, ys)
fabric.backward(loss)
optimizer.step()
```
But it crashed with this error:
```
# https://pastebin.com/BqA9mjiE
Epoch 0
Epoch 0
……
torch.distributed.DistNetworkError: The server socket has failed to listen on any local network address.
The server socket has failed to bind to [::]:55733 (errno: 98 - Address already in use).
The server socket has failed to bind to 0.0.0.0:55733 (errno: 98 - Address already in use).
```
Port 55733 is already being listened on by the training processes, so the run crashes. But I want to know: why is the port bound again when `multiprocessing_context` is `spawn`?
Although this problem occurs while I'm using `lightning_fabric`, I think it comes from PyTorch itself, not `lightning`.
Hope for your reply.
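A hedged guess at the mechanism (my addition, not a confirmed diagnosis): with the `spawn`/`forkserver` start methods each DataLoader worker re-imports the main module, and in the demo above `Fabric` is created and launched at module level, so every worker re-runs the DDP setup and tries to bind the same rendezvous port. A sketch of the guard that avoids the re-execution:
```python
# Keep process-group / Fabric setup out of module scope so spawned workers that
# re-import this file do not launch DDP again (assumption about the cause, see above).
import lightning

def main():
    fabric = lightning.Fabric(devices=[0, 2], num_nodes=1, strategy='ddp')
    fabric.launch()
    ...  # build the dataset, DataLoader (with spawn workers) and model, then train as in the demo

if __name__ == '__main__':
    main()
```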
### Versions
<details>
PyTorch version: 2.2.2
Is debug build: False
CUDA used to build PyTorch: 12.1
ROCM used to build PyTorch: N/A
OS: Ubuntu 20.04.5 LTS (x86_64)
GCC version: (Ubuntu 9.4.0-1ubuntu1~20.04.2) 9.4.0
Clang version: Could not collect
CMake version: version 3.16.3
Libc version: glibc-2.31
Python version: 3.11.9 | packaged by conda-forge | (main, Apr 19 2024, 18:36:13) [GCC 12.3.0] (64-bit runtime)
Python platform: Linux-5.4.0-195-generic-x86_64-with-glibc2.31
Is CUDA available: True
CUDA runtime version: 12.6.77
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration:
GPU 0: NVIDIA A40
GPU 1: NVIDIA A40
GPU 2: NVIDIA RTX 5000 Ada Generation
Nvidia driver version: 535.146.02
cuDNN version: Could not collect
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Byte Order: Little Endian
Address sizes: 46 bits physical, 48 bits virtual
CPU(s): 60
On-line CPU(s) list: 0-59
Thread(s) per core: 1
Core(s) per socket: 6
Socket(s): 10
NUMA node(s): 1
Vendor ID: GenuineIntel
CPU family: 6
Model: 85
Model name: Intel(R) Xeon(R) Gold 5218R CPU @ 2.10GHz
Stepping: 7
CPU MHz: 2095.046
BogoMIPS: 4190.09
Virtualization: VT-x
Hypervisor vendor: KVM
Virtualization type: full
L1d cache: 1.9 MiB
L1i cache: 1.9 MiB
L2 cache: 240 MiB
L3 cache: 160 MiB
NUMA node0 CPU(s): 0-59
Vulnerability Gather data sampling: Unknown: Dependent on hypervisor status
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Vulnerable: Clear CPU buffers attempted, no microcode; SMT Host state unknown
Vulnerability Retbleed: Mitigation; Enhanced IBRS
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl and seccomp
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Enhanced / Automatic IBRS; IBPB conditional; RSB filling; PBRSB-eIBRS SW sequence; BHI SW loop, KVM SW loop
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Mitigation; TSX disabled
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ss ht syscall nx pdpe1gb rdtscp lm constant_tsc arch_perfmon rep_good nopl xtopology cpuid tsc_known_freq pni pclmulqdq vmx ssse3 fma cx16 pdcm pcid sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand hypervisor lahf_lm abm 3dnowprefetch cpuid_fault invpcid_single ssbd ibrs ibpb stibp ibrs_enhanced tpr_shadow vnmi flexpriority ept vpid ept_ad fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid mpx avx512f avx512dq rdseed adx smap clflushopt clwb avx512cd avx512bw avx512vl xsaveopt xsavec xgetbv1 xsaves arat umip pku ospke avx512_vnni md_clear arch_capabilities
Versions of relevant libraries:
[pip3] flake8==7.1.1
[pip3] mypy-extensions==1.0.0
[pip3] numpy==1.26.4
[pip3] pytorch-lightning==2.4.0
[pip3] torch==2.2.2
[pip3] torchaudio==2.2.2
[pip3] torchdata==0.7.1
[pip3] torchmetrics==1.6.0
[pip3] torchvision==0.17.2
[pip3] triton==2.2.0
[conda] blas 1.0 mkl conda-forge
[conda] cuda-cudart 12.1.105 0 nvidia
[conda] cuda-cupti 12.1.105 0 nvidia
[conda] cuda-libraries 12.1.0 0 nvidia
[conda] cuda-nvrtc 12.1.105 0 nvidia
[conda] cuda-nvtx 12.1.105 0 nvidia
[conda] cuda-opencl 12.4.127 0 nvidia
[conda] cuda-runtime 12.1.0 0 nvidia
[conda] cudatoolkit 11.8.0 h4ba93d1_13 conda-forge
[conda] libblas 3.9.0 16_linux64_mkl conda-forge
[conda] libcblas 3.9.0 16_linux64_mkl conda-forge
[conda] libcublas 12.1.0.26 0 nvidia
[conda] libcufft 11.0.2.4 0 nvidia
[conda] libcurand 10.3.5.147 0 nvidia
[conda] libcusolver 11.4.4.55 0 nvidia
[conda] libcusparse 12.0.2.55 0 nvidia
[conda] liblapack 3.9.0 16_linux64_mkl conda-forge
[conda] libnvjitlink 12.1.105 0 nvidia
[conda] libopenvino-pytorch-frontend 2024.3.0 he02047a_0 conda-forge
[conda] mkl 2022.1.0 hc2b9512_224 defaults
[conda] numpy 1.26.4 py311h64a7726_0 conda-forge
[conda] pytorch 2.2.2 py3.11_cuda12.1_cudnn8.9.2_0 pytorch
[conda] pytorch-cuda 12.1 ha16c6d3_5 pytorch
[conda] pytorch-lightning 2.4.0 pypi_0 pypi
[conda] pytorch-mutex 1.0 cuda pytorch
[conda] torchaudio 2.2.2 py311_cu121 pytorch
[conda] torchdata 0.7.1 pypi_0 pypi
[conda] torchmetrics 1.6.0 pypi_0 pypi
[conda] torchtriton 2.2.0 py311 pytorch
[conda] torchvision 0.17.2 py311_cu121 pytorch
</details>
cc @H-Huang @awgu @kwen2501 @wanchaol @fegin @fduwjj @wz337 @wconstab @d4l3k @c-p-i-o @andrewkho @divyanshk @SsnL @VitalyFedyunin @dzhulgakov
| true
|
2,754,327,143
|
Refactor AdamW into Adam (heavily inspired by tfsingh)
|
EmmettBicker
|
closed
|
[
"oncall: distributed",
"triaged",
"open source",
"Merged",
"ciflow/trunk",
"ciflow/inductor",
"suppress-bc-linter",
"release notes: optim"
] | 11
|
CONTRIBUTOR
|
Fixes #104899
Refactors AdamW into Adam by making AdamW a subclass of Adam. It also adds a test asserting that the added parameter `decoupled_weight_decay` is True in AdamW, and updates test_defaults_changed_to_foreach to account for the difference in module location for AdamW.
Heavily heavily inspired by #118857 by @tfsingh
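For readers unfamiliar with the pattern, here is a toy sketch of the subclassing described above (my illustration, not the PR's code; the class name is hypothetical, and it assumes `torch.optim.Adam` accepts a `decoupled_weight_decay` flag, as the description implies):
```python
import torch

class MyAdamW(torch.optim.Adam):  # hypothetical name; the PR does this for torch.optim.AdamW
    def __init__(self, params, lr=1e-3, weight_decay=1e-2, **kwargs):
        # Assumption: Adam exposes `decoupled_weight_decay`; AdamW simply forces it to True.
        super().__init__(params, lr=lr, weight_decay=weight_decay,
                         decoupled_weight_decay=True, **kwargs)

opt = MyAdamW(torch.nn.Linear(2, 2).parameters())
print(opt.defaults["decoupled_weight_decay"])  # True
```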
cc @H-Huang @awgu @kwen2501 @wanchaol @fegin @fduwjj @wz337 @wconstab @d4l3k @c-p-i-o
| true
|
2,754,311,211
|
Rename cache limit to recompile limit in configs
|
oulgen
|
closed
|
[
"Merged",
"ciflow/trunk",
"topic: not user facing",
"module: inductor",
"module: dynamo",
"ciflow/inductor"
] | 7
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #143709
This PR renames every cache_limit to recompile_limit via sed.
Old config options are maintained via Config(alias='xyz')
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @ColinPeppler @amjames @desertfire @chauhang @aakhundov
Differential Revision: [D67580275](https://our.internmc.facebook.com/intern/diff/D67580275)
| true
|
2,754,208,625
|
Add a test for checking that the CUDA stubs directory is not in libcaffe2_nvrtc.so's RPATH or RUNPATH
|
Flamefire
|
closed
|
[
"triaged",
"open source",
"Stale",
"topic: not user facing"
] | 3
|
COLLABORATOR
|
The CUDA stub directory must not appear in the RPATH or RUNPATH of any library, as that would make the library unusable at runtime. This should no longer happen (it did before; see the previous PR), but we had better check that it stays that way. See the referenced issue https://github.com/pytorch/pytorch/issues/35418
The test verifies this.
Closes https://github.com/pytorch/pytorch/issues/35418
See also #134669
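A minimal sketch of the kind of check being added (my own illustration, not the PR's test code; the library path below is hypothetical):
```python
# Inspect the dynamic section of a built library and assert that no RPATH/RUNPATH
# entry points into the CUDA stubs directory.
import subprocess

def dynamic_section(lib_path: str) -> str:
    return subprocess.run(["readelf", "-d", lib_path],
                          capture_output=True, text=True, check=True).stdout

out = dynamic_section("build/lib/libcaffe2_nvrtc.so")  # hypothetical path
assert "stubs" not in out, "CUDA stubs directory leaked into RPATH/RUNPATH"
```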
| true
|
2,754,075,812
|
[inductor] handle empty matrix on addmv on torch.compile
|
maybeLee
|
closed
|
[
"open source",
"module: inductor"
] | 3
|
CONTRIBUTOR
|
Fixes an issue where torch.addmv behaves inconsistently between torch.compile mode and eager mode. Here is the code to reproduce:
```python
import torch
import numpy as np
@torch.compile
def test_optimized(input, mat, vec):
return torch.addmv(input, mat, vec)
def test(input, mat, vec):
return torch.addmv(input, mat, vec)
input = torch.tensor([2], dtype=torch.int32)
mat = torch.tensor(np.random.randn(0, 0), dtype=torch.int32)
vec = torch.tensor([])
origin_out = test(input, mat, vec)
optimized_out = test_optimized(input, mat, vec)
print(origin_out) # tensor([2.])
print(optimized_out) # tensor([])
```
According to the equation (https://pytorch.org/docs/stable/generated/torch.addmv.html), when the matrix and vector are empty, returning `[2.]` seems more reasonable to me.
Following the CPU implementation of this API: https://github.com/pytorch/pytorch/blob/e97b97af56204230f1030bd297dda9bc6b053a4c/aten/src/ATen/native/Blas.cpp#L62
I add an additional branch to handle the empty-matrix case.
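A worked illustration of the argument above (my addition): per the documented formula out = beta * input + alpha * (mat @ vec), the empty matrix-vector product contributes nothing, so the eager special case effectively returns beta * input:
```python
import torch

input = torch.tensor([2], dtype=torch.int32)
beta, alpha = 1, 1  # addmv defaults
print(beta * input.float())  # tensor([2.]) -- matches the eager output above
```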
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @ColinPeppler @amjames @desertfire @chauhang @aakhundov
| true
|
2,753,940,063
|
RuntimeError: tensor does not have a device
|
dev-kamran2001
|
closed
|
[
"module: onnx",
"triaged"
] | 0
|
NONE
|
### 🐛 Describe the bug
When trying to export a PyTorch YOLOv11 model (trained using ultralytics on a custom dataset) I get the error in the title.
Using `.to(torch.device('cpu'))` or just `.to('cpu')` on both or one of the models doesn't help; I tried to tell torch to use the CPU with every available function, but had no luck in solving this error.
```
import torch
# This code produces the error
# using torch.zeros((1, 3, 640, 640)).half().to(device) doesn't help
device = torch.device('cpu')
model = torch.load('best.pt', map_location=device)['model']
torch.onnx.export(model, torch.zeros((1, 3, 640, 640)).half(), 'model.onnx', export_params=True, opset_version=12)
```
```
import torch
# This code successfully exports
# onnxruntime successfully loads the model but fails to detect with it (throws exception on detect call, using C++ for detection)
device = torch.device('cpu')
model = torch.load('best.pt', map_location=device)['model'].float()
torch.onnx.export(model, torch.zeros((1, 3, 640, 640)), 'model.onnx', export_params=True, opset_version=12)
```
Using ONNXRUNTIME in Python outputs a clear error description: "INVALID_ARGUMENT : Unexpected input data type. Actual: (tensor(uint8)) , expected: (tensor(float))". Digging online, I found out that this is not an ONNXRUNTIME issue.
I'm confused: the model should already be torch.FloatTensor, so why does onnxruntime say it's uint8? Also, using "ultralytics" to export doesn't cause any problems, but during detection the model is not able to properly classify objects (all objects are detected with incorrect classes; this issue doesn't happen when using the original PyTorch model).
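A hedged guess (my addition, not a confirmed diagnosis): the ONNX Runtime message usually refers to the data fed at inference time rather than the exported weights, so converting the input image to float32 before calling `run()` may avoid that particular error. A sketch in Python (the C++ API has the same dtype requirement):
```python
import numpy as np
import onnxruntime as ort

sess = ort.InferenceSession("model.onnx")         # the file exported above
img = np.zeros((1, 3, 640, 640), dtype=np.uint8)  # placeholder for a raw uint8 image
inp = img.astype(np.float32) / 255.0              # convert dtype (scale as your preprocessing requires)
outputs = sess.run(None, {sess.get_inputs()[0].name: inp})
```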
### Versions
PyTorch version: 2.5.1+cpu
Is debug build: False
CUDA used to build PyTorch: Could not collect
ROCM used to build PyTorch: N/A
OS: Microsoft Windows 10 Pro (10.0.19045 64-bit)
GCC version: Could not collect
Clang version: Could not collect
CMake version: Could not collect
Libc version: N/A
Python version: 3.10.6 (tags/v3.10.6:9c7b4bd, Aug 1 2022, 21:53:49) [MSC v.1932 64 bit (AMD64)] (64-bit runtime)
Python platform: Windows-10-10.0.19045-SP0
Is CUDA available: False
CUDA runtime version: Could not collect
CUDA_MODULE_LOADING set to: N/A
GPU models and configuration: GPU 0: NVIDIA GeForce RTX 2060 SUPER
Nvidia driver version: 551.86
cuDNN version: Could not collect
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Name: 12th Gen Intel(R) Core(TM) i3-12100F
Manufacturer: GenuineIntel
Family: 206
Architecture: 9
ProcessorType: 3
DeviceID: CPU0
CurrentClockSpeed: 3300
MaxClockSpeed: 3300
L2CacheSize: 5120
L2CacheSpeed: None
Revision: None
Versions of relevant libraries:
[pip3] numpy==2.2.0
[pip3] onnx==1.17.0
[pip3] onnxruntime==1.20.1
[pip3] onnxslim==0.1.43
[pip3] torch==2.5.1
[pip3] torchvision==0.20.1
[conda] Could not collect
| true
|
2,753,862,599
|
Unable for CMake in setup.py to list anything OpenCL-ROCm
|
KISSEsWHISPERsFEEtBACKHUGs
|
open
|
[
"module: build",
"module: rocm",
"triaged"
] | 2
|
NONE
|
### Commands that are run to build PyTorch
```
python3.11 -m venv /opt/pyt2c1k/pyenv
source /opt/pyt2c1k/pyenv/bin/activate
export HSA_OVERRIDE_GFX_VERSION=9.0.0
export PATH=/opt/rocm/bin:$PATH
export LD_LIBRARY_PATH=/opt/rocm/lib:$LD_LIBRARY_PATH
export OpenCL_INCLUDE_DIR=/opt/rocm-6.3.0/include
export OpenCL_LIBRARY=/opt/rocm-6.3.0/lib/libOpenCL.so
git clone --recursive https://github.com/pytorch/pytorch.git PyTorch
cd PyTorch
git pull
git checkout main
git submodule sync
git submodule update --init --recursive
PIP_REQUIRE_VIRTUALENV=true python3.11 -m pip install setuptools ninja mkl-static mkl-include -r requirements.txt
doas rm -rf $(locate LibTorch|grep -Eie 'CMakeCache.txt|CMakeFiles')
PYTORCH_ROCM_ARCH=gfx90c USE_OPENCL=1 USE_CUDA=0 USE_CUDNN=0 USE_CUSPARSELT=0 USE_CUDSS=0 USE_CUFILE=0 BUILD_TEST=0 PROJECT_BINARY_DIR=/opt/LibTorch/MkTorch CFLAGS="-DCMAKE_C_FLAGS='-w',-DCMAKE_CXX_FLAGS='-w'" python3.11 setup.py clean
PYTORCH_ROCM_ARCH=gfx90c USE_OPENCL=1 USE_CUDA=0 USE_CUDNN=0 USE_CUSPARSELT=0 USE_CUDSS=0 USE_CUFILE=0 BUILD_TEST=0 PROJECT_BINARY_DIR=/opt/LibTorch/MkTorch CFLAGS="-DCMAKE_C_FLAGS='-w',-DCMAKE_CXX_FLAGS='-w'" python3.11 setup.py bdist_wheel
PIP_REQUIRE_VIRTUALENV=true python3.11 -m pip install dist/*.whl
python3.11 -c "import torch; print(torch.__version__)"
cd ..
git clone https://github.com/mlverse/torchvision.git TorchVision
cd TorchVision
git pull
git checkout main
PIP_REQUIRE_VIRTUALENV=true python3.11 -m pip install setuptools ninja -r requirements.txt
python3.11 setup.py clean
python3.11 setup.py bdist_wheel
PIP_REQUIRE_VIRTUALENV=true python3.11 -m pip install dist/*.whl
cd ..
git clone https://github.com/mlverse/torchaudio.git TorchAudio
cd TorchAudio
git pull
git checkout main
PIP_REQUIRE_VIRTUALENV=true python3.11 -m pip install setuptools ninja -r requirements.txt
python3.11 setup.py clean
python3.11 setup.py bdist_wheel
PIP_REQUIRE_VIRTUALENV=true python3.11 -m pip install dist/*.whl
cd ..
python3.11 -c "import torch; print(torch.__version__)"
python3.11 -c "import torchvision; print(torchvision.__version__)"
python3.11 -c "import torchaudio; print(torchaudio.__version__)"
echo "LibTorch, TorchVision, and TorchAudio installed successfully from my custom wheel files! :)"
```
### 🐛 Describe the bug
```
INFOUSING OPENCL
-- Looking for CL_VERSION_3_0
-- Looking for CL_VERSION_3_0 - not found
-- Looking for CL_VERSION_2_2
-- Looking for CL_VERSION_2_2 - not found
-- Looking for CL_VERSION_2_1
-- Looking for CL_VERSION_2_1 - not found
-- Looking for CL_VERSION_2_0
-- Looking for CL_VERSION_2_0 - not found
-- Looking for CL_VERSION_1_2
-- Looking for CL_VERSION_1_2 - not found
-- Looking for CL_VERSION_1_1
-- Looking for CL_VERSION_1_1 - not found
-- Looking for CL_VERSION_1_0
-- Looking for CL_VERSION_1_0 - not found
CMake Error at /opt/pyt2c1k/pyenv/lib/python3.11/site-packages/cmake/data/share/cmake-3.31/Modules/FindPackageHandleStandardArgs.cmake:233 (message):
Could NOT find OpenCL (missing: OpenCL_INCLUDE_DIR)
Call Stack (most recent call first):
/opt/pyt2c1k/pyenv/lib/python3.11/site-packages/cmake/data/share/cmake-3.31/Modules/FindPackageHandleStandardArgs.cmake:603 (_FPHSA_FAILURE_MESSAGE)
/opt/pyt2c1k/pyenv/lib/python3.11/site-packages/cmake/data/share/cmake-3.31/Modules/FindOpenCL.cmake:177 (find_package_handle_standard_args)
cmake/Dependencies.cmake:761 (find_package)
CMakeLists.txt:865 (include)
-- Configuring incomplete, errors occurred!
WARNING: Requirement 'dist/*.whl' looks like a filename, but the file does not exist
ERROR: *.whl is not a valid wheel filename.
Traceback (most recent call last):
File "<string>", line 1, in <module>
File "/opt/local/.B/terminal/AI/LibTorch/PyTorch/torch/__init__.py", line 77, in <module>
from torch.torch_version import __version__ as __version__
File "/opt/local/.B/terminal/AI/LibTorch/PyTorch/torch/torch_version.py", line 4, in <module>
from torch.version import __version__ as internal_version
ModuleNotFoundError: No module named 'torch.version'
```
### Versions
```
Collecting environment information...
PyTorch version: N/A
Is debug build: N/A
CUDA used to build PyTorch: N/A
ROCM used to build PyTorch: N/A
OS: Artix Linux (x86_64)
GCC version: (GCC) 14.2.1 20240910
Clang version: 18.1.8
CMake version: version 3.31.2
Libc version: glibc-2.40
Python version: 3.11.9 (main, Jul 9 2024, 00:31:01) [GCC 14.1.1 20240522] (64-bit runtime)
Python platform: Linux-6.12.5-lqx1-1-lqx-x86_64-with-glibc2.40
Is CUDA available: N/A
CUDA runtime version: Could not collect
CUDA_MODULE_LOADING set to: N/A
GPU models and configuration: Could not collect
Nvidia driver version: Could not collect
cuDNN version: Could not collect
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: N/A
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 48 bits physical, 48 bits virtual
Byte Order: Little Endian
CPU(s): 16
On-line CPU(s) list: 0-15
Vendor ID: AuthenticAMD
Model name: AMD Ryzen 7 PRO 4750G with Radeon Graphics
CPU family: 23
Model: 96
Thread(s) per core: 2
Core(s) per socket: 8
Socket(s): 1
Stepping: 1
Frequency boost: enabled
CPU(s) scaling MHz: 19%
CPU max MHz: 4454.0000
CPU min MHz: 400.0000
BogoMIPS: 7186.09
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ht syscall nx mmxext fxsr_opt pdpe1gb rdtscp lm constant_tsc rep_good nopl xtopology nonstop_tsc cpuid extd_apicid aperfmperf rapl pni pclmulqdq monitor ssse3 fma cx16 sse4_1 sse4_2 x2apic movbe popcnt aes xsave avx f16c rdrand lahf_lm cmp_legacy svm extapic cr8_legacy abm sse4a misalignsse 3dnowprefetch osvw ibs skinit wdt tce topoext perfctr_core perfctr_nb bpext perfctr_llc mwaitx cpb cat_l3 cdp_l3 hw_pstate ssbd mba ibrs ibpb stibp vmmcall fsgsbase bmi1 avx2 smep bmi2 cqm rdt_a rdseed adx smap clflushopt clwb sha_ni xsaveopt xsavec xgetbv1 cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local clzero irperf xsaveerptr rdpru wbnoinvd cppc arat npt lbrv svm_lock nrip_save tsc_scale vmcb_clean flushbyasid decodeassists pausefilter pfthreshold avic v_vmsave_vmload vgif v_spec_ctrl umip rdpid overflow_recov succor smca
Virtualization: AMD-V
L1d cache: 256 KiB (8 instances)
L1i cache: 256 KiB (8 instances)
L2 cache: 4 MiB (8 instances)
L3 cache: 8 MiB (2 instances)
NUMA node(s): 1
NUMA node0 CPU(s): 0-15
Vulnerability Gather data sampling: Not affected
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Not affected
Vulnerability Reg file data sampling: Not affected
Vulnerability Retbleed: Mitigation; untrained return thunk; SMT enabled with STIBP protection
Vulnerability Spec rstack overflow: Mitigation; Safe RET
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Retpolines; IBPB conditional; STIBP always-on; RSB filling; PBRSB-eIBRS Not affected; BHI Not affected
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Versions of relevant libraries:
[pip3] numpy==2.2.0
[pip3] optree==0.13.1
[conda] Could not collect
```
cc @malfet @seemethere @jeffdaily @sunway513 @jithunnair-amd @pruthvistony @ROCmSupport @dllehr-amd @jataylo @hongxiayang @naromero77amd
| true
|
2,753,861,784
|
Segmentation Fault (core dumped) on as_strided with torch.compile
|
maybeLee
|
open
|
[
"module: crash",
"triaged",
"oncall: pt2",
"module: fakeTensor",
"module: aotdispatch"
] | 3
|
CONTRIBUTOR
|
### 🐛 Describe the bug
The following script leads to a segmentation fault.
```python
import torch
@torch.compile
def as_strided(input, size, stride, storage_offset=0):
return input.as_strided(size, stride, storage_offset)
input = torch.tensor([], dtype=torch.float32)
size = [17,18]
stride = [-80,1]
storage_offset = 1
out2 = as_strided(input,size,stride,storage_offset)
```
Without torch.compile, this function raises a runtime error:
```
as_strided: Negative strides are not supported at the moment, got strides: [-80, 1]
```
Here are some details:
- This issue seems to be related to the first element of the `stride`. If I change the stride to `[1, -80]`, there is no segmentation fault and the normal runtime error is raised instead.
- I faced this warning when running this script: `Bypassing autograd cache due to: Cannot cache a graph with functional tensor`
### Versions
<details>
<summary>Envs</summary>
Collecting environment information...
PyTorch version: 2.6.0a0+gitdeb1da1
Is debug build: False
CUDA used to build PyTorch: 12.4
ROCM used to build PyTorch: N/A
OS: Ubuntu 22.04.4 LTS (x86_64)
GCC version: (Ubuntu 11.4.0-1ubuntu1~22.04) 11.4.0
Clang version: Could not collect
CMake version: version 3.31.2
Libc version: glibc-2.35
Python version: 3.11.10 | packaged by conda-forge | (main, Oct 16 2024, 01:27:36) [GCC 13.3.0] (64-bit runtime)
Python platform: Linux-5.14.0-427.37.1.el9_4.x86_64-x86_64-with-glibc2.35
Is CUDA available: True
CUDA runtime version: 12.4.131
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration:
GPU 0: NVIDIA GeForce RTX 3090
GPU 1: NVIDIA GeForce RTX 3090
GPU 2: NVIDIA GeForce RTX 3090
GPU 3: NVIDIA GeForce RTX 3090
Nvidia driver version: 560.35.03
cuDNN version: Could not collect
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 43 bits physical, 48 bits virtual
Byte Order: Little Endian
CPU(s): 64
On-line CPU(s) list: 0-63
Vendor ID: AuthenticAMD
Model name: AMD Ryzen Threadripper PRO 3975WX 32-Cores
CPU family: 23
Model: 49
Thread(s) per core: 2
Core(s) per socket: 32
Socket(s): 1
Stepping: 0
Frequency boost: enabled
CPU max MHz: 4368.1641
CPU min MHz: 2200.0000
BogoMIPS: 7000.73
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ht syscall nx mmxext fxsr_opt pdpe1gb rdtscp lm constant_tsc rep_good nopl nonstop_tsc cpuid extd_apicid aperfmperf rapl pni pclmulqdq monitor ssse3 fma cx16 sse4_1 sse4_2 x2apic movbe popcnt aes xsave avx f16c rdrand lahf_lm cmp_legacy svm extapic cr8_legacy abm sse4a misalignsse 3dnowprefetch osvw ibs skinit wdt tce topoext perfctr_core perfctr_nb bpext perfctr_llc mwaitx cpb cat_l3 cdp_l3 hw_pstate ssbd mba ibpb stibp vmmcall fsgsbase bmi1 avx2 smep bmi2 cqm rdt_a rdseed adx smap clflushopt clwb sha_ni xsaveopt xsavec xgetbv1 xsaves cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local clzero irperf xsaveerptr rdpru wbnoinvd amd_ppin arat npt lbrv svm_lock nrip_save tsc_scale vmcb_clean flushbyasid decodeassists pausefilter pfthreshold avic v_vmsave_vmload vgif v_spec_ctrl umip rdpid overflow_recov succor smca sev sev_es
Virtualization: AMD-V
L1d cache: 1 MiB (32 instances)
L1i cache: 1 MiB (32 instances)
L2 cache: 16 MiB (32 instances)
L3 cache: 128 MiB (8 instances)
NUMA node(s): 1
NUMA node0 CPU(s): 0-63
Vulnerability Gather data sampling: Not affected
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Not affected
Vulnerability Retbleed: Mitigation; untrained return thunk; SMT enabled with STIBP protection
Vulnerability Spec rstack overflow: Mitigation; Safe RET
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Retpolines, IBPB conditional, STIBP always-on, RSB filling, PBRSB-eIBRS Not affected
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Versions of relevant libraries:
[pip3] numpy==2.1.2
[pip3] nvidia-cublas-cu12==12.4.5.8
[pip3] nvidia-cuda-cupti-cu12==12.4.127
[pip3] nvidia-cuda-nvrtc-cu12==12.4.127
[pip3] nvidia-cuda-runtime-cu12==12.4.127
[pip3] nvidia-cudnn-cu12==9.1.0.70
[pip3] nvidia-cufft-cu12==11.2.1.3
[pip3] nvidia-curand-cu12==10.3.5.147
[pip3] nvidia-cusolver-cu12==11.6.1.9
[pip3] nvidia-cusparse-cu12==12.3.1.170
[pip3] nvidia-nccl-cu12==2.21.5
[pip3] nvidia-nvjitlink-cu12==12.4.127
[pip3] nvidia-nvtx-cu12==12.4.127
[pip3] optree==0.13.0
[pip3] torch==2.6.0a0+gitdeb1da1
[pip3] torchaudio==2.5.1+cu124
[pip3] torchelastic==0.2.2
[pip3] torchvision==0.20.1+cu124
[pip3] triton==3.1.0
[conda] magma-cuda124 2.6.1 1 pytorch
[conda] mkl 2025.0.0 h901ac74_941 conda-forge
[conda] mkl-include 2025.0.0 hf2ce2f3_941 conda-forge
[conda] numpy 2.1.2 py311h71ddf71_0 conda-forge
[conda] nvidia-cublas-cu12 12.4.5.8 pypi_0 pypi
[conda] nvidia-cuda-cupti-cu12 12.4.127 pypi_0 pypi
[conda] nvidia-cuda-nvrtc-cu12 12.4.127 pypi_0 pypi
[conda] nvidia-cuda-runtime-cu12 12.4.127 pypi_0 pypi
[conda] nvidia-cudnn-cu12 9.1.0.70 pypi_0 pypi
[conda] nvidia-cufft-cu12 11.2.1.3 pypi_0 pypi
[conda] nvidia-curand-cu12 10.3.5.147 pypi_0 pypi
[conda] nvidia-cusolver-cu12 11.6.1.9 pypi_0 pypi
[conda] nvidia-cusparse-cu12 12.3.1.170 pypi_0 pypi
[conda] nvidia-nccl-cu12 2.21.5 pypi_0 pypi
[conda] nvidia-nvjitlink-cu12 12.4.127 pypi_0 pypi
[conda] nvidia-nvtx-cu12 12.4.127 pypi_0 pypi
[conda] optree 0.13.0 pypi_0 pypi
[conda] torch 2.6.0a0+gitdeb1da1 pypi_0 pypi
[conda] torchaudio 2.5.1+cu124 pypi_0 pypi
[conda] torchelastic 0.2.2 pypi_0 pypi
[conda] torchvision 0.20.1+cu124 pypi_0 pypi
[conda] triton 3.1.0 pypi_0 pypi
</details>
cc @chauhang @penguinwu @eellison @voznesenskym @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @ColinPeppler @amjames @desertfire @aakhundov @zou3519 @bdhirsh
| true
|
2,753,859,917
|
Remove unused <ATen/core/Array.h> inclusion
|
cyyever
|
closed
|
[
"module: cpu",
"open source",
"Merged",
"topic: not user facing",
"ciflow/mps",
"module: inductor",
"ciflow/inductor",
"ciflow/s390"
] | 6
|
COLLABORATOR
|
Fixes #ISSUE_NUMBER
cc @jgong5 @mingfeima @XiaobingSuper @sanchitintel @ashokei @jingxu10 @voznesenskym @penguinwu @EikanWang @Guobing-Chen @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @ColinPeppler @amjames @desertfire @chauhang @aakhundov
| true
|
2,753,829,320
|
fix: all_gather_into_tensor in torch.compile graph
|
yangxiaorun
|
closed
|
[
"oncall: distributed",
"triaged",
"open source",
"topic: not user facing"
] | 5
|
NONE
|
Fixes #ISSUE_NUMBER
This PR fixes a bug in torch.compile.
An example that triggers the error is as follows:
```python
import os
import torch
import torch.distributed as dist

torch._logging.set_logs(graph=True, graph_code=True)

class allgather_in_tensor(torch.nn.Module):
    def __init__(self):
        super().__init__()

    def forward(self, out_tensor, x):
        torch.distributed.all_gather_into_tensor(out_tensor, x)
        return out_tensor

def test_allgather_in_tensor_static(rank, world_size):
    torch.cuda.set_device("cuda:" + str(rank))
    dist.init_process_group("nccl", rank=rank, world_size=world_size)
    x = torch.ones(2, 2, dtype=torch.int64).to("cuda:" + str(rank)) + 1 + 2 * rank
    print("x-----===:", x)
    tensor_list = torch.zeros(4, 2, dtype=torch.int64).to("cuda:" + str(rank))
    print("tensor_list-----===:", tensor_list)
    mod = allgather_in_tensor()
    mod = mod.to("cuda:" + str(rank))
    ori_result = mod(tensor_list, x)
    print("ori_result:", ori_result)
    torch._dynamo.reset()
    opt_mod = torch.compile(mod, dynamic=False, fullgraph=True)
    compile_result = opt_mod(tensor_list, x)
    print("compile_result:", compile_result)
    assert ori_result.equal(compile_result)

def mp():
    world_size = 2
    torch.multiprocessing.spawn(test_allgather_in_tensor_static, args=(world_size,), nprocs=world_size, join=True)

if __name__ == '__main__':
    os.environ["MASTER_ADDR"] = "localhost"
    os.environ["MASTER_PORT"] = "29506"
    mp()
```
This test case triggers the following error; the modification in this PR resolves it.
```
File "/data1/anaconda/envs/yxr_py310/lib/python3.10/site-packages/torch/_dynamo/symbolic_convert.py", line 749, in inline_user_function_return
return InliningInstructionTranslator.inline_call(self, fn, args, kwargs)
File "/data1/anaconda/envs/yxr_py310/lib/python3.10/site-packages/torch/_dynamo/symbolic_convert.py", line 2666, in inline_call
return cls.inline_call_(parent, func, args, kwargs)
File "/data1/anaconda/envs/yxr_py310/lib/python3.10/site-packages/torch/_dynamo/symbolic_convert.py", line 2716, in inline_call_
raise ArgsMismatchError( # noqa: B904
torch._dynamo.exc.ArgsMismatchError: missing a required argument: 'group'.
func = 'all_gather_tensor_inplace' /data1/anaconda/envs/yxr_py310/lib/python3.10/site-packages/torch/distributed/_functional_collectives.py:1003, args = [], kwargs = {'output_tensor': LazyVariableTracker(), 'input_tensor': LazyVariableTracker()}
from user code:
File "/home/yxr/allgather_test/allgather_error2.py", line 10, in forward
torch.distributed.all_gather_into_tensor(out_tensor, x)
Set TORCH_LOGS="+dynamo" and TORCHDYNAMO_VERBOSE=1 for more information
You can suppress this exception and fall back to eager by setting:
import torch._dynamo
torch._dynamo.config.suppress_errors = True
```
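A minimal workaround sketch, assuming the failure only occurs because dynamo's rewrite to `all_gather_tensor_inplace` loses the default `group` argument: pass the process group explicitly.
```python
# Hedged workaround sketch (assumption: the error only occurs when `group`
# is left as the default): pass the process group explicitly.
import torch
import torch.distributed as dist

class allgather_in_tensor(torch.nn.Module):
    def forward(self, out_tensor, x):
        dist.all_gather_into_tensor(out_tensor, x, group=dist.group.WORLD)
        return out_tensor
```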
cc @H-Huang @awgu @kwen2501 @wanchaol @fegin @fduwjj @wz337 @wconstab @d4l3k @c-p-i-o
| true
|
2,753,725,117
|
[dynamo] Remove dead code after introducing UserDefinedDictVariable
|
anijain2305
|
closed
|
[
"Merged",
"Reverted",
"ciflow/trunk",
"topic: not user facing",
"module: dynamo",
"ciflow/inductor",
"ci-no-td"
] | 6
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #143698
* __->__ #143699
* #143722
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @chauhang @amjames
| true
|
2,753,722,530
|
[dynamo] Remove HFPretrained config hack
|
anijain2305
|
closed
|
[
"Merged",
"ciflow/trunk",
"topic: not user facing",
"module: dynamo",
"ciflow/inductor"
] | 6
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #143698
* #143888
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @chauhang @amjames
| true
|
2,753,691,278
|
[torch.compile] `torch.compile` throws an error when nn.Module contains a dataclass with float values.
|
nanlliu
|
open
|
[
"triaged",
"oncall: pt2",
"module: dynamo"
] | 3
|
NONE
|
### 🐛 Describe the bug
I think `torch.compile` somehow treats scalar values as `tensors`, but I don't see why scalar values should be a problem in this case.
This helps, but it falls back to eager mode:
```
torch._dynamo.config.suppress_errors = True
```
What is the best way to debug this?
```
Traceback (most recent call last):
File "/mnt/filestore/users/nan/.venv/lib/python3.10/site-packages/torch/_dynamo/output_graph.py", line 1446, in _call_user_compiler
compiled_fn = compiler_fn(gm, self.example_inputs())
File "/mnt/filestore/users/nan/.venv/lib/python3.10/site-packages/torch/_dynamo/repro/after_dynamo.py", line 129, in __call__
compiled_gm = compiler_fn(gm, example_inputs)
File "/mnt/filestore/users/nan/.venv/lib/python3.10/site-packages/torch/__init__.py", line 2235, in __call__
return compile_fx(model_, inputs_, config_patches=self.config)
File "/mnt/filestore/users/nan/.venv/lib/python3.10/site-packages/torch/_inductor/compile_fx.py", line 1521, in compile_fx
return aot_autograd(
File "/mnt/filestore/users/nan/.venv/lib/python3.10/site-packages/torch/_dynamo/backends/common.py", line 72, in __call__
cg = aot_module_simplified(gm, example_inputs, **self.kwargs)
File "/mnt/filestore/users/nan/.venv/lib/python3.10/site-packages/torch/_functorch/aot_autograd.py", line 1071, in aot_module_simplified
compiled_fn = dispatch_and_compile()
File "/mnt/filestore/users/nan/.venv/lib/python3.10/site-packages/torch/_functorch/aot_autograd.py", line 1056, in dispatch_and_compile
compiled_fn, _ = create_aot_dispatcher_function(
File "/mnt/filestore/users/nan/.venv/lib/python3.10/site-packages/torch/_functorch/aot_autograd.py", line 522, in create_aot_dispatcher_function
return _create_aot_dispatcher_function(
File "/mnt/filestore/users/nan/.venv/lib/python3.10/site-packages/torch/_functorch/aot_autograd.py", line 759, in _create_aot_dispatcher_function
compiled_fn, fw_metadata = compiler_fn(
File "/mnt/filestore/users/nan/.venv/lib/python3.10/site-packages/torch/_functorch/_aot_autograd/jit_compile_runtime_wrappers.py", line 586, in aot_dispatch_autograd
compiled_fw_func = aot_config.fw_compiler(fw_module, adjusted_flat_args)
File "/mnt/filestore/users/nan/.venv/lib/python3.10/site-packages/torch/_inductor/compile_fx.py", line 1350, in fw_compiler_base
return _fw_compiler_base(model, example_inputs, is_inference)
File "/mnt/filestore/users/nan/.venv/lib/python3.10/site-packages/torch/_inductor/compile_fx.py", line 1421, in _fw_compiler_base
return inner_compile(
File "/mnt/filestore/users/nan/.venv/lib/python3.10/site-packages/torch/_inductor/compile_fx.py", line 475, in compile_fx_inner
return wrap_compiler_debug(_compile_fx_inner, compiler_name="inductor")(
File "/mnt/filestore/users/nan/.venv/lib/python3.10/site-packages/torch/_dynamo/repro/after_aot.py", line 85, in debug_wrapper
inner_compiled_fn = compiler_fn(gm, example_inputs)
File "/mnt/filestore/users/nan/.venv/lib/python3.10/site-packages/torch/_inductor/compile_fx.py", line 661, in _compile_fx_inner
compiled_graph = FxGraphCache.load(
File "/mnt/filestore/users/nan/.venv/lib/python3.10/site-packages/torch/_inductor/codecache.py", line 1334, in load
compiled_graph = compile_fx_fn(
File "/mnt/filestore/users/nan/.venv/lib/python3.10/site-packages/torch/_inductor/compile_fx.py", line 570, in codegen_and_compile
compiled_graph = fx_codegen_and_compile(gm, example_inputs, **fx_kwargs)
File "/mnt/filestore/users/nan/.venv/lib/python3.10/site-packages/torch/_inductor/compile_fx.py", line 789, in fx_codegen_and_compile
_recursive_post_grad_passes(gm, is_inference=is_inference)
File "/mnt/filestore/users/nan/.venv/lib/python3.10/site-packages/torch/_inductor/compile_fx.py", line 288, in _recursive_post_grad_passes
post_grad_passes(gm, is_inference)
File "/mnt/filestore/users/nan/.venv/lib/python3.10/site-packages/torch/_inductor/fx_passes/post_grad.py", line 100, in post_grad_passes
patterns.apply(gm.graph) # type: ignore[arg-type]
File "/mnt/filestore/users/nan/.venv/lib/python3.10/site-packages/torch/_inductor/pattern_matcher.py", line 1729, in apply
if is_match(m) and entry.extra_check(m):
File "/mnt/filestore/users/nan/.venv/lib/python3.10/site-packages/torch/_inductor/fx_passes/quantization.py", line 1448, in fn
scales = match.kwargs["scales"].meta["val"]
```
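A hedged debugging sketch (not part of the original report; `model` and `inputs` are placeholders for the reporter's actual module and example inputs): verbose dynamo/inductor logs usually narrow down which pass is failing before resorting to `suppress_errors`.
```python
# Hedged debugging sketch; `model` and `inputs` stand in for the reporter's
# actual nn.Module and example inputs.
import logging
import torch

torch._logging.set_logs(dynamo=logging.DEBUG, inductor=logging.DEBUG)

compiled = torch.compile(model)
out = compiled(*inputs)  # rerun with verbose logs to see where compilation fails
```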
### Versions
Versions of relevant libraries:
[pip3] numpy==1.26.4
[pip3] nvidia-cublas-cu12==12.4.5.8
[pip3] nvidia-cuda-cupti-cu12==12.4.127
[pip3] nvidia-cuda-nvrtc-cu12==12.4.127
[pip3] nvidia-cuda-runtime-cu12==12.4.127
[pip3] nvidia-cudnn-cu12==9.1.0.70
[pip3] nvidia-cufft-cu12==11.2.1.3
[pip3] nvidia-curand-cu12==10.3.5.147
[pip3] nvidia-cusolver-cu12==11.6.1.9
[pip3] nvidia-cusparse-cu12==12.3.1.170
[pip3] nvidia-nccl-cu12==2.21.5
[pip3] nvidia-nvjitlink-cu12==12.4.127
[pip3] nvidia-nvtx-cu12==12.4.127
[pip3] open_clip_torch==2.28.0
[pip3] optree==0.13.1
[pip3] pytorch-labs-segment-anything-fast==0.2
[pip3] torch==2.5.0
[pip3] torch_scatter==2.1.2.dev4
[pip3] torch-tb-profiler==0.4.3
[pip3] torchao==0.6.1
[pip3] torchaudio==2.5.0
[pip3] torchinfo==1.8.0
[pip3] torchmetrics==1.6.0
[pip3] torchsummary==1.5.1
[pip3] torchvision==0.20.0
[pip3] triton==3.1.0
cc @chauhang @penguinwu @voznesenskym @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @amjames
| true
|
2,753,688,581
|
[cumsum][CUDA][64-bit indexing] Add 64-bit indexing path for `cumsum`
|
eqy
|
closed
|
[
"module: cuda",
"triaged",
"module: 64-bit",
"open source",
"Merged",
"ciflow/trunk",
"topic: bug fixes",
"topic: not user facing"
] | 6
|
COLLABORATOR
|
For #143486
Interestingly enough, changing the indexing type seems to degrade performance when the larger width is not needed, even at small sizes, so the index type is made a template parameter rather than forcing all cases to 64-bit.
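A hedged illustration of the case this targets (assumes a GPU with roughly 18 GB of free memory to hold both tensors):
```python
# Hedged illustration: a 1-D tensor with more than 2**31 elements, which
# requires the 64-bit indexing path.
import torch

n = 2**31 + 1
x = torch.ones(n, dtype=torch.int8, device="cuda")
y = torch.cumsum(x, dim=0, dtype=torch.int64)
print(y[-1].item())  # 2147483649
```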
cc @ptrblck @msaroufim
| true
|
2,753,681,638
|
[ROCm] CK Flash Attention Backend
|
xw285cornell
|
closed
|
[
"module: rocm",
"Merged",
"ciflow/trunk",
"release notes: distributed (c10d)",
"topic: not user facing",
"skip-pr-sanity-checks",
"module: dynamo",
"ciflow/inductor",
"ciflow/rocm"
] | 14
|
CONTRIBUTOR
|
Replace https://github.com/pytorch/pytorch/pull/138947 for re-import.
Replaces https://github.com/ROCm/pytorch/pull/1592
This PR contains the initial implementation of SDPA with the composable_kernel backend. The CK path can be forced by calling torch.backends.cuda.preferred_rocm_fa_library("ck"); similarly, you can force the incumbent aotriton implementation by passing in "aotriton" or "default". As you'd expect, not setting this option results in aotriton being used as the backend. In the case of CK, if PyTorch deems flash attention usable, it will use the CK path in all the same places aotriton would have been used. This PR makes no changes to the heuristics that select which attention scheme to use (i.e. flash attention vs. memory-efficient attention vs. math, etc.). It only gets called when flash attention is both enabled (via USE_FLASH_ATTENTION) and selected at runtime by the existing heuristics.
Files located in pytorch/aten/src/ATen/native/transformers/hip/flash_attn/ck/mha* have been pulled from https://github.com/Dao-AILab/flash-attention courtesy of @tridao's hard work; he is a co-author of this PR.
NOTE: In order to use this backend, the user MUST set USE_CK_FLASH_ATTENTION=1 in their environment when they build PyTorch.
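A hedged usage sketch based on the description above (assumes a ROCm build of PyTorch with both USE_FLASH_ATTENTION=1 and USE_CK_FLASH_ATTENTION=1 set at build time):
```python
# Hedged usage sketch; requires a ROCm build with USE_FLASH_ATTENTION=1 and
# USE_CK_FLASH_ATTENTION=1 as described above.
import torch
import torch.nn.functional as F

torch.backends.cuda.preferred_rocm_fa_library("ck")  # or "aotriton" / "default"

q, k, v = (torch.randn(2, 8, 128, 64, device="cuda", dtype=torch.float16)
           for _ in range(3))
out = F.scaled_dot_product_attention(q, k, v)  # flash path now routed through CK
```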
cc @jeffdaily @sunway513 @jithunnair-amd @pruthvistony @ROCmSupport @dllehr-amd @jataylo @hongxiayang @naromero77amd @albanD @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @chauhang @amjames
| true
|
2,753,660,904
|
[audio hash update] update the pinned audio hash
|
pytorchupdatebot
|
closed
|
[
"open source",
"Merged",
"ciflow/trunk",
"topic: not user facing",
"ciflow/inductor"
] | 3
|
COLLABORATOR
|
This PR is auto-generated nightly by [this action](https://github.com/pytorch/pytorch/blob/main/.github/workflows/nightly.yml).
Update the pinned audio hash.
| true
|
2,753,656,321
|
Fix issue with setAttribute and int8_t vs int32_t variables
|
r-barnes
|
closed
|
[
"fb-exported",
"Merged",
"Reverted",
"ciflow/trunk",
"release notes: cpp",
"topic: improvements",
"ci-no-td"
] | 78
|
CONTRIBUTOR
|
Test Plan: Sandcastle
| true
|
2,753,655,322
|
Enable Dynamic Memory Budget Solver
|
basilwong
|
closed
|
[
"fb-exported",
"topic: not user facing",
"ciflow/inductor"
] | 3
|
CONTRIBUTOR
|
Summary:
Full Context: https://docs.google.com/document/d/1-j5KSbfGFJQcH4sYh7BIeJXso3zYzl5G5yFQqXdKx_o/edit?usp=sharing
tl;dr
This change introduces classes which help determine a dynamic memory budget. This will mostly be helpful for models with many implicit graph breaks.
---
New Classes:
*GraphInfoProvider*
* Takes the joint_graph as well as the input memories and runtimes and parses the graph + values into usable forms for the SolverEvaluator.
*KnapsackEvaluator*
* Provides a function that, given four inputs (the solver function as a callable, max_dynamic_memory_budget, min_dynamic_memory_budget, and dynamic_memory_budget_pareto_granularity), returns an approximation of the knee point of the Pareto distribution; a hedged sketch follows below.
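A hedged sketch of that knee-point search (hypothetical signature; the real classes live in the referenced change). Here `solver` maps a memory budget to an estimated runtime.
```python
# Hedged sketch with a hypothetical signature, mirroring the four inputs above.
def approximate_knee_point(solver, min_budget, max_budget, granularity):
    step = (max_budget - min_budget) / (granularity - 1)
    budgets = [min_budget + i * step for i in range(granularity)]
    runtimes = [solver(b) for b in budgets]
    # The knee is where the marginal runtime improvement drops off the most.
    best_i, best_bend = 1, float("-inf")
    for i in range(1, granularity - 1):
        bend = (runtimes[i - 1] - runtimes[i]) - (runtimes[i] - runtimes[i + 1])
        if bend > best_bend:
            best_i, best_bend = i, bend
    return budgets[best_i]
```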
Test Plan:
### Local E2E Test
https://www.internalfb.com/mlhub/pipeline/1116570856577237
### Distributed E2E Test
aps-fb_fm_v4_768_01_dynamic_updated-b4db74faa6
Differential Revision: D67549590
| true
|
2,753,647,263
|
Apply TorchFix TOR203 fixes
|
kit1980
|
closed
|
[
"Merged",
"ciflow/trunk",
"topic: not user facing"
] | 7
|
CONTRIBUTOR
|
Codemodded via `torchfix . --select=TOR203 --fix`.
This is a step to unblock https://github.com/pytorch/pytorch/pull/141076
| true
|
2,753,626,149
|
[rpc] Fix unit test after c10::nullopt removal
|
yf225
|
closed
|
[
"fb-exported",
"Merged",
"ciflow/trunk",
"release notes: cpp",
"topic: improvements",
"topic: not user facing"
] | 4
|
CONTRIBUTOR
| null | true
|
2,753,615,508
|
[torch][fx] Add support for EXIR dialect overload ops in normalize_function
|
dulinriley
|
closed
|
[
"fb-exported",
"Merged",
"ciflow/trunk",
"release notes: fx",
"fx"
] | 11
|
CONTRIBUTOR
|
Summary:
I ran into a minor annoyance when debugging graphs that use EXIR dialect ops:
all of the function normalization went away. For functions with > 5 arguments,
some of which are just simple bools and ints, it's very helpful to have
the kwarg names attached.
Enhance `normalize_target` to handle EdgeOpOverload targets. To avoid
a circular dependency on Executorch from pytorch core, I just use a `hasattr`
check for "_op". This only happens if the target is not already a recognized
torch function.
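A hedged sketch of that duck-typed check (hypothetical helper name; the real change is inside `normalize_target`):
```python
# Hedged sketch; `_maybe_unwrap_edge_op` is a hypothetical helper name.
def _maybe_unwrap_edge_op(target):
    # EXIR's EdgeOpOverload exposes the underlying ATen OpOverload as `_op`;
    # duck-typing on that attribute avoids a hard dependency on Executorch.
    return getattr(target, "_op", target)
```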
Also, I noticed that the new `fx.Node.normalized_arguments` function
didn't forward an important kwarg to `normalize_target`, so I fixed that too.
Test Plan: Tested with FxGraphDrawer and an fx Graph containing EXIR nodes.
Differential Revision: D67545909
cc @ezyang @SherlockNoMad @EikanWang @jgong5 @wenzhe-nrv
| true
|
2,753,605,578
|
[Codemod][AddExplicitStrictExportArg] caffe2/test
|
gmagogsfm
|
closed
|
[
"oncall: distributed",
"fb-exported",
"Merged",
"ciflow/trunk",
"release notes: quantization",
"topic: not user facing",
"fx",
"module: inductor",
"module: dynamo",
"ciflow/inductor"
] | 19
|
CONTRIBUTOR
|
Reviewed By: avikchaudhuri
Differential Revision: D67530154
cc @H-Huang @awgu @kwen2501 @wanchaol @fegin @fduwjj @wz337 @wconstab @d4l3k @c-p-i-o @ezyang @SherlockNoMad @EikanWang @jgong5 @wenzhe-nrv @voznesenskym @penguinwu @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @ColinPeppler @amjames @desertfire @chauhang @aakhundov
| true
|
2,753,604,001
|
[ROCm] Inductor CK GEMM backend very slow
|
LunNova
|
open
|
[
"module: rocm",
"triaged",
"oncall: pt2"
] | 8
|
NONE
|
### 🐛 Describe the bug
When using the CK backend via `TORCHINDUCTOR_MAX_AUTOTUNE_GEMM_BACKENDS="CK,ATEN,TRITON,CPP"`, compilation of CK kernels is very slow (more than one minute per file in some cases).
It looks like some very long symbol names in these files are making compilation slower: LLVM uses `SmallString<128>` buffers to build up symbol names and now has to allocate in places that were otherwise allocation-free. From some `perf` sampling, this appears to cause LLVM to spend much more time in TargetMachine::getSymbol.
https://github.com/LunNova/llvm-project-rocm/blob/5a9ddc6f57430d5e8c5154779c647219c8e7cb99/llvm/lib/Target/TargetMachine.cpp#L283-L292
It's possible I've misdiagnosed this and the long symbol allocations here are mostly inside the CK code and the top level long names aren't important, or the slow compilation is entirely unrelated to the symbol names. In any case, it's very slow.
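A hedged repro sketch (assumes a ROCm build with ck4inductor available; the shapes are illustrative):
```python
# Hedged repro sketch; set the env var before inductor reads its config.
import os, time
os.environ["TORCHINDUCTOR_MAX_AUTOTUNE_GEMM_BACKENDS"] = "CK,ATEN,TRITON,CPP"

import torch

a = torch.randn(4096, 4096, device="cuda", dtype=torch.bfloat16)
b = torch.randn(4096, 4096, device="cuda", dtype=torch.bfloat16)

mm = torch.compile(torch.mm, mode="max-autotune")
t0 = time.time()
mm(a, b)  # first call triggers autotuning and CK kernel compilation
print(f"cold compile + autotune: {time.time() - t0:.1f}s")
```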
<details>
<summary>Long logs/code on torch 20241218 nightly</summary>
```
[torch/_inductor/codecache.py:3202] Compilation took 70.78182601928711 seconds. Compile command: /nix/store/baxdbiqlbq3xgfrycxz0l3lhgqr30gpg-rocmcxx/bin/clang -O3 -x hip -std=c++17 --offload-arch=gfx908 -fno-gpu-rdc -fPIC -mllvm -amdgpu-early-inline-all=true -mllvm -amdgpu-function-calls=false -mllvm -enable-post-misched=0 -DNDEBUG -DCK_TILE_FMHA_FWD_FAST_EXP2=1 -fgpu-flush-denormals-to-zero -ffast-math -I/nix/store/jx8h8d85ymv8l950czxz8wpvd562mblx-unpack-composable_kernel-6.4.0a20241217/lib/python3.12/site-packages/ck4inductor/include -I/nix/store/jx8h8d85ymv8l950czxz8wpvd562mblx-unpack-composable_kernel-6.4.0a20241217/lib/python3.12/site-packages/ck4inductor/library/include -I/nix/store/qp3zdqix9pwcc505jgmlf27xcdq9v8xq-rocm-hip-libraries-meta/include -include __clang_hip_runtime_wrapper.h -L/nix/store/qp3zdqix9pwcc505jgmlf27xcdq9v8xq-rocm-hip-libraries-meta/lib -L/nix/store/qp3zdqix9pwcc505jgmlf27xcdq9v8xq-rocm-hip-libraries-meta/hip/lib -lamdhip64 -shared -o ~/ml-cache/torchinductor/yo/cyoxkhocdaqxywmu537oo7n6qsdsnn4hk6n75y3lg5sknrsei4ol.so ~/ml-cache/torchinductor/yo/cyoxkhocdaqxywmu537oo7n6qsdsnn4hk6n75y3lg5sknrsei4ol.cpp
$ cat ~/ml-cache/torchinductor/yo/cyoxkhocdaqxywmu537oo7n6qsdsnn4hk6n75y3lg5sknrsei4ol.cpp
/**
* Generated code for CK inductor backend
* See torch._inductor.codegen.rocm.ck_universal_gemm_template.CKGemmTemplate
*
* Template instance CKGemmOperation(a_layout='Row', b_layout='Col', ds_layouts=(), c_layout='Row', a_element_dtype='BF16', b_element_dtype='BF16', ds_element_dtypes=(), c_element_dtype='BF16', acc_dtype='F32', c_shuffle_dtype='BF16', a_elementwise_op='PassThrough', b_elementwise_op='PassThrough', c_elementwise_op='PassThrough', gemm_specialization='GemmSpecialization::NPadding', block_size=256, m_per_block=128, n_per_block=128, k_per_block=64, a_k1=8, b_k1=8, m_per_xdl=32, n_per_xdl=32, m_xdl_per_wave=2, n_xdl_per_wave=2, a_block_transfer_thread_cluster_lengths_ak0_m_ak1=(8, 32, 1), a_block_transfer_thread_cluster_arrange_order=(1, 0, 2), a_block_transfer_src_access_order=(1, 0, 2), a_block_transfer_src_vector_dim=2, a_block_transfer_src_scalar_per_vector=8, a_block_transfer_dst_scalar_per_vector_ak1=8, a_block_lds_extra_m=0, b_block_transfer_thread_cluster_lengths_bk0_n_bk1=(8, 32, 1), b_block_transfer_thread_cluster_arrange_order=(1, 0, 2), b_block_transfer_src_access_order=(1, 0, 2), b_block_transfer_src_vector_dim=2, b_block_transfer_src_scalar_per_vector=8, b_block_transfer_dst_scalar_per_vector_bk1=8, b_block_lds_extra_n=0, c_shuffle_m_xdl_per_wave_per_shuffle=1, c_shuffle_n_xdl_per_wave_per_shuffle=1, c_shuffle_block_transfer_cluster_lengths_m_block_m_per_block_n_block_n_per_block=(1, 16, 1, 16), c_shuffle_block_transfer_scalar_per_vector_n_per_block=(4,), block_gemm_pipeline_scheduler='BlockGemmPipelineScheduler::Intrawave', block_gemm_pipeline_version='BlockGemmPipelineVersion::v3', a_compute_dtype=None, b_compute_dtype=None)
*
* torch.__version__='2.6.0a.post20241218'
* torch.version.git_version=Unknown
*/
#include <exception>
#include <iostream>
#include <memory>
#include <random>
#include <vector>
// CK headers
#ifdef DEBUG_LOG
#define DEBUG_LOG_TMP DEBUG_LOG
#undef DEBUG_LOG
#else
#define DEBUG_LOG_TMP 0
#endif
#include "ck/ck.hpp"
#undef DEBUG_LOG
#define DEBUG_LOG DEBUG_LOG_TMP
#include "ck/utility/data_type.hpp"
#include "ck/library/utility/check_err.hpp"
#include "ck/library/utility/device_memory.hpp"
#include "ck/library/utility/fill.hpp"
#include "ck/library/utility/host_tensor.hpp"
#include "ck/library/utility/host_tensor_generator.hpp"
#include "ck/library/utility/literals.hpp"
// CK GEMM header(s)
#include "ck/tensor_operation/gpu/device/impl/device_gemm_multiple_d_xdl_cshuffle_v3.hpp"
// We compile all models with -fvisibility=hidden. Any symbols that need to be
// exposed in the final shared library must be declared with PT_EXPORT to make
// them visible.
#ifdef __GNUC__ // Applies to any compiler with GNU extensions (clang and g++)
#define PT_EXPORT __attribute__((__visibility__("default")))
#else
#ifdef _WIN32
#define PT_EXPORT __declspec(dllexport)
#else
#define PT_EXPORT
#endif
#endif
// as long as there is no custom arithmetic it's fine
using bfloat16 = uint16_t;
using float8_e4m3fnuz = uint8_t;
using float8_e5m2fnuz = uint8_t;
// CK globals
template <ck::index_t... Is>
using S = ck::Sequence<Is...>;
template<typename... Ts>
using Tuple = ck::Tuple<Ts...>;
using PassThrough = ck::tensor_operation::element_wise::PassThrough;
using Bilinear = ck::tensor_operation::element_wise::Bilinear;
using Scale = ck::tensor_operation::element_wise::Scale;
using ScaleAdd = ck::tensor_operation::element_wise::ScaleAdd;
using MultiplyMultiply = ck::tensor_operation::element_wise::MultiplyMultiply;
// see "composable_kernel/include/ck/utility/data_type.hpp"
using F8 = ck::f8_t;
using BF8 = ck::bf8_t;
using F16 = ck::half_t;
using F32 = float;
// using F64 = double;
using BF16 = ck::bhalf_t;
// using I32 = int32_t;
// using I8 = int8_t;
// using I4 = ck::int4_t;
#if DEBUG_LOG
static constexpr auto kDEBUG_LOG = 1;
#else
static constexpr auto kDEBUG_LOG = 0;
#endif
// CK GEMM globals
using Row = ck::tensor_layout::gemm::RowMajor;
using Col = ck::tensor_layout::gemm::ColumnMajor;
using BlockGemmPipelineScheduler = ck::BlockGemmPipelineScheduler;
using GemmSpecialization = ck::tensor_operation::device::GemmSpecialization;
using BlockGemmPipelineVersion = ck::BlockGemmPipelineVersion;
struct MultiplyMultiplyAdd {
template <typename E, typename C, typename D0, typename D1, typename D2>
__host__ __device__ constexpr void
operator()(E& e, const C& c, const D0& d0, const D1& d1, const D2& d2) const {
e = ck::type_convert<E>(
ck::type_convert<float>(c)
* ck::type_convert<float>(d0)
* ck::type_convert<float>(d1)
+ ck::type_convert<float>(d2)
);
}
};
// Gemm operator ck_devicegemm_multid_xdl_shuffle_v3_KalayoutVRow_KblayoutVCol_KdslayoutsV_KclayoutVRow_KaelementdtypeVBF16_KbelementdtypeVBF16_KdselementdtypesV_KcelementdtypeVBF16_KaccdtypeVF32_KcshuffledtypeVBF16_KaelementwiseopVPassThrough_KbelementwiseopVPassThrough_KcelementwiseopVPassThrough_KgemmspecializationVGemmSpecializationNPadding_KblocksizeV256_KmperblockV128_KnperblockV128_KkperblockV64_Kak1V8_Kbk1V8_KmperxdlV32_KnperxdlV32_KmxdlperwaveV2_KnxdlperwaveV2_Kablocktransferthreadclusterlengthsak0mak1V8x32x1_KablocktransferthreadclusterarrangeorderV1x0x2_KablocktransfersrcaccessorderV1x0x2_KablocktransfersrcvectordimV2_KablocktransfersrcscalarpervectorV8_Kablocktransferdstscalarpervectorak1V8_KablockldsextramV0_Kbblocktransferthreadclusterlengthsbk0nbk1V8x32x1_KbblocktransferthreadclusterarrangeorderV1x0x2_KbblocktransfersrcaccessorderV1x0x2_KbblocktransfersrcvectordimV2_KbblocktransfersrcscalarpervectorV8_Kbblocktransferdstscalarpervectorbk1V8_KbblockldsextranV0_KcshufflemxdlperwavepershuffleV1_KcshufflenxdlperwavepershuffleV1_KcshuffleblocktransferclusterlengthsmblockmperblocknblocknperblockV1x16x1x16_KcshuffleblocktransferscalarpervectornperblockV4_KblockgemmpipelineschedulerVBlockGemmPipelineSchedulerIntrawave_KblockgemmpipelineversionVBlockGemmPipelineVersionv3_KacomputedtypeVNone_KbcomputedtypeVNone
using Operation_ck_devicegemm_multid_xdl_shuffle_v3_KalayoutVRow_KblayoutVCol_KdslayoutsV_KclayoutVRow_KaelementdtypeVBF16_KbelementdtypeVBF16_KdselementdtypesV_KcelementdtypeVBF16_KaccdtypeVF32_KcshuffledtypeVBF16_KaelementwiseopVPassThrough_KbelementwiseopVPassThrough_KcelementwiseopVPassThrough_KgemmspecializationVGemmSpecializationNPadding_KblocksizeV256_KmperblockV128_KnperblockV128_KkperblockV64_Kak1V8_Kbk1V8_KmperxdlV32_KnperxdlV32_KmxdlperwaveV2_KnxdlperwaveV2_Kablocktransferthreadclusterlengthsak0mak1V8x32x1_KablocktransferthreadclusterarrangeorderV1x0x2_KablocktransfersrcaccessorderV1x0x2_KablocktransfersrcvectordimV2_KablocktransfersrcscalarpervectorV8_Kablocktransferdstscalarpervectorak1V8_KablockldsextramV0_Kbblocktransferthreadclusterlengthsbk0nbk1V8x32x1_KbblocktransferthreadclusterarrangeorderV1x0x2_KbblocktransfersrcaccessorderV1x0x2_KbblocktransfersrcvectordimV2_KbblocktransfersrcscalarpervectorV8_Kbblocktransferdstscalarpervectorbk1V8_KbblockldsextranV0_KcshufflemxdlperwavepershuffleV1_KcshufflenxdlperwavepershuffleV1_KcshuffleblocktransferclusterlengthsmblockmperblocknblocknperblockV1x16x1x16_KcshuffleblocktransferscalarpervectornperblockV4_KblockgemmpipelineschedulerVBlockGemmPipelineSchedulerIntrawave_KblockgemmpipelineversionVBlockGemmPipelineVersionv3_KacomputedtypeVNone_KbcomputedtypeVNone =
ck::tensor_operation::device::DeviceGemmMultiD_Xdl_CShuffle_V3<
/* a_layout */ Row,
/* b_layout */ Col,
/* ds_layouts */ Tuple<>,
/* c_layout */ Row,
/* a_element_dtype */ BF16,
/* b_element_dtype */ BF16,
/* ds_element_dtypes */ Tuple<>,
/* c_element_dtype */ BF16,
/* acc_dtype */ F32,
/* c_shuffle_dtype */ BF16,
/* a_elementwise_op */ PassThrough,
/* b_elementwise_op */ PassThrough,
/* c_elementwise_op */ PassThrough,
/* gemm_specialization */ GemmSpecialization::NPadding,
/* block_size */ 256,
/* m_per_block */ 128,
/* n_per_block */ 128,
/* k_per_block */ 64,
/* a_k1 */ 8,
/* b_k1 */ 8,
/* m_per_xdl */ 32,
/* n_per_xdl */ 32,
/* m_xdl_per_wave */ 2,
/* n_xdl_per_wave */ 2,
/* a_block_transfer_thread_cluster_lengths_ak0_m_ak1 */ S<8, 32, 1>,
/* a_block_transfer_thread_cluster_arrange_order */ S<1, 0, 2>,
/* a_block_transfer_src_access_order */ S<1, 0, 2>,
/* a_block_transfer_src_vector_dim */ 2,
/* a_block_transfer_src_scalar_per_vector */ 8,
/* a_block_transfer_dst_scalar_per_vector_ak1 */ 8,
/* a_block_lds_extra_m */ 0,
/* b_block_transfer_thread_cluster_lengths_bk0_n_bk1 */ S<8, 32, 1>,
/* b_block_transfer_thread_cluster_arrange_order */ S<1, 0, 2>,
/* b_block_transfer_src_access_order */ S<1, 0, 2>,
/* b_block_transfer_src_vector_dim */ 2,
/* b_block_transfer_src_scalar_per_vector */ 8,
/* b_block_transfer_dst_scalar_per_vector_bk1 */ 8,
/* b_block_lds_extra_n */ 0,
/* c_shuffle_m_xdl_per_wave_per_shuffle */ 1,
/* c_shuffle_n_xdl_per_wave_per_shuffle */ 1,
/* c_shuffle_block_transfer_cluster_lengths_m_block_m_per_block_n_block_n_per_block */ S<1, 16, 1, 16>,
/* c_shuffle_block_transfer_scalar_per_vector_n_per_block */ S<4>,
/* block_gemm_pipeline_scheduler */ BlockGemmPipelineScheduler::Intrawave,
/* block_gemm_pipeline_version */ BlockGemmPipelineVersion::v3>;
extern "C" {
PT_EXPORT int rocm_fused_1(const bfloat16* X, const bfloat16* W, bfloat16* Y, int32_t M, int32_t N, int32_t K, int32_t LDA, int32_t LDB, int32_t LDC, int32_t LDD, size_t* workspace_size, uint8_t* workspace, hipStream_t stream) {
auto gemm =
Operation_ck_devicegemm_multid_xdl_shuffle_v3_KalayoutVRow_KblayoutVCol_KdslayoutsV_KclayoutVRow_KaelementdtypeVBF16_KbelementdtypeVBF16_KdselementdtypesV_KcelementdtypeVBF16_KaccdtypeVF32_KcshuffledtypeVBF16_KaelementwiseopVPassThrough_KbelementwiseopVPassThrough_KcelementwiseopVPassThrough_KgemmspecializationVGemmSpecializationNPadding_KblocksizeV256_KmperblockV128_KnperblockV128_KkperblockV64_Kak1V8_Kbk1V8_KmperxdlV32_KnperxdlV32_KmxdlperwaveV2_KnxdlperwaveV2_Kablocktransferthreadclusterlengthsak0mak1V8x32x1_KablocktransferthreadclusterarrangeorderV1x0x2_KablocktransfersrcaccessorderV1x0x2_KablocktransfersrcvectordimV2_KablocktransfersrcscalarpervectorV8_Kablocktransferdstscalarpervectorak1V8_KablockldsextramV0_Kbblocktransferthreadclusterlengthsbk0nbk1V8x32x1_KbblocktransferthreadclusterarrangeorderV1x0x2_KbblocktransfersrcaccessorderV1x0x2_KbblocktransfersrcvectordimV2_KbblocktransfersrcscalarpervectorV8_Kbblocktransferdstscalarpervectorbk1V8_KbblockldsextranV0_KcshufflemxdlperwavepershuffleV1_KcshufflenxdlperwavepershuffleV1_KcshuffleblocktransferclusterlengthsmblockmperblocknblocknperblockV1x16x1x16_KcshuffleblocktransferscalarpervectornperblockV4_KblockgemmpipelineschedulerVBlockGemmPipelineSchedulerIntrawave_KblockgemmpipelineversionVBlockGemmPipelineVersionv3_KacomputedtypeVNone_KbcomputedtypeVNone {};
auto invoker = gemm.MakeInvoker();
auto argument = gemm.MakeArgument(
reinterpret_cast<const BF16*>(X),
reinterpret_cast<const BF16*>(W),
std::array<const void*, 0>{ },
reinterpret_cast<BF16*>(Y),
M,
N,
K,
LDA,
LDB,
std::array<ck::index_t, 0>{ },
LDC,
1, // kBatch
PassThrough {},
PassThrough {},
PassThrough {} // c_elementwise_op
);
if (!gemm.IsSupportedArgument(argument)) {
// we do our best to statically avoid this case in `filter_op`
std::cerr << "invalid argument for gemm instance " << gemm.GetTypeString() << std::endl;
argument.Print();
return -23;
}
if (workspace_size) {
*workspace_size = gemm.GetWorkSpaceSize(&argument);
return 0;
}
// run the kernel
#ifdef GENERATE_CK_STANDALONE_RUNNER
const auto stream_config = StreamConfig{
stream,
/* time kernel */ 1,
/* log level */ 1,
/* n_cold_iter */ 100,
/* n_hot_iter */ 100,
/* flush_l2_cache */ 1,
/* rotate_count */ 5};
#else
const auto stream_config = StreamConfig{stream, /* time kernel */ false, /* log level */ 0};
#endif
const float elapsed_time = invoker.Run(argument, stream_config);
#ifdef GENERATE_CK_STANDALONE_RUNNER
std::cout << "elapsed time: " << elapsed_time << " ms" << std::endl;
#else
(void)elapsed_time;
#endif
return 0;
} // kernel definition
} // extern C
```
Also seeing some errors:
```
rank1]:E1220 13:17:04.168000 677380 torch/_inductor/select_algorithm.py:2003] [3/1] CUDA compilation error during autotuning:
[rank1]:E1220 13:17:04.168000 677380 torch/_inductor/select_algorithm.py:2003] [3/1] C++ compile error
[rank1]:E1220 13:17:04.168000 677380 torch/_inductor/select_algorithm.py:2003] [3/1]
[rank1]:E1220 13:17:04.168000 677380 torch/_inductor/select_algorithm.py:2003] [3/1] Command:
[rank1]:E1220 13:17:04.168000 677380 torch/_inductor/select_algorithm.py:2003] [3/1] /nix/store/baxdbiqlbq3xgfrycxz0l3lhgqr30gpg-rocmcxx/bin/clang -O3 -x hip -std=c++17 --offload-arch=gfx908 -fno-gpu-rdc -fPIC -mllvm -amdgpu-early-inline-all=true -mllvm -amdgpu-function-calls=false -mllvm -enable-post-misched=0 -DNDEBUG -DCK_TILE_FMHA_FWD_FAST_EXP2=1 -fgpu-flush-denormals-to-zero -ffast-math -I/nix/store/jx8h8d85ymv8l950czxz8wpvd562mblx-unpack-composable_kernel-6.4.0a20241217/lib/python3.12/site-packages/ck4inductor/include -I/nix/store/jx8h8d85ymv8l950czxz8wpvd562mblx-unpack-composable_kernel-6.4.0a20241217/lib/python3.12/site-packages/ck4inductor/library/include -I/nix/store/qp3zdqix9pwcc505jgmlf27xcdq9v8xq-rocm-hip-libraries-meta/include -include __clang_hip_runtime_wrapper.h -L/nix/store/qp3zdqix9pwcc505jgmlf27xcdq9v8xq-rocm-hip-libraries-meta/lib -L/nix/store/qp3zdqix9pwcc505jgmlf27xcdq9v8xq-rocm-hip-libraries-meta/hip/lib -lamdhip64 -shared -o ~/ml-cache/torchinductor/q3/cq3qdsttyk3ptde6djpzxm46uboeevoyov3pe365w5q5xy36a525.so ~/ml-cache/torchinductor/q3/cq3qdsttyk3ptde6djpzxm46uboeevoyov3pe365w5q5xy36a525.cpp
$ cat ~/ml-cache/torchinductor/q3/cq3qdsttyk3ptde6djpzxm46uboeevoyov3pe365w5q5xy36a525.cpp
/**
* Generated code for CK inductor backend
* See torch._inductor.codegen.rocm.ck_universal_gemm_template.CKGemmTemplate
*
* Template instance CKGemmOperation(a_layout='Col', b_layout='Row', ds_layouts=(), c_layout='Row', a_element_dtype='BF16', b_element_dtype='BF16', ds_element_dtypes=(), c_element_dtype='BF16', acc_dtype='F32', c_shuffle_dtype='BF16', a_elementwise_op='PassThrough', b_elementwise_op='PassThrough', c_elementwise_op='PassThrough', gemm_specialization='GemmSpecialization::MNPadding', block_size=256, m_per_block=224, n_per_block=256, k_per_block=64, a_k1=8, b_k1=8, m_per_xdl=16, n_per_xdl=16, m_xdl_per_wave=7, n_xdl_per_wave=8, a_block_transfer_thread_cluster_lengths_ak0_m_ak1=(8, 32, 1), a_block_transfer_thread_cluster_arrange_order=(0, 2, 1), a_block_transfer_src_access_order=(0, 2, 1), a_block_transfer_src_vector_dim=1, a_block_transfer_src_scalar_per_vector=8, a_block_transfer_dst_scalar_per_vector_ak1=8, a_block_lds_extra_m=0, b_block_transfer_thread_cluster_lengths_bk0_n_bk1=(8, 32, 1), b_block_transfer_thread_cluster_arrange_order=(0, 2, 1), b_block_transfer_src_access_order=(0, 2, 1), b_block_transfer_src_vector_dim=1, b_block_transfer_src_scalar_per_vector=8, b_block_transfer_dst_scalar_per_vector_bk1=8, b_block_lds_extra_n=0, c_shuffle_m_xdl_per_wave_per_shuffle=1, c_shuffle_n_xdl_per_wave_per_shuffle=2, c_shuffle_block_transfer_cluster_lengths_m_block_m_per_block_n_block_n_per_block=(1, 32, 1, 8), c_shuffle_block_transfer_scalar_per_vector_n_per_block=(8,), block_gemm_pipeline_scheduler='BlockGemmPipelineScheduler::Intrawave', block_gemm_pipeline_version='BlockGemmPipelineVersion::v3', a_compute_dtype=None, b_compute_dtype=None)
*
* torch.__version__='2.6.0a.post20241218'
* torch.version.git_version=Unknown
*/
#include <exception>
#include <iostream>
#include <memory>
#include <random>
#include <vector>
// CK headers
#ifdef DEBUG_LOG
#define DEBUG_LOG_TMP DEBUG_LOG
#undef DEBUG_LOG
#else
#define DEBUG_LOG_TMP 0
#endif
#include "ck/ck.hpp"
#undef DEBUG_LOG
#define DEBUG_LOG DEBUG_LOG_TMP
#include "ck/utility/data_type.hpp"
#include "ck/library/utility/check_err.hpp"
#include "ck/library/utility/device_memory.hpp"
#include "ck/library/utility/fill.hpp"
#include "ck/library/utility/host_tensor.hpp"
#include "ck/library/utility/host_tensor_generator.hpp"
#include "ck/library/utility/literals.hpp"
// CK GEMM header(s)
#include "ck/tensor_operation/gpu/device/impl/device_gemm_multiple_d_xdl_cshuffle_v3.hpp"
// We compile all models with -fvisibility=hidden. Any symbols that need to be
// exposed in the final shared library must be declared with PT_EXPORT to make
// them visible.
#ifdef __GNUC__ // Applies to any compiler with GNU extensions (clang and g++)
#define PT_EXPORT __attribute__((__visibility__("default")))
#else
#ifdef _WIN32
#define PT_EXPORT __declspec(dllexport)
#else
#define PT_EXPORT
#endif
#endif
// as long as there is no custom arithmetic it's fine
using bfloat16 = uint16_t;
using float8_e4m3fnuz = uint8_t;
using float8_e5m2fnuz = uint8_t;
// CK globals
template <ck::index_t... Is>
using S = ck::Sequence<Is...>;
template<typename... Ts>
using Tuple = ck::Tuple<Ts...>;
using PassThrough = ck::tensor_operation::element_wise::PassThrough;
using Bilinear = ck::tensor_operation::element_wise::Bilinear;
using Scale = ck::tensor_operation::element_wise::Scale;
using ScaleAdd = ck::tensor_operation::element_wise::ScaleAdd;
using MultiplyMultiply = ck::tensor_operation::element_wise::MultiplyMultiply;
// see "composable_kernel/include/ck/utility/data_type.hpp"
using F8 = ck::f8_t;
using BF8 = ck::bf8_t;
using F16 = ck::half_t;
using F32 = float;
// using F64 = double;
using BF16 = ck::bhalf_t;
// using I32 = int32_t;
// using I8 = int8_t;
// using I4 = ck::int4_t;
#if DEBUG_LOG
static constexpr auto kDEBUG_LOG = 1;
#else
static constexpr auto kDEBUG_LOG = 0;
#endif
// CK GEMM globals
using Row = ck::tensor_layout::gemm::RowMajor;
using Col = ck::tensor_layout::gemm::ColumnMajor;
using BlockGemmPipelineScheduler = ck::BlockGemmPipelineScheduler;
using GemmSpecialization = ck::tensor_operation::device::GemmSpecialization;
using BlockGemmPipelineVersion = ck::BlockGemmPipelineVersion;
struct MultiplyMultiplyAdd {
template <typename E, typename C, typename D0, typename D1, typename D2>
__host__ __device__ constexpr void
operator()(E& e, const C& c, const D0& d0, const D1& d1, const D2& d2) const {
e = ck::type_convert<E>(
ck::type_convert<float>(c)
* ck::type_convert<float>(d0)
* ck::type_convert<float>(d1)
+ ck::type_convert<float>(d2)
);
}
};
// Gemm operator ck_devicegemm_multid_xdl_shuffle_v3_KalayoutVCol_KblayoutVRow_KdslayoutsV_KclayoutVRow_KaelementdtypeVBF16_KbelementdtypeVBF16_KdselementdtypesV_KcelementdtypeVBF16_KaccdtypeVF32_KcshuffledtypeVBF16_KaelementwiseopVPassThrough_KbelementwiseopVPassThrough_KcelementwiseopVPassThrough_KgemmspecializationVGemmSpecializationMNPadding_KblocksizeV256_KmperblockV224_KnperblockV256_KkperblockV64_Kak1V8_Kbk1V8_KmperxdlV16_KnperxdlV16_KmxdlperwaveV7_KnxdlperwaveV8_Kablocktransferthreadclusterlengthsak0mak1V8x32x1_KablocktransferthreadclusterarrangeorderV0x2x1_KablocktransfersrcaccessorderV0x2x1_KablocktransfersrcvectordimV1_KablocktransfersrcscalarpervectorV8_Kablocktransferdstscalarpervectorak1V8_KablockldsextramV0_Kbblocktransferthreadclusterlengthsbk0nbk1V8x32x1_KbblocktransferthreadclusterarrangeorderV0x2x1_KbblocktransfersrcaccessorderV0x2x1_KbblocktransfersrcvectordimV1_KbblocktransfersrcscalarpervectorV8_Kbblocktransferdstscalarpervectorbk1V8_KbblockldsextranV0_KcshufflemxdlperwavepershuffleV1_KcshufflenxdlperwavepershuffleV2_KcshuffleblocktransferclusterlengthsmblockmperblocknblocknperblockV1x32x1x8_KcshuffleblocktransferscalarpervectornperblockV8_KblockgemmpipelineschedulerVBlockGemmPipelineSchedulerIntrawave_KblockgemmpipelineversionVBlockGemmPipelineVersionv3_KacomputedtypeVNone_KbcomputedtypeVNone
using Operation_ck_devicegemm_multid_xdl_shuffle_v3_KalayoutVCol_KblayoutVRow_KdslayoutsV_KclayoutVRow_KaelementdtypeVBF16_KbelementdtypeVBF16_KdselementdtypesV_KcelementdtypeVBF16_KaccdtypeVF32_KcshuffledtypeVBF16_KaelementwiseopVPassThrough_KbelementwiseopVPassThrough_KcelementwiseopVPassThrough_KgemmspecializationVGemmSpecializationMNPadding_KblocksizeV256_KmperblockV224_KnperblockV256_KkperblockV64_Kak1V8_Kbk1V8_KmperxdlV16_KnperxdlV16_KmxdlperwaveV7_KnxdlperwaveV8_Kablocktransferthreadclusterlengthsak0mak1V8x32x1_KablocktransferthreadclusterarrangeorderV0x2x1_KablocktransfersrcaccessorderV0x2x1_KablocktransfersrcvectordimV1_KablocktransfersrcscalarpervectorV8_Kablocktransferdstscalarpervectorak1V8_KablockldsextramV0_Kbblocktransferthreadclusterlengthsbk0nbk1V8x32x1_KbblocktransferthreadclusterarrangeorderV0x2x1_KbblocktransfersrcaccessorderV0x2x1_KbblocktransfersrcvectordimV1_KbblocktransfersrcscalarpervectorV8_Kbblocktransferdstscalarpervectorbk1V8_KbblockldsextranV0_KcshufflemxdlperwavepershuffleV1_KcshufflenxdlperwavepershuffleV2_KcshuffleblocktransferclusterlengthsmblockmperblocknblocknperblockV1x32x1x8_KcshuffleblocktransferscalarpervectornperblockV8_KblockgemmpipelineschedulerVBlockGemmPipelineSchedulerIntrawave_KblockgemmpipelineversionVBlockGemmPipelineVersionv3_KacomputedtypeVNone_KbcomputedtypeVNone =
ck::tensor_operation::device::DeviceGemmMultiD_Xdl_CShuffle_V3<
/* a_layout */ Col,
/* b_layout */ Row,
/* ds_layouts */ Tuple<>,
/* c_layout */ Row,
/* a_element_dtype */ BF16,
/* b_element_dtype */ BF16,
/* ds_element_dtypes */ Tuple<>,
/* c_element_dtype */ BF16,
/* acc_dtype */ F32,
/* c_shuffle_dtype */ BF16,
/* a_elementwise_op */ PassThrough,
/* b_elementwise_op */ PassThrough,
/* c_elementwise_op */ PassThrough,
/* gemm_specialization */ GemmSpecialization::MNPadding,
/* block_size */ 256,
/* m_per_block */ 224,
/* n_per_block */ 256,
/* k_per_block */ 64,
/* a_k1 */ 8,
/* b_k1 */ 8,
/* m_per_xdl */ 16,
/* n_per_xdl */ 16,
/* m_xdl_per_wave */ 7,
/* n_xdl_per_wave */ 8,
/* a_block_transfer_thread_cluster_lengths_ak0_m_ak1 */ S<8, 32, 1>,
/* a_block_transfer_thread_cluster_arrange_order */ S<0, 2, 1>,
/* a_block_transfer_src_access_order */ S<0, 2, 1>,
/* a_block_transfer_src_vector_dim */ 1,
/* a_block_transfer_src_scalar_per_vector */ 8,
/* a_block_transfer_dst_scalar_per_vector_ak1 */ 8,
/* a_block_lds_extra_m */ 0,
/* b_block_transfer_thread_cluster_lengths_bk0_n_bk1 */ S<8, 32, 1>,
/* b_block_transfer_thread_cluster_arrange_order */ S<0, 2, 1>,
/* b_block_transfer_src_access_order */ S<0, 2, 1>,
/* b_block_transfer_src_vector_dim */ 1,
/* b_block_transfer_src_scalar_per_vector */ 8,
/* b_block_transfer_dst_scalar_per_vector_bk1 */ 8,
/* b_block_lds_extra_n */ 0,
/* c_shuffle_m_xdl_per_wave_per_shuffle */ 1,
/* c_shuffle_n_xdl_per_wave_per_shuffle */ 2,
/* c_shuffle_block_transfer_cluster_lengths_m_block_m_per_block_n_block_n_per_block */ S<1, 32, 1, 8>,
/* c_shuffle_block_transfer_scalar_per_vector_n_per_block */ S<8>,
/* block_gemm_pipeline_scheduler */ BlockGemmPipelineScheduler::Intrawave,
/* block_gemm_pipeline_version */ BlockGemmPipelineVersion::v3>;
extern "C" {
PT_EXPORT int rocm_ck_gemm_template(const bfloat16* X, const bfloat16* W, bfloat16* Y, int32_t M, int32_t N, int32_t K, int32_t LDA, int32_t LDB, int32_t LDC, int32_t LDD, size_t* workspace_size, uint8_t* workspace, hipStream_t stream) {
auto gemm =
Operation_ck_devicegemm_multid_xdl_shuffle_v3_KalayoutVCol_KblayoutVRow_KdslayoutsV_KclayoutVRow_KaelementdtypeVBF16_KbelementdtypeVBF16_KdselementdtypesV_KcelementdtypeVBF16_KaccdtypeVF32_KcshuffledtypeVBF16_KaelementwiseopVPassThrough_KbelementwiseopVPassThrough_KcelementwiseopVPassThrough_KgemmspecializationVGemmSpecializationMNPadding_KblocksizeV256_KmperblockV224_KnperblockV256_KkperblockV64_Kak1V8_Kbk1V8_KmperxdlV16_KnperxdlV16_KmxdlperwaveV7_KnxdlperwaveV8_Kablocktransferthreadclusterlengthsak0mak1V8x32x1_KablocktransferthreadclusterarrangeorderV0x2x1_KablocktransfersrcaccessorderV0x2x1_KablocktransfersrcvectordimV1_KablocktransfersrcscalarpervectorV8_Kablocktransferdstscalarpervectorak1V8_KablockldsextramV0_Kbblocktransferthreadclusterlengthsbk0nbk1V8x32x1_KbblocktransferthreadclusterarrangeorderV0x2x1_KbblocktransfersrcaccessorderV0x2x1_KbblocktransfersrcvectordimV1_KbblocktransfersrcscalarpervectorV8_Kbblocktransferdstscalarpervectorbk1V8_KbblockldsextranV0_KcshufflemxdlperwavepershuffleV1_KcshufflenxdlperwavepershuffleV2_KcshuffleblocktransferclusterlengthsmblockmperblocknblocknperblockV1x32x1x8_KcshuffleblocktransferscalarpervectornperblockV8_KblockgemmpipelineschedulerVBlockGemmPipelineSchedulerIntrawave_KblockgemmpipelineversionVBlockGemmPipelineVersionv3_KacomputedtypeVNone_KbcomputedtypeVNone {};
auto invoker = gemm.MakeInvoker();
auto argument = gemm.MakeArgument(
reinterpret_cast<const BF16*>(X),
reinterpret_cast<const BF16*>(W),
std::array<const void*, 0>{ },
reinterpret_cast<BF16*>(Y),
M,
N,
K,
LDA,
LDB,
std::array<ck::index_t, 0>{ },
LDC,
1, // kBatch
PassThrough {},
PassThrough {},
PassThrough {} // c_elementwise_op
);
if (!gemm.IsSupportedArgument(argument)) {
// we do our best to statically avoid this case in `filter_op`
std::cerr << "invalid argument for gemm instance " << gemm.GetTypeString() << std::endl;
argument.Print();
return -23;
}
if (workspace_size) {
*workspace_size = gemm.GetWorkSpaceSize(&argument);
return 0;
}
// run the kernel
#ifdef GENERATE_CK_STANDALONE_RUNNER
const auto stream_config = StreamConfig{
stream,
/* time kernel */ 1,
/* log level */ 1,
/* n_cold_iter */ 100,
/* n_hot_iter */ 100,
/* flush_l2_cache */ 1,
/* rotate_count */ 5};
#else
const auto stream_config = StreamConfig{stream, /* time kernel */ false, /* log level */ 0};
#endif
const float elapsed_time = invoker.Run(argument, stream_config);
#ifdef GENERATE_CK_STANDALONE_RUNNER
std::cout << "elapsed time: " << elapsed_time << " ms" << std::endl;
#else
(void)elapsed_time;
#endif
return 0;
} // kernel definition
} // extern C
```
</details>
### Versions
<details>
<summary>Very long version info on python3.12-torch-2.6.0a-nightly-20241218:</summary>
```
env | grep -E -i '(torch|hsa|rocm|rocr|ccl).*='
TORCHINDUCTOR_CK_DIR=/nix/store/8wrnjabmr02rhknibrjr6qya3fimml2f-python3.12-ck4inductor-6.4.0a20241217/lib/python3.12/site-packages/ck4inductor
TORCHINDUCTOR_CACHE_DIR=~/ml-cache/torchinductor
TORCHINDUCTOR_AUTOGRAD_CACHE=1
ROCM_BUILD_ID=release-nixos-60300
TORCHINDUCTOR_MAX_AUTOTUNE_GEMM_BACKENDS=CK,ATEN,TRITON,CPP
ROCM_LIBPATCH_VERSION=60300
ROCM_PATH=/nix/store/8zk35m6vnbcf339zi9k6jra4xs4ipd49-rocm-hip-libraries-meta
TORCHINDUCTOR_FX_GRAPH_CACHE=1
TORCH_ROCM_FA_PREFER_CK=1
acl-2.3.2
aotriton-unstable-20241122
attr-2.5.2
bash-5.2p37
binutils-2.43.1
binutils-2.43.1-lib
binutils-wrapper-2.43.1
blas-3
blas-3-dev
brotli-1.1.0-lib
bzip2-1.0.8
bzip2-1.0.8-bin
bzip2-1.0.8-dev
clang-rocm-18.0.0-d6e55e17f328a495bc32fddb7826e673ac9766ec
clang-rocm-18.0.0-d6e55e17f328a495bc32fddb7826e673ac9766ec-dev
clang-rocm-18.0.0-d6e55e17f328a495bc32fddb7826e673ac9766ec-lib
clr-6.3.0
clr-6.3.0-icd
cmake-3.30.5
compiler-rt-libc-18.0.0-d6e55e17f328a495bc32fddb7826e673ac9766ec
compiler-rt-libc-18.0.0-d6e55e17f328a495bc32fddb7826e673ac9766ec-dev
coreutils-9.5
curl-8.11.0
elfutils-0.191
expand-response-params
expat-2.6.4
expat-2.6.4-dev
find-xml-catalogs-hook
gcc-13.3.0
gcc-13.3.0-lib
gcc-13.3.0-libgcc
gcc-prefix
gcc-wrapper-13.3.0
gdbm-1.24
gdbm-1.24-dev
gdbm-1.24-lib
getopt-1.1.6
gfortran-13.3.0
gfortran-13.3.0-lib
gfortran-13.3.0-libgcc
gfortran-wrapper-13.3.0
glibc-2.40-36
glibc-2.40-36-bin
glibc-2.40-36-dev
gmp-6.3.0
gmp-with-cxx-6.3.0
gnugrep-3.11
hipblas-6.3.0
hipblas-common-unstable
hipblaslt-6.3.0
hipcub-6.3.0
hipfft-6.3.0
hipfort-6.3.0
hipify-6.3.0
hiprand-6.3.0
hipsolver-6.3.0
hipsparse-6.3.0
hwdata-0.388
hwloc-2.11.2-lib
isl-0.20
keyutils-1.6.3-lib
krb5-1.21.3-lib
libarchive-3.7.7-lib
libcap-2.70-lib
libdrm-2.4.123
libevent-2.1.12
libfabric-1.22.0
libffi-3.4.6
libffi-3.4.6-dev
libgcrypt-1.10.3-lib
libglvnd-1.7.0
libgpg-error-1.50
libidn2-2.3.7
libmpc-1.3.1
libnl-3.10.0
libpciaccess-0.18.1
libpfm-4.13.0
libpsl-0.21.5
libpsm2-12.0.1
libsodium-1.0.20
libssh2-1.11.1
libunistring-1.2
libunwind-18.0.0-d6e55e17f328a495bc32fddb7826e673ac9766ec
libunwind-18.0.0-d6e55e17f328a495bc32fddb7826e673ac9766ec-dev
libuv-1.48.0
libX11-1.8.10
libXau-1.0.11
libxcb-1.17.0
libxcrypt-4.4.36
libXdmcp-1.1.5
libXext-1.3.6
libxml2-2.13.4
libxml2-2.13.4-bin
libxml2-2.13.4-dev
libyaml-0.2.5
linux-headers-6.10
lld-18.0.0-d6e55e17f328a495bc32fddb7826e673ac9766ec
lld-18.0.0-d6e55e17f328a495bc32fddb7826e673ac9766ec-dev
lld-18.0.0-d6e55e17f328a495bc32fddb7826e673ac9766ec-lib
llhttp-9.2.1
llvm-18.0.0-d6e55e17f328a495bc32fddb7826e673ac9766ec
llvm-18.0.0-d6e55e17f328a495bc32fddb7826e673ac9766ec-dev
llvm-18.0.0-d6e55e17f328a495bc32fddb7826e673ac9766ec-lib
llvm-binutils-18.0.0-d6e55e17f328a495bc32fddb7826e673ac9766ec
llvm-binutils-wrapper-18.0.0-d6e55e17f328a495bc32fddb7826e673ac9766ec
lsb_release
mailcap-2.1.54
miopen-6.3.0
miopen-gfx1030.kdb
miopen-gfx900.kdb
miopen-gfx906.kdb
miopen-gfx908.kdb
miopen-gfx90a.kdb
mpdecimal-4.0.0
mpdecimal-4.0.0-cxx
mpdecimal-4.0.0-dev
mpfr-4.2.1
mpich-4.2.3
mpich-4.2.3-doc
mpich-4.2.3-man
munge-0.5.16
ncurses-6.4.20221231
ncurses-6.4.20221231-dev
ncurses-6.4.20221231-man
nghttp2-1.64.0-lib
nss-cacert-3.104
numactl-2.0.18
openblas-0.3.28
openmp-18.0.0-d6e55e17f328a495bc32fddb7826e673ac9766ec
openmpi-5.0.6
openmpi-5.0.6-man
openssl-3.3.2
openssl-3.3.2-bin
openssl-3.3.2-dev
pcre2-10.44
perl-5.40.0
pkg-config-0.29.2
pkg-config-wrapper-0.29.2
pmix-5.0.4
prrte-3.0.7
publicsuffix-list-0-unstable-2024-10-25
python3.12-aiodns-3.2.0
python3.12-aiohappyeyeballs-2.4.2
python3.12-aiohttp-3.10.10
python3.12-aiosignal-1.3.1
python3.12-async-timeout-4.0.3
python3.12-attrs-24.2.0
python3.12-bcrypt-4.2.0
python3.12-brotli-1.1.0
python3.12-brotlicffi-1.1.0.0
python3.12-certifi-2024.08.30
python3.12-cffi-1.17.1
python3.12-charset-normalizer-3.3.2
python3.12-ck4inductor-6.4.0a20241217
python3.12-cryptography-43.0.1
python3.12-filelock-3.16.1
python3.12-frozenlist-1.4.1
python3.12-fsspec-2024.3.0
python3.12-huggingface-hub-0.26.2
python3.12-idna-3.10
python3.12-joblib-1.4.2
python3.12-lz4-4.3.3
python3.12-markdown-it-py-3.0.0
python3.12-mdurl-0.1.2
python3.12-msgpack-1.1.0
python3.12-multidict-6.1.0
python3.12-numpy-1.26.4
python3.12-orjson-3.10.7
python3.12-packaging-24.1
python3.12-pandas-2.2.3
python3.12-paramiko-3.5.0
python3.12-pip-24.0
python3.12-psutil-6.0.0
python3.12-pycares-4.4.0
python3.12-pycparser-2.22
python3.12-pygments-2.18.0
python3.12-pynacl-1.5.0
python3.12-pyspnego-0.11.1
python3.12-python-dateutil-2.9.0.post0
python3.12-pytz-2024.2
python3.12-pyyaml-6.0.2
python3.12-requests-2.32.3
python3.12-rich-13.8.1
python3.12-simplejson-3.19.3
python3.12-six-1.16.0
python3.12-smbprotocol-1.14.0
python3.12-tensile-6.3.0
python3.12-tensilelite-6.3.0
python3.12-torch-2.6.0a-nightly-20241218
python3.12-torch-2.6.0a-nightly-20241218-lib
python3.12-tqdm-4.66.5
python3.12-typing-extensions-4.12.2
python3.12-tzdata-2024.2
python3.12-ujson-5.10.0
python3.12-urllib3-2.2.3
python3.12-yarl-1.13.1
python3.12-zstd-1.5.5.1
python3-3.12.7
rccl-6.3.0
rdma-core-54.0
readline-8.2p13
readline-8.2p13-dev
rhash-1.4.4
rocalution-6.3.0
rocblas-6.3.0
rocfft-6.3.0
rocm-comgr-6.3.0
rocm-core-6.3.0
rocmcxx
rocm-device-libs-6.3.0
rocminfo-6.3.0
rocm-llvm-merge
rocm-merged
rocm-runtime-6.3.0
rocm-smi-6.3.0
rocprim-6.3.0
rocprofiler-register-6.3.0
rocrand-6.3.0
rocsolver-6.3.0
rocsparse-6.3.0
rocthrust-6.3.0
roctracer-6.3.0
shell-deps
source
sqlite-3.46.1
sqlite-3.46.1-bin
sqlite-3.46.1-dev
strip.sh
systemd-minimal-libs-256.7
tzdata-2024b
ucc-1.3.0
ucx-1.17.0
unpack-composable_kernel-6.4.0a20241217
util-linux-minimal-2.39.4-lib
xgcc-13.3.0-libgcc
xz-5.6.3
xz-5.6.3-bin
xz-5.6.3-dev
zlib-1.3.1
zlib-1.3.1-dev
zstd-1.5.6
zstd-1.5.6-bin
zstd-1.5.6-dev
Collecting environment information...
PyTorch version: 2.6.0a.post20241218
Is debug build: False
CUDA used to build PyTorch: N/A
ROCM used to build PyTorch: 6.3.42131-0
OS: NixOS 25.05 (Warbler) (x86_64)
GCC version: Could not collect
Clang version: 18.0.0git
CMake version: version 3.30.5
Libc version: glibc-2.40
Python version: 3.12.7 (main, Oct 1 2024, 02:05:46) [GCC 13.3.0] (64-bit runtime)
Python platform: Linux-6.12.4-x86_64-with-glibc2.40
Is CUDA available: True
CUDA runtime version: Could not collect
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration: AMD Instinct MI100 (gfx908:sramecc+:xnack-)
Nvidia driver version: Could not collect
cuDNN version: Could not collect
HIP runtime version: 6.3.42131
MIOpen runtime version: 3.3.0
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 48 bits physical, 48 bits virtual
Byte Order: Little Endian
CPU(s): 128
On-line CPU(s) list: 0-127
Vendor ID: AuthenticAMD
Model name: AMD Eng Sample: 100-000000425_37/24_N
CPU family: 25
Model: 1
Thread(s) per core: 2
Core(s) per socket: 64
Socket(s): 1
Stepping: 0
Frequency boost: enabled
CPU(s) scaling MHz: 35%
CPU max MHz: 5616.0000
CPU min MHz: 400.0000
BogoMIPS: 7400.01
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ht syscall nx mmxext fxsr_opt pdpe1gb rdtscp lm constant_tsc rep_good nopl xtopology nonstop_tsc cpuid extd_apicid aperfmperf rapl pni pclmulqdq monitor ssse3 fma cx16 pcid sse4_1 sse4_2 movbe popcnt aes xsave avx f16c rdrand lahf_lm cmp_legacy svm extapic cr8_legacy abm sse4a misalignsse 3dnowprefetch osvw ibs skinit wdt tce topoext perfctr_core perfctr_nb bpext perfctr_llc mwaitx cpb cat_l3 cdp_l3 hw_pstate ssbd mba ibrs ibpb stibp vmmcall fsgsbase bmi1 avx2 smep bmi2 erms invpcid cqm rdt_a rdseed adx smap clflushopt clwb sha_ni xsaveopt xsavec xgetbv1 xsaves cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local user_shstk clzero irperf xsaveerptr rdpru wbnoinvd amd_ppin brs arat npt lbrv svm_lock nrip_save tsc_scale vmcb_clean flushbyasid decodeassists pausefilter pfthreshold v_vmsave_vmload vgif v_spec_ctrl umip pku ospke vaes vpclmulqdq rdpid overflow_recov succor smca fsrm debug_swap
Virtualization: AMD-V
L1d cache: 2 MiB (64 instances)
L1i cache: 2 MiB (64 instances)
L2 cache: 32 MiB (64 instances)
L3 cache: 256 MiB (8 instances)
NUMA node(s): 1
NUMA node0 CPU(s): 0-127
Vulnerability Gather data sampling: Not affected
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Not affected
Vulnerability Reg file data sampling: Not affected
Vulnerability Retbleed: Not affected
Vulnerability Spec rstack overflow: Mitigation; Safe RET
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Retpolines; IBPB conditional; IBRS_FW; STIBP always-on; RSB filling; PBRSB-eIBRS Not affected; BHI Not affected
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Versions of relevant libraries:
[pip3] numpy==1.26.4
[pip3] torch==2.6.0a0.post20241218
[pip3] triton==3.2.0
[conda] Could not collect
```
</details>
cc @jeffdaily @sunway513 @jithunnair-amd @pruthvistony @ROCmSupport @dllehr-amd @jataylo @hongxiayang @naromero77amd @chauhang @penguinwu
| true
|
2,753,597,236
|
[Codemod][AddExplicitStrictExportArg] caffe2/benchmarks/dynamo
|
gmagogsfm
|
closed
|
[
"fb-exported",
"Merged",
"ciflow/trunk",
"topic: not user facing",
"test-config/default",
"module: dynamo",
"ciflow/inductor"
] | 26
|
CONTRIBUTOR
|
Reviewed By: avikchaudhuri
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @chauhang @amjames
| true
|
2,753,579,064
|
[BE] Remove gcc-5 workaround for unused args
|
malfet
|
closed
|
[
"Merged",
"ciflow/trunk",
"topic: not user facing"
] | 6
|
CONTRIBUTOR
|
ditto
| true
|
2,753,569,748
|
Enhance provenance tracing unit test to cover `torch.compile()`
|
YUNQIUGUO
|
closed
|
[
"fb-exported",
"Merged",
"ciflow/trunk",
"topic: not user facing",
"module: inductor",
"ciflow/inductor"
] | 14
|
CONTRIBUTOR
|
Summary: Follow up as title.
Test Plan:
```
buck2 run -c fbcode.enable_gpu_sections=true -c fbcode.nvcc_arch=h100 @//mode/opt fbcode//caffe2/test/inductor:provenance_tracing -- -r test_triton_kernel_to_post_grad_tracing
```
Differential Revision: D67543556
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @ColinPeppler @amjames @desertfire @chauhang @aakhundov
| true
|
2,753,555,060
|
[inductor] Fix an aten.squeeze stride computation issue
|
desertfire
|
closed
|
[
"topic: bug fixes",
"module: inductor",
"ciflow/inductor",
"release notes: inductor"
] | 7
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #143683
Summary: Fixes https://github.com/pytorch/pytorch/issues/143498. The root cause is incorrect output stride for aten.squeeze (coming from aten.select in this case). If the input to aten.squeeze is non-contiguous, its output strides should also be non-contiguous. In addition, aten.cat also causes problems for the stride computation if it is turned into a pointwise cat.
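A hedged eager-mode illustration of the stride expectation described above (a minimal pattern, not the original issue's exact graph):
```python
# Hedged illustration: squeezing a non-contiguous input must keep
# non-contiguous output strides, which is what eager does.
import torch

x = torch.randn(1, 3, 4).transpose(1, 2)  # shape (1, 4, 3), strides (12, 1, 4)
y = x.squeeze(0)                          # shape (4, 3)
print(y.is_contiguous(), y.stride())      # False (1, 4)
```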
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @ColinPeppler @amjames @chauhang @aakhundov
| true
|
2,753,540,311
|
Use random64 in Fisher-Yates algorithm for large N
|
ngimel
|
closed
|
[
"Merged",
"Reverted",
"ciflow/trunk",
"release notes: cpp",
"ci-no-td"
] | 27
|
COLLABORATOR
|
Fixes a bug in randperm: https://nbsanity.com/static/a4774194938414dedcec7d6e99727d31/Shuffling_20in_20torch_20vs_20numpy-public.html
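A hedged sketch of the failure mode (illustrative Python, not the actual C++ kernel):
```python
# Hedged sketch: if the swap index is derived from only 32 random bits, indices
# at or above 2**32 can never be chosen, and the modulo mapping is measurably
# biased for large N; drawing 64 bits (as the fix does) makes the bias negligible.
import random

def fisher_yates(a, bits=64):
    for i in range(len(a) - 1, 0, -1):
        j = random.getrandbits(bits) % (i + 1)  # bits=32 reproduces the bias
        a[i], a[j] = a[j], a[i]
    return a
```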
| true
|
2,753,536,066
|
[NestedTensors] Add an op to index on the ragged dimensions
|
krzysztofjordan
|
open
|
[
"triaged",
"open source",
"fb-exported",
"Stale",
"release notes: nested tensor"
] | 3
|
CONTRIBUTOR
|
Summary:
One piece of functionality we want is the ability to truncate nested tensors to new, per-component specified lengths (rather than a narrow that applies a uniform truncation across all batches).
This change enables that through the indexing operator.
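A hedged illustration of the desired behavior (the exact op added by this PR is not named here; this mimics it with existing APIs):
```python
# Hedged illustration using existing APIs; the PR's new indexing op is meant to
# perform this per-component truncation directly on the ragged dimension.
import torch

nt = torch.nested.nested_tensor([torch.arange(5), torch.arange(8)])
new_lengths = [3, 6]
truncated = torch.nested.nested_tensor(
    [t[:n] for t, n in zip(nt.unbind(), new_lengths)]
)
```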
Test Plan: N6362104
Differential Revision: D67514922
| true
|
2,753,534,383
|
[AOTI][reland] Emit a CMakeLists.txt when package_cpp_only
|
desertfire
|
closed
|
[
"Merged",
"ciflow/trunk",
"topic: new features",
"module: inductor",
"ciflow/inductor",
"release notes: inductor",
"ciflow/rocm",
"ci-no-td"
] | 3
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #143680
Summary: Emit a CMakeLists.txt with compile and link options when package_cpp_only is specified. After unzipping the AOTI-generated .pt2 package file, the user can manually build the generated model code in their local environment.
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @ColinPeppler @amjames @chauhang @aakhundov
| true
|
2,753,517,797
|
Add support for differentiable weight decay
|
EmmettBicker
|
closed
|
[
"open source",
"Merged",
"ciflow/trunk",
"release notes: optim"
] | 7
|
CONTRIBUTOR
|
(Actual) second PR in a larger project to broaden support for differentiable optimizers with @janeyx99!
In this PR, I did a lot of pattern matching against the previous PR to add support for differentiable weight_decay.
I also added a single new line at line 359 (previously line 352) to make the code from the last PR a little easier to read.
Continuation of progress on #141832
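For context, a minimal sketch of what a differentiable update with weight decay looks like, assuming lr and weight_decay may themselves be tensors that require grad (an illustration, not the optimizer code in this PR):
```python
import torch

def sgd_step_differentiable(param, grad, lr, weight_decay):
    # Out-of-place math keeps the update on the autograd tape, so gradients
    # can flow back into lr and weight_decay themselves.
    grad = grad + weight_decay * param
    return param - lr * grad

p = torch.randn(3, requires_grad=True)
g = torch.randn(3)
lr = torch.tensor(0.1, requires_grad=True)
wd = torch.tensor(0.01, requires_grad=True)
new_p = sgd_step_differentiable(p, g, lr, wd)
new_p.sum().backward()
print(lr.grad, wd.grad)  # both populated, since the step is differentiable
```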
| true
|
2,753,512,774
|
[PTD] Dump rcclexp proxy trace in pytorch
|
dmwu
|
closed
|
[
"oncall: distributed",
"fb-exported",
"Merged",
"ciflow/trunk",
"release notes: distributed (c10d)"
] | 12
|
CONTRIBUTOR
|
Summary:
Dump the active proxyOp status per rank and per communicator when the WatchDog times out or aborts.
Added a
`#if defined(USE_ROCM) && defined(NCCL_COMM_DUMP)` guard in the print function, so only rcclexp users will see this dump in the console.
These are the PTD-side changes.
Test Plan:
Job with A2A hang due to receiver failing to post receive operations https://fburl.com/mlhub/95vg12r3
{F1971449692}
Reviewed By: c-p-i-o
Differential Revision: D67036093
cc @H-Huang @awgu @kwen2501 @wanchaol @fegin @fduwjj @wz337 @wconstab @d4l3k @c-p-i-o
| true
|
2,753,506,415
|
Upload METADATA file with whl binaries
|
clee2000
|
closed
|
[
"Merged",
"topic: not user facing"
] | 6
|
CONTRIBUTOR
|
Upload the metadata file for wheels, per PEP 658 (https://peps.python.org/pep-0658/).
This uses a Python script, though bash might be easier...
--
Testing
Example run https://github.com/pytorch/pytorch/actions/runs/12550595201/job/34994883276 without actual upload, just dry run
Lightly tested the script to make sure it uploads to s3, but integration with the bash script + workflow is untested
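For illustration, a minimal sketch (not the actual upload script) of the PEP 658 step: the wheel's METADATA is published next to the wheel as `<wheel filename>.metadata`. The wheel name below is illustrative.
```python
import zipfile
from pathlib import Path

def extract_metadata(wheel_path: str) -> bytes:
    # Pull the .dist-info/METADATA file out of the wheel archive.
    with zipfile.ZipFile(wheel_path) as whl:
        name = next(n for n in whl.namelist() if n.endswith(".dist-info/METADATA"))
        return whl.read(name)

def write_pep658_sidecar(wheel_path: str) -> Path:
    # PEP 658 exposes the metadata at "<wheel file URL>.metadata", so the
    # sidecar is the wheel filename with ".metadata" appended.
    out = Path(wheel_path + ".metadata")
    out.write_bytes(extract_metadata(wheel_path))
    return out

# write_pep658_sidecar("torch-2.6.0-cp312-cp312-linux_x86_64.whl")  # illustrative name
```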
| true
|
2,753,486,970
|
graph module retracing without preserving MCS
|
avikchaudhuri
|
closed
|
[
"fb-exported",
"Merged",
"ciflow/trunk",
"ciflow/inductor",
"release notes: export"
] | 6
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #143676
* #143664
Retracing while preserving module call signatures used to be a problem because graph modules don't have submodules at the given paths. This led to a number of failing retraceability tests. By not trying to wrap modules with export tracepoints, we can pass most of these tests; the only exception is module swapping on retraced programs, which is still not possible.
Differential Revision: [D67539304](https://our.internmc.facebook.com/intern/diff/D67539304/)
| true
|
2,753,464,526
|
Fix incorrect python expression
|
mhorowitz
|
closed
|
[
"oncall: distributed",
"fb-exported",
"Merged",
"ciflow/trunk",
"topic: not user facing"
] | 13
|
CONTRIBUTOR
|
Summary:
This expression always returns True, causing the input to be deleted
on error even for non-write modes:
```
>>> bool("w" or "+" or "a" in "rb")
True
```
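For clarity, the `in` operator only applies to the last operand, and the non-empty string literal "w" makes the whole `or` chain truthy. One possible corrected check (illustrative, not necessarily the exact diff in this PR):
```python
mode = "rb"
buggy = bool("w" or "+" or "a" in mode)          # always True: bool("w") short-circuits
fixed = any(c in mode for c in ("w", "+", "a"))  # False for read-only modes
print(buggy, fixed)                              # True False
```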
Test Plan: new test in test_fsspec.py
Differential Revision: D67537234
cc @H-Huang @awgu @kwen2501 @wanchaol @fegin @fduwjj @wz337 @wconstab @d4l3k @c-p-i-o @LucasLLC @MeetVadakkanchery @pradeepfn @ekr0
| true
|
2,753,451,772
|
[easy] Set feature use for aot autograd remote cache
|
jamesjwu
|
closed
|
[
"fb-exported",
"Merged",
"ciflow/trunk",
"ciflow/inductor",
"release notes: AO frontend"
] | 4
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #143674
Use set_feature_use for logging aot autograd cache so that dynamo_compile has this data as well as PT2 Compile Events.
Differential Revision: [D67536293](https://our.internmc.facebook.com/intern/diff/D67536293/)
| true
|
2,753,386,276
|
[ROCm] Enable post-merge trunk workflow on MI300 runners; skip and fix MI300 related failed tests
|
dnikolaev-amd
|
closed
|
[
"oncall: distributed",
"module: rocm",
"module: cpu",
"open source",
"Merged",
"ciflow/trunk",
"topic: not user facing",
"ciflow/periodic",
"module: inductor",
"ciflow/inductor",
"rocm",
"rocm priority",
"keep-going",
"ciflow/rocm",
"ciflow/inductor-rocm"
] | 33
|
CONTRIBUTOR
|
This PR
* makes changes to the workflow files and scripts so we can run CI workflows on the MI300 runners
* skips and fixes several tests that failed on MI300, observed in https://github.com/pytorch/pytorch/pull/140989
Skipped due to unsupported Float8_e4m3fn data type on MI300 (need to update test code to use datatypes supported by MI300):
- distributed.tensor.parallel.test_micro_pipeline_tp.py::MicroPipelineTPTest::test_fuse_all_gather_scaled_matmul_A_dims_\*_gather_dim_\* (24 tests across inductor/distributed configs)
- distributed.tensor.parallel.test_micro_pipeline_tp.py::test_fuse_scaled_matmul_reduce_scatter_A_dims_\*_scatter_dim_\* (12 tests across inductor/distributed configs)
- inductor.test_loop_ordering::LoopOrderingTest::test_fp8_cast_and_t
- inductor.test_loop_ordering::LoopOrderingTest::test_fp8_pattern_2
Skipped due to AssertionError on MI300:
- inductor.test_mkldnn_pattern_matcher.py::test_qconv2d_int8_mixed_bf16
- distributed._tools.test_sac_ilp::TestSACILP::test_sac_ilp_case1
Skipped:
- test_cuda.py::TestCudaMallocAsync::test_clock_speed
- test_cuda.py::TestCudaMallocAsync::test_power_draw
- test_torch.py::TestTorchDeviceTypeCUDA::test_deterministic_cumsum_cuda
Skipped flaky tests on MI300:
- distributed.test_c10d_gloo.py::ProcessGroupGlooTest::test_gather_stress_cuda
- inductor.test_cpu_repro::CPUReproTests::test_lstm_packed_unbatched_False* (256 tests)
Fixed:
- test_matmul_cuda.py::TestFP8MatmulCudaCUDA::test_float8_basics_cuda
Features:
- inductor/test_fp8.py - declares a new function to convert FP8 datatypes to ROCm-supported FP8 datatypes. It keeps the test names the same for CUDA and ROCm and allows enabling Inductor FP8 tests on CPU; a sketch of such a helper is shown below.
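A hypothetical sketch of such a helper (the real function lives in inductor/test_fp8.py; the names here are illustrative):
```python
import torch

# Map CUDA-style FP8 dtypes to their ROCm-supported counterparts, leaving
# everything else untouched; only applied on ROCm (HIP) builds.
_ROCM_FP8_MAP = {
    torch.float8_e4m3fn: torch.float8_e4m3fnuz,
    torch.float8_e5m2: torch.float8_e5m2fnuz,
}

def to_rocm_fp8(dtype: torch.dtype) -> torch.dtype:
    return _ROCM_FP8_MAP.get(dtype, dtype) if torch.version.hip else dtype
```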
cc: @jithunnair-amd
cc @H-Huang @awgu @kwen2501 @wanchaol @fegin @fduwjj @wz337 @wconstab @d4l3k @c-p-i-o @jeffdaily @sunway513 @jithunnair-amd @pruthvistony @ROCmSupport @dllehr-amd @jataylo @hongxiayang @naromero77amd @jgong5 @mingfeima @XiaobingSuper @sanchitintel @ashokei @jingxu10 @voznesenskym @penguinwu @EikanWang @Guobing-Chen @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @ColinPeppler @amjames @desertfire @chauhang @aakhundov
| true
|
2,753,347,424
|
ci: Add scaffolding for building wheels sequentially
|
seemethere
|
open
|
[
"release notes: releng"
] | 2
|
MEMBER
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #149675
* __->__ #143672
* #148419
Signed-off-by: Eli Uriegas <eliuriegas@meta.com>
| true
|
2,753,323,779
|
Getattr access for subclasses in pre-dispatch
|
tugsbayasgalan
|
closed
|
[
"Stale",
"ciflow/trunk",
"fx",
"ciflow/inductor",
"release notes: export"
] | 9
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #143671
This is a draft PR that tries to prototype how to capture attribute access in pre-dispatch IR. The motivating use case is: https://github.com/pytorch/ao/blob/039cef4ad546716aa04cd54c461feb173f7fe403/tutorials/developer_api_guide/export_to_executorch.py#L54 where TorchAO overrides Embedding.weight with a tensor subclass and then does attribute access inside it. We have to solve this problem in both strict and non-strict mode because even when dynamo translates subclass.inner_tensor into an fx Graph, the underlying tracer that converts torch IR to aten IR will need to handle subclass.inner_tensor as well.
I think there are roughly two ways to implement this:
1) Override getattr on tensor in export to monkey patch inner tensors as properties and add torch function handler
2) When we first see a subclass tensor in make_fx, attach proxies to its inner tensors.
I tried implementing (1) here: https://github.com/pytorch/pytorch/pull/143518, but it turned out to be quite intrusive, so I prefer (2) for now. The only downside of (2) is that we create proxies for unnecessarily many inner tensors, but I think that is OK because:
1) torch.compile will never see tensor subclass in make_fx so it won't regress any tracing speed
2) Hopefully tensor subclasses that are used in export are not that nest-y.
I also noticed a small bug in tensor subclass unwrapping logic. cc @ezyang @SherlockNoMad @EikanWang @jgong5 @wenzhe-nrv @IvanKobzarev
It seems easier to implement it recursively, so that the inner attrs can be tracked back to the corresponding plain tensors; both aot_autograd and fake_tensor already implement subclass unwrapping recursively.
cc @ezyang @SherlockNoMad @EikanWang @jgong5 @wenzhe-nrv
Differential Revision: [D67531713](https://our.internmc.facebook.com/intern/diff/D67531713)
| true
|
2,753,319,835
|
Potential room for fewer recompilations by introducing higher-level guards
|
StrongerXi
|
open
|
[
"triaged",
"oncall: pt2",
"module: dynamo"
] | 5
|
CONTRIBUTOR
|
### 🐛 Describe the bug
I encountered this while investigating recompilations in #128071. The relevant model code is [here](https://github.com/HazyResearch/based/blob/5cee0bf62be1582580d073af069b96f7fb8dc6b2/based/models/mixers/convolution.py#L131-L146).
## Repro
```python
import torch
@torch.compile(backend="eager")
def f(x, int_dict, n):
if n in int_dict:
return x + 1
return x + 2
x = torch.ones(2)
f(x, {1 : '1'}, 1)
f(x, {1 : '1', 2 : '2'}, 1)
f(x, {2 : '2'}, 2)
```
Running `TORCH_LOGS="recompiles" repro.py` gives 2 recompilations:
```
V1220 10:52:51.577000 12464 torch/_dynamo/guards.py:2817] [0/1] [__recompiles] Recompiling function f in /Users/ryanguo99/Documents/work/scratch/test-dict-contains-guards.py:3
V1220 10:52:51.577000 12464 torch/_dynamo/guards.py:2817] [0/1] [__recompiles] triggered by the following guard failure(s):
V1220 10:52:51.577000 12464 torch/_dynamo/guards.py:2817] [0/1] [__recompiles] - 0/0: len(L['int_dict']) == 1
V1220 10:52:51.592000 12464 torch/_dynamo/guards.py:2817] [0/2] [__recompiles] Recompiling function f in /Users/ryanguo99/Documents/work/scratch/test-dict-contains-guards.py:3
V1220 10:52:51.592000 12464 torch/_dynamo/guards.py:2817] [0/2] [__recompiles] triggered by the following guard failure(s):
V1220 10:52:51.592000 12464 torch/_dynamo/guards.py:2817] [0/2] [__recompiles] - 0/1: L['n'] == 1
V1220 10:52:51.592000 12464 torch/_dynamo/guards.py:2817] [0/2] [__recompiles] - 0/0: KeyError on L['int_dict'][1]
```
Relevant guards by running `TORCH_LOGS="guards" python repro.py`:
- For `f(x, {1 : '1'}, 1)`
```
[__guards] TREE_GUARD_MANAGER:
[__guards] +- RootGuardManager
[__guards] | +- DEFAULT_DEVICE: utils_device.CURRENT_DEVICE == None # _dynamo/output_graph.py:493 in init_ambient_guards
[__guards] | +- GLOBAL_STATE: ___check_global_state()
[__guards] | +- TORCH_FUNCTION_MODE_STACK: ___check_torch_function_mode_stack()
[__guards] | +- GuardManager: source=L['n'], accessed_by=FrameLocalsGuardAccessor(key='n', framelocals_idx=2)
[__guards] | | +- EQUALS_MATCH: L['n'] == 1 # if n in int_dict: # scratch/test.py:22 in f
[__guards] | +- GuardManager: source=L['x'], accessed_by=FrameLocalsGuardAccessor(key='x', framelocals_idx=0)
[__guards] | | +- TENSOR_MATCH: check_tensor(L['x'], Tensor, DispatchKeySet(CPU, BackendSelect, ADInplaceOrView, AutogradCPU), torch.float32, device=None, requires_grad=False, size=[2], stride=[1]) # return x + 1 # scratch/test.py:23 in f
[__guards] | | +- NO_HASATTR: hasattr(L['x'], '_dynamo_dynamic_indices') == False # return x + 1 # scratch/test.py:23 in f
[__guards] | +- GuardManager: source=L['int_dict'], accessed_by=FrameLocalsGuardAccessor(key='int_dict', framelocals_idx=1)
[__guards] | | +- DICT_LENGTH: len(L['int_dict']) == 1 # if n in int_dict: # scratch/test.py:22 in f
[__guards] | | +- GuardManager: source=L['int_dict'][1], accessed_by=DictGetItemGuardAccessor(1)
[__guards] | | | +- EQUALS_MATCH: L['int_dict'][1] == '1' # if n in int_dict: # scratch/test.py:22 in f
```
- For `f(x, {2 : '2'}, 2)`
```
[__guards] TREE_GUARD_MANAGER:
[__guards] +- RootGuardManager
[__guards] | +- DEFAULT_DEVICE: utils_device.CURRENT_DEVICE == None # _dynamo/output_graph.py:493 in init_ambient_guards
[__guards] | +- GLOBAL_STATE: ___check_global_state()
[__guards] | +- TORCH_FUNCTION_MODE_STACK: ___check_torch_function_mode_stack()
[__guards] | +- GuardManager: source=L['n'], accessed_by=FrameLocalsGuardAccessor(key='n', framelocals_idx=2)
[__guards] | | +- TYPE_MATCH: ___check_type_id(L['n'], 4308563856) # if n in int_dict: # scratch/test.py:22 in f
[__guards] | +- GuardManager: source=L['x'], accessed_by=FrameLocalsGuardAccessor(key='x', framelocals_idx=0)
[__guards] | | +- TENSOR_MATCH: check_tensor(L['x'], Tensor, DispatchKeySet(CPU, BackendSelect, ADInplaceOrView, AutogradCPU), torch.float32, device=None, requires_grad=False, size=[2], stride=[1]) # return x + 2 # scratch/test.py:24 in f
[__guards] | | +- NO_HASATTR: hasattr(L['x'], '_dynamo_dynamic_indices') == False # return x + 2 # scratch/test.py:24 in f
[__guards] | +- GuardManager: source=L['int_dict'], accessed_by=FrameLocalsGuardAccessor(key='int_dict', framelocals_idx=1)
[__guards] | | +- DICT_LENGTH: len(L['int_dict']) == 1 # if n in int_dict: # scratch/test.py:22 in f
[__guards] | | +- GuardManager: source=L['int_dict'][1], accessed_by=DictGetItemGuardAccessor(1)
[__guards] | | | +- EQUALS_MATCH: L['int_dict'][2] == '2'
[__guards] +- LAMBDA_GUARD: L['n'] == 2 # if n in int_dict: # scratch/test.py:22 in f (_dynamo/variables/tensor.py:1200 in evaluate_expr)
```
## Thoughts
As shown above, Dynamo currently specializes pretty hard on `int_dict` and `n` when processing the expression `n in int_dict`; this causes a lot of recompilations both in this contrived example and in #128071.
However, in theory we don't care about the specifics of `int_dict` and `n`; rather, we just care about whether `int_dict` contains `n`. Thus, we could emit a more general, higher-level guard `DICT_CONTAINS` that's parameterized over both the dictionary source and the integer source (the current `DICT_CONTAINS` still specializes over the integer source, as we only allow [1 source](https://github.com/pytorch/pytorch/blob/3ee029d4020c40a07c7e20d4f36f08d9697f8d8f/torch/_guards.py#L217) for each guard).
Is this a big problem? For #128071 we could circumvent it by fixing the graph breaks; in other words, this rare-looking scenario does get exposed by graph breaks in the wild.
Fixing this feels like a non-trivial undertaking, and it's unclear what the ROI is, so I'm creating this issue to track some findings for now.
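A hypothetical sketch of what such a higher-level guard would check (illustrative Python, not the actual guard codegen): only the membership relation is re-evaluated, so all three calls in the repro could share one compiled entry.
```python
# Re-check only the membership relation the traced branch depended on.
def dict_contains_guard(frame_locals):
    return frame_locals["n"] in frame_locals["int_dict"]

# All three calls from the repro take the same branch, so a membership-only
# guard would let them share one compiled entry:
print(dict_contains_guard({"int_dict": {1: "1"}, "n": 1}))          # True
print(dict_contains_guard({"int_dict": {1: "1", 2: "2"}, "n": 1}))  # True
print(dict_contains_guard({"int_dict": {2: "2"}, "n": 2}))          # True
```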
### Error logs
_No response_
### Versions
main d8ea4ce63, python 3.12
cc @chauhang @penguinwu @voznesenskym @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @amjames
| true
|
2,753,300,281
|
String representation of nn.MultiheadAttention should contain arguments
|
fiskrt
|
open
|
[
"module: nn",
"triaged"
] | 0
|
NONE
|
### 🐛 Describe the bug
Following the recommendation of the [python docs](https://docs.python.org/3/reference/datamodel.html#object.__repr__), the string representation of an object should contain enough information to reconstruct it. Hence, the `nn.MultiheadAttention` class should follow this, like other modules do, cf. `nn.Linear`, `nn.RNN`, etc.
Current behaviour:
```
import torch.nn as nn
repr(nn.MultiheadAttention(num_heads=2, embed_dim=4)) = 'MultiheadAttention( (out_proj): NonDynamicallyQuantizableLinear(in_features=4, out_features=4, bias=True))'
```
Expected behaviour:
```
repr(nn.MultiheadAttention(num_heads=2, embed_dim=4)) = 'MultiheadAttention(num_heads=2, embed_dim=4)'
```
For example the string representation of a linear layer allows you to reconstruct it:
```
repr(nn.Linear(in_features=3, out_features=5)) = 'Linear(in_features=3, out_features=5, bias=True)'
```
as it contains all the information about the arguments passed. Modules with more parameters, such as `nn.RNN`, solve this by repeating all the arguments that were passed.
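For illustration, a minimal sketch of the requested behaviour via `extra_repr` (a subclass here, not a proposed patch; the listed arguments are not exhaustive):
```python
import torch.nn as nn

class VerboseMHA(nn.MultiheadAttention):
    # Report the constructor arguments in the module's repr via extra_repr.
    def extra_repr(self) -> str:
        return f"embed_dim={self.embed_dim}, num_heads={self.num_heads}"

print(VerboseMHA(embed_dim=4, num_heads=2))
```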
### Versions
PyTorch version: 2.5.1
Is debug build: False
CUDA used to build PyTorch: None
ROCM used to build PyTorch: N/A
OS: macOS 15.1.1 (arm64)
GCC version: Could not collect
Clang version: 15.0.0 (clang-1500.3.9.4)
CMake version: version 3.29.6
Libc version: N/A
Python version: 3.12.4 | packaged by Anaconda, Inc. | (main, Jun 18 2024, 10:07:17) [Clang 14.0.6 ] (64-bit runtime)
Python platform: macOS-15.1.1-arm64-arm-64bit
Is CUDA available: False
CUDA runtime version: No CUDA
CUDA_MODULE_LOADING set to: N/A
GPU models and configuration: No CUDA
Nvidia driver version: No CUDA
cuDNN version: No CUDA
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Apple M1 Max
Versions of relevant libraries:
[pip3] mypy-extensions==1.0.0
[pip3] numpy==1.26.4
[pip3] pytorch-lightning==2.3.3
[pip3] torch==2.5.1
[pip3] torchmetrics==1.4.0.post0
[pip3] torchtext==0.17.2
[pip3] torchvision==0.20.1
[conda] numpy 1.26.4 pypi_0 pypi
[conda] pytorch-lightning 2.3.3 pypi_0 pypi
[conda] torch 2.5.1 pypi_0 pypi
[conda] torchmetrics 1.4.0.post0 pypi_0 pypi
[conda] torchtext 0.17.2 pypi_0 pypi
[conda] torchvision 0.20.1 pypi_0 pypi
cc @albanD @mruberry @jbschlosser @walterddr @mikaylagawarecki
| true
|
2,753,277,709
|
Fix test_serialization_zipfile_actually_jit when weights_only is not default
|
mikaylagawarecki
|
closed
|
[
"Merged",
"ciflow/trunk",
"topic: not user facing"
] | 6
|
CONTRIBUTOR
|
Fails in fbcode where weights_only isn't default
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #143668
* #143403
* #143326
| true
|
2,753,268,676
|
Inductor specializes over input dimension when it's used in a `torch.full` call
|
StrongerXi
|
closed
|
[
"triaged",
"oncall: pt2",
"module: dynamic shapes",
"module: inductor"
] | 3
|
CONTRIBUTOR
|
### 🐛 Describe the bug
I encountered this while investigating recompilations in #128071. The relevant model code is [here](https://github.com/HazyResearch/based/blob/5cee0bf62be1582580d073af069b96f7fb8dc6b2/based/generation.py#L143-L153).
## Repro
Run the following with `TORCH_LOGS="guards, dynamic, recompiles" python repro.py`
### No recompilation under backend="eager"
```python
import torch
@torch.compile(dynamic=True, backend="eager")
def f(x):
s0 = x.shape[0]
y = torch.full((1,), s0)
return x + y
f(torch.ones(10))
f(torch.ones(11))
```
Log:
```
I1220 10:37:01.759000 90569 torch/fx/experimental/symbolic_shapes.py:3198] [0/0] create_env
I1220 10:37:01.812000 90569 torch/fx/experimental/symbolic_shapes.py:4439] [0/0] create_symbol s0 = 10 for L['x'].size()[0] [2, int_oo] s0 = x.shape[0] # scratch/test-inductor-fold.py:5 in f (_dynamo/variables/builder.py:2870 in <lambda>), for more info run with TORCHDYNAMO_EXTENDED_DEBUG_CREATE_SYMBOL="s0" or to suppress this message run with TORCHDYNAMO_EXTENDED_ADVICE="0"
I1220 10:37:01.840000 90569 torch/fx/experimental/symbolic_shapes.py:4563] [0/0] produce_guards
V1220 10:37:01.842000 90569 torch/_dynamo/guards.py:2390] [0/0] [__guards] GUARDS:
V1220 10:37:01.843000 90569 torch/_dynamo/guards.py:2347] [0/0] [__guards]
V1220 10:37:01.843000 90569 torch/_dynamo/guards.py:2347] [0/0] [__guards] TREE_GUARD_MANAGER:
V1220 10:37:01.843000 90569 torch/_dynamo/guards.py:2347] [0/0] [__guards] +- RootGuardManager
V1220 10:37:01.843000 90569 torch/_dynamo/guards.py:2347] [0/0] [__guards] | +- DEFAULT_DEVICE: utils_device.CURRENT_DEVICE == None # _dynamo/output_graph.py:493 in init_ambient_guards
V1220 10:37:01.843000 90569 torch/_dynamo/guards.py:2347] [0/0] [__guards] | +- GLOBAL_STATE: ___check_global_state()
V1220 10:37:01.843000 90569 torch/_dynamo/guards.py:2347] [0/0] [__guards] | +- TORCH_FUNCTION_MODE_STACK: ___check_torch_function_mode_stack()
V1220 10:37:01.843000 90569 torch/_dynamo/guards.py:2347] [0/0] [__guards] | +- GuardManager: source=L['x'], accessed_by=FrameLocalsGuardAccessor(key='x', framelocals_idx=0)
V1220 10:37:01.843000 90569 torch/_dynamo/guards.py:2347] [0/0] [__guards] | | +- TYPE_MATCH: ___check_type_id(L['x'], 5105381728) # s0 = x.shape[0] # scratch/test-inductor-fold.py:5 in f
V1220 10:37:01.843000 90569 torch/_dynamo/guards.py:2347] [0/0] [__guards] | | +- TENSOR_MATCH: check_tensor(L['x'], Tensor, DispatchKeySet(CPU, BackendSelect, ADInplaceOrView, AutogradCPU), torch.float32, device=None, requires_grad=False, size=[None], stride=[1]) # s0 = x.shape[0] # scratch/test-inductor-fold.py:5 in f
V1220 10:37:01.843000 90569 torch/_dynamo/guards.py:2347] [0/0] [__guards] | | +- NO_HASATTR: hasattr(L['x'], '_dynamo_dynamic_indices') == False # s0 = x.shape[0] # scratch/test-inductor-fold.py:5 in f
V1220 10:37:01.843000 90569 torch/_dynamo/guards.py:2347] [0/0] [__guards] | +- GuardManager: source=G, accessed_by=GlobalsGuardAccessor
V1220 10:37:01.843000 90569 torch/_dynamo/guards.py:2347] [0/0] [__guards] | | +- GuardManager: source=G['torch'], accessed_by=DictGetItemGuardAccessor('torch')
V1220 10:37:01.843000 90569 torch/_dynamo/guards.py:2347] [0/0] [__guards] | | | +- ID_MATCH: ___check_obj_id(G['torch'], 4319782256) # y = torch.full((1,), s0) # scratch/test-inductor-fold.py:6 in f
V1220 10:37:01.843000 90569 torch/_dynamo/guards.py:2347] [0/0] [__guards] | | | +- GuardManager: source=G['torch'].full, accessed_by=GetAttrGuardAccessor(full)
V1220 10:37:01.843000 90569 torch/_dynamo/guards.py:2347] [0/0] [__guards] | | | | +- ID_MATCH: ___check_obj_id(G['torch'].full, 4326939744) # y = torch.full((1,), s0) # scratch/test-inductor-fold.py:6 in f
V1220 10:37:01.843000 90569 torch/_dynamo/guards.py:2347] [0/0] [__guards] +- LAMBDA_GUARD: 2 <= L['x'].size()[0] # s0 = x.shape[0] # scratch/test-inductor-fold.py:5 in f (user code shown is first use of this value--the guard itself is not due user code but due to 0/1 specialization in the framework; to avoid specialization try torch._dynamo.mark_unbacked(tensor, dim))
V1220 10:37:01.843000 90569 torch/_dynamo/guards.py:2347] [0/0] [__guards]
V1220 10:37:02.844000 90569 torch/_dynamo/guards.py:2372] [0/0] [__guards] Guard eval latency = 1.47 us
```
### Recompilation under backend="inductor"
```python
import torch
@torch.compile(dynamic=True, backend="inductor")
def f(x):
s0 = x.shape[0]
y = torch.full((1,), s0)
return x + y
f(torch.ones(10))
f(torch.ones(11))
```
Log:
```
I1220 10:36:25.350000 89728 torch/fx/experimental/symbolic_shapes.py:3198] [0/0] create_env
I1220 10:36:25.388000 89728 torch/fx/experimental/symbolic_shapes.py:4439] [0/0] create_symbol s0 = 10 for L['x'].size()[0] [2, int_oo] s0 = x.shape[0] # scratch/test-inductor-fold.py:5 in f (_dynamo/variables/builder.py:2870 in <lambda>), for more info run with TORCHDYNAMO_EXTENDED_DEBUG_CREATE_SYMBOL="s0" or to suppress this message run with TORCHDYNAMO_EXTENDED_ADVICE="0"
I1220 10:36:26.045000 89728 torch/fx/experimental/symbolic_shapes.py:5979] [0/0] set_replacement s0 = 10 (range_refined_to_singleton) VR[10, 10]
I1220 10:36:26.046000 89728 torch/fx/experimental/symbolic_shapes.py:6297] [0/0] eval Eq(s0, 10) [guard added] (_ops.py:722 in __call__), for more info run with TORCHDYNAMO_EXTENDED_DEBUG_GUARD_ADDED="Eq(s0, 10)"
I1220 10:36:28.339000 89728 torch/fx/experimental/symbolic_shapes.py:4563] [0/0] produce_guards
V1220 10:36:28.340000 89728 torch/_dynamo/guards.py:2390] [0/0] [__guards] GUARDS:
V1220 10:36:28.340000 89728 torch/_dynamo/guards.py:2347] [0/0] [__guards]
V1220 10:36:28.340000 89728 torch/_dynamo/guards.py:2347] [0/0] [__guards] TREE_GUARD_MANAGER:
V1220 10:36:28.340000 89728 torch/_dynamo/guards.py:2347] [0/0] [__guards] +- RootGuardManager
V1220 10:36:28.340000 89728 torch/_dynamo/guards.py:2347] [0/0] [__guards] | +- DEFAULT_DEVICE: utils_device.CURRENT_DEVICE == None # _dynamo/output_graph.py:493 in init_ambient_guards
V1220 10:36:28.340000 89728 torch/_dynamo/guards.py:2347] [0/0] [__guards] | +- GLOBAL_STATE: ___check_global_state()
V1220 10:36:28.340000 89728 torch/_dynamo/guards.py:2347] [0/0] [__guards] | +- TORCH_FUNCTION_MODE_STACK: ___check_torch_function_mode_stack()
V1220 10:36:28.340000 89728 torch/_dynamo/guards.py:2347] [0/0] [__guards] | +- GuardManager: source=L['x'], accessed_by=FrameLocalsGuardAccessor(key='x', framelocals_idx=0)
V1220 10:36:28.340000 89728 torch/_dynamo/guards.py:2347] [0/0] [__guards] | | +- TYPE_MATCH: ___check_type_id(L['x'], 4610255456) # s0 = x.shape[0] # scratch/test-inductor-fold.py:5 in f
V1220 10:36:28.340000 89728 torch/_dynamo/guards.py:2347] [0/0] [__guards] | | +- TENSOR_MATCH: check_tensor(L['x'], Tensor, DispatchKeySet(CPU, BackendSelect, ADInplaceOrView, AutogradCPU), torch.float32, device=None, requires_grad=False, size=[10], stride=[1]) # s0 = x.shape[0] # scratch/test-inductor-fold.py:5 in f
V1220 10:36:28.340000 89728 torch/_dynamo/guards.py:2347] [0/0] [__guards] | | +- NO_HASATTR: hasattr(L['x'], '_dynamo_dynamic_indices') == False # s0 = x.shape[0] # scratch/test-inductor-fold.py:5 in f
V1220 10:36:28.340000 89728 torch/_dynamo/guards.py:2347] [0/0] [__guards] | +- GuardManager: source=G, accessed_by=GlobalsGuardAccessor
V1220 10:36:28.340000 89728 torch/_dynamo/guards.py:2347] [0/0] [__guards] | | +- GuardManager: source=G['torch'], accessed_by=DictGetItemGuardAccessor('torch')
V1220 10:36:28.340000 89728 torch/_dynamo/guards.py:2347] [0/0] [__guards] | | | +- ID_MATCH: ___check_obj_id(G['torch'], 4339639664) # y = torch.full((1,), s0) # scratch/test-inductor-fold.py:6 in f
V1220 10:36:28.340000 89728 torch/_dynamo/guards.py:2347] [0/0] [__guards] | | | +- GuardManager: source=G['torch'].full, accessed_by=GetAttrGuardAccessor(full)
V1220 10:36:28.340000 89728 torch/_dynamo/guards.py:2347] [0/0] [__guards] | | | | +- ID_MATCH: ___check_obj_id(G['torch'].full, 4346797152) # y = torch.full((1,), s0) # scratch/test-inductor-fold.py:6 in f
V1220 10:36:28.340000 89728 torch/_dynamo/guards.py:2347] [0/0] [__guards]
V1220 10:36:29.341000 89728 torch/_dynamo/guards.py:2372] [0/0] [__guards] Guard eval latency = 1.32 us
V1220 10:36:29.347000 89728 torch/_dynamo/guards.py:2817] [0/1] [__recompiles] Recompiling function f in /Users/ryanguo99/Documents/work/scratch/test-inductor-fold.py:3
V1220 10:36:29.347000 89728 torch/_dynamo/guards.py:2817] [0/1] [__recompiles] triggered by the following guard failure(s):
V1220 10:36:29.347000 89728 torch/_dynamo/guards.py:2817] [0/1] [__recompiles] - 0/0: tensor 'L['x']' size mismatch at index 0. expected 10, actual 11
I1220 10:36:29.348000 89728 torch/fx/experimental/symbolic_shapes.py:3198] [0/1] create_env
I1220 10:36:29.349000 89728 torch/fx/experimental/symbolic_shapes.py:4439] [0/1] create_symbol s0 = 11 for L['x'].size()[0] [2, int_oo] s0 = x.shape[0] # scratch/test-inductor-fold.py:5 in f (_dynamo/variables/builder.py:2870 in <lambda>), for more info run with TORCHDYNAMO_EXTENDED_DEBUG_CREATE_SYMBOL="s0" or to suppress this message run with TORCHDYNAMO_EXTENDED_ADVICE="0"
I1220 10:36:29.377000 89728 torch/fx/experimental/symbolic_shapes.py:5979] [0/1] set_replacement s0 = 11 (range_refined_to_singleton) VR[11, 11]
I1220 10:36:29.377000 89728 torch/fx/experimental/symbolic_shapes.py:6297] [0/1] eval Eq(s0, 11) [guard added] (_ops.py:722 in __call__), for more info run with TORCHDYNAMO_EXTENDED_DEBUG_GUARD_ADDED="Eq(s0, 11)"
I1220 10:36:29.390000 89728 torch/fx/experimental/symbolic_shapes.py:4563] [0/1] produce_guards
V1220 10:36:29.391000 89728 torch/_dynamo/guards.py:2390] [0/1] [__guards] GUARDS:
V1220 10:36:29.391000 89728 torch/_dynamo/guards.py:2347] [0/1] [__guards]
V1220 10:36:29.391000 89728 torch/_dynamo/guards.py:2347] [0/1] [__guards] TREE_GUARD_MANAGER:
V1220 10:36:29.391000 89728 torch/_dynamo/guards.py:2347] [0/1] [__guards] +- RootGuardManager
V1220 10:36:29.391000 89728 torch/_dynamo/guards.py:2347] [0/1] [__guards] | +- DEFAULT_DEVICE: utils_device.CURRENT_DEVICE == None # _dynamo/output_graph.py:493 in init_ambient_guards
V1220 10:36:29.391000 89728 torch/_dynamo/guards.py:2347] [0/1] [__guards] | +- GLOBAL_STATE: ___check_global_state()
V1220 10:36:29.391000 89728 torch/_dynamo/guards.py:2347] [0/1] [__guards] | +- TORCH_FUNCTION_MODE_STACK: ___check_torch_function_mode_stack()
V1220 10:36:29.391000 89728 torch/_dynamo/guards.py:2347] [0/1] [__guards] | +- GuardManager: source=L['x'], accessed_by=FrameLocalsGuardAccessor(key='x', framelocals_idx=0)
V1220 10:36:29.391000 89728 torch/_dynamo/guards.py:2347] [0/1] [__guards] | | +- TYPE_MATCH: ___check_type_id(L['x'], 4610255456) # s0 = x.shape[0] # scratch/test-inductor-fold.py:5 in f
V1220 10:36:29.391000 89728 torch/_dynamo/guards.py:2347] [0/1] [__guards] | | +- TENSOR_MATCH: check_tensor(L['x'], Tensor, DispatchKeySet(CPU, BackendSelect, ADInplaceOrView, AutogradCPU), torch.float32, device=None, requires_grad=False, size=[11], stride=[1]) # s0 = x.shape[0] # scratch/test-inductor-fold.py:5 in f
V1220 10:36:29.391000 89728 torch/_dynamo/guards.py:2347] [0/1] [__guards] | | +- NO_HASATTR: hasattr(L['x'], '_dynamo_dynamic_indices') == False # s0 = x.shape[0] # scratch/test-inductor-fold.py:5 in f
V1220 10:36:29.391000 89728 torch/_dynamo/guards.py:2347] [0/1] [__guards] | +- GuardManager: source=G, accessed_by=GlobalsGuardAccessor
V1220 10:36:29.391000 89728 torch/_dynamo/guards.py:2347] [0/1] [__guards] | | +- GuardManager: source=G['torch'], accessed_by=DictGetItemGuardAccessor('torch')
V1220 10:36:29.391000 89728 torch/_dynamo/guards.py:2347] [0/1] [__guards] | | | +- ID_MATCH: ___check_obj_id(G['torch'], 4339639664) # y = torch.full((1,), s0) # scratch/test-inductor-fold.py:6 in f
V1220 10:36:29.391000 89728 torch/_dynamo/guards.py:2347] [0/1] [__guards] | | | +- GuardManager: source=G['torch'].full, accessed_by=GetAttrGuardAccessor(full)
V1220 10:36:29.391000 89728 torch/_dynamo/guards.py:2347] [0/1] [__guards] | | | | +- ID_MATCH: ___check_obj_id(G['torch'].full, 4346797152) # y = torch.full((1,), s0) # scratch/test-inductor-fold.py:6 in f
V1220 10:36:29.391000 89728 torch/_dynamo/guards.py:2347] [0/1] [__guards]
V1220 10:36:30.392000 89728 torch/_dynamo/guards.py:2372] [0/1] [__guards] Guard eval latency = 1.34 us
```
### Error logs
_No response_
### Versions
main d8ea4ce63, python 3.12
cc @chauhang @penguinwu @ezyang @bobrenjc93 @voznesenskym @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @ColinPeppler @amjames @desertfire @aakhundov
| true
|
2,753,231,897
|
Extend vec backend with BF16 SVE intrinsics
|
Ryo-not-rio
|
closed
|
[
"oncall: distributed",
"module: cpu",
"triaged",
"open source",
"module: arm",
"Merged",
"NNC",
"Reverted",
"ciflow/trunk",
"release notes: quantization",
"topic: not user facing",
"ciflow/mps",
"module: inductor",
"module: dynamo",
"ciflow/inductor",
"module: compiled autograd",
"ci-no-td",
"arm priority"
] | 36
|
COLLABORATOR
|
- Following the work in https://github.com/pytorch/pytorch/pull/119571, BF16 SVE intrinsics are added to the Vectorized class, providing ~1.7x speedup on `silu` and `softmax`.
- Added bf16 detection in CMake
- Added a guard for native NEON code to prevent compilation errors
@aditew01 @maajidkhann please have a look
cc @H-Huang @awgu @wanchaol @fegin @fduwjj @wz337 @wconstab @d4l3k @jgong5 @mingfeima @XiaobingSuper @sanchitintel @ashokei @jingxu10 @jerryzh168 @malfet @snadampal @milpuz01 @aditew01 @nikhil-arm @fadara01 @EikanWang @voznesenskym @penguinwu @Guobing-Chen @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov @xmfan @kwen2501 @c-p-i-o @yf225 @LucasLLC @MeetVadakkanchery @mhorowitz @pradeepfn @ekr0 @StrongerXi @ColinPeppler @desertfire
| true
|
2,753,210,034
|
Support multiple prefetches
|
xyg-coder
|
closed
|
[
"oncall: distributed",
"release notes: distributed (fsdp)"
] | 2
|
NONE
|
This will prefetch multiple modules, which is useful for increasing overlap, although most of the time it only causes extra prefetches in the first module call.
cc @H-Huang @awgu @kwen2501 @wanchaol @fegin @fduwjj @wz337 @wconstab @d4l3k @c-p-i-o
| true
|
2,753,206,431
|
unflatten isinstance
|
avikchaudhuri
|
closed
|
[
"oncall: distributed",
"fb-exported",
"Merged",
"ciflow/trunk",
"release notes: export"
] | 5
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #143676
* __->__ #143664
When we unflatten, the submodules we generate (`InterpreterModule` or `InterpreterModuleDispatcher`) are not related by type to the original submodules `N`. This makes `isinstance(mod, N)` checks fail. Since we do not have the original types after export, the best we can do is expose a `type_name()` method that carries the original type name, which we do carry in `nn_module_stack` entries.
Differential Revision: [D67526542](https://our.internmc.facebook.com/intern/diff/D67526542/)
cc @H-Huang @awgu @kwen2501 @wanchaol @fegin @fduwjj @wz337 @wconstab @d4l3k @c-p-i-o
| true
|
2,753,204,866
|
[don't merge] update vs2022
|
xuhancn
|
closed
|
[
"module: windows",
"open source",
"ciflow/binaries",
"ciflow/trunk",
"topic: not user facing",
"ciflow/binaries_wheel",
"intel",
"ciflow/xpu"
] | 1
|
COLLABORATOR
|
Fixes #ISSUE_NUMBER
cc @peterjc123 @mszhanyi @skyline75489 @nbcsm @iremyux @Blackhex @jgong5 @mingfeima @XiaobingSuper @sanchitintel @ashokei @jingxu10
| true
|
2,753,204,863
|
fix test_rng bisector test
|
eellison
|
closed
|
[
"Merged",
"ciflow/trunk",
"topic: not user facing",
"module: inductor",
"ciflow/inductor"
] | 6
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #143662
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @ColinPeppler @amjames @desertfire @chauhang @aakhundov
| true
|
2,753,167,317
|
Fix separate in process bisector cache, cleanup on exit
|
eellison
|
closed
|
[
"Merged",
"ciflow/trunk",
"topic: not user facing",
"module: inductor",
"ciflow/inductor"
] | 5
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #143662
* __->__ #143661
* #143657
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @ColinPeppler @amjames @desertfire @chauhang @aakhundov
| true
|
2,753,131,767
|
allow profiling on all threads via experimentalConfig
|
ngimel
|
closed
|
[
"Merged",
"ciflow/trunk",
"release notes: profiler"
] | 3
|
COLLABORATOR
|
In some situations we want to profile calls coming from all threads (similar to on-demand), not just the thread that started profiling and the spawned threads that would inherit KinetoThreadLocal state.
| true
|
2,753,108,276
|
Log more contextual data when nan is detected under the anomaly mode
|
yunjiangster
|
open
|
[
"module: autograd",
"triaged",
"module: NaNs and Infs",
"actionable"
] | 1
|
NONE
|
### 🚀 The feature, motivation and pitch
Currently the anomaly mode only reports that a NaN was found at a particular op, without showing the input/output tensors or their sizes. This makes it difficult to narrow down the root cause of the NaN.
PR #143633 addresses this.
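For context, a minimal sketch of what the anomaly mode reports today; the failing op is named, but the offending tensor's values and shape are not shown:
```python
import torch

with torch.autograd.detect_anomaly():
    x = torch.zeros(1, requires_grad=True)
    y = x / x                  # NaN in the forward; the backward also produces NaN
    try:
        y.backward()
    except RuntimeError as e:
        print(e)               # "Function 'DivBackward0' returned nan values ..."
```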
### Alternatives
Log the problematic (NaN) tensors in the Python interface. This seems unnecessary and inefficient given that the C++ anomaly mode code is already available.
### Additional context
We are in the process of debugging a nan issue caused by the dummy tensor in the All2All_Seq_Req_Wait backward pass here
https://github.com/pytorch/torchrec/blob/main/torchrec/distributed/comm_ops.py#L110
With PR #143633, it was immediately clear we were looking at the dummy_tensor, which has dimension [1].
```
RuntimeError: Function 'All2All_Seq_Req_WaitBackward' returned nan values in its 0th output; num_outputs = 1; num_inputs = 0; outputs[0].shape = [1, ]; outputs[i] = nan
[ torch.cuda.FloatTensor{1} ]
```
This is the fix PR for dummy_tensor: https://github.com/pytorch/torchrec/pull/2648
cc @ezyang @albanD @gqchen @pearu @nikitaved @soulitzer @Varal7 @xmfan
| true
|
2,753,084,377
|
Fix emulate low precision bool inp
|
eellison
|
closed
|
[
"Merged",
"ciflow/trunk",
"topic: not user facing",
"module: inductor",
"ciflow/inductor"
] | 6
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #143662
* #143661
* __->__ #143657
Fix for https://github.com/pytorch/pytorch/issues/143502
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @ColinPeppler @amjames @desertfire @chauhang @aakhundov
| true
|
2,753,061,745
|
RFC: Use plain metal kernel for MPS mul
|
swolchok
|
closed
|
[
"Stale",
"release notes: mps",
"ciflow/mps"
] | 2
|
CONTRIBUTOR
|
I suspect that torchchat MPS generation speed may be limited by dispatch overheads.
Test command: python3 torchchat.py generate llama3.2-1b-base --device mps --dtype float --num-samples=3
Run 3 + total average result (the first run drags the average down, which is why I'm also including run 3 results) before:
```
Time for inference 3: 3.0606 sec total
Time to first token: 0.0307 sec with parallel prefill.
Total throughput: 23.1982 tokens/sec, 0.0431 s/token
First token throughput: 32.5849 tokens/sec, 0.0307 s/token
Next token throughput: 23.1031 tokens/sec, 0.0433 s/token
Bandwidth achieved: 139.05 GB/s
========================================
Warning: Excluding compile in calculations
Average tokens/sec (total): 22.36
Average tokens/sec (first token): 23.41
Average tokens/sec (next tokens): 22.48
```
After (just the mul kernel):
```
Time for inference 3: 3.5690 sec total
Time to first token: 0.0217 sec with parallel prefill.
Total throughput: 23.5357 tokens/sec, 0.0425 s/token
First token throughput: 46.1805 tokens/sec, 0.0217 s/token
Next token throughput: 23.3975 tokens/sec, 0.0427 s/token
Bandwidth achieved: 141.07 GB/s
========================================
Warning: Excluding compile in calculations
Average tokens/sec (total): 22.50
Average tokens/sec (first token): 32.93
Average tokens/sec (next tokens): 22.71
```
After including #143630 and adding a simple add/sub kernel (which required basic mixed dtype support to get through llama3.2 1B inference):
```
Time for inference 3: 3.4373 sec total
Time to first token: 0.0159 sec with parallel prefill.
Total throughput: 22.9830 tokens/sec, 0.0435 s/token
First token throughput: 63.0393 tokens/sec, 0.0159 s/token
Next token throughput: 22.7973 tokens/sec, 0.0439 s/token
Bandwidth achieved: 137.76 GB/s
========================================
Warning: Excluding compile in calculations
Average tokens/sec (total): 23.13
Average tokens/sec (first token): 43.94
Average tokens/sec (next tokens): 23.12
```
Prefill continues to trend way up. To me, this validates the approach of getting intermediate libraries out of the way and dispatching directly to plain old .metal kernels. Thoughts?
| true
|
2,753,043,413
|
The backward pass of reflection padding has no deterministic implementation
|
tyth66
|
open
|
[
"triaged",
"module: determinism"
] | 0
|
NONE
|
### 🐛 Describe the bug
UserWarning: reflection_pad2d_backward_cuda does not have a deterministic implementation, but you set 'torch.use_deterministic_algorithms(True, warn_only=True)'. You can file an issue at https://github.com/pytorch/pytorch/issues to help us prioritize adding deterministic support for this operation. (Triggered internally at ../aten/src/ATen/Context.cpp:91.)
return Variable._execution_engine.run_backward( # Calls into the C++ engine to run the backward pass
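A minimal repro sketch (assumes a CUDA device is available):
```python
import torch
import torch.nn.functional as F

torch.use_deterministic_algorithms(True, warn_only=True)
x = torch.randn(1, 3, 8, 8, device="cuda", requires_grad=True)
y = F.pad(x, (2, 2, 2, 2), mode="reflect")
y.sum().backward()  # warns: reflection_pad2d_backward_cuda has no deterministic implementation
```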
### Versions
Python-3.12.7 torch-2.5.1+cu118
cc @mruberry @kurtamohler
| true
|
2,752,978,774
|
Pytorch 3.13t wheels for release 2.6 - triton dependency
|
atalman
|
closed
|
[
"module: binaries",
"triaged"
] | 5
|
CONTRIBUTOR
|
### 🐛 Describe the bug
While testing I noticed that the wheel constraint does not work:
```
Requires-Dist: pytorch-triton==3.2.0+git35c6c7c6; platform_system == "Linux" and platform_machine == "x86_64" and python_version != "3.13t"
```
Workflow:
https://github.com/pytorch/pytorch/actions/runs/12427438642/job/34700799523#step:15:533
Looks like this is currently not supported :
Related Doc: https://packaging.python.org/en/latest/specifications/dependency-specifiers/
Discussion: https://discuss.python.org/t/environment-marker-for-free-threading/60007/4
Hence I propose the following:
For release and nightly, remove the triton constraint from the 3.13t wheels' METADATA:
```
Requires-Dist: pytorch-triton==3.2.0+git35c6c7c6; platform_system == "Linux" and platform_machine == "x86_64" and python_version != "3.13t"
```
When publishing these wheels to PyPI, publish them after the Linux 3.9-3.13 wheels are uploaded, as a separate step, to avoid possible issues with poetry.
### Versions
2.6.0
cc @seemethere @malfet @osalpekar
| true
|
2,752,922,088
|
[BE][Sparse] Get rid of gcc-5 workaround
|
malfet
|
closed
|
[
"Merged",
"release notes: sparse",
"topic: not user facing"
] | 6
|
CONTRIBUTOR
|
Discovered those comments while looking at https://github.com/pytorch/pytorch/pull/143620
| true
|
2,752,890,288
|
Improve performance of casted elementwise add operations
|
doru1004
|
closed
|
[
"triaged",
"open source",
"Stale",
"release notes: cuda"
] | 2
|
CONTRIBUTOR
|
Improve performance of casted elementwise add operations.
| true
|
2,752,889,957
|
`MemoryDenyWriteExecute` in systemd service causes `RuntimeError: could not create a primitive`
|
MatthewCroughan
|
open
|
[
"needs reproduction",
"module: error checking",
"module: convolution",
"triaged",
"module: mkldnn",
"security"
] | 3
|
NONE
|
`MemoryDenyWriteExecute` would be nice to use, but when PyTorch is run in this context it throws the following, likely because code is generated at runtime and the generated pages are mapped both +w and +x, which is generally a security issue. The pages should be made -w once generation is finished.
https://www.freedesktop.org/software/systemd/man/latest/systemd.exec.html#MemoryDenyWriteExecute=
```
File "/nix/store/4b0mw59pv52w2kvli1hraqcybww0yy0z-python3.12-torch-2.5.1/lib/python3.12/site-packages/torch/nn/modules/conv.py", line 549, in _conv_forward
return F.conv2d(
^^^^^^^^^
RuntimeError: could not create a primitive
```
A larger trace is below, for the application I'm running in a systemd service (comfyui)
```
model weight dtype torch.float32, manual cast: None
model_type EPS
Using split attention in VAE
Using split attention in VAE
Requested to load SD1ClipModel
loaded completely 9.5367431640625e+25 235.84423828125 True
Requested to load SD1ClipModel
loaded completely 9.5367431640625e+25 235.84423828125 True
Requested to load BaseModel
loaded completely 9.5367431640625e+25 3278.812271118164 True
****** User settings have been changed to be stored on the server instead of browser storage. ******
****** For multi-user setups add the --multi-user CLI argument to enable multiple user profiles. ******
0%| | 0/1 [00:00<?, ?it/s] 0%| | 0/1 [00:00<?, ?it/s]
!!! Exception during processing !!! could not create a primitive
Traceback (most recent call last):
File "/nix/store/css11jdci8dblbzwskpxi47ln5m8iw1x-comfyui-0.3.7/lib/python3.12/site-packages/execution.py", line 324, in execute
output_data, output_ui, has_subgraph = get_output_data(obj, input_data_all, execution_block_cb=execution_block_cb, pre_execute_cb=pre_execute_cb)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/nix/store/css11jdci8dblbzwskpxi47ln5m8iw1x-comfyui-0.3.7/lib/python3.12/site-packages/execution.py", line 199, in get_output_data
return_values = _map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True, execution_block_cb=execution_block_cb, pre_execute_cb=pre_execute_cb)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/nix/store/css11jdci8dblbzwskpxi47ln5m8iw1x-comfyui-0.3.7/lib/python3.12/site-packages/execution.py", line 170, in _map_node_over_list
process_inputs(input_dict, i)
File "/nix/store/css11jdci8dblbzwskpxi47ln5m8iw1x-comfyui-0.3.7/lib/python3.12/site-packages/execution.py", line 159, in process_inputs
results.append(getattr(obj, func)(**inputs))
^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/nix/store/css11jdci8dblbzwskpxi47ln5m8iw1x-comfyui-0.3.7/lib/python3.12/site-packages/nodes.py", line 1467, in sample
return common_ksampler(model, seed, steps, cfg, sampler_name, scheduler, positive, negative, latent_image, denoise=denoise)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/nix/store/css11jdci8dblbzwskpxi47ln5m8iw1x-comfyui-0.3.7/lib/python3.12/site-packages/nodes.py", line 1434, in common_ksampler
samples = comfy.sample.sample(model, noise, steps, cfg, sampler_name, scheduler, positive, negative, latent_image,
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/nix/store/css11jdci8dblbzwskpxi47ln5m8iw1x-comfyui-0.3.7/lib/python3.12/site-packages/comfy/sample.py", line 43, in sample
samples = sampler.sample(noise, positive, negative, cfg=cfg, latent_image=latent_image, start_step=start_step, last_step=last_step, force_full_denoise=force_full_denoise, denoise_mask=noise_mask, sigmas=sigmas, callback=callback, disable_pbar=disable_pbar, seed=seed)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/nix/store/css11jdci8dblbzwskpxi47ln5m8iw1x-comfyui-0.3.7/lib/python3.12/site-packages/comfy/samplers.py", line 1020, in sample
return sample(self.model, noise, positive, negative, cfg, self.device, sampler, sigmas, self.model_options, latent_image=latent_image, denoise_mask=denoise_mask, callback=callback, disable_pbar=disable_pbar, seed=seed)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/nix/store/css11jdci8dblbzwskpxi47ln5m8iw1x-comfyui-0.3.7/lib/python3.12/site-packages/comfy/samplers.py", line 918, in sample
return cfg_guider.sample(noise, latent_image, sampler, sigmas, denoise_mask, callback, disable_pbar, seed)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/nix/store/css11jdci8dblbzwskpxi47ln5m8iw1x-comfyui-0.3.7/lib/python3.12/site-packages/comfy/samplers.py", line 904, in sample
output = executor.execute(noise, latent_image, sampler, sigmas, denoise_mask, callback, disable_pbar, seed)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/nix/store/css11jdci8dblbzwskpxi47ln5m8iw1x-comfyui-0.3.7/lib/python3.12/site-packages/comfy/patcher_extension.py", line 110, in execute
return self.original(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/nix/store/css11jdci8dblbzwskpxi47ln5m8iw1x-comfyui-0.3.7/lib/python3.12/site-packages/comfy/samplers.py", line 873, in outer_sample
output = self.inner_sample(noise, latent_image, device, sampler, sigmas, denoise_mask, callback, disable_pbar, seed)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/nix/store/css11jdci8dblbzwskpxi47ln5m8iw1x-comfyui-0.3.7/lib/python3.12/site-packages/comfy/samplers.py", line 857, in inner_sample
samples = executor.execute(self, sigmas, extra_args, callback, noise, latent_image, denoise_mask, disable_pbar)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/nix/store/css11jdci8dblbzwskpxi47ln5m8iw1x-comfyui-0.3.7/lib/python3.12/site-packages/comfy/patcher_extension.py", line 110, in execute
return self.original(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/nix/store/css11jdci8dblbzwskpxi47ln5m8iw1x-comfyui-0.3.7/lib/python3.12/site-packages/comfy/samplers.py", line 714, in sample
samples = self.sampler_function(model_k, noise, sigmas, extra_args=extra_args, callback=k_callback, disable=disable_pbar, **self.extra_options)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/nix/store/minbaa1r1viqhnfhq9gna38fgmj5psxc-python3.12-torch-2.5.1/lib/python3.12/site-packages/torch/utils/_contextlib.py", line 116, in decorate_context
return func(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^
File "/nix/store/css11jdci8dblbzwskpxi47ln5m8iw1x-comfyui-0.3.7/lib/python3.12/site-packages/comfy/k_diffusion/sampling.py", line 155, in sample_euler
denoised = model(x, sigma_hat * s_in, **extra_args)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/nix/store/css11jdci8dblbzwskpxi47ln5m8iw1x-comfyui-0.3.7/lib/python3.12/site-packages/comfy/samplers.py", line 384, in __call__
out = self.inner_model(x, sigma, model_options=model_options, seed=seed)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/nix/store/css11jdci8dblbzwskpxi47ln5m8iw1x-comfyui-0.3.7/lib/python3.12/site-packages/comfy/samplers.py", line 839, in __call__
return self.predict_noise(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/nix/store/css11jdci8dblbzwskpxi47ln5m8iw1x-comfyui-0.3.7/lib/python3.12/site-packages/comfy/samplers.py", line 842, in predict_noise
return sampling_function(self.inner_model, x, timestep, self.conds.get("negative", None), self.conds.get("positive", None), self.cfg, model_options=model_options, seed=seed)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/nix/store/css11jdci8dblbzwskpxi47ln5m8iw1x-comfyui-0.3.7/lib/python3.12/site-packages/comfy/samplers.py", line 364, in sampling_function
out = calc_cond_batch(model, conds, x, timestep, model_options)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/nix/store/css11jdci8dblbzwskpxi47ln5m8iw1x-comfyui-0.3.7/lib/python3.12/site-packages/comfy/samplers.py", line 200, in calc_cond_batch
return executor.execute(model, conds, x_in, timestep, model_options)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/nix/store/css11jdci8dblbzwskpxi47ln5m8iw1x-comfyui-0.3.7/lib/python3.12/site-packages/comfy/patcher_extension.py", line 110, in execute
return self.original(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/nix/store/css11jdci8dblbzwskpxi47ln5m8iw1x-comfyui-0.3.7/lib/python3.12/site-packages/comfy/samplers.py", line 313, in _calc_cond_batch
output = model.apply_model(input_x, timestep_, **c).chunk(batch_chunks)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/nix/store/css11jdci8dblbzwskpxi47ln5m8iw1x-comfyui-0.3.7/lib/python3.12/site-packages/comfy/model_base.py", line 128, in apply_model
return comfy.patcher_extension.WrapperExecutor.new_class_executor(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/nix/store/css11jdci8dblbzwskpxi47ln5m8iw1x-comfyui-0.3.7/lib/python3.12/site-packages/comfy/patcher_extension.py", line 110, in execute
return self.original(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/nix/store/css11jdci8dblbzwskpxi47ln5m8iw1x-comfyui-0.3.7/lib/python3.12/site-packages/comfy/model_base.py", line 157, in _apply_model
model_output = self.diffusion_model(xc, t, context=context, control=control, transformer_options=transformer_options, **extra_conds).float()
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/nix/store/minbaa1r1viqhnfhq9gna38fgmj5psxc-python3.12-torch-2.5.1/lib/python3.12/site-packages/torch/nn/modules/module.py", line 1736, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/nix/store/minbaa1r1viqhnfhq9gna38fgmj5psxc-python3.12-torch-2.5.1/lib/python3.12/site-packages/torch/nn/modules/module.py", line 1747, in _call_impl
return forward_call(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/nix/store/css11jdci8dblbzwskpxi47ln5m8iw1x-comfyui-0.3.7/lib/python3.12/site-packages/comfy/ldm/modules/diffusionmodules/openaimodel.py", line 832, in forward
return comfy.patcher_extension.WrapperExecutor.new_class_executor(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/nix/store/css11jdci8dblbzwskpxi47ln5m8iw1x-comfyui-0.3.7/lib/python3.12/site-packages/comfy/patcher_extension.py", line 110, in execute
return self.original(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/nix/store/css11jdci8dblbzwskpxi47ln5m8iw1x-comfyui-0.3.7/lib/python3.12/site-packages/comfy/ldm/modules/diffusionmodules/openaimodel.py", line 874, in _forward
h = forward_timestep_embed(module, h, emb, context, transformer_options, time_context=time_context, num_video_frames=num_video_frames, image_only_indicator=image_only_indicator)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/nix/store/css11jdci8dblbzwskpxi47ln5m8iw1x-comfyui-0.3.7/lib/python3.12/site-packages/comfy/ldm/modules/diffusionmodules/openaimodel.py", line 39, in forward_timestep_embed
x = layer(x, emb)
^^^^^^^^^^^^^
File "/nix/store/minbaa1r1viqhnfhq9gna38fgmj5psxc-python3.12-torch-2.5.1/lib/python3.12/site-packages/torch/nn/modules/module.py", line 1736, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/nix/store/minbaa1r1viqhnfhq9gna38fgmj5psxc-python3.12-torch-2.5.1/lib/python3.12/site-packages/torch/nn/modules/module.py", line 1747, in _call_impl
return forward_call(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/nix/store/css11jdci8dblbzwskpxi47ln5m8iw1x-comfyui-0.3.7/lib/python3.12/site-packages/comfy/ldm/modules/diffusionmodules/openaimodel.py", line 240, in forward
return checkpoint(
^^^^^^^^^^^
File "/nix/store/css11jdci8dblbzwskpxi47ln5m8iw1x-comfyui-0.3.7/lib/python3.12/site-packages/comfy/ldm/modules/diffusionmodules/util.py", line 191, in checkpoint
return func(*inputs)
^^^^^^^^^^^^^
File "/nix/store/css11jdci8dblbzwskpxi47ln5m8iw1x-comfyui-0.3.7/lib/python3.12/site-packages/comfy/ldm/modules/diffusionmodules/openaimodel.py", line 253, in _forward
h = self.in_layers(x)
^^^^^^^^^^^^^^^^^
File "/nix/store/minbaa1r1viqhnfhq9gna38fgmj5psxc-python3.12-torch-2.5.1/lib/python3.12/site-packages/torch/nn/modules/module.py", line 1736, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/nix/store/minbaa1r1viqhnfh
packages/torch/nn/modules/module.py", line 1747, in _call_impl
return forward_call(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/nix/store/minbaa1r1viqhnfhq9gna38fgmj5psxc-python3.12-torch-2.5.1/lib/python3.12/site-packages/torch/nn/modules/container.py", line 250, in forward
input = module(input)
^^^^^^^^^^^^^
File "/nix/store/minbaa1r1viqhnfhq9gna38fgmj5psxc-python3.12-torch-2.5.1/lib/python3.12/site-packages/torch/nn/modules/module.py", line 1736, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/nix/store/minbaa1r1viqhnfhq9gna38fgmj5psxc-python3.12-torch-2.5.1/lib/python3.12/site-packages/torch/nn/modules/module.py", line 1747, in _call_impl
return forward_call(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/nix/store/css11jdci8dblbzwskpxi47ln5m8iw1x-comfyui-0.3.7/lib/python3.12/site-packages/comfy/ops.py", line 98, in forward
return super().forward(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/nix/store/minbaa1r1viqhnfhq9gna38fgmj5psxc-python3.12-torch-2.5.1/lib/python3.12/site-packages/torch/nn/modules/conv.py", line 554, in forward
return self._conv_forward(input, self.weight, self.bias)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/nix/store/minbaa1r1viqhnfhq9gna38fgmj5psxc-python3.12-torch-2.5.1/lib/python3.12/site-packages/torch/nn/modules/conv.py", line 549, in _conv_forward
return F.conv2d(
```
cc @malfet @gujinghui @PenghuiCheng @XiaobingSuper @jianyuh @jgong5 @mingfeima @sanchitintel @ashokei @jingxu10 @min-jean-cho @yanbing-j @Guobing-Chen @Xia-Weiwen @snadampal @seemethere @pytorch/pytorch-dev-infra
| true
|
2,752,886,043
|
AttributeError: 'GraphModule' object has no attribute 'xxxxx' when accessing module layers via path string (e.g., layer.1.bn3)
|
Yicooong
|
closed
|
[
"triaged",
"oncall: pt2",
"module: inductor"
] | 0
|
NONE
|
### 🐛 Describe the bug
I encountered an AttributeError: 'GraphModule' object has no attribute 'xxxxx' when trying to access a layer of a PyTorch model using a string path (e.g., layer.1.bn3). The issue arises when dynamically accessing layers through a path that mixes dot notation and numeric indices, as happens with modules like nn.Sequential. The error is raised in `torch._inductor.pattern_matcher.extract_target`.
The error comes from this call in `extract_target`:
```
getattr(node.graph.owning_module, node.target)
```
This occurs for models whose layers live inside `nn.Sequential` containers (which are addressed by numeric index). `getattr` does not traverse a dotted path, so the single `getattr` call on the owning `GraphModule` fails as soon as the target contains dots or an index like `1`.
Suggestions for fixing:
```
node_layer = node.target.split(".")
module = node.graph.owning_module
for layer in node_layer:
    module = getattr(module, layer)
```
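For reference, a minimal sketch (illustrative module, not the model from the report) of the same traversal using the public `nn.Module.get_submodule` API, which already handles dotted paths with numeric components; `GraphModule` subclasses `nn.Module`, so the same call should apply there:
```python
import torch.nn as nn

# get_submodule traverses dotted paths, including numeric indices into nn.Sequential.
m = nn.Sequential(nn.Sequential(nn.Conv2d(3, 8, 3), nn.ReLU()))
relu = m.get_submodule("0.1")      # resolves the nested, indexed path
# getattr(m, "0.1")                # raises AttributeError, mirroring the report above
print(type(relu))
```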
### Error logs
File "/home/xxx/miniconda3/envs/magpy/lib/python3.9/site-packages/torch/_inductor/compile_fx.py", line 1324, in compile_fx
model_ = _recursive_pre_grad_passes(model_, example_inputs_)
File "/home/xxx/miniconda3/envs/magpy/lib/python3.9/site-packages/torch/_inductor/compile_fx.py", line 274, in _recursive_pre_grad_passes
return pre_grad_passes(gm, example_inputs)
File "/home/xxx/miniconda3/envs/magpy/lib/python3.9/site-packages/torch/_inductor/fx_passes/pre_grad.py", line 254, in pre_grad_passes
efficient_conv_bn_eval_pass.apply(gm.graph) # type: ignore[arg-type]
File "/home/xxx/miniconda3/envs/magpy/lib/python3.9/site-packages/torch/_inductor/pattern_matcher.py", line 1706, in apply
target = extract_target(node)
File "/home/xxx/miniconda3/envs/magpy/lib/python3.9/site-packages/torch/_inductor/pattern_matcher.py", line 2009, in extract_target
return getattr(node.graph.owning_module, node.target).__class__ # type: ignore[arg-type]
File "/home/xxx/miniconda3/envs/magpy/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1931, in __getattr__
raise AttributeError(
AttributeError: 'GraphModule' object has no attribute 'layer4.2.relu'
### Versions
Collecting environment information...
PyTorch version: 2.5.1
Is debug build: False
CUDA used to build PyTorch: 11.8
ROCM used to build PyTorch: N/A
OS: Ubuntu 22.04.3 LTS (x86_64)
GCC version: (Ubuntu 11.4.0-1ubuntu1~22.04) 11.4.0
Clang version: Could not collect
CMake version: version 3.31.1
Libc version: glibc-2.35
Python version: 3.9.21 (main, Dec 11 2024, 16:24:11) [GCC 11.2.0] (64-bit runtime)
Python platform: Linux-6.2.0-36-generic-x86_64-with-glibc2.35
Is CUDA available: True
CUDA runtime version: 12.2.140
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration:
GPU 0: Tesla V100S-PCIE-32GB
GPU 1: Tesla V100S-PCIE-32GB
Nvidia driver version: 535.129.03
cuDNN version: Could not collect
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 46 bits physical, 48 bits virtual
Byte Order: Little Endian
CPU(s): 64
On-line CPU(s) list: 0-63
Vendor ID: GenuineIntel
Model name: Intel(R) Xeon(R) Gold 5218 CPU @ 2.30GHz
CPU family: 6
Model: 85
Thread(s) per core: 2
Core(s) per socket: 16
Socket(s): 2
Stepping: 7
BogoMIPS: 4600.00
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc art arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc cpuid aperfmperf pni pclmulqdq dtes64 monitor ds_cpl vmx smx est tm2 ssse3 sdbg fma cx16 xtpr pdcm pcid dca sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm 3dnowprefetch cpuid_fault epb cat_l3 cdp_l3 invpcid_single intel_ppin ssbd mba ibrs ibpb stibp ibrs_enhanced tpr_shadow vnmi flexpriority ept vpid ept_ad fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid cqm mpx rdt_a avx512f avx512dq rdseed adx smap clflushopt clwb intel_pt avx512cd avx512bw avx512vl xsaveopt xsavec xgetbv1 xsaves cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local dtherm ida arat pln pts pku ospke avx512_vnni md_clear flush_l1d arch_capabilities
Virtualization: VT-x
L1d cache: 1 MiB (32 instances)
L1i cache: 1 MiB (32 instances)
L2 cache: 32 MiB (32 instances)
L3 cache: 44 MiB (2 instances)
NUMA node(s): 2
NUMA node0 CPU(s): 0,2,4,6,8,10,12,14,16,18,20,22,24,26,28,30,32,34,36,38,40,42,44,46,48,50,52,54,56,58,60,62
NUMA node1 CPU(s): 1,3,5,7,9,11,13,15,17,19,21,23,25,27,29,31,33,35,37,39,41,43,45,47,49,51,53,55,57,59,61,63
Vulnerability Gather data sampling: Mitigation; Microcode
Vulnerability Itlb multihit: KVM: Mitigation: VMX disabled
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Mitigation; Clear CPU buffers; SMT vulnerable
Vulnerability Retbleed: Mitigation; Enhanced IBRS
Vulnerability Spec rstack overflow: Not affected
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Enhanced IBRS, IBPB conditional, RSB filling, PBRSB-eIBRS SW sequence
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Mitigation; TSX disabled
Versions of relevant libraries:
[pip3] numpy==2.0.1
[pip3] torch==2.0.1+cu118
[pip3] torchaudio==2.5.1
[pip3] torchvision==0.15.2+cu118
[pip3] triton==2.0.0
[conda] blas 1.0 mkl
[conda] cuda-cudart 11.8.89 0 nvidia
[conda] cuda-cupti 11.8.87 0 nvidia
[conda] cuda-libraries 11.8.0 0 nvidia
[conda] cuda-nvrtc 11.8.89 0 nvidia
[conda] cuda-nvtx 11.8.86 0 nvidia
[conda] cuda-runtime 11.8.0 0 nvidia
[conda] ffmpeg 4.3 hf484d3e_0 pytorch
[conda] libcublas 11.11.3.6 0 nvidia
[conda] libcufft 10.9.0.58 0 nvidia
[conda] libcurand 10.3.7.77 0 nvidia
[conda] libcusolver 11.4.1.48 0 nvidia
[conda] libcusparse 11.7.5.86 0 nvidia
[conda] libjpeg-turbo 2.0.0 h9bf148f_0 pytorch
[conda] mkl 2023.1.0 h213fc3f_46344
[conda] mkl-service 2.4.0 py39h5eee18b_1
[conda] mkl_fft 1.3.11 py39h5eee18b_0
[conda] mkl_random 1.2.8 py39h1128e8f_0
[conda] numpy 2.0.2 pypi_0 pypi
[conda] numpy-base 2.0.1 py39hb5e798b_1
[conda] pytorch 2.5.1 py3.9_cuda11.8_cudnn9.1.0_0 pytorch
[conda] pytorch-cuda 11.8 h7e8668a_6 pytorch
[conda] pytorch-mutex 1.0 cuda pytorch
[conda] torch 2.0.1+cu118 pypi_0 pypi
[conda] torchaudio 2.5.1 py39_cu118 pytorch
[conda] torchtriton 3.1.0 py39 pytorch
[conda] torchvision 0.15.2+cu118 pypi_0 pypi
[conda] triton 2.0.0 pypi_0 pypi
cc @chauhang @penguinwu @voznesenskym @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @ColinPeppler @amjames @desertfire @aakhundov
| true
|
2,752,816,284
|
Floating Point Exception (core dumped) when running floordiv/remainder/fmod under torch.compile
|
maybeLee
|
open
|
[
"module: crash",
"triaged",
"oncall: pt2",
"oncall: cpu inductor"
] | 6
|
CONTRIBUTOR
|
### 🐛 Describe the bug
It is likely a division-by-zero problem.
Under eager mode these APIs raise a `ZeroDivisionError`; under torch.compile the process instead crashes with a floating point exception (core dumped).
Here is the code to reproduce:
```
import torch

@torch.compile
def div(input, value):
    # Changing the call to torch.fmod or torch.remainder leads to the same error.
    return torch.Tensor.floor_divide_(input, value)

input = torch.tensor([2, 5])
value = torch.tensor([0])
div(input, value)
```
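A minimal guard, assuming the goal is only to avoid the hard crash until the compiled kernel handles zero divisors (the guard itself is not part of the report):
```python
import torch

@torch.compile
def div(input, value):
    return torch.Tensor.floor_divide_(input, value)

input = torch.tensor([2, 5])
value = torch.tensor([0])

# Guard outside the compiled region so a zero divisor surfaces as a Python
# exception (as in eager mode) instead of a SIGFPE that kills the process.
if (value == 0).any():
    raise ZeroDivisionError("floor_divide_ by a zero divisor")
out = div(input, value)
```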
### Versions
Collecting environment information...
PyTorch version: 2.6.0a0+gitdeb1da1
Is debug build: False
CUDA used to build PyTorch: 12.4
ROCM used to build PyTorch: N/A
OS: Ubuntu 22.04.4 LTS (x86_64)
GCC version: (Ubuntu 11.4.0-1ubuntu1~22.04) 11.4.0
Clang version: Could not collect
CMake version: version 3.31.2
Libc version: glibc-2.35
Python version: 3.11.10 | packaged by conda-forge | (main, Oct 16 2024, 01:27:36) [GCC 13.3.0] (64-bit runtime)
Python platform: Linux-5.14.0-427.37.1.el9_4.x86_64-x86_64-with-glibc2.35
Is CUDA available: True
CUDA runtime version: 12.4.131
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration:
GPU 0: NVIDIA GeForce RTX 3090
GPU 1: NVIDIA GeForce RTX 3090
GPU 2: NVIDIA GeForce RTX 3090
GPU 3: NVIDIA GeForce RTX 3090
Nvidia driver version: 560.35.03
cuDNN version: Could not collect
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 43 bits physical, 48 bits virtual
Byte Order: Little Endian
CPU(s): 64
On-line CPU(s) list: 0-63
Vendor ID: AuthenticAMD
Model name: AMD Ryzen Threadripper PRO 3975WX 32-Cores
CPU family: 23
Model: 49
Thread(s) per core: 2
Core(s) per socket: 32
Socket(s): 1
Stepping: 0
Frequency boost: enabled
CPU max MHz: 4368.1641
CPU min MHz: 2200.0000
BogoMIPS: 7000.73
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ht syscall nx mmxext fxsr_opt pdpe1gb rdtscp lm constant_tsc rep_good nopl nonstop_tsc cpuid extd_apicid aperfmperf rapl pni pclmulqdq monitor ssse3 fma cx16 sse4_1 sse4_2 x2apic movbe popcnt aes xsave avx f16c rdrand lahf_lm cmp_legacy svm extapic cr8_legacy abm sse4a misalignsse 3dnowprefetch osvw ibs skinit wdt tce topoext perfctr_core perfctr_nb bpext perfctr_llc mwaitx cpb cat_l3 cdp_l3 hw_pstate ssbd mba ibpb stibp vmmcall fsgsbase bmi1 avx2 smep bmi2 cqm rdt_a rdseed adx smap clflushopt clwb sha_ni xsaveopt xsavec xgetbv1 xsaves cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local clzero irperf xsaveerptr rdpru wbnoinvd amd_ppin arat npt lbrv svm_lock nrip_save tsc_scale vmcb_clean flushbyasid decodeassists pausefilter pfthreshold avic v_vmsave_vmload vgif v_spec_ctrl umip rdpid overflow_recov succor smca sev sev_es
Virtualization: AMD-V
L1d cache: 1 MiB (32 instances)
L1i cache: 1 MiB (32 instances)
L2 cache: 16 MiB (32 instances)
L3 cache: 128 MiB (8 instances)
NUMA node(s): 1
NUMA node0 CPU(s): 0-63
Vulnerability Gather data sampling: Not affected
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Not affected
Vulnerability Retbleed: Mitigation; untrained return thunk; SMT enabled with STIBP protection
Vulnerability Spec rstack overflow: Mitigation; Safe RET
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Retpolines, IBPB conditional, STIBP always-on, RSB filling, PBRSB-eIBRS Not affected
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Versions of relevant libraries:
[pip3] numpy==2.1.2
[pip3] nvidia-cublas-cu12==12.4.5.8
[pip3] nvidia-cuda-cupti-cu12==12.4.127
[pip3] nvidia-cuda-nvrtc-cu12==12.4.127
[pip3] nvidia-cuda-runtime-cu12==12.4.127
[pip3] nvidia-cudnn-cu12==9.1.0.70
[pip3] nvidia-cufft-cu12==11.2.1.3
[pip3] nvidia-curand-cu12==10.3.5.147
[pip3] nvidia-cusolver-cu12==11.6.1.9
[pip3] nvidia-cusparse-cu12==12.3.1.170
[pip3] nvidia-nccl-cu12==2.21.5
[pip3] nvidia-nvjitlink-cu12==12.4.127
[pip3] nvidia-nvtx-cu12==12.4.127
[pip3] optree==0.13.0
[pip3] torch==2.6.0a0+gitdeb1da1
[pip3] torchaudio==2.5.1+cu124
[pip3] torchelastic==0.2.2
[pip3] torchvision==0.20.1+cu124
[pip3] triton==3.1.0
[conda] magma-cuda124 2.6.1 1 pytorch
[conda] mkl 2025.0.0 h901ac74_941 conda-forge
[conda] mkl-include 2025.0.0 hf2ce2f3_941 conda-forge
[conda] numpy 2.1.2 py311h71ddf71_0 conda-forge
[conda] nvidia-cublas-cu12 12.4.5.8 pypi_0 pypi
[conda] nvidia-cuda-cupti-cu12 12.4.127 pypi_0 pypi
[conda] nvidia-cuda-nvrtc-cu12 12.4.127 pypi_0 pypi
[conda] nvidia-cuda-runtime-cu12 12.4.127 pypi_0 pypi
[conda] nvidia-cudnn-cu12 9.1.0.70 pypi_0 pypi
[conda] nvidia-cufft-cu12 11.2.1.3 pypi_0 pypi
[conda] nvidia-curand-cu12 10.3.5.147 pypi_0 pypi
[conda] nvidia-cusolver-cu12 11.6.1.9 pypi_0 pypi
[conda] nvidia-cusparse-cu12 12.3.1.170 pypi_0 pypi
[conda] nvidia-nccl-cu12 2.21.5 pypi_0 pypi
[conda] nvidia-nvjitlink-cu12 12.4.127 pypi_0 pypi
[conda] nvidia-nvtx-cu12 12.4.127 pypi_0 pypi
[conda] optree 0.13.0 pypi_0 pypi
[conda] torch 2.6.0a0+gitdeb1da1 pypi_0 pypi
[conda] torchaudio 2.5.1+cu124 pypi_0 pypi
[conda] torchelastic 0.2.2 pypi_0 pypi
[conda] torchvision 0.20.1+cu124 pypi_0 pypi
[conda] triton 3.1.0 pypi_0 pypi
cc @chauhang @penguinwu
| true
|
2,752,801,634
|
torch.special.polygamma outputs incorrect when using torch.compile
|
maybeLee
|
closed
|
[
"triage review",
"module: special",
"oncall: pt2",
"module: inductor"
] | 7
|
CONTRIBUTOR
|
### 🐛 Describe the bug
When receiving the following inputs:
```
n = 0
input = torch.tensor(2, dtype=torch.float32)
```
`torch.special.polygamma` outputs an incorrect result (-inf) in compiled mode, while it outputs the correct result (0.4228) in eager mode.
Here is the code to reproduce:
```
import torch
import scipy

def polygamma(n, input):
    return torch.special.polygamma(n, input)

@torch.compile
def compiled_polygamma(n, input):
    return torch.special.polygamma(n, input)

n = 0
input = torch.tensor(2, dtype=torch.float32)
print("polygamma in eager mode: ", polygamma(n, input))              # 0.4228
print("polygamma in compiled mode: ", compiled_polygamma(n, input))  # -inf
print("Scipy's result: ", scipy.special.polygamma(n, input.item()))  # 0.42278433509846713
```
This issue occurs on both CPU and GPU.
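For reference, `polygamma(0, x)` is the digamma function, so the expected eager value can be cross-checked directly (a small sketch, not part of the original report):
```python
import torch

x = torch.tensor(2.0)
# digamma(2) = 1 - Euler-Mascheroni constant ≈ 0.4228, matching the eager result above.
print(torch.special.digamma(x))
print(torch.special.polygamma(0, x))
```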
### Versions
Collecting environment information...
PyTorch version: 2.6.0a0+gitdeb1da1
Is debug build: False
CUDA used to build PyTorch: 12.4
ROCM used to build PyTorch: N/A
OS: Ubuntu 22.04.4 LTS (x86_64)
GCC version: (Ubuntu 11.4.0-1ubuntu1~22.04) 11.4.0
Clang version: Could not collect
CMake version: version 3.31.2
Libc version: glibc-2.35
Python version: 3.11.10 | packaged by conda-forge | (main, Oct 16 2024, 01:27:36) [GCC 13.3.0] (64-bit runtime)
Python platform: Linux-5.14.0-427.37.1.el9_4.x86_64-x86_64-with-glibc2.35
Is CUDA available: True
CUDA runtime version: 12.4.131
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration:
GPU 0: NVIDIA GeForce RTX 3090
GPU 1: NVIDIA GeForce RTX 3090
GPU 2: NVIDIA GeForce RTX 3090
GPU 3: NVIDIA GeForce RTX 3090
Nvidia driver version: 560.35.03
cuDNN version: Could not collect
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 43 bits physical, 48 bits virtual
Byte Order: Little Endian
CPU(s): 64
On-line CPU(s) list: 0-63
Vendor ID: AuthenticAMD
Model name: AMD Ryzen Threadripper PRO 3975WX 32-Cores
CPU family: 23
Model: 49
Thread(s) per core: 2
Core(s) per socket: 32
Socket(s): 1
Stepping: 0
Frequency boost: enabled
CPU max MHz: 4368.1641
CPU min MHz: 2200.0000
BogoMIPS: 7000.73
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ht syscall nx mmxext fxsr_opt pdpe1gb rdtscp lm constant_tsc rep_good nopl nonstop_tsc cpuid extd_apicid aperfmperf rapl pni pclmulqdq monitor ssse3 fma cx16 sse4_1 sse4_2 x2apic movbe popcnt aes xsave avx f16c rdrand lahf_lm cmp_legacy svm extapic cr8_legacy abm sse4a misalignsse 3dnowprefetch osvw ibs skinit wdt tce topoext perfctr_core perfctr_nb bpext perfctr_llc mwaitx cpb cat_l3 cdp_l3 hw_pstate ssbd mba ibpb stibp vmmcall fsgsbase bmi1 avx2 smep bmi2 cqm rdt_a rdseed adx smap clflushopt clwb sha_ni xsaveopt xsavec xgetbv1 xsaves cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local clzero irperf xsaveerptr rdpru wbnoinvd amd_ppin arat npt lbrv svm_lock nrip_save tsc_scale vmcb_clean flushbyasid decodeassists pausefilter pfthreshold avic v_vmsave_vmload vgif v_spec_ctrl umip rdpid overflow_recov succor smca sev sev_es
Virtualization: AMD-V
L1d cache: 1 MiB (32 instances)
L1i cache: 1 MiB (32 instances)
L2 cache: 16 MiB (32 instances)
L3 cache: 128 MiB (8 instances)
NUMA node(s): 1
NUMA node0 CPU(s): 0-63
Vulnerability Gather data sampling: Not affected
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Not affected
Vulnerability Retbleed: Mitigation; untrained return thunk; SMT enabled with STIBP protection
Vulnerability Spec rstack overflow: Mitigation; Safe RET
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Retpolines, IBPB conditional, STIBP always-on, RSB filling, PBRSB-eIBRS Not affected
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Versions of relevant libraries:
[pip3] numpy==2.1.2
[pip3] nvidia-cublas-cu12==12.4.5.8
[pip3] nvidia-cuda-cupti-cu12==12.4.127
[pip3] nvidia-cuda-nvrtc-cu12==12.4.127
[pip3] nvidia-cuda-runtime-cu12==12.4.127
[pip3] nvidia-cudnn-cu12==9.1.0.70
[pip3] nvidia-cufft-cu12==11.2.1.3
[pip3] nvidia-curand-cu12==10.3.5.147
[pip3] nvidia-cusolver-cu12==11.6.1.9
[pip3] nvidia-cusparse-cu12==12.3.1.170
[pip3] nvidia-nccl-cu12==2.21.5
[pip3] nvidia-nvjitlink-cu12==12.4.127
[pip3] nvidia-nvtx-cu12==12.4.127
[pip3] optree==0.13.0
[pip3] torch==2.6.0a0+gitdeb1da1
[pip3] torchaudio==2.5.1+cu124
[pip3] torchelastic==0.2.2
[pip3] torchvision==0.20.1+cu124
[pip3] triton==3.1.0
[conda] magma-cuda124 2.6.1 1 pytorch
[conda] mkl 2025.0.0 h901ac74_941 conda-forge
[conda] mkl-include 2025.0.0 hf2ce2f3_941 conda-forge
[conda] numpy 2.1.2 py311h71ddf71_0 conda-forge
[conda] nvidia-cublas-cu12 12.4.5.8 pypi_0 pypi
[conda] nvidia-cuda-cupti-cu12 12.4.127 pypi_0 pypi
[conda] nvidia-cuda-nvrtc-cu12 12.4.127 pypi_0 pypi
[conda] nvidia-cuda-runtime-cu12 12.4.127 pypi_0 pypi
[conda] nvidia-cudnn-cu12 9.1.0.70 pypi_0 pypi
[conda] nvidia-cufft-cu12 11.2.1.3 pypi_0 pypi
[conda] nvidia-curand-cu12 10.3.5.147 pypi_0 pypi
[conda] nvidia-cusolver-cu12 11.6.1.9 pypi_0 pypi
[conda] nvidia-cusparse-cu12 12.3.1.170 pypi_0 pypi
[conda] nvidia-nccl-cu12 2.21.5 pypi_0 pypi
[conda] nvidia-nvjitlink-cu12 12.4.127 pypi_0 pypi
[conda] nvidia-nvtx-cu12 12.4.127 pypi_0 pypi
[conda] optree 0.13.0 pypi_0 pypi
[conda] torch 2.6.0a0+gitdeb1da1 pypi_0 pypi
[conda] torchaudio 2.5.1+cu124 pypi_0 pypi
[conda] torchelastic 0.2.2 pypi_0 pypi
[conda] torchvision 0.20.1+cu124 pypi_0 pypi
[conda] triton 3.1.0 pypi_0 pypi
cc @mruberry @kshitij12345 @chauhang @penguinwu @voznesenskym @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @ColinPeppler @amjames @desertfire @aakhundov
| true
|
2,752,719,521
|
A very weird bug involving ddp
|
Wongboo
|
open
|
[
"oncall: distributed",
"module: ddp"
] | 2
|
CONTRIBUTOR
|
### 🐛 Describe the bug
A very weird bug involving DDP: after exchanging the order of the train code and the eval code, the program hangs forever.
```python
# eval
@torch.no_grad()
def estimate_loss():
    out = {}
    model.eval()
    for split in ['val']:
        losses = torch.zeros(eval_iters)
        for k in range(eval_iters):
            X, Y = get_batch(split)
            with ctx:
                # logits, loss = model(X, Y)
                # loss = model(X, labels = Y).loss
                logits = model(X).logits.view(-1, 50304)
                loss = F.cross_entropy(logits, Y.view(-1), ignore_index=-1)
            losses[k] = loss.item()
        out[split] = losses.mean()
    out['train'] = 0.
    model.train()
    return out

# train code
for micro_step in range(gradient_accumulation_steps):
    print(f"{ddp_local_rank}: {micro_step}")
    if ddp:
        model.require_backward_grad_sync = (micro_step == gradient_accumulation_steps - 1)
    with ctx:
        logits = model(X).logits.view(-1, 50304)
        loss = F.cross_entropy(logits, Y.view(-1), ignore_index=-1)
        loss = loss / gradient_accumulation_steps  # scale the loss to account for gradient accumulation
    X, Y = get_batch('train')
    scaler.scale(loss).backward()
if grad_clip != 0.0:
    scaler.unscale_(optimizer)
    torch.nn.utils.clip_grad_norm_(model.parameters(), grad_clip)
scaler.step(optimizer)
scaler.update()
optimizer.zero_grad(set_to_none=True)

# eval code
if iter_num % eval_interval == 0 and master_process:
    losses = estimate_loss()
```
The whole code can be obtained at [DDP_bug](https://github.com/Wongboo/DDP_bug) and run with
```
TORCH_DISTRIBUTED_DEBUG=DETAIL torchrun --standalone --nproc_per_node=2 train_eval_llama.py
TORCH_DISTRIBUTED_DEBUG=DETAIL torchrun --standalone --nproc_per_node=2 eval_train_llama.py
```
The problem is so weird that I spent the whole day on it.
When running with `TORCH_DISTRIBUTED_DEBUG=DETAIL`, the program throws the following error:
```
[rank1]: RuntimeError: Detected mismatch between collectives on ranks. Rank 1 is running collective: CollectiveFingerPrint(SequenceNumber=8, OpType=BROADCAST, TensorShape=[76], TensorDtypes=Int, TensorDeviceTypes=TensorOptions(dtype=float (default), device=cuda, layout=Strided (default), requires_grad=false (default), pinned_memory=false (default), memory_format=(nullopt))), but Rank 0 is running collective: CollectiveFingerPrint(SequenceNumber=8, OpType=BROADCAST, TensorShape=[288], TensorDtypes=Float, TensorDeviceTypes=TensorOptions(dtype=float (default), device=cuda, layout=Strided (default), requires_grad=false (default), pinned_memory=false (default), memory_format=(nullopt))).Collectives differ in the following aspects: Tensor Tensor shapes: 76vs 288 Tensor Tensor dtypes: Intvs Float
```
It looks like rank 1 is running the second training step while rank 0 is still running the first eval. However, neither `no_sync()` nor `barrier()` prevents this, which is very weird.
It also seems to depend on the model type.
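One pattern that avoids this class of hang when only one rank evaluates, a sketch rather than a confirmed fix for this report: with `broadcast_buffers=True`, a forward pass through the DDP wrapper on a single rank can issue collectives that the still-training ranks never match, so evaluating through the underlying module sidesteps them.
```python
import torch
import torch.nn.functional as F

# Sketch: run evaluation through the wrapped module so that rank 0 evaluating
# alone does not issue DDP forward-pass collectives (e.g. buffer broadcasts)
# that the other ranks never join.
def estimate_loss_local(ddp_model, X, Y):
    eval_model = ddp_model.module if hasattr(ddp_model, "module") else ddp_model
    eval_model.eval()
    with torch.no_grad():
        logits = eval_model(X).logits.view(-1, 50304)
        loss = F.cross_entropy(logits, Y.view(-1), ignore_index=-1)
    eval_model.train()
    return loss.item()
```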
### Versions
Collecting environment information...
PyTorch version: 2.4.0
Is debug build: False
CUDA used to build PyTorch: 12.4
ROCM used to build PyTorch: N/A
OS: Ubuntu 22.04.4 LTS (x86_64)
GCC version: (Ubuntu 11.4.0-1ubuntu1~22.04) 11.4.0
Clang version: 6.0.0 (tags/RELEASE_600/final)
CMake version: version 3.26.4
Libc version: glibc-2.35
Python version: 3.12.4 | packaged by conda-forge | (main, Jun 17 2024, 10:23:07) [GCC 12.3.0] (64-bit runtime)
Python platform: Linux-5.4.0-192-generic-x86_64-with-glibc2.35
Is CUDA available: True
CUDA runtime version: 12.4.131
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration:
GPU 0: NVIDIA H100 80GB HBM3
GPU 1: NVIDIA H100 80GB HBM3
Nvidia driver version: 535.54.03
cuDNN version: Could not collect
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 46 bits physical, 57 bits virtual
Byte Order: Little Endian
CPU(s): 192
On-line CPU(s) list: 0-191
Vendor ID: GenuineIntel
Model name: Intel(R) Xeon(R) Platinum 8468
CPU family: 6
Model: 143
Thread(s) per core: 2
Core(s) per socket: 48
Socket(s): 2
Stepping: 8
Frequency boost: enabled
CPU max MHz: 2101.0000
CPU min MHz: 800.0000
BogoMIPS: 4200.00
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc art arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc cpuid aperfmperf tsc_known_freq pni pclmulqdq dtes64 monitor ds_cpl vmx smx est tm2 ssse3 sdbg fma cx16 xtpr pdcm pcid dca sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm 3dnowprefetch cpuid_fault epb cat_l3 cat_l2 cdp_l3 invpcid_single cdp_l2 ssbd mba ibrs ibpb stibp ibrs_enhanced tpr_shadow vnmi flexpriority ept vpid ept_ad fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid cqm rdt_a avx512f avx512dq rdseed adx smap avx512ifma clflushopt clwb intel_pt avx512cd sha_ni avx512bw avx512vl xsaveopt xsavec xgetbv1 xsaves cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local avx512_bf16 wbnoinvd dtherm ida arat pln pts avx512vbmi umip pku ospke waitpkg avx512_vbmi2 gfni vaes vpclmulqdq avx512_vnni avx512_bitalg tme avx512_vpopcntdq rdpid cldemote movdiri movdir64b md_clear pconfig flush_l1d arch_capabilities
Virtualization: VT-x
L1d cache: 4.5 MiB (96 instances)
L1i cache: 3 MiB (96 instances)
L2 cache: 192 MiB (96 instances)
L3 cache: 210 MiB (2 instances)
NUMA node(s): 2
NUMA node0 CPU(s): 0-47,96-143
NUMA node1 CPU(s): 48-95,144-191
Vulnerability Gather data sampling: Not affected
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Not affected
Vulnerability Retbleed: Not affected
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl and seccomp
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Enhanced / Automatic IBRS; IBPB conditional; RSB filling; PBRSB-eIBRS SW sequence; BHI BHI_DIS_S
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Versions of relevant libraries:
[pip3] numpy==1.26.4
[pip3] torch==2.4.0
[pip3] triton==3.0.0
[conda] blas 1.0 mkl conda-forge
[conda] cuda-cudart 12.4.127 0 nvidia
[conda] cuda-cupti 12.4.127 0 nvidia
[conda] cuda-libraries 12.4.0 0 nvidia
[conda] cuda-nvrtc 12.4.127 0 nvidia
[conda] cuda-nvtx 12.4.127 0 nvidia
[conda] cuda-opencl 12.5.39 0 nvidia
[conda] cuda-runtime 12.4.0 0 nvidia
[conda] libblas 3.9.0 16_linux64_mkl conda-forge
[conda] libcblas 3.9.0 16_linux64_mkl conda-forge
[conda] libcublas 12.4.2.65 0 nvidia
[conda] libcufft 11.2.0.44 0 nvidia
[conda] libcurand 10.3.6.82 0 nvidia
[conda] libcusolver 11.6.0.99 0 nvidia
[conda] libcusparse 12.3.0.142 0 nvidia
[conda] liblapack 3.9.0 16_linux64_mkl conda-forge
[conda] libnvjitlink 12.4.99 0 nvidia
[conda] mkl 2022.2.1 h84fe81f_16997 conda-forge
[conda] numpy 1.26.4 py312heda63a1_0 conda-forge
[conda] pytorch 2.4.0 py3.12_cuda12.4_cudnn9.1.0_0 pytorch
[conda] pytorch-cuda 12.4 hc786d27_6 pytorch
[conda] pytorch-mutex 1.0 cuda pytorch
[conda] torchtriton 3.0.0 py312 pytorch
cc @H-Huang @awgu @kwen2501 @wanchaol @fegin @fduwjj @wz337 @wconstab @d4l3k @c-p-i-o
| true
|
2,752,631,036
|
Getting "Could not initialize NNPACK! Reason: Unsupported hardware." warning even though NNPACK is enabled
|
kirillmeisser
|
open
|
[
"needs reproduction",
"triaged",
"module: nnpack"
] | 3
|
NONE
|
### 🐛 Describe the bug
Hi everyone,
I am trying to deploy EasyOCR (an OCR library built with PyTorch) locally on a VM. When executing the following lines:
```
import easyocr
reader = easyocr.Reader(['en'], gpu=False)
result = reader.readtext('test.png')
```
I get the following warning: "Could not initialize NNPACK! Reason: Unsupported hardware." I am deploying in a CPU-only environment, on CPUs with AVX512 instructions enabled. When the warning is displayed, the model takes much longer to process and triggers a timeout. I ran `print(torch.__config__.show())` to check whether NNPACK is available, and the build reports it as enabled. This is the output right before inference runs:
```
PyTorch built with:
- GCC 9.3
- C++ Version: 201703
- Intel(R) oneAPI Math Kernel Library Version 2022.2-Product Build 20220804 for Intel(R) 64 architecture applications
- Intel(R) MKL-DNN v3.4.2 (Git Hash 1137e04ec0b5251ca2b4400a4fd3c667ce843d67)
- OpenMP 201511 (a.k.a. OpenMP 4.5)
- LAPACK is enabled (usually provided by MKL)
- NNPACK is enabled
- CPU capability usage: AVX512
- Build settings: BLAS_INFO=mkl, BUILD_TYPE=Release, CXX_COMPILER=/opt/rh/devtoolset-9/root/usr/bin/c++, CXX_FLAGS= -D_GLIBCXX_USE_CXX11_ABI=0 -fabi-version=11 -fvisibility-inlines-hidden -DUSE_PTHREADPOOL -DNDEBUG -DUSE_KINETO -DLIBKINETO_NOCUPTI -DLIBKINETO_NOROCTRACER -DUSE_FBGEMM -DUSE_PYTORCH_QNNPACK -DUSE_XNNPACK -DSYMBOLICATE_MOBILE_DEBUG_HANDLE -O2 -fPIC -Wall -Wextra -Werror=return-type -Werror=non-virtual-dtor -Werror=bool-operation -Wnarrowing -Wno-missing-field-initializers -Wno-type-limits -Wno-array-bounds -Wno-unknown-pragmas -Wno-unused-parameter -Wno-unused-function -Wno-unused-result -Wno-strict-overflow -Wno-strict-aliasing -Wno-stringop-overflow -Wsuggest-override -Wno-psabi -Wno-error=pedantic -Wno-error=old-style-cast -Wno-missing-braces -fdiagnostics-color=always -faligned-new -Wno-unused-but-set-variable -Wno-maybe-uninitialized -fno-math-errno -fno-trapping-math -Werror=format -Wno-stringop-overflow, LAPACK_INFO=mkl, PERF_WITH_AVX=1, PERF_WITH_AVX2=1, PERF_WITH_AVX512=1, TORCH_VERSION=2.4.0, USE_CUDA=0, USE_CUDNN=OFF, USE_CUSPARSELT=OFF, USE_EXCEPTION_PTR=1, USE_GFLAGS=OFF, USE_GLOG=OFF, USE_GLOO=ON, USE_MKL=ON, USE_MKLDNN=ON, USE_MPI=OFF, USE_NCCL=OFF, USE_NNPACK=ON, USE_OPENMP=ON, USE_ROCM=OFF, USE_ROCM_KERNEL_ASSERT=OFF,
```
I am aware this is not a purely PyTorch-related issue, but from what I've seen the warning comes from the PyTorch side. I don't understand why the warning is triggered when PyTorch is built with NNPACK enabled. Any help would be greatly appreciated.
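A small cross-check of what the runtime actually detects (a sketch; the build-time `USE_NNPACK=ON` flag only means the code was compiled in, while NNPACK probes the hardware again when it is first used):
```python
import torch

# Build-time configuration (the same output quoted above).
print(torch.__config__.show())

# Runtime CPU capability selected by ATen's dispatcher, e.g. "AVX512".
# NNPACK runs its own hardware probe at first use, independently of USE_NNPACK=ON.
print(torch.backends.cpu.get_cpu_capability())
```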
My environment is:
```
easyocr==1.7.2
torch==2.5.1
torchvision==0.20.1
```
### Versions
PyTorch version: 2.5.1+cpu
Is debug build: False
CUDA used to build PyTorch: None
ROCM used to build PyTorch: N/A
OS: Ubuntu 22.04.5 LTS (x86_64)
GCC version: (Ubuntu 11.4.0-1ubuntu1~22.04) 11.4.0
Clang version: Could not collect
CMake version: version 3.22.1
Libc version: glibc-2.35
Python version: 3.9.20 (main, Dec 20 2024, 10:20:30) [GCC 9.4.0] (64-bit runtime)
Python platform: Linux-5.15.0-124-generic-x86_64-with-glibc2.35
Is CUDA available: False
CUDA runtime version: No CUDA
CUDA_MODULE_LOADING set to: N/A
GPU models and configuration: No CUDA
Nvidia driver version: No CUDA
cuDNN version: No CUDA
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 40 bits physical, 48 bits virtual
Byte Order: Little Endian
CPU(s): 16
On-line CPU(s) list: 0-15
Vendor ID: AuthenticAMD
Model name: QEMU Virtual CPU version 2.5+
CPU family: 15
Model: 107
Thread(s) per core: 1
Core(s) per socket: 16
Socket(s): 1
Stepping: 1
BogoMIPS: 8983.12
Flags: fpu de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ht syscall nx lm rep_good nopl cpuid extd_apicid tsc_known_freq pni ssse3 fma cx16 sse4_1 sse4_2 x2apic movbe popcnt aes xsave avx f16c hypervisor lahf_lm cmp_legacy abm 3dnowprefetch vmmcall bmi1 avx2 bmi2 avx512f avx512dq avx512cd avx512bw avx512vl
Hypervisor vendor: KVM
Virtualization type: full
L1d cache: 1 MiB (16 instances)
L1i cache: 1 MiB (16 instances)
L2 cache: 8 MiB (16 instances)
L3 cache: 256 MiB (16 instances)
NUMA node(s): 1
NUMA node0 CPU(s): 0-15
Vulnerability Gather data sampling: Not affected
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Not affected
Vulnerability Reg file data sampling: Not affected
Vulnerability Retbleed: Not affected
Vulnerability Spec rstack overflow: Not affected
Vulnerability Spec store bypass: Not affected
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Retpolines; STIBP disabled; RSB filling; PBRSB-eIBRS Not affected; BHI Not affected
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Versions of relevant libraries:
[pip3] numpy==2.0.2
[pip3] torch==2.5.1+cpu
[pip3] torchvision==0.20.1+cpu
[conda] Could not collect
| true
|
2,752,630,203
|
typo? Update RELEASE.md
|
andife
|
closed
|
[
"topic: not user facing"
] | 3
|
NONE
|
Fixes #ISSUE_NUMBER
| true
|
2,752,569,220
|
Wrong log_softmax output on cuda device float64 torch>=2.4.1
|
jchacks
|
closed
|
[
"module: cuda",
"triaged",
"module: correctness (silent)"
] | 4
|
NONE
|
### 🐛 Describe the bug
I checked the other issues but am not sure if they are fully related:
https://github.com/pytorch/pytorch/issues/140222
When applying `F.log_softmax` (and taking the exponential) or `F.softmax` over a largish dimension of size 517 (the size matters) on a CUDA device (1080 Ti) using the float64 dtype, the result does not sum to 1. I do not have access to another GPU to test whether it is specific to the 1080 Ti.
To reproduce
```python
import torch
import torch.nn as nn
import torch.nn.functional as F
import torch.optim as optim
import sys
import platform

if not torch.cuda.is_available():
    raise RuntimeError("This issue only happens on gpu")

def print_system_info():
    print("\nSystem Information:")
    print(f"Python version: {sys.version}")
    print(f"PyTorch version: {torch.__version__}")
    print(f"CUDA available: {torch.cuda.is_available()}")
    print(f"CUDA version: {torch.version.cuda}")
    print(f"GPU: {torch.cuda.get_device_name()}")
    print(f"Platform: {platform.platform()}")

class TinyModel(nn.Module):
    def __init__(self):
        super().__init__()
        self.fc = nn.Linear(5, 1)

    def forward(self, x):
        return self.fc(x).squeeze(dim=-1)

def main(device: str, dtype: torch.dtype, dimension_size: int = 517):
    print(f"TESTING: {device} {dtype}")
    device = torch.device(device)
    model = TinyModel().to(device).to(dtype)
    optimizer = optim.Adam(model.parameters())
    x = (torch.randn((5, dimension_size, 5), dtype=dtype, device=device) - 0.5) * 2
    optimizer.zero_grad()
    logits = model(x)
    print("\nLogits info:")
    print(f"Shape: {logits.shape}")
    print(f"Device: {logits.device}")
    print(f"Dtype: {logits.dtype}")
    print(f"Requires grad: {logits.requires_grad}")

    # Test different softmax approaches
    print("\nSoftmax sums:")
    print("Method 1:", F.log_softmax(logits, dim=1).exp().sum(dim=1))

    # Try CPU version
    cpu_logits = logits.cpu()
    print("CPU version:", F.log_softmax(cpu_logits, dim=1).exp().sum(dim=1))

    log_probs = logits - torch.logsumexp(logits, dim=1, keepdim=True)
    print("Method 2:", log_probs.exp().sum(dim=1))

if __name__ == "__main__":
    print_system_info()
    # On my setup 517 was the minimum size that caused the issue; it might be different on other machines.
    dim_size = 517
    main("cuda", torch.float64, dim_size)    # Doesn't work
    # main("cuda", torch.float32, dim_size)  # Works
    # main("cpu", torch.float64, dim_size)   # Works
    # main("cpu", torch.float32, dim_size)   # Works
```
Outputs (I also tested on cu121):
```
System Information:
Python version: 3.10.15 (main, Oct 8 2024, 00:25:34) [Clang 18.1.8 ]
PyTorch version: 2.5.1+cu118
CUDA available: True
CUDA version: 11.8
GPU: NVIDIA GeForce GTX 1080 Ti
Platform: Linux-6.8.0-50-generic-x86_64-with-glibc2.39
TESTING: cuda torch.float64
Logits info:
Shape: torch.Size([5, 517])
Device: cuda:0
Dtype: torch.float64
Requires grad: True
Softmax sums:
Method 1: tensor([1.0000, 0.6673, 0.9679, 0.6716, 0.9772], device='cuda:0',
dtype=torch.float64, grad_fn=<SumBackward1>)
CPU version: tensor([1.0000, 1.0000, 1.0000, 1.0000, 1.0000], dtype=torch.float64,
grad_fn=<SumBackward1>)
Method 2: tensor([1.0000, 1.0000, 1.0000, 1.0000, 1.0000], device='cuda:0',
dtype=torch.float64, grad_fn=<SumBackward1>)
```
Note that in the first output, `log_softmax(...).exp()` does not sum to 1.
With the other combinations (cuda float32, cpu float64, cpu float32) the issue does not appear.
I also tried different torch versions: the issue did not appear in 2.3.1 but was present in 2.4.1.
2.3.1 output:
```
System Information:
Python version: 3.10.15 (main, Oct 8 2024, 00:25:34) [Clang 18.1.8 ]
PyTorch version: 2.3.1+cu121
CUDA available: True
CUDA version: 12.1
GPU: NVIDIA GeForce GTX 1080 Ti
Platform: Linux-6.8.0-50-generic-x86_64-with-glibc2.39
TESTING: cuda torch.float64
Logits info:
Shape: torch.Size([5, 517])
Device: cuda:0
Dtype: torch.float64
Requires grad: True
Softmax sums:
Method 1: tensor([1.0000, 1.0000, 1.0000, 1.0000, 1.0000], device='cuda:0',
dtype=torch.float64, grad_fn=<SumBackward1>)
CPU version: tensor([1.0000, 1.0000, 1.0000, 1.0000, 1.0000], dtype=torch.float64,
grad_fn=<SumBackward1>)
Method 2: tensor([1.0000, 1.0000, 1.0000, 1.0000, 1.0000], device='cuda:0',
dtype=torch.float64, grad_fn=<SumBackward1>)
```
2.4.1 output:
```
System Information:
Python version: 3.10.15 (main, Oct 8 2024, 00:25:34) [Clang 18.1.8 ]
PyTorch version: 2.4.1+cu121
CUDA available: True
CUDA version: 12.1
GPU: NVIDIA GeForce GTX 1080 Ti
Platform: Linux-6.8.0-50-generic-x86_64-with-glibc2.39
TESTING: cuda torch.float64
Logits info:
Shape: torch.Size([5, 517])
Device: cuda:0
Dtype: torch.float64
Requires grad: True
Softmax sums:
Method 1: tensor([1.0000, 0.7082, 0.9840, 0.7065, 0.9930], device='cuda:0',
dtype=torch.float64, grad_fn=<SumBackward1>)
CPU version: tensor([1.0000, 1.0000, 1.0000, 1.0000, 1.0000], dtype=torch.float64,
grad_fn=<SumBackward1>)
Method 2: tensor([1.0000, 1.0000, 1.0000, 1.0000, 1.0000], device='cuda:0',
dtype=torch.float64, grad_fn=<SumBackward1>)
```
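A minimal check that makes the regression easy to bisect across versions (a sketch reusing the shapes from the repro above):
```python
import torch
import torch.nn.functional as F

logits = torch.randn(5, 517, dtype=torch.float64, device="cuda")
sums = F.log_softmax(logits, dim=1).exp().sum(dim=1)
# Raises on the affected versions/hardware, passes where the kernel is correct.
torch.testing.assert_close(sums, torch.ones_like(sums))
```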
### Versions
Collecting environment information...
PyTorch version: 2.5.1+cu118
Is debug build: False
CUDA used to build PyTorch: 11.8
ROCM used to build PyTorch: N/A
OS: Ubuntu 24.04.1 LTS (x86_64)
GCC version: (Ubuntu 13.3.0-6ubuntu2~24.04) 13.3.0
Clang version: Could not collect
CMake version: version 3.28.3
Libc version: glibc-2.39
Python version: 3.10.15 (main, Oct 8 2024, 00:25:34) [Clang 18.1.8 ] (64-bit runtime)
Python platform: Linux-6.8.0-50-generic-x86_64-with-glibc2.39
Is CUDA available: True
CUDA runtime version: Could not collect
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration: GPU 0: NVIDIA GeForce GTX 1080 Ti
Nvidia driver version: 565.57.01
cuDNN version: Probably one of the following:
/usr/lib/x86_64-linux-gnu/libcudnn.so.8.9.7
/usr/lib/x86_64-linux-gnu/libcudnn.so.9.5.1
/usr/lib/x86_64-linux-gnu/libcudnn_adv.so.9.5.1
/usr/lib/x86_64-linux-gnu/libcudnn_adv_infer.so.8.9.7
/usr/lib/x86_64-linux-gnu/libcudnn_adv_train.so.8.9.7
/usr/lib/x86_64-linux-gnu/libcudnn_cnn.so.9.5.1
/usr/lib/x86_64-linux-gnu/libcudnn_cnn_infer.so.8.9.7
/usr/lib/x86_64-linux-gnu/libcudnn_cnn_train.so.8.9.7
/usr/lib/x86_64-linux-gnu/libcudnn_engines_precompiled.so.9.5.1
/usr/lib/x86_64-linux-gnu/libcudnn_engines_runtime_compiled.so.9.5.1
/usr/lib/x86_64-linux-gnu/libcudnn_graph.so.9.5.1
/usr/lib/x86_64-linux-gnu/libcudnn_heuristic.so.9.5.1
/usr/lib/x86_64-linux-gnu/libcudnn_ops.so.9.5.1
/usr/lib/x86_64-linux-gnu/libcudnn_ops_infer.so.8.9.7
/usr/lib/x86_64-linux-gnu/libcudnn_ops_train.so.8.9.7
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 48 bits physical, 48 bits virtual
Byte Order: Little Endian
CPU(s): 32
On-line CPU(s) list: 0-31
Vendor ID: AuthenticAMD
Model name: AMD Ryzen 9 7950X3D 16-Core Processor
CPU family: 25
Model: 97
Thread(s) per core: 2
Core(s) per socket: 16
Socket(s): 1
Stepping: 2
CPU(s) scaling MHz: 59%
CPU max MHz: 5759.0000
CPU min MHz: 545.0000
BogoMIPS: 8383.82
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ht syscall nx mmxext fxsr_opt pdpe1gb rdtscp lm constant_tsc rep_good amd_lbr_v2 nopl nonstop_tsc cpuid extd_apicid aperfmperf rapl pni pclmulqdq monitor ssse3 fma cx16 sse4_1 sse4_2 movbe popcnt aes xsave avx f16c rdrand lahf_lm cmp_legacy svm extapic cr8_legacy abm sse4a misalignsse 3dnowprefetch osvw ibs skinit wdt tce topoext perfctr_core perfctr_nb bpext perfctr_llc mwaitx cpb cat_l3 cdp_l3 hw_pstate ssbd mba perfmon_v2 ibrs ibpb stibp ibrs_enhanced vmmcall fsgsbase bmi1 avx2 smep bmi2 erms invpcid cqm rdt_a avx512f avx512dq rdseed adx smap avx512ifma clflushopt clwb avx512cd sha_ni avx512bw avx512vl xsaveopt xsavec xgetbv1 xsaves cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local user_shstk avx512_bf16 clzero irperf xsaveerptr rdpru wbnoinvd cppc arat npt lbrv svm_lock nrip_save tsc_scale vmcb_clean flushbyasid decodeassists pausefilter pfthreshold avic v_vmsave_vmload vgif x2avic v_spec_ctrl vnmi avx512vbmi umip pku ospke avx512_vbmi2 gfni vaes vpclmulqdq avx512_vnni avx512_bitalg avx512_vpopcntdq rdpid overflow_recov succor smca fsrm flush_l1d
Virtualisation: AMD-V
L1d cache: 512 KiB (16 instances)
L1i cache: 512 KiB (16 instances)
L2 cache: 16 MiB (16 instances)
L3 cache: 128 MiB (2 instances)
NUMA node(s): 1
NUMA node0 CPU(s): 0-31
Vulnerability Gather data sampling: Not affected
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Not affected
Vulnerability Reg file data sampling: Not affected
Vulnerability Retbleed: Not affected
Vulnerability Spec rstack overflow: Mitigation; Safe RET
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Enhanced / Automatic IBRS; IBPB conditional; STIBP always-on; RSB filling; PBRSB-eIBRS Not affected; BHI Not affected
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Versions of relevant libraries:
[pip3] mypy==1.13.0
[pip3] mypy-extensions==1.0.0
[pip3] numpy==1.26.3
[pip3] nvidia-cublas-cu11==11.11.3.6
[pip3] nvidia-cuda-cupti-cu11==11.8.87
[pip3] nvidia-cuda-nvrtc-cu11==11.8.89
[pip3] nvidia-cuda-runtime-cu11==11.8.89
[pip3] nvidia-cudnn-cu11==9.1.0.70
[pip3] nvidia-cufft-cu11==10.9.0.58
[pip3] nvidia-curand-cu11==10.3.0.86
[pip3] nvidia-cusolver-cu11==11.4.1.48
[pip3] nvidia-cusparse-cu11==11.7.5.86
[pip3] nvidia-nccl-cu11==2.21.5
[pip3] nvidia-nvtx-cu11==11.8.86
[pip3] torch==2.5.1+cu118
[pip3] triton==3.1.0
[conda] Could not collect
cc @ptrblck @msaroufim @eqy
| true
|
2,752,560,865
|
Full bfloat16 ONNX export fails
|
umarbutler
|
closed
|
[
"module: onnx",
"triaged"
] | 3
|
NONE
|
### 🐛 Describe the bug
When running the below code:
```python
import torch
import onnxruntime
from transformers import AutoTokenizer, AutoModelForSequenceClassification

# BEGIN CONFIG #
MODEL_DIR = 'roberta-base'
# END CONFIG #

model = AutoModelForSequenceClassification.from_pretrained(MODEL_DIR, attn_implementation='sdpa')
model = model.eval()
model = model.to(torch.bfloat16)
tokenizer = AutoTokenizer.from_pretrained(MODEL_DIR)

input_ids = [tokenizer.encode('Hello world')] * 128
input_ids = torch.stack([torch.tensor(input) for input in input_ids])
attention_mask = torch.ones_like(input_ids)

torch.onnx.export(
    model,
    (input_ids, attention_mask),
    f='model.onnx',
    input_names=['input_ids', 'attention_mask'],
    output_names=['logits'],
    dynamic_axes={
        'input_ids': {0: 'batch_size', 1: 'sequence'},
        'attention_mask': {0: 'batch_size', 1: 'sequence'},
        'logits': {0: 'batch_size', 1: 'sequence'}
    },
    do_constant_folding=True,
    opset_version=17,
)

ort_session = onnxruntime.InferenceSession('model.onnx', providers=['CPUExecutionProvider'])
onnxruntime_outputs = ort_session.run(None, {'input_ids': input_ids.numpy(), 'attention_mask': attention_mask.numpy()})
```
I get the below error:
```
---------------------------------------------------------------------------
NotImplemented Traceback (most recent call last)
Cell In[3], line 33
16 attention_mask = torch.ones_like(input_ids)
18 torch.onnx.export(
19 model,
20 (input_ids, attention_mask),
(...)
30 opset_version = 17,
31 )
---> 33 ort_session = onnxruntime.InferenceSession(f'model.onnx', providers=['CPUExecutionProvider'])
34 onnxruntime_outputs = ort_session.run(None, {'input_ids': input_ids.numpy(), 'attention_mask': attention_mask.numpy()})
File ~/dev/.venv/lib/python3.12/site-packages/onnxruntime/capi/onnxruntime_inference_collection.py:465, in InferenceSession.__init__(self, path_or_bytes, sess_options, providers, provider_options, **kwargs)
462 disabled_optimizers = kwargs.get("disabled_optimizers")
464 try:
--> 465 self._create_inference_session(providers, provider_options, disabled_optimizers)
466 except (ValueError, RuntimeError) as e:
467 if self._enable_fallback:
File ~/dev/.venv/lib/python3.12/site-packages/onnxruntime/capi/onnxruntime_inference_collection.py:537, in InferenceSession._create_inference_session(self, providers, provider_options, disabled_optimizers)
534 disabled_optimizers = set(disabled_optimizers)
536 # initialize the C++ InferenceSession
--> 537 sess.initialize_session(providers, provider_options, disabled_optimizers)
539 self._sess = sess
540 self._sess_options = self._sess.session_options
NotImplemented: [ONNXRuntimeError] : 9 : NOT_IMPLEMENTED : Could not find an implementation for Add(14) node with name '/roberta/embeddings/Add_1'
```
It seems that converting the model to full bfloat16 makes inference fail: the ONNX Runtime CPU execution provider reports NOT_IMPLEMENTED for the bfloat16 Add node.
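A common workaround, assuming the exported graph does not need to carry bfloat16 weights, is to cast back to float32 before exporting (a sketch reusing `model`, `input_ids`, and `attention_mask` from the snippet above):
```python
# Cast back to float32 for export; the CPU execution provider has full fp32
# kernel coverage, whereas its bfloat16 coverage is sparse (hence the missing Add).
export_model = model.to(torch.float32)
torch.onnx.export(
    export_model,
    (input_ids, attention_mask),
    f='model.onnx',
    input_names=['input_ids', 'attention_mask'],
    output_names=['logits'],
    dynamic_axes={
        'input_ids': {0: 'batch_size', 1: 'sequence'},
        'attention_mask': {0: 'batch_size', 1: 'sequence'},
        'logits': {0: 'batch_size', 1: 'sequence'},
    },
    do_constant_folding=True,
    opset_version=17,
)
```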
### Versions
[pip3] numpy==1.26.3
[pip3] nvidia-cublas-cu12==12.1.3.1
[pip3] nvidia-cuda-cupti-cu12==12.1.105
[pip3] nvidia-cuda-nvrtc-cu12==12.1.105
[pip3] nvidia-cuda-runtime-cu12==12.1.105
[pip3] nvidia-cudnn-cu12==9.1.0.70
[pip3] nvidia-cufft-cu12==11.0.2.54
[pip3] nvidia-curand-cu12==10.3.2.106
[pip3] nvidia-cusolver-cu12==11.4.5.107
[pip3] nvidia-cusparse-cu12==12.1.0.106
[pip3] nvidia-nccl-cu12==2.20.5
[pip3] nvidia-nvjitlink-cu12==12.6.68
[pip3] nvidia-nvtx-cu12==12.1.105
[pip3] onnx==1.17.0
[pip3] onnxruntime==1.20.1
[pip3] onnxscript==0.1.0.dev20241220
[pip3] torch==2.4.1
[pip3] torchaudio==2.4.1+cu124
[pip3] torchvision==0.19.1+cu124
[pip3] triton==3.0.0
[conda] Could not collect
| true
|
2,752,498,999
|
Fix the build errors in ONEDNN+BLIS Path
|
phanicoder
|
open
|
[
"triaged",
"open source",
"release notes: build"
] | 5
|
NONE
|
Summary:
These changes fix the errors seen while building with ONEDNN+BLIS. libblis.so is a single-threaded library, so intra-op parallelism is not realized when linking against it. The changes instead link against libblis-mt.so, which is the multithreaded library.
When the following build options are issued to build PyTorch+ONEDNN+BLIS, errors are seen.
$export BLIS_HOME=path-to-BLIS
$export PATH=$BLIS_HOME/include/blis:$PATH LD_LIBRARY_PATH=$BLIS_HOME/lib:$LD_LIBRARY_PATH
$export BLAS=BLIS USE_MKLDNN_CBLAS=ON WITH_BLAS=blis
$python setup.py develop
These changes resolve the build errors.
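A quick way to confirm which BLAS a finished build picked up and whether intra-op threading is in effect (a small sketch; the exact fields printed vary by build):
```python
import torch

# BLAS_INFO / LAPACK_INFO in this output show which BLAS backend the build linked,
# and parallel_info() shows the intra-op threading settings actually in use.
print(torch.__config__.show())
print(torch.__config__.parallel_info())
```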
Fixes #134399
| true
|
2,752,459,860
|
Non_blocking copy behavior on non-cuda/non-privateuse1 accelerator might be unexpected
|
albanD
|
closed
|
[
"triaged",
"module: xpu",
"module: accelerator"
] | 4
|
COLLABORATOR
|
The implementation in https://github.com/pytorch/pytorch/blob/487873f7cafeb0fd390eaefe40496b804bceabbd/aten/src/ATen/native/TensorConversions.cpp#L341-L342 only uses pinned memory for these two devices, while I would expect all accelerators to want to do that.
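For context, a minimal sketch of the pattern that check gates (CUDA used purely for illustration):
```python
import torch

if torch.cuda.is_available():
    # Pinned (page-locked) host memory lets the non_blocking copy run
    # asynchronously; with pageable memory it silently degrades to a
    # synchronous copy, which is the behavior gated by the linked check.
    src = torch.randn(1024, 1024, pin_memory=True)
    dst = src.to("cuda", non_blocking=True)
```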
cc @gujinghui @EikanWang @fengyuan14 @guangyey @guangy10 in particular for xpu. Should we move this one to accelerators in general or any concern with doing so?
| true
|
2,752,428,601
|
Unable to export RoBERTa to ONNX
|
umarbutler
|
closed
|
[
"module: onnx",
"triaged"
] | 3
|
NONE
|
### 🐛 Describe the bug
When I try exporting a RoBERTa model with torch dynamo, I get this error:
```
Some weights of RobertaForSequenceClassification were not initialized from the model checkpoint at roberta-base and are newly initialized: ['classifier.dense.bias', 'classifier.dense.weight', 'classifier.out_proj.bias', 'classifier.out_proj.weight']
You should probably TRAIN this model on a down-stream task to be able to use it for predictions and inference.
/home/dev/.venv/lib/python3.12/site-packages/torch/onnx/_internal/exporter.py:137: UserWarning: torch.onnx.dynamo_export only implements opset version 18 for now. If you need to use a different opset version, please register them with register_custom_op.
warnings.warn(
/home/dev/.venv/lib/python3.12/site-packages/torch/onnx/_internal/fx/passes/readability.py:54: UserWarning: Attempted to insert a get_attr Node with no underlying reference in the owning GraphModule! Call GraphModule.add_submodule to add the necessary submodule, GraphModule.add_parameter to add the necessary Parameter, or nn.Module.register_buffer to add the necessary buffer
new_node = self.module.graph.get_attr(normalized_name)
/home/dev/.venv/lib/python3.12/site-packages/torch/fx/graph.py:1545: UserWarning: Node roberta_embeddings_token_type_ids target roberta/embeddings/token_type_ids roberta/embeddings/token_type_ids of does not reference an nn.Module, nn.Parameter, or buffer, which is what 'get_attr' Nodes typically target
warnings.warn(f'Node {node} target {node.target} {atom} of {seen_qualname} does '
---------------------------------------------------------------------------
RuntimeError Traceback (most recent call last)
File ~/dev/.venv/lib/python3.12/site-packages/torch/onnx/_internal/exporter.py:1509, in dynamo_export(model, export_options, *model_args, **model_kwargs)
1503 try:
1504 return Exporter(
1505 options=resolved_export_options,
1506 model=model,
1507 model_args=model_args,
1508 model_kwargs=model_kwargs,
-> 1509 ).export()
1510 except Exception as e:
File ~/dev/.venv/lib/python3.12/site-packages/torch/onnx/_internal/exporter.py:1246, in Exporter.export(self)
1243 fx_interpreter = fx_onnx_interpreter.FxOnnxInterpreter(
1244 diagnostic_context=self.options.diagnostic_context
1245 )
-> 1246 onnxscript_graph = fx_interpreter.run(
1247 fx_graph_module=graph_module,
1248 onnxfunction_dispatcher=self.options.onnxfunction_dispatcher,
1249 op_level_debug=self.options.op_level_debug,
1250 )
1252 # NOTE: Filter out the initializers with fake tensors when it's fake_mode exporting.
1253 # Otherwise, the ONNX exporter will fail: RuntimeError: basic_string::_M_construct null
1254 # not valid.
1255 # Concrete data is expected to be filled for those initializers later during `ONNXProgram.save`.
File ~/dev/.venv/lib/python3.12/site-packages/torch/onnx/_internal/diagnostics/infra/decorator.py:152, in diagnose_call.<locals>.decorator.<locals>.wrapper(*args, **kwargs)
151 finally:
--> 152 ctx.log_and_raise_if_error(diag)
File ~/dev/.venv/lib/python3.12/site-packages/torch/onnx/_internal/diagnostics/infra/context.py:369, in DiagnosticContext.log_and_raise_if_error(self, diagnostic)
368 if diagnostic.source_exception is not None:
--> 369 raise diagnostic.source_exception
370 raise RuntimeErrorWithDiagnostic(diagnostic)
File ~/dev/.venv/lib/python3.12/site-packages/torch/onnx/_internal/diagnostics/infra/decorator.py:136, in diagnose_call.<locals>.decorator.<locals>.wrapper(*args, **kwargs)
135 try:
--> 136 return_values = fn(*args, **kwargs)
137 with diag.log_section(logging.INFO, "Return values"):
File ~/dev/.venv/lib/python3.12/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py:577, in FxOnnxInterpreter.run(self, fx_graph_module, onnxfunction_dispatcher, op_level_debug, parent_onnxscript_graph)
576 for node in fx_graph_module.graph.nodes:
--> 577 self.run_node(
578 node,
579 fx_graph_module,
580 onnxfunction_dispatcher,
581 op_level_debug,
582 onnxscript_graph,
583 onnxscript_tracer,
584 fx_name_to_onnxscript_value,
585 )
587 with diagnostic.log_section(logging.DEBUG, "ONNX Graph:"):
File ~/dev/.venv/lib/python3.12/site-packages/torch/onnx/_internal/diagnostics/infra/decorator.py:152, in diagnose_call.<locals>.decorator.<locals>.wrapper(*args, **kwargs)
151 finally:
--> 152 ctx.log_and_raise_if_error(diag)
File ~/dev/.venv/lib/python3.12/site-packages/torch/onnx/_internal/diagnostics/infra/context.py:369, in DiagnosticContext.log_and_raise_if_error(self, diagnostic)
368 if diagnostic.source_exception is not None:
--> 369 raise diagnostic.source_exception
370 raise RuntimeErrorWithDiagnostic(diagnostic)
File ~/dev/.venv/lib/python3.12/site-packages/torch/onnx/_internal/diagnostics/infra/decorator.py:136, in diagnose_call.<locals>.decorator.<locals>.wrapper(*args, **kwargs)
135 try:
--> 136 return_values = fn(*args, **kwargs)
137 with diag.log_section(logging.INFO, "Return values"):
File ~/dev/.venv/lib/python3.12/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py:482, in FxOnnxInterpreter.run_node(self, node, fx_graph_module, onnxfunction_dispatcher, op_level_debug, onnxscript_graph, onnxscript_tracer, fx_name_to_onnxscript_value)
481 elif node.op == "call_module":
--> 482 self.call_module(
483 node,
484 onnxscript_graph,
485 fx_name_to_onnxscript_value,
486 onnxscript_tracer,
487 fx_graph_module,
488 onnxfunction_dispatcher,
489 op_level_debug,
490 )
491 elif node.op == "output":
File ~/dev/.venv/lib/python3.12/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py:811, in FxOnnxInterpreter.call_module(self, node, parent_onnxscript_graph, fx_name_to_onnxscript_value, tracer, root_fx_graph_module, onnxfunction_dispatcher, op_level_debug)
807 assert isinstance(
808 sub_module, torch.fx.GraphModule
809 ), f"sub_module must be a torch.fx.GraphModule, not {type(sub_module)} for node {node}."
--> 811 sub_onnxscript_graph = self.run(
812 sub_module, onnxfunction_dispatcher, op_level_debug, parent_onnxscript_graph
813 )
815 onnx_args, _ = _wrap_fx_args_as_onnxscript_args(
816 list(node.args), {}, fx_name_to_onnxscript_value, tracer
817 )
[... skipping similar frames: DiagnosticContext.log_and_raise_if_error at line 369 (1 times), diagnose_call.<locals>.decorator.<locals>.wrapper at line 152 (1 times), diagnose_call.<locals>.decorator.<locals>.wrapper at line 136 (1 times)]
File ~/dev/.venv/lib/python3.12/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py:577, in FxOnnxInterpreter.run(self, fx_graph_module, onnxfunction_dispatcher, op_level_debug, parent_onnxscript_graph)
576 for node in fx_graph_module.graph.nodes:
--> 577 self.run_node(
578 node,
579 fx_graph_module,
580 onnxfunction_dispatcher,
581 op_level_debug,
582 onnxscript_graph,
583 onnxscript_tracer,
584 fx_name_to_onnxscript_value,
585 )
587 with diagnostic.log_section(logging.DEBUG, "ONNX Graph:"):
[... skipping similar frames: DiagnosticContext.log_and_raise_if_error at line 369 (1 times), diagnose_call.<locals>.decorator.<locals>.wrapper at line 152 (1 times), diagnose_call.<locals>.decorator.<locals>.wrapper at line 136 (1 times)]
File ~/dev/.venv/lib/python3.12/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py:482, in FxOnnxInterpreter.run_node(self, node, fx_graph_module, onnxfunction_dispatcher, op_level_debug, onnxscript_graph, onnxscript_tracer, fx_name_to_onnxscript_value)
481 elif node.op == "call_module":
--> 482 self.call_module(
483 node,
484 onnxscript_graph,
485 fx_name_to_onnxscript_value,
486 onnxscript_tracer,
487 fx_graph_module,
488 onnxfunction_dispatcher,
489 op_level_debug,
490 )
491 elif node.op == "output":
File ~/dev/.venv/lib/python3.12/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py:811, in FxOnnxInterpreter.call_module(self, node, parent_onnxscript_graph, fx_name_to_onnxscript_value, tracer, root_fx_graph_module, onnxfunction_dispatcher, op_level_debug)
807 assert isinstance(
808 sub_module, torch.fx.GraphModule
809 ), f"sub_module must be a torch.fx.GraphModule, not {type(sub_module)} for node {node}."
--> 811 sub_onnxscript_graph = self.run(
812 sub_module, onnxfunction_dispatcher, op_level_debug, parent_onnxscript_graph
813 )
815 onnx_args, _ = _wrap_fx_args_as_onnxscript_args(
816 list(node.args), {}, fx_name_to_onnxscript_value, tracer
817 )
[... skipping similar frames: DiagnosticContext.log_and_raise_if_error at line 369 (6 times), diagnose_call.<locals>.decorator.<locals>.wrapper at line 152 (6 times), diagnose_call.<locals>.decorator.<locals>.wrapper at line 136 (6 times), FxOnnxInterpreter.run at line 577 (3 times), FxOnnxInterpreter.call_module at line 811 (2 times), FxOnnxInterpreter.run_node at line 482 (2 times)]
File ~/dev/.venv/lib/python3.12/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py:482, in FxOnnxInterpreter.run_node(self, node, fx_graph_module, onnxfunction_dispatcher, op_level_debug, onnxscript_graph, onnxscript_tracer, fx_name_to_onnxscript_value)
481 elif node.op == "call_module":
--> 482 self.call_module(
483 node,
484 onnxscript_graph,
485 fx_name_to_onnxscript_value,
486 onnxscript_tracer,
487 fx_graph_module,
488 onnxfunction_dispatcher,
489 op_level_debug,
490 )
491 elif node.op == "output":
File ~/dev/.venv/lib/python3.12/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py:811, in FxOnnxInterpreter.call_module(self, node, parent_onnxscript_graph, fx_name_to_onnxscript_value, tracer, root_fx_graph_module, onnxfunction_dispatcher, op_level_debug)
807 assert isinstance(
808 sub_module, torch.fx.GraphModule
809 ), f"sub_module must be a torch.fx.GraphModule, not {type(sub_module)} for node {node}."
--> 811 sub_onnxscript_graph = self.run(
812 sub_module, onnxfunction_dispatcher, op_level_debug, parent_onnxscript_graph
813 )
815 onnx_args, _ = _wrap_fx_args_as_onnxscript_args(
816 list(node.args), {}, fx_name_to_onnxscript_value, tracer
817 )
[... skipping similar frames: DiagnosticContext.log_and_raise_if_error at line 369 (1 times), diagnose_call.<locals>.decorator.<locals>.wrapper at line 152 (1 times), diagnose_call.<locals>.decorator.<locals>.wrapper at line 136 (1 times)]
File ~/dev/.venv/lib/python3.12/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py:577, in FxOnnxInterpreter.run(self, fx_graph_module, onnxfunction_dispatcher, op_level_debug, parent_onnxscript_graph)
576 for node in fx_graph_module.graph.nodes:
--> 577 self.run_node(
578 node,
579 fx_graph_module,
580 onnxfunction_dispatcher,
581 op_level_debug,
582 onnxscript_graph,
583 onnxscript_tracer,
584 fx_name_to_onnxscript_value,
585 )
587 with diagnostic.log_section(logging.DEBUG, "ONNX Graph:"):
File ~/dev/.venv/lib/python3.12/site-packages/torch/onnx/_internal/diagnostics/infra/decorator.py:152, in diagnose_call.<locals>.decorator.<locals>.wrapper(*args, **kwargs)
151 finally:
--> 152 ctx.log_and_raise_if_error(diag)
File ~/dev/.venv/lib/python3.12/site-packages/torch/onnx/_internal/diagnostics/infra/context.py:369, in DiagnosticContext.log_and_raise_if_error(self, diagnostic)
368 if diagnostic.source_exception is not None:
--> 369 raise diagnostic.source_exception
370 raise RuntimeErrorWithDiagnostic(diagnostic)
File ~/dev/.venv/lib/python3.12/site-packages/torch/onnx/_internal/diagnostics/infra/decorator.py:136, in diagnose_call.<locals>.decorator.<locals>.wrapper(*args, **kwargs)
135 try:
--> 136 return_values = fn(*args, **kwargs)
137 with diag.log_section(logging.INFO, "Return values"):
File ~/dev/.venv/lib/python3.12/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py:471, in FxOnnxInterpreter.run_node(self, node, fx_graph_module, onnxfunction_dispatcher, op_level_debug, onnxscript_graph, onnxscript_tracer, fx_name_to_onnxscript_value)
470 elif node.op == "call_function":
--> 471 self.call_function(
472 node,
473 onnxscript_tracer,
474 fx_name_to_onnxscript_value,
475 onnxfunction_dispatcher,
476 op_level_debug,
477 fx_graph_module,
478 )
479 elif node.op == "call_method":
File ~/dev/.venv/lib/python3.12/site-packages/torch/onnx/_internal/fx/fx_onnx_interpreter.py:693, in FxOnnxInterpreter.call_function(self, node, onnxscript_tracer, fx_name_to_onnxscript_value, onnxfunction_dispatcher, op_level_debug, fx_graph_module)
691 # Dispatch to ONNX op through OpShema. The input argument dtypes are compared to
692 # function signature in OpSchema, and find the best matched overload.
--> 693 symbolic_fn = onnxfunction_dispatcher.dispatch(
694 node=node,
695 onnx_args=onnx_args,
696 onnx_kwargs=onnx_kwargs,
697 diagnostic_context=self.diagnostic_context,
698 )
699 with onnxscript.evaluator.default_as(onnxscript_tracer):
File ~/dev/.venv/lib/python3.12/site-packages/torch/onnx/_internal/fx/onnxfunction_dispatcher.py:143, in OnnxFunctionDispatcher.dispatch(self, node, onnx_args, onnx_kwargs, diagnostic_context)
141 # If there are overloaded functions available, we will find one that perfect or
142 # nearest matches the given arguments and keyword arguments
--> 143 return self._find_the_perfect_or_nearest_match_onnxfunction(
144 node,
145 default_and_custom_functions,
146 onnx_args,
147 onnx_kwargs,
148 diagnostic_context,
149 )
File ~/dev/.venv/lib/python3.12/site-packages/torch/onnx/_internal/diagnostics/infra/decorator.py:152, in diagnose_call.<locals>.decorator.<locals>.wrapper(*args, **kwargs)
151 finally:
--> 152 ctx.log_and_raise_if_error(diag)
File ~/dev/.venv/lib/python3.12/site-packages/torch/onnx/_internal/diagnostics/infra/context.py:369, in DiagnosticContext.log_and_raise_if_error(self, diagnostic)
368 if diagnostic.source_exception is not None:
--> 369 raise diagnostic.source_exception
370 raise RuntimeErrorWithDiagnostic(diagnostic)
File ~/dev/.venv/lib/python3.12/site-packages/torch/onnx/_internal/diagnostics/infra/decorator.py:136, in diagnose_call.<locals>.decorator.<locals>.wrapper(*args, **kwargs)
135 try:
--> 136 return_values = fn(*args, **kwargs)
137 with diag.log_section(logging.INFO, "Return values"):
File ~/dev/.venv/lib/python3.12/site-packages/torch/onnx/_internal/fx/onnxfunction_dispatcher.py:239, in OnnxFunctionDispatcher._find_the_perfect_or_nearest_match_onnxfunction(self, node, default_and_custom_functions, onnx_args, onnx_kwargs, diagnostic_context)
238 # NOTE: 1. If the perfect match is found, return the function
--> 239 if function_opschema.perfect_match_inputs(
240 diagnostic, onnx_args, onnx_kwargs
241 ):
242 return symbolic_function.onnx_function
File ~/dev/.venv/lib/python3.12/site-packages/torch/onnx/_internal/fx/onnxfunction_dispatcher.py:646, in _OnnxSchemaChecker.perfect_match_inputs(self, diagnostic, args, kwargs)
643 for schema_input, torch_input in zip(
644 self.op_schema.inputs, function_inputs
645 ):
--> 646 torch_input_compatible_types = _find_onnx_data_type(torch_input)
647 allowed_types = self.type_constraints[schema_input.type_str]
File ~/dev/.venv/lib/python3.12/site-packages/torch/onnx/_internal/fx/onnxfunction_dispatcher.py:907, in _find_onnx_data_type(torch_input)
905 return set()
--> 907 raise RuntimeError(f"Unknown input type from input: {torch_input}")
RuntimeError: Unknown input type from input: masked_fill
The above exception was the direct cause of the following exception:
OnnxExporterError Traceback (most recent call last)
Cell In[13], line 16
13 input_ids = torch.stack([torch.tensor(input) for input in input_ids])
14 attention_mask = torch.ones_like(input_ids)
---> 16 torch.onnx.dynamo_export(
17 model,
18 input_ids,
19 attention_mask,
20 export_options = torch.onnx.ExportOptions(
21 dynamic_shapes = True,
22 op_level_debug = False,
23 )
24 )
File ~/dev/.venv/lib/python3.12/site-packages/torch/onnx/_internal/exporter.py:1520, in dynamo_export(model, export_options, *model_args, **model_kwargs)
1512 resolved_export_options.diagnostic_context.dump(sarif_report_path)
1513 message = (
1514 f"Failed to export the model to ONNX. Generating SARIF report at '{sarif_report_path}'. "
1515 "SARIF is a standard format for the output of static analysis tools. "
(...)
1518 f"Please report a bug on PyTorch Github: {_PYTORCH_GITHUB_ISSUES_URL}"
1519 )
-> 1520 raise OnnxExporterError(
1521 ONNXProgram._from_failure(e, resolved_export_options.diagnostic_context),
1522 message,
1523 ) from e
OnnxExporterError: Failed to export the model to ONNX. Generating SARIF report at 'report_dynamo_export.sarif'. SARIF is a standard format for the output of static analysis tools. SARIF logs can be loaded in VS Code SARIF viewer extension, or SARIF web viewer (https://microsoft.github.io/sarif-web-component/). Please report a bug on PyTorch Github: https://github.com/pytorch/pytorch/issues
```
Here is a minimal reproducible example:
```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification
# BEGIN CONFIG #
MODEL_DIR = f'roberta-base'
# END CONFIG #
model = AutoModelForSequenceClassification.from_pretrained(MODEL_DIR, attn_implementation = 'sdpa')
model = model.eval()
tokenizer = AutoTokenizer.from_pretrained(MODEL_DIR)
input_ids = [tokenizer.encode('Hello world')] * 128
input_ids = torch.stack([torch.tensor(input) for input in input_ids])
attention_mask = torch.ones_like(input_ids)
torch.onnx.dynamo_export(
model,
input_ids,
attention_mask,
export_options = torch.onnx.ExportOptions(
dynamic_shapes = True,
op_level_debug = False,
)
)
```
The regular TorchScript export is not any faster than running the model through `transformers` directly, so I was hoping the dynamo-based exporter would be faster.
### Versions
[pip3] numpy==1.26.3
[pip3] nvidia-cublas-cu12==12.1.3.1
[pip3] nvidia-cuda-cupti-cu12==12.1.105
[pip3] nvidia-cuda-nvrtc-cu12==12.1.105
[pip3] nvidia-cuda-runtime-cu12==12.1.105
[pip3] nvidia-cudnn-cu12==9.1.0.70
[pip3] nvidia-cufft-cu12==11.0.2.54
[pip3] nvidia-curand-cu12==10.3.2.106
[pip3] nvidia-cusolver-cu12==11.4.5.107
[pip3] nvidia-cusparse-cu12==12.1.0.106
[pip3] nvidia-nccl-cu12==2.20.5
[pip3] nvidia-nvjitlink-cu12==12.6.68
[pip3] nvidia-nvtx-cu12==12.1.105
[pip3] onnx==1.17.0
[pip3] onnxruntime==1.20.1
[pip3] onnxscript==0.1.0.dev20241220
[pip3] torch==2.4.1
[pip3] torchaudio==2.4.1+cu124
[pip3] torchvision==0.19.1+cu124
[pip3] triton==3.0.0
[conda] Could not collect
| true
|
2,752,311,937
|
Fix unused-variable issues in caffe2
|
r-barnes
|
closed
|
[
"fb-exported",
"Merged",
"Reverted",
"ciflow/trunk",
"release notes: sparse",
"ci-no-td"
] | 30
|
CONTRIBUTOR
|
Summary:
LLVM-15 has a warning `-Wunused-variable`, which we treat as an error because it is so often diagnostic of a code issue. Unused variables can compromise readability or, worse, performance.
This diff either (a) removes an unused variable and, possibly, its associated code or (b) qualifies the variable with `[[maybe_unused]]`.
- If you approve of this diff, please use the "Accept & Ship" button :-)
Test Plan: Sandcastle
| true
|
2,752,242,541
|
[inductor] [cpp] Support vectorization for score and mask in FlexAttention CPU
|
chunyuan-w
|
closed
|
[
"open source",
"Merged",
"ciflow/trunk",
"topic: not user facing",
"module: inductor",
"ciflow/inductor"
] | 5
|
COLLABORATOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #143638
## Description
With this PR, we generate vectorized kernels for the score and mask functions in FlexAttention on CPU.
## Modification
The main changes include:
- For the input and output buffers of the mask and score functions, we pass tensors instead of scalars.
- For the mask function, the original scalar version only contains the logic for computing the mask value. This PR adds the logic of applying the mask to the `qk_data` tensor into the graph and then leverages the CPP backend to generate vectorized kernels.
The original mask graph:
```python
def mask_fn(b, h, q_idx, kv_idx):
mask = q_idx >= kv_idx
return mask
```
The converted_mask_graph should be:
```python
def converted_mask_fn(qk_data, b, h, q_idx, kv_idx):
mask = q_idx >= kv_idx
qk_data = torch.where(mask, qk_data, torch.full_like(qk_data, -float("inf")))
return qk_data
```
## Benchmark
For q, k, v of shape `[1, 32, 1024, 128]`, using 40 CPU cores, we observe an over 20x speedup compared with the non-vectorized version for both `is_causal` = `False` and `True`.
## Test plan
The existing FlexAttention UTs (`test/inductor/test_flex_attention.py`, `test/inductor/test_flex_decoding.py`) can cover the change in this PR.
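For context, below is a minimal sketch of how this CPU path can be exercised end to end. The shapes follow the benchmark above and the causal mask matches the example mask graph; treat it as illustrative usage rather than part of the test suite.
```python
# Minimal sketch: run FlexAttention on CPU through torch.compile so the
# Inductor CPP backend generates the vectorized score/mask kernels.
import torch
from torch.nn.attention.flex_attention import create_block_mask, flex_attention

def causal(b, h, q_idx, kv_idx):
    # Same predicate as the example mask graph above.
    return q_idx >= kv_idx

q = torch.randn(1, 32, 1024, 128)
k = torch.randn(1, 32, 1024, 128)
v = torch.randn(1, 32, 1024, 128)

block_mask = create_block_mask(causal, B=None, H=None, Q_LEN=1024, KV_LEN=1024, device="cpu")
compiled_flex = torch.compile(flex_attention)
out = compiled_flex(q, k, v, block_mask=block_mask)
print(out.shape)  # torch.Size([1, 32, 1024, 128])
```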
## Output code
**Code before this PR is in scalar version:**
```cpp
// apply score mod function
for (int64_t row = 0; row < cur_qSplitSize; ++row) {
for (int64_t col = 0; col < cur_kvSplitSize; col++) {
std::vector<int64_t> b_idx = {i};
std::vector<int64_t> h_idx = {j};
std::vector<int64_t> q_idx = {m+row};
int64_t phisical_kv_idx = n+col;
if (use_kv_indice) {
phisical_kv_idx= *kv_logical_data * kvBlockSize + col;
}
std::vector<int64_t> kv_idx = {phisical_kv_idx};
accum_t* in_ptr0 = qk_data + row * cur_kvSplitSize + col;
auto in_ptr1 = b_idx.data();
auto in_ptr2 = h_idx.data();
auto in_ptr3 = q_idx.data();
auto in_ptr4 = kv_idx.data();
accum_t* out_ptr0 = in_ptr0;
{
{
{
auto tmp0 = in_ptr0[static_cast<int64_t>(0L)];
out_ptr0[static_cast<int64_t>(0L)] = tmp0;
}
}
}
}
}
// Apply block mask, fill unused with -inf
for (int64_t row = 0; row < cur_qSplitSize; ++row) {
for (int64_t col = 0; col < cur_kvSplitSize; col++) {
std::vector<int64_t> b_idx = {i};
std::vector<int64_t> h_idx = {j};
std::vector<int64_t> q_idx = {m+row};
int64_t phisical_kv_idx = n+col;
if (use_kv_indice) {
phisical_kv_idx= *kv_logical_data * kvBlockSize + col;
}
std::vector<int64_t> kv_idx = {phisical_kv_idx};
accum_t* qk_block = qk_data + row * cur_kvSplitSize + col;
auto in_ptr1 = b_idx.data();
auto in_ptr2 = h_idx.data();
auto in_ptr3 = q_idx.data();
auto in_ptr4 = kv_idx.data();
std::vector<int64_t> temp = {0};
int64_t* out_ptr1 = temp.data();
{
{
{
auto tmp0 = static_cast<bool>(true);
out_ptr1[static_cast<int64_t>(0L)] = tmp0;
}
}
}
*qk_block = *out_ptr1 != 0
? *qk_block
: -std::numeric_limits<accum_t>::infinity();
}
}
```
**Code after this PR will be vectorized:**
```cpp
accum_t* in_ptr0 = qk_data;
auto in_ptr1 = b_idx.data();
auto in_ptr2 = h_idx.data();
auto in_ptr3 = q_idx.data();
auto in_ptr4 = kv_idx.data();
// apply score mod function
{
accum_t* out_ptr0 = in_ptr0;
{
#pragma GCC ivdep
for(int64_t x0=static_cast<int64_t>(0L); x0<static_cast<int64_t>(cur_qSplitSize); x0+=static_cast<int64_t>(1L))
{
for(int64_t x1=static_cast<int64_t>(0L); x1<static_cast<int64_t>(cur_kvSplitSize); x1+=static_cast<int64_t>(16L))
{
{
if(C10_LIKELY(x1 >= static_cast<int64_t>(0) && x1 < static_cast<int64_t>(16L*(c10::div_floor_integer(static_cast<int64_t>(cur_kvSplitSize), static_cast<int64_t>(16L))))))
{
auto tmp0 = at::vec::Vectorized<float>::loadu(in_ptr0 + static_cast<int64_t>(x1 + cur_kvSplitSize*x0), static_cast<int64_t>(16));
tmp0.store(out_ptr0 + static_cast<int64_t>(x1 + cur_kvSplitSize*x0));
}
if(C10_UNLIKELY(x1 >= static_cast<int64_t>(16L*(c10::div_floor_integer(static_cast<int64_t>(cur_kvSplitSize), static_cast<int64_t>(16L)))) && x1 < static_cast<int64_t>(cur_kvSplitSize)))
{
auto tmp0 = at::vec::Vectorized<float>::loadu(in_ptr0 + static_cast<int64_t>(x1 + cur_kvSplitSize*x0), static_cast<int64_t>(cur_kvSplitSize + ((-16L)*(c10::div_floor_integer(static_cast<int64_t>(cur_kvSplitSize), static_cast<int64_t>(16L))))));
tmp0.store(out_ptr0 + static_cast<int64_t>(x1 + cur_kvSplitSize*x0), static_cast<int64_t>(cur_kvSplitSize + ((-16L)*(c10::div_floor_integer(static_cast<int64_t>(cur_kvSplitSize), static_cast<int64_t>(16L))))));
}
}
}
}
}
}
// Apply block mask, fill unused with -inf
{
accum_t* out_ptr1 = in_ptr0;
{
#pragma GCC ivdep
for(int64_t x0=static_cast<int64_t>(0L); x0<static_cast<int64_t>(cur_qSplitSize); x0+=static_cast<int64_t>(1L))
{
for(int64_t x1=static_cast<int64_t>(0L); x1<static_cast<int64_t>(cur_kvSplitSize); x1+=static_cast<int64_t>(16L))
{
{
if(C10_LIKELY(x1 >= static_cast<int64_t>(0) && x1 < static_cast<int64_t>(16L*(c10::div_floor_integer(static_cast<int64_t>(cur_kvSplitSize), static_cast<int64_t>(16L))))))
{
auto tmp0 = at::vec::Vectorized<float>::loadu(in_ptr0 + static_cast<int64_t>(x1 + cur_kvSplitSize*x0), static_cast<int64_t>(16));
auto tmp1 = static_cast<bool>(true);
auto tmp2 = -std::numeric_limits<float>::infinity();
auto tmp3 = at::vec::VecMask<float,1>::from(tmp1);
auto tmp4 = at::vec::Vectorized<float>(tmp2);
auto tmp5 = decltype(tmp0)::blendv(tmp4, tmp0, tmp3.template cast<float,1>());
tmp5.store(out_ptr1 + static_cast<int64_t>(x1 + cur_kvSplitSize*x0));
}
if(C10_UNLIKELY(x1 >= static_cast<int64_t>(16L*(c10::div_floor_integer(static_cast<int64_t>(cur_kvSplitSize), static_cast<int64_t>(16L)))) && x1 < static_cast<int64_t>(cur_kvSplitSize)))
{
for (int64_t x1_tail = static_cast<int64_t>(16L*(c10::div_floor_integer(static_cast<int64_t>(cur_kvSplitSize), static_cast<int64_t>(16L))));x1_tail < static_cast<int64_t>(cur_kvSplitSize); x1_tail++)
{
auto tmp0 = in_ptr0[static_cast<int64_t>(x1_tail + cur_kvSplitSize*x0)];
auto tmp1 = static_cast<bool>(true);
auto tmp2 = -std::numeric_limits<float>::infinity();
auto tmp3 = tmp1 ? tmp0 : tmp2;
out_ptr1[static_cast<int64_t>(x1_tail + cur_kvSplitSize*x0)] = tmp3;
}
}
}
}
}
}
}
```
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @ColinPeppler @amjames @desertfire @chauhang @aakhundov
| true
|
2,752,143,348
|
[Easy] Fix todo by enable tests for cuda
|
zeshengzong
|
closed
|
[
"triaged",
"open source",
"Merged",
"ciflow/trunk",
"topic: not user facing"
] | 4
|
CONTRIBUTOR
|
Fix the TODO in `test_tensor_creation_ops.py`:
```python
# TODO: update to work on CUDA, too
```
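For context, here is a hedged sketch of the device-generic test pattern such a change typically relies on; the class and test names below are illustrative, not the actual tests in `test_tensor_creation_ops.py`.
```python
# Sketch: instantiate a test for every available device type (CPU, CUDA, ...)
# using PyTorch's device-type test framework.
import torch
from torch.testing._internal.common_device_type import instantiate_device_type_tests
from torch.testing._internal.common_utils import TestCase, run_tests

class TestCreationExample(TestCase):
    def test_zeros(self, device):
        t = torch.zeros(3, device=device)
        self.assertEqual(t.device.type, torch.device(device).type)
        self.assertTrue((t == 0).all())

# Generates TestCreationExampleCPU, TestCreationExampleCUDA, ... as applicable.
instantiate_device_type_tests(TestCreationExample, globals())

if __name__ == "__main__":
    run_tests()
```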
**Test Result**
```bash
$ pytest test/test_tensor_creation_ops.py
```

```bash
$ lintrunner
```

cc @ezyang @albanD
| true
|
2,752,090,735
|
Add where_ ops
|
zeshengzong
|
open
|
[
"open source"
] | 6
|
CONTRIBUTOR
|
Fixes #28329
| true
|
2,752,084,307
|
[Inductor][CPP] Fix bitwise shift with corner inputs
|
leslie-fang-intel
|
closed
|
[
"open source",
"Merged",
"ciflow/trunk",
"topic: not user facing",
"module: inductor",
"ciflow/inductor"
] | 3
|
COLLABORATOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #143635
**Summary**
Fix issues https://github.com/pytorch/pytorch/issues/143555 and https://github.com/pytorch/pytorch/issues/143566 by aligning the implementation with eager (https://github.com/pytorch/pytorch/blob/29b586bbad98dbc3d9ced980ccbdd8125d90a04d/aten/src/ATen/native/cpu/BinaryOpsKernel.cpp#L501) at these corner inputs.
**Test Plan**
```
python test/inductor/test_cpu_repro.py -k test_bitwise_shift_corner_inputs
```
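For context, a minimal sketch of the kind of corner input involved (the assumption here is that the issues concern shift amounts at or beyond the dtype's bit width); it simply checks that the compiled CPP kernel agrees with eager.
```python
# Sketch: compare eager vs. torch.compile for bitwise shifts with boundary
# shift amounts on int32 tensors.
import torch

def fn(a, b):
    return a << b, a >> b

a = torch.tensor([1, -1], dtype=torch.int32)
b = torch.tensor([31, 32], dtype=torch.int32)  # 32 sits at the int32 bit-width boundary

eager_out = fn(a, b)
compiled_out = torch.compile(fn)(a, b)
for e, c in zip(eager_out, compiled_out):
    print(torch.equal(e, c))  # expected True once eager and Inductor agree
```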
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @ColinPeppler @amjames @desertfire @chauhang @aakhundov
| true
|
2,752,044,098
|
Add mps to GPU_TYPES
|
malfet
|
closed
|
[
"Merged",
"ciflow/trunk",
"topic: not user facing",
"module: inductor",
"ciflow/inductor"
] | 3
|
CONTRIBUTOR
|
Because MPS is a GPU, but don't require Triton for it, as it does not need one.
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @ColinPeppler @amjames @desertfire @chauhang @aakhundov
| true
|
2,752,028,982
|
log more context to anomaly nan checks
|
yunjiangster
|
closed
|
[
"fb-exported",
"Stale",
"topic: not user facing"
] | 7
|
NONE
|
Test Plan: buck tests
Differential Revision: D67321092
| true
|
2,752,014,181
|
Compiled forward pass output corrupted when using @torch.no_grad
|
ae99
|
closed
|
[
"oncall: pt2"
] | 1
|
NONE
|
### 🐛 Describe the bug
Minimal reproduction:
```
import torch
import torch.nn as nn
class Model(nn.Module):
def __init__(self, n_state: int = 8):
super().__init__()
self.embed = nn.Embedding(32, n_state)
def forward(self, inputs):
padding = torch.zeros((1, 1), device=inputs.device, dtype=inputs.dtype)
padded = torch.cat((padding, inputs), dim=0)
return torch.stack((self.embed(padded), self.embed(padded)))
model = Model().to("cuda")
inputs = torch.randint(0, 32, (1, 1)).to("cuda")
model = torch.compile(model)
with torch.no_grad():
x1 = model(inputs)
x2 = model(inputs)
print(torch.allclose(x1, x2))
x1, x2
```
Notice the entire output of the second `embed` is zeroed out in the no_grad path, `x1`. This is not the case for `x2`, however.
Various operations (mutations, for example) also seem to trigger this; this specific reproduction seemed the simplest to me.
### Error logs
_No response_
### Versions
```
PyTorch version: 2.5.1+cu124
Is debug build: False
CUDA used to build PyTorch: 12.4
ROCM used to build PyTorch: N/A
OS: Ubuntu 24.04.1 LTS (x86_64)
GCC version: (Ubuntu 13.3.0-6ubuntu2~24.04) 13.3.0
Clang version: Could not collect
CMake version: Could not collect
Libc version: glibc-2.39
Python version: 3.12.4 (main, Aug 7 2024, 15:25:40) [GCC 13.2.0] (64-bit runtime)
Python platform: Linux-6.8.0-51-generic-x86_64-with-glibc2.39
Is CUDA available: True
CUDA runtime version: Could not collect
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration: GPU 0: NVIDIA GeForce RTX 4090
Nvidia driver version: 550.120
cuDNN version: Could not collect
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 46 bits physical, 48 bits virtual
Byte Order: Little Endian
CPU(s): 24
On-line CPU(s) list: 0-23
Vendor ID: GenuineIntel
Model name: 12th Gen Intel(R) Core(TM) i9-12900KS
CPU family: 6
Model: 151
Thread(s) per core: 2
Core(s) per socket: 16
Socket(s): 1
Stepping: 2
CPU(s) scaling MHz: 72%
CPU max MHz: 5500.0000
CPU min MHz: 800.0000
BogoMIPS: 6835.20
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc art arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc cpuid aperfmperf tsc_known_freq pni pclmulqdq dtes64 monitor ds_cpl vmx smx est tm2 ssse3 sdbg fma cx16 xtpr pdcm pcid sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm 3dnowprefetch cpuid_fault epb ssbd ibrs ibpb stibp ibrs_enhanced tpr_shadow flexpriority ept vpid ept_ad fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid rdseed adx smap clflushopt clwb intel_pt sha_ni xsaveopt xsavec xgetbv1 xsaves split_lock_detect user_shstk avx_vnni dtherm ida arat pln pts hwp hwp_notify hwp_act_window hwp_epp hwp_pkg_req hfi vnmi umip pku ospke waitpkg gfni vaes vpclmulqdq tme rdpid movdiri movdir64b fsrm md_clear serialize pconfig arch_lbr ibt flush_l1d arch_capabilities
Virtualization: VT-x
L1d cache: 640 KiB (16 instances)
L1i cache: 768 KiB (16 instances)
L2 cache: 14 MiB (10 instances)
L3 cache: 30 MiB (1 instance)
NUMA node(s): 1
NUMA node0 CPU(s): 0-23
...
Versions of relevant libraries:
[pip3] numpy==1.26.4
[pip3] nvidia-cublas-cu12==12.4.5.8
[pip3] nvidia-cuda-cupti-cu12==12.4.127
[pip3] nvidia-cuda-nvrtc-cu12==12.4.127
[pip3] nvidia-cuda-runtime-cu12==12.4.127
[pip3] nvidia-cudnn-cu12==9.1.0.70
[pip3] nvidia-cufft-cu12==11.2.1.3
[pip3] nvidia-curand-cu12==10.3.5.147
[pip3] nvidia-cusolver-cu12==11.6.1.9
[pip3] nvidia-cusparse-cu12==12.3.1.170
[pip3] nvidia-nccl-cu12==2.21.5
[pip3] nvidia-nvjitlink-cu12==12.4.127
[pip3] nvidia-nvtx-cu12==12.4.127
[pip3] open_clip_torch==2.29.0
[pip3] optree==0.13.1
[pip3] pytorch-triton==3.0.0+dedb7bdf33
[pip3] torch==2.5.1
[pip3] torchaudio==2.5.1
[pip3] triton==3.1.0
...
```
cc @chauhang @penguinwu
| true
|
2,751,992,173
|
[wip] kick off kernel compile early
|
eellison
|
closed
|
[
"Stale",
"module: inductor",
"ciflow/inductor"
] | 3
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #143631
* #143408
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @ColinPeppler @amjames @desertfire @chauhang @aakhundov
| true
|
2,751,959,756
|
Attempt to speed up MPS getTensorStringKey
|
swolchok
|
closed
|
[
"fb-exported",
"Stale",
"ciflow/trunk",
"release notes: mps",
"ciflow/mps"
] | 4
|
CONTRIBUTOR
|
I saw while profiling torchchat's MPS mode that this function was unexpectedly hot. It does a bunch of unnecessary allocation, so let's try fixing that.
Unfortunately I have not tested this; I was able to debug a crash it caused, but I can't complete MPS inference (with the new torchao quantized MPS ops) without a segfault using my own build of PyTorch, even without this patch. I badly need a walkthrough on how to use my own build of PyTorch (and other torchchat dependencies) with torchchat.
| true
|
2,751,955,158
|
Add support for bfloat16 atomic adds in fbcode
|
mlazos
|
closed
|
[
"Merged",
"ciflow/trunk",
"module: inductor",
"ciflow/inductor",
"release notes: inductor"
] | 6
|
CONTRIBUTOR
|
Reland https://github.com/pytorch/pytorch/pull/141857 and fall back on A100, which doesn't have bfloat16 atomic add instructions.
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @ColinPeppler @amjames @desertfire @chauhang @aakhundov
| true
|
2,751,941,732
|
Fix false positive from f-strings in set_linter
|
jansel
|
closed
|
[
"Merged",
"ciflow/trunk",
"topic: not user facing"
] | 3
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #143628
This linter was producing a flood of false positives on Python 3.12; example:
```py
$ python3 tools/linter/adapters/set_linter.py torch/_inductor/runtime/triton_heuristics.py
torch/_inductor/runtime/triton_heuristics.py:192:25: Builtin `set` is deprecated
190 | args_str += ", ".join(call_args)
191 | for k, v in call_kwargs.items():
192 | args_str += f", {k}={v}"
^
193 |
194 | abs_path = os.path.abspath(sys.argv[0])
torch/_inductor/runtime/triton_heuristics.py:192:27: Builtin `set` is deprecated
190 | args_str += ", ".join(call_args)
191 | for k, v in call_kwargs.items():
192 | args_str += f", {k}={v}"
^
193 |
194 | abs_path = os.path.abspath(sys.argv[0])
torch/_inductor/runtime/triton_heuristics.py:192:29: Builtin `set` is deprecated
190 | args_str += ", ".join(call_args)
191 | for k, v in call_kwargs.items():
192 | args_str += f", {k}={v}"
^
193 |
194 | abs_path = os.path.abspath(sys.argv[0])
torch/_inductor/runtime/triton_heuristics.py:192:31: Builtin `set` is deprecated
190 | args_str += ", ".join(call_args)
191 | for k, v in call_kwargs.items():
192 | args_str += f", {k}={v}"
^
193 |
194 | abs_path = os.path.abspath(sys.argv[0])
torch/_inductor/runtime/triton_heuristics.py:195:17: Builtin `set` is deprecated
193 |
194 | abs_path = os.path.abspath(sys.argv[0])
195 | with open(f"{abs_path}.launch_params", "a") as f:
^
196 | f.write(f"{kernel_name} | {args_str}\n")
197 |
torch/_inductor/runtime/triton_heuristics.py:195:26: Builtin `set` is deprecated
193 |
194 | abs_path = os.path.abspath(sys.argv[0])
195 | with open(f"{abs_path}.launch_params", "a") as f:
^
196 | f.write(f"{kernel_name} | {args_str}\n")
197 |
torch/_inductor/runtime/triton_heuristics.py:196:19: Builtin `set` is deprecated
194 | abs_path = os.path.abspath(sys.argv[0])
195 | with open(f"{abs_path}.launch_params", "a") as f:
196 | f.write(f"{kernel_name} | {args_str}\n")
^
197 |
198 |
torch/_inductor/runtime/triton_heuristics.py:196:31: Builtin `set` is deprecated
194 | abs_path = os.path.abspath(sys.argv[0])
195 | with open(f"{abs_path}.launch_params", "a") as f:
196 | f.write(f"{kernel_name} | {args_str}\n")
^
197 |
198 |
torch/_inductor/runtime/triton_heuristics.py:196:35: Builtin `set` is deprecated
194 | abs_path = os.path.abspath(sys.argv[0])
195 | with open(f"{abs_path}.launch_params", "a") as f:
196 | f.write(f"{kernel_name} | {args_str}\n")
^
197 |
198 |
torch/_inductor/runtime/triton_heuristics.py:196:44: Builtin `set` is deprecated
194 | abs_path = os.path.abspath(sys.argv[0])
195 | with open(f"{abs_path}.launch_params", "a") as f:
196 | f.write(f"{kernel_name} | {args_str}\n")
^
197 |
198 |
torch/_inductor/runtime/triton_heuristics.py:729:26: Builtin `set` is deprecated
727 | exec(
728 | f"""
729 | def launcher({', '.join(def_args)}, grid, stream):
^
730 | if callable(grid):
731 | grid_0, grid_1, grid_2 = grid(grid_meta)
torch/_inductor/runtime/triton_heuristics.py:729:46: Builtin `set` is deprecated
727 | exec(
728 | f"""
729 | def launcher({', '.join(def_args)}, grid, stream):
^
730 | if callable(grid):
731 | grid_0, grid_1, grid_2 = grid(grid_meta)
torch/_inductor/runtime/triton_heuristics.py:735:24: Builtin `set` is deprecated
733 | grid_0, grid_1, grid_2 = grid
734 |
735 | args = {', '.join(call_args)},
^
736 | launch_args = get_launch_args(
737 | grid, grid_0, grid_1, grid_2, stream, function,
torch/_inductor/runtime/triton_heuristics.py:735:45: Builtin `set` is deprecated
733 | grid_0, grid_1, grid_2 = grid
734 |
735 | args = {', '.join(call_args)},
^
736 | launch_args = get_launch_args(
737 | grid, grid_0, grid_1, grid_2, stream, function,
torch/_inductor/runtime/triton_heuristics.py:1144:20: Builtin `set` is deprecated
1142 | cur_file = inspect.stack()[1].filename
1143 | summary_str = (
1144 | f"SUMMARY ({cur_file})\n"
^
1145 | f"{overall_time:.2f}ms \t {overall_gb:.2f} GB\t {overall_gb / (overall_time / 1e3):.2f}GB/s"
1146 | )
torch/_inductor/runtime/triton_heuristics.py:1144:29: Builtin `set` is deprecated
1142 | cur_file = inspect.stack()[1].filename
1143 | summary_str = (
1144 | f"SUMMARY ({cur_file})\n"
^
1145 | f"{overall_time:.2f}ms \t {overall_gb:.2f} GB\t {overall_gb / (overall_time / 1e3):.2f}GB/s"
1146 | )
torch/_inductor/runtime/triton_heuristics.py:1162:61: Builtin `set` is deprecated
1160 | )
1161 | file.write("====================\n")
1162 | file.write(f"TRITON KERNELS BANDWIDTH INFO ({cur_file})\n")
^
1163 | for ms, num_gb, gb_per_s, kernel_name in sorted_calls:
1164 | # also display the runtime percentage for each kernel
torch/_inductor/runtime/triton_heuristics.py:1162:70: Builtin `set` is deprecated
1160 | )
1161 | file.write("====================\n")
1162 | file.write(f"TRITON KERNELS BANDWIDTH INFO ({cur_file})\n")
^
1163 | for ms, num_gb, gb_per_s, kernel_name in sorted_calls:
1164 | # also display the runtime percentage for each kernel
torch/_inductor/runtime/triton_heuristics.py:1166:36: Builtin `set` is deprecated
1164 | # also display the runtime percentage for each kernel
1165 | percentage = f"{ms / overall_time * 100:.2f}%"
1166 | suffix = f" \t {percentage} \t {kernel_name}"
^
1167 | bw_info_str = create_bandwidth_info_str(
1168 | ms,
torch/_inductor/runtime/triton_heuristics.py:1166:47: Builtin `set` is deprecated
1164 | # also display the runtime percentage for each kernel
1165 | percentage = f"{ms / overall_time * 100:.2f}%"
1166 | suffix = f" \t {percentage} \t {kernel_name}"
^
1167 | bw_info_str = create_bandwidth_info_str(
1168 | ms,
torch/_inductor/runtime/triton_heuristics.py:1166:52: Builtin `set` is deprecated
1164 | # also display the runtime percentage for each kernel
1165 | percentage = f"{ms / overall_time * 100:.2f}%"
1166 | suffix = f" \t {percentage} \t {kernel_name}"
^
1167 | bw_info_str = create_bandwidth_info_str(
1168 | ms,
torch/_inductor/runtime/triton_heuristics.py:1166:64: Builtin `set` is deprecated
1164 | # also display the runtime percentage for each kernel
1165 | percentage = f"{ms / overall_time * 100:.2f}%"
1166 | suffix = f" \t {percentage} \t {kernel_name}"
^
1167 | bw_info_str = create_bandwidth_info_str(
1168 | ms,
torch/_inductor/runtime/triton_heuristics.py:1175:30: Builtin `set` is deprecated
1173 | )
1174 | file.write(bw_info_str + "\n")
1175 | file.write(f"{summary_str}\n\n")
^
1176 | except Exception as e:
1177 | log.warning(
torch/_inductor/runtime/triton_heuristics.py:1175:42: Builtin `set` is deprecated
1173 | )
1174 | file.write(bw_info_str + "\n")
1175 | file.write(f"{summary_str}\n\n")
^
1176 | except Exception as e:
1177 | log.warning(
torch/_inductor/runtime/triton_heuristics.py:1205:29: Builtin `set` is deprecated
1203 | else:
1204 | possible_names = _find_names(self)
1205 | kernel_name = f"{max(possible_names, key=len)}"
^
1206 | if not re.match(self.regex_filter, kernel_name):
1207 | return
torch/_inductor/runtime/triton_heuristics.py:1205:58: Builtin `set` is deprecated
1203 | else:
1204 | possible_names = _find_names(self)
1205 | kernel_name = f"{max(possible_names, key=len)}"
^
1206 | if not re.match(self.regex_filter, kernel_name):
1207 | return
torch/_inductor/runtime/triton_heuristics.py:1241:60: Builtin `set` is deprecated
1239 | "%s",
1240 | create_bandwidth_info_str(
1241 | ms, num_gb, gb_per_s, suffix=f" \t {kernel_name}"
^
1242 | ),
1243 | )
torch/_inductor/runtime/triton_heuristics.py:1241:72: Builtin `set` is deprecated
1239 | "%s",
1240 | create_bandwidth_info_str(
1241 | ms, num_gb, gb_per_s, suffix=f" \t {kernel_name}"
^
1242 | ),
1243 | )
torch/_inductor/runtime/triton_heuristics.py:1256:15: Builtin `set` is deprecated
1254 | for cfg in configs:
1255 | hasher.update(
1256 | f"{sorted(cfg.kwargs.items())} {cfg.num_warps} {cfg.num_stages}\n".encode()
^
1257 | )
1258 | return hasher.hexdigest()
torch/_inductor/runtime/triton_heuristics.py:1256:42: Builtin `set` is deprecated
1254 | for cfg in configs:
1255 | hasher.update(
1256 | f"{sorted(cfg.kwargs.items())} {cfg.num_warps} {cfg.num_stages}\n".encode()
^
1257 | )
1258 | return hasher.hexdigest()
torch/_inductor/runtime/triton_heuristics.py:1256:44: Builtin `set` is deprecated
1254 | for cfg in configs:
1255 | hasher.update(
1256 | f"{sorted(cfg.kwargs.items())} {cfg.num_warps} {cfg.num_stages}\n".encode()
^
1257 | )
1258 | return hasher.hexdigest()
torch/_inductor/runtime/triton_heuristics.py:1256:58: Builtin `set` is deprecated
1254 | for cfg in configs:
1255 | hasher.update(
1256 | f"{sorted(cfg.kwargs.items())} {cfg.num_warps} {cfg.num_stages}\n".encode()
^
1257 | )
1258 | return hasher.hexdigest()
torch/_inductor/runtime/triton_heuristics.py:1256:60: Builtin `set` is deprecated
1254 | for cfg in configs:
1255 | hasher.update(
1256 | f"{sorted(cfg.kwargs.items())} {cfg.num_warps} {cfg.num_stages}\n".encode()
^
1257 | )
1258 | return hasher.hexdigest()
torch/_inductor/runtime/triton_heuristics.py:1256:75: Builtin `set` is deprecated
1254 | for cfg in configs:
1255 | hasher.update(
1256 | f"{sorted(cfg.kwargs.items())} {cfg.num_warps} {cfg.num_stages}\n".encode()
^
1257 | )
1258 | return hasher.hexdigest()
torch/_inductor/runtime/triton_heuristics.py:1377:23: Builtin `set` is deprecated
1375 | if numel is None:
1376 | continue
1377 | block = cfg[f"{label}BLOCK"]
^
1378 | if numel == 1:
1379 | assert block == 1, (
torch/_inductor/runtime/triton_heuristics.py:1377:29: Builtin `set` is deprecated
1375 | if numel is None:
1376 | continue
1377 | block = cfg[f"{label}BLOCK"]
^
1378 | if numel == 1:
1379 | assert block == 1, (
torch/_inductor/runtime/triton_heuristics.py:1381:24: Builtin `set` is deprecated
1379 | assert block == 1, (
1380 | f"TritonKernel.indexing assumes numel == 1 => BLOCK == 1"
1381 | f" but {label.lower()}numel=={numel} and {label}BLOCK={block} (cfg={cfg})."
^
1382 | )
1383 | max_block = TRITON_MAX_BLOCK[label]
torch/_inductor/runtime/triton_heuristics.py:1381:38: Builtin `set` is deprecated
1379 | assert block == 1, (
1380 | f"TritonKernel.indexing assumes numel == 1 => BLOCK == 1"
1381 | f" but {label.lower()}numel=={numel} and {label}BLOCK={block} (cfg={cfg})."
^
1382 | )
1383 | max_block = TRITON_MAX_BLOCK[label]
torch/_inductor/runtime/triton_heuristics.py:1381:46: Builtin `set` is deprecated
1379 | assert block == 1, (
1380 | f"TritonKernel.indexing assumes numel == 1 => BLOCK == 1"
1381 | f" but {label.lower()}numel=={numel} and {label}BLOCK={block} (cfg={cfg})."
^
1382 | )
1383 | max_block = TRITON_MAX_BLOCK[label]
torch/_inductor/runtime/triton_heuristics.py:1381:52: Builtin `set` is deprecated
1379 | assert block == 1, (
1380 | f"TritonKernel.indexing assumes numel == 1 => BLOCK == 1"
1381 | f" but {label.lower()}numel=={numel} and {label}BLOCK={block} (cfg={cfg})."
^
1382 | )
1383 | max_block = TRITON_MAX_BLOCK[label]
torch/_inductor/runtime/triton_heuristics.py:1381:58: Builtin `set` is deprecated
1379 | assert block == 1, (
1380 | f"TritonKernel.indexing assumes numel == 1 => BLOCK == 1"
1381 | f" but {label.lower()}numel=={numel} and {label}BLOCK={block} (cfg={cfg})."
^
1382 | )
1383 | max_block = TRITON_MAX_BLOCK[label]
torch/_inductor/runtime/triton_heuristics.py:1381:64: Builtin `set` is deprecated
1379 | assert block == 1, (
1380 | f"TritonKernel.indexing assumes numel == 1 => BLOCK == 1"
1381 | f" but {label.lower()}numel=={numel} and {label}BLOCK={block} (cfg={cfg})."
^
1382 | )
1383 | max_block = TRITON_MAX_BLOCK[label]
torch/_inductor/runtime/triton_heuristics.py:1381:71: Builtin `set` is deprecated
1379 | assert block == 1, (
1380 | f"TritonKernel.indexing assumes numel == 1 => BLOCK == 1"
1381 | f" but {label.lower()}numel=={numel} and {label}BLOCK={block} (cfg={cfg})."
^
1382 | )
1383 | max_block = TRITON_MAX_BLOCK[label]
torch/_inductor/runtime/triton_heuristics.py:1381:77: Builtin `set` is deprecated
1379 | assert block == 1, (
1380 | f"TritonKernel.indexing assumes numel == 1 => BLOCK == 1"
1381 | f" but {label.lower()}numel=={numel} and {label}BLOCK={block} (cfg={cfg})."
^
1382 | )
1383 | max_block = TRITON_MAX_BLOCK[label]
torch/_inductor/runtime/triton_heuristics.py:1381:84: Builtin `set` is deprecated
1379 | assert block == 1, (
1380 | f"TritonKernel.indexing assumes numel == 1 => BLOCK == 1"
1381 | f" but {label.lower()}numel=={numel} and {label}BLOCK={block} (cfg={cfg})."
^
1382 | )
1383 | max_block = TRITON_MAX_BLOCK[label]
torch/_inductor/runtime/triton_heuristics.py:1381:88: Builtin `set` is deprecated
1379 | assert block == 1, (
1380 | f"TritonKernel.indexing assumes numel == 1 => BLOCK == 1"
1381 | f" but {label.lower()}numel=={numel} and {label}BLOCK={block} (cfg={cfg})."
^
1382 | )
1383 | max_block = TRITON_MAX_BLOCK[label]
torch/_inductor/runtime/triton_heuristics.py:1384:52: Builtin `set` is deprecated
1382 | )
1383 | max_block = TRITON_MAX_BLOCK[label]
1384 | max_block_str = f'config.triton.max_block["{label}"]'
^
1385 | assert max_block % block == 0, (
1386 | f"TritonKernel.indexing assumes {label}BLOCK divides {max_block_str}"
torch/_inductor/runtime/triton_heuristics.py:1384:58: Builtin `set` is deprecated
1382 | )
1383 | max_block = TRITON_MAX_BLOCK[label]
1384 | max_block_str = f'config.triton.max_block["{label}"]'
^
1385 | assert max_block % block == 0, (
1386 | f"TritonKernel.indexing assumes {label}BLOCK divides {max_block_str}"
torch/_inductor/runtime/triton_heuristics.py:1386:45: Builtin `set` is deprecated
1384 | max_block_str = f'config.triton.max_block["{label}"]'
1385 | assert max_block % block == 0, (
1386 | f"TritonKernel.indexing assumes {label}BLOCK divides {max_block_str}"
^
1387 | f" but {label}BLOCK={block} and {max_block_str}={max_block} (cfg={cfg})."
1388 | )
torch/_inductor/runtime/triton_heuristics.py:1386:51: Builtin `set` is deprecated
1384 | max_block_str = f'config.triton.max_block["{label}"]'
1385 | assert max_block % block == 0, (
1386 | f"TritonKernel.indexing assumes {label}BLOCK divides {max_block_str}"
^
1387 | f" but {label}BLOCK={block} and {max_block_str}={max_block} (cfg={cfg})."
1388 | )
torch/_inductor/runtime/triton_heuristics.py:1386:66: Builtin `set` is deprecated
1384 | max_block_str = f'config.triton.max_block["{label}"]'
1385 | assert max_block % block == 0, (
1386 | f"TritonKernel.indexing assumes {label}BLOCK divides {max_block_str}"
^
1387 | f" but {label}BLOCK={block} and {max_block_str}={max_block} (cfg={cfg})."
1388 | )
torch/_inductor/runtime/triton_heuristics.py:1386:80: Builtin `set` is deprecated
1384 | max_block_str = f'config.triton.max_block["{label}"]'
1385 | assert max_block % block == 0, (
1386 | f"TritonKernel.indexing assumes {label}BLOCK divides {max_block_str}"
^
1387 | f" but {label}BLOCK={block} and {max_block_str}={max_block} (cfg={cfg})."
1388 | )
torch/_inductor/runtime/triton_heuristics.py:1387:20: Builtin `set` is deprecated
1385 | assert max_block % block == 0, (
1386 | f"TritonKernel.indexing assumes {label}BLOCK divides {max_block_str}"
1387 | f" but {label}BLOCK={block} and {max_block_str}={max_block} (cfg={cfg})."
^
1388 | )
1389 |
torch/_inductor/runtime/triton_heuristics.py:1387:26: Builtin `set` is deprecated
1385 | assert max_block % block == 0, (
1386 | f"TritonKernel.indexing assumes {label}BLOCK divides {max_block_str}"
1387 | f" but {label}BLOCK={block} and {max_block_str}={max_block} (cfg={cfg})."
^
1388 | )
1389 |
torch/_inductor/runtime/triton_heuristics.py:1387:33: Builtin `set` is deprecated
1385 | assert max_block % block == 0, (
1386 | f"TritonKernel.indexing assumes {label}BLOCK divides {max_block_str}"
1387 | f" but {label}BLOCK={block} and {max_block_str}={max_block} (cfg={cfg})."
^
1388 | )
1389 |
torch/_inductor/runtime/triton_heuristics.py:1387:39: Builtin `set` is deprecated
1385 | assert max_block % block == 0, (
1386 | f"TritonKernel.indexing assumes {label}BLOCK divides {max_block_str}"
1387 | f" but {label}BLOCK={block} and {max_block_str}={max_block} (cfg={cfg})."
^
1388 | )
1389 |
torch/_inductor/runtime/triton_heuristics.py:1387:45: Builtin `set` is deprecated
1385 | assert max_block % block == 0, (
1386 | f"TritonKernel.indexing assumes {label}BLOCK divides {max_block_str}"
1387 | f" but {label}BLOCK={block} and {max_block_str}={max_block} (cfg={cfg})."
^
1388 | )
1389 |
torch/_inductor/runtime/triton_heuristics.py:1387:59: Builtin `set` is deprecated
1385 | assert max_block % block == 0, (
1386 | f"TritonKernel.indexing assumes {label}BLOCK divides {max_block_str}"
1387 | f" but {label}BLOCK={block} and {max_block_str}={max_block} (cfg={cfg})."
^
1388 | )
1389 |
torch/_inductor/runtime/triton_heuristics.py:1387:61: Builtin `set` is deprecated
1385 | assert max_block % block == 0, (
1386 | f"TritonKernel.indexing assumes {label}BLOCK divides {max_block_str}"
1387 | f" but {label}BLOCK={block} and {max_block_str}={max_block} (cfg={cfg})."
^
1388 | )
1389 |
torch/_inductor/runtime/triton_heuristics.py:1387:71: Builtin `set` is deprecated
1385 | assert max_block % block == 0, (
1386 | f"TritonKernel.indexing assumes {label}BLOCK divides {max_block_str}"
1387 | f" but {label}BLOCK={block} and {max_block_str}={max_block} (cfg={cfg})."
^
1388 | )
1389 |
torch/_inductor/runtime/triton_heuristics.py:1387:78: Builtin `set` is deprecated
1385 | assert max_block % block == 0, (
1386 | f"TritonKernel.indexing assumes {label}BLOCK divides {max_block_str}"
1387 | f" but {label}BLOCK={block} and {max_block_str}={max_block} (cfg={cfg})."
^
1388 | )
1389 |
torch/_inductor/runtime/triton_heuristics.py:1387:82: Builtin `set` is deprecated
1385 | assert max_block % block == 0, (
1386 | f"TritonKernel.indexing assumes {label}BLOCK divides {max_block_str}"
1387 | f" but {label}BLOCK={block} and {max_block_str}={max_block} (cfg={cfg})."
^
1388 | )
1389 |
torch/_inductor/runtime/triton_heuristics.py:1402:19: Builtin `set` is deprecated
1400 | assert (
1401 | val <= max_block
1402 | ), f"'{var}' too large. Maximum: {max_block}. Actual: {val}."
^
1403 |
1404 |
torch/_inductor/runtime/triton_heuristics.py:1402:23: Builtin `set` is deprecated
1400 | assert (
1401 | val <= max_block
1402 | ), f"'{var}' too large. Maximum: {max_block}. Actual: {val}."
^
1403 |
1404 |
torch/_inductor/runtime/triton_heuristics.py:1402:46: Builtin `set` is deprecated
1400 | assert (
1401 | val <= max_block
1402 | ), f"'{var}' too large. Maximum: {max_block}. Actual: {val}."
^
1403 |
1404 |
torch/_inductor/runtime/triton_heuristics.py:1402:56: Builtin `set` is deprecated
1400 | assert (
1401 | val <= max_block
1402 | ), f"'{var}' too large. Maximum: {max_block}. Actual: {val}."
^
1403 |
1404 |
torch/_inductor/runtime/triton_heuristics.py:1402:67: Builtin `set` is deprecated
1400 | assert (
1401 | val <= max_block
1402 | ), f"'{var}' too large. Maximum: {max_block}. Actual: {val}."
^
1403 |
1404 |
torch/_inductor/runtime/triton_heuristics.py:1402:71: Builtin `set` is deprecated
1400 | assert (
1401 | val <= max_block
1402 | ), f"'{var}' too large. Maximum: {max_block}. Actual: {val}."
^
1403 |
1404 |
torch/_inductor/runtime/triton_heuristics.py:1551:21: Builtin `set` is deprecated
1549 | rnumels = {}
1550 | for idx in range(num_reduction_dims - 1, -1, -1):
1551 | prefix = f"r{idx}_"
^
1552 | max_size = min(size_hints[prefix], TRITON_MAX_BLOCK[prefix.upper()])
1553 | dim = min(max_size, remaining)
torch/_inductor/runtime/triton_heuristics.py:1551:25: Builtin `set` is deprecated
1549 | rnumels = {}
1550 | for idx in range(num_reduction_dims - 1, -1, -1):
1551 | prefix = f"r{idx}_"
^
1552 | max_size = min(size_hints[prefix], TRITON_MAX_BLOCK[prefix.upper()])
1553 | dim = min(max_size, remaining)
torch/_inductor/runtime/triton_heuristics.py:1556:34: Builtin `set` is deprecated
1554 | assert (
1555 | remaining % dim == 0
1556 | ), f"Expected dimension '{dim}' to divide remaining size '{remaining}'"
^
1557 | rnumels[prefix] = dim
1558 | remaining //= dim
torch/_inductor/runtime/triton_heuristics.py:1556:38: Builtin `set` is deprecated
1554 | assert (
1555 | remaining % dim == 0
1556 | ), f"Expected dimension '{dim}' to divide remaining size '{remaining}'"
^
1557 | rnumels[prefix] = dim
1558 | remaining //= dim
torch/_inductor/runtime/triton_heuristics.py:1556:67: Builtin `set` is deprecated
1554 | assert (
1555 | remaining % dim == 0
1556 | ), f"Expected dimension '{dim}' to divide remaining size '{remaining}'"
^
1557 | rnumels[prefix] = dim
1558 | remaining //= dim
torch/_inductor/runtime/triton_heuristics.py:1556:77: Builtin `set` is deprecated
1554 | assert (
1555 | remaining % dim == 0
1556 | ), f"Expected dimension '{dim}' to divide remaining size '{remaining}'"
^
1557 | rnumels[prefix] = dim
1558 | remaining //= dim
torch/_inductor/runtime/triton_heuristics.py:1564:38: Builtin `set` is deprecated
1562 | assert (
1563 | r == final_numel
1564 | ), f"Expected ND reduction size ({rnumels}) to have {r} elements."
^
1565 | assert all(
1566 | rnumels[prefix] <= size_hints[prefix] for prefix in rnumels
torch/_inductor/runtime/triton_heuristics.py:1564:46: Builtin `set` is deprecated
1562 | assert (
1563 | r == final_numel
1564 | ), f"Expected ND reduction size ({rnumels}) to have {r} elements."
^
1565 | assert all(
1566 | rnumels[prefix] <= size_hints[prefix] for prefix in rnumels
torch/_inductor/runtime/triton_heuristics.py:1564:57: Builtin `set` is deprecated
1562 | assert (
1563 | r == final_numel
1564 | ), f"Expected ND reduction size ({rnumels}) to have {r} elements."
^
1565 | assert all(
1566 | rnumels[prefix] <= size_hints[prefix] for prefix in rnumels
torch/_inductor/runtime/triton_heuristics.py:1564:59: Builtin `set` is deprecated
1562 | assert (
1563 | r == final_numel
1564 | ), f"Expected ND reduction size ({rnumels}) to have {r} elements."
^
1565 | assert all(
1566 | rnumels[prefix] <= size_hints[prefix] for prefix in rnumels
torch/_inductor/runtime/triton_heuristics.py:1567:37: Builtin `set` is deprecated
1565 | assert all(
1566 | rnumels[prefix] <= size_hints[prefix] for prefix in rnumels
1567 | ), f"rnumels exceed size_hints. {rnumels} > {size_hints}"
^
1568 |
1569 | return rnumels
torch/_inductor/runtime/triton_heuristics.py:1567:45: Builtin `set` is deprecated
1565 | assert all(
1566 | rnumels[prefix] <= size_hints[prefix] for prefix in rnumels
1567 | ), f"rnumels exceed size_hints. {rnumels} > {size_hints}"
^
1568 |
1569 | return rnumels
torch/_inductor/runtime/triton_heuristics.py:1567:49: Builtin `set` is deprecated
1565 | assert all(
1566 | rnumels[prefix] <= size_hints[prefix] for prefix in rnumels
1567 | ), f"rnumels exceed size_hints. {rnumels} > {size_hints}"
^
1568 |
1569 | return rnumels
torch/_inductor/runtime/triton_heuristics.py:1567:60: Builtin `set` is deprecated
1565 | assert all(
1566 | rnumels[prefix] <= size_hints[prefix] for prefix in rnumels
1567 | ), f"rnumels exceed size_hints. {rnumels} > {size_hints}"
^
1568 |
1569 | return rnumels
torch/_inductor/runtime/triton_heuristics.py:1746:49: Builtin `set` is deprecated
1744 |
1745 | if not configs:
1746 | raise NotImplementedError(f"size_hints: {size_hints}")
^
1747 | return cached_autotune(
1748 | size_hints,
torch/_inductor/runtime/triton_heuristics.py:1746:60: Builtin `set` is deprecated
1744 |
1745 | if not configs:
1746 | raise NotImplementedError(f"size_hints: {size_hints}")
^
1747 | return cached_autotune(
1748 | size_hints,
torch/_inductor/runtime/triton_heuristics.py:1928:32: Builtin `set` is deprecated
1926 | for prefix in size_hints:
1927 | if prefix_is_reduction(prefix):
1928 | c.kwargs.pop(f"{prefix.upper()}BLOCK")
^
1929 |
1930 | if disable_pointwise_autotuning(inductor_meta):
torch/_inductor/runtime/triton_heuristics.py:1928:47: Builtin `set` is deprecated
1926 | for prefix in size_hints:
1927 | if prefix_is_reduction(prefix):
1928 | c.kwargs.pop(f"{prefix.upper()}BLOCK")
^
1929 |
1930 | if disable_pointwise_autotuning(inductor_meta):
torch/_inductor/runtime/triton_heuristics.py:1975:49: Builtin `set` is deprecated
1973 | assert triton_meta is not None
1974 | if len(size_hints) != 2:
1975 | raise NotImplementedError(f"size_hints: {size_hints}")
^
1976 |
1977 | configs = _reduction_configs(size_hints=size_hints, inductor_meta=inductor_meta)
torch/_inductor/runtime/triton_heuristics.py:1975:60: Builtin `set` is deprecated
1973 | assert triton_meta is not None
1974 | if len(size_hints) != 2:
1975 | raise NotImplementedError(f"size_hints: {size_hints}")
^
1976 |
1977 | configs = _reduction_configs(size_hints=size_hints, inductor_meta=inductor_meta)
torch/_inductor/runtime/triton_heuristics.py:2082:56: Builtin `set` is deprecated
2080 | xnumel, ynumel, znumel = numels[2], numels[1], numels[0]
2081 | else:
2082 | raise AssertionError(f"invalid size for numels {len(numels)}")
^
2083 |
2084 | def get_grid_dim(numel, block):
torch/_inductor/runtime/triton_heuristics.py:2082:68: Builtin `set` is deprecated
2080 | xnumel, ynumel, znumel = numels[2], numels[1], numels[0]
2081 | else:
2082 | raise AssertionError(f"invalid size for numels {len(numels)}")
^
2083 |
2084 | def get_grid_dim(numel, block):
torch/_inductor/runtime/triton_heuristics.py:2104:57: Builtin `set` is deprecated
2102 | torch._check(
2103 | y_grid <= max_y_grid,
2104 | lambda: f"Generated y grid beyond 2^16 ({y_grid}) not supported with z dimension present. File issue",
^
2105 | )
2106 |
torch/_inductor/runtime/triton_heuristics.py:2104:64: Builtin `set` is deprecated
2102 | torch._check(
2103 | y_grid <= max_y_grid,
2104 | lambda: f"Generated y grid beyond 2^16 ({y_grid}) not supported with z dimension present. File issue",
^
2105 | )
2106 |
torch/_inductor/runtime/triton_heuristics.py:2113:43: Builtin `set` is deprecated
2111 | )
2112 |
2113 | setattr(grid_fn, "grid_fn_str", f"grid{numels}") # noqa: B010
^
2114 |
2115 | return grid_fn
torch/_inductor/runtime/triton_heuristics.py:2113:50: Builtin `set` is deprecated
2111 | )
2112 |
2113 | setattr(grid_fn, "grid_fn_str", f"grid{numels}") # noqa: B010
^
2114 |
2115 | return grid_fn
torch/_inductor/runtime/triton_heuristics.py:2122:48: Builtin `set` is deprecated
2120 | return (meta["RSPLIT"], ceildiv(xnumel, meta.get("XBLOCK", 1)), 1)
2121 |
2122 | grid_fn_str = f"cooperative_reduction_grid({xnumel})"
^
2123 | setattr(grid_fn, "grid_fn_str", grid_fn_str) # noqa: B010
2124 | return grid_fn
torch/_inductor/runtime/triton_heuristics.py:2122:55: Builtin `set` is deprecated
2120 | return (meta["RSPLIT"], ceildiv(xnumel, meta.get("XBLOCK", 1)), 1)
2121 |
2122 | grid_fn_str = f"cooperative_reduction_grid({xnumel})"
^
2123 | setattr(grid_fn, "grid_fn_str", grid_fn_str) # noqa: B010
2124 | return grid_fn
torch/_inductor/runtime/triton_heuristics.py:2135:54: Builtin `set` is deprecated
2133 | coop_grid = cooperative_reduction_grid(xnumel)
2134 | normal_grid = grid(xnumel)
2135 | grid_fn_str = f"maybe_cooperative_reduction_grid({xnumel})"
^
2136 | setattr(grid_fn, "grid_fn_str", grid_fn_str) # noqa: B010
2137 | return grid_fn
torch/_inductor/runtime/triton_heuristics.py:2135:61: Builtin `set` is deprecated
2133 | coop_grid = cooperative_reduction_grid(xnumel)
2134 | normal_grid = grid(xnumel)
2135 | grid_fn_str = f"maybe_cooperative_reduction_grid({xnumel})"
^
2136 | setattr(grid_fn, "grid_fn_str", grid_fn_str) # noqa: B010
2137 | return grid_fn
torch/_inductor/runtime/triton_heuristics.py:2145:37: Builtin `set` is deprecated
2143 | return (ceildiv(rnumel, meta.get("R0_BLOCK", 1)), xnumel, 1)
2144 |
2145 | grid_fn_str = f"split_scan_grid({xnumel}, {rnumel})"
^
2146 | setattr(grid_fn, "grid_fn_str", grid_fn_str) # noqa: B010
2147 |
torch/_inductor/runtime/triton_heuristics.py:2145:44: Builtin `set` is deprecated
2143 | return (ceildiv(rnumel, meta.get("R0_BLOCK", 1)), xnumel, 1)
2144 |
2145 | grid_fn_str = f"split_scan_grid({xnumel}, {rnumel})"
^
2146 | setattr(grid_fn, "grid_fn_str", grid_fn_str) # noqa: B010
2147 |
torch/_inductor/runtime/triton_heuristics.py:2145:47: Builtin `set` is deprecated
2143 | return (ceildiv(rnumel, meta.get("R0_BLOCK", 1)), xnumel, 1)
2144 |
2145 | grid_fn_str = f"split_scan_grid({xnumel}, {rnumel})"
^
2146 | setattr(grid_fn, "grid_fn_str", grid_fn_str) # noqa: B010
2147 |
torch/_inductor/runtime/triton_heuristics.py:2145:54: Builtin `set` is deprecated
2143 | return (ceildiv(rnumel, meta.get("R0_BLOCK", 1)), xnumel, 1)
2144 |
2145 | grid_fn_str = f"split_scan_grid({xnumel}, {rnumel})"
^
2146 | setattr(grid_fn, "grid_fn_str", grid_fn_str) # noqa: B010
2147 |
torch/_inductor/runtime/triton_heuristics.py:2173:42: Builtin `set` is deprecated
2171 | assert (
2172 | min_blocks_d is None or min_blocks == min_blocks_d
2173 | ), f"inconsistent min_blocks {min_blocks} vs x grid {numels[-1]}"
^
2174 | else:
2175 | # sequential dispatch
torch/_inductor/runtime/triton_heuristics.py:2173:53: Builtin `set` is deprecated
2171 | assert (
2172 | min_blocks_d is None or min_blocks == min_blocks_d
2173 | ), f"inconsistent min_blocks {min_blocks} vs x grid {numels[-1]}"
^
2174 | else:
2175 | # sequential dispatch
torch/_inductor/runtime/triton_heuristics.py:2173:66: Builtin `set` is deprecated
2171 | assert (
2172 | min_blocks_d is None or min_blocks == min_blocks_d
2173 | ), f"inconsistent min_blocks {min_blocks} vs x grid {numels[-1]}"
^
2174 | else:
2175 | # sequential dispatch
torch/_inductor/runtime/triton_heuristics.py:2173:77: Builtin `set` is deprecated
2171 | assert (
2172 | min_blocks_d is None or min_blocks == min_blocks_d
2173 | ), f"inconsistent min_blocks {min_blocks} vs x grid {numels[-1]}"
^
2174 | else:
2175 | # sequential dispatch
```
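For context, the flood of reports above is consistent with PEP 701 changing how Python 3.12 tokenizes f-strings: the braces inside an f-string now surface as separate `OP` tokens, which a brace-based `set` heuristic can mistake for set literals. A minimal sketch (my reading of the cause, not taken from the PR itself):
```python
# Sketch: on Python 3.12, the `{`/`}` inside an f-string show up as OP tokens
# (FSTRING_START / FSTRING_MIDDLE / OP ...), whereas 3.11 emits one STRING token.
import io
import tokenize

src = 'args_str += f", {k}={v}"\n'
for tok in tokenize.generate_tokens(io.StringIO(src).readline):
    print(tokenize.tok_name[tok.type], repr(tok.string))
```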
| true
|
2,751,922,300
|
[caffe2] Add AVX512 support for box_cox operator
|
efiks
|
closed
|
[
"caffe2",
"fb-exported",
"Merged",
"ciflow/trunk",
"topic: not user facing"
] | 9
|
CONTRIBUTOR
|
Summary:
Reuse templetized implementation of box_cox caffe2 operator.
* Duplicate .cc file of AVX2
* change intrinsics functions to use AVX512 instructions
* override templates
* extend the caller to use new methods
* guard AVX512 with a gflag to allow smooth transition
Differential Revision: D67433457
| true
|
2,751,885,587
|
[dynamo] Add types to exc.py
|
jansel
|
closed
|
[
"Merged",
"ciflow/trunk",
"topic: not user facing",
"module: dynamo",
"ciflow/inductor"
] | 9
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #143626
* #143610
* #143552
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @chauhang @amjames
| true
|
2,751,863,593
|
fix typo in autocast header
|
williamwen42
|
closed
|
[
"module: amp (automated mixed precision)",
"Merged",
"ciflow/trunk",
"topic: not user facing",
"module: dynamo",
"ciflow/inductor"
] | 6
|
MEMBER
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #143625
* #143592
cc @mcarilli @ptrblck @leslie-fang-intel @jgong5 @voznesenskym @penguinwu @EikanWang @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @chauhang @amjames
| true
|
2,751,860,673
|
pytorch v2.5.1 build for nvidia jetson orin nano 8GB
|
lida2003
|
closed
|
[
"module: build",
"module: cuda",
"triaged",
"module: jetson"
] | 3
|
NONE
|
### 🐛 Describe the bug
pytorch v2.5.1 build for NVIDIA Jetson Orin Nano 8GB
- Previous discussion here FYI: https://forums.developer.nvidia.com/t/request-build-script-for-pytorch-or-up-to-date-pytorh-binary-release-supporting-jetson-boards-running-l4t35-6-ubuntu20-04/316972
```
Software part of jetson-stats 4.2.12 - (c) 2024, Raffaello Bonghi
Model: NVIDIA Orin Nano Developer Kit - Jetpack 5.1.4 [L4T 35.6.0]
NV Power Mode[0]: 15W
Serial Number: [XXX Show with: jetson_release -s XXX]
Hardware:
- P-Number: p3767-0005
- Module: NVIDIA Jetson Orin Nano (Developer kit)
Platform:
- Distribution: Ubuntu 20.04 focal
- Release: 5.10.216-tegra
jtop:
- Version: 4.2.12
- Service: Active
Libraries:
- CUDA: 11.4.315
- cuDNN: 8.6.0.166
- TensorRT: 8.5.2.2
- VPI: 2.4.8
- OpenCV: 4.9.0 - with CUDA: YES
DeepStream C/C++ SDK version: 6.3
Python Environment:
Python 3.8.10
GStreamer: YES (1.16.3)
NVIDIA CUDA: YES (ver 11.4, CUFFT CUBLAS FAST_MATH)
OpenCV version: 4.9.0 CUDA True
YOLO version: 8.3.33
Torch version: 2.1.0a0+41361538.nv23.06
Torchvision version: 0.16.1+fdea156
DeepStream SDK version: 1.1.8
```
### Error logs
```
/home/daniel/Work/pytorch/aten/src/ATen/cuda/cub.cuh(63): error: class "cub::FpLimits<c10::BFloat16>" cannot be specialized in the current scope
/home/daniel/Work/pytorch/aten/src/ATen/cuda/cub.cuh(77): error: class "cub::NumericTraits<c10::BFloat16>" cannot be specialized in the current scope
/home/daniel/Work/pytorch/aten/src/ATen/cuda/cub.cuh(88): error: namespace "at_cuda_detail" has no member "cub"
/home/daniel/Work/pytorch/aten/src/ATen/cuda/cub-RadixSortKeys.cu(24): error: name followed by "::" must be a class or namespace name
/home/daniel/Work/pytorch/aten/src/ATen/cuda/cub-RadixSortKeys.cu(24): error: name followed by "::" must be a class or namespace name
/home/daniel/Work/pytorch/aten/src/ATen/cuda/cub-RadixSortKeys.cu(33): error: name followed by "::" must be a class or namespace name
/home/daniel/Work/pytorch/aten/src/ATen/cuda/cub-RadixSortKeys.cu(33): error: name followed by "::" must be a class or namespace name
```
Detailed log attached here: [log.txt](https://github.com/user-attachments/files/18205588/log.txt)
### Versions
```
daniel@daniel-nvidia:~/Work/pytorch$ python3 collect_env.py
Collecting environment information...
PyTorch version: N/A
Is debug build: N/A
CUDA used to build PyTorch: N/A
ROCM used to build PyTorch: N/A
OS: Ubuntu 20.04.6 LTS (aarch64)
GCC version: (Ubuntu 9.4.0-1ubuntu1~20.04.2) 9.4.0
Clang version: Could not collect
CMake version: version 3.31.0
Libc version: glibc-2.31
Python version: 3.8.10 (default, Nov 7 2024, 13:10:47) [GCC 9.4.0] (64-bit runtime)
Python platform: Linux-5.10.216-tegra-aarch64-with-glibc2.29
Is CUDA available: N/A
CUDA runtime version: 11.4.315
CUDA_MODULE_LOADING set to: N/A
GPU models and configuration: Could not collect
Nvidia driver version: Could not collect
cuDNN version: Probably one of the following:
/usr/lib/aarch64-linux-gnu/libcudnn.so.8.6.0
/usr/lib/aarch64-linux-gnu/libcudnn_adv_infer.so.8.6.0
/usr/lib/aarch64-linux-gnu/libcudnn_adv_train.so.8.6.0
/usr/lib/aarch64-linux-gnu/libcudnn_cnn_infer.so.8.6.0
/usr/lib/aarch64-linux-gnu/libcudnn_cnn_train.so.8.6.0
/usr/lib/aarch64-linux-gnu/libcudnn_ops_infer.so.8.6.0
/usr/lib/aarch64-linux-gnu/libcudnn_ops_train.so.8.6.0
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: N/A
CPU:
Architecture: aarch64
CPU op-mode(s): 32-bit, 64-bit
Byte Order: Little Endian
CPU(s): 6
On-line CPU(s) list: 0-5
Thread(s) per core: 1
Core(s) per socket: 3
Socket(s): 2
Vendor ID: ARM
Model: 1
Model name: ARMv8 Processor rev 1 (v8l)
Stepping: r0p1
CPU max MHz: 1510.4000
CPU min MHz: 115.2000
BogoMIPS: 62.50
L1d cache: 384 KiB
L1i cache: 384 KiB
L2 cache: 1.5 MiB
L3 cache: 2 MiB
Vulnerability Gather data sampling: Not affected
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Not affected
Vulnerability Reg file data sampling: Not affected
Vulnerability Retbleed: Not affected
Vulnerability Spec rstack overflow: Not affected
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl
Vulnerability Spectre v1: Mitigation; __user pointer sanitization
Vulnerability Spectre v2: Mitigation; CSV2, but not BHB
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Flags: fp asimd evtstrm aes pmull sha1 sha2 crc32 atomics fphp asimdhp cpuid asimdrdm lrcpc dcpop asimddp uscat ilrcpc flagm
Versions of relevant libraries:
[pip3] numpy==1.23.5
[pip3] onnx==1.17.0
[pip3] onnx-graphsurgeon==0.3.12
[pip3] onnxruntime==1.16.3
[pip3] onnxruntime-gpu==1.17.0
[pip3] onnxslim==0.1.36
[pip3] optree==0.13.1
[pip3] torch==2.1.0a0+41361538.nv23.6
[pip3] torch2trt==0.5.0
[pip3] torchvision==0.16.1
[conda] Could not collect
```
cc @malfet @seemethere @ptrblck @msaroufim @eqy @puririshi98
| true
|
2,751,854,398
|
Revert "refactor tensorify restart logic to use sources (#141517)"
|
bobrenjc93
|
closed
|
[
"Merged",
"ciflow/trunk",
"release notes: fx",
"topic: not user facing",
"fx",
"module: dynamo",
"ciflow/inductor"
] | 3
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #143623
This reverts commit 30d8b30db7eaaa254d97077ac6515cdc4568fd6d.
cc @ezyang @SherlockNoMad @EikanWang @jgong5 @wenzhe-nrv @voznesenskym @penguinwu @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @jiayisunx @chenyang78 @kadeng @chauhang @amjames
| true
|
2,751,835,609
|
[triton pin 3.2] Cherry pick additional device context fix
|
bertmaher
|
closed
|
[
"ciflow/trunk",
"topic: not user facing",
"ciflow/inductor"
] | 2
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #143622
Summary:
* https://github.com/triton-lang/triton/pull/5276
| true
|
2,751,834,296
|
[inductor][cpu]text-classification+albert-base-v1 failure in prepare_pt2e
|
zxd1997066
|
closed
|
[
"oncall: quantization"
] | 11
|
CONTRIBUTOR
|
### 🐛 Describe the bug
After switching from `capture_pre_autograd_graph` to `torch.export.export_for_training`, the following failure occurs.
```
Traceback (most recent call last):
File "/workspace/pytorch/./transformers/examples/pytorch/text-classification/run_glue.py", line 652, in <module>
main()
File "/workspace/pytorch/./transformers/examples/pytorch/text-classification/run_glue.py", line 590, in main
metrics = trainer.evaluate(eval_dataset=eval_dataset)
File "/opt/conda/lib/python3.10/site-packages/transformers/trainer.py", line 3109, in evaluate
output = eval_loop(
File "/opt/conda/lib/python3.10/site-packages/transformers/trainer.py", line 3237, in evaluation_loop
else self.accelerator.prepare_model(model, evaluation_mode=True)
File "/opt/conda/lib/python3.10/site-packages/accelerate/accelerator.py", line 1457, in prepare_model
prepared_model = prepare_pt2e(exported_model, quantizer)
File "/workspace/pytorch/torch/ao/quantization/quantize_pt2e.py", line 97, in prepare_pt2e
quantizer.annotate(model)
File "/workspace/pytorch/torch/ao/quantization/quantizer/x86_inductor_quantizer.py", line 705, in annotate
self._annotate_with_config(
File "/workspace/pytorch/torch/ao/quantization/quantizer/x86_inductor_quantizer.py", line 738, in _annotate_with_config
self._annotate_linear_fusion_pattern(model, quantization_config, filter_fn)
File "/workspace/pytorch/torch/ao/quantization/quantizer/x86_inductor_quantizer.py", line 1022, in _annotate_linear_fusion_pattern
self._annotate_linear_binary_unary(model, quantization_config, filter_fn)
File "/workspace/pytorch/torch/ao/quantization/quantizer/x86_inductor_quantizer.py", line 1527, in _annotate_linear_binary_unary
linear_node, binary_node = self._get_output_nodes_of_partitions(
File "/workspace/pytorch/torch/ao/quantization/quantizer/x86_inductor_quantizer.py", line 646, in _get_output_nodes_of_partitions
raise ValueError("Input partition has more than one output node")
ValueError: Input partition has more than one output node
Traceback (most recent call last):
File "/opt/conda/lib/python3.10/runpy.py", line 196, in _run_module_as_main
return _run_code(code, main_globals, None,
File "/opt/conda/lib/python3.10/runpy.py", line 86, in _run_code
exec(code, run_globals)
File "/workspace/pytorch/numa_launcher.py", line 805, in <module>
main()
File "/workspace/pytorch/numa_launcher.py", line 800, in main
launcher.launch(args)
File "/workspace/pytorch/numa_launcher.py", line 481, in launch
raise subprocess.CalledProcessError(returncode=process.returncode, cmd=cmd_s)
subprocess.CalledProcessError: Command 'numactl -C 0-31 -m 0 /opt/conda/bin/python -u ./transformers/examples/pytorch/text-classification/run_glue.py --model_name_or_path albert-base-v1 --task_name MRPC --do_eval --max_seq_length 16 --learning_rate 2e-5 --overwrite_output_dir --output_dir /tmp/tmp_huggingface/ --torch_compile --torch_compile_quant ptq_dynamic --report_to=none --per_device_eval_batch_size 64' returned non-zero exit status 1.
```
this error can be reproduced with following cmd after install pytorch:
```
git clone -b test https://github.com/zxd1997066/transformers --depth=1 && cd transformers && \
python setup.py bdist_wheel && pip install --force-reinstall dist/*.whl && cd ..
git clone -b test https://github.com/zxd1997066/accelerate.git && cd accelerate && \
python setup.py bdist_wheel && pip install --no-deps --force-reinstall dist/*.whl && cd ..
pip install -r transformers/examples/pytorch/text-classification/requirements.txt
wget https://github.com/chuanqi129/inductor-tools/raw/xiangdong/accuracy/scripts/modelbench/quant/numa_launcher.py
wget https://github.com/chuanqi129/inductor-tools/raw/xiangdong/accuracy/scripts/modelbench/quant/hf_quant_test.sh
bash hf_quant_test.sh key torch_compile_quant
```
torch.export.export_for_training is used here: https://github.com/zxd1997066/accelerate/blob/test/src/accelerate/accelerator.py#L1450
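For reference, below is a minimal sketch of the PT2E prepare flow that hits this code path. The toy model and example inputs are placeholders chosen for illustration, not the actual albert-base-v1 setup, so this shows the call sequence rather than a guaranteed reproducer of the `ValueError`.
```python
import torch
from torch.ao.quantization.quantize_pt2e import prepare_pt2e
from torch.ao.quantization.quantizer.x86_inductor_quantizer import (
    X86InductorQuantizer,
    get_default_x86_inductor_quantization_config,
)

# Toy stand-in for the HF model; the reported failure comes from albert-base-v1.
model = torch.nn.Sequential(torch.nn.Linear(16, 16), torch.nn.ReLU()).eval()
example_inputs = (torch.randn(2, 16),)

# export_for_training replaces the old capture_pre_autograd_graph entry point.
exported = torch.export.export_for_training(model, example_inputs).module()

quantizer = X86InductorQuantizer()
quantizer.set_global(get_default_x86_inductor_quantization_config(is_dynamic=True))

# quantizer.annotate() runs inside prepare_pt2e; the ValueError in the traceback
# is raised while annotating the linear + binary (+ unary) fusion pattern.
prepared = prepare_pt2e(exported, quantizer)
```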
### Versions
SW info:

| SW | Branch | Target commit | Refer commit |
| --- | --- | --- | --- |
| Pytorch | nightly | a5fc054 | aa019ef |
| torchaudio | nightly | a6b0a14 | 332760d |
| torchtext | nightly | b0ebddc | b0ebddc |
| torchvision | nightly | d23a6e1 | d23a6e1 |
| torchdata | nightly | 11bb5b8 | 11bb5b8 |
| dynamo_benchmarks | nightly | fea73cb | fea73cb |
cc @jerryzh168 @jianyuh @raghuramank100 @jamesr66a @vkuzo @jgong5 @Xia-Weiwen @leslie-fang-intel @msaroufim @chuanqi129
| true
|
2,751,821,390
|
Apply clang-format for ATen/core/dispatch headers
|
zeshengzong
|
closed
|
[
"open source",
"Merged",
"ciflow/trunk",
"topic: not user facing"
] | 7
|
CONTRIBUTOR
|
The code change was made by adding a path config to the `.lintrunner.toml` file and running:
```bash
$ lintrunner -a --take CLANGFORMAT --all-files
```
cc @ezyang
| true
|
2,751,801,509
|
Revert D67066706
|
bobrenjc93
|
closed
|
[
"fb-exported",
"release notes: fx",
"fx",
"module: dynamo",
"ciflow/inductor"
] | 3
|
CONTRIBUTOR
|
Summary:
This diff reverts D67066706
Verified that this causes divergence in S477892.
Test Plan: NA
Differential Revision: D67499773
cc @ezyang @SherlockNoMad @EikanWang @jgong5 @wenzhe-nrv @voznesenskym @penguinwu @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @jiayisunx @chenyang78 @kadeng @chauhang @amjames
| true
|
2,751,795,868
|
[caffe2] Add ISA selection
|
efiks
|
closed
|
[
"caffe2",
"fb-exported",
"Stale"
] | 5
|
CONTRIBUTOR
|
Differential Revision: D67499220
| true
|
2,751,772,874
|
[Inductor] Constrain the shape of other tensor for Conv/Linear + broa…
|
jiayisunx
|
closed
|
[
"module: cpu",
"open source",
"module: inductor",
"ciflow/inductor"
] | 1
|
COLLABORATOR
|
…dcast add fusion. (#141759)
Fix https://github.com/pytorch/pytorch/issues/141671.
Summary:
The performance regression of these two timm_models is caused by the Conv/Linear + broadcast add fusion falling back to the oneDNN reference path. This PR constrains the shape of the other tensor in the Conv/Linear + broadcast add fusion to fix the issue.
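For context, here is a hedged sketch of the kind of pattern involved: a Linear output plus a broadcast add compiled on CPU. The module and shapes are hypothetical, chosen only to illustrate what a broadcast add on the "other" tensor looks like, and are not taken from the regressing timm models.
```python
import torch

linear = torch.nn.Linear(64, 128)

@torch.compile
def fused(x, other):
    # Inductor CPU may fuse the Linear with this add; when `other` broadcasts,
    # the fused op could previously end up on the slow oneDNN reference path.
    return linear(x) + other

x = torch.randn(32, 64)
other = torch.randn(1, 128)  # broadcasts across the batch dimension
out = fused(x, other)
```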
Pull Request resolved: https://github.com/pytorch/pytorch/pull/141759
Approved by: https://github.com/jgong5, https://github.com/leslie-fang-intel, https://github.com/jansel
cc @jgong5 @mingfeima @XiaobingSuper @sanchitintel @ashokei @jingxu10 @voznesenskym @penguinwu @EikanWang @Guobing-Chen @zhuhaozhe @blzheng @wenzhe-nrv @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @ColinPeppler @amjames @desertfire @chauhang @aakhundov
| true
|