| id (int64) | title (string) | user (string) | state (string) | labels (list) | comments (int64) | author_association (string) | body (string) | is_title (bool) |
|---|---|---|---|---|---|---|---|---|
2,974,435,833
|
Training/Fine-tuning fails with PyTorch 2.8 + 4x 5090 GPUs using DDP/FSDP/DeepSpeed
|
felixliufei
|
open
|
[
"oncall: distributed",
"triaged",
"module: ddp",
"module: fsdp"
] | 6
|
NONE
|
### 🐛 Describe the bug
Hi everyone,
I seem to have hit a roadblock and could use some help or clarification.
Environment:
* PyTorch Version: 2.8 (Is this correct? Please confirm the exact version)
* GPUs: 4 x NVIDIA 5090
* Parallelism Strategy Tried: DistributedDataParallel (DDP), FullyShardedDataParallel (FSDP), DeepSpeed
* Task: Training / Fine-tuning (Inference works fine)
* Other relevant environment details (Please add if possible):
* Operating System: [Ubuntu 22.04]
* CUDA Version: [12.8]
* NVIDIA Driver Version: [570]
* Python Version: [3.10]
Problem Description:
I am currently unable to successfully run training or fine-tuning jobs when using data parallelism on a system equipped with 4 NVIDIA 5090 GPUs and PyTorch 2.8. I have attempted to use standard DistributedDataParallel (DDP), FullyShardedDataParallel (FSDP), and also integrated DeepSpeed, but all attempts fail during the training/fine-tuning phase.
Interestingly, running inference tasks on the same multi-GPU setup works without issues. The problem appears specifically related to the training/fine-tuning process combined with data parallelism libraries.
Question:
Is there a known limitation or incompatibility with PyTorch 2.8 (or the associated libraries like DDP, FSDP, DeepSpeed) that prevents data parallel training/fine-tuning on a 4x NVIDIA 5090 configuration? Or could there be other configuration issues I might be overlooking?
Any insights, confirmation of compatibility, or suggestions for troubleshooting would be greatly appreciated. If specific error messages or a minimal reproducible code example would be helpful, please let me know, and I can try to provide them.
Thanks for your help
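For reference, a minimal DDP training loop of the kind that fails in this setup might look like the sketch below (toy model and script name are hypothetical; launched with `torchrun --nproc_per_node=4 repro.py`). This is only an illustration of the configuration, not the reporter's actual workload.
```python
import os

import torch
import torch.distributed as dist
import torch.nn as nn
from torch.nn.parallel import DistributedDataParallel as DDP

def main():
    # torchrun sets RANK, LOCAL_RANK, and WORLD_SIZE in the environment
    dist.init_process_group(backend="nccl")
    local_rank = int(os.environ["LOCAL_RANK"])
    torch.cuda.set_device(local_rank)

    model = nn.Linear(1024, 1024).to(f"cuda:{local_rank}")
    ddp_model = DDP(model, device_ids=[local_rank])
    optimizer = torch.optim.SGD(ddp_model.parameters(), lr=1e-3)

    for _ in range(10):
        x = torch.randn(32, 1024, device=f"cuda:{local_rank}")
        loss = ddp_model(x).sum()
        loss.backward()
        optimizer.step()
        optimizer.zero_grad()

    dist.destroy_process_group()

if __name__ == "__main__":
    main()
```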
### Versions
wget https://raw.githubusercontent.com/pytorch/pytorch/main/torch/utils/collect_env.py
# For security purposes, please check the contents of collect_env.py before running it.
python collect_env.py
cc @H-Huang @awgu @wanchaol @fegin @fduwjj @wz337 @wconstab @d4l3k @zhaojuanmao @mrshenli @rohan-varma @chauhang @mori360
| true
|
2,974,211,661
|
GeForce RTX 5090 D with CUDA capability sm_120 is not compatible with the current PyTorch installation.
|
monkeycc
|
closed
|
[
"module: binaries",
"module: cuda",
"triaged"
] | 3
|
NONE
|
### 🐛 Describe the bug
```
import torch
# Check if CUDA is recognized by PyTorch
print("Is CUDA available:", torch.cuda.is_available())
# Output the number of GPU devices and their names (if available)
if torch.cuda.is_available():
print("Number of GPU devices:", torch.cuda.device_count())
print("Current GPU device:", torch.cuda.current_device())
print("Device name:", torch.cuda.get_device_name(0))
else:
print("No GPU detected, using CPU mode")
# Check PyTorch version and the CUDA version it was compiled with
print("PyTorch version:", torch.__version__)
print("PyTorch supported CUDA version:", torch.version.cuda)
# Check the device's compute capability (note: despite the label below, this prints
# the compute capability, not the CUDA driver version)
print("CUDA driver version:", torch.cuda.get_device_properties(0).major, ".", torch.cuda.get_device_properties(0).minor)
```
```log
Is CUDA available: True
Number of GPU devices: 1
/home/mm/anaconda3/envs/py312/lib/python3.12/site-packages/torch/cuda/__init__.py:235: UserWarning:
NVIDIA GeForce RTX 5090 D with CUDA capability sm_120 is not compatible with the current PyTorch installation.
The current PyTorch install supports CUDA capabilities sm_50 sm_60 sm_70 sm_75 sm_80 sm_86 sm_90.
If you want to use the NVIDIA GeForce RTX 5090 D GPU with PyTorch, please check the instructions at https://pytorch.org/get-started/locally/
warnings.warn(
Current GPU device: 0
Device name: NVIDIA GeForce RTX 5090 D
PyTorch version: 2.6.0+cu126
PyTorch supported CUDA version: 12.6
CUDA driver version: 12 . 0
```
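As a quick sanity check (not part of the original report), the architectures compiled into the installed wheel can be compared against the device's compute capability; an sm_120 card needs a build that lists sm_120, which the cu126 wheels do not include.
```python
import torch

# Architectures the installed PyTorch wheel was built for, e.g. ['sm_50', ..., 'sm_90']
print(torch.cuda.get_arch_list())
# Compute capability of the installed GPU, e.g. (12, 0) for an RTX 5090 D
print(torch.cuda.get_device_capability(0))
```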
### Versions
```
Collecting environment information...
/home/mm/anaconda3/envs/py312/lib/python3.12/site-packages/torch/cuda/__init__.py:235: UserWarning:
NVIDIA GeForce RTX 5090 D with CUDA capability sm_120 is not compatible with the current PyTorch installation.
The current PyTorch install supports CUDA capabilities sm_50 sm_60 sm_70 sm_75 sm_80 sm_86 sm_90.
If you want to use the NVIDIA GeForce RTX 5090 D GPU with PyTorch, please check the instructions at https://pytorch.org/get-started/locally/
warnings.warn(
PyTorch version: 2.6.0+cu126
Is debug build: False
CUDA used to build PyTorch: 12.6
ROCM used to build PyTorch: N/A
OS: Ubuntu 24.10 (x86_64)
GCC version: (Ubuntu 14.2.0-4ubuntu2) 14.2.0
Clang version: Could not collect
CMake version: Could not collect
Libc version: glibc-2.40
Python version: 3.12.9 | packaged by Anaconda, Inc. | (main, Feb 6 2025, 18:56:27) [GCC 11.2.0] (64-bit runtime)
Python platform: Linux-6.11.0-21-generic-x86_64-with-glibc2.40
Is CUDA available: True
CUDA runtime version: Could not collect
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration: GPU 0: NVIDIA GeForce RTX 5090 D
Nvidia driver version: 570.124.04
cuDNN version: Probably one of the following:
/usr/lib/x86_64-linux-gnu/libcudnn.so.9.8.0
/usr/lib/x86_64-linux-gnu/libcudnn_adv.so.9.8.0
/usr/lib/x86_64-linux-gnu/libcudnn_cnn.so.9.8.0
/usr/lib/x86_64-linux-gnu/libcudnn_engines_precompiled.so.9.8.0
/usr/lib/x86_64-linux-gnu/libcudnn_engines_runtime_compiled.so.9.8.0
/usr/lib/x86_64-linux-gnu/libcudnn_graph.so.9.8.0
/usr/lib/x86_64-linux-gnu/libcudnn_heuristic.so.9.8.0
/usr/lib/x86_64-linux-gnu/libcudnn_ops.so.9.8.0
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 46 bits physical, 48 bits virtual
Byte Order: Little Endian
CPU(s): 24
On-line CPU(s) list: 0-23
Vendor ID: GenuineIntel
Model name: Intel(R) Core(TM) Ultra 9 285K
CPU family: 6
Model: 198
Thread(s) per core: 1
Core(s) per socket: 24
Socket(s): 1
Stepping: 2
CPU(s) scaling MHz: 35%
CPU max MHz: 5200.0000
CPU min MHz: 800.0000
BogoMIPS: 7372.80
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc art arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc cpuid aperfmperf tsc_known_freq pni pclmulqdq dtes64 monitor ds_cpl vmx smx est tm2 ssse3 sdbg fma cx16 xtpr pdcm pcid sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm 3dnowprefetch cpuid_fault epb intel_ppin ssbd ibrs ibpb stibp ibrs_enhanced tpr_shadow flexpriority ept vpid ept_ad fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid rdt_a rdseed adx smap clflushopt clwb intel_pt sha_ni xsaveopt xsavec xgetbv1 xsaves split_lock_detect user_shstk avx_vnni lam wbnoinvd dtherm ida arat pln pts hwp hwp_notify hwp_act_window hwp_epp hwp_pkg_req hfi vnmi umip pku ospke waitpkg gfni vaes vpclmulqdq rdpid bus_lock_detect movdiri movdir64b fsrm md_clear serialize arch_lbr ibt flush_l1d arch_capabilities
Virtualization: VT-x
L1d cache: 768 KiB (20 instances)
L1i cache: 1.3 MiB (20 instances)
L2 cache: 40 MiB (12 instances)
L3 cache: 36 MiB (1 instance)
NUMA node(s): 1
NUMA node0 CPU(s): 0-23
Vulnerability Gather data sampling: Not affected
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Not affected
Vulnerability Reg file data sampling: Not affected
Vulnerability Retbleed: Not affected
Vulnerability Spec rstack overflow: Not affected
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Enhanced / Automatic IBRS; IBPB conditional; RSB filling; PBRSB-eIBRS Not affected; BHI Not affected
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Versions of relevant libraries:
[pip3] numpy==2.1.2
[pip3] nvidia-cublas-cu12==12.6.4.1
[pip3] nvidia-cuda-cupti-cu12==12.6.80
[pip3] nvidia-cuda-nvrtc-cu12==12.6.77
[pip3] nvidia-cuda-runtime-cu12==12.6.77
[pip3] nvidia-cudnn-cu12==9.5.1.17
[pip3] nvidia-cufft-cu12==11.3.0.4
[pip3] nvidia-curand-cu12==10.3.7.77
[pip3] nvidia-cusolver-cu12==11.7.1.2
[pip3] nvidia-cusparse-cu12==12.5.4.2
[pip3] nvidia-cusparselt-cu12==0.6.3
[pip3] nvidia-nccl-cu12==2.21.5
[pip3] nvidia-nvjitlink-cu12==12.6.85
[pip3] nvidia-nvtx-cu12==12.6.77
[pip3] torch==2.6.0+cu126
[pip3] torchaudio==2.6.0+cu126
[pip3] torchvision==0.21.0+cu126
[pip3] triton==3.2.0
[conda] numpy 2.1.2 pypi_0 pypi
[conda] nvidia-cublas-cu12 12.6.4.1 pypi_0 pypi
[conda] nvidia-cuda-cupti-cu12 12.6.80 pypi_0 pypi
[conda] nvidia-cuda-nvrtc-cu12 12.6.77 pypi_0 pypi
[conda] nvidia-cuda-runtime-cu12 12.6.77 pypi_0 pypi
[conda] nvidia-cudnn-cu12 9.5.1.17 pypi_0 pypi
[conda] nvidia-cufft-cu12 11.3.0.4 pypi_0 pypi
[conda] nvidia-curand-cu12 10.3.7.77 pypi_0 pypi
[conda] nvidia-cusolver-cu12 11.7.1.2 pypi_0 pypi
[conda] nvidia-cusparse-cu12 12.5.4.2 pypi_0 pypi
[conda] nvidia-cusparselt-cu12 0.6.3 pypi_0 pypi
[conda] nvidia-nccl-cu12 2.21.5 pypi_0 pypi
[conda] nvidia-nvjitlink-cu12 12.6.85 pypi_0 pypi
[conda] nvidia-nvtx-cu12 12.6.77 pypi_0 pypi
[conda] torch 2.6.0+cu126 pypi_0 pypi
[conda] torchaudio 2.6.0+cu126 pypi_0 pypi
[conda] torchvision 0.21.0+cu126 pypi_0 pypi
[conda] triton 3.2.0 pypi_0 pypi
```
cc @seemethere @malfet @osalpekar @atalman @ptrblck @msaroufim @eqy
| true
|
2,974,003,987
|
[BE][CI][Easy] Run `lintrunner` on generated `.pyi` stub files
|
XuehaiPan
|
open
|
[
"module: typing",
"module: ci",
"module: lint",
"open source",
"better-engineering",
"ciflow/trunk",
"topic: not user facing"
] | 10
|
COLLABORATOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #150732
* #150731
* #150730
* #150626
* #150729
* #150728
* #150727
* #150726
cc @ezyang @malfet @xuzhao9 @gramster @seemethere @pytorch/pytorch-dev-infra
| true
|
2,974,003,923
|
[BE] Resolve lint errors in `.pyi` stub files
|
XuehaiPan
|
open
|
[
"module: typing",
"module: lint",
"open source",
"better-engineering",
"topic: not user facing"
] | 4
|
COLLABORATOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #150732
* __->__ #150731
* #150730
* #150626
* #150729
* #150728
* #150727
* #150726
cc @ezyang @malfet @xuzhao9 @gramster
| true
|
2,974,003,881
|
[BE] Ensure generated stub files by `gen_pyi` are properly formatted
|
XuehaiPan
|
open
|
[
"open source",
"better-engineering",
"topic: not user facing"
] | 4
|
COLLABORATOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #150732
* #150731
* __->__ #150730
* #150626
* #150729
* #150728
* #150727
* #150726
| true
|
2,974,003,819
|
[BE] Add `__all__` to `torch/nn/functional.pyi` and `torch/return_types.pyi`
|
XuehaiPan
|
open
|
[
"module: nn",
"open source",
"better-engineering",
"module: codegen",
"topic: not user facing"
] | 4
|
COLLABORATOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #150732
* #150731
* #150730
* #150626
* __->__ #150729
* #150728
* #150727
* #150726
cc @albanD @mruberry @jbschlosser @walterddr @mikaylagawarecki @ezyang @bhosmer @bdhirsh @kadeng
| true
|
2,974,003,788
|
[BE] Update `.pyi` stub template to use Generic TypeAlias (PEP 585) and Union Type (PEP 604)
|
XuehaiPan
|
open
|
[
"module: typing",
"open source",
"better-engineering",
"module: codegen",
"topic: not user facing"
] | 4
|
COLLABORATOR
|
https://github.com/pytorch/pytorch/pull/129001#discussion_r1645126801 is the motivation for the whole stack of PRs. In `torch/__init__.py`, `torch._C.Type` shadows `from typing import Type`, and there is no type stub for `torch._C.Type` in `torch/_C/__init__.pyi`. So we need to use `from typing import Type as _Type`. After enabling [Generic TypeAlias (PEP 585)](https://peps.python.org/pep-0585) in the `.pyi` type stub files, we can use `type` instead of `typing.Type` or `from typing import Type as _Type`.
------
- [Generic TypeAlias (PEP 585)](https://peps.python.org/pep-0585): e.g. `typing.List[T] -> list[T]`, `typing.Dict[KT, VT] -> dict[KT, VT]`, `typing.Type[T] -> type[T]`.
- [Union Type (PEP 604)](https://peps.python.org/pep-0604): e.g. `Union[X, Y] -> X | Y`, `Optional[X] -> X | None`, `Optional[Union[X, Y]] -> X | Y | None`.
Note that in `.pyi` stub files, we do not need `from __future__ import annotations`. So this PR does not violate issue #117449:
- #117449
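As a purely illustrative before/after of the two PEPs in stub-style code (hypothetical signatures, not taken from the generated files; PEP 604 syntax needs Python 3.10+ at runtime, but is always allowed in `.pyi` stubs):
```python
# Before: typing-module generics plus Union/Optional
from typing import Dict, List, Optional, Type, Union

def load(paths: List[str], overrides: Optional[Dict[str, int]] = None) -> Union[int, str]: ...
def kind_of(obj: object) -> Type[object]: ...

# After: builtin generics (PEP 585) and union syntax (PEP 604)
def load(paths: list[str], overrides: dict[str, int] | None = None) -> int | str: ...
def kind_of(obj: object) -> type[object]: ...
```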
------
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #150732
* #150731
* #150730
* #150626
* #150729
* __->__ #150728
* #150727
* #150726
cc @ezyang @malfet @xuzhao9 @gramster @bhosmer @bdhirsh @kadeng
| true
|
2,974,003,729
|
[torchgen] Refactor and simplify `gen_pyi.py` to use Generic TypeAlias (PEP 585) and Union Type (PEP 604)
|
XuehaiPan
|
open
|
[
"module: typing",
"open source",
"better-engineering",
"module: codegen",
"topic: not user facing",
"ciflow/inductor"
] | 5
|
COLLABORATOR
|
https://github.com/pytorch/pytorch/pull/129001#discussion_r1645126801 is the motivation for the whole stack of PRs. In `torch/__init__.py`, `torch._C.Type` shadows `from typing import Type`, and there is no type stub for `torch._C.Type` in `torch/_C/__init__.pyi`. So we need to use `from typing import Type as _Type`. After enabling [Generic TypeAlias (PEP 585)](https://peps.python.org/pep-0585) in the `.pyi` type stub files, we can use `type` instead of `typing.Type` or `from typing import Type as _Type`.
------
- [Generic TypeAlias (PEP 585)](https://peps.python.org/pep-0585): e.g. `typing.List[T] -> list[T]`, `typing.Dict[KT, VT] -> dict[KT, VT]`, `typing.Type[T] -> type[T]`.
- [Union Type (PEP 604)](https://peps.python.org/pep-0604): e.g. `Union[X, Y] -> X | Y`, `Optional[X] -> X | None`, `Optional[Union[X, Y]] -> X | Y | None`.
Note that in `.pyi` stub files, we do not need `from __future__ import annotations`. So this PR does not violate issue #117449:
- #117449
------
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #150732
* #150731
* #150730
* #150626
* #150729
* #150728
* __->__ #150727
* #150726
cc @ezyang @malfet @xuzhao9 @gramster @bhosmer @bdhirsh @kadeng
| true
|
2,974,003,676
|
[torchgen] Refactor `torchgen.utils.FileManager` to accept `pathlib.Path`
|
XuehaiPan
|
open
|
[
"open source",
"better-engineering",
"Merged",
"Reverted",
"ciflow/trunk",
"topic: not user facing",
"suppress-bc-linter",
"ci-no-td"
] | 13
|
COLLABORATOR
|
This PR allows `FileManager` to accept `pathlib.Path` as arguments while keeping the original `str` path support.
This allows us to simplify the code such as:
1. `os.path.join(..., ...)` with `Path.__truediv__(..., ...)` (i.e. the `/` operator).
https://github.com/pytorch/pytorch/blob/95a5958db490608cacca75b89d9a1d2e955b60e8/torchgen/utils.py#L155
https://github.com/pytorch/pytorch/blob/95a5958db490608cacca75b89d9a1d2e955b60e8/torchgen/utils.py#L176
2. `os.path.basename(...)` with `Path(...).name`.
https://github.com/pytorch/pytorch/blob/95a5958db490608cacca75b89d9a1d2e955b60e8/torchgen/utils.py#L161
3. Manual file extension split with `Path(...).with_stem(new_stem)`
https://github.com/pytorch/pytorch/blob/95a5958db490608cacca75b89d9a1d2e955b60e8/torchgen/utils.py#L241-L256
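A small, generic illustration of the three simplifications above (standalone example, not the actual `FileManager` code; `Path.with_stem` needs Python 3.9+):
```python
import os.path
from pathlib import Path

install_dir = "build/generated"
filename = "python_torch_functions.cpp"

# 1. Joining paths: os.path.join(...) vs. the `/` operator (Path.__truediv__)
old_join = os.path.join(install_dir, filename)
new_join = Path(install_dir) / filename

# 2. Basename: os.path.basename(...) vs. Path(...).name
old_base = os.path.basename(old_join)
new_base = new_join.name

# 3. Swapping the stem while keeping the extension, instead of splitting manually
renamed = new_join.with_stem(new_join.stem + "_tmp")

print(old_join, new_join, old_base, new_base, renamed, sep="\n")
```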
------
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #150732
* #150731
* #150730
* #150626
* #150729
* #150728
* #150727
* __->__ #150726
| true
|
2,973,962,635
|
Continuous calls to nn.Linear in fp32 on the 5090D cause severe performance degradation
|
mobulan
|
open
|
[
"module: performance",
"module: nn",
"module: cuda",
"triaged",
"Blackwell"
] | 34
|
NONE
|
### 🐛 Describe the bug
Continuously calling nn.Linear in fp32 on a 5090D causes severe performance degradation. I don't know whether it also occurs on other 50-series cards.
```python
import torch
from torch import nn
import time
import torch.nn.functional as F
linear = nn.Linear(768,768).cuda()
x = torch.randn(256, 196, 768).cuda()
torch.cuda.synchronize()
t = time.time()
for i in range(2000):
y = linear(x)
torch.cuda.synchronize()
print(f'Linear Time:{time.time() - t:.3f}')
print()
x = torch.randn(256, 196, 768).cuda()
weight = nn.Parameter(torch.randn(768,768).cuda())
bias = nn.Parameter(torch.randn(768).cuda())
torch.cuda.synchronize()
t = time.time()
for i in range(2000):
# y = F.linear(x, weight, bias)
y = x @ weight.t() + bias
torch.cuda.synchronize()
print(f'Manu Time:{time.time() - t:.3f}')
print()
```
Output of 5090D
```python
Linear Time:6.770
Manu Time:2.580
```
Output of 4090
```python
Linear Time:3.252
Manu Time:3.624
```
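For what it's worth, the same comparison can be made with `torch.utils.benchmark`, which handles warmup and CUDA synchronization; a sketch, not part of the original report:
```python
import torch
from torch import nn
from torch.utils import benchmark

linear = nn.Linear(768, 768).cuda()
x = torch.randn(256, 196, 768).cuda()
weight = nn.Parameter(torch.randn(768, 768).cuda())
bias = nn.Parameter(torch.randn(768).cuda())

t_linear = benchmark.Timer(stmt="linear(x)", globals={"linear": linear, "x": x})
t_manual = benchmark.Timer(stmt="x @ weight.t() + bias",
                           globals={"x": x, "weight": weight, "bias": bias})

# blocked_autorange runs enough iterations for a stable median and syncs the GPU
print(t_linear.blocked_autorange())
print(t_manual.blocked_autorange())
```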
### Versions
Collecting environment information...
PyTorch version: 2.8.0.dev20250404+cu128
Is debug build: False
CUDA used to build PyTorch: 12.8
ROCM used to build PyTorch: N/A
OS: Ubuntu 24.04.2 LTS (x86_64)
GCC version: (Ubuntu 14.2.0-4ubuntu2~24.04) 14.2.0
Clang version: Could not collect
CMake version: Could not collect
Libc version: glibc-2.39
Python version: 3.12.9 | packaged by Anaconda, Inc. | (main, Feb 6 2025, 18:56:27) [GCC 11.2.0] (64-bit runtime)
Python platform: Linux-6.11.0-21-generic-x86_64-with-glibc2.39
Is CUDA available: True
CUDA runtime version: 12.8.93
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration: GPU 0: NVIDIA GeForce RTX 5090 D
Nvidia driver version: 570.133.07
cuDNN version: Could not collect
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 48 bits physical, 48 bits virtual
Byte Order: Little Endian
CPU(s): 24
On-line CPU(s) list: 0-23
Vendor ID: AuthenticAMD
Model name: AMD Ryzen 9 9900X 12-Core Processor
CPU family: 26
Model: 68
Thread(s) per core: 2
Core(s) per socket: 12
Socket(s): 1
Stepping: 0
Frequency boost: enabled
CPU(s) scaling MHz: 61%
CPU max MHz: 5658.0000
CPU min MHz: 600.0000
BogoMIPS: 8782.92
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ht syscall nx ext fxsr_opt pdpe1gb rdtscp lm constant_tsc rep_good amd_lbr_v2 nopl xtopology nonstop_tsc cpuid extd_apicid aperfmperf rapl pni pclmulqdq monitor ssse3 fma cx16 sse4_1 sse4_2 movbe popcnt aes xsave avx f16c rdrand lahf_lm cmp_legacy svm extapic cr8_legacy abm sse4a misalignsse 3dnowprefetch osvw ibs skinit wdt tce topoext perfctr_core perfctr_nb bpext perfctr_llc mwaitx cpb cat_l3 cdp_l3 hw_pstate ssbd mba perfmon_v2 ibrs ibpb stibp ibrs_enhanced vmmcall fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid cqm rdt_a avx512f avx512dq rdseed adx smap avx512ifma clflushopt clwb avx512cd sha_ni avx512bw avx512vl xsaveopt xsavec xgetbv1 xsaves cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local user_shstk avx_vnni avx512_bf16 clzero irperf xsaveerptr rdpru wbnoinvd cppc arat npt lbrv svm_lock nrip_save tsc_scale vmcb_clean flushbyasid decodeassists pausefilter pfthreshold avic v_vmsave_vmload vgif x2avic v_spec_ctrl vnmi avx512vbmi umip pku ospke avx512_vbmi2 gfni vaes vpclmulqdq avx512_vnni avx512_bitalg avx512_vpopcntdq rdpid bus_lock_detect movdiri movdir64b overflow_recov succor smca fsrm avx512_vp2intersect flush_l1d amd_lbr_pmc_freeze
Virtualization: AMD-V
L1d cache: 576 KiB (12 instances)
L1i cache: 384 KiB (12 instances)
L2 cache: 12 MiB (12 instances)
L3 cache: 64 MiB (2 instances)
NUMA node(s): 1
NUMA node0 CPU(s): 0-23
Vulnerability Gather data sampling: Not affected
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Not affected
Vulnerability Reg file data sampling: Not affected
Vulnerability Retbleed: Not affected
Vulnerability Spec rstack overflow: Not affected
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Enhanced / Automatic IBRS; IBPB conditional; STIBP always-on; RSB filling; PBRSB-eIBRS Not affected; BHI Not affected
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Versions of relevant libraries:
[pip3] numpy==2.1.2
[pip3] nvidia-cublas-cu12==12.8.3.14
[pip3] nvidia-cuda-cupti-cu12==12.8.57
[pip3] nvidia-cuda-nvrtc-cu12==12.8.61
[pip3] nvidia-cuda-runtime-cu12==12.8.57
[pip3] nvidia-cudnn-cu12==9.8.0.87
[pip3] nvidia-cufft-cu12==11.3.3.41
[pip3] nvidia-curand-cu12==10.3.9.55
[pip3] nvidia-cusolver-cu12==11.7.2.55
[pip3] nvidia-cusparse-cu12==12.5.7.53
[pip3] nvidia-cusparselt-cu12==0.6.3
[pip3] nvidia-nccl-cu12==2.26.2
[pip3] nvidia-nvjitlink-cu12==12.8.61
[pip3] nvidia-nvtx-cu12==12.8.55
[pip3] pytorch-triton==3.3.0+git96316ce5
[pip3] torch==2.8.0.dev20250404+cu128
[pip3] torch-dct==0.1.6
[pip3] torchaudio==2.6.0.dev20250404+cu128
[pip3] torchvision==0.22.0.dev20250404+cu128
[conda] numpy 2.1.2 pypi_0 pypi
[conda] nvidia-cublas-cu12 12.8.3.14 pypi_0 pypi
[conda] nvidia-cuda-cupti-cu12 12.8.57 pypi_0 pypi
[conda] nvidia-cuda-nvrtc-cu12 12.8.61 pypi_0 pypi
[conda] nvidia-cuda-runtime-cu12 12.8.57 pypi_0 pypi
[conda] nvidia-cudnn-cu12 9.8.0.87 pypi_0 pypi
[conda] nvidia-cufft-cu12 11.3.3.41 pypi_0 pypi
[conda] nvidia-curand-cu12 10.3.9.55 pypi_0 pypi
[conda] nvidia-cusolver-cu12 11.7.2.55 pypi_0 pypi
[conda] nvidia-cusparse-cu12 12.5.7.53 pypi_0 pypi
[conda] nvidia-cusparselt-cu12 0.6.3 pypi_0 pypi
[conda] nvidia-nccl-cu12 2.26.2 pypi_0 pypi
[conda] nvidia-nvjitlink-cu12 12.8.61 pypi_0 pypi
[conda] nvidia-nvtx-cu12 12.8.55 pypi_0 pypi
[conda] pytorch-triton 3.3.0+git96316ce5 pypi_0 pypi
[conda] torch 2.8.0.dev20250404+cu128 pypi_0 pypi
[conda] torch-dct 0.1.6 pypi_0 pypi
[conda] torchaudio 2.6.0.dev20250404+cu128 pypi_0 pypi
[conda] torchvision 0.22.0.dev20250404+cu128 pypi_0 pypi
cc @msaroufim @albanD @mruberry @jbschlosser @walterddr @mikaylagawarecki @ptrblck @eqy
| true
|
2,973,828,817
|
PyTorch fails to import due to incompatible glibc version (requires GLIBC_2.27)
|
wangleiofficial
|
closed
|
[
"needs reproduction",
"module: binaries"
] | 1
|
NONE
|
🐛 Bug Report
When importing torch, an ImportError is raised due to an unmet GLIBC version requirement. The current system has glibc version 2.17, but libcurand.so.10 (used by PyTorch) requires GLIBC_2.27.
📄 Error Message
```
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/home/wanglei/anaconda3/envs/trl/lib/python3.11/site-packages/torch/__init__.py", line 367, in <module>
from torch._C import * # noqa: F403
^^^^^^^^^^^^^^^^^^^^^^
ImportError: /lib64/libm.so.6: version `GLIBC_2.27' not found (required by /home/wanglei/anaconda3/envs/trl/lib/python3.11/site-packages/torch/lib/../../../../libcurand.so.10)
```
🧾 System Information
OS: <e.g., CentOS 7.9>
Python version: 3.11
PyTorch version:2.5.1
CUDA version: 11.2
Installation method: conda
GLIBC version: 2.17
🔁 Steps to Reproduce
Install PyTorch in a conda environment on a system with glibc 2.17
Run import torch
Observe the ImportError regarding GLIBC_2.27
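A quick way to confirm the system glibc version from Python before installing (a small diagnostic, not from the original report):
```python
import os
import platform

# The cu12 wheels bundle libraries (e.g. libcurand) that expect GLIBC_2.27 or newer
print(platform.libc_ver())                # e.g. ('glibc', '2.17') on CentOS 7
print(os.confstr("CS_GNU_LIBC_VERSION"))  # e.g. 'glibc 2.17' (Linux only)
```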
cc @seemethere @malfet @osalpekar @atalman
| true
|
2,973,639,324
|
Add check in `test_cow_input` to ensure COW data is never changed
|
kurtamohler
|
closed
|
[
"open source",
"Merged",
"ciflow/trunk",
"topic: not user facing"
] | 4
|
COLLABORATOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #150723
| true
|
2,973,606,938
|
[executorch hash update] update the pinned executorch hash
|
pytorchupdatebot
|
closed
|
[
"open source",
"Merged",
"ciflow/trunk",
"topic: not user facing",
"ciflow/inductor"
] | 27
|
COLLABORATOR
|
This PR is auto-generated nightly by [this action](https://github.com/pytorch/pytorch/blob/main/.github/workflows/nightly.yml).
Update the pinned executorch hash.
| true
|
2,973,538,906
|
Avoid overwriting COW data in MPS code
|
kurtamohler
|
open
|
[
"open source",
"release notes: mps",
"ciflow/mps",
"keep-going"
] | 2
|
COLLABORATOR
|
Fixes MPS ops that were breaking COW behavior by overwriting data without first materializing. Along with necessary materializations, this also introduces many unnecessary materializations, but the MPS-CPU lazy cloning feature should now be safe to use.
Also introduces APIs in the MPS code which will be used in preventing unnecessary materializations in future PRs.
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #150569
* __->__ #150721
* #148408
| true
|
2,973,530,060
|
[wip] support tracing async collectives
|
xmfan
|
open
|
[
"oncall: distributed",
"release notes: distributed (c10d)",
"module: dynamo",
"ciflow/inductor"
] | 1
|
MEMBER
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #150720
* #150258
* #150074
cc @H-Huang @awgu @wanchaol @fegin @fduwjj @wz337 @wconstab @d4l3k @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @chauhang @amjames
| true
|
2,973,526,678
|
[export] add runtime assert messages to python torch checks
|
pianpwk
|
open
|
[
"fb-exported",
"ciflow/trunk",
"release notes: fx",
"fx",
"ciflow/inductor",
"merging"
] | 16
|
CONTRIBUTOR
|
~fixes #150063 (for python at least)
Before:
```
Runtime assertion failed for expression Eq(Mod(s16*s35, s35 - 1), 0) on node 'eq'
```
Now:
```
RuntimeError: Runtime assertion failed for expression Eq(Mod(s16*s35, s35 - 1), 0) on node 'eq'
The original traceback points to the following location and error message:
/data/users/pianpwk/pytorch/torch/_prims_common/__init__.py:954
shape '[s35 - 1, ((s16*s35)//(s35 - 1))]' is invalid for input of size s16*s35
```
Differential Revision: D72483950
cc @ezyang @SherlockNoMad @EikanWang @jgong5 @wenzhe-nrv
| true
|
2,973,522,646
|
Codegen or Lint for python-api.md
|
svekars
|
open
|
[
"module: build",
"module: docs",
"triaged"
] | 0
|
CONTRIBUTOR
|
### 📚 The doc issue
In https://github.com/pytorch/pytorch/pull/149331, we are migrating to pytorch_sphinx_theme2, and the main file containing the toctree for the Python APIs will be `python-api.md`. `index.md` will contain toctrees whose captions will be displayed on the horizontal bar.
We need to add a codegen or lint for the `python-api.md` file to ensure its integrity.
### Suggest a potential alternative/fix
To maintain the integrity and consistency of the `python-api.md` file, we need to implement a code generation tool or a linter. This tool will help ensure that the structure and content of the file adhere to the required standards.
cc @malfet @seemethere @sekyondaMeta @AlannaBurke
| true
|
2,973,522,624
|
[utils] Print compilation time breakdown across main components
|
anijain2305
|
open
|
[
"ciflow/trunk",
"topic: not user facing",
"module: dynamo",
"ciflow/inductor"
] | 2
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #151410
* #151409
* #150704
* __->__ #150717
* #151357
* #151256
* #151330
Prints something like this
<img width="384" alt="image" src="https://github.com/user-attachments/assets/eaeae3ec-bab1-42e2-acbf-9e74904c6ac2" />
which is very helpful for quick compile time optimizations testing.
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @chauhang @amjames
| true
|
2,973,459,642
|
[export] raise when Dim.DYNAMIC 0/1 specializes
|
pianpwk
|
closed
|
[
"fb-exported",
"Merged",
"ciflow/trunk",
"ciflow/inductor",
"release notes: export"
] | 5
|
CONTRIBUTOR
|
Previously we didn't catch this; `mark_dynamic()` just doesn't allocate a symbol for it.
Differential Revision: D72486930
| true
|
2,973,437,169
|
Add type hints to `_tensor_docs.add_docstr_all`
|
pganssle-google
|
closed
|
[
"open source",
"better-engineering",
"Merged",
"ciflow/trunk",
"topic: not user facing"
] | 8
|
CONTRIBUTOR
|
There is some sort of bug in `pytype` where if this function doesn't have type hints, `pytype` will spend 10 minutes inferring the types. Not that this matters much for a project not using `pytype`, but it led me to realize that this function could easily be type hinted and is not, so here is a PR adding some type hints.
| true
|
2,973,433,050
|
sglang x torch.compile silent incorrectness in PyTorch 2.6 for deepseek-v3
|
zou3519
|
closed
|
[
"high priority",
"triaged",
"module: correctness (silent)",
"oncall: pt2"
] | 7
|
CONTRIBUTOR
|
We should check if it's in PyTorch 2.7 as well. If it is then we should fix it
cc @ezyang @gchanan @kadeng @msaroufim @chauhang @penguinwu
| true
|
2,973,428,721
|
cd: Introduce new binary build workflows (cpu)
|
seemethere
|
open
|
[
"release notes: releng"
] | 1
|
MEMBER
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #150713
* #149830
Introduces new binary build workflows and some supplementary changes to
downstream scripts in order to accommodate the new workflows.
Goal here is to get off the ground with the new syntax and ideally make
it easier to add on more workflows after this one is solidified.
I'm not entirely happy with the state of some of these workflows since
there are a lot of unused variables, but I do want to test this as a POC
ASAP so we can flesh out the UX / how we'd like to maintain it.
Signed-off-by: Eli Uriegas <eliuriegas@meta.com>
| true
|
2,973,416,424
|
DISABLED test_parity__foreach_abs_fastpath_outplace_cuda_float16 (__main__.TestForeachCUDA)
|
pytorch-bot[bot]
|
open
|
[
"triaged",
"module: flaky-tests",
"skipped",
"module: mta"
] | 4
|
NONE
|
Platforms: linux, slow
This test was disabled because it is failing in CI. See [recent examples](https://hud.pytorch.org/flakytest?name=test_parity__foreach_abs_fastpath_outplace_cuda_float16&suite=TestForeachCUDA&limit=100) and the most recent trunk [workflow logs](https://github.com/pytorch/pytorch/runs/40002544917).
Over the past 3 hours, it has been determined flaky in 4 workflow(s) with 8 failures and 4 successes.
**Debugging instructions (after clicking on the recent samples link):**
DO NOT ASSUME THINGS ARE OKAY IF THE CI IS GREEN. We now shield flaky tests from developers so CI will thus be green but it will be harder to parse the logs.
To find relevant log snippets:
1. Click on the workflow logs linked above
2. Click on the Test step of the job so that it is expanded. Otherwise, the grepping will not work.
3. Grep for `test_parity__foreach_abs_fastpath_outplace_cuda_float16`
4. There should be several instances run (as flaky tests are rerun in CI) from which you can study the logs.
Test file path: `test_foreach.py`
cc @clee2000 @crcrpar @mcarilli @janeyx99
| true
|
2,973,400,957
|
[ued] HF diffusers pipeline `enable_cpu_offload` errors or graph breaks with a `torch.compile`-ed transformer
|
StrongerXi
|
open
|
[
"triaged",
"oncall: pt2",
"module: dynamo",
"dynamo-triage-jan2025"
] | 1
|
CONTRIBUTOR
|
### 🐛 Describe the bug
_No response_
### Error logs
## Non-inplace `torch.compile` repro
```python
import torch
from diffusers import (
AuraFlowPipeline,
GGUFQuantizationConfig,
AuraFlowTransformer2DModel,
)
transformer = AuraFlowTransformer2DModel.from_single_file(
"https://huggingface.co/city96/AuraFlow-v0.3-gguf/blob/main/aura_flow_0.3-Q2_K.gguf",
quantization_config=GGUFQuantizationConfig(compute_dtype=torch.bfloat16),
torch_dtype=torch.bfloat16,
)
pipeline = AuraFlowPipeline.from_pretrained(
"fal/AuraFlow-v0.3",
torch_dtype=torch.bfloat16,
transformer=transformer,
).to("cuda")
pipeline.transformer = torch.compile(pipeline.transformer, fullgraph=True)
pipeline.enable_model_cpu_offload()
pipeline("A cute pony", width=256, height=256, num_inference_steps=1)
pipeline("A cute pony", width=256, height=256, num_inference_steps=1)
pipeline("A cute pony", width=256, height=256, num_inference_steps=1)
```
output:
```verbatim
Traceback (most recent call last):
File "/home/ryanguo99/scratch/recompiles.py", line 21, in <module>
pipeline("A cute pony", width=256, height=256, num_inference_steps=1)
File "/home/ryanguo99/repos/pytorch/torch/utils/_contextlib.py", line 116, in decorate_context
return func(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^
File "/home/ryanguo99/repos/diffusers/src/diffusers/pipelines/aura_flow/pipeline_aura_flow.py", line 636, in __call__
self.maybe_free_model_hooks()
File "/home/ryanguo99/repos/diffusers/src/diffusers/pipelines/pipeline_utils.py", line 1189, in maybe_free_model_hooks
self.enable_model_cpu_offload(device=getattr(self, "_offload_device", "cuda"))
File "/home/ryanguo99/repos/diffusers/src/diffusers/pipelines/pipeline_utils.py", line 1111, in enable_model_cpu_offload
self.remove_all_hooks()
File "/home/ryanguo99/repos/diffusers/src/diffusers/pipelines/pipeline_utils.py", line 1076, in remove_all_hooks
accelerate.hooks.remove_hook_from_module(model, recurse=True)
File "/home/ryanguo99/.conda/envs/pt311/lib/python3.11/site-packages/accelerate/hooks.py", line 194, in remove_hook_from_module
delattr(module, "_hf_hook")
File "/home/ryanguo99/repos/pytorch/torch/nn/modules/module.py", line 2052, in __delattr__
super().__delattr__(name)
AttributeError: 'OptimizedModule' object has no attribute '_hf_hook'
```
## Inplace `torch.compile` repro
```python
import torch
from diffusers import (
AuraFlowPipeline,
GGUFQuantizationConfig,
AuraFlowTransformer2DModel,
)
transformer = AuraFlowTransformer2DModel.from_single_file(
"https://huggingface.co/city96/AuraFlow-v0.3-gguf/blob/main/aura_flow_0.3-Q2_K.gguf",
quantization_config=GGUFQuantizationConfig(compute_dtype=torch.bfloat16),
torch_dtype=torch.bfloat16,
)
pipeline = AuraFlowPipeline.from_pretrained(
"fal/AuraFlow-v0.3",
torch_dtype=torch.bfloat16,
transformer=transformer,
).to("cuda")
pipeline.transformer.compile(fullgraph=True)
pipeline.enable_model_cpu_offload()
pipeline("A cute pony", width=256, height=256, num_inference_steps=1)
pipeline("A cute pony", width=256, height=256, num_inference_steps=1)
pipeline("A cute pony", width=256, height=256, num_inference_steps=1)
```
Output:
```verbatim
Traceback (most recent call last):
File "/home/ryanguo99/scratch/recompiles.py", line 21, in <module>
pipeline("A cute pony", width=256, height=256, num_inference_steps=1)
File "/home/ryanguo99/repos/pytorch/torch/utils/_contextlib.py", line 116, in decorate_context
return func(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^
File "/home/ryanguo99/repos/diffusers/src/diffusers/pipelines/aura_flow/pipeline_aura_flow.py", line 593, in __call__
noise_pred = self.transformer(
^^^^^^^^^^^^^^^^^
File "/home/ryanguo99/repos/pytorch/torch/nn/modules/module.py", line 1749, in _wrapped_call_impl
return self._compiled_call_impl(*args, **kwargs) # type: ignore[misc]
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/ryanguo99/repos/pytorch/torch/_dynamo/eval_frame.py", line 667, in _fn
raise e.with_traceback(None) from e.__cause__
torch._dynamo.exc.Unsupported: torch.* op returned non-Tensor device call_function <built-in function getitem>
from user code:
File "/home/ryanguo99/.conda/envs/pt311/lib/python3.11/site-packages/accelerate/hooks.py", line 161, in new_forward
args, kwargs = module._hf_hook.pre_forward(module, *args, **kwargs)
File "/home/ryanguo99/.conda/envs/pt311/lib/python3.11/site-packages/accelerate/hooks.py", line 690, in pre_forward
self.prev_module_hook.offload()
File "/home/ryanguo99/.conda/envs/pt311/lib/python3.11/site-packages/accelerate/hooks.py", line 706, in offload
self.hook.init_hook(self.model)
File "/home/ryanguo99/.conda/envs/pt311/lib/python3.11/site-packages/accelerate/hooks.py", line 686, in init_hook
return module.to("cpu")
File "/home/ryanguo99/.conda/envs/pt311/lib/python3.11/site-packages/transformers/modeling_utils.py", line 3162, in to
return super().to(*args, **kwargs)
File "/home/ryanguo99/repos/pytorch/torch/nn/modules/module.py", line 1314, in to
device, dtype, non_blocking, convert_to_format = torch._C._nn._parse_to(
Set TORCHDYNAMO_VERBOSE=1 for the internal stack trace (ple
```
Note that `fullgraph=False` works in this case and gives some speed-up (consistently >= 30%, but quite flaky from crude experiments), although I still see a few graph breaks and recompiles: [tlparse](https://manifold.edge.x2p.facebook.net/v0/read/tree/logs/.tmpPL01sD/index.html?bucketName=tlparse_reports&apiKey=tlparse_reports-key&withPayload=1&timeoutMsec=10000).
### Versions
bb98749, python 3.11
cc @chauhang @penguinwu @voznesenskym @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @amjames
| true
|
2,973,376,433
|
torch.compile LLMs on MPS progress tracker
|
manuelcandales
|
closed
|
[
"triaged",
"module: mps",
"oncall: pt2",
"module: inductor"
] | 0
|
CONTRIBUTOR
|
### 🐛 Describe the bug
This issue is used to keep track of progress using torch.compile on MPS to compile LLMs
##### gpt-fast (mps-compile-experiments branch):
- [ ] Make mps compile work on main.
- [x] stories15M
- [x] stories110M
- [x] llama2-7B
- [ ] llama2-7B 8bit quantized
- [IndexError: Out of range: piece id is out of range.](https://gist.github.com/manuelcandales/727c3507e978d3621a5b666bf1c6a976)
- [x] llama3.2-1B
- [ ] llama3.2-1B 8bit quantized
- [OverflowError: out of range integral type conversion attempted](https://gist.github.com/manuelcandales/9a97252933c2138776b3afae0fa3e7fd)
- [ ] Make mps compile handle reductions > 1024. Right now we are making llamas work by either disabling rms_norm compilation, or by adding `make_fallback(aten.mean)` to torch/_inductor/lowering.py
##### torchchat:
the following errors are caused by #150629
- [ ] stories15M
- [IndexError: Out of range: piece id is out of range.](https://gist.github.com/manuelcandales/9b67e0fcc88f74cd0f760eab8672a1fe)
- [ ] llama3.2-1B
- [OverflowError: out of range integral type conversion attempted](https://gist.github.com/manuelcandales/166d9fbcdd199b7a420db703566d3236)
- [ ] llama3.2-1B 4bit quantized
- [OverflowError: out of range integral type conversion attempted](https://gist.github.com/manuelcandales/ba44a5ffe5dfacaaf652f6c577a35470)
### Versions
2.7.0/nightly
cc @kulinseth @albanD @malfet @DenisVieriu97 @jhavukainen @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
| true
|
2,973,373,840
|
torch.profile aten metadata plumbing
|
exclamaforte
|
open
|
[
"oncall: profiler"
] | 3
|
CONTRIBUTOR
|
### 🚀 The feature, motivation and pitch
Inductor would like to pass some metadata for `aten` ops down to the dispatcher so that it can add it to the args field of the kineto trace.
### Alternatives
We are manually post processing the profile.json, which is a bit fragile.
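For context, the manual post-processing referred to above looks roughly like the sketch below (the `"my_metadata"` key is hypothetical; the Chrome trace schema is not a stable API):
```python
import json

import torch
from torch.profiler import ProfilerActivity, profile

with profile(activities=[ProfilerActivity.CPU]) as prof:
    torch.mm(torch.randn(64, 64), torch.randn(64, 64))

prof.export_chrome_trace("profile.json")

# Attach extra metadata to aten events after the fact by rewriting the trace JSON
with open("profile.json") as f:
    trace = json.load(f)
for event in trace.get("traceEvents", []):
    if event.get("name", "").startswith("aten::mm"):
        event.setdefault("args", {})["my_metadata"] = "added in post-processing"
with open("profile.json", "w") as f:
    json.dump(trace, f)
```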
### Additional context
_No response_
cc @robieta @chaekit @guotuofeng @guyang3532 @dzhulgakov @davidberard98 @briancoutinho @sraikund16 @sanrise
| true
|
2,973,367,928
|
[ued] VRAM keeps growing upon new resolution for diffuser pipeline with `torch.compile`-ed transformer
|
StrongerXi
|
closed
|
[
"needs reproduction",
"oncall: pt2"
] | 3
|
CONTRIBUTOR
|
### 🐛 Describe the bug
From a user: https://github.com/huggingface/diffusers/issues/10795#issuecomment-2745417752
> 2.2 The VRAM usage keeps growing significantly on each new resolution used in inference (I've run my tests after compiling the pipeline independently) which makes me believe that a new graph may be loaded each time instead of one dynamic one?
Testing script: https://github.com/AstraliteHeart/torch-compile-tests-for-diffusers/blob/main/test_resolution_switching.py
We should just look into this -- could be expected, or just a bug.
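One way to check whether the growth is coming from the CUDA caching allocator (e.g. new graphs or buffers per resolution) is to log allocator stats between runs; a generic sketch with a stand-in module, not the reporter's pipeline:
```python
import torch

def report(tag: str) -> None:
    # allocated = live tensors; reserved = memory held by the caching allocator
    alloc = torch.cuda.memory_allocated() / 2**20
    reserved = torch.cuda.memory_reserved() / 2**20
    print(f"{tag}: allocated={alloc:.0f} MiB, reserved={reserved:.0f} MiB")

model = torch.compile(torch.nn.Conv2d(3, 8, 3).cuda(), dynamic=True)
for size in (256, 384, 512, 640):
    x = torch.randn(1, 3, size, size, device="cuda")
    model(x)
    torch.cuda.synchronize()
    report(f"after {size}x{size}")
```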
### Error logs
_No response_
### Versions
bb987492302, python 3.11
cc @chauhang @penguinwu
| true
|
2,973,365,734
|
Reland of "[ROCm] change preferred blas lib defaults (#150249)""
|
atalman
|
closed
|
[
"module: rocm",
"ciflow/rocm",
"ci-no-td"
] | 1
|
CONTRIBUTOR
|
Relands pytorch/pytorch#150658 since fixed.
cc @jeffdaily @sunway513 @jithunnair-amd @pruthvistony @ROCmSupport @dllehr-amd @jataylo @hongxiayang @naromero77amd
| true
|
2,973,364,200
|
[ued] Slow start up time for `torch.compile` on GGUF Auraflow
|
StrongerXi
|
open
|
[
"triaged",
"oncall: pt2",
"compile-cache",
"dynamo-triage-jan2025"
] | 6
|
CONTRIBUTOR
|
### 🐛 Describe the bug
From a user: https://github.com/huggingface/diffusers/issues/10795#issuecomment-2745417752
> 2.1 Pre-compiling for each resolution is manageable (and somewhat expected), but loading the pipeline and warming it up for each resolution seems to be a big bottleneck as each new resolution takes about 3 minutes of warm up time while the GPU is idle and CPU is only using 2 cores.
>
> I understand this is necessary due to https://pytorch.org/tutorials/intermediate/torch_compile_tutorial.html#:~:text=This%20is%20because%20the%20%22reduce%2Doverhead%22%20mode%20runs%20a%20few%20warm%2Dup%20iterations%20for%20CUDA%20graphs. which also applies to max-autotune but is it possible to cache/pickle results of the warm-up (at least on the same machine)?
## Repro
```python
import torch
from diffusers import (
AuraFlowPipeline,
GGUFQuantizationConfig,
AuraFlowTransformer2DModel,
)
transformer = AuraFlowTransformer2DModel.from_single_file(
"https://huggingface.co/city96/AuraFlow-v0.3-gguf/blob/main/aura_flow_0.3-Q2_K.gguf",
quantization_config=GGUFQuantizationConfig(compute_dtype=torch.bfloat16),
torch_dtype=torch.bfloat16,
)
pipeline = AuraFlowPipeline.from_pretrained(
"fal/AuraFlow-v0.3",
torch_dtype=torch.bfloat16,
transformer=transformer,
).to("cuda")
pipeline.transformer = torch.compile(pipeline.transformer, fullgraph=True)
pipeline("A cute pony", width=512, height=512, num_inference_steps=1)
```
## Findings
[cold-cache tlparse](https://manifold.edge.x2p.facebook.net/v0/read/tree/logs/.tmpqAJWZf/index.html?bucketName=tlparse_reports&apiKey=tlparse_reports-key&withPayload=1&timeoutMsec=10000) (270s)
[warm-cache tlparse](https://manifold.edge.x2p.facebook.net/v0/read/tree/logs/.tmp5Ghe2i/index.html?bucketName=tlparse_reports&apiKey=tlparse_reports-key&withPayload=1&timeoutMsec=10000) (180s)
There are at least 2 places of nontrivial improvements:
1. Remove `detach` node (50%), which came from [tracing through `torch.Tensor._make_subclass`](https://github.com/pytorch/pytorch/pull/149483/files#diff-da9d9ec114a29db99cf66ec9b704df2813ade6ae85950cb99f38e03df1a46a8dR18). We should be able to skip this for inference cases. [Experiment](https://manifold.edge.x2p.facebook.net/v0/read/tree/logs/.tmpsD1i5X/index.html?bucketName=tlparse_reports&apiKey=tlparse_reports-key&withPayload=1&timeoutMsec=10000) suggests cold start goes from 270s to 220s (10s in dynamo, 50s in aot-dispatcher)
2. [AotAutograd cache bypass](https://manifold.edge.x2p.facebook.net/v0/read/tree/logs/.tmp5Ghe2i/index.html?bucketName=tlparse_reports&apiKey=tlparse_reports-key&withPayload=1&timeoutMsec=10000). That looks like a small remiss, fixing it should reduce a big chunk of 99s out of the 180s warm-cache overhead.
So even with these 2 fixes, the projected time might go from:
- cold-cache: 270s --> 220s
- warm-cache: 180s --> ~70s
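For the warm-cache path, persisting the Inductor/FX graph caches across processes is the main lever; a minimal sketch of how that is typically exercised (the environment-variable names are an assumption based on recent releases, and they must be set before `torch` is imported):
```python
import os

# Assumed cache knobs: keep compiled artifacts in a stable location so a
# second process can reuse them instead of recompiling from scratch.
os.environ.setdefault("TORCHINDUCTOR_CACHE_DIR", "/tmp/torchinductor_cache")
os.environ.setdefault("TORCHINDUCTOR_FX_GRAPH_CACHE", "1")

import torch

@torch.compile(fullgraph=True)
def f(x):
    return torch.nn.functional.gelu(x @ x)

f(torch.randn(1024, 1024, device="cuda"))  # cold compile once; later runs hit the cache
```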
### Error logs
_No response_
### Versions
bb98749, python 3.11
cc @chauhang @penguinwu @oulgen @jamesjwu @masnesral
| true
|
2,973,340,290
|
[CUDA] Only use vec128 if CUDA version is newer than 12.8
|
malfet
|
open
|
[
"Merged",
"Reverted",
"ciflow/binaries",
"ciflow/trunk",
"release notes: cuda",
"ciflow/periodic",
"ci-no-td",
"no-runner-experiments"
] | 16
|
CONTRIBUTOR
|
By addressing a feedback requested at https://github.com/pytorch/pytorch/pull/145746
| true
|
2,973,328,361
|
[invoke_subgraph][fake_tensor] Run the subgraph with fake tensor mode to validate cache
|
anijain2305
|
open
|
[
"topic: not user facing",
"ciflow/inductor"
] | 4
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #151620
* __->__ #150704
* #151410
* #151409
* #151756
* #151633
* #151477
* #151357
* #151256
* #151330
| true
|
2,973,319,279
|
Support XPU in memory tracker
|
frost-intel
|
open
|
[
"oncall: distributed",
"open source",
"release notes: python_frontend"
] | 4
|
COLLABORATOR
|
This PR adds support for XPU devices to the distributed MemoryTracker tool, including unit test for XPU.
In detail, I add a few pure-python functions to `torch.accelerator`:
- `torch.accelerator.empty_cache`
- `torch.accelerator.memory_allocated`
- `torch.accelerator.memory_reserved`
- `torch.accelerator.memory_stats`
I also added the necessary code to include essential statistics in `memory_stats` as part of XPUCachingAllocator.
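A usage sketch of the proposed device-agnostic helpers, using exactly the names listed above (availability depends on this PR landing):
```python
import torch

# Works against whichever accelerator backend is active (CUDA, XPU, ...)
torch.accelerator.empty_cache()
print(torch.accelerator.memory_allocated())
print(torch.accelerator.memory_reserved())
print(torch.accelerator.memory_stats())
```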
cc @H-Huang @awgu @wanchaol @fegin @fduwjj @wz337 @wconstab @d4l3k
| true
|
2,973,309,427
|
[ued] Investigate diffuser pipeline transformer recompilations due to different width/height
|
StrongerXi
|
closed
|
[
"triaged",
"oncall: pt2",
"module: dynamo",
"dynamo-triage-jan2025"
] | 17
|
CONTRIBUTOR
|
### 🐛 Describe the bug
From a user: https://github.com/huggingface/diffusers/issues/10795#issuecomment-2745417752
> 2. Changing the pipeline resolution triggers a recompilation, this happens with both dynamic=None and dynamic=True and resolution affecting the compilation is the most annoying issue right now.
We should look into the root cause of this. The user's testing script is [here](https://github.com/AstraliteHeart/torch-compile-tests-for-diffusers/blob/main/test_resolution_switching.py), but I didn't see any recompilation with the following minimal repro (`dynamic=None` also recompiles only once):
Repro:
```python
import torch
from diffusers import (
AuraFlowPipeline,
GGUFQuantizationConfig,
AuraFlowTransformer2DModel,
)
transformer = AuraFlowTransformer2DModel.from_single_file(
"https://huggingface.co/city96/AuraFlow-v0.3-gguf/blob/main/aura_flow_0.3-Q2_K.gguf",
quantization_config=GGUFQuantizationConfig(compute_dtype=torch.bfloat16),
torch_dtype=torch.bfloat16,
)
pipeline = AuraFlowPipeline.from_pretrained(
"fal/AuraFlow-v0.3",
torch_dtype=torch.bfloat16,
transformer=transformer,
).to("cuda")
pipeline.transformer = torch.compile(pipeline.transformer, fullgraph=True, dynamic=True)
pipeline("A cute pony", width=256, height=256, num_inference_steps=1)
pipeline("A cute pony", width=384, height=384, num_inference_steps=1)
pipeline("A cute pony", width=512, height=512, num_inference_steps=1)
```
### Error logs
_No response_
### Versions
bb987492302, python 3.11
cc @chauhang @penguinwu @ezyang @bobrenjc93 @voznesenskym @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @amjames
| true
|
2,973,245,857
|
Support having no metadata file for HuggingFaceStorageReader
|
ankitageorge
|
closed
|
[
"oncall: distributed",
"fb-exported",
"Merged",
"ciflow/trunk",
"release notes: distributed (checkpoint)"
] | 6
|
CONTRIBUTOR
|
Summary: If there is only one safetensors file, we don't need users to have a metadata file; we can just construct it from the keys of that file. This is a use case for some HuggingFace models, so this PR adds support for it.
Test Plan:
ensure existing tests pass
tested e2e in a notebook
Differential Revision: D72472490
cc @H-Huang @awgu @wanchaol @fegin @fduwjj @wz337 @wconstab @d4l3k
| true
|
2,973,235,669
|
CuDNN + H100 Gives Weird Gradients
|
alanhdu
|
open
|
[
"module: cudnn",
"module: cuda",
"triaged"
] | 9
|
CONTRIBUTOR
|
### 🐛 Describe the bug
If I run this reproduction script at https://gist.github.com/alanhdu/68aeca1b1cfbe63fe3464541a201fb79 (using this [arr.txt](https://github.com/user-attachments/files/19609554/arr.txt) file in `/tmp/arr.pt`), then I see something quite strange.
On an A100 GPU , I get results like:
```
cpu bf16 True 1.5634121894836426 0.3220250606536865
cpu fp32 True 1.5633708238601685 0.32201677560806274
cuda bf16 False 1.56340754032135 0.32201358675956726
cuda fp32 False 1.5633708238601685 0.32201698422431946
cuda bf16 True 1.5634031295776367 0.3220202922821045
cuda fp32 True 1.5633710622787476 0.32200005650520325
```
This makes sense to me -- while there are minor differences in the loss and the gradient norm, all of them are quite small and approximately the same.
But on an H100 GPU, I get results like:
```
cpu bf16 True 1.5634121894836426 0.3220250606536865
cpu fp32 True 1.5633708238601685 0.32201677560806274
cuda bf16 False 1.563407301902771 0.32201361656188965
cuda fp32 False 1.5633708238601685 0.32201698422431946
cuda bf16 True 1.5724581480026245 0.08233493566513062
cuda fp32 True 1.5720577239990234 0.18226394057273865
```
This is much more confusing -- for some reason, even though the loss function is roughly the same, the gradient norms are *very* different when using the CuDNN backend! Far more different than what I would expect given "normal" floating point differences.
This is a very shrunken-down version of an internal model that trains just fine on an A100 but fails to converge at all on an H100.
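A common way to narrow this down is to toggle TF32 and the cuDNN backend and re-run the comparison; a small diagnostic sketch (a suggestion, not part of the original report):
```python
import torch

# Disable TF32 for matmuls and cuDNN convolutions to see whether the H100 gradient gap closes
torch.backends.cuda.matmul.allow_tf32 = False
torch.backends.cudnn.allow_tf32 = False

# Or take cuDNN out of the picture entirely
torch.backends.cudnn.enabled = False

# Deterministic algorithms can also help isolate nondeterministic kernels
torch.use_deterministic_algorithms(True, warn_only=True)
```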
### Versions
This is running on the internal Meta cluster, so we are using a nightly version of PyTorch and Python 3.12. I've attached the collect_env information for both nodes ([A100](https://gist.github.com/alanhdu/ccbd54e378a44159a0e39031fbac0ac4) and [H100](https://gist.github.com/alanhdu/bf43069780f4859c4aa96d3241a27efd)).
On both nodes, I get:
```
torch.version.cuda='12.4.0'
torch.backends.cudnn.version()=8903
```
cc @csarofeen @ptrblck @xwang233 @eqy @msaroufim
| true
|
2,973,234,563
|
Revert "Dont exclude constant_pad_nd in prologue fusion"
|
atalman
|
closed
|
[
"module: inductor",
"ciflow/inductor",
"ci-no-td"
] | 1
|
CONTRIBUTOR
|
Reverts pytorch/pytorch#150145
| true
|
2,973,204,049
|
[Feature Request] Memory optimization for backward propagation in GPU
|
jobs-git
|
open
|
[
"module: autograd",
"module: memory usage",
"triaged"
] | 3
|
NONE
|
### 🚀 The feature, motivation and pitch
Backprop uses a lot of VRAM and can reach several times the size of the model parameters and input data, resulting in lower GPU utilization. Compute and power savings may be realized if backprop VRAM usage can be optimized instead.
### Alternatives
Possible solutions:
- Backprop calculation by parts: instead of computing it as one big matrix, the chain of matrices could be divided into batches, clearing batches that have already completed (an existing partial mitigation is sketched right after this list).
- Utilize async/unified memory for intermediate data: when the backprop calculation exceeds VRAM, the process is currently killed with an OOM; instead, the excess data could be sent to CPU RAM via a unified-memory framework.
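For reference, part of the first alternative is achievable today with activation checkpointing, which trades recomputation for activation memory; a minimal sketch applied to a model like the examples below (an existing mitigation, not a fix for the broader request):
```python
import torch
import torch.nn as nn
from torch.utils.checkpoint import checkpoint_sequential

model = nn.Sequential(
    nn.Linear(128, 1024 * 256), nn.ReLU(),
    nn.Linear(1024 * 256, 1024), nn.ReLU(),
    nn.Linear(1024, 16),
).cuda()

x = torch.randn(128, 128, device="cuda", requires_grad=True)
# Split the forward pass into 3 segments; activations inside each segment are
# recomputed during backward instead of being kept in VRAM.
out = checkpoint_sequential(model, segments=3, input=x, use_reentrant=False)
out.sum().backward()
```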
### Additional context
Code that demonstrate high VRAM usage
With backprop (6GB VRAM)
```python
import torch
import torch.nn as nn
import torch.optim as optim
class SimpleModel(nn.Module):
def __init__(self):
super(SimpleModel, self).__init__()
self.fc1 = nn.Linear(128, 1024*256)
self.fc2 = nn.Linear(1024*256, 1024)
self.fc3 = nn.Linear(1024, 16)
def forward(self, x):
x = torch.relu(self.fc1(x))
x = torch.relu(self.fc2(x))
x = self.fc3(x)
return x
torch.cuda.set_device(0)
torch.cuda.empty_cache()
model = SimpleModel().cuda()
optimizer = optim.Adam(model.parameters(), lr=0.001)
loss_fn = nn.MSELoss()
input_data = torch.randn(128, 128).pin_memory().cuda(non_blocking=True)
target_data = torch.randn(128, 16).pin_memory().cuda(non_blocking=True)
output = model(input_data)
loss = loss_fn(output, target_data)
loss.backward(retain_graph=False)
torch.cuda.synchronize()
optimizer.step()
optimizer.zero_grad(set_to_none=True)
```
Without backprop (1.6GB VRAM)
```python
import torch
import torch.nn as nn
import torch.optim as optim
class SimpleModel(nn.Module):
def __init__(self):
super(SimpleModel, self).__init__()
self.fc1 = nn.Linear(128, 1024*256)
self.fc2 = nn.Linear(1024*256, 1024)
self.fc3 = nn.Linear(1024, 16)
def forward(self, x):
x = torch.relu(self.fc1(x))
x = torch.relu(self.fc2(x))
x = self.fc3(x)
return x
torch.cuda.set_device(0)
torch.cuda.empty_cache()
model = SimpleModel().cuda()
optimizer = optim.Adam(model.parameters(), lr=0.001)
loss_fn = nn.MSELoss()
input_data = torch.randn(128, 128).pin_memory().cuda(non_blocking=True)
target_data = torch.randn(128, 16).pin_memory().cuda(non_blocking=True)
output = model(input_data)
loss = loss_fn(output, target_data)
#loss.backward(retain_graph=False)
#torch.cuda.synchronize()
#optimizer.step()
#optimizer.zero_grad(set_to_none=True)
```
cc @ezyang @albanD @gqchen @pearu @nikitaved @soulitzer @Varal7 @xmfan
| true
|
2,973,188,735
|
Fix conv2d strided prologue
|
eellison
|
closed
|
[
"Merged",
"Reverted",
"ciflow/trunk",
"topic: not user facing",
"module: inductor",
"ciflow/inductor",
"ciflow/rocm",
"ci-no-td"
] | 11
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #150697
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
| true
|
2,973,178,868
|
Remove Clear Cache Time from do_bench_using_profiling
|
oniononion36
|
closed
|
[
"fb-exported",
"Merged",
"ciflow/trunk",
"topic: not user facing",
"module: inductor",
"ciflow/inductor"
] | 7
|
CONTRIBUTOR
|
Summary: In most instances, this action would take ~33% of the total run time, which means that our benchmark would previously differ from the end results by a lot.
Test Plan:
We can compare the benchmark results for
```
CUDA_VISIBLE_DEVICES=4,5 buck run mode/opt -c python.package_style=inplace -c fbcode.enable_gpu_sections=true -c fbcode.nvcc_arch=h100a //caffe2/torch/fb/model_transform/experimental/benchmark:mts_gpu_benchmark -- --model-snapshot-id=672308665_0 --lower-backend=AOT_INDUCTOR --node-replacement-dict="{'torch.nn.Linear':{'(autotune)': 'fp8_float_model_dynamic_quantization_rowwise'}}" --trace-aot-inductor-module=True --disable-acc-tracer=False --batch-size=1024
```
before and after the diff, and notice that on average, the benchmark results decrease by ~0.1ms per iteration, which is more closely aligned with the lowered modules.
Differential Revision: D72469845
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
| true
|
2,973,149,041
|
[AOTI][dashboard] Fix mis-calculated memory compression ratio
|
desertfire
|
closed
|
[
"Merged",
"topic: not user facing",
"module: inductor",
"module: dynamo",
"ciflow/inductor"
] | 3
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #150695
Summary: https://github.com/pytorch/pytorch/pull/149817 introduced an extra warmup run to compute the AOTI memory compression ratio, but since weights are only loaded once in the AOTI run, the peak memory seen in the extra warmup won't include the weights, which causes an artificially high memory compression ratio. This PR removes that extra warmup run and calls reset_peak_memory_stats in the proper place instead.
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
| true
|
2,973,142,897
|
[draft][distributed] add into 3d composability test at AMD CI test
|
mori360
|
open
|
[
"oncall: distributed",
"topic: not user facing"
] | 1
|
CONTRIBUTOR
|
cc @H-Huang @awgu @wanchaol @fegin @fduwjj @wz337 @wconstab @d4l3k
| true
|
2,973,129,850
|
Remove a workaround added in #149381
|
tengyifei
|
closed
|
[
"triaged",
"open source",
"Merged",
"ciflow/trunk",
"release notes: releng"
] | 5
|
CONTRIBUTOR
|
Remove a workaround added in https://github.com/pytorch/pytorch/pull/149381.
Fixes https://github.com/pytorch/xla/issues/8934
| true
|
2,973,127,295
|
[MTIA] Map names to operand indices when folding submodules
|
klintqinami
|
closed
|
[
"fb-exported",
"Merged",
"ciflow/trunk",
"release notes: fx",
"fx"
] | 15
|
CONTRIBUTOR
|
When replacing placeholders with getattrs during constant folding, we can have an argument and parameter name mismatch. In fact, there is no guarantee that the parameter name is equivalent to the argument name used in the module call.
Differential Revision: D72415970
cc @ezyang @SherlockNoMad @EikanWang @jgong5 @wenzhe-nrv
| true
|
2,973,054,461
|
Raise `BufferError` for DLPack buffer-related errors.
|
ysiraichi
|
open
|
[
"open source",
"module: dlpack",
"release notes: python_frontend"
] | 2
|
COLLABORATOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #150691
* #150218
* #150217
* #150216
* #145000
This PR addresses the Array API documentation for [`__dlpack__`][1] and
[`from_dlpack`][2] by making some buffer-related errors `BufferError`
instead of `RuntimeError`, e.g. incompatible dtype, strides, or device.
[1]: https://data-apis.org/array-api/latest/API_specification/generated/array_api.array.__dlpack__.html
[2]: https://data-apis.org/array-api/latest/API_specification/generated/array_api.from_dlpack.html#from-dlpack
| true
|
2,972,966,098
|
Fixing NCCL abort hang issue when a ProcessGroupNCCL manages multiple ncclComms
|
hexinw-nvidia
|
closed
|
[
"oncall: distributed",
"triaged",
"open source",
"Merged",
"Reverted",
"ciflow/trunk",
"release notes: distributed (c10d)",
"ci-no-td"
] | 35
|
CONTRIBUTOR
|
Detail of the issue:
If PyTorch issues send/recv on each of two 2-rank comms, and these comms are managed by a single ProcessGroupNCCL instance, then the comms need to be aborted either in sequence or as a group.
I.e. the following sequential abort causes a hang in NCCL: recv(..., comm0, stream);
send(..., comm1, stream);
abort(comm1);
abort(comm0);
Fixes #119797
cc @H-Huang @awgu @wanchaol @fegin @fduwjj @wz337 @wconstab @d4l3k
| true
|
2,972,920,291
|
[rfc] Guard filter hook
|
anijain2305
|
open
|
[
"module: dynamo",
"ciflow/inductor"
] | 2
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #150689
* #150429
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @chauhang @amjames
| true
|
2,972,880,083
|
[test] cusparse installation in binary build docker img
|
clee2000
|
open
|
[
"ciflow/binaries",
"topic: not user facing"
] | 1
|
CONTRIBUTOR
|
Fixes #ISSUE_NUMBER
| true
|
2,972,843,702
|
[CI][Inductor] Add missing unittest import
|
nWEIdia
|
closed
|
[
"open source",
"topic: not user facing",
"module: inductor",
"ciflow/inductor"
] | 2
|
COLLABORATOR
|
Fixes unit test failures:
test/inductor/test_fused_attention.py", line 567, in TestSDPAPatternRewriterTemplate
@unittest.skip("disabled in upstream") ^^^^^^^^
NameError: name 'unittest' is not defined. Did you forget to import 'unittest'
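For illustration, the fix amounts to adding the missing import at the top of the test file. A minimal standalone sketch (the real class derives from a PyTorch test base, and the method name here is made up):
```python
import unittest

class TestSDPAPatternRewriterTemplate(unittest.TestCase):
    # With `import unittest` present, the decorator resolves and the test is
    # skipped instead of raising NameError at class-definition time.
    @unittest.skip("disabled in upstream")
    def test_sdpa_pattern(self):
        pass

if __name__ == "__main__":
    unittest.main()
```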
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
@ptrblck @eqy @tinglvv @malfet @atalman
| true
|
2,972,837,626
|
[Inductor] Fix consolidating _scaled_mm into mm template TMA error
|
PaulZhang12
|
closed
|
[
"fb-exported",
"Merged",
"ciflow/trunk",
"topic: not user facing",
"module: inductor",
"ciflow/inductor"
] | 4
|
CONTRIBUTOR
|
Summary: The previous diff broke a few tests that didn't run on internal or GH CI (T220169086); this fixes that issue. The {% if %} block is only supposed to support autotuned parameters (constexpr), and should not be used for locals, based on other examples.
Test Plan: buck test 'fbcode//mode/opt' fbcode//caffe2/test/inductor:fp8 -- --exact 'caffe2/test/inductor:fp8 - test_tensorwise_scaling_bfloat16_shape_16,32,32_has_bias_False_use_fast_accum_True_persistent_matmul_True (caffe2.test.inductor.test_fp8.TestFP8Lowering)'
Reviewed By: NikhilAPatel
Differential Revision: D72460516
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
| true
|
2,972,821,266
|
WIP : test3
|
laithsakka
|
open
|
[] | 2
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #150685
| true
|
2,972,794,969
|
Register also future allocations in mempool with NCCL
|
lw
|
closed
|
[
"oncall: distributed",
"Merged",
"ciflow/trunk",
"release notes: distributed (c10d)"
] | 5
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #150564
* __->__ #150684
* #150683
This is the final PR, where everything comes together.
The problem I'm trying to solve is the following: when we register a MemPool with the NCCL ProcessGroup, it calls `ncclCommRegister` on all the allocations that are _currently_ in the pool. However, any later allocation will _not_ be registered with the NCCL communicator!
This is terribly inconvenient, because it means that every piece of code that allocates a tensor must be changed to become aware of whether it's doing so within a private pool, and it must become aware of NCCL and of all the PGs in existence, in order to re-register that pool with them.
Moreover, I believe there can be performance implications because allocating tensors is usually done in the critical path (i.e., during the forward and backward of every step of a training), whereas registering memory is a slow operation that should be done once at init time.
With this PR, once the user registers a Mempool with the NCCL PG, we install some hooks into the CachingAllocator in order to listen for all future memory allocations and, if they belong to the pool, we automatically call `ncclCommRegister` on them! (In fact, we reuse the hooks that already exist for `TORCH_NCCL_USE_TENSOR_REGISTER_ALLOCATOR_HOOK`).
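For illustration, a hedged sketch of the usage pattern this is about, assuming the `torch.cuda.MemPool` / `torch.cuda.use_mem_pool` APIs; the exact call for registering the pool with the NCCL ProcessGroup is intentionally omitted since it is not spelled out here:
```python
import torch

# A private caching-allocator pool.
pool = torch.cuda.MemPool()

# ... register `pool` with the NCCL ProcessGroup (API not shown here) ...

# Allocations made inside this context go into `pool`; with this PR, such
# *future* allocations are automatically ncclCommRegister'ed as well, instead
# of only the allocations that existed at registration time.
with torch.cuda.use_mem_pool(pool):
    buf = torch.empty(1024, device="cuda")
```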
cc @H-Huang @awgu @wanchaol @fegin @fduwjj @wz337 @wconstab @d4l3k
| true
|
2,972,794,696
|
Add mempool to allocator's trace events
|
lw
|
closed
|
[
"Merged",
"ciflow/trunk",
"topic: not user facing"
] | 6
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #150564
* #150684
* __->__ #150683
In the NCCL ProcessGroup we want to support being able to "register" with NCCL all the allocations that belong to a certain private MemPool. In order to do so on-the-fly for every new allocation, we register a hook for the CachingAllocator's TraceEvents. However, we were lacking a way to know whether a given TraceEvent belonged to the MemPool that we cared about or not. With this PR, we add a MempoolId_t field to the TraceEvents.
| true
|
2,972,794,398
|
Clarify behavior of TORCH_NCCL_USE_TENSOR_REGISTER_ALLOCATOR_HOOK
|
lw
|
closed
|
[
"oncall: distributed",
"Merged",
"ciflow/trunk",
"release notes: distributed (c10d)"
] | 8
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #150564
* #150684
* #150683
* __->__ #150682
* #150681
I still don't really understand the original purpose of that env var, but it appears that its usage is completely disconnected from MemPools and from `ncclMemAlloc`/`Free`. In fact, when that env var is set, we invoke `ncclCommRegister` for _all_ NCCL communicators for _all_ the memory segments managed by the allocator (both the global ones, allocated with `cudaMalloc`, and the ones in private MemPools), and we do that both for the segments that already exist when the PG is initialized and for all segments that will be allocated later.
I'm reworking the code a bit, by using a few helper functions, whose name should make this behavior clearer.
cc @H-Huang @awgu @wanchaol @fegin @fduwjj @wz337 @wconstab @d4l3k
| true
|
2,972,794,159
|
Safer bookkeeping of NCCL communicators
|
lw
|
closed
|
[
"oncall: distributed",
"Merged",
"ciflow/trunk",
"release notes: distributed (c10d)"
] | 3
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #150564
* #150684
* #150683
* #150682
* __->__ #150681
This consists mainly in two changes:
- ensure we can reliably obtain the device from a `NCCLComm` object (there was one constructor which didn't set the device)
- use a RAII pattern for acquiring the lock to the global dictionary of `NCCLComms` (which ensures the lock is released in case of exceptions)
cc @H-Huang @awgu @wanchaol @fegin @fduwjj @wz337 @wconstab @d4l3k
| true
|
2,972,772,914
|
DISABLED test_parity__foreach_abs_fastpath_outplace_cuda_bool (__main__.TestForeachCUDA)
|
pytorch-bot[bot]
|
open
|
[
"triaged",
"module: flaky-tests",
"skipped",
"module: mta"
] | 5
|
NONE
|
Platforms: linux, slow
This test was disabled because it is failing in CI. See [recent examples](https://hud.pytorch.org/flakytest?name=test_parity__foreach_abs_fastpath_outplace_cuda_bool&suite=TestForeachCUDA&limit=100) and the most recent trunk [workflow logs](https://github.com/pytorch/pytorch/runs/39977154907).
Over the past 3 hours, it has been determined flaky in 4 workflow(s) with 8 failures and 4 successes.
**Debugging instructions (after clicking on the recent samples link):**
DO NOT ASSUME THINGS ARE OKAY IF THE CI IS GREEN. We now shield flaky tests from developers so CI will thus be green but it will be harder to parse the logs.
To find relevant log snippets:
1. Click on the workflow logs linked above
2. Click on the Test step of the job so that it is expanded. Otherwise, the grepping will not work.
3. Grep for `test_parity__foreach_abs_fastpath_outplace_cuda_bool`
4. There should be several instances run (as flaky tests are rerun in CI) from which you can study the logs.
Test file path: `test_foreach.py`
cc @clee2000 @crcrpar @mcarilli @janeyx99
| true
|
2,972,740,951
|
Revert "[ATen][CUDA] Implement 128 bit vectorization v2 (#145746)"
|
atalman
|
open
|
[
"topic: not user facing",
"ci-no-td"
] | 2
|
CONTRIBUTOR
|
This reverts commit e84bf88dde509d44175a0a1c00cec13c9926843e.
This PR caused a ~10% binary size increase and a compile time increase:
https://github.com/pytorch/pytorch/issues/150647
https://github.com/pytorch/pytorch/issues/147376
Proposing to revert this PR and reland it together with: Optimize 128 bit vectorization https://github.com/pytorch/pytorch/pull/148320
| true
|
2,972,463,420
|
Split up cub-RadixSortPairs-scalars.cu to parallelize compilation
|
TovlyFB
|
open
|
[
"fb-exported",
"ciflow/trunk",
"release notes: cuda"
] | 4
|
CONTRIBUTOR
|
Summary: `cub-RadixSortPairs-scalars.cu` has slow compilation times, especially on Windows. These changes split up the file into smaller components to allow each component to compile in parallel. On Windows, I observed a compile time drop from about 6 minutes to 4 minutes. This is a similar follow up to [PR 148936](https://github.com/pytorch/pytorch/pull/148936).
Differential Revision: D70539650
| true
|
2,972,454,113
|
Compilation Errors with Float Values in flex_attention and create_block_mask
|
Rilwan-Adewoyin
|
open
|
[
"triaged",
"oncall: pt2",
"module: higher order operators",
"module: flex attention"
] | 3
|
NONE
|
### 🐛 Describe the bug
Bug description:
I've encountered two reproducible bugs when using PyTorch's compiled flex_attention or create_block_mask and referencing either a float value < 1.0 or a scalar float tensor, as described in the reproduction steps section. In the script I test four cases: scalar tensor < 1.0, scalar tensor > 1.0, python float < 1.0 and python float > 1.0.
note: With regard to the float issue, when breakpointing in the actual generated Triton code, there appear to be some complications with PyTorch converting these referenced scalar float objects into SymFloat during a mask_mod operation (as part of the flex_attention op or create_block_mask op). The SymFloat objects then cause issues in the Triton kernels, e.g. with division, which only accepts torch.Tensor, float, int, etc.
@clessig
Reproduction Steps:
1. Create a block mask using a mask_mod function that is itself a method of a class A and uses a scalar tensor or float which is an attribute of the same class A
2. Let the float scalar have values < 1.0 or the tensor scalar have any value
3. Run the flex_attention
4. If a scalar float with value < 1.0 is used, there is an error during block mask creation (related to the SymFloat issue above)
5. If a scalar tensor value is used, the error is as shown in the examples below.
```Py
import torch
import time
from torch.nn.attention.flex_attention import create_block_mask, flex_attention
torch.set_float32_matmul_precision('high')
flex_attention_compiled = torch.compile(
flex_attention,
dynamic=False,
fullgraph=True,
options={
"cuda.use_fast_math": False,
"triton.cooperative_reductions": False,
}
)
create_block_mask_compiled = torch.compile(
create_block_mask,
dynamic=False,
fullgraph=True,
options={
},
)
class BlockMaskHelper(torch.nn.Module):
def __init__(self, Q_LEN, KV_LEN, y_radius, x_radius):
super().__init__()
self.Q_LEN = Q_LEN
self.KV_LEN = KV_LEN
if isinstance(y_radius, torch.Tensor):
self.register_buffer("y_radius", y_radius)
self.register_buffer("x_radius", x_radius)
else:
self.y_radius = y_radius
self.x_radius = x_radius
def get_block_mask(self, device):
return create_block_mask_compiled(self.mask_mod, B=None, H=None, Q_LEN=self.Q_LEN, KV_LEN=self.KV_LEN, device=device, BLOCK_SIZE=128)
# Define mask function using tensor values
def mask_mod(self, b: torch.Tensor, h: torch.Tensor, q_idx: torch.Tensor, kv_idx: torch.Tensor) -> torch.Tensor:
d1 = (q_idx - kv_idx) * 0.1 # Arbitrary diff for testing
d2 = (q_idx - kv_idx) * 0.2 # Arbitrary diff for testing
output = self.locality_check(d1, d2, self.y_radius, self.x_radius)
return output
def locality_check(self, y_diff, x_diff, y_radius, x_radius):
return ( (y_diff / y_radius) + (x_diff / x_radius) ) <= 1
# Test function to demonstrate the issues
def test_issues():
# Setup
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
Q_LEN = 1600
KV_LEN = 3200
HEADS = 16
dim = 64
# Create sample tensors
query = torch.randn(1, HEADS, Q_LEN, dim, device=device, requires_grad=True, dtype=torch.bfloat16)
key = torch.randn(1, HEADS, KV_LEN, dim, device=device, requires_grad=True, dtype=torch.bfloat16)
value = torch.randn(1, HEADS, KV_LEN, dim, device=device, requires_grad=True, dtype=torch.bfloat16)
print("""\n\n================================================\nTesting with large tensor values (>1.0) - should work...\n================================================
""")
# Test with larger values (>1.0)
large_y_radius = 15.0
large_x_radius = 6.0
# Approach 1: Using scalar_tensor (should work with large values)
large_y_radius_tensor = torch.scalar_tensor(large_y_radius, dtype=torch.bfloat16).contiguous()
large_x_radius_tensor = torch.scalar_tensor(large_x_radius, dtype=torch.bfloat16).contiguous()
# Test create_block_mask with large tensor values
start = time.time()
with torch.amp.autocast(device_type=device.type, dtype=torch.bfloat16):
try:
bmh = BlockMaskHelper(Q_LEN, KV_LEN, large_y_radius_tensor, large_x_radius_tensor)
block_mask = bmh.get_block_mask(device)
print(f"✓ create_block_mask successful with tensor values > 1.0")
# Test flex_attention with the created mask
output = flex_attention_compiled(query.contiguous(), key.contiguous(), value.contiguous(), block_mask=block_mask)
print(f"✓ flex_attention successful with tensor values > 1.0 ")
except Exception as e:
print(f"✗ Error: {e}")
print(f"Elapsed time: {time.time() - start:.2f}s\n")
# Test create_block_mask with large > 1.0 python floats
print("""\n\n================================================\nTesting with large float values (>1.0) - should work...\n================================================
""")
# Test create_block_mask with large tensor values
start = time.time()
with torch.amp.autocast(device_type=device.type, dtype=torch.bfloat16):
try:
bmh = BlockMaskHelper(Q_LEN, KV_LEN, large_y_radius, large_x_radius)
block_mask = bmh.get_block_mask(device)
print(f"✓ create_block_mask successful with float values > 1.0")
with torch.amp.autocast(device_type=device.type, dtype=torch.bfloat16):
# Test flex_attention with the created mask
output = flex_attention_compiled(query.contiguous(), key.contiguous(), value.contiguous(), block_mask=block_mask)
print(f"✓ flex_attention successful with float values > 1.0")
except Exception as e:
print(f"✗ Error: {e}")
print(f"Elapsed time: {time.time() - start:.2f}s\n")
print("""\n\n=================================================\nTesting with small tensor values (<1.0) - Issue 1...\n================================================
""")
# Issue 1: Using scalar_tensor with small values
small_y_radius = 0.5
small_x_radius = 0.6
small_y_radius_tensor = torch.scalar_tensor(small_y_radius, dtype=torch.bfloat16).contiguous()
small_x_radius_tensor = torch.scalar_tensor(small_x_radius, dtype=torch.bfloat16).contiguous()
# Test with tensor small values
start = time.time()
with torch.amp.autocast(device_type=device.type, dtype=torch.bfloat16):
try:
bmh = BlockMaskHelper(Q_LEN, KV_LEN, small_y_radius, small_x_radius)
block_mask = bmh.get_block_mask(device)
print(f"✓ create_block_mask successful with tensor values < 1.0")
with torch.amp.autocast(device_type=device.type, dtype=torch.bfloat16):
# This may trigger Issue 1
output = flex_attention_compiled(query.contiguous(), key.contiguous(), value.contiguous(), block_mask=block_mask)
print(f"✓ flex_attention successful with tensor values < 1.0")
except Exception as e:
print(f"✗ Issue 1 reproduced: {e}")
print(f"Elapsed time: {time.time() - start:.2f}s\n")
print("""\n\n=================================================\nTesting with small float tensor values (<1.0) - Issue 2...\n================================================
""")
# Test with Python float small values
start = time.time()
with torch.amp.autocast(device_type=device.type, dtype=torch.bfloat16):
try:
# This may trigger Issue 2
bmh = BlockMaskHelper(Q_LEN, KV_LEN, small_y_radius_tensor, small_x_radius_tensor)
block_mask = bmh.get_block_mask(device)
print(f"✓ create_block_mask successful with float values < 1.0")
output = flex_attention_compiled(query.contiguous(), key.contiguous(), value.contiguous(), block_mask=block_mask)
print(f"✓ flex_attention successful with float values < 1.0")
except Exception as e:
print(" flex attention unsuccessful with float values < 1.0")
print(f"✗ Issue 2 reproduced: {e}")
print(f"Elapsed time: {time.time() - start:.2f}s\n")
if __name__ == "__main__":
# Configure PyTorch/Triton for optimal testing
# torch._dynamo.config.cache_size_limit = 16384
# torch._dynamo.config.accumulated_cache_size_limit = 16384
# torch._dynamo.config.fail_on_cache_limit_hit = False
# if hasattr(torch, '_inductor'):
torch._inductor.config.triton.cooperative_reductions = True
# torch._inductor.config.cuda.use_fast_math = True
test_issues()
```
### Error logs
```
================================================
Testing with large tensor values (>1.0) - should work...
================================================
0.01s - Debugger warning: It seems that frozen modules are being used, which may
0.00s - make the debugger miss breakpoints. Please pass -Xfrozen_modules=off
0.00s - to python to disable frozen modules.
0.00s - Note: Debugging will proceed. Set PYDEVD_DISABLE_FILE_VALIDATION=1 to disable this validation.
[redacted].cursor-server/extensions/ms-python.debugpy-2024.6.0-linux-x64/bundled/libs/debugpy/adapter/../../debugpy/launcher/../../debugpy/../debugpy/_vendored/force_pydevd.py:18: UserWarning: incompatible copy of pydevd already imported:
[redacted]/lib/python3.11/site-packages/pydevd_plugins/extensions/pydevd_plugin_omegaconf.py
warnings.warn(msg + ':\n {}'.format('\n '.join(_unvendored)))
✓ create_block_mask successful with tensor values > 1.0
✗ Error: backend='inductor' raised:
SubprocException: An exception occurred in a subprocess:
Traceback (most recent call last):
File "[redacted]/lib/python3.11/site-packages/triton/language/core.py", line 35, in wrapper
return fn(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^
File "[redacted]/lib/python3.11/site-packages/triton/language/core.py", line 1635, in load
return semantic.load(pointer, mask, other, boundary_check, padding_option, cache_modifier, eviction_policy,
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "[redacted]/lib/python3.11/site-packages/triton/language/semantic.py", line 1141, in load
return _load_legacy(ptr, mask, other, boundary_check, padding, cache, eviction, is_volatile, builder)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "[redacted]/lib/python3.11/site-packages/triton/language/semantic.py", line 1069, in _load_legacy
raise ValueError(f"Unsupported ptr type {ptr.type.__repr__()} in `tl.load`")
ValueError: Unsupported ptr type triton.language.float32 in `tl.load`
The above exception was the direct cause of the following exception:
triton.compiler.errors.CompilationError: at 61:15:
if CHECK_BLOCK_BOUNDARY:
# Mask out the elements that are out of the KV_LEN for non divisible seqlen.
post_mod_scores = tl.where(offs_n < KV_LEN, post_mod_scores, float("-inf"))
if not IS_FULL_BLOCKS:
tmp0 = (m) - (n)
tmp1 = tmp0.to(tl.float32)
tmp2 = 0.1
tmp3 = tmp1 * tmp2
tmp4 = tl.load(in_ptr8 + 0).to(tl.float32)
^
The above exception was the direct cause of the following exception:
triton.compiler.errors.CompilationError: at 58:28:
acc, l_i, m_i,
# Offsets
off_z, off_h, offs_m, offs_n,
MATMUL_PRECISION, RCP_LN2,
IS_FULL_BLOCKS,
)
else:
# Benchmark shows even we applied mod & mask to each block for non divisible seqlen,
# it's on par or slightly faster than only applying to the last block in fwd.
# However, we choose different strategy for bwd, where we only apply mod & mask
# to the last block because it's faster a lot.
acc, l_i, m_i = forward_block_mn(
^
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "[redacted]/lib/python3.11/site-packages/torch/_inductor/compile_worker/subproc_pool.py", line 279, in do_job
result = job()
^^^^^
File "[redacted]lib/python3.11/site-packages/torch/_inductor/runtime/compile_tasks.py", line 68, in _worker_compile_triton
load_kernel().precompile(warm_cache_only=True)
File "[redacted]/lib/python3.11/site-packages/torch/_inductor/runtime/triton_heuristics.py", line 293, in precompile
compiled_binary, launcher = self._precompile_config(
^^^^^^^^^^^^^^^^^^^^^^^^
File "[redacted]/lib/python3.11/site-packages/torch/_inductor/runtime/triton_heuristics.py", line 493, in _precompile_config
binary = triton.compile(*compile_args, **compile_kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "[redacted]/lib/python3.11/site-packages/triton/compiler/compiler.py", line 273, in compile
module = src.make_ir(options, codegen_fns, module_map, context)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "[redacted]lib/python3.11/site-packages/triton/compiler/compiler.py", line 100, in make_ir
return ast_to_ttir(self.fn, self, context=context, options=options, codegen_fns=codegen_fns,
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
triton.compiler.errors.CompilationError: at 157:20:
)
V_block_ptr = tl.make_block_ptr(
base=V,
shape=(KV_LEN, V_HEAD_DIM),
strides=(stride_vn, stride_vk),
offsets=(kv_start, 0),
block_shape=(BLOCK_N, V_HEAD_DIM),
order=(1, 0)
)
offs_n = kv_start + tl.arange(0, BLOCK_N)
acc, l_i, m_i = forward_inner(
^
Set TORCH_LOGS="+dynamo" and TORCHDYNAMO_VERBOSE=1 for more information
You can suppress this exception and fall back to eager by setting:
import torch._dynamo
torch._dynamo.config.suppress_errors = True
Elapsed time: 27.10s
================================================
Testing with large float values (>1.0) - should work...
================================================
✓ create_block_mask successful with float values > 1.0
✓ flex_attention successful with float values > 1.0
Elapsed time: 7.16s
=================================================
Testing with small tensor values (<1.0) - Issue 1...
================================================
✗ Issue 1 reproduced: backend='inductor' raised:
AssertionError: While executing %div : [num_users=1] = call_method[target=div](args = (%d1, %item), kwargs = {})
Original traceback:
File "[redacted]lib/python3.11/site-packages/torch/nn/attention/flex_attention.py", line 886, in create_block_mask
mask_tensor = create_mask(mask_mod, B, H, Q_LEN, KV_LEN, device)
File "[redacted]/lib/python3.11/site-packages/torch/nn/attention/flex_attention.py", line 817, in create_mask
mask = mask_mod(b, h, m, n)
File "[redacted]/lib/python3.11/site-packages/torch/_functorch/apis.py", line 203, in wrapped
return vmap_impl(
File "[redacted]/lib/python3.11/site-packages/torch/_functorch/vmap.py", line 331, in vmap_impl
return _flat_vmap(
File "[redacted]/lib/python3.11/site-packages/torch/_functorch/vmap.py", line 479, in _flat_vmap
batched_outputs = func(*batched_inputs, **kwargs)
File "[redacted]/lib/python3.11/site-packages/torch/_functorch/apis.py", line 203, in wrapped
return vmap_impl(
File "[redacted]/lib/python3.11/site-packages/torch/_functorch/vmap.py", line 331, in vmap_impl
return _flat_vmap(
File "[redacted]/lib/python3.11/site-packages/torch/_functorch/vmap.py", line 479, in _flat_vmap
batched_outputs = func(*batched_inputs, **kwargs)
File "[redacted]/lib/python3.11/site-packages/torch/_functorch/apis.py", line 203, in wrapped
return vmap_impl(
File "[redacted]/lib/python3.11/site-packages/torch/_functorch/vmap.py", line 331, in vmap_impl
return _flat_vmap(
File "[redacted]/lib/python3.11/site-packages/torch/_functorch/vmap.py", line 479, in _flat_vmap
batched_outputs = func(*batched_inputs, **kwargs)
File "[redacted]/lib/python3.11/site-packages/torch/_functorch/apis.py", line 203, in wrapped
return vmap_impl(
File "[redacted]lib/python3.11/site-packages/torch/_functorch/vmap.py", line 331, in vmap_impl
return _flat_vmap(
File "[redacted]/lib/python3.11/site-packages/torch/_functorch/vmap.py", line 479, in _flat_vmap
batched_outputs = func(*batched_inputs, **kwargs)
File "[redacted]", line 45, in mask_mod
output = self.locality_check(d1, d2, self.y_radius, self.x_radius)
File "[redacted]", line 49, in locality_check
return ( (y_diff / y_radius) + (x_diff / x_radius) ) <= 1
File "[redacted]packages/torch/_dynamo/_trace_wrapped_higher_order_op.py", line 142, in __torch_function__
return func(*args, **(kwargs or {}))
Set TORCH_LOGS="+dynamo" and TORCHDYNAMO_VERBOSE=1 for more information
You can suppress this exception and fall back to eager by setting:
import torch._dynamo
torch._dynamo.config.suppress_errors = True
Elapsed time: 3.19s
=================================================
Testing with small float tensor values (<1.0) - Issue 2...
================================================
✓ create_block_mask successful with float values < 1.0
flex attention unsuccessful with float values < 1.0
✗ Issue 2 reproduced: backend='inductor' raised:
SubprocException: An exception occurred in a subprocess:
Traceback (most recent call last):
File "[redacted]/lib/python3.11/site-packages/triton/language/core.py", line 35, in wrapper
return fn(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^
File "[redacted]lib/python3.11/site-packages/triton/language/core.py", line 1635, in load
return semantic.load(pointer, mask, other, boundary_check, padding_option, cache_modifier, eviction_policy,
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "[redacted]/lib/python3.11/site-packages/triton/language/semantic.py", line 1141, in load
return _load_legacy(ptr, mask, other, boundary_check, padding, cache, eviction, is_volatile, builder)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "[redacted]lib/python3.11/site-packages/triton/language/semantic.py", line 1069, in _load_legacy
raise ValueError(f"Unsupported ptr type {ptr.type.__repr__()} in `tl.load`")
ValueError: Unsupported ptr type triton.language.float32 in `tl.load`
The above exception was the direct cause of the following exception:
triton.compiler.errors.CompilationError: at 61:15:
if CHECK_BLOCK_BOUNDARY:
# Mask out the elements that are out of the KV_LEN for non divisible seqlen.
post_mod_scores = tl.where(offs_n < KV_LEN, post_mod_scores, float("-inf"))
if not IS_FULL_BLOCKS:
tmp0 = (m) - (n)
tmp1 = tmp0.to(tl.float32)
tmp2 = 0.1
tmp3 = tmp1 * tmp2
tmp4 = tl.load(in_ptr8 + 0).to(tl.float32)
^
The above exception was the direct cause of the following exception:
triton.compiler.errors.CompilationError: at 58:28:
acc, l_i, m_i,
# Offsets
off_z, off_h, offs_m, offs_n,
MATMUL_PRECISION, RCP_LN2,
IS_FULL_BLOCKS,
)
else:
# Benchmark shows even we applied mod & mask to each block for non divisible seqlen,
# it's on par or slightly faster than only applying to the last block in fwd.
# However, we choose different strategy for bwd, where we only apply mod & mask
# to the last block because it's faster a lot.
acc, l_i, m_i = forward_block_mn(
^
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "[redacted]/lib/python3.11/site-packages/torch/_inductor/compile_worker/subproc_pool.py", line 279, in do_job
result = job()
^^^^^
File "[redacted]/lib/python3.11/site-packages/torch/_inductor/runtime/compile_tasks.py", line 68, in _worker_compile_triton
load_kernel().precompile(warm_cache_only=True)
File "[redacted]/lib/python3.11/site-packages/torch/_inductor/runtime/triton_heuristics.py", line 293, in precompile
compiled_binary, launcher = self._precompile_config(
^^^^^^^^^^^^^^^^^^^^^^^^
File "[redacted]/lib/python3.11/site-packages/torch/_inductor/runtime/triton_heuristics.py", line 493, in _precompile_config
binary = triton.compile(*compile_args, **compile_kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "[redacted]/lib/python3.11/site-packages/triton/compiler/compiler.py", line 273, in compile
module = src.make_ir(options, codegen_fns, module_map, context)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "[redacted]/lib/python3.11/site-packages/triton/compiler/compiler.py", line 100, in make_ir
return ast_to_ttir(self.fn, self, context=context, options=options, codegen_fns=codegen_fns,
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
triton.compiler.errors.CompilationError: at 157:20:
)
V_block_ptr = tl.make_block_ptr(
base=V,
shape=(KV_LEN, V_HEAD_DIM),
strides=(stride_vn, stride_vk),
offsets=(kv_start, 0),
block_shape=(BLOCK_N, V_HEAD_DIM),
order=(1, 0)
)
offs_n = kv_start + tl.arange(0, BLOCK_N)
acc, l_i, m_i = forward_inner(
^
Set TORCH_LOGS="+dynamo" and TORCHDYNAMO_VERBOSE=1 for more information
You can suppress this exception and fall back to eager by setting:
import torch._dynamo
torch._dynamo.config.suppress_errors = True
Elapsed time: 5.96s
```
### Versions
```
Collecting environment information...
PyTorch version: 2.6.0+cu124
Is debug build: False
CUDA used to build PyTorch: 12.4
ROCM used to build PyTorch: N/A
OS: Red Hat Enterprise Linux release 8.8 (Ootpa) (x86_64)
GCC version: 13.2.0
Clang version: 15.0.7 (Red Hat 15.0.7-1.module+el8.8.0+17939+b58878af)
CMake version: version 3.28.3
Libc version: glibc-2.28
Python version: 3.11.8 (main, Apr 24 2024, 09:31:19) [GCC 8.5.0 20210514 (Red Hat 8.5.0-18)] (64-bit runtime)
Python platform: Linux-4.18.0-477.43.1.el8_8.x86_64-x86_64-with-glibc2.28
Is CUDA available: True
CUDA runtime version: Could not collect
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration: GPU 0: NVIDIA A100-SXM4-40GB
Nvidia driver version: 550.54.14
cuDNN version: Probably one of the following:
/usr/lib64/libcudnn.so.8.9.7
/usr/lib64/libcudnn_adv_infer.so.8.9.7
/usr/lib64/libcudnn_adv_train.so.8.9.7
/usr/lib64/libcudnn_cnn_infer.so.8.9.7
/usr/lib64/libcudnn_cnn_train.so.8.9.7
/usr/lib64/libcudnn_ops_infer.so.8.9.7
/usr/lib64/libcudnn_ops_train.so.8.9.7
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Byte Order: Little Endian
CPU(s): 256
On-line CPU(s) list: 0-255
Thread(s) per core: 2
Core(s) per socket: 64
Socket(s): 2
NUMA node(s): 4
Vendor ID: AuthenticAMD
CPU family: 23
Model: 49
Model name: AMD EPYC 7742 64-Core Processor
Stepping: 0
CPU MHz: 3337.527
CPU max MHz: 2250.0000
CPU min MHz: 1500.0000
BogoMIPS: 4500.10
Virtualization: AMD-V
L1d cache: 32K
L1i cache: 32K
L2 cache: 512K
L3 cache: 16384K
NUMA node0 CPU(s): 0-31,128-159
NUMA node1 CPU(s): 32-63,160-191
NUMA node2 CPU(s): 64-95,192-223
NUMA node3 CPU(s): 96-127,224-255
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ht syscall nx mmxext fxsr_opt pdpe1gb rdtscp lm constant_tsc rep_good nopl nonstop_tsc cpuid extd_apicid aperfmperf pni pclmulqdq monitor ssse3 fma cx16 sse4_1 sse4_2 x2apic movbe popcnt aes xsave avx f16c rdrand lahf_lm cmp_legacy svm extapic cr8_legacy abm sse4a misalignsse 3dnowprefetch osvw ibs skinit wdt tce topoext perfctr_core perfctr_nb bpext perfctr_llc mwaitx cpb cat_l3 cdp_l3 hw_pstate ssbd mba ibrs ibpb stibp vmmcall fsgsbase bmi1 avx2 smep bmi2 cqm rdt_a rdseed adx smap clflushopt clwb sha_ni xsaveopt xsavec xgetbv1 xsaves cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local clzero irperf xsaveerptr wbnoinvd arat npt lbrv svm_lock nrip_save tsc_scale vmcb_clean flushbyasid decodeassists pausefilter pfthreshold avic v_vmsave_vmload vgif v_spec_ctrl umip rdpid overflow_recov succor smca sme sev sev_es
Versions of relevant libraries:
[pip3] lovely-numpy==0.2.13
[pip3] mypy-extensions==1.0.0
[pip3] numpy==1.24.4
[pip3] nvidia-cublas-cu11==11.10.3.66
[pip3] nvidia-cublas-cu12==12.4.5.8
[pip3] nvidia-cuda-cupti-cu11==11.7.101
[pip3] nvidia-cuda-cupti-cu12==12.4.127
[pip3] nvidia-cuda-nvrtc-cu11==11.7.99
[pip3] nvidia-cuda-nvrtc-cu12==12.4.127
[pip3] nvidia-cuda-runtime-cu11==11.7.99
[pip3] nvidia-cuda-runtime-cu12==12.4.127
[pip3] nvidia-cudnn-cu11==8.5.0.96
[pip3] nvidia-cudnn-cu12==9.1.0.70
[pip3] nvidia-cufft-cu11==10.9.0.58
[pip3] nvidia-cufft-cu12==11.2.1.3
[pip3] nvidia-curand-cu11==10.2.10.91
[pip3] nvidia-curand-cu12==10.3.5.147
[pip3] nvidia-cusolver-cu11==11.4.0.1
[pip3] nvidia-cusolver-cu12==11.6.1.9
[pip3] nvidia-cusparse-cu11==11.7.4.91
[pip3] nvidia-cusparse-cu12==12.3.1.170
[pip3] nvidia-cusparselt-cu12==0.6.2
[pip3] nvidia-nccl-cu11==2.14.3
[pip3] nvidia-nccl-cu12==2.21.5
[pip3] nvidia-nvjitlink-cu12==12.4.127
[pip3] nvidia-nvtx-cu11==11.7.91
[pip3] nvidia-nvtx-cu12==12.4.127
[pip3] optree==0.13.1
[pip3] pytorch-lightning==2.4.0
[pip3] pytorch-triton==3.2.0+git35c6c7c6
[pip3] torch==2.6.0
[pip3] torch_geometric==2.3.1
[pip3] torch_harmonics==0.7.4
[pip3] torch_scatter==2.1.2
[pip3] torch-tb-profiler==0.4.3
[pip3] torchaudio==2.6.0
[pip3] torchinfo==1.8.0
[pip3] torchmetrics==1.6.0
[pip3] torchvision==0.21.0
[pip3] transformer_engine_torch==1.13.0
[pip3] triton==3.2.0
[conda] Could not collect
```
cc @chauhang @penguinwu @zou3519 @ydwu4 @bdhirsh @Chillee @drisspg @yanboliang @BoyuanFeng
| true
|
2,972,357,665
|
[CUDA][avgpool2d] Fix backward launch bounds again for `sm100`, `sm120`
|
pytorchbot
|
closed
|
[
"open source",
"release notes: cuda"
] | 1
|
COLLABORATOR
|
`__CUDA_ARCH__` is not visible in host code, which causes incorrect launch bounds and `too many resources requested for launch` on Blackwell
CC @atalman @malfet as we would want this in 2.7 @nWEIdia
cc @ptrblck @msaroufim
| true
|
2,972,156,930
|
Add notes of non-integer `dtype` in documentation of `torch.triu_indices()` and `torch.tril_indices()`
|
ILCSFNO
|
closed
|
[
"module: docs",
"triaged",
"module: linear algebra"
] | 5
|
CONTRIBUTOR
|
### 📚 The doc issue
The docs of [torch.triu_indices()](https://pytorch.org/docs/stable/generated/torch.triu_indices.html#torch-triu-indices) and [torch.tril_indices()](https://pytorch.org/docs/stable/generated/torch.tril_indices.html#torch-tril-indices) show their shared parameter as below:
https://github.com/pytorch/pytorch/blob/73358d37dab22a9d080de3e29a576dbab775d15f/torch/_torch_docs.py#L11616-L11617
Since the returned values are coordinates, they should be integers. I agree that it should error in the case below:
### Repro
```python
import torch
import numpy as np
row = np.random.randint(3, 6)
col = np.random.randint(3, 6)
offset = np.random.randint((- 1), 2)
torch_offset = torch.triu_indices(row, col, offset=offset, dtype=torch.float)
```
### Output
```txt
RuntimeError: "triu_indices" not implemented for 'Float'
```
Though I agree that it should error, the documentation description may need to note to users that the results cannot be requested in non-integer types.
That is, see the suggestion below.
Thanks for taking note of this.
### Suggest a potential alternative/fix
I suggest that change the description of `dtype`:
from:
https://github.com/pytorch/pytorch/blob/73358d37dab22a9d080de3e29a576dbab775d15f/torch/_torch_docs.py#L11616-L11617
to:
```python
dtype (:class:`torch.dtype`, optional): the desired data type of returned tensor.
    Default: if ``None``, ``torch.long``. Non-integer dtypes are not accepted.
```
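As an illustrative workaround rather than a documented recommendation: request an integer dtype (the only kind accepted) and cast afterwards if float coordinates are really needed.
```python
import torch

# triu_indices only produces integer coordinates.
idx = torch.triu_indices(4, 5, offset=0, dtype=torch.long)
# Explicit cast afterwards if a floating-point tensor is desired.
idx_float = idx.to(torch.float)
```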
cc @svekars @sekyondaMeta @AlannaBurke @jianyuh @nikitaved @pearu @mruberry @walterddr @xwang233 @Lezcano
| true
|
2,972,117,715
|
Size of `tau` can mismatch with the context in `torch.ormqr()`
|
ILCSFNO
|
closed
|
[
"module: docs",
"module: error checking",
"triaged",
"module: linear algebra"
] | 1
|
CONTRIBUTOR
|
### 🐛 Describe the bug
The docs of [torch.ormqr()](https://pytorch.org/docs/stable/generated/torch.ormqr.html#torch-ormqr) show its documentation as below:
https://github.com/pytorch/pytorch/blob/73358d37dab22a9d080de3e29a576dbab775d15f/torch/_torch_docs.py#L8316-L8358
Let's see a repro below, it can run well:
### Repro
```python
import torch
input = torch.randn(2, 3, 4)
tau = torch.randn(2, 2)
other = torch.randn(2, 4, 3)
output = torch.ormqr(input, tau, other, left=False, transpose=False)
print(input)
print(tau)
print(other)
print(output)
```
### Output
```txt
tensor([[[-0.5181, -0.2672, 0.8500, -0.6820],
[ 0.8879, -0.0715, 1.2798, -0.0766],
[-0.2168, -0.2497, -0.7872, 0.0749]],
[[ 0.9200, -1.6159, -1.9501, -0.7741],
[ 0.3273, -1.7461, -0.1577, 0.8340],
[-0.8952, -0.7348, 1.5504, 0.3243]]])
tensor([[ 0.6281, -0.5486],
[-0.0506, -0.2596]])
tensor([[[ 1.6261, -0.4931, -1.9407],
[ 0.3792, 0.2051, -0.3430],
[-0.5243, 2.3373, 2.2473],
[ 0.3151, -0.1882, 1.0962]],
[[ 0.8039, 0.2879, 0.5951],
[-0.7199, -0.6064, -0.2302],
[ 0.3524, -1.0317, 1.0906],
[-0.3807, 0.1414, -2.1497]]])
tensor([[[ 0.6156, -1.9174, -1.5900],
[-0.0201, -0.1962, -0.2447],
[-1.1926, 2.3730, 2.2351],
[ 0.3714, -0.3625, 1.1400]],
[[ 0.8224, 0.2599, 0.6036],
[-0.7560, -0.7410, -0.1078],
[ 0.3037, -1.5359, 1.4930],
[-0.3003, 0.6350, -2.5651]]])
```
### Analyze
For arguments, look at the description above:
#### other
other ([Tensor](https://pytorch.org/docs/stable/tensors.html#torch.Tensor)) – tensor of shape (*, m, n) where * is zero or more batch dimensions.
In this case, other is a tensor of shape (2, 4, 3), meaning that `batch=2`, `m=4`, `n=3`
#### left
left ([bool](https://docs.python.org/3/library/functions.html#bool)) – controls the order of multiplication.
In this case, `left=False`, meaning that `mn=n=3`, where mn equals to m or n depending on the left.
#### input
input ([Tensor](https://pytorch.org/docs/stable/tensors.html#torch.Tensor)) – tensor of shape (*, mn, k) where * is zero or more batch dimensions and mn equals to m or n depending on the left.
In this case, input is a tensor of shape (2, 3, 4), meaning that `batch=2`, `mn=3`, `k=4`, agree with analysis of `left` above.
#### tau
tau ([Tensor](https://pytorch.org/docs/stable/tensors.html#torch.Tensor)) – tensor of shape (*, min(mn, k)) where * is zero or more batch dimensions.
In this case, tau is a tensor of shape (2, 2), meaning that `batch=2`, `min(mn, k) = 2`.
But `mn=n=3` and `k=4`, so `min(mn, k) = min(3, 4) = 3`, which contradicts the shape of `tau`!
So the repro should error, but it doesn't!
In all, there appears to be no check on the size of `tau`, so the call runs successfully, which is not expected.
### Suggestion
* Add a check on the size of `tau`, especially against the value of `min(mn, k)`; a sketch of such a check is shown below
* Raise an error when the sizes do not match
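A hedged sketch of the kind of shape validation being suggested (a Python-level illustration only; the actual check would live in the C++ implementation, and the helper name is made up):
```python
import torch

def check_ormqr_tau(input: torch.Tensor, tau: torch.Tensor) -> None:
    # input is documented as (*, mn, k) and tau as (*, min(mn, k)).
    mn, k = input.shape[-2], input.shape[-1]
    expected = min(mn, k)
    if tau.shape[-1] != expected:
        raise RuntimeError(
            f"torch.ormqr: expected tau to have shape (*, {expected}) to match "
            f"input of shape (*, {mn}, {k}), but got {tuple(tau.shape)}"
        )

# With the repro above: input is (2, 3, 4) and tau is (2, 2), so this would
# raise because min(3, 4) == 3 != 2.
```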
### Versions
Nightly
cc @svekars @sekyondaMeta @AlannaBurke @malfet @jianyuh @nikitaved @pearu @mruberry @walterddr @xwang233 @Lezcano
| true
|
2,971,629,485
|
AOTI: add all fallback ops that are missing from C-shim
|
benjaminglass1
|
open
|
[
"open source",
"topic: not user facing",
"module: inductor",
"ciflow/inductor",
"ciflow/xpu",
"release notes: inductor (aoti)"
] | 4
|
COLLABORATOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #150673
Adds all fallback ops that are logged as missing when running the Inductor OpInfo tests with cpp_wrapper mode, with the exception of one or two ops that cannot be currently represented in the C-shim interface.
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
| true
|
2,971,629,367
|
AOTI fallback ops: sort alphabetically
|
benjaminglass1
|
closed
|
[
"open source",
"Merged",
"ciflow/trunk",
"topic: not user facing"
] | 5
|
COLLABORATOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #150673
* #147225
* __->__ #150672
* #150671
This is just a housekeeping task that makes the listed fallback op order match what's in the generated C shim files.
| true
|
2,971,629,250
|
cpp_wrapper: Re-enable code disabled for forward compatibility
|
benjaminglass1
|
closed
|
[
"open source",
"Merged",
"topic: not user facing",
"module: inductor",
"ciflow/inductor"
] | 3
|
COLLABORATOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #150673
* #147225
* #150672
* __->__ #150671
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
| true
|
2,971,558,604
|
Add inductor standalone_compile API
|
oulgen
|
closed
|
[
"Merged",
"Reverted",
"ciflow/trunk",
"module: inductor",
"ciflow/inductor",
"release notes: inductor",
"ci-no-td"
] | 14
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #150670
This PR adds a standalone_compile API that does precompilation via caching to support the vLLM use case in the short term while we work on the longer-term precompilation solution.
```
standalone_compile(gm, example_inputs, options) -> CompiledArtifact
CompiledArtifact.save(path, format: binary|unpacked = binary)
CompiledArtifact.load(path, format: binary|unpacked = binary)
```
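A hypothetical usage sketch based only on the signatures above; the import location, option contents, and path are assumptions, not part of this PR description:
```python
# `gm` and `example_inputs` are assumed to come from an earlier FX capture.
artifact = standalone_compile(gm, example_inputs, options={})
artifact.save("/tmp/vllm_artifact", format="binary")

# Later, possibly in a different process, restore the compiled artifact.
restored = CompiledArtifact.load("/tmp/vllm_artifact", format="binary")
```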
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
| true
|
2,971,439,069
|
[Inductor] Fix CUDA memory usage for CPU only compile
|
leslie-fang-intel
|
open
|
[
"open source",
"ciflow/trunk",
"topic: not user facing",
"module: inductor",
"module: dynamo",
"ciflow/inductor"
] | 4
|
COLLABORATOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #150669
* #151528
**Summary**
Fix https://github.com/pytorch/pytorch/issues/150622. The root-cause is CUDA device used by default when CUDA is available to generate pattern for a CPU specific compilation. The original PR comes from @vfdev-5 in https://github.com/pytorch/pytorch/pull/124722 and combine the comments from @lezcano in https://github.com/pytorch/pytorch/issues/129131#issuecomment-2182533863
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
| true
|
2,971,427,610
|
DISABLED test_parity__foreach_abs_fastpath_outplace_cuda_bfloat16 (__main__.TestForeachCUDA)
|
pytorch-bot[bot]
|
open
|
[
"triaged",
"module: flaky-tests",
"skipped",
"module: mta"
] | 6
|
NONE
|
Platforms: linux, slow
This test was disabled because it is failing in CI. See [recent examples](https://hud.pytorch.org/flakytest?name=test_parity__foreach_abs_fastpath_outplace_cuda_bfloat16&suite=TestForeachCUDA&limit=100) and the most recent trunk [workflow logs](https://github.com/pytorch/pytorch/runs/39961656616).
Over the past 3 hours, it has been determined flaky in 6 workflow(s) with 12 failures and 6 successes.
**Debugging instructions (after clicking on the recent samples link):**
DO NOT ASSUME THINGS ARE OKAY IF THE CI IS GREEN. We now shield flaky tests from developers so CI will thus be green but it will be harder to parse the logs.
To find relevant log snippets:
1. Click on the workflow logs linked above
2. Click on the Test step of the job so that it is expanded. Otherwise, the grepping will not work.
3. Grep for `test_parity__foreach_abs_fastpath_outplace_cuda_bfloat16`
4. There should be several instances run (as flaky tests are rerun in CI) from which you can study the logs.
<details><summary>Sample error message</summary>
```
Traceback (most recent call last):
File "/var/lib/jenkins/workspace/test/test_foreach.py", line 228, in test_parity
actual = func(
File "/var/lib/jenkins/workspace/test/test_foreach.py", line 91, in __call__
assert mta_called == (expect_fastpath and (not zero_size)), (
AssertionError: mta_called=False, expect_fastpath=True, zero_size=False, self.func.__name__='_foreach_abs', keys=('aten::_foreach_abs', 'Unrecognized', 'aten::empty_strided', 'cudaLaunchKernel', 'Lazy Function Loading', 'cudaDeviceSynchronize')
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_device_type.py", line 1161, in test_wrapper
return test(*args, **kwargs)
File "/opt/conda/envs/py_3.10/lib/python3.10/unittest/mock.py", line 1833, in _inner
return f(*args, **kw)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 1975, in wrap_fn
return fn(self, *args, **kwargs)
File "/var/lib/jenkins/workspace/test/test_foreach.py", line 235, in test_parity
with self.assertRaises(type(e)):
File "/opt/conda/envs/py_3.10/lib/python3.10/unittest/case.py", line 226, in __exit__
self._raiseFailure("{} not raised".format(exc_name))
File "/opt/conda/envs/py_3.10/lib/python3.10/unittest/case.py", line 163, in _raiseFailure
raise self.test_case.failureException(msg)
AssertionError: AssertionError not raised
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 3156, in wrapper
method(*args, **kwargs)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 3156, in wrapper
method(*args, **kwargs)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 3156, in wrapper
method(*args, **kwargs)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_device_type.py", line 454, in instantiated_test
result = test(self, **param_kwargs)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 1612, in wrapper
fn(*args, **kwargs)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_device_type.py", line 1173, in test_wrapper
raise e_tracked from e
Exception: Caused by sample input at index 0: SampleInput(input=TensorList[Tensor[size=(20, 20), device="cuda:0", dtype=torch.bfloat16], Tensor[size=(19, 19), device="cuda:0", dtype=torch.bfloat16], Tensor[size=(18, 18), device="cuda:0", dtype=torch.bfloat16], Tensor[size=(17, 17), device="cuda:0", dtype=torch.bfloat16], Tensor[size=(16, 16), device="cuda:0", dtype=torch.bfloat16], Tensor[size=(15, 15), device="cuda:0", dtype=torch.bfloat16], Tensor[size=(14, 14), device="cuda:0", dtype=torch.bfloat16], Tensor[size=(13, 13), device="cuda:0", dtype=torch.bfloat16], Tensor[size=(12, 12), device="cuda:0", dtype=torch.bfloat16], Tensor[size=(11, 11), device="cuda:0", dtype=torch.bfloat16], Tensor[size=(10, 10), device="cuda:0", dtype=torch.bfloat16], Tensor[size=(9, 9), device="cuda:0", dtype=torch.bfloat16], Tensor[size=(8, 8), device="cuda:0", dtype=torch.bfloat16], Tensor[size=(7, 7), device="cuda:0", dtype=torch.bfloat16], Tensor[size=(6, 6), device="cuda:0", dtype=torch.bfloat16], Tensor[size=(5, 5), device="cuda:0", dtype=torch.bfloat16], Tensor[size=(4, 4), device="cuda:0", dtype=torch.bfloat16], Tensor[size=(3, 3), device="cuda:0", dtype=torch.bfloat16], Tensor[size=(2, 2), device="cuda:0", dtype=torch.bfloat16], Tensor[size=(1, 1), device="cuda:0", dtype=torch.bfloat16]], args=(), kwargs={}, broadcasts_input=False, name='')
To execute this test, run the following from the base repo dir:
PYTORCH_OPINFO_SAMPLE_INPUT_INDEX=0 PYTORCH_TEST_CUDA_MEM_LEAK_CHECK=1 PYTORCH_TEST_WITH_SLOW_GRADCHECK=1 python test/test_foreach.py TestForeachCUDA.test_parity__foreach_abs_fastpath_outplace_cuda_bfloat16
This message can be suppressed by setting PYTORCH_PRINT_REPRO_ON_FAILURE=0
```
</details>
Test file path: `test_foreach.py`
cc @clee2000 @crcrpar @mcarilli @janeyx99
| true
|
2,971,343,322
|
[ROCm][CI] Enable distributed CI on MI300
|
jithunnair-amd
|
closed
|
[
"oncall: distributed",
"module: rocm",
"open source",
"Merged",
"topic: not user facing",
"keep-going",
"ciflow/rocm",
"ciflow/periodic-rocm-mi300"
] | 6
|
COLLABORATOR
|
* Enable distributed CI on MI300 runners, same schedule-based and release-branch triggers as `periodic.yml`; also uses label `ciflow/periodic-rocm-mi300` for triggering on PRs.
* Disabled failing distributed tests on MI300 via Github issues: [151077](https://github.com/pytorch/pytorch/issues/151077), [151078](https://github.com/pytorch/pytorch/issues/151078), [151081](https://github.com/pytorch/pytorch/issues/151081), [151082](https://github.com/pytorch/pytorch/issues/151082), [151083](https://github.com/pytorch/pytorch/issues/151083), [151084](https://github.com/pytorch/pytorch/issues/151084), [151085](https://github.com/pytorch/pytorch/issues/151085), [151086](https://github.com/pytorch/pytorch/issues/151086), [151087](https://github.com/pytorch/pytorch/issues/151087), [151088](https://github.com/pytorch/pytorch/issues/151088), [151089](https://github.com/pytorch/pytorch/issues/151089), [151090](https://github.com/pytorch/pytorch/issues/151090), [151153](https://github.com/pytorch/pytorch/issues/151153)
* Disable failing distributed tests via `skipIfRocm`: https://github.com/pytorch/pytorch/pull/150667/commits/ea9315ff9588ec3dea4de655dcb8bd877f027421
cc @H-Huang @awgu @wanchaol @fegin @fduwjj @wz337 @wconstab @d4l3k @jeffdaily @sunway513 @pruthvistony @ROCmSupport @dllehr-amd @jataylo @hongxiayang @naromero77amd
| true
|
2,971,303,426
|
[invoke_subgraph] Lazy backward
|
anijain2305
|
closed
|
[
"Merged",
"ciflow/trunk",
"topic: not user facing"
] | 9
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #150717
* #150782
* __->__ #150666
| true
|
2,971,168,530
|
The WanVideo ImageClip Encode node Or the Wan Image to Video or WanVideo VACE Encode node in ComfyUI runs very slowly
|
githust66
|
closed
|
[] | 6
|
NONE
|
### 🐛 Describe the bug
The WanVideo ImageToVideo Encode node, the Wan Image to Video node, or the WanVideo VACE Encode node in ComfyUI runs very slowly. It takes about 500 seconds to execute this node. The GPU utilization is consistently at 5%-8%. Previously, the decoding node also had this problem. Now decoding works normally, but there is a problem with encoding.
https://github.com/comfyanonymous/ComfyUI/issues/7480


### Versions
PyTorch version: 2.7.0+rocm6.3
Is debug build: False
CUDA used to build PyTorch: N/A
ROCM used to build PyTorch: 6.3.42131-fa1d09cbd
OS: Ubuntu 22.04.5 LTS (x86_64)
GCC version: (conda-forge gcc 12.1.0-17) 12.1.0
Clang version: Could not collect
CMake version: version 3.31.6
Libc version: glibc-2.35
Python version: 3.10.14 (main, May 6 2024, 19:42:50) [GCC 11.2.0] (64-bit runtime)
Python platform: Linux-5.15.167.4-microsoft-standard-WSL2-x86_64-with-glibc2.35
Is CUDA available: True
CUDA runtime version: Could not collect
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration: AMD Radeon RX 7900 XT (gfx1100)
Nvidia driver version: Could not collect
cuDNN version: Could not collect
HIP runtime version: 6.3.42131
MIOpen runtime version: 3.3.0
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 48 bits physical, 48 bits virtual
Byte Order: Little Endian
CPU(s): 8
On-line CPU(s) list: 0-7
Vendor ID: AuthenticAMD
Model name: AMD Ryzen 7 7700 8-Core Processor
CPU family: 25
Model: 97
Thread(s) per core: 2
Core(s) per socket: 4
Socket(s): 1
Stepping: 2
BogoMIPS: 7599.86
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ht syscall nx mmxext fxsr_opt pdpe1gb rdtscp lm constant_tsc rep_good nopl tsc_reliable nonstop_tsc cpuid extd_apicid pni pclmulqdq ssse3 fma cx16 sse4_1 sse4_2 movbe popcnt aes xsave avx f16c rdrand hypervisor lahf_lm cmp_legacy svm cr8_legacy abm sse4a misalignsse 3dnowprefetch osvw topoext perfctr_core ssbd ibrs ibpb stibp vmmcall fsgsbase bmi1 avx2 smep bmi2 erms invpcid avx512f avx512dq rdseed adx smap avx512ifma clflushopt clwb avx512cd sha_ni avx512bw avx512vl xsaveopt xsavec xgetbv1 xsaves avx512_bf16 clzero xsaveerptr arat npt nrip_save tsc_scale vmcb_clean flushbyasid decodeassists pausefilter pfthreshold v_vmsave_vmload avx512vbmi umip avx512_vbmi2 gfni vaes vpclmulqdq avx512_vnni avx512_bitalg avx512_vpopcntdq rdpid fsrm
Virtualization: AMD-V
Hypervisor vendor: Microsoft
Virtualization type: full
L1d cache: 128 KiB (4 instances)
L1i cache: 128 KiB (4 instances)
L2 cache: 4 MiB (4 instances)
L3 cache: 32 MiB (1 instance)
Vulnerability Gather data sampling: Not affected
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Not affected
Vulnerability Reg file data sampling: Not affected
Vulnerability Retbleed: Not affected
Vulnerability Spec rstack overflow: Mitigation; safe RET
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl and seccomp
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Retpolines; IBPB conditional; IBRS_FW; STIBP always-on; RSB filling; PBRSB-eIBRS Not affected; BHI Not affected
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Versions of relevant libraries:
[pip3] lion-pytorch==0.2.3
[pip3] numpy==1.26.4
[pip3] onnx==1.17.0
[pip3] onnx2torch==1.5.15
[pip3] onnxruntime-rocm==1.19.0
[pip3] onnxruntime-training==1.19.0+rocm634.76
[pip3] open_clip_torch==2.30.0
[pip3] optree==0.14.0
[pip3] pytorch-lightning==2.5.0.post0
[pip3] pytorch-triton-rocm==3.3.0
[pip3] torch==2.7.0+rocm6.3
[pip3] torch_migraphx==0.0.4
[pip3] torchaudio==2.7.0+rocm6.3
[pip3] torchmetrics==1.6.1
[pip3] torchsde==0.2.6
[pip3] torchvision==0.22.0+rocm6.3
[pip3] triton==3.3.0
[conda] lion-pytorch 0.2.3 pypi_0 pypi
[conda] numpy 1.26.4 py310hb13e2d6_0 conda-forge
[conda] onnx2torch 1.5.15 pypi_0 pypi
[conda] open-clip-torch 2.30.0 pypi_0 pypi
[conda] optree 0.14.0 pypi_0 pypi
[conda] pytorch-lightning 2.5.0.post0 pypi_0 pypi
[conda] pytorch-triton-rocm 3.3.0 pypi_0 pypi
[conda] torch 2.7.0+rocm6.3 pypi_0 pypi
[conda] torch-migraphx 0.0.4 pypi_0 pypi
[conda] torchaudio 2.7.0+rocm6.3 pypi_0 pypi
[conda] torchmetrics 1.6.1 pypi_0 pypi
[conda] torchsde 0.2.6 pypi_0 pypi
[conda] torchvision 0.22.0+rocm6.3 pypi_0 pypi
[conda] triton 3.3.0 pypi_0 pypi
| true
|
2,971,122,431
|
[MPS/inductor] Add support for hermite_polynomial_h.
|
dcci
|
closed
|
[
"Merged",
"topic: not user facing",
"module: mps",
"ciflow/mps",
"module: inductor",
"ciflow/inductor"
] | 3
|
MEMBER
|
cc @kulinseth @albanD @malfet @DenisVieriu97 @jhavukainen @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
| true
|
2,971,078,614
|
Proposing torch.empty_cache (device_type) as generalization of torch.cuda.empty_cache() .
|
githubsgi
|
open
|
[
"triaged",
"module: accelerator"
] | 5
|
CONTRIBUTOR
|
### 🚀 The feature, motivation and pitch
The device-dependent call torch.cuda.empty_cache() makes PyTorch model code brittle and non-portable. Proposing a general API, torch.empty_cache(device_type).
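A minimal sketch of the kind of portable wrapper being proposed; the name empty_cache(device_type) does not exist in torch today, and this version simply dispatches to whatever backend module exposes an empty_cache() function:
```python
import torch

def empty_cache(device_type: str = "cuda") -> None:
    # Dispatch to torch.<device_type>.empty_cache() if the backend provides it.
    backend = getattr(torch, device_type, None)
    if backend is not None and hasattr(backend, "empty_cache"):
        backend.empty_cache()

# e.g. empty_cache("cuda"), empty_cache("xpu"), empty_cache("mps")
```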
### Alternatives
_No response_
### Additional context
_No response_
cc @albanD @guangyey @EikanWang
| true
|
2,971,016,811
|
DISABLED test_parity__foreach_abs_fastpath_inplace_cuda_uint8 (__main__.TestForeachCUDA)
|
pytorch-bot[bot]
|
open
|
[
"triaged",
"module: flaky-tests",
"skipped",
"module: mta"
] | 6
|
NONE
|
Platforms: linux
This test was disabled because it is failing in CI. See [recent examples](https://hud.pytorch.org/flakytest?name=test_parity__foreach_abs_fastpath_inplace_cuda_uint8&suite=TestForeachCUDA&limit=100) and the most recent trunk [workflow logs](https://github.com/pytorch/pytorch/runs/39951410523).
Over the past 3 hours, it has been determined flaky in 6 workflow(s) with 12 failures and 6 successes.
**Debugging instructions (after clicking on the recent samples link):**
DO NOT ASSUME THINGS ARE OKAY IF THE CI IS GREEN. We now shield flaky tests from developers so CI will thus be green but it will be harder to parse the logs.
To find relevant log snippets:
1. Click on the workflow logs linked above
2. Click on the Test step of the job so that it is expanded. Otherwise, the grepping will not work.
3. Grep for `test_parity__foreach_abs_fastpath_inplace_cuda_uint8`
4. There should be several instances run (as flaky tests are rerun in CI) from which you can study the logs.
Test file path: `test_foreach.py`
cc @clee2000 @crcrpar @mcarilli @janeyx99
| true
|
2,971,015,429
|
[MPS] Make fused rms_norm traceable
|
malfet
|
closed
|
[
"Merged",
"Reverted",
"topic: bug fixes",
"release notes: mps",
"ciflow/mps",
"module: inductor",
"ciflow/inductor",
"ci-no-td"
] | 12
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #150661
This is a regression introduced by https://github.com/pytorch/pytorch/issues/150629#issue-2970312779, which I should have reviewed more thoroughly.
- Defined `_fused_rms_norm`, added MPS-only implementation for it and dispatch from `rms_norm_symint`, which is registered as `CompositeImplicitAutograd`, i.e. it is not supposed to do any computations over Tensor, only dispatch to other ops
- Register `_fused_rms_norm` as a fallback in `torch/_inductor/lowering.py`
- Added unit test to avoid those regressions in the future
TODO:
- Get rid of this op, change `rms_norm_symint` definition to `CompositeExplicitAutograd` and implement backward function in `tools/autograd/derivatives.yaml`
- Benchmark compiler and re-enable decomp as follows when compiled code is faster
```python
@register_decomposition(aten._rms_norm_fused)
def rms_norm_fused(
self: torch.Tensor, ndim: int, weight: torch.Tensor, eps: float
) -> torch.Tensor:
dtr = [self.dim() - i - 1 for i in range(ndim)]
return self * weight * (self.pow(2).mean(dtr, keepdim=True).add(eps).rsqrt())
```
Fixes https://github.com/pytorch/pytorch/issues/150629
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
| true
|
2,970,970,216
|
torchrun global rank assignement issues
|
nsrilalith
|
closed
|
[
"oncall: distributed"
] | 5
|
NONE
|
### 🐛 Describe the bug
```
import torch
import torch.distributed as dist
import os
def main():
# Initialize the distributed process group using NCCL
rank = int(os.environ["RANK"])
world_size = int(os.environ["WORLD_SIZE"])
local_rank = int(os.environ["LOCAL_RANK"])
torch.cuda.set_device(local_rank)
dist.init_process_group(backend="nccl")
# Create a tensor on the GPU with a value equal to the rank
tensor = torch.tensor([rank], device=torch.device(f"cuda:{local_rank}"))
# All-reduce: sum up the tensor values from all processes
dist.all_reduce(tensor, op=dist.ReduceOp.SUM)
print(f"Global Rank {rank}; Local Rank {local_rank} has tensor value: {tensor.item()}")
if __name__ == '__main__':
main()
```
When I use torchrun to run this with "torchrun --nnodes=3 --nproc_per_node=1 --node-rank=0 --rdzv_id=1234 --rdzv_backend=c10d --rdzv_endpoint=MASTERADDR:29500 simple_nccl_test.py" on the master node, and the same command on the worker nodes with --node-rank=1 and 2, global rank 0 is not assigned to the node at MASTERADDR.
I found out through trial tests that torchrun is assigning global ranks based on IP address, i.e. the numerically first IP among the participating nodes is assigned global rank 0. Is there any way to fix this?
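One possible workaround sketch, assuming the static rendezvous backend is acceptable for this setup: with `--master_addr`/`--master_port` (and no c10d rendezvous), torchrun should take the global rank from `--node-rank` rather than from the rendezvous ordering, so rank 0 stays on the node you designate.
```
# workaround sketch (assumes static rendezvous honours --node-rank);
# run on the master node with --node-rank=0, and on the workers with 1 and 2
torchrun --nnodes=3 --nproc_per_node=1 --node-rank=0 \
    --master_addr=MASTERADDR --master_port=29500 simple_nccl_test.py
```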
### Versions
Collecting environment information...
PyTorch version: 2.5.1
Is debug build: False
CUDA used to build PyTorch: 12.4
ROCM used to build PyTorch: N/A
OS: Ubuntu 22.04.5 LTS (x86_64)
GCC version: (Ubuntu 11.4.0-1ubuntu1~22.04) 11.4.0
Clang version: Could not collect
CMake version: version 3.31.6
Libc version: glibc-2.35
Python version: 3.11.10 | packaged by conda-forge | (main, Oct 16 2024, 01:27:36) [GCC 13.3.0] (64-bit runtime)
Python platform: Linux-6.8.0-1018-aws-x86_64-with-glibc2.35
Is CUDA available: True
CUDA runtime version: 12.4.131
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration: GPU 0: Tesla T4
Nvidia driver version: 550.127.05
cuDNN version: Could not collect
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 46 bits physical, 48 bits virtual
Byte Order: Little Endian
CPU(s): 4
On-line CPU(s) list: 0-3
Vendor ID: GenuineIntel
Model name: Intel(R) Xeon(R) Platinum 8259CL CPU @ 2.50GHz
CPU family: 6
Model: 85
Thread(s) per core: 2
Core(s) per socket: 2
Socket(s): 1
Stepping: 7
BogoMIPS: 4999.99
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ss ht syscall nx pdpe1gb rdtscp lm constant_tsc rep_good nopl xtopology nonstop_tsc cpuid aperfmperf tsc_known_freq pni pclmulqdq ssse3 fma cx16 pcid sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand hypervisor lahf_lm abm 3dnowprefetch pti fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid mpx avx512f avx512dq rdseed adx smap clflushopt clwb avx512cd avx512bw avx512vl xsaveopt xsavec xgetbv1 xsaves ida arat pku ospke avx512_vnni
Hypervisor vendor: KVM
Virtualization type: full
L1d cache: 64 KiB (2 instances)
L1i cache: 64 KiB (2 instances)
L2 cache: 2 MiB (2 instances)
L3 cache: 35.8 MiB (1 instance)
NUMA node(s): 1
NUMA node0 CPU(s): 0-3
Vulnerability Gather data sampling: Unknown: Dependent on hypervisor status
Vulnerability Itlb multihit: KVM: Mitigation: VMX unsupported
Vulnerability L1tf: Mitigation; PTE Inversion
Vulnerability Mds: Vulnerable: Clear CPU buffers attempted, no microcode; SMT Host state unknown
Vulnerability Meltdown: Mitigation; PTI
Vulnerability Mmio stale data: Vulnerable: Clear CPU buffers attempted, no microcode; SMT Host state unknown
Vulnerability Reg file data sampling: Not affected
Vulnerability Retbleed: Vulnerable
Vulnerability Spec rstack overflow: Not affected
Vulnerability Spec store bypass: Vulnerable
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Retpolines; STIBP disabled; RSB filling; PBRSB-eIBRS Not affected; BHI Retpoline
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Versions of relevant libraries:
[pip3] numpy==2.2.3
[pip3] torch==2.5.1
[pip3] torch-model-archiver==0.12.0b20240930
[pip3] torch-workflow-archiver==0.2.15b20240930
[pip3] torchaudio==2.5.1
[pip3] torchserve==0.11.1b20240718
[pip3] torchtext==0.6.0
[pip3] torchvision==0.20.1
[pip3] transformer_engine_torch==1.11.0
[pip3] triton==3.1.0
[conda] blas 1.0 mkl conda-forge
[conda] cuda-cudart 12.4.127 0 nvidia
[conda] cuda-cupti 12.4.127 0 nvidia
[conda] cuda-libraries 12.4.1 0 nvidia
[conda] cuda-nvrtc 12.4.127 0 nvidia
[conda] cuda-nvtx 12.4.127 0 nvidia
[conda] cuda-opencl 12.6.77 0 nvidia
[conda] cuda-runtime 12.4.1 0 nvidia
[conda] libblas 3.9.0 16_linux64_mkl conda-forge
[conda] libcblas 3.9.0 16_linux64_mkl conda-forge
[conda] libcublas 12.4.5.8 0 nvidia
[conda] libcufft 11.2.1.3 0 nvidia
[conda] libcurand 10.3.7.77 0 nvidia
[conda] libcusolver 11.6.1.9 0 nvidia
[conda] libcusparse 12.3.1.170 0 nvidia
[conda] liblapack 3.9.0 16_linux64_mkl conda-forge
[conda] libnvjitlink 12.4.127 0 nvidia
[conda] mkl 2022.2.1 h84fe81f_16997 conda-forge
[conda] numpy 1.26.4 pypi_0 pypi
[conda] pytorch 2.5.1 py3.11_cuda12.4_cudnn9.1.0_0 pytorch
[conda] pytorch-cuda 12.4 hc786d27_7 pytorch
[conda] pytorch-mutex 1.0 cuda pytorch
[conda] torch 2.5.0a0+gita8d6afb pypi_0 pypi
[conda] torch-model-archiver 0.12.0 py311_0 pytorch
[conda] torch-workflow-archiver 0.2.15 py311_0 pytorch
[conda] torchaudio 2.5.1 py311_cu124 pytorch
[conda] torchserve 0.11.1 py311_0 pytorch
[conda] torchtext 0.6.0 py_1 pytorch
[conda] torchtriton 3.1.0 py311 pytorch
[conda] torchvision 0.20.1 py311_cu124 pytorch
[conda] transformer-engine-torch 1.11.0 pypi_0 pypi
cc @H-Huang @awgu @wanchaol @fegin @fduwjj @wz337 @wconstab @d4l3k
| true
|
2,970,949,478
|
[Inductor] Fallback embedding when sparse is True
|
leslie-fang-intel
|
closed
|
[
"open source",
"Merged",
"ciflow/trunk",
"topic: not user facing",
"module: inductor",
"ciflow/inductor"
] | 3
|
COLLABORATOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #150659
**Summary**
Fix issue: https://github.com/pytorch/pytorch/issues/150656, fallback `embedding` when sparse is True.
**Test Plan**
```
python -u -m pytest -s -v test/inductor/test_torchinductor.py -k test_embedding_sparse
```
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
| true
|
2,970,944,719
|
Revert "[ROCm] change preferred blas lib defaults (#150249)"
|
atalman
|
closed
|
[
"module: rocm",
"ciflow/rocm",
"ci-no-td"
] | 1
|
CONTRIBUTOR
|
This reverts commit 8b6bc59e9552689e115445649b76917b9487a181.
The associated Test was reverted on Trunk: https://github.com/pytorch/pytorch/pull/150581
cc @jeffdaily @sunway513 @jithunnair-amd @pruthvistony @ROCmSupport @dllehr-amd @jataylo @hongxiayang @naromero77amd
| true
|
2,970,935,948
|
[AOTI] Remove typedef for half and bfloat16
|
desertfire
|
closed
|
[
"Merged",
"Reverted",
"ciflow/trunk",
"topic: not user facing",
"ciflow/mps",
"module: inductor",
"ciflow/inductor",
"ciflow/rocm",
"ci-no-td",
"ciflow/inductor-periodic"
] | 10
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #150657
Summary: typedef is prone to name collision. Explicitly spell out the actual aten types, needed for the libtorch-free codegen.
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
| true
|
2,970,927,881
|
`torch.compile` fails with sparse embedding (`F.embedding(sparse=True)`)
|
merajhashemi
|
open
|
[
"triaged",
"oncall: pt2",
"module: inductor"
] | 1
|
CONTRIBUTOR
|
### 🐛 Describe the bug
When using `torch.compile` on a function that calls `torch.nn.functional.embedding` with `sparse=True`, an assertion error occurs during graph lowering. It appears that this behavior may be intentional due to the lack of support for sparse embeddings, but I couldn't find a tracking issue for it.
### Reproduction Steps
```python
import torch
import torch.nn.functional as F
@torch.compile
def forward(weight, indices):
return F.embedding(indices, weight, sparse=True)
if __name__ == "__main__":
indices = torch.randint(10, (2, 3))
weight = torch.randn(10, 3, requires_grad=True)
out = forward(weight, indices)
```
### Error Message
```
File ".venv/lib/python3.10/site-packages/torch/_inductor/lowering.py", line 3163, in embedding
assert not sparse
torch._dynamo.exc.BackendCompilerFailed: backend='inductor' raised:
LoweringException: AssertionError:
target: aten.embedding.default
args[0]: TensorBox(StorageBox(
InputBuffer(name='primals_1', layout=FixedLayout('cpu', torch.float32, size=[10, 3], stride=[3, 1]))
))
args[1]: TensorBox(StorageBox(
InputBuffer(name='primals_2', layout=FixedLayout('cpu', torch.int64, size=[2, 3], stride=[3, 1]))
))
args[2]: -1
args[3]: False
args[4]: True
Set TORCH_LOGS="+dynamo" and TORCHDYNAMO_VERBOSE=1 for more information
```
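A possible interim workaround sketch while the lowering is unsupported, assuming a graph break around the sparse embedding is acceptable: keep that call in eager with `torch.compiler.disable` so Inductor never sees `sparse=True`.
```python
import torch
import torch.nn.functional as F

@torch.compiler.disable
def sparse_embedding(weight, indices):
    # stays in eager, so Inductor never tries to lower the sparse=True path
    return F.embedding(indices, weight, sparse=True)

@torch.compile
def forward(weight, indices):
    return sparse_embedding(weight, indices)

if __name__ == "__main__":
    indices = torch.randint(10, (2, 3))
    weight = torch.randn(10, 3, requires_grad=True)
    out = forward(weight, indices)
```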
### Versions
PyTorch version: 2.6.0+cu124
Is debug build: False
CUDA used to build PyTorch: 12.4
ROCM used to build PyTorch: N/A
OS: Ubuntu 22.04.3 LTS (x86_64)
GCC version: (Ubuntu 11.4.0-1ubuntu1~22.04) 11.4.0
Clang version: Could not collect
CMake version: Could not collect
Libc version: glibc-2.35
Python version: 3.10.12 (main, Nov 20 2023, 15:14:05) [GCC 11.4.0] (64-bit runtime)
Python platform: Linux-5.15.0-101-generic-x86_64-with-glibc2.35
Is CUDA available: True
CUDA runtime version: Could not collect
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration: GPU 0: NVIDIA A100-SXM4-80GB
Nvidia driver version: 560.35.03
cuDNN version: Could not collect
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 48 bits physical, 48 bits virtual
Byte Order: Little Endian
CPU(s): 64
On-line CPU(s) list: 0-63
Vendor ID: AuthenticAMD
Model name: AMD EPYC 7543 32-Core Processor
CPU family: 25
Model: 1
Thread(s) per core: 1
Core(s) per socket: 32
Socket(s): 2
Stepping: 1
BogoMIPS: 5589.33
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ht syscall nx mmxext fxsr_opt pdpe1gb rdtscp lm constant_tsc rep_good nopl nonstop_tsc cpuid extd_apicid aperfmperf rapl pni pclmulqdq monitor ssse3 fma cx16 pcid sse4_1 sse4_2 x2apic movbe popcnt aes xsave avx f16c rdrand lahf_lm cmp_legacy svm extapic cr8_legacy abm sse4a misalignsse 3dnowprefetch osvw ibs skinit wdt tce topoext perfctr_core perfctr_nb bpext perfctr_llc mwaitx cpb cat_l3 cdp_l3 invpcid_single hw_pstate ssbd mba ibrs ibpb stibp vmmcall fsgsbase bmi1 avx2 smep bmi2 invpcid cqm rdt_a rdseed adx smap clflushopt clwb sha_ni xsaveopt xsavec xgetbv1 xsaves cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local clzero irperf xsaveerptr rdpru wbnoinvd amd_ppin arat npt lbrv svm_lock nrip_save tsc_scale vmcb_clean flushbyasid decodeassists pausefilter pfthreshold v_vmsave_vmload vgif v_spec_ctrl umip pku ospke vaes vpclmulqdq rdpid overflow_recov succor smca
Virtualization: AMD-V
L1d cache: 2 MiB (64 instances)
L1i cache: 2 MiB (64 instances)
L2 cache: 32 MiB (64 instances)
L3 cache: 512 MiB (16 instances)
NUMA node(s): 4
NUMA node0 CPU(s): 0-15
NUMA node1 CPU(s): 16-31
NUMA node2 CPU(s): 32-47
NUMA node3 CPU(s): 48-63
Vulnerability Gather data sampling: Not affected
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Not affected
Vulnerability Retbleed: Not affected
Vulnerability Spec rstack overflow: Mitigation; safe RET
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl and seccomp
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Retpolines, IBPB conditional, IBRS_FW, STIBP disabled, RSB filling, PBRSB-eIBRS Not affected
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Versions of relevant libraries:
[pip3] mypy==1.15.0
[pip3] mypy-extensions==1.0.0
[pip3] numpy==1.26.4
[pip3] nvidia-cublas-cu12==12.4.5.8
[pip3] nvidia-cuda-cupti-cu12==12.4.127
[pip3] nvidia-cuda-nvrtc-cu12==12.4.127
[pip3] nvidia-cuda-runtime-cu12==12.4.127
[pip3] nvidia-cudnn-cu12==9.1.0.70
[pip3] nvidia-cufft-cu12==11.2.1.3
[pip3] nvidia-curand-cu12==10.3.5.147
[pip3] nvidia-cusolver-cu12==11.6.1.9
[pip3] nvidia-cusparse-cu12==12.3.1.170
[pip3] nvidia-cusparselt-cu12==0.6.2
[pip3] nvidia-nccl-cu12==2.21.5
[pip3] nvidia-nvjitlink-cu12==12.4.127
[pip3] nvidia-nvtx-cu12==12.4.127
[pip3] torch==2.6.0
[pip3] torchvision==0.21.0
[pip3] triton==3.2.0
[conda] Could not collect
cc @chauhang @penguinwu @voznesenskym @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @chenyang78 @kadeng @muchulee8 @amjames @aakhundov
| true
|
2,970,920,645
|
Create better alerting for binary size validations and time it takes to build the binary
|
atalman
|
open
|
[
"module: binaries",
"triaged"
] | 2
|
CONTRIBUTOR
|
### 🐛 Describe the bug
We have following workflow that validates binary size:
https://github.com/pytorch/test-infra/blob/main/.github/workflows/validate-pypi-wheel-binary-size.yml
However it has low visibility.
We need to:
1. Add this to https://github.com/pytorch/test-infra/blob/main/.github/workflows/validate-binaries.yml so it's executed on a nightly basis and on RC builds
2. Add a Clickhouse table for each nightly build's size and the time it takes to build the binary
3. Add a Clickhouse table to track the Linux binary size on each PR so we can bisect more easily when an increase in binary size or build time happens
### Versions
2.8.0
cc @seemethere @malfet @osalpekar
| true
|
2,970,919,590
|
[Inductor] Add decomposeK as an autotuning choice for mm
|
PaulZhang12
|
closed
|
[
"Merged",
"Reverted",
"ciflow/trunk",
"topic: not user facing",
"module: inductor",
"ciflow/inductor",
"ci-no-td",
"skip-url-lint"
] | 26
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #150654
As a result of adding subgraph as a choice to inductor https://github.com/pytorch/pytorch/pull/149761 and enabling FP32 output from PyTorch GEMMs from FP16/BF16 inputs: https://github.com/pytorch/pytorch/pull/150812, this PR enables decompose_k as an autotuning choice for Inductor in generating the fastest matmuls with Triton. DecomposeK is currently only enabled for `torch.compile`.
Followups:
* decompose_k does not currently support epilogue fusion, which will take some work to enable
* Enable autotuning the bmm with Triton templates as well, without requiring much more compile time, via async compilation. Anecdotal evidence shows that Triton BMM usually performs better than aten BMM
* Add for addmm
* Enable for Inference and AOTI
Below are the results of running TritonBench for Split-K shapes, comparing the aten performance versus pt2_triton, which now autotunes on decompose_k, seeing >10% speedup compared to aten on average, and for some shapes over 3x the performance of the best Triton mm previously:
<img width="929" alt="Screenshot 2025-04-28 at 9 15 39 PM" src="https://github.com/user-attachments/assets/27d85bbc-4f3a-43a6-a8fa-d4a5bbb8c999" />
TorchInductor Benchmark Dashboard:
<img width="1727" alt="Screenshot 2025-04-30 at 2 02 53 PM" src="https://github.com/user-attachments/assets/4acd7ffc-407f-4cfd-98bb-2e3d8b1f00b3" />
We see speedups across all runs for training. Compile time increased as expected, with more `mm` options to tune over.
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
Differential Revision: [D73820115](https://our.internmc.facebook.com/intern/diff/D73820115)
| true
|
2,970,919,534
|
[Inductor] Add Subgraph as a Autotuning Choice
|
PaulZhang12
|
closed
|
[
"Merged",
"ciflow/trunk",
"topic: not user facing",
"module: inductor",
"ciflow/inductor"
] | 7
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #150653
Add the option for providing a Subgraph as an autotuning choice in Inductor. This is crucial for implementing the split-k optimization for GEMMs by decomposing a mm -> bmm. https://github.com/pytorch/pytorch/pull/150654 uses these changes to add decomposeK as a default autotuning choice for aten.mm in Inductor.
Using https://github.com/pytorch/pytorch/pull/150654 and a simple script:
```
import torch
def f(a, b):
return torch.matmul(a, b)
def decompose_func(a_in, b_in):
M, K = a_in.shape
K, N = b_in.shape
# TODO: Ideally we want to autotune over this parameter
kPartitions = 256
    assert K % kPartitions == 0, "K must be divisible by kPartitions"
B = K // kPartitions
a_reshaped = a_in.reshape(M, B, kPartitions).transpose(
0, 1
) # Shape: (B, M, kPartitions)
b_reshaped = b_in.reshape(B, kPartitions, N) # Shape: (B, kPartitions, N)
result = torch.bmm(a_reshaped, b_reshaped) # Shape: (B, M, N)
return result.sum(dim=0).to(torch.float16) # Sum over B dimension, Shape: (M, N)
for k in [4096, 8192, 12288, 16384, 20480, 24576, 28672, 32768]:
a = torch.randn(32, k, dtype=torch.float16, device="cuda", requires_grad=True)
b = torch.randn(k, 32, dtype=torch.float16, device="cuda", requires_grad=True)
compiled_res = torch.compile(f, dynamic=False)(a, b)
decompose_res = decompose_func(a, b)
print(f"Compiled mm result close to aten: {torch.allclose(f(a, b), compiled_res, atol=1e-5, rtol=0.5)}")
print(f"Compiled mm result close to decompose: {torch.allclose(decompose_res, compiled_res, atol=1e-5, rtol=0.5)}")
```
we are able to autotune the decomposeK optimization against aten and the traditional Triton templates in Inductor. DecomposeK is faster than aten by ~10% on average and gives a >4x speedup over the best Triton templates on an H100 machine, e.g.:
```
AUTOTUNE mm(32x28672, 28672x32)
decompose_k_mm 0.0126 ms 100.0%
mm 0.0144 ms 87.5%
triton_mm_69 0.0579 ms 21.7% ACC_TYPE='tl.float32', ALLOW_TF32=False, BLOCK_K=128, BLOCK_M=32, BLOCK_N=32, EVEN_K=True, GROUP_M=8, num_stages=5, num_warps=4
triton_mm_75 0.0677 ms 18.6% ACC_TYPE='tl.float32', ALLOW_TF32=False, BLOCK_K=128, BLOCK_M=32, BLOCK_N=32, EVEN_K=True, GROUP_M=8, num_stages=4, num_warps=4
triton_mm_76 0.0850 ms 14.8% ACC_TYPE='tl.float32', ALLOW_TF32=False, BLOCK_K=64, BLOCK_M=32, BLOCK_N=32, EVEN_K=True, GROUP_M=8, num_stages=5, num_warps=4
triton_mm_68 0.1444 ms 8.7% ACC_TYPE='tl.float32', ALLOW_TF32=False, BLOCK_K=32, BLOCK_M=32, BLOCK_N=32, EVEN_K=True, GROUP_M=8, num_stages=5, num_warps=4
triton_mm_72 0.1546 ms 8.1% ACC_TYPE='tl.float32', ALLOW_TF32=False, BLOCK_K=64, BLOCK_M=32, BLOCK_N=32, EVEN_K=True, GROUP_M=8, num_stages=3, num_warps=4
triton_mm_74 0.1819 ms 6.9% ACC_TYPE='tl.float32', ALLOW_TF32=False, BLOCK_K=32, BLOCK_M=32, BLOCK_N=32, EVEN_K=True, GROUP_M=8, num_stages=4, num_warps=4
triton_mm_67 0.1917 ms 6.6% ACC_TYPE='tl.float32', ALLOW_TF32=False, BLOCK_K=128, BLOCK_M=32, BLOCK_N=32, EVEN_K=True, GROUP_M=8, num_stages=2, num_warps=4
triton_mm_73 0.2766 ms 4.5% ACC_TYPE='tl.float32', ALLOW_TF32=False, BLOCK_K=32, BLOCK_M=32, BLOCK_N=32, EVEN_K=True, GROUP_M=8, num_stages=3, num_warps=4
```
https://pastebin.com/g3FMaauT is the generated code from Inductor containing the subgraph decomposition for aten.mm.
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
| true
|
2,970,895,725
|
[c10d][fr] Improve FR dump robustness with all watchdog broadcast wait and more frequent store check
|
fduwjj
|
closed
|
[
"oncall: distributed",
"Merged",
"ciflow/trunk",
"release notes: distributed (c10d)"
] | 8
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #150652
When debugging FR missing dumps and missing dump logs, I have a couple of initial findings:
1. On the same rank, if a second watchdog timeout triggers on a different PG (or sub-PG), that watchdog thread will immediately throw an exception instead of sleeping. We want to fix that by making that watchdog thread wait for 1 min as well.
2. The FR dump takes about 900 ms to 1200 ms, so we are not checking the store frequently enough. But instead of changing the frequency from 1 sec to 300 ms, we finally decided to just let all ranks sleep for 1 min universally rather than using a promise.
cc @H-Huang @awgu @wanchaol @fegin @wz337 @wconstab @d4l3k
| true
|
2,970,895,636
|
[aoti] Use generate_fake_kernels_from_real_mismatches config for draft exported programs
|
yushangdi
|
closed
|
[
"fb-exported",
"Merged",
"ciflow/trunk",
"topic: not user facing",
"module: inductor",
"ciflow/inductor"
] | 6
|
CONTRIBUTOR
|
Summary:
Sometimes we get `MetadataMismatchError` in aoti compilation because draft export uses the flag below to infer the fake kernel when there’s a mismatch, but aoti doesn’t have this flag turned on.
https://fburl.com/code/9qzytl6q
torch._functorch.config.generate_fake_kernels_from_real_mismatches
If we set this flag to True, then aoti compilation would work.
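For reference, a minimal sketch of what opting in looks like from the caller side (assumption: flipping the flag globally for the AOTI compile session is acceptable):
```python
import torch._functorch.config as functorch_config

# assumption: opting in globally for this AOTI compile session is acceptable
functorch_config.generate_fake_kernels_from_real_mismatches = True
```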
Test Plan:
```
buck run fbcode//mode/dev-nosan //caffe2/test/inductor:test_aot_inductor -- -r aoti_runtime_asserts
```
Differential Revision: D72345085
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
| true
|
2,970,895,388
|
[DTensor] clean up _local_shard_size_and_offset
|
wconstab
|
closed
|
[
"oncall: distributed",
"Merged",
"ciflow/trunk",
"topic: not user facing",
"ciflow/inductor"
] | 7
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #150887
* #150862
* __->__ #150650
* #150490
cc @H-Huang @awgu @wanchaol @fegin @fduwjj @wz337 @d4l3k
| true
|
2,970,870,410
|
ENH: Publish full-fledged tarballs also for release candidates
|
h-vetinari
|
closed
|
[
"oncall: releng",
"triaged",
"enhancement",
"actionable"
] | 1
|
CONTRIBUTOR
|
In #149044, there was the following discussion
> @h-vetinari:
> > **Phase 2 (after 3/31/25):**
> > Note that changes here require us to rebuild a Release Candidate
>
> What's the intention w.r.t. release candidates like [`v2.7.0-rc2`](https://github.com/pytorch/pytorch/releases/tag/v2.7.0-rc2)? Since we're still in Phase 1, are they intended only for internal testing (despite the name of the tag)? If they are already intended for wider testing (e.g. we could build and publish the rc's to a separate label in conda-forge), it would be nice if you could publish regular tarballs, so we don't have to collect all the submodules manually.
[added by @atalman through editing the comment]
> [@atalman](https://github.com/atalman) wrote: Hi [@h-vetinari](https://github.com/h-vetinari) The RC builds are intended for internal and external testing. Please refer to the : https://dev-discuss.pytorch.org/t/pytorch-2-7-rc1-produced-for-pytorch-audio-vision/2855 for details on how to install latest RC build.
> @h-vetinari: I just noticed that you responded by editing my comment (huh?), which means I didn't get any notification. I'm not interested in _installing_ the rc builds, I'm interested in testing the builds from the POV of building them for redistribution in conda-forge (i.e. the "external testing" part, but of the sources, not the wheels).
>
> In the past, we didn't have the resources to do that, but things have improved there, and also pytorch has stopped publishing conda packages ([#138506](https://github.com/pytorch/pytorch/issues/138506)), which makes the conda-forge builds all the more important for people who (for whatever reason) prefer to use conda. Testing the builds in advance means sorting out issues earlier and publishing with less delays after the release date.
>
> I could test-build the RCs by checking out the tag and manually collecting the refs for the many [submodules](https://github.com/pytorch/pytorch/blob/main/.gitmodules), but I don't think it's an unreasonable ask to just attach a full-fledged tarball to the RC releases, like you do for GA releases as well.
> @atalman: Hi [@h-vetinari](https://github.com/h-vetinari) This is special issue used to track the release cherry-picks hence we try not to populate it with multiple comments, but keep it for listing cherry-pick request only as much as possible. Please create an separate issue for us to track this work, we would be happy to do it. The tarball you are looking for is something like the sources tarball we publish on the release notes page: https://github.com/pytorch/pytorch/releases/download/v2.6.0/pytorch-v2.6.0.tar.gz ?
Apologies for placing the comments in the wrong issue - the release tracker seemed like a good place, but clearly that was not the case 😅
Indeed, what I'm looking for would be the kind of tarball you have on the release page

That's mainly because https://github.com/dear-github/dear-github/issues/214 makes it a hassle to actually assemble a fully populated checkout of a rc tag.
| true
|
2,970,867,381
|
[c10d] Surface error type when we unlink and create named pipe for DumpPipe
|
fduwjj
|
closed
|
[
"oncall: distributed",
"Merged",
"ciflow/trunk",
"release notes: distributed (c10d)"
] | 4
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #150648
cc @H-Huang @awgu @wanchaol @fegin @wz337 @wconstab @d4l3k
| true
|
2,970,866,954
|
PyTorch wheel binary size increase ~80mb
|
atalman
|
open
|
[
"module: binaries",
"oncall: releng",
"triaged",
"topic: binaries"
] | 3
|
CONTRIBUTOR
|
### 🐛 Describe the bug
Found that we had a binary size increase of ~80 MB for CUDA 12.4 on Jan 31, 2025:
<img width="1015" alt="Image" src="https://github.com/user-attachments/assets/ad25ed02-31e2-46a3-8a36-6e3f94c26de5" />
Commit where the increase happened: https://github.com/pytorch/pytorch/commit/edf08cb080c202a7c58f96eca11dae36aefc6db9
CUDA 12.6: ~60 MB increase in package size
<img width="1006" alt="Image" src="https://github.com/user-attachments/assets/1543d9d5-1536-4beb-bf2f-3b19d3c2760f" />
Commit responsible: https://github.com/pytorch/pytorch/commit/edf08cb080c202a7c58f96eca11dae36aefc6db9
### Versions
2.7.0
cc @seemethere @malfet @osalpekar
| true
|
2,970,841,965
|
WIP: Remove Conda Instructions
|
AlannaBurke
|
closed
|
[
"module: docs",
"release notes: releng",
"module: dynamo",
"ciflow/inductor"
] | 2
|
CONTRIBUTOR
|
Fixes #149551
Removing Conda installation instructions.
Anywhere there were multiple instructions, I removed the Conda ones and left the pip ones. If I wasn't sure what to replace the instructions with, I just left a comment so we'd see all the places it's mentioned when reviewing this PR. I also cleaned up a couple of files.
cc @svekars @sekyondaMeta @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @chauhang @amjames
| true
|
2,970,841,904
|
[dynamo] reconstruct functions decorated in the compiled region properly
|
williamwen42
|
closed
|
[
"Merged",
"ciflow/trunk",
"topic: not user facing",
"module: dynamo",
"ciflow/inductor",
"keep-going"
] | 6
|
MEMBER
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #150586
* __->__ #150645
We were previously unable to reconstruct functions that were decorated in the compiled region.
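For context, a hypothetical illustration (not taken from this PR's tests) of the kind of pattern this enables: returning a function that was decorated inside the compiled region means Dynamo has to reconstruct the decorated object when the frame returns.
```python
import functools
import torch

def my_decorator(fn):
    @functools.wraps(fn)
    def wrapper(*args, **kwargs):
        return fn(*args, **kwargs)
    return wrapper

@torch.compile(backend="eager")
def f(x):
    @my_decorator
    def inner(y):
        return y + 1
    # returning `inner` requires Dynamo to reconstruct the decorated function
    return inner, x.sin()

g, out = f(torch.randn(3))
print(g(torch.ones(3)))
```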
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @chauhang @amjames
| true
|
2,970,838,269
|
[AO] Refactor convert and add QuantAffinePlaceholderObserver
|
mcr229
|
closed
|
[
"Merged",
"ciflow/trunk",
"release notes: quantization"
] | 19
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #150644
* #150643
* #150642
| true
|
2,970,838,146
|
[AO] Add Moving Average Affine Observer
|
mcr229
|
closed
|
[
"Merged",
"ciflow/trunk",
"release notes: quantization"
] | 9
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #150644
* __->__ #150643
* #150642
| true
|
2,970,838,005
|
[AO] update port_metadata_pass to support quant_affine ops
|
mcr229
|
closed
|
[
"Merged",
"release notes: quantization",
"release notes: AO frontend"
] | 7
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #150644
* #150643
* __->__ #150642
| true
|
2,970,811,764
|
tutorial example for cp
|
XilunWu
|
open
|
[] | 2
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #150641
| true
|
2,970,766,348
|
[CUDA][avgpool2d] Fix backward launch bounds again for `sm100`, `sm120`
|
eqy
|
closed
|
[
"module: cuda",
"open source",
"Merged",
"ciflow/trunk",
"topic: bug fixes",
"topic: not user facing"
] | 8
|
COLLABORATOR
|
`__CUDA_ARCH__` is not visible in host code, which causes incorrect launch bounds and `too many resources requested for launch` on blackwell
CC @atalman @malfet as we would want this in 2.7 @nWEIdia
cc @ptrblck @msaroufim
| true
|
2,970,713,917
|
[cutlass backend] Add more logs for cutlass backend benchmark
|
henrylhtsang
|
closed
|
[
"Merged",
"ciflow/trunk",
"topic: not user facing"
] | 3
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #150639
The goal is to have a way to compare whether a change makes things better or worse.
```
Average edge over aten (max(-edge, 0), higher is better):
triton: 8.596507086950552 (from 6 valid values)
triton_persistent_tma: 9.517193693923307 (from 6 valid values)
cutlass_lvl_default: 3.3234737908691785 (from 6 valid values)
cutlass_lvl_1111: 7.088173348313991 (from 6 valid values)
cutlass_lvl_2222: 7.291869722320318 (from 6 valid values)
```
| true
|
2,970,658,678
|
Difference in outputs with dtype `bf16` with `torch.compile`
|
shivam15s
|
open
|
[
"triaged",
"oncall: pt2",
"module: inductor"
] | 4
|
NONE
|
### 🐛 Describe the bug
- Outputs differ between torch.compile and eager PyTorch with input dtype bf16.
- Also note that setting bias to zeros makes the torch.allclose check pass.
Reproducer:
```python
import torch
import torch.nn.functional as F
# seed
torch.manual_seed(0)
def get_per_token_logps(input, weight, bias, selected_token_ids):
logits = torch.matmul(input, weight.t())
logits = logits + bias
logps = F.log_softmax(logits.float(), dim=-1)
per_token_logps = logps.gather(dim=-1, index=selected_token_ids.unsqueeze(-1)).squeeze(-1)
return per_token_logps
if __name__ == "__main__":
B, T, H, V = 8, 128, 1024, 4096
input = torch.randn(B, T, H, device="cuda", dtype=torch.bfloat16)
weight = torch.randn(V, H, device="cuda", dtype=torch.bfloat16)
bias = torch.randn(V, device="cuda", dtype=torch.bfloat16)
# bias = torch.zeros_like(bias)
selected_token_ids = torch.randint(0, V, (B, T), device="cuda")
get_per_token_logps_compiled = torch.compile(get_per_token_logps)
out = get_per_token_logps(input, weight, bias, selected_token_ids)
out_compiled = get_per_token_logps_compiled(input, weight, bias, selected_token_ids)
print(out)
print(out_compiled)
print(torch.allclose(out, out_compiled, atol=1e-3, rtol=1e-3))
```
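To localize the divergence, it can help to compare both outputs against a float32 reference rather than only against each other; a small diagnostic sketch meant to be appended to the end of the reproducer above (assumption: the gap mostly comes from reduced-precision accumulation in the compiled matmul kernel):
```python
# append to the end of the reproducer above; `input`, `weight`, `bias`,
# `selected_token_ids`, `out`, and `out_compiled` are already defined there
ref = get_per_token_logps(input.float(), weight.float(), bias.float(), selected_token_ids)
print("eager    vs fp32 reference:", (out - ref).abs().max().item())
print("compiled vs fp32 reference:", (out_compiled - ref).abs().max().item())
```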
### Error logs
`torch.allclose` returns False
### Versions
```
PyTorch version: 2.6.0+cu124
Is debug build: False
CUDA used to build PyTorch: 12.4
ROCM used to build PyTorch: N/A
OS: Ubuntu 22.04.5 LTS (x86_64)
GCC version: (Ubuntu 11.4.0-1ubuntu1~22.04) 11.4.0
Clang version: 14.0.0-1ubuntu1.1
CMake version: version 3.22.1
Libc version: glibc-2.35
Python version: 3.10.12 (main, Nov 6 2024, 20:22:13) [GCC 11.4.0] (64-bit runtime)
Python platform: Linux-6.8.0-1017-azure-x86_64-with-glibc2.35
Is CUDA available: True
CUDA runtime version: Could not collect
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration:
GPU 0: NVIDIA H100 NVL
GPU 1: NVIDIA H100 NVL
Nvidia driver version: 550.127.05
cuDNN version: Could not collect
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 48 bits physical, 48 bits virtual
Byte Order: Little Endian
CPU(s): 80
On-line CPU(s) list: 0-79
Vendor ID: AuthenticAMD
Model name: AMD EPYC 9V84 96-Core Processor
CPU family: 25
Model: 17
Thread(s) per core: 1
Core(s) per socket: 80
Socket(s): 1
Stepping: 1
BogoMIPS: 4800.07
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ht syscall nx mmxext fxsr_opt pdpe1gb rdtscp lm constant_tsc rep_good nopl tsc_reliable nonstop_tsc cpuid extd_apicid aperfmperf pni pclmulqdq ssse3 fma cx16 pcid sse4_1 sse4_2 movbe popcnt aes xsave avx f16c rdrand hypervisor lahf_lm cmp_legacy cr8_legacy abm sse4a misalignsse 3dnowprefetch osvw topoext perfctr_core vmmcall fsgsbase bmi1 avx2 smep bmi2 erms invpcid avx512f avx512dq rdseed adx smap avx512ifma clflushopt clwb avx512cd sha_ni avx512bw avx512vl xsaveopt xsavec xgetbv1 xsaves user_shstk avx512_bf16 clzero xsaveerptr rdpru arat avx512vbmi umip avx512_vbmi2 gfni vaes vpclmulqdq avx512_vnni avx512_bitalg avx512_vpopcntdq rdpid fsrm
Hypervisor vendor: Microsoft
Virtualization type: full
L1d cache: 2.5 MiB (80 instances)
L1i cache: 2.5 MiB (80 instances)
L2 cache: 80 MiB (80 instances)
L3 cache: 320 MiB (10 instances)
NUMA node(s): 2
NUMA node0 CPU(s): 0-39
NUMA node1 CPU(s): 40-79
Vulnerability Gather data sampling: Not affected
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Not affected
Vulnerability Reg file data sampling: Not affected
Vulnerability Retbleed: Not affected
Vulnerability Spec rstack overflow: Vulnerable: Safe RET, no microcode
Vulnerability Spec store bypass: Vulnerable
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Retpolines; STIBP disabled; RSB filling; PBRSB-eIBRS Not affected; BHI Not affected
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Versions of relevant libraries:
[pip3] flake8==7.1.2
[pip3] mypy-extensions==1.0.0
[pip3] numpy==2.2.3
[pip3] nvidia-cublas-cu12==12.4.5.8
[pip3] nvidia-cuda-cupti-cu12==12.4.127
[pip3] nvidia-cuda-nvrtc-cu12==12.4.127
[pip3] nvidia-cuda-runtime-cu12==12.4.127
[pip3] nvidia-cudnn-cu12==9.1.0.70
[pip3] nvidia-cufft-cu12==11.2.1.3
[pip3] nvidia-curand-cu12==10.3.5.147
[pip3] nvidia-cusolver-cu12==11.6.1.9
[pip3] nvidia-cusparse-cu12==12.3.1.170
[pip3] nvidia-cusparselt-cu12==0.6.2
[pip3] nvidia-nccl-cu12==2.21.5
[pip3] nvidia-nvjitlink-cu12==12.4.127
[pip3] nvidia-nvtx-cu12==12.4.127
[pip3] torch==2.6.0
[pip3] torch-tb-profiler==0.4.3
[pip3] triton==3.2.0
[conda] No relevant packages
```
cc @chauhang @penguinwu @voznesenskym @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @chenyang78 @kadeng @muchulee8 @amjames @aakhundov
| true
|
2,970,584,290
|
Segmentation fault with float16 + CPU mode + large tensor matmuls
|
srampal
|
closed
|
[
"high priority",
"module: crash",
"module: cpu",
"triaged",
"module: 64-bit",
"module: regression",
"module: half",
"module: intel"
] | 11
|
NONE
|
### 🐛 Describe the bug
Performing torch.matmul() with large tensors and dtype = float16 and cpu mode triggers a segmentation fault.
Example:
```python
import torch
def matrix_vector_operations(N_values):
for N in N_values:
A = torch.rand(N, N, dtype=torch.float16, device="cpu")
X = torch.rand(N, dtype=torch.float16, device="cpu")
print(" Allocated tensors for N = ", N)
B = torch.matmul(A, X)
print(" Completed matmul for N = ", N)
if __name__ == "__main__":
N_values = [1, 10, 50000] # Define different N values
matrix_vector_operations(N_values)
```
$ python bug-report.py
Allocated tensors for N = 1
Completed matmul for N = 1
Allocated tensors for N = 10
Completed matmul for N = 10
Allocated tensors for N = 50000
Segmentation fault (core dumped)
However, when I change the dtype to float32, it completes without a segmentation fault, indicating there is more than enough memory available in the system. It also works when the vector X is instantiated using X = torch.rand(N, 1, dtype=torch.float16, device="cpu"), but not when instantiated as shown in the example above. It also doesn't crash with device = cuda on an NVIDIA A100.
This may or may not be related to other issues reported with float16 on cpu such as [this one](https://github.com/pytorch/pytorch/issues/146508)
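Until the underlying crash is fixed, a workaround sketch that avoids the failing fp16 CPU matrix-vector path (assumptions: the temporary float32 copy of A fits in memory for option 1, and the (N, 1) variant works because it routes through the matrix-matrix path for option 2):
```python
import torch

N = 50000
A = torch.rand(N, N, dtype=torch.float16, device="cpu")
X = torch.rand(N, dtype=torch.float16, device="cpu")

# Option 1: upcast around the failing fp16 CPU matmul, then cast the result back
B = torch.matmul(A.float(), X.float()).to(torch.float16)

# Option 2 (based on the observation above): give X an explicit second dimension
B2 = torch.matmul(A, X.unsqueeze(1)).squeeze(1)
```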
### Versions
$ python collect_env.py
Collecting environment information...
PyTorch version: 2.6.0+cu124
Is debug build: False
CUDA used to build PyTorch: 12.4
ROCM used to build PyTorch: N/A
OS: Red Hat Enterprise Linux 9.4 (Plow) (x86_64)
GCC version: (GCC) 11.5.0 20240719 (Red Hat 11.5.0-5)
Clang version: Could not collect
CMake version: Could not collect
Libc version: glibc-2.34
Python version: 3.12.5 (main, Dec 3 2024, 00:00:00) [GCC 11.5.0 20240719 (Red Hat 11.5.0-2)] (64-bit runtime)
Python platform: Linux-5.14.0-503.34.1.el9_5.x86_64-x86_64-with-glibc2.34
Is CUDA available: True
CUDA runtime version: 12.8.93
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration: GPU 0: NVIDIA A100-PCIE-40GB
Nvidia driver version: 570.124.06
cuDNN version: Could not collect
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 46 bits physical, 48 bits virtual
Byte Order: Little Endian
CPU(s): 24
On-line CPU(s) list: 0-23
Vendor ID: GenuineIntel
Model name: Intel Xeon Processor (Cascadelake)
CPU family: 6
Model: 85
Thread(s) per core: 1
Core(s) per socket: 1
Socket(s): 24
Stepping: 6
BogoMIPS: 5786.40
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ss syscall nx pdpe1gb rdtscp lm constant_tsc rep_good nopl xtopology cpuid tsc_known_freq pni pclmulqdq vmx ssse3 fma cx16 pcid sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand hypervisor lahf_lm abm 3dnowprefetch cpuid_fault ssbd ibrs ibpb stibp ibrs_enhanced tpr_shadow flexpriority ept vpid ept_ad fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid avx512f avx512dq rdseed adx smap clflushopt clwb avx512cd avx512bw avx512vl xsaveopt xsavec xgetbv1 xsaves arat vnmi umip pku ospke avx512_vnni md_clear arch_capabilities
Virtualization: VT-x
Hypervisor vendor: KVM
Virtualization type: full
L1d cache: 768 KiB (24 instances)
L1i cache: 768 KiB (24 instances)
L2 cache: 96 MiB (24 instances)
L3 cache: 384 MiB (24 instances)
NUMA node(s): 1
NUMA node0 CPU(s): 0-23
Vulnerability Gather data sampling: Unknown: Dependent on hypervisor status
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Vulnerable: Clear CPU buffers attempted, no microcode; SMT Host state unknown
Vulnerability Reg file data sampling: Not affected
Vulnerability Retbleed: Mitigation; Enhanced IBRS
Vulnerability Spec rstack overflow: Not affected
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Enhanced / Automatic IBRS; IBPB conditional; RSB filling; PBRSB-eIBRS SW sequence; BHI SW loop, KVM SW loop
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Mitigation; TSX disabled
Versions of relevant libraries:
[pip3] numpy==2.2.4
[pip3] nvidia-cublas-cu12==12.4.5.8
[pip3] nvidia-cuda-cupti-cu12==12.4.127
[pip3] nvidia-cuda-nvrtc-cu12==12.4.127
[pip3] nvidia-cuda-runtime-cu12==12.4.127
[pip3] nvidia-cudnn-cu12==9.1.0.70
[pip3] nvidia-cufft-cu12==11.2.1.3
[pip3] nvidia-curand-cu12==10.3.5.147
[pip3] nvidia-cusolver-cu12==11.6.1.9
[pip3] nvidia-cusparse-cu12==12.3.1.170
[pip3] nvidia-cusparselt-cu12==0.6.2
[pip3] nvidia-nccl-cu12==2.21.5
[pip3] nvidia-nvjitlink-cu12==12.4.127
[pip3] nvidia-nvtx-cu12==12.4.127
[pip3] torch==2.6.0
[pip3] triton==3.2.0
[conda] Could not collect
cc @ezyang @gchanan @zou3519 @kadeng @msaroufim @jgong5 @mingfeima @XiaobingSuper @sanchitintel @ashokei @jingxu10 @frank-wei
| true
|
2,970,563,557
|
[precompile] Serialization for GlobalStateGuard
|
zhxchen17
|
closed
|
[
"fb-exported",
"Merged",
"ciflow/trunk",
"topic: not user facing",
"module: dynamo",
"ciflow/inductor"
] | 5
|
CONTRIBUTOR
|
Summary: To preserve global state guards we need to make the C++ type serializable. Using JSON because it's easier to do and we don't have a lot of data in the global state.
Test Plan: test_dynamo -k test_global_state_guard_serialization
Differential Revision: D72410611
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @chauhang @amjames
| true
|
2,970,527,939
|
[validations] Run nccl version check on Linux only
|
atalman
|
closed
|
[
"Merged",
"topic: not user facing"
] | 3
|
CONTRIBUTOR
|
Follow-up to https://github.com/pytorch/pytorch/pull/150194 to disable the NCCL version print on OSes other than Linux
| true
|