| id int64 2.74B-3.05B | title stringlengths 1-255 | user stringlengths 2-26 | state stringclasses 2 values | labels listlengths 0-24 | comments int64 0-206 | author_association stringclasses 4 values | body stringlengths 7-62.5k ⌀ | is_title bool 1 class |
|---|---|---|---|---|---|---|---|---|
2,901,822,822
|
Re-enable tests
|
cyyever
|
closed
|
[
"open source",
"Merged",
"ciflow/trunk",
"topic: not user facing",
"test-config/asan"
] | 3
|
COLLABORATOR
|
No UBSAN failures.
| true
|
2,901,805,467
|
[WIP] backed_size_oblivious=True for export
|
pianpwk
|
open
|
[
"fx",
"module: inductor",
"ciflow/inductor",
"release notes: export"
] | 1
|
CONTRIBUTOR
|
Fixes #ISSUE_NUMBER
cc @ezyang @SherlockNoMad @EikanWang @jgong5 @wenzhe-nrv @voznesenskym @penguinwu @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
| true
|
2,901,790,922
|
[ONNX] Handle error in verification interpreter
|
justinchuby
|
closed
|
[
"module: onnx",
"open source",
"Merged",
"ciflow/trunk",
"release notes: onnx",
"topic: not user facing"
] | 6
|
COLLABORATOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #148707
* __->__ #148730
Use a simple try/except to handle ONNX Runtime errors in the verification interpreter when they occur. One example: ORT will sometimes produce a list of None for some nodes; I am not sure yet how that happens.
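A minimal, self-contained sketch of the error-handling pattern described above; the function and node names are hypothetical and do not reflect the real VerificationInterpreter internals:
```python
def compare_node(run_node, node_name, *inputs):
    """run_node: callable that executes one ONNX node via ONNX Runtime."""
    try:
        outputs = run_node(*inputs)
    except Exception as exc:  # ORT raises backend-specific error types
        return {"node": node_name, "status": "skipped", "error": str(exc)}
    if outputs is None or any(o is None for o in outputs):
        return {"node": node_name, "status": "skipped", "error": "ORT returned None"}
    return {"node": node_name, "status": "ok", "outputs": outputs}

def flaky_node():
    raise RuntimeError("simulated ORT failure")

# Verification keeps going instead of aborting on the failing node.
print(compare_node(flaky_node, "Add_0"))
```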
| true
|
2,901,784,521
|
[inductor] Fix division by zero error in fractional max
|
isuruf
|
closed
|
[
"open source",
"Merged",
"ciflow/trunk",
"module: inductor",
"ciflow/inductor",
"release notes: inductor"
] | 3
|
COLLABORATOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #148729
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
Fixes https://github.com/pytorch/pytorch/issues/148152
| true
|
2,901,780,983
|
UNSTABLE trunk / libtorch-linux-focal-cuda12.6-py3.10-gcc9-debug / build
|
malfet
|
closed
|
[
"module: ci",
"triaged",
"unstable"
] | 1
|
CONTRIBUTOR
|
Follow-up to https://github.com/pytorch/pytorch/issues/148495 (the new one has been migrated to cuda12.6)
cc @seemethere @pytorch/pytorch-dev-infra
| true
|
2,901,777,242
|
Add ccode for FloorDiv
|
kalpit-meta-1
|
closed
|
[
"module: cpu",
"fb-exported",
"Merged",
"ciflow/trunk",
"topic: not user facing",
"ciflow/inductor"
] | 13
|
CONTRIBUTOR
|
Summary: Add ccode for FloorDiv
Test Plan: CIs
Differential Revision: D70749021
cc @jgong5 @mingfeima @XiaobingSuper @sanchitintel @ashokei @jingxu10
| true
|
2,901,760,551
|
Update the comment
|
Microve
|
closed
|
[
"fb-exported",
"Merged",
"ciflow/trunk",
"topic: not user facing",
"module: inductor",
"ciflow/inductor"
] | 10
|
CONTRIBUTOR
|
Differential Revision: D70747931
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
| true
|
2,901,757,013
|
[Inductor] Missed block pointer for tiled + broadcast load
|
blaine-rister
|
closed
|
[
"oncall: pt2"
] | 3
|
CONTRIBUTOR
|
### 🐛 Describe the bug
On this example program:
```
import torch
import torch._inductor.config as config
config.triton.prefer_nd_tiling = True
config.triton.use_block_ptr = True
full_size = (114, 10, 160)
def get_input(view_size):
full = torch.rand(full_size, device="cuda")
view = torch.as_strided(full, view_size, full.stride())
return view
inps = [
get_input(view_size)
for view_size in [(114, 10, 140), (114, 1, 140)]
]
compiled = torch.compile(torch.add)
compiled(*inps)
```
I get the following Triton kernel:
```
@triton.jit
def triton_poi_fused_add_0(in_ptr0, in_ptr1, out_ptr0, ynumel, xnumel, YBLOCK : tl.constexpr, XBLOCK : tl.constexpr):
ynumel = 1140
xnumel = 140
yoffset = tl.program_id(1) * YBLOCK
yindex = yoffset + tl.arange(0, YBLOCK)[None, :]
ymask = yindex < ynumel
xoffset = tl.program_id(0) * XBLOCK
xindex = xoffset + tl.arange(0, XBLOCK)[:, None]
xmask = xindex < xnumel
x2 = xindex
y3 = yindex
y1 = yindex // 10
tmp0 = tl.load(tl.make_block_ptr(in_ptr0, shape=[140, 1140], strides=[1, 160], block_shape=[XBLOCK, YBLOCK], order=[1, 0], offsets=[xoffset, yoffset]), boundary_check=[0, 1], eviction_policy='evict_last')
tmp1 = tl.load(in_ptr1 + (x2 + 1600*y1), xmask & ymask, eviction_policy='evict_last')
tmp2 = tmp0 + tmp1
tl.store(tl.make_block_ptr(out_ptr0, shape=[140, 1140], strides=[1, 140], block_shape=[XBLOCK, YBLOCK], order=[1, 0], offsets=[xoffset, yoffset]), tl.broadcast_to(tmp2, [XBLOCK, YBLOCK]).to(tl.float32), boundary_check=[0, 1])
''', device_str='cuda')
```
The line defining `tmp1` could use a block pointer, but for some reason this lowering path seems to elide block pointer analysis. The parenthetical `(x2 + 1600*y1)` leads me to think this might be an `index_expr` as opposed to the usual indexing, or some other unconventional path.
### Error logs
_No response_
### Versions
Collecting environment information...
PyTorch version: 2.7.0a0+gitbd78b54
Is debug build: True
CUDA used to build PyTorch: 12.1
ROCM used to build PyTorch: N/A
OS: CentOS Stream 9 (x86_64)
GCC version: (GCC) 11.5.0 20240719 (Red Hat 11.5.0-5)
Clang version: Could not collect
CMake version: version 3.30.4
Libc version: glibc-2.34
Python version: 3.10.14 (main, May 6 2024, 19:42:50) [GCC 11.2.0] (64-bit runtime)
Python platform: Linux-6.4.3-0_fbk14_hardened_2601_gcd42476b84e9-x86_64-with-glibc2.34
Is CUDA available: True
CUDA runtime version: 12.1.105
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration: GPU 0: NVIDIA PG509-210
Nvidia driver version: 550.90.07
cuDNN version: Could not collect
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: False
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 46 bits physical, 48 bits virtual
Byte Order: Little Endian
CPU(s): 22
On-line CPU(s) list: 0-21
Vendor ID: GenuineIntel
Model name: Intel(R) Xeon(R) Platinum 8339HC CPU @ 1.80GHz
CPU family: 6
Model: 85
Thread(s) per core: 1
Core(s) per socket: 22
Socket(s): 1
Stepping: 11
BogoMIPS: 3591.57
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ss ht syscall nx pdpe1gb rdtscp lm constant_tsc arch_perfmon rep_good nopl xtopology cpuid tsc_known_freq pni pclmulqdq vmx ssse3 fma cx16 pdcm pcid sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand hypervisor lahf_lm abm 3dnowprefetch cpuid_fault invpcid_single ssbd ibrs ibpb stibp ibrs_enhanced tpr_shadow flexpriority ept vpid ept_ad fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid mpx avx512f avx512dq rdseed adx smap clflushopt clwb avx512cd avx512bw avx512vl xsaveopt xsavec xgetbv1 xsaves avx512_bf16 arat vnmi umip pku ospke avx512_vnni md_clear flush_l1d arch_capabilities
Virtualization: VT-x
Hypervisor vendor: KVM
Virtualization type: full
L1d cache: 704 KiB (22 instances)
L1i cache: 704 KiB (22 instances)
L2 cache: 88 MiB (22 instances)
L3 cache: 16 MiB (1 instance)
NUMA node(s): 1
NUMA node0 CPU(s): 0-21
Vulnerability Gather data sampling: Unknown: Dependent on hypervisor status
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Mitigation; Clear CPU buffers; SMT Host state unknown
Vulnerability Retbleed: Mitigation; Enhanced IBRS
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Vulnerable: eIBRS with unprivileged eBPF
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Mitigation; TSX disabled
Versions of relevant libraries:
[pip3] flake8==6.1.0
[pip3] flake8-bugbear==23.3.23
[pip3] flake8-comprehensions==3.15.0
[pip3] flake8-executable==2.1.3
[pip3] flake8-logging-format==0.9.0
[pip3] flake8-pyi==23.3.1
[pip3] flake8-simplify==0.19.3
[pip3] mypy==1.13.0
[pip3] mypy-extensions==1.0.0
[pip3] numpy==1.26.4
[pip3] nvidia-cublas-cu12==12.4.5.8
[pip3] nvidia-cuda-cupti-cu12==12.4.127
[pip3] nvidia-cuda-nvrtc-cu12==12.4.127
[pip3] nvidia-cuda-runtime-cu12==12.4.127
[pip3] nvidia-cudnn-cu12==9.1.0.70
[pip3] nvidia-cufft-cu12==11.2.1.3
[pip3] nvidia-curand-cu12==10.3.5.147
[pip3] nvidia-cusolver-cu12==11.6.1.9
[pip3] nvidia-cusparse-cu12==12.3.1.170
[pip3] nvidia-nccl-cu12==2.21.5
[pip3] nvidia-nvjitlink-cu12==12.4.127
[pip3] nvidia-nvtx-cu12==12.4.127
[pip3] optree==0.13.0
[pip3] pytorch_sphinx_theme==0.0.24
[pip3] pytorch-triton==3.0.0+dedb7bdf33
[pip3] torch==2.7.0a0+gitbd78b54
[pip3] torchaudio==2.5.0a0+79047bf
[pip3] torchbench==0.1
[pip3] torchdata==0.10.0a0+c64801f
[pip3] torchtext==0.17.0a0+1d4ce73
[pip3] triton==3.1.0
[conda] blas 1.0 mkl
[conda] magma-cuda116 2.6.1 1 pytorch
[conda] magma-cuda121 2.6.1 1 pytorch
[conda] mkl 2023.1.0 h213fc3f_46344
[conda] mkl-include 2023.1.0 h06a4308_46344
[conda] mkl-service 2.4.0 py310h5eee18b_1
[conda] mkl_fft 1.3.8 py310h5eee18b_0
[conda] mkl_random 1.2.4 py310hdb19cb5_0
[conda] numpy 2.1.2 pypi_0 pypi
[conda] numpy-base 1.26.4 py310hb5e798b_0
[conda] nvidia-cublas-cu12 12.4.5.8 pypi_0 pypi
[conda] nvidia-cuda-cupti-cu12 12.4.127 pypi_0 pypi
[conda] nvidia-cuda-nvrtc-cu12 12.4.127 pypi_0 pypi
[conda] nvidia-cuda-runtime-cu12 12.4.127 pypi_0 pypi
[conda] nvidia-cudnn-cu12 9.1.0.70 pypi_0 pypi
[conda] nvidia-cufft-cu12 11.2.1.3 pypi_0 pypi
[conda] nvidia-curand-cu12 10.3.5.147 pypi_0 pypi
[conda] nvidia-cusolver-cu12 11.6.1.9 pypi_0 pypi
[conda] nvidia-cusparse-cu12 12.3.1.170 pypi_0 pypi
[conda] nvidia-nccl-cu12 2.21.5 pypi_0 pypi
[conda] nvidia-nvjitlink-cu12 12.4.127 pypi_0 pypi
[conda] nvidia-nvtx-cu12 12.4.127 pypi_0 pypi
[conda] optree 0.13.0 pypi_0 pypi
[conda] pytorch-sphinx-theme 0.0.24 pypi_0 pypi
[conda] pytorch-triton 3.0.0+dedb7bdf33 pypi_0 pypi
[conda] torch 2.7.0a0+gitbd78b54 dev_0
[conda] torchaudio 2.5.0a0+79047bf dev_0
[conda] torchbench 0.1 pypi_0 pypi
[conda] torchdata 0.7.0a0+11bb5b8 pypi_0 pypi
[conda] torchfix 0.4.0 pypi_0 pypi
[conda] torchtext 0.17.0a0+1d4ce73 dev_0
[conda] triton 3.1.0 pypi_0 pypi
cc @chauhang @penguinwu
| true
|
2,901,750,946
|
Add ccode for FloorDiv
|
kalpit-meta-1
|
closed
|
[
"module: cpu",
"fb-exported"
] | 8
|
CONTRIBUTOR
|
Summary: Add ccode for FloorDiv
Test Plan: CIs
Differential Revision: D70746841
cc @jgong5 @mingfeima @XiaobingSuper @sanchitintel @ashokei @jingxu10
| true
|
2,901,747,488
|
Don't clear feedback_saver_fns after cache clear
|
exclamaforte
|
closed
|
[
"Merged",
"ciflow/trunk",
"topic: not user facing",
"module: inductor",
"ciflow/inductor"
] | 6
|
CONTRIBUTOR
|
Summary:
Since feedback_saver_fns are used for logging, it doesn't make sense to clear them; clearing them led to surprising behavior in user code, where disabling caches caused logging code to break.
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
| true
|
2,901,736,074
|
Workaround no triton float8_e8m0fnu support in inductor
|
eellison
|
closed
|
[
"Merged",
"ciflow/trunk",
"release notes: fx",
"fx",
"module: inductor",
"ciflow/inductor"
] | 4
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #148722
Triton doesn't support actual float8_e8m0fnu yet, so we can't currently codegen any arithmetic on it. But we can support bitcasting and view/memory operators, treating the values as uint8 for now. Fix for https://github.com/pytorch/pytorch/issues/147873.
The one question I'm not sure of is whether we need to explicitly disable Triton template fusion, since it would fuse in these dtypes as uint8.
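A minimal sketch of the kind of program this enables under torch.compile: bitcast/view ops only, no float8 arithmetic. It assumes a CUDA machine and a PyTorch build that defines `torch.float8_e8m0fnu`:
```python
import torch

def roundtrip(bits: torch.Tensor) -> torch.Tensor:
    e8m0 = bits.view(torch.float8_e8m0fnu)  # pure bitcast, handled like uint8
    return e8m0.view(torch.uint8)           # bitcast back; no e8m0 arithmetic

compiled = torch.compile(roundtrip)
out = compiled(torch.randint(0, 256, (16,), dtype=torch.uint8, device="cuda"))
```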
cc @ezyang @SherlockNoMad @EikanWang @jgong5 @wenzhe-nrv @voznesenskym @penguinwu @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
| true
|
2,901,724,871
|
[cuSPARSE][B200] Bump tolerances for test_sparse_csr matvec
|
eqy
|
closed
|
[
"module: sparse",
"module: cuda",
"open source",
"Merged",
"ciflow/trunk",
"topic: not user facing"
] | 6
|
COLLABORATOR
|
Small tolerance bump for Blackwell (appears to use the same kernel as previous arches)
cc @alexsamardzic @nikitaved @pearu @cpuhrsch @amjames @bhosmer @jcaip @ptrblck @msaroufim
| true
|
2,901,719,122
|
replace usages of upload_graph in inductor with tlparse (v2)
|
bdhirsh
|
closed
|
[
"Merged",
"ciflow/trunk",
"module: inductor",
"module: dynamo",
"ciflow/inductor",
"release notes: inductor"
] | 19
|
CONTRIBUTOR
|
Reland of https://github.com/pytorch/pytorch/pull/148703
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #133044
* #147561
* __->__ #148720
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
| true
|
2,901,716,857
|
[MPS][BE] Align bitshift behavior with CPU
|
malfet
|
closed
|
[
"Merged",
"ciflow/trunk",
"topic: improvements",
"release notes: mps",
"ciflow/mps",
"no-runner-experiments"
] | 3
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #148719
* #148686
* #148685
By casting the argument to output type
| true
|
2,901,700,348
|
[Inductor] Permuted memory access pattern for tiled pointwise kernels
|
blaine-rister
|
closed
|
[
"triaged",
"oncall: pt2",
"module: inductor"
] | 11
|
CONTRIBUTOR
|
### 🐛 Describe the bug
On this example program:
```
import torch
import torch._inductor.config as config
config.triton.prefer_nd_tiling = True
config.triton.use_block_ptr = True
full_size = (21, 32)
view_size = (21, 19)
def get_input():
full = torch.rand(full_size, device="cuda")
view = torch.as_strided(full, view_size, full.stride())
return view
inps = [get_input() for _ in range(2)]
compiled = torch.compile(torch.add)
compiled(*inps)
```
Inductor generates the Triton kernel:
```
import triton
import triton.language as tl
from triton.compiler.compiler import AttrsDescriptor
from torch._inductor.runtime import triton_helpers, triton_heuristics
from torch._inductor.runtime.triton_helpers import libdevice, math as tl_math
from torch._inductor.runtime.hints import AutotuneHint, ReductionHint, TileHint, DeviceProperties
triton_helpers.set_driver_to_gpu()
@triton_heuristics.pointwise(
size_hints={'y': 32, 'x': 32}, tile_hint=TileHint.DEFAULT,
filename=__file__,
triton_meta={'signature': {'in_ptr0': '*fp32', 'in_ptr1': '*fp32', 'out_ptr0': '*fp32', 'ynumel': 'i32', 'xnumel': 'i32'}, 'device': DeviceProperties(type='cuda', index=0, multi_processor_count=108, cc=80, major=8, regs_per_multiprocessor=65536, max_threads_per_multi_processor=2048, warp_size=32), 'constants': {}, 'configs': [AttrsDescriptor(divisible_by_16=(0, 1, 2), equal_to_1=())]},
inductor_meta={'autotune_hints': set(), 'kernel_name': 'triton_poi_fused_add_0', 'mutated_arg_names': [], 'optimize_mem': True, 'no_x_dim': False, 'num_load': 2, 'num_reduction': 0, 'backend_hash': '54E46422D5DB2E55B804C8E038A4A0E2ECEED6FCC5402DED453936C14F5DFA13', 'are_deterministic_algorithms_enabled': False, 'assert_indirect_indexing': True, 'autotune_local_cache': True, 'autotune_pointwise': True, 'autotune_remote_cache': None, 'force_disable_caches': True, 'dynamic_scale_rblock': True, 'max_autotune': False, 'max_autotune_pointwise': False, 'min_split_scan_rblock': 256, 'spill_threshold': 16, 'store_cubin': False},
min_elem_per_thread=0
)
@triton.jit
def triton_poi_fused_add_0(in_ptr0, in_ptr1, out_ptr0, ynumel, xnumel, YBLOCK : tl.constexpr, XBLOCK : tl.constexpr):
ynumel = 21
xnumel = 19
yoffset = tl.program_id(1) * YBLOCK
yindex = yoffset + tl.arange(0, YBLOCK)[None, :]
ymask = yindex < ynumel
xoffset = tl.program_id(0) * XBLOCK
xindex = xoffset + tl.arange(0, XBLOCK)[:, None]
xmask = xindex < xnumel
x1 = xindex
y0 = yindex
tmp0 = tl.load(tl.make_block_ptr(in_ptr0, shape=[19, 21], strides=[1, 32], block_shape=[XBLOCK, YBLOCK], order=[1, 0], offsets=[xoffset, yoffset]), boundary_check=[0, 1], eviction_policy='evict_last')
tmp1 = tl.load(tl.make_block_ptr(in_ptr1, shape=[19, 21], strides=[1, 32], block_shape=[XBLOCK, YBLOCK], order=[1, 0], offsets=[xoffset, yoffset]), boundary_check=[0, 1], eviction_policy='evict_last')
tmp2 = tmp0 + tmp1
tl.store(tl.make_block_ptr(out_ptr0, shape=[19, 21], strides=[1, 19], block_shape=[XBLOCK, YBLOCK], order=[1, 0], offsets=[xoffset, yoffset]), tl.broadcast_to(tmp2, [XBLOCK, YBLOCK]).to(tl.float32), boundary_check=[0, 1])
''', device_str='cuda')
```
This is technically correct, but the shapes and strides seem permuted for the load and store. Normally, I would expect the trailing stride to be `1` for both. On certain systems, it's more expensive to load/store with permuted strides, because you end up doing a DMA in the normal order and then permuting on chip.
I want to see if we can flip the dimensions back so that `strides=[32, 1]`. Or does it even matter? Is this just a matter of convention?
### Versions
Collecting environment information...
PyTorch version: 2.7.0a0+gitbd78b54
Is debug build: True
CUDA used to build PyTorch: 12.1
ROCM used to build PyTorch: N/A
OS: CentOS Stream 9 (x86_64)
GCC version: (GCC) 11.5.0 20240719 (Red Hat 11.5.0-5)
Clang version: Could not collect
CMake version: version 3.30.4
Libc version: glibc-2.34
Python version: 3.10.14 (main, May 6 2024, 19:42:50) [GCC 11.2.0] (64-bit runtime)
Python platform: Linux-6.4.3-0_fbk14_hardened_2601_gcd42476b84e9-x86_64-with-glibc2.34
Is CUDA available: True
CUDA runtime version: 12.1.105
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration: GPU 0: NVIDIA PG509-210
Nvidia driver version: 550.90.07
cuDNN version: Could not collect
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: False
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 46 bits physical, 48 bits virtual
Byte Order: Little Endian
CPU(s): 22
On-line CPU(s) list: 0-21
Vendor ID: GenuineIntel
Model name: Intel(R) Xeon(R) Platinum 8339HC CPU @ 1.80GHz
CPU family: 6
Model: 85
Thread(s) per core: 1
Core(s) per socket: 22
Socket(s): 1
Stepping: 11
BogoMIPS: 3591.57
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ss ht syscall nx pdpe1gb rdtscp lm constant_tsc arch_perfmon rep_good nopl xtopology cpuid tsc_known_freq pni pclmulqdq vmx ssse3 fma cx16 pdcm pcid sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand hypervisor lahf_lm abm 3dnowprefetch cpuid_fault invpcid_single ssbd ibrs ibpb stibp ibrs_enhanced tpr_shadow flexpriority ept vpid ept_ad fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid mpx avx512f avx512dq rdseed adx smap clflushopt clwb avx512cd avx512bw avx512vl xsaveopt xsavec xgetbv1 xsaves avx512_bf16 arat vnmi umip pku ospke avx512_vnni md_clear flush_l1d arch_capabilities
Virtualization: VT-x
Hypervisor vendor: KVM
Virtualization type: full
L1d cache: 704 KiB (22 instances)
L1i cache: 704 KiB (22 instances)
L2 cache: 88 MiB (22 instances)
L3 cache: 16 MiB (1 instance)
NUMA node(s): 1
NUMA node0 CPU(s): 0-21
Vulnerability Gather data sampling: Unknown: Dependent on hypervisor status
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Mitigation; Clear CPU buffers; SMT Host state unknown
Vulnerability Retbleed: Mitigation; Enhanced IBRS
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Vulnerable: eIBRS with unprivileged eBPF
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Mitigation; TSX disabled
Versions of relevant libraries:
[pip3] flake8==6.1.0
[pip3] flake8-bugbear==23.3.23
[pip3] flake8-comprehensions==3.15.0
[pip3] flake8-executable==2.1.3
[pip3] flake8-logging-format==0.9.0
[pip3] flake8-pyi==23.3.1
[pip3] flake8-simplify==0.19.3
[pip3] mypy==1.13.0
[pip3] mypy-extensions==1.0.0
[pip3] numpy==1.26.4
[pip3] nvidia-cublas-cu12==12.4.5.8
[pip3] nvidia-cuda-cupti-cu12==12.4.127
[pip3] nvidia-cuda-nvrtc-cu12==12.4.127
[pip3] nvidia-cuda-runtime-cu12==12.4.127
[pip3] nvidia-cudnn-cu12==9.1.0.70
[pip3] nvidia-cufft-cu12==11.2.1.3
[pip3] nvidia-curand-cu12==10.3.5.147
[pip3] nvidia-cusolver-cu12==11.6.1.9
[pip3] nvidia-cusparse-cu12==12.3.1.170
[pip3] nvidia-nccl-cu12==2.21.5
[pip3] nvidia-nvjitlink-cu12==12.4.127
[pip3] nvidia-nvtx-cu12==12.4.127
[pip3] optree==0.13.0
[pip3] pytorch_sphinx_theme==0.0.24
[pip3] pytorch-triton==3.0.0+dedb7bdf33
[pip3] torch==2.7.0a0+gitbd78b54
[pip3] torchaudio==2.5.0a0+79047bf
[pip3] torchbench==0.1
[pip3] torchdata==0.10.0a0+c64801f
[pip3] torchtext==0.17.0a0+1d4ce73
[pip3] triton==3.1.0
[conda] blas 1.0 mkl
[conda] magma-cuda116 2.6.1 1 pytorch
[conda] magma-cuda121 2.6.1 1 pytorch
[conda] mkl 2023.1.0 h213fc3f_46344
[conda] mkl-include 2023.1.0 h06a4308_46344
[conda] mkl-service 2.4.0 py310h5eee18b_1
[conda] mkl_fft 1.3.8 py310h5eee18b_0
[conda] mkl_random 1.2.4 py310hdb19cb5_0
[conda] numpy 2.1.2 pypi_0 pypi
[conda] numpy-base 1.26.4 py310hb5e798b_0
[conda] nvidia-cublas-cu12 12.4.5.8 pypi_0 pypi
[conda] nvidia-cuda-cupti-cu12 12.4.127 pypi_0 pypi
[conda] nvidia-cuda-nvrtc-cu12 12.4.127 pypi_0 pypi
[conda] nvidia-cuda-runtime-cu12 12.4.127 pypi_0 pypi
[conda] nvidia-cudnn-cu12 9.1.0.70 pypi_0 pypi
[conda] nvidia-cufft-cu12 11.2.1.3 pypi_0 pypi
[conda] nvidia-curand-cu12 10.3.5.147 pypi_0 pypi
[conda] nvidia-cusolver-cu12 11.6.1.9 pypi_0 pypi
[conda] nvidia-cusparse-cu12 12.3.1.170 pypi_0 pypi
[conda] nvidia-nccl-cu12 2.21.5 pypi_0 pypi
[conda] nvidia-nvjitlink-cu12 12.4.127 pypi_0 pypi
[conda] nvidia-nvtx-cu12 12.4.127 pypi_0 pypi
[conda] optree 0.13.0 pypi_0 pypi
[conda] pytorch-sphinx-theme 0.0.24 pypi_0 pypi
[conda] pytorch-triton 3.0.0+dedb7bdf33 pypi_0 pypi
[conda] torch 2.7.0a0+gitbd78b54 dev_0 <develop>
[conda] torchaudio 2.5.0a0+79047bf dev_0 <develop>
[conda] torchbench 0.1 pypi_0 pypi
[conda] torchdata 0.7.0a0+11bb5b8 pypi_0 pypi
[conda] torchfix 0.4.0 pypi_0 pypi
[conda] torchtext 0.17.0a0+1d4ce73 dev_0 <develop>
[conda] triton 3.1.0 pypi_0 pypi
cc @chauhang @penguinwu @voznesenskym @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @amjames @aakhundov
| true
|
2,901,658,170
|
Update win-vs2022-cuda12.1-py3 -> win-vs2022-cuda12.6-py3
|
atalman
|
closed
|
[
"Merged",
"ciflow/trunk",
"topic: not user facing"
] | 3
|
CONTRIBUTOR
|
Should have been migrated long ago
| true
|
2,901,632,664
|
[mm_logs] enhance the printing for overview info
|
YUNQIUGUO
|
closed
|
[
"fb-exported",
"Merged",
"ciflow/trunk",
"topic: not user facing",
"module: inductor",
"ciflow/inductor"
] | 8
|
CONTRIBUTOR
|
Summary:
Previously, the dynamo counters did not print the counts information automatically.
This explicitly adds a log message after lowering with overview info for inductor aten mms.
It will look like the following, where each name has the form `{aten_op_name}_{m}_{n}_{k}`:
```
torch/_inductor/compile_fx.py:832] [0/0] Overview info of inductor aten mms: (aten.addmm_16_6_16: 1), (name: count), xxx
```
{F1975874802}
Test Plan:
```
TORCH_LOGS="+inductor" buck2 run -c fbcode.enable_gpu_sections=true -c fbcode.nvcc_arch=h100 @//mode/opt fbcode//caffe2/test/inductor:test_aot_inductor -- -r test_addmm_cuda
```
Differential Revision: D70739912
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
| true
|
2,901,619,425
|
[FSDP2][doc] highlight equivalence of set_requires_gradient_sync and no_sync
|
weifengpy
|
closed
|
[
"oncall: distributed",
"Merged",
"ciflow/trunk",
"release notes: distributed (fsdp)",
"ciflow/inductor"
] | 3
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #148715
We have been asked a few times about FSDP2's equivalent of no_sync. Highlight
set_requires_gradient_sync as the equivalent in the docstring.
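A minimal sketch of the equivalence being documented. It assumes `model` is already wrapped with FSDP2's `fully_shard`, the process group is initialized (setup omitted), and `microbatches` is a placeholder list:
```python
# DDP-style gradient accumulation:
#     with ddp_model.no_sync():
#         loss.backward()
# FSDP2 equivalent: toggle gradient synchronization on the sharded module.
for i, batch in enumerate(microbatches):
    is_last = i == len(microbatches) - 1
    model.set_requires_gradient_sync(is_last)  # only reduce-scatter on the last step
    model(batch).sum().backward()
```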
cc @H-Huang @awgu @kwen2501 @wanchaol @fegin @fduwjj @wz337 @wconstab @d4l3k @c-p-i-o
| true
|
2,901,611,365
|
Fix too big to optimize in test, actually use O0 when aot_inductor.compile_wrapper_with_O0 is set
|
yushangdi
|
closed
|
[
"fb-exported",
"Merged",
"ciflow/trunk",
"topic: not user facing",
"module: inductor",
"ciflow/inductor"
] | 8
|
CONTRIBUTOR
|
Summary:
1. Check against the "0" char instead.
2. We got the following error when using anything other than the O0 flag: `error: Function ZN5torch12aot_inductorL22__check_inputs_outputsEPP16AtenTensorOpaqueS3 is too big to optimize [-Werror,-Wignored-optimization-argument]`. So we use the O0 flag in the wrapper code when `aot_inductor.compile_wrapper_opt_level` is set to `O0`.
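A hedged usage sketch of the flag named in the summary; the exact attribute name is taken from the description above and its location under `torch._inductor.config` is assumed:
```python
import torch._inductor.config as inductor_config

# Compile the AOTInductor wrapper at -O0 to avoid the
# "too big to optimize" error described above.
inductor_config.aot_inductor.compile_wrapper_opt_level = "O0"
```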
Test Plan:
```
buck run 'fbcode//mode/opt' fbcode//deeplearning/aot_inductor/cpu/test:ads_second_stage_dsnn_models_aoti_lowering_test -- -r AdsSecondStageDSNNModelsAOTILoweringTest
```
Differential Revision: D70670957
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
| true
|
2,901,610,947
|
[torch.export] How to export with the model having *args and **kwargs as forward signature?
|
titaiwangms
|
closed
|
[
"oncall: pt2",
"oncall: export"
] | 3
|
COLLABORATOR
|
This is the original model code:
```python
from diffusers.models import AutoencoderKL
import torch
model_name = "black-forest-labs/FLUX.1-dev"
hf_safetensor = True
model_opts = {'torch_dtype': torch.float16}
model = AutoencoderKL.from_pretrained(model_name, subfolder="vae", use_safetensors=hf_safetensor, force_download=True, **model_opts).to("cpu")
model.forward = model.decode # This turns model forward signature to *args and **kwargs
inputs = torch.randn(1, 16, 128, 128, dtype=torch.float32, device="cpu")
B, H, W = torch.export.Dim("B"), torch.export.Dim("H"), torch.export.Dim("W")
dynamic_shapes = ({0:B, 2:H, 3:W},)
torch.export.export(
model,
(inputs,),
dynamic_shapes=dynamic_shapes,
strict=False
)
```
No matter what data structure I convert inputs or dynamic_shapes into, they mismatch.
A simple (if somewhat contrived) example looks like this:
```python
import torch
import torch.nn as nn
import torch.onnx
class AddModel(nn.Module):
def __init__(self):
super(AddModel, self).__init__()
def forward(self, x):
return torch.sigmoid(x)
class WrappedModel(nn.Module):
def __init__(self, model):
super(WrappedModel, self).__init__()
self.model = model
def forward(self, *arga, **kwargs):
return self.model(*arga, **kwargs)
# Instantiate the model
model = WrappedModel(AddModel())
# Set the model to evaluation mode
model.eval()
# Create dynamic input tensors
x = torch.randn(2, 3)
# Define dynamic axes for ONNX export
dynamic_shapes = ({0: torch.export.Dim.AUTO, 1: torch.export.Dim.AUTO},)
torch.export.export(
model,
(x,),
dynamic_shapes=dynamic_shapes,
strict=False
)
```
cc @chauhang @penguinwu @avikchaudhuri @gmagogsfm @zhxchen17 @tugsbayasgalan @angelayi @suo @ydwu4
| true
|
2,901,603,849
|
Fix calling torch.compile inside of a `__torch_dispatch__`
|
zou3519
|
open
|
[
"Stale",
"module: dynamo",
"ciflow/inductor",
"keep-going"
] | 3
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #148712
We should only bail out in the following situation:
```py
with torch_dispatch_mode():
    torch.compile(f)(x)
```
However, before this PR, we are also bailing out in the following situation:
```py
with torch_dispatch_mode():
    no_compile_f(x)
```
where the torch_dispatch_mode's __torch_dispatch__ calls torch.compile.
In order to only error out in the first situation, the check we should
be doing is "are there any non-infra modes on the current torch_dispatch
stack". That is what this PR adds.
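A hedged, self-contained sketch of that second situation: `torch.compile` is invoked from inside a mode's `__torch_dispatch__`, not under the mode itself. Per the description above, this pattern used to hit the bail-out before this PR:
```python
import torch
from torch.utils._python_dispatch import TorchDispatchMode

class LoggingMode(TorchDispatchMode):
    def __torch_dispatch__(self, func, types, args=(), kwargs=None):
        kwargs = kwargs or {}
        # torch.compile used *inside* the mode's handler, not under the mode.
        torch.compile(lambda t: t + 1)(torch.zeros(1))
        return func(*args, **kwargs)

with LoggingMode():
    torch.ones(2) + 1  # no torch.compile directly under the mode
```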
Test Plan:
- new test
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @chauhang @amjames
| true
|
2,901,578,447
|
Flex attention significantly slower than SDPA
|
nikonikolov
|
closed
|
[
"triaged",
"oncall: pt2",
"module: higher order operators",
"module: pt2-dispatcher",
"module: flex attention",
"module: sdpa"
] | 5
|
CONTRIBUTOR
|
I have been trying to use flex attention, but it seems to be significantly slower than SDPA. Reproduction script: https://gist.github.com/nikonikolov/4cf740b8f9268f4386a4394c7f663e12
With the script, the average time for a forward pass with flex attention is ~`704`, vs ~`319` with SDPA.
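The gist is not reproduced here; below is a minimal, hypothetical timing sketch of the same comparison (CUDA, PyTorch >= 2.5), not the author's benchmark:
```python
import torch
from torch.nn.attention.flex_attention import flex_attention
from torch.nn.functional import scaled_dot_product_attention as sdpa

q, k, v = (torch.randn(1, 8, 4096, 64, device="cuda", dtype=torch.float16)
           for _ in range(3))
flex = torch.compile(flex_attention)  # flex_attention is designed to be compiled

def bench(fn, iters=50):
    fn(q, k, v)  # warmup (and compile, for flex)
    torch.cuda.synchronize()
    start, end = (torch.cuda.Event(enable_timing=True) for _ in range(2))
    start.record()
    for _ in range(iters):
        fn(q, k, v)
    end.record()
    torch.cuda.synchronize()
    return start.elapsed_time(end) / iters  # ms per forward pass

print("sdpa:", bench(sdpa), "ms")
print("flex:", bench(flex), "ms")
```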
```
PyTorch version: 2.5.1+cu124
Is debug build: False
CUDA used to build PyTorch: 12.4
ROCM used to build PyTorch: N/A
OS: Ubuntu 22.04.4 LTS (x86_64)
GCC version: (Ubuntu 11.4.0-1ubuntu1~22.04) 11.4.0
Clang version: Could not collect
CMake version: Could not collect
Libc version: glibc-2.35
Python version: 3.10.13 (main, Oct 3 2023, 01:22:22) [Clang 17.0.1 ] (64-bit runtime)
Python platform: Linux-5.15.0-1048-oracle-x86_64-with-glibc2.35
Is CUDA available: True
CUDA runtime version: Could not collect
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration:
GPU 0: NVIDIA H100 80GB HBM3
GPU 1: NVIDIA H100 80GB HBM3
GPU 2: NVIDIA H100 80GB HBM3
GPU 3: NVIDIA H100 80GB HBM3
GPU 4: NVIDIA H100 80GB HBM3
GPU 5: NVIDIA H100 80GB HBM3
GPU 6: NVIDIA H100 80GB HBM3
GPU 7: NVIDIA H100 80GB HBM3
Nvidia driver version: 535.161.08
cuDNN version: Probably one of the following:
/usr/lib/x86_64-linux-gnu/libcudnn.so.8.9.7
/usr/lib/x86_64-linux-gnu/libcudnn_adv_infer.so.8.9.7
/usr/lib/x86_64-linux-gnu/libcudnn_adv_train.so.8.9.7
/usr/lib/x86_64-linux-gnu/libcudnn_cnn_infer.so.8.9.7
/usr/lib/x86_64-linux-gnu/libcudnn_cnn_train.so.8.9.7
/usr/lib/x86_64-linux-gnu/libcudnn_ops_infer.so.8.9.7
/usr/lib/x86_64-linux-gnu/libcudnn_ops_train.so.8.9.7
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 46 bits physical, 57 bits virtual
Byte Order: Little Endian
CPU(s): 224
On-line CPU(s) list: 0-223
Vendor ID: GenuineIntel
Model name: Intel(R) Xeon(R) Platinum 8480+
CPU family: 6
Model: 143
Thread(s) per core: 2
Core(s) per socket: 56
Socket(s): 2
Stepping: 8
CPU max MHz: 3800.0000
CPU min MHz: 800.0000
BogoMIPS: 4000.00
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc art arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc cpuid aperfmperf tsc_known_freq pni pclmulqdq dtes64 monitor ds_cpl vmx smx est tm2 ssse3 sdbg fma cx16 xtpr pdcm pcid dca sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm 3dnowprefetch cpuid_fault epb cat_l3 cat_l2 cdp_l3 invpcid_single intel_ppin cdp_l2 ssbd mba ibrs ibpb stibp ibrs_enhanced tpr_shadow vnmi flexpriority ept vpid ept_ad fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid cqm rdt_a avx512f avx512dq rdseed adx smap avx512ifma clflushopt clwb intel_pt avx512cd sha_ni avx512bw avx512vl xsaveopt xsavec xgetbv1 xsaves cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local split_lock_detect avx_vnni avx512_bf16 wbnoinvd dtherm ida arat pln pts hwp hwp_act_window hwp_epp hwp_pkg_req avx512vbmi umip pku ospke waitpkg avx512_vbmi2 gfni vaes vpclmulqdq avx512_vnni avx512_bitalg tme avx512_vpopcntdq la57 rdpid bus_lock_detect cldemote movdiri movdir64b enqcmd fsrm md_clear serialize tsxldtrk pconfig arch_lbr amx_bf16 avx512_fp16 amx_tile amx_int8 flush_l1d arch_capabilities
Virtualization: VT-x
L1d cache: 5.3 MiB (112 instances)
L1i cache: 3.5 MiB (112 instances)
L2 cache: 224 MiB (112 instances)
L3 cache: 210 MiB (2 instances)
NUMA node(s): 2
NUMA node0 CPU(s): 0-55,112-167
NUMA node1 CPU(s): 56-111,168-223
Vulnerability Gather data sampling: Not affected
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Not affected
Vulnerability Retbleed: Not affected
Vulnerability Spec rstack overflow: Not affected
Vulnerability Spec store bypass: Vulnerable
Vulnerability Spectre v1: Vulnerable: __user pointer sanitization and usercopy barriers only; no swapgs barriers
Vulnerability Spectre v2: Vulnerable, IBPB: disabled, STIBP: disabled, PBRSB-eIBRS: Vulnerable
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Versions of relevant libraries:
[pip3] flake8==6.1.0
[pip3] mypy==1.7.1
[pip3] mypy-extensions==1.0.0
[pip3] numpy==1.24.3
[pip3] nvidia-cublas-cu12==12.4.5.8
[pip3] nvidia-cuda-cupti-cu12==12.4.127
[pip3] nvidia-cuda-nvrtc-cu12==12.4.127
[pip3] nvidia-cuda-runtime-cu12==12.4.127
[pip3] nvidia-cudnn-cu12==9.1.0.70
[pip3] nvidia-cufft-cu12==11.2.1.3
[pip3] nvidia-curand-cu12==10.3.5.147
[pip3] nvidia-cusolver-cu12==11.6.1.9
[pip3] nvidia-cusparse-cu12==12.3.1.170
[pip3] nvidia-nccl-cu12==2.21.5
[pip3] nvidia-nvjitlink-cu12==12.4.127
[pip3] nvidia-nvtx-cu12==12.4.127
[pip3] torch==2.5.1+cu124
[pip3] torchvision==0.20.1+cu124
[pip3] triton==3.1.0
[conda] Could not collect
```
cc @chauhang @penguinwu @zou3519 @ydwu4 @bdhirsh @Chillee @drisspg @yanboliang @BoyuanFeng
| true
|
2,901,548,390
|
[Just SCRTCH] no review
|
laithsakka
|
open
|
[
"Stale",
"release notes: fx",
"fx",
"ciflow/inductor"
] | 2
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #148710
* #148430
cc @ezyang @SherlockNoMad @EikanWang @jgong5 @wenzhe-nrv
| true
|
2,901,547,585
|
fix lost input mutations with export_tracepoint
|
avikchaudhuri
|
closed
|
[
"Merged",
"ciflow/trunk",
"ciflow/inductor",
"release notes: export"
] | 4
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #148709
Preserving module call signatures in the presence of input mutation caused incorrect results. The root cause turned out to be that export tracepoints would unwrap/wrap functional args in a way that lost mutation info on those args.
Differential Revision: [D70734821](https://our.internmc.facebook.com/intern/diff/D70734821/)
| true
|
2,901,532,470
|
[DO NOT REVIEW, review 148124 instead] stable torch library draft
|
janeyx99
|
closed
|
[
"ciflow/trunk",
"release notes: cpp",
"ciflow/inductor"
] | 8
|
CONTRIBUTOR
|
importable copy of https://github.com/pytorch/pytorch/pull/148124
| true
|
2,901,531,624
|
[ONNX] Create documentation for ONNX verification tools
|
justinchuby
|
closed
|
[
"open source",
"release notes: onnx"
] | 2
|
COLLABORATOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #148707
* #148730
| true
|
2,901,531,534
|
[ONNX] Improve verify_onnx_program to use VerificationInterpreter
|
justinchuby
|
closed
|
[
"module: onnx",
"open source",
"Merged",
"ciflow/trunk",
"release notes: onnx",
"topic: not user facing"
] | 9
|
COLLABORATOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #148707
* __->__ #148706
I realized we can just extend `verify_onnx_program` to return intermediate values. There is no need for us to expose the VerificationInterpreter to users.
I added a `compare_intermediates` option to `verify_onnx_program`.
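A hedged usage sketch of the option described above; `model` and `example_input` are placeholders, and the import path `torch.onnx.verification.verify_onnx_program` is assumed:
```python
import torch
from torch.onnx.verification import verify_onnx_program  # path assumed

onnx_program = torch.onnx.export(model, (example_input,), dynamo=True)
# With compare_intermediates=True, intermediate values are compared as well,
# so users never need to touch the VerificationInterpreter directly.
infos = verify_onnx_program(onnx_program, compare_intermediates=True)
for info in infos:
    print(info)
```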
| true
|
2,901,515,590
|
Bump triton pin. Add aarch64 triton build
|
atalman
|
closed
|
[
"Merged",
"ciflow/trunk",
"topic: not user facing",
"module: inductor",
"ciflow/inductor",
"ciflow/rocm",
"ciflow/inductor-rocm"
] | 12
|
CONTRIBUTOR
|
1. Bumps pin for triton to release/3.3.x branch
2. Bump pin for triton-xpu
3. Remove ROCm xfail tests
4. Add aarch64 triton build:
* Depends on: https://github.com/pytorch/pytorch/pull/148768
* Fixes: https://github.com/pytorch/pytorch/issues/130558
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
| true
|
2,901,497,524
|
cleanup JK for duplicate pt2 compile callbacks prevention
|
burak-turk
|
closed
|
[
"fb-exported",
"Merged",
"ciflow/trunk",
"topic: not user facing",
"module: inductor",
"module: dynamo",
"ciflow/inductor"
] | 12
|
CONTRIBUTOR
|
Summary: This diff cleans up the JK we used for enabling the `add pt2 callbacks for backward pass and prevent duplicate callbacks` feature.
Differential Revision: D70643543
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
| true
|
2,901,476,303
|
replace usages of upload_graph in inductor with tlparse
|
bdhirsh
|
closed
|
[
"fb-exported",
"ciflow/trunk",
"module: inductor",
"ciflow/inductor",
"release notes: inductor"
] | 7
|
CONTRIBUTOR
|
Summary:
context here: https://fb.workplace.com/groups/1286739428954016/posts/1447200899574534/?comment_id=1447204456240845
we shouldn't be uploading graphs during compilation, which can be slow. Instead, we should rely on tlparse everywhere we need to dump intermediate artifacts to disk during compilation, so they can be pulled from disk offline later.
Test Plan: CI
Differential Revision: D70731871
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
| true
|
2,901,467,764
|
[torch.export] Dynamic shapes disappear after run_decompositions(decomp_table=None)
|
titaiwangms
|
closed
|
[
"triaged",
"oncall: pt2",
"export-triaged",
"oncall: export"
] | 11
|
COLLABORATOR
|
### 🐛 Describe the bug
The exported program does not keep sym_size_int after decompositions.
```python
import torchvision
import torchaudio
import torch
# define a pytorch model
class SpecMaker(torch.nn.Module):
def __init__(self):
super().__init__()
self.transforms = torchvision.transforms.Compose(
[
torchaudio.transforms.Spectrogram(
n_fft=512,
win_length=512,
hop_length=256,
),
torchaudio.transforms.AmplitudeToDB(top_db=100),
]
)
def forward(self, x):
return self.transforms(x)
specmodel = SpecMaker()
input = torch.rand(32000 * 10)
spec = specmodel(input)
input_batch = torch.stack([input, input])
spec_batch = specmodel(input_batch) # just testing pytorch model works as expected
assert spec_batch.shape== torch.Size([2, 257, 1251])
exported_program = torch.export.export(
specmodel,
(input_batch,),
dynamic_shapes=({0: torch.export.Dim.AUTO},),
strict=False
)
print(f" before decompose: {exported_program}")
# before decompose: ExportedProgram:
# class GraphModule(torch.nn.Module):
# def forward(self, c_lifted_tensor_0: "f32[512]", x: "f32[s0, 320000]"):
# #
# sym_size_int_2: "Sym(s0)" = torch.ops.aten.sym_size.int(x, 0)
# # File: /home/titaiwang/audio/src/torchaudio/transforms/_transforms.py:110 in forward, code: return F.spectrogram(
# reshape: "f32[s0, 320000]" = torch.ops.aten.reshape.default(x, [-1, 320000]); x = None
# view: "f32[1, s0, 320000]" = torch.ops.aten.view.default(reshape, [1, sym_size_int_2, 320000]); reshape = None
# pad: "f32[1, s0, 320512]" = torch.ops.aten.pad.default(view, [256, 256], 'reflect'); view = None
# view_1: "f32[s0, 320512]" = torch.ops.aten.view.default(pad, [sym_size_int_2, 320512]); pad = None
# stft: "c64[s0, 257, 1251]" = torch.ops.aten.stft.default(view_1, 512, 256, 512, c_lifted_tensor_0, False, True, True); view_1 = c_lifted_tensor_0 = None
# reshape_1: "c64[s0, 257, 1251]" = torch.ops.aten.reshape.default(stft, [sym_size_int_2, 257, 1251]); stft = None
# abs_1: "f32[s0, 257, 1251]" = torch.ops.aten.abs.default(reshape_1); reshape_1 = None
# pow_1: "f32[s0, 257, 1251]" = torch.ops.aten.pow.Tensor_Scalar(abs_1, 2.0); abs_1 = None
# # File: /home/titaiwang/audio/src/torchaudio/transforms/_transforms.py:345 in forward, code: return F.amplitude_to_DB(x, self.multiplier, self.amin, self.db_multiplier, self.top_db)
# clamp: "f32[s0, 257, 1251]" = torch.ops.aten.clamp.default(pow_1, 1e-10); pow_1 = None
# log10: "f32[s0, 257, 1251]" = torch.ops.aten.log10.default(clamp); clamp = None
# mul: "f32[s0, 257, 1251]" = torch.ops.aten.mul.Tensor(log10, 10.0); log10 = None
# sub_: "f32[s0, 257, 1251]" = torch.ops.aten.sub_.Tensor(mul, 0.0); mul = None
# reshape_2: "f32[1, s0, 257, 1251]" = torch.ops.aten.reshape.default(sub_, [-1, sym_size_int_2, 257, 1251]); sub_ = None
# amax: "f32[1]" = torch.ops.aten.amax.default(reshape_2, [-3, -2, -1])
# sub: "f32[1]" = torch.ops.aten.sub.Tensor(amax, 100); amax = None
# view_2: "f32[1, 1, 1, 1]" = torch.ops.aten.view.default(sub, [-1, 1, 1, 1]); sub = None
# max_1: "f32[1, s0, 257, 1251]" = torch.ops.aten.max.other(reshape_2, view_2); reshape_2 = view_2 = None
# reshape_3: "f32[s0, 257, 1251]" = torch.ops.aten.reshape.default(max_1, [sym_size_int_2, 257, 1251]); max_1 = sym_size_int_2 = None
# return (reshape_3,)
# Graph signature: ExportGraphSignature(input_specs=[InputSpec(kind=<InputKind.CONSTANT_TENSOR: 4>, arg=TensorArgument(name='c_lifted_tensor_0'), target='lifted_tensor_0', persistent=None), InputSpec(kind=<InputKind.USER_INPUT: 1>, arg=TensorArgument(name='x'), target=None, persistent=None)], output_specs=[OutputSpec(kind=<OutputKind.USER_OUTPUT: 1>, arg=TensorArgument(name='reshape_3'), target=None)])
# Range constraints: {s0: VR[2, int_oo]}
exported_program = exported_program.run_decompositions(decomp_table=None)
print(f" after decompose: {exported_program}")
# after decompose: ExportedProgram:
# class GraphModule(torch.nn.Module):
# def forward(self, c_lifted_tensor_0: "f32[512]", x: "f32[2, 320000]"):
# # File: /home/titaiwang/audio/src/torchaudio/transforms/_transforms.py:110 in forward, code: return F.spectrogram(
# view: "f32[2, 320000]" = torch.ops.aten.view.default(x, [-1, 320000]); x = None
# view_1: "f32[1, 2, 320000]" = torch.ops.aten.view.default(view, [1, 2, 320000]); view = None
# arange: "i64[320512]" = torch.ops.aten.arange.start_step(-256, 320256, layout = torch.strided, device = device(type='cpu'), pin_memory = False)
# abs_1: "i64[320512]" = torch.ops.aten.abs.default(arange); arange = None
# sub: "i64[320512]" = torch.ops.aten.sub.Tensor(319999, abs_1); abs_1 = None
# abs_2: "i64[320512]" = torch.ops.aten.abs.default(sub); sub = None
# sub_1: "i64[320512]" = torch.ops.aten.sub.Tensor(319999, abs_2); abs_2 = None
# index: "f32[1, 2, 320512]" = torch.ops.aten.index.Tensor(view_1, [None, None, sub_1]); view_1 = sub_1 = None
# view_2: "f32[2, 320512]" = torch.ops.aten.view.default(index, [2, 320512]); index = None
# unfold: "f32[2, 1251, 512]" = torch.ops.aten.unfold.default(view_2, -1, 512, 256); view_2 = None
# mul: "f32[2, 1251, 512]" = torch.ops.aten.mul.Tensor(unfold, c_lifted_tensor_0); unfold = c_lifted_tensor_0 = None
# _fft_r2c: "c64[2, 1251, 257]" = torch.ops.aten._fft_r2c.default(mul, [2], 0, True); mul = None
# permute: "c64[2, 257, 1251]" = torch.ops.aten.permute.default(_fft_r2c, [0, 2, 1]); _fft_r2c = None
# view_3: "c64[2, 257, 1251]" = torch.ops.aten.view.default(permute, [2, 257, 1251]); permute = None
# abs_3: "f32[2, 257, 1251]" = torch.ops.aten.abs.default(view_3); view_3 = None
# pow_1: "f32[2, 257, 1251]" = torch.ops.aten.pow.Tensor_Scalar(abs_3, 2.0); abs_3 = None
# # File: /home/titaiwang/audio/src/torchaudio/transforms/_transforms.py:345 in forward, code: return F.amplitude_to_DB(x, self.multiplier, self.amin, self.db_multiplier, self.top_db)
# clamp: "f32[2, 257, 1251]" = torch.ops.aten.clamp.default(pow_1, 1e-10); pow_1 = None
# log10: "f32[2, 257, 1251]" = torch.ops.aten.log10.default(clamp); clamp = None
# mul_1: "f32[2, 257, 1251]" = torch.ops.aten.mul.Tensor(log10, 10.0); log10 = None
# sub_2: "f32[2, 257, 1251]" = torch.ops.aten.sub.Tensor(mul_1, 0.0); mul_1 = None
# view_5: "f32[1, 2, 257, 1251]" = torch.ops.aten.view.default(sub_2, [1, 2, 257, 1251]); sub_2 = None
# amax: "f32[1]" = torch.ops.aten.amax.default(view_5, [-3, -2, -1])
# sub_3: "f32[1]" = torch.ops.aten.sub.Tensor(amax, 100); amax = None
# view_6: "f32[1, 1, 1, 1]" = torch.ops.aten.view.default(sub_3, [-1, 1, 1, 1]); sub_3 = None
# maximum: "f32[1, 2, 257, 1251]" = torch.ops.aten.maximum.default(view_5, view_6); view_5 = view_6 = None
# view_7: "f32[2, 257, 1251]" = torch.ops.aten.view.default(maximum, [2, 257, 1251]); maximum = None
# return (view_7,)
# Graph signature: ExportGraphSignature(input_specs=[InputSpec(kind=<InputKind.CONSTANT_TENSOR: 4>, arg=TensorArgument(name='c_lifted_tensor_0'), target='lifted_tensor_0', persistent=None), InputSpec(kind=<InputKind.USER_INPUT: 1>, arg=TensorArgument(name='x'), target=None, persistent=None)], output_specs=[OutputSpec(kind=<OutputKind.USER_OUTPUT: 1>, arg=TensorArgument(name='view_7'), target=None)])
# Range constraints: {}
```
### Versions
PyTorch version: 2.7.0a0+git38479e4
Is debug build: False
CUDA used to build PyTorch: None
ROCM used to build PyTorch: N/A
OS: CBL-Mariner/Linux (x86_64)
GCC version: (GCC) 11.2.0
Clang version: 12.0.1
CMake version: version 3.26.4
Libc version: glibc-2.35
Python version: 3.11.5 (main, Sep 11 2023, 13:54:46) [GCC 11.2.0] (64-bit runtime)
Python platform: Linux-5.15.173.1-2.cm2-x86_64-with-glibc2.35
Is CUDA available: False
CUDA runtime version: No CUDA
CUDA_MODULE_LOADING set to: N/A
GPU models and configuration: No CUDA
Nvidia driver version: No CUDA
cuDNN version: No CUDA
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 46 bits physical, 48 bits virtual
Byte Order: Little Endian
CPU(s): 32
On-line CPU(s) list: 0-31
Vendor ID: GenuineIntel
Model name: Intel(R) Xeon(R) Platinum 8272CL CPU @ 2.60GHz
CPU family: 6
Model: 85
Thread(s) per core: 2
Core(s) per socket: 16
Socket(s): 1
Stepping: 7
BogoMIPS: 5187.81
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ss ht syscall nx pdpe1gb rdtscp lm constant_tsc rep_good nopl xtopology cpuid tsc_known_freq pni pclmulqdq vmx ssse3 fma cx16 pcid sse4_1 sse4_2 movbe popcnt aes xsave avx f16c rdrand hypervisor lahf_lm abm 3dnowprefetch invpcid_single pti tpr_shadow vnmi ept vpid fsgsbase bmi1 hle avx2 smep bmi2 erms invpcid rtm avx512f avx512dq rdseed adx smap clflushopt avx512cd avx512bw avx512vl xsaveopt xsavec xsaves md_clear
Virtualization: VT-x
Hypervisor vendor: Microsoft
Virtualization type: full
L1d cache: 512 KiB (16 instances)
L1i cache: 512 KiB (16 instances)
L2 cache: 16 MiB (16 instances)
L3 cache: 35.8 MiB (1 instance)
NUMA node(s): 1
NUMA node0 CPU(s): 0-31
Vulnerability Gather data sampling: Unknown: Dependent on hypervisor status
Vulnerability Itlb multihit: KVM: Mitigation: VMX disabled
Vulnerability L1tf: Mitigation; PTE Inversion; VMX conditional cache flushes, SMT vulnerable
Vulnerability Mds: Mitigation; Clear CPU buffers; SMT Host state unknown
Vulnerability Meltdown: Mitigation; PTI
Vulnerability Mmio stale data: Vulnerable: Clear CPU buffers attempted, no microcode; SMT Host state unknown
Vulnerability Reg file data sampling: Not affected
Vulnerability Retbleed: Vulnerable
Vulnerability Spec rstack overflow: Not affected
Vulnerability Spec store bypass: Vulnerable
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Retpolines; STIBP disabled; RSB filling; PBRSB-eIBRS Not affected; BHI Retpoline
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Mitigation; Clear CPU buffers; SMT Host state unknown
Versions of relevant libraries:
[pip3] botorch==0.12.0
[pip3] flake8==6.1.0
[pip3] flake8-bugbear==23.3.23
[pip3] flake8-coding==1.3.3
[pip3] flake8-comprehensions==3.15.0
[pip3] flake8-executable==2.1.3
[pip3] flake8-logging-format==0.9.0
[pip3] flake8-pyi==23.3.1
[pip3] flake8-simplify==0.19.3
[pip3] gpytorch==1.13
[pip3] model-explorer-onnx==0.3.1
[pip3] mypy==1.14.0
[pip3] mypy-extensions==1.0.0
[pip3] numpy==1.26.4
[pip3] onnx==1.17.0
[pip3] onnxruntime==1.21.0
[pip3] onnxruntime-genai==0.5.2
[pip3] onnxscript==0.1.0.dev20241216
[pip3] optree==0.13.0
[pip3] pytorch-lightning==2.5.0.post0
[pip3] pytorch_sphinx_theme==0.0.24
[pip3] pytorch-triton==3.1.0+cf34004b8a
[pip3] torch==2.7.0a0+git38479e4
[pip3] torch-geometric==2.6.1
[pip3] torchao==0.5.0
[pip3] torchaudio==2.6.0a0+c670ad8
[pip3] torchdata==0.10.1
[pip3] torchmetrics==1.0.3
[pip3] torchmultimodal-nightly==2024.4.1
[pip3] torchrec==1.0.0
[pip3] torchrl==0.6.0
[pip3] torchvision==0.22.0a0+867521e
[pip3] torchx==0.7.0
[pip3] triton==3.1.0
[conda] botorch 0.12.0 pypi_0 pypi
[conda] gpytorch 1.13 pypi_0 pypi
[conda] mkl 2023.1.0 h213fc3f_46344
[conda] mkl-include 2023.1.0 h06a4308_46344
[conda] mkl-service 2.4.0 py311h5eee18b_1
[conda] numpy 1.26.4 pypi_0 pypi
[conda] optree 0.13.0 pypi_0 pypi
[conda] pytorch-lightning 2.5.0.post0 pypi_0 pypi
[conda] pytorch-sphinx-theme 0.0.24 pypi_0 pypi
[conda] pytorch-triton 3.1.0+cf34004b8a pypi_0 pypi
[conda] torch 2.7.0a0+git38479e4 dev_0 <develop>
[conda] torch-geometric 2.6.1 pypi_0 pypi
[conda] torchao 0.5.0 pypi_0 pypi
[conda] torchaudio 2.6.0a0+c670ad8 dev_0 <develop>
[conda] torchdata 0.10.1 pypi_0 pypi
[conda] torchfix 0.4.0 pypi_0 pypi
[conda] torchmetrics 1.0.3 pypi_0 pypi
[conda] torchmultimodal-nightly 2024.4.1 pypi_0 pypi
[conda] torchrec 1.0.0 pypi_0 pypi
[conda] torchrl 0.6.0 pypi_0 pypi
[conda] torchvision 0.22.0a0+867521e dev_0 <develop>
[conda] torchx 0.7.0 pypi_0 pypi
[conda] triton 3.1.0 pypi_0 pypi
cc @chauhang @penguinwu @avikchaudhuri @gmagogsfm @zhxchen17 @tugsbayasgalan @angelayi @suo @ydwu4
| true
|
2,901,426,451
|
aot_eager produces wrong output with all_gather_tensor_autograd
|
eellison
|
open
|
[
"high priority",
"oncall: distributed",
"triaged",
"actionable",
"module: correctness (silent)",
"oncall: pt2",
"module: pt2-dispatcher"
] | 2
|
CONTRIBUTOR
|
### 🐛 Describe the bug
See repro here:
```
"""
torchrun --nproc-per-node 2 /home/dhaziza/work/scripts/aot_bug.py
"""
import torch
import torch.distributed as dist
import torch.distributed._functional_collectives as ft_c
def model(x):
x = ft_c.all_gather_tensor_autograd(x, gather_dim=0, group=dist.group.WORLD)
x = ft_c.wait_tensor(x)
return x
dist.init_process_group(backend="nccl", store=dist.TCPStore("127.0.0.1", 38173, is_master=True), world_size=1, rank=0)
# dist.init_process_group(backend="nccl", init_method="env://") # < for use with torchrun
rank = dist.get_rank()
torch.cuda.set_device(rank)
x = torch.randn([16], device="cuda", requires_grad=True)
gy = torch.ones([16 * dist.get_world_size()], device="cuda")
model(x).backward(gy)
print("ref", x.grad)
x.grad = None
torch.compile(model, backend="aot_eager")(x).backward(gy)
print("compiled", x.grad)
```
cc @ezyang @gchanan @zou3519 @kadeng @msaroufim @H-Huang @awgu @kwen2501 @wanchaol @fegin @fduwjj @wz337 @wconstab @d4l3k @c-p-i-o @chauhang @penguinwu @bdhirsh
There is a workaround using `is_compiling`. Nonetheless, I think this is high priority because we rely on ablations of `eager`/`aot_eager`/`aot_eager_decomp_partition`/`inductor` for investigating bugs.
### Versions
master
| true
|
2,901,406,012
|
Add cpp wrapper skip to cudagraph logs
|
eellison
|
closed
|
[
"Merged",
"ciflow/trunk",
"topic: not user facing",
"module: inductor",
"ciflow/inductor"
] | 3
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #148700
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
| true
|
2,901,403,626
|
CUDA 12.6 Inductor perf test failures
|
atalman
|
open
|
[
"high priority",
"module: cuda",
"module: ci",
"triaged"
] | 6
|
CONTRIBUTOR
|
We are working on a PR to move our Inductor CI from CUDA 12.4 to 12.6.
This is the pull request:
https://github.com/pytorch/pytorch/pull/148612/
We see 2 test failures and 1 pass:
fail_accuracy:
timm_efficientnet,fail_accuracy,7
crossvit_9_240,fail_accuracy,7
pass:
tinynet_a,pass,6
Please note: the pass looks like an improvement; it was turned off with this PR: https://github.com/pytorch/pytorch/pull/123475
Workflows:
https://github.com/pytorch/pytorch/pull/148612/commits/29ec07bdd63545ec7f38970f4e48bf72be515f8e#diff-edec9c9b528f9ee5f7113d88ce1e776897a63c5ac0260c55926a570a62a0a36b
and
https://github.com/pytorch/pytorch/actions/runs/13693492178/job/38295168222
### Versions
2.7.0 nightly
cc @ptrblck @msaroufim @eqy @seemethere @malfet @pytorch/pytorch-dev-infra @ezyang @gchanan @zou3519 @kadeng @chauhang @penguinwu
| true
|
2,901,399,352
|
Tell dmypy to ignore bad package
|
aorenste
|
closed
|
[
"topic: not user facing"
] | 1
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #148698
| true
|
2,901,376,234
|
Remove redundant moves in kernel.cpp
|
justinchuby
|
closed
|
[
"oncall: jit",
"open source",
"NNC",
"release notes: jit",
"topic: not user facing"
] | 2
|
COLLABORATOR
|
During compilation, gcc suggests:
> ```
> pytorch/torch/csrc/jit/tensorexpr/kernel.cpp: In member function ‘torch::jit::tensorexpr::ExprHandle torch::jit::tensorexpr::TensorExprKernel::getVarForShape(const c10::ShapeSymbol&)’:
> pytorch/torch/csrc/jit/tensorexpr/kernel.cpp:486:21: warning: redundant move in return statement [-Wredundant-move]
> 486 | return std::move(var);
> | ~~~~~~~~~^~~~~
> pytorch/torch/csrc/jit/tensorexpr/kernel.cpp:486:21: note: remove ‘std::move’ call
> pytorch/torch/csrc/jit/tensorexpr/kernel.cpp: In member function ‘torch::jit::tensorexpr::ExprHandle torch::jit::tensorexpr::TensorExprKernel::getStrideArg(size_t, size_t)’:
> pytorch/torch/csrc/jit/tensorexpr/kernel.cpp:1023:21: warning: redundant move in return statement [-Wredundant-move]
> 1023 | return std::move(var);
> | ~~~~~~~~~^~~~~
> pytorch/torch/csrc/jit/tensorexpr/kernel.cpp:1023:21: note: remove ‘std::move’ call
> ```
So making these changes.
cc @EikanWang @jgong5 @wenzhe-nrv @sanchitintel
| true
|
2,901,360,190
|
[dynamic shapes] add backed_size_oblivious option
|
pianpwk
|
closed
|
[
"Merged",
"ciflow/trunk",
"release notes: fx",
"fx",
"ciflow/inductor"
] | 6
|
CONTRIBUTOR
|
Adds the option `torch.fx.experimental._config.backed_size_oblivious = True` to allocate `[0, inf]` instead of `[2, inf]` ranges for size-backed symbols, opting into size-oblivious semantics for them (a usage sketch follows the list below).
Helps in a number of cases, like:
- Keeps `[0, inf]` bounds for unbacked symbols when we make an unbacked -> backed replacement
- More sound handling for 0/1 inputs at runtime when we lower from export
- Avoids ends-of-bounds, sys.maxsize constraint violations for exporting with named Dims (https://github.com/pytorch/pytorch/issues/146315, https://github.com/pytorch/pytorch/issues/146046)
May look towards turning this on globally for export.
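A minimal usage sketch of the flag described above; the flag name is taken from the description, and the model and shapes are arbitrary:
```python
import torch
import torch.fx.experimental._config as fx_config

fx_config.backed_size_oblivious = True  # backed size symbols get [0, inf] ranges

class M(torch.nn.Module):
    def forward(self, x):
        return x + 1

ep = torch.export.export(
    M(),
    (torch.randn(4, 8),),
    dynamic_shapes=({0: torch.export.Dim.AUTO},),
)
print(ep.range_constraints)
```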
cc @ezyang @SherlockNoMad @EikanWang @jgong5 @wenzhe-nrv
| true
|
2,901,354,580
|
dynamo fakification errors with opaquetensorimpl
|
j4orz
|
open
|
[
"module: autograd",
"triaged",
"oncall: pt2",
"module: fakeTensor",
"module: pt2-dispatcher"
] | 2
|
NONE
|
Hacking away on the PyTorch backend for tinygrad with the guidance of @albanD, and one of the early designs being explored is wrapping a tinygrad tensor in an `OpaqueTensorImpl` with `privateuse1` dispatch [0] [1], even though the former is intended for much older integrations (XLATensor) with no storage or strides.
Because dynamo's `builder.py` uses "fakification" [2] for view manipulation, calling `torch.compile(model, backend="tiny")` after `@register_backend` errors out in the fake/meta tensor code [3] when it attempts to access the opaque tensor's storage: `NotImplementedError: Cannot access storage of OpaqueTensorImpl`.
Although alternative designs are being explored (c10::Allocator, pure-Python device registration), is there a world where we can avoid executing that conditional arm in `meta_utils.py` for opaque tensors, so that `storage` remains uninitialized?
[0]: https://github.com/tinygrad/tinygrad/blob/master/extra/torch_backend/backend.py
[1]: https://github.com/tinygrad/tinygrad/blob/master/extra/torch_backend/wrapped_tensor.cpp
[2]: https://github.com/pytorch/pytorch/blob/main/torch/fx/experimental/symbolic_shapes.py#L1824-L1845
[3]: https://github.com/pytorch/pytorch/blob/main/torch/_subclasses/meta_utils.py#L288
cc @ezyang @albanD @gqchen @pearu @nikitaved @soulitzer @Varal7 @xmfan @chauhang @penguinwu @eellison @zou3519 @bdhirsh
| true
|
2,901,285,556
|
[ca] use torch.compile ca API for benchmarks and fix API to allow specifying dynamic via configs instead of torch.compile kwargs
|
xmfan
|
open
|
[
"module: dynamo",
"ciflow/inductor"
] | 2
|
MEMBER
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #148516
* #149420
* #149367
* __->__ #148694
* #149229
* #149336
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @chauhang @amjames
| true
|
2,901,207,934
|
[logging] Set compile_id in the CachingAutotuner during compilation so we have it for dynamo_timed logging
|
masnesral
|
closed
|
[
"Merged",
"Reverted",
"ciflow/trunk",
"topic: not user facing",
"module: inductor",
"module: dynamo",
"ciflow/inductor",
"ci-no-td"
] | 8
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #148693
Summary: This is a simpler alternative to https://github.com/pytorch/pytorch/pull/146455, where we can stick the compileId (and forward/backward bool) in the CachingAutotuner so that we have it for logging `benchmark_all_configs`. Recall that the first attempt put the compileId in the inductor_meta and that interfered with caching.
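As a rough illustration only (the attribute and method names below are assumptions, not the actual fields added in this PR), the idea is to record the compile context on the autotuner object itself rather than in `inductor_meta`, so it never enters a cache key:
```python
class CachingAutotunerSketch:
    """Toy stand-in for the real CachingAutotuner."""

    def __init__(self):
        self.compile_id = None
        self.is_backward = False

    def set_compile_context(self, compile_id, is_backward):
        # Called at compile time; not serialized into inductor_meta.
        self.compile_id = compile_id
        self.is_backward = is_backward

    def benchmark_all_configs(self):
        # dynamo_timed-style logging can now attach the compile id.
        print(f"benchmarking configs for compile_id={self.compile_id} "
              f"(backward={self.is_backward})")

tuner = CachingAutotunerSketch()
tuner.set_compile_context("0/1", is_backward=False)
tuner.benchmark_all_configs()
```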
Test Plan:
`python benchmarks/dynamo/torchbench.py --performance --training --amp --backend inductor --device cuda --print-compilation-time --repeat 5 --cold-start-latency --only nanogpt`
* tlparse: https://fburl.com/e71yn6uc
* dynamo_compile: https://fburl.com/scuba/dynamo_compile/sandbox/4ageghhv
* pt2_compile_events: https://fburl.com/scuba/pt2_compile_events/4fgv1itq
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
| true
|
2,901,184,195
|
[RFC][cutlass backend] Reduce precompile error to log.info level
|
henrylhtsang
|
closed
|
[
"fb-exported",
"topic: not user facing",
"module: inductor",
"ciflow/inductor"
] | 2
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #148692
Differential Revision: [D70719832](https://our.internmc.facebook.com/intern/diff/D70719832/)
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
| true
|
2,901,144,918
|
codecache.py: use str.format rather than % formatting
|
benjaminglass1
|
closed
|
[
"open source",
"Merged",
"ciflow/trunk",
"topic: not user facing",
"module: inductor",
"ciflow/inductor"
] | 5
|
COLLABORATOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #148691
Additionally, swaps over a fixed length `std::vector` used by `cpp_wrapper` for a `std::array`.
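For context, the style change is of this flavor (illustrative strings only, not the actual codecache.py lines):
```python
name, size = "kernel_0", 128
old = "launching %s with %d bytes of shared memory" % (name, size)
new = "launching {} with {} bytes of shared memory".format(name, size)
assert old == new
```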
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
| true
|
2,901,128,158
|
[RFC] Make Functional Collectives Differentiable
|
wconstab
|
closed
|
[
"oncall: distributed",
"triaged"
] | 3
|
CONTRIBUTOR
|
Currently, functional collectives are by default not differentiable, which can lead to surprises.
@d4l3k added support for differentiable functional collectives in a stack of PRs beginning with https://github.com/pytorch/pytorch/pull/123599. For now, these are 'separate' so users have to opt into them. Furthermore, there was a lot of discussion on #123599 about whether it is desirable to have one autograd formula per collective, given that may not be the optimal choice depending on the replication or sharding of the input to the collective.
The purpose of this RFC is to align on the desired behavior for functional collectives. I'd like to get input on these questions.
1) are there correctness or performance issues with the default autograd formulas? Let's list them out and see whether they merit keeping _grad variants separate
2) Should we make the normal functional collectives support autograd by default (merge the _grad variants into the non _grad ones)?
3) Should DTensor use the no-grad or _grad variants of the functional collectives? When using DTensor for tracing collectives into graphs, it can be nice to trace the DTensor away but keep the .grad so compile can produce a backward graph.
cc @H-Huang @awgu @kwen2501 @wanchaol @fegin @fduwjj @wz337 @d4l3k @c-p-i-o @fmassa @lw @bdhirsh
| true
|
2,901,102,071
|
NUMA Binding Integration with torchrun
|
raghavhrishi
|
open
|
[
"oncall: distributed",
"triaged"
] | 14
|
NONE
|
### 🚀 The feature, motivation and pitch
The feature request involves adding NUMA (Non-Uniform Memory Access) binding capabilities to torchrun as an option to optimize distributed training performance. This feature will automatically manage process-to-CPU core binding based on GPU-CPU topology, improving resource utilization and training efficiency.
**Proposed Implementation**
Add a new command-line flag `--numa_binding` to torchrun that supports four binding strategies that would perform NUMA Binding of the rank processes launched by torchrun:
`torchrun --numa_binding <binding_strategy> --nproc_per_node 8 main.py`
**Binding Strategies**
- node: Processes are bound to cpus within a NUMA node.
- socket: Processes are bound to cpus within a socket.
- exclusive: Processes are bound to exclusive sets of cpus within a NUMA node.
- core-complex: Processes are bound to cpus in a core-complex.
An example for illustration purpose:
| Socket   | NUMA Nodes     | Cores per NUMA Node | Core Complexes (CCX) per NUMA Node |
|----------|----------------|---------------------|------------------------------------|
| Socket 0 | NUMA Node 0, 1 | 8 cores each        | 2 CCXs each                        |
| Socket 1 | NUMA Node 2, 3 | 8 cores each        | 2 CCXs each                        |
**Socket:** Each rank is bound to all the NUMA Nodes part of the socket of its affined NUMA Node. For example:
If rank 0 is affined to NUMA Node 0, it would be bound to NUMA Nodes 0 & 1 as they are part of the same socket.
**Node:** Each rank is bound to its affined NUMA node. For example, if Rank 0 is affined to NUMA Node 0, it would use the following commands to bind to the cores and memory within that node:
`numactl --cpunodebind=0 --membind=0`
**Exclusive:** When multiple ranks are affined to the same NUMA node, the cores are divided exclusively between them to avoid overlap. For example:
If Rank 0 and Rank 1 are both affined to NUMA Node 0, the cores would be split as follows:
Rank 0:
`numactl --physcpubind=0-3 --membind=0`
Rank 1:
`numactl --physcpubind=4-7 --membind=0`
**Core-Complex:** If multiple ranks are affined to a NUMA node, they can be bound exclusively to all the cores within a single L3 cache (Core Complex). For example:
If Rank 0 and Rank 1 are both affined to NUMA Node 0, each rank would get exclusive access to one core-complex (4 cores per CCX).
Rank 0:
`numactl --physcpubind=0-3 --membind=0`
Rank 1:
`numactl --physcpubind=4-7 --membind=0`
The exclusive and core-complex binding strategies ensure process isolation by preventing core sharing between ranks. These algorithms are different ways of achieving the binding and would give different results based on the workload and configuration.
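As a rough sketch (not the proposed torchrun implementation), the exclusive strategy boils down to slicing the cores of a NUMA node evenly across the ranks affined to it:
```python
def exclusive_binding(local_rank, ranks_per_numa_node, cores_per_numa_node, numa_node):
    # Split the node's cores into equal, non-overlapping slices per rank.
    cores_per_rank = cores_per_numa_node // ranks_per_numa_node
    slot = local_rank % ranks_per_numa_node
    start = numa_node * cores_per_numa_node + slot * cores_per_rank
    end = start + cores_per_rank - 1
    return f"numactl --physcpubind={start}-{end} --membind={numa_node}"

# Ranks 0 and 1 both affined to NUMA node 0, 8 cores per node:
print(exclusive_binding(0, 2, 8, 0))  # numactl --physcpubind=0-3 --membind=0
print(exclusive_binding(1, 2, 8, 0))  # numactl --physcpubind=4-7 --membind=0
```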
**Benefits**
- Enhanced training performance through optimized CPU-GPU communication
- Minimized cross-NUMA traffic
- Reduced memory access latency
- Simplified user experience - no topology knowledge required
**User Impact**
Users only need to specify their preferred binding strategy. The system automatically handles all topology detection and process binding, making NUMA optimization accessible without requiring deep system knowledge.
cc: @ptrblck @eqy @kwen2501 @arpitsardhana
### Alternatives
_No response_
### Additional context
Models featuring frequent data transfers between GPU and CPU (device-to-host and host-to-device) are expected to see the greatest performance gains from this feature. This is due to the optimized memory access patterns and reduced latency provided by the NUMA binding feature.
**Benchmark Data:**
Initial performance testing using nnU-Net Model as the benchmark:
- Achieved performance of 11-13% mean throughput improvement in some cases.
- Performance gains varied with scaling and chosen binding strategy.
cc @H-Huang @awgu @kwen2501 @wanchaol @fegin @fduwjj @wz337 @wconstab @d4l3k @c-p-i-o
| true
|
2,901,091,999
|
Fix _del_library
|
zou3519
|
closed
|
[] | 2
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #148104
* __->__ #148688
* #148092
* #148091
* #148063
* #148046
On library deletion, we need to clear fx's schema cache.
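A heavily simplified sketch of the idea (the memoized lookup below is a stand-in, not fx's actual schema cache):
```python
import functools

@functools.lru_cache(maxsize=None)
def schema_for(op_name: str) -> str:
    # Stand-in for fx's memoized schema lookup.
    return f"schema({op_name})"

def del_library_sketch() -> None:
    # On library deletion, stale cached schemas must be dropped so that a later
    # re-registration of the same op name is not resolved against old entries.
    schema_for.cache_clear()

schema_for("mylib::foo")
del_library_sketch()
```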
Test Plan:
- next PR up in the stack
| true
|
2,901,076,969
|
torch.onnx.export with torchaudio Spectrogram doesn't support dynamic batch size
|
sammlapp
|
closed
|
[
"module: onnx",
"triaged"
] | 11
|
NONE
|
### 🐛 Describe the bug
It is now possible to include torchaudio.transforms.Spectrogram in a model and successfully export the model to an onnx program. However, when loading the model I cannot use a batch size besides the one used in the saved model. I've tried several approaches. Here is an example based on https://github.com/pytorch/pytorch/issues/113067#issuecomment-2702153314
### create and save onnx model
```python
import torchvision
import torchaudio
import torch
# define a pytorch model
class SpecMaker(torch.nn.Module):
def __init__(self):
super().__init__()
self.transforms = torchvision.transforms.Compose(
[
torchaudio.transforms.Spectrogram(
n_fft=512,
win_length=512,
hop_length=256,
),
torchaudio.transforms.AmplitudeToDB(top_db=100),
]
)
def forward(self, x):
return self.transforms(x)
specmodel = SpecMaker()
input = torch.rand(32000 * 10)
spec = specmodel(input)
input_batch = torch.stack([input, input])
spec_batch = specmodel(input_batch) # just testing pytorch model works as expected
assert spec_batch.shape== torch.Size([2, 257, 1251])
onnx_program = torch.onnx.export(
specmodel,
(input_batch,),
dynamic_shapes=[{0: "dim_x"}],
report=True,
dynamo=True,
)
onnx_program.save("specmodel2.onnx")
```
### load onnx model and attempt to run with different batch size
```python
import onnx, onnxruntime
import torch
onnx_model = onnx.load("specmodel2.onnx")
onnx.checker.check_model(onnx_model)
input = torch.rand(32000 * 10)
# what if its batched?
input_batched = torch.stack([input, input, input]) #works if batch has 2 samples, fails with 3 samples
EP_list = ["CUDAExecutionProvider", "CPUExecutionProvider"]
ort_session = onnxruntime.InferenceSession("specmodel2.onnx", providers=EP_list)
def to_numpy(tensor):
return (
tensor.detach().cpu().numpy() if tensor.requires_grad else tensor.cpu().numpy()
)
# compute ONNX Runtime output prediction
ort_inputs = {ort_session.get_inputs()[0].name: to_numpy(input_batched)}
ort_outs = ort_session.run(None, ort_inputs)
```
Error:
```
---------------------------------------------------------------------------
RuntimeException Traceback (most recent call last)
Cell In[8], [line 14](vscode-notebook-cell:?execution_count=8&line=14)
[12](vscode-notebook-cell:?execution_count=8&line=12) # compute ONNX Runtime output prediction
[13](vscode-notebook-cell:?execution_count=8&line=13) ort_inputs = {ort_session.get_inputs()[0].name: to_numpy(input_batched)}
---> [14](vscode-notebook-cell:?execution_count=8&line=14) ort_outs = ort_session.run(None, ort_inputs)
File ~/miniconda3/envs/bmz_dev/lib/python3.10/site-packages/onnxruntime/capi/onnxruntime_inference_collection.py:266, in Session.run(self, output_names, input_feed, run_options)
[264](https://file+.vscode-resource.vscode-cdn.net/Users/SML161/nb_opso/ml/save_model/~/miniconda3/envs/bmz_dev/lib/python3.10/site-packages/onnxruntime/capi/onnxruntime_inference_collection.py:264) output_names = [output.name for output in self._outputs_meta]
[265](https://file+.vscode-resource.vscode-cdn.net/Users/SML161/nb_opso/ml/save_model/~/miniconda3/envs/bmz_dev/lib/python3.10/site-packages/onnxruntime/capi/onnxruntime_inference_collection.py:265) try:
--> [266](https://file+.vscode-resource.vscode-cdn.net/Users/SML161/nb_opso/ml/save_model/~/miniconda3/envs/bmz_dev/lib/python3.10/site-packages/onnxruntime/capi/onnxruntime_inference_collection.py:266) return self._sess.run(output_names, input_feed, run_options)
[267](https://file+.vscode-resource.vscode-cdn.net/Users/SML161/nb_opso/ml/save_model/~/miniconda3/envs/bmz_dev/lib/python3.10/site-packages/onnxruntime/capi/onnxruntime_inference_collection.py:267) except C.EPFail as err:
[268](https://file+.vscode-resource.vscode-cdn.net/Users/SML161/nb_opso/ml/save_model/~/miniconda3/envs/bmz_dev/lib/python3.10/site-packages/onnxruntime/capi/onnxruntime_inference_collection.py:268) if self._enable_fallback:
RuntimeException: [ONNXRuntimeError] : 6 : RUNTIME_EXCEPTION : Non-zero status code returned while running Reshape node. Name:'node_Reshape_5' Status Message: /Users/runner/work/1/s/onnxruntime/core/providers/cpu/tensor/reshape_helper.h:47 onnxruntime::ReshapeHelper::ReshapeHelper(const onnxruntime::TensorShape &, onnxruntime::TensorShapeVector &, bool) input_shape_size == size was false. The input tensor cannot be reshaped to the requested shape. Input shape:{3,320000}, requested shape:{1,2,320000}
```
Am I loading and using the onnx model incorrectly, or is this an issue with dynamic shape exporting when the model contains stft? (There are several related issues including https://github.com/pytorch/pytorch/issues/113067, https://github.com/pytorch/pytorch/issues/139246, and this PR https://github.com/pytorch/pytorch/pull/145080)
### Versions
Collecting environment information...
PyTorch version: 2.7.0.dev20250301
Is debug build: False
CUDA used to build PyTorch: None
ROCM used to build PyTorch: N/A
OS: macOS 15.1 (arm64)
GCC version: Could not collect
Clang version: 16.0.0 (clang-1600.0.26.4)
CMake version: Could not collect
Libc version: N/A
Python version: 3.10.14 (main, May 6 2024, 14:42:37) [Clang 14.0.6 ] (64-bit runtime)
Python platform: macOS-15.1-arm64-arm-64bit
Is CUDA available: False
CUDA runtime version: No CUDA
CUDA_MODULE_LOADING set to: N/A
GPU models and configuration: No CUDA
Nvidia driver version: No CUDA
cuDNN version: No CUDA
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Apple M1 Pro
Versions of relevant libraries:
[pip3] mypy-extensions==1.0.0
[pip3] numpy==1.26.4
[pip3] onnx==1.17.0
[pip3] onnxruntime==1.20.1
[pip3] onnxscript==0.2.1
[pip3] optree==0.14.0
[pip3] pytorch-lightning==2.4.0
[pip3] torch==2.7.0.dev20250301
[pip3] torch-audiomentations==0.11.0
[pip3] torch_pitch_shift==1.2.5
[pip3] torchaudio==2.6.0.dev20250301
[pip3] torchmetrics==1.2.0
[pip3] torchview==0.2.6
[pip3] torchvision==0.22.0.dev20250301
[conda] numpy 1.26.4 pypi_0 pypi
[conda] optree 0.14.0 pypi_0 pypi
[conda] pytorch-lightning 2.4.0 pypi_0 pypi
[conda] torch 2.7.0.dev20250301 pypi_0 pypi
[conda] torch-audiomentations 0.11.0 pypi_0 pypi
[conda] torch-pitch-shift 1.2.5 pypi_0 pypi
[conda] torchaudio 2.6.0.dev20250301 pypi_0 pypi
[conda] torchmetrics 1.2.0 pypi_0 pypi
[conda] torchview 0.2.6 pypi_0 pypi
[conda] torchvision 0.22.0.dev20250301 pypi_0 pypi
| true
|
2,901,025,795
|
[MPS] Fix scalar to tensors bitshifts
|
malfet
|
closed
|
[
"Merged",
"topic: bug fixes",
"release notes: mps",
"ciflow/mps"
] | 2
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #148719
* __->__ #148686
* #148685
By introducing a concept of non-commutative binary op and renaming all op templates from `bitwise_foo_tensor` and `bitwise_foo_scalar` to `bitwise_foo_tensor_tensor` and `bitwise_foo_tensor_scalar`
Add regression tests
Please note, that for some undefined values MPS and CPU behaviors are different, for example
```
>>> import torch
>>> 4095 >> torch.arange(12, device="mps", dtype=torch.uint8)
tensor([255, 255, 255, 255, 255, 127, 63, 31, 15, 7, 3, 1],
device='mps:0', dtype=torch.uint8)
>>> 4095 >> torch.arange(12, device="cpu", dtype=torch.uint8)
tensor([255, 127, 63, 31, 15, 7, 3, 1, 0, 0, 0, 0],
dtype=torch.uint8)
```
Because on CPU the scalar is cast to the output dtype before the operation is performed, while on MPS this happens after the op is done.
Fixes https://github.com/pytorch/pytorch/issues/147889
| true
|
2,901,025,686
|
[BE][MPS] Remove redundant `handle_tensor_scalar_binary_op`
|
malfet
|
closed
|
[
"Merged",
"topic: not user facing",
"release notes: mps",
"ciflow/mps"
] | 3
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #148686
* __->__ #148685
After https://github.com/pytorch/pytorch/pull/143934 `mtl_setBuffer` can handle scalar tensors correctly, so no need to have a specialized function here
| true
|
2,900,997,675
|
[triton 3.3] perf run, Mar 5
|
davidberard98
|
open
|
[
"ciflow/trunk",
"topic: not user facing",
"ciflow/inductor"
] | 2
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #148684
| true
|
2,900,989,147
|
[dynamo] Remove dead code path around `functools.partial` objects
|
StrongerXi
|
closed
|
[
"Merged",
"ciflow/trunk",
"topic: not user facing",
"module: dynamo",
"ciflow/inductor"
] | 4
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #148683
This removes the code paths added in #98120, which have since been superseded by #108846.
More importantly, it makes `EQUALS_MATCH`'s `ok_mutable_types` (added in #134016)
easier to reason about, i.e., no need to worry about `dict` types, which
was only needed for #98120.
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @chauhang @amjames
| true
|
2,900,917,171
|
can we make inductor create faster fusions for tiled reductions across dim0 and dim1?
|
vkuzo
|
open
|
[
"triaged",
"oncall: pt2",
"module: inductor"
] | 2
|
CONTRIBUTOR
|
### 🐛 Describe the bug
Can we make fusions of reductions across Mx1 and 1xM tiles fast in inductor? The key use case for this is scaling tensors to MX across both dim0 and dim1 at the same time, which is important for microscaling (MX) training. Here is an example snippet to demonstrate a simplified version of the pattern:
```python
import torch
from typing import Tuple

def scale_dim0_dim1_reference(x_hp: torch.Tensor, block_size) -> Tuple[torch.Tensor, torch.Tensor, torch.Tensor, torch.Tensor]:
# normalize across dim0
x_hp_d0_block = x_hp.reshape(-1, block_size)
x_hp_d0_block_abs = x_hp_d0_block.abs()
amax_dim0 = torch.amax(x_hp_d0_block_abs, dim=1).unsqueeze(1)
x_hp_d0_block_normalized = x_hp_d0_block / amax_dim0
x_hp_d0_normalized = x_hp_d0_block_normalized.reshape(x_hp.shape)
# normalize across dim1
x_hp_d1 = x_hp.t().contiguous()
x_hp_d1_block = x_hp_d1.reshape(-1, block_size)
x_hp_d1_block_abs = x_hp_d1_block.abs()
amax_dim1 = torch.amax(x_hp_d1_block_abs, dim=1).unsqueeze(1)
x_hp_d1_block_normalized = x_hp_d1_block / amax_dim1
x_hp_d1_normalized = x_hp_d1_block_normalized.reshape(x_hp_d1.shape)
return x_hp_d0_normalized, x_hp_d1_normalized.t(), amax_dim0, amax_dim1
```
In the code above, we
* start with a tensor and a block size (32 for MX)
* for dim0, partition the tensor into chunks of block_size, normalize by the max absolute value in each block, and write out a normalized tensor and the scales used for normalization
* for dim1, repeat ^
Note: in the "real" use case for MX (https://github.com/pytorch/ao/issues/1788), the differences are:
1. instead of "calculate max absolute value", we will do "calculate MX e8m0 scale"
2. instead of "write out a normalized tensor", we will do "write out a normalized low precision tensor"
3. we also need to swizzle the scales, but that can be done separately from this issue
When I run `torch.compile` on the above example kernel today, I see two kernels - one for each dim (example logs: https://gist.github.com/vkuzo/7bfd4e23411f22fc25f94323bcd93794)
Claude and I wrote a triton kernel to load the input data in tiles and do the normalization across dim0 and dim1 inline: https://gist.github.com/vkuzo/a7374b1f1f5eabff4a6d774972248c22 / https://github.com/vkuzo/pytorch_scripts/blob/6c26861f2a7d0d31930006b63e538d56026b8aba/mx_cast_poc/20250305_mx_dim0_dim1_cast.py). It seems to be up to 2x faster than the current torch.compile behavior with tile size 32, and up to 4x if we increase tile size to 128, so a faster kernel is definitely possible. Note that the triton kernel I linked here currently just normalizes across `tile_size`, it would need to be updated to normalize by `inner_tile_size` if `inner_tile_size != outer_tile_size`.
Output of comparison of torch.compile vs triton kernel across a couple of block sizes:
```bash
(pytorch) [vasiliy@devgpu023.atn1 ~/local/pytorch_scripts/mx_cast_poc (20250305_mx_max_dim0_dim1)]$ python 20250305_mx_dim0_dim1_cast.py --M 8192 -K 4096 --BLOCK_SIZE 32
M 8192 K 4096 BLOCK_SIZE 32
GPU: NVIDIA B200
torch version: 2.7.0a0+gitd518490
triton version: 3.2.0
bf16 vs normalized reference sqnrs: dim0 57.75, dim1 57.75
normalized reference vs normalized triton are bitwise equivalent
time_reference_compile_us 182.9869150943399
time_triton_us 94.03462108262123
speedup 1.9459525969011233
(pytorch) [vasiliy@devgpu023.atn1 ~/local/pytorch_scripts/mx_cast_poc (20250305_mx_max_dim0_dim1)]$ python 20250305_mx_dim0_dim1_cast.py --M 8192 -K 4096 --BLOCK_SIZE 64
M 8192 K 4096 BLOCK_SIZE 64
GPU: NVIDIA B200
torch version: 2.7.0a0+gitd518490
triton version: 3.2.0
bf16 vs normalized reference sqnrs: dim0 56.5, dim1 56.5
normalized reference vs normalized triton are bitwise equivalent
time_reference_compile_us 183.88731220657243
time_triton_us 53.66455661375654
speedup 3.42660638249705
(pytorch) [vasiliy@devgpu023.atn1 ~/local/pytorch_scripts/mx_cast_poc (20250305_mx_max_dim0_dim1)]$ python 20250305_mx_dim0_dim1_cast.py --M 8192 -K 4096 --BLOCK_SIZE 128
M 8192 K 4096 BLOCK_SIZE 128
GPU: NVIDIA B200
torch version: 2.7.0a0+gitd518490
triton version: 3.2.0
bf16 vs normalized reference sqnrs: dim0 56.0, dim1 56.0
normalized reference vs normalized triton are bitwise equivalent
time_reference_compile_us 312.7817773722634
time_triton_us 67.17439390386868
speedup 4.656264972332706
(pytorch) [vasiliy@devgpu023.atn1 ~/local/pytorch_scripts/mx_cast_poc (20250305_mx_max_dim0_dim1)]$ python 20250305_mx_dim0_dim1_cast.py --M 8192 -K 4096 --BLOCK_SIZE 256
M 8192 K 4096 BLOCK_SIZE 256
GPU: NVIDIA B200
torch version: 2.7.0a0+gitd518490
triton version: 3.2.0
bf16 vs normalized reference sqnrs: dim0 56.25, dim1 56.25
normalized reference vs normalized triton are bitwise equivalent
time_reference_compile_us 362.2346390041493
time_triton_us 1091.8661034482752
speedup 0.33175738111125397
```
Can we improve this in inductor?
### Versions
main branch
cc @chauhang @penguinwu @voznesenskym @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @amjames @aakhundov
| true
|
2,900,898,265
|
[CUDA 12.6 CI] Update of cuda 12.6 eager tests failing on test_pointwise_op_with_tensor_of_scalarlist_overload__foreach_addcmul_is_fastpath_True_cuda_bfloat16
|
atalman
|
open
|
[
"module: cuda",
"triaged"
] | 0
|
CONTRIBUTOR
|
While updating to CUDA 12.6 eager test, PR: https://github.com/pytorch/pytorch/pull/148602
Failing workflow: https://github.com/pytorch/pytorch/actions/runs/13690790469/job/38285054097#step:22:4164
We see following test failure:
```
_ TestForeachCUDA.test_pointwise_op_with_tensor_of_scalarlist_overload__foreach_addcmul_is_fastpath_True_cuda_bfloat16 _
Traceback (most recent call last):
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_device_type.py", line 1159, in test_wrapper
return test(*args, **kwargs)
File "/opt/conda/envs/py_3.10/lib/python3.10/unittest/mock.py", line 1833, in _inner
return f(*args, **kw)
File "/var/lib/jenkins/workspace/test/test_foreach.py", line 386, in test_pointwise_op_with_tensor_of_scalarlist_overload
self._pointwise_test(
File "/var/lib/jenkins/workspace/test/test_foreach.py", line 510, in _pointwise_test
actual = op(inputs, self.is_cuda, is_fastpath, **kwargs)
File "/var/lib/jenkins/workspace/test/test_foreach.py", line 90, in __call__
assert mta_called == (expect_fastpath and (not zero_size)), (
AssertionError: mta_called=False, expect_fastpath=True, zero_size=False, self.func.__name__='_foreach_addcmul', keys=('aten::_foreach_addcmul', 'Unrecognized', 'aten::result_type', 'aten::empty_strided', 'cudaLaunchKernel', 'cudaDeviceSynchronize')
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 3150, in wrapper
method(*args, **kwargs)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 3150, in wrapper
method(*args, **kwargs)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_device_type.py", line 454, in instantiated_test
result = test(self, **param_kwargs)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 1612, in wrapper
fn(*args, **kwargs)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_device_type.py", line 1171, in test_wrapper
raise e_tracked from e
Exception: Caused by sample input at index 9: SampleInput(input=TensorList[Tensor[size=(0,), device="cuda:0", dtype=torch.bfloat16], Tensor[size=(19, 19), device="cuda:0", dtype=torch.bfloat16], Tensor[size=(18, 18), device="cuda:0", dtype=torch.bfloat16], Tensor[size=(0,), device="cuda:0", dtype=torch.bfloat16], Tensor[size=(16, 16), device="cuda:0", dtype=torch.bfloat16], Tensor[size=(15, 15), device="cuda:0", dtype=torch.bfloat16], Tensor[size=(0,), device="cuda:0", dtype=torch.bfloat16], Tensor[size=(13, 13), device="cuda:0", dtype=torch.bfloat16], Tensor[size=(12, 12), device="cuda:0", dtype=torch.bfloat16], Tensor[size=(0,), device="cuda:0", dtype=torch.bfloat16], Tensor[size=(10, 10), device="cuda:0", dtype=torch.bfloat16], Tensor[size=(9, 9), device="cuda:0", dtype=torch.bfloat16], Tensor[size=(0,), device="cuda:0", dtype=torch.bfloat16], Tensor[size=(7, 7), device="cuda:0", dtype=torch.bfloat16], Tensor[size=(6, 6), device="cuda:0",
```
cc @ptrblck @msaroufim @eqy @janeyx99 @crcrpar @tinglvv @nWEIdia
### Versions
2.7.0 nightly
| true
|
2,900,770,678
|
Torch Windows nightly installation with torchvision/audio broken by dependencies conflict
|
chuanqi129
|
closed
|
[
"module: binaries",
"module: windows",
"triaged",
"module: xpu"
] | 5
|
COLLABORATOR
|
Recently, the torch xpu Windows nightly wheel cannot be installed together with torchvision/audio; the failure is shown below:
```
(nightly) C:\Users\chuanqiw>pip install --pre torch torchvision torchaudio --index-url=https://download.pytorch.org/whl/
nightly/xpu
Looking in indexes: https://download.pytorch.org/whl/nightly/xpu
Collecting torch
Using cached https://download.pytorch.org/whl/nightly/xpu/torch-2.7.0.dev20250306%2Bxpu-cp310-cp310-win_amd64.whl.metadata (28 kB)
Collecting torchvision
Using cached https://download.pytorch.org/whl/nightly/xpu/torchvision-0.22.0.dev20250306%2Bxpu-cp310-cp310-win_amd64.whl.metadata (6.3 kB)
Collecting torchaudio
Using cached https://download.pytorch.org/whl/nightly/xpu/torchaudio-2.6.0.dev20250306%2Bxpu-cp310-cp310-win_amd64.whl.metadata (6.8 kB)
Requirement already satisfied: filelock in c:\users\chuanqiw\appdata\local\miniforge3\envs\nightly\lib\site-packages (from torch) (3.16.1)
Requirement already satisfied: typing-extensions>=4.10.0 in c:\users\chuanqiw\appdata\local\miniforge3\envs\nightly\lib\site-packages (from torch) (4.12.2)
Requirement already satisfied: sympy>=1.13.3 in c:\users\chuanqiw\appdata\local\miniforge3\envs\nightly\lib\site-packages (from torch) (1.13.3)
Requirement already satisfied: networkx in c:\users\chuanqiw\appdata\local\miniforge3\envs\nightly\lib\site-packages (from torch) (3.4.2)
Requirement already satisfied: jinja2 in c:\users\chuanqiw\appdata\local\miniforge3\envs\nightly\lib\site-packages (from torch) (3.1.4)
Requirement already satisfied: fsspec in c:\users\chuanqiw\appdata\local\miniforge3\envs\nightly\lib\site-packages (from torch) (2024.10.0)
Requirement already satisfied: tcmlib==1.2.0 in c:\users\chuanqiw\appdata\local\miniforge3\envs\nightly\lib\site-packages (from torch) (1.2.0)
Requirement already satisfied: umf==0.9.1 in c:\users\chuanqiw\appdata\local\miniforge3\envs\nightly\lib\site-packages (from torch) (0.9.1)
Collecting intel-pti==0.10.1 (from torch)
Downloading https://download.pytorch.org/whl/nightly/xpu/intel_pti-0.10.1-py2.py3-none-win_amd64.whl.metadata (1.1 kB)
Collecting intel-cmplr-lib-rt==2025.0.5 (from torch)
Downloading https://download.pytorch.org/whl/nightly/xpu/intel_cmplr_lib_rt-2025.0.5-py2.py3-none-win_amd64.whl.metadata (1.2 kB)
Collecting intel-cmplr-lib-ur==2025.0.5 (from torch)
Downloading https://download.pytorch.org/whl/nightly/xpu/intel_cmplr_lib_ur-2025.0.5-py2.py3-none-win_amd64.whl.metadata (1.3 kB)
Collecting intel-cmplr-lic-rt==2025.0.5 (from torch)
Downloading https://download.pytorch.org/whl/nightly/xpu/intel_cmplr_lic_rt-2025.0.5-py2.py3-none-win_amd64.whl.metadata (1.2 kB)
Collecting intel-sycl-rt==2025.0.5 (from torch)
Downloading https://download.pytorch.org/whl/nightly/xpu/intel_sycl_rt-2025.0.5-py2.py3-none-win_amd64.whl.metadata (1.6 kB)
Collecting numpy (from torchvision)
Using cached https://download.pytorch.org/whl/nightly/numpy-2.1.2-cp310-cp310-win_amd64.whl (12.9 MB)
Collecting torch
Using cached https://download.pytorch.org/whl/nightly/xpu/torch-2.7.0.dev20250305%2Bxpu-cp310-cp310-win_amd64.whl.metadata (27 kB)
Collecting pillow!=8.3.*,>=5.3.0 (from torchvision)
Using cached https://download.pytorch.org/whl/nightly/pillow-11.0.0-cp310-cp310-win_amd64.whl (2.6 MB)
INFO: pip is looking at multiple versions of torch to determine which version is compatible with other requirements. This could take a while.
Collecting torchvision
Using cached https://download.pytorch.org/whl/nightly/xpu/torchvision-0.22.0.dev20250305%2Bxpu-cp310-cp310-win_amd64.whl.metadata (6.3 kB)
Collecting torch
Using cached https://download.pytorch.org/whl/nightly/xpu/torch-2.7.0.dev20250304%2Bxpu-cp310-cp310-win_amd64.whl.metadata (27 kB)
Collecting torchvision
Using cached https://download.pytorch.org/whl/nightly/xpu/torchvision-0.22.0.dev20250304%2Bxpu-cp310-cp310-win_amd64.whl.metadata (6.3 kB)
Using cached https://download.pytorch.org/whl/nightly/xpu/torchvision-0.22.0.dev20250303%2Bxpu-cp310-cp310-win_amd64.whl.metadata (6.3 kB)
Collecting torch
Using cached https://download.pytorch.org/whl/nightly/xpu/torch-2.7.0.dev20250302%2Bxpu-cp310-cp310-win_amd64.whl.metadata (27 kB)
Collecting torchvision
Using cached https://download.pytorch.org/whl/nightly/xpu/torchvision-0.22.0.dev20250302%2Bxpu-cp310-cp310-win_amd64.whl.metadata (6.3 kB)
Collecting torch
Using cached https://download.pytorch.org/whl/nightly/xpu/torch-2.7.0.dev20250301%2Bxpu-cp310-cp310-win_amd64.whl.metadata (27 kB)
Collecting torchvision
Using cached https://download.pytorch.org/whl/nightly/xpu/torchvision-0.22.0.dev20250301%2Bxpu-cp310-cp310-win_amd64.whl.metadata (6.3 kB)
Using cached https://download.pytorch.org/whl/nightly/xpu/torchvision-0.22.0.dev20250228%2Bxpu-cp310-cp310-win_amd64.whl.metadata (6.3 kB)
Collecting torch
Using cached https://download.pytorch.org/whl/nightly/xpu/torch-2.7.0.dev20250227%2Bxpu-cp310-cp310-win_amd64.whl.metadata (27 kB)
Collecting torchvision
Using cached https://download.pytorch.org/whl/nightly/xpu/torchvision-0.22.0.dev20250227%2Bxpu-cp310-cp310-win_amd64.whl.metadata (6.3 kB)
INFO: pip is still looking at multiple versions of torch to determine which version is compatible with other requirements. This could take a while.
Using cached https://download.pytorch.org/whl/nightly/xpu/torchvision-0.22.0.dev20250226%2Bxpu-cp310-cp310-win_amd64.whl.metadata (6.3 kB)
Collecting torch
Using cached https://download.pytorch.org/whl/nightly/xpu/torch-2.7.0.dev20250225%2Bxpu-cp310-cp310-win_amd64.whl.metadata (27 kB)
Collecting torchvision
Using cached https://download.pytorch.org/whl/nightly/xpu/torchvision-0.22.0.dev20250224%2Bxpu-cp310-cp310-win_amd64.whl.metadata (6.3 kB)
Collecting torch
Using cached https://download.pytorch.org/whl/nightly/xpu/torch-2.7.0.dev20250224%2Bxpu-cp310-cp310-win_amd64.whl.metadata (27 kB)
Collecting torchvision
Using cached https://download.pytorch.org/whl/nightly/xpu/torchvision-0.22.0.dev20250223%2Bxpu-cp310-cp310-win_amd64.whl.metadata (6.3 kB)
Collecting torch
Using cached https://download.pytorch.org/whl/nightly/xpu/torch-2.7.0.dev20250223%2Bxpu-cp310-cp310-win_amd64.whl.metadata (27 kB)
Collecting torchvision
Using cached https://download.pytorch.org/whl/nightly/xpu/torchvision-0.22.0.dev20250222%2Bxpu-cp310-cp310-win_amd64.whl.metadata (6.3 kB)
Collecting torch
Using cached https://download.pytorch.org/whl/nightly/xpu/torch-2.7.0.dev20250222%2Bxpu-cp310-cp310-win_amd64.whl.metadata (27 kB)
Collecting torchvision
Using cached https://download.pytorch.org/whl/nightly/xpu/torchvision-0.22.0.dev20250221%2Bxpu-cp310-cp310-win_amd64.whl.metadata (6.3 kB)
Collecting torch
Using cached https://download.pytorch.org/whl/nightly/xpu/torch-2.7.0.dev20250221%2Bxpu-cp310-cp310-win_amd64.whl.metadata (27 kB)
INFO: This is taking longer than usual. You might need to provide the dependency resolver with stricter constraints to reduce runtime. See https://pip.pypa.io/warnings/backtracking for guidance. If you want to abort this run, press Ctrl + C.
Collecting torchvision
Using cached https://download.pytorch.org/whl/nightly/xpu/torchvision-0.22.0.dev20250220%2Bxpu-cp310-cp310-win_amd64.whl.metadata (6.3 kB)
Collecting torch
Using cached https://download.pytorch.org/whl/nightly/xpu/torch-2.7.0.dev20250220%2Bxpu-cp310-cp310-win_amd64.whl.metadata (27 kB)
Collecting torchvision
Using cached https://download.pytorch.org/whl/nightly/xpu/torchvision-0.22.0.dev20250219%2Bxpu-cp310-cp310-win_amd64.whl.metadata (6.3 kB)
Collecting torch
Using cached https://download.pytorch.org/whl/nightly/xpu/torch-2.7.0.dev20250219%2Bxpu-cp310-cp310-win_amd64.whl.metadata (27 kB)
Collecting torchvision
Using cached https://download.pytorch.org/whl/nightly/xpu/torchvision-0.22.0.dev20250218%2Bxpu-cp310-cp310-win_amd64.whl.metadata (6.3 kB)
Collecting torch
Using cached https://download.pytorch.org/whl/nightly/xpu/torch-2.7.0.dev20250217%2Bxpu-cp310-cp310-win_amd64.whl.metadata (27 kB)
Collecting torchvision
Using cached https://download.pytorch.org/whl/nightly/xpu/torchvision-0.22.0.dev20250217%2Bxpu-cp310-cp310-win_amd64.whl.metadata (6.3 kB)
Using cached https://download.pytorch.org/whl/nightly/xpu/torchvision-0.22.0.dev20250216%2Bxpu-cp310-cp310-win_amd64.whl.metadata (6.3 kB)
Collecting torch
Using cached https://download.pytorch.org/whl/nightly/xpu/torch-2.7.0.dev20250215%2Bxpu-cp310-cp310-win_amd64.whl.metadata (27 kB)
Collecting torchvision
Using cached https://download.pytorch.org/whl/nightly/xpu/torchvision-0.22.0.dev20250215%2Bxpu-cp310-cp310-win_amd64.whl.metadata (6.3 kB)
Using cached https://download.pytorch.org/whl/nightly/xpu/torchvision-0.22.0.dev20250214%2Bxpu-cp310-cp310-win_amd64.whl.metadata (6.3 kB)
Collecting torch
Using cached https://download.pytorch.org/whl/nightly/xpu/torch-2.7.0.dev20250213%2Bxpu-cp310-cp310-win_amd64.whl.metadata (27 kB)
Collecting torchvision
Using cached https://download.pytorch.org/whl/nightly/xpu/torchvision-0.22.0.dev20250213%2Bxpu-cp310-cp310-win_amd64.whl.metadata (6.3 kB)
Using cached https://download.pytorch.org/whl/nightly/xpu/torchvision-0.22.0.dev20250212%2Bxpu-cp310-cp310-win_amd64.whl.metadata (6.3 kB)
Collecting torch
Using cached https://download.pytorch.org/whl/nightly/xpu/torch-2.7.0.dev20250211%2Bxpu-cp310-cp310-win_amd64.whl.metadata (27 kB)
Collecting torchvision
Using cached https://download.pytorch.org/whl/nightly/xpu/torchvision-0.22.0.dev20250211%2Bxpu-cp310-cp310-win_amd64.whl.metadata (6.3 kB)
Using cached https://download.pytorch.org/whl/nightly/xpu/torchvision-0.22.0.dev20250210%2Bxpu-cp310-cp310-win_amd64.whl.metadata (6.3 kB)
Collecting torch
Using cached https://download.pytorch.org/whl/nightly/xpu/torch-2.7.0.dev20250209%2Bxpu-cp310-cp310-win_amd64.whl.metadata (27 kB)
Collecting torchvision
Using cached https://download.pytorch.org/whl/nightly/xpu/torchvision-0.22.0.dev20250209%2Bxpu-cp310-cp310-win_amd64.whl.metadata (6.3 kB)
Using cached https://download.pytorch.org/whl/nightly/xpu/torchvision-0.22.0.dev20250208%2Bxpu-cp310-cp310-win_amd64.whl.metadata (6.3 kB)
Collecting torch
Using cached https://download.pytorch.org/whl/nightly/xpu/torch-2.7.0.dev20250206%2Bxpu-cp310-cp310-win_amd64.whl.metadata (27 kB)
Collecting torchvision
Using cached https://download.pytorch.org/whl/nightly/xpu/torchvision-0.22.0.dev20250207%2Bxpu-cp310-cp310-win_amd64.whl.metadata (6.3 kB)
Using cached https://download.pytorch.org/whl/nightly/xpu/torchvision-0.22.0.dev20250206%2Bxpu-cp310-cp310-win_amd64.whl.metadata (6.3 kB)
Using cached https://download.pytorch.org/whl/nightly/xpu/torchvision-0.22.0.dev20250205%2Bxpu-cp310-cp310-win_amd64.whl.metadata (6.3 kB)
Collecting torch
Using cached https://download.pytorch.org/whl/nightly/xpu/torch-2.7.0.dev20250205%2Bxpu-cp310-cp310-win_amd64.whl.metadata (27 kB)
Collecting torchvision
Using cached https://download.pytorch.org/whl/nightly/xpu/torchvision-0.22.0.dev20250204%2Bxpu-cp310-cp310-win_amd64.whl.metadata (6.3 kB)
Collecting torch
Using cached https://download.pytorch.org/whl/nightly/xpu/torch-2.7.0.dev20250204%2Bxpu-cp310-cp310-win_amd64.whl.metadata (27 kB)
Collecting torchvision
Downloading https://download.pytorch.org/whl/nightly/xpu/torchvision-0.22.0.dev20250203%2Bxpu-cp310-cp310-win_amd64.whl.metadata (6.3 kB)
Collecting torch
Using cached https://download.pytorch.org/whl/nightly/xpu/torch-2.7.0.dev20250202%2Bxpu-cp310-cp310-win_amd64.whl.metadata (27 kB)
Collecting torchvision
Downloading https://download.pytorch.org/whl/nightly/xpu/torchvision-0.22.0.dev20250202%2Bxpu-cp310-cp310-win_amd64.whl.metadata (6.3 kB)
Downloading https://download.pytorch.org/whl/nightly/xpu/torchvision-0.22.0.dev20250201%2Bxpu-cp310-cp310-win_amd64.whl.metadata (6.3 kB)
Collecting torch
Using cached https://download.pytorch.org/whl/nightly/xpu/torch-2.7.0.dev20250201%2Bxpu-cp310-cp310-win_amd64.whl.metadata (27 kB)
Collecting torchvision
Downloading https://download.pytorch.org/whl/nightly/xpu/torchvision-0.22.0.dev20250131%2Bxpu-cp310-cp310-win_amd64.whl.metadata (6.3 kB)
Collecting torch
Using cached https://download.pytorch.org/whl/nightly/xpu/torch-2.7.0.dev20250131%2Bxpu-cp310-cp310-win_amd64.whl.metadata (27 kB)
Collecting torchvision
Downloading https://download.pytorch.org/whl/nightly/xpu/torchvision-0.22.0.dev20250130%2Bxpu-cp310-cp310-win_amd64.whl.metadata (6.3 kB)
Collecting torch
Using cached https://download.pytorch.org/whl/nightly/xpu/torch-2.7.0.dev20250129%2Bxpu-cp310-cp310-win_amd64.whl.metadata (27 kB)
Collecting torchvision
Downloading https://download.pytorch.org/whl/nightly/xpu/torchvision-0.22.0.dev20250129%2Bxpu-cp310-cp310-win_amd64.whl.metadata (6.3 kB)
Downloading https://download.pytorch.org/whl/nightly/xpu/torchvision-0.22.0.dev20250128%2Bxpu-cp310-cp310-win_amd64.whl.metadata (6.3 kB)
Collecting torch
Using cached https://download.pytorch.org/whl/nightly/xpu/torch-2.7.0.dev20250127%2Bxpu-cp310-cp310-win_amd64.whl.metadata (27 kB)
Collecting torchvision
Downloading https://download.pytorch.org/whl/nightly/xpu/torchvision-0.22.0.dev20250127%2Bxpu-cp310-cp310-win_amd64.whl.metadata (6.3 kB)
Collecting torch
Using cached https://download.pytorch.org/whl/nightly/xpu/torch-2.7.0.dev20250126%2Bxpu-cp310-cp310-win_amd64.whl.metadata (27 kB)
Collecting torchvision
Downloading https://download.pytorch.org/whl/nightly/xpu/torchvision-0.22.0.dev20250126%2Bxpu-cp310-cp310-win_amd64.whl.metadata (6.3 kB)
Downloading https://download.pytorch.org/whl/nightly/xpu/torchvision-0.22.0.dev20250125%2Bxpu-cp310-cp310-win_amd64.whl.metadata (6.3 kB)
Collecting torch
Using cached https://download.pytorch.org/whl/nightly/xpu/torch-2.7.0.dev20250125%2Bxpu-cp310-cp310-win_amd64.whl.metadata (27 kB)
Collecting torchvision
Downloading https://download.pytorch.org/whl/nightly/xpu/torchvision-0.22.0.dev20250124%2Bxpu-cp310-cp310-win_amd64.whl.metadata (6.3 kB)
Collecting torch
Using cached https://download.pytorch.org/whl/nightly/xpu/torch-2.7.0.dev20250123%2Bxpu-cp310-cp310-win_amd64.whl.metadata (27 kB)
Collecting torchvision
Downloading https://download.pytorch.org/whl/nightly/xpu/torchvision-0.22.0.dev20250122%2Bxpu-cp310-cp310-win_amd64.whl.metadata (6.3 kB)
Collecting torch
Using cached https://download.pytorch.org/whl/nightly/xpu/torch-2.7.0.dev20250120%2Bxpu-cp310-cp310-win_amd64.whl.metadata (27 kB)
Collecting torchvision
Downloading https://download.pytorch.org/whl/nightly/xpu/torchvision-0.22.0.dev20250121%2Bxpu-cp310-cp310-win_amd64.whl.metadata (6.3 kB)
Downloading https://download.pytorch.org/whl/nightly/xpu/torchvision-0.22.0.dev20250120%2Bxpu-cp310-cp310-win_amd64.whl.metadata (6.3 kB)
Collecting torch
Using cached https://download.pytorch.org/whl/nightly/xpu/torch-2.7.0.dev20250119%2Bxpu-cp310-cp310-win_amd64.whl.metadata (27 kB)
Collecting torchvision
Downloading https://download.pytorch.org/whl/nightly/xpu/torchvision-0.22.0.dev20250119%2Bxpu-cp310-cp310-win_amd64.whl.metadata (6.3 kB)
Downloading https://download.pytorch.org/whl/nightly/xpu/torchvision-0.22.0.dev20250118%2Bxpu-cp310-cp310-win_amd64.whl.metadata (6.3 kB)
Collecting torch
Using cached https://download.pytorch.org/whl/nightly/xpu/torch-2.7.0.dev20250118%2Bxpu-cp310-cp310-win_amd64.whl.metadata (27 kB)
Collecting torchvision
Downloading https://download.pytorch.org/whl/nightly/xpu/torchvision-0.22.0.dev20250117%2Bxpu-cp310-cp310-win_amd64.whl.metadata (6.3 kB)
Collecting torch
Using cached https://download.pytorch.org/whl/nightly/xpu/torch-2.7.0.dev20250110%2Bxpu-cp310-cp310-win_amd64.whl.metadata (27 kB)
Collecting torchvision
Downloading https://download.pytorch.org/whl/nightly/xpu/torchvision-0.22.0.dev20250116%2Bxpu-cp310-cp310-win_amd64.whl.metadata (6.3 kB)
Downloading https://download.pytorch.org/whl/nightly/xpu/torchvision-0.22.0.dev20250115%2Bxpu-cp310-cp310-win_amd64.whl.metadata (6.3 kB)
Downloading https://download.pytorch.org/whl/nightly/xpu/torchvision-0.22.0.dev20250114%2Bxpu-cp310-cp310-win_amd64.whl.metadata (6.3 kB)
Downloading https://download.pytorch.org/whl/nightly/xpu/torchvision-0.22.0.dev20250113%2Bxpu-cp310-cp310-win_amd64.whl.metadata (6.3 kB)
Downloading https://download.pytorch.org/whl/nightly/xpu/torchvision-0.22.0.dev20250112%2Bxpu-cp310-cp310-win_amd64.whl.metadata (6.3 kB)
Downloading https://download.pytorch.org/whl/nightly/xpu/torchvision-0.22.0.dev20250111%2Bxpu-cp310-cp310-win_amd64.whl.metadata (6.3 kB)
Downloading https://download.pytorch.org/whl/nightly/xpu/torchvision-0.22.0.dev20250110%2Bxpu-cp310-cp310-win_amd64.whl (1.7 MB)
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 1.7/1.7 MB 3.4 MB/s eta 0:00:00
Collecting torch
Using cached https://download.pytorch.org/whl/nightly/xpu/torch-2.7.0.dev20250108%2Bxpu-cp310-cp310-win_amd64.whl.metadata (27 kB)
Collecting torchvision
Downloading https://download.pytorch.org/whl/nightly/xpu/torchvision-0.22.0.dev20250109%2Bxpu-cp310-cp310-win_amd64.whl (1.7 MB)
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 1.7/1.7 MB 4.5 MB/s eta 0:00:00
Downloading https://download.pytorch.org/whl/nightly/xpu/torchvision-0.22.0.dev20250108%2Bxpu-cp310-cp310-win_amd64.whl (1.7 MB)
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 1.7/1.7 MB 2.8 MB/s eta 0:00:00
Downloading https://download.pytorch.org/whl/nightly/xpu/torchvision-0.22.0.dev20250107%2Bxpu-cp310-cp310-win_amd64.whl (1.7 MB)
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 1.7/1.7 MB 6.0 MB/s eta 0:00:00
Collecting torch
Using cached https://download.pytorch.org/whl/nightly/xpu/torch-2.7.0.dev20250107%2Bxpu-cp310-cp310-win_amd64.whl.metadata (27 kB)
Collecting torchvision
Downloading https://download.pytorch.org/whl/nightly/xpu/torchvision-0.22.0.dev20250106%2Bxpu-cp310-cp310-win_amd64.whl (1.7 MB)
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 1.7/1.7 MB 5.3 MB/s eta 0:00:00
Collecting torch
Using cached https://download.pytorch.org/whl/nightly/xpu/torch-2.7.0.dev20250106%2Bxpu-cp310-cp310-win_amd64.whl.metadata (27 kB)
Collecting torchvision
Downloading https://download.pytorch.org/whl/nightly/xpu/torchvision-0.22.0.dev20250105%2Bxpu-cp310-cp310-win_amd64.whl (1.7 MB)
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 1.7/1.7 MB 5.7 MB/s eta 0:00:00
INFO: pip is looking at multiple versions of torchvision to determine which version is compatible with other requirements. This could take a while.
ERROR: Cannot install torch and torchvision==0.22.0.dev20250105+xpu because these package versions have conflicting dependencies.
The conflict is caused by:
The user requested torch
torchvision 0.22.0.dev20250105+xpu depends on torch==2.7.0.dev20250105+xpu
To fix this you could try to:
1. loosen the range of package versions you've specified
2. remove package versions to allow pip to attempt to solve the dependency conflict
ERROR: ResolutionImpossible: for help visit https://pip.pypa.io/en/latest/topics/dependency-resolution/#dealing-with-dependency-conflicts
```
But installing just torch, or just torchvision/audio, works fine.
```
(nightly) C:\Users\chuanqiw>pip install --pre torchvision --index-url=https://download.pytorch.org/whl/nightly/xpu --pro
xy http://child-prc.intel.com:913
Looking in indexes: https://download.pytorch.org/whl/nightly/xpu
Collecting torchvision
Using cached https://download.pytorch.org/whl/nightly/xpu/torchvision-0.22.0.dev20250306%2Bxpu-cp310-cp310-win_amd64.whl.metadata (6.3 kB)
Collecting numpy (from torchvision)
Using cached https://download.pytorch.org/whl/nightly/numpy-2.1.2-cp310-cp310-win_amd64.whl (12.9 MB)
Collecting torch==2.7.0.dev20250305+xpu (from torchvision)
Using cached https://download.pytorch.org/whl/nightly/xpu/torch-2.7.0.dev20250305%2Bxpu-cp310-cp310-win_amd64.whl.metadata (27 kB)
Collecting pillow!=8.3.*,>=5.3.0 (from torchvision)
Using cached https://download.pytorch.org/whl/nightly/pillow-11.0.0-cp310-cp310-win_amd64.whl (2.6 MB)
Requirement already satisfied: filelock in c:\users\chuanqiw\appdata\local\miniforge3\envs\nightly\lib\site-packages (from torch==2.7.0.dev20250305+xpu->torchvision) (3.16.1)
Requirement already satisfied: typing-extensions>=4.10.0 in c:\users\chuanqiw\appdata\local\miniforge3\envs\nightly\lib\site-packages (from torch==2.7.0.dev20250305+xpu->torchvision) (4.12.2)
Requirement already satisfied: sympy==1.13.3 in c:\users\chuanqiw\appdata\local\miniforge3\envs\nightly\lib\site-packages (from torch==2.7.0.dev20250305+xpu->torchvision) (1.13.3)
Requirement already satisfied: networkx in c:\users\chuanqiw\appdata\local\miniforge3\envs\nightly\lib\site-packages (from torch==2.7.0.dev20250305+xpu->torchvision) (3.4.2)
Requirement already satisfied: jinja2 in c:\users\chuanqiw\appdata\local\miniforge3\envs\nightly\lib\site-packages (from torch==2.7.0.dev20250305+xpu->torchvision) (3.1.4)
Requirement already satisfied: fsspec in c:\users\chuanqiw\appdata\local\miniforge3\envs\nightly\lib\site-packages (from torch==2.7.0.dev20250305+xpu->torchvision) (2024.10.0)
Requirement already satisfied: intel-cmplr-lib-rt==2025.0.2 in c:\users\chuanqiw\appdata\local\miniforge3\envs\nightly\lib\site-packages (from torch==2.7.0.dev20250305+xpu->torchvision) (2025.0.2)
Requirement already satisfied: intel-cmplr-lib-ur==2025.0.2 in c:\users\chuanqiw\appdata\local\miniforge3\envs\nightly\lib\site-packages (from torch==2.7.0.dev20250305+xpu->torchvision) (2025.0.2)
Requirement already satisfied: intel-cmplr-lic-rt==2025.0.2 in c:\users\chuanqiw\appdata\local\miniforge3\envs\nightly\lib\site-packages (from torch==2.7.0.dev20250305+xpu->torchvision) (2025.0.2)
Requirement already satisfied: intel-sycl-rt==2025.0.2 in c:\users\chuanqiw\appdata\local\miniforge3\envs\nightly\lib\site-packages (from torch==2.7.0.dev20250305+xpu->torchvision) (2025.0.2)
Requirement already satisfied: tcmlib==1.2.0 in c:\users\chuanqiw\appdata\local\miniforge3\envs\nightly\lib\site-packages (from torch==2.7.0.dev20250305+xpu->torchvision) (1.2.0)
Requirement already satisfied: umf==0.9.1 in c:\users\chuanqiw\appdata\local\miniforge3\envs\nightly\lib\site-packages (from torch==2.7.0.dev20250305+xpu->torchvision) (0.9.1)
Requirement already satisfied: intel-pti==0.10.0 in c:\users\chuanqiw\appdata\local\miniforge3\envs\nightly\lib\site-packages (from torch==2.7.0.dev20250305+xpu->torchvision) (0.10.0)
Requirement already satisfied: mpmath<1.4,>=1.1.0 in c:\users\chuanqiw\appdata\local\miniforge3\envs\nightly\lib\site-packages (from sympy==1.13.3->torch==2.7.0.dev20250305+xpu->torchvision) (1.3.0)
Requirement already satisfied: MarkupSafe>=2.0 in c:\users\chuanqiw\appdata\local\miniforge3\envs\nightly\lib\site-packages (from jinja2->torch==2.7.0.dev20250305+xpu->torchvision) (2.1.5)
Downloading https://download.pytorch.org/whl/nightly/xpu/torchvision-0.22.0.dev20250306%2Bxpu-cp310-cp310-win_amd64.whl (1.7 MB)
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 1.7/1.7 MB 3.0 MB/s eta 0:00:00
Using cached https://download.pytorch.org/whl/nightly/xpu/torch-2.7.0.dev20250305%2Bxpu-cp310-cp310-win_amd64.whl (1094.9 MB)
Installing collected packages: pillow, numpy, torch, torchvision
```
This is most likely caused by torchvision/audio being pinned to the previous day's torch wheel (refer https://github.com/pytorch/vision/actions/runs/13697759150/job/38303998802#step:9:23), because the torch xpu wheel build needs more time (about 3.5 hours, refer https://github.com/pytorch/pytorch/actions/runs/13693530329/job/38290928109).
cc @seemethere @malfet @osalpekar @atalman @peterjc123 @mszhanyi @skyline75489 @nbcsm @iremyux @Blackhex @gujinghui @EikanWang @fengyuan14 @guangyey
| true
|
2,900,758,922
|
[inductor] Fix block ptr store if input is constant
|
kundaMwiza
|
open
|
[
"triaged",
"open source",
"topic: not user facing",
"module: inductor"
] | 2
|
CONTRIBUTOR
|
Since block ptr stores require explicit broadcasts, the input to `tl.store` needs to be reshaped and broadcast. Currently it is assumed that the input being stored is already in block form (e.g. `XBLOCK`); however, the input can also be a scalar, so special handling is required to reshape and broadcast the scalar to the output block shape.
Ideally the shape of the input would be an attribute of a `TritonCSEVariable` via shape propagation, but that is not the case today. The patch in this PR determines whether the input is a constant by checking the arguments of an FX store node, which is not ideal; maybe there is a simpler alternative.
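As a toy Triton illustration of the shape requirement (not the inductor codegen itself), a scalar has to be materialized at the block shape before a block-pointer store:
```python
import torch
import triton
import triton.language as tl

@triton.jit
def fill_kernel(out_ptr, value, M, N, BLOCK_M: tl.constexpr, BLOCK_N: tl.constexpr):
    pid_m = tl.program_id(0)
    pid_n = tl.program_id(1)
    block = tl.make_block_ptr(
        base=out_ptr, shape=(M, N), strides=(N, 1),
        offsets=(pid_m * BLOCK_M, pid_n * BLOCK_N),
        block_shape=(BLOCK_M, BLOCK_N), order=(1, 0),
    )
    # The scalar is broadcast to the block shape expected by the block-ptr store.
    val = tl.zeros((BLOCK_M, BLOCK_N), dtype=tl.float32) + value
    tl.store(block, val, boundary_check=(0, 1))

if torch.cuda.is_available():
    out = torch.empty(64, 64, device="cuda")
    fill_kernel[(2, 2)](out, 3.0, 64, 64, BLOCK_M=32, BLOCK_N=32)
    assert torch.all(out == 3.0)
```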
Fixes #ISSUE_NUMBER
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
| true
|
2,900,723,490
|
Improve Pareto frontier plot for AutoAC
|
lw
|
closed
|
[
"Merged",
"ciflow/trunk",
"release notes: autograd",
"module: functorch",
"release notes: torch.func",
"ciflow/inductor"
] | 6
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #148678
This was added in https://github.com/pytorch/pytorch/pull/126320. It's a very nice feature, which can be used to predict memory usage for different budget values.
However, it had some limitations, notably in terms of resolution (it only sampled 21 points across the whole range thus missed many threshold values) and in distributed settings.
Here I fix those by using recursive binary searches to identify all thresholds (up to a resolution of 1e-3, which can be made configurable) and output them in SVG (to be able to discern different points), plus I add the rank to the filename and store it in a user-define directory.
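A minimal sketch of the threshold search (the helper `f` is an assumed stand-in mapping a budget in [0, 1] to a discrete outcome; the real partitioner code differs in the details):
```python
def find_thresholds(f, lo=0.0, hi=1.0, resolution=1e-3):
    thresholds = []

    def search(a, b, fa, fb):
        if fa == fb:
            return
        if (b - a) <= resolution:
            thresholds.append(b)
            return
        m = (a + b) / 2.0
        fm = f(m)
        search(a, m, fa, fm)
        search(m, b, fm, fb)

    search(lo, hi, f(lo), f(hi))
    return sorted(set(thresholds))

# Toy step function with jumps near 0.25 and 0.6:
print(find_thresholds(lambda x: 0 if x < 0.25 else (1 if x < 0.6 else 2)))
```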
cc @zou3519 @Chillee @samdow @kshitij12345
| true
|
2,900,707,061
|
xpu: update filter out of dg2 AOT target
|
dvrogozh
|
closed
|
[
"open source",
"Merged",
"ciflow/trunk",
"topic: not user facing",
"release notes: xpu"
] | 13
|
CONTRIBUTOR
|
torch-xpu-ops has updated its list of AOT targets and now uses `dg2` instead of `dg2-g10`. This requires an update in cpp_extension.py, which currently filters out `dg2-`-prefixed AOT targets.
CC: @gujinghui @EikanWang @fengyuan14 @guangyey @jgong5
| true
|
2,900,704,307
|
[dynamo] allow global import `from collections import deque` in user code
|
XuehaiPan
|
closed
|
[
"open source",
"Merged",
"Reverted",
"ciflow/trunk",
"topic: not user facing",
"module: dynamo",
"ciflow/inductor",
"ci-no-td",
"ciflow/inductor-rocm"
] | 9
|
COLLABORATOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #138214
* #113258
* #148569
* __->__ #148676
See https://github.com/pytorch/pytorch/pull/148669#discussion_r1983462218 for more details.
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @chauhang @amjames
| true
|
2,900,693,003
|
Fix static functions when using module in MSVC
|
taras-janea
|
closed
|
[
"triaged",
"open source",
"Merged",
"ciflow/trunk",
"release notes: releng"
] | 12
|
COLLABORATOR
|
If you try to use torch in C++ via C++20 modules, it will not compile because static functions are not supported by MSVC when using modules: https://developercommunity.visualstudio.com/t/10323558.
This is also aligned with the [C++20 standard](https://www.open-std.org/jtc1/sc22/wg21/docs/papers/2020/n4849.pdf) (ISO/IEC 14882:2020), 10.2.7 Export declaration [module.interface]: "Exported names have either external linkage or no linkage".
Fixes https://github.com/pytorch/pytorch/issues/71309
Tested using the following code.
```c++
export module testModule;
import <torch/torch.h>;
import <memory>;
import <string>;
import <tuple>;
import <iostream>;
export namespace testModule
{
export void test()
{
torch::Tensor tensor1 = torch::rand({ 2, 3 });
torch::Tensor tensor2 = torch::rand({ 3, 2 });
// Perform tensor multiplication
torch::Tensor result = torch::matmul(tensor1, tensor2);
// Print the tensors
std::cout << "Tensor 1: " << tensor1 << std::endl;
std::cout << "Tensor 2: " << tensor2 << std::endl;
std::cout << "Result of multiplication: " << result << std::endl;
}
}
```
```c++
import testModule;
int main()
{
testModule::test();
return 0;
}
```
| true
|
2,900,676,729
|
Set /NODEFAULTLIB:vcomp for MSVC when linking caffe2::mkl with libiomp5md.lib
|
taras-janea
|
open
|
[
"module: build",
"module: windows",
"triaged",
"open source",
"release notes: build",
"topic: bug fixes"
] | 8
|
COLLABORATOR
|
Fixes:
- https://github.com/pytorch/pytorch/issues/113490
When using the Microsoft Visual C++ compiler with Intel® OpenMP, it is necessary to avoid linking the Microsoft OpenMP runtime library (vcomp) and to explicitly pass the name of the Intel® OpenMP compatibility library as a linker option.
More details: https://www.intel.com/content/www/us/en/docs/cpp-compiler/developer-guide-reference/2021-8/use-the-openmp-libraries.html#id-d42745e610
The PR sets `/NODEFAULTLIB:vcomp` link flag when linking caffe2::mkl with libiomp5md.lib to avoid linking the Microsoft OpenMP runtime library (vcomp).
The changes have been verified by checking build output with `VERBOSE=1`, for example:
```
C:\PROGRA~1\MICROS~1\2022\COMMUN~1\VC\Tools\MSVC\1442~1.344\bin\Hostx64\x64\link.exe /nologo caffe2\CMakeFiles\torch_global_deps.dir\__\torch\csrc\empty.c.obj /out:bin\torch_global_deps.dll /implib:lib\torch_global_deps.lib /pdb:bin\torch_global_deps.pdb /dll /version:0.0 /machine:x64 /ignore:4049 /ignore:4217 /ignore:4099 /debug /INCREMENTAL:NO /NODEFAULTLIB:vcomp -LIBPATH:\lib -LIBPATH:\lib\intel64 -LIBPATH:\lib\intel64_win -LIBPATH:\lib\win-x64 C:\lib\mkl_intel_lp64.lib C:\lib\mkl_intel_thread.lib C:\lib\mkl_core.lib C:\lib\libiomp5md.lib kernel32.lib user32.lib gdi32.lib winspool.lib shell32.lib ole32.lib oleaut32.lib uuid.lib comdlg32.lib advapi32.lib /MANIFEST:EMBED,ID=2
```
cc @malfet @seemethere @peterjc123 @mszhanyi @skyline75489 @nbcsm @iremyux @Blackhex
| true
|
2,900,668,259
|
Remove shebang line from easy_install generated python scripts on Windows only
|
taras-janea
|
open
|
[
"module: windows",
"triaged",
"open source",
"topic: bug fixes",
"topic: not user facing"
] | 9
|
COLLABORATOR
|
Fixes
- #108602
On Windows only, during the install step: remove the shebang line from Python scripts generated by `easy_install`.
cc @peterjc123 @mszhanyi @skyline75489 @nbcsm @iremyux @Blackhex
| true
|
2,900,656,691
|
[ROCm] Enable max_autotune run on inductor perf dashboard
|
jataylo
|
open
|
[
"module: rocm",
"open source",
"topic: not user facing",
"ciflow/rocm",
"ciflow/inductor-perf-test-nightly-rocm"
] | 5
|
COLLABORATOR
|
Enables max_autotune on ROCm inductor dashboard
cc @jeffdaily @sunway513 @jithunnair-amd @pruthvistony @ROCmSupport @dllehr-amd @hongxiayang @naromero77amd
| true
|
2,900,640,504
|
Improve error handling when checking CUDA version in case nvcc is not found
|
taras-janea
|
closed
|
[
"module: windows",
"triaged",
"open source",
"Merged",
"release notes: fx"
] | 9
|
COLLABORATOR
|
Fixes:
- https://github.com/pytorch/pytorch/issues/101138
**Description**
The PR enhances error handling in `_check_cuda_version` by verifying the existence of the `nvcc` executable before invoking `subprocess.check_output`. If `nvcc` is missing, a `FileNotFoundError` is raised with a clear message, guiding users to check their CUDA installation and path configuration.
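A minimal sketch of the guard (simplified and renamed; the real helper in `torch/utils/cpp_extension.py` has a different signature and resolves `CUDA_HOME` itself):
```python
import os
import subprocess

def check_nvcc_version(cuda_home: str) -> str:
    nvcc = os.path.join(cuda_home, "bin", "nvcc")
    if not os.path.exists(nvcc):
        # Fail early with an actionable message instead of a raw subprocess error.
        raise FileNotFoundError(
            f"nvcc was not found at {nvcc}. Check your CUDA installation and "
            "make sure CUDA_HOME/PATH point to it."
        )
    return subprocess.check_output([nvcc, "-V"], text=True)
```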
**Testing**
Manually tested with and without `nvcc` present in the expected path.
cc @peterjc123 @mszhanyi @skyline75489 @nbcsm @iremyux @Blackhex
| true
|
2,900,610,160
|
MPS Backend Error: ComplexDouble (complex128) Conversion Fails When Diffusers Transformer Creates 64‐bit Complex Tensors
|
mozzipa
|
open
|
[
"feature",
"triaged",
"module: complex",
"module: mps"
] | 0
|
NONE
|
### 🐛 Describe the bug
When running a diffusers-based transformer pipeline (e.g., the WanPipeline from diffusers) on Apple’s MPS device, an error is raised because a tensor is being converted to a ComplexDouble (i.e. torch.complex128) type. The error message is:
`TypeError: Trying to convert ComplexDouble to the MPS backend but it does not have support for that dtype.`
Since the MPS backend does not support double‐precision (64‑bit) real or complex types (only torch.float32 and its corresponding complex type, torch.cfloat, are supported), this error prevents the pipeline from running. The root cause appears to be that part of the transformer’s rotary embedding computation is using float64—leading to a ComplexDouble output—when run on MPS.
Steps to Reproduce:
Use diffusers (e.g., version 0.33.0.dev0) along with a recent nightly build of PyTorch (e.g., 2.7.0.dev20250305) on an Apple Silicon machine with MPS enabled.
Load a pipeline such as:
```python
from diffusers import AutoencoderKLWan, WanPipeline
vae = AutoencoderKLWan.from_pretrained("<model_path>", subfolder="vae", torch_dtype=torch.float32).to("mps")
pipe = WanPipeline.from_pretrained("<model_path>", vae=vae, torch_dtype=torch.float32).to("mps")
```
Run inference (e.g., call the pipeline with prompt embeddings), which triggers the transformer’s rotary embedding function.
The error occurs when torch.view_as_complex is called on a tensor that was computed as float64, resulting in an unsupported complex128 tensor.
Expected Behavior:
All operations on the MPS device should use supported dtypes. In particular, any complex-valued computation should use torch.cfloat (complex64) rather than torch.cdouble (complex128). An ideal solution would either (a) automatically downcast any double-precision inputs when on MPS or (b) warn and allow developers to control the dtype.
Workaround:
A temporary workaround is to monkey-patch torch.view_as_complex so that on MPS, if the input is float64 it is first cast to float32 before conversion. For example:
```python
_orig_view_as_complex = torch.view_as_complex
def patched_view_as_complex(tensor):
if tensor.device.type == "mps" and tensor.dtype == torch.float64:
tensor = tensor.to(torch.float32)
return _orig_view_as_complex(tensor)
torch.view_as_complex = patched_view_as_complex
```
Environment Details:
PyTorch: 2.7.0.dev20250305 (nightly)
OS: macOS (Apple Silicon, MPS enabled)
diffusers: 0.33.0.dev0
Other libraries: torchaudio 2.6.0.dev20250305, torchvision 0.22.0.dev20250305
Device: MPS
### Versions
PyTorch: 2.7.0.dev20250305 (nightly)
OS: macOS (Apple Silicon, MPS enabled)
diffusers: 0.33.0.dev0
Other libraries: torchaudio 2.6.0.dev20250305, torchvision 0.22.0.dev20250305
Device: MPS
cc @ezyang @anjali411 @dylanbespalko @mruberry @nikitaved @amjames @kulinseth @albanD @malfet @DenisVieriu97 @jhavukainen
| true
|
2,900,566,224
|
[pytree] fix previously failed dynamo tests
|
XuehaiPan
|
closed
|
[
"open source",
"Merged",
"ciflow/trunk",
"topic: not user facing",
"module: pytree",
"module: dynamo"
] | 3
|
COLLABORATOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #148676
* #148569
* __->__ #148669
cc @zou3519 @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @chauhang @amjames
| true
|
2,900,495,357
|
Fix CUPTI lookup to include target directory
|
mgorny
|
open
|
[
"triaged",
"open source",
"Stale"
] | 3
|
CONTRIBUTOR
|
The CUPTI library and headers are installed to the target subdirectory rather than the top-level prefix in conda-forge. Include `CUDAToolkit_TARGET_DIR` subdirectories in the CUPTI search paths so that CUPTI can be found in that environment.
| true
|
2,900,349,201
|
Enable FSDP2 on HPU device
|
AnantGulati
|
closed
|
[
"oncall: distributed",
"open source",
"Merged",
"ciflow/trunk",
"release notes: distributed (fsdp)"
] | 6
|
CONTRIBUTOR
|
The motivation of this PR is to enable FSDP2 collectives for HPU
cc @H-Huang @awgu @kwen2501 @wanchaol @fegin @fduwjj @wz337 @wconstab @d4l3k @c-p-i-o
| true
|
2,900,336,111
|
[HPU] Add hpu to fused kernels supported devices
|
Nitin-x2a
|
closed
|
[
"triaged",
"open source",
"Merged",
"ciflow/trunk",
"release notes: foreach_frontend"
] | 3
|
CONTRIBUTOR
|
This change adds "hpu" to the list of device types that support fused kernels in the optimizer, ensuring
compatibility with HPU backend.
Without this change, when `test_all_gather_extension_outer_size_stride` of `pytorch/test/distributed/_composable/fsdp/test_fully_shard_extensions.py` is run on 'hpu' backend, it fails with:
RuntimeError: fused=True requires all the params to be floating point Tensors
of supported devices: ['mps', 'cuda', 'xpu', 'cpu', 'privateuseone']
but torch.float32 and hpu
| true
|
2,900,332,600
|
DISABLED test_wrap_all_kwarg_dynamic_shapes (__main__.DynamicShapesHigherOrderOpTests)
|
pytorch-bot[bot]
|
closed
|
[
"triaged",
"module: flaky-tests",
"skipped",
"oncall: pt2",
"module: dynamo"
] | 7
|
NONE
|
Platforms: asan, linux, rocm, mac, macos
This test was disabled because it is failing in CI. See [recent examples](https://hud.pytorch.org/flakytest?name=test_wrap_all_kwarg_dynamic_shapes&suite=DynamicShapesHigherOrderOpTests&limit=100) and the most recent trunk [workflow logs](https://github.com/pytorch/pytorch/runs/38290922931).
Over the past 3 hours, it has been determined flaky in 7 workflow(s) with 14 failures and 7 successes.
**Debugging instructions (after clicking on the recent samples link):**
DO NOT ASSUME THINGS ARE OKAY IF THE CI IS GREEN. We now shield flaky tests from developers so CI will thus be green but it will be harder to parse the logs.
To find relevant log snippets:
1. Click on the workflow logs linked above
2. Click on the Test step of the job so that it is expanded. Otherwise, the grepping will not work.
3. Grep for `test_wrap_all_kwarg_dynamic_shapes`
4. There should be several instances run (as flaky tests are rerun in CI) from which you can study the logs.
<details><summary>Sample error message</summary>
```
Traceback (most recent call last):
File "/var/lib/jenkins/pytorch/test/dynamo/test_higher_order_ops.py", line 1641, in test_wrap_all_kwarg
self._test_wrap_simple(f, default_args_generator((x, y)), arg_count)
File "/var/lib/jenkins/pytorch/test/dynamo/test_higher_order_ops.py", line 191, in _test_wrap_simple
self.assertEqual(len(wrap_node.args), expected_num_wrap_args)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 4087, in assertEqual
raise error_metas.pop()[0].to_error( # type: ignore[index]
AssertionError: Scalars are not equal!
Expected 4 but got 7.
Absolute difference: 3
Relative difference: 0.75
To execute this test, run the following from the base repo dir:
PYTORCH_TEST_WITH_ROCM=1 python test/dynamo/test_dynamic_shapes.py DynamicShapesHigherOrderOpTests.test_wrap_all_kwarg_dynamic_shapes
This message can be suppressed by setting PYTORCH_PRINT_REPRO_ON_FAILURE=0
```
</details>
Test file path: `dynamo/test_dynamic_shapes.py`
cc @jeffdaily @sunway513 @jithunnair-amd @pruthvistony @ROCmSupport @dllehr-amd @jataylo @hongxiayang @naromero77amd @clee2000 @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @chauhang @amjames
| true
|
2,900,224,309
|
[AOTInductor] Codegen fix
|
Dinesh-Mareedu
|
open
|
[
"triaged",
"open source",
"module: inductor"
] | 5
|
NONE
|
-- Added C++ codegen support for the List[Optional[Tensor]] data type during function argument conversion from Python, as the existing codegen did not support it.
-- Changed std::vector to c10::ArrayRef to resolve the runtime issue below, which is caused by a signature mismatch.
**Error Message:**
correct signature: std::vector<at::Tensor, std::allocator<at::Tensor>> (c10::ArrayRef<at::Tensor>, c10::ArrayRef<at::Tensor>, c10::ArrayRef<double>, c10::ArrayRef<double>, c10::ArrayRef<long>, c10::ArrayRef<long>, std::string) accessed/called as: std::vector<at::Tensor, std::allocator<at::Tensor>> (std::vector<at::Tensor, std::allocator<at::Tensor>>, std::vector<at::Tensor, std::allocator<at::Tensor>>, std::vector<double, std::allocator<double>>, std::vector<double, std::allocator<double>>, std::vector<long, std::allocator<long>>, std::vector<long, std::allocator<long>>, std::string)
**Function signature:**
std::vector<at::Tensor> zentorch_function(at::TensorList self, at::TensorList inputs,
at::TensorList weights, at::ArrayRef<double> betas,
at::ArrayRef<double> alphas, at::IntArrayRef fuse,
at::IntArrayRef name)
std::string zentorch_op_name
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
| true
|
2,900,152,727
|
[Profiler][HPU] Fix incorrect availabilities for HPU
|
wdziurdz
|
closed
|
[
"triaged",
"open source",
"Merged",
"Reverted",
"ciflow/trunk",
"topic: not user facing",
"keep-going",
"ci-no-td"
] | 22
|
CONTRIBUTOR
|
Fixes #148661
| true
|
2,900,152,577
|
[Triton 3.3] Remove ROCm specific mm gemm template
|
AmdSampsa
|
closed
|
[
"module: rocm",
"open source",
"Merged",
"ciflow/trunk",
"topic: not user facing",
"module: inductor"
] | 11
|
COLLABORATOR
|
Fixes: https://github.com/pytorch/pytorch/issues/147121
Triton 3.3.x fixes the underlying problem.
This needs to be handled in a non-BC-breaking way, so the change is conditionalized on the Triton version.
cc @jeffdaily @sunway513 @jithunnair-amd @pruthvistony @ROCmSupport @dllehr-amd @jataylo @hongxiayang @naromero77amd @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
| true
|
2,900,149,450
|
[Profiler][HPU] Incorrect availabilities for the HPU device
|
wdziurdz
|
closed
|
[
"triaged",
"module: hpu"
] | 2
|
CONTRIBUTOR
|
### 🐛 Describe the bug
The commit in https://github.com/pytorch/pytorch/pull/148182 does not contain the complete availabilities for HPU devices.
We need to add the complete availabilities for HPU devices.
### Versions
Collecting environment information...
PyTorch version: 2.6.0+hpu.git99dbd97
Is debug build: False
CUDA used to build PyTorch: None
ROCM used to build PyTorch: N/A
OS: Ubuntu 22.04 LTS (x86_64)
GCC version: (Ubuntu 11.4.0-1ubuntu1~22.04) 11.4.0
Clang version: 14.0.5
CMake version: version 3.31.4
Libc version: glibc-2.35
Python version: 3.10.12 (main, Feb 4 2025, 14:57:36) [GCC 11.4.0] (64-bit runtime)
Python platform: Linux-5.15.0-131-generic-x86_64-with-glibc2.35
Is CUDA available: False
CUDA runtime version: No CUDA
CUDA_MODULE_LOADING set to: N/A
GPU models and configuration: No CUDA
Nvidia driver version: No CUDA
cuDNN version: No CUDA
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 43 bits physical, 48 bits virtual
Byte Order: Little Endian
CPU(s): 12
On-line CPU(s) list: 0-11
Vendor ID: GenuineIntel
Model name: Intel(R) Xeon(R) Gold 6132 CPU @ 2.60GHz
CPU family: 6
Model: 85
Thread(s) per core: 1
Core(s) per socket: 6
Socket(s): 2
Stepping: 0
BogoMIPS: 5187.81
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ss ht syscall nx pdpe1gb rdtscp lm constant_tsc arch_perfmon nopl xtopology tsc_reliable nonstop_tsc cpuid tsc_known_freq pni pclmulqdq vmx ssse3 fma cx16 pcid sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand hypervisor lahf_lm abm 3dnowprefetch invpcid_single pti ssbd ibrs ibpb stibp tpr_shadow vnmi ept vpid ept_ad fsgsbase tsc_adjust bmi1 avx2 smep bmi2 invpcid avx512f avx512dq rdseed adx smap clflushopt clwb avx512cd avx512bw avx512vl xsaveopt xsavec xsaves arat pku ospke md_clear flush_l1d arch_capabilities
Virtualization: VT-x
Hypervisor vendor: VMware
Virtualization type: full
L1d cache: 384 KiB (12 instances)
L1i cache: 384 KiB (12 instances)
L2 cache: 12 MiB (12 instances)
L3 cache: 38.5 MiB (2 instances)
NUMA node(s): 1
NUMA node0 CPU(s): 0-11
Vulnerability Gather data sampling: Unknown: Dependent on hypervisor status
Vulnerability Itlb multihit: KVM: Mitigation: VMX disabled
Vulnerability L1tf: Mitigation; PTE Inversion; VMX flush not necessary, SMT disabled
Vulnerability Mds: Mitigation; Clear CPU buffers; SMT Host state unknown
Vulnerability Meltdown: Mitigation; PTI
Vulnerability Mmio stale data: Mitigation; Clear CPU buffers; SMT Host state unknown
Vulnerability Reg file data sampling: Not affected
Vulnerability Retbleed: Mitigation; IBRS
Vulnerability Spec rstack overflow: Not affected
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl and seccomp
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; IBRS; IBPB conditional; STIBP disabled; RSB filling; PBRSB-eIBRS Not affected; BHI SW loop, KVM SW loop
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Versions of relevant libraries:
[pip3] numpy==1.26.4
[pip3] optree==0.14.0
[pip3] torch==2.6.0
[pip3] torch-debug==2.6.0
[pip3] torch_tb_profiler==0.4.0
[conda] Could not collect
cc @gujinghui @EikanWang @fengyuan14 @guangyey
| true
|
2,900,117,493
|
Remove deprecated std::aligned_storage_t
|
cyyever
|
closed
|
[
"triaged",
"open source",
"Merged",
"ciflow/trunk",
"topic: not user facing"
] | 7
|
COLLABORATOR
|
Fixes #ISSUE_NUMBER
| true
|
2,900,101,536
|
[HPU] Add HPU as a supported device for NestedTensor
|
Nitin-x2a
|
closed
|
[
"triaged",
"open source",
"Merged",
"ciflow/trunk",
"topic: not user facing",
"module: hpu"
] | 9
|
CONTRIBUTOR
|
This change enables basic NestedTensor operations on HPU,
fixing the runtime error when creating a NestedTensor on HPU.
- Extended `NestedTensorImpl` to recognize `hpu` as a valid storage device.
- Added `NestedTensorHPU` to `DispatchKey` parsing in `DispatchKey.cpp`.
- Updated `torchgen/model.py` to include `NestedTensorHPU` in `dispatch_keys`.
- Modified `native_functions.yaml` to enable `NestedTensorHPU` support for various ops.
Fixes #ISSUE_NUMBER
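A minimal way to exercise the change (a sketch; assumes an HPU build and runtime are available):
```python
import torch

# Before this change, constructing a NestedTensor on HPU raised a RuntimeError.
nt = torch.nested.nested_tensor(
    [torch.randn(2, 3), torch.randn(4, 3)], device="hpu"
)
print(nt.is_nested, nt.device)
```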
cc @jeromean @bsochack @sujoysaraswati
| true
|
2,900,027,217
|
[docs] fix autograd description on convex function case
|
dw61
|
closed
|
[
"triaged",
"open source",
"Merged",
"topic: not user facing"
] | 5
|
CONTRIBUTOR
|
The sub-gradient of minimum norm is the least steep descent direction.
```python
import torch
x = torch.tensor([-2, -1, 0, 1, 2.], requires_grad=True)
torch.relu(x).sum().backward()
print(x.grad) # tensor([0., 0., 0., 1., 1.])
y = torch.tensor([-2, -1, 0, 1, 2.], requires_grad=True)
torch.abs(y).sum().backward()
print(y.grad) # tensor([-1., -1., 0., 1., 1.])
```
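In other words (a quick check of the claim): at the kink, the subdifferentials are

$$\partial\,\mathrm{relu}(0) = [0, 1], \qquad \partial\,|y|\,\big|_{y=0} = [-1, 1],$$

and in both cases the minimum-norm element is $0$, which matches the `0.` entry printed for `x.grad` and `y.grad`.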
cc @lezcano according to git blame
cc @albanD @soulitzer according to autograd codeowners
(How can I request a reviewer? I don't have the button on the right)
| true
|
2,900,025,097
|
[associative_scan] Refactoring of input checking and dynamo invocation
|
bohnstingl
|
closed
|
[
"triaged",
"open source",
"Merged",
"ciflow/trunk",
"topic: not user facing",
"module: dynamo"
] | 10
|
COLLABORATOR
|
This PR is the counterpart of https://github.com/pytorch/pytorch/pull/142125 for the associative_scan operation. It changes how the input checks are performed: the combine_fn is no longer invoked in the frontend to check the output trees; instead, dynamo is used for that.
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @chauhang @amjames @ydwu4
| true
|
2,899,909,032
|
Allow to run flex_attention on HPU
|
m-a-nowak
|
closed
|
[
"triaged",
"open source",
"Merged",
"ciflow/trunk",
"topic: not user facing",
"module: flex attention"
] | 11
|
CONTRIBUTOR
|
HPU specific implementation details are to be located in out-of-tree HPU library.
cc @Chillee @drisspg @yanboliang @BoyuanFeng
| true
|
2,899,882,270
|
Extra onnx::Neg_2 input after torch.onnx.export
|
xeasonx
|
open
|
[
"module: onnx",
"triaged",
"OSS contribution wanted"
] | 2
|
NONE
|
### 🐛 Describe the bug
Convert huggingface model meta-llama/Llama-3.2-1B to ONNX
```python
import torch
from transformers import AutoModelForCausalLM

model_name = "meta-llama/Llama-3.2-1B"  # the model named in the description above

input_ids = torch.ones((1, 256), dtype=torch.long)
attention_mask = torch.ones((1, 256), dtype=torch.long)
model = AutoModelForCausalLM.from_pretrained(
model_name,
torch_dtype=torch.float16,
device_map="cpu"
)
model.config.use_cache = False
model.eval()
torch.onnx.export(
model,
(input_ids, attention_mask),
"llama3/llama.onnx",
input_names=["input_ids", "attention_mask"],
output_names=["logits"],
dynamic_axes={
"input_ids": {1: "sequence_length"},
"attention_mask": {1: "sequence_length"},
"logits": {1: "sequence_length"}
},
opset_version=14
)
```
I then opened the ONNX graph using Netron and found an onnx::Neg_2 input which should not exist; it causes quantization to fail.

### Versions
torch: 2.6.0 cpu
transformers: 4.47.1
python 3.10
| true
|
2,899,827,571
|
Enable ruff check for `torch/utils/data/*.ipynb`
|
zeshengzong
|
open
|
[
"triaged",
"open source",
"release notes: dataloader"
] | 3
|
CONTRIBUTOR
|
Fixes part of #146411
Enable ruff check for `torch/utils/data/*.ipynb` files
## Test Result
```bash
lintrunner -a --take RUFF torch/utils/data/*.ipynb
```

cc @Skylion007
| true
|
2,899,784,759
|
Enable qint8 and quint8 add for AArch64 using ACL directly
|
fadara01
|
closed
|
[
"module: cpu",
"open source",
"module: arm",
"Merged",
"release notes: quantization",
"ciflow/linux-aarch64",
"arm priority"
] | 7
|
COLLABORATOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #148653
* #148585
This enables qint8 and quint8 add for AArch64 through Arm Compute Library (ACL) directly.
The relative performance improvement is ~15x with OMP_NUM_THREADS=1 and ~5.4x with OMP_NUM_THREADS=32.
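For reference, a minimal way to exercise this path in eager mode (a sketch; scales and zero-points are arbitrary):
```python
import torch

a = torch.quantize_per_tensor(torch.randn(1024), scale=0.1, zero_point=0, dtype=torch.qint8)
b = torch.quantize_per_tensor(torch.randn(1024), scale=0.1, zero_point=0, dtype=torch.qint8)
# Quantized elementwise add; per this PR, on AArch64 it should go through ACL directly.
out = torch.ops.quantized.add(a, b, 0.1, 0)
```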
Co-authored-by: David Svantesson <david.svantesson-yeung@arm.com>
cc @jgong5 @mingfeima @XiaobingSuper @sanchitintel @ashokei @jingxu10 @malfet @snadampal @milpuz01
| true
|
2,899,737,968
|
[Intel GPU] Fix SDPA dummy LSE output to match meta function
|
DDEle
|
closed
|
[
"module: cpu",
"open source",
"Merged",
"ciflow/trunk",
"topic: not user facing",
"keep-going",
"ciflow/xpu"
] | 6
|
CONTRIBUTOR
|
To fix XPU patched UTs including
```bash
pytest -vs third_party/torch-xpu-ops/test/xpu/test_meta_xpu.py::TestMetaXPU::test_dispatch_symbolic_meta_outplace_nn_functional_scaled_dot_product_attention_xpu_bfloat16
```
cc @jgong5 @mingfeima @XiaobingSuper @sanchitintel @ashokei @jingxu10
| true
|
2,899,700,103
|
Avoid fork for TORCHINDUCTOR_COMPILE_THREADS > 1
|
AmdSampsa
|
open
|
[
"triaged",
"oncall: pt2",
"module: inductor"
] | 0
|
COLLABORATOR
|
For inductor-related unit tests, setting the env variable `TORCHINDUCTOR_COMPILE_THREADS` to a value greater than 1 often leads to flaky behaviour.
I think this is because inductor combines multithreading with forking (multiprocessing): this is a notorious pitfall in concurrent programming and is known to create hard-to-debug, flaky behaviour.
It seems that the current implementation does this:
- `import torch` -> starts several threads that are instantiated at the c++ code
- forks multiprocesses in order to do the inductor code-generation and tuning in parallel
Let's break this down
In `torch/_inductor/async_compile.py`, `AsyncCompile` uses `torch/_inductor/compile_worker/subproc_pool.py`
There is this comment in `class SubprocPool`:
```
Mimic a concurrent.futures.ProcessPoolExecutor, but wrap it in
a subprocess.Popen() to try to avoid issues with forking/spawning
```
So the author is aware of the multithreading + multiprocessing problem. For this reason, independent Python processes are created with `subprocess.Popen` (i.e. in a separate Python interpreter; it's like running a new Python process from the command line).
The program being run this way by inductor is:
```bash
torch/_inductor/compile_worker/__main__.py
```
But in there we have:
```
...
from torch._inductor.async_compile import ...
...
def main():
...
pre_fork_setup()
_async_compile_initializer(args.parent)
SubprocMain(args.pickler, args.kind, args.workers, read_fd, write_fd).main(..)
```
Where `SubprocMain` is using python multiprocessing.
So the problem is in `torch/_inductor/compile_worker/__main__.py`: it both imports torch and afterwards forks multiprocesses.
I confirmed this with gdb: `torch/_inductor/compile_worker/__main__.py` starts lots of threads at the c++ level.
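For illustration, a minimal sketch (not inductor code) of the fork-free pattern: workers started from a "spawn" context get a fresh interpreter and do not inherit the threads created by `import torch`:
```python
import multiprocessing as mp

def compile_one(job_id: int) -> int:
    import torch  # imported inside the fresh worker, not inherited via fork
    return torch.randn(job_id + 1).numel()

if __name__ == "__main__":
    # "spawn" starts a new interpreter per worker, similar in spirit to subprocess.Popen.
    ctx = mp.get_context("spawn")
    with ctx.Pool(processes=4) as pool:
        print(pool.map(compile_one, range(4)))
```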
cc @chauhang @penguinwu @voznesenskym @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @amjames @aakhundov
| true
|
2,899,696,184
|
[Inductor-CPU] With cpp-wrapper, some ATen ops don't get profiled with PyTorch profiler
|
sanchitintel
|
closed
|
[
"oncall: pt2",
"oncall: cpu inductor"
] | 0
|
COLLABORATOR
|
### 🐛 Describe the bug
With cpp-wrapper, some ATen ops don't get profiled with PyTorch profiler
Here's a code example (needs torchao), in which `_weight_int8pack_mm` ATen op doesn't appear in PyTorch profiling results.
<details>
```python
# Most of the code has been adapted from a script authored by leslie-fang-intel
import torch
from torch.profiler import profile, record_function, ProfilerActivity
import torch._inductor.config as config
from torchao.quantization import quant_api
from torchao.utils import unwrap_tensor_subclass
config.freezing = True
config.cpp_wrapper = True
M=1
N=4096
K=4096
class Model(torch.nn.Module):
def __init__(self,):
super().__init__()
self.linear = torch.nn.Linear(K, N)
def forward(self, x, x2):
tmp = self.linear(x)
return tmp
if __name__ == "__main__":
m = Model().eval()
input = torch.randn(M, K)
input2 = torch.randn(M, N)
with torch.autocast(device_type="cpu", dtype=torch.bfloat16), torch.no_grad():
quant_api.quantize_(m, quant_api.int8_weight_only(), set_inductor_config=False)
cm = torch.compile(m)
res = cm(input, input2)
print("---- benchmark Inductor WOQ INT8 ----", flush=True)
with profile(activities=[ProfilerActivity.CPU], record_shapes=True) as prof:
cm(input, input2)
print(prof.key_averages(group_by_input_shape=True).table(sort_by="cpu_time_total", row_limit=100), flush=True)
```
</details>
Probably not a high priority issue.
### Versions
Main branch
cc @chauhang @penguinwu
| true
|
2,899,692,643
|
Use oneDNN v3.7.1 for Intel GPU
|
ZhiweiYan-96
|
closed
|
[
"module: mkldnn",
"open source",
"ciflow/linux-aarch64"
] | 3
|
COLLABORATOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #148649
cc @gujinghui @PenghuiCheng @XiaobingSuper @jianyuh @jgong5 @mingfeima @sanchitintel @ashokei @jingxu10 @min-jean-cho @yanbing-j @Guobing-Chen @Xia-Weiwen @snadampal
| true
|
2,899,659,313
|
Bump Clang-tidy to 19.1.4
|
cyyever
|
closed
|
[
"triaged",
"open source",
"Merged",
"ciflow/trunk",
"topic: not user facing",
"ciflow/inductor"
] | 3
|
COLLABORATOR
|
Clang-tidy 19 has more powerful clang-analyzer checks that can detect subtle bugs. New checks such as misc-use-internal-linkage help identify variables or functions that could be given internal linkage, thus reducing binary sizes.
Some new checks are temporarily disabled for later enabling; additional warnings have been fixed or suppressed.
| true
|
2,899,634,288
|
[SGD] Add SGD capturable API and tests
|
zeshengzong
|
open
|
[
"open source",
"Stale",
"release notes: optim"
] | 3
|
CONTRIBUTOR
|
Fixes #118018
| true
|
2,899,618,156
|
[XPU] Add an implicit conversion from XPUStream to sycl::queue*
|
zhiweij1
|
closed
|
[
"triaged",
"open source",
"Merged",
"ciflow/trunk",
"release notes: xpu"
] | 19
|
CONTRIBUTOR
|
# Motivation
Currently, in PyTorch XPU, `cudaStream_t` is mapped to `sycl::queue&`, so an implicit cast from `XPUStream` to `sycl::queue&` is provided, just like `CUDAStream` has an implicit cast to `cudaStream_t`.
But on the SYCLomatic side, we migrate `cudaStream_t` to `sycl::queue*` rather than `sycl::queue&`. (One reason is that `cudaStream_t` is actually a pointer, so users can do anything with that integer. Another reason is that the early `sycl::queue` was not implemented via a pointer, so copying by value is not desirable.)
Without this PR:
```
cudaStream_t a = getCurrentCUDAStream();
cudaStream_t b = getCurrentCUDAStream().stream();
```
need be migrated to:
```
queue_ptr a = &(sycl::queue&)getCurrentXPUStream();
queue_ptr b = &(getCurrentXPUStream().queue());
```
With this PR:
```
queue_ptr a = getCurrentXPUStream();
queue_ptr b = &(getCurrentXPUStream().queue());
```
| true
|
2,899,597,898
|
[inductor][torchbench][CI] timm models got obvious performance drop with --ci flag
|
LifengWang
|
closed
|
[
"module: ci",
"triaged",
"oncall: pt2",
"module: dynamic shapes"
] | 1
|
CONTRIBUTOR
|
### 🐛 Describe the bug
I observed that the accuracy test for timm_models with max_autotune gets several times slower end-to-end when the `--ci` flag is enabled. For example, the e2e time of spnasnet_100 increased from 42s to 4m52.451s after adding the `--ci` flag.
This issue seems to be caused by the line below.
https://github.com/pytorch/pytorch/blob/aade4fbd55a07aaa23dbdfe055d70cd503fd0059/benchmarks/dynamo/common.py#L3544
bad commit: aade4fbd55a07aaa23dbdfe055d70cd503fd0059
suspected guilty commit: 84abeaad5c147ecfbe6444342ba331bf60b9a60e
The log without --ci flag:
```
$ time python benchmarks/dynamo/timm_models.py --accuracy --timing --explain --print-compilation-time --inductor --inductor-compile-mode max-autotune --dynamic-shapes --dynamic-batch-only --device cpu --inference --amp --freezing --total-partitions 2 --partition-id 0 --only spnasnet_100
loading model: 0it [00:02, ?it/s]
cpu eval spnasnet_100
WARNING:common:Trying to call the empty_gpu_cache for device: cpu, which is not in list [cuda, xpu]
WARNING:common:Trying to call the empty_gpu_cache for device: cpu, which is not in list [cuda, xpu]
WARNING:common:Trying to call the empty_gpu_cache for device: cpu, which is not in list [cuda, xpu]
AUTOTUNE linear_unary(8x1280, 1000x1280, 1000)
_linear_pointwise 0.1689 ms 100.0%
cpp_CppMicroGemmAMX_0 0.1776 ms 95.1%
SingleProcess AUTOTUNE benchmarking takes 0.2599 seconds and 2.2344 seconds precompiling for 2 choices
Compilation time (from dynamo_timed): 27.34581936
pass
WARNING:common:Trying to call the empty_gpu_cache for device: cpu, which is not in list [cuda, xpu]
TIMING: _recursive_pre_grad_passes:0.03108 _recursive_joint_graph_passes:0.14904 _recursive_post_grad_passes:0.01873 linear_unary_template_precompiling:2.23426 linear_unary_template_autotuning:0.25924 async_compile.wait:1.63709 code_gen:1.89927 inductor_compile:16.11893 backend_compile:25.3792 entire_frame_compile:27.34582 gc:0.00116 total_wall_time:27.34582
STATS: call_* op count: 190 | FakeTensorMode.__torch_dispatch__:37960 | attempt fast:2094 | fast is_contiguous:1100 | ProxyTorchDispatchMode.__torch_dispatch__:6036 | slow no contiguity match:940 | fast channels_last:54 | FakeTensor.__torch_dispatch__:1431
Dynamo produced 1 graphs covering 190 ops with 0 graph breaks (0 unique)
real 0m42.220s
user 28m5.698s
sys 0m54.603s
```
The log with --ci flag:
```
$ time python benchmarks/dynamo/timm_models.py --ci --accuracy --timing --explain --print-compilation-time --inductor --inductor-compile-mode max-autotune --dynamic-shapes --dynamic-batch-only --device cpu --inference --amp --freezing --total-partitions 2 --partition-id 0 --only spnasnet_100
loading model: 0it [00:02, ?it/s]
cpu eval spnasnet_100
WARNING:common:Trying to call the empty_gpu_cache for device: cpu, which is not in list [cuda, xpu]
WARNING:common:Trying to call the empty_gpu_cache for device: cpu, which is not in list [cuda, xpu]
WARNING:common:Trying to call the empty_gpu_cache for device: cpu, which is not in list [cuda, xpu]
AUTOTUNE linear_unary(8x1280, 1000x1280, 1000)
_linear_pointwise 0.1696 ms 100.0%
cpp_CppMicroGemmAMX_0 0.1721 ms 98.6%
SingleProcess AUTOTUNE benchmarking takes 0.2736 seconds and 2.1789 seconds precompiling for 2 choices
Compilation time (from dynamo_timed): 190.432641936
pass
WARNING:common:Trying to call the empty_gpu_cache for device: cpu, which is not in list [cuda, xpu]
TIMING: _recursive_pre_grad_passes:0.74099 _recursive_joint_graph_passes:0.17998 _recursive_post_grad_passes:0.03466 linear_unary_template_precompiling:2.1788 linear_unary_template_autotuning:0.27299 async_compile.wait:1.57446 code_gen:1.84068 inductor_compile:19.45921 backend_compile:182.22927 entire_frame_compile:190.43264 gc:0.00015 total_wall_time:190.43264
STATS: call_* op count: 190 | FakeTensorMode.__torch_dispatch__:37960 | attempt fast:2094 | fast is_contiguous:1100 | ProxyTorchDispatchMode.__torch_dispatch__:6036 | slow no contiguity match:940 | fast channels_last:54 | FakeTensor.__torch_dispatch__:1431
Dynamo produced 1 graphs covering 190 ops with 0 graph breaks (0 unique)
real 4m52.451s
user 30m31.642s
sys 0m50.273s
```
cc @seemethere @malfet @pytorch/pytorch-dev-infra @chauhang @penguinwu @ezyang @bobrenjc93 @chuanqi129
### Versions
Collecting environment information...
PyTorch version: 2.7.0a0+gitaade4fb
Is debug build: False
CUDA used to build PyTorch: None
ROCM used to build PyTorch: N/A
OS: Ubuntu 22.04.5 LTS (x86_64)
GCC version: (Ubuntu 12.3.0-1ubuntu1~22.04) 12.3.0
Clang version: Could not collect
CMake version: version 3.22.1
Libc version: glibc-2.35
Python version: 3.10.16 | packaged by conda-forge | (main, Dec 5 2024, 14:16:10) [GCC 13.3.0] (64-bit runtime)
Python platform: Linux-5.15.0-134-generic-x86_64-with-glibc2.35
Is CUDA available: False
CUDA runtime version: No CUDA
CUDA_MODULE_LOADING set to: N/A
GPU models and configuration: No CUDA
Nvidia driver version: No CUDA
cuDNN version: No CUDA
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 52 bits physical, 57 bits virtual
Byte Order: Little Endian
CPU(s): 384
On-line CPU(s) list: 0-383
Vendor ID: GenuineIntel
Model name: Intel(R) Xeon(R) 6972P
CPU family: 6
Model: 173
Thread(s) per core: 2
Core(s) per socket: 96
Socket(s): 2
Stepping: 1
CPU max MHz: 3900.0000
CPU min MHz: 800.0000
BogoMIPS: 4800.00
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc art arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc cpuid aperfmperf tsc_known_freq pni pclmulqdq dtes64 monitor ds_cpl vmx smx est tm2 ssse3 sdbg fma cx16 xtpr pdcm pcid dca sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm 3dnowprefetch cpuid_fault epb cat_l3 cat_l2 cdp_l3 invpcid_single cdp_l2 ssbd mba ibrs ibpb stibp ibrs_enhanced tpr_shadow vnmi flexpriority ept vpid ept_ad fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid cqm rdt_a avx512f avx512dq rdseed adx smap avx512ifma clflushopt clwb intel_pt avx512cd sha_ni avx512bw avx512vl xsaveopt xsavec xgetbv1 xsaves cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local split_lock_detect avx_vnni avx512_bf16 wbnoinvd dtherm ida arat pln pts hwp hwp_act_window hwp_epp hwp_pkg_req avx512vbmi umip pku ospke waitpkg avx512_vbmi2 gfni vaes vpclmulqdq avx512_vnni avx512_bitalg tme avx512_vpopcntdq la57 rdpid bus_lock_detect cldemote movdiri movdir64b enqcmd fsrm md_clear serialize tsxldtrk pconfig arch_lbr amx_bf16 avx512_fp16 amx_tile amx_int8 flush_l1d arch_capabilities
Virtualization: VT-x
L1d cache: 9 MiB (192 instances)
L1i cache: 12 MiB (192 instances)
L2 cache: 384 MiB (192 instances)
L3 cache: 960 MiB (2 instances)
NUMA node(s): 2
NUMA node0 CPU(s): 0-95,192-287
NUMA node1 CPU(s): 96-191,288-383
Vulnerability Gather data sampling: Not affected
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Not affected
Vulnerability Reg file data sampling: Not affected
Vulnerability Retbleed: Not affected
Vulnerability Spec rstack overflow: Not affected
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl and seccomp
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Enhanced / Automatic IBRS; IBPB conditional; RSB filling; PBRSB-eIBRS Not affected; BHI BHI_DIS_S
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Versions of relevant libraries:
[pip3] numpy==2.1.2
[pip3] torch==2.7.0a0+gitaade4fb
[pip3] torchaudio==2.6.0.dev20241218+cpu
[pip3] torchvision==0.22.0.dev20241218+cpu
[conda] mkl-include 2025.0.1 pypi_0 pypi
[conda] mkl-static 2025.0.1 pypi_0 pypi
[conda] numpy 2.1.2 pypi_0 pypi
[conda] torch 2.7.0a0+gitaade4fb dev_0 <develop>
[conda] torchaudio 2.6.0.dev20241218+cpu pypi_0 pypi
[conda] torchvision 0.22.0.dev20241218+cpu pypi_0 pypi
| true
|
2,899,508,439
|
DISABLED test_set_stance_aot_eager_then_compile (__main__.DecoratorTests)
|
pytorch-bot[bot]
|
open
|
[
"triaged",
"module: flaky-tests",
"skipped",
"oncall: pt2",
"module: dynamo"
] | 2
|
NONE
|
Platforms: linux, mac, macos, rocm, asan
This test was disabled because it is failing in CI. See [recent examples](https://hud.pytorch.org/flakytest?name=test_set_stance_aot_eager_then_compile&suite=DecoratorTests&limit=100) and the most recent trunk [workflow logs](https://github.com/pytorch/pytorch/runs/38282592450).
Over the past 3 hours, it has been determined flaky in 14 workflow(s) with 14 failures and 14 successes.
**Debugging instructions (after clicking on the recent samples link):**
DO NOT ASSUME THINGS ARE OKAY IF THE CI IS GREEN. We now shield flaky tests from developers so CI will thus be green but it will be harder to parse the logs.
To find relevant log snippets:
1. Click on the workflow logs linked above
2. Click on the Test step of the job so that it is expanded. Otherwise, the grepping will not work.
3. Grep for `test_set_stance_aot_eager_then_compile`
4. There should be several instances run (as flaky tests are rerun in CI) from which you can study the logs.
<details><summary>Sample error message</summary>
```
Traceback (most recent call last):
File "/var/lib/jenkins/workspace/test/dynamo/test_decorators.py", line 1205, in test_set_stance_aot_eager_then_compile
self.assertEqual(cnts.op_count, 2)
~~~~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/testing/_internal/common_utils.py", line 4087, in assertEqual
raise error_metas.pop()[0].to_error( # type: ignore[index]
...<4 lines>...
)
AssertionError: Scalars are not equal!
Expected 2 but got 4.
Absolute difference: 2
Relative difference: 1.0
To execute this test, run the following from the base repo dir:
python test/dynamo/test_decorators.py DecoratorTests.test_set_stance_aot_eager_then_compile
This message can be suppressed by setting PYTORCH_PRINT_REPRO_ON_FAILURE=0
```
</details>
Test file path: `dynamo/test_decorators.py`
cc @clee2000 @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @chauhang @amjames
| true
|
2,899,508,355
|
DISABLED test_symint_in_slice_dynamic_shapes (__main__.DynamicShapesHigherOrderOpTests)
|
pytorch-bot[bot]
|
closed
|
[
"triaged",
"module: flaky-tests",
"skipped",
"oncall: pt2",
"module: dynamo"
] | 6
|
NONE
|
Platforms: asan, linux, rocm, mac, macos
This test was disabled because it is failing in CI. See [recent examples](https://hud.pytorch.org/flakytest?name=test_symint_in_slice_dynamic_shapes&suite=DynamicShapesHigherOrderOpTests&limit=100) and the most recent trunk [workflow logs](https://github.com/pytorch/pytorch/runs/38278999890).
Over the past 3 hours, it has been determined flaky in 16 workflow(s) with 32 failures and 16 successes.
**Debugging instructions (after clicking on the recent samples link):**
DO NOT ASSUME THINGS ARE OKAY IF THE CI IS GREEN. We now shield flaky tests from developers so CI will thus be green but it will be harder to parse the logs.
To find relevant log snippets:
1. Click on the workflow logs linked above
2. Click on the Test step of the job so that it is expanded. Otherwise, the grepping will not work.
3. Grep for `test_symint_in_slice_dynamic_shapes`
4. There should be several instances run (as flaky tests are rerun in CI) from which you can study the logs.
<details><summary>Sample error message</summary>
```
Traceback (most recent call last):
File "/var/lib/jenkins/pytorch/test/dynamo/test_higher_order_ops.py", line 320, in test_symint_in_slice
self._test_wrap_simple(
File "/var/lib/jenkins/pytorch/test/dynamo/test_higher_order_ops.py", line 191, in _test_wrap_simple
self.assertEqual(len(wrap_node.args), expected_num_wrap_args)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 4087, in assertEqual
raise error_metas.pop()[0].to_error( # type: ignore[index]
AssertionError: Scalars are not equal!
Expected 7 but got 9.
Absolute difference: 2
Relative difference: 0.2857142857142857
To execute this test, run the following from the base repo dir:
PYTORCH_TEST_WITH_ROCM=1 python test/dynamo/test_dynamic_shapes.py DynamicShapesHigherOrderOpTests.test_symint_in_slice_dynamic_shapes
This message can be suppressed by setting PYTORCH_PRINT_REPRO_ON_FAILURE=0
```
</details>
Test file path: `dynamo/test_dynamic_shapes.py`
cc @jeffdaily @sunway513 @jithunnair-amd @pruthvistony @ROCmSupport @dllehr-amd @jataylo @hongxiayang @naromero77amd @clee2000 @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @chauhang @amjames
| true
|
2,899,501,732
|
Feature request: throw `torch.cuda.OutOfMemoryError` for TorchScript OOM
|
njzjz
|
open
|
[
"oncall: jit"
] | 0
|
CONTRIBUTOR
|
### 🚀 The feature, motivation and pitch
Currently, `torch.cuda.OutOfMemoryError` is used to throw OOM errors, which is easy to catch. However, TorchScript throws `RuntimeError` instead of `OutOfMemoryError`, as shown below. Thus, when one wants to catch a specific OOM error, it's not easy to do so.
```py
import torch
def z():
return torch.zeros(9999999999999999, device="cuda:0")
z()
```
> OutOfMemoryError: CUDA out of memory. Tried to allocate 37252902.99 GiB. GPU 0 has a total capacity of 14.74 GiB of which 14.64 GiB is free. Process 8366 has 100.00 MiB memory in use. Of the allocated memory 0 bytes is allocated by PyTorch, and 0 bytes is reserved by PyTorch but unallocated. If reserved but unallocated memory is large try setting PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True to avoid fragmentation. See documentation for Memory Management (https://pytorch.org/docs/stable/notes/cuda.html#environment-variables)
```py
torch.jit.script(z)()
```
> RuntimeError: The following operation failed in the TorchScript interpreter.
> Traceback of TorchScript (most recent call last):
> File "<ipython-input-5-fdb3a6751e61>", line 2, in z
> def z():
> return torch.zeros(9999999999999999, device="cuda:0")
> ~~~~~~~~~~~ <--- HERE
> RuntimeError: CUDA out of memory. Tried to allocate 37252902.99 GiB. GPU 0 has a total capacity of 14.74 GiB of which 14.64 GiB is free. Process 8366 has 100.00 MiB memory in use. Of the allocated memory 0 bytes is allocated by PyTorch, and 0 bytes is reserved by PyTorch but unallocated. If reserved but unallocated memory is large try setting PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True to avoid fragmentation. See documentation for Memory Management (https://pytorch.org/docs/stable/notes/cuda.html#environment-variables)
Hope `OutOfMemoryError` can be thrown in the second situation.
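For illustration, the kind of handling this would enable (a sketch reusing `z` from above; today the scripted call only reaches the generic `RuntimeError` branch):
```python
try:
    torch.jit.script(z)()
except torch.cuda.OutOfMemoryError:
    print("OOM: shrink the allocation and retry")
except RuntimeError as e:
    print("generic failure:", e)
```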
### Alternatives
_No response_
### Additional context
_No response_
cc @EikanWang @jgong5 @wenzhe-nrv @sanchitintel
| true
|
2,899,457,460
|
Fix torch.utils.checkpoint import error
|
alphahui
|
closed
|
[
"open source"
] | 4
|
NONE
|
# Problem
We were trying to use the torch.utils.checkpoint.checkpoint function directly with only `import torch`, without importing torch.utils.checkpoint in our script. However, we encountered the import error "AttributeError: module 'torch.utils' has no attribute 'checkpoint'". We would like to propose a small fix for this.
# Test case
```
import torch
import torch.nn as nn
# Define a simple multi - layer perceptron
class SimpleMLP(nn.Module):
def __init__(self, input_size, hidden_size, output_size):
super(SimpleMLP, self).__init__()
self.fc1 = nn.Linear(input_size, hidden_size)
self.relu = nn.ReLU()
self.fc2 = nn.Linear(hidden_size, output_size)
def forward(self, x):
# Apply checkpointing to the first fully - connected layer
def custom_forward(*inputs):
layer, input_tensor = inputs
return layer(input_tensor)
# error of module 'torch.utils' has no attribute 'checkpoint' here
out = torch.utils.checkpoint.checkpoint(custom_forward, self.fc1, x)
out = self.relu(out)
out = self.fc2(out)
return out
# Set the input parameters
input_size = 5
hidden_size = 10
output_size = 5
batch_size = 4
# Create an instance of the MLP
model = SimpleMLP(input_size, hidden_size, output_size)
# Generate some random input data
x = torch.randn(batch_size, input_size, requires_grad=True)
# Forward pass
output = model(x)
# Compute the loss (using a simple sum as an example)
loss = output.sum()
# Backward pass
loss.backward()
# Print the gradients of the input tensor
print("Gradients of the input tensor:", x.grad)
```
## Before fix
```
Traceback (most recent call last):
File "C:\PATH\torch_checkpoint_importBug.py", line 38, in <module>
output = model(x)
File "C:\PATH\myenv\lib\site-packages\torch\nn\modules\module.py", line 1736, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
File "C:\PATH\myenv\lib\site-packages\torch\nn\modules\module.py", line 1747, in _call_impl
return forward_call(*args, **kwargs)
File "C:\PATH\torch_checkpoint_importBug.py", line 19, in forward
out = torch.utils.checkpoint.checkpoint(custom_forward, self.fc1, x)
AttributeError: module 'torch.utils' has no attribute 'checkpoint'
```
## After fix
```
C:\PATH\myenv\lib\site-packages\torch\_dynamo\eval_frame.py:632: UserWarning: torch.utils.checkpoint: the use_reentrant parameter should be passed explicitly. In version 2.5 we will raise an exception if use_reentrant is not passed. use_reentrant=False is recommended, but if you need to preserve the current default behavior, you can pass use_reentrant=True. Refer to docs for more details on the differences between the two variants.
return fn(*args, **kwargs)
Gradients of the input tensor: tensor([[ 0.2166, 0.3589, 0.0104, 0.0993, 0.0271],
[-0.0243, 0.1251, -0.0784, 0.0148, 0.0960],
[ 0.1056, 0.2088, 0.0594, -0.0161, 0.1761],
[-0.1030, 0.2255, -0.2244, 0.1612, 0.1799]])
```
# Brief description of solution
We added a \_\_getattr__ function that returns the `checkpoint` submodule (as imported by "import torch.utils.checkpoint as checkpoint") when "torch.utils.checkpoint" is requested. We cannot simply add the line "import torch.utils.checkpoint as checkpoint" to \_\_init__.py, as that would lead to a long chain of circular imports; the \_\_getattr__ function bypasses this issue because we are not actually importing the module at load time and only need a reference to it.
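A minimal sketch of the approach (a module-level `__getattr__`, PEP 562, in `torch/utils/__init__.py`; illustrative rather than the literal diff):
```python
# torch/utils/__init__.py (sketch)
import importlib

def __getattr__(name):
    if name == "checkpoint":
        # Import lazily on first attribute access, sidestepping the circular
        # import that an eager top-level import would trigger.
        return importlib.import_module("torch.utils.checkpoint")
    raise AttributeError(f"module {__name__!r} has no attribute {name!r}")
```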
| true
|
2,899,427,750
|
[Intel GPU][quant] Refine zero-point memory creation
|
ZhiweiYan-96
|
closed
|
[
"module: cpu",
"open source",
"Merged",
"ciflow/trunk",
"topic: not user facing",
"ciflow/inductor",
"keep-going",
"ciflow/xpu"
] | 10
|
COLLABORATOR
|
# Motivation
This PR skips zero-point GPU memory creation when zero-point = 0, as that memory would not be used by the oneDNN library. This helps save the 1~3 H2D copy overheads per QLinear/QConv kernel.
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #148640
cc @jgong5 @mingfeima @XiaobingSuper @sanchitintel @ashokei @jingxu10
| true
|
2,899,419,292
|
[inductor][cpu] poolformer_m36 AMP static shape multiple thread performance regression in 2025-03-03 nightly release
|
zxd1997066
|
closed
|
[
"oncall: pt2",
"oncall: cpu inductor"
] | 0
|
CONTRIBUTOR
|
### 🐛 Describe the bug
<p>AMP static shape default wrapper</p><table border="1" class="dataframe table">
<thead>
<tr style="text-align: right;">
<th>suite</th>
<th>name</th>
<th>thread</th>
<th>batch_size_new</th>
<th>speed_up_new</th>
<th>inductor_new</th>
<th>eager_new</th>
<th>compilation_latency_new</th>
<th>batch_size_old</th>
<th>speed_up_old</th>
<th>inductor_old</th>
<th>eager_old</th>
<th>compilation_latency_old</th>
<th>Ratio Speedup(New/old)</th>
<th>Eager Ratio(old/new)</th>
<th>Inductor Ratio(old/new)</th>
<th>Compilation_latency_Ratio(old/new)</th>
</tr>
</thead>
<tbody>
<tr>
<td>timm_models</td>
<td>poolformer_m36</td>
<td>multiple</td>
<td>64</td>
<td>1.761927</td>
<td>0.29042322000000004</td>
<td>0.5117045127449401</td>
<td>94.214544</td>
<td>64.0</td>
<td>2.312968</td>
<td>0.22561134000000002</td>
<td>0.5218318098571201</td>
<td>92.121351</td>
<td>0.76</td>
<td>1.02</td>
<td>0.78</td>
<td>0.98</td>
</tr>
</tbody>
</table>
<p>AMP static shape CPP wrapper</p><table border="1" class="dataframe table">
<thead>
<tr style="text-align: right;">
<th>suite</th>
<th>name</th>
<th>thread</th>
<th>batch_size_new</th>
<th>speed_up_new</th>
<th>inductor_new</th>
<th>eager_new</th>
<th>compilation_latency_new</th>
<th>batch_size_old</th>
<th>speed_up_old</th>
<th>inductor_old</th>
<th>eager_old</th>
<th>compilation_latency_old</th>
<th>Ratio Speedup(New/old)</th>
<th>Eager Ratio(old/new)</th>
<th>Inductor Ratio(old/new)</th>
<th>Compilation_latency_Ratio(old/new)</th>
</tr>
</thead>
<tbody>
<tr>
<td>timm_models</td>
<td>poolformer_m36</td>
<td>multiple</td>
<td>64</td>
<td>1.771049</td>
<td>0.182792973</td>
<td>0.323735312038677</td>
<td>45.929831</td>
<td>64</td>
<td>2.471369</td>
<td>0.134041885</td>
<td>0.33126695929056504</td>
<td>44.665175</td>
<td>0.72</td>
<td>1.02</td>
<td>0.73</td>
<td>0.97</td>
</tr>
</tbody>
</table>
the bad commit: d23051f29ba01d0b5a1da03ed1f023bfe643b640
```
/workspace/pytorch# bash inductor_single_run.sh multiple inference performance timm_models poolformer_m36 amp first static cpp
Testing with cpp wrapper.
Testing with inductor.
multi-threads testing....
loading model: 0it [00:02, ?it/s]
cpu eval poolformer_m36
running benchmark: 100%|███████████████████████████████████████████████████████████████████████████████████| 50/50 [00:29<00:00, 1.71it/s]
1.990x
WARNING:common:Trying to call the empty_gpu_cache for device: cpu, which is not in list [cuda, xpu]
dev,name,batch_size,speedup,abs_latency,compilation_latency,compression_ratio,eager_peak_mem,dynamo_peak_mem,calls_captured,unique_graphs,graph_breaks,unique_graph_breaks,autograd_captures,autograd_compiles,cudagraph_skips
cpu,poolformer_m36,64,1.990374,194.937080,53.737049,0.985064,1164.911821,1182.574182,552,1,0,0,0,0,0
```
the last good commit: 8bf3920279842604bba5ffe4a1fb560f7baa9823
```
/workspace/pytorch# bash inductor_single_run.sh multiple inference performance timm_models poolformer_m36 amp first static cpp
Testing with cpp wrapper.
Testing with inductor.
multi-threads testing....
loading model: 0it [00:02, ?it/s]
cpu eval poolformer_m36
running benchmark: 100%|███████████████████████████████████████████████████████████████████████████████████| 50/50 [00:26<00:00, 1.86it/s]
2.643x
WARNING:common:Trying to call the empty_gpu_cache for device: cpu, which is not in list [cuda, xpu]
dev,name,batch_size,speedup,abs_latency,compilation_latency,compression_ratio,eager_peak_mem,dynamo_peak_mem,calls_captured,unique_graphs,graph_breaks,unique_graph_breaks,autograd_captures,autograd_compiles,cudagraph_skips
cpu,poolformer_m36,64,2.643202,147.106718,56.101441,0.983617,1166.849229,1186.284339,552,1,0,0,0,0,0
```
### Versions
</table><p>SW info</p><table border="1" class="dataframe table">
<thead>
<tr style="text-align: right;">
<th>name</th>
<th>target_branch</th>
<th>target_commit</th>
<th>refer_branch</th>
<th>refer_commit</th>
</tr>
</thead>
<tbody>
<tr>
<td>torchbench</td>
<td>main</td>
<td>373ffb19</td>
<td>main</td>
<td>373ffb19</td>
</tr>
<tr>
<td>torch</td>
<td>main</td>
<td>ce2f680e0009550ef0dc594f375d542662fcb7e5</td>
<td>main</td>
<td>bea72180ed75f522ce4fe5e723bc2112e0874732</td>
</tr>
<tr>
<td>torchvision</td>
<td>main</td>
<td>0.19.0a0+d23a6e1</td>
<td>main</td>
<td>0.19.0a0+d23a6e1</td>
</tr>
<tr>
<td>torchtext</td>
<td>main</td>
<td>0.16.0a0+b0ebddc</td>
<td>main</td>
<td>0.16.0a0+b0ebddc</td>
</tr>
<tr>
<td>torchaudio</td>
<td>main</td>
<td>2.6.0a0+c670ad8</td>
<td>main</td>
<td>2.6.0a0+c670ad8</td>
</tr>
<tr>
<td>torchdata</td>
<td>main</td>
<td>0.7.1a0+0790338</td>
<td>main</td>
<td>0.7.1a0+0790338</td>
</tr>
<tr>
<td>dynamo_benchmarks</td>
<td>main</td>
<td>nightly</td>
<td>main</td>
<td>nightly</td>
</tr>
</tbody>
</table>
</table>
Repro:
[inductor_single_run.sh](https://github.com/chuanqi129/inductor-tools/blob//main/scripts/modelbench/inductor_single_run.sh)
bash inductor_single_run.sh multiple inference performance timm_models poolformer_m36 amp first static cpp
Suspected guilty commit: https://github.com/pytorch/pytorch/commit/d23051f29ba01d0b5a1da03ed1f023bfe643b640
[timm_models-poolformer_m36-inference-amp-static-cpp-multiple-performance-drop_guilty_commit.log](https://github.com/user-attachments/files/19102166/timm_models-poolformer_m36-inference-amp-static-cpp-multiple-performance-drop_guilty_commit.log)
cc @chauhang @penguinwu @chuanqi129
| true
|
2,899,408,026
|
Remove cppcoreguidelines-pro-type-member-init_fix suppression
|
cyyever
|
closed
|
[
"oncall: jit",
"module: cpu",
"triaged",
"open source",
"Merged",
"ciflow/trunk",
"release notes: jit",
"module: dynamo",
"ciflow/inductor"
] | 6
|
COLLABORATOR
|
Fixes #ISSUE_NUMBER
cc @jgong5 @mingfeima @XiaobingSuper @sanchitintel @ashokei @jingxu10 @voznesenskym @penguinwu @EikanWang @Guobing-Chen @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @chauhang @amjames
| true
|
2,899,375,844
|
[cutlass backend] switch host optimizer to O3
|
henrylhtsang
|
closed
|
[
"topic: not user facing",
"module: inductor",
"ciflow/inductor"
] | 1
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #148637
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
| true
|
2,899,373,916
|
[inductor][cpu] basic_gnn_gin and basic_gnn_sage AMP single thread performance regression in 2025-03-03 nightly release
|
zxd1997066
|
closed
|
[
"oncall: cpu inductor"
] | 2
|
CONTRIBUTOR
|
### 🐛 Describe the bug
<p>amp static shape cpp wrapper</p><table border="1" class="dataframe table">
<thead>
<tr style="text-align: right;">
<th>suite</th>
<th>name</th>
<th>thread</th>
<th>batch_size_new</th>
<th>speed_up_new</th>
<th>inductor_new</th>
<th>eager_new</th>
<th>compilation_latency_new</th>
<th>batch_size_old</th>
<th>speed_up_old</th>
<th>inductor_old</th>
<th>eager_old</th>
<th>compilation_latency_old</th>
<th>Ratio Speedup(New/old)</th>
<th>Eager Ratio(old/new)</th>
<th>Inductor Ratio(old/new)</th>
<th>Compilation_latency_Ratio(old/new)</th>
</tr>
</thead>
<tbody>
<tr>
<td>torchbench</td>
<td>basic_gnn_gin</td>
<td>single</td>
<td>1</td>
<td>2.920084</td>
<td>0.010522014</td>
<td>0.030725164729176</td>
<td>6.642135</td>
<td>1</td>
<td>3.446807</td>
<td>0.008659898000000001</td>
<td>0.029848997045686006</td>
<td>6.59857</td>
<td>0.85</td>
<td>0.97</td>
<td>0.82</td>
<td>0.99</td>
</tr>
<tr>
<td>torchbench</td>
<td>basic_gnn_sage</td>
<td>single</td>
<td>1</td>
<td>2.593641</td>
<td>0.01143014</td>
<td>0.02964567973974</td>
<td>6.75319</td>
<td>1</td>
<td>3.023105</td>
<td>0.009504473000000001</td>
<td>0.028733019848665005</td>
<td>6.694961</td>
<td>0.86</td>
<td>0.97</td>
<td>0.83</td>
<td>0.99</td>
</tr>
</tbody>
</table>
<p>amp dynamic shape cpp wrapper</p><table border="1" class="dataframe table">
<thead>
<tr style="text-align: right;">
<th>suite</th>
<th>name</th>
<th>thread</th>
<th>batch_size_new</th>
<th>speed_up_new</th>
<th>inductor_new</th>
<th>eager_new</th>
<th>compilation_latency_new</th>
<th>batch_size_old</th>
<th>speed_up_old</th>
<th>inductor_old</th>
<th>eager_old</th>
<th>compilation_latency_old</th>
<th>Ratio Speedup(New/old)</th>
<th>Eager Ratio(old/new)</th>
<th>Inductor Ratio(old/new)</th>
<th>Compilation_latency_Ratio(old/new)</th>
</tr>
</thead>
<tbody>
<tr>
<td>torchbench</td>
<td>basic_gnn_gin</td>
<td>single</td>
<td>1</td>
<td>2.92793</td>
<td>0.010461732</td>
<td>0.030631218974759997</td>
<td>6.626763</td>
<td>1</td>
<td>3.431984</td>
<td>0.008875217</td>
<td>0.030459602740528</td>
<td>6.611055</td>
<td>0.85</td>
<td>0.99</td>
<td>0.85</td>
<td>1.0</td>
</tr>
<tr>
<td>torchbench</td>
<td>basic_gnn_sage</td>
<td>single</td>
<td>1</td>
<td>2.583718</td>
<td>0.011378174</td>
<td>0.029397992970932</td>
<td>6.73779</td>
<td>1</td>
<td>2.970166</td>
<td>0.009915694999999999</td>
<td>0.029451260155369995</td>
<td>6.712735</td>
<td>0.87</td>
<td>1.0</td>
<td>0.87</td>
<td>1.0</td>
</tr>
</tbody>
</table>
the bad commit: be830c8b1c496277491bbbdd40a5cb35de17d5fb
```
/workspace/pytorch# bash inductor_single_run.sh single inference performance torchbench basic_gnn_gin amp first dynamic
Testing with dynamic shapes.
Testing with inductor.
single-thread testing....
loading model: 0it [00:01, ?it/s]
cpu eval basic_gnn_gin
running benchmark: 100%|███████████████████████████████████████████████████████████████████████████████████| 50/50 [00:01<00:00, 27.29it/s]
2.699x
WARNING:common:Trying to call the empty_gpu_cache for device: cpu, which is not in list [cuda, xpu]
dev,name,batch_size,speedup,abs_latency,compilation_latency,compression_ratio,eager_peak_mem,dynamo_peak_mem,calls_captured,unique_graphs,graph_breaks,unique_graph_breaks,autograd_captures,autograd_compiles,cudagraph_skips
cpu,basic_gnn_gin,1,2.698579,9.636381,16.466996,0.938290,61.475226,65.518387,54,1,0,0,0,0,1
```
the last good commit: f522d899fb297453d0b821140bac38c1b4eef569
```
/workspace/pytorch# bash inductor_single_run.sh single inference performance torchbench basic_gnn_gin amp first dynamic
Testing with dynamic shapes.
Testing with inductor.
single-thread testing....
loading model: 0it [00:01, ?it/s]
cpu eval basic_gnn_gin
running benchmark: 100%|███████████████████████████████████████████████████████████████████████████████████| 50/50 [00:01<00:00, 29.47it/s]
3.546x
WARNING:common:Trying to call the empty_gpu_cache for device: cpu, which is not in list [cuda, xpu]
dev,name,batch_size,speedup,abs_latency,compilation_latency,compression_ratio,eager_peak_mem,dynamo_peak_mem,calls_captured,unique_graphs,graph_breaks,unique_graph_breaks,autograd_captures,autograd_compiles,cudagraph_skips
cpu,basic_gnn_gin,1,3.545700,7.259345,16.372305,0.935616,55.563878,59.387494,54,1,0,0,0,0,1
```
### Versions
</table><p>SW info</p><table border="1" class="dataframe table">
<thead>
<tr style="text-align: right;">
<th>name</th>
<th>target_branch</th>
<th>target_commit</th>
<th>refer_branch</th>
<th>refer_commit</th>
</tr>
</thead>
<tbody>
<tr>
<td>torchbench</td>
<td>main</td>
<td>373ffb19</td>
<td>main</td>
<td>373ffb19</td>
</tr>
<tr>
<td>torch</td>
<td>main</td>
<td>ce2f680e0009550ef0dc594f375d542662fcb7e5</td>
<td>main</td>
<td>bea72180ed75f522ce4fe5e723bc2112e0874732</td>
</tr>
<tr>
<td>torchvision</td>
<td>main</td>
<td>0.19.0a0+d23a6e1</td>
<td>main</td>
<td>0.19.0a0+d23a6e1</td>
</tr>
<tr>
<td>torchtext</td>
<td>main</td>
<td>0.16.0a0+b0ebddc</td>
<td>main</td>
<td>0.16.0a0+b0ebddc</td>
</tr>
<tr>
<td>torchaudio</td>
<td>main</td>
<td>2.6.0a0+c670ad8</td>
<td>main</td>
<td>2.6.0a0+c670ad8</td>
</tr>
<tr>
<td>torchdata</td>
<td>main</td>
<td>0.7.1a0+0790338</td>
<td>main</td>
<td>0.7.1a0+0790338</td>
</tr>
<tr>
<td>dynamo_benchmarks</td>
<td>main</td>
<td>nightly</td>
<td>main</td>
<td>nightly</td>
</tr>
</tbody>
</table>
</table>
Repro:
[inductor_single_run.sh](https://github.com/chuanqi129/inductor-tools/blob//main/scripts/modelbench/inductor_single_run.sh)
bash inductor_single_run.sh single inference performance torchbench basic_gnn_gin amp first dynamic
Suspected guilty commit: https://github.com/pytorch/pytorch/commit/be830c8b1c496277491bbbdd40a5cb35de17d5fb
[torchbench-basic_gnn_gin-inference-amp-dynamic-cpp-single-performance-drop_guilty_commit.log](https://github.com/user-attachments/files/19101767/torchbench-basic_gnn_gin-inference-amp-dynamic-cpp-single-performance-drop_guilty_commit.log)
cc @chuanqi129
| true
|
2,899,320,267
|
Subprocess compile (attempt 2)
|
aorenste
|
closed
|
[
"Merged",
"ciflow/trunk",
"topic: not user facing",
"fx",
"module: inductor",
"ciflow/inductor",
"ciflow/slow"
] | 7
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #148635
Add a mode to fx_codegen_and_compile() to compile in a separate process. This is to prepare for async compile, where we'll compile and run eager in parallel (and also be able to move the compile phase to a remote computer).
Added a test which runs the test_torchinductor tests with subprocess compiling turned on.
Fixed the test which caused the previous version (#146134) to be reverted:
```
$ PYTORCH_TEST_WITH_ROCM=1 PYTORCH_TEST_WITH_SLOW=1 PYTORCH_TEST_SKIP_FAST=1 python test/inductor/test_compile_subprocess.py CpuTests.test_conv_bn_fuse_cpu
```
cc @ezyang @SherlockNoMad @EikanWang @jgong5 @wenzhe-nrv @voznesenskym @penguinwu @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
| true
|
2,899,305,596
|
README doesn't explain how to run tests in the "Test PyTorch" section
|
yurivict
|
closed
|
[] | 3
|
NONE
|
### 📚 The doc issue
The README needs a "Test PyTorch" section after the [Install PyTorch](https://github.com/pytorch/pytorch#install-pytorch) section.
Testing is the next step after building PyTorch.
### Suggest a potential alternative/fix
_No response_
| true
|
2,899,265,329
|
update torch.nn.ReplicationPad{1,2,3}d deterministic documentation
|
1274085042
|
closed
|
[
"triaged",
"open source",
"Merged",
"ciflow/trunk",
"topic: not user facing"
] | 23
|
CONTRIBUTOR
|
https://github.com/pytorch/pytorch/issues/115395
That issue mentions that, when deterministic mode is turned on, a decomposition was added for replication_pad_{1,2,3}d to make the backward function deterministic; this PR updates the documentation accordingly.
@malfet
| true
|