| id (int64, 2.74B–3.05B) | title (string, length 1–255) | user (string, length 2–26) | state (string, 2 classes) | labels (list, length 0–24) | comments (int64, 0–206) | author_association (string, 4 classes) | body (string, length 7–62.5k, nullable ⌀) | is_title (bool, 1 class) |
|---|---|---|---|---|---|---|---|---|
3,025,476,125
|
[conda] Remove conda usage from TD llm retriever job
|
clee2000
|
closed
|
[
"Merged",
"topic: not user facing"
] | 3
|
CONTRIBUTOR
|
Remove conda usage from TD llm retriever job
python3 in the base is python3.9 right now. I'm not sure what the best way to deal with a potentially different Python version would be; dnf install?
| true
|
3,025,439,538
|
Unusually slow draft_export time
|
tugsbayasgalan
|
open
|
[
"triage review",
"oncall: pt2",
"oncall: export"
] | 2
|
CONTRIBUTOR
|
### 🐛 Describe the bug
To repro:
1. Clone https://github.com/SWivid/F5-TTS
2. Apply: https://gist.github.com/tugsbayasgalan/1adddb5517e1648c91c94bc2bd1ae098
3. Install with torch-nightly.
4. Run:
```
f5-tts_infer-cli --model F5TTS_v1_Base -c src/f5_tts/infer/examples/basic/basic.toml --gen_text "pytorch is the best"
```
### Versions
main
cc @chauhang @penguinwu @avikchaudhuri @gmagogsfm @zhxchen17 @angelayi @suo @ydwu4
| true
|
3,025,428,573
|
[Security] Advise against loading untrusted TorchScripts
|
malfet
|
closed
|
[
"Merged",
"topic: not user facing"
] | 3
|
CONTRIBUTOR
|
As a torchscripted model is a Turing-complete program
| true
|
3,025,407,592
|
pin_memory crashes for big tensors and leaks page locked memory
|
c-rizz
|
open
|
[
"module: memory usage",
"triaged"
] | 1
|
NONE
|
### 🐛 Describe the bug
Pinning large tensors (>2 GB on my machine) crashes with "CUDA error: invalid argument". It also seems to allocate additional memory compared to the actual tensor size, maybe due to [150517](https://github.com/pytorch/pytorch/issues/150517), and that memory is not freed after the crash, requiring a reboot to clear.
This can be reproduced with the following:
```python
#!/usr/bin/env python3
import subprocess
subprocess.run("free -m",shell=True)
import gc
import time
import sys
import torch as th
print(f"imported torch")
subprocess.run("free -m",shell=True)
gbs = float(sys.argv[1])
t = th.zeros(int(gbs*(1024**3)),dtype=th.uint8,device="cpu")
print(f"allocated tensor")
print(f"size={t.size()}, numel={t.numel()} pinned={t.untyped_storage().is_pinned()}, nbytes={t.untyped_storage().nbytes()}")
subprocess.run("free -m",shell=True)
t.pin_memory()
print(f"Pinned tensor")
print(f"size={t.size()}, numel={t.numel()} pinned={t.untyped_storage().is_pinned()}, nbytes={t.untyped_storage().nbytes()}")
subprocess.run("free -m",shell=True)
del t
gc.collect()
time.sleep(1)
print(f"deleted tensor")
subprocess.run("free -m",shell=True)
```
On my machine allocating 10GB results in the following:
```
(venv) crizz@machine:~/testenv$ free -m
total used free shared buff/cache available
Mem: 128293 26972 100562 5 1836 101321
Swap: 31249 0 31249
(venv) crizz@machine:~/testenv$ ./test_pin.py 10
total used free shared buff/cache available
Mem: 128293 26977 100557 5 1836 101316
Swap: 31249 0 31249
imported torch
total used free shared buff/cache available
Mem: 128293 27197 100336 5 1836 101095
Swap: 31249 0 31249
allocated tensor
size=torch.Size([10737418240]), numel=10737418240 pinned=False, nbytes=10737418240
total used free shared buff/cache available
Mem: 128293 37725 89808 5 1836 90568
Swap: 31249 0 31249
Traceback (most recent call last):
File "/home/crizz/testenv/./test_pin.py", line 17, in <module>
t.pin_memory()
RuntimeError: CUDA error: invalid argument
CUDA kernel errors might be asynchronously reported at some other API call, so the stacktrace below might be incorrect.
For debugging consider passing CUDA_LAUNCH_BLOCKING=1
Compile with `TORCH_USE_CUDA_DSA` to enable device-side assertions.
(venv) crizz@machine:~/testenv$ free -m
total used free shared buff/cache available
Mem: 128293 43573 83960 5 1836 84719
Swap: 31249 0 31249
```
As can be seen, about 17 GB of memory remains occupied, which corresponds to 2^34 bytes, the next power of two above 10 GB.
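To make that arithmetic concrete, a quick standalone check (plain Python, not part of the repro above) that the leaked amount matches rounding the request up to the next power of two:
```python
# Round the 10 GiB request up to the next power of two and compare
# with the ~17 GB the `free -m` output shows left occupied.
requested = 10 * 1024**3                       # bytes requested by ./test_pin.py 10
next_pow2 = 1 << (requested - 1).bit_length()  # smallest power of two >= requested
print(next_pow2)                               # 17179869184 == 2**34 bytes
print(round(next_pow2 / 1e9, 1))               # ~17.2 GB, in the same ballpark as the leak
```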
With 2GB instead:
```
(venv) crizz@machine:~/testenv$ free -m
total used free shared buff/cache available
Mem: 128293 43375 84142 5 1852 84917
Swap: 31249 0 31249
(venv) crizz@machine:~/testenv$ ./test_pin.py 2
total used free shared buff/cache available
Mem: 128293 43387 84131 5 1852 84906
Swap: 31249 0 31249
imported torch
total used free shared buff/cache available
Mem: 128293 43622 83896 5 1852 84671
Swap: 31249 0 31249
allocated tensor
size=torch.Size([2147483648]), numel=2147483648 pinned=False, nbytes=2147483648
total used free shared buff/cache available
Mem: 128293 45813 81705 5 1852 82480
Swap: 31249 0 31249
Pinned tensor
size=torch.Size([2147483648]), numel=2147483648 pinned=False, nbytes=2147483648
total used free shared buff/cache available
Mem: 128293 48020 79495 2061 3913 80272
Swap: 31249 0 31249
deleted tensor
total used free shared buff/cache available
Mem: 128293 46203 81313 2061 3913 82090
Swap: 31249 0 31249
(venv) crizz@machine:~/testenv$ free -m
total used free shared buff/cache available
Mem: 128293 43736 83782 5 1852 84557
Swap: 31249 0 31249
```
The expected behaviour would be:
- Not occupying much more pinned memory than required
- Not crashing, especially in my case, where enough memory is actually available
- Most importantly, not leaving dangling locked memory that is seemingly impossible to free up
Thanks!
### Versions
Collecting environment information...
PyTorch version: 2.7.0+cu126
Is debug build: False
CUDA used to build PyTorch: 12.6
ROCM used to build PyTorch: N/A
OS: Ubuntu 24.04.1 LTS (x86_64)
GCC version: (Ubuntu 13.3.0-6ubuntu2~24.04) 13.3.0
Clang version: Could not collect
CMake version: Could not collect
Libc version: glibc-2.39
Python version: 3.12.3 (main, Feb 4 2025, 14:48:35) [GCC 13.3.0] (64-bit runtime)
Python platform: Linux-6.11.0-24-generic-x86_64-with-glibc2.39
Is CUDA available: True
CUDA runtime version: Could not collect
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration: GPU 0: NVIDIA GeForce RTX 4090
Nvidia driver version: 570.133.07
cuDNN version: Could not collect
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 52 bits physical, 57 bits virtual
Byte Order: Little Endian
CPU(s): 64
On-line CPU(s) list: 0-63
Vendor ID: AuthenticAMD
Model name: AMD Ryzen Threadripper 7970X 32-Cores
CPU family: 25
Model: 24
Thread(s) per core: 2
Core(s) per socket: 32
Socket(s): 1
Stepping: 1
Frequency boost: enabled
CPU(s) scaling MHz: 30%
CPU max MHz: 5352.0000
CPU min MHz: 545.0000
BogoMIPS: 7987.35
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ht syscall nx mmxext fxsr_opt pdpe1gb rdtscp lm constant_tsc rep_good amd_lbr_v2 nopl xtopology nonstop_tsc cpuid extd_apicid aperfmperf rapl pni pclmulqdq monitor ssse3 fma cx16 pcid sse4_1 sse4_2 x2apic movbe popcnt aes xsave avx f16c rdrand lahf_lm cmp_legacy svm extapic cr8_legacy abm sse4a misalignsse 3dnowprefetch osvw ibs skinit wdt tce topoext perfctr_core perfctr_nb bpext perfctr_llc mwaitx cpb cat_l3 cdp_l3 hw_pstate ssbd mba perfmon_v2 ibrs ibpb stibp ibrs_enhanced vmmcall fsgsbase bmi1 avx2 smep bmi2 erms invpcid cqm rdt_a avx512f avx512dq rdseed adx smap avx512ifma clflushopt clwb avx512cd sha_ni avx512bw avx512vl xsaveopt xsavec xgetbv1 xsaves cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local user_shstk avx512_bf16 clzero irperf xsaveerptr rdpru wbnoinvd amd_ppin cppc arat npt lbrv svm_lock nrip_save tsc_scale vmcb_clean flushbyasid decodeassists pausefilter pfthreshold avic vgif x2avic v_spec_ctrl vnmi avx512vbmi umip pku ospke avx512_vbmi2 gfni vaes vpclmulqdq avx512_vnni avx512_bitalg avx512_vpopcntdq la57 rdpid overflow_recov succor smca fsrm flush_l1d debug_swap
Virtualization: AMD-V
L1d cache: 1 MiB (32 instances)
L1i cache: 1 MiB (32 instances)
L2 cache: 32 MiB (32 instances)
L3 cache: 128 MiB (4 instances)
NUMA node(s): 1
NUMA node0 CPU(s): 0-63
Vulnerability Gather data sampling: Not affected
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Not affected
Vulnerability Reg file data sampling: Not affected
Vulnerability Retbleed: Not affected
Vulnerability Spec rstack overflow: Mitigation; Safe RET
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Enhanced / Automatic IBRS; IBPB conditional; STIBP always-on; RSB filling; PBRSB-eIBRS Not affected; BHI Not affected
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Versions of relevant libraries:
[pip3] numpy==2.2.5
[pip3] nvidia-cublas-cu12==12.6.4.1
[pip3] nvidia-cuda-cupti-cu12==12.6.80
[pip3] nvidia-cuda-nvrtc-cu12==12.6.77
[pip3] nvidia-cuda-runtime-cu12==12.6.77
[pip3] nvidia-cudnn-cu12==9.5.1.17
[pip3] nvidia-cufft-cu12==11.3.0.4
[pip3] nvidia-curand-cu12==10.3.7.77
[pip3] nvidia-cusolver-cu12==11.7.1.2
[pip3] nvidia-cusparse-cu12==12.5.4.2
[pip3] nvidia-cusparselt-cu12==0.6.3
[pip3] nvidia-nccl-cu12==2.26.2
[pip3] nvidia-nvjitlink-cu12==12.6.85
[pip3] nvidia-nvtx-cu12==12.6.77
[pip3] torch==2.7.0
[pip3] triton==3.3.0
[conda] Could not collect
| true
|
3,025,287,243
|
[AOTI] Fix a memory leak in model_package_loader
|
desertfire
|
closed
|
[
"Merged",
"ciflow/trunk",
"ciflow/inductor",
"release notes: inductor (aoti)"
] | 6
|
CONTRIBUTOR
|
Summary: There was a char array allocated but never freed. It was found by valgrind and verified fixed with this PR, although it's not easy to write a unit test for it.
| true
|
3,025,261,386
|
Remove cuda dependencies from non cuda builds
|
atalman
|
closed
|
[
"Merged",
"ciflow/binaries",
"topic: not user facing"
] | 5
|
CONTRIBUTOR
|
These dependencies were added to fix a poetry issue on PyPI. However, including them creates an issue with poetry on download.pytorch.org, because poetry reads the first available wheel on the index for METADATA requirements. Hence the metadata requirements for CPU wheels can't list any cuda dependencies.
Injecting these dependencies during prep for PyPI will need to be done via:
https://github.com/pytorch/test-infra/blob/main/release/pypi/prep_binary_for_pypi.sh
Ref: https://github.com/pytorch/pytorch/issues/152121
| true
|
3,025,234,483
|
[dynamo] Guard serialization for NAME_MATCH
|
zhxchen17
|
closed
|
[
"fb-exported",
"Merged",
"ciflow/trunk",
"topic: not user facing",
"module: dynamo",
"ciflow/inductor"
] | 7
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #152332
* #152331
* #152330
* #152329
* #152328
* #152327
* #152326
* #152325
Differential Revision: [D73780430](https://our.internmc.facebook.com/intern/diff/D73780430/)
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @chauhang @amjames
| true
|
3,025,234,340
|
[dynamo] Guard serialization for DISPATCH_KEY_SET_MATCH
|
zhxchen17
|
closed
|
[
"fb-exported",
"Merged",
"ciflow/trunk",
"topic: not user facing",
"module: dynamo",
"ciflow/inductor"
] | 5
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #152332
* __->__ #152331
* #152330
* #152329
* #152328
* #152327
* #152326
* #152325
Differential Revision: [D73780433](https://our.internmc.facebook.com/intern/diff/D73780433/)
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @chauhang @amjames
| true
|
3,025,233,909
|
[dynamo] Guard serialization for ID_MATCH
|
zhxchen17
|
closed
|
[
"fb-exported",
"Merged",
"ciflow/trunk",
"topic: not user facing",
"module: dynamo",
"ciflow/inductor"
] | 5
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #152332
* #152331
* __->__ #152330
* #152329
* #152328
* #152327
* #152326
* #152325
Differential Revision: [D73780431](https://our.internmc.facebook.com/intern/diff/D73780431/)
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @chauhang @amjames
| true
|
3,025,233,776
|
[dynamo] Guard serialization for NONE_MATCH.
|
zhxchen17
|
closed
|
[
"fb-exported",
"Merged",
"ciflow/trunk",
"topic: not user facing",
"module: dynamo",
"ciflow/inductor"
] | 5
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #152332
* #152331
* #152330
* __->__ #152329
* #152328
* #152327
* #152326
* #152325
Differential Revision: [D73780435](https://our.internmc.facebook.com/intern/diff/D73780435/)
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @chauhang @amjames
| true
|
3,025,233,632
|
[dynamo] Guard serialization for BOOL_MATCH.
|
zhxchen17
|
closed
|
[
"fb-exported",
"Merged",
"ciflow/trunk",
"topic: not user facing",
"module: dynamo",
"ciflow/inductor"
] | 5
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #152332
* #152331
* #152330
* #152329
* __->__ #152328
* #152327
* #152326
* #152325
Differential Revision: [D73780434](https://our.internmc.facebook.com/intern/diff/D73780434/)
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @chauhang @amjames
| true
|
3,025,233,510
|
[dynamo] Guard serialization for DICT_CONTAINS
|
zhxchen17
|
closed
|
[
"fb-exported",
"Merged",
"ciflow/trunk",
"topic: not user facing",
"module: dynamo",
"ciflow/inductor"
] | 5
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #152332
* #152331
* #152330
* #152329
* #152328
* __->__ #152327
* #152326
* #152325
Adding serialization for DICT_CONTAINS
Differential Revision: [D73780432](https://our.internmc.facebook.com/intern/diff/D73780432/)
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @chauhang @amjames
| true
|
3,025,233,374
|
[dynamo] Guard serialization for DICT_VERSION
|
zhxchen17
|
closed
|
[
"fb-exported",
"Merged",
"ciflow/trunk",
"topic: not user facing",
"module: dynamo",
"ciflow/inductor"
] | 5
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #152332
* #152331
* #152330
* #152329
* #152328
* #152327
* __->__ #152326
* #152325
I think we shouldn't support DICT_VERSION for 2 reasons:
1. dict version is not well defined across processes
2. they are pretty rare (only with pytree calls)
Differential Revision: [D73780437](https://our.internmc.facebook.com/intern/diff/D73780437/)
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @chauhang @amjames
| true
|
3,025,233,226
|
[dynamo] Guard serialization for TYPE_MATCH
|
zhxchen17
|
closed
|
[
"fb-exported",
"Merged",
"ciflow/trunk",
"topic: not user facing",
"module: dynamo",
"ciflow/inductor"
] | 5
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #152332
* #152331
* #152330
* #152329
* #152328
* #152327
* #152326
* __->__ #152325
Adding guard serialization for TYPE_MATCH
Differential Revision: [D73780438](https://our.internmc.facebook.com/intern/diff/D73780438/)
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @chauhang @amjames
| true
|
3,025,148,900
|
[benchmarking] Inc aarch64 bench shards to 15
|
malfet
|
closed
|
[
"Merged",
"topic: not user facing"
] | 3
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #152324
As it is frequently timing out with 12 shards, and the shards also seem somewhat unbalanced.
E.g., looking at https://github.com/pytorch/pytorch/actions/runs/14696840776/job/41239776679,
shard 12 takes 3.6 hours, while shard 11 takes only 40 min.
| true
|
3,025,146,877
|
compile generates inefficient code when mutating small slice of a graph input
|
bdhirsh
|
open
|
[
"triaged",
"module: functionalization",
"oncall: pt2",
"module: inductor",
"module: pt2-dispatcher"
] | 0
|
CONTRIBUTOR
|
See this repro:
```python
import torch
def plus_one(x):
x[0].add_(1.0)
return x
x_og = torch.randn(32 * 1024, 1024, device="cuda", dtype=torch.float32)
x = x_og.clone()
plus_one(x)
plus_one_compiled = torch.compile(plus_one)
x = x_og.clone()
plus_one_compiled(x)
```
if you run with `TORCH_LOGS="output_code"` to get inductor output, you'll see that:
(1) functionalization captures this program as a call to `aten.add` + `add.copy_`, and we end up generating two kernels (bad)
(2) the second kernel involves a `copy_` on the entire input, instead of just the slice (equally bad)
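For illustration only, a rough eager-mode sketch of what the captured program amounts to, based on the description above rather than the literal inductor output (`functionalized_plus_one` and the `tmp` intermediate are made-up stand-ins):
```python
import torch

def functionalized_plus_one(x):
    # (1) functionalization rewrites the in-place mutation as an
    #     out-of-place add producing the updated slice...
    updated = torch.add(x[0], 1.0)
    # (2) ...and then writes the mutation back into the graph input with a
    #     copy_ that, per the output_code logs, covers the entire input
    #     rather than just row 0.
    tmp = x.clone()   # stand-in for the full-size intermediate
    tmp[0] = updated
    x.copy_(tmp)      # the full-tensor copy_ the issue calls out
    return x

x = torch.randn(4, 8)
functionalized_plus_one(x)
```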
cc @ezyang @chauhang @penguinwu @voznesenskym @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @chenyang78 @kadeng @muchulee8 @amjames @aakhundov @zou3519
| true
|
3,024,957,450
|
Skip test requiring MKL
|
Flamefire
|
closed
|
[
"open source",
"Merged",
"ciflow/trunk",
"topic: not user facing",
"module: inductor",
"ciflow/inductor"
] | 3
|
COLLABORATOR
|
`test_reproduce_121253_issue_addmm_fusion_check` checks for "mkl._mkl_linear" being found in the generated source which cannot be there when MKL isn't available.
Add a skip marker similar to the other tests in this file.
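A minimal sketch of the kind of skip marker meant here, assuming a plain `unittest.TestCase` and the placeholder class name `TestAddmmFusion`; the actual test file may use a different base class or helper:
```python
import unittest
import torch

class TestAddmmFusion(unittest.TestCase):  # placeholder class name for illustration
    # Skip when the build has no MKL: "mkl._mkl_linear" can only appear
    # in the generated source if MKL is available.
    @unittest.skipIf(not torch.backends.mkl.is_available(), "requires MKL")
    def test_reproduce_121253_issue_addmm_fusion_check(self):
        ...
```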
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
| true
|
3,024,900,747
|
[torch-xpu-ops] Update torch-xpu-ops commit pin.
|
etaf
|
closed
|
[
"triaged",
"open source",
"Merged",
"ciflow/trunk",
"topic: not user facing",
"keep-going",
"ciflow/xpu",
"ci-no-td"
] | 3
|
COLLABORATOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #152321
Update the torch-xpu-ops commit to [655fa9bc7f88ab5bd3766b5f2fd5b43989c2caca](https://github.com/intel/torch-xpu-ops/commit/655fa9bc7f88ab5bd3766b5f2fd5b43989c2caca), including:
- Fixes batch_norm numeric error by adding additional boundary check
- Enable two operators: fft & jagged_to_padded_dense
- XCCL relevant changes:
- Cache cclStream to improve performance.
- Add support for complex datatypes in allgather and broadcast.
- Support coalescing operations and batch_isend_irecv.
- Introduce additional logging; use export TORCH_CPP_LOG_LEVEL=INFO.
- Fix #152296
- Fix #152020
| true
|
3,024,523,730
|
[dynamo] Use getattr when accessing self.value.__module__ in SkipFunctionVariable
|
wdziurdz
|
closed
|
[
"open source",
"topic: not user facing",
"module: dynamo"
] | 7
|
CONTRIBUTOR
|
Fixes #152316
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @chauhang @amjames
| true
|
3,024,507,273
|
Fix common_distributed.py to NOT set root logger
|
wizzniu
|
closed
|
[
"open source",
"Merged",
"ciflow/trunk",
"topic: not user facing"
] | 9
|
CONTRIBUTOR
|
Using `logging.basicConfig` to set the root logger's level is not good behavior, because it affects downstream third-party testing plugins. Fix common_distributed.py to set the level for the current logger only.
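A minimal sketch of the difference, in a hypothetical module rather than the actual common_distributed.py code:
```python
import logging

# Problematic: configures the *root* logger, which leaks the level into
# every downstream package, including third-party testing plugins.
logging.basicConfig(level=logging.INFO)

# Preferred: only adjust the logger owned by this module.
logger = logging.getLogger(__name__)
logger.setLevel(logging.INFO)
```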
cc @ezyang @albanD
| true
|
3,024,461,295
|
DISABLED test_comprehensive_pca_lowrank_cuda_float64 (__main__.TestInductorOpInfoCUDA)
|
pytorch-bot[bot]
|
open
|
[
"triaged",
"module: flaky-tests",
"skipped",
"oncall: pt2",
"module: inductor"
] | 4
|
NONE
|
Platforms: inductor
This test was disabled because it is failing in CI. See [recent examples](https://hud.pytorch.org/flakytest?name=test_comprehensive_pca_lowrank_cuda_float64&suite=TestInductorOpInfoCUDA&limit=100) and the most recent trunk [workflow logs](https://github.com/pytorch/pytorch/runs/41249343003).
Over the past 3 hours, it has been determined flaky in 3 workflow(s) with 3 failures and 3 successes.
**Debugging instructions (after clicking on the recent samples link):**
DO NOT ASSUME THINGS ARE OKAY IF THE CI IS GREEN. We now shield flaky tests from developers so CI will thus be green but it will be harder to parse the logs.
To find relevant log snippets:
1. Click on the workflow logs linked above
2. Click on the Test step of the job so that it is expanded. Otherwise, the grepping will not work.
3. Grep for `test_comprehensive_pca_lowrank_cuda_float64`
4. There should be several instances run (as flaky tests are rerun in CI) from which you can study the logs.
<details><summary>Sample error message</summary>
```
Traceback (most recent call last):
File "/opt/conda/envs/py_3.12/lib/python3.12/site-packages/torch/testing/_internal/common_device_type.py", line 1131, in test_wrapper
return test(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^
File "/opt/conda/envs/py_3.12/lib/python3.12/site-packages/torch/testing/_internal/common_device_type.py", line 1430, in only_fn
return fn(self, *args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/conda/envs/py_3.12/lib/python3.12/site-packages/torch/testing/_internal/common_utils.py", line 2291, in wrapper
fn(*args, **kwargs)
File "/opt/conda/envs/py_3.12/lib/python3.12/site-packages/torch/testing/_internal/common_device_type.py", line 1211, in dep_fn
return fn(slf, *args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/conda/envs/py_3.12/lib/python3.12/site-packages/torch/testing/_internal/common_device_type.py", line 1211, in dep_fn
return fn(slf, *args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/conda/envs/py_3.12/lib/python3.12/site-packages/torch/testing/_internal/common_device_type.py", line 1211, in dep_fn
return fn(slf, *args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/conda/envs/py_3.12/lib/python3.12/site-packages/torch/testing/_internal/common_utils.py", line 1612, in wrapper
fn(*args, **kwargs)
File "/opt/conda/envs/py_3.12/lib/python3.12/site-packages/torch/testing/_internal/common_utils.py", line 1534, in wrapper
fn(*args, **kwargs)
File "/opt/conda/envs/py_3.12/lib/python3.12/unittest/mock.py", line 1396, in patched
return func(*newargs, **newkeywargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/conda/envs/py_3.12/lib/python3.12/contextlib.py", line 81, in inner
return func(*args, **kwds)
^^^^^^^^^^^^^^^^^^^
File "/opt/conda/envs/py_3.12/lib/python3.12/contextlib.py", line 81, in inner
return func(*args, **kwds)
^^^^^^^^^^^^^^^^^^^
File "/opt/conda/envs/py_3.12/lib/python3.12/contextlib.py", line 81, in inner
return func(*args, **kwds)
^^^^^^^^^^^^^^^^^^^
File "/var/lib/jenkins/workspace/test/inductor/test_torchinductor_opinfo.py", line 962, in inner
raise e
File "/var/lib/jenkins/workspace/test/inductor/test_torchinductor_opinfo.py", line 954, in inner
fn(self, device, dtype, op)
File "/var/lib/jenkins/workspace/test/inductor/test_torchinductor_opinfo.py", line 1207, in test_comprehensive
raise e
File "/var/lib/jenkins/workspace/test/inductor/test_torchinductor_opinfo.py", line 1182, in test_comprehensive
self.check_model_gpu(
File "/opt/conda/envs/py_3.12/lib/python3.12/contextlib.py", line 81, in inner
return func(*args, **kwds)
^^^^^^^^^^^^^^^^^^^
File "/var/lib/jenkins/workspace/test/inductor/test_torchinductor.py", line 648, in check_model_gpu
check_model(
File "/var/lib/jenkins/workspace/test/inductor/test_torchinductor.py", line 489, in check_model
actual = run(*example_inputs, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/conda/envs/py_3.12/lib/python3.12/site-packages/torch/_dynamo/eval_frame.py", line 675, in _fn
raise e.remove_dynamo_frames() from None # see TORCHDYNAMO_VERBOSE=1
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/conda/envs/py_3.12/lib/python3.12/site-packages/torch/_inductor/compile_fx.py", line 860, in _compile_fx_inner
raise InductorError(e, currentframe()).with_traceback(
File "/opt/conda/envs/py_3.12/lib/python3.12/site-packages/torch/_inductor/compile_fx.py", line 844, in _compile_fx_inner
mb_compiled_graph = fx_codegen_and_compile(
^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/conda/envs/py_3.12/lib/python3.12/site-packages/torch/_inductor/compile_fx.py", line 1458, in fx_codegen_and_compile
return scheme.codegen_and_compile(gm, example_inputs, inputs_to_check, graph_kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/conda/envs/py_3.12/lib/python3.12/site-packages/torch/_inductor/compile_fx.py", line 1345, in codegen_and_compile
compiled_module = graph.compile_to_module()
^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/conda/envs/py_3.12/lib/python3.12/site-packages/torch/_inductor/graph.py", line 2209, in compile_to_module
return self._compile_to_module()
^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/conda/envs/py_3.12/lib/python3.12/site-packages/torch/_inductor/graph.py", line 2256, in _compile_to_module
mod = PyCodeCache.load_by_key_path(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/conda/envs/py_3.12/lib/python3.12/site-packages/torch/_inductor/codecache.py", line 3006, in load_by_key_path
mod = _reload_python_module(key, path, set_sys_modules=in_toplevel)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/conda/envs/py_3.12/lib/python3.12/site-packages/torch/_inductor/runtime/compile_tasks.py", line 31, in _reload_python_module
exec(code, mod.__dict__, mod.__dict__)
File "/tmp/tmpxf73h0pb/mw/cmw5oymmrj3u5zgjky25agsa3x5bjku6g3kiov45sxhjywi3kbnb.py", line 83, in <module>
async_compile.wait(globals())
File "/opt/conda/envs/py_3.12/lib/python3.12/site-packages/torch/_inductor/async_compile.py", line 448, in wait
self._wait_futures(scope)
File "/opt/conda/envs/py_3.12/lib/python3.12/site-packages/torch/_inductor/async_compile.py", line 468, in _wait_futures
scope[key] = result.result()
^^^^^^^^^^^^^^^
File "/opt/conda/envs/py_3.12/lib/python3.12/site-packages/torch/_inductor/codecache.py", line 3508, in result
return self.result_fn()
^^^^^^^^^^^^^^^^
File "/opt/conda/envs/py_3.12/lib/python3.12/site-packages/torch/_inductor/async_compile.py", line 343, in get_result
kernel.precompile(
File "/opt/conda/envs/py_3.12/lib/python3.12/site-packages/torch/_inductor/runtime/triton_heuristics.py", line 322, in precompile
self._make_launchers()
File "/opt/conda/envs/py_3.12/lib/python3.12/site-packages/torch/_inductor/runtime/triton_heuristics.py", line 479, in _make_launchers
launchers.append(result.make_launcher())
^^^^^^^^^^^^^^^^^^^^^^
File "/opt/conda/envs/py_3.12/lib/python3.12/site-packages/torch/_inductor/runtime/triton_heuristics.py", line 1276, in make_launcher
self.reload_cubin_path()
File "/opt/conda/envs/py_3.12/lib/python3.12/site-packages/torch/_inductor/runtime/triton_heuristics.py", line 1268, in reload_cubin_path
raise RuntimeError(
torch._inductor.exc.InductorError: RuntimeError: ('Cubin file saved by TritonBundler not found at %s', '/tmp/tmpq4a5qi1u/triton/MALQGGDZXFNJRXMOSNAMVPGAJZYNDVBZAEIH3J2PSB5B6D2RNOMA/triton_poi_fused_randn_0.cubin')
Set TORCHDYNAMO_VERBOSE=1 for the internal stack trace (please do this especially if you're reporting a bug to PyTorch). For even more developer context, set TORCH_LOGS="+dynamo"
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "/opt/conda/envs/py_3.12/lib/python3.12/site-packages/torch/testing/_internal/common_utils.py", line 3154, in wrapper
method(*args, **kwargs)
File "/opt/conda/envs/py_3.12/lib/python3.12/site-packages/torch/testing/_internal/common_utils.py", line 3154, in wrapper
method(*args, **kwargs)
File "/opt/conda/envs/py_3.12/lib/python3.12/site-packages/torch/testing/_internal/common_device_type.py", line 426, in instantiated_test
result = test(self, **param_kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/conda/envs/py_3.12/lib/python3.12/site-packages/torch/testing/_internal/common_cuda.py", line 248, in wrapped
return f(*args, **kwargs)
^^^^^^^^^^^^^^^^^^
File "/opt/conda/envs/py_3.12/lib/python3.12/site-packages/torch/testing/_internal/common_device_type.py", line 1211, in dep_fn
return fn(slf, *args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/conda/envs/py_3.12/lib/python3.12/site-packages/torch/testing/_internal/common_device_type.py", line 1211, in dep_fn
return fn(slf, *args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/conda/envs/py_3.12/lib/python3.12/site-packages/torch/testing/_internal/common_utils.py", line 1612, in wrapper
fn(*args, **kwargs)
File "/opt/conda/envs/py_3.12/lib/python3.12/site-packages/torch/testing/_internal/common_device_type.py", line 1143, in test_wrapper
raise e_tracked from e
Exception: Caused by sample input at index 4: SampleInput(input=Tensor[size=(2, 3, 2), device="cuda:0", dtype=torch.float64], args=TensorList[Tensor[size=(2, 3, 2), device="cuda:0", dtype=torch.float64]], kwargs={'q': '2', 'center': 'False'}, broadcasts_input=False, name='')
To execute this test, run the following from the base repo dir:
PYTORCH_OPINFO_SAMPLE_INPUT_INDEX=4 python test/inductor/test_torchinductor_opinfo.py TestInductorOpInfoCUDA.test_comprehensive_pca_lowrank_cuda_float64
This message can be suppressed by setting PYTORCH_PRINT_REPRO_ON_FAILURE=0
```
</details>
Test file path: `inductor/test_torchinductor_opinfo.py`
cc @clee2000 @chauhang @penguinwu @voznesenskym @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @chenyang78 @kadeng @muchulee8 @amjames @aakhundov
| true
|
3,024,442,315
|
Correct torch.xpu.is_bf16_supported to return False if no XPU is detected
|
guangyey
|
closed
|
[
"open source",
"Merged",
"ciflow/trunk",
"keep-going",
"ciflow/xpu",
"release notes: xpu",
"module: xpu"
] | 9
|
COLLABORATOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #152317
# Motivation
Fix https://github.com/pytorch/pytorch/issues/152301
When XPU is not available, calling `torch.xpu.is_bf16_supported()` still returns `True`, which is inconsistent with the expected behavior (should be False).
# Solution
Align with other backends by adding `including_emulation` to `torch.xpu.is_bf16_supported` and:
- return `False` if XPU is not available
- return `True` if `including_emulation` is True
- return `torch.xpu.get_device_properties().has_bfloat16_conversions` if `including_emulation` is False, i.e. whether the device can generate SPIR-V code for bf16
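A minimal sketch of that logic, following the names used above; the actual signature and defaults in the PR may differ:
```python
import torch

def is_bf16_supported(including_emulation: bool = True) -> bool:
    # No XPU device available: bf16 cannot be supported.
    if not torch.xpu.is_available():
        return False
    # With emulation allowed, report support unconditionally.
    if including_emulation:
        return True
    # Otherwise report whether the device can generate SPIR-V code for bf16.
    return torch.xpu.get_device_properties().has_bfloat16_conversions
```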
cc @gujinghui @EikanWang @fengyuan14
| true
|
3,024,385,848
|
[dynamo] torch._dynamo crashes on `self.value.__module__` inside SkipFunctionVariable.call_function() (PyTorch 2.7, works 2.6)
|
wdziurdz
|
open
|
[
"high priority",
"needs reproduction",
"triaged",
"module: regression",
"oncall: pt2",
"module: dynamo"
] | 4
|
CONTRIBUTOR
|
### 🐛 Describe the bug
This started after upgrading from 2.6 to 2.7: a crash in dynamo. The crash happens because PyTorch doesn't check whether the object has a `__module__` attribute:
```python
[rank1]: File "/usr/local/lib/python3.10/dist-packages/torch/nn/modules/module.py", line 1754, in _wrapped_call_impl
[rank1]: return self._call_impl(*args, **kwargs)
[rank1]: File "/usr/local/lib/python3.10/dist-packages/torch/nn/modules/module.py", line 1765, in _call_impl
[rank1]: return forward_call(*args, **kwargs)
[rank1]: File "/root/model_garden/PyTorch/examples/gpu_migration/nlp/bert/modeling.py", line 859, in forward
[rank1]: tmp = (attention_mask == i+1).type(torch.float32).unsqueeze(-1)
[rank1]: File "/usr/local/lib/python3.10/dist-packages/habana_frameworks/torch/gpu_migration/torch/_tensor.py", line 206, in type
[rank1]: log_args = locals() if G_LOGGER.module_severity <= G_LOGGER.INFO else None
[rank1]: File "/usr/local/lib/python3.10/dist-packages/torch/_dynamo/convert_frame.py", line 1432, in __call__
[rank1]: return self._torchdynamo_orig_callable(
[rank1]: File "/usr/local/lib/python3.10/dist-packages/torch/_dynamo/convert_frame.py", line 1213, in __call__
[rank1]: result = self._inner_convert(
[rank1]: File "/usr/local/lib/python3.10/dist-packages/torch/_dynamo/convert_frame.py", line 598, in __call__
[rank1]: return _compile(
[rank1]: File "/usr/local/lib/python3.10/dist-packages/torch/_dynamo/convert_frame.py", line 1110, in _compile
[rank1]: raise InternalTorchDynamoError(
[rank1]: File "/usr/local/lib/python3.10/dist-packages/torch/_dynamo/convert_frame.py", line 1059, in _compile
[rank1]: guarded_code = compile_inner(code, one_graph, hooks, transform)
[rank1]: File "/usr/local/lib/python3.10/dist-packages/torch/_utils_internal.py", line 97, in wrapper_function
[rank1]: return function(*args, **kwargs)
[rank1]: File "/usr/local/lib/python3.10/dist-packages/torch/_dynamo/convert_frame.py", line 761, in compile_inner
[rank1]: return _compile_inner(code, one_graph, hooks, transform)
[rank1]: File "/usr/local/lib/python3.10/dist-packages/torch/_dynamo/convert_frame.py", line 797, in _compile_inner
[rank1]: out_code = transform_code_object(code, transform)
[rank1]: File "/usr/local/lib/python3.10/dist-packages/torch/_dynamo/bytecode_transformation.py", line 1422, in transform_code_object
[rank1]: transformations(instructions, code_options)
[rank1]: File "/usr/local/lib/python3.10/dist-packages/torch/_dynamo/convert_frame.py", line 257, in _fn
[rank1]: return fn(*args, **kwargs)
[rank1]: File "/usr/local/lib/python3.10/dist-packages/torch/_dynamo/convert_frame.py", line 715, in transform
[rank1]: tracer.run()
[rank1]: File "/usr/local/lib/python3.10/dist-packages/torch/_dynamo/symbolic_convert.py", line 3500, in run
[rank1]: super().run()
[rank1]: File "/usr/local/lib/python3.10/dist-packages/torch/_dynamo/symbolic_convert.py", line 1337, in run
[rank1]: while self.step():
[rank1]: File "/usr/local/lib/python3.10/dist-packages/torch/_dynamo/symbolic_convert.py", line 1246, in step
[rank1]: self.dispatch_table[inst.opcode](self, inst)
[rank1]: File "/usr/local/lib/python3.10/dist-packages/torch/_dynamo/symbolic_convert.py", line 819, in wrapper
[rank1]: return inner_fn(self, inst)
[rank1]: File "/usr/local/lib/python3.10/dist-packages/torch/_dynamo/symbolic_convert.py", line 2266, in CALL_FUNCTION_EX
[rank1]: self.call_function(fn, argsvars.items, kwargsvars)
[rank1]: File "/usr/local/lib/python3.10/dist-packages/torch/_dynamo/symbolic_convert.py", line 1170, in call_function
[rank1]: self.push(fn.call_function(self, args, kwargs)) # type: ignore[arg-type]
[rank1]: File "/usr/local/lib/python3.10/dist-packages/torch/_dynamo/variables/functions.py", line 926, in call_function
[rank1]: return super().call_function(tx, args, kwargs)
[rank1]: File "/usr/local/lib/python3.10/dist-packages/torch/_dynamo/variables/functions.py", line 404, in call_function
[rank1]: return super().call_function(tx, args, kwargs)
[rank1]: File "/usr/local/lib/python3.10/dist-packages/torch/_dynamo/variables/functions.py", line 185, in call_function
[rank1]: return tx.inline_user_function_return(self, [*self.self_args(), *args], kwargs)
[rank1]: File "/usr/local/lib/python3.10/dist-packages/torch/_dynamo/symbolic_convert.py", line 1187, in inline_user_function_return
[rank1]: return InliningInstructionTranslator.inline_call(self, fn, args, kwargs)
[rank1]: File "/usr/local/lib/python3.10/dist-packages/torch/_dynamo/symbolic_convert.py", line 3726, in inline_call
[rank1]: return tracer.inline_call_()
[rank1]: File "/usr/local/lib/python3.10/dist-packages/torch/_dynamo/symbolic_convert.py", line 3905, in inline_call_
[rank1]: self.run()
[rank1]: File "/usr/local/lib/python3.10/dist-packages/torch/_dynamo/symbolic_convert.py", line 1337, in run
[rank1]: while self.step():
[rank1]: File "/usr/local/lib/python3.10/dist-packages/torch/_dynamo/symbolic_convert.py", line 1246, in step
[rank1]: self.dispatch_table[inst.opcode](self, inst)
[rank1]: File "/usr/local/lib/python3.10/dist-packages/torch/_dynamo/symbolic_convert.py", line 819, in wrapper
[rank1]: return inner_fn(self, inst)
[rank1]: File "/usr/local/lib/python3.10/dist-packages/torch/_dynamo/symbolic_convert.py", line 2266, in CALL_FUNCTION_EX
[rank1]: self.call_function(fn, argsvars.items, kwargsvars)
[rank1]: File "/usr/local/lib/python3.10/dist-packages/torch/_dynamo/symbolic_convert.py", line 1170, in call_function
[rank1]: self.push(fn.call_function(self, args, kwargs)) # type: ignore[arg-type]
[rank1]: File "/usr/local/lib/python3.10/dist-packages/torch/_dynamo/variables/functions.py", line 1224, in call_function
[rank1]: if self.value.__module__ in known_python_builtin_modules:
[rank1]: torch._dynamo.exc.InternalTorchDynamoError: AttributeError: 'method_descriptor' object has no attribute '__module__'
```
This example also shows that not every object has a `__module__` attribute; the code below crashes because the method descriptor `torch.Tensor.type` lacks that attribute:
```python
import torch
tmp = torch.Tensor.type
print(tmp.__module__)
```
The problem is in SkipFunctionVariable.call_function(), where the code unconditionally accesses `self.value.__module__`.
Many built-in C descriptors (e.g. method_descriptor) do not define that attribute, so the lookup itself raises an AttributeError. Relevant source code below.
```python
class SkipFunctionVariable(VariableTracker):
...
def call_function(
self,
tx: "InstructionTranslator",
args: "list[VariableTracker]",
kwargs: "dict[str, VariableTracker]",
) -> "VariableTracker":
....
except TypeError:
known_python_builtin_modules = {"_abc", "_warnings"}
if self.value.__module__ in known_python_builtin_modules:
explanation = (
f"Dynamo does not know how to trace the Python builtin "
f"`{self.value.__module__}.{qualname}`."
)
hints = [
"If you are attempting to call a logging function (e.g. `_warnings.warn`), "
"you can try adding it to `torch._dynamo.config.reorderable_logging_functions`.",
"Please file an issue on GitHub "
"so the PyTorch team can add support for it. ",
]
elif (
self.value.__module__ is not None
and self.value.__module__.startswith("optree")
):
explanation = f"Dynamo cannot trace optree C/C++ function {self.value.__module__}.{qualname}."
hints = [
" Consider using torch.utils._pytree - "
"https://github.com/pytorch/pytorch/blob/main/torch/utils/_pytree.py"
]
# also warn on it because most users won't see the graph break message
torch._dynamo.utils.warn_once(explanation + "\n" + "\n".join(hints))
else:
explanation = (
f"Dynamo does not know how to trace the builtin `{self.value.__module__}.{qualname}.` "
f"This function is either a Python builtin (e.g. _warnings.warn) "
f"or a third-party C/C++ Python extension (perhaps created with pybind)."
)
hints = [
"If it is a Python builtin, please file an issue on GitHub "
"so the PyTorch team can add support for it and see the next case for a workaround.",
"If it is a third-party C/C++ Python extension, please "
"either wrap it into a PyTorch-understood custom operator "
"(see https://pytorch.org/tutorials/advanced/custom_ops_landing_page.html "
"for more details) or, if it is traceable, use "
"`torch.compiler.allow_in_graph`.",
]
# also warn on it because most users won't see the graph break message
torch._dynamo.utils.warn_once(explanation + "\n" + "\n".join(hints))
if qualname == "allow_in_graph":
explanation = (
"Found an allow_in_graph decorator to a function which "
"is created inside the parent function that is getting "
"compiled. This is not supported for now."
)
hints = []
reason = self.reason if self.reason else "<missing reason>"
unimplemented_v2(
gb_type="Attempted to call function marked as skipped",
context=f"module: {self.value.__module__}, qualname: {qualname}, skip reason: {reason}",
explanation=explanation,
hints=hints,
)
```
Suggested fix:
```python
- if self.value.__module__ in known_python_builtin_modules:
+ module = getattr(self.value, "__module__", None)
+ if module in known_python_builtin_modules:
```
All subsequent uses of `self.value.__module__` in this block should be replaced by `module`.
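A quick standalone demonstration of why the guarded lookup is safer, independent of the dynamo code:
```python
import torch

tmp = torch.Tensor.type   # a method_descriptor
# Per the traceback above, direct access (tmp.__module__) raises
# AttributeError on this descriptor. The guarded lookup degrades
# gracefully instead of crashing:
module = getattr(tmp, "__module__", None)
print(module)             # None
```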
### Versions
Collecting environment information...
PyTorch version: 2.7.0
Is debug build: False
CUDA used to build PyTorch: None
ROCM used to build PyTorch: N/A
OS: Ubuntu 22.04.5 LTS (x86_64)
GCC version: (Ubuntu 11.4.0-1ubuntu1~22.04) 11.4.0
Clang version: Could not collect
CMake version: version 3.22.1
Libc version: glibc-2.35
Python version: 3.10.12 (main, Feb 4 2025, 14:57:36) [GCC 11.4.0] (64-bit runtime)
Python platform: Linux-5.15.0-136-generic-x86_64-with-glibc2.35
Is CUDA available: False
CUDA runtime version: No CUDA
CUDA_MODULE_LOADING set to: N/A
GPU models and configuration: No CUDA
Nvidia driver version: No CUDA
cuDNN version: No CUDA
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 46 bits physical, 57 bits virtual
Byte Order: Little Endian
CPU(s): 160
On-line CPU(s) list: 0-159
Vendor ID: GenuineIntel
Model name: Intel(R) Xeon(R) Platinum 8380 CPU @ 2.30GHz
CPU family: 6
Model: 106
Thread(s) per core: 2
Core(s) per socket: 40
Socket(s): 2
Stepping: 6
CPU max MHz: 3400.0000
CPU min MHz: 800.0000
BogoMIPS: 4600.00
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc art arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc cpuid aperfmperf pni pclmulqdq dtes64 monitor ds_cpl vmx smx est tm2 ssse3 sdbg fma cx16 xtpr pdcm pcid dca sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm 3dnowprefetch cpuid_fault epb cat_l3 invpcid_single intel_ppin ssbd mba ibrs ibpb stibp ibrs_enhanced tpr_shadow vnmi flexpriority ept vpid ept_ad fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid cqm rdt_a avx512f avx512dq rdseed adx smap avx512ifma clflushopt clwb intel_pt avx512cd sha_ni avx512bw avx512vl xsaveopt xsavec xgetbv1 xsaves cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local split_lock_detect wbnoinvd dtherm ida arat pln pts hwp hwp_act_window hwp_epp hwp_pkg_req avx512vbmi umip pku ospke avx512_vbmi2 gfni vaes vpclmulqdq avx512_vnni avx512_bitalg tme avx512_vpopcntdq la57 rdpid fsrm md_clear pconfig flush_l1d arch_capabilities
Virtualization: VT-x
L1d cache: 3.8 MiB (80 instances)
L1i cache: 2.5 MiB (80 instances)
L2 cache: 100 MiB (80 instances)
L3 cache: 120 MiB (2 instances)
NUMA node(s): 2
NUMA node0 CPU(s): 0-39,80-119
NUMA node1 CPU(s): 40-79,120-159
Vulnerability Gather data sampling: Mitigation; Microcode
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Mitigation; Clear CPU buffers; SMT vulnerable
Vulnerability Reg file data sampling: Not affected
Vulnerability Retbleed: Not affected
Vulnerability Spec rstack overflow: Not affected
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl and seccomp
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Enhanced / Automatic IBRS; IBPB conditional; RSB filling; PBRSB-eIBRS SW sequence; BHI SW loop, KVM SW loop
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Versions of relevant libraries:
[pip3] numpy==1.26.4
[pip3] pytorch-lightning==2.5.1
[pip3] torch==2.7.0
[pip3] torch_tb_profiler==0.4.0
[pip3] torchaudio==2.7.0a0
[pip3] torchdata==0.11.0
[pip3] torchmetrics==1.7.0
[pip3] torchtext==0.18.0a0
[pip3] torchvision==0.22.0
[conda] Could not collect
cc @ezyang @gchanan @zou3519 @kadeng @msaroufim @chauhang @penguinwu @voznesenskym @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @amjames
| true
|
3,024,358,898
|
Fixed RELEASE.md typo
|
Ariouz
|
open
|
[
"triaged",
"open source",
"topic: not user facing"
] | 2
|
NONE
|
Fixed two short typos in RELEASE.md
| true
|
3,024,145,651
|
[ATen][CUDA][SDPA] Enable SDPA on sm_121
|
Aidyn-A
|
closed
|
[
"module: cuda",
"triaged",
"open source",
"Merged",
"ciflow/trunk",
"release notes: cuda",
"topic: not user facing",
"module: core aten",
"module: sdpa"
] | 5
|
COLLABORATOR
|
This PR adds support for `sm_121` of the DGX Spark. The `sm_121` is binary compatible with `sm_120` (just like `sm_89` and `sm_86`), therefore a compilation targeting `sm_121` is not required.
cc @ptrblck @msaroufim @eqy @jerryzh168 @manuelcandales @SherlockNoMad @angelayi
| true
|
3,024,102,892
|
setuptools.build_meta:__legacy__ backend is deprecated
|
atupone
|
open
|
[
"triaged",
"open source",
"topic: not user facing"
] | 1
|
CONTRIBUTOR
|
The setuptools.build_meta:__legacy__ backend is deprecated;
see https://projects.gentoo.org/python/guide/qawarn.html#deprecated-pep-517-backends
| true
|
3,023,937,623
|
[cp] dispatch flex_attention_backward to CP impl in TorchDispatchMode
|
XilunWu
|
open
|
[
"oncall: distributed",
"ciflow/inductor",
"module: context parallel",
"release notes: context parallel"
] | 2
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #152311
* #151497
cc @H-Huang @awgu @wanchaol @fegin @fduwjj @wz337 @wconstab @d4l3k
| true
|
3,023,894,833
|
[DCP] failure case of save method
|
XuezheMax
|
open
|
[
"oncall: distributed checkpointing"
] | 0
|
NONE
|
### 🐛 Describe the bug
[#147675](https://github.com/pytorch/pytorch/pull/147675) fixed the issue in dcp `gather_object`. However, a similar bug is still present in `broadcast_object`: https://github.com/pytorch/pytorch/blob/13966d0bf55f858f7512c8f4258900a9289ed01b/torch/distributed/checkpoint/utils.py#L122
I manually fixed this bug but still hit the following failure when saving an FSDP model. In the code sample below, I create two FSDP models on 8 GPUs: the first on ranks [0, 2, 4, 6] and the second on ranks [1, 3, 5, 7]. But when I use `dcp.save` to save these two models separately, the job hangs and fails.
command line:
```bash
NGPU=8; torchrun --nproc_per_node=$NGPU test_dcp.py
```
output
```bash
W0428 06:10:32.341000 1791284 site-packages/torch/distributed/run.py:766] *****************************************
W0428 06:10:32.341000 1791284 site-packages/torch/distributed/run.py:766] Setting OMP_NUM_THREADS environment variable for each process to be 1 in default, to avoid your system being overloaded, please further tune the variable for optimal performance in your application as needed.
W0428 06:10:32.341000 1791284 site-packages/torch/distributed/run.py:766] *****************************************
Run launched with torchrun, local rank: 3
Run launched with torchrun, local rank: 1
Run launched with torchrun, local rank: 7
Run launched with torchrun, local rank: 2
Run launched with torchrun, local rank: 0
Run launched with torchrun, local rank: 4
Run launched with torchrun, local rank: 5
Run launched with torchrun, local rank: 6
Global rank: 0 -- data parallel rank: 0/4 -- tensor parallel rank: 0/2 -- context parallel rank: 0/1
Global rank: 2 -- data parallel rank: 1/4 -- tensor parallel rank: 0/2 -- context parallel rank: 0/1
Global rank: 7 -- data parallel rank: 3/4 -- tensor parallel rank: 1/2 -- context parallel rank: 0/1
Global rank: 6 -- data parallel rank: 3/4 -- tensor parallel rank: 0/2 -- context parallel rank: 0/1
Global rank: 3 -- data parallel rank: 1/4 -- tensor parallel rank: 1/2 -- context parallel rank: 0/1
Global rank: 4 -- data parallel rank: 2/4 -- tensor parallel rank: 0/2 -- context parallel rank: 0/1
Global rank: 5 -- data parallel rank: 2/4 -- tensor parallel rank: 1/2 -- context parallel rank: 0/1
Global rank: 1 -- data parallel rank: 0/4 -- tensor parallel rank: 1/2 -- context parallel rank: 0/1
Global rank: 0 -- tensor parallel rank: 0/2 -- hsdp group: [0, 2, 4, 6]
Global rank: 4 -- tensor parallel rank: 0/2 -- hsdp group: [0, 2, 4, 6]
Global rank: 2 -- tensor parallel rank: 0/2 -- hsdp group: [0, 2, 4, 6]
Global rank: 6 -- tensor parallel rank: 0/2 -- hsdp group: [0, 2, 4, 6]
Global rank: 3 -- tensor parallel rank: 1/2 -- hsdp group: [1, 3, 5, 7]
Global rank: 5 -- tensor parallel rank: 1/2 -- hsdp group: [1, 3, 5, 7]
Global rank: 7 -- tensor parallel rank: 1/2 -- hsdp group: [1, 3, 5, 7]
Global rank: 1 -- tensor parallel rank: 1/2 -- hsdp group: [1, 3, 5, 7]
Global rank: 0 -- Checkpoint done.
Global rank: 2 -- Checkpoint done.
Global rank: 4 -- Checkpoint done.
Global rank: 6 -- Checkpoint done.
[rank3]: Traceback (most recent call last):
[rank3]: File "/lustrefs/users/xuezhe.ma/projects/gecko/tests/test_dcp.py", line 272, in <module>
[rank3]: dcp.save(sharded_model_state, checkpoint_id=model_path, process_group=hsdp_group)
[rank3]: File "/lustrefs/users/xuezhe.ma/miniconda3/envs/gecko2.7.0_cu12.8/lib/python3.10/site-packages/torch/distributed/checkpoint/logger.py", line 87, in wrapper
[rank3]: result = func(*args, **kwargs)
[rank3]: File "/lustrefs/users/xuezhe.ma/miniconda3/envs/gecko2.7.0_cu12.8/lib/python3.10/site-packages/torch/distributed/checkpoint/utils.py", line 465, in inner_func
[rank3]: return func(*args, **kwargs)
[rank3]: File "/lustrefs/users/xuezhe.ma/miniconda3/envs/gecko2.7.0_cu12.8/lib/python3.10/site-packages/torch/distributed/checkpoint/state_dict_saver.py", line 176, in save
[rank3]: return _save_state_dict(
[rank3]: File "/lustrefs/users/xuezhe.ma/miniconda3/envs/gecko2.7.0_cu12.8/lib/python3.10/site-packages/torch/distributed/checkpoint/state_dict_saver.py", line 350, in _save_state_dict
[rank3]: central_plan: SavePlan = distW.reduce_scatter("plan", local_step, global_step)
[rank3]: File "/lustrefs/users/xuezhe.ma/miniconda3/envs/gecko2.7.0_cu12.8/lib/python3.10/site-packages/torch/distributed/checkpoint/utils.py", line 196, in reduce_scatter
[rank3]: all_data = self.gather_object(local_data)
[rank3]: File "/lustrefs/users/xuezhe.ma/miniconda3/envs/gecko2.7.0_cu12.8/lib/python3.10/site-packages/torch/distributed/checkpoint/utils.py", line 135, in gather_object
[rank3]: dist.gather_object(
[rank3]: File "/lustrefs/users/xuezhe.ma/miniconda3/envs/gecko2.7.0_cu12.8/lib/python3.10/site-packages/torch/distributed/c10d_logger.py", line 81, in wrapper
[rank3]: return func(*args, **kwargs)
[rank3]: File "/lustrefs/users/xuezhe.ma/miniconda3/envs/gecko2.7.0_cu12.8/lib/python3.10/site-packages/torch/distributed/distributed_c10d.py", line 3153, in gather_object
[rank3]: all_gather(object_size_list, local_size, group=group)
[rank3]: File "/lustrefs/users/xuezhe.ma/miniconda3/envs/gecko2.7.0_cu12.8/lib/python3.10/site-packages/torch/distributed/c10d_logger.py", line 81, in wrapper
[rank3]: return func(*args, **kwargs)
[rank3]: File "/lustrefs/users/xuezhe.ma/miniconda3/envs/gecko2.7.0_cu12.8/lib/python3.10/site-packages/torch/distributed/distributed_c10d.py", line 3706, in all_gather
[rank3]: return handle_torch_function(
[rank3]: File "/lustrefs/users/xuezhe.ma/miniconda3/envs/gecko2.7.0_cu12.8/lib/python3.10/site-packages/torch/overrides.py", line 1721, in handle_torch_function
[rank3]: result = mode.__torch_function__(public_api, types, args, kwargs)
[rank3]: File "/lustrefs/users/xuezhe.ma/miniconda3/envs/gecko2.7.0_cu12.8/lib/python3.10/site-packages/torch/utils/_device.py", line 104, in __torch_function__
[rank3]: return func(*args, **kwargs)
[rank3]: File "/lustrefs/users/xuezhe.ma/miniconda3/envs/gecko2.7.0_cu12.8/lib/python3.10/site-packages/torch/distributed/c10d_logger.py", line 81, in wrapper
[rank3]: return func(*args, **kwargs)
[rank3]: File "/lustrefs/users/xuezhe.ma/miniconda3/envs/gecko2.7.0_cu12.8/lib/python3.10/site-packages/torch/distributed/distributed_c10d.py", line 3728, in all_gather
[rank3]: work = group.allgather([tensor_list], [tensor])
[rank3]: torch.distributed.DistBackendError: NCCL error in: /pytorch/torch/csrc/distributed/c10d/NCCLUtils.cpp:77, remote process exited or there was a network error, NCCL version 2.26.2
[rank3]: ncclRemoteError: A call failed possibly due to a network error or a remote process exiting prematurely.
[rank3]: Last error:
[rank3]: socketPollConnect: connect returned Connection refused, exceeded error retry count (35)
```
```python
import sys
import math
import datetime
from typing import Dict, Tuple, Any
import logging
from pathlib import Path
import contextlib
import os
import numpy as np
import torch
import torch.nn as nn
import torch.nn.functional as F
from torch.distributed.device_mesh import DeviceMesh
from torch.distributed.fsdp import FullyShardedDataParallel
import torch.distributed.checkpoint as dcp
from torch.distributed.fsdp.wrap import enable_wrap, wrap
from torch.distributed.fsdp import MixedPrecision, ShardingStrategy
from torch.distributed.checkpoint.state_dict import (
get_model_state_dict,
get_optimizer_state_dict,
set_model_state_dict,
set_optimizer_state_dict,
StateDictOptions
)
logger = logging.getLogger()
@contextlib.contextmanager
def create_on_gpu():
torch.set_default_device("cuda")
try:
yield
finally:
torch.set_default_device("cpu")
def initialize_logger() -> logging.Logger:
# log everything
logger = logging.getLogger()
logger.setLevel(logging.NOTSET)
# stdout: everything
stdout_handler = logging.StreamHandler(sys.stdout)
stdout_handler.setLevel(logging.NOTSET)
# stderr: warnings / errors and above
stderr_handler = logging.StreamHandler(sys.stderr)
stderr_handler.setLevel(logging.WARNING)
# set stream handlers
logger.handlers.clear()
assert len(logger.handlers) == 0, logger.handlers
logger.handlers.append(stdout_handler)
logger.handlers.append(stderr_handler)
return logger
class Linear(nn.Module):
def __init__(
self,
in_features: int,
out_features: int,
tensor_parallel_rank: int,
tensor_parallel_size: int,
) -> None:
super(Linear, self).__init__()
# Keep input parameters
self.in_features = in_features
self.out_features = out_features
# Divide the weight matrix along the last dimension.
assert out_features % tensor_parallel_size == 0
self.output_size_per_partition = out_features // tensor_parallel_size
self.weight = nn.Parameter(torch.Tensor(self.output_size_per_partition, self.in_features))
# Initialize master weight
master_weight = torch.empty(out_features, in_features, dtype=self.weight.dtype, requires_grad=False)
nn.init.kaiming_normal_(master_weight, a=math.sqrt(5.0))
# Split and copy
weight_list = torch.split(master_weight, self.output_size_per_partition, dim=0)
my_weight = weight_list[tensor_parallel_rank]
with torch.no_grad():
self.weight.copy_(my_weight)
def forward(self, x: torch.Tensor) -> torch.Tensor: # type: ignore
return F.linear(x, self.weight)
def init_torch_distributed(timeout: int = 1800) -> Tuple[int, int]:
"""
Handle single and multi-GPU / multi-node / SLURM jobs.
Initialize the following variables:
- global_rank
- world_size
"""
global_rank = int(os.environ["RANK"])
world_size = int(os.environ["WORLD_SIZE"])
local_rank = int(os.environ["LOCAL_RANK"])
logger.info(f"Run launched with torchrun, local rank: {local_rank}")
# set GPU device
assert 0 <= local_rank < 8
torch.cuda.set_device(local_rank)
torch.distributed.init_process_group(
init_method="env://",
backend="nccl",
timeout=datetime.timedelta(seconds=timeout),
)
assert global_rank == torch.distributed.get_rank()
assert world_size == torch.distributed.get_world_size()
# sanity check
assert 0 <= local_rank <= global_rank < world_size
return global_rank, world_size
def get_parallel_ranks(
global_rank: int,
tensor_parallel_size: int,
context_parallel_size: int
) -> Tuple[int, int, int]:
tensor_parallel_rank = global_rank % tensor_parallel_size
global_rank = global_rank // tensor_parallel_size
context_parallel_rank = global_rank % context_parallel_size
data_parallel_rank = global_rank // context_parallel_size
return data_parallel_rank, context_parallel_rank, tensor_parallel_rank
def get_device_mesh(
data_parallel_size: int,
context_parallel_size: int,
tensor_parallel_size: int,
) -> DeviceMesh:
world_size = torch.distributed.get_world_size()
assert world_size == data_parallel_size * context_parallel_size * tensor_parallel_size
if context_parallel_size == 1:
if tensor_parallel_size == 1:
mesh = torch.arange(world_size)
device_mesh = DeviceMesh(
device_type="cuda",
mesh=mesh,
mesh_dim_names=("dp",),
)
else:
mesh = torch.arange(world_size).view(data_parallel_size, tensor_parallel_size)
device_mesh = DeviceMesh(
device_type="cuda",
mesh=mesh,
mesh_dim_names=("dp", "tp"),
)["dp"]
else:
if tensor_parallel_size == 1:
mesh = torch.arange(world_size).view(data_parallel_size, context_parallel_size)
mesh = mesh.swapdims(0, 1)
device_mesh = DeviceMesh(
device_type="cuda",
mesh=mesh,
mesh_dim_names=("cp", "dp"),
)
else:
mesh = torch.arange(world_size).view(data_parallel_size, context_parallel_size, tensor_parallel_size)
mesh = mesh.swapdims(0, 1)
device_mesh = DeviceMesh(
device_type="cuda",
mesh=mesh,
mesh_dim_names=("cp", "dp", "tp"),
)["cp", "dp"]
return device_mesh
def build_model(
in_features: int,
out_features: int,
data_parallel_size: int,
tensor_parallel_rank: int,
tensor_parallel_size: int,
context_parallel_size: int
):
if context_parallel_size == 1:
sharding_strategy = ShardingStrategy.FULL_SHARD
else:
sharding_strategy = ShardingStrategy.HYBRID_SHARD
device_mesh = get_device_mesh(data_parallel_size, context_parallel_size, tensor_parallel_size)
mixed_precision = MixedPrecision(param_dtype=torch.float32, reduce_dtype=torch.float32, buffer_dtype=torch.float32)
fsdp_cfg = {
"sharding_strategy": sharding_strategy,
"mixed_precision": mixed_precision,
"sync_module_states": False,
"use_orig_params": False, # flatten parameters
"device_mesh": device_mesh
}
with create_on_gpu():
with enable_wrap(wrapper_cls=FullyShardedDataParallel, **fsdp_cfg):
model = Linear(in_features, out_features, tensor_parallel_rank, tensor_parallel_size)
model = wrap(model.cuda())
model.train()
return model
def get_model_state(model, full_state: bool = False) -> Dict[str, Any]:
state_dict_options = StateDictOptions(
full_state_dict=full_state,
cpu_offload=True,
)
return get_model_state_dict(model, options=state_dict_options)
def set_model_state(model, model_state_dict, full_state: bool = False):
state_dict_options = StateDictOptions(
full_state_dict=full_state,
cpu_offload=True,
)
set_model_state_dict(model, model_state_dict, options=state_dict_options)
if __name__ == "__main__":
seed = 42
np.random.seed(seed)
torch.manual_seed(seed)
torch.cuda.manual_seed(seed)
initialize_logger()
global_rank, world_size = init_torch_distributed()
tensor_parallel_size = 2
context_parallel_size = 1
data_parallel_size = world_size // (tensor_parallel_size * context_parallel_size)
dp_rank, cp_rank, tp_rank = get_parallel_ranks(global_rank, tensor_parallel_size, context_parallel_size)
logger.info(
f"Global rank: {global_rank} -- "
f"data parallel rank: {dp_rank}/{data_parallel_size} -- "
f"tensor parallel rank: {tp_rank}/{tensor_parallel_size} -- "
f"context parallel rank: {cp_rank}/{context_parallel_size}"
)
in_features = 1024
out_features = 4096
model = build_model(in_features, out_features, data_parallel_size, tp_rank, tensor_parallel_size, context_parallel_size)
x = torch.ones(2, in_features).cuda()
y = model(x).sum()
print(y)
checkpoint_dir = Path("saved_models/ckpt")
model_path = checkpoint_dir / f"sharded_model.tp{tp_rank:02d}"
sharded_model_state = get_model_state(model, full_state=False)
groups = torch.LongTensor(range(world_size)).reshape(-1, tensor_parallel_size)[:, tp_rank].tolist()
hsdp_group = torch.distributed.new_group(groups)
logger.info(
f"Global rank: {global_rank} -- "
f"tensor parallel rank: {tp_rank}/{tensor_parallel_size} -- "
f"hsdp group: {groups}"
)
dcp.save(sharded_model_state, checkpoint_id=model_path, process_group=hsdp_group)
logger.info(f"Global rank: {global_rank} -- Checkpoint done.")
torch.distributed.destroy_process_group()
```
### Versions
2.7.0
cc @LucasLLC @pradeepfn
| true
|
3,023,887,057
|
Softmax Decomp Causes Incorrect Gradients when Using `torch.compile` with `F.multi_head_attention_forward`
|
defaultd661
|
open
|
[
"high priority",
"triaged",
"module: correctness (silent)",
"oncall: pt2",
"module: decompositions",
"module: aotdispatch",
"module: sdpa",
"ubn"
] | 5
|
NONE
|
### 🐛 Describe the bug
When using `torch.compile` to compile a model that internally calls `torch.nn.functional.multi_head_attention_forward`, the computed gradients differ significantly from the ones obtained via eager mode.
### To Reproduce
```
import torch
import torch.nn as nn
import torch.nn.functional as F
class ReproMultihead(nn.Module):
def __init__(self):
super().__init__()
self.embed_dim = 256
self.num_heads = self.embed_dim // 64
        self.in_proj_weight = nn.Parameter(torch.empty(3 * self.embed_dim, self.embed_dim))
        self.in_proj_bias = nn.Parameter(torch.empty(3 * self.embed_dim))
        self.out_proj_weight = nn.Parameter(torch.empty(self.embed_dim, self.embed_dim))
self.out_proj_bias = nn.Parameter(torch.empty(self.embed_dim))
nn.init.constant_(self.in_proj_weight, 0.1)
nn.init.constant_(self.in_proj_bias, 0.1)
nn.init.constant_(self.out_proj_weight, 0.1)
nn.init.constant_(self.out_proj_bias, 0.1)
    def forward(self, x):
        x_t = x.transpose(0, 1)
        attn_output, _ = F.multi_head_attention_forward(
            query=x_t, key=x_t, value=x_t,
            embed_dim_to_check=self.embed_dim,
            num_heads=self.num_heads,
            in_proj_weight=self.in_proj_weight,
            in_proj_bias=self.in_proj_bias,
            bias_k=None, bias_v=None, add_zero_attn=False,
            dropout_p=0.0,
            out_proj_weight=self.out_proj_weight,
            out_proj_bias=self.out_proj_bias,
            training=True, key_padding_mask=None, need_weights=False,
            attn_mask=None, use_separate_proj_weight=False,
            q_proj_weight=None, k_proj_weight=None, v_proj_weight=None,
            static_k=None, static_v=None,
            average_attn_weights=True, is_causal=False,
        )
        return attn_output.transpose(0, 1)
def test_bug():
torch.set_default_device('cuda')
torch.manual_seed(0)
model = ReproMultihead().cuda()
compiled_model = ReproMultihead().cuda()
compiled_model = torch.compile(compiled_model)
x = torch.randn((1, 512, 256), device='cuda', requires_grad=True)
x_compiled = x.clone().detach().requires_grad_(True)
out_eager = model(x)
out_compiled = compiled_model(x_compiled)
out_eager.sum().backward()
out_compiled.sum().backward()
    weight_diff = torch.max(torch.abs(model.in_proj_weight.grad - compiled_model.in_proj_weight.grad)).item()
    print('weight_diff =', weight_diff)
    bias_diff = torch.max(torch.abs(model.in_proj_bias.grad - compiled_model.in_proj_bias.grad)).item()
    print('bias_diff =', bias_diff)
if __name__ == '__main__':
test_bug()
```
### Output
```
weight_diff = 0.130126953125
bias_diff = 0.12890625
```
### Versions
PyTorch version: 2.7.0+cu126
Is debug build: False
CUDA used to build PyTorch: 12.6
ROCM used to build PyTorch: N/A
cc @ezyang @gchanan @zou3519 @kadeng @msaroufim @chauhang @penguinwu @SherlockNoMad @bdhirsh
| true
|
3,023,870,078
|
bizarre behavior with torch module's Attribute Error
|
ZiyaoLi
|
open
|
[
"module: nn",
"triaged"
] | 2
|
NONE
|
### 🐛 Describe the bug
when executing the following code:
```python
import torch
class A(torch.nn.Module):
def __init__(self):
super().__init__()
@property
def foo(self):
return self.bar # attr error
a = A()
print(a.foo)
```
I obtain
```bash
Traceback (most recent call last):
File "test.py", line 12, in <module>
print(a.foo)
File "xxx/python3.8/site-packages/torch/nn/modules/module.py", line 1729, in __getattr__
raise AttributeError(f"'{type(self).__name__}' object has no attribute '{name}'")
AttributeError: 'A' object has no attribute 'foo'
```
but the expected behavior would be `AttributeError: 'A' object has no attribute 'bar'`.
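For comparison, a plain Python class (no `nn.Module`) propagates the property's internal `AttributeError` unchanged and names the attribute that is actually missing — a minimal sketch:
```python
class B:
    @property
    def foo(self):
        return self.bar  # attr error, same shape as above

B().foo  # AttributeError: 'B' object has no attribute 'bar'
```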
### Versions
The version is seemingly irrelevant. Anyhow, I use
[conda] torch 2.4.0+cu118 pypi_0 pypi
cc @albanD @mruberry @jbschlosser @walterddr @mikaylagawarecki
| true
|
3,023,831,198
|
Recompile issue after fp8 conversion
|
shiyang-weng
|
open
|
[
"triaged",
"oncall: pt2",
"module: dynamo"
] | 3
|
CONTRIBUTOR
|
### 🐛 Describe the bug
We are enabling fp8 on an MLP. To support fp8 we need to write some conversions.
After the conversions, we hit the following recompile issue:
torch._dynamo.exc.RecompileError: Recompiling function forward in test_fp8.py:38
triggered by the following guard failure(s):
- 2/0: tensor 'input' size mismatch at index 1. expected 13, actual 512
In our test, the input size of the first layer is 13 and the input size of the second layer is 512.
```python
import os
os.environ["OMP_NUM_THREADS"] = "1"
os.environ["TORCHINDUCTOR_FREEZING"] = "1"
os.environ["TORCH_COMPILE_DEBUG"] = "1"
os.environ["TORCHDYNAMO_PRINT_GUARD_FAILS"] = "1"
from typing import Callable, List, Optional, Union
import torch
from torch import nn
class Perceptron(torch.nn.Module):
def __init__(
self,
in_size: int,
out_size: int,
bias: bool = True,
activation: Union[
torch.nn.Module,
Callable[[torch.Tensor], torch.Tensor],
] = torch.relu,
device: Optional[torch.device] = None,
dtype: torch.dtype = torch.float32,
) -> None:
super().__init__()
self._out_size = out_size
self._in_size = in_size
self._linear: nn.Linear = nn.Linear(
self._in_size,
self._out_size,
bias=bias,
device=device,
dtype=dtype,
)
self._activation_fn: Callable[[torch.Tensor], torch.Tensor] = activation
def forward(self, input: torch.Tensor) -> torch.Tensor:
return self._activation_fn(self._linear(input))
class MLP(torch.nn.Module):
def __init__(
self,
in_size: int,
layer_sizes: List[int],
bias: bool = True,
activation: Union[
str,
Callable[[], torch.nn.Module],
torch.nn.Module,
Callable[[torch.Tensor], torch.Tensor],
] = torch.relu,
device: Optional[torch.device] = None,
dtype: torch.dtype = torch.float32,
) -> None:
super().__init__()
if activation == "relu":
activation = torch.relu
elif activation == "sigmoid":
activation = torch.sigmoid
if not isinstance(activation, str):
self._mlp: torch.nn.Module = torch.nn.Sequential(
*[
Perceptron(
layer_sizes[i - 1] if i > 0 else in_size,
layer_sizes[i],
bias=bias,
activation=activation,
device=device,
dtype=dtype,
)
for i in range(len(layer_sizes))
]
)
else:
assert (
ValueError
), "This MLP only support str version activation function of relu, sigmoid, and swish_layernorm"
def forward(self, input: torch.Tensor) -> torch.Tensor:
return self._mlp(input)
class DenseArch(nn.Module):
def __init__(
self,
in_features: int,
layer_sizes: List[int],
device: Optional[torch.device] = None,
) -> None:
super().__init__()
self.model: nn.Module = MLP(
in_features, layer_sizes, bias=True, activation="relu", device=device
)
def forward(self, features: torch.Tensor) -> torch.Tensor:
return self.model(features)
def inc_convert(model):
model.eval()
dtype = torch.float
from torch.ao.quantization.fx._decomposed import quantize_per_tensor, dequantize_per_tensor
from torch.nn import functional as F
class FP8QDQLinear(torch.nn.Module):
def __init__(self, mod):
super().__init__()
self.mod = mod
def forward(self, input):
weight = dequantize_per_tensor(
input=self.mod.weight.data,
scale=self.mod.weight_scale,
zero_point=0,
quant_min=torch.finfo(torch.float8_e4m3fn).min,
quant_max=torch.finfo(torch.float8_e4m3fn).max,
dtype=self.mod.weight.data.dtype,
out_dtype=dtype,
)
q_input = quantize_per_tensor(
input=input,
scale=self.mod.scale,
zero_point=0,
quant_min=torch.finfo(torch.float8_e4m3fn).min,
quant_max=torch.finfo(torch.float8_e4m3fn).max,
dtype=torch.float8_e4m3fn,
)
dq_input = dequantize_per_tensor(
input=q_input,
scale=self.mod.scale,
zero_point=0,
quant_min=torch.finfo(torch.float8_e4m3fn).min,
quant_max=torch.finfo(torch.float8_e4m3fn).max,
dtype=q_input.dtype,
out_dtype=dtype,
)
out = torch.nn.functional.linear(dq_input, weight, self.mod.bias)
return out
hook_handles = []
import json
from collections import namedtuple
def generate_model_info(model):
mod_inst_info = namedtuple("ModInstInfo", ["name", "parent"])
parent_child_mod_dict = {}
def create_mod_info_recursion(parent):
for name, mod in parent.named_children():
parent_child_mod_dict[mod] = mod_inst_info(name=name, parent=parent)
create_mod_info_recursion(mod)
create_mod_info_recursion(model)
return parent_child_mod_dict
parent_child_mod_dict = generate_model_info(model)
with torch.no_grad():
for name, mod in model.named_modules():
mod_type_str = mod.__class__.__name__
if mod_type_str not in ["Linear", "EmbeddingBag"]:
continue
print(mod_type_str, name)
param = mod.weight
xmax = torch.max(param)
weight_scale = xmax / torch.finfo(torch.float8_e4m3fn).max
setattr(mod, "weight_scale", weight_scale)
q_param = torch.clamp((param / weight_scale), torch.finfo(torch.float8_e4m3fn).min, torch.finfo(torch.float8_e4m3fn).max).to(torch.float8_e4m3fn)
mod.weight.data = q_param
if mod_type_str in ["Linear"]:
scale = [1 / torch.finfo(torch.float8_e4m3fn).max]
assert len(scale) == 1
setattr(mod, "scale", scale[0])
patched_mod = FP8QDQLinear(mod)
parent = parent_child_mod_dict[mod].parent
name = parent_child_mod_dict[mod].name
setattr(parent, name, patched_mod)
from torch._inductor import config as inductor_config
from torch._dynamo import config
config.error_on_recompile = True
inductor_config.cpp_wrapper = True
inductor_config.max_autotune = False
inductor_config.freezing = True
inductor_config.aot_inductor.debug_compile = True
model = DenseArch(13,[512,256,128], "cpu")
example_inputs = (torch.randn(128, 13),)
print(model)
with torch.no_grad():
inc_convert(model)
ref = model(*example_inputs)
model = torch.compile(model)
model(*example_inputs)
test = model(*example_inputs)
```
### Versions
pytorch2.8 master branch
cc @chauhang @penguinwu @voznesenskym @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @amjames
| true
|
3,023,771,137
|
[Cutlass] Fix int check in example tensor creation
|
mlazos
|
closed
|
[
"Merged",
"module: inductor",
"ciflow/inductor",
"release notes: inductor"
] | 6
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #150910
* #152390
* #150909
* #150908
* #150907
* #151406
* #150906
* #151713
* #151405
* #150905
* __->__ #152306
* #152305
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
| true
|
3,023,771,067
|
[Cutlass] Remove unused dtype conversion map
|
mlazos
|
closed
|
[
"Merged",
"ciflow/trunk",
"module: inductor",
"ciflow/inductor",
"release notes: inductor"
] | 8
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #150910
* #152390
* #150909
* #150908
* #150907
* #151406
* #150906
* #151713
* #151405
* #150905
* #152306
* __->__ #152305
Previously merged:
* #150904
* #150903
* #150346
* #150345
* #150344
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
| true
|
3,023,758,310
|
Fix StringCoordView::substr after D73379178 / #151810
|
swolchok
|
closed
|
[
"oncall: jit",
"fb-exported",
"Merged",
"ciflow/trunk",
"release notes: jit"
] | 4
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #152304
Received complaint that we broke something. After a bunch of debugging, landed on this test + fix.
Differential Revision: [D73754877](https://our.internmc.facebook.com/intern/diff/D73754877/)
**NOTE FOR REVIEWERS**: This PR has internal Meta-specific changes or comments, please review them on [Phabricator](https://our.internmc.facebook.com/intern/diff/D73754877/)!
cc @EikanWang @jgong5 @wenzhe-nrv @sanchitintel
| true
|
3,023,748,330
|
Fix redistribute new_local_tensor be None case
|
wanchaol
|
closed
|
[
"oncall: distributed",
"open source",
"Merged",
"ciflow/trunk",
"ciflow/inductor",
"release notes: distributed (dtensor)"
] | 3
|
COLLABORATOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #152303
As titled: we can just set `new_local_tensor` to the local tensor and remove the `None` check, since there are cases where no transformation is needed (i.e. `src_placements` and `dst_placements` are the same) and we still want to return the original `local_tensor`.
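A minimal single-rank sketch of the no-op case described above (assuming a CPU gloo setup purely for illustration; the actual change is inside DTensor's redistribute internals):
```python
import os
import torch
import torch.distributed as dist
from torch.distributed.device_mesh import init_device_mesh
from torch.distributed.tensor import Shard, distribute_tensor

# src and dst placements are identical, so no transformation is needed and the
# local tensor should pass through unchanged.
os.environ.setdefault("MASTER_ADDR", "127.0.0.1")
os.environ.setdefault("MASTER_PORT", "29500")
dist.init_process_group("gloo", rank=0, world_size=1)
mesh = init_device_mesh("cpu", (1,))
dt = distribute_tensor(torch.randn(4, 4), mesh, [Shard(0)])
same = dt.redistribute(mesh, [Shard(0)])
print(torch.equal(dt.to_local(), same.to_local()))  # expected: True
dist.destroy_process_group()
```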
cc @H-Huang @awgu @fegin @fduwjj @wz337 @wconstab @d4l3k
| true
|
3,023,698,392
|
NCCL out of memory error after updating to PyTorch 2.7
|
BaconGabe
|
open
|
[
"oncall: distributed",
"triaged",
"module: nccl",
"module: regression"
] | 14
|
NONE
|
### 🐛 Describe the bug
After updating to PyTorch 2.7, initializing the process group with NCCL and calling `DDP(model, device_ids=[rank])` results in an out-of-memory error. This makes no sense: it happens even when I am using extremely small amounts of memory, and DDP with NCCL worked perfectly fine on the same code before the update.
Here is the error:
```
W0428 00:47:04.140000 51980 .venv/lib/python3.12/site-packages/torch/multiprocessing/spawn.py:169] Terminating process 52051 via signal SIGTERM
-- Process 1 terminated with the following error:
Traceback (most recent call last):
File "/home/.../.venv/lib/python3.12/site-packages/torch/multiprocessing/spawn.py", line 90, in _wrap
fn(i, *args)
File "/home/.../example.py", line 39, in demo_basic
ddp_model = DDP(model, device_ids=[rank])
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/.../.venv/lib/python3.12/site-packages/torch/nn/parallel/distributed.py", line 835, in __init__
_verify_param_shape_across_processes(self.process_group, parameters)
File "/home/.../.venv/lib/python3.12/site-packages/torch/distributed/utils.py", line 282, in _verify_param_shape_across_processes
return dist._verify_params_across_processes(process_group, tensors, logger)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
torch.distributed.DistBackendError: NCCL error in: /pytorch/torch/csrc/distributed/c10d/ProcessGroupNCCL.cpp:3353, unhandled cuda error (run with NCCL_DEBUG=INFO for details), NCCL version 2.26.2
ncclUnhandledCudaError: Call to CUDA function failed.
Last error:
Cuda failure 2 'out of memory'
```
The DDP demo code provided by PyTorch produces the same error:
```python
import os
import sys
import tempfile
import torch
import torch.distributed as dist
import torch.nn as nn
import torch.optim as optim
import torch.multiprocessing as mp
from torch.nn.parallel import DistributedDataParallel as DDP
def setup(rank, world_size):
os.environ['MASTER_ADDR'] = 'localhost'
os.environ['MASTER_PORT'] = '12355'
# initialize the process group
dist.init_process_group("nccl", rank=rank, world_size=world_size)
def cleanup():
dist.destroy_process_group()
class ToyModel(nn.Module):
def __init__(self):
super(ToyModel, self).__init__()
self.net1 = nn.Linear(10, 10)
self.relu = nn.ReLU()
self.net2 = nn.Linear(10, 5)
def forward(self, x):
return self.net2(self.relu(self.net1(x)))
def demo_basic(rank, world_size):
print(f"Running basic DDP example on rank {rank}.")
setup(rank, world_size)
# create model and move it to GPU with id rank
model = ToyModel().to(rank)
ddp_model = DDP(model, device_ids=[rank]) # HERE IS WHERE THE ERROR OCCURS
loss_fn = nn.MSELoss()
optimizer = optim.SGD(ddp_model.parameters(), lr=0.001)
optimizer.zero_grad()
outputs = ddp_model(torch.randn(20, 10))
labels = torch.randn(20, 5).to(rank)
loss_fn(outputs, labels).backward()
optimizer.step()
cleanup()
print(f"Finished running basic DDP example on rank {rank}.")
def run_demo(demo_fn, world_size):
mp.spawn(demo_fn,
args=(world_size,),
nprocs=world_size,
join=True)
if __name__ == "__main__":
run_demo(demo_basic, 2)
```
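The error message suggests running with `NCCL_DEBUG=INFO`; one minimal way to do that with the spawn-based demo above (a sketch — set it before `mp.spawn` so the child ranks inherit it):
```python
import os

# Children spawned by mp.spawn inherit the parent's environment, so each rank
# will print its NCCL/CUDA initialization details to stdout.
os.environ["NCCL_DEBUG"] = "INFO"
```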
### Versions
PyTorch version: 2.7.0+cu128
Is debug build: False
CUDA used to build PyTorch: 12.8
ROCM used to build PyTorch: N/A
OS: Ubuntu 24.04.2 LTS (x86_64)
GCC version: (Ubuntu 13.3.0-6ubuntu2~24.04) 13.3.0
Clang version: Could not collect
CMake version: Could not collect
Libc version: glibc-2.39
Python version: 3.12.3 (main, Feb 4 2025, 14:48:35) [GCC 13.3.0] (64-bit runtime)
Python platform: Linux-5.15.167.4-microsoft-standard-WSL2-x86_64-with-glibc2.39
Is CUDA available: True
CUDA runtime version: Could not collect
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration:
GPU 0: NVIDIA GeForce RTX 4090
GPU 1: NVIDIA GeForce RTX 5090
GPU 2: NVIDIA GeForce RTX 4090
Nvidia driver version: 576.02
cuDNN version: Could not collect
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 48 bits physical, 48 bits virtual
Byte Order: Little Endian
CPU(s): 128
On-line CPU(s) list: 0-127
Vendor ID: AuthenticAMD
Model name: AMD Ryzen Threadripper 7980X 64-Cores
CPU family: 25
Model: 24
Thread(s) per core: 2
Core(s) per socket: 64
Socket(s): 1
Stepping: 1
BogoMIPS: 6390.51
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ht syscall nx mmxext fxsr_opt pdpe1gb rdtscp lm constant_tsc rep_good nopl tsc_reliable nonstop_tsc cpuid extd_apicid pni pclmulqdq ssse3 fma cx16 pcid sse4_1 sse4_2 movbe popcnt aes xsave avx f16c rdrand hypervisor lahf_lm cmp_legacy svm cr8_legacy abm sse4a misalignsse 3dnowprefetch osvw topoext perfctr_core invpcid_single ssbd ibrs ibpb stibp vmmcall fsgsbase bmi1 avx2 smep bmi2 erms invpcid avx512f avx512dq rdseed adx smap avx512ifma clflushopt clwb avx512cd sha_ni avx512bw avx512vl xsaveopt xsavec xgetbv1 xsaves avx512_bf16 clzero xsaveerptr arat npt nrip_save tsc_scale vmcb_clean flushbyasid decodeassists pausefilter pfthreshold v_vmsave_vmload avx512vbmi umip avx512_vbmi2 gfni vaes vpclmulqdq avx512_vnni avx512_bitalg avx512_vpopcntdq rdpid fsrm
Virtualization: AMD-V
Hypervisor vendor: Microsoft
Virtualization type: full
L1d cache: 2 MiB (64 instances)
L1i cache: 2 MiB (64 instances)
L2 cache: 64 MiB (64 instances)
L3 cache: 32 MiB (1 instance)
Vulnerability Gather data sampling: Not affected
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Not affected
Vulnerability Reg file data sampling: Not affected
Vulnerability Retbleed: Not affected
Vulnerability Spec rstack overflow: Mitigation; safe RET
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl and seccomp
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Retpolines; IBPB conditional; IBRS_FW; STIBP always-on; RSB filling; PBRSB-eIBRS Not affected; BHI Not affected
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Versions of relevant libraries:
[pip3] numpy==2.1.2
[pip3] nvidia-cublas-cu12==12.8.3.14
[pip3] nvidia-cuda-cupti-cu12==12.8.57
[pip3] nvidia-cuda-nvrtc-cu12==12.8.61
[pip3] nvidia-cuda-runtime-cu12==12.8.57
[pip3] nvidia-cudnn-cu12==9.7.1.26
[pip3] nvidia-cufft-cu12==11.3.3.41
[pip3] nvidia-curand-cu12==10.3.9.55
[pip3] nvidia-cusolver-cu12==11.7.2.55
[pip3] nvidia-cusparse-cu12==12.5.7.53
[pip3] nvidia-cusparselt-cu12==0.6.3
[pip3] nvidia-nccl-cu12==2.26.2
[pip3] nvidia-nvjitlink-cu12==12.8.61
[pip3] nvidia-nvtx-cu12==12.8.55
[pip3] pytorch-lightning==2.5.1.post0
[pip3] pytorch_optimizer==3.5.1
[pip3] torch==2.7.0+cu128
[pip3] torchaudio==2.7.0+cu128
[pip3] torchmetrics==1.7.1
[pip3] torchvision==0.22.0+cu128
[pip3] triton==3.3.0
[conda] Could not collect
cc @H-Huang @awgu @wanchaol @fegin @fduwjj @wz337 @wconstab @d4l3k
| true
|
3,023,620,150
|
Unexpected result from `torch.xpu.is_bf16_supported()` when XPU is unavailable
|
defaultd661
|
closed
|
[
"triaged",
"module: xpu"
] | 1
|
NONE
|
### 🐛 Describe the bug
When `torch.xpu.is_available()` returns `False`, calling `torch.xpu.is_bf16_supported()` still returns `True`, which is inconsistent with the expected behavior (should be `False`).
### To Reproduce
```
import torch
def test_bug():
print('torch.xpu.is_available() =', torch.xpu.is_available())
if not torch.xpu.is_available():
result = torch.xpu.is_bf16_supported()
print('result =', result)
if __name__ == '__main__':
test_bug()
```
### Output
```
torch.xpu.is_available() = False
result = True
```
### Versions
PyTorch version: 2.7.0+cu126
Is debug build: False
CUDA used to build PyTorch: 12.6
ROCM used to build PyTorch: N/A
cc @gujinghui @EikanWang @fengyuan14 @guangyey
| true
|
3,023,614,912
|
Unexpected behavior when using dist.all_reduce(x, op=dist.ReduceOp.SUM)
|
fhk357869050
|
open
|
[
"oncall: distributed",
"triaged",
"module: c10d"
] | 1
|
NONE
|
### 🐛 Describe the bug
```
import torch
import torch.distributed as dist
from torch.multiprocessing import Process
import numpy as np
def exec_op(rank):
dist.init_process_group(backend='gloo', rank=rank, world_size=2, init_method=f'tcp://127.0.0.1:40001')
np.random.seed(1024 + rank)
x = np.random.uniform(-65504, 65504, [m, k]).astype(np.float16)
x = torch.from_numpy(x)
print(f"rank:{rank} before all_reduce x[7205]:{x[7205]}")
dist.all_reduce(x, op=dist.ReduceOp.SUM)
print(f"rank:{rank} after all_reduce x[7205]:{x[7205]}")
if __name__ == '__main__':
m, k = [24063328, 1]
p_list = []
for g_rank in range(2):
p = Process(target=exec_op, args=(g_rank,))
p_list.append(p)
for p in p_list:
p.start()
for p in p_list:
p.join()
```

About 0.007% of the points didn't match.

### Versions
python3.8.5
torch2.4.0
cc @H-Huang @awgu @wanchaol @fegin @fduwjj @wz337 @wconstab @d4l3k
| true
|
3,023,609,155
|
`torch.compile()` produces incorrect results for `asinh_()` operation on large/small values
|
defaultd661
|
open
|
[
"high priority",
"triaged",
"module: correctness (silent)",
"module: edge cases",
"oncall: pt2",
"module: inductor",
"oncall: cpu inductor"
] | 2
|
NONE
|
### 🐛 Describe the bug
### To Reproduce
```
import torch
import numpy as np
def test_bug():
x = torch.tensor([[-1e+30, 1e+30, -5e+28, 5e+28, -7.5e+29, 7.5e+29,
-2e+30, 2e+30, 0.0]], dtype=torch.float32).repeat(3, 1)
eager_tensor = x.clone()
eager_tensor.asinh_()
eager_np = eager_tensor.numpy()
print('eager_np =', eager_np)
compiled_tensor = x.clone()
compiled_func = torch.compile(lambda t: t.asinh_())
compiled_func(compiled_tensor)
compiled_np = compiled_tensor.numpy()
print('compiled_np =', compiled_np)
if __name__ == '__main__':
test_bug()
```
### Output
```
eager_np = [[-69.7707 69.7707 -66.77496 66.77496 -69.48302 69.48302
-70.463844 70.463844 0. ]
[-69.7707 69.7707 -66.77496 66.77496 -69.48302 69.48302
-70.463844 70.463844 0. ]
[-69.7707 69.7707 -66.77496 66.77496 -69.48302 69.48302
-70.463844 70.463844 0. ]]
compiled_np = [[-inf inf -inf inf -inf inf -inf inf 0.]
[-inf inf -inf inf -inf inf -inf inf 0.]
[-inf inf -inf inf -inf inf -inf inf 0.]]
```
### Versions
PyTorch version: 2.7.0+cu126
Is debug build: False
CUDA used to build PyTorch: 12.6
ROCM used to build PyTorch: N/A
cc @ezyang @gchanan @zou3519 @kadeng @msaroufim @chauhang @penguinwu @voznesenskym @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @chenyang78 @muchulee8 @amjames @aakhundov
| true
|
3,023,598,455
|
Enable the AMP precision with freezing for CPU nightly test
|
LifengWang
|
open
|
[
"triaged",
"open source",
"release notes: releng"
] | 1
|
CONTRIBUTOR
|
Hi, @desertfire. Since we recommend that users use AMP precision and run with `--freezing` for CPU x86 Inductor inference, we suggest adding the AMP freezing test to the CPU nightly tests.
cc @chuanqi129 @zxd1997066
| true
|
3,023,598,223
|
Flex attention: batch-index-dependent block mask causes error with changing batch size
|
zhihanyang2022
|
closed
|
[
"triaged",
"oncall: pt2",
"module: higher order operators",
"module: pt2-dispatcher",
"module: flex attention"
] | 1
|
NONE
|
### 🐛 Describe the bug
I'm trying to do attention with a *custom* attention mask that *depends on the batch index*.
My square attention mask has the following structure:
- The first `n` rows are causal
- Everything afterwards is bidirectional
`n` is different for each batch index, and is specified through the tensor of integers named `cutoffs`.
During training, the last batch might be smaller. This causes an error related to flex attention.
_The error goes away_ if I remove `mode="max-autotune-no-cudagraphs"`. I'm hoping to include it (or other alternatives) because it's the best practice for speedup.
Below is a minimal example of the error:
```python
from torch.nn.attention.flex_attention import flex_attention, create_block_mask
import torch
import torch.nn.functional as F
# # Flags required to enable jit fusion kernels
torch._C._jit_set_profiling_mode(False)
torch._C._jit_set_profiling_executor(False)
torch._C._jit_override_can_fuse_on_cpu(True)
torch._C._jit_override_can_fuse_on_gpu(True)
@torch.compile(fullgraph=True, mode="max-autotune-no-cudagraphs")
def fused_flex_attention(q, k, v, mask=None):
return flex_attention(q, k, v, block_mask=mask)
def create_mixed_diffusion_mask(cutoffs):
def mixed_diffusion_mask(b, h, q_idx, kv_idx):
causal = q_idx >= kv_idx
block_identity = q_idx >= cutoffs[b]
return causal | block_identity
return mixed_diffusion_mask
large_batch_size = 256
large_qkv = torch.randn(large_batch_size, 8, 3, 128, 32).cuda()
large_cutoffs = torch.randint(0 ,128, (large_batch_size,)).cuda()
small_batch_size = 64
small_qkv = torch.randn(small_batch_size, 8, 3, 128, 32).cuda()
small_cutoffs = torch.randint(0 ,128, (small_batch_size,)).cuda()
block_mask = create_block_mask(create_mixed_diffusion_mask(large_cutoffs), B=large_batch_size, H=None, Q_LEN=128, KV_LEN=128)
fused_flex_attention(large_qkv[:, :, 0], large_qkv[:, :, 1], large_qkv[:, :, 2], mask=block_mask)
block_mask = create_block_mask(create_mixed_diffusion_mask(small_cutoffs), B=small_batch_size, H=None, Q_LEN=128, KV_LEN=128)
fused_flex_attention(small_qkv[:, :, 0], small_qkv[:, :, 1], small_qkv[:, :, 2], mask=block_mask)
```
```
Traceback (most recent call last):
File "/share/thickstun/zhihan/ELMO/test_flex_attention_2.py", line 38, in <module>
fused_flex_attention(small_qkv[:, :, 0], small_qkv[:, :, 1], small_qkv[:, :, 2], mask=block_mask)
File "/share/thickstun/zhihan/.conda/bd3lm/lib/python3.9/site-packages/torch/_dynamo/eval_frame.py", line 663, in _fn
raise e.remove_dynamo_frames() from None # see TORCHDYNAMO_VERBOSE=1
File "/share/thickstun/zhihan/.conda/bd3lm/lib/python3.9/site-packages/torch/_inductor/compile_fx.py", line 760, in _compile_fx_inner
raise InductorError(e, currentframe()).with_traceback(
File "/share/thickstun/zhihan/.conda/bd3lm/lib/python3.9/site-packages/torch/_inductor/compile_fx.py", line 745, in _compile_fx_inner
mb_compiled_graph = fx_codegen_and_compile(
File "/share/thickstun/zhihan/.conda/bd3lm/lib/python3.9/site-packages/torch/_inductor/compile_fx.py", line 1293, in fx_codegen_and_compile
return scheme.codegen_and_compile(gm, example_inputs, inputs_to_check, graph_kwargs)
File "/share/thickstun/zhihan/.conda/bd3lm/lib/python3.9/site-packages/torch/_inductor/compile_fx.py", line 1119, in codegen_and_compile
graph.run(*example_inputs)
File "/share/thickstun/zhihan/.conda/bd3lm/lib/python3.9/site-packages/torch/_inductor/graph.py", line 877, in run
return super().run(*args)
File "/share/thickstun/zhihan/.conda/bd3lm/lib/python3.9/site-packages/torch/fx/interpreter.py", line 171, in run
self.env[node] = self.run_node(node)
File "/share/thickstun/zhihan/.conda/bd3lm/lib/python3.9/site-packages/torch/_inductor/graph.py", line 1527, in run_node
result = super().run_node(n)
File "/share/thickstun/zhihan/.conda/bd3lm/lib/python3.9/site-packages/torch/fx/interpreter.py", line 240, in run_node
return getattr(self, n.op)(n.target, args, kwargs)
File "/share/thickstun/zhihan/.conda/bd3lm/lib/python3.9/site-packages/torch/_inductor/graph.py", line 1198, in call_function
raise LoweringException(e, target, args, kwargs).with_traceback(
File "/share/thickstun/zhihan/.conda/bd3lm/lib/python3.9/site-packages/torch/_inductor/graph.py", line 1188, in call_function
out = lowerings[target](*args, **kwargs) # type: ignore[index]
File "/share/thickstun/zhihan/.conda/bd3lm/lib/python3.9/site-packages/torch/_inductor/lowering.py", line 465, in wrapped
out = decomp_fn(*args, **kwargs)
File "/share/thickstun/zhihan/.conda/bd3lm/lib/python3.9/site-packages/torch/_inductor/kernel/flex_attention.py", line 1533, in flex_attention
autotune_select_algorithm(
File "/share/thickstun/zhihan/.conda/bd3lm/lib/python3.9/site-packages/torch/_inductor/select_algorithm.py", line 2344, in autotune_select_algorithm
return _ALGORITHM_SELECTOR_CACHE(*args, **kwargs)
File "/share/thickstun/zhihan/.conda/bd3lm/lib/python3.9/site-packages/torch/_inductor/select_algorithm.py", line 1734, in __call__
inputs_key = create_inputs_key(input_nodes)
File "/share/thickstun/zhihan/.conda/bd3lm/lib/python3.9/site-packages/torch/_inductor/select_algorithm.py", line 1624, in create_inputs_key
return repr([AlgorithmSelectorCache.key_of(x) for x in input_nodes])
File "/share/thickstun/zhihan/.conda/bd3lm/lib/python3.9/site-packages/torch/_inductor/select_algorithm.py", line 1624, in <listcomp>
return repr([AlgorithmSelectorCache.key_of(x) for x in input_nodes])
File "/share/thickstun/zhihan/.conda/bd3lm/lib/python3.9/site-packages/torch/_inductor/select_algorithm.py", line 2306, in key_of
node.get_device().type,
torch._inductor.exc.InductorError: LoweringException: AttributeError: 'Symbol' object has no attribute 'get_device'
target: flex_attention
args[0]: TensorBox(StorageBox(
InputBuffer(name='arg2_1', layout=FixedLayout('cuda:0', torch.float32, size=[s1, 8, 128, 32], stride=[8*s2, s2, 32, 1]))
))
args[1]: TensorBox(StorageBox(
InputBuffer(name='arg4_1', layout=FixedLayout('cuda:0', torch.float32, size=[s1, 8, 128, 32], stride=[8*s2, s2, 32, 1]))
))
args[2]: TensorBox(StorageBox(
InputBuffer(name='arg7_1', layout=FixedLayout('cuda:0', torch.float32, size=[s5, 8, 128, 32], stride=[8*s2, s2, 32, 1]))
))
args[3]: Subgraph(name='sdpa_score0', graph_module=<lambda>(), graph=None)
args[4]: (128, 128, TensorBox(StorageBox(
InputBuffer(name='arg11_1', layout=FixedLayout('cuda:0', torch.int32, size=[s8, 1, 1], stride=[1, 1, 1]))
)), TensorBox(StorageBox(
InputBuffer(name='arg9_1', layout=FixedLayout('cuda:0', torch.int32, size=[s7, 1, 1, 1], stride=[1, 1, 1, 1]))
)), TensorBox(StorageBox(
InputBuffer(name='arg15_1', layout=FixedLayout('cuda:0', torch.int32, size=[s10, 1, 1], stride=[1, 1, 1]))
)), TensorBox(StorageBox(
InputBuffer(name='arg17_1', layout=FixedLayout('cuda:0', torch.int32, size=[s11, 1, 1, 1], stride=[1, 1, 1, 1]))
)), TensorBox(StorageBox(
InputBuffer(name='arg19_1', layout=FixedLayout('cuda:0', torch.int32, size=[s12, 1, 1], stride=[1, 1, 1]))
)), TensorBox(StorageBox(
InputBuffer(name='arg21_1', layout=FixedLayout('cuda:0', torch.int32, size=[s13, 1, 1, 1], stride=[1, 1, 1, 1]))
)), TensorBox(StorageBox(
InputBuffer(name='arg23_1', layout=FixedLayout('cuda:0', torch.int32, size=[s14, 1, 1], stride=[1, 1, 1]))
)), TensorBox(StorageBox(
InputBuffer(name='arg25_1', layout=FixedLayout('cuda:0', torch.int32, size=[s15, 1, 1, 1], stride=[1, 1, 1, 1]))
)), 128, 128, Subgraph(name='sdpa_mask0', graph_module=<lambda>(), graph=None))
args[5]: 0.17677669529663687
args[6]: {'PRESCALE_QK': False, 'ROWS_GUARANTEED_SAFE': False, 'BLOCKS_ARE_CONTIGUOUS': False, 'WRITE_DQ': True, 'OUTPUT_LOGSUMEXP': True}
args[7]: ()
args[8]: (s9, TensorBox(StorageBox(
InputBuffer(name='arg13_1', layout=FixedLayout('cuda:0', torch.int64, size=[s9], stride=[1]))
)))
Set TORCHDYNAMO_VERBOSE=1 for the internal stack trace (please do this especially if you're reporting a bug to PyTorch). For even more developer context, set TORCH_LOGS="+dynamo"
```
### Versions
```
PyTorch version: 2.7.0.dev20250308+cu126
Is debug build: False
CUDA used to build PyTorch: 12.6
ROCM used to build PyTorch: N/A
OS: Ubuntu 20.04.6 LTS (x86_64)
GCC version: (Ubuntu 9.4.0-1ubuntu1~20.04.2) 9.4.0
Clang version: 10.0.0-4ubuntu1
CMake version: version 3.16.3
Libc version: glibc-2.31
Python version: 3.9.21 (main, Dec 11 2024, 16:24:11) [GCC 11.2.0] (64-bit runtime)
Python platform: Linux-5.4.0-205-generic-x86_64-with-glibc2.31
Is CUDA available: True
CUDA runtime version: 12.4.99
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration: GPU 0: NVIDIA RTX 6000 Ada Generation
Nvidia driver version: 530.30.02
cuDNN version: Could not collect
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Byte Order: Little Endian
Address sizes: 52 bits physical, 57 bits virtual
CPU(s): 384
On-line CPU(s) list: 0-383
Thread(s) per core: 2
Core(s) per socket: 96
Socket(s): 2
NUMA node(s): 2
Vendor ID: AuthenticAMD
CPU family: 25
Model: 17
Model name: AMD EPYC 9654 96-Core Processor
Stepping: 1
Frequency boost: enabled
CPU MHz: 1479.987
CPU max MHz: 2400.0000
CPU min MHz: 1500.0000
BogoMIPS: 4800.06
Virtualization: AMD-V
L1d cache: 6 MiB
L1i cache: 6 MiB
L2 cache: 192 MiB
L3 cache: 768 MiB
NUMA node0 CPU(s): 0-95,192-287
NUMA node1 CPU(s): 96-191,288-383
Vulnerability Gather data sampling: Not affected
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Not affected
Vulnerability Retbleed: Not affected
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl and seccomp
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Enhanced / Automatic IBRS; IBPB conditional; STIBP always-on; RSB filling; PBRSB-eIBRS Not affected; BHI Not affected
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ht syscall nx mmxext fxsr_opt pdpe1gb rdtscp lm constant_tsc rep_good nopl nonstop_tsc cpuid extd_apicid aperfmperf pni pclmulqdq monitor ssse3 fma cx16 pcid sse4_1 sse4_2 x2apic movbe popcnt aes xsave avx f16c rdrand lahf_lm cmp_legacy svm extapic cr8_legacy abm sse4a misalignsse 3dnowprefetch osvw ibs skinit wdt tce topoext perfctr_core perfctr_nb bpext perfctr_llc mwaitx cpb cat_l3 cdp_l3 invpcid_single hw_pstate ssbd mba ibrs ibpb stibp ibrs_enhanced vmmcall fsgsbase bmi1 avx2 smep bmi2 erms invpcid cqm rdt_a avx512f avx512dq rdseed adx smap avx512ifma clflushopt clwb avx512cd sha_ni avx512bw avx512vl xsaveopt xsavec xgetbv1 xsaves cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local avx512_bf16 clzero irperf xsaveerptr wbnoinvd arat npt lbrv svm_lock nrip_save tsc_scale vmcb_clean flushbyasid decodeassists pausefilter pfthreshold avic v_vmsave_vmload vgif avx512vbmi umip pku ospke avx512_vbmi2 gfni vaes vpclmulqdq avx512_vnni avx512_bitalg avx512_vpopcntdq rdpid overflow_recov succor smca flush_l1d
Versions of relevant libraries:
[pip3] numpy==1.26.4
[pip3] nvidia-cublas-cu12==12.6.4.1
[pip3] nvidia-cuda-cupti-cu12==12.6.80
[pip3] nvidia-cuda-nvrtc-cu12==12.6.77
[pip3] nvidia-cuda-runtime-cu12==12.6.77
[pip3] nvidia-cudnn-cu12==9.5.1.17
[pip3] nvidia-cufft-cu12==11.3.0.4
[pip3] nvidia-curand-cu12==10.3.7.77
[pip3] nvidia-cusolver-cu12==11.7.1.2
[pip3] nvidia-cusparse-cu12==12.5.4.2
[pip3] nvidia-cusparselt-cu12==0.6.3
[pip3] nvidia-nccl-cu12==2.25.1
[pip3] nvidia-nvjitlink-cu12==12.6.85
[pip3] nvidia-nvtx-cu12==12.6.77
[pip3] pytorch-lightning==2.5.1.post0
[pip3] pytorch-triton==3.2.0+git4b3bb1f8
[pip3] torch==2.7.0.dev20250308+cu126
[pip3] torchaudio==2.6.0.dev20250308+cu126
[pip3] torchmetrics==1.6.2
[pip3] torchvision==0.22.0.dev20250308+cu126
[pip3] triton==3.2.0
[conda] cuda-cudart 12.4.127 h99ab3db_0
[conda] cuda-cudart-dev 12.4.127 h99ab3db_0
[conda] cuda-cudart-dev_linux-64 12.4.127 hd681fbe_0
[conda] cuda-cudart-static 12.4.127 h99ab3db_0
[conda] cuda-cudart-static_linux-64 12.4.127 hd681fbe_0
[conda] cuda-cudart_linux-64 12.4.127 hd681fbe_0
[conda] cuda-cupti 12.4.127 h6a678d5_1
[conda] cuda-cupti-dev 12.4.127 h6a678d5_1
[conda] cuda-libraries 12.4.1 h06a4308_1
[conda] cuda-libraries-dev 12.4.1 h06a4308_1
[conda] cuda-libraries-static 12.4.1 h06a4308_1
[conda] cuda-nvrtc 12.4.127 h99ab3db_1
[conda] cuda-nvrtc-dev 12.4.127 h99ab3db_1
[conda] cuda-nvrtc-static 12.4.127 h99ab3db_1
[conda] cuda-nvtx 12.4.127 h6a678d5_1
[conda] cuda-opencl 12.4.127 h6a678d5_0
[conda] cuda-opencl-dev 12.4.127 h6a678d5_0
[conda] libcublas 12.4.5.8 h99ab3db_1
[conda] libcublas-dev 12.4.5.8 h99ab3db_1
[conda] libcublas-static 12.4.5.8 h99ab3db_1
[conda] libcufft 11.2.1.3 h99ab3db_1
[conda] libcufft-dev 11.2.1.3 h99ab3db_1
[conda] libcufft-static 11.2.1.3 h99ab3db_1
[conda] libcurand 10.3.5.147 h99ab3db_1
[conda] libcurand-dev 10.3.5.147 h99ab3db_1
[conda] libcurand-static 10.3.5.147 h99ab3db_1
[conda] libcusolver 11.6.1.9 h99ab3db_1
[conda] libcusolver-dev 11.6.1.9 h99ab3db_1
[conda] libcusolver-static 11.6.1.9 h99ab3db_1
[conda] libcusparse 12.3.1.170 h99ab3db_1
[conda] libcusparse-dev 12.3.1.170 h99ab3db_1
[conda] libcusparse-static 12.3.1.170 h99ab3db_1
[conda] libnvjitlink 12.4.127 h99ab3db_1
[conda] libnvjitlink-dev 12.4.127 h99ab3db_1
[conda] libnvjitlink-static 12.4.127 h99ab3db_1
[conda] numpy 1.26.4 pypi_0 pypi
[conda] nvidia-cublas-cu12 12.6.4.1 pypi_0 pypi
[conda] nvidia-cuda-cupti-cu12 12.6.80 pypi_0 pypi
[conda] nvidia-cuda-nvrtc-cu12 12.6.77 pypi_0 pypi
[conda] nvidia-cuda-runtime-cu12 12.6.77 pypi_0 pypi
[conda] nvidia-cudnn-cu12 9.5.1.17 pypi_0 pypi
[conda] nvidia-cufft-cu12 11.3.0.4 pypi_0 pypi
[conda] nvidia-curand-cu12 10.3.7.77 pypi_0 pypi
[conda] nvidia-cusolver-cu12 11.7.1.2 pypi_0 pypi
[conda] nvidia-cusparse-cu12 12.5.4.2 pypi_0 pypi
[conda] nvidia-cusparselt-cu12 0.6.3 pypi_0 pypi
[conda] nvidia-nccl-cu12 2.25.1 pypi_0 pypi
[conda] nvidia-nvjitlink-cu12 12.6.85 pypi_0 pypi
[conda] nvidia-nvtx-cu12 12.6.77 pypi_0 pypi
[conda] pytorch-lightning 2.5.1.post0 pypi_0 pypi
[conda] pytorch-triton 3.2.0+git4b3bb1f8 pypi_0 pypi
[conda] torch 2.7.0.dev20250308+cu126 pypi_0 pypi
[conda] torchaudio 2.6.0.dev20250308+cu126 pypi_0 pypi
[conda] torchmetrics 1.6.2 pypi_0 pypi
[conda] torchvision 0.22.0.dev20250308+cu126 pypi_0 pypi
[conda] triton 3.2.0 pypi_0 pypi
```
cc @chauhang @penguinwu @zou3519 @ydwu4 @bdhirsh @Chillee @drisspg @yanboliang @BoyuanFeng
| true
|
3,023,579,086
|
[Break XPU] chunk_cat accuracy failed on XPU Inductor UT.
|
etaf
|
closed
|
[
"triaged",
"module: xpu"
] | 0
|
COLLABORATOR
|
### 🐛 Describe the bug
Since PR #151263 landed, the Inductor UTs related to `chunk_cat` have been hitting accuracy failures.
The root cause is that #151263 started supporting non-contiguous inputs, which breaks the old assumption that all inputs are contiguous. The implementation in torch-xpu-ops still relies on that assumption and therefore fails.
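For context, a tiny sketch of the assumption in question (hypothetical shapes, not the actual failing kernel): a transposed view is non-contiguous, so a kernel that indexes the storage as if it were contiguous reads the wrong elements unless it honors strides or copies first.
```python
import torch

x = torch.randn(4, 8)
y = x.t()                 # non-contiguous view of shape (8, 4)
print(y.is_contiguous())  # False
print(y.stride())         # (1, 8), not the (4, 1) a contiguous (8, 4) tensor would have
```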
### Versions
PyTorch version: 2.8.0a0+git817239
Is debug build: False
CUDA used to build PyTorch: None
ROCM used to build PyTorch: N/A
OS: Ubuntu 22.04.5 LTS (x86_64)
GCC version: (Ubuntu 11.4.0-1ubuntu1~22.04) 11.4.0
Clang version: Could not collect
CMake version: version 3.31.2
Libc version: glibc-2.35
Python version: 3.10.15 | packaged by conda-forge | (main, Oct 16 2024, 01:24:24) [GCC 13.3.0] (64-bit runtime)
Python platform: Linux-5.15.47+prerelease24.6.13-x86_64-with-glibc2.35
Is CUDA available: False
CUDA runtime version: No CUDA
CUDA_MODULE_LOADING set to: N/A
GPU models and configuration: No CUDA
Nvidia driver version: No CUDA
cuDNN version: No CUDA
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
cc @gujinghui @EikanWang @fengyuan14 @guangyey
| true
|
3,023,578,067
|
`vmap` not working on `torch.arange`, `torch.scalar_tensor`, and `torch.ones`
|
defaultd661
|
open
|
[
"triaged",
"module: vmap",
"module: functorch"
] | 0
|
NONE
|
### 🐛 Describe the bug
# torch.arange
### To Reproduce
```
import torch
from functools import partial
def test_bug():
batched_arange = torch.vmap(partial(torch.arange, step=1))
start = torch.tensor([1, 2, 3], dtype=torch.int64)
end = torch.tensor([25, 26, 27], dtype=torch.int64)
batched_arange(start, end)
if __name__ == '__main__':
test_bug()
```
### Output
```
RuntimeError: vmap: It looks like you're calling .item() on a Tensor. We don't support vmap over calling .item() on a Tensor, please try to rewrite what you're doing with other operations. If error is occurring somewhere inside PyTorch internals, please file a bug report.
```
# torch.ones
### To Reproduce
```
import torch
from functools import partial
def test_bug():
batched_shapes = torch.tensor([[2, 3], [3, 4], [4, 5]], dtype=torch.int64)
def ones_from_shape(shape):
return torch.ones(shape[0], shape[1])
batched_ones = torch.vmap(ones_from_shape)
batched_ones(batched_shapes)
if __name__ == '__main__':
test_bug()
```
### Output
```
RuntimeError: vmap: It looks like you're calling .item() on a Tensor. We don't support vmap over calling .item() on a Tensor, please try to rewrite what you're doing with other operations. If error is occurring somewhere inside PyTorch internals, please file a bug report.
```
# torch.scalar_tensor
### To Reproduce
```
import torch
def test_bug():
batched_scalar = torch.vmap(torch.scalar_tensor)
values = torch.tensor([1.0, 2.0, 3.0])
batched_scalar(values)
if __name__ == '__main__':
test_bug()
```
### Output
```
RuntimeError: vmap: It looks like you're calling .item() on a Tensor. We don't support vmap over calling .item() on a Tensor, please try to rewrite what you're doing with other operations. If error is occurring somewhere inside PyTorch internals, please file a bug report.
```
### Versions
PyTorch version: 2.7.0+cu126
Is debug build: False
CUDA used to build PyTorch: 12.6
ROCM used to build PyTorch: N/A
cc @zou3519 @Chillee @samdow @kshitij12345
| true
|
3,023,563,083
|
Unexpected overflow behavior when using `torch.addcmul`
|
defaultd661
|
open
|
[
"module: cpu",
"triaged"
] | 2
|
NONE
|
### 🐛 Describe the bug
This issue is similar to the one reported in [#98691](https://github.com/pytorch/pytorch/issues/98691), where operations on mixed precision tensors lead to unexpected overflow behaviors.
### To Reproduce
```
def test_bug():
import torch
input_tensor = torch.zeros([1], dtype=torch.float16, device='cpu')
tensor1 = torch.tensor([0.01], dtype=torch.float16, device='cpu')
tensor2 = torch.tensor(65536, dtype=torch.float32, device='cpu')
result = torch.addcmul(input_tensor, tensor1, tensor2, value=1)
print(result)
if __name__ == '__main__':
test_bug()
```
### Output
```
tensor([inf], dtype=torch.float16)
```
### Versions
PyTorch version: 2.7.0+cu126
Is debug build: False
CUDA used to build PyTorch: 12.6
ROCM used to build PyTorch: N/A
cc @jgong5 @mingfeima @XiaobingSuper @sanchitintel @ashokei @jingxu10 @jerryzh168
| true
|
3,023,536,059
|
`torch.sparse.log_softmax` output mismatch between CPU and CUDA
|
defaultd661
|
open
|
[
"module: sparse",
"triaged",
"topic: bug fixes"
] | 1
|
NONE
|
### 🐛 Describe the bug
When applying `torch.sparse.log_softmax` on a sparse tensor, the outputs on CPU and CUDA are inconsistent.
### To Reproduce
```
import torch
from torch.sparse import log_softmax as sparse_log_softmax
def test_bug():
a = torch.rand(4, 3)
b = a - 10000000.0
b_sparse = b.to_sparse()
cpu_out_sparse = sparse_log_softmax(b_sparse, dim=1).to_dense()
print('cpu_out_sparse =', cpu_out_sparse)
b_sparse_cuda = b_sparse.to('cuda')
    cuda_out_sparse = sparse_log_softmax(b_sparse_cuda, dim=1).to('cpu').to_dense()
print('cuda_out_sparse =', cuda_out_sparse)
if __name__ == '__main__':
test_bug()
```
### Output
```
cpu_out_sparse = tensor([[-2., -1., -1.],
[-1., -1., -1.],
[-1., -1., -1.],
[-1., -1., -2.]])
cuda_out_sparse = tensor([[-1.8620, -0.8620, -0.8620],
[-1.0986, -1.0986, -1.0986],
[-1.0986, -1.0986, -1.0986],
[-0.8620, -0.8620, -1.8620]])
```
### Versions
PyTorch version: 2.7.0+cu126
Is debug build: False
CUDA used to build PyTorch: 12.6
ROCM used to build PyTorch: N/A
cc @alexsamardzic @nikitaved @pearu @cpuhrsch @amjames @bhosmer @jcaip
| true
|
3,023,534,525
|
`torch==2.6` broke `nn.Module.dtype` typing
|
jamesbraza
|
open
|
[
"module: typing",
"triaged",
"module: regression"
] | 0
|
CONTRIBUTOR
|
### 🐛 Describe the bug
With the below Python 3.12 code, `torch==2.5.1`, and `mypy==1.15.0` there are no type errors:
```python
import torch
from torch import nn
module: nn.Module
with torch.autocast(device_type=module.device.type, dtype=module.dtype):
...
```
Then with Python 3.13, `torch==2.6.0` or `torch==2.7.0`, and `mypy==1.15.0` there are type errors:
```none
a.py:5:33: error: Argument "device_type" to "autocast" has incompatible type "overloaded function | Callable[[dtype | str], Module]"; expected "str" [arg-type]
with torch.autocast(device_type=module.device.type, dtype=module.dtype):
^~~~~~~~~~~~~~~~~~
a.py:5:59: error: Argument "dtype" to "autocast" has incompatible type "Tensor | Module"; expected "dtype | None" [arg-type]
with torch.autocast(device_type=module.device.type, dtype=module.dtype):
^~~~~~~~~~~~
```
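One way to keep the call site type-clean until the stubs are fixed is to narrow the attribute types explicitly — a workaround sketch only, not a fix for the regression:
```python
from typing import cast

import torch
from torch import nn

module: nn.Module
device = cast(torch.device, module.device)
dtype = cast(torch.dtype, module.dtype)
with torch.autocast(device_type=device.type, dtype=dtype):
    ...
```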
### Versions
Collecting environment information...
PyTorch version: 2.7.0
Is debug build: False
CUDA used to build PyTorch: None
ROCM used to build PyTorch: N/A
OS: macOS 15.3.2 (arm64)
GCC version: Could not collect
Clang version: 17.0.0 (clang-1700.0.13.3)
CMake version: version 4.0.1
Libc version: N/A
Python version: 3.13.1 (main, Dec 9 2024, 11:00:45) [Clang 16.0.0 (clang-1600.0.26.4)] (64-bit runtime)
Python platform: macOS-15.3.2-arm64-arm-64bit-Mach-O
Is CUDA available: False
CUDA runtime version: No CUDA
CUDA_MODULE_LOADING set to: N/A
GPU models and configuration: No CUDA
Nvidia driver version: No CUDA
cuDNN version: No CUDA
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Apple M3 Pro
Versions of relevant libraries:
[pip3] mypy==1.15.0
[pip3] mypy-extensions==1.0.0
[pip3] numpy==2.1.3
[pip3] torch==2.7.0
[conda] Could not collect
cc @ezyang @malfet @xuzhao9 @gramster
| true
|
3,023,511,854
|
Windows CUDA Build Failure: Ambiguous std in cuda_vectorized_test.cu (CUDA 12.6/MSVC 2019)
|
jifferyfeng
|
closed
|
[
"oncall: pt2"
] | 0
|
NONE
|
### 🐛 Describe the bug
When building PyTorch from source on Windows, the compilation fails with the following error in cuda_vectorized_test.cu:
C:\Program Files (x86)\Microsoft Visual Studio\2019\Community\VC\Tools\MSVC\14.29.30133\include\xtree(1394): error C2872: “std”: ambiguous symbol
Error Context:
The issue occurs during the compilation of cuda_vectorized_test.cu with NVCC (CUDA 12.6). The error suggests a namespace conflict with std, where the compiler cannot resolve whether std refers to the standard library or another definition (possibly from C10/cuda/CUDAStream.h).
Affected Components:
File: aten/src/ATen/test/cuda_vectorized_test.cu
Related Headers:
C10/cuda/CUDAStream.h
Google Test headers (gtest/internal/gtest-internal.h)
MSVC STL headers (xtree, map)
Build Environment:
OS: Windows
CUDA: v12.6
MSVC: 2019 Community (v14.29.30133)
PyTorch Commit: Likely recent (from source checkout)
### Error logs
C:\Program Files (x86)\Microsoft Visual Studio\2019\Community\VC\Tools\MSVC\14.29.30133\include\xtree(1394): error C2872: “std”:
C:/pytorch\c10/cuda/CUDAStream.h(261): note: may be “std”
C:\Program Files (x86)\Microsoft Visual Studio\2019\Community\VC\Tools\MSVC\14.29.30133\include\xtree(1394): note: or “std”
C:\Program Files (x86)\Microsoft Visual Studio\2019\Community\VC\Tools\MSVC\14.29.30133\include\xtree(1391): note: “unsigned __int64 std::_Tree<std::_Tmap_traits<_Kty,_Ty,_Pr,_Alloc,false>>::count(const std::basic_string<char,std::char_traits<char>,std::allocator<char>> &) const”
with
[
_Kty=std::basic_string<char,std::char_traits<char>,std::allocator<char>>,
_Ty=testing::internal::CodeLocation,
_Pr=std::less<void>,
_Alloc=std::allocator<std::pair<const std::basic_string<char,std::char_traits<char>,std::allocator<char>>,testing::internal::CodeLocation>>
]
C:/pytorch/cmake/../third_party/googletest/googletest/include\gtest/internal/gtest-internal.h(603): note: “unsigned __int64 std::_Tree<std::_Tmap_traits<_Kty,_Ty,_Pr,_Alloc,false>>::count(const std::basic_string<char,std::char_traits<char>,std::allocator<char>> &) const”
with
[
_Kty=std::basic_string<char,std::char_traits<char>,std::allocator<char>>,
_Ty=testing::internal::CodeLocation,
_Pr=std::less<void>,
_Alloc=std::allocator<std::pair<const std::basic_string<char,std::char_traits<char>,std::allocator<char>>,testing::internal::CodeLocation>>
]
C:\Program Files (x86)\Microsoft Visual Studio\2019\Community\VC\Tools\MSVC\14.29.30133\include\map(73): note: “std::_Tree<std::_Tmap_traits<_Kty,_Ty,_Pr,_Alloc,false>>”
with
[
_Kty=std::basic_string<char,std::char_traits<char>,std::allocator<char>>,
_Ty=testing::internal::CodeLocation,
_Pr=std::less<void>,
_Alloc=std::allocator<std::pair<const std::basic_string<char,std::char_traits<char>,std::allocator<char>>,testing::internal::CodeLocation>>
]
C:/pytorch/cmake/../third_party/googletest/googletest/include\gtest/internal/gtest-internal.h(623): note: “std::map<std::basic_string<char,std::char_traits<char>,std::allocator<char>>,testing::internal::CodeLocation,std::less<void>,std::allocator<std::pair<const std::basic_string<char,std::char_traits<char>,std::allocator<char>>,testing::internal::CodeLocation>>>”
[7267/7574] Building CXX object test_jit\CMakeFiles\test_jit.dir\test_class_import.cpp.obj
ninja: build stopped: subcommand failed.
### Versions
1.27.0
cc @chauhang @penguinwu
| true
|
3,023,487,281
|
[Intel GPU][PT2.8]scaled_dot_product_attention returns wrong output
|
LuFinch
|
open
|
[
"triaged",
"module: xpu"
] | 1
|
CONTRIBUTOR
|
### 🐛 Describe the bug
Using a nightly build of PyTorch 2.8, this sample code returns wrong output:
```
import torch
from datasets import load_dataset
from transformers import pipeline, Wav2Vec2Processor
model_id = "facebook/hubert-large-ls960-ft"
device = "xpu"
torch_dtype = torch.float16
generator = pipeline(
"automatic-speech-recognition",
model=model_id,
device=device,
torch_dtype=torch_dtype,
)
ds = load_dataset("patrickvonplaten/librispeech_asr_dummy", "clean", split="validation", trust_remote_code=True)
input_data = ds[0]['audio']['array']
with torch.inference_mode():
output = generator(input_data)
print(f"output: {output}")
```
With the stock stable PyTorch 2.6 release, the output is correct. After investigation, we found that the output of the `scaled_dot_product_attention` API is wrong.
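A standalone check along these lines, as a sketch (assumes an XPU build; fp16 rounding alone should give only a small difference):
```python
import torch
import torch.nn.functional as F

q, k, v = (torch.randn(1, 8, 128, 64) for _ in range(3))
ref = F.scaled_dot_product_attention(q, k, v)
out = F.scaled_dot_product_attention(
    q.to("xpu", torch.float16),
    k.to("xpu", torch.float16),
    v.to("xpu", torch.float16),
).float().cpu()
print((ref - out).abs().max())  # should be small, not O(1)
```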
### Versions
```
PyTorch version: 2.6.0+xpu
Is debug build: False
CUDA used to build PyTorch: None
ROCM used to build PyTorch: N/A
OS: Ubuntu 22.04.5 LTS (x86_64)
GCC version: (Ubuntu 11.4.0-1ubuntu1~22.04) 11.4.0
Clang version: Could not collect
CMake version: version 3.22.1
Libc version: glibc-2.35
Python version: 3.10.12 (main, Feb 4 2025, 14:57:36) [GCC 11.4.0] (64-bit runtime)
Python platform: Linux-5.15.0-134-generic-x86_64-with-glibc2.35
Is CUDA available: False
CUDA runtime version: No CUDA
CUDA_MODULE_LOADING set to: N/A
GPU models and configuration: No CUDA
Nvidia driver version: No CUDA
cuDNN version: No CUDA
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 47 bits physical, 57 bits virtual
Byte Order: Little Endian
CPU(s): 224
On-line CPU(s) list: 0-223
Vendor ID: GenuineIntel
Model name: Intel(R) Xeon(R) Platinum 8480+
CPU family: 6
Model: 143
Thread(s) per core: 2
Core(s) per socket: 56
Socket(s): 2
Stepping: 6
CPU max MHz: 3800.0000
CPU min MHz: 800.0000
BogoMIPS: 4000.00
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc art arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc cpuid aperfmperf tsc_known_freq pni pclmulqdq dtes64 ds_cpl vmx smx est tm2 ssse3 sdbg fma cx16 xtpr pdcm pcid dca sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm 3dnowprefetch cpuid_fault epb cat_l3 cat_l2 cdp_l3 invpcid_single intel_ppin cdp_l2 ssbd mba ibrs ibpb stibp ibrs_enhanced tpr_shadow vnmi flexpriority ept vpid ept_ad fsgsbase tsc_adjust sgx bmi1 avx2 smep bmi2 erms invpcid cqm rdt_a avx512f avx512dq rdseed adx smap avx512ifma clflushopt clwb intel_pt avx512cd sha_ni avx512bw avx512vl xsaveopt xsavec xgetbv1 xsaves cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local split_lock_detect avx_vnni avx512_bf16 wbnoinvd dtherm ida arat pln pts hwp hwp_act_window hwp_epp hwp_pkg_req avx512vbmi umip pku ospke waitpkg avx512_vbmi2 gfni vaes vpclmulqdq avx512_vnni avx512_bitalg tme avx512_vpopcntdq la57 rdpid bus_lock_detect cldemote movdiri movdir64b enqcmd sgx_lc fsrm md_clear serialize tsxldtrk pconfig arch_lbr amx_bf16 avx512_fp16 amx_tile amx_int8 flush_l1d arch_capabilities
Virtualization: VT-x
L1d cache: 5.3 MiB (112 instances)
L1i cache: 3.5 MiB (112 instances)
L2 cache: 224 MiB (112 instances)
L3 cache: 210 MiB (2 instances)
NUMA node(s): 2
NUMA node0 CPU(s): 0-55,112-167
NUMA node1 CPU(s): 56-111,168-223
Vulnerability Gather data sampling: Not affected
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Not affected
Vulnerability Reg file data sampling: Not affected
Vulnerability Retbleed: Not affected
Vulnerability Spec rstack overflow: Not affected
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl and seccomp
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Enhanced / Automatic IBRS; IBPB conditional; RSB filling; PBRSB-eIBRS SW sequence; BHI BHI_DIS_S
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Versions of relevant libraries:
[pip3] mypy-extensions==1.0.0
[pip3] numpy==1.26.4
[pip3] onnx==1.17.0
[pip3] pytorch-triton-xpu==3.2.0
[pip3] torch==2.6.0+xpu
[pip3] torchaudio==2.6.0+xpu
[pip3] torchvision==0.21.0+xpu
[conda] Could not collect
```
cc @gujinghui @EikanWang @fengyuan14 @guangyey
| true
|
3,023,478,343
|
[inductor] Skip isinf check for FP8 E4M3 dtype
|
sarckk
|
open
|
[
"module: inductor",
"ciflow/inductor",
"release notes: inductor",
"release notes: inductor (aoti)"
] | 2
|
MEMBER
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #152289
Both Float8E4M3FN and Float8E4M3FNUZ [do not support representing infinity](https://github.com/openxla/stablehlo/blob/main/rfcs/20230321-fp8_fnuz.md), so skip `isinf()` check in inductor.
Fixes #149002. New UT passes with `python test/inductor/test_torchinductor.py NanCheckerTest`.
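As a quick illustration of the dtype property this relies on (a sketch; output values are not asserted here): `float8_e4m3fn` has no encoding for infinity, which can be seen from its `finfo` and from a round trip through the dtype.
```python
import torch

# e4m3fn reserves no bit pattern for +/-inf; finfo shows only finite min/max.
print(torch.finfo(torch.float8_e4m3fn))

# Round-trip a few values through float8 to see how non-finite inputs are handled.
x = torch.tensor([1.0, float("inf"), float("nan")])
print(x.to(torch.float8_e4m3fn).float())
```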
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
| true
|
3,023,477,390
|
[1/N] Use std::filesystem
|
cyyever
|
closed
|
[
"oncall: distributed",
"oncall: jit",
"triaged",
"open source",
"Merged",
"ciflow/trunk",
"release notes: jit"
] | 7
|
COLLABORATOR
|
Maybe it is time to use std::filesystem because CXX11 ABI is now the default. The changes are for jit and distributed code.
cc @H-Huang @awgu @wanchaol @fegin @fduwjj @wz337 @wconstab @d4l3k @EikanWang @jgong5 @wenzhe-nrv @sanchitintel
| true
|
3,023,453,811
|
[cudagraphs] Fix issue in collecting static_input_idxs
|
anijain2305
|
closed
|
[
"Merged",
"Reverted",
"ciflow/trunk",
"topic: not user facing",
"module: inductor",
"module: dynamo",
"ciflow/inductor",
"ci-no-td"
] | 14
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #152287
related to https://github.com/pytorch/pytorch/issues/152275
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
| true
|
3,023,396,775
|
[DTensor] enable SimpleFSDP's composability with Tensor Parallel
|
ruisizhang123
|
open
|
[
"oncall: distributed",
"triaged",
"open source",
"module: dynamo"
] | 5
|
CONTRIBUTOR
|
This PR adds support for SimpleFSDP's composability with Tensor Parallel. This is done by enabling a DTensor redistribution from the FSDP submesh toward the TP submesh in the `distribute_tensor` API.
1. **Correctness**: The end-to-end SimpleFSDP TP integration has been proven to work in the PR from this fork: tianyu-l/pytorch_intern24#25. Per the discussion with Tianyu, this PR also adds _StridedShard, following FSDP2, to be compatible with distributed checkpointing. The newly benchmarked results demonstrate that it works properly in this torchtitan PR: https://github.com/pytorch/torchtitan/pull/1148.
2. **Example Usage**: There is an example in TorchTitan's SimpleFSDP implementation: https://github.com/pytorch/torchtitan/pull/1148.
In the example below, an input DTensor `tensor` is sharded in `fully_shard` mode (FSDP) with placement `(Shard(0),)`. If `device_mesh` is a 2D mesh with FSDP & TP dims, this `tensor` is re-distributed from the FSDP placement to the TP placement.
```python
distribute_tensor(tensor, device_mesh, param_sharding)
```
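For context, a runnable sketch of the existing `distribute_tensor` API on a 2D mesh (mesh shape and dim names are illustrative, and this shows the current API surface rather than the new FSDP-to-TP redistribution path added by this PR); it would be run under `torchrun` with 8 ranks.
```python
import torch
from torch.distributed.device_mesh import init_device_mesh
from torch.distributed.tensor import Shard, distribute_tensor

# Illustrative 2D mesh: 2 ranks on the FSDP ("dp") dim, 4 ranks on the TP dim.
mesh_2d = init_device_mesh("cuda", (2, 4), mesh_dim_names=("dp", "tp"))

local = torch.randn(16, 8)
# Shard dim 0 across the FSDP mesh dim and dim 1 across the TP mesh dim.
dtensor = distribute_tensor(local, mesh_2d, placements=[Shard(0), Shard(1)])
print(dtensor.placements, dtensor.to_local().shape)
```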
cc @H-Huang @awgu @wanchaol @fegin @fduwjj @wz337 @wconstab @d4l3k @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @chauhang @amjames
| true
|
3,023,390,323
|
Error after successful build: No module named 'torch._C._distributed_c10d'
|
henrydwright
|
open
|
[
"oncall: distributed",
"module: build",
"triaged"
] | 2
|
NONE
|
### 🐛 Describe the bug
Built wheel from source on Windows (arm64) with USE_DISTRIBUTED=0 and USE_CUDA=0 by running `python setup.py bdist_wheel -v`. No errors during build or install.
Aside from an unrelated warning from cpuinfo, the following works fine:
```python
import torch
x = torch.rand(5,3)
print(x)
```
When attempting to download and run model using `transformers` I get a runtime error
```python
import transformers
import torch
from transformers import AutoTokenizer
model = "meta-llama/Llama-3.1-8B-Instruct"
tokenizer = AutoTokenizer.from_pretrained(model)
pipeline = transformers.pipeline(
"text-generation",
model = model,
torch_dtype=torch.float16,
device_map="auto"
)
```
Error with call stack as follows
```
File "D:\Dev\trch-whl-test\.venv\Lib\site-packages\transformers\utils\import_utils.py", line 1967, in _get_module
return importlib.import_module("." + module_name, self.__name__)
~~~~~~~~~~~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\USERNAME\AppData\Local\Programs\Python\Python313-arm64\Lib\importlib\__init__.py", line 88, in import_module
return _bootstrap._gcd_import(name[level:], package, level)
~~~~~~~~~~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "<frozen importlib._bootstrap>", line 1387, in _gcd_import
File "<frozen importlib._bootstrap>", line 1360, in _find_and_load
File "<frozen importlib._bootstrap>", line 1331, in _find_and_load_unlocked
File "<frozen importlib._bootstrap>", line 935, in _load_unlocked
File "<frozen importlib._bootstrap_external>", line 1026, in exec_module
File "<frozen importlib._bootstrap>", line 488, in _call_with_frames_removed
File "D:\Dev\trch-whl-test\.venv\Lib\site-packages\transformers\pipelines\__init__.py", line 49, in <module>
from .audio_classification import AudioClassificationPipeline
File "D:\Dev\trch-whl-test\.venv\Lib\site-packages\transformers\pipelines\audio_classification.py", line 21, in <module>
from .base import Pipeline, build_pipeline_init_args
File "D:\Dev\trch-whl-test\.venv\Lib\site-packages\transformers\pipelines\base.py", line 69, in <module>
from ..modeling_utils import PreTrainedModel
File "D:\Dev\trch-whl-test\.venv\Lib\site-packages\transformers\modeling_utils.py", line 41, in <module>
import torch.distributed.tensor
File "D:\Dev\trch-whl-test\.venv\Lib\site-packages\torch\distributed\tensor\__init__.py", line 4, in <module>
import torch.distributed.tensor._ops # force import all built-in dtensor ops
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "D:\Dev\trch-whl-test\.venv\Lib\site-packages\torch\distributed\tensor\_ops\__init__.py", line 2, in <module>
from ._conv_ops import * # noqa: F403
^^^^^^^^^^^^^^^^^^^^^^^^
File "D:\Dev\trch-whl-test\.venv\Lib\site-packages\torch\distributed\tensor\_ops\_conv_ops.py", line 5, in <module>
from torch.distributed.tensor._dtensor_spec import DTensorSpec, TensorMeta
File "D:\Dev\trch-whl-test\.venv\Lib\site-packages\torch\distributed\tensor\_dtensor_spec.py", line 6, in <module>
from torch.distributed.tensor.placement_types import (
...<4 lines>...
)
File "D:\Dev\trch-whl-test\.venv\Lib\site-packages\torch\distributed\tensor\placement_types.py", line 8, in <module>
import torch.distributed._functional_collectives as funcol
File "D:\Dev\trch-whl-test\.venv\Lib\site-packages\torch\distributed\_functional_collectives.py", line 9, in <module>
import torch.distributed.distributed_c10d as c10d
File "D:\Dev\trch-whl-test\.venv\Lib\site-packages\torch\distributed\distributed_c10d.py", line 23, in <module>
from torch._C._distributed_c10d import (
...<22 lines>...
)
ModuleNotFoundError: No module named 'torch._C._distributed_c10d'; 'torch._C' is not a package
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "D:\Dev\trch-whl-test\trans-torch.py", line 9, in <module>
pipeline = transformers.pipeline(
^^^^^^^^^^^^^^^^^^^^^
File "D:\Dev\trch-whl-test\.venv\Lib\site-packages\transformers\utils\import_utils.py", line 1955, in __getattr__
module = self._get_module(self._class_to_module[name])
File "D:\Dev\trch-whl-test\.venv\Lib\site-packages\transformers\utils\import_utils.py", line 1969, in _get_module
raise RuntimeError(
...<2 lines>...
) from e
RuntimeError: Failed to import transformers.pipelines because of the following error (look up to see its traceback):
No module named 'torch._C._distributed_c10d'; 'torch._C' is not a package
```
When looking in `.venv\Lib\site-packages\torch` I see a `_C` folder containing `_distributed_c10d.pyi`. I can't see what's wrong, but am admittedly a newbie at Python w/ compiled libraries.
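A quick way to confirm whether the build actually includes distributed support (the `.pyi` file is only a type stub, so its presence does not mean the C++ bindings were compiled); given the `USE_DISTRIBUTED=0` flag above, this presumably prints `False`:
```python
import torch

# A build configured with USE_DISTRIBUTED=0 ships the Python package but not the
# torch._C._distributed_c10d bindings, so this reports False.
print(torch.distributed.is_available())
```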
### Versions
```
Error in cpuinfo: Unknown chip model name 'Snapdragon(R) X 12-core X1E80100 @ 3.40 GHz'.
Please add new Windows on Arm SoC/chip support to arm/windows/init.c!
Collecting environment information...
PyTorch version: 2.7.0a0+git1341794
Is debug build: False
CUDA used to build PyTorch: None
ROCM used to build PyTorch: N/A
OS: Microsoft Windows 11 Home (10.0.26100 ARM 64-bit Processor)
GCC version: Could not collect
Clang version: Could not collect
CMake version: version 3.29.5-msvc4
Libc version: N/A
Python version: 3.13.2 (tags/v3.13.2:4f8bb39, Feb 4 2025, 16:24:41) [MSC v.1942 64 bit (ARM64)] (64-bit runtime)
Python platform: Windows-11-10.0.26100-SP0
Is CUDA available: False
CUDA runtime version: No CUDA
CUDA_MODULE_LOADING set to: N/A
GPU models and configuration: No CUDA
Nvidia driver version: No CUDA
cuDNN version: No CUDA
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: False
CPU:
Name: Snapdragon(R) X 12-core X1E80100 @ 3.40 GHz
Manufacturer: Qualcomm Technologies Inc
Family: 280
Architecture: 12
ProcessorType: 3
DeviceID: CPU0
CurrentClockSpeed: 3417
MaxClockSpeed: 3417
L2CacheSize: 36864
L2CacheSpeed: None
Revision: 513
Versions of relevant libraries:
[pip3] numpy==2.2.5
[pip3] torch==2.7.0a0+git1341794
[conda] Could not collect
```
plus if of interest
```
cl version: Microsoft (R) C/C++ Optimizing Compiler Version 19.42.34433 for ARM64
```
cc @H-Huang @awgu @wanchaol @fegin @fduwjj @wz337 @wconstab @d4l3k @malfet @seemethere
| true
|
3,023,361,580
|
[inductor] set correct precompile start time
|
sarckk
|
closed
|
[
"Merged",
"ciflow/trunk",
"module: inductor",
"ciflow/inductor",
"release notes: inductor"
] | 6
|
MEMBER
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #152284
Fixes #148777
With num_workers set to 1, I ran the script from #148777.
before:
```
Precompiling benchmark choice TritonTemplateCaller took 0.19s
Precompiling benchmark choice TritonTemplateCaller took 0.38s
Precompiling benchmark choice TritonTemplateCaller took 0.53s
Precompiling benchmark choice TritonTemplateCaller took 0.90s
Precompiling benchmark choice TritonTemplateCaller took 1.29s
Precompiling benchmark choice TritonTemplateCaller took 20.78s
Precompiling benchmark choice TritonTemplateCaller took 25.42s
Precompiling benchmark choice TritonTemplateCaller took 25.92s
Precompiling benchmark choice TritonTemplateCaller took 27.21s
Precompiling benchmark choice TritonTemplateCaller took 48.76s
Precompiling benchmark choice TritonTemplateCaller took 53.66s
Precompiling benchmark choice TritonTemplateCaller took 63.12s
Precompiling benchmark choice TritonTemplateCaller took 69.53s
Precompiling benchmark choice TritonTemplateCaller took 71.24s
Precompiling benchmark choice TritonTemplateCaller took 75.57s
Precompiling benchmark choice TritonTemplateCaller took 97.58s
Precompiling benchmark choice TritonTemplateCaller took 107.71s
Precompiling benchmark choice TritonTemplateCaller took 117.27s
Precompiling benchmark choice TritonTemplateCaller took 126.30s
FX codegen and compilation took 133.733s
```
after:
```
Precompiling benchmark choice TritonTemplateCaller took 0.18s
Precompiling benchmark choice TritonTemplateCaller took 0.18s
Precompiling benchmark choice TritonTemplateCaller took 0.14s
Precompiling benchmark choice TritonTemplateCaller took 0.35s
Precompiling benchmark choice TritonTemplateCaller took 0.39s
Precompiling benchmark choice TritonTemplateCaller took 19.54s
Precompiling benchmark choice TritonTemplateCaller took 4.69s
Precompiling benchmark choice TritonTemplateCaller took 0.52s
Precompiling benchmark choice TritonTemplateCaller took 1.28s
Precompiling benchmark choice TritonTemplateCaller took 20.96s
Precompiling benchmark choice TritonTemplateCaller took 4.81s
Precompiling benchmark choice TritonTemplateCaller took 9.40s
Precompiling benchmark choice TritonTemplateCaller took 6.34s
Precompiling benchmark choice TritonTemplateCaller took 1.93s
Precompiling benchmark choice TritonTemplateCaller took 4.39s
Precompiling benchmark choice TritonTemplateCaller took 21.91s
Precompiling benchmark choice TritonTemplateCaller took 10.10s
Precompiling benchmark choice TritonTemplateCaller took 9.55s
Precompiling benchmark choice TritonTemplateCaller took 9.15s
FX codegen and compilation took 133.246s
```
Also tested async triton compile path by setting num_workers > 1
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
| true
|
3,023,280,301
|
Forward compatibility in torch.export
|
lminer
|
open
|
[
"oncall: pt2",
"oncall: export"
] | 2
|
NONE
|
### 🚀 The feature, motivation and pitch
Are there any plans to guarantee forward compatibility in torch.export once it leaves beta? I have models that need to be converted to Core ML and to LiteRT, and the two converters are pinned to specific and conflicting versions of PyTorch. It is useful to be able to export in the training environment and then perform the conversion in a separate environment in order to manage this dependency hell. This is currently possible with TorchScript, but not with torch.export, which is a problem because Google's tool for PyTorch-to-LiteRT conversion only works with torch.export.
### Alternatives
_No response_
### Additional context
_No response_
cc @chauhang @penguinwu @avikchaudhuri @gmagogsfm @zhxchen17 @tugsbayasgalan @angelayi @suo @ydwu4
| true
|
3,023,272,806
|
[MPS] col2im kernel implementation
|
Isalia20
|
closed
|
[
"open source",
"Merged",
"module: mps",
"release notes: mps",
"ciflow/mps"
] | 4
|
COLLABORATOR
|
Fixes #151820
Also requested in #141287
Mainly based on the cuda kernel implementations
cc @kulinseth @albanD @malfet @DenisVieriu97 @jhavukainen
| true
|
3,023,223,796
|
[CI] Add xpu inductor test into periodic workflow
|
chuanqi129
|
open
|
[
"triaged",
"open source",
"release notes: releng",
"ciflow/periodic",
"keep-going"
] | 7
|
COLLABORATOR
|
Fixes #ISSUE_NUMBER
| true
|
3,023,158,314
|
Update `torch/nn/modules/conv.py` to use Literal for support padding modes
|
Skylion007
|
open
|
[
"good first issue",
"module: typing",
"triaged",
"actionable"
] | 7
|
COLLABORATOR
|
### 🚀 The feature, motivation and pitch
It would be great to update `torch/nn/modules/conv.py` to use `typing.Literal` instead of plain `str` to denote which padding modes are actually supported by the various operations.
For example, instead of
`padding_mode: str`
use
`padding_mode: Literal["valid", "same"]` etc., so the type checker can catch bugs before the code is actually run.
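A minimal sketch of the idea (the helper below is hypothetical; for reference, the values `nn.Conv2d` actually accepts for `padding_mode` are `"zeros"`, `"reflect"`, `"replicate"`, and `"circular"`):
```python
from typing import Literal

import torch

PaddingMode = Literal["zeros", "reflect", "replicate", "circular"]

def make_conv(padding_mode: PaddingMode = "zeros") -> torch.nn.Conv2d:
    # The annotation restricts callers to the modes the layer really supports.
    return torch.nn.Conv2d(3, 8, kernel_size=3, padding=1, padding_mode=padding_mode)

make_conv("reflect")   # fine
# make_conv("same")    # mypy/pyright would flag this before the code is ever run
```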
### Alternatives
_No response_
### Additional context
_No response_
cc @ezyang @malfet @xuzhao9 @gramster
| true
|
3,023,037,578
|
Make scaler.step() return if step was skipped or not
|
pyphan1
|
closed
|
[
"module: optimizer",
"triaged"
] | 5
|
NONE
|
### 🚀 The feature, motivation and pitch
Make calling scaler.step(optimizer) return whether the step was skipped, instead of always returning None, or make it print when it skips a step.
for example we can use:
stepped = scaler.step(optimizer)
if not stepped:
print('Step was skipped because of an underflow or overflow')
It is easy to implement returning the stepping state; it should be returned from grad_scaler._maybe_opt_step.
For example, some models apply lazy regularization or lazy loss terms that are expensive to compute. Imagine training with an expensive loss term that you apply every 16 steps, multiplied by 16 so it is as strong as if it were applied at every step. The loss may grow roughly 4x in the step where this term is added, and after scaling with a scaler that was tuned for the normal loss you can get NaNs/Infs due to overflow/underflow. The step is then skipped automatically, but you are never told; since this can happen every time the lazy loss is added, it is effectively as if that necessary loss function were never used.
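For reference, a workaround that is possible today (a sketch, assuming a CUDA device): `scaler.update()` backs off the scale only when the step was skipped due to inf/NaN gradients, so comparing the scale before and after `update()` reveals skipped steps.
```python
import torch

model = torch.nn.Linear(8, 8).cuda()
opt = torch.optim.SGD(model.parameters(), lr=0.1)
scaler = torch.amp.GradScaler("cuda")

for _ in range(3):
    opt.zero_grad(set_to_none=True)
    with torch.autocast("cuda", dtype=torch.float16):
        loss = model(torch.randn(4, 8, device="cuda")).sum()
    scaler.scale(loss).backward()

    scale_before = scaler.get_scale()
    scaler.step(opt)
    scaler.update()
    # The scale is reduced by the backoff factor only when inf/NaN grads caused a skip.
    if scaler.get_scale() < scale_before:
        print("Step was skipped because of an underflow or overflow")
```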
### Alternatives
_No response_
### Additional context
_No response_
cc @vincentqb @jbschlosser @albanD @janeyx99 @crcrpar
| true
|
3,023,028,552
|
MPS: Conv1d fails with NotImplementedError for output_channels > 65536
|
ehartford
|
open
|
[
"module: convolution",
"triaged",
"module: mps"
] | 4
|
NONE
|
### 🐛 Describe the bug
Running torch.nn.functional.conv1d (or torch.nn.Conv1d) on the MPS backend results in the following error when the number of output channels exceeds 65536:
`NotImplementedError: Output channels > 65536 not supported at the MPS device.`
This limitation prevents certain common model architectures, such as standard Wav2Vec2 implementations which utilize Conv1d layers with high channel counts in their feature extraction components, from running natively on the MPS device.
The current workaround involves either using the global PYTORCH_ENABLE_MPS_FALLBACK=1 environment variable or implementing targeted code changes to move the specific conv1d operation and its inputs/outputs to the CPU, both of which negatively impact performance compared to native MPS execution.
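For reference, a sketch of the targeted CPU-fallback workaround just described (the helper name is illustrative): only conv1d calls that exceed the MPS limit are routed through the CPU.
```python
import torch
import torch.nn.functional as F

def conv1d_mps_safe(x, weight, bias=None, **kwargs):
    # Fall back to CPU only when the output-channel count exceeds the MPS limit.
    if x.device.type == "mps" and weight.shape[0] > 65536:
        out = F.conv1d(
            x.cpu(),
            weight.cpu(),
            bias.cpu() if bias is not None else None,
            **kwargs,
        )
        return out.to(x.device)
    return F.conv1d(x, weight, bias, **kwargs)
```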
Please consider adding support for conv1d operations with output channels > 65536 on the MPS backend to improve hardware acceleration coverage and performance for models relying on such layers.
Reproduce:
```
import torch
import torch.nn.functional as F
# Check for MPS availability
if not torch.backends.mps.is_available():
print("MPS device not available. This snippet requires an Apple Silicon Mac with PyTorch built with MPS support.")
exit()
if not torch.backends.mps.is_built():
print("PyTorch was not built with MPS support. This snippet requires an Apple Silicon Mac with PyTorch built with MPS support.")
exit()
device = torch.device("mps")
print(f"Using device: {device}")
# Define parameters
batch_size = 1
in_channels = 1
length = 1024
out_channels_problematic = 65537 # > 65536
kernel_size = 3
# Create input and weight tensors
try:
input_tensor = torch.randn(batch_size, in_channels, length, device=device)
# Weight shape: (out_channels, in_channels, kernel_size)
weight_tensor = torch.randn(out_channels_problematic, in_channels, kernel_size, device=device)
print(f"Input tensor shape: {input_tensor.shape}, device: {input_tensor.device}")
print(f"Weight tensor shape: {weight_tensor.shape}, device: {weight_tensor.device}")
# Attempt the problematic conv1d operation
print(f"\nAttempting F.conv1d with out_channels={out_channels_problematic}...")
output = F.conv1d(input_tensor, weight_tensor)
print("Operation succeeded unexpectedly.") # Should not reach here
except NotImplementedError as e:
print(f"\nSuccessfully reproduced the expected error:")
print(f" Type: {type(e)}")
print(f" Message: {e}")
except Exception as e:
print(f"\nCaught an unexpected error:")
print(f" Type: {type(e)}")
print(f" Message: {e}")
```
Environment:
PyTorch Version: 2.5.1
macOS Version: Sequoia 15.4.1
Hardware: Apple Silicon (M-series chip)
### Versions
Collecting environment information...
PyTorch version: 2.5.1
Is debug build: False
CUDA used to build PyTorch: None
ROCM used to build PyTorch: N/A
OS: macOS 15.4.1 (arm64)
GCC version: Could not collect
Clang version: 16.0.0 (clang-1600.0.26.6)
CMake version: version 3.31.1
Libc version: N/A
Python version: 3.10.13 | packaged by conda-forge | (main, Dec 23 2023, 15:35:25) [Clang 16.0.6 ] (64-bit runtime)
Python platform: macOS-15.4.1-arm64-arm-64bit
Is CUDA available: False
CUDA runtime version: No CUDA
CUDA_MODULE_LOADING set to: N/A
GPU models and configuration: No CUDA
Nvidia driver version: No CUDA
cuDNN version: No CUDA
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Apple M4 Max
Versions of relevant libraries:
[pip3] mypy_extensions==1.1.0
[pip3] numpy==1.26.4
[pip3] onnx==1.17.0
[pip3] onnx-weekly==1.19.0.dev20250425
[pip3] onnx2torch==1.5.15
[pip3] onnx2torch-py313==1.6.0
[pip3] onnxruntime==1.21.1
[pip3] pytorch-wpe==0.0.1
[pip3] rotary-embedding-torch==0.6.5
[pip3] torch==2.5.1
[pip3] torch-complex==0.4.4
[pip3] torchaudio==2.5.1
[pip3] torchvision==0.20.1
[conda] libopenvino-pytorch-frontend 2025.0.0 h286801f_3 conda-forge
[conda] numpy 1.26.4 pypi_0 pypi
[conda] onnx2torch 1.5.15 pypi_0 pypi
[conda] onnx2torch-py313 1.6.0 pypi_0 pypi
[conda] pytorch-wpe 0.0.1 pypi_0 pypi
[conda] rotary-embedding-torch 0.6.5 pypi_0 pypi
[conda] torch 2.5.1 pypi_0 pypi
[conda] torch-complex 0.4.4 pypi_0 pypi
[conda] torchaudio 2.5.1 pypi_0 pypi
[conda] torchvision 0.20.1 pypi_0 pypi
cc @kulinseth @albanD @malfet @DenisVieriu97 @jhavukainen
| true
|
3,022,983,557
|
Fix initGdsBindings declaration
|
cyyever
|
closed
|
[
"open source",
"Merged",
"ciflow/trunk",
"topic: not user facing"
] | 4
|
COLLABORATOR
|
Move initGdsBindings into the correct namespace.
| true
|
3,022,953,307
|
`setup.py develop` command is disappearing soon from `setuptools`
|
rgommers
|
open
|
[
"high priority",
"module: build",
"oncall: releng",
"triaged",
"topic: devs"
] | 12
|
COLLABORATOR
|
PyTorch still uses the `python setup.py develop` command to build PyTorch and work with it during development and in CI, in multiple places (see [this code search query](https://github.com/search?q=repo%3Apytorch%2Fpytorch%20%22setup.py%20develop%22&type=code) and the main development instructions at https://github.com/pytorch/pytorch?tab=readme-ov-file#install-pytorch). That will be breaking soon (current timeline is "a few months"), and it seems fairly urgent to remove all usages of it.
Context:
- All `python setup.py xxx` commands were deprecated by the `setuptools` project years ago, see the _Info_ block under https://setuptools.pypa.io/en/latest/userguide/quickstart.html#basic-use and [this blog post from 2021](https://blog.ganssle.io/articles/2021/10/setup-py-deprecated.html).
- This deprecation was ignored by many, because it was so wide-ranging, without a timeline, and without good like-for-like replacements. Ignoring it was fine at the time, but not anymore.
- The removal of `develop` was already about to be merged, together with the `easy_install` removal which `develop` relied on, when I asked for a change in plan to limit the disruption to PyTorch and other projects [here](https://github.com/pypa/setuptools/pull/2908#issuecomment-2817325643). Luckily that suggestion was accepted, which gives us a bit of time to prepare for the change.
- Note that in the next `setuptools` release, `setup.py develop` will still be there but be backed by `pip` rather than `easy_install` (see https://github.com/pypa/setuptools/pull/4955), so there may already be some changes in behavior that could affect PyTorch.
Suggested way forward:
- The closest replacement for `python setup.py develop` is an editable install without build isolation and verbose output: `pip install -e . -v --no-build-isolation`, so that is a reasonable short-term change to make probably.
- Longer-term, PyTorch has a lot of needs for dealing with compiled code and it may make sense to put a good new developer-focused CLI in place, rather than going through `pip` for every rebuild command. Going through `pip` has downsides: the build command is too verbose and hard to remember, there's a few seconds of overhead, extra logging to stdout, and forgetting say the build isolation flag will cause full rebuilds. Options include (non-exclusive, can do multiple of these), in order of amount of work from lower to higher:
- Document how to invoke rebuilds directly with `ninja` for simple cases (this skips changes to `setup.py` content and `cmake` re-runs). Or invoke `cmake` directly (may be a trickier command, not 100% sure) and then `ninja`.
- Adopt a dedicated developer CLI tool like [spin](https://github.com/scientific-python/spin/) with `build`, `test`, etc. commands that wrap the underlying tools (`pip`, `cmake`, `ninja`, `pytest`, etc.) and provides a clean and self-documenting UX. NumPy, SciPy, and scikit-image already use this approach.
- Move away from the legacy build backend (see https://github.com/pytorch/pytorch/blob/cbcc03c2ad11fbf1080f6a1025cc3f7aee0c858d/pyproject.toml#L15-L16) to `build-backend = "setuptools.build_meta"`
- Move as much logic as possible out of `setup.py` to insulate against more `setup.py xxx` commands disappearing over time.
- Change away from `setuptools` to `scikit-build-core` (a significant upgrade for CMake-using projects).
A key point for the last few bullet points is that currently PyTorch does _not_ use the `[project]` table in `pyproject.toml`, and hence has not opted into using PEP 517 build isolation. Build isolation is generally a bad thing for developing on packages heavy on compiled code like PyTorch, and contributors must remember to turn it off when going through `pip` (and `uv` and other build frontends). I'm not sure if PyTorch wants to make this change now, however it's the only thing that the Python packaging community considers and supports. This is why in NumPy and SciPy we went with a tool like `spin` that doesn't try to manage environments, but insulates from build frontends and defaulting to build isolation.
cc @ezyang @gchanan @zou3519 @kadeng @msaroufim @malfet @seemethere @atalman @albanD
| true
|
3,022,891,143
|
[cudagraphs][HF][torch 2.7] Excessive cudagraph re-recording for HF LLM models
|
anijain2305
|
open
|
[
"high priority",
"triaged",
"oncall: pt2"
] | 1
|
CONTRIBUTOR
|
### 🐛 Describe the bug
`transformers` repo has temporarily pinned the torch version to be <2.7 (HF [PR](https://github.com/huggingface/transformers/pull/37760) to block 2.7)
I find that there is cudagraph recording on each invocation. The issue is present on the `main` branch as well. Here is the [tlparse](https://manifold.edge.x2p.facebook.net/v0/read/tree/logs/.tmpi4sxqg/index.html?bucketName=tlparse_reports&apiKey=tlparse_reports-key&withPayload=1&timeoutMsec=10000), you can look at the perfetto traces where frequent cudagraph recording is observed.
The re-recording reason is `CheckInvariantStatus.StaticInputIdxMismatch`. This could be related to a missing piece in the hand-off from Dynamo and AOTAutograd to Inductor. cc'ing @BoyuanFeng @eellison @mlazos @zou3519
To repro, build `transformers` from source and run this script
```
import copy
import os
import torch
from torch.utils import benchmark
from transformers import AutoTokenizer, AutoModelForCausalLM
os.environ["TOKENIZERS_PARALLELISM"] = "false"
# Benchmarking settings
BSZ = [1, 4]
NEW_TOK = [16, 256]
N_ITER = 10
MODEL_ID = "google/gemma-2-2b-it"
# MODEL_ID = "Qwen/Qwen2.5-0.5B-Instruct"
CACHE_IMPLEMENTATION = "hybrid"
# CACHE_IMPLEMENTATION = "static"
# debug: run only the first batch_size/max_new_tokens pair
BSZ = [BSZ[0]]
NEW_TOK = [NEW_TOK[0]]
# Other constants
FRANCE_ARTICLE = (
"<s>Marseille, France (CNN)The French prosecutor leading an investigation into the crash of Germanwings Flight "
"9525 insisted Wednesday that he was not aware of any video footage from on board the plane. Marseille "
"prosecutor Brice Robin told CNN that \"so far no videos were used in the crash investigation.\" He added, \"A "
"person who has such a video needs to immediately give it to the investigators.\" Robin\'s comments follow claims "
"by two magazines, German daily Bild and French Paris Match, of a cell phone video showing the harrowing final "
"seconds from on board Germanwings Flight 9525 as it crashed into the French Alps. All 150 on board were killed. "
"Paris Match and Bild reported that the video was recovered from a phone at the wreckage site. The two "
"publications described the supposed video, but did not post it on their websites. The publications said that "
"they watched the video, which was found by a source close to the investigation. \"One can hear cries of 'My God' "
"in several languages,\" Paris Match reported. \"Metallic banging can also be heard more than three times, "
"perhaps of the pilot trying to open the cockpit door with a heavy object. Towards the end, after a heavy "
"shake, stronger than the others, the screaming intensifies. Then nothing.\" \"It is a very disturbing scene,\" "
"said Julian Reichelt, editor-in-chief of Bild online. An official with France\'s accident investigation agency, "
"the BEA, said the agency is not aware of any such video. Lt. Col. Jean-Marc Menichini, a French Gendarmerie "
"spokesman in charge of communications on rescue efforts around the Germanwings crash site, told CNN that the "
"reports were \"completely wrong\" and \"unwarranted.\" Cell phones have been collected at the site, he said, "
"but that they \"hadn\'t been exploited yet.\" Menichini said he believed the cell phones would need to be sent "
"to the Criminal Research Institute in Rosny sous-Bois, near Paris, in order to be analyzed by specialized "
"technicians working hand-in-hand with investigators. But none of the cell phones found so far have been sent "
"to the institute, Menichini said. Asked whether staff involved in the search could have leaked a memory card "
"to the media, Menichini answered with a categorical \"no.\" Reichelt told \"Erin Burnett: Outfront\" that he "
"had watched the video and stood by the report, saying Bild and Paris Match are \"very confident\" that the clip "
"is real. He noted that investigators only revealed they\'d recovered cell phones from the crash site after "
"Bild and Paris Match published their reports. \"That is something we did not know before. ... Overall we can "
"say many things of the investigation weren\'t revealed by the investigation at the beginning,\" he said. What "
"was mental state of Germanwings co-pilot? German airline Lufthansa confirmed Tuesday that co-pilot Andreas "
"Lubitz had battled depression years before he took the controls of Germanwings Flight 9525, which he\'s "
"accused of deliberately crashing last week in the French Alps. Lubitz told his Lufthansa flight training "
"school in 2009 that he had a \"previous episode of severe depression,\" the airline said Tuesday. Email "
"correspondence between Lubitz and the school discovered in an internal investigation, Lufthansa said, "
"included medical documents he submitted in connection with resuming his flight training. The announcement "
"indicates that Lufthansa, the parent company of Germanwings, knew of Lubitz's battle with depression, allowed "
"him to continue training and ultimately put him in the cockpit. Lufthansa, whose CEO Carsten Spohr previously "
"said Lubitz was 100% fit to fly, described its statement Tuesday as a \"swift and seamless clarification\" and "
"said it was sharing the information and documents -- including training and medical records -- with public "
"prosecutors. Spohr traveled to the crash site Wednesday, where recovery teams have been working for the past "
"week to recover human remains and plane debris scattered across a steep mountainside. He saw the crisis center "
"set up in Seyne-les-Alpes, laid a wreath in the village of Le Vernet, closer to the crash site, where grieving "
"families have left flowers at a simple stone memorial. Menichini told CNN late Tuesday that no visible human "
"remains were left at the site but recovery teams would keep searching. French President Francois Hollande, "
"speaking Tuesday, said that it should be possible to identify all the victims using DNA analysis by the "
"end of the week, sooner than authorities had previously suggested. In the meantime, the recovery of the "
"victims' personal belongings will start Wednesday, Menichini said. Among those personal belongings could be "
"more cell phones belonging to the 144 passengers and six crew on board. Check out the latest from our "
"correspondents . The details about Lubitz's correspondence with the flight school during his training were "
"among several developments as investigators continued to delve into what caused the crash and Lubitz\'s "
"possible motive for downing the jet. A Lufthansa spokesperson told CNN on Tuesday that Lubitz had a valid "
"medical certificate, had passed all his examinations and \"held all the licenses required.\" Earlier, a "
"spokesman for the prosecutor\'s office in Dusseldorf, Christoph Kumpa, said medical records reveal Lubitz "
"suffered from suicidal tendencies at some point before his aviation career and underwent psychotherapy before "
"he got his pilot's license. Kumpa emphasized there's no evidence suggesting Lubitz was suicidal or acting "
"aggressively before the crash. Investigators are looking into whether Lubitz feared his medical condition "
"would cause him to lose his pilot's license, a European government official briefed on the investigation told "
"CNN on Tuesday. While flying was \"a big part of his life,\" the source said, it\'s only one theory being "
"considered. Another source, a law enforcement official briefed on the investigation, also told CNN that "
"authorities believe the primary motive for Lubitz to bring down the plane was that he feared he would not "
"be allowed to fly because of his medical problems. Lubitz's girlfriend told investigators he had seen an eye "
"doctor and a neuropsychologist, both of whom deemed him unfit to work recently and concluded he had "
"psychological issues, the European government official said. But no matter what details emerge about his "
"previous mental health struggles, there's more to the story, said Brian Russell, a forensic psychologist. "
"\"Psychology can explain why somebody would turn rage inward on themselves about the fact that maybe they "
"weren't going to keep doing their job and they're upset about that and so they're suicidal,\" he said. \"But "
"there is no mental illness that explains why somebody then feels entitled to also take that rage and turn it "
"outward on 149 other people who had nothing to do with the person's problems.\" Germanwings crash compensation: "
"What we know . Who was the captain of Germanwings Flight 9525? CNN's Margot Haddad reported from Marseille and "
"Pamela Brown from Dusseldorf, while Laura Smith-Spark wrote from London. CNN's Frederik Pleitgen, Pamela "
"Boykoff, Antonia Mortensen, Sandrine Amiel and Anna-Maja Rappard contributed to this report."
)
tokenizer = AutoTokenizer.from_pretrained(MODEL_ID, padding_side="left")
model = AutoModelForCausalLM.from_pretrained(MODEL_ID, torch_dtype=torch.bfloat16, device_map="auto")
prompt_length = tokenizer([FRANCE_ARTICLE], return_tensors="pt").input_ids.shape[1]
label_ms_per_token = f"Throughput (time/foward pass, prompt = {prompt_length} tokens)"
label_first_step = f"First call (time, prompt = {prompt_length} tokens)"
def print_results(all_results):
print("\n")
compare = benchmark.Compare(all_results)
compare.trim_significant_figures()
compare.colorize(rowwise = True)
compare.print()
def time_generate_call(model, task, ms_per_token, first_step, compile=False):
for bsz in BSZ:
for max_new_tokens in NEW_TOK:
input_ids = tokenizer([FRANCE_ARTICLE] * bsz, return_tensors="pt").to("cuda")
description = f"batch size, max_new_tokens: {bsz, max_new_tokens}"
task_spec_ms_per_token = benchmark.TaskSpec(
stmt="", setup="", description=task, label=label_ms_per_token, sub_label=description
)
task_spec_ms_first_step = benchmark.TaskSpec(
stmt="", setup="", description=task, label=label_first_step, sub_label=description
)
# generate EXACTLY `max_new_tokens` tokens (no early termination due to `eos_token_id`)
generation_kwargs = {
"max_new_tokens": max_new_tokens,
"min_new_tokens": max_new_tokens,
"eos_token_id": None,
"do_sample": False,
"cache_implementation": CACHE_IMPLEMENTATION if compile else None
}
generation_config = copy.deepcopy(model.generation_config)
generation_config.update(**generation_kwargs)
torch.compiler.reset()
results = []
for _ in range(N_ITER):
start = torch.cuda.Event(enable_timing=True)
end = torch.cuda.Event(enable_timing=True)
start.record()
gen_out = model.generate(**input_ids, generation_config=generation_config)
end.record()
torch.cuda.synchronize()
total_time = start.elapsed_time(end) / 1000 # time in seconds
time_per_forward = total_time / max_new_tokens
assert gen_out.shape[1] == max_new_tokens + prompt_length
results.append(time_per_forward)
ms_per_token.append(benchmark.Measurement(1, results[3:], task_spec_ms_per_token, metadata=None))
first_step.append(benchmark.Measurement(
1, [results[0] * max_new_tokens], task_spec_ms_first_step, metadata=None)
)
print_results(ms_per_token)
print_results(first_step)
print("*" * 80)
ms_per_token = []
first_step = []
# eager
with torch.compiler.set_stance("force_eager"):
time_generate_call(model, "eager", ms_per_token, first_step)
# compiled
time_generate_call(model, "compiled", ms_per_token, first_step, compile=True)
```
### Error logs
_No response_
### Versions
NA
cc @ezyang @gchanan @zou3519 @kadeng @msaroufim @chauhang @penguinwu
| true
|
3,022,841,048
|
[Dynamo] Replace `unimplemented` with `unimplemented_v2` in `torch/_dynamo/variables/misc.py` [1/2]
|
shink
|
closed
|
[
"triaged",
"open source",
"Merged",
"ciflow/trunk",
"topic: not user facing",
"module: dynamo"
] | 7
|
CONTRIBUTOR
|
Part of #147913
Replace `unimplemented` with `unimplemented_v2` in `torch/_dynamo/variables/misc.py`.
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @chauhang @amjames
| true
|
3,022,795,301
|
Fix constant folding cloning constants
|
muchulee8
|
closed
|
[
"Merged",
"ciflow/trunk",
"topic: not user facing",
"module: inductor",
"ciflow/inductor"
] | 5
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #152273
Summary:
Bug fix for #135060
Simple review:
https://github.com/pytorch/pytorch/pull/135060/files#diff-f23386709ff7e1235b15e18f835a48e5124e0ddd596aeb33c201daad1abbedd7R357
We mistakenly typed `get_attr` as `getattr`.
This causes constants to never get untagged, and forces all constants to be
cloned twice, which greatly increases memory consumption.
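For illustration, a small FX trace showing the op string in question (a sketch of general FX behavior, not the inductor code path itself): FX nodes use the op name `get_attr`, so a check against `getattr` never matches and the constants stay tagged for cloning.
```python
import torch
from torch import fx

class M(torch.nn.Module):
    def __init__(self):
        super().__init__()
        self.w = torch.nn.Parameter(torch.randn(2))

    def forward(self, x):
        return x + self.w

gm = fx.symbolic_trace(M())
# Prints ['placeholder', 'get_attr', 'call_function', 'output'] -- note "get_attr", not "getattr".
print([node.op for node in gm.graph.nodes])
```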
Test Plan:
python test/inductor/test_aot_inductor.py -k test_empty_constant_folding
Reviewers:
Subscribers:
Tasks:
Tags:
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @chenyang78 @kadeng @amjames @chauhang @aakhundov
| true
|
3,022,682,201
|
[AOTInductor] Propagate ConstantType for main graph.
|
muchulee8
|
closed
|
[
"Merged",
"ciflow/trunk",
"module: inductor",
"ciflow/inductor",
"release notes: inductor (aoti)"
] | 6
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #152272
Summary:
We need to make sure all named_parameters and named_buffers are
propagated if we use runtime constant folding.
Test Plan:
python test/inductor/test_aot_inductor.py -k test_constant_type_propagation
Reviewers:
Subscribers:
Tasks:
Tags:
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @chenyang78 @kadeng @amjames @chauhang @aakhundov
| true
|
3,022,597,820
|
Fix clang-tidy suppression in torch/csrc/jit
|
cyyever
|
closed
|
[
"oncall: jit",
"open source",
"Merged",
"NNC",
"ciflow/trunk",
"release notes: jit"
] | 4
|
COLLABORATOR
|
Remove some clang-tidy suppressions in torch/csrc/jit by applying fixes or refactoring.
cc @EikanWang @jgong5 @wenzhe-nrv @sanchitintel
| true
|
3,022,584,271
|
Question about that support of torch.compile for a custom CUDA operator?
|
HiIcy
|
open
|
[
"module: docs",
"triaged",
"module: custom-operators",
"oncall: pt2",
"module: pt2-dispatcher"
] | 7
|
NONE
|
I have a model that uses custom CUDA operators, and I want to modify it so it works with torch.compile. I followed this guide: https://pytorch.org/tutorials/advanced/cpp_custom_ops.html#conclusion. After the changes, the profiler suggests the model compiles, but there is no change in performance. What is the main purpose of this feature, and should it have any impact on performance?
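One way to check whether a custom op is actually captured by torch.compile rather than falling back through a graph break (a sketch; the op call below is a placeholder for your own registered operator):
```python
import torch

def f(x):
    # Replace this with a call to your registered custom op, e.g. torch.ops.mylib.my_op(x).
    return torch.sin(x)

explanation = torch._dynamo.explain(f)(torch.randn(8))
# 0 graph breaks means the op is traced into the compiled graph instead of running eagerly.
print(explanation.graph_break_count)
```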
cc @svekars @sekyondaMeta @AlannaBurke @chauhang @penguinwu @zou3519 @bdhirsh
| true
|
3,022,560,480
|
Arbitrary Code Execution Risk in `torch.distributed.utils.overload` When Misused in Type Annotations
|
vwrewsge
|
open
|
[
"oncall: distributed",
"triaged"
] | 1
|
NONE
|
### 🐛 Describe the bug
## Summary
The `overload` decorator imported from `torch.distributed.utils` can be misused to execute arbitrary system commands via malicious type annotations. This may create a security vulnerability: **arbitrary code execution when the file is imported**, even if the malicious function is never called.
## Steps to Reproduce
```python
from torch.distributed.utils import overload
import os
@overload
def malicious_func(x: os.system("ls")) -> None:
...
# A more dangerous example would be:
# @overload
# def malicious_func(x: os.system("rm -rf ~/important_data")) -> None:
# ...
# A more covert attack method
# @overload
# def malicious_func(x: os.system("curl http://malicious.site/malware.sh | bash")) -> None:
# """
# This will download and execute a malicious script
# """
```
No call to `malicious_func` is needed; the side effect happens as soon as the Python interpreter evaluates the type annotations.
## Impact
This behavior can be overlooked during code review, as type annotations are often considered safe. Attackers can craft malicious `.py` files that execute arbitrary code at load time.
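As a side note on mitigation (a sketch, illustrated with `typing.overload`): PEP 563's `from __future__ import annotations` makes annotations lazily evaluated strings, so the payload in the annotation never runs at import time.
```python
from __future__ import annotations  # PEP 563: annotations are stored as strings, not evaluated

import os
from typing import overload

@overload
def f(x: os.system("echo this never runs")) -> None: ...
def f(x): ...
```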
### Versions
PyTorch version: 2.7.0+cu126
Is debug build: False
CUDA used to build PyTorch: 12.6
ROCM used to build PyTorch: N/A
cc @H-Huang @awgu @wanchaol @fegin @fduwjj @wz337 @wconstab @d4l3k
| true
|
3,022,359,486
|
[MPSInductor] Fix masked_fill decomp
|
malfet
|
closed
|
[
"Merged",
"topic: not user facing",
"ciflow/mps",
"module: inductor",
"ciflow/inductor"
] | 3
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #152268
* #152266
By adding `mps` to the list of accelerators that can work with CPU scalars
Fixes `GPUTests.test_masked_fill_promotion_mps`
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
| true
|
3,022,341,850
|
[ROCm] Maxpool backward NHWC Perf Improvement targeting Resnet scenarios
|
amd-hhashemi
|
open
|
[
"module: rocm",
"open source",
"release notes: cuda"
] | 4
|
CONTRIBUTOR
|
Fixes #ISSUE_NUMBER
cc @jeffdaily @sunway513 @jithunnair-amd @pruthvistony @ROCmSupport @dllehr-amd @jataylo @hongxiayang @naromero77amd
| true
|
3,022,068,035
|
[MPSInductor][BE] Only include headers when needed
|
malfet
|
closed
|
[
"Merged",
"topic: not user facing",
"ciflow/mps",
"module: inductor",
"ciflow/inductor"
] | 3
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #152268
* __->__ #152266
Store the headers used by a shader in `MetalKernel.headers`
Add a header when a function depending on it gets invoked
Generate the majority of the special ops from a template
Delete two unused functors, `entr` and `xlog1py`, as they are decomposed by inductor anyway
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
| true
|
3,022,025,574
|
[BE]: Cleanup traceutils with fmtlib
|
Skylion007
|
closed
|
[
"oncall: distributed",
"open source",
"better-engineering",
"Merged",
"ciflow/trunk",
"release notes: distributed (c10d)"
] | 12
|
COLLABORATOR
|
Simplify code and make it faster.
cc @H-Huang @awgu @wanchaol @fegin @fduwjj @wz337 @wconstab @d4l3k
| true
|
3,022,022,369
|
Add private config to broadcast rank0 decision from the partitioner to all ranks
|
fmassa
|
closed
|
[
"fb-exported",
"Merged",
"ciflow/trunk",
"topic: not user facing",
"ciflow/inductor"
] | 14
|
MEMBER
|
Summary: This PR adds a private configuration to the partitioner that ensures that the decision taken is the same across all ranks. This is a temporary workaround, as when size_hints are also taken into account in compiler collectives this workaround will not be needed anymore.
Test Plan:
This has been tested on some internal models, but I haven't added any tests in PyTorch (yet?)
T
Differential Revision: D73666017
| true
|
3,022,011,880
|
FSDP OOM during initialization
|
fingertap
|
closed
|
[
"oncall: distributed",
"module: memory usage",
"triaged"
] | 7
|
NONE
|
### 🐛 Describe the bug
When trying to train Llama 4 with FSDP, I found that the peak memory explodes during the initialization of FSDP. The following mini-repro exposes this bug.
```python
from functools import partial
import torch
import torch.distributed as dist
from torch.distributed.fsdp import FullyShardedDataParallel as FSDP
from torch.distributed.fsdp.wrap import transformer_auto_wrap_policy
from transformers import Qwen2ForCausalLM, AutoConfig
from transformers.models.qwen2.modeling_qwen2 import Qwen2DecoderLayer
def main():
model_path: str = "Qwen/Qwen2.5-7B-Instruct"
with torch.device("meta"):
config = AutoConfig.from_pretrained(model_path, attn_implementation="flash_attention_2")
module = Qwen2ForCausalLM(config)
fsdp_module = FSDP(
module,
auto_wrap_policy=partial(
transformer_auto_wrap_policy,
transformer_layer_cls={Qwen2DecoderLayer,},
),
param_init_fn=lambda x: x.to_empty(device=torch.device("cuda"), recurse=False),
)
if dist.get_rank() == 0:
print("Peak memory", torch.cuda.max_memory_allocated() // 1024**3, "GB")
torch.cuda.empty_cache()
print("After init, model memory", torch.cuda.memory_allocated() // 1024**3, "GB")
if __name__ == "__main__":
dist.init_process_group(backend="nccl")
torch.cuda.set_device(dist.get_rank())
main()
dist.destroy_process_group()
```
The output is
```text
Peak memory 11 GB
After init, model memory 3 GB
```
The expected behavior is that the peak memory stays at 3 GB during initialization. Due to this issue, I cannot load the Llama 4 model with FSDP.
I find that FSDP actually materializes all the modules first and then shards them all, which is not expected. This implementation is not consistent with the paper, which claims that init won't OOM because it initializes the modules layer by layer, similar to the forward phase.
I think the implementation should be that
```
for module in modules_to_materialize:
param_init_fn(module)
init_param_handle(module)
shard_module(module)
```
I know that you are currently focusing on FSDP2 and do not have plans for updating FSDP1. But I think this is a severe issue for FSDP, limiting its usability for larger models. Considering the huge number of users, a fix for this issue should really be considered.
I absolutely would like to contribute, but I need more detailed instructions. Can you please take a look? Any suggestions and inputs are appreciated. @scw @svenstaro @JackDanger @infil00p @tmm1
### Versions
<details>
<summary>An 8-GPU instance (H800) with torch2.6.0+cu124</summary>
PyTorch version: 2.6.0+cu124
Is debug build: False
CUDA used to build PyTorch: 12.4
ROCM used to build PyTorch: N/A
OS: Ubuntu 22.04 LTS (x86_64)
GCC version: (Ubuntu 11.3.0-1ubuntu1~22.04) 11.3.0
Clang version: Could not collect
CMake version: version 3.26.3
Libc version: glibc-2.35
Python version: 3.12.9 | packaged by Anaconda, Inc. | (main, Feb 6 2025, 18:56:27) [GCC 11.2.0] (64-bit runtime)
Python platform: Linux-5.4.0-153-generic-x86_64-with-glibc2.35
Is CUDA available: True
CUDA runtime version: 12.4.131
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration:
GPU 0: NVIDIA H800
GPU 1: NVIDIA H800
GPU 2: NVIDIA H800
GPU 3: NVIDIA H800
GPU 4: NVIDIA H800
GPU 5: NVIDIA H800
GPU 6: NVIDIA H800
GPU 7: NVIDIA H800
Nvidia driver version: 535.129.03
cuDNN version: Could not collect
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 52 bits physical, 57 bits virtual
Byte Order: Little Endian
CPU(s): 224
On-line CPU(s) list: 0-79
Off-line CPU(s) list: 80-223
Vendor ID: GenuineIntel
Model name: Intel(R) Xeon(R) Platinum 8480+
CPU family: 6
Model: 143
Thread(s) per core: 2
Core(s) per socket: 56
Socket(s): 2
Stepping: 8
CPU max MHz: 3800.0000
CPU min MHz: 800.0000
BogoMIPS: 4000.00
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc art arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc cpuid aperfmperf tsc_known_freq pni pclmulqdq dtes64 ds_cpl vmx smx est tm2 ssse3 sdbg fma cx16 xtpr pdcm pcid dca sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm 3dnowprefetch cpuid_fault epb cat_l3 cat_l2 cdp_l3 invpcid_single cdp_l2 ssbd mba ibrs ibpb stibp ibrs_enhanced tpr_shadow vnmi flexpriority ept vpid ept_ad fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid cqm rdt_a avx512f avx512dq rdseed adx smap avx512ifma clflushopt clwb intel_pt avx512cd sha_ni avx512bw avx512vl xsaveopt xsavec xgetbv1 xsaves cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local avx512_bf16 wbnoinvd dtherm ida arat pln pts hwp hwp_act_window hwp_epp hwp_pkg_req avx512vbmi umip pku ospke waitpkg avx512_vbmi2 gfni vaes vpclmulqdq avx512_vnni avx512_bitalg tme avx512_vpopcntdq rdpid cldemote movdiri movdir64b md_clear pconfig flush_l1d arch_capabilities
Virtualization: VT-x
L1d cache: 5.3 MiB (112 instances)
L1i cache: 3.5 MiB (112 instances)
L2 cache: 224 MiB (112 instances)
L3 cache: 210 MiB (2 instances)
NUMA node(s): 2
NUMA node0 CPU(s): 0-55,112-167
NUMA node1 CPU(s): 56-111,168-223
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Not affected
Vulnerability Retbleed: Not affected
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl and seccomp
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Retpolines, IBPB conditional, IBRS_FW, STIBP conditional, RSB filling, PBRSB-eIBRS SW sequence
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Versions of relevant libraries:
[pip3] flashinfer-python==0.2.2.post1+cu124torch2.6
[pip3] numpy==2.2.5
[pip3] nvidia-cublas-cu12==12.4.5.8
[pip3] nvidia-cuda-cupti-cu12==12.4.127
[pip3] nvidia-cuda-nvrtc-cu12==12.4.127
[pip3] nvidia-cuda-runtime-cu12==12.4.127
[pip3] nvidia-cudnn-cu12==9.1.0.70
[pip3] nvidia-cufft-cu12==11.2.1.3
[pip3] nvidia-curand-cu12==10.3.5.147
[pip3] nvidia-cusolver-cu12==11.6.1.9
[pip3] nvidia-cusparse-cu12==12.3.1.170
[pip3] nvidia-cusparselt-cu12==0.6.2
[pip3] nvidia-nccl-cu12==2.21.5
[pip3] nvidia-nvjitlink-cu12==12.4.127
[pip3] nvidia-nvtx-cu12==12.4.127
[pip3] torch==2.6.0
[pip3] torchaudio==2.6.0
[pip3] torchvision==0.21.0
[pip3] triton==3.2.0
[conda] flashinfer-python 0.2.2.post1+cu124torch2.6 pypi_0 pypi
[conda] numpy 2.2.5 pypi_0 pypi
[conda] nvidia-cublas-cu12 12.4.5.8 pypi_0 pypi
[conda] nvidia-cuda-cupti-cu12 12.4.127 pypi_0 pypi
[conda] nvidia-cuda-nvrtc-cu12 12.4.127 pypi_0 pypi
[conda] nvidia-cuda-runtime-cu12 12.4.127 pypi_0 pypi
[conda] nvidia-cudnn-cu12 9.1.0.70 pypi_0 pypi
[conda] nvidia-cufft-cu12 11.2.1.3 pypi_0 pypi
[conda] nvidia-curand-cu12 10.3.5.147 pypi_0 pypi
[conda] nvidia-cusolver-cu12 11.6.1.9 pypi_0 pypi
[conda] nvidia-cusparse-cu12 12.3.1.170 pypi_0 pypi
[conda] nvidia-cusparselt-cu12 0.6.2 pypi_0 pypi
[conda] nvidia-nccl-cu12 2.21.5 pypi_0 pypi
[conda] nvidia-nvjitlink-cu12 12.4.127 pypi_0 pypi
[conda] nvidia-nvtx-cu12 12.4.127 pypi_0 pypi
[conda] torch 2.6.0 pypi_0 pypi
[conda] torchaudio 2.6.0 pypi_0 pypi
[conda] torchvision 0.21.0 pypi_0 pypi
[conda] triton 3.2.0 pypi_0 pypi
</details>
cc @H-Huang @awgu @wanchaol @fegin @fduwjj @wz337 @wconstab @d4l3k @chauhang @penguinwu
| true
|
3,021,999,506
|
`iter()` and `reversed()` do not raise `StopIteration` when exhausted in torch.compile
|
guilhermeleobas
|
open
|
[
"triaged",
"oncall: pt2",
"module: dynamo"
] | 2
|
COLLABORATOR
|
### 🐛 Describe the bug
The expected behavior is to raise `StopIteration` after the iterator is exhausted, but inside Dynamo, the iterator is not being properly exhausted ~when `(force_)unpack_var_sequence(...)` is called~.
Reproducer:
```python
import torch
@torch.compile(backend="eager", fullgraph=True)
def foo_iter(t):
it = iter([1, 2, 3])
_ = list(it) # consume all elements
try:
next(it)
except StopIteration:
return t.sin()
else:
assert False, "Expected StopIteration"
@torch.compile(backend="eager", fullgraph=True)
def foo_reversed(t):
rev = reversed([1, 2, 3])
_ = list(rev) # consume all elements
try:
next(rev)
except StopIteration:
return t.sin()
else:
assert False, "Expected StopIteration"
t = torch.tensor([1.0])
assert foo_iter(t) == t.sin()
assert foo_reversed(t) == t.sin()
```
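For reference, the eager Python behavior that Dynamo should match:
```python
it = iter([1, 2, 3])
list(it)      # consumes all elements
try:
    next(it)  # an exhausted iterator must raise StopIteration
except StopIteration:
    print("StopIteration raised as expected")
```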
### Versions
PyTorch main
cc @chauhang @penguinwu @voznesenskym @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @amjames
| true
|
3,021,982,230
|
Context Parallel -- unsharded output doesn't match output without CP.
|
sen-ppl
|
open
|
[
"oncall: distributed",
"triaged"
] | 26
|
NONE
|
### 🐛 Describe the bug
Hello, I wrapped my model in the CP context manager and see that my attention module's un-sharded outputs are different from the outputs when CP size = 1. The un-sharded inputs (Q, K, V) are the same as the (Q, K, V) when CP is off.
```
# entry point file
cp_buffers = [x, y] + [m.self_attn.rotary_emb.cos_cached for m in model.model.layers.values()]\
+ [m.self_attn.rotary_emb.sin_cached for m in model.model.layers.values()]
cp_buffer_seq_dims = [1, 1] + [0] * len(model.model.layers) * 2 # cp on sequence dimensions
cp_no_restore_buffers = {x, y}
cp_context = context_parallel(
mesh=config.cp_mesh,
buffers=cp_buffers,
buffer_seq_dims=cp_buffer_seq_dims,
no_restore_buffers=cp_no_restore_buffers,
)
with cp_context():
preds = model(x)
from torch.distributed.tensor.experimental._attention import context_parallel_unshard
(preds, y) = context_parallel_unshard(config.cp_mesh, [preds, y], [2, 1])
# model file: the layers are short-cut so the model returns first attn_output
# query_states = ...
# value_states = ...
# key_states = ...
# return query_states, value_states, key_states for debug
attn_output = scaled_dot_product_attention(
query=query_states,
key=key_states,
value=value_states,
scale=self.softmax_scale,
is_causal=True,
)
return attn_output
```
Since the un-sharded Q, K, V are recovered correctly, I think I'm using the unsharding function correctly. What could lead to the difference in the CP attn_output? Thanks!
cc @H-Huang @awgu @wanchaol @fegin @fduwjj @wz337 @wconstab @d4l3k
| true
|
3,021,980,672
|
Configurable logging for cpp_extensions.py
|
msaroufim
|
closed
|
[
"Merged",
"ciflow/trunk",
"topic: bc breaking",
"topic: not user facing"
] | 13
|
MEMBER
|
Today `cpp_extensions` makes heavy use of printing to stderr. This makes our life harder in KernelBot, where we typically rely on stderr to surface only real errors, whereas today cpp_extensions uses stderr for updates that would better be qualified as INFO, WARNING, or ERROR.
Now instead we'll recommend users of our cpp extension system to do something like
```python
import logging
cpp_ext_logger = logging.getLogger("torch.utils.cpp_extension")
cpp_ext_logger.setLevel(logging.WARNING)
```
While this dramatically reduces log spew, it can be viewed as a BC breaking change if people were relying on certain strings being present in stdout or stderr
Considering different teams might want to silence errors differently, this PR proposes replacing all `print()` statements with `logging` statements with the same heuristics that the python logging module recommends
1. DEBUG: For things like detailed compilation steps or reading filepaths - by default gets logged on stdout
2. INFO: Build progress - by default gets logged on stdout
3. WARNING: Surfacing issues that might cause bad performance or slow compilation times - by default gets logged on stdout
4. ERROR: Problems that prevent proper functioning - by default gets logged on stdout
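Concretely, the kind of mechanical change this implies looks roughly like the following (illustrative only, not a literal diff from the PR; the message text mirrors what lands in stderr today):
```python
import logging

logging.basicConfig(level=logging.INFO)  # only so this standalone example prints something
logger = logging.getLogger("torch.utils.cpp_extension")

# before (hypothetical print-based message):
#     print("Building extension module grayscale...", file=sys.stderr)
# after: ruff-friendly %s formatting instead of f-strings
logger.info("Building extension module %s...", "grayscale")
```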
Note that warnings.warn is a different library and is not hooked up to the python logging module by default
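If someone also wants those `UserWarning`s to obey the same logging configuration, one option — not part of this PR, just a sketch using the stdlib — is Python's built-in warnings capture:
```python
import logging
import warnings

logging.captureWarnings(True)  # routes warnings.warn(...) to the "py.warnings" logger
logging.getLogger("py.warnings").setLevel(logging.ERROR)

warnings.warn("this now goes through the logging module rather than warnings' own stderr path")
```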
So the goal of this PR is to make it possible for teams to set the logging level that is most appropriate for them. One annoying thing is that ruff flags logger calls that use f-strings or `.format`, so we have to use old-school `%s` formatting.
An unrelated improvement I'd be happy to push in a separate PR is adding support for "native" in `TORCH_CUDA_ARCH_LIST`, which would just pick the arch for the current device
An example of what's in stderr today
```
Using /root/.cache/torch_extensions/py311_cu124 as PyTorch extensions root...
Detected CUDA files, patching ldflags
Emitting ninja build file /root/.cache/torch_extensions/py311_cu124/grayscale/build.ninja...
/usr/local/lib/python3.11/site-packages/torch/utils/cpp_extension.py:2059: UserWarning: TORCH_CUDA_ARCH_LIST is not set, all archs for visible cards are included for compilation.
If this is not desired, please set os.environ['TORCH_CUDA_ARCH_LIST'].
warnings.warn(
Building extension module grayscale...
Allowing ninja to set a default number of workers... (overridable by setting the environment variable MAX_JOBS=N)
Loading extension module grayscale...
/usr/local/lib/python3.11/site-packages/torch/_dynamo/variables/functions.py:679: UserWarning: Graph break due to unsupported builtin grayscale.PyCapsule.grayscale. This function is either a Python builtin (e.g. _warnings.warn) or a third-party C/C++ Python extension (perhaps created with pybind). If it is a Python builtin, please file an issue on GitHub so the PyTorch team can add support for it and see the next case for a workaround. If it is a third-party C/C++ Python extension, please either wrap it into a PyTorch-understood custom operator (see https://pytorch.org/tutorials/advanced/custom_ops_landing_page.html for more details) or, if it is traceable, use torch.compiler.allow_in_graph.
torch._dynamo.utils.warn_once(msg)
```
Whereas after this PR users can do
`python benchmark_load_inline.py > >(tee stdout.txt) 2> >(tee stderr.txt >&2)`
```python
import os
import sys
from pathlib import Path
import shutil
import tempfile
import torch
from torch.utils.cpp_extension import load_inline
import logging
cpp_ext_logger = logging.getLogger("torch.utils.cpp_extension")
cpp_ext_logger.setLevel(logging.WARNING)
os.environ["TORCH_CUDA_ARCH_LIST"] = "native"
cpp_code = """
torch::Tensor to_gray(torch::Tensor input);
"""
cuda_kernel_code = """
torch::Tensor to_gray(torch::Tensor input) {
auto output = torch::epty({input.size(0), input.size(1)}, input.options());
return output ;
}
"""
# Avoid caching results
with tempfile.TemporaryDirectory() as build_dir:
cuda_module = load_inline(
name="to_gray_cuda",
cpp_sources=cpp_code,
cuda_sources=cuda_kernel_code,
functions=["to_gray"],
with_cuda=True,
verbose=True,
extra_cflags=["-std=c++17"], # "-ftime-report", "-H"],
extra_cuda_cflags=["-arch=sm_89"],
build_directory=build_dir,
)
```
## New logs
### On failure
Which gives a much more reasonable stdout
```
[1/3] /usr/local/cuda-12.8/bin/nvcc --generate-dependencies-with-compile --dependency-output cuda.cuda.o.d -DTORCH_EXTENSION_NAME=to_gray_cuda -DTORCH_API_INCLUDE_EXTENSION_H -DPYBIND11_COMPILER_TYPE=\"_gcc\" -DPYBIND11_STDLIB=\"_libstdcpp\" -DPYBIND11_BUILD_ABI=\"_cxxabi1016\" -isystem /home/marksaroufim/pytorch/torch/include -isystem /home/marksaroufim/pytorch/torch/include/torch/csrc/api/include -isystem /usr/local/cuda-12.8/include -isystem /usr/local/cuda/targets/x86_64-linux/include -isystem /home/marksaroufim/.conda/envs/nv/include/python3.10 -D__CUDA_NO_HALF_OPERATORS__ -D__CUDA_NO_HALF_CONVERSIONS__ -D__CUDA_NO_BFLOAT16_CONVERSIONS__ -D__CUDA_NO_HALF2_OPERATORS__ --expt-relaxed-constexpr -gencode=arch=compute_80,code=compute_80 -gencode=arch=compute_80,code=sm_80 --compiler-options '-fPIC' -arch=sm_89 -std=c++17 -c /tmp/tmpbg_xzv0r/cuda.cu -o cuda.cuda.o
FAILED: cuda.cuda.o
/usr/local/cuda-12.8/bin/nvcc --generate-dependencies-with-compile --dependency-output cuda.cuda.o.d -DTORCH_EXTENSION_NAME=to_gray_cuda -DTORCH_API_INCLUDE_EXTENSION_H -DPYBIND11_COMPILER_TYPE=\"_gcc\" -DPYBIND11_STDLIB=\"_libstdcpp\" -DPYBIND11_BUILD_ABI=\"_cxxabi1016\" -isystem /home/marksaroufim/pytorch/torch/include -isystem /home/marksaroufim/pytorch/torch/include/torch/csrc/api/include -isystem /usr/local/cuda-12.8/include -isystem /usr/local/cuda/targets/x86_64-linux/include -isystem /home/marksaroufim/.conda/envs/nv/include/python3.10 -D__CUDA_NO_HALF_OPERATORS__ -D__CUDA_NO_HALF_CONVERSIONS__ -D__CUDA_NO_BFLOAT16_CONVERSIONS__ -D__CUDA_NO_HALF2_OPERATORS__ --expt-relaxed-constexpr -gencode=arch=compute_80,code=compute_80 -gencode=arch=compute_80,code=sm_80 --compiler-options '-fPIC' -arch=sm_89 -std=c++17 -c /tmp/tmpbg_xzv0r/cuda.cu -o cuda.cuda.o
/tmp/tmpbg_xzv0r/cuda.cu(6): error: namespace "torch" has no member "epty"
auto output = torch::epty({input.size(0), input.size(1)}, input.options());
^
1 error detected in the compilation of "/tmp/tmpbg_xzv0r/cuda.cu".
[2/3] c++ -MMD -MF main.o.d -DTORCH_EXTENSION_NAME=to_gray_cuda -DTORCH_API_INCLUDE_EXTENSION_H -DPYBIND11_COMPILER_TYPE=\"_gcc\" -DPYBIND11_STDLIB=\"_libstdcpp\" -DPYBIND11_BUILD_ABI=\"_cxxabi1016\" -isystem /home/marksaroufim/pytorch/torch/include -isystem /home/marksaroufim/pytorch/torch/include/torch/csrc/api/include -isystem /usr/local/cuda-12.8/include -isystem /usr/local/cuda/targets/x86_64-linux/include -isystem /home/marksaroufim/.conda/envs/nv/include/python3.10 -fPIC -std=c++17 -std=c++17 -c /tmp/tmpbg_xzv0r/main.cpp -o main.o
ninja: build stopped: subcommand failed.
```
And stderr
```
Traceback (most recent call last):
File "/home/marksaroufim/pytorch/torch/utils/cpp_extension.py", line 2874, in _run_ninja_build
subprocess.run(
File "/home/marksaroufim/.conda/envs/nv/lib/python3.10/subprocess.py", line 526, in run
raise CalledProcessError(retcode, process.args,
subprocess.CalledProcessError: Command '['ninja', '-v']' returned non-zero exit status 1.
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "/home/marksaroufim/load_inline_slow/benchmark_load_inline.py", line 30, in <module>
cuda_module = load_inline(
File "/home/marksaroufim/pytorch/torch/utils/cpp_extension.py", line 2261, in load_inline
return _jit_compile(
File "/home/marksaroufim/pytorch/torch/utils/cpp_extension.py", line 2367, in _jit_compile
_write_ninja_file_and_build_library(
File "/home/marksaroufim/pytorch/torch/utils/cpp_extension.py", line 2528, in _write_ninja_file_and_build_library
_run_ninja_build(
File "/home/marksaroufim/pytorch/torch/utils/cpp_extension.py", line 2892, in _run_ninja_build
raise RuntimeError(message) from e
RuntimeError: Error building extension 'to_gray_cuda'
```
### On success
stdout
```
[1/3] c++ -MMD -MF main.o.d -DTORCH_EXTENSION_NAME=to_gray_cuda -DTORCH_API_INCLUDE_EXTENSION_H -DPYBIND11_COMPILER_TYPE=\"_gcc\" -DPYBIND11_STDLIB=\"_libstdcpp\" -DPYBIND11_BUILD_ABI=\"_cxxabi1016\" -isystem /home/marksaroufim/pytorch/torch/include -isystem /home/marksaroufim/pytorch/torch/include/torch/csrc/api/include -isystem /usr/local/cuda-12.8/include -isystem /usr/local/cuda/targets/x86_64-linux/include -isystem /home/marksaroufim/.conda/envs/nv/include/python3.10 -fPIC -std=c++17 -std=c++17 -c /tmp/tmpxv_ovlrf/main.cpp -o main.o
[2/3] /usr/local/cuda-12.8/bin/nvcc --generate-dependencies-with-compile --dependency-output cuda.cuda.o.d -DTORCH_EXTENSION_NAME=to_gray_cuda -DTORCH_API_INCLUDE_EXTENSION_H -DPYBIND11_COMPILER_TYPE=\"_gcc\" -DPYBIND11_STDLIB=\"_libstdcpp\" -DPYBIND11_BUILD_ABI=\"_cxxabi1016\" -isystem /home/marksaroufim/pytorch/torch/include -isystem /home/marksaroufim/pytorch/torch/include/torch/csrc/api/include -isystem /usr/local/cuda-12.8/include -isystem /usr/local/cuda/targets/x86_64-linux/include -isystem /home/marksaroufim/.conda/envs/nv/include/python3.10 -D__CUDA_NO_HALF_OPERATORS__ -D__CUDA_NO_HALF_CONVERSIONS__ -D__CUDA_NO_BFLOAT16_CONVERSIONS__ -D__CUDA_NO_HALF2_OPERATORS__ --expt-relaxed-constexpr -gencode=arch=compute_80,code=compute_80 -gencode=arch=compute_80,code=sm_80 --compiler-options '-fPIC' -arch=sm_89 -std=c++17 -c /tmp/tmpxv_ovlrf/cuda.cu -o cuda.cuda.o
[3/3] c++ main.o cuda.cuda.o -shared -L/home/marksaroufim/pytorch/torch/lib -lc10 -lc10_cuda -ltorch_cpu -ltorch_cuda -ltorch -ltorch_python -L/usr/local/cuda-12.8/lib64 -lcudart -o to_gray_cuda.so
```
And an empty stderr as expected
| true
|
3,021,974,374
|
[BE] Remove dangling # in contributing.md
|
msaroufim
|
closed
|
[
"Merged",
"ciflow/trunk",
"topic: not user facing"
] | 3
|
MEMBER
|
I frequently come to CONTRIBUTING.md to copy-paste the below snippet to rebuild PyTorch, which in zsh gives this error because zsh interprets # as a command. These comments add nothing, so I'm just removing them.
```
error: pathspec 'sync' did not match any file(s) known to git
error: pathspec 'the' did not match any file(s) known to git
error: pathspec 'submodules' did not match any file(s) known to git
Building wheel torch-2.8.0a0+git9c01c87
invalid command name '#'
```
```
git submodule update --init --recursive # very important to sync the submodules
python setup.py develop # then try running the command again
git submodule update --init --recursive
python setup.py develop
```
| true
|
3,021,969,136
|
Do not redirect warnings to stderr in cpp_extension.py
|
msaroufim
|
closed
|
[
"module: cpu"
] | 3
|
MEMBER
|
- **divup op**
- **update**
- **update**
- **update**
- **cu**
- **update**
- **update**
- **simply templates**
- **update**
- **update**
- **update**
- **old**
- **the CI is green**
- **Trigger build**
- **Do not redirect warnings to stderr in cpp_extension.py**
Fixes #ISSUE_NUMBER
cc @jgong5 @mingfeima @XiaobingSuper @sanchitintel @ashokei @jingxu10 @jerryzh168
| true
|
3,021,946,302
|
[FR] Support BSHM-layout scaled_dot_product_attention without transpose.
|
ghostplant
|
open
|
[
"triaged",
"module: sdpa"
] | 15
|
NONE
|
What's the plan to support direct computation given (Batch, Seq, Head, Model_dim) Q/K/V tensors, without the additional expensive back-and-forth transposes?
```python
q = torch.randn([b, s, h, m])
k = torch.randn([b, s, h, m])
v = torch.randn([b, s, h, m])
scores = torch.nn.functional.scaled_dot_product_attention(q, k, v, no_transpose=True)
```
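For reference, the round-trip this request wants to avoid looks roughly like this today (`no_transpose` above is the proposed API, not an existing argument; shapes are illustrative):
```python
import torch

b, s, h, m = 2, 128, 8, 64
q = torch.randn(b, s, h, m)
k = torch.randn(b, s, h, m)
v = torch.randn(b, s, h, m)

# SDPA currently expects (Batch, Head, Seq, Model_dim), so BSHM inputs need a
# transpose going in and another one coming back out.
out = torch.nn.functional.scaled_dot_product_attention(
    q.transpose(1, 2), k.transpose(1, 2), v.transpose(1, 2)
)
out = out.transpose(1, 2)  # back to (Batch, Seq, Head, Model_dim)
```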
| true
|
3,021,853,799
|
Move code out of individual token linters
|
rec
|
open
|
[
"module: bc-breaking",
"open source",
"topic: not user facing",
"suppress-bc-linter"
] | 2
|
COLLABORATOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #152256
* #148959
* #151906
cc @ezyang @gchanan
| true
|
3,021,789,824
|
Pytorch 2.7.0 with XPU (silently) crashing
|
blaz-r
|
closed
|
[] | 1
|
NONE
|
### 🐛 Describe the bug
I've installed the latest version of pytorch 2.7.0 with xpu support on a Windows 11 Intel NUC. When I try to use the xpu in pytorch the program just silently fails.
If I run `torch.xpu._is_compiled()` I get True, but just running `torch.xpu.is_available()` fails.
I also tried running the code with `SYCL_UR_TRACE=-1` and it just prints the following before closing with exit code 0xC0000005:
```
---> DLL_PROCESS_ATTACH ur_win_proxy_loader.dll
---> DLL_PROCESS_ATTACH syclx.dll
```
I have an Arc A770M with driver version 32.0.101.6739 and my CPU is i7 12700H that also has Iris Xe integrated.
Maybe I'm missing something obvious here, but I'm really not sure how to fix this.
### Versions
PyTorch version: 2.7.0+xpu
Is debug build: False
CUDA used to build PyTorch: None
ROCM used to build PyTorch: N/A
OS: Microsoft Windows 11 Pro (10.0.26100 64-bit)
GCC version: Could not collect
Clang version: Could not collect
CMake version: Could not collect
Libc version: N/A
Python version: 3.10.14 | packaged by Anaconda, Inc. | (main, May 6 2024, 19:44:50) [MSC v.1916 64 bit (AMD64)] (64-bit runtime)
Python platform: Windows-10-10.0.26100-SP0
Is CUDA available: False
CUDA runtime version: No CUDA
CUDA_MODULE_LOADING set to: N/A
GPU models and configuration: No CUDA
Nvidia driver version: No CUDA
cuDNN version: No CUDA
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Name: 12th Gen Intel(R) Core(TM) i7-12700H
Manufacturer: GenuineIntel
Family: 198
Architecture: 9
ProcessorType: 3
DeviceID: CPU0
CurrentClockSpeed: 2300
MaxClockSpeed: 2300
L2CacheSize: 11776
L2CacheSpeed: None
Revision: None
Versions of relevant libraries:
[pip3] numpy==1.26.4
[pip3] pytorch-lightning==1.9.5
[pip3] pytorch-triton-xpu==3.3.0
[pip3] torch==2.7.0+xpu
[pip3] torchmetrics==0.10.3
[pip3] torchvision==0.22.0+xpu
[conda] numpy 1.26.4 pypi_0 pypi
[conda] pytorch-lightning 1.9.5 pypi_0 pypi
[conda] pytorch-triton-xpu 3.3.0 pypi_0 pypi
[conda] torch 2.7.0+xpu pypi_0 pypi
[conda] torchmetrics 0.10.3 pypi_0 pypi
[conda] torchvision 0.22.0+xpu pypi_0 pypi
| true
|
3,021,772,579
|
Fix typos in multiple files
|
co63oc
|
closed
|
[
"module: cpu",
"open source",
"Merged",
"ciflow/trunk",
"release notes: linalg_frontend",
"topic: not user facing"
] | 3
|
CONTRIBUTOR
|
Fix typos in multiple files
cc @jgong5 @mingfeima @XiaobingSuper @sanchitintel @ashokei @jingxu10 @jerryzh168
| true
|
3,021,631,345
|
Aborted (core dumped) in torch.flipud
|
cx104906
|
closed
|
[] | 1
|
NONE
|
### 🐛 Describe the bug
Reproduce
```
curl -L -o 002-args "https://github.com/cx104906/poc/raw/main/pytorch/id%3A000002-args"
curl -L -o 002-kwargs "https://github.com/cx104906/poc/raw/main/pytorch/id%3A000002-kwargs"
python run.py
```
run.py:
```
import torch
import pickle
print(torch.__version__)
mylist = torch.load("xxx/002-args",weights_only=True)
mydict = torch.load("xxx/002-kwargs",weights_only=True)
print("test......")
torch.flipud(*mylist,**mydict)
```
Output
```
>python run.py
2.6.0+cpu
/home/cas/anaconda3/envs/py310/lib/python3.10/site-packages/torch/_utils.py:410: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly. To access UntypedStorage directly, use tensor.untyped_storage() instead of tensor.storage()
device=storage.device,
test......
Segmentation fault (core dumped)
```
### Versions
python testcrash/collect_env.py
Collecting environment information...
PyTorch version: 2.6.0+cpu
Is debug build: False
CUDA used to build PyTorch: Could not collect
ROCM used to build PyTorch: N/A
OS: Ubuntu 18.04.6 LTS (x86_64)
GCC version: (GCC) 11.2.0
Clang version: 12.0.1
CMake version: version 3.22.2
Libc version: glibc-2.27
Python version: 3.10.0 (default, Mar 3 2022, 09:58:08) [GCC 7.5.0] (64-bit runtime)
Python platform: Linux-5.4.0-144-generic-x86_64-with-glibc2.27
Is CUDA available: False
CUDA runtime version: Could not collect
CUDA_MODULE_LOADING set to: N/A
GPU models and configuration: GPU 0: Tesla T4
Nvidia driver version: 520.61.05
cuDNN version: Could not collect
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
架构: x86_64
CPU 运行模式: 32-bit, 64-bit
字节序: Little Endian
CPU: 32
在线 CPU 列表: 0-31
每个核的线程数: 1
每个座的核数: 32
座: 1
NUMA 节点: 1
厂商 ID: GenuineIntel
CPU 系列: 6
型号: 85
型号名称: Intel(R) Xeon(R) Gold 5218R CPU @ 2.10GHz
步进: 7
CPU MHz: 2095.076
BogoMIPS: 4190.15
虚拟化: VT-x
超管理器厂商: KVM
虚拟化类型: 完全
L1d 缓存: 32K
L1i 缓存: 32K
L2 缓存: 4096K
L3 缓存: 16384K
NUMA 节点0 CPU: 0-31
标记: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ss ht syscall nx pdpe1gb rdtscp lm constant_tsc arch_perfmon rep_good nopl xtopology cpuid tsc_known_freq pni pclmulqdq vmx ssse3 fma cx16 pdcm pcid sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand hypervisor lahf_lm abm 3dnowprefetch cpuid_fault invpcid_single ssbd ibrs ibpb stibp ibrs_enhanced tpr_shadow vnmi flexpriority ept vpid ept_ad fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid mpx avx512f avx512dq rdseed adx smap clflushopt clwb avx512cd avx512bw avx512vl xsaveopt xsavec xgetbv1 xsaves arat umip pku ospke avx512_vnni md_clear arch_capabilities
Versions of relevant libraries:
[pip3] numpy==1.26.4
[pip3] nvidia-cublas-cu11==11.10.3.66
[pip3] nvidia-cublas-cu12==12.4.5.8
[pip3] nvidia-cuda-cupti-cu11==11.7.101
[pip3] nvidia-cuda-cupti-cu12==12.4.127
[pip3] nvidia-cuda-nvrtc-cu11==11.7.99
[pip3] nvidia-cuda-nvrtc-cu12==12.4.127
[pip3] nvidia-cuda-runtime-cu11==11.7.99
[pip3] nvidia-cuda-runtime-cu12==12.4.127
[pip3] nvidia-cudnn-cu11==8.5.0.96
[pip3] nvidia-cudnn-cu12==9.1.0.70
[pip3] nvidia-cufft-cu11==10.9.0.58
[pip3] nvidia-cufft-cu12==11.2.1.3
[pip3] nvidia-curand-cu11==10.2.10.91
[pip3] nvidia-curand-cu12==10.3.5.147
[pip3] nvidia-cusolver-cu11==11.4.0.1
[pip3] nvidia-cusolver-cu12==11.6.1.9
[pip3] nvidia-cusparse-cu11==11.7.4.91
[pip3] nvidia-cusparse-cu12==12.3.1.170
[pip3] nvidia-cusparselt-cu12==0.6.2
[pip3] nvidia-nccl-cu11==2.14.3
[pip3] nvidia-nccl-cu12==2.21.5
[pip3] nvidia-nvjitlink-cu12==12.4.127
[pip3] nvidia-nvtx-cu11==11.7.91
[pip3] nvidia-nvtx-cu12==12.4.127
[pip3] optree==0.14.1
[pip3] torch==2.6.0+cpu
[pip3] torchaudio==2.6.0+cpu
[pip3] torchvision==0.21.0+cpu
[pip3] triton==3.2.0
[conda] torch 2.6.0+cpu pypi_0 pypi
[conda] torchaudio 2.6.0+cpu pypi_0 pypi
[conda] torchvision 0.21.0+cpu pypi_0 pypi
| true
|
3,021,628,861
|
[Inductor] weird reordering behavior with `wait_tensor`
|
YouJiacheng
|
closed
|
[] | 1
|
CONTRIBUTOR
|
### 🐛 Describe the bug
Case 1: return the average
`wait` is NOT pushed to the end
```python
@torch.compile
def foo(x: Tensor, y: Tensor):
x_avg = fcol.all_reduce(x, "avg", "0")
y_sq = y * y
return x_avg, y_sq
```
```python
def call(args):
arg0_1, arg1_1 = args
args.clear()
assert_size_stride(arg0_1, (1024, ), (1, ))
assert_size_stride(arg1_1, (1024, ), (1, ))
with torch.cuda._DeviceGuard(0):
torch.cuda.set_device(0)
buf0 = empty_strided_cuda((1024, ), (1, ), torch.float32)
# Topologically Sorted Source Nodes: [tensor], Original ATen: [_c10d_functional.all_reduce]
stream0 = get_raw_stream(0)
triton_poi_fused_all_reduce_0.run(arg0_1, buf0, 1024, stream=stream0)
del arg0_1
# Topologically Sorted Source Nodes: [tensor], Original ATen: [_c10d_functional.all_reduce]
torch.ops._c10d_functional.all_reduce_.default(buf0, 'avg', '0')
# Topologically Sorted Source Nodes: [x_avg], Original ATen: [_c10d_functional.wait_tensor]
torch.ops._c10d_functional.wait_tensor.default(buf0)
buf5 = empty_strided_cuda((1024, ), (1, ), torch.float32)
# Topologically Sorted Source Nodes: [y_sq], Original ATen: [aten.mul]
stream0 = get_raw_stream(0)
triton_poi_fused_mul_1.run(arg1_1, buf5, 1024, stream=stream0)
del arg1_1
return (buf0, buf5, )
```
Case 2: assign the average to an attribute
`wait` IS pushed to the end
```python
@torch.compile
def foo(x: Tensor, y: Tensor):
x.avg = fcol.all_reduce(x, "avg", "0")
y_sq = y * y
return None, y_sq
```
```python
def call(args):
arg0_1, arg1_1 = args
args.clear()
assert_size_stride(arg0_1, (1024, ), (1, ))
assert_size_stride(arg1_1, (1024, ), (1, ))
with torch.cuda._DeviceGuard(0):
torch.cuda.set_device(0)
buf0 = empty_strided_cuda((1024, ), (1, ), torch.float32)
# Topologically Sorted Source Nodes: [tensor], Original ATen: [_c10d_functional.all_reduce]
stream0 = get_raw_stream(0)
triton_poi_fused_all_reduce_0.run(arg0_1, buf0, 1024, stream=stream0)
del arg0_1
# Topologically Sorted Source Nodes: [tensor], Original ATen: [_c10d_functional.all_reduce]
torch.ops._c10d_functional.all_reduce_.default(buf0, 'avg', '0')
buf3 = empty_strided_cuda((1024, ), (1, ), torch.float32)
# Topologically Sorted Source Nodes: [y_sq], Original ATen: [aten.mul]
stream0 = get_raw_stream(0)
triton_poi_fused_mul_1.run(arg1_1, buf3, 1024, stream=stream0)
del arg1_1
# Topologically Sorted Source Nodes: [wait_tensor], Original ATen: [_c10d_functional.wait_tensor]
torch.ops._c10d_functional.wait_tensor.default(buf0)
return (buf3, buf0, )
```
Case 3: Case 2 but `torch._inductor.config.reorder_for_locality = False`
`wait` is NOT pushed to the end
```python
def call(args):
arg0_1, arg1_1 = args
args.clear()
assert_size_stride(arg0_1, (1024, ), (1, ))
assert_size_stride(arg1_1, (1024, ), (1, ))
with torch.cuda._DeviceGuard(0):
torch.cuda.set_device(0)
buf0 = empty_strided_cuda((1024, ), (1, ), torch.float32)
# Topologically Sorted Source Nodes: [tensor], Original ATen: [_c10d_functional.all_reduce]
stream0 = get_raw_stream(0)
triton_poi_fused_all_reduce_0.run(arg0_1, buf0, 1024, stream=stream0)
del arg0_1
# Topologically Sorted Source Nodes: [tensor], Original ATen: [_c10d_functional.all_reduce]
torch.ops._c10d_functional.all_reduce_.default(buf0, 'avg', '0')
# Topologically Sorted Source Nodes: [wait_tensor], Original ATen: [_c10d_functional.wait_tensor]
torch.ops._c10d_functional.wait_tensor.default(buf0)
buf5 = empty_strided_cuda((1024, ), (1, ), torch.float32)
# Topologically Sorted Source Nodes: [y_sq], Original ATen: [aten.mul]
stream0 = get_raw_stream(0)
triton_poi_fused_mul_1.run(arg1_1, buf5, 1024, stream=stream0)
del arg1_1
return (buf5, buf0, )
```
Full code:
```python
import os
os.environ["TORCHINDUCTOR_FX_GRAPH_CACHE"] = "0"
import torch
from torch import Tensor
import torch._inductor.codecache
import torch._inductor.graph
from torch._logging._internal import trace_structured
import torch.distributed as dist
import torch.distributed._functional_collectives as fcol
# torch._inductor.config.reorder_for_locality = False
@torch.compile
def foo(x: Tensor, y: Tensor):
x_avg = fcol.all_reduce(x, "avg", "0")
y_sq = y * y
return x_avg, y_sq
# @torch.compile
# def foo(x: Tensor, y: Tensor):
# x.avg = fcol.all_reduce(x, "avg", "0")
# y_sq = y * y
# return None, y_sq
rank = int(os.environ["RANK"])
world_size = int(os.environ["WORLD_SIZE"])
device = torch.device("cuda", int(os.environ["LOCAL_RANK"]))
dist.init_process_group(backend="nccl", device_id=device)
def _patched_trace_structured(name, *args, **kwargs):
if args:
metadata_fn, *_ = args
else:
metadata_fn = kwargs.get("metadata_fn", lambda: {})
if name == "inductor_output_code":
print(f"inductor_output_code: {metadata_fn().get('filename', 'Unknown')}")
trace_structured(name, *args, **kwargs)
if rank == 0:
torch._inductor.codecache.trace_structured = _patched_trace_structured # type: ignore
torch._inductor.graph.trace_structured = _patched_trace_structured # type: ignore
with device:
x = torch.ones(1024)
y = torch.ones(1024)
foo(x, y)
dist.destroy_process_group()
```
### Versions
Collecting environment information...
PyTorch version: 2.8.0.dev20250425+cu126
Is debug build: False
CUDA used to build PyTorch: 12.6
ROCM used to build PyTorch: N/A
OS: Ubuntu 22.04.4 LTS (x86_64)
GCC version: (Ubuntu 11.4.0-1ubuntu1~22.04) 11.4.0
Clang version: Could not collect
CMake version: Could not collect
Libc version: glibc-2.35
Python version: 3.12.9 (main, Mar 11 2025, 17:26:57) [Clang 20.1.0 ] (64-bit runtime)
Python platform: Linux-5.4.250-2-velinux1u1-amd64-x86_64-with-glibc2.35
Is CUDA available: True
CUDA runtime version: 12.4.131
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration:
GPU 0: NVIDIA H100 80GB HBM3
GPU 1: NVIDIA H100 80GB HBM3
GPU 2: NVIDIA H100 80GB HBM3
GPU 3: NVIDIA H100 80GB HBM3
Nvidia driver version: 535.129.03
cuDNN version: Could not collect
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 52 bits physical, 57 bits virtual
Byte Order: Little Endian
CPU(s): 168
On-line CPU(s) list: 0-74
Off-line CPU(s) list: 75-167
Vendor ID: GenuineIntel
Model name: Intel(R) Xeon(R) Platinum 8457C
CPU family: 6
Model: 143
Thread(s) per core: 2
Core(s) per socket: 42
Socket(s): 2
Stepping: 8
BogoMIPS: 5199.99
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ss ht syscall nx pdpe1gb rdtscp lm constant_tsc rep_good nopl xtopology nonstop_tsc cpuid pni pclmulqdq monitor ssse3 fma cx16 pcid sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand hypervisor lahf_lm abm 3dnowprefetch cpuid_fault invpcid_single ssbd ibrs ibpb stibp ibrs_enhanced fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid avx512f avx512dq rdseed adx smap avx512ifma clflushopt clwb avx512cd sha_ni avx512bw avx512vl xsaveopt xsavec xgetbv1 xsaves avx512_bf16 wbnoinvd arat avx512vbmi umip pku ospke waitpkg avx512_vbmi2 gfni vaes vpclmulqdq avx512_vnni avx512_bitalg avx512_vpopcntdq rdpid cldemote movdiri movdir64b md_clear arch_capabilities
Hypervisor vendor: KVM
Virtualization type: full
L1d cache: 3.9 MiB (84 instances)
L1i cache: 2.6 MiB (84 instances)
L2 cache: 168 MiB (84 instances)
L3 cache: 195 MiB (2 instances)
NUMA node(s): 2
NUMA node0 CPU(s): 0-83
NUMA node1 CPU(s): 84-167
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Unknown: No mitigations
Vulnerability Retbleed: Not affected
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl and seccomp
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Enhanced IBRS, IBPB conditional, RSB filling, PBRSB-eIBRS SW sequence
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Mitigation; TSX disabled
Versions of relevant libraries:
[pip3] numpy==2.2.4
[pip3] nvidia-cublas-cu12==12.6.4.1
[pip3] nvidia-cuda-cupti-cu12==12.6.80
[pip3] nvidia-cuda-nvrtc-cu12==12.6.77
[pip3] nvidia-cuda-runtime-cu12==12.6.77
[pip3] nvidia-cudnn-cu12==9.5.1.17
[pip3] nvidia-cufft-cu12==11.3.0.4
[pip3] nvidia-curand-cu12==10.3.7.77
[pip3] nvidia-cusolver-cu12==11.7.1.2
[pip3] nvidia-cusparse-cu12==12.5.4.2
[pip3] nvidia-cusparselt-cu12==0.6.3
[pip3] nvidia-nccl-cu12==2.26.2
[pip3] nvidia-nvjitlink-cu12==12.6.85
[pip3] nvidia-nvtx-cu12==12.6.77
[pip3] pytorch-triton==3.3.0+git96316ce5
[pip3] torch==2.8.0.dev20250425+cu126
[conda] Could not collect
| true
|
3,021,557,990
|
Windows inductor generated code without function declaration, and compile failed on MSVC.
|
xuhancn
|
open
|
[
"module: windows",
"oncall: pt2",
"module: inductor",
"oncall: cpu inductor"
] | 2
|
COLLABORATOR
|
### 🐛 Describe the bug
Reproducer:
```cmd
pytest -v test\inductor\test_cpu_cpp_wrapper.py -k test_add_complex4_cpu_cpp_wrapper -s
```
### Error logs
Error message:
```cmd
_____________________________________________________________________________________________________________ TestCppWrapper.test_add_complex4_cpu_cpp_wrapper ______________________________________________________________________________________________________________
Traceback (most recent call last):
File "C:\Users\Xuhan\.conda\envs\win_inductor_debug\lib\unittest\case.py", line 59, in testPartExecutor
yield
File "C:\Users\Xuhan\.conda\envs\win_inductor_debug\lib\unittest\case.py", line 591, in run
self._callTestMethod(testMethod)
File "C:\Users\Xuhan\.conda\envs\win_inductor_debug\lib\unittest\case.py", line 549, in _callTestMethod
method()
File "C:\Users\Xuhan\.conda\envs\win_inductor_debug\lib\site-packages\torch\testing\_internal\common_utils.py", line 3154, in wrapper
method(*args, **kwargs)
File "D:\xu_git\dnnl_cb\pytorch\test\inductor\test_torchinductor.py", line 13253, in new_test
return value(self)
File "C:\Users\Xuhan\.conda\envs\win_inductor_debug\lib\contextlib.py", line 79, in inner
return func(*args, **kwds)
File "D:\xu_git\dnnl_cb\pytorch\test\inductor\test_cpu_cpp_wrapper.py", line 119, in fn
_, code = test_torchinductor.run_and_get_cpp_code(
File "C:\Users\Xuhan\.conda\envs\win_inductor_debug\lib\site-packages\torch\_inductor\utils.py", line 2496, in run_and_get_cpp_code
result = fn(*args, **kwargs)
File "D:\xu_git\dnnl_cb\pytorch\test\inductor\test_torchinductor.py", line 13253, in new_test
return value(self)
File "D:\xu_git\dnnl_cb\pytorch\test\inductor\test_torchinductor.py", line 1402, in test_add_complex4
_, code = run_and_get_code(fn, x, y)
File "C:\Users\Xuhan\.conda\envs\win_inductor_debug\lib\site-packages\torch\_inductor\utils.py", line 1735, in run_and_get_code
result = fn(*args, **kwargs)
File "C:\Users\Xuhan\.conda\envs\win_inductor_debug\lib\site-packages\torch\_dynamo\eval_frame.py", line 675, in _fn
raise e.remove_dynamo_frames() from None # see TORCHDYNAMO_VERBOSE=1
File "C:\Users\Xuhan\.conda\envs\win_inductor_debug\lib\site-packages\torch\_inductor\compile_fx.py", line 860, in _compile_fx_inner
raise InductorError(e, currentframe()).with_traceback(
File "C:\Users\Xuhan\.conda\envs\win_inductor_debug\lib\site-packages\torch\_inductor\compile_fx.py", line 844, in _compile_fx_inner
mb_compiled_graph = fx_codegen_and_compile(
File "C:\Users\Xuhan\.conda\envs\win_inductor_debug\lib\site-packages\torch\_inductor\compile_fx.py", line 1453, in fx_codegen_and_compile
return scheme.codegen_and_compile(gm, example_inputs, inputs_to_check, graph_kwargs)
File "C:\Users\Xuhan\.conda\envs\win_inductor_debug\lib\site-packages\torch\_inductor\compile_fx.py", line 1340, in codegen_and_compile
compiled_module = graph.compile_to_module()
File "C:\Users\Xuhan\.conda\envs\win_inductor_debug\lib\site-packages\torch\_inductor\graph.py", line 2209, in compile_to_module
return self._compile_to_module()
File "C:\Users\Xuhan\.conda\envs\win_inductor_debug\lib\site-packages\torch\_inductor\graph.py", line 2256, in _compile_to_module
mod = PyCodeCache.load_by_key_path(
File "C:\Users\Xuhan\.conda\envs\win_inductor_debug\lib\site-packages\torch\_inductor\codecache.py", line 2998, in load_by_key_path
mod = _reload_python_module(key, path, set_sys_modules=in_toplevel)
File "C:\Users\Xuhan\.conda\envs\win_inductor_debug\lib\site-packages\torch\_inductor\runtime\compile_tasks.py", line 31, in _reload_python_module
exec(code, mod.__dict__, mod.__dict__)
File "C:\Users\Xuhan\AppData\Local\Temp\tmpjmp6rdlg\q5\cq54dbjs2iea7zsl6gnbt6kz3imssksjeq2obtywlnqixdarur7u.py", line 92, in <module>
File "C:\Users\Xuhan\.conda\envs\win_inductor_debug\lib\site-packages\torch\_inductor\codecache.py", line 2489, in load_pybinding
return cls.load_pybinding_async(*args, **kwargs)()
File "C:\Users\Xuhan\.conda\envs\win_inductor_debug\lib\site-packages\torch\_inductor\codecache.py", line 2481, in future
result = get_result()
File "C:\Users\Xuhan\.conda\envs\win_inductor_debug\lib\site-packages\torch\_inductor\codecache.py", line 2290, in load_fn
result = worker_fn()
File "C:\Users\Xuhan\.conda\envs\win_inductor_debug\lib\site-packages\torch\_inductor\codecache.py", line 2318, in _worker_compile_cpp
cpp_builder.build()
File "C:\Users\Xuhan\.conda\envs\win_inductor_debug\lib\site-packages\torch\_inductor\cpp_builder.py", line 1687, in build
run_compile_cmd(build_cmd, cwd=_build_tmp_dir)
File "C:\Users\Xuhan\.conda\envs\win_inductor_debug\lib\site-packages\torch\_inductor\cpp_builder.py", line 358, in run_compile_cmd
_run_compile_cmd(cmd_line, cwd)
File "C:\Users\Xuhan\.conda\envs\win_inductor_debug\lib\site-packages\torch\_inductor\cpp_builder.py", line 353, in _run_compile_cmd
raise exc.CppCompileError(cmd, output) from e
torch._inductor.exc.InductorError: CppCompileError: C++ compile error
Command:
cl /I C:/Users/Xuhan/.conda/envs/win_inductor_debug/Include /I C:/Users/Xuhan/.conda/envs/win_inductor_debug/lib/site-packages/torch/include /I C:/Users/Xuhan/.conda/envs/win_inductor_debug/lib/site-packages/torch/include/torch/csrc/api/include /D TORCH_INDUCTOR_CPP_WRAPPER /D STANDALONE_TORCH_HEADER /D C10_USING_CUSTOM_GENERATED_MACROS /D CPU_CAPABILITY_AVX512 /DLL /MD /O2 /std:c++20 /wd4819 /wd4251 /wd4244 /wd4267 /wd4275 /wd4018 /wd4190 /wd4624 /wd4067 /wd4068 /EHsc /openmp /openmp:experimental C:/Users/Xuhan/AppData/Local/Temp/tmpjmp6rdlg/ur/curqwv3aubebyu6fei3tpqfjgmtsai75cyg4a2t74vnw3k33f5l7.cpp /arch:AVX512 /FeC:/Users/Xuhan/AppData/Local/Temp/tmpjmp6rdlg/ur/curqwv3aubebyu6fei3tpqfjgmtsai75cyg4a2t74vnw3k33f5l7.pyd /LD /link /LIBPATH:C:/Users/Xuhan/.conda/envs/win_inductor_debug/libs /LIBPATH:C:/Users/Xuhan/.conda/envs/win_inductor_debug/lib/site-packages/torch/lib torch.lib torch_cpu.lib torch_python.lib sleef.lib
Output:
Microsoft (R) C/C++ Optimizing Compiler Version 19.43.34808 for x64
Copyright (C) Microsoft Corporation. All rights reserved.
cl : Command line warning D9025 : overriding '/openmp' with '/openmp:experimental'
curqwv3aubebyu6fei3tpqfjgmtsai75cyg4a2t74vnw3k33f5l7.cpp
C:/Users/Xuhan/AppData/Local/Temp/tmpjmp6rdlg/ur/curqwv3aubebyu6fei3tpqfjgmtsai75cyg4a2t74vnw3k33f5l7.cpp(35): error C3861: 'cpp_fused_add_0': identifier not found
Set TORCHDYNAMO_VERBOSE=1 for the internal stack trace (please do this especially if you're reporting a bug to PyTorch). For even more developer context, set TORCH_LOGS="+dynamo"
To execute this test, run the following from the base repo dir:
python test\inductor\test_cpu_cpp_wrapper.py TestCppWrapper.test_add_complex4_cpu_cpp_wrapper
This message can be suppressed by setting PYTORCH_PRINT_REPRO_ON_FAILURE=0
```
Generated code:
```cpp
#include <torch/csrc/inductor/cpp_wrapper/cpu.h>
extern "C" __declspec(dllexport);
CACHE_TORCH_DTYPE(float16);
CACHE_TORCH_DTYPE(complex32);
CACHE_TORCH_DEVICE(cpu);
void inductor_entry_impl(
AtenTensorHandle*
input_handles, // array of input AtenTensorHandle; handles
// are stolen; the array itself is borrowed
AtenTensorHandle*
output_handles // array for writing output AtenTensorHandle; handles
// will be stolen by the caller; the array itself is
// borrowed)
) {
py::gil_scoped_release release;
auto inputs = steal_from_raw_handles_to_raii_handles(input_handles, 2);
auto arg0_1 = std::move(inputs[0]);
auto arg1_1 = std::move(inputs[1]);
// Topologically Sorted Source Nodes: [d], Original ATen: [aten.add]
AtenTensorHandle buf1_handle;
AOTI_TORCH_ERROR_CODE_CHECK(aoti_torch_cpu_view_dtype(arg0_1, cached_torch_dtype_float16, &buf1_handle));
RAIIAtenTensorHandle buf1(buf1_handle);
// Topologically Sorted Source Nodes: [d], Original ATen: [aten.add]
AtenTensorHandle buf3_handle;
AOTI_TORCH_ERROR_CODE_CHECK(aoti_torch_cpu_view_dtype(arg1_1, cached_torch_dtype_float16, &buf3_handle));
RAIIAtenTensorHandle buf3(buf3_handle);
static constexpr int64_t int_array_2[] = {8LL, 2LL};
static constexpr int64_t int_array_3[] = {2LL, 1LL};
AtenTensorHandle buf4_handle;
AOTI_TORCH_ERROR_CODE_CHECK(aoti_torch_empty_strided(2, int_array_2, int_array_3, cached_torch_dtype_float16, cached_torch_device_type_cpu, 0, &buf4_handle));
RAIIAtenTensorHandle buf4(buf4_handle);
cpp_fused_add_0((const half*)(buf1.data_ptr()), (const half*)(buf3.data_ptr()), (half*)(buf4.data_ptr()));
arg0_1.reset();
arg1_1.reset();
buf1.reset();
buf3.reset();
// Topologically Sorted Source Nodes: [add_2], Original ATen: [aten.add]
static constexpr int64_t int_array_0[] = {16LL, };
static constexpr int64_t int_array_1[] = {1LL, };
static constexpr int64_t int_array_4[] = {16LL, };
static constexpr int64_t int_array_5[] = {1LL, };
AtenTensorHandle buf6_handle;
AOTI_TORCH_ERROR_CODE_CHECK(aoti_torch_cpu_view_dtype(wrap_with_raii_handle_if_needed(reinterpret_tensor_wrapper(buf4, 1, int_array_4, int_array_5, 0LL)), cached_torch_dtype_complex32, &buf6_handle));
RAIIAtenTensorHandle buf6(buf6_handle);
output_handles[0] = buf6.release();
} // inductor_entry_impl
#include "C:/Users/Xuhan/AppData/Local/Temp/tmpi9ivu60k/do/cdoggdcp7ux2jv5ebkajvacaprabp6b4h4m2o3zifjj6xwp2kz4n.h"
extern "C" __declspec(dllexport) void cpp_fused_add_0(const half* in_ptr0,
const half* in_ptr1,
half* out_ptr0)
{
#pragma omp parallel num_threads(8)
{
int tid = omp_get_thread_num();
{
#pragma omp for
for(int64_t x0=static_cast<int64_t>(0LL); x0<static_cast<int64_t>(16LL); x0+=static_cast<int64_t>(16LL))
{
{
if(C10_LIKELY(x0 >= static_cast<int64_t>(0) && x0 < static_cast<int64_t>(16LL)))
{
auto tmp0 = at::vec::Vectorized<half>::loadu(in_ptr0 + static_cast<int64_t>(x0), static_cast<int64_t>(16));
auto tmp2 = at::vec::Vectorized<half>::loadu(in_ptr1 + static_cast<int64_t>(x0), static_cast<int64_t>(16));
auto tmp1 = at::vec::convert<float>(tmp0);
auto tmp3 = at::vec::convert<float>(tmp2);
auto tmp4 = tmp1 + tmp3;
auto tmp5 = tmp4 + tmp4;
auto tmp6 = at::vec::convert<half>(tmp5);
tmp6.store(out_ptr0 + static_cast<int64_t>(x0), static_cast<int64_t>(16));
}
}
}
}
}
}
// Python bindings to call inductor_entry_cpp():
#define PY_SSIZE_T_CLEAN
#include <Python.h>
#include <sstream>
#include <cstdlib>
#ifndef _MSC_VER
#if __cplusplus < 202002L
// C++20 (earlier) code
// https://en.cppreference.com/w/cpp/language/attributes/likely
#define likely(x) __builtin_expect(!!(x), 1)
#define unlikely(x) __builtin_expect(!!(x), 0)
#endif
#else
#define likely(x) (x)
#define unlikely(x) (x)
#endif
// This is defined in guards.cpp so we don't need to import PyTorch headers that are slooow.
// We manually link it below to workaround issues with fbcode build.
static void* (*_torchinductor_pyobject_tensor_data_ptr)(PyObject* obj);
template <typename T> static inline T parse_arg(PyObject* args, size_t n) {
static_assert(std::is_pointer_v<T>, "arg type must be pointer or long");
return static_cast<T>(_torchinductor_pyobject_tensor_data_ptr(PyTuple_GET_ITEM(args, n)));
}
template <> inline int64_t parse_arg<int64_t>(PyObject* args, size_t n) {
auto result = PyLong_AsSsize_t(PyTuple_GET_ITEM(args, n));
if(unlikely(result == -1 && PyErr_Occurred()))
throw std::runtime_error("expected int arg");
return result;
}
template <> inline uintptr_t parse_arg<uintptr_t>(PyObject* args, size_t n) {
auto result = PyLong_AsVoidPtr(PyTuple_GET_ITEM(args, n));
if(unlikely(result == reinterpret_cast<void*>(-1) && PyErr_Occurred()))
throw std::runtime_error("expected int arg");
return reinterpret_cast<uintptr_t>(result);
}
#include <torch/csrc/inductor/aoti_torch/c/shim.h>
static inline std::vector<AtenTensorHandle> unpack_tensor_handle_list(PyObject* pyvec) {
std::vector<AtenTensorHandle> result;
size_t result_len = PyList_GET_SIZE(pyvec);
result.reserve(result_len);
for (size_t i = 0; i < result_len; i++) {
// AtenTensorHandle is essentially a pointer
void* elem = PyCapsule_GetPointer(PyList_GET_ITEM(pyvec, i), NULL);
result.push_back(reinterpret_cast<AtenTensorHandle>(elem));
}
return result;
}
static inline PyObject* pack_tensor_handle_list(const std::array<AtenTensorHandle, 1>& arr) {
PyObject* result = PyList_New(1);
for (size_t i = 0; i < 1; i++) {
PyObject *elem =
arr[i] == nullptr
? Py_None
// Store AtenTensorHandle as PyCapsulate
: PyCapsule_New(reinterpret_cast<void*>(arr[i]), NULL, NULL);
PyList_SET_ITEM(result, i, elem);
}
return result;
}
template <> inline std::vector<AtenTensorHandle> parse_arg<std::vector<AtenTensorHandle>>(PyObject* args, size_t n) {
return unpack_tensor_handle_list(PyTuple_GET_ITEM(args, n));
}
PyObject* inductor_entry_cpp(std::vector<AtenTensorHandle>&& input_handles) {
// For outputs, we only allocate an array to hold returned tensor handles,
// not the actual output tensor storage.
std::array<AtenTensorHandle, 1> output_handles{};
try {
inductor_entry_impl(input_handles.data(), output_handles.data());
if (PyErr_Occurred()) {
return nullptr;
}
return pack_tensor_handle_list(output_handles);
} catch(std::exception const& e) {
PyErr_SetString(PyExc_RuntimeError, e.what());
return nullptr;
} catch(...) {
PyErr_SetString(PyExc_RuntimeError, "unhandled error");
return nullptr;
}
}
static PyObject* inductor_entry_cpp_py(PyObject* self, PyObject* args) {
try {
if(unlikely(!PyTuple_CheckExact(args)))
throw std::runtime_error("tuple args required");
if(unlikely(PyTuple_GET_SIZE(args) != 1))
throw std::runtime_error("requires 1 args");
return inductor_entry_cpp(parse_arg<std::vector<AtenTensorHandle>>(args, 0));
} catch(std::exception const& e) {
PyErr_SetString(PyExc_RuntimeError, e.what());
return nullptr;
} catch(...) {
PyErr_SetString(PyExc_RuntimeError, "unhandled error");
return nullptr;
}
}
static PyMethodDef py_methods[] = {
{"inductor_entry_cpp", inductor_entry_cpp_py, METH_VARARGS, ""},
{NULL, NULL, 0, NULL}};
static struct PyModuleDef py_module =
{PyModuleDef_HEAD_INIT, "inductor_entry_cpp", NULL, -1, py_methods};
PyMODINIT_FUNC PyInit_inductor_entry_cpp(void) {
const char* str_addr = std::getenv("_TORCHINDUCTOR_PYOBJECT_TENSOR_DATA_PTR");
if(!str_addr) {
PyErr_SetString(PyExc_RuntimeError, "_TORCHINDUCTOR_PYOBJECT_TENSOR_DATA_PTR must be set");
return nullptr;
}
std::istringstream iss(str_addr);
uintptr_t addr = 0;
iss >> addr;
_torchinductor_pyobject_tensor_data_ptr =
reinterpret_cast<decltype(_torchinductor_pyobject_tensor_data_ptr)>(addr);
PyObject* module = PyModule_Create(&py_module);
if (module == NULL) {
return NULL;
}
#ifdef Py_GIL_DISABLED
PyUnstable_Module_SetGIL(module, Py_MOD_GIL_NOT_USED);
#endif
return module;
}
```
----
Debug log:
After manually adding a function declaration for `cpp_fused_add_0`, we can fix this issue.
Code change:
<img width="577" alt="Image" src="https://github.com/user-attachments/assets/6a7c3f11-bc38-4e63-92ce-ecf721d37ec0" />
Build log:
<img width="1900" alt="Image" src="https://github.com/user-attachments/assets/f7fa1aad-1219-4efa-8d97-b5efcdd8a407" />
### Versions
Main branch.
cc @peterjc123 @mszhanyi @skyline75489 @nbcsm @iremyux @Blackhex @chauhang @penguinwu @voznesenskym @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @chenyang78 @kadeng @muchulee8 @amjames @aakhundov
| true
|
3,021,460,946
|
Reapply "Rewrite the guts of torch::jit::Lexer to speed it up (#151850)"
|
swolchok
|
closed
|
[
"oncall: jit",
"Merged",
"Reverted",
"ciflow/trunk",
"release notes: jit",
"ci-no-td",
"ciflow/s390"
] | 7
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #152250
Almost-exact reapply of #151850 (adding minor reviewer nits). AFAICT it was reverted unnecessarily.
cc @EikanWang @jgong5 @wenzhe-nrv @sanchitintel
| true
|
3,021,431,771
|
[DTensor] [distributed]: Operator aten.masked_fill_.Scalar does not have a sharding strategy registered
|
dest1n1s
|
open
|
[
"oncall: distributed",
"triaged"
] | 1
|
NONE
|
### 🚀 The feature, motivation and pitch
Hi, currently the operator `aten.masked_fill_.Scalar` does not have a sharding strategy registered. It's a rather common operator that is called in the backward of `torch.norm`. As a workaround, I need to do:
```python
def norm(x: torch.Tensor, device_mesh: DeviceMesh | None):
if not isinstance(x, DTensor):
return torch.norm(x, p=2, dim=0, keepdim=False)
else:
x = x.to_local()
norm = torch.norm(x, p=2, dim=0, keepdim=False)
return DTensor.from_local(norm, device_mesh=device_mesh, placements=[Shard(0)])
```
It seems better to add a dedicated sharding strategy to this operator.
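For context, a minimal (untested) sketch of how this surfaces, assuming the script is launched with torchrun, a gloo process group, and the public DTensor API (torch >= 2.5); shapes and mesh layout are made up:
```python
import torch
import torch.distributed as dist
from torch.distributed.device_mesh import init_device_mesh
from torch.distributed.tensor import Shard, distribute_tensor

dist.init_process_group("gloo")  # torchrun provides MASTER_ADDR/MASTER_PORT etc.
mesh = init_device_mesh("cpu", (dist.get_world_size(),))

# Shard a parameter on dim 0, take a column-wise L2 norm, and backprop; the norm's
# backward hits aten.masked_fill_.Scalar, which has no sharding strategy registered.
w = distribute_tensor(torch.randn(64, 16), mesh, [Shard(0)]).requires_grad_()
torch.norm(w, p=2, dim=0, keepdim=False).sum().backward()

dist.destroy_process_group()
```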
### Alternatives
_No response_
### Additional context
_No response_
cc @H-Huang @awgu @wanchaol @fegin @fduwjj @wz337 @wconstab @d4l3k
| true
|
3,021,407,813
|
[refactor] refactor dense implementation of auto_functionalized_v2 for better clarity
|
ydwu4
|
closed
|
[
"Merged",
"ciflow/trunk",
"topic: not user facing"
] | 3
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #151067
* __->__ #152248
* #152247
* #152246
* #152245
* #152244
* #152073
* #152072
Abstracts away two helper functions (get_mutable_args_from_schema and _generate_new_op_kwargs_from_bases) to make the code better organized and more re-usable.
| true
|
3,021,407,720
|
[hop] make materialize_as_graph's include and exclude dispatch key set optional
|
ydwu4
|
closed
|
[
"Merged",
"topic: not user facing"
] | 2
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #151067
* #152248
* __->__ #152247
* #152246
* #152245
* #152244
* #152073
* #152072
| true
|
3,021,407,680
|
[hop][schema] allow adding kw_only info to schema argument
|
ydwu4
|
closed
|
[
"Merged",
"topic: not user facing",
"module: dynamo",
"ciflow/inductor"
] | 2
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #151067
* #152248
* #152247
* __->__ #152246
* #152245
* #152244
* #152073
* #152072
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @chauhang @amjames
| true
|
3,021,407,650
|
[hop][be] make check_input_alias_and_mutation_return_ouputs create new fake mode
|
ydwu4
|
closed
|
[
"Merged",
"topic: not user facing"
] | 2
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #151067
* #152248
* #152247
* #152246
* __->__ #152245
* #152244
* #152073
* #152072
| true
|
3,021,407,573
|
[HOP][be] make supports_input_mutation and aliasing a class field
|
ydwu4
|
closed
|
[
"Merged",
"ciflow/trunk",
"topic: not user facing",
"module: dynamo",
"ciflow/inductor"
] | 3
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #151067
* #152248
* #152247
* #152246
* #152245
* __->__ #152244
* #152073
* #152072
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @chauhang @amjames
| true
|
3,021,346,429
|
py_limited_api=True in PyTorch 2.7 will break the build of extensions
|
airMeng
|
closed
|
[
"module: cpp-extensions",
"triaged",
"module: regression"
] | 10
|
NONE
|
### 🐛 Describe the bug
When we compile a torch extension like SGLang does, we need to set ```py_limited_api=True``` to maintain compatibility across different Python versions, as SGLang does here: https://github.com/sgl-project/sglang/blob/main/sgl-kernel/setup_cpu.py#L75C1-L85C2
```python
ext_modules = [
Extension(
name="sgl_kernel.common_ops",
sources=sources,
include_dirs=include_dirs,
extra_compile_args=extra_compile_args,
libraries=libraries,
extra_link_args=extra_link_args,
py_limited_api=True,
),
]
```
However, after upgrading to PyTorch 2.7, the build fails with undeclared-identifier errors like
```shell
~/miniforge3/envs/sgl/lib/python3.9/site-packages/torch/include/pybind11/pytypes.h:947:13: error: use of undeclared identifier 'PyInstanceMethod_Check'
947 | if (PyInstanceMethod_Check(value.ptr())) {
| ^
~/miniforge3/envs/sgl/lib/python3.9/site-packages/torch/include/pybind11/pytypes.h:948:21: error: use of undeclared identifier 'PyInstanceMethod_GET_FUNCTION'
948 | value = PyInstanceMethod_GET_FUNCTION(value.ptr());
| ^
~/miniforge3/envs/sgl/lib/python3.9/site-packages/torch/include/pybind11/pytypes.h:949:20: error: use of undeclared identifier 'PyMethod_Check'
949 | } else if (PyMethod_Check(value.ptr())) {
| ^
~/miniforge3/envs/sgl/lib/python3.9/site-packages/torch/include/pybind11/pytypes.h:950:21: error: use of undeclared identifier 'PyMethod_GET_FUNCTION'
950 | value = PyMethod_GET_FUNCTION(value.ptr());
| ^
~/miniforge3/envs/sgl/lib/python3.9/site-packages/torch/include/pybind11/pytypes.h:1261:57: error: use of undeclared identifier 'PyListObject'
1261 | sequence_fast_readonly(handle obj, ssize_t n) : ptr(PySequence_Fast_ITEMS(obj.ptr()) + n) {}
```
But if we set ```py_limited_api=False```, the issue is gone.
### Versions
```shell
Collecting environment information...
PyTorch version: 2.7.0+cpu
Is debug build: False
CUDA used to build PyTorch: None
ROCM used to build PyTorch: N/A
OS: Ubuntu 22.04.5 LTS (x86_64)
GCC version: (Ubuntu 11.4.0-1ubuntu1~22.04) 11.4.0
Clang version: Could not collect
CMake version: version 3.31.2
Libc version: glibc-2.35
Python version: 3.10.0 | packaged by conda-forge | (default, Nov 20 2021, 02:24:10) [GCC 9.4.0] (64-bit runtime)
Python platform: Linux-5.15.0-73-generic-x86_64-with-glibc2.35
Is CUDA available: False
CUDA runtime version: No CUDA
CUDA_MODULE_LOADING set to: N/A
GPU models and configuration: No CUDA
Nvidia driver version: No CUDA
cuDNN version: No CUDA
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 52 bits physical, 57 bits virtual
Byte Order: Little Endian
CPU(s): 192
On-line CPU(s) list: 0-191
Vendor ID: GenuineIntel
Model name: Intel(R) Xeon(R) Platinum 8468V
CPU family: 6
Model: 143
Thread(s) per core: 2
Core(s) per socket: 48
Socket(s): 2
Stepping: 8
Frequency boost: enabled
CPU max MHz: 2401.0000
CPU min MHz: 800.0000
BogoMIPS: 4800.00
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc art arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc cpuid aperfmperf tsc_known_freq pni pclmulqdq dtes64 monitor ds_cpl smx est tm2 ssse3 sdbg fma cx16 xtpr pdcm pcid dca sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm 3dnowprefetch cpuid_fault epb cat_l3 cat_l2 cdp_l3 invpcid_single intel_ppin cdp_l2 ssbd mba ibrs ibpb stibp ibrs_enhanced fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid cqm rdt_a avx512f avx512dq rdseed adx smap avx512ifma clflushopt clwb intel_pt avx512cd sha_ni avx512bw avx512vl xsaveopt xsavec xgetbv1 xsaves cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local split_lock_detect avx_vnni avx512_bf16 wbnoinvd dtherm ida arat pln pts avx512vbmi umip pku ospke waitpkg avx512_vbmi2 gfni vaes vpclmulqdq avx512_vnni avx512_bitalg tme avx512_vpopcntdq la57 rdpid bus_lock_detect cldemote movdiri movdir64b enqcmd fsrm md_clear serialize tsxldtrk pconfig arch_lbr amx_bf16 avx512_fp16 amx_tile amx_int8 flush_l1d arch_capabilities
L1d cache: 4.5 MiB (96 instances)
L1i cache: 3 MiB (96 instances)
L2 cache: 192 MiB (96 instances)
L3 cache: 195 MiB (2 instances)
NUMA node(s): 2
NUMA node0 CPU(s): 0-47,96-143
NUMA node1 CPU(s): 48-95,144-191
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Not affected
Vulnerability Retbleed: Not affected
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl and seccomp
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Enhanced IBRS, IBPB conditional, RSB filling, PBRSB-eIBRS SW sequence
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Versions of relevant libraries:
[pip3] numpy==1.26.4
[pip3] torch==2.6.0+cpu
[pip3] torchao==0.9.0
[pip3] torchaudio==2.6.0a0+c670ad8
[pip3] torchdata==0.11.0
[pip3] torchtext==0.16.0a0+b0ebddc
[pip3] torchvision==0.21.0+cpu
[pip3] triton==3.1.0
[conda] numpy 1.26.4 pypi_0 pypi
[conda] torch 2.7.0+cpu pypi_0 pypi
[conda] torchao 0.9.0 pypi_0 pypi
[conda] torchaudio 2.6.0a0+c670ad8 pypi_0 pypi
[conda] torchdata 0.11.0 pypi_0 pypi
[conda] torchtext 0.16.0a0+b0ebddc pypi_0 pypi
[conda] torchvision 0.21.0+cpu pypi_0 pypi
[conda] triton 3.1.0 pypi_0 pypi
```
@mingfeima @EikanWang for awareness
cc @malfet @zou3519 @xmfan
| true
|
3,021,300,247
|
Enable 8byte vector loading for fp16/bf16
|
jeetkanjani7
|
open
|
[
"fb-exported",
"release notes: cuda"
] | 6
|
CONTRIBUTOR
|
Test Plan:
Tested via local benchmarks and e2e runs. The bandwidth improves by 2.5x on an A100 80GB GPU for 2-byte data types (fp16, bf16). There is also a significant improvement in e2e QPS.
{F1977455759}
Differential Revision: D73225699
| true
|
3,021,290,251
|
[export] Preserve custom metadata for tensor constants
|
yiming0416
|
closed
|
[
"fb-exported",
"Merged",
"ciflow/trunk",
"ciflow/inductor",
"release notes: export"
] | 9
|
CONTRIBUTOR
|
Summary:
Fixes https://github.com/pytorch/pytorch/issues/151476
The `custom_meta` collected from `mod` has keys that follow the names of nodes in `mod`, which are inconsistent with the node names after the naming pass. For example, a constant `b` will become `c_b`.
Test Plan: buck2 run caffe2/test:test_export -- -r test_run_decompositions_keep_tensor_constant_metadata
Differential Revision: D73703068
| true
|
3,021,289,026
|
Updates to build on Noble (Ubuntu24.04) and py3.12
|
jithunnair-amd
|
open
|
[
"module: rocm",
"open source",
"topic: not user facing"
] | 1
|
COLLABORATOR
|
TODO:
- [ ] Add a build job for Ubuntu24.04 + py3.12
cc @jeffdaily @sunway513 @pruthvistony @ROCmSupport @dllehr-amd @jataylo @hongxiayang @naromero77amd
| true
|
3,021,274,866
|
Fix an incorrect link markup
|
koyuki7w
|
closed
|
[
"oncall: distributed",
"open source",
"Merged",
"ciflow/trunk",
"release notes: distributed (ddp)"
] | 3
|
CONTRIBUTOR
|
Remove extra whitespace so the link works correctly.
cc @H-Huang @awgu @wanchaol @fegin @fduwjj @wz337 @wconstab @d4l3k
| true
|
3,021,269,788
|
[executorch hash update] update the pinned executorch hash
|
pytorchupdatebot
|
open
|
[
"open source",
"ciflow/trunk",
"topic: not user facing",
"ciflow/inductor"
] | 40
|
COLLABORATOR
|
This PR is auto-generated nightly by [this action](https://github.com/pytorch/pytorch/blob/main/.github/workflows/nightly.yml).
Update the pinned executorch hash.
| true
|