| id (int64, 2.74B–3.05B) | title (string, 1–255 chars) | user (string, 2–26 chars) | state (string, 2 classes) | labels (list, 0–24 items) | comments (int64, 0–206) | author_association (string, 4 classes) | body (string, 7–62.5k chars, ⌀ = null allowed) | is_title (bool, 1 class) |
|---|---|---|---|---|---|---|---|---|
3,034,855,767
|
static cuda launcher causes `RuntimeError: CUDA driver error: invalid device context` in torchtitan CI
|
bdhirsh
|
closed
|
[
"oncall: pt2",
"module: inductor",
"compile-cache"
] | 1
|
CONTRIBUTOR
|
Here's a recent torchtitan CI job failure: https://github.com/pytorch/torchtitan/actions/runs/14691831856/job/41228192364#step:14:617
The repro command from torchtitan, according to @tianyu-l, is:
```
./run_train.sh --training.compile --activation_checkpoint.mode selective --activation_checkpoint.selective_ac_option op
```
I confirmed that manually turning off the static cuda launcher in torchtitan fixes CI: https://github.com/pytorch/torchtitan/pull/1156/files
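For reference, a hedged sketch of what the workaround amounts to on the PyTorch side; the exact inductor config knob name (`use_static_cuda_launcher`) is an assumption here, not taken from the issue, so it is guarded with `hasattr`:
```python
import torch._inductor.config as inductor_config

# Assumed knob name; verify against torch/_inductor/config.py before relying on it.
if hasattr(inductor_config, "use_static_cuda_launcher"):
    inductor_config.use_static_cuda_launcher = False  # fall back to the regular Triton launcher
```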
cc @chauhang @penguinwu @voznesenskym @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @chenyang78 @kadeng @muchulee8 @amjames @aakhundov @oulgen @jamesjwu @masnesral
| true
|
3,034,853,080
|
[dynamic shapes] use try-catch instead of guard_or_true for reshape_view_helper
|
pianpwk
|
closed
|
[
"fb-exported",
"Merged",
"ciflow/trunk",
"release notes: fx",
"ciflow/pull"
] | 17
|
CONTRIBUTOR
|
Test Plan: test_export
Differential Revision: D74033649
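Since the body is terse, here is an illustrative sketch of the general pattern the title names: attempt the operation and fall back in an `except` branch instead of deciding up front with a `guard_or_true`-style query. This is not the actual `reshape_view_helper` code.
```python
import torch

# Illustrative only: try the cheap view first, fall back to a copying reshape.
def view_or_copy(t: torch.Tensor, shape) -> torch.Tensor:
    try:
        return t.view(shape)
    except RuntimeError:          # strides don't admit a view
        return t.reshape(shape)   # copies instead

x = torch.randn(4, 6).t()         # non-contiguous, so view(2, 12) raises
print(view_or_copy(x, (2, 12)).shape)
```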
| true
|
3,034,823,243
|
[export] Add draft-export docs
|
angelayi
|
closed
|
[
"Merged",
"ciflow/trunk",
"release notes: export"
] | 4
|
CONTRIBUTOR
|
Sample page: https://docs-preview.pytorch.org/pytorch/pytorch/152637/draft_export.html
cc @avikchaudhuri @gmagogsfm @zhxchen17 @tugsbayasgalan @suo @ydwu4 @penguinwu
| true
|
3,034,810,915
|
Switch to metal kernel for mul
|
skotapati
|
closed
|
[
"open source",
"release notes: mps",
"ciflow/mps"
] | 2
|
COLLABORATOR
|
Draft PR
| true
|
3,034,800,361
|
TestFlexAttentionCUDA.test_GQA_score_mod7_cuda_float16 fails on h100
|
BoyuanFeng
|
open
|
[
"triaged",
"oncall: pt2",
"module: higher order operators",
"module: pt2-dispatcher",
"module: flex attention"
] | 0
|
CONTRIBUTOR
|
Command to repro (this fails on H100):
```
python test/inductor/test_flex_attention.py TestFlexAttentionCUDA.test_GQA_score_mod7_cuda_float16
```
Error:
```
File "/data/users/boyuan/pytorch/test/inductor/test_flex_attention.py", line 412, in _check_equal
self.assertTrue(False, "Output/Grad with NaN")
AssertionError: False is not true : Output/Grad with NaN
```
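For context, a minimal sketch of the kind of check the failing assertion performs (not the test's actual `_check_equal` helper):
```python
import torch

def has_nan(*tensors) -> bool:
    # True if any provided tensor contains a NaN; None entries are skipped.
    return any(t is not None and torch.isnan(t).any().item() for t in tensors)
```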
cc @chauhang @penguinwu @zou3519 @ydwu4 @bdhirsh @Chillee @drisspg @yanboliang
| true
|
3,034,780,601
|
Incorrect strides for `nonzero_static` compilation
|
GMNGeoffrey
|
open
|
[
"triaged",
"oncall: pt2",
"module: fakeTensor",
"module: pt2-dispatcher"
] | 0
|
NONE
|
### 🐛 Describe the bug
I am getting an output from `nonzero_static` with incorrect strides after being `torch.compile`'d. In older versions of torch, this manifests as some sort of runtime failure (I first encountered it as a GPU crash, which wasn't fun). In the latest stable and nightly versions, I'm instead seeing a somewhat helpful error message that points us in the right direction.
The failing minimal reproducer code is
```python
import torch
def test_nonzero_static_miscompile(device):
L = 5
eye = torch.eye(L, device=device)
nonzero_static = eye.nonzero_static(size=L)
i_static = nonzero_static[:, 0]
j_static = nonzero_static[:, 1]
nonzero = eye.nonzero()
print(f"{nonzero_static.shape=} {nonzero_static=}")
print(f"{nonzero.shape=} {nonzero=}")
assert torch.equal(nonzero_static, nonzero), f"{nonzero_static=}\n{nonzero=}"
i = nonzero[:, 0]
j = nonzero[:, 1]
print(f"{i_static.shape=} {i_static=}")
print(f"{j_static.shape=} {j_static=}")
print(f"{i.shape=} {i=}")
print(f"{j.shape=} {j=}")
assert torch.equal(i_static, i), f"{i_static=}\n{i=}"
assert torch.equal(j_static, j), f"{j_static=}\n{j=}"
def main():
torch.compile(test_nonzero_static_miscompile, backend="inductor")("cuda")
if __name__ == "__main__":
main()
```
It only happens with the inductor backend (not in eager or cudagraphs; I didn't try the external backends that have to be installed separately). It happens independently of the `mode` argument, and only when compiling for the 'cuda' device.
This fails on all the versions of torch I've tried, which include the latest nightly for CUDA 12.6 (2.8.0.dev20250501+cu126), the latest stable for CUDA 12.6 (2.7.0+cu126), a custom build from NVIDIA (2.7.0a0+7c8ec84dab.nv25.03), and a custom ROCm build from AMD (2.6.0a0+gitfc899bf).
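As a point of comparison, an eager-mode sanity check (a sketch I added, not from the original report) showing the output layout that inductor's `assert_size_stride` expects in the log below:
```python
import torch

eye = torch.eye(5, device="cuda")
out = eye.nonzero_static(size=5)
# Expect a contiguous (5, 2) result with strides (2, 1) in eager mode.
print(out.shape, out.stride(), out.is_contiguous())
```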
### Error logs
On newer torch versions, I get this error
<details><summary><code>AssertionError: expected size 5==5, stride 1==2 at dim=0; expected size 2==2, stride 5==1 at dim=1</code></summary>
```
Traceback (most recent call last):
File "/home/gcmn/src/torch_nonzero_static_miscompile/nonzero_static_miscompile.py", line 36, in <module>
main()
File "/home/gcmn/src/torch_nonzero_static_miscompile/nonzero_static_miscompile.py", line 33, in main
torch.compile(test_nonzero_static_miscompile, backend="inductor")('cuda')
File "/home/gcmn/src/torch_nonzero_static_miscompile/torch-nightly.venv/lib/python3.11/site-packages/torch/_dynamo/eval_frame.py", line 663, in _fn
return fn(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^
File "/home/gcmn/src/torch_nonzero_static_miscompile/nonzero_static_miscompile.py", line 3, in test_nonzero_static_miscompile
def test_nonzero_static_miscompile(device):
File "/home/gcmn/src/torch_nonzero_static_miscompile/torch-nightly.venv/lib/python3.11/site-packages/torch/_dynamo/eval_frame.py", line 857, in _fn
return fn(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^
File "/home/gcmn/src/torch_nonzero_static_miscompile/torch-nightly.venv/lib/python3.11/site-packages/torch/_functorch/aot_autograd.py", line 1221, in forward
return compiled_fn(full_args)
^^^^^^^^^^^^^^^^^^^^^^
File "/home/gcmn/src/torch_nonzero_static_miscompile/torch-nightly.venv/lib/python3.11/site-packages/torch/_functorch/_aot_autograd/runtime_wrappers.py", line 330, in runtime_wrapper
all_outs = call_func_at_runtime_with_args(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/gcmn/src/torch_nonzero_static_miscompile/torch-nightly.venv/lib/python3.11/site-packages/torch/_functorch/_aot_autograd/utils.py", line 126, in call_func_at_runtime_with_args
out = normalize_as_list(f(args))
^^^^^^^
File "/home/gcmn/src/torch_nonzero_static_miscompile/torch-nightly.venv/lib/python3.11/site-packages/torch/_functorch/_aot_autograd/runtime_wrappers.py", line 497, in wrapper
return compiled_fn(runtime_args)
^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/gcmn/src/torch_nonzero_static_miscompile/torch-nightly.venv/lib/python3.11/site-packages/torch/_inductor/output_code.py", line 584, in __call__
return self.current_callable(inputs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/tmp/torchinductor_gcmn/a4/ca4peqblkoanf3bgsmuoqay74uq2wqx5znkpvunwb2rvklycj435.py", line 95, in call
assert_size_stride(buf2, (5, 2), (2, 1))
AssertionError: expected size 5==5, stride 1==2 at dim=0; expected size 2==2, stride 5==1 at dim=1
This error most often comes from a incorrect fake (aka meta) kernel for a custom op.
Use torch.library.opcheck to test your custom op.
See https://pytorch.org/docs/stable/library.html#torch.library.opcheck
```
</details>
On older versions, it just hits my own assertion errors. Notably, the errors occur when comparing slices, not when comparing the full output tensors.
<details><summary><code>AssertionError: i_static=tensor([0, 2, 4, 1, 3], device='cuda:0') i=tensor([0, 1, 2, 3, 4], device='cuda:0')</code></summary>
```
nonzero_static.shape=torch.Size([5, 2]) nonzero_static=tensor([[0, 0],
[1, 1],
[2, 2],
[3, 3],
[4, 4]], device='cuda:0')
nonzero.shape=torch.Size([5, 2]) nonzero=tensor([[0, 0],
[1, 1],
[2, 2],
[3, 3],
[4, 4]], device='cuda:0')
i_static.shape=torch.Size([5]) i_static=tensor([0, 2, 4, 1, 3], device='cuda:0')
j_static.shape=torch.Size([5]) j_static=tensor([1, 3, 0, 2, 4], device='cuda:0')
i.shape=torch.Size([5]) i=tensor([0, 1, 2, 3, 4], device='cuda:0')
j.shape=torch.Size([5]) j=tensor([0, 1, 2, 3, 4], device='cuda:0')
Traceback (most recent call last):
File "/home/gcmn/src/torch_nonzero_static_miscompile/nonzero_static_miscompile.py", line 36, in <module>
main()
File "/home/gcmn/src/torch_nonzero_static_miscompile/nonzero_static_miscompile.py", line 33, in main
torch.compile(test_nonzero_static_miscompile, backend="inductor")('cuda')
File "/usr/local/lib/python3.12/dist-packages/torch/_dynamo/eval_frame.py", line 570, in _fn
return fn(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^
File "/home/gcmn/src/torch_nonzero_static_miscompile/nonzero_static_miscompile.py", line 3, in test_nonzero_static_miscompile
def test_nonzero_static_miscompile(device):
File "/home/gcmn/src/torch_nonzero_static_miscompile/nonzero_static_miscompile.py", line 27, in torch_dynamo_resume_in_test_nonzero_static_miscompile_at_10
assert torch.equal(i_static, i), f"{i_static=}\n{i=}"
^^^^^^^^^^^^^^^^^^^^^^^^
AssertionError: i_static=tensor([0, 2, 4, 1, 3], device='cuda:0')
i=tensor([0, 1, 2, 3, 4], device='cuda:0')
```
</details>
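One possible reading of the `stride 1==2 ... stride 5==1` message above (an assumption about the mechanism, not confirmed in the issue) is that the same buffer is being read with transposed strides, which is exactly the kind of mismatch that makes slices diverge:
```python
import torch

buf = torch.arange(10)
expected = buf.as_strided((5, 2), (2, 1))  # expected layout: [:, 0] -> 0, 2, 4, 6, 8
actual = buf.as_strided((5, 2), (1, 5))    # transposed layout: [:, 0] -> 0, 1, 2, 3, 4
print(expected[:, 0], actual[:, 0])
```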
### Versions
<details><summary>Torch nightly (2.8.0.dev20250501+cu126)</summary>
Collecting environment information...
PyTorch version: 2.8.0.dev20250501+cu126
Is debug build: False
CUDA used to build PyTorch: 12.6
ROCM used to build PyTorch: N/A
OS: Debian GNU/Linux 12 (bookworm) (x86_64)
GCC version: (Debian 12.2.0-14) 12.2.0
Clang version: Could not collect
CMake version: version 3.31.0
Libc version: glibc-2.36
Python version: 3.11.2 (main, Nov 30 2024, 21:22:50) [GCC 12.2.0] (64-bit runtime)
Python platform: Linux-6.1.0-27-cloud-amd64-x86_64-with-glibc2.36
Is CUDA available: True
CUDA runtime version: 12.5.40
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration: GPU 0: NVIDIA A100-SXM4-40GB
Nvidia driver version: 550.90.07
cuDNN version: Could not collect
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 46 bits physical, 48 bits virtual
Byte Order: Little Endian
CPU(s): 12
On-line CPU(s) list: 0-11
Vendor ID: GenuineIntel
Model name: Intel(R) Xeon(R) CPU @ 2.20GHz
CPU family: 6
Model: 85
Thread(s) per core: 2
Core(s) per socket: 6
Socket(s): 1
Stepping: 7
BogoMIPS: 4400.35
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ss ht syscall nx pdpe1gb rdtscp lm constant_tsc rep_good nopl xtopology nonstop_tsc cpuid tsc_known_freq pni pclmulqdq ssse3 fma cx16 pcid sse4_1 sse4_2 x2apic movbe popcnt aes xsave avx f16c rdrand hypervisor lahf_lm abm 3dnowprefetch invpcid_single ssbd ibrs ibpb stibp ibrs_enhanced fsgsbase tsc_adjust bmi1 hle avx2 smep bmi2 erms invpcid rtm mpx avx512f avx512dq rdseed adx smap clflushopt clwb avx512cd avx512bw avx512vl xsaveopt xsavec xgetbv1 xsaves arat avx512_vnni md_clear arch_capabilities
Hypervisor vendor: KVM
Virtualization type: full
L1d cache: 192 KiB (6 instances)
L1i cache: 192 KiB (6 instances)
L2 cache: 6 MiB (6 instances)
L3 cache: 38.5 MiB (1 instance)
NUMA node(s): 1
NUMA node0 CPU(s): 0-11
Vulnerability Gather data sampling: Not affected
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Vulnerable: Clear CPU buffers attempted, no microcode; SMT Host state unknown
Vulnerability Reg file data sampling: Not affected
Vulnerability Retbleed: Mitigation; Enhanced IBRS
Vulnerability Spec rstack overflow: Not affected
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Enhanced / Automatic IBRS; IBPB conditional; RSB filling; PBRSB-eIBRS SW sequence; BHI SW loop, KVM SW loop
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Vulnerable: Clear CPU buffers attempted, no microcode; SMT Host state unknown
Versions of relevant libraries:
[pip3] numpy==2.1.2
[pip3] nvidia-cublas-cu12==12.6.4.1
[pip3] nvidia-cuda-cupti-cu12==12.6.80
[pip3] nvidia-cuda-nvrtc-cu12==12.6.77
[pip3] nvidia-cuda-runtime-cu12==12.6.77
[pip3] nvidia-cudnn-cu12==9.5.1.17
[pip3] nvidia-cufft-cu12==11.3.0.4
[pip3] nvidia-curand-cu12==10.3.7.77
[pip3] nvidia-cusolver-cu12==11.7.1.2
[pip3] nvidia-cusparse-cu12==12.5.4.2
[pip3] nvidia-cusparselt-cu12==0.6.3
[pip3] nvidia-nccl-cu12==2.26.2
[pip3] nvidia-nvjitlink-cu12==12.6.85
[pip3] nvidia-nvtx-cu12==12.6.77
[pip3] pytorch-triton==3.3.0+git96316ce5
[pip3] torch==2.8.0.dev20250501+cu126
[pip3] torchaudio==2.6.0.dev20250501+cu126
[pip3] torchvision==0.22.0.dev20250501+cu126
[conda] Could not collect
</details>
<details><summary>Torch stable (2.7.0+cu126)</summary>
Collecting environment information...
PyTorch version: 2.7.0+cu126
Is debug build: False
CUDA used to build PyTorch: 12.6
ROCM used to build PyTorch: N/A
OS: Debian GNU/Linux 12 (bookworm) (x86_64)
GCC version: (Debian 12.2.0-14) 12.2.0
Clang version: Could not collect
CMake version: version 3.31.0
Libc version: glibc-2.36
Python version: 3.11.2 (main, Nov 30 2024, 21:22:50) [GCC 12.2.0] (64-bit runtime)
Python platform: Linux-6.1.0-27-cloud-amd64-x86_64-with-glibc2.36
Is CUDA available: True
CUDA runtime version: 12.5.40
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration: GPU 0: NVIDIA A100-SXM4-40GB
Nvidia driver version: 550.90.07
cuDNN version: Could not collect
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 46 bits physical, 48 bits virtual
Byte Order: Little Endian
CPU(s): 12
On-line CPU(s) list: 0-11
Vendor ID: GenuineIntel
Model name: Intel(R) Xeon(R) CPU @ 2.20GHz
CPU family: 6
Model: 85
Thread(s) per core: 2
Core(s) per socket: 6
Socket(s): 1
Stepping: 7
BogoMIPS: 4400.35
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ss ht syscall nx pdpe1gb rdtscp lm constant_tsc rep_good nopl xtopology nonstop_tsc cpuid tsc_known_freq pni pclmulqdq ssse3 fma cx16 pcid sse4_1 sse4_2 x2apic movbe popcnt aes xsave avx f16c rdrand hypervisor lahf_lm abm 3dnowprefetch invpcid_single ssbd ibrs ibpb stibp ibrs_enhanced fsgsbase tsc_adjust bmi1 hle avx2 smep bmi2 erms invpcid rtm mpx avx512f avx512dq rdseed adx smap clflushopt clwb avx512cd avx512bw avx512vl xsaveopt xsavec xgetbv1 xsaves arat avx512_vnni md_clear arch_capabilities
Hypervisor vendor: KVM
Virtualization type: full
L1d cache: 192 KiB (6 instances)
L1i cache: 192 KiB (6 instances)
L2 cache: 6 MiB (6 instances)
L3 cache: 38.5 MiB (1 instance)
NUMA node(s): 1
NUMA node0 CPU(s): 0-11
Vulnerability Gather data sampling: Not affected
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Vulnerable: Clear CPU buffers attempted, no microcode; SMT Host state unknown
Vulnerability Reg file data sampling: Not affected
Vulnerability Retbleed: Mitigation; Enhanced IBRS
Vulnerability Spec rstack overflow: Not affected
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Enhanced / Automatic IBRS; IBPB conditional; RSB filling; PBRSB-eIBRS SW sequence; BHI SW loop, KVM SW loop
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Vulnerable: Clear CPU buffers attempted, no microcode; SMT Host state unknown
Versions of relevant libraries:
[pip3] numpy==2.2.5
[pip3] nvidia-cublas-cu12==12.6.4.1
[pip3] nvidia-cuda-cupti-cu12==12.6.80
[pip3] nvidia-cuda-nvrtc-cu12==12.6.77
[pip3] nvidia-cuda-runtime-cu12==12.6.77
[pip3] nvidia-cudnn-cu12==9.5.1.17
[pip3] nvidia-cufft-cu12==11.3.0.4
[pip3] nvidia-curand-cu12==10.3.7.77
[pip3] nvidia-cusolver-cu12==11.7.1.2
[pip3] nvidia-cusparse-cu12==12.5.4.2
[pip3] nvidia-cusparselt-cu12==0.6.3
[pip3] nvidia-nccl-cu12==2.26.2
[pip3] nvidia-nvjitlink-cu12==12.6.85
[pip3] nvidia-nvtx-cu12==12.6.77
[pip3] torch==2.7.0
[pip3] torchaudio==2.7.0
[pip3] torchvision==0.22.0
[pip3] triton==3.3.0
[conda] Could not collect
</details>
Since this is reproducible on stable and nightly, I didn't include full details from all the other versions I tried.
cc @chauhang @penguinwu @eellison @zou3519 @bdhirsh
| true
|
3,034,772,647
|
[ca] wrap flex attention tests with compiled autograd
|
xmfan
|
open
|
[
"module: inductor",
"ciflow/inductor"
] | 2
|
MEMBER
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #152633
* #152119
* #151962
* #151731
* #151860
* #149707
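For context, a minimal sketch of running a backward pass under compiled autograd using the `torch._dynamo.config.compiled_autograd` toggle (the same flag set in the modded-nanogpt script further down in this table); wiring this around the flex attention tests is what the PR adds:
```python
import torch

torch._dynamo.config.compiled_autograd = True  # capture backward with compiled autograd

fn = torch.compile(lambda x: (x * x).sum())
x = torch.randn(4, requires_grad=True)
fn(x).backward()
print(x.grad)  # should equal 2 * x
```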
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
| true
|
3,034,767,709
|
DISABLED test_torchvision_models_efficientnet_v2_l (__main__.TestVisionTracing)
|
pytorch-bot[bot]
|
open
|
[
"triaged",
"module: flaky-tests",
"skipped",
"module: fx"
] | 1
|
NONE
|
Platforms: asan, linux, slow
This test was disabled because it is failing in CI. See [recent examples](https://hud.pytorch.org/flakytest?name=test_torchvision_models_efficientnet_v2_l&suite=TestVisionTracing&limit=100) and the most recent trunk [workflow logs](https://github.com/pytorch/pytorch/runs/41498178561).
Over the past 3 hours, it has been determined flaky in 3 workflow(s) with 3 failures and 3 successes.
**Debugging instructions (after clicking on the recent samples link):**
DO NOT ASSUME THINGS ARE OKAY IF THE CI IS GREEN. We now shield flaky tests from developers, so CI will be green, but it will be harder to parse the logs.
To find relevant log snippets:
1. Click on the workflow logs linked above
2. Click on the Test step of the job so that it is expanded. Otherwise, the grepping will not work.
3. Grep for `test_torchvision_models_efficientnet_v2_l`
4. There should be several instances run (as flaky tests are rerun in CI) from which you can study the logs.
<details><summary>Sample error message</summary>
```
Traceback (most recent call last):
File "/var/lib/jenkins/workspace/test/test_fx.py", line 4953, in run_test
script = torch.jit.script(graph)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/jit/_script.py", line 1443, in script
ret = _script_impl(
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/jit/_script.py", line 1152, in _script_impl
return torch.jit._recursive.create_script_module(
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/jit/_recursive.py", line 557, in create_script_module
return create_script_module_impl(nn_module, concrete_type, stubs_fn)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/jit/_recursive.py", line 626, in create_script_module_impl
script_module = torch.jit.RecursiveScriptModule._construct(cpp_module, init_fn)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/jit/_script.py", line 653, in _construct
init_fn(script_module)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/jit/_recursive.py", line 602, in init_fn
scripted = create_script_module_impl(
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/jit/_recursive.py", line 626, in create_script_module_impl
script_module = torch.jit.RecursiveScriptModule._construct(cpp_module, init_fn)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/jit/_script.py", line 653, in _construct
init_fn(script_module)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/jit/_recursive.py", line 602, in init_fn
scripted = create_script_module_impl(
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/jit/_recursive.py", line 626, in create_script_module_impl
script_module = torch.jit.RecursiveScriptModule._construct(cpp_module, init_fn)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/jit/_script.py", line 653, in _construct
init_fn(script_module)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/jit/_recursive.py", line 602, in init_fn
scripted = create_script_module_impl(
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/jit/_recursive.py", line 626, in create_script_module_impl
script_module = torch.jit.RecursiveScriptModule._construct(cpp_module, init_fn)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/jit/_script.py", line 653, in _construct
init_fn(script_module)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/jit/_recursive.py", line 602, in init_fn
scripted = create_script_module_impl(
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/jit/_recursive.py", line 626, in create_script_module_impl
script_module = torch.jit.RecursiveScriptModule._construct(cpp_module, init_fn)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/jit/_script.py", line 653, in _construct
init_fn(script_module)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/jit/_recursive.py", line 602, in init_fn
scripted = create_script_module_impl(
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/jit/_recursive.py", line 626, in create_script_module_impl
script_module = torch.jit.RecursiveScriptModule._construct(cpp_module, init_fn)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/jit/_script.py", line 653, in _construct
init_fn(script_module)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/jit/_recursive.py", line 602, in init_fn
scripted = create_script_module_impl(
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/jit/_recursive.py", line 630, in create_script_module_impl
create_methods_and_properties_from_stubs(
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/jit/_recursive.py", line 466, in create_methods_and_properties_from_stubs
concrete_type._create_methods_and_properties(
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/jit/_recursive.py", line 1027, in compile_unbound_method
create_methods_and_properties_from_stubs(concrete_type, (stub,), ())
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/jit/_recursive.py", line 466, in create_methods_and_properties_from_stubs
concrete_type._create_methods_and_properties(
RuntimeError:
builtin cannot be used as a value:
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/nn/modules/conv.py", line 540
return F.conv2d(
F.pad(
input, self._reversed_padding_repeated_twice, mode=self.padding_mode
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ <--- HERE
),
weight,
'Conv2d._conv_forward' is being compiled since it was called from 'Conv2d.forward'
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/nn/modules/conv.py", line 554
def forward(self, input: Tensor) -> Tensor:
return self._conv_forward(input, self.weight, self.bias)
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ <--- HERE
To execute this test, run the following from the base repo dir:
PYTORCH_TEST_WITH_ASAN=1 PYTORCH_TEST_WITH_UBSAN=1 PYTORCH_TEST_WITH_SLOW=1 PYTORCH_TEST_SKIP_FAST=1 python test/test_fx.py TestVisionTracing.test_torchvision_models_efficientnet_v2_l
This message can be suppressed by setting PYTORCH_PRINT_REPRO_ON_FAILURE=0
```
</details>
Test file path: `test_fx.py`
cc @clee2000 @ezyang @SherlockNoMad @EikanWang @jgong5 @wenzhe-nrv
| true
|
3,034,665,581
|
Fix two error messages involving Tensor.dense()
|
mhogervo
|
closed
|
[
"triaged",
"open source",
"Merged",
"ciflow/trunk",
"topic: not user facing",
"module: dynamo"
] | 5
|
CONTRIBUTOR
|
Two error messages in the codebase instruct the user to use `Tensor.dense()`. This method doesn't exist, but `Tensor.to_dense()` does, and that is what the user should be using instead.
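For reference, a quick snippet confirming the corrected suggestion:
```python
import torch

sparse = torch.eye(3).to_sparse()
dense = sparse.to_dense()                  # this is the method that exists
assert torch.equal(dense, torch.eye(3))
assert not hasattr(torch.Tensor, "dense")  # Tensor.dense() does not exist
```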
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @chauhang @amjames
| true
|
3,034,561,957
|
[ROCm] Initial AITER Integration for mha_bwd asm kernels
|
alugorey
|
open
|
[
"module: rocm",
"triaged",
"open source",
"ciflow/trunk",
"topic: not user facing"
] | 4
|
CONTRIBUTOR
|
Generates AITER plumbing via cmake. Calls into fav3 asm bwd CK kernels.
cc @jeffdaily @sunway513 @jithunnair-amd @pruthvistony @ROCmSupport @dllehr-amd @jataylo @hongxiayang @naromero77amd
| true
|
3,034,508,195
|
[DCP] Add 30min timeout for IPC communications in async checkpointing
|
MeetVadakkanchery
|
closed
|
[
"oncall: distributed",
"fb-exported",
"Merged",
"ciflow/trunk",
"release notes: distributed (checkpoint)",
"oncall: distributed checkpointing"
] | 6
|
CONTRIBUTOR
|
Summary:
### Diff Context
- Sometimes the background process can get stuck processing an async checkpoint request, and trainer shutdown can occur before the background process completes.
- Fix: time out the thread while reading the IPC queue for a response from the background process.
Differential Revision: D74017700
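A hypothetical sketch of the pattern described above (not the actual DCP code): wait on the IPC queue with a bounded timeout instead of blocking forever on a stuck background process.
```python
import queue

IPC_TIMEOUT_S = 30 * 60  # 30 minutes, per the PR title

def wait_for_background_response(response_queue):
    """response_queue is a multiprocessing.Queue shared with the background process."""
    try:
        return response_queue.get(timeout=IPC_TIMEOUT_S)
    except queue.Empty:  # multiprocessing.Queue.get raises queue.Empty on timeout
        raise TimeoutError("async checkpoint background process did not respond in time")
```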
cc @H-Huang @awgu @wanchaol @fegin @fduwjj @wz337 @wconstab @d4l3k @LucasLLC @pradeepfn
| true
|
3,034,485,136
|
Make PGO code state not sensitive to file path by hashing file content when the file is available.
|
laithsakka
|
closed
|
[
"Merged",
"ciflow/trunk",
"topic: not user facing",
"module: dynamo",
"ciflow/inductor"
] | 3
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #152628
In some internal frameworks, on second attempts the actual code is copied to a different path than on previous attempts, but it is still the same code. PGO would not kick in for those jobs because, before this PR, state entries were identified by (file path, function name, line number).
After this PR they are identified by (hash of the file contents, function name, line number), so PGO will work for those jobs on future attempts and re-compilations of static versions will be avoided.
Sometimes we do not have access to the source code (the file does not exist); this seems to happen mostly when we re-trace a compiled function, but it can happen in general.
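An illustrative sketch of the keying described above (not the actual PGO implementation):
```python
import hashlib
import os

def code_state_key(filepath: str, func_name: str, lineno: int):
    # Identify code by a content hash when the source file is readable;
    # the fallback to the raw path below is illustrative, not from the PR.
    if os.path.isfile(filepath):
        with open(filepath, "rb") as f:
            file_id = hashlib.sha256(f.read()).hexdigest()
    else:
        file_id = filepath
    return (file_id, func_name, lineno)
```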
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @chauhang @amjames
| true
|
3,034,474,489
|
[v2.7.1] Release Tracker
|
atalman
|
open
|
[
"oncall: releng",
"triaged",
"release tracker"
] | 11
|
CONTRIBUTOR
|
This issue is for tracking cherry-picks to the release branch. The following is the [release branch](https://github.com/pytorch/pytorch/tree/release/2.7) for the 2.7.1 release.
Our plan from this point is roughly the following:
* Phase 1 (until 5/19): Cherry-pick post deadline (End of day 5PM PST)
* Phase 2 (after 5/19): Perform extended integration/stability/performance testing based on Release Candidate builds.
**Only issues that have ‘cherry-picks’ in this tracker will be considered for the release.**
## Cherry-Pick Criteria
**Phase 1 (until 5/19):**
Please note: **No feature work allowed for cherry picks**. The Releng team relies on the cherry-pick process to manage risk to release quality, i.e. by porting a small set of "must-have" commits from trunk into the release branch, we limit changes to the minimum needed to address pressing issues. Thus, not everything a developer lands in trunk will make it into the release, so please consider the criteria below and follow the cherry-picking process. Only low-risk changes may be cherry-picked from master:
1. Fixes to regressions against the most recent release (e.g. 2.7.0 for 2.7.1 release; see [module: regression issue list](https://github.com/pytorch/pytorch/issues?q=is%3Aissue+is%3Aopen+label%3A%22module%3A+regression%22+))
2. Low risk critical fixes for: silent correctness, backwards compatibility, crashes, deadlocks, (large) memory leaks
3. Critical Fixes to new features being introduced in 2.7.0 release
4. Documentation improvements
5. Release branch specific changes (e.g. blocking ci fixes, change version identifiers)
Any other change requires special dispensation from the release managers (currently @atalman, @ZainRizvi , @seemethere @huydhn, @malfet). If this applies to your change please write "Special Dispensation" in the "Criteria Category:" template below and explain.
**Phase 2 (after 5/19):**
Note that changes here require us to rebuild a Release Candidate and restart extended testing (likely delaying the release). Therefore, the only accepted changes are **Release-blocking** critical fixes for: [silent correctness](https://github.com/pytorch/pytorch/issues?q=is%3Aissue+is%3Aopen+label%3A%22topic%3A+correctness+%28silent%29%22), [backwards compatibility](https://github.com/pytorch/pytorch/issues?q=is%3Aissue+is%3Aopen+label%3A%22topic%3A+bc-breaking%22+), [crashes](https://github.com/pytorch/pytorch/issues?q=is%3Aissue+is%3Aopen+label%3A%22topic%3A+crash%22+), [deadlocks](https://github.com/pytorch/pytorch/issues?q=is%3Aissue+is%3Aopen+label%3A%22topic%3A+deadlock%22+), (large) [memory leaks](https://github.com/pytorch/pytorch/issues?q=is%3Aissue+is%3Aopen+label%3A%22topic%3A+memory+usage%22+)
Changes will likely require a discussion with the larger release team over VC or Slack.
## Cherry-Pick Process
1. Ensure your PR has landed in master. This does not apply for release-branch specific changes (see Phase 1 criteria).
2. Create (but do not land) a PR against the [release branch](https://github.com/pytorch/pytorch/tree/release/2.7).
<details>
```bash
# Find the hash of the commit you want to cherry pick
# (for example, abcdef12345)
git log
git fetch origin release/2.7
git checkout release/2.7
git cherry-pick abcdef12345
# Submit a PR based against 'release/2.7' either:
# via the GitHub UI
git push my-fork
# via the GitHub CLI
gh pr create --base release/2.7
```
</details>
3. Make a request below with the following format:
```
Link to landed trunk PR (if applicable):
*
Link to release branch PR:
*
Criteria Category:
*
```
4. Someone from the release team will reply with approved / denied or ask for more information.
5. If approved, someone from the release team will merge your PR once the tests pass. **Do not land the release branch PR yourself.**
**NOTE: Our normal tools (ghstack / ghimport, etc.) do not work on the release branch.**
See [HUD 2.7](https://hud.pytorch.org/hud/pytorch/pytorch/release%2F2.7/1?per_page=50)
### Versions
2.7.1
| true
|
3,034,453,325
|
[Dynamo] Guard serialization for TENSOR_SUBCLASS_METADATA_MATCH
|
jbschlosser
|
closed
|
[
"Merged",
"ciflow/trunk",
"topic: not user facing",
"module: dynamo",
"ciflow/inductor",
"ciflow/pull"
] | 15
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #152872
* #152865
* #152730
* #152729
* #152728
* #152727
* #152725
* #152724
* #152704
* __->__ #152626
This PR updates `GuardsStatePickler.reducer_override()` in `torch/_dynamo/guards.py` to handle reconstruction of traceable wrapper subclasses. It's intended to work recursively and handle any level of subclass instance nesting (e.g. subclass instances that contain subclass instances, etc.)
This PR tests the guard on several traceable wrapper tensor subclasses:
* `LocalSubclass`: used to ensure the correct error message is thrown when the subclass is not defined globally
* `torch.testing._internal.two_tensor.TwoTensor`: defines None for its extra metadata
* `SubclassWithMeta`: stores non-trivial extra metadata
* `SubclassWithCustomMetadataGuard`: stores non-trivial extra metadata and defines a custom `__metadata_guard__` classmethod
* `SubclassWithSubclassInnerTensors`: used to test recursiveness; this subclass contains subclass inner tensor components
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @chauhang @amjames
| true
|
3,034,451,300
|
[CP] Fix the offsets to KV in backward
|
fegin
|
closed
|
[
"oncall: distributed",
"Merged",
"ciflow/trunk",
"topic: not user facing",
"ciflow/inductor"
] | 3
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #152625
This is more semantically correct, even though we currently assume K and V have the same lengths.
cc @H-Huang @awgu @wanchaol @fduwjj @wz337 @wconstab @d4l3k
| true
|
3,034,423,623
|
[pytree] make `tree_*` functions accept both Python and C++ `PyTreeSpec`
|
XuehaiPan
|
open
|
[
"open source",
"ciflow/trunk",
"topic: not user facing",
"module: pytree",
"module: dynamo",
"ciflow/inductor"
] | 2
|
COLLABORATOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #148328
* #148180
* #137400
* __->__ #152624
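Since the body is just the stack, here is a minimal illustration of the `tree_*` functions the title refers to; the point of the PR is that the spec may come from either the Python implementation shown here or the C++ one interchangeably:
```python
import torch
import torch.utils._pytree as pytree

leaves, spec = pytree.tree_flatten({"a": torch.ones(2), "b": [1, 2]})
rebuilt = pytree.tree_unflatten(leaves, spec)  # accepts the spec produced above
print(spec, rebuilt)
```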
cc @zou3519 @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @chauhang @amjames
| true
|
3,034,421,736
|
modded-nanogpt flaky NCCL hang starting 3/30 nightly
|
xmfan
|
open
|
[
"needs reproduction",
"oncall: distributed",
"triaged"
] | 8
|
MEMBER
|
### 🐛 Describe the bug
From @YouJiaCheng,
> I evaluated performance of other nightly releases:
> time and peak memory allocated:
> - 0208: ≈1470 s, 50380 MiB
> - 0209: ≈1483 s, 50380 MiB
> - 0301: 1484–1487 s, 50380 MiB
> - 0310: 1482–1484 s, 52129 MiB
> - 0315: 1498–1500 s, 52129 MiB
> - 0330: NCCL hang on first run
> - 0401: NCCL hang on first run
> - 0410: NCCL hang on first run
> - 0414: 1496–1497 s, 52129 MiB, NCCL hang after 5 successful runs
> - 0415: ~1498 s, 52129 MiB, NCCL hang in 2 of 3 runs

<details>
<summary>Code</summary>
```python
import os
import sys
with open(sys.argv[0]) as f:
code = f.read() # read the code of this file ASAP, for logging
import uuid
import time
import copy
from dataclasses import dataclass
from functools import lru_cache
from pathlib import Path
os.environ["PYTORCH_CUDA_ALLOC_CONF"] = "expandable_segments:True"
import torch
torch.empty(1, device="cuda", requires_grad=True).backward() # prevents a bug on some systems
from torch import Tensor, nn
import torch.nn.functional as F
import torch.distributed as dist
# use of FlexAttention contributed by @KoszarskyB
from torch.nn.attention.flex_attention import BlockMask, flex_attention
torch._inductor.config.coordinate_descent_tuning = True # we have banned this flag for new records because it causes compilation to take 30min
torch._dynamo.config.compiled_autograd = True
# -----------------------------------------------------------------------------
# Muon optimizer
def zeropower_via_newtonschulz5(G: Tensor) -> Tensor:
"""
Newton-Schulz iteration to compute the zeroth power / orthogonalization of G. We opt to use a
quintic iteration whose coefficients are selected to maximize the slope at zero. For the purpose
of minimizing steps, it turns out to be empirically effective to keep increasing the slope at
zero even beyond the point where the iteration no longer converges all the way to one everywhere
on the interval. This iteration therefore does not produce UV^T but rather something like US'V^T
where S' is diagonal with S_{ii}' ∈ [1 - l, 1 + r], which turns out not to hurt model
performance at all relative to UV^T, where USV^T = G is the SVD.
"""
assert G.ndim >= 2 # batched Muon implementation by @scottjmaddox, and put into practice in the record by @YouJiacheng
X = G.bfloat16()
if G.size(-2) > G.size(-1):
X = X.mT
# Ensure spectral norm is at most 1
X = X / (X.norm(dim=(-2, -1), keepdim=True) + 1e-7)
# Perform the NS iterations
for a, b, c in [
(4.0848, -6.8946, 2.9270),
(3.9505, -6.3029, 2.6377),
(3.7418, -5.5913, 2.3037),
(2.8769, -3.1427, 1.2046),
(2.8366, -3.0525, 1.2012),
]:
A = X @ X.mT
B = b * A + c * A @ A # quintic computation strategy adapted from suggestion by @jxbz, @leloykun, and @YouJiacheng
X = a * X + B @ X
if G.size(-2) > G.size(-1):
X = X.mT
return X
@torch.compile
def update(acc_bf16_view_u16: Tensor, mantissa: Tensor, momentum_buffer: Tensor, grad: Tensor, momentum: Tensor, eff_lr: Tensor, eff_weight_decay: Tensor):
assert acc_bf16_view_u16.dtype == mantissa.dtype == torch.uint16
grad = grad.float()
momentum_buffer.copy_(momentum * momentum_buffer + (1 - momentum) * grad)
v = zeropower_via_newtonschulz5(momentum * momentum_buffer + (1 - momentum) * grad)
acc_m_u32 = (acc_bf16_view_u16.to(torch.uint32) << 16) | mantissa.to(torch.uint32)
acc_m_u32.view(torch.float32).mul_(1 - eff_weight_decay)
acc_m_u32.view(torch.float32).add_(other=v, alpha=-eff_lr)
acc_bf16_view_u16.copy_((acc_m_u32 >> 16).to(torch.uint16))
mantissa.copy_(acc_m_u32.to(torch.uint16))
class Muon(torch.optim.Optimizer):
"""
Muon - MomentUm Orthogonalized by Newton-schulz
https://kellerjordan.github.io/posts/muon/
Muon internally runs standard SGD-momentum, and then performs an orthogonalization post-
processing step, in which each 2D parameter's update is replaced with the nearest orthogonal
matrix. To efficiently orthogonalize each update, we use a Newton-Schulz iteration, which has
the advantage that it can be stably run in bfloat16 on the GPU.
Warning: This optimizer should not be used for the embedding layer, the final fully connected layer,
or any {0,1}-D parameters; those should all be optimized by a standard method (e.g., AdamW).
"""
def __init__(self, params, lr=0.02, weight_decay=0.01, momentum=0.95, rank=0, world_size=1):
self.rank = rank
self.world_size = world_size
defaults = dict(lr=lr, weight_decay=weight_decay, momentum=momentum)
super().__init__(params, defaults)
assert all(p.dtype == torch.bfloat16 for group in self.param_groups for p in group["params"])
@torch.no_grad()
def step(self):
futures: list[torch.Future] = []
for group in self.param_groups:
params: list[Tensor] = group["params"]
params_pad = params + [torch.empty_like(params[-1])] * self.world_size
momentum = torch._as_tensor_fullprec(group["momentum"])
for base_i in range(len(params))[::self.world_size]:
if base_i + self.rank < len(params):
p = params[base_i + self.rank]
state = self.state[p]
if len(state) == 0:
state["mantissa"] = torch.zeros_like(p, dtype=torch.uint16)
state["momentum_buffer"] = torch.zeros_like(p, dtype=torch.float32)
update(
p.view(torch.uint16), state["mantissa"], state["momentum_buffer"],
p.grad, momentum,
eff_lr=torch._as_tensor_fullprec(group["lr"] * max(1, p.size(-2) / p.size(-1)) ** 0.5),
eff_weight_decay=torch._as_tensor_fullprec(group["lr"] * group["weight_decay"] * getattr(p, "wd_mul", 1.0)),
)
futures.append(dist.all_gather(params_pad[base_i:base_i + self.world_size], params_pad[base_i + self.rank], async_op=True).get_future())
# for group in self.param_groups:
# params: list[Tensor] = group["params"]
# momentum = torch._as_tensor_fullprec(group["momentum"])
# for base_i in range(len(params))[::self.world_size]:
# p = params[min(base_i + self.rank, len(params) - 1)]
# state = self.state[p]
# if len(state) == 0:
# state["mantissa"] = torch.zeros_like(p, dtype=torch.uint16)
# state["momentum_buffer"] = torch.zeros_like(p, dtype=torch.float32)
# update(
# p.view(torch.uint16), state["mantissa"], state["momentum_buffer"],
# p.grad, momentum,
# eff_lr=torch._as_tensor_fullprec(group["lr"] * max(1, p.size(-2) / p.size(-1)) ** 0.5),
# eff_weight_decay=torch._as_tensor_fullprec(group["lr"] * group["weight_decay"] * getattr(p, "wd_mul", 1.0)),
# )
# p_list = [params[min(base_i + i, len(params) - 1)] for i in range(self.world_size)]
# futures.append(dist.all_gather(p_list, p_list[self.rank], async_op=True).get_future())
torch.futures.collect_all(futures).wait()
# -----------------------------------------------------------------------------
# PyTorch nn.Module definitions for the model
def norm(x: Tensor):
return F.rms_norm(x, (x.size(-1),))
@torch.no_grad()
def init_linear(w: Tensor):
std = 0.5 * (w.size(-1) ** -0.5) # 0.5 is a bit better than the default 1/sqrt(3)
bound = (3 ** 0.5) * std
return w.uniform_(-bound, bound)
class Rotary(nn.Module):
def __init__(self, dim: int, max_seq_len: int):
super().__init__()
# half-truncate RoPE by @YouJiacheng (w/ base freq tuning)
angular_freq = (1 / 1024) ** torch.linspace(0, 1, steps=dim//4, dtype=torch.float32)
angular_freq = torch.cat([angular_freq, angular_freq.new_zeros(dim//4)])
t = torch.arange(max_seq_len, dtype=torch.float32)
theta = torch.einsum("i,j -> ij", t, angular_freq)
self.cos = nn.Buffer(theta.cos(), persistent=False)
self.sin = nn.Buffer(theta.sin(), persistent=False)
def forward(self, x_BTHD: Tensor):
assert self.cos.size(0) >= x_BTHD.size(-3)
cos, sin = self.cos[None, :x_BTHD.size(-3), None, :], self.sin[None, :x_BTHD.size(-3), None, :]
x1, x2 = x_BTHD.to(dtype=torch.float32).chunk(2, dim=-1)
y1 = x1 * cos + x2 * sin
y2 = x1 * (-sin) + x2 * cos
return torch.cat((y1, y2), 3).type_as(x_BTHD)
class CausalSelfAttention(nn.Module):
def __init__(self, dim: int, num_heads: int, max_seq_len: int, head_dim=128):
super().__init__()
self.num_heads = num_heads
self.head_dim = head_dim
hdim = num_heads * head_dim
# merged QKV weights: suggested by many, implemented by @fernbear.bsky.social, and further improved by @YouJiacheng
# https://x.com/hi_tysam/status/1879699187107033311
self.qkvo_w = nn.Parameter(init_linear(torch.empty(4, hdim, dim)).bfloat16())
self.qkvo_w.detach()[3].zero_() # out zero init suggested by @Grad62304977
self.rotary = Rotary(head_dim, max_seq_len)
# scale the attention logits by given constant, instead of the default head_dim**-0.5, by @leloykun
# inspired by learnable scalars used by @brendanh0gan https://x.com/hi_tysam/status/1879693583898591283
self.attn_scale = 0.12
def forward(self, x: Tensor, ve: Tensor | None, block_mask: BlockMask, lambdas: Tensor):
B, T = x.size(0), x.size(1) # batch size, sequence length
assert B == 1, "Must use batch size = 1 for FlexAttention"
q, k, v = F.linear(x, self.qkvo_w[:3].flatten(end_dim=1)).view(B, T, 3 * self.num_heads, self.head_dim).chunk(3, dim=-2)
q, k = norm(q), norm(k) # QK norm @Grad62304977
q, k = self.rotary(q), self.rotary(k)
v = norm(v)
if ve is not None:
v = lambdas[0] * v + lambdas[1] * ve.view_as(v) # @KoszarskyB & @Grad62304977
else: # skip mid-layers token value embeddings by @YouJiacheng
v = lambdas[0] * v
y = flex_attention(q.transpose(1, 2), k.transpose(1, 2), v.transpose(1, 2), block_mask=block_mask, scale=self.attn_scale).transpose(1, 2)
y = y.contiguous().view(B, T, self.num_heads * self.head_dim) # re-assemble all head outputs side by side
y = F.linear(y, self.qkvo_w[3])
return y
class MLP(nn.Module):
def __init__(self, dim: int):
super().__init__()
hdim = 4 * dim
self.fc_w = nn.Parameter(init_linear(torch.empty(hdim, dim)).bfloat16())
self.proj_w = nn.Parameter(torch.zeros(dim, hdim).bfloat16())
self.fc_w.wd_mul = 2.0
self.proj_w.wd_mul = 2.0
def forward(self, x: Tensor):
x = F.linear(x, self.fc_w)
x = F.relu(x).square() # https://arxiv.org/abs/2109.08668v2; ~1-2% better than GELU; suggested by @SKYLINEZ007 and @Grad62304977
x = F.linear(x, self.proj_w)
return x
class Block(nn.Module):
def __init__(self, dim: int, num_heads: int, max_seq_len: int, layer_idx: int):
super().__init__()
# skip attention of blocks.7 (the 8th layer) by @YouJiacheng
self.attn = CausalSelfAttention(dim, num_heads, max_seq_len) if layer_idx != 7 else None
self.mlp = MLP(dim)
def forward(self, x: Tensor, ve: Tensor | None, x0: Tensor, block_mask: BlockMask, lambdas: Tensor, sa_lambdas: Tensor):
x = lambdas[0] * x + lambdas[1] * x0
if self.attn is not None:
x = x + self.attn(x, ve, block_mask, sa_lambdas)
x = x + self.mlp(norm(x))
return x
# -----------------------------------------------------------------------------
# The main model
def next_multiple_of_n(v: float | int, *, n: int):
return next(x for x in range(n, int(v) + 1 + n, n) if x >= v)
class GPT(nn.Module):
def __init__(self, vocab_size: int, num_layers: int, num_heads: int, model_dim: int, max_seq_len: int):
super().__init__()
self.embed = nn.Embedding(vocab_size, model_dim)
# token value embeddings by @KoszarskyB - inspired by @Grad62304977's value residual implementation following https://arxiv.org/abs/2410.17897
# value embedding code simplification inspired by @ragulpr https://github.com/KellerJordan/modded-nanogpt/pull/78
self.value_embeds = nn.ModuleList([nn.Embedding(vocab_size, model_dim) for _ in range(3)])
self.blocks = nn.ModuleList([Block(model_dim, num_heads, max_seq_len, i) for i in range(num_layers)])
# there are only 50257 unique GPT-2 tokens; we extend to nearest multiple of 128 for efficiency.
# suggested to me by @Grad62304977. this originates from Karpathy's experiments.
self.lm_head_w = nn.Parameter(torch.zeros(next_multiple_of_n(vocab_size, n=128), model_dim))
# Add learnable skip connection weights for decoder layers
assert num_layers % 2 == 0
self.scalars = nn.Parameter(torch.cat([
torch.ones(num_layers), # skip_weights
*[torch.tensor([1.0, 0.0]) for _ in range(num_layers)], # block lambdas
*[torch.tensor([0.5, 0.5]) for _ in range(num_layers)], # SA lambdas
]))
def create_blockmasks(self, input_seq: Tensor, sliding_window_num_blocks: Tensor):
BLOCK_SIZE = 128
docs = (input_seq == 50256).cumsum(0)
def document_causal(b, h, q_idx, kv_idx):
causal_mask = q_idx >= kv_idx
document_mask = docs[q_idx] == docs[kv_idx]
return causal_mask & document_mask
def dense_to_ordered(dense_blockmask: Tensor):
num_blocks = dense_blockmask.sum(dim=-1, dtype=torch.int32)
indices = dense_blockmask.argsort(dim=-1, descending=False, stable=True).flip(-1).to(torch.int32)
return num_blocks[None, None].contiguous(), indices[None, None].contiguous()
# manual block mask creation by @YouJiacheng
assert len(input_seq) % BLOCK_SIZE == 0
NUM_BLOCKS = len(input_seq) // BLOCK_SIZE
block_idx = torch.arange(NUM_BLOCKS, dtype=torch.int32, device="cuda")
causal_blockmask_any = block_idx[:, None] >= block_idx
causal_blockmask_all = block_idx[:, None] > block_idx
docs_low = docs.view(-1, BLOCK_SIZE)[:, 0].contiguous()
docs_high = docs.view(-1, BLOCK_SIZE)[:, -1].contiguous()
document_blockmask_any = (docs_low[:, None] <= docs_high) & (docs_high[:, None] >= docs_low)
document_blockmask_all = (docs_low[:, None] == docs_high) & (docs_high[:, None] == docs_low)
blockmask_any = causal_blockmask_any & document_blockmask_any
blockmask_all = causal_blockmask_all & document_blockmask_all
partial_kv_num_blocks, partial_kv_indices = dense_to_ordered(blockmask_any & ~blockmask_all)
full_kv_num_blocks, full_kv_indices = dense_to_ordered(blockmask_all)
def build_bm(window_size_blocks: Tensor) -> BlockMask:
return BlockMask.from_kv_blocks(
torch.clamp_max(partial_kv_num_blocks, torch.clamp_min(window_size_blocks - full_kv_num_blocks, 1)),
partial_kv_indices,
torch.clamp_max(full_kv_num_blocks, window_size_blocks - 1),
full_kv_indices,
BLOCK_SIZE=BLOCK_SIZE,
mask_mod=document_causal,
)
# Long-short SWA block masks by @leloykun & @YouJiacheng, adapated from suggestion by @Grad62304977, following Gemma 2 paper
return build_bm(sliding_window_num_blocks), build_bm(sliding_window_num_blocks // 2)
def forward(self, input_seq: Tensor, target_seq: Tensor, sliding_window_num_blocks: Tensor):
assert input_seq.ndim == 1
ve = [value_embed(input_seq) for value_embed in self.value_embeds]
# 012 ... 012 structure on token value embeddings by @YouJiacheng, improved on @leloykun's U-net structure
ve = [ve[0], ve[1], ve[2]] + [None] * (len(self.blocks) - 6) + [ve[0], ve[1], ve[2]]
assert len(ve) == len(self.blocks)
long_bm, short_bm = self.create_blockmasks(input_seq, sliding_window_num_blocks)
block_masks = [long_bm, short_bm, short_bm, short_bm, long_bm, short_bm, short_bm, short_bm, short_bm, short_bm, short_bm, long_bm, short_bm, short_bm, short_bm, long_bm]
assert len(block_masks) == len(self.blocks)
x = x0 = norm(self.embed(input_seq)[None]) # use of norm here by @Grad62304977
skip_connections = []
skip_map = {
9: 6,
10: 4,
11: 2,
}
skip_weights = self.scalars[:len(self.blocks)]
lambdas = self.scalars[1 * len(self.blocks): 3 * len(self.blocks)].view(-1, 2)
sa_lambdas = self.scalars[3 * len(self.blocks): 5 * len(self.blocks)].view(-1, 2)
for i in range(len(self.blocks)):
if i in skip_map:
x = x + skip_weights[skip_map[i]] * skip_connections[skip_map[i]]
x = self.blocks[i](x, ve[i], x0, block_masks[i], lambdas[i], sa_lambdas[i])
skip_connections.append(x)
x = norm(x)
if self.training:
logits: Tensor = F.linear(x.flatten(end_dim=1), self.lm_head_w.bfloat16()).float()
loss = F.cross_entropy(15 * logits * torch.rsqrt(logits.square() + 225), target_seq)
return loss
loss = 0
for i in range(4):
logits: Tensor = F.linear(x.flatten(end_dim=1).chunk(4)[i], self.lm_head_w.bfloat16()).float()
loss += F.cross_entropy(15 * logits * torch.rsqrt(logits.square() + 225), target_seq.chunk(4)[i]) / 4
return loss
# -----------------------------------------------------------------------------
# Our own simple Distributed Data Loader
def _load_data_shard(file: Path):
header = torch.from_file(str(file), False, 256, dtype=torch.int32) # header is 256 int32
assert header[0] == 20240520, "magic number mismatch in the data .bin file"
assert header[1] == 1, "unsupported version"
num_tokens = int(header[2]) # number of tokens (claimed)
with file.open("rb", buffering=0) as f:
tokens = torch.empty(num_tokens, dtype=torch.uint16, pin_memory=True) # avoid pin_memory copy by @YouJiacheng
f.seek(256 * 4)
nbytes = f.readinto(tokens.numpy()) # avoid bytes->array copy by @YouJiacheng
assert nbytes == 2 * num_tokens, "number of tokens read does not match header"
return tokens
def distributed_data_generator(filename_pattern: str, batch_size: int, rank : int, world_size : int):
files = sorted(Path.cwd().glob(filename_pattern))
assert batch_size % world_size == 0
local_batch_size = batch_size // world_size
file_iter = iter(files) # use itertools.cycle(files) instead if you want to do multi-epoch training
tokens, pos = _load_data_shard(next(file_iter)), 0
while True:
if pos + batch_size + 1 >= len(tokens):
tokens, pos = _load_data_shard(next(file_iter)), 0
buf = tokens[pos + rank * local_batch_size:][:local_batch_size + 1]
inputs = buf[:-1].to(device="cuda", dtype=torch.int32, non_blocking=True) # no sync on host side;
targets = buf[1:].to(device="cuda", dtype=torch.int64, non_blocking=True) # H2D in another stream isn't helpful.
pos += batch_size
yield inputs, targets
# -----------------------------------------------------------------------------
# int main
@dataclass
class Hyperparameters:
# data
train_files = "data/fineweb10B/fineweb_train_*.bin" # input .bin to train on
val_files = "data/fineweb10B/fineweb_val_*.bin" # input .bin to eval validation loss on
val_tokens = 10485760 # how many tokens of validation data? it's important to keep this fixed for consistent comparisons
train_seq_len = 64*1024 # FlexAttention sequence length
val_seq_len = 4*64*1024 # FlexAttention sequence length for validation
# optimization
num_iterations = 5960 # number of iterations to run
cooldown_frac = 0.7 # fraction of training spent cooling down the learning rate
# architecture
vocab_size = 50257
# evaluation and logging
val_loss_every = 125 # every how many steps to evaluate val loss? 0 for only at the end
save_checkpoint = False
args = Hyperparameters()
run_id = int(os.environ.get("RUN_ID", 0))
# torchrun sets these env variables
rank = int(os.environ["RANK"])
world_size = int(os.environ["WORLD_SIZE"])
assert world_size == 8 # this code is designed for 8xH100
assert torch.cuda.is_available()
device = torch.device("cuda", int(os.environ["LOCAL_RANK"]))
torch.cuda.set_device(device)
dist.init_process_group(backend="nccl", device_id=device)
dist.barrier()
master_process = (rank == 0) # this process will do logging, checkpointing etc.
# begin logging
if master_process:
run_id_full = f"{run_id:03d}_{uuid.uuid4()}"
os.makedirs("logs", exist_ok=True)
logfile = f"logs/{run_id_full}.txt"
print(logfile)
def print0(s, console=False):
if master_process:
with open(logfile, "a") as f:
if console:
print(s)
print(s, file=f)
from torch._logging._internal import trace_structured # noqa: E402
import torch._inductor.codecache # noqa: E402
import torch._inductor.graph # noqa: E402
def _patched_trace_structured(name, *args, **kwargs):
if name == "inductor_output_code":
match args, kwargs:
case (metadata_fn, *_), _:
filename = metadata_fn().get("filename", "Unknown")
case _, {"metadata_fn": metadata_fn}:
filename = metadata_fn().get("filename", "Unknown")
case _:
filename = "Unknown"
print0(f"inductor_output_code: {filename}")
trace_structured(name, *args, **kwargs)
torch._inductor.codecache.trace_structured = _patched_trace_structured
torch._inductor.graph.trace_structured = _patched_trace_structured
# begin by printing this file (the Python code)
print0(code)
print0("="*100)
# log information about the hardware/software environment this is running on
print0(f"Running Python {sys.version}")
print0(f"Running PyTorch {torch.version.__version__} compiled for CUDA {torch.version.cuda}")
def nvidia_smi():
import subprocess # avoid top level import
return subprocess.run(["nvidia-smi"], stdout=subprocess.PIPE, stderr=subprocess.PIPE, text=True).stdout
print0(nvidia_smi())
print0("="*100)
########################################
# Construct model and optimizer #
########################################
model: nn.Module = GPT(vocab_size=args.vocab_size, num_layers=16, num_heads=8, model_dim=1024,
max_seq_len=max(args.train_seq_len, args.val_seq_len)).cuda()
for m in model.modules():
if isinstance(m, nn.Embedding):
m.bfloat16()
for param in model.parameters():
dist.broadcast(param.detach(), 0)
# collect the parameters to optimize
hidden_matrix_params = sorted((p for p in model.blocks.parameters() if p.ndim >= 2), key=lambda x: x.size(), reverse=True)
embed_params = [*model.embed.parameters(), *model.value_embeds.parameters()]
scalar_params = [model.scalars]
head_params: list[nn.Parameter] = [model.lm_head_w]
# sanity check
params_collections = [hidden_matrix_params, embed_params, scalar_params, head_params]
optimized_parameters_set = {p for params in params_collections for p in params}
assert optimized_parameters_set == {*model.parameters()}
assert len(optimized_parameters_set) == sum(len(lst) for lst in params_collections)
# init the optimizer(s)
adam_param_groups = [dict(params=head_params, lr=1/320), dict(params=embed_params, lr=0.3), dict(params=scalar_params, lr=0.015)]
# small adam epsilon by @YouJiacheng. this is an alternate method of fixing the world_size dependence
# discovered by @fernbear.bsky.social https://x.com/hi_tysam/status/1879692937589875094
optimizer1 = torch.optim.AdamW(adam_param_groups, betas=(0.8, 0.95), eps=1e-10, weight_decay=0.0, fused=True)
optimizer2 = Muon(hidden_matrix_params, lr=0.025, momentum=0.95, rank=rank, world_size=world_size)
optimizers: list[torch.optim.Optimizer] = [optimizer1, optimizer2]
def opt_params(opt: torch.optim.Optimizer) -> list[nn.Parameter]:
return [p for group in opt.param_groups for p in group["params"]]
opt2params = {opt: opt_params(opt) for opt in optimizers}
for opt in optimizers:
for group in opt.param_groups:
group["initial_lr"] = group["lr"]
# learning rate schedule: stable then decay
def get_lr(step: int):
x = step / args.num_iterations # progress in training
assert 0 <= x < 1
if x < 1 - args.cooldown_frac:
return 1.0
else:
return (1 - x) / args.cooldown_frac
# attention window size schedule: linearly increase
@lru_cache(1)
def get_window_size_blocks_helper(window_size: int):
return torch.tensor(window_size // 128, dtype=torch.int32, pin_memory=True).cuda(non_blocking=True)
def get_window_size_blocks(step: int):
x = step / args.num_iterations # progress in training
assert 0 <= x <= 1
# Linearly increase the block-wise sliding window size over training 128 -> 1792
# increase by @fernbear.bsky.social; block-wise by @YouJiacheng
factor = 4 * x ** 3 - 6 * x ** 2 + 3 * x
window_size = next_multiple_of_n(3456 * factor, n=128)
return get_window_size_blocks_helper(window_size)
model: nn.Module = torch.compile(model, dynamic=False)
########################################
# Warmup kernels #
########################################
# Warmup the training kernels, then re-initialize the state so we aren't cheating
warmup_steps = 10
initial_state = copy.deepcopy(dict(model=model.state_dict(), optimizers=[opt.state_dict() for opt in optimizers]))
for _ in range(warmup_steps):
inputs = targets = torch.randint(0, args.vocab_size, size=(args.train_seq_len,), device="cuda")
model(inputs.to(torch.int32), targets, get_window_size_blocks(0)).backward()
for param in model.parameters():
dist.all_reduce(param.grad, op=dist.ReduceOp.AVG)
for opt in optimizers:
opt.step()
model.zero_grad(set_to_none=True)
model.load_state_dict(initial_state["model"])
for opt, opt_state in zip(optimizers, initial_state["optimizers"]):
opt.load_state_dict(opt_state)
del initial_state
########################################
# Training and validation #
########################################
torch.cuda.reset_peak_memory_stats()
train_loader = distributed_data_generator(args.train_files, world_size * args.train_seq_len, rank, world_size)
training_time_ms = 0
# start the clock
dist.barrier()
t0 = time.perf_counter()
# begin training
train_steps = args.num_iterations
for step in range(train_steps + 1):
last_step = (step == train_steps)
# --------------- VALIDATION SECTION -----------------
if last_step or (args.val_loss_every > 0 and step % args.val_loss_every == 0):
# stop the clock
dist.barrier()
training_time_ms += 1000 * (time.perf_counter() - t0)
model.eval()
val_batch_size = world_size * args.val_seq_len
assert args.val_tokens % val_batch_size == 0
val_steps = args.val_tokens // val_batch_size
val_loader = distributed_data_generator(args.val_files, val_batch_size, rank, world_size)
val_loss = 0
with torch.no_grad():
for _ in range(val_steps):
inputs, targets = next(val_loader)
val_loss += model(inputs, targets, get_window_size_blocks(step))
val_loss /= val_steps
del val_loader
dist.all_reduce(val_loss, op=dist.ReduceOp.AVG)
print0(f"step:{step}/{train_steps} val_loss:{val_loss:.6f} train_time:{training_time_ms:.0f}ms step_avg:{training_time_ms/max(step, 1):.2f}ms", console=True)
model.train()
# start the clock again
dist.barrier()
t0 = time.perf_counter()
if last_step:
if master_process and args.save_checkpoint:
log = dict(step=step, code=code, model=model.state_dict(), optimizers=[opt.state_dict() for opt in optimizers])
os.makedirs(f"logs/{run_id_full}", exist_ok=True)
torch.save(log, f"logs/{run_id_full}/state_step{step:06d}.pt")
# the last step only has the validation loop, so break to avoid training
break
# --------------- TRAINING SECTION -----------------
inputs, targets = next(train_loader)
model(inputs, targets, get_window_size_blocks(step)).backward()
opt2futures = {
opt: [dist.all_reduce(p.grad, op=dist.ReduceOp.AVG, async_op=True).get_future() for p in params]
for opt, params in opt2params.items()
}
# set optimization hyperparameters
for opt in optimizers:
for group in opt.param_groups:
group["lr"] = group["initial_lr"] * get_lr(step)
for group in optimizer2.param_groups:
frac = min(step / 300, 1) # momentum warmup for muon
group["momentum"] = (1 - frac) * 0.85 + frac * 0.95
# step the optimizers
for opt in optimizers:
torch.futures.collect_all(opt2futures[opt]).wait()
opt.step()
# null the gradients
model.zero_grad(set_to_none=True)
# logging
approx_training_time_ms = training_time_ms + 1000 * (time.perf_counter() - t0)
print0(f"step:{step+1}/{train_steps} train_time:{approx_training_time_ms:.0f}ms step_avg:{approx_training_time_ms/(step + 1):.2f}ms", console=True)
print0(f"peak memory allocated: {torch.cuda.max_memory_allocated() // 1024 // 1024} MiB "
f"reserved: {torch.cuda.max_memory_reserved() // 1024 // 1024} MiB", console=True)
dist.destroy_process_group()
```
</details>
### Versions
nightlies since 3/30
Collecting environment information...
PyTorch version: 2.7.0.dev20250209+cu126
Is debug build: False
CUDA used to build PyTorch: 12.6
ROCM used to build PyTorch: N/A
OS: Ubuntu 24.04.1 LTS (x86_64)
GCC version: (Ubuntu 13.3.0-6ubuntu2~24.04) 13.3.0
Clang version: Could not collect
CMake version: Could not collect
Libc version: glibc-2.39
Python version: 3.12.9 (main, Feb 5 2025, 19:10:45) [Clang 19.1.6 ] (64-bit runtime)
Python platform: Linux-5.4.250-2-velinux1u1-amd64-x86_64-with-glibc2.39
Is CUDA available: True
CUDA runtime version: 12.8.61
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration:
GPU 0: NVIDIA H100 80GB HBM3
GPU 1: NVIDIA H100 80GB HBM3
GPU 2: NVIDIA H100 80GB HBM3
GPU 3: NVIDIA H100 80GB HBM3
GPU 4: NVIDIA H100 80GB HBM3
GPU 5: NVIDIA H100 80GB HBM3
GPU 6: NVIDIA H100 80GB HBM3
GPU 7: NVIDIA H100 80GB HBM3
Nvidia driver version: 535.129.03
cuDNN version: Probably one of the following:
/usr/lib/x86_64-linux-gnu/libcudnn.so.9.7.0
/usr/lib/x86_64-linux-gnu/libcudnn_adv.so.9.7.0
/usr/lib/x86_64-linux-gnu/libcudnn_cnn.so.9.7.0
/usr/lib/x86_64-linux-gnu/libcudnn_engines_precompiled.so.9.7.0
/usr/lib/x86_64-linux-gnu/libcudnn_engines_runtime_compiled.so.9.7.0
/usr/lib/x86_64-linux-gnu/libcudnn_graph.so.9.7.0
/usr/lib/x86_64-linux-gnu/libcudnn_heuristic.so.9.7.0
/usr/lib/x86_64-linux-gnu/libcudnn_ops.so.9.7.0
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 52 bits physical, 57 bits virtual
Byte Order: Little Endian
CPU(s): 168
On-line CPU(s) list: 0-161
Off-line CPU(s) list: 162-167
Vendor ID: GenuineIntel
Model name: Intel(R) Xeon(R) Platinum 8457C
CPU family: 6
Model: 143
Thread(s) per core: 2
Core(s) per socket: 42
Socket(s): 2
Stepping: 8
BogoMIPS: 5199.53
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ss ht syscall nx pdpe1gb rdtscp lm constant_tsc rep_good nopl xtopology nonstop_tsc cpuid pni pclmulqdq monitor ssse3 fma cx16 pcid sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand hypervisor lahf_lm abm 3dnowprefetch cpuid_fault invpcid_single ssbd ibrs ibpb stibp ibrs_enhanced fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid avx512f avx512dq rdseed adx smap avx512ifma clflushopt clwb avx512cd sha_ni avx512bw avx512vl xsaveopt xsavec xgetbv1 xsaves avx512_bf16 wbnoinvd arat avx512vbmi umip pku ospke waitpkg avx512_vbmi2 gfni vaes vpclmulqdq avx512_vnni avx512_bitalg avx512_vpopcntdq rdpid cldemote movdiri movdir64b md_clear arch_capabilities
Hypervisor vendor: KVM
Virtualization type: full
L1d cache: 3.9 MiB (84 instances)
L1i cache: 2.6 MiB (84 instances)
L2 cache: 168 MiB (84 instances)
L3 cache: 195 MiB (2 instances)
NUMA node(s): 2
NUMA node0 CPU(s): 0-83
NUMA node1 CPU(s): 84-167
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Unknown: No mitigations
Vulnerability Retbleed: Not affected
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl and seccomp
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Enhanced IBRS, IBPB conditional, RSB filling, PBRSB-eIBRS SW sequence
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Mitigation; TSX disabled
Versions of relevant libraries:
[pip3] numpy==2.2.2
[pip3] nvidia-cublas-cu12==12.6.4.1
[pip3] nvidia-cuda-cupti-cu12==12.6.80
[pip3] nvidia-cuda-nvrtc-cu12==12.6.77
[pip3] nvidia-cuda-runtime-cu12==12.6.77
[pip3] nvidia-cudnn-cu12==9.5.1.17
[pip3] nvidia-cufft-cu12==11.3.0.4
[pip3] nvidia-curand-cu12==10.3.7.77
[pip3] nvidia-cusolver-cu12==11.7.1.2
[pip3] nvidia-cusparse-cu12==12.5.4.2
[pip3] nvidia-cusparselt-cu12==0.6.3
[pip3] nvidia-nccl-cu12==2.21.5
[pip3] nvidia-nvjitlink-cu12==12.6.85
[pip3] nvidia-nvtx-cu12==12.6.77
[pip3] pytorch-triton==3.2.0+git4b3bb1f8
[pip3] torch==2.7.0.dev20250209+cu126
[conda] Could not collect
cc @H-Huang @awgu @wanchaol @fegin @fduwjj @wz337 @wconstab @d4l3k
| true
|
3,034,417,696
|
Parameterized CUDA Graph Launch
|
galv
|
open
|
[
"open source",
"module: inductor",
"ciflow/inductor"
] | 5
|
COLLABORATOR
|
This is a follow on to #137318 .
The main concern with that PR was robustness: We had no real way of knowing whether or not a particular 8-byte aligned 8-byte value in a parameter was really a pointer. This made it basically impossible to 100% guarantee correctness of replacing the arguments of a cuda graph, since you might accidentally replace a false positive. However, the compiler does in fact know the types of every argument to a cuda kernel; the only way this is exposed right now is via dwarf debug information, so I parsed the dwarf debug information.
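For concreteness, here is a minimal sketch (my own illustration, not the code in this PR) of what reading kernel parameter types out of the DWARF debug information can look like, assuming the kernel was built with `nvcc -G` and that pyelftools is available; the file path and kernel name are placeholders:
```python
from elftools.elf.elffile import ELFFile  # pip install pyelftools

def kernel_param_types(elf_path: str, kernel_name: str):
    """Yield (parameter_name, type_description) for a kernel's formal parameters."""
    with open(elf_path, "rb") as f:
        elf = ELFFile(f)
        if not elf.has_dwarf_info():
            raise RuntimeError("no DWARF info found; was the kernel built with nvcc -G?")
        for cu in elf.get_dwarf_info().iter_CUs():
            for die in cu.iter_DIEs():
                if die.tag != "DW_TAG_subprogram":
                    continue
                name = die.attributes.get("DW_AT_name")
                if name is None or name.value.decode() != kernel_name:
                    continue
                for child in die.iter_children():
                    if child.tag != "DW_TAG_formal_parameter":
                        continue
                    pname_attr = child.attributes.get("DW_AT_name")
                    pname = pname_attr.value.decode() if pname_attr else "<unnamed>"
                    type_die = child.get_DIE_from_attribute("DW_AT_type")
                    # Pointer parameters show up as DW_TAG_pointer_type DIEs, which is
                    # the signal needed to tell device pointers apart from arbitrary
                    # 8-byte integers.
                    tname = type_die.attributes.get("DW_AT_name")
                    yield pname, tname.value.decode() if tname else type_die.tag
```
Real kernel names in the ELF are typically mangled, so a production version would presumably have to demangle or match on `DW_AT_linkage_name` instead.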
Disappointingly, this PR cannot be built on its own. I am currently depending upon a private API to get the ELF file holding a given cuda kernel (since that elf file contains the dwarf debug information we need) that sits in a header file called etbl_private.h (which I did not upload). Exposing internal APIs publicly will take time.
Another challenge with this PR is that it requires you to build all CUDA code with nvcc's "-G" flag. This creates dwarf debug symbols, but at the cost of disabling optimizations. This both slows down compilation and means we don't benefit from this change yet, since your kernels will be way slower.
A final concern is that this doesn't work with code generated via compilers other than nvcc. In particular, triton doesn't seem to generate dwarf debug symbols. I'm not very concerned about this right now, since triton's kernels apparently have very simple parameters (just basic types no structs): https://github.com/triton-lang/triton/blob/bb60d9efe15534c1d0514e19cdf5da14d38779a2/third_party/nvidia/backend/driver.py#L100-L122
That may all sound underwhelming, but I hope this paves the way towards calling cuda graphs as if they are functions. (i.e., you will no longer have to worry about an errant graph.replay() call clobbering an output value from a previous graph.replay()). The ability to reliably detect pointers in a cuda graph also would allow us to prevent cuda graphed code from "hogging" memory from non cuda graphed code. This is because we could "unmap" the physical memory backing virtual addresses of temporary allocations in a cuda graph. It remains to be seen whether this can be done performantly @eellison .
I hope this dovetails nicely with @BoyuanFeng 's work on inductor graph partition and removing all of the constraints and workarounds in cudagraph trees.
@eellison @BoyuanFeng @leijurv @zdevito
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
| true
|
3,034,386,674
|
Stop proxy-ing autograd.Function.ctx into the graph
|
zou3519
|
open
|
[
"ciflow/trunk",
"topic: not user facing",
"module: dynamo",
"ciflow/inductor"
] | 3
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #152621
The reason why we did this before is because that's how our older
autograd.Function x Dynamo interaction worked, but we've since adopted
newer designs that don't actually need the autograd.Function.ctx proxied
into the graph.
We still need an fx.Proxy for the autograd.Function.ctx object at times, so whenever we do, I create one via discard_graph_changes.
Test Plan:
- existing tests
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @chauhang @amjames
| true
|
3,034,379,833
|
BE: Swap functorch --> torch._higher_order_ops
|
seemethere
|
closed
|
[
"fb-exported",
"Merged",
"ciflow/trunk",
"ciflow/inductor",
"release notes: export"
] | 4
|
MEMBER
|
Summary: Discovered when attempting to resolve arvr builds; this should resolve issues around utilizing functorch through export.
Test Plan:
```
buck2 test arvr/mode/linux/opt //arvr/libraries/xrrp/ml/python/test:convert_to_etvk_test
```
Differential Revision: D74013898
| true
|
3,034,352,816
|
[fbgemm] Implement __obj_flatten__ for LinearPackedParamsBase
|
hl475
|
closed
|
[
"module: cpu",
"fb-exported",
"Merged",
"ciflow/trunk",
"release notes: quantization"
] | 19
|
CONTRIBUTOR
|
Differential Revision: D73991241
cc @jgong5 @mingfeima @XiaobingSuper @sanchitintel @ashokei @jingxu10 @jerryzh168
| true
|
3,034,342,808
|
[CUDA][TF32] Account for TF32 in `test_conv2d_same_padding`
|
eqy
|
closed
|
[
"module: cuda",
"module: convolution",
"open source",
"Merged",
"module: tf32",
"ciflow/trunk",
"topic: not user facing"
] | 3
|
COLLABORATOR
|
cc @ptrblck @msaroufim @jerryzh168 @zasdfgbnm
| true
|
3,034,298,667
|
Pytorch Profiler crashes while using it with Pytorch Lightning module
|
MKaczkow
|
open
|
[
"oncall: profiler"
] | 0
|
NONE
|
### 🐛 Describe the bug
Pytorch Profiler crashes while using it with pytorch-lightning. I am attempting to profile some experiments, but keep getting errors like the ones shown below. I've searched the forum and GitHub issues and I'm aware of the following:
* [issue](https://github.com/pytorch/pytorch/issues/98124) (not relevant -> different cause of error as suggested by the message)
* [issue](https://github.com/pytorch/pytorch/issues/68846) (not relevant -> different cause of error as suggested by the message)
* [forum post](https://discuss.pytorch.org/t/why-is-tensorboard-reporting-no-tensorcores/168473) (not relevant -> profiler runs, but output not in tensorboard)
Suspecting, judging from the error message, that the problem is related to context management in the profiler, I've tried two ways of launching it, *v1 -> distinct-context-per-stage* and *v2 -> single-context-for-experiment*, but neither has succeeded. The remaining parts of the experiment (dataloaders, model, etc.) are provided in the environment and have worked correctly so far (an example setup is listed at the very end of this issue, as it's quite a lot of code).
Will be grateful for any ideas / debugging tips 🙂
Relevant code snippet v1:
```python
trainer = pl.Trainer(
log_every_n_steps=100,
max_epochs=max_epochs,
devices=1,
accelerator=accelerator,
enable_checkpointing=False,
num_sanity_val_steps=0, # to avoid adding unnecessary item to validation_epoch_embedding_norms
)
###########################
# Pre-training
###########################
with profile(
activities=[ProfilerActivity.CPU, ProfilerActivity.CUDA],
record_shapes=True,
profile_memory=True,
with_stack=True,
) as prof:
with record_function("pretraining-validation"):
# perform extra 'validation' epoch to see if untrained model does anything useful
trainer.validate(model, dataloader_val_simclr)
###########################
# Training
###########################
with record_function("training-phase"):
trainer.fit(
model=model,
train_dataloaders=dataloader_train_simclr,
val_dataloaders=dataloader_val_simclr,
)
###########################
# Testing
###########################
with record_function("testing-final"):
trainer.test(
model,
dataloaders=dataloader_test_simclr,
)
```
Code snippet v2:
```python
trainer = pl.Trainer(
log_every_n_steps=100,
max_epochs=max_epochs,
devices=1,
accelerator=accelerator,
enable_checkpointing=False,
num_sanity_val_steps=0, # to avoid adding unnecessary item to validation_epoch_embedding_norms
)
###########################
# Pre-training
###########################
with profile(
activities=[ProfilerActivity.CPU, ProfilerActivity.CUDA],
record_shapes=True,
profile_memory=True,
with_stack=True,
) as prof:
with record_function("pretraining-validation"):
# perform extra 'validation' epoch to see if untrained model does anything useful
trainer.validate(model, dataloader_val_simclr)
###########################
# Training
###########################
with profile(
activities=[ProfilerActivity.CPU, ProfilerActivity.CUDA],
record_shapes=True,
profile_memory=True,
with_stack=True,
) as prof:
with record_function("training-phase"):
trainer.fit(
model=model,
train_dataloaders=dataloader_train_simclr,
val_dataloaders=dataloader_val_simclr,
)
###########################
# Testing
###########################
with profile(
activities=[ProfilerActivity.CPU, ProfilerActivity.CUDA],
record_shapes=True,
profile_memory=True,
with_stack=True,
) as prof:
with record_function("testing-final"):
trainer.test(
model,
dataloaders=dataloader_test_simclr,
)
```
Stack traces:
```
RuntimeError Traceback (most recent call last)
Cell In[4], line 107
96 trainer = pl.Trainer(
97 log_every_n_steps=100,
98 max_epochs=max_epochs,
(...)
101 num_sanity_val_steps=0, # to avoid adding unnecessary item to validation_epoch_embedding_norms
102 )
104 ###########################
105 # Pre-training (Validation before training)
106 ###########################
--> 107 with profile(
108 activities=[ProfilerActivity.CPU, ProfilerActivity.CUDA],
109 record_shapes=True,
110 profile_memory=True,
111 with_stack=True,
112 ) as prof:
113 with record_function("pretraining-validation"):
114 # perform extra 'validation' epoch to see if untrained model does anything useful
115 trainer.validate(model, dataloader_val_simclr)
File d:\{repository_path}\venv\Lib\site-packages\torch\profiler\profiler.py:699, in profile.__exit__(self, exc_type, exc_val, exc_tb)
698 def __exit__(self, exc_type, exc_val, exc_tb):
--> 699 self.stop()
700 prof.KinetoStepTracker.erase_step_count(PROFILER_STEP_NAME)
701 if self.execution_trace_observer:
File d:\{repository_path}\venv\Lib\site-packages\torch\profiler\profiler.py:715, in profile.stop(self)
713 if self.record_steps and self.step_rec_fn:
714 self.step_rec_fn.__exit__(None, None, None)
--> 715 self._transit_action(self.current_action, None)
File d:\{repository_path}\venv\Lib\site-packages\torch\profiler\profiler.py:744, in profile._transit_action(self, prev_action, current_action)
742 if action_list:
743 for action in action_list:
--> 744 action()
File d:\{repository_path}\venv\Lib\site-packages\torch\profiler\profiler.py:199, in _KinetoProfile.stop_trace(self)
197 self.execution_trace_observer.stop()
198 assert self.profiler is not None
--> 199 self.profiler.__exit__(None, None, None)
File d:\{repository_path}\venv\Lib\site-packages\torch\autograd\profiler.py:296, in profile.__exit__(self, exc_type, exc_val, exc_tb)
294 if self.use_cuda:
295 torch.cuda.synchronize()
--> 296 self.kineto_results = _disable_profiler()
297 _run_on_profiler_stop()
298 parsed_results = self._parse_kineto_results(self.kineto_results)
RuntimeError: !stack.empty() INTERNAL ASSERT FAILED at "..\\torch\\csrc\\autograd\\profiler_python.cpp":969, please report a bug to PyTorch. Python replay stack is empty.
```
Sometimes (it seems random to me), I get this error:
```
RuntimeError Traceback (most recent call last)
Cell In[28], [line 208](vscode-notebook-cell:?execution_count=28&line=208)
[189](vscode-notebook-cell:?execution_count=28&line=189) trainer = pl.Trainer(
[190](vscode-notebook-cell:?execution_count=28&line=190) log_every_n_steps=100,
[191](vscode-notebook-cell:?execution_count=28&line=191) max_epochs=max_epochs,
(...)
[202](vscode-notebook-cell:?execution_count=28&line=202) ],
[203](vscode-notebook-cell:?execution_count=28&line=203) )
[205](vscode-notebook-cell:?execution_count=28&line=205) ###########################
[206](vscode-notebook-cell:?execution_count=28&line=206) # Pre-training
[207](vscode-notebook-cell:?execution_count=28&line=207) ###########################
--> [208](vscode-notebook-cell:?execution_count=28&line=208) with profile(
[209](vscode-notebook-cell:?execution_count=28&line=209) activities=[ProfilerActivity.CPU, ProfilerActivity.CUDA],
[210](vscode-notebook-cell:?execution_count=28&line=210) record_shapes=True,
[211](vscode-notebook-cell:?execution_count=28&line=211) profile_memory=True,
[212](vscode-notebook-cell:?execution_count=28&line=212) with_stack=True,
[213](vscode-notebook-cell:?execution_count=28&line=213) ) as prof:
[214](vscode-notebook-cell:?execution_count=28&line=214) with record_function("pretraining-validation"):
[215](vscode-notebook-cell:?execution_count=28&line=215) # perform extra 'validation' epoch to see if untrained model does anything useful
[216](vscode-notebook-cell:?execution_count=28&line=216) trainer.validate(model, dataloader_val_simclr)
File d:\__repos\masters_bacter_private\venv\Lib\site-packages\torch\profiler\profiler.py:695, in profile.__enter__(self)
[694](file:///D:/{repository_path}/venv/Lib/site-packages/torch/profiler/profiler.py:694) def __enter__(self):
--> [695](file:///D:/{repository_path}/venv/Lib/site-packages/torch/profiler/profiler.py:695) self.start()
[696](file:///D:/{repository_path}/venv/Lib/site-packages/torch/profiler/profiler.py:696) return self
File d:\__repos\masters_bacter_private\venv\Lib\site-packages\torch\profiler\profiler.py:705, in profile.start(self)
[704](file:///D:/{repository_path}/venv/Lib/site-packages/torch/profiler/profiler.py:704) def start(self):
--> [705](file:///D:/{repository_path}/venv/Lib/site-packages/torch/profiler/profiler.py:705) self._transit_action(ProfilerAction.NONE, self.current_action)
[706](file:///D:/{repository_path}/venv/Lib/site-packages/torch/profiler/profiler.py:706) if self.record_steps:
[707](file:///D:/{repository_path}/venv/Lib/site-packages/torch/profiler/profiler.py:707) self.step_rec_fn = prof.record_function(
[708](file:///D:/{repository_path}/venv/Lib/site-packages/torch/profiler/profiler.py:708) "ProfilerStep#" + str(self.step_num)
[709](file:///D:/{repository_path}/venv/Lib/site-packages/torch/profiler/profiler.py:709) )
File d:\__repos\masters_bacter_private\venv\Lib\site-packages\torch\profiler\profiler.py:744, in profile._transit_action(self, prev_action, current_action)
[742](file:///D:/{repository_path}/venv/Lib/site-packages/torch/profiler/profiler.py:742) if action_list:
[743](file:///D:/{repository_path}/venv/Lib/site-packages/torch/profiler/profiler.py:743) for action in action_list:
--> [744](file:///D:/{repository_path}/venv/Lib/site-packages/torch/profiler/profiler.py:744) action()
File d:\__repos\masters_bacter_private\venv\Lib\site-packages\torch\profiler\profiler.py:155, in _KinetoProfile.prepare_trace(self)
[141](file:///D:/{repository_path}/venv/Lib/site-packages/torch/profiler/profiler.py:141) def prepare_trace(self):
[142](file:///D:/{repository_path}/venv/Lib/site-packages/torch/profiler/profiler.py:142) self.profiler = prof.profile(
[143](file:///D:/{repository_path}/venv/Lib/site-packages/torch/profiler/profiler.py:143) use_cuda=(ProfilerActivity.CUDA in self.activities),
[144](file:///D:/{repository_path}/venv/Lib/site-packages/torch/profiler/profiler.py:144) use_cpu=(ProfilerActivity.CPU in self.activities),
(...)
[153](file:///D:/{repository_path}/venv/Lib/site-packages/torch/profiler/profiler.py:153) experimental_config=self.experimental_config,
[154](file:///D:/{repository_path}/venv/Lib/site-packages/torch/profiler/profiler.py:154) )
--> [155](file:///D:/{repository_path}/venv/Lib/site-packages/torch/profiler/profiler.py:155) self.profiler._prepare_trace()
File d:\__repos\masters_bacter_private\venv\Lib\site-packages\torch\autograd\profiler.py:284, in profile._prepare_trace(self)
[282](file:///D:/{repository_path}/venv/Lib/site-packages/torch/autograd/profiler.py:282) def _prepare_trace(self):
[283](file:///D:/{repository_path}/venv/Lib/site-packages/torch/autograd/profiler.py:283) self.entered = True
--> [284](file:///D:/{repository_path}/venv/Lib/site-packages/torch/autograd/profiler.py:284) _prepare_profiler(self.config(), self.kineto_activities)
RuntimeError: Can't disable Kineto profiler when it's not running
```
Example setup:
```python
import os
import torch
import torch.nn as nn
from torch.utils.data import DataLoader, Dataset
from torchvision.transforms import ToTensor
from torchvision.datasets import MNIST
import pytorch_lightning as pl
from torch.profiler import profile, record_function, ProfilerActivity
# Define a simple SimCLR model
class SimCLRModel(pl.LightningModule):
def __init__(self, hidden_dim=128, lr=1e-3):
super().__init__()
self.encoder = nn.Sequential(
nn.Linear(28 * 28, 512),
nn.ReLU(),
nn.Linear(512, hidden_dim),
)
self.projection = nn.Sequential(
nn.Linear(hidden_dim, hidden_dim),
nn.ReLU(),
nn.Linear(hidden_dim, hidden_dim),
)
self.lr = lr
def forward(self, x):
h = self.encoder(x.view(x.size(0), -1))
z = self.projection(h)
return z
def training_step(self, batch, batch_idx):
x, _ = batch
z = self(x)
# Dummy loss for demonstration purposes
loss = torch.mean(z)
self.log('train_loss', loss)
return loss
def validation_step(self, batch, batch_idx):
x, _ = batch
z = self(x)
# Dummy loss for demonstration purposes
loss = torch.mean(z)
self.log('val_loss', loss)
return loss
def test_step(self, batch, batch_idx):
x, _ = batch
z = self(x)
# Dummy loss for demonstration purposes
loss = torch.mean(z)
self.log('test_loss', loss)
return loss
def configure_optimizers(self):
optimizer = torch.optim.Adam(self.parameters(), lr=self.lr)
return optimizer
# Define a simple dataset (using MNIST for simplicity)
class ContrastiveMNIST(Dataset):
def __init__(self, root, train=True, transform=ToTensor(), download=True):
self.mnist = MNIST(root, train=train, transform=transform, download=download)
def __len__(self):
return len(self.mnist)
def __getitem__(self, idx):
img, target = self.mnist[idx]
# Create a dummy second view for contrastive learning (same as first for simplicity)
img_pair = img
return img, img_pair
# --- Setup ---
# Define hyperparameters
max_epochs = 3
batch_size = 64
learning_rate = 1e-3
hidden_dimension = 128
accelerator = "gpu" # "cpu" or "cuda"
# Create data loaders
data_dir = os.getcwd() # Use current directory to store MNIST
train_dataset = ContrastiveMNIST(data_dir, train=True, download=True)
val_dataset = ContrastiveMNIST(data_dir, train=False, download=True)
test_dataset = ContrastiveMNIST(data_dir, train=False, download=True)
dataloader_train_simclr = DataLoader(train_dataset, batch_size=batch_size, shuffle=True, num_workers=2)
dataloader_val_simclr = DataLoader(val_dataset, batch_size=batch_size, shuffle=False, num_workers=2)
dataloader_test_simclr = DataLoader(test_dataset, batch_size=batch_size, shuffle=False, num_workers=2)
# Initialize the model
model = SimCLRModel(hidden_dim=hidden_dimension, lr=learning_rate)
```
### Versions
PyTorch version: 2.3.1+cu118
Is debug build: False
CUDA used to build PyTorch: 11.8
ROCM used to build PyTorch: N/A
OS: Microsoft Windows 10 Education (10.0.19045 64-bitowy)
GCC version: Could not collect
Clang version: Could not collect
CMake version: Could not collect
Libc version: N/A
Python version: 3.12.3 (tags/v3.12.3:f6650f9, Apr 9 2024, 14:05:25) [MSC v.1938 64 bit (AMD64)] (64-bit runtime)
Python platform: Windows-10-10.0.19045-SP0
Is CUDA available: True
CUDA runtime version: Could not collect
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration: GPU 0: NVIDIA GeForce RTX 3060
Nvidia driver version: 560.94
cuDNN version: Could not collect
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Name: AMD Ryzen 7 5800X 8-Core Processor
Manufacturer: AuthenticAMD
Family: 107
Architecture: 9
ProcessorType: 3
DeviceID: CPU0
CurrentClockSpeed: 3801
MaxClockSpeed: 3801
L2CacheSize: 4096
L2CacheSpeed: None
Revision: 8448
Versions of relevant libraries:
[pip3] mypy-extensions==1.0.0
[pip3] numpy==1.26.3
[pip3] pytorch-lightning==2.3.3
[pip3] torch==2.3.1+cu118
[pip3] torch-tb-profiler==0.4.3
[pip3] torchmetrics==1.4.0.post0
[pip3] torchvision==0.18.1+cu118
[conda] Could not collect
cc @robieta @chaekit @guotuofeng @guyang3532 @dzhulgakov @davidberard98 @briancoutinho @sraikund16 @sanrise
| true
|
3,034,253,834
|
[dynamo] Guard serialization for FUNCTORCH_STACK_MATCH
|
zhxchen17
|
closed
|
[
"module: cpu",
"fb-exported",
"Merged",
"ciflow/trunk",
"topic: not user facing",
"module: dynamo",
"ciflow/inductor"
] | 9
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #152723
* #152721
* #152716
* #152687
* __->__ #152616
* #152615
Make Functorch interpreters serializable most of the time, so that we can save the guards on functorch states.
## Test Cases:
0. torch.compile() without functorch layers present. Guard should fail with any layer being pushed.
1. torch.compile() nested in vmap.
2. torch.compile() nested in grad.
3. torch.compile() nested in jvp + vmap
4. torch.compile() nested functionalize
5. torch.compile() nested in vmap + grad
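For readers unfamiliar with these combinations, a minimal sketch of what case 1 ("torch.compile() nested in vmap") might look like is (my own illustration, not the PR's test code):
```python
import torch

@torch.compile
def f(x):
    return x.sin() + 1

# The vmap layer is live on the functorch stack while the compiled region runs,
# so the serialized guards must record (and later re-check) that stack state.
y = torch.vmap(f)(torch.randn(4, 3))
```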
Differential Revision: [D74008787](https://our.internmc.facebook.com/intern/diff/D74008787/)
cc @jgong5 @mingfeima @XiaobingSuper @sanchitintel @ashokei @jingxu10 @jerryzh168 @voznesenskym @penguinwu @EikanWang @Guobing-Chen @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @chauhang @amjames
| true
|
3,034,253,735
|
[dynamo] Guard serialization for DUAL LEVEL.
|
zhxchen17
|
closed
|
[
"fb-exported",
"Merged",
"ciflow/trunk",
"topic: not user facing",
"module: dynamo",
"ciflow/inductor"
] | 3
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #152723
* #152721
* #152716
* #152687
* #152616
* __->__ #152615
It seems the dual level counter should be stored in OutputGraph so that the value can be preserved through roundtripping.
Differential Revision: [D74008786](https://our.internmc.facebook.com/intern/diff/D74008786/)
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @chauhang @amjames
| true
|
3,034,252,160
|
[WIP] Make FR vendor generic and try to enable it for gloo
|
fduwjj
|
open
|
[
"oncall: distributed",
"release notes: distributed (c10d)"
] | 3
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #152614
* #152563
* #152585
cc @H-Huang @awgu @wanchaol @fegin @wz337 @wconstab @d4l3k
| true
|
3,034,214,821
|
Revert "Cleanup VS 2019 refs in pytorch (#145863)"
|
xuhancn
|
open
|
[
"triaged",
"open source",
"ciflow/binaries",
"ciflow/trunk",
"release notes: releng",
"ciflow/xpu",
"ci-no-td"
] | 3
|
COLLABORATOR
|
This reverts commit b45e6fa707ced2adb68eaf1a2c1ccb389a6283d7.
Reverted PRs:
https://github.com/pytorch/pytorch/pull/145863
https://github.com/pytorch/pytorch/pull/145319
| true
|
3,034,210,237
|
Enable AOTI for Metal inductor
|
malfet
|
open
|
[
"enhancement",
"module: mps",
"oncall: pt2",
"oncall: export",
"module: aotinductor"
] | 0
|
CONTRIBUTOR
|
### 🚀 The feature, motivation and pitch
Now that `torch.compile` is usable for the MPS backend, we should extend it to Python-less environments (including ExecuTorch), and one avenue of enabling this is AOTI
https://github.com/pytorch/pytorch/blob/4c8dee7986d0da5cd8485b8d84323c425d228891/aten/src/ATen/test/mps_test_metal_library.cpp#L61-L71 contains an example of how a Metal shader generated by Inductor can be compiled and executed
### Alternatives
Invent something new, but why
### Additional context
N/A
cc @kulinseth @albanD @DenisVieriu97 @jhavukainen @chauhang @penguinwu @avikchaudhuri @gmagogsfm @zhxchen17 @tugsbayasgalan @angelayi @suo @ydwu4 @desertfire @chenyang78 @yushangdi @benjaminglass1
| true
|
3,034,187,972
|
Makefile: refactor build, setup and lint rules
|
ariel-anieli
|
open
|
[
"triaged",
"open source",
"topic: not user facing"
] | 2
|
NONE
|
Hello maintainers,
This is my first ever PR to the project; your feedback is much appreciated.
I am proposing to refactor some rules in Makefile. The output is unchanged.
After,
```shell
# make lint -n
lintrunner
# make quicklint -n
lintrunner
# make ios -n
./scripts/build_ios.sh
# make setup-env-cuda -n
make setup-env PYTHON="python3" NIGHTLY_TOOL_OPTS="pull --cuda"
make[1]: Entering directory '/home/ariel/src/pytorch'
if [ -n "" ]; then \
echo "Please commit or stash all changes before running this script"; \
exit 1; \
fi
python3 tools/nightly.py pull --cuda
make[1]: Leaving directory '/home/ariel/src/pytorch'
# make setup_env_cuda -n
make setup-env PYTHON="python3" NIGHTLY_TOOL_OPTS="pull --cuda"
make[1]: Entering directory '/home/ariel/src/pytorch'
if [ -n "" ]; then \
echo "Please commit or stash all changes before running this script"; \
exit 1; \
fi
python3 tools/nightly.py pull --cuda
make[1]: Leaving directory '/home/ariel/src/pytorch'
# git log --oneline -1
c48ea9abba (HEAD -> refactor-makefile, origin/refactor-makefile) Makefile: refactor build, setup and lint rules
```
Before,
```shell
# make lint -n
lintrunner
# make quicklint -n
lintrunner
# make ios -n
./scripts/build_ios.sh
# make setup-env-cuda -n
make setup-env PYTHON="python3" NIGHTLY_TOOL_OPTS="pull --cuda"
make[1]: Entering directory '/home/ariel/src/pytorch'
if [ -n "" ]; then \
echo "Please commit or stash all changes before running this script"; \
exit 1; \
fi
python3 tools/nightly.py pull --cuda
make[1]: Leaving directory '/home/ariel/src/pytorch'
# make setup_env_cuda -n
make setup-env PYTHON="python3" NIGHTLY_TOOL_OPTS="pull --cuda"
make[1]: Entering directory '/home/ariel/src/pytorch'
if [ -n "" ]; then \
echo "Please commit or stash all changes before running this script"; \
exit 1; \
fi
python3 tools/nightly.py pull --cuda
make[1]: Leaving directory '/home/ariel/src/pytorch'
# git log --oneline -1
6f6acb4128 (HEAD -> main, origin/main, origin/HEAD) [AOTI][CPU] Introduce config.cpp.use_decompose_tanh (#152542)
```
| true
|
3,034,170,316
|
Update padding_mode type annotation to use Literal type (PaddingMode)
|
sudiptap
|
open
|
[
"triaged",
"open source",
"topic: not user facing"
] | 3
|
NONE
|
Fixes #152280
| true
|
3,034,159,846
|
[Environment Variable] Use thread-safe getenv functions
|
cyyever
|
open
|
[
"oncall: distributed",
"oncall: jit",
"open source",
"NNC",
"release notes: linalg_frontend"
] | 1
|
COLLABORATOR
|
Use thread-safe getenv wrapper in remaining code.
cc @H-Huang @awgu @wanchaol @fegin @fduwjj @wz337 @wconstab @d4l3k @EikanWang @jgong5 @wenzhe-nrv @sanchitintel
| true
|
3,034,152,488
|
[triton pin update] Run Inductor CI on pin updates for Triton and the PyTorch nightly branch
|
atalman
|
open
|
[
"oncall: releng",
"triaged",
"module: user triton"
] | 0
|
CONTRIBUTOR
|
### 🐛 Describe the bug
We would like to run Inductor CI on Triton pin updates so that we can see any regressions from the pin update and notice any issues before accepting it.
This will most likely require us to be able to upload Triton on the PR so that it can be tested on Inductor CI:
https://hud.pytorch.org/hud/pytorch/pytorch/main/1?per_page=50&name_filter=inductor
### Versions
2.8
cc @msaroufim @jerryzh168 @oulgen @aakhundov @davidberard98
| true
|
3,034,094,316
|
Loops impacting output when utilizing hooks
|
Thomas2419
|
open
|
[
"module: nn",
"triaged"
] | 0
|
NONE
|
### 🐛 Describe the bug
Hello! I believe this is a hook-related issue behaving oddly under loops.
When using a hook and a loop I'm getting unexpected output logits. I'm bringing this here as it ONLY happens when I use both, and thus I believe it is some weird hook+loop PyTorch interaction and not a transformers interaction.
```
import torch, torch.nn.functional as F
from transformers import CLIPModel, CLIPProcessor
from PIL import Image
# -------------------- config ---------------------------------------------
device = "cuda" if torch.cuda.is_available() else "cpu"
model_id = "openai/clip-vit-base-patch16"
# -------------------- load model -----------------------------------------
model = CLIPModel.from_pretrained(
model_id,
attn_implementation="eager",
).to(device).eval()
processor = CLIPProcessor.from_pretrained(model_id)
# Load the image
url = "/content/cat_couch.jpg"
pil_img = Image.open(url)
caption = "a cat on a couch"
proc_inputs = processor(text=[caption],
images=pil_img,
return_tensors="pt",
padding=True)
layer_idx = -2
latents = {}
steps = 5
sim_no_hook = model(**proc_inputs).logits_per_image[0, 0].item()
# Capture the full input tuple
def capture_hook(name):
def hook(module, input, output):
latents[name] = tuple(
x.clone().detach().requires_grad_(True) if isinstance(x, torch.Tensor) else x
for x in input
)
return hook
# Run once to capture original latents
h_text = model.text_model.encoder.layers[layer_idx].register_forward_hook(capture_hook("input_text"))
h_img = model.vision_model.encoder.layers[layer_idx].register_forward_hook(capture_hook("input_img"))
sim_with_hook = model(**proc_inputs).logits_per_image[0, 0].item()
h_text.remove()
h_img.remove()
# Use global CURRENT_INTERP for looped injection
CURRENT_INTERP_TEXT = latents["input_text"][0]
CURRENT_INTERP_IMG = latents["input_img"][0]
# Persistent hook logic
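# NOTE: persistent_hook_text / persistent_hook_img are not defined in this snippet.
# A hypothetical reconstruction (an assumption on my part, not necessarily the code
# used for the numbers below) that re-injects the captured latents could be:
def persistent_hook_text(module, input, output):
    # module.forward() bypasses __call__, so the hook is not re-triggered;
    # returning a value from a forward hook replaces the layer's output.
    return module.forward(CURRENT_INTERP_TEXT, *input[1:])
def persistent_hook_img(module, input, output):
    return module.forward(CURRENT_INTERP_IMG, *input[1:])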
sim_scores_for = []
sim_scores_noloop = [] # NEW: no hook, no injection, pure loop to test
# Register hooks before loop
h_text = model.text_model.encoder.layers[layer_idx].register_forward_hook(persistent_hook_text)
h_img = model.vision_model.encoder.layers[layer_idx].register_forward_hook(persistent_hook_img)
# FOR loop
for _ in range(steps):
sim = model(**proc_inputs).logits_per_image[0, 0].item()
sim_scores_for.append(sim)
h_text.remove()
h_img.remove()
# Also test: same loop style, no hooks (pure loop behavior test)
for _ in range(steps):
sim = model(**proc_inputs).logits_per_image[0, 0].item()
sim_scores_noloop.append(sim)
print(f"HOOK FOR loop scores:")
print(sim_scores_for)
print(f"NO HOOK FOR loop scores:")
print(sim_scores_noloop)
print(f"Without hook: {sim_no_hook:.6f}")
print(f"With hook: {sim_with_hook:.6f}")
```
The outputs:
```
HOOK FOR loop scores:
[26.457609176635742, 26.457609176635742, 26.457609176635742, 26.457609176635742, 26.457609176635742]
NO HOOK FOR loop scores:
[29.929569244384766, 29.929569244384766, 29.929569244384766, 29.929569244384766, 29.929569244384766]
Without hook: 29.929569
With hook: 29.929569
```
### Versions
Collecting environment information...
PyTorch version: 2.6.0+cu124
Is debug build: False
CUDA used to build PyTorch: 12.4
ROCM used to build PyTorch: N/A
OS: Ubuntu 22.04.4 LTS (x86_64)
GCC version: (Ubuntu 11.4.0-1ubuntu1~22.04) 11.4.0
Clang version: 14.0.0-1ubuntu1.1
CMake version: version 3.31.6
Libc version: glibc-2.35
Python version: 3.11.12 (main, Apr 9 2025, 08:55:54) [GCC 11.4.0] (64-bit runtime)
Python platform: Linux-6.1.123+-x86_64-with-glibc2.35
Is CUDA available: False
CUDA runtime version: 12.5.82
CUDA_MODULE_LOADING set to: N/A
GPU models and configuration: Could not collect
Nvidia driver version: Could not collect
cuDNN version: Probably one of the following:
/usr/lib/x86_64-linux-gnu/libcudnn.so.9.2.1
/usr/lib/x86_64-linux-gnu/libcudnn_adv.so.9.2.1
/usr/lib/x86_64-linux-gnu/libcudnn_cnn.so.9.2.1
/usr/lib/x86_64-linux-gnu/libcudnn_engines_precompiled.so.9.2.1
/usr/lib/x86_64-linux-gnu/libcudnn_engines_runtime_compiled.so.9.2.1
/usr/lib/x86_64-linux-gnu/libcudnn_graph.so.9.2.1
/usr/lib/x86_64-linux-gnu/libcudnn_heuristic.so.9.2.1
/usr/lib/x86_64-linux-gnu/libcudnn_ops.so.9.2.1
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 46 bits physical, 48 bits virtual
Byte Order: Little Endian
CPU(s): 2
On-line CPU(s) list: 0,1
Vendor ID: GenuineIntel
Model name: Intel(R) Xeon(R) CPU @ 2.20GHz
CPU family: 6
Model: 79
Thread(s) per core: 2
Core(s) per socket: 1
Socket(s): 1
Stepping: 0
BogoMIPS: 4399.99
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ss ht syscall nx pdpe1gb rdtscp lm constant_tsc rep_good nopl xtopology nonstop_tsc cpuid tsc_known_freq pni pclmulqdq ssse3 fma cx16 pcid sse4_1 sse4_2 x2apic movbe popcnt aes xsave avx f16c rdrand hypervisor lahf_lm abm 3dnowprefetch invpcid_single ssbd ibrs ibpb stibp fsgsbase tsc_adjust bmi1 hle avx2 smep bmi2 erms invpcid rtm rdseed adx smap xsaveopt arat md_clear arch_capabilities
Hypervisor vendor: KVM
Virtualization type: full
L1d cache: 32 KiB (1 instance)
L1i cache: 32 KiB (1 instance)
L2 cache: 256 KiB (1 instance)
L3 cache: 55 MiB (1 instance)
NUMA node(s): 1
NUMA node0 CPU(s): 0,1
Vulnerability Gather data sampling: Not affected
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Mitigation; PTE Inversion
Vulnerability Mds: Vulnerable; SMT Host state unknown
Vulnerability Meltdown: Vulnerable
Vulnerability Mmio stale data: Vulnerable
Vulnerability Reg file data sampling: Not affected
Vulnerability Retbleed: Vulnerable
Vulnerability Spec rstack overflow: Not affected
Vulnerability Spec store bypass: Vulnerable
Vulnerability Spectre v1: Vulnerable: __user pointer sanitization and usercopy barriers only; no swapgs barriers
Vulnerability Spectre v2: Vulnerable; IBPB: disabled; STIBP: disabled; PBRSB-eIBRS: Not affected; BHI: Vulnerable
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Vulnerable
Versions of relevant libraries:
[pip3] numpy==2.0.2
[pip3] nvidia-cublas-cu12==12.4.5.8
[pip3] nvidia-cuda-cupti-cu12==12.4.127
[pip3] nvidia-cuda-nvrtc-cu12==12.4.127
[pip3] nvidia-cuda-runtime-cu12==12.4.127
[pip3] nvidia-cudnn-cu12==9.1.0.70
[pip3] nvidia-cufft-cu12==11.2.1.3
[pip3] nvidia-curand-cu12==10.3.5.147
[pip3] nvidia-cusolver-cu12==11.6.1.9
[pip3] nvidia-cusparse-cu12==12.3.1.170
[pip3] nvidia-cusparselt-cu12==0.6.2
[pip3] nvidia-nccl-cu12==2.21.5
[pip3] nvidia-nvjitlink-cu12==12.4.127
[pip3] nvidia-nvtx-cu12==12.4.127
[pip3] nvtx==0.2.11
[pip3] optree==0.15.0
[pip3] pynvjitlink-cu12==0.5.2
[pip3] torch==2.6.0+cu124
[pip3] torchaudio==2.6.0+cu124
[pip3] torchsummary==1.5.1
[pip3] torchvision==0.21.0+cu124
[pip3] triton==3.2.0
[conda] Could not collect
cc @albanD @mruberry @jbschlosser @walterddr @mikaylagawarecki
| true
|
3,034,063,675
|
AOTI regression on SAM and tts-angular
|
zou3519
|
open
|
[
"high priority",
"triage review",
"triaged",
"oncall: pt2",
"oncall: export",
"module: aotinductor"
] | 4
|
CONTRIBUTOR
|
In aot_inductor_torchbench. See https://hud.pytorch.org/pytorch/pytorch/commit/701c0848b8695daa802c2d7ff2f9177faa6e1fe8#41477577732-box for failing logs.
It looks like these were both previously "pass" but now "fail_to_run", so at least there isn't silent incorrectness.
I'm going to flip the statuses on these so that the inductor-periodic CI becomes green, but we should either look into this or determine that we don't care about them.
cc @ezyang @gchanan @kadeng @msaroufim @chauhang @penguinwu @avikchaudhuri @gmagogsfm @zhxchen17 @tugsbayasgalan @angelayi @suo @ydwu4 @desertfire @chenyang78 @yushangdi @benjaminglass1
| true
|
3,034,060,338
|
Fix some inductor periodic benchmarks
|
zou3519
|
closed
|
[
"Merged",
"ciflow/trunk",
"topic: not user facing",
"module: dynamo",
"ciflow/inductor",
"ciflow/inductor-periodic"
] | 3
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #152621
* __->__ #152605
Some were reporting "pass" consistently on https://hud.pytorch.org/
Those are fine to flip.
I filed a separate issue for the now-regressions for AOTI:
https://github.com/pytorch/pytorch/issues/152606. These should be looked
at.
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @chauhang @amjames
| true
|
3,034,031,609
|
[Testing] Is FindCUDA.cmake from `Modules_CUDA_fix` called at all?
|
malfet
|
open
|
[] | 2
|
CONTRIBUTOR
|
Fixes #ISSUE_NUMBER
| true
|
3,034,023,783
|
[BE] Delete `Module_CUDA_fix`
|
malfet
|
open
|
[
"release notes: build",
"topic: improvements"
] | 3
|
CONTRIBUTOR
|
We should be using upstream find(CUDA) always, shouldn't we?
| true
|
3,034,010,859
|
[testing] 4
|
zou3519
|
closed
|
[
"release notes: releng",
"fx",
"module: inductor",
"ciflow/inductor",
"ciflow/inductor-periodic"
] | 1
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #152602
cc @ezyang @SherlockNoMad @EikanWang @jgong5 @wenzhe-nrv @voznesenskym @penguinwu @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @jiayisunx @ipiszy @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
| true
|
3,033,950,638
|
[multigraph] use backend specializations in compile_and_call_fx_graph
|
bobrenjc93
|
open
|
[
"topic: not user facing",
"module: inductor",
"module: dynamo",
"ciflow/inductor",
"release notes: AO frontend"
] | 2
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #152601
* #152597
* #152596
The goal of this multigraph work is to enable a compiled region that has a single dynamo trace but multiple backend specializations. This work was inspired by vLLM who does this in a somewhat hacky way where they use a custom backend to capture a dynamo graph and then manually invoke compile_fx multiple times to get specialized graphs.
There's really two parts of this work:
**The frontend changes:**
1) we introduce an optional kwarg `backend_specializations` to mark_dynamic that takes in a list of specializations. I debated other methods including specifying specializations via decorators, but ultimately decided this approach was more harmonious. The big issue with decorators is the difficulty of composing well with the rest of the torch.compile ecosystem including graph breaks, lazy initialization of variable trackers and symbolic variables, etc.
**The backend changes (this PR):**
1) We capture the backend_specialization specified in the mark_dynamic API into a SymbolicContext. See changes in `/_dynamo/variables/builder.py`
2) After we are done dynamo tracing, we invoke `call_user_compiler` N + 1 times for N specializations and 1 generic graph. Under the hood this will call compile_fx, which composes nicely with both Async Compile and AOTAutogradCache.
3) When we have specializations, we install a specialized dispatch function that checks each specialization and dispatches to the first one that matches. If none of the specializations match, we dispatch to the generic graph. I decided to do this over returning N different GuardedCodes since 1) it doesn't pollute the dynamo cache (eg. if you have 8 specializations, you would hit the cache limit) 2) it naturally incorporates the hierarchical lattice structure of the guards since the specializations are always necessarily stricter than the generic region's guards.
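The dispatch structure described in (3) is roughly the following (a plain-Python sketch with made-up names, not the actual implementation):
```python
import torch

def make_dispatcher(specializations, generic_fn):
    """specializations: list of (check_fn, compiled_fn) pairs, tried in order."""
    def dispatch(*args, **kwargs):
        for check, compiled in specializations:
            if check(*args, **kwargs):
                return compiled(*args, **kwargs)
        return generic_fn(*args, **kwargs)  # guards of the generic graph still apply
    return dispatch

# Dummy stand-ins for a specialized graph and the generic dynamic graph.
compiled_bs1 = torch.compile(lambda x: x * 2)
compiled_generic = torch.compile(lambda x: x * 2)

dispatcher = make_dispatcher([(lambda x: x.shape[0] == 1, compiled_bs1)], compiled_generic)
dispatcher(torch.randn(1, 8))  # hits the specialization
dispatcher(torch.randn(4, 8))  # falls back to the generic graph
```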
I benchmarked this PR stack with #152596 and found around a 50% reduction when dispatching to the specialized regions:

| true
|
3,033,950,555
|
store backend specializations in StatelessSymbolicContext
|
bobrenjc93
|
closed
|
[
"release notes: fx",
"fx",
"module: dynamo",
"ciflow/inductor"
] | 1
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #152670
* #152601
* __->__ #152600
* #152597
* #152596
cc @ezyang @SherlockNoMad @EikanWang @jgong5 @wenzhe-nrv @voznesenskym @penguinwu @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @jiayisunx @chenyang78 @kadeng @chauhang @amjames
| true
|
3,033,943,220
|
[testing] 3
|
zou3519
|
closed
|
[
"topic: not user facing",
"ciflow/inductor-periodic"
] | 1
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #152599
| true
|
3,033,924,328
|
[ez] fix grammar mistakes in StatefulSymbolicContext comment
|
bobrenjc93
|
closed
|
[
"Merged",
"ciflow/trunk",
"release notes: fx",
"topic: not user facing",
"fx",
"ciflow/inductor"
] | 3
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #152650
* #152601
* #152600
* #152597
* #152596
* __->__ #152598
* #151407
cc @ezyang @SherlockNoMad @EikanWang @jgong5 @wenzhe-nrv
| true
|
3,033,924,252
|
[multigraph] add backend_specialization kwarg to mark_dynamic
|
bobrenjc93
|
open
|
[
"topic: not user facing",
"module: dynamo",
"ciflow/inductor"
] | 2
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #152601
* __->__ #152597
* #152596
The goal of this multigraph work is to enable a compiled region that has a single dynamo trace but multiple backend specializations. This work was inspired by vLLM who does this in a somewhat hacky way where they use a custom backend to capture a dynamo graph and then manually invoke compile_fx multiple times to get specialized graphs.
There's really two parts of this work:
**The frontend changes (this PR):**
1) we introduce an optional kwarg `backend_specializations` to mark_dynamic that takes in a list of specializations. I debated other methods including specifying specializations via decorators, but ultimately decided this approach was more harmonious. The big issue with decorators is the difficulty of composing well with the rest of the torch.compile ecosystem including graph breaks, lazy initialization of variable trackers and symbolic variables, etc.
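As a hypothetical usage sketch (the value format here is my assumption, and this only works with the PR stack applied):
```python
import torch

def f(x):
    return x @ x.t()

x = torch.randn(8, 16)
# dim 0 stays dynamic in the single dynamo trace, but the backend is additionally
# asked to produce graphs specialized for sizes 1 and 8 (proposed API, not yet landed).
torch._dynamo.mark_dynamic(x, 0, backend_specializations=[1, 8])
torch.compile(f)(x)
```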
**The backend changes:**
1) We capture the backend_specialization specified in the mark_dynamic API into a SymbolicContext. See changes in `/_dynamo/variables/builder.py`
2) After we are done dynamo tracing, we invoke `call_user_compiler` N + 1 times for N specializations and 1 generic graph. Under the hood this will call compile_fx, which composes nicely with both Async Compile and AOTAutogradCache.
3) When we have specializations, we install a specialized dispatch function that checks each specialization and dispatches to the first one that matches. If none of the specializations match, we dispatch to the generic graph. I decided to do this over returning N different GuardedCodes since 1) it doesn't pollute the dynamo cache (eg. if you have 8 specializations, you would hit the cache limit) 2) it naturally incorporates the hierarchical lattice structure of the guards since the specializations are always necessarily stricter than the generic region's guards.
I benchmarked this PR stack with #152596 and found around a 50% reduction when dispatching to the specialized regions:

| true
|
3,033,901,683
|
[not for review] benchmark script
|
bobrenjc93
|
open
|
[
"topic: not user facing"
] | 2
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #152601
* #152597
* __->__ #152596
| true
|
3,033,890,781
|
ROCm, 7900 XTX: Pytorch FLASH_ATTENTION SDPA is 2.5x slower than MATH (fp16, head_dim 256, seqlen 4360, 12 heads)
|
FeepingCreature
|
open
|
[
"module: performance",
"module: rocm",
"triaged",
"module: sdpa"
] | 13
|
NONE
|
edit: Title changed to highlight later discovery, original contents preserved for easier reading.
This was originally "ROCm, 7900 XTX: Pytorch SDPA is 2.5x slower than manual implementation with non-continuous v", but it turned out that the non-contiguous v didn't really matter.
---
I was trying to figure out why AuraFlow seemed to be unusually slow in ComfyUI on my 7900 XTX. After a lot of time debugging and reducing together with Gemini 2.5 Pro, to our great surprise, we found that for these matrix sizes, *a compiled manual reimplementation of SDPA* was considerably faster than a Pytorch SDPA call! We baked it down to this benchmark:
```python
import torch
import torch.nn.functional as F
import time
import math
# --- Test Configuration ---
bsz = 2
n_heads = 12
seqlen = 4360
hidden_size = 3072 # Needed for storage calculations
head_dim = hidden_size // n_heads
dtype = torch.float16
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
# Benchmark parameters
warmup_runs = 10
num_runs = 40
# Target shape and stride for the synthesized non-contiguous tensors
target_shape = (bsz, n_heads, seqlen, head_dim)
# Stride derived from sequence: linear(B,S,D) -> view(B,S,H,D') -> transpose(1,2) -> (B,H,S,D')
target_stride = (13393920, 256, 3072, 1)
# --- Function 1: Manual SDPA Core (Matmul -> Softmax -> Matmul) ---
# Expects pre-normed Q and K. Intended for compilation. Output: B H S D'
def manual_sdpa_core_ops(q_normed, k_normed, v):
scale = 1.0 / math.sqrt(q_normed.size(-1))
attn_weights = torch.matmul(q_normed, k_normed.transpose(-2, -1)) * scale
attn_weights = F.softmax(attn_weights.float(), dim=-1).to(q_normed.dtype)
attn_output = torch.matmul(attn_weights, v)
return attn_output
# --- Function 2: Built-in SDPA Core ---
# Expects pre-normed Q and K. Uses F.scaled_dot_product_attention. Run eagerly. Output: B H S D'
def builtin_sdpa_core(q_normed, k_normed, v):
attn_output = F.scaled_dot_product_attention(q_normed, k_normed, v)
return attn_output
# --- Utility: Benchmark Runner ---
def run_benchmark(func, label, qn, kn, v):
print(f"\n--- Benchmarking: {label} ---")
args_tuple = (qn, kn, v)
print("Warm-up...")
with torch.no_grad():
for _ in range(warmup_runs):
_ = func(*args_tuple)
torch.cuda.synchronize()
# Timing
print("Timing...")
total_time = 0.0
with torch.no_grad():
torch.cuda.synchronize()
start_time = time.time()
for _ in range(num_runs):
_ = func(*args_tuple)
torch.cuda.synchronize()
end_time = time.time()
total_time = end_time - start_time
avg_time_ms = (total_time / num_runs) * 1000
print(f"Result: {avg_time_ms:.4f} ms per run")
return avg_time_ms
# --- Main Execution Logic ---
if __name__ == "__main__":
if device.type != 'cuda':
print("This benchmark requires a CUDA-compatible device (ROCm counts).")
exit()
print("--- Bug Report Benchmark: F.sdpa vs Compiled Core (ROCm) ---")
print("Compares F.scaled_dot_product_attention against a torch.compile'd")
print("version of (Matmul -> Softmax -> Matmul). Uses inputs with specific")
print("history: Synth Non-Contig -> Eager LayerNorm -> Benchmark Func.")
print("\n--- Environment ---")
print(f"Device: {torch.cuda.get_device_name(device)}")
print(f"PyTorch Version: {torch.__version__}")
print(f"Input Shapes (Q, K, V) before SDPA: {target_shape}")
print(f"Input dtype: {dtype}")
# --- Synthesize Non-Contiguous Inputs ---
print("\n--- Input Generation ---")
print(f"Synthesizing non-contiguous inputs with target stride {target_stride}...")
num_elements_storage = bsz * seqlen * hidden_size
q = torch.randn(num_elements_storage, dtype=dtype, device=device)
k = torch.randn(num_elements_storage, dtype=dtype, device=device)
v = torch.randn(num_elements_storage, dtype=dtype, device=device)
q = torch.as_strided(q, size=target_shape, stride=target_stride).detach()
k = torch.as_strided(k, size=target_shape, stride=target_stride).detach()
v = torch.as_strided(v, size=target_shape, stride=target_stride).detach()
print(f"Synthetic non-contiguous inputs created. is_contiguous: {q.is_contiguous()}")
print("\nApplying F.layer_norm eagerly to synthetic Q/K...")
q = F.layer_norm(q, (head_dim,)).detach()
k = F.layer_norm(k, (head_dim,)).detach()
# V is not normed
print(f"Eager LN applied. Resulting Q is_contiguous: {q.is_contiguous()}")
print("Compiling manual core function")
compiled_manual_core = torch.compile(manual_sdpa_core_ops)
print("\n--- Validation ---")
output_builtin = builtin_sdpa_core(q, k, v)
output_manual_compiled = compiled_manual_core(q, k, v)
rtol = 1e-2; atol = 1e-2
are_close = torch.allclose(output_builtin, output_manual_compiled, rtol=rtol, atol=atol)
if are_close: print(f"Validation PASSED! Outputs are close (rtol={rtol}, atol={atol}).")
else:
diff = torch.abs(output_builtin.float() - output_manual_compiled.float())
print(f"Validation FAILED! Outputs differ (rtol={rtol}, atol={atol}). Max diff: {diff.max().item()}")
eager_time = run_benchmark(
builtin_sdpa_core,
"Eager: F.scaled_dot_product_attention",
q, k, v
)
# 2. Compiled: Manual SDPA Core (Inputs: Synth Strided -> Eager LN -> Contiguous)
compiled_time = run_benchmark(
compiled_manual_core,
"Compiled: Manual Core (Matmul->Softmax->Matmul)",
q, k, v
)
# --- Summary ---
print("\n\n--- Benchmark Summary (Average ms per run) ---")
print(f"Eager: F.scaled_dot_product_attention: {eager_time:.4f} ms")
print(f"Compiled: Manual Core (Matmul->Softmax->Matmul): {compiled_time:.4f} ms")
print("\n--- Conclusion ---")
print("This benchmark compares the core attention calculation speed.")
print("Inputs are generated with specific non-contiguous strides, then LayerNormed eagerly (resulting in contiguous tensors).")
print("Expected: F.sdpa performance to be similar or better than compiled basic ops.")
print("Observed: Compiled basic ops are significantly faster (~8.6ms vs ~21ms) on my 7900 XTX, ROCm 6.4, Pytorch nightly April 2025.")
```
As you can see, the outputs are identical.
As far as we can tell, the issue goes away if we remove the layer_norm, even though the layer_norm calls are not part of the `torch.compile`d function and their output is contiguous! It's a bit strange and maybe we're missing something.
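For anyone trying to narrow this down, one way to see how much of the gap is backend selection is to force a specific SDPA backend explicitly (standard `torch.nn.attention` API, shown here only as a debugging aid; shapes match the benchmark above):
```python
import torch
import torch.nn.functional as F
from torch.nn.attention import sdpa_kernel, SDPBackend

q = k = v = torch.randn(2, 12, 4360, 256, dtype=torch.float16, device="cuda")

with sdpa_kernel(SDPBackend.FLASH_ATTENTION):  # raises if flash cannot handle this case
    out_flash = F.scaled_dot_product_attention(q, k, v)

with sdpa_kernel(SDPBackend.MATH):  # reference implementation for comparison
    out_math = F.scaled_dot_product_attention(q, k, v)
```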
### Versions
PyTorch version: 2.8.0.dev20250423+rocm6.4
Is debug build: False
CUDA used to build PyTorch: N/A
ROCM used to build PyTorch: 6.4.43482-0f2d60242
OS: Ubuntu 25.04 (x86_64)
GCC version: (Ubuntu 14.2.0-19ubuntu2) 14.2.0
Clang version: 20.1.2 (0ubuntu1)
CMake version: version 3.31.6
Libc version: glibc-2.41
Python version: 3.13.3 (main, Apr 8 2025, 19:55:40) [GCC 14.2.0] (64-bit runtime)
Python platform: Linux-6.14.4-x64v3-xanmod1-x86_64-with-glibc2.41
Is CUDA available: True
CUDA runtime version: Could not collect
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration: Radeon RX 7900 XTX (gfx1100)
Nvidia driver version: Could not collect
cuDNN version: Could not collect
HIP runtime version: 6.4.43482
MIOpen runtime version: 3.4.0
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 48 bits physical, 48 bits virtual
Byte Order: Little Endian
CPU(s): 32
On-line CPU(s) list: 0-31
Vendor ID: AuthenticAMD
Model name: AMD Ryzen 9 7950X3D 16-Core Processor
CPU family: 25
Model: 97
Thread(s) per core: 2
Core(s) per socket: 16
Socket(s): 1
Stepping: 2
Frequency boost: enabled
CPU(s) scaling MHz: 55%
CPU max MHz: 5763,0000
CPU min MHz: 400,0000
BogoMIPS: 8400,09
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ht syscall nx mmxext fxsr_opt pdpe1gb rdtscp lm constant_tsc rep_good amd_lbr_v2 nopl xtopology nonstop_tsc cpuid extd_apicid aperfmperf rapl pni pclmulqdq monitor ssse3 fma cx16 sse4_1 sse4_2 movbe popcnt aes xsave avx f16c rdrand lahf_lm cmp_legacy svm extapic cr8_legacy abm sse4a misalignsse 3dnowprefetch osvw ibs skinit wdt tce topoext perfctr_core perfctr_nb bpext perfctr_llc mwaitx cpb cat_l3 cdp_l3 hw_pstate ssbd mba perfmon_v2 ibrs ibpb stibp ibrs_enhanced vmmcall fsgsbase bmi1 avx2 smep bmi2 erms invpcid cqm rdt_a avx512f avx512dq rdseed adx smap avx512ifma clflushopt clwb avx512cd sha_ni avx512bw avx512vl xsaveopt xsavec xgetbv1 xsaves cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local user_shstk avx512_bf16 clzero irperf xsaveerptr rdpru wbnoinvd cppc arat npt lbrv svm_lock nrip_save tsc_scale vmcb_clean flushbyasid decodeassists pausefilter pfthreshold avic vgif x2avic v_spec_ctrl vnmi avx512vbmi umip pku ospke avx512_vbmi2 gfni vaes vpclmulqdq avx512_vnni avx512_bitalg avx512_vpopcntdq rdpid overflow_recov succor smca fsrm flush_l1d amd_lbr_pmc_freeze
Virtualization: AMD-V
L1d cache: 512 KiB (16 instances)
L1i cache: 512 KiB (16 instances)
L2 cache: 16 MiB (16 instances)
L3 cache: 128 MiB (2 instances)
NUMA node(s): 1
NUMA node0 CPU(s): 0-31
Vulnerability Gather data sampling: Not affected
Vulnerability Ghostwrite: Not affected
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Not affected
Vulnerability Reg file data sampling: Not affected
Vulnerability Retbleed: Not affected
Vulnerability Spec rstack overflow: Vulnerable
Vulnerability Spec store bypass: Vulnerable
Vulnerability Spectre v1: Vulnerable: __user pointer sanitization and usercopy barriers only; no swapgs barriers
Vulnerability Spectre v2: Vulnerable; IBPB: disabled; STIBP: disabled; PBRSB-eIBRS: Not affected; BHI: Not affected
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Versions of relevant libraries:
[pip3] numpy==1.26.4
[pip3] pytorch-triton-rocm==3.3.0+git96316ce5
[pip3] torch==2.8.0.dev20250423+rocm6.4
[pip3] torchaudio==2.6.0.dev20250424+rocm6.3
[pip3] torchsde==0.2.6
[pip3] torchvision==0.22.0.dev20250423+rocm6.3
[conda] Could not collect
cc @msaroufim @jerryzh168 @jeffdaily @sunway513 @jithunnair-amd @pruthvistony @ROCmSupport @dllehr-amd @jataylo @hongxiayang @naromero77amd
| true
|
3,033,812,815
|
[c10d] Add support for ReduceOp::AVG in ProcessGroupMPI for FSDP2
|
nariaki3551
|
open
|
[
"oncall: distributed",
"open source",
"release notes: distributed (c10d)"
] | 6
|
CONTRIBUTOR
|
Hi,
Currently, running FSDP2 with the MPI backend fails. This is because `ProcessGroupMPI` does not support reduce_scatter with `ReduceOp::AVG` used during the backward pass.
However, most MPI implementations (such as OpenMPI) do not natively support an `AVG` reduce operation. To address this, this patch adds support for `ReduceOp::AVG` in `ProcessGroupMPI` as follows:
- The process group performs the collective using `MPI_SUM`, then divides the result by `world_size` to emulate `AVG`.
- A `TORCH_CHECK` is added to ensure the input tensor is of floating-point type. Supporting integer tensors would require additional logic for type conversion, and I believe there is currently no strong use case for averaging integer data, so I limited the AVG op to floating-point tensors.
This patch enables FSDP2 to train correctly using the MPI backend.
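For illustration, the emulation amounts to the following pattern (a minimal Python sketch of what the C++ change does inside `ProcessGroupMPI`; the function name here is my own):
```python
import torch
import torch.distributed as dist

def reduce_scatter_avg(output: torch.Tensor, input: torch.Tensor, group=None) -> None:
    # Backends without a native AVG reduce op: reduce-scatter with SUM,
    # then divide the local shard by the world size.
    assert output.is_floating_point(), "AVG emulation is limited to floating-point tensors"
    dist.reduce_scatter_tensor(output, input, op=dist.ReduceOp.SUM, group=group)
    output.div_(dist.get_world_size(group))
```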
---
### Error Reproduction
Environment: **Ubuntu** 22.04, **Pytorch** v2.8.0a0+gitf84062f, **Open MPI** v5.0.7rc2
<details> <summary>FSDP2 training script</summary>
```python
import os
import time
import argparse
import dataclasses
import torch
from torch import nn
import torch.distributed as dist
from torch.distributed.fsdp import fully_shard
import torch.optim as optim
class SimpleModel(nn.Module):
def __init__(self):
super(SimpleModel, self).__init__()
self.sequential0 = nn.Linear(512, 1024)
self.sequential1 = nn.Linear(1024, 512)
self.last = nn.Linear(512, 10)
def forward(self, x):
x = torch.relu(self.sequential0(x))
x = torch.relu(self.sequential1(x))
return self.last(x)
def fsdp_training(local_rank: int):
rank = dist.get_rank()
world_size = dist.get_world_size()
device = torch.device('cuda', local_rank)
model = SimpleModel().to(device)
fully_shard(model)
gpu_name = torch.cuda.get_device_name(local_rank)
print(f"Rank {rank}/{world_size}: FSDP is model created; #params: {sum(p.numel() for p in model.parameters())}; GPU: {gpu_name}")
if rank == 0:
print(model)
optimizer = optim.AdamW(model.parameters(), lr=0.01)
input_data = torch.randn(2, 512).to(device)
target = torch.randint(0, 10, (2,)).to(device)
print(f"Rank {rank}/{world_size}: Start training")
model.train()
num_epochs = 100
for epoch in range(num_epochs):
optimizer.zero_grad()
output = model(input_data)
loss = nn.CrossEntropyLoss()(output, target)
loss.backward()
optimizer.step()
if rank == 0:
print(f"Epoch {epoch}/{num_epochs}: Loss {loss.item()}")
if __name__ == '__main__':
parser = argparse.ArgumentParser(description='FSDP Training')
args = parser.parse_args()
local_rank = int(os.getenv("LOCAL_RANK", 0))
local_rank = int(os.getenv("OMPI_COMM_WORLD_RANK", 0))
torch.cuda.set_device(local_rank)
dist.init_process_group(backend="mpi")
fsdp_training(local_rank)
dist.destroy_process_group()
```
</details>
Command: `mpirun -n 2 -- python3 test_fsdp2.py`
<details> <summary>Error (Traceback)</summary>
```python
[rank0]: Traceback (most recent call last):
[rank0]: File "/app/pytorch_tests/src/training/test_fsdp2_mpi.py", line 81, in <module>
[rank0]: fsdp_training(local_rank)
[rank0]: File "/app/pytorch_tests/src/training/test_fsdp2_mpi.py", line 52, in fsdp_training
[rank0]: loss.backward()
[rank0]: File "/app/pytorch/torch/_tensor.py", line 648, in backward
[rank0]: torch.autograd.backward(
[rank0]: File "/app/pytorch/torch/autograd/__init__.py", line 354, in backward
[rank0]: _engine_run_backward(
[rank0]: File "/app/pytorch/torch/autograd/graph.py", line 824, in _engine_run_backward
[rank0]: return Variable._execution_engine.run_backward( # Calls into the C++ engine to run the backward pass
[rank0]: File "/app/pytorch/torch/distributed/fsdp/_fully_shard/_fsdp_state.py", line 297, in _root_post_backward_final_callback
[rank0]: fsdp_param_group.post_backward()
[rank0]: File "/app/pytorch/torch/distributed/fsdp/_fully_shard/_fsdp_param_group.py", line 445, in post_backward
[rank0]: ) = foreach_reduce(
[rank0]: File "/app/pytorch/torch/utils/_contextlib.py", line 116, in decorate_context
[rank0]: return func(*args, **kwargs)
[rank0]: File "/app/pytorch/torch/distributed/fsdp/_fully_shard/_fsdp_collectives.py", line 421, in foreach_reduce
[rank0]: dist.reduce_scatter_tensor(
[rank0]: File "/app/pytorch/torch/distributed/c10d_logger.py", line 81, in wrapper
[rank0]: return func(*args, **kwargs)
[rank0]: File "/app/pytorch/torch/distributed/distributed_c10d.py", line 4416, in reduce_scatter_tensor
[rank0]: work.wait()
[rank0]: IndexError: map::at
```
</details>
cc @H-Huang @awgu @wanchaol @fegin @fduwjj @wz337 @wconstab @d4l3k
| true
|
3,033,791,119
|
Flex Attention doesn't scale with custom bias
|
danjenson
|
open
|
[
"triaged",
"oncall: pt2",
"module: pt2-dispatcher",
"module: flex attention"
] | 1
|
NONE
|
### 🐛 Describe the bug
When using FlexAttention with the following custom `RBFBias`, I cannot get FlexAttention to go above ~L=1600 without OOMing on an NVIDIA 24GB 4090. I also find that this implementation is about 6-8x slower than an implementation in JAX. Is there something I can configure to leverage the memory efficiency and speed of other fused attention mechanisms?
```python
#!/usr/bin/env -S PYENV_VERSION=torch python3
import argparse
import sys
import time
import torch
import torch.nn as nn
from torch.nn.attention.flex_attention import flex_attention
torch.set_default_device("cuda:0")
def main(seed: int, N: int, B: int, H: int, L: int, D: int, D_s: int):
torch.manual_seed(seed)
bias = RBFBias(num_heads=4, num_basis=5)
kernel_options = {}
# kernel_options = { # NOTE: doesn't help
# "BLOCK_M": 64,
# "BLOCK_N": 64,
# "BLOCK_M1": 32,
# "BLOCK_N1": 64,
# "BLOCK_M2": 64,
# "BLOCK_N2": 32,
# } # https://github.com/pytorch/pytorch/issues/133254
attn = BiasedFlexAttention(bias, kernel_options)
# attn = torch.compile(attn, dynamic=False) # NOTE: fails
b = sample_batch(B, H, L, D, D_s)
attn(**b) # precompile flex_attention (?)
torch.cuda.synchronize()
times = torch.zeros(N)
for i in range(N):
b = sample_batch(B, H, L, D, D_s)
torch.cuda.synchronize()
start = time.perf_counter()
attn(**b)
torch.cuda.synchronize()
stop = time.perf_counter()
times[i] = stop - start
print(
f"[B={B} H={H} L={L} D={D} D_s={D_s}]: {times.mean():0.6f}±{times.std():0.6f}s"
)
class RBFBias(nn.Module):
def __init__(self, num_heads, num_basis):
super().__init__()
self.alpha = nn.Parameter(torch.randn(num_heads, num_basis))
self.beta = nn.Parameter(torch.randn(num_heads, num_basis))
def forward(self, score, b, h, q_idx, kv_idx, qs_s, ks_s):
q_s = qs_s[b, q_idx]
k_s = ks_s[b, kv_idx]
d_sq = torch.square(q_s - k_s).sum()
alpha, beta = self.alpha[h], self.beta[h]
d_rbf = alpha * torch.exp(-beta * d_sq)
return score + d_rbf.sum()
class BiasedFlexAttention(nn.Module):
def __init__(self, bias: nn.Module, kernel_options: dict = {}):
super().__init__()
self.bias = bias
self.kernel_options = kernel_options
def forward(self, qs, ks, vs, qs_s, ks_s):
def score_mod(score, b, h, q_idx, kv_idx):
return self.bias(score, b, h, q_idx, kv_idx, qs_s, ks_s)
return flex_attention(
qs,
ks,
vs,
score_mod=score_mod,
kernel_options=self.kernel_options,
)
def sample_batch(B: int, H: int, L: int, D: int, D_s: int):
qs, ks, vs = torch.randn(3, B, H, L, D)
qs_s, ks_s = torch.randn(2, B, L, D_s)
return {"qs": qs, "ks": ks, "vs": vs, "qs_s": qs_s, "ks_s": ks_s}
def parse_args(argv):
parser = argparse.ArgumentParser(
prog=argv[0],
formatter_class=argparse.ArgumentDefaultsHelpFormatter,
)
parser.add_argument(
"-seed",
type=int,
default=42,
help="Seed.",
)
parser.add_argument(
"-N",
type=int,
default=1000,
help="Number of trials.",
)
parser.add_argument(
"-B",
type=int,
default=32,
help="Batch size.",
)
parser.add_argument(
"-L",
type=int,
default=1664, # 1024 + 512 + 128
help="Length of sequence to test.",
)
parser.add_argument(
"-H",
type=int,
default=4,
help="Number of attention heads.",
)
parser.add_argument(
"-D",
type=int,
default=16,
help="Embedding dim per head.",
)
parser.add_argument(
"-D_s",
type=int,
default=2,
help="Spatial dimension.",
)
return parser.parse_args(argv[1:])
if __name__ == "__main__":
args = parse_args(sys.argv)
main(**vars(args))
```
### Versions
```
PyTorch version: 2.7.0+cu126
Is debug build: False
CUDA used to build PyTorch: 12.6
ROCM used to build PyTorch: N/A
OS: Void Linux (x86_64)
GCC version: (GCC) 14.2.1 20250405
Clang version: 17.0.6
CMake version: version 3.30.1
Libc version: glibc-2.41
Python version: 3.12.7 (main, Oct 6 2024, 23:32:45) [GCC 13.2.0] (64-bit runtime)
Python platform: Linux-6.12.25_1-x86_64-with-glibc2.41
Is CUDA available: True
CUDA runtime version: 12.6.85
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration: GPU 0: NVIDIA GeForce RTX 4090
Nvidia driver version: 570.144
cuDNN version: Could not collect
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 48 bits physical, 48 bits virtual
Byte Order: Little Endian
CPU(s): 32
On-line CPU(s) list: 0-31
Vendor ID: AuthenticAMD
Model name: AMD Ryzen 9 7950X 16-Core Processor
CPU family: 25
Model: 97
Thread(s) per core: 2
Core(s) per socket: 16
Socket(s): 1
Stepping: 2
Frequency boost: enabled
CPU(s) scaling MHz: 35%
CPU max MHz: 5881.0000
CPU min MHz: 545.0000
BogoMIPS: 8982.57
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ht syscall nx mmxext fxsr_opt pdpe1gb rdtscp lm constant_tsc rep_good amd_lbr_v2 nopl xtopology nonstop_tsc cpuid extd_apicid aperfmperf rapl pni pclmulqdq monitor ssse3 fma cx16 sse4_1 sse4_2 movbe popcnt aes xsave avx f16c rdrand lahf_lm cmp_legacy svm extapic cr8_legacy abm sse4a misalignsse 3dnowprefetch osvw ibs skinit wdt tce topoext perfctr_core perfctr_nb bpext perfctr_llc mwaitx cpb cat_l3 cdp_l3 hw_pstate ssbd mba perfmon_v2 ibrs ibpb stibp ibrs_enhanced vmmcall fsgsbase bmi1 avx2 smep bmi2 erms invpcid cqm rdt_a avx512f avx512dq rdseed adx smap avx512ifma clflushopt clwb avx512cd sha_ni avx512bw avx512vl xsaveopt xsavec xgetbv1 xsaves cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local user_shstk avx512_bf16 clzero irperf xsaveerptr rdpru wbnoinvd cppc arat npt lbrv svm_lock nrip_save tsc_scale vmcb_clean flushbyasid decodeassists pausefilter pfthreshold avic vgif x2avic v_spec_ctrl vnmi avx512vbmi umip pku ospke avx512_vbmi2 gfni vaes vpclmulqdq avx512_vnni avx512_bitalg avx512_vpopcntdq rdpid overflow_recov succor smca fsrm flush_l1d amd_lbr_pmc_freeze
Virtualization: AMD-V
L1d cache: 512 KiB (16 instances)
L1i cache: 512 KiB (16 instances)
L2 cache: 16 MiB (16 instances)
L3 cache: 64 MiB (2 instances)
NUMA node(s): 1
NUMA node0 CPU(s): 0-31
Vulnerability Gather data sampling: Not affected
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Not affected
Vulnerability Reg file data sampling: Not affected
Vulnerability Retbleed: Not affected
Vulnerability Spec rstack overflow: Mitigation; Safe RET
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Enhanced / Automatic IBRS; IBPB conditional; STIBP always-on; RSB filling; PBRSB-eIBRS Not affected; BHI Not affected
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Versions of relevant libraries:
[pip3] numpy==2.2.3
[pip3] nvidia-cublas-cu12==12.6.4.1
[pip3] nvidia-cuda-cupti-cu12==12.6.80
[pip3] nvidia-cuda-nvrtc-cu12==12.6.77
[pip3] nvidia-cuda-runtime-cu12==12.6.77
[pip3] nvidia-cudnn-cu12==9.5.1.17
[pip3] nvidia-cufft-cu12==11.3.0.4
[pip3] nvidia-curand-cu12==10.3.7.77
[pip3] nvidia-cusolver-cu12==11.7.1.2
[pip3] nvidia-cusparse-cu12==12.5.4.2
[pip3] nvidia-cusparselt-cu12==0.6.3
[pip3] nvidia-nccl-cu12==2.26.2
[pip3] nvidia-nvjitlink-cu12==12.6.85
[pip3] nvidia-nvtx-cu12==12.6.77
[pip3] pytorch-lightning==2.5.1
[pip3] torch==2.7.0
[pip3] torchmetrics==1.7.0
[pip3] torchvision==0.21.0
[pip3] triton==3.3.0
[conda] Could not collect
```
cc @zou3519 @bdhirsh @penguinwu @Chillee @drisspg @yanboliang @BoyuanFeng @chauhang @ydwu4
| true
|
3,033,632,252
|
[rattler-build] Cannot detect CUDA when building from source
|
hieupth
|
open
|
[
"needs reproduction",
"module: build",
"triaged"
] | 1
|
NONE
|
Hi, I am building from source using `rattler-build` with this `recipe.yaml`
```yaml
context:
name: pytorch
version: nightly
rev: main
python: 3.12
gcc: 13.3
cuda: 12.8
cudnn: 9.8
archs: 8.6;9.0;10.0;12.0;12.6
package:
name: ${{name|lower}}
version: ${{version}}
source:
git: https://github.com/pytorch/pytorch.git
rev: ${{rev}}
build:
script:
env:
PACKAGE_TYPE: conda
USE_SYSTEM_SLEEF: 1
BUILD_CUSTOM_PROTOBUF: OFF
USE_SYSTEM_PYBIND11: 1
USE_SYSTEM_EIGEN_INSTALL: 1
USE_MKLDNN: 1
USE_CUDA: 1
USE_CUFILE: 1
TORCH_CUDA_ARCH_LIST: "${{archs}}"
CUDA_HOME: $PREFIX/targets/x86_64-linux
CUDAToolkit_BIN_DIR: $PREFIX/bin
CUDAToolkit_ROOT_DIR: $PREFIX
CUDAToolkit_ROOT: $PREFIX
CUDAToolkit_TARGET_DIR: $PREFIX/targets/x86_64-linux
CMAKE_PREFIX_PATH: $PREFIX
CMAKE_LIBRARY_PATH: $PREFIX/lib:$PREFIX/include:$CMAKE_LIBRARY_PATH
content: |
> third_party/NNPACK/cmake/DownloadSix.cmake
$PYTHON setup.py install \
--single-version-externally-managed \
--record=record.txt \
--prefix="$PREFIX"
requirements:
build:
- gcc=${{gcc}}
- gxx=${{gcc}}
- make
- cmake
- ninja
- mkl-devel
- libprotobuf
- protobuf
- nccl
- cudnn=${{cudnn}}
- cuda-toolkit=${{cuda}}
- cuda-compiler=${{cuda}}
- cuda-nvcc=${{cuda}}
- cuda-driver-dev=${{cuda}}
- cuda-cudart-dev=${{cuda}}
- cuda-cupti-dev=${{cuda}}
- cuda-nvrtc-dev=${{cuda}}
- cuda-nvtx-dev=${{cuda}}
- cuda-nvml-dev=${{cuda}}
- cuda-profiler-api=${{cuda}}
- cusparselt
- libcublas-dev
- libcudss-dev
- libcufile-dev
- libcufft-dev
- libcurand-dev
- libcusolver-dev
- libcusparse-dev
host:
- python=3.12
- setuptools
- pip
- six
- numpy
- pyyaml
- mkl-devel
- typing-extensions
- magma
- nccl
- cudnn=${{cudnn}}
- cuda-toolkit=${{cuda}}
- cuda-compiler=${{cuda}}
- cuda-nvcc=${{cuda}}
- cuda-driver-dev=${{cuda}}
- cuda-cudart-dev=${{cuda}}
- cuda-cupti-dev=${{cuda}}
- cuda-nvrtc-dev=${{cuda}}
- cuda-nvtx-dev=${{cuda}}
- cuda-nvml-dev=${{cuda}}
- cuda-profiler-api=${{cuda}}
- cusparselt
- libcublas-dev
- libcudss-dev
- libcufile-dev
- libcufft-dev
- libcurand-dev
- libcusolver-dev
- libcusparse-dev
- libabseil
- libprotobuf
- sleef
- pybind11
- eigen
- zlib
run:
- python=3.12
- mkl
- numpy
- pyyaml
- libnuma
- typing-extensions
- cudnn=${{cudnn}}
- cuda-toolkit=${{cuda}}
- cuda-driver-dev=${{cuda}}
tests:
- script:
- python -c "
import torch;
print("is_available:", torch.cuda.is_available());
print("Device name:", torch.cuda.get_device_name(0));
"
```
I have tried every approach I could find, but it keeps failing with this error:
```bash
-- Could NOT find CUDA (missing: CUDA_INCLUDE_DIRS) (found version "12.8")
│ │ CMake Warning at cmake/public/cuda.cmake:31 (message):
│ │ PyTorch: CUDA cannot be found. Depending on whether you are building
│ │ PyTorch or a PyTorch dependent library, the next warning / error will give
│ │ you more info.
│ │ Call Stack (most recent call first):
│ │ cmake/Dependencies.cmake:44 (include)
│ │ CMakeLists.txt:856 (include)
│ │
│ │ CMake Warning at cmake/Dependencies.cmake:76 (message):
│ │ Not compiling with CUDA. Suppress this warning with -DUSE_CUDA=OFF.
│ │ Call Stack (most recent call first):
│ │ CMakeLists.txt:856 (include)
```
My system is Debian with an RTX 5090 and Nvidia driver 570.133.20. nvidia-smi works correctly and I don't know what happened here. Please help!
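For debugging, a quick sanity check inside the build environment might look like this (a sketch of my own; the paths assume the conda-style CUDA layout used as `CUDA_HOME` in the recipe above):
```python
import os
import shutil

prefix = os.environ.get("PREFIX", "")
header = os.path.join(prefix, "targets", "x86_64-linux", "include", "cuda_runtime.h")
print("cuda_runtime.h present:", os.path.exists(header))  # what CUDA_HOME points at
print("nvcc on PATH:", shutil.which("nvcc"))
```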
cc @malfet @seemethere
| true
|
3,033,571,301
|
Fix: promote scalar to MPS device in exec_binary_kernel
|
KAVYANSHTYAGI
|
open
|
[
"triaged",
"open source",
"release notes: mps"
] | 3
|
NONE
|
**PR Summary**
This PR fixes an inconsistency in torch.copysign on the MPS backend when used with a scalar as the second operand. Scalars were being promoted to CPU tensors by default, leading to incorrect results due to cross-device operations.
**Repro Before Fix**
```python
import torch
t = torch.tensor([1.0, 2.0, 3.0], device="mps")
torch.copysign(t, -2.0)
# Returns: tensor([1., 2., 3.], device='mps:0')
```
**Expected After Fix**
Correct output: `tensor([-1., -2., -3.], device='mps:0')`
The fix ensures that scalars are promoted to the same device (input.device()) in exec_binary_kernel, aligning behavior across backends and preventing silent failures.
**Related Issue**
This addresses the issue discussed in https://github.com/pytorch/pytorch/issues/152582
**CC Maintainers**
@malfet @albanD @kulinseth @DenisVieriu97
Please review: this is a small but important fix to maintain consistency and correctness in binary ops on MPS.
| true
|
3,033,319,153
|
Fix #152280: add Literal[…] PaddingMode to Conv modules
|
AnandVishesh1301
|
open
|
[
"triaged",
"open source",
"release notes: AO frontend"
] | 3
|
NONE
|
## Description
Updates `padding_mode` type annotations in convolution modules to use `Literal` for improved type safety. This PR builds on #152458 by @sujeet4010, addressing unresolved MYPY errors in `torch/ao/nn/qat/modules/conv.py` and adding test coverage.
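For illustration, the annotation pattern looks roughly like this (a sketch; `PaddingMode` is my own alias name and not necessarily the one used in the diff):
```python
from typing import Literal

PaddingMode = Literal["zeros", "reflect", "replicate", "circular"]

def set_padding_mode(padding_mode: PaddingMode) -> str:
    # A static type checker (e.g. mypy) now rejects call sites that pass a
    # string outside this Literal, instead of accepting any str.
    return padding_mode
```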
## Related Issues
- Resolves #152280 (original issue)
- Fixes MYPY errors from #152458
## Changes Made
- Extended `Literal` type to QAT modules.
- Added tests validating type enforcement.
- Addressed CI failures from previous attempt.
Credit to @sujeet4010 for the initial implementation.
| true
|
3,033,318,715
|
[Dynamo] Optimize dedupe region ancestor tracking
|
mlazos
|
open
|
[
"ciflow/trunk",
"module: dynamo",
"ciflow/inductor",
"release notes: dynamo",
"merging"
] | 6
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #152589
* #152572
* #152570
* #152506
* #152410
* #152505
* #152389
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @chauhang @amjames
| true
|
3,033,316,017
|
[WIP] suggest whitelist for dynamic shape recompilations
|
pianpwk
|
open
|
[
"module: dynamo",
"ciflow/inductor",
"release notes: dynamo"
] | 2
|
CONTRIBUTOR
|
For this toy example, running with `TORCH_LOGS="recompiles"`:
```
class Foo(torch.nn.Module):
def __init__(self):
super().__init__()
self.lin = torch.nn.Linear(4, 4)
self.attr = torch.randn(4)
def forward(self, x, y):
return self.lin(x) + self.attr + y
fn = torch.compile(Foo())
# 1
fn(torch.randn(7, 4), torch.randn(4))
# 2
fn(torch.randn(8, 4), torch.randn(4))
# 3
fn.lin = torch.nn.Linear(8, 4)
fn.attr = torch.randn(4)
fn(torch.randn(9, 8), torch.randn(4))
```
logs:
```
V0506 17:15:05.960000 3670767 torch/_dynamo/guards.py:3384] [0/1] [__recompiles] Recompiling function forward in /data/users/pianpwk/pytorch/custom_tests/test_recompiles_tlparse.py:13
V0506 17:15:05.960000 3670767 torch/_dynamo/guards.py:3384] [0/1] [__recompiles] triggered by the following guard failure(s):
V0506 17:15:05.960000 3670767 torch/_dynamo/guards.py:3384] [0/1] [__recompiles] - 0/0: tensor 'x' size mismatch at index 0. expected 7, actual 8
V0506 17:15:05.960000 3670767 torch/_dynamo/guards.py:3384] [0/1] [__recompiles] - The following environment variable would enable dynamic compilation to start, avoiding this recompile: TORCH_COMPILE_DYNAMIC_SOURCES="L['x']"
V0506 17:15:06.083000 3670767 torch/_dynamo/guards.py:3384] [0/2] [__recompiles] Recompiling function forward in /data/users/pianpwk/pytorch/custom_tests/test_recompiles_tlparse.py:13
V0506 17:15:06.083000 3670767 torch/_dynamo/guards.py:3384] [0/2] [__recompiles] triggered by the following guard failure(s):
V0506 17:15:06.083000 3670767 torch/_dynamo/guards.py:3384] [0/2] [__recompiles] - 0/1: tensor 'x' size mismatch at index 1. expected 4, actual 8
V0506 17:15:06.083000 3670767 torch/_dynamo/guards.py:3384] [0/2] [__recompiles] - Multiple size mismatches found. The following environment variable would enable dynamic compilation to start, avoiding this recompile: TORCH_COMPILE_DYNAMIC_SOURCES="L['x'],L['self']._modules['lin']._parameters['weight']"
V0506 17:15:06.083000 3670767 torch/_dynamo/guards.py:3384] [0/2] [__recompiles] - Size guard failed on a parameter, consider using torch._dynamo.config.force_parameter_static_shapes = False to allow dynamism on parameters.
V0506 17:15:06.083000 3670767 torch/_dynamo/guards.py:3384] [0/2] [__recompiles] - 0/0: tensor 'x' size mismatch at index 0. expected 7, actual 9
V0506 17:15:06.083000 3670767 torch/_dynamo/guards.py:3384] [0/2] [__recompiles] - Multiple size mismatches found. The following environment variable would enable dynamic compilation to start, avoiding this recompile: TORCH_COMPILE_DYNAMIC_SOURCES="L['x'],L['self']._modules['lin']._parameters['weight']"
V0506 17:15:06.083000 3670767 torch/_dynamo/guards.py:3384] [0/2] [__recompiles] - Size guard failed on a parameter, consider using torch._dynamo.config.force_parameter_static_shapes = False to allow dynamism on parameters.
```
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @chauhang @amjames
| true
|
3,033,254,808
|
[Inductor] Introduce Wrapper IR line for symbolic call args
|
blaine-rister
|
closed
|
[
"Merged",
"ciflow/trunk",
"module: inductor",
"ciflow/inductor",
"release notes: inductor (aoti)"
] | 3
|
CONTRIBUTOR
|
Preparatory refactor for https://github.com/pytorch/pytorch/pull/146942.
This PR introduces a new wrapper IR line to represent symbolic call args. This deletes a little bit of duplicated code between the Python and C++ backends. In the main PR, having a Wrapper IR line for this also tells the FX backend what this part of the wrapper code is doing. Before this PR, symbolic call args generated raw Python lines, which confuse the FX converter.
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
| true
|
3,033,115,488
|
[2/N] Use std::filesystem
|
cyyever
|
open
|
[
"oncall: distributed",
"triaged",
"open source",
"ciflow/inductor",
"release notes: inductor (aoti)"
] | 5
|
COLLABORATOR
|
Use std::filesystem in most Inductor code. This is a follow-up to #152288.
The result check for `std::filesystem::create_directories` has been fixed, because it returns false when the directory to create already exists (which is not an error).
cc @H-Huang @awgu @wanchaol @fegin @fduwjj @wz337 @wconstab @d4l3k
| true
|
3,033,099,094
|
[c10d][fr] Decouple the core logic of FR with the entry and event type
|
fduwjj
|
closed
|
[
"oncall: distributed",
"fb-exported",
"Merged",
"ciflow/trunk",
"release notes: distributed (c10d)"
] | 6
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #152614
* #152563
* __->__ #152585
We want to make FR generic enough to be reused, so the first step is to make FR a template struct so that most of the common code logic can be shared. The reason for this is that CudaEvent does not inherit from c10::Event, and we just want to swap the event part: for NCCL we use CudaEvent, and for the rest of the backends we use c10::Event.
cc @H-Huang @awgu @wanchaol @fegin @wz337 @wconstab @d4l3k
Differential Revision: [D74262695](https://our.internmc.facebook.com/intern/diff/D74262695)
| true
|
3,033,072,933
|
How does torch.cudagraph capture a hybrid graph?
|
ghostplant
|
closed
|
[] | 1
|
NONE
|
I have a model that contains CUDA operations in some places and CPU operators in others. I want to capture the whole graph as a single CUDAGraph to replay. Is this possible in PyTorch?
| true
|
3,033,072,462
|
add support for 0 size shardedTensor and recalculate metadata from all_gather
|
duduyi2013
|
closed
|
[
"oncall: distributed",
"fb-exported",
"Merged",
"ciflow/trunk",
"release notes: distributed (sharded)"
] | 11
|
CONTRIBUTOR
|
Summary:
Change set:
1. A ShardedTensor can have size 0 initially; the current check does not pass when the size is 0, so handling for that is added here.
2. When we call `ShardedTensor._init_from_local_shards`, it assumes all the provided metadata is correct and only all_gathers to double-check. In the new case the metadata can be all zero-sized while the local tensor has an actual size, so we need the ability to recalculate the local/global metadata from the local tensor by all_gathering that information (see the sketch below).
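A minimal sketch of the recalculation idea (my own illustration, not the actual implementation):
```python
import torch
import torch.distributed as dist

def gather_local_shard_sizes(local_tensor: torch.Tensor, group=None) -> list:
    # Each rank contributes the size of its actual local shard; the gathered
    # list can then be used to rebuild the local/global ShardedTensor metadata
    # instead of trusting the (possibly zero-sized) metadata passed in.
    sizes: list = [None] * dist.get_world_size(group)
    dist.all_gather_object(sizes, list(local_tensor.size()), group=group)
    return sizes
```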
Test Plan: I don't see an associated UT; I have tested this with the diff stack D73274786.
Differential Revision: D73903933
cc @H-Huang @awgu @wanchaol @fegin @fduwjj @wz337 @wconstab @d4l3k
| true
|
3,033,047,728
|
[MPS] Binary kernels produce incorrect results when one of the tensor arguments is from a wrapped scalar
|
qqaatw
|
closed
|
[
"triaged",
"module: regression",
"module: correctness (silent)",
"module: mps"
] | 4
|
COLLABORATOR
|
### 🐛 Describe the bug
Repro:
```python
import torch
tcpu = torch.tensor([1.0,2.0,3.0], device="cpu")
torch.copysign(tcpu, -2.0) # tensor([-1., -2., -3.])
t = torch.tensor([1.0,2.0,3.0], device="mps")
torch.copysign(t, -2.0) # tensor([1., 2., 3.], device='mps:0')
```
Internally, the scalar is wrapped into a cpu tensor and then re-dispatched.
Three possible solutions here:
1. Wrap the scalar into an mps tensor instead of a cpu tensor.
2. Send the cpu tensor to mps in the binary kernel.
3. Error out rather than produce incorrect results.
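In the meantime, a user-side workaround sketch (assuming the wrapped-CPU-scalar path is avoided when both operands are already MPS tensors):
```python
import torch

t = torch.tensor([1.0, 2.0, 3.0], device="mps")
s = torch.tensor(-2.0, device="mps")  # materialize the scalar on MPS explicitly
print(torch.copysign(t, s))           # expected: tensor([-1., -2., -3.], device='mps:0')
```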
### Versions
nightly
cc @kulinseth @albanD @malfet @DenisVieriu97 @jhavukainen
| true
|
3,033,013,295
|
[invoke_subgraph] rename identifiers to prevent python mangling
|
anijain2305
|
closed
|
[
"Merged",
"ciflow/trunk",
"topic: not user facing",
"module: dynamo",
"ciflow/inductor"
] | 11
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #152494
* #152490
* #152383
* #152384
* __->__ #152581
* #152547
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @chauhang @amjames
| true
|
3,032,942,578
|
[cutlass backend] cache filtered ops based on layouts
|
henrylhtsang
|
closed
|
[
"fb-exported",
"Merged",
"ciflow/trunk",
"topic: not user facing",
"module: inductor",
"ciflow/inductor"
] | 7
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #153006
* __->__ #152580
Differential Revision: [D73972687](https://our.internmc.facebook.com/intern/diff/D73972687/)
Add a cache to store the list of filtered ops for a specific shape + layout + dtype (i.e. hashed on input_nodes).
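Conceptually (a sketch with made-up names, not the actual code):
```python
_filtered_ops_cache: dict = {}

def filtered_ops_for(layout_key: tuple, all_ops: list, is_supported) -> list:
    # Key the cache on (shape, layout, dtype) of the input nodes; re-filtering
    # the full op list is the expensive part, so it only runs once per key.
    if layout_key not in _filtered_ops_cache:
        _filtered_ops_cache[layout_key] = [op for op in all_ops if is_supported(op)]
    return _filtered_ops_cache[layout_key]
```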
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
| true
|
3,032,932,963
|
[aoti] skip input symbol codegen for sympy expr w/ many symbols
|
ColinPeppler
|
closed
|
[
"Merged",
"ciflow/trunk",
"module: inductor",
"ciflow/inductor",
"release notes: inductor (aoti)"
] | 3
|
CONTRIBUTOR
|
The issue was that
- symbol ids appeared out of order w.r.t. the order of the forward inputs
```
def forward(arg0 # [(s3 - 1) + s4, 32], arg1 #[(s3 - 1)] ..)
```
- this causes codegen to fail because it expects all the base symbols `s4`, `s3` to have been codegen-ed already.
- we can skip codegen-ing sympy expressions with multiple symbols, e.g. `(s3 - 1) + s4`, because `s3` and `s4` will be codegen-ed from other inputs.
```
# for example
s3 = arg1.size(0) + 1
s4 = argN.size(0)
```
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #152579
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
| true
|
3,032,902,015
|
[testing] 1
|
zou3519
|
closed
|
[
"release notes: releng",
"fx",
"module: inductor",
"ciflow/inductor",
"ciflow/inductor-periodic"
] | 1
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #152578
cc @ezyang @SherlockNoMad @EikanWang @jgong5 @wenzhe-nrv @voznesenskym @penguinwu @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @jiayisunx @ipiszy @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
| true
|
3,032,891,842
|
[cutlass backend] Minor lru_cache to slightly speed up filtering ops
|
henrylhtsang
|
closed
|
[
"fb-exported",
"Merged",
"Reverted",
"ciflow/trunk",
"topic: not user facing",
"module: inductor",
"ciflow/inductor",
"ci-no-td",
"ciflow/inductor-periodic"
] | 15
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #152577
For the default level, filtering time went from 0.11332 seconds to 0.10064 seconds.
You can't apply lru_cache too aggressively; for example, hashing a cutlass op takes a long time.
Removing a log further brings it down to 0.07202 seconds.
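As an illustration of that constraint (a made-up helper, not the actual code), the cache pays off only on cheaply hashable keys such as a dtype string, not on a whole cutlass op object:
```python
from functools import lru_cache

@lru_cache(maxsize=None)
def dtype_is_supported(dtype_name: str) -> bool:
    # Hashing the small string key is cheap; hashing a full cutlass op is not,
    # so memoization is applied at this granularity instead.
    return dtype_name in ("f16", "bf16", "f32")
```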
Differential Revision: [D73971021](https://our.internmc.facebook.com/intern/diff/D73971021/)
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
| true
|
3,032,875,949
|
[Inductor] Fix int check again
|
mlazos
|
closed
|
[
"Merged",
"ciflow/trunk",
"module: inductor",
"ciflow/inductor",
"release notes: inductor"
] | 3
|
CONTRIBUTOR
|
Made an oss change to a diff train diff
@diff-train-skip-merge
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
| true
|
3,032,861,344
|
[IR] Input Adapter refactor prototype (#152459)
|
felixsu2006
|
closed
|
[
"fb-exported",
"Merged",
"ciflow/trunk",
"release notes: export"
] | 9
|
CONTRIBUTOR
|
Summary:
1. Adds an `input` field to the `_adapt_flat_args` function
2. In `process_forward_inputs`, `reorder_kwargs` now does nothing if no kwargs are provided (previously it would error)
3. Passes `args` as input to `_adapt_flat_args`
These changes are made to update the InputAdapter
see more context in D73811508
Test Plan: see D73811508
Differential Revision: D73945419
| true
|
3,032,857,106
|
Added documentation for nonzero_static function (#152347)
|
sanjai-11
|
closed
|
[
"triaged",
"open source",
"topic: not user facing"
] | 7
|
NONE
|
Fixes #152347
This PR adds documentation for the nonzero_static function in PyTorch.
| true
|
3,032,848,140
|
Allow decomposeK to fuse
|
PaulZhang12
|
open
|
[
"module: inductor",
"ciflow/inductor"
] | 2
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #152573
* #150654
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
| true
|
3,032,830,059
|
[Dynamo] Fix typing in graph_deduplication.py
|
mlazos
|
open
|
[
"topic: not user facing",
"module: dynamo",
"ciflow/inductor"
] | 3
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #152589
* __->__ #152572
* #152570
* #152506
* #152410
* #152505
* #152389
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @chauhang @amjames
| true
|
3,032,815,194
|
[export] Ignore None buffers
|
angelayi
|
closed
|
[
"Merged",
"ciflow/trunk",
"release notes: export"
] | 3
|
CONTRIBUTOR
|
Fixes https://github.com/pytorch/pytorch/issues/152467
| true
|
3,032,802,827
|
[Hierarchical Compile] Replace tracing alias and mutation check with dynamo impl
|
mlazos
|
open
|
[
"ciflow/trunk",
"module: dynamo",
"ciflow/inductor",
"release notes: dynamo"
] | 3
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #152589
* #152572
* __->__ #152570
* #152506
* #152410
* #152505
* #152389
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @chauhang @amjames
| true
|
3,032,723,115
|
[ROCm] Update spack includes
|
jithunnair-amd
|
open
|
[
"module: rocm",
"triaged",
"open source",
"release notes: rocm",
"ciflow/rocm"
] | 6
|
COLLABORATOR
|
* Cleans up code in `caffe2/CMakeLists.txt` to remove individual ROCm library include paths and use `ROCM_INCLUDE_DIRS` CMake var instead
* `ROCM_INCLUDE_DIRS` CMake var is set in `cmake/public/LoadHIP.cmake` by adding all the ROCm packages that PyTorch depends on
* `rocm_version.h` is provided by the `rocm-core` package, so use the include directory for that component to be compliant with Spack
* Move `find_package_and_print_version(hip REQUIRED CONFIG)` earlier so that `hip_version.h` can be located in the hip package include dir for Spack
* `list(REMOVE_DUPLICATES ROCM_INCLUDE_DIRS)` to remove duplicate `/opt/rocm/include` entries in the non-Spack case
* Remove user-provided env var `ROCM_INCLUDE_DIRS` since `ROCM_PATH` already exists as a user-provided env var, which should be sufficient to locate the include directories for ROCm.
cc @jeffdaily @sunway513 @pruthvistony @ROCmSupport @dllehr-amd @jataylo @hongxiayang @naromero77amd
| true
|
3,032,717,939
|
Allow Metal Binary iterator to take CPUScalar operands
|
skotapati
|
closed
|
[
"open source",
"release notes: mps",
"ciflow/mps"
] | 2
|
COLLABORATOR
|
Currently the Metal binary kernel can only take MPS tensors and errors out if a scalar value is passed in. The following change allows the binary kernel to work with CPU scalar inputs, without the need to initialize a new MPS tensor.
This is necessary for enabling binary logical/comparison ops via the Metal kernel, which will be added in a follow-up PR.
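With this change, a binary op with a CPU scalar operand on MPS should work directly, e.g. (illustration only):
```python
import torch

t = torch.tensor([1.0, 2.0, 3.0], device="mps")
print(torch.copysign(t, -2.0))  # scalar second operand, no explicit MPS tensor needed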
| true
|
3,032,702,917
|
🐛 Add `ciflow/pull`🦋
|
malfet
|
closed
|
[
"Merged",
"topic: not user facing"
] | 3
|
CONTRIBUTOR
|
To make it easier to work around GitHub reliability issues, when it sometimes fails to schedule `on: pull_request` workflows
See https://github.com/pytorch/pytorch/issues/151322
But alas, it does not fix the problem at hand...
| true
|
3,032,688,256
|
[Benchmark] High compilation time variance on benchmark dashboards
|
huydhn
|
open
|
[
"module: ci",
"triaged",
"module: infra"
] | 0
|
CONTRIBUTOR
|
This issue was reported by the compiler team (@zou3519, @oulgen): compilation time shows a high variance across runs. This happens on both the PT2 (no compiler cache) and CacheBench dashboards, which suggests an underlying problem with the runner.


One potential explanation is that the H100/A100 runners where these benchmarks run are multi-tenant, so other jobs running in parallel on the same runner could cause this.
cc @seemethere @malfet @pytorch/pytorch-dev-infra @ZainRizvi @clee2000
| true
|
3,032,687,653
|
Fix assertion in reorder_communication_preserving_peak_memory
|
wconstab
|
closed
|
[
"Merged",
"ciflow/trunk",
"topic: not user facing",
"module: inductor",
"ciflow/inductor"
] | 4
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #152565
>=0 is practically correct because we do model the runtime of some ops as 0.
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
| true
|
3,032,678,721
|
`torch.randint` can't handle large `high` argument (and in general high range of `torch.uint64`)
|
vadimkantorov
|
closed
|
[
"module: docs",
"triaged",
"actionable",
"module: python frontend"
] | 3
|
CONTRIBUTOR
|
### 🐛 Describe the bug
[Docs for `manual_seed`](https://pytorch.org/docs/stable/generated/torch.Generator.html#torch.Generator.manual_seed) say `The desired seed. Value must be within the inclusive range [-0x8000_0000_0000_0000, 0xffff_ffff_ffff_ffff].`. So trying to generate a seed:
`python -c 'import torch; print(torch.randint(-0x8000_0000_0000_0000, 0xffff_ffff_ffff_ffff +1, (1, ), dtype=torch.int64))'` prints `RuntimeError: Overflow when unpacking long`
The lower/upper bounds for the arguments are not specified in the docs https://pytorch.org/docs/stable/generated/torch.randint.html :(
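A workaround sketch for the seed-generation use case: draw the seed with Python's arbitrary-precision RNG inside the documented inclusive range, and only then hand it to the generator:
```python
import random
import torch

seed = random.randint(-0x8000_0000_0000_0000, 0xFFFF_FFFF_FFFF_FFFF)  # documented range
g = torch.Generator()
g.manual_seed(seed)
```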
### Versions
2.6.0.dev20241007+cpu
cc @svekars @sekyondaMeta @AlannaBurke @albanD
| true
|
3,032,645,173
|
[c10d][fr] Make FR vendor neutral so that other backends can use it
|
fduwjj
|
closed
|
[
"oncall: distributed",
"Merged",
"ciflow/trunk",
"release notes: distributed (c10d)"
] | 7
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #152614
* __->__ #152563
* #152585
The current FR code is built with `USE_C10D_NCCL`; we should remove that to make it generic. We keep the existing API used by NCCL for backward compatibility, because many use cases rely on FR with NCCL. The generic version with c10::Event can then be used for other backends like Gloo, etc.
The current unit tests should cover the change.
cc @H-Huang @awgu @wanchaol @fegin @wz337 @wconstab @d4l3k
| true
|
3,032,644,811
|
xpu: rely on sycl/sycl.hpp to include bfloat16.hpp
|
dvrogozh
|
open
|
[
"triaged",
"open source",
"ciflow/trunk",
"ciflow/xpu",
"release notes: xpu",
"module: xpu"
] | 18
|
CONTRIBUTOR
|
Fixes: https://github.com/intel/torch-xpu-ops/issues/1503
The `sycl/ext/oneapi/bfloat16.hpp` header file is a DPC++ compiler internal header. It is not documented for use (see the extension specification linked below) and is not guaranteed to exist. Instead, the documented usage of the extension is to rely on including `sycl/sycl.hpp`, which in turn includes the `bfloat16.hpp` header (as an implementation detail).
We ran into issues by explicitly including the `bfloat16.hpp` sycl header within a user-facing production environment where the `intel-sycl-rt` wheel is installed (which is a dependency of the `torch` wheel built and publicly available for xpu). The compiler includes this file from `intel-sycl-rt`, and due to the `#pragma once` usage its content is included again, producing redefinitions of the symbols in this file (the previous inclusion comes from `sycl/sycl.hpp`):
```
In file included from /workspace/lib/python3.12/site-packages/torch/include/c10/util/BFloat16.h:23:
/opt/intel/oneapi/compiler/2025.0/bin/compiler/../../include/sycl/ext/oneapi/bfloat16.hpp:60:23: error: redefinition of 'BF16VecToFloatVec'
60 | template <int N> void BF16VecToFloatVec(const bfloat16 src[N], float dst[N]) {
| ^
/workspace/include/sycl/ext/oneapi/bfloat16.hpp:60:23: note: previous definition is here
60 | template <int N> void BF16VecToFloatVec(const bfloat16 src[N], float dst[N]) {
|
```
While the SYCL header files themselves can be improved (the `#pragma once` dropped), we still must correct the usage of the sycl `bfloat16.hpp` header in pytorch, i.e. drop it. This fortunately helps address the reported redefinition issue, though a follow-up on the compiler side is still required.
Also, using `SYCL_EXT_ONEAPI_BFLOAT16_MATH_FUNCTIONS` to guard the inclusion of `sycl/sycl.hpp` does not make sense, since that macro is defined in this very header. We should use `SYCL_LANGUAGE_VERSION` instead, which is defined at the compiler level.
See: https://github.com/intel/llvm/blob/f958dce28053dff145cd725ff57bc4ce94cb94d7/sycl/doc/extensions/experimental/sycl_ext_oneapi_bfloat16_math_functions.asciidoc
CC: @EikanWang, @guangyey, @gujinghui
cc @gujinghui @EikanWang @fengyuan14 @guangyey
| true
|
3,032,596,817
|
DISABLED test_graph_partition_reorder_cpu_and_gpu_interleave (__main__.CudaGraphTreeTests)
|
pytorch-bot[bot]
|
open
|
[
"module: rocm",
"triaged",
"module: flaky-tests",
"skipped",
"oncall: pt2",
"module: inductor"
] | 2
|
NONE
|
Platforms: rocm
This test was disabled because it is failing in CI. See [recent examples](https://hud.pytorch.org/flakytest?name=test_graph_partition_reorder_cpu_and_gpu_interleave&suite=CudaGraphTreeTests&limit=100) and the most recent trunk [workflow logs](https://github.com/pytorch/pytorch/runs/41432152468).
Over the past 3 hours, it has been determined flaky in 3 workflow(s) with 4 failures and 3 successes.
**Debugging instructions (after clicking on the recent samples link):**
DO NOT ASSUME THINGS ARE OKAY IF THE CI IS GREEN. We now shield flaky tests from developers so CI will thus be green but it will be harder to parse the logs.
To find relevant log snippets:
1. Click on the workflow logs linked above
2. Click on the Test step of the job so that it is expanded. Otherwise, the grepping will not work.
3. Grep for `test_graph_partition_reorder_cpu_and_gpu_interleave`
4. There should be several instances run (as flaky tests are rerun in CI) from which you can study the logs.
<details><summary>Sample error message</summary>
```
Traceback (most recent call last):
File "/var/lib/jenkins/pytorch/test/inductor/test_cudagraph_trees.py", line 3206, in test_graph_partition_reorder_cpu_and_gpu_interleave
self.assertEqual(self.get_manager().new_graph_id().id, 1)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 4095, in assertEqual
raise error_metas.pop()[0].to_error( # type: ignore[index]
AssertionError: Scalars are not equal!
Expected 1 but got 2.
Absolute difference: 1
Relative difference: 1.0
To execute this test, run the following from the base repo dir:
PYTORCH_TEST_WITH_ROCM=1 python test/inductor/test_cudagraph_trees.py CudaGraphTreeTests.test_graph_partition_reorder_cpu_and_gpu_interleave
This message can be suppressed by setting PYTORCH_PRINT_REPRO_ON_FAILURE=0
```
</details>
Test file path: `inductor/test_cudagraph_trees.py`
cc @jeffdaily @sunway513 @jithunnair-amd @pruthvistony @ROCmSupport @dllehr-amd @jataylo @hongxiayang @naromero77amd @clee2000 @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
| true
|
3,032,596,815
|
DISABLED test_pending_fusion_pro_and_epi (__main__.TestPrologueFusion)
|
pytorch-bot[bot]
|
open
|
[
"triaged",
"module: flaky-tests",
"skipped",
"oncall: pt2",
"module: inductor"
] | 2
|
NONE
|
Platforms: rocm, inductor
This test was disabled because it is failing in CI. See [recent examples](https://hud.pytorch.org/flakytest?name=test_pending_fusion_pro_and_epi&suite=TestPrologueFusion&limit=100) and the most recent trunk [workflow logs](https://github.com/pytorch/pytorch/runs/41442212447).
Over the past 3 hours, it has been determined flaky in 3 workflow(s) with 3 failures and 3 successes.
**Debugging instructions (after clicking on the recent samples link):**
DO NOT ASSUME THINGS ARE OKAY IF THE CI IS GREEN. We now shield flaky tests from developers so CI will thus be green but it will be harder to parse the logs.
To find relevant log snippets:
1. Click on the workflow logs linked above
2. Click on the Test step of the job so that it is expanded. Otherwise, the grepping will not work.
3. Grep for `test_pending_fusion_pro_and_epi`
4. There should be several instances run (as flaky tests are rerun in CI) from which you can study the logs.
<details><summary>Sample error message</summary>
```
Traceback (most recent call last):
File "/var/lib/jenkins/pytorch/test/inductor/test_max_autotune.py", line 1550, in test_pending_fusion_pro_and_epi
).run(code[0])
RuntimeError: Expected to not find ".run(" but found it
# Topologically Sorted Source Nodes: [relu], Original ATen: [aten.relu]
stream0 = get_raw_stream(0)
triton_poi_fused_relu_1.run(buf2, 16384, stream=stream0)
~~~~~ <--- HERE
return (buf2, )
From CHECK-NOT: .run(
To execute this test, run the following from the base repo dir:
PYTORCH_TEST_WITH_ROCM=1 python test/inductor/test_max_autotune.py TestPrologueFusion.test_pending_fusion_pro_and_epi
This message can be suppressed by setting PYTORCH_PRINT_REPRO_ON_FAILURE=0
```
</details>
Test file path: `inductor/test_max_autotune.py`
cc @clee2000 @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
| true
|
3,032,594,687
|
DISABLED test_comprehensive_signal_windows_hamming_cuda_float32 (__main__.TestInductorOpInfoCUDA)
|
pytorch-bot[bot]
|
open
|
[
"high priority",
"triaged",
"module: flaky-tests",
"skipped",
"oncall: pt2",
"module: inductor"
] | 3
|
NONE
|
Platforms: inductor
This test was disabled because it is failing in CI. See [recent examples](https://hud.pytorch.org/flakytest?name=test_comprehensive_signal_windows_hamming_cuda_float32&suite=TestInductorOpInfoCUDA&limit=100) and the most recent trunk [workflow logs](https://github.com/pytorch/pytorch/runs/41444319438).
Over the past 3 hours, it has been determined flaky in 5 workflow(s) with 5 failures and 5 successes.
**Debugging instructions (after clicking on the recent samples link):**
DO NOT ASSUME THINGS ARE OKAY IF THE CI IS GREEN. We now shield flaky tests from developers so CI will thus be green but it will be harder to parse the logs.
To find relevant log snippets:
1. Click on the workflow logs linked above
2. Click on the Test step of the job so that it is expanded. Otherwise, the grepping will not work.
3. Grep for `test_comprehensive_signal_windows_hamming_cuda_float32`
4. There should be several instances run (as flaky tests are rerun in CI) from which you can study the logs.
<details><summary>Sample error message</summary>
```
Traceback (most recent call last):
File "/opt/conda/envs/py_3.12/lib/python3.12/site-packages/torch/testing/_internal/common_device_type.py", line 1135, in test_wrapper
return test(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^
File "/opt/conda/envs/py_3.12/lib/python3.12/site-packages/torch/testing/_internal/common_device_type.py", line 1434, in only_fn
return fn(self, *args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/conda/envs/py_3.12/lib/python3.12/site-packages/torch/testing/_internal/common_utils.py", line 2291, in wrapper
fn(*args, **kwargs)
File "/opt/conda/envs/py_3.12/lib/python3.12/site-packages/torch/testing/_internal/common_device_type.py", line 1215, in dep_fn
return fn(slf, *args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/conda/envs/py_3.12/lib/python3.12/site-packages/torch/testing/_internal/common_device_type.py", line 1215, in dep_fn
return fn(slf, *args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/conda/envs/py_3.12/lib/python3.12/site-packages/torch/testing/_internal/common_device_type.py", line 1215, in dep_fn
return fn(slf, *args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/conda/envs/py_3.12/lib/python3.12/site-packages/torch/testing/_internal/common_utils.py", line 1612, in wrapper
fn(*args, **kwargs)
File "/opt/conda/envs/py_3.12/lib/python3.12/site-packages/torch/testing/_internal/common_utils.py", line 1534, in wrapper
fn(*args, **kwargs)
File "/opt/conda/envs/py_3.12/lib/python3.12/unittest/mock.py", line 1396, in patched
return func(*newargs, **newkeywargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/conda/envs/py_3.12/lib/python3.12/contextlib.py", line 81, in inner
return func(*args, **kwds)
^^^^^^^^^^^^^^^^^^^
File "/opt/conda/envs/py_3.12/lib/python3.12/contextlib.py", line 81, in inner
return func(*args, **kwds)
^^^^^^^^^^^^^^^^^^^
File "/opt/conda/envs/py_3.12/lib/python3.12/contextlib.py", line 81, in inner
return func(*args, **kwds)
^^^^^^^^^^^^^^^^^^^
File "/var/lib/jenkins/workspace/test/inductor/test_torchinductor_opinfo.py", line 962, in inner
raise e
File "/var/lib/jenkins/workspace/test/inductor/test_torchinductor_opinfo.py", line 954, in inner
fn(self, device, dtype, op)
File "/var/lib/jenkins/workspace/test/inductor/test_torchinductor_opinfo.py", line 1207, in test_comprehensive
raise e
File "/var/lib/jenkins/workspace/test/inductor/test_torchinductor_opinfo.py", line 1182, in test_comprehensive
self.check_model_gpu(
File "/opt/conda/envs/py_3.12/lib/python3.12/contextlib.py", line 81, in inner
return func(*args, **kwds)
^^^^^^^^^^^^^^^^^^^
File "/var/lib/jenkins/workspace/test/inductor/test_torchinductor.py", line 657, in check_model_gpu
check_model(
File "/var/lib/jenkins/workspace/test/inductor/test_torchinductor.py", line 498, in check_model
actual = run(*example_inputs, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/conda/envs/py_3.12/lib/python3.12/site-packages/torch/_dynamo/eval_frame.py", line 675, in _fn
raise e.remove_dynamo_frames() from None # see TORCHDYNAMO_VERBOSE=1
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/conda/envs/py_3.12/lib/python3.12/site-packages/torch/_inductor/compile_fx.py", line 862, in _compile_fx_inner
raise InductorError(e, currentframe()).with_traceback(
File "/opt/conda/envs/py_3.12/lib/python3.12/site-packages/torch/_inductor/compile_fx.py", line 846, in _compile_fx_inner
mb_compiled_graph = fx_codegen_and_compile(
^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/conda/envs/py_3.12/lib/python3.12/site-packages/torch/_inductor/compile_fx.py", line 1460, in fx_codegen_and_compile
return scheme.codegen_and_compile(gm, example_inputs, inputs_to_check, graph_kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/conda/envs/py_3.12/lib/python3.12/site-packages/torch/_inductor/compile_fx.py", line 1347, in codegen_and_compile
compiled_module = graph.compile_to_module()
^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/conda/envs/py_3.12/lib/python3.12/site-packages/torch/_inductor/graph.py", line 2219, in compile_to_module
return self._compile_to_module()
^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/conda/envs/py_3.12/lib/python3.12/site-packages/torch/_inductor/graph.py", line 2266, in _compile_to_module
mod = PyCodeCache.load_by_key_path(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/conda/envs/py_3.12/lib/python3.12/site-packages/torch/_inductor/codecache.py", line 3006, in load_by_key_path
mod = _reload_python_module(key, path, set_sys_modules=in_toplevel)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/conda/envs/py_3.12/lib/python3.12/site-packages/torch/_inductor/runtime/compile_tasks.py", line 31, in _reload_python_module
exec(code, mod.__dict__, mod.__dict__)
File "/tmp/tmpprxfcxho/fh/cfhjwbghdrheeqgrbsv2nrm7qfppxwloeepx7nnn4tfgkz4bjodb.py", line 186, in <module>
async_compile.wait(globals())
File "/opt/conda/envs/py_3.12/lib/python3.12/site-packages/torch/_inductor/async_compile.py", line 448, in wait
self._wait_futures(scope)
File "/opt/conda/envs/py_3.12/lib/python3.12/site-packages/torch/_inductor/async_compile.py", line 468, in _wait_futures
scope[key] = result.result()
^^^^^^^^^^^^^^^
File "/opt/conda/envs/py_3.12/lib/python3.12/site-packages/torch/_inductor/codecache.py", line 3508, in result
return self.result_fn()
^^^^^^^^^^^^^^^^
File "/opt/conda/envs/py_3.12/lib/python3.12/site-packages/torch/_inductor/async_compile.py", line 343, in get_result
kernel.precompile(
File "/opt/conda/envs/py_3.12/lib/python3.12/site-packages/torch/_inductor/runtime/triton_heuristics.py", line 323, in precompile
self._make_launchers()
File "/opt/conda/envs/py_3.12/lib/python3.12/site-packages/torch/_inductor/runtime/triton_heuristics.py", line 480, in _make_launchers
launchers.append(result.make_launcher())
^^^^^^^^^^^^^^^^^^^^^^
File "/opt/conda/envs/py_3.12/lib/python3.12/site-packages/torch/_inductor/runtime/triton_heuristics.py", line 1277, in make_launcher
self.reload_cubin_path()
File "/opt/conda/envs/py_3.12/lib/python3.12/site-packages/torch/_inductor/runtime/triton_heuristics.py", line 1269, in reload_cubin_path
raise RuntimeError(
torch._inductor.exc.InductorError: RuntimeError: ('Cubin file saved by TritonBundler not found at %s', '/tmp/tmpzp29actm/triton/TM6JRP6APJLRXXMGDDHV527FZJVUOTAAOWW7AGPEBRSGPED7SGIA/triton_poi_fused_cos_linspace_mul_sum_0.cubin')
Set TORCHDYNAMO_VERBOSE=1 for the internal stack trace (please do this especially if you're reporting a bug to PyTorch). For even more developer context, set TORCH_LOGS="+dynamo"
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "/opt/conda/envs/py_3.12/lib/python3.12/site-packages/torch/testing/_internal/common_utils.py", line 3154, in wrapper
method(*args, **kwargs)
File "/opt/conda/envs/py_3.12/lib/python3.12/site-packages/torch/testing/_internal/common_utils.py", line 3154, in wrapper
method(*args, **kwargs)
File "/opt/conda/envs/py_3.12/lib/python3.12/site-packages/torch/testing/_internal/common_device_type.py", line 426, in instantiated_test
result = test(self, **param_kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/conda/envs/py_3.12/lib/python3.12/site-packages/torch/testing/_internal/common_utils.py", line 1612, in wrapper
fn(*args, **kwargs)
File "/opt/conda/envs/py_3.12/lib/python3.12/site-packages/torch/testing/_internal/common_device_type.py", line 1147, in test_wrapper
raise e_tracked from e
Exception: Caused by sample input at index 4: SampleInput(input=2, args=(), kwargs={'sym': 'True', 'device': "'cuda:0'", 'dtype': 'torch.float32', 'requires_grad': 'False'}, broadcasts_input=False, name='')
To execute this test, run the following from the base repo dir:
PYTORCH_OPINFO_SAMPLE_INPUT_INDEX=4 python test/inductor/test_torchinductor_opinfo.py TestInductorOpInfoCUDA.test_comprehensive_signal_windows_hamming_cuda_float32
This message can be suppressed by setting PYTORCH_PRINT_REPRO_ON_FAILURE=0
```
</details>
Test file path: `inductor/test_torchinductor_opinfo.py`
cc @ezyang @gchanan @zou3519 @kadeng @msaroufim @clee2000 @chauhang @penguinwu @voznesenskym @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @chenyang78 @muchulee8 @amjames @aakhundov
| true
|
3,032,594,660
|
DISABLED test_comprehensive_amin_cuda_float64 (__main__.TestInductorOpInfoCUDA)
|
pytorch-bot[bot]
|
open
|
[
"triaged",
"module: flaky-tests",
"skipped",
"oncall: pt2",
"module: inductor"
] | 2
|
NONE
|
Platforms: linux
This test was disabled because it is failing in CI. See [recent examples](https://hud.pytorch.org/flakytest?name=test_comprehensive_amin_cuda_float64&suite=TestInductorOpInfoCUDA&limit=100) and the most recent trunk [workflow logs](https://github.com/pytorch/pytorch/runs/41442776015).
Over the past 3 hours, it has been determined flaky in 10 workflow(s) with 10 failures and 10 successes.
**Debugging instructions (after clicking on the recent samples link):**
DO NOT ASSUME THINGS ARE OKAY IF THE CI IS GREEN. We now shield flaky tests from developers so CI will thus be green but it will be harder to parse the logs.
To find relevant log snippets:
1. Click on the workflow logs linked above
2. Click on the Test step of the job so that it is expanded. Otherwise, the grepping will not work.
3. Grep for `test_comprehensive_amin_cuda_float64`
4. There should be several instances run (as flaky tests are rerun in CI) from which you can study the logs.
<details><summary>Sample error message</summary>
```
Traceback (most recent call last):
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_device_type.py", line 1135, in test_wrapper
return test(*args, **kwargs)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_device_type.py", line 1434, in only_fn
return fn(self, *args, **kwargs)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 2291, in wrapper
fn(*args, **kwargs)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_device_type.py", line 1215, in dep_fn
return fn(slf, *args, **kwargs)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_device_type.py", line 1215, in dep_fn
return fn(slf, *args, **kwargs)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_device_type.py", line 1215, in dep_fn
return fn(slf, *args, **kwargs)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 1612, in wrapper
fn(*args, **kwargs)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 1534, in wrapper
fn(*args, **kwargs)
File "/opt/conda/envs/py_3.10/lib/python3.10/unittest/mock.py", line 1379, in patched
return func(*newargs, **newkeywargs)
File "/opt/conda/envs/py_3.10/lib/python3.10/contextlib.py", line 79, in inner
return func(*args, **kwds)
File "/opt/conda/envs/py_3.10/lib/python3.10/contextlib.py", line 79, in inner
return func(*args, **kwds)
File "/opt/conda/envs/py_3.10/lib/python3.10/contextlib.py", line 79, in inner
return func(*args, **kwds)
File "/var/lib/jenkins/workspace/test/inductor/test_torchinductor_opinfo.py", line 962, in inner
raise e
File "/var/lib/jenkins/workspace/test/inductor/test_torchinductor_opinfo.py", line 954, in inner
fn(self, device, dtype, op)
File "/var/lib/jenkins/workspace/test/inductor/test_torchinductor_opinfo.py", line 1207, in test_comprehensive
raise e
File "/var/lib/jenkins/workspace/test/inductor/test_torchinductor_opinfo.py", line 1182, in test_comprehensive
self.check_model_gpu(
File "/opt/conda/envs/py_3.10/lib/python3.10/contextlib.py", line 79, in inner
return func(*args, **kwds)
File "/var/lib/jenkins/workspace/test/inductor/test_torchinductor.py", line 657, in check_model_gpu
check_model(
File "/var/lib/jenkins/workspace/test/inductor/test_torchinductor.py", line 498, in check_model
actual = run(*example_inputs, **kwargs)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_dynamo/eval_frame.py", line 675, in _fn
raise e.remove_dynamo_frames() from None # see TORCHDYNAMO_VERBOSE=1
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_inductor/compile_fx.py", line 862, in _compile_fx_inner
raise InductorError(e, currentframe()).with_traceback(
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_inductor/compile_fx.py", line 846, in _compile_fx_inner
mb_compiled_graph = fx_codegen_and_compile(
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_inductor/compile_fx.py", line 1460, in fx_codegen_and_compile
return scheme.codegen_and_compile(gm, example_inputs, inputs_to_check, graph_kwargs)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_inductor/compile_fx.py", line 1347, in codegen_and_compile
compiled_module = graph.compile_to_module()
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_inductor/graph.py", line 2219, in compile_to_module
return self._compile_to_module()
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_inductor/graph.py", line 2266, in _compile_to_module
mod = PyCodeCache.load_by_key_path(
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_inductor/codecache.py", line 3006, in load_by_key_path
mod = _reload_python_module(key, path, set_sys_modules=in_toplevel)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_inductor/runtime/compile_tasks.py", line 31, in _reload_python_module
exec(code, mod.__dict__, mod.__dict__)
File "/tmp/tmpj96xl7eo/d2/cd2iznar6kymctlotthswvmztzfv4k3voqctpfmfbjp35bdlfdqs.py", line 78, in <module>
async_compile.wait(globals())
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_inductor/async_compile.py", line 448, in wait
self._wait_futures(scope)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_inductor/async_compile.py", line 468, in _wait_futures
scope[key] = result.result()
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_inductor/codecache.py", line 3508, in result
return self.result_fn()
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_inductor/async_compile.py", line 343, in get_result
kernel.precompile(
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_inductor/runtime/triton_heuristics.py", line 323, in precompile
self._make_launchers()
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_inductor/runtime/triton_heuristics.py", line 480, in _make_launchers
launchers.append(result.make_launcher())
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_inductor/runtime/triton_heuristics.py", line 1277, in make_launcher
self.reload_cubin_path()
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_inductor/runtime/triton_heuristics.py", line 1269, in reload_cubin_path
raise RuntimeError(
torch._inductor.exc.InductorError: RuntimeError: ('Cubin file saved by TritonBundler not found at %s', '/tmp/tmpvh5gum64/triton/RVENYFYII7IGSNDPLPOQUNDLGK426N3SQL2PUHHSGUV64S7CSW7A/triton_poi_fused_amin_eq_0.cubin')
Set TORCHDYNAMO_VERBOSE=1 for the internal stack trace (please do this especially if you're reporting a bug to PyTorch). For even more developer context, set TORCH_LOGS="+dynamo"
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 3154, in wrapper
method(*args, **kwargs)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 3154, in wrapper
method(*args, **kwargs)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_device_type.py", line 426, in instantiated_test
result = test(self, **param_kwargs)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 1612, in wrapper
fn(*args, **kwargs)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_device_type.py", line 1147, in test_wrapper
raise e_tracked from e
Exception: Caused by sample input at index 1: SampleInput(input=Tensor[size=(), device="cuda:0", dtype=torch.float64], args=(), kwargs={'dim': '0', 'keepdim': 'True'}, broadcasts_input=False, name='')
To execute this test, run the following from the base repo dir:
PYTORCH_OPINFO_SAMPLE_INPUT_INDEX=1 python test/inductor/test_torchinductor_opinfo.py TestInductorOpInfoCUDA.test_comprehensive_amin_cuda_float64
This message can be suppressed by setting PYTORCH_PRINT_REPRO_ON_FAILURE=0
```
</details>
Test file path: `inductor/test_torchinductor_opinfo.py`
cc @clee2000 @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
| true
|
3,032,496,059
|
[BE] Update numba versions
|
malfet
|
open
|
[
"Merged",
"Reverted",
"ciflow/trunk",
"topic: not user facing",
"ci-no-td"
] | 18
|
CONTRIBUTOR
|
Let's see if PyTorch is compatible with the latest numba release.
`test_unary_funcs` are no longer failing thanks to https://github.com/pytorch/pytorch/pull/148024
| true
|
3,032,481,926
|
[ONNX] Delete JitTraceConvertStrategy
|
titaiwangms
|
closed
|
[
"open source",
"Merged",
"ciflow/trunk",
"release notes: onnx"
] | 10
|
COLLABORATOR
|
Fixes #151703
| true
|
3,032,258,804
|
PGO does not work on jobs for frameworks that copy code to different dirs at different attempts.
|
laithsakka
|
open
|
[
"triaged",
"oncall: pt2",
"module: dynamic shapes"
] | 0
|
CONTRIBUTOR
|
**internal Xrefs:**
```
attempt 0:[ https://manifold.edge.x2p.facebook.net/v0/read/tree/logs/fire-swapna942-](https://manifold.edge.x2p.facebook.net/v0/read/tree/logs/fire-swapna942-f725974742/attempt_0/version_0/rank_0/index.html?bucketName=tlparse_reports&apiKey=tlparse_reports-key&withPayload=1&timeoutMsec=10000)[f725974742](https://www.internalfb.com/intern/fblearner/details/725974742)/attempt_0/version_0/rank_0/index.html?bucketName=tlparse_reports&apiKey=tlparse_reports-key&withPayload=1&timeoutMsec=10000
attempt 1:[ https://manifold.edge.x2p.facebook.net/v0/read/tree/logs/fire-swapna942-](https://manifold.edge.x2p.facebook.net/v0/read/tree/logs/fire-swapna942-f725974742/attempt_1/version_0/rank_0/index.html?bucketName=tlparse_reports&apiKey=tlparse_reports-key&withPayload=1&timeoutMsec=10000)[f725974742](https://www.internalfb.com/intern/fblearner/details/725974742)/attempt_1/version_0/rank_0/index.html?bucketName=tlparse_reports&apiKey=tlparse_reports-key&withPayload=1&timeoutMsec=10000
```
**What's going on**
Here is an interesting observation: if you look at the local code state in attempt 1 at frame 0/6,
you will see TWO entries for the same function, each encoding a different process id.
(1) This is the entry coming from attempt 0, which records that it saw different values for the sizes that need to become dynamic.
```
/dev/shm/uid-99/7bbb6bf0-seed-nspid4026547364_cgpid14361465-ns-XXXXXXX5501:torch_dynamo_resume_in_forward_at_5501:
L['___stack0'][0]: tensor size=[1024, 112, 256] stride=[S(1), S(2), 1]
L['___stack0'][1]: tensor size=[?, 23, 256] stride=[S(1), S(2), 1]
L['___stack0'][2]: tensor size=[?] stride=[1]
L['___stack0'][3]: tensor size=[1024, 350] stride=[S(1), 1]
L['___stack0'][4]: tensor size=[?, 1101] stride=[S(1), 1]
L['dense_arch_out'][0]: tensor size=[?, 1024] stride=[S(1), 1]
...
```
(2) This is a new entry in the file. It seems that in this attempt we think we are running a different frame, because the process ID is embedded in the frame name, so we start from scratch.
```
/dev/shm/uid-99/7bbb6bf0-seed-nspid4026546680_cgpid7288320-ns-XXXXXXXXX:5501:torch_dynamo_resume_in_forward_at_5501:
L['___stack0'][0]: tensor size=[1024, 112, 256] stride=[S(1), S(2), 1]
L['___stack0'][1]: tensor size=[7363, 23, 256] stride=[S(1), S(2), 1]
L['___stack0'][2]: tensor size=[7363] stride=[1]
L['___stack0'][3]: tensor size=[1024, 350] stride=[S(1), 1]
L['___stack0'][4]: tensor size=[7363, 1101] stride=[S(1), 1]
L['dense_arch_out'][0]: tensor size=[7363, 1024] stride=[S(1), 1]
```
**Solution:**
We will hash the file contents and use the hash as the prefix to the file name and line number.
To avoid hashing multiple times within an attempt, we will cache the hash keyed by the file path.
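A minimal sketch of the idea, assuming hypothetical helper names (the real PGO key construction lives elsewhere in dynamo):
```python
import functools
import hashlib


@functools.lru_cache(maxsize=None)  # cache per file path, so each attempt hashes a file at most once
def _file_content_hash(path: str) -> str:
    # Hash the source file contents so the key is stable across attempts
    # that copy the same code into different /dev/shm/... directories.
    with open(path, "rb") as f:
        return hashlib.sha256(f.read()).hexdigest()[:16]


def pgo_frame_key(path: str, lineno: int, func_name: str) -> str:
    # Hypothetical key: the content hash replaces the volatile directory prefix.
    return f"{_file_content_hash(path)}:{lineno}:{func_name}"
```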
cc @chauhang @penguinwu @ezyang @bobrenjc93
| true
|
3,032,237,381
|
Implemented `Size.__radd__`
|
randolf-scholz
|
open
|
[
"triaged",
"open source",
"release notes: python_frontend",
"module: python frontend"
] | 4
|
CONTRIBUTOR
|
Fixes #144334
Builds on top of #146834 by @khushi-411 (I reused the `THPSize_add` method as-is)
The needed trick was to add `PyNumberMethods`, because the Number Protocol appears to be responsible for `__radd__` (see https://stackoverflow.com/q/18794169).
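A small illustration of the behavior this is meant to enable; the expected return type is my reading of the linked issue, not something stated here:
```python
import torch

shape = torch.Size([3, 4])
# With Size.__radd__ wired up through PyNumberMethods, a plain tuple on the
# left-hand side should concatenate into a Size rather than a bare tuple.
combined = (1, 2) + shape
print(type(combined), combined)  # expected: torch.Size([1, 2, 3, 4])
```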
cc @albanD
| true
|
3,032,138,329
|
[BE] Replace func_name with __func__
|
malfet
|
closed
|
[
"module: cpu",
"fb-exported",
"Merged",
"ciflow/trunk",
"release notes: quantization",
"topic: not user facing"
] | 4
|
CONTRIBUTOR
|
Summary: Not sure why one needs to preserve the name by hand
Test Plan: CI
Differential Revision: D73941209
cc @jgong5 @mingfeima @XiaobingSuper @sanchitintel @ashokei @jingxu10 @jerryzh168
| true
|
3,032,133,746
|
Clean up conda usage in benchmark scripts
|
huydhn
|
closed
|
[
"Merged",
"topic: not user facing",
"test-config/default",
"module: dynamo",
"ciflow/inductor",
"suppress-bc-linter"
] | 3
|
CONTRIBUTOR
|
Fixes https://github.com/pytorch/pytorch/issues/152123.
* Switch `benchmarks/dynamo/Makefile` to use uv. Note that these scripts are only used locally, so it's kind of ok to keep conda here IMO. But switching to uv is probably nicer to most folks.
* Delete some files that are outdated and not used anymore
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @chauhang @amjames
| true
|
3,032,096,260
|
removing short-perf-test-cpu.sh and short-perf-test-gpu.sh
|
jeanschmidt
|
closed
|
[
"Merged",
"ciflow/binaries",
"ciflow/trunk",
"topic: not user facing",
"ciflow/periodic",
"ciflow/nightly",
"ciflow/unstable",
"ciflow/slow"
] | 4
|
CONTRIBUTOR
|
When working on #148342 I realised that nothing references those files, so they appear to be stale and can be safely removed.
| true
|
3,032,085,521
|
MPS varying seq len SDPA memory leak
|
SalmanMohammadi
|
open
|
[
"module: memory usage",
"triaged",
"module: mps",
"module: sdpa"
] | 2
|
CONTRIBUTOR
|
### 🐛 Describe the bug
After trying the fix from #152371 (thanks so much for landing this so quickly), I was still seeing memory leaks. I found another issue where memory usage on MPS explodes when the sequence length varies sufficiently for SDPA - this does not occur with CUDA.

### Reproduction script:
```python
import torch
import torch.nn.functional as F
import sys
def get_memory_stats(device: torch.device):
    if device.type == "mps":
        peak_active = torch.mps.current_allocated_memory()
        peak_alloc = torch.mps.driver_allocated_memory()
        return peak_active, peak_alloc
    elif device.type == "cuda":
        peak_active = torch.cuda.memory_stats().get("active_bytes.all.peak", 0)
        peak_alloc = torch.cuda.max_memory_allocated()
        return peak_active, peak_alloc


def format_bytes(size_bytes):
    """Converts bytes to a readable string (KB, MB, GB)."""
    if size_bytes < 1024:
        return f"{size_bytes} B"
    elif size_bytes < 1024**2:
        return f"{size_bytes / 1024:.2f} KB"
    elif size_bytes < 1024**3:
        return f"{size_bytes / 1024**2:.2f} MB"
    else:
        return f"{size_bytes / 1024**3:.2f} GB"


def run_sdpa_test_single_bs(batch_size, num_iterations, num_heads, head_dim, min_seq_len, max_seq_len, device, dtype):
    actual_max_seq_len = max(max_seq_len, min_seq_len + 1)
    peak_active, peak_alloc = get_memory_stats(device)
    print(f" Initial Memory: Active={format_bytes(peak_active)}, Alloc={format_bytes(peak_alloc)}")
    for i in range(num_iterations):
        seq_len = torch.randint(min_seq_len, actual_max_seq_len, (1,)).item()
        query = torch.randn(batch_size, num_heads, seq_len, head_dim, device=device, dtype=dtype)
        key = torch.randn(batch_size, num_heads, seq_len, head_dim, device=device, dtype=dtype)
        value = torch.randn(batch_size, num_heads, seq_len, head_dim, device=device, dtype=dtype)
        with torch.no_grad():
            F.scaled_dot_product_attention(query, key, value)
        peak_active, peak_alloc = get_memory_stats(device)
        if (i + 1) % (num_iterations // 10 or 1) == 0:
            print(f" Step {i + 1}/{num_iterations}: Active={format_bytes(peak_active)}, Alloc={format_bytes(peak_alloc)}")
    final_peak_active, final_peak_alloc = get_memory_stats(device)
    print(f" Final Memory: Active={format_bytes(final_peak_active)}, Alloc={format_bytes(final_peak_alloc)}")
    print(f"--- Finished SDPA Test for BS={batch_size}, SeqLen Range=({min_seq_len}-{actual_max_seq_len - 1}) ---")


if __name__ == "__main__":
    batch_size = 4
    num_iterations = 400
    num_heads = 8
    head_dim = 128
    min_seq_len = 128
    max_seq_len = min_seq_len + int(sys.argv[1])
    device = torch.device(sys.argv[2])
    dtype = torch.bfloat16
    run_sdpa_test_single_bs(batch_size, num_iterations, num_heads, head_dim, min_seq_len, max_seq_len, device, dtype)
```
#### CUDA results:
```bash
root@cb0c541d80f5:/workspace/axolotl# python ../mem_test.py 128 cuda
Initial Memory: Active=0 B, Alloc=0 B
Step 40/400: Active=8.71 MB, Alloc=8.71 MB
Step 80/400: Active=8.71 MB, Alloc=8.71 MB
Step 120/400: Active=9.66 MB, Alloc=9.66 MB
Step 160/400: Active=9.66 MB, Alloc=9.66 MB
Step 200/400: Active=9.66 MB, Alloc=9.66 MB
Step 240/400: Active=9.66 MB, Alloc=9.66 MB
Step 280/400: Active=9.66 MB, Alloc=9.66 MB
Step 320/400: Active=9.66 MB, Alloc=9.66 MB
Step 360/400: Active=9.66 MB, Alloc=9.66 MB
Step 400/400: Active=9.66 MB, Alloc=9.66 MB
Final Memory: Active=9.66 MB, Alloc=9.66 MB
--- Finished SDPA Test for BS=4, SeqLen Range=(128-255) ---
root@cb0c541d80f5:/workspace/axolotl# python ../mem_test.py 256 cuda
Initial Memory: Active=0 B, Alloc=0 B
Step 40/400: Active=12.00 MB, Alloc=12.00 MB
Step 80/400: Active=12.00 MB, Alloc=12.00 MB
Step 120/400: Active=13.17 MB, Alloc=13.17 MB
Step 160/400: Active=13.17 MB, Alloc=13.17 MB
Step 200/400: Active=13.17 MB, Alloc=13.17 MB
Step 240/400: Active=13.17 MB, Alloc=13.17 MB
Step 280/400: Active=13.17 MB, Alloc=13.17 MB
Step 320/400: Active=13.17 MB, Alloc=13.17 MB
Step 360/400: Active=13.17 MB, Alloc=13.17 MB
Step 400/400: Active=13.17 MB, Alloc=13.17 MB
Final Memory: Active=13.17 MB, Alloc=13.17 MB
--- Finished SDPA Test for BS=4, SeqLen Range=(128-383) ---
root@cb0c541d80f5:/workspace/axolotl# python ../mem_test.py 512 cuda
Initial Memory: Active=0 B, Alloc=0 B
Step 40/400: Active=20.78 MB, Alloc=20.78 MB
Step 80/400: Active=20.78 MB, Alloc=20.78 MB
Step 120/400: Active=20.78 MB, Alloc=20.78 MB
Step 160/400: Active=20.78 MB, Alloc=20.78 MB
Step 200/400: Active=20.78 MB, Alloc=20.78 MB
Step 240/400: Active=20.78 MB, Alloc=20.78 MB
Step 280/400: Active=20.78 MB, Alloc=20.78 MB
Step 320/400: Active=20.78 MB, Alloc=20.78 MB
Step 360/400: Active=20.78 MB, Alloc=20.78 MB
Step 400/400: Active=20.78 MB, Alloc=20.78 MB
Final Memory: Active=20.78 MB, Alloc=20.78 MB
--- Finished SDPA Test for BS=4, SeqLen Range=(128-639) ---
root@cb0c541d80f5:/workspace/axolotl# python ../mem_test.py 2048 cuda
Initial Memory: Active=0 B, Alloc=0 B
Step 40/400: Active=67.58 MB, Alloc=67.58 MB
Step 80/400: Active=67.58 MB, Alloc=67.58 MB
Step 120/400: Active=67.58 MB, Alloc=67.58 MB
Step 160/400: Active=67.58 MB, Alloc=67.58 MB
Step 200/400: Active=67.58 MB, Alloc=67.58 MB
Step 240/400: Active=67.58 MB, Alloc=67.58 MB
Step 280/400: Active=67.58 MB, Alloc=67.58 MB
Step 320/400: Active=67.89 MB, Alloc=67.89 MB
Step 360/400: Active=67.89 MB, Alloc=67.89 MB
Step 400/400: Active=68.14 MB, Alloc=68.14 MB
Final Memory: Active=68.14 MB, Alloc=68.14 MB
--- Finished SDPA Test for BS=4, SeqLen Range=(128-2175) ---
```
#### MPS Results:
```bash
> python minimal_test.py 128 mps
Initial Memory: Active=0 B, Alloc=384.00 KB
Step 40/400: Active=5.86 MB, Alloc=77.17 MB
Step 80/400: Active=5.86 MB, Alloc=85.52 MB
Step 120/400: Active=5.83 MB, Alloc=117.83 MB
Step 160/400: Active=5.86 MB, Alloc=118.02 MB
Step 200/400: Active=4.17 MB, Alloc=118.28 MB
Step 240/400: Active=5.83 MB, Alloc=118.41 MB
Step 280/400: Active=5.84 MB, Alloc=118.47 MB
Step 320/400: Active=5.84 MB, Alloc=118.48 MB
Step 360/400: Active=5.83 MB, Alloc=118.56 MB
Step 400/400: Active=5.83 MB, Alloc=118.61 MB
Final Memory: Active=5.83 MB, Alloc=118.61 MB
--- Finished SDPA Test for BS=4, SeqLen Range=(128-255) ---
> python minimal_test.py 256 mps
Initial Memory: Active=0 B, Alloc=384.00 KB
Step 40/400: Active=7.81 MB, Alloc=143.22 MB
Step 80/400: Active=7.81 MB, Alloc=151.73 MB
Step 120/400: Active=7.81 MB, Alloc=184.08 MB
Step 160/400: Active=7.81 MB, Alloc=184.47 MB
Step 200/400: Active=7.81 MB, Alloc=184.77 MB
Step 240/400: Active=7.81 MB, Alloc=185.03 MB
Step 280/400: Active=8.11 MB, Alloc=185.28 MB
Step 320/400: Active=7.81 MB, Alloc=185.50 MB
Step 360/400: Active=7.81 MB, Alloc=185.78 MB
Step 400/400: Active=17.01 MB, Alloc=185.88 MB
Final Memory: Active=17.01 MB, Alloc=185.88 MB
--- Finished SDPA Test for BS=4, SeqLen Range=(128-383) ---
> python minimal_test.py 512 mps
Initial Memory: Active=0 B, Alloc=384.00 KB
Step 40/400: Active=5.06 MB, Alloc=1.13 GB
Step 80/400: Active=17.57 MB, Alloc=1.13 GB
Step 120/400: Active=15.55 MB, Alloc=1.13 GB
Step 160/400: Active=10.97 MB, Alloc=1.13 GB
Step 200/400: Active=7.15 MB, Alloc=1.13 GB
Step 240/400: Active=15.55 MB, Alloc=1.13 GB
Step 280/400: Active=10.97 MB, Alloc=1.13 GB
Step 320/400: Active=17.57 MB, Alloc=1.13 GB
Step 360/400: Active=10.97 MB, Alloc=1.13 GB
Step 400/400: Active=17.57 MB, Alloc=1.13 GB
Final Memory: Active=17.57 MB, Alloc=1.13 GB
--- Finished SDPA Test for BS=4, SeqLen Range=(128-639) ---
```
### Versions
On MPS:
```bash
Collecting environment information...
PyTorch version: 2.8.0.dev20250430
Is debug build: False
CUDA used to build PyTorch: None
ROCM used to build PyTorch: N/A
OS: macOS 15.1.1 (arm64)
GCC version: Could not collect
Clang version: 15.0.0 (clang-1500.3.9.4)
CMake version: Could not collect
Libc version: N/A
Python version: 3.12.8 (main, Jan 5 2025, 06:55:30) [Clang 19.1.6 ] (64-bit runtime)
Python platform: macOS-15.1.1-arm64-arm-64bit
Is CUDA available: False
CUDA runtime version: No CUDA
CUDA_MODULE_LOADING set to: N/A
GPU models and configuration: No CUDA
Nvidia driver version: No CUDA
cuDNN version: No CUDA
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Apple M1 Max
Versions of relevant libraries:
[pip3] numpy==2.2.3
[pip3] pytorch_sphinx_theme==0.0.24
[pip3] torch==2.8.0.dev20250430
[pip3] torchao==0.10.0+cpu
[pip3] torchaudio==2.6.0.dev20250430
[pip3] torchdata==0.11.0
[pip3] torchtune==0.0.0
[pip3] torchvision==0.22.0.dev20250430
[conda] No relevant packages
```
On CUDA:
```bash
Collecting environment information...
PyTorch version: 2.6.0+cu124
Is debug build: False
CUDA used to build PyTorch: 12.4
ROCM used to build PyTorch: N/A
OS: Ubuntu 22.04.4 LTS (x86_64)
GCC version: (Ubuntu 11.4.0-1ubuntu1~22.04) 11.4.0
Clang version: Could not collect
CMake version: version 4.0.0
Libc version: glibc-2.35
Python version: 3.11.11 (main, Dec 11 2024, 16:28:39) [GCC 11.2.0] (64-bit runtime)
Python platform: Linux-5.4.0-196-generic-x86_64-with-glibc2.35
Is CUDA available: True
CUDA runtime version: 12.4.131
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration:
GPU 0: NVIDIA RTX A6000
GPU 1: NVIDIA RTX A6000
GPU 2: NVIDIA RTX A6000
GPU 3: NVIDIA RTX A6000
Nvidia driver version: 550.127.08
cuDNN version: Probably one of the following:
/usr/lib/x86_64-linux-gnu/libcudnn.so.9.1.0
/usr/lib/x86_64-linux-gnu/libcudnn_adv.so.9.1.0
/usr/lib/x86_64-linux-gnu/libcudnn_cnn.so.9.1.0
/usr/lib/x86_64-linux-gnu/libcudnn_engines_precompiled.so.9.1.0
/usr/lib/x86_64-linux-gnu/libcudnn_engines_runtime_compiled.so.9.1.0
/usr/lib/x86_64-linux-gnu/libcudnn_graph.so.9.1.0
/usr/lib/x86_64-linux-gnu/libcudnn_heuristic.so.9.1.0
/usr/lib/x86_64-linux-gnu/libcudnn_ops.so.9.1.0
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 48 bits physical, 48 bits virtual
Byte Order: Little Endian
CPU(s): 128
On-line CPU(s) list: 0-127
Vendor ID: AuthenticAMD
Model name: AMD EPYC 7543 32-Core Processor
CPU family: 25
Model: 1
Thread(s) per core: 2
Core(s) per socket: 32
Socket(s): 2
Stepping: 1
Frequency boost: enabled
CPU max MHz: 2800.0000
CPU min MHz: 1500.0000
BogoMIPS: 5599.84
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ht syscall nx mmxext fxsr_opt pdpe1gb rdtscp lm constant_tsc rep_good nopl nonstop_tsc cpuid extd_apicid aperfmperf pni pclmulqdq monitor ssse3 fma cx16 pcid sse4_1 sse4_2 x2apic movbe popcnt aes xsave avx f16c rdrand lahf_lm cmp_legacy svm extapic cr8_legacy abm sse4a misalignsse 3dnowprefetch osvw ibs skinit wdt tce topoext perfctr_core perfctr_nb bpext perfctr_llc mwaitx cpb cat_l3 cdp_l3 invpcid_single hw_pstate ssbd mba ibrs ibpb stibp vmmcall fsgsbase bmi1 avx2 smep bmi2 invpcid cqm rdt_a rdseed adx smap clflushopt clwb sha_ni xsaveopt xsavec xgetbv1 xsaves cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local clzero irperf xsaveerptr wbnoinvd arat npt lbrv svm_lock nrip_save tsc_scale vmcb_clean flushbyasid decodeassists pausefilter pfthreshold v_vmsave_vmload vgif umip pku ospke vaes vpclmulqdq rdpid overflow_recov succor smca
Virtualization: AMD-V
L1d cache: 2 MiB (64 instances)
L1i cache: 2 MiB (64 instances)
L2 cache: 32 MiB (64 instances)
L3 cache: 512 MiB (16 instances)
NUMA node(s): 2
NUMA node0 CPU(s): 0-31,64-95
NUMA node1 CPU(s): 32-63,96-127
Vulnerability Gather data sampling: Not affected
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Not affected
Vulnerability Retbleed: Not affected
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl and seccomp
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Retpolines; IBPB conditional; IBRS_FW; STIBP always-on; RSB filling; PBRSB-eIBRS Not affected; BHI Not affected
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Versions of relevant libraries:
[pip3] apollo-torch==1.0.3
[pip3] galore-torch==1.0
[pip3] numpy==2.0.1
[pip3] nvidia-cublas-cu12==12.4.5.8
[pip3] nvidia-cuda-cupti-cu12==12.4.127
[pip3] nvidia-cuda-nvrtc-cu12==12.4.127
[pip3] nvidia-cuda-runtime-cu12==12.4.127
[pip3] nvidia-cudnn-cu12==9.1.0.70
[pip3] nvidia-cufft-cu12==11.2.1.3
[pip3] nvidia-curand-cu12==10.3.5.147
[pip3] nvidia-cusolver-cu12==11.6.1.9
[pip3] nvidia-cusparse-cu12==12.3.1.170
[pip3] nvidia-cusparselt-cu12==0.6.2
[pip3] nvidia-nccl-cu12==2.21.5
[pip3] nvidia-nvjitlink-cu12==12.4.127
[pip3] nvidia-nvtx-cu12==12.4.127
[pip3] torch==2.6.0+cu124
[pip3] torch-optimi==0.2.1
[pip3] torchao==0.9.0
[pip3] torchvision==0.21.0+cu124
[pip3] triton==3.2.0
[conda] No relevant packages
```
cc @kulinseth @albanD @malfet @DenisVieriu97 @jhavukainen
| true
|
3,032,074,389
|
Mr
|
Orilwela
|
closed
|
[] | 1
|
NONE
|
### 📚 The doc issue
Help
### Suggest a potential alternative/fix
_No response_
| true
|
3,032,073,487
|
FakeTensorUpdater does not trace nodes correctly
|
ProExpertProg
|
open
|
[
"triaged",
"oncall: pt2",
"module: inductor",
"inductor_pattern_match",
"vllm-compile"
] | 2
|
NONE
|
### 🐛 Describe the bug
There are two issues in the tracing in `torch._inductor.fx_utils.FakeTensorUpdater`:
1. `auto_functionalized` (and other higher-order op) nodes don't get re-traced
2. If a node returns a tensor with a different `dtype`, it's still considered the same tensor a lot of the time.
It's actually really hard to reproduce the issue, as most of the time tracing just doesn't happen. But later passes might depend on accurate trace information, and sometimes the lack of tracing leads to wrong inputs to custom ops, which can fail tracing (but it's hard to hit that because usually the tracing quits too early).
But basically, this happened when manually editing the graph. A `dtype` was changed on an `empty` node, and instead of its users getting retraced, the fake output tensor was considered unchanged (issue 2). Once I got past that issue by just removing the meta value, tracing did not propagate through the `auto_functionalized` nodes. That left a type-sensitive custom op with incorrect inputs.
I think the fixes are (sketched below):
1. `should_process_node` should return `True` if a node is a higher-order op
2. `is_fake_tensor_same` should explicitly compare `dtype`, not just `type(new) == type(old)`
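A rough sketch of both fixes; the helper names match the ones mentioned above, but the bodies here are my assumption of what the change would look like, not the actual `torch._inductor.fx_utils` code:
```python
import torch


def should_process_node(node) -> bool:
    # Fix 1: also re-trace higher-order ops such as auto_functionalized,
    # not just plain OpOverload call_function nodes.
    return node.op == "call_function" and isinstance(
        node.target, (torch._ops.OpOverload, torch._ops.HigherOrderOperator)
    )


def is_fake_tensor_same(new, old) -> bool:
    # Fix 2: compare dtype (and shape/device) explicitly instead of relying
    # on type(new) == type(old), which treats all fake tensors as "the same".
    return (
        type(new) == type(old)
        and new.dtype == old.dtype
        and new.shape == old.shape
        and new.device == old.device
    )
```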
### Versions
torch==2.6.0
cc @chauhang @penguinwu @voznesenskym @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @chenyang78 @kadeng @muchulee8 @amjames @aakhundov @zou3519
| true
|
3,032,064,980
|
[invoke_subgraph] Unpacked operands
|
anijain2305
|
closed
|
[
"Merged",
"ciflow/trunk",
"topic: not user facing",
"module: inductor",
"module: dynamo",
"ciflow/inductor"
] | 6
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #152494
* #152490
* #152383
* #152384
* #152581
* __->__ #152547
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
| true
|
3,032,059,090
|
Remove Conda Instructions
|
AlannaBurke
|
open
|
[
"module: docs",
"release notes: releng"
] | 1
|
CONTRIBUTOR
|
Fixes #149551
Needs input on some of the instructions.
cc @svekars @sekyondaMeta
| true
|
3,032,051,955
|
ci: Switch benchmark dependency to use pip
|
seemethere
|
open
|
[
"topic: not user facing"
] | 3
|
MEMBER
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #152843
* __->__ #152545
As an effort to reduce our dependency on conda we should use pip here.
This also pins all the dependencies to the versions I took today
(04/30/2024); realistically these should probably live in a
requirements.txt, but I'm trying to limit the scope of this PR since I'm
getting it done quickly.
Relates to https://github.com/pytorch/pytorch/issues/148340
Signed-off-by: Eli Uriegas <eliuriegas@meta.com>
| true
|
3,032,048,424
|
Migrate perf_test/test_[gc]pu_speed_mnist.sh from conda to venv
|
jeanschmidt
|
closed
|
[
"topic: not user facing"
] | 1
|
CONTRIBUTOR
|
Replace conda with venv on:
* `.ci/pytorch/perf_test/test_cpu_speed_mnist.sh`
* `.ci/pytorch/perf_test/test_gpu_speed_mnist.sh`
Fixes #148342
| true
|
3,032,027,010
|
strict multidimensional slicing
|
avikchaudhuri
|
open
|
[
"fb-exported",
"module: dynamo",
"ciflow/inductor"
] | 3
|
CONTRIBUTOR
|
Differential Revision: D73937420
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @chauhang @amjames
| true
|
3,032,026,781
|
[AOTI][CPU] Introduce config.cpp.use_decompose_tanh
|
hl475
|
closed
|
[
"fb-exported",
"Merged",
"ciflow/trunk",
"topic: not user facing",
"module: inductor",
"ciflow/inductor"
] | 7
|
CONTRIBUTOR
|
Summary: Previously D70489427 changed the tanh impl to `.tanh()`, and this is causing a perf regression in some Meta-internal workloads. This diff introduces a config so we can choose the implementation based on need.
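A hedged usage sketch, assuming the new flag is exposed like other inductor config knobs:
```python
import torch._inductor.config as inductor_config

# Opt back into the decomposed tanh codegen for workloads where `.tanh()` regresses.
inductor_config.cpp.use_decompose_tanh = True
```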
Differential Revision: D73909371
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
| true
|
3,032,008,373
|
Add parameters for monitor
|
yangw-dev
|
closed
|
[
"module: rocm",
"Merged",
"ciflow/trunk",
"topic: not user facing"
] | 3
|
CONTRIBUTOR
|
- Add log interval and log-data-collect interval to all test yml files
- Add upload step for all test yml files

Next step: enable perf test with utilization
cc @jeffdaily @sunway513 @jithunnair-amd @pruthvistony @ROCmSupport @dllehr-amd @jataylo @hongxiayang @naromero77amd
| true
|
3,031,973,325
|
[CUDA] Reset peak memory stats before running `test_set_per_process_memory_fraction`
|
eqy
|
closed
|
[
"module: cuda",
"open source",
"Merged",
"ciflow/trunk",
"topic: not user facing"
] | 13
|
COLLABORATOR
|
Otherwise previous tests can cause `application = int(total_memory * 0.499) - torch.cuda.max_memory_reserved()` to go negative
Hopefully abates current flakiness (see also https://github.com/pytorch/pytorch/issues/135115#:~:text=TestCuda.test_set_per_process_memory_fraction)
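A minimal sketch of the intent, assuming the fix simply clears the allocator's peak statistics before the fraction math runs:
```python
import torch

# Clear peak allocation/reservation accounting left over from earlier tests
# so the computed application size cannot go negative.
torch.cuda.empty_cache()
torch.cuda.reset_peak_memory_stats()

total_memory = torch.cuda.get_device_properties(0).total_memory
application = int(total_memory * 0.499) - torch.cuda.max_memory_reserved()
assert application >= 0
```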
cc @ptrblck @msaroufim @jerryzh168
| true
|