| id (int64, 2.74B–3.05B) | title (string, 1–255 chars) | user (string, 2–26 chars) | state (string, 2 classes) | labels (list, 0–24 items) | comments (int64, 0–206) | author_association (string, 4 classes) | body (string, 7–62.5k chars, nullable ⌀) | is_title (bool, 1 class) |
|---|---|---|---|---|---|---|---|---|
2,921,402,900
|
[export] fix stft decomp and make it consistent with cpp impl.
|
ydwu4
|
closed
|
[
"fb-exported",
"Merged",
"ciflow/trunk",
"topic: not user facing"
] | 8
|
CONTRIBUTOR
|
Summary: We change the fake impl of stft to follow its cpp implementation [here](https://github.com/pytorch/pytorch/blob/main/aten/src/ATen/native/SpectralOps.cpp#L951-L963) more closely,
where `n_frames = 1 + (len - n_fft) / hop_length;` is also an integer division.
Test Plan: Existing tests and buck2 build --flagfile fbcode//mode/dev fbcode//executorch/examples/models/fb/llama4:speech_transform.pte
Differential Revision: D71209142
edit: we kept the original path unchanged.
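For illustration, here is a minimal sketch (my own, not the PR's code; the function name and sample numbers are made up) of the frame-count formula quoted above, showing how integer division differs from true division:
```python
def n_frames(length: int, n_fft: int, hop_length: int) -> int:
    # Mirrors "n_frames = 1 + (len - n_fft) / hop_length" with C++-style
    # integer division.
    return 1 + (length - n_fft) // hop_length

print(n_frames(1000, 400, 160))  # 4, since 600 // 160 == 3
print(1 + (1000 - 400) / 160)    # 4.75 with true division
```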
| true
|
2,921,385,908
|
[BE] simplify test_cpp_extensions_aot and .gitignore
|
janeyx99
|
closed
|
[
"Merged",
"ciflow/trunk",
"topic: not user facing"
] | 9
|
CONTRIBUTOR
|
It is shady to clean up an install mid-test. So don't do that anymore and use .gitignore instead.
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #149231
| true
|
2,921,380,345
|
[BE] Add STABLE_LIBRARY test for multiple returns
|
janeyx99
|
closed
|
[
"Merged",
"ciflow/trunk",
"topic: not user facing"
] | 6
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #149230
* #149052
| true
|
2,921,363,910
|
[aot] always lower the backward with a deepcopy
|
xmfan
|
closed
|
[
"Merged",
"ciflow/trunk",
"topic: not user facing",
"module: inductor",
"ciflow/inductor"
] | 3
|
MEMBER
|
FIXES https://github.com/pytorch/pytorch/issues/149105
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #149651
* #149650
* #149649
* #149647
* __->__ #149229
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
| true
|
2,921,290,026
|
[dynamo][guards][serialization] Dont use ID_MATCH guard for bool and None
|
anijain2305
|
closed
|
[
"Merged",
"Reverted",
"ciflow/trunk",
"topic: not user facing",
"module: dynamo",
"ciflow/inductor",
"ci-no-td"
] | 9
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #149228
Doing this removes the need to collect `id`s and therefore facilitates serialization. It also improves readability with recompilations; earlier, the recompile message would just show the `id`.
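As a rough illustration (not Dynamo's actual guard code; the variable names are invented), an identity guard stores a process-specific `id()`, whereas a value check against the `bool`/`None` singletons can be re-evaluated after deserialization:
```python
flag = True

id_guard = id(flag)          # process-specific integer; meaningless once serialized
value_guard = flag is True   # True/False/None are singletons, so this check can be
                             # reconstructed from the literal value alone

print(id_guard, value_guard)
```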
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @chauhang @amjames
| true
|
2,921,285,274
|
Unexpected behaviour when using the torch.nn.utils.rnn.pack_padded_sequence API
|
sjh0849
|
open
|
[
"module: rnn",
"triaged"
] | 0
|
NONE
|
### 🐛 Describe the bug
The test cases `test_pack_padded_sequence_empty` and `test_pack_padded_sequence_zero_length` expect a valid PackedSequence output for an empty tensor or a sequence with zero length. This contradicts the behaviour of the current implementation.
```python
import unittest

import torch
from torch.nn.utils.rnn import pack_padded_sequence


class TestPackPaddedSequence(unittest.TestCase):
    def test_pack_padded_sequence_empty(self):
        # Test with an empty sequence
        sequences = torch.tensor([], dtype=torch.float32).reshape(0, 0)
        lengths = torch.tensor([], dtype=torch.int64)
        packed = pack_padded_sequence(sequences, lengths, batch_first=True, enforce_sorted=False)
        self.assertEqual(packed.data.numel(), 0)
        self.assertEqual(packed.batch_sizes.numel(), 0)

    def test_pack_padded_sequence_zero_length(self):
        # Test with a sequence of zero length
        sequences = torch.tensor([
            [0, 0, 0, 0],
            [1, 2, 3, 0],
            [4, 5, 0, 0]
        ], dtype=torch.float32)
        lengths = torch.tensor([0, 3, 2])
        packed = pack_padded_sequence(sequences, lengths, batch_first=True, enforce_sorted=False)
        self.assertEqual(packed.data.tolist(), [1, 4, 2, 5, 3])
        self.assertEqual(packed.batch_sizes.tolist(), [2, 2, 1])
```
```
======================================================================
ERROR: test_pack_padded_sequence_empty (__main__.TestPackPaddedSequence)
----------------------------------------------------------------------
Traceback (most recent call last):
File "/home/user/projects/api_guided_testgen/out/bug_detect_gpt4o/exec/zero_shot/torch/torch.nn.utils.rnn.pack_padded_sequence.py", line 71, in test_pack_padded_sequence_empty
packed = pack_padded_sequence(sequences, lengths, batch_first=True, enforce_sorted=False)
File "/home/user/anaconda3/lib/python3.8/site-packages/torch/nn/utils/rnn.py", line 264, in pack_padded_sequence
_VF._pack_padded_sequence(input, lengths, batch_first)
RuntimeError: Cannot pack empty tensors.
======================================================================
ERROR: test_pack_padded_sequence_zero_length (__main__.TestPackPaddedSequence)
----------------------------------------------------------------------
Traceback (most recent call last):
File "/home/user/projects/api_guided_testgen/out/bug_detect_gpt4o/exec/zero_shot/torch/torch.nn.utils.rnn.pack_padded_sequence.py", line 51, in test_pack_padded_sequence_zero_length
packed = pack_padded_sequence(sequences, lengths, batch_first=True, enforce_sorted=False)
File "/home/user/anaconda3/lib/python3.8/site-packages/torch/nn/utils/rnn.py", line 264, in pack_padded_sequence
_VF._pack_padded_sequence(input, lengths, batch_first)
RuntimeError: Length of all samples has to be greater than 0, but found an element in 'lengths' that is <= 0
```
### Versions
Collecting environment information...
PyTorch version: 2.5.0
Is debug build: False
CUDA used to build PyTorch: None
ROCM used to build PyTorch: N/A
OS: Ubuntu 22.04.5 LTS (x86_64)
GCC version: (Ubuntu 9.5.0-1ubuntu1~22.04) 9.5.0
Clang version: 14.0.0-1ubuntu1.1
CMake version: version 3.22.1
Libc version: glibc-2.35
Python version: 3.9.21 (main, Dec 11 2024, 16:24:11) [GCC 11.2.0] (64-bit runtime)
Python platform: Linux-6.8.0-52-generic-x86_64-with-glibc2.35
Is CUDA available: False
CUDA runtime version: No CUDA
CUDA_MODULE_LOADING set to: N/A
GPU models and configuration: No CUDA
Nvidia driver version: No CUDA
cuDNN version: No CUDA
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 39 bits physical, 48 bits virtual
Byte Order: Little Endian
CPU(s): 4
On-line CPU(s) list: 0-3
Vendor ID: GenuineIntel
Model name: Intel(R) Xeon(R) E-2224G CPU @ 3.50GHz
CPU family: 6
Model: 158
Thread(s) per core: 1
Core(s) per socket: 4
Socket(s): 1
Stepping: 10
CPU max MHz: 4700.0000
CPU min MHz: 800.0000
BogoMIPS: 6999.82
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc art arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc cpuid aperfmperf pni pclmulqdq dtes64 monitor ds_cpl vmx smx est tm2 ssse3 sdbg fma cx16 xtpr pdcm pcid sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm 3dnowprefetch cpuid_fault epb pti ssbd ibrs ibpb stibp tpr_shadow flexpriority ept vpid ept_ad fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid mpx rdseed adx smap clflushopt intel_pt xsaveopt xsavec xgetbv1 xsaves dtherm ida arat pln pts hwp hwp_notify hwp_act_window hwp_epp vnmi md_clear flush_l1d arch_capabilities
Virtualization: VT-x
L1d cache: 128 KiB (4 instances)
L1i cache: 128 KiB (4 instances)
L2 cache: 1 MiB (4 instances)
L3 cache: 8 MiB (1 instance)
NUMA node(s): 1
NUMA node0 CPU(s): 0-3
Vulnerability Gather data sampling: Mitigation; Microcode
Vulnerability Itlb multihit: KVM: Mitigation: VMX disabled
Vulnerability L1tf: Mitigation; PTE Inversion; VMX conditional cache flushes, SMT disabled
Vulnerability Mds: Mitigation; Clear CPU buffers; SMT disabled
Vulnerability Meltdown: Mitigation; PTI
Vulnerability Mmio stale data: Mitigation; Clear CPU buffers; SMT disabled
Vulnerability Reg file data sampling: Not affected
Vulnerability Retbleed: Mitigation; IBRS
Vulnerability Spec rstack overflow: Not affected
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; IBRS; IBPB conditional; STIBP disabled; RSB filling; PBRSB-eIBRS Not affected; BHI Not affected
Vulnerability Srbds: Mitigation; Microcode
Vulnerability Tsx async abort: Mitigation; TSX disabled
Versions of relevant libraries:
[pip3] numpy==2.0.1
[pip3] torch==2.5.0
[pip3] torchaudio==2.5.0
[pip3] torchvision==0.20.0
[conda] blas 1.0 mkl
[conda] cpuonly 2.0 0 pytorch
[conda] ffmpeg 4.3 hf484d3e_0 pytorch
[conda] libjpeg-turbo 2.0.0 h9bf148f_0 pytorch
[conda] mkl 2023.1.0 h213fc3f_46344
[conda] mkl-service 2.4.0 py39h5eee18b_2
[conda] mkl_fft 1.3.11 py39h5eee18b_0
[conda] mkl_random 1.2.8 py39h1128e8f_0
[conda] numpy 2.0.1 py39h5f9d8c6_1
[conda] numpy-base 2.0.1 py39hb5e798b_1
[conda] pytorch 2.5.0 py3.9_cpu_0 pytorch
[conda] pytorch-mutex 1.0 cpu pytorch
[conda] torchaudio 2.5.0 py39_cpu pytorch
[conda] torchvision 0.20.0 py39_cpu pytorch
cc @mikaylagawarecki
| true
|
2,921,280,571
|
Ensure conj_physical always does a physical conjugation
|
amjames
|
open
|
[
"open source",
"topic: bc breaking",
"topic: not user facing"
] | 4
|
COLLABORATOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #147231
* __->__ #149226
When the argument tensor has its conj bit set, the conj_physical
implementation will do `arg.conj().clone()`, which is not a physical
conjugation but a reversal of the conjugate view state.
Instead we should unconditionally perform the conjugation of the
underlying data, and ensure the argument's conjugate bit is propagated
to the result.
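A minimal sketch, assuming current PyTorch conjugate-view semantics, of the distinction drawn above between flipping the conj bit and physically conjugating the data:
```python
import torch

t = torch.tensor([1 + 2j, 3 - 4j])
v = t.conj()                   # lazy view: only the conj bit is set
print(v.is_conj())             # True; the underlying data is unchanged
print(torch.resolve_conj(v))   # materializes the conjugated values
print(torch.conj_physical(t))  # conjugates the underlying data directly
```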
| true
|
2,921,276,261
|
Unexpected results when using the torch.allclose API
|
sjh0849
|
closed
|
[] | 1
|
NONE
|
### 🐛 Describe the bug
The test `test_allclose_zero_tolerance` shows unexpected behavior (expecting False, but torch.allclose returns True), suggesting that the implementation does not follow the documented and expected semantics.
```python
import unittest
import torch

class TestTorchAllClose(unittest.TestCase):
    def test_allclose_zero_tolerance(self):
        # Test with zero tolerance
        a = torch.tensor([1.0, 2.0, 3.0])
        b = torch.tensor([1.0, 2.0, 3.0 + 1e-9])
        self.assertFalse(torch.allclose(a, b, rtol=0, atol=0))
```
```
======================================================================
FAIL: test_allclose_zero_tolerance (__main__.TestTorchAllClose)
----------------------------------------------------------------------
Traceback (most recent call last):
File "/home/user/projects/api_guided_testgen/out/bug_detect_gpt4o/exec/zero_shot/torch/torch.allclose.py", line 65, in test_allclose_zero_tolerance
self.assertFalse(torch.allclose(a, b, rtol=0, atol=0))
AssertionError: True is not false
```
### Versions
Collecting environment information...
PyTorch version: 2.5.0
Is debug build: False
CUDA used to build PyTorch: None
ROCM used to build PyTorch: N/A
OS: Ubuntu 22.04.5 LTS (x86_64)
GCC version: (Ubuntu 9.5.0-1ubuntu1~22.04) 9.5.0
Clang version: 14.0.0-1ubuntu1.1
CMake version: version 3.22.1
Libc version: glibc-2.35
Python version: 3.9.21 (main, Dec 11 2024, 16:24:11) [GCC 11.2.0] (64-bit runtime)
Python platform: Linux-6.8.0-52-generic-x86_64-with-glibc2.35
Is CUDA available: False
CUDA runtime version: No CUDA
CUDA_MODULE_LOADING set to: N/A
GPU models and configuration: No CUDA
Nvidia driver version: No CUDA
cuDNN version: No CUDA
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 39 bits physical, 48 bits virtual
Byte Order: Little Endian
CPU(s): 4
On-line CPU(s) list: 0-3
Vendor ID: GenuineIntel
Model name: Intel(R) Xeon(R) E-2224G CPU @ 3.50GHz
CPU family: 6
Model: 158
Thread(s) per core: 1
Core(s) per socket: 4
Socket(s): 1
Stepping: 10
CPU max MHz: 4700.0000
CPU min MHz: 800.0000
BogoMIPS: 6999.82
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc art arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc cpuid aperfmperf pni pclmulqdq dtes64 monitor ds_cpl vmx smx est tm2 ssse3 sdbg fma cx16 xtpr pdcm pcid sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm 3dnowprefetch cpuid_fault epb pti ssbd ibrs ibpb stibp tpr_shadow flexpriority ept vpid ept_ad fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid mpx rdseed adx smap clflushopt intel_pt xsaveopt xsavec xgetbv1 xsaves dtherm ida arat pln pts hwp hwp_notify hwp_act_window hwp_epp vnmi md_clear flush_l1d arch_capabilities
Virtualization: VT-x
L1d cache: 128 KiB (4 instances)
L1i cache: 128 KiB (4 instances)
L2 cache: 1 MiB (4 instances)
L3 cache: 8 MiB (1 instance)
NUMA node(s): 1
NUMA node0 CPU(s): 0-3
Vulnerability Gather data sampling: Mitigation; Microcode
Vulnerability Itlb multihit: KVM: Mitigation: VMX disabled
Vulnerability L1tf: Mitigation; PTE Inversion; VMX conditional cache flushes, SMT disabled
Vulnerability Mds: Mitigation; Clear CPU buffers; SMT disabled
Vulnerability Meltdown: Mitigation; PTI
Vulnerability Mmio stale data: Mitigation; Clear CPU buffers; SMT disabled
Vulnerability Reg file data sampling: Not affected
Vulnerability Retbleed: Mitigation; IBRS
Vulnerability Spec rstack overflow: Not affected
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; IBRS; IBPB conditional; STIBP disabled; RSB filling; PBRSB-eIBRS Not affected; BHI Not affected
Vulnerability Srbds: Mitigation; Microcode
Vulnerability Tsx async abort: Mitigation; TSX disabled
Versions of relevant libraries:
[pip3] numpy==2.0.1
[pip3] torch==2.5.0
[pip3] torchaudio==2.5.0
[pip3] torchvision==0.20.0
[conda] blas 1.0 mkl
[conda] cpuonly 2.0 0 pytorch
[conda] ffmpeg 4.3 hf484d3e_0 pytorch
[conda] libjpeg-turbo 2.0.0 h9bf148f_0 pytorch
[conda] mkl 2023.1.0 h213fc3f_46344
[conda] mkl-service 2.4.0 py39h5eee18b_2
[conda] mkl_fft 1.3.11 py39h5eee18b_0
[conda] mkl_random 1.2.8 py39h1128e8f_0
[conda] numpy 2.0.1 py39h5f9d8c6_1
[conda] numpy-base 2.0.1 py39hb5e798b_1
[conda] pytorch 2.5.0 py3.9_cpu_0 pytorch
[conda] pytorch-mutex 1.0 cpu pytorch
[conda] torchaudio 2.5.0 py39_cpu pytorch
[conda] torchvision 0.20.0 py39_cpu pytorch
| true
|
2,921,271,196
|
Unexpected results when using the torch.save API
|
sjh0849
|
open
|
[
"module: serialization",
"triaged"
] | 0
|
NONE
|
### 🐛 Describe the bug
In test_save_with_different_pickle_protocol, the test iterates over all protocols (0 through pickle.HIGHEST_PROTOCOL) and expects that saving and then loading the tensor works correctly with each protocol. However, for protocol 0, torch.load fails with an AssertionError (inside torch.load’s persistent_load), which suggests that our source code (the new zipfile-based serialization) does not correctly support protocol 0. This is unexpected behavior based on the API.
```python
import unittest
import torch
import io
import os
import pickle
from pathlib import Path


class TestTorchSave(unittest.TestCase):
    def setUp(self):
        # Create a tensor to use in tests
        self.tensor = torch.tensor([0, 1, 2, 3, 4])
        self.filename = 'test_tensor.pt'

    def tearDown(self):
        # Clean up any files created during tests
        if os.path.exists(self.filename):
            os.remove(self.filename)

    def test_save_with_different_pickle_protocol(self):
        # Test saving with a different pickle protocol
        for protocol in range(pickle.HIGHEST_PROTOCOL + 1):
            with self.subTest(pickle_protocol=protocol):
                buffer = io.BytesIO()
                torch.save(self.tensor, buffer, pickle_protocol=protocol)
                buffer.seek(0)
                loaded_tensor = torch.load(buffer)
                self.assertTrue(torch.equal(self.tensor, loaded_tensor))


if __name__ == '__main__':
    unittest.main()
```
```
======================================================================
FAIL: test_save_with_different_pickle_protocol (__main__.TestTorchSave) (pickle_protocol=0)
----------------------------------------------------------------------
Traceback (most recent call last):
File "/home/user/projects/api_guided_testgen/out/bug_detect_gpt4o/exec/basic_rag_apidoc/torch/torch.save.py", line 40, in test_save_with_different_pickle_protocol
loaded_tensor = torch.load(buffer)
File "/home/user/anaconda3/lib/python3.8/site-packages/torch/serialization.py", line 1025, in load
return _load(opened_zipfile,
File "/home/user/anaconda3/lib/python3.8/site-packages/torch/serialization.py", line 1446, in _load
result = unpickler.load()
File "/home/user/anaconda3/lib/python3.8/site-packages/torch/serialization.py", line 1400, in persistent_load
assert isinstance(saved_id, tuple)
AssertionError
```
### Versions
Collecting environment information...
PyTorch version: 2.5.0
Is debug build: False
CUDA used to build PyTorch: None
ROCM used to build PyTorch: N/A
OS: Ubuntu 22.04.5 LTS (x86_64)
GCC version: (Ubuntu 9.5.0-1ubuntu1~22.04) 9.5.0
Clang version: 14.0.0-1ubuntu1.1
CMake version: version 3.22.1
Libc version: glibc-2.35
Python version: 3.9.21 (main, Dec 11 2024, 16:24:11) [GCC 11.2.0] (64-bit runtime)
Python platform: Linux-6.8.0-52-generic-x86_64-with-glibc2.35
Is CUDA available: False
CUDA runtime version: No CUDA
CUDA_MODULE_LOADING set to: N/A
GPU models and configuration: No CUDA
Nvidia driver version: No CUDA
cuDNN version: No CUDA
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 39 bits physical, 48 bits virtual
Byte Order: Little Endian
CPU(s): 4
On-line CPU(s) list: 0-3
Vendor ID: GenuineIntel
Model name: Intel(R) Xeon(R) E-2224G CPU @ 3.50GHz
CPU family: 6
Model: 158
Thread(s) per core: 1
Core(s) per socket: 4
Socket(s): 1
Stepping: 10
CPU max MHz: 4700.0000
CPU min MHz: 800.0000
BogoMIPS: 6999.82
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc art arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc cpuid aperfmperf pni pclmulqdq dtes64 monitor ds_cpl vmx smx est tm2 ssse3 sdbg fma cx16 xtpr pdcm pcid sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm 3dnowprefetch cpuid_fault epb pti ssbd ibrs ibpb stibp tpr_shadow flexpriority ept vpid ept_ad fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid mpx rdseed adx smap clflushopt intel_pt xsaveopt xsavec xgetbv1 xsaves dtherm ida arat pln pts hwp hwp_notify hwp_act_window hwp_epp vnmi md_clear flush_l1d arch_capabilities
Virtualization: VT-x
L1d cache: 128 KiB (4 instances)
L1i cache: 128 KiB (4 instances)
L2 cache: 1 MiB (4 instances)
L3 cache: 8 MiB (1 instance)
NUMA node(s): 1
NUMA node0 CPU(s): 0-3
Vulnerability Gather data sampling: Mitigation; Microcode
Vulnerability Itlb multihit: KVM: Mitigation: VMX disabled
Vulnerability L1tf: Mitigation; PTE Inversion; VMX conditional cache flushes, SMT disabled
Vulnerability Mds: Mitigation; Clear CPU buffers; SMT disabled
Vulnerability Meltdown: Mitigation; PTI
Vulnerability Mmio stale data: Mitigation; Clear CPU buffers; SMT disabled
Vulnerability Reg file data sampling: Not affected
Vulnerability Retbleed: Mitigation; IBRS
Vulnerability Spec rstack overflow: Not affected
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; IBRS; IBPB conditional; STIBP disabled; RSB filling; PBRSB-eIBRS Not affected; BHI Not affected
Vulnerability Srbds: Mitigation; Microcode
Vulnerability Tsx async abort: Mitigation; TSX disabled
Versions of relevant libraries:
[pip3] numpy==2.0.1
[pip3] torch==2.5.0
[pip3] torchaudio==2.5.0
[pip3] torchvision==0.20.0
[conda] blas 1.0 mkl
[conda] cpuonly 2.0 0 pytorch
[conda] ffmpeg 4.3 hf484d3e_0 pytorch
[conda] libjpeg-turbo 2.0.0 h9bf148f_0 pytorch
[conda] mkl 2023.1.0 h213fc3f_46344
[conda] mkl-service 2.4.0 py39h5eee18b_2
[conda] mkl_fft 1.3.11 py39h5eee18b_0
[conda] mkl_random 1.2.8 py39h1128e8f_0
[conda] numpy 2.0.1 py39h5f9d8c6_1
[conda] numpy-base 2.0.1 py39hb5e798b_1
[conda] pytorch 2.5.0 py3.9_cpu_0 pytorch
[conda] pytorch-mutex 1.0 cpu pytorch
[conda] torchaudio 2.5.0 py39_cpu pytorch
[conda] torchvision 0.20.0 py39_cpu pytorch
cc @mruberry @mikaylagawarecki
| true
|
2,921,262,423
|
Inconsistent results when using the torch.jit.script API compared to the API documentation.
|
sjh0849
|
open
|
[
"oncall: jit"
] | 0
|
NONE
|
### 🐛 Describe the bug
Expects an AttributeError to be raised, since `ignored_method` is skipped during compilation as per the API documentation.
```python
import unittest

import torch
from torch import nn


class TestTorchJitScript(unittest.TestCase):
    def test_script_module_with_ignored_method(self):
        class IgnoredMethodModule(nn.Module):
            def forward(self, x):
                return x * 2

            @torch.jit.ignore
            def ignored_method(self, x):
                return x * 3

        module = IgnoredMethodModule()
        scripted_module = torch.jit.script(module)
        input_tensor = torch.tensor(5)
        self.assertEqual(scripted_module(input_tensor), module(input_tensor))
        # Ensure ignored method is not part of the scripted module
        with self.assertRaises(AttributeError):
            scripted_module.ignored_method(input_tensor)
```
```
======================================================================
FAIL: test_script_module_with_ignored_method (__main__.TestTorchJitScript)
----------------------------------------------------------------------
Traceback (most recent call last):
File "/home/user/projects/api_guided_testgen/out/bug_detect_gpt4o/exec/basic_rag_apidoc/torch/torch.jit.script.py", line 70, in test_script_module_with_ignored_method
scripted_module.ignored_method(input_tensor)
AssertionError: AttributeError not raised
```
### Versions
Collecting environment information...
PyTorch version: 2.5.0
Is debug build: False
CUDA used to build PyTorch: None
ROCM used to build PyTorch: N/A
OS: Ubuntu 22.04.5 LTS (x86_64)
GCC version: (Ubuntu 9.5.0-1ubuntu1~22.04) 9.5.0
Clang version: 14.0.0-1ubuntu1.1
CMake version: version 3.22.1
Libc version: glibc-2.35
Python version: 3.9.21 (main, Dec 11 2024, 16:24:11) [GCC 11.2.0] (64-bit runtime)
Python platform: Linux-6.8.0-52-generic-x86_64-with-glibc2.35
Is CUDA available: False
CUDA runtime version: No CUDA
CUDA_MODULE_LOADING set to: N/A
GPU models and configuration: No CUDA
Nvidia driver version: No CUDA
cuDNN version: No CUDA
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 39 bits physical, 48 bits virtual
Byte Order: Little Endian
CPU(s): 4
On-line CPU(s) list: 0-3
Vendor ID: GenuineIntel
Model name: Intel(R) Xeon(R) E-2224G CPU @ 3.50GHz
CPU family: 6
Model: 158
Thread(s) per core: 1
Core(s) per socket: 4
Socket(s): 1
Stepping: 10
CPU max MHz: 4700.0000
CPU min MHz: 800.0000
BogoMIPS: 6999.82
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc art arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc cpuid aperfmperf pni pclmulqdq dtes64 monitor ds_cpl vmx smx est tm2 ssse3 sdbg fma cx16 xtpr pdcm pcid sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm 3dnowprefetch cpuid_fault epb pti ssbd ibrs ibpb stibp tpr_shadow flexpriority ept vpid ept_ad fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid mpx rdseed adx smap clflushopt intel_pt xsaveopt xsavec xgetbv1 xsaves dtherm ida arat pln pts hwp hwp_notify hwp_act_window hwp_epp vnmi md_clear flush_l1d arch_capabilities
Virtualization: VT-x
L1d cache: 128 KiB (4 instances)
L1i cache: 128 KiB (4 instances)
L2 cache: 1 MiB (4 instances)
L3 cache: 8 MiB (1 instance)
NUMA node(s): 1
NUMA node0 CPU(s): 0-3
Vulnerability Gather data sampling: Mitigation; Microcode
Vulnerability Itlb multihit: KVM: Mitigation: VMX disabled
Vulnerability L1tf: Mitigation; PTE Inversion; VMX conditional cache flushes, SMT disabled
Vulnerability Mds: Mitigation; Clear CPU buffers; SMT disabled
Vulnerability Meltdown: Mitigation; PTI
Vulnerability Mmio stale data: Mitigation; Clear CPU buffers; SMT disabled
Vulnerability Reg file data sampling: Not affected
Vulnerability Retbleed: Mitigation; IBRS
Vulnerability Spec rstack overflow: Not affected
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; IBRS; IBPB conditional; STIBP disabled; RSB filling; PBRSB-eIBRS Not affected; BHI Not affected
Vulnerability Srbds: Mitigation; Microcode
Vulnerability Tsx async abort: Mitigation; TSX disabled
Versions of relevant libraries:
[pip3] numpy==2.0.1
[pip3] torch==2.5.0
[pip3] torchaudio==2.5.0
[pip3] torchvision==0.20.0
[conda] blas 1.0 mkl
[conda] cpuonly 2.0 0 pytorch
[conda] ffmpeg 4.3 hf484d3e_0 pytorch
[conda] libjpeg-turbo 2.0.0 h9bf148f_0 pytorch
[conda] mkl 2023.1.0 h213fc3f_46344
[conda] mkl-service 2.4.0 py39h5eee18b_2
[conda] mkl_fft 1.3.11 py39h5eee18b_0
[conda] mkl_random 1.2.8 py39h1128e8f_0
[conda] numpy 2.0.1 py39h5f9d8c6_1
[conda] numpy-base 2.0.1 py39hb5e798b_1
[conda] pytorch 2.5.0 py3.9_cpu_0 pytorch
[conda] pytorch-mutex 1.0 cpu pytorch
[conda] torchaudio 2.5.0 py39_cpu pytorch
[conda] torchvision 0.20.0 py39_cpu pytorch
cc @EikanWang @jgong5 @wenzhe-nrv @sanchitintel
| true
|
2,921,255,893
|
Inconsistent result of the torch.equal API compared to the API documentation.
|
sjh0849
|
closed
|
[
"module: docs",
"triaged",
"module: python frontend"
] | 3
|
NONE
|
### 🐛 Describe the bug
Expected this to assert false since the tensors have different data types (based on the documentation, which indicates they should have the same elements), but an assertion error is thrown.
```python
import unittest
import torch

class TestTorchEqual(unittest.TestCase):
    def test_different_dtypes(self):
        # Test with tensors of different data types
        tensor1 = torch.tensor([1, 2, 3], dtype=torch.int32)
        tensor2 = torch.tensor([1, 2, 3], dtype=torch.float32)
        self.assertFalse(torch.equal(tensor1, tensor2))
```
```
======================================================================
FAIL: test_different_dtypes (__main__.TestTorchEqual)
----------------------------------------------------------------------
Traceback (most recent call last):
File "/home/user/projects/api_guided_testgen/out/bug_detect_gpt4o/exec/basic_rag_apidoc/torch/torch.equal.py", line 28, in test_different_dtypes
self.assertFalse(torch.equal(tensor1, tensor2))
AssertionError: True is not false
```
### Versions
Collecting environment information...
PyTorch version: 2.5.0
Is debug build: False
CUDA used to build PyTorch: None
ROCM used to build PyTorch: N/A
OS: Ubuntu 22.04.5 LTS (x86_64)
GCC version: (Ubuntu 9.5.0-1ubuntu1~22.04) 9.5.0
Clang version: 14.0.0-1ubuntu1.1
CMake version: version 3.22.1
Libc version: glibc-2.35
Python version: 3.9.21 (main, Dec 11 2024, 16:24:11) [GCC 11.2.0] (64-bit runtime)
Python platform: Linux-6.8.0-52-generic-x86_64-with-glibc2.35
Is CUDA available: False
CUDA runtime version: No CUDA
CUDA_MODULE_LOADING set to: N/A
GPU models and configuration: No CUDA
Nvidia driver version: No CUDA
cuDNN version: No CUDA
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 39 bits physical, 48 bits virtual
Byte Order: Little Endian
CPU(s): 4
On-line CPU(s) list: 0-3
Vendor ID: GenuineIntel
Model name: Intel(R) Xeon(R) E-2224G CPU @ 3.50GHz
CPU family: 6
Model: 158
Thread(s) per core: 1
Core(s) per socket: 4
Socket(s): 1
Stepping: 10
CPU max MHz: 4700.0000
CPU min MHz: 800.0000
BogoMIPS: 6999.82
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc art arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc cpuid aperfmperf pni pclmulqdq dtes64 monitor ds_cpl vmx smx est tm2 ssse3 sdbg fma cx16 xtpr pdcm pcid sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm 3dnowprefetch cpuid_fault epb pti ssbd ibrs ibpb stibp tpr_shadow flexpriority ept vpid ept_ad fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid mpx rdseed adx smap clflushopt intel_pt xsaveopt xsavec xgetbv1 xsaves dtherm ida arat pln pts hwp hwp_notify hwp_act_window hwp_epp vnmi md_clear flush_l1d arch_capabilities
Virtualization: VT-x
L1d cache: 128 KiB (4 instances)
L1i cache: 128 KiB (4 instances)
L2 cache: 1 MiB (4 instances)
L3 cache: 8 MiB (1 instance)
NUMA node(s): 1
NUMA node0 CPU(s): 0-3
Vulnerability Gather data sampling: Mitigation; Microcode
Vulnerability Itlb multihit: KVM: Mitigation: VMX disabled
Vulnerability L1tf: Mitigation; PTE Inversion; VMX conditional cache flushes, SMT disabled
Vulnerability Mds: Mitigation; Clear CPU buffers; SMT disabled
Vulnerability Meltdown: Mitigation; PTI
Vulnerability Mmio stale data: Mitigation; Clear CPU buffers; SMT disabled
Vulnerability Reg file data sampling: Not affected
Vulnerability Retbleed: Mitigation; IBRS
Vulnerability Spec rstack overflow: Not affected
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; IBRS; IBPB conditional; STIBP disabled; RSB filling; PBRSB-eIBRS Not affected; BHI Not affected
Vulnerability Srbds: Mitigation; Microcode
Vulnerability Tsx async abort: Mitigation; TSX disabled
Versions of relevant libraries:
[pip3] numpy==2.0.1
[pip3] torch==2.5.0
[pip3] torchaudio==2.5.0
[pip3] torchvision==0.20.0
[conda] blas 1.0 mkl
[conda] cpuonly 2.0 0 pytorch
[conda] ffmpeg 4.3 hf484d3e_0 pytorch
[conda] libjpeg-turbo 2.0.0 h9bf148f_0 pytorch
[conda] mkl 2023.1.0 h213fc3f_46344
[conda] mkl-service 2.4.0 py39h5eee18b_2
[conda] mkl_fft 1.3.11 py39h5eee18b_0
[conda] mkl_random 1.2.8 py39h1128e8f_0
[conda] numpy 2.0.1 py39h5f9d8c6_1
[conda] numpy-base 2.0.1 py39hb5e798b_1
[conda] pytorch 2.5.0 py3.9_cpu_0 pytorch
[conda] pytorch-mutex 1.0 cpu pytorch
[conda] torchaudio 2.5.0 py39_cpu pytorch
[conda] torchvision 0.20.0 py39_cpu pytorch
cc @svekars @sekyondaMeta @AlannaBurke @albanD
| true
|
2,921,208,558
|
[MPS] Add inductor support for `i1e`.
|
dcci
|
closed
|
[
"Merged",
"topic: not user facing",
"module: mps",
"ciflow/mps",
"module: inductor",
"ciflow/inductor"
] | 3
|
MEMBER
|
cc @kulinseth @albanD @malfet @DenisVieriu97 @jhavukainen @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
| true
|
2,921,201,591
|
Skip some tests not using gradcheck on slowgradcheck
|
soulitzer
|
closed
|
[
"Merged",
"ciflow/trunk",
"topic: not user facing"
] | 6
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #149220
| true
|
2,921,188,983
|
Optimize pack_padded_sequence backward function
|
abdogad
|
closed
|
[] | 2
|
NONE
| null | true
|
2,921,173,356
|
cd: Add no-cache for test binaries
|
seemethere
|
closed
|
[
"Merged",
"ciflow/trunk",
"topic: not user facing"
] | 3
|
MEMBER
|
This is to make it so that we don't experience issues like https://github.com/pytorch/vision/actions/runs/13861462856/job/38795684317#step:13:212
```
ERROR: THESE PACKAGES DO NOT MATCH THE HASHES FROM THE REQUIREMENTS FILE. If you have updated the package versions, please update the hashes. Otherwise, examine the package contents carefully; someone may have tampered with them.
unknown package:
Expected sha256 8e34a6f02ac5a63763251953063a19ba9df855ac2c8a13ef409dfef708e2ba26
Got 341156cc5067488565c1e103be6e95105b0fc0d87d8ac24ff8891f63fd33216f
```
| true
|
2,921,111,952
|
[EZ] Fix typo in UnaryOps.mm
|
malfet
|
closed
|
[
"Merged",
"topic: not user facing",
"ciflow/mps"
] | 3
|
CONTRIBUTOR
|
s/imput/input/
| true
|
2,921,016,360
|
[MPSInductor] Add support for atan2
|
malfet
|
closed
|
[
"Merged",
"topic: not user facing",
"ciflow/mps",
"module: inductor",
"ciflow/inductor"
] | 3
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #149216
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
| true
|
2,921,006,381
|
[WIP] rewrite should_swap
|
pianpwk
|
closed
|
[
"release notes: fx",
"fx",
"ciflow/inductor"
] | 1
|
CONTRIBUTOR
|
Fixes #ISSUE_NUMBER
cc @ezyang @SherlockNoMad @EikanWang @jgong5 @wenzhe-nrv
| true
|
2,920,998,515
|
BC fix for AOTIModelPackageLoader() constructor defaults
|
pytorchbot
|
closed
|
[
"open source",
"ciflow/inductor"
] | 2
|
COLLABORATOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #149082
The default value for `run_single_threaded` was wrongly specified in the .cpp file instead of the header, breaking C++-side instantiation of `AOTIModelPackageLoader` with no arguments. This PR fixes this and adds a test for the use case of running with `AOTIModelPackageLoader` instead of `AOTIModelContainerRunner` on the C++ side.
cc @desertfire @chenyang78 @penguinwu @yushangdi @benjaminglass1
| true
|
2,920,967,320
|
[fbgemm] Update FBGEMM
|
q10
|
open
|
[
"ciflow/trunk",
"topic: not user facing"
] | 4
|
CONTRIBUTOR
|
- Update pinned version of FBGEMM to bring in compilation fixes for https://github.com/pytorch/pytorch/issues/129358
Fixes #129358
| true
|
2,920,911,870
|
[Bugfix] Skip non-tensor user inputs when calling create_graph_signature with trace_joint=True
|
bremerm31
|
closed
|
[
"fb-exported",
"topic: bug fixes",
"topic: not user facing",
"ciflow/inductor"
] | 5
|
NONE
|
Summary:
# Context
Encountered a bug while trying to export the joint graph for a module.
# Reproducer
```py
import torch
from torch._functorch.aot_autograd import aot_export_module


class mod(torch.nn.Module):
    def forward(self, ph: object, t: torch.Tensor):
        return (t.sum(),)


m = mod()
t = torch.rand(100, requires_grad=True)
g, sig = aot_export_module(
    m,
    args=(
        None,
        t,
    ),
    trace_joint=True,
    output_loss_index=0,
)
```
Before this change, crashes with
```
AttributeError: 'NoneType' object has no attribute 'requires_grad'
```
After this diff, the reproducer runs to completion
Differential Revision: D71203331
| true
|
2,920,893,083
|
Support windows in C++ shape guards
|
isuruf
|
closed
|
[
"open source",
"Merged",
"ciflow/trunk",
"module: dynamo",
"ciflow/inductor",
"release notes: dynamo"
] | 3
|
COLLABORATOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #140756
* __->__ #149211
* #149197
* #149149
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @chauhang @amjames
| true
|
2,920,835,826
|
TEST: ir.py without ir.py
|
rec
|
closed
|
[
"module: rocm",
"open source",
"module: inductor",
"ciflow/inductor",
"release notes: export"
] | 1
|
COLLABORATOR
|
TEST, please ignore.
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #149210
cc @jeffdaily @sunway513 @jithunnair-amd @pruthvistony @ROCmSupport @dllehr-amd @jataylo @hongxiayang @naromero77amd @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
| true
|
2,920,819,354
|
Wrong argument in `CompileCounterWithBackend` when running `torch.utils.benchmark.utils.compile.bench_all`
|
GdoongMathew
|
open
|
[
"triaged",
"module: benchmark"
] | 0
|
CONTRIBUTOR
|
### 🐛 Describe the bug
When benchmarking forward speed with a different inductor mode, an exception is raised and no benchmarking result is produced.
```python
from torchvision.models import resnet18
from torch.utils.benchmark.utils.compile import benchmark_compile
import torch
x = torch.zeros((1, 3, 64, 64), device="cuda")
model = resnet18()
model.cuda().eval()
benchmark_compile(model=model, sample_input=x, backend="inductor", mode="reduce-overhead")
```
Error message:
```
backend='<torch._dynamo.testing.CompileCounterWithBackend object at 0x7f75f3b7bfa0>' raised:
TypeError: CompileCounterWithBackend.__call__() got an unexpected keyword argument 'mode'
Set TORCH_LOGS="+dynamo" and TORCHDYNAMO_VERBOSE=1 for more information
You can suppress this exception and fall back to eager by setting:
import torch._dynamo
torch._dynamo.config.suppress_errors = True
Failed to compile inductor with mode reduce-overhead
(None, None)
```
## Solution
The reason, from what I understand, is that the backend is pre-wrapped with the `CompileCounterWithBackend` class, which does not support additional arguments other than the `GraphModule` and the `sample_input`. One way to solve this is to use `_TorchCompileWrapper` / `_TorchCompileInductorWrapper` and let them handle the extra arguments.
```python
# In `torch._dynamo.testing`
class CompileCounterWithBackend:
    def __init__(
        self,
        backend: str,
        # torch.compile supported arguments.
        mode: str | None = None,
        options: dict | None = None,
        dynamic: bool | None = None,
    ) -> None:
        from torch import _TorchCompileWrapper, _TorchCompileInductorWrapper

        self.frame_count = 0
        self.op_count = 0
        self.backend = (
            _TorchCompileInductorWrapper(
                mode=mode,
                options=options,
                dynamic=dynamic,
            )
            if backend == "inductor"
            else _TorchCompileWrapper(
                backend,
                mode=mode,
                options=options,
                dynamic=dynamic,
            )
        )  # -> added
        self.graphs: List[torch.fx.GraphModule] = []

    def __call__(
        self,
        gm: torch.fx.GraphModule,
        example_inputs: List[torch.Tensor],
        **kwargs,
    ) -> Callable[..., Any]:
        self.frame_count += 1
        for node in gm.graph.nodes:
            if "call" in node.op:
                self.op_count += 1
        self.graphs.append(gm)
        return self.backend(gm, example_inputs)
```
And in the `benchmark_compile` function, we'd need to pass the kwargs to the `CompileCounterWithBackend`.
```python
def benchmark_compile(
    model: Union[torch.nn.Module, Callable],
    sample_input: Union[torch.Tensor, Any],
    num_iters: int = 5,
    backend: Optional[str] = None,
    optimizer: Optional[torch.optim.Optimizer] = None,
    loss_fn: Union[torch.nn.Module, Callable, None] = None,
    **compile_kwargs: Any,
):
    """
    Use this utility to benchmark torch.compile
    """
    if backend:
        try:
            torch._dynamo.reset()
            compile_counter_with_backend = CompileCounterWithBackend(backend, **compile_kwargs)
            opt_model = torch.compile(model, backend=compile_counter_with_backend)
```
<details>
<summary>Environment Info</summary>
### Versions
PyTorch version: 2.6.0+cu124
Is debug build: False
CUDA used to build PyTorch: 12.4
ROCM used to build PyTorch: N/A
OS: Ubuntu 22.04.3 LTS (x86_64)
GCC version: (Ubuntu 11.4.0-1ubuntu1~22.04) 11.4.0
Clang version: Could not collect
CMake version: version 3.22.1
Libc version: glibc-2.35
Python version: 3.10.12 (main, Feb 4 2025, 14:57:36) [GCC 11.4.0] (64-bit runtime)
Python platform: Linux-5.15.133.1-microsoft-standard-WSL2-x86_64-with-glibc2.35
Is CUDA available: True
CUDA runtime version: Could not collect
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration: GPU 0: NVIDIA GeForce RTX 3050 4GB Laptop GPU
Nvidia driver version: 565.90
cuDNN version: Could not collect
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 39 bits physical, 48 bits virtual
Byte Order: Little Endian
CPU(s): 20
On-line CPU(s) list: 0-19
Vendor ID: GenuineIntel
Model name: 13th Gen Intel(R) Core(TM) i7-13700H
CPU family: 6
Model: 186
Thread(s) per core: 2
Core(s) per socket: 10
Socket(s): 1
Stepping: 2
BogoMIPS: 5836.83
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ss ht syscall nx pdpe1gb rdtscp lm constant_tsc rep_good nopl xtopology tsc_reliable nonstop_tsc cpuid pni pclmulqdq vmx ssse3 fma cx16 sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand hypervisor lahf_lm abm 3dnowprefetch ssbd ibrs ibpb stibp ibrs_enhanced tpr_shadow vnmi ept vpid ept_ad fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid rdseed adx smap clflushopt clwb sha_ni xsaveopt xsavec xgetbv1 xsaves avx_vnni umip waitpkg gfni vaes vpclmulqdq rdpid movdiri movdir64b fsrm md_clear serialize flush_l1d arch_capabilities
Virtualization: VT-x
Hypervisor vendor: Microsoft
Virtualization type: full
L1d cache: 480 KiB (10 instances)
L1i cache: 320 KiB (10 instances)
L2 cache: 12.5 MiB (10 instances)
L3 cache: 24 MiB (1 instance)
Vulnerability Gather data sampling: Not affected
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Not affected
Vulnerability Retbleed: Mitigation; Enhanced IBRS
Vulnerability Spec rstack overflow: Not affected
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl and seccomp
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Enhanced IBRS, IBPB conditional, RSB filling, PBRSB-eIBRS SW sequence
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Versions of relevant libraries:
[pip3] lightning==2.5.0.post0
[pip3] lightning-utilities==0.11.7
[pip3] mypy-extensions==1.0.0
[pip3] numpy==1.26.4
[pip3] onnx==1.16.2
[pip3] onnx_graphsurgeon==0.5.6
[pip3] onnxconverter-common==1.14.0
[pip3] onnxmltools==1.13.0
[pip3] onnxruntime==1.21.0
[pip3] onnxruntime-gpu==1.20.2
[pip3] onnxruntime-training==1.19.2
[pip3] onnxscript==0.2.2
[pip3] pytorch-lightning==2.4.0
[pip3] torch==2.6.0
[pip3] torch_tensorrt==2.6.0
[pip3] torchao==0.5.0
[pip3] torchaudio==2.2.1+cu121
[pip3] torchmetrics==1.6.2
[pip3] torchprofile==0.0.4
[pip3] torchvision==0.21.0
[pip3] triton==3.2.0
[conda] Could not collect
</details>
| true
|
2,920,818,695
|
op should NOT be static in aoti_torch_call_dispatcher
|
janeyx99
|
closed
|
[
"Merged",
"ciflow/trunk",
"release notes: cpp",
"ciflow/inductor"
] | 5
|
CONTRIBUTOR
|
aoti_torch_call_dispatcher is meant to call different ops, so the op must not be static. Otherwise, every call to this API will call the first op that was ever called, which is not the intended behavior of any human being.
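A hypothetical Python sketch of the failure pattern described above (the real code is C++; the names here are invented for illustration): a once-initialized function-local cache keeps returning whichever op was resolved first.
```python
_OPS = {"aten::add": lambda a, b: a + b, "aten::mul": lambda a, b: a * b}

_cached_op = None  # plays the role of the C++ function-local `static`

def call_dispatcher(op_name, a, b):
    global _cached_op
    if _cached_op is None:
        _cached_op = _OPS[op_name]  # resolved only on the first call
    return _cached_op(a, b)         # wrong op for any later op_name

print(call_dispatcher("aten::add", 2, 3))  # 5
print(call_dispatcher("aten::mul", 2, 3))  # still 5, not 6
```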
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #149230
* #149052
* __->__ #149208
| true
|
2,920,798,329
|
Cache the get_device_module result
|
egienvalue
|
closed
|
[
"fb-exported",
"Merged",
"ciflow/trunk",
"topic: not user facing"
] | 6
|
CONTRIBUTOR
|
Summary: As title.
Test Plan: OSS CIs.
Reviewed By: chaos5958
Differential Revision: D71084180
| true
|
2,920,782,658
|
debug ival swap
|
avikchaudhuri
|
closed
|
[
"fb-exported",
"Merged",
"ciflow/trunk",
"release notes: export"
] | 14
|
CONTRIBUTOR
|
Summary:
Recall that we use "ivals" to track intermediate values of mutations during unflattening. Previously, for each such intermediate value, we would create a hidden shared attribute that would be updated / read by respective submodules.
Unfortunately this scheme doesn't work when some but not all of those submodules are swapped out. This is because the swapped in submodules have no knowledge of these hidden attributes. Thus the submodules that are not swapped out end up reading / updating dangling state.
This PR does away with these hidden attributes. Instead, we directly read the underlying buffer or placeholder that was updated, and update those underlying buffers and placeholders in place. This makes the graphs look much closer to their eager origins.
Test Plan: added some tests, ensured existing tests pass
Differential Revision: D71203469
| true
|
2,920,759,083
|
Parameter not updating when FSDP2 model is used before optimizer creation
|
zhoukezi
|
open
|
[
"oncall: distributed",
"triaged",
"module: fsdp"
] | 1
|
NONE
|
### 🐛 Describe the bug
If calculations are performed using a FSDP2 model after calling `fully_shard` and before creating the optimizer, the parameters fail to update correctly. The parameters captured by the optimizer seem to differ from those in the training loop. Non-parallel and DDP are not affected. In larger multi-layer Transformers, only some parameters might be impacted. It is unclear which specific parameters are affected.
Example
```python
import os
from datetime import timedelta

import torch
import torch.distributed as dist
from torch.distributed.fsdp import fully_shard


class DummyModel(torch.nn.Module):
    def __init__(self):
        super().__init__()
        self.param = torch.nn.Parameter(torch.ones(8))

    def forward(self, x):
        return self.param * x


rank = int(os.environ["RANK"])
torch_device = torch.device("cuda", rank)
torch.set_default_device(torch_device)
torch.cuda.set_device(rank)
dist.init_process_group(backend="nccl", timeout=timedelta(seconds=5), device_id=torch_device)

if rank == 0:
    model = DummyModel()
    optim = torch.optim.SGD(model.parameters(), lr=0.1)
    loss = model(torch.ones(8)).sum()
    loss.backward()
    optim.step()
    print("Reference", model.param)

# DDP
model = DummyModel()
model = torch.nn.parallel.DistributedDataParallel(model)
model(torch.ones(8))  # This line
optim = torch.optim.SGD(model.parameters(), lr=0.1)
model.train()
loss = model(torch.ones(8)).sum()
loss.backward()
optim.step()
if rank == 0:
    print("DDP", model.module.param)

# FSDP2
model = DummyModel()
fully_shard(model)
model(torch.ones(8))  # This line
optim = torch.optim.SGD(model.parameters(), lr=0.1)
model.train()
loss = model(torch.ones(8)).sum()
loss.backward()
optim.step()
full = model.param.full_tensor()
if rank == 0:
    print("FSDP2", full)

dist.destroy_process_group()

# Reference Parameter containing:
# tensor([0.9000, 0.9000, 0.9000, 0.9000, 0.9000, 0.9000, 0.9000, 0.9000],
#        device='cuda:0', requires_grad=True)
# DDP Parameter containing:
# tensor([0.9000, 0.9000, 0.9000, 0.9000, 0.9000, 0.9000, 0.9000, 0.9000],
#        device='cuda:0', requires_grad=True)
# FSDP2 tensor([1., 1., 1., 1., 1., 1., 1., 1.], device='cuda:0',
#        grad_fn=<_ToTorchTensorBackward>)
```
### Versions
The `collect_env.py` has crashed. I'm using `uv`, and there is no `pip` in the environment.
```
Collecting environment information...
Traceback (most recent call last):
File "../collect_env.py", line 692, in <module>
main()
File "../collect_env.py", line 675, in main
output = get_pretty_env_info()
^^^^^^^^^^^^^^^^^^^^^
File "../collect_env.py", line 670, in get_pretty_env_info
return pretty_str(get_env_info())
^^^^^^^^^^^^^^
File "../collect_env.py", line 495, in get_env_info
pip_version, pip_list_output = get_pip_packages(run_lambda)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "../collect_env.py", line 450, in get_pip_packages
for line in out.splitlines()
^^^^^^^^^^^^^^
AttributeError: 'NoneType' object has no attribute 'splitlines'
```
cc @H-Huang @awgu @wanchaol @fegin @fduwjj @wz337 @wconstab @d4l3k @zhaojuanmao @mrshenli @rohan-varma @chauhang @mori360 @kwen2501 @c-p-i-o
| true
|
2,920,736,223
|
[MPS] Modify a test to test the correct function.
|
dcci
|
closed
|
[
"Merged",
"topic: not user facing",
"module: mps",
"ciflow/mps",
"module: inductor",
"ciflow/inductor"
] | 4
|
MEMBER
|
cc @kulinseth @albanD @malfet @DenisVieriu97 @jhavukainen @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
| true
|
2,920,700,015
|
[MPS] Add support for `i1e`
|
malfet
|
closed
|
[
"Merged",
"topic: improvements",
"release notes: mps",
"ciflow/mps"
] | 4
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #149203
Followup after https://github.com/pytorch/pytorch/pull/149174
| true
|
2,920,696,794
|
[prototype] in memory checkpoint example
|
H-Huang
|
open
|
[
"oncall: distributed",
"release notes: distributed (checkpoint)"
] | 2
|
MEMBER
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #149202
The problem:
Reading from disk / network file system is expensive. If we can read from memory then loading checkpoints is a lot faster and we can do it more frequently.
The idea:
1. Keep a process alive which has the checkpoint in memory
2. In cases where the Trainer dies (but the host does not), restart training, reconfigure the process group, and retrieve the checkpoint from memory.
3. Reshard / replicate as necessary using DCP. Update to use TorchFT's pg transport, which uses P2P ops.
Look at the example script https://github.com/pytorch/pytorch/pull/149202/files#diff-a41fce34729130d2f85e2eebdf2180353d2faaf0213ec778934ed075cc382a56 for a rough idea.
----------
Brainstorming docs: https://fburl.com/gdoc/w6x32v9a, https://fburl.com/gdoc/se7kh86g
Potential impact and savings: https://fburl.com/gdoc/5pcd2lkm
cc @awgu @kwen2501 @wanchaol @fegin @fduwjj @wz337 @wconstab @d4l3k @c-p-i-o
| true
|
2,920,631,200
|
Segmentation fault when used with FAISS on ARM
|
SylviaZiyuZhang
|
open
|
[
"module: binaries",
"module: crash",
"triaged",
"module: macos",
"module: openmp",
"module: arm"
] | 5
|
NONE
|
### 🐛 Describe the bug
TLDR: When `faiss` is imported, construction of `nn` objects yields segmentation faults.
```
thread #4, stop reason = EXC_BAD_ACCESS (code=1, address=0x8)
frame #0: 0x00000001015e5828 libomp.dylib`void __kmp_suspend_64<false, true>(int, kmp_flag_64<false, true>*) + 44
libomp.dylib`__kmp_suspend_64<false, true>:
-> 0x1015e5828 <+44>: ldr x19, [x8, w0, sxtw #3]
0x1015e582c <+48>: mov x0, x19
0x1015e5830 <+52>: bl 0x1015e4dc8 ; __kmp_suspend_initialize_thread
0x1015e5834 <+56>: add x20, x19, #0x540
```
## Minimal Example
```python
import faiss
import torch
from torch import nn
patch_size = 14
input_resolution = 224
width = 1024
scale = 0.03125
positional_embedding = nn.Parameter(scale * torch.randn((input_resolution // patch_size) ** 2 + 1, width))
```
`lldb` debugger backtracking output
```
* thread #2, stop reason = EXC_BAD_ACCESS (code=1, address=0x8)
* frame #0: 0x00000001015e5828 libomp.dylib`void __kmp_suspend_64<false, true>(int, kmp_flag_64<false, true>*) + 44
frame #1: 0x0000000101ad5520 libomp.dylib`kmp_flag_64<false, true>::wait(kmp_info*, int, void*) + 1880
frame #2: 0x0000000101ad0560 libomp.dylib`__kmp_hyper_barrier_release(barrier_type, kmp_info*, int, int, int, void*) + 184
frame #3: 0x0000000101ad40e8 libomp.dylib`__kmp_fork_barrier(int, int) + 628
frame #4: 0x0000000101ab0e14 libomp.dylib`__kmp_launch_thread + 340
frame #5: 0x0000000101aef00c libomp.dylib`__kmp_launch_worker(void*) + 280
frame #6: 0x000000018c891034 libsystem_pthread.dylib`_pthread_start + 136
```
The problem disappears when `import faiss` is removed. The packages appear to ship different OpenMP runtimes that interfere with each other.
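One way to check that hypothesis (a hedged diagnostic sketch, not part of the original report; it assumes the third-party `threadpoolctl` package is installed):
```python
import threadpoolctl

import faiss  # noqa: F401  -- imported first, as in the reproducer above
import torch  # noqa: F401

# Lists the threadpool/OpenMP runtimes loaded into the process; two distinct
# libomp entries would support the duplicate-runtime theory above.
for info in threadpoolctl.threadpool_info():
    print(info.get("user_api"), info.get("internal_api"), info.get("filepath"))
```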
### Versions
Collecting environment information...
PyTorch version: 2.6.0
Is debug build: False
CUDA used to build PyTorch: None
ROCM used to build PyTorch: N/A
OS: macOS 14.1 (arm64)
GCC version: Could not collect
Clang version: 15.0.0 (clang-1500.3.9.4)
CMake version: version 3.28.3
Libc version: N/A
Python version: 3.11.8 (main, Nov 14 2024, 22:46:31) [Clang 15.0.0 (clang-1500.3.9.4)] (64-bit runtime)
Python platform: macOS-14.1-arm64-arm-64bit
Is CUDA available: False
CUDA runtime version: No CUDA
CUDA_MODULE_LOADING set to: N/A
GPU models and configuration: No CUDA
Nvidia driver version: No CUDA
cuDNN version: No CUDA
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Apple M3 Pro
Versions of relevant libraries:
[pip3] numpy==2.2.3
[pip3] open_clip_torch==2.31.0
[pip3] torch==2.6.0
[pip3] torchvision==0.21.0
[conda] numpy 1.26.4 py312h7f4fdc5_0
[conda] numpy-base 1.26.4 py312he047099_0
[conda] numpydoc 1.7.0 py312hca03da5_0
cc @seemethere @malfet @osalpekar @atalman @albanD @snadampal @milpuz01
| true
|
2,920,592,830
|
broadcast_object_list casting group_src to global_src is not safe when the group is not a subgroup of the global group
|
zhc7
|
closed
|
[
"oncall: distributed",
"module: c10d"
] | 6
|
CONTRIBUTOR
|
### 🐛 Describe the bug
In a torch distributed environment, it is possible to create a new process group that is larger than the global group. Example:
https://github.com/OpenRLHF/OpenRLHF/blob/f9a8fe2d78c31181aa496731a4858a9a95316927/openrlhf/utils/distributed_util.py#L19
This suggests that a group rank may actually be larger than any global rank, so a bijection between group ranks and global ranks is not guaranteed.
In this case, communicating through `broadcast_object_list` with `group_src` specified does not work as expected. This is because `broadcast_object_list` casts `group_src` into `global_src` here:
https://github.com/pytorch/pytorch/blob/71795f159e9f802acfad7235faf2939c2cf3e8d7/torch/distributed/distributed_c10d.py#L3481
This function calls `broadcast` here:
https://github.com/pytorch/pytorch/blob/71795f159e9f802acfad7235faf2939c2cf3e8d7/torch/distributed/distributed_c10d.py#L3506
and here
https://github.com/pytorch/pytorch/blob/71795f159e9f802acfad7235faf2939c2cf3e8d7/torch/distributed/distributed_c10d.py#L3523
In the `broadcast` function, `group_src` is ultimately used:
https://github.com/pytorch/pytorch/blob/71795f159e9f802acfad7235faf2939c2cf3e8d7/torch/distributed/distributed_c10d.py#L2712
So it is safer to use `group_src` directly in `broadcast_object_list` as well, instead of `global_src`. I'm willing to submit a PR if this is confirmed.
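A toy illustration (my own sketch, not PyTorch code) of why a group-rank to global-rank round trip cannot be a bijection when the group is larger than the default group:
```python
# Default ("global") group has 2 ranks; the separately initialized group has 4.
global_ranks = [0, 1]
group_ranks = [0, 1, 2, 3]

# Only group ranks that also exist in the default group can be translated.
group_to_global = {g: g for g in group_ranks if g in global_ranks}
print(group_to_global)  # {0: 0, 1: 1} -- group ranks 2 and 3 have no global rank
```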
### Versions
```
Collecting environment information...
PyTorch version: 2.6.0+cu124
Is debug build: False
CUDA used to build PyTorch: 12.4
ROCM used to build PyTorch: N/A
OS: Ubuntu 22.04.3 LTS (x86_64)
GCC version: (Ubuntu 11.4.0-1ubuntu1~22.04) 11.4.0
Clang version: Could not collect
CMake version: version 3.28.1
Libc version: glibc-2.35
Python version: 3.10.12 (main, Nov 20 2023, 15:14:05) [GCC 11.4.0] (64-bit runtime)
Python platform: Linux-5.15.0-88-generic-x86_64-with-glibc2.35
Is CUDA available: True
CUDA runtime version: 12.3.107
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration:
GPU 0: NVIDIA H100 80GB HBM3
GPU 1: NVIDIA H100 80GB HBM3
GPU 2: NVIDIA H100 80GB HBM3
GPU 3: NVIDIA H100 80GB HBM3
GPU 4: NVIDIA H100 80GB HBM3
GPU 5: NVIDIA H100 80GB HBM3
GPU 6: NVIDIA H100 80GB HBM3
GPU 7: NVIDIA H100 80GB HBM3
Nvidia driver version: 535.154.05
cuDNN version: Probably one of the following:
/usr/lib/x86_64-linux-gnu/libcudnn.so.9.0.0
/usr/lib/x86_64-linux-gnu/libcudnn_adv.so.9.0.0
/usr/lib/x86_64-linux-gnu/libcudnn_cnn.so.9.0.0
/usr/lib/x86_64-linux-gnu/libcudnn_engines_precompiled.so.9.0.0
/usr/lib/x86_64-linux-gnu/libcudnn_engines_runtime_compiled.so.9.0.0
/usr/lib/x86_64-linux-gnu/libcudnn_graph.so.9.0.0
/usr/lib/x86_64-linux-gnu/libcudnn_heuristic.so.9.0.0
/usr/lib/x86_64-linux-gnu/libcudnn_ops.so.9.0.0
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 46 bits physical, 57 bits virtual
Byte Order: Little Endian
CPU(s): 96
On-line CPU(s) list: 0-95
Vendor ID: GenuineIntel
Model name: Intel(R) Xeon(R) Platinum 8468
CPU family: 6
Model: 143
Thread(s) per core: 1
Core(s) per socket: 48
Socket(s): 2
Stepping: 8
Frequency boost: enabled
CPU max MHz: 2101.0000
CPU min MHz: 800.0000
BogoMIPS: 4200.00
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc art arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc cpuid aperfmperf tsc_known_freq pni pclmulqdq dtes64 monitor ds_cpl smx est tm2 ssse3 sdbg fma cx16 xtpr pdcm pcid dca sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm 3dnowprefetch cpuid_fault epb cat_l3 cat_l2 cdp_l3 invpcid_single intel_ppin cdp_l2 ssbd mba ibrs ibpb stibp ibrs_enhanced fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid cqm rdt_a avx512f avx512dq rdseed adx smap avx512ifma clflushopt clwb intel_pt avx512cd sha_ni avx512bw avx512vl xsaveopt xsavec xgetbv1 xsaves cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local split_lock_detect avx_vnni avx512_bf16 wbnoinvd dtherm ida arat pln pts avx512vbmi umip pku ospke waitpkg avx512_vbmi2 gfni vaes vpclmulqdq avx512_vnni avx512_bitalg tme avx512_vpopcntdq la57 rdpid bus_lock_detect cldemote movdiri movdir64b enqcmd fsrm md_clear serialize tsxldtrk pconfig arch_lbr amx_bf16 avx512_fp16 amx_tile amx_int8 flush_l1d arch_capabilities
L1d cache: 4.5 MiB (96 instances)
L1i cache: 3 MiB (96 instances)
L2 cache: 192 MiB (96 instances)
L3 cache: 210 MiB (2 instances)
NUMA node(s): 2
NUMA node0 CPU(s): 0-47
NUMA node1 CPU(s): 48-95
Vulnerability Gather data sampling: Not affected
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Not affected
Vulnerability Retbleed: Not affected
Vulnerability Spec rstack overflow: Not affected
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl and seccomp
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Enhanced IBRS, IBPB conditional, RSB filling, PBRSB-eIBRS SW sequence
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Versions of relevant libraries:
[pip3] ema-pytorch==0.7.6
[pip3] flashinfer-python==0.2.3+cu124torch2.5
[pip3] mypy-extensions==1.0.0
[pip3] numpy==1.26.4
[pip3] nvidia-cublas-cu12==12.4.5.8
[pip3] nvidia-cuda-cupti-cu12==12.4.127
[pip3] nvidia-cuda-nvrtc-cu12==12.4.127
[pip3] nvidia-cuda-runtime-cu12==12.4.127
[pip3] nvidia-cudnn-cu12==9.1.0.70
[pip3] nvidia-cufft-cu12==11.2.1.3
[pip3] nvidia-curand-cu12==10.3.5.147
[pip3] nvidia-cusolver-cu12==11.6.1.9
[pip3] nvidia-cusparse-cu12==12.3.1.170
[pip3] nvidia-cusparselt-cu12==0.6.2
[pip3] nvidia-nccl-cu12==2.21.5
[pip3] nvidia-nvjitlink-cu12==12.4.127
[pip3] nvidia-nvtx-cu12==12.4.127
[pip3] nvidia-pytriton==0.5.5
[pip3] nvtx==0.2.5
[pip3] onnx==1.15.0rc2
[pip3] open-clip-torch==2.24.0
[pip3] optree==0.14.1
[pip3] pytorch-lightning==2.2.2
[pip3] pytorch-quantization==2.1.2
[pip3] torch==2.6.0
[pip3] torch-memory-saver==0.0.2
[pip3] torch-tensorrt==2.3.0a0
[pip3] torchao==0.9.0
[pip3] torchaudio==2.5.1
[pip3] torchdata==0.11.0
[pip3] torchdiffeq==0.2.3
[pip3] torchmetrics==1.3.2
[pip3] torchsde==0.2.6
[pip3] torchtext==0.17.0a0
[pip3] torchvision==0.21.0
[pip3] triton==3.2.0
[pip3] tritonclient==2.44.0
[conda] Could not collect
```
cc @H-Huang @awgu @kwen2501 @wanchaol @fegin @fduwjj @wz337 @wconstab @d4l3k @c-p-i-o @zhaojuanmao @mrshenli @rohan-varma @chauhang
| true
|
2,920,587,740
|
DISABLED test_binary_op_with_scalar_self_support__foreach_pow_is_fastpath_True_cuda_complex64 (__main__.TestForeachCUDA)
|
pytorch-bot[bot]
|
open
|
[
"triaged",
"module: flaky-tests",
"skipped",
"module: mta"
] | 7
|
NONE
|
Platforms: linux, slow
This test was disabled because it is failing in CI. See [recent examples](https://hud.pytorch.org/flakytest?name=test_binary_op_with_scalar_self_support__foreach_pow_is_fastpath_True_cuda_complex64&suite=TestForeachCUDA&limit=100) and the most recent trunk [workflow logs](https://github.com/pytorch/pytorch/runs/38766586800).
Over the past 3 hours, it has been determined flaky in 3 workflow(s) with 6 failures and 3 successes.
**Debugging instructions (after clicking on the recent samples link):**
DO NOT ASSUME THINGS ARE OKAY IF THE CI IS GREEN. We now shield flaky tests from developers so CI will thus be green but it will be harder to parse the logs.
To find relevant log snippets:
1. Click on the workflow logs linked above
2. Click on the Test step of the job so that it is expanded. Otherwise, the grepping will not work.
3. Grep for `test_binary_op_with_scalar_self_support__foreach_pow_is_fastpath_True_cuda_complex64`
4. There should be several instances run (as flaky tests are rerun in CI) from which you can study the logs.
<details><summary>Sample error message</summary>
```
Traceback (most recent call last):
File "/var/lib/jenkins/workspace/test/test_foreach.py", line 327, in test_binary_op_with_scalar_self_support
self._binary_test(
File "/var/lib/jenkins/workspace/test/test_foreach.py", line 263, in _binary_test
actual = op(inputs, self.is_cuda, is_fastpath)
File "/var/lib/jenkins/workspace/test/test_foreach.py", line 90, in __call__
assert mta_called == (expect_fastpath and (not zero_size)), (
AssertionError: mta_called=False, expect_fastpath=True, zero_size=False, self.func.__name__='_foreach_pow', keys=('aten::_foreach_pow', 'Unrecognized', 'aten::empty_strided', 'cudaLaunchKernel', 'Lazy Function Loading', 'cudaDeviceSynchronize')
To execute this test, run the following from the base repo dir:
PYTORCH_TEST_CUDA_MEM_LEAK_CHECK=1 PYTORCH_TEST_WITH_SLOW_GRADCHECK=1 python test/test_foreach.py TestForeachCUDA.test_binary_op_with_scalar_self_support__foreach_pow_is_fastpath_True_cuda_complex64
This message can be suppressed by setting PYTORCH_PRINT_REPRO_ON_FAILURE=0
```
</details>
Test file path: `test_foreach.py`
cc @clee2000 @crcrpar @mcarilli @janeyx99
| true
|
2,920,559,770
|
[export] Update remove runtime asserts pass
|
angelayi
|
closed
|
[
"fb-exported",
"Merged",
"ciflow/trunk",
"ciflow/inductor",
"release notes: export"
] | 7
|
CONTRIBUTOR
|
Test Plan: CI -- Removing asserts should be a noop
Differential Revision: D69566851
| true
|
2,920,478,703
|
use python fallback if there are overflows
|
isuruf
|
closed
|
[
"open source",
"Merged",
"ciflow/trunk",
"module: dynamo",
"ciflow/inductor",
"release notes: dynamo"
] | 3
|
COLLABORATOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #140756
* #149211
* __->__ #149197
* #149149
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @chauhang @amjames
| true
|
2,920,417,011
|
(Will PR) Multiprocessing with CUDA_VISIBLE_DEVICES seems to give the wrong device
|
fzyzcjy
|
open
|
[
"module: multiprocessing",
"module: cuda",
"triaged"
] | 10
|
CONTRIBUTOR
|
### EDIT: PR to fix this
PR is here: https://github.com/pytorch/pytorch/pull/149248
### 🐛 Describe the bug
Hi, thanks for the helpful library! When two processes have different CUDA_VISIBLE_DEVICES and pass tensors between each other, it seems the `.device` attribute is incorrect.
Example code:
```python
import os
def _run_second_process(queue):
print(f'[second] {os.environ.get("CUDA_VISIBLE_DEVICES")=}')
value_from_queue = queue.get()
print(f'[second] queue.get {value_from_queue=} {value_from_queue.device=}')
def _run_main_process():
import torch
print(f'[first] {os.environ.get("CUDA_VISIBLE_DEVICES")=}')
queue = torch.multiprocessing.Queue()
os.environ['CUDA_VISIBLE_DEVICES'] = '1,2'
p = torch.multiprocessing.Process(
target=_run_second_process,
kwargs=dict(queue=queue),
)
p.start()
del os.environ['CUDA_VISIBLE_DEVICES']
value_to_queue = torch.tensor([1.0, 2.0], device='cuda:1')
print(f'[first] queue.put {value_to_queue=} {value_to_queue.device=}')
queue.put(value_to_queue)
p.join()
if __name__ == '__main__':
_run_main_process()
```
Output:
```
[first] os.environ.get("CUDA_VISIBLE_DEVICES")=None
[second] os.environ.get("CUDA_VISIBLE_DEVICES")='1,2'
[first] queue.put value_to_queue=tensor([1., 2.], device='cuda:1') value_to_queue.device=device(type='cuda', index=1)
[second] queue.get value_from_queue=tensor([1., 2.], device='cuda:1') value_from_queue.device=device(type='cuda', index=1)
```
It seems `cuda:0` in the second process should correspond to `cuda:1` in the first process, so the second process wrongly reports the tensor as `cuda:1`.
This seems to be related to issues like github.com/volcengine/verl/pull/490#issuecomment-2720212225.
If I manage to find some spare time, I am happy to PR for this.
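To make the expected remapping concrete, here is a minimal sketch of the index translation the receiving process should end up with (illustrative arithmetic only; `expected_local_index` is a hypothetical helper, not a PyTorch API):
```python
def expected_local_index(sender_index: int,
                         sender_visible: list[int],
                         receiver_visible: list[int]) -> int:
    """Device index the receiver *should* see for a tensor the sender
    placed on cuda:<sender_index>."""
    physical_gpu = sender_visible[sender_index]   # e.g. cuda:1 -> physical GPU 1
    return receiver_visible.index(physical_gpu)   # physical GPU 1 -> cuda:0 here

# For the repro above: the sender sees all 8 GPUs, the receiver sees "1,2".
assert expected_local_index(1, list(range(8)), [1, 2]) == 0
```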
### Versions
<details>
Collecting environment information...
PyTorch version: 2.5.1+cu124
Is debug build: False
CUDA used to build PyTorch: 12.4
ROCM used to build PyTorch: N/A
OS: Ubuntu 24.04.1 LTS (x86_64)
GCC version: (Ubuntu 13.3.0-6ubuntu2~24.04) 13.3.0
Clang version: Could not collect
CMake version: version 3.31.6
Libc version: glibc-2.39
Python version: 3.10.16 (main, Dec 4 2024, 08:53:38) [GCC 13.2.0] (64-bit runtime)
Python platform: Linux-6.8.0-1017-aws-x86_64-with-glibc2.39
Is CUDA available: True
CUDA runtime version: 12.8.61
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration:
GPU 0: NVIDIA H100 80GB HBM3
GPU 1: NVIDIA H100 80GB HBM3
GPU 2: NVIDIA H100 80GB HBM3
GPU 3: NVIDIA H100 80GB HBM3
GPU 4: NVIDIA H100 80GB HBM3
GPU 5: NVIDIA H100 80GB HBM3
GPU 6: NVIDIA H100 80GB HBM3
GPU 7: NVIDIA H100 80GB HBM3
Nvidia driver version: 550.127.05
cuDNN version: Probably one of the following:
/usr/lib/x86_64-linux-gnu/libcudnn.so.9.7.1
/usr/lib/x86_64-linux-gnu/libcudnn_adv.so.9.7.1
/usr/lib/x86_64-linux-gnu/libcudnn_cnn.so.9.7.1
/usr/lib/x86_64-linux-gnu/libcudnn_engines_precompiled.so.9.7.1
/usr/lib/x86_64-linux-gnu/libcudnn_engines_runtime_compiled.so.9.7.1
/usr/lib/x86_64-linux-gnu/libcudnn_graph.so.9.7.1
/usr/lib/x86_64-linux-gnu/libcudnn_heuristic.so.9.7.1
/usr/lib/x86_64-linux-gnu/libcudnn_ops.so.9.7.1
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 48 bits physical, 48 bits virtual
Byte Order: Little Endian
CPU(s): 192
On-line CPU(s) list: 0-191
Vendor ID: AuthenticAMD
Model name: AMD EPYC 7R13 Processor
CPU family: 25
Model: 1
Thread(s) per core: 2
Core(s) per socket: 48
Socket(s): 2
Stepping: 1
BogoMIPS: 5299.99
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ht syscall nx mmxext fxsr_opt pdpe1gb rdtscp lm constant_tsc rep_good nopl nonstop_tsc cpuid extd_apicid aperfmperf tsc_known_freq pni pclmulqdq monitor ssse3 fma cx16 pcid sse4_1 sse4_2 x2apic movbe popcnt aes xsave avx f16c rdrand hypervisor lahf_lm cmp_legacy cr8_legacy abm sse4a misalignsse 3dnowprefetch topoext perfctr_core ssbd ibrs ibpb stibp vmmcall fsgsbase bmi1 avx2 smep bmi2 invpcid rdseed adx smap clflushopt clwb sha_ni xsaveopt xsavec xgetbv1 clzero xsaveerptr rdpru wbnoinvd arat npt nrip_save vaes vpclmulqdq rdpid
Hypervisor vendor: KVM
Virtualization type: full
L1d cache: 3 MiB (96 instances)
L1i cache: 3 MiB (96 instances)
L2 cache: 48 MiB (96 instances)
L3 cache: 384 MiB (12 instances)
NUMA node(s): 2
NUMA node0 CPU(s): 0-47,96-143
NUMA node1 CPU(s): 48-95,144-191
Vulnerability Gather data sampling: Not affected
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Not affected
Vulnerability Reg file data sampling: Not affected
Vulnerability Retbleed: Not affected
Vulnerability Spec rstack overflow: Vulnerable: Safe RET, no microcode
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Retpolines; IBPB conditional; IBRS_FW; STIBP always-on; RSB filling; PBRSB-eIBRS Not affected; BHI Not affected
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Versions of relevant libraries:
[pip3] flashinfer-python==0.2.3+cu124torch2.5
[pip3] mypy-extensions==1.0.0
[pip3] numpy==1.26.4
[pip3] nvidia-cublas-cu12==12.4.5.8
[pip3] nvidia-cuda-cupti-cu12==12.4.127
[pip3] nvidia-cuda-nvrtc-cu12==12.4.127
[pip3] nvidia-cuda-runtime-cu12==12.4.127
[pip3] nvidia-cudnn-cu12==9.1.0.70
[pip3] nvidia-cufft-cu12==11.2.1.3
[pip3] nvidia-curand-cu12==10.3.5.147
[pip3] nvidia-cusolver-cu12==11.6.1.9
[pip3] nvidia-cusparse-cu12==12.3.1.170
[pip3] nvidia-cusparselt-cu12==0.6.2
[pip3] nvidia-nccl-cu12==2.21.5
[pip3] nvidia-nvjitlink-cu12==12.4.127
[pip3] nvidia-nvtx-cu12==12.4.127
[pip3] optree==0.14.1
[pip3] torch==2.5.1
[pip3] torch_memory_saver==0.0.2
[pip3] torchao==0.9.0
[pip3] torchaudio==2.5.1
[pip3] torchdata==0.11.0
[pip3] torchvision==0.20.1
[pip3] triton==3.1.0
[conda] Could not collect
cc @VitalyFedyunin @albanD @ptrblck @msaroufim @eqy
| true
|
2,920,357,260
|
[ROCm][Windows] Disable hipSPARSE and CK declarations and remove references for Windows
|
ikalinic
|
closed
|
[
"module: rocm",
"triaged",
"open source",
"Merged",
"ciflow/trunk",
"topic: not user facing",
"ciflow/rocm"
] | 4
|
CONTRIBUTOR
|
This PR removes references to `hipSPARSE` and `ck` functions and disables declarations which are not supported on Windows.
cc @jeffdaily @sunway513 @jithunnair-amd @pruthvistony @ROCmSupport @dllehr-amd @jataylo @hongxiayang @naromero77amd
| true
|
2,920,321,709
|
LSTM slow on PackedSequence
|
ikamensh
|
open
|
[
"module: rnn",
"triaged",
"topic: performance"
] | 0
|
CONTRIBUTOR
|
### 🐛 Describe the bug
Using an LSTM with `PackedSequence` input is very slow, and the effect becomes extreme at long sequence lengths (see the tables below). Given that `PackedSequence` is the only way to get both the correct output and the correct state for sequences of non-homogeneous length, I think this is a big usability problem for RNNs. From less detailed experiments, a similar slowdown occurred for GRU. This seems like it must be avoidable: a full-sequence forward that ignores padding already produces the right outputs (I can index by lengths and ignore everything after them), but I can't get the correct state, as only the last timestep's state is output.
Below is a script that reproduces it, both on GPU and CPU. It has commented sections for plotting and profiling. Here is how much slower using PackedSequence is:
| L | Packed / LSTM Forward (%) | Packed / LSTM Backward (%) |
|------|---------------------------|----------------------------|
| 10 | 526.90 | 277.19 |
| 20 | 739.90 | 460.88 |
| 50 | 1162.63 | 381.99 |
| 100 | 1506.77 | 395.81 |
| 200 | 2300.32 | 590.51 |
| 500 | 5967.95 | 1715.68 |
| 1000 | 9583.25 | 2793.80 |
| 2000 | 10983.58 | 5242.34 |
| 4000 | 11384.78 | 8090.40 |
```python
import time
import torch
import torch.nn as nn
import numpy as np
from functools import lru_cache
# Define the LSTM model
class SimpleLSTM(nn.Module):
def __init__(self, input_size, hidden_size):
super().__init__()
self.lstm = nn.LSTM(input_size, hidden_size, num_layers=2)
def forward(self, x):
# x shape: (sequence_length, batch_size, input_size)
out, _ = self.lstm(x)
return out
from torch.nn.utils.rnn import pack_padded_sequence, pad_packed_sequence
@lru_cache
def get_triangular_lengths(B: int, T: int) -> torch.Tensor:
return torch.from_numpy(np.linspace(1, T, num=B).round().astype(np.int64))
class PackedLSTM(nn.Module):
def __init__(self, input_size: int, hidden_size: int):
super().__init__()
self.lstm = nn.LSTM(input_size, hidden_size, num_layers=2)
def forward(self, x: torch.Tensor):
# x shape: (sequence_length, batch_size, input_size)
T, B, D = x.shape
# lengths shape: (batch_size,)
lengths = get_triangular_lengths(B, T)
packed_x = pack_padded_sequence(x, lengths.cpu(), enforce_sorted=False)
packed_out, _ = self.lstm(packed_x)
out, _ = pad_packed_sequence(packed_out)
return out
def benchmark(lstm_cls, input_size, hidden_size, batch_size, seq_len, quiet:bool =False):
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
model = lstm_cls(input_size, hidden_size).to(device)
model.train() # Set the model to training mode
loss_fn = nn.MSELoss()
# Generate random input data and target
n_repeats = round( (100_000/seq_len)**(3/4) ) + 1
fwd = []
bckwd = []
for i in range(n_repeats):
input_data = torch.randn(seq_len, batch_size, input_size, device=device)
target = torch.randn(
seq_len, batch_size, hidden_size, device=device
) # Random target for loss computation
# Measure the time taken for a forward pass
start_time = time.time()
output = model(input_data)
forward_time = time.time() - start_time
fwd.append(forward_time)
# Measure the time taken for a backward pass
loss = loss_fn(output, target) # Compute loss
model.zero_grad()
start_time = time.time()
loss.backward()
backward_time = time.time() - start_time
bckwd.append(backward_time)
# Print the results
if not quiet:
print(
f"{lstm_cls.__name__} on {device}: Seq Length: {seq_len}, Forward: {forward_time:.5f} seconds, Backward: {backward_time:.5f} seconds"
)
return sum(fwd) / n_repeats, sum(bckwd) / n_repeats
# Parameters
input_size = 16 # Number of input features
hidden_size = 128 # Number of LSTM units
batch_size = 32 # Number of sequences to process in parallel
sequence_lengths = [
10,
20,
50,
100,
200,
500,
1000,
2000,
4000,
] # Different sequence lengths to benchmark
# Run the benchmark
for cls in [PackedLSTM, SimpleLSTM, ]:
forward_times = []
backward_times = []
for seq_len in sequence_lengths:
benchmark(
cls, input_size, hidden_size, batch_size, seq_len, quiet=True
)
forward_time, backward_time = benchmark(
cls, input_size, hidden_size, batch_size, seq_len
)
forward_times.append(forward_time)
backward_times.append(backward_time)
print(f"forward_times_{cls.__name__} = {forward_times}")
print(f"backward_times_{cls.__name__} = {backward_times}")
# # Plotting the results
# plt.figure(figsize=(10, 5))
# plt.plot(sequence_lengths, forward_times, label="Forward Time", marker="o")
# plt.plot(sequence_lengths, backward_times, label="Backward Time", marker="o")
# plt.xlabel("Sequence Length")
# plt.ylabel("Time (seconds)")
# plt.title(f"{cls.__name__} Forward and Backward Pass Time vs Sequence Length")
# plt.legend()
# plt.grid()
# plt.ylim(-10, 185)
# # plt.xscale('log') # Use logarithmic scale for better visualization
# # plt.yscale('log') # Use logarithmic scale for better visualization
# plt.show()
# import cProfile
# import io
# import pstats
#
#
# def profile_function(f, *args, **kwargs):
# pr = cProfile.Profile()
# pr.enable()
# result = f(*args, **kwargs)
# pr.disable()
#
# s = io.StringIO()
# ps = pstats.Stats(pr, stream=s).sort_stats("cumulative")
# ps.print_stats()
#
# print(s.getvalue()) # Print the profiling results
# return result # Return the original function result
#
#
# profile_function(benchmark, PackedLSTM, input_size, hidden_size, batch_size, 1_000)
```
### Observations, Implications
I see multiple posts about this in forums and stack overflow: https://stackoverflow.com/questions/72073853/pytorch-pack-padded-sequence-is-extremely-slow
https://discuss.pytorch.org/t/gru-training-very-slow-with-sequence-packing/192222
https://discuss.pytorch.org/t/pytorch-pack-padded-sequence-is-really-slow/150508
It must be that most people a) don't use PackedSequence in the first place, or b) don't use large values of T in their time series and don't mind the ~3-5x slowdown at small T. Otherwise this is a big blocker. I'm using PackedSequence to deal with sometimes-short sequences in a replay buffer in an RL context.
I would just run the forward on the padded sequence, but then I can't get the correct final state. The problem is that in RL I want to get the final state over the history, and then do a single-step forward from that state on different possible inputs (the critic Q(s,a) in SAC, for example).
Profiling has shown that most time is spent in the forward/backward methods, not in packing/unpacking.
I've also observed that PackedSequence can handle longer sequences without running out of memory; perhaps that is the tradeoff that makes it so slow.
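For what it's worth, a minimal sketch of the partial workaround mentioned above (padded forward plus gathering each sequence's last valid output); it only recovers the top-layer output at the final timestep, not the full `(h_n, c_n)` state, which is exactly the gap:
```python
import torch
import torch.nn as nn

T, B, D, H = 50, 32, 16, 128
lstm = nn.LSTM(D, H, num_layers=2)
x = torch.randn(T, B, D)
lengths = torch.randint(1, T + 1, (B,))

out, (h_n, c_n) = lstm(x)                        # out: (T, B, H), padding included
idx = (lengths - 1).view(1, B, 1).expand(1, B, H)
last_valid = out.gather(0, idx).squeeze(0)       # (B, H): output at each sequence's last step
```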
### Versions
Collecting environment information...
PyTorch version: 2.6.0+cu124
Is debug build: False
CUDA used to build PyTorch: 12.4
ROCM used to build PyTorch: N/A
OS: Red Hat Enterprise Linux release 8.10 (Ootpa) (x86_64)
GCC version: (GCC) 8.5.0 20210514 (Red Hat 8.5.0-22)
Clang version: Could not collect
CMake version: Could not collect
Libc version: glibc-2.28
Python version: 3.11.7 (main, Dec 15 2023, 18:12:31) [GCC 11.2.0] (64-bit runtime)
Python platform: Linux-4.18.0-553.22.1.el8_10.x86_64-x86_64-with-glibc2.28
Is CUDA available: True
CUDA runtime version: Could not collect
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration:
GPU 0: NVIDIA A100-SXM4-80GB
MIG 1g.10gb Device 0:
Nvidia driver version: 560.35.03
cuDNN version: Could not collect
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Byte Order: Little Endian
CPU(s): 96
On-line CPU(s) list: 0-95
Thread(s) per core: 1
Core(s) per socket: 48
Socket(s): 2
NUMA node(s): 2
Vendor ID: AuthenticAMD
CPU family: 25
Model: 1
Model name: AMD EPYC 7643 48-Core Processor
Stepping: 1
CPU MHz: 2300.000
CPU max MHz: 3640.9170
CPU min MHz: 1500.0000
BogoMIPS: 4591.43
Virtualization: AMD-V
L1d cache: 32K
L1i cache: 32K
L2 cache: 512K
L3 cache: 32768K
NUMA node0 CPU(s): 0-47
NUMA node1 CPU(s): 48-95
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ht syscall nx mmxext fxsr_opt pdpe1gb rdtscp lm constant_tsc rep_good nopl nonstop_tsc cpuid extd_apicid aperfmperf pni pclmulqdq monitor ssse3 fma cx16 pcid sse4_1 sse4_2 movbe popcnt aes xsave avx f16c rdrand lahf_lm cmp_legacy svm extapic cr8_legacy abm sse4a misalignsse 3dnowprefetch osvw ibs skinit wdt tce topoext perfctr_core perfctr_nb bpext perfctr_llc mwaitx cpb cat_l3 cdp_l3 invpcid_single hw_pstate ssbd mba ibrs ibpb stibp vmmcall fsgsbase bmi1 avx2 smep bmi2 invpcid cqm rdt_a rdseed adx smap clflushopt clwb sha_ni xsaveopt xsavec xgetbv1 xsaves cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local clzero irperf xsaveerptr wbnoinvd amd_ppin brs arat npt lbrv svm_lock nrip_save tsc_scale vmcb_clean flushbyasid decodeassists pausefilter pfthreshold v_vmsave_vmload vgif v_spec_ctrl umip pku ospke vaes vpclmulqdq rdpid overflow_recov succor smca sme sev sev_es
Versions of relevant libraries:
[pip3] mypy==1.8.0
[pip3] mypy-extensions==1.0.0
[pip3] numpy==1.26.3
[pip3] nvidia-cublas-cu12==12.4.5.8
[pip3] nvidia-cuda-cupti-cu12==12.4.127
[pip3] nvidia-cuda-nvrtc-cu12==12.4.127
[pip3] nvidia-cuda-runtime-cu12==12.4.127
[pip3] nvidia-cudnn-cu12==9.1.0.70
[pip3] nvidia-cufft-cu12==11.2.1.3
[pip3] nvidia-curand-cu12==10.3.5.147
[pip3] nvidia-cusolver-cu12==11.6.1.9
[pip3] nvidia-cusparse-cu12==12.3.1.170
[pip3] nvidia-cusparselt-cu12==0.6.2
[pip3] nvidia-nccl-cu12==2.21.5
[pip3] nvidia-nvjitlink-cu12==12.4.127
[pip3] nvidia-nvtx-cu12==12.4.127
[pip3] optree==0.10.0
[pip3] torch==2.6.0
[pip3] triton==3.2.0
[conda] numpy 1.26.3 pypi_0 pypi
[conda] nvidia-cublas-cu12 12.4.5.8 pypi_0 pypi
[conda] nvidia-cuda-cupti-cu12 12.4.127 pypi_0 pypi
[conda] nvidia-cuda-nvrtc-cu12 12.4.127 pypi_0 pypi
[conda] nvidia-cuda-runtime-cu12 12.4.127 pypi_0 pypi
[conda] nvidia-cudnn-cu12 9.1.0.70 pypi_0 pypi
[conda] nvidia-cufft-cu12 11.2.1.3 pypi_0 pypi
[conda] nvidia-curand-cu12 10.3.5.147 pypi_0 pypi
[conda] nvidia-cusolver-cu12 11.6.1.9 pypi_0 pypi
[conda] nvidia-cusparse-cu12 12.3.1.170 pypi_0 pypi
[conda] nvidia-cusparselt-cu12 0.6.2 pypi_0 pypi
[conda] nvidia-nccl-cu12 2.21.5 pypi_0 pypi
[conda] nvidia-nvjitlink-cu12 12.4.127 pypi_0 pypi
[conda] nvidia-nvtx-cu12 12.4.127 pypi_0 pypi
[conda] optree 0.10.0 pypi_0 pypi
[conda] torch 2.6.0 pypi_0 pypi
[conda] triton 3.2.0 pypi_0 pypi
cc @mikaylagawarecki
| true
|
2,920,310,030
|
Add .editorconfig
|
zxiiro
|
closed
|
[
"open source",
"Merged",
"topic: not user facing"
] | 3
|
COLLABORATOR
|
This adds an .editorconfig file to automatically configure developers' local editors/IDEs with the project's basic formatting rules.
List of supported editors: https://editorconfig.org/#pre-installed
| true
|
2,920,042,428
|
[CI INFRA TEST] Test experiment for ephemeral runners
|
jeanschmidt
|
open
|
[
"ciflow/binaries",
"ciflow/trunk",
"topic: not user facing",
"ciflow/periodic",
"ciflow/nightly",
"ciflow/binaries_wheel",
"ciflow/inductor",
"ciflow/slow",
"ciflow/torchao"
] | 2
|
CONTRIBUTOR
|
Just running the ci with the `ephemeral` experiment defined in https://github.com/pytorch/test-infra/issues/5132
This will run CI in Meta's infra with the ephemeral-reuse changes, and force it to use only ephemeral runners from the autoscaler pool.
The goal of this experiment is to evaluate queue time and whether runners get stuck not picking up jobs, as well as whether the experiment will succeed.
| true
|
2,919,791,063
|
[assoc_scan/scan] Added testcase for complex tensors
|
bohnstingl
|
open
|
[
"triaged",
"open source",
"topic: not user facing"
] | 3
|
COLLABORATOR
|
We have a user @largraf using associative_scan with complex tensors. Thus, I wanted to add a test case to ensure that a `combine_fn` operating on complex tensors works with `associative_scan` and `scan`.
The tests do fail with `torch.complex32`, though, potentially due to numerical precision issues? Furthermore, some operations, such as the scatter-gather function (`o.scatter_(0, ind * idx, x.unsqueeze(0))`), are not implemented for `ComplexHalf` yet. Do we need to support that at the moment?
cc @ydwu4
| true
|
2,919,775,726
|
Super tiny fix typo
|
fzyzcjy
|
closed
|
[
"open source",
"Merged",
"topic: not user facing"
] | 3
|
CONTRIBUTOR
|
... when checking the doc to build from source
| true
|
2,919,682,792
|
Add scripts to generate plots of LRSchedulers
|
zeshengzong
|
closed
|
[
"triaged",
"open source",
"Merged",
"ciflow/trunk",
"topic: docs",
"release notes: optim"
] | 9
|
CONTRIBUTOR
|
Fixes #92007
## Changes
- Add script to generate plots for `lr_scheduler`
- Add plots to `lr_scheduler` docs
- Add example section if it is missing in `lr_scheduler` docs
## Test Result
### LambdaLR

### MultiplicativeLR

### StepLR

### MultiStepLR

### ConstantLR

### LinearLR

### ExponentialLR

### PolynomialLR

### CosineAnnealingLR

### ChainedScheduler

### SequentialLR

### ReduceLROnPlateau

### CyclicLR

### OneCycleLR

### CosineAnnealingWarmRestarts

| true
|
2,919,648,483
|
Create devcontainer.json
|
kvandenheuvel23
|
closed
|
[
"triaged",
"open source"
] | 3
|
NONE
|
Fixes #ISSUE_NUMBER
| true
|
2,919,605,063
|
Issue with Shared CUDA Tensor Reference Counting in Multi-Processing
|
U-rara
|
open
|
[
"module: multiprocessing",
"module: cuda",
"triaged"
] | 4
|
NONE
|
### 🐛 Describe the bug
When sharing CUDA tensors across processes, I discovered that once process B receives the information from process A's `tensor.untyped_storage()._share_cuda_()`, releasing that information (without even rebuilding the tensor in process B) still leaves the tensor in process A with outstanding references that cannot be reclaimed by `torch.cuda.ipc_collect()`. When I used the following code to decrement the reference count once:
```python
ref_counter_handle = serialized_data[0]["ref_counter_handle"]
ref_counter_offset = serialized_data[0]["ref_counter_offset"]
torch.UntypedStorage._release_ipc_counter_cuda(
ref_counter_handle, ref_counter_offset
)
```
The tensor in process A was correctly reclaimed. I'm unsure whether this is the expected behavior, and I would also like to know how to explicitly force the clearing of references to a shared CUDA tensor so that it can be released.
Minimal implementation:
```python
import gc
import multiprocessing
import os
from time import sleep
import torch
from torch.multiprocessing.reductions import rebuild_cuda_tensor
class NamedCUDATensorMultiprocessingSerializer:
@staticmethod
def serialize(obj):
assert isinstance(obj, list)
serialized = []
for name, tensor in obj:
_storage = tensor.untyped_storage()
(
storage_device,
storage_handle,
storage_size_bytes,
storage_offset_bytes,
ref_counter_handle,
ref_counter_offset,
event_handle,
event_sync_required,
) = _storage._share_cuda_()
cuda_tensor_info = {
"name": name,
"tensor_cls": type(tensor),
"tensor_size": tensor.shape,
"tensor_stride": tensor.stride(),
"tensor_offset": tensor.storage_offset(),
"storage_cls": type(_storage),
"dtype": tensor.dtype,
"storage_device": storage_device,
"storage_handle": storage_handle,
"storage_size_bytes": storage_size_bytes,
"storage_offset_bytes": storage_offset_bytes,
"requires_grad": tensor.requires_grad,
"ref_counter_handle": ref_counter_handle,
"ref_counter_offset": ref_counter_offset,
"event_handle": event_handle,
"event_sync_required": event_sync_required,
}
serialized.append(cuda_tensor_info)
return serialized
@staticmethod
def deserialize(data):
deserialized = []
for serialized in data:
name = serialized.pop("name")
rebuilt_tensor = rebuild_cuda_tensor(**serialized)
deserialized.append((name, rebuilt_tensor))
return deserialized
def process_a(conn):
param_name = "param_A"
while True:
msg = conn.recv()
if msg == "get":
os.environ["CUDA_VISIBLE_DEVICES"] = "0"
tensor = torch.randn([1000, 1000, 100]).to("cuda")
serialized_data = NamedCUDATensorMultiprocessingSerializer.serialize(
[(param_name, tensor)]
)
conn.send(serialized_data)
elif msg == "exit":
break
else:
print("Unknown command:", msg)
conn.close()
def main():
parent_conn, child_conn = multiprocessing.Pipe()
processA = multiprocessing.Process(target=process_a, args=(child_conn,))
processA.start()
for i in range(1000):
print("Iteration", i)
parent_conn.send("get")
serialized_data = parent_conn.recv()
# ref_counter_handle = serialized_data[0]["ref_counter_handle"]
# ref_counter_offset = serialized_data[0]["ref_counter_offset"]
# torch.UntypedStorage._release_ipc_counter_cuda(
# ref_counter_handle, ref_counter_offset
# )
del serialized_data
gc.collect()
torch.cuda.empty_cache()
torch.cuda.ipc_collect()
sleep(0.05)
parent_conn.send("exit")
processA.join()
if __name__ == "__main__":
multiprocessing.set_start_method("spawn", force=True)
main()
```
### Versions
PyTorch version: 2.5.1+cu124
Is debug build: False
CUDA used to build PyTorch: 12.4
ROCM used to build PyTorch: N/A
OS: Ubuntu 22.04.3 LTS (x86_64)
GCC version: (Ubuntu 11.4.0-1ubuntu1~22.04) 11.4.0
Clang version: Could not collect
CMake version: version 3.28.1
Libc version: glibc-2.35
Python version: 3.12.9 | packaged by Anaconda, Inc. | (main, Feb 6 2025, 18:56:27) [GCC 11.2.0] (64-bit runtime)
Python platform: Linux-5.4.0-173-generic-x86_64-with-glibc2.35
Is CUDA available: True
CUDA runtime version: 12.3.107
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration: GPU 0: NVIDIA A100-SXM4-80GB
Nvidia driver version: 545.23.08
cuDNN version: Probably one of the following:
/usr/lib/x86_64-linux-gnu/libcudnn.so.9.0.0
/usr/lib/x86_64-linux-gnu/libcudnn_adv.so.9.0.0
/usr/lib/x86_64-linux-gnu/libcudnn_cnn.so.9.0.0
/usr/lib/x86_64-linux-gnu/libcudnn_engines_precompiled.so.9.0.0
/usr/lib/x86_64-linux-gnu/libcudnn_engines_runtime_compiled.so.9.0.0
/usr/lib/x86_64-linux-gnu/libcudnn_graph.so.9.0.0
/usr/lib/x86_64-linux-gnu/libcudnn_heuristic.so.9.0.0
/usr/lib/x86_64-linux-gnu/libcudnn_ops.so.9.0.0
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 46 bits physical, 57 bits virtual
Byte Order: Little Endian
CPU(s): 128
On-line CPU(s) list: 0-127
Vendor ID: GenuineIntel
Model name: Intel(R) Xeon(R) Platinum 8358 CPU @ 2.60GHz
CPU family: 6
Model: 106
Thread(s) per core: 2
Core(s) per socket: 32
Socket(s): 2
Stepping: 6
Frequency boost: enabled
CPU max MHz: 3400.0000
CPU min MHz: 800.0000
BogoMIPS: 5200.00
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc art arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc cpuid aperfmperf pni pclmulqdq dtes64 ds_cpl vmx smx est tm2 ssse3 sdbg fma cx16 xtpr pdcm pcid dca sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm 3dnowprefetch cpuid_fault epb cat_l3 invpcid_single ssbd mba ibrs ibpb stibp ibrs_enhanced tpr_shadow vnmi flexpriority ept vpid ept_ad fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid cqm rdt_a avx512f avx512dq rdseed adx smap avx512ifma clflushopt clwb intel_pt avx512cd sha_ni avx512bw avx512vl xsaveopt xsavec xgetbv1 xsaves cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local wbnoinvd dtherm ida arat pln pts avx512vbmi umip pku ospke avx512_vbmi2 gfni vaes vpclmulqdq avx512_vnni avx512_bitalg tme avx512_vpopcntdq rdpid md_clear pconfig flush_l1d arch_capabilities
Virtualization: VT-x
L1d cache: 3 MiB (64 instances)
L1i cache: 2 MiB (64 instances)
L2 cache: 80 MiB (64 instances)
L3 cache: 96 MiB (2 instances)
NUMA node(s): 2
NUMA node0 CPU(s): 0-31,64-95
NUMA node1 CPU(s): 32-63,96-127
Vulnerability Gather data sampling: Mitigation; Microcode
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Mitigation; Clear CPU buffers; SMT vulnerable
Vulnerability Retbleed: Not affected
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl and seccomp
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Enhanced IBRS, IBPB conditional, RSB filling, PBRSB-eIBRS SW sequence
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Versions of relevant libraries:
[pip3] flashinfer-python==0.2.2.post1+cu124torch2.5
[pip3] numpy==1.26.4
[pip3] nvidia-cublas-cu12==12.4.5.8
[pip3] nvidia-cuda-cupti-cu12==12.4.127
[pip3] nvidia-cuda-nvrtc-cu12==12.4.127
[pip3] nvidia-cuda-runtime-cu12==12.4.127
[pip3] nvidia-cudnn-cu12==9.1.0.70
[pip3] nvidia-cufft-cu12==11.2.1.3
[pip3] nvidia-curand-cu12==10.3.5.147
[pip3] nvidia-cusolver-cu12==11.6.1.9
[pip3] nvidia-cusparse-cu12==12.3.1.170
[pip3] nvidia-nccl-cu12==2.21.5
[pip3] nvidia-nvjitlink-cu12==12.4.127
[pip3] nvidia-nvtx-cu12==12.4.127
[pip3] torch==2.5.1
[pip3] torchao==0.9.0
[pip3] torchaudio==2.5.1
[pip3] torchvision==0.20.1
[pip3] triton==3.1.0
[conda] flashinfer-python 0.2.2.post1+cu124torch2.5 pypi_0 pypi
[conda] numpy 1.26.4 pypi_0 pypi
[conda] nvidia-cublas-cu12 12.4.5.8 pypi_0 pypi
[conda] nvidia-cuda-cupti-cu12 12.4.127 pypi_0 pypi
[conda] nvidia-cuda-nvrtc-cu12 12.4.127 pypi_0 pypi
[conda] nvidia-cuda-runtime-cu12 12.4.127 pypi_0 pypi
[conda] nvidia-cudnn-cu12 9.1.0.70 pypi_0 pypi
[conda] nvidia-cufft-cu12 11.2.1.3 pypi_0 pypi
[conda] nvidia-curand-cu12 10.3.5.147 pypi_0 pypi
[conda] nvidia-cusolver-cu12 11.6.1.9 pypi_0 pypi
[conda] nvidia-cusparse-cu12 12.3.1.170 pypi_0 pypi
[conda] nvidia-nccl-cu12 2.21.5 pypi_0 pypi
[conda] nvidia-nvjitlink-cu12 12.4.127 pypi_0 pypi
[conda] nvidia-nvtx-cu12 12.4.127 pypi_0 pypi
[conda] torch 2.5.1 pypi_0 pypi
[conda] torchao 0.9.0 pypi_0 pypi
[conda] torchaudio 2.5.1 pypi_0 pypi
[conda] torchvision 0.20.1 pypi_0 pypi
[conda] triton 3.1.0 pypi_0 pypi
cc @VitalyFedyunin @albanD @ptrblck @msaroufim @eqy
| true
|
2,919,584,824
|
torch.fx.symbolic_trace failed on deepseek-ai/DeepSeek-R1-Distill-Qwen-1.5B
|
FlintWangacc
|
open
|
[
"module: fx",
"oncall: pt2",
"export-triaged",
"oncall: export"
] | 7
|
NONE
|
### 🐛 Describe the bug
I tried to compile deepseek-ai/DeepSeek-R1-Distill-Qwen-1.5B to MLIR with the following script.
```python
# Import necessary libraries
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from torch.export import export
import onnx
from torch_mlir import fx
# Load the DeepSeek model and tokenizer
model_name = "deepseek-ai/DeepSeek-R1-Distill-Qwen-1.5B"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)
class Qwen2(torch.nn.Module):
def __init__(self) -> None:
super().__init__()
self.qwen = model
def forward(self, x):
result = self.qwen(x)
result.past_key_values = ()
return result
qwen2 = Qwen2()
# Define a prompt for the model
prompt = "What are the benefits of using AI in healthcare?"
# Encode the prompt
input_ids = tokenizer.encode(prompt, return_tensors="pt")
exported_program: torch.export.ExportedProgram = export (
qwen2, (input_ids,)
)
traced_model = torch.fx.symbolic_trace(qwen2)
m = fx.export_and_import(traced_model, (input_ids,), enable_ir_printing=True,
enable_graph_printing=True)
with open("qwen1.5b_s.mlir", "w") as f:
f.write(str(m))
```
But it failed with the following backtrace.
```shell
/home/hmsjwzb/work/models/QWEN/qwen/lib/python3.11/site-packages/torch/backends/mkldnn/__init__.py:78: UserWarning: TF32 acceleration on top of oneDNN is available for Intel GPUs. The current Torch version does not have Intel GPU Support. (Triggered internally at /pytorch/aten/src/ATen/Context.cpp:148.)
torch._C._set_onednn_allow_tf32(_allow_tf32)
/home/hmsjwzb/work/models/QWEN/qwen/lib/python3.11/site-packages/torch/backends/mkldnn/__init__.py:78: UserWarning: TF32 acceleration on top of oneDNN is available for Intel GPUs. The current Torch version does not have Intel GPU Support. (Triggered internally at /pytorch/aten/src/ATen/Context.cpp:148.)
torch._C._set_onednn_allow_tf32(_allow_tf32)
Traceback (most recent call last):
File "/home/hmsjwzb/work/models/QWEN/qwen5.py", line 55, in <module>
traced_model = torch.fx.symbolic_trace(qwen2)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/hmsjwzb/work/models/QWEN/qwen/lib/python3.11/site-packages/torch/fx/_symbolic_trace.py", line 1314, in symbolic_trace
graph = tracer.trace(root, concrete_args)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/hmsjwzb/work/models/QWEN/qwen/lib/python3.11/site-packages/torch/_dynamo/eval_frame.py", line 838, in _fn
return fn(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^
File "/home/hmsjwzb/work/models/QWEN/qwen/lib/python3.11/site-packages/torch/fx/_symbolic_trace.py", line 838, in trace
(self.create_arg(fn(*args)),),
^^^^^^^^^
File "/home/hmsjwzb/work/models/QWEN/qwen5.py", line 18, in forward
result = self.qwen(x)
^^^^^^^^^^^^
File "/home/hmsjwzb/work/models/QWEN/qwen/lib/python3.11/site-packages/torch/fx/_symbolic_trace.py", line 813, in module_call_wrapper
return self.call_module(mod, forward, args, kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/hmsjwzb/work/models/QWEN/qwen/lib/python3.11/site-packages/torch/fx/_symbolic_trace.py", line 531, in call_module
ret_val = forward(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/hmsjwzb/work/models/QWEN/qwen/lib/python3.11/site-packages/torch/fx/_symbolic_trace.py", line 806, in forward
return _orig_module_call(mod, *args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/hmsjwzb/work/models/QWEN/qwen/lib/python3.11/site-packages/torch/nn/modules/module.py", line 1751, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/hmsjwzb/work/models/QWEN/qwen/lib/python3.11/site-packages/torch/nn/modules/module.py", line 1762, in _call_impl
return forward_call(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/hmsjwzb/work/models/QWEN/qwen/lib/python3.11/site-packages/transformers/utils/deprecation.py", line 172, in wrapped_func
return func(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^
File "/home/hmsjwzb/work/models/QWEN/qwen/lib/python3.11/site-packages/transformers/models/qwen2/modeling_qwen2.py", line 856, in forward
outputs = self.model(
^^^^^^^^^^^
File "/home/hmsjwzb/work/models/QWEN/qwen/lib/python3.11/site-packages/torch/fx/_symbolic_trace.py", line 813, in module_call_wrapper
return self.call_module(mod, forward, args, kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/hmsjwzb/work/models/QWEN/qwen/lib/python3.11/site-packages/torch/fx/_symbolic_trace.py", line 531, in call_module
ret_val = forward(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/hmsjwzb/work/models/QWEN/qwen/lib/python3.11/site-packages/torch/fx/_symbolic_trace.py", line 806, in forward
return _orig_module_call(mod, *args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/hmsjwzb/work/models/QWEN/qwen/lib/python3.11/site-packages/torch/nn/modules/module.py", line 1751, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/hmsjwzb/work/models/QWEN/qwen/lib/python3.11/site-packages/torch/nn/modules/module.py", line 1762, in _call_impl
return forward_call(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/hmsjwzb/work/models/QWEN/qwen/lib/python3.11/site-packages/transformers/models/qwen2/modeling_qwen2.py", line 542, in forward
cache_position = torch.arange(
^^^^^^^^^^^^^
TypeError: arange() received an invalid combination of arguments - got (int, Proxy, device=Attribute), but expected one of:
* (Number end, *, Tensor out = None, torch.dtype dtype = None, torch.layout layout = None, torch.device device = None, bool pin_memory = False, bool requires_grad = False)
* (Number start, Number end, *, torch.dtype dtype = None, torch.layout layout = None, torch.device device = None, bool pin_memory = False, bool requires_grad = False)
* (Number start, Number end, Number step = 1, *, Tensor out = None, torch.dtype dtype = None, torch.layout layout = None, torch.device device = None, bool pin_memory = False, bool requires_grad = False)
```
After some debugging, it seems the tracer wraps `x` in a `Proxy` before passing it to `Qwen2`, and that `Proxy` then causes the error inside the model's forward (here in the `torch.arange` call).
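A minimal sketch of one possible alternative, assuming the transformers fx helper supports this architecture (its `HFTracer` special-cases data-dependent calls such as the `torch.arange` above):
```python
# Trace the underlying HF model directly rather than the wrapper.
from transformers.utils.fx import symbolic_trace as hf_symbolic_trace

traced_model = hf_symbolic_trace(model, input_names=["input_ids"])
out = traced_model(input_ids=input_ids)
```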
### Versions
```shell
Collecting environment information...
PyTorch version: 2.7.0.dev20250310+cpu
Is debug build: False
CUDA used to build PyTorch: None
ROCM used to build PyTorch: N/A
OS: Ubuntu 22.04.5 LTS (x86_64)
GCC version: (Ubuntu 11.4.0-1ubuntu1~22.04) 11.4.0
Clang version: 19.1.7 (https://github.com/llvm/llvm-project.git cd708029e0b2869e80abe31ddb175f7c35361f90)
CMake version: version 3.31.6
Libc version: glibc-2.35
Python version: 3.11.11+local (heads/3.11-dirty:f0895aa9c1d, Dec 20 2024, 14:17:01) [GCC 11.4.0] (64-bit runtime)
Python platform: Linux-6.8.0-52-generic-x86_64-with-glibc2.35
Is CUDA available: False
CUDA runtime version: No CUDA
CUDA_MODULE_LOADING set to: N/A
GPU models and configuration: No CUDA
Nvidia driver version: No CUDA
cuDNN version: No CUDA
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 46 bits physical, 48 bits virtual
Byte Order: Little Endian
CPU(s): 32
On-line CPU(s) list: 0-31
Vendor ID: GenuineIntel
Model name: 13th Gen Intel(R) Core(TM) i9-13900
CPU family: 6
Model: 183
Thread(s) per core: 2
Core(s) per socket: 24
Socket(s): 1
Stepping: 1
CPU max MHz: 5600.0000
CPU min MHz: 800.0000
BogoMIPS: 3993.60
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc art arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc cpuid aperfmperf tsc_known_freq pni pclmulqdq dtes64 monitor ds_cpl vmx smx est tm2 ssse3 sdbg fma cx16 xtpr pdcm pcid sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm 3dnowprefetch cpuid_fault epb ssbd ibrs ibpb stibp ibrs_enhanced tpr_shadow flexpriority ept vpid ept_ad fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid rdseed adx smap clflushopt clwb intel_pt sha_ni xsaveopt xsavec xgetbv1 xsaves split_lock_detect user_shstk avx_vnni dtherm ida arat pln pts hwp hwp_notify hwp_act_window hwp_epp hwp_pkg_req hfi vnmi umip pku ospke waitpkg gfni vaes vpclmulqdq tme rdpid movdiri movdir64b fsrm md_clear serialize pconfig arch_lbr ibt flush_l1d arch_capabilities
Virtualization: VT-x
L1d cache: 896 KiB (24 instances)
L1i cache: 1.3 MiB (24 instances)
L2 cache: 32 MiB (12 instances)
L3 cache: 36 MiB (1 instance)
NUMA node(s): 1
NUMA node0 CPU(s): 0-31
Vulnerability Gather data sampling: Not affected
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Not affected
Vulnerability Reg file data sampling: Mitigation; Clear Register File
Vulnerability Retbleed: Not affected
Vulnerability Spec rstack overflow: Not affected
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Enhanced / Automatic IBRS; IBPB conditional; RSB filling; PBRSB-eIBRS SW sequence; BHI BHI_DIS_S
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Versions of relevant libraries:
[pip3] numpy==2.2.3
[pip3] nvidia-cublas-cu12==12.4.5.8
[pip3] nvidia-cuda-cupti-cu12==12.4.127
[pip3] nvidia-cuda-nvrtc-cu12==12.4.127
[pip3] nvidia-cuda-runtime-cu12==12.4.127
[pip3] nvidia-cudnn-cu12==9.1.0.70
[pip3] nvidia-cufft-cu12==11.2.1.3
[pip3] nvidia-curand-cu12==10.3.5.147
[pip3] nvidia-cusolver-cu12==11.6.1.9
[pip3] nvidia-cusparse-cu12==12.3.1.170
[pip3] nvidia-cusparselt-cu12==0.6.2
[pip3] nvidia-nccl-cu12==2.21.5
[pip3] nvidia-nvjitlink-cu12==12.4.127
[pip3] nvidia-nvtx-cu12==12.4.127
[pip3] onnx==1.16.1
[pip3] onnxscript==0.2.2
[pip3] optree==0.14.0
[pip3] torch==2.7.0.dev20250310+cpu
[pip3] torchvision==0.22.0.dev20250310+cpu
[pip3] triton==3.2.0
[conda] magma-cuda121 2.6.1 1 pytorch
```
cc @ezyang @SherlockNoMad @EikanWang @jgong5 @wenzhe-nrv @chauhang @penguinwu @avikchaudhuri @gmagogsfm @zhxchen17 @tugsbayasgalan @angelayi @suo @ydwu4
| true
|
2,919,395,978
|
[pt2_provenance_tracking] add support for cpp kernel
|
YUNQIUGUO
|
closed
|
[
"fb-exported",
"Merged",
"ciflow/trunk",
"topic: not user facing",
"module: inductor",
"ciflow/inductor"
] | 11
|
CONTRIBUTOR
|
Summary:
As title.
Add inductor cpp kernel to post grad graph node mapping
& UT.
Context:
Raised as a feature request for AOTI CPU case.
https://fb.workplace.com/groups/1028545332188949/permalink/1169020841474730/
Differential Revision: D71181284
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
| true
|
2,919,311,118
|
[macOS] instantiating optimizer after torch.set_default_device("mps") throws "RuntimeError: Placeholder storage has not been allocated on MPS device!"
|
AlexPetrusca
|
open
|
[
"triaged",
"module: mps"
] | 3
|
NONE
|
### 🐛 Describe the bug
When I set "mps" as the default device and then try to instantiate an optimizer, say `torch.optim.SGD`, I get a RuntimeError with message "Placeholder storage has not been allocated on MPS device!". This also happens when instantiating other optimizers, like `torch.optim.Adam` and `torch.optim.AdamW`.
There are a few ways to work around this at the moment:
- default to using "cpu" and move `param` explicitly to "mps" with `param.to("mps")`.
- wrap the optimizer instantiation with `torch.set_default_device("cpu")` followed by `torch.set_default_device("mps")`.
So it seems like the optimizer's "placeholder storage" just can't be created on the "mps" device, but anywhere else will do.
Would love to see this issue fixed. Thanks!
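A minimal sketch of the second workaround:
```python
import torch
import torch.nn as nn

torch.set_default_device("mps")
param = nn.Parameter(torch.randn(10))  # created on mps, as in the repro below

# Build the optimizer while "cpu" is temporarily the default device,
# then switch back to "mps".
torch.set_default_device("cpu")
optimizer = torch.optim.SGD([param], lr=1e-3)
torch.set_default_device("mps")
```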
### Minimal Reproduction
```python
import torch
import torch.nn as nn
torch.set_default_device("mps")
param = nn.Parameter(torch.randn(10))
print(f"Parameter device: {param.device}") # prints "Parameter device: mps:0"
optimizer = torch.optim.SGD([param], lr=1e-3) # blows up!
```
### Stack Trace
```pytb
Traceback (most recent call last):
File "/Users/apetrusca/alpine/project-a-week/week 7 - makemore/debug.py", line 9, in <module>
optimizer = torch.optim.SGD([param], lr=1e-3) # throws "RuntimeError: Placeholder storage has not been allocated on MPS device!"
File "/Users/apetrusca/alpine/project-a-week/week 7 - makemore/.venv/lib/python3.13/site-packages/torch/optim/sgd.py", line 63, in __init__
super().__init__(params, defaults)
~~~~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^
File "/Users/apetrusca/alpine/project-a-week/week 7 - makemore/.venv/lib/python3.13/site-packages/torch/optim/optimizer.py", line 369, in __init__
self.add_param_group(cast(dict, param_group))
~~~~~~~~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/apetrusca/alpine/project-a-week/week 7 - makemore/.venv/lib/python3.13/site-packages/torch/_compile.py", line 46, in inner
import torch._dynamo
File "/Users/apetrusca/alpine/project-a-week/week 7 - makemore/.venv/lib/python3.13/site-packages/torch/_dynamo/__init__.py", line 13, in <module>
from . import convert_frame, eval_frame, resume_execution
File "/Users/apetrusca/alpine/project-a-week/week 7 - makemore/.venv/lib/python3.13/site-packages/torch/_dynamo/convert_frame.py", line 52, in <module>
from torch._dynamo.symbolic_convert import TensorifyState
File "/Users/apetrusca/alpine/project-a-week/week 7 - makemore/.venv/lib/python3.13/site-packages/torch/_dynamo/symbolic_convert.py", line 57, in <module>
from . import (
...<6 lines>...
)
File "/Users/apetrusca/alpine/project-a-week/week 7 - makemore/.venv/lib/python3.13/site-packages/torch/_dynamo/trace_rules.py", line 32, in <module>
from .variables import (
...<11 lines>...
)
File "/Users/apetrusca/alpine/project-a-week/week 7 - makemore/.venv/lib/python3.13/site-packages/torch/_dynamo/variables/__init__.py", line 19, in <module>
from .base import VariableTracker
File "/Users/apetrusca/alpine/project-a-week/week 7 - makemore/.venv/lib/python3.13/site-packages/torch/_dynamo/variables/base.py", line 581, in <module>
from . import builder
File "/Users/apetrusca/alpine/project-a-week/week 7 - makemore/.venv/lib/python3.13/site-packages/torch/_dynamo/variables/builder.py", line 86, in <module>
from ..side_effects import SideEffects
File "/Users/apetrusca/alpine/project-a-week/week 7 - makemore/.venv/lib/python3.13/site-packages/torch/_dynamo/side_effects.py", line 21, in <module>
from .codegen import PyCodegen
File "/Users/apetrusca/alpine/project-a-week/week 7 - makemore/.venv/lib/python3.13/site-packages/torch/_dynamo/codegen.py", line 54, in <module>
from .variables.torch_function import TensorWithTFOverrideVariable
File "/Users/apetrusca/alpine/project-a-week/week 7 - makemore/.venv/lib/python3.13/site-packages/torch/_dynamo/variables/torch_function.py", line 193, in <module>
populate_builtin_to_tensor_fn_map()
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~^^
File "/Users/apetrusca/alpine/project-a-week/week 7 - makemore/.venv/lib/python3.13/site-packages/torch/_dynamo/variables/torch_function.py", line 187, in populate_builtin_to_tensor_fn_map
setup_fn(op)
~~~~~~~~^^^^
File "/Users/apetrusca/alpine/project-a-week/week 7 - makemore/.venv/lib/python3.13/site-packages/torch/_dynamo/variables/torch_function.py", line 175, in <lambda>
lambda o: o(1, inp1),
~^^^^^^^^^
File "/Users/apetrusca/alpine/project-a-week/week 7 - makemore/.venv/lib/python3.13/site-packages/torch/_tensor.py", line 38, in wrapped
return handle_torch_function(wrapped, args, *args, **kwargs)
File "/Users/apetrusca/alpine/project-a-week/week 7 - makemore/.venv/lib/python3.13/site-packages/torch/overrides.py", line 1721, in handle_torch_function
result = mode.__torch_function__(public_api, types, args, kwargs)
File "/Users/apetrusca/alpine/project-a-week/week 7 - makemore/.venv/lib/python3.13/site-packages/torch/_dynamo/variables/torch_function.py", line 152, in __torch_function__
return func(*args, **kwargs)
File "/Users/apetrusca/alpine/project-a-week/week 7 - makemore/.venv/lib/python3.13/site-packages/torch/_tensor.py", line 38, in wrapped
return handle_torch_function(wrapped, args, *args, **kwargs)
File "/Users/apetrusca/alpine/project-a-week/week 7 - makemore/.venv/lib/python3.13/site-packages/torch/overrides.py", line 1721, in handle_torch_function
result = mode.__torch_function__(public_api, types, args, kwargs)
File "/Users/apetrusca/alpine/project-a-week/week 7 - makemore/.venv/lib/python3.13/site-packages/torch/utils/_device.py", line 104, in __torch_function__
return func(*args, **kwargs)
File "/Users/apetrusca/alpine/project-a-week/week 7 - makemore/.venv/lib/python3.13/site-packages/torch/_tensor.py", line 39, in wrapped
return f(*args, **kwargs)
File "/Users/apetrusca/alpine/project-a-week/week 7 - makemore/.venv/lib/python3.13/site-packages/torch/_tensor.py", line 1141, in __rfloordiv__
return torch.floor_divide(other, self)
~~~~~~~~~~~~~~~~~~^^^^^^^^^^^^^
RuntimeError: Placeholder storage has not been allocated on MPS device!
```
### Versions
PyTorch version: 2.7.0.dev20250311
Is debug build: False
CUDA used to build PyTorch: None
ROCM used to build PyTorch: N/A
OS: macOS 15.1.1 (arm64)
GCC version: Could not collect
Clang version: 16.0.0 (clang-1600.0.26.6)
CMake version: version 3.31.6
Libc version: N/A
Python version: 3.13.2 (main, Feb 6 2025, 16:51:52) [Clang 16.0.0 (clang-1600.0.26.6)] (64-bit runtime)
Python platform: macOS-15.1.1-arm64-arm-64bit-Mach-O
Is CUDA available: False
CUDA runtime version: No CUDA
CUDA_MODULE_LOADING set to: N/A
GPU models and configuration: No CUDA
Nvidia driver version: No CUDA
cuDNN version: No CUDA
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Apple M2 Max
Versions of relevant libraries:
[pip3] numpy==2.2.3
[pip3] torch==2.7.0.dev20250311
[pip3] torchaudio==2.6.0.dev20250311
[pip3] torchvision==0.22.0.dev20250311
[conda] Could not collect
cc @vincentqb @jbschlosser @albanD @janeyx99 @crcrpar @kulinseth @malfet @DenisVieriu97 @jhavukainen
| true
|
2,919,305,220
|
[regression] Fix pin_memory() when it is called before device lazy initialization.
|
pytorchbot
|
closed
|
[
"open source"
] | 2
|
COLLABORATOR
|
PR #145752 added a check in isPinnedPtr to verify that a device is initialized before checking whether the tensor is pinned. That PR also added a lazy-initialization trigger when at::empty is called with the pinned param set to true. However, when a tensor is created first and then pinned in a separate pin_memory() call, lazy device init is not triggered, so is_pinned always returns false.
With this PR, the lazy initialization is moved into the getPinnedMemoryAllocator function, which ensures the device is initialized before we pin a tensor.
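A minimal sketch of the scenario this fixes:
```python
import torch

t = torch.randn(16)      # plain CPU tensor; CUDA has not been initialized yet
p = t.pin_memory()       # should lazily initialize the device and pin the memory
assert p.is_pinned()     # regression: this returned False before the fix
```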
Fixes #149032
@ngimel @albanD
| true
|
2,919,280,214
|
Add test coverage
|
hl475
|
closed
|
[
"fb-exported",
"Merged",
"ciflow/trunk",
"release notes: quantization",
"topic: not user facing",
"fx"
] | 4
|
CONTRIBUTOR
|
Summary: Follow up from D71160718
Differential Revision: D71177037
cc @ezyang @SherlockNoMad @EikanWang @jgong5 @wenzhe-nrv
| true
|
2,919,268,579
|
Add side_effect to avoid dce custom op in CA graph
|
zhanglirong1999
|
closed
|
[
"module: custom-operators",
"open source",
"Merged",
"ciflow/trunk",
"topic: not user facing",
"module: dynamo",
"module: compiled autograd"
] | 7
|
CONTRIBUTOR
|
We found that in compiled_autograd, a user-defined custom op can be dead-code-eliminated from the backward graph. We added a side-effect condition to the DCE pass to prevent eliminating custom ops with side effects from the CA graph.
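A minimal sketch of the kind of op this protects, assuming a hypothetical side-effect-only custom op `mylib::record`:
```python
import torch

@torch.library.custom_op("mylib::record", mutates_args=())
def record(x: torch.Tensor) -> None:
    print("grad norm:", x.norm().item())  # side effect only, no return value

class LogGrad(torch.autograd.Function):
    @staticmethod
    def forward(ctx, x):
        return x.clone()

    @staticmethod
    def backward(ctx, grad):
        record(grad)  # looks dead to DCE without the side-effect check
        return grad

x = torch.randn(4, requires_grad=True)
# With compiled autograd enabled (e.g. torch._dynamo.config.compiled_autograd = True
# under torch.compile), the record() call was previously dropped from the backward graph.
LogGrad.apply(x).sum().backward()
```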
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @chauhang @amjames @xmfan
| true
|
2,919,205,616
|
[MPS] Add inductor support for i0e.
|
dcci
|
closed
|
[
"Merged",
"topic: not user facing",
"module: mps",
"ciflow/mps",
"module: inductor",
"ciflow/inductor"
] | 6
|
MEMBER
|
cc @kulinseth @albanD @malfet @DenisVieriu97 @jhavukainen @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
| true
|
2,919,189,889
|
[MPSInductor] Add `bessel_[jy][01]` ops
|
malfet
|
closed
|
[
"Merged",
"topic: not user facing",
"ciflow/mps",
"module: inductor",
"ciflow/inductor"
] | 3
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #149179
* #149123
By simply calling corresponding special functions
Followup TODO: tweak bessel_y0 to match CPU implementation for `torch.half` dtype
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
| true
|
2,919,128,915
|
[Inductor][Optimus] Add move view after cat aten pattern
|
mengluy0125
|
closed
|
[
"fb-exported",
"Merged",
"ciflow/trunk",
"module: inductor",
"ciflow/inductor",
"release notes: inductor"
] | 9
|
CONTRIBUTOR
|
Summary:
Add an aten pattern to move the view/reshape out of split-cat, further reducing the number of kernels.
context: https://docs.google.com/document/d/1G2qFcQu1K7VXbz2uPe0CS2aBirnwtwI_B8lxmlBlAPQ/edit?tab=t.0
Test Plan:
### how to enable
Add the following patterns to the post grad
```
post_grad_fusion_options={
"normalization_aten_pass": {},
"move_view_after_cat_aten_pass": {},
},
```
### unit test
```
buck2 test 'fbcode//mode/dev-nosan' fbcode//caffe2/test/inductor:split_cat_fx_aten_passes -- test_move_view_after_cat_aten
```
Buck UI: https://www.internalfb.com/buck2/3c5451be-c63a-4794-8d6b-103ecac78905
Test UI: https://www.internalfb.com/intern/testinfra/testrun/6192449704507267
### local reproduce
```
buck2 run mode/opt scripts/shuaiyang:test -- --flow_id 691990503 --use_synthetic_data --optimus
```
https://www.internalfb.com/intern/perfdoctor/trace_view?filepath=tree/traces/mengluy/2025-03-13-20-59-34/trace.json.gz&bucket=gpu_traces
### E2E
baseline
f691990503
proposal
Differential Revision: D71177004
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
| true
|
2,919,014,027
|
[Dist] Async op isend and irecv bug
|
feifei-111
|
open
|
[
"oncall: distributed",
"triaged",
"module: c10d"
] | 2
|
NONE
|
### 🐛 Describe the bug

I wrote a pipeline-parallel framework for inference (for some reason, I can't post the code in the issue), and I found that the timing is not correct because the isend/irecv behavior is a bit odd, as the picture shows.
### Versions
cuda version: 12.2
torch version: 2.4.1
nccl version: 2.20.5 (from torch.cuda.nccl.version())
OS: Linux g340-cd51-2800-18c3-adff-a69e-f1f5 5.4.143.bsk.8-amd64 #5.4.143.bsk.8 SMP Debian 5.4.143.bsk.8 Wed Jul 20 08:43:36 UTC x86_64 GNU/Linux
cc @H-Huang @awgu @kwen2501 @wanchaol @fegin @fduwjj @wz337 @wconstab @d4l3k @c-p-i-o
| true
|
2,918,858,200
|
[DO NOT LAND] Try changing the loop order
|
blaine-rister
|
closed
|
[
"module: inductor",
"ciflow/inductor"
] | 2
|
CONTRIBUTOR
|
Testing a possible solution to #148718. It didn't work as well as expected.
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
| true
|
2,918,798,740
|
[AOTI][XPU] Fix: model_container_runner_xpu.cpp is not built into libtorch_xpu.so
|
etaf
|
closed
|
[
"open source",
"Merged",
"ciflow/trunk",
"topic: not user facing",
"ciflow/xpu"
] | 12
|
COLLABORATOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #149175
The missing model_container_runner_xpu.cpp causes a compilation failure when users build a C++ inference application on XPU.
| true
|
2,918,736,849
|
[MPS] Add support for `i0e` in eager.
|
dcci
|
closed
|
[
"Merged",
"Reverted",
"topic: improvements",
"module: mps",
"release notes: mps",
"ciflow/mps",
"ciflow/inductor",
"ci-no-td"
] | 9
|
MEMBER
|
Add `special.i0e` to XFAIL_GRADLIST for now, as its backward op is not yet implemented
cc @kulinseth @albanD @malfet @DenisVieriu97 @jhavukainen
| true
|
2,918,731,650
|
Update clang-format to 19.1.4
|
cyyever
|
open
|
[
"oncall: distributed",
"open source",
"NNC",
"topic: not user facing",
"ciflow/mps",
"ciflow/inductor"
] | 1
|
COLLABORATOR
|
To have the same version as the clang-tidy used in lintrunner.
The changes are all formatting.
cc @H-Huang @awgu @kwen2501 @wanchaol @fegin @fduwjj @wz337 @wconstab @d4l3k @c-p-i-o @EikanWang @jgong5
| true
|
2,918,684,341
|
fix two accuracy regression
|
shunting314
|
closed
|
[
"Merged",
"ciflow/trunk",
"topic: not user facing",
"module: dynamo",
"ciflow/inductor"
] | 4
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #149172
There are 2 accuracy regressions in the 3/12 nightly perf run. I cannot repro them locally, so there is no effective way to bisect. Raise the tolerance so they pass the accuracy check.
- error log for HF MegatronBertForQuestionAnswering https://gist.github.com/shunting314/25322b66e15e98feed32e0d9a1e43316
- error log for TIMM gluon_inception_v3 https://gist.github.com/shunting314/df64ce22327df27a7057bbbd19ef5164
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @chauhang @amjames
| true
|
2,918,673,778
|
Update logic when producing key name for keep_original_weights
|
hl475
|
closed
|
[
"fb-exported",
"Merged",
"ciflow/trunk",
"release notes: quantization",
"fx",
"release notes: AO frontend"
] | 5
|
CONTRIBUTOR
|
Differential Revision: D71160718
cc @ezyang @SherlockNoMad @EikanWang @jgong5 @wenzhe-nrv
| true
|
2,918,673,363
|
fix two accuracy regression
|
shunting314
|
closed
|
[
"module: dynamo",
"ciflow/inductor"
] | 2
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #149170
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @chauhang @amjames
| true
|
2,918,672,008
|
[RelEng] wheel testing for new arch versions
|
malfet
|
open
|
[
"oncall: releng",
"module: ci",
"triaged"
] | 0
|
CONTRIBUTOR
|
Was: [RelEng] s3_management/manage.py does not update indexes for new binaries
Discovered by @atalman while working on 2.7.0-RC1, when `https://download.pytorch.org/whl/test/rocm6.3/index.html` was never updated
manage.py currently only updates subfolders that contain some `.whl` files, but before the first RC is built that folder is empty (perhaps it should not have been created in the first place)
But this new location is needed for the `-test` step to succeed when it runs `pip install ./torch-XYZ.whl --index-url https://download.pytorch.org/whl/nightly/ACC_X_Y`
A logical solution seems to be to test whether `https://download.pytorch.org/whl/nightly/ACC_X_Y` exists and, if not, fall back to the default index, since all the dependencies it is looking for should be there
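A hedged sketch of that fallback logic (the helper name is illustrative, not the actual manage.py change):
```python
import urllib.error
import urllib.request

def nightly_index_for(acc: str) -> str:
    # Prefer the accelerator-specific nightly index if it already exists.
    candidate = f"https://download.pytorch.org/whl/nightly/{acc}"
    try:
        urllib.request.urlopen(f"{candidate}/index.html", timeout=10)
        return candidate
    except urllib.error.URLError:
        # Fall back to the default index, which carries the dependencies.
        return "https://download.pytorch.org/whl/nightly"
```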
cc @seemethere @pytorch/pytorch-dev-infra
| true
|
2,918,643,187
|
Remove some memory overhead in parallel compile workers
|
masnesral
|
closed
|
[
"Merged",
"ciflow/trunk",
"topic: not user facing",
"module: inductor",
"ciflow/inductor"
] | 6
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #149168
Summary: The parallel compile workers are holding on to more memory than they need to because they're loading the compiled modules into memory. Update the post-fork initializer to record when in a subprocess and skip some of the unnecessary overhead.
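A hedged sketch (illustrative names only, not the actual patch) of the post-fork marker idea described in the summary:
```python
import importlib.util

_in_compile_worker = False

def post_fork_initializer() -> None:
    # Runs once in each subprocess worker right after it is forked/spawned.
    global _in_compile_worker
    _in_compile_worker = True

def maybe_load_compiled_module(path: str):
    if _in_compile_worker:
        # Workers only need to produce the artifact on disk; skipping the
        # import keeps their resident memory low.
        return None
    spec = importlib.util.spec_from_file_location("compiled_kernel", path)
    module = importlib.util.module_from_spec(spec)
    spec.loader.exec_module(module)
    return module
```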
Test Plan: Ran a test script to compile 15k Triton kernels and used tracemalloc in the subprocs to investigate the overhead. On my devgpu:
* After importing torch in a subproc: 371M
* Without this PR, after compiling 15k kernels: 825M
* With this PR, after compiling 15k kernels: 531M
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
| true
|
2,918,619,019
|
[AOTInductor] [BE] Add swap_constant_buffer into pybind for tests.
|
muchulee8
|
closed
|
[
"Merged",
"Reverted",
"topic: not user facing",
"module: inductor",
"ciflow/inductor",
"ci-no-td"
] | 6
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #149167
Summary:
We expose swap_constant_buffer via pybind so it can be used in tests.
Test Plan:
python test/inductor/test_aot_inductor.py -k test_update_inactive_constant_buffer
Reviewers:
Subscribers:
Tasks:
Tags:
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @chenyang78 @kadeng @amjames @chauhang @aakhundov
| true
|
2,918,614,931
|
[c10d] Add param recording for uniqueID broadcasting and allgather
|
fduwjj
|
closed
|
[
"oncall: distributed",
"Merged",
"ciflow/trunk",
"release notes: distributed (c10d)"
] | 3
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #149166
cc @H-Huang @awgu @kwen2501 @wanchaol @fegin @wz337 @wconstab @d4l3k @c-p-i-o
| true
|
2,918,557,652
|
Failed to load model in Release but can load in debug
|
Sanjib-ac
|
closed
|
[
"needs reproduction",
"oncall: jit"
] | 2
|
NONE
|
LibTorch 2.6 +cu124
Pytorch 2.6.0 +cu124
torch::jit::load can load a model in Debug but not in Release.
Getting the error "file_name != nullptr".
cc @EikanWang @jgong5 @wenzhe-nrv @sanchitintel
| true
|
2,918,556,868
|
[ATen-CPU] Add `math.h` for Gelu
|
SS-JIA
|
closed
|
[
"module: cpu",
"Merged",
"ciflow/trunk",
"topic: not user facing"
] | 10
|
CONTRIBUTOR
|
Summary:
## Context
This PR is mostly to enable ExecuTorch build for Windows: https://github.com/pytorch/executorch/pull/9198
In ExecuTorch, the optimized GeLU kernel calls the ATen implementation. However, on Windows `math.h` needs to be included with `#define _USE_MATH_DEFINES` in order for math constants to be defined.
Test Plan:
Rely on CI to make sure existing tests do not break. Tested separately with ExecuTorch to make sure Windows build is successful.
cc @jgong5 @mingfeima @XiaobingSuper @sanchitintel @ashokei @jingxu10
| true
|
2,918,545,807
|
Test if ET unit tests are disabled.
|
shengfukevin
|
closed
|
[
"fb-exported",
"topic: not user facing",
"ci-no-td"
] | 8
|
CONTRIBUTOR
|
Summary:
Test if ET unit tests are disabled.
The code change will make all ET unit tests fail.
Differential Revision: D71157414
| true
|
2,918,511,036
|
[AOTInductor] Activate CPU test for update_constant_buffer
|
muchulee8
|
closed
|
[
"Merged",
"topic: not user facing",
"module: inductor",
"ciflow/inductor"
] | 3
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #149162
Summary:
Fixed by #145459
Test Plan:
Re-activating tests.
Reviewers:
Subscribers:
Tasks:
Tags:
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @chenyang78 @kadeng @amjames @chauhang @aakhundov
| true
|
2,918,500,808
|
[AOTInductor] Add function to free buffer
|
muchulee8
|
closed
|
[
"Merged",
"ciflow/trunk",
"topic: not user facing",
"module: inductor",
"ciflow/inductor"
] | 4
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #149161
* #149249
Summary:
We add a function that allows users to free the unused buffer.
Test Plan:
Testing correctness:
python test/inductor/test_aot_inductor.py -k free_inactive
Testing memory consumption:
LD_LIBRARY_PATH=/data/users/$USER/pytorch/build/lib
/home/$USER/local/pytorch/build/bin/test_aoti_inference
Reviewers:
Subscribers:
Tasks:
Tags:
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @chenyang78 @kadeng @amjames @chauhang @aakhundov
| true
|
2,918,470,935
|
[cherry-pick] Revert #148823 - Make dynamism code robust to NotImplementedException
|
ZainRizvi
|
closed
|
[
"module: rocm",
"release notes: releng",
"fx",
"module: inductor",
"ciflow/inductor"
] | 1
|
CONTRIBUTOR
|
Reverting since it was reverted from the main branch
| true
|
2,918,468,763
|
Clean up grid in execution trace
|
shengfukevin
|
closed
|
[
"fb-exported",
"Merged",
"ciflow/trunk",
"topic: not user facing"
] | 5
|
CONTRIBUTOR
|
Summary: This diff (https://www.internalfb.com/diff/D70471332) removed the input "grid" when calling Triton kernels. The PyTorch execution trace needs to make the corresponding change, which includes both ET capture and ET replay.
Test Plan:
buck2 run mode/opt caffe2/test:test_profiler_cuda -- profiler.test_execution_trace.TestExecutionTraceCUDA.test_execution_trace_with_pt2_cuda
buck2 run mode/opt param_bench/fb/integration_tests:test_et_replay
Differential Revision: D71152464
| true
|
2,918,466,389
|
[torch.export] ExportedProgram.module() does not support torch.Size as input
|
titaiwangms
|
closed
|
[
"oncall: pt2",
"oncall: export"
] | 3
|
COLLABORATOR
|
Not sure if this is expected behavior, so filing an issue to understand it. The repro is below:
```python
import torch
import torch.nn as nn
class Model(nn.Module):
def __init__(self):
super(Model, self).__init__()
def forward(self, theta, size):
return torch.nn.functional.affine_grid(theta, size, align_corners=None)
model = Model()
theta = torch.ones((1, 2, 3))
size = torch.Size((1,3,24,24))
ep = torch.export.export(model, (theta, size,), strict=False)
args, kwargs = ep.example_inputs
# Fail with TreeSpec error
ep.module()(*args)
# Pass
from torch.utils._pytree import tree_map_only
args = tree_map_only(
torch.Size,
lambda x: torch.Tensor(x),
args
)
ep.module()(*args)
```
It looks like the GraphModule needs the input to be a torch.Tensor; it does not follow the original torch.nn.Module, which accepts a torch.Size.
cc @chauhang @penguinwu @avikchaudhuri @gmagogsfm @zhxchen17 @tugsbayasgalan @angelayi @suo @ydwu4
| true
|
2,918,446,237
|
Bad index causes segfault instead of IndexError
|
johnstill
|
closed
|
[
"module: binaries"
] | 2
|
NONE
|
### 🐛 Describe the bug
If torch (and its dependencies) have been installed from conda-forge, torch tensors fail to properly raise IndexError:
Installed from conda-forge (this is unexpected behavior):
```
$ python
Python 3.12.9 | packaged by conda-forge | (main, Mar 4 2025, 22:48:41) [GCC 13.3.0] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> import torch
>>> torch.rand(10)[10]
zsh: segmentation fault python
```
Installed from pip (this is the expected behavior):
```
$ python
Python 3.12.9 | packaged by conda-forge | (main, Mar 4 2025, 22:48:41) [GCC 13.3.0] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> import torch
>>> torch.rand(10)[10]
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
IndexError: index 10 is out of bounds for dimension 0 with size 10
>>> exit()
```
I know pytorch has decided not to maintain its Anaconda channel - the above installation is not from the pytorch Anaconda channel but from conda-forge. If that makes this someone else's problem, please just point me to the right repository and I'll repost the bug report there.
### Versions
```
$ python collect_env.py
Collecting environment information...
PyTorch version: 2.6.0
Is debug build: False
CUDA used to build PyTorch: None
ROCM used to build PyTorch: N/A
OS: CentOS Linux release 7.9.2009 (Core) (x86_64)
GCC version: (conda-forge gcc 14.2.0-2) 14.2.0
Clang version: Could not collect
CMake version: version 3.31.6
Libc version: glibc-2.17
Python version: 3.12.9 | packaged by conda-forge | (main, Mar 4 2025, 22:48:41) [GCC 13.3.0] (64-bit runtime)
Python platform: Linux-3.10.0-1160.76.1.el7.x86_64-x86_64-with-glibc2.17
Is CUDA available: False
CUDA runtime version: No CUDA
CUDA_MODULE_LOADING set to: N/A
GPU models and configuration: No CUDA
Nvidia driver version: No CUDA
cuDNN version: No CUDA
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Byte Order: Little Endian
CPU(s): 64
On-line CPU(s) list: 0-63
Thread(s) per core: 2
Core(s) per socket: 8
Socket(s): 4
NUMA node(s): 4
Vendor ID: GenuineIntel
CPU family: 6
Model: 45
Model name: Intel(R) Xeon(R) CPU E5-4640 0 @ 2.40GHz
Stepping: 7
CPU MHz: 2700.146
CPU max MHz: 2800.0000
CPU min MHz: 1200.0000
BogoMIPS: 4799.81
Virtualization: VT-x
L1d cache: 32K
L1i cache: 32K
L2 cache: 256K
L3 cache: 20480K
NUMA node0 CPU(s): 0-7,32-39
NUMA node1 CPU(s): 8-15,40-47
NUMA node2 CPU(s): 16-23,48-55
NUMA node3 CPU(s): 24-31,56-63
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc aperfmperf eagerfpu pni pclmulqdq dtes64 monitor ds_cpl vmx smx est tm2 ssse3 cx16 xtpr pdcm pcid dca sse4_1 sse4_2 x2apic popcnt tsc_deadline_timer aes xsave avx lahf_lm epb ssbd ibrs ibpb stibp tpr_shadow vnmi flexpriority ept vpid xsaveopt dtherm ida arat pln pts md_clear spec_ctrl intel_stibp flush_l1d
Versions of relevant libraries:
[pip3] numpy==2.2.3
[pip3] optree==0.14.1
[pip3] torch==2.6.0
[conda] libblas 3.9.0 31_hfdb39a5_mkl conda-forge
[conda] libcblas 3.9.0 31_h372d94f_mkl conda-forge
[conda] liblapack 3.9.0 31_hc41d3b0_mkl conda-forge
[conda] libtorch 2.6.0 cpu_mkl_hc5f969b_101 conda-forge
[conda] mkl 2024.2.2 ha957f24_16 conda-forge
[conda] mkl-devel 2024.2.2 ha770c72_16 conda-forge
[conda] mkl-include 2024.2.2 ha957f24_16 conda-forge
[conda] numpy 2.2.3 py312h72c5963_0 conda-forge
[conda] optree 0.14.1 py312h68727a3_0 conda-forge
[conda] pytorch 2.6.0 cpu_mkl_py312_h446997d_101 conda-forge
[conda] pytorch-cpu 2.6.0 cpu_mkl_hc60beec_101 conda-forge
```
cc @seemethere @malfet @osalpekar @atalman
| true
|
2,918,418,704
|
illegal hardware instruction in `torch.tanh`
|
johnstill
|
closed
|
[
"needs reproduction",
"module: binaries",
"module: crash",
"triaged",
"module: intel"
] | 3
|
NONE
|
### 🐛 Describe the bug
Under some circumstances `torch.tanh` crashes with an "illegal hardware instruction"
```
$ python
Python 3.12.9 | packaged by conda-forge | (main, Mar 4 2025, 22:48:41) [GCC 13.3.0] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> import torch
>>> X = torch.rand(64000000)
>>> torch.tanh(X)
zsh: illegal hardware instruction python
```
But if I intersperse a single call to `torch.tanh` on a small tensor, the error doesn't happen:
```
$ python
Python 3.12.9 | packaged by conda-forge | (main, Mar 4 2025, 22:48:41) [GCC 13.3.0] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> import torch
>>> X = torch.rand(64000000)
>>> torch.tanh(torch.tensor([1]))
tensor([0.7616])
>>> torch.tanh(X)
tensor([0.3067, 0.2477, 0.7329, ..., 0.6530, 0.1699, 0.1196])
>>> exit()
```
### Versions
```
$ python collect_env.py
Collecting environment information...
PyTorch version: 2.6.0+cpu
Is debug build: False
CUDA used to build PyTorch: None
ROCM used to build PyTorch: N/A
OS: CentOS Linux release 7.9.2009 (Core) (x86_64)
GCC version: (GCC) 4.8.5 20150623 (Red Hat 4.8.5-44)
Clang version: Could not collect
CMake version: Could not collect
Libc version: glibc-2.17
Python version: 3.12.9 | packaged by conda-forge | (main, Mar 4 2025, 22:48:41) [GCC 13.3.0] (64-bit runtime)
Python platform: Linux-3.10.0-1160.76.1.el7.x86_64-x86_64-with-glibc2.17
Is CUDA available: False
CUDA runtime version: No CUDA
CUDA_MODULE_LOADING set to: N/A
GPU models and configuration: No CUDA
Nvidia driver version: No CUDA
cuDNN version: No CUDA
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Byte Order: Little Endian
CPU(s): 64
On-line CPU(s) list: 0-63
Thread(s) per core: 2
Core(s) per socket: 8
Socket(s): 4
NUMA node(s): 4
Vendor ID: GenuineIntel
CPU family: 6
Model: 45
Model name: Intel(R) Xeon(R) CPU E5-4640 0 @ 2.40GHz
Stepping: 7
CPU MHz: 1212.451
CPU max MHz: 2800.0000
CPU min MHz: 1200.0000
BogoMIPS: 4799.81
Virtualization: VT-x
L1d cache: 32K
L1i cache: 32K
L2 cache: 256K
L3 cache: 20480K
NUMA node0 CPU(s): 0-7,32-39
NUMA node1 CPU(s): 8-15,40-47
NUMA node2 CPU(s): 16-23,48-55
NUMA node3 CPU(s): 24-31,56-63
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc aperfmperf eagerfpu pni pclmulqdq dtes64 monitor ds_cpl vmx smx est tm2 ssse3 cx16 xtpr pdcm pcid dca sse4_1 sse4_2 x2apic popcnt tsc_deadline_timer aes xsave avx lahf_lm epb ssbd ibrs ibpb stibp tpr_shadow vnmi flexpriority ept vpid xsaveopt dtherm ida arat pln pts md_clear spec_ctrl intel_stibp flush_l1d
Versions of relevant libraries:
[pip3] numpy==1.26.4
[pip3] torch==2.6.0+cpu
[pip3] torchaudio==2.6.0+cpu
[pip3] torchvision==0.21.0+cpu
[conda] numpy 1.26.4 pypi_0 pypi
[conda] torch 2.6.0+cpu pypi_0 pypi
[conda] torchaudio 2.6.0+cpu pypi_0 pypi
[conda] torchvision 0.21.0+cpu pypi_0 pypi
```
cc @seemethere @malfet @osalpekar @atalman @frank-wei @jgong5 @mingfeima @XiaobingSuper @sanchitintel @ashokei @jingxu10
| true
|
2,918,396,044
|
torch.multiprocessing.Queue Zeroes Out Tensors on Retrieval
|
ManuelZ
|
open
|
[
"module: windows",
"module: multiprocessing",
"module: cuda",
"triaged"
] | 1
|
NONE
|
### 🐛 Describe the bug
When sending a CUDA tensor through a `torch.multiprocessing.Queue`, the received tensor contains only zeros instead of the expected values.
I reproduced it on Windows 10 with PyTorch 2.5.1 and 2.6.0.
I couldn't reproduce it in Colab with PyTorch 2.5.1.
Minimal reproducible example:
```
# Uncomment to test it in Colab
# %%writefile bug_report.py
import torch
import torch.multiprocessing as mp
def f1(shared_queue):
"""Send a CUDA tensor through the multiprocessing queue."""
t = torch.tensor((1, 2), device="cuda:0")
print("Tensor sent: ", t)
shared_queue.put(t)
def f2(shared_queue):
"""Retrieve the tensor from the queue and print it."""
while True:
if shared_queue.empty():
continue
t = shared_queue.get()
print(f"Tensor received: {t}")
break
if __name__ == "__main__":
mp.set_start_method("spawn", True)
shared_queue = torch.multiprocessing.Queue()
p1 = mp.Process(target=f1, args=(shared_queue,))
p2 = mp.Process(target=f2, args=(shared_queue,))
p1.start()
p2.start()
p1.join()
p2.join()
# Uncomment to test it in Colab, in a new cell
# !python bug_report.py
```
```
Tensor sent: tensor([1, 2], device='cuda:0')
Tensor received: tensor([0, 0], device='cuda:0')
```
### Versions
PyTorch version: 2.6.0+cu126
Is debug build: False
CUDA used to build PyTorch: 12.6
ROCM used to build PyTorch: N/A
OS: Microsoft Windows 10 Home (10.0.19045 64-bit)
GCC version: (Rev6, Built by MSYS2 project) 13.1.0
Clang version: Could not collect
CMake version: version 3.31.0
Libc version: N/A
Python version: 3.11.11 | packaged by conda-forge | (main, Dec 5 2024, 14:06:23) [MSC v.1942 64 bit (AMD64)] (64-bit runtime)
Python platform: Windows-10-10.0.19045-SP0
Is CUDA available: True
CUDA runtime version: 12.6.77
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration: GPU 0: NVIDIA GeForce GTX 1050
Nvidia driver version: 560.94
cuDNN version: Could not collect
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Name: Intel(R) Core(TM) i7-7700HQ CPU @ 2.80GHz
Manufacturer: GenuineIntel
Family: 198
Architecture: 9
ProcessorType: 3
DeviceID: CPU0
CurrentClockSpeed: 2800
MaxClockSpeed: 2801
L2CacheSize: 1024
L2CacheSpeed: None
Revision: None
Versions of relevant libraries:
[pip3] efficientnet_pytorch==0.7.1
[pip3] numpy==1.26.4
[pip3] nvidia-cuda-runtime-cu12==12.8.90
[pip3] onnx==1.17.0
[pip3] onnxruntime-gpu==1.21.0
[pip3] onnxslim==0.1.48
[pip3] pytorch_toolbelt==0.8.0
[pip3] segmentation_models_pytorch==0.4.0
[pip3] torch==2.6.0+cu126
[pip3] torch-lr-finder==0.2.2
[pip3] torchaudio==2.6.0+cu126
[pip3] torcheval==0.0.7
[pip3] torchinfo==1.8.0
[pip3] torchvision==0.21.0+cu126
[conda] efficientnet-pytorch 0.7.1 pypi_0 pypi
[conda] libblas 3.9.0 31_h641d27c_mkl conda-forge
[conda] libcblas 3.9.0 31_h5e41251_mkl conda-forge
[conda] liblapack 3.9.0 31_h1aa476e_mkl conda-forge
[conda] mkl 2024.2.2 h66d3029_15 conda-forge
[conda] numpy 1.26.4 pypi_0 pypi
[conda] nvidia-cuda-runtime-cu12 12.8.90 pypi_0 pypi
[conda] pytorch-toolbelt 0.8.0 pypi_0 pypi
[conda] segmentation-models-pytorch 0.4.0 pypi_0 pypi
[conda] torch 2.6.0+cu126 pypi_0 pypi
[conda] torch-lr-finder 0.2.2 pypi_0 pypi
[conda] torchaudio 2.6.0+cu126 pypi_0 pypi
[conda] torcheval 0.0.7 pypi_0 pypi
[conda] torchinfo 1.8.0 pypi_0 pypi
[conda] torchvision 0.21.0+cu126 pypi_0 pypi
cc @peterjc123 @mszhanyi @skyline75489 @nbcsm @iremyux @Blackhex @VitalyFedyunin @albanD @ptrblck @msaroufim @eqy
| true
|
2,918,363,986
|
allow extra args for parameterization of tests in inductor
|
isuruf
|
closed
|
[
"open source",
"Merged",
"ciflow/trunk",
"topic: not user facing",
"module: inductor",
"ciflow/inductor"
] | 9
|
COLLABORATOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #148209
* __->__ #149154
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
| true
|
2,918,332,011
|
ProcessGroupNCCL: ncclCommAbort hangs with NCCL 2.25.1-1
|
d4l3k
|
closed
|
[
"module: dependency bug",
"oncall: distributed",
"module: nccl",
"module: c10d",
"bug"
] | 8
|
MEMBER
|
### 🐛 Describe the bug
ncclCommAbort hangs when using NCCL 2.25.1-1 with PyTorch nightly. This is fixed with NCCL 2.26.2-1, which was released yesterday (2025-03-12).
Full details (repro + stack traces) in https://gist.github.com/d4l3k/16a19b475952bc40ddd7f2febcc297b7
Relevant stack traces:
```
thread #16, name = 'python', stop reason = signal SIGSTOP
frame #0: 0x00007fb0b7f0792d libc.so.6`syscall + 29
frame #1: 0x00007fb08faef142 libstdc++.so.6`std::__atomic_futex_unsigned_base::_M_futex_wait_until_steady(this=<unavailable>, __addr=0x00007fac98000b00, __val=2147483648, __has_timeout=true, __s=<unavailable>, __ns=(__r = 711393434)) at futex.cc:217:18
frame #2: 0x00007fb090db0b85 libtorch_cuda.so`c10d::ProcessGroupNCCL::waitForFutureOrTimeout(std::future<bool>&, std::chrono::duration<long, std::ratio<1l, 1000l>> const&, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char>> const&, c10d::C10dLoggingData&, bool) + 725
frame #3: 0x00007fb090db1068 libtorch_cuda.so`c10d::ProcessGroupNCCL::abort() + 664
frame #4: 0x00007fb0af488edc libtorch_python.so`void pybind11::cpp_function::initialize<pybind11::cpp_function::cpp_function<void, c10d::Backend, pybind11::name, pybind11::is_method, pybind11::sibling, pybind11::call_guard<pybind11::gil_scoped_release>, char [65]>(void (c10d::Backend::*)(), pybind11::name const&, pybind11::is_method const&, pybind11::sibling const&, pybind11::call_guard<pybind11::gil_scoped_release> const&, char const (&) [65])::'lambda'(c10d::Backend*), void, c10d::Backend*, pybind11::name, pybind11::is_method, pybind11::sibling, pybind11::call_guard<pybind11::gil_scoped_release>, char [65]>(void&&, c10d::Backend (*)(), pybind11::name const&, pybind11::is_method const&, pybind11::sibling const&, pybind11::call_guard<pybind11::gil_scoped_release> const&, char const (&) [65])::'lambda1'(pybind11::detail::function_call&)::_FUN(pybind11::detail::function_call&) + 188
frame #5: 0x00007fb0aeb8866e libtorch_python.so`pybind11::cpp_function::dispatcher(_object*, _object*, _object*) + 2062
frame #6: 0x00000000004fc697 python3.10`cfunction_call(func='0x7fb039dbd260', args=<unavailable>, kwargs=<unavailable>) at methodobject.c:543:19
thread #17, name = 'python', stop reason = signal SIGSTOP
frame #0: 0x00007fb0b7ed4895 libc.so.6`clock_nanosleep@GLIBC_2.2.5 + 101
frame #1: 0x00007fb0b7ed9487 libc.so.6`__nanosleep + 23
frame #2: 0x00007fb0b7f05319 libc.so.6`usleep + 73
frame #3: 0x00007fb0937e944b libtorch_cuda.so`asyncJobLaunch(asyncJobsMain=0x00007fad3c004598, groupAbortFlag=0x00007fad3c004590) at group.cc:382:36
frame #4: 0x00007fb0937e9e54 libtorch_cuda.so`groupLaunch(job_=0x00007fad3c0045b0, simInfo=0x0000000000000000) at group.cc:423:3
frame #5: 0x00007fb0937eb0e5 libtorch_cuda.so`ncclGroupEndInternal(simInfo=0x0000000000000000) at group.cc:573:7
frame #6: 0x00007fb0937f4239 libtorch_cuda.so`ncclCommAbort(comm=<unavailable>) at init.cc:2098:3
frame #7: 0x00007fb090d83907 libtorch_cuda.so`c10d::NCCLComm::abort(std::optional<std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char>>>) + 599
frame #8: 0x00007fb090da3ddb libtorch_cuda.so`c10d::ProcessGroupNCCL::abortCommsFromMap(std::unordered_map<std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char>>, std::shared_ptr<c10d::NCCLComm>, std::hash<std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char>>>, std::equal_to<std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char>>>, std::allocator<std::pair<std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char>> const, std::shared_ptr<c10d::NCCLComm>>>>&, std::optional<std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char>>> const&) + 75
frame #9: 0x00007fb090daea91 libtorch_cuda.so`c10d::ProcessGroupNCCL::abortComms(std::optional<std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char>>> const&) + 129
frame #10: 0x00007fb090daf4ff libtorch_cuda.so`std::_Function_handler<std::unique_ptr<std::__future_base::_Result_base, std::__future_base::_Result_base::_Deleter> (), std::__future_base::_Task_setter<std::unique_ptr<std::__future_base::_Result<bool>, std::__future_base::_Result_base::_Deleter>, std::thread::_Invoker<std::tuple<c10d::ProcessGroupNCCL::abort()::'lambda0'()>>, bool>>::_M_invoke(std::_Any_data const&) + 47
frame #11: 0x00007fb090c083eb libtorch_cuda.so`std::__future_base::_State_baseV2::_M_do_set(std::function<std::unique_ptr<std::__future_base::_Result_base, std::__future_base::_Result_base::_Deleter> ()>*, bool*) + 27
frame #12: 0x00007fb0b7e8f5c8 libc.so.6`__pthread_once_slow + 232
frame #13: 0x00007fb090da7c66 libtorch_cuda.so`std::__future_base::_Async_state_impl<std::thread::_Invoker<std::tuple<c10d::ProcessGroupNCCL::abort()::'lambda0'()>>, bool>::_M_run() + 214
frame #14: 0x00007fb08faf0e95 libstdc++.so.6`std::execute_native_thread_routine(__p=<unavailable>) at thread.cc:104:18
frame #15: 0x00007fb0b7e8a3b2 libc.so.6`start_thread + 722
frame #16: 0x00007fb0b7f0f430 libc.so.6`__clone3 + 48
thread #18, name = 'python', stop reason = signal SIGSTOP
frame #0: 0x00007fb0b7e86f4a libc.so.6`__futex_abstimed_wait_common + 202
frame #1: 0x00007fb0b7e8bec4 libc.so.6`__pthread_clockjoin_ex + 324
frame #2: 0x00007fb0937f004f libtorch_cuda.so`::commReclaim(ncclAsyncJob *) [inlined] commFree(comm=0x000000005a762f20) at init.cc:194:5
frame #3: 0x00007fb0937efe00 libtorch_cuda.so`::commReclaim(ncclAsyncJob *) [inlined] commCleanup(comm=0x000000005a762f20) at init.cc:1926:3
frame #4: 0x00007fb0937efa4a libtorch_cuda.so`commReclaim(job_=<unavailable>) at init.cc:2013:31
frame #5: 0x00007fb0937e8db8 libtorch_cuda.so`ncclAsyncJobMain(arg=0x00007fad3c0333b0) at group.cc:73:26
frame #6: 0x00007fb0b7e8a3b2 libc.so.6`start_thread + 722
frame #7: 0x00007fb0b7f0f430 libc.so.6`__clone3 + 48
```
### Versions
PyTorch main
NCCL 2.25.1-1
cc @H-Huang @awgu @kwen2501 @wanchaol @fegin @fduwjj @wz337 @wconstab @c-p-i-o
| true
|
2,918,326,766
|
[c10d] Make getDefaultBackend more fault tolerant without relying on exceptions
|
PatriceVignola
|
closed
|
[
"oncall: distributed",
"fb-exported",
"Merged",
"ciflow/trunk",
"release notes: distributed (c10d)"
] | 8
|
CONTRIBUTOR
|
Summary: no-except builds are terminating when this exception is thrown. We should proactively check if a backend is available before calling has_hooks, instead of trying and failing.
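A hedged Python-level analog of that check (the real change lives in the C++ getDefaultBackend path; the function name below is illustrative):
```python
import torch.distributed as dist

def pick_default_backend(device_type: str) -> str:
    # Check availability up front instead of relying on a thrown exception.
    if device_type == "cuda" and dist.is_nccl_available():
        return "nccl"
    if dist.is_gloo_available():
        return "gloo"
    raise RuntimeError(f"No distributed backend available for {device_type}")
```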
Test Plan: CI
Differential Revision: D71144456
cc @H-Huang @awgu @kwen2501 @wanchaol @fegin @fduwjj @wz337 @wconstab @d4l3k @c-p-i-o
| true
|
2,918,297,789
|
[ONNX] Cover dynamic_shapes checks within verify=True
|
titaiwangms
|
open
|
[
"module: onnx",
"triaged"
] | 3
|
COLLABORATOR
|
https://github.com/pytorch/pytorch/blob/38e81a53324146d445a81eb8f80bccebe623eb35/torch/onnx/_internal/exporter/_verification.py#L137
We can try a second set of inputs with different shapes to exercise the dynamic_shapes spec, so that users (and we) can catch issues before actually deploying the model, saving the trouble of writing another code snippet to test it.
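A hedged sketch of the idea (the model and shapes are illustrative; assumes onnxruntime is installed so the exported program can be executed):
```python
import torch

class M(torch.nn.Module):
    def forward(self, x):
        return x * 2

model = M()
batch = torch.export.Dim("batch")
onnx_program = torch.onnx.export(
    model, (torch.randn(2, 4),), dynamic_shapes=({0: batch},), dynamo=True
)
# A second input with a different batch size exercises the dynamic axis,
# which is what verify=True could additionally cover.
alt_x = torch.randn(5, 4)
expected = model(alt_x)
actual = onnx_program(alt_x)[0]  # assumed callable via ONNX Runtime
torch.testing.assert_close(torch.as_tensor(actual), expected)
```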
cc @justinchuby
| true
|
2,918,193,958
|
[fsdp] add an experimental allocator hook for buffers that participate in collective communication
|
jiayulu
|
open
|
[
"oncall: distributed",
"fb-exported",
"release notes: distributed (fsdp)"
] | 7
|
NONE
|
Summary: https://github.com/pytorch/pytorch/pull/147146
Test Plan: unit test
Differential Revision: D69694585
cc @H-Huang @awgu @kwen2501 @wanchaol @fegin @fduwjj @wz337 @wconstab @d4l3k @c-p-i-o
| true
|
2,918,187,634
|
Fix shape guard failure to be valid python
|
isuruf
|
closed
|
[
"open source",
"Merged",
"ciflow/trunk",
"module: dynamo",
"ciflow/inductor",
"release notes: dynamo"
] | 7
|
COLLABORATOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #140756
* #149211
* #149197
* __->__ #149149
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @chauhang @amjames
| true
|
2,918,187,455
|
Fix printing INT64_MIN
|
isuruf
|
closed
|
[
"module: cpu",
"open source",
"Merged",
"ciflow/trunk",
"module: inductor",
"ciflow/inductor",
"release notes: dynamo"
] | 6
|
COLLABORATOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #140756
* #149211
* #149197
* #149149
* __->__ #149148
cc @jgong5 @mingfeima @XiaobingSuper @sanchitintel @ashokei @jingxu10 @voznesenskym @penguinwu @EikanWang @Guobing-Chen @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
| true
|
2,918,142,688
|
[MPS] fix attention enable_gqa crash on mps
|
Isalia20
|
closed
|
[
"open source",
"Merged",
"topic: bug fixes",
"module: mps",
"release notes: mps",
"ciflow/mps"
] | 8
|
COLLABORATOR
|
Fixes #149132
cc @kulinseth @albanD @malfet @DenisVieriu97 @jhavukainen
| true
|
2,918,134,447
|
Update as strided doc
|
albanD
|
closed
|
[
"Merged",
"ciflow/trunk",
"release notes: python_frontend"
] | 6
|
COLLABORATOR
|
Make it clearer why it is not recommended to use it and when the resulting Tensor will have undefined behavior.
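One hedged illustration (not taken from the doc itself) of the kind of pitfall the clarified wording warns about:
```python
import torch

x = torch.arange(6.0)
# Overlapping view: consecutive rows share storage elements.
overlapping = x.as_strided(size=(3, 3), stride=(1, 1))
# In-place ops on overlapping memory visit elements more than once,
# so the result is undefined / order-dependent.
overlapping.add_(1)
```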
| true
|
2,918,118,576
|
[ROCm] enable HIPMallocAsyncAllocator
|
ethanwee1
|
closed
|
[
"module: rocm",
"triaged",
"open source",
"Merged",
"Reverted",
"ciflow/trunk",
"release notes: rocm",
"ci-no-td"
] | 27
|
CONTRIBUTOR
|
cc @jeffdaily @sunway513 @jithunnair-amd @pruthvistony @ROCmSupport @dllehr-amd @jataylo @hongxiayang @naromero77amd
| true
|
2,918,095,202
|
[c10d] Fix extra CUDA context created by barrier
|
kwen2501
|
closed
|
[
"oncall: distributed",
"Merged",
"Reverted",
"ciflow/trunk",
"release notes: distributed (c10d)",
"topic: bug fixes",
"ci-no-td"
] | 17
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #149144
Fixes #149119.
In ProcessGroup.hpp, we create a dummy tensor for dispatching. This
requires a correct device index. This PR uses the `device_id` given by the user
when calling `init_process_group`.
This PR also uses `torch._C._get_accelerator()` to determine the device
type.
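A hedged usage sketch (assumes a torchrun-style launcher that sets LOCAL_RANK) showing the `device_id` hint the fix relies on:
```python
import os
import torch
import torch.distributed as dist

local_rank = int(os.environ.get("LOCAL_RANK", "0"))
# Passing device_id lets collectives such as barrier() bind to the right GPU
# instead of creating an extra context on cuda:0.
dist.init_process_group("nccl", device_id=torch.device(f"cuda:{local_rank}"))
dist.barrier()
dist.destroy_process_group()
```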
cc @H-Huang @awgu @wanchaol @fegin @fduwjj @wz337 @wconstab @d4l3k @c-p-i-o
| true
|
2,918,093,634
|
[Easy] update pip sources for CUDA in nightly pull tool
|
XuehaiPan
|
open
|
[
"open source",
"topic: not user facing"
] | 1
|
COLLABORATOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #149143
* #145685
| true
|
2,918,087,968
|
ci: Update linux.20_04 --> linux.24_04
|
seemethere
|
closed
|
[
"Merged",
"ciflow/trunk",
"topic: not user facing"
] | 6
|
MEMBER
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #149142
Ubuntu 20.04 is getting deprecated soon, so we might as well proactively
move to the latest LTS, which is 24.04.
> [!NOTE]
> The oldest supported version of python on 24.04 is Python 3.8. Since we test for Python 3.6 compat in our collect_env test, we need to have this particular job stick with 20.04 for now until we decide to upgrade it to a newer python version.
Signed-off-by: Eli Uriegas <eliuriegas@meta.com>
| true
|
2,918,084,254
|
[ONNX] Set `is_in_onnx_export` for dynamo=True
|
justinchuby
|
closed
|
[
"module: onnx",
"triaged"
] | 7
|
COLLABORATOR
|
Currently `is_in_onnx_export()` is True only for the TorchScript exporter. We should set it to True during dynamo export as well to support `torch.onnx.ops.symbolic` usage.
Option 1: Users use
```
if torch.onnx.is_in_onnx_export() and torch.compile.is_exporting():
# Do the `torch.onnx.ops.symbolic` thing
```
Option 2: Define `torch.onnx.is_in_onnx_pt2_export()` or something like that
```
if torch.onnx.is_in_onnx_pt2_export():
# Do the `torch.onnx.ops.symbolic` thing
```
Option 3: Don't care about the old exporter
```
if torch.onnx.is_in_onnx_export():
# Do the `torch.onnx.ops.symbolic` thing
```
| true
|
2,918,016,530
|
PaddedTensor Init
|
alexanderb14
|
open
|
[
"open source",
"module: dynamo",
"ciflow/inductor"
] | 3
|
NONE
|
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @chauhang @amjames
| true
|
2,918,012,698
|
Gh/alexbrauckmann/paddedtensor init
|
alexanderb14
|
closed
|
[
"module: dynamo",
"ciflow/inductor"
] | 2
|
NONE
|
Fixes #ISSUE_NUMBER
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @chauhang @amjames
| true
|
2,917,984,300
|
FSDP with AveragedModel
|
nikonikolov
|
open
|
[
"oncall: distributed",
"triaged",
"module: fsdp"
] | 3
|
CONTRIBUTOR
|
I am trying to use FSDP with `torch.optim.swa_utils.AveragedModel`, but I am getting the error
```
File "/scratch/nikolay_nikolov/.cache/bazel/_bazel_nikolay_nikolov/79bf5e678fbb2019f1e30944a206f079/external/python_runtime_x86_64-unknown-linux-gnu/lib/python3.10/copy.py", line 161, in deepcopy
rv = reductor(4)
TypeError: cannot pickle 'module' object
```
This happens at `copy.deepcopy` inside `torch.optim.swa_utils.AveragedModel.__init__`, and `module` seems to refer to `<module 'torch.cuda' from '/scratch/nikolay_nikolov/.cache/bazel/_bazel_nikolay_nikolov/79bf5e678fbb2019f1e30944a206f079/execroot/barrel/bazel-out/k8-opt/bin/barrel/pipes/vlams/train.runfiles/pip-core_torch/site-packages/torch/cuda/__init__.py'>`
1. Is FSDP supposed to work with `torch.optim.swa_utils.AveragedModel`?
2. If not, how can one implement it? My plan to avoid `deepcopy` was to instead use a sharded state dict and compute the average separately on each rank to save memory. However, I can't find an easy way to convert the sharded state dict back to a full state dict offloaded to CPU when I need to save the state dict. Any tips on that?
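For (2), a hedged sketch of one possible way to materialize a full, CPU-offloaded state dict from a sharded FSDP model at save time (the helper name is illustrative):
```python
import torch
from torch.distributed.checkpoint.state_dict import (
    StateDictOptions,
    get_model_state_dict,
)

def full_cpu_state_dict(fsdp_model: torch.nn.Module) -> dict:
    # Gathers the sharded parameters into a full state dict, offloaded to CPU.
    options = StateDictOptions(full_state_dict=True, cpu_offload=True)
    return get_model_state_dict(fsdp_model, options=options)
```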
cc @H-Huang @awgu @wanchaol @fegin @fduwjj @wz337 @wconstab @d4l3k @zhaojuanmao @mrshenli @rohan-varma @chauhang @mori360 @kwen2501 @c-p-i-o
| true
|
2,917,919,812
|
Add back fake class registration to test_torchbind
|
yushangdi
|
closed
|
[
"fb-exported",
"Merged",
"ciflow/trunk",
"topic: not user facing"
] | 4
|
CONTRIBUTOR
|
Fixes #149121
Summary: as title, to fix https://github.com/pytorch/pytorch/issues/149121
Test Plan:
```
python test/export/test_torchbind.py
```
Differential Revision: D71129321
| true
|
2,917,898,165
|
Use TorchVersion for triton version check
|
atalman
|
closed
|
[
"Merged",
"ciflow/trunk",
"topic: not user facing",
"module: inductor",
"ciflow/inductor"
] | 7
|
CONTRIBUTOR
|
Followup after https://github.com/pytorch/pytorch/pull/149092#issuecomment-2721990321
To use TorchVersion for triton version parsing
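A hedged sketch of the intended comparison (the required version below is illustrative):
```python
from torch.torch_version import TorchVersion

def triton_at_least(installed_version: str, required: str = "3.0.0") -> bool:
    # TorchVersion supports PEP 440-style comparisons against plain strings.
    return TorchVersion(installed_version) >= required
```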
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
| true
|
2,917,891,673
|
[PGNCCL] Fix extra CUDA context created by barrier
|
kwen2501
|
closed
|
[
"oncall: distributed",
"release notes: distributed (c10d)"
] | 2
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #149135
Fixes #149119.
Use correct device to do barrier.
cc @H-Huang @awgu @wanchaol @fegin @fduwjj @wz337 @wconstab @d4l3k @c-p-i-o
| true
|
2,917,869,344
|
add keepdim to cosine similarity
|
Isalia20
|
open
|
[
"module: nn",
"triaged",
"open source",
"release notes: onnx",
"topic: improvements"
] | 9
|
COLLABORATOR
|
Fixes #149120
cc @albanD @mruberry @jbschlosser @walterddr @mikaylagawarecki
| true
|
2,917,847,704
|
Implement einsum backprop rather than decomposing
|
pgmoka
|
open
|
[
"module: autograd",
"triaged",
"module: python frontend"
] | 3
|
NONE
|
### 🚀 The feature, motivation and pitch
Currently, when doing einsum backpropagation, the function decomposes into a series of IRs that achieve the einsum computation. This is an issue for TPUs, as the decomposition creates a significant performance impact due to the potential reshapes.
When looking at IRs for einsum at the moment, we will get something like:
```
IR {
%0 = f32[] prim::Constant(), xla_shape=f32[]
%1 = f32[3,3]{1,0} aten::expand(%0), xla_shape=f32[3,3]{1,0}
%2 = f32[3,3,1]{2,1,0} aten::as_strided(%1), xla_shape=f32[3,3,1]{2,1,0}
%3 = f32[3,3,1]{2,1,0} aten::as_strided(%2), xla_shape=f32[3,3,1]{2,1,0}
%4 = f32[1,3,3]{2,1,0} aten::view(%3), xla_shape=f32[1,3,3]{2,1,0}
%5 = f32[] prim::Constant(), xla_shape=f32[]
%6 = f32[3,3]{1,0} aten::expand(%5), xla_shape=f32[3,3]{1,0}
%7 = f32[3,3,1]{2,1,0} aten::as_strided(%6), xla_shape=f32[3,3,1]{2,1,0}
%8 = f32[3,3,1]{2,1,0} aten::as_strided(%7), xla_shape=f32[3,3,1]{2,1,0}
%9 = f32[1,3,3]{2,1,0} aten::view(%8), xla_shape=f32[1,3,3]{2,1,0}
%10 = f32[1,3,3]{2,1,0} aten::matmul(%9, %4), xla_shape=f32[1,3,3]{2,1,0}
%11 = f32[3,1,3]{2,1,0} aten::view(%10), xla_shape=f32[3,1,3]{2,1,0}
%12 = f32[3,3,1]{2,1,0} aten::as_strided(%11), xla_shape=f32[3,3,1]{2,1,0}
%13 = f32[3,3]{1,0} aten::view(%12), xla_shape=f32[3,3]{1,0}, ROOT=0
}
```
rather than a call to something like `aten::einsum` which can perform the einsum function more efficiently.
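A hedged repro sketch of the pattern whose backward currently lowers to the decomposition above:
```python
import torch

a = torch.randn(3, 3, requires_grad=True)
b = torch.randn(3, 3, requires_grad=True)
# The backward of this einsum is what decomposes into
# expand/as_strided/view/matmul on XLA today.
torch.einsum("ij,jk->ik", a, b).sum().backward()
```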
### Alternatives
_No response_
### Additional context
The lack of a backprop implementation from PyTorch caused an issue in PyTorchXLA (https://github.com/pytorch/xla/issues/8713). While we are able to create dispatch calls that resolve this issue, it creates potentially unknown edge cases, and it makes us acquire tech debt slowly over time.
cc @ezyang @albanD @gqchen @pearu @nikitaved @soulitzer @Varal7 @xmfan
| true
|