| id (int64, 2.74B–3.05B) | title (string, 1–255 chars) | user (string, 2–26 chars) | state (2 classes) | labels (list, 0–24 items) | comments (int64, 0–206) | author_association (4 classes) | body (string, 7–62.5k chars, ⌀ = null) | is_title (bool, 1 class) |
|---|---|---|---|---|---|---|---|---|
2,834,080,960
|
[Inductor] Add a JIT Inductor unit test following #146293
|
desertfire
|
closed
|
[
"Merged",
"ciflow/trunk",
"topic: not user facing",
"module: inductor",
"ciflow/inductor"
] | 3
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #146529
Summary: To follow up https://github.com/pytorch/pytorch/pull/146293, add a JIT Inductor unit test. Other Triton templates may need similar fixes.
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
| true
|
2,834,077,970
|
Add torch.func.debug_unwrap
|
zou3519
|
closed
|
[
"Merged",
"ciflow/trunk",
"release notes: torch.func"
] | 3
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #146528
Use it to unwrap any functorch-wrapped tensor. I don't recommend using
the output in a program since it breaks the semantics of the transforms,
but it seems useful for debugging.
I will note that some people have wanted to get intermediate values out
of, e.g., a grad transform, so this might be a way to do that...
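A minimal usage sketch (my own illustration, assuming `torch.func.debug_unwrap` takes a functorch-wrapped tensor and returns the underlying plain tensor):
```python
import torch

captured = {}

def f(x):
    y = x.sin()
    # Inside the grad transform, y is a functorch-wrapped tensor; unwrap a
    # copy purely for inspection, not for further computation.
    captured["y"] = torch.func.debug_unwrap(y)
    return y.sum()

grad_x = torch.func.grad(f)(torch.randn(3))
print(captured["y"])  # plain tensor, safe to print while debugging
```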
Test Plan:
- tests
| true
|
2,834,050,325
|
[dynamo][fullgraph] Do not skip frame with fullgraph=True
|
anijain2305
|
closed
|
[
"oncall: distributed",
"Merged",
"ciflow/trunk",
"topic: not user facing",
"module: dynamo",
"ciflow/inductor",
"keep-going"
] | 3
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #146553
* #146550
* __->__ #146527
Earlier, if there were no ops in the graph, fullgraph=True would also fall back to eager. This hides issues in testing, where we silently fall back to eager and do not test the optimized bytecode. As can be seen in the PR, I had to fix several tests when I forced use of the optimized bytecode in the absence of a graph. A few failing tests will be fixed in follow-up PRs.
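A hedged illustration of the behavior being changed (my own example, not from the PR): a function with no tensor ops produces an empty graph, and previously fullgraph=True would still silently run it in eager.
```python
import torch

def fn(x):
    return x.shape[0] + 1  # pure Python arithmetic, no tensor ops in the graph

# Before this change, the frame was skipped and the original bytecode ran
# silently; with it, the optimized bytecode is exercised even without a graph.
compiled = torch.compile(fn, fullgraph=True)
print(compiled(torch.randn(3)))  # 4
```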
cc @H-Huang @awgu @kwen2501 @wanchaol @fegin @fduwjj @wz337 @wconstab @d4l3k @c-p-i-o @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @chauhang @amjames
| true
|
2,834,040,218
|
[inductor] add units to estimated runtime log
|
xmfan
|
closed
|
[
"Stale",
"module: inductor",
"ciflow/inductor"
] | 6
|
MEMBER
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #146526
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @amjames @desertfire @chauhang @aakhundov
| true
|
2,834,025,262
|
[dynamo] improved graph break messages for some common graph break sites [1/N]
|
williamwen42
|
closed
|
[
"Merged",
"ciflow/trunk",
"module: dynamo",
"ciflow/inductor",
"release notes: dynamo",
"module: compile ux"
] | 9
|
MEMBER
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #147429
* #147385
* __->__ #146525
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @chauhang @amjames
| true
|
2,834,021,333
|
AOTI packaged model fails with generic error when run in for loop but succeeds on individual sample
|
rbavery
|
open
|
[
"needs reproduction",
"triaged",
"oncall: pt2",
"oncall: export",
"module: aotinductor"
] | 9
|
NONE
|
### 🐛 Describe the bug
I have a model that I used to export and compile with an early API of AOT Inductor on `2.3.0+cu121`. I used to be able to run the model in a loop on batches of size `torch.randn((10, 9 * 4, 1024, 1024))`. Now I can't with torch 2.6.0 and a newly compiled AOTI model.
MRE (can provide the model if needed)
```python
import torch
model_path = "tests/test_fixtures/satlas_solar_1024.pt2"
model = torch._inductor.aoti_load_package(model_path)
device = torch.device("cuda" + ":" + str(torch.cuda.current_device()))
torch.cuda.set_device(device)
test_arr = torch.randn((10, 9 * 4, 1024, 1024), device=device)
for i in range(2):
print(i)
model(test_arr)
```
Below is the error
Sidenote: this final line of the stack trace appears to be the same if the dtype or input shape is incorrect, which makes it confusing to debug, catch, and handle. See https://github.com/pytorch/pytorch/issues/138462 https://github.com/pytorch/pytorch/issues/141115
```
RuntimeError: run_func_( container_handle_, input_handles.data(), input_handles.size(), output_handles.data(), output_handles.size(), reinterpret_cast<AOTInductorStreamHandle>(stream_handle), proxy_executor_handle_) API call failed at /pytorch/torch/csrc/inductor/aoti_runner/model_container_runner.cpp, line 107
```
I would expect the inference result and any intermediates to get deallocated quickly so that the next inference succeeds.
If I decrease the batch size by one, the problem goes away, but only for a low number of batches. If I increase the number of batches, I eventually need to decrease the batch size further, which I think indicates a memory leak.
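A hypothetical diagnostic sketch (not part of the report) to check whether memory from the first iteration is being retained; names mirror the MRE above:
```python
import torch

model = torch._inductor.aoti_load_package("tests/test_fixtures/satlas_solar_1024.pt2")
test_arr = torch.randn((10, 9 * 4, 1024, 1024), device="cuda")
for i in range(2):
    out = model(test_arr)
    del out  # drop the output before the next iteration
    print(i,
          f"{torch.cuda.memory_allocated() / 2**30:.2f} GiB allocated,",
          f"{torch.cuda.memory_reserved() / 2**30:.2f} GiB reserved")
```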
### Error logs
```
(wherobots) wherobots@cd0c9b3187a0:~$ python tests/mre.py
0
1
[E205 21:41:28.167171125 c_shim_cuda.cpp:776] Exception in aoti_torch: CUDA out of memory. Tried to allocate 5.00 GiB. GPU 0 has a total capacity of 23.68 GiB of which 4.54 GiB is free. Process 511887 has 19.00 GiB memory in use. Of the allocated memory 11.92 GiB is allocated by PyTorch, and 6.44 GiB is reserved by PyTorch but unallocated. If reserved but unallocated memory is large try setting PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True to avoid fragmentation. See documentation for Memory Management (https://pytorch.org/docs/stable/notes/cuda.html#environment-variables)
Exception raised from malloc at /pytorch/c10/cuda/CUDACachingAllocator.cpp:1338 (most recent call first):
frame #0: c10::Error::Error(c10::SourceLocation, std::string) + 0x96 (0x7082c5af51b6 in /opt/conda/envs/wherobots/lib/python3.11/site-packages/torch/lib/libc10.so)
frame #1: <unknown function> + 0x3d95c (0x7082c5bc695c in /opt/conda/envs/wherobots/lib/python3.11/site-packages/torch/lib/libc10_cuda.so)
frame #2: <unknown function> + 0x3dd07 (0x7082c5bc6d07 in /opt/conda/envs/wherobots/lib/python3.11/site-packages/torch/lib/libc10_cuda.so)
frame #3: <unknown function> + 0x3e24f (0x7082c5bc724f in /opt/conda/envs/wherobots/lib/python3.11/site-packages/torch/lib/libc10_cuda.so)
frame #4: <unknown function> + 0x1842ff9 (0x7082a8a07ff9 in /opt/conda/envs/wherobots/lib/python3.11/site-packages/torch/lib/libtorch_cpu.so)
frame #5: at::detail::empty_generic(c10::ArrayRef<long>, c10::Allocator*, c10::DispatchKeySet, c10::ScalarType, std::optional<c10::MemoryFormat>) + 0x14 (0x7082a8a02eb4 in /opt/conda/envs/wherobots/lib/python3.11/site-packages/torch/lib/libtorch_cpu.so)
frame #6: at::detail::empty_cuda(c10::ArrayRef<long>, c10::ScalarType, std::optional<c10::Device>, std::optional<c10::MemoryFormat>) + 0x12e (0x70827306578e in /opt/conda/envs/wherobots/lib/python3.11/site-packages/torch/lib/libtorch_cuda.so)
frame #7: at::detail::empty_cuda(c10::ArrayRef<long>, std::optional<c10::ScalarType>, std::optional<c10::Layout>, std::optional<c10::Device>, std::optional<bool>, std::optional<c10::MemoryFormat>) + 0x55 (0x708273065dc5 in /opt/conda/envs/wherobots/lib/python3.11/site-packages/torch/lib/libtorch_cuda.so)
frame #8: at::detail::empty_cuda(c10::ArrayRef<long>, c10::TensorOptions const&) + 0xe8 (0x708273065ee8 in /opt/conda/envs/wherobots/lib/python3.11/site-packages/torch/lib/libtorch_cuda.so)
frame #9: at::native::cudnn_convolution(at::Tensor const&, at::Tensor const&, c10::ArrayRef<long>, c10::ArrayRef<long>, c10::ArrayRef<long>, long, bool, bool, bool) + 0x473 (0x70827308e973 in /opt/conda/envs/wherobots/lib/python3.11/site-packages/torch/lib/libtorch_cuda.so)
frame #10: <unknown function> + 0x3674cee (0x708275691cee in /opt/conda/envs/wherobots/lib/python3.11/site-packages/torch/lib/libtorch_cuda.so)
frame #11: <unknown function> + 0x368a791 (0x7082756a7791 in /opt/conda/envs/wherobots/lib/python3.11/site-packages/torch/lib/libtorch_cuda.so)
frame #12: at::_ops::cudnn_convolution::call(at::Tensor const&, at::Tensor const&, c10::ArrayRef<c10::SymInt>, c10::ArrayRef<c10::SymInt>, c10::ArrayRef<c10::SymInt>, c10::SymInt, bool, bool, bool) + 0x31c (0x7082a9c6dd0c in /opt/conda/envs/wherobots/lib/python3.11/site-packages/torch/lib/libtorch_cpu.so)
frame #13: at::native::_convolution(at::Tensor const&, at::Tensor const&, std::optional<at::Tensor> const&, c10::ArrayRef<long>, c10::ArrayRef<long>, c10::ArrayRef<long>, bool, c10::ArrayRef<long>, long, bool, bool, bool, bool) + 0x1a73 (0x7082a8e416f3 in /opt/conda/envs/wherobots/lib/python3.11/site-packages/torch/lib/libtorch_cpu.so)
frame #14: <unknown function> + 0x2e8a85f (0x7082aa04f85f in /opt/conda/envs/wherobots/lib/python3.11/site-packages/torch/lib/libtorch_cpu.so)
frame #15: <unknown function> + 0x2e9683c (0x7082aa05b83c in /opt/conda/envs/wherobots/lib/python3.11/site-packages/torch/lib/libtorch_cpu.so)
frame #16: at::_ops::_convolution::redispatch(c10::DispatchKeySet, at::Tensor const&, at::Tensor const&, std::optional<at::Tensor> const&, c10::ArrayRef<c10::SymInt>, c10::ArrayRef<c10::SymInt>, c10::ArrayRef<c10::SymInt>, bool, c10::ArrayRef<c10::SymInt>, c10::SymInt, bool, bool, bool, bool) + 0x1d2 (0x7082a971d3a2 in /opt/conda/envs/wherobots/lib/python3.11/site-packages/torch/lib/libtorch_cpu.so)
frame #17: <unknown function> + 0x4cc47fa (0x7082abe897fa in /opt/conda/envs/wherobots/lib/python3.11/site-packages/torch/lib/libtorch_cpu.so)
frame #18: <unknown function> + 0x4cc578d (0x7082abe8a78d in /opt/conda/envs/wherobots/lib/python3.11/site-packages/torch/lib/libtorch_cpu.so)
frame #19: at::_ops::_convolution::call(at::Tensor const&, at::Tensor const&, std::optional<at::Tensor> const&, c10::ArrayRef<c10::SymInt>, c10::ArrayRef<c10::SymInt>, c10::ArrayRef<c10::SymInt>, bool, c10::ArrayRef<c10::SymInt>, c10::SymInt, bool, bool, bool, bool) + 0x3ac (0x7082a97544dc in /opt/conda/envs/wherobots/lib/python3.11/site-packages/torch/lib/libtorch_cpu.so)
frame #20: at::native::convolution(at::Tensor const&, at::Tensor const&, std::optional<at::Tensor> const&, c10::ArrayRef<long>, c10::ArrayRef<long>, c10::ArrayRef<long>, bool, c10::ArrayRef<long>, long) + 0x3b8 (0x7082a8e343f8 in /opt/conda/envs/wherobots/lib/python3.11/site-packages/torch/lib/libtorch_cpu.so)
frame #21: <unknown function> + 0x2e89d7c (0x7082aa04ed7c in /opt/conda/envs/wherobots/lib/python3.11/site-packages/torch/lib/libtorch_cpu.so)
frame #22: at::compositeexplicitautograd::convolution_symint(at::Tensor const&, at::Tensor const&, std::optional<at::Tensor> const&, c10::ArrayRef<c10::SymInt>, c10::ArrayRef<c10::SymInt>, c10::ArrayRef<c10::SymInt>, bool, c10::ArrayRef<c10::SymInt>, c10::SymInt) + 0x6b (0x7082aa06139b in /opt/conda/envs/wherobots/lib/python3.11/site-packages/torch/lib/libtorch_cpu.so)
frame #23: aoti_torch_cuda_convolution + 0x1d8 (0x7082732a4cf8 in /opt/conda/envs/wherobots/lib/python3.11/site-packages/torch/lib/libtorch_cuda.so)
frame #24: torch::aot_inductor::AOTInductorModel::run_impl(AtenTensorOpaque**, AtenTensorOpaque**, CUstream_st*, AOTIProxyExecutorOpaque*) + 0x8af17 (0x7081d46aec97 in /tmp/W0tNth/data/aotinductor/model/csabojh45sbvcksqet76ss6ckkmw362qctnbe4ipwuqzht3bweqi.so)
frame #25: torch::aot_inductor::AOTInductorModelContainer::run(AtenTensorOpaque**, AtenTensorOpaque**, CUstream_st*, AOTIProxyExecutorOpaque*) + 0x1bc (0x7081d47155cc in /tmp/W0tNth/data/aotinductor/model/csabojh45sbvcksqet76ss6ckkmw362qctnbe4ipwuqzht3bweqi.so)
frame #26: AOTInductorModelContainerRun + 0x80 (0x7081d46f3320 in /tmp/W0tNth/data/aotinductor/model/csabojh45sbvcksqet76ss6ckkmw362qctnbe4ipwuqzht3bweqi.so)
frame #27: torch::inductor::AOTIModelContainerRunner::run(std::vector<at::Tensor, std::allocator<at::Tensor> > const&, void*) + 0xb5 (0x7082acaf8875 in /opt/conda/envs/wherobots/lib/python3.11/site-packages/torch/lib/libtorch_cpu.so)
frame #28: torch::inductor::AOTIModelContainerRunnerCuda::run(std::vector<at::Tensor, std::allocator<at::Tensor> > const&, void*) + 0x1e (0x708275a8149e in /opt/conda/envs/wherobots/lib/python3.11/site-packages/torch/lib/libtorch_cuda.so)
frame #29: torch::inductor::AOTIModelPackageLoader::run(std::vector<at::Tensor, std::allocator<at::Tensor> > const&, void*) + 0xf (0x7082acae710f in /opt/conda/envs/wherobots/lib/python3.11/site-packages/torch/lib/libtorch_cpu.so)
frame #30: <unknown function> + 0x9efc3d (0x7082bcc70c3d in /opt/conda/envs/wherobots/lib/python3.11/site-packages/torch/lib/libtorch_python.so)
frame #31: <unknown function> + 0x51a017 (0x7082bc79b017 in /opt/conda/envs/wherobots/lib/python3.11/site-packages/torch/lib/libtorch_python.so)
<omitting python frames>
frame #50: <unknown function> + 0x29d90 (0x7082c64fbd90 in /usr/lib/x86_64-linux-gnu/libc.so.6)
frame #51: __libc_start_main + 0x80 (0x7082c64fbe40 in /usr/lib/x86_64-linux-gnu/libc.so.6)
Error: aoti_torch_cuda_convolution(buf3040, heads_0_layers_0_0_weight, 0, var_array_52, 2, var_array_53, 2, var_array_54, 2, 0, var_array_55, 2, 1L, &buf3041_handle) API call failed at /tmp/torchinductor_rave/c4pm2ehn5ghyaxiv254gzv2odvkuskhwaeqgaqsxhqksjjrns3p7/csabojh45sbvcksqet76ss6ckkmw362qctnbe4ipwuqzht3bweqi.cpp, line 29163
Traceback (most recent call last):
File "/home/wherobots/tests/mre.py", line 10, in <module>
model(test_arr)
File "/opt/conda/envs/wherobots/lib/python3.11/site-packages/torch/_inductor/package/package.py", line 244, in __call__
flat_outputs = self.loader.run(flat_inputs) # type: ignore[attr-defined]
^^^^^^^^^^^^^^^^^^^^^^^^^^^^
RuntimeError: run_func_( container_handle_, input_handles.data(), input_handles.size(), output_handles.data(), output_handles.size(), reinterpret_cast<AOTInductorStreamHandle>(stream_handle), proxy_executor_handle_) API call failed at /pytorch/torch/csrc/inductor/aoti_runner/model_container_runner.cpp, line 107
```
### Versions
```
PyTorch version: 2.6.0+cu124
Is debug build: False
CUDA used to build PyTorch: 12.4
ROCM used to build PyTorch: N/A
OS: Ubuntu 22.04.5 LTS (x86_64)
GCC version: (Ubuntu 11.4.0-1ubuntu1~22.04) 11.4.0
Clang version: Could not collect
CMake version: Could not collect
Libc version: glibc-2.35
Python version: 3.11.11 | packaged by conda-forge | (main, Dec 5 2024, 14:17:24) [GCC 13.3.0] (64-bit runtime)
Python platform: Linux-6.8.0-52-generic-x86_64-with-glibc2.35
Is CUDA available: True
CUDA runtime version: Could not collect
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration: GPU 0: NVIDIA GeForce RTX 3090
Nvidia driver version: 550.120
cuDNN version: Could not collect
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 39 bits physical, 48 bits virtual
Byte Order: Little Endian
CPU(s): 12
On-line CPU(s) list: 0-11
Vendor ID: GenuineIntel
Model name: Intel(R) Core(TM) i7-8700 CPU @ 3.20GHz
CPU family: 6
Model: 158
Thread(s) per core: 2
Core(s) per socket: 6
Socket(s): 1
Stepping: 10
CPU max MHz: 4600.0000
CPU min MHz: 800.0000
BogoMIPS: 6399.96
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc art arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc cpuid aperfmperf pni pclmulqdq dtes64 monitor ds_cpl vmx smx est tm2 ssse3 sdbg fma cx16 xtpr pdcm pcid sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm 3dnowprefetch cpuid_fault pti ssbd ibrs ibpb stibp tpr_shadow flexpriority ept vpid ept_ad fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid mpx rdseed adx smap clflushopt intel_pt xsaveopt xsavec xgetbv1 xsaves dtherm ida arat pln pts hwp hwp_notify hwp_act_window hwp_epp vnmi md_clear flush_l1d arch_capabilities
Virtualization: VT-x
L1d cache: 192 KiB (6 instances)
L1i cache: 192 KiB (6 instances)
L2 cache: 1.5 MiB (6 instances)
L3 cache: 12 MiB (1 instance)
NUMA node(s): 1
NUMA node0 CPU(s): 0-11
Vulnerability Gather data sampling: Mitigation; Microcode
Vulnerability Itlb multihit: KVM: Mitigation: VMX disabled
Vulnerability L1tf: Mitigation; PTE Inversion; VMX conditional cache flushes, SMT vulnerable
Vulnerability Mds: Mitigation; Clear CPU buffers; SMT vulnerable
Vulnerability Meltdown: Mitigation; PTI
Vulnerability Mmio stale data: Mitigation; Clear CPU buffers; SMT vulnerable
Vulnerability Reg file data sampling: Not affected
Vulnerability Retbleed: Mitigation; IBRS
Vulnerability Spec rstack overflow: Not affected
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; IBRS; IBPB conditional; STIBP conditional; RSB filling; PBRSB-eIBRS Not affected; BHI Not affected
Vulnerability Srbds: Mitigation; Microcode
Vulnerability Tsx async abort: Mitigation; TSX disabled
Versions of relevant libraries:
[pip3] mypy==1.14.1
[pip3] mypy-extensions==1.0.0
[pip3] numpy==1.26.4
[pip3] numpy-groupies==0.11.2
[pip3] nvidia-cublas-cu12==12.4.5.8
[pip3] nvidia-cuda-cupti-cu12==12.4.127
[pip3] nvidia-cuda-nvrtc-cu12==12.4.127
[pip3] nvidia-cuda-runtime-cu12==12.4.127
[pip3] nvidia-cudnn-cu12==9.1.0.70
[pip3] nvidia-cufft-cu12==11.2.1.3
[pip3] nvidia-curand-cu12==10.3.5.147
[pip3] nvidia-cusolver-cu12==11.6.1.9
[pip3] nvidia-cusparse-cu12==12.3.1.170
[pip3] nvidia-cusparselt-cu12==0.6.2
[pip3] nvidia-nccl-cu12==2.21.5
[pip3] nvidia-nvjitlink-cu12==12.4.127
[pip3] nvidia-nvtx-cu12==12.4.127
[pip3] pytorch-triton==3.0.0
[pip3] torch==2.6.0+cu124
[pip3] torchvision==0.21.0+cu124
[pip3] triton==3.2.0
[conda] numpy 1.26.4 pypi_0 pypi
[conda] numpy-groupies 0.11.2 pypi_0 pypi
[conda] nvidia-cublas-cu12 12.4.5.8 pypi_0 pypi
[conda] nvidia-cuda-cupti-cu12 12.4.127 pypi_0 pypi
[conda] nvidia-cuda-nvrtc-cu12 12.4.127 pypi_0 pypi
[conda] nvidia-cuda-runtime-cu12 12.4.127 pypi_0 pypi
[conda] nvidia-cudnn-cu12 9.1.0.70 pypi_0 pypi
[conda] nvidia-cufft-cu12 11.2.1.3 pypi_0 pypi
[conda] nvidia-curand-cu12 10.3.5.147 pypi_0 pypi
[conda] nvidia-cusolver-cu12 11.6.1.9 pypi_0 pypi
[conda] nvidia-cusparse-cu12 12.3.1.170 pypi_0 pypi
[conda] nvidia-cusparselt-cu12 0.6.2 pypi_0 pypi
[conda] nvidia-nccl-cu12 2.21.5 pypi_0 pypi
[conda] nvidia-nvjitlink-cu12 12.4.127 pypi_0 pypi
[conda] nvidia-nvtx-cu12 12.4.127 pypi_0 pypi
[conda] pytorch-triton 3.0.0 pypi_0 pypi
[conda] torch 2.6.0+cu124 pypi_0 pypi
[conda] torchvision 0.21.0+cu124 pypi_0 pypi
[conda] triton 3.2.0 pypi_0 pypi
```
cc @chauhang @penguinwu @avikchaudhuri @gmagogsfm @zhxchen17 @tugsbayasgalan @angelayi @suo @ydwu4 @desertfire @chenyang78 @yushangdi @benjaminglass1 @ezyang @gchanan @zou3519 @kadeng @msaroufim
| true
|
2,834,016,328
|
DISABLED test_aoti_eager_support_out_dynamic_shapes_cuda (__main__.DynamicShapesGPUTests)
|
pytorch-bot[bot]
|
closed
|
[
"module: rocm",
"triaged",
"module: flaky-tests",
"skipped",
"oncall: pt2",
"module: inductor"
] | 3
|
NONE
|
Platforms: rocm
This test was disabled because it is failing in CI. See [recent examples](https://hud.pytorch.org/flakytest?name=test_aoti_eager_support_out_dynamic_shapes_cuda&suite=DynamicShapesGPUTests&limit=100) and the most recent trunk [workflow logs](https://github.com/pytorch/pytorch/runs/36715236035).
Over the past 3 hours, it has been determined flaky in 6 workflow(s) with 6 failures and 6 successes.
**Debugging instructions (after clicking on the recent samples link):**
DO NOT ASSUME THINGS ARE OKAY IF THE CI IS GREEN. We now shield flaky tests from developers so CI will thus be green but it will be harder to parse the logs.
To find relevant log snippets:
1. Click on the workflow logs linked above
2. Click on the Test step of the job so that it is expanded. Otherwise, the grepping will not work.
3. Grep for `test_aoti_eager_support_out_dynamic_shapes_cuda`
4. There should be several instances run (as flaky tests are rerun in CI) from which you can study the logs.
<details><summary>Sample error message</summary>
```
Traceback (most recent call last):
File "/var/lib/jenkins/pytorch/test/inductor/test_torchinductor.py", line 1002, in test_aoti_eager_support_out
res_tensor = torch.clamp(
RuntimeError: create_func_( &container_handle_, num_models, device_str.c_str(), cubin_dir.empty() ? nullptr : cubin_dir.c_str()) API call failed at /var/lib/jenkins/workspace/torch/csrc/inductor/aoti_runner/model_container_runner.cpp, line 81
To execute this test, run the following from the base repo dir:
PYTORCH_TEST_WITH_ROCM=1 python test/inductor/test_torchinductor_dynamic_shapes.py DynamicShapesGPUTests.test_aoti_eager_support_out_dynamic_shapes_cuda
This message can be suppressed by setting PYTORCH_PRINT_REPRO_ON_FAILURE=0
```
</details>
Test file path: `inductor/test_torchinductor_dynamic_shapes.py`
cc @jeffdaily @sunway513 @jithunnair-amd @pruthvistony @ROCmSupport @dllehr-amd @jataylo @hongxiayang @naromero77amd @clee2000 @wdvr @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @amjames @desertfire @chauhang @aakhundov
| true
|
2,833,994,440
|
[BE][EZ][Metal] Do not pass tensor length as arg
|
malfet
|
closed
|
[
"Merged",
"topic: not user facing",
"release notes: mps",
"ciflow/mps"
] | 3
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #146547
* __->__ #146522
As all devices capable of running Metal 2 support nonuniform threadgroup sizes, passing the tensor length as an argument is unnecessary; see https://developer.apple.com/metal/Metal-Feature-Set-Tables.pdf for more detail.
| true
|
2,833,994,355
|
[BE][EZ][Metal] Mark constant inputs as constant
|
malfet
|
closed
|
[
"Merged",
"topic: not user facing",
"release notes: mps",
"ciflow/mps"
] | 1
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #146522
* __->__ #146521
| true
|
2,833,982,192
|
CUDA CachingHostAllocator tracks registrations to call correct free
|
jeffdaily
|
closed
|
[
"triaged",
"open source",
"Merged",
"ciflow/trunk",
"topic: not user facing",
"rocm",
"rocm priority",
"ciflow/rocm",
"ciflow/rocm-mi300"
] | 11
|
COLLABORATOR
|
Allocations made with cudaHostRegister should be freed with the corresponding cudaHostUnregister, and similarly for cudaHostAlloc / cudaFreeHost. In test_cuda.py, the allocator config changes from test to test, but the cache is not emptied prior to changing the config. This results in the wrong free being called later. Unit test sharding happens to avoid this issue, but running test_cuda.py as a single shard will fail.
The following reproducer demonstrates the problem.
```C++
#include <cassert>        // assert
#include <cstdlib>        // std::free
#include <cuda_runtime.h> // cudaHostAlloc, cudaHostUnregister

int main(int argc, char **argv)
{
void *ptr;
assert(cudaSuccess == cudaHostAlloc(&ptr, 1024, cudaHostAllocDefault));
assert(cudaSuccess == cudaHostUnregister(ptr));
std::free(ptr);
return 0;
}
```
The above code results in the following failure because the ptr is an invalid argument to cudaHostUnregister.
```
a.out: test.cpp:53: int main(int, char**): Assertion `cudaSuccess == cudaHostUnregister(ptr)' failed.
```
| true
|
2,833,953,971
|
Redundant
|
drisspg
|
closed
|
[] | 0
|
CONTRIBUTOR
|
# Summary
TODO
| true
|
2,833,952,346
|
Triton pin update for PyTorch 2.7 / Triton 3.3: Upgrading PyTorch-Triton to a version that Supports Blackwell
|
drisspg
|
closed
|
[
"module: cuda",
"triaged"
] | 19
|
CONTRIBUTOR
|
# Upgrading PyTorch-Triton to Support Blackwell
Triton Bump PR: https://github.com/pytorch/pytorch/pull/148705
Test PR: https://github.com/pytorch/pytorch/pull/147320
This PR bumps the pin to one closer to main and is used for uncovering 🪲
Tracker Board: https://github.com/orgs/pytorch/projects/94/views/1
## Tracker
- [ ] AOT Inductor / cpp wrapper
+ [ ] NVIDIA/AMD @YUNQIUGUO @jataylo https://github.com/pytorch/pytorch/issues/147375
+ [x] XPU @anmyachev https://github.com/pytorch/pytorch/pull/146917
+ [x] https://github.com/pytorch/pytorch/issues/148111
- [x] AMD-specific
+ [x] https://github.com/pytorch/pytorch/issues/147377
+ [x] https://github.com/pytorch/pytorch/issues/147378
+ [ ] (lower pri) [Triton upstream] [Inductor] [ROCm] Cooperative reduction accuracy issues #147735
+ [ ] [Triton upstream] [Inductor] [ROCm] OpInfo quantile UT accuracy issues #147736
+ [ ] (Pending confirmation of @alexbaden's fix https://github.com/pytorch/pytorch/pull/147395) [Triton upstream] [Inductor] [ROCm] UT failures "Cannot bitcast data-type of size" #147737
+ [ ] @YUNQIUGUO [Triton upstream] [Inductor] [ROCm] cpp_wrapper segfaults #147734
- [x] Blackwell-specific
+ [ ] https://github.com/pytorch/pytorch/issues/147478
- [x] FlexAttention/Decoding
+ [x] @drisspg https://github.com/pytorch/pytorch/issues/147373
+ [ ] [HAS FIX] https://github.com/pytorch/pytorch/issues/147468 cc @htyu
- [ ] @davidberard98 https://github.com/pytorch/pytorch/issues/144103
- [ ] Do a sweep through Triton commits to look for BC-breaking ones:
+ [ ] https://github.com/triton-lang/triton/pull/4955
+ [ ] https://github.com/triton-lang/triton/pull/5637
+ [ ] https://github.com/triton-lang/triton/pull/5926
+ ??? TODO look through more commits
- [ ] Nice to haves
+ [ ] Update FLOAT32_PRECISION in flex attention for MI300 to enable TF32
## Related Issues
- Main tracking issue for Blackwell support: #145949
- pytorch/pytorch#144103
## Overview
### Blackwell Support Status in Triton
Blackwell support was added to Triton mainline via [triton-lang/triton#5724](https://github.com/triton-lang/triton/pull/5724), including:
- Support for 5th generation Tensor Core
- Modeling and support of Tensor Memory
- Native support for microscaling formats mxfp4 and mxfp8
- Improvements to the software pipeliner for Tensor Cores and Tensor memory
### Options
1. **Cherry-pick Approach**:
- Cherry-pick essential Blackwell functionality (sm_100/120 support)
- Faster path to basic Blackwell support
- Defers MM performance optimizations until needed.
2. **Full Upgrade to Triton Main**:
- Includes all Blackwell optimizations
- Requires significant integration work
## Current Status
- Current PyTorch-Triton branch: 20 commits ahead, 524 commits behind main
- Current version: 3.2
- Blocking issue: triton-lang/triton#5512 removed `AttrsDescriptor` which impacts TorchInductor-generated code
## Timeline
WIP
CC @eellison @davidberard98 @ptrblck @atalman
cc @ptrblck @msaroufim @eqy
| true
|
2,833,946,703
|
Debug OIDC role on #139760
|
huydhn
|
closed
|
[
"Stale",
"ciflow/trunk",
"release notes: releng",
"test-config/default"
] | 3
|
CONTRIBUTOR
|
No need to review
| true
|
2,833,815,983
|
[torch] fix exception types in custom class magic setattr/getattr
|
suo
|
closed
|
[
"oncall: jit",
"fb-exported",
"Merged",
"ciflow/trunk",
"release notes: jit"
] | 6
|
MEMBER
|
Summary:
`c10::AttributeError` is not automatically converted to a Python AttributeError; that needs some special macros (e.g. `HANDLE_TH_ERRORS`).
Some Python functions like `hasattr` rely on the type of the thrown exception being correct.
We don't need the full generality of those macros, so just do a targeted error type conversion here.
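A small plain-Python illustration of why the exception type matters (my own example, not the PR's code): `hasattr` treats only `AttributeError` as "attribute missing"; any other exception type propagates.
```python
class Good:
    def __getattr__(self, name):
        raise AttributeError(name)  # hasattr() correctly returns False

class Bad:
    def __getattr__(self, name):
        raise RuntimeError(name)    # wrong type: hasattr() propagates the error

print(hasattr(Good(), "missing"))   # False
print(hasattr(Bad(), "missing"))    # raises RuntimeError
```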
Test Plan: added unit test
Differential Revision: D69197217
cc @EikanWang @jgong5 @wenzhe-nrv @sanchitintel
| true
|
2,833,811,148
|
add record function end callback in hccl
|
fenypatel99
|
closed
|
[
"oncall: distributed",
"fb-exported",
"release notes: distributed (c10d)"
] | 6
|
MEMBER
|
Differential Revision: D69191139
cc @H-Huang @awgu @kwen2501 @wanchaol @fegin @fduwjj @wz337 @wconstab @d4l3k @c-p-i-o
| true
|
2,833,802,312
|
[DTensor][Test] Create a simple unit test for tensordot
|
wz337
|
closed
|
[
"oncall: distributed",
"Merged",
"Reverted",
"ciflow/trunk",
"topic: not user facing",
"ci-no-td"
] | 15
|
CONTRIBUTOR
|
Fixes #ISSUE_NUMBER
The dims and shapes of the tensors come from a specific Shampoo use case. We want to create a unit test for it to make sure there are no regressions.
cc @H-Huang @awgu @kwen2501 @wanchaol @fegin @fduwjj @wconstab @d4l3k @c-p-i-o
| true
|
2,833,754,207
|
[dynamo] check for incompatible configs
|
xmfan
|
closed
|
[
"Merged",
"Reverted",
"ciflow/trunk",
"module: inductor",
"module: dynamo",
"ciflow/inductor",
"release notes: dynamo",
"ci-no-td"
] | 18
|
MEMBER
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #146513
internal: https://fb.workplace.com/groups/1075192433118967/permalink/1599802033991335/
Assuming flags don't change during compilation, we shouldn't allow incompatible configs to be set at torch.compile wrap time.
Not in this PR: for flags that need to change during compilation, we'd have to be strict about where they can be used in the compile lifecycle.
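A minimal sketch of the idea with hypothetical flag names (not the actual config keys checked by the PR):
```python
def validate_configs(config: dict) -> None:
    # Hypothetical pair of mutually exclusive flags, checked once at wrap time.
    if config.get("suppress_errors") and config.get("error_on_recompile"):
        raise RuntimeError(
            "Incompatible configs at torch.compile wrap time: "
            "suppress_errors cannot be combined with error_on_recompile"
        )

validate_configs({"suppress_errors": True, "error_on_recompile": False})  # ok
```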
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @amjames @desertfire @chauhang @aakhundov
| true
|
2,833,715,989
|
Fix `DispatchStub.cpp` compilation for gcc 14
|
anmyachev
|
closed
|
[
"triaged",
"open source",
"module: dispatch",
"Merged",
"ciflow/trunk",
"topic: not user facing"
] | 21
|
COLLABORATOR
|
Otherwise I get the following error:
```bash
.../intel-xpu-backend-for-triton/pytorch/aten/src/ATen/native/DispatchStub.cpp:152:18: error: no matching function for call to ‘find(std::array<c10::DeviceType, 7>::const_iterator, std::array<c10::DeviceType, 7>::const_iterator, const c10::DeviceType&)’
152 | if (std::find(supported_devices.begin(), supported_devices.end(), device_type) == supported_devices.end()) {
| ~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
In file included from /usr/include/c++/14/bits/locale_facets.h:48,
from /usr/include/c++/14/bits/basic_ios.h:37,
from /usr/include/c++/14/ios:46,
from /usr/include/c++/14/ostream:40,
from .../intel-xpu-backend-for-triton/pytorch/c10/core/DeviceType.h:13,
from .../intel-xpu-backend-for-triton/pytorch/aten/src/ATen/native/DispatchStub.h:3,
from .../intel-xpu-backend-for-triton/pytorch/aten/src/ATen/native/DispatchStub.cpp:2:
/usr/include/c++/14/bits/streambuf_iterator.h:435:5: note: candidate: ‘template<class _CharT2> typename __gnu_cxx::__enable_if<std::__is_char<_CharT2>::__value, std::istreambuf_iterator<_CharT, std::char_traits<_CharT> > >::__type std::find(istreambuf_iterator<_CharT, char_traits<_CharT> >, istreambuf_iterator<_CharT, char_traits<_CharT> >, const _CharT2&)’
435 | find(istreambuf_iterator<_CharT> __first,
| ^~~~
/usr/include/c++/14/bits/streambuf_iterator.h:435:5: note: template argument deduction/substitution failed:
.../intel-xpu-backend-for-triton/pytorch/aten/src/ATen/native/DispatchStub.cpp:152:18: note: mismatched types ‘std::istreambuf_iterator<_CharT, std::char_traits<_CharT> >’ and ‘const std::array<c10::DeviceType, 7>::value_type*’ {aka ‘const c10::DeviceType*’}
152 | if (std::find(supported_devices.begin(), supported_devices.end(), device_type) == supported_devices.end()) {
| ~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
```
| true
|
2,833,707,842
|
Dynamo rewrite if ... Error to torch._check_with
|
angelayi
|
open
|
[
"feature",
"triaged",
"oncall: pt2",
"module: dynamic shapes",
"module: dynamo"
] | 2
|
CONTRIBUTOR
|
### 🚀 The feature, motivation and pitch
There are a couple of cases in [modeling code](https://fburl.com/code/f25aujrk) where we see something like:
```python
def test_assert_dde(self):
class M(torch.nn.Module):
def forward(self, x):
b = x.item() # unbacked symint
if b < 10:
raise RuntimeError("bad")
return b + b
ep = torch.export.export(M(), (torch.tensor(14),))
```
This runs into a GuardOnDataDependentSymNode error due to the `if b < 10`.
A simple rewrite is to change this to a `torch._check_with`:
```python
def test_assert_dde(self):
class M(torch.nn.Module):
def forward(self, x):
b = x.item()
# if b < 10:
# raise RuntimeError("bad")
torch._check_with(RuntimeError, b >= 10, lambda: "bad")
return b + b
ep = torch.export.export(M(), (torch.tensor(14),), strict=False)
```
The `torch._check_with` call then becomes a runtime check:
```
graph():
%x : [num_users=1] = placeholder[target=x]
%item : [num_users=2] = call_function[target=torch.ops.aten.item.default](args = (%x,), kwargs = {})
%ge_1 : [num_users=1] = call_function[target=operator.ge](args = (%item, 10), kwargs = {})
%_assert_scalar_default : [num_users=0] = call_function[target=torch.ops.aten._assert_scalar.default](args = (%ge_1, Runtime assertion failed for expression u0 >= 10 on node 'ge_1'), kwargs = {})
%add : [num_users=1] = call_function[target=operator.add](args = (%item, %item), kwargs = {})
return (add,)
```
This works with export(..., strict=False), and currently errors in dynamo:
```
torch._dynamo.exc.Unsupported: call_function args: BuiltinVariable(RuntimeError) SymNodeVariable() NestedUserFunctionVariable()
```
It would be cool if we could use dynamo to automagically rewrite `if ... raise Error` to become `torch._check_with(Error, ...)`! Not high-pri or anything.
### Alternatives
_No response_
### Additional context
_No response_
cc @ezyang @penguinwu @bobrenjc93 @voznesenskym @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @chauhang @amjames
| true
|
2,833,706,240
|
[ONNX] Bump torchlib opset to 22
|
justinchuby
|
closed
|
[
"open source",
"release notes: onnx"
] | 1
|
COLLABORATOR
|
Fixes #ISSUE_NUMBER
| true
|
2,833,698,427
|
[BE][CI][Easy] bump `ruff` to 0.9.0: long statements in docstrings
|
XuehaiPan
|
closed
|
[
"open source",
"Merged",
"ciflow/trunk",
"release notes: onnx",
"topic: not user facing",
"fx"
] | 8
|
COLLABORATOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #145606
* #144546
* #144569
* __->__ #146509
cc @ezyang @SherlockNoMad @EikanWang @jgong5 @wenzhe-nrv
| true
|
2,833,651,407
|
Something very wrong with float16 CPU implementation
|
ddpasa
|
open
|
[
"module: performance",
"module: cpu",
"triaged",
"module: half"
] | 14
|
NONE
|
### 🐛 Describe the bug
There is something very wrong with the float16 implementation in the CPU backend. It's much slower than the float32 implementation. I created a very simple matmul test to reproduce the issue, posting results below:
```
PyTorch Version: 2.6.0+cpu
torch.backends.cpu.get_cpu_capability(): AVX512
500x500 matrix multiplication:
fp16: 0.10194063186645508 seconds
fp32: 0.001997709274291992 seconds
fp16 takes x51.0 times
1000x1000 matrix multiplication:
fp16: 0.9695210456848145 seconds
fp32: 0.007059574127197266 seconds
fp16 takes x137.3 times
2000x2000 matrix multiplication:
fp16: 7.783996105194092 seconds
fp32: 0.057384490966796875 seconds
fp16 takes x135.6 times
```
```
PyTorch Version: 2.7.0.dev20250205+cpu
torch.backends.cpu.get_cpu_capability(): AVX512
500x500 matrix multiplication:
fp16: 0.09653520584106445 seconds
fp32: 0.002435922622680664 seconds
fp16 takes x39.6 times
1000x1000 matrix multiplication:
fp16: 0.9120731353759766 seconds
fp32: 0.011063098907470703 seconds
fp16 takes x82.4 times
2000x2000 matrix multiplication:
fp16: 6.930181264877319 seconds
fp32: 0.07475137710571289 seconds
fp16 takes x92.7 times
```
System information: Intel Core i7-1065G7 CPU running on Linux 6.12.11
I am a regular user of llama.cpp, where I use all quants from fp16 to q4, and I have never observed extreme performance issues of this kind.
The benchmark script is below:
```python
import time
import torch
print(f"PyTorch Version: {torch.__version__}")
print("torch.backends.cpu.get_cpu_capability():", torch.backends.cpu.get_cpu_capability())
def speedtest(N, precision):
x = torch.randn(N, N, dtype=precision)
y = torch.randn(N, N, dtype=precision)
start = time.time()
matmul_result = torch.matmul(x, y)
end = time.time()
return end - start
print('')
for N in (500, 1000, 2000):
fp32 = speedtest(N, torch.float32)
fp16 = speedtest(N, torch.float16)
print(f'{N}x{N} matrix multiplication:\nfp16: {fp16} seconds\nfp32: {fp32} seconds\nfp16 takes x{round(fp16/fp32, 1)} times\n\n')
```
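As a cross-check (my own addition, not from the report), torch.utils.benchmark handles warmup and repeated runs, which rules out one-shot timing noise:
```python
import torch
from torch.utils import benchmark

for dtype in (torch.float32, torch.float16):
    x = torch.randn(1000, 1000, dtype=dtype)
    y = torch.randn(1000, 1000, dtype=dtype)
    timer = benchmark.Timer(
        stmt="torch.matmul(x, y)",
        globals={"torch": torch, "x": x, "y": y},
    )
    print(dtype, timer.timeit(20))
```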
### Versions
Collecting environment information...
PyTorch version: 2.7.0.dev20250205+cpu
Is debug build: False
CUDA used to build PyTorch: None
ROCM used to build PyTorch: N/A
OS: Manjaro Linux (x86_64)
GCC version: (GCC) 14.2.1 20240910
Clang version: 19.1.7
CMake version: version 3.31.5
Libc version: glibc-2.40
Python version: 3.11.1 (main, Dec 21 2022, 10:10:41) [GCC 12.2.0] (64-bit runtime)
Python platform: Linux-6.12.11-1-MANJARO-x86_64-with-glibc2.40
Is CUDA available: False
CUDA runtime version: No CUDA
CUDA_MODULE_LOADING set to: N/A
GPU models and configuration: No CUDA
Nvidia driver version: No CUDA
cuDNN version: No CUDA
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 39 bits physical, 48 bits virtual
Byte Order: Little Endian
CPU(s): 8
On-line CPU(s) list: 0-7
Vendor ID: GenuineIntel
Model name: Intel(R) Core(TM) i7-1065G7 CPU @ 1.30GHz
CPU family: 6
Model: 126
Thread(s) per core: 2
Core(s) per socket: 4
Socket(s): 1
Stepping: 5
CPU(s) scaling MHz: 20%
CPU max MHz: 3900,0000
CPU min MHz: 400,0000
BogoMIPS: 2996,00
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc art arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc cpuid aperfmperf tsc_known_freq pni pclmulqdq dtes64 monitor ds_cpl vmx est tm2 ssse3 sdbg fma cx16 xtpr pdcm pcid sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm 3dnowprefetch cpuid_fault epb ssbd ibrs ibpb stibp ibrs_enhanced tpr_shadow flexpriority ept vpid ept_ad fsgsbase tsc_adjust sgx bmi1 avx2 smep bmi2 erms invpcid avx512f avx512dq rdseed adx smap avx512ifma clflushopt intel_pt avx512cd sha_ni avx512bw avx512vl xsaveopt xsavec xgetbv1 xsaves split_lock_detect dtherm ida arat pln pts hwp hwp_notify hwp_act_window hwp_epp hwp_pkg_req vnmi avx512vbmi umip pku ospke avx512_vbmi2 gfni vaes vpclmulqdq avx512_vnni avx512_bitalg avx512_vpopcntdq rdpid sgx_lc fsrm md_clear flush_l1d arch_capabilities
Virtualization: VT-x
L1d cache: 192 KiB (4 instances)
L1i cache: 128 KiB (4 instances)
L2 cache: 2 MiB (4 instances)
L3 cache: 8 MiB (1 instance)
NUMA node(s): 1
NUMA node0 CPU(s): 0-7
Vulnerability Gather data sampling: Mitigation; Microcode
Vulnerability Itlb multihit: KVM: Mitigation: Split huge pages
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Mitigation; Clear CPU buffers; SMT vulnerable
Vulnerability Reg file data sampling: Not affected
Vulnerability Retbleed: Mitigation; Enhanced IBRS
Vulnerability Spec rstack overflow: Not affected
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Enhanced / Automatic IBRS; IBPB conditional; RSB filling; PBRSB-eIBRS SW sequence; BHI SW loop, KVM SW loop
Vulnerability Srbds: Mitigation; Microcode
Vulnerability Tsx async abort: Not affected
Versions of relevant libraries:
[pip3] numpy==1.26.3
[pip3] onnxruntime==1.19.0
[pip3] torch==2.7.0.dev20250205+cpu
[conda] Could not collect
cc @msaroufim @jgong5 @mingfeima @XiaobingSuper @sanchitintel @ashokei @jingxu10
| true
|
2,833,643,932
|
[dynamo][tests] Prepare for tightening fullgraph constraints
|
anijain2305
|
closed
|
[
"ciflow/trunk",
"topic: not user facing",
"module: dynamo",
"ciflow/inductor"
] | 2
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #146454
* __->__ #146507
In follow-up PRs, we raise a NoGraph exception for fullgraph=True when there is no graph. When I do this, many tests in Dynamo fail. This is bad because we silently fall back to eager, rendering the test kind of useless. In fact, it has discovered many issues. This PR adds torch ops to the functions to force a graph.
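A hedged sketch of the kind of test adjustment described (illustrative, not an actual diff from the PR):
```python
import torch

def fn(x):
    # Before: pure-Python logic only, so fullgraph=True captured no graph and
    # the test silently exercised eager behavior. After: a torch op forces a graph.
    return torch.sin(x) + 1

torch.compile(fn, fullgraph=True)(torch.randn(4))
```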
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @chauhang @amjames
| true
|
2,833,535,240
|
Support contextlib.ExitStack
|
guilhermeleobas
|
open
|
[
"open source",
"module: dynamo",
"ciflow/inductor",
"release notes: dynamo"
] | 3
|
COLLABORATOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #147990
* __->__ #146506
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @chauhang @amjames
| true
|
2,833,535,018
|
Allow setting attribute to NestedUserFunctionVariable
|
guilhermeleobas
|
closed
|
[
"open source",
"Merged",
"topic: not user facing",
"module: dynamo",
"ciflow/inductor"
] | 3
|
COLLABORATOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #147990
* #146506
* #146501
* #146500
* #148766
* #148765
* __->__ #146505
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @chauhang @amjames
| true
|
2,833,534,792
|
Introduce `UserDefinedExceptionClassVariable`
|
guilhermeleobas
|
closed
|
[
"open source",
"Merged",
"module: dynamo",
"ciflow/inductor",
"release notes: dynamo"
] | 8
|
COLLABORATOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #147990
* #146506
* #146501
* #148766
* #148765
* #146500
* #146505
* #146502
* #146499
* __->__ #146504
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @chauhang @amjames
| true
|
2,833,534,586
|
Create new dynamo ObservedExceptions at runtime
|
guilhermeleobas
|
closed
|
[
"open source",
"module: dynamo",
"ciflow/inductor"
] | 2
|
COLLABORATOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #146506
* #146505
* __->__ #146503
* #146502
* #146501
* #146500
* #146504
* #146499
* #146498
* #146497
* #146496
* #146493
* #146492
* #146491
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @chauhang @amjames
| true
|
2,833,534,409
|
Correctly propagate exception to parent tx
|
guilhermeleobas
|
closed
|
[
"open source",
"Merged",
"ciflow/trunk",
"module: dynamo",
"ciflow/inductor",
"release notes: dynamo"
] | 7
|
COLLABORATOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #147990
* #146506
* #146501
* #148766
* #148765
* #146500
* #146505
* __->__ #146502
* #146499
* #146504
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @chauhang @amjames
| true
|
2,833,534,195
|
Update CPython tests for ctx manager to use unittest
|
guilhermeleobas
|
closed
|
[
"open source",
"Merged",
"ciflow/trunk",
"topic: not user facing",
"module: dynamo",
"ciflow/inductor"
] | 8
|
COLLABORATOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #150466
* #147990
* #146506
* __->__ #146501
* #146500
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @chauhang @amjames
| true
|
2,833,534,014
|
Allow trace through unittest
|
guilhermeleobas
|
closed
|
[
"open source",
"Merged",
"module: dynamo",
"ciflow/inductor",
"release notes: dynamo",
"keep-going"
] | 5
|
COLLABORATOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #150466
* #147990
* #146506
* #146501
* __->__ #146500
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @chauhang @amjames
| true
|
2,833,533,803
|
Add `__context/cause/suppress_context/traceback__` to Exception
|
guilhermeleobas
|
closed
|
[
"open source",
"Merged",
"module: dynamo",
"ciflow/inductor",
"release notes: dynamo",
"keep-going"
] | 4
|
COLLABORATOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #147990
* #146506
* #146501
* #148766
* #148765
* #146500
* #146505
* #146502
* __->__ #146499
* #146504
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @chauhang @amjames
| true
|
2,833,533,557
|
Add `sys.exc_info` and `sys.exception`
|
guilhermeleobas
|
closed
|
[
"open source",
"Merged",
"module: dynamo",
"ciflow/inductor",
"release notes: dynamo"
] | 1
|
COLLABORATOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #146506
* #146505
* #146502
* #146501
* #146500
* #146499
* #146504
* #146497
* #146496
* #146493
* #146492
* __->__ #146498
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @chauhang @amjames
| true
|
2,833,533,296
|
Propagate `AttributeError` to user code in user_defined.py
|
guilhermeleobas
|
closed
|
[
"open source",
"Merged",
"ciflow/trunk",
"topic: not user facing",
"module: dynamo",
"ciflow/inductor"
] | 6
|
COLLABORATOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #146506
* #146505
* #146501
* #146500
* #146502
* #146499
* #146504
* __->__ #146497
* #146496
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @chauhang @amjames
| true
|
2,833,533,084
|
Handle `is`/`is not`
|
guilhermeleobas
|
closed
|
[
"open source",
"Merged",
"module: dynamo",
"ciflow/inductor",
"release notes: dynamo"
] | 3
|
COLLABORATOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #146506
* #146505
* #146501
* #146500
* #146502
* #146499
* #146504
* #146497
* __->__ #146496
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @chauhang @amjames
| true
|
2,833,532,806
|
Fix round(...) with constants
|
guilhermeleobas
|
closed
|
[
"open source",
"Merged",
"ciflow/trunk",
"module: dynamo",
"ciflow/inductor",
"release notes: dynamo"
] | 9
|
COLLABORATOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #146506
* #146505
* #146503
* #146502
* #146501
* #146500
* #146504
* #146499
* #146498
* #146497
* #146496
* #146493
* #146492
* #146491
* __->__ #146495
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @chauhang @amjames
| true
|
2,833,532,598
|
Fix STOPITERATION_ERROR opcode
|
guilhermeleobas
|
closed
|
[
"open source",
"module: dynamo",
"ciflow/inductor",
"release notes: dynamo"
] | 2
|
COLLABORATOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #146506
* #146505
* #146504
* #146503
* #146502
* #146501
* #146500
* #146499
* #146498
* #146497
* #146496
* #146495
* __->__ #146494
* #146493
* #146492
* #146491
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @chauhang @amjames
| true
|
2,833,532,387
|
Add `RAISE_VARARGS 0`
|
guilhermeleobas
|
closed
|
[
"open source",
"Merged",
"ciflow/trunk",
"module: dynamo",
"ciflow/inductor",
"release notes: dynamo"
] | 6
|
COLLABORATOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #146506
* #146505
* #146502
* #146501
* #146500
* #146499
* #146504
* #146497
* #146496
* __->__ #146493
* #146492
* #146498
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @chauhang @amjames @StrongerXi
| true
|
2,833,532,174
|
Add `WITH_EXCEPT_START` opcode
|
guilhermeleobas
|
closed
|
[
"open source",
"Merged",
"module: dynamo",
"ciflow/inductor",
"release notes: dynamo"
] | 1
|
COLLABORATOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #146506
* #146505
* #146502
* #146501
* #146500
* #146499
* #146504
* #146497
* #146496
* #146493
* __->__ #146492
* #146498
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @chauhang @amjames
| true
|
2,833,531,969
|
Add `make_dynamo_test`
|
guilhermeleobas
|
closed
|
[
"open source",
"Merged",
"ciflow/trunk",
"topic: not user facing",
"module: dynamo",
"ciflow/inductor"
] | 3
|
COLLABORATOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #146506
* #146505
* #146502
* #146501
* #146500
* #146499
* #146504
* #146497
* #146496
* #146493
* #146492
* #146498
* __->__ #146491
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @chauhang @amjames
| true
|
2,833,435,354
|
[export] Serialize special values of float into strings for json.
|
zhxchen17
|
closed
|
[
"fb-exported",
"Merged",
"ciflow/trunk",
"ciflow/inductor",
"release notes: export"
] | 9
|
CONTRIBUTOR
|
Summary: Currently inf is serialized as Infinity in JSON, which is not standards compliant. Instead we will turn all special floating-point values into strings and handle them at the JSON layer.
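A minimal sketch of the approach (my own illustration, not the serializer's code): Python's default JSON encoder emits the non-standard `Infinity`/`NaN` tokens, so special values are mapped to strings on encode and back to floats on decode.
```python
import json
import math

def encode_float(x):
    # Map inf/-inf/nan to strings so the output is standards-compliant JSON.
    if isinstance(x, float) and not math.isfinite(x):
        return str(x)  # "inf", "-inf", or "nan"
    return x

def decode_float(x):
    return float(x) if x in ("inf", "-inf", "nan") else x

print(json.dumps({"val": float("inf")}))                  # {"val": Infinity} (non-standard)
print(json.dumps({"val": encode_float(float("inf"))}))    # {"val": "inf"}
print(decode_float(json.loads('{"val": "inf"}')["val"]))  # inf
```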
Test Plan:
see D69060784
CI
Differential Revision: D69186425
| true
|
2,833,405,692
|
Update code_template.py re.compile() is directly applied to the regex…
|
umer066
|
open
|
[
"open source",
"Stale",
"ciflow/trunk",
"topic: not user facing"
] | 7
|
NONE
|
… string inside the class variable
re.compile() is now applied directly to the regex string inside the class variable.
Compiling regular expressions is computationally expensive, so this avoids any redundant compilation.
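A hedged sketch of the pattern (identifiers are illustrative, not the actual code_template.py names):
```python
import re

class CodeTemplate:
    # Compiled once at class-definition time and shared by all instances,
    # instead of re-compiling the pattern on every substitution call.
    substitution_pattern = re.compile(r"\$\{(\w+)\}")

    def keys(self, text):
        return self.substitution_pattern.findall(text)

print(CodeTemplate().keys("int ${name} = ${value};"))  # ['name', 'value']
```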
Fixes #ISSUE_NUMBER
| true
|
2,833,387,748
|
[opcheck] Improve error reporting; allow atol/rtol overrides
|
zou3519
|
closed
|
[
"Merged",
"ciflow/trunk",
"release notes: composability"
] | 6
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #146488
This PR improves opcheck to:
1. directly use torch.testing.assert_close (without a msg override).
This allows it to print the absolute and relative differences and the
number of mismatched elements.
2. take in an atol/rtol tolerance (for when someone just wants to use opcheck in their testing); a usage sketch follows below.
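A hedged usage sketch based only on the description above; the tolerance keyword names (`atol`, `rtol`) are an assumption taken from this PR's summary, not verified against the final API:
```python
import torch

# A toy custom op to run opcheck against.
@torch.library.custom_op("mylib::scaled_sin", mutates_args=())
def scaled_sin(x: torch.Tensor) -> torch.Tensor:
    return torch.sin(x)

@scaled_sin.register_fake
def _(x):
    return torch.empty_like(x)

args = (torch.randn(8),)
# Assumed per the PR description: pass loosened tolerances to opcheck.
torch.library.opcheck(scaled_sin, args, atol=1e-4, rtol=1e-4)
```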
Test Plan:
- tests
| true
|
2,833,304,642
|
DISABLED test_aoti_eager_dtype_device_layout_dynamic_shapes_cuda (__main__.DynamicShapesGPUTests)
|
pytorch-bot[bot]
|
closed
|
[
"module: rocm",
"triaged",
"module: flaky-tests",
"skipped",
"oncall: pt2",
"module: inductor"
] | 3
|
NONE
|
Platforms: rocm
This test was disabled because it is failing in CI. See [recent examples](https://hud.pytorch.org/flakytest?name=test_aoti_eager_dtype_device_layout_dynamic_shapes_cuda&suite=DynamicShapesGPUTests&limit=100) and the most recent trunk [workflow logs](https://github.com/pytorch/pytorch/runs/36696242960).
Over the past 3 hours, it has been determined flaky in 5 workflow(s) with 5 failures and 5 successes.
**Debugging instructions (after clicking on the recent samples link):**
DO NOT ASSUME THINGS ARE OKAY IF THE CI IS GREEN. We now shield flaky tests from developers so CI will thus be green but it will be harder to parse the logs.
To find relevant log snippets:
1. Click on the workflow logs linked above
2. Click on the Test step of the job so that it is expanded. Otherwise, the grepping will not work.
3. Grep for `test_aoti_eager_dtype_device_layout_dynamic_shapes_cuda`
4. There should be several instances run (as flaky tests are rerun in CI) from which you can study the logs.
<details><summary>Sample error message</summary>
```
Traceback (most recent call last):
File "/var/lib/jenkins/pytorch/test/inductor/test_torchinductor.py", line 952, in test_aoti_eager_dtype_device_layout
res = torch.tril_indices(
RuntimeError: create_func_( &container_handle_, num_models, device_str.c_str(), cubin_dir.empty() ? nullptr : cubin_dir.c_str()) API call failed at /var/lib/jenkins/workspace/torch/csrc/inductor/aoti_runner/model_container_runner.cpp, line 81
To execute this test, run the following from the base repo dir:
PYTORCH_TEST_WITH_ROCM=1 python test/inductor/test_torchinductor_dynamic_shapes.py DynamicShapesGPUTests.test_aoti_eager_dtype_device_layout_dynamic_shapes_cuda
This message can be suppressed by setting PYTORCH_PRINT_REPRO_ON_FAILURE=0
```
</details>
Test file path: `inductor/test_torchinductor_dynamic_shapes.py`
cc @jeffdaily @sunway513 @jithunnair-amd @pruthvistony @ROCmSupport @dllehr-amd @jataylo @hongxiayang @naromero77amd @clee2000 @wdvr @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @amjames @desertfire @chauhang @aakhundov
| true
|
2,833,304,641
|
DISABLED test_aoti_eager_override_registration_dynamic_shapes_cuda (__main__.DynamicShapesCodegenGPUTests)
|
pytorch-bot[bot]
|
closed
|
[
"module: rocm",
"triaged",
"module: flaky-tests",
"skipped",
"oncall: pt2",
"module: inductor"
] | 3
|
NONE
|
Platforms: rocm
This test was disabled because it is failing in CI. See [recent examples](https://hud.pytorch.org/flakytest?name=test_aoti_eager_override_registration_dynamic_shapes_cuda&suite=DynamicShapesCodegenGPUTests&limit=100) and the most recent trunk [workflow logs](https://github.com/pytorch/pytorch/runs/36696243101).
Over the past 3 hours, it has been determined flaky in 5 workflow(s) with 5 failures and 5 successes.
**Debugging instructions (after clicking on the recent samples link):**
DO NOT ASSUME THINGS ARE OKAY IF THE CI IS GREEN. We now shield flaky tests from developers so CI will thus be green but it will be harder to parse the logs.
To find relevant log snippets:
1. Click on the workflow logs linked above
2. Click on the Test step of the job so that it is expanded. Otherwise, the grepping will not work.
3. Grep for `test_aoti_eager_override_registration_dynamic_shapes_cuda`
4. There should be several instances run (as flaky tests are rerun in CI) from which you can study the logs.
<details><summary>Sample error message</summary>
```
Traceback (most recent call last):
File "/var/lib/jenkins/pytorch/test/inductor/test_torchinductor.py", line 1255, in test_aoti_eager_override_registration
res_array.append(getattr(torch, unary_op_name)(x))
RuntimeError: create_func_( &container_handle_, num_models, device_str.c_str(), cubin_dir.empty() ? nullptr : cubin_dir.c_str()) API call failed at /var/lib/jenkins/workspace/torch/csrc/inductor/aoti_runner/model_container_runner.cpp, line 81
To execute this test, run the following from the base repo dir:
PYTORCH_TEST_WITH_ROCM=1 python test/inductor/test_torchinductor_codegen_dynamic_shapes.py DynamicShapesCodegenGPUTests.test_aoti_eager_override_registration_dynamic_shapes_cuda
This message can be suppressed by setting PYTORCH_PRINT_REPRO_ON_FAILURE=0
```
</details>
Test file path: `inductor/test_torchinductor_codegen_dynamic_shapes.py`
cc @jeffdaily @sunway513 @jithunnair-amd @pruthvistony @ROCmSupport @dllehr-amd @jataylo @hongxiayang @naromero77amd @clee2000 @wdvr @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @amjames @desertfire @chauhang @aakhundov
| true
|
2,833,283,073
|
Update quantile doc
|
ILCSFNO
|
open
|
[
"triaged",
"open source",
"release notes: python_frontend"
] | 9
|
CONTRIBUTOR
|
Fixes #146156
| true
|
2,833,263,304
|
Assertion Failure: TestMkldnnCPU.test_matmul_lower_precision_cpu_float16 on Graviton 2 & 3
|
rahultrada
|
open
|
[
"module: tests",
"triaged",
"module: mkldnn",
"module: correctness (silent)",
"module: arm"
] | 0
|
NONE
|
### 🐛 Describe the bug
Repro
```python
python test/test_mkldnn.py TestMkldnnCPU.test_matmul_lower_precision_cpu_float16
```
Error
```
File "/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 4042, in assertEqual
raise error_metas.pop()[0].to_error( # type: ignore[index]
AssertionError: Tensor-likes are not close!
Mismatched elements: 362 / 726 (49.9%)
Greatest absolute difference: 0.08086013793945312 at index (16, 6) (up to 1e-05 allowed)
Greatest relative difference: 0.16464824974536896 at index (19, 19) (up to 0.001 allowed)
To execute this test, run the following from the base repo dir:
python test/test_mkldnn.py TestMkldnnCPU.test_matmul_lower_precision_cpu_float16
This message can be suppressed by setting PYTORCH_PRINT_REPRO_ON_FAILURE=0
```
This failure is not encountered in CI (on both Graviton 2 & 3).
This issue is possibly related to https://github.com/pytorch/pytorch/issues/146155, which is also a test failure not encountered in CI (although that issue only affected Graviton 3)
### Versions
```
PyTorch version: 2.7.0.dev20250205+cpu
Is debug build: False
CUDA used to build PyTorch: None
ROCM used to build PyTorch: N/A
OS: Ubuntu 22.04.4 LTS (aarch64)
GCC version: (Ubuntu 11.4.0-1ubuntu1~22.04) 11.4.0
Clang version: Could not collect
CMake version: version 3.22.1
Libc version: glibc-2.35
Python version: 3.10.12 (main, Jan 17 2025, 14:35:34) [GCC 11.4.0] (64-bit runtime)
Python platform: Linux-5.15.0-131-generic-aarch64-with-glibc2.35
Is CUDA available: False
CUDA runtime version: No CUDA
CUDA_MODULE_LOADING set to: N/A
GPU models and configuration: No CUDA
Nvidia driver version: No CUDA
cuDNN version: No CUDA
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: aarch64
CPU op-mode(s): 32-bit, 64-bit
Byte Order: Little Endian
CPU(s): 20
On-line CPU(s) list: 0-19
Vendor ID: ARM
Model name: Neoverse-N1
Model: 1
Thread(s) per core: 1
Core(s) per socket: 1
Socket(s): 20
Stepping: r3p1
BogoMIPS: 50.00
Flags: fp asimd evtstrm aes pmull sha1 sha2 crc32 atomics fphp asimdhp cpuid asimdrdm lrcpc dcpop asimddp
NUMA node(s): 1
NUMA node0 CPU(s): 0-19
Vulnerability Gather data sampling: Not affected
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Not affected
Vulnerability Reg file data sampling: Not affected
Vulnerability Retbleed: Not affected
Vulnerability Spec rstack overflow: Not affected
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl
Vulnerability Spectre v1: Mitigation; __user pointer sanitization
Vulnerability Spectre v2: Mitigation; CSV2, BHB
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Versions of relevant libraries:
[pip3] numpy==2.2.2
[pip3] torch==2.7.0.dev20250205+cpu
[conda] Could not collect
```
cc @mruberry @ZainRizvi @gujinghui @PenghuiCheng @XiaobingSuper @jianyuh @jgong5 @mingfeima @sanchitintel @ashokei @jingxu10 @min-jean-cho @yanbing-j @Guobing-Chen @Xia-Weiwen @snadampal @malfet @milpuz01
| true
|
2,833,106,092
|
[ARM] Unit test failure on Neoverse-N1 - CPUReproTests.test_lstm_packed
|
robert-hardwick
|
open
|
[
"module: tests",
"module: arm",
"oncall: pt2",
"oncall: cpu inductor"
] | 2
|
COLLABORATOR
|
### 🐛 Describe the bug
We see this test failure on Neoverse-N1. `inductor/test_cpu_repro` tests are not currently enabled on CI. I have made some comments in https://github.com/pytorch/pytorch/pull/146479 with a suggestion to mark the failing tests on AArch64 in test/inductor/test_cpu_repro.py as skipped.
```
AssertionError: False is not true
To execute this test, run the following from the base repo dir:
python test/inductor/test_cpu_repro.py CPUReproTests.test_lstm_packed_unbatched_False_input_size_1_hidden_size_7_num_layers_1_bidirectional_False_bias_False_empty_state_False_batch_first_False_batch_size_1_seq_len_7
This message can be suppressed by setting PYTORCH_PRINT_REPRO_ON_FAILURE=0
```
```
Traceback (most recent call last):
File "/var/lib/jenkins/workspace/test/inductor/test_cpu_repro.py", line 607, in test_lstm_packed
self._test_lstm_packed(
File "/opt/conda/envs/py_3.10/lib/python3.10/unittest/mock.py", line 1379, in patched
return func(*newargs, **newkeywargs)
File "/opt/conda/envs/py_3.10/lib/python3.10/contextlib.py", line 79, in inner
return func(*args, **kwds)
File "/opt/conda/envs/py_3.10/lib/python3.10/contextlib.py", line 79, in inner
return func(*args, **kwds)
File "/opt/conda/envs/py_3.10/lib/python3.10/contextlib.py", line 79, in inner
return func(*args, **kwds)
[Previous line repeated 1 more time]
File "/var/lib/jenkins/workspace/test/inductor/test_cpu_repro.py", line 558, in _test_lstm_packed
self.assertTrue("aten.mkldnn_rnn_layer" in code)
File "/opt/conda/envs/py_3.10/lib/python3.10/unittest/case.py", line 687, in assertTrue
raise self.failureException(msg)
AssertionError: False is not true
```
### Versions
Seen at pytorch commit 354fe48db9ef94c69db6d03d997a374048824f83 on Neoverse-N1, but unable to reproduce on torch 2.5. Will run again and run collect.py for complete environment data.
cc @mruberry @ZainRizvi @malfet @snadampal @milpuz01 @chauhang @penguinwu
| true
|
2,832,956,660
|
Update addr doc
|
albanD
|
closed
|
[
"Merged",
"ciflow/trunk",
"topic: not user facing"
] | 5
|
COLLABORATOR
|
Fixes https://github.com/pytorch/pytorch/issues/146399
| true
|
2,832,940,841
|
[Inductor UT][Windows][XPU] Fix Inductor UT on XPU Windows.
|
etaf
|
closed
|
[
"open source",
"Merged",
"ciflow/trunk",
"topic: not user facing",
"module: inductor",
"module: dynamo",
"ciflow/inductor",
"keep-going",
"ciflow/xpu"
] | 3
|
COLLABORATOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #146481
* #147347
This PR fixes all the Inductor UT failures for the XPU backend on Windows that we found on a local machine (due to resource constraints, we have not yet set up a Windows CI pipeline online).
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @amjames @desertfire @chauhang @aakhundov
| true
|
2,832,877,906
|
[Submodule]: Update KleidiAI submodule to v1.3.0
|
nikhil-arm
|
closed
|
[
"module: cpu",
"open source",
"module: arm",
"Merged",
"ciflow/trunk",
"topic: not user facing"
] | 14
|
COLLABORATOR
|
Change-Id: I687255982c72ee7daca438a15b718f07298963cc
cc @jgong5 @mingfeima @XiaobingSuper @sanchitintel @ashokei @jingxu10 @malfet @snadampal @milpuz01
| true
|
2,832,815,588
|
Implement blend operation for float, double, int in VEC ATen backend for SVE
|
maajidkhann
|
closed
|
[
"module: cpu",
"open source",
"module: arm",
"Merged",
"ciflow/trunk",
"topic: bug fixes",
"release notes: intel"
] | 13
|
CONTRIBUTOR
|
- Added support for the SVE vectorized blend operation for float, double, int8_t, int16_t, int32_t and int64_t data types.
- Utilizes SVE ACLE intrinsics (svcntb, svcntw, svcmpne, svsel) to handle different vector lengths (VL) dynamically.
- Ensured compatibility with SVE128, SVE256, and SVE512 hardware configurations.
- Re-enabled the blend SVE vec tests.
**Testing:**
**a) Float DType:**
./vec_test_all_types_SVE256 --gtest_filter=BitwiseFloatsAdditional2/0.Blend [Test Passed] on Graviton 3 machine (SVE256)
./vec_test_all_types_SVE128 --gtest_filter=BitwiseFloatsAdditional2/0.Blend [Test Passed] on Graviton 4 machine (SVE128)
**b) Double DType:**
./vec_test_all_types_SVE256 --gtest_filter=BitwiseFloatsAdditional2/1.Blend [Test Passed] on Graviton 3 machine (SVE256)
./vec_test_all_types_SVE128 --gtest_filter=BitwiseFloatsAdditional2/1.Blend [Test Passed] on Graviton 4 machine (SVE128)
**c)Int DType:**
python3 test/inductor/test_cpu_repro.py CPUReproTests.test_vec_remainder
[Test Passed] on Graviton 3 machine (SVE256) and on Graviton 4 machine (SVE128)
<img width="661" alt="grv4_test_case_passed" src="https://github.com/user-attachments/assets/5572fcc0-a861-4bd6-bf9e-356219ffe656" />
Fixes https://github.com/pytorch/pytorch/issues/146309
cc @jgong5 @mingfeima @XiaobingSuper @sanchitintel @ashokei @jingxu10 @malfet @snadampal @milpuz01
| true
|
2,832,766,325
|
[c10d] Add hccl distributed backend to c10d data structures
|
ankurneog
|
closed
|
[
"oncall: distributed",
"triaged",
"open source",
"Merged",
"Reverted",
"ciflow/trunk",
"release notes: distributed (c10d)",
"ciflow/rocm",
"ci-no-td"
] | 33
|
CONTRIBUTOR
|
# MOTIVATION
Intel Gaudi is an out-of-tree PyTorch accelerator with its own device/dispatch key ```hpu```.
With this change we add entries for Gaudi's distributed backend ```hccl``` to the c10d Backend data structures.
This is to ensure that there is no naming conflict in case a new in-tree accelerator is introduced with the same backend name.
Out-of-tree backends are registered by calling https://github.com/pytorch/pytorch/blob/fd0cd6a08f706b7bb1dedb296217b6441e4fb9ff/torch/distributed/distributed_c10d.py#L302
Successful registration adds the backend name to the list:
https://github.com/pytorch/pytorch/blob/fd0cd6a08f706b7bb1dedb296217b6441e4fb9ff/torch/distributed/distributed_c10d.py#L265
We bind the process group creator constructs at run-time, so if there are other distributed backends with the same device name, they can safely add the device type to the dictionary
https://github.com/pytorch/pytorch/blob/fd0cd6a08f706b7bb1dedb296217b6441e4fb9ff/torch/distributed/distributed_c10d.py#L274
And add another entry to the dictionary with the same backend name ( but different device name )
https://github.com/pytorch/pytorch/blob/fd0cd6a08f706b7bb1dedb296217b6441e4fb9ff/torch/distributed/distributed_c10d.py#L268
In addition, out-of-tree devices can use the ```backend_list``` to check for successful backend registration, e.g. in APIs like ```is_hccl_available```
cc @H-Huang @awgu @kwen2501 @wanchaol @fegin @fduwjj @wz337 @wconstab @d4l3k @c-p-i-o
| true
|
2,832,763,227
|
Improve error handling when checking CUDA version in case nvcc is not found
|
taras-janea
|
closed
|
[
"module: windows",
"triaged",
"open source",
"release notes: fx"
] | 2
|
COLLABORATOR
|
Fixes:
- https://github.com/pytorch/pytorch/issues/101138
**Description**
The PR enhances error handling in `_check_cuda_version` by verifying the existence of the `nvcc` executable before invoking `subprocess.check_output`. If `nvcc` is missing, a `FileNotFoundError` is raised with a clear message, guiding users to check their CUDA installation and path configuration.
**Testing**
Manually tested with and without `nvcc` present in the expected path.
cc @peterjc123 @mszhanyi @skyline75489 @nbcsm @iremyux @Blackhex
| true
|
2,832,715,918
|
[Feat]: Improve KleidiAI 4 bit kernel performance
|
nikhil-arm
|
closed
|
[
"module: cpu",
"triaged",
"open source",
"module: arm",
"Merged",
"ciflow/trunk",
"topic: performance",
"release notes: intel"
] | 13
|
COLLABORATOR
|
Description:
1. New thread blocking accelerates GEMVs
2. We increase throughput of the lhs quant pack + matmul pipeline by decoupling two operations.
3. The new blocking strategy blocks ```out_feature``` to accelerate GEMVs
Perf improvements:
12% speedup in the LLM prefill phase and up to 16% speedup in the autoregressive phase
Perf Benchmarking : https://github.com/pytorch/pytorch/issues/143289#issuecomment-2545773370
Change-Id: Ie574ff8459fdb75701ae366158b4e118c70694e4
cc @jgong5 @mingfeima @XiaobingSuper @sanchitintel @ashokei @jingxu10 @malfet @snadampal @milpuz01
| true
|
2,832,535,507
|
fix: replace stderr with stdout for download messages in hub.py
|
yousoumar
|
closed
|
[
"triaged",
"open source",
"Merged",
"ciflow/trunk",
"topic: not user facing"
] | 14
|
CONTRIBUTOR
|
This PR addresses an issue where download logs in `hub.py` are sent to `stderr` instead of `stdout`. As a result, when running models with workers, these messages are incorrectly categorized as errors, leading to confusion.
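A minimal sketch of the intended behavior (assumption: the helper name and message text are illustrative, not the actual `hub.py` code):
```python
import sys

# Informational download/progress messages belong on stdout; workers that
# treat anything on stderr as an error then classify them correctly.
def _log_download(msg: str) -> None:
    sys.stdout.write(msg + "\n")
    sys.stdout.flush()

_log_download("Downloading: 'https://example.com/model.pth' to /tmp/model.pth")
```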
| true
|
2,832,468,878
|
Fix torch.take_along_dim param type and default description
|
zeshengzong
|
closed
|
[
"triaged",
"open source",
"Merged",
"ciflow/trunk",
"release notes: python_frontend"
] | 9
|
CONTRIBUTOR
|
## Changes
- Change type description to `LongTensor`, consistent with [`torch.take`](https://pytorch.org/docs/stable/generated/torch.take.html)
- Add `dim` param default value description
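For reference, a small example of both points (the values are illustrative): `indices` is a `LongTensor`, and `dim` defaults to `None`, which gathers from the flattened input.
```python
import torch

t = torch.tensor([[10, 30, 20], [60, 40, 50]])
idx = torch.argsort(t, dim=1)                         # LongTensor indices
print(torch.take_along_dim(t, idx, dim=1))            # values sorted along dim 1
print(torch.take_along_dim(t, torch.tensor([0, 5])))  # dim=None: flattened gather -> tensor([10, 50])
```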
## Test Result
**Before**

**After**

| true
|
2,832,252,201
|
[export] Fix logger handler
|
angelayi
|
closed
|
[
"fb-exported",
"release notes: export"
] | 2
|
CONTRIBUTOR
|
Differential Revision: D69169179
| true
|
2,832,182,815
|
Refactoring pipeline parallelism test cases to be device agnostic [1/n]
|
AnantGulati
|
closed
|
[
"oncall: distributed",
"triaged",
"open source",
"Merged",
"ciflow/trunk",
"topic: not user facing",
"module: pipelining"
] | 15
|
CONTRIBUTOR
|
In this series of PRs, we intend to refactor the pipeline parallelism test cases to be completely device agnostic.
These changes include the following approaches:
- Allowing for multiple device types using instantiate_device_type_test
- Replacing calls to the CUDA stream with torch.get_device_module(device) wherever it applies (see the sketch below)
This should result in improved usability for all devices.
For this PR we have shown support for the following devices:
- CPU (wherever applicable)
- CUDA
- HPU
- XPU
To add another device, users can simply append it to the device list.
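A minimal sketch of the device-agnostic stream pattern mentioned above (assumption: the helper and fallback are illustrative, not the actual test code):
```python
import torch

def get_stream(device_type: str):
    # torch.get_device_module resolves to e.g. torch.cuda or torch.xpu,
    # so tests do not need to hard-code torch.cuda.Stream().
    mod = torch.get_device_module(device_type)
    return mod.Stream() if hasattr(mod, "Stream") else None

device_type = "cuda" if torch.cuda.is_available() else "cpu"
stream = get_stream(device_type)
```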
cc @H-Huang @awgu @kwen2501 @wanchaol @fegin @fduwjj @wz337 @wconstab @d4l3k @c-p-i-o
| true
|
2,832,116,196
|
Automatic Dynamic does not handle Tuple fields blocking automatic dynamic on norm layers
|
laithsakka
|
open
|
[
"triaged",
"oncall: pt2",
"module: dynamic shapes"
] | 0
|
CONTRIBUTOR
|
If we have the following program, then automatic dynamic handles it fine (meaning that after the second compile we generate a program that is dynamic with respect to any inputs to the Model):
```python
import torch
import torch.nn as nn
# so automatic dynamic does work fine if we set those two to false.
torch._dynamo.config.force_nn_module_property_static_shapes = False
torch._dynamo.config.force_parameter_static_shapes = False
class Y(torch.nn.Module):
def __init__(self, n_input, n_output):
super().__init__()
self.x = n_input
self.compress = nn.Linear(n_input, n_output)
def forward(self, x):
return self.compress(x)
@torch.compile()
class M(torch.nn.Module):
def __init__(self,n_input, n_output):
self.n_input = n_input
self.n_output = n_output
super().__init__()
self.cle = Y(n_input, n_output)
@torch._dynamo.disable
def markDynamic(self, x: torch.Tensor):
# just do nothing lol
# torch._dynamo.mark_dynamic(x, 0)
return x
def forward(self, x):
# self.markDynamic(x)
return self.cle(x)*self.cle.x
model = M(3210, 1)
mode2 = M(33, 2)
mode3 = M(100, 3)
def func():
mode3(torch.rand(100))
print("hi")
model(torch.rand(3210))
mode2(torch.rand(33))
func()
```
```python
class GraphModule(torch.nn.Module):
def forward(self, primals_1: "Sym(s0)", primals_2: "Sym(s1)", primals_3: "f32[s0, s1][s1, 1]cpu", primals_4: "f32[s0][1]cpu", primals_5: "f32[s1][1]cpu", primals_6: "Sym(s4)"):
# File: /home/lsakka/pytorch/example4.py:15 in forward, code: return self.compress(x)
view: "f32[1, s1][s1, 1]cpu" = torch.ops.aten.view.default(primals_5, [1, primals_2])
permute: "f32[s1, s0][1, s1]cpu" = torch.ops.aten.permute.default(primals_3, [1, 0]); primals_3 = None
addmm: "f32[1, s0][s0, 1]cpu" = torch.ops.aten.addmm.default(primals_4, view, permute); primals_4 = view = permute = None
view_1: "f32[s0][1]cpu" = torch.ops.aten.view.default(addmm, [primals_1]); addmm = None
# File: /home/lsakka/pytorch/example4.py:34 in forward, code: return self.cle(x)*self.cle.x
mul_7: "f32[s0][1]cpu" = torch.ops.aten.mul.Tensor(view_1, primals_6); view_1 = None
return (mul_7, primals_5, primals_1, primals_2, primals_6)
class GraphModule(torch.nn.Module):
def forward(self, primals_1: "Sym(s0)", primals_2: "Sym(s1)", primals_6: "Sym(s4)", primals_5: "f32[s1][1]cpu", tangents_1: "f32[s0][1]cpu"):
# File: /home/lsakka/pytorch/example4.py:34 in forward, code: return self.cle(x)*self.cle.x
mul_9: "f32[s0][1]cpu" = torch.ops.aten.mul.Tensor(tangents_1, primals_6); tangents_1 = primals_6 = None
# File: /home/lsakka/pytorch/example4.py:15 in forward, code: return self.compress(x)
view_2: "f32[1, s0][s0, 1]cpu" = torch.ops.aten.view.default(mul_9, [1, primals_1]); mul_9 = None
permute_1: "f32[s0, 1][1, s0]cpu" = torch.ops.aten.permute.default(view_2, [1, 0])
view: "f32[1, s1][s1, 1]cpu" = torch.ops.aten.view.default(primals_5, [1, primals_2]); primals_5 = primals_2 = None
mm: "f32[s0, s1][s1, 1]cpu" = torch.ops.aten.mm.default(permute_1, view); permute_1 = view = None
sum_1: "f32[1, s0][s0, 1]cpu" = torch.ops.aten.sum.dim_IntList(view_2, [0], True); view_2 = None
view_3: "f32[s0][1]cpu" = torch.ops.aten.view.default(sum_1, [primals_1]); sum_1 = primals_1 = None
return (None, None, mm, view_3, None, None)
```
If we make a minor change from `self.x = n_input` to `self.x = (n_input,)`, and then read it as `self.x[0]`, we end up with 3 compilations and no dynamic kernel:
```python
import torch
import torch.nn as nn
# so automatic dynamic does work fine if we set those two to false.
torch._dynamo.config.force_nn_module_property_static_shapes = False
torch._dynamo.config.force_parameter_static_shapes = False
class Y(torch.nn.Module):
def __init__(self, n_input, n_output):
super().__init__()
self.x = (n_input,)
self.compress = nn.Linear(n_input, n_output)
def forward(self, x):
return self.compress(x)
@torch.compile()
class M(torch.nn.Module):
def __init__(self,n_input, n_output):
self.n_input = n_input
self.n_output = n_output
super().__init__()
self.cle = Y(n_input, n_output)
@torch._dynamo.disable
def markDynamic(self, x: torch.Tensor):
# just do nothing lol
# torch._dynamo.mark_dynamic(x, 0)
return x
def forward(self, x):
# yes dynamism lol
out = self.cle(x)*self.cle.x[0]
return out
model = M(3210, 1)
mode2 = M(33, 2)
mode3 = M(100, 3)
def func():
mode3(torch.rand(100))
print("hi")
model(torch.rand(3210))
mode2(torch.rand(33))
func()
```
Norm layers use a tuple to store internal fields, and this blocks automatic dynamic from making things that use them dynamic.
cc @chauhang @penguinwu @ezyang @bobrenjc93 @Chillee
| true
|
2,832,074,046
|
Fix torch.nn.functional.one_hot param num_classes optional description
|
zeshengzong
|
closed
|
[
"open source",
"Merged",
"ciflow/trunk",
"topic: not user facing"
] | 7
|
CONTRIBUTOR
|
The `torch.nn.functional.one_hot` [documentation](https://pytorch.org/docs/stable/generated/torch.nn.functional.one_hot.html) describes the param `num_classes` as not optional, but users can call the method without passing it.

```python
>>> import torch
>>> a = torch.arange(0, 5) % 3 # [0,1,2,0,1]
>>> torch.nn.functional.one_hot(a)
tensor([[1, 0, 0],
[0, 1, 0],
[0, 0, 1],
[1, 0, 0],
[0, 1, 0]])
```
`num_classes` has default value -1
https://github.com/pytorch/pytorch/blob/93d98aca310af96dfdb6b2507d9ca80e6b1f74b7/aten/src/ATen/native/native_functions.yaml#L6154-L6157
## Test Result

| true
|
2,831,891,107
|
[ONNX] Improve deprecation messages
|
justinchuby
|
closed
|
[
"module: onnx",
"triaged"
] | 0
|
COLLABORATOR
|
- [x] Create warnings https://github.com/pytorch/pytorch/pull/146425 https://github.com/pytorch/pytorch/pull/146639
- [x] Create warnings for torch.onnx.export deprecated options
- [x] Remove mentioning of the deprecated dynamo classes in docs
- [x] Improve documentation
- [x] Deprecation warning for OperatorExportTypes and ExportTypes
| true
|
2,831,847,538
|
all_to_all_single hangs when some ranks don't send anything
|
Edenzzzz
|
closed
|
[
"oncall: distributed"
] | 7
|
NONE
|
### 🐛 Describe the bug
When using `all_to_all_single` with two ranks and **only one sending rank** in a diffusion model forward region, the code deadlocks. However, when extracting only the all-to-all part with the same send & recv splits and input sizes, the code runs smoothly. In the logs below, rank 1's recv buffer is `nil`, which should be expected because rank 1 receives 0 bytes from itself?
I have been stuck here for a few days. Thanks in advance for any suggestions!
## The inputs are:
```
rank 0 input_split_size_list [1, 1], output_split_size_list [1, 0],input_shape torch.Size([2, 24, 2304, 1, 128]), output_shape [1, 24, 2304, 1, 128]
rank 1 input_split_size_list [0, 0], output_split_size_list [1, 0],input_shape torch.Size([0, 2, 2304, 24, 128]), output_shape [1, 24, 2304, 1, 128]
```
## Full deadlock logs with `NCCL_DEBUG_SUBSYS=COLL NCCL_DEBUG=INFO`
```
NCCL version 2.21.5+cuda12.4
INFO 02-05 17:31:41 [comm.py:194] rank 1 input_split_size_list [0, 0], output_split_size_list [1, 0],input_shape torch.Size([0, 2, 2304, 24, 128]), output_shape [1, 24, 2304, 1, 128]
ip-172-31-59-18:52584:52584 [0] NCCL INFO Send: opCount 0 sendbuff (nil) recvbuff 0x74340e800000 count 7077888 datatype 9 op 0 root 0 comm 0x566f5e871ef0 [nranks=2] stream 0x566e37493cd0
ip-172-31-59-18:52585:52585 [1] NCCL INFO Send: opCount 0 sendbuff (nil) recvbuff (nil) count 0 datatype 7 op 0 root 0 comm 0x56701f1777d0 [nranks=2] stream 0x5670008a1ed0
ip-172-31-59-18:52584:52584 [0] NCCL INFO Recv: opCount 0 sendbuff (nil) recvbuff 0x74340d980000 count 7077888 datatype 9 op 0 root 0 comm 0x566f5e871ef0 [nranks=2] stream 0x566e37493cd0
ip-172-31-59-18:52584:52584 [0] NCCL INFO Send: opCount 0 sendbuff (nil) recvbuff 0x74340f580000 count 7077888 datatype 9 op 0 root 1 comm 0x566f5e871ef0 [nranks=2] stream 0x566e37493cd0
ip-172-31-59-18:52585:52585 [1] NCCL INFO Recv: opCount 0 sendbuff (nil) recvbuff 0x792e23000000 count 7077888 datatype 7 op 0 root 0 comm 0x56701f1777d0 [nranks=2] stream 0x5670008a1ed0
ip-172-31-59-18:52584:52584 [0] NCCL INFO Recv: opCount 0 sendbuff (nil) recvbuff 0x74340e700000 count 0 datatype 9 op 0 root 1 comm 0x566f5e871ef0 [nranks=2] stream 0x566e37493cd0
ip-172-31-59-18:52585:52585 [1] NCCL INFO Send: opCount 0 sendbuff (nil) recvbuff (nil) count 0 datatype 7 op 0 root 1 comm 0x56701f1777d0 [nranks=2] stream 0x5670008a1ed0
ip-172-31-59-18:52585:52585 [1] NCCL INFO Recv: opCount 0 sendbuff (nil) recvbuff 0x792e24b00000 count 0 datatype 7 op 0 root 1 comm 0x56701f1777d0 [nranks=2] stream 0x5670008a1ed0
ip-172-31-59-18:52585:52585 [1] NCCL INFO AllReduce: opCount 1 sendbuff 0x79248bb91200 recvbuff 0x79248bb91200 count 1 datatype 7 op 0 root 0 comm 0x56701f1777d0 [nranks=2] stream 0x5670008a1ed0
ip-172-31-59-18:52584:52584 [0] NCCL INFO AllReduce: opCount 1 sendbuff 0x7426bb191200 recvbuff 0x7426bb191200 count 1 datatype 7 op 0 root 0 comm 0x566f5e871ef0 [nranks=2] stream 0x566e37493cd0
```
## Test script decoupled from model forward:
```
import torch
import os
import torch.distributed as dist
def test_all_to_all_single(rank, world_size):
torch.cuda.set_device(rank)
dist.init_process_group(backend='nccl', rank=rank, world_size=world_size)
seq_len = 2304
hc = 24
hdim = 128
input = torch.ones([2, hc, seq_len, hdim], device='cuda') if dist.get_rank() == 0 else torch.tensor([], device='cuda')
input_split_sizes = [1, 1] if dist.get_rank() == 0 else [0, 0]
output_split_sizes = [1, 0]
print(f"rank {dist.get_rank()} input: {input.shape}, input_split_sizes: {input_split_sizes}, output_split_sizes: {output_split_sizes}")
output = torch.empty([1, hc, seq_len, hdim], device='cuda')
dist.all_to_all_single(output, input, output_split_sizes=output_split_sizes, input_split_sizes=input_split_sizes)
dist.barrier()
print(f"rank {dist.get_rank()} output: {output.shape}")
dist.destroy_process_group()
def mp():
world_size = 2
torch.multiprocessing.spawn(test_all_to_all_single, args=(world_size, ), nprocs=world_size, join=True)
if __name__ == '__main__':
os.environ["MASTER_ADDR"] = "localhost"
os.environ["MASTER_PORT"] = "29506"
mp()
```
### Versions
Collecting environment information...
PyTorch version: 2.6.0+cu126
Is debug build: False
CUDA used to build PyTorch: 12.6
ROCM used to build PyTorch: N/A
OS: Ubuntu 22.04.4 LTS (x86_64)
GCC version: (Ubuntu 11.4.0-1ubuntu1~22.04) 11.4.0
Clang version: Could not collect
CMake version: version 3.30.0
Libc version: glibc-2.35
Python version: 3.10.12 (main, Nov 20 2023, 15:14:05) [GCC 11.4.0] (64-bit runtime)
Python platform: Linux-6.8.0-1017-aws-x86_64-with-glibc2.35
Is CUDA available: True
CUDA runtime version: 12.5.82
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration:
GPU 0: NVIDIA H100 80GB HBM3
GPU 1: NVIDIA H100 80GB HBM3
GPU 2: NVIDIA H100 80GB HBM3
GPU 3: NVIDIA H100 80GB HBM3
GPU 4: NVIDIA H100 80GB HBM3
GPU 5: NVIDIA H100 80GB HBM3
GPU 6: NVIDIA H100 80GB HBM3
GPU 7: NVIDIA H100 80GB HBM3
Nvidia driver version: 550.127.05
cuDNN version: Probably one of the following:
/usr/lib/x86_64-linux-gnu/libcudnn.so.9.2.1
/usr/lib/x86_64-linux-gnu/libcudnn_adv.so.9.2.1
/usr/lib/x86_64-linux-gnu/libcudnn_cnn.so.9.2.1
/usr/lib/x86_64-linux-gnu/libcudnn_engines_precompiled.so.9.2.1
/usr/lib/x86_64-linux-gnu/libcudnn_engines_runtime_compiled.so.9.2.1
/usr/lib/x86_64-linux-gnu/libcudnn_graph.so.9.2.1
/usr/lib/x86_64-linux-gnu/libcudnn_heuristic.so.9.2.1
/usr/lib/x86_64-linux-gnu/libcudnn_ops.so.9.2.1
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 48 bits physical, 48 bits virtual
Byte Order: Little Endian
CPU(s): 192
On-line CPU(s) list: 0-191
Vendor ID: AuthenticAMD
Model name: AMD EPYC 7R13 Processor
CPU family: 25
Model: 1
Thread(s) per core: 2
Core(s) per socket: 48
Socket(s): 2
Stepping: 1
BogoMIPS: 5299.99
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ht syscall nx mmxext fxsr_opt pdpe1gb rdtscp lm constant_tsc rep_good nopl nonstop_tsc cpuid extd_apicid aperfmperf tsc_known_freq pni pclmulqdq monitor ssse3 fma cx16 pcid sse4_1 sse4_2 x2apic movbe popcnt aes xsave avx f16c rdrand hypervisor lahf_lm cmp_legacy cr8_legacy abm sse4a misalignsse 3dnowprefetch topoext perfctr_core ssbd ibrs ibpb stibp vmmcall fsgsbase bmi1 avx2 smep bmi2 invpcid rdseed adx smap clflushopt clwb sha_ni xsaveopt xsavec xgetbv1 clzero xsaveerptr rdpru wbnoinvd arat npt nrip_save vaes vpclmulqdq rdpid
Hypervisor vendor: KVM
Virtualization type: full
L1d cache: 3 MiB (96 instances)
L1i cache: 3 MiB (96 instances)
L2 cache: 48 MiB (96 instances)
L3 cache: 384 MiB (12 instances)
NUMA node(s): 2
NUMA node0 CPU(s): 0-47,96-143
NUMA node1 CPU(s): 48-95,144-191
Vulnerability Gather data sampling: Not affected
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Not affected
Vulnerability Reg file data sampling: Not affected
Vulnerability Retbleed: Not affected
Vulnerability Spec rstack overflow: Vulnerable: Safe RET, no microcode
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Retpolines; IBPB conditional; IBRS_FW; STIBP always-on; RSB filling; PBRSB-eIBRS Not affected; BHI Not affected
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Versions of relevant libraries:
[pip3] mypy-extensions==1.0.0
[pip3] numpy==1.24.4
[pip3] nvidia-cublas-cu12==12.6.4.1
[pip3] nvidia-cuda-cupti-cu12==12.6.80
[pip3] nvidia-cuda-nvrtc-cu12==12.6.77
[pip3] nvidia-cuda-runtime-cu12==12.6.77
[pip3] nvidia-cudnn-cu12==9.5.1.17
[pip3] nvidia-cudnn-frontend==1.5.1
[pip3] nvidia-cufft-cu12==11.3.0.4
[pip3] nvidia-curand-cu12==10.3.7.77
[pip3] nvidia-cusolver-cu12==11.7.1.2
[pip3] nvidia-cusparse-cu12==12.5.4.2
[pip3] nvidia-cusparselt-cu12==0.6.3
[pip3] nvidia-nccl-cu12==2.21.5
[pip3] nvidia-nvjitlink-cu12==12.6.85
[pip3] nvidia-nvtx-cu12==12.6.77
[pip3] nvtx==0.2.5
[pip3] onnx==1.16.0
[pip3] optree==0.14.0
[pip3] pynvjitlink==0.2.3
[pip3] pytorch-triton==3.0.0+989adb9a2
[pip3] torch==2.6.0+cu126
[pip3] torch-tensorrt==2.5.0a0
[pip3] torchaudio==2.6.0+cu126
[pip3] torchvision==0.21.0+cu126
[pip3] triton==3.2.0
[conda] Could not collect
cc @H-Huang @awgu @kwen2501 @wanchaol @fegin @fduwjj @wz337 @wconstab @d4l3k @c-p-i-o
| true
|
2,831,802,832
|
Fix an issue where functional collectives don't force fx stride on inputs when compiled
|
yifuwang
|
closed
|
[
"oncall: distributed",
"Merged",
"ciflow/trunk",
"release notes: distributed (c10d)",
"module: inductor",
"ciflow/inductor"
] | 13
|
COLLABORATOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #146467
Fixes https://github.com/pytorch/pytorch/issues/146416
Also added contiguity checks in the C++ functional collective ops to prevent striding issues introduced during compilation from manifesting as silent correctness issues.
cc @H-Huang @awgu @kwen2501 @wanchaol @fegin @fduwjj @wz337 @wconstab @d4l3k @c-p-i-o @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @amjames @desertfire @chauhang @aakhundov
| true
|
2,831,724,217
|
Fix one_hot inconsistent errors after compile
|
zeshengzong
|
open
|
[
"triaged",
"open source",
"topic: not user facing",
"module: dynamo"
] | 10
|
CONTRIBUTOR
|
Fixes #146274
**Test Result**
```python
>>> import torch
>>> f = torch.nn.functional.one_hot
>>> a = torch.arange(0, 5) % 3 # [0,1,2,0,1]
>>> num_classes = 0
>>> torch.nn.functional.one_hot(a,num_classes)
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
RuntimeError: Class values must be smaller than num_classes.
>>> torch.compile(torch.nn.functional.one_hot)(a,num_classes)
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/home/zong/code/pytorch/torch/_dynamo/eval_frame.py", line 570, in _fn
return fn(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^
File "/home/zong/code/pytorch/torch/_dynamo/external_utils.py", line 48, in inner
return fn(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^
RuntimeError: Class values must be smaller than num_classes.
```
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @chauhang @amjames @bdhirsh
| true
|
2,831,688,349
|
[MPS] Implement support for zeta (both eager and inductor).
|
dcci
|
closed
|
[
"Merged",
"topic: improvements",
"module: mps",
"release notes: mps",
"ciflow/mps",
"module: inductor",
"ciflow/inductor"
] | 4
|
MEMBER
|
A test was failing in inductor (`test_pointwise_zeta`) -- and I realized the operation was also missing from eager.
Implemented for both, leveraging the kernel. Happy to split in two (one PR for eager, one for inductor) if folks prefer.
cc @kulinseth @albanD @malfet @DenisVieriu97 @jhavukainen @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @amjames @desertfire @chauhang @aakhundov
| true
|
2,831,643,171
|
[symbolic shapes] Log id for each SymNode
|
angelayi
|
closed
|
[
"release notes: fx",
"fx",
"ciflow/inductor"
] | 1
|
CONTRIBUTOR
|
Fixes #ISSUE_NUMBER
cc @ezyang @SherlockNoMad @EikanWang @jgong5 @wenzhe-nrv
| true
|
2,831,629,293
|
[inductor] fix custom op returning unbacked symint
|
ydwu4
|
closed
|
[
"topic: not user facing",
"module: inductor",
"ciflow/inductor"
] | 2
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #146463
Fixes https://github.com/pytorch/pytorch/issues/146457.
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @amjames @desertfire @chauhang @aakhundov
| true
|
2,831,624,833
|
[aotinductor] add option to disable runtime assertions
|
ColinPeppler
|
closed
|
[
"Merged",
"ciflow/trunk",
"topic: not user facing",
"module: inductor",
"ciflow/inductor"
] | 24
|
CONTRIBUTOR
|
A recent user experience is like this:
* User runs AOTI lowering, it's successful.
* They take AOTI model and run it with some sample inputs. Everything runs well
* Then they boot up a serving test that loads the AOTI model and runs it with a set of sample requests.
* They see that some of the requests fail. The logs show them this:
* AOTInductorModel run failed with input spec: [1, 32]:c10::BFloat16, [2]:long ...
* Error: u45 >= 2
* To the untrained eye, "AOTInductorModel run failed" is all they see. But, the true reason is Error: u45 >= 2
However, the assertion isn't always correct.
* In fact, u45 can actually be 0.
* So, why did AOTI say u45 ≥ 2? It's a two-piece combo:
* With 0/1 Specialization, the ShapeEnv creates symbolic shapes (e.g. s0) with a default value-range of [2, inf]
* In the graph, Dynamo traces torch.mul(A, B) where A is [s0, ...]and B is [u45, ...]. So, Dynamo learns Eq(s0, u45).
* Therefore, u45 also has a range of [2, inf]. Hence, the incorrect runtime assertion.
So, the motivation for this PR is to add an option to disable these runtime assertions if you run into a situation like this. However, another way to avoid it is to call `mark_unbacked()` on all the dynamic dims.
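A minimal sketch of the `mark_unbacked()` alternative (assumption: the import location and the toy function are illustrative, not from this PR):
```python
import torch
from torch._dynamo.decorators import mark_unbacked  # assumed location of the helper

def f(a):
    return (a * 2).sum(dim=-1)

a = torch.randn(4, 8)
# Treat dim 0 of `a` as unbacked so it does not pick up the [2, inf] default
# value range of 0/1-specialized symbolic shapes, avoiding a spurious
# `u45 >= 2`-style runtime assertion in the lowered model.
mark_unbacked(a, 0)
out = torch.compile(f)(a)
```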
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #146462
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @amjames @desertfire @chauhang @aakhundov
@diff-train-skip-merge
| true
|
2,831,616,243
|
[CUDA][SDPA] Compute reference in `test_triton_scaled_dot_product_attention_block_size_16_cuda_float32` in `float64`
|
eqy
|
closed
|
[
"module: sparse",
"module: cuda",
"triaged",
"open source",
"Merged",
"ciflow/trunk",
"topic: not user facing",
"module: sdpa"
] | 8
|
COLLABORATOR
|
This currently seems to fail with mismatches in the 1e-4 range, presumably because sdpa calls into the `MATH` backend here, which is less fused than a Triton kernel. Doing the reference computation in `float64` appears to fix it.
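A minimal sketch of the fix (assumption: tensor shapes and tolerances are illustrative, not the actual test code):
```python
import torch
import torch.nn.functional as F

q, k, v = (torch.randn(1, 4, 16, 32, device="cuda") for _ in range(3))
out = F.scaled_dot_product_attention(q, k, v)
# Compute the reference in float64 so the comparison is not dominated by the
# rounding behavior of the less-fused MATH backend.
ref = F.scaled_dot_product_attention(q.double(), k.double(), v.double()).to(q.dtype)
torch.testing.assert_close(out, ref, atol=1e-3, rtol=1e-3)
```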
cc @alexsamardzic @nikitaved @pearu @cpuhrsch @amjames @bhosmer @jcaip @ptrblck @msaroufim
| true
|
2,831,592,851
|
[MPSInductor] Scope-down test_prod running in MPS
|
malfet
|
closed
|
[
"Merged",
"topic: not user facing",
"ciflow/mps",
"module: inductor",
"ciflow/inductor"
] | 3
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #146458
* __->__ #146460
Multi-stage reductions are not yet a thing, and the original `test_prod` just returned 0 for large reductions, so failures were reported as flaky ones. But if one runs the same test with `MTL_DEBUG_LAYER=1`, the failure is obvious:
```
2025-02-04 11:51:30.034 Python[16594:289093] Metal API Validation Enabled
test_prod (__main__.MPSBasicTests.test_prod) ... -[MTLDebugComputeCommandEncoder _validateThreadsPerThreadgroup:]:1266: failed assertion `(threadsPerThreadgroup.width(1) * threadsPerThreadgroup.height(2050) * threadsPerThreadgroup.depth(1))(2050) must be <= 1024. (device threadgroup size limit)'
```
Fixes https://github.com/pytorch/pytorch/issues/146430
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @amjames @desertfire @chauhang @aakhundov
| true
|
2,831,586,059
|
[ONNX] Support torchvision ops
|
justinchuby
|
open
|
[
"module: onnx",
"triaged"
] | 0
|
COLLABORATOR
| null | true
|
2,831,584,820
|
[MPS] Add error checking when dispatching kernel
|
malfet
|
closed
|
[
"Merged",
"topic: improvements",
"release notes: mps",
"ciflow/mps"
] | 3
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #146458
Check that the thread-group size does not exceed the maximum thread-group size.
Add a regression test to validate that.
This makes failures like https://github.com/pytorch/pytorch/issues/146430 much easier to detect.
| true
|
2,831,580,978
|
Inductor cannot handle custom ops that return unbacked symint
|
ydwu4
|
open
|
[
"triaged",
"module: custom-operators",
"oncall: pt2",
"module: inductor",
"module: pt2-dispatcher"
] | 1
|
CONTRIBUTOR
|
### 🐛 Describe the bug
There seems to be a code-gen issue for custom ops that return unbacked symints:
```python
import torch
@torch.library.custom_op("mylib::foo", mutates_args={})
def maybe_alias(x: torch.Tensor) -> int:
s = x.sum().to(torch.int64)
if s > 0:
return -s.item()
return s.item()
def fake_impl(x):
return x.sum().to(torch.int64).item()
maybe_alias.register_fake(fake_impl)
def f(x):
return torch.ops.mylib.foo(x.sin())
torch._dynamo.config.capture_scalar_outputs = True
torch.compile(f)(torch.randn(3, 4))
```
This gives the following assertion error:
```
Traceback (most recent call last):
File "/data/users/yidi/pytorch/test_custom_op.py", line 19, in <module>
torch.compile(f)(torch.randn(3, 4))
File "/data/users/yidi/pytorch/torch/_dynamo/eval_frame.py", line 574, in _fn
raise e.remove_dynamo_frames() from None # see TORCHDYNAMO_VERBOSE=1
File "/data/users/yidi/pytorch/torch/_inductor/compile_fx.py", line 752, in _compile_fx_inner
raise InductorError(e, currentframe()).with_traceback(
File "/data/users/yidi/pytorch/torch/_inductor/compile_fx.py", line 737, in _compile_fx_inner
mb_compiled_graph = fx_codegen_and_compile(
File "/data/users/yidi/pytorch/torch/_inductor/compile_fx.py", line 1399, in fx_codegen_and_compile
return scheme.codegen_and_compile(gm, example_inputs, inputs_to_check, graph_kwargs)
File "/data/users/yidi/pytorch/torch/_inductor/compile_fx.py", line 1121, in codegen_and_compile
num_bytes, nodes_num_elem, node_runtimes = graph.count_bytes()
File "/data/users/yidi/pytorch/torch/_inductor/graph.py", line 1985, in count_bytes
num_bytes = node.get_read_write_buffers_sizes()
File "<string>", line 5, in get_read_write_buffers_sizes_cache_on_self
File "/data/users/yidi/pytorch/torch/_inductor/scheduler.py", line 565, in get_read_write_buffers_sizes
return self.get_read_write_buffers_sizes_impl(
File "/data/users/yidi/pytorch/torch/_inductor/scheduler.py", line 585, in get_read_write_buffers_sizes_impl
self.get_read_write_buffer_accesses(
File "/data/users/yidi/pytorch/torch/_inductor/scheduler.py", line 711, in get_read_write_buffer_accesses
buf_bytes = get_buf_bytes(buf)
File "/data/users/yidi/pytorch/torch/_inductor/scheduler.py", line 690, in get_buf_bytes
assert isinstance(user.node, BaseSchedulerNode)
torch._inductor.exc.InductorError: AssertionError:
Set TORCH_LOGS="+dynamo" and TORCHDYNAMO_VERBOSE=1 for more information
You can suppress this exception and fall back to eager by setting:
import torch._dynamo
torch._dynamo.config.suppress_errors = True
```
### Versions
on master
cc @chauhang @penguinwu @voznesenskym @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @amjames @desertfire @aakhundov @zou3519 @bdhirsh
| true
|
2,831,573,110
|
Fix workarea compute in lapackSyevd
|
wdvr
|
closed
|
[
"Merged",
"ciflow/trunk",
"release notes: linalg_frontend"
] | 6
|
CONTRIBUTOR
|
Work-query APIs return floating-point values that can lose precision when converted back to int. Solve this by using `nextafter` and `ceil`.
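A minimal sketch of the idea (assumption: the helper name and example value are illustrative; the actual fix lives in the C++ `lapackSyevd` wrapper):
```python
import math

def work_size_from_query(lwork_query: float) -> int:
    # The LAPACK work-size query returns the required size as a float, which can
    # sit just below the true integer; nudge it up to the next representable
    # value, then round up, so the workspace is never under-allocated.
    return math.ceil(math.nextafter(lwork_query, math.inf))

print(int(2049.9999999999995), work_size_from_query(2049.9999999999995))  # 2049 vs 2050
```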
Add regression test
Fixes #145801
| true
|
2,831,572,388
|
[logging] Save compile state in CompiledFxGraph and make it available at runtime
|
masnesral
|
closed
|
[
"topic: not user facing",
"module: inductor",
"module: dynamo",
"ciflow/inductor"
] | 3
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #146455
Summary: To support logging the correct compile_id for runtime timings (like Triton autotuning), save the compile_id in CompiledFxGraph and make it available to logging utilities, i.e., dynamo_timed.
The previous attempt put the compile_id in the inductor_metadata with the Triton output code, but that broke Triton caching and we reverted. This version does the following:
* When creating or deserializing a CompiledFxGraph, save the compile-time compile_id.
* Implement a class `RuntimeCompileContext` that's analogous to `CompileContext` where we can look up the compile_id at runtime (sketched below).
* Set this runtime compile context during `CompiledFxGraph.__call__`.
* Removes the compile_id as a param to dynamo_timed; dynamo_timed can figure it out instead.
* Removes separate dynamo_timed params for compile-time and runtime dynamo_compile column names. We can use one param and have dynamo_timed figure out whether to treat an event as runtime or compile-time.
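A minimal sketch of the runtime-context idea (assumption: class, method, and attribute names are illustrative, not the actual implementation):
```python
import contextlib
import threading

class RuntimeCompileContext:
    """Lets runtime logging utilities (e.g. dynamo_timed) look up the compile_id
    of the compiled graph that is currently executing."""
    _local = threading.local()

    def __init__(self, compile_id):
        self.compile_id = compile_id

    @classmethod
    def current_compile_id(cls):
        ctx = getattr(cls._local, "ctx", None)
        return ctx.compile_id if ctx is not None else None

    @contextlib.contextmanager
    def activate(self):
        prev = getattr(self._local, "ctx", None)
        type(self)._local.ctx = self
        try:
            yield
        finally:
            type(self)._local.ctx = prev

# Inside CompiledFxGraph.__call__ (sketch):
#   with RuntimeCompileContext(self._compile_id).activate():
#       return self.current_callable(*args)
```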
Test Plan:
* tlparse (`python benchmarks/dynamo/torchbench.py --performance --training --amp --backend inductor --device cuda --print-compilation-time --repeat 5 --cold-start-latency --only nanogpt`): https://fburl.com/bu5i8efk
* dynamo_compile: https://fburl.com/scuba/dynamo_compile/sandbox/3d74ps92
* pt2_compile_events: https://fburl.com/scuba/pt2_compile_events/ooqoe5tu
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @amjames @desertfire @chauhang @aakhundov
| true
|
2,831,525,752
|
[dynamo][fullgraph] Raise NoGraphError if no graph with fullgraph=True
|
anijain2305
|
closed
|
[
"ciflow/trunk",
"topic: not user facing",
"module: dynamo",
"ciflow/inductor"
] | 1
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #146454
* #146507
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @chauhang @amjames
| true
|
2,831,501,329
|
revert PTD's change that leads to signature mismatch of printNcclCommProxyTrace
|
dmwu
|
closed
|
[
"oncall: distributed",
"fb-exported",
"Merged",
"ciflow/trunk",
"release notes: distributed (c10d)"
] | 7
|
CONTRIBUTOR
|
Summary: D68801098 introduced this function signature mismatch issue for printNcclCommProxyTrace. Revert it so that trunk build can pass.
Test Plan:
With the change, build of APS model using rcclexp can now pass:
`sh scripts/ltian/run_jobs/fb_fm_v2/run_fb_fm_v2_job.sh -h T20_GTT_MI300X -n 16 -b 1024 -t [2024-12-06] -d ai_infra_ngs -e ai_infra_training_rnd_tc -x 0`
Reviewed By: c-p-i-o
Differential Revision: D69149588
cc @H-Huang @awgu @kwen2501 @wanchaol @fegin @fduwjj @wz337 @wconstab @d4l3k @c-p-i-o
| true
|
2,831,461,744
|
cpp_wrapper: enable all CI inductor tests
|
benjaminglass1
|
closed
|
[
"open source",
"topic: not user facing",
"module: inductor",
"ciflow/inductor",
"keep-going",
"ci-no-test-timeout"
] | 2
|
COLLABORATOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #146452
* #146706
* #146424
* #146109
* #146449
* #144349
* #144293
* #146928
With the speedups from precompiled headers, we can now enable all currently enabled CI tests for inductor in cpp_wrapper mode.
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
| true
|
2,831,456,514
|
[XPU] Enable nightly builds for python 3.13t
|
atalman
|
closed
|
[
"triaged",
"topic: binaries",
"intel",
"module: xpu"
] | 0
|
CONTRIBUTOR
|
### 🐛 Describe the bug
Related to https://github.com/pytorch/pytorch/issues/130249
Please enable support for Python 3.13t on XPU devices. Currently 3.13t nightly builds are disabled for XPU : https://github.com/pytorch/pytorch/blob/main/.github/scripts/generate_binary_build_matrix.py#L356
Example PR enabling MacOS and Windows 3.13t builds: https://github.com/pytorch/pytorch/pull/141806
cc @jgong5 @mingfeima @XiaobingSuper @sanchitintel @ashokei @jingxu10 @gujinghui @EikanWang @fengyuan14 @guangyey @chuanqi129
### Versions
2.7.0
| true
|
2,831,425,833
|
distributed; If we are in fbcode, set a default log_line_prefix_template of
|
c00w
|
closed
|
[
"oncall: distributed",
"topic: not user facing",
"module: dynamo",
"ciflow/inductor"
] | 2
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #146450
* #145122
[${role_name}${rank}|${local_rank}]:
Summary:
This removes the need for every fbcode user to try and remember the correct way to set this to get log filtering to work.
Test Plan:
This should only affect trainers which do not set this and are within fbcode.
Reviewers:
Subscribers:
Tasks:
Tags:
cc @H-Huang @awgu @kwen2501 @wanchaol @fegin @fduwjj @wz337 @wconstab @d4l3k @c-p-i-o @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @chauhang @amjames
| true
|
2,831,397,504
|
cpp_wrapper: handle mixed-device C-shim fallbacks
|
benjaminglass1
|
closed
|
[
"open source",
"Merged",
"ciflow/trunk",
"topic: not user facing",
"module: inductor",
"ciflow/inductor"
] | 6
|
COLLABORATOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #146991
* #146706
* #146424
* #146109
* __->__ #146449
Fixes an error from test_torch, where a CUDA cpp_wrapper run called a CUDA native C-shim kernel with two CPU tensors.
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @amjames @desertfire @chauhang @aakhundov
| true
|
2,831,382,261
|
[ROCm] Indexing perf optimization via Unroll/WideFetch/IdxReuse/OneDupOpt
|
amd-hhashemi
|
closed
|
[
"module: rocm",
"triaged",
"open source",
"Stale",
"release notes: cuda"
] | 3
|
CONTRIBUTOR
|
Fixes #ISSUE_NUMBER
cc @jeffdaily @sunway513 @jithunnair-amd @pruthvistony @ROCmSupport @dllehr-amd @jataylo @hongxiayang @naromero77amd
| true
|
2,831,376,535
|
Fix workflow for closing nonexistent disable issues
|
clee2000
|
closed
|
[
"Merged",
"topic: not user facing"
] | 3
|
CONTRIBUTOR
|
The workflow could not update issues because it didn't have permissions, and it looked green because it didn't check return codes.
Tested by running the workflow and seeing that issues did get closed
Fixes https://github.com/pytorch/pytorch/issues/145382
| true
|
2,831,327,964
|
API for custom error messages from graph break annotations (torch._dynamo.disable)
|
xmfan
|
closed
|
[
"triaged",
"oncall: pt2",
"module: dynamo",
"dynamo-triage-jan2025",
"module: compile ux"
] | 2
|
MEMBER
|
### 🚀 The feature, motivation and pitch
Within our 1P features, we use `torch._dynamo.disable`. By default, this displays an error message like: `torch._dynamo.exc.Unsupported: call torch._dynamo.disable() wrapped function <function OpNamespace.add.<locals>.fn at 0x7f9b1aba7370>`. We will probably want to improve this error message, but I don't think there can be a blanket useful one. For example, we can't default to something like: "remove torch._dynamo.disable to remove the graph break" if the disable was put there intentionally. This also affects 3P frameworks, which the end user may not be able to modify easily.
UX wise, one proposal is to pass an error message argument to disable, something like:
```python
@torch._dynamo.disable(reason="torch.X API is not yet supported without graph breaks")
def fn(...)
@torch._dynamo.disable(reason="<feature> can't be traced by Dynamo and must always fallback to eager")
def fn2(...)
```
cc @chauhang @penguinwu @voznesenskym @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @amjames @williamwen42 @zou3519 @anijain2305
### Alternatives
_No response_
### Additional context
_No response_
| true
|
2,831,320,616
|
fix tf32 issue in test_inductor_freezing.py unit tests
|
Fuzzkatt
|
closed
|
[
"triaged",
"open source",
"Merged",
"ciflow/trunk",
"topic: not user facing",
"module: inductor",
"ciflow/inductor"
] | 9
|
COLLABORATOR
|
The test is hitting numerical mismatches in NVIDIA internal CI. Add the tf32_on_and_off decorator and update the check to assertEqual.
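For reference, a sketch of the decorator usage (assumption: the test name, module, and tolerance are illustrative, not the actual change):
```python
import torch
from torch.testing._internal.common_cuda import tf32_on_and_off
from torch.testing._internal.common_utils import TestCase, run_tests

class FreezingExample(TestCase):
    @tf32_on_and_off(0.01)  # runs the test with TF32 off and on, loosening self.precision when on
    def test_conv_freezing(self, device="cuda"):
        mod = torch.nn.Conv2d(3, 8, 3).to(device)
        x = torch.randn(2, 3, 8, 8, device=device)
        self.assertEqual(mod(x), mod(x))

if __name__ == "__main__":
    run_tests()
```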
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @amjames @desertfire @chauhang @aakhundov @eqy @nWEIdia
| true
|
2,831,278,961
|
Stop poisoning fork on Dataloader creation when pin_memory is enabled
|
albanD
|
closed
|
[
"release notes: dataloader",
"topic: bug fixes"
] | 1
|
COLLABORATOR
|
Fixes https://github.com/pytorch/pytorch/issues/144687
Needs https://github.com/pytorch/pytorch/pull/146098 that already landed to fix the issue above
A longer-term fix would be to move cuda's non-poisoning is_available() check to c++. But that would be quite a bit of work.
This PR also updates the behavior of current_accelerator() in Python to match getAccelerator() in C++ and updates all docs to reflect that.
| true
|
2,831,272,217
|
[TreeSpec] Add custom comparision function
|
henryhu6
|
closed
|
[
"fb-exported",
"Merged",
"ciflow/trunk",
"topic: not user facing"
] | 11
|
CONTRIBUTOR
|
Summary:
https://github.com/pytorch/pytorch/pull/145815 used caching for the treespec_loads calculation to speed up AOTI module calls.
However, this made tests flaky when comparing TreeSpecs for objects in local scope, e.g. 'test_export.TestExport.test_pytree_register_nested_data_class.<locals>.Inner'.
Type comparison yields False when the local scopes differ, due to the lru_cache.
Since this comparison is only used for testing purposes, we only test whether the str(type) values are equal.
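A minimal sketch of the relaxed comparison (assumption: illustrative only, not the actual TreeSpec code):
```python
def _types_match_for_tests(a_type, b_type) -> bool:
    # Identity comparison fails for classes defined in a local scope and
    # re-registered through the lru_cache'd treespec_loads path, so compare
    # their string representations instead.
    return a_type is b_type or str(a_type) == str(b_type)
```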
Test Plan:
```
PYTORCH_TEST_WITH_ROCM=1 python test/export/test_retraceability.py
```
Differential Revision: D69137706
| true
|
2,831,197,948
|
[ONNX] Run decomp creates different results
|
justinchuby
|
closed
|
[] | 0
|
COLLABORATOR
|
```py
import torch
from torch.onnx._internal.exporter._registration import ONNXRegistry
from torch.onnx._internal.exporter import _decomp
class Model(torch.nn.Module):
def forward(self, x):
return torch.ops.aten.flatten.using_ints(x, -2)
def decompose_with_registry(
exported_program: torch.export.ExportedProgram,
) -> torch.export.ExportedProgram:
"""Decompose the exported program with the given registry.
This function is needed so it shows clearly on the profiler results.
"""
onnx_registered_ops = set(
_decomp.get_onnx_implemented_overloads(ONNXRegistry.from_torchlib())
)
decomp_table = _decomp.create_onnx_friendly_decomposition_table(onnx_registered_ops)
return exported_program.run_decompositions(decomp_table)
model = Model()
ep = torch.export.export(model, (torch.randn(2, 3, 4, 5),))
print(ep)
ep_onnx = decompose_with_registry(ep)
print(ep_onnx)
ep_coreir = ep.run_decompositions()
print(ep_coreir)
```
```py
ExportedProgram:
class GraphModule(torch.nn.Module):
def forward(self, x: "f32[2, 3, 4, 5]"):
# File: /workspace/pytorch/testtest.py:8 in forward, code: return torch.ops.aten.flatten.using_ints(x, -2)
flatten: "f32[2, 3, 20]" = torch.ops.aten.flatten.using_ints(x, -2); x = None
return (flatten,)
Graph signature: ExportGraphSignature(input_specs=[InputSpec(kind=<InputKind.USER_INPUT: 1>, arg=TensorArgument(name='x'), target=None, persistent=None)], output_specs=[OutputSpec(kind=<OutputKind.USER_OUTPUT: 1>, arg=TensorArgument(name='flatten'), target=None)])
Range constraints: {}
/home/justinchu/anaconda3/envs/pytorch/lib/python3.11/site-packages/onnxscript/converter.py:823: FutureWarning: 'onnxscript.values.Op.param_schemas' is deprecated in version 0.1 and will be removed in the future. Please use '.op_signature' instead.
param_schemas = callee.param_schemas()
/home/justinchu/anaconda3/envs/pytorch/lib/python3.11/site-packages/onnxscript/converter.py:823: FutureWarning: 'onnxscript.values.OnnxFunction.param_schemas' is deprecated in version 0.1 and will be removed in the future. Please use '.op_signature' instead.
param_schemas = callee.param_schemas()
ExportedProgram:
class GraphModule(torch.nn.Module):
def forward(self, x: "f32[2, 3, 4, 5]"):
# File: /workspace/pytorch/testtest.py:8 in forward, code: return torch.ops.aten.flatten.using_ints(x, -2)
collapse_view: "f32[2, 3, 20]" = torch.ops.prims.collapse_view.default(x, 2, 3); x = None
return (collapse_view,)
Graph signature: ExportGraphSignature(input_specs=[InputSpec(kind=<InputKind.USER_INPUT: 1>, arg=TensorArgument(name='x'), target=None, persistent=None)], output_specs=[OutputSpec(kind=<OutputKind.USER_OUTPUT: 1>, arg=TensorArgument(name='collapse_view'), target=None)])
Range constraints: {}
ExportedProgram:
class GraphModule(torch.nn.Module):
def forward(self, x: "f32[2, 3, 4, 5]"):
# File: /workspace/pytorch/testtest.py:8 in forward, code: return torch.ops.aten.flatten.using_ints(x, -2)
view: "f32[2, 3, 20]" = torch.ops.aten.view.default(x, [2, 3, 20]); x = None
return (view,)
Graph signature: ExportGraphSignature(input_specs=[InputSpec(kind=<InputKind.USER_INPUT: 1>, arg=TensorArgument(name='x'), target=None, persistent=None)], output_specs=[OutputSpec(kind=<OutputKind.USER_OUTPUT: 1>, arg=TensorArgument(name='view'), target=None)])
Range constraints: {}
```
| true
|
2,831,133,715
|
[sigmoid] Implement a OSS only model runner.
|
zhxchen17
|
closed
|
[
"fb-exported",
"Merged",
"ciflow/trunk",
"ciflow/inductor",
"release notes: export"
] | 7
|
CONTRIBUTOR
|
Summary: Implement an OSS version of the model runner with clean dependencies. The new OSS model runner only removes thrift and only uses the JSON header to load the model.
Test Plan: Test will be added in the next diff separately. (D69060784)
Differential Revision: D68846877
| true
|
2,831,108,176
|
[Codemod][AddExplicitStrictExportArg] caffe2/torch
|
gmagogsfm
|
closed
|
[
"fb-exported",
"Merged",
"ciflow/trunk",
"topic: not user facing",
"fx",
"module: inductor",
"module: dynamo",
"ciflow/inductor"
] | 6
|
CONTRIBUTOR
|
Differential Revision: D69068432
cc @ezyang @SherlockNoMad @EikanWang @jgong5 @wenzhe-nrv @voznesenskym @penguinwu @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @amjames @desertfire @chauhang @aakhundov
| true
|
2,831,078,001
|
[export] make stack_trace optional in insert_custom_op_guards
|
pianpwk
|
closed
|
[
"fb-exported",
"Merged",
"ciflow/trunk",
"ciflow/inductor",
"release notes: export"
] | 9
|
CONTRIBUTOR
|
Summary: Fixes 1 PT2I exportability error
Test Plan: -
Differential Revision: D69132186
| true
|
2,831,051,955
|
Revert D68880766
|
avikchaudhuri
|
closed
|
[
"fb-exported",
"ciflow/trunk",
"ciflow/inductor",
"release notes: export"
] | 5
|
CONTRIBUTOR
|
Summary:
This diff reverts D68880766
(The context such as a Sandcastle job, Task, SEV, etc. was not provided.)
Test Plan: NA
Differential Revision: D69129334
| true
|
2,831,048,438
|
[Testing] Reduce `test_exp` flakiness
|
malfet
|
open
|
[
"Merged",
"Reverted",
"topic: not user facing",
"ciflow/mps",
"module: inductor",
"ciflow/inductor",
"ci-no-td"
] | 10
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #146436
By setting `reference_in_float` to false, as `exp(a + b)` could yield significantly different results from `exp(a.half() + b.half())`, as one can see in the following example (which happens to be the random values generated by the macOS RNG for this test)
```
>>> import torch
>>> x=torch.tensor(2.5599, dtype=torch.half)
>>> y=torch.tensor(0.6970, dtype=torch.half)
>>> (x + y).exp()
tensor(26., dtype=torch.float16)
>>> (x.float() + y.float()).exp()
tensor(25.9799)
```
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @amjames @desertfire @chauhang @aakhundov
| true
|
2,831,047,466
|
UNSTABLE slow / linux-focal-rocm6.3-py3.10 / test (slow)
|
atalman
|
closed
|
[
"module: rocm",
"module: ci",
"triaged",
"unstable"
] | 2
|
CONTRIBUTOR
|
https://github.com/pytorch/pytorch/issues/146409
cc @jeffdaily @sunway513 @jithunnair-amd @pruthvistony @ROCmSupport @dllehr-amd @jataylo @hongxiayang @naromero77amd @seemethere @malfet @pytorch/pytorch-dev-infra
| true
|
2,831,046,805
|
UNSTABLE periodic / linux-focal-rocm6.3-py3.10 / test (distributed)
|
atalman
|
closed
|
[
"module: rocm",
"module: ci",
"triaged",
"unstable"
] | 2
|
CONTRIBUTOR
|
https://github.com/pytorch/pytorch/issues/146409
cc @jeffdaily @sunway513 @jithunnair-amd @pruthvistony @ROCmSupport @dllehr-amd @jataylo @hongxiayang @naromero77amd @seemethere @malfet @pytorch/pytorch-dev-infra
| true
|
2,831,045,898
|
UNSTABLE inductor-rocm / rocm6.3-py3.10-inductor / test (inductor)
|
atalman
|
closed
|
[
"module: rocm",
"module: ci",
"triaged",
"unstable"
] | 2
|
CONTRIBUTOR
|
https://github.com/pytorch/pytorch/issues/146409
cc @jeffdaily @sunway513 @jithunnair-amd @pruthvistony @ROCmSupport @dllehr-amd @jataylo @hongxiayang @naromero77amd @seemethere @malfet @pytorch/pytorch-dev-infra
| true
|
2,831,043,716
|
UNSTABLE rocm / linux-focal-rocm6.3-py3.10 / test (default)
|
atalman
|
closed
|
[
"module: rocm",
"module: ci",
"triaged",
"unstable"
] | 2
|
CONTRIBUTOR
|
https://github.com/pytorch/pytorch/issues/146409
cc @jeffdaily @sunway513 @jithunnair-amd @pruthvistony @ROCmSupport @dllehr-amd @jataylo @hongxiayang @naromero77amd @seemethere @malfet @pytorch/pytorch-dev-infra
| true
|
2,831,038,029
|
Failing to free a tensor allocated while a torch.cuda.Mempool is active results in that tensor being freed with cudaFree() rather than the custom free function.
|
galv
|
open
|
[
"module: cuda",
"triaged"
] | 10
|
COLLABORATOR
|
### 🐛 Describe the bug
torch.cuda.MemPool was recently merged into the main branch, which should allow us to mix and match allocators in PyTorch, but I have found that the current implementation is broken.
I was inspired by recent work to "suspend" temporary GPU buffers used by CUDA graphs, which both sglang and vLLM seem to have explored very recently:
https://github.com/sgl-project/sglang/pull/2630
https://github.com/vllm-project/vllm/pull/11743
CUDAGraphs "bake in" their virtual addresses. By using the CUDA VMM APIs, https://docs.nvidia.com/cuda/cuda-driver-api/group__CUDA__VA.html, we can deallocate the physical backing memory without deallocating the virtual addresses, thus fixing the problem that cudagraphs hold memory that other cuda graphs or non cuda graph code cannot use. This is an alternative to sharing a mempool id, but more flexible in my opinion, because we can suspend "non-cudagraph" memory allocations (i.e., allocations not made during cuda graph capture; KV cache is the most obvious beneficiary of this) as well with this. Additionally, we can explore suspending the cudaGraphExec_t itself, which sits in GPU memory and can be substantial in size because each node is 10kiB and it is not abnormal to have a cuda graph with 10,000 or more nodes for large workloads, which is ~100 MiB. But that is not the immediate use case.
The ability to suspend memory when swapping between inference and training in the same process is important for new RL workloads. Otherwise, you may not be able to reach full performance in either training or inference due to memory constraints. It is also important for LLM inference use cases where it is normal to create a unique cuda graph for each input shape bucket, but this use case is sort of covered by sharing memory pool ids between cuda graphs.
I am not 100% certain, but I believe that @youkaichao encountered a similar issue to mine here: https://github.com/pytorch/pytorch/issues/145168
My reproducer is on commit 1c16cf70c37652dde7950ca174278b425af03611.
I applied this diff first to get better visibility:
```
diff --git a/c10/cuda/CUDACachingAllocator.cpp b/c10/cuda/CUDACachingAllocator.cpp
index 9f335c5fc1e..c02b7705cf5 100644
--- a/c10/cuda/CUDACachingAllocator.cpp
+++ b/c10/cuda/CUDACachingAllocator.cpp
@@ -2941,8 +2941,10 @@ class DeviceCachingAllocator {
// If there is an active mempool with a given allocator,
// we use the given allocator's delete function.
+ std::cout << "GALVEZ:custom allocator free" << std::endl;
active_pool->allocator()->raw_delete((void*)block->ptr);
} else {
+ std::cout << "GALVEZ:cudaFree()" << std::endl;
C10_CUDA_CHECK(cudaFree((void*)block->ptr));
}
total_allocated_memory -= block->size;
diff --git a/torch/csrc/distributed/c10d/ProcessGroupNCCL.cpp b/torch/csrc/distributed/c10d/ProcessGroupNCCL.cpp
index 5e4a637f851..4a8aa614e0b 100644
--- a/torch/csrc/distributed/c10d/ProcessGroupNCCL.cpp
+++ b/torch/csrc/distributed/c10d/ProcessGroupNCCL.cpp
@@ -5390,6 +5390,7 @@ static void* _ncclMemAlloc(size_t size, int device, void* stream) {
false, "NCCL mem allocator is not supported in this NCCL version");
#else
LOG(INFO) << "NCCL mem allocator: allocating " << size << " bytes";
+ std::cout << "GALVEZ:_ncclMemAlloc()" << std::endl;
at::cuda::OptionalCUDAGuard gpuGuard(device);
void* ptr = nullptr;
TORCH_CHECK(ncclMemAlloc(&ptr, size) == ncclSuccess, "ncclMemAlloc failed");
@@ -5404,6 +5405,7 @@ static void _ncclMemFree(void* ptr, size_t size, int device, void* stream) {
false, "NCCL mem allocator is not supported in this NCCL version");
#else
LOG(INFO) << "NCCL mem allocator: freeing " << size << " bytes";
+ std::cout << "GALVEZ:_ncclMemFree()" << std::endl;
at::cuda::OptionalCUDAGuard gpuGuard(device);
TORCH_CHECK(ncclMemFree(ptr) == ncclSuccess, "ncclMemFree failed");
#endif // NCCL_HAS_MEM_ALLOC
```
I built like this:
```
USE_GLOG=ON USE_MEM_EFF_ATTENTION=0 USE_FLASH_ATTENTION=0 USE_DISTRIBUTED=1 USE_MKLDNN=0 BUILD_TEST=1 USE_FBGEMM=0 USE_NNPACK=0 USE_QNNPACK=0 USE_XNNPACK=0 DEBUG=1 TORCH_CUDA_ARCH_LIST="8.9" CMAKE_C_COMPILER_LAUNCHER=ccache CMAKE_CXX_COMPILER_LAUNCHER=ccache CMAKE_CUDA_COMPILER_LAUNCHER=ccache python setup.py develop
```
When you run:
```python
import torch
import torch.distributed as c10d
import torch._logging
import logging
import os
# torch._logging.set_logs(all=logging.INFO)
os.environ["MASTER_ADDR"] = "localhost"
os.environ["MASTER_PORT"] = "7999"
c10d.init_process_group(
backend="nccl", rank=0, world_size=1,
)
device = torch.device(f"cuda:0")
torch.cuda.set_device(0)
pg = c10d.distributed_c10d._get_default_group()
backend = pg._get_backend(torch.device(device))
def f(backend):
pool = torch.cuda.MemPool(backend.mem_allocator)
# allocate memory with ncclMemAlloc
with torch.cuda.use_mem_pool(pool):
x = torch.arange(1024 * 1024 * 2, device="cuda")
# Note: pool will be destroyed upon function return, but y, which
# was allocated via the pool is still alive.
return x
x = f(backend)
del x
# calls cudaFree() on x
torch.cuda.empty_cache()
c10d.destroy_process_group()
```
What you get is the following stdout:
```
(pytorch-5) dgalvez@dgalvez-dvt-01:~/code/asr/pytorch-5$ python repros/basic_mempool.py
GALVEZ:_ncclMemAlloc()
GALVEZ:cudaFree()
```
This means that the _ncclMemAlloc() call is not being matched by _ncclMemFree(). This is a serious error: nothing is reported, but it is clearly not okay.
I looked into it, and the problem is that the current logic assumes that the memory pool used to do allocation is still active when it comes time for the tensor to be freed:
https://github.com/pytorch/pytorch/blob/23fffb54d591c7b5ca6d19728d628dbe1e79d91c/c10/cuda/CUDACachingAllocator.cpp#L2935-L2947
This is not good for multiple reasons:
1. If the mempool used to do the original allocation is not active, then an error should be reported. You should not fall back to cudaFree(), which is not necessarily correct for your custom memory allocator. We are just getting lucky that no error is being reported right now.
2. cudaFree() is not called when a tensor is destroyed. It is called when the backing storage is destroyed, e.g. when torch.cuda.empty_cache() is called or a cudaMalloc fails due to memory fragmentation. The user does not, in general, control when this happens. Therefore, we cannot guarantee that backing memory is always deleted while the appropriate memory pool is active without calling torch.cuda.empty_cache() at the end of each memory pool's "with" region, which synchronizes the cuda stream. I think the current design is therefore fundamentally flawed.
What I propose instead is the following:
- We require that a CUDAPluggableAllocator be live for the duration of all of its allocations' lifetimes. This seems easy to enforce to me given that the current design pattern is to use static instances of them right now.
- Each PrivatePool will have a new member pointing to the allocator used by the memory pool that created that private pool: https://github.com/pytorch/pytorch/blob/23fffb54d591c7b5ca6d19728d628dbe1e79d91c/c10/cuda/CUDACachingAllocator.cpp#L821-L844 This is an acceptable pattern because there is a one to one relationship between memory pool IDs and private pools IIUC: https://github.com/pytorch/pytorch/blob/23fffb54d591c7b5ca6d19728d628dbe1e79d91c/c10/cuda/CUDACachingAllocator.cpp#L1087-L1089
- The above two points allow us to always call the PluggableAllocator's free function regardless of whether the MemPool used for allocation is active at the time of freeing.
This seems much more correct and much less error prone, but I might be missing something.
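To make the shape of that proposal concrete, here is a rough sketch (hypothetical names and fields, not the actual CUDACachingAllocator types): the pool an allocation came from remembers its allocator, so the free path consults the block's own pool instead of whatever MemPool happens to be active at free time.
```cpp
// Hypothetical sketch of the proposed ownership chain; real names/types differ.
#include <cstddef>

struct PluggableAllocator {
  void* (*raw_alloc)(size_t size) = nullptr;
  void (*raw_delete)(void* ptr) = nullptr;
};

struct PrivatePool {
  // Proposed new member: set when the MemPool that owns this private pool is
  // created, and required to outlive every allocation made from the pool.
  PluggableAllocator* allocator = nullptr;
};

struct Block {
  void* ptr = nullptr;
  size_t size = 0;
  PrivatePool* pool = nullptr;  // private pool the block was allocated from, if any
};

// Free path: route the deletion through the block's own pool's allocator,
// regardless of whether any MemPool is active at this point.
void release_block(Block* block) {
  if (block->pool != nullptr && block->pool->allocator != nullptr) {
    block->pool->allocator->raw_delete(block->ptr);
  } else {
    // cudaFree(block->ptr);  // default allocator path
  }
}
```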
What do you think @syed-ahmed @ezyang ?
### Versions
PyTorch version: 2.7.0a0+git1c16cf7
Is debug build: True
CUDA used to build PyTorch: 12.6
ROCM used to build PyTorch: N/A
OS: Ubuntu 22.04.1 LTS (x86_64)
GCC version: (Ubuntu 11.4.0-1ubuntu1~22.04) 11.4.0
Clang version: Could not collect
CMake version: version 3.31.4
Libc version: glibc-2.35
Python version: 3.9.21 (main, Dec 11 2024, 16:24:11) [GCC 11.2.0] (64-bit runtime)
Python platform: Linux-5.15.0-46-generic-x86_64-with-glibc2.35
Is CUDA available: True
CUDA runtime version: 12.6.77
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration: GPU 0: NVIDIA L40S
Nvidia driver version: 560.35.03
cuDNN version: Could not collect
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: False
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 45 bits physical, 48 bits virtual
Byte Order: Little Endian
CPU(s): 25
On-line CPU(s) list: 0-24
Vendor ID: AuthenticAMD
Model name: AMD EPYC 9454 48-Core Processor
CPU family: 25
Model: 17
Thread(s) per core: 1
Core(s) per socket: 1
Socket(s): 25
Stepping: 1
BogoMIPS: 5491.74
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 syscall nx mmxext fxsr_opt pdpe1gb rdtscp lm constant_tsc rep_good nopl tsc_reliable nonstop_tsc cpuid extd_apicid tsc_known_freq pni pclmulqdq ssse3 fma cx16 pcid sse4_1 sse4_2 x2apic movbe popcnt aes xsave avx f16c rdrand hypervisor lahf_lm extapic cr8_legacy abm sse4a misalignsse 3dnowprefetch osvw topoext invpcid_single ibpb vmmcall fsgsbase bmi1 avx2 smep bmi2 erms invpcid avx512f avx512dq rdseed adx smap avx512ifma clflushopt clwb avx512cd sha_ni avx512bw avx512vl xsaveopt xsavec xgetbv1 xsaves avx512_bf16 clzero wbnoinvd arat avx512vbmi umip pku ospke avx512_vbmi2 gfni vaes vpclmulqdq avx512_vnni avx512_bitalg avx512_vpopcntdq rdpid overflow_recov succor fsrm flush_l1d
Hypervisor vendor: VMware
Virtualization type: full
L1d cache: 800 KiB (25 instances)
L1i cache: 800 KiB (25 instances)
L2 cache: 25 MiB (25 instances)
L3 cache: 800 MiB (25 instances)
NUMA node(s): 1
NUMA node0 CPU(s): 0-24
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Not affected
Vulnerability Retbleed: Not affected
Vulnerability Spec store bypass: Vulnerable
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Retpolines, IBPB conditional, STIBP disabled, RSB filling
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Versions of relevant libraries:
[pip3] mypy==1.13.0
[pip3] mypy-extensions==1.0.0
[pip3] numpy==1.22.4
[pip3] onnx==1.17.0
[pip3] onnxscript==0.1.0.dev20240817
[pip3] optree==0.13.0
[pip3] torch==2.7.0a0+git1c16cf7
[conda] numpy 1.22.4 pypi_0 pypi
[conda] optree 0.13.0 pypi_0 pypi
[conda] torch 2.7.0a0+git1c16cf7 dev_0 <develop>
cc @ptrblck @msaroufim @eqy
| true
|
2,831,028,570
|
DISABLED test_prod (__main__.MPSBasicTests)
|
pytorch-bot[bot]
|
closed
|
[
"triaged",
"module: flaky-tests",
"module: macos",
"skipped",
"module: mps"
] | 4
|
NONE
|
Platforms: mac, macos
This test was disabled because it is failing in CI. See [recent examples](https://hud.pytorch.org/flakytest?name=test_prod&suite=MPSBasicTests&limit=100) and the most recent trunk [workflow logs](https://github.com/pytorch/pytorch/runs/36656410343).
Over the past 3 hours, it has been determined flaky in 3 workflow(s) with 6 failures and 3 successes.
**Debugging instructions (after clicking on the recent samples link):**
DO NOT ASSUME THINGS ARE OKAY IF THE CI IS GREEN. We now shield flaky tests from developers so CI will thus be green but it will be harder to parse the logs.
To find relevant log snippets:
1. Click on the workflow logs linked above
2. Click on the Test step of the job so that it is expanded. Otherwise, the grepping will not work.
3. Grep for `test_prod`
4. There should be several instances run (as flaky tests are rerun in CI) from which you can study the logs.
<details><summary>Sample error message</summary>
```
Traceback (most recent call last):
File "/Users/ec2-user/runner/_work/pytorch/pytorch/test/inductor/test_torchinductor.py", line 1917, in test_prod
self.common(fn, (torch.rand((1, 2050)),))
File "/Users/ec2-user/runner/_work/_temp/conda_environment_13137283417/lib/python3.9/contextlib.py", line 79, in inner
return func(*args, **kwds)
File "/Users/ec2-user/runner/_work/pytorch/pytorch/test/inductor/test_torchinductor.py", line 629, in check_model_gpu
check_model(
File "/Users/ec2-user/runner/_work/pytorch/pytorch/test/inductor/test_torchinductor.py", line 511, in check_model
self.assertEqual(
File "/Users/ec2-user/runner/_work/_temp/conda_environment_13137283417/lib/python3.9/site-packages/torch/testing/_internal/common_utils.py", line 4042, in assertEqual
raise error_metas.pop()[0].to_error( # type: ignore[index]
AssertionError: Tensor-likes are not close!
Mismatched elements: 1 / 1 (100.0%)
Greatest absolute difference: nan at index (0,) (up to 1e-05 allowed)
Greatest relative difference: nan at index (0,) (up to 1.3e-06 allowed)
The failure occurred for item [1]
To execute this test, run the following from the base repo dir:
python test/inductor/test_mps_basic.py MPSBasicTests.test_prod
This message can be suppressed by setting PYTORCH_PRINT_REPRO_ON_FAILURE=0
```
</details>
Test file path: `inductor/test_mps_basic.py`
cc @clee2000 @wdvr @malfet @albanD @kulinseth @DenisVieriu97 @jhavukainen
| true
|
2,830,972,852
|
[MPSInductor] Implement `argmax`/`argmin`
|
malfet
|
closed
|
[
"Merged",
"topic: not user facing",
"ciflow/mps",
"module: inductor",
"ciflow/inductor"
] | 3
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #146436
* __->__ #146429
* #146428
* #146423
TODOs:
- Find test with NaN
- Report an internal compiler error when running `test_argmax_argmin1` (which is actually caused by insufficient shared memory)
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @amjames @desertfire @chauhang @aakhundov
| true
|