| id | title | user | state | labels | comments | author_association | body | is_title |
|---|---|---|---|---|---|---|---|---|
3,036,986,245
|
[export] add serialized_artifact test
|
ydwu4
|
open
|
[
"fb-exported",
"ciflow/inductor",
"release notes: export"
] | 2
|
CONTRIBUTOR
|
Summary: This diff adds an EXPECTTEST_ACCEPT_ARTIFACT environment variable for test_serialize. When set to 1, it saves the ep to serialized_artifact, overwriting any existing saved artifact, and then loads the saved artifact. When set to 0, it just looks up the existing stored artifact and compares it with the newly exported artifact.
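A minimal sketch of the control flow described above (the helper names are hypothetical; only the `EXPECTTEST_ACCEPT_ARTIFACT` variable comes from this diff):
```python
import os

def check_serialized_artifact(ep, artifact_path, save_fn, load_fn, compare_fn):
    # Accept mode: re-serialize the exported program and overwrite the stored artifact.
    if os.environ.get("EXPECTTEST_ACCEPT_ARTIFACT") == "1":
        save_fn(ep, artifact_path)
    # In both modes, load the stored artifact and compare it against the new export.
    stored = load_fn(artifact_path)
    compare_fn(stored, ep)
```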
Test Plan: test-only change.
Differential Revision: D72487556
| true
|
3,036,972,173
|
[BE][Cleanup][Dynamo] Stop logging entire_frame_compile_time_s
|
Raymo111
|
open
|
[
"better-engineering",
"topic: not user facing",
"module: inductor",
"module: dynamo",
"ciflow/inductor",
"dynamo-logging"
] | 6
|
MEMBER
|
Note: Additional (whitespace) changes are from lintrunner.
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
| true
|
3,036,970,567
|
[BE][MPS] Pass `alpha` by reference
|
malfet
|
closed
|
[
"Merged",
"topic: not user facing",
"ciflow/mps"
] | 3
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #152743
* __->__ #152737
* #152515
As it's always a scalar
| true
|
3,036,951,809
|
[MPS] TensorIterator and accuracy
|
malfet
|
open
|
[
"triaged",
"enhancement",
"module: mps"
] | 1
|
CONTRIBUTOR
|
### 🚀 The feature, motivation and pitch
To capture followups from https://github.com/pytorch/pytorch/pull/152515, namely:
- How and when opmath_t should be used: I can't think of a scenario where it would've been needed for simple ops like add/sub, but I've noticed it caused accuracy issues in test_output_grad_match_sinc_mps_float16, where float16 tensors were multiplied by a float32 scalar (see the sketch after this list)
- But perhaps this is only applicable when the iterator arguments have mixed data types
- Also, right now the iterator is executed for the output dtype, but maybe it should be done in common_dtype
- Also, the semantics of the two `val_at_offs` overloads are somewhat confusing; perhaps they should be named differently
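An illustration (my example, not from this issue) of why a wider "opmath" type can matter for half precision: with float16 intermediates the `alpha * y` term overflows to inf, while float32 intermediates give a finite result.
```python
import torch

x = torch.tensor([-60000.0], dtype=torch.float16)
y = torch.tensor([60000.0], dtype=torch.float16)
alpha = 2.0

naive = x + y * torch.tensor(alpha, dtype=torch.float16)    # float16 intermediate -> inf
opmath = (x.float() + y.float() * alpha).to(torch.float16)  # float32 intermediate -> 60000
print(naive, opmath)
```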
### Alternatives
_No response_
### Additional context
_No response_
cc @kulinseth @albanD @DenisVieriu97 @jhavukainen
| true
|
3,036,931,166
|
[ca][ddp] loud error instead of silent incorrectness under C++ Reducer
|
xmfan
|
open
|
[
"oncall: distributed",
"release notes: distributed (c10d)",
"module: inductor",
"ciflow/inductor"
] | 1
|
MEMBER
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #152735
* #152689
The C++ Reducer is silently incorrect under CA: its implementation no-ops the collective. I'm guessing it was no-op'd because, in DDP + python reducer, the C++ reducer is still being initialized.
cc @H-Huang @awgu @wanchaol @fegin @fduwjj @wz337 @wconstab @d4l3k @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
| true
|
3,036,839,797
|
docs: fix dead link in torch.compile docs
|
aniruddh-alt
|
open
|
[
"triaged",
"open source",
"topic: not user facing"
] | 6
|
NONE
|
Fixes #119272
This PR fixes a dead link in the `torch.compile` documentation. The original link to the "Registering Custom Backends" section was:
https://pytorch.org/docs/main/torch.compiler_custom_backends.html#registering-custom-backends
This has been updated to the correct and stable URL:
https://pytorch.org/docs/stable/torch.compiler_custom_backends.html#registering-custom-backends
**How to test:**
- Build the documentation locally (`make html` in the `docs` directory).
- Navigate to the `torch.compile` docs page in the generated HTML and verify that the "Registering Custom Backends" link now works and points to the correct section.

No other changes were made.
| true
|
3,036,829,756
|
[Cutlass] Handle broadcasting in EVT python codegen
|
mlazos
|
closed
|
[
"Merged",
"module: inductor",
"ciflow/inductor",
"release notes: inductor"
] | 2
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #152815
* #150907
* #151406
* #150906
* __->__ #152733
Previously merged:
* #151713
* #151405
* #150905
* #152306
* #152305
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
| true
|
3,036,829,387
|
Make DispatchKeySet serializable; add `__eq__`
|
zou3519
|
closed
|
[
"Merged",
"ciflow/trunk",
"topic: not user facing"
] | 6
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #152732
These seem like reasonable things to add. Also fixes a bug in vLLM for
me.
Test Plan:
- new tests
| true
|
3,036,824,020
|
Inconsistent float16 overflow behavior between CPU and CUDA devices
|
SilentTester73
|
open
|
[
"module: cuda",
"low priority",
"triaged",
"module: half",
"actionable",
"module: edge cases"
] | 3
|
NONE
|
### 🐛 Describe the bug
## Issue Description
There appears to be inconsistent behavior in how PyTorch handles float16 overflow conditions between CPU and CUDA devices. When performing `torch.add()` with an extremely large alpha value (beyond float16 range) on float16 tensors, CPU correctly throws an overflow error while CUDA silently completes the operation without error.
## Environment
- PyTorch version: 2.6.0+cu124
- Python version: 3.11
- Device setups tested:
- CPU
- CUDA
## Steps to Reproduce
I've created a minimal reproduction case at: https://colab.research.google.com/drive/1luRKi7O-wycVJeVSExFsCsc7HykH9lvN?usp=sharing
```python
import torch
import sys
# Test on CPU
device = torch.device("cpu")
print(f"Using device: {device}")
# Create sample tensors with float16 precision
tensor_shape = (8, 1, 1, 1)
tensor_dtype = torch.float16
tensor1 = torch.randn(tensor_shape, dtype=tensor_dtype, device=device) * 1e-5
tensor2 = torch.randn(tensor_shape, dtype=tensor_dtype, device=device) * 1e-5
print(f"Tensor shapes: {tensor1.shape}, dtype: {tensor1.dtype}")
# Use extremely large alpha value that exceeds float16 range
alpha = 1.04119e+32
f16_max = 65504.0 # Max for float16
print(f"Alpha: {alpha}, Max float16 value: {f16_max}")
# Attempt the operation
try:
result = torch.add(tensor1, tensor2, alpha=alpha)
print("Addition successful!")
except RuntimeError as e:
print(f"Error occurred: {e}")
# Now test with CUDA
if torch.cuda.is_available():
device = torch.device("cuda")
print(f"\nUsing device: {device}")
# Create same tensors on CUDA
tensor1 = torch.randn(tensor_shape, dtype=tensor_dtype, device=device) * 1e-5
tensor2 = torch.randn(tensor_shape, dtype=tensor_dtype, device=device) * 1e-5
# Attempt the same operation
try:
result = torch.add(tensor1, tensor2, alpha=alpha)
print("Addition successful!")
except RuntimeError as e:
print(f"Error occurred: {e}")
```
## Expected Behavior
Both CPU and CUDA implementations should behave consistently when handling potential float16 overflow conditions. Since alpha (1.04119e+32) far exceeds the maximum float16 value (65504.0), both implementations should either:
1. Throw a RuntimeError about overflow (as CPU does)
2. Handle the overflow
## Actual Behavior
- **CPU**: Correctly throws a RuntimeError: "value cannot be converted to type at::Half without overflow"
- **CUDA**: Silently proceeds with the operation and reports "Addition successful!"
### Versions
```
Collecting environment information...
PyTorch version: 2.6.0+cu124
Is debug build: False
CUDA used to build PyTorch: 12.4
ROCM used to build PyTorch: N/A
OS: Ubuntu 22.04.4 LTS (x86_64)
GCC version: (Ubuntu 11.4.0-1ubuntu1~22.04) 11.4.0
Clang version: 17.0.6 (++20231209124227+6009708b4367-1~exp1~20231209124336.77)
CMake version: version 3.22.1
Libc version: glibc-2.35
Python version: 3.10.12 (main, Feb 4 2025, 14:57:36) [GCC 11.4.0] (64-bit runtime)
Python platform: Linux-6.8.0-58-generic-x86_64-with-glibc2.35
Is CUDA available: True
CUDA runtime version: 12.5.82
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration: GPU 0: NVIDIA GeForce RTX 4090
Nvidia driver version: 570.133.20
cuDNN version: Could not collect
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 39 bits physical, 48 bits virtual
Byte Order: Little Endian
CPU(s): 32
On-line CPU(s) list: 0-31
Vendor ID: GenuineIntel
Model name: 13th Gen Intel(R) Core(TM) i9-13900F
CPU family: 6
Model: 183
Thread(s) per core: 2
Core(s) per socket: 24
Socket(s): 1
Stepping: 1
CPU max MHz: 5600.0000
CPU min MHz: 800.0000
BogoMIPS: 3993.60
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc art arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc cpuid aperfmperf tsc_known_freq pni pclmulqdq dtes64 monitor ds_cpl vmx est tm2 ssse3 sdbg fma cx16 xtpr pdcm sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm 3dnowprefetch cpuid_fault epb ssbd ibrs ibpb stibp ibrs_enhanced tpr_shadow flexpriority ept vpid ept_ad fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid rdseed adx smap clflushopt clwb intel_pt sha_ni xsaveopt xsavec xgetbv1 xsaves split_lock_detect user_shstk avx_vnni dtherm ida arat pln pts hwp hwp_notify hwp_act_window hwp_epp hwp_pkg_req hfi vnmi umip pku ospke waitpkg gfni vaes vpclmulqdq rdpid movdiri movdir64b fsrm md_clear serialize arch_lbr ibt flush_l1d arch_capabilities
Virtualization: VT-x
L1d cache: 896 KiB (24 instances)
L1i cache: 1.3 MiB (24 instances)
L2 cache: 32 MiB (12 instances)
L3 cache: 36 MiB (1 instance)
NUMA node(s): 1
NUMA node0 CPU(s): 0-31
Vulnerability Gather data sampling: Not affected
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Not affected
Vulnerability Reg file data sampling: Mitigation; Clear Register File
Vulnerability Retbleed: Not affected
Vulnerability Spec rstack overflow: Not affected
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Enhanced / Automatic IBRS; IBPB conditional; RSB filling; PBRSB-eIBRS SW sequence; BHI BHI_DIS_S
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Versions of relevant libraries:
[pip3] numpy==2.2.4
[pip3] nvidia-cublas-cu12==12.4.5.8
[pip3] nvidia-cuda-cupti-cu12==12.4.127
[pip3] nvidia-cuda-nvrtc-cu12==12.4.127
[pip3] nvidia-cuda-runtime-cu12==12.4.127
[pip3] nvidia-cudnn-cu12==9.1.0.70
[pip3] nvidia-cufft-cu12==11.2.1.3
[pip3] nvidia-curand-cu12==10.3.5.147
[pip3] nvidia-cusolver-cu12==11.6.1.9
[pip3] nvidia-cusparse-cu12==12.3.1.170
[pip3] nvidia-cusparselt-cu12==0.6.2
[pip3] nvidia-nccl-cu12==2.21.5
[pip3] nvidia-nvjitlink-cu12==12.4.127
[pip3] nvidia-nvtx-cu12==12.4.127
[pip3] torch==2.6.0
[pip3] triton==3.2.0
[conda] Could not collect
```
cc @ptrblck @msaroufim @eqy @jerryzh168
| true
|
3,036,809,687
|
[Dynamo] Guard serialization for SEQUENCE_LENGTH
|
jbschlosser
|
closed
|
[
"Merged",
"ciflow/trunk",
"topic: not user facing",
"module: dynamo",
"ciflow/inductor"
] | 3
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #152729
* #152961
* #152872
* #152865
* __->__ #152730
* #152728
* #152727
* #152725
Tests only; no other changes needed. Test logic uses a tuple function input to trigger installation of a SEQUENCE_LENGTH guard.
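For illustration only (my sketch of the kind of test input described, not code from this PR), passing a tuple into a compiled function makes Dynamo guard on the tuple's length:
```python
import torch

@torch.compile(fullgraph=True)
def f(xs):
    # Indexing the tuple argument means its length becomes part of the guard
    # set (a SEQUENCE_LENGTH check), so a tuple of a different length recompiles.
    return xs[0] + xs[1]

f((torch.ones(2), torch.ones(2)))
```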
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @chauhang @amjames
| true
|
3,036,809,593
|
[Dynamo] Guard serialization for BUILTIN_MATCH
|
jbschlosser
|
open
|
[
"ciflow/trunk",
"topic: not user facing",
"module: dynamo",
"ciflow/inductor"
] | 1
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #152729
* #152961
* #152872
* #152865
* #152730
* #152728
* #152727
* #152725
Unsupported because it uses the unsupported FUNCTION_MATCH guard.
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @chauhang @amjames
| true
|
3,036,784,151
|
[Dynamo] Guard serialization for CLOSURE_MATCH
|
jbschlosser
|
closed
|
[
"Merged",
"ciflow/trunk",
"topic: not user facing",
"module: dynamo",
"ciflow/inductor"
] | 3
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #152729
* #152961
* #152872
* #152865
* #152730
* __->__ #152728
* #152727
* #152725
Unsupported because it uses the unsupported FUNCTION_MATCH guard.
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @chauhang @amjames
| true
|
3,036,784,076
|
[Dynamo] Guard serialization for FUNCTION_MATCH
|
jbschlosser
|
closed
|
[
"Merged",
"ciflow/trunk",
"topic: not user facing",
"module: dynamo",
"ciflow/inductor"
] | 6
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #152729
* #152961
* #152872
* #152865
* #152730
* #152728
* __->__ #152727
* #152725
Unsupported because it uses the unsupported ID_MATCH guard.
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @chauhang @amjames
| true
|
3,036,771,036
|
Improve error wording in _link_check.yml
|
shoumikhin
|
closed
|
[
"Merged",
"topic: not user facing"
] | 3
|
CONTRIBUTOR
| null | true
|
3,036,740,597
|
[Dynamo] Guard serialization for NN_MODULE
|
jbschlosser
|
closed
|
[
"Merged",
"ciflow/trunk",
"topic: not user facing",
"module: dynamo",
"ciflow/inductor"
] | 13
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #152729
* #152961
* #152872
* #152865
* #152730
* #152728
* #152727
* __->__ #152725
Throws an error when attempting to serialize an NN_MODULE guard. It is not supported because it uses the unsupported ID_MATCH guard (#152330):
https://github.com/pytorch/pytorch/blob/a6dd1c2208f29a3169c1fe96bf4e79a10aa5647d/torch/_dynamo/guards.py#L1738-L1739
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @chauhang @amjames
| true
|
3,036,707,916
|
[Dynamo] Guard serialization for CONSTANT_MATCH
|
jbschlosser
|
closed
|
[
"Merged",
"ciflow/trunk",
"topic: not user facing",
"module: dynamo",
"ciflow/inductor"
] | 9
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #152961
* #152729
* #152872
* #152865
* #152730
* #152728
* #152727
* #152725
* __->__ #152724
* #152704
This PR adds testing only; no non-test changes were needed.
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @chauhang @amjames
| true
|
3,036,686,641
|
[dynamo] Guard serialization for DICT_KEYS_MATCH
|
zhxchen17
|
closed
|
[
"fb-exported",
"Merged",
"ciflow/trunk",
"topic: not user facing",
"module: dynamo",
"ciflow/inductor"
] | 4
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #152723
* #152721
* #152716
* #152687
* #152616
* #152615
DICT_KEYS_MATCH
Differential Revision: [D74091886](https://our.internmc.facebook.com/intern/diff/D74091886/)
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @chauhang @amjames
| true
|
3,036,674,378
|
Data dependent free reshape
|
laithsakka
|
open
|
[] | 2
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #152722
* #148872
```
import torch
# reshape (u0, u1) -> (u3, u4)
@torch.compile(fullgraph=True, dynamic=False)
def func2(x, y):
    t = torch.reshape(x, (y.size()[0], y.size()[0]))
    return t
x = torch.arange(8)
x = x.as_strided((4,), (2,))
torch._dynamo.decorators.mark_unbacked(x, 0)
y = torch.rand(2)
torch._dynamo.decorators.mark_unbacked(y, 0)
# torch._dynamo.decorators.mark_unbacked(z, 0)
func2(x, y)
```
| true
|
3,036,670,168
|
[dynamo] Guard serialization for MAPPING_KEYS_CHECK
|
zhxchen17
|
closed
|
[
"fb-exported",
"Merged",
"ciflow/trunk",
"topic: not user facing",
"module: dynamo",
"ciflow/inductor"
] | 3
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #152723
* __->__ #152721
* #152716
* #152687
* #152616
* #152615
MappingProxyType
Differential Revision: [D74091363](https://our.internmc.facebook.com/intern/diff/D74091363/)
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @chauhang @amjames
| true
|
3,036,641,460
|
When scoped_libary is destroyed the fake impls are not cleared
|
ydwu4
|
open
|
[
"triaged",
"module: custom-operators",
"oncall: pt2",
"module: pt2-dispatcher"
] | 2
|
CONTRIBUTOR
|
### 🐛 Describe the bug
```python
import torch
with torch.library._scoped_library("mylib", "FRAGMENT") as lib:
    torch.library.define(
        "mylib::foo",
        "(Tensor a, Tensor b, Tensor? c) -> (Tensor, Tensor?)",
        tags=torch.Tag.pt2_compliant_tag,
        lib=lib,
    )

    @torch.library.register_fake("mylib::foo")
    def foo_impl(a, b, c):
        res2 = None
        if c is not None:
            res2 = c + a + b
        return a + b, res2

with torch.library._scoped_library("mylib", "FRAGMENT") as lib:
    torch.library.define(
        "mylib::foo",
        "(Tensor a, Tensor b, Tensor? c) -> (Tensor, Tensor?)",
        tags=torch.Tag.pt2_compliant_tag,
        lib=lib,
    )

    @torch.library.register_fake("mylib::foo")
    def foo_impl(a, b, c):
        res2 = None
        if c is not None:
            res2 = c + a + b
        return a + b, res2
```
This gives us
```
RuntimeError: register_fake(...): the operator mylib::foo already has an fake impl registered at /data/users/yidi/pytorch/test.py:13.
```
This happens when I try to parametrize a unit test that registers a custom op. It may take some refactoring to make this work. For now, we could bypass the error by setting allow_override=True.
### Versions
I'm on master
cc @chauhang @penguinwu @zou3519 @bdhirsh
| true
|
3,036,633,074
|
[BE][CI] Merge regular and MPS test config shards
|
malfet
|
open
|
[
"ciflow/trunk",
"topic: not user facing",
"ciflow/mps"
] | 1
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #152719
* #153057
Unsure why these were separate to begin with
| true
|
3,036,631,800
|
[Mergebot] Adding ciflow/pull in PR without pull and lint workflows
|
huydhn
|
open
|
[
"module: ci",
"triaged"
] | 1
|
CONTRIBUTOR
|
### 🚀 The feature, motivation and pitch
To help mitigate https://github.com/pytorch/pytorch/issues/152697, we now have `ciflow/pull` from https://github.com/pytorch/pytorch/pull/152567 to trigger the missing pull and lint workflows manually. The logical next step here is to let mergebot do this instead:
* When the `@pytorchbot merge` command is issued, add `ciflow/pull` if the pull or lint workflows are missing. The bot already does this for `ciflow/trunk`, so the mechanism is there.
* Double check that pull and lint actually start after adding `ciflow/pull` (don't take GitHub's word for it), and retry if needed (remove the tag and push a new one)
### Alternatives
_No response_
### Additional context
_No response_
cc @seemethere @malfet @pytorch/pytorch-dev-infra
| true
|
3,036,623,131
|
Cannot mask a DTensor
|
pbontrager
|
open
|
[
"oncall: distributed",
"triaged",
"module: dtensor"
] | 1
|
NONE
|
### 🐛 Describe the bug
I'm attempting to mask a sharded DTensor and everything that I've tried has failed so far. I have a DTensor sharded on the last dim and I create a boolean mask to select a subset of the tensor (selecting from earlier dims, not the sharded dim). I have tried selecting using both a local mask and a replicated DTensor mask; I've also tried using torch.masked_select and regular tensor[mask] indexing. I am not sure if this is a bug in DTensor or if I'm attempting to do this wrong.
```python
import torch
from torch.distributed import init_process_group, get_rank
from torch.distributed.device_mesh import init_device_mesh
from torch.distributed.tensor import distribute_tensor
init_process_group('nccl')
torch.cuda.set_device(get_rank())
mesh = init_device_mesh('cuda', (2,), mesh_dim_names=('tp',))
embed = torch.randn(4, 32, 64)
d_embed = distribute_tensor(embed, mesh)
mask = (torch.arange(4*32, device='cuda').reshape(4, 32) % 2).bool()
d_embed[mask]
```
Run the above with `torchrun --nproc_per_node 2 mask_dtensor.py` and I get `RuntimeError: aten.index.Tensor: got mixed torch.Tensor and DTensor, need to convert all torch.Tensor to DTensor before calling distributed operators!`.
I can also distribute the mask with `d_mask = distribute_tensor(mask, mesh)`, which gives me the error `torch._subclasses.fake_tensor.DynamicOutputShapeException: aten.nonzero.default`
### Versions
Versions of relevant libraries:
[pip3] numpy==2.1.2
[pip3] nvidia-cublas-cu12==12.6.4.1
[pip3] nvidia-cuda-cupti-cu12==12.6.80
[pip3] nvidia-cuda-nvrtc-cu12==12.6.77
[pip3] nvidia-cuda-runtime-cu12==12.6.77
[pip3] nvidia-cudnn-cu12==9.5.1.17
[pip3] nvidia-cufft-cu12==11.3.0.4
[pip3] nvidia-curand-cu12==10.3.7.77
[pip3] nvidia-cusolver-cu12==11.7.1.2
[pip3] nvidia-cusparse-cu12==12.5.4.2
[pip3] nvidia-cusparselt-cu12==0.6.3
[pip3] nvidia-nccl-cu12==2.26.2
[pip3] nvidia-nvjitlink-cu12==12.6.85
[pip3] nvidia-nvtx-cu12==12.6.77
[pip3] pytorch_sphinx_theme==0.0.24
[pip3] pytorch-triton==3.3.0+git96316ce5
[pip3] torch==2.8.0.dev20250416+cu126
[pip3] torchao==0.10.0
[pip3] torchaudio==2.6.0.dev20250416+cu126
[pip3] torchdata==0.11.0
[pip3] torchtune==0.0.0
[pip3] torchvision==0.22.0.dev20250416+cu126
[pip3] triton==3.3.0
[conda] numpy 2.1.2 pypi_0 pypi
[conda] nvidia-cublas-cu12 12.6.4.1 pypi_0 pypi
[conda] nvidia-cuda-cupti-cu12 12.6.80 pypi_0 pypi
[conda] nvidia-cuda-nvrtc-cu12 12.6.77 pypi_0 pypi
[conda] nvidia-cuda-runtime-cu12 12.6.77 pypi_0 pypi
[conda] nvidia-cudnn-cu12 9.5.1.17 pypi_0 pypi
[conda] nvidia-cufft-cu12 11.3.0.4 pypi_0 pypi
[conda] nvidia-curand-cu12 10.3.7.77 pypi_0 pypi
[conda] nvidia-cusolver-cu12 11.7.1.2 pypi_0 pypi
[conda] nvidia-cusparse-cu12 12.5.4.2 pypi_0 pypi
[conda] nvidia-cusparselt-cu12 0.6.3 pypi_0 pypi
[conda] nvidia-nccl-cu12 2.26.2 pypi_0 pypi
[conda] nvidia-nvjitlink-cu12 12.6.85 pypi_0 pypi
[conda] nvidia-nvtx-cu12 12.6.77 pypi_0 pypi
[conda] pytorch-sphinx-theme 0.0.24 dev_0 <develop>
[conda] pytorch-triton 3.3.0+git96316ce5 pypi_0 pypi
[conda] torch 2.8.0.dev20250416+cu126 pypi_0 pypi
[conda] torchao 0.10.0 pypi_0 pypi
[conda] torchaudio 2.6.0.dev20250416+cu126 pypi_0 pypi
[conda] torchdata 0.11.0 pypi_0 pypi
[conda] torchtune 0.0.0 pypi_0 pypi
[conda] torchvision 0.22.0.dev20250416+cu126 pypi_0 pypi
[conda] triton 3.3.0 pypi_0 pypi
cc @H-Huang @awgu @wanchaol @fegin @fduwjj @wz337 @wconstab @d4l3k @tianyu-l @XilunWu
| true
|
3,036,621,496
|
[dynamo] Guard serialization for WEAKREF_ALIVE
|
zhxchen17
|
closed
|
[
"fb-exported",
"Merged",
"ciflow/trunk",
"topic: not user facing",
"module: dynamo",
"ciflow/inductor"
] | 3
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #152723
* #152721
* __->__ #152716
* #152687
* #152616
* #152615
Punt on WEAKREF_ALIVE, as weakrefs won't live across processes and users might need to drop them upfront.
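For context, a minimal illustration (mine, not from this diff) of why a weakref cannot survive serialization:
```python
import pickle
import weakref

class Foo:
    pass

obj = Foo()
ref = weakref.ref(obj)
try:
    pickle.dumps(ref)
except Exception as e:
    # weakref objects cannot be pickled, so a WEAKREF_ALIVE guard cannot be
    # carried over to another process.
    print(e)
```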
Differential Revision: [D74088735](https://our.internmc.facebook.com/intern/diff/D74088735/)
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @chauhang @amjames
| true
|
3,036,606,408
|
[BE] remove outdated warning about TORCH_CUDA_ARCH_LIST
|
janeyx99
|
closed
|
[
"Merged",
"ciflow/trunk",
"release notes: build"
] | 3
|
CONTRIBUTOR
|
I saw this warning when compiling a 3rd party lib and did not agree with it. I'm not sure of the original reason why we would want to force people to pass TORCH_CUDA_ARCH_LIST to cmake vs. setting it as an env var. As a developer, it's much easier to set it as an env var or have it be autodetected. I also realized this warning was from before 2018!!! 7 years ago! And there are no plans to actually enforce this (nor should there be), so let's remove this misleading warning.
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #152715
| true
|
3,036,602,619
|
[caffe2] Make c10::str works with scoped enum (#152705)
|
Mizuchi
|
open
|
[
"fb-exported",
"ciflow/trunk",
"topic: not user facing"
] | 5
|
NONE
|
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/152705
Test Plan:
```
buck2 test fbcode//caffe2/c10/test:util_base_tests --fail-fast
```
Differential Revision: D74087796
| true
|
3,036,590,673
|
remove conda from devcontainer
|
wdvr
|
open
|
[
"topic: not user facing"
] | 1
|
CONTRIBUTOR
|
Fixes #148341
| true
|
3,036,580,692
|
dtensors TP+DP issues
|
NouamaneTazi
|
closed
|
[
"oncall: distributed"
] | 4
|
NONE
|
## 🐛 Bug/Feature Request: Issues with DTensor, Replicate, and Cross-Mesh AllReduce in PyTorch 2.6
**PyTorch version:** 2.6.0+cu124
**Setup:** TP=2, DP=2
**APIs used:** `torch.distributed._composable.replicate`, DTensor, DeviceMesh
### Summary
I'm experimenting with minimal tensor parallel (TP) and data parallel (DP) training using the new PyTorch distributed APIs. I encountered two issues:
---
### 1. `model.parameters()`/`state_dict()` changes after forward pass
**Repro:**
```python
from torch.distributed._composable.replicate import replicate
# model, dp_mesh are already defined
model = replicate(model, device_mesh=dp_mesh, bucket_cap_mb=100)
print(list(model.named_parameters())) # Shows all parameters
outputs = model(**batch) # Forward pass
print(list(model.named_parameters())) # Only returns [('lm_head.weight', ...)]
```
**Observation:**
After the forward pass, `model.parameters()` and `state_dict()` only return the tied parameter (`lm_head.weight` in my case). Before the forward, all parameters are present.
**Question:**
Is this expected? Why do the parameters disappear after the forward pass?
---
### 2. Cross-mesh DTensor gradient allreduce not supported
I want to allreduce gradients across the DP mesh. I tried:
```python
if dp_mesh.size() > 1:
for name, param in model.named_parameters():
param.grad = param.grad.redistribute(
device_mesh=dp_mesh,
placements=[Replicate()]
)
```
**Error:**
```
NotImplementedError: Cross device mesh comm not supported yet!
```
**Workaround:**
```python
if dp_mesh.size() > 1:
for name, param in model.named_parameters():
local_grad = param.grad.to_local()
torch.distributed.all_reduce(
local_grad,
op=torch.distributed.ReduceOp.AVG,
group=dp_mesh.get_group()
)
param.grad._local_tensor = local_grad
```
**Feedback/Request:**
- Is it possible to support cross-mesh communication for DTensor, or at least allow passing a process group to `redistribute`?
- Alternatively, could there be a simple way to allreduce a DTensor over any process group?
---
### Additional Info
- I was using `tp_mesh` to initialize, but want to do comms on another mesh (`dp_mesh`). I believe this should be fine since only the process group is needed for the allreduce.
- you can find a full minimal script [here](https://github.com/huggingface/transformers/blob/c3e5c5eda019de0d16a2091914858c76633473a7/test_train.py)
---
**cc:** @wconstab @albanD
### Versions
**PyTorch version:** 2.6.0+cu124
cc @H-Huang @awgu @wanchaol @fegin @fduwjj @wz337 @wconstab @d4l3k
| true
|
3,036,578,350
|
Rename "startup-tracing-compile" to "compile-time" in label_to_label.yml
|
masnesral
|
closed
|
[
"Merged",
"ciflow/trunk",
"topic: not user facing"
] | 3
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #152711
| true
|
3,036,570,904
|
[FSDP2] NO_SHARD as fully_shard(mesh=(Replicate, Shard)) with shard of world size 1
|
weifengpy
|
open
|
[
"oncall: distributed",
"triaged"
] | 4
|
CONTRIBUTOR
|
### 🚀 The feature, motivation and pitch
FSDP1 has NO_SHARD, meaning no all-gather/reduce-scatter, just all-reduce for gradients. It's useful for modules with low flops or a small number of parameters.
FSDP2/HSDP2 can support the same with a 2D device mesh (see the sketch after this list):
* fully_shard(mesh=(Replicate, Shard))
* Shard world size of 1, Replicate world size of N
* Gradients will be all-reduced in the backward
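A rough sketch of what this could look like (my assumption of the spelling, not code from this issue; assumes a torchrun launch with one GPU per rank):
```python
import torch
import torch.distributed as dist
import torch.nn as nn
from torch.distributed.device_mesh import init_device_mesh
from torch.distributed.fsdp import fully_shard

dist.init_process_group("nccl")
torch.cuda.set_device(dist.get_rank())
model = nn.Linear(8, 8).cuda()

# Shard dimension of world size 1: parameters stay whole on every rank and
# only the gradient all-reduce over the replicate dimension happens.
mesh = init_device_mesh(
    "cuda", (dist.get_world_size(), 1), mesh_dim_names=("replicate", "shard")
)
fully_shard(model, mesh=mesh)
```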
NO_SHARD vs DDP
* DDP wraps the root model (it does not need nested wrapping). It uses gradient buckets to overlap all-reduce with compute
* NO_SHARD needs nested wrapping to overlap all-reduce with compute
* For DDP + mixed precision (torch.amp), the casting happens at the per-parameter level
* For NO_SHARD + mixed precision, casting happens per parameter group and is more performant
When to use NO_SHARD instead of DDP? If a model is already FSDP-parallelized, users can use NO_SHARD to give special treatment to small modules.
What would users do if FSDP2 does not support NO_SHARD?
* use FSDP1 NO_SHARD and do not migrate to FSDP2
* migrate to FSDP2 but initialize DDP, ignore the FSDP parameters, and have users implement mixed precision through torch.amp
There are a few things to discuss.
What should `model.parameters()` return?
* DTensor(placement=Replicate, Shard)
* plain tensor
I would prefer DTensor(placement=Replicate, Shard) to align with the mesh.
Should we support fully_shard(mesh=Replicate) to officially support NO_SHARD? Internally we can reuse the HSDP2 logic. The priority seems low since HSDP can support it already.
Performance-wise, should we have a fast path to avoid copy-in/copy-out and AG/RS with a process group of size 1? This requires some prototyping to understand whether it makes the code harder to follow; the all-gather and gradient-reduction stream dependencies are complicated.
### Alternatives
_No response_
### Additional context
_No response_
cc @H-Huang @awgu @wanchaol @fegin @fduwjj @wz337 @wconstab @d4l3k
| true
|
3,036,564,502
|
Move warning from item to specific number conversions
|
albanD
|
closed
|
[
"Merged",
"ciflow/trunk",
"release notes: python_frontend",
"topic: improvements"
] | 3
|
COLLABORATOR
|
Follow-up to https://github.com/pytorch/pytorch/pull/143261 so that a plain `.item()` call does not warn.
| true
|
3,036,559,781
|
Scheduler Flops refactor
|
exclamaforte
|
open
|
[
"topic: improvements",
"module: inductor",
"ciflow/inductor",
"release notes: inductor"
] | 2
|
CONTRIBUTOR
|
This refactors `estimate_flops` and `get_estimated_runtime` on scheduler nodes:
1. New function on BaseSchedulerNode: `estimate_flops`. Works with all types of IR nodes now, not just `ExternalKernels`.
2. Extends `get_estimated_runtime` to work with non-`ExternalKernels`.
Prelude to: https://github.com/pytorch/pytorch/pull/149697
Testing:
New unit tests cover functionality.
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
| true
|
3,036,551,436
|
[Memento] Add PT2 to Memory Snapshot
|
sraikund16
|
open
|
[
"enhancement",
"fb-exported",
"release notes: profiler",
"module: dynamo"
] | 5
|
CONTRIBUTOR
|
Summary:
To add PT2 information to the memory snapshot, we piggyback off of the Kineto implementation using record_function, similar to how user annotations are added. To do this we add the following (a toy sketch of item 1 follows the list):
1. A stack implementation that we instantiate to keep track of which compile context we are currently in (the top element of the stack). The stack is per device and thread-local, since different threads of a process can be in different compile contexts at a given time. For this reason, we do not need to add mutexes to our stack impl, since no two threads will touch a given stack.
2. RecordFunction hooks to pipe the correct events to the compile context stack. These hooks are similar to the annotation ones in that we register them lazily and DO NOT unregister them. This is done out of convenience. In the future, we should save the handles and unregister them to minimize overhead after profiling is finished. As of now, we register them at the FUNCTION scope, which is wide; however, we treat any function that does not start with "Torch-Compiled Region" as a no-op, so we anticipate the difference in performance to be negligible during and after profiling.
3. Piping for the compile context to the pickle output.
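A toy Python sketch of the per-device, thread-local stack described in item 1 (an illustration only, not the actual implementation):
```python
import threading
from collections import defaultdict

_tls = threading.local()

def _stacks():
    # One stack per device, stored in thread-local storage, so no locking is
    # needed: each thread only ever touches its own stacks.
    if not hasattr(_tls, "stacks"):
        _tls.stacks = defaultdict(list)
    return _tls.stacks

def push_compile_context(device: int, ctx: str) -> None:
    _stacks()[device].append(ctx)

def pop_compile_context(device: int):
    stack = _stacks()[device]
    return stack.pop() if stack else None

def current_compile_context(device: int):
    stack = _stacks()[device]
    return stack[-1] if stack else None
```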
Test Plan:
In D74039793, we add CompileContext to the visualizer and we see the following {F1977654658}
Differential Revision: D74028214
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @chauhang @amjames
| true
|
3,036,547,129
|
[PGNCCL] Add FP8 support
|
kwen2501
|
closed
|
[
"oncall: distributed",
"Merged",
"ciflow/trunk",
"release notes: distributed (c10d)"
] | 10
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #152706
NCCL added support for `Float8e4m3` and `Float8e5m2` in 2.24.
NVIDIA GPUs do not seem to support the following "no negative zero" versions: `Float8_e4m3fnuz` and `Float8_e5m2fnuz`; see https://onnx.ai/onnx/technical/float8.html. So we continue to error out for these two upon a reduction op.
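A hedged sketch of the newly allowed usage (my example, not from this PR; assumes a torchrun launch and NCCL >= 2.24 per the description above):
```python
import torch
import torch.distributed as dist

dist.init_process_group("nccl")
torch.cuda.set_device(dist.get_rank())
t = torch.ones(8, device="cuda").to(torch.float8_e4m3fn)
dist.all_reduce(t)  # float8 reduction, previously rejected
```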
Test plan:
- test_allreduce_float8
- test_reduce_scatter_float8
Resolves #148344
cc @H-Huang @awgu @wanchaol @fegin @fduwjj @wz337 @wconstab @d4l3k
| true
|
3,036,518,703
|
[caffe2] Make c10::str works with scoped enum
|
Mizuchi
|
closed
|
[
"fb-exported",
"topic: not user facing"
] | 6
|
NONE
|
Test Plan:
```
buck2 test fbcode//caffe2/c10/test:util_base_tests --fail-fast
```
Differential Revision: D73872860
| true
|
3,036,517,292
|
[Dynamo] Guard serialization for EQUALS_MATCH
|
jbschlosser
|
closed
|
[
"Merged",
"ciflow/trunk",
"topic: not user facing",
"module: dynamo",
"ciflow/inductor"
] | 10
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #152961
* #152729
* #152872
* #152865
* #152730
* #152728
* #152727
* #152725
* #152724
* __->__ #152704
This PR:
* Makes no changes to non-test code to support serialization for EQUALS_MATCH
* Adds test logic involving a custom-defined constant type to trigger the guard installation here:
https://github.com/pytorch/pytorch/blob/72337bdcf2f86eb72f289fdbd7eb63fd664aaa86/torch/_dynamo/variables/user_defined.py#L792
Q: Is there a better way to trigger installation of this guard or is this sufficient?
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @chauhang @amjames
| true
|
3,036,509,818
|
Gradient can be backpropagated through only certain distributions
|
AlbertoSinigaglia
|
closed
|
[
"module: distributions",
"module: autograd",
"triaged"
] | 4
|
NONE
|
### 🐛 Describe the bug
Using Normal, I can avoid having to preserve gradients:
```python
params = torch.tensor([[0.2], [0.3]]).float().to(device)
params.requires_grad = True
with torch.no_grad():
    distr = torch.distributions.normal.Normal(params, params*0+1)
    sample = distr.sample().squeeze()
distr.log_prob(sample).mean().backward()
print(params.grad)
# runs fine
```
Using Categorical, this is not the case:
```python
params = torch.tensor([[0.2, 0.8], [0.3, 0.7]]).float().to(device)
params.requires_grad = True
with torch.no_grad():
    distr = torch.distributions.categorical.Categorical(params)
    sample = distr.sample().squeeze()
distr.log_prob(sample).mean().backward()
print(params.grad)
# RuntimeError: element 0 of tensors does not require grad and does not have a grad_fn
```
I'm not sure why this is the case. The log-probs might be something like `torch.log(probs[one_hot(samples)].mean(axis=-1))`, which is completely differentiable, so it should be possible to do this even when the distribution itself is constructed without gradient tracking.
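A minimal sketch of that point (my illustration, not from the issue): recomputing the log-prob directly from `params` keeps the graph, so backward works even though sampling happened under `no_grad`.
```python
import torch

params = torch.tensor([[0.2, 0.8], [0.3, 0.7]], requires_grad=True)
with torch.no_grad():
    sample = torch.distributions.Categorical(params).sample()

# Normalize and gather the sampled probabilities outside of no_grad.
probs = params / params.sum(-1, keepdim=True)
log_prob = torch.log(probs.gather(-1, sample.unsqueeze(-1)).squeeze(-1))
log_prob.mean().backward()
print(params.grad)  # populated
```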
### Versions
```
Collecting environment information...
PyTorch version: 2.7.0+cu126
Is debug build: False
CUDA used to build PyTorch: 12.6
ROCM used to build PyTorch: N/A
OS: Debian GNU/Linux 12 (bookworm) (x86_64)
GCC version: (Debian 12.2.0-14) 12.2.0
Clang version: Could not collect
CMake version: version 3.25.1
Libc version: glibc-2.36
Python version: 3.10.14 | packaged by conda-forge | (main, Mar 20 2024, 12:45:18) [GCC 12.3.0] (64-bit runtime)
Python platform: Linux-6.1.0-31-amd64-x86_64-with-glibc2.36
Is CUDA available: True
CUDA runtime version: Could not collect
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration:
GPU 0: NVIDIA L40S
GPU 1: NVIDIA L40S
GPU 2: NVIDIA L40S
GPU 3: NVIDIA L40S
GPU 4: NVIDIA L40S
GPU 5: NVIDIA L40S
Nvidia driver version: 535.216.01
cuDNN version: Could not collect
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 52 bits physical, 57 bits virtual
Byte Order: Little Endian
CPU(s): 96
On-line CPU(s) list: 0-95
Vendor ID: AuthenticAMD
Model name: AMD EPYC 9224 24-Core Processor
CPU family: 25
Model: 17
Thread(s) per core: 2
Core(s) per socket: 24
Socket(s): 2
Stepping: 1
Frequency boost: enabled
CPU(s) scaling MHz: 58%
CPU max MHz: 3706.0540
CPU min MHz: 1500.0000
BogoMIPS: 4999.84
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ht syscall nx mmxext fxsr_opt pdpe1gb rdtscp lm constant_tsc rep_good amd_lbr_v2 nopl nonstop_tsc cpuid extd_apicid aperfmperf rapl pni pclmulqdq monitor ssse3 fma cx16 pcid sse4_1 sse4_2 x2apic movbe popcnt aes xsave avx f16c rdrand lahf_lm cmp_legacy svm extapic cr8_legacy abm sse4a misalignsse 3dnowprefetch osvw ibs skinit wdt tce topoext perfctr_core perfctr_nb bpext perfctr_llc mwaitx cpb cat_l3 cdp_l3 invpcid_single hw_pstate ssbd mba perfmon_v2 ibrs ibpb stibp ibrs_enhanced vmmcall fsgsbase bmi1 avx2 smep bmi2 erms invpcid cqm rdt_a avx512f avx512dq rdseed adx smap avx512ifma clflushopt clwb avx512cd sha_ni avx512bw avx512vl xsaveopt xsavec xgetbv1 xsaves cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local avx512_bf16 clzero irperf xsaveerptr rdpru wbnoinvd amd_ppin cppc arat npt lbrv svm_lock nrip_save tsc_scale vmcb_clean flushbyasid decodeassists pausefilter pfthreshold avic v_vmsave_vmload vgif x2avic v_spec_ctrl avx512vbmi umip pku ospke avx512_vbmi2 gfni vaes vpclmulqdq avx512_vnni avx512_bitalg avx512_vpopcntdq la57 rdpid overflow_recov succor smca fsrm flush_l1d
Virtualization: AMD-V
L1d cache: 1.5 MiB (48 instances)
L1i cache: 1.5 MiB (48 instances)
L2 cache: 48 MiB (48 instances)
L3 cache: 128 MiB (8 instances)
NUMA node(s): 2
NUMA node0 CPU(s): 0-23,48-71
NUMA node1 CPU(s): 24-47,72-95
Vulnerability Gather data sampling: Not affected
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Not affected
Vulnerability Reg file data sampling: Not affected
Vulnerability Retbleed: Not affected
Vulnerability Spec rstack overflow: Mitigation; safe RET
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Enhanced / Automatic IBRS; IBPB conditional; STIBP always-on; RSB filling; PBRSB-eIBRS Not affected; BHI Not affected
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Versions of relevant libraries:
[pip3] numpy==2.2.5
[pip3] nvidia-cublas-cu12==12.6.4.1
[pip3] nvidia-cuda-cupti-cu12==12.6.80
[pip3] nvidia-cuda-nvrtc-cu12==12.6.77
[pip3] nvidia-cuda-runtime-cu12==12.6.77
[pip3] nvidia-cudnn-cu12==9.5.1.17
[pip3] nvidia-cufft-cu12==11.3.0.4
[pip3] nvidia-curand-cu12==10.3.7.77
[pip3] nvidia-cusolver-cu12==11.7.1.2
[pip3] nvidia-cusparse-cu12==12.5.4.2
[pip3] nvidia-cusparselt-cu12==0.6.3
[pip3] nvidia-nccl-cu12==2.26.2
[pip3] nvidia-nvjitlink-cu12==12.6.85
[pip3] nvidia-nvtx-cu12==12.6.77
[pip3] torch==2.7.0
[pip3] torchvision==0.22.0
[pip3] triton==3.3.0
[conda] No relevant packages
```
cc @fritzo @neerajprad @alicanb @nikitaved @ezyang @albanD @gqchen @soulitzer @Varal7 @xmfan
| true
|
3,036,507,871
|
Removing conda references from PyTorch Docs
|
anitakat
|
open
|
[
"topic: not user facing"
] | 3
|
NONE
|
Addresses #148339
| true
|
3,036,502,874
|
MPS internal assertion with jacfwd and concatenation
|
inventshah
|
open
|
[
"module: crash",
"triaged",
"module: mps",
"module: functorch"
] | 2
|
CONTRIBUTOR
|
### 🐛 Describe the bug
Using `torch.func.jacfwd` on a function that contains a concatenation type operator (e.g., `torch.stack`, `torch.cat`, `torch.vstack`) triggers an assertion ```RuntimeError: !self.is_mps() INTERNAL ASSERT FAILED at "/Users/runner/work/pytorch/pytorch/pytorch/aten/src/ATen/native/TensorShape.cpp":1408, please report a bug to PyTorch. as_strided_tensorimpl does not work with MPS; call self.as_strided(...) instead```
Minimal repro:
```python
import torch
def example(x, y):
    return torch.cat((x, y))
jac = torch.func.jacfwd(example)
x = torch.tensor([0.0], device="mps")
jac(x, x)
```
Note `torch.func.jacrev` does not cause the error. Looks related to #111547.
Ran with `TORCH_SHOW_CPP_STACKTRACES=1`
```
RuntimeError: !self.is_mps() INTERNAL ASSERT FAILED at "/Users/runner/work/pytorch/pytorch/pytorch/aten/src/ATen/native/TensorShape.cpp":1408, please report a bug to PyTorch. as_strided_tensorimpl does not work with MPS; call self.as_strided(...) instead
Exception raised from as_strided_tensorimpl at /Users/runner/work/pytorch/pytorch/pytorch/aten/src/ATen/native/TensorShape.cpp:1408 (most recent call first):
frame #0: c10::Error::Error(c10::SourceLocation, std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char>>) + 52 (0x101d9bfd8 in libc10.dylib)
frame #1: c10::detail::torchCheckFail(char const*, char const*, unsigned int, std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char>> const&) + 140 (0x101d98c4c in libc10.dylib)
frame #2: c10::detail::torchInternalAssertFail(char const*, char const*, unsigned int, char const*, char const*) + 72 (0x101d98e4c in libc10.dylib)
frame #3: at::native::as_strided_tensorimpl(at::Tensor const&, c10::ArrayRef<long long>, c10::ArrayRef<long long>, std::__1::optional<long long>) + 472 (0x1387f50b8 in libtorch_cpu.dylib)
frame #4: c10::impl::wrap_kernel_functor_unboxed_<c10::impl::detail::WrapFunctionIntoFunctor_<c10::CompileTimeFunctionPointer<at::Tensor (at::Tensor const&, c10::ArrayRef<c10::SymInt>, c10::ArrayRef<c10::SymInt>, std::__1::optional<c10::SymInt>), &at::(anonymous namespace)::(anonymous namespace)::wrapper_ZeroTensor__as_strided(at::Tensor const&, c10::ArrayRef<c10::SymInt>, c10::ArrayRef<c10::SymInt>, std::__1::optional<c10::SymInt>)>, at::Tensor, c10::guts::typelist::typelist<at::Tensor const&, c10::ArrayRef<c10::SymInt>, c10::ArrayRef<c10::SymInt>, std::__1::optional<c10::SymInt>>>, at::Tensor (at::Tensor const&, c10::ArrayRef<c10::SymInt>, c10::ArrayRef<c10::SymInt>, std::__1::optional<c10::SymInt>)>::call(c10::OperatorKernel*, c10::DispatchKeySet, at::Tensor const&, c10::ArrayRef<c10::SymInt>, c10::ArrayRef<c10::SymInt>, std::__1::optional<c10::SymInt>) + 104 (0x13a22a374 in libtorch_cpu.dylib)
frame #5: at::_ops::as_strided::call(at::Tensor const&, c10::ArrayRef<c10::SymInt>, c10::ArrayRef<c10::SymInt>, std::__1::optional<c10::SymInt>) + 476 (0x138c2ad68 in libtorch_cpu.dylib)
frame #6: at::Tensor::as_strided(c10::ArrayRef<long long>, c10::ArrayRef<long long>, std::__1::optional<long long>) const + 236 (0x13801424c in libtorch_cpu.dylib)
frame #7: at::native::expand(at::Tensor const&, c10::ArrayRef<long long>, bool) + 348 (0x1387f4178 in libtorch_cpu.dylib)
frame #8: c10::impl::wrap_kernel_functor_unboxed_<c10::impl::detail::WrapFunctionIntoFunctor_<c10::CompileTimeFunctionPointer<at::Tensor (c10::DispatchKeySet, at::Tensor const&, c10::ArrayRef<c10::SymInt>, bool), &torch::ADInplaceOrView::(anonymous namespace)::expand(c10::DispatchKeySet, at::Tensor const&, c10::ArrayRef<c10::SymInt>, bool)>, at::Tensor, c10::guts::typelist::typelist<c10::DispatchKeySet, at::Tensor const&, c10::ArrayRef<c10::SymInt>, bool>>, at::Tensor (c10::DispatchKeySet, at::Tensor const&, c10::ArrayRef<c10::SymInt>, bool)>::call(c10::OperatorKernel*, c10::DispatchKeySet, at::Tensor const&, c10::ArrayRef<c10::SymInt>, bool) + 116 (0x13c1a40a4 in libtorch_cpu.dylib)
frame #9: c10::impl::wrap_kernel_functor_unboxed_<c10::impl::detail::WrapFunctionIntoFunctor_<c10::CompileTimeFunctionPointer<at::Tensor (c10::DispatchKeySet, at::Tensor const&, c10::ArrayRef<c10::SymInt>, bool), &torch::autograd::VariableType::(anonymous namespace)::expand(c10::DispatchKeySet, at::Tensor const&, c10::ArrayRef<c10::SymInt>, bool)>, at::Tensor, c10::guts::typelist::typelist<c10::DispatchKeySet, at::Tensor const&, c10::ArrayRef<c10::SymInt>, bool>>, at::Tensor (c10::DispatchKeySet, at::Tensor const&, c10::ArrayRef<c10::SymInt>, bool)>::call(c10::OperatorKernel*, c10::DispatchKeySet, at::Tensor const&, c10::ArrayRef<c10::SymInt>, bool) + 996 (0x13b8a2f24 in libtorch_cpu.dylib)
frame #10: c10::impl::make_boxed_from_unboxed_functor<c10::impl::detail::WrapFunctionIntoFunctor_<c10::CompileTimeFunctionPointer<at::Tensor (c10::DispatchKeySet, at::Tensor const&, c10::ArrayRef<c10::SymInt>, bool), &torch::autograd::VariableType::(anonymous namespace)::expand(c10::DispatchKeySet, at::Tensor const&, c10::ArrayRef<c10::SymInt>, bool)>, at::Tensor, c10::guts::typelist::typelist<c10::DispatchKeySet, at::Tensor const&, c10::ArrayRef<c10::SymInt>, bool>>, false>::call(c10::OperatorKernel*, c10::OperatorHandle const&, c10::DispatchKeySet, std::__1::vector<c10::IValue, std::__1::allocator<c10::IValue>>*) + 112 (0x13b8a3dd4 in libtorch_cpu.dylib)
frame #11: c10::Dispatcher::callBoxed(c10::OperatorHandle const&, std::__1::vector<c10::IValue, std::__1::allocator<c10::IValue>>*) const + 144 (0x138032c14 in libtorch_cpu.dylib)
frame #12: at::functorch::Interpreter::sendToNextInterpreter(c10::OperatorHandle const&, std::__1::vector<c10::IValue, std::__1::allocator<c10::IValue>>*, bool) + 76 (0x1382bf918 in libtorch_cpu.dylib)
frame #13: at::functorch::dynamicLayerBack(c10::OperatorHandle const&, std::__1::vector<c10::IValue, std::__1::allocator<c10::IValue>>*, bool) + 212 (0x1382be400 in libtorch_cpu.dylib)
frame #14: at::_ops::expand::call(at::Tensor const&, c10::ArrayRef<c10::SymInt>, bool) + 528 (0x1391328f4 in libtorch_cpu.dylib)
frame #15: at::functorch::ensure_has_bdim(at::Tensor const&, bool, c10::SymInt) + 280 (0x13817c7f8 in libtorch_cpu.dylib)
frame #16: at::functorch::(anonymous namespace)::cat_batching_rule(c10::IListRef<at::Tensor> const&, long long) + 948 (0x1382c0a98 in libtorch_cpu.dylib)
frame #17: __decay(c10::guts::infer_function_traits<c10::impl::detail::WrapFunctionIntoRuntimeFunctor_<at::Tensor (*)(c10::IListRef<at::Tensor> const&, long long), at::Tensor, c10::guts::typelist::typelist<c10::IListRef<at::Tensor> const&, long long>>>::type::return_type) c10::impl::call_functor_with_args_from_stack_<c10::impl::detail::WrapFunctionIntoRuntimeFunctor_<at::Tensor (*)(c10::IListRef<at::Tensor> const&, long long), at::Tensor, c10::guts::typelist::typelist<c10::IListRef<at::Tensor> const&, long long>>, false, 0ul, 1ul, c10::IListRef<at::Tensor> const&, long long>(c10::OperatorKernel*, c10::DispatchKeySet, std::__1::vector<c10::IValue, c10::DispatchKeySet::allocator<std::__1::vector>>*, c10::DispatchKeySet::integer_sequence<unsigned long, 0ul, 1ul>, c10::guts::typelist::typelist<c10::IListRef<at::Tensor> const&, long long>*) + 152 (0x13806c7ec in libtorch_cpu.dylib)
frame #18: c10::impl::make_boxed_from_unboxed_functor<c10::impl::detail::WrapFunctionIntoRuntimeFunctor_<at::Tensor (*)(c10::IListRef<at::Tensor> const&, long long), at::Tensor, c10::guts::typelist::typelist<c10::IListRef<at::Tensor> const&, long long>>, false>::call(c10::OperatorKernel*, c10::OperatorHandle const&, c10::DispatchKeySet, std::__1::vector<c10::IValue, std::__1::allocator<c10::IValue>>*) + 40 (0x13806c698 in libtorch_cpu.dylib)
frame #19: c10::Dispatcher::callBoxed(c10::OperatorHandle const&, std::__1::vector<c10::IValue, std::__1::allocator<c10::IValue>>*) const + 144 (0x138032c14 in libtorch_cpu.dylib)
frame #20: at::functorch::Interpreter::process(c10::OperatorHandle const&, std::__1::vector<c10::IValue, std::__1::allocator<c10::IValue>>*) + 76 (0x1382bf7c4 in libtorch_cpu.dylib)
frame #21: void c10::BoxedKernel::make_boxed_function<&at::functorch::dynamicLayerFrontFallback(c10::OperatorHandle const&, std::__1::vector<c10::IValue, std::__1::allocator<c10::IValue>>*)>(c10::OperatorKernel*, c10::OperatorHandle const&, c10::DispatchKeySet, std::__1::vector<c10::IValue, std::__1::allocator<c10::IValue>>*) + 376 (0x1382bde0c in libtorch_cpu.dylib)
frame #22: c10::Dispatcher::callBoxed(c10::OperatorHandle const&, std::__1::vector<c10::IValue, std::__1::allocator<c10::IValue>>*) const + 144 (0x138032c14 in libtorch_cpu.dylib)
frame #23: at::functorch::autogradBasedTransformSendToNext(c10::OperatorHandle const&, std::__1::vector<c10::IValue, std::__1::allocator<c10::IValue>>*, at::functorch::Interpreter const&, at::functorch::TransformType, std::__1::optional<bool>, std::__1::optional<bool>, bool) + 1216 (0x13817aabc in libtorch_cpu.dylib)
frame #24: at::functorch::Interpreter::sendToNextInterpreter(c10::OperatorHandle const&, std::__1::vector<c10::IValue, std::__1::allocator<c10::IValue>>*, bool) + 104 (0x1382bf934 in libtorch_cpu.dylib)
frame #25: at::functorch::dynamicLayerBack(c10::OperatorHandle const&, std::__1::vector<c10::IValue, std::__1::allocator<c10::IValue>>*, bool) + 212 (0x1382be400 in libtorch_cpu.dylib)
frame #26: c10::impl::BoxedKernelWrapper<at::Tensor (c10::IListRef<at::Tensor> const&, long long), void>::call(c10::BoxedKernel const&, c10::OperatorHandle const&, c10::DispatchKeySet, c10::IListRef<at::Tensor> const&, long long) + 80 (0x138c3904c in libtorch_cpu.dylib)
frame #27: c10::impl::wrap_kernel_functor_unboxed_<c10::impl::detail::WrapFunctionIntoFunctor_<c10::CompileTimeFunctionPointer<at::Tensor (c10::DispatchKeySet, c10::IListRef<at::Tensor> const&, long long), &torch::autograd::VariableType::(anonymous namespace)::cat(c10::DispatchKeySet, c10::IListRef<at::Tensor> const&, long long)>, at::Tensor, c10::guts::typelist::typelist<c10::DispatchKeySet, c10::IListRef<at::Tensor> const&, long long>>, at::Tensor (c10::DispatchKeySet, c10::IListRef<at::Tensor> const&, long long)>::call(c10::OperatorKernel*, c10::DispatchKeySet, c10::IListRef<at::Tensor> const&, long long) + 640 (0x13b36c170 in libtorch_cpu.dylib)
frame #28: at::_ops::cat::call(c10::IListRef<at::Tensor> const&, long long) + 304 (0x138c38400 in libtorch_cpu.dylib)
frame #29: torch::autograd::generated::details::cat_jvp(c10::IListRef<at::Tensor> const&, long long) + 876 (0x13cd89a34 in libtorch_cpu.dylib)
frame #30: c10::impl::wrap_kernel_functor_unboxed_<c10::impl::detail::WrapFunctionIntoFunctor_<c10::CompileTimeFunctionPointer<at::Tensor (c10::DispatchKeySet, c10::IListRef<at::Tensor> const&, long long), &torch::autograd::VariableType::(anonymous namespace)::cat(c10::DispatchKeySet, c10::IListRef<at::Tensor> const&, long long)>, at::Tensor, c10::guts::typelist::typelist<c10::DispatchKeySet, c10::IListRef<at::Tensor> const&, long long>>, at::Tensor (c10::DispatchKeySet, c10::IListRef<at::Tensor> const&, long long)>::call(c10::OperatorKernel*, c10::DispatchKeySet, c10::IListRef<at::Tensor> const&, long long) + 1040 (0x13b36c300 in libtorch_cpu.dylib)
frame #31: c10::impl::make_boxed_from_unboxed_functor<c10::impl::detail::WrapFunctionIntoFunctor_<c10::CompileTimeFunctionPointer<at::Tensor (c10::DispatchKeySet, c10::IListRef<at::Tensor> const&, long long), &torch::autograd::VariableType::(anonymous namespace)::cat(c10::DispatchKeySet, c10::IListRef<at::Tensor> const&, long long)>, at::Tensor, c10::guts::typelist::typelist<c10::DispatchKeySet, c10::IListRef<at::Tensor> const&, long long>>, false>::call(c10::OperatorKernel*, c10::OperatorHandle const&, c10::DispatchKeySet, std::__1::vector<c10::IValue, std::__1::allocator<c10::IValue>>*) + 148 (0x13b36cad8 in libtorch_cpu.dylib)
frame #32: c10::Dispatcher::callBoxed(c10::OperatorHandle const&, std::__1::vector<c10::IValue, std::__1::allocator<c10::IValue>>*) const + 144 (0x138032c14 in libtorch_cpu.dylib)
frame #33: at::functorch::autogradBasedTransformProcess(c10::OperatorHandle const&, std::__1::vector<c10::IValue, std::__1::allocator<c10::IValue>>*, long long, at::functorch::TransformType) + 528 (0x13817a014 in libtorch_cpu.dylib)
frame #34: at::functorch::Interpreter::process(c10::OperatorHandle const&, std::__1::vector<c10::IValue, std::__1::allocator<c10::IValue>>*) + 104 (0x1382bf7e0 in libtorch_cpu.dylib)
frame #35: void c10::BoxedKernel::make_boxed_function<&at::functorch::dynamicLayerFrontFallback(c10::OperatorHandle const&, std::__1::vector<c10::IValue, std::__1::allocator<c10::IValue>>*)>(c10::OperatorKernel*, c10::OperatorHandle const&, c10::DispatchKeySet, std::__1::vector<c10::IValue, std::__1::allocator<c10::IValue>>*) + 376 (0x1382bde0c in libtorch_cpu.dylib)
frame #36: c10::impl::BoxedKernelWrapper<at::Tensor (c10::IListRef<at::Tensor> const&, long long), void>::call(c10::BoxedKernel const&, c10::OperatorHandle const&, c10::DispatchKeySet, c10::IListRef<at::Tensor> const&, long long) + 80 (0x138c3904c in libtorch_cpu.dylib)
frame #37: at::_ops::cat::call(c10::IListRef<at::Tensor> const&, long long) + 416 (0x138c38470 in libtorch_cpu.dylib)
frame #38: torch::autograd::THPVariable_cat(_object*, _object*, _object*) + 568 (0x1039cde9c in libtorch_python.dylib)
<omitting python frames>
frame #49: start + 6000 (0x182f06b4c in dyld)
```
### Versions
Collecting environment information...
PyTorch version: 2.7.0
Is debug build: False
CUDA used to build PyTorch: None
ROCM used to build PyTorch: N/A
OS: macOS 15.4.1 (arm64)
GCC version: Could not collect
Clang version: 17.0.0 (clang-1700.0.13.3)
CMake version: version 3.31.5
Libc version: N/A
Python version: 3.12.8 (main, Mar 24 2025, 16:58:11) [Clang 16.0.0 (clang-1600.0.26.6)] (64-bit runtime)
Python platform: macOS-15.4.1-arm64-arm-64bit
Is CUDA available: False
CUDA runtime version: No CUDA
CUDA_MODULE_LOADING set to: N/A
GPU models and configuration: No CUDA
Nvidia driver version: No CUDA
cuDNN version: No CUDA
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Apple M4 Max
Versions of relevant libraries:
[pip3] numpy==2.2.5
[pip3] torch==2.7.0
[conda] Could not collect
cc: @kulinseth @albanD @malfet
cc @kulinseth @albanD @malfet @DenisVieriu97 @jhavukainen @zou3519 @Chillee @samdow @kshitij12345
| true
|
3,036,492,056
|
DISABLED test_2d_mlp_with_nd_mesh (__main__.TestFullyShardNDTraining)
|
jithunnair-amd
|
open
|
[
"module: rocm",
"triaged",
"skipped"
] | 1
|
COLLABORATOR
|
Platforms: rocm
This test was disabled because it is failing on main branch (on MI200s only) ([recent examples](https://torch-ci.com/failure?failureCaptures=%5B%22distributed%2F_composable%2Ffsdp%2Ftest_fully_shard_training.py%3A%3ATestFullyShardNDTraining%3A%3Atest_2d_mlp_with_nd_mesh%22%5D)).
Initial analysis indicates that this test needs a newer rccl version to work.
cc @jeffdaily @sunway513 @pruthvistony @ROCmSupport @dllehr-amd @jataylo @hongxiayang @naromero77amd
| true
|
3,036,490,757
|
[CI] [anaconda] Triton windows build
|
atalman
|
open
|
[
"module: ci",
"triaged",
"better-engineering"
] | 0
|
CONTRIBUTOR
|
### 🐛 Describe the bug
Related to https://github.com/pytorch/pytorch/issues/138506
CI Build and Test scripts to replace:
.github/scripts/windows/build_triton.bat
We would like to remove the Anaconda build dependency.
### Versions
2.8.0
cc @seemethere @malfet @pytorch/pytorch-dev-infra
| true
|
3,036,489,281
|
[ez] Disable failing test in periodic no gpu no avx
|
clee2000
|
closed
|
[
"Merged",
"topic: not user facing",
"ciflow/periodic",
"module: inductor",
"ciflow/inductor"
] | 3
|
CONTRIBUTOR
|
Failing on periodic after it was added in #152542.
Example:
inductor/test_cpu_repro.py::CPUReproTests::test_tanh_atan2_use_decompose_tanh [GH job link](https://github.com/pytorch/pytorch/actions/runs/14775755628/job/41485185829) [HUD commit link](https://hud.pytorch.org/pytorch/pytorch/commit/6f6acb412828844ee3bcdbf277283144faba2524)
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov @hl475
| true
|
3,036,467,369
|
CI workflows being skipped on PR
|
wdvr
|
open
|
[
"module: ci",
"triaged"
] | 1
|
CONTRIBUTOR
|
## Issue
PRs are not always running all CI workflows, preventing merges in some cases. If this happens, you will typically see @pytorchmergebot job errors like
> Merge of XYZ failed due to: 2 mandatory check(s) are pending/not yet run. The first few are:
> - Lint
> - pull
Or no CI being run at all.
## Current Status
**Ongoing**
See [detailed issue](https://github.com/pytorch/pytorch/issues/151322)
## Mitigation
Mitigate by either running a rebase (`@pytorchmergebot rebase`), or adding the [ciflow/pull](https://github.com/pytorch/pytorch/labels/ciflow%2Fpull) label to the PR. The latter will manually kick off the pull jobs.
## Error looks like
- @pytorchmergebot times out when merging (typically 90 or fewer jobs have run in the 'Checks' tab)
- No CI at all is run
## Incident timeline (all times pacific)
Started April 15th or earlier
## User impact
*How does this affect users of PyTorch CI?*
Delays in merging / failure to merge.
## Root cause
This is a GitHub bug and we're actively working with the GitHub team to find the root cause.
## Prevention/followups
cc @seemethere @malfet @pytorch/pytorch-dev-infra
| true
|
3,036,466,534
|
torch._foreach_pow(DTensor, float) and torch._foreach_pow_(DTensor, float) do not work
|
dbusbridge
|
open
|
[
"oncall: distributed",
"module: dtensor"
] | 0
|
NONE
|
### 🐛 Describe the bug
```python
import os
import torch
import torch.distributed as dist
from torch.distributed.tensor.device_mesh import init_device_mesh
from torch.distributed.tensor.placement_types import Shard
from torch.distributed.tensor import distribute_tensor
os.environ["RANK"] = "0"
os.environ["WORLD_SIZE"] = "1"
os.environ["MASTER_ADDR"] = "localhost"
os.environ["MASTER_PORT"] = "29503"
dist.init_process_group(backend="gloo")
mesh = init_device_mesh("cpu", (1,))
local_tensor = torch.randn(10, 5, dtype=torch.float32)
my_dtensor = distribute_tensor(local_tensor, mesh, [Shard(0)]) # Or [Replicate()]
torch._foreach_mul([my_dtensor], 2.0) # Fine
torch._foreach_pow([my_dtensor], 2.0) # Breaks
dist.destroy_process_group()
```
Result:
```
(base) ➜ /tmp python scratch.py
/Users/dbusbridge/miniconda3/lib/python3.11/site-packages/torch/utils/_pytree.py:185: FutureWarning: optree is installed but the version is too old to support PyTorch Dynamo in C++ pytree. C++ pytree support is disabled. Please consider upgrading optree using `python3 -m pip install --upgrade 'optree>=0.13.0'`.
warnings.warn(
/Users/dbusbridge/miniconda3/lib/python3.11/site-packages/torch/distributed/tensor/_random.py:44: UserWarning: DTensor random operators may not have complete support on cpu device mesh
warnings.warn(
[rank0]: Traceback (most recent call last):
[rank0]: File "/private/tmp/scratch.py", line 21, in <module>
[rank0]: torch._foreach_pow([my_dtensor], 2.0) # Breaks
[rank0]: ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
[rank0]: File "/Users/dbusbridge/miniconda3/lib/python3.11/site-packages/torch/_compile.py", line 32, in inner
[rank0]: return disable_fn(*args, **kwargs)
[rank0]: ^^^^^^^^^^^^^^^^^^^^^^^^^^^
[rank0]: File "/Users/dbusbridge/miniconda3/lib/python3.11/site-packages/torch/_dynamo/eval_frame.py", line 745, in _fn
[rank0]: return fn(*args, **kwargs)
[rank0]: ^^^^^^^^^^^^^^^^^^^
[rank0]: File "/Users/dbusbridge/miniconda3/lib/python3.11/site-packages/torch/distributed/tensor/_api.py", line 346, in __torch_dispatch__
[rank0]: return DTensor._op_dispatcher.dispatch(
[rank0]: ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
[rank0]: File "/Users/dbusbridge/miniconda3/lib/python3.11/site-packages/torch/distributed/tensor/_dispatch.py", line 167, in dispatch
[rank0]: op_info = self.unwrap_to_op_info(op_call, args, kwargs)
[rank0]: ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
[rank0]: File "/Users/dbusbridge/miniconda3/lib/python3.11/site-packages/torch/distributed/tensor/_dispatch.py", line 400, in unwrap_to_op_info
[rank0]: assert mesh is not None, f"found no DeviceMesh from dtensor args for {op_call}!"
[rank0]: ^^^^^^^^^^^^^^^^
[rank0]: AssertionError: found no DeviceMesh from dtensor args for aten._foreach_pow.Scalar!
```
Related to:
- https://github.com/pytorch/pytorch/issues/132017 (similar issue for div, resolved)
- https://github.com/pytorch/pytorch/pull/132066 (addressed div)
### Versions
```
PyTorch version: 2.6.0
Is debug build: False
CUDA used to build PyTorch: None
ROCM used to build PyTorch: N/A
OS: macOS 15.4.1 (arm64)
GCC version: Could not collect
Clang version: 17.0.0 (clang-1700.0.13.3)
CMake version: version 4.0.1
Libc version: N/A
Python version: 3.11.4 (main, Jul 5 2023, 08:40:20) [Clang 14.0.6 ] (64-bit runtime)
Python platform: macOS-15.4.1-arm64-arm-64bit
Is CUDA available: False
CUDA runtime version: No CUDA
CUDA_MODULE_LOADING set to: N/A
GPU models and configuration: No CUDA
Nvidia driver version: No CUDA
cuDNN version: No CUDA
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Apple M2 Max
Versions of relevant libraries:
[pip3] numpy==1.26.4
[pip3] optree==0.12.1
[pip3] torch==2.6.0
[conda] numpy 1.26.4 pypi_0 pypi
[conda] optree 0.12.1 pypi_0 pypi
[conda] torch 2.6.0 pypi_0 pypi
```
cc @H-Huang @awgu @wanchaol @fegin @fduwjj @wz337 @wconstab @d4l3k @tianyu-l @XilunWu
| true
|
3,036,463,335
|
set CUDA_MODULE_LOADING for older drivers only
|
ptrblck
|
open
|
[
"module: cuda",
"open source",
"ciflow/trunk",
"topic: not user facing"
] | 2
|
COLLABORATOR
|
`CUDA_MODULE_LOADING=LAZY` is the default for all drivers shipped with CUDA >=12.2 and we should check the driver version before setting the env variable.
(the `LOG(WARNING)` has to be removed before merging)
cc @msaroufim @eqy @jerryzh168
| true
|
3,036,432,230
|
[reland] Detailed triton kernel logging
|
jamesjwu
|
closed
|
[
"Merged",
"ciflow/trunk",
"topic: not user facing",
"module: inductor",
"module: dynamo",
"ciflow/inductor",
"ci-no-td"
] | 3
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #152694
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
| true
|
3,036,427,761
|
[ez] Fsspec Filesystem ls details should be false
|
ankitageorge
|
closed
|
[
"oncall: distributed",
"fb-exported",
"Merged",
"ciflow/trunk",
"topic: not user facing",
"release notes: distributed (checkpoint)",
"skip-url-lint",
"ciflow/pull"
] | 6
|
CONTRIBUTOR
|
Summary: The default behavior of `ls` for the local filesystem is details=False, but this isn't the case for all filesystems (e.g. HuggingFace), so set details=False explicitly so that the return type of `ls` is a list of strings rather than a list of dictionaries, which is what it would be with details=True.
Test Plan: tested in notebook
Differential Revision: D74080572
cc @H-Huang @awgu @wanchaol @fegin @fduwjj @wz337 @wconstab @d4l3k
| true
|
3,036,392,523
|
[aotinductor] Don't alloc weights if they don't exist
|
angelayi
|
closed
|
[
"Merged",
"ciflow/trunk",
"module: inductor",
"ciflow/inductor",
"release notes: inductor (aoti)"
] | 16
|
CONTRIBUTOR
|
Fixes https://github.com/pytorch/pytorch/issues/152356
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
| true
|
3,036,299,567
|
inductor-periodic failures 5/2/2025
|
zou3519
|
closed
|
[
"high priority",
"oncall: pt2"
] | 2
|
CONTRIBUTOR
|
See https://hud.pytorch.org/hud/pytorch/pytorch/main/1?per_page=50&name_filter=inductor-periodic&mergeEphemeralLF=true
- hf_BigBird (dynamic_inductor) went from fail_to_run to fail_accuracy. This is silent incorrectness.
- hf_BigBird (dynamo) went from 0 to 9 graph breaks
@ydwu4 @BoyuanFeng your PRs look the most suspicious: https://github.com/pytorch/pytorch/pull/152472, https://hud.pytorch.org/pytorch/pytorch/commit/5b5938929f06d0228e38cf15b46b141400f6ca7f

hi-pri for silent incorrectness (fail_accuracy)
cc @ezyang @gchanan @kadeng @msaroufim @chauhang @penguinwu
| true
|
3,036,273,849
|
Add sm_86 (Ampere) and sm_89 (Ada) SASS in aarch64 builds
|
agirault
|
open
|
[
"module: build",
"module: cuda",
"triaged",
"module: arm",
"module: jetson"
] | 8
|
NONE
|
### 🚀 The feature, motivation and pitch
### Request
Add SASS support for sm_86 and sm_89 to the aarch64 (sbsa) wheels
### Motivation
Support [NVIDIA IGX](https://www.nvidia.com/en-us/edge-computing/products/igx/) (aarch64) with discrete GPU in the same build as the SBSA wheels. The IGX supports the NVIDIA A6000 (Ampere) and NVIDIA RTX 6000 Ada.
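As a quick way to see the gap, `torch.cuda.get_arch_list()` reports which SASS/PTX targets the installed wheel was built with; a minimal check, assuming a CUDA-enabled aarch64 wheel and a local GPU such as the A6000 (sm_86):
```python
import torch

# Architectures the installed wheel was compiled for, e.g. ['sm_80', 'sm_90', 'compute_90'];
# sm_86 / sm_89 would need to appear here for native SASS on A6000 / RTX 6000 Ada.
print(torch.cuda.get_arch_list())

if torch.cuda.is_available():
    major, minor = torch.cuda.get_device_capability()
    print(f"local GPU compute capability: sm_{major}{minor}")
```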
### Alternatives
_No response_
### Additional context
- IGX is based on Arm Cortex-A78AE with a base arch of Armv8.2-a, higher than the [Armv8.1-a requirement for NVPL](https://docs.nvidia.com/nvpl/latest/index.html#cpu-support) which is needed in current arm64 builds.
- IGX is "Orin" like Jetson Orin (same SOC) but the dGPU stack is the same as SBSA, not as Jetpack (iGPU)
- I assume this would also enable running latest pytorch builds on [NVIDIA L40S](https://www.nvidia.com/en-us/data-center/l40s/) and [NVIDIA L4](https://www.nvidia.com/en-us/data-center/l4) in arm datacenters.
cc @malfet @seemethere @ptrblck @msaroufim @eqy @jerryzh168 @snadampal @milpuz01 @aditew01 @nikhil-arm @fadara01 @puririshi98
| true
|
3,036,271,444
|
[ca][dtensor] run real PG dtensor tests under CA
|
xmfan
|
open
|
[
"oncall: distributed",
"topic: not user facing",
"module: inductor",
"ciflow/inductor"
] | 1
|
MEMBER
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #152735
* __->__ #152689
cc @H-Huang @awgu @wanchaol @fegin @fduwjj @wz337 @wconstab @d4l3k @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
| true
|
3,036,267,564
|
Add a test for AsyncCollectiveTensor handling for maybe-view ops
|
bdhirsh
|
closed
|
[
"oncall: distributed",
"Merged",
"ciflow/trunk",
"topic: not user facing"
] | 4
|
CONTRIBUTOR
|
We never added a proper test for the fix from https://github.com/pytorch/pytorch/pull/134661
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #151719
* __->__ #152688
* #152195
cc @H-Huang @awgu @wanchaol @fegin @fduwjj @wz337 @wconstab @d4l3k
| true
|
3,036,208,882
|
[dynamo] Guard serialization for DUPLICATE_INPUT.
|
zhxchen17
|
closed
|
[
"fb-exported",
"Merged",
"ciflow/trunk",
"topic: not user facing",
"module: dynamo",
"ciflow/inductor"
] | 4
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #152723
* #152721
* #152716
* __->__ #152687
* #152616
* #152615
Seems this guard is not very active. Adding a test to detect error handling at least.
Differential Revision: [D74074837](https://our.internmc.facebook.com/intern/diff/D74074837/)
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @chauhang @amjames
| true
|
3,036,051,276
|
Avoid triggering ignored requires_grad warning in our code
|
albanD
|
closed
|
[
"Merged",
"Reverted",
"ciflow/trunk",
"release notes: python_frontend",
"topic: improvements",
"ci-no-td"
] | 9
|
COLLABORATOR
|
This one is ok to silence as we're just doing formatting
| true
|
3,036,022,416
|
torch.library.custom_op string support
|
zou3519
|
open
|
[
"triaged",
"module: custom-operators",
"oncall: pt2",
"module: pt2-dispatcher"
] | 0
|
CONTRIBUTOR
|
guess we don't have it yet
cc @chauhang @penguinwu @bdhirsh
| true
|
3,035,959,967
|
DISABLED test_comprehensive_select_scatter_cuda_float16 (__main__.TestInductorOpInfoCUDA)
|
pytorch-bot[bot]
|
open
|
[
"high priority",
"triaged",
"module: flaky-tests",
"skipped",
"oncall: pt2",
"module: inductor"
] | 2
|
NONE
|
Platforms: inductor
This test was disabled because it is failing in CI. See [recent examples](https://hud.pytorch.org/flakytest?name=test_comprehensive_select_scatter_cuda_float16&suite=TestInductorOpInfoCUDA&limit=100) and the most recent trunk [workflow logs](https://github.com/pytorch/pytorch/runs/41529115784).
Over the past 3 hours, it has been determined flaky in 3 workflow(s) with 3 failures and 3 successes.
**Debugging instructions (after clicking on the recent samples link):**
DO NOT ASSUME THINGS ARE OKAY IF THE CI IS GREEN. We now shield flaky tests from developers so CI will thus be green but it will be harder to parse the logs.
To find relevant log snippets:
1. Click on the workflow logs linked above
2. Click on the Test step of the job so that it is expanded. Otherwise, the grepping will not work.
3. Grep for `test_comprehensive_select_scatter_cuda_float16`
4. There should be several instances run (as flaky tests are rerun in CI) from which you can study the logs.
<details><summary>Sample error message</summary>
```
Traceback (most recent call last):
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_device_type.py", line 1135, in test_wrapper
return test(*args, **kwargs)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_device_type.py", line 1434, in only_fn
return fn(self, *args, **kwargs)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 2291, in wrapper
fn(*args, **kwargs)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_device_type.py", line 1215, in dep_fn
return fn(slf, *args, **kwargs)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_device_type.py", line 1215, in dep_fn
return fn(slf, *args, **kwargs)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_device_type.py", line 1215, in dep_fn
return fn(slf, *args, **kwargs)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 1612, in wrapper
fn(*args, **kwargs)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 1534, in wrapper
fn(*args, **kwargs)
File "/opt/conda/envs/py_3.10/lib/python3.10/unittest/mock.py", line 1379, in patched
return func(*newargs, **newkeywargs)
File "/opt/conda/envs/py_3.10/lib/python3.10/contextlib.py", line 79, in inner
return func(*args, **kwds)
File "/opt/conda/envs/py_3.10/lib/python3.10/contextlib.py", line 79, in inner
return func(*args, **kwds)
File "/opt/conda/envs/py_3.10/lib/python3.10/contextlib.py", line 79, in inner
return func(*args, **kwds)
File "/var/lib/jenkins/workspace/test/inductor/test_torchinductor_opinfo.py", line 962, in inner
raise e
File "/var/lib/jenkins/workspace/test/inductor/test_torchinductor_opinfo.py", line 954, in inner
fn(self, device, dtype, op)
File "/var/lib/jenkins/workspace/test/inductor/test_torchinductor_opinfo.py", line 1207, in test_comprehensive
raise e
File "/var/lib/jenkins/workspace/test/inductor/test_torchinductor_opinfo.py", line 1182, in test_comprehensive
self.check_model_gpu(
File "/opt/conda/envs/py_3.10/lib/python3.10/contextlib.py", line 79, in inner
return func(*args, **kwds)
File "/var/lib/jenkins/workspace/test/inductor/test_torchinductor.py", line 657, in check_model_gpu
check_model(
File "/var/lib/jenkins/workspace/test/inductor/test_torchinductor.py", line 498, in check_model
actual = run(*example_inputs, **kwargs)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_dynamo/eval_frame.py", line 676, in _fn
raise e.remove_dynamo_frames() from None # see TORCHDYNAMO_VERBOSE=1
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_inductor/compile_fx.py", line 895, in _compile_fx_inner
raise InductorError(e, currentframe()).with_traceback(
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_inductor/compile_fx.py", line 879, in _compile_fx_inner
mb_compiled_graph = fx_codegen_and_compile(
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_inductor/compile_fx.py", line 1495, in fx_codegen_and_compile
return scheme.codegen_and_compile(gm, example_inputs, inputs_to_check, graph_kwargs)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_inductor/compile_fx.py", line 1382, in codegen_and_compile
compiled_module = graph.compile_to_module()
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_inductor/graph.py", line 2219, in compile_to_module
return self._compile_to_module()
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_inductor/graph.py", line 2266, in _compile_to_module
mod = PyCodeCache.load_by_key_path(
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_inductor/codecache.py", line 3006, in load_by_key_path
mod = _reload_python_module(key, path, set_sys_modules=in_toplevel)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_inductor/runtime/compile_tasks.py", line 31, in _reload_python_module
exec(code, mod.__dict__, mod.__dict__)
File "/tmp/tmpkz3l_4bj/6f/c6fyvhyv76xytlzk73kow2ueza7blp3225dabnzirykad5j64gm6.py", line 86, in <module>
async_compile.wait(globals())
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_inductor/async_compile.py", line 448, in wait
self._wait_futures(scope)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_inductor/async_compile.py", line 468, in _wait_futures
scope[key] = result.result()
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_inductor/codecache.py", line 3508, in result
return self.result_fn()
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_inductor/async_compile.py", line 343, in get_result
kernel.precompile(
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_inductor/runtime/triton_heuristics.py", line 323, in precompile
self._make_launchers()
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_inductor/runtime/triton_heuristics.py", line 480, in _make_launchers
launchers.append(result.make_launcher())
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_inductor/runtime/triton_heuristics.py", line 1277, in make_launcher
self.reload_cubin_path()
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_inductor/runtime/triton_heuristics.py", line 1269, in reload_cubin_path
raise RuntimeError(
torch._inductor.exc.InductorError: RuntimeError: ('Cubin file saved by TritonBundler not found at %s', '/tmp/tmptpzliv7t/triton/CAAZH3NVT3BORETJ4DYPUCNSVUTTJ3TEDVZPNDVHOYGVRGSDN2KQ/triton_poi_fused_0.cubin')
Set TORCHDYNAMO_VERBOSE=1 for the internal stack trace (please do this especially if you're reporting a bug to PyTorch). For even more developer context, set TORCH_LOGS="+dynamo"
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 3154, in wrapper
method(*args, **kwargs)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 3154, in wrapper
method(*args, **kwargs)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_device_type.py", line 426, in instantiated_test
result = test(self, **param_kwargs)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 1612, in wrapper
fn(*args, **kwargs)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_device_type.py", line 1147, in test_wrapper
raise e_tracked from e
Exception: Caused by sample input at index 2: SampleInput(input=Tensor[size=(5, 5, 5), device="cuda:0", dtype=torch.float16], args=(Tensor[size=(5, 5), device="cuda:0", dtype=torch.float16],-1,-1), kwargs={}, broadcasts_input=False, name='')
To execute this test, run the following from the base repo dir:
PYTORCH_OPINFO_SAMPLE_INPUT_INDEX=2 python test/inductor/test_torchinductor_opinfo.py TestInductorOpInfoCUDA.test_comprehensive_select_scatter_cuda_float16
This message can be suppressed by setting PYTORCH_PRINT_REPRO_ON_FAILURE=0
```
</details>
Test file path: `inductor/test_torchinductor_opinfo.py`
cc @ezyang @gchanan @zou3519 @kadeng @msaroufim @clee2000 @chauhang @penguinwu @voznesenskym @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @chenyang78 @muchulee8 @amjames @aakhundov
| true
|
3,035,930,537
|
Flex attention strides
|
ngc92
|
open
|
[
"triaged",
"oncall: pt2",
"module: higher order operators",
"module: pt2-dispatcher",
"module: flex attention"
] | 0
|
NONE
|
### 📚 The doc issue
The [FlexAttention](https://pytorch.org/docs/stable/nn.attention.flex_attention.html#module-torch.nn.attention.flex_attention) docs nicely document the expected input shapes, but do not specify anything about strides. In contrast, cuDNN, for example, [documents](https://docs.nvidia.com/deeplearning/cudnn/frontend/latest/operations/Attention.html) that strides can be chosen freely except that the last dimension must be contiguous. Knowing the available striding options is important, as it informs, e.g., whether it is possible to merge the QKV matmuls into a single matmul (see the sketch below).
Also (and independently), `return_lse` is missing from the output documentation.
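A minimal sketch of the input layout the stride question is about (shapes and names here are illustrative, not from the docs): a fused QKV projection yields Q/K/V as views of one buffer that are non-contiguous overall but contiguous in the last dimension, and whether `flex_attention` accepts that layout or requires `.contiguous()` copies is exactly what is left unspecified.
```python
import torch

B, H, S, D = 2, 8, 128, 64
x = torch.randn(B, S, H * D)
w_qkv = torch.randn(H * D, 3 * H * D)

qkv = x @ w_qkv                                    # single fused matmul: (B, S, 3*H*D)
q, k, v = qkv.view(B, S, 3, H, D).unbind(dim=2)    # views into the fused buffer
q, k, v = [t.transpose(1, 2) for t in (q, k, v)]   # (B, H, S, D), still views

print(q.stride(), q.is_contiguous())  # non-contiguous overall, but stride 1 in the last dim
```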
### Suggest a potential alternative/fix
_No response_
cc @chauhang @penguinwu @zou3519 @ydwu4 @bdhirsh @Chillee @drisspg @yanboliang @BoyuanFeng
| true
|
3,035,821,321
|
Update the signature and test of torch.hamming_window()
|
ILCSFNO
|
open
|
[
"triaged",
"open source",
"release notes: python_frontend"
] | 5
|
CONTRIBUTOR
|
Fixes #146590
| true
|
3,035,748,666
|
Fix signature of torch.sparse_coo_tensor()
|
ILCSFNO
|
open
|
[
"triaged",
"open source",
"release notes: sparse"
] | 3
|
CONTRIBUTOR
|
Fixes #145371
@pearu I searched the codebase and found this code, and I am wondering whether it is the root cause of the issue. Could you take a look? Thanks a lot!
| true
|
3,035,567,377
|
Add pad limit of avg_poolnd and AvgPoolnd
|
ILCSFNO
|
closed
|
[
"triaged",
"open source",
"Merged",
"ciflow/trunk",
"release notes: nn"
] | 4
|
CONTRIBUTOR
|
Fixes #152156
| true
|
3,035,423,880
|
[RFC] Universal Device Context and Safe GPU/CPU Execution Decorators
|
Tunahanyrd
|
open
|
[
"triaged",
"enhancement",
"needs research",
"module: python frontend"
] | 2
|
NONE
|
### Feature
I propose a small but useful utility package named `cuda_tools` that provides:
- A `DeviceContext` context manager for clean device/AMP/cache handling
- Simple and advanced decorators (`@cuda`, `@cuda.advanced`) to make any function run safely on GPU or CPU
- Optional automatic tensorization (int, list, np.ndarray → torch.Tensor)
- Memory profiling, retry on error, timeout, automatic fallback to CPU on OOM
- AMP and multi-GPU support (optional)
### Why
Working with GPU-accelerated functions often requires boilerplate code:
- Device selection
- `.to(device)` calls
- Cache clearing
- Error handling for CUDA OOM
- AMP context setup
- Converting NumPy / CuPy / TensorFlow objects into torch.Tensor
This toolset wraps all that logic in a minimal, reusable, and modular design.
### Package Structure (already implemented)
```text
cuda_tools/
├── __init__.py
├── context.py # DeviceContext
├── decorators.py # @cuda, @cuda.advanced
├── utils.py # tensor conversion, CuPy patching, etc.
```
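For concreteness, a hypothetical minimal sketch of what `context.py`'s `DeviceContext` could look like (the actual implementation in the demo repository below is richer; the names and defaults here are assumptions):
```python
from typing import Optional

import torch


class DeviceContext:
    """Minimal sketch: pick a device, optionally enable AMP, clear the CUDA cache on exit."""

    def __init__(self, device: Optional[str] = None, use_amp: bool = False,
                 clear_cache: bool = True):
        self.device = torch.device(
            device or ("cuda" if torch.cuda.is_available() else "cpu")
        )
        self.use_amp = use_amp and self.device.type == "cuda"
        self.clear_cache = clear_cache
        self._amp = None

    def __enter__(self):
        if self.use_amp:
            self._amp = torch.autocast(device_type=self.device.type)
            self._amp.__enter__()
        return self.device

    def __exit__(self, *exc):
        if self._amp is not None:
            self._amp.__exit__(*exc)
        if self.clear_cache and self.device.type == "cuda":
            torch.cuda.empty_cache()
        return False


# usage: with DeviceContext(use_amp=True) as device: out = model.to(device)(x.to(device))
```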
## Demo Repository:
https://github.com/Tunahanyrd/universal-cuda-tools
Note
This is not a request to include the entire codebase as-is.
Rather, the components are highly modular, and any part that fits the PyTorch core philosophy could be selectively adapted into torch.utils, torch.cuda, or elsewhere.
If there's interest, I’d be happy to refine this further or submit a minimal PR.
Thanks for considering!
### Alternatives
Alternative solutions typically involve manual `.to(device)` calls, AMP wrapping, and cache clearing. However, these require repetitive boilerplate and are not reusable. While other utility wrappers exist in personal or third-party codebases, they are often project-specific and not modular in the same way.
### Additional context
This toolset was developed during real-world training workflows involving long-running model training on limited-GPU hardware. It aims to reduce boilerplate while improving safety and clarity in device management.
Example usage:
```python
@cuda(device="cuda", retry=1, auto_tensorize=True)
def train_step(batch):
# Works with raw ints/lists/np arrays; runs safely on selected device
...
```
cc @albanD
| true
|
3,035,368,952
|
[Autotune Cache] Fix the bug of using the wrong key for recording artifacts in CacheArtifactManager
|
dongji-gao
|
closed
|
[
"fb-exported",
"Merged",
"ciflow/trunk",
"topic: not user facing",
"module: inductor"
] | 9
|
CONTRIBUTOR
|
Summary: Change the key (path) from `<hash>.best_config` to `<parent_dir>/<hash>.best_config` to ensure that Autotune artifacts in MegaCache are loaded to the correct location locally.
Test Plan: NA
Differential Revision: D74052400
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
| true
|
3,035,364,845
|
[export] Dynamo symint support
|
angelayi
|
open
|
[
"ciflow/trunk",
"fx",
"module: dynamo",
"ciflow/inductor",
"release notes: export"
] | 3
|
CONTRIBUTOR
|
Basically adds native _IntWrapper support to dynamo. Here's my process of trying to make symint input support work on dynamo, and how I ended up with this approach [(doc)](https://docs.google.com/document/d/1GvNRQd8BnxlMay_hrEVgEta6VUeUW_hcFeRuB7q1nDY/edit?tab=t.0).
What I did was: before passing inputs to dynamo.export, I first wrap them with a class, `_IntWrapper`. When processing dynamic shapes, I then add the corresponding dynamic shape specification to the `dynamism` field stored on the `_IntWrapper`. If no dynamism is specified, the wrapper gets unwrapped back to an integer. During dynamo tracing, when we encounter an `_IntWrapper`, we convert it to a symint if the dynamism was specified as `Dim.DYNAMIC/AUTO`. Dynamo then traces a graph that contains symint inputs, which gets passed to AOTAutograd and so on.
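A hypothetical, minimal sketch of the wrapping step described above (the class name `_IntWrapper` comes from the description; the field names and the helper below are assumptions, not the PR's actual code):
```python
from dataclasses import dataclass
from typing import Any, Optional


@dataclass
class _IntWrapper:
    val: int
    dynamism: Optional[Any] = None  # filled in later from the dynamic_shapes spec


def wrap_int_inputs(args):
    # Wrap plain ints before handing inputs to dynamo.export; ints whose
    # dynamism is never specified get unwrapped back to plain ints.
    return tuple(
        _IntWrapper(a) if isinstance(a, int) and not isinstance(a, bool) else a
        for a in args
    )
```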
cc @ezyang @SherlockNoMad @EikanWang @jgong5 @wenzhe-nrv @voznesenskym @penguinwu @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @jiayisunx @chenyang78 @kadeng @chauhang @amjames
| true
|
3,035,350,766
|
[float16]: Fix the accumulation type for dot and gemv
|
f2013519
|
closed
|
[
"triaged",
"open source",
"Merged",
"ciflow/trunk",
"release notes: linalg_frontend",
"topic: bug fixes"
] | 4
|
CONTRIBUTOR
|
Fixes #147860
Also, partially address: https://github.com/pytorch/pytorch/issues/125438
Use float32 for accumulation with float16 and bfloat16 types.
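A self-contained illustration of the numerical issue (not the PR's actual BLAS/kernel change): accumulating a float16 reduction in float16 stalls once the spacing of the running sum exceeds the addends, while accumulating in float32 stays exact:
```python
import torch

vals = torch.ones(4096, dtype=torch.float16)

acc16 = torch.zeros((), dtype=torch.float16)
for v in vals:                     # every partial sum is rounded back to float16
    acc16 = acc16 + v

acc32 = vals.to(torch.float32).sum()

print(acc16.item())  # 2048.0 -- the sum stalls once its ulp (2.0) exceeds the addend (1.0)
print(acc32.item())  # 4096.0
```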
| true
|
3,035,314,056
|
[invoke_subgraph] Run missing graph passes recursively
|
anijain2305
|
closed
|
[
"Merged",
"ciflow/trunk",
"release notes: fx",
"topic: not user facing",
"module: inductor",
"ciflow/inductor",
"ciflow/pull"
] | 6
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #152806
* __->__ #152675
* #152770
* #152772
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
| true
|
3,035,311,431
|
DISABLED AotInductorTest.BasicPackageLoaderTestCuda (build.bin.test_aoti_inference)
|
pytorch-bot[bot]
|
open
|
[
"high priority",
"triaged",
"module: flaky-tests",
"skipped",
"oncall: pt2",
"oncall: export",
"module: aotinductor"
] | 18
|
NONE
|
Platforms: inductor
This test was disabled because it is failing in CI. See [recent examples](https://hud.pytorch.org/flakytest?name=AotInductorTest.BasicPackageLoaderTestCuda&suite=build.bin.test_aoti_inference&limit=100) and the most recent trunk [workflow logs](https://github.com/pytorch/pytorch/runs/41519872229).
Over the past 3 hours, it has been determined flaky in 3 workflow(s) with 3 failures and 3 successes.
**Debugging instructions (after clicking on the recent samples link):**
DO NOT ASSUME THINGS ARE OKAY IF THE CI IS GREEN. We now shield flaky tests from developers so CI will thus be green but it will be harder to parse the logs.
To find relevant log snippets:
1. Click on the workflow logs linked above
2. Click on the Test step of the job so that it is expanded. Otherwise, the grepping will not work.
3. Grep for `AotInductorTest.BasicPackageLoaderTestCuda`
4. There should be several instances run (as flaky tests are rerun in CI) from which you can study the logs.
<details><summary>Sample error message</summary>
```
unknown file
C++ exception with description "CUDA error: no kernel image is available for execution on the device
CUDA kernel errors might be asynchronously reported at some other API call, so the stacktrace below might be incorrect.
For debugging consider passing CUDA_LAUNCH_BLOCKING=1
Compile with `TORCH_USE_CUDA_DSA` to enable device-side assertions.
Exception raised from c10_cuda_check_implementation at /var/lib/jenkins/workspace/c10/cuda/CUDAException.cpp:42 (most recent call first):
frame #0: c10::Error::Error(c10::SourceLocation, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> >) + 0x9c (0x7f697b6bb1cc in /var/lib/jenkins/workspace/build/lib/libc10.so)
frame #1: c10::detail::torchCheckFail(char const*, char const*, unsigned int, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&) + 0x104 (0x7f697b64b73a in /var/lib/jenkins/workspace/build/lib/libc10.so)
frame #2: c10::cuda::c10_cuda_check_implementation(int, char const*, char const*, int, bool) + 0x40d (0x7f697b788e3d in /var/lib/jenkins/workspace/build/lib/libc10_cuda.so)
frame #3: void at::native::gpu_kernel_impl_nocast<at::native::BinaryFunctor<float, float, bool, at::native::(anonymous namespace)::CompareEqFunctor<float> > >(at::TensorIteratorBase&, at::native::BinaryFunctor<float, float, bool, at::native::(anonymous namespace)::CompareEqFunctor<float> > const&) + 0x7fd (0x7f696645c69d in /var/lib/jenkins/workspace/build/lib/libtorch_cuda.so)
frame #4: void at::native::gpu_kernel_impl<at::native::BinaryFunctor<float, float, bool, at::native::(anonymous namespace)::CompareEqFunctor<float> > >(at::TensorIteratorBase&, at::native::BinaryFunctor<float, float, bool, at::native::(anonymous namespace)::CompareEqFunctor<float> > const&) + 0x44f (0x7f696645cddf in /var/lib/jenkins/workspace/build/lib/libtorch_cuda.so)
frame #5: void at::native::gpu_kernel<at::native::BinaryFunctor<float, float, bool, at::native::(anonymous namespace)::CompareEqFunctor<float> > >(at::TensorIteratorBase&, at::native::BinaryFunctor<float, float, bool, at::native::(anonymous namespace)::CompareEqFunctor<float> > const&) + 0x35b (0x7f696645d57b in /var/lib/jenkins/workspace/build/lib/libtorch_cuda.so)
frame #6: void at::native::opmath_symmetric_gpu_kernel_with_scalars<float, bool, at::native::(anonymous namespace)::CompareEqFunctor<float> >(at::TensorIteratorBase&, at::native::(anonymous namespace)::CompareEqFunctor<float> const&) + 0x195 (0x7f6966483445 in /var/lib/jenkins/workspace/build/lib/libtorch_cuda.so)
frame #7: at::native::compare_eq_ne_kernel(at::TensorIteratorBase&, at::native::(anonymous namespace)::EqOpType) + 0x178 (0x7f6966432128 in /var/lib/jenkins/workspace/build/lib/libtorch_cuda.so)
frame #8: <unknown function> + 0x3cbae29 (0x7f69684bae29 in /var/lib/jenkins/workspace/build/lib/libtorch_cuda.so)
frame #9: <unknown function> + 0x3cbaef8 (0x7f69684baef8 in /var/lib/jenkins/workspace/build/lib/libtorch_cuda.so)
frame #10: at::_ops::eq_Tensor::call(at::Tensor const&, at::Tensor const&) + 0x1b2 (0x7f697dc76fb2 in /var/lib/jenkins/workspace/build/lib/libtorch_cpu.so)
frame #11: at::native::isclose(at::Tensor const&, at::Tensor const&, double, double, bool) + 0xbe (0x7f697d678a7e in /var/lib/jenkins/workspace/build/lib/libtorch_cpu.so)
frame #12: <unknown function> + 0x30d56dc (0x7f697e8d56dc in /var/lib/jenkins/workspace/build/lib/libtorch_cpu.so)
frame #13: at::_ops::isclose::call(at::Tensor const&, at::Tensor const&, double, double, bool) + 0x1ee (0x7f697e3a1aae in /var/lib/jenkins/workspace/build/lib/libtorch_cpu.so)
frame #14: at::native::allclose(at::Tensor const&, at::Tensor const&, double, double, bool) + 0x37 (0x7f697d6767d7 in /var/lib/jenkins/workspace/build/lib/libtorch_cpu.so)
frame #15: <unknown function> + 0x5027cc5 (0x7f6980827cc5 in /var/lib/jenkins/workspace/build/lib/libtorch_cpu.so)
frame #16: at::_ops::allclose::call(at::Tensor const&, at::Tensor const&, double, double, bool) + 0x1cd (0x7f697dc6e5bd in /var/lib/jenkins/workspace/build/lib/libtorch_cpu.so)
frame #17: <unknown function> + 0x336ff (0x5573a72216ff in /var/lib/jenkins/workspace/build/bin/test_aoti_inference)
frame #18: torch::aot_inductor::AotInductorTest_BasicPackageLoaderTestCuda_Test::TestBody() + 0x41 (0x5573a7221ad1 in /var/lib/jenkins/workspace/build/bin/test_aoti_inference)
frame #19: void testing::internal::HandleExceptionsInMethodIfSupported<testing::Test, void>(testing::Test*, void (testing::Test::*)(), char const*) + 0x51 (0x5573a7273271 in /var/lib/jenkins/workspace/build/bin/test_aoti_inference)
frame #20: <unknown function> + 0x750a0 (0x5573a72630a0 in /var/lib/jenkins/workspace/build/bin/test_aoti_inference)
frame #21: testing::TestInfo::Run() + 0x40a (0x5573a72635ba in /var/lib/jenkins/workspace/build/bin/test_aoti_inference)
frame #22: <unknown function> + 0x79699 (0x5573a7267699 in /var/lib/jenkins/workspace/build/bin/test_aoti_inference)
frame #23: testing::internal::UnitTestImpl::RunAllTests() + 0xf28 (0x5573a7268ae8 in /var/lib/jenkins/workspace/build/bin/test_aoti_inference)
frame #24: testing::UnitTest::Run() + 0x93 (0x5573a72692b3 in /var/lib/jenkins/workspace/build/bin/test_aoti_inference)
frame #25: main + 0x104 (0x5573a721d8f4 in /var/lib/jenkins/workspace/build/bin/test_aoti_inference)
frame #26: __libc_start_main + 0xf3 (0x7f696407e083 in /usr/lib/x86_64-linux-gnu/libc.so.6)
frame #27: _start + 0x2e (0x5573a721f48e in /var/lib/jenkins/workspace/build/bin/test_aoti_inference)
" thrown in the test body.
unknown file:0: C++ failure
```
</details>
Test file path: `` or `test/run_test`
Error: Error retrieving : 400, test/run_test: 404
cc @ezyang @gchanan @zou3519 @kadeng @msaroufim @clee2000 @chauhang @penguinwu @avikchaudhuri @gmagogsfm @zhxchen17 @tugsbayasgalan @angelayi @suo @ydwu4 @desertfire @chenyang78 @yushangdi @benjaminglass1
| true
|
3,035,311,361
|
DISABLED test_comprehensive_std_cuda_float64 (__main__.TestInductorOpInfoCUDA)
|
pytorch-bot[bot]
|
open
|
[
"high priority",
"triaged",
"module: flaky-tests",
"skipped",
"oncall: pt2",
"module: inductor"
] | 3
|
NONE
|
Platforms: inductor
This test was disabled because it is failing in CI. See [recent examples](https://hud.pytorch.org/flakytest?name=test_comprehensive_std_cuda_float64&suite=TestInductorOpInfoCUDA&limit=100) and the most recent trunk [workflow logs](https://github.com/pytorch/pytorch/runs/41519872219).
Over the past 3 hours, it has been determined flaky in 3 workflow(s) with 3 failures and 3 successes.
**Debugging instructions (after clicking on the recent samples link):**
DO NOT ASSUME THINGS ARE OKAY IF THE CI IS GREEN. We now shield flaky tests from developers so CI will thus be green but it will be harder to parse the logs.
To find relevant log snippets:
1. Click on the workflow logs linked above
2. Click on the Test step of the job so that it is expanded. Otherwise, the grepping will not work.
3. Grep for `test_comprehensive_std_cuda_float64`
4. There should be several instances run (as flaky tests are rerun in CI) from which you can study the logs.
<details><summary>Sample error message</summary>
```
Traceback (most recent call last):
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_device_type.py", line 1135, in test_wrapper
return test(*args, **kwargs)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_device_type.py", line 1434, in only_fn
return fn(self, *args, **kwargs)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 2291, in wrapper
fn(*args, **kwargs)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_device_type.py", line 1215, in dep_fn
return fn(slf, *args, **kwargs)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_device_type.py", line 1215, in dep_fn
return fn(slf, *args, **kwargs)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_device_type.py", line 1215, in dep_fn
return fn(slf, *args, **kwargs)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 1612, in wrapper
fn(*args, **kwargs)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 1534, in wrapper
fn(*args, **kwargs)
File "/opt/conda/envs/py_3.10/lib/python3.10/unittest/mock.py", line 1379, in patched
return func(*newargs, **newkeywargs)
File "/opt/conda/envs/py_3.10/lib/python3.10/contextlib.py", line 79, in inner
return func(*args, **kwds)
File "/opt/conda/envs/py_3.10/lib/python3.10/contextlib.py", line 79, in inner
return func(*args, **kwds)
File "/opt/conda/envs/py_3.10/lib/python3.10/contextlib.py", line 79, in inner
return func(*args, **kwds)
File "/var/lib/jenkins/workspace/test/inductor/test_torchinductor_opinfo.py", line 962, in inner
raise e
File "/var/lib/jenkins/workspace/test/inductor/test_torchinductor_opinfo.py", line 954, in inner
fn(self, device, dtype, op)
File "/var/lib/jenkins/workspace/test/inductor/test_torchinductor_opinfo.py", line 1207, in test_comprehensive
raise e
File "/var/lib/jenkins/workspace/test/inductor/test_torchinductor_opinfo.py", line 1182, in test_comprehensive
self.check_model_gpu(
File "/opt/conda/envs/py_3.10/lib/python3.10/contextlib.py", line 79, in inner
return func(*args, **kwds)
File "/var/lib/jenkins/workspace/test/inductor/test_torchinductor.py", line 657, in check_model_gpu
check_model(
File "/var/lib/jenkins/workspace/test/inductor/test_torchinductor.py", line 608, in check_model
actual_grad = compute_grads(example_inputs, kwargs, actual, grads)
File "/var/lib/jenkins/workspace/test/inductor/test_torchinductor.py", line 406, in compute_grads
return torch.autograd.grad(
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/autograd/__init__.py", line 503, in grad
result = _engine_run_backward(
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/autograd/graph.py", line 824, in _engine_run_backward
return Variable._execution_engine.run_backward( # Calls into the C++ engine to run the backward pass
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/autograd/function.py", line 307, in apply
return user_fn(self, *args)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_functorch/_aot_autograd/runtime_wrappers.py", line 2191, in backward
return impl_fn()
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_functorch/_aot_autograd/runtime_wrappers.py", line 2177, in impl_fn
out = CompiledFunction._backward_impl(ctx, all_args)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_functorch/_aot_autograd/runtime_wrappers.py", line 2272, in _backward_impl
CompiledFunction.compiled_bw = aot_config.bw_compiler(
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_functorch/aot_autograd.py", line 483, in __call__
return self.compiler_fn(gm, example_inputs)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_dynamo/backends/common.py", line 73, in _wrapped_bw_compiler
disable(
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_dynamo/eval_frame.py", line 857, in _fn
return fn(*args, **kwargs)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_utils_internal.py", line 97, in wrapper_function
return function(*args, **kwargs)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_inductor/compile_fx.py", line 2242, in bw_compiler
return inner_compile(
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_inductor/compile_fx.py", line 727, in compile_fx_inner
return wrap_compiler_debug(_compile_fx_inner, compiler_name="inductor")(
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_dynamo/repro/after_aot.py", line 124, in debug_wrapper
inner_compiled_fn = compiler_fn(gm, example_inputs)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_inductor/compile_fx.py", line 845, in _compile_fx_inner
mb_compiled_graph, cache_info = FxGraphCache.load_with_key(
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_inductor/codecache.py", line 1405, in load_with_key
compiled_graph, cache_info = FxGraphCache._lookup_graph(
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_inductor/codecache.py", line 1156, in _lookup_graph
artifact_path = graph.after_deserialization(constants)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_inductor/output_code.py", line 709, in after_deserialization
code_cache = PyCodeCache.load_by_key_path(
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_inductor/codecache.py", line 3006, in load_by_key_path
mod = _reload_python_module(key, path, set_sys_modules=in_toplevel)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_inductor/runtime/compile_tasks.py", line 31, in _reload_python_module
exec(code, mod.__dict__, mod.__dict__)
File "/tmp/tmpmim7h_tc/4p/c4p445frjsllyjtd5mdojwjckmfsqk2juy672hcadctwru4i4cb4.py", line 139, in <module>
async_compile.wait(globals())
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_inductor/async_compile.py", line 448, in wait
self._wait_futures(scope)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_inductor/async_compile.py", line 468, in _wait_futures
scope[key] = result.result()
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_inductor/codecache.py", line 3528, in result
self.static_autotuner.precompile( # type: ignore[union-attr]
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_inductor/runtime/triton_heuristics.py", line 323, in precompile
self._make_launchers()
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_inductor/runtime/triton_heuristics.py", line 480, in _make_launchers
launchers.append(result.make_launcher())
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_inductor/runtime/triton_heuristics.py", line 1277, in make_launcher
self.reload_cubin_path()
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_inductor/runtime/triton_heuristics.py", line 1269, in reload_cubin_path
raise RuntimeError(
RuntimeError: ('Cubin file saved by TritonBundler not found at %s', '/tmp/tmpmim7h_tc/triton/AP7RGW3WA3RFQD7GUVY4DA3ZL4GT3NEK6BMLCK54YY64JHDWG6EQ/triton_per_fused_div_eq_masked_fill_mean_mul_sub_0.cubin')
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 3154, in wrapper
method(*args, **kwargs)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 3154, in wrapper
method(*args, **kwargs)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_device_type.py", line 426, in instantiated_test
result = test(self, **param_kwargs)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 1612, in wrapper
fn(*args, **kwargs)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_device_type.py", line 1147, in test_wrapper
raise e_tracked from e
Exception: Caused by sample input at index 8: SampleInput(input=Tensor[size=(5, 5, 5), device="cuda:0", dtype=torch.float64], args=(), kwargs={'dim': 'None', 'correction': 'None'}, broadcasts_input=False, name='')
To execute this test, run the following from the base repo dir:
PYTORCH_OPINFO_SAMPLE_INPUT_INDEX=8 python test/inductor/test_torchinductor_opinfo.py TestInductorOpInfoCUDA.test_comprehensive_std_cuda_float64
This message can be suppressed by setting PYTORCH_PRINT_REPRO_ON_FAILURE=0
```
</details>
Test file path: `inductor/test_torchinductor_opinfo.py`
cc @ezyang @gchanan @zou3519 @kadeng @msaroufim @clee2000 @chauhang @penguinwu @voznesenskym @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @chenyang78 @muchulee8 @amjames @aakhundov
| true
|
3,035,311,300
|
DISABLED test_comprehensive_cummin_cuda_float16 (__main__.TestInductorOpInfoCUDA)
|
pytorch-bot[bot]
|
open
|
[
"high priority",
"triaged",
"module: flaky-tests",
"skipped",
"oncall: pt2",
"module: inductor"
] | 3
|
NONE
|
Platforms: inductor
This test was disabled because it is failing in CI. See [recent examples](https://hud.pytorch.org/flakytest?name=test_comprehensive_cummin_cuda_float16&suite=TestInductorOpInfoCUDA&limit=100) and the most recent trunk [workflow logs](https://github.com/pytorch/pytorch/runs/41520086944).
Over the past 3 hours, it has been determined flaky in 3 workflow(s) with 3 failures and 3 successes.
**Debugging instructions (after clicking on the recent samples link):**
DO NOT ASSUME THINGS ARE OKAY IF THE CI IS GREEN. We now shield flaky tests from developers so CI will thus be green but it will be harder to parse the logs.
To find relevant log snippets:
1. Click on the workflow logs linked above
2. Click on the Test step of the job so that it is expanded. Otherwise, the grepping will not work.
3. Grep for `test_comprehensive_cummin_cuda_float16`
4. There should be several instances run (as flaky tests are rerun in CI) from which you can study the logs.
<details><summary>Sample error message</summary>
```
Traceback (most recent call last):
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/testing/_internal/common_device_type.py", line 1135, in test_wrapper
return test(*args, **kwargs)
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/testing/_internal/common_device_type.py", line 1434, in only_fn
return fn(self, *args, **kwargs)
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/testing/_internal/common_utils.py", line 2291, in wrapper
fn(*args, **kwargs)
~~^^^^^^^^^^^^^^^^^
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/testing/_internal/common_device_type.py", line 1215, in dep_fn
return fn(slf, *args, **kwargs)
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/testing/_internal/common_device_type.py", line 1215, in dep_fn
return fn(slf, *args, **kwargs)
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/testing/_internal/common_device_type.py", line 1215, in dep_fn
return fn(slf, *args, **kwargs)
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/testing/_internal/common_utils.py", line 1612, in wrapper
fn(*args, **kwargs)
~~^^^^^^^^^^^^^^^^^
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/testing/_internal/common_utils.py", line 1534, in wrapper
fn(*args, **kwargs)
~~^^^^^^^^^^^^^^^^^
File "/opt/conda/envs/py_3.13/lib/python3.13/unittest/mock.py", line 1424, in patched
return func(*newargs, **newkeywargs)
File "/opt/conda/envs/py_3.13/lib/python3.13/contextlib.py", line 85, in inner
return func(*args, **kwds)
File "/opt/conda/envs/py_3.13/lib/python3.13/contextlib.py", line 85, in inner
return func(*args, **kwds)
File "/opt/conda/envs/py_3.13/lib/python3.13/contextlib.py", line 85, in inner
return func(*args, **kwds)
File "/var/lib/jenkins/workspace/test/inductor/test_torchinductor_opinfo.py", line 962, in inner
raise e
File "/var/lib/jenkins/workspace/test/inductor/test_torchinductor_opinfo.py", line 954, in inner
fn(self, device, dtype, op)
~~^^^^^^^^^^^^^^^^^^^^^^^^^
File "/var/lib/jenkins/workspace/test/inductor/test_torchinductor_opinfo.py", line 1207, in test_comprehensive
raise e
File "/var/lib/jenkins/workspace/test/inductor/test_torchinductor_opinfo.py", line 1182, in test_comprehensive
self.check_model_gpu(
~~~~~~~~~~~~~~~~~~~~^
fn,
^^^
...<2 lines>...
**adjusted_kwargs,
^^^^^^^^^^^^^^^^^^
)
^
File "/opt/conda/envs/py_3.13/lib/python3.13/contextlib.py", line 85, in inner
return func(*args, **kwds)
File "/var/lib/jenkins/workspace/test/inductor/test_torchinductor.py", line 657, in check_model_gpu
check_model(
~~~~~~~~~~~^
self,
^^^^^
...<13 lines>...
output_process_fn_grad=output_process_fn_grad,
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
)
^
File "/var/lib/jenkins/workspace/test/inductor/test_torchinductor.py", line 608, in check_model
actual_grad = compute_grads(example_inputs, kwargs, actual, grads)
File "/var/lib/jenkins/workspace/test/inductor/test_torchinductor.py", line 406, in compute_grads
return torch.autograd.grad(
~~~~~~~~~~~~~~~~~~~^
flat_diff_results,
^^^^^^^^^^^^^^^^^^
...<3 lines>...
retain_graph=True,
^^^^^^^^^^^^^^^^^^
)
^
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/autograd/__init__.py", line 503, in grad
result = _engine_run_backward(
outputs,
...<5 lines>...
accumulate_grad=False,
)
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/autograd/graph.py", line 824, in _engine_run_backward
return Variable._execution_engine.run_backward( # Calls into the C++ engine to run the backward pass
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
t_outputs, *args, **kwargs
^^^^^^^^^^^^^^^^^^^^^^^^^^
) # Calls into the C++ engine to run the backward pass
^
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/autograd/function.py", line 307, in apply
return user_fn(self, *args)
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_functorch/_aot_autograd/runtime_wrappers.py", line 2191, in backward
return impl_fn()
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_functorch/_aot_autograd/runtime_wrappers.py", line 2177, in impl_fn
out = CompiledFunction._backward_impl(ctx, all_args)
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_functorch/_aot_autograd/runtime_wrappers.py", line 2272, in _backward_impl
CompiledFunction.compiled_bw = aot_config.bw_compiler(
~~~~~~~~~~~~~~~~~~~~~~^
copy.deepcopy(bw_module), placeholder_list
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
)
^
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_functorch/aot_autograd.py", line 483, in __call__
return self.compiler_fn(gm, example_inputs)
~~~~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_dynamo/backends/common.py", line 73, in _wrapped_bw_compiler
disable(
~~~~~~~~
bw_compiler_fn, reason="do not trace backward compiler function"
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
)(*args, **kwargs),
~^^^^^^^^^^^^^^^^^
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_dynamo/eval_frame.py", line 857, in _fn
return fn(*args, **kwargs)
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_utils_internal.py", line 97, in wrapper_function
return function(*args, **kwargs)
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_inductor/compile_fx.py", line 2242, in bw_compiler
return inner_compile(
gm,
...<5 lines>...
boxed_forward_device_index=forward_device,
)
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_inductor/compile_fx.py", line 727, in compile_fx_inner
return wrap_compiler_debug(_compile_fx_inner, compiler_name="inductor")(
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~^
gm,
^^^
example_inputs,
^^^^^^^^^^^^^^^
**kwargs,
^^^^^^^^^
)
^
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_dynamo/repro/after_aot.py", line 124, in debug_wrapper
inner_compiled_fn = compiler_fn(gm, example_inputs)
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_inductor/compile_fx.py", line 895, in _compile_fx_inner
raise InductorError(e, currentframe()).with_traceback(
e.__traceback__
) from None
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_inductor/compile_fx.py", line 879, in _compile_fx_inner
mb_compiled_graph = fx_codegen_and_compile(
gm, example_inputs, inputs_to_check, **graph_kwargs
)
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_inductor/compile_fx.py", line 1495, in fx_codegen_and_compile
return scheme.codegen_and_compile(gm, example_inputs, inputs_to_check, graph_kwargs)
~~~~~~~~~~~~~~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_inductor/compile_fx.py", line 1382, in codegen_and_compile
compiled_module = graph.compile_to_module()
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_inductor/graph.py", line 2219, in compile_to_module
return self._compile_to_module()
~~~~~~~~~~~~~~~~~~~~~~~^^
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_inductor/graph.py", line 2266, in _compile_to_module
mod = PyCodeCache.load_by_key_path(
key,
...<2 lines>...
attrs={**self.constants, **self.torchbind_constants},
)
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_inductor/codecache.py", line 3006, in load_by_key_path
mod = _reload_python_module(key, path, set_sys_modules=in_toplevel)
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_inductor/runtime/compile_tasks.py", line 31, in _reload_python_module
exec(code, mod.__dict__, mod.__dict__)
~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/tmp/tmp0oeuehfd/kg/ckgh2zdoxk3cbwcnsrzhjyu3uxlzcbvu3hponad4r3o3dpe5r3sl.py", line 113, in <module>
async_compile.wait(globals())
~~~~~~~~~~~~~~~~~~^^^^^^^^^^^
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_inductor/async_compile.py", line 448, in wait
self._wait_futures(scope)
~~~~~~~~~~~~~~~~~~^^^^^^^
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_inductor/async_compile.py", line 468, in _wait_futures
scope[key] = result.result()
~~~~~~~~~~~~~^^
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_inductor/codecache.py", line 3508, in result
return self.result_fn()
~~~~~~~~~~~~~~^^
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_inductor/async_compile.py", line 343, in get_result
kernel.precompile(
~~~~~~~~~~~~~~~~~^
warm_cache_only=False,
^^^^^^^^^^^^^^^^^^^^^^
reload_kernel=reload_kernel_in_parent,
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
static_triton_bundle_key=CompiledTritonKernels.key(source_code),
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
)
^
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_inductor/runtime/triton_heuristics.py", line 323, in precompile
self._make_launchers()
~~~~~~~~~~~~~~~~~~~~^^
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_inductor/runtime/triton_heuristics.py", line 480, in _make_launchers
launchers.append(result.make_launcher())
~~~~~~~~~~~~~~~~~~~~^^
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_inductor/runtime/triton_heuristics.py", line 1277, in make_launcher
self.reload_cubin_path()
~~~~~~~~~~~~~~~~~~~~~~^^
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_inductor/runtime/triton_heuristics.py", line 1269, in reload_cubin_path
raise RuntimeError(
"Cubin file saved by TritonBundler not found at %s", cubin_location
)
torch._inductor.exc.InductorError: RuntimeError: ('Cubin file saved by TritonBundler not found at %s', '/tmp/tmpsfkhla7e/triton/M4ZV7FWQHOAK7P7X2TA3UR7KHGYK7DD3KTUXO7EYESA6B4KC5OPA/triton_poi_fused_scatter_add_zeros_0.cubin')
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/testing/_internal/common_utils.py", line 3154, in wrapper
method(*args, **kwargs)
~~~~~~^^^^^^^^^^^^^^^^^
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/testing/_internal/common_utils.py", line 3154, in wrapper
method(*args, **kwargs)
~~~~~~^^^^^^^^^^^^^^^^^
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/testing/_internal/common_device_type.py", line 426, in instantiated_test
result = test(self, **param_kwargs)
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/testing/_internal/common_utils.py", line 1612, in wrapper
fn(*args, **kwargs)
~~^^^^^^^^^^^^^^^^^
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/testing/_internal/common_device_type.py", line 1147, in test_wrapper
raise e_tracked from e
Exception: Caused by sample input at index 2: SampleInput(input=Tensor[size=(), device="cuda:0", dtype=torch.float16], args=(0), kwargs={}, broadcasts_input=False, name='')
To execute this test, run the following from the base repo dir:
PYTORCH_OPINFO_SAMPLE_INPUT_INDEX=2 python test/inductor/test_torchinductor_opinfo.py TestInductorOpInfoCUDA.test_comprehensive_cummin_cuda_float16
This message can be suppressed by setting PYTORCH_PRINT_REPRO_ON_FAILURE=0
```
</details>
Test file path: `inductor/test_torchinductor_opinfo.py`
cc @ezyang @gchanan @zou3519 @kadeng @msaroufim @clee2000 @chauhang @penguinwu @voznesenskym @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @chenyang78 @muchulee8 @amjames @aakhundov
| true
|
3,035,311,227
|
DISABLED test_comprehensive_polygamma_polygamma_n_0_cuda_float64 (__main__.TestInductorOpInfoCUDA)
|
pytorch-bot[bot]
|
open
|
[
"high priority",
"triaged",
"module: flaky-tests",
"skipped",
"oncall: pt2",
"module: inductor"
] | 3
|
NONE
|
Platforms: inductor
This test was disabled because it is failing in CI. See [recent examples](https://hud.pytorch.org/flakytest?name=test_comprehensive_polygamma_polygamma_n_0_cuda_float64&suite=TestInductorOpInfoCUDA&limit=100) and the most recent trunk [workflow logs](https://github.com/pytorch/pytorch/runs/41520086944).
Over the past 3 hours, it has been determined flaky in 3 workflow(s) with 3 failures and 3 successes.
**Debugging instructions (after clicking on the recent samples link):**
DO NOT ASSUME THINGS ARE OKAY IF THE CI IS GREEN. We now shield flaky tests from developers so CI will thus be green but it will be harder to parse the logs.
To find relevant log snippets:
1. Click on the workflow logs linked above
2. Click on the Test step of the job so that it is expanded. Otherwise, the grepping will not work.
3. Grep for `test_comprehensive_polygamma_polygamma_n_0_cuda_float64`
4. There should be several instances run (as flaky tests are rerun in CI) from which you can study the logs.
<details><summary>Sample error message</summary>
```
Traceback (most recent call last):
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/testing/_internal/common_device_type.py", line 1135, in test_wrapper
return test(*args, **kwargs)
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/testing/_internal/common_device_type.py", line 1434, in only_fn
return fn(self, *args, **kwargs)
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/testing/_internal/common_utils.py", line 2291, in wrapper
fn(*args, **kwargs)
~~^^^^^^^^^^^^^^^^^
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/testing/_internal/common_device_type.py", line 1215, in dep_fn
return fn(slf, *args, **kwargs)
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/testing/_internal/common_device_type.py", line 1215, in dep_fn
return fn(slf, *args, **kwargs)
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/testing/_internal/common_device_type.py", line 1215, in dep_fn
return fn(slf, *args, **kwargs)
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/testing/_internal/common_utils.py", line 1612, in wrapper
fn(*args, **kwargs)
~~^^^^^^^^^^^^^^^^^
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/testing/_internal/common_utils.py", line 1534, in wrapper
fn(*args, **kwargs)
~~^^^^^^^^^^^^^^^^^
File "/opt/conda/envs/py_3.13/lib/python3.13/unittest/mock.py", line 1424, in patched
return func(*newargs, **newkeywargs)
File "/opt/conda/envs/py_3.13/lib/python3.13/contextlib.py", line 85, in inner
return func(*args, **kwds)
File "/opt/conda/envs/py_3.13/lib/python3.13/contextlib.py", line 85, in inner
return func(*args, **kwds)
File "/opt/conda/envs/py_3.13/lib/python3.13/contextlib.py", line 85, in inner
return func(*args, **kwds)
File "/var/lib/jenkins/workspace/test/inductor/test_torchinductor_opinfo.py", line 962, in inner
raise e
File "/var/lib/jenkins/workspace/test/inductor/test_torchinductor_opinfo.py", line 954, in inner
fn(self, device, dtype, op)
~~^^^^^^^^^^^^^^^^^^^^^^^^^
File "/var/lib/jenkins/workspace/test/inductor/test_torchinductor_opinfo.py", line 1207, in test_comprehensive
raise e
File "/var/lib/jenkins/workspace/test/inductor/test_torchinductor_opinfo.py", line 1182, in test_comprehensive
self.check_model_gpu(
~~~~~~~~~~~~~~~~~~~~^
fn,
^^^
...<2 lines>...
**adjusted_kwargs,
^^^^^^^^^^^^^^^^^^
)
^
File "/opt/conda/envs/py_3.13/lib/python3.13/contextlib.py", line 85, in inner
return func(*args, **kwds)
File "/var/lib/jenkins/workspace/test/inductor/test_torchinductor.py", line 657, in check_model_gpu
check_model(
~~~~~~~~~~~^
self,
^^^^^
...<13 lines>...
output_process_fn_grad=output_process_fn_grad,
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
)
^
File "/var/lib/jenkins/workspace/test/inductor/test_torchinductor.py", line 608, in check_model
actual_grad = compute_grads(example_inputs, kwargs, actual, grads)
File "/var/lib/jenkins/workspace/test/inductor/test_torchinductor.py", line 406, in compute_grads
return torch.autograd.grad(
~~~~~~~~~~~~~~~~~~~^
flat_diff_results,
^^^^^^^^^^^^^^^^^^
...<3 lines>...
retain_graph=True,
^^^^^^^^^^^^^^^^^^
)
^
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/autograd/__init__.py", line 503, in grad
result = _engine_run_backward(
outputs,
...<5 lines>...
accumulate_grad=False,
)
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/autograd/graph.py", line 824, in _engine_run_backward
return Variable._execution_engine.run_backward( # Calls into the C++ engine to run the backward pass
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
t_outputs, *args, **kwargs
^^^^^^^^^^^^^^^^^^^^^^^^^^
) # Calls into the C++ engine to run the backward pass
^
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/autograd/function.py", line 307, in apply
return user_fn(self, *args)
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_functorch/_aot_autograd/runtime_wrappers.py", line 2191, in backward
return impl_fn()
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_functorch/_aot_autograd/runtime_wrappers.py", line 2177, in impl_fn
out = CompiledFunction._backward_impl(ctx, all_args)
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_functorch/_aot_autograd/runtime_wrappers.py", line 2272, in _backward_impl
CompiledFunction.compiled_bw = aot_config.bw_compiler(
~~~~~~~~~~~~~~~~~~~~~~^
copy.deepcopy(bw_module), placeholder_list
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
)
^
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_functorch/aot_autograd.py", line 483, in __call__
return self.compiler_fn(gm, example_inputs)
~~~~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_dynamo/backends/common.py", line 73, in _wrapped_bw_compiler
disable(
~~~~~~~~
bw_compiler_fn, reason="do not trace backward compiler function"
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
)(*args, **kwargs),
~^^^^^^^^^^^^^^^^^
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_dynamo/eval_frame.py", line 857, in _fn
return fn(*args, **kwargs)
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_utils_internal.py", line 97, in wrapper_function
return function(*args, **kwargs)
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_inductor/compile_fx.py", line 2242, in bw_compiler
return inner_compile(
gm,
...<5 lines>...
boxed_forward_device_index=forward_device,
)
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_inductor/compile_fx.py", line 727, in compile_fx_inner
return wrap_compiler_debug(_compile_fx_inner, compiler_name="inductor")(
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~^
gm,
^^^
example_inputs,
^^^^^^^^^^^^^^^
**kwargs,
^^^^^^^^^
)
^
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_dynamo/repro/after_aot.py", line 124, in debug_wrapper
inner_compiled_fn = compiler_fn(gm, example_inputs)
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_inductor/compile_fx.py", line 895, in _compile_fx_inner
raise InductorError(e, currentframe()).with_traceback(
e.__traceback__
) from None
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_inductor/compile_fx.py", line 879, in _compile_fx_inner
mb_compiled_graph = fx_codegen_and_compile(
gm, example_inputs, inputs_to_check, **graph_kwargs
)
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_inductor/compile_fx.py", line 1495, in fx_codegen_and_compile
return scheme.codegen_and_compile(gm, example_inputs, inputs_to_check, graph_kwargs)
~~~~~~~~~~~~~~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_inductor/compile_fx.py", line 1382, in codegen_and_compile
compiled_module = graph.compile_to_module()
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_inductor/graph.py", line 2219, in compile_to_module
return self._compile_to_module()
~~~~~~~~~~~~~~~~~~~~~~~^^
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_inductor/graph.py", line 2266, in _compile_to_module
mod = PyCodeCache.load_by_key_path(
key,
...<2 lines>...
attrs={**self.constants, **self.torchbind_constants},
)
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_inductor/codecache.py", line 3006, in load_by_key_path
mod = _reload_python_module(key, path, set_sys_modules=in_toplevel)
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_inductor/runtime/compile_tasks.py", line 31, in _reload_python_module
exec(code, mod.__dict__, mod.__dict__)
~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/tmp/tmpljzrw080/7i/c7irlnvmhqn7v5ctyocaq4mar4mkjat4fuuot6c6ckj2owdf7sss.py", line 76, in <module>
async_compile.wait(globals())
~~~~~~~~~~~~~~~~~~^^^^^^^^^^^
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_inductor/async_compile.py", line 448, in wait
self._wait_futures(scope)
~~~~~~~~~~~~~~~~~~^^^^^^^
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_inductor/async_compile.py", line 468, in _wait_futures
scope[key] = result.result()
~~~~~~~~~~~~~^^
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_inductor/codecache.py", line 3508, in result
return self.result_fn()
~~~~~~~~~~~~~~^^
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_inductor/async_compile.py", line 343, in get_result
kernel.precompile(
~~~~~~~~~~~~~~~~~^
warm_cache_only=False,
^^^^^^^^^^^^^^^^^^^^^^
reload_kernel=reload_kernel_in_parent,
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
static_triton_bundle_key=CompiledTritonKernels.key(source_code),
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
)
^
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_inductor/runtime/triton_heuristics.py", line 323, in precompile
self._make_launchers()
~~~~~~~~~~~~~~~~~~~~^^
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_inductor/runtime/triton_heuristics.py", line 480, in _make_launchers
launchers.append(result.make_launcher())
~~~~~~~~~~~~~~~~~~~~^^
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_inductor/runtime/triton_heuristics.py", line 1277, in make_launcher
self.reload_cubin_path()
~~~~~~~~~~~~~~~~~~~~~~^^
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_inductor/runtime/triton_heuristics.py", line 1269, in reload_cubin_path
raise RuntimeError(
"Cubin file saved by TritonBundler not found at %s", cubin_location
)
torch._inductor.exc.InductorError: RuntimeError: ('Cubin file saved by TritonBundler not found at %s', '/tmp/tmpeil4pd_4/triton/OOFF3ODYJG4RK7L3D6HOIKYUGGVZHZ64GTPSTQZIHACDQY5SMZBQ/triton_poi_fused_mul_0.cubin')
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/testing/_internal/common_utils.py", line 3154, in wrapper
method(*args, **kwargs)
~~~~~~^^^^^^^^^^^^^^^^^
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/testing/_internal/common_utils.py", line 3154, in wrapper
method(*args, **kwargs)
~~~~~~^^^^^^^^^^^^^^^^^
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/testing/_internal/common_device_type.py", line 426, in instantiated_test
result = test(self, **param_kwargs)
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/testing/_internal/common_utils.py", line 1612, in wrapper
fn(*args, **kwargs)
~~^^^^^^^^^^^^^^^^^
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/testing/_internal/common_device_type.py", line 1147, in test_wrapper
raise e_tracked from e
Exception: Caused by sample input at index 3: SampleInput(input=Tensor[size=(5, 5), device="cuda:0", dtype=torch.float64], args=(4), kwargs={}, broadcasts_input=False, name='')
To execute this test, run the following from the base repo dir:
PYTORCH_OPINFO_SAMPLE_INPUT_INDEX=3 python test/inductor/test_torchinductor_opinfo.py TestInductorOpInfoCUDA.test_comprehensive_polygamma_polygamma_n_0_cuda_float64
This message can be suppressed by setting PYTORCH_PRINT_REPRO_ON_FAILURE=0
```
</details>
Test file path: `inductor/test_torchinductor_opinfo.py`
cc @ezyang @gchanan @zou3519 @kadeng @msaroufim @clee2000 @chauhang @penguinwu @voznesenskym @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @chenyang78 @muchulee8 @amjames @aakhundov
| true
|
3,035,249,515
|
add codegen layer specialization dispatch
|
bobrenjc93
|
closed
|
[
"release notes: fx",
"fx",
"module: dynamo",
"ciflow/inductor"
] | 1
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #152757
* __->__ #152670
* #152601
* #152597
* #152596
cc @ezyang @SherlockNoMad @EikanWang @jgong5 @wenzhe-nrv @voznesenskym @penguinwu @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @jiayisunx @chenyang78 @kadeng @chauhang @amjames
| true
|
3,035,221,842
|
Added documentation for nonzero_static function (#152347)
|
sanjai-11
|
open
|
[
"triaged",
"open source",
"topic: not user facing"
] | 3
|
NONE
|
Fixes #152347
This PR updates the documentation for the nonzero_static function to include examples, arguments, and the behavior of the function.
| true
|
3,035,152,969
|
Torch BF16 group gemm hangs in backward pass - core issue isolated, needs proper resolution.
|
lessw2020
|
open
|
[
"module: cuda",
"module: error checking",
"triaged",
"module: deadlock"
] | 3
|
CONTRIBUTOR
|
### 🐛 Describe the bug
When using the native torch grouped gemm (`torch._grouped_mm`) enabled via https://github.com/pytorch/pytorch/pull/150374,
users reported hanging after a certain number of iterations in the backward pass. (https://github.com/pytorch/torchtitan/issues/1118)
With a deep dive from and big credit to @rciocoiu, a minimal repro case has been established that points to the core issue: if an individual m_offsets group is empty (i.e. an expert has no assigned tokens), then torch grouped gemm is fine in the forward pass but hangs in the backward pass.
We find that padding any empty offset to at least 8 avoids the hang.
From there, additional testing was done because users also reported that Adam vs. AdamW made a difference in the hang, with Adam running for longer.
Using titan llama4 and running with Adam, I verified that the hang occurs as soon as a given expert has zero tokens assigned.
By contrast, when running with AdamW, this hang is ultimately encountered much sooner: with AdamW, an expert only has to drop to the min_alignment_m used in offset creation - tested and verified with both 8 and 16 as min_alignment in the permute-indices creation.
As soon as an expert hits that number of tokens, it hangs there rather than reaching zero as with Adam, and thus matches the user-reported differences based on optimizer.
example with Adam and an expert with zero tokens - screenshot of hang:
<img width="1016" alt="Image" src="https://github.com/user-attachments/assets/5fc1637c-d97e-40bd-9beb-f77a0ef32b1c" />
by contrast with AdamW and min_alignment of 8 - screenshot of hang and token to expert assignment...note that 2 experts have 8 tokens assigned:
<img width="1016" alt="Image" src="https://github.com/user-attachments/assets/579483ff-f884-4893-b748-441f1946f076" />
Easy repro scenario:
~~~
import torch
num_experts = 4
M, K, N = 48, 8, 16
# to repro hang, make a given expert have 0 tokens ala (0, 8, 16, 32, 40) or (8,8,16,32,40)
m_offsets_hang = (8, 8, 32, 40)
m_offsets = (8, 16, 32, 40)
x = torch.randn(M, K, dtype=torch.bfloat16, device="cuda", requires_grad=True)
print(f"{x.shape=}")
w = torch.randn(
num_experts, K, N, dtype=torch.bfloat16, device="cuda", requires_grad=True
)
print(f"{w.shape=}")
offs = torch.tensor(m_offsets, dtype=torch.int32, device="cuda")
print(f"Running simple forward...")
o = torch._grouped_mm(x, w, offs)
print(f"forward completed!")
print(f"Running backward...")
o.backward(torch.randn_like(o))
print(f"backward completed!")
torch.cuda.synchronize()
print(f"Completed! {o.shape=}")
~~~
Probably a few resolutions here:
a - for an implementation-side fix, will work on padding out any empty m_offsets to avoid passing in zero via our generate_permute_indices kernel (see the sketch below).
b - ideally, the kernel itself could correct the issue in the backward pass; otherwise it likely also needs to check whether any offsets are zero, or at least we need to document that it requires no empty offsets.
c - unclear what difference the optimizer is making, but clearly there is a subtle difference as shown above. However, maybe we don't care if this all goes away with minimum padding and enforcement of no zero offsets.
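A minimal sketch of the offset-side padding for (a), assuming m_offsets are cumulative per-expert row counts as in the repro above; the helper name is hypothetical, and in practice the token buffer also has to be padded to match, which is what generate_permute_indices would handle:
~~~
import itertools
import torch

def pad_empty_groups(group_sizes, min_size=8):
    """Return cumulative offsets where every expert group has at least
    `min_size` rows, so no group handed to torch._grouped_mm is empty."""
    padded = [max(s, min_size) for s in group_sizes]
    return torch.tensor(list(itertools.accumulate(padded)), dtype=torch.int32)

# Sizes derived from the hanging offsets (8, 8, 32, 40): the second expert
# has 0 tokens. Padding it to 8 yields offsets that avoid the hang.
offs = pad_empty_groups([8, 0, 24, 8])
print(offs)  # tensor([ 8, 16, 40, 48], dtype=torch.int32)
~~~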
### Versions
The latest nightly works fine for running the minimal repro.
Use current torchtitan and llama4 in experimental to repro there.
for completeness:
PyTorch version: 2.8.0.dev20250430+cu128
Is debug build: False
CUDA used to build PyTorch: 12.8
ROCM used to build PyTorch: N/A
OS: CentOS Stream 9 (x86_64)
GCC version: (GCC) 11.5.0 20240719 (Red Hat 11.5.0-5)
Clang version: Could not collect
CMake version: version 3.26.5
Libc version: glibc-2.34
Python version: 3.12.9 | packaged by Anaconda, Inc. | (main, Feb 6 2025, 18:56:27) [GCC 11.2.0] (64-bit runtime)
Python platform: Linux-6.4.3-0_fbk20_zion_2830_g3e5ab162667d-x86_64-with-glibc2.34
Is CUDA available: True
CUDA runtime version: 12.6.85
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration:
GPU 0: NVIDIA H100
GPU 1: NVIDIA H100
GPU 2: NVIDIA H100
GPU 3: NVIDIA H100
GPU 4: NVIDIA H100
GPU 5: NVIDIA H100
GPU 6: NVIDIA H100
GPU 7: NVIDIA H100
Nvidia driver version: 535.154.05
cuDNN version: Probably one of the following:
/usr/lib64/libcudnn.so.9.8.0
/usr/lib64/libcudnn_adv.so.9.8.0
/usr/lib64/libcudnn_cnn.so.9.8.0
/usr/lib64/libcudnn_engines_precompiled.so.9.8.0
/usr/lib64/libcudnn_engines_runtime_compiled.so.9.8.0
/usr/lib64/libcudnn_graph.so.9.8.0
/usr/lib64/libcudnn_heuristic.so.9.8.0
/usr/lib64/libcudnn_ops.so.9.8.0
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 52 bits physical, 57 bits virtual
Byte Order: Little Endian
CPU(s): 384
On-line CPU(s) list: 0-383
Vendor ID: AuthenticAMD
Model name: AMD EPYC 9654 96-Core Processor
CPU family: 25
Model: 17
Thread(s) per core: 2
Core(s) per socket: 96
Socket(s): 2
Stepping: 1
Frequency boost: enabled
CPU(s) scaling MHz: 84%
CPU max MHz: 3707.8120
CPU min MHz: 1500.0000
BogoMIPS: 4792.85
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ht syscall nx mmxext fxsr_opt pdpe1gb rdtscp lm constant_tsc rep_good amd_lbr_v2 nopl nonstop_tsc cpuid extd_apicid aperfmperf rapl pni pclmulqdq monitor ssse3 fma cx16 pcid sse4_1 sse4_2 x2apic movbe popcnt aes xsave avx f16c rdrand lahf_lm cmp_legacy svm extapic cr8_legacy abm sse4a misalignsse 3dnowprefetch osvw ibs skinit wdt tce topoext perfctr_core perfctr_nb bpext perfctr_llc mwaitx cpb cat_l3 cdp_l3 invpcid_single hw_pstate ssbd mba perfmon_v2 ibrs ibpb stibp ibrs_enhanced vmmcall fsgsbase bmi1 avx2 smep bmi2 erms invpcid cqm rdt_a avx512f avx512dq rdseed adx smap avx512ifma clflushopt clwb avx512cd sha_ni avx512bw avx512vl xsaveopt xsavec xgetbv1 xsaves cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local avx512_bf16 clzero irperf xsaveerptr rdpru wbnoinvd amd_ppin cppc arat npt lbrv svm_lock nrip_save tsc_scale vmcb_clean flushbyasid decodeassists pausefilter pfthreshold avic v_vmsave_vmload vgif x2avic v_spec_ctrl vnmi avx512vbmi umip pku ospke avx512_vbmi2 gfni vaes vpclmulqdq avx512_vnni avx512_bitalg avx512_vpopcntdq rdpid overflow_recov succor smca fsrm flush_l1d
Virtualization: AMD-V
L1d cache: 6 MiB (192 instances)
L1i cache: 6 MiB (192 instances)
L2 cache: 192 MiB (192 instances)
L3 cache: 768 MiB (24 instances)
NUMA node(s): 2
NUMA node0 CPU(s): 0-95,192-287
NUMA node1 CPU(s): 96-191,288-383
Vulnerability Gather data sampling: Not affected
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Not affected
Vulnerability Retbleed: Not affected
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Vulnerable: eIBRS with unprivileged eBPF
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Versions of relevant libraries:
[pip3] numpy==2.2.3
[pip3] nvidia-cublas-cu12==12.8.3.14
[pip3] nvidia-cuda-cupti-cu12==12.8.57
[pip3] nvidia-cuda-nvrtc-cu12==12.8.61
[pip3] nvidia-cuda-runtime-cu12==12.8.57
[pip3] nvidia-cudnn-cu12==9.8.0.87
[pip3] nvidia-cufft-cu12==11.3.3.41
[pip3] nvidia-curand-cu12==10.3.9.55
[pip3] nvidia-cusolver-cu12==11.7.2.55
[pip3] nvidia-cusparse-cu12==12.5.7.53
[pip3] nvidia-cusparselt-cu12==0.6.3
[pip3] nvidia-nccl-cu12==2.26.2
[pip3] nvidia-nvjitlink-cu12==12.8.61
[pip3] nvidia-nvtx-cu12==12.8.55
[pip3] pytorch-triton==3.3.0+git96316ce5
[pip3] torch==2.8.0.dev20250430+cu128
[pip3] torchao==0.11.0+git2fcab01d
[pip3] torchdata==0.11.0
[pip3] torchtitan==0.0.2
[pip3] triton==3.3.0
[conda] numpy 2.2.3 pypi_0 pypi
[conda] nvidia-cublas-cu12 12.8.3.14 pypi_0 pypi
[conda] nvidia-cuda-cupti-cu12 12.8.57 pypi_0 pypi
[conda] nvidia-cuda-nvrtc-cu12 12.8.61 pypi_0 pypi
[conda] nvidia-cuda-runtime-cu12 12.8.57 pypi_0 pypi
[conda] nvidia-cudnn-cu12 9.8.0.87 pypi_0 pypi
[conda] nvidia-cufft-cu12 11.3.3.41 pypi_0 pypi
[conda] nvidia-curand-cu12 10.3.9.55 pypi_0 pypi
[conda] nvidia-cusolver-cu12 11.7.2.55 pypi_0 pypi
[conda] nvidia-cusparse-cu12 12.5.7.53 pypi_0 pypi
[conda] nvidia-cusparselt-cu12 0.6.3 pypi_0 pypi
[conda] nvidia-nccl-cu12 2.26.2 pypi_0 pypi
[conda] nvidia-nvjitlink-cu12 12.8.61 pypi_0 pypi
[conda] nvidia-nvtx-cu12 12.8.55 pypi_0 pypi
[conda] pytorch-triton 3.3.0+git96316ce5 pypi_0 pypi
[conda] torch 2.8.0.dev20250430+cu128 pypi_0 pypi
[conda] torchao 0.11.0+git2fcab01d dev_0 <develop>
[conda] torchdata 0.11.0 pypi_0 pypi
[conda] torchtitan 0.0.2 pypi_0 pypi
[conda] triton 3.3.0 pypi_0 pypi
cc @ptrblck @msaroufim @eqy @jerryzh168 @malfet @vincentqb @jbschlosser @albanD @janeyx99 @crcrpar
| true
|
3,035,140,229
|
[StaticCudaLauncher] Ensure cuda context exists before launching kernels
|
jamesjwu
|
closed
|
[
"Merged",
"ciflow/trunk",
"topic: not user facing",
"ciflow/inductor"
] | 4
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #152667
Triton does this already due to https://github.com/triton-lang/triton/pull/3731/files, in order to fix https://github.com/pytorch/pytorch/issues/124565. We need to do the same thing as triton here, so that in cases with no compilation we still have a cuda context in the backward autograd thread.
Fixes https://github.com/pytorch/pytorch/issues/152639
| true
|
3,035,120,540
|
DISABLED test_comprehensive_nansum_cuda_int32 (__main__.TestInductorOpInfoCUDA)
|
pytorch-bot[bot]
|
open
|
[
"high priority",
"triaged",
"module: flaky-tests",
"skipped",
"oncall: pt2",
"module: inductor"
] | 6
|
NONE
|
Platforms: inductor
This test was disabled because it is failing in CI. See [recent examples](https://hud.pytorch.org/flakytest?name=test_comprehensive_nansum_cuda_int32&suite=TestInductorOpInfoCUDA&limit=100) and the most recent trunk [workflow logs](https://github.com/pytorch/pytorch/runs/41514137478).
Over the past 3 hours, it has been determined flaky in 3 workflow(s) with 3 failures and 3 successes.
**Debugging instructions (after clicking on the recent samples link):**
DO NOT ASSUME THINGS ARE OKAY IF THE CI IS GREEN. We now shield flaky tests from developers so CI will thus be green but it will be harder to parse the logs.
To find relevant log snippets:
1. Click on the workflow logs linked above
2. Click on the Test step of the job so that it is expanded. Otherwise, the grepping will not work.
3. Grep for `test_comprehensive_nansum_cuda_int32`
4. There should be several instances run (as flaky tests are rerun in CI) from which you can study the logs.
<details><summary>Sample error message</summary>
```
Traceback (most recent call last):
File "/opt/conda/envs/py_3.12/lib/python3.12/site-packages/torch/testing/_internal/common_device_type.py", line 1135, in test_wrapper
return test(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^
File "/opt/conda/envs/py_3.12/lib/python3.12/site-packages/torch/testing/_internal/common_device_type.py", line 1434, in only_fn
return fn(self, *args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/conda/envs/py_3.12/lib/python3.12/site-packages/torch/testing/_internal/common_utils.py", line 2291, in wrapper
fn(*args, **kwargs)
File "/opt/conda/envs/py_3.12/lib/python3.12/site-packages/torch/testing/_internal/common_device_type.py", line 1215, in dep_fn
return fn(slf, *args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/conda/envs/py_3.12/lib/python3.12/site-packages/torch/testing/_internal/common_device_type.py", line 1215, in dep_fn
return fn(slf, *args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/conda/envs/py_3.12/lib/python3.12/site-packages/torch/testing/_internal/common_device_type.py", line 1215, in dep_fn
return fn(slf, *args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/conda/envs/py_3.12/lib/python3.12/site-packages/torch/testing/_internal/common_utils.py", line 1612, in wrapper
fn(*args, **kwargs)
File "/opt/conda/envs/py_3.12/lib/python3.12/site-packages/torch/testing/_internal/common_utils.py", line 1534, in wrapper
fn(*args, **kwargs)
File "/opt/conda/envs/py_3.12/lib/python3.12/unittest/mock.py", line 1396, in patched
return func(*newargs, **newkeywargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/conda/envs/py_3.12/lib/python3.12/contextlib.py", line 81, in inner
return func(*args, **kwds)
^^^^^^^^^^^^^^^^^^^
File "/opt/conda/envs/py_3.12/lib/python3.12/contextlib.py", line 81, in inner
return func(*args, **kwds)
^^^^^^^^^^^^^^^^^^^
File "/opt/conda/envs/py_3.12/lib/python3.12/contextlib.py", line 81, in inner
return func(*args, **kwds)
^^^^^^^^^^^^^^^^^^^
File "/var/lib/jenkins/workspace/test/inductor/test_torchinductor_opinfo.py", line 962, in inner
raise e
File "/var/lib/jenkins/workspace/test/inductor/test_torchinductor_opinfo.py", line 954, in inner
fn(self, device, dtype, op)
File "/var/lib/jenkins/workspace/test/inductor/test_torchinductor_opinfo.py", line 1207, in test_comprehensive
raise e
File "/var/lib/jenkins/workspace/test/inductor/test_torchinductor_opinfo.py", line 1182, in test_comprehensive
self.check_model_gpu(
File "/opt/conda/envs/py_3.12/lib/python3.12/contextlib.py", line 81, in inner
return func(*args, **kwds)
^^^^^^^^^^^^^^^^^^^
File "/var/lib/jenkins/workspace/test/inductor/test_torchinductor.py", line 657, in check_model_gpu
check_model(
File "/var/lib/jenkins/workspace/test/inductor/test_torchinductor.py", line 498, in check_model
actual = run(*example_inputs, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/conda/envs/py_3.12/lib/python3.12/site-packages/torch/_dynamo/eval_frame.py", line 676, in _fn
raise e.remove_dynamo_frames() from None # see TORCHDYNAMO_VERBOSE=1
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/conda/envs/py_3.12/lib/python3.12/site-packages/torch/_inductor/compile_fx.py", line 895, in _compile_fx_inner
raise InductorError(e, currentframe()).with_traceback(
File "/opt/conda/envs/py_3.12/lib/python3.12/site-packages/torch/_inductor/compile_fx.py", line 879, in _compile_fx_inner
mb_compiled_graph = fx_codegen_and_compile(
^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/conda/envs/py_3.12/lib/python3.12/site-packages/torch/_inductor/compile_fx.py", line 1495, in fx_codegen_and_compile
return scheme.codegen_and_compile(gm, example_inputs, inputs_to_check, graph_kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/conda/envs/py_3.12/lib/python3.12/site-packages/torch/_inductor/compile_fx.py", line 1382, in codegen_and_compile
compiled_module = graph.compile_to_module()
^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/conda/envs/py_3.12/lib/python3.12/site-packages/torch/_inductor/graph.py", line 2219, in compile_to_module
return self._compile_to_module()
^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/conda/envs/py_3.12/lib/python3.12/site-packages/torch/_inductor/graph.py", line 2266, in _compile_to_module
mod = PyCodeCache.load_by_key_path(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/conda/envs/py_3.12/lib/python3.12/site-packages/torch/_inductor/codecache.py", line 3006, in load_by_key_path
mod = _reload_python_module(key, path, set_sys_modules=in_toplevel)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/conda/envs/py_3.12/lib/python3.12/site-packages/torch/_inductor/runtime/compile_tasks.py", line 31, in _reload_python_module
exec(code, mod.__dict__, mod.__dict__)
File "/tmp/tmp4agyq00r/ps/cpsl6ls62w4zdcezatmq26huxfmf6ubfaurtepqoljne6uc4be7f.py", line 88, in <module>
async_compile.wait(globals())
File "/opt/conda/envs/py_3.12/lib/python3.12/site-packages/torch/_inductor/async_compile.py", line 448, in wait
self._wait_futures(scope)
File "/opt/conda/envs/py_3.12/lib/python3.12/site-packages/torch/_inductor/async_compile.py", line 468, in _wait_futures
scope[key] = result.result()
^^^^^^^^^^^^^^^
File "/opt/conda/envs/py_3.12/lib/python3.12/site-packages/torch/_inductor/codecache.py", line 3508, in result
return self.result_fn()
^^^^^^^^^^^^^^^^
File "/opt/conda/envs/py_3.12/lib/python3.12/site-packages/torch/_inductor/async_compile.py", line 343, in get_result
kernel.precompile(
File "/opt/conda/envs/py_3.12/lib/python3.12/site-packages/torch/_inductor/runtime/triton_heuristics.py", line 323, in precompile
self._make_launchers()
File "/opt/conda/envs/py_3.12/lib/python3.12/site-packages/torch/_inductor/runtime/triton_heuristics.py", line 480, in _make_launchers
launchers.append(result.make_launcher())
^^^^^^^^^^^^^^^^^^^^^^
File "/opt/conda/envs/py_3.12/lib/python3.12/site-packages/torch/_inductor/runtime/triton_heuristics.py", line 1277, in make_launcher
self.reload_cubin_path()
File "/opt/conda/envs/py_3.12/lib/python3.12/site-packages/torch/_inductor/runtime/triton_heuristics.py", line 1269, in reload_cubin_path
raise RuntimeError(
torch._inductor.exc.InductorError: RuntimeError: ('Cubin file saved by TritonBundler not found at %s', '/tmp/tmp293puv6s/triton/DVMG2XNEYK6PZSRZWPLOPYELLAXCLLIOG6I46OTBG7APKD7MVTFQ/triton_poi_fused_nansum_0.cubin')
Set TORCHDYNAMO_VERBOSE=1 for the internal stack trace (please do this especially if you're reporting a bug to PyTorch). For even more developer context, set TORCH_LOGS="+dynamo"
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "/opt/conda/envs/py_3.12/lib/python3.12/site-packages/torch/testing/_internal/common_utils.py", line 3154, in wrapper
method(*args, **kwargs)
File "/opt/conda/envs/py_3.12/lib/python3.12/site-packages/torch/testing/_internal/common_utils.py", line 3154, in wrapper
method(*args, **kwargs)
File "/opt/conda/envs/py_3.12/lib/python3.12/site-packages/torch/testing/_internal/common_device_type.py", line 426, in instantiated_test
result = test(self, **param_kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/conda/envs/py_3.12/lib/python3.12/site-packages/torch/testing/_internal/common_utils.py", line 1612, in wrapper
fn(*args, **kwargs)
File "/opt/conda/envs/py_3.12/lib/python3.12/site-packages/torch/testing/_internal/common_device_type.py", line 1147, in test_wrapper
raise e_tracked from e
Exception: Caused by sample input at index 0: SampleInput(input=Tensor[size=(), device="cuda:0", dtype=torch.int32], args=(), kwargs={}, broadcasts_input=False, name='')
To execute this test, run the following from the base repo dir:
PYTORCH_OPINFO_SAMPLE_INPUT_INDEX=0 python test/inductor/test_torchinductor_opinfo.py TestInductorOpInfoCUDA.test_comprehensive_nansum_cuda_int32
This message can be suppressed by setting PYTORCH_PRINT_REPRO_ON_FAILURE=0
```
</details>
Test file path: `inductor/test_torchinductor_opinfo.py`
cc @ezyang @gchanan @zou3519 @kadeng @msaroufim @clee2000 @chauhang @penguinwu @voznesenskym @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @chenyang78 @muchulee8 @amjames @aakhundov
| true
|
3,035,120,471
|
MXFP8 Fix broken bias support for mxfp8
|
drisspg
|
open
|
[
"topic: not user facing"
] | 1
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #152665
* #152744
| true
|
3,035,112,432
|
Raise error when no record on extra_files
|
ILCSFNO
|
open
|
[
"oncall: jit",
"triaged",
"open source",
"release notes: jit"
] | 2
|
CONTRIBUTOR
|
Fixes #152178
cc @EikanWang @jgong5 @wenzhe-nrv @sanchitintel
| true
|
3,035,109,993
|
[MPS][BE] Do not dispatch empty kernels
|
malfet
|
closed
|
[
"Merged",
"release notes: mps",
"ciflow/mps"
] | 3
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #152515
* __->__ #152663
If `iter.numel()` is zero, there is no need to dispatch the kernel
| true
|
3,035,085,705
|
Re-enable FakeTensor caching for SymInts
|
aorenste
|
open
|
[
"topic: not user facing",
"module: dynamo",
"ciflow/inductor"
] | 2
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #152662
* #152661
Summary:
This backs out D60320595 which itself turned off FakeTensor caching when a SymInt was present.
Tests seem to pass so I'm assuming some dynamic shape work fixed what was breaking previously.
Test Plan: Reran the tests listed in T196779132 and they seem to pass.
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @chauhang @amjames
| true
|
3,035,085,623
|
Fix evaluate_expr to include suppress_guards_tls in cache key
|
aorenste
|
open
|
[
"ciflow/trunk",
"release notes: fx",
"topic: not user facing",
"fx",
"module: dynamo",
"ciflow/inductor"
] | 4
|
CONTRIBUTOR
|
ShapeEnv.evaluate_expr() behaves differently based on the (tls) global "suppress_guards" - so its cache key needs to include that value.
This came up because #152662 triggered it in the test `test/dynamo/test_exc.py::ExcTests::test_trigger_bisect_on_error` - fixing this caused that test to work again.
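A generic illustration of the point above (not the actual ShapeEnv code; names are made up): any memoized result must key on every piece of ambient state that changes the answer, here a thread-local suppress-guards flag.
```python
import threading

_tls = threading.local()

def suppress_guards() -> bool:
    return getattr(_tls, "suppress_guards", False)

_cache: dict = {}

def evaluate_expr_cached(expr):
    # Include the TLS flag in the key; otherwise a result computed while
    # guards were suppressed could be served when they are not.
    key = (expr, suppress_guards())
    if key not in _cache:
        _cache[key] = _evaluate_expr(expr, guards_suppressed=suppress_guards())
    return _cache[key]

def _evaluate_expr(expr, guards_suppressed):
    # Stand-in for the real evaluation logic.
    return (expr, guards_suppressed)
```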
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #152661
cc @ezyang @SherlockNoMad @EikanWang @jgong5 @wenzhe-nrv @voznesenskym @penguinwu @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @jiayisunx @chenyang78 @kadeng @chauhang @amjames
| true
|
3,035,085,365
|
[Inductor] Fix kernel argument ordering when using dynamic shapes with workspace
|
NikhilAPatel
|
closed
|
[
"Merged",
"ciflow/trunk",
"topic: not user facing",
"module: inductor",
"ciflow/inductor"
] | 4
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #152660
Summary:
This PR fixes a bug in the Triton kernel invocation path where the `workspace_tensor` was inserted before the unpacked `extra_args` list in the final kernel argument list. This broke the expected ordering of arguments when dynamic shape size hints are emitted.
When dynamic shapes are used, `extra_args` contains both size hint arguments and grid arguments. The kernel expects the argument list to follow the order: **size hints → workspace tensor → grid args**. But previously, the `workspace_tensor` was inserted before unpacking `extra_args`, resulting in: **workspace tensor → size hints → grid args**, which is incorrect.
This fix constructs the workspace tensor earlier, allowing it to be slotted in after the size hints and before the grid arguments, restoring the expected argument layout.
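An illustrative sketch of the ordering issue (hypothetical stand-in values, not the actual inductor code):
```python
# Stand-ins for the argument groups described in the summary.
call_args = ["in_ptr0", "out_ptr0"]
size_hints = ["size_hint_s0", "size_hint_s1"]  # emitted only with dynamic shapes
grid_args = ["grid_0", "grid_1"]
workspace_tensor = "workspace"

# Broken ordering: workspace inserted before unpacking extra_args.
broken = call_args + [workspace_tensor] + size_hints + grid_args

# Expected ordering: size hints -> workspace tensor -> grid args.
fixed = call_args + size_hints + [workspace_tensor] + grid_args
```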
Test Plan:
contbuild and OSS CI
Reviewers: paulzhan
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
| true
|
3,035,082,511
|
temp
|
NikhilAPatel
|
closed
|
[
"module: inductor",
"ciflow/inductor"
] | 1
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #152659
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
| true
|
3,035,059,337
|
Fix the basic description of torch.min(), torch.max(), torch.all(), torch.any()
|
ILCSFNO
|
open
|
[
"triaged",
"open source",
"release notes: python_frontend"
] | 1
|
CONTRIBUTOR
|
Fixes #152176
| true
|
3,035,059,279
|
cleanup, refactor and add missing self._dde_suppressed checks
|
laithsakka
|
open
|
[
"Merged",
"Reverted",
"ciflow/trunk",
"release notes: fx",
"fx",
"ciflow/inductor",
"ci-no-td"
] | 10
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #152657
Two things beyond the cleanups and refactoring:
1) do not use propagate_real_tensors to resolve eval under guard_or_true/guard_or_false.
2) do not guard for dimensions of type DimDynamic.OBLIVIOUS_SIZE under guard_or_true/guard_or_false.
cc @ezyang @SherlockNoMad @EikanWang @jgong5 @wenzhe-nrv
| true
|
3,035,057,671
|
Add explicit error message for def infer_size(a, b): that specifies that the non-broadcast path was picked due to unbacked existing in both inputs.
|
laithsakka
|
open
|
[
"triaged",
"oncall: pt2",
"module: dynamic shapes"
] | 0
|
CONTRIBUTOR
|
Follow-up task for my comment on this PR:
https://github.com/pytorch/pytorch/pull/152146/files
```
you can land it; as for the future i might revisit this to make the code more understandable
basically first we want to check if there is broadcasting using guard_or_none.
if any of them did not return None we are done.
if both return None i would want to add an explicit extra message to the torch_check that says we have assumed this path because both sizeA == sizeB are unbacked
no action required from you at this moment. i will file an issue for this
```
cc @chauhang @penguinwu @ezyang @bobrenjc93
| true
|
3,035,038,067
|
UNSTABLE docker-cache-mi300 / docker-cache
|
jithunnair-amd
|
closed
|
[
"module: rocm",
"module: ci",
"triaged",
"unstable"
] | 3
|
COLLABORATOR
|
## Reason
Docker caching is not working on the new MI300 runners. Temporarily disabling it until we can set up the docker caching properly.
cc @jeffdaily @sunway513 @pruthvistony @ROCmSupport @dllehr-amd @jataylo @hongxiayang @naromero77amd @seemethere @malfet @pytorch/pytorch-dev-infra
| true
|
3,035,029,793
|
Make assertion about pass callable print the bad pass
|
wconstab
|
closed
|
[
"Merged",
"ciflow/trunk",
"topic: not user facing",
"module: inductor",
"ciflow/inductor"
] | 15
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #152654
If you passed an invalid string, you can now easily see what it is
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
| true
|
3,035,029,693
|
Remove incorrect assertion
|
wconstab
|
closed
|
[
"Merged",
"ciflow/trunk",
"topic: not user facing",
"module: inductor",
"ciflow/inductor"
] | 4
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #152654
* __->__ #152653
* #152565
It's only aspirational that the 'improvement' value is positive. In fact,
the pass could make a collective more exposed, and we shouldn't assert
here in that case.
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
| true
|
3,035,020,643
|
Refactor some common autotune-related utils into a new file
|
masnesral
|
closed
|
[
"topic: not user facing",
"module: inductor",
"ciflow/inductor"
] | 2
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #152652
Summary: I'll need some of the benchmark-related functions surfaced in a common utils file so I can use them for remote autotuning. This PR is almost a straight move of a few nested functions from inside AlgorithmSelectorCache, plus the datatypes they depend on, except:
* All hell breaks loose with mypy on select_algorithm.py with this refactor, so I disabled it. Initial investigation suggests there are potentially a ton of type problems in this file, so I'll tackle fixing types in a follow-up diff.
* Introduced a new interface "Benchmarkable" as the base type that gets benchmarked. This is looking forward to the remote autotuning implementation where the types that we'll send across the wire must be serializable. So we won't be using the exact same types we pass to these utilities now, e.g., ExternKernelCaller.
* Rather than checking types explicitly, I introduced an is_extern() method to the Benchmarkable interface.
Test Plan: Existing unit tests
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
| true
|
3,035,014,338
|
Add assert_fp8_close helper for FP8 tensor comparisons
|
vedant713
|
open
|
[
"triaged",
"open source",
"topic: not user facing"
] | 16
|
NONE
|
This PR adds a new helper function `assert_fp8_close` in `torch/testing/_comparison.py` that makes it easy to compare FP8 tensors with a sane default tolerance:
- Casts both `actual` and `expected` FP8 tensors to `float32`
- Calls `torch.testing.assert_close` with `rtol=1e-1` and `atol=1e-2`
- Allows overriding those tolerances via the usual kwargs
This addresses the problem in #152647 where FP8 comparisons fall back to zero tolerance.
Fixes #152647
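A minimal sketch of a helper with the behavior described above (the actual implementation in the PR may differ):
```python
import torch

def assert_fp8_close(actual, expected, rtol=1e-1, atol=1e-2, **kwargs):
    """Compare two FP8 tensors by upcasting to float32 and using loose
    default tolerances; rtol/atol and other kwargs can be overridden."""
    torch.testing.assert_close(
        actual.to(torch.float32),
        expected.to(torch.float32),
        rtol=rtol,
        atol=atol,
        **kwargs,
    )

# Usage (requires an FP8 dtype such as torch.float8_e4m3fn):
# a = torch.randn(4, 4).to(torch.float8_e4m3fn)
# assert_fp8_close(a, a.clone())
```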
| true
|
3,034,976,121
|
thread through specialization to compile_fx
|
bobrenjc93
|
closed
|
[
"module: dynamo",
"ciflow/inductor",
"release notes: AO frontend"
] | 1
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #152670
* __->__ #152650
* #152601
* #152600
* #152597
* #152596
* #152598
* #151407
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @chauhang @amjames
| true
|
3,034,971,993
|
DISABLED test_nvshmem
|
kwen2501
|
closed
|
[
"module: ci"
] | 3
|
CONTRIBUTOR
|
## Reason
NVSHMEM is not installed on CI machines yet. Disabling for now.
`test_nvshmem` is from [pull / linux-jammy-py3.9-gcc11 / test (distributed, 1, 2, ephemeral.linux.2xlarge)](https://hud.pytorch.org/pr/pytorch/pytorch/151498#41511007289)
## Enable plan:
(i) Build PyTorch with NVSHMEM by default;
(ii) Install nvshmem wheels in CI workflows.
cc @seemethere @malfet @pytorch/pytorch-dev-infra
| true
|
3,034,965,481
|
[Flight Recorder] Added logging after FR dump completed
|
VieEeEw
|
closed
|
[
"oncall: distributed",
"fb-exported",
"Merged",
"ciflow/trunk",
"release notes: distributed (c10d)",
"ciflow/pull"
] | 8
|
CONTRIBUTOR
|
Summary: TSIA
Test Plan: eyes
Differential Revision: D74041147
cc @H-Huang @awgu @wanchaol @fegin @fduwjj @wz337 @wconstab @d4l3k
| true
|
3,034,959,894
|
Check for if two tensors are overall similar instead of bitwise similar?
|
henrylhtsang
|
open
|
[
"triaged",
"module: testing"
] | 1
|
CONTRIBUTOR
|
### 🚀 The feature, motivation and pitch
So the existing testing util is `torch.testing.assert_close`, which checks element-by-element closeness. But for quantization-related ops, you usually can't expect every element pair in the tensors to be close.
Suggestion:
Something like https://github.com/deepseek-ai/DeepGEMM/blob/d374456787c51ad9c4b3c5dbc3668ea3ad9bb9c2/deep_gemm/utils.py#L161
This checks for overall similarity instead of element-by-element similarity.
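One possible shape of such a check - an aggregate relative-difference metric over the whole tensor rather than a per-element comparison (a sketch, not the exact DeepGEMM implementation):
```python
import torch

def assert_overall_close(x, y, threshold=1e-2):
    """Fail only if the tensors differ in aggregate, not element by element."""
    x, y = x.double(), y.double()
    denom = (x * x + y * y).sum().clamp_min(1e-12)
    # 0 when identical, grows toward 1 as the tensors decorrelate.
    diff = 1 - 2 * (x * y).sum() / denom
    assert diff.item() < threshold, f"overall relative diff {diff.item():.3e} >= {threshold:.1e}"
```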
### Alternatives
_No response_
### Additional context
_No response_
| true
|
3,034,929,223
|
[do-not-land][ca] default on for CI
|
xmfan
|
open
|
[
"module: dynamo",
"ciflow/inductor",
"keep-going",
"module: compiled autograd"
] | 2
|
MEMBER
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #152646
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @chauhang @amjames
| true
|
3,034,905,302
|
ProcessGroupGloo.allgather_into_tensor_coalesced crashes with CUDA tensors
|
d4l3k
|
open
|
[
"oncall: distributed",
"triaged"
] | 0
|
MEMBER
|
### 🐛 Describe the bug
```py
import os
os.environ["MASTER_ADDR"] = "localhost"
os.environ["MASTER_PORT"] = "0"
os.environ["RANK"] = "0"
os.environ["WORLD_SIZE"] = "1"
import torch.distributed as dist
import torch
dist.init_process_group("gloo", rank=0, world_size=1)
tensor = torch.zeros(10, device="cuda")
with dist._coalescing_manager():
dist.all_gather_into_tensor(tensor, tensor)
```
stack trace
```
(lldb) bt
* thread #78, name = 'python', stop reason = signal SIGSEGV: invalid permissions for mapped object (fault address=0x7ffc0fe00200)
* frame #0: 0x00007ffff7d90a72 libc.so.6`__memmove_avx512_unaligned_erms + 178
frame #1: 0x00007fffdb9c6d88 libtorch_cpu.so`gloo::allgather(gloo::AllgatherOptions&) + 264
frame #2: 0x00007fffd97d3f3a libtorch_cpu.so`c10d::(anonymous namespace)::AsyncAllgatherCoalescedWork::allgather_coalesced() + 506
frame #3: 0x00007fffd97e0e98 libtorch_cpu.so`c10d::ProcessGroupGloo::AsyncWork::execute(c10::intrusive_ptr<c10d::ProcessGroupGloo::AsyncWork, c10::detail::intrusive_target_default_null_type<c10d::ProcessGroupGloo::AsyncWork>> const&) + 104
frame #4: 0x00007fffd97e0fd7 libtorch_cpu.so`c10d::ProcessGroupGloo::runLoop(int) + 231
frame #5: 0x00007fff794dbbf4 libstdc++.so.6`std::execute_native_thread_routine(__p=0x0000000006fba530) at thread.cc:82:18
```
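An untested workaround sketch, continuing from the repro above and assuming the crash is specific to CUDA tensors under the gloo backend: stage the coalesced collective through CPU copies.
```py
# Untested: keep the gloo collective on CPU tensors, then copy back.
cpu_in = tensor.cpu()
cpu_out = torch.zeros_like(cpu_in)
with dist._coalescing_manager():
    dist.all_gather_into_tensor(cpu_out, cpu_in)
tensor.copy_(cpu_out)
```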
### Versions
Collecting environment information...
PyTorch version: 2.8.0.dev20250501+cu128
Is debug build: False
CUDA used to build PyTorch: 12.8
ROCM used to build PyTorch: N/A
OS: CentOS Stream 9 (x86_64)
GCC version: (GCC) 11.5.0 20240719 (Red Hat 11.5.0-5)
Clang version: Could not collect
CMake version: Could not collect
Libc version: glibc-2.34
Python version: 3.12.9 | packaged by Anaconda, Inc. | (main, Feb 6 2025, 18:56:27) [GCC 11.2.0] (64-bit runtime)
Python platform: Linux-6.4.3-0_fbk14_hardened_2601_gcd42476b84e9-x86_64-with-glibc2.34
Is CUDA available: True
CUDA runtime version: 12.2.140
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration:
GPU 0: NVIDIA H100
GPU 1: NVIDIA H100
GPU 2: NVIDIA H100
GPU 3: NVIDIA H100
Nvidia driver version: 550.90.07
cuDNN version: Probably one of the following:
/usr/lib64/libcudnn.so.8.8.1
/usr/lib64/libcudnn.so.9.1.1
/usr/lib64/libcudnn_adv.so.9.1.1
/usr/lib64/libcudnn_adv_infer.so.8.8.1
/usr/lib64/libcudnn_adv_train.so.8.8.1
/usr/lib64/libcudnn_cnn.so.9.1.1
/usr/lib64/libcudnn_cnn_infer.so.8.8.1
/usr/lib64/libcudnn_cnn_train.so.8.8.1
/usr/lib64/libcudnn_engines_precompiled.so.9.1.1
/usr/lib64/libcudnn_engines_runtime_compiled.so.9.1.1
/usr/lib64/libcudnn_graph.so.9.1.1
/usr/lib64/libcudnn_heuristic.so.9.1.1
/usr/lib64/libcudnn_ops.so.9.1.1
/usr/lib64/libcudnn_ops_infer.so.8.8.1
/usr/lib64/libcudnn_ops_train.so.8.8.1
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 52 bits physical, 57 bits virtual
Byte Order: Little Endian
CPU(s): 184
On-line CPU(s) list: 0-183
Vendor ID: AuthenticAMD
Model name: AMD EPYC 9654 96-Core Processor
CPU family: 25
Model: 17
Thread(s) per core: 1
Core(s) per socket: 184
Socket(s): 1
Stepping: 1
BogoMIPS: 4792.79
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ht syscall nx mmxext fxsr_opt pdpe1gb rdtscp lm rep_good nopl cpuid extd_apicid tsc_known_freq pni pclmulqdq ssse3 fma cx16 pcid sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand hypervisor lahf_lm cmp_legacy svm cr8_legacy abm sse4a misalignsse 3dnowprefetch osvw perfctr_core invpcid_single ssbd ibrs ibpb stibp vmmcall fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid avx512f avx512dq rdseed adx smap avx512ifma clflushopt clwb avx512cd sha_ni avx512bw avx512vl xsaveopt xsavec xgetbv1 xsaves avx512_bf16 clzero xsaveerptr wbnoinvd arat npt lbrv nrip_save tsc_scale vmcb_clean pausefilter pfthreshold v_vmsave_vmload vgif avx512vbmi umip pku ospke avx512_vbmi2 gfni vaes vpclmulqdq avx512_vnni avx512_bitalg avx512_vpopcntdq rdpid fsrm arch_capabilities
Virtualization: AMD-V
Hypervisor vendor: KVM
Virtualization type: full
L1d cache: 11.5 MiB (184 instances)
L1i cache: 11.5 MiB (184 instances)
L2 cache: 92 MiB (184 instances)
L3 cache: 2.9 GiB (184 instances)
NUMA node(s): 1
NUMA node0 CPU(s): 0-183
Vulnerability Gather data sampling: Not affected
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Not affected
Vulnerability Retbleed: Not affected
Vulnerability Spec store bypass: Vulnerable
Vulnerability Spectre v1: Vulnerable: __user pointer sanitization and usercopy barriers only; no swapgs barriers
Vulnerability Spectre v2: Vulnerable, IBPB: disabled, STIBP: disabled, PBRSB-eIBRS: Not affected
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Versions of relevant libraries:
[pip3] mypy_extensions==1.1.0
[pip3] numpy==2.1.2
[pip3] nvidia-cublas-cu12==12.8.3.14
[pip3] nvidia-cuda-cupti-cu12==12.8.57
[pip3] nvidia-cuda-nvrtc-cu12==12.8.61
[pip3] nvidia-cuda-runtime-cu12==12.8.57
[pip3] nvidia-cudnn-cu12==9.8.0.87
[pip3] nvidia-cufft-cu12==11.3.3.41
[pip3] nvidia-curand-cu12==10.3.9.55
[pip3] nvidia-cusolver-cu12==11.7.2.55
[pip3] nvidia-cusparse-cu12==12.5.7.53
[pip3] nvidia-cusparselt-cu12==0.6.3
[pip3] nvidia-nccl-cu12==2.26.2
[pip3] nvidia-nvjitlink-cu12==12.8.61
[pip3] nvidia-nvtx-cu12==12.8.55
[pip3] pytorch-triton==3.3.0+git96316ce5
[pip3] torch==2.8.0.dev20250501+cu128
[pip3] torchaudio==2.6.0.dev20250501+cu128
[pip3] torchdata==0.11.0
[pip3] torchft==0.1.1
[pip3] torchvision==0.22.0.dev20250501+cu128
[pip3] triton==3.3.0
[conda] numpy 2.1.2 pypi_0 pypi
[conda] nvidia-cublas-cu12 12.8.3.14 pypi_0 pypi
[conda] nvidia-cuda-cupti-cu12 12.8.57 pypi_0 pypi
[conda] nvidia-cuda-nvrtc-cu12 12.8.61 pypi_0 pypi
[conda] nvidia-cuda-runtime-cu12 12.8.57 pypi_0 pypi
[conda] nvidia-cudnn-cu12 9.8.0.87 pypi_0 pypi
[conda] nvidia-cufft-cu12 11.3.3.41 pypi_0 pypi
[conda] nvidia-curand-cu12 10.3.9.55 pypi_0 pypi
[conda] nvidia-cusolver-cu12 11.7.2.55 pypi_0 pypi
[conda] nvidia-cusparse-cu12 12.5.7.53 pypi_0 pypi
[conda] nvidia-cusparselt-cu12 0.6.3 pypi_0 pypi
[conda] nvidia-nccl-cu12 2.26.2 pypi_0 pypi
[conda] nvidia-nvjitlink-cu12 12.8.61 pypi_0 pypi
[conda] nvidia-nvtx-cu12 12.8.55 pypi_0 pypi
[conda] pytorch-triton 3.3.0+git96316ce5 pypi_0 pypi
[conda] torch 2.8.0.dev20250501+cu128 pypi_0 pypi
[conda] torchaudio 2.6.0.dev20250501+cu128 pypi_0 pypi
[conda] torchdata 0.11.0 pypi_0 pypi
[conda] torchft 0.1.1 pypi_0 pypi
[conda] torchvision 0.22.0.dev20250501+cu128 pypi_0 pypi
[conda] triton 3.3.0 pypi_0 pypi
cc @H-Huang @awgu @wanchaol @fegin @fduwjj @wz337 @wconstab
| true
|
3,034,903,036
|
[inductor] Realize bucketize/searchsorted output
|
davidberard98
|
closed
|
[
"Merged",
"Reverted",
"ciflow/trunk",
"module: inductor",
"ciflow/inductor",
"release notes: inductor",
"ci-no-td"
] | 9
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #152644
**Context**:
bucketize is relatively expensive, computationally. So it's not always profitable to fuse it if it means doing extra computation. For example, this repro:
https://gist.github.com/davidberard98/7fd6af7e6291787c246c705945a25554
shows a slowdown from 56us (eager) to ~100us (torch.compile-d): instead of computing 2\*\*15 binary searches, the fused version does 2\*\*15 * 384 - one for each of the broadcasted outputs.
**Solution**:
Realize the output of bucketize (and searchsorted, which also uses inductor's ops.bucketize). If there's an opportunity to do non-broadcasted fusions, the scheduler can still apply such fusions later on.
After this PR, instead of a slowdown, we see an improvement from 56us (eager) to 33us (compiled).
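A sketch of the kind of pattern described, with shapes mirroring the numbers quoted above (this is not the linked gist, just an illustration):
```python
import torch

@torch.compile
def f(values, boundaries, weights):
    # One binary search per element of `values` -> a (2**15,) tensor of bucket ids.
    ids = torch.bucketize(values, boundaries)
    # Broadcast against (2**15, 384): if bucketize is fused into this pointwise
    # op instead of being realized, each search is redone 384 times.
    return ids.unsqueeze(-1) + weights

values = torch.randn(2**15)
boundaries = torch.linspace(-3.0, 3.0, steps=128)
weights = torch.randn(2**15, 384)
out = f(values, boundaries, weights)
```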
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
Differential Revision: [D74036850](https://our.internmc.facebook.com/intern/diff/D74036850)
| true
|
3,034,902,203
|
[BE]remove vulkan test
|
yangw-dev
|
open
|
[
"module: vulkan",
"topic: not user facing"
] | 5
|
CONTRIBUTOR
|
During the investigation of the test jobs we run on pull requests, it seems we do not have a build environment for Vulkan; removing the logic and the test for cleanup.
| true
|
3,034,897,673
|
[CUTLASS][WIP] Gate rowwise matmul CUTLASS kernels by compute capability
|
eqy
|
open
|
[
"module: cuda",
"triaged",
"open source",
"topic: not user facing",
"module: float8"
] | 1
|
COLLABORATOR
|
Does this abate some compile-time warning spam?
cc @ptrblck @msaroufim @jerryzh168 @yanbing-j @vkuzo @albanD @kadeng @penguinwu
| true
|
3,034,878,069
|
[FlexAttention] explicilty create grad_q w/ strides
|
drisspg
|
closed
|
[
"module: performance",
"Merged",
"ciflow/trunk",
"topic: not user facing",
"module: inductor",
"ciflow/inductor"
] | 13
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #152641
Fixes: #147463
There is a mismatch between inductor's lowering for empty_like and eager's behavior: the strides do not respect preserve_format.
https://github.com/pytorch/pytorch/issues/144699
cc @msaroufim @jerryzh168 @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov @eellison
| true
|
3,034,870,797
|
[AOTAutogradCache][Easy] Move `"einops.einops.rearrange"` to `SAFE_NON_TORCH_FUNCTIONS`
|
StrongerXi
|
closed
|
[
"Merged",
"ciflow/trunk",
"topic: not user facing",
"ciflow/inductor"
] | 4
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #152640
As title.
| true
|