| id | title | user | state | labels | comments | author_association | body | is_title |
|---|---|---|---|---|---|---|---|---|
2,885,850,636
|
Checks kv pair indexing in OrderedPreservingDictTest.test_range_insert
|
redwrasse
|
open
|
[
"triaged",
"open source",
"ciflow/trunk",
"topic: not user facing",
"module: inductor"
] | 12
|
CONTRIBUTOR
|
`OrderedPreservingDictTest.test_range_insert` has an [unused loop variable `j`](https://github.com/pytorch/pytorch/blob/main/c10/test/util/ordered_preserving_dict_test.cpp#L186), which I think was carried over from the [inspired project](https://github.com/pytorch/pytorch/blob/main/c10/test/util/ordered_preserving_dict_test.cpp#L165)'s test case for range inserts, where it [checks kv pair indexing/order](https://github.com/Tessil/ordered-map/blob/master/tests/ordered_map_tests.cpp#L136) for the ordered dict.
This PR just adds that check to the test case.
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
| true
|
2,885,836,943
|
Remove manylinux 2014 artifacts
|
atalman
|
closed
|
[
"Merged",
"ciflow/trunk",
"topic: not user facing"
] | 6
|
CONTRIBUTOR
|
1. Switch Magma build to Manylinux 2.28 base
2. Use manylinux 2.28 as default in populate_binary_env.sh
3. Remove manylinux 2014 docker builds
| true
|
2,885,831,144
|
add skips to test_notifies_oom and test_set_per_process_memory_fraction
|
Fuzzkatt
|
closed
|
[
"open source",
"Merged",
"ciflow/trunk",
"topic: not user facing"
] | 3
|
COLLABORATOR
|
Tests fail in NVIDIA internal CI since we do not support nvml on Jetson, but nvml is required for OOM reporting to work properly, so we are skipping the failing tests for now.
cc @nWEIdia @eqy
| true
|
2,885,790,600
|
[MPS] fix empty place holder error for smooth l1 loss
|
Isalia20
|
closed
|
[
"open source",
"Merged",
"topic: bug fixes",
"module: mps",
"release notes: mps"
] | 6
|
COLLABORATOR
|
Fixes #123171
And parametrizes the tests for it
cc @kulinseth @albanD @malfet @DenisVieriu97 @jhavukainen
| true
|
2,885,784,252
|
WIP enable aten convolution out in lowerings
|
exclamaforte
|
closed
|
[
"module: inductor",
"ciflow/inductor"
] | 2
|
CONTRIBUTOR
|
So that the convolution op can plan its output memory for fusion opportunities.
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
| true
|
2,885,781,910
|
[Inductor] Use real input to autotune user defined triton kernels
|
muchulee8
|
closed
|
[
"ciflow/trunk",
"module: inductor",
"ciflow/inductor"
] | 4
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #148131
Summary:
User-defined Triton kernels sometimes rely on real inputs to determine the path of execution. We need real inputs to invoke the correct behavior of the user-defined Triton kernels (see the example in the test case, where we have an early return for random inputs).
Test Plan:
Included in the commit.
python test/inductor/test_aot_inductor.py -k triton_autotuning
python test/inductor/test_aot_inductor.py -k triton_mutated_autotuning
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @amjames @chauhang @aakhundov
Differential Revision: [D70472404](https://our.internmc.facebook.com/intern/diff/D70472404)
| true
|
2,885,775,164
|
Use correct boxed_forward_device_index when running `CompiledFxGraph.post_compile`
|
jamesjwu
|
closed
|
[
"fb-exported",
"Merged",
"ciflow/trunk",
"topic: not user facing",
"module: inductor",
"ciflow/inductor",
"release notes: AO frontend"
] | 20
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #148130
This PR threads through the correct boxed_forward_device_index from graph_kwargs to CompiledFXGraph.post_compile. This allows us to correctly update BoxedDeviceIndex from cache hits.
We don't actually need to save `boxed_forward_device_index` in CompiledFXGraph because its value is in the cache key, so it always matches the ambient one anyway. On forward with cudagraphs enabled, we derive `boxed_forward_device_index`'s value from `device_idxs`.
Testing:
```
python benchmarks/dynamo/cachebench.py --mode training --benchmark torchbench --model BERT_pytorch --device cuda --repeat 1 --dynamic --output="dynamic.json"
```
The cache now hits properly in FXGraphCache. AOTAutogradCache has a guard failure; I will look into that as a follow-up.
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
| true
|
2,885,774,163
|
ci: Remove manylinux builds for triton, except for XPU
|
seemethere
|
closed
|
[
"Merged",
"ciflow/trunk",
"topic: not user facing"
] | 6
|
MEMBER
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #148129
We're dropping regular old manylinux so let's drop it here too
Relates to #123649
Signed-off-by: Eli Uriegas <eliuriegas@meta.com>
| true
|
2,885,772,185
|
[MPS] Add inductor support for the `entr()` operator.
|
dcci
|
closed
|
[
"Merged",
"topic: not user facing",
"module: mps",
"ciflow/mps",
"module: inductor",
"ciflow/inductor"
] | 3
|
MEMBER
|
cc @kulinseth @albanD @malfet @DenisVieriu97 @jhavukainen @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
| true
|
2,885,767,445
|
Long queue for macOS runners
|
huydhn
|
closed
|
[
"ci: sev"
] | 1
|
CONTRIBUTOR
|
## Current Status
Ongoing; see https://hud.pytorch.org/metrics. The queue is at 6 hours now.
## Error looks like
Jobs in queue
## Incident timeline (all times pacific)
00:00 Feb 27th
## User impact
Waiting for runner on PRs
## Root cause
MacOS runners were terminated probably after this workflow ran last night https://github.com/pytorch-labs/pytorch-gha-infra/actions/runs/13561682539
## Mitigation
Not sure yet
@malfet has started the cattlespa process at https://github.com/pytorch-labs/pytorch-gha-infra/actions/runs/13575790703, but it didn't seem to help
## Prevention/followups
TBD
| true
|
2,885,740,879
|
ci: Only run CI specific things when in CI
|
seemethere
|
closed
|
[
"Merged",
"topic: not user facing"
] | 5
|
MEMBER
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #148129
* __->__ #148126
This was blocking me from running it locally, so don't run it like this.
Signed-off-by: Eli Uriegas <eliuriegas@meta.com>
| true
|
2,885,740,564
|
[dtensor][fix] fix _scaled_dot_product_flash_attention sharding
|
XilunWu
|
closed
|
[
"oncall: distributed",
"better-engineering",
"Merged",
"ciflow/trunk",
"topic: not user facing",
"ciflow/inductor",
"module: dtensor",
"module: context parallel"
] | 3
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #148125
### Summary
https://github.com/pytorch/pytorch/pull/146372/ changed the op signature of `_scaled_dot_product_flash_attention` and as a consequence DTensor needs to change its sharding defined at https://github.com/pytorch/pytorch/blob/40ad5e01dff05c7d64e070fb01683820e678f788/torch/distributed/tensor/_ops/_matrix_ops.py#L232
### Test
`pytest test/distributed/tensor/test_attention.py`
### Follow-up
It's still unclear why the CP unit tests were not run on the original PR, which is BC-breaking.
cc @H-Huang @awgu @kwen2501 @wanchaol @fegin @fduwjj @wz337 @wconstab @d4l3k @c-p-i-o @tianyu-l
| true
|
2,885,733,682
|
Add a stable TORCH_LIBRARY to C shim
|
janeyx99
|
closed
|
[
"Merged",
"Reverted",
"ciflow/trunk",
"release notes: cpp",
"ciflow/inductor",
"ci-no-td",
"no-runner-experiments"
] | 26
|
CONTRIBUTOR
|
This PR adds two main parts:
- shim.h stable C APIs into torch::Library APIs
- a higher level API in torch/csrc/stable/library.h that calls into this shim.h + otherwise is self contained
Goal: custom kernel writers should be able to call the APIs in the directories above in order to register their library in a way that allows their custom extension to run with a different libtorch version than it was built with.
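For illustration only, here is a hypothetical sketch of what that flow could look like from an extension author's side. The macro and method names are assumptions loosely based on the boxed signature discussed below, not the actual API added by this PR:
```cpp
// Hypothetical sketch -- names are illustrative, not the real surface.
// A boxed kernel using the stable calling convention: it reads its inputs from
// a stack of type-erased values and writes its outputs back to the same stack.
void my_op_boxed(void** stack, int64_t num_args, int64_t num_outputs) {
  // unpack inputs from `stack`, compute, repack outputs into `stack`
}

// Registration goes through the stable shim rather than torch::Library directly,
// so the extension can run against a libtorch other than the one it was built with.
STABLE_TORCH_LIBRARY_IMPL(myops, CPU, m) {
  m.stable_impl("my_op", &my_op_boxed);
}
```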
Subplots resolved:
- Do we want a whole separate StableLibrary, or do we want to freeze torch::Library and add `m.stable_impl(cstring, void (*fn)(void **, int64_t, int64_t))` into it?
- Yes, we want a separate StableLibrary. We cannot freeze Library, and it is NOT header-only.
- Should I use uint64_t as the common denominator instead of void* to support 32-bit architectures better?
- Yes, and done
- Should I add a stable `def` and `fragment` when those can be done in python?
- I think we do want these --- and now they're done
- Where should library_stable_impl.cpp live? -- no longer relevant
- I need some solid test cases to make sure everything's going OK. I've intentionally thrown a bunch of random dtypes into the signature, but I still haven't tested returning multiple things, returning nothing, complex dtypes, etc.
- Have since tested all the torch library endpoints. The others can be tested in a follow-up to separate the components that need to be in shim.h from those that can be added later.
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #148832
* __->__ #148124
| true
|
2,885,714,105
|
Conv/pool doc on ceilmode wrong
|
albertz
|
closed
|
[
"triaged",
"topic: docs"
] | 1
|
CONTRIBUTOR
|
### 📚 The doc issue
The doc on maxpool/conv states this output shape:
$$L_{out} = \left\lfloor \frac{L_{in} + 2 \times \text{padding} - \text{dilation} \times (\text{kernelsize} - 1) - 1}{\text{stride}} + 1\right\rfloor$$
Then it says for the `ceil_mode` arg: "If True, will use ceil instead of floor to compute the output shape"
This doc on `ceil_mode` seems to be wrong or misleading; I'm not sure which. I would guess it means:
$$L_{out}^{\text{ceilmode}} = \left\lceil \frac{L_{in} + 2 \times \text{padding} - \text{dilation} \times (\text{kernelsize} - 1) - 1}{\text{stride}} + 1\right\rceil$$
For `padding=0`, `dilation=1`, `kernel_size=1`, `stride=2`:
* For `L_in=10`, `ceil_mode=False`: The formula gives `L_out=5`. That's the case.
* For `L_in=11`, `ceil_mode=False`: The formula gives `L_out=6`. That's the case.
* For `L_in=10`, `ceil_mode=True`: The formula gives `L_out=6`. But that's not the case: `torch.nn.functional.max_pool1d(torch.zeros(1,1,10),1,2,ceil_mode=True).shape` gives `torch.Size([1, 1, 5])`.
So, `ceil_mode` seems to mean something different. But what exactly?
Btw, related to that, the option name is confusing to me anyway, because the original formula already looks like ceil mode to me, given this identity:
$$L_{out} = \left\lfloor \frac{L_{in} + 2 \times \text{padding} - \text{dilation} \times (\text{kernelsize} - 1) - 1}{\text{stride}} + 1\right\rfloor = \left\lfloor \frac{L_{in} + 2 \times \text{padding} - \text{dilation} \times (\text{kernelsize} - 1) + (\text{stride} - 1)}{\text{stride}}\right\rfloor = \left\lceil \frac{L_{in} + 2 \times \text{padding} - \text{dilation} \times (\text{kernelsize} - 1)}{\text{stride}}\right\rceil$$
I find this actually a much more natural way to write and think about it. But when written like that, what does `ceil_mode` mean now?
More specifically, what exactly is the formula for `L_out` with `ceil_mode=True`?
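For reference, a short script that reproduces the three observations above (the printed sizes are the ones reported in this issue; behavior assumed for a recent PyTorch build):
```python
import torch
import torch.nn.functional as F

# kernel_size=1, stride=2, padding=0, dilation=1, as in the cases above
for L_in, ceil_mode in [(10, False), (11, False), (10, True)]:
    out = F.max_pool1d(torch.zeros(1, 1, L_in), kernel_size=1, stride=2, ceil_mode=ceil_mode)
    print(f"L_in={L_in}, ceil_mode={ceil_mode}: L_out={out.shape[-1]}")
# Prints L_out = 5, 6, 5 -- i.e. ceil_mode=True does not give the 6 that the
# ceil version of the documented formula would suggest for L_in=10.
```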
### Suggest a potential alternative/fix
Clarify/reword the doc of `ceil_mode`, and add the exact formula of the output shape for `ceil_mode=True`.
| true
|
2,885,705,066
|
[cutlass backend] C++ compile error for CUTLASS config only get resolved in autotuning stage
|
henrylhtsang
|
closed
|
[
"triaged",
"oncall: pt2",
"module: inductor"
] | 1
|
CONTRIBUTOR
|
### 🐛 Describe the bug
I think we should try to filter the bad configs out up front, but Inductor can help as well.
internal post:
https://fb.workplace.com/groups/1037051091797916/posts/1059321392904219/
repro:
```py
import logging
import os
os.environ["TORCH_LOGS"] = "+output_code,+benchmarking,+inductor"
import torch
import torch._inductor.config
torch._inductor.config.max_autotune = True
# torch._inductor.config.coordinate_descent_tuning = True
# torch._inductor.config.coordinate_descent_check_all_directions = True
torch._inductor.config.force_disable_caches = True
torch._inductor.config.autotune_num_choices_displayed = None
# torch._inductor.config.autotune_in_subproc = False
torch._inductor.config.max_autotune_gemm_backends = "CUTLASS,TRITON"
torch._inductor.config.cuda.cutlass_max_profiling_configs = 2
torch._inductor.config.cuda.cutlass_instantiation_level = "3333"
torch._inductor.config.cuda.cutlass_op_allowlist_regex = "cutlass3x_sm90_tensorop_s64x56x16gemm_f16_f16_f32_void_f16_128x112x64_4x1x1_0_ttn_align8_warpspecialized_pingpong_epi_tma"
class MatMulModel(torch.nn.Module):
def forward(self, A, B):
return A @ B
def main():
M, N, K = 2048, 2048, 2048
dtype = torch.float16
A = torch.randn(M, K, device="cuda", dtype=dtype)
B = torch.randn(K, N, device="cuda", dtype=dtype)
model = MatMulModel().cuda()
compiled_model = torch.compile(model, fullgraph=True)
_ = compiled_model(A, B)
print("done")
if __name__ == "__main__":
main()
```
logs:
```shell
select_algorithm.py:1804] Precompilation complete for future: <Future at 0x7fbe7422a8c0 state=finished raised CUDACompileError>, elapsed time: 13.26s
AUTOTUNE mm(2048x2048, 2048x2048)
triton_mm_17 0.0299 ms 100.0% ACC_TYPE='tl.float32', ALLOW_TF32=False, BLOCK_K=64, BLOCK_M=128, BLOCK_N=128, EVEN_K=True, GROUP_M=8, num_stages=3, num_warps=4
triton_mm_9 0.0372 ms 80.4% ACC_TYPE='tl.float32', ALLOW_TF32=False, BLOCK_K=32, BLOCK_M=64, BLOCK_N=128, EVEN_K=True, GROUP_M=8, num_stages=3, num_warps=4
triton_mm_16 0.0376 ms 79.6% ACC_TYPE='tl.float32', ALLOW_TF32=False, BLOCK_K=32, BLOCK_M=128, BLOCK_N=128, EVEN_K=True, GROUP_M=8, num_stages=3, num_warps=4
triton_mm_13 0.0387 ms 77.3% ACC_TYPE='tl.float32', ALLOW_TF32=False, BLOCK_K=32, BLOCK_M=128, BLOCK_N=64, EVEN_K=True, GROUP_M=8, num_stages=3, num_warps=4
triton_mm_11 0.0409 ms 73.1% ACC_TYPE='tl.float32', ALLOW_TF32=False, BLOCK_K=64, BLOCK_M=64, BLOCK_N=128, EVEN_K=True, GROUP_M=8, num_stages=3, num_warps=4
triton_mm_10 0.0414 ms 72.3% ACC_TYPE='tl.float32', ALLOW_TF32=False, BLOCK_K=32, BLOCK_M=64, BLOCK_N=128, EVEN_K=True, GROUP_M=8, num_stages=4, num_warps=8
triton_mm_12 0.0414 ms 72.2% ACC_TYPE='tl.float32', ALLOW_TF32=False, BLOCK_K=128, BLOCK_M=64, BLOCK_N=128, EVEN_K=True, GROUP_M=8, num_stages=4, num_warps=4
triton_mm_18 0.0418 ms 71.5% ACC_TYPE='tl.float32', ALLOW_TF32=False, BLOCK_K=64, BLOCK_M=128, BLOCK_N=128, EVEN_K=True, GROUP_M=8, num_stages=5, num_warps=8
triton_mm_14 0.0419 ms 71.4% ACC_TYPE='tl.float32', ALLOW_TF32=False, BLOCK_K=32, BLOCK_M=128, BLOCK_N=64, EVEN_K=True, GROUP_M=8, num_stages=4, num_warps=8
triton_mm_7 0.0466 ms 64.2% ACC_TYPE='tl.float32', ALLOW_TF32=False, BLOCK_K=64, BLOCK_M=64, BLOCK_N=64, EVEN_K=True, GROUP_M=8, num_stages=3, num_warps=8
triton_mm_6 0.0554 ms 54.0% ACC_TYPE='tl.float32', ALLOW_TF32=False, BLOCK_K=32, BLOCK_M=64, BLOCK_N=64, EVEN_K=True, GROUP_M=8, num_stages=2, num_warps=4
triton_mm_15 0.0600 ms 49.8% ACC_TYPE='tl.float32', ALLOW_TF32=False, BLOCK_K=32, BLOCK_M=128, BLOCK_N=128, EVEN_K=True, GROUP_M=8, num_stages=2, num_warps=8
triton_mm_8 0.0867 ms 34.5% ACC_TYPE='tl.float32', ALLOW_TF32=False, BLOCK_K=128, BLOCK_M=64, BLOCK_N=64, EVEN_K=True, GROUP_M=8, num_stages=5, num_warps=4
triton_mm_5 0.0900 ms 33.3% ACC_TYPE='tl.float32', ALLOW_TF32=False, BLOCK_K=16, BLOCK_M=64, BLOCK_N=64, EVEN_K=True, GROUP_M=8, num_stages=2, num_warps=4
triton_mm_1 0.1152 ms 26.0% ACC_TYPE='tl.float32', ALLOW_TF32=False, BLOCK_K=128, BLOCK_M=32, BLOCK_N=32, EVEN_K=True, GROUP_M=8, num_stages=2, num_warps=4
triton_mm_3 0.1333 ms 22.4% ACC_TYPE='tl.float32', ALLOW_TF32=False, BLOCK_K=32, BLOCK_M=64, BLOCK_N=32, EVEN_K=True, GROUP_M=8, num_stages=5, num_warps=8
triton_mm_4 0.1427 ms 21.0% ACC_TYPE='tl.float32', ALLOW_TF32=False, BLOCK_K=128, BLOCK_M=64, BLOCK_N=32, EVEN_K=True, GROUP_M=8, num_stages=5, num_warps=4
triton_mm_2 0.1522 ms 19.7% ACC_TYPE='tl.float32', ALLOW_TF32=False, BLOCK_K=32, BLOCK_M=32, BLOCK_N=64, EVEN_K=True, GROUP_M=8, num_stages=5, num_warps=8
triton_mm_0 0.2468 ms 12.1% ACC_TYPE='tl.float32', ALLOW_TF32=False, BLOCK_K=16, BLOCK_M=32, BLOCK_N=32, EVEN_K=True, GROUP_M=8, num_stages=1, num_warps=2
cuda_cutlass_gemm_0 inf ms 0.0% cutlass3x_sm90_tensorop_s64x56x16gemm_f16_f16_f32_void_f16_128x112x64_4x1x1_0_ttn_align8_warpspecialized_pingpong_epi_tma swizzle=1
cuda_cutlass_gemm_1 inf ms 0.0% cutlass3x_sm90_tensorop_s64x56x16gemm_f16_f16_f32_void_f16_128x112x64_4x1x1_0_ttn_align8_warpspecialized_pingpong_epi_tma swizzle=2
cuda_cutlass_gemm_2 inf ms 0.0% cutlass3x_sm90_tensorop_s64x56x16gemm_f16_f16_f32_void_f16_128x112x64_4x1x1_0_ttn_align8_warpspecialized_pingpong_epi_tma swizzle=4
SingleProcess AUTOTUNE benchmarking takes 34.4710 seconds and 13.1879 seconds precompiling for 22 choices
```
But then each one still takes 10 seconds in the benchmarking stage.
cc @chauhang @penguinwu @voznesenskym @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @amjames @aakhundov @eellison @drisspg
### Versions
trunk
| true
|
2,885,700,527
|
[Inductor][Tests] Use generic device or require CUDA for newly added tests
|
alexbaden
|
closed
|
[
"open source",
"ciflow/trunk",
"topic: not user facing",
"module: inductor",
"ciflow/inductor",
"merging",
"ciflow/xpu"
] | 4
|
COLLABORATOR
|
Some tests added to Inductor as part of #147038 and #145583 were CUDA-specific and failing on XPU. This PR changes the attrs dict replacement test to use the generic device variable. The first graph partitioning test (`test_graph_partition`) fails on XPU - the graph does not appear to be properly partitioned. Subsequent tests pass if I use the generic `.to(device=self.device)` call. But since I was not sure whether graph partitioning was expected to work on XPU, I opted to explicitly skip the tests on XPU for now and leave them as is.
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
| true
|
2,885,688,727
|
Torch 2.7.0 nightly cuda 12.6 and cuda 12.8 builds are broken on Amazon linux 2023
|
atalman
|
closed
|
[
"module: binaries",
"module: build",
"topic: build",
"topic: binaries"
] | 2
|
CONTRIBUTOR
|
### 🐛 Describe the bug
Failure can be seen here:
https://github.com/pytorch/test-infra/actions/runs/13558218752/job/37934127476
Error:
```
2025-02-27T16:00:09.7113764Z + python3 .ci/pytorch/smoke_test/smoke_test.py --package torchonly
2025-02-27T16:00:09.7114301Z Traceback (most recent call last):
2025-02-27T16:00:09.7114936Z File "/pytorch/pytorch/.ci/pytorch/smoke_test/smoke_test.py", line 11, in <module>
2025-02-27T16:00:09.7115416Z import torch
2025-02-27T16:00:09.7115840Z File "/usr/local/lib64/python3.9/site-packages/torch/__init__.py", line 401, in <module>
2025-02-27T16:00:09.7116346Z from torch._C import * # noqa: F403
2025-02-27T16:00:09.7116835Z ImportError: libcufile.so.0: cannot open shared object file: No such file or directory
2025-02-27T16:00:09.7117696Z File "/home/ec2-user/actions-runner/_work/test-infra/test-infra/test-infra/.github/scripts/run_with_env_secrets.py", line 102, in <module>
2025-02-27T16:00:09.7118364Z main()
2025-02-27T16:00:09.7118954Z File "/home/ec2-user/actions-runner/_work/test-infra/test-infra/test-infra/.github/scripts/run_with_env_secrets.py", line 98, in main
2025-02-27T16:00:09.7119687Z run_cmd_or_die(f"docker exec -t {container_name} /exec")
2025-02-27T16:00:09.7120450Z File "/home/ec2-user/actions-runner/_work/test-infra/test-infra/test-infra/.github/scripts/run_with_env_secrets.py", line 39, in run_cmd_or_die
2025-02-27T16:00:09.7121360Z raise RuntimeError(f"Command {cmd} failed with exit code {exit_code}")
```
Looks like this is a result of the cufile addition: https://github.com/pytorch/pytorch/pull/145748
### Versions
torch-2.7.0.dev20250227+cu126
cc @seemethere @malfet @mikaylagawarecki
| true
|
2,885,673,206
|
Add int32 support to torch.gather
|
byphilipp
|
open
|
[
"triaged",
"module: advanced indexing",
"function request",
"module: scatter & gather ops"
] | 1
|
NONE
|
### 🚀 The feature, motivation and pitch
Int32 indexing was added in torch 2.0, but torch.gather supports only int64 indexing now.
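For illustration, a minimal snippet showing the current limitation (the exact error message may vary between versions):
```python
import torch

x = torch.arange(6, dtype=torch.float16).reshape(2, 3)
idx = torch.tensor([[0, 2], [1, 0]])            # int64 by default: accepted
print(torch.gather(x, 1, idx))

try:
    torch.gather(x, 1, idx.to(torch.int32))     # int32 index is currently rejected
except RuntimeError as e:
    print(e)                                     # gather expects an int64 index tensor
```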
### Alternatives
_No response_
### Additional context
Int64 indices are memory-expensive, especially if we use half-precision calculations.
cc @H-Huang @awgu @kwen2501 @wanchaol @fegin @fduwjj @wz337 @wconstab @d4l3k @c-p-i-o
| true
|
2,885,663,123
|
[inductor][ck] manual kBatch heuristic
|
coconutruben
|
closed
|
[
"module: rocm",
"fb-exported",
"Merged",
"ciflow/trunk",
"topic: not user facing",
"module: inductor",
"ciflow/inductor"
] | 10
|
CONTRIBUTOR
|
Summary:
# Why
Leverage the kBatch parameter for large split-K cases in CK for better-than-ATEN performance.
# What
Replace the default kBatch = 1 with a manual heuristic:
- if K > 16 * max(M, N)
- leverage k_per_block, K, and the number of SMs on the chip
- upper bound of 128, lower bound of 1
This is better than defaulting to 1, is cheap to calculate, and shows performance beyond ATEN.
This is of course subject to change and improvement; a rough sketch of the idea is below.
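A minimal sketch of how such a heuristic might read; the exact way k_per_block, K, and the SM count are combined here is an assumption for illustration, not the formula from this diff:
```python
def pick_kbatch(M: int, N: int, K: int, k_per_block: int, num_sms: int) -> int:
    # Only large split-K shapes benefit; otherwise keep the default of 1.
    if K <= 16 * max(M, N):
        return 1
    # Spread the K dimension over the available SMs (assumed combination),
    # then clamp to the bounds described above.
    kbatch = K // (k_per_block * num_sms)
    return max(1, min(128, kbatch))
```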
Test Plan:
with minor modifications to run torch.mm on the shape `M, N, K = 2048, 2048, 524288`
```
buck2 run -c fbcode.re_gpu_tests=False mode/opt-amd-gpu fbcode//deeplearning/aot_inductor/benchmark/sampling:test_gemm_autotune_benchmark_AMD_block_0
```
```
AUTOTUNE mm(2048x524288, 524288x2048)
rocm_ck_gemm_template_49 10.4972 ms 100.0%
rocm_ck_gemm_template_8 10.6132 ms 98.9%
rocm_ck_gemm_template_9 10.6907 ms 98.2%
[...]
mm 18.9880 ms 55.3%
```
Reviewed By: ColinPeppler
Differential Revision: D70224591
cc @jeffdaily @sunway513 @jithunnair-amd @pruthvistony @ROCmSupport @dllehr-amd @jataylo @hongxiayang @naromero77amd @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
| true
|
2,885,616,551
|
[triton 3.3] Fix aoti cpp wrapper remaining 5 issue. (following #148051)
|
YUNQIUGUO
|
closed
|
[
"fb-exported",
"Merged",
"ciflow/trunk",
"topic: not user facing",
"module: inductor",
"ciflow/inductor"
] | 18
|
CONTRIBUTOR
|
Summary:
Fix the following 5 on a100:
- test_foreach_cpp_wrapper_cuda_gpu_wrapper
- test_enable_dynamic_shapes_cpp_wrapper_cuda_gpu_wrapper
- test_dynamic_shapes_persistent_reduction_mixed_x_dim_cuda_gpu_wrapper
- test_enable_dynamic_shapes_cpp_wrapper_cuda_dynamic_shapes_gpu_wrapper
- test_dynamic_shapes_persistent_reduction_mixed_x_dim_cuda_dynamic_shapes_gpu_wrapper
Test Plan:
oss :
```
TORCHINDUCTOR_COMPILE_THREADS=1 TORCHINDUCTOR_MAX_AUTOTUNE_GEMM_BACKENDS=TRITON TORCH_LOGS="+inductor, output_code" TORCHINDUCTOR_FORCE_DISABLE_CACHES=1 CPLUS_INCLUDE_PATH=/usr/local/cuda-12.6/include:$CPLUS_INCLUDE_PATH python test/inductor/test_gpu_cpp_wrapper.py -k test_foreach_cpp_wrapper_cuda_gpu_wrapper
```
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
| true
|
2,885,612,110
|
[inductor][triton] Explicit kernel-arg mismatch checks
|
davidberard98
|
open
|
[
"triaged",
"oncall: pt2",
"module: inductor"
] | 2
|
CONTRIBUTOR
|
### 🚀 The feature, motivation and pitch
Let's add explicit checks for when a kernel expects different args than the ones it's getting. If we can do this at the cpp_wrapper level in particular, it'll be quite helpful.
From @chenyang78 :
> BTW, I feel like this kind of kernel-argument mismatch issue is a really painpoint. It’s hard to debug. For example, we’ve seen another mysterious CUDA kernel failure caused by kernel-arg mismatch (https://github.com/pytorch/pytorch/pull/148102), although it’s caused by Inductor’s bug instead of Triton compiler ABI changes.
>
> I am wondering if we could have some kinds of more proactive checks to prevent such mismatches being leaked through?
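As a rough illustration of the kind of proactive check being requested (the function and call site are hypothetical, not existing Inductor code):
```python
def check_kernel_call(kernel_name: str, expected_signature: dict, call_args: list) -> None:
    # Compare the argument list about to be emitted against the signature the
    # compiled kernel actually expects, and fail loudly at codegen/launch time
    # instead of letting a mismatched launch fail mysteriously on device.
    if len(call_args) != len(expected_signature):
        raise RuntimeError(
            f"{kernel_name}: launching with {len(call_args)} args, but the kernel "
            f"signature has {len(expected_signature)} entries: {expected_signature}"
        )
```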
### Alternatives
_No response_
### Additional context
_No response_
cc @chauhang @penguinwu @voznesenskym @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @amjames @aakhundov
| true
|
2,885,610,379
|
Redundant Try Block in backward()
|
Jason1ien
|
closed
|
[
"module: autograd",
"triaged",
"module: custom-operators"
] | 3
|
NONE
|
Inside autograd.py, which is part of the library directory, there is a redundant try block.
This try block should be replaced by a check of whether info._backward_fn is None; if it is None, we should raise an exception instead.
Below is the link to the file:
https://github.com/pytorch/pytorch/blob/main/torch/_library/autograd.py
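A hypothetical sketch of the suggested shape of the change; the real code in torch/_library/autograd.py is structured differently, and the names below just follow the issue text:
```python
# Hypothetical sketch -- not the actual contents of torch/_library/autograd.py.
def backward(info, ctx, *grads):
    if info._backward_fn is None:
        # Explicit None check instead of wrapping the call in a try block.
        raise RuntimeError(
            "Trying to backward through an operator that did not register a backward implementation"
        )
    return info._backward_fn(ctx, *grads)
```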
My systems specs:
Windows 11 Home
Intel 11th Gen Core i7-11800H @ 2.30GHz
Nvidia RTX 3050 @ 45 Watts
cc @ezyang @albanD @gqchen @pearu @nikitaved @soulitzer @Varal7 @xmfan @chauhang @penguinwu @zou3519 @bdhirsh
| true
|
2,885,596,932
|
We should never throw vanilla C++ exceptions
|
albanD
|
open
|
[
"module: cpp",
"triaged",
"better-engineering",
"actionable"
] | 0
|
COLLABORATOR
|
We have a lot of them at the moment in a few places in the codebase: https://github.com/search?q=repo%3Apytorch%2Fpytorch%20std%3A%3Aruntime_error(&type=code
We should migrate ALL of them to TORCH_CHECK-like macros and add a lint rule to enforce it going forward.
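For context, a minimal before/after sketch of what that migration looks like (the condition and message are made up for illustration):
```cpp
#include <c10/util/Exception.h>

void check_index(int64_t index) {
  // Before: a vanilla C++ exception, which bypasses PyTorch's error machinery.
  // if (index < 0) {
  //   throw std::runtime_error("index must be non-negative");
  // }

  // After: TORCH_CHECK raises a c10::Error with the usual PyTorch message formatting.
  TORCH_CHECK(index >= 0, "index must be non-negative, got ", index);
}
```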
cc @jbschlosser @Skylion007 in case you would be interested in this cleanup!
| true
|
2,885,545,474
|
[inductor][triton] introduce better "APIs" in triton that can clean up our triton/inductor integration
|
davidberard98
|
open
|
[
"triaged",
"upstream triton"
] | 0
|
CONTRIBUTOR
|
### 🚀 The feature, motivation and pitch
Let's review all the issues in https://github.com/orgs/pytorch/projects/94/views/1, the stack from https://github.com/pytorch/pytorch/pull/145515, and other relevant pull requests we saw in the triton 3.3 / pytorch 2.7 release and see how we can clean up the inductor/triton integration.
A few early comments:
* AttrsDescriptor removal was somewhat disruptive
* Changing the cpp interface was challenging to deal with...
Some of these things may be expected as Triton changes its internal implementation. But in some cases (in particular the AttrsDescriptor refactor/removal) we may want to re-introduce some other, more stable surfaces that we can integrate with, to reduce the effort required to do a Triton pin update.
### Alternatives
_No response_
### Additional context
_No response_
cc @bertmaher @int3 @nmacchioni @chenyang78 @embg @peterbell10 @aakhundov
| true
|
2,885,545,242
|
MLA with Learnable RoPE Tensors is Broken with Flex Attention
|
cora-codes
|
open
|
[
"triaged",
"oncall: pt2",
"module: higher order operators",
"module: pt2-dispatcher",
"module: flex attention"
] | 0
|
NONE
|
### 🐛 Describe the bug
I'm encountering an `AssertionError` during the backward pass when using `flex_attention` if the tensors used require gradients.
```python
import torch
from torch import Tensor
from torch.nn.attention.flex_attention import flex_attention
torch._inductor.config.unroll_reductions_threshold = 65
def generate_mla_rope_score_mod(query_rope: Tensor, key_rope: Tensor, num_heads: int, scale: float = 1.0):
def mla_rope_score_mod(score: Tensor, b: Tensor, h: Tensor, q_idx: Tensor, kv_idx: Tensor) -> Tensor:
return score + (scale * torch.dot(query_rope[b, h, q_idx], key_rope[b, h // num_heads, kv_idx]))
return mla_rope_score_mod
def main(device: str = "cuda"):
B, H, SEQ_LEN, LATENT_HEAD_DIM, ROPE_HEAD_DIM = 1, 128, 8, 512, 64
query = torch.rand(B, H, SEQ_LEN, LATENT_HEAD_DIM, device=device, requires_grad=True)
key = torch.rand(B, H, SEQ_LEN, LATENT_HEAD_DIM, device=device, requires_grad=True)
value = torch.rand(B, H, SEQ_LEN, LATENT_HEAD_DIM, device=device, requires_grad=True)
query_pe = torch.rand(B, H, SEQ_LEN, ROPE_HEAD_DIM, device=device, requires_grad=True)
key_pe = torch.rand(B, H, SEQ_LEN, ROPE_HEAD_DIM, device=device, requires_grad=True)
score_mod = generate_mla_rope_score_mod(query_pe, key_pe, ROPE_HEAD_DIM)
out = torch.compile(flex_attention)(query, key, value, score_mod=score_mod)
out.sum().backward()
main()
```
```
AssertionError:
File "/home/ubuntu/.local/lib/python3.10/site-packages/torch/_inductor/lowering.py", line 3440, in fn
assert len(idx) == len(output_size)
```
The full traceback shows that it's happening in the `flex_attention_backward` function within the inductor's kernel module.
I can confirm it works if you don't require gradients for `query_pe` and `key_pe`, but it seems like it should work if you do (and I have an adjacent application where they need to require gradients).
### Versions
'2.7.0a0+gitce805a5'
cc @chauhang @penguinwu @zou3519 @ydwu4 @bdhirsh @Chillee @drisspg @yanboliang @BoyuanFeng
| true
|
2,885,530,386
|
[triton 3.3][cpp_wrapper] TypeError: 'NoneType' object is not subscriptable
|
davidberard98
|
closed
|
[] | 1
|
CONTRIBUTOR
|
### 🐛 Describe the bug
Example stacktrace below
```
======================================================================
ERROR: test_foreach_cpp_wrapper_cuda_gpu_wrapper (__main__.TestGpuWrapper)
----------------------------------------------------------------------
Traceback (most recent call last):
File "/data/users/dberard/triton-env/pytorch/torch/testing/_internal/common_utils.py", line 3154, in wrapper
method(*args, **kwargs)
File "/data/users/dberard/triton-env/pytorch/test/inductor/test_torchinductor.py", line 12577, in new_test
return value(self)
File "/home/dberard/.conda/envs/triton-env/lib/python3.10/contextlib.py", line 79, in inner
return func(*args, **kwds)
File "/data/users/dberard/triton-env/pytorch/test/inductor/test_gpu_cpp_wrapper.py", line 150, in fn
_, code = test_torchinductor.run_and_get_cpp_code(
File "/data/users/dberard/triton-env/pytorch/torch/_inductor/utils.py", line 2311, in run_and_get_cpp_code
result = fn(*args, **kwargs)
File "/data/users/dberard/triton-env/pytorch/test/inductor/test_foreach.py", line 267, in test_foreach_cpp_wrapper_cuda
self._test_single_list(op=torch._foreach_add)
File "/data/users/dberard/triton-env/pytorch/test/inductor/test_foreach.py", line 235, in _test_single_list
self.check_model_cuda(
File "/home/dberard/.conda/envs/triton-env/lib/python3.10/contextlib.py", line 79, in inner
return func(*args, **kwds)
File "/data/users/dberard/triton-env/pytorch/test/inductor/test_torchinductor.py", line 618, in check_model_gpu
check_model(
File "/data/users/dberard/triton-env/pytorch/test/inductor/test_torchinductor.py", line 459, in check_model
actual = run(*example_inputs, **kwargs)
File "/data/users/dberard/triton-env/pytorch/torch/_dynamo/eval_frame.py", line 589, in _fn
raise e.remove_dynamo_frames() from None # see TORCHDYNAMO_VERBOSE=1
File "/data/users/dberard/triton-env/pytorch/torch/_inductor/compile_fx.py", line 746, in _compile_fx_inner
raise InductorError(e, currentframe()).with_traceback(
File "/data/users/dberard/triton-env/pytorch/torch/_inductor/compile_fx.py", line 731, in _compile_fx_inner
mb_compiled_graph = fx_codegen_and_compile(
File "/data/users/dberard/triton-env/pytorch/torch/_inductor/compile_fx.py", line 1403, in fx_codegen_and_compile
return scheme.codegen_and_compile(gm, example_inputs, inputs_to_check, graph_kwargs)
File "/data/users/dberard/triton-env/pytorch/torch/_inductor/compile_fx.py", line 1123, in codegen_and_compile
compiled_fn = graph.compile_to_module().call
File "/data/users/dberard/triton-env/pytorch/torch/_inductor/graph.py", line 2011, in compile_to_module
return self._compile_to_module()
File "/data/users/dberard/triton-env/pytorch/torch/_inductor/graph.py", line 2017, in _compile_to_module
self.codegen_with_cpp_wrapper() if self.cpp_wrapper else self.codegen()
File "/data/users/dberard/triton-env/pytorch/torch/_inductor/graph.py", line 1820, in codegen_with_cpp_wrapper
return self.codegen()
File "/data/users/dberard/triton-env/pytorch/torch/_inductor/graph.py", line 1930, in codegen
self.scheduler.codegen()
File "/data/users/dberard/triton-env/pytorch/torch/_inductor/scheduler.py", line 3956, in codegen
return self._codegen()
File "/data/users/dberard/triton-env/pytorch/torch/_inductor/scheduler.py", line 4035, in _codegen
backend.codegen_combo_kernel(node)
File "/data/users/dberard/triton-env/pytorch/torch/_inductor/codegen/cuda_combined_scheduling.py", line 113, in codegen_combo_kernel
return self._triton_scheduling.codegen_combo_kernel(*args, **kwargs)
File "/data/users/dberard/triton-env/pytorch/torch/_inductor/codegen/simd.py", line 1655, in codegen_combo_kernel
kernel.call_kernel(V.graph.wrapper_code, kernel_name)
File "/data/users/dberard/triton-env/pytorch/torch/_inductor/codegen/triton_combo_kernel.py", line 1073, in call_kernel
V.graph.wrapper_code.generate_kernel_call(
File "/data/users/dberard/triton-env/pytorch/torch/_inductor/codegen/cpp_wrapper_gpu.py", line 574, in generate_kernel_call
signature = triton_meta["signature"]
torch._inductor.exc.InductorError: TypeError: 'NoneType' object is not subscriptable
Set TORCH_LOGS="+dynamo" and TORCHDYNAMO_VERBOSE=1 for more information
To execute this test, run the following from the base repo dir:
python test/inductor/test_gpu_cpp_wrapper.py TestGpuWrapper.test_foreach_cpp_wrapper_cuda_gpu_wrapper
This message can be suppressed by setting PYTORCH_PRINT_REPRO_ON_FAILURE=0
```
There are 5 tests displaying this error (on A100):
* [ ] test_foreach_cpp_wrapper_cuda_gpu_wrapper
* [ ] test_enable_dynamic_shapes_cpp_wrapper_cuda_gpu_wrapper
* [ ] test_dynamic_shapes_persistent_reduction_mixed_x_dim_cuda_gpu_wrapper
* [ ] test_enable_dynamic_shapes_cpp_wrapper_cuda_dynamic_shapes_gpu_wrapper
* [ ] test_dynamic_shapes_persistent_reduction_mixed_x_dim_cuda_dynamic_shapes_gpu_wrapper
### Versions
triton from the last ~week
viable/strict from feb 27
A100/H100
| true
|
2,885,489,050
|
[CI] Remove conda usage from lint related jobs
|
clee2000
|
closed
|
[
"module: ci",
"triaged"
] | 1
|
CONTRIBUTOR
|
Remove conda usage from the linter docker images pytorch-linux-focal-linter and pytorch-linux-jammy-cuda11.8-cudnn9-py3.9-linter
Relevant files:
.ci/docker/linter/Dockerfile
.ci/docker/linter-cuda/Dockerfile
.github/scripts/lintrunner.sh
.github/workflows/lint-autoformat.yml
.github/workflows/lint.yml
tools/linter/*
Context: https://docs.google.com/document/d/1lIdRv4oE8c9eXSnxHJgjsD3yQI5X5zZRC9p71gunVoU/edit?tab=t.0#heading=h.gx4t9juedorw
cc @seemethere @malfet @pytorch/pytorch-dev-infra
| true
|
2,885,488,710
|
[triton 3.3] cpp wrapper aoti remaining test failure following #148051
|
YUNQIUGUO
|
closed
|
[
"ciflow/trunk",
"module: inductor",
"ciflow/inductor"
] | 3
|
CONTRIBUTOR
|
This should hopefully fix the remaining cpp_wrapper AOTI test failures in this paste: [P1741802912](https://www.internalfb.com/phabricator/paste/view/P1741802912)
following https://github.com/pytorch/pytorch/pull/148051
Related issue: #147734
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
| true
|
2,885,481,751
|
Use nightly-wheel-upload env for triton wheel publishing
|
atalman
|
closed
|
[
"Merged",
"topic: not user facing"
] | 3
|
CONTRIBUTOR
|
Required for publishing triton builds
| true
|
2,885,413,548
|
SubgraphLoweringException in flex_attention when using custom score_mod with torch.dot (MLA)
|
cora-codes
|
closed
|
[] | 1
|
NONE
|
### 🐛 Describe the bug
I'm getting a `SubgraphLoweringException` when using `torch.compile(flex_attention)` with a custom score modification function that uses `torch.dot()`. The error occurs during the inductor compilation phase.
```python
import torch
from torch import Tensor
from torch.nn.attention.flex_attention import score_mod_signature, flex_attention
def generate_mla_rope_score_mod(
query_rope: Tensor,
key_rope: Tensor,
num_heads: int,
scale: float = 1.0,
) -> score_mod_signature:
"""Returns an MLA RoPE score modification function to be used w/ FlexAttention"""
def mla_rope_score_mod(
score: Tensor, b: Tensor, h: Tensor, q_idx: Tensor, kv_idx: Tensor
) -> Tensor:
return score + (
scale * torch.dot(query_rope[b, h, q_idx], key_rope[b, h // num_heads, kv_idx])
)
mla_rope_score_mod.__name__ = f"mla_rope_score_mod_scale_{scale}"
return mla_rope_score_mod
def main(device: str = "cuda"):
# Example dimensions
B, H, SEQ_LEN, LATENT_HEAD_DIM = 1, 128, 8, 512
ROPE_HEAD_DIM = 64
# Create random tensors
query = torch.rand(B, H, SEQ_LEN, LATENT_HEAD_DIM, device=device)
key = torch.rand(B, H, SEQ_LEN, LATENT_HEAD_DIM, device=device)
value = torch.rand(B, H, SEQ_LEN, LATENT_HEAD_DIM, device=device)
# Create positional embeddings
query_pe = torch.rand(B, H, SEQ_LEN, ROPE_HEAD_DIM, device=device)
key_pe = torch.rand(B, H, SEQ_LEN, ROPE_HEAD_DIM, device=device)
score_mod = generate_mla_rope_score_mod(query_pe, key_pe, ROPE_HEAD_DIM)
torch.compile(flex_attention)(
query,
key,
value,
score_mod=score_mod,
return_lse=True,
)
main()
```
Running the reproduction will give you:
```
torch._inductor.exc.InductorError: LoweringException: SubgraphLoweringException: Buffers cannot be created while lowering a pointwise subgraph. This could be for a good reason (e.g. you're calling an op we can't codegen as a pointwise op), but it could also be a bug. Please file a bug report if you think this should be supportable.
While executing %sum_1 : [num_users=1] = call_function[target=torch.ops.aten.sum.default](args = (%mul,), kwargs = {})
```
### Versions
2.7.0a0+gitce805a5
| true
|
2,885,359,114
|
NotImplementedError: FlexAttentionHigherOrderVariable() has no type
|
cora-codes
|
open
|
[
"triaged",
"bug",
"oncall: pt2",
"module: dynamo",
"module: higher order operators",
"module: pt2-dispatcher",
"module: flex attention"
] | 5
|
NONE
|
### 🐛 Describe the bug
I'm working on a reproduction, but I've run into the following error with flex attention: "NotImplementedError: FlexAttentionHigherOrderVariable() has no type". I believe it tries to access `block_mask.shape[-1]` and then fails.
### Versions
'2.7.0a0+gitce805a5'
cc @chauhang @penguinwu @voznesenskym @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @amjames @zou3519 @ydwu4 @bdhirsh @Chillee @drisspg @yanboliang @BoyuanFeng
| true
|
2,885,259,314
|
Use myst_nb in docs
|
zou3519
|
open
|
[
"Stale"
] | 3
|
CONTRIBUTOR
|
myst_nb is a plugin that:
1) allows rendering of jupyter notebooks in documentation
2) allows execution of code blocks in markdown docs
Execution of code blocks in markdown docs may slow down build time. In order to limit the impact:
- by default, we do not execute code blocks when run locally. To run them locally, set the PYTORCH_NB_EXECUTE=1 envvar
- code blocks will be executed in CI. We could tweak this more, but right now the biggest problem with doc iteration time in CI isn't the docs build, it's needing to wait for the pytorch build.
- there is a 30 second timeout per md file. We want to emphasize that notebook execution should not be abused for long-running things.
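A minimal sketch of the corresponding Sphinx conf.py settings; the option names follow recent myst-nb releases, and the env-var gating is only assumed from the description above:
```python
# docs/source/conf.py -- illustrative sketch, not the actual PyTorch config.
import os

extensions = ["myst_nb"]

# Execute notebook/markdown code cells only when explicitly requested
# (locally via PYTORCH_NB_EXECUTE=1, or in CI); "off" skips execution entirely.
nb_execution_mode = "cache" if os.environ.get("PYTORCH_NB_EXECUTE") == "1" else "off"

# Cap per-file execution time so notebook execution isn't abused for long-running work.
nb_execution_timeout = 30  # seconds
```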
Test Plan:
- I switched over torch.cond's documentation to markdown.
- The new torch.cond doc has some executable code blocks. They are not all executable.
- The rendering might look goofy, but I'm confident that everything will render correctly with the pydata-sphinx-theme, so I don't want to spend time trying to figure out the CSS right now.
| true
|
2,885,155,319
|
Refactor layout constraint selection logic
|
zou3519
|
closed
|
[
"Merged",
"Reverted",
"ciflow/trunk",
"release notes: fx",
"topic: not user facing",
"fx",
"module: inductor",
"module: dynamo",
"ciflow/inductor",
"keep-going",
"ci-no-td"
] | 18
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #150511
* __->__ #148104
This PR:
- cleans up some existing comments that don't make sense anymore
- hooks up the "custom_op_default_layout_constraint" back (that seems to have broken)
- cleans up the "lazy registration path" which seems to never get hit anymore
- adds dislike_padding to nodes that require exact strides
Test Plan:
- tests + CI
disable padding
cc @ezyang @SherlockNoMad @EikanWang @jgong5 @wenzhe-nrv @voznesenskym @penguinwu @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @jiayisunx @ipiszy @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
| true
|
2,885,134,749
|
Initial investigation for removing MOD_SKIPLIST
|
zou3519
|
open
|
[
"triaged",
"oncall: pt2",
"module: dynamo"
] | 0
|
CONTRIBUTOR
|
In the ideal state we want to clear out everything from MOD_SKIPLIST: https://github.com/pytorch/pytorch/blob/main/torch/_dynamo/trace_rules.py#L3333
Removing something from MOD_SKIPLIST is equivalent to adding it into LEGACY_MOD_INLINELIST.
However, removing things from these leads to various CI errors.
We should try to remove 3-5 of them, identify what the major issues are, and fix those. Hopefully this makes removing more things from MOD_SKIPLIST in the future easier.
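Schematically, the change being proposed looks like this (a hypothetical sketch; the actual structures in trace_rules.py may be organized differently):
```python
# Hypothetical sketch of torch/_dynamo/trace_rules.py -- not the real lists.
MOD_SKIPLIST = {
    # "torch.nn",        # removed from the skiplist ...
    "torch.some_module",
}

LEGACY_MOD_INLINELIST = {
    "torch.nn",           # ... and added here so Dynamo traces into it
}
```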
Let's target some modules that are "popular" and mostly Python-only:
- [ ] "torch.nn"
- [ ] "torch.distributions"
- [ ] "torch.testing"
- [ ] "torch.utils._pytree"
cc @chauhang @penguinwu @voznesenskym @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @amjames
| true
|
2,885,122,553
|
Fix None and equal_to_1 arguments issue in Triton kernel generated by AOTI
|
renganxu
|
closed
|
[
"fb-exported",
"Merged",
"ciflow/trunk",
"topic: not user facing",
"module: inductor"
] | 11
|
CONTRIBUTOR
|
Summary:
When a Triton kernel has arguments with None values followed by arguments with value 1, AOTI attempts to remove the None arguments and update the indices of the equal_to_1 arguments in triton_meta["configs"]. However, if the same kernel is called multiple times, this optimization process is repeated. Prior to this diff, the indices of equal_to_1 arguments from subsequent calls (second and later) were based on the updated indices from the previous call, resulting in incorrect behavior.
This diff aims to localize the updated indices for equal_to_1 arguments within the optimization process of the current call, ensuring accurate and consistent results.
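Schematically, the fix amounts to computing the remapped indices locally for each call instead of rewriting shared metadata (a simplified, hypothetical sketch):
```python
def drop_none_args(call_args, equal_to_1):
    # Indices of the arguments kept after dropping the None entries.
    kept = [i for i, a in enumerate(call_args) if a is not None]
    remap = {old: new for new, old in enumerate(kept)}
    new_args = [call_args[i] for i in kept]
    # Build a *local* updated index list; mutating the shared triton_meta would
    # shift the already-shifted indices again on the next call to the same kernel.
    new_equal_to_1 = [remap[i] for i in equal_to_1 if i in remap]
    return new_args, new_equal_to_1
```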
Test Plan:
Unit Test:
```
buck2 run mode/dev-nosan caffe2/test/inductor:test_aot_inductor -- -r test_triton_kernel_with_none_inputs_and_equal_to_1_arg
```
Differential Revision: D69998314
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
| true
|
2,885,068,499
|
Move expanded dim require_exact_stride handling to api from sdpa lowering
|
eellison
|
closed
|
[
"Merged",
"ciflow/trunk",
"topic: not user facing",
"module: inductor",
"ciflow/inductor"
] | 9
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #148101
See issue: https://github.com/pytorch/pytorch/issues/147156#issue-2852362217.
Original tests from https://github.com/pytorch/pytorch/pull/146054 should cover these changes, and I tested that the perf on https://github.com/pytorch/pytorch/issues/145760 remains fixed.
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
| true
|
2,885,061,238
|
[Experiment] measure the effect of combining cpp-wrapper and cudagraphs
|
desertfire
|
closed
|
[
"Stale"
] | 3
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #148100
| true
|
2,885,030,440
|
HSDP custom hook UTs are multi-threaded - can't set device rank
|
pragupta
|
closed
|
[
"oncall: distributed",
"open source",
"Merged",
"ciflow/trunk",
"topic: not user facing",
"ciflow/periodic"
] | 22
|
CONTRIBUTOR
|
HSDP custom hook UTs are multi-threaded and use a single physical GPU. If we set the rank in each thread, then we are referencing the same GPU with multiple ranks, which isn't right. Therefore, this removes the rank setting from these UTs. They now pass with 1, 2, and 4 GPUs.
Fixes #147767 and #147769
cc @H-Huang @awgu @kwen2501 @wanchaol @fegin @fduwjj @wz337 @wconstab @d4l3k @c-p-i-o
| true
|
2,885,025,598
|
onnx dynamo export does not support aten bucketize
|
peterbjorgensen
|
open
|
[
"module: onnx",
"triaged"
] | 1
|
NONE
|
### 🐛 Describe the bug
aten::bucketize should be supported by ONNX, but the following code does not work:
```
import torch
import torch.nn as nn
class MyBucketizer(nn.Module):
def forward(self, x):
return torch.bucketize(x, torch.linspace(0, 1, 100))
def main():
x = torch.rand(100)
my_model = MyBucketizer()
my_program = torch.onnx.dynamo_export(my_model, x)
if __name__ == "__main__":
main()
```
It fails with the following stack trace
```
python sandbox/peter/torch_bucketize.py
/home/peter/code/models_torch/.venv/lib/python3.12/site-packages/onnxscript/converter.py:823: FutureWarning: 'onnxscript.values.Op.param_schemas' is deprecated in version 0.1 and will be removed in the future. Please use '.op_signature' instead.
param_schemas = callee.param_schemas()
/home/peter/code/models_torch/.venv/lib/python3.12/site-packages/onnxscript/converter.py:823: FutureWarning: 'onnxscript.values.OnnxFunction.param_schemas' is deprecated in version 0.1 and will be removed in the future. Please use '.op_signature' instead.
param_schemas = callee.param_schemas()
/home/peter/code/models_torch/.venv/lib/python3.12/site-packages/torch/onnx/_internal/_exporter_legacy.py:101: UserWarning: torch.onnx.dynamo_export only implements opset version 18 for now. If you need to use a different opset version, please register them with register_custom_op.
warnings.warn(
Traceback (most recent call last):
File "/home/peter/code/models_torch/.venv/lib/python3.12/site-packages/torch/onnx/_internal/_exporter_legacy.py", line 779, in dynamo_export
).export()
^^^^^^^^
File "/home/peter/code/models_torch/.venv/lib/python3.12/site-packages/torch/onnx/_internal/_exporter_legacy.py", line 546, in export
graph_module = self.options.fx_tracer.generate_fx(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/peter/code/models_torch/.venv/lib/python3.12/site-packages/torch/onnx/_internal/fx/dynamo_graph_extractor.py", line 217, in generate_fx
return self.pre_export_passes(options, model, graph_module, updated_model_args) # type: ignore[return-value]
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/peter/code/models_torch/.venv/lib/python3.12/site-packages/torch/onnx/_internal/fx/dynamo_graph_extractor.py", line 226, in pre_export_passes
return _exporter_legacy.common_pre_export_passes(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/peter/code/models_torch/.venv/lib/python3.12/site-packages/torch/onnx/_internal/_exporter_legacy.py", line 832, in common_pre_export_passes
).analyze(infra.levels.ERROR)
^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/peter/code/models_torch/.venv/lib/python3.12/site-packages/torch/onnx/_internal/fx/analysis/unsupported_nodes.py", line 85, in analyze
self._lint(analysis_result, diagnostic_level)
File "/home/peter/code/models_torch/.venv/lib/python3.12/site-packages/torch/onnx/_internal/fx/analysis/unsupported_nodes.py", line 37, in _lint
self.diagnostic_context.log_and_raise_if_error(diagnostic)
File "/home/peter/code/models_torch/.venv/lib/python3.12/site-packages/torch/onnx/_internal/diagnostics/infra/context.py", line 356, in log_and_raise_if_error
raise RuntimeErrorWithDiagnostic(diagnostic)
torch.onnx._internal.diagnostics.infra.context.RuntimeErrorWithDiagnostic: Unsupported FX nodes: {'call_function': ['aten.bucketize.Tensor']}.
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "/home/peter/code/models_torch/sandbox/peter/torch_bucketize.py", line 18, in <module>
main()
File "/home/peter/code/models_torch/sandbox/peter/torch_bucketize.py", line 14, in main
my_program = torch.onnx.dynamo_export(my_model, x)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/peter/code/models_torch/.venv/lib/python3.12/site-packages/torch/onnx/__init__.py", line 525, in dynamo_export
return dynamo_export(
^^^^^^^^^^^^^^
File "/home/peter/code/models_torch/.venv/lib/python3.12/site-packages/torch/onnx/_internal/_exporter_legacy.py", line 790, in dynamo_export
raise errors.OnnxExporterError(message) from e
torch.onnx.OnnxExporterError: Failed to export the model to ONNX. Generating SARIF report at 'report_dynamo_export.sarif'. SARIF is a standard format for the output of static analysis tools. SARIF logs can be loaded in VS Code SARIF viewer extension, or SARIF web viewer (https://microsoft.github.io/sarif-web-component/). Please report a bug on PyTorch Github: https://github.com/pytorch/pytorch/issues
```
### Versions
```
PyTorch version: 2.6.0+cu124
Is debug build: False
CUDA used to build PyTorch: 12.4
ROCM used to build PyTorch: N/A
OS: Ubuntu 24.04.2 LTS (x86_64)
GCC version: (Ubuntu 13.3.0-6ubuntu2~24.04) 13.3.0
Clang version: 18.1.8 (++20240731025043+3b5b5c1ec4a3-1~exp1~20240731145144.92)
CMake version: version 3.30.4
Libc version: glibc-2.39
Python version: 3.12.3 (main, Feb 4 2025, 14:48:35) [GCC 13.3.0] (64-bit runtime)
Python platform: Linux-6.8.0-52-generic-x86_64-with-glibc2.39
Is CUDA available: True
CUDA runtime version: Could not collect
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration: GPU 0: NVIDIA GeForce RTX 4090
Nvidia driver version: 565.57.01
cuDNN version: Could not collect
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 52 bits physical, 57 bits virtual
Byte Order: Little Endian
CPU(s): 192
On-line CPU(s) list: 0-191
Vendor ID: AuthenticAMD
Model name: AMD EPYC 9655P 96-Core Processor
CPU family: 26
Model: 2
Thread(s) per core: 2
Core(s) per socket: 96
Socket(s): 1
Stepping: 1
Frequency boost: enabled
CPU(s) scaling MHz: 46%
CPU max MHz: 4509.3750
CPU min MHz: 1500.0000
BogoMIPS: 5199.76
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ht syscall nx mmxext fxsr_opt pdpe1gb rdtscp lm constant_tsc rep_good amd_lbr_v2 nopl nonstop_tsc cpuid extd_apicid aperfmperf rapl pni pclmulqdq monitor ssse3 fma cx16 pcid sse4_1 sse4_2 x2apic movbe popcnt aes xsave avx f16c rdrand lahf_lm cmp_legacy svm extapic cr8_legacy abm sse4a misalignsse 3dnowprefetch osvw ibs skinit wdt tce topoext perfctr_core perfctr_nb bpext perfctr_llc mwaitx cpb cat_l3 cdp_l3 hw_pstate ssbd mba perfmon_v2 ibrs ibpb stibp ibrs_enhanced vmmcall fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid cqm rdt_a avx512f avx512dq rdseed adx smap avx512ifma clflushopt clwb avx512cd sha_ni avx512bw avx512vl xsaveopt xsavec xgetbv1 xsaves cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local user_shstk avx_vnni avx512_bf16 clzero irperf xsaveerptr rdpru wbnoinvd amd_ppin cppc arat npt lbrv svm_lock nrip_save tsc_scale vmcb_clean flushbyasid decodeassists pausefilter pfthreshold avic v_vmsave_vmload vgif x2avic v_spec_ctrl vnmi avx512vbmi umip pku ospke avx512_vbmi2 gfni vaes vpclmulqdq avx512_vnni avx512_bitalg avx512_vpopcntdq la57 rdpid bus_lock_detect movdiri movdir64b overflow_recov succor smca fsrm avx512_vp2intersect flush_l1d debug_swap
Virtualization: AMD-V
L1d cache: 4.5 MiB (96 instances)
L1i cache: 3 MiB (96 instances)
L2 cache: 96 MiB (96 instances)
L3 cache: 384 MiB (12 instances)
NUMA node(s): 1
NUMA node0 CPU(s): 0-191
Vulnerability Gather data sampling: Not affected
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Not affected
Vulnerability Reg file data sampling: Not affected
Vulnerability Retbleed: Not affected
Vulnerability Spec rstack overflow: Not affected
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Enhanced / Automatic IBRS; IBPB conditional; STIBP always-on; RSB filling; PBRSB-eIBRS Not affected; BHI Not affected
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Versions of relevant libraries:
[pip3] mypy==1.13.0
[pip3] mypy-extensions==1.0.0
[pip3] numpy==2.0.2
[pip3] nvidia-cublas-cu12==12.4.5.8
[pip3] nvidia-cuda-cupti-cu12==12.4.127
[pip3] nvidia-cuda-nvrtc-cu12==12.4.127
[pip3] nvidia-cuda-runtime-cu12==12.4.127
[pip3] nvidia-cudnn-cu12==9.1.0.70
[pip3] nvidia-cufft-cu12==11.2.1.3
[pip3] nvidia-curand-cu12==10.3.5.147
[pip3] nvidia-cusolver-cu12==11.6.1.9
[pip3] nvidia-cusparse-cu12==12.3.1.170
[pip3] nvidia-cusparselt-cu12==0.6.2
[pip3] nvidia-nccl-cu12==2.21.5
[pip3] nvidia-nvjitlink-cu12==12.4.127
[pip3] nvidia-nvtx-cu12==12.4.127
[pip3] onnx==1.17.0
[pip3] onnxscript==0.2.0
[pip3] pytorch-tcn==1.2.1
[pip3] torch==2.6.0
[pip3] triton==3.2.0
[conda] Could not collect
```
| true
|
2,885,015,251
|
[ROCm][Windows] Fix OpenMP Flags for clang-cl
|
m-gallus
|
closed
|
[
"module: rocm",
"triaged",
"open source",
"Merged",
"ciflow/trunk",
"topic: not user facing",
"ciflow/rocm"
] | 19
|
CONTRIBUTOR
|
When clang-cl parses its command-line arguments, it expects MSVC-style arguments (beginning with `/`, such as `/WX`, `/MD`, etc.) to be provided, and clang-style arguments to be preceded by `-Xclang`; otherwise, the clang-style parameters are ignored, as they are interpreted as unrecognized compiler options.
cc @jeffdaily @sunway513 @jithunnair-amd @pruthvistony @ROCmSupport @dllehr-amd @jataylo @hongxiayang @naromero77amd
| true
|
2,884,917,247
|
[BE][Ez]: Remove extra copy in dtensor parallel loss
|
Skylion007
|
closed
|
[
"oncall: distributed",
"triaged",
"open source",
"better-engineering",
"Merged",
"ciflow/trunk",
"topic: not user facing",
"ciflow/inductor"
] | 6
|
COLLABORATOR
|
Remove an extra copy of the input to `_log_softmax` when there is a dtype and memory format change. Fuse the copies instead.
cc @H-Huang @awgu @kwen2501 @wanchaol @fegin @fduwjj @wz337 @wconstab @d4l3k @c-p-i-o
| true
|
2,884,897,094
|
DISABLED test_njt_causal_bfloat16 (__main__.TestFlexAttention)
|
pytorch-bot[bot]
|
closed
|
[
"module: rocm",
"triaged",
"module: flaky-tests",
"skipped",
"oncall: pt2",
"module: inductor"
] | 2
|
NONE
|
Platforms: rocm
This test was disabled because it is failing in CI. See [recent examples](https://hud.pytorch.org/flakytest?name=test_njt_causal_bfloat16&suite=TestFlexAttention&limit=100) and the most recent trunk [workflow logs](https://github.com/pytorch/pytorch/runs/37916878876).
Over the past 3 hours, it has been determined flaky in 3 workflow(s) with 3 failures and 3 successes.
**Debugging instructions (after clicking on the recent samples link):**
DO NOT ASSUME THINGS ARE OKAY IF THE CI IS GREEN. We now shield flaky tests from developers so CI will thus be green but it will be harder to parse the logs.
To find relevant log snippets:
1. Click on the workflow logs linked above
2. Click on the Test step of the job so that it is expanded. Otherwise, the grepping will not work.
3. Grep for `test_njt_causal_bfloat16`
4. There should be several instances run (as flaky tests are rerun in CI) from which you can study the logs.
<details><summary>Sample error message</summary>
```
Traceback (most recent call last):
File "/var/lib/jenkins/pytorch/test/inductor/test_flex_attention.py", line 2016, in test_njt_causal
self.run_test_with_paged_attention(causal_njt, dtype)
File "/var/lib/jenkins/pytorch/test/inductor/test_flex_attention.py", line 711, in run_test_with_paged_attention
self._check_out(
File "/var/lib/jenkins/pytorch/test/inductor/test_flex_attention.py", line 371, in _check_out
self._check_equal(golden_out, ref_out, compiled_out, fudge_factor, "Out")
File "/var/lib/jenkins/pytorch/test/inductor/test_flex_attention.py", line 344, in _check_equal
self.assertTrue(False, "Output/Grad with NaN")
File "/opt/conda/envs/py_3.10/lib/python3.10/unittest/case.py", line 687, in assertTrue
raise self.failureException(msg)
AssertionError: False is not true : Output/Grad with NaN
To execute this test, run the following from the base repo dir:
PYTORCH_TEST_WITH_ROCM=1 PYTORCH_TEST_CUDA_MEM_LEAK_CHECK=1 python test/inductor/test_flex_attention.py TestFlexAttention.test_njt_causal_bfloat16
This message can be suppressed by setting PYTORCH_PRINT_REPRO_ON_FAILURE=0
```
</details>
Test file path: `inductor/test_flex_attention.py`
cc @jeffdaily @sunway513 @jithunnair-amd @pruthvistony @ROCmSupport @dllehr-amd @jataylo @hongxiayang @naromero77amd @clee2000 @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
| true
|
2,884,896,983
|
DISABLED test_split_dynamic (__main__.AutoFunctionalizeTests)
|
pytorch-bot[bot]
|
closed
|
[
"triaged",
"module: flaky-tests",
"skipped",
"module: functionalization",
"oncall: pt2",
"module: pt2-dispatcher"
] | 6
|
NONE
|
Platforms: asan, linux, mac, macos, rocm, slow
This test was disabled because it is failing in CI. See [recent examples](https://hud.pytorch.org/flakytest?name=test_split_dynamic&suite=AutoFunctionalizeTests&limit=100) and the most recent trunk [workflow logs](https://github.com/pytorch/pytorch/runs/37918361488).
Over the past 3 hours, it has been determined flaky in 8 workflow(s) with 16 failures and 8 successes.
**Debugging instructions (after clicking on the recent samples link):**
DO NOT ASSUME THINGS ARE OKAY IF THE CI IS GREEN. We now shield flaky tests from developers so CI will thus be green but it will be harder to parse the logs.
To find relevant log snippets:
1. Click on the workflow logs linked above
2. Click on the Test step of the job so that it is expanded. Otherwise, the grepping will not work.
3. Grep for `test_split_dynamic`
4. There should be several instances run (as flaky tests are rerun in CI) from which you can study the logs.
Test file path: `inductor/test_auto_functionalize.py`
cc @jeffdaily @sunway513 @jithunnair-amd @pruthvistony @ROCmSupport @dllehr-amd @jataylo @hongxiayang @naromero77amd @clee2000 @bdhirsh @ezyang @chauhang @penguinwu @zou3519
| true
|
2,884,896,981
|
DISABLED test_dynamo_timed (__main__.TestDynamoTimed)
|
pytorch-bot[bot]
|
closed
|
[
"triaged",
"module: flaky-tests",
"skipped",
"oncall: pt2",
"module: dynamo"
] | 20
|
NONE
|
Platforms: asan, linux, mac, macos, rocm, slow
This test was disabled because it is failing in CI. See [recent examples](https://hud.pytorch.org/flakytest?name=test_dynamo_timed&suite=TestDynamoTimed&limit=100) and the most recent trunk [workflow logs](https://github.com/pytorch/pytorch/runs/37916682229).
Over the past 3 hours, it has been determined flaky in 18 workflow(s) with 36 failures and 18 successes.
**Debugging instructions (after clicking on the recent samples link):**
DO NOT ASSUME THINGS ARE OKAY IF THE CI IS GREEN. We now shield flaky tests from developers so CI will thus be green but it will be harder to parse the logs.
To find relevant log snippets:
1. Click on the workflow logs linked above
2. Click on the Test step of the job so that it is expanded. Otherwise, the grepping will not work.
3. Grep for `test_dynamo_timed`
4. There should be several instances run (as flaky tests are rerun in CI) from which you can study the logs.
<details><summary>Sample error message</summary>
```
Traceback (most recent call last):
File "/var/lib/jenkins/workspace/test/dynamo/test_utils.py", line 282, in test_dynamo_timed
self.assertExpectedInline(
File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/testing/_internal/common_utils.py", line 3098, in assertExpectedInline
return super().assertExpectedInline(actual if isinstance(actual, str) else str(actual), expect, skip + 1)
File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/expecttest/__init__.py", line 413, in assertExpectedInline
assert_expected_inline(
File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/expecttest/__init__.py", line 378, in assert_expected_inline
assert_eq(expect, actual, msg=help_text)
File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/expecttest/__init__.py", line 450, in assertMultiLineEqualMaybeCppStack
self.assertMultiLineEqual(expect, actual, *args, **kwargs)
File "/opt/conda/envs/py_3.9/lib/python3.9/unittest/case.py", line 1217, in assertMultiLineEqual
self.fail(self._formatMessage(msg, standardMsg))
File "/opt/conda/envs/py_3.9/lib/python3.9/unittest/case.py", line 676, in fail
raise self.failureException(msg)
AssertionError: "{'ac[2230 chars]: 0.0,\n 'structured_logging_overhead_us': 0,\[157 chars]one}" != "{'ac[2230 chars]: 0.014741,\n 'structured_logging_overhead_us'[166 chars]one}"
{'accumulated_cache_size': 0,
'aot_autograd_cumulative_compile_time_us': 0,
'backend_compile_time_s': 0.0,
'backward_cumulative_compile_time_us': None,
'cache_size': 0,
'co_filename': None,
'co_firstlineno': None,
'co_name': 'forward',
'code_gen_time_s': 0.0,
'compile_id': '1/0',
'compile_time_autotune_time_us': None,
'compliant_custom_ops': set(),
'config_inline_inbuilt_nn_modules': False,
'config_suppress_errors': False,
'cuda_synchronize_time_us': None,
'cuda_version': None,
'distributed_ephemeral_timeout_us': None,
'duration_us': 0,
'dynamo_compile_time_before_restart_us': 0,
'dynamo_config': None,
'dynamo_cumulative_compile_time_us': 0,
'dynamo_time_before_restart_s': 0.0,
'end_time_us': 100,
'entire_frame_compile_time_s': 0.0,
'fail_reason': None,
'fail_type': None,
'fail_user_frame_filename': None,
'fail_user_frame_lineno': None,
'frame_key': '1',
'gc_time_us': 0,
'graph_input_count': 1,
'graph_node_count': 3,
'graph_op_count': 1,
'guard_count': 8,
'has_guarded_code': True,
'inductor_code_gen_cumulative_compile_time_us': 0,
'inductor_compile_time_s': 0.0,
'inductor_config': None,
'inductor_cumulative_compile_time_us': 0,
'inductor_fx_remote_cache_backend_type': None,
'inductor_fx_remote_cache_hit_count': None,
'inductor_fx_remote_cache_hit_keys': None,
'inductor_fx_remote_cache_miss_count': None,
'inductor_fx_remote_cache_miss_keys': None,
'is_forward': True,
'is_runtime': False,
'joint_graph_pass_time_us': 0,
'log_format_version': 3,
'non_compliant_ops': set(),
'num_graph_breaks': 0,
'num_triton_bundles': None,
'post_grad_pass_time_us': 0,
'pre_grad_pass_time_us': 0,
'recompile_reason': None,
'remote_cache_time_saved_s': None,
'remote_cache_version': None,
'remote_fx_graph_cache_get_time_ms': None,
'remote_fx_graph_cache_get_time_us': None,
'remote_fx_graph_cache_put_time_ms': None,
'remote_fx_graph_cache_put_time_us': None,
'restart_reasons': set(),
'runtime_cudagraphify_time_us': None,
'runtime_triton_autotune_time_us': None,
'shape_env_guard_count': 0,
'specialize_float': False,
'start_time': 0.0001,
'start_time_us': 100,
- 'structured_logging_overhead_s': 0.0,
+ 'structured_logging_overhead_s': 0.014741,
? +++++
- 'structured_logging_overhead_us': 0,
? ^
+ 'structured_logging_overhead_us': 14741,
? ^^^^^
'tensorify_float_attempt': None,
'tensorify_float_failure': None,
'tensorify_float_success': None,
'triton_compile_time_us': 0,
'triton_version': None} : To accept the new output, re-run test with envvar EXPECTTEST_ACCEPT=1 (we recommend staging/committing your changes before doing this)
To execute this test, run the following from the base repo dir:
python test/dynamo/test_utils.py TestDynamoTimed.test_dynamo_timed
This message can be suppressed by setting PYTORCH_PRINT_REPRO_ON_FAILURE=0
```
</details>
Test file path: `dynamo/test_utils.py`
cc @clee2000 @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @chauhang @amjames
| true
|
2,884,884,665
|
Rename node.meta["arg_kwarg_vals"] to node.meta["eager_input_vals"]
|
zou3519
|
closed
|
[
"Merged",
"ciflow/trunk",
"release notes: fx",
"fx",
"module: inductor",
"ciflow/inductor"
] | 4
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #150511
* #148104
* #150495
* __->__ #148092
* #148091
* #148063
* #148046
Also added a comment about it; otherwise it might be confusing.
Test Plan:
- wait for CI
cc @ezyang @SherlockNoMad @EikanWang @jgong5 @wenzhe-nrv @voznesenskym @penguinwu @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @jiayisunx @ipiszy @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
| true
|
2,884,884,546
|
Implement needs_exact_strides for mutable custom operators
|
zou3519
|
closed
|
[
"Merged",
"release notes: fx",
"fx",
"module: inductor",
"ciflow/inductor"
] | 3
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #150511
* #148104
* #150495
* #148092
* __->__ #148091
* #148063
* #148046
Mutable custom operators get wrapped into an auto_functionalized HOP, so
we need to store the arg_kwarg_vals on the auto_functionalized HOP
itself.
When Inductor does the re-inplacing, it'll use the pattern matcher to
decompose the auto_functionalized HOP back into the original op (and
0+ other view or clone operations). The pattern matcher uses the
arg_kwarg_vals to trace the subgraph to do the decomposition, so it
ultimately sets arg_kwarg_vals on the original op's node correctly.
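For context, a minimal sketch of the kind of mutable custom operator this path applies to (the op name and body here are hypothetical):
```python
import torch

# A custom op that mutates one of its arguments in place. Under torch.compile,
# ops like this are wrapped into the auto_functionalized higher-order op, which
# is why the arg_kwarg_vals need to be stored on that HOP node.
@torch.library.custom_op("mylib::inplace_scale", mutates_args={"out"})
def inplace_scale(out: torch.Tensor, scale: float) -> None:
    out.mul_(scale)
```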
Test Plan:
- new test
cc @ezyang @SherlockNoMad @EikanWang @jgong5 @wenzhe-nrv @voznesenskym @penguinwu @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @jiayisunx @ipiszy @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
| true
|
2,884,875,838
|
only print GraphModule during fx.Interpreter errors if valid
|
bdhirsh
|
closed
|
[
"Merged",
"ciflow/trunk",
"release notes: fx",
"fx"
] | 6
|
CONTRIBUTOR
|
Came up in https://www.internalfb.com/diff/D69057074?dst_version_fbid=970771615000938&transaction_fbid=1723357345264461 - we need to make sure the GraphModule is valid before calling `print_readable` on it
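A hedged sketch of the defensive pattern (not necessarily the exact code in this PR):
```python
def safe_graph_module_repr(gm) -> str:
    # Only include the readable graph if printing it cannot itself blow up,
    # e.g. when the GraphModule is in a partially constructed or invalid state.
    try:
        return gm.print_readable(print_output=False)
    except Exception:
        return "<GraphModule could not be printed>"
```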
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #133044
* #147561
* __->__ #148090
* #147749
cc @ezyang @SherlockNoMad @EikanWang @jgong5 @wenzhe-nrv
| true
|
2,884,867,581
|
Build a storage reader/writer to write checkpoints in HF format
|
ankitageorge
|
closed
|
[
"oncall: distributed",
"fb-exported",
"Merged",
"ciflow/trunk",
"topic: new features"
] | 8
|
CONTRIBUTOR
|
Summary: D69984656 caused issues by adding the fsspec dependency to torch distributed when many internal packages didn't have it. In this diff I'm not adding HFStorageReader/Writer to __init__.py, so the HFStorage components don't get imported internally and, in turn, no fsspec import happens. I did the removal from __init__.py in D70286926 to fix the failing tests, but the revert was done concurrently. I'll add the classes to __init__.py once I figure out a better way to get fsspec added as a dependency everywhere.
Test Plan:
signals pass
buck2 test 'fbcode//mode/opt' fbcode//caffe2/test/distributed/checkpoint:test_hf_storage
Differential Revision: D70324090
cc @H-Huang @awgu @kwen2501 @wanchaol @fegin @fduwjj @wz337 @wconstab @d4l3k @c-p-i-o @LucasLLC @MeetVadakkanchery @mhorowitz @pradeepfn @ekr0
| true
|
2,884,824,321
|
Fix minor typo in python_nccl
|
x41lakazam
|
closed
|
[
"open source",
"Merged",
"ciflow/trunk",
"topic: not user facing"
] | 8
|
CONTRIBUTOR
| null | true
|
2,884,815,054
|
[inductor] ignore block ptr advancements for removed buffers
|
kundaMwiza
|
closed
|
[
"triaged",
"open source",
"Merged",
"ciflow/trunk",
"topic: not user facing",
"module: inductor"
] | 9
|
CONTRIBUTOR
|
Follow-up to https://github.com/pytorch/pytorch/pull/147193. Some buffers are removed only when the kernel context is exited, so defer the block-pointer advancement lines instead.
Added `use_block_ptr` as a parameter to the test case that fails if run with block pointers enabled.
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
| true
|
2,884,735,181
|
FSDP2 without sharding works slower than DDP
|
JanRocketMan
|
open
|
[
"oncall: distributed",
"triaged"
] | 0
|
NONE
|
### 🐛 Describe the bug
Hi, many thanks for introducing FSDP2; it's been a major quality-of-life improvement for me over the past months!
One caveat I noticed is that communication volume has noticeably increased even in cases where it should not, e.g. if I remove all sharding via a custom device `mesh`.
Here is an example script that demonstrates this:
```python
import os
from argparse import ArgumentParser
from contextlib import nullcontext
from datetime import timedelta
from functools import partial
from pathlib import Path
from typing import Dict
import torch
from torch.distributed import destroy_process_group, init_process_group
from torch.distributed.device_mesh import init_device_mesh
from torch.distributed.fsdp import MixedPrecisionPolicy, fully_shard
from torch.nn.parallel import DistributedDataParallel as DDP
from torch.profiler import ProfilerActivity, profile, schedule
class ResidualSequential(torch.nn.Sequential):
def forward(self, input):
return input + super().forward(input)
def get_blk(n_feat: int) -> torch.nn.Sequential:
return ResidualSequential(
torch.nn.Conv2d(n_feat, n_feat, 3, padding=1, bias=False),
torch.nn.ReLU(),
torch.nn.Conv2d(n_feat, n_feat, 3, padding=1, bias=False),
)
def get_model(n_blocks=16, n_feat=768) -> torch.nn.Sequential:
return torch.nn.Sequential(
torch.nn.Conv2d(3, n_feat, 8, stride=8, bias=False),
torch.nn.Sequential(*[get_blk(n_feat=n_feat) for _ in range(n_blocks)]),
torch.nn.ConvTranspose2d(n_feat, 3, 8, stride=8, bias=False),
)
def training_step(
model: torch.nn.Module,
optimizer: torch.optim.AdamW,
precision_ctx,
device: str,
) -> Dict[str, torch.Tensor]:
# prepare model and data for training step
model.train()
optimizer.zero_grad(set_to_none=True)
batch = {"lr": torch.randn(4, 3, 512, 512).to(device), "hr": torch.randn(4, 3, 512, 512).to(device)}
with precision_ctx:
batch["total_loss"] = (model(batch["lr"]) - batch["hr"]).abs().mean()
batch["total_loss"].backward()
optimizer.step()
return batch
def trace_handler(p, export_path: str, use_ddp: bool):
rank = int(os.environ.get("RANK", 0))
fend = "_ddp" if use_ddp else "_fsdp2"
if rank == 0:
p.export_chrome_trace(export_path + "/trace_step_" + str(p.step_num) + fend + ".json")
if __name__ == "__main__":
parser = ArgumentParser()
parser.add_argument("--log_folder", type=str, default=".")
parser.add_argument("--use_ddp", action="store_true", default=False)
args = parser.parse_args()
args.log_folder = str(Path(args.log_folder).resolve())
# init distributed
assert torch.cuda.is_available()
local_rank, world_size = int(os.environ.get("LOCAL_RANK", 0)), int(os.environ.get("WORLD_SIZE", 1))
assert world_size > 1
init_process_group(backend="nccl", timeout=timedelta(minutes=20))
device = f"cuda:{local_rank}"
torch.cuda.set_device(device=device)
# set global torch params
torch.backends.cudnn.deterministic = False
torch.backends.cudnn.benchmark = True
torch.set_float32_matmul_precision("medium")
# init model
model = get_model().to(device)
if local_rank == 0:
print("ml", model)
# setup ddp-like policies for fsdp2
mp_policy = MixedPrecisionPolicy(param_dtype=torch.bfloat16, reduce_dtype=torch.float32)
mesh = init_device_mesh("cuda", (world_size, 1), mesh_dim_names=("replicate", "shard"))
# wrap with fsdp2 before optimizer
if not args.use_ddp:
for blk in model[1]:
fully_shard(blk, mesh=mesh, mp_policy=mp_policy)
fully_shard(model, mesh=mesh, mp_policy=mp_policy, reshard_after_forward=False)
optimizer = torch.optim.AdamW(model.parameters())
# wrap with ddp after optimizer
model = DDP(model, device_ids=[local_rank]) if args.use_ddp else model
precision_ctx = torch.autocast("cuda", dtype=torch.bfloat16) if args.use_ddp else nullcontext()
# profile training
results = []
profile_ctx = profile(
activities=[ProfilerActivity.CPU, ProfilerActivity.CUDA],
schedule=schedule(skip_first=2, wait=2, warmup=1, active=2, repeat=2),
record_shapes=True,
profile_memory=True,
with_stack=True,
with_modules=True,
on_trace_ready=partial(trace_handler, export_path=args.log_folder, use_ddp=args.use_ddp),
)
with profile_ctx as prof:
max_steps = 2 + (2 + 1 + 2) * 2
for _ in range(max_steps):
res = training_step(model=model, optimizer=optimizer, precision_ctx=precision_ctx, device=device)
results += [res["total_loss"].cpu()]
prof.step()
destroy_process_group()
```
When I run it on an 8xA6000 machine (with `python -u -m torch.distributed.run --nproc_per_node=8 --standalone`) I get the following profiles (for the FSDP2/DDP cases), which show a much higher communication time for FSDP2:
[trace_step_7.zip](https://github.com/user-attachments/files/19011831/trace_step_7.zip)
For convenience here are screenshots, for DDP:

For FSDP:

Forward/Backward passes work with the same performance in both cases, yet the communication stream of FSDP2 is ~1.5 times slower (231ms vs 159ms).
I would guess that in this case my fsdp splitting strategy may not be optimal, but I haven't found any official guidelines on this so I've simply copied the strategy used in [torchtitan llama training](https://github.com/pytorch/torchtitan/blob/0047aa27999b2635ae9b05d4e1c43dd95041b859/torchtitan/models/llama/parallelize_llama.py#L337).
Is this expected and what can I do to improve FSDP2 performance in this case?
### Versions
```
Collecting environment information...
PyTorch version: 2.6.0+cu124
Is debug build: False
CUDA used to build PyTorch: 12.4
ROCM used to build PyTorch: N/A
OS: Ubuntu 20.04.5 LTS (x86_64)
GCC version: (Ubuntu 9.4.0-1ubuntu1~20.04.1) 9.4.0
Clang version: Could not collect
CMake version: version 3.16.3
Libc version: glibc-2.31
Python version: 3.10.15 (main, Sep 9 2024, 22:15:21) [Clang 18.1.8 ] (64-bit runtime)
Python platform: Linux-5.4.0-132-generic-x86_64-with-glibc2.31
Is CUDA available: True
CUDA runtime version: Could not collect
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration: GPU 0: NVIDIA RTX A6000
Nvidia driver version: 550.54.15
cuDNN version: Could not collect
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Byte Order: Little Endian
Address sizes: 48 bits physical, 48 bits virtual
CPU(s): 128
On-line CPU(s) list: 0-127
Thread(s) per core: 1
Core(s) per socket: 64
Socket(s): 2
NUMA node(s): 2
Vendor ID: AuthenticAMD
CPU family: 25
Model: 1
Model name: AMD EPYC 7763 64-Core Processor
Stepping: 1
Frequency boost: enabled
CPU MHz: 1769.319
CPU max MHz: 2450.0000
CPU min MHz: 1500.0000
BogoMIPS: 4899.70
Virtualization: AMD-V
L1d cache: 4 MiB
L1i cache: 4 MiB
L2 cache: 64 MiB
L3 cache: 512 MiB
NUMA node0 CPU(s): 0-63
NUMA node1 CPU(s): 64-127
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Not affected
Vulnerability Retbleed: Not affected
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl and seccomp
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Retpolines, IBPB conditional, IBRS_FW, STIBP disabled, RSB filling, PBRSB-eIBRS Not affected
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ht syscall nx mmxext fxsr_opt pdpe1gb rdtscp lm constant_tsc rep_good nopl nonstop_tsc cpuid extd_apicid aperfmperf pni pclmulqdq monitor ssse3 fma cx16 pcid sse4_1 sse4_2 x2apic movbe popcnt aes xsave avx f16c rdrand lahf_lm cmp_legacy svm extapic cr8_legacy abm sse4a misalignsse 3dnowprefetch osvw ibs skinit wdt tce topoext perfctr_core perfctr_nb bpext perfctr_llc mwaitx cpb cat_l3 cdp_l3 invpcid_single hw_pstate ssbd mba ibrs ibpb stibp vmmcall fsgsbase bmi1 avx2 smep bmi2 invpcid cqm rdt_a rdseed adx smap clflushopt clwb sha_ni xsaveopt xsavec xgetbv1 xsaves cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local clzero irperf xsaveerptr wbnoinvd arat npt lbrv svm_lock nrip_save tsc_scale vmcb_clean flushbyasid decodeassists pausefilter pfthreshold v_vmsave_vmload vgif umip pku ospke vaes vpclmulqdq rdpid overflow_recov succor smca
Versions of relevant libraries:
[pip3] numpy==2.1.2
[pip3] nvidia-cublas-cu12==12.4.5.8
[pip3] nvidia-cuda-cupti-cu12==12.4.127
[pip3] nvidia-cuda-nvrtc-cu12==12.4.127
[pip3] nvidia-cuda-runtime-cu12==12.4.127
[pip3] nvidia-cudnn-cu12==9.1.0.70
[pip3] nvidia-cufft-cu12==11.2.1.3
[pip3] nvidia-curand-cu12==10.3.5.147
[pip3] nvidia-cusolver-cu12==11.6.1.9
[pip3] nvidia-cusparse-cu12==12.3.1.170
[pip3] nvidia-cusparselt-cu12==0.6.2
[pip3] nvidia-nccl-cu12==2.21.5
[pip3] nvidia-nvjitlink-cu12==12.4.127
[pip3] nvidia-nvtx-cu12==12.4.127
[pip3] pytorch-triton==3.1.0+cf34004b8a
[pip3] torch==2.6.0+cu124
[pip3] torchmetrics==1.6.0
[pip3] torchvision==0.21.0+cu124
[pip3] triton==3.2.0
[conda] Could not collect
```
cc @H-Huang @awgu @kwen2501 @wanchaol @fegin @fduwjj @wz337 @wconstab @d4l3k @c-p-i-o
| true
|
2,884,721,166
|
td does not detect required test for mkl-dnn OneDNN update
|
atalman
|
closed
|
[
"triaged",
"module: infra"
] | 1
|
CONTRIBUTOR
|
### 🐛 Describe the bug
The OneDNN 3.7 update (https://github.com/pytorch/pytorch/pull/147955) does not trigger ``test_mkldnn.py::TestMkldnnCPU::test_mul_cpu``. It looks like TD skips this test on that PR: https://github.com/pytorch/pytorch/actions/runs/13539463441/job/37838465868?pr=147955#step:15:13520
The test fails if you attach the ``ci-no-td`` label. Failure: ``test_mkldnn.py::TestMkldnnCPU::test_mul_cpu Windows fatal exception: access violation``
This is the PR where you can see the failure: https://github.com/pytorch/pytorch/pull/147498
### Versions
2.7.0
| true
|
2,884,668,256
|
Refine test_preserves_strides to support different GPUs
|
EikanWang
|
closed
|
[
"topic: not user facing",
"module: inductor",
"ciflow/inductor",
"ciflow/xpu"
] | 3
|
COLLABORATOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #148084
cc @voznesenskym @penguinwu @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
| true
|
2,884,284,127
|
[dynamo] WeakRefVar reconstruct
|
IvanKobzarev
|
closed
|
[
"Merged",
"ciflow/trunk",
"topic: not user facing",
"module: dynamo",
"ciflow/inductor"
] | 6
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #148083
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @chauhang @amjames
| true
|
2,884,044,311
|
Consistently use load_torchbind_test_lib in tests
|
Flamefire
|
closed
|
[
"triaged",
"module: mkldnn",
"open source",
"Merged",
"ciflow/trunk",
"release notes: export",
"ciflow/linux-aarch64"
] | 6
|
COLLABORATOR
|
The same code is repeated multiple times with slightly different implementations.
Use the existing function for brevity and consistency.
The function uses the code from `test_export`, which does a single `load_library` call with cleaner conditions.
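A hedged sketch of what such a shared loader typically looks like (the real helper lives under `torch.testing._internal`, and its exact conditions and library paths may differ):
```python
import torch
from torch.testing._internal.common_utils import (
    IS_FBCODE, IS_MACOS, IS_SANDCASTLE, IS_WINDOWS, find_library_location,
)

def load_torchbind_test_lib():
    # Load the torchbind test custom-class library for the current platform.
    if IS_SANDCASTLE or IS_FBCODE:
        torch.ops.load_library("//caffe2/test/cpp/jit:test_custom_class_registrations")
    elif IS_MACOS:
        torch.ops.load_library(str(find_library_location("libtorchbind_test.dylib")))
    elif IS_WINDOWS:
        torch.ops.load_library(str(find_library_location("torchbind_test.dll")))
    else:
        torch.ops.load_library(str(find_library_location("libtorchbind_test.so")))
```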
cc @gujinghui @PenghuiCheng @XiaobingSuper @jianyuh @jgong5 @mingfeima @sanchitintel @ashokei @jingxu10 @min-jean-cho @yanbing-j @Guobing-Chen @Xia-Weiwen @snadampal
| true
|
2,883,998,480
|
Add XPU device to LayerNormKernel devices
|
min-jean-cho
|
closed
|
[
"oncall: distributed",
"module: cpu",
"open source",
"topic: not user facing",
"module: inductor",
"module: dynamo",
"ciflow/inductor",
"release notes: distributed (checkpoint)",
"ciflow/xpu",
"release notes: xpu",
"module: xpu"
] | 8
|
COLLABORATOR
|
Work with https://github.com/intel/torch-xpu-ops/pull/1416 .
Moved to https://github.com/pytorch/pytorch/pull/148593.
cc @H-Huang @awgu @kwen2501 @wanchaol @fegin @fduwjj @wz337 @wconstab @d4l3k @c-p-i-o @jgong5 @mingfeima @XiaobingSuper @sanchitintel @ashokei @jingxu10 @voznesenskym @penguinwu @EikanWang @Guobing-Chen @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov @gujinghui @fengyuan14 @guangyey
| true
|
2,883,968,700
|
DISABLED test_split (__main__.AutoFunctionalizeTests)
|
pytorch-bot[bot]
|
closed
|
[
"triaged",
"module: flaky-tests",
"skipped",
"module: functionalization",
"oncall: pt2",
"module: pt2-dispatcher"
] | 7
|
NONE
|
Platforms: mac, macos, rocm, asan, linux, slow
This test was disabled because it is failing in CI. See [recent examples](https://hud.pytorch.org/flakytest?name=test_split&suite=AutoFunctionalizeTests&limit=100) and the most recent trunk [workflow logs](https://github.com/pytorch/pytorch/runs/37902172609).
Over the past 3 hours, it has been determined flaky in 3 workflow(s) with 6 failures and 3 successes.
**Debugging instructions (after clicking on the recent samples link):**
DO NOT ASSUME THINGS ARE OKAY IF THE CI IS GREEN. We now shield flaky tests from developers so CI will thus be green but it will be harder to parse the logs.
To find relevant log snippets:
1. Click on the workflow logs linked above
2. Click on the Test step of the job so that it is expanded. Otherwise, the grepping will not work.
3. Grep for `test_split`
4. There should be several instances run (as flaky tests are rerun in CI) from which you can study the logs.
Test file path: `inductor/test_auto_functionalize.py`
cc @jeffdaily @sunway513 @jithunnair-amd @pruthvistony @ROCmSupport @dllehr-amd @jataylo @hongxiayang @naromero77amd @clee2000 @bdhirsh @ezyang @chauhang @penguinwu @zou3519
| true
|
2,883,929,581
|
Fix test_tensorboard when started w/o tensorboard package
|
Flamefire
|
open
|
[
"triaged",
"open source",
"Stale",
"topic: not user facing"
] | 3
|
COLLABORATOR
|
If `TEST_TENSORBOARD == False`, then `DataType` is not defined or imported. However, it is used unconditionally when defining the test with `parametrize`, which leads to a NameError that crashes the test execution on start.
Provide a dummy to make it syntactically correct. The tests will still be skipped on start.
```
File "/dev/shm/build/pytorch-v2.2.1/test/test_tensorboard.py", line 885, in <module>
class TestTensorProtoSummary(BaseTestCase):
File "/dev/shm/build/pytorch-v2.2.1/test/test_tensorboard.py", line 889, in TestTensorProtoSummary
(torch.float16, DataType.DT_HALF),
^^^^^^^^
NameError: name 'DataType' is not defined
Got exit code 1, retrying...
test_tensorboard 1/1 failed! [Errno 2] No such file or directory: '/dev/shm/build/pytorch-v2.2.1/.pytest_cache/v/cache/stepcurrent/test_tensorboard_0_0dba8bc00bbe233f'
```
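A minimal sketch of the proposed fix (the import path and the attribute names below are assumptions based on the usual test setup):
```python
try:
    from tensorboard.compat.proto.types_pb2 import DataType
    TEST_TENSORBOARD = True
except ImportError:
    TEST_TENSORBOARD = False

    class DataType:
        # Placeholder so class-level parametrize() doesn't raise NameError;
        # the tests are skipped anyway when TEST_TENSORBOARD is False.
        DT_HALF = None
        DT_FLOAT = None
        DT_DOUBLE = None
        DT_BFLOAT16 = None
        DT_INT8 = None
```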
| true
|
2,883,887,624
|
Video-Llama (version 1) runs much slower using Float16 than Float32 on Kunpeng CPU
|
luentong
|
open
|
[
"module: cpu",
"triaged",
"module: arm",
"topic: performance"
] | 2
|
NONE
|
This is a fairly niche question, so I'm just looking for suggestions from someone who has worked in this area.
I've tested thoroughly that Video-Llama runs much slower with Float16 than with Float32 on a Kunpeng Arm AArch64 CPU (at least 200x slower). On the other hand, other LLMs such as Llama 3.2, Qwen2.5, and even Video-Llama 2 or 3 have at least comparable or faster speed with Float16 than with Float32 on both Kunpeng and x86 CPUs. I haven't managed to run Video-Llama (version 1) on an x86 machine, so that case is untested; I have only run Video-Llama 2 and 3 on x86 and Kunpeng. Sorry about that.
Then I profiled each with perf on the Kunpeng Arm AArch64 CPU. The most time-consuming calls for Qwen2.5 in Float16 are shown in the image below:

The most time-consuming calls for Video-Llama in Float16 are shown in the image below:

So if I'm reading this correctly, the difference is that `at::internal::invoke_parallel<at::parallel_for<at::native::SVE256::fp16_gemv_trans_fp32_arith_by_dot_products>` gets called in Qwen for Float16 calculations (29 billion samples in 30-50 secs), while `at::internal::invoke_parallel<at::parallel_for<at::native::cpublas::(anonymous namespace)::gemm_transa_<c10::Half, float>` gets called in Video-Llama for Float16 calculations (2126 billion samples in 11 mins).
So my question is: for Float16 Video-Llama, why does `at::native::cpublas::(anonymous namespace)::gemm_transa_<c10::Half, float>` need to be called at least 100x more often than `at::parallel_for<at::native::SVE256::fp16_gemv_trans_fp32_arith_by_dot_products>` is for Float16 Qwen, while in Float32 they both run at about the same speed?
Thanks a lot for any suggestion.
Flame Graph for Qwen in Float16 on Kunpeng (30s-50s):

Flame Graph for Video-Llama in Float16 on Kunpeng (11 minutes):

cc @jgong5 @mingfeima @XiaobingSuper @sanchitintel @ashokei @jingxu10 @malfet @snadampal @milpuz01
| true
|
2,883,864,407
|
Feature Request: CUDA-Optimized Queue Buffer for PyTorch
|
songyuc
|
open
|
[
"module: cuda",
"triaged",
"needs research",
"needs design"
] | 1
|
NONE
|
### 🚀 The feature, motivation and pitch
## Problem Statement
When implementing reinforcement learning algorithms like SAC, we need efficient experience replay buffers to store and sample transitions (state, action, reward, next_state). Currently, many implementations resort to custom circular arrays instead of using Python's built-in data structures like `collections.deque`. This is because, as noted in [ManiSkill issue #881](https://github.com/haosulab/ManiSkill/issues/881), storing CUDA tensors in Python's deque is inefficient as it's designed primarily for CPU objects and Python primitives.
## Feature Request
I propose adding a CUDA-optimized queue/circular buffer data structure to PyTorch that efficiently handles tensor operations, particularly for CUDA tensors. This would provide native support for queue operations that maintain GPU residency throughout.
## Potential Implementation
The implementation could (a rough sketch follows this list):
- Leverage CUDA memory management for efficient storage
- Possibly use `torch.roll` or similar operations for efficient circular behavior
- Provide standard queue operations (enqueue, dequeue, sample) optimized for tensors
- Support batched operations common in deep learning workflows
- Include options for fixed size buffers (important for replay memory)
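As a rough illustration of the request (a sketch only, written in plain PyTorch; not a proposed API), a fixed-size, GPU-resident circular buffer could look like this:
```python
import torch

class CudaReplayBuffer:
    """Fixed-size circular buffer whose storage never leaves the GPU (illustrative sketch)."""

    def __init__(self, capacity: int, item_shape: tuple, device: str = "cuda"):
        self.buf = torch.empty(capacity, *item_shape, device=device)
        self.capacity = capacity
        self.next_idx = 0
        self.size = 0

    def push(self, batch: torch.Tensor) -> None:
        # Write a batch of items at the current position, wrapping around the end.
        n = batch.shape[0]
        pos = (self.next_idx + torch.arange(n, device=batch.device)) % self.capacity
        self.buf[pos] = batch
        self.next_idx = (self.next_idx + n) % self.capacity
        self.size = min(self.size + n, self.capacity)

    def sample(self, n: int) -> torch.Tensor:
        # Uniform random sampling also stays on the GPU.
        idx = torch.randint(self.size, (n,), device=self.buf.device)
        return self.buf[idx]
```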
## Benefits
1. **Performance**: Eliminate CPU-GPU transfers when managing sequential data
2. **Simplicity**: Reduce boilerplate code in RL implementations
3. **Standardization**: Provide an optimized, tested solution instead of custom implementations
4. **Broader applications**: Useful beyond RL for sequential modeling, online learning, etc.
## Use Cases
- Replay buffers in reinforcement learning
- Sequence modeling with sliding windows
- Online learning algorithms
- Any application requiring FIFO operations on GPU tensors
I'm a student studying machine learning, and I noticed this gap while analyzing RL codebases. I believe this addition would benefit many PyTorch users working with sequential data on GPUs.
Thank you for considering this feature request!
### Alternatives
_No response_
### Additional context
_No response_
cc @ptrblck @msaroufim @eqy
| true
|
2,883,854,989
|
[test][do not merge] upgrade onednn v3.7, no ideep change
|
yanbing-j
|
closed
|
[
"module: cpu",
"module: mkldnn",
"open source",
"module: arm",
"ciflow/trunk",
"topic: not user facing",
"intel",
"ciflow/xpu",
"ciflow/linux-aarch64"
] | 1
|
COLLABORATOR
|
cc @jgong5 @mingfeima @XiaobingSuper @sanchitintel @ashokei @jingxu10 @gujinghui @PenghuiCheng @jianyuh @min-jean-cho @Guobing-Chen @Xia-Weiwen @snadampal @malfet @milpuz01
| true
|
2,883,833,756
|
[Dynamo] Fix `AssertionError` when dynamo traces `torch.functional.xxx()` functions
|
shink
|
closed
|
[
"triaged",
"open source",
"Merged",
"ciflow/trunk",
"topic: not user facing",
"module: dynamo"
] | 8
|
CONTRIBUTOR
|
Fixes #147840
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @chauhang @amjames
| true
|
2,883,791,582
|
[inductor][cpu]DebertaV2ForMaskedLM, DebertaV2ForQuestionAnswering and eca_halonext26ts max_autotune accuracy failure in 2025-02-24 nightly release
|
zxd1997066
|
closed
|
[
"oncall: pt2",
"oncall: cpu inductor"
] | 4
|
CONTRIBUTOR
|
### 🐛 Describe the bug
**fp32 max_autotune static shape default wrapper**

| suite | name | thread | accuracy | perf | reason(reference only) |
|---|---|---|---|---|---|
| huggingface | DebertaV2ForMaskedLM | multiple | pass_due_to_skip | X | DebertaV2ForMaskedLM, LoweringException: AssertionError: View( |
| huggingface | DebertaV2ForQuestionAnswering | multiple | X | X | LoweringException: AssertionError: View( |
| timm_models | eca_halonext26ts | multiple | X | X | LoweringException: AssertionError: View( |
| huggingface | DebertaV2ForMaskedLM | single | pass_due_to_skip | X | DebertaV2ForMaskedLM, LoweringException: AssertionError: View( |
| huggingface | DebertaV2ForQuestionAnswering | single | X | X | LoweringException: AssertionError: View( |
| timm_models | eca_halonext26ts | single | X | X | LoweringException: AssertionError: View( |
the bad commit: 2fb9416e6fea189baf45dd7a9a5e965e3f46f29a
```
/workspace/pytorch# bash inductor_single_run.sh multiple inference accuracy timm_models eca_halonext26ts float32 first static default 0 inductor_max_autotune
Testing with inductor_max_autotune.
multi-threads testing....
model.safetensors: 100%|████████████████████████████████████████████████████████████████████| 43.2M/43.2M [00:07<00:00, 5.90MB/s]
loading model: 0it [00:10, ?it/s]█████████████████████████████████████████████████████████ | 41.9M/43.2M [00:07<00:00, 8.09MB/s]
cpu eval eca_halonext26ts
WARNING:common:Trying to call the empty_gpu_cache for device: cpu, which is not in list [cuda, xpu]
WARNING:common:Trying to call the empty_gpu_cache for device: cpu, which is not in list [cuda, xpu]
WARNING:common:Trying to call the empty_gpu_cache for device: cpu, which is not in list [cuda, xpu]
AUTOTUNE bmm(256x64x16, 256x16x144)
bmm 0.0361 ms 100.0%
cpp_bmm_0 0.1043 ms 34.6%
SingleProcess AUTOTUNE benchmarking takes 0.2582 seconds and 1.5254 seconds precompiling for 2 choices
AUTOTUNE packed_linear(16384x16, 1982689x1, 23x16)
cpp_packed_gemm_1 0.0253 ms 100.0%
_mkl_linear 0.0768 ms 33.0%
SingleProcess AUTOTUNE benchmarking takes 0.2495 seconds and 1.4916 seconds precompiling for 2 choices
AUTOTUNE bmm(256x64x144, 256x144x32)
bmm 0.0511 ms 100.0%
cpp_bmm_3 0.1307 ms 39.1%
SingleProcess AUTOTUNE benchmarking takes 0.2615 seconds and 1.3983 seconds precompiling for 2 choices
AUTOTUNE bmm(256x16x16, 256x16x144)
bmm 0.0211 ms 100.0%
cpp_bmm_4 0.0862 ms 24.5%
SingleProcess AUTOTUNE benchmarking takes 0.2517 seconds and 1.5165 seconds precompiling for 2 choices
AUTOTUNE packed_linear(4096x16, 1982689x1, 23x16)
cpp_packed_gemm_5 0.0068 ms 100.0%
_mkl_linear 0.0764 ms 8.9%
SingleProcess AUTOTUNE benchmarking takes 0.2481 seconds and 1.5178 seconds precompiling for 2 choices
AUTOTUNE bmm(256x16x144, 256x144x64)
bmm 0.0365 ms 100.0%
cpp_bmm_7 0.1044 ms 35.0%
SingleProcess AUTOTUNE benchmarking takes 0.2584 seconds and 1.3704 seconds precompiling for 2 choices
AUTOTUNE bmm(64x64x16, 64x16x144)
bmm 0.0211 ms 100.0%
cpp_bmm_8 0.1308 ms 16.1%
SingleProcess AUTOTUNE benchmarking takes 0.2500 seconds and 1.5260 seconds precompiling for 2 choices
ERROR:common:
Traceback (most recent call last):
File "/workspace/pytorch/benchmarks/dynamo/common.py", line 2228, in check_accuracy
new_result = self.run_n_iterations(
File "/workspace/pytorch/benchmarks/dynamo/common.py", line 1937, in run_n_iterations
model_iter_fn(mod, inputs, collect_outputs=False)
File "/workspace/pytorch/torch/_dynamo/eval_frame.py", line 589, in _fn
raise e.remove_dynamo_frames() from None # see TORCHDYNAMO_VERBOSE=1
File "/workspace/pytorch/torch/_dynamo/output_graph.py", line 1515, in _call_user_compiler
raise BackendCompilerFailed(
File "/workspace/pytorch/torch/_dynamo/output_graph.py", line 1490, in _call_user_compiler
compiled_fn = compiler_fn(gm, self.example_inputs())
File "/workspace/pytorch/torch/_dynamo/repro/after_dynamo.py", line 150, in __call__
compiled_gm = compiler_fn(gm, example_inputs)
File "/workspace/pytorch/torch/__init__.py", line 2339, in __call__
return compile_fx(model_, inputs_, config_patches=self.config)
File "/workspace/pytorch/torch/_inductor/compile_fx.py", line 1829, in compile_fx
return compile_fx(
File "/workspace/pytorch/torch/_inductor/compile_fx.py", line 2164, in compile_fx
return aot_autograd(
File "/workspace/pytorch/torch/_dynamo/backends/common.py", line 101, in __call__
cg = aot_module_simplified(gm, example_inputs, **self.kwargs)
File "/workspace/pytorch/torch/_functorch/aot_autograd.py", line 1158, in aot_module_simplified
compiled_fn = AOTAutogradCache.load(
File "/workspace/pytorch/torch/_functorch/_aot_autograd/autograd_cache.py", line 779, in load
compiled_fn = dispatch_and_compile()
File "/workspace/pytorch/torch/_functorch/aot_autograd.py", line 1143, in dispatch_and_compile
compiled_fn, _ = create_aot_dispatcher_function(
File "/workspace/pytorch/torch/_functorch/aot_autograd.py", line 570, in create_aot_dispatcher_function
return _create_aot_dispatcher_function(
File "/workspace/pytorch/torch/_functorch/aot_autograd.py", line 820, in _create_aot_dispatcher_function
compiled_fn, fw_metadata = compiler_fn(
File "/workspace/pytorch/torch/_functorch/_aot_autograd/jit_compile_runtime_wrappers.py", line 205, in aot_dispatch_base
compiled_fw = compiler(fw_module, updated_flat_args)
File "/workspace/pytorch/torch/_inductor/compile_fx.py", line 1732, in fw_compiler_freezing
optimized_function = inner_compile(
File "/opt/conda/lib/python3.10/contextlib.py", line 79, in inner
return func(*args, **kwds)
File "/workspace/pytorch/torch/_inductor/compile_fx.py", line 615, in compile_fx_inner
return wrap_compiler_debug(_compile_fx_inner, compiler_name="inductor")(
File "/workspace/pytorch/torch/_dynamo/repro/after_aot.py", line 124, in debug_wrapper
inner_compiled_fn = compiler_fn(gm, example_inputs)
File "/workspace/pytorch/torch/_inductor/compile_fx.py", line 721, in _compile_fx_inner
mb_compiled_graph = fx_codegen_and_compile(
File "/workspace/pytorch/torch/_inductor/compile_fx.py", line 1403, in fx_codegen_and_compile
return scheme.codegen_and_compile(gm, example_inputs, inputs_to_check, graph_kwargs)
File "/workspace/pytorch/torch/_inductor/compile_fx.py", line 1054, in codegen_and_compile
graph.run(*example_inputs)
File "/workspace/pytorch/torch/_inductor/graph.py", line 866, in run
return super().run(*args)
File "/workspace/pytorch/torch/fx/interpreter.py", line 171, in run
self.env[node] = self.run_node(node)
File "/workspace/pytorch/torch/_inductor/graph.py", line 1461, in run_node
result = super().run_node(n)
File "/workspace/pytorch/torch/fx/interpreter.py", line 236, in run_node
return getattr(self, n.op)(n.target, args, kwargs)
File "/workspace/pytorch/torch/_inductor/graph.py", line 1159, in call_function
raise LoweringException(e, target, args, kwargs).with_traceback(
File "/workspace/pytorch/torch/_inductor/graph.py", line 1149, in call_function
out = lowerings[target](*args, **kwargs) # type: ignore[index]
File "/workspace/pytorch/torch/_inductor/lowering.py", line 462, in wrapped
out = decomp_fn(*args, **kwargs)
File "/workspace/pytorch/torch/_inductor/kernel/bmm.py", line 190, in tuned_bmm
CppBmmTemplate.add_choices(
File "/workspace/pytorch/torch/_inductor/codegen/cpp_gemm_template.py", line 956, in add_choices
template.maybe_append_choice(choices)
File "/workspace/pytorch/torch/_inductor/select_algorithm.py", line 1573, in maybe_append_choice
return type(self._wrapped).maybe_append_choice(self, choices, **kwargs)
File "/workspace/pytorch/torch/_inductor/codegen/common.py", line 2255, in maybe_append_choice
choices.append(self.generate(**kwargs))
File "/workspace/pytorch/torch/_inductor/select_algorithm.py", line 1576, in generate
choice_caller = self._wrapped.generate(**kwargs)
File "/workspace/pytorch/torch/_inductor/codegen/cpp_template.py", line 52, in generate
code = kernel.render(self, **kwargs)
File "/workspace/pytorch/torch/_inductor/codegen/cpp_template_kernel.py", line 50, in render
template.render(kernel=self, **kwargs), self.render_hooks
File "/workspace/pytorch/torch/_inductor/codegen/cpp_bmm_template.py", line 217, in render
options = self.get_options(
File "/workspace/pytorch/torch/_inductor/codegen/cpp_bmm_template.py", line 195, in get_options
options[kword] = kernel.select(options[kword], 0, self.b_index)
File "/workspace/pytorch/torch/_inductor/codegen/cpp_template_kernel.py", line 174, in select
assert isinstance(sliced.data, ir.ReinterpretView), sliced.data
torch._dynamo.exc.BackendCompilerFailed: backend='inductor' raised:
LoweringException: AssertionError: View(
ReinterpretView(
StorageBox(
ComputedBuffer(name='buf149', layout=FixedLayout('cpu', torch.float32, size=[64, 144, 64], stride=[9216, 1, 144]), data=Pointwise(device=device(type='cpu'), dtype=torch.float32, inner_fn=<function BaseView.make_loader.<locals>.loader at 0x7f3d3634a3b0>, ranges=[64, 144, 64]))
),
FixedLayout('cpu', torch.float32, size=[1, 144, 64], stride=[9216, 1, 144], offset=9216*s_b_index),
origins=OrderedSet([bmm_5])
),
size=[144, 64],
reindex=lambda i0, i1: [0, i0, i1],
origins=OrderedSet([bmm_5])
)
target: aten.bmm.default
args[0]: TensorBox(
View(
StorageBox(
ComputedBuffer(name='buf148', layout=FixedLayout('cpu', torch.float32, size=[64, 1, 64, 144], stride=[9216, 1, 144, 1]), data=Pointwise(device=device(type='cpu'), dtype=torch.float32, inner_fn=<function make_pointwise.<locals>.inner.<locals>.inner_fn at 0x7f3d36317400>, ranges=[64, 1, 64, 144]))
),
size=[64, 64, 144],
reindex=lambda i0, i1, i2: [i0, 0, i1, i2],
origins=OrderedSet([view_80, div_2])
)
)
args[1]: TensorBox(
View(
SliceView(
PermuteView(data=View(data=GenericView(
TensorBox(
GenericView(
TensorBox(StorageBox(
ComputedBuffer(name='buf138', layout=FlexibleLayout('cpu', torch.float32, size=[8, 640, 12, 12], stride=[92160, 144, 12, 1]), data=Pointwise(device=device(type='cpu'), dtype=torch.float32, inner_fn=<function constant_pad_nd.<locals>.offset_fn at 0x7f3d3637a560>, ranges=[8, 640, 12, 12]))
)),
size=[8, 640, 1, 12, 12],
reindex=lambda i0, i1, i2, i3, i4: [i0, i1, 8*i2 + i4, i3],
origins=OrderedSet([unfold_4, constant_pad_nd_10])
)
),
size=[8, 640, 1, 1, 12, 12],
reindex=lambda i0, i1, i2, i3, i4, i5: [i0, i1, i2, 8*i3 + i5, i4],
origins=OrderedSet([unfold_5])
), size=[64, 80, 1, 144], reindex=<function fuse_reindexing.<locals>.reindex at 0x7f3d36378820>), dims=[0, 2, 3, 1]),
size=[64, 1, 144, 64],
reindex=lambda i0, i1, i2, i3: [i0, i1, i2, i3 + 16],
origins=OrderedSet([split_with_sizes_2])
),
size=[64, 144, 64],
reindex=lambda i0, i1, i2: [i0, 0, i1, i2],
origins=OrderedSet([view_81])
)
)
Set TORCH_LOGS="+dynamo" and TORCHDYNAMO_VERBOSE=1 for more information
TorchDynamo optimized model failed to run because of following error
fail_to_run
```
the last good commit: d91be786cbe7839c98ce54477ac41b38a2b1b4fc
```
/workspace/pytorch# bash inductor_single_run.sh multiple inference accuracy timm_models eca_halonext26ts float32 first static default 0 inductor_max_autotune
Testing with inductor_max_autotune.
multi-threads testing....
loading model: 0it [00:01, ?it/s]
cpu eval eca_halonext26ts
WARNING:common:Trying to call the empty_gpu_cache for device: cpu, which is not in list [cuda, xpu]
WARNING:common:Trying to call the empty_gpu_cache for device: cpu, which is not in list [cuda, xpu]
WARNING:common:Trying to call the empty_gpu_cache for device: cpu, which is not in list [cuda, xpu]
AUTOTUNE bmm(64x64x144, 64x144x64)
bmm 0.0383 ms 100.0%
cpp_bmm_11 0.1772 ms 21.6%
SingleProcess AUTOTUNE benchmarking takes 0.2536 seconds and 1.4096 seconds precompiling for 2 choices
AUTOTUNE packed_linear(8x2048, 4079841x1, 1000x2048)
cpp_packed_gemm_12 0.0209 ms 100.0%
_mkl_linear 0.0399 ms 52.4%
SingleProcess AUTOTUNE benchmarking takes 0.2481 seconds and 1.5150 seconds precompiling for 2 choices
pass
WARNING:common:Trying to call the empty_gpu_cache for device: cpu, which is not in list [cuda, xpu]
dev,name,batch_size,accuracy,calls_captured,unique_graphs,graph_breaks,unique_graph_breaks,autograd_captures,autograd_compiles,cudagraph_skips,compilation_latency
cpu,eca_halonext26ts,8,pass,302,1,0,0,0,0,1,71.135170
```
### Versions
**SW info**

| name | target_branch | target_commit | refer_branch | refer_commit |
|---|---|---|---|---|
| torchbench | main | 373ffb19 | main | 373ffb19 |
| torch | main | bea72180ed75f522ce4fe5e723bc2112e0874732 | main | 1677a3101959cd2ea5f811a023a2b8b0b9fc6c18 |
| torchvision | main | 0.19.0a0+d23a6e1 | main | 0.19.0a0+d23a6e1 |
| torchtext | main | 0.16.0a0+b0ebddc | main | 0.16.0a0+b0ebddc |
| torchaudio | main | 2.6.0a0+c670ad8 | main | 2.6.0a0+f084f34 |
| torchdata | main | 0.7.0a0+11bb5b8 | main | 0.7.0a0+11bb5b8 |
| dynamo_benchmarks | main | nightly | main | nightly |
Repro:
[inductor_single_run.sh](https://github.com/chuanqi129/inductor-tools/blob//main/scripts/modelbench/inductor_single_run.sh)
bash inductor_single_run.sh multiple inference accuracy timm_models eca_halonext26ts float32 first static default 0 inductor_max_autotune
Suspected guilty commit: https://github.com/pytorch/pytorch/commit/2fb9416e6fea189baf45dd7a9a5e965e3f46f29a
[timm_models-eca_halonext26ts-inference-float32-static-default-multiple-accuracy-crash_guilty_commit.log](https://github.com/user-attachments/files/19005173/timm_models-eca_halonext26ts-inference-float32-static-default-multiple-accuracy-crash_guilty_commit.log)
cc @chauhang @penguinwu @chuanqi129
| true
|
2,883,723,954
|
torch.compile with the inductor backend slows down (exponentially?) for certain graphs
|
linkct
|
open
|
[
"triaged",
"oncall: pt2",
"module: inductor"
] | 3
|
NONE
|
### 🐛 Describe the bug
I recently ran into some issues where the compilation time of my model went from ~1min to ~20min. After some profiling I pinpointed the same culprit as in https://github.com/pytorch/pytorch/pull/145082#issue-2795829525, and that PR solved the initial problem. However, the issue appeared again after some more changes to the model, and I managed to construct the following example that seems to **still slow down not quadratically, but exponentially**:
```python
import time
import torch
from torch import Tensor
def f(a: Tensor, b: Tensor, n: int):
l = [a, b]
for _ in range(n):
c = l[-1] + l[-2]
d = l[-1] * l[-2]
l.append(c)
l.append(d)
return l[-2] + l[-1]
def test(func):
for n in range(1, 14):
torch.compiler.reset()
x = torch.zeros((), requires_grad=True).cuda()
y = torch.zeros((), requires_grad=True).cuda()
start_time = time.time()
v = func(x, y, n)
v.backward()
torch.cuda.synchronize()
end_time = time.time()
print(n, end_time - start_time)
def main():
# Eager mode
print('Eager:')
test(f)
print('Compile+aot_eager:')
test(torch.compile(f, backend='aot_eager'))
print('Compile+aot_eager_decomp_partition:')
test(torch.compile(f, backend='aot_eager_decomp_partition'))
print('Compile+inductor:')
test(torch.compile(f, backend='inductor'))
print('Compile+inductor+reduce-overhead:')
test(torch.compile(f, backend='inductor', mode='reduce-overhead'))
print('Compile+inductor+max-autotune:')
test(torch.compile(f, backend='inductor', mode='max-autotune'))
if __name__ == '__main__':
main()
```
With the latest nightly, this works on my machine up to the `aot_eager_decomp_partition` part (so it seems the partitioner problem is indeed fixed). The `inductor` backend in any mode slows down with an exponential pattern like this:
```
Compile+inductor:
1 0.8154263496398926
2 0.47394800186157227
3 0.4942817687988281
4 0.5508174896240234
5 0.7381925582885742
6 0.9646518230438232
7 1.519777774810791
8 1.566840410232544
9 2.7781589031219482
10 3.7171809673309326
11 6.234206199645996
12 8.860355615615845
13 16.576820611953735
```
See below for the full log. Set `TORCHINDUCTOR_FORCE_DISABLE_CACHES=1` if you're going to run this example multiple times.
### Error logs
Eager:
1 0.022579193115234375
2 0.002338886260986328
3 0.002366781234741211
4 0.0024051666259765625
5 0.002657651901245117
6 0.0027315616607666016
7 0.002536296844482422
8 0.002430438995361328
9 0.002434968948364258
10 0.0024483203887939453
11 0.0027692317962646484
12 0.0024759769439697266
13 0.002763509750366211
Compile+aot_eager:
1 0.4809610843658447
2 0.025922060012817383
3 0.03066420555114746
4 0.0345158576965332
5 0.038732290267944336
6 0.04291129112243652
7 0.04505610466003418
8 0.049688100814819336
9 0.05630779266357422
10 0.0574648380279541
11 0.0630183219909668
12 0.06547188758850098
13 0.06908655166625977
Compile+aot_eager_decomp_partition:
1 0.03597378730773926
2 0.02463221549987793
3 0.030230045318603516
4 0.03245234489440918
5 0.0365450382232666
6 0.04038357734680176
7 0.04572415351867676
8 0.05180621147155762
9 0.053726911544799805
10 0.05597972869873047
11 0.06010150909423828
12 0.06695151329040527
13 0.0691518783569336
Compile+inductor:
1 0.8154263496398926
2 0.47394800186157227
3 0.4942817687988281
4 0.5508174896240234
5 0.7381925582885742
6 0.9646518230438232
7 1.519777774810791
8 1.566840410232544
9 2.7781589031219482
10 3.7171809673309326
11 6.234206199645996
12 8.860355615615845
13 16.576820611953735
Compile+inductor+reduce-overhead:
1 0.9557003974914551
2 0.4736628532409668
3 0.5002808570861816
4 0.35639095306396484
5 0.5239663124084473
6 0.7586610317230225
7 1.2123222351074219
8 1.3448257446289062
9 2.4787909984588623
10 3.3813910484313965
11 6.345768928527832
12 8.314946413040161
13 16.54460048675537
Compile+inductor+max-autotune:
1 1.51352858543396
2 0.4817020893096924
3 0.48807692527770996
4 0.5471885204315186
5 0.7288966178894043
6 0.9598393440246582
7 1.3769056797027588
8 1.47214674949646
9 2.681800603866577
10 3.6021103858947754
11 5.885891914367676
12 9.722765922546387
13 15.731146335601807
### Versions
Collecting environment information...
PyTorch version: 2.7.0.dev20250226+cu126
Is debug build: False
CUDA used to build PyTorch: 12.6
ROCM used to build PyTorch: N/A
OS: Ubuntu 20.04.6 LTS (x86_64)
GCC version: (Ubuntu 9.4.0-1ubuntu1~20.04.2) 9.4.0
Clang version: 10.0.0-4ubuntu1
CMake version: version 3.16.3
Libc version: glibc-2.31
Python version: 3.10.16 | packaged by conda-forge | (main, Dec 5 2024, 14:16:10) [GCC 13.3.0] (64-bit runtime)
Python platform: Linux-5.15.0-131-generic-x86_64-with-glibc2.31
Is CUDA available: True
CUDA runtime version: 12.3.107
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration: GPU 0: NVIDIA GeForce RTX 4090
Nvidia driver version: 550.144.03
cuDNN version: Could not collect
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Byte Order: Little Endian
Address sizes: 39 bits physical, 48 bits virtual
CPU(s): 32
On-line CPU(s) list: 0-31
Thread(s) per core: 1
Core(s) per socket: 24
Socket(s): 1
NUMA node(s): 1
Vendor ID: GenuineIntel
CPU family: 6
Model: 183
Model name: 13th Gen Intel(R) Core(TM) i9-13900KF
Stepping: 1
CPU MHz: 3000.000
CPU max MHz: 5800.0000
CPU min MHz: 800.0000
BogoMIPS: 5990.40
Virtualization: VT-x
L1d cache: 576 KiB
L1i cache: 384 KiB
L2 cache: 24 MiB
NUMA node0 CPU(s): 0-31
Vulnerability Gather data sampling: Not affected
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Not affected
Vulnerability Reg file data sampling: Mitigation; Clear Register File
Vulnerability Retbleed: Not affected
Vulnerability Spec rstack overflow: Not affected
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl and seccomp
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Enhanced / Automatic IBRS; IBPB conditional; RSB filling; PBRSB-eIBRS SW sequence; BHI BHI_DIS_S
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc art arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc cpuid aperfmperf tsc_known_freq pni pclmulqdq dtes64 monitor ds_cpl vmx est tm2 ssse3 sdbg fma cx16 xtpr pdcm sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm 3dnowprefetch cpuid_fault epb ssbd ibrs ibpb stibp ibrs_enhanced tpr_shadow vnmi flexpriority ept vpid ept_ad fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid rdseed adx smap clflushopt clwb intel_pt sha_ni xsaveopt xsavec xgetbv1 xsaves split_lock_detect avx_vnni dtherm ida arat pln pts hwp hwp_notify hwp_act_window hwp_epp hwp_pkg_req umip pku ospke waitpkg gfni vaes vpclmulqdq rdpid movdiri movdir64b fsrm md_clear serialize arch_lbr flush_l1d arch_capabilities
Versions of relevant libraries:
[pip3] mypy_extensions==1.0.0
[pip3] numpy==2.2.3
[pip3] nvidia-cublas-cu12==12.6.4.1
[pip3] nvidia-cuda-cupti-cu12==12.6.80
[pip3] nvidia-cuda-nvrtc-cu12==12.6.77
[pip3] nvidia-cuda-runtime-cu12==12.6.77
[pip3] nvidia-cudnn-cu12==9.5.1.17
[pip3] nvidia-cufft-cu12==11.3.0.4
[pip3] nvidia-curand-cu12==10.3.7.77
[pip3] nvidia-cusolver-cu12==11.7.1.2
[pip3] nvidia-cusparse-cu12==12.5.4.2
[pip3] nvidia-cusparselt-cu12==0.6.3
[pip3] nvidia-nccl-cu12==2.25.1
[pip3] nvidia-nvjitlink-cu12==12.6.85
[pip3] nvidia-nvtx-cu12==12.6.77
[pip3] pytorch-triton==3.2.0+git4b3bb1f8
[pip3] torch==2.7.0.dev20250226+cu126
[pip3] torchaudio==2.6.0.dev20250226+cu126
[pip3] torchvision==0.22.0.dev20250226+cu126
[conda] numpy 2.2.3 py310hefbff90_0
[conda] nvidia-cublas-cu12 12.6.4.1 pypi_0
[conda] nvidia-cuda-cupti-cu12 12.6.80 pypi_0
[conda] nvidia-cuda-nvrtc-cu12 12.6.77 pypi_0
[conda] nvidia-cuda-runtime-cu12 12.6.77 pypi_0
[conda] nvidia-cudnn-cu12 9.5.1.17 pypi_0
[conda] nvidia-cufft-cu12 11.3.0.4 pypi_0
[conda] nvidia-curand-cu12 10.3.7.77 pypi_0
[conda] nvidia-cusolver-cu12 11.7.1.2 pypi_0
[conda] nvidia-cusparse-cu12 12.5.4.2 pypi_0
[conda] nvidia-cusparselt-cu12 0.6.3 pypi_0
[conda] nvidia-nccl-cu12 2.25.1 pypi_0
[conda] nvidia-nvjitlink-cu12 12.6.85 pypi_0
[conda] nvidia-nvtx-cu12 12.6.77 pypi_0
[conda] pytorch-triton 3.2.0+git4b3bb1f8 pypi_0
[conda] torch 2.7.0.dev20250226+cu126 pypi_0
[conda] torchaudio 2.6.0.dev20250226+cu126 pypi_0
[conda] torchvision 0.22.0.dev20250226+cu126 pypi_0
cc @chauhang @penguinwu @voznesenskym @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @amjames @aakhundov @zou3519 @bdhirsh
| true
|
2,883,713,987
|
torch.compile() on quantized model: No attribute "meta"
|
Whadup
|
closed
|
[
"needs reproduction",
"triaged",
"oncall: pt2",
"oncall: cpu inductor"
] | 3
|
CONTRIBUTOR
|
During evaluation of a compiled and quantized model, I get an error ("no attribute 'meta'") in the following line:
https://github.com/pytorch/pytorch/blob/fd43c36aa9407634ad9f063eafc5b2795c091022/torch/_inductor/fx_passes/quantization.py#L1074
I propose the following change:
```diff
- x = match.kwargs["x"].meta["val"] if hasattr(match.kwargs["x"], 'meta') else match.kwargs["x"]
- weight = match.kwargs["weight"].meta["val"] if hasattr(match.kwargs["weight"], 'meta') else match.kwargs["weight"]
- scales = match.kwargs["scales"].meta["val"] if hasattr(match.kwargs["scales"], 'meta') else match.kwargs["scales"]
+ x = match.kwargs["x"]
+ if hasattr(x, 'meta'):
+ x = x.meta["val"]
+ weight = match.kwargs["weight"]
+ if hasattr(weight, 'meta'):
+ weight = weight.meta["val"]
+ scales = match.kwargs["scales"]
+ if hasattr(scales, 'meta'):
+ scales = scales.meta["val"]
```
cc @chauhang @penguinwu
Here is an example to reproduce the behavior on a machine with an A100 GPU.
Requirements: `torch, transformers, peft`
``` python
from transformers import AutoModelForCausalLM
import peft
import torch
model = AutoModelForCausalLM.from_pretrained(
"casperhansen/llama-3-8b-instruct-awq",
device_map="auto",
)
model = peft.get_peft_model(
model,
peft.LoraConfig(
task_type="CAUSAL_LM"
)
)
torch._dynamo.config.cache_size_limit = 1024
for i, layer in enumerate(model.base_model.model.model.layers):
model.base_model.model.model.layers[i] = torch.compile(layer)
with torch.amp.autocast("cuda"):
model(
input_ids = torch.tensor([[0, 1, 2]]).cuda(),
attention_mask = torch.tensor([[1, 1, 1]]).cuda()
)
```
Output:
```
torch._dynamo.exc.BackendCompilerFailed: backend='inductor' raised:
AttributeError: 'float' object has no attribute 'meta'
Set TORCH_LOGS="+dynamo" and TORCHDYNAMO_VERBOSE=1 for more information
You can suppress this exception and fall back to eager by setting:
import torch._dynamo
torch._dynamo.config.suppress_errors = True
```
| true
|
2,883,640,830
|
[inductor][cpu] maml performance regression in 2025-02-24 nightly release
|
zxd1997066
|
open
|
[
"oncall: pt2",
"oncall: cpu inductor"
] | 1
|
CONTRIBUTOR
|
### 🐛 Describe the bug
<p>amp static shape cpp wrapper</p><table border="1" class="dataframe table">
<thead>
<tr style="text-align: right;">
<th>suite</th>
<th>name</th>
<th>thread</th>
<th>batch_size_new</th>
<th>speed_up_new</th>
<th>inductor_new</th>
<th>eager_new</th>
<th>compilation_latency_new</th>
<th>batch_size_old</th>
<th>speed_up_old</th>
<th>inductor_old</th>
<th>eager_old</th>
<th>compilation_latency_old</th>
<th>Ratio Speedup(New/old)</th>
<th>Eager Ratio(old/new)</th>
<th>Inductor Ratio(old/new)</th>
<th>Compilation_latency_Ratio(old/new)</th>
</tr>
</thead>
<tbody>
<tr>
<td>torchbench</td>
<td>maml</td>
<td>multiple</td>
<td>1</td>
<td>0.754044</td>
<td>0.057128568000000005</td>
<td>0.043077453928992</td>
<td>29.795713</td>
<td>1</td>
<td>0.972093</td>
<td>0.046419061</td>
<td>0.045123644264672996</td>
<td>29.733994</td>
<td>0.78</td>
<td>1.05</td>
<td>0.81</td>
<td>1.0</td>
</tr>
</tbody>
</table>
<p>amp dynamic shape cpp wrapper</p><table border="1" class="dataframe table">
<thead>
<tr style="text-align: right;">
<th>suite</th>
<th>name</th>
<th>thread</th>
<th>batch_size_new</th>
<th>speed_up_new</th>
<th>inductor_new</th>
<th>eager_new</th>
<th>compilation_latency_new</th>
<th>batch_size_old</th>
<th>speed_up_old</th>
<th>inductor_old</th>
<th>eager_old</th>
<th>compilation_latency_old</th>
<th>Ratio Speedup(New/old)</th>
<th>Eager Ratio(old/new)</th>
<th>Inductor Ratio(old/new)</th>
<th>Compilation_latency_Ratio(old/new)</th>
</tr>
</thead>
<tbody>
<tr>
<td>torchbench</td>
<td>maml</td>
<td>single</td>
<td>1</td>
<td>0.82457</td>
<td>0.063849453</td>
<td>0.05264834346021</td>
<td>29.288304</td>
<td>1</td>
<td>0.990209</td>
<td>0.055708685</td>
<td>0.055163241265165</td>
<td>29.193898</td>
<td>0.83</td>
<td>1.05</td>
<td>0.87</td>
<td>1.0</td>
</tr>
</tbody>
</table>
<p>fp32 static shape cpp wrapper</p><table border="1" class="dataframe table">
<thead>
<tr style="text-align: right;">
<th>suite</th>
<th>name</th>
<th>thread</th>
<th>batch_size_new</th>
<th>speed_up_new</th>
<th>inductor_new</th>
<th>eager_new</th>
<th>compilation_latency_new</th>
<th>batch_size_old</th>
<th>speed_up_old</th>
<th>inductor_old</th>
<th>eager_old</th>
<th>compilation_latency_old</th>
<th>Ratio Speedup(New/old)</th>
<th>Eager Ratio(old/new)</th>
<th>Inductor Ratio(old/new)</th>
<th>Compilation_latency_Ratio(old/new)</th>
</tr>
</thead>
<tbody>
<tr>
<td>torchbench</td>
<td>maml</td>
<td>multiple</td>
<td>1</td>
<td>0.741581</td>
<td>0.056735944</td>
<td>0.042074298087464004</td>
<td>32.561322</td>
<td>1</td>
<td>0.977097</td>
<td>0.042780619</td>
<td>0.041800814483043</td>
<td>32.365338</td>
<td>0.76</td>
<td>0.99</td>
<td>0.75</td>
<td>0.99</td>
</tr>
<tr>
<td>torchbench</td>
<td>maml</td>
<td>single</td>
<td>1</td>
<td>0.872544</td>
<td>0.114001951</td>
<td>0.099471718333344</td>
<td>33.230526</td>
<td>1</td>
<td>0.996849</td>
<td>0.099857448</td>
<td>0.099542797181352</td>
<td>32.951363</td>
<td>0.88</td>
<td>1.0</td>
<td>0.88</td>
<td>0.99</td>
</tr>
</tbody>
</table>
<p>fp32 dynamic shape cpp wrapper</p><table border="1" class="dataframe table">
<thead>
<tr style="text-align: right;">
<th>suite</th>
<th>name</th>
<th>thread</th>
<th>batch_size_new</th>
<th>speed_up_new</th>
<th>inductor_new</th>
<th>eager_new</th>
<th>compilation_latency_new</th>
<th>batch_size_old</th>
<th>speed_up_old</th>
<th>inductor_old</th>
<th>eager_old</th>
<th>compilation_latency_old</th>
<th>Ratio Speedup(New/old)</th>
<th>Eager Ratio(old/new)</th>
<th>Inductor Ratio(old/new)</th>
<th>Compilation_latency_Ratio(old/new)</th>
</tr>
</thead>
<tbody>
<tr>
<td>torchbench</td>
<td>maml</td>
<td>multiple</td>
<td>1</td>
<td>0.739025</td>
<td>0.056286659999999995</td>
<td>0.041597248906499996</td>
<td>32.477689</td>
<td>1</td>
<td>0.974488</td>
<td>0.042967458</td>
<td>0.041871272211504</td>
<td>32.547124</td>
<td>0.76</td>
<td>1.01</td>
<td>0.76</td>
<td>1.0</td>
</tr>
<tr>
<td>torchbench</td>
<td>maml</td>
<td>single</td>
<td>1</td>
<td>0.874743</td>
<td>0.114048926</td>
<td>0.099763499676018</td>
<td>33.176391</td>
<td>1</td>
<td>0.996537</td>
<td>0.099328638</td>
<td>0.098984662926606</td>
<td>32.995821</td>
<td>0.88</td>
<td>0.99</td>
<td>0.87</td>
<td>0.99</td>
</tr>
</tbody>
</table>
the bad commit: 75db0fd8a0b4b355dca2ca1426062db6c1bac908
```
/workspace/pytorch# bash inductor_single_run.sh multiple inference performance torchbench maml amp
Testing with inductor.
multi-threads testing....
loading model: 0it [00:00, ?it/s]
cpu eval maml
W0227 06:32:05.505000 6721 torch/_dynamo/variables/tensor.py:910] [8/0] Graph break from `Tensor.item()`, consider setting:
W0227 06:32:05.505000 6721 torch/_dynamo/variables/tensor.py:910] [8/0] torch._dynamo.config.capture_scalar_outputs = True
W0227 06:32:05.505000 6721 torch/_dynamo/variables/tensor.py:910] [8/0] or:
W0227 06:32:05.505000 6721 torch/_dynamo/variables/tensor.py:910] [8/0] env TORCHDYNAMO_CAPTURE_SCALAR_OUTPUTS=1
W0227 06:32:05.505000 6721 torch/_dynamo/variables/tensor.py:910] [8/0] to include these operations in the captured graph.
W0227 06:32:05.505000 6721 torch/_dynamo/variables/tensor.py:910] [8/0]
W0227 06:32:05.505000 6721 torch/_dynamo/variables/tensor.py:910] [8/0] Graph break: from user code at:
W0227 06:32:05.505000 6721 torch/_dynamo/variables/tensor.py:910] [8/0] File "/workspace/benchmark/torchbenchmark/models/maml/meta.py", line 182, in torch_dynamo_resume_in_finetunning_at_171
W0227 06:32:05.505000 6721 torch/_dynamo/variables/tensor.py:910] [8/0] correct = torch.eq(pred_q, y_qry).sum().item()
W0227 06:32:05.505000 6721 torch/_dynamo/variables/tensor.py:910] [8/0]
W0227 06:32:05.505000 6721 torch/_dynamo/variables/tensor.py:910] [8/0]
running benchmark: 100%|█████████████████████████████████████████████████████████████████████████| 50/50 [00:06<00:00, 7.67it/s]
0.747x
WARNING:common:Trying to call the empty_gpu_cache for device: cpu, which is not in list [cuda, xpu]
dev,name,batch_size,speedup,abs_latency,compilation_latency,compression_ratio,eager_peak_mem,dynamo_peak_mem,calls_captured,unique_graphs,graph_breaks,unique_graph_breaks,autograd_captures,autograd_compiles,cudagraph_skips
cpu,maml,1,0.747011,74.174063,40.165075,0.771766,53.307802,69.072486,149,20,12,5,0,0,18
```
the last good commit: eb892cd7687fc083c2e18e8185e69a780b6f06c3
```
/workspace/pytorch# bash inductor_single_run.sh multiple inference performance torchbench maml amp
Testing with inductor.
multi-threads testing....
loading model: 0it [00:00, ?it/s]
cpu eval maml
W0227 06:37:34.624000 9257 torch/_dynamo/variables/tensor.py:910] [8/0] Graph break from `Tensor.item()`, consider setting:
W0227 06:37:34.624000 9257 torch/_dynamo/variables/tensor.py:910] [8/0] torch._dynamo.config.capture_scalar_outputs = True
W0227 06:37:34.624000 9257 torch/_dynamo/variables/tensor.py:910] [8/0] or:
W0227 06:37:34.624000 9257 torch/_dynamo/variables/tensor.py:910] [8/0] env TORCHDYNAMO_CAPTURE_SCALAR_OUTPUTS=1
W0227 06:37:34.624000 9257 torch/_dynamo/variables/tensor.py:910] [8/0] to include these operations in the captured graph.
W0227 06:37:34.624000 9257 torch/_dynamo/variables/tensor.py:910] [8/0]
W0227 06:37:34.624000 9257 torch/_dynamo/variables/tensor.py:910] [8/0] Graph break: from user code at:
W0227 06:37:34.624000 9257 torch/_dynamo/variables/tensor.py:910] [8/0] File "/workspace/benchmark/torchbenchmark/models/maml/meta.py", line 182, in torch_dynamo_resume_in_finetunning_at_171
W0227 06:37:34.624000 9257 torch/_dynamo/variables/tensor.py:910] [8/0] correct = torch.eq(pred_q, y_qry).sum().item()
W0227 06:37:34.624000 9257 torch/_dynamo/variables/tensor.py:910] [8/0]
W0227 06:37:34.624000 9257 torch/_dynamo/variables/tensor.py:910] [8/0]
running benchmark: 100%|█████████████████████████████████████████████████████████████████████████| 50/50 [00:05<00:00, 8.65it/s]
0.972x
WARNING:common:Trying to call the empty_gpu_cache for device: cpu, which is not in list [cuda, xpu]
dev,name,batch_size,speedup,abs_latency,compilation_latency,compression_ratio,eager_peak_mem,dynamo_peak_mem,calls_captured,unique_graphs,graph_breaks,unique_graph_breaks,autograd_captures,autograd_compiles,cudagraph_skips
cpu,maml,1,0.971883,57.020307,22.165622,0.829774,53.143962,64.046285,149,20,12,5,0,0,18
```
### Versions
<p>SW info</p><table border="1" class="dataframe table">
<thead>
<tr style="text-align: right;">
<th>name</th>
<th>target_branch</th>
<th>target_commit</th>
<th>refer_branch</th>
<th>refer_commit</th>
</tr>
</thead>
<tbody>
<tr>
<td>torchbench</td>
<td>main</td>
<td>373ffb19</td>
<td>main</td>
<td>373ffb19</td>
</tr>
<tr>
<td>torch</td>
<td>main</td>
<td>bea72180ed75f522ce4fe5e723bc2112e0874732</td>
<td>main</td>
<td>1677a3101959cd2ea5f811a023a2b8b0b9fc6c18</td>
</tr>
<tr>
<td>torchvision</td>
<td>main</td>
<td>0.19.0a0+d23a6e1</td>
<td>main</td>
<td>0.19.0a0+d23a6e1</td>
</tr>
<tr>
<td>torchtext</td>
<td>main</td>
<td>0.16.0a0+b0ebddc</td>
<td>main</td>
<td>0.16.0a0+b0ebddc</td>
</tr>
<tr>
<td>torchaudio</td>
<td>main</td>
<td>2.6.0a0+c670ad8</td>
<td>main</td>
<td>2.6.0a0+f084f34</td>
</tr>
<tr>
<td>torchdata</td>
<td>main</td>
<td>0.7.1a0+0790338</td>
<td>main</td>
<td>0.7.1a0+0790338</td>
</tr>
<tr>
<td>dynamo_benchmarks</td>
<td>main</td>
<td>nightly</td>
<td>main</td>
<td>nightly</td>
</tr>
</tbody>
</table>
Repro:
[inductor_single_run.sh](https://github.com/chuanqi129/inductor-tools/blob//main/scripts/modelbench/inductor_single_run.sh)
bash inductor_single_run.sh multiple inference performance torchbench maml amp
Suspected guilty commit: https://github.com/pytorch/pytorch/commit/75db0fd8a0b4b355dca2ca1426062db6c1bac908
[torchbench-maml-inference-amp-static-default-multiple-performance-drop_guilty_commit.log](https://github.com/user-attachments/files/19004008/torchbench-maml-inference-amp-static-default-multiple-performance-drop_guilty_commit.log)
cc @chauhang @penguinwu @chuanqi129
| true
|
2,883,629,632
|
Fix atomic operation compatibility for ARMv8-A (Raspberry Pi 4) by adjusting compilation flags
|
maajidkhann
|
closed
|
[
"triaged",
"open source",
"module: arm",
"Merged",
"ciflow/trunk",
"release notes: build",
"topic: bug fixes",
"ciflow/binaries_wheel",
"bug",
"ciflow/linux-aarch64"
] | 21
|
CONTRIBUTOR
|
**Issue:**
* The ldaddal instruction is an AArch64 atomic operation available from ARMv8.1-A onwards.
* Raspberry Pi 4 (Cortex-A72) is ARMv8-A, which does not support ldaddal, leading to failures when running PyTorch built with `-march=armv8.2-a+sve` (a quick runtime check for LSE support is sketched below).
* This led to an issue when running PyTorch on ARMv8-A (Raspberry Pi 4), as unsupported atomic operations were generated.
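For reference, a minimal runtime check (a sketch, not part of this PR) for whether the current AArch64 CPU advertises LSE atomics; on Linux these appear as the `atomics` flag in `/proc/cpuinfo`:
```python
# Sketch: detect whether this AArch64 CPU exposes LSE atomics ("atomics" in the
# Features line of /proc/cpuinfo on Linux). Cortex-A72 (Raspberry Pi 4, ARMv8-A)
# does not list it; ARMv8.1-A and newer cores do.
def has_lse_atomics(cpuinfo_path="/proc/cpuinfo"):
    try:
        with open(cpuinfo_path) as f:
            for line in f:
                if line.lower().startswith("features"):
                    return "atomics" in line.split(":", 1)[1].split()
    except OSError:
        pass
    return False

if __name__ == "__main__":
    print("LSE atomics available:", has_lse_atomics())
```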
**Fix:**
* Updated the build flags to explicitly use **-march=armv8-a+sve**, ensuring GCC and Clang promote it correctly; this resolves the compatibility issue with ARMv8-A while SVE continues to work as before.
* This ensures that PyTorch builds correctly for ARMv8-A platforms (e.g., Raspberry Pi 4) while still enabling SVE for supported hardware.
Test plan:
- Allocate `a1.4xlarge` on AWS
- Run following script using wheel produced by this PR
```python
import torch
def f(x):
return x.sin() + x.cos()
print(torch.__version__)
f_c = torch.jit.script(f)
```
- Observe no crash
```
$ python3 foo.py
2.7.0.dev20250313+cpu
```
- Observe crash with 2.6.0
```
$ python3 foo.py
2.6.0+cpu
Illegal instruction (core dumped)
```
Fixes #146792
cc @malfet @snadampal @milpuz01
| true
|
2,883,606,722
|
Replace `unimplemented` with `unimplemented_v2` in `codegen.py`
|
zeshengzong
|
closed
|
[
"triaged",
"open source",
"Merged",
"ciflow/trunk",
"topic: not user facing",
"module: dynamo"
] | 5
|
CONTRIBUTOR
|
Fixes #147913
- replace `unimplemented` in `codegen.py`
- remove unused import `unimplemented`
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @chauhang @amjames @williamwen42
| true
|
2,883,592,915
|
DISABLED test_disable_ctx_manager (__main__.ContextlibContextManagerTests)
|
pytorch-bot[bot]
|
closed
|
[
"module: rocm",
"triaged",
"module: flaky-tests",
"skipped",
"oncall: pt2",
"module: dynamo"
] | 2
|
NONE
|
Platforms: rocm
This test was disabled because it is failing in CI. See [recent examples](https://hud.pytorch.org/flakytest?name=test_disable_ctx_manager&suite=ContextlibContextManagerTests&limit=100) and the most recent trunk [workflow logs](https://github.com/pytorch/pytorch/runs/37895843797).
Over the past 3 hours, it has been determined flaky in 3 workflow(s) with 3 failures and 3 successes.
**Debugging instructions (after clicking on the recent samples link):**
DO NOT ASSUME THINGS ARE OKAY IF THE CI IS GREEN. We now shield flaky tests from developers so CI will thus be green but it will be harder to parse the logs.
To find relevant log snippets:
1. Click on the workflow logs linked above
2. Click on the Test step of the job so that it is expanded. Otherwise, the grepping will not work.
3. Grep for `test_disable_ctx_manager`
4. There should be several instances run (as flaky tests are rerun in CI) from which you can study the logs.
<details><summary>Sample error message</summary>
```
Traceback (most recent call last):
File "/var/lib/jenkins/pytorch/test/dynamo/test_ctx_manager.py", line 2547, in test_disable_ctx_manager
self.assertEqual(len(eager.graphs), 0)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 4096, in assertEqual
raise error_metas.pop()[0].to_error( # type: ignore[index]
AssertionError: Scalars are not equal!
Expected 0 but got 1.
Absolute difference: 1
Relative difference: inf
To execute this test, run the following from the base repo dir:
PYTORCH_TEST_WITH_ROCM=1 python test/dynamo/test_ctx_manager.py ContextlibContextManagerTests.test_disable_ctx_manager
This message can be suppressed by setting PYTORCH_PRINT_REPRO_ON_FAILURE=0
```
</details>
Test file path: `dynamo/test_ctx_manager.py`
cc @jeffdaily @sunway513 @jithunnair-amd @pruthvistony @ROCmSupport @dllehr-amd @jataylo @hongxiayang @naromero77amd @clee2000 @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @chauhang @amjames
| true
|
2,883,592,813
|
DISABLED test_slice_dynamic (__main__.AutoFunctionalizeTests)
|
pytorch-bot[bot]
|
closed
|
[
"high priority",
"triaged",
"module: flaky-tests",
"skipped",
"module: functionalization",
"oncall: pt2",
"module: pt2-dispatcher"
] | 5
|
NONE
|
Platforms: asan, linux, mac, macos, rocm, slow
This test was disabled because it is failing in CI. See [recent examples](https://hud.pytorch.org/flakytest?name=test_slice_dynamic&suite=AutoFunctionalizeTests&limit=100) and the most recent trunk [workflow logs](https://github.com/pytorch/pytorch/runs/37895844239).
Over the past 3 hours, it has been determined flaky in 25 workflow(s) with 50 failures and 25 successes.
**Debugging instructions (after clicking on the recent samples link):**
DO NOT ASSUME THINGS ARE OKAY IF THE CI IS GREEN. We now shield flaky tests from developers so CI will thus be green but it will be harder to parse the logs.
To find relevant log snippets:
1. Click on the workflow logs linked above
2. Click on the Test step of the job so that it is expanded. Otherwise, the grepping will not work.
3. Grep for `test_slice_dynamic`
4. There should be several instances run (as flaky tests are rerun in CI) from which you can study the logs.
<details><summary>Sample error message</summary>
```
Traceback (most recent call last):
File "/Users/ec2-user/runner/_work/pytorch/pytorch/test/inductor/test_auto_functionalize.py", line 1357, in test_slice_dynamic
self.test_slice(_dynamic=True)
File "/Users/ec2-user/runner/_work/_temp/conda_environment_13557605677/lib/python3.9/contextlib.py", line 79, in inner
return func(*args, **kwds)
File "/Users/ec2-user/runner/_work/pytorch/pytorch/test/inductor/test_auto_functionalize.py", line 1254, in test_slice
torch.library.define(
File "/Users/ec2-user/runner/_work/_temp/conda_environment_13557605677/lib/python3.9/functools.py", line 888, in wrapper
return dispatch(args[0].__class__)(*args, **kw)
File "/Users/ec2-user/runner/_work/_temp/conda_environment_13557605677/lib/python3.9/site-packages/torch/library.py", line 531, in define
lib.define(name + schema, alias_analysis="", tags=tags)
File "/Users/ec2-user/runner/_work/_temp/conda_environment_13557605677/lib/python3.9/site-packages/torch/library.py", line 172, in define
result = self.m.define(schema, alias_analysis, tuple(tags))
RuntimeError: Tried to register an operator (mylib::foo(Tensor(a!) x, Tensor(b!) y) -> ()) with the same name and overload name multiple times. Each overload's schema should only be registered with a single call to def(). Duplicate registration: registered at /dev/null:119. Original registration: registered at /dev/null:203
To execute this test, run the following from the base repo dir:
python test/inductor/test_auto_functionalize.py AutoFunctionalizeTests.test_slice_dynamic
This message can be suppressed by setting PYTORCH_PRINT_REPRO_ON_FAILURE=0
```
</details>
Test file path: `inductor/test_auto_functionalize.py`
cc @ezyang @gchanan @zou3519 @kadeng @msaroufim @clee2000 @bdhirsh @chauhang @penguinwu
| true
|
2,883,528,347
|
use identity op for alpha=inf in torch.celu and quantized_celu
|
redwrasse
|
open
|
[
"module: cpu",
"triaged",
"open source",
"Stale",
"release notes: quantization"
] | 2
|
CONTRIBUTOR
|
Fixes #148065
This PR short-circuits the celu and quantized_celu ops to simply return the input when alpha=inf, so celu(x, inf) is defined for all x.
```
import torch
# (same for torch.ao.nn.quantized.functional.celu)
# Before
# -----------
x = torch.tensor(2.)
print(torch.celu(x, torch.inf))
# tensor(2.)
print(torch.celu(-x, torch.inf))
# tensor(nan)
x = torch.tensor(0.)
print(torch.celu(x, torch.inf))
# tensor(nan)
# After
# --------
x = torch.tensor(2.)
print(torch.celu(x, torch.inf))
# tensor(2.)
print(torch.celu(-x, torch.inf))
# tensor(-2.)
x = torch.tensor(0.)
print(torch.celu(x, torch.inf))
# tensor(0.)
```
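For illustration, a Python-level sketch with the same semantics as the short-circuit described above (the actual change lives in the ATen and quantized kernels; the wrapper name here is made up):
```python
# Sketch only: mirrors the alpha=inf short-circuit at the Python level.
import math

import torch


def celu_inf_safe(x: torch.Tensor, alpha: float = 1.0) -> torch.Tensor:
    if math.isinf(alpha) and alpha > 0:
        return x.clone()          # identity: lim_{alpha->inf} celu(x, alpha) = x
    return torch.celu(x, alpha)   # otherwise defer to the existing op


print(celu_inf_safe(torch.tensor(-2.0), math.inf))  # tensor(-2.) instead of nan
```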
cc @jgong5 @mingfeima @XiaobingSuper @sanchitintel @ashokei @jingxu10 @jerryzh168
| true
|
2,883,523,874
|
Support alpha=inf consistently for torch.celu
|
redwrasse
|
open
|
[
"module: nn",
"triaged",
"module: python frontend"
] | 0
|
CONTRIBUTOR
|
The celu activation function introduced in https://arxiv.org/pdf/1704.07483 is described as being C1 differentiable across values of alpha.
- for alpha -> inf it converges to the identity op (f(x) = x)
- for alpha -> 0+ it converges to relu(x) = max(0, x)
The PyTorch [celu](https://github.com/pytorch/pytorch/blob/main/aten/src/ATen/native/Activation.cpp#L538-L549) (and related [quantized_celu](https://github.com/pytorch/pytorch/blob/main/aten/src/ATen/native/quantized/cpu/qelu.cpp#L24)) implementations appear to call a parameterized elu, following the celu definition, so it looks like they will have these properties within the bounds of overflow.
On the other hand, I am not sure what the intended PyTorch behavior is for alpha=0 and alpha=inf, but I think it would be nice to have it consistent.
At alpha=0 it raises a `RuntimeError` (ZeroDivisionError), which is fine, though it could instead just short-circuit to return relu(x) in that case.
At alpha=torch.inf, positive values of x appear to return x, while non-positive values of x return `torch.nan`:
```
x = torch.tensor(2.)
torch.celu(x, torch.inf)
# tensor(2.)
torch.celu(-x, torch.inf)
# tensor(nan)
x = torch.tensor(0.)
print(torch.celu(x, torch.inf))
# tensor(nan)
print(torch.celu(-x, torch.inf))
# torch.nan
```
This seems inconsistent: since the celu(x, alpha=torch.inf) implementation already returns f(x) = x on the positive domain, I'd suggest making it a C1 function for the alpha=torch.inf case by just returning the identity op there.
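To illustrate the limit numerically (a quick sketch, not from the original report): for large finite alpha the op is already essentially the identity, and only alpha=torch.inf breaks the pattern.
```python
import torch

x = torch.tensor([-2.0, 0.0, 2.0])
for alpha in (1.0, 10.0, 1e3, 1e6):
    print(alpha, torch.celu(x, alpha))
# The outputs approach x itself as alpha grows; alpha=torch.inf instead yields
# nan for the non-positive entries, which is the inconsistency described above.
```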
cc @albanD @mruberry @jbschlosser @walterddr @mikaylagawarecki
| true
|
2,883,466,389
|
[inductor] [silence] `torch.cdist` outputs inconsistent results with eager
|
shaoyuyoung
|
closed
|
[
"oncall: pt2",
"module: inductor"
] | 2
|
CONTRIBUTOR
|
### 🐛 Describe the bug
**symptom description**: as `H` and `W` increase, the error is amplified (a scaling check is sketched after the repro below).
**device backend**: both Triton and CPP
**repro**:
```python
import torch
import torch.nn as nn
import torch.nn.functional as F
from torch._inductor import config
config.fallback_random = True
torch.set_grad_enabled(False)
torch.manual_seed(0)
class Model(torch.nn.Module):
def __init__(self):
super().__init__()
def forward(self, x):
x = torch.cdist(x, x, p=2)
return x
model = Model()
x = torch.randn(2, 3, 1024, 1024)
inputs = [x]
def run_test(model, inputs, backend):
if backend != "eager":
model = torch.compile(model, backend=backend)
torch.manual_seed(0)
output = model(*inputs)
return output
output = run_test(model, inputs, 'eager')
c_output = run_test(model, inputs, 'inductor')
print(torch.allclose(output, c_output, 1e-3, 1e-3, equal_nan=True))
print(torch.max(torch.abs(output - c_output)))
```
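A follow-up sketch (not part of the original repro; it reuses `model` and `run_test` from above) to see how the maximum absolute error scales with the matrix size:
```python
import torch

# Assumes the Model/run_test definitions from the repro above are in scope.
for n in (64, 256, 1024):
    xs = [torch.randn(2, 3, n, n)]
    diff = (run_test(model, xs, "eager") - run_test(model, xs, "inductor")).abs().max()
    print(f"n={n}: max abs diff = {diff.item():.6f}")
```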
### Error logs
```
False
tensor(0.0221)
```
### Versions
nightly 20250225
cc @chauhang @penguinwu @voznesenskym @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @amjames @aakhundov
| true
|
2,883,429,107
|
Add needs_exact_strides operator tag for Inductor to force exact strides
|
zou3519
|
closed
|
[
"Merged",
"release notes: fx",
"fx",
"module: inductor",
"ciflow/inductor",
"keep-going"
] | 3
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #150511
* #148104
* #150495
* #148092
* #148091
* __->__ #148063
* #148046
Inductor will force exact strides on a custom operator tagged with
needs_exact_strides. I'll make this the default in a follow-up PR.
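A hypothetical usage sketch (it assumes the tag is exposed as `torch.Tag.needs_exact_strides` once this PR lands, and `mylib::my_op` is a made-up custom op; the registration details for a real op may differ):
```python
import torch

# Assumption: the new tag can be passed to torch.library.Library.define the same
# way other torch.Tag values are, asking Inductor to preserve exact input strides.
lib = torch.library.Library("mylib", "FRAGMENT")
lib.define(
    "my_op(Tensor x) -> Tensor",
    tags=(torch.Tag.needs_exact_strides,),
)
```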
Test Plan:
- tests
cc @ezyang @SherlockNoMad @EikanWang @jgong5 @wenzhe-nrv @voznesenskym @penguinwu @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @jiayisunx @ipiszy @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
| true
|
2,883,429,040
|
Add torch._library.utils.normalize_args_kwargs
|
zou3519
|
open
|
[
"topic: not user facing"
] | 3
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #148104
* #148092
* #148091
* #148063
* #148046
* __->__ #148062
Normalizes (args, kwargs) to the PyTorch dispatcher calling convention.
I need some sort of normalization in the next PR.
Test Plan:
- new tests
| true
|
2,883,409,164
|
[inductor] [silence] `nn.ConvTranspose2d-F.dropout` outputs inconsistent results with eager
|
shaoyuyoung
|
open
|
[
"triaged",
"module: correctness (silent)",
"oncall: pt2",
"module: inductor"
] | 4
|
CONTRIBUTOR
|
### 🐛 Describe the bug
**symptom description**: when using `nn.ConvTranspose2d` and `F.dropout` together, the outputs differ from eager (a dropout-mask check is sketched after the repro below).
**device backend**: both Triton and CPP
**note**: I have used `config.fallback_random = True` and `torch.manual_seed(0)`
**repro**
```python
import torch
import torch.nn as nn
import torch.nn.functional as F
from torch._inductor import config
config.fallback_random = True
torch.set_grad_enabled(False)
torch.manual_seed(0)
class Model(torch.nn.Module):
def __init__(self):
super(Model, self).__init__()
self.conv_transpose = torch.nn.ConvTranspose2d(in_channels=3, out_channels=6, kernel_size=3, stride=2, padding=1)
def forward(self, x):
torch.manual_seed(0)
x = self.conv_transpose(x)
x = F.dropout(x, p=0.5)
return x
model = Model().eval()
x = torch.randn(2, 3, 10, 10)
inputs = [x]
def run_test(model, inputs, backend):
if backend != "eager":
model = torch.compile(model, backend=backend)
torch.manual_seed(0)
output = model(*inputs)
return output
output = run_test(model, inputs, 'eager')
c_output = run_test(model, inputs, 'inductor')
print(torch.allclose(output, c_output, 1e-3, 1e-3, equal_nan=True))
print(torch.max(torch.abs(output - c_output)))
```
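A small follow-up check (a sketch, not from the report; it reuses `output` and `c_output` from the repro above) to see whether the dropout masks themselves diverge:
```python
# The zero pattern of the output is (up to exact zeros from the conv itself,
# which are unlikely) the dropout mask, so comparing it separates the RNG path
# from the conv_transpose computation.
mask_eager = output == 0
mask_compiled = c_output == 0
print("dropout masks identical:", torch.equal(mask_eager, mask_compiled))
print("mask mismatch count:", (mask_eager ^ mask_compiled).sum().item())
```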
### Error logs
```
False
tensor(1.7444)
```
### Versions
nightly 20250225
cc @chauhang @penguinwu @voznesenskym @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @amjames @aakhundov
| true
|
2,883,404,144
|
Smoke Test skip cuda.gds on windows
|
atalman
|
closed
|
[
"Merged",
"topic: not user facing"
] | 3
|
CONTRIBUTOR
|
Follow-up after: https://github.com/pytorch/pytorch/pull/147120
Cufile was enabled only on Linux: https://pypi.org/project/nvidia-cufile-cu12/#files
Fixes validation workflow failures: https://github.com/pytorch/test-infra/actions/runs/13558218752/job/37896578837
```
File "C:\Jenkins\Miniconda3\envs\conda-env-13558218752\lib\site-packages\torch\cuda\gds.py", line 105, in __init__
raise RuntimeError("GdsFile is not supported on this platform.")
RuntimeError: GdsFile is not supported on this platform.
Exception ignored in: <function GdsFile.__del__ at 0x000001772B5003A0>
Traceback (most recent call last):
File "C:\Jenkins\Miniconda3\envs\conda-env-13558218752\lib\site-packages\torch\cuda\gds.py", line 113, in __del__
if self.handle is not None:
AttributeError: 'GdsFile' object has no attribute 'handle'
```
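A minimal sketch of the kind of platform guard this implies for the smoke test (assumed shape only, not the actual test-infra change):
```python
import sys

import torch

# GDS / cufile wheels only exist for Linux, so only touch the API there.
if sys.platform == "linux" and torch.cuda.is_available():
    from torch.cuda.gds import GdsFile  # noqa: F401
    print("cuda.gds smoke check would run here")
else:
    print("skipping cuda.gds smoke check on", sys.platform)
```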
| true
|
2,883,396,473
|
[PT2] Port fuse_split_getitem_squeeze to PT2 pre_grad passes
|
huxintong
|
closed
|
[
"fb-exported",
"Merged",
"ciflow/trunk",
"topic: not user facing",
"module: inductor",
"ciflow/inductor"
] | 4
|
CONTRIBUTOR
|
Summary: put it as an add_pass option
Reviewed By: frank-wei
Differential Revision: D68909559
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
| true
|
2,883,386,578
|
[inductor] [cpu] `torch.scatter` throws `AttributeError: 'int' object has no attribute 'find'` on CPP backend
|
shaoyuyoung
|
closed
|
[
"triaged",
"oncall: pt2",
"oncall: cpu inductor"
] | 3
|
CONTRIBUTOR
|
### 🐛 Describe the bug
**symptom description**: I have done some ablations; this seems to be a minified repro that triggers the `AttributeError`. I am unsure what the inductor CPP backend optimizes in the `F.gumbel_softmax`-`torch.where`-`torch.scatter` chain (an ablation sketch follows the repro below).
**device backend**: CPP backend only
**repro**
```python
import torch
import torch.nn as nn
import torch.nn.functional as F
from torch._inductor import config
config.fallback_random = True
torch.set_grad_enabled(False)
class Model(torch.nn.Module):
def __init__(self):
super(Model, self).__init__()
def forward(self, x):
x = F.gumbel_softmax(x, tau=1.0, hard=True)
x = torch.where(x > 0.5, x, torch.zeros_like(x))
x = torch.scatter(x, dim=1, index=torch.ones(1, 2, dtype=torch.long), src=torch.ones_like(x))
return x
model = Model()
x = torch.randn(1, 2)
inputs = [x]
def run_test(model, inputs, backend):
torch.manual_seed(0)
if backend != "eager":
model = torch.compile(model, backend=backend)
try:
output = model(*inputs)
print(f"succeed on {backend}")
except Exception as e:
print(e)
run_test(model, inputs, 'eager')
run_test(model, inputs, 'inductor')
```
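An ablation sketch (not from the report; it reuses the imports, `inputs`, and `run_test` from above) that drops the final `torch.scatter` to check whether all three ops are needed to trigger the failure:
```python
# Same chain as the repro but without the scatter; if this compiles cleanly on
# the CPP backend, the failure is tied to adding scatter after gumbel_softmax/where.
class NoScatter(torch.nn.Module):
    def forward(self, x):
        x = F.gumbel_softmax(x, tau=1.0, hard=True)
        return torch.where(x > 0.5, x, torch.zeros_like(x))

run_test(NoScatter(), inputs, "inductor")
```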
### Error logs
```
succeed on eager
AttributeError: 'int' object has no attribute 'find'
```
### Versions
nightly 20250225
cc @chauhang @penguinwu
| true
|
2,883,331,973
|
DISABLED test_builtin_score_mods_bfloat16_score_mod0_head_dims0 (__main__.TestFlexDecoding)
|
pytorch-bot[bot]
|
closed
|
[
"module: rocm",
"triaged",
"module: flaky-tests",
"skipped",
"oncall: pt2",
"module: inductor"
] | 2
|
NONE
|
Platforms: rocm
This test was disabled because it is failing in CI. See [recent examples](https://hud.pytorch.org/flakytest?name=test_builtin_score_mods_bfloat16_score_mod0_head_dims0&suite=TestFlexDecoding&limit=100) and the most recent trunk [workflow logs](https://github.com/pytorch/pytorch/runs/37883438465).
Over the past 3 hours, it has been determined flaky in 4 workflow(s) with 4 failures and 4 successes.
**Debugging instructions (after clicking on the recent samples link):**
DO NOT ASSUME THINGS ARE OKAY IF THE CI IS GREEN. We now shield flaky tests from developers so CI will thus be green but it will be harder to parse the logs.
To find relevant log snippets:
1. Click on the workflow logs linked above
2. Click on the Test step of the job so that it is expanded. Otherwise, the grepping will not work.
3. Grep for `test_builtin_score_mods_bfloat16_score_mod0_head_dims0`
4. There should be several instances run (as flaky tests are rerun in CI) from which you can study the logs.
<details><summary>Sample error message</summary>
```
Traceback (most recent call last):
File "/var/lib/jenkins/pytorch/test/inductor/test_flex_decoding.py", line 636, in test_builtin_score_mods
self.run_test(score_mod, dtype, Q_H=Hq, KV_H=Hkv)
File "/var/lib/jenkins/pytorch/test/inductor/test_flex_decoding.py", line 323, in run_test
self._check_out(
File "/var/lib/jenkins/pytorch/test/inductor/test_flex_decoding.py", line 280, in _check_out
self._check_equal(golden_out, ref_out, compiled_out, fudge_factor, "Out")
File "/var/lib/jenkins/pytorch/test/inductor/test_flex_decoding.py", line 246, in _check_equal
self.assertTrue(False, "Output/Grad with NaN")
File "/opt/conda/envs/py_3.10/lib/python3.10/unittest/case.py", line 687, in assertTrue
raise self.failureException(msg)
AssertionError: False is not true : Output/Grad with NaN
To execute this test, run the following from the base repo dir:
PYTORCH_TEST_WITH_ROCM=1 python test/inductor/test_flex_decoding.py TestFlexDecoding.test_builtin_score_mods_bfloat16_score_mod0_head_dims0
This message can be suppressed by setting PYTORCH_PRINT_REPRO_ON_FAILURE=0
```
</details>
Test file path: `inductor/test_flex_decoding.py`
cc @jeffdaily @sunway513 @jithunnair-amd @pruthvistony @ROCmSupport @dllehr-amd @jataylo @hongxiayang @naromero77amd @clee2000 @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
| true
|
2,883,331,624
|
DISABLED test_nonstrict_trace_pre_existing_custom_class_with_side_effects (__main__.DecoratorTests)
|
pytorch-bot[bot]
|
closed
|
[
"triaged",
"module: flaky-tests",
"skipped",
"oncall: pt2",
"module: dynamo"
] | 3
|
NONE
|
Platforms: linux, mac, macos
This test was disabled because it is failing in CI. See [recent examples](https://hud.pytorch.org/flakytest?name=test_nonstrict_trace_pre_existing_custom_class_with_side_effects&suite=DecoratorTests&limit=100) and the most recent trunk [workflow logs](https://github.com/pytorch/pytorch/runs/37888370816).
Over the past 3 hours, it has been determined flaky in 3 workflow(s) with 3 failures and 3 successes.
**Debugging instructions (after clicking on the recent samples link):**
DO NOT ASSUME THINGS ARE OKAY IF THE CI IS GREEN. We now shield flaky tests from developers so CI will thus be green but it will be harder to parse the logs.
To find relevant log snippets:
1. Click on the workflow logs linked above
2. Click on the Test step of the job so that it is expanded. Otherwise, the grepping will not work.
3. Grep for `test_nonstrict_trace_pre_existing_custom_class_with_side_effects`
4. There should be several instances run (as flaky tests are rerun in CI) from which you can study the logs.
<details><summary>Sample error message</summary>
Truncated for length
```
in helper
subspecs = [helper(child, leaves) for child in children]
File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/utils/_pytree.py", line 1016, in <listcomp>
subspecs = [helper(child, leaves) for child in children]
File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/utils/_pytree.py", line 1013, in helper
children, context = flatten_fn(node)
File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/utils/_pytree.py", line 1020, in tree_flatten
treespec = helper(tree, leaves)
File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/utils/_pytree.py", line 1016, in helper
subspecs = [helper(child, leaves) for child in children]
File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/utils/_pytree.py", line 1016, in <listcomp>
subspecs = [helper(child, leaves) for child in children]
File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/utils/_pytree.py", line 1016, in helper
subspecs = [helper(child, leaves) for child in children]
File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/utils/_pytree.py", line 1016, in <listcomp>
subspecs = [helper(child, leaves) for child in children]
File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/utils/_pytree.py", line 1013, in helper
children, context = flatten_fn(node)
File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/utils/_pytree.py", line 1020, in tree_flatten
treespec = helper(tree, leaves)
File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/utils/_pytree.py", line 1016, in helper
subspecs = [helper(child, leaves) for child in children]
File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/utils/_pytree.py", line 1016, in <listcomp>
subspecs = [helper(child, leaves) for child in children]
File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/utils/_pytree.py", line 1016, in helper
subspecs = [helper(child, leaves) for child in children]
File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/utils/_pytree.py", line 1016, in <listcomp>
subspecs = [helper(child, leaves) for child in children]
File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/utils/_pytree.py", line 1013, in helper
children, context = flatten_fn(node)
File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/utils/_pytree.py", line 1020, in tree_flatten
treespec = helper(tree, leaves)
File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/utils/_pytree.py", line 1016, in helper
subspecs = [helper(child, leaves) for child in children]
File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/utils/_pytree.py", line 1016, in <listcomp>
subspecs = [helper(child, leaves) for child in children]
File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/utils/_pytree.py", line 1016, in helper
subspecs = [helper(child, leaves) for child in children]
File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/utils/_pytree.py", line 1016, in <listcomp>
subspecs = [helper(child, leaves) for child in children]
File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/utils/_pytree.py", line 1013, in helper
children, context = flatten_fn(node)
File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/utils/_pytree.py", line 1020, in tree_flatten
treespec = helper(tree, leaves)
File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/utils/_pytree.py", line 1016, in helper
subspecs = [helper(child, leaves) for child in children]
File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/utils/_pytree.py", line 1016, in <listcomp>
subspecs = [helper(child, leaves) for child in children]
File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/utils/_pytree.py", line 1016, in helper
subspecs = [helper(child, leaves) for child in children]
File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/utils/_pytree.py", line 1016, in <listcomp>
subspecs = [helper(child, leaves) for child in children]
File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/utils/_pytree.py", line 1013, in helper
children, context = flatten_fn(node)
File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/utils/_pytree.py", line 1020, in tree_flatten
treespec = helper(tree, leaves)
File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/utils/_pytree.py", line 1016, in helper
subspecs = [helper(child, leaves) for child in children]
File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/utils/_pytree.py", line 1016, in <listcomp>
subspecs = [helper(child, leaves) for child in children]
File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/utils/_pytree.py", line 1016, in helper
subspecs = [helper(child, leaves) for child in children]
File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/utils/_pytree.py", line 1016, in <listcomp>
subspecs = [helper(child, leaves) for child in children]
File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/utils/_pytree.py", line 1013, in helper
children, context = flatten_fn(node)
File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/utils/_pytree.py", line 1020, in tree_flatten
treespec = helper(tree, leaves)
File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/utils/_pytree.py", line 1016, in helper
subspecs = [helper(child, leaves) for child in children]
File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/utils/_pytree.py", line 1016, in <listcomp>
subspecs = [helper(child, leaves) for child in children]
File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/utils/_pytree.py", line 1016, in helper
subspecs = [helper(child, leaves) for child in children]
File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/utils/_pytree.py", line 1016, in <listcomp>
subspecs = [helper(child, leaves) for child in children]
File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/utils/_pytree.py", line 1013, in helper
children, context = flatten_fn(node)
File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/utils/_pytree.py", line 1020, in tree_flatten
treespec = helper(tree, leaves)
File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/utils/_pytree.py", line 1016, in helper
subspecs = [helper(child, leaves) for child in children]
File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/utils/_pytree.py", line 1016, in <listcomp>
subspecs = [helper(child, leaves) for child in children]
File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/utils/_pytree.py", line 1016, in helper
subspecs = [helper(child, leaves) for child in children]
File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/utils/_pytree.py", line 1016, in <listcomp>
subspecs = [helper(child, leaves) for child in children]
File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/utils/_pytree.py", line 1013, in helper
children, context = flatten_fn(node)
File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/utils/_pytree.py", line 1020, in tree_flatten
treespec = helper(tree, leaves)
File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/utils/_pytree.py", line 1016, in helper
subspecs = [helper(child, leaves) for child in children]
File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/utils/_pytree.py", line 1016, in <listcomp>
subspecs = [helper(child, leaves) for child in children]
File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/utils/_pytree.py", line 1016, in helper
subspecs = [helper(child, leaves) for child in children]
File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/utils/_pytree.py", line 1016, in <listcomp>
subspecs = [helper(child, leaves) for child in children]
File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/utils/_pytree.py", line 1013, in helper
children, context = flatten_fn(node)
File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/utils/_pytree.py", line 1020, in tree_flatten
treespec = helper(tree, leaves)
File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/utils/_pytree.py", line 1016, in helper
subspecs = [helper(child, leaves) for child in children]
File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/utils/_pytree.py", line 1016, in <listcomp>
subspecs = [helper(child, leaves) for child in children]
File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/utils/_pytree.py", line 1016, in helper
subspecs = [helper(child, leaves) for child in children]
File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/utils/_pytree.py", line 1016, in <listcomp>
subspecs = [helper(child, leaves) for child in children]
File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/utils/_pytree.py", line 1013, in helper
children, context = flatten_fn(node)
File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/utils/_pytree.py", line 1020, in tree_flatten
treespec = helper(tree, leaves)
File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/utils/_pytree.py", line 1016, in helper
subspecs = [helper(child, leaves) for child in children]
File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/utils/_pytree.py", line 1016, in <listcomp>
subspecs = [helper(child, leaves) for child in children]
File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/utils/_pytree.py", line 1016, in helper
subspecs = [helper(child, leaves) for child in children]
File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/utils/_pytree.py", line 1016, in <listcomp>
subspecs = [helper(child, leaves) for child in children]
File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/utils/_pytree.py", line 1013, in helper
children, context = flatten_fn(node)
File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/utils/_pytree.py", line 1020, in tree_flatten
treespec = helper(tree, leaves)
File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/utils/_pytree.py", line 1016, in helper
subspecs = [helper(child, leaves) for child in children]
File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/utils/_pytree.py", line 1016, in <listcomp>
subspecs = [helper(child, leaves) for child in children]
File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/utils/_pytree.py", line 1016, in helper
subspecs = [helper(child, leaves) for child in children]
File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/utils/_pytree.py", line 1016, in <listcomp>
subspecs = [helper(child, leaves) for child in children]
File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/utils/_pytree.py", line 1013, in helper
children, context = flatten_fn(node)
File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/utils/_pytree.py", line 1020, in tree_flatten
treespec = helper(tree, leaves)
File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/utils/_pytree.py", line 1016, in helper
subspecs = [helper(child, leaves) for child in children]
File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/utils/_pytree.py", line 1016, in <listcomp>
subspecs = [helper(child, leaves) for child in children]
File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/utils/_pytree.py", line 1016, in helper
subspecs = [helper(child, leaves) for child in children]
File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/utils/_pytree.py", line 1016, in <listcomp>
subspecs = [helper(child, leaves) for child in children]
File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/utils/_pytree.py", line 1013, in helper
children, context = flatten_fn(node)
File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/utils/_pytree.py", line 1020, in tree_flatten
treespec = helper(tree, leaves)
File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/utils/_pytree.py", line 1016, in helper
subspecs = [helper(child, leaves) for child in children]
File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/utils/_pytree.py", line 1016, in <listcomp>
subspecs = [helper(child, leaves) for child in children]
File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/utils/_pytree.py", line 1016, in helper
subspecs = [helper(child, leaves) for child in children]
File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/utils/_pytree.py", line 1016, in <listcomp>
subspecs = [helper(child, leaves) for child in children]
File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/utils/_pytree.py", line 1013, in helper
children, context = flatten_fn(node)
File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/utils/_pytree.py", line 1020, in tree_flatten
treespec = helper(tree, leaves)
File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/utils/_pytree.py", line 1016, in helper
subspecs = [helper(child, leaves) for child in children]
File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/utils/_pytree.py", line 1016, in <listcomp>
subspecs = [helper(child, leaves) for child in children]
File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/utils/_pytree.py", line 1016, in helper
subspecs = [helper(child, leaves) for child in children]
File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/utils/_pytree.py", line 1016, in <listcomp>
subspecs = [helper(child, leaves) for child in children]
File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/utils/_pytree.py", line 1013, in helper
children, context = flatten_fn(node)
File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/utils/_pytree.py", line 1020, in tree_flatten
treespec = helper(tree, leaves)
File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/utils/_pytree.py", line 1016, in helper
subspecs = [helper(child, leaves) for child in children]
File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/utils/_pytree.py", line 1016, in <listcomp>
subspecs = [helper(child, leaves) for child in children]
File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/utils/_pytree.py", line 1016, in helper
subspecs = [helper(child, leaves) for child in children]
File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/utils/_pytree.py", line 1016, in <listcomp>
subspecs = [helper(child, leaves) for child in children]
File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/utils/_pytree.py", line 1013, in helper
children, context = flatten_fn(node)
File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/utils/_pytree.py", line 1020, in tree_flatten
treespec = helper(tree, leaves)
File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/utils/_pytree.py", line 1016, in helper
subspecs = [helper(child, leaves) for child in children]
File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/utils/_pytree.py", line 1016, in <listcomp>
subspecs = [helper(child, leaves) for child in children]
File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/utils/_pytree.py", line 1016, in helper
subspecs = [helper(child, leaves) for child in children]
File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/utils/_pytree.py", line 1016, in <listcomp>
subspecs = [helper(child, leaves) for child in children]
File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/utils/_pytree.py", line 1013, in helper
children, context = flatten_fn(node)
File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/utils/_pytree.py", line 1020, in tree_flatten
treespec = helper(tree, leaves)
File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/utils/_pytree.py", line 1016, in helper
subspecs = [helper(child, leaves) for child in children]
File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/utils/_pytree.py", line 1016, in <listcomp>
subspecs = [helper(child, leaves) for child in children]
File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/utils/_pytree.py", line 1016, in helper
subspecs = [helper(child, leaves) for child in children]
File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/utils/_pytree.py", line 1016, in <listcomp>
subspecs = [helper(child, leaves) for child in children]
File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/utils/_pytree.py", line 1013, in helper
children, context = flatten_fn(node)
File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/utils/_pytree.py", line 1020, in tree_flatten
treespec = helper(tree, leaves)
File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/utils/_pytree.py", line 1016, in helper
subspecs = [helper(child, leaves) for child in children]
File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/utils/_pytree.py", line 1016, in <listcomp>
subspecs = [helper(child, leaves) for child in children]
File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/utils/_pytree.py", line 1016, in helper
subspecs = [helper(child, leaves) for child in children]
File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/utils/_pytree.py", line 1016, in <listcomp>
subspecs = [helper(child, leaves) for child in children]
File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/utils/_pytree.py", line 1013, in helper
children, context = flatten_fn(node)
File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/utils/_pytree.py", line 1020, in tree_flatten
treespec = helper(tree, leaves)
File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/utils/_pytree.py", line 1016, in helper
subspecs = [helper(child, leaves) for child in children]
File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/utils/_pytree.py", line 1016, in <listcomp>
subspecs = [helper(child, leaves) for child in children]
File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/utils/_pytree.py", line 1016, in helper
subspecs = [helper(child, leaves) for child in children]
File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/utils/_pytree.py", line 1016, in <listcomp>
subspecs = [helper(child, leaves) for child in children]
File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/utils/_pytree.py", line 1013, in helper
children, context = flatten_fn(node)
File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/utils/_pytree.py", line 1020, in tree_flatten
treespec = helper(tree, leaves)
File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/utils/_pytree.py", line 1016, in helper
subspecs = [helper(child, leaves) for child in children]
File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/utils/_pytree.py", line 1016, in <listcomp>
subspecs = [helper(child, leaves) for child in children]
File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/utils/_pytree.py", line 1016, in helper
subspecs = [helper(child, leaves) for child in children]
File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/utils/_pytree.py", line 1016, in <listcomp>
subspecs = [helper(child, leaves) for child in children]
File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/utils/_pytree.py", line 1013, in helper
children, context = flatten_fn(node)
File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/utils/_pytree.py", line 1020, in tree_flatten
treespec = helper(tree, leaves)
File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/utils/_pytree.py", line 1016, in helper
subspecs = [helper(child, leaves) for child in children]
File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/utils/_pytree.py", line 1016, in <listcomp>
subspecs = [helper(child, leaves) for child in children]
File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/utils/_pytree.py", line 1016, in helper
subspecs = [helper(child, leaves) for child in children]
File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/utils/_pytree.py", line 1016, in <listcomp>
subspecs = [helper(child, leaves) for child in children]
File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/utils/_pytree.py", line 1013, in helper
children, context = flatten_fn(node)
File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/utils/_pytree.py", line 1020, in tree_flatten
treespec = helper(tree, leaves)
File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/utils/_pytree.py", line 1016, in helper
subspecs = [helper(child, leaves) for child in children]
File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/utils/_pytree.py", line 1016, in <listcomp>
subspecs = [helper(child, leaves) for child in children]
File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/utils/_pytree.py", line 1016, in helper
subspecs = [helper(child, leaves) for child in children]
File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/utils/_pytree.py", line 1016, in <listcomp>
subspecs = [helper(child, leaves) for child in children]
File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/utils/_pytree.py", line 1013, in helper
children, context = flatten_fn(node)
File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/utils/_pytree.py", line 1020, in tree_flatten
treespec = helper(tree, leaves)
File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/utils/_pytree.py", line 1016, in helper
subspecs = [helper(child, leaves) for child in children]
File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/utils/_pytree.py", line 1016, in <listcomp>
subspecs = [helper(child, leaves) for child in children]
File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/utils/_pytree.py", line 1016, in helper
subspecs = [helper(child, leaves) for child in children]
File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/utils/_pytree.py", line 1016, in <listcomp>
subspecs = [helper(child, leaves) for child in children]
File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/utils/_pytree.py", line 1013, in helper
children, context = flatten_fn(node)
File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/utils/_pytree.py", line 1020, in tree_flatten
treespec = helper(tree, leaves)
File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/utils/_pytree.py", line 1016, in helper
subspecs = [helper(child, leaves) for child in children]
File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/utils/_pytree.py", line 1016, in <listcomp>
subspecs = [helper(child, leaves) for child in children]
File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/utils/_pytree.py", line 1016, in helper
subspecs = [helper(child, leaves) for child in children]
File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/utils/_pytree.py", line 1016, in <listcomp>
subspecs = [helper(child, leaves) for child in children]
File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/utils/_pytree.py", line 1013, in helper
children, context = flatten_fn(node)
File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/utils/_pytree.py", line 1020, in tree_flatten
treespec = helper(tree, leaves)
File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/utils/_pytree.py", line 1016, in helper
subspecs = [helper(child, leaves) for child in children]
File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/utils/_pytree.py", line 1016, in <listcomp>
subspecs = [helper(child, leaves) for child in children]
File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/utils/_pytree.py", line 1016, in helper
subspecs = [helper(child, leaves) for child in children]
File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/utils/_pytree.py", line 1016, in <listcomp>
subspecs = [helper(child, leaves) for child in children]
File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/utils/_pytree.py", line 1013, in helper
children, context = flatten_fn(node)
File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/utils/_pytree.py", line 1020, in tree_flatten
treespec = helper(tree, leaves)
File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/utils/_pytree.py", line 1016, in helper
subspecs = [helper(child, leaves) for child in children]
File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/utils/_pytree.py", line 1016, in <listcomp>
subspecs = [helper(child, leaves) for child in children]
File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/utils/_pytree.py", line 1016, in helper
subspecs = [helper(child, leaves) for child in children]
File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/utils/_pytree.py", line 1016, in <listcomp>
subspecs = [helper(child, leaves) for child in children]
File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/utils/_pytree.py", line 1013, in helper
children, context = flatten_fn(node)
File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/utils/_pytree.py", line 1020, in tree_flatten
treespec = helper(tree, leaves)
File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/utils/_pytree.py", line 1016, in helper
subspecs = [helper(child, leaves) for child in children]
File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/utils/_pytree.py", line 1016, in <listcomp>
subspecs = [helper(child, leaves) for child in children]
File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/utils/_pytree.py", line 1016, in helper
subspecs = [helper(child, leaves) for child in children]
File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/utils/_pytree.py", line 1016, in <listcomp>
subspecs = [helper(child, leaves) for child in children]
File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/utils/_pytree.py", line 1013, in helper
children, context = flatten_fn(node)
File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/utils/_pytree.py", line 1020, in tree_flatten
treespec = helper(tree, leaves)
File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/utils/_pytree.py", line 1016, in helper
subspecs = [helper(child, leaves) for child in children]
File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/utils/_pytree.py", line 1016, in <listcomp>
subspecs = [helper(child, leaves) for child in children]
File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/utils/_pytree.py", line 1016, in helper
subspecs = [helper(child, leaves) for child in children]
File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/utils/_pytree.py", line 1016, in <listcomp>
subspecs = [helper(child, leaves) for child in children]
File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/utils/_pytree.py", line 1013, in helper
children, context = flatten_fn(node)
File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/utils/_pytree.py", line 1020, in tree_flatten
treespec = helper(tree, leaves)
File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/utils/_pytree.py", line 1016, in helper
subspecs = [helper(child, leaves) for child in children]
File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/utils/_pytree.py", line 1016, in <listcomp>
subspecs = [helper(child, leaves) for child in children]
File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/utils/_pytree.py", line 1016, in helper
subspecs = [helper(child, leaves) for child in children]
File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/utils/_pytree.py", line 1016, in <listcomp>
subspecs = [helper(child, leaves) for child in children]
File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/utils/_pytree.py", line 1013, in helper
children, context = flatten_fn(node)
File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/utils/_pytree.py", line 1020, in tree_flatten
treespec = helper(tree, leaves)
File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/utils/_pytree.py", line 1016, in helper
subspecs = [helper(child, leaves) for child in children]
File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/utils/_pytree.py", line 1016, in <listcomp>
subspecs = [helper(child, leaves) for child in children]
File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/utils/_pytree.py", line 1007, in helper
if _is_leaf(node, is_leaf=is_leaf):
File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/utils/_pytree.py", line 802, in _is_leaf
return (is_leaf is not None and is_leaf(tree)) or _get_node_type(
File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/utils/_pytree.py", line 795, in _get_node_type
if _is_namedtuple_instance(tree):
File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/utils/_pytree.py", line 786, in _is_namedtuple_instance
if len(bases) != 1 or bases[0] != tuple:
File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/_dynamo/polyfills/__init__.py", line 242, in cmp_ne
if isinstance(type(a).__ne__, types.FunctionType):
Set TORCH_LOGS="+dynamo" and TORCHDYNAMO_VERBOSE=1 for more information
To execute this test, run the following from the base repo dir:
python test/dynamo/test_decorators.py DecoratorTests.test_nonstrict_trace_pre_existing_custom_class_with_side_effects
This message can be suppressed by setting PYTORCH_PRINT_REPRO_ON_FAILURE=0
```
</details>
Test file path: `dynamo/test_decorators.py`
cc @clee2000 @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @chauhang @amjames
| true
|
2,883,331,576
|
DISABLED test_nonstrict_trace_inside_compiled_function_kwarg (__main__.DecoratorTests)
|
pytorch-bot[bot]
|
closed
|
[
"triaged",
"module: flaky-tests",
"skipped",
"oncall: pt2",
"module: dynamo"
] | 5
|
NONE
|
Platforms: linux, rocm
This test was disabled because it is failing in CI. See [recent examples](https://hud.pytorch.org/flakytest?name=test_nonstrict_trace_inside_compiled_function_kwarg&suite=DecoratorTests&limit=100) and the most recent trunk [workflow logs](https://github.com/pytorch/pytorch/runs/37882843708).
Over the past 3 hours, it has been determined flaky in 7 workflow(s) with 8 failures and 7 successes.
**Debugging instructions (after clicking on the recent samples link):**
DO NOT ASSUME THINGS ARE OKAY IF THE CI IS GREEN. We now shield flaky tests from developers so CI will thus be green but it will be harder to parse the logs.
To find relevant log snippets:
1. Click on the workflow logs linked above
2. Click on the Test step of the job so that it is expanded. Otherwise, the grepping will not work.
3. Grep for `test_nonstrict_trace_inside_compiled_function_kwarg`
4. There should be several instances run (as flaky tests are rerun in CI) from which you can study the logs.
<details><summary>Sample error message</summary>
```
Traceback (most recent call last):
File "/var/lib/jenkins/workspace/test/dynamo/test_decorators.py", line 490, in test_nonstrict_trace_inside_compiled_function_kwarg
res = opt_fn(x)
File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/_dynamo/eval_frame.py", line 585, in _fn
return fn(*args, **kwargs)
File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/_dynamo/convert_frame.py", line 1417, in __call__
return self._torchdynamo_orig_callable(
File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/_dynamo/convert_frame.py", line 594, in __call__
return _compile(
File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/_dynamo/convert_frame.py", line 1047, in _compile
guarded_code = compile_inner(code, one_graph, hooks, transform)
File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/_utils_internal.py", line 97, in wrapper_function
return function(*args, **kwargs)
File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/_dynamo/convert_frame.py", line 755, in compile_inner
return _compile_inner(code, one_graph, hooks, transform)
File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/_dynamo/convert_frame.py", line 791, in _compile_inner
out_code = transform_code_object(code, transform)
File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/_dynamo/bytecode_transformation.py", line 1418, in transform_code_object
transformations(instructions, code_options)
File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/_dynamo/convert_frame.py", line 256, in _fn
return fn(*args, **kwargs)
File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/_dynamo/convert_frame.py", line 709, in transform
tracer.run()
File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/_dynamo/symbolic_convert.py", line 3234, in run
super().run()
File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/_dynamo/symbolic_convert.py", line 1183, in run
while self.step():
File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/_dynamo/symbolic_convert.py", line 1093, in step
self.dispatch_table[inst.opcode](self, inst)
File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/_dynamo/symbolic_convert.py", line 769, in wrapper
return inner_fn(self, inst)
File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/_dynamo/symbolic_convert.py", line 2037, in CALL_FUNCTION_KW
self.call_function(fn, args, kwargs)
File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/_dynamo/symbolic_convert.py", line 1017, in call_function
self.push(fn.call_function(self, args, kwargs)) # type: ignore[arg-type]
File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/_dynamo/variables/functions.py", line 372, in call_function
unimplemented(
File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/_dynamo/exc.py", line 441, in unimplemented
raise Unsupported(msg, case_name=case_name)
torch._dynamo.exc.Unsupported: `nonstrict_trace` expects a callable, but got value of type <function>
from user code:
File "/var/lib/jenkins/workspace/test/dynamo/test_decorators.py", line 483, in fn
res = torch._dynamo.nonstrict_trace(traceable_fn=trace_me)(x)
Set TORCH_LOGS="+dynamo" and TORCHDYNAMO_VERBOSE=1 for more information
To execute this test, run the following from the base repo dir:
python test/dynamo/test_decorators.py DecoratorTests.test_nonstrict_trace_inside_compiled_function_kwarg
This message can be suppressed by setting PYTORCH_PRINT_REPRO_ON_FAILURE=0
```
</details>
Test file path: `dynamo/test_decorators.py`
cc @clee2000 @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @chauhang @amjames
| true
|
2,883,331,575
|
DISABLED test_nonstrict_trace_on_method (__main__.DecoratorTests)
|
pytorch-bot[bot]
|
closed
|
[
"triaged",
"module: flaky-tests",
"skipped",
"oncall: pt2",
"module: dynamo"
] | 3
|
NONE
|
Platforms: asan, linux, mac, macos
This test was disabled because it is failing in CI. See [recent examples](https://hud.pytorch.org/flakytest?name=test_nonstrict_trace_on_method&suite=DecoratorTests&limit=100) and the most recent trunk [workflow logs](https://github.com/pytorch/pytorch/runs/37883456540).
Over the past 3 hours, it has been determined flaky in 6 workflow(s) with 8 failures and 6 successes.
**Debugging instructions (after clicking on the recent samples link):**
DO NOT ASSUME THINGS ARE OKAY IF THE CI IS GREEN. We now shield flaky tests from developers so CI will thus be green but it will be harder to parse the logs.
To find relevant log snippets:
1. Click on the workflow logs linked above
2. Click on the Test step of the job so that it is expanded. Otherwise, the grepping will not work.
3. Grep for `test_nonstrict_trace_on_method`
4. There should be several instances run (as flaky tests are rerun in CI) from which you can study the logs.
<details><summary>Sample error message</summary>
Truncated for length
```
rch/utils/_pytree.py", line 1016, in helper
subspecs = [helper(child, leaves) for child in children]
File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/utils/_pytree.py", line 1016, in <listcomp>
subspecs = [helper(child, leaves) for child in children]
File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/utils/_pytree.py", line 1013, in helper
children, context = flatten_fn(node)
File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/utils/_pytree.py", line 1020, in tree_flatten
treespec = helper(tree, leaves)
File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/utils/_pytree.py", line 1016, in helper
subspecs = [helper(child, leaves) for child in children]
File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/utils/_pytree.py", line 1016, in <listcomp>
subspecs = [helper(child, leaves) for child in children]
File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/utils/_pytree.py", line 1016, in helper
subspecs = [helper(child, leaves) for child in children]
File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/utils/_pytree.py", line 1016, in <listcomp>
subspecs = [helper(child, leaves) for child in children]
File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/utils/_pytree.py", line 1013, in helper
children, context = flatten_fn(node)
File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/utils/_pytree.py", line 1020, in tree_flatten
treespec = helper(tree, leaves)
File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/utils/_pytree.py", line 1016, in helper
subspecs = [helper(child, leaves) for child in children]
File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/utils/_pytree.py", line 1016, in <listcomp>
subspecs = [helper(child, leaves) for child in children]
File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/utils/_pytree.py", line 1016, in helper
subspecs = [helper(child, leaves) for child in children]
File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/utils/_pytree.py", line 1016, in <listcomp>
subspecs = [helper(child, leaves) for child in children]
File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/utils/_pytree.py", line 1013, in helper
children, context = flatten_fn(node)
File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/utils/_pytree.py", line 1020, in tree_flatten
treespec = helper(tree, leaves)
File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/utils/_pytree.py", line 1016, in helper
subspecs = [helper(child, leaves) for child in children]
File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/utils/_pytree.py", line 1016, in <listcomp>
subspecs = [helper(child, leaves) for child in children]
File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/utils/_pytree.py", line 1016, in helper
subspecs = [helper(child, leaves) for child in children]
File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/utils/_pytree.py", line 1016, in <listcomp>
subspecs = [helper(child, leaves) for child in children]
File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/utils/_pytree.py", line 1013, in helper
children, context = flatten_fn(node)
File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/utils/_pytree.py", line 1020, in tree_flatten
treespec = helper(tree, leaves)
File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/utils/_pytree.py", line 1016, in helper
subspecs = [helper(child, leaves) for child in children]
File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/utils/_pytree.py", line 1016, in <listcomp>
subspecs = [helper(child, leaves) for child in children]
File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/utils/_pytree.py", line 1016, in helper
subspecs = [helper(child, leaves) for child in children]
File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/utils/_pytree.py", line 1016, in <listcomp>
subspecs = [helper(child, leaves) for child in children]
File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/utils/_pytree.py", line 1013, in helper
children, context = flatten_fn(node)
File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/utils/_pytree.py", line 1020, in tree_flatten
treespec = helper(tree, leaves)
File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/utils/_pytree.py", line 1016, in helper
subspecs = [helper(child, leaves) for child in children]
File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/utils/_pytree.py", line 1016, in <listcomp>
subspecs = [helper(child, leaves) for child in children]
File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/utils/_pytree.py", line 1016, in helper
subspecs = [helper(child, leaves) for child in children]
File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/utils/_pytree.py", line 1016, in <listcomp>
subspecs = [helper(child, leaves) for child in children]
File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/utils/_pytree.py", line 1013, in helper
children, context = flatten_fn(node)
File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/utils/_pytree.py", line 1020, in tree_flatten
treespec = helper(tree, leaves)
File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/utils/_pytree.py", line 1016, in helper
subspecs = [helper(child, leaves) for child in children]
File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/utils/_pytree.py", line 1016, in <listcomp>
subspecs = [helper(child, leaves) for child in children]
File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/utils/_pytree.py", line 1016, in helper
subspecs = [helper(child, leaves) for child in children]
File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/utils/_pytree.py", line 1016, in <listcomp>
subspecs = [helper(child, leaves) for child in children]
File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/utils/_pytree.py", line 1013, in helper
children, context = flatten_fn(node)
File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/utils/_pytree.py", line 1020, in tree_flatten
treespec = helper(tree, leaves)
File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/utils/_pytree.py", line 1016, in helper
subspecs = [helper(child, leaves) for child in children]
File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/utils/_pytree.py", line 1016, in <listcomp>
subspecs = [helper(child, leaves) for child in children]
File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/utils/_pytree.py", line 1016, in helper
subspecs = [helper(child, leaves) for child in children]
File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/utils/_pytree.py", line 1016, in <listcomp>
subspecs = [helper(child, leaves) for child in children]
File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/utils/_pytree.py", line 1013, in helper
children, context = flatten_fn(node)
File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/utils/_pytree.py", line 1020, in tree_flatten
treespec = helper(tree, leaves)
File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/utils/_pytree.py", line 1016, in helper
subspecs = [helper(child, leaves) for child in children]
File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/utils/_pytree.py", line 1016, in <listcomp>
subspecs = [helper(child, leaves) for child in children]
File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/utils/_pytree.py", line 1016, in helper
subspecs = [helper(child, leaves) for child in children]
File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/utils/_pytree.py", line 1016, in <listcomp>
subspecs = [helper(child, leaves) for child in children]
File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/utils/_pytree.py", line 1013, in helper
children, context = flatten_fn(node)
File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/utils/_pytree.py", line 1020, in tree_flatten
treespec = helper(tree, leaves)
File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/utils/_pytree.py", line 1016, in helper
subspecs = [helper(child, leaves) for child in children]
File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/utils/_pytree.py", line 1016, in <listcomp>
subspecs = [helper(child, leaves) for child in children]
File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/utils/_pytree.py", line 1016, in helper
subspecs = [helper(child, leaves) for child in children]
File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/utils/_pytree.py", line 1016, in <listcomp>
subspecs = [helper(child, leaves) for child in children]
File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/utils/_pytree.py", line 1013, in helper
children, context = flatten_fn(node)
File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/utils/_pytree.py", line 1020, in tree_flatten
treespec = helper(tree, leaves)
File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/utils/_pytree.py", line 1016, in helper
subspecs = [helper(child, leaves) for child in children]
File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/utils/_pytree.py", line 1016, in <listcomp>
subspecs = [helper(child, leaves) for child in children]
File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/utils/_pytree.py", line 1016, in helper
subspecs = [helper(child, leaves) for child in children]
File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/utils/_pytree.py", line 1016, in <listcomp>
subspecs = [helper(child, leaves) for child in children]
File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/utils/_pytree.py", line 1013, in helper
children, context = flatten_fn(node)
File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/utils/_pytree.py", line 1020, in tree_flatten
treespec = helper(tree, leaves)
File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/utils/_pytree.py", line 1016, in helper
subspecs = [helper(child, leaves) for child in children]
File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/utils/_pytree.py", line 1016, in <listcomp>
subspecs = [helper(child, leaves) for child in children]
File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/utils/_pytree.py", line 1016, in helper
subspecs = [helper(child, leaves) for child in children]
File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/utils/_pytree.py", line 1016, in <listcomp>
subspecs = [helper(child, leaves) for child in children]
File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/utils/_pytree.py", line 1013, in helper
children, context = flatten_fn(node)
File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/utils/_pytree.py", line 1020, in tree_flatten
treespec = helper(tree, leaves)
File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/utils/_pytree.py", line 1016, in helper
subspecs = [helper(child, leaves) for child in children]
File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/utils/_pytree.py", line 1016, in <listcomp>
subspecs = [helper(child, leaves) for child in children]
File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/utils/_pytree.py", line 1016, in helper
subspecs = [helper(child, leaves) for child in children]
File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/utils/_pytree.py", line 1016, in <listcomp>
subspecs = [helper(child, leaves) for child in children]
File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/utils/_pytree.py", line 1013, in helper
children, context = flatten_fn(node)
File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/utils/_pytree.py", line 1020, in tree_flatten
treespec = helper(tree, leaves)
File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/utils/_pytree.py", line 1016, in helper
subspecs = [helper(child, leaves) for child in children]
File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/utils/_pytree.py", line 1016, in <listcomp>
subspecs = [helper(child, leaves) for child in children]
File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/utils/_pytree.py", line 1016, in helper
subspecs = [helper(child, leaves) for child in children]
File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/utils/_pytree.py", line 1016, in <listcomp>
subspecs = [helper(child, leaves) for child in children]
File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/utils/_pytree.py", line 1013, in helper
children, context = flatten_fn(node)
File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/utils/_pytree.py", line 1020, in tree_flatten
treespec = helper(tree, leaves)
File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/utils/_pytree.py", line 1016, in helper
subspecs = [helper(child, leaves) for child in children]
File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/utils/_pytree.py", line 1016, in <listcomp>
subspecs = [helper(child, leaves) for child in children]
File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/utils/_pytree.py", line 1016, in helper
subspecs = [helper(child, leaves) for child in children]
File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/utils/_pytree.py", line 1016, in <listcomp>
subspecs = [helper(child, leaves) for child in children]
File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/utils/_pytree.py", line 1013, in helper
children, context = flatten_fn(node)
File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/utils/_pytree.py", line 1020, in tree_flatten
treespec = helper(tree, leaves)
File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/utils/_pytree.py", line 1016, in helper
subspecs = [helper(child, leaves) for child in children]
File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/utils/_pytree.py", line 1016, in <listcomp>
subspecs = [helper(child, leaves) for child in children]
File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/utils/_pytree.py", line 1016, in helper
subspecs = [helper(child, leaves) for child in children]
File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/utils/_pytree.py", line 1016, in <listcomp>
subspecs = [helper(child, leaves) for child in children]
File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/utils/_pytree.py", line 1013, in helper
children, context = flatten_fn(node)
File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/utils/_pytree.py", line 1020, in tree_flatten
treespec = helper(tree, leaves)
File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/utils/_pytree.py", line 1016, in helper
subspecs = [helper(child, leaves) for child in children]
File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/utils/_pytree.py", line 1016, in <listcomp>
subspecs = [helper(child, leaves) for child in children]
File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/utils/_pytree.py", line 1016, in helper
subspecs = [helper(child, leaves) for child in children]
File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/utils/_pytree.py", line 1016, in <listcomp>
subspecs = [helper(child, leaves) for child in children]
File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/utils/_pytree.py", line 1013, in helper
children, context = flatten_fn(node)
File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/utils/_pytree.py", line 1020, in tree_flatten
treespec = helper(tree, leaves)
File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/utils/_pytree.py", line 1016, in helper
subspecs = [helper(child, leaves) for child in children]
File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/utils/_pytree.py", line 1016, in <listcomp>
subspecs = [helper(child, leaves) for child in children]
File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/utils/_pytree.py", line 1016, in helper
subspecs = [helper(child, leaves) for child in children]
File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/utils/_pytree.py", line 1016, in <listcomp>
subspecs = [helper(child, leaves) for child in children]
File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/utils/_pytree.py", line 1013, in helper
children, context = flatten_fn(node)
File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/utils/_pytree.py", line 1020, in tree_flatten
treespec = helper(tree, leaves)
File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/utils/_pytree.py", line 1016, in helper
subspecs = [helper(child, leaves) for child in children]
File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/utils/_pytree.py", line 1016, in <listcomp>
subspecs = [helper(child, leaves) for child in children]
File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/utils/_pytree.py", line 1016, in helper
subspecs = [helper(child, leaves) for child in children]
File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/utils/_pytree.py", line 1016, in <listcomp>
subspecs = [helper(child, leaves) for child in children]
File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/utils/_pytree.py", line 1013, in helper
children, context = flatten_fn(node)
File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/utils/_pytree.py", line 1020, in tree_flatten
treespec = helper(tree, leaves)
File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/utils/_pytree.py", line 1016, in helper
subspecs = [helper(child, leaves) for child in children]
File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/utils/_pytree.py", line 1016, in <listcomp>
subspecs = [helper(child, leaves) for child in children]
File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/utils/_pytree.py", line 1016, in helper
subspecs = [helper(child, leaves) for child in children]
File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/utils/_pytree.py", line 1016, in <listcomp>
subspecs = [helper(child, leaves) for child in children]
File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/utils/_pytree.py", line 1013, in helper
children, context = flatten_fn(node)
File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/utils/_pytree.py", line 1020, in tree_flatten
treespec = helper(tree, leaves)
File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/utils/_pytree.py", line 1016, in helper
subspecs = [helper(child, leaves) for child in children]
File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/utils/_pytree.py", line 1016, in <listcomp>
subspecs = [helper(child, leaves) for child in children]
File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/utils/_pytree.py", line 1016, in helper
subspecs = [helper(child, leaves) for child in children]
File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/utils/_pytree.py", line 1016, in <listcomp>
subspecs = [helper(child, leaves) for child in children]
File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/utils/_pytree.py", line 1013, in helper
children, context = flatten_fn(node)
File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/utils/_pytree.py", line 1020, in tree_flatten
treespec = helper(tree, leaves)
File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/utils/_pytree.py", line 1016, in helper
subspecs = [helper(child, leaves) for child in children]
File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/utils/_pytree.py", line 1016, in <listcomp>
subspecs = [helper(child, leaves) for child in children]
File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/utils/_pytree.py", line 1016, in helper
subspecs = [helper(child, leaves) for child in children]
File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/utils/_pytree.py", line 1016, in <listcomp>
subspecs = [helper(child, leaves) for child in children]
File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/utils/_pytree.py", line 1013, in helper
children, context = flatten_fn(node)
File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/utils/_pytree.py", line 1020, in tree_flatten
treespec = helper(tree, leaves)
File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/utils/_pytree.py", line 1016, in helper
subspecs = [helper(child, leaves) for child in children]
File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/utils/_pytree.py", line 1016, in <listcomp>
subspecs = [helper(child, leaves) for child in children]
File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/utils/_pytree.py", line 1016, in helper
subspecs = [helper(child, leaves) for child in children]
File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/utils/_pytree.py", line 1016, in <listcomp>
subspecs = [helper(child, leaves) for child in children]
File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/utils/_pytree.py", line 1013, in helper
children, context = flatten_fn(node)
File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/utils/_pytree.py", line 1020, in tree_flatten
treespec = helper(tree, leaves)
File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/utils/_pytree.py", line 1016, in helper
subspecs = [helper(child, leaves) for child in children]
File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/utils/_pytree.py", line 1016, in <listcomp>
subspecs = [helper(child, leaves) for child in children]
File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/utils/_pytree.py", line 1016, in helper
subspecs = [helper(child, leaves) for child in children]
File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/utils/_pytree.py", line 1016, in <listcomp>
subspecs = [helper(child, leaves) for child in children]
File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/utils/_pytree.py", line 1013, in helper
children, context = flatten_fn(node)
File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/utils/_pytree.py", line 1020, in tree_flatten
treespec = helper(tree, leaves)
File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/utils/_pytree.py", line 1016, in helper
subspecs = [helper(child, leaves) for child in children]
File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/utils/_pytree.py", line 1016, in <listcomp>
subspecs = [helper(child, leaves) for child in children]
File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/utils/_pytree.py", line 1016, in helper
subspecs = [helper(child, leaves) for child in children]
File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/utils/_pytree.py", line 1016, in <listcomp>
subspecs = [helper(child, leaves) for child in children]
File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/utils/_pytree.py", line 1013, in helper
children, context = flatten_fn(node)
File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/utils/_pytree.py", line 1020, in tree_flatten
treespec = helper(tree, leaves)
File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/utils/_pytree.py", line 1016, in helper
subspecs = [helper(child, leaves) for child in children]
File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/utils/_pytree.py", line 1016, in <listcomp>
subspecs = [helper(child, leaves) for child in children]
File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/utils/_pytree.py", line 1016, in helper
subspecs = [helper(child, leaves) for child in children]
File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/utils/_pytree.py", line 1016, in <listcomp>
subspecs = [helper(child, leaves) for child in children]
File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/utils/_pytree.py", line 1013, in helper
children, context = flatten_fn(node)
File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/utils/_pytree.py", line 1020, in tree_flatten
treespec = helper(tree, leaves)
File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/utils/_pytree.py", line 1016, in helper
subspecs = [helper(child, leaves) for child in children]
File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/utils/_pytree.py", line 1016, in <listcomp>
subspecs = [helper(child, leaves) for child in children]
File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/utils/_pytree.py", line 1016, in helper
subspecs = [helper(child, leaves) for child in children]
File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/utils/_pytree.py", line 1016, in <listcomp>
subspecs = [helper(child, leaves) for child in children]
File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/utils/_pytree.py", line 1013, in helper
children, context = flatten_fn(node)
File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/utils/_pytree.py", line 1020, in tree_flatten
treespec = helper(tree, leaves)
File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/utils/_pytree.py", line 1016, in helper
subspecs = [helper(child, leaves) for child in children]
File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/utils/_pytree.py", line 1016, in <listcomp>
subspecs = [helper(child, leaves) for child in children]
File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/utils/_pytree.py", line 1016, in helper
subspecs = [helper(child, leaves) for child in children]
File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/utils/_pytree.py", line 1016, in <listcomp>
subspecs = [helper(child, leaves) for child in children]
File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/utils/_pytree.py", line 1013, in helper
children, context = flatten_fn(node)
File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/utils/_pytree.py", line 1020, in tree_flatten
treespec = helper(tree, leaves)
File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/utils/_pytree.py", line 1016, in helper
subspecs = [helper(child, leaves) for child in children]
File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/utils/_pytree.py", line 1016, in <listcomp>
subspecs = [helper(child, leaves) for child in children]
File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/utils/_pytree.py", line 1016, in helper
subspecs = [helper(child, leaves) for child in children]
File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/utils/_pytree.py", line 1016, in <listcomp>
subspecs = [helper(child, leaves) for child in children]
File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/utils/_pytree.py", line 1013, in helper
children, context = flatten_fn(node)
File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/utils/_pytree.py", line 1020, in tree_flatten
treespec = helper(tree, leaves)
File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/utils/_pytree.py", line 1016, in helper
subspecs = [helper(child, leaves) for child in children]
File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/utils/_pytree.py", line 1016, in <listcomp>
subspecs = [helper(child, leaves) for child in children]
File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/utils/_pytree.py", line 1016, in helper
subspecs = [helper(child, leaves) for child in children]
File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/utils/_pytree.py", line 1016, in <listcomp>
subspecs = [helper(child, leaves) for child in children]
File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/utils/_pytree.py", line 1013, in helper
children, context = flatten_fn(node)
File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/utils/_pytree.py", line 1020, in tree_flatten
treespec = helper(tree, leaves)
File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/utils/_pytree.py", line 1016, in helper
subspecs = [helper(child, leaves) for child in children]
File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/utils/_pytree.py", line 1016, in <listcomp>
subspecs = [helper(child, leaves) for child in children]
File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/utils/_pytree.py", line 1007, in helper
if _is_leaf(node, is_leaf=is_leaf):
File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/utils/_pytree.py", line 802, in _is_leaf
return (is_leaf is not None and is_leaf(tree)) or _get_node_type(
File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/utils/_pytree.py", line 795, in _get_node_type
if _is_namedtuple_instance(tree):
File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/utils/_pytree.py", line 786, in _is_namedtuple_instance
if len(bases) != 1 or bases[0] != tuple:
File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/_dynamo/polyfills/__init__.py", line 242, in cmp_ne
if isinstance(type(a).__ne__, types.FunctionType):
Set TORCH_LOGS="+dynamo" and TORCHDYNAMO_VERBOSE=1 for more information
To execute this test, run the following from the base repo dir:
python test/dynamo/test_decorators.py DecoratorTests.test_nonstrict_trace_on_method
This message can be suppressed by setting PYTORCH_PRINT_REPRO_ON_FAILURE=0
```
</details>
Test file path: `dynamo/test_decorators.py`
cc @clee2000 @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @chauhang @amjames
| true
|
2,883,327,520
|
Skip the logging if the pass cannot be pickled
|
houseroad
|
closed
|
[
"fb-exported",
"Merged",
"ciflow/trunk",
"topic: not user facing",
"module: inductor",
"ciflow/inductor"
] | 6
|
MEMBER
|
Summary:
Skip the logging for vLLM at this moment; we can add some pickle logic later.
The log is only for debugging purposes.
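A hypothetical sketch of the pickle-based guard alluded to above; `custom_pass` and `log_fn` are stand-ins, not actual Inductor names:
```python
import pickle


def maybe_log_pass(custom_pass, log_fn):
    # The log is debug-only, so silently skip passes that cannot be pickled
    # (e.g. vLLM's custom passes) instead of failing.
    try:
        payload = pickle.dumps(custom_pass)
    except Exception:
        return
    log_fn(payload)
```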
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
| true
|
2,883,312,687
|
[inductor][user triton] support on-device TMA / tensor descriptor API
|
davidberard98
|
open
|
[
"triaged",
"oncall: pt2",
"module: inductor",
"upstream triton",
"module: user triton"
] | 0
|
CONTRIBUTOR
|
### 🚀 The feature, motivation and pitch
Following https://github.com/triton-lang/triton/pull/4916, there's a new tensor descriptor API. We should consider supporting it in case users write kernels that use it, or if we want to use it in inductor directly.
One part of this is supporting global_scratch; it is partially patched in https://github.com/pytorch/pytorch/pull/148051, but that always uses a nullptr for the scratch space.
### Alternatives
_No response_
### Additional context
_No response_
cc @chauhang @penguinwu @voznesenskym @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @amjames @aakhundov @bertmaher @int3 @nmacchioni @embg @peterbell10 @oulgen
| true
|
2,883,311,025
|
[triton 3.3] cpp_wrapper: add a global_scratch arg
|
davidberard98
|
closed
|
[
"Merged",
"ciflow/trunk",
"topic: improvements",
"module: inductor",
"ciflow/inductor",
"release notes: inductor"
] | 6
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #148051
Following triton # 4916, the generated cubin expects a global_scratch argument to support on-device TMA. We believe this is the source of many of the "invalid argument" failures on AOTI/cpp_wrapper tests. AFAIK, we don't use on-device TMA in Inductor as of now, so it should be safe to use a nullptr for the scratch space.
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
| true
|
2,883,296,018
|
[cutlass backend] Check if len(timings) == len(choices) before skipping precompile
|
henrylhtsang
|
closed
|
[
"fb-exported",
"Merged",
"ciflow/trunk",
"topic: not user facing",
"module: inductor",
"ciflow/inductor"
] | 7
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #148050
Differential Revision: [D70298908](https://our.internmc.facebook.com/intern/diff/D70298908/)
Mostly from @coconutruben's observation. Right now, we skip precompilation if we find **some** timings. That sounds like a bug. Most of the time it is fine, since we don't change the number of configs and triton compilation doesn't take too long. But it is devastating for the cutlass backend.
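A hypothetical sketch of the intended check (the names are illustrative, not the actual Inductor code):
```python
def should_skip_precompile(timings, choices):
    # Skip precompilation only when *every* choice already has a timing,
    # not merely when some timings were found.
    return len(choices) > 0 and len(timings) == len(choices)
```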
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
| true
|
2,883,287,820
|
Fix `torch.nn.functional.hardswish` gradients corner case
|
zeshengzong
|
closed
|
[
"module: autograd",
"module: cpu",
"triaged",
"open source",
"Merged",
"Reverted",
"ciflow/trunk",
"release notes: nn",
"ci-no-td"
] | 32
|
CONTRIBUTOR
|
Fixes #147801
## Changes
- Change the hardswish gradient compute condition to match [torch.nn.functional.hardswish](https://pytorch.org/docs/stable/generated/torch.nn.functional.hardswish.html)
- Enable CUDA for test `test_hardswish_grad_corner`
- Add test case for value=-3
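For anyone checking the new behavior locally, a quick probe of the boundary values this change touches (the interior gradient of hardswish is (2x + 3) / 6, so the interesting points are x = -3 and x = 3):
```python
import torch

x = torch.tensor([-3.0, 0.0, 3.0], requires_grad=True)
torch.nn.functional.hardswish(x).sum().backward()
print(x.grad)  # gradients at the two corners and at an interior point
```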
## Test Result
```bash
pytest test/test_nn.py -k test_hardswish
pytest test/test_unary_ufuncs.py -k test_hardswish
pytest test/inductor/test_torchinductor.py -k test_hardswish
```



cc @ezyang @albanD @gqchen @pearu @nikitaved @soulitzer @Varal7 @xmfan @jgong5 @mingfeima @XiaobingSuper @sanchitintel @ashokei @jingxu10
| true
|
2,883,284,868
|
[RFC] Customization of ElasticAgent for fault-tolerance and node-replacement in big ddp job
|
zwx4d6
|
open
|
[
"oncall: distributed",
"triaged"
] | 0
|
NONE
|
### 🚀 The feature, motivation and pitch
When running training on >100 nodes in a k8s cluster, there are node failures, some of which are not retryable (the node may need replacement). For each job interruption, I would like to run some diagnostics on each node, **right in the training context** (container, fs mounts, hardware configuration, etc.), to help figure out which nodes are no longer eligible and to call the node provisioner (e.g. k8s), before the job is automatically retried/resumed.
The `DynamicRendezvousHandler` already supports automatically restarting **all** workers by starting a new round of rendezvous, allowing a new (replacement) node to join. But the problematic agent itself can't exit gracefully: the similarly named `RendezvousGracefulExitError` causes the rendezvous to close (everybody gives up) rather than re-attempt. Even if some exception did have that semantic, the diagnostic script could not raise it, because it is not executed in the `torchrun` process.
The proposed feature includes:
- Ability for an agent to exit cooperatively without forbidding the others from starting a new round of rendezvous.
- Customizable points in the agent or rendezvous handler for programmatically controlling the agent workflow. In my case `right-after-rendezvous` and `right-after-every-worker-terminated` are especially interesting, but there could be more.
If these features fit the role of the elastic agent, I would like to make a PR for further discussion.
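A rough sketch of what the `right-after-rendezvous` point could look like; the subclass, the diagnostics callback, and the exit semantics below are hypothetical and only illustrate the proposal:
```python
from torch.distributed.elastic.agent.server.local_elastic_agent import LocalElasticAgent


class DiagnosingAgent(LocalElasticAgent):
    """Hypothetical agent that runs node diagnostics right after rendezvous."""

    def __init__(self, spec, node_diagnostics, **kwargs):
        super().__init__(spec, **kwargs)
        # User-supplied callable executed in the training context (container,
        # fs mounts, hardware); returns True if this node is still eligible.
        self._node_diagnostics = node_diagnostics

    def _rendezvous(self, worker_group):
        super()._rendezvous(worker_group)
        if not self._node_diagnostics():
            # Desired semantics from the proposal: leave cooperatively without
            # closing the rendezvous, so the remaining agents can start a new
            # round with a replacement node. No such exception exists today.
            raise RuntimeError("node failed diagnostics; requesting replacement")
```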
### Alternatives
- just implement another agent for fully customized behavior, as in the [docs](https://pytorch.org/docs/stable/elastic/customization.html)
  - `torchrun` is a well-known launcher script, so the new script would have to mimic its user interface by importing or copying most of the code
  - the new agent itself would be almost a copy of `SimpleElasticAgent`, and a new agent type cannot easily be registered into the `torchrun` call stack through existing APIs
- replicate the rdzv/sync logic inside the training script, making the workers mini-agents
  - introduces unnecessary complexity for the training script
  - the current implementation of the dynamic rendezvous handler relies on a single store with optimistic concurrent writes; as the number of writers grows, reaching the sync point takes longer
- just terminate the agent abruptly (mimicking a lost agent) and leave it to an external manager
  - other agents are not notified until the heartbeat timeout
  - the external manager may not be aware of the restart counter inside the agent
- do not use ElasticAgent's restart and leave it to an external manager, e.g. a k8s operator
  - a k8s operator generally operates on pods; if all agents exit without restart, all pods would be deleted and re-created, causing large-scale rescheduling (or even re-queueing for a long time)
### Additional context
There is an issue about a k8s fault-tolerance use case in https://github.com/pytorch/pytorch/issues/136312, looking for a way to handle agent node restarts on hardware failure.
Another discussion about non-retryable errors is in https://github.com/pytorch/pytorch/issues/133877, which seeks an interface for the training script to notify the agent and was worked around with a modified agent.
A less related issue about a global restart count for external management: https://github.com/pytorch/pytorch/issues/108158.
cc @H-Huang @awgu @kwen2501 @wanchaol @fegin @fduwjj @wz337 @wconstab @d4l3k @c-p-i-o
| true
|
2,883,271,464
|
[cutlass backend] Sort the list of ops for better repro
|
henrylhtsang
|
closed
|
[
"fb-exported",
"Merged",
"ciflow/trunk",
"topic: not user facing",
"module: inductor",
"ciflow/inductor"
] | 9
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #148047
Differential Revision: [D70298051](https://our.internmc.facebook.com/intern/diff/D70298051/)
This only has an effect if `cutlass_max_profiling_configs` is used. I believe `cutlass_max_profiling_configs` is more of a testing config.
The problem is that when we get the configs from cutlass_library, the ops can come back in different orders.
The motivation is to make reproducing small issues easier.
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
| true
|
2,883,260,650
|
Change arg_kwarg_vals propagation strategy
|
zou3519
|
closed
|
[
"Merged",
"release notes: fx",
"fx",
"module: inductor",
"ciflow/inductor",
"keep-going"
] | 6
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #150511
* #148104
* #150495
* #148092
* #148091
* #148063
* __->__ #148046
Instead of always propagating arg_kwarg_vals in _COPY_META_FIELDS, we
special-case the pattern matcher to propagate arg_kwarg_vals when
it sees triton_kernel_wrapper_functional.
The strategy is:
1) trace out the replacement graph with arg_kwarg_vals (which have accurate eager-mode metadata)
2) trace out the replacement graph with vals (which have the accurate Inductor metadata)
3) Propagate the arg_kwarg_vals from the first graph to the second.
4) Use the second graph as the replacement graph.
The strategy is this because we want to extend this to handle
auto_functionalized later up in the stack.
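For intuition, a toy version of step 3 in isolation: copy one metadata field node-by-node between two traces of the same function. The field name comes from the description above; everything else is illustrative and not the actual pattern-matcher code.
```python
import torch
from torch.fx import symbolic_trace


def propagate_meta(src_gm, dst_gm, field="arg_kwarg_vals"):
    # The two graphs come from tracing the same replacement, so their nodes
    # line up positionally; copy the field wherever it is present.
    for src_node, dst_node in zip(src_gm.graph.nodes, dst_gm.graph.nodes):
        if field in src_node.meta:
            dst_node.meta[field] = src_node.meta[field]
    return dst_gm


def f(x):
    return torch.relu(x) + 1


gm_with_eager_meta = symbolic_trace(f)     # stand-in for the graph from step 1
gm_with_inductor_meta = symbolic_trace(f)  # stand-in for the graph from step 2
propagate_meta(gm_with_eager_meta, gm_with_inductor_meta)
```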
Test Plan:
- existing tests
cc @ezyang @SherlockNoMad @EikanWang @jgong5 @wenzhe-nrv @voznesenskym @penguinwu @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @jiayisunx @ipiszy @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
| true
|
2,883,252,818
|
Run Performance Regression Tests on new Version of Triton
|
drisspg
|
open
|
[
"module: performance",
"triaged",
"upstream triton"
] | 2
|
CONTRIBUTOR
|
cc @msaroufim @bertmaher @int3 @davidberard98 @nmacchioni @chenyang78 @embg @peterbell10 @aakhundov
| true
|
2,883,251,685
|
Sweep through Potentially BC Breaking Commits in Triton
|
drisspg
|
open
|
[
"triaged",
"upstream triton"
] | 0
|
CONTRIBUTOR
|
+ [ ] https://github.com/triton-lang/triton/pull/4955
+ [ ] https://github.com/triton-lang/triton/pull/5637
+ [ ] https://github.com/triton-lang/triton/pull/5926
+ [ ] https://github.com/triton-lang/triton/pull/5961
cc @bertmaher @int3 @davidberard98 @nmacchioni @chenyang78 @embg @peterbell10 @aakhundov
| true
|
2,883,228,801
|
[user-triton] handle inline_asm_case
|
sijiac
|
closed
|
[
"fb-exported",
"Merged",
"ciflow/trunk",
"topic: not user facing",
"module: inductor",
"ciflow/inductor"
] | 8
|
CONTRIBUTOR
|
Summary: We currently fail the mutation analysis for all inline_asm ops. In this diff, we handle the case where "is_pure" is set to True, since it indicates that the operation doesn't mutate its input values.
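For reference, a minimal kernel of the kind this now handles, adapted from Triton's inline-asm examples (PTX, so CUDA-only; the kernel itself is illustrative):
```python
import triton
import triton.language as tl


@triton.jit
def shl3_kernel(X, Y, BLOCK: tl.constexpr):
    offs = tl.arange(0, BLOCK)
    x = tl.load(X + offs)
    # is_pure=True tells Triton (and now the mutation analysis) that the asm
    # only computes a value and does not mutate its inputs.
    y = tl.inline_asm_elementwise(
        asm="shl.b32 $0, $1, 3;",
        constraints="=r,r",
        args=[x],
        dtype=tl.int32,
        is_pure=True,
        pack=1,
    )
    tl.store(Y + offs, y)
```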
Test Plan:
../buck-out/v2/gen/fbcode/854b9ed00d28c5c5/caffe2/test/inductor/__triton_kernels__/triton_kernels.par --r test_mutations_inline_asm_kernel
```
test_mutations_inline_asm_kernel_is_pure_true (caffe2.test.inductor.test_triton_kernels.MutationTests) ... W0226 18:10:34.261000 1906801 /data/users/sijiac/fbsource/fbcode/caffe2/torch/_higher_order_ops/triton_kernel_wrap.py:656] TTIR mutation analysis: Skipping pure tt.elementwise_inline_asm op (is_pure=True)
ok
----------------------------------------------------------------------
Ran 2 tests in 0.706s
OK
```
Differential Revision: D69878591
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
| true
|
2,883,219,835
|
[wip][aot] annotated fwd graph dynamic tensor outputs with mark_dynamic
|
xmfan
|
closed
|
[
"ciflow/inductor"
] | 2
|
MEMBER
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #148042
* #147891
| true
|
2,883,209,326
|
Invalid ONNX graph if using float16 dtype for `torch.arange`
|
Y-T-G
|
closed
|
[
"module: onnx",
"triaged"
] | 4
|
NONE
|
### 🐛 Describe the bug
If you specify a float16 `dtype` for `torch.arange`, the graph is invalid:
```python
import torch
class Test(torch.nn.Module):
def forward(self, x):
return torch.arange(end=x.shape[0], dtype=torch.float16)
test = Test()
torch.onnx.export(test, torch.randn(1), "test.onnx", dynamic_axes={"input":{0:"batch"}, "output":{0:"batch"}}, input_names=["input"], output_names=["output"])
import onnxruntime
sess = onnxruntime.InferenceSession("test.onnx")
```
```
InvalidGraph: [ONNXRuntimeError] : 10 : INVALID_GRAPH : Load model from test.onnx failed:This is an invalid model. Type Error: Type 'tensor(float16)' of input parameter (/Constant_1_output_0) of operator (Range) in node (/Range) is invalid.
```
Casting it afterwards, instead of specifying a `dtype`, works.
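A minimal sketch of that workaround, assuming a float16 result is still wanted:
```python
import torch

class Test(torch.nn.Module):
    def forward(self, x):
        # Build the range in the default dtype and cast afterwards, so the
        # exported Range op does not have to produce float16 directly.
        return torch.arange(end=x.shape[0]).to(torch.float16)
```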
### Versions
```
PyTorch version: 2.6.0+cu124
Is debug build: False
CUDA used to build PyTorch: 12.4
ROCM used to build PyTorch: N/A
OS: Ubuntu 22.04.5 LTS (x86_64)
GCC version: (Ubuntu 11.4.0-1ubuntu1~22.04) 11.4.0
Clang version: Could not collect
CMake version: version 3.30.5
Libc version: glibc-2.35
Python version: 3.11.10 | packaged by conda-forge | (main, Oct 16 2024, 01:27:36) [GCC 13.3.0] (64-bit runtime)
Python platform: Linux-5.15.0-122-generic-x86_64-with-glibc2.35
Is CUDA available: True
CUDA runtime version: Could not collect
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration:
GPU 0: NVIDIA A100-SXM4-80GB
GPU 1: NVIDIA A100-SXM4-80GB
Nvidia driver version: 550.90.12
cuDNN version: Could not collect
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 43 bits physical, 48 bits virtual
Byte Order: Little Endian
CPU(s): 128
On-line CPU(s) list: 0-127
Vendor ID: AuthenticAMD
Model name: AMD EPYC 7502 32-Core Processor
CPU family: 23
Model: 49
Thread(s) per core: 2
Core(s) per socket: 32
Socket(s): 2
Stepping: 0
Frequency boost: enabled
CPU max MHz: 2500.0000
CPU min MHz: 1500.0000
BogoMIPS: 5000.17
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ht syscall nx mmxext fxsr_opt pdpe1gb rdtscp lm constant_tsc rep_good nopl nonstop_tsc cpuid extd_apicid aperfmperf rapl pni pclmulqdq monitor ssse3 fma cx16 sse4_1 sse4_2 movbe popcnt aes xsave avx f16c rdrand lahf_lm cmp_legacy svm extapic cr8_legacy abm sse4a misalignsse 3dnowprefetch osvw ibs skinit wdt tce topoext perfctr_core perfctr_nb bpext perfctr_llc mwaitx cpb cat_l3 cdp_l3 hw_pstate ssbd mba ibrs ibpb stibp vmmcall fsgsbase bmi1 avx2 smep bmi2 cqm rdt_a rdseed adx smap clflushopt clwb sha_ni xsaveopt xsavec xgetbv1 cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local clzero irperf xsaveerptr rdpru wbnoinvd amd_ppin arat npt lbrv svm_lock nrip_save tsc_scale vmcb_clean flushbyasid decodeassists pausefilter pfthreshold avic v_vmsave_vmload vgif v_spec_ctrl umip rdpid overflow_recov succor smca sme sev sev_es
Virtualization: AMD-V
L1d cache: 2 MiB (64 instances)
L1i cache: 2 MiB (64 instances)
L2 cache: 32 MiB (64 instances)
L3 cache: 256 MiB (16 instances)
NUMA node(s): 2
NUMA node0 CPU(s): 0-31,64-95
NUMA node1 CPU(s): 32-63,96-127
Vulnerability Gather data sampling: Not affected
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Not affected
Vulnerability Reg file data sampling: Not affected
Vulnerability Retbleed: Mitigation; untrained return thunk; SMT enabled with STIBP protection
Vulnerability Spec rstack overflow: Mitigation; safe RET
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl and seccomp
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Retpolines; IBPB conditional; STIBP always-on; RSB filling; PBRSB-eIBRS Not affected; BHI Not affected
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Versions of relevant libraries:
[pip3] numpy==1.26.4
[pip3] nvidia-cublas-cu12==12.4.5.8
[pip3] nvidia-cuda-cupti-cu12==12.4.127
[pip3] nvidia-cuda-nvrtc-cu12==12.4.127
[pip3] nvidia-cuda-runtime-cu12==12.4.127
[pip3] nvidia-cudnn-cu12==9.1.0.70
[pip3] nvidia-cufft-cu12==11.2.1.3
[pip3] nvidia-curand-cu12==10.3.5.147
[pip3] nvidia-cusolver-cu12==11.6.1.9
[pip3] nvidia-cusparse-cu12==12.3.1.170
[pip3] nvidia-cusparselt-cu12==0.6.2
[pip3] nvidia-nccl-cu12==2.21.5
[pip3] nvidia-nvjitlink-cu12==12.4.127
[pip3] nvidia-nvtx-cu12==12.4.127
[pip3] onnx==1.17.0
[pip3] onnx-graphsurgeon==0.5.2
[pip3] onnx2tf==1.26.3
[pip3] onnxruntime==1.20.1
[pip3] onnxruntime-gpu==1.20.1
[pip3] onnxslim==0.1.47
[pip3] optree==0.13.0
[pip3] sng4onnx==1.0.4
[pip3] torch==2.6.0
[pip3] torchaudio==2.5.0+cu124
[pip3] torchelastic==0.2.2
[pip3] torchvision==0.21.0
[pip3] triton==3.2.0
[pip3] tritonclient==2.54.0
[conda] numpy 1.26.4 pypi_0 pypi
[conda] nvidia-cublas-cu12 12.4.5.8 pypi_0 pypi
[conda] nvidia-cuda-cupti-cu12 12.4.127 pypi_0 pypi
[conda] nvidia-cuda-nvrtc-cu12 12.4.127 pypi_0 pypi
[conda] nvidia-cuda-runtime-cu12 12.4.127 pypi_0 pypi
[conda] nvidia-cudnn-cu12 9.1.0.70 pypi_0 pypi
[conda] nvidia-cufft-cu12 11.2.1.3 pypi_0 pypi
[conda] nvidia-curand-cu12 10.3.5.147 pypi_0 pypi
[conda] nvidia-cusolver-cu12 11.6.1.9 pypi_0 pypi
[conda] nvidia-cusparse-cu12 12.3.1.170 pypi_0 pypi
[conda] nvidia-cusparselt-cu12 0.6.2 pypi_0 pypi
[conda] nvidia-nccl-cu12 2.21.5 pypi_0 pypi
[conda] nvidia-nvjitlink-cu12 12.4.127 pypi_0 pypi
[conda] nvidia-nvtx-cu12 12.4.127 pypi_0 pypi
[conda] optree 0.13.0 pypi_0 pypi
[conda] torch 2.6.0 pypi_0 pypi
[conda] torchaudio 2.5.0+cu124 pypi_0 pypi
[conda] torchelastic 0.2.2 pypi_0 pypi
[conda] torchvision 0.21.0 pypi_0 pypi
[conda] triton 3.2.0 pypi_0 pypi
[conda] tritonclient 2.54.0 pypi_0 pypi
```
| true
|
2,883,206,777
|
[dynamo] torch._dynamo.mark_dynamic hard fails the compile if the dimension is coerced to static
|
xmfan
|
closed
|
[
"oncall: pt2",
"module: dynamic shapes",
"module: dynamo",
"module: guards"
] | 2
|
MEMBER
|
### 🐛 Describe the bug
I'm trying to annotate the forward graph's dynamic tensor outputs as dynamic to cut down on recompiles: https://github.com/pytorch/pytorch/pull/148042.
```python
import torch
@torch.compile(backend="aot_eager")
def fn(static, dynamic):
return torch.matmul(static, dynamic) # inner dims coerced by matmul
static = torch.randn(10, 10)
dynamic = torch.randn(10, 10)
torch._dynamo.mark_dynamic(dynamic, 0)
fn(static, dynamic)
```
Today, this is a hard error, which is inconvenient when compiling code that may or may not run into this issue, e.g. the PR above, or when the coercion only happens on some branches.
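For comparison, a minimal sketch of a softer annotation path using `torch._dynamo.maybe_mark_dynamic` (assuming it is available in the installed build): it hints that the dim may be dynamic and does not raise if the dim ends up specialized.
```python
import torch

@torch.compile(backend="aot_eager")
def fn(static, dynamic):
    return torch.matmul(static, dynamic)  # inner dims coerced by matmul

static = torch.randn(10, 10)
dynamic = torch.randn(10, 10)

# Hint (rather than assert) that dim 0 may vary; if the compiler
# specializes it to 10 anyway, no ConstraintViolationError is raised.
# Availability of maybe_mark_dynamic depends on the installed torch version.
torch._dynamo.maybe_mark_dynamic(dynamic, 0)
fn(static, dynamic)
```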
### Error logs
<details>
<summary>Logs</summary>
```
ERROR:torch._guards:Error while creating guard:
Name: ''
Source: shape_env
Create Function: SHAPE_ENV
Guard Types: None
Code List: None
Object Weakref: None
Guarded Class Weakref: None
Traceback (most recent call last):
File "/home/xmfan/core/a/pytorch/torch/_guards.py", line 356, in create
return self.create_fn(builder, self)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/xmfan/core/a/pytorch/torch/_dynamo/guards.py", line 1948, in SHAPE_ENV
python_code_parts, verbose_code_parts = _get_code_parts(
^^^^^^^^^^^^^^^^
File "/home/xmfan/core/a/pytorch/torch/_dynamo/guards.py", line 1931, in _get_code_parts
return output_graph.shape_env.produce_guards_verbose(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/xmfan/core/a/pytorch/torch/fx/experimental/symbolic_shapes.py", line 5361, in produce_guards_verbose
raise ConstraintViolationError(
torch.fx.experimental.symbolic_shapes.ConstraintViolationError: Constraints violated (L['dynamic'].size()[0])! For more information, run with TORCH_LOGS="+dynamic".
- Not all values of RelaxedUnspecConstraint(L['dynamic'].size()[0]) are valid because L['dynamic'].size()[0] was inferred to be a constant (10).
ERROR:torch._guards:Created at:
File "/home/xmfan/core/a/pytorch/torch/_dynamo/convert_frame.py", line 680, in transform
tracer = InstructionTranslator(
File "/home/xmfan/core/a/pytorch/torch/_dynamo/symbolic_convert.py", line 2910, in __init__
output=OutputGraph(
File "/home/xmfan/core/a/pytorch/torch/_dynamo/output_graph.py", line 356, in __init__
self.init_ambient_guards()
File "/home/xmfan/core/a/pytorch/torch/_dynamo/output_graph.py", line 505, in init_ambient_guards
self.guards.add(ShapeEnvSource().make_guard(GuardBuilder.SHAPE_ENV))
Traceback (most recent call last):
File "/home/xmfan/core/a/pytorch/bruh.py", line 10, in <module>
fn(static, dynamic)
File "/home/xmfan/core/a/pytorch/torch/_dynamo/eval_frame.py", line 585, in _fn
return fn(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^
File "/home/xmfan/core/a/pytorch/torch/_dynamo/convert_frame.py", line 1393, in __call__
return self._torchdynamo_orig_callable(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/xmfan/core/a/pytorch/torch/_dynamo/convert_frame.py", line 1173, in __call__
result = self._inner_convert(
^^^^^^^^^^^^^^^^^^^^
File "/home/xmfan/core/a/pytorch/torch/_dynamo/convert_frame.py", line 585, in __call__
return _compile(
^^^^^^^^^
File "/home/xmfan/core/a/pytorch/torch/_dynamo/convert_frame.py", line 1023, in _compile
guarded_code = compile_inner(code, one_graph, hooks, transform)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/xmfan/core/a/pytorch/torch/_utils_internal.py", line 95, in wrapper_function
return function(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/xmfan/core/a/pytorch/torch/_dynamo/convert_frame.py", line 746, in compile_inner
return _compile_inner(code, one_graph, hooks, transform)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/xmfan/core/a/pytorch/torch/_dynamo/convert_frame.py", line 884, in _compile_inner
check_fn = CheckFunctionManager(
^^^^^^^^^^^^^^^^^^^^^
File "/home/xmfan/core/a/pytorch/torch/_dynamo/guards.py", line 2473, in __init__
guard.create(builder)
File "/home/xmfan/core/a/pytorch/torch/_guards.py", line 356, in create
return self.create_fn(builder, self)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/xmfan/core/a/pytorch/torch/_dynamo/guards.py", line 1948, in SHAPE_ENV
python_code_parts, verbose_code_parts = _get_code_parts(
^^^^^^^^^^^^^^^^
File "/home/xmfan/core/a/pytorch/torch/_dynamo/guards.py", line 1931, in _get_code_parts
return output_graph.shape_env.produce_guards_verbose(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/xmfan/core/a/pytorch/torch/fx/experimental/symbolic_shapes.py", line 5361, in produce_guards_verbose
raise ConstraintViolationError(
torch.fx.experimental.symbolic_shapes.ConstraintViolationError: Constraints violated (L['dynamic'].size()[0])! For more information, run with TORCH_LOGS="+dynamic".
- Not all values of RelaxedUnspecConstraint(L['dynamic'].size()[0]) are valid because L['dynamic'].size()[0] was inferred to be a constant (10).
```
</details>
### Versions
main
cc @chauhang @penguinwu @ezyang @bobrenjc93 @voznesenskym @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @amjames @anijain2305
| true
|
2,883,165,461
|
[TESTING] 1
|
ZainRizvi
|
closed
|
[
"topic: not user facing"
] | 1
|
CONTRIBUTOR
|
Fixes #ISSUE_NUMBER
| true
|
2,883,163,601
|
[torchgen] Add support for schema with namespace
|
larryliu0820
|
closed
|
[
"Merged",
"ciflow/trunk",
"topic: not user facing"
] | 6
|
CONTRIBUTOR
|
Fixes https://github.com/pytorch/executorch/issues/8711
In ExecuTorch when we try to parse the following schema:
```
aten::__lshift__.Scalar(Tensor self, Scalar other) -> Tensor
```
Repro:
```python
from torchgen.model import FunctionSchema
native_schema = FunctionSchema.parse("aten::__lshift__.Scalar(Tensor self, Scalar other) -> Tensor")
```
It fails because `BaseOperatorName` categorizes the operator as an
inplace operator.
I understand we are not supposed to pass a namespace such as "aten::" into
`FunctionSchema.parse()`, but unfortunately ExecuTorch requires this to
work.
This PR adds a new `namespace` attribute to `BaseOperatorName` and makes
sure the rest of the stack works as before if a schema without a
namespace is passed in.
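For reference, a caller-side workaround is to split the namespace off before parsing; this is a minimal sketch under the `ns::name` convention shown above, not part of this PR:
```python
from torchgen.model import FunctionSchema

def parse_namespaced_schema(schema: str) -> tuple[str, FunctionSchema]:
    # Split an optional "ns::" prefix off before parsing, since
    # FunctionSchema.parse() historically expects no namespace.
    namespace, sep, rest = schema.partition("::")
    if not sep:
        # No explicit namespace; defaulting to "aten" is an assumption here.
        namespace, rest = "aten", schema
    return namespace, FunctionSchema.parse(rest)

ns, parsed = parse_namespaced_schema(
    "aten::__lshift__.Scalar(Tensor self, Scalar other) -> Tensor"
)
```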
| true
|
2,883,153,742
|
[ROCm] Skip gfx12 Row-Wise F8 Tests
|
petrex
|
open
|
[
"module: rocm",
"triaged",
"open source",
"ciflow/trunk",
"topic: not user facing",
"ciflow/rocm"
] | 6
|
CONTRIBUTOR
|
TL;DR: Skip gfx12 row-wise FP8 tests (the feature is still under development).
This pull request updates `test/test_matmul_cuda.py` and `torch/testing/_internal/common_cuda.py` to handle the GFX12 architecture: it adds a platform check for GFX12 and updates several test cases to skip execution when GFX12 is detected (a rough sketch of the check appears after the list below).
Changes related to GFX12 support:
* [`torch/testing/_internal/common_cuda.py`](diffhunk://#diff-fe348e24069d43bc7c6913174b038fcc5880a3281bdc0e8e217cf210bd0935e5R58-R59): Added a new lazy evaluation `IS_GFX12` to check if the current CUDA device supports the GFX12 architecture.
* [`test/test_matmul_cuda.py`](diffhunk://#diff-3f31c52b48cfddf8f4617d809f7695b2e4a1c78656f8c4b5143a4b45d01fcf23L27-R28): Imported the new `IS_GFX12` variable.
Updates to test cases:
* [`test/test_matmul_cuda.py`](diffhunk://#diff-3f31c52b48cfddf8f4617d809f7695b2e4a1c78656f8c4b5143a4b45d01fcf23L470-R473): Modified `_test_tautological_mm` to skip the test if `IS_GFX12` is true.
* [`test/test_matmul_cuda.py`](diffhunk://#diff-3f31c52b48cfddf8f4617d809f7695b2e4a1c78656f8c4b5143a4b45d01fcf23R688): Added a skip condition for `IS_GFX12` in `test_float8_rowwise_scaling_sanity`.
* [`test/test_matmul_cuda.py`](diffhunk://#diff-3f31c52b48cfddf8f4617d809f7695b2e4a1c78656f8c4b5143a4b45d01fcf23R794): Added a skip condition for `IS_GFX12` in `test_scaled_mm_vs_emulated_row_wise`.
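For illustration, a minimal sketch of what such a check and skip might look like; the `gcnArchName` device property and all names here are assumptions for the sketch, not code taken from this PR:
```python
import unittest
import torch

def is_gfx12() -> bool:
    # True only on ROCm builds whose device arch name starts with "gfx12".
    if not torch.cuda.is_available() or torch.version.hip is None:
        return False
    arch = torch.cuda.get_device_properties(0).gcnArchName  # assumed ROCm-only attribute
    return arch.split(":")[0].startswith("gfx12")

class TestScaledMMGfx12(unittest.TestCase):
    def test_float8_rowwise_scaling_sanity(self):
        if is_gfx12():
            self.skipTest("row-wise FP8 is still under development on gfx12")
        ...  # the actual row-wise scaled-mm assertions would go here
```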
cc @jeffdaily @sunway513 @jithunnair-amd @pruthvistony @ROCmSupport @dllehr-amd @jataylo @hongxiayang @naromero77amd
| true
|