| id | title | user | state | labels | comments | author_association | body | is_title |
|---|---|---|---|---|---|---|---|---|
2,824,668,373 | [inductor] Refactor CSEProxy into global scope | jansel | closed | [
"Merged",
"Reverted",
"topic: not user facing",
"module: inductor",
"ciflow/inductor",
"ci-no-td"
] | 4 | CONTRIBUTOR | Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #146373
* #146297
* #146282
* #146257
* #146255
* #146254
* #146252
* #146235
* __->__ #146226
* #146225
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @amjames @desertfire @chauhang @aakhundov | true |
2,824,668,329 | [inductor] Finish typing common.py | jansel | closed | [
"Merged",
"Reverted",
"topic: not user facing",
"module: inductor",
"ciflow/inductor",
"ci-no-td"
] | 6 | CONTRIBUTOR | Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #146373
* #146297
* #146282
* #146257
* #146255
* #146254
* #146252
* #146235
* #146226
* __->__ #146225
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @amjames @desertfire @chauhang @aakhundov | true |
2,824,666,380 | [ONNX] Migrate onnx decomps into PyTorch | justinchuby | closed | [
"module: onnx",
"triaged",
"open source",
"ciflow/trunk",
"release notes: onnx",
"topic: new features",
"merging"
] | 11 | COLLABORATOR | Migrate the ATen op decomp library for ONNX ("torchlib") from ONNX Script with necessary changes in the onnx registry.
## The migration
"torchlib" is what we call the decomp library from aten ops to onnx in the `onnxscript` project (the name doesn't matter and can be changed). It is the improved version of the "symbolic functions" in `torch/onnx`, implemented using `onnxscript`, a graph builder for onnx. **Since PyTorch 2.1, it has been a dependency for `torch.onnx` via `onnxscript` and has been powering the `torch.onnx` exporter.**
torchlib was hosted in `onnxscript` to allow rapid evolution. However, it is time to migrate the logic into PyTorch because:
1. The logic is developed for, and belongs to, `torch.onnx`, and is the equivalent of the onnx "symbolic functions" for FX graphs
2. Migrating to PyTorch decouples `torch.onnx` from logic in `onnxscript`, which is a good thing.
3. Maintaining its compatibility among multiple PyTorch versions is becoming harder and harder. After migration we can evolve the logic with aten operators without having to worry about backward compatibility for different PyTorch versions
4. We can use newer opsets by default, again without having to worry about BC. The proposal is to upgrade to opset 21 (from opset 18, released two years ago) for torch 2.7. This makes it easier for developers and users to leverage new dtypes and new operators like the corrected GroupNormalization.
## Test and infra impact
The tests leverage OpInfo. They run in an onnx shard only. On a 2-core machine, tests typically complete within 15 minutes.
No new dependencies are introduced. Packaging and test activities should remain the same.
## State of the migrated code
The migrated code is lifted from https://github.com/microsoft/onnxscript/tree/main/onnxscript/function_libs/torch_lib. It was reviewed by the same set of reviewers who own the `torch.onnx` component.
Fixes https://github.com/pytorch/pytorch/issues/139301
## Next steps
The follow up PRs will decouple the implementation from ONNX Script type system to improve type checking and bump the onnx opset version used.
| true |
2,824,631,633 | What will happen? | malfet | closed | [
"Stale",
"topic: not user facing"
] | 2 | CONTRIBUTOR | null | true |
2,824,629,830 | [while_loop][inductor] support sym expression as cond_fn output | ydwu4 | closed | [
"Merged",
"Reverted",
"ciflow/trunk",
"topic: not user facing",
"module: inductor",
"ciflow/inductor",
"ci-no-td"
] | 11 | CONTRIBUTOR | Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #146222
As titled. Previously, we only supported a tensor output from cond_fn; this PR also allows a shape expression to be returned from cond_fn.
The AOTI-generated output code looks like:
```
V0203 11:28:05.750000 2611693 torch/_inductor/compile_fx.py:1091] [1/0] [__output_code] bool buf7_cond_result;
....
(while_loop_cond_graph_0_arg2_1_handle);
V0203 11:27:59.336000 2611693 torch/_inductor/compile_fx.py:1091] [1/0] [__output_code] buf7_cond_result = u0 + u1 < 10L;
V0203 11:27:59.336000 2611693 torch/_inductor/compile_fx.py:1091] [1/0] [__output_code] if (!buf7_cond_result) break;
```
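For reference, the eager semantics of `while_loop` can be sketched in plain Python (a simplified sketch of the carried-values loop, not the actual implementation):

```python
def while_loop(cond_fn, body_fn, carried):
    # Eager reference semantics: keep applying body_fn while cond_fn is truthy.
    carried = tuple(carried)
    while cond_fn(*carried):
        carried = tuple(body_fn(*carried))
    return carried

# With this PR, cond_fn may produce a scalar boolean expression (e.g. a sym
# expression such as u0 + u1 < 10) instead of a boolean tensor.
out = while_loop(lambda i, n: i < n, lambda i, n: (i + 1, n), (0, 5))
print(out)  # (5, 5)
```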
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @amjames @desertfire @chauhang @aakhundov | true |
2,824,606,727 | Incompatible Torch and Torchvision while building from source for 2.6.0 and CUDA 12.6, RuntimeError: operator torchvision::nms does not exist | ajindal1 | open | [
"module: dependency bug",
"module: build",
"triaged"
] | 3 | CONTRIBUTOR | ### 🐛 Describe the bug
Building Torch 2.6.0 from source with CUDA 12.6 and then installing the torchvision wheels gives an incompatibility issue, specifically `RuntimeError: operator torchvision::nms does not exist`. This error has been discussed on the forum before, where the consensus has been that it is a build issue and reinstallation is recommended, so I am providing detailed steps for the repro.
The issue only occurs with CUDA 12.6 for me; it works fine with CUDA 11.8 and CUDA 12.4. A similar issue has been occurring with the nightly images with CUDA 12.4 for the past few weeks.
```
# Clone Pytorch repo and checkout to v2.6.0
git clone https://github.com/pytorch/pytorch.git && cd pytorch && git checkout v2.6.0
# Sync submodules
git submodule sync && git submodule update --init --recursive && cd ..
# Build PyTorch from source (using Pytorch's builder container image)
export DESIRED_CUDA=126
export CUDA_HOME_PATH=/usr/local/cuda-12.6
export GPU_TYPE=cu126
export PYTORCH_BUILD_VERSION=2.6.0
export PYTORCH_ROOT=/pytorch
export DESIRED_PYTHON=3.10
docker run -it --gpus all --ipc host -e USE_NCCL=1 -e USE_SYSTEM_NCCL=1 -e CUDA_HOME=$CUDA_HOME_PATH -e CUDACXX=${CUDA_HOME_PATH}/bin/nvcc -e USE_DISTRIBUTED=1 -e SKIP_ALL_TESTS=1 -e BUILD_SPLIT_CUDA=ON -e DESIRED_CUDA=${DESIRED_CUDA:0:2}.${DESIRED_CUDA:2:1} -e GPU_TYPE=$GPU_TYPE -e GPU_ARCH_TYPE=cuda -e PYTORCH_ROOT=/pytorch \
-e DESIRED_PYTHON=$DESIRED_PYTHON -e PYTORCH_BUILD_VERSION=$PYTORCH_BUILD_VERSION -v ./pytorch:/pytorch -u root pytorch/manylinux-builder:cuda12.6-main
# Inside container:
# Build NCCL
git clone https://github.com/NVIDIA/nccl.git && cd nccl && git checkout v2.23.4-1 && make -j src.build
export NCCL_ROOT=/nccl/build/
export NCCL_LIB_DIR=/nccl/build/lib/
export NCCL_INCLUDE_DIR=/nccl/build/include/
# Build Torch wheels, this will create the wheels in this location: /wheelhouse126/torch-2.6.0-cp310-cp310-linux_x86_64.whl
source /pytorch/.ci/manywheel/build.sh
```
```
# Copy wheels generated in above container to a new container: docker cp <container_id>:/wheelhouse126/torch-2.6.0-cp310-cp310-linux_x86_64.whl .
# Use Pytorch's or Nvidia's container for CUDA 12.6
docker run -it --gpus all --ipc host pytorch/pytorch:2.6.0-cuda12.6-cudnn9-devel bash
# Install python3.10
apt update && apt install python3.10 python3-pip -y
# Remove Existing NCCL & Install NCCL (Optional, error comes both with and without this step)
apt-get update && apt-mark unhold libnccl2 libnccl-dev && apt-get remove -y libnccl*
git clone https://github.com/NVIDIA/nccl.git && cd nccl && git checkout v2.23.4-1 && make -j src.build
apt install build-essential devscripts debhelper fakeroot && make pkg.debian.build && cd build/pkg && dpkg -i deb/libnccl*
apt-mark hold libnccl2 libnccl-dev
# Install Pytorch Wheels
python3.10 -m pip install torch-2.6.0-cp310-cp310-linux_x86_64.whl
# Install Torchvision
python3.10 -m pip install torchvision==0.21.0 --index-url https://download.pytorch.org/whl/cu126
# Load Torchvision
python3.10 -c "import torchvision;print(torchvision.__version__)"
```
Error details:
```
Traceback (most recent call last):
File "<string>", line 1, in <module>
File "/usr/local/lib/python3.10/dist-packages/torchvision/__init__.py", line 10, in <module>
from torchvision import _meta_registrations, datasets, io, models, ops, transforms, utils # usort:skip
File "/usr/local/lib/python3.10/dist-packages/torchvision/_meta_registrations.py", line 164, in <module>
def meta_nms(dets, scores, iou_threshold):
File "/usr/local/lib/python3.10/dist-packages/torch/library.py", line 828, in register
use_lib._register_fake(op_name, func, _stacklevel=stacklevel + 1)
File "/usr/local/lib/python3.10/dist-packages/torch/library.py", line 198, in _register_fake
handle = entry.fake_impl.register(func_to_register, source)
File "/usr/local/lib/python3.10/dist-packages/torch/_library/fake_impl.py", line 31, in register
if torch._C._dispatch_has_kernel_for_dispatch_key(self.qualname, "Meta"):
RuntimeError: operator torchvision::nms does not exist
```
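As a quick sanity check before debugging further, the installed torch/torchvision versions can be compared against the expected release pairing (the mapping below is an assumption drawn from past release pairings; 2.6 ↔ 0.21 matches this repro):

```python
# Expected torch -> torchvision release pairings (assumed from release notes).
TORCH_TO_TORCHVISION = {"2.4": "0.19", "2.5": "0.20", "2.6": "0.21"}

def expected_torchvision(torch_version: str) -> str:
    # Compare only the major.minor component, e.g. "2.6.0" -> "2.6".
    major_minor = ".".join(torch_version.split(".")[:2])
    return TORCH_TO_TORCHVISION[major_minor]

print(expected_torchvision("2.6.0"))  # 0.21
```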
### Versions
Docker containers are provided for the repro, so the version information should not be required; here is the list of containers used:
1. pytorch/manylinux-builder:cuda12.6-main
2. nvidia/cuda:12.6.3-devel-ubuntu22.04 or pytorch/pytorch:2.6.0-cuda12.6-cudnn9-devel
cc @malfet @seemethere | true |
2,824,592,072 | Fix aten.to when input is a tensor constant | yushangdi | closed | [
"fb-exported",
"Merged",
"ciflow/trunk",
"topic: not user facing"
] | 5 | CONTRIBUTOR | Summary:
Fix aten.to when input is a tensor constant.
In this case, `args_unwrapped` could just be a constant, so not a functional tensor.
Test Plan:
```
buck2 run 'fbcode//mode/dev-nosan' fbcode//caffe2/test:test_export -- -r tensor_constant_aten_to
```
Differential Revision: D68984244
| true |
2,824,585,239 | [dynamo] Disable compiling on elementwise_type_promotion_wrapper | anijain2305 | closed | [
"Merged",
"ciflow/trunk",
"topic: not user facing",
"keep-going"
] | 3 | CONTRIBUTOR | Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #146116
* __->__ #146219
* #146283
* #146075
| true |
2,824,573,082 | [FSDP2][DEBUG] enforcing ReduceOp.SUM to avoid bug in ReduceOp.AVG | weifengpy | closed | [
"oncall: distributed",
"Stale",
"release notes: distributed (fsdp)",
"ciflow/inductor"
] | 4 | CONTRIBUTOR | Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #146218
A workaround for https://github.com/pytorch/pytorch/issues/144045 , but I am not sure if we should land it.
cc @H-Huang @awgu @kwen2501 @wanchaol @fegin @fduwjj @wz337 @wconstab @d4l3k @c-p-i-o | true |
2,824,560,642 | Remove stage_index_to_group_rank from schedule | H-Huang | closed | [
"oncall: distributed",
"Merged",
"ciflow/trunk",
"release notes: distributed (pipeline)"
] | 6 | MEMBER | Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #146217
* #146193
This PR allows schedules loaded via CSV to automatically set their `stage_index_to_group_rank` and removes the `stage_index_to_group_rank` argument from the `PipelineScheduleMulti` constructor
cc @awgu @kwen2501 @wanchaol @fegin @fduwjj @wz337 @wconstab @d4l3k @c-p-i-o | true |
2,824,552,897 | [inductor] use ftz variant of exp | shunting314 | closed | [
"Merged",
"Reverted",
"ciflow/trunk",
"topic: not user facing",
"module: inductor",
"ciflow/inductor",
"ci-no-td"
] | 11 | CONTRIBUTOR | Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #146216
The Inductor-generated exp op is compiled by Triton into the following PTX snippet:
```
mul.f32 %f74, %f83, 0f3FB8AA3B;
ex2.approx.f32 %f73, %f74;
```
But if we enable `--use_fast_math` in nvcc, exp in CUDA is compiled as
```
mul.ftz.f32 %f2, %f1, 0f3FB8AA3B;
ex2.approx.ftz.f32 %f3, %f2;
```
which uses the FTZ variant.
This PR lets Inductor generate the FTZ variant when the `use_fast_math` config is true.
I see a 4% speedup for the two-pass prepare_softmax kernel; online softmax should benefit more since it does more computation per second (>10% in my testing).
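As an aside, the constant `0f3FB8AA3B` in both PTX snippets is log2(e): `ex2` computes 2^x, so exp(x) is lowered as 2^(x · log2(e)). A small check of that constant:

```python
import math
import struct

# Decode the fp32 hex literal 0f3FB8AA3B used in the PTX snippets above.
val = struct.unpack("!f", bytes.fromhex("3FB8AA3B"))[0]
print(val, math.log2(math.e))  # both ~1.4426950
```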
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @amjames @desertfire @chauhang @aakhundov | true |
2,824,551,537 | Update Dependencies.cmake | longlene | closed | [
"triaged",
"open source",
"Stale"
] | 4 | NONE | fix cmake if check error:
“Unknown arguments specified”
Fixes #ISSUE_NUMBER
| true |
2,824,551,299 | [dynamo] Graph break on tensor.retain_grad | anijain2305 | closed | [
"Merged",
"ciflow/trunk",
"topic: not user facing",
"module: dynamo",
"ciflow/inductor",
"keep-going"
] | 1 | CONTRIBUTOR | Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #146116
* #146219
* #146075
* #146070
* __->__ #146214
* #146258
* #146198
* #146062
Fixes https://github.com/pytorch/pytorch/issues/146212
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @chauhang @amjames | true |
2,824,549,921 | while_loop cannot handle aliasing when body_fn is not executed | ydwu4 | open | [
"triaged",
"oncall: pt2",
"module: higher order operators",
"module: pt2-dispatcher"
] | 4 | CONTRIBUTOR | ### 🐛 Describe the bug
In the following repro, torch.compile gives different results from eager:
```python
import torch
class ZeroLoop(torch.nn.Module):
def forward(self, c, a):
a_view = torch.sin(a.view(-1, 1))
def cond_fn(c, a_view):
return torch.clip(a_view.sum(), 0, 1) < 0
def body_fn(c, a_view):
return c - 1, a_view + 1
out1, out2 = torch._higher_order_ops.while_loop(
cond_fn,
body_fn,
[c, a_view],
)
return out2.sin_(), a_view.cos_()
mod = ZeroLoop()
inp = (torch.tensor(0, dtype=torch.int64), torch.randn(1, 1))
eager_out = mod(*inp)
compiled_out = torch.compile(mod)(*inp)
print(eager_out)
print(compiled_out)
```
Looking at the generated code:
```python
V0131 15:09:59.129000 2123663 torch/_inductor/graph.py:2021] [1/0] [__output_code] def call(args):
V0131 15:09:59.129000 2123663 torch/_inductor/graph.py:2021] [1/0] [__output_code] arg0_1, arg1_1 = args
V0131 15:09:59.129000 2123663 torch/_inductor/graph.py:2021] [1/0] [__output_code] args.clear()
V0131 15:09:59.129000 2123663 torch/_inductor/graph.py:2021] [1/0] [__output_code] assert_size_stride(arg0_1, (1, 1), (1, 1))
V0131 15:09:59.129000 2123663 torch/_inductor/graph.py:2021] [1/0] [__output_code] assert_size_stride(arg1_1, (), ())
V0131 15:09:59.129000 2123663 torch/_inductor/graph.py:2021] [1/0] [__output_code] buf0 = empty_strided_cpu((1, 1), (1, 1), torch.float32)
V0131 15:09:59.129000 2123663 torch/_inductor/graph.py:2021] [1/0] [__output_code] buf5 = empty_strided_cpu((1, 1), (1, 1), torch.float32)
V0131 15:09:59.129000 2123663 torch/_inductor/graph.py:2021] [1/0] [__output_code] cpp_fused_cos_sin_0(arg0_1, buf0, buf5)
V0131 15:09:59.129000 2123663 torch/_inductor/graph.py:2021] [1/0] [__output_code] del arg0_1
V0131 15:09:59.129000 2123663 torch/_inductor/graph.py:2021] [1/0] [__output_code] buf1 = [None] * 2
V0131 15:09:59.129000 2123663 torch/_inductor/graph.py:2021] [1/0] [__output_code] buf1[0] = arg1_1
V0131 15:09:59.129000 2123663 torch/_inductor/graph.py:2021] [1/0] [__output_code] buf1[1] = buf0
V0131 15:09:59.129000 2123663 torch/_inductor/graph.py:2021] [1/0] [__output_code] while True:
V0131 15:09:59.129000 2123663 torch/_inductor/graph.py:2021] [1/0] [__output_code]
V0131 15:09:59.129000 2123663 torch/_inductor/graph.py:2021] [1/0] [__output_code] # subgraph: while_loop_cond_graph_0
V0131 15:09:59.129000 2123663 torch/_inductor/graph.py:2021] [1/0] [__output_code] while_loop_cond_graph_0_arg0_1 = buf1[0]
V0131 15:09:59.129000 2123663 torch/_inductor/graph.py:2021] [1/0] [__output_code] while_loop_cond_graph_0_arg1_1 = buf1[1]
V0131 15:09:59.129000 2123663 torch/_inductor/graph.py:2021] [1/0] [__output_code] while_loop_cond_graph_0_args = [while_loop_cond_graph_0_arg0_1, while_loop_cond_graph_0_arg1_1]
V0131 15:09:59.129000 2123663 torch/_inductor/graph.py:2021] [1/0] [__output_code] del while_loop_cond_graph_0_arg0_1
V0131 15:09:59.129000 2123663 torch/_inductor/graph.py:2021] [1/0] [__output_code] del while_loop_cond_graph_0_arg1_1
V0131 15:09:59.129000 2123663 torch/_inductor/graph.py:2021] [1/0] [__output_code] (buf1_cond_result,) = while_loop_cond_graph_0(while_loop_cond_graph_0_args)
V0131 15:09:59.129000 2123663 torch/_inductor/graph.py:2021] [1/0] [__output_code] if not buf1_cond_result.item(): break
V0131 15:09:59.129000 2123663 torch/_inductor/graph.py:2021] [1/0] [__output_code]
V0131 15:09:59.129000 2123663 torch/_inductor/graph.py:2021] [1/0] [__output_code] # subgraph: while_loop_body_graph_0
V0131 15:09:59.129000 2123663 torch/_inductor/graph.py:2021] [1/0] [__output_code] while_loop_body_graph_0_arg0_1 = buf1[0]
V0131 15:09:59.129000 2123663 torch/_inductor/graph.py:2021] [1/0] [__output_code] while_loop_body_graph_0_arg1_1 = buf1[1]
V0131 15:09:59.129000 2123663 torch/_inductor/graph.py:2021] [1/0] [__output_code] while_loop_body_graph_0_args = [while_loop_body_graph_0_arg0_1, while_loop_body_graph_0_arg1_1]
V0131 15:09:59.129000 2123663 torch/_inductor/graph.py:2021] [1/0] [__output_code] del while_loop_body_graph_0_arg0_1
V0131 15:09:59.129000 2123663 torch/_inductor/graph.py:2021] [1/0] [__output_code] del while_loop_body_graph_0_arg1_1
V0131 15:09:59.129000 2123663 torch/_inductor/graph.py:2021] [1/0] [__output_code] (buf1[0], buf1[1]) = while_loop_body_graph_0(while_loop_body_graph_0_args)
V0131 15:09:59.129000 2123663 torch/_inductor/graph.py:2021] [1/0] [__output_code] del arg1_1
V0131 15:09:59.129000 2123663 torch/_inductor/graph.py:2021] [1/0] [__output_code] del buf0
V0131 15:09:59.129000 2123663 torch/_inductor/graph.py:2021] [1/0] [__output_code] buf3 = buf1[1]
V0131 15:09:59.129000 2123663 torch/_inductor/graph.py:2021] [1/0] [__output_code] del buf1
V0131 15:09:59.129000 2123663 torch/_inductor/graph.py:2021] [1/0] [__output_code] buf4 = buf3; del buf3 # reuse
V0131 15:09:59.129000 2123663 torch/_inductor/graph.py:2021] [1/0] [__output_code] cpp_fused_sin_3(buf4)
V0131 15:09:59.129000 2123663 torch/_inductor/graph.py:2021] [1/0] [__output_code] return (buf4, buf5, )
```
we find that buf3 aliases buf0 when the body_fn of the while loop is not executed.
### Versions
on master
cc @chauhang @penguinwu @zou3519 @bdhirsh @yf225 | true |
2,824,480,418 | [aot_eager] retain_grad is ignored | anijain2305 | closed | [
"high priority",
"triage review",
"triaged",
"oncall: pt2",
"module: pt2-dispatcher"
] | 2 | CONTRIBUTOR | ### 🐛 Describe the bug
~~~
import torch
def fn(x, y):
y.retain_grad()
return torch.sin(y) + x
x = torch.randn(4, requires_grad=True)
y = torch.cos(x)
fn(x, y).sum().backward()
print(y.grad)
print("-------")
opt_fn = torch.compile(fn, backend="aot_eager")
x = torch.randn(4, requires_grad=True)
y = torch.cos(x)
opt_fn(x, y).sum().backward()
print(y.grad)
~~~
cc @ezyang @gchanan @zou3519 @kadeng @msaroufim @chauhang @penguinwu @bdhirsh @yf225
### Error logs
_No response_
### Versions
NA | true |
2,824,477,665 | Negative index support for `take_along_dim` | mdhaber | open | [
"triaged",
"actionable",
"module: python array api",
"module: python frontend"
] | 5 | NONE | ### 🚀 The feature, motivation and pitch
I'm working on adding an implementation of `quantile` in terms of Python array API standard calls[^1] for SciPy (https://github.com/scipy/scipy/pull/22352), and I would like the use of negative indices to be possible in `torch.take_along_dim`.
```python3
import torch as xp
x = xp.asarray([1, 2, 3])
xp.take(x, xp.asarray(-1)) # tensor(3)
xp.take_along_dim(x, xp.asarray([-1])) # expected tensor(3), but got
# RuntimeError: index -1 is out of bounds for dimension 0 with size 3
```
On the GPU, we get:
```python3
import torch as xp
device = "cuda" if xp.cuda.is_available() else "cpu"
x = xp.asarray([1, 2, 3], device=device)
xp.take_along_dim(x, xp.asarray(-1, device=device))
# RuntimeError: CUDA error: device-side assert triggered
# CUDA kernel errors might be asynchronously reported at some other API call, so the stacktrace below might be incorrect.
# For debugging consider passing CUDA_LAUNCH_BLOCKING=1
# Compile with `TORCH_USE_CUDA_DSA` to enable device-side assertions.
```
[^1]: [`take_along_axis`](https://data-apis.org/array-api/draft/API_specification/generated/array_api.take_along_axis.html) will be available in the next version of the standard. It is not explicit in the `take_along_axis` documentation about negative indices, but negative indices seem to be supported [in general](https://data-apis.org/array-api/draft/API_specification/indexing.html#single-axis-indexing). In any case, I've [asked for clarification](https://github.com/data-apis/array-api/issues/808#issuecomment-2628474075).
### Alternatives
`array_api_compat` can patch this, or I can always calculate the equivalent positive index.
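The "calculate the equivalent positive index" alternative can be sketched like this (a generic sketch, not `array_api_compat`'s actual patch):

```python
import torch

def normalize_indices(indices: torch.Tensor, size: int) -> torch.Tensor:
    # Map negative indices to their positive equivalents (-1 -> size-1, etc.)
    # before calling take_along_dim, which rejects negative indices.
    return indices % size

x = torch.tensor([1, 2, 3])
idx = normalize_indices(torch.tensor([-1]), x.shape[0])
print(torch.take_along_dim(x, idx))  # tensor([3])
```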
### Additional context
_No response_
cc @mruberry @rgommers @asmeurer @leofang @AnirudhDagar @asi1024 @emcastillo @kmaehashi @albanD | true |
2,824,475,759 | get custom operators to use exact strides | zou3519 | open | [
"triaged",
"module: custom-operators",
"oncall: pt2",
"module: pt2-dispatcher"
] | 0 | CONTRIBUTOR | Should be possible after #130243. Assigning to self so that I don't forget
cc @chauhang @penguinwu @bdhirsh @yf225 | true |
2,824,469,124 | [ONNX] Dynamo export fails for inception_v3 model | justinchuby | closed | [
"module: onnx",
"triaged"
] | 1 | COLLABORATOR | ### 🐛 Describe the bug
```py
from torchvision.models.inception import inception_v3
import torch
input = torch.randn(3, 3, 299, 299)
ep = torch.onnx.export(inception_v3(), (input,), dynamo=True, report=True, verify=True)
```
[onnx_export_2025-01-31_13-58-00-956153_accuracy.md](https://github.com/user-attachments/files/18624791/onnx_export_2025-01-31_13-58-00-956153_accuracy.md)
### Versions
main | true |
2,824,458,904 | [export][ez] Fix generated header file. | zhxchen17 | closed | [
"fb-exported",
"Merged",
"ciflow/trunk",
"ciflow/inductor",
"release notes: export"
] | 5 | CONTRIBUTOR | Summary: as title.
Test Plan: CI
Differential Revision: D68978788
| true |
2,824,456,516 | [CPUInductor] Fix SVE256 detection | malfet | closed | [
"module: cpu",
"Merged",
"ciflow/trunk",
"topic: bug fixes",
"module: inductor",
"module: dynamo",
"ciflow/inductor",
"release notes: inductor"
] | 6 | CONTRIBUTOR | Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #146207
This PR removes `torch.cpu._is_arm_sve_supported()` and replaces it with the stable `torch.backends.cpu.get_cpu_capability()`
I should have reviewed https://github.com/pytorch/pytorch/pull/134672 more thoroughly, because it introduced a duplicate but slightly different API for detecting CPU architectures, which resulted in runtime crashes on systems that support SVE128 rather than SVE256
Fixes https://github.com/pytorch/pytorch/issues/145441
cc @jgong5 @mingfeima @XiaobingSuper @sanchitintel @ashokei @jingxu10 @voznesenskym @penguinwu @EikanWang @Guobing-Chen @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @amjames @desertfire @chauhang @aakhundov | true |
2,824,450,053 | DISABLED test_python_val_doesnt_have_attr (__main__.TestScript) | pytorch-bot[bot] | closed | [
"oncall: jit",
"triaged",
"module: flaky-tests",
"skipped",
"oncall: pt2",
"module: dynamo"
] | 1 | NONE | Platforms: dynamo
This test was disabled because it is failing in CI. See [recent examples](https://hud.pytorch.org/flakytest?name=test_python_val_doesnt_have_attr&suite=TestScript&limit=100) and the most recent trunk [workflow logs](https://github.com/pytorch/pytorch/runs/36489933553).
Over the past 3 hours, it has been determined flaky in 5 workflow(s) with 5 failures and 5 successes.
**Debugging instructions (after clicking on the recent samples link):**
DO NOT ASSUME THINGS ARE OKAY IF THE CI IS GREEN. We now shield flaky tests from developers so CI will thus be green but it will be harder to parse the logs.
To find relevant log snippets:
1. Click on the workflow logs linked above
2. Click on the Test step of the job so that it is expanded. Otherwise, the grepping will not work.
3. Grep for `test_python_val_doesnt_have_attr`
4. There should be several instances run (as flaky tests are rerun in CI) from which you can study the logs.
<details><summary>Sample error message</summary>
```
torch._dynamo.exc.InternalTorchDynamoError: RuntimeError: dictionary changed size during iteration
from user code:
File "/opt/conda/envs/py_3.9/lib/python3.9/linecache.py", line 43, in getlines
return cache[filename][2]
Set TORCH_LOGS="+dynamo" and TORCHDYNAMO_VERBOSE=1 for more information
You can suppress this exception and fall back to eager by setting:
import torch._dynamo
torch._dynamo.config.suppress_errors = True
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/var/lib/jenkins/workspace/test/test_jit.py", line 12158, in test_python_val_doesnt_have_attr
with self.assertRaisesRegex(RuntimeError, 'object has no attribute abcd'):
File "/var/lib/jenkins/workspace/test/test_jit.py", line 12161, in torch_dynamo_resume_in_test_python_val_doesnt_have_attr_at_12158
def python_val_doesnt_have_attr():
File "/opt/conda/envs/py_3.9/lib/python3.9/unittest/case.py", line 239, in __exit__
self._raiseFailure('"{}" does not match "{}"'.format(
File "/opt/conda/envs/py_3.9/lib/python3.9/unittest/case.py", line 163, in _raiseFailure
raise self.test_case.failureException(msg)
AssertionError: "object has no attribute abcd" does not match "RuntimeError: dictionary changed size during iteration
from user code:
File "/opt/conda/envs/py_3.9/lib/python3.9/linecache.py", line 43, in getlines
return cache[filename][2]
Set TORCH_LOGS="+dynamo" and TORCHDYNAMO_VERBOSE=1 for more information
You can suppress this exception and fall back to eager by setting:
import torch._dynamo
torch._dynamo.config.suppress_errors = True
"
To execute this test, run the following from the base repo dir:
PYTORCH_TEST_WITH_DYNAMO=1 python test/test_jit.py TestScript.test_python_val_doesnt_have_attr
This message can be suppressed by setting PYTORCH_PRINT_REPRO_ON_FAILURE=0
```
</details>
Test file path: `test_jit.py`
cc @EikanWang @jgong5 @wenzhe-nrv @sanchitintel @clee2000 @wdvr @chauhang @penguinwu @voznesenskym @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @jiayisunx @chenyang78 @kadeng @amjames | true |
2,824,449,991 | DISABLED test_ntuple_builtins (__main__.TestScript) | pytorch-bot[bot] | closed | [
"oncall: jit",
"triaged",
"module: flaky-tests",
"skipped",
"oncall: pt2",
"module: dynamo"
] | 1 | NONE | Platforms: dynamo
This test was disabled because it is failing in CI. See [recent examples](https://hud.pytorch.org/flakytest?name=test_ntuple_builtins&suite=TestScript&limit=100) and the most recent trunk [workflow logs](https://github.com/pytorch/pytorch/runs/36489698024).
Over the past 3 hours, it has been determined flaky in 5 workflow(s) with 5 failures and 5 successes.
**Debugging instructions (after clicking on the recent samples link):**
DO NOT ASSUME THINGS ARE OKAY IF THE CI IS GREEN. We now shield flaky tests from developers so CI will thus be green but it will be harder to parse the logs.
To find relevant log snippets:
1. Click on the workflow logs linked above
2. Click on the Test step of the job so that it is expanded. Otherwise, the grepping will not work.
3. Grep for `test_ntuple_builtins`
4. There should be several instances run (as flaky tests are rerun in CI) from which you can study the logs.
<details><summary>Sample error message</summary>
```
Traceback (most recent call last):
File "/var/lib/jenkins/workspace/test/test_jit.py", line 10124, in test_ntuple_builtins
self.checkScript(test_ints, ())
~~~~~~~~~~~~~~~~^^^^^^^^^^^^^^^
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/testing/_internal/jit_utils.py", line 483, in checkScript
source = textwrap.dedent(inspect.getsource(script))
~~~~~~~~~~~~~~~~~^^^^^^^^
File "/opt/conda/envs/py_3.13/lib/python3.13/inspect.py", line 1256, in getsource
lines, lnum = getsourcelines(object)
~~~~~~~~~~~~~~^^^^^^^^
File "/opt/conda/envs/py_3.13/lib/python3.13/inspect.py", line 1238, in getsourcelines
lines, lnum = findsource(object)
~~~~~~~~~~^^^^^^^^
File "/opt/conda/envs/py_3.13/lib/python3.13/inspect.py", line 1074, in findsource
lines = linecache.getlines(file, module.__dict__)
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_dynamo/convert_frame.py", line 1400, in __call__
return self._torchdynamo_orig_callable(
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~^
frame, cache_entry, self.hooks, frame_state, skip=1
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
)
^
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_dynamo/convert_frame.py", line 1184, in __call__
result = self._inner_convert(
frame, cache_entry, hooks, frame_state, skip=skip + 1
)
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_dynamo/convert_frame.py", line 565, in __call__
return _compile(
frame.f_code,
...<14 lines>...
skip=skip + 1,
)
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_dynamo/convert_frame.py", line 1048, in _compile
raise InternalTorchDynamoError(
f"{type(e).__qualname__}: {str(e)}"
).with_traceback(e.__traceback__) from None
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_dynamo/convert_frame.py", line 997, in _compile
guarded_code = compile_inner(code, one_graph, hooks, transform)
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_utils_internal.py", line 95, in wrapper_function
return function(*args, **kwargs)
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_dynamo/convert_frame.py", line 726, in compile_inner
return _compile_inner(code, one_graph, hooks, transform)
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_dynamo/convert_frame.py", line 760, in _compile_inner
out_code = transform_code_object(code, transform)
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_dynamo/bytecode_transformation.py", line 1404, in transform_code_object
transformations(instructions, code_options)
~~~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_dynamo/convert_frame.py", line 236, in _fn
return fn(*args, **kwargs)
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_dynamo/convert_frame.py", line 680, in transform
tracer.run()
~~~~~~~~~~^^
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_dynamo/symbolic_convert.py", line 2906, in run
super().run()
~~~~~~~~~~~^^
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_dynamo/symbolic_convert.py", line 1078, in run
while self.step():
~~~~~~~~~^^
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_dynamo/symbolic_convert.py", line 988, in step
self.dispatch_table[inst.opcode](self, inst)
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~^^^^^^^^^^^^
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_dynamo/symbolic_convert.py", line 3087, in RETURN_VALUE
self._return(inst)
~~~~~~~~~~~~^^^^^^
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_dynamo/symbolic_convert.py", line 3062, in _return
and not self.symbolic_locals_contain_module_class()
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~^^
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_dynamo/symbolic_convert.py", line 3051, in symbolic_locals_contain_module_class
if isinstance(v, UserDefinedClassVariable) and issubclass(
~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_dynamo/variables/base.py", line 191, in __instancecheck__
instance = instance.realize()
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_dynamo/variables/lazy.py", line 67, in realize
self._cache.realize()
~~~~~~~~~~~~~~~~~~~^^
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_dynamo/variables/lazy.py", line 33, in realize
self.vt = VariableTracker.build(tx, self.value, source)
~~~~~~~~~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_dynamo/variables/base.py", line 456, in build
return builder.VariableBuilder(tx, source)(value)
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~^^^^^^^
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_dynamo/variables/builder.py", line 384, in __call__
vt = self._wrap(value)
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_dynamo/variables/builder.py", line 619, in _wrap
result = dict(
build_key_value(i, k, v)
for i, (k, v) in enumerate(get_items_from_dict(value))
)
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_dynamo/variables/builder.py", line 621, in <genexpr>
for i, (k, v) in enumerate(get_items_from_dict(value))
~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^
torch._dynamo.exc.InternalTorchDynamoError: RuntimeError: dictionary changed size during iteration
from user code:
File "/opt/conda/envs/py_3.13/lib/python3.13/linecache.py", line 38, in getlines
return cache[filename][2]
Set TORCH_LOGS="+dynamo" and TORCHDYNAMO_VERBOSE=1 for more information
You can suppress this exception and fall back to eager by setting:
import torch._dynamo
torch._dynamo.config.suppress_errors = True
To execute this test, run the following from the base repo dir:
PYTORCH_TEST_WITH_DYNAMO=1 python test/test_jit.py TestScript.test_ntuple_builtins
This message can be suppressed by setting PYTORCH_PRINT_REPRO_ON_FAILURE=0
```
</details>
Test file path: `test_jit.py`
cc @EikanWang @jgong5 @wenzhe-nrv @sanchitintel @clee2000 @wdvr @chauhang @penguinwu @voznesenskym @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @jiayisunx @chenyang78 @kadeng @amjames | true |
2,824,449,943 | DISABLED test_return_stmt_not_at_end (__main__.TestScript) | pytorch-bot[bot] | closed | [
"oncall: jit",
"triaged",
"module: flaky-tests",
"skipped",
"oncall: pt2",
"module: dynamo"
] | 1 | NONE | Platforms: dynamo
This test was disabled because it is failing in CI. See [recent examples](https://hud.pytorch.org/flakytest?name=test_return_stmt_not_at_end&suite=TestScript&limit=100) and the most recent trunk [workflow logs](https://github.com/pytorch/pytorch/runs/36489933553).
Over the past 3 hours, it has been determined flaky in 5 workflow(s) with 5 failures and 5 successes.
**Debugging instructions (after clicking on the recent samples link):**
DO NOT ASSUME THINGS ARE OKAY IF THE CI IS GREEN. We now shield flaky tests from developers so CI will thus be green but it will be harder to parse the logs.
To find relevant log snippets:
1. Click on the workflow logs linked above
2. Click on the Test step of the job so that it is expanded. Otherwise, the grepping will not work.
3. Grep for `test_return_stmt_not_at_end`
4. There should be several instances run (as flaky tests are rerun in CI) from which you can study the logs.
<details><summary>Sample error message</summary>
```
Traceback (most recent call last):
File "/var/lib/jenkins/workspace/test/test_jit.py", line 11386, in test_return_stmt_not_at_end
self.checkScript(return_stmt, (torch.rand(1),))
File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/testing/_internal/jit_utils.py", line 483, in checkScript
source = textwrap.dedent(inspect.getsource(script))
File "/opt/conda/envs/py_3.9/lib/python3.9/inspect.py", line 1024, in getsource
lines, lnum = getsourcelines(object)
File "/opt/conda/envs/py_3.9/lib/python3.9/inspect.py", line 1006, in getsourcelines
lines, lnum = findsource(object)
File "/opt/conda/envs/py_3.9/lib/python3.9/inspect.py", line 831, in findsource
lines = linecache.getlines(file, module.__dict__)
File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/_dynamo/convert_frame.py", line 1400, in __call__
return self._torchdynamo_orig_callable(
File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/_dynamo/convert_frame.py", line 1184, in __call__
result = self._inner_convert(
File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/_dynamo/convert_frame.py", line 565, in __call__
return _compile(
File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/_dynamo/convert_frame.py", line 1048, in _compile
raise InternalTorchDynamoError(
File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/_dynamo/convert_frame.py", line 997, in _compile
guarded_code = compile_inner(code, one_graph, hooks, transform)
File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/_utils_internal.py", line 95, in wrapper_function
return function(*args, **kwargs)
File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/_dynamo/convert_frame.py", line 726, in compile_inner
return _compile_inner(code, one_graph, hooks, transform)
File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/_dynamo/convert_frame.py", line 760, in _compile_inner
out_code = transform_code_object(code, transform)
File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/_dynamo/bytecode_transformation.py", line 1404, in transform_code_object
transformations(instructions, code_options)
File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/_dynamo/convert_frame.py", line 236, in _fn
return fn(*args, **kwargs)
File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/_dynamo/convert_frame.py", line 680, in transform
tracer.run()
File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/_dynamo/symbolic_convert.py", line 2906, in run
super().run()
File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/_dynamo/symbolic_convert.py", line 1078, in run
while self.step():
File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/_dynamo/symbolic_convert.py", line 988, in step
self.dispatch_table[inst.opcode](self, inst)
File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/_dynamo/symbolic_convert.py", line 3087, in RETURN_VALUE
self._return(inst)
File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/_dynamo/symbolic_convert.py", line 3062, in _return
and not self.symbolic_locals_contain_module_class()
File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/_dynamo/symbolic_convert.py", line 3051, in symbolic_locals_contain_module_class
if isinstance(v, UserDefinedClassVariable) and issubclass(
File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/_dynamo/variables/base.py", line 191, in __instancecheck__
instance = instance.realize()
File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/_dynamo/variables/lazy.py", line 67, in realize
self._cache.realize()
File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/_dynamo/variables/lazy.py", line 33, in realize
self.vt = VariableTracker.build(tx, self.value, source)
File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/_dynamo/variables/base.py", line 456, in build
return builder.VariableBuilder(tx, source)(value)
File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/_dynamo/variables/builder.py", line 384, in __call__
vt = self._wrap(value)
File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/_dynamo/variables/builder.py", line 619, in _wrap
result = dict(
File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/_dynamo/variables/builder.py", line 619, in <genexpr>
result = dict(
torch._dynamo.exc.InternalTorchDynamoError: RuntimeError: dictionary changed size during iteration
from user code:
File "/opt/conda/envs/py_3.9/lib/python3.9/linecache.py", line 43, in getlines
return cache[filename][2]
Set TORCH_LOGS="+dynamo" and TORCHDYNAMO_VERBOSE=1 for more information
You can suppress this exception and fall back to eager by setting:
import torch._dynamo
torch._dynamo.config.suppress_errors = True
To execute this test, run the following from the base repo dir:
PYTORCH_TEST_WITH_DYNAMO=1 python test/test_jit.py TestScript.test_return_stmt_not_at_end
This message can be suppressed by setting PYTORCH_PRINT_REPRO_ON_FAILURE=0
```
</details>
Test file path: `test_jit.py`
cc @EikanWang @jgong5 @wenzhe-nrv @sanchitintel @clee2000 @wdvr @chauhang @penguinwu @voznesenskym @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @jiayisunx @chenyang78 @kadeng @amjames | true |
2,824,449,825 | Add "//caffe2:libtorch" to minifier TARGET file | yushangdi | closed | [
"fb-exported",
"Merged",
"ciflow/trunk",
"topic: not user facing",
"module: dynamo",
"ciflow/inductor"
] | 5 | CONTRIBUTOR | Summary: as title. To avoid errors like "undefined symbol: aoti_torch_device_type_cpu" when compiling minifier_launcher.py
Test Plan: CI
Differential Revision: D68978430
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @chauhang @amjames | true |
2,824,444,676 | [ROCm] follow up to #138964, remove work-around | jeffdaily | closed | [
"module: rocm",
"open source",
"release notes: cuda",
"ciflow/rocm"
] | 3 | COLLABORATOR | PR #138964 used #ifdef to skip non-contig tensor copies on ROCm due to failing tests.
cc @sunway513 @jithunnair-amd @pruthvistony @ROCmSupport @dllehr-amd @jataylo @hongxiayang @naromero77amd | true |
2,824,441,236 | L-BFGS-B support | jithendaraa | open | [
"module: optimizer",
"triaged"
] | 5 | NONE | Does torch currently already support L-BFGS-B? I see the implementation of torch's LBGS, which does not seem to handle bounds.
Is there a plan for torch to support bounds with LBFGS?
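For context, a common workaround when the optimizer itself has no box-constraint support is to project parameters back into the feasible box after every step. A minimal sketch of that projection idea (using plain gradient descent instead of torch's LBFGS, purely for illustration — this is not equivalent to true L-BFGS-B, which handles the bounds inside the line search):

```python
def minimize_projected(grad, x0, lo, hi, lr=0.1, steps=100):
    """Gradient descent with a box projection after each step.

    Mimics the projection trick often used around optimizers
    without native bound support: take a step, then clamp the
    iterate back into [lo, hi].
    """
    x = x0
    for _ in range(steps):
        x = x - lr * grad(x)        # unconstrained step
        x = min(max(x, lo), hi)     # project into the box
    return x

# Minimize (x - 3)^2 subject to x <= 2: the constrained optimum is x = 2.
x_star = minimize_projected(lambda x: 2.0 * (x - 3.0), x0=0.0, lo=-10.0, hi=2.0)
```

With `torch.optim.LBFGS` the same clamp can be applied to the parameters (under `torch.no_grad()`) after each `step()`, but convergence guarantees of real L-BFGS-B are lost.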
cc @vincentqb @jbschlosser @albanD @janeyx99 @crcrpar | true |
2,824,369,055 | Manylinux 2.28 migration - remove pre-cxx11 abi libtorch builds | atalman | closed | [
"Merged",
"topic: not user facing"
] | 3 | CONTRIBUTOR | Related to: https://github.com/pytorch/pytorch/issues/123649
Removing pre-cxx11 abi builds.
As per announcement : https://dev-discuss.pytorch.org/t/pytorch-linux-wheels-switching-to-new-wheel-build-platform-manylinux-2-28-on-november-12-2024/2581 | true |
2,824,368,967 | docs: change log to ln in Softplus function and class | Serenazhu | open | [
"triaged",
"open source"
] | 4 | NONE | Updated the math formula in the softplus function in torch.nn.functional.py and the Softplus class in torch.nn.modules.activation.py from log to ln for correctness and accuracy.
| true |
2,824,363,766 | [dynamo][exceptions][3.10] Clean symbolic stack on exception handling | anijain2305 | closed | [
"Merged",
"ciflow/trunk",
"topic: not user facing",
"module: dynamo",
"ciflow/inductor"
] | 3 | CONTRIBUTOR | Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #146116
* #146219
* #146075
* #146070
* #146214
* __->__ #146198
* #146062
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @chauhang @amjames | true |
2,824,341,612 | include entire GraphModule instead of current node when erroring inside of fx interpreter | bdhirsh | closed | [
"Merged",
"ciflow/trunk",
"release notes: fx",
"fx"
] | 3 | CONTRIBUTOR | This seems like it would make it easier to diagnose PT2 issues where the user cannot easily repro, and we need more info in the backtrace, e.g. in https://github.com/pytorch/pytorch/issues/134182#issuecomment-2628076114
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #146197
cc @ezyang @SherlockNoMad @EikanWang @jgong5 @wenzhe-nrv | true |
2,824,316,427 | [ROCm] Indexing perf optimization via Unroll/WideFetch/IdxReuse/OneDupOpt | amd-hhashemi | closed | [
"module: rocm",
"triaged",
"open source",
"release notes: cuda",
"ciflow/rocm"
] | 2 | CONTRIBUTOR | Fixes #ISSUE_NUMBER
cc @jeffdaily @sunway513 @jithunnair-amd @pruthvistony @ROCmSupport @dllehr-amd @jataylo @hongxiayang @naromero77amd | true |
2,824,293,259 | Add non-strict export while_loop test back | ydwu4 | closed | [
"Merged",
"ciflow/trunk",
"topic: not user facing"
] | 9 | CONTRIBUTOR | Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #143457
* #146222
* __->__ #146195
* #146194
This is fixed by https://github.com/pytorch/pytorch/pull/145762
| true |
2,824,293,145 | [hop] enable while_loop return torch.ones with unbacked symbol expression. | ydwu4 | closed | [
"Merged",
"topic: not user facing"
] | 1 | CONTRIBUTOR | Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #143457
* #146222
* #146195
* __->__ #146194
| true |
2,824,287,429 | Add generate_stage_to_rank_mapping utility | H-Huang | closed | [
"oncall: distributed",
"Merged",
"release notes: distributed (pipeline)"
] | 1 | MEMBER | Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #146217
* __->__ #146193
We use `stage_index_to_group_rank` in the stage to determine which send/recv ops to issue, and in the schedule for IR generation. However, we don't need to expose this as an argument in our schedule class, so this stack of PRs removes it.
This PR creates a `stage_index_to_group_rank` utility function and removes the arg for the ZBV schedule. In a following PR I will add code to infer `stage_index_to_group_rank` for the CSV schedule path, and then we will be able to remove this argument from our classes entirely.
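The PR text doesn't show the utility's actual signature, but the idea of a stage-to-rank mapping for common pipeline placements can be sketched as follows (the function name, `style` values, and placements here are illustrative assumptions, not the real API):

```python
def stage_to_rank_mapping(num_stages, num_ranks, style="loop"):
    """Map each pipeline stage index to the rank that hosts it.

    "loop": stages wrap around the ranks (stage i -> rank i % num_ranks).
    "v":    ranks are traversed forward then backward, so each rank
            hosts a pair of stages (the V-shaped placement used by
            zero-bubble-style schedules).
    """
    if style == "loop":
        return {s: s % num_ranks for s in range(num_stages)}
    if style == "v":
        order = list(range(num_ranks)) + list(range(num_ranks - 1, -1, -1))
        return {s: order[s % len(order)] for s in range(num_stages)}
    raise ValueError(f"unknown style: {style}")

loop_map = stage_to_rank_mapping(8, 4, "loop")
v_map = stage_to_rank_mapping(8, 4, "v")  # rank 0 hosts stages 0 and 7
```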
Related comment from @wconstab https://github.com/pytorch/torchtitan/issues/774#issuecomment-2619793741
cc @awgu @kwen2501 @wanchaol @fegin @fduwjj @wz337 @wconstab @d4l3k @c-p-i-o | true |
2,824,271,030 | torch.check distributions | angelayi | closed | [] | 2 | CONTRIBUTOR | Fixes #ISSUE_NUMBER
| true |
2,824,269,441 | [cuda] Speed up layernorm backward by ~13% by using warp shuffles for the 16x32 kernel invocation | ahmadsharif1 | open | [
"Stale",
"release notes: cuda"
] | 2 | CONTRIBUTOR | Before this PR we had 2 kernels:
1. For blocksize=32x32, this kernel *only* used warp shuffles to do the reduction
2. For blocksize=16x32, this kernel *only* used shared memory to do the reduction
This PR replaces those two kernels with a single generic kernel with template parameters for the block size.
1. Uses template parameters for blockDim.x and blockDim.y.
1. Uses those template parameters to do a partial final reduction in shared memory if needed (i.e. if blockDim.y > 32).
1. Then, for the final 32 rows, it uses warp shuffles when we need to reduce 32 rows down to a single row.
1. Uses slightly more shared memory to reduce bank conflicts when reading the transposed data in both cases
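The warp-shuffle step the PR leans on is a tree reduction: at each round a lane adds the value held `offset` lanes away, and the offset halves until lane 0 holds the total. Modeled in plain Python (the real kernel does this with `__shfl_down_sync` across 32 hardware lanes, in lockstep rather than a loop):

```python
def warp_reduce_sum(lane_vals):
    """Model of a shuffle-down sum reduction over one 32-lane warp.

    Each round, lane i accumulates the value from lane i + offset,
    with offset = 16, 8, 4, 2, 1. Lane 0 ends up with the full sum.
    """
    vals = list(lane_vals)
    offset = len(vals) // 2
    while offset > 0:
        for lane in range(offset):
            vals[lane] += vals[lane + offset]
        offset //= 2
    return vals[0]

total = warp_reduce_sum(range(32))  # sum of lanes 0..31
```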
When compared to the baseline 16x32 kernel, ncu shows lower latency:

ncu shows much lower shared memory loads and stores:

ncu shows lower cycle count:

ncu shows lower sync instructions:

For the 32x32 kernel, nvcc in theory should optimize away the shared memory reduction loop completely and performance should be identical to the previous specialized kernel. | true |
2,824,229,487 | Discrepancy in Dropout between DTensor and torch.Tensor | bonpyt | closed | [
"oncall: distributed"
] | 1 | NONE | ### 🐛 Describe the bug
We are seeing differences in Dropout between `DTensor` and `torch.Tensor` and we think this is due to the CUDA PRNG state changing when using `DTensor`, but not when using `torch.Tensor`.
```
#!/usr/bin/env python3
import torch
import os
from contextlib import nullcontext
from torch.distributed.tensor import DTensor
from torch.distributed._tensor.device_mesh import init_device_mesh
def print_prng():
print(f"torch PRNG: {torch.get_rng_state().sum()}")
print(f"CUDA PRNG: {torch.cuda.get_rng_state().sum()}")
def main():
distributed = os.environ.get("RANK") is not None
print(f"distributed: {distributed}")
with init_device_mesh(
device_type="cuda", mesh_shape=(1,)
) if distributed else nullcontext() as device_mesh:
seed = 42
torch.manual_seed(seed)
torch.cuda.manual_seed(seed)
dropout = torch.nn.Dropout()
print_prng()
tensor = torch.rand(10, 10)
print_prng()
if distributed:
tensor = DTensor.from_local(tensor)
tensor = tensor.full_tensor()
print(f"tensor: {tensor.sum().item()}")
print_prng()
print(f"dropout: {dropout(tensor).sum().item()}")
print_prng()
print(f"dropout: {dropout(tensor).sum().item()}")
print_prng()
if __name__ == "__main__":
main()
```
```
$ ./test_dropout.py
distributed: False
torch PRNG: 316607
CUDA PRNG: 42
torch PRNG: 319252
CUDA PRNG: 42
tensor: 51.582977294921875
torch PRNG: 319252
CUDA PRNG: 42
dropout: 63.54338455200195
torch PRNG: 319252
CUDA PRNG: 42
dropout: 53.90919494628906
torch PRNG: 319252
CUDA PRNG: 42
$ torchrun --nnodes=1 --nproc_per_node=1 ./test_dropout.py
distributed: True
torch PRNG: 316607
CUDA PRNG: 42
torch PRNG: 319252
CUDA PRNG: 42
tensor: 51.582977294921875
torch PRNG: 319252
CUDA PRNG: 42
dropout: 52.835205078125
torch PRNG: 319252
CUDA PRNG: 46
dropout: 40.78480529785156
torch PRNG: 319252
CUDA PRNG: 50
```
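Until the underlying divergence is fixed, one user-side mitigation is to snapshot and restore the generator state around the op whose RNG consumption differs (for torch generators this is what `torch.random.fork_rng` packages up). The pattern, shown with the stdlib `random` module so it runs anywhere:

```python
import random

random.seed(42)
state = random.getstate()      # snapshot before the RNG-consuming op
first = random.random()        # the op draws from the generator

random.setstate(state)         # restore the snapshot
second = random.random()       # replaying gives the identical draw
```

Note this only restores determinism across calls; it does not make the DTensor and plain-tensor dropout masks agree with each other.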
### Versions
```
Collecting environment information...
PyTorch version: 2.6.0+cu124
Is debug build: False
CUDA used to build PyTorch: 12.4
ROCM used to build PyTorch: N/A
OS: Ubuntu 20.04.6 LTS (x86_64)
GCC version: (Ubuntu 9.4.0-1ubuntu1~20.04.2) 9.4.0
Clang version: 10.0.0-4ubuntu1
CMake version: version 3.16.3
Libc version: glibc-2.31
Python version: 3.12.8 (main, Dec 4 2024, 08:54:13) [GCC 9.4.0] (64-bit runtime)
Python platform: Linux-5.15.0-116-generic-x86_64-with-glibc2.31
Is CUDA available: True
CUDA runtime version: Could not collect
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration:
GPU 0: NVIDIA H200
GPU 1: NVIDIA H200
GPU 2: NVIDIA H200
GPU 3: NVIDIA H200
GPU 4: NVIDIA H200
GPU 5: NVIDIA H200
GPU 6: NVIDIA H200
GPU 7: NVIDIA H200
Nvidia driver version: 535.216.03
cuDNN version: Could not collect
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Byte Order: Little Endian
Address sizes: 46 bits physical, 57 bits virtual
CPU(s): 96
On-line CPU(s) list: 0-95
Thread(s) per core: 1
Core(s) per socket: 48
Socket(s): 2
NUMA node(s): 2
Vendor ID: GenuineIntel
CPU family: 6
Model: 207
Model name: INTEL(R) XEON(R) PLATINUM 8568Y+
Stepping: 2
CPU MHz: 2300.000
BogoMIPS: 4600.00
L1d cache: 4.5 MiB
L1i cache: 3 MiB
L2 cache: 192 MiB
L3 cache: 600 MiB
NUMA node0 CPU(s): 0,2,4,6,8,10,12,14,16,18,20,22,24,26,28,30,32,34,36,38,40,42,44,46,48,50,52,54,56,58,60,62,64,66,68,70,72,74,76,78,80,82,84,86,88,90,92,94
NUMA node1 CPU(s): 1,3,5,7,9,11,13,15,17,19,21,23,25,27,29,31,33,35,37,39,41,43,45,47,49,51,53,55,57,59,61,63,65,67,69,71,73,75,77,79,81,83,85,87,89,91,93,95
Vulnerability Gather data sampling: Not affected
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Not affected
Vulnerability Reg file data sampling: Not affected
Vulnerability Retbleed: Not affected
Vulnerability Spec rstack overflow: Not affected
Vulnerability Spec store bypass: Vulnerable
Vulnerability Spectre v1: Vulnerable: __user pointer sanitization and usercopy barriers only; no swapgs barriers
Vulnerability Spectre v2: Vulnerable; IBPB: disabled; STIBP: disabled; PBRSB-eIBRS: Vulnerable; BHI: Vulnerable
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc art arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc cpuid aperfmperf tsc_known_freq pni pclmulqdq dtes64 monitor ds_cpl smx est tm2 ssse3 sdbg fma cx16 xtpr pdcm pcid dca sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm 3dnowprefetch cpuid_fault epb cat_l3 cat_l2 cdp_l3 invpcid_single cdp_l2 ssbd mba ibrs ibpb stibp ibrs_enhanced fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid cqm rdt_a avx512f avx512dq rdseed adx smap avx512ifma clflushopt clwb intel_pt avx512cd sha_ni avx512bw avx512vl xsaveopt xsavec xgetbv1 xsaves cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local split_lock_detect avx_vnni avx512_bf16 wbnoinvd dtherm ida arat pln pts avx512vbmi umip pku ospke waitpkg avx512_vbmi2 gfni vaes vpclmulqdq avx512_vnni avx512_bitalg tme avx512_vpopcntdq la57 rdpid bus_lock_detect cldemote movdiri movdir64b enqcmd fsrm md_clear serialize tsxldtrk pconfig arch_lbr amx_bf16 avx512_fp16 amx_tile amx_int8 flush_l1d arch_capabilities
Versions of relevant libraries:
[pip3] mypy==1.13.0
[pip3] mypy-extensions==1.0.0
[pip3] mypy-protobuf==3.6.0
[pip3] numpy==2.0.1
[pip3] nvidia-cublas-cu12==12.4.5.8
[pip3] nvidia-cuda-cupti-cu12==12.4.127
[pip3] nvidia-cuda-nvrtc-cu12==12.4.127
[pip3] nvidia-cuda-runtime-cu12==12.4.127
[pip3] nvidia-cudnn-cu12==9.1.0.70
[pip3] nvidia-cufft-cu12==11.2.1.3
[pip3] nvidia-curand-cu12==10.3.5.147
[pip3] nvidia-cusolver-cu12==11.6.1.9
[pip3] nvidia-cusparse-cu12==12.3.1.170
[pip3] nvidia-cusparselt-cu12==0.6.2
[pip3] nvidia-nccl-cu12==2.21.5
[pip3] nvidia-nvjitlink-cu12==12.4.127
[pip3] nvidia-nvtx-cu12==12.4.127
[pip3] pytorch-lightning==2.4.0
[pip3] torch==2.6.0
[pip3] torchmetrics==1.6.0
[pip3] torchvision==0.19.1
[pip3] triton==3.2.0
[pip3] tritonclient==2.46.0
[conda] Could not collect
```
cc @H-Huang @awgu @kwen2501 @wanchaol @fegin @fduwjj @wz337 @wconstab @d4l3k @c-p-i-o | true |
2,824,198,502 | [test] | clee2000 | closed | [
"release notes: releng",
"topic: not user facing"
] | 1 | CONTRIBUTOR | nccl dist | true |
2,824,179,037 | dynamo: fsdp throw unimplemented vs attribute error | c00w | closed | [
"oncall: distributed",
"Merged",
"ciflow/trunk",
"topic: not user facing",
"module: dynamo",
"ciflow/inductor"
] | 6 | CONTRIBUTOR | Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #146188
Rather than throwing a full exception for FSDP, just return unimplemented
and respect the user's options (i.e. fullgraph vs. graph break).
cc @H-Huang @awgu @kwen2501 @wanchaol @fegin @fduwjj @wz337 @wconstab @d4l3k @c-p-i-o @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @chauhang @amjames | true |
2,824,174,708 | [ONNX] torch.onnx.export(dynamo=True) changes optimization to default | titaiwangms | closed | [
"open source",
"Merged",
"ciflow/trunk",
"release notes: onnx",
"topic: improvements"
] | 8 | COLLABORATOR | Fixes #145897 | true |
2,824,166,265 | DISABLED test_pytree_register_nested_data_class_retraceability_non_strict (__main__.RetraceExportNonStrictTestExport) | pytorch-bot[bot] | closed | [
"module: flaky-tests",
"skipped",
"oncall: pt2",
"oncall: export"
] | 3 | NONE | Platforms: asan, linux, mac, macos, rocm, slow
This test was disabled because it is failing in CI. See [recent examples](https://hud.pytorch.org/flakytest?name=test_pytree_register_nested_data_class_retraceability_non_strict&suite=RetraceExportNonStrictTestExport&limit=100) and the most recent trunk [workflow logs](https://github.com/pytorch/pytorch/runs/36482304223).
Over the past 3 hours, it has been determined flaky in 12 workflow(s) with 24 failures and 12 successes.
**Debugging instructions (after clicking on the recent samples link):**
DO NOT ASSUME THINGS ARE OKAY IF THE CI IS GREEN. We now shield flaky tests from developers so CI will thus be green but it will be harder to parse the logs.
To find relevant log snippets:
1. Click on the workflow logs linked above
2. Click on the Test step of the job so that it is expanded. Otherwise, the grepping will not work.
3. Grep for `test_pytree_register_nested_data_class_retraceability_non_strict`
4. There should be several instances run (as flaky tests are rerun in CI) from which you can study the logs.
Test file path: `export/test_retraceability.py`
cc @clee2000 @wdvr @chauhang @penguinwu @avikchaudhuri @gmagogsfm @zhxchen17 @tugsbayasgalan @angelayi @suo @ydwu4 | true |
2,824,166,226 | DISABLED test_repeated_calling_cuda (__main__.AOTInductorTestABICompatibleGpu) | pytorch-bot[bot] | open | [
"triaged",
"module: flaky-tests",
"skipped",
"oncall: pt2",
"module: inductor"
] | 21 | NONE | Platforms: linux, rocm, slow, inductor
This test was disabled because it is failing in CI. See [recent examples](https://hud.pytorch.org/flakytest?name=test_repeated_calling_cuda&suite=AOTInductorTestABICompatibleGpu&limit=100) and the most recent trunk [workflow logs](https://github.com/pytorch/pytorch/runs/36490493384).
Over the past 3 hours, it has been determined flaky in 7 workflow(s) with 14 failures and 7 successes.
**Debugging instructions (after clicking on the recent samples link):**
DO NOT ASSUME THINGS ARE OKAY IF THE CI IS GREEN. We now shield flaky tests from developers so CI will thus be green but it will be harder to parse the logs.
To find relevant log snippets:
1. Click on the workflow logs linked above
2. Click on the Test step of the job so that it is expanded. Otherwise, the grepping will not work.
3. Grep for `test_repeated_calling_cuda`
4. There should be several instances run (as flaky tests are rerun in CI) from which you can study the logs.
Test file path: `inductor/test_aot_inductor.py`
cc @clee2000 @wdvr @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @amjames @desertfire @chauhang @aakhundov | true |
2,824,166,115 | DISABLED test_pytree_register_nested_data_class_training_ir_to_decomp_non_strict (__main__.TrainingIRToRunDecompExportNonStrictTestExport) | pytorch-bot[bot] | closed | [
"module: flaky-tests",
"skipped",
"oncall: pt2",
"oncall: export"
] | 3 | NONE | Platforms: asan, linux, mac, macos, rocm, slow
This test was disabled because it is failing in CI. See [recent examples](https://hud.pytorch.org/flakytest?name=test_pytree_register_nested_data_class_training_ir_to_decomp_non_strict&suite=TrainingIRToRunDecompExportNonStrictTestExport&limit=100) and the most recent trunk [workflow logs](https://github.com/pytorch/pytorch/runs/36482598568).
Over the past 3 hours, it has been determined flaky in 12 workflow(s) with 24 failures and 12 successes.
**Debugging instructions (after clicking on the recent samples link):**
DO NOT ASSUME THINGS ARE OKAY IF THE CI IS GREEN. We now shield flaky tests from developers so CI will thus be green but it will be harder to parse the logs.
To find relevant log snippets:
1. Click on the workflow logs linked above
2. Click on the Test step of the job so that it is expanded. Otherwise, the grepping will not work.
3. Grep for `test_pytree_register_nested_data_class_training_ir_to_decomp_non_strict`
4. There should be several instances run (as flaky tests are rerun in CI) from which you can study the logs.
Test file path: `export/test_export_training_ir_to_run_decomp.py`
cc @clee2000 @wdvr @chauhang @penguinwu @avikchaudhuri @gmagogsfm @zhxchen17 @tugsbayasgalan @angelayi @suo @ydwu4 | true |
2,824,166,036 | DISABLED test_pytree_register_nested_data_class_serdes_non_strict (__main__.SerDesExportNonStrictTestExport) | pytorch-bot[bot] | closed | [
"module: flaky-tests",
"skipped",
"oncall: pt2",
"oncall: export"
] | 3 | NONE | Platforms: asan, linux, mac, macos, rocm, slow
This test was disabled because it is failing in CI. See [recent examples](https://hud.pytorch.org/flakytest?name=test_pytree_register_nested_data_class_serdes_non_strict&suite=SerDesExportNonStrictTestExport&limit=100) and the most recent trunk [workflow logs](https://github.com/pytorch/pytorch/runs/36482598885).
Over the past 3 hours, it has been determined flaky in 12 workflow(s) with 24 failures and 12 successes.
**Debugging instructions (after clicking on the recent samples link):**
DO NOT ASSUME THINGS ARE OKAY IF THE CI IS GREEN. We now shield flaky tests from developers so CI will thus be green but it will be harder to parse the logs.
To find relevant log snippets:
1. Click on the workflow logs linked above
2. Click on the Test step of the job so that it is expanded. Otherwise, the grepping will not work.
3. Grep for `test_pytree_register_nested_data_class_serdes_non_strict`
4. There should be several instances run (as flaky tests are rerun in CI) from which you can study the logs.
<details><summary>Sample error message</summary>
```
Traceback (most recent call last):
File "/var/lib/jenkins/workspace/test/export/test_export.py", line 5207, in test_pytree_register_nested_data_class
self.assertEqual(roundtrip_spec, spec)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 4042, in assertEqual
raise error_metas.pop()[0].to_error( # type: ignore[index]
AssertionError: Object comparison failed: TreeS[395 chars],
TreeSpec(Inner, [['x', 'y'], []], [*,
*])])])]) != TreeS[395 chars],
TreeSpec(Inner, [['x', 'y'], []], [*,
*])])])])
To execute this test, run the following from the base repo dir:
python test/export/test_serdes.py SerDesExportNonStrictTestExport.test_pytree_register_nested_data_class_serdes_non_strict
This message can be suppressed by setting PYTORCH_PRINT_REPRO_ON_FAILURE=0
```
</details>
Test file path: `export/test_serdes.py`
cc @clee2000 @wdvr @chauhang @penguinwu @avikchaudhuri @gmagogsfm @zhxchen17 @tugsbayasgalan @angelayi @suo @ydwu4 | true |
2,824,134,917 | [export] Allow bypassing version check with unsafe API. | zhxchen17 | closed | [
"fb-exported",
"Stale",
"ciflow/trunk",
"release notes: export"
] | 3 | CONTRIBUTOR | Summary:
as title.
https://fb.workplace.com/groups/1028545332188949/permalink/10024343514259357/
Test Plan:
```
with torch.export._unsafe_skip_version_check():
ep = torch.export.load(...)
```
CI
Differential Revision: D68791202
| true |
2,824,092,089 | fix internal error with reorder submodules | avikchaudhuri | closed | [
"fb-exported",
"Merged",
"ciflow/trunk",
"release notes: export"
] | 9 | CONTRIBUTOR | Test Plan: hard to isolate as small repro
Differential Revision: D68963033
| true |
2,824,089,657 | [AOTI] Improve readability of package_cpp_only | desertfire | closed | [
"Stale",
"topic: improvements",
"module: inductor",
"ciflow/inductor",
"release notes: inductor"
] | 2 | CONTRIBUTOR | Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #146180
Summary: Made two improvements here: 1) Emit interface.cpp into a separate file instead of embedding it in the model code; 2) Add a prefix to mark the generated files as model code or weights (constants).
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov | true |
2,824,079,337 | execution trace export supports gzip format | briancoutinho | closed | [
"Merged",
"ciflow/trunk",
"topic: not user facing"
] | 6 | CONTRIBUTOR | As above, allows Chakra Execution Trace observer to support compressing files.
Usage is straightforward, just add ".gz" suffix to the output file name
```
et = ExecutionTraceObserver()
et.register_callback("my_trace.json.gz")
```
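The likely mechanism — choosing a gzip writer when the file name ends in `.gz` — can be sketched in a few lines (a hypothetical `open_trace` helper; the observer's real implementation lives in C++ and is not shown here):

```python
import gzip
import json
import os
import tempfile

def open_trace(path, mode="wt"):
    # Pick a gzip text writer when the name ends in ".gz",
    # otherwise a plain text file.
    if path.endswith(".gz"):
        return gzip.open(path, mode)
    return open(path, mode)

path = os.path.join(tempfile.mkdtemp(), "my_trace.json.gz")
with open_trace(path) as f:
    json.dump({"nodes": []}, f)

# The file on disk is gzip-compressed; reading it back requires gzip.
with gzip.open(path, "rt") as f:
    trace = json.load(f)
```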
| true |
2,824,073,247 | fix internal error with reorder submodules | avikchaudhuri | closed | [
"release notes: export"
] | 1 | CONTRIBUTOR | Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* (to be filled)
Differential Revision: [D68963033](https://our.internmc.facebook.com/intern/diff/D68963033/) | true |
2,824,066,416 | [dynamo] Revert abc change due to internal failures | anijain2305 | closed | [
"Merged",
"ciflow/trunk",
"topic: not user facing",
"module: dynamo",
"ciflow/inductor"
] | 3 | CONTRIBUTOR | Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #146177
* #146141
xref - https://www.internalfb.com/tasks/?t=191383874
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @chauhang @amjames | true |
2,824,055,500 | [executorch hash update] update the pinned executorch hash | mergennachin | closed | [
"Stale",
"ciflow/trunk",
"topic: not user facing",
"ciflow/inductor"
] | 7 | CONTRIBUTOR | Based on latest green in HUD https://hud.pytorch.org/hud/pytorch/executorch/main/1?per_page=50
| true |
2,824,040,204 | add WaitCounter type interface and get rid of type errors | burak-turk | closed | [
"fb-exported",
"Merged",
"ciflow/trunk",
"topic: not user facing"
] | 8 | CONTRIBUTOR | Summary: as titled
Differential Revision: D68960123
| true |
2,824,028,584 | Temp disable MKL in DistributionKernels.cpp | malfet | closed | [
"module: cpu",
"Merged",
"ciflow/trunk",
"release notes: python_frontend",
"topic: bug fixes"
] | 4 | CONTRIBUTOR | Until https://github.com/pytorch/pytorch/issues/132395 is addressed
Test plan: Add test based on the script below (taken from https://discuss.pytorch.org/t/bug-in-torch-multinomial-generated-distribution-is-modestly-incorrect-edit-this-is-a-regression-and-appears-to-be-due-to-an-analogous-bug-in-tensor-exponential )
```python
import torch

high_bits_for_seed = 16000000000000000000  # to use a "good quality" seed
_ = torch.manual_seed(high_bits_for_seed + 2024)
prob = torch.ones(26)
dups_mult = 0
perm_counts_mult = {}
for _ in range(1_000_000):
    p = tuple(torch.multinomial(prob, prob.numel(), replacement=False).tolist())
    if p in perm_counts_mult:
        dups_mult += 1
        perm_counts_mult[p] += 1
    else:
        perm_counts_mult[p] = 1
print('duplicate multinomial perms: ', dups_mult)
print('multiple multinomial perms: ', (torch.tensor(list(perm_counts_mult.values())) > 1).sum().item())
print('max of perm_counts_mult: ', torch.tensor(list(perm_counts_mult.values())).max().item())
print('len(perm_counts_mult): ', len(perm_counts_mult))
```
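As a back-of-envelope check of why even a single duplicate signals bias (illustrative only; this estimate is not part of the PR's test plan): with 26! equally likely permutations, a million unbiased draws should essentially never collide.

```python
import math

n_draws = 1_000_000
n_perms = math.factorial(26)  # ~4.03e26 distinct orderings of 26 items

# Birthday-paradox estimate of expected duplicate pairs: C(n, 2) / N
expected_dups = math.comb(n_draws, 2) / n_perms
# expected_dups is on the order of 1e-15, so any observed duplicate
# in the script above indicates a biased sampler, not bad luck.
```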
This is a reland of https://github.com/pytorch/pytorch/pull/132532, but it excludes internal builds that already have some hardcoded values
cc @jgong5 @mingfeima @XiaobingSuper @sanchitintel @ashokei @jingxu10 | true |
2,824,011,904 | [CI] Get rid of UCC builds | malfet | open | [
"Stale",
"topic: not user facing"
] | 13 | CONTRIBUTOR | Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #146173
There hasn't been any active development/testing of those in last 2 years | true |
2,824,000,579 | Factory function support for NestedTensor | soulitzer | open | [
"release notes: nested tensor",
"module: dynamo",
"ciflow/inductor",
"no-stale"
] | 2 | CONTRIBUTOR | Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #146172
* #146101
* #145922
* #141842
* #141841
* #146052
Rebase of https://github.com/pytorch/pytorch/pull/117904 removing unnecessary bits now that python nested int already holds the necessary metadata.
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @chauhang @amjames | true |
2,823,950,482 | Noob attempt at tensor_pointer_to_tensor_handle accepting const | janeyx99 | closed | [
"Stale",
"ciflow/inductor",
"release notes: inductor"
] | 3 | CONTRIBUTOR | Fairly certain this will fail lint but is there a reason creating an AtenTensorHandle is not const preserving?
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #146171
| true |
2,823,940,603 | [ROCm] Tune 3d tensor sums when not using fastest dimension | doru1004 | closed | [
"module: rocm",
"triaged",
"open source",
"Merged",
"ciflow/trunk",
"topic: not user facing",
"ciflow/periodic",
"rocm",
"ciflow/rocm",
"ciflow/inductor-rocm"
] | 6 | CONTRIBUTOR | Tune 3d tensor sums when not using fastest dimension.
cc @jeffdaily @sunway513 @jithunnair-amd @pruthvistony @ROCmSupport @dllehr-amd @jataylo @hongxiayang @naromero77amd | true |
2,823,935,306 | Remove trivial dispatch_key_allowlist_check function | janeyx99 | closed | [
"Merged",
"ciflow/trunk",
"topic: not user facing"
] | 3 | CONTRIBUTOR | Hmmm...this _is_ removing a public function from a public C++ file. But the GH counts for this function total 83, seemingly all copying pytorch: https://github.com/search?q=dispatch_key_allowlist_check&type=code&p=1
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #146169
| true |
2,823,831,780 | [dynamo] dynamo fails to compile a correct dynamic graph and lead to unexpected recompiles | yyp0 | open | [
"triaged",
"oncall: pt2",
"module: dynamic shapes",
"module: compiled autograd"
] | 5 | NONE | ### 🐛 Describe the bug
Dynamo is expected to compile a dynamic graph when specific sizes change. However, in our case, it seems Dynamo fails to do that. Graph in the first iteration:
```
class GraphModule(torch.nn.Module):
def forward(self, L_inputs_ : list, ...):
getitem: "bf16[**4090**, 1, 4096]" = l_inputs_[0]
view: "bf16[4090, 4096]" = torch.ops.aten.view.default(getitem, [**4090**, 4096])
```
Graph in the second iteration:
```
class GraphModule(torch.nn.Module):
def forward(self, L_inputs_ : list, **L_sizes_4_: "Sym(4089)"**, ...):
getitem: "bf16[**4089**, 1, 4096]" = l_inputs_[0]
view: "bf16[4089, 4096]" = torch.ops.aten.view.default(getitem, [**l_sizes_4_**, 4096]); l_sizes_4_ = None
```
It seems that the scalar (`L_sizes_4_`) and the dynamic dim (4090 vs. 4089) are folded, and the following guard is added, leading to guard-check failures and recompiles:
```
L['sizes'][4] == 4089 # view = torch.ops.aten.view.default(getitem, [getitem_63, 4096]); getitem_63 = None # <eval_with_key>.3:244 in forward (_refs/__init__.py:3755 in _reshape_view_helper)
```
Do you know why `L_sizes_4_` and `getitem` are treated as constants when the dim changes? By the way, I have enabled compiled_autograd in this case and only trace the backward graph; I'm not sure whether compiled_autograd has any side effects on Dynamo's dynamic-shape handling.
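To illustrate the expected behavior (a toy model only; this is not Dynamo's actual guard mechanism): a guard that specializes on the exact size recompiles for every new size, while a symbolic guard on that dim compiles once.

```python
class ToyCompiler:
    """Caches 'compiled graphs' keyed by a guard over the input shape."""

    def __init__(self, dynamic_dims=()):
        self.dynamic_dims = set(dynamic_dims)  # dims guarded symbolically
        self.cache = {}
        self.recompiles = 0

    def run(self, shape):
        # Static dims guard on the exact size; dynamic dims match any size.
        key = tuple("s" if i in self.dynamic_dims else d
                    for i, d in enumerate(shape))
        if key not in self.cache:
            self.recompiles += 1
            self.cache[key] = key  # stand-in for a compiled graph
        return self.cache[key]


static = ToyCompiler()
dynamic = ToyCompiler(dynamic_dims={0})
for n in (4090, 4089, 4088):
    static.run((n, 1, 4096))
    dynamic.run((n, 1, 4096))
# static.recompiles == 3, dynamic.recompiles == 1
```

The bug report above amounts to Dynamo behaving like `static` on iteration two even though it emitted a symbolic input (`L_sizes_4_`).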
cc @chauhang @penguinwu @ezyang @bobrenjc93 @xmfan @yf225
### Versions
pytorch 2.6 | true |
2,823,623,434 | [inductor] Add Python type annotations to `torch/_inductor` | rec | open | [
"module: typing",
"triaged",
"better-engineering",
"oncall: pt2",
"module: inductor"
] | 6 | COLLABORATOR | ### 🚀 The feature, motivation and pitch
[Type annotations](https://docs.python.org/3/library/typing.html) make new development and maintenance easier, and sometimes find bugs.
And `torch/_inductor` is tricky, and under constant modification by disparate developers.
### How?
Adding annotations occasionally finds latent bugs, but the real payoff is in faster and more accurate maintenance and new development that uses the annotated code.
If we knew which files, classes and functions were going to be used in future development, we could prioritize annotating those.
What we _can_ measure is what gets imported in existing code.
[This little script](https://github.com/rec/test/blob/master/python/importer_counter.py) gives the following sorted counts of imports from `_inductor` over all of `torch/`:
* `.pattern_matcher`: 486
* `.utils`: 324
* `.ir`: 137
* `.codegen.common`: 131
* `.virtualized`: 111
* `.codecache`: 59
* `.lowering`: 54
* `.scheduler`: 52
* ... a lot more
So there are 486 imports within `torch/` of either `torch._inductor.pattern_matcher` itself or a symbol contained within it.
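For reference, the counting itself can be sketched with the stdlib `ast` module (a hypothetical `count_imports` helper; the linked script is the authoritative version):

```python
import ast
import collections
import pathlib


def count_imports(root: str, package: str = "torch._inductor"):
    """Count imports of `package` submodules across all .py files under root."""
    counts = collections.Counter()
    for path in pathlib.Path(root).rglob("*.py"):
        try:
            tree = ast.parse(path.read_text(encoding="utf-8"))
        except SyntaxError:
            continue
        for node in ast.walk(tree):
            if isinstance(node, ast.ImportFrom):
                if node.module and node.module.startswith(package):
                    # Each imported name counts once.
                    counts[node.module] += len(node.names)
            elif isinstance(node, ast.Import):
                for alias in node.names:
                    if alias.name.startswith(package):
                        counts[alias.name] += 1
    return counts
```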
### Deliverable (per file)
* Removal of `# mypy: allow-untyped-defs` and `# mypy: ignore-errors` statements
* Evaluate `# mypy: allow-untyped-decorators` (possibly keep; typing decorators correctly is arduous)
* For already-typed files, quickly check typing on the most imported symbols
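As a sketch of what removing the opt-out pragma entails per function (the names here are hypothetical, not from `_inductor`):

```python
# Before (the file opts out of checking):
#   # mypy: allow-untyped-defs
#   def scale(x, factor):
#       return [v * factor for v in x]

# After (pragma removed, the def is annotated):
def scale(x: list[float], factor: float) -> list[float]:
    return [v * factor for v in x]
```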
That script above also drills down into individual symbols, for example:
```
"torch._inductor.pattern_matcher": {
"CallFunction": 32,
"KeywordArg": 30,
"Arg": 29,
"CallFunctionVarArgs": 27,
"Ignored": 26,
"ListOf": 26,
```
## Tracking
- [x] `utils.py`: https://github.com/pytorch/pytorch/pull/144108
- [x] `pattern_matcher.py`: https://github.com/pytorch/pytorch/pull/146626
- [x] ~~`ir.py`: https://github.com/pytorch/pytorch/pull/148358~~
- [ ] `ir.py`: https://github.com/pytorch/pytorch/pull/149958
- [ ] More `ir.py`: https://github.com/pytorch/pytorch/pull/149959
- [ ] `codegen/common.py`: https://github.com/pytorch/pytorch/pull/150767
- [ ] `virtualized.py`: (in progress @zeshengzong)
- [ ] `code_cache.py`: also https://github.com/pytorch/pytorch/pull/150767
- [ ] `lowering.py`: the first line disables all type checking: removing that reveals a hefty 395 errors; removing `type: ignores` adds another 25 errors
### Alternatives
Fumbling ahead with an ongoing ignorance of type information. 😁
cc @ezyang @malfet @xuzhao9 @gramster @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @amjames @desertfire @chauhang @aakhundov | true |
2,823,456,975 | Build problems on Windows | matteosal | closed | [
"module: build",
"module: windows",
"triaged",
"actionable"
] | 6 | NONE | My end goal is to build the pytorch libraries from source and use them via the C++ API in an external project. On Windows torch_cpu.dll fails to load into my process and the OS reports the following error:
```
Library load error 1114: A dynamic link library (DLL) initialization routine failed.
```
I have tried rebuilding the libraries in several ways, including with these minimal settings to get rid of optional dependencies:
```
cd C:\Users\Work\Git\External\pytorch
set USE_NUMPY=0
set USE_FBGEMM=0
set USE_MKLDNN=0
set USE_DISTRIBUTED=0
set USE_CUDA=0
set CMAKE_GENERATOR=Visual Studio 17 2022
python setup.py clean
python setup.py develop
```
I have also tried to load the libraries in a minimal standalone program:
```
#include <windows.h>
#include <iostream>
int main()
{
HINSTANCE hGetProcIDDLL1 = LoadLibrary(L"C:\\Users\\Work\\Git\\External\\pytorch\\build\\bin\\Release\\c10.dll");
HINSTANCE hGetProcIDDLL2 = LoadLibrary(L"C:\\Users\\Work\\Git\\External\\pytorch\\build\\bin\\Release\\torch_cpu.dll");
std::cout << "Done!\n";
return EXIT_SUCCESS;
}
```
I have compiled the above code running this command from the "x64 Native Tools Command Prompt for VS 2022":
```
cl /D_DEBUG /D_CONSOLE /D_UNICODE /DUNICODE /ZI /MDd load_library.cpp
```
And ran it with:
```
devenv /DebugExe .\load_library.exe
```
When clicking on the start button in the Visual Studio window that opens up, I see this error:

I have also tried to simply start a Python session with the built module, but I'm getting a different error that seems unrelated to the above (maybe some configuration error on my side):
```
>>> import sys
>>> sys.path.append('C:\\Users\\Work\\Git\\External\\pytorch')
>>> import torch
Traceback (most recent call last):
File "<python-input-3>", line 1, in <module>
import torch
File "C:\Users\Work\Git\External\pytorch\torch\__init__.py", line 899, in <module>
raise ImportError(
...<14 lines>...
) from None
ImportError: Failed to load PyTorch C extensions:
It appears that PyTorch has loaded the `torch/_C` folder
of the PyTorch repository rather than the C extensions which
are expected in the `torch._C` namespace. This can occur when
using the `install` workflow. e.g.
$ python setup.py install && python -c "import torch"
This error can generally be solved using the `develop` workflow
$ python setup.py develop && python -c "import torch" # This should succeed
or by running Python from a different directory.
```
This error mentions building with `develop`, but that's what I have done. Anyway, this was just to check whether the library loads correctly in the Python process.
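The shadowing behind that `torch._C` error can be reproduced with any package name (a self-contained demonstration; `shadow_demo_pkg` is a made-up name): a plain source folder earlier on `sys.path` wins over an installed package, which is why running Python from inside the checkout picks up `torch/_C` as a folder.

```python
import importlib.util
import os
import sys
import tempfile

# Create a bare package folder and put its parent at the front of sys.path,
# mimicking running Python from inside a source checkout.
d = tempfile.mkdtemp()
pkg = os.path.join(d, "shadow_demo_pkg")
os.makedirs(pkg)
open(os.path.join(pkg, "__init__.py"), "w").close()
sys.path.insert(0, d)

spec = importlib.util.find_spec("shadow_demo_pkg")
shadowed = spec.origin.startswith(d)  # the folder on sys.path wins the lookup
```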
cc @malfet @seemethere @peterjc123 @mszhanyi @skyline75489 @nbcsm @iremyux @Blackhex | true |
2,823,395,183 | DISABLED test_serialized_source_ranges2 (__main__.TestScript) | pytorch-bot[bot] | closed | [
"oncall: jit",
"triaged",
"module: flaky-tests",
"skipped",
"oncall: pt2",
"module: dynamo"
] | 1 | NONE | Platforms: dynamo
This test was disabled because it is failing in CI. See [recent examples](https://hud.pytorch.org/flakytest?name=test_serialized_source_ranges2&suite=TestScript&limit=100) and the most recent trunk [workflow logs](https://github.com/pytorch/pytorch/runs/36463607655).
Over the past 3 hours, it has been determined flaky in 4 workflow(s) with 4 failures and 4 successes.
**Debugging instructions (after clicking on the recent samples link):**
DO NOT ASSUME THINGS ARE OKAY IF THE CI IS GREEN. We now shield flaky tests from developers so CI will thus be green but it will be harder to parse the logs.
To find relevant log snippets:
1. Click on the workflow logs linked above
2. Click on the Test step of the job so that it is expanded. Otherwise, the grepping will not work.
3. Grep for `test_serialized_source_ranges2`
4. There should be several instances run (as flaky tests are rerun in CI) from which you can study the logs.
<details><summary>Sample error message</summary>
```
Traceback (most recent call last):
File "/var/lib/jenkins/workspace/test/test_jit.py", line 4426, in test_serialized_source_ranges2
class FooTest2(torch.jit.ScriptModule):
...<2 lines>...
raise RuntimeError('foo')
File "/var/lib/jenkins/workspace/test/test_jit.py", line 4427, in FooTest2
@torch.jit.script_method
^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/jit/_script.py", line 365, in script_method
ast = get_jit_def(fn, fn.__name__, self_name="ScriptModule")
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/jit/frontend.py", line 341, in get_jit_def
parsed_def = parse_def(fn) if not isinstance(fn, _ParsedDef) else fn
~~~~~~~~~^^^^
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_sources.py", line 121, in parse_def
sourcelines, file_lineno, filename = get_source_lines_and_file(
~~~~~~~~~~~~~~~~~~~~~~~~~^
fn, ErrorReport.call_stack()
^^^^^^^^^^^^^^^^^^^^^^^^^^^^
)
^
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_sources.py", line 24, in get_source_lines_and_file
sourcelines, file_lineno = inspect.getsourcelines(obj)
~~~~~~~~~~~~~~~~~~~~~~^^^^^
File "/opt/conda/envs/py_3.13/lib/python3.13/inspect.py", line 1238, in getsourcelines
lines, lnum = findsource(object)
~~~~~~~~~~^^^^^^^^
File "/opt/conda/envs/py_3.13/lib/python3.13/inspect.py", line 1074, in findsource
lines = linecache.getlines(file, module.__dict__)
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_dynamo/convert_frame.py", line 1400, in __call__
return self._torchdynamo_orig_callable(
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~^
frame, cache_entry, self.hooks, frame_state, skip=1
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
)
^
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_dynamo/convert_frame.py", line 1184, in __call__
result = self._inner_convert(
frame, cache_entry, hooks, frame_state, skip=skip + 1
)
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_dynamo/convert_frame.py", line 565, in __call__
return _compile(
frame.f_code,
...<14 lines>...
skip=skip + 1,
)
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_dynamo/convert_frame.py", line 1048, in _compile
raise InternalTorchDynamoError(
f"{type(e).__qualname__}: {str(e)}"
).with_traceback(e.__traceback__) from None
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_dynamo/convert_frame.py", line 997, in _compile
guarded_code = compile_inner(code, one_graph, hooks, transform)
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_utils_internal.py", line 95, in wrapper_function
return function(*args, **kwargs)
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_dynamo/convert_frame.py", line 726, in compile_inner
return _compile_inner(code, one_graph, hooks, transform)
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_dynamo/convert_frame.py", line 760, in _compile_inner
out_code = transform_code_object(code, transform)
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_dynamo/bytecode_transformation.py", line 1404, in transform_code_object
transformations(instructions, code_options)
~~~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_dynamo/convert_frame.py", line 236, in _fn
return fn(*args, **kwargs)
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_dynamo/convert_frame.py", line 680, in transform
tracer.run()
~~~~~~~~~~^^
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_dynamo/symbolic_convert.py", line 2906, in run
super().run()
~~~~~~~~~~~^^
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_dynamo/symbolic_convert.py", line 1078, in run
while self.step():
~~~~~~~~~^^
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_dynamo/symbolic_convert.py", line 988, in step
self.dispatch_table[inst.opcode](self, inst)
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~^^^^^^^^^^^^
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_dynamo/symbolic_convert.py", line 3087, in RETURN_VALUE
self._return(inst)
~~~~~~~~~~~~^^^^^^
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_dynamo/symbolic_convert.py", line 3062, in _return
and not self.symbolic_locals_contain_module_class()
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~^^
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_dynamo/symbolic_convert.py", line 3051, in symbolic_locals_contain_module_class
if isinstance(v, UserDefinedClassVariable) and issubclass(
~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_dynamo/variables/base.py", line 191, in __instancecheck__
instance = instance.realize()
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_dynamo/variables/lazy.py", line 67, in realize
self._cache.realize()
~~~~~~~~~~~~~~~~~~~^^
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_dynamo/variables/lazy.py", line 33, in realize
self.vt = VariableTracker.build(tx, self.value, source)
~~~~~~~~~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_dynamo/variables/base.py", line 456, in build
return builder.VariableBuilder(tx, source)(value)
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~^^^^^^^
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_dynamo/variables/builder.py", line 384, in __call__
vt = self._wrap(value)
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_dynamo/variables/builder.py", line 619, in _wrap
result = dict(
build_key_value(i, k, v)
for i, (k, v) in enumerate(get_items_from_dict(value))
)
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_dynamo/variables/builder.py", line 621, in <genexpr>
for i, (k, v) in enumerate(get_items_from_dict(value))
~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^
torch._dynamo.exc.InternalTorchDynamoError: RuntimeError: dictionary changed size during iteration
from user code:
File "/opt/conda/envs/py_3.13/lib/python3.13/linecache.py", line 38, in getlines
return cache[filename][2]
Set TORCH_LOGS="+dynamo" and TORCHDYNAMO_VERBOSE=1 for more information
You can suppress this exception and fall back to eager by setting:
import torch._dynamo
torch._dynamo.config.suppress_errors = True
To execute this test, run the following from the base repo dir:
PYTORCH_TEST_WITH_DYNAMO=1 python test/test_jit.py TestScript.test_serialized_source_ranges2
This message can be suppressed by setting PYTORCH_PRINT_REPRO_ON_FAILURE=0
```
</details>
Test file path: `test_jit.py`
cc @EikanWang @jgong5 @wenzhe-nrv @sanchitintel @clee2000 @wdvr @chauhang @penguinwu @voznesenskym @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @jiayisunx @chenyang78 @kadeng @amjames | true |
2,823,394,783 | DISABLED test_not (__main__.TestScript) | pytorch-bot[bot] | closed | [
"oncall: jit",
"triaged",
"module: flaky-tests",
"skipped",
"oncall: pt2",
"module: dynamo"
] | 1 | NONE | Platforms: dynamo
This test was disabled because it is failing in CI. See [recent examples](https://hud.pytorch.org/flakytest?name=test_not&suite=TestScript&limit=100) and the most recent trunk [workflow logs](https://github.com/pytorch/pytorch/runs/36463607655).
Over the past 3 hours, it has been determined flaky in 4 workflow(s) with 4 failures and 4 successes.
**Debugging instructions (after clicking on the recent samples link):**
DO NOT ASSUME THINGS ARE OKAY IF THE CI IS GREEN. We now shield flaky tests from developers so CI will thus be green but it will be harder to parse the logs.
To find relevant log snippets:
1. Click on the workflow logs linked above
2. Click on the Test step of the job so that it is expanded. Otherwise, the grepping will not work.
3. Grep for `test_not`
4. There should be several instances run (as flaky tests are rerun in CI) from which you can study the logs.
<details><summary>Sample error message</summary>
```
Traceback (most recent call last):
File "/var/lib/jenkins/workspace/test/test_jit.py", line 7566, in test_not
self.checkScript(test_not_op, (torch.tensor(2), ), optimize=True)
~~~~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/testing/_internal/jit_utils.py", line 483, in checkScript
source = textwrap.dedent(inspect.getsource(script))
~~~~~~~~~~~~~~~~~^^^^^^^^
File "/opt/conda/envs/py_3.13/lib/python3.13/inspect.py", line 1256, in getsource
lines, lnum = getsourcelines(object)
~~~~~~~~~~~~~~^^^^^^^^
File "/opt/conda/envs/py_3.13/lib/python3.13/inspect.py", line 1238, in getsourcelines
lines, lnum = findsource(object)
~~~~~~~~~~^^^^^^^^
File "/opt/conda/envs/py_3.13/lib/python3.13/inspect.py", line 1074, in findsource
lines = linecache.getlines(file, module.__dict__)
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_dynamo/convert_frame.py", line 1400, in __call__
return self._torchdynamo_orig_callable(
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~^
frame, cache_entry, self.hooks, frame_state, skip=1
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
)
^
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_dynamo/convert_frame.py", line 1184, in __call__
result = self._inner_convert(
frame, cache_entry, hooks, frame_state, skip=skip + 1
)
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_dynamo/convert_frame.py", line 565, in __call__
return _compile(
frame.f_code,
...<14 lines>...
skip=skip + 1,
)
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_dynamo/convert_frame.py", line 1048, in _compile
raise InternalTorchDynamoError(
f"{type(e).__qualname__}: {str(e)}"
).with_traceback(e.__traceback__) from None
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_dynamo/convert_frame.py", line 997, in _compile
guarded_code = compile_inner(code, one_graph, hooks, transform)
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_utils_internal.py", line 95, in wrapper_function
return function(*args, **kwargs)
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_dynamo/convert_frame.py", line 726, in compile_inner
return _compile_inner(code, one_graph, hooks, transform)
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_dynamo/convert_frame.py", line 760, in _compile_inner
out_code = transform_code_object(code, transform)
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_dynamo/bytecode_transformation.py", line 1404, in transform_code_object
transformations(instructions, code_options)
~~~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_dynamo/convert_frame.py", line 236, in _fn
return fn(*args, **kwargs)
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_dynamo/convert_frame.py", line 680, in transform
tracer.run()
~~~~~~~~~~^^
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_dynamo/symbolic_convert.py", line 2906, in run
super().run()
~~~~~~~~~~~^^
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_dynamo/symbolic_convert.py", line 1078, in run
while self.step():
~~~~~~~~~^^
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_dynamo/symbolic_convert.py", line 988, in step
self.dispatch_table[inst.opcode](self, inst)
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~^^^^^^^^^^^^
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_dynamo/symbolic_convert.py", line 3087, in RETURN_VALUE
self._return(inst)
~~~~~~~~~~~~^^^^^^
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_dynamo/symbolic_convert.py", line 3062, in _return
and not self.symbolic_locals_contain_module_class()
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~^^
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_dynamo/symbolic_convert.py", line 3051, in symbolic_locals_contain_module_class
if isinstance(v, UserDefinedClassVariable) and issubclass(
~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_dynamo/variables/base.py", line 191, in __instancecheck__
instance = instance.realize()
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_dynamo/variables/lazy.py", line 67, in realize
self._cache.realize()
~~~~~~~~~~~~~~~~~~~^^
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_dynamo/variables/lazy.py", line 33, in realize
self.vt = VariableTracker.build(tx, self.value, source)
~~~~~~~~~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_dynamo/variables/base.py", line 456, in build
return builder.VariableBuilder(tx, source)(value)
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~^^^^^^^
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_dynamo/variables/builder.py", line 384, in __call__
vt = self._wrap(value)
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_dynamo/variables/builder.py", line 619, in _wrap
result = dict(
build_key_value(i, k, v)
for i, (k, v) in enumerate(get_items_from_dict(value))
)
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_dynamo/variables/builder.py", line 621, in <genexpr>
for i, (k, v) in enumerate(get_items_from_dict(value))
~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^
torch._dynamo.exc.InternalTorchDynamoError: RuntimeError: dictionary changed size during iteration
from user code:
File "/opt/conda/envs/py_3.13/lib/python3.13/linecache.py", line 38, in getlines
return cache[filename][2]
Set TORCH_LOGS="+dynamo" and TORCHDYNAMO_VERBOSE=1 for more information
You can suppress this exception and fall back to eager by setting:
import torch._dynamo
torch._dynamo.config.suppress_errors = True
To execute this test, run the following from the base repo dir:
PYTORCH_TEST_WITH_DYNAMO=1 python test/test_jit.py TestScript.test_not
This message can be suppressed by setting PYTORCH_PRINT_REPRO_ON_FAILURE=0
```
</details>
Test file path: `test_jit.py`
cc @EikanWang @jgong5 @wenzhe-nrv @sanchitintel @clee2000 @wdvr @chauhang @penguinwu @voznesenskym @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @jiayisunx @chenyang78 @kadeng @amjames | true |
2,823,394,673 | DISABLED test_script_optional_none (__main__.TestScript) | pytorch-bot[bot] | closed | [
"oncall: jit",
"triaged",
"module: flaky-tests",
"skipped",
"oncall: pt2",
"module: dynamo"
] | 1 | NONE | Platforms: dynamo
This test was disabled because it is failing in CI. See [recent examples](https://hud.pytorch.org/flakytest?name=test_script_optional_none&suite=TestScript&limit=100) and the most recent trunk [workflow logs](https://github.com/pytorch/pytorch/runs/36463607655).
Over the past 3 hours, it has been determined flaky in 4 workflow(s) with 4 failures and 4 successes.
**Debugging instructions (after clicking on the recent samples link):**
DO NOT ASSUME THINGS ARE OKAY IF THE CI IS GREEN. We now shield flaky tests from developers so CI will thus be green but it will be harder to parse the logs.
To find relevant log snippets:
1. Click on the workflow logs linked above
2. Click on the Test step of the job so that it is expanded. Otherwise, the grepping will not work.
3. Grep for `test_script_optional_none`
4. There should be several instances run (as flaky tests are rerun in CI) from which you can study the logs.
<details><summary>Sample error message</summary>
```
Traceback (most recent call last):
File "/var/lib/jenkins/workspace/test/test_jit.py", line 6558, in test_script_optional_none
self.checkScript(none_stmt, [torch.arange(0, 2)], optimize=True)
~~~~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/testing/_internal/jit_utils.py", line 483, in checkScript
source = textwrap.dedent(inspect.getsource(script))
~~~~~~~~~~~~~~~~~^^^^^^^^
File "/opt/conda/envs/py_3.13/lib/python3.13/inspect.py", line 1256, in getsource
lines, lnum = getsourcelines(object)
~~~~~~~~~~~~~~^^^^^^^^
File "/opt/conda/envs/py_3.13/lib/python3.13/inspect.py", line 1238, in getsourcelines
lines, lnum = findsource(object)
~~~~~~~~~~^^^^^^^^
File "/opt/conda/envs/py_3.13/lib/python3.13/inspect.py", line 1074, in findsource
lines = linecache.getlines(file, module.__dict__)
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_dynamo/convert_frame.py", line 1400, in __call__
return self._torchdynamo_orig_callable(
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~^
frame, cache_entry, self.hooks, frame_state, skip=1
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
)
^
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_dynamo/convert_frame.py", line 1184, in __call__
result = self._inner_convert(
frame, cache_entry, hooks, frame_state, skip=skip + 1
)
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_dynamo/convert_frame.py", line 565, in __call__
return _compile(
frame.f_code,
...<14 lines>...
skip=skip + 1,
)
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_dynamo/convert_frame.py", line 1048, in _compile
raise InternalTorchDynamoError(
f"{type(e).__qualname__}: {str(e)}"
).with_traceback(e.__traceback__) from None
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_dynamo/convert_frame.py", line 997, in _compile
guarded_code = compile_inner(code, one_graph, hooks, transform)
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_utils_internal.py", line 95, in wrapper_function
return function(*args, **kwargs)
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_dynamo/convert_frame.py", line 726, in compile_inner
return _compile_inner(code, one_graph, hooks, transform)
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_dynamo/convert_frame.py", line 760, in _compile_inner
out_code = transform_code_object(code, transform)
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_dynamo/bytecode_transformation.py", line 1404, in transform_code_object
transformations(instructions, code_options)
~~~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_dynamo/convert_frame.py", line 236, in _fn
return fn(*args, **kwargs)
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_dynamo/convert_frame.py", line 680, in transform
tracer.run()
~~~~~~~~~~^^
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_dynamo/symbolic_convert.py", line 2906, in run
super().run()
~~~~~~~~~~~^^
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_dynamo/symbolic_convert.py", line 1078, in run
while self.step():
~~~~~~~~~^^
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_dynamo/symbolic_convert.py", line 988, in step
self.dispatch_table[inst.opcode](self, inst)
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~^^^^^^^^^^^^
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_dynamo/symbolic_convert.py", line 3087, in RETURN_VALUE
self._return(inst)
~~~~~~~~~~~~^^^^^^
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_dynamo/symbolic_convert.py", line 3062, in _return
and not self.symbolic_locals_contain_module_class()
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~^^
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_dynamo/symbolic_convert.py", line 3051, in symbolic_locals_contain_module_class
if isinstance(v, UserDefinedClassVariable) and issubclass(
~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_dynamo/variables/base.py", line 191, in __instancecheck__
instance = instance.realize()
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_dynamo/variables/lazy.py", line 67, in realize
self._cache.realize()
~~~~~~~~~~~~~~~~~~~^^
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_dynamo/variables/lazy.py", line 33, in realize
self.vt = VariableTracker.build(tx, self.value, source)
~~~~~~~~~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_dynamo/variables/base.py", line 456, in build
return builder.VariableBuilder(tx, source)(value)
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~^^^^^^^
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_dynamo/variables/builder.py", line 384, in __call__
vt = self._wrap(value)
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_dynamo/variables/builder.py", line 619, in _wrap
result = dict(
build_key_value(i, k, v)
for i, (k, v) in enumerate(get_items_from_dict(value))
)
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_dynamo/variables/builder.py", line 621, in <genexpr>
for i, (k, v) in enumerate(get_items_from_dict(value))
~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^
torch._dynamo.exc.InternalTorchDynamoError: RuntimeError: dictionary changed size during iteration
from user code:
File "/opt/conda/envs/py_3.13/lib/python3.13/linecache.py", line 38, in getlines
return cache[filename][2]
Set TORCH_LOGS="+dynamo" and TORCHDYNAMO_VERBOSE=1 for more information
You can suppress this exception and fall back to eager by setting:
import torch._dynamo
torch._dynamo.config.suppress_errors = True
To execute this test, run the following from the base repo dir:
PYTORCH_TEST_WITH_DYNAMO=1 python test/test_jit.py TestScript.test_script_optional_none
This message can be suppressed by setting PYTORCH_PRINT_REPRO_ON_FAILURE=0
```
</details>
Test file path: `test_jit.py`
cc @EikanWang @jgong5 @wenzhe-nrv @sanchitintel @clee2000 @wdvr @chauhang @penguinwu @voznesenskym @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @jiayisunx @chenyang78 @kadeng @amjames | true |
2,823,394,592 | DISABLED test_remove_dropout (__main__.TestScript) | pytorch-bot[bot] | closed | [
"oncall: jit",
"triaged",
"module: flaky-tests",
"skipped",
"oncall: pt2",
"module: dynamo"
] | 1 | NONE | Platforms: dynamo
This test was disabled because it is failing in CI. See [recent examples](https://hud.pytorch.org/flakytest?name=test_remove_dropout&suite=TestScript&limit=100) and the most recent trunk [workflow logs](https://github.com/pytorch/pytorch/runs/36463607655).
Over the past 3 hours, it has been determined flaky in 4 workflow(s) with 4 failures and 4 successes.
**Debugging instructions (after clicking on the recent samples link):**
DO NOT ASSUME THINGS ARE OKAY IF THE CI IS GREEN. We now shield flaky tests from developers so CI will thus be green but it will be harder to parse the logs.
To find relevant log snippets:
1. Click on the workflow logs linked above
2. Click on the Test step of the job so that it is expanded. Otherwise, the grepping will not work.
3. Grep for `test_remove_dropout`
4. There should be several instances run (as flaky tests are rerun in CI) from which you can study the logs.
<details><summary>Sample error message</summary>
```
Traceback (most recent call last):
File "/var/lib/jenkins/workspace/test/test_jit.py", line 11154, in test_remove_dropout
m = torch.jit.script(m)
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/jit/_script.py", line 1439, in script
ret = _script_impl(
obj=obj,
...<3 lines>...
example_inputs=example_inputs,
)
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/jit/_script.py", line 1150, in _script_impl
return torch.jit._recursive.create_script_module(
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~^
obj, torch.jit._recursive.infer_methods_to_compile
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
)
^
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/jit/_recursive.py", line 555, in create_script_module
AttributeTypeIsSupportedChecker().check(nn_module)
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~^^^^^^^^^^^
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/jit/_check.py", line 62, in check
source_lines = inspect.getsource(nn_module.__class__.__init__)
File "/opt/conda/envs/py_3.13/lib/python3.13/inspect.py", line 1256, in getsource
lines, lnum = getsourcelines(object)
~~~~~~~~~~~~~~^^^^^^^^
File "/opt/conda/envs/py_3.13/lib/python3.13/inspect.py", line 1238, in getsourcelines
lines, lnum = findsource(object)
~~~~~~~~~~^^^^^^^^
File "/opt/conda/envs/py_3.13/lib/python3.13/inspect.py", line 1074, in findsource
lines = linecache.getlines(file, module.__dict__)
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_dynamo/convert_frame.py", line 1400, in __call__
return self._torchdynamo_orig_callable(
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~^
frame, cache_entry, self.hooks, frame_state, skip=1
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
)
^
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_dynamo/convert_frame.py", line 1184, in __call__
result = self._inner_convert(
frame, cache_entry, hooks, frame_state, skip=skip + 1
)
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_dynamo/convert_frame.py", line 565, in __call__
return _compile(
frame.f_code,
...<14 lines>...
skip=skip + 1,
)
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_dynamo/convert_frame.py", line 1048, in _compile
raise InternalTorchDynamoError(
f"{type(e).__qualname__}: {str(e)}"
).with_traceback(e.__traceback__) from None
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_dynamo/convert_frame.py", line 997, in _compile
guarded_code = compile_inner(code, one_graph, hooks, transform)
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_utils_internal.py", line 95, in wrapper_function
return function(*args, **kwargs)
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_dynamo/convert_frame.py", line 726, in compile_inner
return _compile_inner(code, one_graph, hooks, transform)
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_dynamo/convert_frame.py", line 760, in _compile_inner
out_code = transform_code_object(code, transform)
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_dynamo/bytecode_transformation.py", line 1404, in transform_code_object
transformations(instructions, code_options)
~~~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_dynamo/convert_frame.py", line 236, in _fn
return fn(*args, **kwargs)
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_dynamo/convert_frame.py", line 680, in transform
tracer.run()
~~~~~~~~~~^^
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_dynamo/symbolic_convert.py", line 2906, in run
super().run()
~~~~~~~~~~~^^
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_dynamo/symbolic_convert.py", line 1078, in run
while self.step():
~~~~~~~~~^^
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_dynamo/symbolic_convert.py", line 988, in step
self.dispatch_table[inst.opcode](self, inst)
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~^^^^^^^^^^^^
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_dynamo/symbolic_convert.py", line 3087, in RETURN_VALUE
self._return(inst)
~~~~~~~~~~~~^^^^^^
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_dynamo/symbolic_convert.py", line 3062, in _return
and not self.symbolic_locals_contain_module_class()
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~^^
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_dynamo/symbolic_convert.py", line 3051, in symbolic_locals_contain_module_class
if isinstance(v, UserDefinedClassVariable) and issubclass(
~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_dynamo/variables/base.py", line 191, in __instancecheck__
instance = instance.realize()
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_dynamo/variables/lazy.py", line 67, in realize
self._cache.realize()
~~~~~~~~~~~~~~~~~~~^^
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_dynamo/variables/lazy.py", line 33, in realize
self.vt = VariableTracker.build(tx, self.value, source)
~~~~~~~~~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_dynamo/variables/base.py", line 456, in build
return builder.VariableBuilder(tx, source)(value)
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~^^^^^^^
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_dynamo/variables/builder.py", line 384, in __call__
vt = self._wrap(value)
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_dynamo/variables/builder.py", line 619, in _wrap
result = dict(
build_key_value(i, k, v)
for i, (k, v) in enumerate(get_items_from_dict(value))
)
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_dynamo/variables/builder.py", line 621, in <genexpr>
for i, (k, v) in enumerate(get_items_from_dict(value))
~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^
torch._dynamo.exc.InternalTorchDynamoError: RuntimeError: dictionary changed size during iteration
from user code:
File "/opt/conda/envs/py_3.13/lib/python3.13/linecache.py", line 38, in getlines
return cache[filename][2]
Set TORCH_LOGS="+dynamo" and TORCHDYNAMO_VERBOSE=1 for more information
You can suppress this exception and fall back to eager by setting:
import torch._dynamo
torch._dynamo.config.suppress_errors = True
To execute this test, run the following from the base repo dir:
PYTORCH_TEST_WITH_DYNAMO=1 python test/test_jit.py TestScript.test_remove_dropout
This message can be suppressed by setting PYTORCH_PRINT_REPRO_ON_FAILURE=0
```
</details>
Test file path: `test_jit.py`
cc @EikanWang @jgong5 @wenzhe-nrv @sanchitintel @clee2000 @wdvr @chauhang @penguinwu @voznesenskym @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @jiayisunx @chenyang78 @kadeng @amjames | true |
2,823,394,590 | DISABLED test_pytree_register_data_class_training_ir_to_decomp_non_strict (__main__.TrainingIRToRunDecompExportNonStrictTestExport) | pytorch-bot[bot] | closed | [
"module: flaky-tests",
"skipped",
"oncall: pt2",
"oncall: export"
] | 5 | NONE | Platforms: asan, linux, mac, macos, rocm, slow
This test was disabled because it is failing in CI. See [recent examples](https://hud.pytorch.org/flakytest?name=test_pytree_register_data_class_training_ir_to_decomp_non_strict&suite=TrainingIRToRunDecompExportNonStrictTestExport&limit=100) and the most recent trunk [workflow logs](https://github.com/pytorch/pytorch/runs/36463581114).
Over the past 3 hours, it has been determined flaky in 28 workflow(s) with 56 failures and 28 successes.
**Debugging instructions (after clicking on the recent samples link):**
DO NOT ASSUME THINGS ARE OKAY IF THE CI IS GREEN. We now shield flaky tests from developers so CI will thus be green but it will be harder to parse the logs.
To find relevant log snippets:
1. Click on the workflow logs linked above
2. Click on the Test step of the job so that it is expanded. Otherwise, the grepping will not work.
3. Grep for `test_pytree_register_data_class_training_ir_to_decomp_non_strict`
4. There should be several instances run (as flaky tests are rerun in CI) from which you can study the logs.
<details><summary>Sample error message</summary>
```
Traceback (most recent call last):
File "/var/lib/jenkins/workspace/test/export/test_export.py", line 5141, in test_pytree_register_data_class
self.assertEqual(roundtrip_spec, spec)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 4042, in assertEqual
raise error_metas.pop()[0].to_error( # type: ignore[index]
AssertionError: Object comparison failed: TreeSpec(MyDataClass, [['x', 'y'], ['z']], [*,
*]) != TreeSpec(MyDataClass, [['x', 'y'], ['z']], [*,
*])
To execute this test, run the following from the base repo dir:
python test/export/test_export_training_ir_to_run_decomp.py TrainingIRToRunDecompExportNonStrictTestExport.test_pytree_register_data_class_training_ir_to_decomp_non_strict
This message can be suppressed by setting PYTORCH_PRINT_REPRO_ON_FAILURE=0
```
</details>
Test file path: `export/test_export_training_ir_to_run_decomp.py`
cc @clee2000 @wdvr @chauhang @penguinwu @avikchaudhuri @gmagogsfm @zhxchen17 @tugsbayasgalan @angelayi @suo @ydwu4 | true |
2,823,392,695 | DISABLED test_pytree_register_data_class_serdes_non_strict (__main__.SerDesExportNonStrictTestExport) | pytorch-bot[bot] | closed | [
"module: flaky-tests",
"skipped",
"oncall: pt2",
"oncall: export"
] | 4 | NONE | Platforms: asan, linux, mac, macos, rocm, slow
This test was disabled because it is failing in CI. See [recent examples](https://hud.pytorch.org/flakytest?name=test_pytree_register_data_class_serdes_non_strict&suite=SerDesExportNonStrictTestExport&limit=100) and the most recent trunk [workflow logs](https://github.com/pytorch/pytorch/runs/36466178901).
Over the past 3 hours, it has been determined flaky in 28 workflow(s) with 56 failures and 28 successes.
**Debugging instructions (after clicking on the recent samples link):**
DO NOT ASSUME THINGS ARE OKAY IF THE CI IS GREEN. We now shield flaky tests from developers so CI will thus be green but it will be harder to parse the logs.
To find relevant log snippets:
1. Click on the workflow logs linked above
2. Click on the Test step of the job so that it is expanded. Otherwise, the grepping will not work.
3. Grep for `test_pytree_register_data_class_serdes_non_strict`
4. There should be several instances run (as flaky tests are rerun in CI) from which you can study the logs.
<details><summary>Sample error message</summary>
```
Traceback (most recent call last):
File "/var/lib/jenkins/workspace/test/export/test_export.py", line 5141, in test_pytree_register_data_class
self.assertEqual(roundtrip_spec, spec)
~~~~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/testing/_internal/common_utils.py", line 4042, in assertEqual
raise error_metas.pop()[0].to_error( # type: ignore[index]
...<4 lines>...
)
AssertionError: Object comparison failed: TreeSpec(MyDataClass, [['x', 'y'], ['z']], [*,
*]) != TreeSpec(MyDataClass, [['x', 'y'], ['z']], [*,
*])
To execute this test, run the following from the base repo dir:
python test/export/test_serdes.py SerDesExportNonStrictTestExport.test_pytree_register_data_class_serdes_non_strict
This message can be suppressed by setting PYTORCH_PRINT_REPRO_ON_FAILURE=0
```
</details>
Test file path: `export/test_serdes.py`
cc @clee2000 @wdvr @chauhang @penguinwu @avikchaudhuri @gmagogsfm @zhxchen17 @tugsbayasgalan @angelayi @suo @ydwu4 | true |
2,823,392,661 | DISABLED test_pytree_register_data_class_retraceability_non_strict (__main__.RetraceExportNonStrictTestExport) | pytorch-bot[bot] | closed | [
"module: flaky-tests",
"skipped",
"oncall: pt2",
"export-triage-review",
"oncall: export"
] | 4 | NONE | Platforms: asan, linux, mac, macos, rocm, slow
This test was disabled because it is failing in CI. See [recent examples](https://hud.pytorch.org/flakytest?name=test_pytree_register_data_class_retraceability_non_strict&suite=RetraceExportNonStrictTestExport&limit=100) and the most recent trunk [workflow logs](https://github.com/pytorch/pytorch/runs/36463770268).
Over the past 3 hours, it has been determined flaky in 28 workflow(s) with 56 failures and 28 successes.
**Debugging instructions (after clicking on the recent samples link):**
DO NOT ASSUME THINGS ARE OKAY IF THE CI IS GREEN. We now shield flaky tests from developers so CI will thus be green but it will be harder to parse the logs.
To find relevant log snippets:
1. Click on the workflow logs linked above
2. Click on the Test step of the job so that it is expanded. Otherwise, the grepping will not work.
3. Grep for `test_pytree_register_data_class_retraceability_non_strict`
4. There should be several instances run (as flaky tests are rerun in CI) from which you can study the logs.
Test file path: `export/test_retraceability.py`
cc @clee2000 @wdvr @chauhang @penguinwu @avikchaudhuri @gmagogsfm @zhxchen17 @tugsbayasgalan @angelayi @suo @ydwu4 | true |
2,823,302,250 | The Error Function reported should be its own | ILCSFNO | closed | [] | 3 | CONTRIBUTOR | ### 🐛 Describe the bug
The docs of [`torch.fft.rfft2()`](https://pytorch.org/docs/stable/generated/torch.fft.rfft2.html#torch-fft-rfft2) and [`torch.fft.rfftn()`](https://pytorch.org/docs/stable/generated/torch.fft.rfftn.html#torch-fft-rfftn) document their shared keyword argument as below:
> ### Keyword Arguments
> * out ([Tensor](https://pytorch.org/docs/stable/tensors.html#torch.Tensor), optional) – the output tensor.
For `torch.fft.rfft2()`, when the keyword argument `out` is set to a Float tensor, the reported error refers to `torch.fft.rfftn()` rather than `torch.fft.rfft2()`:
### Minified Repro
```python
import torch
t = torch.rand(10, 10)
out = torch.randn(10, 6, 6)
rfft2 = torch.fft.rfft2(t, out=out)
```
### Output
```txt
RuntimeError: rfftn expects a complex output tensor, but got Float
```
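For comparison, the call goes through when `out` is preallocated with a complex dtype and the expected one-sided shape (a sketch; for a real `(10, 10)` input, `rfft2` keeps the first dim and halves the last, `10 // 2 + 1 = 6`):

```python
import torch

t = torch.rand(10, 10)
# out must be complex and of shape (10, 6) for this input
out = torch.empty(10, 6, dtype=torch.complex64)
torch.fft.rfft2(t, out=out)
print(out.shape)  # torch.Size([10, 6])
```

The repro above fails on dtype first, so the mismatched `(10, 6, 6)` shape is never reached.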
### Versions
pytorch==2.5.0
torchvision==0.20.0
torchaudio==2.5.0
pytorch-cuda=12.1 | true |
2,823,290,665 | AdamW refactoring broke checkpoint reloading with DCP | lw | open | [
"module: optimizer",
"triaged",
"oncall: distributed checkpointing"
] | 8 | CONTRIBUTOR | ### 🐛 Describe the bug
I have a checkpoint created around end of October (I don't remember which PyTorch version was used back then), which I'm now trying to reload with a recent nightly build. I don't think anything relevant has changed in my codebase. However, I am hitting this issue:
```
Traceback (most recent call last):
File "<frozen runpy>", line 198, in _run_module_as_main
File "<frozen runpy>", line 88, in _run_code
File "/home/lcw/repo/train.py", line 870, in <module>
main()
File "/home/lcw/repo/train.py", line 866, in main
train(train_args)
File "/home/lcw/repo/train.py", line 488, in train
reload_checkpoint.load_from_path(Path(args.continue_from.checkpoint_dir))
File "/home/lcw/repo/checkpoint/checkpointer.py", line 81, in load_from_path
dcp.load(states, checkpoint_id=path, planner=ZnnLoadPlanner())
File "/home/lcw/envs/my_env/lib/python3.12/site-packages/torch/distributed/checkpoint/logger.py", line 83, in wrapper
result = func(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^
File "/home/lcw/envs/my_env/lib/python3.12/site-packages/torch/distributed/checkpoint/utils.py", line 438, in inner_func
return func(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^
File "/home/lcw/envs/my_env/lib/python3.12/site-packages/torch/distributed/checkpoint/state_dict_loader.py", line 172, in load
_load_state_dict(
File "/home/lcw/envs/my_env/lib/python3.12/site-packages/torch/distributed/checkpoint/state_dict_loader.py", line 229, in _load_state_dict
central_plan: LoadPlan = distW.reduce_scatter("plan", local_step, global_step)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/lcw/envs/my_env/lib/python3.12/site-packages/torch/distributed/checkpoint/utils.py", line 192, in reduce_scatter
raise result
torch.distributed.checkpoint.api.CheckpointException: CheckpointException ranks:dict_keys([0, 1, 2, ...])
Traceback (most recent call last): (RANK 0)
File "/home/lcw/envs/my_env/lib/python3.12/site-packages/torch/distributed/checkpoint/utils.py", line 165, in reduce_scatter
local_data = map_fun()
^^^^^^^^^
File "/home/lcw/envs/my_env/lib/python3.12/site-packages/torch/distributed/checkpoint/logger.py", line 83, in wrapper
result = func(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^
File "/home/lcw/envs/my_env/lib/python3.12/site-packages/torch/distributed/checkpoint/state_dict_loader.py", line 218, in local_step
local_plan = planner.create_local_plan()
^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/lcw/envs/my_env/lib/python3.12/site-packages/torch/distributed/checkpoint/default_planner.py", line 233, in create_local_plan
return create_default_local_load_plan(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/lcw/envs/my_env/lib/python3.12/site-packages/torch/distributed/checkpoint/default_planner.py", line 354, in create_default_local_load_plan
raise RuntimeError(f"Missing key in checkpoint state_dict: {fqn}.")
RuntimeError: Missing key in checkpoint state_dict: optimizer.param_groups.tok_embeddings.weight.decoupled_weight_decay.
```
I suspect this is due to the refactor done by @EmmettBicker in https://github.com/pytorch/pytorch/pull/143710.
It looks like @janeyx99 already attempted a fix related to `decoupled_weight_decay` in https://github.com/pytorch/pytorch/pull/144972, but apparently there's more to it.
### Versions
torch == 2.7.0.dev20250120+cu126
cc @vincentqb @jbschlosser @albanD @janeyx99 @crcrpar @LucasLLC @pradeepfn | true |
2,823,258,987 | Optional tag for some parameters and kw arguments in `torch.quantile()` | ILCSFNO | open | [
"module: docs",
"triaged"
] | 2 | CONTRIBUTOR | ### 📚 The doc issue
The doc of [`torch.quantile()`](https://pytorch.org/docs/stable/generated/torch.quantile.html#torch-quantile) shows its `definition`, `parameters` and `kw arguments` as below:
> ### torch.quantile(input, q, dim=None, keepdim=False, *, interpolation='linear', out=None) → [Tensor]
> ### Parameters
> * input ([Tensor](https://pytorch.org/docs/stable/tensors.html#torch.Tensor)) – the input tensor.
> * q ([float](https://docs.python.org/3/library/functions.html#float) or [Tensor](https://pytorch.org/docs/stable/tensors.html#torch.Tensor)) – a scalar or 1D tensor of values in the range [0, 1].
> * dim ([int](https://docs.python.org/3/library/functions.html#int)) – the dimension to reduce.
> * keepdim ([bool](https://docs.python.org/3/library/functions.html#bool)) – whether the output tensor has dim retained or not.
> ### Keyword Arguments
> * interpolation ([str](https://docs.python.org/3/library/stdtypes.html#str)) – interpolation method to use when the desired quantile lies between two data points. Can be linear, lower, higher, midpoint and nearest. Default is linear.
> * out ([Tensor](https://pytorch.org/docs/stable/tensors.html#torch.Tensor), optional) – the output tensor.
Several of them have default values, i.e. `dim=None`, `keepdim=False`, `interpolation='linear'` and `out=None`, but only `out` carries the optional tag. I suggest that `dim`, `keepdim` and `interpolation` should carry the optional tag too.
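As a quick check that these arguments are indeed optional in practice (i.e. the defaults are usable), a minimal example:

```python
import torch

t = torch.arange(4.0)  # tensor([0., 1., 2., 3.])
# dim, keepdim, and interpolation are all omitted and fall back to
# their defaults (None, False, 'linear')
q = torch.quantile(t, 0.5)
print(q)  # tensor(1.5000)
```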
### Suggest a potential alternative/fix
* Add the optional tag to `dim`, `keepdim` and `interpolation`.
cc @svekars @brycebortree @sekyondaMeta @AlannaBurke | true |
2,823,241,686 | Assertion Failure: TestBinaryUfuncsCPU.test_lerp_cpu_complex64 on Graviton 3 | kundaMwiza | open | [
"module: cpu",
"module: tests",
"triaged",
"module: correctness (silent)",
"module: arm"
] | 0 | CONTRIBUTOR | ### 🐛 Describe the bug
Repro:
```
python test/test_binary_ufuncs.py TestBinaryUfuncsCPU.test_lerp_cpu_complex64
```
Error:
```
Traceback (most recent call last):
  in test_lerp
    self.assertEqual(expected, actual)
  File "/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 4036, in assertEqual
    raise error_metas.pop()[0].to_error(  # type: ignore[index]
AssertionError: Tensor-likes are not close!

Mismatched elements: 1 / 5 (20.0%)
Greatest absolute difference: 0.2975790798664093 at index (2,) (up to 1e-05 allowed)
Greatest relative difference: 0.1535184681415558 at index (2,) (up to 1.3e-06 allowed)

To execute this test, run the following from the base repo dir:
python test/test_binary_ufuncs.py TestBinaryUfuncsCPU.test_lerp_cpu_complex64

This message can be suppressed by setting PYTORCH_PRINT_REPRO_ON_FAILURE=0
```
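For reference, `torch.lerp` computes `start + weight * (end - start)` elementwise; the failing test compares that result against a reference implementation on complex inputs. A real-dtype sketch of the operation:

```python
import torch

start = torch.tensor([1.0, 2.0])
end = torch.tensor([3.0, 6.0])
# linear interpolation halfway between start and end
print(torch.lerp(start, end, 0.5))  # tensor([2., 4.])
```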
This failure is currently not encountered in CI, see https://github.com/pytorch/pytorch/pull/146153
### Versions
```
PyTorch version: 2.7.0a0+git367593d
Is debug build: False
CUDA used to build PyTorch: None
ROCM used to build PyTorch: N/A
OS: Ubuntu 22.04.5 LTS (aarch64)
GCC version: (Ubuntu 11.4.0-1ubuntu1~22.04) 11.4.0
Clang version: Could not collect
CMake version: version 3.31.4
Libc version: glibc-2.35
Python version: 3.10.12 (main, Jan 17 2025, 14:35:34) [GCC 11.4.0] (64-bit runtime)
Python platform: Linux-6.8.0-1021-aws-aarch64-with-glibc2.35
Is CUDA available: False
CUDA runtime version: No CUDA
CUDA_MODULE_LOADING set to: N/A
GPU models and configuration: No CUDA
Nvidia driver version: No CUDA
cuDNN version: No CUDA
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: aarch64
CPU op-mode(s): 32-bit, 64-bit
Byte Order: Little Endian
CPU(s): 16
On-line CPU(s) list: 0-15
Vendor ID: ARM
Model: 1
Thread(s) per core: 1
Core(s) per socket: 16
Socket(s): 1
Stepping: r1p1
BogoMIPS: 2100.00
Flags: fp asimd evtstrm aes pmull sha1 sha2 crc32 atomics fphp asimdhp cpuid asimdrdm jscvt fcma lrcpc dcpop sha3 sm3 sm4 asimddp sha512 sve asimdfhm dit uscat ilrcpc flagm ssbs paca pacg dcpodp svei8mm svebf16 i8mm bf16 dgh rng
L1d cache: 1 MiB (16 instances)
L1i cache: 1 MiB (16 instances)
L2 cache: 16 MiB (16 instances)
L3 cache: 32 MiB (1 instance)
NUMA node(s): 1
NUMA node0 CPU(s): 0-15
Vulnerability Gather data sampling: Not affected
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Not affected
Vulnerability Reg file data sampling: Not affected
Vulnerability Retbleed: Not affected
Vulnerability Spec rstack overflow: Not affected
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl
Vulnerability Spectre v1: Mitigation; __user pointer sanitization
Vulnerability Spectre v2: Mitigation; CSV2, BHB
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Versions of relevant libraries:
[pip3] mypy==1.13.0
[pip3] mypy-extensions==1.0.0
[pip3] numpy==2.2.2
[pip3] onnx==1.17.0
[pip3] onnxscript==0.1.0.dev20240817
[pip3] optree==0.14.0
[pip3] torch==2.7.0a0+git367593d
```
cc @jgong5 @mingfeima @XiaobingSuper @sanchitintel @ashokei @jingxu10 @mruberry @ZainRizvi @malfet @snadampal @milpuz01 | true |
2,823,219,801 | Parameter may not be a name of one Parameter | ILCSFNO | closed | [
"module: docs",
"module: optimizer",
"triaged"
] | 2 | CONTRIBUTOR | ### 🚀 The feature, motivation and pitch
The doc of [torch.nn.utils.clip_grad_norm_()](https://pytorch.org/docs/stable/generated/torch.nn.utils.clip_grad_norm_.html#torch-nn-utils-clip-grad-norm) shows its `Parameters` as below:
> ### Parameters
> * parameters (Iterable[[Tensor](https://pytorch.org/docs/stable/tensors.html#torch.Tensor)] or [Tensor](https://pytorch.org/docs/stable/tensors.html#torch.Tensor)) – an iterable of Tensors or a single Tensor that will have gradients normalized
> ...
The doc of [torch.optim.AdamW()](https://pytorch.org/docs/stable/generated/torch.optim.AdamW.html#adamw) shows its `Parameters` as below:
> ### Parameters
> * params (iterable) – iterable of parameters or named_parameters to optimize or iterable of dicts defining parameter groups. When using named_parameters, all parameters in all groups should be named
> ...
LLMs are widely used nowadays, and the section heading `Parameters` and the argument name `parameters` tokenize so similarly that the two are easily conflated. I suggest renaming the argument `parameters` to something like `params`, matching `torch.optim.AdamW()`.
### Alternatives
* Rename the `parameters` argument to `params` in the relevant modules that currently use `parameters`.
### Additional context
_No response_
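For context, the argument this issue proposes renaming appears in calls like the following (a minimal sketch of gradient clipping with `clip_grad_norm_`):

```python
import torch

lin = torch.nn.Linear(4, 2)
lin(torch.randn(3, 4)).sum().backward()
# `parameters` is the argument name under discussion; it accepts an
# iterable of Tensors such as module.parameters()
total_norm = torch.nn.utils.clip_grad_norm_(lin.parameters(), max_norm=1.0)
print(total_norm)  # total gradient norm before clipping
```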
cc @svekars @brycebortree @sekyondaMeta @AlannaBurke @vincentqb @jbschlosser @albanD @janeyx99 @crcrpar | true |
2,823,155,579 | [AArch64] Build on Graviton 3 so that SVE is used in Graviton 3 tests | kundaMwiza | closed | [
"open source",
"release notes: releng"
] | 3 | CONTRIBUTOR | Currently the linux aarch64 CI that runs on pushes to main builds pytorch on Graviton 2, and tests on Graviton 2 and 3. However, by building on Graviton 2, CPU_CAPABILITY_SVE code paths are not available when the tests on Graviton 3 are run:
```yaml
linux-jammy-aarch64-py3_10-build:
name: linux-jammy-aarch64-py3.10
uses: ./.github/workflows/_linux-build.yml
needs: get-label-type
with:
runner_prefix: ${{ needs.get-label-type.outputs.label-type }}
build-environment: linux-jammy-aarch64-py3.10
docker-image-name: pytorch-linux-jammy-aarch64-py3.10-gcc11
runner: linux.arm64.2xlarge <--- Graviton 2
test-matrix: |
{ include: [
{ config: "default", shard: 1, num_shards: 4, runner: "linux.arm64.2xlarge" },
{ config: "default", shard: 2, num_shards: 4, runner: "linux.arm64.2xlarge" },
{ config: "default", shard: 3, num_shards: 4, runner: "linux.arm64.2xlarge" },
{ config: "default", shard: 4, num_shards: 4, runner: "linux.arm64.2xlarge" },
{ config: "default", shard: 1, num_shards: 3, runner: "linux.arm64.m7g.4xlarge" },
{ config: "default", shard: 2, num_shards: 3, runner: "linux.arm64.m7g.4xlarge" },
{ config: "default", shard: 3, num_shards: 3, runner: "linux.arm64.m7g.4xlarge" },
]}
secrets: inherit
```
This is in contrast to the release [CI](https://github.com/pytorch/pytorch/blob/main/.github/workflows/generated-linux-aarch64-binary-manywheel-nightly.yml) which builds on Graviton 3, and therefore has CPU_CAPABILITY_SVE enabled. There is currently a test failure that is encountered in the release wheel, but not in the tests that are run on pushes to main due to the above reason.
This PR changes the runner to a Graviton 3 machine for builds that occur on pushes to main so that release wheels for aarch64 are fully tested.
Fixes #ISSUE_NUMBER
CC @malfet | true |
2,823,139,186 | [torch.export] Cannot export TorchVision fasterrcnn_mobilenet_v3_large_fpn | tom-arm | open | [
"oncall: pt2",
"export-triage-review",
"oncall: export"
] | 7 | NONE | ### 🐛 Describe the bug
I want to be able to export `fasterrcnn_mobilenet_v3_large_fpn` for training, so it can be quantized. But running `torch.export.export_for_training` fails.
```python
from torchvision.models.detection import FasterRCNN_MobileNet_V3_Large_FPN_Weights
import torch
import torchvision
if __name__ == "__main__":
model = torchvision.models.detection.fasterrcnn_mobilenet_v3_large_fpn(weights=FasterRCNN_MobileNet_V3_Large_FPN_Weights.DEFAULT)
model.eval()
example_args = torch.randn(1, 3, 224, 224)
exported_program = torch.export.export_for_training(model, (example_args,))
```
Full traceback is below:
```
Traceback (most recent call last):
File "faster_rcnn/lower.py", line 11, in <module>
exported_program = torch.export.export_for_training(model, (example_args,))
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "faster_rcnn/venv/lib/python3.12/site-packages/torch/export/__init__.py", line 168, in export_for_training
return _export_for_training(
^^^^^^^^^^^^^^^^^^^^^
File "faster_rcnn/venv/lib/python3.12/site-packages/torch/export/_trace.py", line 1044, in wrapper
raise e
File "faster_rcnn/venv/lib/python3.12/site-packages/torch/export/_trace.py", line 1017, in wrapper
ep = fn(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^
File "faster_rcnn/venv/lib/python3.12/site-packages/torch/export/exported_program.py", line 117, in wrapper
return fn(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^
File "faster_rcnn/venv/lib/python3.12/site-packages/torch/export/_trace.py", line 1944, in _export_for_training
export_artifact = export_func( # type: ignore[operator]
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "faster_rcnn/venv/lib/python3.12/site-packages/torch/export/_trace.py", line 1296, in _strict_export_lower_to_aten_ir
gm_torch_level = _export_to_torch_ir(
^^^^^^^^^^^^^^^^^^^^
File "faster_rcnn/venv/lib/python3.12/site-packages/torch/export/_trace.py", line 693, in _export_to_torch_ir
gm_torch_level, _ = torch._dynamo.export(
^^^^^^^^^^^^^^^^^^^^^
File "faster_rcnn/venv/lib/python3.12/site-packages/torch/_dynamo/eval_frame.py", line 1579, in inner
result_traced = opt_f(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^
File "faster_rcnn/venv/lib/python3.12/site-packages/torch/nn/modules/module.py", line 1749, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "faster_rcnn/venv/lib/python3.12/site-packages/torch/nn/modules/module.py", line 1760, in _call_impl
return forward_call(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "faster_rcnn/venv/lib/python3.12/site-packages/torch/_dynamo/eval_frame.py", line 570, in _fn
return fn(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^
File "faster_rcnn/venv/lib/python3.12/site-packages/torch/nn/modules/module.py", line 1749, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "faster_rcnn/venv/lib/python3.12/site-packages/torch/nn/modules/module.py", line 1760, in _call_impl
return forward_call(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "faster_rcnn/venv/lib/python3.12/site-packages/torch/_dynamo/convert_frame.py", line 1400, in __call__
return self._torchdynamo_orig_callable(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "faster_rcnn/venv/lib/python3.12/site-packages/torch/_dynamo/convert_frame.py", line 565, in __call__
return _compile(
^^^^^^^^^
File "faster_rcnn/venv/lib/python3.12/site-packages/torch/_dynamo/convert_frame.py", line 997, in _compile
guarded_code = compile_inner(code, one_graph, hooks, transform)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "faster_rcnn/venv/lib/python3.12/site-packages/torch/_utils_internal.py", line 95, in wrapper_function
return function(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^
File "faster_rcnn/venv/lib/python3.12/site-packages/torch/_dynamo/convert_frame.py", line 726, in compile_inner
return _compile_inner(code, one_graph, hooks, transform)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "faster_rcnn/venv/lib/python3.12/site-packages/torch/_dynamo/convert_frame.py", line 760, in _compile_inner
out_code = transform_code_object(code, transform)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "faster_rcnn/venv/lib/python3.12/site-packages/torch/_dynamo/bytecode_transformation.py", line 1404, in transform_code_object
transformations(instructions, code_options)
File "faster_rcnn/venv/lib/python3.12/site-packages/torch/_dynamo/convert_frame.py", line 236, in _fn
return fn(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^
File "faster_rcnn/venv/lib/python3.12/site-packages/torch/_dynamo/convert_frame.py", line 680, in transform
tracer.run()
File "faster_rcnn/venv/lib/python3.12/site-packages/torch/_dynamo/symbolic_convert.py", line 2906, in run
super().run()
File "faster_rcnn/venv/lib/python3.12/site-packages/torch/_dynamo/symbolic_convert.py", line 1078, in run
while self.step():
^^^^^^^^^^^
File "faster_rcnn/venv/lib/python3.12/site-packages/torch/_dynamo/symbolic_convert.py", line 988, in step
self.dispatch_table[inst.opcode](self, inst)
File "faster_rcnn/venv/lib/python3.12/site-packages/torch/_dynamo/symbolic_convert.py", line 685, in wrapper
return inner_fn(self, inst)
^^^^^^^^^^^^^^^^^^^^
File "faster_rcnn/venv/lib/python3.12/site-packages/torch/_dynamo/symbolic_convert.py", line 2378, in CALL
self._call(inst)
File "faster_rcnn/venv/lib/python3.12/site-packages/torch/_dynamo/symbolic_convert.py", line 2372, in _call
self.call_function(fn, args, kwargs)
File "faster_rcnn/venv/lib/python3.12/site-packages/torch/_dynamo/symbolic_convert.py", line 923, in call_function
self.push(fn.call_function(self, args, kwargs)) # type: ignore[arg-type]
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "faster_rcnn/venv/lib/python3.12/site-packages/torch/_dynamo/variables/nn_module.py", line 444, in call_function
return tx.inline_user_function_return(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "faster_rcnn/venv/lib/python3.12/site-packages/torch/_dynamo/symbolic_convert.py", line 929, in inline_user_function_return
return InliningInstructionTranslator.inline_call(self, fn, args, kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "faster_rcnn/venv/lib/python3.12/site-packages/torch/_dynamo/symbolic_convert.py", line 3112, in inline_call
return tracer.inline_call_()
^^^^^^^^^^^^^^^^^^^^^
File "faster_rcnn/venv/lib/python3.12/site-packages/torch/_dynamo/symbolic_convert.py", line 3249, in inline_call_
self.run()
File "faster_rcnn/venv/lib/python3.12/site-packages/torch/_dynamo/symbolic_convert.py", line 1078, in run
while self.step():
^^^^^^^^^^^
File "faster_rcnn/venv/lib/python3.12/site-packages/torch/_dynamo/symbolic_convert.py", line 988, in step
self.dispatch_table[inst.opcode](self, inst)
File "faster_rcnn/venv/lib/python3.12/site-packages/torch/_dynamo/symbolic_convert.py", line 685, in wrapper
return inner_fn(self, inst)
^^^^^^^^^^^^^^^^^^^^
File "faster_rcnn/venv/lib/python3.12/site-packages/torch/_dynamo/symbolic_convert.py", line 1765, in CALL_FUNCTION_EX
self.call_function(fn, argsvars.items, kwargsvars)
File "faster_rcnn/venv/lib/python3.12/site-packages/torch/_dynamo/symbolic_convert.py", line 923, in call_function
self.push(fn.call_function(self, args, kwargs)) # type: ignore[arg-type]
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "faster_rcnn/venv/lib/python3.12/site-packages/torch/_dynamo/variables/functions.py", line 461, in call_function
return super().call_function(tx, args, kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "faster_rcnn/venv/lib/python3.12/site-packages/torch/_dynamo/variables/functions.py", line 319, in call_function
return super().call_function(tx, args, kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "faster_rcnn/venv/lib/python3.12/site-packages/torch/_dynamo/variables/functions.py", line 120, in call_function
return tx.inline_user_function_return(self, [*self.self_args(), *args], kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "faster_rcnn/venv/lib/python3.12/site-packages/torch/_dynamo/symbolic_convert.py", line 929, in inline_user_function_return
return InliningInstructionTranslator.inline_call(self, fn, args, kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "faster_rcnn/venv/lib/python3.12/site-packages/torch/_dynamo/symbolic_convert.py", line 3112, in inline_call
return tracer.inline_call_()
^^^^^^^^^^^^^^^^^^^^^
File "faster_rcnn/venv/lib/python3.12/site-packages/torch/_dynamo/symbolic_convert.py", line 3249, in inline_call_
self.run()
File "faster_rcnn/venv/lib/python3.12/site-packages/torch/_dynamo/symbolic_convert.py", line 1078, in run
while self.step():
^^^^^^^^^^^
File "faster_rcnn/venv/lib/python3.12/site-packages/torch/_dynamo/symbolic_convert.py", line 988, in step
self.dispatch_table[inst.opcode](self, inst)
File "faster_rcnn/venv/lib/python3.12/site-packages/torch/_dynamo/symbolic_convert.py", line 685, in wrapper
return inner_fn(self, inst)
^^^^^^^^^^^^^^^^^^^^
File "faster_rcnn/venv/lib/python3.12/site-packages/torch/_dynamo/symbolic_convert.py", line 2378, in CALL
self._call(inst)
File "faster_rcnn/venv/lib/python3.12/site-packages/torch/_dynamo/symbolic_convert.py", line 2372, in _call
self.call_function(fn, args, kwargs)
File "faster_rcnn/venv/lib/python3.12/site-packages/torch/_dynamo/symbolic_convert.py", line 923, in call_function
self.push(fn.call_function(self, args, kwargs)) # type: ignore[arg-type]
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "faster_rcnn/venv/lib/python3.12/site-packages/torch/_dynamo/variables/nn_module.py", line 444, in call_function
return tx.inline_user_function_return(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "faster_rcnn/venv/lib/python3.12/site-packages/torch/_dynamo/symbolic_convert.py", line 929, in inline_user_function_return
return InliningInstructionTranslator.inline_call(self, fn, args, kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "faster_rcnn/venv/lib/python3.12/site-packages/torch/_dynamo/symbolic_convert.py", line 3112, in inline_call
return tracer.inline_call_()
^^^^^^^^^^^^^^^^^^^^^
File "faster_rcnn/venv/lib/python3.12/site-packages/torch/_dynamo/symbolic_convert.py", line 3249, in inline_call_
self.run()
File "faster_rcnn/venv/lib/python3.12/site-packages/torch/_dynamo/symbolic_convert.py", line 1078, in run
while self.step():
^^^^^^^^^^^
File "faster_rcnn/venv/lib/python3.12/site-packages/torch/_dynamo/symbolic_convert.py", line 988, in step
self.dispatch_table[inst.opcode](self, inst)
File "faster_rcnn/venv/lib/python3.12/site-packages/torch/_dynamo/symbolic_convert.py", line 685, in wrapper
return inner_fn(self, inst)
^^^^^^^^^^^^^^^^^^^^
File "faster_rcnn/venv/lib/python3.12/site-packages/torch/_dynamo/symbolic_convert.py", line 1765, in CALL_FUNCTION_EX
self.call_function(fn, argsvars.items, kwargsvars)
File "faster_rcnn/venv/lib/python3.12/site-packages/torch/_dynamo/symbolic_convert.py", line 923, in call_function
self.push(fn.call_function(self, args, kwargs)) # type: ignore[arg-type]
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "faster_rcnn/venv/lib/python3.12/site-packages/torch/_dynamo/variables/functions.py", line 461, in call_function
return super().call_function(tx, args, kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "faster_rcnn/venv/lib/python3.12/site-packages/torch/_dynamo/variables/functions.py", line 319, in call_function
return super().call_function(tx, args, kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "faster_rcnn/venv/lib/python3.12/site-packages/torch/_dynamo/variables/functions.py", line 120, in call_function
return tx.inline_user_function_return(self, [*self.self_args(), *args], kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "faster_rcnn/venv/lib/python3.12/site-packages/torch/_dynamo/symbolic_convert.py", line 929, in inline_user_function_return
return InliningInstructionTranslator.inline_call(self, fn, args, kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "faster_rcnn/venv/lib/python3.12/site-packages/torch/_dynamo/symbolic_convert.py", line 3112, in inline_call
return tracer.inline_call_()
^^^^^^^^^^^^^^^^^^^^^
File "faster_rcnn/venv/lib/python3.12/site-packages/torch/_dynamo/symbolic_convert.py", line 3249, in inline_call_
self.run()
File "faster_rcnn/venv/lib/python3.12/site-packages/torch/_dynamo/symbolic_convert.py", line 1078, in run
while self.step():
^^^^^^^^^^^
File "faster_rcnn/venv/lib/python3.12/site-packages/torch/_dynamo/symbolic_convert.py", line 988, in step
self.dispatch_table[inst.opcode](self, inst)
File "faster_rcnn/venv/lib/python3.12/site-packages/torch/_dynamo/symbolic_convert.py", line 685, in wrapper
return inner_fn(self, inst)
^^^^^^^^^^^^^^^^^^^^
File "faster_rcnn/venv/lib/python3.12/site-packages/torch/_dynamo/symbolic_convert.py", line 2378, in CALL
self._call(inst)
File "faster_rcnn/venv/lib/python3.12/site-packages/torch/_dynamo/symbolic_convert.py", line 2372, in _call
self.call_function(fn, args, kwargs)
File "faster_rcnn/venv/lib/python3.12/site-packages/torch/_dynamo/symbolic_convert.py", line 923, in call_function
self.push(fn.call_function(self, args, kwargs)) # type: ignore[arg-type]
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "faster_rcnn/venv/lib/python3.12/site-packages/torch/_dynamo/variables/functions.py", line 461, in call_function
return super().call_function(tx, args, kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "faster_rcnn/venv/lib/python3.12/site-packages/torch/_dynamo/variables/functions.py", line 319, in call_function
return super().call_function(tx, args, kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "faster_rcnn/venv/lib/python3.12/site-packages/torch/_dynamo/variables/functions.py", line 120, in call_function
return tx.inline_user_function_return(self, [*self.self_args(), *args], kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "faster_rcnn/venv/lib/python3.12/site-packages/torch/_dynamo/symbolic_convert.py", line 929, in inline_user_function_return
return InliningInstructionTranslator.inline_call(self, fn, args, kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "faster_rcnn/venv/lib/python3.12/site-packages/torch/_dynamo/symbolic_convert.py", line 3112, in inline_call
return tracer.inline_call_()
^^^^^^^^^^^^^^^^^^^^^
File "faster_rcnn/venv/lib/python3.12/site-packages/torch/_dynamo/symbolic_convert.py", line 3249, in inline_call_
self.run()
File "faster_rcnn/venv/lib/python3.12/site-packages/torch/_dynamo/symbolic_convert.py", line 1078, in run
while self.step():
^^^^^^^^^^^
File "faster_rcnn/venv/lib/python3.12/site-packages/torch/_dynamo/symbolic_convert.py", line 988, in step
self.dispatch_table[inst.opcode](self, inst)
File "faster_rcnn/venv/lib/python3.12/site-packages/torch/_dynamo/symbolic_convert.py", line 1841, in STORE_ATTR
not self.export
AssertionError: Mutating module attribute cell_anchors during export.
from user code:
File "faster_rcnn/venv/lib/python3.12/site-packages/torchvision/models/detection/generalized_rcnn.py", line 104, in forward
proposals, proposal_losses = self.rpn(images, features, targets)
File "faster_rcnn/venv/lib/python3.12/site-packages/torch/nn/modules/module.py", line 1760, in _call_impl
return forward_call(*args, **kwargs)
File "faster_rcnn/venv/lib/python3.12/site-packages/torchvision/models/detection/rpn.py", line 362, in forward
anchors = self.anchor_generator(images, features)
File "faster_rcnn/venv/lib/python3.12/site-packages/torch/nn/modules/module.py", line 1760, in _call_impl
return forward_call(*args, **kwargs)
File "faster_rcnn/venv/lib/python3.12/site-packages/torchvision/models/detection/anchor_utils.py", line 126, in forward
self.set_cell_anchors(dtype, device)
File "faster_rcnn/venv/lib/python3.12/site-packages/torchvision/models/detection/anchor_utils.py", line 77, in set_cell_anchors
self.cell_anchors = [cell_anchor.to(dtype=dtype, device=device) for cell_anchor in self.cell_anchors]
Set TORCH_LOGS="+dynamo" and TORCHDYNAMO_VERBOSE=1 for more information
```
### Versions
PyTorch version: 2.7.0.dev20250130
Is debug build: False
CUDA used to build PyTorch: None
ROCM used to build PyTorch: N/A
OS: macOS 14.6.1 (arm64)
GCC version: Could not collect
Clang version: 18.1.7
CMake version: version 3.29.3
Libc version: N/A
Python version: 3.12.3 (main, Apr 9 2024, 08:09:14) [Clang 15.0.0 (clang-1500.3.9.4)] (64-bit runtime)
Python platform: macOS-14.6.1-arm64-arm-64bit
Is CUDA available: False
CUDA runtime version: No CUDA
CUDA_MODULE_LOADING set to: N/A
GPU models and configuration: No CUDA
Nvidia driver version: No CUDA
cuDNN version: No CUDA
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Apple M2 Pro
Versions of relevant libraries:
[pip3] numpy==2.1.2
[pip3] torch==2.7.0.dev20250130
[pip3] torchaudio==2.6.0.dev20250130
[pip3] torchvision==0.22.0.dev20250130
[conda] Could not collect
cc @chauhang @penguinwu @avikchaudhuri @gmagogsfm @zhxchen17 @tugsbayasgalan @angelayi @suo @ydwu4 | true |
2,823,054,630 | [FlexAttention] Flex attention + compile fails if head-dimension of values is different than head-dimension of query/keys | matthijsvk | closed | [
"triaged",
"oncall: pt2",
"module: higher order operators",
"module: pt2-dispatcher",
"module: flex attention"
] | 4 | NONE | ### 🐛 Describe the bug
```python
import torch
from torch.nn.attention.flex_attention import flex_attention
class Model(torch.nn.Module):
def forward(self, v_mult=1):
bsz, n_head, seq_len, qk_dim = 4, 8, 256, 64
v_dim = int(qk_dim * v_mult)
query = torch.randn(bsz, n_head, seq_len, qk_dim, dtype=torch.bfloat16).cuda()
key = torch.randn(bsz, n_head, seq_len, qk_dim, dtype=torch.bfloat16).cuda()
value = torch.randn(bsz, n_head, seq_len, v_dim, dtype=torch.bfloat16).cuda()
out = flex_attention(query, key, value)
out = out.transpose(1, 2).reshape(bsz, seq_len, int(n_head * v_dim)) # [bsz, num_heads, slen, v_head_dim] -> [bsz, slen, num_heads * v_head_dim]
return out.shape
mod = Model().cuda()
mc = torch.compile(mod)
for v_mult in [1, 0.5, 2]:
print(f"v_mult = {v_mult}")
print(mod)
print(mod(v_mult))
print(mc(v_mult))

```
With `v_dim != qk_dim`, this fails; e.g. for `v_mult=2` it raises:
```
torch._dynamo.exc.TorchRuntimeError: Failed running call_method reshape(*(FakeTensor(..., device='cuda:0', size=(4, 256, 8, 64), dtype=torch.bfloat16), 4, 256, 1024), **{}):
shape '[4, 256, 1024]' is invalid for input of size 524288
```
It seems that `flex_attention` + compile outputs `size=(4, 256, 8, 64)`, where the last dimension is only 64 (the query/key head-dim) but should be 128 (the value head-dim).
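For reference, the reshape failure is a plain element-count mismatch, which can be sanity-checked independently of torch using the shapes from the error message:

```python
# Shapes taken from the error message above
bsz, seq_len, n_head = 4, 256, 8
qk_dim, v_dim = 64, 128  # v_mult = 2

# What compile produced: last dim is the qk head-dim (64)
actual = bsz * seq_len * n_head * qk_dim
# What the reshape to [4, 256, 1024] expects: last dim is the v head-dim (128)
target = bsz * seq_len * n_head * v_dim

print(actual)  # 524288, matching "input of size 524288"
print(target)  # 1048576 = 4 * 256 * 1024
```

This confirms the reported numbers: the compiled output carries the query/key head-dim where the value head-dim is expected.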
### Versions
PyTorch version: 2.6.0+cu124
Is debug build: False
CUDA used to build PyTorch: 12.4
ROCM used to build PyTorch: N/A
OS: Manjaro Linux (x86_64)
GCC version: (GCC) 14.2.1 20240910
Clang version: 18.1.8
CMake version: version 3.31.2
Libc version: glibc-2.40
Python version: 3.12.7 (main, Oct 1 2024, 11:15:50) [GCC 14.2.1 20240910] (64-bit runtime)
Python platform: Linux-6.6.65-1-MANJARO-x86_64-with-glibc2.40
Is CUDA available: True
CUDA runtime version: 12.6.85
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration:
GPU 0: NVIDIA RTX 6000 Ada Generation
GPU 1: NVIDIA RTX 6000 Ada Generation
Nvidia driver version: 550.144.03
cuDNN version: Probably one of the following:
/usr/lib/libcudnn.so.9.5.1
/usr/lib/libcudnn_adv.so.9.5.1
/usr/lib/libcudnn_cnn.so.9.5.1
/usr/lib/libcudnn_engines_precompiled.so.9.5.1
/usr/lib/libcudnn_engines_runtime_compiled.so.9.5.1
/usr/lib/libcudnn_graph.so.9.5.1
/usr/lib/libcudnn_heuristic.so.9.5.1
/usr/lib/libcudnn_ops.so.9.5.1
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 46 bits physical, 48 bits virtual
Byte Order: Little Endian
CPU(s): 24
On-line CPU(s) list: 0-23
Vendor ID: GenuineIntel
Model name: Intel(R) Core(TM) i9-14900K
CPU family: 6
Model: 183
Thread(s) per core: 1
Core(s) per socket: 24
Socket(s): 1
Stepping: 1
CPU(s) scaling MHz: 20%
CPU max MHz: 6000.0000
CPU min MHz: 800.0000
BogoMIPS: 6376.00
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc art arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc cpuid aperfmperf tsc_known_freq pni pclmulqdq dtes64 monitor ds_cpl vmx smx est tm2 ssse3 sdbg fma cx16 xtpr pdcm sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm 3dnowprefetch cpuid_fault epb ssbd ibrs ibpb stibp ibrs_enhanced tpr_shadow flexpriority ept vpid ept_ad fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid rdseed adx smap clflushopt clwb intel_pt sha_ni xsaveopt xsavec xgetbv1 xsaves split_lock_detect user_shstk avx_vnni dtherm ida arat pln pts hwp hwp_notify hwp_act_window hwp_epp hwp_pkg_req hfi vnmi umip pku ospke waitpkg gfni vaes vpclmulqdq tme rdpid movdiri movdir64b fsrm md_clear serialize pconfig arch_lbr ibt flush_l1d arch_capabilities
Virtualization: VT-x
L1d cache: 896 KiB (24 instances)
L1i cache: 1.3 MiB (24 instances)
L2 cache: 32 MiB (12 instances)
L3 cache: 36 MiB (1 instance)
NUMA node(s): 1
NUMA node0 CPU(s): 0-23
Vulnerability Gather data sampling: Not affected
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Not affected
Vulnerability Reg file data sampling: Mitigation; Clear Register File
Vulnerability Retbleed: Not affected
Vulnerability Spec rstack overflow: Not affected
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Enhanced / Automatic IBRS; IBPB conditional; RSB filling; PBRSB-eIBRS SW sequence; BHI BHI_DIS_S
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Versions of relevant libraries:
[pip3] nvidia-cublas-cu12==12.4.5.8
[pip3] nvidia-cuda-cupti-cu12==12.4.127
[pip3] nvidia-cuda-nvrtc-cu12==12.4.127
[pip3] nvidia-cuda-runtime-cu12==12.4.127
[pip3] nvidia-cudnn-cu12==9.1.0.70
[pip3] nvidia-cufft-cu12==11.2.1.3
[pip3] nvidia-curand-cu12==10.3.5.147
[pip3] nvidia-cusolver-cu12==11.6.1.9
[pip3] nvidia-cusparse-cu12==12.3.1.170
[pip3] nvidia-cusparselt-cu12==0.6.2
[pip3] nvidia-nccl-cu12==2.21.5
[pip3] nvidia-nvjitlink-cu12==12.4.127
[pip3] nvidia-nvtx-cu12==12.4.127
[pip3] torch==2.6.0
[pip3] triton==3.2.0
[conda] Could not collect
cc @chauhang @penguinwu @zou3519 @ydwu4 @bdhirsh @yf225 @Chillee @drisspg @yanboliang @BoyuanFeng | true |
2,822,902,153 | DISABLED test_script_chunk (__main__.TestScript) | pytorch-bot[bot] | closed | [
"oncall: jit",
"triaged",
"module: flaky-tests",
"skipped",
"oncall: pt2",
"module: dynamo"
] | 1 | NONE | Platforms: dynamo
This test was disabled because it is failing in CI. See [recent examples](https://hud.pytorch.org/flakytest?name=test_script_chunk&suite=TestScript&limit=100) and the most recent trunk [workflow logs](https://github.com/pytorch/pytorch/runs/36461551984).
Over the past 3 hours, it has been determined flaky in 4 workflow(s) with 4 failures and 4 successes.
**Debugging instructions (after clicking on the recent samples link):**
DO NOT ASSUME THINGS ARE OKAY IF THE CI IS GREEN. We now shield flaky tests from developers so CI will thus be green but it will be harder to parse the logs.
To find relevant log snippets:
1. Click on the workflow logs linked above
2. Click on the Test step of the job so that it is expanded. Otherwise, the grepping will not work.
3. Grep for `test_script_chunk`
4. There should be several instances run (as flaky tests are rerun in CI) from which you can study the logs.
<details><summary>Sample error message</summary>
```
Traceback (most recent call last):
File "/var/lib/jenkins/workspace/test/test_jit.py", line 9843, in test_script_chunk
def test_script_chunk(self):
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/jit/_script.py", line 1439, in script
ret = _script_impl(
obj=obj,
...<3 lines>...
example_inputs=example_inputs,
)
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/jit/_script.py", line 1209, in _script_impl
ast = get_jit_def(obj, obj.__name__)
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/jit/frontend.py", line 341, in get_jit_def
parsed_def = parse_def(fn) if not isinstance(fn, _ParsedDef) else fn
~~~~~~~~~^^^^
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_sources.py", line 121, in parse_def
sourcelines, file_lineno, filename = get_source_lines_and_file(
~~~~~~~~~~~~~~~~~~~~~~~~~^
fn, ErrorReport.call_stack()
^^^^^^^^^^^^^^^^^^^^^^^^^^^^
)
^
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_sources.py", line 24, in get_source_lines_and_file
sourcelines, file_lineno = inspect.getsourcelines(obj)
~~~~~~~~~~~~~~~~~~~~~~^^^^^
File "/opt/conda/envs/py_3.13/lib/python3.13/inspect.py", line 1238, in getsourcelines
lines, lnum = findsource(object)
~~~~~~~~~~^^^^^^^^
File "/opt/conda/envs/py_3.13/lib/python3.13/inspect.py", line 1074, in findsource
lines = linecache.getlines(file, module.__dict__)
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_dynamo/convert_frame.py", line 1400, in __call__
return self._torchdynamo_orig_callable(
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~^
frame, cache_entry, self.hooks, frame_state, skip=1
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
)
^
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_dynamo/convert_frame.py", line 1184, in __call__
result = self._inner_convert(
frame, cache_entry, hooks, frame_state, skip=skip + 1
)
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_dynamo/convert_frame.py", line 565, in __call__
return _compile(
frame.f_code,
...<14 lines>...
skip=skip + 1,
)
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_dynamo/convert_frame.py", line 1048, in _compile
raise InternalTorchDynamoError(
f"{type(e).__qualname__}: {str(e)}"
).with_traceback(e.__traceback__) from None
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_dynamo/convert_frame.py", line 997, in _compile
guarded_code = compile_inner(code, one_graph, hooks, transform)
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_utils_internal.py", line 95, in wrapper_function
return function(*args, **kwargs)
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_dynamo/convert_frame.py", line 726, in compile_inner
return _compile_inner(code, one_graph, hooks, transform)
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_dynamo/convert_frame.py", line 760, in _compile_inner
out_code = transform_code_object(code, transform)
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_dynamo/bytecode_transformation.py", line 1404, in transform_code_object
transformations(instructions, code_options)
~~~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_dynamo/convert_frame.py", line 236, in _fn
return fn(*args, **kwargs)
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_dynamo/convert_frame.py", line 680, in transform
tracer.run()
~~~~~~~~~~^^
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_dynamo/symbolic_convert.py", line 2906, in run
super().run()
~~~~~~~~~~~^^
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_dynamo/symbolic_convert.py", line 1078, in run
while self.step():
~~~~~~~~~^^
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_dynamo/symbolic_convert.py", line 988, in step
self.dispatch_table[inst.opcode](self, inst)
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~^^^^^^^^^^^^
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_dynamo/symbolic_convert.py", line 3087, in RETURN_VALUE
self._return(inst)
~~~~~~~~~~~~^^^^^^
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_dynamo/symbolic_convert.py", line 3062, in _return
and not self.symbolic_locals_contain_module_class()
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~^^
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_dynamo/symbolic_convert.py", line 3051, in symbolic_locals_contain_module_class
if isinstance(v, UserDefinedClassVariable) and issubclass(
~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_dynamo/variables/base.py", line 191, in __instancecheck__
instance = instance.realize()
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_dynamo/variables/lazy.py", line 67, in realize
self._cache.realize()
~~~~~~~~~~~~~~~~~~~^^
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_dynamo/variables/lazy.py", line 33, in realize
self.vt = VariableTracker.build(tx, self.value, source)
~~~~~~~~~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_dynamo/variables/base.py", line 456, in build
return builder.VariableBuilder(tx, source)(value)
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~^^^^^^^
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_dynamo/variables/builder.py", line 384, in __call__
vt = self._wrap(value)
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_dynamo/variables/builder.py", line 619, in _wrap
result = dict(
build_key_value(i, k, v)
for i, (k, v) in enumerate(get_items_from_dict(value))
)
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_dynamo/variables/builder.py", line 621, in <genexpr>
for i, (k, v) in enumerate(get_items_from_dict(value))
~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^
torch._dynamo.exc.InternalTorchDynamoError: RuntimeError: dictionary changed size during iteration
from user code:
File "/opt/conda/envs/py_3.13/lib/python3.13/linecache.py", line 38, in getlines
return cache[filename][2]
Set TORCH_LOGS="+dynamo" and TORCHDYNAMO_VERBOSE=1 for more information
You can suppress this exception and fall back to eager by setting:
import torch._dynamo
torch._dynamo.config.suppress_errors = True
To execute this test, run the following from the base repo dir:
PYTORCH_TEST_WITH_DYNAMO=1 python test/test_jit.py TestScript.test_script_chunk
This message can be suppressed by setting PYTORCH_PRINT_REPRO_ON_FAILURE=0
```
</details>
Test file path: `test_jit.py`
cc @EikanWang @jgong5 @wenzhe-nrv @sanchitintel @clee2000 @wdvr @chauhang @penguinwu @voznesenskym @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @jiayisunx @chenyang78 @kadeng @amjames | true |
2,822,902,037 | DISABLED test_pack_unpack_state (__main__.TestScript) | pytorch-bot[bot] | closed | [
"oncall: jit",
"triaged",
"module: flaky-tests",
"skipped",
"oncall: pt2",
"module: dynamo"
] | 1 | NONE | Platforms: dynamo
This test was disabled because it is failing in CI. See [recent examples](https://hud.pytorch.org/flakytest?name=test_pack_unpack_state&suite=TestScript&limit=100) and the most recent trunk [workflow logs](https://github.com/pytorch/pytorch/runs/36461551984).
Over the past 3 hours, it has been determined flaky in 4 workflow(s) with 4 failures and 4 successes.
**Debugging instructions (after clicking on the recent samples link):**
DO NOT ASSUME THINGS ARE OKAY IF THE CI IS GREEN. We now shield flaky tests from developers so CI will thus be green but it will be harder to parse the logs.
To find relevant log snippets:
1. Click on the workflow logs linked above
2. Click on the Test step of the job so that it is expanded. Otherwise, the grepping will not work.
3. Grep for `test_pack_unpack_state`
4. There should be several instances run (as flaky tests are rerun in CI) from which you can study the logs.
<details><summary>Sample error message</summary>
```
Traceback (most recent call last):
File "/var/lib/jenkins/workspace/test/test_jit.py", line 9082, in test_pack_unpack_state
def test_pack_unpack_state(self):
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/jit/_script.py", line 321, in init_then_script
] = torch.jit._recursive.create_script_module(
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~^
self, make_stubs, share_types=not added_methods_in_init
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
)
^
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/jit/_recursive.py", line 555, in create_script_module
AttributeTypeIsSupportedChecker().check(nn_module)
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~^^^^^^^^^^^
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/jit/_check.py", line 62, in check
source_lines = inspect.getsource(nn_module.__class__.__init__)
File "/opt/conda/envs/py_3.13/lib/python3.13/inspect.py", line 1256, in getsource
lines, lnum = getsourcelines(object)
~~~~~~~~~~~~~~^^^^^^^^
File "/opt/conda/envs/py_3.13/lib/python3.13/inspect.py", line 1238, in getsourcelines
lines, lnum = findsource(object)
~~~~~~~~~~^^^^^^^^
File "/opt/conda/envs/py_3.13/lib/python3.13/inspect.py", line 1074, in findsource
lines = linecache.getlines(file, module.__dict__)
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_dynamo/convert_frame.py", line 1400, in __call__
return self._torchdynamo_orig_callable(
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~^
frame, cache_entry, self.hooks, frame_state, skip=1
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
)
^
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_dynamo/convert_frame.py", line 1184, in __call__
result = self._inner_convert(
frame, cache_entry, hooks, frame_state, skip=skip + 1
)
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_dynamo/convert_frame.py", line 565, in __call__
return _compile(
frame.f_code,
...<14 lines>...
skip=skip + 1,
)
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_dynamo/convert_frame.py", line 1048, in _compile
raise InternalTorchDynamoError(
f"{type(e).__qualname__}: {str(e)}"
).with_traceback(e.__traceback__) from None
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_dynamo/convert_frame.py", line 997, in _compile
guarded_code = compile_inner(code, one_graph, hooks, transform)
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_utils_internal.py", line 95, in wrapper_function
return function(*args, **kwargs)
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_dynamo/convert_frame.py", line 726, in compile_inner
return _compile_inner(code, one_graph, hooks, transform)
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_dynamo/convert_frame.py", line 760, in _compile_inner
out_code = transform_code_object(code, transform)
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_dynamo/bytecode_transformation.py", line 1404, in transform_code_object
transformations(instructions, code_options)
~~~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_dynamo/convert_frame.py", line 236, in _fn
return fn(*args, **kwargs)
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_dynamo/convert_frame.py", line 680, in transform
tracer.run()
~~~~~~~~~~^^
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_dynamo/symbolic_convert.py", line 2906, in run
super().run()
~~~~~~~~~~~^^
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_dynamo/symbolic_convert.py", line 1078, in run
while self.step():
~~~~~~~~~^^
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_dynamo/symbolic_convert.py", line 988, in step
self.dispatch_table[inst.opcode](self, inst)
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~^^^^^^^^^^^^
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_dynamo/symbolic_convert.py", line 3087, in RETURN_VALUE
self._return(inst)
~~~~~~~~~~~~^^^^^^
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_dynamo/symbolic_convert.py", line 3062, in _return
and not self.symbolic_locals_contain_module_class()
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~^^
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_dynamo/symbolic_convert.py", line 3051, in symbolic_locals_contain_module_class
if isinstance(v, UserDefinedClassVariable) and issubclass(
~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_dynamo/variables/base.py", line 191, in __instancecheck__
instance = instance.realize()
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_dynamo/variables/lazy.py", line 67, in realize
self._cache.realize()
~~~~~~~~~~~~~~~~~~~^^
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_dynamo/variables/lazy.py", line 33, in realize
self.vt = VariableTracker.build(tx, self.value, source)
~~~~~~~~~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_dynamo/variables/base.py", line 456, in build
return builder.VariableBuilder(tx, source)(value)
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~^^^^^^^
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_dynamo/variables/builder.py", line 384, in __call__
vt = self._wrap(value)
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_dynamo/variables/builder.py", line 619, in _wrap
result = dict(
build_key_value(i, k, v)
for i, (k, v) in enumerate(get_items_from_dict(value))
)
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_dynamo/variables/builder.py", line 621, in <genexpr>
for i, (k, v) in enumerate(get_items_from_dict(value))
~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^
torch._dynamo.exc.InternalTorchDynamoError: RuntimeError: dictionary changed size during iteration
from user code:
File "/opt/conda/envs/py_3.13/lib/python3.13/linecache.py", line 38, in getlines
return cache[filename][2]
Set TORCH_LOGS="+dynamo" and TORCHDYNAMO_VERBOSE=1 for more information
You can suppress this exception and fall back to eager by setting:
import torch._dynamo
torch._dynamo.config.suppress_errors = True
To execute this test, run the following from the base repo dir:
PYTORCH_TEST_WITH_DYNAMO=1 python test/test_jit.py TestScript.test_pack_unpack_state
This message can be suppressed by setting PYTORCH_PRINT_REPRO_ON_FAILURE=0
```
</details>
Test file path: `test_jit.py`
cc @EikanWang @jgong5 @wenzhe-nrv @sanchitintel @clee2000 @wdvr @chauhang @penguinwu @voznesenskym @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @jiayisunx @chenyang78 @kadeng @amjames | true |
2,822,902,033 | DISABLED test_script_star_expr (__main__.TestScript) | pytorch-bot[bot] | closed | [
"oncall: jit",
"triaged",
"module: flaky-tests",
"skipped",
"oncall: pt2",
"module: dynamo"
] | 1 | NONE | Platforms: dynamo
This test was disabled because it is failing in CI. See [recent examples](https://hud.pytorch.org/flakytest?name=test_script_star_expr&suite=TestScript&limit=100) and the most recent trunk [workflow logs](https://github.com/pytorch/pytorch/runs/36461551984).
Over the past 3 hours, it has been determined flaky in 4 workflow(s) with 4 failures and 4 successes.
**Debugging instructions (after clicking on the recent samples link):**
DO NOT ASSUME THINGS ARE OKAY IF THE CI IS GREEN. We now shield flaky tests from developers so CI will thus be green but it will be harder to parse the logs.
To find relevant log snippets:
1. Click on the workflow logs linked above
2. Click on the Test step of the job so that it is expanded. Otherwise, the grepping will not work.
3. Grep for `test_script_star_expr`
4. There should be several instances run (as flaky tests are rerun in CI) from which you can study the logs.
<details><summary>Sample error message</summary>
```
Traceback (most recent call last):
File "/var/lib/jenkins/workspace/test/test_jit.py", line 9572, in test_script_star_expr
class M2(torch.jit.ScriptModule):
...<9 lines>...
return self.m(*tup)
File "/var/lib/jenkins/workspace/test/test_jit.py", line 9579, in M2
@torch.jit.script_method
^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/jit/_script.py", line 365, in script_method
ast = get_jit_def(fn, fn.__name__, self_name="ScriptModule")
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/jit/frontend.py", line 341, in get_jit_def
parsed_def = parse_def(fn) if not isinstance(fn, _ParsedDef) else fn
~~~~~~~~~^^^^
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_sources.py", line 121, in parse_def
sourcelines, file_lineno, filename = get_source_lines_and_file(
~~~~~~~~~~~~~~~~~~~~~~~~~^
fn, ErrorReport.call_stack()
^^^^^^^^^^^^^^^^^^^^^^^^^^^^
)
^
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_sources.py", line 24, in get_source_lines_and_file
sourcelines, file_lineno = inspect.getsourcelines(obj)
~~~~~~~~~~~~~~~~~~~~~~^^^^^
File "/opt/conda/envs/py_3.13/lib/python3.13/inspect.py", line 1238, in getsourcelines
lines, lnum = findsource(object)
~~~~~~~~~~^^^^^^^^
File "/opt/conda/envs/py_3.13/lib/python3.13/inspect.py", line 1074, in findsource
lines = linecache.getlines(file, module.__dict__)
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_dynamo/convert_frame.py", line 1400, in __call__
return self._torchdynamo_orig_callable(
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~^
frame, cache_entry, self.hooks, frame_state, skip=1
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
)
^
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_dynamo/convert_frame.py", line 1184, in __call__
result = self._inner_convert(
frame, cache_entry, hooks, frame_state, skip=skip + 1
)
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_dynamo/convert_frame.py", line 565, in __call__
return _compile(
frame.f_code,
...<14 lines>...
skip=skip + 1,
)
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_dynamo/convert_frame.py", line 1048, in _compile
raise InternalTorchDynamoError(
f"{type(e).__qualname__}: {str(e)}"
).with_traceback(e.__traceback__) from None
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_dynamo/convert_frame.py", line 997, in _compile
guarded_code = compile_inner(code, one_graph, hooks, transform)
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_utils_internal.py", line 95, in wrapper_function
return function(*args, **kwargs)
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_dynamo/convert_frame.py", line 726, in compile_inner
return _compile_inner(code, one_graph, hooks, transform)
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_dynamo/convert_frame.py", line 760, in _compile_inner
out_code = transform_code_object(code, transform)
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_dynamo/bytecode_transformation.py", line 1404, in transform_code_object
transformations(instructions, code_options)
~~~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_dynamo/convert_frame.py", line 236, in _fn
return fn(*args, **kwargs)
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_dynamo/convert_frame.py", line 680, in transform
tracer.run()
~~~~~~~~~~^^
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_dynamo/symbolic_convert.py", line 2906, in run
super().run()
~~~~~~~~~~~^^
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_dynamo/symbolic_convert.py", line 1078, in run
while self.step():
~~~~~~~~~^^
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_dynamo/symbolic_convert.py", line 988, in step
self.dispatch_table[inst.opcode](self, inst)
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~^^^^^^^^^^^^
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_dynamo/symbolic_convert.py", line 3087, in RETURN_VALUE
self._return(inst)
~~~~~~~~~~~~^^^^^^
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_dynamo/symbolic_convert.py", line 3062, in _return
and not self.symbolic_locals_contain_module_class()
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~^^
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_dynamo/symbolic_convert.py", line 3051, in symbolic_locals_contain_module_class
if isinstance(v, UserDefinedClassVariable) and issubclass(
~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_dynamo/variables/base.py", line 191, in __instancecheck__
instance = instance.realize()
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_dynamo/variables/lazy.py", line 67, in realize
self._cache.realize()
~~~~~~~~~~~~~~~~~~~^^
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_dynamo/variables/lazy.py", line 33, in realize
self.vt = VariableTracker.build(tx, self.value, source)
~~~~~~~~~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_dynamo/variables/base.py", line 456, in build
return builder.VariableBuilder(tx, source)(value)
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~^^^^^^^
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_dynamo/variables/builder.py", line 384, in __call__
vt = self._wrap(value)
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_dynamo/variables/builder.py", line 619, in _wrap
result = dict(
build_key_value(i, k, v)
for i, (k, v) in enumerate(get_items_from_dict(value))
)
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_dynamo/variables/builder.py", line 621, in <genexpr>
for i, (k, v) in enumerate(get_items_from_dict(value))
~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^
torch._dynamo.exc.InternalTorchDynamoError: RuntimeError: dictionary changed size during iteration
from user code:
File "/opt/conda/envs/py_3.13/lib/python3.13/linecache.py", line 38, in getlines
return cache[filename][2]
Set TORCH_LOGS="+dynamo" and TORCHDYNAMO_VERBOSE=1 for more information
You can suppress this exception and fall back to eager by setting:
import torch._dynamo
torch._dynamo.config.suppress_errors = True
To execute this test, run the following from the base repo dir:
PYTORCH_TEST_WITH_DYNAMO=1 python test/test_jit.py TestScript.test_script_star_expr
This message can be suppressed by setting PYTORCH_PRINT_REPRO_ON_FAILURE=0
```
</details>
Test file path: `test_jit.py`
cc @EikanWang @jgong5 @wenzhe-nrv @sanchitintel @clee2000 @wdvr @chauhang @penguinwu @voznesenskym @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @jiayisunx @chenyang78 @kadeng @amjames | true |
2,822,901,409 | DISABLED test_script_module_call_noscript (__main__.TestScript) | pytorch-bot[bot] | closed | [
"oncall: jit",
"triaged",
"module: flaky-tests",
"skipped",
"oncall: pt2",
"module: dynamo"
] | 1 | NONE | Platforms: dynamo
This test was disabled because it is failing in CI. See [recent examples](https://hud.pytorch.org/flakytest?name=test_script_module_call_noscript&suite=TestScript&limit=100) and the most recent trunk [workflow logs](https://github.com/pytorch/pytorch/runs/36461551984).
Over the past 3 hours, it has been determined flaky in 4 workflow(s) with 4 failures and 4 successes.
**Debugging instructions (after clicking on the recent samples link):**
DO NOT ASSUME THINGS ARE OKAY IF THE CI IS GREEN. We now shield flaky tests from developers so CI will thus be green but it will be harder to parse the logs.
To find relevant log snippets:
1. Click on the workflow logs linked above
2. Click on the Test step of the job so that it is expanded. Otherwise, the grepping will not work.
3. Grep for `test_script_module_call_noscript`
4. There should be several instances run (as flaky tests are rerun in CI) from which you can study the logs.
<details><summary>Sample error message</summary>
```
Traceback (most recent call last):
File "/var/lib/jenkins/workspace/test/test_jit.py", line 8752, in test_script_module_call_noscript
class M(torch.jit.ScriptModule):
...<10 lines>...
return input + self.foo()
File "/var/lib/jenkins/workspace/test/test_jit.py", line 8761, in M
@torch.jit.script_method
^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/jit/_script.py", line 365, in script_method
ast = get_jit_def(fn, fn.__name__, self_name="ScriptModule")
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/jit/frontend.py", line 341, in get_jit_def
parsed_def = parse_def(fn) if not isinstance(fn, _ParsedDef) else fn
~~~~~~~~~^^^^
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_sources.py", line 121, in parse_def
sourcelines, file_lineno, filename = get_source_lines_and_file(
~~~~~~~~~~~~~~~~~~~~~~~~~^
fn, ErrorReport.call_stack()
^^^^^^^^^^^^^^^^^^^^^^^^^^^^
)
^
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_sources.py", line 24, in get_source_lines_and_file
sourcelines, file_lineno = inspect.getsourcelines(obj)
~~~~~~~~~~~~~~~~~~~~~~^^^^^
File "/opt/conda/envs/py_3.13/lib/python3.13/inspect.py", line 1238, in getsourcelines
lines, lnum = findsource(object)
~~~~~~~~~~^^^^^^^^
File "/opt/conda/envs/py_3.13/lib/python3.13/inspect.py", line 1074, in findsource
lines = linecache.getlines(file, module.__dict__)
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_dynamo/convert_frame.py", line 1400, in __call__
return self._torchdynamo_orig_callable(
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~^
frame, cache_entry, self.hooks, frame_state, skip=1
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
)
^
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_dynamo/convert_frame.py", line 1184, in __call__
result = self._inner_convert(
frame, cache_entry, hooks, frame_state, skip=skip + 1
)
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_dynamo/convert_frame.py", line 565, in __call__
return _compile(
frame.f_code,
...<14 lines>...
skip=skip + 1,
)
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_dynamo/convert_frame.py", line 1048, in _compile
raise InternalTorchDynamoError(
f"{type(e).__qualname__}: {str(e)}"
).with_traceback(e.__traceback__) from None
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_dynamo/convert_frame.py", line 997, in _compile
guarded_code = compile_inner(code, one_graph, hooks, transform)
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_utils_internal.py", line 95, in wrapper_function
return function(*args, **kwargs)
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_dynamo/convert_frame.py", line 726, in compile_inner
return _compile_inner(code, one_graph, hooks, transform)
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_dynamo/convert_frame.py", line 760, in _compile_inner
out_code = transform_code_object(code, transform)
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_dynamo/bytecode_transformation.py", line 1404, in transform_code_object
transformations(instructions, code_options)
~~~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_dynamo/convert_frame.py", line 236, in _fn
return fn(*args, **kwargs)
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_dynamo/convert_frame.py", line 680, in transform
tracer.run()
~~~~~~~~~~^^
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_dynamo/symbolic_convert.py", line 2906, in run
super().run()
~~~~~~~~~~~^^
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_dynamo/symbolic_convert.py", line 1078, in run
while self.step():
~~~~~~~~~^^
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_dynamo/symbolic_convert.py", line 988, in step
self.dispatch_table[inst.opcode](self, inst)
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~^^^^^^^^^^^^
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_dynamo/symbolic_convert.py", line 3087, in RETURN_VALUE
self._return(inst)
~~~~~~~~~~~~^^^^^^
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_dynamo/symbolic_convert.py", line 3062, in _return
and not self.symbolic_locals_contain_module_class()
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~^^
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_dynamo/symbolic_convert.py", line 3051, in symbolic_locals_contain_module_class
if isinstance(v, UserDefinedClassVariable) and issubclass(
~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_dynamo/variables/base.py", line 191, in __instancecheck__
instance = instance.realize()
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_dynamo/variables/lazy.py", line 67, in realize
self._cache.realize()
~~~~~~~~~~~~~~~~~~~^^
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_dynamo/variables/lazy.py", line 33, in realize
self.vt = VariableTracker.build(tx, self.value, source)
~~~~~~~~~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_dynamo/variables/base.py", line 456, in build
return builder.VariableBuilder(tx, source)(value)
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~^^^^^^^
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_dynamo/variables/builder.py", line 384, in __call__
vt = self._wrap(value)
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_dynamo/variables/builder.py", line 619, in _wrap
result = dict(
build_key_value(i, k, v)
for i, (k, v) in enumerate(get_items_from_dict(value))
)
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_dynamo/variables/builder.py", line 621, in <genexpr>
for i, (k, v) in enumerate(get_items_from_dict(value))
~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^
torch._dynamo.exc.InternalTorchDynamoError: RuntimeError: dictionary changed size during iteration
from user code:
File "/opt/conda/envs/py_3.13/lib/python3.13/linecache.py", line 38, in getlines
return cache[filename][2]
Set TORCH_LOGS="+dynamo" and TORCHDYNAMO_VERBOSE=1 for more information
You can suppress this exception and fall back to eager by setting:
import torch._dynamo
torch._dynamo.config.suppress_errors = True
To execute this test, run the following from the base repo dir:
PYTORCH_TEST_WITH_DYNAMO=1 python test/test_jit.py TestScript.test_script_module_call_noscript
This message can be suppressed by setting PYTORCH_PRINT_REPRO_ON_FAILURE=0
```
</details>
Test file path: `test_jit.py`
cc @EikanWang @jgong5 @wenzhe-nrv @sanchitintel @clee2000 @wdvr @chauhang @penguinwu @voznesenskym @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @jiayisunx @chenyang78 @kadeng @amjames | true |
2,822,901,334 | DISABLED test_python_frontend_py3 (__main__.TestScript) | pytorch-bot[bot] | closed | [
"oncall: jit",
"triaged",
"module: flaky-tests",
"skipped",
"oncall: pt2",
"module: dynamo"
] | 1 | NONE | Platforms: dynamo
This test was disabled because it is failing in CI. See [recent examples](https://hud.pytorch.org/flakytest?name=test_python_frontend_py3&suite=TestScript&limit=100) and the most recent trunk [workflow logs](https://github.com/pytorch/pytorch/runs/36461551984).
Over the past 3 hours, it has been determined flaky in 4 workflow(s) with 4 failures and 4 successes.
**Debugging instructions (after clicking on the recent samples link):**
DO NOT ASSUME THINGS ARE OKAY IF THE CI IS GREEN. We now shield flaky tests from developers so CI will thus be green but it will be harder to parse the logs.
To find relevant log snippets:
1. Click on the workflow logs linked above
2. Click on the Test step of the job so that it is expanded. Otherwise, the grepping will not work.
3. Grep for `test_python_frontend_py3`
4. There should be several instances run (as flaky tests are rerun in CI) from which you can study the logs.
<details><summary>Sample error message</summary>
```
Traceback (most recent call last):
File "/var/lib/jenkins/workspace/test/test_jit.py", line 5878, in test_python_frontend_py3
def test_python_frontend_py3(self):
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/jit/frontend.py", line 341, in get_jit_def
parsed_def = parse_def(fn) if not isinstance(fn, _ParsedDef) else fn
~~~~~~~~~^^^^
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_sources.py", line 121, in parse_def
sourcelines, file_lineno, filename = get_source_lines_and_file(
~~~~~~~~~~~~~~~~~~~~~~~~~^
fn, ErrorReport.call_stack()
^^^^^^^^^^^^^^^^^^^^^^^^^^^^
)
^
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_sources.py", line 24, in get_source_lines_and_file
sourcelines, file_lineno = inspect.getsourcelines(obj)
~~~~~~~~~~~~~~~~~~~~~~^^^^^
File "/opt/conda/envs/py_3.13/lib/python3.13/inspect.py", line 1238, in getsourcelines
lines, lnum = findsource(object)
~~~~~~~~~~^^^^^^^^
File "/opt/conda/envs/py_3.13/lib/python3.13/inspect.py", line 1074, in findsource
lines = linecache.getlines(file, module.__dict__)
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_dynamo/convert_frame.py", line 1400, in __call__
return self._torchdynamo_orig_callable(
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~^
frame, cache_entry, self.hooks, frame_state, skip=1
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
)
^
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_dynamo/convert_frame.py", line 1184, in __call__
result = self._inner_convert(
frame, cache_entry, hooks, frame_state, skip=skip + 1
)
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_dynamo/convert_frame.py", line 565, in __call__
return _compile(
frame.f_code,
...<14 lines>...
skip=skip + 1,
)
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_dynamo/convert_frame.py", line 1048, in _compile
raise InternalTorchDynamoError(
f"{type(e).__qualname__}: {str(e)}"
).with_traceback(e.__traceback__) from None
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_dynamo/convert_frame.py", line 997, in _compile
guarded_code = compile_inner(code, one_graph, hooks, transform)
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_utils_internal.py", line 95, in wrapper_function
return function(*args, **kwargs)
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_dynamo/convert_frame.py", line 726, in compile_inner
return _compile_inner(code, one_graph, hooks, transform)
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_dynamo/convert_frame.py", line 760, in _compile_inner
out_code = transform_code_object(code, transform)
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_dynamo/bytecode_transformation.py", line 1404, in transform_code_object
transformations(instructions, code_options)
~~~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_dynamo/convert_frame.py", line 236, in _fn
return fn(*args, **kwargs)
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_dynamo/convert_frame.py", line 680, in transform
tracer.run()
~~~~~~~~~~^^
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_dynamo/symbolic_convert.py", line 2906, in run
super().run()
~~~~~~~~~~~^^
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_dynamo/symbolic_convert.py", line 1078, in run
while self.step():
~~~~~~~~~^^
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_dynamo/symbolic_convert.py", line 988, in step
self.dispatch_table[inst.opcode](self, inst)
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~^^^^^^^^^^^^
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_dynamo/symbolic_convert.py", line 3087, in RETURN_VALUE
self._return(inst)
~~~~~~~~~~~~^^^^^^
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_dynamo/symbolic_convert.py", line 3062, in _return
and not self.symbolic_locals_contain_module_class()
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~^^
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_dynamo/symbolic_convert.py", line 3051, in symbolic_locals_contain_module_class
if isinstance(v, UserDefinedClassVariable) and issubclass(
~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_dynamo/variables/base.py", line 191, in __instancecheck__
instance = instance.realize()
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_dynamo/variables/lazy.py", line 67, in realize
self._cache.realize()
~~~~~~~~~~~~~~~~~~~^^
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_dynamo/variables/lazy.py", line 33, in realize
self.vt = VariableTracker.build(tx, self.value, source)
~~~~~~~~~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_dynamo/variables/base.py", line 456, in build
return builder.VariableBuilder(tx, source)(value)
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~^^^^^^^
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_dynamo/variables/builder.py", line 384, in __call__
vt = self._wrap(value)
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_dynamo/variables/builder.py", line 619, in _wrap
result = dict(
build_key_value(i, k, v)
for i, (k, v) in enumerate(get_items_from_dict(value))
)
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_dynamo/variables/builder.py", line 621, in <genexpr>
for i, (k, v) in enumerate(get_items_from_dict(value))
~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^
torch._dynamo.exc.InternalTorchDynamoError: RuntimeError: dictionary changed size during iteration
from user code:
File "/opt/conda/envs/py_3.13/lib/python3.13/linecache.py", line 38, in getlines
return cache[filename][2]
Set TORCH_LOGS="+dynamo" and TORCHDYNAMO_VERBOSE=1 for more information
You can suppress this exception and fall back to eager by setting:
import torch._dynamo
torch._dynamo.config.suppress_errors = True
To execute this test, run the following from the base repo dir:
PYTORCH_TEST_WITH_DYNAMO=1 python test/test_jit.py TestScript.test_python_frontend_py3
This message can be suppressed by setting PYTORCH_PRINT_REPRO_ON_FAILURE=0
```
</details>
Test file path: `test_jit.py`
cc @EikanWang @jgong5 @wenzhe-nrv @sanchitintel @clee2000 @wdvr @chauhang @penguinwu @voznesenskym @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @jiayisunx @chenyang78 @kadeng @amjames | true |
2,822,892,530 | [CUDAEvent.h] support external cuda events in cudagraphs | nmacchioni | open | [
"Stale",
"release notes: cuda"
] | 15 | CONTRIBUTOR | Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #146145
| true |
2,822,783,860 | Remove outdated test skipif conditions for Python3.9 | cyyever | closed | [
"oncall: jit",
"open source",
"Merged",
"ciflow/trunk",
"topic: not user facing",
"module: dynamo",
"ciflow/inductor"
] | 6 | COLLABORATOR | Fixes #ISSUE_NUMBER
cc @EikanWang @jgong5 @wenzhe-nrv @sanchitintel @voznesenskym @penguinwu @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @jiayisunx @chenyang78 @kadeng @chauhang @amjames | true |
2,822,754,850 | Fix C++20 build errors | cyyever | closed | [
"oncall: jit",
"triaged",
"open source",
"NNC",
"release notes: jit"
] | 3 | COLLABORATOR | Without breaking C++17.
cc @EikanWang @jgong5 @wenzhe-nrv @sanchitintel | true |
2,822,666,305 | Fix condition number invertible input(s) documented results | redwrasse | closed | [
"triaged",
"open source",
"release notes: linalg_frontend"
] | 8 | CONTRIBUTOR | `torch.linalg.cond` documentation states a singular input raises a RuntimeError, though unit tests show it in fact returns `inf` (https://github.com/pytorch/pytorch/blob/main/test/test_linalg.py#L1576).
Fixes the documentation and adds an example.
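As a hedged illustration of the behavior the unit tests document (a hand-rolled sketch for the 2×2 case, not the `torch.linalg.cond` implementation): the 2-norm condition number is the ratio of the largest to the smallest singular value, so a singular input naturally yields `inf` instead of raising an exception.

```python
import math

def cond_2x2(a, b, c, d):
    # 2-norm condition number of [[a, b], [c, d]]: ratio of the largest to
    # the smallest singular value. Singular values are the square roots of
    # the eigenvalues of A^T A.
    t = a * a + b * b + c * c + d * d   # trace(A^T A)
    det = a * d - b * c                 # det(A); det(A^T A) = det(A)^2
    disc = math.sqrt(max(t * t - 4 * det * det, 0.0))
    s_max = math.sqrt((t + disc) / 2)
    s_min = math.sqrt(max((t - disc) / 2, 0.0))
    # A singular matrix has s_min == 0, so report inf rather than raising.
    return s_max / s_min if s_min > 0 else math.inf

print(cond_2x2(1, 1, 1, 1))  # singular matrix -> inf
print(cond_2x2(1, 0, 0, 1))  # identity -> 1.0
```

This mirrors the documented contract: `inf` for singular input rather than a `RuntimeError`.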
It appears earlier documentation reflected this behavior (https://github.com/pytorch/pytorch/pull/45832/files/9008c10d63e7f5ddd0f06bbd5c7f1548c945d917#diff-316ce439a56491298e2d98deeca82606c52e5bde2f1ceb16c534ec03386c817eR358)
and then got updated here: https://github.com/pytorch/pytorch/commit/d578e8cfa2db71e45c3565b42ff2b10d13643402.
| true |
2,822,659,094 | [hotfix][dynamo] Skip linecache due to a flaky issue | anijain2305 | closed | [
"Merged",
"ciflow/trunk",
"topic: not user facing",
"module: dynamo",
"ciflow/inductor"
] | 3 | CONTRIBUTOR | Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #146177
* __->__ #146141
A large number of jit + dynamo wrapped tests fail in linecache tracing.
We need further debugging. Skipping for now to stem the bleeding.
https://github.com/pytorch/pytorch/issues/146076
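The tracebacks in the linked failures all bottom out in `RuntimeError: dictionary changed size during iteration`, which is CPython's guard against mutating a dict while iterating over it. A minimal standalone sketch of that error class (unrelated to the Dynamo internals themselves):

```python
# Reproduce the error class seen in the failing tests:
# RuntimeError: dictionary changed size during iteration
d = {"a": 1}
try:
    for key in d:
        d["b" + key] = 2  # mutating the dict while iterating over it
except RuntimeError as err:
    print(err)  # -> dictionary changed size during iteration
```

In the CI failures this happens while Dynamo builds variables from a dict (`get_items_from_dict` in `variables/builder.py`) that `linecache` mutates concurrently, hence the skip.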
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @chauhang @amjames | true |
2,822,607,909 | Apply ruff fixes to tests | cyyever | closed | [
"triaged",
"open source",
"Merged",
"ciflow/trunk",
"topic: not user facing",
"module: dynamo",
"ciflow/inductor"
] | 6 | COLLABORATOR | Fixes #ISSUE_NUMBER
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @chauhang @amjames | true |
2,822,594,355 | DISABLED test_module_none_attrs (__main__.TestScript) | pytorch-bot[bot] | closed | [
"oncall: jit",
"triaged",
"module: flaky-tests",
"skipped",
"oncall: pt2",
"module: dynamo"
] | 2 | NONE | Platforms: dynamo
This test was disabled because it is failing in CI. See [recent examples](https://hud.pytorch.org/flakytest?name=test_module_none_attrs&suite=TestScript&limit=100) and the most recent trunk [workflow logs](https://github.com/pytorch/pytorch/runs/36448670734).
Over the past 3 hours, it has been determined flaky in 18 workflow(s) with 18 failures and 18 successes.
**Debugging instructions (after clicking on the recent samples link):**
DO NOT ASSUME THINGS ARE OKAY IF THE CI IS GREEN. We now shield flaky tests from developers so CI will thus be green but it will be harder to parse the logs.
To find relevant log snippets:
1. Click on the workflow logs linked above
2. Click on the Test step of the job so that it is expanded. Otherwise, the grepping will not work.
3. Grep for `test_module_none_attrs`
4. There should be several instances run (as flaky tests are rerun in CI) from which you can study the logs.
<details><summary>Sample error message</summary>
```
Traceback (most recent call last):
File "/var/lib/jenkins/workspace/test/test_jit.py", line 15267, in test_module_none_attrs
class MyMod(torch.jit.ScriptModule):
...<6 lines>...
return self.optional_value
File "/var/lib/jenkins/workspace/test/test_jit.py", line 15272, in MyMod
@torch.jit.script_method
^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/jit/_script.py", line 365, in script_method
ast = get_jit_def(fn, fn.__name__, self_name="ScriptModule")
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/jit/frontend.py", line 341, in get_jit_def
parsed_def = parse_def(fn) if not isinstance(fn, _ParsedDef) else fn
~~~~~~~~~^^^^
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_sources.py", line 121, in parse_def
sourcelines, file_lineno, filename = get_source_lines_and_file(
~~~~~~~~~~~~~~~~~~~~~~~~~^
fn, ErrorReport.call_stack()
^^^^^^^^^^^^^^^^^^^^^^^^^^^^
)
^
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_sources.py", line 24, in get_source_lines_and_file
sourcelines, file_lineno = inspect.getsourcelines(obj)
~~~~~~~~~~~~~~~~~~~~~~^^^^^
File "/opt/conda/envs/py_3.13/lib/python3.13/inspect.py", line 1238, in getsourcelines
lines, lnum = findsource(object)
~~~~~~~~~~^^^^^^^^
File "/opt/conda/envs/py_3.13/lib/python3.13/inspect.py", line 1074, in findsource
lines = linecache.getlines(file, module.__dict__)
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_dynamo/convert_frame.py", line 1400, in __call__
return self._torchdynamo_orig_callable(
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~^
frame, cache_entry, self.hooks, frame_state, skip=1
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
)
^
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_dynamo/convert_frame.py", line 1184, in __call__
result = self._inner_convert(
frame, cache_entry, hooks, frame_state, skip=skip + 1
)
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_dynamo/convert_frame.py", line 565, in __call__
return _compile(
frame.f_code,
...<14 lines>...
skip=skip + 1,
)
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_dynamo/convert_frame.py", line 1048, in _compile
raise InternalTorchDynamoError(
f"{type(e).__qualname__}: {str(e)}"
).with_traceback(e.__traceback__) from None
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_dynamo/convert_frame.py", line 997, in _compile
guarded_code = compile_inner(code, one_graph, hooks, transform)
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_utils_internal.py", line 95, in wrapper_function
return function(*args, **kwargs)
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_dynamo/convert_frame.py", line 726, in compile_inner
return _compile_inner(code, one_graph, hooks, transform)
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_dynamo/convert_frame.py", line 760, in _compile_inner
out_code = transform_code_object(code, transform)
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_dynamo/bytecode_transformation.py", line 1404, in transform_code_object
transformations(instructions, code_options)
~~~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_dynamo/convert_frame.py", line 236, in _fn
return fn(*args, **kwargs)
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_dynamo/convert_frame.py", line 680, in transform
tracer.run()
~~~~~~~~~~^^
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_dynamo/symbolic_convert.py", line 2906, in run
super().run()
~~~~~~~~~~~^^
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_dynamo/symbolic_convert.py", line 1078, in run
while self.step():
~~~~~~~~~^^
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_dynamo/symbolic_convert.py", line 988, in step
self.dispatch_table[inst.opcode](self, inst)
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~^^^^^^^^^^^^
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_dynamo/symbolic_convert.py", line 3087, in RETURN_VALUE
self._return(inst)
~~~~~~~~~~~~^^^^^^
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_dynamo/symbolic_convert.py", line 3062, in _return
and not self.symbolic_locals_contain_module_class()
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~^^
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_dynamo/symbolic_convert.py", line 3051, in symbolic_locals_contain_module_class
if isinstance(v, UserDefinedClassVariable) and issubclass(
~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_dynamo/variables/base.py", line 191, in __instancecheck__
instance = instance.realize()
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_dynamo/variables/lazy.py", line 67, in realize
self._cache.realize()
~~~~~~~~~~~~~~~~~~~^^
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_dynamo/variables/lazy.py", line 33, in realize
self.vt = VariableTracker.build(tx, self.value, source)
~~~~~~~~~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_dynamo/variables/base.py", line 456, in build
return builder.VariableBuilder(tx, source)(value)
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~^^^^^^^
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_dynamo/variables/builder.py", line 384, in __call__
vt = self._wrap(value)
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_dynamo/variables/builder.py", line 619, in _wrap
result = dict(
build_key_value(i, k, v)
for i, (k, v) in enumerate(get_items_from_dict(value))
)
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_dynamo/variables/builder.py", line 621, in <genexpr>
for i, (k, v) in enumerate(get_items_from_dict(value))
~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^
torch._dynamo.exc.InternalTorchDynamoError: RuntimeError: dictionary changed size during iteration
from user code:
File "/opt/conda/envs/py_3.13/lib/python3.13/linecache.py", line 38, in getlines
return cache[filename][2]
Set TORCH_LOGS="+dynamo" and TORCHDYNAMO_VERBOSE=1 for more information
You can suppress this exception and fall back to eager by setting:
import torch._dynamo
torch._dynamo.config.suppress_errors = True
To execute this test, run the following from the base repo dir:
PYTORCH_TEST_WITH_DYNAMO=1 python test/test_jit.py TestScript.test_module_none_attrs
This message can be suppressed by setting PYTORCH_PRINT_REPRO_ON_FAILURE=0
```
</details>
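The root failure in the traceback above — `builder.py` building a dict from a generator over a module's `__dict__` while the traced `linecache.getlines` inserts into that same dict — comes down to the standard CPython rule that a dict may not change size during iteration. A minimal, PyTorch-free reproduction of that failure mode (the dict contents here are illustrative):

```python
d = {"a": 1, "b": 2}
try:
    # Consuming a generator over d while the generator body grows d —
    # the same shape as dict(build_key_value(...) for ... in
    # get_items_from_dict(value)) racing against linecache populating
    # the module __dict__ mid-iteration.
    dict((k, d.setdefault(k + "_new", 0)) for k in d)
    error = None
except RuntimeError as e:
    error = str(e)  # "dictionary changed size during iteration"
```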
Test file path: `test_jit.py`
cc @EikanWang @jgong5 @wenzhe-nrv @sanchitintel @clee2000 @wdvr @chauhang @penguinwu @voznesenskym @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @jiayisunx @chenyang78 @kadeng @amjames | true |
2,822,594,354 | DISABLED test_dropout_eval (__main__.TestScript) | pytorch-bot[bot] | closed | [
"oncall: jit",
"triaged",
"module: flaky-tests",
"skipped",
"oncall: pt2",
"module: dynamo"
] | 2 | NONE | Platforms: dynamo
This test was disabled because it is failing in CI. See [recent examples](https://hud.pytorch.org/flakytest?name=test_dropout_eval&suite=TestScript&limit=100) and the most recent trunk [workflow logs](https://github.com/pytorch/pytorch/runs/36456193176).
Over the past 3 hours, it has been determined flaky in 4 workflow(s) with 4 failures and 4 successes.
**Debugging instructions (after clicking on the recent samples link):**
DO NOT ASSUME THINGS ARE OKAY IF THE CI IS GREEN. We now shield flaky tests from developers so CI will thus be green but it will be harder to parse the logs.
To find relevant log snippets:
1. Click on the workflow logs linked above
2. Click on the Test step of the job so that it is expanded. Otherwise, the grepping will not work.
3. Grep for `test_dropout_eval`
4. There should be several instances run (as flaky tests are rerun in CI) from which you can study the logs.
<details><summary>Sample error message</summary>
```
Traceback (most recent call last):
File "/var/lib/jenkins/workspace/test/test_jit.py", line 7688, in test_dropout_eval
class ScriptedConv2d(torch.jit.ScriptModule):
File "/var/lib/jenkins/workspace/test/test_jit.py", line 7695, in ScriptedConv2d
def forward(self, x):
File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/jit/_script.py", line 365, in script_method
ast = get_jit_def(fn, fn.__name__, self_name="ScriptModule")
File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/jit/frontend.py", line 341, in get_jit_def
parsed_def = parse_def(fn) if not isinstance(fn, _ParsedDef) else fn
File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/_sources.py", line 121, in parse_def
sourcelines, file_lineno, filename = get_source_lines_and_file(
File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/_sources.py", line 24, in get_source_lines_and_file
sourcelines, file_lineno = inspect.getsourcelines(obj)
File "/opt/conda/envs/py_3.9/lib/python3.9/inspect.py", line 1006, in getsourcelines
lines, lnum = findsource(object)
File "/opt/conda/envs/py_3.9/lib/python3.9/inspect.py", line 831, in findsource
lines = linecache.getlines(file, module.__dict__)
File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/_dynamo/convert_frame.py", line 1400, in __call__
return self._torchdynamo_orig_callable(
File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/_dynamo/convert_frame.py", line 1184, in __call__
result = self._inner_convert(
File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/_dynamo/convert_frame.py", line 565, in __call__
return _compile(
File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/_dynamo/convert_frame.py", line 1048, in _compile
raise InternalTorchDynamoError(
File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/_dynamo/convert_frame.py", line 997, in _compile
guarded_code = compile_inner(code, one_graph, hooks, transform)
File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/_utils_internal.py", line 95, in wrapper_function
return function(*args, **kwargs)
File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/_dynamo/convert_frame.py", line 726, in compile_inner
return _compile_inner(code, one_graph, hooks, transform)
File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/_dynamo/convert_frame.py", line 760, in _compile_inner
out_code = transform_code_object(code, transform)
File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/_dynamo/bytecode_transformation.py", line 1404, in transform_code_object
transformations(instructions, code_options)
File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/_dynamo/convert_frame.py", line 236, in _fn
return fn(*args, **kwargs)
File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/_dynamo/convert_frame.py", line 680, in transform
tracer.run()
File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/_dynamo/symbolic_convert.py", line 2906, in run
super().run()
File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/_dynamo/symbolic_convert.py", line 1078, in run
while self.step():
File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/_dynamo/symbolic_convert.py", line 988, in step
self.dispatch_table[inst.opcode](self, inst)
File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/_dynamo/symbolic_convert.py", line 3087, in RETURN_VALUE
self._return(inst)
File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/_dynamo/symbolic_convert.py", line 3062, in _return
and not self.symbolic_locals_contain_module_class()
File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/_dynamo/symbolic_convert.py", line 3051, in symbolic_locals_contain_module_class
if isinstance(v, UserDefinedClassVariable) and issubclass(
File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/_dynamo/variables/base.py", line 191, in __instancecheck__
instance = instance.realize()
File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/_dynamo/variables/lazy.py", line 67, in realize
self._cache.realize()
File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/_dynamo/variables/lazy.py", line 33, in realize
self.vt = VariableTracker.build(tx, self.value, source)
File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/_dynamo/variables/base.py", line 456, in build
return builder.VariableBuilder(tx, source)(value)
File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/_dynamo/variables/builder.py", line 384, in __call__
vt = self._wrap(value)
File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/_dynamo/variables/builder.py", line 619, in _wrap
result = dict(
File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/_dynamo/variables/builder.py", line 619, in <genexpr>
result = dict(
torch._dynamo.exc.InternalTorchDynamoError: RuntimeError: dictionary changed size during iteration
from user code:
File "/opt/conda/envs/py_3.9/lib/python3.9/linecache.py", line 43, in getlines
return cache[filename][2]
Set TORCH_LOGS="+dynamo" and TORCHDYNAMO_VERBOSE=1 for more information
You can suppress this exception and fall back to eager by setting:
import torch._dynamo
torch._dynamo.config.suppress_errors = True
To execute this test, run the following from the base repo dir:
PYTORCH_TEST_WITH_DYNAMO=1 python test/test_jit.py TestScript.test_dropout_eval
This message can be suppressed by setting PYTORCH_PRINT_REPRO_ON_FAILURE=0
```
</details>
Test file path: `test_jit.py`
cc @EikanWang @jgong5 @wenzhe-nrv @sanchitintel @clee2000 @wdvr @chauhang @penguinwu @voznesenskym @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @jiayisunx @chenyang78 @kadeng @amjames | true |
2,822,594,115 | DISABLED test_ternary_right_associative (__main__.TestScript) | pytorch-bot[bot] | closed | [
"oncall: jit",
"triaged",
"module: flaky-tests",
"skipped",
"oncall: pt2",
"module: dynamo"
] | 2 | NONE | Platforms: dynamo
This test was disabled because it is failing in CI. See [recent examples](https://hud.pytorch.org/flakytest?name=test_ternary_right_associative&suite=TestScript&limit=100) and the most recent trunk [workflow logs](https://github.com/pytorch/pytorch/runs/36448852261).
Over the past 3 hours, it has been determined flaky in 3 workflow(s) with 3 failures and 3 successes.
**Debugging instructions (after clicking on the recent samples link):**
DO NOT ASSUME THINGS ARE OKAY IF THE CI IS GREEN. We now shield flaky tests from developers so CI will thus be green but it will be harder to parse the logs.
To find relevant log snippets:
1. Click on the workflow logs linked above
2. Click on the Test step of the job so that it is expanded. Otherwise, the grepping will not work.
3. Grep for `test_ternary_right_associative`
4. There should be several instances run (as flaky tests are rerun in CI) from which you can study the logs.
<details><summary>Sample error message</summary>
```
Traceback (most recent call last):
File "/var/lib/jenkins/workspace/test/test_jit.py", line 6680, in test_ternary_right_associative
self.checkScript(plus_123, (1,))
File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/testing/_internal/jit_utils.py", line 483, in checkScript
source = textwrap.dedent(inspect.getsource(script))
File "/opt/conda/envs/py_3.9/lib/python3.9/inspect.py", line 1024, in getsource
lines, lnum = getsourcelines(object)
File "/opt/conda/envs/py_3.9/lib/python3.9/inspect.py", line 1006, in getsourcelines
lines, lnum = findsource(object)
File "/opt/conda/envs/py_3.9/lib/python3.9/inspect.py", line 831, in findsource
lines = linecache.getlines(file, module.__dict__)
File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/_dynamo/convert_frame.py", line 1400, in __call__
return self._torchdynamo_orig_callable(
File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/_dynamo/convert_frame.py", line 1184, in __call__
result = self._inner_convert(
File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/_dynamo/convert_frame.py", line 565, in __call__
return _compile(
File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/_dynamo/convert_frame.py", line 1048, in _compile
raise InternalTorchDynamoError(
File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/_dynamo/convert_frame.py", line 997, in _compile
guarded_code = compile_inner(code, one_graph, hooks, transform)
File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/_utils_internal.py", line 95, in wrapper_function
return function(*args, **kwargs)
File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/_dynamo/convert_frame.py", line 726, in compile_inner
return _compile_inner(code, one_graph, hooks, transform)
File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/_dynamo/convert_frame.py", line 760, in _compile_inner
out_code = transform_code_object(code, transform)
File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/_dynamo/bytecode_transformation.py", line 1404, in transform_code_object
transformations(instructions, code_options)
File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/_dynamo/convert_frame.py", line 236, in _fn
return fn(*args, **kwargs)
File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/_dynamo/convert_frame.py", line 680, in transform
tracer.run()
File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/_dynamo/symbolic_convert.py", line 2906, in run
super().run()
File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/_dynamo/symbolic_convert.py", line 1078, in run
while self.step():
File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/_dynamo/symbolic_convert.py", line 988, in step
self.dispatch_table[inst.opcode](self, inst)
File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/_dynamo/symbolic_convert.py", line 3087, in RETURN_VALUE
self._return(inst)
File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/_dynamo/symbolic_convert.py", line 3062, in _return
and not self.symbolic_locals_contain_module_class()
File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/_dynamo/symbolic_convert.py", line 3051, in symbolic_locals_contain_module_class
if isinstance(v, UserDefinedClassVariable) and issubclass(
File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/_dynamo/variables/base.py", line 191, in __instancecheck__
instance = instance.realize()
File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/_dynamo/variables/lazy.py", line 67, in realize
self._cache.realize()
File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/_dynamo/variables/lazy.py", line 33, in realize
self.vt = VariableTracker.build(tx, self.value, source)
File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/_dynamo/variables/base.py", line 456, in build
return builder.VariableBuilder(tx, source)(value)
File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/_dynamo/variables/builder.py", line 384, in __call__
vt = self._wrap(value)
File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/_dynamo/variables/builder.py", line 619, in _wrap
result = dict(
File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/_dynamo/variables/builder.py", line 619, in <genexpr>
result = dict(
torch._dynamo.exc.InternalTorchDynamoError: RuntimeError: dictionary changed size during iteration
from user code:
File "/opt/conda/envs/py_3.9/lib/python3.9/linecache.py", line 43, in getlines
return cache[filename][2]
Set TORCH_LOGS="+dynamo" and TORCHDYNAMO_VERBOSE=1 for more information
You can suppress this exception and fall back to eager by setting:
import torch._dynamo
torch._dynamo.config.suppress_errors = True
To execute this test, run the following from the base repo dir:
PYTORCH_TEST_WITH_DYNAMO=1 python test/test_jit.py TestScript.test_ternary_right_associative
This message can be suppressed by setting PYTORCH_PRINT_REPRO_ON_FAILURE=0
```
</details>
Test file path: `test_jit.py`
cc @EikanWang @jgong5 @wenzhe-nrv @sanchitintel @clee2000 @wdvr @chauhang @penguinwu @voznesenskym @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @jiayisunx @chenyang78 @kadeng @amjames | true |
2,822,594,057 | DISABLED test_add_tuple_non_optional (__main__.TestScript) | pytorch-bot[bot] | closed | [
"oncall: jit",
"triaged",
"module: flaky-tests",
"skipped",
"oncall: pt2",
"module: dynamo"
] | 2 | NONE | Platforms: dynamo
This test was disabled because it is failing in CI. See [recent examples](https://hud.pytorch.org/flakytest?name=test_add_tuple_non_optional&suite=TestScript&limit=100) and the most recent trunk [workflow logs](https://github.com/pytorch/pytorch/runs/36456193176).
Over the past 3 hours, it has been determined flaky in 3 workflow(s) with 3 failures and 3 successes.
**Debugging instructions (after clicking on the recent samples link):**
DO NOT ASSUME THINGS ARE OKAY IF THE CI IS GREEN. We now shield flaky tests from developers so CI will thus be green but it will be harder to parse the logs.
To find relevant log snippets:
1. Click on the workflow logs linked above
2. Click on the Test step of the job so that it is expanded. Otherwise, the grepping will not work.
3. Grep for `test_add_tuple_non_optional`
4. There should be several instances run (as flaky tests are rerun in CI) from which you can study the logs.
<details><summary>Sample error message</summary>
```
Traceback (most recent call last):
File "/var/lib/jenkins/workspace/test/test_jit.py", line 11351, in test_add_tuple_non_optional
self.checkScript(foo, (inp,))
File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/testing/_internal/jit_utils.py", line 483, in checkScript
source = textwrap.dedent(inspect.getsource(script))
File "/opt/conda/envs/py_3.9/lib/python3.9/inspect.py", line 1024, in getsource
lines, lnum = getsourcelines(object)
File "/opt/conda/envs/py_3.9/lib/python3.9/inspect.py", line 1006, in getsourcelines
lines, lnum = findsource(object)
File "/opt/conda/envs/py_3.9/lib/python3.9/inspect.py", line 831, in findsource
lines = linecache.getlines(file, module.__dict__)
File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/_dynamo/convert_frame.py", line 1400, in __call__
return self._torchdynamo_orig_callable(
File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/_dynamo/convert_frame.py", line 1184, in __call__
result = self._inner_convert(
File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/_dynamo/convert_frame.py", line 565, in __call__
return _compile(
File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/_dynamo/convert_frame.py", line 1048, in _compile
raise InternalTorchDynamoError(
File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/_dynamo/convert_frame.py", line 997, in _compile
guarded_code = compile_inner(code, one_graph, hooks, transform)
File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/_utils_internal.py", line 95, in wrapper_function
return function(*args, **kwargs)
File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/_dynamo/convert_frame.py", line 726, in compile_inner
return _compile_inner(code, one_graph, hooks, transform)
File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/_dynamo/convert_frame.py", line 760, in _compile_inner
out_code = transform_code_object(code, transform)
File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/_dynamo/bytecode_transformation.py", line 1404, in transform_code_object
transformations(instructions, code_options)
File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/_dynamo/convert_frame.py", line 236, in _fn
return fn(*args, **kwargs)
File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/_dynamo/convert_frame.py", line 680, in transform
tracer.run()
File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/_dynamo/symbolic_convert.py", line 2906, in run
super().run()
File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/_dynamo/symbolic_convert.py", line 1078, in run
while self.step():
File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/_dynamo/symbolic_convert.py", line 988, in step
self.dispatch_table[inst.opcode](self, inst)
File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/_dynamo/symbolic_convert.py", line 3087, in RETURN_VALUE
self._return(inst)
File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/_dynamo/symbolic_convert.py", line 3062, in _return
and not self.symbolic_locals_contain_module_class()
File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/_dynamo/symbolic_convert.py", line 3051, in symbolic_locals_contain_module_class
if isinstance(v, UserDefinedClassVariable) and issubclass(
File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/_dynamo/variables/base.py", line 191, in __instancecheck__
instance = instance.realize()
File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/_dynamo/variables/lazy.py", line 67, in realize
self._cache.realize()
File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/_dynamo/variables/lazy.py", line 33, in realize
self.vt = VariableTracker.build(tx, self.value, source)
File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/_dynamo/variables/base.py", line 456, in build
return builder.VariableBuilder(tx, source)(value)
File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/_dynamo/variables/builder.py", line 384, in __call__
vt = self._wrap(value)
File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/_dynamo/variables/builder.py", line 619, in _wrap
result = dict(
File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/_dynamo/variables/builder.py", line 619, in <genexpr>
result = dict(
torch._dynamo.exc.InternalTorchDynamoError: RuntimeError: dictionary changed size during iteration
from user code:
File "/opt/conda/envs/py_3.9/lib/python3.9/linecache.py", line 43, in getlines
return cache[filename][2]
Set TORCH_LOGS="+dynamo" and TORCHDYNAMO_VERBOSE=1 for more information
You can suppress this exception and fall back to eager by setting:
import torch._dynamo
torch._dynamo.config.suppress_errors = True
To execute this test, run the following from the base repo dir:
PYTORCH_TEST_WITH_DYNAMO=1 python test/test_jit.py TestScript.test_add_tuple_non_optional
This message can be suppressed by setting PYTORCH_PRINT_REPRO_ON_FAILURE=0
```
</details>
Test file path: `test_jit.py`
cc @EikanWang @jgong5 @wenzhe-nrv @sanchitintel @clee2000 @wdvr @chauhang @penguinwu @voznesenskym @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @jiayisunx @chenyang78 @kadeng @amjames | true |
2,822,490,814 | async fx compile | aorenste | closed | [
"Merged",
"ciflow/trunk",
"topic: not user facing",
"module: inductor",
"ciflow/inductor"
] | 5 | CONTRIBUTOR | Adds the ability to run the selected out-of-process fx compile scheme in async mode - where we kick off the compile and then run eagerly until the compile is finished.
Added a test which runs a tiny model in a loop, verifying that we execute it first eagerly and then compiled.
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #146135
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @amjames @desertfire @chauhang @aakhundov
Differential Revision: [D71135546](https://our.internmc.facebook.com/intern/diff/D71135546) | true |
2,822,490,720 | Subprocess compile | aorenste | closed | [
"Merged",
"Reverted",
"ciflow/trunk",
"release notes: fx",
"fx",
"module: inductor",
"ciflow/inductor",
"ci-no-td"
] | 10 | CONTRIBUTOR | Add a mode to `fx_codegen_and_compile()` to compile in a separate process. This is to prepare for async compile where we'll compile and run eager in parallel (and also be able to move the compile phase to a remote computer).
Added a test which runs the test_torchinductor tests with subprocess compiling turned on.
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #146134
cc @ezyang @SherlockNoMad @EikanWang @jgong5 @wenzhe-nrv @voznesenskym @penguinwu @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @amjames @desertfire @chauhang @aakhundov | true |
2,822,459,516 | Apply ruff fixes to torch/**/*py | cyyever | closed | [
"oncall: distributed",
"oncall: jit",
"triaged",
"open source",
"release notes: quantization",
"fx",
"module: inductor",
"module: dynamo",
"ciflow/inductor",
"release notes: export"
] | 1 | COLLABORATOR | Fixes #ISSUE_NUMBER
cc @H-Huang @awgu @kwen2501 @wanchaol @fegin @fduwjj @wz337 @wconstab @d4l3k @c-p-i-o @EikanWang @jgong5 @wenzhe-nrv @sanchitintel @ezyang @SherlockNoMad @voznesenskym @penguinwu @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @amjames @desertfire @chauhang @aakhundov | true |
2,822,431,820 | [2/N] Enable ruff F841 on distributed tests | cyyever | closed | [
"oncall: distributed",
"open source",
"Merged",
"ciflow/trunk",
"release notes: distributed (pipeline)"
] | 3 | COLLABORATOR | Fixes #ISSUE_NUMBER
cc @H-Huang @awgu @kwen2501 @wanchaol @fegin @fduwjj @wz337 @wconstab @d4l3k @c-p-i-o | true |
2,822,424,058 | Enable ruff F841 on distributed tests | cyyever | closed | [
"oncall: distributed",
"open source",
"Merged",
"ciflow/trunk",
"topic: not user facing"
] | 5 | COLLABORATOR | Fixes #ISSUE_NUMBER
cc @H-Huang @awgu @kwen2501 @wanchaol @fegin @fduwjj @wz337 @wconstab @d4l3k @c-p-i-o | true |
2,822,422,413 | [FSDP2] mixed precision: auto turn off `cast_forward_inputs` | leonardo0lyj | open | [
"triaged",
"module: fsdp"
] | 3 | NONE | Hi Andrew @awgu 😊,
I'm back with a tiny discussion regarding `MixedPrecision.cast_forward_inputs` 😁:
Recall the mixed precision of FSDP1 has two flags for [cast forward inputs](https://github.com/pytorch/pytorch/blob/e6704a2447a04349e6b021817a2bf2f601215e67/torch/distributed/fsdp/api.py#L226):
- `cast_forward_inputs`: cast non-root FSDP instance's input dtype to `param_dtype`
- `cast_root_forward_inputs`: cast root FSDP instance's input dtype to `param_dtype`
In FSDP2, the two flags are unified into one [`cast_forward_inputs`](https://github.com/pytorch/pytorch/blob/f358d4d00462616d98d272fc94829365e7ab4c21/torch/distributed/fsdp/_fully_shard/_fsdp_api.py#L47)
- `cast_forward_inputs`: cast both non-root and root FSDP instance's input dtype to `param_dtype`
I really like this unified API especially for its simplicity and debuggability (cheers!), but am slightly concerned about the performance:
- When a non-root FSDP instance's inputs are already in `param_dtype` (which can be the case due to the root's input dtype casting), there is no need to cast the input dtype again for each FSDP instance, especially since the cast comes with non-trivial CPU overhead ([an unnecessary `tree_map` over every `args/kwargs`](https://github.com/pytorch/pytorch/blob/f358d4d00462616d98d272fc94829365e7ab4c21/torch/distributed/fsdp/_fully_shard/_fsdp_state.py#L223)):
```python
if self._mp_policy.cast_forward_inputs and self._mp_policy.param_dtype:
with torch.profiler.record_function("FSDP::cast_forward_inputs"):
cast_fn = functools.partial(
_cast_fp_tensor, self._mp_policy.param_dtype
)
args, kwargs = tree_map(cast_fn, args), tree_map(cast_fn, kwargs)
```
- The overhead incurred by this extra input casting is proportional to the number of FSDP instances times the number of args.
*Solution*:
- FSDP1's two flags, although complex and not elegant, can avoid this overhead by setting `cast_root_forward_inputs = True` and `cast_forward_inputs = False`
- I believe FSDP2's unified flag can still avoid this overhead automatically, by relying on two internal flags:
i) the clamped `param_dtype` (clamped to `None` after `lazy_init()`), rather than the unreliable `self._mp_policy.param_dtype`, since `param_dtype` can be mutated after `fully_shard()` but before `lazy_init()`;
ii) `self._is_root`, to skip the input-dtype cast for non-root FSDP instances.
What do you think? Thanks 🙏
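To make the proposal concrete, here is a minimal sketch of the suggested guard. The helper name `maybe_cast_forward_inputs` and its explicit `param_dtype`/`is_root` arguments are illustrative stand-ins for the FSDP2 internals, not actual PyTorch code:

```python
import torch
from torch.utils._pytree import tree_map


def maybe_cast_forward_inputs(args, kwargs, param_dtype, is_root):
    # Proposed behavior: only the root FSDP instance pays for the tree_map;
    # non-root instances return immediately, since the root already cast
    # the inputs to param_dtype on the way in.
    if param_dtype is None or not is_root:
        return args, kwargs

    def cast(x):
        if isinstance(x, torch.Tensor) and x.is_floating_point():
            return x.to(param_dtype)
        return x

    return tree_map(cast, args), tree_map(cast, kwargs)
```

Under this sketch the per-instance CPU cost becomes a single branch for every non-root instance, instead of a full `tree_map` over `args`/`kwargs`.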
cc @zhaojuanmao @mrshenli @rohan-varma @awgu @fegin @kwen2501 @chauhang | true |
2,822,398,811 | torch.compile on Mamba2 model produces NaNs | emmay78 | open | [
"triaged",
"oncall: pt2",
"module: dynamo"
] | 4 | NONE | ### 🐛 Describe the bug
Using `torch.compile` with the Inductor backend on the Mamba2 model in both fp32 and bf16 causes NaNs to appear in the forward pass. Removing the compile line from the reproducer below gives the expected numerical results.
Requires: `mamba_ssm`, `transformers`, `datasets`
Code snippet:
```python
import torch
from torch.nn import CrossEntropyLoss
from mamba_ssm.models.mixer_seq_simple import MambaLMHeadModel
from datasets import load_dataset
from transformers import AutoTokenizer
from tqdm.auto import tqdm
model_name = "state-spaces/mamba2-780m"
batch_size = 8
seq_length = 1024
learning_rate = 1e-6
num_epochs = 1
def forward_with_loss(self, input_ids, labels=None):
hidden_states = self.backbone(input_ids)
lm_logits = self.lm_head(hidden_states)
if labels is not None:
shift_logits = lm_logits[..., :-1, :].contiguous()
shift_labels = labels[..., 1:].contiguous()
loss_fct = CrossEntropyLoss()
shift_logits = shift_logits.view(-1, self.backbone.embedding.weight.size()[0])
shift_labels = shift_labels.view(-1)
shift_labels = shift_labels.to(shift_logits.device)
loss = loss_fct(shift_logits, shift_labels)
return (loss,)
else:
return lm_logits
MambaLMHeadModel.forward = forward_with_loss
model = MambaLMHeadModel.from_pretrained(model_name, dtype=torch.float32, device="cuda")
dataset = load_dataset("tatsu-lab/alpaca", split="train")
tokenizer = AutoTokenizer.from_pretrained("EleutherAI/gpt-neox-20b")
tokenizer.pad_token = tokenizer.eos_token
def tokenize_function(examples):
result = tokenizer(
examples["text"],
padding="max_length",
truncation=True,
max_length=seq_length,
)
result["labels"] = result["input_ids"].copy()
return result
tokenized_datasets = dataset.map(tokenize_function, batched=True)
tokenized_datasets.set_format(type="torch", columns=["input_ids", "labels"])
dataloader = torch.utils.data.DataLoader(
tokenized_datasets, batch_size=batch_size, shuffle=True
)
optimizer = torch.optim.Adam(model.parameters(), lr=learning_rate)
model = torch.compile(model, backend="inductor")
model.train()
progress_bar = tqdm(range(len(dataloader)))
for batch in dataloader:
batch = {k: v.to("cuda") for k, v in batch.items()}
outputs = model(**batch)
loss = outputs[0]
loss.backward()
optimizer.step()
optimizer.zero_grad()
progress_bar.set_description(f"Loss: {loss.item():.4f}")
progress_bar.update(1)
```
Output:
```
Map: 100%|█████████████████████████████████████████████████████████████████████████| 52002/52002 [00:38<00:00, 1337.85 examples/s]
0%| | 0/6501 [00:00<?, ?it/s]W0130 23:03:41.701000 1742402 site-packages/torch/_higher_order_ops/triton_kernel_wrap.py:504] [4/0] Encountered an exception in identify_mutated_tensors, assuming every input is mutated
W0130 23:03:41.701000 1742402 site-packages/torch/_higher_order_ops/triton_kernel_wrap.py:504] [4/0] Traceback (most recent call last):
W0130 23:03:41.701000 1742402 site-packages/torch/_higher_order_ops/triton_kernel_wrap.py:504] [4/0] File "/n/netscratch/idreos_lab/Lab/emyang/mamba-qat/pytorch-3.12/lib/python3.12/site-packages/torch/_higher_order_ops/triton_kernel_wrap.py", line 483, in identify_mutated_tensors
W0130 23:03:41.701000 1742402 site-packages/torch/_higher_order_ops/triton_kernel_wrap.py:504] [4/0] ttir_module, ordered_tensor_names = generate_ttir(kernel, kwargs)
W0130 23:03:41.701000 1742402 site-packages/torch/_higher_order_ops/triton_kernel_wrap.py:504] [4/0] ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
W0130 23:03:41.701000 1742402 site-packages/torch/_higher_order_ops/triton_kernel_wrap.py:504] [4/0] File "/n/netscratch/idreos_lab/Lab/emyang/mamba-qat/pytorch-3.12/lib/python3.12/site-packages/torch/_higher_order_ops/triton_kernel_wrap.py", line 137, in generate_ttir
W0130 23:03:41.701000 1742402 site-packages/torch/_higher_order_ops/triton_kernel_wrap.py:504] [4/0] assert isinstance(kernel, JITFunction)
W0130 23:03:41.701000 1742402 site-packages/torch/_higher_order_ops/triton_kernel_wrap.py:504] [4/0] AssertionError
W0130 23:03:41.727000 1742402 site-packages/torch/_higher_order_ops/triton_kernel_wrap.py:504] [4/0] Encountered an exception in identify_mutated_tensors, assuming every input is mutated
W0130 23:03:41.727000 1742402 site-packages/torch/_higher_order_ops/triton_kernel_wrap.py:504] [4/0] Traceback (most recent call last):
W0130 23:03:41.727000 1742402 site-packages/torch/_higher_order_ops/triton_kernel_wrap.py:504] [4/0] File "/n/netscratch/idreos_lab/Lab/emyang/mamba-qat/pytorch-3.12/lib/python3.12/site-packages/torch/_higher_order_ops/triton_kernel_wrap.py", line 483, in identify_mutated_tensors
W0130 23:03:41.727000 1742402 site-packages/torch/_higher_order_ops/triton_kernel_wrap.py:504] [4/0] ttir_module, ordered_tensor_names = generate_ttir(kernel, kwargs)
W0130 23:03:41.727000 1742402 site-packages/torch/_higher_order_ops/triton_kernel_wrap.py:504] [4/0] ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
W0130 23:03:41.727000 1742402 site-packages/torch/_higher_order_ops/triton_kernel_wrap.py:504] [4/0] File "/n/netscratch/idreos_lab/Lab/emyang/mamba-qat/pytorch-3.12/lib/python3.12/site-packages/torch/_higher_order_ops/triton_kernel_wrap.py", line 137, in generate_ttir
W0130 23:03:41.727000 1742402 site-packages/torch/_higher_order_ops/triton_kernel_wrap.py:504] [4/0] assert isinstance(kernel, JITFunction)
W0130 23:03:41.727000 1742402 site-packages/torch/_higher_order_ops/triton_kernel_wrap.py:504] [4/0] AssertionError
W0130 23:03:41.987000 1742402 site-packages/torch/_higher_order_ops/triton_kernel_wrap.py:504] [4/0] Encountered an exception in identify_mutated_tensors, assuming every input is mutated
W0130 23:03:41.987000 1742402 site-packages/torch/_higher_order_ops/triton_kernel_wrap.py:504] [4/0] Traceback (most recent call last):
W0130 23:03:41.987000 1742402 site-packages/torch/_higher_order_ops/triton_kernel_wrap.py:504] [4/0] File "/n/netscratch/idreos_lab/Lab/emyang/mamba-qat/pytorch-3.12/lib/python3.12/site-packages/torch/_higher_order_ops/triton_kernel_wrap.py", line 483, in identify_mutated_tensors
W0130 23:03:41.987000 1742402 site-packages/torch/_higher_order_ops/triton_kernel_wrap.py:504] [4/0] ttir_module, ordered_tensor_names = generate_ttir(kernel, kwargs)
W0130 23:03:41.987000 1742402 site-packages/torch/_higher_order_ops/triton_kernel_wrap.py:504] [4/0] ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
W0130 23:03:41.987000 1742402 site-packages/torch/_higher_order_ops/triton_kernel_wrap.py:504] [4/0] File "/n/netscratch/idreos_lab/Lab/emyang/mamba-qat/pytorch-3.12/lib/python3.12/site-packages/torch/_higher_order_ops/triton_kernel_wrap.py", line 137, in generate_ttir
W0130 23:03:41.987000 1742402 site-packages/torch/_higher_order_ops/triton_kernel_wrap.py:504] [4/0] assert isinstance(kernel, JITFunction)
W0130 23:03:41.987000 1742402 site-packages/torch/_higher_order_ops/triton_kernel_wrap.py:504] [4/0] AssertionError
/n/netscratch/idreos_lab/Lab/emyang/mamba-qat/pytorch-3.12/lib/python3.12/site-packages/torch/_dynamo/variables/functions.py:725: UserWarning: Graph break due to unsupported builtin causal_conv1d_cuda.PyCapsule.causal_conv1d_fwd. This function is either a Python builtin (e.g. _warnings.warn) or a third-party C/C++ Python extension (perhaps created with pybind). If it is a Python builtin, please file an issue on GitHub so the PyTorch team can add support for it and see the next case for a workaround. If it is a third-party C/C++ Python extension, please either wrap it into a PyTorch-understood custom operator (see https://pytorch.org/tutorials/advanced/custom_ops_landing_page.html for more details) or, if it is traceable, use torch.compiler.allow_in_graph.
torch._dynamo.utils.warn_once(msg)
/n/netscratch/idreos_lab/Lab/emyang/mamba-qat/pytorch-3.12/lib/python3.12/site-packages/torch/_inductor/compile_fx.py:167: UserWarning: TensorFloat32 tensor cores for float32 matrix multiplication available but not enabled. Consider setting `torch.set_float32_matmul_precision('high')` for better performance.
warnings.warn(
W0130 23:04:36.583000 1742402 site-packages/torch/_higher_order_ops/triton_kernel_wrap.py:504] [4/1] Encountered an exception in identify_mutated_tensors, assuming every input is mutated
W0130 23:04:36.583000 1742402 site-packages/torch/_higher_order_ops/triton_kernel_wrap.py:504] [4/1] Traceback (most recent call last):
W0130 23:04:36.583000 1742402 site-packages/torch/_higher_order_ops/triton_kernel_wrap.py:504] [4/1] File "/n/netscratch/idreos_lab/Lab/emyang/mamba-qat/pytorch-3.12/lib/python3.12/site-packages/torch/_higher_order_ops/triton_kernel_wrap.py", line 483, in identify_mutated_tensors
W0130 23:04:36.583000 1742402 site-packages/torch/_higher_order_ops/triton_kernel_wrap.py:504] [4/1] ttir_module, ordered_tensor_names = generate_ttir(kernel, kwargs)
W0130 23:04:36.583000 1742402 site-packages/torch/_higher_order_ops/triton_kernel_wrap.py:504] [4/1] ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
W0130 23:04:36.583000 1742402 site-packages/torch/_higher_order_ops/triton_kernel_wrap.py:504] [4/1] File "/n/netscratch/idreos_lab/Lab/emyang/mamba-qat/pytorch-3.12/lib/python3.12/site-packages/torch/_higher_order_ops/triton_kernel_wrap.py", line 137, in generate_ttir
W0130 23:04:36.583000 1742402 site-packages/torch/_higher_order_ops/triton_kernel_wrap.py:504] [4/1] assert isinstance(kernel, JITFunction)
W0130 23:04:36.583000 1742402 site-packages/torch/_higher_order_ops/triton_kernel_wrap.py:504] [4/1] AssertionError
W0130 23:04:36.601000 1742402 site-packages/torch/_higher_order_ops/triton_kernel_wrap.py:504] [4/1] Encountered an exception in identify_mutated_tensors, assuming every input is mutated
W0130 23:04:36.601000 1742402 site-packages/torch/_higher_order_ops/triton_kernel_wrap.py:504] [4/1] Traceback (most recent call last):
W0130 23:04:36.601000 1742402 site-packages/torch/_higher_order_ops/triton_kernel_wrap.py:504] [4/1] File "/n/netscratch/idreos_lab/Lab/emyang/mamba-qat/pytorch-3.12/lib/python3.12/site-packages/torch/_higher_order_ops/triton_kernel_wrap.py", line 483, in identify_mutated_tensors
W0130 23:04:36.601000 1742402 site-packages/torch/_higher_order_ops/triton_kernel_wrap.py:504] [4/1] ttir_module, ordered_tensor_names = generate_ttir(kernel, kwargs)
W0130 23:04:36.601000 1742402 site-packages/torch/_higher_order_ops/triton_kernel_wrap.py:504] [4/1] ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
W0130 23:04:36.601000 1742402 site-packages/torch/_higher_order_ops/triton_kernel_wrap.py:504] [4/1] File "/n/netscratch/idreos_lab/Lab/emyang/mamba-qat/pytorch-3.12/lib/python3.12/site-packages/torch/_higher_order_ops/triton_kernel_wrap.py", line 137, in generate_ttir
W0130 23:04:36.601000 1742402 site-packages/torch/_higher_order_ops/triton_kernel_wrap.py:504] [4/1] assert isinstance(kernel, JITFunction)
W0130 23:04:36.601000 1742402 site-packages/torch/_higher_order_ops/triton_kernel_wrap.py:504] [4/1] AssertionError
W0130 23:04:36.644000 1742402 site-packages/torch/_higher_order_ops/triton_kernel_wrap.py:504] [4/1] Encountered an exception in identify_mutated_tensors, assuming every input is mutated
W0130 23:04:36.644000 1742402 site-packages/torch/_higher_order_ops/triton_kernel_wrap.py:504] [4/1] Traceback (most recent call last):
W0130 23:04:36.644000 1742402 site-packages/torch/_higher_order_ops/triton_kernel_wrap.py:504] [4/1] File "/n/netscratch/idreos_lab/Lab/emyang/mamba-qat/pytorch-3.12/lib/python3.12/site-packages/torch/_higher_order_ops/triton_kernel_wrap.py", line 483, in identify_mutated_tensors
W0130 23:04:36.644000 1742402 site-packages/torch/_higher_order_ops/triton_kernel_wrap.py:504] [4/1] ttir_module, ordered_tensor_names = generate_ttir(kernel, kwargs)
W0130 23:04:36.644000 1742402 site-packages/torch/_higher_order_ops/triton_kernel_wrap.py:504] [4/1] ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
W0130 23:04:36.644000 1742402 site-packages/torch/_higher_order_ops/triton_kernel_wrap.py:504] [4/1] File "/n/netscratch/idreos_lab/Lab/emyang/mamba-qat/pytorch-3.12/lib/python3.12/site-packages/torch/_higher_order_ops/triton_kernel_wrap.py", line 137, in generate_ttir
W0130 23:04:36.644000 1742402 site-packages/torch/_higher_order_ops/triton_kernel_wrap.py:504] [4/1] assert isinstance(kernel, JITFunction)
W0130 23:04:36.644000 1742402 site-packages/torch/_higher_order_ops/triton_kernel_wrap.py:504] [4/1] AssertionError
W0130 23:04:36.821000 1742402 site-packages/torch/_higher_order_ops/triton_kernel_wrap.py:504] [4/2] Encountered an exception in identify_mutated_tensors, assuming every input is mutated
W0130 23:04:36.821000 1742402 site-packages/torch/_higher_order_ops/triton_kernel_wrap.py:504] [4/2] Traceback (most recent call last):
W0130 23:04:36.821000 1742402 site-packages/torch/_higher_order_ops/triton_kernel_wrap.py:504] [4/2] File "/n/netscratch/idreos_lab/Lab/emyang/mamba-qat/pytorch-3.12/lib/python3.12/site-packages/torch/_higher_order_ops/triton_kernel_wrap.py", line 483, in identify_mutated_tensors
W0130 23:04:36.821000 1742402 site-packages/torch/_higher_order_ops/triton_kernel_wrap.py:504] [4/2] ttir_module, ordered_tensor_names = generate_ttir(kernel, kwargs)
W0130 23:04:36.821000 1742402 site-packages/torch/_higher_order_ops/triton_kernel_wrap.py:504] [4/2] ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
W0130 23:04:36.821000 1742402 site-packages/torch/_higher_order_ops/triton_kernel_wrap.py:504] [4/2] File "/n/netscratch/idreos_lab/Lab/emyang/mamba-qat/pytorch-3.12/lib/python3.12/site-packages/torch/_higher_order_ops/triton_kernel_wrap.py", line 137, in generate_ttir
W0130 23:04:36.821000 1742402 site-packages/torch/_higher_order_ops/triton_kernel_wrap.py:504] [4/2] assert isinstance(kernel, JITFunction)
W0130 23:04:36.821000 1742402 site-packages/torch/_higher_order_ops/triton_kernel_wrap.py:504] [4/2] AssertionError
W0130 23:04:36.838000 1742402 site-packages/torch/_higher_order_ops/triton_kernel_wrap.py:504] [4/2] Encountered an exception in identify_mutated_tensors, assuming every input is mutated
W0130 23:04:36.838000 1742402 site-packages/torch/_higher_order_ops/triton_kernel_wrap.py:504] [4/2] Traceback (most recent call last):
W0130 23:04:36.838000 1742402 site-packages/torch/_higher_order_ops/triton_kernel_wrap.py:504] [4/2] File "/n/netscratch/idreos_lab/Lab/emyang/mamba-qat/pytorch-3.12/lib/python3.12/site-packages/torch/_higher_order_ops/triton_kernel_wrap.py", line 483, in identify_mutated_tensors
W0130 23:04:36.838000 1742402 site-packages/torch/_higher_order_ops/triton_kernel_wrap.py:504] [4/2] ttir_module, ordered_tensor_names = generate_ttir(kernel, kwargs)
W0130 23:04:36.838000 1742402 site-packages/torch/_higher_order_ops/triton_kernel_wrap.py:504] [4/2] ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
W0130 23:04:36.838000 1742402 site-packages/torch/_higher_order_ops/triton_kernel_wrap.py:504] [4/2] File "/n/netscratch/idreos_lab/Lab/emyang/mamba-qat/pytorch-3.12/lib/python3.12/site-packages/torch/_higher_order_ops/triton_kernel_wrap.py", line 137, in generate_ttir
W0130 23:04:36.838000 1742402 site-packages/torch/_higher_order_ops/triton_kernel_wrap.py:504] [4/2] assert isinstance(kernel, JITFunction)
W0130 23:04:36.838000 1742402 site-packages/torch/_higher_order_ops/triton_kernel_wrap.py:504] [4/2] AssertionError
W0130 23:04:36.873000 1742402 site-packages/torch/_higher_order_ops/triton_kernel_wrap.py:504] [4/2] Encountered an exception in identify_mutated_tensors, assuming every input is mutated
W0130 23:04:36.873000 1742402 site-packages/torch/_higher_order_ops/triton_kernel_wrap.py:504] [4/2] Traceback (most recent call last):
W0130 23:04:36.873000 1742402 site-packages/torch/_higher_order_ops/triton_kernel_wrap.py:504] [4/2] File "/n/netscratch/idreos_lab/Lab/emyang/mamba-qat/pytorch-3.12/lib/python3.12/site-packages/torch/_higher_order_ops/triton_kernel_wrap.py", line 483, in identify_mutated_tensors
W0130 23:04:36.873000 1742402 site-packages/torch/_higher_order_ops/triton_kernel_wrap.py:504] [4/2] ttir_module, ordered_tensor_names = generate_ttir(kernel, kwargs)
W0130 23:04:36.873000 1742402 site-packages/torch/_higher_order_ops/triton_kernel_wrap.py:504] [4/2] ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
W0130 23:04:36.873000 1742402 site-packages/torch/_higher_order_ops/triton_kernel_wrap.py:504] [4/2] File "/n/netscratch/idreos_lab/Lab/emyang/mamba-qat/pytorch-3.12/lib/python3.12/site-packages/torch/_higher_order_ops/triton_kernel_wrap.py", line 137, in generate_ttir
W0130 23:04:36.873000 1742402 site-packages/torch/_higher_order_ops/triton_kernel_wrap.py:504] [4/2] assert isinstance(kernel, JITFunction)
W0130 23:04:36.873000 1742402 site-packages/torch/_higher_order_ops/triton_kernel_wrap.py:504] [4/2] AssertionError
/n/netscratch/idreos_lab/Lab/emyang/mamba-qat/mamba/mamba_ssm/utils/hf.py:18: FutureWarning: You are using `torch.load` with `weights_only=False` (the current default value), which uses the default pickle module implicitly. It is possible to construct malicious pickle data which will execute arbitrary code during unpickling (See https://github.com/pytorch/pytorch/blob/main/SECURITY.md#untrusted-models for more details). In a future release, the default value for `weights_only` will be flipped to `True`. This limits the functions that could be executed during unpickling. Arbitrary objects will no longer be allowed to be loaded via this mode unless they are explicitly allowlisted by the user via `torch.serialization.add_safe_globals`. We recommend you start setting `weights_only=True` for any use case where you don't have full control of the loaded file. Please open an issue on GitHub for any issues related to this experimental feature.
return torch.load(resolved_archive_file, map_location=mapped_device)
Loss: nan: 2%|█▍ | 122/6501 [05:07<1:42:48, 1.03it/s]
```
cc @chauhang @penguinwu @voznesenskym @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @amjames @Chillee @drisspg @bdhirsh @ezyang
### Error logs
Loss is NaN after the first forward pass. Adding pre-forward hooks using `torch.distributed._tools.mod_tracker.ModTracker` also confirms that NaNs appear in the forward pass of the compiled model.
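To localize which submodule first produces NaNs, a hedged sketch using ordinary forward hooks on the eager module (hook behavior under `torch.compile` can vary, so hooking the uncompiled module is safest):

```python
import torch
import torch.nn as nn


def install_nan_hooks(model):
    """Register forward hooks that report the first module emitting NaNs."""
    def make_hook(name):
        def hook(mod, inputs, output):
            outs = output if isinstance(output, tuple) else (output,)
            for o in outs:
                if isinstance(o, torch.Tensor) and torch.isnan(o).any():
                    raise RuntimeError(f"NaN first appeared in {name!r}")
        return hook

    for name, mod in model.named_modules():
        mod.register_forward_hook(make_hook(name))


# Tiny stand-in model; the same helper would be applied to the Mamba model
# before (or instead of) torch.compile.
model = nn.Sequential(nn.Linear(4, 4), nn.ReLU())
install_nan_hooks(model)
out = model(torch.zeros(2, 4))  # clean inputs: no hook fires
```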
### Versions
```
PyTorch version: 2.5.1
Is debug build: False
CUDA used to build PyTorch: 12.4
ROCM used to build PyTorch: N/A
OS: Rocky Linux release 8.9 (Green Obsidian) (x86_64)
GCC version: (GCC) 12.2.0
Clang version: 18.1.8 (Red Hat 18.1.8-1.module+el8.10.0+1875+4f0b06db)
CMake version: Could not collect
Libc version: glibc-2.28
Python version: 3.12.7 | packaged by conda-forge | (main, Oct 4 2024, 16:05:46) [GCC 13.3.0] (64-bit runtime)
Python platform: Linux-4.18.0-513.18.1.el8_9.x86_64-x86_64-with-glibc2.28
Is CUDA available: True
CUDA runtime version: 12.4.131
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration: GPU 0: NVIDIA H100 80GB HBM3
Nvidia driver version: 560.35.03
cuDNN version: Could not collect
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Byte Order: Little Endian
CPU(s): 96
On-line CPU(s) list: 0-95
Thread(s) per core: 1
Core(s) per socket: 48
Socket(s): 2
NUMA node(s): 2
Vendor ID: AuthenticAMD
CPU family: 25
Model: 17
Model name: AMD EPYC 9454 48-Core Processor
Stepping: 1
CPU MHz: 2349.964
CPU max MHz: 3810.7910
CPU min MHz: 1500.0000
BogoMIPS: 5499.91
Virtualization: AMD-V
L1d cache: 32K
L1i cache: 32K
L2 cache: 1024K
L3 cache: 32768K
NUMA node0 CPU(s): 0-47
NUMA node1 CPU(s): 48-95
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ht syscall nx mmxext fxsr_opt pdpe1gb rdtscp lm constant_tsc rep_good amd_lbr_v2 nopl nonstop_tsc cpuid extd_apicid aperfmperf pni pclmulqdq monitor ssse3 fma cx16 pcid sse4_1 sse4_2 x2apic movbe popcnt aes xsave avx f16c rdrand lahf_lm cmp_legacy svm extapic cr8_legacy abm sse4a misalignsse 3dnowprefetch osvw ibs skinit wdt tce topoext perfctr_core perfctr_nb bpext perfctr_llc mwaitx cpb cat_l3 cdp_l3 invpcid_single hw_pstate ssbd mba perfmon_v2 ibrs ibpb stibp ibrs_enhanced vmmcall fsgsbase bmi1 avx2 smep bmi2 erms invpcid cqm rdt_a avx512f avx512dq rdseed adx smap avx512ifma clflushopt clwb avx512cd sha_ni avx512bw avx512vl xsaveopt xsavec xgetbv1 xsaves cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local avx512_bf16 clzero irperf xsaveerptr wbnoinvd amd_ppin cppc arat npt lbrv svm_lock nrip_save tsc_scale vmcb_clean flushbyasid decodeassists pausefilter pfthreshold avic v_vmsave_vmload vgif x2avic v_spec_ctrl avx512vbmi umip pku ospke avx512_vbmi2 gfni vaes vpclmulqdq avx512_vnni avx512_bitalg avx512_vpopcntdq la57 rdpid overflow_recov succor smca fsrm flush_l1d
Versions of relevant libraries:
[pip3] numpy==2.1.3
[pip3] torch==2.5.1
[pip3] torchao==0.7.0
[pip3] torchaudio==2.5.1
[pip3] torchvision==0.20.1
[pip3] triton==3.1.0
[conda] blas 2.116 mkl conda-forge
[conda] blas-devel 3.9.0 16_linux64_mkl conda-forge
[conda] cuda-cudart 12.4.127 0 nvidia
[conda] cuda-cupti 12.4.127 0 nvidia
[conda] cuda-libraries 12.4.1 0 nvidia
[conda] cuda-nvrtc 12.4.127 0 nvidia
[conda] cuda-nvtx 12.4.127 0 nvidia
[conda] cuda-opencl 12.6.77 0 nvidia
[conda] cuda-runtime 12.4.1 0 nvidia
[conda] libblas 3.9.0 16_linux64_mkl conda-forge
[conda] libcblas 3.9.0 16_linux64_mkl conda-forge
[conda] libcublas 12.4.5.8 0 nvidia
[conda] libcufft 11.2.1.3 0 nvidia
[conda] libcurand 10.3.7.77 0 nvidia
[conda] libcusolver 11.6.1.9 0 nvidia
[conda] libcusparse 12.3.1.170 0 nvidia
[conda] liblapack 3.9.0 16_linux64_mkl conda-forge
[conda] liblapacke 3.9.0 16_linux64_mkl conda-forge
[conda] libnvjitlink 12.4.127 0 nvidia
[conda] mkl 2022.1.0 h84fe81f_915 conda-forge
[conda] mkl-devel 2022.1.0 ha770c72_916 conda-forge
[conda] mkl-include 2022.1.0 h84fe81f_915 conda-forge
[conda] numpy 2.1.3 py312h58c1407_0 conda-forge
[conda] pytorch 2.5.1 py3.12_cuda12.4_cudnn9.1.0_0 pytorch
[conda] pytorch-cuda 12.4 hc786d27_7 pytorch
[conda] pytorch-mutex 1.0 cuda pytorch
[conda] torchao 0.7.0 pypi_0 pypi
[conda] torchaudio 2.5.1 py312_cu124 pytorch
[conda] torchtriton 3.1.0 py312 pytorch
[conda] torchvision 0.20.1 py312_cu124 pytorch
``` | true |
2,822,389,492 | TransformerEncoderLayer returns very different results on float64 | twoertwein | closed | [] | 1 | CONTRIBUTOR | ### 🐛 Describe the bug
`TransformerEncoderLayer` should return very similar results when run with float32 and float64, but they are very different:
```py
import torch
torch.manual_seed(0)
model = torch.nn.TransformerEncoderLayer(32, 1)
x = torch.rand(10, 32)
y = model(x)
y64 = model.to(dtype=torch.float64)(x.to(dtype=torch.float64))
# should get very similar results with float64
print((y - y64).abs().max().item()) # but get a giant difference: 2.3634471505652055
```
(I was using float64 to debug some numerical discrepancies.)
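One likely confounder worth ruling out: a freshly constructed `TransformerEncoderLayer` is in train mode with the default `dropout=0.1`, so the float32 and float64 calls draw different random dropout masks. Switching to eval mode (or constructing with `dropout=0.0`) should shrink the gap to float32 rounding error — a quick check:

```python
import torch

torch.manual_seed(0)
model = torch.nn.TransformerEncoderLayer(32, 1).eval()  # disables dropout
x = torch.rand(10, 32)
with torch.no_grad():
    y = model(x)
    y64 = model.to(torch.float64)(x.to(torch.float64))
diff = (y - y64).abs().max().item()
print(diff)  # expected: tiny (float32 rounding only), not O(1)
```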
### Versions
Collecting environment information...
PyTorch version: 2.6.0
Is debug build: False
CUDA used to build PyTorch: None
ROCM used to build PyTorch: N/A
OS: macOS 15.1.1 (arm64)
GCC version: Could not collect
Clang version: 16.0.0 (clang-1600.0.26.4)
CMake version: version 3.31.5
Libc version: N/A
Python version: 3.11.10 | packaged by conda-forge | (main, Sep 10 2024, 10:57:35) [Clang 17.0.6 ] (64-bit runtime)
Python platform: macOS-15.1.1-arm64-arm-64bit
Is CUDA available: False
CUDA runtime version: No CUDA
CUDA_MODULE_LOADING set to: N/A
GPU models and configuration: No CUDA
Nvidia driver version: No CUDA
cuDNN version: No CUDA
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Apple M2 Pro
Versions of relevant libraries:
[pip3] mypy==1.11.2
[pip3] mypy-extensions==1.0.0
[pip3] numpy==1.26.4
[pip3] onnx==1.17.0
[pip3] onnx2torch==1.5.15
[pip3] onnxconverter-common==1.14.0
[pip3] onnxmltools==1.12.0
[pip3] onnxruntime==1.19.2
[pip3] onnxscript==0.1.0.dev20241214
[pip3] optree==0.13.0
[pip3] skl2onnx==1.18.0
[pip3] tf2onnx==1.16.1
[pip3] torch==2.6.0
[pip3] torch-onnx==0.1.25
[pip3] torchaudio==2.6.0
[pip3] torchvision==0.20.1
[conda] numpy 1.26.4 pypi_0 pypi
[conda] onnx2torch 1.5.15 pypi_0 pypi
[conda] optree 0.13.0 pypi_0 pypi
[conda] torch 2.6.0 pypi_0 pypi
[conda] torch-onnx 0.1.25 pypi_0 pypi
[conda] torchaudio 2.6.0 pypi_0 pypi
[conda] torchvision 0.20.1 pypi_0 pypi | true |
2,822,377,736 | When I try to run rt-detr model in C++ libtorch i face the given error | PranavhShetty | open | [
"oncall: jit",
"module: windows"
] | 3 | NONE | ### 🐛 Describe the bug
When I try to run the RT-DETR model in C++ with LibTorch, I get the error below.
Sample Code to reproduce the problem:
```cpp
#include <torch/torch.h>
#include <torch/cuda.h>
#include <torch/script.h>
#include <iostream>
#include <Windows.h> // For HMODULE and basic Windows types
#include <psapi.h>
#include <opencv2/opencv.hpp>
#include <opencv2/core.hpp>
#include <opencv2/imgproc.hpp>
#include <opencv2/imgcodecs.hpp>
using namespace std;
int main() {
HMODULE torchCudaDll = LoadLibraryA("torch_cuda.dll");
try {
std::cout << "LibTorch version: " << TORCH_VERSION << std::endl;
std::cout << "LibTorch major version: " << TORCH_VERSION_MAJOR << std::endl;
std::cout << "LibTorch minor version: " << TORCH_VERSION_MINOR << std::endl;
std::cout << "LibTorch patch version: " << TORCH_VERSION_PATCH << std::endl;
if (!torch::cuda::is_available()) {
std::cerr << "CUDA is not available!" << std::endl;
return -1;
}
else {
std::cout << "CUDA is available\n";
}
std::string model_path = "C:/Users/prana/Downloads/rt-detr-v1.4.1.torchscript";
torch::jit::script::Module model;
try {
torch::NoGradGuard no_grad;
model = torch::jit::load(model_path, torch::kCUDA);
}
catch (const c10::Error& e) {
std::cerr << "Error loading the model: " << e.what() << std::endl;
return -1;
}
/* model.to(torch::kCUDA);*/
model.eval();
cv::Mat image = cv::imread("C:/Users/prana/Desktop/bhavith/images/Img_008_12108(0) (1)_316.png");
if (image.empty()) {
std::cerr << "Error loading the image" << std::endl;
return -1;
}
cv::imshow("image", image);
/*cv::waitKey(0);*/
cv::Mat input_image;
cv::cvtColor(image, input_image, cv::COLOR_BGR2RGB);
torch::Tensor image_tensor = torch::from_blob(input_image.data, { input_image.rows, input_image.cols, 3 }, torch::kByte);
image_tensor = image_tensor.toType(torch::kFloat32).div(255);
image_tensor = image_tensor.permute({ 2, 0, 1 });
image_tensor = image_tensor.unsqueeze(0);
image_tensor = image_tensor.to(torch::kCUDA);
std::vector<torch::jit::IValue> inputs{ image_tensor };
//try {
////torch::NoGradGuard no_grad; // Disable gradient calculation
torch::Tensor output = model.forward(inputs).toTensor();
output = output.to(torch::kCPU);
std::cout << output.slice(1, 0, 10) << std::endl;
//}
//catch (const c10::Error& e) {
//std::cerr << "Error during model inference: " << e.what() << std::endl;
//return -1;
//}
return 0;
}
catch (const std::exception& e) {
std::cerr << "Error: " << e.what() << std::endl;
return -1;
}
}
```
Error:
Error: The following operation failed in the TorchScript interpreter.
```
Traceback of TorchScript, serialized code (most recent call last):
File "code/torch/ultralytics/nn/tasks.py", line 85, in forward
_33 = (_7).forward(act1, (_6).forward(act1, _32, ), )
_34 = (_10).forward((_9).forward((_8).forward(_33, ), ), )
_35 = (_12).forward(act0, (_11).forward(_34, ), )
~~~~~~~~~~~~ <--- HERE
_36 = (_15).forward((_13).forward(_35, ), (_14).forward(_33, ), )
_37 = (_17).forward(act0, (_16).forward(act0, act, _36, ), )
File "code/torch/ultralytics/nn/modules/transformer.py", line 39, in forward
pos_dim = torch.div(embed_dim, CONSTANTS.c0, rounding_mode="trunc")
_7 = torch.arange(annotate(number, pos_dim), dtype=6, layout=0, device=torch.device("cpu"), pin_memory=False)
_8 = torch.div(_7, pos_dim)
~~~~~~~~~ <--- HERE
_9 = torch.to(CONSTANTS.c1, torch.device("cpu"), 6)
_10 = torch.reciprocal(torch.pow(torch.detach(_9), _8))
Traceback of TorchScript, original code (most recent call last):
c:\Users\Public\miniconda\envs\pytorch110-cu10.2\lib\site-packages\ultralytics\nn\modules\transformer.py(109): build_2d_sincos_position_embedding
c:\Users\Public\miniconda\envs\pytorch110-cu10.2\lib\site-packages\ultralytics\nn\modules\transformer.py(96): forward
c:\Users\Public\miniconda\envs\pytorch110-cu10.2\lib\site-packages\torch\nn\modules\module.py(1090): _slow_forward
c:\Users\Public\miniconda\envs\pytorch110-cu10.2\lib\site-packages\torch\nn\modules\module.py(1102): _call_impl
c:\Users\Public\miniconda\envs\pytorch110-cu10.2\lib\site-packages\ultralytics\nn\tasks.py(587): predict
c:\Users\Public\miniconda\envs\pytorch110-cu10.2\lib\site-packages\ultralytics\nn\tasks.py(112): forward
c:\Users\Public\miniconda\envs\pytorch110-cu10.2\lib\site-packages\torch\nn\modules\module.py(1090): _slow_forward
c:\Users\Public\miniconda\envs\pytorch110-cu10.2\lib\site-packages\torch\nn\modules\module.py(1102): _call_impl
c:\Users\Public\miniconda\envs\pytorch110-cu10.2\lib\site-packages\torch\jit_trace.py(958): trace_module
c:\Users\Public\miniconda\envs\pytorch110-cu10.2\lib\site-packages\torch\jit_trace.py(741): trace
c:\Users\Public\miniconda\envs\pytorch110-cu10.2\lib\site-packages\ultralytics\engine\exporter.py(434): export_torchscript
c:\Users\Public\miniconda\envs\pytorch110-cu10.2\lib\site-packages\ultralytics\engine\exporter.py(141): outer_func
c:\Users\Public\miniconda\envs\pytorch110-cu10.2\lib\site-packages\ultralytics\engine\exporter.py(355): call
c:\Users\Public\miniconda\envs\pytorch110-cu10.2\lib\site-packages\ultralytics\engine\model.py(737): export
C:\Users\Vijay M\AppData\Local\Temp\ipykernel_16012\1332778321.py(1): <module>
c:\Users\Public\miniconda\envs\pytorch110-cu10.2\lib\site-packages\IPython\core\interactiveshell.py(3508): run_code
c:\Users\Public\miniconda\envs\pytorch110-cu10.2\lib\site-packages\IPython\core\interactiveshell.py(3448): run_ast_nodes
c:\Users\Public\miniconda\envs\pytorch110-cu10.2\lib\site-packages\IPython\core\interactiveshell.py(3269): run_cell_async
c:\Users\Public\miniconda\envs\pytorch110-cu10.2\lib\site-packages\IPython\core\async_helpers.py(129): _pseudo_sync_runner
c:\Users\Public\miniconda\envs\pytorch110-cu10.2\lib\site-packages\IPython\core\interactiveshell.py(3064): _run_cell
c:\Users\Public\miniconda\envs\pytorch110-cu10.2\lib\site-packages\IPython\core\interactiveshell.py(3009): run_cell
c:\Users\Public\miniconda\envs\pytorch110-cu10.2\lib\site-packages\ipykernel\zmqshell.py(549): run_cell
c:\Users\Public\miniconda\envs\pytorch110-cu10.2\lib\site-packages\ipykernel\ipkernel.py(449): do_execute
c:\Users\Public\miniconda\envs\pytorch110-cu10.2\lib\site-packages\ipykernel\kernelbase.py(778): execute_request
c:\Users\Public\miniconda\envs\pytorch110-cu10.2\lib\site-packages\ipykernel\ipkernel.py(362): execute_request
c:\Users\Public\miniconda\envs\pytorch110-cu10.2\lib\site-packages\ipykernel\kernelbase.py(437): dispatch_shell
c:\Users\Public\miniconda\envs\pytorch110-cu10.2\lib\site-packages\ipykernel\kernelbase.py(534): process_one
c:\Users\Public\miniconda\envs\pytorch110-cu10.2\lib\site-packages\ipykernel\kernelbase.py(545): dispatch_queue
c:\Users\Public\miniconda\envs\pytorch110-cu10.2\lib\asyncio\events.py(81): _run
c:\Users\Public\miniconda\envs\pytorch110-cu10.2\lib\asyncio\base_events.py(1859): _run_once
c:\Users\Public\miniconda\envs\pytorch110-cu10.2\lib\asyncio\base_events.py(570): run_forever
c:\Users\Public\miniconda\envs\pytorch110-cu10.2\lib\site-packages\tornado\platform\asyncio.py(205): start
c:\Users\Public\miniconda\envs\pytorch110-cu10.2\lib\site-packages\ipykernel\kernelapp.py(739): start
c:\Users\Public\miniconda\envs\pytorch110-cu10.2\lib\site-packages\traitlets\config\application.py(1075): launch_instance
c:\Users\Public\miniconda\envs\pytorch110-cu10.2\lib\site-packages\ipykernel_launcher.py(18): <module>
c:\Users\Public\miniconda\envs\pytorch110-cu10.2\lib\runpy.py(87): _run_code
c:\Users\Public\miniconda\envs\pytorch110-cu10.2\lib\runpy.py(194): _run_module_as_main
RuntimeError: Expected all tensors to be on the same device, but found at least two devices, cuda:0 and cpu!
```
This is the error I face when I try to run inference with the RT-DETR model in C++ with CUDA.
Kindly help me solve this issue.
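Not part of the original report, but as an illustration of what likely causes this error: `torch.jit.trace` records device arguments such as `device="cpu"` as constants in the traced graph, so a model traced on CPU can mix devices when later fed CUDA inputs. A minimal sketch, where `PosEmbed` is a hypothetical stand-in for Ultralytics' `build_2d_sincos_position_embedding`:

```python
import torch

# Hypothetical module standing in for build_2d_sincos_position_embedding:
# it calls torch.arange with an explicit device, which torch.jit.trace
# records as a constant in the graph.
class PosEmbed(torch.nn.Module):
    def forward(self, x):
        pos = torch.arange(x.shape[-1], dtype=torch.float32, device="cpu")
        return x + pos

traced = torch.jit.trace(PosEmbed(), torch.zeros(4))
out = traced(torch.zeros(4))  # fine on CPU
# "cpu" is now baked into the traced graph; moving the module to CUDA or
# passing a CUDA input later mixes devices, producing the
# "found at least two devices" RuntimeError seen above.
print(traced.graph)
```

If this is indeed the cause, the fix would need to happen at export time (e.g. tracing on the target device), not in the C++ inference code.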
### Versions
This error appears when I try to run inference on an RT-DETR model in C++. The model was exported to TorchScript format from Ultralytics using PyTorch 2.6.0+cu118 in Python. In the same C++ code, a YOLOv11 model exported to TorchScript with the same method runs perfectly fine.
The LibTorch version used is 1.13.0+cu117, as I need the code to build with C++14.
Kindly help me out with this issue. I will keep working on it, and if I figure it out I will post the solution below.
Thank You
cc @EikanWang @jgong5 @wenzhe-nrv @sanchitintel @peterjc123 @mszhanyi @skyline75489 @nbcsm @iremyux @Blackhex | true |