| id | title | user | state | labels | comments | author_association | body | is_title |
|---|---|---|---|---|---|---|---|---|
3,018,372,613
|
[logging] Clean up dynamo_timed usages in cudagraph_trees
|
masnesral
|
closed
|
[
"Merged",
"ciflow/trunk",
"topic: not user facing",
"module: inductor",
"ciflow/inductor"
] | 3
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #152136
Summary: I'm investigating differences in total torch.compile overhead in our two main internal sources: dynamo_compile and pt2_compile_events. One source of discrepancy is due to cudagraphs overheads. Currently, we have a context manager that optionally attributes a dynamo_timed region to a cudagraph-related column logged to dynamo_compile, but _all_ dynamo_timed regions show up in pt2_compile_events (hence the discrepancy; pt2_compile_events is overcounting). We could filter out these specific events from pt2_compile_events when measuring overall overhead, but I'd argue that the timed regions we do NOT count as compiler-related overhead aren't worth logging in the first place. So I'm suggesting we just remove those instances.
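For readers unfamiliar with the setup, here is a rough, hypothetical sketch of the logging shape described above; none of these names are the real dynamo_timed/dynamo_compile/pt2_compile_events APIs, it only illustrates why one sink overcounts relative to the other:
```python
import time
from contextlib import contextmanager

EVENT_TABLE = []        # stand-in for the pt2_compile_events-style sink
OVERHEAD_COLUMNS = {}   # stand-in for the dynamo_compile-style overhead columns

@contextmanager
def timed_region(name, overhead_column=None):
    start = time.perf_counter()
    try:
        yield
    finally:
        elapsed = time.perf_counter() - start
        # Every timed region lands in the event table...
        EVENT_TABLE.append((name, elapsed))
        # ...but only regions that opt in contribute to the overhead columns,
        # so summing the event table overcounts "compile overhead".
        if overhead_column is not None:
            OVERHEAD_COLUMNS[overhead_column] = (
                OVERHEAD_COLUMNS.get(overhead_column, 0.0) + elapsed
            )
```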
Here's the production job with the discrepancy:
* dynamo_compile: https://fburl.com/scuba/dynamo_compile/3604eypl
* pt2_compile_events: https://fburl.com/scuba/pt2_compile_events/c2dv8sty
Test Plan:
torchbench nanogpt:
* tlparse: https://fburl.com/h1n2ascc
* dynamo_compile: https://fburl.com/scuba/dynamo_compile/sandbox/u37yrynp
* pt2_compile_events: https://fburl.com/scuba/pt2_compile_events/s7avd0di
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
| true
|
3,018,317,780
|
Unacceptable OOMs all the time.
|
Deathawaits4
|
open
|
[
"needs reproduction",
"module: cuda",
"module: memory usage",
"triaged"
] | 3
|
NONE
|
Hello,
I don't want to sound harsh, but PyTorch has ruined many of my training runs and wasted many hours of training time:
torch.OutOfMemoryError: CUDA out of memory. Tried to allocate 250.00 MiB. GPU 0 has a total capacity of 79.26 GiB of which 104.75 MiB is free. Process 1007710 has 79.14 GiB memory in use. Of the allocated memory 74.77 GiB is allocated by PyTorch, and 3.87 GiB is reserved by PyTorch but unallocated. If reserved but unallocated memory is large try setting PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True to avoid fragmentation. See documentation for Memory Management (https://pytorch.org/docs/stable/notes/cuda.html#environment-variables)
It is unacceptable that with 4 GiB of free memory, the allocator is unable to create large enough segments to finish a training run!
The worst part is that this keeps happening after the model has already trained for hours, even with expandable segments set to true.
This issue has plagued PyTorch for a long time and happens in every version.
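For completeness, a minimal sketch of how the allocator setting mentioned in the error message is typically enabled from Python (assuming it is set before the first CUDA allocation; exporting it in the shell environment works just as well):
```python
import os

# Must be in the environment before the CUDA caching allocator is initialized,
# i.e. before the first CUDA allocation; easiest is to set it before importing torch.
os.environ["PYTORCH_CUDA_ALLOC_CONF"] = "expandable_segments:True"

import torch

# Requires a CUDA device; subsequent allocations use expandable segments.
x = torch.empty(1024, 1024, device="cuda")
```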
cc @ptrblck @msaroufim @eqy @jerryzh168
| true
|
3,018,241,032
|
[RFC] Proposed Changes to Feature Tracking & Classification for PyTorch Releases starting Release 2.8
|
atalman
|
open
|
[
"triaged"
] | 0
|
CONTRIBUTOR
|
RFC Authors: @anitakat @atalman
Hello everyone,
Following feedback and discussion on existing gaps of the feature review process, below are proposed changes for which we are keen to have your input.
## Feature Tracking Process
Beginning with release 2.8, the PyTorch release will only track major features. At this time we do not have a comprehensive list and welcome examples from the community of what they would like tracked.
* All major features will require an RFC from the start with an estimated timeline. This will allow the maintainers to provide async feedback before feature implementation begins.
* The RFC for these major features will have a current status that will enable partners and the community at large to reference the progress of a given feature with ease. [Example of an RFC](https://github.com/pytorch/pytorch/issues/130249).
* These RFCs will be tagged and labelled appropriately for tracking and when a feature is complete/stable, they will be untagged and no longer tracked.
* Release notes will highlight new major features and improvements made to existing features, and will follow the new classification of API-Stable and API-Unstable.
For any features not on this list, the only requirement is to follow the path to stable below, to be classified as stable when ready.
## Feature Classification:
Beginning with release 2.8, feature submissions will be classified as either API-Stable or API-Unstable, and the previous classifications of Prototype, Beta and Stable will no longer be used.
### API-Stable
(Previously called Stable) An API-Stable feature means that the user value-add has been proven, the API isn’t expected to change, the feature is performant and all documentation exists to support end user adoption.
Examples of API-Stable features include Accelerated Transformers, DataLoader, Autograd Engine, and CUDA support in PyTorch CI/CD.
Commitment from the PyTorch team: We expect to maintain these features long term and generally there should be no major performance limitations or gaps in documentation. We also expect to maintain backwards compatibility (although breaking changes can happen and notice will be given one release ahead of time).
### API-Unstable
(Previously Prototype or Beta) Encompasses all features that are under active development, where the API may change based on user feedback, requisite performance improvements, or because coverage across operators is not yet complete.
Commitment from the PyTorch team: We are not committing to backwards compatibility. The APIs and performance characteristics of this feature may change.
### Classification Requirements
Requirement | API-Unstable | API-Stable | Path to API-Stable
-- | -- | -- | --
RFC Created | X | X | -
Doc Strings | X | X | -
Unit Tests | X | X | -
CI Coverage | X | X | -
Complete Workflow Coverage (e.g. CV or NLP) | | X | Phase 1
Recipe or Tutorial | | X | Phase 1
User Feedback (Features with User API surface) | | X | Phase 2
Dogfooding: 1-2 early adopter teams (internal or external) have found this feature useful and their feedback has been incorporated | | X | Phase 2
Design review / TL Signoff | | X | Phase 2
API Stability | | X | -
Full Op Coverage | | X | -
### Path To Stable API
<img width="778" alt="Image" src="https://github.com/user-attachments/assets/e9ae8fcc-6d4c-41fe-bcb6-26330f82fdd8" />
Thank you for reading and we look forward to your feedback.
Cheers,
Team PyTorch
| true
|
3,018,235,081
|
[ROCm] Fixes to enable VM-based MI300 CI runners
|
jithunnair-amd
|
closed
|
[
"module: rocm",
"open source",
"Merged",
"ciflow/trunk",
"topic: not user facing",
"ciflow/rocm"
] | 3
|
COLLABORATOR
|
New VM-based MI300 CI runners tested in https://github.com/pytorch/pytorch/pull/151708 exposed some issues in CI that this PR fixes:
* HSAKMT_DEBUG_LEVEL is a debug env var that was introduced to debug driver issues. However, in the new MI300 runners being tested, since they run inside a VM, the driver emits a debug message `Failed to map remapped mmio page on gpu_mem 0` when calling `rocminfo` or doing other GPU-related work. This results in multiple PyTorch unit tests failing when doing a string match on the stdout vs expected output.
* HSA_FORCE_FINE_GRAIN_PCIE was relevant for rccl performance improvement, but is not required now.
* amdsmi doesn't return metrics like [power_info](https://rocm.docs.amd.com/projects/amdsmi/en/latest/reference/amdsmi-py-api.html#amdsmi-get-power-cap-info) and [clock_info](https://rocm.docs.amd.com/projects/amdsmi/en/latest/reference/amdsmi-py-api.html#amdsmi-get-clock-info) in a VM ("Guest") environment. Return 0 as the default in cases where amdsmi returns "N/A"
* amdsmi throws an exception when calling `amdsmi.amdsmi_get_clock_info` on the VM-based runners. Temporarily skipping the unit test for MI300 until we find a resolution.
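A hypothetical sketch of the "default to 0 on N/A" behavior described above; `get_metric` and its usage are illustrative, not the actual PyTorch code (the amdsmi function name is taken from the docs linked above):
```python
def get_metric(fetch, *args, default=0):
    """Call an amdsmi query and fall back to `default` when the backend
    reports "N/A" (as seen in VM/"Guest" environments) or raises."""
    try:
        value = fetch(*args)
    except Exception:
        return default
    return default if value == "N/A" else value

# Example (assuming `amdsmi` is initialized and `handle` is a device handle):
# power_info = get_metric(amdsmi.amdsmi_get_power_cap_info, handle)
```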
cc @jeffdaily @sunway513 @pruthvistony @ROCmSupport @dllehr-amd @jataylo @hongxiayang @naromero77amd
| true
|
3,018,233,465
|
Remove some instances of uninitialized memory use
|
pganssle-google
|
open
|
[
"open source",
"topic: not user facing"
] | 5
|
CONTRIBUTOR
|
Two changes, one caught by MSAN, the other caught because it was blowing up tests.
The change in test_ir fixes a use-after-free by capturing the variable being closed over by value.
The change in debug_util initializes all values for the SourceLocation object.
| true
|
3,018,181,623
|
[C10D] Autograd Support for Collectives
|
wconstab
|
open
|
[
"oncall: distributed",
"triaged"
] | 0
|
CONTRIBUTOR
|
Building on #148690 and following from [this post](https://discuss.pytorch.org/t/supporting-autograd-for-collectives/219430), there are a few changes we should make to support autograd properly in our collective library.
**Problem:** Collectives today silently no-op during backwards
The first thing we should do since it's simplest is to prevent accidental silent incorrectness by issuing an error whenever the backwards pass of a collective that currently has no backwards formula is executed.
- https://github.com/pytorch/pytorch/issues/152127
Then, we should support backwards properly. It is shown to be useful in some cases for some ops. We should probably just support all of the ops.
**Option 1: Naive Implementation**
We can start by just implementing the backwards formulas as described in this table, for the ops we care about.
Forward collective | Backward formula | Notes
-- | -- | --
gather | scatter |
scatter | gather |
reduce (avg, sum, premul_sum) | broadcast | Bitwise ops not supported for grad (band, bor, bxor)
reduce (max, min) | Identity (for max/min src), scaled (for a tie), 0 (for others) |
reduce (product) | fwd_out / fwd_in * dout |
broadcast | reduce(sum) |
all_to_all | all_to_all |
all_reduce (avg, sum, premul_sum) | all_reduce(sum) | Common exception, e.g. Megatron TP, see below; bitwise ops not supported for grad (band, bor, bxor)
all_reduce (max, min) | all_reduce(sum) (for max/min src), 0 (for others) |
all_reduce (product) | fwd_out / fwd_in * allreduce(sum, dout) |
all_gather | reduce_scatter(sum) |
reduce_scatter | all_gather |
all_to_all | all_to_all |
The problem with this option is that during backwards, there will be no communication / compute overlap. For some use cases, this is not a problem because there is little or no opportunity to overlap the collective anyways, but for others it would be catastrophic.
For use cases where Option 1 is useful, we should just land the change to enable the backwards formula and unblock them.
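As a concrete illustration of Option 1, here is a minimal sketch of the all_gather row of the table as a custom autograd.Function with reduce_scatter(sum) as its backward. This is not the proposed implementation; it assumes an initialized process group, an even split, and gathering along dim 0:
```python
import torch
import torch.distributed as dist

class AllGatherWithGrad(torch.autograd.Function):
    """Naive Option 1 sketch: all_gather forward, reduce_scatter(sum) backward."""

    @staticmethod
    def forward(ctx, x):
        world_size = dist.get_world_size()
        # Gathered result concatenates each rank's shard along dim 0.
        out = x.new_empty((world_size * x.shape[0],) + tuple(x.shape[1:]))
        dist.all_gather_into_tensor(out, x)
        return out

    @staticmethod
    def backward(ctx, grad_out):
        world_size = dist.get_world_size()
        # Each rank keeps the sum of the gradients for its own shard.
        grad_in = grad_out.new_empty(
            (grad_out.shape[0] // world_size,) + tuple(grad_out.shape[1:])
        )
        dist.reduce_scatter_tensor(grad_in, grad_out.contiguous(), op=dist.ReduceOp.SUM)
        return grad_in
```
As noted above, a formula like this gives correct gradients but provides no communication/compute overlap during backwards on its own.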
**Option 2: Improved Overlap**
During forward, we also observe a 'wait_tensor' op (when using functional collectives), or a 'work.wait()' call (in C10D). Users may call this as late as possible in forwards, and this gives us a good place during backwards to start the backwards collective 'early' so it will overlap with backward compute. We could implement the backwards pass of the 'wait_tensor' op to launch an appropriate collective corresponding to the collective launched during forward before the wait. Credit to @fmassa for this idea.
1) dCollective = wait_tensor
2) dWait_tensor = the backwards for 'collective'
To implement (2) we'd have to store metadata about the collective we launched (or the backwards we want to run) on the Work object or some other way. This needs further design investigation.
The alternative to Option 2 is to just live with bad performance in eager mode and rely on torch.compile() to get overlap back during backwards. This may also be OK.
cc @H-Huang @awgu @wanchaol @fegin @fduwjj @wz337 @d4l3k
| true
|
3,018,174,869
|
AOTI cannot move tensors between cuda devices
|
yushangdi
|
open
|
[
"oncall: pt2",
"export-triaged",
"oncall: export"
] | 1
|
CONTRIBUTOR
|
### 🐛 Describe the bug
When we move tensors between cuda devices, AOTI just emits an `AOTI_TORCH_ERROR_CODE_CHECK(aoti_torch_copy_(buf0, arg0_1, 0));`, which doesn't actually change the device index. The resulting tensor is still on device 0.
Exported Program:
```
def forward(self, x):
x, = fx_pytree.tree_flatten_spec(([x], {}), self._in_spec)
_assert_tensor_metadata_default = torch.ops.aten._assert_tensor_metadata.default(x, dtype = torch.float32, device = device(type='cuda', index=0), layout = torch.strided); _assert_tensor_metadata_default = None
to = torch.ops.aten.to.dtype_layout(x, dtype = torch.float32, layout = torch.strided, device = device(type='cuda', index=1))
return pytree.tree_unflatten((x, to), self._out_spec)
```
```
import torch
class M(torch.nn.Module):
def forward(self, x):
y = x.to("cuda:1")
return x, y
x = torch.rand(100, device="cuda:0")
model = M().cuda()
ep = torch.export.export(model, (x,))
gm = ep.module()
print(gm(x)) # this is correct
path = torch._inductor.aoti_compile_and_package(ep)
aot_model = torch._inductor.aoti_load_package(path)
out = aot_model(x)
print(out) # this is wrong
```
cc @chauhang @penguinwu @avikchaudhuri @gmagogsfm @zhxchen17 @tugsbayasgalan @angelayi @suo @ydwu4 @desertfire
### Versions
master
| true
|
3,018,159,218
|
[dynamic shapes] aten.constant_pad_nd meta impl
|
pianpwk
|
closed
|
[
"Merged",
"ciflow/trunk",
"release notes: export"
] | 5
|
CONTRIBUTOR
|
We know the output shape, and we know this always produces a clone, so this avoids data-dependent errors from the decomposition.
Along with https://github.com/pytorch/pytorch/pull/150483, this should fix https://github.com/pytorch/pytorch/issues/123855
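For reference, a minimal sketch of the shape computation involved (illustrative only, not the actual meta registration in this PR): `pad` is a flat list of (before, after) pairs starting from the last dimension, so the output shape is known without looking at any data:
```python
import torch

def constant_pad_nd_output_shape_sketch(x, pad):
    # pad = [last_before, last_after, second_to_last_before, second_to_last_after, ...]
    out_shape = list(x.shape)
    for i in range(len(pad) // 2):
        dim = x.dim() - 1 - i
        out_shape[dim] = out_shape[dim] + pad[2 * i] + pad[2 * i + 1]
    # A meta/fake implementation only needs the shape; no data access required.
    return x.new_empty(out_shape)
```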
| true
|
3,018,156,403
|
FlexAttention + Export / AOTI
|
drisspg
|
open
|
[
"triaged",
"oncall: pt2",
"module: higher order operators",
"module: pt2-dispatcher",
"module: flex attention"
] | 0
|
CONTRIBUTOR
|
# Summary
cc @chauhang @penguinwu @zou3519 @ydwu4 @bdhirsh @Chillee @yanboliang @BoyuanFeng
| true
|
3,018,149,226
|
[C10D] Make collectives backwards throw an error
|
wconstab
|
open
|
[
"oncall: distributed",
"triaged"
] | 1
|
CONTRIBUTOR
|
Today functional collectives and C10D collectives silently ignore backwards, which can surprise users and lead to missing gradients and incorrect training.
Many users of these collectives do not intend to use the backwards pass, so this limitation does not affect them. They either call functional_collectives _during_ the backward pass, in an explicit no_grad context (e.g. DDP, FSDP), or they write a custom autograd.Function that performs some explicitly chosen collectives as part of the forward and backward passes.
Other users may use the collective directly during forward, and expect to have proper autograd support. These users would observe silent correctness problems.
We should explicitly register a backwards kernel for functional collectives that throws an error. Later, we can replace this kernel with an actual backwards implementation, but we don't have to do that all at once, and there are some other design decisions to make regarding performance of the backwards pass, so we should plug this hole first.
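A minimal sketch of the general idea (illustrative only; the real fix would be registered on the functional collective ops themselves rather than via a wrapper like this):
```python
import torch

class _CollectiveNoBackward(torch.autograd.Function):
    """Wrap a collective's output so that reaching its backward raises loudly
    instead of silently producing no gradient."""

    @staticmethod
    def forward(ctx, result):
        return result

    @staticmethod
    def backward(ctx, grad_output):
        raise RuntimeError(
            "This collective has no backward formula registered; "
            "its backward pass would otherwise be silently ignored."
        )
```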
cc @H-Huang @awgu @wanchaol @fegin @fduwjj @wz337 @d4l3k
| true
|
3,018,070,853
|
[CI] [anaconda] Utilities
|
atalman
|
closed
|
[
"module: ci",
"triaged",
"better-engineering"
] | 0
|
CONTRIBUTOR
|
### 🐛 Describe the bug
Related to https://github.com/pytorch/pytorch/issues/138506
```
torch/utils/data/dataframes_pipes.ipynb
torch/utils/data/datapipes/utils/decoder.py
torch/utils/data/standard_pipes.ipynb
tools/setup_helpers/env.py
```
### Versions
2.8.0
cc @seemethere @malfet @pytorch/pytorch-dev-infra
| true
|
3,018,069,234
|
Add runtime asserts to AOTI
|
yushangdi
|
closed
|
[
"fb-exported",
"Merged",
"ciflow/trunk",
"module: inductor",
"ciflow/inductor",
"release notes: export"
] | 22
|
CONTRIBUTOR
|
Summary:
Solves https://github.com/pytorch/pytorch/issues/151925
Currently, AOTI only generates runtime asserts for unbacked symints. We should generate asserts for all `_assert_scalar` calls in the input graph.
Also factored out the runtime assertion logic into a separate function.
We need to generate runtime asserts directly in Inductor instead of just re-using the asserts from the input graph, because we reuse the same ShapeEnv as before. In particular, on subsequent graph passes we would immediately turn all of these assertions into no-ops: when we evaluate their expressions, the ShapeEnv already holds a deferred runtime assert for them, so it concludes "of course this expression is True" and drops the check.
One example is below:
```
class Model(torch.nn.Module):
def forward(self, a, b, c):
nz = torch.nonzero(a)
ones = a.new_ones([nz.size(0), b.size(0)])
torch._check(ones.size(0) >= 1)
equals = torch.add(ones, c)
return equals
torch._dynamo.mark_dynamic(c, 0)
```
When we re-use the ShapeEnv in Inductor lowering, the check that verifies `a` and the nonzero result have the same shape would be evaluated to True after we resolve unbacked bindings using the ShapeEnv.
See test_unbacked_equals_input_size_runtime_assertion in test_aot_inductor.
In addition to the Inductor generated runtime asserts, we also
need the runtime asserts from the input graph, because some derived
runtime asserts are not generated in Inductor. One example is
below:
```
class Model(torch.nn.Module):
def forward(self, x):
y = x.reshape(100, -1).clone()
y = y + 1
return y
dynamic_shapes = {
"x": {0: torch.export.Dim.DYNAMIC},
}
# x.shape[0] needs to be a multiple of 100.
```
See test_aoti_runtime_asserts_backed_symint in test_aot_inductor.
Example:
```
def forward(self):
arg0_1: "f32[s35]";
arg0_1, = fx_pytree.tree_flatten_spec([], self._in_spec)
# File: /data/users/shangdiy/fbsource/buck-out/v2/gen/fbcode/73a672eb896e7996/scripts/shangdiy/__pt__/pt#link-tree/scripts/shangdiy/pt.py:11 in forward, code: y = x.reshape(100, -1).clone()
sym_size_int: "Sym(s35)" = torch.ops.aten.sym_size.int(arg0_1, 0)
#
mod: "Sym(Mod(s35, 100))" = sym_size_int % 100; sym_size_int = None
eq_2: "Sym(Eq(Mod(s35, 100), 0))" = mod == 0; mod = None
_assert_scalar = torch.ops.aten._assert_scalar.default(eq_2, "Runtime assertion failed for expression Eq(Mod(s35, 100), 0) on node 'eq'"); eq_2 = _assert_scalar = None
# File: /data/users/shangdiy/fbsource/buck-out/v2/gen/fbcode/73a672eb896e7996/scripts/shangdiy/__pt__/pt#link-tree/scripts/shangdiy/pt.py:11 in forward, code: y = x.reshape(100, -1).clone()
view: "f32[100, (s35//100)]" = torch.ops.aten.reshape.default(arg0_1, [100, -1]); arg0_1 = None
clone: "f32[100, (s35//100)]" = torch.ops.aten.clone.default(view); view = None
# File: /data/users/shangdiy/fbsource/buck-out/v2/gen/fbcode/73a672eb896e7996/scripts/shangdiy/__pt__/pt#link-tree/scripts/shangdiy/pt.py:12 in forward, code: y = y + 1
add_6: "f32[100, 1]" = torch.ops.aten.add.Tensor(clone, 1); clone = None
return (add_6,)
```
Generated cpp code:
```
auto inputs = steal_from_raw_handles_to_raii_handles(input_handles, 1);
auto arg0_1 = std::move(inputs[0]);
auto arg0_1_size = arg0_1.sizes();
int64_t s35 = arg0_1_size[0];
inputs.clear();
auto& kernels = static_cast<AOTInductorModelKernels&>(*this->kernels_.get());
if (!((s35 % 100L) == 0L)) { throw std::runtime_error("Expected Eq(Mod(s35, 100), 0) to be True but received " + std::to_string(s35)); }
```
Test Plan:
```
buck run fbcode//mode/dev-nosan //caffe2/test/inductor:test_aot_inductor -- -r aoti_runtime_asserts_backed_symint
```
Differential Revision: D73596786
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
| true
|
3,018,065,203
|
[CI] [anaconda] Utility scripts and workflows
|
atalman
|
closed
|
[
"module: ci",
"triaged",
"better-engineering"
] | 1
|
CONTRIBUTOR
|
### 🐛 Describe the bug
Related to https://github.com/pytorch/pytorch/issues/138506
```
.ci/pytorch/python_doc_push_script.sh#L76
.github/workflows/upload-test-stats-while-running.yml
.github/workflows/llm_td_retrieval.yml
.github/scripts/test_trymerge.py
tools/code_coverage/package/tool/print_report.py
```
### Versions
2.8.0
cc @seemethere @malfet @pytorch/pytorch-dev-infra
| true
|
3,018,048,393
|
[CI] [anaconda] Benchmarks anaconda removal
|
atalman
|
closed
|
[
"module: ci",
"triaged",
"better-engineering"
] | 0
|
CONTRIBUTOR
|
### 🐛 Describe the bug
Related to #138506
Benchmarks files
```
benchmarks/dynamo/Makefile
benchmarks/dynamo/runner.py
benchmarks/sparse/test_csr.sh
torch/utils/benchmark/examples/blas_compare_setup.py
torch/utils/benchmark/examples/prepare_e2e.sh
```
### Versions
2.8.0
cc @seemethere @malfet @pytorch/pytorch-dev-infra
| true
|
3,018,043,192
|
[NJT] `.bmm`'s BmmBackward0 fails compilation when second arg requires grad
|
imh
|
open
|
[
"module: autograd",
"triaged",
"module: nestedtensor",
"oncall: pt2"
] | 1
|
NONE
|
### 🐛 Describe the bug
When we try to compile `njt_tensor.bmm(default_tensor)` and `default_tensor` requires grad, compilation fails.
```python
import torch
def do_bmm(x, y):
return x.bmm(y.transpose(1,2))
d = 4
x = torch.nested.nested_tensor(
[
torch.randn((1,d)),
torch.randn((1,d)),
torch.randn((2,d))
],
layout=torch.jagged,
requires_grad=True
)
y = torch.randn((3, 5, d), requires_grad=True)
# works just fine fwd/backwards uncompiled
z = do_bmm(x, y).mean().backward()
# Also works compiled when y doesn't need grad
x.grad = y.grad = None # reset
y.requires_grad_(False)
do_bmm_compiled = torch.compile(do_bmm, fullgraph=True)
z = do_bmm_compiled(x, y).mean().backward()
# It fails to compile when y needs grad:
x.grad = y.grad = None # reset
y.requires_grad_(True)
# # Succeeds when not requiring fullgraph, but logs "Backend compiler exception"
# do_bmm_compiled = torch.compile(do_bmm)
# z = do_bmm_compiled(x, y).mean().backward()
# x.grad = y.grad = None # reset
# # if we uncomment this block, then the next block *doesn't* fail, weirdly
# Fails to handle fullgraph when y requires grad
do_bmm_compiled = torch.compile(do_bmm, fullgraph=True)
z = do_bmm_compiled(x, y)
```
Here's a [log](https://gist.github.com/imh/232555f7b4cb7b73c3ab1d0933df548b) with TORCHDYNAMO_VERBOSE=1.
The non-verbose version is here:
```
/home/imh/code/hmer/modeling/.venv/lib/python3.12/site-packages/torch/autograd/graph.py:824: UserWarning: Error detected in BmmBackward0. Traceback of forward call that caused the error:
File "/home/imh/.config/JetBrains/PyCharm2024.3/scratches/scratch_2.py", line 4, in do_bmm
return x.bmm(y.transpose(1,2))
(Triggered internally at /pytorch/torch/csrc/autograd/python_anomaly_mode.cpp:122.)
return Variable._execution_engine.run_backward( # Calls into the C++ engine to run the backward pass
W0424 10:54:09.598000 477854 modeling/.venv/lib/python3.12/site-packages/torch/_dynamo/exc.py:514] [0/1] Backend compiler exception
W0424 10:54:09.598000 477854 modeling/.venv/lib/python3.12/site-packages/torch/_dynamo/exc.py:514] [0/1] Explanation: Backend compiler `inductor` failed with aten._local_scalar_dense.default. Adding a graph break.
W0424 10:54:09.598000 477854 modeling/.venv/lib/python3.12/site-packages/torch/_dynamo/exc.py:514] [0/1] Hint: Report an issue to the backend compiler repo.
W0424 10:54:09.598000 477854 modeling/.venv/lib/python3.12/site-packages/torch/_dynamo/exc.py:514] [0/1]
W0424 10:54:09.598000 477854 modeling/.venv/lib/python3.12/site-packages/torch/_dynamo/exc.py:514] [0/1] Developer debug context: Backend: inductor
W0424 10:54:09.598000 477854 modeling/.venv/lib/python3.12/site-packages/torch/_dynamo/exc.py:514] [0/1] Exception:aten._local_scalar_dense.default
W0424 10:54:09.598000 477854 modeling/.venv/lib/python3.12/site-packages/torch/_dynamo/exc.py:514] [0/1] Traceback:
W0424 10:54:09.598000 477854 modeling/.venv/lib/python3.12/site-packages/torch/_dynamo/exc.py:514] [0/1] File "/home/imh/.config/JetBrains/PyCharm2024.3/scratches/scratch_2.py", line 4, in do_bmm
W0424 10:54:09.598000 477854 modeling/.venv/lib/python3.12/site-packages/torch/_dynamo/exc.py:514] [0/1] return x.bmm(y.transpose(1,2))
W0424 10:54:09.598000 477854 modeling/.venv/lib/python3.12/site-packages/torch/_dynamo/exc.py:514] [0/1]
W0424 10:54:09.598000 477854 modeling/.venv/lib/python3.12/site-packages/torch/_dynamo/exc.py:514] [0/1]
Traceback (most recent call last):
File "/home/imh/.config/JetBrains/PyCharm2024.3/scratches/scratch_2.py", line 40, in <module>
z = do_bmm_compiled(x, y)
^^^^^^^^^^^^^^^^^^^^^
File "/home/imh/code/hmer/modeling/.venv/lib/python3.12/site-packages/torch/_dynamo/eval_frame.py", line 659, in _fn
raise e.with_traceback(None) from None
torch._dynamo.exc.Unsupported: Backend compiler exception
Explanation: Backend compiler `inductor` failed with aten._local_scalar_dense.default. Adding a graph break.
Hint: Report an issue to the backend compiler repo.
Developer debug context: Backend: inductor
Exception:aten._local_scalar_dense.default
Traceback:
File "/home/imh/.config/JetBrains/PyCharm2024.3/scratches/scratch_2.py", line 4, in do_bmm
return x.bmm(y.transpose(1,2))
Set TORCHDYNAMO_VERBOSE=1 for the internal stack trace (please do this especially if you're reporting a bug to PyTorch). For even more developer context, set TORCH_LOGS="+dynamo"
```
### Versions
```
PyTorch version: 2.6.0+cu124
Is debug build: False
CUDA used to build PyTorch: 12.4
ROCM used to build PyTorch: N/A
OS: Ubuntu 24.04.2 LTS (x86_64)
GCC version: (Ubuntu 13.3.0-6ubuntu2~24.04) 13.3.0
Clang version: Could not collect
CMake version: Could not collect
Libc version: glibc-2.39
Python version: 3.12.3 (main, Feb 4 2025, 14:48:35) [GCC 13.3.0] (64-bit runtime)
Python platform: Linux-6.8.0-57-generic-x86_64-with-glibc2.39
Is CUDA available: True
CUDA runtime version: Could not collect
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration: GPU 0: NVIDIA GeForce RTX 4090
Nvidia driver version: 565.57.01
cuDNN version: /usr/lib/x86_64-linux-gnu/libcudnn.so.7.6.4
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 39 bits physical, 48 bits virtual
Byte Order: Little Endian
CPU(s): 8
On-line CPU(s) list: 0-7
Vendor ID: GenuineIntel
Model name: Intel(R) Core(TM) i7-6700K CPU @ 4.00GHz
CPU family: 6
Model: 94
Thread(s) per core: 2
Core(s) per socket: 4
Socket(s): 1
Stepping: 3
CPU(s) scaling MHz: 95%
CPU max MHz: 4200.0000
CPU min MHz: 800.0000
BogoMIPS: 7999.96
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc art arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc cpuid aperfmperf pni pclmulqdq dtes64 monitor ds_cpl vmx est tm2 ssse3 sdbg fma cx16 xtpr pdcm pcid sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm 3dnowprefetch cpuid_fault pti ssbd ibrs ibpb stibp tpr_shadow flexpriority ept vpid ept_ad fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid mpx rdseed adx smap clflushopt intel_pt xsaveopt xsavec xgetbv1 xsaves dtherm ida arat pln pts hwp hwp_notify hwp_act_window hwp_epp vnmi md_clear flush_l1d arch_capabilities
Virtualization: VT-x
L1d cache: 128 KiB (4 instances)
L1i cache: 128 KiB (4 instances)
L2 cache: 1 MiB (4 instances)
L3 cache: 8 MiB (1 instance)
NUMA node(s): 1
NUMA node0 CPU(s): 0-7
Vulnerability Gather data sampling: Vulnerable: No microcode
Vulnerability Itlb multihit: KVM: Mitigation: VMX disabled
Vulnerability L1tf: Mitigation; PTE Inversion; VMX conditional cache flushes, SMT vulnerable
Vulnerability Mds: Mitigation; Clear CPU buffers; SMT vulnerable
Vulnerability Meltdown: Mitigation; PTI
Vulnerability Mmio stale data: Mitigation; Clear CPU buffers; SMT vulnerable
Vulnerability Reg file data sampling: Not affected
Vulnerability Retbleed: Mitigation; IBRS
Vulnerability Spec rstack overflow: Not affected
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; IBRS; IBPB conditional; STIBP conditional; RSB filling; PBRSB-eIBRS Not affected; BHI Not affected
Vulnerability Srbds: Mitigation; Microcode
Vulnerability Tsx async abort: Mitigation; TSX disabled
Versions of relevant libraries:
[pip3] Could not collect
[conda] Could not collect
```
cc @ezyang @albanD @gqchen @nikitaved @soulitzer @Varal7 @xmfan @cpuhrsch @jbschlosser @bhosmer @drisspg @davidberard98 @YuqingJ @chauhang @penguinwu
| true
|
3,018,024,782
|
[poetry] 2.7.0+cpu includes cuda as a dependency
|
peter-axion
|
closed
|
[
"triage review",
"module: binaries",
"module: regression",
"topic: binaries"
] | 6
|
NONE
|
### 🐛 Describe the bug
I use torch `+cpu` variants in images I run on VMs without GPUs because the CUDA libraries are huge, so if I don't need them then I definitely don't want them.
When I use `poetry lock` on poetry 1 or 2 with torch `2.7.0+cpu` in my pyproject.toml, the cuda libraries and triton are added as dependencies (11GB image), while with `2.5.1+cpu`, they were not (3.7GB image).
I'm reporting this as a bug because I assume it was unintentional, given that the +cpu addition seems to imply the user won't be running on a GPU.
### Versions
I already deleted the big image with torch `2.7.0+cpu` :(
cc @ezyang @gchanan @zou3519 @kadeng @msaroufim @seemethere @malfet @osalpekar @atalman
| true
|
3,018,023,445
|
[dynamo] Remove unnecessary guarding on callable user defined objects
|
anijain2305
|
closed
|
[
"module: dynamo",
"ciflow/inductor"
] | 2
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #152120
* #151847
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @chauhang @amjames
| true
|
3,017,977,356
|
[dynamo][ca] support dynamic annotations on tensors in ListVariables/TupleVariables
|
xmfan
|
open
|
[
"Merged",
"Reverted",
"ciflow/trunk",
"module: inductor",
"module: dynamo",
"ciflow/inductor",
"release notes: dynamo",
"ci-no-td"
] | 12
|
MEMBER
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #151860
* __->__ #152119
* #151962
* #151731
Together with https://github.com/pytorch/pytorch/pull/151962, FIXES https://github.com/pytorch/pytorch/issues/133575
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
| true
|
3,017,949,132
|
Update torch/optim/optimizer.py
|
janeyx99
|
closed
|
[
"release notes: optim"
] | 2
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #152118
* #152117
* #152116
| true
|
3,017,948,908
|
Update torch/optim/optimizer.py
|
janeyx99
|
closed
|
[
"release notes: optim"
] | 1
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #152118
* __->__ #152117
* #152116
| true
|
3,017,948,639
|
Include other accelerators in capturable docstr for optimizers
|
janeyx99
|
closed
|
[
"release notes: optim"
] | 1
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #152118
* #152117
* __->__ #152116
| true
|
3,017,910,917
|
Unify how we create random inputs for auto-tuning
|
masnesral
|
closed
|
[
"module: rocm",
"topic: not user facing",
"module: inductor",
"ciflow/inductor"
] | 2
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #152115
Summary: We're creating autotune inputs slightly differently when autotuning in-process vs. in a subprocess: one implementation is in TensorMeta.to_tensor() and another in AlgorithmSelectorCache.benchmark_example_value. Move the TensorMeta definition to select_algorithm.py and call that implementation from AlgorithmSelectorCache.benchmark_example_value().
Test Plan: Existing unit tests
cc @jeffdaily @sunway513 @jithunnair-amd @pruthvistony @ROCmSupport @dllehr-amd @jataylo @hongxiayang @naromero77amd @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
| true
|
3,017,899,222
|
[Torch Profiler] Only two streams captured in CUDA graph but multiple streams shown in Torch Profiler
|
ispobock
|
closed
|
[
"module: cuda",
"triaged"
] | 6
|
NONE
|
### 🐛 Describe the bug
As shown in the demo code below, I use two streams to overlap the `set_kv_buffer` operation, which is captured in a CUDA graph. The `alt_stream` is created when the KVPool object is initialized, so the stream should be reused for the entire runtime; no additional streams are created during the run.
However, the Torch Profiler trace file shows 40 streams, as if each loop iteration created a new stream. Could you help check whether this is a bug in Torch Profiler?
Code for reproduction:
```python
import torch
class KVPool:
def __init__(self):
self.alt_stream = torch.cuda.Stream()
self.k_buffer = [torch.zeros(10000, 8, 128, device='cuda') for _ in range(40)]
self.v_buffer = [torch.zeros(10000, 8, 128, device='cuda') for _ in range(40)]
def set_kv_buffer(self, layer_id, loc, k, v):
current_stream = torch.cuda.current_stream()
self.alt_stream.wait_stream(current_stream)
with torch.cuda.stream(self.alt_stream):
self.k_buffer[layer_id][loc] = k
self.v_buffer[layer_id][loc] = v
current_stream.wait_stream(self.alt_stream)
kv_pool = KVPool()
stream = torch.cuda.Stream()
graph = torch.cuda.CUDAGraph()
with torch.cuda.graph(graph, stream=stream):
for layer_id in range(40):
k = torch.randn(10, 8, 128, device='cuda')
v = torch.randn(10, 8, 128, device='cuda')
loc = torch.randint(0, 10000, (10,), device='cuda')
kv_pool.set_kv_buffer(layer_id, loc, k, v)
with torch.profiler.profile(activities=[torch.profiler.ProfilerActivity.CUDA], record_shapes=True, profile_memory=True, with_stack=True) as prof:
graph.replay()
prof.export_chrome_trace("trace.json")
```
Profile trace:
<img width="1204" alt="Image" src="https://github.com/user-attachments/assets/6aee1806-002a-4fcc-b933-0bbc186e991e" />
### Versions
```
PyTorch version: 2.6.0+cu124
Is debug build: False
CUDA used to build PyTorch: 12.4
Python version: 3.10.12 (main, Feb 4 2025, 14:57:36) [GCC 11.4.0] (64-bit runtime)
Versions of relevant libraries:
[pip3] flashinfer-python==0.2.3+cu124torch2.5
[pip3] mypy-extensions==1.0.0
[pip3] numpy==2.2.4
[pip3] nvidia-cublas-cu12==12.4.5.8
[pip3] nvidia-cuda-cupti-cu12==12.4.127
[pip3] nvidia-cuda-nvrtc-cu12==12.4.127
[pip3] nvidia-cuda-runtime-cu12==12.4.127
[pip3] nvidia-cudnn-cu12==9.1.0.70
[pip3] nvidia-cufft-cu12==11.2.1.3
[pip3] nvidia-curand-cu12==10.3.5.147
[pip3] nvidia-cusolver-cu12==11.6.1.9
[pip3] nvidia-cusparse-cu12==12.3.1.170
[pip3] nvidia-cusparselt-cu12==0.6.2
[pip3] nvidia-nccl-cu12==2.21.5
[pip3] nvidia-nvjitlink-cu12==12.4.127
[pip3] nvidia-nvtx-cu12==12.4.127
[pip3] torch==2.6.0
[pip3] torchao==0.9.0
[pip3] torchvision==0.21.0
[pip3] triton==3.2.0
```
cc @ptrblck @msaroufim @eqy @jerryzh168
| true
|
3,017,898,839
|
[CI] [anaconda] CI Build and Test scripts MacOS
|
atalman
|
closed
|
[
"module: ci",
"triaged",
"better-engineering"
] | 1
|
CONTRIBUTOR
|
### 🐛 Describe the bug
Related to https://github.com/pytorch/pytorch/issues/138506
CI Build and Test scripts to replace:
.ci/pytorch/macos-test.sh - used for torchbench
astunparse numpy scipy ninja pyyaml setuptools cmake typing-extensions requests protobuf numba cython scikit-learn librosa
.github/workflows/_mac-build.yml
.github/workflows/_mac-test.yml
.github/workflows/_mac-test-mps.yml
We would like to remove Anaconda install dependency
cc @seemethere @malfet @pytorch/pytorch-dev-infra
### Versions
2.8.0
| true
|
3,017,886,320
|
Pin to SHA for actions outside of PyTorch
|
zxiiro
|
closed
|
[
"module: rocm",
"topic: not user facing"
] | 1
|
COLLABORATOR
|
Pin actions from repos external to the PyTorch project to their shasums for security. This is a best practice as Git tags are not immutable.
https://openssf.org/blog/2024/08/12/mitigating-attack-vectors-in-github-workflows/
cc @jeffdaily @sunway513 @jithunnair-amd @pruthvistony @ROCmSupport @dllehr-amd @jataylo @hongxiayang @naromero77amd
| true
|
3,017,864,752
|
[ONNX] Implement sym_not
|
justinchuby
|
closed
|
[
"module: onnx",
"open source",
"Merged",
"ciflow/trunk",
"release notes: onnx",
"topic: improvements"
] | 4
|
COLLABORATOR
|
Implement ONNX support for sym_not. Replaces https://github.com/pytorch/pytorch/pull/147472
Fixes https://github.com/pytorch/pytorch/issues/136572
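For context, `torch.sym_not` is the symbolic-aware negation: on plain Python bools it behaves like `not`, while on symbolic bools produced during tracing/export it stays traceable, which is what lets the exporter map it to an ONNX `Not`. A tiny, hedged illustration of the eager behavior only (the ONNX lowering itself lives in the exporter):
```python
import torch

# On concrete bools, sym_not is just logical negation; on SymBool values
# it records a symbolic Not instead of eagerly evaluating.
assert torch.sym_not(True) is False
assert torch.sym_not(False) is True
```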
| true
|
3,017,861,114
|
Pin to SHA for actions outside of PyTorch
|
zxiiro
|
closed
|
[
"module: rocm",
"open source",
"Merged",
"ciflow/trunk",
"topic: not user facing"
] | 6
|
COLLABORATOR
|
Pin actions from repos external to the PyTorch project to their shasums for security. This is a best practice as Git tags are not immutable.
https://openssf.org/blog/2024/08/12/mitigating-attack-vectors-in-github-workflows/
cc @jeffdaily @sunway513 @jithunnair-amd @pruthvistony @ROCmSupport @dllehr-amd @jataylo @hongxiayang @naromero77amd
| true
|
3,017,786,422
|
Python 3.11 and 3.13 support for Windows Arm64
|
iremyux
|
closed
|
[
"module: windows",
"open source",
"module: arm",
"Merged",
"ciflow/binaries",
"topic: not user facing"
] | 3
|
COLLABORATOR
|
This PR adds Python 3.11 and 3.13 support for Windows Arm64 wheels and creates the necessary jobs.
cc @peterjc123 @mszhanyi @skyline75489 @nbcsm @Blackhex @malfet @snadampal @milpuz01 @aditew01 @nikhil-arm @fadara01
| true
|
3,017,759,979
|
[inductor][cpu] AMP static shape default wrapper AOTInductor performance regression in 2025_04_20 nightly release
|
zxd1997066
|
open
|
[
"module: regression",
"topic: performance",
"oncall: pt2",
"oncall: cpu inductor"
] | 1
|
CONTRIBUTOR
|
### 🐛 Describe the bug
AMP static shape default wrapper AOTInductor

suite | name | thread | batch_size_new | speed_up_new | inductor_new | eager_new | compilation_latency_new | batch_size_old | speed_up_old | inductor_old | eager_old | compilation_latency_old | Ratio Speedup(New/old) | Eager Ratio(old/new) | Inductor Ratio(old/new) | Compilation_latency_Ratio(old/new)
-- | -- | -- | -- | -- | -- | -- | -- | -- | -- | -- | -- | -- | -- | -- | -- | --
torchbench | hf_GPT2 | multiple | 1 | 1.855095 | 0.020711079 | 0.038421019097505 | 41.296582 | 1 | 1.966686 | 0.015391351000000001 | 0.030269954532786 | 42.421077 | 0.94 | 0.79 | 0.74 | 1.03
torchbench | hf_GPT2_large | multiple | 1 | 1.206619 | 0.191283364 | 0.23080614138631603 | 67.302233 | 1 | 1.533002 | 0.15207534 | 0.23313180037068001 | 69.351171 | 0.79 | 1.01 | 0.8 | 1.03
huggingface | DistillGPT2 | multiple | 16 | 1.730428 | 0.121863231 | 0.21087554709286802 | 35.54343 | 16 | 2.462141 | 0.08568144700000001 | 0.21095980359802702 | 36.301132 | 0.7 | 1.0 | 0.7 | 1.02
huggingface | GPT2ForSequenceClassification | multiple | 4 | 1.151294 | 0.128171088 | 0.147562604587872 | 40.777434 | 4 | 2.2595 | 0.06586300199999999 | 0.14881745301899998 | 41.288425 | 0.51 | 1.01 | 0.51 | 1.01
torchbench | hf_GPT2 | single | 1 | 1.23444 | 0.150846848 | 0.18621138304512 | 38.041133 | 1 | 1.407039 | 0.127419733 | 0.179284533700587 | 40.229946 | 0.88 | 0.96 | 0.84 | 1.06
torchbench | hf_GPT2_large | single | 1 | 1.013489 | 4.422686286 | 4.482343901311855 | 49.876722 | 1 | 1.45735 | 3.029691519 | 4.41532093521465 | 53.04356 | 0.7 | 0.99 | 0.69 | 1.06
huggingface | DistillGPT2 | single | 1 | 1.31723 | 0.20977825000000003 | 0.27632620424750004 | 32.651317 | 1 | 1.504701 | 0.18449616900000002 | 0.27761156999046904 | 34.461712 | 0.88 | 1.0 | 0.88 | 1.06
huggingface | GPT2ForSequenceClassification | single | 1 | 0.970965 | 0.667234569 | 0.647861413289085 | 35.579949 | 1 | 1.497788 | 0.45225368 | 0.6773801348598399 | 37.857003 | 0.65 | 1.05 | 0.68 | 1.06
the bad commit: 90ddb33141b8aecbe0da979d284fff7fa9f93bca
```
/workspace/pytorch# bash inductor_single_run.sh multiple inference performance huggingface GPT2ForSequenceClassification amp first static default 0 aot_inductor
Testing with aot_inductor.
multi-threads testing....
/opt/conda/lib/python3.10/site-packages/huggingface_hub/file_download.py:896: FutureWarning: `resume_download` is deprecated and will be removed in version 1.0.0. Downloads always resume when possible. If you want to force a new download, use `force_download=True`.
warnings.warn(
loading model: 0it [00:02, ?it/s]
cpu eval GPT2ForSequenceClassification
skipping cudagraphs due to cpp wrapper enabled
running benchmark: 100%|█████████████████████████████████████████████████████████████████| 50/50 [00:19<00:00, 2.56it/s]
1.189x
WARNING:common:Trying to call the empty_gpu_cache for device: cpu, which is not in list [cuda, xpu]
dev,name,batch_size,speedup,abs_latency,compilation_latency,compression_ratio,eager_peak_mem,dynamo_peak_mem,calls_captured,unique_graphs,graph_breaks,unique_graph_breaks,autograd_captures,autograd_compiles,cudagraph_skips
cpu,GPT2ForSequenceClassification,4,1.188574,177.095622,58.083640,0.928775,576.566067,620.781158,0,0,0,0,0,0,1
```
the last good commit: 2e5d95a0828060f816251671e8e59f2680f9f9be
```
/workspace/pytorch# bash inductor_single_run.sh multiple inference performance huggingface GPT2ForSequenceClassification amp first static default 0 aot_inductor
Testing with aot_inductor.
multi-threads testing....
/opt/conda/lib/python3.10/site-packages/huggingface_hub/file_download.py:896: FutureWarning: `resume_download` is deprecated and will be removed in version 1.0.0. Downloads always resume when possible. If you want to force a new download, use `force_download=True`.
warnings.warn(
loading model: 0it [00:02, ?it/s]
cpu eval GPT2ForSequenceClassification
skipping cudagraphs due to cpp wrapper enabled
running benchmark: 100%|█████████████████████████████████████████████████████████████████| 50/50 [00:16<00:00, 3.08it/s]
1.663x
WARNING:common:Trying to call the empty_gpu_cache for device: cpu, which is not in list [cuda, xpu]
dev,name,batch_size,speedup,abs_latency,compilation_latency,compression_ratio,eager_peak_mem,dynamo_peak_mem,calls_captured,unique_graphs,graph_breaks,unique_graph_breaks,autograd_captures,autograd_compiles,cudagraph_skips
cpu,GPT2ForSequenceClassification,4,1.663003,120.465681,57.954118,0.875373,577.577779,659.807846,0,0,0,0,0,0,1
```
### Versions
SW info

name | target_branch | target_commit | refer_branch | refer_commit
-- | -- | -- | -- | --
torchbench | main | 373ffb19 | main | 373ffb19
torch | main | 1a1a32ce5af880709a761c4cd9e9e43fb67e5058 | main | 52135db69a5b02bb9e5120a5fa410c303f649dfe
torchvision | main | 0.19.0a0+d23a6e1 | main | 0.19.0a0+d23a6e1
torchtext | main | 0.16.0a0+b0ebddc | main | 0.16.0a0+b0ebddc
torchaudio | main | 2.6.0a0+bccaa45 | main | 2.6.0a0+318bace
torchdata | main | 0.7.0a0+11bb5b8 | main | 0.7.0a0+11bb5b8
dynamo_benchmarks | main | nightly | main | nightly
Repro:
[inductor_single_run.sh](https://github.com/chuanqi129/inductor-tools/blob//main/scripts/modelbench/inductor_single_run.sh)
bash inductor_single_run.sh multiple inference performance huggingface GPT2ForSequenceClassification amp first static default 0 aot_inductor
Suspected guilty commit: https://github.com/pytorch/pytorch/commit/90ddb33141b8aecbe0da979d284fff7fa9f93bca
[huggingface-GPT2ForSequenceClassification-inference-amp-static-default-multiple-performance-drop_guilty_commit.log](https://github.com/user-attachments/files/19895339/huggingface-GPT2ForSequenceClassification-inference-amp-static-default-multiple-performance-drop_guilty_commit.log)
cc @chauhang @penguinwu @chuanqi129
| true
|
3,017,750,908
|
Some Doc Issue about `torch.lobpcg()`
|
ILCSFNO
|
open
|
[
"module: docs",
"triaged",
"module: linear algebra"
] | 0
|
CONTRIBUTOR
|
### 📚 The doc issue
This issue is about func: `torch.lobpcg()`
### Discuss 1
As seen in #139563, a similar situation exists in `torch.lobpcg()`:
The doc of [torch.lobpcg()](https://pytorch.org/docs/stable/generated/torch.lobpcg.html#torch-lobpcg) shows its description as below:
https://github.com/pytorch/pytorch/blob/d743a7bd85d2d793bc0e2a38d4538276ce06b601/torch/_lobpcg.py#L394-L482
But its definition is:
https://github.com/pytorch/pytorch/blob/d743a7bd85d2d793bc0e2a38d4538276ce06b601/torch/_lobpcg.py#L345-L360
### Suggestion 1
* Fix the order of params in doc
### Discuss 2
The doc states that if :math:`X` is specified, the value of `n` (when specified) must be the number of :math:`X` columns.
https://github.com/pytorch/pytorch/blob/d743a7bd85d2d793bc0e2a38d4538276ce06b601/torch/_lobpcg.py#L414-L419
But even when `n` is not the number of columns of `X`, it still runs without complaint:
### Repro For 2
```python
import torch
A = torch.rand(5, 20, 20)
X = torch.randn(5, 20, 3)
n = 2
torch.lobpcg(A=A, X=X, n=n)
```
### Output For 2
```text
(tensor([[10.7453, 1.8008, 1.2166],
[10.0439, 1.8280, 1.1628],
[10.1809, 1.6499, 1.3205],
[ 9.8603, 2.0274, 1.4006],
[ 9.8713, 2.0663, 1.1549]]),
tensor([[[-0.2269, 0.1153, -0.2278],
[-0.2074, -0.1190, -0.1559],
[-0.2585, -0.1909, 0.1698],
...
[ 0.2556, -0.2533, 0.1293],
[ 0.2065, 0.0397, -0.2635]]]))
```
### Suggestion 2
* Remove the doc limit about `n`, or add a check of `n` against `X` when both are specified in the code (a sketch of such a check follows below)
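A minimal sketch of the kind of check Suggestion 2 proposes (hypothetical helper, not existing PyTorch code):
```python
def check_n_matches_X(n, X):
    # If both `n` and `X` are given, enforce what the doc currently claims:
    # n must equal the number of columns of X.
    if n is not None and X is not None and n != X.shape[-1]:
        raise ValueError(
            f"lobpcg: n (={n}) must equal the number of X columns (={X.shape[-1]})"
        )
```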
### Discuss 3
There is also a parameter limitation that is not mentioned in the doc; see the repro below:
### Repro For 3
```python
import torch
def generate_input_data():
A = torch.randn(5, 5)
A = (A @ A.t())
X = torch.randn(5, 2)
B = torch.eye(5)
return (A, B, X)
(A, B, X) = generate_input_data()
(eigenvalues, eigenvectors) = torch.lobpcg(A=A, B=B, X=X, k=2, method='ortho', tol=1e-06, niter=(- 1))
print('Eigenvalues:', eigenvalues)
print('Eigenvectors:', eigenvectors)
print('')
```
### Output For 3
```text
ValueError: LPBPCG algorithm is not applicable when the number of A rows (=5) is smaller than 3 x the number of requested eigenpairs (=2)
```
This limit, that `the number of A rows must be bigger than 3 x the number of requested eigenpairs`, is not mentioned in the doc.
### Suggestion 3
* Add warning that:
```text
.. warning:: `m` must be bigger than 3 x the number of requested eigenpairs.
```
Thanks!
### Suggest a potential alternative/fix
Suggestions above listed:
* Fix the order of params in doc
* Remove the doc limit about `n`, or add a check of `n` against `X` when both are specified in the code
* Add warning that:
```text
.. warning:: `m` must be bigger than 3 x the number of requested eigenpairs.
```
cc @svekars @sekyondaMeta @AlannaBurke @jianyuh @nikitaved @mruberry @walterddr @xwang233 @Lezcano
| true
|
3,017,717,304
|
Relax tolerance on test_aot_autograd_exhaustive_matmul_cpu_float32 without MKL
|
Flamefire
|
closed
|
[
"open source",
"Merged",
"ciflow/trunk",
"topic: not user facing"
] | 3
|
COLLABORATOR
|
When e.g. OpenBLAS is used instead of MKL, the differences get too large:
> Greatest absolute difference: 5.91278076171875e-05 at index (7,) (up to 1e-05 allowed)
> Greatest relative difference: 3.468156592134619e-06 at index (7,) (up to 1.3e-06 allowed)
I traced some of the matmul operations and there are differences of around 8e-6 between MKL and OpenBLAS, but I haven't found where exactly the backward pass is calculated, which is where the actual differences arise. So I couldn't check whether there is some difference in the low-level BLAS function used by autograd.
However, it seems odd that there is a difference at all: for the MKL case it seems to be zero up to the accuracy shown by Python.
So it seems the AOT compilation has some differences when MKL is not available.
Maybe this is also the reason why it fails for ARM and hence the test is skipped there. Maybe @zou3519 knows more as he introduced those skip markers in https://github.com/pytorch/pytorch/pull/85565
Is there any documentation how and where `matmul_backward(_out)` is generated and how AOT transforms it with and without MKL?
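For illustration, a minimal sketch of the kind of tolerance relaxation this PR applies; the tensors and thresholds here are placeholders, not the values used in the actual test:
```python
import torch

def assert_matmul_backward_close(actual: torch.Tensor, expected: torch.Tensor) -> None:
    # Keep tight tolerances when MKL backs the BLAS calls; loosen them
    # otherwise (e.g. OpenBLAS). The exact numbers are illustrative only.
    if torch.backends.mkl.is_available():
        rtol, atol = 1.3e-6, 1e-5
    else:
        rtol, atol = 1e-4, 1e-4
    torch.testing.assert_close(actual, expected, rtol=rtol, atol=atol)

# Example usage with matching tensors (always passes):
t = torch.randn(8)
assert_matmul_backward_close(t, t.clone())
```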
| true
|
3,017,686,303
|
[MTIA] Contribute OpExpanderPass to FX pass infra.
|
patrick-toulme
|
open
|
[
"fb-exported",
"release notes: fx",
"fx"
] | 4
|
NONE
|
Summary:
MTIA has been using an OpExpanderPass in our compiler. This type of pass allows pass authors to write two functions
1. Pattern Matcher - returns a boolean and an optional metadata tuple
2. Expander - accepts a node and an optional metadata tuple
It cleanly organizes the components of a compiler pass and saves pass authors from writing boilerplate.
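A hypothetical sketch of the two-function pass structure described above, written against torch.fx; the names are illustrative and not the actual MTIA OpExpanderPass API:
```python
from typing import Any, Callable, Optional, Tuple

from torch import fx

# (1) Pattern matcher: returns (matched, optional metadata) for a node.
Matcher = Callable[[fx.Node], Tuple[bool, Optional[Any]]]
# (2) Expander: rewrites the graph around a matched node, given the metadata.
Expander = Callable[[fx.GraphModule, fx.Node, Optional[Any]], None]

def run_op_expander_pass(
    gm: fx.GraphModule, matcher: Matcher, expander: Expander
) -> fx.GraphModule:
    # Snapshot the node list since the expander may insert/erase nodes.
    for node in list(gm.graph.nodes):
        matched, meta = matcher(node)
        if matched:
            expander(gm, node, meta)
    gm.graph.lint()
    gm.recompile()
    return gm
```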
Test Plan: CI
Differential Revision: D73592104
cc @ezyang @SherlockNoMad @EikanWang @jgong5 @wenzhe-nrv
| true
|
3,017,591,462
|
Update _torch_docs.py to Fix torch.bernoulli()
|
ILCSFNO
|
open
|
[
"triaged",
"open source",
"release notes: python_frontend"
] | 1
|
CONTRIBUTOR
|
Fixes #152095
@malfet Wondering whether to fix the signature from:
```text
@overload
def bernoulli(input: Tensor, p: _float, *, generator: Optional[Generator] = None) -> Tensor:
```
to
```text
@overload
def bernoulli(input: Tensor, p: _float, *, generator: Optional[Generator] = None, out: Optional[Tensor] = None) -> Tensor:
```
Or just merge the two into:
```text
@overload
def bernoulli(input: Tensor, p: _float = None, *, generator: Optional[Generator] = None, out: Optional[Tensor] = None) -> Tensor:
```
which can cover both of the original signatures.
| true
|
3,017,586,122
|
Change test/inductor/test_standalone_compile to test/inductor/test_compile
|
zou3519
|
closed
|
[
"Merged",
"Reverted",
"ciflow/trunk",
"topic: not user facing",
"module: inductor",
"ciflow/inductor",
"ci-no-td"
] | 11
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #152103
These are the tests for torch._inductor.compile, so I renamed the file
test_compile. This is to avoid confusion with
torch._inductor.standalone_compile, which is now a lot more standalone
than torch._inductor.compile.
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
| true
|
3,017,498,662
|
Segmentation fault with # FIXME: copy.deepcopy() is not defined on nn.module
|
cattientk
|
open
|
[
"needs reproduction",
"module: crash",
"module: nn",
"triaged"
] | 2
|
NONE
|
### 🐛 Describe the bug
I get an error: the app always crashes when I run a model. The crash happens in this code:
```python
def _get_clones(module, N):
# FIXME: copy.deepcopy() is not defined on nn.module
return ModuleList([copy.deepcopy(module) for i in range(N)])
```
err:
```
Thread 0x00000002089f8c80 (most recent call first):
File ".venv-py311/lib/python3.11/site-packages/torch/nn/parameter.py", line 68 in __deepcopy__
File ".pyenv/versions/3.11.7/lib/python3.11/copy.py", line 153 in deepcopy
File ".pyenv/versions/3.11.7/lib/python3.11/copy.py", line 231 in _deepcopy_dict
File ".pyenv/versions/3.11.7/lib/python3.11/copy.py", line 146 in deepcopy
File ".pyenv/versions/3.11.7/lib/python3.11/copy.py", line 231 in _deepcopy_dict
File ".pyenv/versions/3.11.7/lib/python3.11/copy.py", line 146 in deepcopy
File ".pyenv/versions/3.11.7/lib/python3.11/copy.py", line 271 in _reconstruct
File ".pyenv/versions/3.11.7/lib/python3.11/copy.py", line 172 in deepcopy
File ".pyenv/versions/3.11.7/lib/python3.11/copy.py", line 231 in _deepcopy_dict
File ".pyenv/versions/3.11.7/lib/python3.11/copy.py", line 146 in deepcopy
File ".pyenv/versions/3.11.7/lib/python3.11/copy.py", line 231 in _deepcopy_dict
File ".pyenv/versions/3.11.7/lib/python3.11/copy.py", line 146 in deepcopy
File ".pyenv/versions/3.11.7/lib/python3.11/copy.py", line 271 in _reconstruct
File ".pyenv/versions/3.11.7/lib/python3.11/copy.py", line 172 in deepcopy
File ".venv-py311/lib/python3.11/site-packages/torch/nn/modules/transformer.py", line 1167 in <listcomp>
File ".venv-py311/lib/python3.11/site-packages/torch/nn/modules/transformer.py", line 1167 in _get_clones
File ".venv-py311/lib/python3.11/site-packages/torch/nn/modules/transformer.py", line 347 in __init__
```
### Versions
Pytorch version 2.6.0
cc @albanD @mruberry @jbschlosser @walterddr @mikaylagawarecki
| true
|
3,017,246,853
|
linear + relu don't fuse
|
nairbv
|
open
|
[
"triaged",
"oncall: pt2",
"module: inductor"
] | 1
|
COLLABORATOR
|
### 🐛 Describe the bug
I'm not entirely sure if this is expected behavior, but I think mm + relu are supposed to fuse when using torch.compile, and it looks like it's not happening.
example code:
```
import torch
import torch.nn as nn
model = nn.Sequential(nn.Linear(128, 128), nn.ReLU()).cuda()
x = torch.randn(32, 128, device="cuda")
compiled = torch.compile(model)
compiled(x)
```
I run with:
TORCH_LOGS=output_code python test.py
in the generated code I see:
```
@triton_heuristics.pointwise(
size_hints={'x': 4096},
filename=__file__,
triton_meta={'signature': {'in_out_ptr0': '*fp32', 'in_ptr0': '*fp32', 'out_ptr0': '*i1', 'xnumel': 'i32', 'XBLOCK': 'constexpr'}, 'device': DeviceProperties(type='cuda', index=0, multi_processor_count=128, cc=89, major=8, regs_per_multiprocessor=65536, max_threads_per_multi_processor=1536, warp_size=32), 'constants': {}, 'configs': [{(0,): [['tt.divisibility', 16]], (1,): [['tt.divisibility', 16]], (2,): [['tt.divisibility', 16]], (3,): [['tt.divisibility', 16]]}]},
inductor_meta={'grid_type': 'Grid1D', 'autotune_hints': set(), 'kernel_name': 'triton_poi_fused_addmm_relu_threshold_backward_0', 'mutated_arg_names': ['in_out_ptr0'], 'optimize_mem': False, 'no_x_dim': False, 'num_load': 2, 'num_reduction': 0, 'backend_hash': 'C11DE0628EED4C0AB66E26CDE84B57CDE9A70547B1A1FB7FCCB8011CFA28CE35', 'are_deterministic_algorithms_enabled': False, 'assert_indirect_indexing': True, 'autotune_local_cache': True, 'autotune_pointwise': True, 'autotune_remote_cache': None, 'force_disable_caches': False, 'dynamic_scale_rblock': True, 'max_autotune': False, 'max_autotune_pointwise': False, 'min_split_scan_rblock': 256, 'spill_threshold': 16, 'store_cubin': False},
min_elem_per_thread=0
)
@triton.jit
def triton_poi_fused_addmm_relu_threshold_backward_0(in_out_ptr0, in_ptr0, out_ptr0, xnumel, XBLOCK : tl.constexpr):
xnumel = 4096
xoffset = tl.program_id(0) * XBLOCK
xindex = xoffset + tl.arange(0, XBLOCK)[:]
xmask = tl.full([XBLOCK], True, tl.int1)
x2 = xindex
x0 = (xindex % 128)
tmp0 = tl.load(in_out_ptr0 + (x2), None)
tmp1 = tl.load(in_ptr0 + (x0), None, eviction_policy='evict_last')
tmp2 = tmp0 + tmp1
tmp3 = tl.full([1], 0, tl.int32)
tmp4 = triton_helpers.maximum(tmp3, tmp2)
tmp5 = 0.0
tmp6 = tmp4 <= tmp5
tl.store(in_out_ptr0 + (x2), tmp4, None)
tl.store(out_ptr0 + (x2), tmp6, None)
''', device_str='cuda')
async_compile.wait(globals())
del async_compile
def call(args):
primals_1, primals_2, primals_3 = args
args.clear()
assert_size_stride(primals_1, (128, 128), (128, 1))
assert_size_stride(primals_2, (128, ), (1, ))
assert_size_stride(primals_3, (32, 128), (128, 1))
with torch.cuda._DeviceGuard(0):
torch.cuda.set_device(0)
buf0 = empty_strided_cuda((32, 128), (128, 1), torch.float32)
# Topologically Sorted Source Nodes: [input_1], Original ATen: [aten.addmm]
extern_kernels.mm(primals_3, reinterpret_tensor(primals_1, (128, 128), (1, 128), 0), out=buf0)
del primals_1
buf1 = buf0; del buf0 # reuse
buf2 = empty_strided_cuda((32, 128), (128, 1), torch.bool)
# Topologically Sorted Source Nodes: [input_1, input_2], Original ATen: [aten.addmm, aten.relu, aten.threshold_backward]
stream0 = get_raw_stream(0)
triton_poi_fused_addmm_relu_threshold_backward_0.run(buf1, primals_2, buf2, 4096, stream=stream0)
del primals_2
return (buf1, primals_3, buf2, )
```
the function name `triton_poi_fused_addmm_relu_threshold_backward_0` seems to suggest that something is being fused, but what I see inside the "fused" kernel looks like only the bias add and the relu, and I then see `extern_kernels.mm` being used inside `call`.
I also see this issue, which I think means they should be fusable, though maybe that only happens with certain shapes?
https://github.com/pytorch/pytorch/issues/103480
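One hedged experiment (my assumption, not confirmed behavior): the default Inductor path calls the cuBLAS `extern_kernels.mm` and only fuses the pointwise bias + relu into a separate kernel, while the Triton GEMM templates that could absorb the relu epilogue are only tried under max-autotune.
```python
# Hedged experiment: re-run with max-autotune so Inductor also considers its
# Triton GEMM templates (assumption: only those templates can fuse the relu epilogue).
import torch
import torch.nn as nn

model = nn.Sequential(nn.Linear(128, 128), nn.ReLU()).cuda()
x = torch.randn(32, 128, device="cuda")

compiled = torch.compile(model, mode="max-autotune")
compiled(x)
# Then check TORCH_LOGS=output_code again for a Triton mm kernel with the relu applied inline.
```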
### Versions
Environment:
```
Collecting environment information...
PyTorch version: 2.7.0+cu128
Is debug build: False
CUDA used to build PyTorch: 12.8
ROCM used to build PyTorch: N/A
OS: Ubuntu 24.04.2 LTS (x86_64)
GCC version: (Ubuntu 13.3.0-6ubuntu2~24.04) 13.3.0
Clang version: 18.1.3 (1ubuntu1)
CMake version: version 3.28.3
Libc version: glibc-2.39
Python version: 3.13.3 | packaged by conda-forge | (main, Apr 14 2025, 20:44:03) [GCC 13.3.0] (64-bit runtime)
Python platform: Linux-6.8.0-55-generic-x86_64-with-glibc2.39
Is CUDA available: True
CUDA runtime version: 12.1.66
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration: GPU 0: NVIDIA GeForce RTX 4090
Nvidia driver version: 535.183.01
cuDNN version: Could not collect
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 48 bits physical, 48 bits virtual
Byte Order: Little Endian
CPU(s): 32
On-line CPU(s) list: 0-31
Vendor ID: AuthenticAMD
Model name: AMD Ryzen 9 7950X 16-Core Processor
CPU family: 25
Model: 97
Thread(s) per core: 2
Core(s) per socket: 16
Socket(s): 1
Stepping: 2
CPU(s) scaling MHz: 64%
CPU max MHz: 5837.0000
CPU min MHz: 400.0000
BogoMIPS: 8982.45
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ht syscall nx mmxext fxsr_opt pdpe1gb rdtscp lm constant_tsc rep_good amd_lbr_v2 nopl nonstop_tsc cpuid extd_apicid aperfmperf rapl pni pclmulqdq monitor ssse3 fma cx16 sse4_1 sse4_2 x2apic movbe popcnt aes xsave avx f16c rdrand lahf_lm cmp_legacy svm extapic cr8_legacy abm sse4a misalignsse 3dnowprefetch osvw ibs skinit wdt tce topoext perfctr_core perfctr_nb bpext perfctr_llc mwaitx cpb cat_l3 cdp_l3 hw_pstate ssbd mba perfmon_v2 ibrs ibpb stibp ibrs_enhanced vmmcall fsgsbase bmi1 avx2 smep bmi2 erms invpcid cqm rdt_a avx512f avx512dq rdseed adx smap avx512ifma clflushopt clwb avx512cd sha_ni avx512bw avx512vl xsaveopt xsavec xgetbv1 xsaves cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local user_shstk avx512_bf16 clzero irperf xsaveerptr rdpru wbnoinvd cppc arat npt lbrv svm_lock nrip_save tsc_scale vmcb_clean flushbyasid decodeassists pausefilter pfthreshold avic v_vmsave_vmload vgif x2avic v_spec_ctrl vnmi avx512vbmi umip pku ospke avx512_vbmi2 gfni vaes vpclmulqdq avx512_vnni avx512_bitalg avx512_vpopcntdq rdpid overflow_recov succor smca fsrm flush_l1d
Virtualization: AMD-V
L1d cache: 512 KiB (16 instances)
L1i cache: 512 KiB (16 instances)
L2 cache: 16 MiB (16 instances)
L3 cache: 64 MiB (2 instances)
NUMA node(s): 1
NUMA node0 CPU(s): 0-31
Vulnerability Gather data sampling: Not affected
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Not affected
Vulnerability Reg file data sampling: Not affected
Vulnerability Retbleed: Not affected
Vulnerability Spec rstack overflow: Vulnerable: Safe RET, no microcode
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Enhanced / Automatic IBRS; IBPB conditional; STIBP always-on; RSB filling; PBRSB-eIBRS Not affected; BHI Not affected
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Versions of relevant libraries:
[pip3] numpy==2.1.2
[pip3] nvidia-cublas-cu12==12.8.3.14
[pip3] nvidia-cuda-cupti-cu12==12.8.57
[pip3] nvidia-cuda-nvrtc-cu12==12.8.61
[pip3] nvidia-cuda-runtime-cu12==12.8.57
[pip3] nvidia-cudnn-cu12==9.7.1.26
[pip3] nvidia-cufft-cu12==11.3.3.41
[pip3] nvidia-curand-cu12==10.3.9.55
[pip3] nvidia-cusolver-cu12==11.7.2.55
[pip3] nvidia-cusparse-cu12==12.5.7.53
[pip3] nvidia-cusparselt-cu12==0.6.3
[pip3] nvidia-nccl-cu12==2.26.2
[pip3] nvidia-nvjitlink-cu12==12.8.61
[pip3] nvidia-nvtx-cu12==12.8.55
[pip3] torch==2.7.0+cu128
[pip3] torchaudio==2.7.0+cu128
[pip3] torchvision==0.22.0+cu128
[pip3] triton==3.3.0
[conda] numpy 2.1.2 pypi_0 pypi
[conda] nvidia-cublas-cu12 12.8.3.14 pypi_0 pypi
[conda] nvidia-cuda-cupti-cu12 12.8.57 pypi_0 pypi
[conda] nvidia-cuda-nvrtc-cu12 12.8.61 pypi_0 pypi
[conda] nvidia-cuda-runtime-cu12 12.8.57 pypi_0 pypi
[conda] nvidia-cudnn-cu12 9.7.1.26 pypi_0 pypi
[conda] nvidia-cufft-cu12 11.3.3.41 pypi_0 pypi
[conda] nvidia-curand-cu12 10.3.9.55 pypi_0 pypi
[conda] nvidia-cusolver-cu12 11.7.2.55 pypi_0 pypi
[conda] nvidia-cusparse-cu12 12.5.7.53 pypi_0 pypi
[conda] nvidia-cusparselt-cu12 0.6.3 pypi_0 pypi
[conda] nvidia-nccl-cu12 2.26.2 pypi_0 pypi
[conda] nvidia-nvjitlink-cu12 12.8.61 pypi_0 pypi
[conda] nvidia-nvtx-cu12 12.8.55 pypi_0 pypi
[conda] torch 2.7.0+cu128 pypi_0 pypi
[conda] torchaudio 2.7.0+cu128 pypi_0 pypi
[conda] torchvision 0.22.0+cu128 pypi_0 pypi
[conda] triton 3.3.0 pypi_0 pypi
```
cc @chauhang @penguinwu @voznesenskym @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @chenyang78 @kadeng @muchulee8 @amjames @aakhundov
| true
|
3,017,186,194
|
What is the difference between normal_tensor.storage().use_count() and viewed_tensor's?
|
CLiqing
|
closed
|
[] | 1
|
CONTRIBUTOR
|
In the test2() below, why is b.storage().use_count() still 2 even when I deleted the source tensor a?
```
import torch
def test1():
print("=============== test 1 ===============")
a = torch.empty(size=(17, 32, 128, 16), dtype=torch.float16)
b = a.view(-1)
# b.storage().use_count() is 2
def test2():
print("=============== test 2 ===============")
a = torch.empty(size=(17, 32, 128, 16), dtype=torch.float16)
b = a.view(-1)
del a
# b.storage().use_count() is 2
def test3():
print("=============== test 3 ===============")
a = torch.empty(size=(17, 32, 128, 16), dtype=torch.float16)
b = a.view(-1)
del b
# a.storage().use_count() is 1
test1()
test2()
test3()
```
I thought use_count=2 was because a and b each referenced the storage once, and that deleting either tensor would make the use_count become 1, but that's not the case.
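A hedged check of one possible explanation (my assumption: a view keeps a Python-visible reference to its base tensor via `._base`, so the base tensor, and its reference to the storage, stays alive even after `del a`):
```python
import torch

a = torch.empty(size=(17, 32, 128, 16), dtype=torch.float16)
b = a.view(-1)
print(b._base is a)   # True: the view holds a reference to its base tensor
del a                 # only the name is gone; b._base still keeps the base (and its
                      # storage reference) alive, which would explain use_count == 2
```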
| true
|
3,017,033,307
|
Migrate to new Windows Arm64 runners
|
iremyux
|
open
|
[
"triaged",
"open source",
"ciflow/binaries",
"topic: not user facing"
] | 1
|
COLLABORATOR
|
This PR moves the Windows Arm64 nightly jobs to the new runner image, see [arm-windows-11-image](https://github.com/actions/partner-runner-images/blob/main/images/arm-windows-11-image.md )
Fixes #151671
| true
|
3,016,777,788
|
Switch to standard pep517 sdist generation
|
zklaus
|
open
|
[
"open source",
"release notes: releng"
] | 2
|
COLLABORATOR
|
Generate source tarball with PEP 517 conform build tools instead of the custom routine in place right now.
Closes #150461.
The current procedure for generating the source tarball consists in creation of a source tree by manual copying and pruning of source files.
This PR replaces that with a call to the standard [build tool](https://build.pypa.io/en/stable/), which works with the build backend to produce an sdist. For that to work correctly, the build backend also needs to be configured. In the case of Pytorch, the backend currently is (the legacy version of) the setuptools backend, the source dist part of which is mostly configured via the `MANIFEST.in` file.
At the moment, this is still a draft due to two issues:
- According to PEP 517, the name of the source distribution file must coincide with the project name, or [more precisely](https://peps.python.org/pep-0517/#source-distributions), the source distribution of a project that generates `{NAME}-{...}.whl` wheels are required to be named `{NAME}-{...}.tar.gz`. Currently, the source tarball is called `pytorch-{...}.tar.gz`, but the generated wheels and python package are called `torch-{...}`.
- The source tree at the moment contains a small number of symbolic links. This [has been seen as problematic](https://github.com/pypa/pip/issues/5919) largely because of lack of support on Windows. Particularly unfortunate is a circular symlink in the third party `ittapi` module, which can not be resolved by replacing it with a copy.
For the first issue, the proposed solution is to distribute the source tarball as `torch-{...}.tar.gz`.
For the second issue, the best solution would be to eliminate all symbolic links in the source tree. If that is not possible, further investigation is needed. PEP 721 (now integrated in the [Source Distribution Format Specification](https://packaging.python.org/en/latest/specifications/source-distribution-format/#source-distribution-archive-features)) clarified which kinds of symbolic links are permissible. Possible solutions must be evaluated on a case-by-case basis for every existing symbolic link.
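For reference, a minimal sketch of what the standard flow amounts to (the CLI equivalent is `python -m build --sdist`; the paths are illustrative):
```python
# Minimal sketch: produce an sdist via the standard PEP 517 build frontend.
from build import ProjectBuilder

builder = ProjectBuilder(".")                # source tree containing pyproject.toml / setup.py
sdist_path = builder.build("sdist", "dist")  # invokes the backend's build_sdist hook
print(sdist_path)                            # e.g. dist/torch-<version>.tar.gz
```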
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #152098
| true
|
3,016,700,597
|
When using torch to convert to an ONNX model, testing the inference results with actual images shows a tensor mismatch
|
Zhengqinze05
|
open
|
[
"module: onnx",
"triaged",
"onnx-needs-info"
] | 2
|
NONE
|
### 🐛 Describe the bug
Here is my test code :
```py
import os
import torch
import torch.nn as nn
import torch
import torchvision
from torchvision.models.detection import fasterrcnn_mobilenet_v3_large_320_fpn
from torchvision.models.detection.faster_rcnn import FastRCNNPredictor
from torch.utils.data import Dataset, DataLoader, random_split
from torchvision.transforms import functional as F
import cv2
import numpy as np
import json
from torchvision.models.detection import ssdlite320_mobilenet_v3_large
from torchvision.models.detection.ssdlite import SSDLiteClassificationHead
import onnx
import onnxruntime as ort
ONNX_MODE_PATH = "./test_faster_rcnn.onnx"
DEVICE = torch.device('cpu')
model = fasterrcnn_mobilenet_v3_large_320_fpn(pretrained=True)
dummy_input = torch.randn(1, 3, 320, 320).to(DEVICE)
model.eval()
model.to(DEVICE)
model(dummy_input)
im = torch.zeros(1, 3, 320, 320).to(DEVICE)
torch.onnx.export(model, im, ONNX_MODE_PATH,
verbose=False,opset_version=11,
training=torch.onnx.TrainingMode.EVAL,
do_constant_folding=True,
input_names=['input'],
output_names=['output'],
dynamic_axes={'input': {0: 'batch', 2: 'height', 3: 'width'},
"boxes": {0: "num_detections"},
"scores": {0: "num_detections"},
"labels": {0: "num_detections"},
}
)
ort_session = ort.InferenceSession(ONNX_MODE_PATH)
input_name = ort_session.get_inputs()[0].name
img = cv2.imread(".\IMG_20230629_115933.jpg")
img = cv2.cvtColor(img, cv2.COLOR_BGR2RGB)
img = cv2.resize(img, (320, 320))
img_tensor = img.astype(np.float32) / 255.0
img_tensor = np.transpose(img_tensor, (2, 0, 1))[np.newaxis, ...]
# print("img_tensor :\n",img_tensor)
output_names = [output.name for output in ort_session.get_outputs()]
outputs = ort_session.run(output_names, {input_name: img_tensor})
boxes, scores, labels = outputs
print("scores:", scores)
```
Error message after running:
```pytb
Traceback (most recent call last):
File "D:\workspace\gesture_test\jxrobot_models\pytorch\github_test.py", line 57, in <module>
outputs = ort_session.run(output_names, {input_name: img_tensor})
File "D:\workspace\gesture_test\python_gesture\lib\site-packages\onnxruntime\capi\onnxruntime_inference_collection.py", line 220, in run
return self._sess.run(output_names, input_feed, run_options)
onnxruntime.capi.onnxruntime_pybind11_state.RuntimeException: [ONNXRuntimeError] : 6 : RUNTIME_EXCEPTION : Non-zero status code returned while running Reshape node. Name:'/roi_heads/Reshape_2' Status Message: D:\a\_work\1\s\onnxruntime\core\providers\cpu\tensor\reshape_helper.h:41 onnxruntime::ReshapeHelper::ReshapeHelper size != 0 && (input_shape_size % size) == 0 was false. The input tensor cannot be reshaped to the requested shape. Input shape:{150,363}, requested shape:{-1,4}
```
Strangely, I've tried other models, such as ssdlite320_mobilenet_v3_large, without encountering the same error. I've also tried modifying the dynamic_axes parameter, but it didn't work. I noticed that someone else had the same problem and fixed it by modifying functionalist.py, but when I made the same modification I found that the modified code was never called.
### Versions
Collecting environment information...
PyTorch version: 2.3.1+cu118
Is debug build: False
CUDA used to build PyTorch: 11.8
ROCM used to build PyTorch: N/A
OS: Microsoft Windows 11 专业版
GCC version: (GCC) 14.2.0
Clang version: Could not collect
CMake version: Could not collect
Libc version: N/A
Python version: 3.9.13 (tags/v3.9.13:6de2ca5, May 17 2022, 16:36:42) [MSC v.1929 64 bit (AMD64)] (64-bit runtime)
Python platform: Windows-10-10.0.22631-SP0
Is CUDA available: True
CUDA runtime version: 12.8.93
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration: GPU 0: NVIDIA GeForce RTX 4080
Nvidia driver version: 572.61
cuDNN version: Could not collect
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture=9
CurrentClockSpeed=3601
DeviceID=CPU0
Family=198
L2CacheSize=12288
L2CacheSpeed=
Manufacturer=GenuineIntel
MaxClockSpeed=3601
Name=12th Gen Intel(R) Core(TM) i7-12700K
ProcessorType=3
Revision=
Versions of relevant libraries:
[pip3] numpy==1.23.5
[pip3] onnx==1.17.0
[pip3] onnx-simplifier==0.4.36
[pip3] onnx-tf==1.10.0
[pip3] onnxruntime==1.19.2
[pip3] onnxruntime-gpu==1.19.2
[pip3] optree==0.15.0
[pip3] tf2onnx==1.16.1
[pip3] torch==2.3.1+cu118
[pip3] torchaudio==2.3.1+cu118
[pip3] torchvision==0.18.1+cu118
[conda] Could not collect
| true
|
3,016,669,515
|
[Accelerator] Add `torch.acc.set_default_device()` and `torch.acc.device_module()`
|
shink
|
closed
|
[
"triaged",
"open source",
"topic: not user facing"
] | 15
|
CONTRIBUTOR
|
### Changes
Users may want to allocate tensors to the current accelerator, but `torch.set_default_device(torch.accelerator.current_accelerator())` is too long, so `torch.accelerator.set_default_device` (or `enable_default_device`?) may be a good choice.
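A hypothetical usage sketch (the proposed name is not final and does not exist yet; only the longer spelling below works today):
```python
import torch

# Works today:
torch.set_default_device(torch.accelerator.current_accelerator())
x = torch.empty(3)  # allocated on the current accelerator instead of the CPU

# Proposed shorthand (hypothetical, what this PR would add):
# torch.accelerator.set_default_device()
```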
### Test
```python
python test/test_accelerator.py
```
If you have any ideas, please let me know. Thanks! cc: @albanD @guangyey @FFFrog
| true
|
3,016,589,024
|
To fix inconsistency between signature and doc on `torch.bernoulli()`
|
ILCSFNO
|
open
|
[
"module: distributions",
"module: docs",
"triaged",
"actionable"
] | 3
|
CONTRIBUTOR
|
### 📚 The doc issue
The doc of [torch.bernoulli()](https://pytorch.org/docs/stable/generated/torch.bernoulli.html#torch-bernoulli) shows its description as below:
```text
torch.bernoulli(input: Tensor, *, generator: Optional[Generator], out: Optional[Tensor]) → Tensor
Draws binary random numbers (0 or 1) from a Bernoulli distribution.
...
```
For its signatures in codes, it showed like this:
```text
@overload
def bernoulli(input: Tensor, *, generator: Optional[Generator] = None, out: Optional[Tensor] = None) -> Tensor:
@overload
def bernoulli(input: Tensor, p: _float, *, generator: Optional[Generator] = None) -> Tensor:
```
They differ on the param `p`. To validate, the repro below shows that `torch.bernoulli()` does accept this param:
### Repro
```python
import torch
import numpy as np
input_data = torch.empty(10, 2).uniform_(0, 1)
output_data = torch.bernoulli(input_data, 0.3)
print(output_data)
```
### Output
```text
tensor([[0., 0.],
[0., 0.],
[0., 0.],
[0., 0.],
[0., 1.],
[1., 0.],
[0., 0.],
[1., 0.],
[0., 0.],
[0., 1.]])
```
Suggest fixing the doc to match the signatures in the code.
Thanks for noting.
### Suggest a potential alternative/fix
* Fix the doc to match the signatures in the code.
cc @fritzo @neerajprad @alicanb @nikitaved @svekars @sekyondaMeta @AlannaBurke
| true
|
3,016,539,613
|
Work around MPSGraph issue in backward pass of nn.ReplicationPad1d/2d
|
xwu-498
|
open
|
[
"triaged",
"open source",
"release notes: mps"
] | 2
|
NONE
|
Fixes https://github.com/pytorch/pytorch/issues/135447.
When the 3rd-from-last dimension is 2^16 or greater, MPSGraph returns 0 for the pad gradient. To work around this, we break the problematic dimension into chunks, each no larger than 2^16 - 1.
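A minimal Python-level sketch of the chunking idea (illustration only; the actual change lives in the MPS backend):
```python
import torch

MAX_CHUNK = 2**16 - 1  # largest safe size for the problematic dimension

def run_in_chunks(fn, x, dim=0, max_size=MAX_CHUNK):
    # Split the oversized dimension into safe pieces, apply fn to each piece,
    # and concatenate the results back along the same dimension.
    pieces = torch.split(x, max_size, dim=dim)
    return torch.cat([fn(p) for p in pieces], dim=dim)
```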
Test case for nn.ReplicationPad1d:
```
shape = [65739, 2, 4]
x_cpu = torch.randn(shape, device='cpu', requires_grad=True)
x_mps = x_cpu.clone().detach().to('mps').requires_grad_(True)
model = torch.nn.ReplicationPad1d((1, 1))
out_cpu = model(x_cpu)
out_mps = model(x_mps)
# backward
g_cpu = torch.randn_like(out_cpu)
g_mps = g_cpu.clone().detach().to('mps').requires_grad_(False)
out_cpu.backward(g_cpu)
out_mps.backward(g_mps)
print(f"{((x_cpu.grad - x_mps.grad.cpu()).abs() > 1e-5).sum() = }")
# Expected Output:
# ((x_cpu.grad - x_mps.grad.cpu()).abs() > 1e-5).sum() = tensor(0)
```
Test case for nn.ReplicationPad2d,
```
shape = [2, 65739, 2, 4]
x_cpu = torch.randn(shape, device='cpu', requires_grad=True)
x_mps = x_cpu.clone().detach().to('mps').requires_grad_(True)
model = torch.nn.ReplicationPad2d((1, 1, 1, 1))
out_cpu = model(x_cpu)
out_mps = model(x_mps)
# backward
g_cpu = torch.randn_like(out_cpu)
g_mps = g_cpu.clone().detach().to('mps').requires_grad_(False)
out_cpu.backward(g_cpu)
out_mps.backward(g_mps)
print(f"{((x_cpu.grad - x_mps.grad.cpu()).abs() > 1e-5).sum() = }")
# Expected Output:
# ((x_cpu.grad - x_mps.grad.cpu()).abs() > 1e-5).sum() = tensor(0)
```
These tests produce expected output with this workaround.
| true
|
3,016,508,665
|
Add optional device index to AOTIModelPackageLoader
|
juliusgh
|
closed
|
[
"triaged",
"open source",
"Merged",
"ciflow/trunk",
"module: inductor",
"release notes: inductor (aoti)",
"skip-url-lint"
] | 9
|
CONTRIBUTOR
|
This is my suggestion for resolving #152087
This PR extends the constructor of `AOTIModelPackageLoader` with an (optional) device index. The device type is still determined by `metadata_["AOTI_DEVICE_KEY"]`, but the `device_index` argument can be used to move an AOTI model package to different devices like `cuda:0`, `cuda:1`, ... in a convenient way. AFAIK, this is not possible so far using `AOTIModelPackageLoader` alone. The default case (no device index specified) with `metadata_["AOTI_DEVICE_KEY"] == "cuda"` would lead to the current behavior, i.e., the model is loaded to device `cuda`.
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
| true
|
3,016,491,867
|
[AOTInductor] Inherit Buffer if not being updated
|
muchulee8
|
closed
|
[
"fb-exported",
"Merged",
"ciflow/trunk",
"ciflow/inductor",
"release notes: inductor (aoti)"
] | 8
|
CONTRIBUTOR
|
Summary: Inherit buffer from original constants buffer if it's not being updated.
Test Plan: TBD
Differential Revision: D73571260
| true
|
3,016,486,952
|
[Intel GPU] Support f32 intermediate dtype, headdim size <=576 and f32 causal mask for SDPA
|
LuFinch
|
open
|
[
"module: cpu",
"triaged",
"module: mkldnn",
"open source",
"release notes: xpu",
"module: xpu"
] | 3
|
CONTRIBUTOR
|
In OneDNN v3.7, SDPA has the following defects:
1. The dtype of the intermediate value is the same as QKV, while PyTorch uses an FP32 intermediate dtype to ensure better accuracy.
2. Only headdim sizes <= 256 are supported.
3. Implicit causal mask is not supported when QKV is FP32, so we need to build an attention mask explicitly with aten ops.
In OneDNN v3.8, these defects are addressed. Since these are tiny changes, I decided to put them in a single PR.
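For context, a hedged sketch of the explicit causal mask that FP32 currently needs (illustrative shapes; assumes an XPU build):
```python
import torch
import torch.nn.functional as F

q = torch.randn(1, 8, 128, 64, device="xpu", dtype=torch.float32)
k = torch.randn(1, 8, 128, 64, device="xpu", dtype=torch.float32)
v = torch.randn(1, 8, 128, 64, device="xpu", dtype=torch.float32)

# Build the causal mask explicitly with aten ops instead of relying on is_causal=True.
causal_mask = torch.tril(torch.ones(128, 128, dtype=torch.bool, device="xpu"))
out = F.scaled_dot_product_attention(q, k, v, attn_mask=causal_mask)
```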
cc @jgong5 @mingfeima @XiaobingSuper @sanchitintel @ashokei @jingxu10 @jerryzh168 @gujinghui @PenghuiCheng @jianyuh @min-jean-cho @yanbing-j @Guobing-Chen @Xia-Weiwen @snadampal @EikanWang @fengyuan14 @guangyey
| true
|
3,016,388,012
|
[XPU] test_tensordot_out_kernel_errors_with_autograd_xpu_float32 UT failure
|
CuiYifeng
|
closed
|
[
"triaged",
"module: xpu"
] | 0
|
CONTRIBUTOR
|
### 🐛 Describe the bug
New UT `test_linalg_xpu.py::TestLinalgXPU::test_tensordot_out_kernel_errors_with_autograd_xpu_float32` failed with the following error:
```
AssertionError: "the 'out' tensor was specified and requires gradients" does not match "cannot resize variables that require grad"
```
### Versions
Collecting environment information...
PyTorch version: 2.8.0a0+gitf7ddc51
Is debug build: False
CUDA used to build PyTorch: None
ROCM used to build PyTorch: N/A
OS: Ubuntu 22.04.5 LTS (x86_64)
GCC version: (Ubuntu 11.4.0-1ubuntu1~22.04) 11.4.0
Clang version: 18.1.8 (++20240731024944+3b5b5c1ec4a3-1~exp1~20240731145000.144)
CMake version: version 3.31.4
Libc version: glibc-2.35
Python version: 3.10.16 | packaged by conda-forge | (main, Dec 5 2024, 14:16:10) [GCC 13.3.0] (64-bit runtime)
Python platform: Linux-5.15.47+prerelease24.8.6-x86_64-with-glibc2.35
Is CUDA available: False
CUDA runtime version: No CUDA
CUDA_MODULE_LOADING set to: N/A
GPU models and configuration: No CUDA
Nvidia driver version: No CUDA
cuDNN version: No CUDA
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 52 bits physical, 57 bits virtual
Byte Order: Little Endian
CPU(s): 224
On-line CPU(s) list: 0-223
Vendor ID: GenuineIntel
Model name: Genuine Intel(R) CPU 0000%@
CPU family: 6
Model: 143
Thread(s) per core: 2
Core(s) per socket: 56
Socket(s): 2
Stepping: 5
CPU max MHz: 4000.0000
CPU min MHz: 800.0000
BogoMIPS: 3600.00
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc art arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc cpuid aperfmperf tsc_known_freq pni pclmulqdq dtes64 monitor ds_cpl vmx smx est tm2 ssse3 sdbg fma cx16 xtpr pdcm pcid dca sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm 3dnowprefetch cpuid_fault epb cat_l3 cat_l2 cdp_l3 invpcid_single intel_ppin cdp_l2 ssbd mba ibrs ibpb stibp ibrs_enhanced tpr_shadow vnmi flexpriority ept vpid ept_ad fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid cqm rdt_a avx512f avx512dq rdseed adx smap avx512ifma clflushopt clwb intel_pt avx512cd sha_ni avx512bw avx512vl xsaveopt xsavec xgetbv1 xsaves cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local split_lock_detect avx_vnni avx512_bf16 wbnoinvd dtherm ida arat pln pts hwp hwp_act_window hwp_epp hwp_pkg_req avx512vbmi umip pku ospke waitpkg avx512_vbmi2 gfni vaes vpclmulqdq avx512_vnni avx512_bitalg tme avx512_vpopcntdq la57 rdpid bus_lock_detect cldemote movdiri movdir64b enqcmd fsrm md_clear serialize tsxldtrk pconfig arch_lbr avx512_fp16 flush_l1d arch_capabilities
Virtualization: VT-x
L1d cache: 5.3 MiB (112 instances)
L1i cache: 3.5 MiB (112 instances)
L2 cache: 224 MiB (112 instances)
L3 cache: 210 MiB (2 instances)
NUMA node(s): 2
NUMA node0 CPU(s): 0-55,112-167
NUMA node1 CPU(s): 56-111,168-223
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl and seccomp
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Enhanced IBRS, IBPB conditional, RSB filling
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Versions of relevant libraries:
[pip3] numpy==2.2.3
[pip3] optree==0.14.0
[pip3] torch==2.8.0a0+gitf7ddc51
[conda] mkl-include 2025.0.1 pypi_0 pypi
[conda] mkl-static 2025.0.1 pypi_0 pypi
[conda] numpy 2.2.3 pypi_0 pypi
[conda] optree 0.14.0 pypi_0 pypi
[conda] torch 2.8.0a0+gitf7ddc51 pypi_0 pypi
cc @gujinghui @EikanWang @fengyuan14 @guangyey
| true
|
3,016,385,956
|
dynamically set tags
|
jijiew
|
open
|
[] | 2
|
CONTRIBUTOR
|
Fixes #150972
This pull request allows tags to be set dynamically.
| true
|
3,016,379,634
|
Incorrect Gradient Computation in `torch.log1p`
|
vwrewsge
|
closed
|
[
"triage review",
"module: autograd",
"module: NaNs and Infs"
] | 2
|
NONE
|
### 🐛 Describe the bug
# To Reproduce
```python
import torch
def test_bug():
a = torch.tensor([-1.0, 0.5, 1.0], requires_grad=True)
l = torch.log1p(a)[a > -1].sum() # This will include only a[1] and a[2]
l.backward()
print(a.grad)
if __name__ == "__main__":
test_bug()
```
# Output
```
tensor([ nan, 0.6667, 0.5000])
```
# Expected Behaviour
Since `a[0] = -1.0` is excluded by the mask `(a > -1)`, it should not contribute to the output `l`. Therefore, its gradient (`a.grad[0]`) should be `0` instead of `nan`.
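A hedged workaround sketch (not a fix; it only shows that indexing before `log1p` keeps the excluded element out of the graph and yields a zero gradient):
```python
import torch

a = torch.tensor([-1.0, 0.5, 1.0], requires_grad=True)
mask = a > -1
l = torch.log1p(a[mask]).sum()  # a[0] = -1.0 never enters log1p
l.backward()
print(a.grad)  # tensor([0.0000, 0.6667, 0.5000])
```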
### Versions
PyTorch version: 2.6.0+cu124
Is debug build: False
CUDA used to build PyTorch: 12.4
ROCM used to build PyTorch: N/A
cc @ezyang @albanD @gqchen @nikitaved @soulitzer @Varal7 @xmfan
| true
|
3,016,373,776
|
AOTInductor package can only be loaded on the first GPU (cuda:0) in C++ via AOTIModelPackageLoader
|
juliusgh
|
closed
|
[
"triaged",
"oncall: r2p",
"oncall: pt2",
"oncall: export",
"module: aotinductor"
] | 2
|
CONTRIBUTOR
|
### 🐛 Describe the bug
Thanks for implementing the very helpful AOTInductor features in C++! In my scenario I have to load a compiled `*.pt2` package on multiple GPUs (e.g. `cuda:{0..7}`) and then run inference on all of them. AFAIK `torch::inductor::AOTIModelPackageLoader` only supports loading the package on device `cuda` and I think this is not intended.
The signature of the constructor of `AOTIModelPackageLoader` does not have the option to pass the specific device:
```cpp
AOTIModelPackageLoader(
const std::string& model_package_path,
const std::string& model_name = "model",
const bool run_single_threaded = false,
const size_t num_runners = 1)
```
From the `*.pt2` file, the package loader reads the following information:
```cpp
// Construct the runner depending on the device information
std::string device = metadata_["AOTI_DEVICE_KEY"];
```
and then uses this string as device identifier to instantiate the runner
```cpp
runner_ = registered_aoti_runner[device](
so_path, num_runners, device, cubin_dir, run_single_threaded);
}
```
However, this is only the device type (e.g. `cuda`, not `cuda:1`), and the model will only be loaded to `cuda:0`.
Changing the metadata `AOTI_DEVICE_KEY` to the specific device would not be a good solution in my opinion, and so far the AOTI exporter in Python only stores the device type, e.g., `cuda`.
I think it would be very helpful if the constructor of `AOTIModelPackageLoader` can be extended with an (optional) device specification.
At the moment, I use the following workaround that works:
1. Unpack the `*.pt2` file manually and retrieve `so_path` and `cubin_dir`
2. Create PyTorch AOTI runner on specified device using
```cpp
auto runner = torch::inductor::AOTIModelContainerRunnerCuda(so_path, 1, device_string, cubin_dir, false);
```
### Versions
Collecting environment information...
PyTorch version: 2.7.0+cu128
Is debug build: False
CUDA used to build PyTorch: 12.8
ROCM used to build PyTorch: N/A
OS: Ubuntu 24.04.2 LTS (x86_64)
GCC version: (Ubuntu 13.3.0-6ubuntu2~24.04) 13.3.0
Clang version: Could not collect
CMake version: version 3.28.3
Libc version: glibc-2.39
Python version: 3.12.3 (main, Feb 4 2025, 14:48:35) [GCC 13.3.0] (64-bit runtime)
Python platform: Linux-6.11.0-24-generic-x86_64-with-glibc2.39
Is CUDA available: True
CUDA runtime version: 12.8.93
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration:
GPU 0: NVIDIA H100 NVL
GPU 1: NVIDIA H100 NVL
Nvidia driver version: 570.133.20
cuDNN version: Could not collect
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 52 bits physical, 57 bits virtual
Byte Order: Little Endian
CPU(s): 96
On-line CPU(s) list: 0-95
Vendor ID: AuthenticAMD
Model name: AMD EPYC 9654 96-Core Processor
CPU family: 25
Model: 17
Thread(s) per core: 1
Core(s) per socket: 96
Socket(s): 1
Stepping: 1
Frequency boost: enabled
CPU(s) scaling MHz: 51%
CPU max MHz: 3707.8120
CPU min MHz: 1500.0000
BogoMIPS: 4800.20
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ht syscall nx mmxext fxsr_opt pdpe1gb rdtscp lm constant_tsc rep_good amd_lbr_v2 nopl xtopology nonstop_tsc cpuid extd_apicid aperfmperf rapl pni pclmulqdq monitor ssse3 fma cx16 pcid sse4_1 sse4_2 movbe popcnt aes xsave avx f16c rdrand lahf_lm cmp_legacy svm extapic cr8_legacy abm sse4a misalignsse 3dnowprefetch osvw ibs skinit wdt tce topoext perfctr_core perfctr_nb bpext perfctr_llc mwaitx cpb cat_l3 cdp_l3 hw_pstate ssbd mba perfmon_v2 ibrs ibpb stibp ibrs_enhanced vmmcall fsgsbase bmi1 avx2 smep bmi2 erms invpcid cqm rdt_a avx512f avx512dq rdseed adx smap avx512ifma clflushopt clwb avx512cd sha_ni avx512bw avx512vl xsaveopt xsavec xgetbv1 xsaves cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local user_shstk avx512_bf16 clzero irperf xsaveerptr rdpru wbnoinvd amd_ppin cppc arat npt lbrv svm_lock nrip_save tsc_scale vmcb_clean flushbyasid decodeassists pausefilter pfthreshold avic v_vmsave_vmload vgif x2avic v_spec_ctrl vnmi avx512vbmi umip pku ospke avx512_vbmi2 gfni vaes vpclmulqdq avx512_vnni avx512_bitalg avx512_vpopcntdq la57 rdpid overflow_recov succor smca fsrm flush_l1d debug_swap
Virtualization: AMD-V
L1d cache: 3 MiB (96 instances)
L1i cache: 3 MiB (96 instances)
L2 cache: 96 MiB (96 instances)
L3 cache: 384 MiB (12 instances)
NUMA node(s): 1
NUMA node0 CPU(s): 0-95
Vulnerability Gather data sampling: Not affected
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Not affected
Vulnerability Reg file data sampling: Not affected
Vulnerability Retbleed: Not affected
Vulnerability Spec rstack overflow: Mitigation; Safe RET
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Enhanced / Automatic IBRS; IBPB conditional; STIBP disabled; RSB filling; PBRSB-eIBRS Not affected; BHI Not affected
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Versions of relevant libraries:
[pip3] numpy==2.1.2
[pip3] nvidia-cublas-cu12==12.8.3.14
[pip3] nvidia-cuda-cupti-cu12==12.8.57
[pip3] nvidia-cuda-nvrtc-cu12==12.8.61
[pip3] nvidia-cuda-runtime-cu12==12.8.57
[pip3] nvidia-cudnn-cu12==9.7.1.26
[pip3] nvidia-cufft-cu12==11.3.3.41
[pip3] nvidia-curand-cu12==10.3.9.55
[pip3] nvidia-cusolver-cu12==11.7.2.55
[pip3] nvidia-cusparse-cu12==12.5.7.53
[pip3] nvidia-cusparselt-cu12==0.6.3
[pip3] nvidia-nccl-cu12==2.26.2
[pip3] nvidia-nvjitlink-cu12==12.8.61
[pip3] nvidia-nvtx-cu12==12.8.55
[pip3] torch==2.7.0+cu128
[pip3] torchaudio==2.7.0+cu128
[pip3] torchvision==0.22.0+cu128
[pip3] triton==3.3.0
cc @dzhulgakov @chauhang @penguinwu @avikchaudhuri @gmagogsfm @zhxchen17 @tugsbayasgalan @angelayi @suo @ydwu4 @desertfire @chenyang78 @yushangdi @benjaminglass1
| true
|
3,016,198,895
|
Update CPU Inductor merge rules by adding more CPP Template
|
leslie-fang-intel
|
open
|
[
"open source",
"topic: not user facing"
] | 1
|
COLLABORATOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #152086
**Summary**
Add more CPP Template into the CPU Inductor merge rules.
| true
|
3,016,193,153
|
Aborted (core dumped) in torch.fliplr
|
cx104906
|
closed
|
[
"needs reproduction",
"module: crash",
"triaged",
"security",
"topic: fuzzer"
] | 2
|
NONE
|
### 🐛 Describe the bug
### Summary
When using torch.fliplr with invalid data, the program crashes with Aborted (core dumped).
### Reproduce
curl -L -o 001-args.pkl "https://github.com/cx104906/poc/raw/main/pytorch/id%3A000001-args.pkl"
curl -L -o 001-kwargs.pkl "https://github.com/cx104906/poc/raw/main/pytorch/id%3A000001-kwargs.pkl"
python testcrash/run.py
run.py:
```
import torch
import pickle
device = torch.device('cpu')
print(torch.__version__)
mylistfile = "xxx/testcrash/001-args.pkl"
mydictfile = "xxx/testcrash/001-kwargs.pkl"
with open(mylistfile,"rb") as f:
mylist = pickle.load(f)
with open(mydictfile,"rb") as f:
mydict = pickle.load(f)
print("test......")
torch.fliplr(*mylist,**mydict)
```
output:
2.6.0+cpu
/home/cas/anaconda3/envs/py310/lib/python3.10/site-packages/torch/_utils.py:410: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly. To access UntypedStorage directly, use tensor.untyped_storage() instead of tensor.storage()
device=storage.device,
test......
corrupted size vs. prev_size
Aborted (core dumped)
### Environment
python:3.10.0
pytorch:2.6.0+cpu
os:ubuntu-18.04
### Versions
python testcrash/collect_env.py
Collecting environment information...
PyTorch version: 2.6.0+cpu
Is debug build: False
CUDA used to build PyTorch: Could not collect
ROCM used to build PyTorch: N/A
OS: Ubuntu 18.04.6 LTS (x86_64)
GCC version: (GCC) 11.2.0
Clang version: 12.0.1
CMake version: version 3.22.2
Libc version: glibc-2.27
Python version: 3.10.0 (default, Mar 3 2022, 09:58:08) [GCC 7.5.0] (64-bit runtime)
Python platform: Linux-5.4.0-144-generic-x86_64-with-glibc2.27
Is CUDA available: False
CUDA runtime version: Could not collect
CUDA_MODULE_LOADING set to: N/A
GPU models and configuration: GPU 0: Tesla T4
Nvidia driver version: 520.61.05
cuDNN version: Could not collect
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Byte Order: Little Endian
CPU(s): 32
On-line CPU(s) list: 0-31
Thread(s) per core: 1
Core(s) per socket: 32
Socket(s): 1
NUMA node(s): 1
Vendor ID: GenuineIntel
CPU family: 6
Model: 85
Model name: Intel(R) Xeon(R) Gold 5218R CPU @ 2.10GHz
Stepping: 7
CPU MHz: 2095.076
BogoMIPS: 4190.15
Virtualization: VT-x
Hypervisor vendor: KVM
Virtualization type: full
L1d cache: 32K
L1i cache: 32K
L2 cache: 4096K
L3 cache: 16384K
NUMA node0 CPU(s): 0-31
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ss ht syscall nx pdpe1gb rdtscp lm constant_tsc arch_perfmon rep_good nopl xtopology cpuid tsc_known_freq pni pclmulqdq vmx ssse3 fma cx16 pdcm pcid sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand hypervisor lahf_lm abm 3dnowprefetch cpuid_fault invpcid_single ssbd ibrs ibpb stibp ibrs_enhanced tpr_shadow vnmi flexpriority ept vpid ept_ad fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid mpx avx512f avx512dq rdseed adx smap clflushopt clwb avx512cd avx512bw avx512vl xsaveopt xsavec xgetbv1 xsaves arat umip pku ospke avx512_vnni md_clear arch_capabilities
Versions of relevant libraries:
[pip3] numpy==1.26.4
[pip3] nvidia-cublas-cu11==11.10.3.66
[pip3] nvidia-cublas-cu12==12.4.5.8
[pip3] nvidia-cuda-cupti-cu11==11.7.101
[pip3] nvidia-cuda-cupti-cu12==12.4.127
[pip3] nvidia-cuda-nvrtc-cu11==11.7.99
[pip3] nvidia-cuda-nvrtc-cu12==12.4.127
[pip3] nvidia-cuda-runtime-cu11==11.7.99
[pip3] nvidia-cuda-runtime-cu12==12.4.127
[pip3] nvidia-cudnn-cu11==8.5.0.96
[pip3] nvidia-cudnn-cu12==9.1.0.70
[pip3] nvidia-cufft-cu11==10.9.0.58
[pip3] nvidia-cufft-cu12==11.2.1.3
[pip3] nvidia-curand-cu11==10.2.10.91
[pip3] nvidia-curand-cu12==10.3.5.147
[pip3] nvidia-cusolver-cu11==11.4.0.1
[pip3] nvidia-cusolver-cu12==11.6.1.9
[pip3] nvidia-cusparse-cu11==11.7.4.91
[pip3] nvidia-cusparse-cu12==12.3.1.170
[pip3] nvidia-cusparselt-cu12==0.6.2
[pip3] nvidia-nccl-cu11==2.14.3
[pip3] nvidia-nccl-cu12==2.21.5
[pip3] nvidia-nvjitlink-cu12==12.4.127
[pip3] nvidia-nvtx-cu11==11.7.91
[pip3] nvidia-nvtx-cu12==12.4.127
[pip3] optree==0.14.1
[pip3] torch==2.6.0+cpu
[pip3] torchaudio==2.6.0+cpu
[pip3] torchvision==0.21.0+cpu
[pip3] triton==3.2.0
[conda] torch 2.6.0+cpu pypi_0 pypi
[conda] torchaudio 2.6.0+cpu pypi_0 pypi
[conda] torchvision 0.21.0+cpu pypi_0 pypi
| true
|
3,015,903,788
|
Revert "Add a warning when a tensor with requires_grad=True is converted to a scalar (#143261)"
|
PaulZhang12
|
closed
|
[
"ci-no-td"
] | 4
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* (to be filled)
This reverts commit 515b45e5693dbf9dd58d8472806cbe5f49e43074.
Reverted https://github.com/pytorch/pytorch/pull/143261 on behalf of https://github.com/clee2000 due to failing internal tests D72135661 ([comment](https://github.com/pytorch/pytorch/pull/143261#issuecomment-2767531682))
| true
|
3,015,860,789
|
DISABLED test_captured_scale_float16_cuda_float16 (__main__.TestFlexAttentionCUDA)
|
pytorch-bot[bot]
|
closed
|
[
"triaged",
"module: flaky-tests",
"skipped",
"oncall: pt2",
"module: inductor"
] | 3
|
NONE
|
Platforms: linux
This test was disabled because it is failing in CI. See [recent examples](https://hud.pytorch.org/flakytest?name=test_captured_scale_float16_cuda_float16&suite=TestFlexAttentionCUDA&limit=100) and the most recent trunk [workflow logs](https://github.com/pytorch/pytorch/runs/41048885024).
Over the past 3 hours, it has been determined flaky in 4 workflow(s) with 8 failures and 4 successes.
**Debugging instructions (after clicking on the recent samples link):**
DO NOT ASSUME THINGS ARE OKAY IF THE CI IS GREEN. We now shield flaky tests from developers so CI will thus be green but it will be harder to parse the logs.
To find relevant log snippets:
1. Click on the workflow logs linked above
2. Click on the Test step of the job so that it is expanded. Otherwise, the grepping will not work.
3. Grep for `test_captured_scale_float16_cuda_float16`
4. There should be several instances run (as flaky tests are rerun in CI) from which you can study the logs.
<details><summary>Sample error message</summary>
```
Traceback (most recent call last):
File "/var/lib/jenkins/workspace/test/inductor/test_flex_attention.py", line 1748, in test_captured_scale
self.run_test(score_mod_scale, dtype, device=device)
File "/var/lib/jenkins/workspace/test/inductor/test_flex_attention.py", line 509, in run_test
golden_out.backward(backward_grad.to(torch.float64))
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_tensor.py", line 648, in backward
torch.autograd.backward(
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/autograd/__init__.py", line 354, in backward
_engine_run_backward(
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/autograd/graph.py", line 824, in _engine_run_backward
return Variable._execution_engine.run_backward( # Calls into the C++ engine to run the backward pass
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/autograd/function.py", line 307, in apply
return user_fn(self, *args)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_higher_order_ops/flex_attention.py", line 679, in backward
) = flex_attention_backward(
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_higher_order_ops/flex_attention.py", line 132, in __call__
return super().__call__(
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_ops.py", line 501, in __call__
return wrapper()
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_ops.py", line 497, in wrapper
return self.dispatch(
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_ops.py", line 357, in dispatch
return kernel(*args, **kwargs)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_ops.py", line 320, in maybe_run_autograd
return self(*args, **kwargs)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_higher_order_ops/flex_attention.py", line 132, in __call__
return super().__call__(
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_ops.py", line 501, in __call__
return wrapper()
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_ops.py", line 497, in wrapper
return self.dispatch(
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_ops.py", line 357, in dispatch
return kernel(*args, **kwargs)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_higher_order_ops/flex_attention.py", line 847, in sdpa_dense_backward
grad_softmax_scores - sum_scores + grad_logsumexp.unsqueeze(-1)
torch.OutOfMemoryError: CUDA out of memory. Tried to allocate 1024.00 MiB. GPU 0 has a total capacity of 22.05 GiB of which 550.12 MiB is free. Including non-PyTorch memory, this process has 21.50 GiB memory in use. Of the allocated memory 4.81 GiB is allocated by PyTorch, and 16.42 GiB is reserved by PyTorch but unallocated. If reserved but unallocated memory is large try setting PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True to avoid fragmentation. See documentation for Memory Management (https://pytorch.org/docs/stable/notes/cuda.html#environment-variables)
To execute this test, run the following from the base repo dir:
python test/inductor/test_flex_attention.py TestFlexAttentionCUDA.test_captured_scale_float16_cuda_float16
This message can be suppressed by setting PYTORCH_PRINT_REPRO_ON_FAILURE=0
```
</details>
Test file path: `inductor/test_flex_attention.py`
cc @clee2000 @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
| true
|
3,015,860,785
|
DISABLED test_builtin_score_mods_float32_score_mod4_cuda_float32 (__main__.TestFlexAttentionCUDA)
|
pytorch-bot[bot]
|
closed
|
[
"triaged",
"module: flaky-tests",
"skipped",
"oncall: pt2",
"module: inductor"
] | 3
|
NONE
|
Platforms: linux
This test was disabled because it is failing in CI. See [recent examples](https://hud.pytorch.org/flakytest?name=test_builtin_score_mods_float32_score_mod4_cuda_float32&suite=TestFlexAttentionCUDA&limit=100) and the most recent trunk [workflow logs](https://github.com/pytorch/pytorch/runs/41051796108).
Over the past 3 hours, it has been determined flaky in 4 workflow(s) with 8 failures and 4 successes.
**Debugging instructions (after clicking on the recent samples link):**
DO NOT ASSUME THINGS ARE OKAY IF THE CI IS GREEN. We now shield flaky tests from developers so CI will thus be green but it will be harder to parse the logs.
To find relevant log snippets:
1. Click on the workflow logs linked above
2. Click on the Test step of the job so that it is expanded. Otherwise, the grepping will not work.
3. Grep for `test_builtin_score_mods_float32_score_mod4_cuda_float32`
4. There should be several instances run (as flaky tests are rerun in CI) from which you can study the logs.
Test file path: `inductor/test_flex_attention.py`
cc @clee2000 @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
| true
|
3,015,740,899
|
Wrong formula for CosineAnnealingLR
|
bbbbbbbbba
|
open
|
[
"module: docs",
"module: optimizer",
"triaged",
"actionable"
] | 3
|
NONE
|
### 📚 The doc issue
https://github.com/pytorch/pytorch/blob/1eba9b3aa3c43f86f4a2c807ac8e12c4a7767340/torch/optim/lr_scheduler.py#L1054-L1056
This formula does not incorporate the learning rate of the last step; it is the same as the "If the learning rate is set solely by this scheduler" formula below, and it does not seem to agree with the actual calculation for this case:
https://github.com/pytorch/pytorch/blob/1eba9b3aa3c43f86f4a2c807ac8e12c4a7767340/torch/optim/lr_scheduler.py#L1123-L1129
### Suggest a potential alternative/fix
I think the correct formula should be something like:
```
\eta_{t+1} & = \eta_{min} + (\eta_t - \eta_{min})\left.\left(1
+ \cos\left(\frac{T_{cur}+1}{T_{max}}\pi\right)\right)\middle/\left(1
+ \cos\left(\frac{T_{cur}}{T_{max}}\pi\right)\right)\right.,
& T_{cur} \neq (2k+1)T_{max}; \\
```
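A quick numerical sanity check of the suggested recurrence against the closed form (a sketch, assuming `eta_min = 0` and an initial learning rate of 1.0):
```python
import math

T_max, eta_min, eta0 = 10, 0.0, 1.0

closed = [eta_min + (eta0 - eta_min) * (1 + math.cos(math.pi * t / T_max)) / 2
          for t in range(T_max + 1)]

eta, recursive = eta0, [eta0]
for t in range(T_max):  # t plays the role of T_cur
    eta = eta_min + (eta - eta_min) * (1 + math.cos(math.pi * (t + 1) / T_max)) / (
        1 + math.cos(math.pi * t / T_max))
    recursive.append(eta)

print(all(abs(a - b) < 1e-9 for a, b in zip(closed, recursive)))  # True
```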
cc @svekars @sekyondaMeta @AlannaBurke @vincentqb @jbschlosser @albanD @janeyx99 @crcrpar
| true
|
3,015,732,432
|
[BE] Replace `std::runtime_error` with `TORCH_CHECK` [2/N]
|
shink
|
open
|
[
"open source",
"release notes: quantization"
] | 2
|
CONTRIBUTOR
|
Part of: #148114
Related commits:
- #151880
cc: @albanD
| true
|
3,015,711,694
|
Adding fbgemm to allowlist
|
jimone1
|
closed
|
[
"Merged",
"ciflow/trunk",
"release notes: fx",
"fx"
] | 8
|
CONTRIBUTOR
|
Adding `torch.ops.fbgemm` to GraphPickler's allowlist. Otherwise, an fx graph module containing an `fbgemm` node will return an "Unable to pickle non-standard op" error.
The validation is done on the model, and the difference appears only in the graph name, not the nodes.
cc @aorenste @ezyang @SherlockNoMad @EikanWang @jgong5 @wenzhe-nrv
| true
|
3,015,677,260
|
Adding torch.ops.fbgemm to whitelist in GraphPickler
|
jimone1
|
closed
|
[
"release notes: fx",
"fx"
] | 2
|
CONTRIBUTOR
|
As titled. This is tested by running on the model; see D73553912 as an example. The only difference is the module name.
cc @ezyang @SherlockNoMad @EikanWang @jgong5 @wenzhe-nrv
| true
|
3,015,657,302
|
NCCL Error 1: unhandled CUDA error during DistributedDataParallel (DDP) training with NVIDIA GeForce RTX 5090
|
kingchou007
|
closed
|
[
"module: build"
] | 3
|
NONE
|
### 🐛 Describe the bug
I'm encountering an error while running a distributed training job using DistributedDataParallel (DDP) on a system with an NVIDIA GeForce RTX 5090 GPU. The job fails with the following error:
```
RuntimeError: NCCL Error 1: unhandled cuda error
```
The issue seems to be related to NCCL (NVIDIA Collective Communications Library) and CUDA compatibility. The error message mentions "named symbol not found", and NCCL is falling back to an internal implementation because it can't find the required CUDA symbols.
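A hedged compatibility check (my assumption: the root cause is that the installed wheel, built for CUDA 11.8, does not ship `sm_120` kernels for the RTX 5090, as the warning under Versions below also suggests):
```python
import torch

print(torch.__version__)           # 2.0.0+cu118
print(torch.version.cuda)          # CUDA toolkit the wheel was built against
print(torch.cuda.get_arch_list())  # compiled compute capabilities; sm_120 must appear for the RTX 5090
```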
## Error Log
```
4d1a2ae696c8:50555:50555 [0] NCCL WARN Cuda failure 'named symbol not found'
4d1a2ae696c8:50555:50555 [0] NCCL INFO Bootstrap : Using eth0:172.17.0.2<0>
4d1a2ae696c8:50555:50555 [0] NCCL INFO NET/Plugin : No plugin found (libnccl-net.so), using internal implementation
4d1a2ae696c8:50555:50555 [0] NCCL INFO cudaDriverVersion 12080
NCCL version 2.14.3+cuda11.8
...
4d1a2ae696c8:50555:50719 [0] NCCL INFO Using network Socket
4d1a2ae696c8:50555:50719 [0] NCCL INFO NCCL_P2P_LEVEL set by environment to LOC
...
```
### Versions
/root/miniforge3/envs/rise/lib/python3.8/site-packages/torch/cuda/__init__.py:173: UserWarning:
NVIDIA GeForce RTX 5090 with CUDA capability sm_120 is not compatible with the current PyTorch installation.
The current PyTorch install supports CUDA capabilities sm_37 sm_50 sm_60 sm_70 sm_75 sm_80 sm_86 sm_90.
If you want to use the NVIDIA GeForce RTX 5090 GPU with PyTorch, please check the instructions at https://pytorch.org/get-started/locally/
warnings.warn(incompatible_device_warn.format(device_name, capability, " ".join(arch_list), device_name))
PyTorch version: 2.0.0+cu118
Is debug build: False
CUDA used to build PyTorch: 11.8
ROCM used to build PyTorch: N/A
OS: Ubuntu 22.04.5 LTS (x86_64)
GCC version: (Ubuntu 11.4.0-1ubuntu1~22.04) 11.4.0
Clang version: Could not collect
CMake version: version 3.25.0
Libc version: glibc-2.35
Python version: 3.8.20 | packaged by conda-forge | (default, Sep 30 2024, 17:52:49) [GCC 13.3.0] (64-bit runtime)
Python platform: Linux-6.8.0-52-generic-x86_64-with-glibc2.17
Is CUDA available: True
CUDA runtime version: 11.8.89
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration:
GPU 0: NVIDIA GeForce RTX 5090
GPU 1: NVIDIA GeForce RTX 5090
GPU 2: NVIDIA GeForce RTX 5090
GPU 3: NVIDIA GeForce RTX 5090
Nvidia driver version: 570.86.16
cuDNN version: Probably one of the following:
/usr/lib/x86_64-linux-gnu/libcudnn.so.8.9.6
/usr/lib/x86_64-linux-gnu/libcudnn_adv_infer.so.8.9.6
/usr/lib/x86_64-linux-gnu/libcudnn_adv_train.so.8.9.6
/usr/lib/x86_64-linux-gnu/libcudnn_cnn_infer.so.8.9.6
/usr/lib/x86_64-linux-gnu/libcudnn_cnn_train.so.8.9.6
/usr/lib/x86_64-linux-gnu/libcudnn_ops_infer.so.8.9.6
/usr/lib/x86_64-linux-gnu/libcudnn_ops_train.so.8.9.6
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 43 bits physical, 48 bits virtual
Byte Order: Little Endian
CPU(s): 64
On-line CPU(s) list: 0-63
Vendor ID: AuthenticAMD
Model name: AMD EPYC 7542 32-Core Processor
CPU family: 23
Model: 49
Thread(s) per core: 2
Core(s) per socket: 32
Socket(s): 1
Stepping: 0
Frequency boost: enabled
CPU max MHz: 2900.0000
CPU min MHz: 1500.0000
BogoMIPS: 5800.20
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ht syscall nx mmxext fxsr_opt pdpe1gb rdtscp lm constant_tsc rep_good nopl nonstop_tsc cpuid extd_apicid aperfmperf rapl pni pclmulqdq monitor ssse3 fma cx16 sse4_1 sse4_2 movbe popcnt aes xsave avx f16c rdrand lahf_lm cmp_legacy svm extapic cr8_legacy abm sse4a misalignsse 3dnowprefetch osvw ibs skinit wdt tce topoext perfctr_core perfctr_nb bpext perfctr_llc mwaitx cpb cat_l3 cdp_l3 hw_pstate ssbd mba ibrs ibpb stibp vmmcall fsgsbase bmi1 avx2 smep bmi2 cqm rdt_a rdseed adx smap clflushopt clwb sha_ni xsaveopt xsavec xgetbv1 xsaves cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local clzero irperf xsaveerptr rdpru wbnoinvd amd_ppin arat npt lbrv svm_lock nrip_save tsc_scale vmcb_clean flushbyasid decodeassists pausefilter pfthreshold avic v_vmsave_vmload vgif v_spec_ctrl umip rdpid overflow_recov succor smca sev sev_es
Virtualization: AMD-V
L1d cache: 1 MiB (32 instances)
L1i cache: 1 MiB (32 instances)
L2 cache: 16 MiB (32 instances)
L3 cache: 128 MiB (8 instances)
NUMA node(s): 1
NUMA node0 CPU(s): 0-63
Vulnerability Gather data sampling: Not affected
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Not affected
Vulnerability Reg file data sampling: Not affected
Vulnerability Retbleed: Mitigation; untrained return thunk; SMT enabled with STIBP protection
Vulnerability Spec rstack overflow: Mitigation; Safe RET
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Retpolines; IBPB conditional; STIBP always-on; RSB filling; PBRSB-eIBRS Not affected; BHI Not affected
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Versions of relevant libraries:
[pip3] numpy==1.24.4
[pip3] nvidia-cublas-cu11==11.10.3.66
[pip3] nvidia-cuda-nvrtc-cu11==11.7.99
[pip3] nvidia-cuda-runtime-cu11==11.7.99
[pip3] nvidia-cudnn-cu11==8.5.0.96
[pip3] nvidia-nccl-cu12==2.26.2.post1
[pip3] pytorch3d==0.7.8
[pip3] torch==2.0.0+cu118
[pip3] torch-geometric==2.6.1
[pip3] torchaudio==2.0.1+cu118
[pip3] torchvision==0.15.1+cu118
[pip3] triton==2.0.0
[conda] cudatoolkit 11.8.0 h4ba93d1_13 conda-forge
[conda] nomkl 3.0 0 anaconda
[conda] numpy 1.24.4 pypi_0 pypi
[conda] nvidia-cublas-cu11 11.10.3.66 pypi_0 pypi
[conda] nvidia-cuda-nvrtc-cu11 11.7.99 pypi_0 pypi
[conda] nvidia-cuda-runtime-cu11 11.7.99 pypi_0 pypi
[conda] nvidia-cudnn-cu11 8.5.0.96 pypi_0 pypi
[conda] nvidia-nccl-cu12 2.26.2.post1 pypi_0 pypi
[conda] pytorch3d 0.7.8 dev_0 <develop>
[conda] torch 2.0.0+cu118 pypi_0 pypi
[conda] torch-geometric 2.6.1 pypi_0 pypi
[conda] torchaudio 2.0.1+cu118 pypi_0 pypi
[conda] torchvision 0.15.1+cu118 pypi_0 pypi
[conda] triton 2.0.0 pypi_0 pypi
cc @malfet @seemethere @H-Huang @awgu @wanchaol @fegin @fduwjj @wz337 @wconstab @d4l3k
| true
|
3,015,585,361
|
[cuDNN][SDPA] Fix head-dim 256 condition for SM 10.0
|
eqy
|
closed
|
[
"module: cudnn",
"module: cuda",
"triaged",
"open source",
"Merged",
"ciflow/trunk",
"topic: bug fixes",
"topic: not user facing",
"module: sdpa"
] | 9
|
COLLABORATOR
|
turns out the backward is not supported yet, whoops
cc @csarofeen @ptrblck @xwang233 @msaroufim @jerryzh168
| true
|
3,015,573,050
|
[vec128] Fix fmsub NEON defintion
|
malfet
|
closed
|
[
"module: cpu",
"Merged",
"ciflow/trunk",
"topic: bug fixes",
"release notes: cpu (aarch64)"
] | 5
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #152075
As reported in https://github.com/pytorch/pytorch/issues/149292, according to the manual, `vfmsq_f32` implements `c - a * b` rather than `a * b - c`, so its call must be prefixed with `vnegq_f32`
Also, adjust the tests to use OpMath for FMA computation to avoid accuracy error accumulation due to non-fused multiply-and-add over lower precision dtypes
Note that `Vectorized::fmsub` is not currently instantiated anywhere, so it could safely remain broken
TODO:
- Enable C++ testing on MacOS and/or aarch64 platforms (right now Mac tests are build without C++ tests)
Fixes https://github.com/pytorch/pytorch/issues/149292
cc @jgong5 @mingfeima @XiaobingSuper @sanchitintel @ashokei @jingxu10 @jerryzh168
| true
|
3,015,553,258
|
Cause `ceil_div` to accept values of differing types an upcast to the larger type
|
r-barnes
|
open
|
[
"fb-exported"
] | 3
|
CONTRIBUTOR
|
Test Plan: Sandcastle
Reviewed By: swolchok
Differential Revision: D73550062
| true
|
3,015,545,729
|
[export][function schema] support exporting hop with function schema argument
|
ydwu4
|
closed
|
[
"Merged",
"fx",
"ciflow/inductor",
"release notes: export"
] | 2
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #151067
* #152248
* #152247
* #152246
* #152245
* #152244
* __->__ #152073
* #152072
We need to make function schema proxyable to trace the auto_functionalized hop that takes a function schema as input. The implementation basically follows how we support torchbind objects:
1. upon seeing an untracked function schema arg, we create a constant get_attr node
2. we track the function schema argument in export to support lift/unlift.
3. we need to support serde for function schema. We'll add support for this in follow-up PRs.
However, compared with torchbind objects:
1. we don't need a dynamo implementation, because the function schema is only added as an argument of auto_functionalized when we auto_functionalize a hop. One potential use case is a user re-tracing an exported program with strict mode; since non-strict is the default now, we don't see a use case yet.
2. we don't need an inductor implementation, because the function schema will go away after the auto_functionalized re-inplacing pass.
edit: we greatly simplified (and generalized) the implementation following @zou3519's suggestion of using pytree.register_constant
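A hedged sketch of what the `register_constant` approach looks like (the module path is my assumption, not the exact code in this stack):
```python
import torch
import torch.utils._pytree as pytree

# Treat FunctionSchema values as pytree constants so they can flow through tracing
# (e.g. as arguments of auto_functionalized) without being proxied as tensors.
pytree.register_constant(torch._C.FunctionSchema)
```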
cc @ezyang @SherlockNoMad @EikanWang @jgong5 @wenzhe-nrv
| true
|
3,015,545,632
|
[export][be] better type annotation for lift_constants_pass
|
ydwu4
|
closed
|
[
"Merged",
"ciflow/inductor",
"release notes: export"
] | 2
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #151067
* #152248
* #152247
* #152246
* #152245
* #152244
* #152073
* __->__ #152072
| true
|
3,015,508,051
|
[inductor][BE] Clean up use_mixed_mm and mixed_mm_choice usage inside pytorch
|
henrylhtsang
|
closed
|
[
"fb-exported",
"Merged",
"ciflow/trunk",
"topic: not user facing",
"module: inductor",
"ciflow/inductor"
] | 4
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #152071
Differential Revision: [D73551912](https://our.internmc.facebook.com/intern/diff/D73551912/)
Decided to leave the mixed_mm tests alive.
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
| true
|
3,015,481,487
|
failing to read frames even though the cam is connected
|
zouaoui21
|
closed
|
[] | 3
|
NONE
|
### 🐛 Describe the bug
When I run the code, the URL is correct and it connects to the camera, but it never reads the frames.
For this code:
```python
import cv2

rtsp_url = "..................................."
video = cv2.VideoCapture(rtsp_url)
video.set(cv2.CAP_PROP_BUFFERSIZE, 3)  # Increase buffer to prevent frame drops

if not video.isOpened():
    print("❌ Failed to open RTSP stream.")
else:
    print("✅ Connected! Attempting to read frames...")
    frame_count = 0
    while video.isOpened():
        ret, frame = video.read()
        if not ret or frame is None:
            print(f"❌ Failed to read frame {frame_count}. Stream may be disconnected.")
            break
        frame_count += 1
        print(f"📸 Frame {frame_count} captured.")
        cv2.imshow("RTSP", frame)
        if cv2.waitKey(1) & 0xFF == ord('q'):
            break

video.release()
cv2.destroyAllWindows()
print("✅ Stream closed.")
```
This is the result:
```
✅ Connected! Attempting to read frames...
❌ Failed to read frame 0. Stream may be disconnected.
✅ Stream closed.
```
### Versions
PyTorch version: 2.6.0+cpu
Is debug build: False
CUDA used to build PyTorch: None
ROCM used to build PyTorch: N/A
OS: Microsoft Windows 11 Famille Unilingue (10.0.26100 64 bits)
GCC version: Could not collect
Clang version: Could not collect
CMake version: Could not collect
Libc version: N/A
Python version: 3.12.3 | packaged by conda-forge | (main, Apr 15 2024, 18:20:11) [MSC v.1938 64 bit (AMD64)] (64-bit runtime)
Python platform: Windows-11-10.0.26100-SP0
Is CUDA available: False
CUDA runtime version: No CUDA
CUDA_MODULE_LOADING set to: N/A
GPU models and configuration: No CUDA
Nvidia driver version: No CUDA
cuDNN version: No CUDA
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Name: 12th Gen Intel(R) Core(TM) i7-1255U
Manufacturer: GenuineIntel
Family: 198
Architecture: 9
ProcessorType: 3
DeviceID: CPU0
CurrentClockSpeed: 1700
MaxClockSpeed: 1700
L2CacheSize: 6656
L2CacheSpeed: None
Revision: None
Versions of relevant libraries:
[pip3] flake8==7.0.0
[pip3] mypy==1.11.2
[pip3] mypy-extensions==1.0.0
[pip3] numpy==1.26.4
[pip3] numpydoc==1.7.0
[pip3] torch==2.6.0
[pip3] torchaudio==2.6.0+cpu
[pip3] torchvision==0.21.0
[conda] _anaconda_depends 2024.10 py312_mkl_0
[conda] blas 1.0 mkl
[conda] mkl 2023.1.0 h6b88ed4_46358
[conda] mkl-service 2.4.0 py312h2bbff1b_1
[conda] mkl_fft 1.3.10 py312h827c3e9_0
[conda] mkl_random 1.2.7 py312h0158946_0
[conda] numpy 1.26.4 py312hfd52020_0
[conda] numpy-base 1.26.4 py312h4dde369_0
[conda] numpydoc 1.7.0 py312haa95532_0
[conda] torch 2.6.0 pypi_0 pypi
[conda] torchaudio 2.6.0+cpu pypi_0 pypi
[conda] torchvision 0.21.0 pypi_0 pypi
| true
|
3,015,472,592
|
[Proposal] Drop legacy CUDA support to slim down the wheels
|
NevermindNilas
|
closed
|
[
"open source",
"Merged",
"ciflow/trunk",
"release notes: cuda",
"release notes: build"
] | 24
|
CONTRIBUTOR
|
Proposal to drop legacy CUDA support in order to slim down the Windows wheels.
With the latest 2.7.0 release and the new Blackwell support, we've seen yet another increase in wheel size, going from ~2.5GB with PyTorch 2.6.0 all the way to ~3.1GB with PyTorch 2.7.0 / CUDA 12.8 on Python 3.12, and ~3.3GB on Python 3.13.
Python 3.12, PyTorch 2.7.0, CUDA 12.8

Python 3.13, PyTorch 2.7.0, CUDA 12.8

These CI changes imply removing support for many GPUs that are now about 8 years old, if not older, including the GTX 960M, 950M, 940M, and 930M, as well as some Quadro GPUs from as far back as April 2016, such as the Quadro M500M, per [Nvidia's Documentation](https://developer.nvidia.com/cuda-gpus).
This change would also save on our bandwidth 😅
@seemethere
| true
|
3,015,451,478
|
Compiling attention (SDPA) with nested tensors fails when using DDP
|
mahyarkoy
|
open
|
[
"oncall: distributed",
"triaged",
"module: nestedtensor",
"oncall: pt2",
"module: sdpa"
] | 2
|
NONE
|
### 🐛 Describe the bug
When running the script below:
1. Compiling on a single GPU without DDP works.
2. Running with DDP but without compiling works.
3. But compiling with DDP breaks!
Using dense tensors as input (instead of the nested tensor `n`) works fine in all cases.
To reproduce, run the script below with:
```
torchrun --standalone --nnodes=1 --nproc_per_node=2 compile_bug.py
```
compile_bug.py
```python
import torch
import torch.nn as nn
import torch.nn.functional as F
import torch.nested as nested
import torch.distributed as dist
from torch.nn.parallel import DistributedDataParallel as DDP
import os
torch.manual_seed(0)
torch.set_float32_matmul_precision('high')
@torch.compile ### BREAKS WHEN USING DDP, WORKS FINE WITHOUT DDP
class MultiheadAttention(nn.Module):
def __init__(self, embed_dim, nheads, dropout=0., k_dim=None, v_dim=None):
super().__init__()
self.nheads = nheads
self.dropout = dropout
self.embed_dim = embed_dim
self.k_dim = embed_dim if k_dim is None else k_dim
self.v_dim = embed_dim if v_dim is None else v_dim
self.query_proj = nn.Linear(self.embed_dim, self.k_dim * self.nheads)
self.key_proj = nn.Linear(self.embed_dim, self.k_dim * self.nheads)
self.value_proj = nn.Linear(self.embed_dim, self.v_dim * self.nheads)
self.out_proj = nn.Linear(self.v_dim * self.nheads, self.embed_dim)
def forward(self, query, key, value, is_causal=False):
### (..., embed_dim) -> (..., k_dim or v_dim)
query = self.query_proj(query)
key = self.key_proj(key)
value = self.value_proj(value)
### (N, L_t, k_dim*nheads) -> (N, L_t, nheads, k_dim) -> (N, nheads, L_t, k_dim)
query = query.reshape(query.size(0), -1, self.nheads, self.k_dim).transpose(1, 2)
### (N, L_s, k_dim*nheads) -> (N, L_s, nheads, k_dim) -> (N, nheads, L_s, k_dim)
key = key.reshape(key.size(0), -1, self.nheads, self.k_dim).transpose(1, 2)
### (N, L_s, v_dim*nheads) -> (N, L_s, nheads, v_dim) -> (N, nheads, L_s, v_dim)
value = value.reshape(value.size(0), -1, self.nheads, self.v_dim).transpose(1, 2)
### (N, nheads, L_t, v_dim)
attn_output = F.scaled_dot_product_attention(query, key, value,
dropout_p=self.dropout if self.training else 0.0, is_causal=is_causal)
### (N, nheads, L_t, v_dim) -> (N, L_t, nheads, v_dim) -> (N, L_t, nheads*v_dim)
attn_output = attn_output.transpose(1, 2).reshape(query.size(0), -1, self.nheads*self.v_dim)
### (N, L_t, nheads * v_dim) -> (N, L_t, embed_dim)
attn_output = self.out_proj(attn_output)
return (attn_output,)
## DIST setup
if 'WORLD_SIZE' in os.environ:
backend = 'nccl' if torch.cuda.is_available() else 'gloo'
dist.init_process_group(backend)
### Model setup
embed_dim = 512
num_heads = 8
k_dim = 64
v_dim = 64
dropout = 0.
att_layer = MultiheadAttention(embed_dim, num_heads,
k_dim=k_dim, v_dim=v_dim, dropout=dropout)
if dist.is_initialized():
rank = dist.get_rank()
device = f'cuda:{rank}'
att_layer.to(device)
att_layer = DDP(att_layer, device_ids=[rank], find_unused_parameters=False)
else:
device = 'cuda:0'
att_layer.to(device)
### Run on some data
t = torch.ones(4*512).reshape(4, 512).float()
n = nested.as_nested_tensor([t, t[:1]], layout=torch.jagged).to(device)
na = att_layer(query=n, key=n, value=n)
### Loss and backward
loss = na[0].values().sum()
loss.backward()
print(loss)
```
Error:
```
W0423 23:11:13.139000 916284 /nas/eclairnas01/users/mkhayat/miniconda3/envs/gspy310new/lib/python3.10/site-packages/torch/distributed/run.py:766]
W0423 23:11:13.139000 916284 /nas/eclairnas01/users/mkhayat/miniconda3/envs/gspy310new/lib/python3.10/site-packages/torch/distributed/run.py:766] *****************************************
W0423 23:11:13.139000 916284 /nas/eclairnas01/users/mkhayat/miniconda3/envs/gspy310new/lib/python3.10/site-packages/torch/distributed/run.py:766] Setting OMP_NUM_THREADS environment variable for each process to be 1 in default, to avoid your system being overloaded, please further tune the variable for optimal performance in your application as needed.
W0423 23:11:13.139000 916284 /nas/eclairnas01/users/mkhayat/miniconda3/envs/gspy310new/lib/python3.10/site-packages/torch/distributed/run.py:766] *****************************************
[rank0]: Traceback (most recent call last):
[rank0]: File "/nas/home/mkhayat/projects/sparse_gs/bug_compile2.py", line 89, in <module>
[rank0]: na = att_layer(query=n, key=n, value=n)
[rank0]: File "/nas/home/mkhayat/miniconda3/envs/gspy310new/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1751, in _wrapped_call_impl
[rank0]: return self._call_impl(*args, **kwargs)
[rank0]: File "/nas/home/mkhayat/miniconda3/envs/gspy310new/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1762, in _call_impl
[rank0]: return forward_call(*args, **kwargs)
[rank0]: File "/nas/home/mkhayat/miniconda3/envs/gspy310new/lib/python3.10/site-packages/torch/nn/parallel/distributed.py", line 1637, in forward
[rank0]: else self._run_ddp_forward(*inputs, **kwargs)
[rank0]: File "/nas/home/mkhayat/miniconda3/envs/gspy310new/lib/python3.10/site-packages/torch/nn/parallel/distributed.py", line 1464, in _run_ddp_forward
[rank0]: return self.module(*inputs, **kwargs) # type: ignore[index]
[rank0]: File "/nas/home/mkhayat/miniconda3/envs/gspy310new/lib/python3.10/site-packages/torch/_dynamo/eval_frame.py", line 671, in _fn
[rank0]: raise e.remove_dynamo_frames() from None # see TORCHDYNAMO_VERBOSE=1
[rank0]: File "/nas/home/mkhayat/miniconda3/envs/gspy310new/lib/python3.10/site-packages/torch/_dynamo/eval_frame.py", line 658, in _fn
[rank0]: return fn(*args, **kwargs)
[rank0]: File "/nas/home/mkhayat/miniconda3/envs/gspy310new/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1751, in _wrapped_call_impl
[rank0]: return self._call_impl(*args, **kwargs)
[rank0]: File "/nas/home/mkhayat/miniconda3/envs/gspy310new/lib/python3.10/site-packages/torch/_dynamo/eval_frame.py", line 671, in _fn
[rank0]: raise e.remove_dynamo_frames() from None # see TORCHDYNAMO_VERBOSE=1
[rank0]: File "/nas/home/mkhayat/miniconda3/envs/gspy310new/lib/python3.10/site-packages/torch/_dynamo/eval_frame.py", line 658, in _fn
[rank0]: return fn(*args, **kwargs)
[rank0]: File "/nas/home/mkhayat/miniconda3/envs/gspy310new/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1762, in _call_impl
[rank0]: return forward_call(*args, **kwargs)
[rank0]: File "/nas/home/mkhayat/miniconda3/envs/gspy310new/lib/python3.10/site-packages/torch/_dynamo/convert_frame.py", line 1446, in __call__
[rank0]: return hijacked_callback(
[rank0]: File "/nas/home/mkhayat/miniconda3/envs/gspy310new/lib/python3.10/site-packages/torch/_dynamo/convert_frame.py", line 1233, in __call__
[rank0]: result = self._inner_convert(
[rank0]: File "/nas/home/mkhayat/miniconda3/envs/gspy310new/lib/python3.10/site-packages/torch/_dynamo/convert_frame.py", line 619, in __call__
[rank0]: return _compile(
[rank0]: File "/nas/home/mkhayat/miniconda3/envs/gspy310new/lib/python3.10/site-packages/torch/_dynamo/convert_frame.py", line 1079, in _compile
[rank0]: guarded_code = compile_inner(code, one_graph, hooks, transform)
[rank0]: File "/nas/home/mkhayat/miniconda3/envs/gspy310new/lib/python3.10/site-packages/torch/_utils_internal.py", line 97, in wrapper_function
[rank0]: return function(*args, **kwargs)
[rank0]: File "/nas/home/mkhayat/miniconda3/envs/gspy310new/lib/python3.10/site-packages/torch/_dynamo/convert_frame.py", line 779, in compile_inner
[rank0]: return _compile_inner(code, one_graph, hooks, transform)
[rank0]: File "/nas/home/mkhayat/miniconda3/envs/gspy310new/lib/python3.10/site-packages/torch/_dynamo/convert_frame.py", line 815, in _compile_inner
[rank0]: out_code = transform_code_object(code, transform)
[rank0]: File "/nas/home/mkhayat/miniconda3/envs/gspy310new/lib/python3.10/site-packages/torch/_dynamo/bytecode_transformation.py", line 1422, in transform_code_object
[rank0]: transformations(instructions, code_options)
[rank0]: File "/nas/home/mkhayat/miniconda3/envs/gspy310new/lib/python3.10/site-packages/torch/_dynamo/convert_frame.py", line 264, in _fn
[rank0]: return fn(*args, **kwargs)
[rank0]: File "/nas/home/mkhayat/miniconda3/envs/gspy310new/lib/python3.10/site-packages/torch/_dynamo/convert_frame.py", line 736, in transform
[rank0]: tracer.run()
[rank0]: File "/nas/home/mkhayat/miniconda3/envs/gspy310new/lib/python3.10/site-packages/torch/_dynamo/symbolic_convert.py", line 3491, in run
[rank0]: super().run()
[rank0]: File "/nas/home/mkhayat/miniconda3/envs/gspy310new/lib/python3.10/site-packages/torch/_dynamo/symbolic_convert.py", line 1345, in run
[rank0]: while self.step():
[rank0]: File "/nas/home/mkhayat/miniconda3/envs/gspy310new/lib/python3.10/site-packages/torch/_dynamo/symbolic_convert.py", line 1254, in step
[rank0]: self.dispatch_table[inst.opcode](self, inst)
[rank0]: File "/nas/home/mkhayat/miniconda3/envs/gspy310new/lib/python3.10/site-packages/torch/_dynamo/symbolic_convert.py", line 3692, in RETURN_VALUE
[rank0]: self._return(inst)
[rank0]: File "/nas/home/mkhayat/miniconda3/envs/gspy310new/lib/python3.10/site-packages/torch/_dynamo/symbolic_convert.py", line 3677, in _return
[rank0]: self.output.compile_subgraph(
[rank0]: File "/nas/home/mkhayat/miniconda3/envs/gspy310new/lib/python3.10/site-packages/torch/_dynamo/output_graph.py", line 1199, in compile_subgraph
[rank0]: self.compile_and_call_fx_graph(tx, pass2.graph_output_vars(), root)
[rank0]: File "/nas/home/mkhayat/miniconda3/envs/gspy310new/lib/python3.10/site-packages/torch/_dynamo/output_graph.py", line 1460, in compile_and_call_fx_graph
[rank0]: compiled_fn = self.call_user_compiler(gm)
[rank0]: File "/nas/home/mkhayat/miniconda3/envs/gspy310new/lib/python3.10/site-packages/torch/_dynamo/output_graph.py", line 1512, in call_user_compiler
[rank0]: return self._call_user_compiler(gm)
[rank0]: File "/nas/home/mkhayat/miniconda3/envs/gspy310new/lib/python3.10/site-packages/torch/_dynamo/output_graph.py", line 1569, in _call_user_compiler
[rank0]: raise BackendCompilerFailed(
[rank0]: File "/nas/home/mkhayat/miniconda3/envs/gspy310new/lib/python3.10/site-packages/torch/_dynamo/output_graph.py", line 1544, in _call_user_compiler
[rank0]: compiled_fn = compiler_fn(gm, self.example_inputs())
[rank0]: File "/nas/home/mkhayat/miniconda3/envs/gspy310new/lib/python3.10/site-packages/torch/_dynamo/backends/distributed.py", line 548, in compile_fn
[rank0]: submod_compiler.run(*example_inputs)
[rank0]: File "/nas/home/mkhayat/miniconda3/envs/gspy310new/lib/python3.10/site-packages/torch/fx/interpreter.py", line 171, in run
[rank0]: self.env[node] = self.run_node(node)
[rank0]: File "/nas/home/mkhayat/miniconda3/envs/gspy310new/lib/python3.10/site-packages/torch/_dynamo/backends/distributed.py", line 283, in run_node
[rank0]: compiled_submod_real = self.compile_submod(real_mod, new_args, kwargs)
[rank0]: File "/nas/home/mkhayat/miniconda3/envs/gspy310new/lib/python3.10/site-packages/torch/_dynamo/backends/distributed.py", line 198, in compile_submod
[rank0]: self.compiler(input_mod, args),
[rank0]: File "/nas/home/mkhayat/miniconda3/envs/gspy310new/lib/python3.10/site-packages/torch/_dynamo/repro/after_dynamo.py", line 150, in __call__
[rank0]: compiled_gm = compiler_fn(gm, example_inputs)
[rank0]: File "/nas/home/mkhayat/miniconda3/envs/gspy310new/lib/python3.10/site-packages/torch/__init__.py", line 2355, in __call__
[rank0]: return compile_fx(model_, inputs_, config_patches=self.config)
[rank0]: File "/nas/home/mkhayat/miniconda3/envs/gspy310new/lib/python3.10/site-packages/torch/_inductor/compile_fx.py", line 2169, in compile_fx
[rank0]: return aot_autograd(
[rank0]: File "/nas/home/mkhayat/miniconda3/envs/gspy310new/lib/python3.10/site-packages/torch/_dynamo/backends/common.py", line 106, in __call__
[rank0]: cg = aot_module_simplified(gm, example_inputs, **self.kwargs)
[rank0]: File "/nas/home/mkhayat/miniconda3/envs/gspy310new/lib/python3.10/site-packages/torch/_functorch/aot_autograd.py", line 1165, in aot_module_simplified
[rank0]: compiled_fn = AOTAutogradCache.load(
[rank0]: File "/nas/home/mkhayat/miniconda3/envs/gspy310new/lib/python3.10/site-packages/torch/_functorch/_aot_autograd/autograd_cache.py", line 842, in load
[rank0]: compiled_fn = dispatch_and_compile()
[rank0]: File "/nas/home/mkhayat/miniconda3/envs/gspy310new/lib/python3.10/site-packages/torch/_functorch/aot_autograd.py", line 1150, in dispatch_and_compile
[rank0]: compiled_fn, _ = create_aot_dispatcher_function(
[rank0]: File "/nas/home/mkhayat/miniconda3/envs/gspy310new/lib/python3.10/site-packages/torch/_functorch/aot_autograd.py", line 574, in create_aot_dispatcher_function
[rank0]: return _create_aot_dispatcher_function(
[rank0]: File "/nas/home/mkhayat/miniconda3/envs/gspy310new/lib/python3.10/site-packages/torch/_functorch/aot_autograd.py", line 824, in _create_aot_dispatcher_function
[rank0]: compiled_fn, fw_metadata = compiler_fn(
[rank0]: File "/nas/home/mkhayat/miniconda3/envs/gspy310new/lib/python3.10/site-packages/torch/_functorch/_aot_autograd/jit_compile_runtime_wrappers.py", line 1107, in aot_dispatch_autograd
[rank0]: compiled_fw_func = aot_config.fw_compiler(fw_module, adjusted_flat_args)
[rank0]: File "/nas/home/mkhayat/miniconda3/envs/gspy310new/lib/python3.10/site-packages/torch/_functorch/aot_autograd.py", line 483, in __call__
[rank0]: return self.compiler_fn(gm, example_inputs)
[rank0]: File "/nas/home/mkhayat/miniconda3/envs/gspy310new/lib/python3.10/site-packages/torch/_inductor/compile_fx.py", line 2016, in fw_compiler_base
[rank0]: return inner_compile(
[rank0]: File "/nas/home/mkhayat/miniconda3/envs/gspy310new/lib/python3.10/site-packages/torch/_inductor/compile_fx.py", line 633, in compile_fx_inner
[rank0]: return wrap_compiler_debug(_compile_fx_inner, compiler_name="inductor")(
[rank0]: File "/nas/home/mkhayat/miniconda3/envs/gspy310new/lib/python3.10/site-packages/torch/_dynamo/repro/after_aot.py", line 124, in debug_wrapper
[rank0]: inner_compiled_fn = compiler_fn(gm, example_inputs)
[rank0]: File "/nas/home/mkhayat/miniconda3/envs/gspy310new/lib/python3.10/site-packages/torch/_inductor/compile_fx.py", line 840, in _compile_fx_inner
[rank0]: compiled_graph.post_compile(example_inputs, constants, graph_kwargs)
[rank0]: File "/nas/home/mkhayat/miniconda3/envs/gspy310new/lib/python3.10/site-packages/torch/_inductor/output_code.py", line 578, in post_compile
[rank0]: set_tracing_context_output_strides(example_inputs, self)
[rank0]: File "/nas/home/mkhayat/miniconda3/envs/gspy310new/lib/python3.10/site-packages/torch/_inductor/utils.py", line 2526, in set_tracing_context_output_strides
[rank0]: tuple(map_expr(e) for e in exprs) # type: ignore[misc]
[rank0]: File "/nas/home/mkhayat/miniconda3/envs/gspy310new/lib/python3.10/site-packages/torch/_inductor/utils.py", line 2526, in <genexpr>
[rank0]: tuple(map_expr(e) for e in exprs) # type: ignore[misc]
[rank0]: File "/nas/home/mkhayat/miniconda3/envs/gspy310new/lib/python3.10/site-packages/torch/_inductor/utils.py", line 2522, in map_expr
[rank0]: return shape_env.deserialize_symexpr(e)
[rank0]: File "/nas/home/mkhayat/miniconda3/envs/gspy310new/lib/python3.10/site-packages/torch/fx/experimental/symbolic_shapes.py", line 5569, in deserialize_symexpr
[rank0]: args = {
[rank0]: File "/nas/home/mkhayat/miniconda3/envs/gspy310new/lib/python3.10/site-packages/torch/fx/experimental/symbolic_shapes.py", line 5570, in <dictcomp>
[rank0]: str(e): SymInt(SymNode(e, self, int, int(val), fx_node=None))
[rank0]: File "/nas/home/mkhayat/miniconda3/envs/gspy310new/lib/python3.10/site-packages/sympy/core/expr.py", line 307, in __int__
[rank0]: raise TypeError("Cannot convert symbols to int")
[rank0]: torch._dynamo.exc.BackendCompilerFailed: backend='compile_fn' raised:
[rank0]: TypeError: Cannot convert symbols to int
[rank0]: While executing %submod_0 : [num_users=1] = call_module[target=submod_0](args = (%l_query_, %s83, %l_self_modules_query_proj_parameters_weight_, %l_self_modules_query_proj_parameters_bias_, %l_self_modules_key_proj_parameters_weight_, %l_self_modules_key_proj_parameters_bias_, %l_self_modules_value_proj_parameters_weight_, %l_self_modules_value_proj_parameters_bias_), kwargs = {})
[rank0]: GraphModule: class GraphModule(torch.nn.Module):
[rank0]: def forward(self, L_self_modules_query_proj_parameters_weight_: "f32[512, 512][512, 1]", L_self_modules_query_proj_parameters_bias_: "f32[512][1]", s83: "Sym(s83)", L_query_: "f32[2, s83, 512][512*s83, 512, 1]", L_self_modules_key_proj_parameters_weight_: "f32[512, 512][512, 1]", L_self_modules_key_proj_parameters_bias_: "f32[512][1]", L_self_modules_value_proj_parameters_weight_: "f32[512, 512][512, 1]", L_self_modules_value_proj_parameters_bias_: "f32[512][1]", L_self_modules_out_proj_parameters_weight_: "f32[512, 512][512, 1]", L_self_modules_out_proj_parameters_bias_: "f32[512][1]"):
[rank0]: l_self_modules_query_proj_parameters_weight_ = L_self_modules_query_proj_parameters_weight_
[rank0]: l_self_modules_query_proj_parameters_bias_ = L_self_modules_query_proj_parameters_bias_
[rank0]: l_query_ = L_query_
[rank0]: l_self_modules_key_proj_parameters_weight_ = L_self_modules_key_proj_parameters_weight_
[rank0]: l_self_modules_key_proj_parameters_bias_ = L_self_modules_key_proj_parameters_bias_
[rank0]: l_self_modules_value_proj_parameters_weight_ = L_self_modules_value_proj_parameters_weight_
[rank0]: l_self_modules_value_proj_parameters_bias_ = L_self_modules_value_proj_parameters_bias_
[rank0]: l_self_modules_out_proj_parameters_weight_ = L_self_modules_out_proj_parameters_weight_
[rank0]: l_self_modules_out_proj_parameters_bias_ = L_self_modules_out_proj_parameters_bias_
[rank0]:
[rank0]: # No stacktrace found for following nodes
[rank0]: submod_0 = self.submod_0(l_query_, s83, l_self_modules_query_proj_parameters_weight_, l_self_modules_query_proj_parameters_bias_, l_self_modules_key_proj_parameters_weight_, l_self_modules_key_proj_parameters_bias_, l_self_modules_value_proj_parameters_weight_, l_self_modules_value_proj_parameters_bias_); l_query_ = l_self_modules_query_proj_parameters_weight_ = l_self_modules_query_proj_parameters_bias_ = l_self_modules_key_proj_parameters_weight_ = l_self_modules_key_proj_parameters_bias_ = l_self_modules_value_proj_parameters_weight_ = l_self_modules_value_proj_parameters_bias_ = None
[rank0]: submod_1 = self.submod_1(submod_0, s83, l_self_modules_out_proj_parameters_weight_, l_self_modules_out_proj_parameters_bias_); submod_0 = s83 = l_self_modules_out_proj_parameters_weight_ = l_self_modules_out_proj_parameters_bias_ = None
[rank0]: return (submod_1,)
[rank0]:
[rank0]: class submod_0(torch.nn.Module):
[rank0]: def forward(self, l_query_: "f32[2, s83, 512][512*s83, 512, 1]", s83: "Sym(s83)", l_self_modules_query_proj_parameters_weight_: "f32[512, 512][512, 1]", l_self_modules_query_proj_parameters_bias_: "f32[512][1]", l_self_modules_key_proj_parameters_weight_: "f32[512, 512][512, 1]", l_self_modules_key_proj_parameters_bias_: "f32[512][1]", l_self_modules_value_proj_parameters_weight_: "f32[512, 512][512, 1]", l_self_modules_value_proj_parameters_bias_: "f32[512][1]"):
[rank0]: # File: /nas/home/mkhayat/projects/sparse_gs/bug_compile2.py:39 in forward, code: query = self.query_proj(query)
[rank0]: linear: "f32[2, s83, 512][512*s83, 512, 1]" = torch._C._nn.linear(l_query_, l_self_modules_query_proj_parameters_weight_, l_self_modules_query_proj_parameters_bias_); l_self_modules_query_proj_parameters_weight_ = l_self_modules_query_proj_parameters_bias_ = None
[rank0]:
[rank0]: # File: /nas/home/mkhayat/projects/sparse_gs/bug_compile2.py:40 in forward, code: key = self.key_proj(key)
[rank0]: linear_1: "f32[2, s83, 512][512*s83, 512, 1]" = torch._C._nn.linear(l_query_, l_self_modules_key_proj_parameters_weight_, l_self_modules_key_proj_parameters_bias_); l_self_modules_key_proj_parameters_weight_ = l_self_modules_key_proj_parameters_bias_ = None
[rank0]:
[rank0]: # File: /nas/home/mkhayat/projects/sparse_gs/bug_compile2.py:41 in forward, code: value = self.value_proj(value)
[rank0]: linear_2: "f32[2, s83, 512][512*s83, 512, 1]" = torch._C._nn.linear(l_query_, l_self_modules_value_proj_parameters_weight_, l_self_modules_value_proj_parameters_bias_); l_query_ = l_self_modules_value_proj_parameters_weight_ = l_self_modules_value_proj_parameters_bias_ = None
[rank0]:
[rank0]: # File: /nas/home/mkhayat/projects/sparse_gs/bug_compile2.py:44 in forward, code: query = query.reshape(query.size(0), -1, self.nheads, self.k_dim).transpose(1, 2)
[rank0]: reshape: "f32[2, s83, 8, 64][512*s83, 512, 64, 1]" = linear.reshape(2, -1, 8, 64); linear = None
[rank0]: transpose: "f32[2, 8, s83, 64][512*s83, 64, 512, 1]" = reshape.transpose(1, 2); reshape = None
[rank0]:
[rank0]: # File: /nas/home/mkhayat/projects/sparse_gs/bug_compile2.py:47 in forward, code: key = key.reshape(key.size(0), -1, self.nheads, self.k_dim).transpose(1, 2)
[rank0]: reshape_1: "f32[2, s83, 8, 64][512*s83, 512, 64, 1]" = linear_1.reshape(2, -1, 8, 64); linear_1 = None
[rank0]: transpose_1: "f32[2, 8, s83, 64][512*s83, 64, 512, 1]" = reshape_1.transpose(1, 2); reshape_1 = None
[rank0]:
[rank0]: # File: /nas/home/mkhayat/projects/sparse_gs/bug_compile2.py:50 in forward, code: value = value.reshape(value.size(0), -1, self.nheads, self.v_dim).transpose(1, 2)
[rank0]: reshape_2: "f32[2, s83, 8, 64][512*s83, 512, 64, 1]" = linear_2.reshape(2, -1, 8, 64); linear_2 = None
[rank0]: transpose_2: "f32[2, 8, s83, 64][512*s83, 64, 512, 1]" = reshape_2.transpose(1, 2); reshape_2 = None
[rank0]:
[rank0]: # File: /nas/home/mkhayat/projects/sparse_gs/bug_compile2.py:53 in forward, code: attn_output = F.scaled_dot_product_attention(query, key, value,
[rank0]: scaled_dot_product_attention: "f32[2, 8, s83, 64][512*s83, 64, 512, 1]" = torch._C._nn.scaled_dot_product_attention(transpose, transpose_1, transpose_2, dropout_p = 0.0, is_causal = False); transpose = transpose_1 = transpose_2 = None
[rank0]:
[rank0]: # File: /nas/home/mkhayat/projects/sparse_gs/bug_compile2.py:56 in forward, code: attn_output = attn_output.transpose(1, 2).reshape(query.size(0), -1, self.nheads*self.v_dim)
[rank0]: transpose_3: "f32[2, s83, 8, 64][512*s83, 512, 64, 1]" = scaled_dot_product_attention.transpose(1, 2); scaled_dot_product_attention = None
[rank0]: reshape_3: "f32[2, s83, 512][512*s83, 512, 1]" = transpose_3.reshape(2, -1, 512); transpose_3 = None
[rank0]: return (reshape_3,)
[rank0]:
[rank0]: class submod_1(torch.nn.Module):
[rank0]: def forward(self, attn_output_1: "f32[2, s83, 512][512*s83, 512, 1]", s83: "Sym(s83)", l_self_modules_out_proj_parameters_weight_: "f32[512, 512][512, 1]", l_self_modules_out_proj_parameters_bias_: "f32[512][1]"):
[rank0]: # File: /nas/home/mkhayat/projects/sparse_gs/bug_compile2.py:59 in forward, code: attn_output = self.out_proj(attn_output)
[rank0]: linear: "f32[2, s83, 512][512*s83, 512, 1]" = torch._C._nn.linear(attn_output_1, l_self_modules_out_proj_parameters_weight_, l_self_modules_out_proj_parameters_bias_); attn_output_1 = l_self_modules_out_proj_parameters_weight_ = l_self_modules_out_proj_parameters_bias_ = None
[rank0]: return linear
[rank0]:
[rank0]: Original traceback:
[rank0]: None
[rank1]: Traceback (most recent call last):
[rank1]: File "/nas/home/mkhayat/projects/sparse_gs/bug_compile2.py", line 89, in <module>
[rank1]: na = att_layer(query=n, key=n, value=n)
[rank1]: File "/nas/home/mkhayat/miniconda3/envs/gspy310new/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1751, in _wrapped_call_impl
[rank1]: return self._call_impl(*args, **kwargs)
[rank1]: File "/nas/home/mkhayat/miniconda3/envs/gspy310new/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1762, in _call_impl
[rank1]: return forward_call(*args, **kwargs)
[rank1]: File "/nas/home/mkhayat/miniconda3/envs/gspy310new/lib/python3.10/site-packages/torch/nn/parallel/distributed.py", line 1637, in forward
[rank1]: else self._run_ddp_forward(*inputs, **kwargs)
[rank1]: File "/nas/home/mkhayat/miniconda3/envs/gspy310new/lib/python3.10/site-packages/torch/nn/parallel/distributed.py", line 1464, in _run_ddp_forward
[rank1]: return self.module(*inputs, **kwargs) # type: ignore[index]
[rank1]: File "/nas/home/mkhayat/miniconda3/envs/gspy310new/lib/python3.10/site-packages/torch/_dynamo/eval_frame.py", line 671, in _fn
[rank1]: raise e.remove_dynamo_frames() from None # see TORCHDYNAMO_VERBOSE=1
[rank1]: File "/nas/home/mkhayat/miniconda3/envs/gspy310new/lib/python3.10/site-packages/torch/_dynamo/eval_frame.py", line 658, in _fn
[rank1]: return fn(*args, **kwargs)
[rank1]: File "/nas/home/mkhayat/miniconda3/envs/gspy310new/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1751, in _wrapped_call_impl
[rank1]: return self._call_impl(*args, **kwargs)
[rank1]: File "/nas/home/mkhayat/miniconda3/envs/gspy310new/lib/python3.10/site-packages/torch/_dynamo/eval_frame.py", line 671, in _fn
[rank1]: raise e.remove_dynamo_frames() from None # see TORCHDYNAMO_VERBOSE=1
[rank1]: File "/nas/home/mkhayat/miniconda3/envs/gspy310new/lib/python3.10/site-packages/torch/_dynamo/eval_frame.py", line 658, in _fn
[rank1]: return fn(*args, **kwargs)
[rank1]: File "/nas/home/mkhayat/miniconda3/envs/gspy310new/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1762, in _call_impl
[rank1]: return forward_call(*args, **kwargs)
[rank1]: File "/nas/home/mkhayat/miniconda3/envs/gspy310new/lib/python3.10/site-packages/torch/_dynamo/convert_frame.py", line 1446, in __call__
[rank1]: return hijacked_callback(
[rank1]: File "/nas/home/mkhayat/miniconda3/envs/gspy310new/lib/python3.10/site-packages/torch/_dynamo/convert_frame.py", line 1233, in __call__
[rank1]: result = self._inner_convert(
[rank1]: File "/nas/home/mkhayat/miniconda3/envs/gspy310new/lib/python3.10/site-packages/torch/_dynamo/convert_frame.py", line 619, in __call__
[rank1]: return _compile(
[rank1]: File "/nas/home/mkhayat/miniconda3/envs/gspy310new/lib/python3.10/site-packages/torch/_dynamo/convert_frame.py", line 1079, in _compile
[rank1]: guarded_code = compile_inner(code, one_graph, hooks, transform)
[rank1]: File "/nas/home/mkhayat/miniconda3/envs/gspy310new/lib/python3.10/site-packages/torch/_utils_internal.py", line 97, in wrapper_function
[rank1]: return function(*args, **kwargs)
[rank1]: File "/nas/home/mkhayat/miniconda3/envs/gspy310new/lib/python3.10/site-packages/torch/_dynamo/convert_frame.py", line 779, in compile_inner
[rank1]: return _compile_inner(code, one_graph, hooks, transform)
[rank1]: File "/nas/home/mkhayat/miniconda3/envs/gspy310new/lib/python3.10/site-packages/torch/_dynamo/convert_frame.py", line 815, in _compile_inner
[rank1]: out_code = transform_code_object(code, transform)
[rank1]: File "/nas/home/mkhayat/miniconda3/envs/gspy310new/lib/python3.10/site-packages/torch/_dynamo/bytecode_transformation.py", line 1422, in transform_code_object
[rank1]: transformations(instructions, code_options)
[rank1]: File "/nas/home/mkhayat/miniconda3/envs/gspy310new/lib/python3.10/site-packages/torch/_dynamo/convert_frame.py", line 264, in _fn
[rank1]: return fn(*args, **kwargs)
[rank1]: File "/nas/home/mkhayat/miniconda3/envs/gspy310new/lib/python3.10/site-packages/torch/_dynamo/convert_frame.py", line 736, in transform
[rank1]: tracer.run()
[rank1]: File "/nas/home/mkhayat/miniconda3/envs/gspy310new/lib/python3.10/site-packages/torch/_dynamo/symbolic_convert.py", line 3491, in run
[rank1]: super().run()
[rank1]: File "/nas/home/mkhayat/miniconda3/envs/gspy310new/lib/python3.10/site-packages/torch/_dynamo/symbolic_convert.py", line 1345, in run
[rank1]: while self.step():
[rank1]: File "/nas/home/mkhayat/miniconda3/envs/gspy310new/lib/python3.10/site-packages/torch/_dynamo/symbolic_convert.py", line 1254, in step
[rank1]: self.dispatch_table[inst.opcode](self, inst)
[rank1]: File "/nas/home/mkhayat/miniconda3/envs/gspy310new/lib/python3.10/site-packages/torch/_dynamo/symbolic_convert.py", line 3692, in RETURN_VALUE
[rank1]: self._return(inst)
[rank1]: File "/nas/home/mkhayat/miniconda3/envs/gspy310new/lib/python3.10/site-packages/torch/_dynamo/symbolic_convert.py", line 3677, in _return
[rank1]: self.output.compile_subgraph(
[rank1]: File "/nas/home/mkhayat/miniconda3/envs/gspy310new/lib/python3.10/site-packages/torch/_dynamo/output_graph.py", line 1199, in compile_subgraph
[rank1]: self.compile_and_call_fx_graph(tx, pass2.graph_output_vars(), root)
[rank1]: File "/nas/home/mkhayat/miniconda3/envs/gspy310new/lib/python3.10/site-packages/torch/_dynamo/output_graph.py", line 1460, in compile_and_call_fx_graph
[rank1]: compiled_fn = self.call_user_compiler(gm)
[rank1]: File "/nas/home/mkhayat/miniconda3/envs/gspy310new/lib/python3.10/site-packages/torch/_dynamo/output_graph.py", line 1512, in call_user_compiler
[rank1]: return self._call_user_compiler(gm)
[rank1]: File "/nas/home/mkhayat/miniconda3/envs/gspy310new/lib/python3.10/site-packages/torch/_dynamo/output_graph.py", line 1569, in _call_user_compiler
[rank1]: raise BackendCompilerFailed(
[rank1]: File "/nas/home/mkhayat/miniconda3/envs/gspy310new/lib/python3.10/site-packages/torch/_dynamo/output_graph.py", line 1544, in _call_user_compiler
[rank1]: compiled_fn = compiler_fn(gm, self.example_inputs())
[rank1]: File "/nas/home/mkhayat/miniconda3/envs/gspy310new/lib/python3.10/site-packages/torch/_dynamo/backends/distributed.py", line 548, in compile_fn
[rank1]: submod_compiler.run(*example_inputs)
[rank1]: File "/nas/home/mkhayat/miniconda3/envs/gspy310new/lib/python3.10/site-packages/torch/fx/interpreter.py", line 171, in run
[rank1]: self.env[node] = self.run_node(node)
[rank1]: File "/nas/home/mkhayat/miniconda3/envs/gspy310new/lib/python3.10/site-packages/torch/_dynamo/backends/distributed.py", line 283, in run_node
[rank1]: compiled_submod_real = self.compile_submod(real_mod, new_args, kwargs)
[rank1]: File "/nas/home/mkhayat/miniconda3/envs/gspy310new/lib/python3.10/site-packages/torch/_dynamo/backends/distributed.py", line 198, in compile_submod
[rank1]: self.compiler(input_mod, args),
[rank1]: File "/nas/home/mkhayat/miniconda3/envs/gspy310new/lib/python3.10/site-packages/torch/_dynamo/repro/after_dynamo.py", line 150, in __call__
[rank1]: compiled_gm = compiler_fn(gm, example_inputs)
[rank1]: File "/nas/home/mkhayat/miniconda3/envs/gspy310new/lib/python3.10/site-packages/torch/__init__.py", line 2355, in __call__
[rank1]: return compile_fx(model_, inputs_, config_patches=self.config)
[rank1]: File "/nas/home/mkhayat/miniconda3/envs/gspy310new/lib/python3.10/site-packages/torch/_inductor/compile_fx.py", line 2169, in compile_fx
[rank1]: return aot_autograd(
[rank1]: File "/nas/home/mkhayat/miniconda3/envs/gspy310new/lib/python3.10/site-packages/torch/_dynamo/backends/common.py", line 106, in __call__
[rank1]: cg = aot_module_simplified(gm, example_inputs, **self.kwargs)
[rank1]: File "/nas/home/mkhayat/miniconda3/envs/gspy310new/lib/python3.10/site-packages/torch/_functorch/aot_autograd.py", line 1165, in aot_module_simplified
[rank1]: compiled_fn = AOTAutogradCache.load(
[rank1]: File "/nas/home/mkhayat/miniconda3/envs/gspy310new/lib/python3.10/site-packages/torch/_functorch/_aot_autograd/autograd_cache.py", line 842, in load
[rank1]: compiled_fn = dispatch_and_compile()
[rank1]: File "/nas/home/mkhayat/miniconda3/envs/gspy310new/lib/python3.10/site-packages/torch/_functorch/aot_autograd.py", line 1150, in dispatch_and_compile
[rank1]: compiled_fn, _ = create_aot_dispatcher_function(
[rank1]: File "/nas/home/mkhayat/miniconda3/envs/gspy310new/lib/python3.10/site-packages/torch/_functorch/aot_autograd.py", line 574, in create_aot_dispatcher_function
[rank1]: return _create_aot_dispatcher_function(
[rank1]: File "/nas/home/mkhayat/miniconda3/envs/gspy310new/lib/python3.10/site-packages/torch/_functorch/aot_autograd.py", line 824, in _create_aot_dispatcher_function
[rank1]: compiled_fn, fw_metadata = compiler_fn(
[rank1]: File "/nas/home/mkhayat/miniconda3/envs/gspy310new/lib/python3.10/site-packages/torch/_functorch/_aot_autograd/jit_compile_runtime_wrappers.py", line 1107, in aot_dispatch_autograd
[rank1]: compiled_fw_func = aot_config.fw_compiler(fw_module, adjusted_flat_args)
[rank1]: File "/nas/home/mkhayat/miniconda3/envs/gspy310new/lib/python3.10/site-packages/torch/_functorch/aot_autograd.py", line 483, in __call__
[rank1]: return self.compiler_fn(gm, example_inputs)
[rank1]: File "/nas/home/mkhayat/miniconda3/envs/gspy310new/lib/python3.10/site-packages/torch/_inductor/compile_fx.py", line 2016, in fw_compiler_base
[rank1]: return inner_compile(
[rank1]: File "/nas/home/mkhayat/miniconda3/envs/gspy310new/lib/python3.10/site-packages/torch/_inductor/compile_fx.py", line 633, in compile_fx_inner
[rank1]: return wrap_compiler_debug(_compile_fx_inner, compiler_name="inductor")(
[rank1]: File "/nas/home/mkhayat/miniconda3/envs/gspy310new/lib/python3.10/site-packages/torch/_dynamo/repro/after_aot.py", line 124, in debug_wrapper
[rank1]: inner_compiled_fn = compiler_fn(gm, example_inputs)
[rank1]: File "/nas/home/mkhayat/miniconda3/envs/gspy310new/lib/python3.10/site-packages/torch/_inductor/compile_fx.py", line 840, in _compile_fx_inner
[rank1]: compiled_graph.post_compile(example_inputs, constants, graph_kwargs)
[rank1]: File "/nas/home/mkhayat/miniconda3/envs/gspy310new/lib/python3.10/site-packages/torch/_inductor/output_code.py", line 578, in post_compile
[rank1]: set_tracing_context_output_strides(example_inputs, self)
[rank1]: File "/nas/home/mkhayat/miniconda3/envs/gspy310new/lib/python3.10/site-packages/torch/_inductor/utils.py", line 2526, in set_tracing_context_output_strides
[rank1]: tuple(map_expr(e) for e in exprs) # type: ignore[misc]
[rank1]: File "/nas/home/mkhayat/miniconda3/envs/gspy310new/lib/python3.10/site-packages/torch/_inductor/utils.py", line 2526, in <genexpr>
[rank1]: tuple(map_expr(e) for e in exprs) # type: ignore[misc]
[rank1]: File "/nas/home/mkhayat/miniconda3/envs/gspy310new/lib/python3.10/site-packages/torch/_inductor/utils.py", line 2522, in map_expr
[rank1]: return shape_env.deserialize_symexpr(e)
[rank1]: File "/nas/home/mkhayat/miniconda3/envs/gspy310new/lib/python3.10/site-packages/torch/fx/experimental/symbolic_shapes.py", line 5569, in deserialize_symexpr
[rank1]: args = {
[rank1]: File "/nas/home/mkhayat/miniconda3/envs/gspy310new/lib/python3.10/site-packages/torch/fx/experimental/symbolic_shapes.py", line 5570, in <dictcomp>
[rank1]: str(e): SymInt(SymNode(e, self, int, int(val), fx_node=None))
[rank1]: File "/nas/home/mkhayat/miniconda3/envs/gspy310new/lib/python3.10/site-packages/sympy/core/expr.py", line 307, in __int__
[rank1]: raise TypeError("Cannot convert symbols to int")
[rank1]: torch._dynamo.exc.BackendCompilerFailed: backend='compile_fn' raised:
[rank1]: TypeError: Cannot convert symbols to int
[rank1]: While executing %submod_0 : [num_users=1] = call_module[target=submod_0](args = (%l_query_, %s83, %l_self_modules_query_proj_parameters_weight_, %l_self_modules_query_proj_parameters_bias_, %l_self_modules_key_proj_parameters_weight_, %l_self_modules_key_proj_parameters_bias_, %l_self_modules_value_proj_parameters_weight_, %l_self_modules_value_proj_parameters_bias_), kwargs = {})
[rank1]: GraphModule: class GraphModule(torch.nn.Module):
[rank1]: def forward(self, L_self_modules_query_proj_parameters_weight_: "f32[512, 512][512, 1]", L_self_modules_query_proj_parameters_bias_: "f32[512][1]", s83: "Sym(s83)", L_query_: "f32[2, s83, 512][512*s83, 512, 1]", L_self_modules_key_proj_parameters_weight_: "f32[512, 512][512, 1]", L_self_modules_key_proj_parameters_bias_: "f32[512][1]", L_self_modules_value_proj_parameters_weight_: "f32[512, 512][512, 1]", L_self_modules_value_proj_parameters_bias_: "f32[512][1]", L_self_modules_out_proj_parameters_weight_: "f32[512, 512][512, 1]", L_self_modules_out_proj_parameters_bias_: "f32[512][1]"):
[rank1]: l_self_modules_query_proj_parameters_weight_ = L_self_modules_query_proj_parameters_weight_
[rank1]: l_self_modules_query_proj_parameters_bias_ = L_self_modules_query_proj_parameters_bias_
[rank1]: l_query_ = L_query_
[rank1]: l_self_modules_key_proj_parameters_weight_ = L_self_modules_key_proj_parameters_weight_
[rank1]: l_self_modules_key_proj_parameters_bias_ = L_self_modules_key_proj_parameters_bias_
[rank1]: l_self_modules_value_proj_parameters_weight_ = L_self_modules_value_proj_parameters_weight_
[rank1]: l_self_modules_value_proj_parameters_bias_ = L_self_modules_value_proj_parameters_bias_
[rank1]: l_self_modules_out_proj_parameters_weight_ = L_self_modules_out_proj_parameters_weight_
[rank1]: l_self_modules_out_proj_parameters_bias_ = L_self_modules_out_proj_parameters_bias_
[rank1]:
[rank1]: # No stacktrace found for following nodes
[rank1]: submod_0 = self.submod_0(l_query_, s83, l_self_modules_query_proj_parameters_weight_, l_self_modules_query_proj_parameters_bias_, l_self_modules_key_proj_parameters_weight_, l_self_modules_key_proj_parameters_bias_, l_self_modules_value_proj_parameters_weight_, l_self_modules_value_proj_parameters_bias_); l_query_ = l_self_modules_query_proj_parameters_weight_ = l_self_modules_query_proj_parameters_bias_ = l_self_modules_key_proj_parameters_weight_ = l_self_modules_key_proj_parameters_bias_ = l_self_modules_value_proj_parameters_weight_ = l_self_modules_value_proj_parameters_bias_ = None
[rank1]: submod_1 = self.submod_1(submod_0, s83, l_self_modules_out_proj_parameters_weight_, l_self_modules_out_proj_parameters_bias_); submod_0 = s83 = l_self_modules_out_proj_parameters_weight_ = l_self_modules_out_proj_parameters_bias_ = None
[rank1]: return (submod_1,)
[rank1]:
[rank1]: class submod_0(torch.nn.Module):
[rank1]: def forward(self, l_query_: "f32[2, s83, 512][512*s83, 512, 1]", s83: "Sym(s83)", l_self_modules_query_proj_parameters_weight_: "f32[512, 512][512, 1]", l_self_modules_query_proj_parameters_bias_: "f32[512][1]", l_self_modules_key_proj_parameters_weight_: "f32[512, 512][512, 1]", l_self_modules_key_proj_parameters_bias_: "f32[512][1]", l_self_modules_value_proj_parameters_weight_: "f32[512, 512][512, 1]", l_self_modules_value_proj_parameters_bias_: "f32[512][1]"):
[rank1]: # File: /nas/home/mkhayat/projects/sparse_gs/bug_compile2.py:39 in forward, code: query = self.query_proj(query)
[rank1]: linear: "f32[2, s83, 512][512*s83, 512, 1]" = torch._C._nn.linear(l_query_, l_self_modules_query_proj_parameters_weight_, l_self_modules_query_proj_parameters_bias_); l_self_modules_query_proj_parameters_weight_ = l_self_modules_query_proj_parameters_bias_ = None
[rank1]:
[rank1]: # File: /nas/home/mkhayat/projects/sparse_gs/bug_compile2.py:40 in forward, code: key = self.key_proj(key)
[rank1]: linear_1: "f32[2, s83, 512][512*s83, 512, 1]" = torch._C._nn.linear(l_query_, l_self_modules_key_proj_parameters_weight_, l_self_modules_key_proj_parameters_bias_); l_self_modules_key_proj_parameters_weight_ = l_self_modules_key_proj_parameters_bias_ = None
[rank1]:
[rank1]: # File: /nas/home/mkhayat/projects/sparse_gs/bug_compile2.py:41 in forward, code: value = self.value_proj(value)
[rank1]: linear_2: "f32[2, s83, 512][512*s83, 512, 1]" = torch._C._nn.linear(l_query_, l_self_modules_value_proj_parameters_weight_, l_self_modules_value_proj_parameters_bias_); l_query_ = l_self_modules_value_proj_parameters_weight_ = l_self_modules_value_proj_parameters_bias_ = None
[rank1]:
[rank1]: # File: /nas/home/mkhayat/projects/sparse_gs/bug_compile2.py:44 in forward, code: query = query.reshape(query.size(0), -1, self.nheads, self.k_dim).transpose(1, 2)
[rank1]: reshape: "f32[2, s83, 8, 64][512*s83, 512, 64, 1]" = linear.reshape(2, -1, 8, 64); linear = None
[rank1]: transpose: "f32[2, 8, s83, 64][512*s83, 64, 512, 1]" = reshape.transpose(1, 2); reshape = None
[rank1]:
[rank1]: # File: /nas/home/mkhayat/projects/sparse_gs/bug_compile2.py:47 in forward, code: key = key.reshape(key.size(0), -1, self.nheads, self.k_dim).transpose(1, 2)
[rank1]: reshape_1: "f32[2, s83, 8, 64][512*s83, 512, 64, 1]" = linear_1.reshape(2, -1, 8, 64); linear_1 = None
[rank1]: transpose_1: "f32[2, 8, s83, 64][512*s83, 64, 512, 1]" = reshape_1.transpose(1, 2); reshape_1 = None
[rank1]:
[rank1]: # File: /nas/home/mkhayat/projects/sparse_gs/bug_compile2.py:50 in forward, code: value = value.reshape(value.size(0), -1, self.nheads, self.v_dim).transpose(1, 2)
[rank1]: reshape_2: "f32[2, s83, 8, 64][512*s83, 512, 64, 1]" = linear_2.reshape(2, -1, 8, 64); linear_2 = None
[rank1]: transpose_2: "f32[2, 8, s83, 64][512*s83, 64, 512, 1]" = reshape_2.transpose(1, 2); reshape_2 = None
[rank1]:
[rank1]: # File: /nas/home/mkhayat/projects/sparse_gs/bug_compile2.py:53 in forward, code: attn_output = F.scaled_dot_product_attention(query, key, value,
[rank1]: scaled_dot_product_attention: "f32[2, 8, s83, 64][512*s83, 64, 512, 1]" = torch._C._nn.scaled_dot_product_attention(transpose, transpose_1, transpose_2, dropout_p = 0.0, is_causal = False); transpose = transpose_1 = transpose_2 = None
[rank1]:
[rank1]: # File: /nas/home/mkhayat/projects/sparse_gs/bug_compile2.py:56 in forward, code: attn_output = attn_output.transpose(1, 2).reshape(query.size(0), -1, self.nheads*self.v_dim)
[rank1]: transpose_3: "f32[2, s83, 8, 64][512*s83, 512, 64, 1]" = scaled_dot_product_attention.transpose(1, 2); scaled_dot_product_attention = None
[rank1]: reshape_3: "f32[2, s83, 512][512*s83, 512, 1]" = transpose_3.reshape(2, -1, 512); transpose_3 = None
[rank1]: return (reshape_3,)
[rank1]:
[rank1]: class submod_1(torch.nn.Module):
[rank1]: def forward(self, attn_output_1: "f32[2, s83, 512][512*s83, 512, 1]", s83: "Sym(s83)", l_self_modules_out_proj_parameters_weight_: "f32[512, 512][512, 1]", l_self_modules_out_proj_parameters_bias_: "f32[512][1]"):
[rank1]: # File: /nas/home/mkhayat/projects/sparse_gs/bug_compile2.py:59 in forward, code: attn_output = self.out_proj(attn_output)
[rank1]: linear: "f32[2, s83, 512][512*s83, 512, 1]" = torch._C._nn.linear(attn_output_1, l_self_modules_out_proj_parameters_weight_, l_self_modules_out_proj_parameters_bias_); attn_output_1 = l_self_modules_out_proj_parameters_weight_ = l_self_modules_out_proj_parameters_bias_ = None
[rank1]: return linear
[rank1]:
[rank1]: Original traceback:
[rank1]: None
```
### Versions
Nightly version of torch 2.8.0.dev20250410+cu128
cc @H-Huang @awgu @wanchaol @fegin @fduwjj @wz337 @wconstab @d4l3k @cpuhrsch @jbschlosser @bhosmer @drisspg @soulitzer @davidberard98 @YuqingJ @chauhang @penguinwu
| true
|
3,015,447,159
|
[AOTI] aoti_compile_and_package + use_runtime_constant_folding gives "Error: CUDA driver error: file not found"
|
henrylhtsang
|
closed
|
[
"oncall: pt2",
"oncall: export",
"module: aotinductor"
] | 2
|
CONTRIBUTOR
|
### 🐛 Describe the bug
Hi, I noticed a problem when using runtime constant folding with the aoti_compile_and_package API. The old API doesn't seem to have this problem; see the commented-out lines in the repro.
repro:
```
import torch
import torch._inductor.config
import torch.nn as nn
torch._inductor.config.aot_inductor.use_runtime_constant_folding = True
class Model(torch.nn.Module):
def __init__(self, device):
super().__init__()
self.w_pre = nn.Buffer(torch.randn(128, 128, device=device))
self.b = nn.Buffer(torch.randn(128, device=device))
def forward(self, x):
w_transpose = torch.transpose(self.w_pre, 0, 1)
w_relu = torch.nn.functional.relu(w_transpose)
w = w_relu + self.b
return torch.matmul(x, w)
def main():
input = (torch.randn(128, 128, device="cuda"),)
model = Model("cuda").cuda()
ep = torch.export.export(model, input, strict=False)
# path = torch._inductor.aot_compile(ep.module(), input)
# aot_model = torch._export.aot_load(path, "cuda")
path = torch._inductor.aoti_compile_and_package(ep)
aot_model = torch._inductor.aoti_load_package(path)
output = aot_model(*input)
print("done")
if __name__ == "__main__":
main()
```
### Error logs
```
I0423 16:04:55.421340 379199 model_package_loader.cpp:412] Extract file: data/aotinductor/model/cb7mj25cpi6eu3hpd5luh2t3a3uswzgysapcgfjvtihvplbain5y.wrapper.so to /tmp/s0otEP/data/aotinductor/model/cb7mj25cpi6eu3hpd5luh2t3a3uswzgysapcgfjvtihvplbain5y.wrapper.so
**Error: CUDA driver error: file not found**
Traceback (most recent call last):
/henrylhtsang/repros/aot.py", line 56, in main
output = aot_model(*input)
torch/_inductor/package/package.py", line 251, in __call__
flat_outputs = self.loader.boxed_run(flat_inputs) # type: ignore[attr-defined]
RuntimeError: run_func_( container_handle_, input_handles.data(), input_handles.size(), output_handles.data(), output_handles.size(), reinterpret_cast<AOTInductorStreamHandle>(stream_handle), proxy_executor_handle_) API call failed at /torch/csrc/inductor/aoti_runner/model_container_runner.cpp, line 152
```
### Versions
trunk
cc @chauhang @penguinwu @avikchaudhuri @gmagogsfm @zhxchen17 @tugsbayasgalan @angelayi @suo @ydwu4 @desertfire @chenyang78 @yushangdi @benjaminglass1
| true
|
3,015,442,422
|
[Graph Partition] fix extra reference in runner.partitions to cudagraphify functions
|
BoyuanFeng
|
closed
|
[
"Merged",
"ciflow/trunk",
"topic: not user facing",
"module: inductor",
"ciflow/inductor"
] | 3
|
CONTRIBUTOR
|
When a CompiledFxGraph is deallocated, its cudagraphified fn (i.e., `current_callable`) is expected to also be deallocated.
Without graph partition, this is true, since the cudagraphified fn is only referenced by compiled_fx_graph.current_callable.
However, with graph partition, runner.partitions holds the cudagraphified fns while compiled_fx_graph.current_callable holds runner.call. Thus the cudagraphified fns may not be deallocated when the CompiledFxGraph is deallocated. This leads to errors in several unit tests (e.g., test_unaligned_static_input_no_cudagraphs and test_unaligned_static_input_non_trees).
In this PR, we also clean up runner.partitions when the CompiledFxGraph is deallocated, which fixes the issue.
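For readers less familiar with the lifetime issue, here is a minimal, self-contained illustration (hypothetical names, not the actual Inductor code): a stored bound method keeps its owner alive, so the partition callables the owner holds stay reachable until that reference is dropped. The PR's fix, clearing `runner.partitions` on deallocation, achieves the analogous effect for the stored callables.
```python
# Illustrative sketch of the reference pattern described above.
import gc
import weakref


class Runner:
    def __init__(self, partitions):
        self.partitions = partitions  # stand-ins for cudagraphified fns

    def call(self, x):
        return [fn(x) for fn in self.partitions]


def partition_fn(x):
    return x + 1


runner = Runner([partition_fn])
current_callable = runner.call   # bound method keeps `runner` reachable
runner_ref = weakref.ref(runner)

del runner
gc.collect()
print(runner_ref() is not None)  # True: still alive via the bound method

current_callable = None          # drop the last strong reference
gc.collect()
print(runner_ref() is None)      # True: runner and its partitions are freed
```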
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
| true
|
3,015,427,295
|
More logs to show why fx graph cache isn't hit / created?
|
henrylhtsang
|
closed
|
[
"triaged",
"oncall: pt2"
] | 2
|
CONTRIBUTOR
|
Hi, when working on https://github.com/pytorch/pytorch/blob/main/torch/_inductor/compile_fx.py#L732-L990, it is very hard to tell why the fx graph cache sometimes isn't hit, even with TORCH_LOGS="+inductor".
In my case, tlparse provides a bit more info, like

but not enough to understand why the cache wasn't hit. I would still have to add log.debug everywhere to figure that out.
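One way to get more detail today, offered as a hedged suggestion (it assumes the FX graph cache logic still lives in `torch._inductor.codecache`; adjust the module path if it has moved), is to scope debug logging to that module instead of all of `+inductor`:
```python
# Enable debug logging for just the cache module via standard Python logging.
# The TORCH_LOGS equivalent would be:
#   TORCH_LOGS="+torch._inductor.codecache" python my_script.py
import logging

cache_logger = logging.getLogger("torch._inductor.codecache")
cache_logger.setLevel(logging.DEBUG)
cache_logger.addHandler(logging.StreamHandler())
```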
cc @chauhang @penguinwu
| true
|
3,015,377,615
|
[MPS] Adjust test_sum_dtypes so it can run on MPS.
|
dcci
|
closed
|
[
"Merged",
"Reverted",
"topic: not user facing",
"module: mps",
"ciflow/mps",
"module: inductor",
"ciflow/inductor",
"ci-no-td"
] | 8
|
MEMBER
|
cc @kulinseth @albanD @malfet @DenisVieriu97 @jhavukainen @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
| true
|
3,015,377,419
|
[Bug] Memory leak in autograd with custom CUDA operations
|
khlaifiabilel
|
open
|
[
"needs reproduction",
"module: cpp-extensions",
"module: autograd",
"module: memory usage",
"triaged"
] | 1
|
NONE
|
### 🐛 Describe the bug
## 🐛 Bug
When using custom CUDA operations with PyTorch's autograd system, there appears to be a memory leak during backward passes in long training loops. The memory usage gradually increases even when no new tensors should be created or retained.
## To Reproduce
Steps to reproduce the behavior:
1. Define a custom CUDA operation using the PyTorch C++ extension system
2. Create a model that uses this operation within an autograd computation graph
3. Run the model in a training loop for 100+ iterations
4. Monitor memory usage using `nvidia-smi` or other memory profiling tools
```python
import torch
from torch.utils.cpp_extension import load_inline
import time
# Define a simple custom CUDA extension
cuda_source = """
__global__ void add_one_kernel(const float* input, float* output, int size) {
const int index = blockIdx.x * blockDim.x + threadIdx.x;
if (index < size) {
output[index] = input[index] + 1.0f;
}
}
torch::Tensor add_one_cuda(torch::Tensor input) {
auto output = torch::empty_like(input);
const int threads = 1024;
const int blocks = (input.numel() + threads - 1) / threads;
add_one_kernel<<<blocks, threads>>>(
input.data_ptr<float>(),
output.data_ptr<float>(),
input.numel()
);
return output;
}
"""
cpp_source = """
#include <torch/extension.h>
torch::Tensor add_one_cuda(torch::Tensor input);
torch::Tensor add_one(torch::Tensor input) {
return add_one_cuda(input);
}
PYBIND11_MODULE(TORCH_EXTENSION_NAME, m) {
m.def("add_one", &add_one, "Add one to a tensor");
}
"""
# Load the extension
my_op = load_inline(
name="my_op",
cpp_sources=cpp_source,
cuda_sources=cuda_source,
functions=["add_one"],
verbose=True
)
# Create a simple model using the custom op
class MyModel(torch.nn.Module):
def __init__(self):
super().__init__()
self.linear = torch.nn.Linear(10, 10)
def forward(self, x):
x = self.linear(x)
# Use our custom operation
x = my_op.add_one(x)
return x
# Training loop
model = MyModel().cuda()
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
criterion = torch.nn.MSELoss()
# Track memory usage
initial_memory = torch.cuda.memory_allocated()
print(f"Initial memory: {initial_memory / 1024**2:.2f} MB")
for i in range(200):
x = torch.randn(100, 10, device="cuda")
y = torch.randn(100, 10, device="cuda")
optimizer.zero_grad()
output = model(x)
loss = criterion(output, y)
loss.backward()
optimizer.step()
if i % 20 == 0:
current_memory = torch.cuda.memory_allocated()
print(f"Iteration {i}: {current_memory / 1024**2:.2f} MB")
print(f"Diff from start: {(current_memory - initial_memory) / 1024**2:.2f} MB")
```
### Versions
Collecting environment information...
PyTorch version: 2.7.0+cu126
Is debug build: False
CUDA used to build PyTorch: 12.6
ROCM used to build PyTorch: N/A
OS: Ubuntu 22.04.5 LTS (x86_64)
GCC version: (Ubuntu 11.4.0-1ubuntu1~22.04) 11.4.0
Clang version: Could not collect
CMake version: Could not collect
Libc version: glibc-2.35
Python version: 3.12.7 | packaged by Anaconda, Inc. | (main, Oct 4 2024, 13:27:36) [GCC 11.2.0] (64-bit runtime)
Python platform: Linux-6.8.0-1026-azure-x86_64-with-glibc2.35
Is CUDA available: True
CUDA runtime version: 12.8.93
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration: GPU 0: NVIDIA H100 NVL
Nvidia driver version: 550.144.03
cuDNN version: Could not collect
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 48 bits physical, 48 bits virtual
Byte Order: Little Endian
CPU(s): 40
On-line CPU(s) list: 0-39
Vendor ID: AuthenticAMD
Model name: AMD EPYC 9V84 96-Core Processor
CPU family: 25
Model: 17
Thread(s) per core: 1
Core(s) per socket: 40
Socket(s): 1
Stepping: 1
BogoMIPS: 4800.09
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ht syscall nx mmxext fxsr_opt pdpe1gb rdtscp lm constant_tsc rep_good nopl tsc_reliable nonstop_tsc cpuid extd_apicid aperfmperf tsc_known_freq pni pclmulqdq ssse3 fma cx16 pcid sse4_1 sse4_2 movbe popcnt aes xsave avx f16c rdrand hypervisor lahf_lm cmp_legacy cr8_legacy abm sse4a misalignsse 3dnowprefetch osvw topoext perfctr_core vmmcall fsgsbase bmi1 avx2 smep bmi2 erms invpcid avx512f avx512dq rdseed adx smap avx512ifma clflushopt clwb avx512cd sha_ni avx512bw avx512vl xsaveopt xsavec xgetbv1 xsaves user_shstk avx512_bf16 clzero xsaveerptr rdpru arat avx512vbmi umip avx512_vbmi2 gfni vaes vpclmulqdq avx512_vnni avx512_bitalg avx512_vpopcntdq rdpid fsrm
Hypervisor vendor: Microsoft
Virtualization type: full
L1d cache: 1.3 MiB (40 instances)
L1i cache: 1.3 MiB (40 instances)
L2 cache: 40 MiB (40 instances)
L3 cache: 160 MiB (5 instances)
NUMA node(s): 1
NUMA node0 CPU(s): 0-39
Vulnerability Gather data sampling: Not affected
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Not affected
Vulnerability Reg file data sampling: Not affected
Vulnerability Retbleed: Not affected
Vulnerability Spec rstack overflow: Vulnerable: Safe RET, no microcode
Vulnerability Spec store bypass: Vulnerable
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Retpolines; STIBP disabled; RSB filling; PBRSB-eIBRS Not affected; BHI Not affected
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Versions of relevant libraries:
[pip3] flake8==7.0.0
[pip3] mypy==1.11.2
[pip3] mypy-extensions==1.0.0
[pip3] numpy==2.2.4
[pip3] numpydoc==1.7.0
[pip3] nvidia-cublas-cu12==12.6.4.1
[pip3] nvidia-cuda-cupti-cu12==12.6.80
[pip3] nvidia-cuda-nvrtc-cu12==12.6.77
[pip3] nvidia-cuda-runtime-cu12==12.6.77
[pip3] nvidia-cudnn-cu12==9.5.1.17
[pip3] nvidia-cufft-cu12==11.3.0.4
[pip3] nvidia-curand-cu12==10.3.7.77
[pip3] nvidia-cusolver-cu12==11.7.1.2
[pip3] nvidia-cusparse-cu12==12.5.4.2
[pip3] nvidia-cusparselt-cu12==0.6.3
[pip3] nvidia-nccl-cu12==2.26.2
[pip3] nvidia-nvjitlink-cu12==12.6.85
[pip3] nvidia-nvtx-cu12==12.6.77
[pip3] onnxruntime==1.21.0
[pip3] torch==2.7.0
[pip3] triton==3.3.0
[conda] _anaconda_depends 2024.10 py312_mkl_0
[conda] blas 1.0 mkl
[conda] mkl 2023.1.0 h213fc3f_46344
[conda] mkl-service 2.4.0 py312h5eee18b_1
[conda] mkl_fft 1.3.10 py312h5eee18b_0
[conda] mkl_random 1.2.7 py312h526ad5a_0
[conda] numpy 2.2.4 pypi_0 pypi
[conda] numpydoc 1.7.0 py312h06a4308_0
[conda] nvidia-cublas-cu12 12.6.4.1 pypi_0 pypi
[conda] nvidia-cuda-cupti-cu12 12.6.80 pypi_0 pypi
[conda] nvidia-cuda-nvrtc-cu12 12.6.77 pypi_0 pypi
[conda] nvidia-cuda-runtime-cu12 12.6.77 pypi_0 pypi
[conda] nvidia-cudnn-cu12 9.5.1.17 pypi_0 pypi
[conda] nvidia-cufft-cu12 11.3.0.4 pypi_0 pypi
[conda] nvidia-curand-cu12 10.3.7.77 pypi_0 pypi
[conda] nvidia-cusolver-cu12 11.7.1.2 pypi_0 pypi
[conda] nvidia-cusparse-cu12 12.5.4.2 pypi_0 pypi
[conda] nvidia-cusparselt-cu12 0.6.3 pypi_0 pypi
[conda] nvidia-nccl-cu12 2.26.2 pypi_0 pypi
[conda] nvidia-nvjitlink-cu12 12.6.85 pypi_0 pypi
[conda] nvidia-nvtx-cu12 12.6.77 pypi_0 pypi
[conda] torch 2.7.0 pypi_0 pypi
[conda] triton 3.3.0 pypi_0 pypi
cc @malfet @zou3519 @xmfan @ezyang @albanD @gqchen @nikitaved @soulitzer @Varal7
| true
|
3,015,374,952
|
[inductor][invoke_subgraph] Run joint graph passes for inference
|
anijain2305
|
closed
|
[
"Merged",
"ciflow/trunk",
"topic: not user facing",
"module: inductor",
"ciflow/inductor"
] | 4
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #152357
* #152207
* __->__ #152062
* #151961
* #151957
* #151477
* #151633
* #151409
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
| true
|
3,015,331,099
|
Add graph inputs/outputs to comm overlap pass signature
|
wconstab
|
closed
|
[
"module: inductor",
"ciflow/inductor",
"release notes: inductor"
] | 2
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #146558
* #146562
* __->__ #152061
* #152060
* #146561
To support peak-memory-aware passes, we can pass graph inputs/outputs
to these passes so they can compute a memory timeline.
This PR should be a functional no-op for existing passes.
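A minimal sketch of what a memory-aware pass could do with this extra information, assuming a hypothetical `node_bytes` helper and simplified node/buffer types (illustrative only, not the actual inductor pass API):

```python
# Hypothetical sketch -- names and signature are illustrative, not the real
# torch._inductor comm-overlap pass interface. It shows why a pass wants the
# graph inputs/outputs: to build a coarse memory timeline over the schedule.
from typing import Callable, Sequence


def reorder_with_memory_awareness(
    snodes: Sequence[object],              # scheduler nodes, in current order
    node_bytes: Callable[[object], int],   # assumed helper: bytes allocated by a node/buffer
    graph_inputs: Sequence[object],        # buffers already live at graph entry
    graph_outputs: Sequence[object],       # buffers that must stay live until graph exit
) -> list[object]:
    """Return a (possibly reordered) schedule while tracking estimated peak memory."""
    live = sum(node_bytes(buf) for buf in graph_inputs)
    peak = live
    timeline = []
    for node in snodes:
        live += node_bytes(node)           # allocations for this node's outputs
        peak = max(peak, live)
        timeline.append((node, live))
    # A real pass would use `timeline` (and keep `graph_outputs` live) to decide
    # which collectives can be moved without increasing `peak`; this sketch
    # simply preserves the original order, i.e. it is a functional no-op.
    return list(snodes)
```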
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
| true
|
3,015,330,984
|
Add 'step' counter to visualize_overlap log
|
wconstab
|
closed
|
[
"Merged",
"ciflow/trunk",
"topic: not user facing",
"module: inductor",
"ciflow/inductor"
] | 9
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #146558
* #146562
* #146561
* __->__ #152060
Example of log after the change:
```
[rank0]:V0227 15:07:20.704000 1594243 torch/_inductor/comms.py:621] [0/0] [__overlap] ==== Visualize overlap after reordering pass <function group_copy_collective at 0x7f41c1922050> (ran in 0.026380538940429688 sec)====
[rank0]:V0227 15:07:20.705000 1594243 torch/_inductor/comms.py:569] [0/0] [__overlap] 0: GroupedSchedulerNode(name='op6_op7') (size=[512], stride=[1]), (size=[4096], stride=[1]) () (0 ns)
[rank0]:V0227 15:07:20.705000 1594243 torch/_inductor/comms.py:569] [0/0] [__overlap] 1: GroupedSchedulerNode(name='op55_op56') (size=[512], stride=[1]), (size=[4096], stride=[1]) () (0 ns)
[rank0]:V0227 15:07:20.705000 1594243 torch/_inductor/comms.py:569] [0/0] [__overlap] 2: GroupedSchedulerNode(name='op75_op76') (size=[512], stride=[1]), (size=[4096], stride=[1]) () (0 ns)
[rank0]:V0227 15:07:20.706000 1594243 torch/_inductor/comms.py:569] [0/0] [__overlap] 3: GroupedSchedulerNode(name='op121_op122') (size=[512], stride=[1]), (size=[4096], stride=[1]) () (0 ns)
[rank0]:V0227 15:07:20.706000 1594243 torch/_inductor/comms.py:569] [0/0] [__overlap] 4: GroupedSchedulerNode(name='op141_op142') (size=[512], stride=[1]), (size=[4096], stride=[1]) () (0 ns)
[rank0]:V0227 15:07:20.706000 1594243 torch/_inductor/comms.py:569] [0/0] [__overlap] 5: GroupedSchedulerNode(name='op187_op188') (size=[512], stride=[1]), (size=[4096], stride=[1]) () (0 ns)
[rank0]:V0227 15:07:20.706000 1594243 torch/_inductor/comms.py:569] [0/0] [__overlap] 6: GroupedSchedulerNode(name='op207_op208') (size=[512], stride=[1]), (size=[4096], stride=[1]) () (0 ns)
[rank0]:V0227 15:07:20.707000 1594243 torch/_inductor/comms.py:569] [0/0] [__overlap] 7: GroupedSchedulerNode(name='op253_op254') (size=[512], stride=[1]), (size=[4096], stride=[1]) () (0 ns)
[rank0]:V0227 15:07:20.707000 1594243 torch/_inductor/comms.py:569] [0/0] [__overlap] 8: GroupedSchedulerNode(name='op273_op274') (size=[512], stride=[1]), (size=[4096], stride=[1]) () (0 ns)
[rank0]:V0227 15:07:20.707000 1594243 torch/_inductor/comms.py:569] [0/0] [__overlap] 9: GroupedSchedulerNode(name='op319_op320') (size=[512], stride=[1]), (size=[4096], stride=[1]) () (0 ns)
```
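For reference, the numbered prefix on each line can be produced with a plain running counter over the schedule; a hypothetical sketch (not the actual `torch/_inductor/comms.py` code) looks like:

```python
# Hypothetical sketch -- only illustrates prefixing a step index to each
# schedule entry when visualizing overlap; not the real inductor logger setup.
import logging

overlap_log = logging.getLogger("torch._inductor.comms.__overlap")


def visualize_overlap(snodes):
    for step, snode in enumerate(snodes):
        # Produces lines like "3: GroupedSchedulerNode(name='op121_op122') ... (0 ns)"
        overlap_log.debug("%d: %s", step, snode)
```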
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
| true
|
3,015,326,104
|
DISABLED test_comprehensive_linalg_pinv_singular_cuda_float32 (__main__.TestInductorOpInfoCUDA)
|
pytorch-bot[bot]
|
open
|
[
"triaged",
"module: flaky-tests",
"skipped",
"oncall: pt2",
"module: inductor"
] | 3
|
NONE
|
Platforms: linux, slow
This test was disabled because it is failing in CI. See [recent examples](https://hud.pytorch.org/flakytest?name=test_comprehensive_linalg_pinv_singular_cuda_float32&suite=TestInductorOpInfoCUDA&limit=100) and the most recent trunk [workflow logs](https://github.com/pytorch/pytorch/runs/41035465962).
Over the past 3 hours, it has been determined flaky in 5 workflow(s) with 5 failures and 5 successes.
**Debugging instructions (after clicking on the recent samples link):**
DO NOT ASSUME THINGS ARE OKAY IF THE CI IS GREEN. We now shield flaky tests from developers so CI will thus be green but it will be harder to parse the logs.
To find relevant log snippets:
1. Click on the workflow logs linked above
2. Click on the Test step of the job so that it is expanded. Otherwise, the grepping will not work.
3. Grep for `test_comprehensive_linalg_pinv_singular_cuda_float32`
4. There should be several instances run (as flaky tests are rerun in CI) from which you can study the logs.
<details><summary>Sample error message</summary>
```
Traceback (most recent call last):
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_device_type.py", line 1131, in test_wrapper
return test(*args, **kwargs)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_device_type.py", line 1430, in only_fn
return fn(self, *args, **kwargs)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 2291, in wrapper
fn(*args, **kwargs)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_device_type.py", line 1211, in dep_fn
return fn(slf, *args, **kwargs)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_device_type.py", line 1211, in dep_fn
return fn(slf, *args, **kwargs)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_device_type.py", line 1211, in dep_fn
return fn(slf, *args, **kwargs)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 1612, in wrapper
fn(*args, **kwargs)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 1534, in wrapper
fn(*args, **kwargs)
File "/opt/conda/envs/py_3.10/lib/python3.10/unittest/mock.py", line 1379, in patched
return func(*newargs, **newkeywargs)
File "/opt/conda/envs/py_3.10/lib/python3.10/contextlib.py", line 79, in inner
return func(*args, **kwds)
File "/opt/conda/envs/py_3.10/lib/python3.10/contextlib.py", line 79, in inner
return func(*args, **kwds)
File "/opt/conda/envs/py_3.10/lib/python3.10/contextlib.py", line 79, in inner
return func(*args, **kwds)
File "/var/lib/jenkins/workspace/test/inductor/test_torchinductor_opinfo.py", line 962, in inner
raise e
File "/var/lib/jenkins/workspace/test/inductor/test_torchinductor_opinfo.py", line 954, in inner
fn(self, device, dtype, op)
File "/var/lib/jenkins/workspace/test/inductor/test_torchinductor_opinfo.py", line 1207, in test_comprehensive
raise e
File "/var/lib/jenkins/workspace/test/inductor/test_torchinductor_opinfo.py", line 1182, in test_comprehensive
self.check_model_gpu(
File "/opt/conda/envs/py_3.10/lib/python3.10/contextlib.py", line 79, in inner
return func(*args, **kwds)
File "/var/lib/jenkins/workspace/test/inductor/test_torchinductor.py", line 648, in check_model_gpu
check_model(
File "/var/lib/jenkins/workspace/test/inductor/test_torchinductor.py", line 599, in check_model
actual_grad = compute_grads(example_inputs, kwargs, actual, grads)
File "/var/lib/jenkins/workspace/test/inductor/test_torchinductor.py", line 397, in compute_grads
return torch.autograd.grad(
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/autograd/__init__.py", line 503, in grad
result = _engine_run_backward(
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/autograd/graph.py", line 824, in _engine_run_backward
return Variable._execution_engine.run_backward( # Calls into the C++ engine to run the backward pass
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/autograd/function.py", line 307, in apply
return user_fn(self, *args)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_functorch/_aot_autograd/runtime_wrappers.py", line 2153, in backward
return impl_fn()
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_functorch/_aot_autograd/runtime_wrappers.py", line 2139, in impl_fn
out = CompiledFunction._backward_impl(ctx, all_args)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_functorch/_aot_autograd/runtime_wrappers.py", line 2231, in _backward_impl
CompiledFunction.compiled_bw = aot_config.bw_compiler(
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_functorch/aot_autograd.py", line 483, in __call__
return self.compiler_fn(gm, example_inputs)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_dynamo/backends/common.py", line 73, in _wrapped_bw_compiler
disable(
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_dynamo/eval_frame.py", line 856, in _fn
return fn(*args, **kwargs)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_utils_internal.py", line 97, in wrapper_function
return function(*args, **kwargs)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_inductor/compile_fx.py", line 2191, in bw_compiler
return inner_compile(
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_inductor/compile_fx.py", line 724, in compile_fx_inner
return wrap_compiler_debug(_compile_fx_inner, compiler_name="inductor")(
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_dynamo/repro/after_aot.py", line 124, in debug_wrapper
inner_compiled_fn = compiler_fn(gm, example_inputs)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_inductor/compile_fx.py", line 860, in _compile_fx_inner
raise InductorError(e, currentframe()).with_traceback(
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_inductor/compile_fx.py", line 844, in _compile_fx_inner
mb_compiled_graph = fx_codegen_and_compile(
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_inductor/compile_fx.py", line 1453, in fx_codegen_and_compile
return scheme.codegen_and_compile(gm, example_inputs, inputs_to_check, graph_kwargs)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_inductor/compile_fx.py", line 1340, in codegen_and_compile
compiled_module = graph.compile_to_module()
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_inductor/graph.py", line 2209, in compile_to_module
return self._compile_to_module()
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_inductor/graph.py", line 2256, in _compile_to_module
mod = PyCodeCache.load_by_key_path(
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_inductor/codecache.py", line 2998, in load_by_key_path
mod = _reload_python_module(key, path, set_sys_modules=in_toplevel)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_inductor/runtime/compile_tasks.py", line 31, in _reload_python_module
exec(code, mod.__dict__, mod.__dict__)
File "/tmp/torchinductor_jenkins/7b/c7b2ziknpe63hzypobeboleo47is52h7pvplnv6rdnlnel3p5g5r.py", line 135, in <module>
async_compile.wait(globals())
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_inductor/async_compile.py", line 446, in wait
self._wait_futures(scope)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_inductor/async_compile.py", line 466, in _wait_futures
scope[key] = result.result()
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_inductor/codecache.py", line 3500, in result
return self.result_fn()
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_inductor/async_compile.py", line 341, in get_result
kernel.precompile(
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_inductor/runtime/triton_heuristics.py", line 322, in precompile
self._make_launchers()
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_inductor/runtime/triton_heuristics.py", line 479, in _make_launchers
launchers.append(result.make_launcher())
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_inductor/runtime/triton_heuristics.py", line 1276, in make_launcher
self.reload_cubin_path()
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_inductor/runtime/triton_heuristics.py", line 1268, in reload_cubin_path
raise RuntimeError(
torch._inductor.exc.InductorError: RuntimeError: ('Cubin file saved by TritonBundler not found at %s', '/tmp/torchinductor_jenkins/triton/0/JMNOW52LS777ATESQEEMHD3IUZLFBHMNK5TLYYTG64OF6YC2TTFQ/triton_poi_fused_sub_0.cubin')
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 3154, in wrapper
method(*args, **kwargs)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 3154, in wrapper
method(*args, **kwargs)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_device_type.py", line 426, in instantiated_test
result = test(self, **param_kwargs)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_device_type.py", line 1211, in dep_fn
return fn(slf, *args, **kwargs)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_device_type.py", line 1211, in dep_fn
return fn(slf, *args, **kwargs)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 2263, in wrapper
fn(*args, **kwargs)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 1612, in wrapper
fn(*args, **kwargs)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_device_type.py", line 1143, in test_wrapper
raise e_tracked from e
Exception: Caused by sample input at index 0: SampleInput(input=Tensor[size=(3, 0), device="cuda:0", dtype=torch.float32], args=TensorList[Tensor[size=(3, 0), device="cuda:0", dtype=torch.float32]], kwargs={}, broadcasts_input=False, name='')
To execute this test, run the following from the base repo dir:
PYTORCH_OPINFO_SAMPLE_INPUT_INDEX=0 PYTORCH_TEST_WITH_SLOW=1 PYTORCH_TEST_SKIP_FAST=1 python test/inductor/test_torchinductor_opinfo.py TestInductorOpInfoCUDA.test_comprehensive_linalg_pinv_singular_cuda_float32
This message can be suppressed by setting PYTORCH_PRINT_REPRO_ON_FAILURE=0
```
</details>
Test file path: `inductor/test_torchinductor_opinfo.py`
cc @clee2000 @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
| true
|
3,015,326,018
|
DISABLED test_comprehensive_floor_cuda_float16 (__main__.TestInductorOpInfoCUDA)
|
pytorch-bot[bot]
|
open
|
[
"triaged",
"module: flaky-tests",
"skipped",
"oncall: pt2",
"module: inductor"
] | 29
|
NONE
|
Platforms: inductor
This test was disabled because it is failing in CI. See [recent examples](https://hud.pytorch.org/flakytest?name=test_comprehensive_floor_cuda_float16&suite=TestInductorOpInfoCUDA&limit=100) and the most recent trunk [workflow logs](https://github.com/pytorch/pytorch/runs/41036342325).
Over the past 3 hours, it has been determined flaky in 9 workflow(s) with 9 failures and 9 successes.
**Debugging instructions (after clicking on the recent samples link):**
DO NOT ASSUME THINGS ARE OKAY IF THE CI IS GREEN. We now shield flaky tests from developers so CI will thus be green but it will be harder to parse the logs.
To find relevant log snippets:
1. Click on the workflow logs linked above
2. Click on the Test step of the job so that it is expanded. Otherwise, the grepping will not work.
3. Grep for `test_comprehensive_floor_cuda_float16`
4. There should be several instances run (as flaky tests are rerun in CI) from which you can study the logs.
<details><summary>Sample error message</summary>
```
Traceback (most recent call last):
File "/opt/conda/envs/py_3.12/lib/python3.12/site-packages/torch/testing/_internal/common_device_type.py", line 1131, in test_wrapper
return test(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^
File "/opt/conda/envs/py_3.12/lib/python3.12/site-packages/torch/testing/_internal/common_device_type.py", line 1430, in only_fn
return fn(self, *args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/conda/envs/py_3.12/lib/python3.12/site-packages/torch/testing/_internal/common_utils.py", line 2291, in wrapper
fn(*args, **kwargs)
File "/opt/conda/envs/py_3.12/lib/python3.12/site-packages/torch/testing/_internal/common_device_type.py", line 1211, in dep_fn
return fn(slf, *args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/conda/envs/py_3.12/lib/python3.12/site-packages/torch/testing/_internal/common_device_type.py", line 1211, in dep_fn
return fn(slf, *args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/conda/envs/py_3.12/lib/python3.12/site-packages/torch/testing/_internal/common_device_type.py", line 1211, in dep_fn
return fn(slf, *args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/conda/envs/py_3.12/lib/python3.12/site-packages/torch/testing/_internal/common_utils.py", line 1612, in wrapper
fn(*args, **kwargs)
File "/opt/conda/envs/py_3.12/lib/python3.12/site-packages/torch/testing/_internal/common_utils.py", line 1534, in wrapper
fn(*args, **kwargs)
File "/opt/conda/envs/py_3.12/lib/python3.12/unittest/mock.py", line 1396, in patched
return func(*newargs, **newkeywargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/conda/envs/py_3.12/lib/python3.12/contextlib.py", line 81, in inner
return func(*args, **kwds)
^^^^^^^^^^^^^^^^^^^
File "/opt/conda/envs/py_3.12/lib/python3.12/contextlib.py", line 81, in inner
return func(*args, **kwds)
^^^^^^^^^^^^^^^^^^^
File "/opt/conda/envs/py_3.12/lib/python3.12/contextlib.py", line 81, in inner
return func(*args, **kwds)
^^^^^^^^^^^^^^^^^^^
File "/var/lib/jenkins/workspace/test/inductor/test_torchinductor_opinfo.py", line 962, in inner
raise e
File "/var/lib/jenkins/workspace/test/inductor/test_torchinductor_opinfo.py", line 954, in inner
fn(self, device, dtype, op)
File "/var/lib/jenkins/workspace/test/inductor/test_torchinductor_opinfo.py", line 1207, in test_comprehensive
raise e
File "/var/lib/jenkins/workspace/test/inductor/test_torchinductor_opinfo.py", line 1182, in test_comprehensive
self.check_model_gpu(
File "/opt/conda/envs/py_3.12/lib/python3.12/contextlib.py", line 81, in inner
return func(*args, **kwds)
^^^^^^^^^^^^^^^^^^^
File "/var/lib/jenkins/workspace/test/inductor/test_torchinductor.py", line 648, in check_model_gpu
check_model(
File "/var/lib/jenkins/workspace/test/inductor/test_torchinductor.py", line 489, in check_model
actual = run(*example_inputs, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/conda/envs/py_3.12/lib/python3.12/site-packages/torch/_dynamo/eval_frame.py", line 675, in _fn
raise e.remove_dynamo_frames() from None # see TORCHDYNAMO_VERBOSE=1
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/conda/envs/py_3.12/lib/python3.12/site-packages/torch/_inductor/compile_fx.py", line 860, in _compile_fx_inner
raise InductorError(e, currentframe()).with_traceback(
File "/opt/conda/envs/py_3.12/lib/python3.12/site-packages/torch/_inductor/compile_fx.py", line 844, in _compile_fx_inner
mb_compiled_graph = fx_codegen_and_compile(
^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/conda/envs/py_3.12/lib/python3.12/site-packages/torch/_inductor/compile_fx.py", line 1453, in fx_codegen_and_compile
return scheme.codegen_and_compile(gm, example_inputs, inputs_to_check, graph_kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/conda/envs/py_3.12/lib/python3.12/site-packages/torch/_inductor/compile_fx.py", line 1340, in codegen_and_compile
compiled_module = graph.compile_to_module()
^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/conda/envs/py_3.12/lib/python3.12/site-packages/torch/_inductor/graph.py", line 2209, in compile_to_module
return self._compile_to_module()
^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/conda/envs/py_3.12/lib/python3.12/site-packages/torch/_inductor/graph.py", line 2256, in _compile_to_module
mod = PyCodeCache.load_by_key_path(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/conda/envs/py_3.12/lib/python3.12/site-packages/torch/_inductor/codecache.py", line 2998, in load_by_key_path
mod = _reload_python_module(key, path, set_sys_modules=in_toplevel)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/conda/envs/py_3.12/lib/python3.12/site-packages/torch/_inductor/runtime/compile_tasks.py", line 31, in _reload_python_module
exec(code, mod.__dict__, mod.__dict__)
File "/tmp/tmpt2gwy4g0/kj/ckjbb6qco3zkul3zd5k3tfyqnawa2agepqom6ahagazbr5m3wqaf.py", line 75, in <module>
async_compile.wait(globals())
File "/opt/conda/envs/py_3.12/lib/python3.12/site-packages/torch/_inductor/async_compile.py", line 446, in wait
self._wait_futures(scope)
File "/opt/conda/envs/py_3.12/lib/python3.12/site-packages/torch/_inductor/async_compile.py", line 466, in _wait_futures
scope[key] = result.result()
^^^^^^^^^^^^^^^
File "/opt/conda/envs/py_3.12/lib/python3.12/site-packages/torch/_inductor/codecache.py", line 3500, in result
return self.result_fn()
^^^^^^^^^^^^^^^^
File "/opt/conda/envs/py_3.12/lib/python3.12/site-packages/torch/_inductor/async_compile.py", line 341, in get_result
kernel.precompile(
File "/opt/conda/envs/py_3.12/lib/python3.12/site-packages/torch/_inductor/runtime/triton_heuristics.py", line 322, in precompile
self._make_launchers()
File "/opt/conda/envs/py_3.12/lib/python3.12/site-packages/torch/_inductor/runtime/triton_heuristics.py", line 479, in _make_launchers
launchers.append(result.make_launcher())
^^^^^^^^^^^^^^^^^^^^^^
File "/opt/conda/envs/py_3.12/lib/python3.12/site-packages/torch/_inductor/runtime/triton_heuristics.py", line 1276, in make_launcher
self.reload_cubin_path()
File "/opt/conda/envs/py_3.12/lib/python3.12/site-packages/torch/_inductor/runtime/triton_heuristics.py", line 1268, in reload_cubin_path
raise RuntimeError(
torch._inductor.exc.InductorError: RuntimeError: ('Cubin file saved by TritonBundler not found at %s', '/tmp/tmpkeqbnqhx/triton/5QNBQ545MCQXWLPGRRH2F67OXJUR5TZCGDSGGRQET555GILLR24Q/triton_poi_fused_floor_0.cubin')
Set TORCHDYNAMO_VERBOSE=1 for the internal stack trace (please do this especially if you're reporting a bug to PyTorch). For even more developer context, set TORCH_LOGS="+dynamo"
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "/opt/conda/envs/py_3.12/lib/python3.12/site-packages/torch/testing/_internal/common_utils.py", line 3154, in wrapper
method(*args, **kwargs)
File "/opt/conda/envs/py_3.12/lib/python3.12/site-packages/torch/testing/_internal/common_utils.py", line 3154, in wrapper
method(*args, **kwargs)
File "/opt/conda/envs/py_3.12/lib/python3.12/site-packages/torch/testing/_internal/common_device_type.py", line 426, in instantiated_test
result = test(self, **param_kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/conda/envs/py_3.12/lib/python3.12/site-packages/torch/testing/_internal/common_utils.py", line 1612, in wrapper
fn(*args, **kwargs)
File "/opt/conda/envs/py_3.12/lib/python3.12/site-packages/torch/testing/_internal/common_device_type.py", line 1143, in test_wrapper
raise e_tracked from e
Exception: Caused by sample input at index 0: SampleInput(input=Tensor[size=(20, 20), device="cuda:0", dtype=torch.float16], args=(), kwargs={}, broadcasts_input=False, name='')
To execute this test, run the following from the base repo dir:
PYTORCH_OPINFO_SAMPLE_INPUT_INDEX=0 python test/inductor/test_torchinductor_opinfo.py TestInductorOpInfoCUDA.test_comprehensive_floor_cuda_float16
This message can be suppressed by setting PYTORCH_PRINT_REPRO_ON_FAILURE=0
```
</details>
Test file path: `inductor/test_torchinductor_opinfo.py`
cc @clee2000 @chauhang @penguinwu @voznesenskym @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @chenyang78 @kadeng @muchulee8 @amjames @aakhundov
| true
|
3,015,325,956
|
DISABLED test_comprehensive_bitwise_right_shift_cuda_int32 (__main__.TestInductorOpInfoCUDA)
|
pytorch-bot[bot]
|
open
|
[
"triaged",
"module: flaky-tests",
"skipped",
"oncall: pt2",
"module: inductor"
] | 27
|
NONE
|
Platforms: inductor
This test was disabled because it is failing in CI. See [recent examples](https://hud.pytorch.org/flakytest?name=test_comprehensive_bitwise_right_shift_cuda_int32&suite=TestInductorOpInfoCUDA&limit=100) and the most recent trunk [workflow logs](https://github.com/pytorch/pytorch/runs/41036342342).
Over the past 3 hours, it has been determined flaky in 9 workflow(s) with 9 failures and 9 successes.
**Debugging instructions (after clicking on the recent samples link):**
DO NOT ASSUME THINGS ARE OKAY IF THE CI IS GREEN. We now shield flaky tests from developers so CI will thus be green but it will be harder to parse the logs.
To find relevant log snippets:
1. Click on the workflow logs linked above
2. Click on the Test step of the job so that it is expanded. Otherwise, the grepping will not work.
3. Grep for `test_comprehensive_bitwise_right_shift_cuda_int32`
4. There should be several instances run (as flaky tests are rerun in CI) from which you can study the logs.
<details><summary>Sample error message</summary>
```
Traceback (most recent call last):
File "/opt/conda/envs/py_3.12/lib/python3.12/site-packages/torch/testing/_internal/common_device_type.py", line 1131, in test_wrapper
return test(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^
File "/opt/conda/envs/py_3.12/lib/python3.12/site-packages/torch/testing/_internal/common_device_type.py", line 1430, in only_fn
return fn(self, *args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/conda/envs/py_3.12/lib/python3.12/site-packages/torch/testing/_internal/common_utils.py", line 2291, in wrapper
fn(*args, **kwargs)
File "/opt/conda/envs/py_3.12/lib/python3.12/site-packages/torch/testing/_internal/common_device_type.py", line 1211, in dep_fn
return fn(slf, *args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/conda/envs/py_3.12/lib/python3.12/site-packages/torch/testing/_internal/common_device_type.py", line 1211, in dep_fn
return fn(slf, *args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/conda/envs/py_3.12/lib/python3.12/site-packages/torch/testing/_internal/common_device_type.py", line 1211, in dep_fn
return fn(slf, *args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/conda/envs/py_3.12/lib/python3.12/site-packages/torch/testing/_internal/common_utils.py", line 1612, in wrapper
fn(*args, **kwargs)
File "/opt/conda/envs/py_3.12/lib/python3.12/site-packages/torch/testing/_internal/common_utils.py", line 1534, in wrapper
fn(*args, **kwargs)
File "/opt/conda/envs/py_3.12/lib/python3.12/unittest/mock.py", line 1396, in patched
return func(*newargs, **newkeywargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/conda/envs/py_3.12/lib/python3.12/contextlib.py", line 81, in inner
return func(*args, **kwds)
^^^^^^^^^^^^^^^^^^^
File "/opt/conda/envs/py_3.12/lib/python3.12/contextlib.py", line 81, in inner
return func(*args, **kwds)
^^^^^^^^^^^^^^^^^^^
File "/opt/conda/envs/py_3.12/lib/python3.12/contextlib.py", line 81, in inner
return func(*args, **kwds)
^^^^^^^^^^^^^^^^^^^
File "/var/lib/jenkins/workspace/test/inductor/test_torchinductor_opinfo.py", line 962, in inner
raise e
File "/var/lib/jenkins/workspace/test/inductor/test_torchinductor_opinfo.py", line 954, in inner
fn(self, device, dtype, op)
File "/var/lib/jenkins/workspace/test/inductor/test_torchinductor_opinfo.py", line 1207, in test_comprehensive
raise e
File "/var/lib/jenkins/workspace/test/inductor/test_torchinductor_opinfo.py", line 1182, in test_comprehensive
self.check_model_gpu(
File "/opt/conda/envs/py_3.12/lib/python3.12/contextlib.py", line 81, in inner
return func(*args, **kwds)
^^^^^^^^^^^^^^^^^^^
File "/var/lib/jenkins/workspace/test/inductor/test_torchinductor.py", line 648, in check_model_gpu
check_model(
File "/var/lib/jenkins/workspace/test/inductor/test_torchinductor.py", line 489, in check_model
actual = run(*example_inputs, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/conda/envs/py_3.12/lib/python3.12/site-packages/torch/_dynamo/eval_frame.py", line 675, in _fn
raise e.remove_dynamo_frames() from None # see TORCHDYNAMO_VERBOSE=1
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/conda/envs/py_3.12/lib/python3.12/site-packages/torch/_inductor/compile_fx.py", line 860, in _compile_fx_inner
raise InductorError(e, currentframe()).with_traceback(
File "/opt/conda/envs/py_3.12/lib/python3.12/site-packages/torch/_inductor/compile_fx.py", line 844, in _compile_fx_inner
mb_compiled_graph = fx_codegen_and_compile(
^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/conda/envs/py_3.12/lib/python3.12/site-packages/torch/_inductor/compile_fx.py", line 1453, in fx_codegen_and_compile
return scheme.codegen_and_compile(gm, example_inputs, inputs_to_check, graph_kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/conda/envs/py_3.12/lib/python3.12/site-packages/torch/_inductor/compile_fx.py", line 1340, in codegen_and_compile
compiled_module = graph.compile_to_module()
^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/conda/envs/py_3.12/lib/python3.12/site-packages/torch/_inductor/graph.py", line 2209, in compile_to_module
return self._compile_to_module()
^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/conda/envs/py_3.12/lib/python3.12/site-packages/torch/_inductor/graph.py", line 2256, in _compile_to_module
mod = PyCodeCache.load_by_key_path(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/conda/envs/py_3.12/lib/python3.12/site-packages/torch/_inductor/codecache.py", line 2998, in load_by_key_path
mod = _reload_python_module(key, path, set_sys_modules=in_toplevel)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/conda/envs/py_3.12/lib/python3.12/site-packages/torch/_inductor/runtime/compile_tasks.py", line 31, in _reload_python_module
exec(code, mod.__dict__, mod.__dict__)
File "/tmp/tmpf1sh4de1/lb/clb7fv6voav27zm4jfnlkgpjp33xdspaz5lpiisne24n6me2scl6.py", line 80, in <module>
async_compile.wait(globals())
File "/opt/conda/envs/py_3.12/lib/python3.12/site-packages/torch/_inductor/async_compile.py", line 446, in wait
self._wait_futures(scope)
File "/opt/conda/envs/py_3.12/lib/python3.12/site-packages/torch/_inductor/async_compile.py", line 466, in _wait_futures
scope[key] = result.result()
^^^^^^^^^^^^^^^
File "/opt/conda/envs/py_3.12/lib/python3.12/site-packages/torch/_inductor/codecache.py", line 3500, in result
return self.result_fn()
^^^^^^^^^^^^^^^^
File "/opt/conda/envs/py_3.12/lib/python3.12/site-packages/torch/_inductor/async_compile.py", line 341, in get_result
kernel.precompile(
File "/opt/conda/envs/py_3.12/lib/python3.12/site-packages/torch/_inductor/runtime/triton_heuristics.py", line 322, in precompile
self._make_launchers()
File "/opt/conda/envs/py_3.12/lib/python3.12/site-packages/torch/_inductor/runtime/triton_heuristics.py", line 479, in _make_launchers
launchers.append(result.make_launcher())
^^^^^^^^^^^^^^^^^^^^^^
File "/opt/conda/envs/py_3.12/lib/python3.12/site-packages/torch/_inductor/runtime/triton_heuristics.py", line 1276, in make_launcher
self.reload_cubin_path()
File "/opt/conda/envs/py_3.12/lib/python3.12/site-packages/torch/_inductor/runtime/triton_heuristics.py", line 1268, in reload_cubin_path
raise RuntimeError(
torch._inductor.exc.InductorError: RuntimeError: ('Cubin file saved by TritonBundler not found at %s', '/tmp/tmpd3de94rl/triton/RJ76XZAHTJZ3ZFKHFK2VI7TIUO4BO6LZOILFVZBUY4QQ7DQRPC3Q/triton_poi_fused_bitwise_right_shift_0.cubin')
Set TORCHDYNAMO_VERBOSE=1 for the internal stack trace (please do this especially if you're reporting a bug to PyTorch). For even more developer context, set TORCH_LOGS="+dynamo"
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "/opt/conda/envs/py_3.12/lib/python3.12/site-packages/torch/testing/_internal/common_utils.py", line 3154, in wrapper
method(*args, **kwargs)
File "/opt/conda/envs/py_3.12/lib/python3.12/site-packages/torch/testing/_internal/common_utils.py", line 3154, in wrapper
method(*args, **kwargs)
File "/opt/conda/envs/py_3.12/lib/python3.12/site-packages/torch/testing/_internal/common_device_type.py", line 426, in instantiated_test
result = test(self, **param_kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/conda/envs/py_3.12/lib/python3.12/site-packages/torch/testing/_internal/common_utils.py", line 1612, in wrapper
fn(*args, **kwargs)
File "/opt/conda/envs/py_3.12/lib/python3.12/site-packages/torch/testing/_internal/common_device_type.py", line 1143, in test_wrapper
raise e_tracked from e
Exception: Caused by sample input at index 0: SampleInput(input=Tensor[size=(), device="cuda:0", dtype=torch.int32], args=TensorList[Tensor[size=(), device="cuda:0", dtype=torch.int32]], kwargs={}, broadcasts_input=False, name='')
To execute this test, run the following from the base repo dir:
PYTORCH_OPINFO_SAMPLE_INPUT_INDEX=0 python test/inductor/test_torchinductor_opinfo.py TestInductorOpInfoCUDA.test_comprehensive_bitwise_right_shift_cuda_int32
This message can be suppressed by setting PYTORCH_PRINT_REPRO_ON_FAILURE=0
```
</details>
Test file path: `inductor/test_torchinductor_opinfo.py`
cc @clee2000 @chauhang @penguinwu @voznesenskym @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @chenyang78 @kadeng @muchulee8 @amjames @aakhundov
| true
|
3,015,325,955
|
DISABLED test_comprehensive_native_layer_norm_cuda_float32 (__main__.TestInductorOpInfoCUDA)
|
pytorch-bot[bot]
|
open
|
[
"triaged",
"module: flaky-tests",
"skipped",
"oncall: pt2",
"module: inductor"
] | 3
|
NONE
|
Platforms: inductor
This test was disabled because it is failing in CI. See [recent examples](https://hud.pytorch.org/flakytest?name=test_comprehensive_native_layer_norm_cuda_float32&suite=TestInductorOpInfoCUDA&limit=100) and the most recent trunk [workflow logs](https://github.com/pytorch/pytorch/runs/41036342342).
Over the past 3 hours, it has been determined flaky in 7 workflow(s) with 7 failures and 7 successes.
**Debugging instructions (after clicking on the recent samples link):**
DO NOT ASSUME THINGS ARE OKAY IF THE CI IS GREEN. We now shield flaky tests from developers so CI will thus be green but it will be harder to parse the logs.
To find relevant log snippets:
1. Click on the workflow logs linked above
2. Click on the Test step of the job so that it is expanded. Otherwise, the grepping will not work.
3. Grep for `test_comprehensive_native_layer_norm_cuda_float32`
4. There should be several instances run (as flaky tests are rerun in CI) from which you can study the logs.
<details><summary>Sample error message</summary>
```
Traceback (most recent call last):
File "/opt/conda/envs/py_3.12/lib/python3.12/site-packages/torch/testing/_internal/common_device_type.py", line 1131, in test_wrapper
return test(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^
File "/opt/conda/envs/py_3.12/lib/python3.12/site-packages/torch/testing/_internal/common_device_type.py", line 1430, in only_fn
return fn(self, *args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/conda/envs/py_3.12/lib/python3.12/site-packages/torch/testing/_internal/common_utils.py", line 2291, in wrapper
fn(*args, **kwargs)
File "/opt/conda/envs/py_3.12/lib/python3.12/site-packages/torch/testing/_internal/common_device_type.py", line 1211, in dep_fn
return fn(slf, *args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/conda/envs/py_3.12/lib/python3.12/site-packages/torch/testing/_internal/common_device_type.py", line 1211, in dep_fn
return fn(slf, *args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/conda/envs/py_3.12/lib/python3.12/site-packages/torch/testing/_internal/common_device_type.py", line 1211, in dep_fn
return fn(slf, *args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/conda/envs/py_3.12/lib/python3.12/site-packages/torch/testing/_internal/common_utils.py", line 1612, in wrapper
fn(*args, **kwargs)
File "/opt/conda/envs/py_3.12/lib/python3.12/site-packages/torch/testing/_internal/common_utils.py", line 1534, in wrapper
fn(*args, **kwargs)
File "/opt/conda/envs/py_3.12/lib/python3.12/unittest/mock.py", line 1396, in patched
return func(*newargs, **newkeywargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/conda/envs/py_3.12/lib/python3.12/contextlib.py", line 81, in inner
return func(*args, **kwds)
^^^^^^^^^^^^^^^^^^^
File "/opt/conda/envs/py_3.12/lib/python3.12/contextlib.py", line 81, in inner
return func(*args, **kwds)
^^^^^^^^^^^^^^^^^^^
File "/opt/conda/envs/py_3.12/lib/python3.12/contextlib.py", line 81, in inner
return func(*args, **kwds)
^^^^^^^^^^^^^^^^^^^
File "/var/lib/jenkins/workspace/test/inductor/test_torchinductor_opinfo.py", line 962, in inner
raise e
File "/var/lib/jenkins/workspace/test/inductor/test_torchinductor_opinfo.py", line 954, in inner
fn(self, device, dtype, op)
File "/var/lib/jenkins/workspace/test/inductor/test_torchinductor_opinfo.py", line 1207, in test_comprehensive
raise e
File "/var/lib/jenkins/workspace/test/inductor/test_torchinductor_opinfo.py", line 1182, in test_comprehensive
self.check_model_gpu(
File "/opt/conda/envs/py_3.12/lib/python3.12/contextlib.py", line 81, in inner
return func(*args, **kwds)
^^^^^^^^^^^^^^^^^^^
File "/var/lib/jenkins/workspace/test/inductor/test_torchinductor.py", line 648, in check_model_gpu
check_model(
File "/var/lib/jenkins/workspace/test/inductor/test_torchinductor.py", line 489, in check_model
actual = run(*example_inputs, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/conda/envs/py_3.12/lib/python3.12/site-packages/torch/_dynamo/eval_frame.py", line 675, in _fn
raise e.remove_dynamo_frames() from None # see TORCHDYNAMO_VERBOSE=1
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/conda/envs/py_3.12/lib/python3.12/site-packages/torch/_inductor/compile_fx.py", line 860, in _compile_fx_inner
raise InductorError(e, currentframe()).with_traceback(
File "/opt/conda/envs/py_3.12/lib/python3.12/site-packages/torch/_inductor/compile_fx.py", line 844, in _compile_fx_inner
mb_compiled_graph = fx_codegen_and_compile(
^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/conda/envs/py_3.12/lib/python3.12/site-packages/torch/_inductor/compile_fx.py", line 1453, in fx_codegen_and_compile
return scheme.codegen_and_compile(gm, example_inputs, inputs_to_check, graph_kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/conda/envs/py_3.12/lib/python3.12/site-packages/torch/_inductor/compile_fx.py", line 1340, in codegen_and_compile
compiled_module = graph.compile_to_module()
^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/conda/envs/py_3.12/lib/python3.12/site-packages/torch/_inductor/graph.py", line 2209, in compile_to_module
return self._compile_to_module()
^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/conda/envs/py_3.12/lib/python3.12/site-packages/torch/_inductor/graph.py", line 2256, in _compile_to_module
mod = PyCodeCache.load_by_key_path(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/conda/envs/py_3.12/lib/python3.12/site-packages/torch/_inductor/codecache.py", line 2998, in load_by_key_path
mod = _reload_python_module(key, path, set_sys_modules=in_toplevel)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/conda/envs/py_3.12/lib/python3.12/site-packages/torch/_inductor/runtime/compile_tasks.py", line 31, in _reload_python_module
exec(code, mod.__dict__, mod.__dict__)
File "/tmp/tmp7n0zla6k/lm/clmwwvttkuj5pvrjxbxsd5oafuvueqkkg7ww2uf4dto47bu2io27.py", line 236, in <module>
async_compile.wait(globals())
File "/opt/conda/envs/py_3.12/lib/python3.12/site-packages/torch/_inductor/async_compile.py", line 446, in wait
self._wait_futures(scope)
File "/opt/conda/envs/py_3.12/lib/python3.12/site-packages/torch/_inductor/async_compile.py", line 466, in _wait_futures
scope[key] = result.result()
^^^^^^^^^^^^^^^
File "/opt/conda/envs/py_3.12/lib/python3.12/site-packages/torch/_inductor/codecache.py", line 3500, in result
return self.result_fn()
^^^^^^^^^^^^^^^^
File "/opt/conda/envs/py_3.12/lib/python3.12/site-packages/torch/_inductor/async_compile.py", line 341, in get_result
kernel.precompile(
File "/opt/conda/envs/py_3.12/lib/python3.12/site-packages/torch/_inductor/runtime/triton_heuristics.py", line 322, in precompile
self._make_launchers()
File "/opt/conda/envs/py_3.12/lib/python3.12/site-packages/torch/_inductor/runtime/triton_heuristics.py", line 479, in _make_launchers
launchers.append(result.make_launcher())
^^^^^^^^^^^^^^^^^^^^^^
File "/opt/conda/envs/py_3.12/lib/python3.12/site-packages/torch/_inductor/runtime/triton_heuristics.py", line 1276, in make_launcher
self.reload_cubin_path()
File "/opt/conda/envs/py_3.12/lib/python3.12/site-packages/torch/_inductor/runtime/triton_heuristics.py", line 1268, in reload_cubin_path
raise RuntimeError(
torch._inductor.exc.InductorError: RuntimeError: ('Cubin file saved by TritonBundler not found at %s', '/tmp/tmpln2kf1fx/triton/ITD4R6QVD67WVUWAQV6YQQDYGMIU4NIUZA3JTYOLAAH5R3ZL3U7Q/triton_poi_fused_native_layer_norm_0.cubin')
Set TORCHDYNAMO_VERBOSE=1 for the internal stack trace (please do this especially if you're reporting a bug to PyTorch). For even more developer context, set TORCH_LOGS="+dynamo"
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "/opt/conda/envs/py_3.12/lib/python3.12/site-packages/torch/testing/_internal/common_utils.py", line 3154, in wrapper
method(*args, **kwargs)
File "/opt/conda/envs/py_3.12/lib/python3.12/site-packages/torch/testing/_internal/common_utils.py", line 3154, in wrapper
method(*args, **kwargs)
File "/opt/conda/envs/py_3.12/lib/python3.12/site-packages/torch/testing/_internal/common_device_type.py", line 426, in instantiated_test
result = test(self, **param_kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/conda/envs/py_3.12/lib/python3.12/site-packages/torch/testing/_internal/common_utils.py", line 1612, in wrapper
fn(*args, **kwargs)
File "/opt/conda/envs/py_3.12/lib/python3.12/site-packages/torch/testing/_internal/common_device_type.py", line 1143, in test_wrapper
raise e_tracked from e
Exception: Caused by sample input at index 4: SampleInput(input=Tensor[size=(2, 2, 3), device="cuda:0", dtype=torch.float32], args=((2,3),Tensor[size=(2, 3), device="cuda:0", dtype=torch.float32],Tensor[size=(2, 3), device="cuda:0", dtype=torch.float32],-0.5), kwargs={}, broadcasts_input=False, name='')
To execute this test, run the following from the base repo dir:
PYTORCH_OPINFO_SAMPLE_INPUT_INDEX=4 python test/inductor/test_torchinductor_opinfo.py TestInductorOpInfoCUDA.test_comprehensive_native_layer_norm_cuda_float32
This message can be suppressed by setting PYTORCH_PRINT_REPRO_ON_FAILURE=0
```
</details>
Test file path: `inductor/test_torchinductor_opinfo.py`
cc @clee2000 @chauhang @penguinwu @voznesenskym @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @chenyang78 @kadeng @muchulee8 @amjames @aakhundov
| true
|
3,015,325,421
|
Test
|
svekars
|
open
|
[
"topic: not user facing"
] | 2
|
CONTRIBUTOR
|
Fixes #ISSUE_NUMBER
| true
|
3,015,324,600
|
DISABLED test_comprehensive_sort_cuda_float32 (__main__.TestInductorOpInfoCUDA)
|
pytorch-bot[bot]
|
open
|
[
"triaged",
"module: flaky-tests",
"skipped",
"oncall: pt2",
"module: inductor"
] | 1
|
NONE
|
Platforms: linux
This test was disabled because it is failing in CI. See [recent examples](https://hud.pytorch.org/flakytest?name=test_comprehensive_sort_cuda_float32&suite=TestInductorOpInfoCUDA&limit=100) and the most recent trunk [workflow logs](https://github.com/pytorch/pytorch/runs/41036519831).
Over the past 3 hours, it has been determined flaky in 3 workflow(s) with 3 failures and 3 successes.
**Debugging instructions (after clicking on the recent samples link):**
DO NOT ASSUME THINGS ARE OKAY IF THE CI IS GREEN. We now shield flaky tests from developers so CI will thus be green but it will be harder to parse the logs.
To find relevant log snippets:
1. Click on the workflow logs linked above
2. Click on the Test step of the job so that it is expanded. Otherwise, the grepping will not work.
3. Grep for `test_comprehensive_sort_cuda_float32`
4. There should be several instances run (as flaky tests are rerun in CI) from which you can study the logs.
<details><summary>Sample error message</summary>
```
Traceback (most recent call last):
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_device_type.py", line 1131, in test_wrapper
return test(*args, **kwargs)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_device_type.py", line 1430, in only_fn
return fn(self, *args, **kwargs)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 2291, in wrapper
fn(*args, **kwargs)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_device_type.py", line 1211, in dep_fn
return fn(slf, *args, **kwargs)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_device_type.py", line 1211, in dep_fn
return fn(slf, *args, **kwargs)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_device_type.py", line 1211, in dep_fn
return fn(slf, *args, **kwargs)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 1612, in wrapper
fn(*args, **kwargs)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 1534, in wrapper
fn(*args, **kwargs)
File "/opt/conda/envs/py_3.10/lib/python3.10/unittest/mock.py", line 1379, in patched
return func(*newargs, **newkeywargs)
File "/opt/conda/envs/py_3.10/lib/python3.10/contextlib.py", line 79, in inner
return func(*args, **kwds)
File "/opt/conda/envs/py_3.10/lib/python3.10/contextlib.py", line 79, in inner
return func(*args, **kwds)
File "/opt/conda/envs/py_3.10/lib/python3.10/contextlib.py", line 79, in inner
return func(*args, **kwds)
File "/var/lib/jenkins/workspace/test/inductor/test_torchinductor_opinfo.py", line 962, in inner
raise e
File "/var/lib/jenkins/workspace/test/inductor/test_torchinductor_opinfo.py", line 954, in inner
fn(self, device, dtype, op)
File "/var/lib/jenkins/workspace/test/inductor/test_torchinductor_opinfo.py", line 1207, in test_comprehensive
raise e
File "/var/lib/jenkins/workspace/test/inductor/test_torchinductor_opinfo.py", line 1182, in test_comprehensive
self.check_model_gpu(
File "/opt/conda/envs/py_3.10/lib/python3.10/contextlib.py", line 79, in inner
return func(*args, **kwds)
File "/var/lib/jenkins/workspace/test/inductor/test_torchinductor.py", line 648, in check_model_gpu
check_model(
File "/var/lib/jenkins/workspace/test/inductor/test_torchinductor.py", line 489, in check_model
actual = run(*example_inputs, **kwargs)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_dynamo/eval_frame.py", line 675, in _fn
raise e.remove_dynamo_frames() from None # see TORCHDYNAMO_VERBOSE=1
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_inductor/compile_fx.py", line 860, in _compile_fx_inner
raise InductorError(e, currentframe()).with_traceback(
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_inductor/compile_fx.py", line 844, in _compile_fx_inner
mb_compiled_graph = fx_codegen_and_compile(
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_inductor/compile_fx.py", line 1453, in fx_codegen_and_compile
return scheme.codegen_and_compile(gm, example_inputs, inputs_to_check, graph_kwargs)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_inductor/compile_fx.py", line 1340, in codegen_and_compile
compiled_module = graph.compile_to_module()
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_inductor/graph.py", line 2209, in compile_to_module
return self._compile_to_module()
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_inductor/graph.py", line 2256, in _compile_to_module
mod = PyCodeCache.load_by_key_path(
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_inductor/codecache.py", line 2998, in load_by_key_path
mod = _reload_python_module(key, path, set_sys_modules=in_toplevel)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_inductor/runtime/compile_tasks.py", line 31, in _reload_python_module
exec(code, mod.__dict__, mod.__dict__)
File "/tmp/tmphnguf6q8/ia/ciarqs3itwl3erauarkh72waychdb6cw3kszdbtuxsmbgj7bu3tc.py", line 108, in <module>
async_compile.wait(globals())
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_inductor/async_compile.py", line 446, in wait
self._wait_futures(scope)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_inductor/async_compile.py", line 466, in _wait_futures
scope[key] = result.result()
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_inductor/codecache.py", line 3500, in result
return self.result_fn()
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_inductor/async_compile.py", line 341, in get_result
kernel.precompile(
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_inductor/runtime/triton_heuristics.py", line 322, in precompile
self._make_launchers()
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_inductor/runtime/triton_heuristics.py", line 479, in _make_launchers
launchers.append(result.make_launcher())
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_inductor/runtime/triton_heuristics.py", line 1276, in make_launcher
self.reload_cubin_path()
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_inductor/runtime/triton_heuristics.py", line 1268, in reload_cubin_path
raise RuntimeError(
torch._inductor.exc.InductorError: RuntimeError: ('Cubin file saved by TritonBundler not found at %s', '/tmp/tmpmtscw5y_/triton/QBSZFMNVZHWTBDR54C5W5EM6E6KMO3EKOZO5BO7PHFML2HSDTPYA/triton_poi_fused_sort_1.cubin')
Set TORCHDYNAMO_VERBOSE=1 for the internal stack trace (please do this especially if you're reporting a bug to PyTorch). For even more developer context, set TORCH_LOGS="+dynamo"
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 3154, in wrapper
method(*args, **kwargs)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 3154, in wrapper
method(*args, **kwargs)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_device_type.py", line 426, in instantiated_test
result = test(self, **param_kwargs)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 1612, in wrapper
fn(*args, **kwargs)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_device_type.py", line 1143, in test_wrapper
raise e_tracked from e
Exception: Caused by sample input at index 26: SampleInput(input=Tensor[size=(), device="cuda:0", dtype=torch.float32], args=(0), kwargs={}, broadcasts_input=False, name='')
To execute this test, run the following from the base repo dir:
PYTORCH_OPINFO_SAMPLE_INPUT_INDEX=26 python test/inductor/test_torchinductor_opinfo.py TestInductorOpInfoCUDA.test_comprehensive_sort_cuda_float32
This message can be suppressed by setting PYTORCH_PRINT_REPRO_ON_FAILURE=0
```
</details>
Test file path: `inductor/test_torchinductor_opinfo.py`
cc @clee2000 @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
| true
|
3,015,324,527
|
DISABLED test_comprehensive_nn_functional_max_pool3d_cuda_float32 (__main__.TestInductorOpInfoCUDA)
|
pytorch-bot[bot]
|
open
|
[
"triaged",
"module: flaky-tests",
"skipped",
"oncall: pt2",
"module: inductor"
] | 3
|
NONE
|
Platforms: linux, slow
This test was disabled because it is failing in CI. See [recent examples](https://hud.pytorch.org/flakytest?name=test_comprehensive_nn_functional_max_pool3d_cuda_float32&suite=TestInductorOpInfoCUDA&limit=100) and the most recent trunk [workflow logs](https://github.com/pytorch/pytorch/runs/41035465928).
Over the past 3 hours, it has been determined flaky in 5 workflow(s) with 5 failures and 5 successes.
**Debugging instructions (after clicking on the recent samples link):**
DO NOT ASSUME THINGS ARE OKAY IF THE CI IS GREEN. We now shield flaky tests from developers so CI will thus be green but it will be harder to parse the logs.
To find relevant log snippets:
1. Click on the workflow logs linked above
2. Click on the Test step of the job so that it is expanded. Otherwise, the grepping will not work.
3. Grep for `test_comprehensive_nn_functional_max_pool3d_cuda_float32`
4. There should be several instances run (as flaky tests are rerun in CI) from which you can study the logs.
<details><summary>Sample error message</summary>
```
Traceback (most recent call last):
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_device_type.py", line 1131, in test_wrapper
return test(*args, **kwargs)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_device_type.py", line 1430, in only_fn
return fn(self, *args, **kwargs)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 2291, in wrapper
fn(*args, **kwargs)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_device_type.py", line 1211, in dep_fn
return fn(slf, *args, **kwargs)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_device_type.py", line 1211, in dep_fn
return fn(slf, *args, **kwargs)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_device_type.py", line 1211, in dep_fn
return fn(slf, *args, **kwargs)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 1612, in wrapper
fn(*args, **kwargs)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 1534, in wrapper
fn(*args, **kwargs)
File "/opt/conda/envs/py_3.10/lib/python3.10/unittest/mock.py", line 1379, in patched
return func(*newargs, **newkeywargs)
File "/opt/conda/envs/py_3.10/lib/python3.10/contextlib.py", line 79, in inner
return func(*args, **kwds)
File "/opt/conda/envs/py_3.10/lib/python3.10/contextlib.py", line 79, in inner
return func(*args, **kwds)
File "/opt/conda/envs/py_3.10/lib/python3.10/contextlib.py", line 79, in inner
return func(*args, **kwds)
File "/var/lib/jenkins/workspace/test/inductor/test_torchinductor_opinfo.py", line 962, in inner
raise e
File "/var/lib/jenkins/workspace/test/inductor/test_torchinductor_opinfo.py", line 954, in inner
fn(self, device, dtype, op)
File "/var/lib/jenkins/workspace/test/inductor/test_torchinductor_opinfo.py", line 1207, in test_comprehensive
raise e
File "/var/lib/jenkins/workspace/test/inductor/test_torchinductor_opinfo.py", line 1182, in test_comprehensive
self.check_model_gpu(
File "/opt/conda/envs/py_3.10/lib/python3.10/contextlib.py", line 79, in inner
return func(*args, **kwds)
File "/var/lib/jenkins/workspace/test/inductor/test_torchinductor.py", line 648, in check_model_gpu
check_model(
File "/var/lib/jenkins/workspace/test/inductor/test_torchinductor.py", line 489, in check_model
actual = run(*example_inputs, **kwargs)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_dynamo/eval_frame.py", line 675, in _fn
raise e.remove_dynamo_frames() from None # see TORCHDYNAMO_VERBOSE=1
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_inductor/compile_fx.py", line 860, in _compile_fx_inner
raise InductorError(e, currentframe()).with_traceback(
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_inductor/compile_fx.py", line 844, in _compile_fx_inner
mb_compiled_graph = fx_codegen_and_compile(
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_inductor/compile_fx.py", line 1453, in fx_codegen_and_compile
return scheme.codegen_and_compile(gm, example_inputs, inputs_to_check, graph_kwargs)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_inductor/compile_fx.py", line 1340, in codegen_and_compile
compiled_module = graph.compile_to_module()
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_inductor/graph.py", line 2209, in compile_to_module
return self._compile_to_module()
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_inductor/graph.py", line 2256, in _compile_to_module
mod = PyCodeCache.load_by_key_path(
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_inductor/codecache.py", line 2998, in load_by_key_path
mod = _reload_python_module(key, path, set_sys_modules=in_toplevel)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_inductor/runtime/compile_tasks.py", line 31, in _reload_python_module
exec(code, mod.__dict__, mod.__dict__)
File "/tmp/torchinductor_jenkins/hj/chjbayfdlxailjcfln2wl5qoisiw3v344yicggjc2o5n66mvl2aq.py", line 250, in <module>
async_compile.wait(globals())
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_inductor/async_compile.py", line 446, in wait
self._wait_futures(scope)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_inductor/async_compile.py", line 466, in _wait_futures
scope[key] = result.result()
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_inductor/codecache.py", line 3500, in result
return self.result_fn()
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_inductor/async_compile.py", line 341, in get_result
kernel.precompile(
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_inductor/runtime/triton_heuristics.py", line 322, in precompile
self._make_launchers()
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_inductor/runtime/triton_heuristics.py", line 479, in _make_launchers
launchers.append(result.make_launcher())
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_inductor/runtime/triton_heuristics.py", line 1276, in make_launcher
self.reload_cubin_path()
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_inductor/runtime/triton_heuristics.py", line 1268, in reload_cubin_path
raise RuntimeError(
torch._inductor.exc.InductorError: RuntimeError: ('Cubin file saved by TritonBundler not found at %s', '/tmp/torchinductor_jenkins/triton/0/LOXVQKYJOOWWEQNLEK3Y4RTYHB5MVDEEF3CX5X5CJJIRWGHGQQWQ/triton_per_fused_max_pool3d_with_indices_0.cubin')
Set TORCHDYNAMO_VERBOSE=1 for the internal stack trace (please do this especially if you're reporting a bug to PyTorch). For even more developer context, set TORCH_LOGS="+dynamo"
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 3154, in wrapper
method(*args, **kwargs)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 3154, in wrapper
method(*args, **kwargs)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_device_type.py", line 426, in instantiated_test
result = test(self, **param_kwargs)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 1612, in wrapper
fn(*args, **kwargs)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_device_type.py", line 1143, in test_wrapper
raise e_tracked from e
Exception: Caused by sample input at index 0: SampleInput(input=Tensor[size=(1, 2, 3, 6, 5), device="cuda:0", dtype=torch.float32], args=(), kwargs={'kernel_size': '3', 'stride': '2', 'ceil_mode': 'True', 'padding': '0', 'dilation': '1', 'return_indices': 'True'}, broadcasts_input=False, name='')
To execute this test, run the following from the base repo dir:
PYTORCH_OPINFO_SAMPLE_INPUT_INDEX=0 PYTORCH_TEST_WITH_SLOW=1 PYTORCH_TEST_SKIP_FAST=1 python test/inductor/test_torchinductor_opinfo.py TestInductorOpInfoCUDA.test_comprehensive_nn_functional_max_pool3d_cuda_float32
This message can be suppressed by setting PYTORCH_PRINT_REPRO_ON_FAILURE=0
```
</details>
Test file path: `inductor/test_torchinductor_opinfo.py`
cc @clee2000 @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
| true
|
3,015,324,465
|
DISABLED test_index_multiple_cuda (__main__.TestFlexAttentionCUDA)
|
pytorch-bot[bot]
|
closed
|
[
"triaged",
"module: flaky-tests",
"skipped",
"oncall: pt2",
"module: inductor"
] | 2
|
NONE
|
Platforms: linux
This test was disabled because it is failing in CI. See [recent examples](https://hud.pytorch.org/flakytest?name=test_index_multiple_cuda&suite=TestFlexAttentionCUDA&limit=100) and the most recent trunk [workflow logs](https://github.com/pytorch/pytorch/runs/41035547301).
Over the past 3 hours, it has been determined flaky in 10 workflow(s) with 20 failures and 10 successes.
**Debugging instructions (after clicking on the recent samples link):**
DO NOT ASSUME THINGS ARE OKAY IF THE CI IS GREEN. We now shield flaky tests from developers so CI will thus be green but it will be harder to parse the logs.
To find relevant log snippets:
1. Click on the workflow logs linked above
2. Click on the Test step of the job so that it is expanded. Otherwise, the grepping will not work.
3. Grep for `test_index_multiple_cuda`
4. There should be several instances run (as flaky tests are rerun in CI) from which you can study the logs.
Test file path: `inductor/test_flex_attention.py`
cc @clee2000 @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
| true
|
3,015,324,396
|
DISABLED test_builtin_score_mods_float32_score_mod7_cuda_float32 (__main__.TestFlexAttentionCUDA)
|
pytorch-bot[bot]
|
closed
|
[
"triaged",
"module: flaky-tests",
"skipped",
"oncall: pt2",
"module: inductor"
] | 2
|
NONE
|
Platforms: linux
This test was disabled because it is failing in CI. See [recent examples](https://hud.pytorch.org/flakytest?name=test_builtin_score_mods_float32_score_mod7_cuda_float32&suite=TestFlexAttentionCUDA&limit=100) and the most recent trunk [workflow logs](https://github.com/pytorch/pytorch/runs/41032594592).
Over the past 3 hours, it has been determined flaky in 10 workflow(s) with 20 failures and 10 successes.
**Debugging instructions (after clicking on the recent samples link):**
DO NOT ASSUME THINGS ARE OKAY IF THE CI IS GREEN. We now shield flaky tests from developers so CI will thus be green but it will be harder to parse the logs.
To find relevant log snippets:
1. Click on the workflow logs linked above
2. Click on the Test step of the job so that it is expanded. Otherwise, the grepping will not work.
3. Grep for `test_builtin_score_mods_float32_score_mod7_cuda_float32`
4. There should be several instances run (as flaky tests are rerun in CI) from which you can study the logs.
<details><summary>Sample error message</summary>
```
Traceback (most recent call last):
File "/var/lib/jenkins/workspace/test/inductor/test_flex_attention.py", line 1127, in test_builtin_score_mods
self.run_test(score_mod, dtype, device=device)
File "/var/lib/jenkins/workspace/test/inductor/test_flex_attention.py", line 509, in run_test
golden_out.backward(backward_grad.to(torch.float64))
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_tensor.py", line 648, in backward
torch.autograd.backward(
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/autograd/__init__.py", line 354, in backward
_engine_run_backward(
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/autograd/graph.py", line 824, in _engine_run_backward
return Variable._execution_engine.run_backward( # Calls into the C++ engine to run the backward pass
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/autograd/function.py", line 307, in apply
return user_fn(self, *args)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_higher_order_ops/flex_attention.py", line 679, in backward
) = flex_attention_backward(
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_higher_order_ops/flex_attention.py", line 132, in __call__
return super().__call__(
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_ops.py", line 501, in __call__
return wrapper()
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_ops.py", line 497, in wrapper
return self.dispatch(
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_ops.py", line 357, in dispatch
return kernel(*args, **kwargs)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_ops.py", line 320, in maybe_run_autograd
return self(*args, **kwargs)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_higher_order_ops/flex_attention.py", line 132, in __call__
return super().__call__(
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_ops.py", line 501, in __call__
return wrapper()
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_ops.py", line 497, in wrapper
return self.dispatch(
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_ops.py", line 357, in dispatch
return kernel(*args, **kwargs)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_higher_order_ops/flex_attention.py", line 847, in sdpa_dense_backward
grad_softmax_scores - sum_scores + grad_logsumexp.unsqueeze(-1)
torch.OutOfMemoryError: CUDA out of memory. Tried to allocate 1024.00 MiB. GPU 0 has a total capacity of 22.05 GiB of which 542.12 MiB is free. Including non-PyTorch memory, this process has 21.51 GiB memory in use. Of the allocated memory 5.88 GiB is allocated by PyTorch, and 15.37 GiB is reserved by PyTorch but unallocated. If reserved but unallocated memory is large try setting PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True to avoid fragmentation. See documentation for Memory Management (https://pytorch.org/docs/stable/notes/cuda.html#environment-variables)
To execute this test, run the following from the base repo dir:
python test/inductor/test_flex_attention.py TestFlexAttentionCUDA.test_builtin_score_mods_float32_score_mod7_cuda_float32
This message can be suppressed by setting PYTORCH_PRINT_REPRO_ON_FAILURE=0
```
</details>
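The error above ends with the allocator hint about `PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True`. A minimal sketch of applying that hint from Python, assuming it is set before the first CUDA allocation (the caching allocator reads the variable when it initialises); this is illustrative, not part of the test harness:
```python
# Hypothetical driver preamble: apply the allocator hint from the error message.
# It must be in the environment before the CUDA caching allocator initialises,
# i.e. before the first CUDA allocation in the process.
import os
os.environ.setdefault("PYTORCH_CUDA_ALLOC_CONF", "expandable_segments:True")

import torch

if torch.cuda.is_available():
    x = torch.empty(1024, device="cuda")  # first allocation picks up the setting
```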
Test file path: `inductor/test_flex_attention.py`
cc @clee2000 @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
| true
|
3,015,324,338
|
DISABLED test_builtin_score_mods_float32_score_mod2_cuda_float32 (__main__.TestFlexAttentionCUDA)
|
pytorch-bot[bot]
|
closed
|
[
"triaged",
"module: flaky-tests",
"skipped",
"oncall: pt2",
"module: inductor"
] | 2
|
NONE
|
Platforms: linux
This test was disabled because it is failing in CI. See [recent examples](https://hud.pytorch.org/flakytest?name=test_builtin_score_mods_float32_score_mod2_cuda_float32&suite=TestFlexAttentionCUDA&limit=100) and the most recent trunk [workflow logs](https://github.com/pytorch/pytorch/runs/41036274598).
Over the past 3 hours, it has been determined flaky in 10 workflow(s) with 20 failures and 10 successes.
**Debugging instructions (after clicking on the recent samples link):**
DO NOT ASSUME THINGS ARE OKAY IF THE CI IS GREEN. We now shield flaky tests from developers so CI will thus be green but it will be harder to parse the logs.
To find relevant log snippets:
1. Click on the workflow logs linked above
2. Click on the Test step of the job so that it is expanded. Otherwise, the grepping will not work.
3. Grep for `test_builtin_score_mods_float32_score_mod2_cuda_float32`
4. There should be several instances run (as flaky tests are rerun in CI) from which you can study the logs.
<details><summary>Sample error message</summary>
```
Traceback (most recent call last):
File "/var/lib/jenkins/workspace/test/inductor/test_flex_attention.py", line 1127, in test_builtin_score_mods
self.run_test(score_mod, dtype, device=device)
File "/var/lib/jenkins/workspace/test/inductor/test_flex_attention.py", line 509, in run_test
golden_out.backward(backward_grad.to(torch.float64))
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_tensor.py", line 648, in backward
torch.autograd.backward(
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/autograd/__init__.py", line 354, in backward
_engine_run_backward(
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/autograd/graph.py", line 824, in _engine_run_backward
return Variable._execution_engine.run_backward( # Calls into the C++ engine to run the backward pass
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/autograd/function.py", line 307, in apply
return user_fn(self, *args)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_higher_order_ops/flex_attention.py", line 679, in backward
) = flex_attention_backward(
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_higher_order_ops/flex_attention.py", line 132, in __call__
return super().__call__(
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_ops.py", line 501, in __call__
return wrapper()
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_ops.py", line 497, in wrapper
return self.dispatch(
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_ops.py", line 357, in dispatch
return kernel(*args, **kwargs)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_ops.py", line 320, in maybe_run_autograd
return self(*args, **kwargs)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_higher_order_ops/flex_attention.py", line 132, in __call__
return super().__call__(
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_ops.py", line 501, in __call__
return wrapper()
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_ops.py", line 497, in wrapper
return self.dispatch(
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_ops.py", line 357, in dispatch
return kernel(*args, **kwargs)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_higher_order_ops/flex_attention.py", line 847, in sdpa_dense_backward
grad_softmax_scores - sum_scores + grad_logsumexp.unsqueeze(-1)
torch.OutOfMemoryError: CUDA out of memory. Tried to allocate 1024.00 MiB. GPU 0 has a total capacity of 22.05 GiB of which 412.12 MiB is free. Including non-PyTorch memory, this process has 21.63 GiB memory in use. Of the allocated memory 5.80 GiB is allocated by PyTorch, and 15.57 GiB is reserved by PyTorch but unallocated. If reserved but unallocated memory is large try setting PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True to avoid fragmentation. See documentation for Memory Management (https://pytorch.org/docs/stable/notes/cuda.html#environment-variables)
To execute this test, run the following from the base repo dir:
python test/inductor/test_flex_attention.py TestFlexAttentionCUDA.test_builtin_score_mods_float32_score_mod2_cuda_float32
This message can be suppressed by setting PYTORCH_PRINT_REPRO_ON_FAILURE=0
```
</details>
Test file path: `inductor/test_flex_attention.py`
cc @clee2000 @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
| true
|
3,015,323,533
|
unbreak fb:operator_benchmark_test
|
sharpobject
|
open
|
[
"fb-exported",
"ciflow/trunk",
"topic: not user facing"
] | 7
|
NONE
|
Summary: unbreak fb:operator_benchmark_test
Test Plan: works on my machine
Differential Revision: D73540912
| true
|
3,015,292,815
|
[Graph Partition] Pass all cudagraph tree tests
|
BoyuanFeng
|
open
|
[
"oncall: distributed",
"ciflow/trunk",
"topic: not user facing",
"module: inductor",
"module: dynamo",
"ciflow/inductor"
] | 1
|
CONTRIBUTOR
|
Fixes #ISSUE_NUMBER
cc @H-Huang @awgu @wanchaol @fegin @fduwjj @wz337 @wconstab @d4l3k @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
| true
|
3,015,258,285
|
[Build] fix functorch install dir
|
stefantalpalaru
|
closed
|
[
"triaged",
"open source",
"topic: not user facing"
] | 4
|
NONE
| null | true
|
3,015,253,927
|
Pin theme to a branch
|
svekars
|
closed
|
[
"module: docs",
"Merged",
"ciflow/trunk",
"topic: docs",
"topic: not user facing"
] | 3
|
CONTRIBUTOR
|
cc @sekyondaMeta @AlannaBurke
| true
|
3,015,240,302
|
[DTensor] make test_dtensor_ops report dtensor_args
|
wconstab
|
closed
|
[
"oncall: distributed",
"Merged",
"ciflow/trunk",
"topic: not user facing",
"merging"
] | 7
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #149764
* __->__ #152045
Before:
Does not report DTensor args, and you can't tell which combination of
sharding/replication is used for that particular iteration
```
RuntimeError: failed to run: torch.flatten, with (*[tensor([[[-6.1074e-01, 1.1260e+00, 1.7686e+00, -7.8216e+
[ 8.8558e-01, -3.0949e+00, -5.4584e+00, -8.5322e+00],
[-2.9770e-01, -3.2814e+00, -7.5875e+00, -8.1269e+00],
[-6.0136e+00, -5.1712e+00, -4.2667e+00, -4.2142e+00]],
[[-7.5171e+00, 5.3900e+00, -7.9208e+00, 6.1000e+00],
[-1.7350e+00, -3.6188e-03, -7.1592e+00, 9.2951e-02],
[ 5.7143e+00, -3.0805e+00, 7.6227e+00, -7.4862e+00],
[ 4.3167e-01, -4.9678e+00, -1.2441e+00, -2.3042e+00]],
[[-7.4280e+00, -2.7754e+00, -5.2989e+00, -6.1920e+00],
[-2.5225e+00, -5.2520e+00, 6.5686e+00, -6.0350e+00],
[-5.1740e+00, -1.6405e+00, -4.4463e+00, -5.1884e+00],
[ 3.9581e+00, -6.3151e-01, -3.3223e+00, 4.0546e+00]],
[[-2.8112e+00, 3.8742e+00, -4.4612e+00, -5.0016e+00],
[ 7.0568e+00, -2.0951e-01, -8.0049e+00, -4.1438e+00],
[ 3.1207e+00, -7.6518e+00, 7.1084e+00, -1.0500e+00],
[ 8.8823e+00, -1.1178e+00, 4.8485e+00, -8.8593e+00]]],
requires_grad=True)], **{})
```
After:
You can see the particular DTensor spec that failed
```
RuntimeError: failed to run: torch.flatten, with (*[DTensor(local_tensor=tensor([[[-6.0136, -5.1712, -4.2667,
[[ 0.4317, -4.9678, -1.2441, -2.3042]],
[[ 3.9581, -0.6315, -3.3223, 4.0546]],
[[ 8.8823, -1.1178, 4.8485, -8.8593]]], requires_grad=True),
device_mesh=DeviceMesh('cpu', [0, 1, 2,3]), placements=(Shard(dim=1),))], **{})
```
cc @H-Huang @awgu @wanchaol @fegin @fduwjj @wz337 @d4l3k
| true
|
3,015,240,169
|
Move verbose warning to warning_once
|
wconstab
|
closed
|
[
"oncall: distributed",
"Merged",
"ciflow/trunk",
"topic: not user facing",
"ciflow/inductor"
] | 3
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* (to be filled)
It was printing 1000s of lines for me.
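For context, a minimal Python analogue of the once-per-process semantics this change switches to (the C++ side uses `TORCH_WARN_ONCE`; this sketch is illustrative, not the PR's code):
```python
import warnings

_seen_messages: set = set()

def warn_once(msg: str) -> None:
    # Emit each distinct message at most once per process, so a warning fired
    # from a hot loop does not flood the logs with thousands of lines.
    if msg not in _seen_messages:
        _seen_messages.add(msg)
        warnings.warn(msg, stacklevel=2)

for _ in range(1000):
    warn_once("Grad strides do not match bucket view strides.")  # emitted once
```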
cc @H-Huang @awgu @wanchaol @fegin @fduwjj @wz337 @d4l3k
| true
|
3,015,219,633
|
[DO NOT LAND] Use cudaGetDevice in OSSProxyExecutor
|
yiming0416
|
closed
|
[
"fb-exported",
"ciflow/inductor",
"release notes: inductor (aoti)"
] | 10
|
CONTRIBUTOR
|
Summary: I am trying to use `cudaGetDevice()` in `oss_proxy_executor.cpp` and guard it under the macro `USE_CUDA`. However, it seems the code under `USE_CUDA` is never invoked even though I built PyTorch on a GPU machine. The `device_idx` remains -1; ideally it should change to `0` after `cudaGetDevice()` is called.
Test Plan: CI
Differential Revision: D73537817
| true
|
3,015,211,547
|
distrubuted: false positive Grad strides vs Bucket strides warning
|
nikitaved
|
open
|
[
"oncall: distributed"
] | 1
|
COLLABORATOR
|
### 🐛 Describe the bug
I am training a model on a single node with 4 GPUs using HF [Accelerate](https://github.com/huggingface/accelerate) through SLURM, and this is the warning message I get:
```
/my_cluster_folder/site-packages/torch/autograd/graph.py:824: UserWarning: Grad strides do not match bucket view strides. This may indicate grad was not created according to the gradient layout contract, or that the param's strides changed since DDP was constructed. This is not an error, but may impair performance.
grad.sizes() = [768, 1], strides() = [1, 1]
bucket_view.sizes() = [768, 1], strides() = [1, 768] (Triggered internally at /pytorch/torch/csrc/distributed/c10d/reducer.cpp:331.)
```
Technically speaking, the contract is not breached.
It is not necessarily a bug, but the warning message could be improved for such cases, as the code seems to directly compare strides: https://github.com/pytorch/pytorch/blob/562328501e167206dc7d4b16895b5ae538520e06/torch/csrc/distributed/c10d/reducer.cpp#L330-L332
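A small stride repro of why the strict comparison can fire spuriously when a dimension has size 1 (illustrative sketch, not code from the linked file):
```python
import torch

# grad as autograd typically produces it: contiguous [768, 1] -> strides (1, 1)
grad = torch.randn(768, 1)

# a bucket view carved out of a transposed [1, 768] slab: same shape, strides (1, 768)
bucket_view = torch.randn(1, 768).t()

print(grad.stride())         # (1, 1)
print(bucket_view.stride())  # (1, 768)

# The second dimension has size 1, so its stride never changes which element is
# addressed; the two layouts are memory-equivalent even though an element-wise
# stride comparison reports a mismatch.
assert torch.equal(grad.flatten(), grad.as_strided((768, 1), (1, 768)).flatten())
```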
### Versions
```
PyTorch version: 2.8.0.dev20250422+cu128
Is debug build: False
CUDA used to build PyTorch: 12.8
ROCM used to build PyTorch: N/A
OS: Rocky Linux 9.5 (Blue Onyx) (x86_64)
GCC version: (conda-forge gcc 13.3.0-2) 13.3.0
Clang version: Could not collect
CMake version: version 3.31.4
Libc version: glibc-2.34
Python version: 3.13.2 | packaged by conda-forge | (main, Feb 17 2025, 14:10:22) [GCC 13.3.0] (64-bit runtime)
Python platform: Linux-5.14.0-503.23.1.el9_5.x86_64-x86_64-with-glibc2.34
Is CUDA available: False
CUDA runtime version: 12.8.61
CUDA_MODULE_LOADING set to: N/A
GPU models and configuration: Could not collect
Nvidia driver version: Could not collect
cuDNN version: Could not collect
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 43 bits physical, 48 bits virtual
Byte Order: Little Endian
CPU(s): 96
On-line CPU(s) list: 0-95
Vendor ID: AuthenticAMD
Model name: AMD EPYC 7402 24-Core Processor
CPU family: 23
Model: 49
Thread(s) per core: 2
Core(s) per socket: 24
Socket(s): 2
Stepping: 0
Frequency boost: enabled
CPU(s) scaling MHz: 61%
CPU max MHz: 2800.0000
CPU min MHz: 1500.0000
BogoMIPS: 5599.65
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ht syscall nx mmxext fxsr_opt pdpe1gb rdtscp lm constant_tsc rep_good nopl nonstop_tsc cpuid extd_apicid aperfmperf rapl pni pclmulqdq monitor ssse3 fma cx16 sse4_1 sse4_2 movbe popcnt aes xsave avx f16c rdrand lahf_lm cmp_legacy svm extapic cr8_legacy abm sse4a misalignsse 3dnowprefetch osvw ibs skinit wdt tce topoext perfctr_core perfctr_nb bpext perfctr_llc mwaitx cpb cat_l3 cdp_l3 hw_pstate ssbd mba ibrs ibpb stibp vmmcall fsgsbase bmi1 avx2 smep bmi2 cqm rdt_a rdseed adx smap clflushopt clwb sha_ni xsaveopt xsavec xgetbv1 cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local clzero irperf xsaveerptr rdpru wbnoinvd amd_ppin arat npt lbrv svm_lock nrip_save tsc_scale vmcb_clean flushbyasid decodeassists pausefilter pfthreshold avic v_vmsave_vmload vgif v_spec_ctrl umip rdpid overflow_recov succor smca sev sev_es
Virtualization: AMD-V
L1d cache: 1.5 MiB (48 instances)
L1i cache: 1.5 MiB (48 instances)
L2 cache: 24 MiB (48 instances)
L3 cache: 256 MiB (16 instances)
NUMA node(s): 2
NUMA node0 CPU(s): 0-23,48-71
NUMA node1 CPU(s): 24-47,72-95
Vulnerability Gather data sampling: Not affected
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Not affected
Vulnerability Reg file data sampling: Not affected
Vulnerability Retbleed: Mitigation; untrained return thunk; SMT enabled with STIBP protection
Vulnerability Spec rstack overflow: Mitigation; Safe RET
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Retpolines; IBPB conditional; STIBP always-on; RSB filling; PBRSB-eIBRS Not affected; BHI Not affected
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Versions of relevant libraries:
[pip3] numpy==2.2.3
[pip3] nvidia-cublas-cu12==12.8.3.14
[pip3] nvidia-cuda-cupti-cu12==12.8.57
[pip3] nvidia-cuda-nvrtc-cu12==12.8.61
[pip3] nvidia-cuda-runtime-cu12==12.8.57
[pip3] nvidia-cudnn-cu12==9.8.0.87
[pip3] nvidia-cufft-cu12==11.3.3.41
[pip3] nvidia-curand-cu12==10.3.9.55
[pip3] nvidia-cusolver-cu12==11.7.2.55
[pip3] nvidia-cusparse-cu12==12.5.7.53
[pip3] nvidia-cusparselt-cu12==0.6.3
[pip3] nvidia-nccl-cu12==2.26.2
[pip3] nvidia-nvjitlink-cu12==12.8.61
[pip3] nvidia-nvtx-cu12==12.8.55
[pip3] optree==0.14.0
[pip3] pytorch-triton==3.3.0+git96316ce5
[pip3] torch==2.8.0.dev20250422+cu128
[pip3] torchaudio==2.6.0.dev20250422+cu128
[pip3] torchvision==0.22.0.dev20250422+cu128
[pip3] triton==3.2.0
[conda] Could not collect
```
cc @H-Huang @awgu @wanchaol @fegin @fduwjj @wz337 @wconstab @d4l3k
| true
|
3,015,170,919
|
[map] always turn on dynamo for map
|
ydwu4
|
open
|
[
"fb-exported",
"ciflow/trunk",
"topic: not user facing",
"module: dynamo",
"ciflow/inductor"
] | 7
|
CONTRIBUTOR
|
Summary:
X-link: https://github.com/pytorch/executorch/pull/10409
Reland D72896450
Make map consistent with other control flow ops. After the change, map is able to support accessing closures in the map fn.
Test Plan: See existing tests.
Reviewed By: zou3519
Differential Revision: D73138427
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @chauhang @amjames
| true
|
3,015,145,214
|
Improve stable library apis per Scott's feedback
|
janeyx99
|
closed
|
[
"Merged",
"ciflow/trunk",
"ciflow/inductor",
"release notes: inductor (aoti)"
] | 6
|
CONTRIBUTOR
|
Following 3 suggestions:
1. inline at::Tensor arg
2. use `std::unique_ptr` of an array vs `std::vector`
3. document the `std::optional<S>()` case
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #152040
| true
|
3,015,136,256
|
Fix GuardOnDataDependentSymNode in the normalize operator
|
henryoier
|
closed
|
[
"fb-exported",
"Merged",
"ciflow/trunk",
"topic: not user facing"
] | 17
|
CONTRIBUTOR
|
Test Plan:
Dumped the local net torch.package to local
Ran
```
buck2 run scripts/shengqin:test_model_export -- /tmp/mtia_local_torch_package {\"local\":null}
```
succeeded
Reviewed By: hongyang-zhao
Differential Revision: D73405271
| true
|
3,015,135,494
|
[AOTInductor] Inherit Buffer if not being updated
|
22quinn
|
closed
|
[
"fb-exported",
"ciflow/trunk",
"ciflow/inductor",
"release notes: inductor (aoti)"
] | 6
|
CONTRIBUTOR
|
Summary: Inherit buffer from original constants buffer if it's not being updated.
Test Plan: TBD
@diff-train-skip-merge
| true
|
3,015,131,954
|
Re-enable FakeTensor caching for SymInts
|
aorenste
|
closed
|
[
"fb-exported",
"topic: not user facing",
"module: dynamo",
"ciflow/inductor"
] | 5
|
CONTRIBUTOR
|
Summary:
This backs out D60320595 which itself turned off FakeTensor caching when a SymInt was present.
Tests seem to pass so I'm assuming some dynamic shape work fixed what was breaking previously.
Test Plan: Reran the tests listed in T196779132 and they seem to pass.
Differential Revision: D73532965
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @chauhang @amjames
| true
|