| id (int64, 2.74B–3.05B) | title (string, 1–255 chars) | user (string, 2–26 chars) | state (string, 2 classes) | labels (list, 0–24 items) | comments (int64, 0–206) | author_association (string, 4 classes) | body (string, 7–62.5k chars, nullable) | is_title (bool, 1 class) |
|---|---|---|---|---|---|---|---|---|
2,954,343,638
|
[Release-only] Pin intel-oneapi-dnnl to 2025.0.1-6
|
chuanqi129
|
closed
|
[
"open source",
"topic: not user facing",
"ciflow/xpu"
] | 1
|
COLLABORATOR
|
To fix CI builds. Addresses https://github.com/pytorch/pytorch/issues/149995 for release/2.7 branch
| true
|
2,954,226,997
|
[Easy/Profiler] Set Duration to -1 for unfinished CPU events
|
sraikund16
|
closed
|
[
"enhancement",
"fb-exported",
"Merged",
"ciflow/trunk",
"release notes: profiler"
] | 4
|
CONTRIBUTOR
|
Summary: Some OSS Kineto users requested that we allow 0-duration events in Kineto even though they won't be seen on the trace. To allow this, we changed the handling of such events in D71510383. However, that change causes unfinished events in collection to never be post-processed; this diff fixes the issue.
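A minimal sketch of the approach described above, assuming a simple event-dict representation (the field names are illustrative, not the profiler's real schema): unfinished CPU events get a sentinel duration of -1 so post-processing still visits them instead of silently dropping them.
```python
def finalize_cpu_events(events):
    # events: list of dicts with "start_ns" and (possibly missing) "end_ns" timestamps.
    for ev in events:
        if ev.get("end_ns") is None:   # unfinished CPU event
            ev["duration_ns"] = -1     # sentinel marking "unfinished"
        else:
            ev["duration_ns"] = ev["end_ns"] - ev["start_ns"]
    return events
```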
Test Plan: https://www.internalfb.com/intern/perfdoctor/trace_view?filepath=tree/traces/dynocli/0/1743102222/localhost/libkineto_activities_631490.json.gz&bucket=gpu_traces
Differential Revision: D71993609
| true
|
2,954,223,980
|
[PGNCCL][BE] Merge mutex into TensorShelf for encapsulation
|
kwen2501
|
closed
|
[
"oncall: distributed",
"release notes: distributed (c10d)"
] | 3
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #150130
* #150079
* #148590
cc @H-Huang @awgu @wanchaol @fegin @fduwjj @wz337 @wconstab @d4l3k @c-p-i-o
Differential Revision: [](https://our.internmc.facebook.com/intern/diff/)
| true
|
2,954,125,396
|
Add one_shot_all_reduce_copy to allow non-symm-mem allocated tensors to be reduced
|
ngimel
|
closed
|
[
"oncall: distributed",
"Merged",
"Reverted",
"ciflow/trunk",
"release notes: distributed (c10d)",
"ci-no-td"
] | 12
|
COLLABORATOR
|
Per title, we want to be able to use it even if inputs are not registered. Separate copy would add latency, and one-shot is all about the lowest possible latency.
cc @H-Huang @awgu @kwen2501 @wanchaol @fegin @fduwjj @wz337 @wconstab @d4l3k @c-p-i-o
| true
|
2,954,098,470
|
Revert "Parallelize sort"
|
ZainRizvi
|
closed
|
[] | 1
|
CONTRIBUTOR
|
Reverts pytorch/pytorch#149765
Reverting because it breaks inductor tests. Details in https://github.com/pytorch/pytorch/pull/149505#issuecomment-2759082390
| true
|
2,953,970,030
|
[dynamic shapes] guard_or_false for _reshape_view_helper, utils._infer_size for wildcard dims
|
pianpwk
|
closed
|
[
"Merged",
"Reverted",
"ciflow/trunk",
"release notes: export",
"ci-no-td"
] | 32
|
CONTRIBUTOR
|
For reshape/view: removes the fast paths for 0 elements and for checking which dimensions to skip. Modifies the loop accumulating input elements to raise a UserError if we run out of dimensions, graph-breaking for compile and erroring out for export.
For infer_size: assumes that if the user passes us an unbacked symbol, it is probably not -1.
Will think about changes in https://docs.google.com/document/d/1WYx6EZwVDXtBnWyrzoecgGWdiK0V3XZKftfpWwQ5i3E/edit?tab=t.0#heading=h.22k54zym11qp in a later PR
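A hedged sketch of the `guard_or_false` idea for the wildcard check in `_infer_size` (not the actual PyTorch code; `guard_or_false` is passed in as a callable and the layout is simplified): when the comparison cannot be decided for an unbacked symbol, it returns False, i.e. we assume the dim is not the -1 wildcard.
```python
def infer_size_sketch(shape, numel, guard_or_false):
    # guard_or_false(cond) returns True only when cond is statically known to hold;
    # for an unbacked symbol it returns False instead of raising a data-dependent error.
    wildcard_dim = None
    known = 1
    for i, d in enumerate(shape):
        if guard_or_false(d == -1):
            wildcard_dim = i
        else:
            known *= d
    out = list(shape)
    if wildcard_dim is not None:
        out[wildcard_dim] = numel // known
    return out
```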
| true
|
2,953,947,765
|
update release 2.7 xla pin
|
zpcore
|
closed
|
[
"open source",
"release notes: releng"
] | 1
|
CONTRIBUTOR
|
Fix the CI failure with outdated XLA pin. This mirrors the fix in https://github.com/pytorch/pytorch/pull/149381.
| true
|
2,953,784,780
|
[DO NOT MERGE] Tests runners enqueued forever
|
jeanschmidt
|
closed
|
[
"topic: not user facing"
] | 1
|
CONTRIBUTOR
|
Test CI jobs that request a label that should stay enqueued for a very long time.
| true
|
2,953,780,491
|
torch.unique undocumented behaviour
|
zurkin1
|
open
|
[
"triaged",
"module: python frontend"
] | 6
|
NONE
|
### 📚 The doc issue
https://pytorch.org/docs/stable/generated/torch.unique.html
The documentation page is wrong and gives an unclear explanation of torch.unique (which affects torch.unique_consecutive as well). Using the example from this page:
a = torch.tensor([[[1, 1, 0, 0],[1, 1, 0, 0],[0, 0, 1, 1],],[[0, 0, 1, 1],[0, 0, 1, 1],[1, 1, 1, 1],],[[1, 1, 0, 0],[1, 1, 0, 0],[0, 0, 1, 1],],])
torch.unique(a, dim=0):
- Since the two matrices satisfy (a[0, :, :] == a[2, :, :]).all(), we remove one and get:
tensor([[[0, 0, 1, 1],[0, 0, 1, 1],[1, 1, 1, 1]],[[1, 1, 0, 0],[1, 1, 0, 0],[0, 0, 1, 1]]])
This part of the documentation is correct.
torch.unique(a, dim=1):
Here each of the tensors is treated as one of the elements to apply the unique operation upon.
tensor([[[0, 0, 1, 1],
[1, 1, 0, 0]],
[[1, 1, 1, 1],
[0, 0, 1, 1]],
[[0, 0, 1, 1],
[1, 1, 0, 0]]])
The result is that duplicate rows are removed within each matrix (separately), in contrast to the documentation (which talks about comparing minor matrices a[:, idx, :], but that is wrong).
torch.unique(a, dim=2):
Same as before. Comparing complete columns within each matrix. Minor matrices are irrelevant.
### Suggest a potential alternative/fix
Suggestion:
1) Please correct the documentation.
2) Many times we care about unique (or unique_consecutive) of *elements* (not complete rows or columns). In the case of dim=1 we want unique elements in every row of the matrix (with each matrix processed separately). This feature is currently not supported and is pretty tricky to program. Below is my solution (workaround), but it would be best to add it as a parameter to unique, for example:
torch.unique(input, sorted=True, ..., dim1, dim2)
where we always require the cardinality |dim2| < |dim1|.
So if we operate at the dim1 == matrix level (for example), we can check uniqueness of rows, columns, or elements.
In the same way, if dim1 == 1 (for example, rows of a matrix) we can only check uniqueness of elements.
This provides maximal flexibility for the user.
--------
My implementation for element uniqueness across columns of a batch of tensors (in my case binary tensors):
```python
import torch

def process_vertical(bin_mats):
    """
    Processes vertical lines of binary recurrence matrices for a batch.
    Args:
        bin_mats: A batch of binary recurrence matrices (shape: [batch_size, n_vectors, n_vectors]).
    Returns:
        counts: A vector of counts of 1's per column, separated by a -1 marker.
    """
    # Add a dummy bottom row of -1's to split sequences of 1's across columns and mark the end of each column.
    # 0, 0: No padding on columns (left and right)
    # 0:    No padding at the top
    # 1:    Add one row at the bottom
    padded_mats = torch.nn.functional.pad(bin_mats, (0, 0, 0, 1), value=-1)
    # Transpose the matrices for column-wise processing
    padded_mats = padded_mats.transpose(1, 2)  # Shape: [batch_size, n_vectors, n_vectors + 1]
    unique_vec, counts_vec = torch.unique_consecutive(padded_mats, return_counts=True)
    # Keep only the counts of 1's and -1's (counts of 0's are zeroed out).
    counts_of_ones = counts_vec * unique_vec
    return bin_mats.shape, counts_of_ones
```
cc @albanD
| true
|
2,953,754,611
|
Add flag for source hash symbol allocation
|
bobrenjc93
|
closed
|
[
"release notes: fx",
"fx",
"module: dynamo",
"ciflow/inductor"
] | 1
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #150123
https://github.com/pytorch/pytorch/pull/149665 is quite difficult to land so let's do it step by step. Let's land the flag first.
| true
|
2,953,743,321
|
[pytorch][triton] Warp specialization support in TritonTemplate for torchinductor (#148503)
|
mandroid6
|
closed
|
[
"fb-exported",
"Merged",
"ciflow/trunk",
"topic: not user facing",
"module: inductor",
"ciflow/inductor"
] | 11
|
CONTRIBUTOR
|
Summary:
Currently only `num_warps` and `num_stages` are supported as kernel options for inductor auto-tuning using `TritonTemplate`.
In order to allow warp specialization, kernel options should also allow specifying `num_consumer_groups` and `num_buffers_warp_spec`.
NOTE: Currently gating changes to FBCODE using HAS_WARP_SPEC which is only available on triton/release-3.3.x
Test Plan:
## Unit test
Added `test_triton_template_warp_specialization` to verify that the generated kernel contains configs for `num_consumer_groups` and `num_buffers_warp_spec`.
## Functional Testing
Specific to flexattention.
```
import torch
from torch.nn.attention.flex_attention import flex_attention
from triton.testing import do_bench
make_tensor = lambda: torch.rand(8, 16, 8192, 128, device="cuda", dtype=torch.bfloat16)
q, k, v = make_tensor(), make_tensor(), make_tensor()
flex_compiled = torch.compile(flex_attention, fullgraph=True)
print(do_bench(lambda: flex_compiled(q, k, v, kernel_options={"num_warps": 4})))
```
triton do_bench results:
- default compile: 15.176783561706543
- with warp-spec: 9.452800750732422
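Continuing the benchmark script above, a hedged sketch of how the new options might be passed through `kernel_options` (the option names come from this PR; the values and exact plumbing are assumptions):
```python
flex_warp_spec = torch.compile(flex_attention, fullgraph=True)
out = flex_warp_spec(
    q, k, v,
    kernel_options={
        "num_warps": 4,
        "num_consumer_groups": 2,     # assumed value for illustration
        "num_buffers_warp_spec": 3,   # assumed value for illustration
    },
)
```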
## Extra notes
- generated triton kernel using `TORCH_LOGS=output_code`: P1740612877
- TTGIR for fused kernel: P1740614685
Differential Revision: D71982587
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
| true
|
2,953,535,553
|
torch.compile on MPS progress tracker
|
malfet
|
open
|
[
"triaged",
"module: mps"
] | 0
|
CONTRIBUTOR
|
### 🐛 Describe the bug
`torch.compile` support for the MPS device is an early prototype, and attempts to use it to accelerate an end-to-end network are likely to fail. This issue highlights known problems and tracks progress towards tentative beta status for the 2.8.0 release.
- [x] Multi-stage Welford reductions are not implemented, which makes it useless for ResNet
- [x] Reduction performance is worse than eager for LLMs
- [x] Track performance/failures in the [dashboard](https://hud.pytorch.org/benchmark/torchbench/inductor_inductor?dashboard=torchinductor&startTime=Thu%2C%2017%20Apr%202025%2015%3A52%3A34%20GMT&stopTime=Thu%2C%2024%20Apr%202025%2015%3A52%3A34%20GMT&granularity=hour&mode=inference&dtype=bfloat16&deviceName=mps&lBranch=main&lCommit=78953ee1223391df5c162ac6d7e3eb70294a722e&rBranch=main&rCommit=a40e876b08277795a6552cf5e77e8649237c6812)
- [x] Fix tiled reduction algorithm (`test_var_mean_tile_reduction_True_mps`)
- [ ] Enable argument buffer support
- [x] Fix [rms_norm tracing](https://github.com/pytorch/pytorch/issues/150629)
- [ ] Dynamic shape support
- [ ] [Perf] Enable matmul decomps
- [ ] [Perf][Stretch] Enable FlexAttention for MPS
- [ ] [Perf] Figure out whether to decompose `aten._scaled_dot_product_attention_math_for_mps`
- [ ] `test_sort_transpose` on MacOS-14
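A minimal smoke test of the prototype, assuming an MPS-capable machine (expect rough edges per the list above):
```python
import torch

if torch.backends.mps.is_available():
    model = torch.nn.Sequential(torch.nn.Linear(16, 16), torch.nn.ReLU()).to("mps")
    compiled = torch.compile(model)
    x = torch.randn(4, 16, device="mps")
    print(compiled(x).shape)  # torch.Size([4, 16])
```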
### Versions
2.7.0/nightly
cc @kulinseth @albanD @DenisVieriu97 @jhavukainen
| true
|
2,953,504,191
|
DISABLED test_foreach_copy_with_multi_dtypes__foreach_copy_cuda_bool (__main__.TestForeachCUDA)
|
pytorch-bot[bot]
|
open
|
[
"triaged",
"module: flaky-tests",
"skipped",
"module: mta"
] | 8
|
NONE
|
Platforms: linux, slow
This test was disabled because it is failing in CI. See [recent examples](https://hud.pytorch.org/flakytest?name=test_foreach_copy_with_multi_dtypes__foreach_copy_cuda_bool&suite=TestForeachCUDA&limit=100) and the most recent trunk [workflow logs](https://github.com/pytorch/pytorch/runs/39505122983).
Over the past 3 hours, it has been determined flaky in 3 workflow(s) with 6 failures and 3 successes.
**Debugging instructions (after clicking on the recent samples link):**
DO NOT ASSUME THINGS ARE OKAY IF THE CI IS GREEN. We now shield flaky tests from developers so CI will thus be green but it will be harder to parse the logs.
To find relevant log snippets:
1. Click on the workflow logs linked above
2. Click on the Test step of the job so that it is expanded. Otherwise, the grepping will not work.
3. Grep for `test_foreach_copy_with_multi_dtypes__foreach_copy_cuda_bool`
4. There should be several instances run (as flaky tests are rerun in CI) from which you can study the logs.
Test file path: `test_foreach.py`
cc @clee2000 @crcrpar @mcarilli @janeyx99
| true
|
2,953,504,057
|
DISABLED test_foreach_copy_with_multi_dtypes__foreach_copy_cuda_bfloat16 (__main__.TestForeachCUDA)
|
pytorch-bot[bot]
|
open
|
[
"triaged",
"module: flaky-tests",
"skipped",
"module: mta"
] | 9
|
NONE
|
Platforms: linux
This test was disabled because it is failing in CI. See [recent examples](https://hud.pytorch.org/flakytest?name=test_foreach_copy_with_multi_dtypes__foreach_copy_cuda_bfloat16&suite=TestForeachCUDA&limit=100) and the most recent trunk [workflow logs](https://github.com/pytorch/pytorch/runs/39512574192).
Over the past 3 hours, it has been determined flaky in 4 workflow(s) with 8 failures and 4 successes.
**Debugging instructions (after clicking on the recent samples link):**
DO NOT ASSUME THINGS ARE OKAY IF THE CI IS GREEN. We now shield flaky tests from developers so CI will thus be green but it will be harder to parse the logs.
To find relevant log snippets:
1. Click on the workflow logs linked above
2. Click on the Test step of the job so that it is expanded. Otherwise, the grepping will not work.
3. Grep for `test_foreach_copy_with_multi_dtypes__foreach_copy_cuda_bfloat16`
4. There should be several instances run (as flaky tests are rerun in CI) from which you can study the logs.
<details><summary>Sample error message</summary>
```
Traceback (most recent call last):
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_device_type.py", line 1159, in test_wrapper
return test(*args, **kwargs)
File "/opt/conda/envs/py_3.10/lib/python3.10/unittest/mock.py", line 1833, in _inner
return f(*args, **kw)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_device_type.py", line 1420, in only_fn
return fn(slf, *args, **kwargs)
File "/var/lib/jenkins/workspace/test/test_foreach.py", line 1349, in test_foreach_copy_with_multi_dtypes
out = foreach_copy_(
File "/var/lib/jenkins/workspace/test/test_foreach.py", line 91, in __call__
assert mta_called == (expect_fastpath and (not zero_size)), (
AssertionError: mta_called=False, expect_fastpath=True, zero_size=False, self.func.__name__='_foreach_copy_', keys=('aten::_foreach_copy_', 'Unrecognized', 'cudaLaunchKernel', 'Lazy Function Loading', 'cudaDeviceSynchronize')
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 3153, in wrapper
method(*args, **kwargs)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 3153, in wrapper
method(*args, **kwargs)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_device_type.py", line 454, in instantiated_test
result = test(self, **param_kwargs)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 1612, in wrapper
fn(*args, **kwargs)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_device_type.py", line 1171, in test_wrapper
raise e_tracked from e
Exception: Caused by sample input at index 0: SampleInput(input=TensorList[Tensor[size=(20, 20), device="cuda:0", dtype=torch.bfloat16], Tensor[size=(19, 19), device="cuda:0", dtype=torch.bfloat16], Tensor[size=(18, 18), device="cuda:0", dtype=torch.bfloat16], Tensor[size=(17, 17), device="cuda:0", dtype=torch.bfloat16], Tensor[size=(16, 16), device="cuda:0", dtype=torch.bfloat16], Tensor[size=(15, 15), device="cuda:0", dtype=torch.bfloat16], Tensor[size=(14, 14), device="cuda:0", dtype=torch.bfloat16], Tensor[size=(13, 13), device="cuda:0", dtype=torch.bfloat16], Tensor[size=(12, 12), device="cuda:0", dtype=torch.bfloat16], Tensor[size=(11, 11), device="cuda:0", dtype=torch.bfloat16], Tensor[size=(10, 10), device="cuda:0", dtype=torch.bfloat16], Tensor[size=(9, 9), device="cuda:0", dtype=torch.bfloat16], Tensor[size=(8, 8), device="cuda:0", dtype=torch.bfloat16], Tensor[size=(7, 7), device="cuda:0", dtype=torch.bfloat16], Tensor[size=(6, 6), device="cuda:0", dtype=torch.bfloat16], Tensor[size=(5, 5), device="cuda:0", dtype=torch.bfloat16], Tensor[size=(4, 4), device="cuda:0", dtype=torch.bfloat16], Tensor[size=(3, 3), device="cuda:0", dtype=torch.bfloat16], Tensor[size=(2, 2), device="cuda:0", dtype=torch.bfloat16], Tensor[size=(1, 1), device="cuda:0", dtype=torch.bfloat16]], args=(TensorList[Tensor[size=(20, 20), device="cuda:0", dtype=torch.bfloat16], Tensor[size=(19, 19), device="cuda:0", dtype=torch.bfloat16], Tensor[size=(18, 18), device="cuda:0", dtype=torch.bfloat16], Tensor[size=(17, 17), device="cuda:0", dtype=torch.bfloat16], Tensor[size=(16, 16), device="cuda:0", dtype=torch.bfloat16], Tensor[size=(15, 15), device="cuda:0", dtype=torch.bfloat16], Tensor[size=(14, 14), device="cuda:0", dtype=torch.bfloat16], Tensor[size=(13, 13), device="cuda:0", dtype=torch.bfloat16], Tensor[size=(12, 12), device="cuda:0", dtype=torch.bfloat16], Tensor[size=(11, 11), device="cuda:0", dtype=torch.bfloat16], Tensor[size=(10, 10), device="cuda:0", dtype=torch.bfloat16], Tensor[size=(9, 9), device="cuda:0", dtype=torch.bfloat16], Tensor[size=(8, 8), device="cuda:0", dtype=torch.bfloat16], Tensor[size=(7, 7), device="cuda:0", dtype=torch.bfloat16], Tensor[size=(6, 6), device="cuda:0", dtype=torch.bfloat16], Tensor[size=(5, 5), device="cuda:0", dtype=torch.bfloat16], Tensor[size=(4, 4), device="cuda:0", dtype=torch.bfloat16], Tensor[size=(3, 3), device="cuda:0", dtype=torch.bfloat16], Tensor[size=(2, 2), device="cuda:0", dtype=torch.bfloat16], Tensor[size=(1, 1), device="cuda:0", dtype=torch.bfloat16]]), kwargs={}, broadcasts_input=False, name='')
To execute this test, run the following from the base repo dir:
PYTORCH_OPINFO_SAMPLE_INPUT_INDEX=0 python test/test_foreach.py TestForeachCUDA.test_foreach_copy_with_multi_dtypes__foreach_copy_cuda_bfloat16
This message can be suppressed by setting PYTORCH_PRINT_REPRO_ON_FAILURE=0
```
</details>
Test file path: `test_foreach.py`
cc @clee2000 @crcrpar @mcarilli @janeyx99
| true
|
2,953,404,418
|
PyTorch 2.6 License Issues
|
AWilcke
|
open
|
[
"oncall: releng",
"triaged",
"module: third_party"
] | 4
|
NONE
|
Our scanner detected these licenses in the torch-2.6.0.dist-info/LICENSE file:
third_party/kineto/libkineto/third_party/dynolog/third_party/cpr/test/LICENSE - under GPL-3.0
Bison implementation for Yacc-like parsers in C - under LGPL-3.0 (with a linking exception)
an NVIDIA license and a GPL-3.0 license - these can be found at the end of the file, but there is no package associated with either of them, so our team does not know what action to take to remove the licenses
Could these findings be removed in a future version to avoid ringing all compliance alarm bells, please?
cc @ezyang @gchanan @zou3519 @kadeng @msaroufim
| true
|
2,953,136,233
|
Fix typo
|
hotdog123what321
|
closed
|
[
"open source",
"topic: not user facing"
] | 6
|
NONE
|
Fixes #ISSUE_NUMBER
| true
|
2,953,103,956
|
S390x: update more tests
|
AlekseiNikiforovIBM
|
open
|
[
"module: cpu",
"triaged",
"open source",
"ciflow/trunk",
"release notes: quantization",
"release notes: releng",
"fx",
"module: inductor",
"module: dynamo",
"ciflow/inductor",
"ciflow/s390"
] | 4
|
COLLABORATOR
|
Enable more tests on s390x.
Fix a couple of s390x-specific issues.
Mark more tests as failing or skipped on s390x.
cc @jgong5 @mingfeima @XiaobingSuper @sanchitintel @ashokei @jingxu10 @jerryzh168 @ezyang @SherlockNoMad @EikanWang @wenzhe-nrv @voznesenskym @penguinwu @Guobing-Chen @zhuhaozhe @blzheng @jiayisunx @ipiszy @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
| true
|
2,953,054,732
|
[BE] Suppress user_warnings while running opinfo tests
|
malfet
|
closed
|
[
"Merged",
"topic: not user facing",
"ciflow/mps"
] | 3
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #150115
Some of the samples are constructed in a way that is expected to trigger those warnings, but there's no point in displaying them.
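A minimal sketch of the suppression pattern, assuming the standard `warnings` module is used around each sample run (the PR's actual mechanism may differ):
```python
import warnings

def run_sample_quietly(fn, *args, **kwargs):
    # Ignore UserWarning noise emitted while exercising an OpInfo sample.
    with warnings.catch_warnings():
        warnings.simplefilter("ignore", UserWarning)
        return fn(*args, **kwargs)
```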
| true
|
2,953,028,619
|
[Miscompilation] inductor produce inconsistent inference results with the eager mode
|
Cookiee235
|
closed
|
[
"high priority",
"triaged",
"oncall: pt2",
"module: inductor",
"oncall: cpu inductor"
] | 6
|
CONTRIBUTOR
|
### 🐛 Describe the bug
Eager produced `37269600` while inductor produced `inf` (> 3.4e38); the results differ significantly.
```python
import torch
class SimpleModel(torch.nn.Module):
def forward(self, x):
x = torch.arctan(x)
x = torch.linalg.cond(x)
return x
model = SimpleModel()
inputs = torch.ones(2, 2, dtype=torch.float32)
res = model(inputs)
compiled_model = torch.compile(model, backend='inductor')
with torch.no_grad():
compiled_out = compiled_model(inputs)
print(res)
print(compiled_out)
non_nan_mask = ~torch.isnan(res)
torch.testing.assert_close(res[non_nan_mask], compiled_out[non_nan_mask])
```
### Error logs
tensor(37269600.)
tensor(inf)
Traceback (most recent call last):
File "/data/qshenaf/remote_pc/LLM4Converter/bugs/0327/87.py", line 19, in <module>
torch.testing.assert_close(res[non_nan_mask], compiled_out[non_nan_mask])
~~~~~~~~~~~~~~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/data/qshenaf/miniconda3/envs/torch/lib/python3.13/site-packages/torch/testing/_comparison.py", line 1519, in assert_close
raise error_metas[0].to_error(msg)
AssertionError: Tensor-likes are not close!
Mismatched elements: 1 / 1 (100.0%)
Greatest absolute difference: inf at index (0,) (up to 1e-05 allowed)
Greatest relative difference: nan at index (0,) (up to 1.3e-06 allowed)
### Versions
PyTorch version: 2.7.0.dev20250308+cu126
Is debug build: False
CUDA used to build PyTorch: 12.6
ROCM used to build PyTorch: N/A
OS: AlmaLinux 9.4 (Seafoam Ocelot) (x86_64)
GCC version: (GCC) 11.4.1 20231218 (Red Hat 11.4.1-3)
Clang version: 17.0.6 (AlmaLinux OS Foundation 17.0.6-5.el9)
CMake version: version 3.26.5
Libc version: glibc-2.34
Python version: 3.13.0 | packaged by Anaconda, Inc. | (main, Oct 7 2024, 21:29:38) [GCC 11.2.0] (64-bit runtime)
Python platform: Linux-5.14.0-427.37.1.el9_4.x86_64-x86_64-with-glibc2.34
Is CUDA available: True
CUDA runtime version: 12.6.77
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration:
GPU 0: NVIDIA GeForce RTX 3090
GPU 1: NVIDIA GeForce RTX 3090
GPU 2: NVIDIA GeForce RTX 3090
GPU 3: NVIDIA GeForce RTX 3090
Nvidia driver version: 560.35.03
cuDNN version: Could not collect
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 43 bits physical, 48 bits virtual
Byte Order: Little Endian
CPU(s): 64
On-line CPU(s) list: 0-63
Vendor ID: AuthenticAMD
Model name: AMD Ryzen Threadripper PRO 3975WX 32-Cores
CPU family: 23
Model: 49
Thread(s) per core: 2
Core(s) per socket: 32
Socket(s): 1
Stepping: 0
Frequency boost: enabled
CPU(s) scaling MHz: 80%
CPU max MHz: 4368.1641
CPU min MHz: 2200.0000
BogoMIPS: 7000.25
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ht syscall nx mmxext fxsr_opt pdpe1gb rdtscp lm constant_tsc rep_good nopl nonstop_tsc cpuid extd_apicid aperfmperf rapl pni pclmulqdq monitor ssse3 fma cx16 sse4_1 sse4_2 x2apic movbe popcnt aes xsave avx f16c rdrand lahf_lm cmp_legacy svm extapic cr8_legacy abm sse4a misalignsse 3dnowprefetch osvw ibs skinit wdt tce topoext perfctr_core perfctr_nb bpext perfctr_llc mwaitx cpb cat_l3 cdp_l3 hw_pstate ssbd mba ibpb stibp vmmcall fsgsbase bmi1 avx2 smep bmi2 cqm rdt_a rdseed adx smap clflushopt clwb sha_ni xsaveopt xsavec xgetbv1 xsaves cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local clzero irperf xsaveerptr rdpru wbnoinvd amd_ppin arat npt lbrv svm_lock nrip_save tsc_scale vmcb_clean flushbyasid decodeassists pausefilter pfthreshold avic v_vmsave_vmload vgif v_spec_ctrl umip rdpid overflow_recov succor smca sev sev_es
Virtualization: AMD-V
L1d cache: 1 MiB (32 instances)
L1i cache: 1 MiB (32 instances)
L2 cache: 16 MiB (32 instances)
L3 cache: 128 MiB (8 instances)
NUMA node(s): 1
NUMA node0 CPU(s): 0-63
Vulnerability Gather data sampling: Not affected
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Not affected
Vulnerability Retbleed: Mitigation; untrained return thunk; SMT enabled with STIBP protection
Vulnerability Spec rstack overflow: Mitigation; Safe RET
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Retpolines, IBPB conditional, STIBP always-on, RSB filling, PBRSB-eIBRS Not affected
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Versions of relevant libraries:
[pip3] numpy==2.2.3
[pip3] nvidia-cublas-cu12==12.6.4.1
[pip3] nvidia-cuda-cupti-cu12==12.6.80
[pip3] nvidia-cuda-nvrtc-cu12==12.6.77
[pip3] nvidia-cuda-runtime-cu12==12.6.77
[pip3] nvidia-cudnn-cu12==9.5.1.17
[pip3] nvidia-cufft-cu12==11.3.0.4
[pip3] nvidia-curand-cu12==10.3.7.77
[pip3] nvidia-cusolver-cu12==11.7.1.2
[pip3] nvidia-cusparse-cu12==12.5.4.2
[pip3] nvidia-cusparselt-cu12==0.6.3
[pip3] nvidia-nccl-cu12==2.25.1
[pip3] nvidia-nvjitlink-cu12==12.6.85
[pip3] nvidia-nvtx-cu12==12.6.77
[pip3] pytorch-triton==3.2.0+git4b3bb1f8
[pip3] torch==2.7.0.dev20250308+cu126
[pip3] torchaudio==2.6.0.dev20250308+cu126
[pip3] torchvision==0.22.0.dev20250308+cu126
[pip3] triton==3.2.0
cc @ezyang @gchanan @zou3519 @kadeng @msaroufim @chauhang @penguinwu @voznesenskym @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @chenyang78 @muchulee8 @amjames @aakhundov
| true
|
2,952,988,954
|
[inductor] Significant difference produced when compile the model resnet18
|
Cookiee235
|
closed
|
[
"triaged",
"oncall: pt2",
"module: inductor"
] | 1
|
CONTRIBUTOR
|
### 🐛 Describe the bug
```python
import torch
inputs = torch.randn(1, 3, 224, 224, device='cuda')
model = torch.hub.load('pytorch/vision:v0.10.0', 'resnet18', pretrained=True)
model = model.cuda()
model.eval()
with torch.no_grad():
res = model(inputs)
compiled_model = torch.compile(model, backend='inductor')
with torch.no_grad():
compiled_out = compiled_model(inputs)
torch.testing.assert_close(res, compiled_out)
```
### Error logs
```
Traceback (most recent call last):
File "/data/qshenaf/remote_pc/LLM4Converter/bugs/0327/93.py", line 13, in <module>
torch.testing.assert_close(res, compiled_out)
~~~~~~~~~~~~~~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^
File "/data/qshenaf/miniconda3/envs/torch/lib/python3.13/site-packages/torch/testing/_comparison.py", line 1519, in assert_close
raise error_metas[0].to_error(msg)
AssertionError: Tensor-likes are not close!
Mismatched elements: 988 / 1000 (98.8%)
Greatest absolute difference: 0.0020771026611328125 at index (0, 111) (up to 1e-05 allowed)
Greatest relative difference: 1.8532894849777222 at index (0, 20) (up to 1.3e-06 allowed)
```
### Versions
PyTorch version: 2.7.0.dev20250308+cu126
Is debug build: False
CUDA used to build PyTorch: 12.6
ROCM used to build PyTorch: N/A
OS: AlmaLinux 9.4 (Seafoam Ocelot) (x86_64)
GCC version: (GCC) 11.4.1 20231218 (Red Hat 11.4.1-3)
Clang version: 17.0.6 (AlmaLinux OS Foundation 17.0.6-5.el9)
CMake version: version 3.26.5
Libc version: glibc-2.34
Python version: 3.13.0 | packaged by Anaconda, Inc. | (main, Oct 7 2024, 21:29:38) [GCC 11.2.0] (64-bit runtime)
Python platform: Linux-5.14.0-427.37.1.el9_4.x86_64-x86_64-with-glibc2.34
Is CUDA available: True
CUDA runtime version: 12.6.77
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration:
GPU 0: NVIDIA GeForce RTX 3090
GPU 1: NVIDIA GeForce RTX 3090
GPU 2: NVIDIA GeForce RTX 3090
GPU 3: NVIDIA GeForce RTX 3090
Nvidia driver version: 560.35.03
cuDNN version: Could not collect
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 43 bits physical, 48 bits virtual
Byte Order: Little Endian
CPU(s): 64
On-line CPU(s) list: 0-63
Vendor ID: AuthenticAMD
Model name: AMD Ryzen Threadripper PRO 3975WX 32-Cores
CPU family: 23
Model: 49
Thread(s) per core: 2
Core(s) per socket: 32
Socket(s): 1
Stepping: 0
Frequency boost: enabled
CPU(s) scaling MHz: 80%
CPU max MHz: 4368.1641
CPU min MHz: 2200.0000
BogoMIPS: 7000.25
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ht syscall nx mmxext fxsr_opt pdpe1gb rdtscp lm constant_tsc rep_good nopl nonstop_tsc cpuid extd_apicid aperfmperf rapl pni pclmulqdq monitor ssse3 fma cx16 sse4_1 sse4_2 x2apic movbe popcnt aes xsave avx f16c rdrand lahf_lm cmp_legacy svm extapic cr8_legacy abm sse4a misalignsse 3dnowprefetch osvw ibs skinit wdt tce topoext perfctr_core perfctr_nb bpext perfctr_llc mwaitx cpb cat_l3 cdp_l3 hw_pstate ssbd mba ibpb stibp vmmcall fsgsbase bmi1 avx2 smep bmi2 cqm rdt_a rdseed adx smap clflushopt clwb sha_ni xsaveopt xsavec xgetbv1 xsaves cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local clzero irperf xsaveerptr rdpru wbnoinvd amd_ppin arat npt lbrv svm_lock nrip_save tsc_scale vmcb_clean flushbyasid decodeassists pausefilter pfthreshold avic v_vmsave_vmload vgif v_spec_ctrl umip rdpid overflow_recov succor smca sev sev_es
Virtualization: AMD-V
L1d cache: 1 MiB (32 instances)
L1i cache: 1 MiB (32 instances)
L2 cache: 16 MiB (32 instances)
L3 cache: 128 MiB (8 instances)
NUMA node(s): 1
NUMA node0 CPU(s): 0-63
Vulnerability Gather data sampling: Not affected
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Not affected
Vulnerability Retbleed: Mitigation; untrained return thunk; SMT enabled with STIBP protection
Vulnerability Spec rstack overflow: Mitigation; Safe RET
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Retpolines, IBPB conditional, STIBP always-on, RSB filling, PBRSB-eIBRS Not affected
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Versions of relevant libraries:
[pip3] numpy==2.2.3
[pip3] nvidia-cublas-cu12==12.6.4.1
[pip3] nvidia-cuda-cupti-cu12==12.6.80
[pip3] nvidia-cuda-nvrtc-cu12==12.6.77
[pip3] nvidia-cuda-runtime-cu12==12.6.77
[pip3] nvidia-cudnn-cu12==9.5.1.17
[pip3] nvidia-cufft-cu12==11.3.0.4
[pip3] nvidia-curand-cu12==10.3.7.77
[pip3] nvidia-cusolver-cu12==11.7.1.2
[pip3] nvidia-cusparse-cu12==12.5.4.2
[pip3] nvidia-cusparselt-cu12==0.6.3
[pip3] nvidia-nccl-cu12==2.25.1
[pip3] nvidia-nvjitlink-cu12==12.6.85
[pip3] nvidia-nvtx-cu12==12.6.77
[pip3] pytorch-triton==3.2.0+git4b3bb1f8
[pip3] torch==2.7.0.dev20250308+cu126
[pip3] torchaudio==2.6.0.dev20250308+cu126
[pip3] torchvision==0.22.0.dev20250308+cu126
[pip3] triton==3.2.0
cc @ezyang @gchanan @zou3519 @kadeng @msaroufim @chauhang @penguinwu @voznesenskym @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @chenyang78 @muchulee8 @amjames @aakhundov
| true
|
2,952,949,345
|
[Dynamo] Fix `dict.items()` return type
|
shink
|
closed
|
[
"triaged",
"open source",
"Merged",
"ciflow/trunk",
"topic: not user facing",
"module: dynamo"
] | 22
|
CONTRIBUTOR
|
Fixes #150110
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @chauhang @amjames
| true
|
2,952,944,439
|
Use constants.onnx default opset for export compat
|
novikov-alexander
|
open
|
[
"triaged",
"open source",
"release notes: onnx"
] | 6
|
NONE
|
The current rules for opsets are confusing, and the comments associated with them are outdated. This is particularly problematic for dynamo export, where the opset is hardcoded to version 18. To improve clarity and maintainability, it would be beneficial to use global constants wherever possible.
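A hedged sketch of the suggestion: resolve the default opset from one shared constant instead of hardcoding 18 at call sites. The constant lookup below uses a fallback because the exact attribute name is an assumption.
```python
from torch.onnx import _constants

DEFAULT_OPSET = getattr(_constants, "ONNX_DEFAULT_OPSET", 18)  # fall back if the name differs

def resolve_opset(opset_version=None):
    # Centralized resolution: explicit argument wins, otherwise use the shared default.
    return opset_version if opset_version is not None else DEFAULT_OPSET
```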
| true
|
2,952,925,974
|
[Dynamo] `dict.items()` returns a tuple instead of `dict_items` obj
|
shink
|
closed
|
[
"triaged",
"oncall: pt2",
"module: dynamo"
] | 0
|
CONTRIBUTOR
|
### 🐛 Describe the bug
Repro:
```python
def repro():
def fn():
d = dict({"a": 1, "b": "2", "c": torch.tensor(3)})
return d.items()
opt_fn = torch.compile(fn, backend="eager", fullgraph=True)
ref = fn()
res = opt_fn()
print(f"Eager: {ref}")
print(f"Dynamo: {res}")
```
Will get:
```
Eager: dict_items([('a', 1), ('b', '2'), ('c', tensor(3))])
Dynamo: (('a', 1), ('b', '2'), ('c', tensor(3)))
```
### Versions
main
cc @chauhang @penguinwu @voznesenskym @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @amjames
| true
|
2,952,898,235
|
Feature universe - Add Cosmic Isomorphism Optimization Module
|
loning
|
closed
|
[
"open source"
] | 4
|
NONE
|
# Add Cosmic Isomorphism Optimization Module
## Overview
This PR implements a set of PyTorch optimizers and model optimization components based on quantum-classical dualism and cosmic isomorphism principles, applying theoretical physics concepts to deep learning optimization.
## Theoretical Foundation
This implementation is based on core concepts proposed in [Formal Theory of Quantum-Classical Dualism](https://github.com/loning/universe/blob/trae/formal_theory/formal_theory.md) and [Quantum Universe Simulation Theory](https://github.com/loning/universe/blob/trae/formal_theory/formal_theory_quantum_simulation.md), particularly the information-entropy ratio optimization and classical efficiency maximization principles.
## Core Components
1. **Cosmic Dynamic Graph (CDG)** - Automatically identifies and optimizes inefficient nodes in computation graphs based on the information-entropy ratio.
2. **Cosmic Classical Efficiency (CCE)** - Optimizes parameter update paths to maximize computational efficiency.
3. **Cosmic State Compression (CSC)** - Dynamically compresses tensor spaces while preserving high-efficiency states.
## Technical Features
- **Information-entropy ratio calculation** - Computes the ratio of a tensor's information content to its entropy as an optimization metric
- **Comprehensive bilingual documentation** - Detailed Chinese/English comments for every function and important step
- **Compatible with standard PyTorch** - Easy to integrate into existing PyTorch projects
## Performance Advantages
According to quantum universe simulation theory predictions, this optimization method offers:
- **Dynamic pruning** - Automatically freezes inefficient nodes, reducing computational overhead
- **State compression** - Reduces memory usage while preserving critical information
- **Optimized gradient paths** - Corrects gradient directions based on classical efficiency
## Example
See `standalone.py` for a detailed working example demonstrating all three core optimization components.
| true
|
2,952,715,250
|
More revert
|
jamesjwu
|
closed
|
[
"ciflow/trunk",
"module: inductor",
"ciflow/inductor",
"keep-going"
] | 2
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #150108
* #150107
* #149054
* #149657
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
| true
|
2,952,696,923
|
[not for commit] Revert some parts of previous diff
|
jamesjwu
|
closed
|
[
"ciflow/trunk",
"module: inductor",
"ciflow/inductor",
"keep-going"
] | 2
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #150108
* __->__ #150107
* #149054
* #149657
I'm going crazy debugging a test timeout; testing whether reverting parts of my stack makes the timeout disappear.
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
| true
|
2,952,601,466
|
Add option to define OpenBLAS version for manylinux Dockerfile_2_28_aarch64
|
davsva01
|
open
|
[
"triaged",
"open source",
"topic: not user facing"
] | 1
|
NONE
|
Adds an optional OPENBLAS_VERSION variable to `.ci/docker/common/install_openblas.sh`, used to define which version of OpenBLAS to install, and adds the corresponding argument to the `Dockerfile_2_28_aarch64` image.
| true
|
2,952,357,794
|
[cherry-pick] [Submodule] [cpuinfo] cpuinfo update (#149305)
|
ozanMSFT
|
open
|
[
"open source",
"ciflow/trunk",
"topic: not user facing"
] | 1
|
COLLABORATOR
|
(cherry picked from commit ce54c430c0e9d5e6e9ee0b1d85bddd04fbcbca4e)
(PR: #149305 )
---
Updating `cpuinfo` module.
Relevant:
https://github.com/pytorch/cpuinfo/issues/270
Pull Request resolved: https://github.com/pytorch/pytorch/pull/149305 Approved by: https://github.com/malfet
| true
|
2,952,277,524
|
multidimensional slicing
|
avikchaudhuri
|
closed
|
[
"fb-exported",
"Merged",
"ciflow/trunk",
"ciflow/inductor",
"release notes: export"
] | 9
|
CONTRIBUTOR
|
Differential Revision: D71962884
Fixes #150057
| true
|
2,952,272,936
|
fix range constraints for expr
|
avikchaudhuri
|
closed
|
[
"fb-exported",
"Merged",
"ciflow/trunk",
"ciflow/inductor",
"release notes: export"
] | 5
|
CONTRIBUTOR
|
During tracing it is possible for a `s1: VR[2, inf]` to be replaced by a `s0: VR[3, inf]` (note smaller range) by the shape env. But after export, unfortunately we'd previously record `range_constraints[s0] = VR[2, inf]` (note larger range), which is incorrect.
This is because we'd map `s1.node.expr` (`s0`) to the `var_to_range` of `s1.node._expr` (`s1`) when creating `range_constraints`. The comment surrounding this code suggests this predated `bound_sympy`, but now we can do better.
For users, this means that with `Dim.DYNAMIC`, input constraints previously weren't checked sufficiently; now they are (shifting errors earlier).
Differential Revision: D71962694
| true
|
2,952,267,011
|
Add a call to RecordCCall so that PyCCall events are inserted into the queue. This ensures that profiling doesn't break with the 'with_stack' flag set.
|
arjun-choudhry
|
closed
|
[
"triaged",
"open source",
"ciflow/trunk",
"release notes: profiler",
"topic: bug fixes"
] | 9
|
NONE
|
Created in lieu of #148958.
Closes #136817 , #101632
| true
|
2,952,149,923
|
[RELEASE ONLY CHANGES] Apply release only changes to release 2.7 (#149056)
|
etaf
|
closed
|
[
"module: rocm",
"open source",
"release notes: releng",
"ciflow/inductor"
] | 2
|
COLLABORATOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* (to be filled)
* [RELEASE ONLY CHANGES] Apply release only changes to release 2.7
* fix_lint_workflow
* docker_release
* fix_check_binary
cc @jeffdaily @sunway513 @jithunnair-amd @pruthvistony @ROCmSupport @dllehr-amd @jataylo @hongxiayang @naromero77amd
| true
|
2,952,148,746
|
[RELEASE ONLY CHANGES] Apply release only changes to release 2.7 (#149056)
|
etaf
|
closed
|
[
"module: rocm",
"release notes: releng",
"ciflow/inductor"
] | 2
|
COLLABORATOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* (to be filled)
* [RELEASE ONLY CHANGES] Apply release only changes to release 2.7
* fix_lint_workflow
* docker_release
* fix_check_binary
cc @jeffdaily @sunway513 @jithunnair-amd @pruthvistony @ROCmSupport @dllehr-amd @jataylo @hongxiayang @naromero77amd
| true
|
2,952,125,630
|
DISABLED test_comprehensive_fft_irfftn_cuda_float16 (__main__.TestInductorOpInfoCUDA)
|
IvanKobzarev
|
closed
|
[
"triaged",
"skipped",
"oncall: pt2"
] | 2
|
CONTRIBUTOR
|
Platforms: <fill this in or delete. Valid labels are: asan, linux, mac, macos, rocm, win, windows.>
This test was disabled because it is failing on main branch ([recent examples](https://torch-ci.com/failure?failureCaptures=%5B%22inductor%2Ftest_torchinductor_opinfo.py%3A%3ATestInductorOpInfoCUDA%3A%3Atest_comprehensive_fft_irfftn_cuda_float16%22%5D)).
The test's gradient accuracy is stride dependent; the difference is usually about 1e-2 to 7e-1.
When aot_autograd changes the tangent strides, this failure appears.
Disabling this test for now, until we find the proper tolerance or a way to avoid these fuzzer problems.
cc @chauhang @penguinwu
| true
|
2,951,986,576
|
Aborted (core dumped)
|
Cookiee235
|
open
|
[
"module: crash",
"module: cuda",
"module: error checking",
"triaged"
] | 2
|
CONTRIBUTOR
|
### 🐛 Describe the bug
```python
import torch
cuda_tensor = torch.tensor([1.0, 2.0, 3.0], device='cuda')
mem_ptr = cuda_tensor.data_ptr()
torch.cuda.caching_allocator_delete(mem_ptr)
```
Aborted (core dumped)
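For contrast, a hedged sketch of the pairing these APIs appear intended for: freeing a pointer obtained from `caching_allocator_alloc`, rather than the storage owned by a tensor.
```python
import torch

ptr = torch.cuda.caching_allocator_alloc(1024)  # raw allocation from the CUDA caching allocator
torch.cuda.caching_allocator_delete(ptr)        # paired free of that allocation
```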
### Versions
PyTorch version: 2.7.0.dev20250308+cu126
Is debug build: False
CUDA used to build PyTorch: 12.6
ROCM used to build PyTorch: N/A
OS: AlmaLinux 9.4 (Seafoam Ocelot) (x86_64)
GCC version: (GCC) 11.4.1 20231218 (Red Hat 11.4.1-3)
Clang version: 17.0.6 (AlmaLinux OS Foundation 17.0.6-5.el9)
CMake version: version 3.26.5
Libc version: glibc-2.34
Python version: 3.13.0 | packaged by Anaconda, Inc. | (main, Oct 7 2024, 21:29:38) [GCC 11.2.0] (64-bit runtime)
Python platform: Linux-5.14.0-427.37.1.el9_4.x86_64-x86_64-with-glibc2.34
Is CUDA available: True
CUDA runtime version: 12.6.77
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration:
GPU 0: NVIDIA GeForce RTX 3090
GPU 1: NVIDIA GeForce RTX 3090
GPU 2: NVIDIA GeForce RTX 3090
GPU 3: NVIDIA GeForce RTX 3090
Nvidia driver version: 560.35.03
cuDNN version: Could not collect
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 43 bits physical, 48 bits virtual
Byte Order: Little Endian
CPU(s): 64
On-line CPU(s) list: 0-63
Vendor ID: AuthenticAMD
Model name: AMD Ryzen Threadripper PRO 3975WX 32-Cores
CPU family: 23
Model: 49
Thread(s) per core: 2
Core(s) per socket: 32
Socket(s): 1
Stepping: 0
Frequency boost: enabled
CPU(s) scaling MHz: 81%
CPU max MHz: 4368.1641
CPU min MHz: 2200.0000
BogoMIPS: 7000.25
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ht syscall nx mmxext fxsr_opt pdpe1gb rdtscp lm constant_tsc rep_good nopl nonstop_tsc cpuid extd_apicid aperfmperf rapl pni pclmulqdq monitor ssse3 fma cx16 sse4_1 sse4_2 x2apic movbe popcnt aes xsave avx f16c rdrand lahf_lm cmp_legacy svm extapic cr8_legacy abm sse4a misalignsse 3dnowprefetch osvw ibs skinit wdt tce topoext perfctr_core perfctr_nb bpext perfctr_llc mwaitx cpb cat_l3 cdp_l3 hw_pstate ssbd mba ibpb stibp vmmcall fsgsbase bmi1 avx2 smep bmi2 cqm rdt_a rdseed adx smap clflushopt clwb sha_ni xsaveopt xsavec xgetbv1 xsaves cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local clzero irperf xsaveerptr rdpru wbnoinvd amd_ppin arat npt lbrv svm_lock nrip_save tsc_scale vmcb_clean flushbyasid decodeassists pausefilter pfthreshold avic v_vmsave_vmload vgif v_spec_ctrl umip rdpid overflow_recov succor smca sev sev_es
Virtualization: AMD-V
L1d cache: 1 MiB (32 instances)
L1i cache: 1 MiB (32 instances)
L2 cache: 16 MiB (32 instances)
L3 cache: 128 MiB (8 instances)
NUMA node(s): 1
NUMA node0 CPU(s): 0-63
Vulnerability Gather data sampling: Not affected
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Not affected
Vulnerability Retbleed: Mitigation; untrained return thunk; SMT enabled with STIBP protection
Vulnerability Spec rstack overflow: Mitigation; Safe RET
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Retpolines, IBPB conditional, STIBP always-on, RSB filling, PBRSB-eIBRS Not affected
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Versions of relevant libraries:
[pip3] numpy==2.2.3
[pip3] nvidia-cublas-cu12==12.6.4.1
[pip3] nvidia-cuda-cupti-cu12==12.6.80
[pip3] nvidia-cuda-nvrtc-cu12==12.6.77
[pip3] nvidia-cuda-runtime-cu12==12.6.77
[pip3] nvidia-cudnn-cu12==9.5.1.17
[pip3] nvidia-cufft-cu12==11.3.0.4
[pip3] nvidia-curand-cu12==10.3.7.77
[pip3] nvidia-cusolver-cu12==11.7.1.2
[pip3] nvidia-cusparse-cu12==12.5.4.2
[pip3] nvidia-cusparselt-cu12==0.6.3
[pip3] nvidia-nccl-cu12==2.25.1
[pip3] nvidia-nvjitlink-cu12==12.6.85
[pip3] nvidia-nvtx-cu12==12.6.77
[pip3] pytorch-triton==3.2.0+git4b3bb1f8
[pip3] torch==2.7.0.dev20250308+cu126
[pip3] torchaudio==2.6.0.dev20250308+cu126
[pip3] torchvision==0.22.0.dev20250308+cu126
[pip3] triton==3.2.0
cc @ptrblck @msaroufim @eqy @malfet
| true
|
2,951,985,930
|
Fix `L1Loss`, `MSELoss`, `HuberLoss` missing `weight` param
|
zeshengzong
|
open
|
[
"triaged",
"open source",
"release notes: nn"
] | 5
|
CONTRIBUTOR
|
Fixes #149841
## Changes
- Add missing `weight` param for `L1Loss`, `MSELoss`, `HuberLoss` (see the usage sketch after this list)
- Add doc description
- Add weight test case
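A hedged usage sketch of the weighting this PR describes; the per-element weighting and the reduction shown (mean of weighted squared errors) are assumptions about the intended semantics, computed by hand rather than through the new parameter.
```python
import torch

pred   = torch.tensor([1.0, 2.0, 3.0])
target = torch.tensor([1.5, 2.0, 2.0])
weight = torch.tensor([0.2, 0.3, 0.5])

# What a per-element `weight` argument to MSELoss would express:
weighted_mse = (weight * (pred - target) ** 2).mean()
print(weighted_mse)
```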
## Test Result






| true
|
2,951,965,711
|
FSDP OOM during `sync_params_and_buffers`
|
KimmiShi
|
open
|
[
"oncall: distributed",
"triaged",
"module: fsdp"
] | 7
|
NONE
|
### 🐛 Describe the bug
I am trying to use FSDP to run inference on a 72B model. To avoid CPU OOM I only load weights on rank 0 and then let FSDP broadcast the parameters to the other ranks. This works well with a 7B model; however, when I try this script on a 72B model (on 16 A100s) I get GPU OOM.
```python
import torch
import torch.distributed as dist
from transformers import AutoProcessor, Qwen2VLForConditionalGeneration, AutoModelForCausalLM
from torch.distributed.fsdp import FullyShardedDataParallel as FSDP
from torch.distributed.fsdp import StateDictType
from torch.distributed.fsdp.api import BackwardPrefetch, ShardingStrategy
from torch.distributed.fsdp.wrap import transformer_auto_wrap_policy, size_based_auto_wrap_policy
from torch.distributed.fsdp import fully_shard, register_fsdp_forward_method
# from torch.distributed._composable.fsdp import fully_shard, register_fsdp_forward_method
from torch.distributed.device_mesh import init_device_mesh
from transformers import AutoTokenizer, AutoModel
import os
import functools
def setup_distributed() -> None:
"""Initialize distributed training environment."""
# local_rank = int(os.environ["LOCAL_RANK"])
# Initializes the distributed backend which will take care of sychronizing nodes/GPUs
try:
rank = int(os.environ["RANK"])
world_size = int(os.environ["WORLD_SIZE"])
except KeyError as e:
raise RuntimeError(f"Could not find {e} in the torch environment")
local_rank = rank % 8
torch.cuda.set_device(local_rank)
# initialize the default process group
dist.init_process_group(
rank=rank,
world_size=world_size,
backend="nccl",
)
def get_module_class_from_name(module, name):
"""
Gets a class from a module by its name.
Args:
module (`torch.nn.Module`): The module to get the class from.
name (`str`): The name of the class.
"""
modules_children = list(module.children())
if module.__class__.__name__ == name:
return module.__class__
elif len(modules_children) == 0:
return
else:
for child_module in modules_children:
module_class = get_module_class_from_name(child_module, name)
if module_class is not None:
return module_class
# adapted from transformers/trainer.py
def wrap_fsdp(model, sub_group):
transformer_cls_names_to_wrap = [
"Embedding",
"Qwen2VLDecoderLayer",
"Qwen2VLVisionBlock"
"Qwen2DecoderLayer",
# "Qwen2RMSNorm",
]
transformer_cls_to_wrap = set()
for layer_class in transformer_cls_names_to_wrap:
transformer_cls = get_module_class_from_name(model, layer_class)
# if transformer_cls is None:
# raise ValueError(f"Could not find the transformer layer class {layer_class} in the model.")
if transformer_cls is not None:
transformer_cls_to_wrap.add(transformer_cls)
auto_wrap_policy = functools.partial(
transformer_auto_wrap_policy, transformer_layer_cls=transformer_cls_to_wrap
)
sharded_model = FSDP(
module=model,
process_group=sub_group,
sharding_strategy=ShardingStrategy.FULL_SHARD, # ZeRO2: SHARD_GRAD_OP, ZeRO3: FULL_SHARD
auto_wrap_policy=auto_wrap_policy,
forward_prefetch=True,
backward_prefetch=BackwardPrefetch.BACKWARD_PRE,
device_id=torch.cuda.current_device(),
use_orig_params=False,
sync_module_states=True,
param_init_fn=(lambda module: module.to_empty(device=torch.device("cuda"), recurse=False))
if dist.get_rank() != 0 else None
)
return sharded_model
pretrain_name_or_path='Qwen/Qwen2-VL-72B-Instruct'
setup_distributed()
rank=dist.get_rank()
ws=dist.get_world_size()
print(f"{ws=}", flush=True)
input_ids = torch.randint(1, 999, [4, 320], device='cuda')
tokenizer = AutoTokenizer.from_pretrained(pretrain_name_or_path)
number=3+rank
prompts=['what is AI?', f"{number}+2=?"]
inputs = tokenizer(prompts,padding=True,
return_tensors="pt",)
for k, v in inputs.items():
inputs[k] = v.cuda()
def init_mod(pretrain_name_or_path):
if 'vl' in pretrain_name_or_path.lower():
model = Qwen2VLForConditionalGeneration.from_pretrained(pretrain_name_or_path)
else:
model = AutoModelForCausalLM.from_pretrained(pretrain_name_or_path)
return model
if rank==0:
model = init_mod(pretrain_name_or_path)
else:
with torch.device('meta'):
model = init_mod(pretrain_name_or_path)
sharded_model = wrap_fsdp(model, None)
with torch.no_grad():
out = sharded_model(input_ids=input_ids)
out = sharded_model.generate(**inputs, synced_gpus=True, max_new_tokens=10)
out = tokenizer.batch_decode(out)
print(out, flush=True)
```
tracebacks:
```
[2025-03-27 15:41:41] [rank4]: File "/root/miniconda3/lib/python3.10/site-packages/torch/distributed/fsdp/fully_sharded_data_parallel.py", line 483, in __init__
[2025-03-27 15:41:41] [rank4]: _auto_wrap(
[2025-03-27 15:41:41] [rank4]: File "/root/miniconda3/lib/python3.10/site-packages/torch/distributed/fsdp/_wrap_utils.py", line 101, in _auto_wrap
[2025-03-27 15:41:41] [rank4]: _recursive_wrap(**recursive_wrap_kwargs, **root_kwargs) # type: ignore[arg-type]
[2025-03-27 15:41:41] [rank4]: File "/root/miniconda3/lib/python3.10/site-packages/torch/distributed/fsdp/wrap.py", line 545, in _recursive_wrap
[2025-03-27 15:41:41] [rank4]: wrapped_child, num_wrapped_params = _recursive_wrap(
[2025-03-27 15:41:41] [rank4]: File "/root/miniconda3/lib/python3.10/site-packages/torch/distributed/fsdp/wrap.py", line 545, in _recursive_wrap
[2025-03-27 15:41:41] [rank4]: wrapped_child, num_wrapped_params = _recursive_wrap(
[2025-03-27 15:41:41] [rank4]: File "/root/miniconda3/lib/python3.10/site-packages/torch/distributed/fsdp/wrap.py", line 545, in _recursive_wrap
[2025-03-27 15:41:41] [rank4]: wrapped_child, num_wrapped_params = _recursive_wrap(
[2025-03-27 15:41:41] [rank4]: File "/root/miniconda3/lib/python3.10/site-packages/torch/distributed/fsdp/wrap.py", line 563, in _recursive_wrap
[2025-03-27 15:41:41] [rank4]: return _wrap(module, wrapper_cls, **kwargs), nonwrapped_numel
[2025-03-27 15:41:41] [rank4]: File "/root/miniconda3/lib/python3.10/site-packages/torch/distributed/fsdp/wrap.py", line 492, in _wrap
[2025-03-27 15:41:41] [rank4]: return wrapper_cls(module, **kwargs)
[2025-03-27 15:41:41] [rank4]: File "/root/miniconda3/lib/python3.10/site-packages/torch/distributed/fsdp/fully_sharded_data_parallel.py", line 509, in __init__
[2025-03-27 15:41:41] [rank4]: _init_param_handle_from_module(
[2025-03-27 15:41:41] [rank4]: File "/root/miniconda3/lib/python3.10/site-packages/torch/distributed/fsdp/_init_utils.py", line 636, in _init_param_handle_from_module
[2025-03-27 15:41:41] [rank4]: _init_param_handle_from_params(state, managed_params, fully_sharded_module)
[2025-03-27 15:41:41] [rank4]: File "/root/miniconda3/lib/python3.10/site-packages/torch/distributed/fsdp/_init_utils.py", line 648, in _init_param_handle_from_params
[2025-03-27 15:41:41] [rank4]: handle = FlatParamHandle(
[2025-03-27 15:41:41] [rank4]: File "/root/miniconda3/lib/python3.10/site-packages/torch/distributed/fsdp/_flat_param.py", line 602, in __init__
[2025-03-27 15:41:41] [rank4]: self._init_flat_param_and_metadata(
[2025-03-27 15:41:41] [rank4]: File "/root/miniconda3/lib/python3.10/site-packages/torch/distributed/fsdp/_flat_param.py", line 761, in _init_flat_param_and_metadata
[2025-03-27 15:41:41] [rank4]: self.flat_param: FlatParameter = self.flatten_tensors_into_flat_param(
[2025-03-27 15:41:41] [rank4]: File "/root/miniconda3/lib/python3.10/site-packages/torch/distributed/fsdp/_flat_param.py", line 883, in flatten_tensors_into_flat_param
[2025-03-27 15:41:41] [rank4]: flat_param_data = self.flatten_tensors(tensors, aligned_numel)
[2025-03-27 15:41:41] [rank4]: File "/root/miniconda3/lib/python3.10/site-packages/torch/distributed/fsdp/_flat_param.py", line 875, in flatten_tensors
[2025-03-27 15:41:41] [rank4]: return torch.cat(flat_tensors, dim=0)
[2025-03-27 15:41:41] [rank4]: torch.OutOfMemoryError: CUDA out of memory. Tried to allocate 3.27 GiB. GPU 4 has a total capacity of 79.35 GiB of which 1.81 GiB is free. Process 3681145 has 77.53 GiB memory in use. Of the allocated memory 31.23 GiB is allocated by PyTorch, and 44.50 GiB is reserved by PyTorch but unallocated. If reserved but unallocated memory is large try setting PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True to avoid fragmentation. See documentation for Memory Management
```
Does it make sense to use this much memory?
I tried `os.environ["PYTORCH_CUDA_ALLOC_CONF"] = "expandable_segments:True"`
and it doesn't seem to help:
```
[rank2]: File "/root/miniconda3/lib/python3.10/site-packages/torch/distributed/fsdp/fully_sharded_data_parallel.py", line 483, in __init__
[rank2]: _auto_wrap(
[rank2]: File "/root/miniconda3/lib/python3.10/site-packages/torch/distributed/fsdp/_wrap_utils.py", line 101, in _auto_wrap
[rank2]: _recursive_wrap(**recursive_wrap_kwargs, **root_kwargs) # type: ignore[arg-type]
[rank2]: File "/root/miniconda3/lib/python3.10/site-packages/torch/distributed/fsdp/wrap.py", line 545, in _recursive_wrap
[rank2]: wrapped_child, num_wrapped_params = _recursive_wrap(
[rank2]: File "/root/miniconda3/lib/python3.10/site-packages/torch/distributed/fsdp/wrap.py", line 545, in _recursive_wrap
[rank2]: wrapped_child, num_wrapped_params = _recursive_wrap(
[rank2]: File "/root/miniconda3/lib/python3.10/site-packages/torch/distributed/fsdp/wrap.py", line 545, in _recursive_wrap
[rank2]: wrapped_child, num_wrapped_params = _recursive_wrap(
[rank2]: File "/root/miniconda3/lib/python3.10/site-packages/torch/distributed/fsdp/wrap.py", line 563, in _recursive_wrap
[rank2]: return _wrap(module, wrapper_cls, **kwargs), nonwrapped_numel
[rank2]: File "/root/miniconda3/lib/python3.10/site-packages/torch/distributed/fsdp/wrap.py", line 492, in _wrap
[rank2]: return wrapper_cls(module, **kwargs)
[rank2]: File "/root/miniconda3/lib/python3.10/site-packages/torch/distributed/fsdp/fully_sharded_data_parallel.py", line 509, in __init__
[rank2]: _init_param_handle_from_module(
[rank2]: File "/root/miniconda3/lib/python3.10/site-packages/torch/distributed/fsdp/_init_utils.py", line 636, in _init_param_handle_from_module
[rank2]: _init_param_handle_from_params(state, managed_params, fully_sharded_module)
[rank2]: File "/root/miniconda3/lib/python3.10/site-packages/torch/distributed/fsdp/_init_utils.py", line 648, in _init_param_handle_from_params
[rank2]: handle = FlatParamHandle(
[rank2]: File "/root/miniconda3/lib/python3.10/site-packages/torch/distributed/fsdp/_flat_param.py", line 602, in __init__
[rank2]: self._init_flat_param_and_metadata(
[rank2]: File "/root/miniconda3/lib/python3.10/site-packages/torch/distributed/fsdp/_flat_param.py", line 761, in _init_flat_param_and_metadata
[rank2]: self.flat_param: FlatParameter = self.flatten_tensors_into_flat_param(
[rank2]: File "/root/miniconda3/lib/python3.10/site-packages/torch/distributed/fsdp/_flat_param.py", line 883, in flatten_tensors_into_flat_param
[rank2]: flat_param_data = self.flatten_tensors(tensors, aligned_numel)
[rank2]: File "/root/miniconda3/lib/python3.10/site-packages/torch/distributed/fsdp/_flat_param.py", line 875, in flatten_tensors
[rank2]: return torch.cat(flat_tensors, dim=0)
[rank2]: RuntimeError: CUDA driver error: invalid argument
```
I have also tried FSDP2; however, I do not know how to sync params to the meta-initialized ranks, since I need to load weights from a pretrained HF model.
### Versions
```
torch 2.6.0
```
cc @H-Huang @awgu @wanchaol @fegin @fduwjj @wz337 @wconstab @d4l3k @zhaojuanmao @mrshenli @rohan-varma @chauhang @mori360 @kwen2501 @c-p-i-o
| true
|
2,951,952,614
|
Remove torch XPU ABI=0 build logic for old compiler
|
guangyey
|
open
|
[
"module: mkldnn",
"open source",
"ciflow/trunk",
"topic: build",
"ciflow/xpu",
"release notes: xpu",
"ciflow/linux-aarch64"
] | 5
|
COLLABORATOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #150095
# Motivation
Follow https://github.com/pytorch/pytorch/pull/149888, this PR intends to remove ABI=0 build logic for PyTorch XPU build with old compiler.
# Additional Context
This PR depends on XPU CI pass, which will be fixed by https://github.com/pytorch/pytorch/pull/149843 and https://github.com/intel/torch-xpu-ops/pull/1515
cc @gujinghui @PenghuiCheng @XiaobingSuper @jianyuh @jgong5 @mingfeima @sanchitintel @ashokei @jingxu10 @min-jean-cho @yanbing-j @Guobing-Chen @Xia-Weiwen @snadampal
| true
|
2,951,923,764
|
[CPU]detectron2_fcos_r_50_fpn multiple thread float32 static shape default wrapper eager_two_runs_differ accuracy failure in 2025-03-24 nightly release
|
zxd1997066
|
closed
|
[
"module: cpu",
"triaged"
] | 5
|
CONTRIBUTOR
|
### 🐛 Describe the bug
detectron2_fcos_r_50_fpn multiple thread float32 static shape default wrapper accuracy failure
the bad commit: 842d51500be144d53f4d046d31169e8f46c063f6
```
/workspace/pytorch# bash inductor_single_run.sh multiple inference accuracy torchbench detectron2_fcos_r_50_fpn float32
Testing with inductor.
multi-threads testing....
loading model: 0it [00:03, ?it/s]
cpu eval detectron2_fcos_r_50_fpn
WARNING:common:fp64 golden ref were not generated for detectron2_fcos_r_50_fpn. Setting accuracy check to cosine
WARNING:common:Trying to call the empty_gpu_cache for device: cpu, which is not in list [cuda, xpu]
WARNING:common:Trying to call the empty_gpu_cache for device: cpu, which is not in list [cuda, xpu]
WARNING:common:Trying to call the empty_gpu_cache for device: cpu, which is not in list [cuda, xpu]
eager_two_runs_differ
WARNING:common:Trying to call the empty_gpu_cache for device: cpu, which is not in list [cuda, xpu]
dev,name,batch_size,accuracy,calls_captured,unique_graphs,graph_breaks,unique_graph_breaks,autograd_captures,autograd_compiles,cudagraph_skips,compilation_latency
cpu,detectron2_fcos_r_50_fpn,4,eager_two_runs_differ,0,0,0,0,0,0,0,0
```
the last good commit: 85f6d6142148f91ac2a1118ae4abf0598f3c9426
```
/workspace/pytorch# bash inductor_single_run.sh multiple inference accuracy torchbench detectron2_fcos_r_50_fpn float32
Testing with inductor.
multi-threads testing....
loading model: 0it [00:03, ?it/s]
cpu eval detectron2_fcos_r_50_fpn
WARNING:common:fp64 golden ref were not generated for detectron2_fcos_r_50_fpn. Setting accuracy check to cosine
WARNING:common:Trying to call the empty_gpu_cache for device: cpu, which is not in list [cuda, xpu]
WARNING:common:Trying to call the empty_gpu_cache for device: cpu, which is not in list [cuda, xpu]
WARNING:common:Trying to call the empty_gpu_cache for device: cpu, which is not in list [cuda, xpu]
pass
WARNING:common:Trying to call the empty_gpu_cache for device: cpu, which is not in list [cuda, xpu]
dev,name,batch_size,accuracy,calls_captured,unique_graphs,graph_breaks,unique_graph_breaks,autograd_captures,autograd_compiles,cudagraph_skips,compilation_latency
cpu,detectron2_fcos_r_50_fpn,4,pass,944,29,22,4,0,0,22,145.010350
```
### Versions
SW info

| name | target_branch | target_commit | refer_branch | refer_commit |
| --- | --- | --- | --- | --- |
| torchbench | main | 373ffb19 | main | 373ffb19 |
| torch | main | 621c801f786a0fb24766f8b30b5d3e08b5c25fd3 | main | f80bee4934dc2d6c8031f481d699cd4832a1a932 |
| torchvision | main | 0.19.0a0+d23a6e1 | main | 0.19.0a0+d23a6e1 |
| torchtext | main | 0.16.0a0+b0ebddc | main | 0.16.0a0+b0ebddc |
| torchaudio | main | 2.6.0a0+318bace | main | 2.6.0a0+c670ad8 |
| torchdata | main | 0.7.0a0+11bb5b8 | main | 0.7.0a0+11bb5b8 |
| dynamo_benchmarks | main | nightly | main | nightly |
Repro:
[inductor_single_run.sh](https://github.com/chuanqi129/inductor-tools/blob//main/scripts/modelbench/inductor_single_run.sh)
bash inductor_single_run.sh multiple inference accuracy torchbench detectron2_fcos_r_50_fpn float32
Suspected guilty commit: https://github.com/pytorch/pytorch/commit/842d51500be144d53f4d046d31169e8f46c063f6
[torchbench-detectron2_fcos_r_50_fpn-inference-float32-static-default-multiple-accuracy-crash_guilty_commit.log](https://github.com/user-attachments/files/19481492/torchbench-detectron2_fcos_r_50_fpn-inference-float32-static-default-multiple-accuracy-crash_guilty_commit.log)
cc @jgong5 @mingfeima @XiaobingSuper @sanchitintel @ashokei @jingxu10 @chuanqi129
| true
|
2,951,879,564
|
[export] Generate meta kernel
|
angelayi
|
closed
|
[
"ciflow/inductor",
"release notes: export"
] | 1
|
CONTRIBUTOR
|
After draft-export tracing, we accumulate an "operator profile" of all the calls to this operator. The operator profile includes a list of the input tensor metadata and output tensor metadata, where the tensor metadata contains the rank, dtype, device, and layout. We can then use this to generate and register a meta kernel, where if we receive inputs that have the same profile, we will return an output that matches what we previously profiled.
One caveat is that if we re-draft-export with new shapes, this will override the existing registration.
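For illustration, a rough sketch of the profile-matching idea (all names below are made up and do not reflect the actual draft-export implementation):
```python
from dataclasses import dataclass

import torch


@dataclass(frozen=True)
class TensorProfile:
    # Only metadata is recorded: rank, dtype, device type, layout (no sizes).
    rank: int
    dtype: torch.dtype
    device: str
    layout: torch.layout


def profile_of(t: torch.Tensor) -> TensorProfile:
    return TensorProfile(t.dim(), t.dtype, t.device.type, t.layout)


def make_meta_kernel(recorded):
    # recorded: list of (input_profiles, output_profiles) pairs gathered while tracing.
    def meta_kernel(*args):
        key = tuple(profile_of(a) for a in args if isinstance(a, torch.Tensor))
        for in_profs, out_profs in recorded:
            if tuple(in_profs) == key:
                # Fabricate outputs whose metadata matches the recorded profile
                # (sizes are unknown in this sketch, so dummy sizes are used).
                return tuple(
                    torch.empty([1] * p.rank, dtype=p.dtype, device="meta")
                    for p in out_profs
                )
        raise RuntimeError("no recorded profile matches these inputs")

    return meta_kernel
```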
| true
|
2,951,875,267
|
Add `_foreach_fill_` ops
|
zeshengzong
|
open
|
[
"open source",
"release notes: foreach_frontend"
] | 2
|
CONTRIBUTOR
|
Fixes #108445
| true
|
2,951,866,376
|
'torch.mps' has no attribute 'current_device'
|
morestart
|
closed
|
[] | 5
|
NONE
|
### 🐛 Describe the bug
'torch.mps' has no attribute 'current_device'
### Versions
2.6.0
| true
|
2,951,857,786
|
[inductor] No type promotion for slice_scatter
|
anijain2305
|
closed
|
[
"Merged",
"ciflow/trunk",
"topic: not user facing",
"module: inductor",
"ciflow/inductor"
] | 6
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #150082
* __->__ #150090
* #148953
* #150036
* #149667
* #149087
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
| true
|
2,951,823,534
|
Compilation failed for the frozen model
|
Cookiee235
|
open
|
[
"triaged",
"oncall: pt2",
"module: dynamo"
] | 2
|
CONTRIBUTOR
|
### 🐛 Describe the bug
For a frozen model, `torch.compile` will fail and throw "AttributeError: 'RecursiveScriptModule' object has no attribute 'training'".
A workaround for this bug is to call `model.eval()` or set `model.training = False` on the frozen model.
Still, I hope this bug can be fixed in the PyTorch source code.
```python
import torch
inputs = torch.tensor([[1.0, 2.0, 3.0], [4.0, 5.0, 6.0]])
model = torch.nn.Sequential(
torch.nn.Linear(3, 3),
)
model.eval()
model = torch.jit.script(model)
model = torch.jit.freeze(model)
#model.eval()
#model.training = False
compiled_model = torch.compile(model, backend='inductor')
```
### Error logs
```
Traceback (most recent call last):
File "/data/qshenaf/remote_pc/LLM4Converter/bugs/0327/10.py", line 12, in <module>
compiled_model = torch.compile(model, backend='inductor')
File "/data/qshenaf/miniconda3/envs/torch/lib/python3.13/site-packages/torch/__init__.py", line 2574, in compile
return torch._dynamo.optimize(
~~~~~~~~~~~~~~~~~~~~~~~
...<3 lines>...
disable=disable,
~~~~~~~~~~~~~~~~
)(model) # type: ignore[return-value]
~^^^^^^^
File "/data/qshenaf/miniconda3/envs/torch/lib/python3.13/site-packages/torch/_dynamo/eval_frame.py", line 569, in __call__
new_mod = OptimizedModule(mod, self)
File "/data/qshenaf/miniconda3/envs/torch/lib/python3.13/site-packages/torch/_dynamo/eval_frame.py", line 319, in __init__
self.training = self._orig_mod.training
^^^^^^^^^^^^^^^^^^^^^^^
File "/data/qshenaf/miniconda3/envs/torch/lib/python3.13/site-packages/torch/jit/_script.py", line 829, in __getattr__
return super().__getattr__(attr)
~~~~~~~~~~~~~~~~~~~^^^^^^
File "/data/qshenaf/miniconda3/envs/torch/lib/python3.13/site-packages/torch/jit/_script.py", line 536, in __getattr__
return super().__getattr__(attr)
~~~~~~~~~~~~~~~~~~~^^^^^^
File "/data/qshenaf/miniconda3/envs/torch/lib/python3.13/site-packages/torch/nn/modules/module.py", line 1940, in __getattr__
raise AttributeError(
f"'{type(self).__name__}' object has no attribute '{name}'"
)
AttributeError: 'RecursiveScriptModule' object has no attribute 'training'
```
### Versions
PyTorch version: 2.7.0.dev20250308+cu126
Is debug build: False
CUDA used to build PyTorch: 12.6
ROCM used to build PyTorch: N/A
OS: AlmaLinux 9.4 (Seafoam Ocelot) (x86_64)
GCC version: (GCC) 11.4.1 20231218 (Red Hat 11.4.1-3)
Clang version: 17.0.6 (AlmaLinux OS Foundation 17.0.6-5.el9)
CMake version: version 3.26.5
Libc version: glibc-2.34
Python version: 3.13.0 | packaged by Anaconda, Inc. | (main, Oct 7 2024, 21:29:38) [GCC 11.2.0] (64-bit runtime)
Python platform: Linux-5.14.0-427.37.1.el9_4.x86_64-x86_64-with-glibc2.34
Is CUDA available: True
CUDA runtime version: 12.6.77
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration:
GPU 0: NVIDIA GeForce RTX 3090
GPU 1: NVIDIA GeForce RTX 3090
GPU 2: NVIDIA GeForce RTX 3090
GPU 3: NVIDIA GeForce RTX 3090
Nvidia driver version: 560.35.03
cuDNN version: Could not collect
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 43 bits physical, 48 bits virtual
Byte Order: Little Endian
CPU(s): 64
On-line CPU(s) list: 0-63
Vendor ID: AuthenticAMD
Model name: AMD Ryzen Threadripper PRO 3975WX 32-Cores
CPU family: 23
Model: 49
Thread(s) per core: 2
Core(s) per socket: 32
Socket(s): 1
Stepping: 0
Frequency boost: enabled
CPU(s) scaling MHz: 81%
CPU max MHz: 4368.1641
CPU min MHz: 2200.0000
BogoMIPS: 7000.25
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ht syscall nx mmxext fxsr_opt pdpe1gb rdtscp lm constant_tsc rep_good nopl nonstop_tsc cpuid extd_apicid aperfmperf rapl pni pclmulqdq monitor ssse3 fma cx16 sse4_1 sse4_2 x2apic movbe popcnt aes xsave avx f16c rdrand lahf_lm cmp_legacy svm extapic cr8_legacy abm sse4a misalignsse 3dnowprefetch osvw ibs skinit wdt tce topoext perfctr_core perfctr_nb bpext perfctr_llc mwaitx cpb cat_l3 cdp_l3 hw_pstate ssbd mba ibpb stibp vmmcall fsgsbase bmi1 avx2 smep bmi2 cqm rdt_a rdseed adx smap clflushopt clwb sha_ni xsaveopt xsavec xgetbv1 xsaves cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local clzero irperf xsaveerptr rdpru wbnoinvd amd_ppin arat npt lbrv svm_lock nrip_save tsc_scale vmcb_clean flushbyasid decodeassists pausefilter pfthreshold avic v_vmsave_vmload vgif v_spec_ctrl umip rdpid overflow_recov succor smca sev sev_es
Virtualization: AMD-V
L1d cache: 1 MiB (32 instances)
L1i cache: 1 MiB (32 instances)
L2 cache: 16 MiB (32 instances)
L3 cache: 128 MiB (8 instances)
NUMA node(s): 1
NUMA node0 CPU(s): 0-63
Vulnerability Gather data sampling: Not affected
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Not affected
Vulnerability Retbleed: Mitigation; untrained return thunk; SMT enabled with STIBP protection
Vulnerability Spec rstack overflow: Mitigation; Safe RET
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Retpolines, IBPB conditional, STIBP always-on, RSB filling, PBRSB-eIBRS Not affected
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Versions of relevant libraries:
[pip3] numpy==2.2.3
[pip3] nvidia-cublas-cu12==12.6.4.1
[pip3] nvidia-cuda-cupti-cu12==12.6.80
[pip3] nvidia-cuda-nvrtc-cu12==12.6.77
[pip3] nvidia-cuda-runtime-cu12==12.6.77
[pip3] nvidia-cudnn-cu12==9.5.1.17
[pip3] nvidia-cufft-cu12==11.3.0.4
[pip3] nvidia-curand-cu12==10.3.7.77
[pip3] nvidia-cusolver-cu12==11.7.1.2
[pip3] nvidia-cusparse-cu12==12.5.4.2
[pip3] nvidia-cusparselt-cu12==0.6.3
[pip3] nvidia-nccl-cu12==2.25.1
[pip3] nvidia-nvjitlink-cu12==12.6.85
[pip3] nvidia-nvtx-cu12==12.6.77
[pip3] pytorch-triton==3.2.0+git4b3bb1f8
[pip3] torch==2.7.0.dev20250308+cu126
[pip3] torchaudio==2.6.0.dev20250308+cu126
[pip3] torchvision==0.22.0.dev20250308+cu126
[pip3] triton==3.2.0
cc @chauhang @penguinwu @voznesenskym @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @amjames
| true
|
2,951,812,709
|
RuntimeError: (*bias): last dimension must be contiguous
|
pass-lin
|
closed
|
[] | 0
|
NONE
|
When I implemented the model using Keras with the torch backend, I hit this error on GPU.
The error report is [here](https://btx.cloud.google.com/invocations/a4bd9556-5747-4656-8df5-1c2a92206b57/targets/keras_hub%2Fgithub%2Fubuntu%2Fgpu%2Ftorch%2Fpresubmit/log), and the related issue is [here](https://github.com/keras-team/keras-hub/pull/2145).
The strange thing about this bug is that the same test error only seems to appear in Google's online test Docker; I don't get it in my local tests. I wonder what might be causing this.
| true
|
2,951,804,139
|
DISABLED test_triton_kernel_to_post_grad_tracing_cuda (__main__.TestProvenanceTracingArtifact)
|
pytorch-bot[bot]
|
open
|
[
"triaged",
"module: flaky-tests",
"skipped",
"oncall: pt2",
"module: inductor"
] | 5
|
NONE
|
Platforms: linux, rocm, slow
This test was disabled because it is failing in CI. See [recent examples](https://hud.pytorch.org/flakytest?name=test_triton_kernel_to_post_grad_tracing_cuda&suite=TestProvenanceTracingArtifact&limit=100) and the most recent trunk [workflow logs](https://github.com/pytorch/pytorch/runs/39488365321).
Over the past 3 hours, it has been determined flaky in 13 workflow(s) with 26 failures and 13 successes.
**Debugging instructions (after clicking on the recent samples link):**
DO NOT ASSUME THINGS ARE OKAY IF THE CI IS GREEN. We now shield flaky tests from developers so CI will thus be green but it will be harder to parse the logs.
To find relevant log snippets:
1. Click on the workflow logs linked above
2. Click on the Test step of the job so that it is expanded. Otherwise, the grepping will not work.
3. Grep for `test_triton_kernel_to_post_grad_tracing_cuda`
4. There should be several instances run (as flaky tests are rerun in CI) from which you can study the logs.
<details><summary>Sample error message</summary>
```
Traceback (most recent call last):
File "/var/lib/jenkins/workspace/test/inductor/test_provenance_tracing.py", line 188, in test_triton_kernel_to_post_grad_tracing_cuda
self._test_triton_kernel_to_post_grad_tracing(device="cuda")
File "/var/lib/jenkins/workspace/test/inductor/test_provenance_tracing.py", line 114, in _test_triton_kernel_to_post_grad_tracing
self._check_provenance_tracing_artifact(filepath, expected_data)
File "/var/lib/jenkins/workspace/test/inductor/test_provenance_tracing.py", line 61, in _check_provenance_tracing_artifact
self.assertEqual(sorted(actual_data.items()), sorted(expected_data.items()))
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 4055, in assertEqual
error_metas = not_close_error_metas(
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_comparison.py", line 1216, in not_close_error_metas
raise error_meta.to_error() from None # noqa: RSE102
AssertionError: The length of the sequences mismatch: 3 != 2
To execute this test, run the following from the base repo dir:
python test/inductor/test_provenance_tracing.py TestProvenanceTracingArtifact.test_triton_kernel_to_post_grad_tracing_cuda
This message can be suppressed by setting PYTORCH_PRINT_REPRO_ON_FAILURE=0
```
</details>
Test file path: `inductor/test_provenance_tracing.py`
cc @clee2000 @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
| true
|
2,951,606,948
|
fix ambiguous error message
|
Cookiee235
|
closed
|
[
"triaged",
"open source",
"Merged",
"ciflow/trunk",
"topic: not user facing",
"module: dynamo"
] | 25
|
CONTRIBUTOR
|
Fixes #ISSUE_NUMBER
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @chauhang @amjames
| true
|
2,951,555,096
|
backward cleanup for #148430
|
laithsakka
|
closed
|
[
"ciflow/trunk",
"release notes: fx",
"fx",
"ciflow/inductor"
] | 5
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #150085
* #148430
cc @ezyang @SherlockNoMad @EikanWang @jgong5 @wenzhe-nrv
| true
|
2,951,484,817
|
[CD] Fix the libgomp twice load issue
|
chuanqi129
|
open
|
[
"triaged",
"open source",
"topic: not user facing"
] | 8
|
COLLABORATOR
|
Fixes #149422
| true
|
2,951,429,475
|
[Inductor] RuntimeError: Sparse CSR tensors do not have strides
|
Cookiee235
|
open
|
[
"triaged",
"oncall: pt2",
"module: aotdispatch",
"module: pt2-dispatcher"
] | 3
|
CONTRIBUTOR
|
### 🐛 Describe the bug
```python
import torch
class TestModel(torch.nn.Module):
def forward(self, x):
x_sparse = x.to_sparse_csr()
mat1 = torch.ones(3, 2)
mat2 = torch.ones(2, 3)
mm_res = torch.sparse.sampled_addmm(x_sparse, mat1, mat2)
dense_res = mm_res.to_dense()
return dense_res
model = TestModel()
inputs = torch.tensor([[1.0, 0.0, 0.0], [0.0, 2.0, 0.0], [0.0, 0.0, 3.0]])
res = model(inputs)
#compiled_model = torch.compile(model, backend='eager') # run well
compiled_model = torch.compile(model, backend='inductor') # crash
compiled_model(inputs)
```
### Error logs
```
Traceback (most recent call last):
File "/data/qshenaf/remote_pc/LLM4Converter/bugs/0327/82.py", line 18, in <module>
compiled_out = compiled_model(inputs)
File "/data/qshenaf/miniconda3/envs/torch/lib/python3.13/site-packages/torch/nn/modules/module.py", line 1751, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
~~~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^
File "/data/qshenaf/miniconda3/envs/torch/lib/python3.13/site-packages/torch/nn/modules/module.py", line 1762, in _call_impl
return forward_call(*args, **kwargs)
File "/data/qshenaf/miniconda3/envs/torch/lib/python3.13/site-packages/torch/_dynamo/eval_frame.py", line 663, in _fn
raise e.remove_dynamo_frames() from None # see TORCHDYNAMO_VERBOSE=1
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/data/qshenaf/miniconda3/envs/torch/lib/python3.13/site-packages/torch/_dynamo/output_graph.py", line 1541, in _call_user_compiler
raise BackendCompilerFailed(
self.compiler_fn, e, inspect.currentframe()
).with_traceback(e.__traceback__) from None
File "/data/qshenaf/miniconda3/envs/torch/lib/python3.13/site-packages/torch/_dynamo/output_graph.py", line 1516, in _call_user_compiler
compiled_fn = compiler_fn(gm, self.example_inputs())
File "/data/qshenaf/miniconda3/envs/torch/lib/python3.13/site-packages/torch/_dynamo/repro/after_dynamo.py", line 150, in __call__
compiled_gm = compiler_fn(gm, example_inputs)
File "/data/qshenaf/miniconda3/envs/torch/lib/python3.13/site-packages/torch/__init__.py", line 2349, in __call__
return compile_fx(model_, inputs_, config_patches=self.config)
File "/data/qshenaf/miniconda3/envs/torch/lib/python3.13/site-packages/torch/_inductor/compile_fx.py", line 2087, in compile_fx
return aot_autograd(
~~~~~~~~~~~~~
...<6 lines>...
cudagraphs=cudagraphs,
~~~~~~~~~~~~~~~~~~~~~~
)(model_, example_inputs_)
~^^^^^^^^^^^^^^^^^^^^^^^^^
File "/data/qshenaf/miniconda3/envs/torch/lib/python3.13/site-packages/torch/_dynamo/backends/common.py", line 101, in __call__
cg = aot_module_simplified(gm, example_inputs, **self.kwargs)
File "/data/qshenaf/miniconda3/envs/torch/lib/python3.13/site-packages/torch/_functorch/aot_autograd.py", line 1160, in aot_module_simplified
compiled_fn = AOTAutogradCache.load(
dispatch_and_compile,
...<5 lines>...
remote,
)
File "/data/qshenaf/miniconda3/envs/torch/lib/python3.13/site-packages/torch/_functorch/_aot_autograd/autograd_cache.py", line 779, in load
compiled_fn = dispatch_and_compile()
File "/data/qshenaf/miniconda3/envs/torch/lib/python3.13/site-packages/torch/_functorch/aot_autograd.py", line 1145, in dispatch_and_compile
compiled_fn, _ = create_aot_dispatcher_function(
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~^
functional_call,
^^^^^^^^^^^^^^^^
...<3 lines>...
shape_env,
^^^^^^^^^^
)
^
File "/data/qshenaf/miniconda3/envs/torch/lib/python3.13/site-packages/torch/_functorch/aot_autograd.py", line 570, in create_aot_dispatcher_function
return _create_aot_dispatcher_function(
flat_fn, fake_flat_args, aot_config, fake_mode, shape_env
)
File "/data/qshenaf/miniconda3/envs/torch/lib/python3.13/site-packages/torch/_functorch/aot_autograd.py", line 820, in _create_aot_dispatcher_function
compiled_fn, fw_metadata = compiler_fn(
~~~~~~~~~~~^
flat_fn,
^^^^^^^^
...<2 lines>...
fw_metadata=fw_metadata,
^^^^^^^^^^^^^^^^^^^^^^^^
)
^
File "/data/qshenaf/miniconda3/envs/torch/lib/python3.13/site-packages/torch/_functorch/_aot_autograd/jit_compile_runtime_wrappers.py", line 166, in aot_dispatch_base
aot_forward_graph_str = fw_module.print_readable(
print_output=False, include_stride=True, include_device=True
)
File "/data/qshenaf/miniconda3/envs/torch/lib/python3.13/site-packages/torch/fx/graph_module.py", line 931, in print_readable
return _print_readable(
self,
...<4 lines>...
colored,
)
File "/data/qshenaf/miniconda3/envs/torch/lib/python3.13/site-packages/torch/fx/graph_module.py", line 316, in _print_readable
verbose_python_code = graph.python_code(
root_module="self",
...<3 lines>...
colored=colored,
)
File "/data/qshenaf/miniconda3/envs/torch/lib/python3.13/site-packages/torch/fx/graph.py", line 1630, in python_code
return self._python_code(
~~~~~~~~~~~~~~~~~^
root_module,
^^^^^^^^^^^^
...<4 lines>...
colored=colored,
^^^^^^^^^^^^^^^^
)
^
File "/data/qshenaf/miniconda3/envs/torch/lib/python3.13/site-packages/torch/fx/graph.py", line 1649, in _python_code
return self._codegen._gen_python_code(
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~^
self.nodes,
^^^^^^^^^^^
...<5 lines>...
colored=colored,
^^^^^^^^^^^^^^^^
)
^
File "/data/qshenaf/miniconda3/envs/torch/lib/python3.13/site-packages/torch/fx/graph.py", line 754, in _gen_python_code
emit_node(node)
~~~~~~~~~^^^^^^
File "/data/qshenaf/miniconda3/envs/torch/lib/python3.13/site-packages/torch/fx/graph.py", line 645, in emit_node
f"{stringify_shape(meta_val.stride())}"
~~~~~~~~~~~~~~~^^
torch._dynamo.exc.BackendCompilerFailed: backend='inductor' raised:
RuntimeError: Sparse CSR tensors do not have strides
Set TORCHDYNAMO_VERBOSE=1 for the internal stack trace (please do this especially if you're reporting a bug to PyTorch). For even more developer context, set TORCH_LOGS="+dynamo"
```
### Versions
PyTorch version: 2.7.0.dev20250308+cu126
Is debug build: False
CUDA used to build PyTorch: 12.6
ROCM used to build PyTorch: N/A
OS: AlmaLinux 9.4 (Seafoam Ocelot) (x86_64)
GCC version: (GCC) 11.4.1 20231218 (Red Hat 11.4.1-3)
Clang version: 17.0.6 (AlmaLinux OS Foundation 17.0.6-5.el9)
CMake version: version 3.26.5
Libc version: glibc-2.34
Python version: 3.13.0 | packaged by Anaconda, Inc. | (main, Oct 7 2024, 21:29:38) [GCC 11.2.0] (64-bit runtime)
Python platform: Linux-5.14.0-427.37.1.el9_4.x86_64-x86_64-with-glibc2.34
Is CUDA available: True
CUDA runtime version: 12.6.77
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration:
GPU 0: NVIDIA GeForce RTX 3090
GPU 1: NVIDIA GeForce RTX 3090
GPU 2: NVIDIA GeForce RTX 3090
GPU 3: NVIDIA GeForce RTX 3090
Nvidia driver version: 560.35.03
cuDNN version: Could not collect
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 43 bits physical, 48 bits virtual
Byte Order: Little Endian
CPU(s): 64
On-line CPU(s) list: 0-63
Vendor ID: AuthenticAMD
Model name: AMD Ryzen Threadripper PRO 3975WX 32-Cores
CPU family: 23
Model: 49
Thread(s) per core: 2
Core(s) per socket: 32
Socket(s): 1
Stepping: 0
Frequency boost: enabled
CPU(s) scaling MHz: 80%
CPU max MHz: 4368.1641
CPU min MHz: 2200.0000
BogoMIPS: 7000.25
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ht syscall nx mmxext fxsr_opt pdpe1gb rdtscp lm constant_tsc rep_good nopl nonstop_tsc cpuid extd_apicid aperfmperf rapl pni pclmulqdq monitor ssse3 fma cx16sse4_1 sse4_2 x2apic movbe popcnt aes xsave avx f16c rdrand lahf_lm cmp_legacy svm extapic cr8_legacy abm sse4a misalignsse 3dnowprefetch osvw ibs skinit wdt tce topoext perfctr_core perfctr_nb bpext perfctr_llc mwaitx cpb cat_l3 cdp_l3 hw_pstate ssbd mba ibpb stibp vmmcall fsgsbase bmi1 avx2 smep bmi2 cqm rdt_a rdseed adx smap clflushopt clwb sha_ni xsaveopt xsavec xgetbv1 xsaves cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local clzero irperf xsaveerptr rdpru wbnoinvd amd_ppin arat npt lbrv svm_lock nrip_save tsc_scale vmcb_clean flushbyasid decodeassists pausefilter pfthreshold avic v_vmsave_vmload vgif v_spec_ctrl umip rdpid overflow_recov succor smca sev sev_es
Virtualization: AMD-V
L1d cache: 1 MiB (32 instances)
L1i cache: 1 MiB (32 instances)
L2 cache: 16 MiB (32 instances)
L3 cache: 128 MiB (8 instances)
NUMA node(s): 1
NUMA node0 CPU(s): 0-63
Vulnerability Gather data sampling: Not affected
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Not affected
Vulnerability Retbleed: Mitigation; untrained return thunk; SMT enabled with STIBP protection
Vulnerability Spec rstack overflow: Mitigation; Safe RET
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Retpolines, IBPB conditional, STIBP always-on, RSB filling, PBRSB-eIBRS Not affected
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Versions of relevant libraries:
[pip3] numpy==2.2.3
[pip3] nvidia-cublas-cu12==12.6.4.1
[pip3] nvidia-cuda-cupti-cu12==12.6.80
[pip3] nvidia-cuda-nvrtc-cu12==12.6.77
[pip3] nvidia-cuda-runtime-cu12==12.6.77
[pip3] nvidia-cudnn-cu12==9.5.1.17
[pip3] nvidia-cufft-cu12==11.3.0.4
[pip3] nvidia-curand-cu12==10.3.7.77
[pip3] nvidia-cusolver-cu12==11.7.1.2
[pip3] nvidia-cusparse-cu12==12.5.4.2
[pip3] nvidia-cusparselt-cu12==0.6.3
[pip3] nvidia-nccl-cu12==2.25.1
[pip3] nvidia-nvjitlink-cu12==12.6.85
[pip3] nvidia-nvtx-cu12==12.6.77
[pip3] pytorch-triton==3.2.0+git4b3bb1f8
[pip3] torch==2.7.0.dev20250308+cu126
[pip3] torchaudio==2.6.0.dev20250308+cu126
[pip3] torchvision==0.22.0.dev20250308+cu126
[pip3] triton==3.2.0
[conda] numpy 2.2.3 pypi_0 pypi
[conda] nvidia-cublas-cu12 12.6.4.1 pypi_0 pypi
[conda] nvidia-cuda-cupti-cu12 12.6.80 pypi_0 pypi
[conda] nvidia-cuda-nvrtc-cu12 12.6.77 pypi_0 pypi
[conda] nvidia-cuda-runtime-cu12 12.6.77 pypi_0 pypi
[conda] nvidia-cudnn-cu12 9.5.1.17 pypi_0 pypi
[conda] nvidia-cufft-cu12 11.3.0.4 pypi_0 pypi
[conda] nvidia-curand-cu12 10.3.7.77 pypi_0 pypi
[conda] nvidia-cusolver-cu12 11.7.1.2 pypi_0 pypi
[conda] nvidia-cusparse-cu12 12.5.4.2 pypi_0 pypi
[conda] nvidia-cusparselt-cu12 0.6.3 pypi_0 pypi
[conda] nvidia-nccl-cu12 2.25.1 pypi_0 pypi
[conda] nvidia-nvjitlink-cu12 12.6.85 pypi_0 pypi
[conda] nvidia-nvtx-cu12 12.6.77 pypi_0 pypi
[conda] pytorch-triton 3.2.0+git4b3bb1f8 pypi_0 pypi
[conda] torch 2.7.0.dev20250308+cu126 pypi_0 pypi
[conda] torchaudio 2.6.0.dev20250308+cu126 pypi_0 pypi
[conda] torchvision 0.22.0.dev20250308+cu126 pypi_0 pypi
[conda] triton 3.2.0 pypi_0 pypi
cc @chauhang @penguinwu @zou3519 @bdhirsh
| true
|
2,951,428,840
|
[invoke_subgraph] Support None in the fwd output
|
anijain2305
|
closed
|
[
"Merged",
"ciflow/trunk",
"topic: not user facing"
] | 4
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #150486
* #150450
* __->__ #150082
| true
|
2,951,333,414
|
strange error when distributed training
|
Pydataman
|
open
|
[
"needs reproduction",
"oncall: distributed",
"triaged",
"oncall: pt2"
] | 2
|
NONE
|
In torch 2.2.1+cu121, distributed training works fine with small datasets, but this problem occurs when training with datasets of hundreds of millions of samples.
TypeError: _broadcast_coalesced(): incompatible function arguments. The following argument types are supported:
1. (process_group: torch._C._distributed_c10d.ProcessGroup, tensors: List[torch.Tensor], buffer_size: int, src: int = 0) -> None
Invoked with: <torch.distributed.distributed_c10d.ProcessGroup object at 0x7fea22984f30>, [tensor([1.0000e+00, 6.4938e-01, 4.2170e-01, 2.7384e-01, 1.7783e-01, 1.1548e-01,
Why does this happen?
cc @H-Huang @awgu @kwen2501 @wanchaol @fegin @fduwjj @wz337 @wconstab @d4l3k @c-p-i-o @chauhang @penguinwu
| true
|
2,951,274,145
|
[FlexAttention] Allow dispatch to SAC for flex
|
drisspg
|
open
|
[
"module: activation checkpointing",
"release notes: nn",
"module: inductor",
"ciflow/inductor",
"module: higher order operators",
"module: flex attention"
] | 1
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #150080
cc @soulitzer @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov @zou3519 @ydwu4 @Chillee @yanboliang @BoyuanFeng
| true
|
2,951,144,392
|
[c10d] Move unstashing from watchdog to main thread
|
kwen2501
|
closed
|
[
"oncall: distributed",
"ciflow/trunk",
"release notes: distributed (c10d)"
] | 6
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #150130
* __->__ #150079
* #148590
This is a fix to the PR below.
### Context
If a work has completed but the user didn't call `work.wait()`, it is our responsibility to unstash the tensors (to allow the memory to be recycled). Previously we performed the unstashing in the watchdog thread, after it detected that the work had completed while the "tensor shelf" was still full. But this caused some rare issues. (Note that this situation can also happen if the user "deliberately" defers the work.wait() call until very late, or the GPU kernel runs super fast.)
### Solution
This PR moves the unstashing from the watchdog thread to the main thread, and the "rare issue" seems to go away. To do that, we created a "shelf pool" in the PG; when the watchdog is about to erase a work, it transfers the work's shelf to that pool (this is just a shared_ptr copy, thus low in cost). To clean the shelf pool on the main thread, we piggyback on user calls. For example, every collective call triggers `workEnqueue()`, so we put the cleaning there.
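A rough Python-flavored sketch of this handoff (the real code is C++ in ProcessGroupNCCL; the names below are illustrative only):
```python
import threading


class ProcessGroupSketch:
    def __init__(self):
        self._shelves_to_free = []   # the "shelf pool"
        self._mutex = threading.Lock()

    def watchdog_erase_work(self, work):
        # Watchdog thread: do not release tensors here; just hand the shelf over
        # (in C++ this is a cheap shared_ptr copy).
        with self._mutex:
            self._shelves_to_free.append(work.stashed_tensors)

    def work_enqueue(self, new_work):
        # Main thread, reached on every collective call: release the stashed
        # tensors here, so freeing never happens on the watchdog thread.
        with self._mutex:
            self._shelves_to_free.clear()
        # ... enqueue new_work as before ...
```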
### Side topic:
(cc @ngimel )
The fact that unstashing tensors from watchdog thread causes an error / data corruption is weird.
cc @H-Huang @awgu @wanchaol @fegin @fduwjj @wz337 @wconstab @d4l3k @c-p-i-o
Differential Revision: [D71995632](https://our.internmc.facebook.com/intern/diff/D71995632)
| true
|
2,951,140,664
|
[RFC] Remove periodic/unstable jobs that has been continuously broken for more than 30 days
|
malfet
|
open
|
[
"module: ci",
"triaged"
] | 3
|
CONTRIBUTOR
|
### 🐛 Describe the bug
Right now we have a number of non-merge-blocking jobs which are still running, and when they start to fail nobody looks into fixing them.
I propose to start removing jobs that have been continuously broken for more than 30 days.
### Versions
CI
cc @seemethere @pytorch/pytorch-dev-infra
| true
|
2,951,108,402
|
Enable -Wunused on torch targets
|
cyyever
|
closed
|
[
"module: cpu",
"triaged",
"open source",
"Merged",
"Reverted",
"ciflow/binaries",
"ciflow/trunk",
"release notes: quantization",
"topic: not user facing",
"ciflow/periodic",
"ci-no-td",
"skip-url-lint"
] | 19
|
COLLABORATOR
|
For GCC, ``-Wunused`` contains:
```
-Wunused-function
Warn whenever a static function is declared but not defined or a non\-inline static function is unused.
-Wunused-label
Warn whenever a label is declared but not used.
To suppress this warning use the unused attribute.
-Wunused-parameter
Warn whenever a function parameter is unused aside from its declaration.
To suppress this warning use the unused attribute.
-Wunused-variable
Warn whenever a local variable or non-constant static variable is unused aside from its declaration
To suppress this warning use the unused attribute.
```
For Clang, some of the diagnostics controlled by ``-Wunused`` are enabled by default:
```
Controls [-Wunused-argument](https://clang.llvm.org/docs/DiagnosticsReference.html#wunused-argument),
[-Wunused-but-set-variable](https://clang.llvm.org/docs/DiagnosticsReference.html#wunused-but-set-variable),
[-Wunused-function](https://clang.llvm.org/docs/DiagnosticsReference.html#wunused-function),
[-Wunused-label](https://clang.llvm.org/docs/DiagnosticsReference.html#wunused-label), [-Wunused-lambda-capture](https://clang.llvm.org/docs/DiagnosticsReference.html#wunused-lambda-capture),
[-Wunused-local-typedef](https://clang.llvm.org/docs/DiagnosticsReference.html#wunused-local-typedef),
[-Wunused-private-field](https://clang.llvm.org/docs/DiagnosticsReference.html#wunused-private-field),
[-Wunused-property-ivar](https://clang.llvm.org/docs/DiagnosticsReference.html#wunused-property-ivar),
[-Wunused-value](https://clang.llvm.org/docs/DiagnosticsReference.html#wunused-value), [-Wunused-variable](https://clang.llvm.org/docs/DiagnosticsReference.html#wunused-variable).
```
These checks are all useful. This PR aims to enable ``-Wunused`` without breaking code.
cc @jgong5 @mingfeima @XiaobingSuper @sanchitintel @ashokei @jingxu10 @jerryzh168
| true
|
2,951,088,706
|
[WIP] Fix XPU build.
|
etaf
|
closed
|
[
"open source",
"topic: not user facing",
"ciflow/xpu"
] | 1
|
COLLABORATOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #149862
* __->__ #150076
| true
|
2,951,065,160
|
[BE] do not retain/release tensor
|
malfet
|
closed
|
[
"Merged",
"ciflow/trunk",
"topic: not user facing"
] | 10
|
CONTRIBUTOR
|
`Tensor::as_strided__symint` is inplace op that returns self, no need to retain it
| true
|
2,951,057,540
|
[inductor][comms] skip reorder_for_locality for wait nodes
|
xmfan
|
closed
|
[
"oncall: distributed",
"Merged",
"ciflow/trunk",
"topic: not user facing",
"module: inductor",
"ciflow/inductor"
] | 10
|
MEMBER
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #150074
* #150258
cc @H-Huang @awgu @wanchaol @fegin @fduwjj @wz337 @wconstab @d4l3k @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
| true
|
2,951,057,488
|
[ca][ddp] loud error with c++ reducer
|
xmfan
|
closed
|
[
"oncall: distributed",
"release notes: distributed (c10d)"
] | 1
|
MEMBER
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #150258
* __->__ #150073
* #150074
cc @H-Huang @awgu @kwen2501 @wanchaol @fegin @fduwjj @wz337 @wconstab @d4l3k @c-p-i-o
| true
|
2,951,047,161
|
[StaticRuntime] Fuse SigridHash
|
csteegz
|
open
|
[
"oncall: jit",
"fb-exported",
"release notes: jit"
] | 3
|
NONE
|
Summary: Previously, SigridHash could be fused by Static Runtime. That got broken when a new parameter was added to SigridHash. This diff brings back that fusion to try to recover the performance benefits.
Test Plan: Unit tests and internal preproc perf tests
Differential Revision: D69498170
cc @EikanWang @jgong5 @wenzhe-nrv @sanchitintel
| true
|
2,951,039,389
|
Revert "[BE][Attention] Use `isneginf` (#139763)"
|
jeffhataws
|
closed
|
[
"triaged",
"open source",
"better-engineering",
"topic: not user facing"
] | 10
|
NONE
|
This reverts commit 157c18a180398eddef52da559fe1649e35ce61f1.
Fixes https://github.com/pytorch/xla/issues/8746 and https://github.com/pytorch/xla/issues/8423
| true
|
2,951,004,628
|
[cachinghostallocator] remove the check on cudaHostRegister path
|
842974287
|
closed
|
[
"fb-exported",
"Merged",
"ciflow/trunk",
"topic: not user facing"
] | 5
|
CONTRIBUTOR
|
Summary:
In the cudaHostAlloc path, the flag we use is `cudaHostAllocDefault` [0], which doesn't really have this strict enforcement (that the devicePtr retrieved from `cudaHostGetDevicePointer()` points to the same address as the hostPtr) according to the guide [1]. This diff removes the check so that the host register path works for ROCm.
[0]https://github.com/pytorch/pytorch/blob/6aca002d82e5131cbf48496a04e7b0213ace1c03/aten/src/ATen/cuda/CachingHostAllocator.cpp#L97
[1] https://docs.nvidia.com/cuda/cuda-runtime-api/group__CUDART__MEMORY.html#group__CUDART__MEMORY_1gb65da58f444e7230d3322b6126bb4902
Test Plan: test_pinned_memory_with_cudaregister tests
Differential Revision: D71932562
| true
|
2,950,997,818
|
[export][schema_upgrader][refactor] create a folder that holds different major version schemas
|
ydwu4
|
open
|
[
"fb-exported",
"ciflow/trunk",
"ciflow/inductor",
"release notes: export"
] | 5
|
CONTRIBUTOR
|
Summary:
A lot of places directly reference the dataclasses in schema.py. Given that we need to keep the original dataclasses to maintain BC after a major version bump, we'll create multiple schemas for different major versions.
Note that this also implies that we need an upgrader if we remove (or rename) a dataclass.
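As a rough illustration of what this could look like (module paths, function names, and the renamed field below are hypothetical, not the actual torch._export code):
```python
import importlib

# One frozen module of dataclasses per major schema version (hypothetical paths).
_SCHEMA_MODULES = {
    8: "torch._export.serde.schema_v8",   # frozen copy kept for BC
    9: "torch._export.serde.schema",      # current major version
}


def load_schema_module(major_version: int):
    return importlib.import_module(_SCHEMA_MODULES[major_version])


def upgrade_v8_to_v9(payload: dict) -> dict:
    # If a dataclass or field was removed/renamed between majors, an upgrader like
    # this rewrites the old serialized payload before deserialization.
    payload = dict(payload)
    if "old_field_name" in payload:        # hypothetical rename
        payload["new_field_name"] = payload.pop("old_field_name")
    return payload
```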
Test Plan: Existing export tests.
Differential Revision: D71936455
| true
|
2,950,988,021
|
Fix #149806 : Fix path lookup in _preload_cuda_deps
|
pytorchbot
|
closed
|
[
"open source"
] | 1
|
COLLABORATOR
|
@pytorchbot label "bug"
Fixes #149806
| true
|
2,950,987,066
|
[MPS] fix attention enable_gqa crash on mps
|
pytorchbot
|
closed
|
[
"open source",
"release notes: mps",
"ciflow/mps"
] | 1
|
COLLABORATOR
|
Fixes #149132
cc @kulinseth @albanD @malfet @DenisVieriu97 @jhavukainen
| true
|
2,950,977,557
|
Delete linux-focal-cuda12_6-py3_10-gcc11-bazel-test
|
malfet
|
closed
|
[
"Merged",
"topic: not user facing"
] | 3
|
CONTRIBUTOR
|
It's been broken for a while, even when this job was still called `linux-focal-cuda12.4-py3.10-gcc9-bazel-test`.
The last time it ran successfully was on Feb 21st.
| true
|
2,950,970,918
|
rework test_mem_get_info for single gpu case
|
Fuzzkatt
|
open
|
[
"triaged",
"open source",
"topic: not user facing"
] | 1
|
COLLABORATOR
|
Increase the block size from 8 MB to 512 MB since Jetson has unified CPU/GPU memory and the block sizes seem to need to be much larger. Also add sleeps after synchronize calls since they are needed to pass consistently on NVIDIA internal CI. Ideally this would not be a long-term solution; we will follow up by debugging why torch.cuda.mem_get_info requires this.
cc @eqy
| true
|
2,950,953,846
|
No stacktrace found for torch.check deferred runtime asserts
|
laithsakka
|
open
|
[
"triaged",
"oncall: pt2",
"module: dynamo"
] | 1
|
CONTRIBUTOR
|
```
@torch.compile(
backend="inductor",
fullgraph=True,
dynamic=True,
)
def f(a, b):
torch._check(
a.size()[0] > 0,
"linalg.vector_norm cannot compute the {ord} norm on an empty tensor "
"because the operation does not have an identity",
)
# _check_vector_norm_args(a, -1, 1)
return b*10
a = torch.ones(10, 10, device="cuda",dtype=torch.float64)
b = torch.torch.ones(10, 10, device="cuda",dtype=torch.float64)
torch._dynamo.decorators.mark_unbacked(a, 0)
with fresh_inductor_cache():
f(a, b)
````
```
===== Forward graph 0 =====
/home/lsakka/pytorch/torch/fx/_lazy_graph_module.py class <lambda>(torch.nn.Module):
def forward(self, arg0_1: "Sym(u0)", arg1_1: "Sym(s0)", arg2_1: "f64[s0, s0][s0, 1]cuda:0"):
# File: /home/lsakka/pytorch/example8.py:56 in f, code: a.size()[0] > 0,
ge_1: "Sym(u0 >= 0)" = arg0_1 >= 0
_assert_scalar = torch.ops.aten._assert_scalar.default(ge_1, "Runtime assertion failed for expression u0 >= 0 on node 'ge'"); ge_1 = _assert_scalar = None
# No stacktrace found for following nodes
gt: "Sym(u0 > 0)" = arg0_1 > 0; arg0_1 = None
_assert_scalar_1 = torch.ops.aten._assert_scalar.default(gt, "Runtime assertion failed for expression 0 < u0 on node 'gt_1'"); gt = _assert_scalar_1 = None
# File: /home/lsakka/pytorch/example8.py:62 in f, code: return b*10
mul: "f64[s0, s0][s0, 1]cuda:0" = torch.ops.aten.mul.Tensor(arg2_1, 10); arg2_1 = None
return (mul,)
```
cc @chauhang @penguinwu @voznesenskym @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @amjames
| true
|
2,950,952,062
|
Torch.check does not preserve original error message in the deferred runtime assert
|
laithsakka
|
open
|
[
"triaged",
"oncall: pt2",
"module: dynamo"
] | 1
|
CONTRIBUTOR
|
```
@torch.compile(
backend="inductor",
fullgraph=True,
dynamic=True,
)
def f(a, b):
torch._check(
a.size()[0] > 0,
"linalg.vector_norm cannot compute the {ord} norm on an empty tensor "
"because the operation does not have an identity",
)
# _check_vector_norm_args(a, -1, 1)
return b*10
a = torch.ones(10, 10, device="cuda",dtype=torch.float64)
b = torch.torch.ones(10, 10, device="cuda",dtype=torch.float64)
torch._dynamo.decorators.mark_unbacked(a, 0)
f(a, b)
```
output:
```
/home/lsakka/pytorch/torch/fx/_lazy_graph_module.py class <lambda>(torch.nn.Module):
def forward(self, arg0_1: "Sym(u0)", arg1_1: "Sym(s0)", arg2_1: "f64[s0, s0][s0, 1]cuda:0"):
# File: /home/lsakka/pytorch/example8.py:56 in f, code: a.size()[0] > 0,
ge_1: "Sym(u0 >= 0)" = arg0_1 >= 0
_assert_scalar = torch.ops.aten._assert_scalar.default(ge_1, "Runtime assertion failed for expression u0 >= 0 on node 'ge'"); ge_1 = _assert_scalar = None
# No stacktrace found for following nodes
gt: "Sym(u0 > 0)" = arg0_1 > 0; arg0_1 = None
_assert_scalar_1 = torch.ops.aten._assert_scalar.default(gt, "Runtime assertion failed for expression 0 < u0 on node 'gt_1'"); gt = _assert_scalar_1 = None
# File: /home/lsakka/pytorch/example8.py:62 in f, code: return b*10
mul: "f64[s0, s0][s0, 1]cuda:0" = torch.ops.aten.mul.Tensor(arg2_1, 10); arg2_1 = None
return (mul,)
```
instead of "Runtime assertion failed for expression 0 < u0 on node 'gt_1'" we should get
"linalg.vector_norm cannot compute the {ord} norm on an empty tensor "
"because the operation does not have an identity",
cc @chauhang @penguinwu @voznesenskym @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @amjames
| true
|
2,950,943,166
|
Support HOPs in fx_graph_runnable
|
xmfan
|
open
|
[
"triaged",
"actionable",
"oncall: pt2",
"module: higher order operators",
"module: pt2-dispatcher"
] | 1
|
MEMBER
|
### 🚀 The feature, motivation and pitch
The fx_graph_runnable file is a standalone script that can reproduce the run. It is useful for fast debugging and allows trying things out directly in the graph, but it isn't runnable when HOPs are in the graph. As a workaround, I'm stitching dummy data into the graph (see the sketch after the generated code below).
https://manifold.edge.x2p.facebook.net/v0/read/tree/logs/xmfan/d3b889bd-1c47-4957-ab04-c9b2e30bc935/custom/rank_0/0_2_0_0/fx_graph_runnable_64.txt?bucketName=tlparse_reports&apiKey=tlparse_reports-key&withPayload=1&timeoutMsec=10000
```python
class Repro(torch.nn.Module):
def __init__(self) -> None:
super().__init__()
self.fw_graph0 = <lambda>()
self.joint_graph0 = <lambda>()
self.mask_graph0 = <lambda>()
self.fw_graph1 = <lambda>()
self.joint_graph1 = <lambda>()
self.mask_graph1 = <lambda>()
self.fw_graph2 = <lambda>()
self.joint_graph2 = <lambda>()
self.mask_graph2 = <lambda>()
self.fw_graph3 = <lambda>()
self.joint_graph3 = <lambda>()
self.mask_graph3 = <lambda>()
self.fw_graph4 = <lambda>()
self.joint_graph4 = <lambda>()
self.mask_graph4 = <lambda>()
self.fw_graph5 = <lambda>()
self.joint_graph5 = <lambda>()
self.mask_graph5 = <lambda>()
self.fw_graph6 = <lambda>()
self.joint_graph6 = <lambda>()
self.mask_graph6 = <lambda>()
self.fw_graph7 = <lambda>()
self.joint_graph7 = <lambda>()
self.mask_graph7 = <lambda>()
self.fw_graph8 = <lambda>()
self.joint_graph8 = <lambda>()
self.mask_graph8 = <lambda>()
self.fw_graph9 = <lambda>()
self.joint_graph9 = <lambda>()
self.mask_graph9 = <lambda>()
self.fw_graph10 = <lambda>()
self.joint_graph10 = <lambda>()
self.mask_graph10 = <lambda>()
```
### Alternatives
_No response_
### Additional context
_No response_
cc @chauhang @penguinwu @zou3519 @ydwu4 @bdhirsh
| true
|
2,950,937,400
|
[Export] [Core ATen] [Decomposition] `linalg_vector_norm` not decomposed
|
YifanShenSZ
|
open
|
[
"triaged",
"oncall: pt2",
"export-triaged",
"oncall: export"
] | 2
|
CONTRIBUTOR
|
### 🐛 Describe the bug
As defined in [native functions](https://github.com/pytorch/pytorch/blob/v2.6.0/aten/src/ATen/native/native_functions.yaml#L14189-L14199), operator `linalg_vector_norm` is not tagged as `core`, which means it doesn't belong to core ATen. In other words, when we run
```
import torch
class Model(torch.nn.Module):
def forward(self, x):
return torch.linalg.vector_norm(x)
model = Model()
model.eval()
x = torch.rand(2, 3)
aten_program = torch.export.export(model, (x,))
print(aten_program)
core_aten_program = aten_program.run_decompositions()
print(core_aten_program)
```
We expect to see it decomposed in `core_aten_program`, but currently it isn't.
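A possible workaround, continuing the repro above (I have not verified that a decomposition for this op is registered outside the core set), is to request it explicitly:
```python
from torch._decomp import get_decompositions

decomp_table = get_decompositions([torch.ops.aten.linalg_vector_norm])
core_aten_program = aten_program.run_decompositions(decomp_table)
print(core_aten_program)
```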
### Versions
torch 2.6.0
cc @chauhang @penguinwu @avikchaudhuri @gmagogsfm @zhxchen17 @tugsbayasgalan @angelayi @suo @ydwu4
| true
|
2,950,928,340
|
[MPS] Add `chebyshev_polynomial_[uvw]`
|
malfet
|
closed
|
[
"Merged",
"topic: not user facing",
"release notes: mps",
"ciflow/mps",
"module: inductor",
"ciflow/inductor"
] | 4
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #150060
For both eager and inductor
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
| true
|
2,950,908,164
|
[CI] Disable some tests that are failing in periodic
|
clee2000
|
closed
|
[
"Merged",
"topic: not user facing",
"ciflow/periodic",
"module: inductor",
"ciflow/inductor",
"keep-going"
] | 5
|
CONTRIBUTOR
|
Disabling some tests to restore periodic
nogpu avx512 timeout:
https://hud.pytorch.org/pytorch/pytorch/commit/59f14d19aea4091c65cca2417c509e3dbf60c0ed#38492953496-box
profiler failure: https://hud.pytorch.org/pytorch/pytorch/commit/7ae0ce6360b6e4f944906502d20da24c04debee5#38461255009-box
test_accelerator failure:
https://hud.pytorch.org/pytorch/pytorch/commit/87bfd66c3c7061db6d36d8daa62f08f507f90e39#39476723746-box
origin: 146098
test_overrides failure:
https://hud.pytorch.org/pytorch/pytorch/commit/bf752c36da08871d76a66fd52ad09f87e66fc770#39484562957-box
origin: 146098
inductor cpu repro:
https://hud.pytorch.org/pytorch/pytorch/commit/bb9c4260249ea0c57e87395eff5271fb479efb6a#38447525659-box
functorch eager transforms:
https://hud.pytorch.org/pytorch/pytorch/commit/8f858e226ba81fde41d39aa34f1fd4cb4a4ecc51#39488068620-box
https://hud.pytorch.org/pytorch/pytorch/commit/f2cea01f7195e59abd154b5551213ee3e38fa40d#39555064878
https://hud.pytorch.org/pytorch/pytorch/commit/b5281a4a1806c978e34c5cfa0befd298e469b7fd#39599355600
either 148288 or 148261?
https://hud.pytorch.org/hud/pytorch/pytorch/2ec9aceaeb77176c4bdeb2d008a34cba0cd57e3c/1?per_page=100&name_filter=periodic&mergeLF=true
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
| true
|
2,950,908,001
|
Fix bug in _load_state_dict_from_keys method
|
ankitageorge
|
closed
|
[
"oncall: distributed",
"fb-exported",
"Merged",
"ciflow/trunk",
"topic: not user facing",
"release notes: distributed (checkpoint)"
] | 5
|
CONTRIBUTOR
|
Summary:
The _load_state_dict_from_keys method specifies that `Loads any key specified in this set. If no keys are specified, the entire checkpoint is loaded.`
But this isn't happening right now, because an empty keys arg is passed to `_load_state_dict` as a set(), and keys is expected to be None for everything to actually be included in the state_dict (https://fburl.com/code/l8yzojyx). So with the set() argument, the state_dict is always going to be empty.
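A minimal sketch of the intended behavior (hypothetical helper, not the actual DCP code):
```python
def normalize_keys(keys=None):
    # "No keys requested" must stay None so the downstream loader treats it as
    # "load the entire checkpoint" instead of matching against an empty set().
    if not keys:
        return None
    return set(keys)


assert normalize_keys(None) is None
assert normalize_keys(set()) is None
assert normalize_keys(["model.weight"]) == {"model.weight"}
```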
Test Plan: ensure existing tests pass
Differential Revision: D71930712
cc @H-Huang @awgu @kwen2501 @wanchaol @fegin @fduwjj @wz337 @wconstab @d4l3k @c-p-i-o
| true
|
2,950,899,088
|
[export] Export fails with multiple dimension indexing
|
angelayi
|
closed
|
[
"oncall: pt2",
"module: dynamic shapes",
"export-triaged",
"oncall: export"
] | 1
|
CONTRIBUTOR
|
### 🐛 Describe the bug
```python
def test_slicing(self):
class M(torch.nn.Module):
def forward(self, x, y):
b = x.item()
torch._check_is_size(b)
torch._check(b < y.shape[0])
return y[0, b]
print(torch.export.export(M(), (torch.tensor(4), torch.ones(10, 10))))
```
fails with
```
File "/data/users/angelayi/pytorch/moo.py", line 76, in forward
return y[0, b]
File "/data/users/angelayi/pytorch/torch/fx/experimental/proxy_tensor.py", line 1290, in __torch_function__
return func(*args, **kwargs)
File "/data/users/angelayi/pytorch/torch/fx/experimental/proxy_tensor.py", line 1337, in __torch_function__
return func(*args, **kwargs)
File "/data/users/angelayi/pytorch/torch/_export/non_strict_utils.py", line 689, in __torch_function__
return func(*args, **kwargs)
IndexError: only integers, slices (`:`), ellipsis (`...`), None and long or byte Variables are valid indices (got SymInt)
```
The workaround is to replace `y[0, b]` with `y[0][b]`.
x-post from ET: https://github.com/pytorch/executorch/issues/9486
### Versions
main
cc @chauhang @penguinwu @ezyang @bobrenjc93 @avikchaudhuri @gmagogsfm @zhxchen17 @tugsbayasgalan @suo @ydwu4
| true
|
2,950,889,692
|
FlexAttention inductor tensor has no attribute `get_dtype`
|
tsengalb99
|
closed
|
[
"needs reproduction",
"triaged",
"oncall: pt2",
"module: higher order operators",
"module: pt2-dispatcher",
"module: flex attention"
] | 3
|
NONE
|
### 🐛 Describe the bug
I am getting the following bug when compiling flex attention
``` torch._inductor.exc.InductorError: LoweringException: AttributeError: 'Tensor' object has no attribute 'get_dtype'```
This error does not happen without torch compile.
### Versions
Collecting environment information...
PyTorch version: 2.8.0.dev20250326+cu128
Is debug build: False
CUDA used to build PyTorch: 12.8
ROCM used to build PyTorch: N/A
OS: Ubuntu 22.04.5 LTS (x86_64)
GCC version: (Ubuntu 11.4.0-1ubuntu1~22.04) 11.4.0
Clang version: Could not collect
CMake version: Could not collect
Libc version: glibc-2.35
Python version: 3.12.8 | packaged by conda-forge | (main, Dec 5 2024, 14:24:40) [GCC 13.3.0] (64-bit runtime)
Python platform: Linux-5.15.0-130-generic-x86_64-with-glibc2.35
Is CUDA available: True
CUDA runtime version: Could not collect
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration:
GPU 0: NVIDIA H100 80GB HBM3
GPU 1: NVIDIA H100 80GB HBM3
GPU 2: NVIDIA H100 80GB HBM3
GPU 3: NVIDIA H100 80GB HBM3
GPU 4: NVIDIA H100 80GB HBM3
GPU 5: NVIDIA H100 80GB HBM3
GPU 6: NVIDIA H100 80GB HBM3
GPU 7: NVIDIA H100 80GB HBM3
Nvidia driver version: 565.57.01
cuDNN version: Could not collect
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 46 bits physical, 57 bits virtual
Byte Order: Little Endian
CPU(s): 64
On-line CPU(s) list: 0-63
Vendor ID: GenuineIntel
Model name: Intel(R) Xeon(R) Platinum 8462Y+
CPU family: 6
Model: 143
Thread(s) per core: 1
Core(s) per socket: 32
Socket(s): 2
Stepping: 8
Frequency boost: enabled
CPU max MHz: 2801.0000
CPU min MHz: 800.0000
BogoMIPS: 5600.00
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc art arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc cpuid aperfmperf tsc_known_freq pni pclmulqdq dtes64 ds_cpl vmx smx est tm2 ssse3 sdbg fma cx16 xtpr pdcm pcid dca sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm 3dnowprefetch cpuid_fault epb cat_l3 cat_l2 cdp_l3 invpcid_single intel_ppin cdp_l2 ssbd mba ibrs ibpb stibp ibrs_enhanced tpr_shadow vnmi flexpriority ept vpid ept_ad fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid cqm rdt_a avx512f avx512dq rdseed adx smap avx512ifma clflushopt clwb intel_pt avx512cd sha_ni avx512bw avx512vl xsaveopt xsavec xgetbv1 xsaves cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local split_lock_detect avx_vnni avx512_bf16 wbnoinvd dtherm ida arat pln pts avx512vbmi umip pku ospke waitpkg avx512_vbmi2 gfni vaes vpclmulqdq avx512_vnni avx512_bitalg tme avx512_vpopcntdq la57 rdpid bus_lock_detect cldemote movdiri movdir64b enqcmd fsrm md_clear serialize tsxldtrk pconfig arch_lbr amx_bf16 avx512_fp16 amx_tile amx_int8 flush_l1d arch_capabilities
Virtualization: VT-x
L1d cache: 3 MiB (64 instances)
L1i cache: 2 MiB (64 instances)
L2 cache: 128 MiB (64 instances)
L3 cache: 120 MiB (2 instances)
NUMA node(s): 2
NUMA node0 CPU(s): 0-31
NUMA node1 CPU(s): 32-63
Vulnerability Gather data sampling: Not affected
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Not affected
Vulnerability Reg file data sampling: Not affected
Vulnerability Retbleed: Not affected
Vulnerability Spec rstack overflow: Not affected
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl and seccomp
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Enhanced / Automatic IBRS; IBPB conditional; RSB filling; PBRSB-eIBRS SW sequence; BHI BHI_DIS_S
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Versions of relevant libraries:
[pip3] kmeans-pytorch==0.3
[pip3] numpy==2.2.2
[pip3] nvidia-cublas-cu12==12.8.3.14
[pip3] nvidia-cuda-cupti-cu12==12.8.57
[pip3] nvidia-cuda-nvrtc-cu12==12.8.61
[pip3] nvidia-cuda-runtime-cu12==12.8.57
[pip3] nvidia-cudnn-cu12==9.8.0.87
[pip3] nvidia-cufft-cu12==11.3.3.41
[pip3] nvidia-curand-cu12==10.3.9.55
[pip3] nvidia-cusolver-cu12==11.7.2.55
[pip3] nvidia-cusparse-cu12==12.5.7.53
[pip3] nvidia-cusparselt-cu12==0.6.3
[pip3] nvidia-nccl-cu12==2.26.2
[pip3] nvidia-nvjitlink-cu12==12.8.61
[pip3] nvidia-nvtx-cu12==12.8.55
[pip3] pytorch-lightning==1.9.5
[pip3] pytorch-lightning-bolts==0.3.2.post1
[pip3] pytorch-triton==3.3.0+git96316ce5
[pip3] torch==2.8.0.dev20250326+cu128
[pip3] torch_pca==1.0.0
[pip3] torch-shampoo==1.0.0
[pip3] torchao==0.8.0
[pip3] torchaudio==2.6.0.dev20250326+cu128
[pip3] torchmetrics==1.6.1
[pip3] torchvision==0.22.0.dev20250326+cu128
[pip3] triton==3.2.0
[conda] kmeans-pytorch 0.3 pypi_0 pypi
[conda] numpy 2.2.2 pypi_0 pypi
[conda] nvidia-cublas-cu12 12.8.3.14 pypi_0 pypi
[conda] nvidia-cuda-cupti-cu12 12.8.57 pypi_0 pypi
[conda] nvidia-cuda-nvrtc-cu12 12.8.61 pypi_0 pypi
[conda] nvidia-cuda-runtime-cu12 12.8.57 pypi_0 pypi
[conda] nvidia-cudnn-cu12 9.8.0.87 pypi_0 pypi
[conda] nvidia-cufft-cu12 11.3.3.41 pypi_0 pypi
[conda] nvidia-curand-cu12 10.3.9.55 pypi_0 pypi
[conda] nvidia-cusolver-cu12 11.7.2.55 pypi_0 pypi
[conda] nvidia-cusparse-cu12 12.5.7.53 pypi_0 pypi
[conda] nvidia-cusparselt-cu12 0.6.3 pypi_0 pypi
[conda] nvidia-nccl-cu12 2.26.2 pypi_0 pypi
[conda] nvidia-nvjitlink-cu12 12.8.61 pypi_0 pypi
[conda] nvidia-nvtx-cu12 12.8.55 pypi_0 pypi
[conda] pytorch-lightning 1.9.5 pypi_0 pypi
[conda] pytorch-lightning-bolts 0.3.2.post1 pypi_0 pypi
[conda] pytorch-triton 3.3.0+git96316ce5 pypi_0 pypi
[conda] torch 2.8.0.dev20250326+cu128 pypi_0 pypi
[conda] torch-pca 1.0.0 pypi_0 pypi
[conda] torch-shampoo 1.0.0 pypi_0 pypi
[conda] torchao 0.8.0 pypi_0 pypi
[conda] torchaudio 2.6.0.dev20250326+cu128 pypi_0 pypi
[conda] torchmetrics 1.6.1 pypi_0 pypi
[conda] torchvision 0.22.0.dev20250326+cu128 pypi_0 pypi
[conda] triton 3.2.0 pypi_0 pypi
cc @chauhang @penguinwu @zou3519 @ydwu4 @bdhirsh @Chillee @drisspg @yanboliang @BoyuanFeng
| true
|
2,950,885,166
|
Custom attributes for ONNX operations ?
|
borisfom
|
closed
|
[
"module: onnx",
"triaged"
] | 1
|
CONTRIBUTOR
|
### 🚀 The feature, motivation and pitch
For custom ONNX operations, it would be nice to be able to specify custom attributes on the node.
For example, if I define a custom ONNX op “mydomain::mycustomop”, which I plan to implement as a TensorRT plugin, ‘mydomain’ currently ends up in the ‘domain’ attribute of the node.
The TensorRT parser, however, expects the namespace of the plugin operation to be specified in a ‘plugin_namespace’ attribute.
### Alternatives
There is currently no way to add such an attribute directly in a custom ONNX op definition without direct surgery on the graph.
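For illustration, a minimal sketch of that post-export graph surgery using the `onnx` Python package (the op name, domain, and file paths below are placeholders for this example):
```python
import onnx
from onnx import helper

# Hypothetical: the model was exported with a custom op "mydomain::mycustomop".
model = onnx.load("model.onnx")
for node in model.graph.node:
    if node.domain == "mydomain" and node.op_type == "mycustomop":
        # Attach the string attribute the TensorRT parser expects.
        node.attribute.extend([helper.make_attribute("plugin_namespace", "mydomain")])
onnx.save(model, "model_patched.onnx")
```
Being able to express this directly in the custom op definition would avoid the extra pass.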
### Additional context
_No response_
| true
|
2,950,854,114
|
`scaled_dot_product_attention` backwards: illegal memory access with large inputs
|
jatentaki
|
open
|
[
"module: crash",
"module: cuda",
"triaged",
"module: sdpa"
] | 3
|
NONE
|
### 🐛 Describe the bug
With a large enough input, `scaled_dot_product_attention` crashes with an illegal CUDA memory access in the backward pass. Providing an attention mask appears to be important to trigger it.
## Repro script
```python
import torch
device = torch.device("cuda")
dtype = torch.bfloat16 # doesn't seem to matter, also fails for float32 and float16
# Parameters that cause the crash. Ballpark, a bit smaller and anything larger fails as well
num_queries = 49999
num_keys = 49999
num_heads = 8
feature_dim = 64
# Generate random input tensors with gradients in the shape expected by scaled_dot_product_attention
query = torch.randn(1, num_heads, num_queries, feature_dim, device=device, dtype=dtype, requires_grad=True)
key = torch.randn(1, num_heads, num_keys, feature_dim, device=device, dtype=dtype, requires_grad=True)
value = torch.randn(1, num_heads, num_keys, feature_dim, device=device, dtype=dtype, requires_grad=True)
mask = torch.ones((num_queries, num_keys), dtype=torch.bool, device=query.device)
output = torch.nn.functional.scaled_dot_product_attention(
    query,
    key,
    value,
    attn_mask=mask,
)
# Backward pass
loss = output.sum()
loss.backward()
# Attempt to print the gradient norm reveals the failure
print(f"query.grad.norm(): {query.grad.norm().item()}")
```
## Output
```
~/Programs> python repro_flash_attention_crash.py
Traceback (most recent call last):
File "/home/mtyszkiewicz/Programs/repro_flash_attention_crash.py", line 30, in <module>
print(f"query.grad.norm(): {query.grad.norm().item()}")
File "/home/mtyszkiewicz/miniforge3/lib/python3.10/site-packages/torch/_tensor.py", line 872, in norm
return torch.norm(self, p, dim, keepdim, dtype=dtype)
File "/home/mtyszkiewicz/miniforge3/lib/python3.10/site-packages/torch/functional.py", line 1805, in norm
return torch.linalg.vector_norm(
RuntimeError: CUDA error: an illegal memory access was encountered
CUDA kernel errors might be asynchronously reported at some other API call, so the stacktrace below might be incorrect.
For debugging consider passing CUDA_LAUNCH_BLOCKING=1
Compile with `TORCH_USE_CUDA_DSA` to enable device-side assertions.
```
## Compute-sanitizer
Running under compute-sanitizer shows the below (a selection)
```
========= Invalid __global__ read of size 16 bytes
========= at fmha_cutlassB_bf16_aligned_64x64_k64_sm80(PyTorchMemEffAttention::AttentionBackwardKernel<cutlass::arch::Sm80, cutlass::bfloat16_t, (bool)1, (bool)0, (bool)1, (int)64, (int)64, (int)64, (bool)0>::Params)+0x5740
========= by thread (64,0,0) in block (217,0,0)
========= Address 0x749e827e0b80 is out of bounds
========= and is 4,286,706,816 bytes before the nearest allocation at 0x749f82000000 of size 5,001,707,520 bytes
```
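One way to confirm the attribution (the failing `fmha_cutlassB` kernel above belongs to the memory-efficient backend) is to restrict SDPA dispatch to that backend explicitly. A minimal sketch, reusing `query`/`key`/`value`/`mask` from the repro above and assuming the `torch.nn.attention` API present in recent releases:
```python
from torch.nn.attention import sdpa_kernel, SDPBackend

# Reuses query/key/value/mask from the repro above. If the memory-efficient
# kernel is at fault, restricting dispatch to it should reproduce the same
# crash. (The MATH fallback is not a practical workaround here: a 49999x49999
# attention matrix per head would likely not fit in memory.)
with sdpa_kernel(SDPBackend.EFFICIENT_ATTENTION):
    output = torch.nn.functional.scaled_dot_product_attention(
        query, key, value, attn_mask=mask,
    )
    output.sum().backward()
```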
### Versions
PyTorch version: 2.6.0+cu124
Is debug build: False
CUDA used to build PyTorch: 12.4
ROCM used to build PyTorch: N/A
OS: Ubuntu 24.04 LTS (x86_64)
GCC version: (Ubuntu 13.3.0-6ubuntu2~24.04) 13.3.0
Clang version: 18.1.3 (1ubuntu1)
CMake version: version 3.28.3
Libc version: glibc-2.39
Python version: 3.10.14 | packaged by conda-forge | (main, Mar 20 2024, 12:45:18) [GCC 12.3.0] (64-bit runtime)
Python platform: Linux-6.8.0-51-generic-x86_64-with-glibc2.39
Is CUDA available: True
CUDA runtime version: Could not collect
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration: GPU 0: NVIDIA RTX 5880 Ada Generation
Nvidia driver version: 560.28.03
cuDNN version: Probably one of the following:
/usr/lib/x86_64-linux-gnu/libcudnn.so.9.3.0
/usr/lib/x86_64-linux-gnu/libcudnn_adv.so.9.3.0
/usr/lib/x86_64-linux-gnu/libcudnn_cnn.so.9.3.0
/usr/lib/x86_64-linux-gnu/libcudnn_engines_precompiled.so.9.3.0
/usr/lib/x86_64-linux-gnu/libcudnn_engines_runtime_compiled.so.9.3.0
/usr/lib/x86_64-linux-gnu/libcudnn_graph.so.9.3.0
/usr/lib/x86_64-linux-gnu/libcudnn_heuristic.so.9.3.0
/usr/lib/x86_64-linux-gnu/libcudnn_ops.so.9.3.0
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 48 bits physical, 48 bits virtual
Byte Order: Little Endian
CPU(s): 32
On-line CPU(s) list: 0-31
Vendor ID: AuthenticAMD
Model name: AMD Ryzen 9 7950X 16-Core Processor
CPU family: 25
Model: 97
Thread(s) per core: 2
Core(s) per socket: 16
Socket(s): 1
Stepping: 2
CPU(s) scaling MHz: 60%
CPU max MHz: 5881.0000
CPU min MHz: 400.0000
BogoMIPS: 8982.90
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ht syscall nx mmxext fxsr_opt pdpe1gb rdtscp lm constant_tsc rep_good amd_lbr_v2 nopl nonstop_tsc cpuid extd_apicid aperfmperf rapl pni pclmulqdq monitor ssse3 fma cx16 sse4_1 sse4_2 movbe popcnt aes xsave avx f16c rdrand lahf_lm cmp_legacy svm extapic cr8_legacy abm sse4a misalignsse 3dnowprefetch osvw ibs skinit wdt tce topoext perfctr_core perfctr_nb bpext perfctr_llc mwaitx cpb cat_l3 cdp_l3 hw_pstate ssbd mba perfmon_v2 ibrs ibpb stibp ibrs_enhanced vmmcall fsgsbase bmi1 avx2 smep bmi2 erms invpcid cqm rdt_a avx512f avx512dq rdseed adx smap avx512ifma clflushopt clwb avx512cd sha_ni avx512bw avx512vl xsaveopt xsavec xgetbv1 xsaves cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local user_shstk avx512_bf16 clzero irperf xsaveerptr rdpru wbnoinvd cppc arat npt lbrv svm_lock nrip_save tsc_scale vmcb_clean flushbyasid decodeassists pausefilter pfthreshold avic v_vmsave_vmload vgif x2avic v_spec_ctrl vnmi avx512vbmi umip pku ospke avx512_vbmi2 gfni vaes vpclmulqdq avx512_vnni avx512_bitalg avx512_vpopcntdq rdpid overflow_recov succor smca fsrm flush_l1d
Virtualization: AMD-V
L1d cache: 512 KiB (16 instances)
L1i cache: 512 KiB (16 instances)
L2 cache: 16 MiB (16 instances)
L3 cache: 64 MiB (2 instances)
NUMA node(s): 1
NUMA node0 CPU(s): 0-31
Vulnerability Gather data sampling: Not affected
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Not affected
Vulnerability Reg file data sampling: Not affected
Vulnerability Retbleed: Not affected
Vulnerability Spec rstack overflow: Mitigation; Safe RET
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Enhanced / Automatic IBRS; IBPB conditional; STIBP always-on; RSB filling; PBRSB-eIBRS Not affected; BHI Not affected
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Versions of relevant libraries:
[pip3] mypy-extensions==1.0.0
[pip3] numpy==2.1.3
[pip3] nvidia-cublas-cu12==12.4.5.8
[pip3] nvidia-cuda-cupti-cu12==12.4.127
[pip3] nvidia-cuda-nvrtc-cu12==12.4.127
[pip3] nvidia-cuda-runtime-cu12==12.4.127
[pip3] nvidia-cudnn-cu12==9.1.0.70
[pip3] nvidia-cufft-cu12==11.2.1.3
[pip3] nvidia-curand-cu12==10.3.5.147
[pip3] nvidia-cusolver-cu12==11.6.1.9
[pip3] nvidia-cusparse-cu12==12.3.1.170
[pip3] nvidia-cusparselt-cu12==0.6.2
[pip3] nvidia-nccl-cu12==2.21.5
[pip3] nvidia-nvjitlink-cu12==12.4.127
[pip3] nvidia-nvtx-cu12==12.4.127
[pip3] torch==2.6.0
[pip3] triton==3.2.0
[conda] numpy 2.1.3 pypi_0 pypi
[conda] nvidia-cublas-cu12 12.4.5.8 pypi_0 pypi
[conda] nvidia-cuda-cupti-cu12 12.4.127 pypi_0 pypi
[conda] nvidia-cuda-nvrtc-cu12 12.4.127 pypi_0 pypi
[conda] nvidia-cuda-runtime-cu12 12.4.127 pypi_0 pypi
[conda] nvidia-cudnn-cu12 9.1.0.70 pypi_0 pypi
[conda] nvidia-cufft-cu12 11.2.1.3 pypi_0 pypi
[conda] nvidia-curand-cu12 10.3.5.147 pypi_0 pypi
[conda] nvidia-cusolver-cu12 11.6.1.9 pypi_0 pypi
[conda] nvidia-cusparse-cu12 12.3.1.170 pypi_0 pypi
[conda] nvidia-cusparselt-cu12 0.6.2 pypi_0 pypi
[conda] nvidia-nccl-cu12 2.21.5 pypi_0 pypi
[conda] nvidia-nvjitlink-cu12 12.4.127 pypi_0 pypi
[conda] nvidia-nvtx-cu12 12.4.127 pypi_0 pypi
[conda] torch 2.6.0 pypi_0 pypi
[conda] triton 3.2.0 pypi_0 pypi
cc @ptrblck @msaroufim @eqy
| true
|
2,950,841,426
|
[Dynamo] Add debug linting option for graph dedupe
|
mlazos
|
closed
|
[
"Merged",
"ciflow/trunk",
"module: dynamo",
"ciflow/inductor",
"release notes: dynamo"
] | 6
|
CONTRIBUTOR
|
As title
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @chauhang @amjames
| true
|
2,950,801,002
|
Profiler doesn't seem to work on AMD CPUs
|
RedTachyon
|
open
|
[
"module: rocm",
"triaged",
"oncall: profiler"
] | 5
|
CONTRIBUTOR
|
### 🐛 Describe the bug
Initially spotted in https://github.com/pytorch/torchtune/pull/2522
A minimal version of the code that crashes is something like:
```python
import torch
import torch.profiler
def minimal_crash():
    device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
    print(f"Using device: {device}")
    with torch.profiler.profile(
        activities=[
            torch.profiler.ProfilerActivity.CPU,
            torch.profiler.ProfilerActivity.CUDA,
        ],
        profile_memory=True,
        with_stack=True,
        record_shapes=True,
    ) as prof:
        for i in range(10):
            prof.step()  # advance the profiler to the next step
    # Print out a summary of the profiling run.
    print(prof.key_averages().table(sort_by="cuda_time_total"))

if __name__ == '__main__':
    minimal_crash()
```
The resulting error message is:
```
Using device: cuda
Traceback (most recent call last):
File "/opt/hpcaas/.mounts/fs-0df31b178aa4037ac/home/kwiat/projects/polygon/test_crash.py", line 28, in <module>
main()
File "/opt/hpcaas/.mounts/fs-0df31b178aa4037ac/home/kwiat/projects/polygon/test_crash.py", line 8, in main
with torch.profiler.profile(
File "/opt/hpcaas/.mounts/fs-0df31b178aa4037ac/home/kwiat/projects/torchtune-latent/.venv/lib/python3.12/site-packages/torch/profiler/profiler.py", line 748, in __exit__
self.stop()
File "/opt/hpcaas/.mounts/fs-0df31b178aa4037ac/home/kwiat/projects/torchtune-latent/.venv/lib/python3.12/site-packages/torch/profiler/profiler.py", line 764, in stop
self._transit_action(self.current_action, None)
File "/opt/hpcaas/.mounts/fs-0df31b178aa4037ac/home/kwiat/projects/torchtune-latent/.venv/lib/python3.12/site-packages/torch/profiler/profiler.py", line 793, in _transit_action
action()
File "/opt/hpcaas/.mounts/fs-0df31b178aa4037ac/home/kwiat/projects/torchtune-latent/.venv/lib/python3.12/site-packages/torch/profiler/profiler.py", line 212, in stop_trace
self.profiler.__exit__(None, None, None)
File "/opt/hpcaas/.mounts/fs-0df31b178aa4037ac/home/kwiat/projects/torchtune-latent/.venv/lib/python3.12/site-packages/torch/autograd/profiler.py", line 359, in __exit__
self.kineto_results = _disable_profiler()
^^^^^^^^^^^^^^^^^^^
RuntimeError: !stack.empty() INTERNAL ASSERT FAILED at "../torch/csrc/autograd/profiler_python.cpp":981, please report a bug to PyTorch. Python replay stack is empty.
```
Error asks to report it to PyTorch, so here I am.
From some initial testing, this always happens if the profiler is tracking CPU activity, and if I'm running it on an AMD CPU. When I run the same code on an Intel CPU, it goes through as expected. So my best guess is that it's some incompatibility between the PyTorch profiler and AMD CPUs.
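To help narrow it down, a small diagnostic sketch (my assumption: `with_stack=True` is the trigger, since the assert comes from `profiler_python.cpp`) that toggles that option while keeping only CPU activity:
```python
import torch
import torch.profiler

# Toggle with_stack to see whether the "Python replay stack is empty" assert
# only fires when stack collection is enabled (CPU-only activity is enough,
# per the observation above).
for with_stack in (False, True):
    try:
        with torch.profiler.profile(
            activities=[torch.profiler.ProfilerActivity.CPU],
            with_stack=with_stack,
        ) as prof:
            for _ in range(10):
                prof.step()
        print(f"with_stack={with_stack}: ok")
    except RuntimeError as exc:
        print(f"with_stack={with_stack}: {exc}")
```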
### Versions
Collecting environment information...
PyTorch version: 2.5.1+cu124
Is debug build: False
CUDA used to build PyTorch: 12.4
ROCM used to build PyTorch: N/A
OS: Ubuntu 22.04.5 LTS (x86_64)
GCC version: (Ubuntu 11.4.0-1ubuntu1~22.04) 11.4.0
Clang version: Could not collect
CMake version: version 3.31.4
Libc version: glibc-2.35
Python version: 3.12.3 | packaged by conda-forge | (main, Apr 15 2024, 18:38:13) [GCC 12.3.0] (64-bit runtime)
Python platform: Linux-6.8.0-1021-aws-x86_64-with-glibc2.35
Is CUDA available: True
CUDA runtime version: 12.4.131
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration:
GPU 0: NVIDIA H100 80GB HBM3
GPU 1: NVIDIA H100 80GB HBM3
GPU 2: NVIDIA H100 80GB HBM3
GPU 3: NVIDIA H100 80GB HBM3
GPU 4: NVIDIA H100 80GB HBM3
GPU 5: NVIDIA H100 80GB HBM3
GPU 6: NVIDIA H100 80GB HBM3
GPU 7: NVIDIA H100 80GB HBM3
Nvidia driver version: 550.144.03
cuDNN version: Could not collect
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 48 bits physical, 48 bits virtual
Byte Order: Little Endian
CPU(s): 192
On-line CPU(s) list: 0-191
Vendor ID: AuthenticAMD
Model name: AMD EPYC 7R13 Processor
CPU family: 25
Model: 1
Thread(s) per core: 2
Core(s) per socket: 48
Socket(s): 2
Stepping: 1
BogoMIPS: 5299.99
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ht syscall nx mmxext fxsr_opt pdpe1gb rdtscp lm constant_tsc rep_good nopl nonstop_tsc cpuid extd_apicid aperfmperf tsc_known_freq pni pclmulqdq monitor ssse3 fma cx16 pcid sse4_1 sse4_2 x2apic movbe popcnt aes xsave avx f16c rdrand hypervisor lahf_lm cmp_legacy cr8_legacy abm sse4a misalignsse 3dnowprefetch topoext perfctr_core ssbd ibrs ibpb stibp vmmcall fsgsbase bmi1 avx2 smep bmi2 invpcid rdseed adx smap clflushopt clwb sha_ni xsaveopt xsavec xgetbv1 clzero xsaveerptr rdpru wbnoinvd arat npt nrip_save vaes vpclmulqdq rdpid
Hypervisor vendor: KVM
Virtualization type: full
L1d cache: 3 MiB (96 instances)
L1i cache: 3 MiB (96 instances)
L2 cache: 48 MiB (96 instances)
L3 cache: 384 MiB (12 instances)
NUMA node(s): 2
NUMA node0 CPU(s): 0-47,96-143
NUMA node1 CPU(s): 48-95,144-191
Vulnerability Gather data sampling: Not affected
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Not affected
Vulnerability Reg file data sampling: Not affected
Vulnerability Retbleed: Not affected
Vulnerability Spec rstack overflow: Vulnerable: Safe RET, no microcode
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Retpolines; IBPB conditional; IBRS_FW; STIBP always-on; RSB filling; PBRSB-eIBRS Not affected; BHI Not affected
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Versions of relevant libraries:
[pip3] numpy==1.26.4
[pip3] nvidia-cublas-cu12==12.4.5.8
[pip3] nvidia-cuda-cupti-cu12==12.4.127
[pip3] nvidia-cuda-nvrtc-cu12==12.4.127
[pip3] nvidia-cuda-runtime-cu12==12.4.127
[pip3] nvidia-cudnn-cu12==9.1.0.70
[pip3] nvidia-cufft-cu12==11.2.1.3
[pip3] nvidia-curand-cu12==10.3.5.147
[pip3] nvidia-cusolver-cu12==11.6.1.9
[pip3] nvidia-cusparse-cu12==12.3.1.170
[pip3] nvidia-cusparselt-cu12==0.6.2
[pip3] nvidia-nccl-cu12==2.21.5
[pip3] nvidia-nvjitlink-cu12==12.4.127
[pip3] nvidia-nvtx-cu12==12.4.127
[pip3] torch==2.5.1
[pip3] torch-tb-profiler==0.4.3
[pip3] torchao==0.8.0
[pip3] torchaudio==2.5.1
[pip3] torchdata==0.11.0
[pip3] torchtune==0.0.0
[pip3] torchvision==0.20.1
[pip3] triton==3.1.0
[conda] No relevant packages
cc @jeffdaily @sunway513 @jithunnair-amd @pruthvistony @ROCmSupport @dllehr-amd @jataylo @hongxiayang @naromero77amd @robieta @chaekit @guotuofeng @guyang3532 @dzhulgakov @davidberard98 @briancoutinho @sraikund16 @sanrise
| true
|
2,950,774,435
|
[MPS] `chebyshev_polynomial_t` returns garbage if 2nd arg is scalar
|
malfet
|
closed
|
[
"triaged",
"module: correctness (silent)",
"module: mps"
] | 1
|
CONTRIBUTOR
|
### 🐛 Describe the bug
I.e. running something like `torch.special.chebyshev_polynomial_t(torch.rand(4, 4, device='mps'), 2)` returns a tensor filled with 1s on an M1 machine, but works fine on everything newer
```
% python3 -c "import torch;x=torch.rand(4, 4, device='mps');print(torch.special.chebyshev_polynomial_t(x, 2))"
tensor([[1., 1., 1., 1.],
[1., 1., 1., 1.],
[1., 1., 1., 1.],
[1., 1., 1., 1.]], device='mps:0')
```
But the same command on a different machine
```
% python3 -c "import torch;x=torch.rand(4, 4, device='mps');print(torch.special.chebyshev_polynomial_t(x, 2))"
tensor([[ 0.0103, -0.9365, -0.0997, 0.3640],
[-0.9811, 0.0197, -0.5390, -0.6257],
[-0.7586, -0.9924, -0.3608, -0.9997],
[-0.9963, -0.9560, -0.1980, 0.7501]], device='mps:0')
```
### Versions
nightly
cc @kulinseth @albanD @DenisVieriu97 @jhavukainen
| true
|
2,950,771,333
|
`torch.view_as_complex()` does not work on memory layout produced by `torch.contiguous()` after transpose
|
alisterburt
|
open
|
[
"triaged",
"module: complex"
] | 0
|
NONE
|
### 🐛 Describe the bug
```python
import torch
def print_strides(x):
    print(x.stride(0), x.stride(1), x.stride(2))
x = torch.rand(336, 1, 2)
print_strides(x) # 2 2 1
torch.view_as_complex(x) # success!
x = torch.rand(336, 2, 1)
x = x.transpose(1, 2).contiguous()
print_strides(x) # 2 1 1
torch.view_as_complex(x) # RuntimeError: Tensor must have a stride divisible by 2 for all but last dimension
```
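A possible workaround sketch for this specific case (my assumption: since the offending stride belongs to a size-1 dim, it never affects which elements are addressed, so it can be rewritten with `as_strided`):
```python
# Reuses x from the repro above: shape (336, 1, 2), strides (2, 1, 1).
# The stride of the size-1 dim is irrelevant for addressing, so set it to the
# value view_as_complex expects (divisible by 2).
x_fixed = x.as_strided(x.size(), (2, 2, 1))
torch.view_as_complex(x_fixed)  # succeeds
```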
c.f. arogozhnikov/einops#370
### Versions
n/a
cc @ezyang @anjali411 @dylanbespalko @mruberry @nikitaved @amjames
| true
|
2,950,714,492
|
2.7 test docker image has NCCL with older CUDA
|
andreasst
|
open
|
[
"oncall: distributed",
"oncall: releng",
"triaged",
"module: docker"
] | 2
|
NONE
|
### 🐛 Describe the bug
The `ghcr.io/pytorch/pytorch-test:2.7.0-cuda12.8-cudnn9-devel` nightly image (`sha256:f89440bd12a73cec62f03099885089d9d7f0084ea8fc08fa4967a63151dfa6f2`) ships an NCCL build compiled against the older CUDA 12.2, coming from the pip package `nvidia-nccl-cu12`:
```
$ strings /opt/conda/lib/python3.11/site-packages/nvidia/nccl/lib/libnccl.so.2 | grep "NCCL version" | head -1
NCCL version 2.25.1+cuda12.2
```
This causes a 45 second slowdown on startup on my system.
Note that this is different from the NCCL library in /usr/lib (from the apt package libnccl2), which uses CUDA 12.8 as expected:
```
$ strings /usr/lib/x86_64-linux-gnu/libnccl.so.2 | grep "NCCL version" | head -1
NCCL version 2.25.1+cuda12.8
```
The 2.6 image `pytorch/pytorch:2.6.0-cuda12.4-cudnn9-devel` doesn't have this issue: it uses a version consistent with the CUDA 12.4 it is designed for (`NCCL version 2.21.5+cuda12.4`).
### Versions
```
Collecting environment information...
PyTorch version: 2.7.0+cu128
Is debug build: False
CUDA used to build PyTorch: 12.8
ROCM used to build PyTorch: N/A
OS: Ubuntu 22.04.5 LTS (x86_64)
GCC version: (Ubuntu 11.4.0-1ubuntu1~22.04) 11.4.0
Clang version: Could not collect
CMake version: version 3.31.6
Libc version: glibc-2.35
Python version: 3.11.11 | packaged by conda-forge | (main, Mar 3 2025, 20:43:55) [GCC 13.3.0] (64-bit runtime)
Python platform: Linux-6.1.0-32-cloud-amd64-x86_64-with-glibc2.35
Is CUDA available: True
CUDA runtime version: 12.8.61
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration:
GPU 0: NVIDIA H100 80GB HBM3
GPU 1: NVIDIA H100 80GB HBM3
GPU 2: NVIDIA H100 80GB HBM3
GPU 3: NVIDIA H100 80GB HBM3
GPU 4: NVIDIA H100 80GB HBM3
GPU 5: NVIDIA H100 80GB HBM3
GPU 6: NVIDIA H100 80GB HBM3
GPU 7: NVIDIA H100 80GB HBM3
Nvidia driver version: 550.90.12
cuDNN version: Could not collect
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 52 bits physical, 57 bits virtual
Byte Order: Little Endian
CPU(s): 208
On-line CPU(s) list: 0-207
Vendor ID: GenuineIntel
Model name: Intel(R) Xeon(R) Platinum 8481C CPU @ 2.70GHz
CPU family: 6
Model: 143
Thread(s) per core: 2
Core(s) per socket: 52
Socket(s): 2
Stepping: 8
BogoMIPS: 5399.99
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ss ht syscall nx pdpe1gb rdtscp lm constant_tsc rep_good nopl xtopology nonstop_tsc cpuid tsc_known_freq pni pclmulqdq ssse3 fma cx16 pcid sse4_1 sse4_2 x2apic movbe popcnt aes xsave avx f16c rdrand hypervisor lahf_lm abm 3dnowprefetch invpcid_single ssbd ibrs ibpb stibp ibrs_enhanced fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid rtm avx512f avx512dq rdseed adx smap avx512ifma clflushopt clwb avx512cd sha_ni avx512bw avx512vl xsaveopt xsavec xgetbv1 xsaves avx_vnni avx512_bf16 arat avx512vbmi umip avx512_vbmi2 gfni vaes vpclmulqdq avx512_vnni avx512_bitalg avx512_vpopcntdq la57 rdpid cldemote movdiri movdir64b fsrm md_clear serialize tsxldtrk amx_bf16 avx512_fp16 amx_tile amx_int8 arch_capabilities
Hypervisor vendor: KVM
Virtualization type: full
L1d cache: 4.9 MiB (104 instances)
L1i cache: 3.3 MiB (104 instances)
L2 cache: 208 MiB (104 instances)
L3 cache: 210 MiB (2 instances)
NUMA node(s): 2
NUMA node0 CPU(s): 0-51,104-155
NUMA node1 CPU(s): 52-103,156-207
Vulnerability Gather data sampling: Not affected
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Not affected
Vulnerability Reg file data sampling: Not affected
Vulnerability Retbleed: Not affected
Vulnerability Spec rstack overflow: Not affected
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Enhanced / Automatic IBRS; IBPB conditional; RSB filling; PBRSB-eIBRS SW sequence; BHI SW loop, KVM SW loop
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Versions of relevant libraries:
[pip3] numpy==2.2.3
[pip3] nvidia-cublas-cu12==12.8.3.14
[pip3] nvidia-cuda-cupti-cu12==12.8.57
[pip3] nvidia-cuda-nvrtc-cu12==12.8.61
[pip3] nvidia-cuda-runtime-cu12==12.8.57
[pip3] nvidia-cudnn-cu12==9.7.1.26
[pip3] nvidia-cufft-cu12==11.3.3.41
[pip3] nvidia-curand-cu12==10.3.9.55
[pip3] nvidia-cusolver-cu12==11.7.2.55
[pip3] nvidia-cusparse-cu12==12.5.7.53
[pip3] nvidia-cusparselt-cu12==0.6.3
[pip3] nvidia-nccl-cu12==2.25.1
[pip3] nvidia-nvjitlink-cu12==12.8.61
[pip3] nvidia-nvtx-cu12==12.8.55
[pip3] optree==0.14.1
[pip3] torch==2.7.0+cu128
[pip3] torchaudio==2.7.0+cu128
[pip3] torchelastic==0.2.2
[pip3] torchvision==0.22.0+cu128
[pip3] triton==3.3.0
[conda] numpy 2.2.3 py311h5d046bc_0 conda-forge
[conda] nvidia-cublas-cu12 12.8.3.14 pypi_0 pypi
[conda] nvidia-cuda-cupti-cu12 12.8.57 pypi_0 pypi
[conda] nvidia-cuda-nvrtc-cu12 12.8.61 pypi_0 pypi
[conda] nvidia-cuda-runtime-cu12 12.8.57 pypi_0 pypi
[conda] nvidia-cudnn-cu12 9.7.1.26 pypi_0 pypi
[conda] nvidia-cufft-cu12 11.3.3.41 pypi_0 pypi
[conda] nvidia-curand-cu12 10.3.9.55 pypi_0 pypi
[conda] nvidia-cusolver-cu12 11.7.2.55 pypi_0 pypi
[conda] nvidia-cusparse-cu12 12.5.7.53 pypi_0 pypi
[conda] nvidia-cusparselt-cu12 0.6.3 pypi_0 pypi
[conda] nvidia-nccl-cu12 2.25.1 pypi_0 pypi
[conda] nvidia-nvjitlink-cu12 12.8.61 pypi_0 pypi
[conda] nvidia-nvtx-cu12 12.8.55 pypi_0 pypi
[conda] optree 0.14.1 pypi_0 pypi
[conda] torch 2.7.0+cu128 pypi_0 pypi
[conda] torchaudio 2.7.0+cu128 pypi_0 pypi
[conda] torchelastic 0.2.2 pypi_0 pypi
[conda] torchvision 0.22.0+cu128 pypi_0 pypi
[conda] triton 3.3.0 pypi_0 pypi
```
cc @H-Huang @awgu @kwen2501 @wanchaol @fegin @fduwjj @wz337 @wconstab @d4l3k @c-p-i-o
| true
|
2,950,698,249
|
cuda memory error thrown by torch.
|
Corey4005
|
open
|
[
"module: windows",
"triaged",
"module: wsl"
] | 1
|
NONE
|
### 🐛 Describe the bug
Hello, I am receiving an `Error 2: out of memory` error after installing torch on WSL2:
```
Python 3.12.3 (main, Feb 4 2025, 14:48:35) [GCC 13.3.0] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> import torch
>>> torch.cuda.is_available()
/home/user/another/lib/python3.12/site-packages/torch/cuda/__init__.py:129: UserWarning: CUDA initialization: Unexpected error from cudaGetDeviceCount(). Did you run some cuda functions before calling NumCudaDevices() that might have already set an error? Error 2: out of memory (Triggered internally at /pytorch/c10/cuda/CUDAFunctions.cpp:109.)
return torch._C._cuda_getDeviceCount() > 0
False
```
I installed PyTorch on my machine using a virtual environment and then the CUDA 12.4 install command:
```
python3 -m venv ./new
pip3 install torch torchvision torchaudio
```
Here is the output from nvidia-smi:

Here is the output of collect_env.py:
```
python collect_env.py
Collecting environment information...
/home/user/another/lib/python3.12/site-packages/torch/cuda/__init__.py:129: UserWarning: CUDA initialization: Unexpected error from cudaGetDeviceCount(). Did you run some cuda functions before calling NumCudaDevices() that might have already set an error? Error 2: out of memory (Triggered internally at /pytorch/c10/cuda/CUDAFunctions.cpp:109.)
return torch._C._cuda_getDeviceCount() > 0
PyTorch version: 2.6.0+cu124
Is debug build: False
CUDA used to build PyTorch: 12.4
ROCM used to build PyTorch: N/A
OS: Ubuntu 24.04.1 LTS (x86_64)
GCC version: Could not collect
Clang version: Could not collect
CMake version: Could not collect
Libc version: glibc-2.39
Python version: 3.12.3 (main, Feb 4 2025, 14:48:35) [GCC 13.3.0] (64-bit runtime)
Python platform: Linux-5.15.146.1-microsoft-standard-WSL2-x86_64-with-glibc2.39
Is CUDA available: False
CUDA runtime version: Could not collect
CUDA_MODULE_LOADING set to: N/A
GPU models and configuration:
GPU 0: NVIDIA RTX A4000
GPU 1: NVIDIA RTX A4000
GPU 2: NVIDIA RTX A4000
GPU 3: NVIDIA RTX A4000
Nvidia driver version: 572.60
cuDNN version: Could not collect
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 46 bits physical, 48 bits virtual
Byte Order: Little Endian
CPU(s): 20
On-line CPU(s) list: 0-19
Vendor ID: GenuineIntel
Model name: Intel(R) Core(TM) i9-10900X CPU @ 3.70GHz
CPU family: 6
Model: 85
Thread(s) per core: 2
Core(s) per socket: 10
Socket(s): 1
Stepping: 7
BogoMIPS: 7391.99
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ss ht syscall nx pdpe1gb rdtscp lm constant_tsc arch_perfmon rep_good nopl xtopology tsc_reliable nonstop_tsc cpuid pni pclmulqdq ssse3 fma cx16 pdcm pcid sse4_1 sse4_2 movbe popcnt aes xsave avx f16c rdrand hypervisor lahf_lm abm 3dnowprefetch invpcid_single ssbd ibrs ibpb stibp ibrs_enhanced fsgsbase bmi1 avx2 smep bmi2 erms invpcid avx512f avx512dq rdseed adx smap clflushopt clwb avx512cd avx512bw avx512vl xsaveopt xsavec xgetbv1 xsaves avx512_vnni md_clear flush_l1d arch_capabilities
Hypervisor vendor: Microsoft
Virtualization type: full
L1d cache: 320 KiB (10 instances)
L1i cache: 320 KiB (10 instances)
L2 cache: 10 MiB (10 instances)
L3 cache: 19.3 MiB (1 instance)
Vulnerability Gather data sampling: Unknown: Dependent on hypervisor status
Vulnerability Itlb multihit: KVM: Mitigation: VMX unsupported
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Vulnerable: Clear CPU buffers attempted, no microcode; SMT Host state unknown
Vulnerability Retbleed: Mitigation; Enhanced IBRS
Vulnerability Spec rstack overflow: Not affected
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl and seccomp
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Enhanced IBRS, IBPB conditional, RSB filling, PBRSB-eIBRS SW sequence
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Mitigation; TSX disabled
Versions of relevant libraries:
[pip3] numpy==2.2.4
[pip3] nvidia-cublas-cu12==12.4.5.8
[pip3] nvidia-cuda-cupti-cu12==12.4.127
[pip3] nvidia-cuda-nvrtc-cu12==12.4.127
[pip3] nvidia-cuda-runtime-cu12==12.4.127
[pip3] nvidia-cudnn-cu12==9.1.0.70
[pip3] nvidia-cufft-cu12==11.2.1.3
[pip3] nvidia-curand-cu12==10.3.5.147
[pip3] nvidia-cusolver-cu12==11.6.1.9
[pip3] nvidia-cusparse-cu12==12.3.1.170
[pip3] nvidia-cusparselt-cu12==0.6.2
[pip3] nvidia-nccl-cu12==2.21.5
[pip3] nvidia-nvjitlink-cu12==12.4.127
[pip3] nvidia-nvtx-cu12==12.4.127
[pip3] torch==2.6.0
[pip3] torchaudio==2.6.0
[pip3] torchvision==0.21.0
[pip3] triton==3.2.0
[conda] Could not collect
```
### Versions
Can you help me understand how to solve the memory error?
Also, I receive a [similar](https://github.com/tensorflow/tensorflow/issues/88676#issuecomment-2755059000) error when I try to install TensorFlow. I am wondering if there is something wrong with my system setup for WSL2.
cc @peterjc123 @mszhanyi @skyline75489 @nbcsm @iremyux @Blackhex
| true
|
2,950,679,605
|
XPU build failure with DLE 2025.1.0
|
pbchekin
|
open
|
[
"module: build",
"oncall: profiler",
"module: xpu"
] | 9
|
NONE
|
Deep Learning Essentials 2025.1.0 has been [released](https://www.intel.com/content/www/us/en/developer/tools/oneapi/base-toolkit-download.html?packages=dl-essentials&dl-essentials-os=linux&dl-lin=offline). Building PyTorch XPU with this release fails with the following errors:
```
pytorch/third_party/kineto/libkineto/src/plugin/xpupti/XpuptiActivityApi.cpp: In member function ‘void libkineto::XpuptiActivityApi::enableXpuptiActivities(const std::set<libkineto::ActivityType>&)’:
pytorch/third_party/kineto/libkineto/src/plugin/xpupti/XpuptiActivityApi.cpp:195:33: error: ‘PTI_VIEW_SYCL_RUNTIME_CALLS’ was not declared in this scope; did you mean ‘PTI_VIEW_RUNTIME_API’?
195 | XPUPTI_CALL(ptiViewEnable(PTI_VIEW_SYCL_RUNTIME_CALLS));
```
```
pytorch/third_party/kineto/libkineto/src/plugin/xpupti/XpuptiActivityProfiler.cpp:1:
pytorch/third_party/kineto/libkineto/src/plugin/xpupti/XpuptiActivityProfiler.h:55:13: error: ‘pti_view_record_sycl_runtime’ does not name a type
55 | const pti_view_record_sycl_runtime* activity,
| ^~~~~~~~~~~~~~~~~~~~~~~~~~~~
```
```
pytorch/third_party/kineto/libkineto/src/plugin/xpupti/XpuptiActivityProfiler.h:55:13: error: ‘pti_view_record_sycl_runtime’ does not name a type
55 | const pti_view_record_sycl_runtime* activity,
| ^~~~~~~~~~~~~~~~~~~~~~~~~~~~
pytorch/third_party/kineto/libkineto/src/plugin/xpupti/XpuptiActivityHandlers.cpp:101:11: error: ‘pti_view_record_sycl_runtime’ does not name a type
101 | const pti_view_record_sycl_runtime* activity,
| ^~~~~~~~~~~~~~~~~~~~~~~~~~~~
```
And other errors, see https://github.com/intel/intel-xpu-backend-for-triton/actions/runs/14091932056/job/39470474844.
PyTorch commit id: 87bfd66c3c7061db6d36d8daa62f08f507f90e39
cc @malfet @seemethere @robieta @chaekit @guotuofeng @guyang3532 @dzhulgakov @davidberard98 @briancoutinho @sraikund16 @sanrise @gujinghui @EikanWang @fengyuan14 @guangyey
| true
|
2,950,637,882
|
ROCM Nightly Build failure
|
yangw-dev
|
closed
|
[
"module: rocm",
"module: ci",
"triaged"
] | 2
|
CONTRIBUTOR
|
# Description
See HUD: [pytorch nightly](https://hud.pytorch.org/hud/pytorch/pytorch/nightly/1?per_page=50)
The PyTorch nightly ROCm build keeps failing due to a timeout in the upload-artifact step;
see the 2025-03-26 nightly release:
[linux-binary-libtorch / libtorch-rocm6_2_4-shared-with-deps-release-build / build](https://github.com/pytorch/pytorch/actions/runs/14077848867/job/39436127051)
cc @jeffdaily @sunway513 @jithunnair-amd @pruthvistony @ROCmSupport @dllehr-amd @jataylo @hongxiayang @naromero77amd @seemethere @malfet @pytorch/pytorch-dev-infra
| true
|
2,950,637,331
|
Merge Triton ScaledMM as epilogue to MM template
|
PaulZhang12
|
closed
|
[
"Merged",
"Reverted",
"ciflow/trunk",
"topic: not user facing",
"module: inductor",
"ciflow/inductor",
"ci-no-td"
] | 16
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #150415
* __->__ #150045
Previously, scaled_mm's (FP8 matmul) Triton lowering for inductor was in a separate template. This PR consolidates that lowering into the mm template, with an added epilogue to deal with multiplying the scales. This paves the way for future scaled variants of BMM, Grouped GEMM in inductor.
Currently, there is still a separate template for the TMA+persistent version of scaled_mm, since the current mm lowering also keeps a separate template for its TMA+persistent version. The extra scaled_mm TMA+persistent template will hopefully be consolidated once the mm template consolidation is done.
TODO: Consolidate TMA+Persistent logic into 1 template and remove separate scaled_mm TMA template
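For context, a rough numerical sketch of what the added epilogue computes (my own eager reference, not the Triton template; assumes per-tensor scales as in `torch._scaled_mm`):
```python
import torch

def scaled_mm_reference(a_fp8, b_fp8, scale_a, scale_b, out_dtype=torch.bfloat16):
    # Plain matmul in higher precision ...
    acc = a_fp8.to(torch.float32) @ b_fp8.to(torch.float32)
    # ... followed by the epilogue: multiply the accumulator by both scales.
    return (acc * scale_a * scale_b).to(out_dtype)
```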
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
| true
|
2,950,628,531
|
[draft][FSDP2] Reorder FSDP2 pre_forward
|
mori360
|
open
|
[
"oncall: distributed",
"release notes: distributed (fsdp)",
"ciflow/inductor"
] | 1
|
CONTRIBUTOR
|
Fixes https://github.com/pytorch/pytorch/issues/148831
When using checkpoint() with FSDP2, pre_forward is called twice.
The second call has the training state set to PRE_BACKWARD,
so if mp_policy is set, the args are not cast on the second call, which raises the error `torch.utils.checkpoint: Recomputed values for the following tensors have different metadata than during the forward pass.`
This issue does not happen with FSDP1, because it has a different pre_forward ordering:
FSDP1:
- _root_pre_forward
- _cast_forward_inputs(state.mixed_precision.param_dtype)
- _pre_forward
- return if training state is `HandleTrainingState.BACKWARD_PRE`
FSDP2:
- _pre_forward
- return if training state is `TrainingState.PRE_BACKWARD`
- _root_pre_forward
- cast_fn(self._mp_policy.param_dtype)
cc @H-Huang @awgu @kwen2501 @wanchaol @fegin @fduwjj @wz337 @wconstab @d4l3k @c-p-i-o
| true
|
2,950,611,791
|
[programming model] make stacktraces for data-dependent errors more friendly
|
bdhirsh
|
open
|
[
"triaged",
"oncall: pt2",
"module: dynamic shapes",
"module: compile ux"
] | 1
|
CONTRIBUTOR
|
@HDCharles recently ran into a particularly huge one (~380 lines):
```
tlp python generate.py --checkpoint_path ../checkpoints/$MODEL_REPO/model.pth --compile
Using device=cuda
Loading model ...
Time to load model: 12.12 seconds
/home/cdhernandez/.conda/envs/pytorch-3.12/lib/python3.12/contextlib.py:105: FutureWarning: `torch.backends.cuda.sdp_kernel()` is deprecated. In the future, this context manager will be removed. Please see `torch.nn.attention.sdpa_kernel()` for the new context manager, with updated signature.
self.gen = func(*args, **kwds)
W0326 11:55:47.977000 2716646 site-packages/torch/fx/experimental/symbolic_shapes.py:6612] [0/0] failed during evaluate_expr(Eq(u0, 1), hint=None, size_oblivious=False, forcing_spec=False
E0326 11:55:47.977000 2716646 site-packages/torch/fx/experimental/recording.py:299] [0/0] failed while running evaluate_expr(*(Eq(u0, 1), None), **{'fx_node': False, 'expr_sym_node_id': 140530685476944})
Traceback (most recent call last):
File "/home/cdhernandez/.conda/envs/pytorch-3.12/lib/python3.12/site-packages/torch/_dynamo/utils.py", line 3260, in run_node
return node.target(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/data/users/cdhernandez/ao/torchao/utils.py", line 419, in _dispatch__torch_function__
out = func(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^
File "/data/users/cdhernandez/ao/torchao/utils.py", line 435, in _dispatch__torch_dispatch__
return cls._ATEN_OP_OR_TORCH_FN_TABLE[func](func, types, args, kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/data/users/cdhernandez/ao/torchao/utils.py", line 395, in wrapper
return func(f, types, args, kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/data/users/cdhernandez/ao/torchao/dtypes/affine_quantized_tensor_ops.py", line 386, in _
assert self.shape[0] >= indices[0].max(), f"for op {func}, got 0th dim index {indices[0].max()} which is outside of range for shape {self.shape}"
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/cdhernandez/.conda/envs/pytorch-3.12/lib/python3.12/site-packages/torch/utils/_stats.py", line 27, in wrapper
return fn(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^
File "/home/cdhernandez/.conda/envs/pytorch-3.12/lib/python3.12/site-packages/torch/_subclasses/fake_tensor.py", line 1282, in __torch_dispatch__
return self.dispatch(func, types, args, kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/cdhernandez/.conda/envs/pytorch-3.12/lib/python3.12/site-packages/torch/_subclasses/fake_tensor.py", line 1823, in dispatch
return self._cached_dispatch_impl(func, types, args, kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/cdhernandez/.conda/envs/pytorch-3.12/lib/python3.12/site-packages/torch/_subclasses/fake_tensor.py", line 1384, in _cached_dispatch_impl
output = self._dispatch_impl(func, types, args, kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/cdhernandez/.conda/envs/pytorch-3.12/lib/python3.12/site-packages/torch/_subclasses/fake_tensor.py", line 2338, in _dispatch_impl
r = func.decompose(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/cdhernandez/.conda/envs/pytorch-3.12/lib/python3.12/site-packages/torch/_ops.py", line 799, in decompose
return self._op_dk(dk, *args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/cdhernandez/.conda/envs/pytorch-3.12/lib/python3.12/site-packages/torch/fx/experimental/sym_node.py", line 521, in guard_bool
r = self.shape_env.evaluate_expr(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/cdhernandez/.conda/envs/pytorch-3.12/lib/python3.12/site-packages/torch/fx/experimental/recording.py", line 263, in wrapper
return retlog(fn(*args, **kwargs))
^^^^^^^^^^^^^^^^^^^
File "/home/cdhernandez/.conda/envs/pytorch-3.12/lib/python3.12/site-packages/torch/fx/experimental/symbolic_shapes.py", line 6603, in evaluate_expr
return self._evaluate_expr(
^^^^^^^^^^^^^^^^^^^^
File "/home/cdhernandez/.conda/envs/pytorch-3.12/lib/python3.12/site-packages/torch/fx/experimental/symbolic_shapes.py", line 6822, in _evaluate_expr
raise self._make_data_dependent_error(
torch.fx.experimental.symbolic_shapes.GuardOnDataDependentSymNode: Could not guard on data-dependent expression Eq(u0, 1) (unhinted: Eq(u0, 1)). (Size-like symbols: none)
Caused by: w1 = self.w1[expert_indices] # ata/users/cdhernandez/gpt-fast/mixtral-moe/model.py:313 in forward (_ops.py:799 in decompose)
For more information, run with TORCH_LOGS="dynamic"
For extended logs when we create symbols, also add TORCHDYNAMO_EXTENDED_DEBUG_CREATE_SYMBOL="u0"
If you suspect the guard was triggered from C++, add TORCHDYNAMO_EXTENDED_DEBUG_CPP=1
For more debugging help, see https://docs.google.com/document/d/1HSuTTVvYH1pTew89Rtpeu84Ht3nQEFTYhAX3Ypa_xJs/edit?usp=sharing
User Stack (most recent call last):
(snipped, see stack below for prefix)
File "/data/users/cdhernandez/gpt-fast/mixtral-moe/generate.py", line 67, in decode_one_token
logits = model(x, input_pos)
File "/data/users/cdhernandez/gpt-fast/mixtral-moe/model.py", line 111, in forward
x = layer(x, input_pos, freqs_cis, mask)
File "/data/users/cdhernandez/gpt-fast/mixtral-moe/model.py", line 132, in forward
moe_out = self.block_sparse_moe(self.ffn_norm(h))
File "/data/users/cdhernandez/gpt-fast/mixtral-moe/model.py", line 289, in forward
out = self.cond_ffn(x, expert_indices, expert_weights, self.num_activated_experts)
File "/data/users/cdhernandez/gpt-fast/mixtral-moe/model.py", line 313, in forward
w1 = self.w1[expert_indices]
For C++ stack trace, run with TORCHDYNAMO_EXTENDED_DEBUG_CPP=1
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "/home/cdhernandez/.conda/envs/pytorch-3.12/lib/python3.12/site-packages/torch/_dynamo/utils.py", line 3103, in get_fake_value
ret_val = wrap_fake_exception(
^^^^^^^^^^^^^^^^^^^^
File "/home/cdhernandez/.conda/envs/pytorch-3.12/lib/python3.12/site-packages/torch/_dynamo/utils.py", line 2617, in wrap_fake_exception
return fn()
^^^^
File "/home/cdhernandez/.conda/envs/pytorch-3.12/lib/python3.12/site-packages/torch/_dynamo/utils.py", line 3104, in <lambda>
lambda: run_node(tx.output, node, args, kwargs, nnmodule)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/cdhernandez/.conda/envs/pytorch-3.12/lib/python3.12/site-packages/torch/_dynamo/utils.py", line 3301, in run_node
raise RuntimeError(make_error_message(e)).with_traceback(
File "/home/cdhernandez/.conda/envs/pytorch-3.12/lib/python3.12/site-packages/torch/_dynamo/utils.py", line 3260, in run_node
return node.target(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/data/users/cdhernandez/ao/torchao/utils.py", line 419, in _dispatch__torch_function__
out = func(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^
File "/data/users/cdhernandez/ao/torchao/utils.py", line 435, in _dispatch__torch_dispatch__
return cls._ATEN_OP_OR_TORCH_FN_TABLE[func](func, types, args, kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/data/users/cdhernandez/ao/torchao/utils.py", line 395, in wrapper
return func(f, types, args, kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/data/users/cdhernandez/ao/torchao/dtypes/affine_quantized_tensor_ops.py", line 386, in _
assert self.shape[0] >= indices[0].max(), f"for op {func}, got 0th dim index {indices[0].max()} which is outside of range for shape {self.shape}"
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/cdhernandez/.conda/envs/pytorch-3.12/lib/python3.12/site-packages/torch/utils/_stats.py", line 27, in wrapper
return fn(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^
File "/home/cdhernandez/.conda/envs/pytorch-3.12/lib/python3.12/site-packages/torch/_subclasses/fake_tensor.py", line 1282, in __torch_dispatch__
return self.dispatch(func, types, args, kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/cdhernandez/.conda/envs/pytorch-3.12/lib/python3.12/site-packages/torch/_subclasses/fake_tensor.py", line 1823, in dispatch
return self._cached_dispatch_impl(func, types, args, kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/cdhernandez/.conda/envs/pytorch-3.12/lib/python3.12/site-packages/torch/_subclasses/fake_tensor.py", line 1384, in _cached_dispatch_impl
output = self._dispatch_impl(func, types, args, kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/cdhernandez/.conda/envs/pytorch-3.12/lib/python3.12/site-packages/torch/_subclasses/fake_tensor.py", line 2338, in _dispatch_impl
r = func.decompose(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/cdhernandez/.conda/envs/pytorch-3.12/lib/python3.12/site-packages/torch/_ops.py", line 799, in decompose
return self._op_dk(dk, *args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/cdhernandez/.conda/envs/pytorch-3.12/lib/python3.12/site-packages/torch/fx/experimental/sym_node.py", line 521, in guard_bool
r = self.shape_env.evaluate_expr(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/cdhernandez/.conda/envs/pytorch-3.12/lib/python3.12/site-packages/torch/fx/experimental/recording.py", line 263, in wrapper
return retlog(fn(*args, **kwargs))
^^^^^^^^^^^^^^^^^^^
File "/home/cdhernandez/.conda/envs/pytorch-3.12/lib/python3.12/site-packages/torch/fx/experimental/symbolic_shapes.py", line 6603, in evaluate_expr
return self._evaluate_expr(
^^^^^^^^^^^^^^^^^^^^
File "/home/cdhernandez/.conda/envs/pytorch-3.12/lib/python3.12/site-packages/torch/fx/experimental/symbolic_shapes.py", line 6822, in _evaluate_expr
raise self._make_data_dependent_error(
RuntimeError: Dynamo failed to run FX node with fake tensors: call_function <built-in function getitem>(*(AffineQuantizedTensor(tensor_impl=PlainAQTTensorImpl(data=FakeTensor(..., device='cuda:0', size=(8, 14336, 4096), dtype=torch.int8)... , scale=FakeTensor(..., device='cuda:0', size=(8, 14336), dtype=torch.bfloat16)... , zero_point=FakeTensor(..., device='cuda:0', size=(8, 14336), dtype=torch.int64)... , _layout=PlainLayout()), block_size=(1, 1, 4096), shape=torch.Size([8, 14336, 4096]), device=cuda:0, dtype=torch.bfloat16, requires_grad=False), FakeTensor(..., device='cuda:0', size=(2,), dtype=torch.int64)), **{}): got GuardOnDataDependentSymNode('Could not guard on data-dependent expression Eq(u0, 1) (unhinted: Eq(u0, 1)). (Size-like symbols: none)\n\nCaused by: w1 = self.w1[expert_indices] # ata/users/cdhernandez/gpt-fast/mixtral-moe/model.py:313 in forward (_ops.py:799 in decompose)\nFor more information, run with TORCH_LOGS="dynamic"\nFor extended logs when we create symbols, also add TORCHDYNAMO_EXTENDED_DEBUG_CREATE_SYMBOL="u0"\nIf you suspect the guard was triggered from C++, add TORCHDYNAMO_EXTENDED_DEBUG_CPP=1\nFor more debugging help, see https://docs.google.com/document/d/1HSuTTVvYH1pTew89Rtpeu84Ht3nQEFTYhAX3Ypa_xJs/edit?usp=sharing\n\nUser Stack (most recent call last):\n (snipped, see stack below for prefix)\n File "/data/users/cdhernandez/gpt-fast/mixtral-moe/generate.py", line 67, in decode_one_token\n logits = model(x, input_pos)\n File "/data/users/cdhernandez/gpt-fast/mixtral-moe/model.py", line 111, in forward\n x = layer(x, input_pos, freqs_cis, mask)\n File "/data/users/cdhernandez/gpt-fast/mixtral-moe/model.py", line 132, in forward\n moe_out = self.block_sparse_moe(self.ffn_norm(h))\n File "/data/users/cdhernandez/gpt-fast/mixtral-moe/model.py", line 289, in forward\n out = self.cond_ffn(x, expert_indices, expert_weights, self.num_activated_experts)\n File "/data/users/cdhernandez/gpt-fast/mixtral-moe/model.py", line 313, in forward\n w1 = self.w1[expert_indices]\n\nFor C++ stack trace, run with TORCHDYNAMO_EXTENDED_DEBUG_CPP=1')
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/data/users/cdhernandez/gpt-fast/mixtral-moe/generate.py", line 373, in <module>
main(
File "/data/users/cdhernandez/gpt-fast/mixtral-moe/generate.py", line 316, in main
y = generate(
^^^^^^^^^
File "/home/cdhernandez/.conda/envs/pytorch-3.12/lib/python3.12/site-packages/torch/utils/_contextlib.py", line 116, in decorate_context
return func(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^
File "/data/users/cdhernandez/gpt-fast/mixtral-moe/generate.py", line 127, in generate
generated_tokens, _ = decode_n_tokens(model, next_token.view(batch_size, -1), input_pos, max_new_tokens - 1, callback=callback, **sampling_kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/data/users/cdhernandez/gpt-fast/mixtral-moe/generate.py", line 74, in decode_n_tokens
next_token, next_prob = decode_one_token(
^^^^^^^^^^^^^^^^^
File "/home/cdhernandez/.conda/envs/pytorch-3.12/lib/python3.12/site-packages/torch/_dynamo/eval_frame.py", line 586, in _fn
return fn(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^
File "/home/cdhernandez/.conda/envs/pytorch-3.12/lib/python3.12/site-packages/torch/_dynamo/convert_frame.py", line 1422, in __call__
return self._torchdynamo_orig_callable(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/cdhernandez/.conda/envs/pytorch-3.12/lib/python3.12/site-packages/torch/_dynamo/convert_frame.py", line 594, in __call__
return _compile(
^^^^^^^^^
File "/home/cdhernandez/.conda/envs/pytorch-3.12/lib/python3.12/site-packages/torch/_dynamo/convert_frame.py", line 1053, in _compile
guarded_code = compile_inner(code, one_graph, hooks, transform)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/cdhernandez/.conda/envs/pytorch-3.12/lib/python3.12/site-packages/torch/_utils_internal.py", line 97, in wrapper_function
return function(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/cdhernandez/.conda/envs/pytorch-3.12/lib/python3.12/site-packages/torch/_dynamo/convert_frame.py", line 755, in compile_inner
return _compile_inner(code, one_graph, hooks, transform)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/cdhernandez/.conda/envs/pytorch-3.12/lib/python3.12/site-packages/torch/_dynamo/convert_frame.py", line 791, in _compile_inner
out_code = transform_code_object(code, transform)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/cdhernandez/.conda/envs/pytorch-3.12/lib/python3.12/site-packages/torch/_dynamo/bytecode_transformation.py", line 1418, in transform_code_object
transformations(instructions, code_options)
File "/home/cdhernandez/.conda/envs/pytorch-3.12/lib/python3.12/site-packages/torch/_dynamo/convert_frame.py", line 256, in _fn
return fn(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^
File "/home/cdhernandez/.conda/envs/pytorch-3.12/lib/python3.12/site-packages/torch/_dynamo/convert_frame.py", line 709, in transform
tracer.run()
File "/home/cdhernandez/.conda/envs/pytorch-3.12/lib/python3.12/site-packages/torch/_dynamo/symbolic_convert.py", line 3305, in run
super().run()
File "/home/cdhernandez/.conda/envs/pytorch-3.12/lib/python3.12/site-packages/torch/_dynamo/symbolic_convert.py", line 1216, in run
while self.step():
^^^^^^^^^^^
File "/home/cdhernandez/.conda/envs/pytorch-3.12/lib/python3.12/site-packages/torch/_dynamo/symbolic_convert.py", line 1126, in step
self.dispatch_table[inst.opcode](self, inst)
File "/home/cdhernandez/.conda/envs/pytorch-3.12/lib/python3.12/site-packages/torch/_dynamo/symbolic_convert.py", line 794, in wrapper
return inner_fn(self, inst)
^^^^^^^^^^^^^^^^^^^^
File "/home/cdhernandez/.conda/envs/pytorch-3.12/lib/python3.12/site-packages/torch/_dynamo/symbolic_convert.py", line 2753, in CALL
self._call(inst)
File "/home/cdhernandez/.conda/envs/pytorch-3.12/lib/python3.12/site-packages/torch/_dynamo/symbolic_convert.py", line 2747, in _call
self.call_function(fn, args, kwargs)
File "/home/cdhernandez/.conda/envs/pytorch-3.12/lib/python3.12/site-packages/torch/_dynamo/symbolic_convert.py", line 1050, in call_function
self.push(fn.call_function(self, args, kwargs)) # type: ignore[arg-type]
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/cdhernandez/.conda/envs/pytorch-3.12/lib/python3.12/site-packages/torch/_dynamo/variables/lazy.py", line 201, in realize_and_forward
return getattr(self.realize(), name)(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/cdhernandez/.conda/envs/pytorch-3.12/lib/python3.12/site-packages/torch/_dynamo/variables/nn_module.py", line 952, in call_function
return variables.UserFunctionVariable(fn, source=source).call_function(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/cdhernandez/.conda/envs/pytorch-3.12/lib/python3.12/site-packages/torch/_dynamo/variables/functions.py", line 405, in call_function
return super().call_function(tx, args, kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/cdhernandez/.conda/envs/pytorch-3.12/lib/python3.12/site-packages/torch/_dynamo/variables/functions.py", line 186, in call_function
return tx.inline_user_function_return(self, [*self.self_args(), *args], kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/cdhernandez/.conda/envs/pytorch-3.12/lib/python3.12/site-packages/torch/_dynamo/symbolic_convert.py", line 1067, in inline_user_function_return
return InliningInstructionTranslator.inline_call(self, fn, args, kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/cdhernandez/.conda/envs/pytorch-3.12/lib/python3.12/site-packages/torch/_dynamo/symbolic_convert.py", line 3526, in inline_call
return tracer.inline_call_()
^^^^^^^^^^^^^^^^^^^^^
File "/home/cdhernandez/.conda/envs/pytorch-3.12/lib/python3.12/site-packages/torch/_dynamo/symbolic_convert.py", line 3705, in inline_call_
self.run()
File "/home/cdhernandez/.conda/envs/pytorch-3.12/lib/python3.12/site-packages/torch/_dynamo/symbolic_convert.py", line 1216, in run
while self.step():
^^^^^^^^^^^
File "/home/cdhernandez/.conda/envs/pytorch-3.12/lib/python3.12/site-packages/torch/_dynamo/symbolic_convert.py", line 1126, in step
self.dispatch_table[inst.opcode](self, inst)
File "/home/cdhernandez/.conda/envs/pytorch-3.12/lib/python3.12/site-packages/torch/_dynamo/symbolic_convert.py", line 794, in wrapper
return inner_fn(self, inst)
^^^^^^^^^^^^^^^^^^^^
File "/home/cdhernandez/.conda/envs/pytorch-3.12/lib/python3.12/site-packages/torch/_dynamo/symbolic_convert.py", line 2753, in CALL
self._call(inst)
File "/home/cdhernandez/.conda/envs/pytorch-3.12/lib/python3.12/site-packages/torch/_dynamo/symbolic_convert.py", line 2747, in _call
self.call_function(fn, args, kwargs)
File "/home/cdhernandez/.conda/envs/pytorch-3.12/lib/python3.12/site-packages/torch/_dynamo/symbolic_convert.py", line 1050, in call_function
self.push(fn.call_function(self, args, kwargs)) # type: ignore[arg-type]
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/cdhernandez/.conda/envs/pytorch-3.12/lib/python3.12/site-packages/torch/_dynamo/variables/nn_module.py", line 952, in call_function
return variables.UserFunctionVariable(fn, source=source).call_function(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/cdhernandez/.conda/envs/pytorch-3.12/lib/python3.12/site-packages/torch/_dynamo/variables/functions.py", line 405, in call_function
return super().call_function(tx, args, kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/cdhernandez/.conda/envs/pytorch-3.12/lib/python3.12/site-packages/torch/_dynamo/variables/functions.py", line 186, in call_function
return tx.inline_user_function_return(self, [*self.self_args(), *args], kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/cdhernandez/.conda/envs/pytorch-3.12/lib/python3.12/site-packages/torch/_dynamo/symbolic_convert.py", line 1067, in inline_user_function_return
return InliningInstructionTranslator.inline_call(self, fn, args, kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/cdhernandez/.conda/envs/pytorch-3.12/lib/python3.12/site-packages/torch/_dynamo/symbolic_convert.py", line 3526, in inline_call
return tracer.inline_call_()
^^^^^^^^^^^^^^^^^^^^^
File "/home/cdhernandez/.conda/envs/pytorch-3.12/lib/python3.12/site-packages/torch/_dynamo/symbolic_convert.py", line 3705, in inline_call_
self.run()
File "/home/cdhernandez/.conda/envs/pytorch-3.12/lib/python3.12/site-packages/torch/_dynamo/symbolic_convert.py", line 1216, in run
while self.step():
^^^^^^^^^^^
File "/home/cdhernandez/.conda/envs/pytorch-3.12/lib/python3.12/site-packages/torch/_dynamo/symbolic_convert.py", line 1126, in step
self.dispatch_table[inst.opcode](self, inst)
File "/home/cdhernandez/.conda/envs/pytorch-3.12/lib/python3.12/site-packages/torch/_dynamo/symbolic_convert.py", line 794, in wrapper
return inner_fn(self, inst)
^^^^^^^^^^^^^^^^^^^^
File "/home/cdhernandez/.conda/envs/pytorch-3.12/lib/python3.12/site-packages/torch/_dynamo/symbolic_convert.py", line 2753, in CALL
self._call(inst)
File "/home/cdhernandez/.conda/envs/pytorch-3.12/lib/python3.12/site-packages/torch/_dynamo/symbolic_convert.py", line 2747, in _call
self.call_function(fn, args, kwargs)
File "/home/cdhernandez/.conda/envs/pytorch-3.12/lib/python3.12/site-packages/torch/_dynamo/symbolic_convert.py", line 1050, in call_function
self.push(fn.call_function(self, args, kwargs)) # type: ignore[arg-type]
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/cdhernandez/.conda/envs/pytorch-3.12/lib/python3.12/site-packages/torch/_dynamo/variables/lazy.py", line 201, in realize_and_forward
return getattr(self.realize(), name)(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/cdhernandez/.conda/envs/pytorch-3.12/lib/python3.12/site-packages/torch/_dynamo/variables/nn_module.py", line 952, in call_function
return variables.UserFunctionVariable(fn, source=source).call_function(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/cdhernandez/.conda/envs/pytorch-3.12/lib/python3.12/site-packages/torch/_dynamo/variables/functions.py", line 405, in call_function
return super().call_function(tx, args, kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/cdhernandez/.conda/envs/pytorch-3.12/lib/python3.12/site-packages/torch/_dynamo/variables/functions.py", line 186, in call_function
return tx.inline_user_function_return(self, [*self.self_args(), *args], kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/cdhernandez/.conda/envs/pytorch-3.12/lib/python3.12/site-packages/torch/_dynamo/symbolic_convert.py", line 1067, in inline_user_function_return
return InliningInstructionTranslator.inline_call(self, fn, args, kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/cdhernandez/.conda/envs/pytorch-3.12/lib/python3.12/site-packages/torch/_dynamo/symbolic_convert.py", line 3526, in inline_call
return tracer.inline_call_()
^^^^^^^^^^^^^^^^^^^^^
File "/home/cdhernandez/.conda/envs/pytorch-3.12/lib/python3.12/site-packages/torch/_dynamo/symbolic_convert.py", line 3705, in inline_call_
self.run()
File "/home/cdhernandez/.conda/envs/pytorch-3.12/lib/python3.12/site-packages/torch/_dynamo/symbolic_convert.py", line 1216, in run
while self.step():
^^^^^^^^^^^
File "/home/cdhernandez/.conda/envs/pytorch-3.12/lib/python3.12/site-packages/torch/_dynamo/symbolic_convert.py", line 1126, in step
self.dispatch_table[inst.opcode](self, inst)
File "/home/cdhernandez/.conda/envs/pytorch-3.12/lib/python3.12/site-packages/torch/_dynamo/symbolic_convert.py", line 794, in wrapper
return inner_fn(self, inst)
^^^^^^^^^^^^^^^^^^^^
File "/home/cdhernandez/.conda/envs/pytorch-3.12/lib/python3.12/site-packages/torch/_dynamo/symbolic_convert.py", line 2753, in CALL
self._call(inst)
File "/home/cdhernandez/.conda/envs/pytorch-3.12/lib/python3.12/site-packages/torch/_dynamo/symbolic_convert.py", line 2747, in _call
self.call_function(fn, args, kwargs)
File "/home/cdhernandez/.conda/envs/pytorch-3.12/lib/python3.12/site-packages/torch/_dynamo/symbolic_convert.py", line 1050, in call_function
self.push(fn.call_function(self, args, kwargs)) # type: ignore[arg-type]
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/cdhernandez/.conda/envs/pytorch-3.12/lib/python3.12/site-packages/torch/_dynamo/variables/lazy.py", line 201, in realize_and_forward
return getattr(self.realize(), name)(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/cdhernandez/.conda/envs/pytorch-3.12/lib/python3.12/site-packages/torch/_dynamo/variables/nn_module.py", line 952, in call_function
return variables.UserFunctionVariable(fn, source=source).call_function(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/cdhernandez/.conda/envs/pytorch-3.12/lib/python3.12/site-packages/torch/_dynamo/variables/functions.py", line 405, in call_function
return super().call_function(tx, args, kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/cdhernandez/.conda/envs/pytorch-3.12/lib/python3.12/site-packages/torch/_dynamo/variables/functions.py", line 186, in call_function
return tx.inline_user_function_return(self, [*self.self_args(), *args], kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/cdhernandez/.conda/envs/pytorch-3.12/lib/python3.12/site-packages/torch/_dynamo/symbolic_convert.py", line 1067, in inline_user_function_return
return InliningInstructionTranslator.inline_call(self, fn, args, kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/cdhernandez/.conda/envs/pytorch-3.12/lib/python3.12/site-packages/torch/_dynamo/symbolic_convert.py", line 3526, in inline_call
return tracer.inline_call_()
^^^^^^^^^^^^^^^^^^^^^
File "/home/cdhernandez/.conda/envs/pytorch-3.12/lib/python3.12/site-packages/torch/_dynamo/symbolic_convert.py", line 3705, in inline_call_
self.run()
File "/home/cdhernandez/.conda/envs/pytorch-3.12/lib/python3.12/site-packages/torch/_dynamo/symbolic_convert.py", line 1216, in run
while self.step():
^^^^^^^^^^^
File "/home/cdhernandez/.conda/envs/pytorch-3.12/lib/python3.12/site-packages/torch/_dynamo/symbolic_convert.py", line 1126, in step
self.dispatch_table[inst.opcode](self, inst)
File "/home/cdhernandez/.conda/envs/pytorch-3.12/lib/python3.12/site-packages/torch/_dynamo/symbolic_convert.py", line 794, in wrapper
return inner_fn(self, inst)
^^^^^^^^^^^^^^^^^^^^
File "/home/cdhernandez/.conda/envs/pytorch-3.12/lib/python3.12/site-packages/torch/_dynamo/symbolic_convert.py", line 414, in impl
self.push(fn_var.call_function(self, self.popn(nargs), {}))
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/cdhernandez/.conda/envs/pytorch-3.12/lib/python3.12/site-packages/torch/_dynamo/variables/builtin.py", line 1111, in call_function
return handler(tx, args, kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/cdhernandez/.conda/envs/pytorch-3.12/lib/python3.12/site-packages/torch/_dynamo/variables/builtin.py", line 789, in <lambda>
return lambda tx, args, kwargs: obj.call_function(
^^^^^^^^^^^^^^^^^^
File "/home/cdhernandez/.conda/envs/pytorch-3.12/lib/python3.12/site-packages/torch/_dynamo/variables/builtin.py", line 1111, in call_function
return handler(tx, args, kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/cdhernandez/.conda/envs/pytorch-3.12/lib/python3.12/site-packages/torch/_dynamo/variables/builtin.py", line 1076, in _handle_insert_op_in_graph
return wrap_fx_proxy(tx, proxy)
^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/cdhernandez/.conda/envs/pytorch-3.12/lib/python3.12/site-packages/torch/_dynamo/variables/builder.py", line 2284, in wrap_fx_proxy
return wrap_fx_proxy_cls(target_cls=TensorVariable, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/cdhernandez/.conda/envs/pytorch-3.12/lib/python3.12/site-packages/torch/_dynamo/variables/builder.py", line 2350, in wrap_fx_proxy_cls
return _wrap_fx_proxy(
^^^^^^^^^^^^^^^
File "/home/cdhernandez/.conda/envs/pytorch-3.12/lib/python3.12/site-packages/torch/_dynamo/variables/builder.py", line 2446, in _wrap_fx_proxy
example_value = get_fake_value(proxy.node, tx, allow_non_graph_fake=True)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/cdhernandez/.conda/envs/pytorch-3.12/lib/python3.12/site-packages/torch/_dynamo/utils.py", line 3190, in get_fake_value
raise UserError( # noqa: B904
torch._dynamo.exc.UserError: Could not guard on data-dependent expression Eq(u0, 1) (unhinted: Eq(u0, 1)). (Size-like symbols: none)
Caused by: w1 = self.w1[expert_indices] # ata/users/cdhernandez/gpt-fast/mixtral-moe/model.py:313 in forward (_ops.py:799 in decompose)
For more information, run with TORCH_LOGS="dynamic"
For extended logs when we create symbols, also add TORCHDYNAMO_EXTENDED_DEBUG_CREATE_SYMBOL="u0"
If you suspect the guard was triggered from C++, add TORCHDYNAMO_EXTENDED_DEBUG_CPP=1
For more debugging help, see https://docs.google.com/document/d/1HSuTTVvYH1pTew89Rtpeu84Ht3nQEFTYhAX3Ypa_xJs/edit?usp=sharing
User Stack (most recent call last):
(snipped, see stack below for prefix)
File "/data/users/cdhernandez/gpt-fast/mixtral-moe/generate.py", line 67, in decode_one_token
logits = model(x, input_pos)
File "/data/users/cdhernandez/gpt-fast/mixtral-moe/model.py", line 111, in forward
x = layer(x, input_pos, freqs_cis, mask)
File "/data/users/cdhernandez/gpt-fast/mixtral-moe/model.py", line 132, in forward
moe_out = self.block_sparse_moe(self.ffn_norm(h))
File "/data/users/cdhernandez/gpt-fast/mixtral-moe/model.py", line 289, in forward
out = self.cond_ffn(x, expert_indices, expert_weights, self.num_activated_experts)
File "/data/users/cdhernandez/gpt-fast/mixtral-moe/model.py", line 313, in forward
w1 = self.w1[expert_indices]
For C++ stack trace, run with TORCHDYNAMO_EXTENDED_DEBUG_CPP=1
For more information about this error, see: https://pytorch.org/docs/main/generated/exportdb/index.html#constrain-as-size-example
from user code:
File "/data/users/cdhernandez/gpt-fast/mixtral-moe/generate.py", line 67, in decode_one_token
logits = model(x, input_pos)
File "/data/users/cdhernandez/gpt-fast/mixtral-moe/model.py", line 111, in forward
x = layer(x, input_pos, freqs_cis, mask)
File "/data/users/cdhernandez/gpt-fast/mixtral-moe/model.py", line 132, in forward
moe_out = self.block_sparse_moe(self.ffn_norm(h))
File "/data/users/cdhernandez/gpt-fast/mixtral-moe/model.py", line 289, in forward
out = self.cond_ffn(x, expert_indices, expert_weights, self.num_activated_experts)
File "/data/users/cdhernandez/gpt-fast/mixtral-moe/model.py", line 313, in forward
w1 = self.w1[expert_indices]
Set TORCH_LOGS="+dynamo" and TORCHDYNAMO_VERBOSE=1 for more information
Detected rank: None
```
This one is probably a bit worse than average, because the data-dependent code (a comparison against `indices[0].max()`) lived inside a tensor subclass's `__torch_dispatch__`. But we can probably still trim away a number of the frames in this stack to make it more readable to someone not as well versed in torch.compile.
cc @chauhang @penguinwu @ezyang @bobrenjc93
| true
|
2,950,603,265
|
[MPSInductor] Run chebyshev_polynomial_t tests
|
malfet
|
closed
|
[
"Merged",
"topic: not user facing",
"ciflow/mps",
"module: inductor",
"ciflow/inductor"
] | 5
|
CONTRIBUTOR
|
Test name should start with `test_`
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
| true
|
2,950,579,726
|
Need a more descriptive error when running ROCm tests on a non-ROCm machine
|
ahmadsharif1
|
closed
|
[
"module: rocm",
"module: error checking",
"triaged",
"rocm"
] | 3
|
CONTRIBUTOR
|
Hi,
I was trying to reproduce this error:
https://github.com/pytorch/pytorch/actions/runs/14086862973/job/39455645090
```
PYTORCH_OPINFO_SAMPLE_INPUT_INDEX=1 PYTORCH_TEST_WITH_ROCM=1 python test/test_ops.py TestCommonCUDA.test_noncontiguous_samples_native_layer_norm_cuda_float32
Traceback (most recent call last):
File "/home/ahmads/personal/pytorch/test/test_ops.py", line 2825, in <module>
instantiate_device_type_tests(TestCommon, globals())
File "/home/ahmads/personal/pytorch/torch/testing/_internal/common_device_type.py", line 928, in instantiate_device_type_tests
device_type_test_class.instantiate_test(
File "/home/ahmads/personal/pytorch/torch/testing/_internal/common_device_type.py", line 536, in instantiate_test
instantiate_test_helper(
File "/home/ahmads/personal/pytorch/torch/testing/_internal/common_device_type.py", line 443, in instantiate_test_helper
test = decorator(test)
^^^^^^^^^^^^^^^
File "/home/ahmads/personal/pytorch/torch/testing/_internal/common_device_type.py", line 1784, in skipCUDAIfNoCusolver
not has_cusolver() and not has_hipsolver(), "cuSOLVER not available"
^^^^^^^^^^^^^^^
File "/home/ahmads/personal/pytorch/torch/testing/_internal/common_device_type.py", line 1776, in has_hipsolver
rocm_version = _get_torch_rocm_version()
^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/ahmads/personal/pytorch/torch/testing/_internal/common_cuda.py", line 267, in _get_torch_rocm_version
return tuple(int(x) for x in rocm_version.split("."))
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/ahmads/personal/pytorch/torch/testing/_internal/common_cuda.py", line 267, in <genexpr>
return tuple(int(x) for x in rocm_version.split("."))
^^^^^^
ValueError: invalid literal for int() with base 10: 'None'
```
I think this failure should be more descriptive and say something like "you need to run this on a ROCm machine" instead of raising a bare `ValueError` from the version parsing.
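A minimal sketch of the kind of guard this issue is asking for (hypothetical helper, not the actual code in `common_cuda.py`), which fails fast with a clear message instead of trying to parse `None`:
```python
import torch

def _rocm_version_or_fail():
    # torch.version.hip is None on CUDA-only builds, which is what makes the
    # original int("None") parse blow up with an opaque ValueError.
    rocm_version = torch.version.hip
    if rocm_version is None:
        raise RuntimeError(
            "PYTORCH_TEST_WITH_ROCM=1 was set, but this PyTorch build has no ROCm "
            "support; run this test on a ROCm machine."
        )
    # Keep only the numeric core, e.g. "6.2.41133-dd7f95766" -> "6.2.41133".
    version_core = rocm_version.split("-")[0]
    return tuple(int(x) for x in version_core.split("."))
```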
cc @jeffdaily @sunway513 @jithunnair-amd @pruthvistony @ROCmSupport @dllehr-amd @jataylo @hongxiayang @naromero77amd @malfet
| true
|
2,950,540,697
|
[c10d] Test multiple CUDA Graph captures
|
kwen2501
|
open
|
[
"oncall: distributed",
"ciflow/trunk",
"topic: not user facing"
] | 4
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #150040
1. Do multiple captures
2. Perform multiple collectives in one capture
3. Multiple replays (existing)
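A rough sketch of the pattern being exercised, assuming an already-initialized NCCL process group (illustrative only, not the test code in this PR):
```python
import torch
import torch.distributed as dist

def capture_and_replay_allreduce(device="cuda", replays=3):
    x = torch.ones(1024, device=device)

    # Warm up the collective on a side stream before capture, as required for
    # capturing NCCL work into a CUDA graph.
    s = torch.cuda.Stream()
    s.wait_stream(torch.cuda.current_stream())
    with torch.cuda.stream(s):
        dist.all_reduce(x)
    torch.cuda.current_stream().wait_stream(s)

    # Capture one (or several) collectives into a graph, then replay it.
    g = torch.cuda.CUDAGraph()
    with torch.cuda.graph(g):
        dist.all_reduce(x)
    for _ in range(replays):
        g.replay()
    return x
```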
cc @H-Huang @awgu @wanchaol @fegin @fduwjj @wz337 @wconstab @d4l3k @c-p-i-o
| true
|
2,950,516,964
|
[CI] VS2022 jobs seems to be running VS2019 still
|
malfet
|
open
|
[
"module: windows",
"module: ci",
"triaged",
"module: regression"
] | 3
|
CONTRIBUTOR
|
### 🐛 Describe the bug
See https://hud.pytorch.org/hud/pytorch/pytorch/039ebdc19287bba56fdb6d6bb9e693b3c88de927/1?per_page=50&name_filter=vs2022&mergeLF=true
But if one looks at any of the build logs, VS2019 is still used:
```
2025-03-26T17:38:23.1336363Z -- The CXX compiler identification is MSVC 19.29.30158.0
2025-03-26T17:38:23.6600802Z -- The C compiler identification is MSVC 19.29.30158.0
2025-03-26T17:38:24.0097135Z -- Detecting CXX compiler ABI info
2025-03-26T17:38:30.4169012Z -- Detecting CXX compiler ABI info - done
2025-03-26T17:38:30.4190067Z -- Check for working CXX compiler: C:/Program Files (x86)/Microsoft Visual Studio/2019/BuildTools/VC/Tools/MSVC/14.29.30133/bin/Hostx64/x64/cl.exe - skipped
2025-03-26T17:38:30.4296882Z -- Detecting CXX compile features
2025-03-26T17:38:30.4318409Z -- Detecting CXX compile features - done
2025-03-26T17:38:30.4588448Z -- Detecting C compiler ABI info
2025-03-26T17:38:30.8521865Z -- Detecting C compiler ABI info - done
2025-03-26T17:38:30.8543019Z -- Check for working C compiler: C:/Program Files (x86)/Microsoft Visual Studio/2019/BuildTools/VC/Tools/MSVC/14.29.30133/bin/Hostx64/x64/cl.exe - skipped
```
### Versions
CI
cc @peterjc123 @mszhanyi @skyline75489 @nbcsm @iremyux @Blackhex @seemethere @pytorch/pytorch-dev-infra
| true
|
2,950,489,776
|
[ONNX] Annotate None inputs in symbolic ops
|
justinchuby
|
closed
|
[
"module: onnx",
"open source",
"Merged",
"ciflow/trunk",
"release notes: onnx",
"topic: improvements"
] | 3
|
COLLABORATOR
|
Add `None` to the type annotations of the `torch.onnx.ops.symbolic*` ops and improve the tests to cover optional inputs. Previously `None` was omitted by mistake even though the implementation supports it.
| true
|
2,950,394,057
|
[MPSInductor] Move threadfence at the right location
|
pytorchbot
|
closed
|
[
"open source"
] | 1
|
COLLABORATOR
|
Not sure how it worked in the past, but the fence should come before the first read from shared memory, not after it.
This bug was exposed by https://github.com/pytorch/pytorch/pull/148969, which removed an unnecessary barrier before calling the `threadgroup_reduce` functions.
Test plan:
```
% python3 generate.py --checkpoint_path checkpoints/stories15M/model.pth --prompt "Once upon a time" --device mps --compile
```
Before this change it produced gibberish; now it works fine.
| true
|
2,950,357,360
|
[easy] Use config patch to toggle capture_scalar_output
|
anijain2305
|
closed
|
[
"Merged",
"ciflow/trunk",
"topic: not user facing"
] | 4
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #148953
* __->__ #150036
* #149667
* #149087
| true
|
2,950,342,522
|
[aotd] Config to guess_tangents_stride
|
IvanKobzarev
|
closed
|
[
"Merged",
"ciflow/trunk",
"topic: not user facing",
"module: inductor",
"module: dynamo",
"ciflow/inductor"
] | 10
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #150035
Differential Revision: [D71907684](https://our.internmc.facebook.com/intern/diff/D71907684)
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
| true
|
2,950,292,909
|
Compile earlier PyTorch versions on Blackwell
|
asiron
|
closed
|
[
"module: build",
"module: cuda",
"triaged"
] | 1
|
NONE
|
It seems that if you have a 5000-series GPU (Blackwell) with Compute Capability 12.0, you are forced to use CUDA 12.8. Is it supposed to be possible to compile older PyTorch versions (specifically 1.13 or 2.0) using CUDA 12.8?
I tried pulling and checking out `v1.13.1`, then exported the following variables:
```
export CUDA_HOME=/usr/local/cuda-12.8
export PATH=/usr/local/cuda-12.8/bin:$PATH
export LD_LIBRARY_PATH=/usr/local/cuda-12.8/lib64:$LD_LIBRARY_PATH
export TORCH_CUDA_ARCH_LIST="7.0;8.6;12.0"
```
I also patched `cmake/Modules_CUDA_fix/upstream/FindCUDA/select_compute_arch.cmake` and made sure to add a check for `12.0`:
```git
diff --git a/cmake/Modules_CUDA_fix/upstream/FindCUDA/select_compute_arch.cmake b/cmake/Modules_CUDA_fix/upstream/FindCUDA/select_compute_arch.cmake
index 7f22d476d2f..8a5e1974b1e 100644
--- a/cmake/Modules_CUDA_fix/upstream/FindCUDA/select_compute_arch.cmake
+++ b/cmake/Modules_CUDA_fix/upstream/FindCUDA/select_compute_arch.cmake
@@ -237,6 +237,9 @@ function(CUDA_SELECT_NVCC_ARCH_FLAGS out_variable)
elseif(${arch_name} STREQUAL "Ampere")
set(arch_bin 8.0)
set(arch_ptx 8.0)
+ elseif(${arch_name} STREQUAL "12.0")
+ set(arch_bin 12.0)
+ set(arch_ptx 12.0)
else()
message(SEND_ERROR "Unknown CUDA Architecture Name ${arch_name} in CUDA_SELECT_NVCC_ARCH_FLAGS")
endif()
```
Then I ran `python setup.py bdist_wheel`, but I am getting errors during `fbgemm` compilation:
```
FAILED: third_party/fbgemm/CMakeFiles/fbgemm_generic.dir/src/Utils.cc.o
/usr/bin/c++ -DFBGEMM_STATIC -I/home/mzurad/software/pytorch-from-scratch/pytorch/third_party/cpuinfo/include -I/home/mzurad/software/pytorch-from-scratch/pytorch/third_party/fbgemm/third_party/asmjit/src
-I/home/mzurad/software/pytorch-from-scratch/pytorch/third_party/fbgemm/include -I/home/mzurad/software/pytorch-from-scratch/pytorch/third_party/fbgemm -isystem /home/mzurad/software/pytorch-from-scratch
/pytorch/third_party/protobuf/src -isystem /home/mzurad/software/pytorch-from-scratch/pytorch/third_party/gemmlowp -isystem /home/mzurad/software/pytorch-from-scratch/pytorch/third_party/neon2sse -isystem
/home/mzurad/software/pytorch-from-scratch/pytorch/third_party/XNNPACK/include -Wno-deprecated -fvisibility-inlines-hidden -DUSE_PTHREADPOOL -fopenmp -Wall -Wextra -Werror -Wno-deprecated-declarations -O
3 -DNDEBUG -std=c++14 -fPIC -fvisibility=hidden -MD -MT third_party/fbgemm/CMakeFiles/fbgemm_generic.dir/src/Utils.cc.o -MF third_party/fbgemm/CMakeFiles/fbgemm_generic.dir/src/Utils.cc.o.d -o third_party
/fbgemm/CMakeFiles/fbgemm_generic.dir/src/Utils.cc.o -c /home/mzurad/software/pytorch-from-scratch/pytorch/third_party/fbgemm/src/Utils.cc
In file included from /home/mzurad/software/pytorch-from-scratch/pytorch/third_party/fbgemm/include/fbgemm/Utils.h:10,
from /home/mzurad/software/pytorch-from-scratch/pytorch/third_party/fbgemm/src/Utils.cc:8:
/home/mzurad/software/pytorch-from-scratch/pytorch/third_party/fbgemm/include/fbgemm/./UtilsAvx2.h:47:37: error: ‘int32_t’ in namespace ‘std’ does not name a type
47 | template <typename BIAS_TYPE = std::int32_t>
| ^~~~~~~
```
So my question is whether this is possible at all and, if so, how difficult you think it would be.
cc @malfet @seemethere @ptrblck @msaroufim @eqy
| true
|
2,950,289,612
|
[nn.utils] scale_grad_ with for_each
|
IvanKobzarev
|
open
|
[
"release notes: nn",
"topic: not user facing"
] | 3
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #150033
Distributed workloads like torchtune often have to scale gradients before the loss computation.
https://github.com/pytorch/torchtune/blob/main/torchtune/training/_grad_scaler.py#L11
Adding `scale_grad_`, which uses `_foreach_` ops for this, grouping gradients by device and dtype.
The grouping function is implemented in Python so that it is dynamo-traceable.
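A minimal sketch of the grouping pattern (illustrative only, assuming `params` yields tensors with `.grad` populated; not the exact implementation in this PR):
```python
import torch
from collections import defaultdict

def scale_grads_(params, scaler: torch.Tensor) -> None:
    # Group gradients by (device, dtype) so each group can be scaled with a
    # single torch._foreach_mul_ call instead of one kernel launch per tensor.
    grouped = defaultdict(list)
    for p in params:
        if p.grad is not None:
            grouped[(p.grad.device, p.grad.dtype)].append(p.grad)
    for (device, dtype), grads in grouped.items():
        torch._foreach_mul_(grads, scaler.to(device=device, dtype=dtype))
```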
Compiled graphs:
```
def forward(self, arg0_1: "f32[10, 10][10, 1]cpu", arg1_1: "f32[10][1]cpu", arg2_1: "f32[10, 10][10, 1]cuda:0", arg3_1: "f32[10][1]cuda:0"):
# File: /data/users/ivankobzarev/a/pytorch/test/functorch/test_aotdispatch.py:6627 in fn, code: scale_grad_(test_model.parameters(), torch.tensor(0.5), foreach=True)
_tensor_constant0 = self._tensor_constant0
lift_fresh_copy: "f32[][]cpu" = torch.ops.aten.lift_fresh_copy.default(_tensor_constant0); _tensor_constant0 = None
# File: /data/users/ivankobzarev/a/pytorch/torch/nn/utils/scale_grad.py:56 in _scale_grad_, code: torch._foreach_mul_(device_grads, scaler.to(device))
_foreach_mul = torch.ops.aten._foreach_mul.Tensor([arg0_1, arg1_1], lift_fresh_copy)
getitem: "f32[10, 10][10, 1]cpu" = _foreach_mul[0]
getitem_1: "f32[10][1]cpu" = _foreach_mul[1]; _foreach_mul = None
device_put: "f32[][]cuda:0" = torch.ops.prims.device_put.default(lift_fresh_copy, device(type='cuda', index=0)); lift_fresh_copy = None
convert_element_type: "f32[][]cuda:0" = torch.ops.prims.convert_element_type.default(device_put, torch.float32); device_put = None
_foreach_mul_1 = torch.ops.aten._foreach_mul.Tensor([arg2_1, arg3_1], convert_element_type); convert_element_type = None
getitem_2: "f32[10, 10][10, 1]cuda:0" = _foreach_mul_1[0]
getitem_3: "f32[10][1]cuda:0" = _foreach_mul_1[1]; _foreach_mul_1 = None
copy_: "f32[10, 10][10, 1]cpu" = torch.ops.aten.copy_.default(arg0_1, getitem); arg0_1 = getitem = copy_ = None
copy__1: "f32[10][1]cpu" = torch.ops.aten.copy_.default(arg1_1, getitem_1); arg1_1 = getitem_1 = copy__1 = None
copy__2: "f32[10, 10][10, 1]cuda:0" = torch.ops.aten.copy_.default(arg2_1, getitem_2); arg2_1 = getitem_2 = copy__2 = None
copy__3: "f32[10][1]cuda:0" = torch.ops.aten.copy_.default(arg3_1, getitem_3); arg3_1 = getitem_3 = copy__3 = None
return ()
```
| true
|