| id | title | user | state | labels | comments | author_association | body | is_title |
|---|---|---|---|---|---|---|---|---|
2,796,503,359
|
test trigger dispatch
|
yangw-dev
|
closed
|
[
"topic: not user facing"
] | 1
|
CONTRIBUTOR
|
Fixes #ISSUE_NUMBER
| true
|
2,796,500,096
|
Repro collective timeout and FR dump
|
wconstab
|
closed
|
[
"oncall: distributed",
"topic: not user facing"
] | 2
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #145125
* #144834
* #145099
* #145011
* #145010
The timeout is unfortunately not reliable to repro. I'm not yet sure
what the root cause is, so for now I am just uploading my FR trace files
to improve the analyzer script.
Unfortunately, the traces I got on one instance were apparently corrupted, or at least fr_trace complained of an unpickling error:
[traces.tar.gz](https://github.com/user-attachments/files/18461972/traces.tar.gz)
cc @H-Huang @awgu @kwen2501 @wanchaol @fegin @fduwjj @wz337 @d4l3k @c-p-i-o
| true
|
2,796,499,862
|
Add upload testlog
|
yangw-dev
|
closed
|
[
"topic: not user facing"
] | 1
|
CONTRIBUTOR
|
Fixes #ISSUE_NUMBER
| true
|
2,796,488,729
|
[mps/inductor] Skip "double" tests as 64-bits FP is not supported.
|
dcci
|
closed
|
[
"Merged",
"ciflow/trunk",
"topic: not user facing",
"module: mps",
"ciflow/mps",
"module: inductor",
"ciflow/inductor"
] | 3
|
MEMBER
|
257 tests failed (before) -> 242 tests failed (after)
cc @kulinseth @albanD @malfet @DenisVieriu97 @jhavukainen @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @ColinPeppler @amjames @desertfire @chauhang @aakhundov
| true
|
2,796,478,705
|
inductor: Don't throw an internal error when an nn.Module is missing an attribute
|
c00w
|
closed
|
[
"Merged",
"Reverted",
"ciflow/trunk",
"topic: not user facing",
"module: dynamo",
"ciflow/inductor",
"ci-no-td"
] | 17
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #145122
If an nn.Module getattr call throws, we should make sure that we don't crash with an internal error
Note that I couldn't figure out how to test this, so advice would be awesome. I have my best case attempt at https://github.com/pytorch/pytorch/pull/145799, but it doesn't seem to reproduce the crash.
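As context, here is a minimal sketch of the kind of access the description refers to (an illustrative guess only, not a confirmed reproducer; as noted above, reproducing the crash has proven hard):
```python
import torch

class Mod(torch.nn.Module):
    def forward(self, x):
        # nn.Module.__getattr__ raises AttributeError for names that are not
        # parameters, buffers, or submodules; the goal described above is that
        # Dynamo surfaces this as a user-level error, not an internal error.
        return x * self.missing_scale  # hypothetical missing attribute

try:
    torch.compile(Mod(), backend="eager")(torch.randn(4))
except Exception as e:  # ideally an AttributeError, not a Dynamo-internal error
    print(type(e).__name__, e)
```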
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @chauhang @amjames
| true
|
2,796,477,013
|
Operator Benchmark: Additional operators
|
apakbin
|
closed
|
[
"open source",
"ciflow/trunk",
"release notes: benchmark"
] | 11
|
CONTRIBUTOR
|
Added additional operators: add_, addcmul, arange, baddbmm, bmm, clamp, div, div_, gelu, index_add, logical_and, mul_, sub_, topk, where
| true
|
2,796,465,903
|
[triton] Update triton pin to include warp specialization support
|
htyu
|
closed
|
[
"Merged",
"Reverted",
"ciflow/trunk",
"topic: not user facing",
"ciflow/inductor",
"ci-no-td"
] | 18
|
CONTRIBUTOR
|
The warp specialization work has landed on the triton rc/3.2.x branch as https://github.com/triton-lang/triton/commit/b2684bf3b0270eff6f104260b6a96c0c139cd56f
| true
|
2,796,431,216
|
WIP sccache simplified
|
wdvr
|
closed
|
[
"Stale",
"ciflow/binaries",
"ciflow/trunk",
"topic: not user facing",
"ciflow/inductor"
] | 2
|
CONTRIBUTOR
|
Trying to see if we can get rid of all of the wrapper code now
| true
|
2,796,430,482
|
[fx] move DCE rand check to import time
|
xmfan
|
closed
|
[
"Merged",
"ciflow/trunk",
"release notes: fx",
"fx",
"module: dynamo",
"ciflow/inductor"
] | 3
|
MEMBER
|
Mitigates the deterministic benchmark regression (https://github.com/pytorch/pytorch/issues/144775#issuecomment-2593411844), and maybe the dashboard issue.
fx.Node.is_impure is unexpectedly a hot spot. It gets called for every node in the graph whenever we invoke DCE, which should be okay, EXCEPT we invoke DCE on the full graph ~10 times at various stages of torch.compile, and an insane number of times (>O(parameters)) for the subgraphs traced by the pattern matcher.
I considered addressing this problem by reducing the number of times DCE is called, but I think we can only trim the ones from the pattern matcher, which will require some refactor/caching solution that I leave out of this PR.
torch.Tag.nondeterministic_seeded is provided by native_functions.yml and is implemented as a list. Most of the time, it has <=2 elements, so it's not really worth it to turn it into a set for fast lookup.
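As a rough illustration of the direction described above (a hedged sketch, not the actual torch.fx change), the per-node tag scan can be hoisted into a set computed once, so the hot path reduces to a membership test:
```python
from functools import lru_cache

import torch


@lru_cache(maxsize=None)
def _seeded_rand_ops() -> frozenset:
    # Computed once (effectively "at import time" if invoked from module scope)
    # instead of scanning op.tags for every node on every DCE invocation.
    candidates = (torch.ops.aten.rand.default, torch.ops.aten.randn.default)
    return frozenset(
        op for op in candidates if torch.Tag.nondeterministic_seeded in op.tags
    )


def looks_impure(target) -> bool:
    # Hot-path check becomes a set lookup rather than a per-call tag scan.
    return target in _seeded_rand_ops()
```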
Using the deterministic instruction count benchmarks
```python
# before
aotdispatcher_partitioner_cpu,compile_time_instruction_count,8914894946
aotdispatcher_partitioner_cpu,compile_time_instruction_count,8866669058
# after
aotdispatcher_partitioner_cpu,compile_time_instruction_count,8770562314
aotdispatcher_partitioner_cpu,compile_time_instruction_count,8779547794
```
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #145118
cc @ezyang @SherlockNoMad @EikanWang @jgong5 @wenzhe-nrv @voznesenskym @penguinwu @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @jiayisunx @chenyang78 @kadeng @chauhang @amjames
| true
|
2,796,423,389
|
[EXPERIMENTAL][dynamo] optimize `DictGetItemGuardAccessor`
|
StrongerXi
|
closed
|
[
"Stale",
"module: dynamo",
"ciflow/inductor"
] | 3
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #145117
* #143313
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @chauhang @amjames
| true
|
2,796,420,878
|
WIP remove -E workaround for nvcc
|
wdvr
|
closed
|
[
"Stale",
"ciflow/trunk",
"topic: not user facing"
] | 2
|
CONTRIBUTOR
|
follow up on https://github.com/pytorch/pytorch/pull/145012 to remove workaround https://github.com/pytorch/pytorch/pull/142813/files
Testing to see if sccache now handles the nvcc caching correctly
| true
|
2,796,383,244
|
Obey sm_carveout (limit on number of SMs) in inductor persistent kernel
|
davidberard98
|
closed
|
[
"triaged",
"oncall: pt2",
"module: inductor"
] | 0
|
CONTRIBUTOR
|
### 🚀 The feature, motivation and pitch
See https://github.com/pytorch/pytorch/pull/144974#issuecomment-2599011250
cc @chauhang @penguinwu @voznesenskym @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @ColinPeppler @amjames @desertfire @aakhundov @eellison @lw
### Alternatives
_No response_
### Additional context
_No response_
| true
|
2,796,375,328
|
[Utilization][Usage Log] Add data model for record
|
yangw-dev
|
closed
|
[
"Merged",
"ciflow/trunk",
"topic: not user facing"
] | 6
|
CONTRIBUTOR
|
Add a data model for consistency and to ease data model changes in the future.
The data model will be used during the post-test-process pipeline.
| true
|
2,796,368,030
|
[Utilization] Add datamodel for logging record
|
yangw-dev
|
closed
|
[
"topic: not user facing"
] | 1
|
CONTRIBUTOR
|
add dataModel
| true
|
2,796,254,287
|
Update ci_expected_accuracy for TIMM levit_128 for further investigation
|
huydhn
|
closed
|
[
"Merged",
"ciflow/trunk",
"topic: not user facing",
"module: dynamo",
"ciflow/inductor"
] | 14
|
CONTRIBUTOR
|
TSIA, it looks like an upstream change, but I'm not sure from where yet.
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @chauhang @amjames
| true
|
2,796,237,382
|
Prevent legacy_load when weights_only=True (correctly)
|
pytorchbot
|
closed
|
[
"open source",
"ciflow/trunk"
] | 1
|
COLLABORATOR
|
Only prevent `legacy_load` (.tar format removed in https://github.com/pytorch/pytorch/pull/713), not the whole of `_legacy_load` (.tar format + _use_new_zipfile_serialization=False)
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #145020
Differential Revision: [D68301405](https://our.internmc.facebook.com/intern/diff/D68301405)
| true
|
2,796,233,067
|
Delete torch._library.register_functional_op
|
zou3519
|
closed
|
[
"Merged",
"ciflow/trunk",
"topic: not user facing"
] | 3
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #145110
* #145109
Fixes #117816, #117834, #117871
This has been superseded by auto_functionalized_v2. There are no
internal usages and this is private API so it is safe to delete.
| true
|
2,796,232,991
|
Skip test responsible for causing flakiness
|
zou3519
|
closed
|
[
"Merged",
"topic: not user facing"
] | 1
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #145110
* __->__ #145109
Investigation is a separate issue. For now I want to get the CI back up
and running on the other tests. The problem seems to be that
IncludeDispatchKeyGuard doesn't actually reset the state, which seems
very, very wrong.
| true
|
2,796,227,000
|
torch._C._IncludeDispatchKeyGuard is very broken?
|
zou3519
|
open
|
[
"triaged",
"module: dispatch"
] | 1
|
CONTRIBUTOR
|
```py
import torch
print(torch._C._meta_in_tls_dispatch_include())
with torch._C._IncludeDispatchKeyGuard(torch.DispatchKey.Meta):
print(torch._C._meta_in_tls_dispatch_include())
print(torch._C._meta_in_tls_dispatch_include())
```
prints False, True, True, which is completely bogus
| true
|
2,796,212,140
|
PEP585 update - torch/_inductor/fx_passes
|
aorenste
|
closed
|
[
"Merged",
"ciflow/trunk",
"topic: not user facing",
"module: inductor",
"ciflow/inductor"
] | 3
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #145107
See #145101 for details.
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @ColinPeppler @amjames @desertfire @chauhang @aakhundov
| true
|
2,796,211,651
|
PEP585 update - torch/_inductor/codegen
|
aorenste
|
closed
|
[
"module: rocm",
"Merged",
"ciflow/trunk",
"topic: not user facing",
"module: inductor",
"ciflow/inductor"
] | 5
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #145106
See #145101 for details.
cc @jeffdaily @sunway513 @jithunnair-amd @pruthvistony @ROCmSupport @dllehr-amd @jataylo @hongxiayang @naromero77amd @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @ColinPeppler @amjames @desertfire @chauhang @aakhundov
| true
|
2,796,210,483
|
PEP585 update - torch/_dynamo
|
aorenste
|
closed
|
[
"Merged",
"topic: not user facing",
"module: dynamo",
"ciflow/inductor",
"module: compiled autograd"
] | 1
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #145102
* __->__ #145105
See #145101 for details.
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @chauhang @amjames @xmfan @yf225
| true
|
2,796,210,469
|
further scheduler changes for invoke_quant: prologue low prec, (slightly) more aggressive fusion
|
eellison
|
closed
|
[
"Merged",
"ciflow/trunk",
"topic: not user facing",
"module: inductor",
"ciflow/inductor"
] | 6
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #145104
Respect invoke_quant low-precision options; also, be more aggressive in attempting fusion.
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @ColinPeppler @amjames @desertfire @chauhang @aakhundov
| true
|
2,796,210,380
|
Maintain multiple configs
|
eellison
|
closed
|
[
"Merged",
"ciflow/trunk",
"topic: not user facing",
"module: inductor",
"ciflow/inductor"
] | 3
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
Previously, we would finalize the config of a triton template after its first fusion. This maintains multiple configs, in case we epilogue fuse, then prologue fuse, and prologue fusion has a new, better config.
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @ColinPeppler @amjames @desertfire @chauhang @aakhundov
| true
|
2,796,209,939
|
PEP585 update - torch/_C torch/_decomp torch/_lazy torch/_library torch/_numpy torch/_prims torch/_refs torch/_strobelight
|
aorenste
|
closed
|
[
"Merged",
"ciflow/trunk",
"topic: not user facing",
"ciflow/inductor"
] | 6
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #145102
* #145105
See #145101 for details.
| true
|
2,796,194,618
|
PEP585 update - benchmarks tools torchgen
|
aorenste
|
closed
|
[
"Merged",
"ciflow/trunk",
"release notes: releng",
"topic: not user facing",
"module: dynamo",
"ciflow/inductor"
] | 3
|
CONTRIBUTOR
|
This is one of a series of PRs to update us to PEP585 (changing Dict -> dict, List -> list, etc). Most of the PRs were completely automated with RUFF as follows:
Since RUFF UP006 is considered an "unsafe" fix, first we need to enable unsafe fixes:
```
--- a/tools/linter/adapters/ruff_linter.py
+++ b/tools/linter/adapters/ruff_linter.py
@@ -313,6 +313,7 @@
"ruff",
"check",
"--fix-only",
+ "--unsafe-fixes",
"--exit-zero",
*([f"--config={config}"] if config else []),
"--stdin-filename",
```
Then we need to tell RUFF to allow UP006 (as a final PR once all of these have landed this will be made permanent):
```
--- a/pyproject.toml
+++ b/pyproject.toml
@@ -40,7 +40,7 @@
[tool.ruff]
-target-version = "py38"
+target-version = "py39"
line-length = 88
src = ["caffe2", "torch", "torchgen", "functorch", "test"]
@@ -87,7 +87,6 @@
"SIM116", # Disable Use a dictionary instead of consecutive `if` statements
"SIM117",
"SIM118",
- "UP006", # keep-runtime-typing
"UP007", # keep-runtime-typing
]
select = [
```
Finally running `lintrunner -a --take RUFF` will fix up the deprecated uses.
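For readers unfamiliar with PEP 585, the rewrite UP006 performs is purely at the annotation level; a small before/after example (illustrative, not taken from this PR):
```python
# Before (typing generics, required on Python 3.8):
from typing import Dict, List

def index_names_old(names: List[str]) -> Dict[str, int]:
    return {name: i for i, name in enumerate(names)}

# After (PEP 585 builtin generics, available from Python 3.9):
def index_names_new(names: list[str]) -> dict[str, int]:
    return {name: i for i, name in enumerate(names)}
```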
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #145101
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @chauhang @amjames
| true
|
2,796,174,444
|
`torch.onnx.export` (dynamo=False) fails with uninformative error when exporting `apply_rotary_pos_emb`/`repeat_interleave`
|
xenova
|
open
|
[
"module: onnx",
"triaged",
"OSS contribution wanted"
] | 3
|
NONE
|
### 🐛 Describe the bug
When attempting to export a HF transformers model that performs `apply_rotary_pos_emb` (definition below), an uninformative error is thrown:
<details>
<summary>See reproduction code</summary>
```py
import torch
def rotate_half(x):
"""Rotates half the hidden dims of the input."""
x1 = x[..., 0::2]
x2 = x[..., 1::2]
return torch.stack((-x2, x1), dim=-1).flatten(-2)
def apply_rotary_pos_emb(q, k, cos, sin, position_ids=None, unsqueeze_dim=1):
"""Applies Rotary Position Embedding to the query and key tensors.
Args:
q (`torch.Tensor`): The query tensor.
k (`torch.Tensor`): The key tensor.
cos (`torch.Tensor`): The cosine part of the rotary embedding.
sin (`torch.Tensor`): The sine part of the rotary embedding.
position_ids (`torch.Tensor`, *optional*):
Deprecated and unused.
unsqueeze_dim (`int`, *optional*, defaults to 1):
The 'unsqueeze_dim' argument specifies the dimension along which to unsqueeze cos[position_ids] and
sin[position_ids] so that they can be properly broadcasted to the dimensions of q and k. For example, note
that cos[position_ids] and sin[position_ids] have the shape [batch_size, seq_len, head_dim]. Then, if q and
k have the shape [batch_size, heads, seq_len, head_dim], then setting unsqueeze_dim=1 makes
cos[position_ids] and sin[position_ids] broadcastable to the shapes of q and k. Similarly, if q and k have
the shape [batch_size, seq_len, heads, head_dim], then set unsqueeze_dim=2.
Returns:
`tuple(torch.Tensor)` comprising of the query and key tensors rotated using the Rotary Position Embedding.
"""
cos = cos.unsqueeze(unsqueeze_dim)
sin = sin.unsqueeze(unsqueeze_dim)
# Interleave them instead of usual shape
cos = cos[..., : cos.shape[-1] // 2].repeat_interleave(2, dim=-1)
sin = sin[..., : sin.shape[-1] // 2].repeat_interleave(2, dim=-1)
# Keep half or full tensor for later concatenation
rotary_dim = cos.shape[-1]
q_rot, q_pass = q[..., :rotary_dim], q[..., rotary_dim:]
k_rot, k_pass = k[..., :rotary_dim], k[..., rotary_dim:]
# Apply rotary embeddings on the first half or full tensor
q_embed = (q_rot * cos) + (rotate_half(q_rot) * sin)
k_embed = (k_rot * cos) + (rotate_half(k_rot) * sin)
# Concatenate back to full shape
q_embed = torch.cat([q_embed, q_pass], dim=-1)
k_embed = torch.cat([k_embed, k_pass], dim=-1)
return q_embed, k_embed
class Model(torch.nn.Module):
def __init__(self):
super().__init__()
def forward(self, q, k, cos, sin):
return apply_rotary_pos_emb(q, k, cos, sin)
model = Model()
torch.manual_seed(0)
q = torch.randn(2, 2, 40, 32)
k = torch.randn(2, 2, 40, 32)
cos = torch.randn(1, 40, 28)
sin = torch.randn(1, 40, 28)
output = model(q, k, cos, sin)
assert output[0].shape == q.shape
assert output[1].shape == k.shape
assert abs(output[0].mean().item() - -0.022467557340860367) < 1e-8
assert abs(output[1].mean().item() - 0.0071802930906414986) < 1e-8
torch.onnx.export(
model,
(q, k, cos, sin),
"model.onnx",
input_names=["q", "k", "cos", "sin"],
output_names=["q_embed", "k_embed"],
opset_version=18,
dynamo=False, # True works, False is broken
)
```
</details>
produces:
```
Traceback (most recent call last):
File "/workspaces/optimum/o/rot.py", line 74, in <module>
torch.onnx.export(
File "/usr/local/python/3.12.1/lib/python3.12/site-packages/torch/onnx/__init__.py", line 375, in export
export(
File "/usr/local/python/3.12.1/lib/python3.12/site-packages/torch/onnx/utils.py", line 502, in export
_export(
File "/usr/local/python/3.12.1/lib/python3.12/site-packages/torch/onnx/utils.py", line 1564, in _export
graph, params_dict, torch_out = _model_to_graph(
^^^^^^^^^^^^^^^^
File "/usr/local/python/3.12.1/lib/python3.12/site-packages/torch/onnx/utils.py", line 1117, in _model_to_graph
graph = _optimize_graph(
^^^^^^^^^^^^^^^^
File "/usr/local/python/3.12.1/lib/python3.12/site-packages/torch/onnx/utils.py", line 639, in _optimize_graph
graph = _C._jit_pass_onnx(graph, operator_export_type)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/python/3.12.1/lib/python3.12/site-packages/torch/onnx/utils.py", line 1836, in _run_symbolic_function
return symbolic_fn(graph_context, *inputs, **attrs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/python/3.12.1/lib/python3.12/site-packages/torch/onnx/symbolic_helper.py", line 369, in wrapper
return fn(g, *args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/python/3.12.1/lib/python3.12/site-packages/torch/onnx/symbolic_opset11.py", line 519, in cat
return opset9.cat(g, tensor_list, dim)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/python/3.12.1/lib/python3.12/site-packages/torch/onnx/symbolic_helper.py", line 281, in wrapper
return fn(g, *args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/python/3.12.1/lib/python3.12/site-packages/torch/onnx/symbolic_opset9.py", line 563, in cat
assert all(
AssertionError
```
The issue seems to be caused by the `repeat_interleave` call. However, the error message does not describe this.
NOTE: The model exports correctly with dynamo=True, and I understand that the previous exporter is not being worked on, so feel free to close this issue if needed. I raise it for these reasons:
- [Optimum](http://github.com/huggingface/optimum) still uses the previous exporter, and so this issue is present.
- I haven't been able to find this issue reported, so I'm adding it so I can link to it in my workaround (rewriting the `repeat_interleave` function; a sketch of such a rewrite is shown below).
- To possibly get a more informative error (since the current one doesn't indicate what the problem is)
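A possible shape of such a rewrite, as a hedged sketch (the helper name is made up for illustration, and whether it exports cleanly depends on the surrounding model):
```python
import torch

def repeat_interleave_last_dim(x: torch.Tensor, repeats: int) -> torch.Tensor:
    # Emulates x.repeat_interleave(repeats, dim=-1) with unsqueeze/expand/reshape,
    # avoiding the repeat_interleave call this issue points to.
    return x.unsqueeze(-1).expand(*x.shape, repeats).reshape(*x.shape[:-1], -1)

x = torch.arange(6.0).reshape(2, 3)
assert torch.equal(repeat_interleave_last_dim(x, 2), x.repeat_interleave(2, dim=-1))
```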
### Versions
Collecting environment information...
PyTorch version: 2.5.1+cu124
Is debug build: False
CUDA used to build PyTorch: 12.4
ROCM used to build PyTorch: N/A
OS: Ubuntu 20.04.6 LTS (x86_64)
GCC version: (Ubuntu 9.4.0-1ubuntu1~20.04.2) 9.4.0
Clang version: 10.0.0-4ubuntu1
CMake version: version 3.16.3
Libc version: glibc-2.31
Python version: 3.12.1 (main, Sep 30 2024, 17:05:21) [GCC 9.4.0] (64-bit runtime)
Python platform: Linux-6.5.0-1025-azure-x86_64-with-glibc2.31
Is CUDA available: False
CUDA runtime version: No CUDA
CUDA_MODULE_LOADING set to: N/A
GPU models and configuration: No CUDA
Nvidia driver version: No CUDA
cuDNN version: No CUDA
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Byte Order: Little Endian
Address sizes: 48 bits physical, 48 bits virtual
CPU(s): 2
On-line CPU(s) list: 0,1
Thread(s) per core: 2
Core(s) per socket: 1
Socket(s): 1
NUMA node(s): 1
Vendor ID: AuthenticAMD
CPU family: 25
Model: 1
Model name: AMD EPYC 7763 64-Core Processor
Stepping: 1
CPU MHz: 3243.502
BogoMIPS: 4890.85
Virtualization: AMD-V
Hypervisor vendor: Microsoft
Virtualization type: full
L1d cache: 32 KiB
L1i cache: 32 KiB
L2 cache: 512 KiB
L3 cache: 32 MiB
NUMA node0 CPU(s): 0,1
Vulnerability Gather data sampling: Not affected
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Not affected
Vulnerability Retbleed: Not affected
Vulnerability Spec rstack overflow: Vulnerable: Safe RET, no microcode
Vulnerability Spec store bypass: Vulnerable
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Retpolines; STIBP disabled; RSB filling; PBRSB-eIBRS Not affected; BHI Not affected
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ht syscall nx mmxext fxsr_opt pdpe1gb rdtscp lm constant_tsc rep_good nopl tsc_reliable nonstop_tsc cpuid extd_apicid aperfmperf pni pclmulqdq ssse3 fma cx16 pcid sse4_1 sse4_2 movbe popcnt aes xsave avx f16c rdrand hypervisor lahf_lm cmp_legacy svm cr8_legacy abm sse4a misalignsse 3dnowprefetch osvw topoext invpcid_single vmmcall fsgsbase bmi1 avx2 smep bmi2 erms invpcid rdseed adx smap clflushopt clwb sha_ni xsaveopt xsavec xgetbv1 xsaves clzero xsaveerptr rdpru arat npt nrip_save tsc_scale vmcb_clean flushbyasid decodeassists pausefilter pfthreshold v_vmsave_vmload umip vaes vpclmulqdq rdpid fsrm
Versions of relevant libraries:
[pip3] numpy==2.0.2
[pip3] nvidia-cublas-cu12==12.4.5.8
[pip3] nvidia-cuda-cupti-cu12==12.4.127
[pip3] nvidia-cuda-nvrtc-cu12==12.4.127
[pip3] nvidia-cuda-runtime-cu12==12.4.127
[pip3] nvidia-cudnn-cu12==9.1.0.70
[pip3] nvidia-cufft-cu12==11.2.1.3
[pip3] nvidia-curand-cu12==10.3.5.147
[pip3] nvidia-cusolver-cu12==11.6.1.9
[pip3] nvidia-cusparse-cu12==12.3.1.170
[pip3] nvidia-nccl-cu12==2.21.5
[pip3] nvidia-nvjitlink-cu12==12.4.127
[pip3] nvidia-nvtx-cu12==12.4.127
[pip3] onnx==1.17.0
[pip3] onnx-graphsurgeon==0.5.2
[pip3] onnxconverter-common==1.14.0
[pip3] onnxruntime==1.20.1
[pip3] onnxscript==0.1.0.dev20241219
[pip3] onnxslim==0.1.42
[pip3] torch==2.5.1
[pip3] torchvision==0.20.1
[pip3] triton==3.1.0
[conda] No relevant packages
| true
|
2,796,160,432
|
Make MultiProcContinuousTest timeout configurable
|
wconstab
|
closed
|
[
"Merged",
"ciflow/trunk",
"topic: not user facing"
] | 5
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #145125
* #144834
* __->__ #145099
* #145011
* #145010
Allows test classes using MPCT to set their own timeout as a class
property, which is good enough since the processgroup is shared across
test instances and the timeout is set at processgroup init.
Also sets a default timeout of 2 minutes, which is probably (?) long
enough for reasonable tests, but can be changed if it causes flakiness.
It's preferable to have as short a default timeout as possible, since
getting a timeout quickly helps when debugging tests.
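A minimal, self-contained sketch of the pattern described (names here are illustrative, not the actual MultiProcContinuousTest API):
```python
import datetime


class ProcessGroupTestBase:
    # Short default so a hung collective surfaces quickly while debugging.
    timeout: datetime.timedelta = datetime.timedelta(minutes=2)

    @classmethod
    def setup_process_group(cls) -> None:
        # The timeout is read exactly once, when the shared process group is
        # created, which is why a per-class override (rather than per-test)
        # is sufficient.
        print(f"init_process_group(timeout={cls.timeout})")


class SlowCollectiveTest(ProcessGroupTestBase):
    # A class whose collectives are known to be slow raises the limit.
    timeout = datetime.timedelta(minutes=10)


SlowCollectiveTest.setup_process_group()  # init_process_group(timeout=0:10:00)
```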
| true
|
2,796,153,815
|
Use STL string_view header
|
r-barnes
|
closed
|
[
"oncall: distributed",
"oncall: jit",
"Stale",
"release notes: cpp",
"topic: improvements"
] | 2
|
CONTRIBUTOR
|
Fixes #ISSUE_NUMBER
cc @H-Huang @awgu @kwen2501 @wanchaol @fegin @fduwjj @wz337 @wconstab @d4l3k @c-p-i-o @EikanWang @jgong5 @wenzhe-nrv @sanchitintel
| true
|
2,796,136,375
|
[ca] Use aot_eager on flex attention test
|
xmfan
|
closed
|
[
"module: rocm",
"Merged",
"ciflow/trunk",
"topic: not user facing",
"module: inductor",
"ciflow/inductor",
"ciflow/rocm"
] | 4
|
MEMBER
|
FIXES https://github.com/pytorch/pytorch/issues/144912
The flex attention lowering incompatibilities are covered by https://github.com/pytorch/pytorch/blob/main/test/inductor/test_flex_attention.py. For the CA + flex integration, we don't actually need to test the lowering, only the frontend graph capture.
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #145097
cc @jeffdaily @sunway513 @jithunnair-amd @pruthvistony @ROCmSupport @dllehr-amd @jataylo @hongxiayang @naromero77amd @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @ColinPeppler @amjames @desertfire @chauhang @aakhundov
| true
|
2,796,094,098
|
DISABLED test_basic (__main__.TestPythonDispatch)
|
pytorch-bot[bot]
|
closed
|
[
"triaged",
"module: flaky-tests",
"skipped",
"module: __torch_dispatch__"
] | 3
|
NONE
|
Platforms: linux, rocm, slow, win, windows, asan
This test was disabled because it is failing in CI. See [recent examples](https://hud.pytorch.org/flakytest?name=test_basic&suite=TestPythonDispatch&limit=100) and the most recent trunk [workflow logs](https://github.com/pytorch/pytorch/runs/35778393396).
Over the past 3 hours, it has been determined flaky in 8 workflow(s) with 16 failures and 8 successes.
**Debugging instructions (after clicking on the recent samples link):**
DO NOT ASSUME THINGS ARE OKAY IF THE CI IS GREEN. We now shield flaky tests from developers so CI will thus be green but it will be harder to parse the logs.
To find relevant log snippets:
1. Click on the workflow logs linked above
2. Click on the Test step of the job so that it is expanded. Otherwise, the grepping will not work.
3. Grep for `test_basic`
4. There should be several instances run (as flaky tests are rerun in CI) from which you can study the logs.
Test file path: `test_python_dispatch.py`
ConnectionTimeoutError: Connect timeout for 5000ms, GET https://raw.githubusercontent.com/pytorch/pytorch/main/test/test_python_dispatch.py -2 (connected: false, keepalive socket: false, socketHandledRequests: 1, socketHandledResponses: 0)
headers: {}
cc @clee2000 @wdvr @Chillee @ezyang @zou3519 @albanD @samdow
| true
|
2,796,092,424
|
cpp_wrapper/aot_inductor: handle conjugation and negation dispatch keys
|
benjaminglass1
|
closed
|
[
"open source",
"Merged",
"topic: not user facing",
"module: inductor",
"ciflow/inductor"
] | 1
|
COLLABORATOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #146424
* #146109
* #145683
* #145655
* #145654
* __->__ #145095
Handles conjugation and negation in the same way that runtime dispatch does: by on-the-fly cloning a tensor with either key applied.
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @ColinPeppler @amjames @desertfire @chauhang @aakhundov
| true
|
2,796,078,456
|
Tracking issue: Incorrect Meta Strides / Turn On PyDispatcher in FakeTensor Mode
|
eellison
|
open
|
[
"triaged",
"oncall: pt2",
"module: fakeTensor",
"module: decompositions",
"module: pt2-dispatcher"
] | 1
|
CONTRIBUTOR
|
### 🐛 Describe the bug
Incorrect strides can manifest as errors within torch.compile. What makes them trickier is that they only sometimes cause errors: an incorrect stride can lie dormant for a while and then cause a problem.
See [this discussion](https://github.com/pytorch/pytorch/issues/144699#issuecomment-2591018702) with @ezyang, @bdhirsh and myself about incorrect strides.
There are a number of known issues that are as yet unfixed. Some of them have outstanding PRs; please check with the PR author before taking one on.
- [ ] `full_like`: https://github.com/pytorch/pytorch/issues/144699
- [ ] `_unsafe_index` : https://github.com/pytorch/pytorch/issues/139312
- [ ] `_fft_r2c`: https://github.com/pytorch/pytorch/issues/135087
- [ ] `_constant_pad_nd`: https://github.com/pytorch/pytorch/issues/144187
Additionally, there are a number of stride & other issues that have been exposed by enabling PyDispatcher in FakeTensorMode. This causes us to potentially route through different decompositions and metas. It is what we use in torch.compile, which means we lack coverage of this mode in our other tests.
Tests exposed by [turning this on](https://github.com/pytorch/pytorch/pull/138953#issuecomment-2438965279):
- [ ] dropout
- [ ] MultiLabelMarginLoss
Fft tests as well, but that might be related to `_fft_r2c` in the existing issue.
### Versions
master
cc @chauhang @penguinwu @SherlockNoMad @zou3519 @bdhirsh @yf225
| true
|
2,796,073,888
|
Inductor aten.clone lowering ignores Conjugate and Negative dispatch keys
|
benjaminglass1
|
open
|
[
"triaged",
"actionable",
"module: correctness (silent)",
"bug",
"oncall: pt2",
"module: inductor",
"module: pt2-dispatcher"
] | 4
|
COLLABORATOR
|
### 🐛 Describe the bug
In runtime-dispatched `torch`, conjugation and certain forms of negation are lazily evaluated at dispatch. The current lowering for `aten.clone` ignores this. See the minimal reproducer below:
```python
import torch
fn = torch.compile(torch.ops.aten.clone.default) # this issue does not occur in "eager" or "aot_eager"
u = torch.randn(5, dtype=torch.complex64).conj().imag # sets Negative dispatch key
assert torch.all(fn(u) == u) # fails
```
### Error logs
_No response_
### Versions
```
Collecting environment information...
PyTorch version: 2.7.0a0+git6759d9c
Is debug build: False
CUDA used to build PyTorch: 12.4
ROCM used to build PyTorch: N/A
OS: Ubuntu 24.04.1 LTS (x86_64)
GCC version: (conda-forge gcc 12.4.0-1) 12.4.0
Clang version: Could not collect
CMake version: version 3.31.2
Libc version: glibc-2.39
Python version: 3.9.21 | packaged by conda-forge | (main, Dec 5 2024, 13:51:40) [GCC 13.3.0] (64-bit runtime)
Python platform: Linux-6.8.0-51-generic-x86_64-with-glibc2.39
Is CUDA available: True
CUDA runtime version: 12.4.131
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration:
GPU 0: Quadro RTX 8000
GPU 1: Quadro RTX 8000
Nvidia driver version: 560.35.05
cuDNN version: Probably one of the following:
/usr/local/cuda-12.6.3/targets/x86_64-linux/lib/libcudnn.so.9
/usr/local/cuda-12.6.3/targets/x86_64-linux/lib/libcudnn_adv.so.9
/usr/local/cuda-12.6.3/targets/x86_64-linux/lib/libcudnn_cnn.so.9
/usr/local/cuda-12.6.3/targets/x86_64-linux/lib/libcudnn_engines_precompiled.so.9
/usr/local/cuda-12.6.3/targets/x86_64-linux/lib/libcudnn_engines_runtime_compiled.so.9
/usr/local/cuda-12.6.3/targets/x86_64-linux/lib/libcudnn_graph.so.9
/usr/local/cuda-12.6.3/targets/x86_64-linux/lib/libcudnn_heuristic.so.9
/usr/local/cuda-12.6.3/targets/x86_64-linux/lib/libcudnn_ops.so.9
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: False
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 43 bits physical, 48 bits virtual
Byte Order: Little Endian
CPU(s): 48
On-line CPU(s) list: 0-47
Vendor ID: AuthenticAMD
Model name: AMD Ryzen Threadripper 2970WX 24-Core Processor
CPU family: 23
Model: 8
Thread(s) per core: 2
Core(s) per socket: 24
Socket(s): 1
Stepping: 2
Frequency boost: enabled
CPU(s) scaling MHz: 77%
CPU max MHz: 3000.0000
CPU min MHz: 2200.0000
BogoMIPS: 5987.89
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ht syscall nx mmxext fxsr_opt pdpe1gb rdtscp lm constant_tsc rep_good nopl nonstop_tsc cpuid extd_apicid amd_dcm aperfmperf rapl pni pclmulqdq monitor ssse3 fma cx16 sse4_1 sse4_2 movbe popcnt aes xsave avx f16c rdrand lahf_lm cmp_legacy svm extapic cr8_legacy abm sse4a misalignsse 3dnowprefetch osvw skinit wdt tce topoext perfctr_core perfctr_nb bpext perfctr_llc mwaitx cpb hw_pstate ssbd ibpb vmmcall fsgsbase bmi1 avx2 smep bmi2 rdseed adx smap clflushopt sha_ni xsaveopt xsavec xgetbv1 clzero irperf xsaveerptr arat npt lbrv svm_lock nrip_save tsc_scale vmcb_clean flushbyasid decodeassists pausefilter pfthreshold avic v_vmsave_vmload vgif overflow_recov succor smca sev sev_es
Virtualization: AMD-V
L1d cache: 768 KiB (24 instances)
L1i cache: 1.5 MiB (24 instances)
L2 cache: 12 MiB (24 instances)
L3 cache: 64 MiB (8 instances)
NUMA node(s): 4
NUMA node0 CPU(s): 0-5,24-29
NUMA node1 CPU(s): 12-17,36-41
NUMA node2 CPU(s): 6-11,30-35
NUMA node3 CPU(s): 18-23,42-47
Vulnerability Gather data sampling: Not affected
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Not affected
Vulnerability Reg file data sampling: Not affected
Vulnerability Retbleed: Mitigation; untrained return thunk; SMT vulnerable
Vulnerability Spec rstack overflow: Mitigation; Safe RET
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Retpolines; IBPB conditional; STIBP disabled; RSB filling; PBRSB-eIBRS Not affected; BHI Not affected
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Versions of relevant libraries:
[pip3] bert_pytorch==0.0.1a4
[pip3] flake8==6.1.0
[pip3] flake8-bugbear==23.3.23
[pip3] flake8-comprehensions==3.15.0
[pip3] flake8-executable==2.1.3
[pip3] flake8-logging-format==0.9.0
[pip3] flake8-pyi==23.3.1
[pip3] flake8-simplify==0.19.3
[pip3] functorch==1.14.0a0+b71aa0b
[pip3] mypy==1.13.0
[pip3] mypy-extensions==1.0.0
[pip3] numpy==1.26.4
[pip3] onnx==1.17.0
[pip3] optree==0.13.0
[pip3] pytorch-labs-segment-anything-fast==0.2
[pip3] pytorch-triton==3.2.0+git0d4682f0
[pip3] torch==2.7.0a0+git6759d9c
[pip3] torch_geometric==2.4.0
[pip3] torchao==0.7.0
[pip3] torchaudio==2.6.0a0+b6d4675
[pip3] torchdata==0.11.0a0+227d3d7
[pip3] torchmultimodal==0.1.0b0
[pip3] torchtext==0.17.0a0+1d4ce73
[pip3] torchvision==0.22.0a0+d3beb52
[conda] bert-pytorch 0.0.1a4 dev_0 <develop>
[conda] cuda-cudart 12.4.127 he02047a_2 conda-forge
[conda] cuda-cudart-dev 12.4.127 he02047a_2 conda-forge
[conda] cuda-cudart-dev_linux-64 12.4.127 h85509e4_2 conda-forge
[conda] cuda-cudart-static 12.4.127 he02047a_2 conda-forge
[conda] cuda-cudart-static_linux-64 12.4.127 h85509e4_2 conda-forge
[conda] cuda-cudart_linux-64 12.4.127 h85509e4_2 conda-forge
[conda] cuda-cupti 12.4.127 he02047a_2 conda-forge
[conda] cuda-cupti-dev 12.4.127 he02047a_2 conda-forge
[conda] cuda-libraries-dev 12.4.1 ha770c72_1 conda-forge
[conda] cuda-nvrtc 12.4.127 he02047a_2 conda-forge
[conda] cuda-nvrtc-dev 12.4.127 he02047a_2 conda-forge
[conda] cuda-nvtx 12.4.127 he02047a_2 conda-forge
[conda] cuda-nvtx-dev 12.4.127 ha770c72_2 conda-forge
[conda] cuda-opencl 12.4.127 he02047a_1 conda-forge
[conda] cuda-opencl-dev 12.4.127 he02047a_1 conda-forge
[conda] cudnn 9.3.0.75 h62a6f1c_2 conda-forge
[conda] functorch 1.14.0a0+b71aa0b pypi_0 pypi
[conda] libcublas 12.4.5.8 he02047a_2 conda-forge
[conda] libcublas-dev 12.4.5.8 he02047a_2 conda-forge
[conda] libcufft 11.2.1.3 he02047a_2 conda-forge
[conda] libcufft-dev 11.2.1.3 he02047a_2 conda-forge
[conda] libcurand 10.3.5.147 he02047a_2 conda-forge
[conda] libcurand-dev 10.3.5.147 he02047a_2 conda-forge
[conda] libcusolver 11.6.1.9 he02047a_2 conda-forge
[conda] libcusolver-dev 11.6.1.9 he02047a_2 conda-forge
[conda] libcusparse 12.3.1.170 he02047a_2 conda-forge
[conda] libcusparse-dev 12.3.1.170 he02047a_2 conda-forge
[conda] libmagma 2.8.0 h0af6554_0 conda-forge
[conda] libmagma_sparse 2.8.0 h0af6554_0 conda-forge
[conda] libnvjitlink 12.4.127 he02047a_2 conda-forge
[conda] libnvjitlink-dev 12.4.127 he02047a_2 conda-forge
[conda] magma 2.8.0 h51420fd_0 conda-forge
[conda] mkl 2025.0.0 h901ac74_941 conda-forge
[conda] mkl-include 2025.0.0 hf2ce2f3_941 conda-forge
[conda] numpy 1.26.4 pypi_0 pypi
[conda] optree 0.13.0 pypi_0 pypi
[conda] pytorch-labs-segment-anything-fast 0.2 pypi_0 pypi
[conda] pytorch-triton 3.2.0+git0d4682f0 pypi_0 pypi
[conda] torch 2.7.0a0+git6759d9c dev_0 <develop>
[conda] torch-geometric 2.4.0 pypi_0 pypi
[conda] torchao 0.7.0 pypi_0 pypi
[conda] torchaudio 2.6.0a0+b6d4675 pypi_0 pypi
[conda] torchdata 0.11.0a0+227d3d7 pypi_0 pypi
[conda] torchfix 0.4.0 pypi_0 pypi
[conda] torchmultimodal 0.1.0b0 pypi_0 pypi
[conda] torchtext 0.17.0a0+1d4ce73 pypi_0 pypi
[conda] torchvision 0.22.0a0+d3beb52 pypi_0 pypi
```
cc @chauhang @penguinwu @voznesenskym @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @ColinPeppler @amjames @desertfire @aakhundov @zou3519 @bdhirsh
| true
|
2,796,033,502
|
[MPSInductor] Implement `i0` and `i1` ops
|
malfet
|
closed
|
[
"Merged",
"topic: not user facing",
"ciflow/mps",
"module: inductor",
"ciflow/inductor"
] | 3
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #145092
* #145087
Using shared definitions with eager op
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @ColinPeppler @amjames @desertfire @chauhang @aakhundov
| true
|
2,796,011,787
|
[aoti] Deduplicate "V.aot_compilation" and "V.graph.aot_mode" flags. [2/n]
|
zhxchen17
|
closed
|
[
"fb-exported",
"Merged",
"ciflow/trunk",
"topic: not user facing",
"module: inductor",
"ciflow/inductor"
] | 4
|
CONTRIBUTOR
|
Summary: Following up D68122536 to remove configurable aot_mode for inner_compile
Test Plan: CI
Reviewed By: desertfire
Differential Revision: D68158512
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @ColinPeppler @amjames @desertfire @chauhang @aakhundov
| true
|
2,795,991,723
|
Test
|
svekars
|
closed
|
[
"Stale",
"topic: not user facing"
] | 2
|
CONTRIBUTOR
|
Fixes #ISSUE_NUMBER
| true
|
2,795,968,542
|
[POC] Extend torch function support to ALL arguments, not just scalar type (but not insides of list)
|
ezyang
|
open
|
[
"release notes: fx",
"no-stale"
] | 4
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #145089
Signed-off-by: Edward Z. Yang <ezyang@meta.com>
| true
|
2,795,954,655
|
[torchbench] torch._dynamo.exc.Unsupported: Graph break due to unsupported builtin None.morphologyEx
|
IvanKobzarev
|
open
|
[
"oncall: pt2",
"module: dynamo",
"oncall: export",
"pt2-pass-rate-regression"
] | 5
|
CONTRIBUTOR
|
### 🐛 Describe the bug
Repro:
```
python benchmarks/dynamo/torchbench.py --accuracy --no-translation-validation --inference --amp --export --disable-cudagraphs --device cuda --only doctr_det_predictor
```
```
cuda eval doctr_det_predictor
ERROR:common:
Traceback (most recent call last):
File "/data/users/ivankobzarev/a/pytorch/benchmarks/dynamo/common.py", line 3055, in check_accuracy
optimized_model_iter_fn = optimize_ctx(
File "/data/users/ivankobzarev/a/pytorch/benchmarks/dynamo/common.py", line 1623, in export
ep = torch.export.export(
File "/home/ivankobzarev/local/a/pytorch-env/lib/python3.10/site-packages/torch/export/__init__.py", line 270, in export
return _export(
File "/home/ivankobzarev/local/a/pytorch-env/lib/python3.10/site-packages/torch/export/_trace.py", line 1017, in wrapper
raise e
File "/home/ivankobzarev/local/a/pytorch-env/lib/python3.10/site-packages/torch/export/_trace.py", line 990, in wrapper
ep = fn(*args, **kwargs)
File "/home/ivankobzarev/local/a/pytorch-env/lib/python3.10/site-packages/torch/export/exported_program.py", line 114, in wrapper
return fn(*args, **kwargs)
File "/home/ivankobzarev/local/a/pytorch-env/lib/python3.10/site-packages/torch/export/_trace.py", line 1880, in _export
export_artifact = export_func( # type: ignore[operator]
File "/home/ivankobzarev/local/a/pytorch-env/lib/python3.10/site-packages/torch/export/_trace.py", line 1224, in _strict_export
return _strict_export_lower_to_aten_ir(
File "/home/ivankobzarev/local/a/pytorch-env/lib/python3.10/site-packages/torch/export/_trace.py", line 1252, in _strict_export_lower_to_aten_ir
gm_torch_level = _export_to_torch_ir(
File "/home/ivankobzarev/local/a/pytorch-env/lib/python3.10/site-packages/torch/export/_trace.py", line 560, in _export_to_torch_ir
gm_torch_level, _ = torch._dynamo.export(
File "/home/ivankobzarev/local/a/pytorch-env/lib/python3.10/site-packages/torch/_dynamo/eval_frame.py", line 1432, in inner
result_traced = opt_f(*args, **kwargs)
File "/home/ivankobzarev/local/a/pytorch-env/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1736, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
File "/home/ivankobzarev/local/a/pytorch-env/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1747, in _call_impl
return forward_call(*args, **kwargs)
File "/home/ivankobzarev/local/a/pytorch-env/lib/python3.10/site-packages/torch/_dynamo/eval_frame.py", line 465, in _fn
return fn(*args, **kwargs)
File "/home/ivankobzarev/local/a/pytorch-env/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1736, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
File "/home/ivankobzarev/local/a/pytorch-env/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1747, in _call_impl
return forward_call(*args, **kwargs)
File "/home/ivankobzarev/local/a/pytorch-env/lib/python3.10/site-packages/torch/_dynamo/convert_frame.py", line 1269, in __call__
return self._torchdynamo_orig_callable(
File "/home/ivankobzarev/local/a/pytorch-env/lib/python3.10/site-packages/torch/_dynamo/convert_frame.py", line 526, in __call__
return _compile(
File "/home/ivankobzarev/local/a/pytorch-env/lib/python3.10/site-packages/torch/_dynamo/convert_frame.py", line 924, in _compile
guarded_code = compile_inner(code, one_graph, hooks, transform)
File "/home/ivankobzarev/local/a/pytorch-env/lib/python3.10/site-packages/torch/_dynamo/convert_frame.py", line 666, in compile_inner
return _compile_inner(code, one_graph, hooks, transform)
File "/home/ivankobzarev/local/a/pytorch-env/lib/python3.10/site-packages/torch/_utils_internal.py", line 87, in wrapper_function
return function(*args, **kwargs)
File "/home/ivankobzarev/local/a/pytorch-env/lib/python3.10/site-packages/torch/_dynamo/convert_frame.py", line 699, in _compile_inner
out_code = transform_code_object(code, transform)
File "/home/ivankobzarev/local/a/pytorch-env/lib/python3.10/site-packages/torch/_dynamo/bytecode_transformation.py", line 1322, in transform_code_object
transformations(instructions, code_options)
File "/home/ivankobzarev/local/a/pytorch-env/lib/python3.10/site-packages/torch/_dynamo/convert_frame.py", line 219, in _fn
return fn(*args, **kwargs)
File "/home/ivankobzarev/local/a/pytorch-env/lib/python3.10/site-packages/torch/_dynamo/convert_frame.py", line 634, in transform
tracer.run()
File "/home/ivankobzarev/local/a/pytorch-env/lib/python3.10/site-packages/torch/_dynamo/symbolic_convert.py", line 2796, in run
super().run()
File "/home/ivankobzarev/local/a/pytorch-env/lib/python3.10/site-packages/torch/_dynamo/symbolic_convert.py", line 983, in run
while self.step():
File "/home/ivankobzarev/local/a/pytorch-env/lib/python3.10/site-packages/torch/_dynamo/symbolic_convert.py", line 895, in step
self.dispatch_table[inst.opcode](self, inst)
File "/home/ivankobzarev/local/a/pytorch-env/lib/python3.10/site-packages/torch/_dynamo/symbolic_convert.py", line 582, in wrapper
return inner_fn(self, inst)
File "/home/ivankobzarev/local/a/pytorch-env/lib/python3.10/site-packages/torch/_dynamo/symbolic_convert.py", line 1602, in CALL_FUNCTION
self.call_function(fn, args, {})
File "/home/ivankobzarev/local/a/pytorch-env/lib/python3.10/site-packages/torch/_dynamo/symbolic_convert.py", line 830, in call_function
self.push(fn.call_function(self, args, kwargs)) # type: ignore[arg-type]
File "/home/ivankobzarev/local/a/pytorch-env/lib/python3.10/site-packages/torch/_dynamo/variables/user_defined.py", line 928, in call_function
return self.call_method(tx, "__call__", args, kwargs)
File "/home/ivankobzarev/local/a/pytorch-env/lib/python3.10/site-packages/torch/_dynamo/variables/user_defined.py", line 788, in call_method
return UserMethodVariable(method, self, source=source).call_function(
File "/home/ivankobzarev/local/a/pytorch-env/lib/python3.10/site-packages/torch/_dynamo/variables/functions.py", line 385, in call_function
return super().call_function(tx, args, kwargs)
File "/home/ivankobzarev/local/a/pytorch-env/lib/python3.10/site-packages/torch/_dynamo/variables/functions.py", line 324, in call_function
return super().call_function(tx, args, kwargs)
File "/home/ivankobzarev/local/a/pytorch-env/lib/python3.10/site-packages/torch/_dynamo/variables/functions.py", line 111, in call_function
return tx.inline_user_function_return(self, [*self.self_args(), *args], kwargs)
File "/home/ivankobzarev/local/a/pytorch-env/lib/python3.10/site-packages/torch/_dynamo/symbolic_convert.py", line 836, in inline_user_function_return
return InliningInstructionTranslator.inline_call(self, fn, args, kwargs)
File "/home/ivankobzarev/local/a/pytorch-env/lib/python3.10/site-packages/torch/_dynamo/symbolic_convert.py", line 3011, in inline_call
return cls.inline_call_(parent, func, args, kwargs)
File "/home/ivankobzarev/local/a/pytorch-env/lib/python3.10/site-packages/torch/_dynamo/symbolic_convert.py", line 3139, in inline_call_
tracer.run()
File "/home/ivankobzarev/local/a/pytorch-env/lib/python3.10/site-packages/torch/_dynamo/symbolic_convert.py", line 983, in run
while self.step():
File "/home/ivankobzarev/local/a/pytorch-env/lib/python3.10/site-packages/torch/_dynamo/symbolic_convert.py", line 895, in step
self.dispatch_table[inst.opcode](self, inst)
File "/home/ivankobzarev/local/a/pytorch-env/lib/python3.10/site-packages/torch/_dynamo/symbolic_convert.py", line 582, in wrapper
return inner_fn(self, inst)
File "/home/ivankobzarev/local/a/pytorch-env/lib/python3.10/site-packages/torch/_dynamo/symbolic_convert.py", line 1602, in CALL_FUNCTION
self.call_function(fn, args, {})
File "/home/ivankobzarev/local/a/pytorch-env/lib/python3.10/site-packages/torch/_dynamo/symbolic_convert.py", line 830, in call_function
self.push(fn.call_function(self, args, kwargs)) # type: ignore[arg-type]
File "/home/ivankobzarev/local/a/pytorch-env/lib/python3.10/site-packages/torch/_dynamo/variables/functions.py", line 111, in call_function
return tx.inline_user_function_return(self, [*self.self_args(), *args], kwargs)
File "/home/ivankobzarev/local/a/pytorch-env/lib/python3.10/site-packages/torch/_dynamo/symbolic_convert.py", line 836, in inline_user_function_return
return InliningInstructionTranslator.inline_call(self, fn, args, kwargs)
File "/home/ivankobzarev/local/a/pytorch-env/lib/python3.10/site-packages/torch/_dynamo/symbolic_convert.py", line 3011, in inline_call
return cls.inline_call_(parent, func, args, kwargs)
File "/home/ivankobzarev/local/a/pytorch-env/lib/python3.10/site-packages/torch/_dynamo/symbolic_convert.py", line 3139, in inline_call_
tracer.run()
File "/home/ivankobzarev/local/a/pytorch-env/lib/python3.10/site-packages/torch/_dynamo/symbolic_convert.py", line 983, in run
while self.step():
File "/home/ivankobzarev/local/a/pytorch-env/lib/python3.10/site-packages/torch/_dynamo/symbolic_convert.py", line 895, in step
self.dispatch_table[inst.opcode](self, inst)
File "/home/ivankobzarev/local/a/pytorch-env/lib/python3.10/site-packages/torch/_dynamo/symbolic_convert.py", line 582, in wrapper
return inner_fn(self, inst)
File "/home/ivankobzarev/local/a/pytorch-env/lib/python3.10/site-packages/torch/_dynamo/symbolic_convert.py", line 1602, in CALL_FUNCTION
self.call_function(fn, args, {})
File "/home/ivankobzarev/local/a/pytorch-env/lib/python3.10/site-packages/torch/_dynamo/symbolic_convert.py", line 830, in call_function
self.push(fn.call_function(self, args, kwargs)) # type: ignore[arg-type]
File "/home/ivankobzarev/local/a/pytorch-env/lib/python3.10/site-packages/torch/_dynamo/variables/functions.py", line 111, in call_function
return tx.inline_user_function_return(self, [*self.self_args(), *args], kwargs)
File "/home/ivankobzarev/local/a/pytorch-env/lib/python3.10/site-packages/torch/_dynamo/symbolic_convert.py", line 836, in inline_user_function_return
return InliningInstructionTranslator.inline_call(self, fn, args, kwargs)
File "/home/ivankobzarev/local/a/pytorch-env/lib/python3.10/site-packages/torch/_dynamo/symbolic_convert.py", line 3011, in inline_call
return cls.inline_call_(parent, func, args, kwargs)
File "/home/ivankobzarev/local/a/pytorch-env/lib/python3.10/site-packages/torch/_dynamo/symbolic_convert.py", line 3139, in inline_call_
tracer.run()
File "/home/ivankobzarev/local/a/pytorch-env/lib/python3.10/site-packages/torch/_dynamo/symbolic_convert.py", line 983, in run
while self.step():
File "/home/ivankobzarev/local/a/pytorch-env/lib/python3.10/site-packages/torch/_dynamo/symbolic_convert.py", line 895, in step
self.dispatch_table[inst.opcode](self, inst)
File "/home/ivankobzarev/local/a/pytorch-env/lib/python3.10/site-packages/torch/_dynamo/symbolic_convert.py", line 582, in wrapper
return inner_fn(self, inst)
File "/home/ivankobzarev/local/a/pytorch-env/lib/python3.10/site-packages/torch/_dynamo/symbolic_convert.py", line 1602, in CALL_FUNCTION
self.call_function(fn, args, {})
File "/home/ivankobzarev/local/a/pytorch-env/lib/python3.10/site-packages/torch/_dynamo/symbolic_convert.py", line 830, in call_function
self.push(fn.call_function(self, args, kwargs)) # type: ignore[arg-type]
File "/home/ivankobzarev/local/a/pytorch-env/lib/python3.10/site-packages/torch/_dynamo/variables/functions.py", line 727, in call_function
unimplemented(msg)
File "/home/ivankobzarev/local/a/pytorch-env/lib/python3.10/site-packages/torch/_dynamo/exc.py", line 297, in unimplemented
raise Unsupported(msg, case_name=case_name)
torch._dynamo.exc.Unsupported: Graph break due to unsupported builtin None.morphologyEx. This function is either a Python builtin (e.g. _warnings.warn) or a third-party C/C++ Python extension (perhaps created with pybind). If it is a Python builtin, please file an issue on GitHub so the PyTorch team can add support for it and see the next case for a workaround. If it is a third-party C/C++ Python extension, please either wrap it into a PyTorch-understood custom operator (see https://pytorch.org/tutorials/advanced/custom_ops_landing_page.html for more details) or, if it is traceable, use torch.compiler.allow_in_graph.
from user code:
File "/home/ivankobzarev/local/a/pytorch-env/lib/python3.10/site-packages/doctr/models/detection/differentiable_binarization/pytorch.py", line 211, in forward
for preds in self.postprocessor(prob_map.detach().cpu().permute((0, 2, 3, 1)).numpy())
File "/home/ivankobzarev/local/a/pytorch-env/lib/python3.10/site-packages/doctr/models/detection/core.py", line 90, in __call__
bin_map = [
File "/home/ivankobzarev/local/a/pytorch-env/lib/python3.10/site-packages/doctr/models/detection/core.py", line 91, in <listcomp>
[
File "/home/ivankobzarev/local/a/pytorch-env/lib/python3.10/site-packages/doctr/models/detection/core.py", line 92, in <listcomp>
cv2.morphologyEx(bmap[..., idx], cv2.MORPH_OPEN, self._opening_kernel)
Set TORCH_LOGS="+dynamo" and TORCHDYNAMO_VERBOSE=1 for more information
TorchDynamo optimized model failed to run because of following error
fail_to_run
```
### Error logs
_No response_
### Versions
torch main Jan 17
cc @chauhang @penguinwu @voznesenskym @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @amjames @avikchaudhuri @gmagogsfm @zhxchen17 @tugsbayasgalan @angelayi @suo @ydwu4
| true
|
2,795,954,316
|
[MPS] Support includes in metal objects
|
malfet
|
closed
|
[
"Merged",
"release notes: mps",
"ciflow/mps",
"topic: devs"
] | 6
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #145092
* __->__ #145087
Useful for code reuse when building Metal shaders for both eager mode and MPSInductor, but it requires implementing a `_cpp_embed_headers` tool that, as the name suggests, preprocesses and embeds the headers into the shader so it can be used in dynamic compilation.
Test using:
- `TestMetalLibrary.test_metal_include`
- Moving the `i0`/`i1` implementation to `c10/util/metal_special_math.h` and calling it from the `SpecialOps.metal` shader, which now looks much more compact:
```metal
template <typename T, typename Tout = T>
void kernel
i0(constant T* input,
device Tout* output,
uint index [[thread_position_in_grid]]) {
output[index] = c10::i0(static_cast<Tout>(input[index]));
}
```
| true
|
2,795,951,958
|
`torch.distributions`: replace `numbers.Number` with `torch.types.Number`.
|
randolf-scholz
|
closed
|
[
"module: distributions",
"module: typing",
"triaged",
"open source",
"Merged",
"ciflow/trunk",
"release notes: python_frontend"
] | 7
|
CONTRIBUTOR
|
Fixes #144788 (partial)
cc @fritzo @neerajprad @alicanb @nikitaved @ezyang @malfet @xuzhao9 @gramster
| true
|
2,795,951,069
|
Add support for torch function on dtype arguments
|
ezyang
|
closed
|
[
"Merged",
"ciflow/trunk",
"release notes: python_frontend",
"topic: new features"
] | 4
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #145085
Along the lines of https://github.com/pytorch/pytorch/issues/119194 although it doesn't actually address the FCD case.
Signed-off-by: Edward Z. Yang <ezyang@meta.com>
| true
|
2,795,932,321
|
[CI] Add xpu linux build into pull workflow
|
chuanqi129
|
closed
|
[
"open source",
"Merged",
"ciflow/trunk",
"topic: not user facing"
] | 3
|
COLLABORATOR
|
To mitigate the XPU build failure risk introduced by non-XPU-specific PRs. Refer to #144967 & #143803.
| true
|
2,795,900,424
|
cpp_wrapper: Move #includes to per-device header files
|
benjaminglass1
|
closed
|
[
"open source",
"ciflow/trunk",
"release notes: releng",
"module: inductor",
"ciflow/inductor",
"ciflow/rocm",
"ciflow/xpu"
] | 12
|
COLLABORATOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #144349
* #144293
* #144002
* __->__ #145083
This prepares us for the next PR in the stack, where we introduce pre-compiled per-device header files to save compilation time.
Reimplements https://github.com/pytorch/pytorch/pull/143909.
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @ColinPeppler @amjames @desertfire @chauhang @aakhundov
Differential Revision: [D68514319](https://our.internmc.facebook.com/intern/diff/D68514319)
| true
|
2,795,829,525
|
partitioner: avoid inserting duplicates into heap
|
bdhirsh
|
closed
|
[
"Merged",
"ciflow/trunk",
"release notes: composability",
"module: dynamo",
"ciflow/inductor",
"ciflow/inductor-perf-test-nightly"
] | 7
|
CONTRIBUTOR
|
Fixes https://github.com/pytorch/pytorch/issues/145081
This looks like it was a source of quadratic compile times in the torchtitan CP graphs. There's some code in the partitioner that iteratively adds users of a node to a heap, and pops the earliest user. If you have long parallel chains of fusible ops that all eventually feed into some shared ops, then this can result in:
(1) a node getting added to the heap many times
(2) each time we pop that node, we add (duplicates of) each of that node's users to the heap
(3) repeat with each user (a minimal sketch of the deduplication idea follows below)
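A hedged sketch of the deduplication idea (illustrative only, not the partitioner's actual code): track heap membership in a set so each node is pushed at most once, instead of once per fusible edge.
```python
import heapq

def visit_users_in_order(roots, users_of, order):
    """Yield nodes in `order` while pushing each node onto the heap at most once.

    roots: initial nodes to visit; users_of(node): iterable of that node's users;
    order: dict mapping node -> unique ordering index. Sketch, not torch code.
    """
    heap, queued = [], set()

    def push(node):
        if node not in queued:  # the dedup that avoids quadratic re-pushes
            queued.add(node)
            heapq.heappush(heap, (order[node], node))

    for node in roots:
        push(node)
    while heap:
        _, node = heapq.heappop(heap)
        yield node
        for user in users_of(node):
            push(user)
```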
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #145082
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @chauhang @amjames
| true
|
2,795,824,628
|
partitioner hangs for some long chains of ops with many users
|
bdhirsh
|
closed
|
[
"high priority",
"triaged",
"oncall: pt2",
"module: aotdispatch",
"module: pt2-dispatcher"
] | 0
|
CONTRIBUTOR
|
Causing the compile hang / NCCL timeout in https://fb.workplace.com/groups/1075192433118967/posts/1585106652127540/?comment_id=1585174555454083
Here's a min repro, which still hangs for me after several minutes of compiling:
```
import torch
import time
class Mod(torch.nn.Module):
def forward(self, x):
tmps = [x + i for i in range(32)]
tmps = [x + tmp for tmp in tmps]
for i in range(len(tmps) - 4):
tmps[i] = tmps[i].sin().mul(tmps[i])
tmps[i + 1] -= tmps[i]
tmps[i + 2] -= tmps[i]
tmps[i + 3] -= tmps[i]
return sum(tmps)
m = Mod()
m = torch.compile(m, backend="aot_eager_decomp_partition")
x = torch.randn(4, 4, requires_grad=True)
start = time.time()
out = m(x)
end = time.time()
print(end - start)
```
cc @ezyang @gchanan @zou3519 @kadeng @msaroufim @chauhang @penguinwu @yf225
| true
|
2,795,749,046
|
Remove FFT from stride incorrect ops
|
ezyang
|
closed
|
[
"Merged",
"ciflow/trunk",
"topic: bug fixes",
"release notes: dynamo"
] | 7
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #145080
I gotta say, the FFT implementation is completely insane, there's gotta be a better way to do this than repeatedly inplace restriding the output tensor. Anyway, this is a faithful translation of both the MKL and cuFFT paths to Python.
Fixes https://github.com/pytorch/pytorch/issues/135087
Signed-off-by: Edward Z. Yang <ezyang@meta.com>
| true
|
2,795,731,426
|
list comprehensions in SkipFiles are always skipped with no way to override
|
zou3519
|
closed
|
[
"triaged",
"oncall: pt2",
"module: dynamo"
] | 1
|
CONTRIBUTOR
|
Proposal: list comprehensions should always be inlined and never markable as skip.
Internal xref: https://fb.workplace.com/groups/1075192433118967/posts/1585141438790728/?comment_id=1585152455456293&reply_comment_id=1586067422031463
cc @chauhang @penguinwu @voznesenskym @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @amjames
| true
|
2,795,664,212
|
Don't overspecialize float when propagating cache guards to ShapeEnv
|
ezyang
|
closed
|
[
"module: cpu",
"Merged",
"ciflow/trunk",
"release notes: fx",
"fx",
"module: inductor",
"ciflow/inductor"
] | 3
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #145078
Fixes https://github.com/pytorch/pytorch/issues/142507
Signed-off-by: Edward Z. Yang <ezyang@meta.com>
cc @jgong5 @mingfeima @XiaobingSuper @sanchitintel @ashokei @jingxu10 @SherlockNoMad @EikanWang @wenzhe-nrv @voznesenskym @penguinwu @Guobing-Chen @zhuhaozhe @blzheng @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @ColinPeppler @amjames @desertfire @chauhang @aakhundov
| true
|
2,795,650,611
|
Negative values in stride causing error in `avg_pool2d` (on both CPU and CUDA)
|
WLFJ
|
open
|
[
"module: crash",
"module: error checking",
"triaged",
"actionable",
"topic: fuzzer"
] | 0
|
NONE
|
### 🐛 Describe the bug
Passing a tuple containing negative values (such as `sym_6` below) as the `stride` parameter of `torch.nn.functional.avg_pool2d` causes a crash on both CPU and CUDA. The function currently rejects zero stride values but does not validate negative ones, so a negative stride reaches the kernel and crashes the process.
For example:
```python
import torch
print(torch.__version__)
sym_0 = (8, 2, 1, 1)
sym_1 = torch.float32
sym_2 = torch.device("cpu")
sym_3 = 0
sym_4 = True
sym_5 = (9223372036854775807, 5868783964474102731)
sym_6 = (-1, 3010182406857593769)
sym_7 = (0,)
sym_8 = True
sym_9 = True
sym_10 = 33554427
var_546 = torch.randn(size=sym_0, dtype=sym_1, device=sym_2)
var_124 = torch.ops.aten.alias(var_546)
var_360 = torch.argmax(var_124, dim=sym_3, keepdim=sym_4)
torch.nn.functional.avg_pool2d(var_360, kernel_size=sym_5, stride=sym_6, padding=sym_7, ceil_mode=sym_8, count_include_pad=sym_9, divisor_override=sym_10)
```
output:
```
2.7.0.dev20250116+cu124
fish: Job 2, 'python3 test.py' terminated by signal SIGFPE (Floating point exception)
```
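Until the operator validates this itself, the missing check is easy to state in user code; a minimal sketch (assuming tuple-valued `kernel_size`/`stride` as in the repro) is:
```python
import torch.nn.functional as F

def checked_avg_pool2d(x, kernel_size, stride, **kwargs):
    # Reject non-positive values up front instead of letting a negative stride
    # reach the kernel and crash the process with SIGFPE.
    if any(k <= 0 for k in kernel_size) or any(s <= 0 for s in stride):
        raise ValueError(f"kernel_size={kernel_size} and stride={stride} must be strictly positive")
    return F.avg_pool2d(x, kernel_size=kernel_size, stride=stride, **kwargs)
```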
### Versions
```
Collecting environment information...
PyTorch version: 2.7.0.dev20250116+cu124
Is debug build: False
CUDA used to build PyTorch: 12.4
ROCM used to build PyTorch: N/A
OS: Manjaro Linux (x86_64)
GCC version: (GCC) 14.2.1 20240805
Clang version: 18.1.8
CMake version: version 3.30.2
Libc version: glibc-2.40
Python version: 3.11.11 | packaged by conda-forge | (main, Dec 5 2024, 14:17:24) [GCC 13.3.0] (64-bit runtime)
Python platform: Linux-6.6.47-1-MANJARO-x86_64-with-glibc2.40
Is CUDA available: True
CUDA runtime version: Could not collect
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration: GPU 0: NVIDIA GeForce RTX 4070
Nvidia driver version: 550.107.02
cuDNN version: Probably one of the following:
/usr/lib/libcudnn.so.9.2.1
/usr/lib/libcudnn_adv.so.9.2.1
/usr/lib/libcudnn_cnn.so.9.2.1
/usr/lib/libcudnn_engines_precompiled.so.9.2.1
/usr/lib/libcudnn_engines_runtime_compiled.so.9.2.1
/usr/lib/libcudnn_graph.so.9.2.1
/usr/lib/libcudnn_heuristic.so.9.2.1
/usr/lib/libcudnn_ops.so.9.2.1
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture:                         x86_64
CPU op-mode(s):                       32-bit, 64-bit
Address sizes:                        39 bits physical, 48 bits virtual
Byte Order:                           Little Endian
CPU(s):                               16
On-line CPU(s) list:                  0-15
Vendor ID:                            GenuineIntel
Model name:                           13th Gen Intel(R) Core(TM) i5-13400F
CPU family:                           6
Model:                                191
Thread(s) per core:                   2
Core(s) per socket:                   10
Socket(s):                            1
Stepping:                             2
CPU(s) scaling MHz:                   25%
CPU max MHz:                          4600.0000
CPU min MHz:                          800.0000
BogoMIPS:                             4993.00
Flags:                                fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc art arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc cpuid aperfmperf tsc_known_freq pni pclmulqdq dtes64 monitor ds_cpl vmx est tm2 ssse3 sdbg fma cx16 xtpr pdcm pcid sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm 3dnowprefetch cpuid_fault epb ssbd ibrs ibpb stibp ibrs_enhanced tpr_shadow flexpriority ept vpid ept_ad fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid rdseed adx smap clflushopt clwb intel_pt sha_ni xsaveopt xsavec xgetbv1 xsaves split_lock_detect user_shstk avx_vnni dtherm ida arat pln pts hwp hwp_notify hwp_act_window hwp_epp hwp_pkg_req hfi vnmi umip pku ospke waitpkg gfni vaes vpclmulqdq rdpid movdiri movdir64b fsrm md_clear serialize arch_lbr ibt flush_l1d arch_capabilities
Virtualization:                       VT-x
L1d cache:                            416 KiB (10 instances)
L1i cache:                            448 KiB (10 instances)
L2 cache:                             9.5 MiB (7 instances)
L3 cache:                             20 MiB (1 instance)
NUMA node(s):                         1
NUMA node0 CPU(s):                    0-15
Vulnerability Gather data sampling: Not affected
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Not affected
Vulnerability Reg file data sampling: Mitigation; Clear Register File
Vulnerability Retbleed: Not affected
Vulnerability Spec rstack overflow: Not affected
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Enhanced / Automatic IBRS; IBPB conditional; RSB filling; PBRSB-eIBRS SW sequence; BHI BHI_DIS_S
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Versions of relevant libraries:
[pip3] numpy==2.1.2
[pip3] nvidia-cublas-cu12==12.4.5.8
[pip3] nvidia-cuda-cupti-cu12==12.4.127
[pip3] nvidia-cuda-nvrtc-cu12==12.4.127
[pip3] nvidia-cuda-runtime-cu12==12.4.127
[pip3] nvidia-cudnn-cu12==9.1.0.70
[pip3] nvidia-cufft-cu12==11.2.1.3
[pip3] nvidia-curand-cu12==10.3.5.147
[pip3] nvidia-cusolver-cu12==11.6.1.9
[pip3] nvidia-cusparse-cu12==12.3.1.170
[pip3] nvidia-cusparselt-cu12==0.6.2
[pip3] nvidia-nccl-cu12==2.21.5
[pip3] nvidia-nvjitlink-cu12==12.4.127
[pip3] nvidia-nvtx-cu12==12.4.127
[pip3] pytorch-triton==3.2.0+git0d4682f0
[pip3] torch==2.7.0.dev20250116+cu124
[pip3] torchaudio==2.6.0.dev20250116+cu124
[pip3] torchvision==0.22.0.dev20250116+cu124
[conda] numpy 2.1.2 pypi_0 pypi
[conda] nvidia-cublas-cu12 12.4.5.8 pypi_0 pypi
[conda] nvidia-cuda-cupti-cu12 12.4.127 pypi_0 pypi
[conda] nvidia-cuda-nvrtc-cu12 12.4.127 pypi_0 pypi
[conda] nvidia-cuda-runtime-cu12 12.4.127 pypi_0 pypi
[conda] nvidia-cudnn-cu12 9.1.0.70 pypi_0 pypi
[conda] nvidia-cufft-cu12 11.2.1.3 pypi_0 pypi
[conda] nvidia-curand-cu12 10.3.5.147 pypi_0 pypi
[conda] nvidia-cusolver-cu12 11.6.1.9 pypi_0 pypi
[conda] nvidia-cusparse-cu12 12.3.1.170 pypi_0 pypi
[conda] nvidia-cusparselt-cu12 0.6.2 pypi_0 pypi
[conda] nvidia-nccl-cu12 2.21.5 pypi_0 pypi
[conda] nvidia-nvjitlink-cu12 12.4.127 pypi_0 pypi
[conda] nvidia-nvtx-cu12 12.4.127 pypi_0 pypi
[conda] pytorch-triton 3.2.0+git0d4682f0 pypi_0 pypi
[conda] torch 2.7.0.dev20250116+cu124 pypi_0 pypi
[conda] torchaudio 2.6.0.dev20250116+cu124 pypi_0 pypi
[conda] torchvision 0.22.0.dev20250116+cu124 pypi_0 pypi
```
cc @malfet
| true
|
2,795,627,996
|
Add MSVC version condition to "Fix for MSVC problem on Windows Arm64 (#136765)"
|
iremyux
|
closed
|
[
"module: cpu",
"triaged",
"open source",
"Merged",
"ciflow/trunk",
"topic: not user facing"
] | 9
|
COLLABORATOR
|
This PR adds MSVC version guards around the if block introduced in f7e36d8d6f9706ee9b9653538c4c8d2ba375a181. That commit provided a workaround for the problem reported here: https://developercommunity.visualstudio.com/t/MSVC-loop-unrolling-problem-194033813-/10720692 .
The MSVC issue is now fixed and only appears in compiler versions between 19.36 and 19.42.
cc @jgong5 @mingfeima @XiaobingSuper @sanchitintel @ashokei @jingxu10
| true
|
2,795,622,290
|
Downgrade ignored guard to info level
|
ezyang
|
closed
|
[
"Merged",
"ciflow/trunk",
"release notes: fx",
"topic: not user facing",
"fx",
"ciflow/inductor"
] | 6
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #145075
Fixes https://github.com/pytorch/pytorch/issues/101265
Signed-off-by: Edward Z. Yang <ezyang@meta.com>
cc @SherlockNoMad @EikanWang @jgong5 @wenzhe-nrv
| true
|
2,795,611,127
|
AssertionError: increase TRITON_MAX_BLOCK['X'] to 4096 Again!
|
filbeofITK
|
closed
|
[
"triaged",
"oncall: pt2",
"module: inductor",
"module: higher order operators",
"module: pt2-dispatcher",
"module: flex attention"
] | 2
|
NONE
|
### 🐛 Describe the bug
I have run into the compile issue with flex attention modules again, where I get the notorious `AssertionError: increase TRITON_MAX_BLOCK['X'] to 4096`.
I have read this issue: https://github.com/pytorch/pytorch/issues/135028 and tried the workaround suggested there:
`If you set torch._inductor.config.realize_opcount_threshold = 100 (or some other large number), it'll workaround your issue.` But sadly it didn't work.
Neither did setting the environment variable TRITON_MAX_BLOCK_X via os.environ or exporting it in the launch script (a sketch of what I tried is below).
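For reference, this is roughly how the two workarounds were applied (a sketch with a stand-in module, since the actual flex-attention model is too large to paste):
```python
import os
os.environ["TRITON_MAX_BLOCK_X"] = "4096"     # attempted env-var override (no effect)

import torch
import torch._inductor.config as inductor_config

# Workaround from https://github.com/pytorch/pytorch/issues/135028 (no effect either)
inductor_config.realize_opcount_threshold = 100

model = torch.nn.Linear(16, 16)               # stand-in for the flex-attention module
compiled = torch.compile(model)
out = compiled(torch.randn(2, 16))
```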
### Error logs
[rank0]: File "/usr/local/lib/python3.11/dist-packages/torch/_dynamo/output_graph.py", line 1465, in _call_user_compiler
[rank0]: raise BackendCompilerFailed(self.compiler_fn, e) from e
[rank0]: torch._dynamo.exc.BackendCompilerFailed: backend='compile_fn' raised:
[rank0]: AssertionError: increase TRITON_MAX_BLOCK['X'] to 4096
### Versions
PyTorch version: 2.5.1+cu124
Is debug build: False
CUDA used to build PyTorch: 12.4
ROCM used to build PyTorch: N/A
OS: Ubuntu 22.04.5 LTS (x86_64)
GCC version: (Ubuntu 11.4.0-1ubuntu1~22.04) 11.4.0
Clang version: Could not collect
CMake version: Could not collect
Libc version: glibc-2.35
Python version: 3.11.11 (main, Dec 4 2024, 08:55:07) [GCC 11.4.0] (64-bit runtime)
Python platform: Linux-4.18.0-372.9.1.el8.x86_64-x86_64-with-glibc2.35
Is CUDA available: True
CUDA runtime version: Could not collect
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration: GPU 0: Quadro RTX 6000
Nvidia driver version: 565.57.01
cuDNN version: Could not collect
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 48 bits physical, 48 bits virtual
Byte Order: Little Endian
CPU(s): 96
On-line CPU(s) list: 0-95
Vendor ID: AuthenticAMD
Model name: AMD EPYC 7352 24-Core Processor
CPU family: 23
Model: 49
Thread(s) per core: 2
Core(s) per socket: 24
Socket(s): 2
Stepping: 0
BogoMIPS: 4591.50
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ht syscall nx mmxext fxsr_opt pdpe1gb rdtscp lm constant_tsc rep_good nopl nonstop_tsc cpuid extd_apicid aperfmperf pni pclmulqdq monitor ssse3 fma cx16 sse4_1 sse4_2 movbe popcnt aes xsave avx f16c rdrand lahf_lm cmp_legacy svm extapic cr8_legacy abm sse4a misalignsse 3dnowprefetch osvw ibs skinit wdt tce topoext perfctr_core perfctr_nb bpext perfctr_llc mwaitx cpb cat_l3 cdp_l3 hw_pstate ssbd mba ibrs ibpb stibp vmmcall fsgsbase bmi1 avx2 smep bmi2 cqm rdt_a rdseed adx smap clflushopt clwb sha_ni xsaveopt xsavec xgetbv1 xsaves cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local clzero irperf xsaveerptr wbnoinvd amd_ppin arat npt lbrv svm_lock nrip_save tsc_scale vmcb_clean flushbyasid decodeassists pausefilter pfthreshold avic v_vmsave_vmload vgif v_spec_ctrl umip rdpid overflow_recov succor smca
Virtualization: AMD-V
L1d cache: 1.5 MiB (48 instances)
L1i cache: 1.5 MiB (48 instances)
L2 cache: 24 MiB (48 instances)
L3 cache: 256 MiB (16 instances)
NUMA node(s): 2
NUMA node0 CPU(s): 0-23,48-71
NUMA node1 CPU(s): 24-47,72-95
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl and seccomp
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Retpolines, IBPB conditional, IBRS_FW, STIBP conditional, RSB filling
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Versions of relevant libraries:
[pip3] numpy==2.2.1
[pip3] nvidia-cublas-cu12==12.4.5.8
[pip3] nvidia-cuda-cupti-cu12==12.4.127
[pip3] nvidia-cuda-nvrtc-cu12==12.4.127
[pip3] nvidia-cuda-runtime-cu12==12.4.127
[pip3] nvidia-cudnn-cu12==9.1.0.70
[pip3] nvidia-cufft-cu12==11.2.1.3
[pip3] nvidia-curand-cu12==10.3.5.147
[pip3] nvidia-cusolver-cu12==11.6.1.9
[pip3] nvidia-cusparse-cu12==12.3.1.170
[pip3] nvidia-nccl-cu12==2.21.5
[pip3] nvidia-nvjitlink-cu12==12.4.127
[pip3] nvidia-nvtx-cu12==12.4.127
[pip3] torch==2.5.1
[pip3] torchaudio==2.5.1
[pip3] torchvision==0.20.1
[pip3] triton==3.1.0
[conda] Could not collect
cc @chauhang @penguinwu @voznesenskym @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @ColinPeppler @amjames @desertfire @aakhundov @zou3519 @ydwu4 @bdhirsh @Chillee @drisspg @yanboliang @BoyuanFeng
| true
|
2,795,542,185
|
A confusion about Bidirectional GRU
|
guest-oo
|
closed
|
[] | 3
|
NONE
|
Is the output h_n only the valid (non-padded) data when the input is packed with pack_padded_sequence?
Is the layout of h_n independent of whether batch_first=True?
For a bidirectional GRU, if I take the final hidden state of both directions, is it the forward state at the last time step (h_f_t) combined with the backward state at the first time step (h_b_1)?
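For concreteness, a minimal sketch of the setup being asked about (packed input, bidirectional `nn.GRU`, separating the two directions of `h_n`); the shapes here are made up for illustration:
```python
import torch
from torch import nn
from torch.nn.utils.rnn import pack_padded_sequence

gru = nn.GRU(input_size=8, hidden_size=16, batch_first=True, bidirectional=True)

x = torch.randn(3, 5, 8)                       # (batch, max_len, features)
lengths = torch.tensor([5, 3, 2])              # per-sequence valid lengths
packed = pack_padded_sequence(x, lengths, batch_first=True, enforce_sorted=False)

_, h_n = gru(packed)                           # h_n: (num_layers * 2, batch, hidden), regardless of batch_first
h_n = h_n.view(1, 2, 3, 16)                    # (num_layers, num_directions, batch, hidden)
h_fwd = h_n[-1, 0]                             # forward state at each sequence's last valid step
h_bwd = h_n[-1, 1]                             # backward state after running back to t = 0
final = torch.cat([h_fwd, h_bwd], dim=-1)      # (batch, 2 * hidden)
```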
| true
|
2,795,530,887
|
Segmentation fault when passing an empty tensor to `_local_scalar_dense`
|
WLFJ
|
closed
|
[
"module: crash",
"module: error checking",
"triaged",
"actionable",
"module: empty tensor",
"topic: fuzzer"
] | 2
|
NONE
|
### 🐛 Describe the bug
Passing a tensor with size `(0,)` (empty tensor) to the `torch.ops.aten._local_scalar_dense` function results in a segmentation fault (SIGSEGV).
```python
import torch
print(torch.__version__)
var_199 = torch.rand((0,))
torch.ops.aten._local_scalar_dense(var_199)
```
output:
```
2.7.0.dev20250116+cu124
fish: Job 3, 'python3 test.py' terminated by signal SIGSEGV (Address boundary error)
```
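`_local_scalar_dense` is what `Tensor.item()` dispatches to, so the crash is reachable from ordinary code as well; a minimal sketch of the guard one would expect the op to perform (and that can be applied defensively in user code) is:
```python
import torch

def safe_item(t):
    # item() / aten._local_scalar_dense are only defined for exactly one element;
    # checking numel() turns the empty-tensor case into a Python error instead of a SIGSEGV.
    if t.numel() != 1:
        raise ValueError(f"expected a tensor with exactly one element, got numel()={t.numel()}")
    return t.item()

print(safe_item(torch.rand(1)))   # fine
# safe_item(torch.rand(0))        # raises ValueError instead of crashing
```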
### Versions
```
Collecting environment information...
PyTorch version: 2.7.0.dev20250116+cu124
Is debug build: False
CUDA used to build PyTorch: 12.4
ROCM used to build PyTorch: N/A
OS: Manjaro Linux (x86_64)
GCC version: (GCC) 14.2.1 20240805
Clang version: 18.1.8
CMake version: version 3.30.2
Libc version: glibc-2.40
Python version: 3.11.11 | packaged by conda-forge | (main, Dec 5 2024, 14:17:24) [GCC 13.3.0] (64-bit runtime)
Python platform: Linux-6.6.47-1-MANJARO-x86_64-with-glibc2.40
Is CUDA available: True
CUDA runtime version: Could not collect
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration: GPU 0: NVIDIA GeForce RTX 4070
Nvidia driver version: 550.107.02
cuDNN version: Probably one of the following:
/usr/lib/libcudnn.so.9.2.1
/usr/lib/libcudnn_adv.so.9.2.1
/usr/lib/libcudnn_cnn.so.9.2.1
/usr/lib/libcudnn_engines_precompiled.so.9.2.1
/usr/lib/libcudnn_engines_runtime_compiled.so.9.2.1
/usr/lib/libcudnn_graph.so.9.2.1
/usr/lib/libcudnn_heuristic.so.9.2.1
/usr/lib/libcudnn_ops.so.9.2.1
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture:                         x86_64
CPU op-mode(s):                       32-bit, 64-bit
Address sizes:                        39 bits physical, 48 bits virtual
Byte Order:                           Little Endian
CPU(s):                               16
On-line CPU(s) list:                  0-15
Vendor ID:                            GenuineIntel
Model name:                           13th Gen Intel(R) Core(TM) i5-13400F
CPU family:                           6
Model:                                191
Thread(s) per core:                   2
Core(s) per socket:                   10
Socket(s):                            1
Stepping:                             2
CPU(s) scaling MHz:                   25%
CPU max MHz:                          4600.0000
CPU min MHz:                          800.0000
BogoMIPS:                             4993.00
Flags:                                fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc art arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc cpuid aperfmperf tsc_known_freq pni pclmulqdq dtes64 monitor ds_cpl vmx est tm2 ssse3 sdbg fma cx16 xtpr pdcm pcid sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm 3dnowprefetch cpuid_fault epb ssbd ibrs ibpb stibp ibrs_enhanced tpr_shadow flexpriority ept vpid ept_ad fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid rdseed adx smap clflushopt clwb intel_pt sha_ni xsaveopt xsavec xgetbv1 xsaves split_lock_detect user_shstk avx_vnni dtherm ida arat pln pts hwp hwp_notify hwp_act_window hwp_epp hwp_pkg_req hfi vnmi umip pku ospke waitpkg gfni vaes vpclmulqdq rdpid movdiri movdir64b fsrm md_clear serialize arch_lbr ibt flush_l1d arch_capabilities
Virtualization:                       VT-x
L1d cache:                            416 KiB (10 instances)
L1i cache:                            448 KiB (10 instances)
L2 cache:                             9.5 MiB (7 instances)
L3 cache:                             20 MiB (1 instance)
NUMA node(s):                         1
NUMA node0 CPU(s):                    0-15
Vulnerability Gather data sampling: Not affected
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Not affected
Vulnerability Reg file data sampling: Mitigation; Clear Register File
Vulnerability Retbleed: Not affected
Vulnerability Spec rstack overflow: Not affected
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Enhanced / Automatic IBRS; IBPB conditional; RSB filling; PBRSB-eIBRS SW sequence; BHI BHI_DIS_S
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Versions of relevant libraries:
[pip3] numpy==2.1.2
[pip3] nvidia-cublas-cu12==12.4.5.8
[pip3] nvidia-cuda-cupti-cu12==12.4.127
[pip3] nvidia-cuda-nvrtc-cu12==12.4.127
[pip3] nvidia-cuda-runtime-cu12==12.4.127
[pip3] nvidia-cudnn-cu12==9.1.0.70
[pip3] nvidia-cufft-cu12==11.2.1.3
[pip3] nvidia-curand-cu12==10.3.5.147
[pip3] nvidia-cusolver-cu12==11.6.1.9
[pip3] nvidia-cusparse-cu12==12.3.1.170
[pip3] nvidia-cusparselt-cu12==0.6.2
[pip3] nvidia-nccl-cu12==2.21.5
[pip3] nvidia-nvjitlink-cu12==12.4.127
[pip3] nvidia-nvtx-cu12==12.4.127
[pip3] pytorch-triton==3.2.0+git0d4682f0
[pip3] torch==2.7.0.dev20250116+cu124
[pip3] torchaudio==2.6.0.dev20250116+cu124
[pip3] torchvision==0.22.0.dev20250116+cu124
[conda] numpy 2.1.2 pypi_0 pypi
[conda] nvidia-cublas-cu12 12.4.5.8 pypi_0 pypi
[conda] nvidia-cuda-cupti-cu12 12.4.127 pypi_0 pypi
[conda] nvidia-cuda-nvrtc-cu12 12.4.127 pypi_0 pypi
[conda] nvidia-cuda-runtime-cu12 12.4.127 pypi_0 pypi
[conda] nvidia-cudnn-cu12 9.1.0.70 pypi_0 pypi
[conda] nvidia-cufft-cu12 11.2.1.3 pypi_0 pypi
[conda] nvidia-curand-cu12 10.3.5.147 pypi_0 pypi
[conda] nvidia-cusolver-cu12 11.6.1.9 pypi_0 pypi
[conda] nvidia-cusparse-cu12 12.3.1.170 pypi_0 pypi
[conda] nvidia-cusparselt-cu12 0.6.2 pypi_0 pypi
[conda] nvidia-nccl-cu12 2.21.5 pypi_0 pypi
[conda] nvidia-nvjitlink-cu12 12.4.127 pypi_0 pypi
[conda] nvidia-nvtx-cu12 12.4.127 pypi_0 pypi
[conda] pytorch-triton 3.2.0+git0d4682f0 pypi_0 pypi
[conda] torch 2.7.0.dev20250116+cu124 pypi_0 pypi
[conda] torchaudio 2.6.0.dev20250116+cu124 pypi_0 pypi
[conda] torchvision 0.22.0.dev20250116+cu124 pypi_0 pypi
```
cc @malfet
| true
|
2,795,510,053
|
Illegal memory access and segmentation fault due to large `storage_offset` in `as_strided`
|
WLFJ
|
open
|
[
"module: crash",
"triaged",
"topic: fuzzer"
] | 0
|
NONE
|
### 🐛 Describe the bug
Passing a very large value for the `storage_offset` parameter in `torch.as_strided` causes different errors on CPU and CUDA:
* On CPU, it leads to a segmentation fault (SIGSEGV).
* On CUDA, it results in an illegal memory access error when attempting to print or access the result after performing tensor operations.
For example, on CUDA:
```python
import torch
print(torch.__version__)
sym_0 = (0, 0, 1, 5, 5)
sym_1 = 6.0
sym_2 = torch.long
sym_3 = 'cuda'
sym_4 = (1,)
sym_5 = (1,)
sym_6 = 9223372036854775807
sym_7 = (-1,)
sym_8 = False
var_349 = torch.full(size=sym_0, fill_value=sym_1, dtype=sym_2, layout=None, device=sym_3, pin_memory=None)
var_568 = torch.as_strided(var_349, size=sym_4, stride=sym_5, storage_offset=sym_6)
res = torch.amax(var_568, dim=sym_7, keepdim=sym_8)
print(res)
```
output:
```
2.7.0.dev20250116+cu124
Traceback (most recent call last):
File "/home/yvesw/reborn2-expr/250117-bugs/test.py", line 18, in <module>
print(res)
File "/home/yvesw/miniconda3/envs/torch-preview/lib/python3.11/site-packages/torch/_tensor.py", line 590, in __repr__
return torch._tensor_str._str(self, tensor_contents=tensor_contents)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/yvesw/miniconda3/envs/torch-preview/lib/python3.11/site-packages/torch/_tensor_str.py", line 704, in _str
return _str_intern(self, tensor_contents=tensor_contents)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/yvesw/miniconda3/envs/torch-preview/lib/python3.11/site-packages/torch/_tensor_str.py", line 621, in _str_intern
tensor_str = _tensor_str(self, indent)
^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/yvesw/miniconda3/envs/torch-preview/lib/python3.11/site-packages/torch/_tensor_str.py", line 353, in _tensor_str
formatter = _Formatter(get_summarized_data(self) if summarize else self)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/yvesw/miniconda3/envs/torch-preview/lib/python3.11/site-packages/torch/_tensor_str.py", line 141, in __init__
value_str = f"{value}"
^^^^^^^^^^
File "/home/yvesw/miniconda3/envs/torch-preview/lib/python3.11/site-packages/torch/_tensor.py", line 1119, in __format__
return self.item().__format__(format_spec)
^^^^^^^^^^^
RuntimeError: CUDA error: an illegal memory access was encountered
CUDA kernel errors might be asynchronously reported at some other API call, so the stacktrace below might be incorrect.
For debugging consider passing CUDA_LAUNCH_BLOCKING=1
Compile with `TORCH_USE_CUDA_DSA` to enable device-side assertions.
```
and on CPU:
```python
import torch
print(torch.__version__)
sym_0 = (0, 0, 1, 5, 5)
sym_1 = 6.0
sym_2 = torch.long
sym_3 = 'cpu'
sym_4 = (1,)
sym_5 = (1,)
sym_6 = 9223372036854775807
sym_7 = (-1,)
sym_8 = False
var_349 = torch.full(size=sym_0, fill_value=sym_1, dtype=sym_2, layout=None, device=sym_3, pin_memory=None)
var_568 = torch.as_strided(var_349, size=sym_4, stride=sym_5, storage_offset=sym_6)
res = torch.amax(var_568, dim=sym_7, keepdim=sym_8)
print(res)
```
we got:
```
2.7.0.dev20250116+cu124
fish: Job 3, 'python3 test.py' terminated by signal SIGSEGV (Address boundary error)
```
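Both failures come down to the view addressing memory far past the end of the underlying storage; a minimal sketch of the bound check that would reject such a call (assuming non-negative strides, which matches this repro) is:
```python
import torch

def checked_as_strided(t, size, stride, storage_offset=0):
    # The farthest element the view can touch is
    # storage_offset + sum((size_i - 1) * stride_i); it must lie inside the storage.
    storage_elems = t.untyped_storage().nbytes() // t.element_size()
    max_offset = storage_offset + sum((s - 1) * st for s, st in zip(size, stride))
    if all(s > 0 for s in size) and max_offset >= storage_elems:
        raise ValueError(
            f"view reaches element {max_offset}, but the storage holds only {storage_elems} elements")
    return torch.as_strided(t, size, stride, storage_offset)
```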
### Versions
```
Collecting environment information...
PyTorch version: 2.7.0.dev20250116+cu124
Is debug build: False
CUDA used to build PyTorch: 12.4
ROCM used to build PyTorch: N/A
OS: Manjaro Linux (x86_64)
GCC version: (GCC) 14.2.1 20240805
Clang version: 18.1.8
CMake version: version 3.30.2
Libc version: glibc-2.40
Python version: 3.11.11 | packaged by conda-forge | (main, Dec 5 2024, 14:17:24) [GCC 13.3.0] (64-bit runtime)
Python platform: Linux-6.6.47-1-MANJARO-x86_64-with-glibc2.40
Is CUDA available: True
CUDA runtime version: Could not collect
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration: GPU 0: NVIDIA GeForce RTX 4070
Nvidia driver version: 550.107.02
cuDNN version: Probably one of the following:
/usr/lib/libcudnn.so.9.2.1
/usr/lib/libcudnn_adv.so.9.2.1
/usr/lib/libcudnn_cnn.so.9.2.1
/usr/lib/libcudnn_engines_precompiled.so.9.2.1
/usr/lib/libcudnn_engines_runtime_compiled.so.9.2.1
/usr/lib/libcudnn_graph.so.9.2.1
/usr/lib/libcudnn_heuristic.so.9.2.1
/usr/lib/libcudnn_ops.so.9.2.1
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture:                         x86_64
CPU op-mode(s):                       32-bit, 64-bit
Address sizes:                        39 bits physical, 48 bits virtual
Byte Order:                           Little Endian
CPU(s):                               16
On-line CPU(s) list:                  0-15
Vendor ID:                            GenuineIntel
Model name:                           13th Gen Intel(R) Core(TM) i5-13400F
CPU family:                           6
Model:                                191
Thread(s) per core:                   2
Core(s) per socket:                   10
Socket(s):                            1
Stepping:                             2
CPU(s) scaling MHz:                   25%
CPU max MHz:                          4600.0000
CPU min MHz:                          800.0000
BogoMIPS:                             4993.00
Flags:                                fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc art arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc cpuid aperfmperf tsc_known_freq pni pclmulqdq dtes64 monitor ds_cpl vmx est tm2 ssse3 sdbg fma cx16 xtpr pdcm pcid sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm 3dnowprefetch cpuid_fault epb ssbd ibrs ibpb stibp ibrs_enhanced tpr_shadow flexpriority ept vpid ept_ad fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid rdseed adx smap clflushopt clwb intel_pt sha_ni xsaveopt xsavec xgetbv1 xsaves split_lock_detect user_shstk avx_vnni dtherm ida arat pln pts hwp hwp_notify hwp_act_window hwp_epp hwp_pkg_req hfi vnmi umip pku ospke waitpkg gfni vaes vpclmulqdq rdpid movdiri movdir64b fsrm md_clear serialize arch_lbr ibt flush_l1d arch_capabilities
Virtualization:                       VT-x
L1d cache:                            416 KiB (10 instances)
L1i cache:                            448 KiB (10 instances)
L2 cache:                             9.5 MiB (7 instances)
L3 cache:                             20 MiB (1 instance)
NUMA node(s):                         1
NUMA node0 CPU(s):                    0-15
Vulnerability Gather data sampling: Not affected
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Not affected
Vulnerability Reg file data sampling: Mitigation; Clear Register File
Vulnerability Retbleed: Not affected
Vulnerability Spec rstack overflow: Not affected
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Enhanced / Automatic IBRS; IBPB conditional; RSB filling; PBRSB-eIBRS SW sequence; BHI BHI_DIS_S
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Versions of relevant libraries:
[pip3] numpy==2.1.2
[pip3] nvidia-cublas-cu12==12.4.5.8
[pip3] nvidia-cuda-cupti-cu12==12.4.127
[pip3] nvidia-cuda-nvrtc-cu12==12.4.127
[pip3] nvidia-cuda-runtime-cu12==12.4.127
[pip3] nvidia-cudnn-cu12==9.1.0.70
[pip3] nvidia-cufft-cu12==11.2.1.3
[pip3] nvidia-curand-cu12==10.3.5.147
[pip3] nvidia-cusolver-cu12==11.6.1.9
[pip3] nvidia-cusparse-cu12==12.3.1.170
[pip3] nvidia-cusparselt-cu12==0.6.2
[pip3] nvidia-nccl-cu12==2.21.5
[pip3] nvidia-nvjitlink-cu12==12.4.127
[pip3] nvidia-nvtx-cu12==12.4.127
[pip3] pytorch-triton==3.2.0+git0d4682f0
[pip3] torch==2.7.0.dev20250116+cu124
[pip3] torchaudio==2.6.0.dev20250116+cu124
[pip3] torchvision==0.22.0.dev20250116+cu124
[conda] numpy 2.1.2 pypi_0 pypi
[conda] nvidia-cublas-cu12 12.4.5.8 pypi_0 pypi
[conda] nvidia-cuda-cupti-cu12 12.4.127 pypi_0 pypi
[conda] nvidia-cuda-nvrtc-cu12 12.4.127 pypi_0 pypi
[conda] nvidia-cuda-runtime-cu12 12.4.127 pypi_0 pypi
[conda] nvidia-cudnn-cu12 9.1.0.70 pypi_0 pypi
[conda] nvidia-cufft-cu12 11.2.1.3 pypi_0 pypi
[conda] nvidia-curand-cu12 10.3.5.147 pypi_0 pypi
[conda] nvidia-cusolver-cu12 11.6.1.9 pypi_0 pypi
[conda] nvidia-cusparse-cu12 12.3.1.170 pypi_0 pypi
[conda] nvidia-cusparselt-cu12 0.6.2 pypi_0 pypi
[conda] nvidia-nccl-cu12 2.21.5 pypi_0 pypi
[conda] nvidia-nvjitlink-cu12 12.4.127 pypi_0 pypi
[conda] nvidia-nvtx-cu12 12.4.127 pypi_0 pypi
[conda] pytorch-triton 3.2.0+git0d4682f0 pypi_0 pypi
[conda] torch 2.7.0.dev20250116+cu124 pypi_0 pypi
[conda] torchaudio 2.6.0.dev20250116+cu124 pypi_0 pypi
[conda] torchvision 0.22.0.dev20250116+cu124 pypi_0 pypi
```
| true
|
2,795,438,332
|
Segmentation fault on CPU and IndexError on CUDA for `_adaptive_avg_pool2d_backward`
|
WLFJ
|
closed
|
[
"module: crash",
"module: error checking",
"triaged",
"topic: fuzzer"
] | 1
|
NONE
|
### 🐛 Describe the bug
Calling `torch.ops.aten._adaptive_avg_pool2d_backward` with mismatched tensor dimensions causes a segmentation fault (SIGSEGV) on CPU, but an `IndexError` on CUDA.
For example, on CUDA:
```python
import torch
print(torch.__version__)
sym_0 = (1, 3, 8, 3)
sym_1 = torch.strided
sym_2 = 'cuda'
sym_3 = (1, 48)
v0 = torch.randn(size=sym_0, dtype=None, layout=sym_1, device=sym_2)
v1 = torch.rand(size=sym_3, device=sym_2)
torch.ops.aten._adaptive_avg_pool2d_backward(grad_output=v0, self=v1)
```
output:
```
2.7.0.dev20250116+cu124
Traceback (most recent call last):
File "/home/yvesw/reborn2-expr/250117-bugs/test.py", line 12, in <module>
torch.ops.aten._adaptive_avg_pool2d_backward(grad_output=v0, self=v1)
File "/home/yvesw/miniconda3/envs/torch-preview/lib/python3.11/site-packages/torch/_ops.py", line 1158, in __call__
return self._op(*args, **(kwargs or {}))
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
IndexError: Dimension out of range (expected to be in range of [-2, 1], but got -3)
```
and on CPU:
```python
import torch
print(torch.__version__)
sym_0 = (1, 3, 8, 3)
sym_1 = torch.strided
sym_2 = 'cpu'
sym_3 = (1, 48)
v0 = torch.randn(size=sym_0, dtype=None, layout=sym_1, device=sym_2)
v1 = torch.rand(size=sym_3, device=sym_2)
torch.ops.aten._adaptive_avg_pool2d_backward(grad_output=v0, self=v1)
```
output:
```
2.7.0.dev20250116+cu124
fish: Job 3, 'python3 test.py' terminated by signal SIGSEGV (Address boundary error)
```
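For comparison, a well-formed call gives `grad_output` the same rank as `self` (the forward input) with matching batch/channel dimensions and spatial dims equal to the forward's output size; a minimal sketch reusing the repro's input shape:
```python
import torch

x = torch.randn(1, 3, 8, 3)                               # forward input (N, C, H, W)
out = torch.ops.aten._adaptive_avg_pool2d(x, [4, 2])      # forward with output_size=(4, 2)
grad_out = torch.randn_like(out)                          # (1, 3, 4, 2): same rank as x
grad_in = torch.ops.aten._adaptive_avg_pool2d_backward(grad_out, x)
print(grad_in.shape)                                      # torch.Size([1, 3, 8, 3])
```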
### Versions
```
Collecting environment information...
PyTorch version: 2.7.0.dev20250116+cu124
Is debug build: False
CUDA used to build PyTorch: 12.4
ROCM used to build PyTorch: N/A
OS: Manjaro Linux (x86_64)
GCC version: (GCC) 14.2.1 20240805
Clang version: 18.1.8
CMake version: version 3.30.2
Libc version: glibc-2.40
Python version: 3.11.11 | packaged by conda-forge | (main, Dec 5 2024, 14:17:24) [GCC 13.3.0] (64-bit runtime)
Python platform: Linux-6.6.47-1-MANJARO-x86_64-with-glibc2.40
Is CUDA available: True
CUDA runtime version: Could not collect
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration: GPU 0: NVIDIA GeForce RTX 4070
Nvidia driver version: 550.107.02
cuDNN version: Probably one of the following:
/usr/lib/libcudnn.so.9.2.1
/usr/lib/libcudnn_adv.so.9.2.1
/usr/lib/libcudnn_cnn.so.9.2.1
/usr/lib/libcudnn_engines_precompiled.so.9.2.1
/usr/lib/libcudnn_engines_runtime_compiled.so.9.2.1
/usr/lib/libcudnn_graph.so.9.2.1
/usr/lib/libcudnn_heuristic.so.9.2.1
/usr/lib/libcudnn_ops.so.9.2.1
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture:                         x86_64
CPU op-mode(s):                       32-bit, 64-bit
Address sizes:                        39 bits physical, 48 bits virtual
Byte Order:                           Little Endian
CPU(s):                               16
On-line CPU(s) list:                  0-15
Vendor ID:                            GenuineIntel
Model name:                           13th Gen Intel(R) Core(TM) i5-13400F
CPU family:                           6
Model:                                191
Thread(s) per core:                   2
Core(s) per socket:                   10
Socket(s):                            1
Stepping:                             2
CPU(s) scaling MHz:                   25%
CPU max MHz:                          4600.0000
CPU min MHz:                          800.0000
BogoMIPS:                             4993.00
Flags:                                fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc art arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc cpuid aperfmperf tsc_known_freq pni pclmulqdq dtes64 monitor ds_cpl vmx est tm2 ssse3 sdbg fma cx16 xtpr pdcm pcid sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm 3dnowprefetch cpuid_fault epb ssbd ibrs ibpb stibp ibrs_enhanced tpr_shadow flexpriority ept vpid ept_ad fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid rdseed adx smap clflushopt clwb intel_pt sha_ni xsaveopt xsavec xgetbv1 xsaves split_lock_detect user_shstk avx_vnni dtherm ida arat pln pts hwp hwp_notify hwp_act_window hwp_epp hwp_pkg_req hfi vnmi umip pku ospke waitpkg gfni vaes vpclmulqdq rdpid movdiri movdir64b fsrm md_clear serialize arch_lbr ibt flush_l1d arch_capabilities
Virtualization:                       VT-x
L1d cache:                            416 KiB (10 instances)
L1i cache:                            448 KiB (10 instances)
L2 cache:                            9.5 MiB (7 instances)
L3 cache:                             20 MiB (1 instance)
NUMA node(s):                         1
NUMA node0 CPU(s):                    0-15
Vulnerability Gather data sampling: Not affected
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Not affected
Vulnerability Reg file data sampling: Mitigation; Clear Register File
Vulnerability Retbleed: Not affected
Vulnerability Spec rstack overflow: Not affected
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Enhanced / Automatic IBRS; IBPB conditional; RSB filling; PBRSB-eIBRS SW sequence; BHI BHI_DIS_S
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Versions of relevant libraries:
[pip3] numpy==2.1.2
[pip3] nvidia-cublas-cu12==12.4.5.8
[pip3] nvidia-cuda-cupti-cu12==12.4.127
[pip3] nvidia-cuda-nvrtc-cu12==12.4.127
[pip3] nvidia-cuda-runtime-cu12==12.4.127
[pip3] nvidia-cudnn-cu12==9.1.0.70
[pip3] nvidia-cufft-cu12==11.2.1.3
[pip3] nvidia-curand-cu12==10.3.5.147
[pip3] nvidia-cusolver-cu12==11.6.1.9
[pip3] nvidia-cusparse-cu12==12.3.1.170
[pip3] nvidia-cusparselt-cu12==0.6.2
[pip3] nvidia-nccl-cu12==2.21.5
[pip3] nvidia-nvjitlink-cu12==12.4.127
[pip3] nvidia-nvtx-cu12==12.4.127
[pip3] pytorch-triton==3.2.0+git0d4682f0
[pip3] torch==2.7.0.dev20250116+cu124
[pip3] torchaudio==2.6.0.dev20250116+cu124
[pip3] torchvision==0.22.0.dev20250116+cu124
[conda] numpy 2.1.2 pypi_0 pypi
[conda] nvidia-cublas-cu12 12.4.5.8 pypi_0 pypi
[conda] nvidia-cuda-cupti-cu12 12.4.127 pypi_0 pypi
[conda] nvidia-cuda-nvrtc-cu12 12.4.127 pypi_0 pypi
[conda] nvidia-cuda-runtime-cu12 12.4.127 pypi_0 pypi
[conda] nvidia-cudnn-cu12 9.1.0.70 pypi_0 pypi
[conda] nvidia-cufft-cu12 11.2.1.3 pypi_0 pypi
[conda] nvidia-curand-cu12 10.3.5.147 pypi_0 pypi
[conda] nvidia-cusolver-cu12 11.6.1.9 pypi_0 pypi
[conda] nvidia-cusparse-cu12 12.3.1.170 pypi_0 pypi
[conda] nvidia-cusparselt-cu12 0.6.2 pypi_0 pypi
[conda] nvidia-nccl-cu12 2.21.5 pypi_0 pypi
[conda] nvidia-nvjitlink-cu12 12.4.127 pypi_0 pypi
[conda] nvidia-nvtx-cu12 12.4.127 pypi_0 pypi
[conda] pytorch-triton 3.2.0+git0d4682f0 pypi_0 pypi
[conda] torch 2.7.0.dev20250116+cu124 pypi_0 pypi
[conda] torchaudio 2.6.0.dev20250116+cu124 pypi_0 pypi
[conda] torchvision 0.22.0.dev20250116+cu124 pypi_0 pypi
```
cc @malfet
| true
|
2,795,409,315
|
DISABLED test_sparse_add_cuda_complex64 (__main__.TestSparseCSRCUDA)
|
pytorch-bot[bot]
|
closed
|
[
"module: sparse",
"module: rocm",
"triaged",
"module: flaky-tests",
"skipped"
] | 4
|
NONE
|
Platforms: rocm
This test was disabled because it is failing in CI. See [recent examples](https://hud.pytorch.org/flakytest?name=test_sparse_add_cuda_complex64&suite=TestSparseCSRCUDA&limit=100) and the most recent trunk [workflow logs](https://github.com/pytorch/pytorch/runs/35768157832).
Over the past 3 hours, it has been determined flaky in 4 workflow(s) with 8 failures and 4 successes.
**Debugging instructions (after clicking on the recent samples link):**
DO NOT ASSUME THINGS ARE OKAY IF THE CI IS GREEN. We now shield flaky tests from developers so CI will thus be green but it will be harder to parse the logs.
To find relevant log snippets:
1. Click on the workflow logs linked above
2. Click on the Test step of the job so that it is expanded. Otherwise, the grepping will not work.
3. Grep for `test_sparse_add_cuda_complex64`
4. There should be several instances run (as flaky tests are rerun in CI) from which you can study the logs.
<details><summary>Sample error message</summary>
```
Traceback (most recent call last):
File "/var/lib/jenkins/pytorch/test/test_sparse_csr.py", line 2338, in test_sparse_add
run_test(m, n, index_dtype)
File "/var/lib/jenkins/pytorch/test/test_sparse_csr.py", line 2330, in run_test
self.assertEqual(actual, expected)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 4036, in assertEqual
raise error_metas.pop()[0].to_error( # type: ignore[index]
AssertionError: Tensor-likes are not close!
Mismatched elements: 3 / 15 (20.0%)
Greatest absolute difference: 1028.479736328125 at index (4, 0) (up to 1e-05 allowed)
Greatest relative difference: inf at index (4, 1) (up to 1.3e-06 allowed)
To execute this test, run the following from the base repo dir:
PYTORCH_TEST_WITH_ROCM=1 python test/test_sparse_csr.py TestSparseCSRCUDA.test_sparse_add_cuda_complex64
This message can be suppressed by setting PYTORCH_PRINT_REPRO_ON_FAILURE=0
```
</details>
Test file path: `test_sparse_csr.py`
cc @alexsamardzic @nikitaved @pearu @cpuhrsch @amjames @bhosmer @jcaip @jeffdaily @sunway513 @jithunnair-amd @pruthvistony @ROCmSupport @dllehr-amd @jataylo @hongxiayang @naromero77amd @clee2000 @wdvr
| true
|
2,795,409,231
|
DISABLED test_autograd_in_attr (__main__.TestPythonDispatch)
|
pytorch-bot[bot]
|
closed
|
[
"triaged",
"module: flaky-tests",
"skipped",
"module: __torch_dispatch__"
] | 3
|
NONE
|
Platforms: asan, linux, rocm, slow, win, windows, mac, macos
This test was disabled because it is failing in CI. See [recent examples](https://hud.pytorch.org/flakytest?name=test_autograd_in_attr&suite=TestPythonDispatch&limit=100) and the most recent trunk [workflow logs](https://github.com/pytorch/pytorch/runs/35759714478).
Over the past 3 hours, it has been determined flaky in 26 workflow(s) with 52 failures and 26 successes.
**Debugging instructions (after clicking on the recent samples link):**
DO NOT ASSUME THINGS ARE OKAY IF THE CI IS GREEN. We now shield flaky tests from developers so CI will thus be green but it will be harder to parse the logs.
To find relevant log snippets:
1. Click on the workflow logs linked above
2. Click on the Test step of the job so that it is expanded. Otherwise, the grepping will not work.
3. Grep for `test_autograd_in_attr`
4. There should be several instances run (as flaky tests are rerun in CI) from which you can study the logs.
<details><summary>Sample error message</summary>
Truncated for length
```
^
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_prims/__init__.py", line 302, in _backend_select_impl
return _prim_impl(*args, **kwargs)
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_prims/__init__.py", line 287, in _prim_impl
return impl_aten(*args, **kwargs)
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_prims/__init__.py", line 2755, in _uniform_aten
a.uniform_(low, high, generator=generator)
~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_decomp/decompositions.py", line 2794, in uniform_
return self.copy_(uniform(self, low, high, generator))
~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_prims_common/wrappers.py", line 291, in _fn
result = fn(*args, **kwargs)
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_decomp/decompositions.py", line 2782, in uniform
return prims._uniform_helper(
~~~~~~~~~~~~~~~~~~~~~^
x.shape,
^^^^^^^^
...<4 lines>...
generator=generator,
^^^^^^^^^^^^^^^^^^^^
)
^
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_ops.py", line 758, in __call__
return self._op(*args, **kwargs)
~~~~~~~~^^^^^^^^^^^^^^^^^
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_prims/__init__.py", line 302, in _backend_select_impl
return _prim_impl(*args, **kwargs)
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_prims/__init__.py", line 287, in _prim_impl
return impl_aten(*args, **kwargs)
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_prims/__init__.py", line 2755, in _uniform_aten
a.uniform_(low, high, generator=generator)
~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_decomp/decompositions.py", line 2794, in uniform_
return self.copy_(uniform(self, low, high, generator))
~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_prims_common/wrappers.py", line 291, in _fn
result = fn(*args, **kwargs)
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_decomp/decompositions.py", line 2782, in uniform
return prims._uniform_helper(
~~~~~~~~~~~~~~~~~~~~~^
x.shape,
^^^^^^^^
...<4 lines>...
generator=generator,
^^^^^^^^^^^^^^^^^^^^
)
^
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_ops.py", line 758, in __call__
return self._op(*args, **kwargs)
~~~~~~~~^^^^^^^^^^^^^^^^^
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_prims/__init__.py", line 302, in _backend_select_impl
return _prim_impl(*args, **kwargs)
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_prims/__init__.py", line 287, in _prim_impl
return impl_aten(*args, **kwargs)
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_prims/__init__.py", line 2755, in _uniform_aten
a.uniform_(low, high, generator=generator)
~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_decomp/decompositions.py", line 2794, in uniform_
return self.copy_(uniform(self, low, high, generator))
~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_prims_common/wrappers.py", line 291, in _fn
result = fn(*args, **kwargs)
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_decomp/decompositions.py", line 2782, in uniform
return prims._uniform_helper(
~~~~~~~~~~~~~~~~~~~~~^
x.shape,
^^^^^^^^
...<4 lines>...
generator=generator,
^^^^^^^^^^^^^^^^^^^^
)
^
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_ops.py", line 758, in __call__
return self._op(*args, **kwargs)
~~~~~~~~^^^^^^^^^^^^^^^^^
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_prims/__init__.py", line 302, in _backend_select_impl
return _prim_impl(*args, **kwargs)
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_prims/__init__.py", line 287, in _prim_impl
return impl_aten(*args, **kwargs)
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_prims/__init__.py", line 2755, in _uniform_aten
a.uniform_(low, high, generator=generator)
~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_decomp/decompositions.py", line 2794, in uniform_
return self.copy_(uniform(self, low, high, generator))
~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_prims_common/wrappers.py", line 291, in _fn
result = fn(*args, **kwargs)
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_decomp/decompositions.py", line 2782, in uniform
return prims._uniform_helper(
~~~~~~~~~~~~~~~~~~~~~^
x.shape,
^^^^^^^^
...<4 lines>...
generator=generator,
^^^^^^^^^^^^^^^^^^^^
)
^
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_ops.py", line 758, in __call__
return self._op(*args, **kwargs)
~~~~~~~~^^^^^^^^^^^^^^^^^
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_prims/__init__.py", line 302, in _backend_select_impl
return _prim_impl(*args, **kwargs)
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_prims/__init__.py", line 287, in _prim_impl
return impl_aten(*args, **kwargs)
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_prims/__init__.py", line 2755, in _uniform_aten
a.uniform_(low, high, generator=generator)
~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_decomp/decompositions.py", line 2794, in uniform_
return self.copy_(uniform(self, low, high, generator))
~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_prims_common/wrappers.py", line 291, in _fn
result = fn(*args, **kwargs)
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_decomp/decompositions.py", line 2782, in uniform
return prims._uniform_helper(
~~~~~~~~~~~~~~~~~~~~~^
x.shape,
^^^^^^^^
...<4 lines>...
generator=generator,
^^^^^^^^^^^^^^^^^^^^
)
^
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_ops.py", line 758, in __call__
return self._op(*args, **kwargs)
~~~~~~~~^^^^^^^^^^^^^^^^^
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_prims/__init__.py", line 302, in _backend_select_impl
return _prim_impl(*args, **kwargs)
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_prims/__init__.py", line 287, in _prim_impl
return impl_aten(*args, **kwargs)
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_prims/__init__.py", line 2755, in _uniform_aten
a.uniform_(low, high, generator=generator)
~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_decomp/decompositions.py", line 2794, in uniform_
return self.copy_(uniform(self, low, high, generator))
~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_prims_common/wrappers.py", line 291, in _fn
result = fn(*args, **kwargs)
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_decomp/decompositions.py", line 2782, in uniform
return prims._uniform_helper(
~~~~~~~~~~~~~~~~~~~~~^
x.shape,
^^^^^^^^
...<4 lines>...
generator=generator,
^^^^^^^^^^^^^^^^^^^^
)
^
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_ops.py", line 758, in __call__
return self._op(*args, **kwargs)
~~~~~~~~^^^^^^^^^^^^^^^^^
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_prims/__init__.py", line 302, in _backend_select_impl
return _prim_impl(*args, **kwargs)
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_prims/__init__.py", line 287, in _prim_impl
return impl_aten(*args, **kwargs)
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_prims/__init__.py", line 2755, in _uniform_aten
a.uniform_(low, high, generator=generator)
~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_decomp/decompositions.py", line 2794, in uniform_
return self.copy_(uniform(self, low, high, generator))
~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_prims_common/wrappers.py", line 291, in _fn
result = fn(*args, **kwargs)
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_decomp/decompositions.py", line 2782, in uniform
return prims._uniform_helper(
~~~~~~~~~~~~~~~~~~~~~^
x.shape,
^^^^^^^^
...<4 lines>...
generator=generator,
^^^^^^^^^^^^^^^^^^^^
)
^
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_ops.py", line 758, in __call__
return self._op(*args, **kwargs)
~~~~~~~~^^^^^^^^^^^^^^^^^
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_prims/__init__.py", line 302, in _backend_select_impl
return _prim_impl(*args, **kwargs)
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_prims/__init__.py", line 287, in _prim_impl
return impl_aten(*args, **kwargs)
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_prims/__init__.py", line 2755, in _uniform_aten
a.uniform_(low, high, generator=generator)
~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_decomp/decompositions.py", line 2794, in uniform_
return self.copy_(uniform(self, low, high, generator))
~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_prims_common/wrappers.py", line 291, in _fn
result = fn(*args, **kwargs)
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_decomp/decompositions.py", line 2782, in uniform
return prims._uniform_helper(
~~~~~~~~~~~~~~~~~~~~~^
x.shape,
^^^^^^^^
...<4 lines>...
generator=generator,
^^^^^^^^^^^^^^^^^^^^
)
^
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_ops.py", line 758, in __call__
return self._op(*args, **kwargs)
~~~~~~~~^^^^^^^^^^^^^^^^^
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_prims/__init__.py", line 302, in _backend_select_impl
return _prim_impl(*args, **kwargs)
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_prims/__init__.py", line 287, in _prim_impl
return impl_aten(*args, **kwargs)
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_prims/__init__.py", line 2755, in _uniform_aten
a.uniform_(low, high, generator=generator)
~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_decomp/decompositions.py", line 2794, in uniform_
return self.copy_(uniform(self, low, high, generator))
~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_prims_common/wrappers.py", line 291, in _fn
result = fn(*args, **kwargs)
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_decomp/decompositions.py", line 2782, in uniform
return prims._uniform_helper(
~~~~~~~~~~~~~~~~~~~~~^
x.shape,
^^^^^^^^
...<4 lines>...
generator=generator,
^^^^^^^^^^^^^^^^^^^^
)
^
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_ops.py", line 758, in __call__
return self._op(*args, **kwargs)
~~~~~~~~^^^^^^^^^^^^^^^^^
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_prims/__init__.py", line 302, in _backend_select_impl
return _prim_impl(*args, **kwargs)
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_prims/__init__.py", line 287, in _prim_impl
return impl_aten(*args, **kwargs)
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_prims/__init__.py", line 2755, in _uniform_aten
a.uniform_(low, high, generator=generator)
~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_decomp/decompositions.py", line 2794, in uniform_
return self.copy_(uniform(self, low, high, generator))
~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_prims_common/wrappers.py", line 291, in _fn
result = fn(*args, **kwargs)
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_decomp/decompositions.py", line 2782, in uniform
return prims._uniform_helper(
~~~~~~~~~~~~~~~~~~~~~^
x.shape,
^^^^^^^^
...<4 lines>...
generator=generator,
^^^^^^^^^^^^^^^^^^^^
)
^
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_ops.py", line 758, in __call__
return self._op(*args, **kwargs)
~~~~~~~~^^^^^^^^^^^^^^^^^
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_prims/__init__.py", line 302, in _backend_select_impl
return _prim_impl(*args, **kwargs)
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_prims/__init__.py", line 287, in _prim_impl
return impl_aten(*args, **kwargs)
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_prims/__init__.py", line 2755, in _uniform_aten
a.uniform_(low, high, generator=generator)
~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_decomp/decompositions.py", line 2794, in uniform_
return self.copy_(uniform(self, low, high, generator))
~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_prims_common/wrappers.py", line 291, in _fn
result = fn(*args, **kwargs)
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_decomp/decompositions.py", line 2782, in uniform
return prims._uniform_helper(
~~~~~~~~~~~~~~~~~~~~~^
x.shape,
^^^^^^^^
...<4 lines>...
generator=generator,
^^^^^^^^^^^^^^^^^^^^
)
^
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_ops.py", line 758, in __call__
return self._op(*args, **kwargs)
~~~~~~~~^^^^^^^^^^^^^^^^^
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_prims/__init__.py", line 302, in _backend_select_impl
return _prim_impl(*args, **kwargs)
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_prims/__init__.py", line 287, in _prim_impl
return impl_aten(*args, **kwargs)
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_prims/__init__.py", line 2755, in _uniform_aten
a.uniform_(low, high, generator=generator)
~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_decomp/decompositions.py", line 2794, in uniform_
return self.copy_(uniform(self, low, high, generator))
~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_prims_common/wrappers.py", line 291, in _fn
result = fn(*args, **kwargs)
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_decomp/decompositions.py", line 2782, in uniform
return prims._uniform_helper(
~~~~~~~~~~~~~~~~~~~~~^
x.shape,
^^^^^^^^
...<4 lines>...
generator=generator,
^^^^^^^^^^^^^^^^^^^^
)
^
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_ops.py", line 758, in __call__
return self._op(*args, **kwargs)
~~~~~~~~^^^^^^^^^^^^^^^^^
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_prims/__init__.py", line 302, in _backend_select_impl
return _prim_impl(*args, **kwargs)
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_prims/__init__.py", line 287, in _prim_impl
return impl_aten(*args, **kwargs)
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_prims/__init__.py", line 2755, in _uniform_aten
a.uniform_(low, high, generator=generator)
~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_decomp/decompositions.py", line 2794, in uniform_
return self.copy_(uniform(self, low, high, generator))
~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_prims_common/wrappers.py", line 291, in _fn
result = fn(*args, **kwargs)
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_decomp/decompositions.py", line 2782, in uniform
return prims._uniform_helper(
~~~~~~~~~~~~~~~~~~~~~^
x.shape,
^^^^^^^^
...<4 lines>...
generator=generator,
^^^^^^^^^^^^^^^^^^^^
)
^
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_ops.py", line 758, in __call__
return self._op(*args, **kwargs)
~~~~~~~~^^^^^^^^^^^^^^^^^
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_prims/__init__.py", line 302, in _backend_select_impl
return _prim_impl(*args, **kwargs)
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_prims/__init__.py", line 287, in _prim_impl
return impl_aten(*args, **kwargs)
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_prims/__init__.py", line 2755, in _uniform_aten
a.uniform_(low, high, generator=generator)
~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_decomp/decompositions.py", line 2794, in uniform_
return self.copy_(uniform(self, low, high, generator))
~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_prims_common/wrappers.py", line 291, in _fn
result = fn(*args, **kwargs)
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_decomp/decompositions.py", line 2782, in uniform
return prims._uniform_helper(
~~~~~~~~~~~~~~~~~~~~~^
x.shape,
^^^^^^^^
...<4 lines>...
generator=generator,
^^^^^^^^^^^^^^^^^^^^
)
^
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_ops.py", line 758, in __call__
return self._op(*args, **kwargs)
~~~~~~~~^^^^^^^^^^^^^^^^^
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_prims/__init__.py", line 302, in _backend_select_impl
return _prim_impl(*args, **kwargs)
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_prims/__init__.py", line 287, in _prim_impl
return impl_aten(*args, **kwargs)
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_prims/__init__.py", line 2755, in _uniform_aten
a.uniform_(low, high, generator=generator)
~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_decomp/decompositions.py", line 2794, in uniform_
return self.copy_(uniform(self, low, high, generator))
~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_prims_common/wrappers.py", line 291, in _fn
result = fn(*args, **kwargs)
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_decomp/decompositions.py", line 2782, in uniform
return prims._uniform_helper(
~~~~~~~~~~~~~~~~~~~~~^
x.shape,
^^^^^^^^
...<4 lines>...
generator=generator,
^^^^^^^^^^^^^^^^^^^^
)
^
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_ops.py", line 758, in __call__
return self._op(*args, **kwargs)
~~~~~~~~^^^^^^^^^^^^^^^^^
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_prims/__init__.py", line 302, in _backend_select_impl
return _prim_impl(*args, **kwargs)
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_prims/__init__.py", line 287, in _prim_impl
return impl_aten(*args, **kwargs)
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_prims/__init__.py", line 2755, in _uniform_aten
a.uniform_(low, high, generator=generator)
~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_decomp/decompositions.py", line 2794, in uniform_
return self.copy_(uniform(self, low, high, generator))
~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_prims_common/wrappers.py", line 291, in _fn
result = fn(*args, **kwargs)
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_decomp/decompositions.py", line 2782, in uniform
return prims._uniform_helper(
~~~~~~~~~~~~~~~~~~~~~^
x.shape,
^^^^^^^^
...<4 lines>...
generator=generator,
^^^^^^^^^^^^^^^^^^^^
)
^
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_ops.py", line 758, in __call__
return self._op(*args, **kwargs)
~~~~~~~~^^^^^^^^^^^^^^^^^
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_prims/__init__.py", line 302, in _backend_select_impl
return _prim_impl(*args, **kwargs)
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_prims/__init__.py", line 287, in _prim_impl
return impl_aten(*args, **kwargs)
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_prims/__init__.py", line 2755, in _uniform_aten
a.uniform_(low, high, generator=generator)
~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_decomp/decompositions.py", line 2794, in uniform_
return self.copy_(uniform(self, low, high, generator))
~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_prims_common/wrappers.py", line 291, in _fn
result = fn(*args, **kwargs)
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_decomp/decompositions.py", line 2782, in uniform
return prims._uniform_helper(
~~~~~~~~~~~~~~~~~~~~~^
x.shape,
^^^^^^^^
...<4 lines>...
generator=generator,
^^^^^^^^^^^^^^^^^^^^
)
^
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_ops.py", line 758, in __call__
return self._op(*args, **kwargs)
~~~~~~~~^^^^^^^^^^^^^^^^^
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_prims/__init__.py", line 302, in _backend_select_impl
return _prim_impl(*args, **kwargs)
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_prims/__init__.py", line 287, in _prim_impl
return impl_aten(*args, **kwargs)
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_prims/__init__.py", line 2755, in _uniform_aten
a.uniform_(low, high, generator=generator)
~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_decomp/decompositions.py", line 2794, in uniform_
return self.copy_(uniform(self, low, high, generator))
~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_prims_common/wrappers.py", line 291, in _fn
result = fn(*args, **kwargs)
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_decomp/decompositions.py", line 2782, in uniform
return prims._uniform_helper(
~~~~~~~~~~~~~~~~~~~~~^
x.shape,
^^^^^^^^
...<4 lines>...
generator=generator,
^^^^^^^^^^^^^^^^^^^^
)
^
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_ops.py", line 758, in __call__
return self._op(*args, **kwargs)
~~~~~~~~^^^^^^^^^^^^^^^^^
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_prims/__init__.py", line 302, in _backend_select_impl
return _prim_impl(*args, **kwargs)
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_prims/__init__.py", line 287, in _prim_impl
return impl_aten(*args, **kwargs)
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_prims/__init__.py", line 2755, in _uniform_aten
a.uniform_(low, high, generator=generator)
~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_decomp/decompositions.py", line 2794, in uniform_
return self.copy_(uniform(self, low, high, generator))
~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_prims_common/wrappers.py", line 291, in _fn
result = fn(*args, **kwargs)
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_decomp/decompositions.py", line 2782, in uniform
return prims._uniform_helper(
~~~~~~~~~~~~~~~~~~~~~^
x.shape,
^^^^^^^^
...<4 lines>...
generator=generator,
^^^^^^^^^^^^^^^^^^^^
)
^
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_ops.py", line 758, in __call__
return self._op(*args, **kwargs)
~~~~~~~~^^^^^^^^^^^^^^^^^
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_prims/__init__.py", line 302, in _backend_select_impl
return _prim_impl(*args, **kwargs)
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_prims/__init__.py", line 287, in _prim_impl
return impl_aten(*args, **kwargs)
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_prims/__init__.py", line 2755, in _uniform_aten
a.uniform_(low, high, generator=generator)
~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_decomp/decompositions.py", line 2794, in uniform_
return self.copy_(uniform(self, low, high, generator))
~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_prims_common/wrappers.py", line 291, in _fn
result = fn(*args, **kwargs)
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_decomp/decompositions.py", line 2782, in uniform
return prims._uniform_helper(
~~~~~~~~~~~~~~~~~~~~~^
x.shape,
^^^^^^^^
...<4 lines>...
generator=generator,
^^^^^^^^^^^^^^^^^^^^
)
^
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_ops.py", line 758, in __call__
return self._op(*args, **kwargs)
~~~~~~~~^^^^^^^^^^^^^^^^^
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_prims/__init__.py", line 302, in _backend_select_impl
return _prim_impl(*args, **kwargs)
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_prims/__init__.py", line 287, in _prim_impl
return impl_aten(*args, **kwargs)
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_prims/__init__.py", line 2755, in _uniform_aten
a.uniform_(low, high, generator=generator)
~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_decomp/decompositions.py", line 2794, in uniform_
return self.copy_(uniform(self, low, high, generator))
~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_prims_common/wrappers.py", line 291, in _fn
result = fn(*args, **kwargs)
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_decomp/decompositions.py", line 2782, in uniform
return prims._uniform_helper(
~~~~~~~~~~~~~~~~~~~~~^
x.shape,
^^^^^^^^
...<4 lines>...
generator=generator,
^^^^^^^^^^^^^^^^^^^^
)
^
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_ops.py", line 758, in __call__
return self._op(*args, **kwargs)
~~~~~~~~^^^^^^^^^^^^^^^^^
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_prims/__init__.py", line 302, in _backend_select_impl
return _prim_impl(*args, **kwargs)
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_prims/__init__.py", line 286, in _prim_impl
meta(*args, **kwargs)
~~~~^^^^^^^^^^^^^^^^^
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_prims/__init__.py", line 2741, in _uniform_meta
strides = utils.make_contiguous_strides_for(shape)
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_prims_common/__init__.py", line 1638, in make_contiguous_strides_for
validate_shape(shape)
~~~~~~~~~~~~~~^^^^^^^
RecursionError: maximum recursion depth exceeded
To execute this test, run the following from the base repo dir:
python test/test_python_dispatch.py TestPythonDispatch.test_autograd_in_attr
This message can be suppressed by setting PYTORCH_PRINT_REPRO_ON_FAILURE=0
```
</details>
Test file path: `test_python_dispatch.py`
cc @clee2000 @wdvr @Chillee @ezyang @zou3519 @albanD @samdow
| true
|
2,795,377,820
|
Fix issue with test/nn/test_convolution:TestConvolutionNNDeviceTypeCUDA.test_conv_large_batch_1_cuda
|
rec
|
closed
|
[
"open source",
"Merged",
"ciflow/trunk",
"topic: not user facing"
] | 7
|
COLLABORATOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #145067
| true
|
2,795,375,310
|
SIGSEGV error when passing a 0-sized tensor to `_local_scalar_dense`
|
WLFJ
|
closed
|
[
"module: crash",
"triaged",
"module: empty tensor",
"topic: fuzzer"
] | 4
|
NONE
|
### 🐛 Describe the bug
Passing a tensor with size `(0,)` to the `torch.ops.aten._local_scalar_dense` function results in a segmentation fault (SIGSEGV).
```python
import torch
print(torch.__version__)
input = torch.randn(size=(0,))
torch.ops.aten._local_scalar_dense(input)
```
output:
```
2.7.0.dev20250116+cu124
fish: Job 2, 'python3 test.py' terminated by signal SIGSEGV (Address boundary error)
```
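A minimal user-side guard, as a hedged workaround sketch (the `numel()` check and variable names here are illustrative and not part of any fix): only call the raw op when exactly one element is present.
```python
import torch

t = torch.randn(size=(0,))
# The raw op assumes a single element; guard on numel() before calling it.
if t.numel() == 1:
    val = torch.ops.aten._local_scalar_dense(t)
else:
    val = None  # nothing to extract from a zero-element tensor
print(val)
```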
### Versions
```
Collecting environment information...
PyTorch version: 2.7.0.dev20250116+cu124
Is debug build: False
CUDA used to build PyTorch: 12.4
ROCM used to build PyTorch: N/A
OS: Manjaro Linux (x86_64)
GCC version: (GCC) 14.2.1 20240805
Clang version: 18.1.8
CMake version: version 3.30.2
Libc version: glibc-2.40
Python version: 3.11.11 | packaged by conda-forge | (main, Dec 5 2024, 14:17:24) [GCC 13.3.0] (64-bit runtime)
Python platform: Linux-6.6.47-1-MANJARO-x86_64-with-glibc2.40
Is CUDA available: True
CUDA runtime version: Could not collect
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration: GPU 0: NVIDIA GeForce RTX 4070
Nvidia driver version: 550.107.02
cuDNN version: Probably one of the following:
/usr/lib/libcudnn.so.9.2.1
/usr/lib/libcudnn_adv.so.9.2.1
/usr/lib/libcudnn_cnn.so.9.2.1
/usr/lib/libcudnn_engines_precompiled.so.9.2.1
/usr/lib/libcudnn_engines_runtime_compiled.so.9.2.1
/usr/lib/libcudnn_graph.so.9.2.1
/usr/lib/libcudnn_heuristic.so.9.2.1
/usr/lib/libcudnn_ops.so.9.2.1
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 39 bits physical, 48 bits virtual
Byte Order: Little Endian
CPU(s): 16
On-line CPU(s) list: 0-15
Vendor ID: GenuineIntel
Model name: 13th Gen Intel(R) Core(TM) i5-13400F
CPU family: 6
Model: 191
Thread(s) per core: 2
Core(s) per socket: 10
Socket(s): 1
Stepping: 2
CPU(s) scaling MHz: 25%
CPU max MHz: 4600.0000
CPU min MHz: 800.0000
BogoMIPS: 4993.00
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc art arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc cpuid aperfmperf tsc_known_freq pni pclmulqdq dtes64 monitor ds_cpl vmx est tm2 ssse3 sdbg fma cx16 xtpr pdcm pcid sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm 3dnowprefetch cpuid_fault epb ssbd ibrs ibpb stibp ibrs_enhanced tpr_shadow flexpriority ept vpid ept_ad fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid rdseed adx smap clflushopt clwb intel_pt sha_ni xsaveopt xsavec xgetbv1 xsaves split_lock_detect user_shstk avx_vnni dtherm ida arat pln pts hwp hwp_notify hwp_act_window hwp_epp hwp_pkg_req hfi vnmi umip pku ospke waitpkg gfni vaes vpclmulqdq rdpid movdiri movdir64b fsrm md_clear serialize arch_lbr ibt flush_l1d arch_capabilities
Virtualization: VT-x
L1d cache: 416 KiB (10 instances)
L1i cache: 448 KiB (10 instances)
L2 cache: 9.5 MiB (7 instances)
L3 cache: 20 MiB (1 instance)
NUMA node(s): 1
NUMA node0 CPU(s): 0-15
Vulnerability Gather data sampling: Not affected
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Not affected
Vulnerability Reg file data sampling: Mitigation; Clear Register File
Vulnerability Retbleed: Not affected
Vulnerability Spec rstack overflow: Not affected
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Enhanced / Automatic IBRS; IBPB conditional; RSB filling; PBRSB-eIBRS SW sequence; BHI BHI_DIS_S
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Versions of relevant libraries:
[pip3] numpy==2.1.2
[pip3] nvidia-cublas-cu12==12.4.5.8
[pip3] nvidia-cuda-cupti-cu12==12.4.127
[pip3] nvidia-cuda-nvrtc-cu12==12.4.127
[pip3] nvidia-cuda-runtime-cu12==12.4.127
[pip3] nvidia-cudnn-cu12==9.1.0.70
[pip3] nvidia-cufft-cu12==11.2.1.3
[pip3] nvidia-curand-cu12==10.3.5.147
[pip3] nvidia-cusolver-cu12==11.6.1.9
[pip3] nvidia-cusparse-cu12==12.3.1.170
[pip3] nvidia-cusparselt-cu12==0.6.2
[pip3] nvidia-nccl-cu12==2.21.5
[pip3] nvidia-nvjitlink-cu12==12.4.127
[pip3] nvidia-nvtx-cu12==12.4.127
[pip3] pytorch-triton==3.2.0+git0d4682f0
[pip3] torch==2.7.0.dev20250116+cu124
[pip3] torchaudio==2.6.0.dev20250116+cu124
[pip3] torchvision==0.22.0.dev20250116+cu124
[conda] numpy 2.1.2 pypi_0 pypi
[conda] nvidia-cublas-cu12 12.4.5.8 pypi_0 pypi
[conda] nvidia-cuda-cupti-cu12 12.4.127 pypi_0 pypi
[conda] nvidia-cuda-nvrtc-cu12 12.4.127 pypi_0 pypi
[conda] nvidia-cuda-runtime-cu12 12.4.127 pypi_0 pypi
[conda] nvidia-cudnn-cu12 9.1.0.70 pypi_0 pypi
[conda] nvidia-cufft-cu12 11.2.1.3 pypi_0 pypi
[conda] nvidia-curand-cu12 10.3.5.147 pypi_0 pypi
[conda] nvidia-cusolver-cu12 11.6.1.9 pypi_0 pypi
[conda] nvidia-cusparse-cu12 12.3.1.170 pypi_0 pypi
[conda] nvidia-cusparselt-cu12 0.6.2 pypi_0 pypi
[conda] nvidia-nccl-cu12 2.21.5 pypi_0 pypi
[conda] nvidia-nvjitlink-cu12 12.4.127 pypi_0 pypi
[conda] nvidia-nvtx-cu12 12.4.127 pypi_0 pypi
[conda] pytorch-triton 3.2.0+git0d4682f0 pypi_0 pypi
[conda] torch 2.7.0.dev20250116+cu124 pypi_0 pypi
[conda] torchaudio 2.6.0.dev20250116+cu124 pypi_0 pypi
[conda] torchvision 0.22.0.dev20250116+cu124 pypi_0 pypi
```
| true
|
2,795,355,233
|
SIGFPE error when passing very large kernel_size to `avg_pool1d`
|
WLFJ
|
open
|
[
"module: crash",
"triaged",
"topic: fuzzer"
] | 0
|
NONE
|
### 🐛 Describe the bug
Passing a very large value for the kernel_size parameter to the `torch.ops.aten.avg_pool1d` function results in a SIGFPE error.
```python
import torch
print(torch.__version__)
sym_0 = (0, 1)
sym_1 = torch.double
sym_2 = torch.strided
sym_3 = (9223372036854775807,)
sym_4 = (-1,)
sym_5 = (0,)
sym_6 = True
sym_7 = False
var_393 = torch.rand(sym_0, dtype=sym_1, layout=sym_2, device=None, pin_memory=None)
var_773 = torch.ops.aten.avg_pool1d(var_393, kernel_size=sym_3, stride=sym_4, padding=sym_5, ceil_mode=sym_6, count_include_pad=sym_7)
```
output:
```
2.7.0.dev20250116+cu124
fish: Job 2, 'python3 test.py' terminated by signal SIGFPE (Floating point exception)
```
### Versions
```
Collecting environment information...
PyTorch version: 2.7.0.dev20250116+cu124
Is debug build: False
CUDA used to build PyTorch: 12.4
ROCM used to build PyTorch: N/A
OS: Manjaro Linux (x86_64)
GCC version: (GCC) 14.2.1 20240805
Clang version: 18.1.8
CMake version: version 3.30.2
Libc version: glibc-2.40
Python version: 3.11.11 | packaged by conda-forge | (main, Dec 5 2024, 14:17:24) [GCC 13.3.0] (64-bit runtime)
Python platform: Linux-6.6.47-1-MANJARO-x86_64-with-glibc2.40
Is CUDA available: True
CUDA runtime version: Could not collect
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration: GPU 0: NVIDIA GeForce RTX 4070
Nvidia driver version: 550.107.02
cuDNN version: Probably one of the following:
/usr/lib/libcudnn.so.9.2.1
/usr/lib/libcudnn_adv.so.9.2.1
/usr/lib/libcudnn_cnn.so.9.2.1
/usr/lib/libcudnn_engines_precompiled.so.9.2.1
/usr/lib/libcudnn_engines_runtime_compiled.so.9.2.1
/usr/lib/libcudnn_graph.so.9.2.1
/usr/lib/libcudnn_heuristic.so.9.2.1
/usr/lib/libcudnn_ops.so.9.2.1
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 39 bits physical, 48 bits virtual
Byte Order: Little Endian
CPU(s): 16
On-line CPU(s) list: 0-15
Vendor ID: GenuineIntel
Model name: 13th Gen Intel(R) Core(TM) i5-13400F
CPU family: 6
Model: 191
Thread(s) per core: 2
Core(s) per socket: 10
Socket(s): 1
Stepping: 2
CPU(s) scaling MHz: 25%
CPU max MHz: 4600.0000
CPU min MHz: 800.0000
BogoMIPS: 4993.00
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc art arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc cpuid aperfmperf tsc_known_freq pni pclmulqdq dtes64 monitor ds_cpl vmx est tm2 ssse3 sdbg fma cx16 xtpr pdcm pcid sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm 3dnowprefetch cpuid_fault epb ssbd ibrs ibpb stibp ibrs_enhanced tpr_shadow flexpriority ept vpid ept_ad fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid rdseed adx smap clflushopt clwb intel_pt sha_ni xsaveopt xsavec xgetbv1 xsaves split_lock_detect user_shstk avx_vnni dtherm ida arat pln pts hwp hwp_notify hwp_act_window hwp_epp hwp_pkg_req hfi vnmi umip pku ospke waitpkg gfni vaes vpclmulqdq rdpid movdiri movdir64b fsrm md_clear serialize arch_lbr ibt flush_l1d arch_capabilities
Virtualization: VT-x
L1d cache: 416 KiB (10 instances)
L1i cache: 448 KiB (10 instances)
L2 cache: 9.5 MiB (7 instances)
L3 cache: 20 MiB (1 instance)
NUMA node(s): 1
NUMA node0 CPU(s): 0-15
Vulnerability Gather data sampling: Not affected
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Not affected
Vulnerability Reg file data sampling: Mitigation; Clear Register File
Vulnerability Retbleed: Not affected
Vulnerability Spec rstack overflow: Not affected
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Enhanced / Automatic IBRS; IBPB conditional; RSB filling; PBRSB-eIBRS SW sequence; BHI BHI_DIS_S
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Versions of relevant libraries:
[pip3] numpy==2.1.2
[pip3] nvidia-cublas-cu12==12.4.5.8
[pip3] nvidia-cuda-cupti-cu12==12.4.127
[pip3] nvidia-cuda-nvrtc-cu12==12.4.127
[pip3] nvidia-cuda-runtime-cu12==12.4.127
[pip3] nvidia-cudnn-cu12==9.1.0.70
[pip3] nvidia-cufft-cu12==11.2.1.3
[pip3] nvidia-curand-cu12==10.3.5.147
[pip3] nvidia-cusolver-cu12==11.6.1.9
[pip3] nvidia-cusparse-cu12==12.3.1.170
[pip3] nvidia-cusparselt-cu12==0.6.2
[pip3] nvidia-nccl-cu12==2.21.5
[pip3] nvidia-nvjitlink-cu12==12.4.127
[pip3] nvidia-nvtx-cu12==12.4.127
[pip3] pytorch-triton==3.2.0+git0d4682f0
[pip3] torch==2.7.0.dev20250116+cu124
[pip3] torchaudio==2.6.0.dev20250116+cu124
[pip3] torchvision==0.22.0.dev20250116+cu124
[conda] numpy 2.1.2 pypi_0 pypi
[conda] nvidia-cublas-cu12 12.4.5.8 pypi_0 pypi
[conda] nvidia-cuda-cupti-cu12 12.4.127 pypi_0 pypi
[conda] nvidia-cuda-nvrtc-cu12 12.4.127 pypi_0 pypi
[conda] nvidia-cuda-runtime-cu12 12.4.127 pypi_0 pypi
[conda] nvidia-cudnn-cu12 9.1.0.70 pypi_0 pypi
[conda] nvidia-cufft-cu12 11.2.1.3 pypi_0 pypi
[conda] nvidia-curand-cu12 10.3.5.147 pypi_0 pypi
[conda] nvidia-cusolver-cu12 11.6.1.9 pypi_0 pypi
[conda] nvidia-cusparse-cu12 12.3.1.170 pypi_0 pypi
[conda] nvidia-cusparselt-cu12 0.6.2 pypi_0 pypi
[conda] nvidia-nccl-cu12 2.21.5 pypi_0 pypi
[conda] nvidia-nvjitlink-cu12 12.4.127 pypi_0 pypi
[conda] nvidia-nvtx-cu12 12.4.127 pypi_0 pypi
[conda] pytorch-triton 3.2.0+git0d4682f0 pypi_0 pypi
[conda] torch 2.7.0.dev20250116+cu124 pypi_0 pypi
[conda] torchaudio 2.6.0.dev20250116+cu124 pypi_0 pypi
[conda] torchvision 0.22.0.dev20250116+cu124 pypi_0 pypi
```
| true
|
2,795,259,369
|
`_pdist_forward` causes segmentation fault for 3D tensor with last dimension of size 0
|
WLFJ
|
open
|
[
"module: crash",
"module: error checking",
"triaged",
"actionable",
"module: empty tensor",
"topic: fuzzer"
] | 0
|
NONE
|
### 🐛 Describe the bug
When passing a 3D tensor where the last dimension has size 0 to the `torch.ops.aten._pdist_forward` function, a segmentation fault occurs.
```python
import torch
print(torch.__version__)
input = torch.rand((11, 15, 0))
torch.ops.aten._pdist_forward(input, p=2.0)
```
output:
```
2.7.0.dev20250116+cu124
fish: Job 2, 'python3 test.py' terminated by signal SIGSEGV (Address boundary error)
```
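For comparison, here is a small hedged sketch (assuming the public `torch.pdist` wrapper validates the input shape before dispatching to the raw op) showing the error-checked path:
```python
import torch

x = torch.rand((11, 15, 0))
try:
    torch.pdist(x, p=2.0)  # public wrapper: expected to reject non-2D inputs
except RuntimeError as e:
    print(e)
```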
### Versions
```
Collecting environment information...
PyTorch version: 2.7.0.dev20250116+cu124
Is debug build: False
CUDA used to build PyTorch: 12.4
ROCM used to build PyTorch: N/A
OS: Manjaro Linux (x86_64)
GCC version: (GCC) 14.2.1 20240805
Clang version: 18.1.8
CMake version: version 3.30.2
Libc version: glibc-2.40
Python version: 3.11.11 | packaged by conda-forge | (main, Dec 5 2024, 14:17:24) [GCC 13.3.0] (64-bit runtime)
Python platform: Linux-6.6.47-1-MANJARO-x86_64-with-glibc2.40
Is CUDA available: True
CUDA runtime version: Could not collect
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration: GPU 0: NVIDIA GeForce RTX 4070
Nvidia driver version: 550.107.02
cuDNN version: Probably one of the following:
/usr/lib/libcudnn.so.9.2.1
/usr/lib/libcudnn_adv.so.9.2.1
/usr/lib/libcudnn_cnn.so.9.2.1
/usr/lib/libcudnn_engines_precompiled.so.9.2.1
/usr/lib/libcudnn_engines_runtime_compiled.so.9.2.1
/usr/lib/libcudnn_graph.so.9.2.1
/usr/lib/libcudnn_heuristic.so.9.2.1
/usr/lib/libcudnn_ops.so.9.2.1
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 39 bits physical, 48 bits virtual
Byte Order: Little Endian
CPU(s): 16
On-line CPU(s) list: 0-15
Vendor ID: GenuineIntel
Model name: 13th Gen Intel(R) Core(TM) i5-13400F
CPU family: 6
Model: 191
Thread(s) per core: 2
Core(s) per socket: 10
Socket(s): 1
Stepping: 2
CPU(s) scaling MHz: 25%
CPU max MHz: 4600.0000
CPU min MHz: 800.0000
BogoMIPS: 4993.00
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc art arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc cpuid aperfmperf tsc_known_freq pni pclmulqdq dtes64 monitor ds_cpl vmx est tm2 ssse3 sdbg fma cx16 xtpr pdcm pcid sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm 3dnowprefetch cpuid_fault epb ssbd ibrs ibpb stibp ibrs_enhanced tpr_shadow flexpriority ept vpid ept_ad fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid rdseed adx smap clflushopt clwb intel_pt sha_ni xsaveopt xsavec xgetbv1 xsaves split_lock_detect user_shstk avx_vnni dtherm ida arat pln pts hwp hwp_notify hwp_act_window hwp_epp hwp_pkg_req hfi vnmi umip pku ospke waitpkg gfni vaes vpclmulqdq rdpid movdiri movdir64b fsrm md_clear serialize arch_lbr ibt flush_l1d arch_capabilities
Virtualization: VT-x
L1d cache: 416 KiB (10 instances)
L1i cache: 448 KiB (10 instances)
L2 cache: 9.5 MiB (7 instances)
L3 cache: 20 MiB (1 instance)
NUMA node(s): 1
NUMA node0 CPU(s): 0-15
Vulnerability Gather data sampling: Not affected
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Not affected
Vulnerability Reg file data sampling: Mitigation; Clear Register File
Vulnerability Retbleed: Not affected
Vulnerability Spec rstack overflow: Not affected
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Enhanced / Automatic IBRS; IBPB conditional; RSB filling; PBRSB-eIBRS SW sequence; BHI BHI_DIS_S
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Versions of relevant libraries:
[pip3] numpy==2.1.2
[pip3] nvidia-cublas-cu12==12.4.5.8
[pip3] nvidia-cuda-cupti-cu12==12.4.127
[pip3] nvidia-cuda-nvrtc-cu12==12.4.127
[pip3] nvidia-cuda-runtime-cu12==12.4.127
[pip3] nvidia-cudnn-cu12==9.1.0.70
[pip3] nvidia-cufft-cu12==11.2.1.3
[pip3] nvidia-curand-cu12==10.3.5.147
[pip3] nvidia-cusolver-cu12==11.6.1.9
[pip3] nvidia-cusparse-cu12==12.3.1.170
[pip3] nvidia-cusparselt-cu12==0.6.2
[pip3] nvidia-nccl-cu12==2.21.5
[pip3] nvidia-nvjitlink-cu12==12.4.127
[pip3] nvidia-nvtx-cu12==12.4.127
[pip3] pytorch-triton==3.2.0+git0d4682f0
[pip3] torch==2.7.0.dev20250116+cu124
[pip3] torchaudio==2.6.0.dev20250116+cu124
[pip3] torchvision==0.22.0.dev20250116+cu124
[conda] numpy 2.1.2 pypi_0 pypi
[conda] nvidia-cublas-cu12 12.4.5.8 pypi_0 pypi
[conda] nvidia-cuda-cupti-cu12 12.4.127 pypi_0 pypi
[conda] nvidia-cuda-nvrtc-cu12 12.4.127 pypi_0 pypi
[conda] nvidia-cuda-runtime-cu12 12.4.127 pypi_0 pypi
[conda] nvidia-cudnn-cu12 9.1.0.70 pypi_0 pypi
[conda] nvidia-cufft-cu12 11.2.1.3 pypi_0 pypi
[conda] nvidia-curand-cu12 10.3.5.147 pypi_0 pypi
[conda] nvidia-cusolver-cu12 11.6.1.9 pypi_0 pypi
[conda] nvidia-cusparse-cu12 12.3.1.170 pypi_0 pypi
[conda] nvidia-cusparselt-cu12 0.6.2 pypi_0 pypi
[conda] nvidia-nccl-cu12 2.21.5 pypi_0 pypi
[conda] nvidia-nvjitlink-cu12 12.4.127 pypi_0 pypi
[conda] nvidia-nvtx-cu12 12.4.127 pypi_0 pypi
[conda] pytorch-triton 3.2.0+git0d4682f0 pypi_0 pypi
[conda] torch 2.7.0.dev20250116+cu124 pypi_0 pypi
[conda] torchaudio 2.6.0.dev20250116+cu124 pypi_0 pypi
[conda] torchvision 0.22.0.dev20250116+cu124 pypi_0 pypi
```
cc @malfet
| true
|
2,795,199,578
|
`torch.ops.aten._local_scalar_dense` crashed on empty size tensor
|
WLFJ
|
closed
|
[
"module: crash",
"triaged",
"module: python frontend",
"module: edge cases"
] | 3
|
NONE
|
### 🐛 Describe the bug
`torch.ops.aten._local_scalar_dense` crashes on a zero-sized tensor. For example:
```python
import torch
print(torch.__version__)
input = torch.randn(0)
res = torch.ops.aten._local_scalar_dense(input)
print(res)
```
running result on latest nightly:
```
$ python3 reproduce.py
2.7.0.dev20250116+cu124
fish: Job 1, 'python3 reproduce.py' terminated by signal SIGSEGV (Address boundary error)
```
### Versions
```
Collecting environment information...
PyTorch version: 2.7.0.dev20250116+cu124
Is debug build: False
CUDA used to build PyTorch: 12.4
ROCM used to build PyTorch: N/A
OS: Manjaro Linux (x86_64)
GCC version: (GCC) 14.2.1 20240805
Clang version: 18.1.8
CMake version: version 3.30.2
Libc version: glibc-2.40
Python version: 3.11.11 | packaged by conda-forge | (main, Dec 5 2024, 14:17:24) [GCC 13.3.0] (64-bit runtime)
Python platform: Linux-6.6.47-1-MANJARO-x86_64-with-glibc2.40
Is CUDA available: True
CUDA runtime version: Could not collect
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration: GPU 0: NVIDIA GeForce RTX 4070
Nvidia driver version: 550.107.02
cuDNN version: Probably one of the following:
/usr/lib/libcudnn.so.9.2.1
/usr/lib/libcudnn_adv.so.9.2.1
/usr/lib/libcudnn_cnn.so.9.2.1
/usr/lib/libcudnn_engines_precompiled.so.9.2.1
/usr/lib/libcudnn_engines_runtime_compiled.so.9.2.1
/usr/lib/libcudnn_graph.so.9.2.1
/usr/lib/libcudnn_heuristic.so.9.2.1
/usr/lib/libcudnn_ops.so.9.2.1
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 39 bits physical, 48 bits virtual
Byte Order: Little Endian
CPU(s): 16
On-line CPU(s) list: 0-15
Vendor ID: GenuineIntel
Model name: 13th Gen Intel(R) Core(TM) i5-13400F
CPU family: 6
Model: 191
Thread(s) per core: 2
Core(s) per socket: 10
Socket(s): 1
Stepping: 2
CPU(s) scaling MHz: 25%
CPU max MHz: 4600.0000
CPU min MHz: 800.0000
BogoMIPS: 4993.00
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc art arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc cpuid aperfmperf tsc_known_freq pni pclmulqdq dtes64 monitor ds_cpl vmx est tm2 ssse3 sdbg fma cx16 xtpr pdcm pcid sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm 3dnowprefetch cpuid_fault epb ssbd ibrs ibpb stibp ibrs_enhanced tpr_shadow flexpriority ept vpid ept_ad fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid rdseed adx smap clflushopt clwb intel_pt sha_ni xsaveopt xsavec xgetbv1 xsaves split_lock_detect user_shstk avx_vnni dtherm ida arat pln pts hwp hwp_notify hwp_act_window hwp_epp hwp_pkg_req hfi vnmi umip pku ospke waitpkg gfni vaes vpclmulqdq rdpid movdiri movdir64b fsrm md_clear serialize arch_lbr ibt flush_l1d arch_capabilities
Virtualization: VT-x
L1d cache: 416 KiB (10 instances)
L1i cache: 448 KiB (10 instances)
L2 cache: 9.5 MiB (7 instances)
L3 cache: 20 MiB (1 instance)
NUMA node(s): 1
NUMA node0 CPU(s): 0-15
Vulnerability Gather data sampling: Not affected
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Not affected
Vulnerability Reg file data sampling: Mitigation; Clear Register File
Vulnerability Retbleed: Not affected
Vulnerability Spec rstack overflow: Not affected
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Enhanced / Automatic IBRS; IBPB conditional; RSB filling; PBRSB-eIBRS SW sequence; BHI BHI_DIS_S
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Versions of relevant libraries:
[pip3] numpy==2.1.2
[pip3] nvidia-cublas-cu12==12.4.5.8
[pip3] nvidia-cuda-cupti-cu12==12.4.127
[pip3] nvidia-cuda-nvrtc-cu12==12.4.127
[pip3] nvidia-cuda-runtime-cu12==12.4.127
[pip3] nvidia-cudnn-cu12==9.1.0.70
[pip3] nvidia-cufft-cu12==11.2.1.3
[pip3] nvidia-curand-cu12==10.3.5.147
[pip3] nvidia-cusolver-cu12==11.6.1.9
[pip3] nvidia-cusparse-cu12==12.3.1.170
[pip3] nvidia-cusparselt-cu12==0.6.2
[pip3] nvidia-nccl-cu12==2.21.5
[pip3] nvidia-nvjitlink-cu12==12.4.127
[pip3] nvidia-nvtx-cu12==12.4.127
[pip3] pytorch-triton==3.2.0+git0d4682f0
[pip3] torch==2.7.0.dev20250116+cu124
[pip3] torchaudio==2.6.0.dev20250116+cu124
[pip3] torchvision==0.22.0.dev20250116+cu124
[conda] numpy 2.1.2 pypi_0 pypi
[conda] nvidia-cublas-cu12 12.4.5.8 pypi_0 pypi
[conda] nvidia-cuda-cupti-cu12 12.4.127 pypi_0 pypi
[conda] nvidia-cuda-nvrtc-cu12 12.4.127 pypi_0 pypi
[conda] nvidia-cuda-runtime-cu12 12.4.127 pypi_0 pypi
[conda] nvidia-cudnn-cu12 9.1.0.70 pypi_0 pypi
[conda] nvidia-cufft-cu12 11.2.1.3 pypi_0 pypi
[conda] nvidia-curand-cu12 10.3.5.147 pypi_0 pypi
[conda] nvidia-cusolver-cu12 11.6.1.9 pypi_0 pypi
[conda] nvidia-cusparse-cu12 12.3.1.170 pypi_0 pypi
[conda] nvidia-cusparselt-cu12 0.6.2 pypi_0 pypi
[conda] nvidia-nccl-cu12 2.21.5 pypi_0 pypi
[conda] nvidia-nvjitlink-cu12 12.4.127 pypi_0 pypi
[conda] nvidia-nvtx-cu12 12.4.127 pypi_0 pypi
[conda] pytorch-triton 3.2.0+git0d4682f0 pypi_0 pypi
[conda] torch 2.7.0.dev20250116+cu124 pypi_0 pypi
[conda] torchaudio 2.6.0.dev20250116+cu124 pypi_0 pypi
[conda] torchvision 0.22.0.dev20250116+cu124 pypi_0 pypi
```
cc @albanD
| true
|
2,795,137,448
|
[Intel CPU] Fix issue #143489.
|
RanTao123
|
closed
|
[
"module: cpu",
"open source",
"Merged",
"ciflow/trunk",
"topic: not user facing"
] | 4
|
CONTRIBUTOR
|
Fix issue in https://github.com/pytorch/pytorch/issues/143489.
Dividing by `kernel_height * kernel_width` can cause a floating point exception (the product can overflow for very large kernel sizes), so we divide by each factor one at a time.
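Below is a toy, hedged illustration (plain Python modular arithmetic, not the actual C++ kernel) of why forming the product first is dangerous: with fixed-width 64-bit arithmetic, the product of two large kernel dimensions can wrap around to 0, and an integer division by that wrapped value traps as a floating point exception, while dividing by each factor in turn never forms the oversized product.
```python
# Illustrative only: emulate 64-bit wraparound with a modulus.
kh = 2**32
kw = 2**32
wrapped_product = (kh * kw) % 2**64
print(wrapped_product)        # 0 -> dividing by this in C++ would raise SIGFPE

numel = 2**40
print(numel // kh // kw)      # dividing by each factor separately stays well-defined
```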
cc @jgong5 @mingfeima @XiaobingSuper @sanchitintel @ashokei @jingxu10
| true
|
2,795,061,520
|
[Inductor] optimize welford reduction
|
jiayisunx
|
closed
|
[
"oncall: distributed",
"open source",
"Merged",
"ciflow/trunk",
"module: inductor",
"ciflow/inductor",
"release notes: inductor"
] | 5
|
COLLABORATOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #145061
Fix https://github.com/pytorch/pytorch/issues/141541.
Fix https://github.com/pytorch/pytorch/issues/142839.
Fix https://github.com/pytorch/pytorch/issues/143182.
**Summary:**
To fix the issue that the accuracy of the Welford reduction is not good enough, we follow the eager implementation and combine the Welford algorithm with cascade summation to improve numerical stability (a plain-Python sketch of the combined approach follows below). Specifically:
1. Use Welford algorithm to compute mean and variance.
2. Use cascade summation when computing sum over input for both mean and variance.
I tested the Inductor benchmark with this PR on CPU; no performance gains or regressions were seen.
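As a rough numerical sketch of the idea above (plain Python with an assumed chunk size; this is not the Inductor-generated code), each chunk's sum is accumulated separately (cascade summation) and the per-chunk statistics are then folded into a single Welford state:
```python
import torch

def chunked_mean_var(x, chunk=4096):
    count, mean, m2 = 0, 0.0, 0.0
    for start in range(0, x.numel(), chunk):
        c = x[start:start + chunk]
        n_b = c.numel()
        mean_b = float(c.sum()) / n_b            # per-chunk sum keeps the addends small
        m2_b = float(((c - mean_b) ** 2).sum())
        # Fold the chunk stats into the running Welford state (Chan et al. combine).
        delta = mean_b - mean
        tot = count + n_b
        mean += delta * n_b / tot
        m2 += m2_b + delta * delta * count * n_b / tot
        count = tot
    return mean, m2 / count

x = torch.randn(1 << 20)
print(chunked_mean_var(x))
print(x.mean().item(), x.var(unbiased=False).item())
```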
**Example:**
Take https://github.com/pytorch/pytorch/issues/141541 as an example:
```
import torch
import torch.nn as nn
torch.manual_seed(0)
class Model(nn.Module):
def __init__(self):
super().__init__()
self.gn = nn.GroupNorm(num_groups=32, num_channels=32)
def forward(self, x):
return self.gn(x)
model = Model().eval()
c_model = torch.compile(model)
x = torch.randn(1, 32, 128, 128, 128)
with torch.no_grad():
output = model(x)
c_output = c_model(x)
print(torch.max(torch.abs(output - c_output)))
print(torch.allclose(output, c_output, 1.3e-6, 1e-5))
```
**logs**
- before
```
tensor(7.0095e-05)
False
```
- After
```
tensor(9.5367e-07)
True
```
- on CUDA
```
tensor(1.4305e-06, device='cuda:0', grad_fn=<MaxBackward1>)
True
```
**Generated code:**
- before
```
cpp_fused_native_group_norm_0 = async_compile.cpp_pybinding(['const float*', 'const float*', 'const float*', 'float*', 'float*', 'float*'], '''
#include "/tmp/torchinductor_jiayisun/pi/cpicxudqmdsjh5cm4klbtbrvy2cxwr7whxl3md2zzdjdf3orvfdf.h"
extern "C" void kernel(const float* in_ptr0,
const float* in_ptr1,
const float* in_ptr2,
float* out_ptr0,
float* out_ptr1,
float* out_ptr2)
{
{
#pragma GCC ivdep
for(int64_t x0=static_cast<int64_t>(0L); x0<static_cast<int64_t>(32L); x0+=static_cast<int64_t>(1L))
{
{
Welford<float> tmp_acc0 = Welford<float>();
Welford<at::vec::Vectorized<float>> tmp_acc0_vec = Welford<at::vec::Vectorized<float>>();
Welford<at::vec::Vectorized<float>> masked_tmp_acc0_vec = Welford<at::vec::Vectorized<float>>();
static WeightRecp<at::vec::Vectorized<float>> wrecps0(static_cast<int64_t>(131072L));
for(int64_t x1=static_cast<int64_t>(0L); x1<static_cast<int64_t>(2097152L); x1+=static_cast<int64_t>(16L))
{
{
if(C10_LIKELY(x1 >= static_cast<int64_t>(0) && x1 < static_cast<int64_t>(2097152L)))
{
auto tmp0 = at::vec::Vectorized<float>::loadu(in_ptr0 + static_cast<int64_t>(x1 + 2097152L*x0), static_cast<int64_t>(16));
tmp_acc0_vec = welford_combine(tmp_acc0_vec, tmp0, &wrecps0);
}
}
}
tmp_acc0 = welford_combine(tmp_acc0, welford_vec_reduce_all(masked_tmp_acc0_vec));
tmp_acc0 = welford_combine(tmp_acc0, welford_vec_reduce_all(tmp_acc0_vec));
out_ptr0[static_cast<int64_t>(x0)] = static_cast<float>(tmp_acc0.mean);
out_ptr1[static_cast<int64_t>(x0)] = static_cast<float>(tmp_acc0.m2);
}
}
}
{
#pragma GCC ivdep
for(int64_t x0=static_cast<int64_t>(0L); x0<static_cast<int64_t>(32L); x0+=static_cast<int64_t>(1L))
{
for(int64_t x1=static_cast<int64_t>(0L); x1<static_cast<int64_t>(2097152L); x1+=static_cast<int64_t>(16L))
{
{
if(C10_LIKELY(x1 >= static_cast<int64_t>(0) && x1 < static_cast<int64_t>(2097152L)))
{
auto tmp0 = at::vec::Vectorized<float>::loadu(in_ptr0 + static_cast<int64_t>(x1 + 2097152L*x0), static_cast<int64_t>(16));
auto tmp1 = out_ptr0[static_cast<int64_t>(x0)];
auto tmp4 = out_ptr1[static_cast<int64_t>(x0)];
auto tmp12 = in_ptr1[static_cast<int64_t>(x0)];
auto tmp15 = in_ptr2[static_cast<int64_t>(x0)];
auto tmp2 = at::vec::Vectorized<float>(tmp1);
auto tmp3 = tmp0 - tmp2;
auto tmp5 = static_cast<float>(2097152.0);
auto tmp6 = tmp4 / tmp5;
auto tmp7 = static_cast<float>(1e-05);
auto tmp8 = decltype(tmp6)(tmp6 + tmp7);
auto tmp9 = 1 / std::sqrt(tmp8);
auto tmp10 = at::vec::Vectorized<float>(tmp9);
auto tmp11 = tmp3 * tmp10;
auto tmp13 = at::vec::Vectorized<float>(tmp12);
auto tmp14 = tmp11 * tmp13;
auto tmp16 = at::vec::Vectorized<float>(tmp15);
auto tmp17 = tmp14 + tmp16;
tmp17.store(out_ptr2 + static_cast<int64_t>(x1 + 2097152L*x0));
}
}
}
}
}
}
''')
```
- After
```
cpp_fused_native_group_norm_0 = async_compile.cpp_pybinding(['const float*', 'const float*', 'const float*', 'float*', 'float*', 'float*'], '''
#include "/tmp/torchinductor_jiayisun/ln/clnlak27xpvmq3klpqyj6xzyq2thf4ecrezve5ddy4f4xaz4sb7w.h"
extern "C" void kernel(const float* in_ptr0,
const float* in_ptr1,
const float* in_ptr2,
float* out_ptr0,
float* out_ptr1,
float* out_ptr2)
{
{
#pragma GCC ivdep
for(int64_t x0=static_cast<int64_t>(0L); x0<static_cast<int64_t>(32L); x0+=static_cast<int64_t>(1L))
{
{
Welford<float> tmp_acc0 = Welford<float>();
Welford<at::vec::Vectorized<float>> tmp_acc0_vec = Welford<at::vec::Vectorized<float>>();
Welford<at::vec::Vectorized<float>> masked_tmp_acc0_vec = Welford<at::vec::Vectorized<float>>();
WelfordHelper<at::vec::Vectorized<float>> welford_helper0(static_cast<int64_t>(131072L));
static WelfordHelper<at::vec::Vectorized<float>> masked_welford_helper0(static_cast<int64_t>(0L));
for(int64_t x1=static_cast<int64_t>(0L); x1<static_cast<int64_t>(2097152L); x1+=static_cast<int64_t>(16L))
{
{
if(C10_LIKELY(x1 >= static_cast<int64_t>(0) && x1 < static_cast<int64_t>(2097152L)))
{
auto tmp0 = at::vec::Vectorized<float>::loadu(in_ptr0 + static_cast<int64_t>(x1 + 2097152L*x0), static_cast<int64_t>(16));
tmp_acc0_vec = welford_combine(tmp_acc0_vec, tmp0, &welford_helper0);
}
}
}
tmp_acc0_vec = welford_combine(tmp_acc0_vec, &welford_helper0);
masked_tmp_acc0_vec = welford_combine(masked_tmp_acc0_vec, &masked_welford_helper0);
tmp_acc0 = welford_combine(tmp_acc0, welford_vec_reduce_all(masked_tmp_acc0_vec));
tmp_acc0 = welford_combine(tmp_acc0, welford_vec_reduce_all(tmp_acc0_vec));
out_ptr0[static_cast<int64_t>(x0)] = static_cast<float>(tmp_acc0.mean);
out_ptr1[static_cast<int64_t>(x0)] = static_cast<float>(tmp_acc0.m2);
}
}
}
{
#pragma GCC ivdep
for(int64_t x0=static_cast<int64_t>(0L); x0<static_cast<int64_t>(32L); x0+=static_cast<int64_t>(1L))
{
for(int64_t x1=static_cast<int64_t>(0L); x1<static_cast<int64_t>(2097152L); x1+=static_cast<int64_t>(16L))
{
{
if(C10_LIKELY(x1 >= static_cast<int64_t>(0) && x1 < static_cast<int64_t>(2097152L)))
{
auto tmp0 = at::vec::Vectorized<float>::loadu(in_ptr0 + static_cast<int64_t>(x1 + 2097152L*x0), static_cast<int64_t>(16));
auto tmp1 = out_ptr0[static_cast<int64_t>(x0)];
auto tmp4 = out_ptr1[static_cast<int64_t>(x0)];
auto tmp12 = in_ptr1[static_cast<int64_t>(x0)];
auto tmp15 = in_ptr2[static_cast<int64_t>(x0)];
auto tmp2 = at::vec::Vectorized<float>(tmp1);
auto tmp3 = tmp0 - tmp2;
auto tmp5 = static_cast<float>(2097152.0);
auto tmp6 = tmp4 / tmp5;
auto tmp7 = static_cast<float>(1e-05);
auto tmp8 = decltype(tmp6)(tmp6 + tmp7);
auto tmp9 = 1 / std::sqrt(tmp8);
auto tmp10 = at::vec::Vectorized<float>(tmp9);
auto tmp11 = tmp3 * tmp10;
auto tmp13 = at::vec::Vectorized<float>(tmp12);
auto tmp14 = tmp11 * tmp13;
auto tmp16 = at::vec::Vectorized<float>(tmp15);
auto tmp17 = tmp14 + tmp16;
tmp17.store(out_ptr2 + static_cast<int64_t>(x1 + 2097152L*x0));
}
}
}
}
}
}
''')
```
cc @H-Huang @awgu @kwen2501 @wanchaol @fegin @fduwjj @wz337 @wconstab @d4l3k @c-p-i-o @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @amjames @desertfire @chauhang @aakhundov @ColinPeppler
| true
|
2,794,845,154
|
[CI] Add continue through error flag to xpu ci test
|
chuanqi129
|
closed
|
[
"triaged",
"open source",
"ciflow/trunk",
"topic: not user facing",
"keep-going",
"ciflow/xpu"
] | 4
|
COLLABORATOR
|
Fixes #145048
| true
|
2,794,841,891
|
Fix a number of flexattention issues (cse, cudagraph, etc.)
|
Chillee
|
closed
|
[
"Merged",
"ciflow/trunk",
"topic: not user facing",
"module: inductor",
"ciflow/inductor"
] | 3
|
COLLABORATOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #145059
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @ColinPeppler @amjames @desertfire @chauhang @aakhundov
| true
|
2,794,802,541
|
[Break XPU][Inductor UT] Fix broken XPU CI introduced by community changes
|
etaf
|
closed
|
[
"open source",
"Merged",
"ciflow/trunk",
"topic: not user facing",
"module: inductor",
"ciflow/inductor",
"ciflow/xpu"
] | 3
|
COLLABORATOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #145058
As title.
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @ColinPeppler @amjames @desertfire @chauhang @aakhundov
| true
|
2,794,798,011
|
[2/N] Remove unnecessary once flag usage
|
cyyever
|
closed
|
[
"oncall: distributed",
"module: cpu",
"open source",
"Merged",
"ciflow/trunk",
"release notes: quantization",
"release notes: distributed (c10d)"
] | 3
|
COLLABORATOR
|
Fixes #ISSUE_NUMBER
cc @H-Huang @awgu @kwen2501 @wanchaol @fegin @fduwjj @wz337 @wconstab @d4l3k @c-p-i-o @jgong5 @mingfeima @XiaobingSuper @sanchitintel @ashokei @jingxu10
| true
|
2,794,763,318
|
Update test_c10d_object_collectives.py with DistributedTestBase class
|
amathewc
|
closed
|
[
"oncall: distributed",
"triaged",
"open source",
"Merged",
"ciflow/trunk",
"topic: not user facing"
] | 22
|
CONTRIBUTOR
|
# MOTIVATION
To generalize distributed test cases for non-CUDA devices, we are leveraging the DistributedTestBase class introduced in [PR #138216](https://github.com/pytorch/pytorch/pull/138216). This new class is derived from MultiProcessTestCase and abstracts the creation/deletion of process groups and other functionality for specific devices. In this PR, we extend the scope of these tests to support HPUs.
# CHANGES
Replaced MultiProcessTestCase with the DistributedTestBase class.
Extended test functionality to include support for HPUs.
Utilized instantiate_device_type_tests with targeted attributes to generate device-specific test instances (a minimal sketch of this pattern is shown after this list).
Applied the skipIfHPU decorator to skip tests that are not yet compatible with HPU devices.
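A minimal sketch of the instantiation pattern (illustrative only; the class and test names are made up here, and the real tests derive from DistributedTestBase, which also manages process-group setup and teardown):
```python
import torch
from torch.testing._internal.common_device_type import instantiate_device_type_tests
from torch.testing._internal.common_utils import TestCase, run_tests

class ExampleDeviceTest(TestCase):
    def test_ones(self, device):
        # `device` is filled in per backend ("cpu", "cuda", "hpu", ...) by the
        # instantiation call below.
        t = torch.ones(4, device=device)
        self.assertEqual(t.sum().item(), 4)

instantiate_device_type_tests(ExampleDeviceTest, globals())

if __name__ == "__main__":
    run_tests()
```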
cc @H-Huang @awgu @kwen2501 @wanchaol @fegin @fduwjj @wz337 @wconstab @d4l3k @c-p-i-o @ankurneog
| true
|
2,794,757,353
|
[Break XPU][qconv] Add torch.int8 as output dtype assertion in qconv2d_pointwise
|
etaf
|
closed
|
[
"open source"
] | 2
|
COLLABORATOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #145039
* __->__ #145055
* #145038
* #145037
* #145036
| true
|
2,794,754,135
|
Update c10d_object_collectives using DistributedTestBase
|
amathewc
|
closed
|
[
"oncall: distributed",
"topic: not user facing"
] | 3
|
CONTRIBUTOR
|
# MOTIVATION
To generalize distributed test cases for non-CUDA devices, we are leveraging the DistributedTestBase class introduced in [PR #138216](https://github.com/pytorch/pytorch/pull/138216). This new class is derived from MultiProcessTestCase and abstracts the creation/deletion of process groups and other functionality for specific devices. In this PR, we extend the scope of these tests to support HPUs.
# CHANGES
Replaced MultiProcessTestCase with the DistributedTestBase class.
Extended test functionality to include support for HPUs.
Utilized instantiate_device_type_tests with targeted attributes to generate device-specific test instances.
Applied the skipIfHPU decorator to skip tests that are not yet compatible with HPU devices.
cc @H-Huang @awgu @kwen2501 @wanchaol @fegin @fduwjj @wz337 @wconstab @d4l3k @c-p-i-o
| true
|
2,794,706,432
|
DISABLED test_nested_optimize_decorator (__main__.MiscTests)
|
pytorch-bot[bot]
|
closed
|
[
"triaged",
"module: flaky-tests",
"skipped",
"oncall: pt2",
"module: dynamo"
] | 2
|
NONE
|
Platforms: linux, mac, macos
This test was disabled because it is failing in CI. See [recent examples](https://hud.pytorch.org/flakytest?name=test_nested_optimize_decorator&suite=MiscTests&limit=100) and the most recent trunk [workflow logs](https://github.com/pytorch/pytorch/runs/35754430609).
Over the past 3 hours, it has been determined flaky in 3 workflow(s) with 3 failures and 3 successes.
**Debugging instructions (after clicking on the recent samples link):**
DO NOT ASSUME THINGS ARE OKAY IF THE CI IS GREEN. We now shield flaky tests from developers so CI will thus be green but it will be harder to parse the logs.
To find relevant log snippets:
1. Click on the workflow logs linked above
2. Click on the Test step of the job so that it is expanded. Otherwise, the grepping will not work.
3. Grep for `test_nested_optimize_decorator`
4. There should be several instances run (as flaky tests are rerun in CI) from which you can study the logs.
<details><summary>Sample error message</summary>
```
Traceback (most recent call last):
File "/Users/ec2-user/runner/_work/pytorch/pytorch/test/dynamo/test_misc.py", line 4045, in test_nested_optimize_decorator
self.assertEqual(cnts3.op_count, 4)
File "/Users/ec2-user/runner/_work/_temp/conda_environment_12820448148/lib/python3.9/site-packages/torch/testing/_internal/common_utils.py", line 4036, in assertEqual
raise error_metas.pop()[0].to_error( # type: ignore[index]
AssertionError: Scalars are not equal!
Expected 4 but got 3.
Absolute difference: 1
Relative difference: 0.25
To execute this test, run the following from the base repo dir:
python test/dynamo/test_misc.py MiscTests.test_nested_optimize_decorator
This message can be suppressed by setting PYTORCH_PRINT_REPRO_ON_FAILURE=0
```
</details>
Test file path: `dynamo/test_misc.py`
cc @clee2000 @wdvr @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @chauhang @amjames
| true
|
2,794,706,211
|
DISABLED test_mps_event_module (__main__.TestMPS)
|
pytorch-bot[bot]
|
closed
|
[
"triaged",
"module: flaky-tests",
"module: macos",
"skipped",
"module: mps"
] | 2
|
NONE
|
Platforms: mac, macos
This test was disabled because it is failing in CI. See [recent examples](https://hud.pytorch.org/flakytest?name=test_mps_event_module&suite=TestMPS&limit=100) and the most recent trunk [workflow logs](https://github.com/pytorch/pytorch/runs/35752004692).
Over the past 3 hours, it has been determined flaky in 3 workflow(s) with 3 failures and 3 successes.
**Debugging instructions (after clicking on the recent samples link):**
DO NOT ASSUME THINGS ARE OKAY IF THE CI IS GREEN. We now shield flaky tests from developers so CI will thus be green but it will be harder to parse the logs.
To find relevant log snippets:
1. Click on the workflow logs linked above
2. Click on the Test step of the job so that it is expanded. Otherwise, the grepping will not work.
3. Grep for `test_mps_event_module`
4. There should be several instances run (as flaky tests are rerun in CI) from which you can study the logs.
<details><summary>Sample error message</summary>
```
Traceback (most recent call last):
File "/Users/ec2-user/runner/_work/pytorch/pytorch/test/test_mps.py", line 8188, in test_mps_event_module
elapsedTime = startEvent.elapsed_time(endEvent)
File "/Users/ec2-user/runner/_work/_temp/conda_environment_12820380539/lib/python3.9/site-packages/torch/mps/event.py", line 45, in elapsed_time
return torch._C._mps_elapsedTimeOfEvents(self.__eventId, end_event.__eventId)
RuntimeError: End event 2 was not recorded after start event 1
To execute this test, run the following from the base repo dir:
python test/test_mps.py TestMPS.test_mps_event_module
This message can be suppressed by setting PYTORCH_PRINT_REPRO_ON_FAILURE=0
```
</details>
Test file path: `test_mps.py`
cc @clee2000 @wdvr @malfet @albanD @kulinseth @DenisVieriu97 @jhavukainen
| true
|
2,794,683,541
|
[inductor][1/N] triton support post-#5512, main components
|
davidberard98
|
closed
|
[
"Merged",
"topic: bug fixes",
"module: inductor",
"ciflow/inductor",
"release notes: inductor"
] | 4
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #145515
* #145348
* __->__ #145051
Triton commit 5220 adds tuple support in Triton (changing the indexing format in AttrsDescriptor) and commit 5512 replaces AttrsDescriptor with raw tuples. This is an initial PR to add support for Triton versions after commit 5512 landed.
The main changes in 5220 and 5512 that need to be supported:
* AttrsDescriptor() gets replaced with a raw dict. The raw dict has the format `{(TUPLES): [["tt.divisibility", 16]]}`, where `(TUPLES)` is a tuple of indices, e.g. `((0,), (1,), (3,))` to indicate that args 0, 1, and 3 are divisible by 16. These indices are, themselves, represented as tuples to support nested inputs (e.g. an argument that's a tuple), but support for tuples is not implemented right now. (A toy illustration of this dict format appears after this list.)
* "signature" changes: the signature now contains _all_ args, including constexpr and constant args.
* ASTSource now takes "constexprs" instead of "constants" - for example, equal-to-1 args are constants but not constexprs so we don't need to pass these args as "constants".
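A toy illustration of the raw-dict format described above (the values are assumed for illustration and are not captured from real Triton output):
```python
attrs = {((0,), (1,), (3,)): [["tt.divisibility", 16]]}
for arg_indices, hints in attrs.items():
    for arg_idx, *nested_path in arg_indices:
        for hint_name, hint_value in hints:
            print(f"arg {arg_idx} (nested path {nested_path}): {hint_name} = {hint_value}")
```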
What this PR supports:
* Triton versions before Dec 9, 2024, and (partial support for) Triton versions after Jan 1, 2025
* (triton jan 1+) typical inductor-generated triton: updated AttrsDescriptor, signatures, constexpr/constant handling.
What this PR doesn't support (TODO in follow-up PRs):
* Triton versions between Dec 9, 2024 and before Jan 1, 2025
* (triton jan 1+) user-defined triton kernel support (this is implemented already in @anmyachev's patch)
* (triton jan 1+) triton_helper support (failing in triton codegen - needs investigation)
* (triton jan 1+) AOTI / cpp wrapper
thanks to @anmyachev for patches in https://github.com/intel/intel-xpu-backend-for-triton/blob/main/scripts/pytorch.patch, which contains most of these changes already
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @ColinPeppler @amjames @desertfire @chauhang @aakhundov
| true
|
2,794,665,775
|
Make flex_attention work if `score_mod`'s output doesn't require gradients at all
|
Chillee
|
open
|
[
"triaged",
"oncall: pt2",
"module: higher order operators",
"module: pt2-dispatcher",
"module: flex attention"
] | 0
|
COLLABORATOR
|
### 🚀 The feature, motivation and pitch
See https://github.com/pytorch/pytorch/issues/139548#issuecomment-2597509430
```
import warnings
import numpy as np
import torch
from torch.nn.attention.flex_attention import flex_attention, create_mask, create_block_mask
# import astropy_healpix as hp
hlc = 4
num_healpix_cells = 12 * 4**hlc
print( f'seq_length : {num_healpix_cells}')
# with warnings.catch_warnings(action="ignore"):
# nbours= hp.neighbours( np.arange(num_healpix_cells), 2**hlc, order='nested').transpose()
# build adjacency matrix (smarter ways to do it ...)
nbours_mat = torch.zeros( (num_healpix_cells,num_healpix_cells), dtype=torch.bool, device='cuda')
# for i in range(num_healpix_cells) :
# for j in nbours[i] :
# nbours_mat[i,j] = True if j>=0 else False
hp_adjacency = nbours_mat
# tc_tokens = torch.from_numpy( np.load( 'tc_tokens.npy')).to(torch.float16).to('cuda')
tc_tokens = torch.ones( [204458, 256], dtype=torch.float16, device='cuda', requires_grad=True)
# tcs_lens = torch.from_numpy( np.load( './tcs_lens.npy')).to(torch.int32).to('cuda')
# tcs_lens = torch.ra
# print( f'tc_tokens = {tc_tokens.shape}')
# print( f'tcs_lens = {tcs_lens.shape}')
tc_tokens_cell_idx = torch.zeros(204458, dtype=torch.int, device='cuda')
def sparsity_mask( score, b, h, q_idx, kv_idx):
return hp_adjacency[ tc_tokens_cell_idx[q_idx], tc_tokens_cell_idx[kv_idx] ]
compiled_flex_attention = torch.compile(flex_attention, dynamic=False)
toks = tc_tokens[:,:64].unsqueeze(0).unsqueeze(0)
out = compiled_flex_attention( toks, toks, toks, score_mod=sparsity_mask)
t = torch.zeros_like( out)
mse = torch.nn.MSELoss()
loss = mse( t, out)
loss.backward()
```
### Alternatives
_No response_
### Additional context
_No response_
cc @chauhang @penguinwu @zou3519 @ydwu4 @bdhirsh @yf225 @drisspg @yanboliang @BoyuanFeng
| true
|
2,794,663,333
|
Fix document for tensorboard
|
leoleoasd
|
closed
|
[
"triaged",
"open source",
"topic: not user facing"
] | 5
|
NONE
|
Change "10 items" to "10 bytes".
In tensorboard document:
https://github.com/tensorflow/tensorboard/blob/862a9da9b6b8dd5523b829278eb57648cd060e34/tensorboard/summary/writer/event_file_writer.py#L54-L70
and their implementation:
https://github.com/tensorflow/tensorboard/blob/862a9da9b6b8dd5523b829278eb57648cd060e34/tensorboard/summary/writer/event_file_writer.py#L156-L160
the queue size is the number of bytes, not the number of items.
| true
|
2,794,641,653
|
[XPU] Keep `ciflow/xpu` jobs going when the first test case fails.
|
etaf
|
closed
|
[
"module: ci",
"triaged",
"module: xpu"
] | 4
|
COLLABORATOR
|
### 🚀 The feature, motivation and pitch
Currently, PyTorch's CI jobs all stop at the first failure. For XPU, we want the `ciflow/xpu` job to continue running the whole job when it encounters a failure and to report all the failures. The reason is detailed below:
Since `ciflow/xpu` is not yet gating for community PRs, we often need to raise an issue and fix the problem when `ciflow/xpu` is blocked by a community commit. While `ciflow/xpu` is blocked by such an issue, other XPU-related PRs cannot pass the `ciflow/xpu` check, and more than one such issue may appear in the same period. The blocking time is typically 1~3 days, at roughly 1.6 occurrences per week. This is seriously slowing down development.
To unblock XPU PRs from issues that are unrelated to the PRs themselves, we want `ciflow/xpu` to report all the failures. Then we can determine whether any failure is actually related to the PR.
### Alternatives
_No response_
### Additional context
_No response_
cc @seemethere @malfet @pytorch/pytorch-dev-infra @gujinghui @EikanWang @fengyuan14 @guangyey
| true
|
2,794,619,489
|
prov logging
|
bobrenjc93
|
closed
|
[
"ciflow/trunk",
"release notes: fx",
"fx",
"ciflow/inductor"
] | 2
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #145196
* __->__ #145047
* #143961
cc @ezyang @SherlockNoMad @EikanWang @jgong5 @wenzhe-nrv
Differential Revision: [D68349176](https://our.internmc.facebook.com/intern/diff/D68349176)
| true
|
2,794,518,885
|
OpenReg: fix issue of pin_memory
|
Zhenbin-8
|
closed
|
[
"triaged",
"open source",
"Merged",
"ciflow/trunk",
"topic: not user facing"
] | 8
|
CONTRIBUTOR
|
Fix an issue with `pin_memory` when rewrapping a storage.
cc @albanD
| true
|
2,794,485,333
|
[pytorch/ncclx] Remove Alltoallv specialization for PTD all_to_all
|
wconstab
|
closed
|
[
"module: c10d",
"Merged",
"ciflow/trunk",
"release notes: distributed (c10d)"
] | 10
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #145045
Summary:
PTD all_to_all uses a list of tensors, while ncclAllToAllv (provided
by NCCLX and RCCL) assumes that a single contiguous buffer is used.
These are fundamentally mismatched. The list of tensors might not be
contiguous or even ordered (buffer addresses might not be in
increasing order).
This patch removes the ncclAllToAllv specialization for PTD
all_to_all and instead lets it directly call ncclSend/ncclRecv.
Co-authored by @pavanbalaji
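A rough Python-level sketch of the send/recv formulation (illustrative only: the actual change is in the C++ ProcessGroupNCCL backend, the function name is made up, and an already-initialized process group is assumed):
```python
import torch.distributed as dist

def all_to_all_via_send_recv(output_tensors, input_tensors, group=None):
    rank = dist.get_rank(group)
    world = dist.get_world_size(group)
    # The self-exchange is a plain copy; every tensor may live at an arbitrary,
    # non-contiguous address, so no single contiguous ncclAllToAllv buffer can be assumed.
    output_tensors[rank].copy_(input_tensors[rank])
    ops = []
    for peer in range(world):
        if peer == rank:
            continue
        ops.append(dist.P2POp(dist.isend, input_tensors[peer], peer, group))
        ops.append(dist.P2POp(dist.irecv, output_tensors[peer], peer, group))
    for work in dist.batch_isend_irecv(ops):
        work.wait()
```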
| true
|
2,794,447,043
|
DISABLED test_strided_inputs_dynamic_shapes_cuda (__main__.DynamicShapesGPUTests)
|
pytorch-bot[bot]
|
closed
|
[
"module: rocm",
"triaged",
"module: flaky-tests",
"skipped",
"oncall: pt2",
"module: inductor"
] | 6
|
NONE
|
Platforms: rocm
This test was disabled because it is failing in CI. See [recent examples](https://hud.pytorch.org/flakytest?name=test_strided_inputs_dynamic_shapes_cuda&suite=DynamicShapesGPUTests&limit=100) and the most recent trunk [workflow logs](https://github.com/pytorch/pytorch/runs/35749033733).
Over the past 3 hours, it has been determined flaky in 12 workflow(s) with 12 failures and 12 successes.
**Debugging instructions (after clicking on the recent samples link):**
DO NOT ASSUME THINGS ARE OKAY IF THE CI IS GREEN. We now shield flaky tests from developers so CI will thus be green but it will be harder to parse the logs.
To find relevant log snippets:
1. Click on the workflow logs linked above
2. Click on the Test step of the job so that it is expanded. Otherwise, the grepping will not work.
3. Grep for `test_strided_inputs_dynamic_shapes_cuda`
4. There should be several instances run (as flaky tests are rerun in CI) from which you can study the logs.
<details><summary>Sample error message</summary>
```
Traceback (most recent call last):
File "/var/lib/jenkins/pytorch/test/inductor/test_torchinductor.py", line 7162, in test_strided_inputs
self.assertTrue(same(fn(*inputs), inputs[0] + inputs[1]))
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_dynamo/eval_frame.py", line 576, in _fn
return fn(*args, **kwargs)
File "/var/lib/jenkins/pytorch/test/inductor/test_torchinductor.py", line 7154, in fn
@torch.compile(backend="inductor")
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_dynamo/eval_frame.py", line 755, in _fn
return fn(*args, **kwargs)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_functorch/aot_autograd.py", line 1211, in forward
return compiled_fn(full_args)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_functorch/_aot_autograd/runtime_wrappers.py", line 322, in runtime_wrapper
all_outs = call_func_at_runtime_with_args(
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_functorch/_aot_autograd/utils.py", line 126, in call_func_at_runtime_with_args
out = normalize_as_list(f(args))
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_functorch/_aot_autograd/runtime_wrappers.py", line 671, in inner_fn
outs = compiled_fn(args)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_functorch/_aot_autograd/runtime_wrappers.py", line 489, in wrapper
return compiled_fn(runtime_args)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_inductor/output_code.py", line 464, in __call__
return self.current_callable(inputs)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_inductor/compile_fx.py", line 1228, in run
return compiled_fn(new_inputs)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_inductor/cudagraph_trees.py", line 397, in deferred_cudagraphify
fn, out = cudagraphify(model, inputs, new_static_input_idxs, *args, **kwargs)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_inductor/cudagraph_trees.py", line 427, in cudagraphify
return manager.add_function(
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_inductor/cudagraph_trees.py", line 2255, in add_function
return fn, fn(inputs)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_inductor/cudagraph_trees.py", line 1949, in run
out = self._run(new_inputs, function_id)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_inductor/cudagraph_trees.py", line 2057, in _run
out = self.run_eager(new_inputs, function_id)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_inductor/cudagraph_trees.py", line 2221, in run_eager
return node.run(new_inputs)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_inductor/cudagraph_trees.py", line 635, in run
check_memory_pool(self.device_index, self.cuda_graphs_pool, refs)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_inductor/cudagraph_trees.py", line 1754, in check_memory_pool
if torch._C._cuda_checkPoolLiveAllocations(device, pool_id, unique_storages):
MemoryError: std::bad_alloc
To execute this test, run the following from the base repo dir:
PYTORCH_TEST_WITH_ROCM=1 python test/inductor/test_torchinductor_dynamic_shapes.py DynamicShapesGPUTests.test_strided_inputs_dynamic_shapes_cuda
This message can be suppressed by setting PYTORCH_PRINT_REPRO_ON_FAILURE=0
```
</details>
Test file path: `inductor/test_torchinductor_dynamic_shapes.py`
cc @jeffdaily @sunway513 @jithunnair-amd @pruthvistony @ROCmSupport @dllehr-amd @jataylo @hongxiayang @naromero77amd @clee2000 @wdvr @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @ColinPeppler @amjames @desertfire @chauhang @aakhundov
| true
|
2,794,382,662
|
Bug when evaluating a reparameterized model with DDP
|
NOTGOOOOD
|
open
|
[
"oncall: distributed",
"triaged"
] | 1
|
NONE
|
### 🐛 Describe the bug
I usually evaluate my model at the end of each training epoch. In DDP mode, validation with the reparameterized model raises an error; when a single GPU is used, neither training nor validation raises an error.
The error message is given below:
```
BUG INFO
[rank0]: Traceback (most recent call last):
[rank0]: File "run.py", line 97, in <module>
[rank0]: main()
[rank0]: File "run.py", line 92, in main
[rank0]: cli.train()
[rank0]: File "/home/xuexufeng/project/psf_cv_framework/trainer/mobileone.py", line 37, in train
[rank0]: self.val_one_epoch()
[rank0]: File "/home/xuexufeng/miniconda/envs/py3.8/lib/python3.8/site-packages/torch/utils/_contextlib.py", line 115, in decorate_context
[rank0]: return func(*args, **kwargs)
[rank0]: File "/home/xuexufeng/project/psf_cv_framework/trainer/mobileone.py", line 96, in val_one_epoch
[rank0]: self.val_one_batch(batch)
[rank0]: File "/home/xuexufeng/miniconda/envs/py3.8/lib/python3.8/site-packages/torch/utils/_contextlib.py", line 115, in decorate_context
[rank0]: return func(*args, **kwargs)
[rank0]: File "/home/xuexufeng/project/psf_cv_framework/trainer/mobileone.py", line 79, in val_one_batch
[rank0]: predict_cls: torch.Tensor = self.model_eval(batch)["cls"]
[rank0]: File "/home/xuexufeng/miniconda/envs/py3.8/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1532, in _wrapped_call_impl
[rank0]: return self._call_impl(*args, **kwargs)
[rank0]: File "/home/xuexufeng/miniconda/envs/py3.8/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1541, in _call_impl
[rank0]: return forward_call(*args, **kwargs)
[rank0]: File "/home/xuexufeng/miniconda/envs/py3.8/lib/python3.8/site-packages/torch/nn/parallel/distributed.py", line 1589, in forward
[rank0]: inputs, kwargs = self._pre_forward(*inputs, **kwargs)
[rank0]: File "/home/xuexufeng/miniconda/envs/py3.8/lib/python3.8/site-packages/torch/nn/parallel/distributed.py", line 1487, in _pre_forward
[rank0]: self._sync_buffers()
[rank0]: File "/home/xuexufeng/miniconda/envs/py3.8/lib/python3.8/site-packages/torch/nn/parallel/distributed.py", line 2129, in _sync_buffers
[rank0]: self._sync_module_buffers(authoritative_rank)
[rank0]: File "/home/xuexufeng/miniconda/envs/py3.8/lib/python3.8/site-packages/torch/nn/parallel/distributed.py", line 2133, in _sync_module_buffers
[rank0]: self._default_broadcast_coalesced(authoritative_rank=authoritative_rank)
[rank0]: File "/home/xuexufeng/miniconda/envs/py3.8/lib/python3.8/site-packages/torch/nn/parallel/distributed.py", line 2155, in _default_broadcast_coalesced
[rank0]: self._distributed_broadcast_coalesced(bufs, bucket_size, authoritative_rank)
[rank0]: File "/home/xuexufeng/miniconda/envs/py3.8/lib/python3.8/site-packages/torch/nn/parallel/distributed.py", line 2070, in _distributed_broadcast_coalesced
[rank0]: dist._broadcast_coalesced(
[rank0]: RuntimeError: !tensors.empty() INTERNAL ASSERT FAILED at "/opt/conda/conda-bld/pytorch_1716905971873/work/torch/csrc/distributed/c10d/reducer.cpp":2090, please report a bug to PyTorch.
```
A sample instance as follow:
```Python
import os
import sys
import torch
import torch.nn as nn
from torch.utils import data
from copy import deepcopy  # needed by reparameterize_model below
class MyTrainer:
def __init__(self):
self.rank, self.num_gpus = get_dist_info()
self.model: nn.Module = Mobileone("s0")
self.val_data_loader: data.DataLoader = MyDataloader()
@staticmethod
def reparameterize_model(model: nn.Module) -> torch.nn.Module:
""" Refer: Mobileone https://github.com/apple/ml-mobileone
Method returns a model where a multi-branched structure
used in training is re-parameterized into a single branch
for inference.
:param model: MobileOne model in train mode.
:return: MobileOne model in inference mode.
"""
# Avoid editing original graph
model_local = deepcopy(model)
for module in model_local.modules():
if hasattr(module, 'reparameterize'):
module.reparameterize()
return model_local
def train(self,):
# Assuming the training process is complete
# Evaluating only in rank-0
if self.rank == 0:
self.val_one_epoch()
@torch.no_grad()
def val_one_epoch(self, ):
self.model.eval()
self.model_eval: nn.Module = self.reparameterize_model(self.model)
for _, batch in enumerate(self.val_data_loader):
self.val_one_batch(batch)
def val_one_batch(self, data):
predict_cls: torch.Tensor = self.model_eval(data)
```
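A possible workaround (an untested sketch, not an official fix) is to re-parameterize the underlying plain module and run evaluation on that copy directly, so the DDP wrapper's buffer synchronization is never triggered:
```python
import torch.nn as nn
from copy import deepcopy

def build_eval_model(maybe_ddp_model: nn.Module) -> nn.Module:
    # If the model is wrapped in DistributedDataParallel, unwrap it first and
    # evaluate the re-parameterized copy as a plain nn.Module, avoiding the
    # _sync_buffers() call that fails in the traceback above.
    plain = getattr(maybe_ddp_model, "module", maybe_ddp_model)
    eval_model = deepcopy(plain)
    for m in eval_model.modules():
        if hasattr(m, "reparameterize"):
            m.reparameterize()
    return eval_model.eval()
```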
### Versions
PyTorch version: 2.3.1
Is debug build: False
CUDA used to build PyTorch: 11.8
ROCM used to build PyTorch: N/A
OS: Ubuntu 22.04.2 LTS (x86_64)
GCC version: (Ubuntu 11.4.0-1ubuntu1~22.04) 11.4.0
Clang version: Could not collect
CMake version: version 3.22.1
Libc version: glibc-2.35
Python version: 3.8.19 (default, Mar 20 2024, 19:58:24) [GCC 11.2.0] (64-bit runtime)
Python platform: Linux-5.15.0-125-generic-x86_64-with-glibc2.17
Is CUDA available: True
CUDA runtime version: 11.5.119
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration:
GPU 0: NVIDIA GeForce RTX 3090
GPU 1: NVIDIA GeForce RTX 3090
GPU 2: NVIDIA GeForce RTX 3090
GPU 3: NVIDIA GeForce RTX 3090
GPU 4: NVIDIA GeForce RTX 3090
GPU 5: NVIDIA GeForce RTX 3090
GPU 6: NVIDIA GeForce RTX 3090
GPU 7: NVIDIA GeForce RTX 3090
Nvidia driver version: 550.54.14
cuDNN version: Probably one of the following:
/usr/lib/x86_64-linux-gnu/libcudnn.so.9.0.0
/usr/lib/x86_64-linux-gnu/libcudnn_adv.so.9.0.0
/usr/lib/x86_64-linux-gnu/libcudnn_cnn.so.9.0.0
/usr/lib/x86_64-linux-gnu/libcudnn_engines_precompiled.so.9.0.0
/usr/lib/x86_64-linux-gnu/libcudnn_engines_runtime_compiled.so.9.0.0
/usr/lib/x86_64-linux-gnu/libcudnn_graph.so.9.0.0
/usr/lib/x86_64-linux-gnu/libcudnn_heuristic.so.9.0.0
/usr/lib/x86_64-linux-gnu/libcudnn_ops.so.9.0.0
/usr/local/cuda-12.4/targets/x86_64-linux/lib/libcudnn.so.9
/usr/local/cuda-12.4/targets/x86_64-linux/lib/libcudnn_adv.so.9
/usr/local/cuda-12.4/targets/x86_64-linux/lib/libcudnn_cnn.so.9
/usr/local/cuda-12.4/targets/x86_64-linux/lib/libcudnn_engines_precompiled.so.9
/usr/local/cuda-12.4/targets/x86_64-linux/lib/libcudnn_engines_runtime_compiled.so.9
/usr/local/cuda-12.4/targets/x86_64-linux/lib/libcudnn_graph.so.9
/usr/local/cuda-12.4/targets/x86_64-linux/lib/libcudnn_heuristic.so.9
/usr/local/cuda-12.4/targets/x86_64-linux/lib/libcudnn_ops.so.9
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 48 bits physical, 48 bits virtual
Byte Order: Little Endian
CPU(s): 128
On-line CPU(s) list: 0-127
Vendor ID: AuthenticAMD
Model name: AMD EPYC 7543 32-Core Processor
CPU family: 25
Model: 1
Thread(s) per core: 2
Core(s) per socket: 32
Socket(s): 2
Stepping: 1
Frequency boost: enabled
CPU max MHz: 3737.8899
CPU min MHz: 1500.0000
BogoMIPS: 5599.44
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ht syscall nx mmxext fxsr_opt pdpe1gb rdtscp lm constant_tsc rep_good nopl nonstop_tsc cpuid extd_apicid aperfmperf rapl pni pclmulqdq monitor ssse3 fma cx16 pcid sse4_1 sse4_2 movbe popcnt aes xsave avx f16c rdrand lahf_lm cmp_legacy svm extapic cr8_legacy abm sse4a misalignsse 3dnowprefetch osvw ibs skinit wdt tce topoext perfctr_core perfctr_nb bpext perfctr_llc mwaitx cpb cat_l3 cdp_l3 invpcid_single hw_pstate ssbd mba ibrs ibpb stibp vmmcall fsgsbase bmi1 avx2 smep bmi2 erms invpcid cqm rdt_a rdseed adx smap clflushopt clwb sha_ni xsaveopt xsavec xgetbv1 xsaves cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local clzero irperf xsaveerptr rdpru wbnoinvd amd_ppin arat npt lbrv svm_lock nrip_save tsc_scale vmcb_clean flushbyasid decodeassists pausefilter pfthreshold v_vmsave_vmload vgif v_spec_ctrl umip pku ospke vaes vpclmulqdq rdpid overflow_recov succor smca fsrm
Virtualization: AMD-V
L1d cache: 2 MiB (64 instances)
L1i cache: 2 MiB (64 instances)
L2 cache: 32 MiB (64 instances)
L3 cache: 512 MiB (16 instances)
NUMA node(s): 8
NUMA node0 CPU(s): 0-7,64-71
NUMA node1 CPU(s): 8-15,72-79
NUMA node2 CPU(s): 16-23,80-87
NUMA node3 CPU(s): 24-31,88-95
NUMA node4 CPU(s): 32-39,96-103
NUMA node5 CPU(s): 40-47,104-111
NUMA node6 CPU(s): 48-55,112-119
NUMA node7 CPU(s): 56-63,120-127
Vulnerability Gather data sampling: Not affected
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Not affected
Vulnerability Reg file data sampling: Not affected
Vulnerability Retbleed: Not affected
Vulnerability Spec rstack overflow: Mitigation; safe RET
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl and seccomp
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Retpolines; IBPB conditional; IBRS_FW; STIBP always-on; RSB filling; PBRSB-eIBRS Not affected; BHI Not affected
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Versions of relevant libraries:
[pip3] numpy==1.24.4
[pip3] onnx==1.16.1
[pip3] onnxruntime==1.18.1
[pip3] onnxsim==0.4.36
[pip3] torch==2.3.1
[pip3] torchaudio==2.3.1
[pip3] torchvision==0.18.1
[pip3] triton==2.3.1
[conda] blas 1.0 mkl conda-forge
[conda] cuda-cudart 11.8.89 0 nvidia
[conda] cuda-cupti 11.8.87 0 nvidia
[conda] cuda-libraries 11.8.0 0 nvidia
[conda] cuda-nvrtc 11.8.89 0 nvidia
[conda] cuda-nvtx 11.8.86 0 nvidia
[conda] cuda-runtime 11.8.0 0 nvidia
[conda] libblas 3.9.0 16_linux64_mkl conda-forge
[conda] libcblas 3.9.0 16_linux64_mkl conda-forge
[conda] libcublas 11.11.3.6 0 nvidia
[conda] libcufft 10.9.0.58 0 nvidia
[conda] libcurand 10.3.6.82 0 nvidia
[conda] libcusolver 11.4.1.48 0 nvidia
[conda] libcusparse 11.7.5.86 0 nvidia
[conda] libjpeg-turbo 2.0.0 h9bf148f_0 pytorch
[conda] liblapack 3.9.0 16_linux64_mkl conda-forge
[conda] liblapacke 3.9.0 16_linux64_mkl conda-forge
[conda] mkl 2022.1.0 hc2b9512_224
[conda] numpy 1.24.4 py38h59b608b_0 conda-forge
[conda] pytorch 2.3.1 py3.8_cuda11.8_cudnn8.7.0_0 pytorch
[conda] pytorch-cuda 11.8 h7e8668a_5 pytorch
[conda] pytorch-mutex 1.0 cuda pytorch
[conda] torchaudio 2.3.1 py38_cu118 pytorch
[conda] torchtriton 2.3.1 py38 pytorch
[conda] torchvision 0.18.1 py38_cu118 pytorch
cc @H-Huang @awgu @kwen2501 @wanchaol @fegin @fduwjj @wz337 @wconstab @d4l3k @c-p-i-o
| true
|
2,794,370,161
|
PyTorch VS2022 build Windows binary illegal instruction on AVX2(max ISA level) CPU
|
xuhancn
|
closed
|
[
"module: windows",
"low priority",
"module: cpu",
"triaged"
] | 2
|
COLLABORATOR
|
### 🐛 Describe the bug
# Background
The Intel team found that the PyTorch Windows XPU nightly build binary raises an illegal instruction on AVX2 (max ISA level) CPUs; the original issue is here: https://github.com/intel/torch-xpu-ops/issues/1173
Reproduce steps:
Install the PyTorch Windows XPU binary, then run it on an Intel client CPU whose max ISA level is AVX2.
For example, using the 2024-12-11 nightly build:
```cmd
python -m pip install https://download.pytorch.org/whl/nightly/xpu/torch-2.6.0.dev20241211%2Bxpu-cp39-cp39-win_amd64.whl
```
Reproduce code:
```python
import torch
class TestClass:
def test_grid_sampler_2d(self):
torch.manual_seed(0)
b = torch.rand(2, 13, 10, 2, dtype=torch.float64)
a = torch.rand(2, 3, 5, 20, dtype=torch.float64)
torch.grid_sampler_2d(a, b, interpolation_mode=0, padding_mode=0, align_corners=False)
```
Running it raises the illegal instruction.
# Debug Note:
1. The Intel team tried to build the PyTorch Windows XPU binary locally, but we could not reproduce the issue.
2. The Intel team tried to debug the official binary via WinDbg.

WinDbg caught the issue: the compiler generated an AVX512 instruction, which raises an illegal instruction on an AVX2 max-ISA CPU.
But we cannot locate the issue at the source level because we are missing the debug symbol (.pdb) files; PyTorch currently has some issues generating the .pdb files.
3. We tried to switch the PyTorch Windows CPU-only build to VS2022: https://github.com/pytorch/pytorch/pull/143791
We tested the PyTorch Windows CPU-only binary built by that PR, and the issue can be reproduced.
# Conclusion:
1. The issue only occurs with the PyTorch official build system, and only when the Visual Studio version is 2022.
2. The illegal instruction is caused by the compiler generating AVX512 instructions for the AVX2 ISA build.
3. Because of item 2, the issue only occurs on AVX2 (max ISA) CPUs.
# Solution
**Option 1**: Fix the PyTorch official build system, if we want to switch the PyTorch Windows CPU build to VS2022 in the future.
Because we cannot reproduce the issue locally, we suggest involving the Microsoft PyTorch team or the Microsoft Visual Studio team. The reproducing PR is: https://github.com/pytorch/pytorch/pull/143791
**Option 2**: The Intel PyTorch team downgrades the PyTorch Windows XPU build to VS2019.
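For anyone triaging an affected machine, a quick diagnostic sketch (not a fix) to confirm what ISA level the installed PyTorch runtime detects:
```python
import torch

# On the failing machines this is expected to print "AVX2", even though the
# crashing kernel in the official VS2022 binary contains AVX512 instructions.
print(torch.backends.cpu.get_cpu_capability())
print(torch.__config__.show())  # compiler/build details of the installed wheel
```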
### Versions
NA
cc @peterjc123 @mszhanyi @skyline75489 @nbcsm @iremyux @Blackhex @jgong5 @mingfeima @XiaobingSuper @sanchitintel @ashokei @jingxu10
| true
|
2,794,363,819
|
TorchDispatchMode can't capture the operator named aten::_index_put_impl_
|
yangrudan
|
open
|
[
"triaged",
"module: __torch_dispatch__"
] | 5
|
NONE
|
### 🐛 Describe the bug
In my understanding, **TorchDispatchMode** should capture the aten ops that call the actual kernels. When I run the code below, I found that it misses **aten::_index_put_impl_**.
I also tried printing the torch op dispatch stack and using the profiler; both can see **aten::_index_put_impl_**.
What is the reason for this?
# 0x01 Using TorchDispatchMode
```python
import torch
from tests.unit_tests.test_utilities import Utils
import numpy as np
import torch.distributed as dist
from torch.utils._python_dispatch import TorchDispatchMode
from megatron.core.tensor_parallel.cross_entropy import vocab_parallel_cross_entropy
class PrintingMode(TorchDispatchMode):
def __torch_dispatch__(self, func, types, args=(), kwargs=None):
print(f"{func.__module__}.{func.__name__}")
return func(*args, **kwargs)
def __enter__(self):
# Code executed when entering the with block
print("Entering PrintingMode")
return super().__enter__()
def __exit__(self, exc_type, exc_value, traceback):
# Code executed when exiting the with block
return super().__exit__(exc_type, exc_value, traceback)
def test_vocab_parallel_cross_entropy():
Utils.initialize_model_parallel(1,1)
# vocab_parallel_logits = torch.range(0,7).repeat(16,4).cuda()
# target = torch.arange(0,32,2).cuda()
vocab_parallel_logits = torch.empty((4096, 1, 32000), dtype=torch.float16, device='cuda:0')
# Set strides
vocab_parallel_logits = vocab_parallel_logits.as_strided(
(4096, 1, 32000), (32000, 32000, 1))
# Create the target tensor
target = torch.empty((4096, 1), dtype=torch.int64, device='cuda:0')
# Set strides
target = target.as_strided((4096, 1), (1, 4096))
print(vocab_parallel_logits.shape)
print(target.shape)
output = vocab_parallel_cross_entropy(vocab_parallel_logits, target)
Utils.destroy_model_parallel()
# Initialize the distributed environment
#dist.init_process_group(backend='nccl', init_method='env://', world_size=1, rank=0, device_ids=[0])
with PrintingMode():
test_vocab_parallel_cross_entropy()
# Destroy the process group
dist.destroy_process_group()
```
The output is below:
> No aten::_index_put_impl_
```bash
> python test_cross_entropy.py
Entering PrintingMode
Initializing torch.distributed with rank: 0, world_size: 1
torch._ops.aten.empty.memory_format
torch._ops.c10d.barrier.default
[rank0]:[W117 10:31:55.661054174 ProcessGroupNCCL.cpp:4457] [PG ID 0 PG GUID 0 Rank 0] using GPU 0 to perform barrier as devices used by this process are currently unknown. This can potentially cause a hang if this rank to GPU mapping is incorrect. Specify device_ids in barrier() to force use of a particular device, or call init_process_group() with a device_id.
torch._ops.aten.empty.memory_format
torch._ops.aten.as_strided.default
torch._ops.aten.empty.memory_format
torch._ops.aten.as_strided.default
torch.Size([4096, 1, 32000])
torch.Size([4096, 1])
torch._ops.aten._to_copy.default
torch._ops.aten.max.dim
torch._ops.c10d.allreduce_.default
torch._ops.aten.unsqueeze.default
torch._ops.aten.sub_.Tensor
torch._ops.aten.lt.Scalar
torch._ops.aten.ge.Scalar
torch._ops.aten.bitwise_or.Tensor
torch._ops.aten.clone.default
torch._ops.aten.sub.Tensor
torch._ops.aten.lift_fresh.default
torch._ops.aten.index_put_.default
torch._ops.aten.view.default
torch._ops.aten.view.default
torch._ops.aten.arange.start
torch._ops.aten.index.Tensor
torch._ops.aten.clone.default
torch._ops.aten.view.default
torch._ops.aten.lift_fresh.default
torch._ops.aten.index_put_.default
torch._ops.aten.exp.out
torch._ops.aten.sum.dim_IntList
torch._ops.c10d.allreduce_.default
torch._ops.c10d.allreduce_.default
torch._ops.aten.log.default
torch._ops.aten.sub.Tensor
torch._ops.aten.unsqueeze.default
torch._ops.aten.div_.Tensor
torch._ops.aten.empty.memory_format
torch._ops.c10d.barrier.default
```
# 0x02 export TORCH_SHOW_DISPATCH_TRACE=1

We can see at line 476 that **aten::_index_put_impl_** is called, and it appears to use the actual kernel.
> Has aten::_index_put_impl_

# 0x03 Profiler
When I use the profiler, I can also see **aten::_index_put_impl_**:

> Also has aten::_index_put_impl_

### Versions
# Version
```bash
> python collect_env.py
Collecting environment information...
PyTorch version: 2.6.0a0+git78543e6
Is debug build: True
CUDA used to build PyTorch: 12.2
ROCM used to build PyTorch: N/A
OS: Ubuntu 22.04.5 LTS (x86_64)
GCC version: (Ubuntu 11.4.0-1ubuntu1~22.04) 11.4.0
Clang version: Could not collect
CMake version: version 3.31.1
Libc version: glibc-2.35
Python version: 3.9.20 (main, Oct 3 2024, 07:27:41) [GCC 11.2.0] (64-bit runtime)
Python platform: Linux-6.8.0-51-generic-x86_64-with-glibc2.35
Is CUDA available: True
CUDA runtime version: Could not collect
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration: GPU 0: NVIDIA T1000
Nvidia driver version: 535.183.01
cuDNN version: Could not collect
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 39 bits physical, 48 bits virtual
Byte Order: Little Endian
CPU(s): 16
On-line CPU(s) list: 0-15
Vendor ID: GenuineIntel
Model name: 11th Gen Intel(R) Core(TM) i7-11700 @ 2.50GHz
CPU family: 6
Model: 167
Thread(s) per core: 2
Core(s) per socket: 8
Socket(s): 1
Stepping: 1
CPU max MHz: 4900.0000
CPU min MHz: 800.0000
BogoMIPS: 4992.00
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc art arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc cpuid aperfmperf tsc_known_freq pni pclmulqdq dtes64 monitor ds_cpl vmx smx est tm2 ssse3 sdbg fma cx16 xtpr pdcm pcid sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm 3dnowprefetch cpuid_fault epb ssbd ibrs ibpb stibp ibrs_enhanced tpr_shadow flexpriority ept vpid ept_ad fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid mpx avx512f avx512dq rdseed adx smap avx512ifma clflushopt intel_pt avx512cd sha_ni avx512bw avx512vl xsaveopt xsavec xgetbv1 xsaves dtherm ida arat pln pts hwp hwp_notify hwp_act_window hwp_epp hwp_pkg_req vnmi avx512vbmi umip pku ospke avx512_vbmi2 gfni vaes vpclmulqdq avx512_vnni avx512_bitalg avx512_vpopcntdq rdpid fsrm md_clear flush_l1d arch_capabilities
Virtualization: VT-x
L1d cache: 384 KiB (8 instances)
L1i cache: 256 KiB (8 instances)
L2 cache: 4 MiB (8 instances)
L3 cache: 16 MiB (1 instance)
NUMA node(s): 1
NUMA node0 CPU(s): 0-15
Vulnerability Gather data sampling: Mitigation; Microcode
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Mitigation; Clear CPU buffers; SMT vulnerable
Vulnerability Reg file data sampling: Not affected
Vulnerability Retbleed: Mitigation; Enhanced IBRS
Vulnerability Spec rstack overflow: Not affected
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Enhanced / Automatic IBRS; IBPB conditional; RSB filling; PBRSB-eIBRS SW sequence; BHI SW loop, KVM SW loop
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Versions of relevant libraries:
[pip3] numpy==2.0.2
[pip3] optree==0.13.1
[pip3] torch==2.6.0a0+git78543e6
[conda] magma-cuda121 2.6.1 1 pytorch
[conda] mkl-include 2025.0.0 hf2ce2f3_941 https://mirrors.tuna.tsinghua.edu.cn/anaconda/cloud/conda-forge
[conda] mkl-static 2025.0.0 ha770c72_941 https://mirrors.tuna.tsinghua.edu.cn/anaconda/cloud/conda-forge
[conda] numpy 2.0.2 pypi_0 pypi
[conda] optree 0.13.1 pypi_0 pypi
[conda] torch 2.6.0a0+git78543e6 dev_0 <develop>
```
cc @Chillee @ezyang @zou3519 @albanD @samdow
| true
|
2,794,354,421
|
Issue: Illegal Memory Access in Backward Pass of `scaled_dot_product_attention` with Custom Attention Mask
|
bolixinyu
|
open
|
[
"triaged",
"module: sdpa"
] | 0
|
NONE
|
### 🐛 Describe the bug
**Bug Description:**
When using a custom attention mask in the `scaled_dot_product_attention` function, an illegal memory access error (`an illegal memory access was encountered`) occurs during the backward pass when the sequence length of `QK` (query-key) is greater than or equal to 65,536.
**Reproducing code:**
```python
import torch
import torch.nn.functional as F
def torch_attention(q, k,v, n, ks, ts, upcast=False):
if upcast:
q = q.to(torch.float32)
k = k.to(torch.float32)
v = v.to(torch.float32)
attn_mask = generate_mask(n, ks, max((n-ts)//ks,0), ts, "cuda", dtype=q.dtype)
with torch.backends.cuda.sdp_kernel(enable_flash=True):
attention_torch = F.scaled_dot_product_attention(q, k, v, attn_mask=attn_mask)
return attention_torch
def generate_mask(seq_len, ks, nk, ts, device='cpu', dtype=torch.bfloat16):
row_ind = torch.arange(seq_len, device=device, dtype=torch.long)+1
k_ind = torch.arange(nk, device=device, dtype=torch.long)+1
mask_k = (row_ind.unsqueeze(1)>ts) * (torch.floor_divide(row_ind-ts, ks).unsqueeze(1) >= k_ind.unsqueeze(0))
col_ind = torch.arange(seq_len, device=device, dtype=torch.long)+1
ts = torch.tensor([ts]*seq_len, device=device, dtype=torch.long)
nking_token = torch.maximum(torch.floor_divide(row_ind-ts, ks)*ks, torch.tensor([0]*seq_len, device=device, dtype=torch.long))
remain_num = torch.maximum(row_ind-nking_token-ts, torch.tensor([0]*seq_len, device=device, dtype=torch.long))
ts = ts+remain_num
mask_t = (row_ind.unsqueeze(1)>=col_ind.unsqueeze(0)) * ((row_ind-ts).unsqueeze(1)<col_ind.unsqueeze(0))
bool_mask = torch.concat([mask_k, mask_t], dim=1)
final_mask = torch.zeros((seq_len, seq_len+nk), device=device, dtype=dtype)
final_mask = torch.masked_fill(final_mask, ~bool_mask, -torch.inf)
return final_mask
def test_torch_attn(bz, h, n, d, ks, ts):
print(f"{bz=}, {h=}, {n=}, {d=}, {ks=}, {ts=}")
nk = (n-ts)//ks
torch.manual_seed(20)
q = (torch.empty((bz, h, n, d), dtype=torch.bfloat16, device="cuda").normal_(mean=0.0, std=0.5).requires_grad_())
k = (torch.empty((bz, h, n+nk, d), dtype=torch.bfloat16, device="cuda").normal_(mean=0.0, std=0.5).requires_grad_())
v = (torch.empty((bz, h, n+nk, d), dtype=torch.bfloat16, device="cuda").normal_(mean=0.0, std=0.5).requires_grad_())
do = torch.randn((bz, h, n, d), dtype=torch.bfloat16, device="cuda")
attention_torch = torch_attention(q,k, v, n, ks, ts, upcast=True)
gq_torch, gk_torch, gv_torch = torch.autograd.grad(attention_torch, (q, k, v), do)
print(gq_torch-torch.zeros_like(gq_torch))
test_torch_attn(1,32,1024*64,128, ks=16, ts=1024)
```
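For reference, the dense additive mask built by `generate_mask` is itself very large at these settings; a quick size check (plain arithmetic, using the same parameters as the repro above):
```python
n, ks, ts = 1024 * 64, 16, 1024
nk = (n - ts) // ks                 # extra "key" columns prepended to the mask
numel = n * (n + nk)                # mask shape is (n, n + nk)
print(numel * 4 / 2**30, "GiB as float32 (the upcast path)")
print(numel * 2 / 2**30, "GiB as bfloat16")
```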
**Errors:**
```
Traceback (most recent call last):
File "/debug_report.py", line 47, in test_torch_attn
print(gq_torch-torch.zeros_like(gq_torch))
File "/opt/conda/envs/torch220/lib/python3.9/site-packages/torch/_tensor.py", line 461, in __repr__
return torch._tensor_str._str(self, tensor_contents=tensor_contents)
File "/opt/conda/envs/torch220/lib/python3.9/site-packages/torch/_tensor_str.py", line 677, in _str
return _str_intern(self, tensor_contents=tensor_contents)
File "/opt/conda/envs/torch220/lib/python3.9/site-packages/torch/_tensor_str.py", line 597, in _str_intern
tensor_str = _tensor_str(self, indent)
File "/opt/conda/envs/torch220/lib/python3.9/site-packages/torch/_tensor_str.py", line 349, in _tensor_str
formatter = _Formatter(get_summarized_data(self) if summarize else self)
File "/opt/conda/envs/torch220/lib/python3.9/site-packages/torch/_tensor_str.py", line 387, in get_summarized_data
return torch.stack([get_summarized_data(x) for x in self])
File "/opt/conda/envs/torch220/lib/python3.9/site-packages/torch/_tensor_str.py", line 387, in <listcomp>
return torch.stack([get_summarized_data(x) for x in self])
File "/opt/conda/envs/torch220/lib/python3.9/site-packages/torch/_tensor_str.py", line 385, in get_summarized_data
return torch.stack([get_summarized_data(x) for x in (start + end)])
File "/opt/conda/envs/torch220/lib/python3.9/site-packages/torch/_tensor_str.py", line 385, in <listcomp>
return torch.stack([get_summarized_data(x) for x in (start + end)])
File "/opt/conda/envs/torch220/lib/python3.9/site-packages/torch/_tensor_str.py", line 385, in get_summarized_data
return torch.stack([get_summarized_data(x) for x in (start + end)])
File "/opt/conda/envs/torch220/lib/python3.9/site-packages/torch/_tensor_str.py", line 385, in <listcomp>
return torch.stack([get_summarized_data(x) for x in (start + end)])
File "/opt/conda/envs/torch220/lib/python3.9/site-packages/torch/_tensor_str.py", line 375, in get_summarized_data
return torch.cat(
RuntimeError: CUDA error: an illegal memory access was encountered
CUDA kernel errors might be asynchronously reported at some other API call, so the stacktrace below might be incorrect.
For debugging consider passing CUDA_LAUNCH_BLOCKING=1.
Compile with `TORCH_USE_CUDA_DSA` to enable device-side assertions.
```
### Versions
This issue is observed on an **NVIDIA A800 GPU** with **PyTorch 2.2.0**.
| true
|
2,794,344,068
|
[Break XPU][Inductor UT] Skip newly added test_logaddexp as logaddexp_xpu not implemented for ComplexFloat.
|
etaf
|
closed
|
[
"open source",
"topic: not user facing",
"module: inductor",
"ciflow/inductor",
"ciflow/xpu"
] | 1
|
COLLABORATOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #145039
* #145055
* #145038
* #145037
* #145036
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @ColinPeppler @amjames @desertfire @chauhang @aakhundov
| true
|
2,794,344,004
|
[Break XPU][Inductor UT] Generalize device bias code newly added in test_async_compile.py
|
etaf
|
closed
|
[
"open source",
"topic: not user facing",
"module: inductor",
"ciflow/inductor"
] | 1
|
COLLABORATOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #145039
* #145055
* __->__ #145038
* #145037
* #145036
As title.
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @ColinPeppler @amjames @desertfire @chauhang @aakhundov
| true
|
2,794,343,664
|
[Break XPU][Inductor UT] Skip newly added test case `test__int_mm` for XPU
|
etaf
|
closed
|
[
"open source",
"topic: not user facing",
"module: inductor",
"ciflow/inductor"
] | 1
|
COLLABORATOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #145039
* #145055
* #145038
* __->__ #145037
* #145036
Skip as int mm not implemented for XPU.
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @ColinPeppler @amjames @desertfire @chauhang @aakhundov
| true
|
2,794,343,612
|
[Break XPU][Inductor UT] Generalize device-bias code in test_fuzzer.py.
|
etaf
|
closed
|
[
"open source",
"topic: not user facing",
"module: inductor",
"ciflow/inductor"
] | 3
|
COLLABORATOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #145039
* #145055
* #145038
* #145037
* __->__ #145036
As title.
| true
|
2,794,324,756
|
[mps/inductor] Introduce is_mps_backend/skip_if_mps decorators.
|
dcci
|
closed
|
[
"Merged",
"topic: not user facing",
"module: mps",
"ciflow/mps",
"module: inductor",
"ciflow/inductor"
] | 3
|
MEMBER
|
cc @kulinseth @albanD @malfet @DenisVieriu97 @jhavukainen @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @ColinPeppler @amjames @desertfire @chauhang @aakhundov
| true
|
2,794,306,935
|
Use GiB on the axis of memory viz to match how we textually print it.
|
zdevito
|
closed
|
[
"Stale"
] | 3
|
CONTRIBUTOR
|
I didn't do this before because d3 doesn't really support it. However, I argued with an LLM for a while to get it to reproduce basically what d3's nice axis behavior does, but working for 2^(10c) multiples.
| true
|
2,794,268,866
|
Pass stack trace to generated split graph module during model splitting
|
zijianshen
|
closed
|
[
"fb-exported",
"release notes: fx",
"fx"
] | 8
|
NONE
|
Summary: For debugging purposes, we may want to maintain the stack trace on the nodes of the graph module.
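A minimal sketch (not the actual diff) of what preserving stack traces looks like when nodes are copied into a split submodule's graph:
```python
import torch.fx as fx

def copy_node_with_stack_trace(dst_graph: fx.Graph, node: fx.Node, env: dict) -> fx.Node:
    # node_copy remaps inputs through `env`; carrying stack_trace over keeps
    # the original source location visible on the generated split graph module.
    new_node = dst_graph.node_copy(node, lambda n: env[n])
    new_node.stack_trace = node.stack_trace
    return new_node
```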
Test Plan: More test plans in D68132673
Reviewed By: faran928
Differential Revision: D68302850
cc @ezyang @SherlockNoMad @EikanWang @jgong5 @wenzhe-nrv
| true
|
2,794,265,479
|
Inference super slow with torchvision model fasterrcnn_mobilenet_v3_large_fpn
|
felipequentino
|
closed
|
[] | 1
|
NONE
|
### 🐛 Describe the bug
I have code that runs inference on COCO classes using the fasterrcnn_mobilenet_v3_large_fpn model. I have been using this code for more than 5 months, but suddenly inference slowed down drastically, processing less than 1 frame per second. I already reinstalled CUDA, cuDNN, and PyTorch at different versions and nothing seems to work.
Here is a generic script that runs inference with the mentioned model; it shows the same performance as my code. I'm totally lost and don't know what to do.
To reinforce: I haven't changed anything in the source code.
```
import cv2
import torch
from torchvision import transforms
from torchvision.models.detection import fasterrcnn_mobilenet_v3_large_fpn
from torchvision.ops import nms
def main():
cap = cv2.VideoCapture(0)
model = fasterrcnn_mobilenet_v3_large_fpn(weights="DEFAULT").eval()
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
model.to(device)
t = transforms.ToTensor()
while True:
ret, frame = cap.read()
if not ret:
break
x = t(frame).unsqueeze(0).to(device)
with torch.no_grad():
o = model(x)[0]
b, l, s = o["boxes"], o["labels"], o["scores"]
i = (l == 1)
b, s = b[i], s[i]
k = nms(b, s, 0.7)
b, s = b[k].cpu().numpy(), s[k].cpu().numpy()
for box, score in zip(b, s):
box = box.astype(int)
cv2.rectangle(frame, (box[0], box[1]), (box[2], box[3]), (0,255,0), 2)
cv2.putText(frame, str(round(score.item(), 2)), (box[0], box[1]-10),
cv2.FONT_HERSHEY_SIMPLEX, 0.5, (0,255,0), 2)
cv2.imshow("Detection", frame)
if cv2.waitKey(1) & 0xFF == ord('q'):
break
cap.release()
cv2.destroyAllWindows()
if __name__ == "__main__":
main()
```
Here is a benchmark script that ChatGPT gave me; it confirms CUDA is being used:
```python
import torch
print("------- PyTorch Information -------")
print(f"PyTorch Version: {torch.__version__}")
print(f"CUDA Available: {torch.cuda.is_available()}")
print(f"CUDA Version: {torch.version.cuda}")
print(f"cuDNN Version: {torch.backends.cudnn.version()}")
if torch.cuda.is_available():
print(f"GPU Name: {torch.cuda.get_device_name(0)}")
print(f"Available CUDA Devices: {torch.cuda.device_count()}")
print(f"Current Device: {torch.cuda.current_device()}")
# Create a random tensor
x = torch.rand(5, 3)
print(f"Tensor Device: {x.device}")
# Move to GPU
x = x.cuda()
print(f"New Tensor Device: {x.device}")
# Enable benchmark mode
torch.backends.cudnn.benchmark = True
# Check memory usage
print(f"Allocated Memory: {torch.cuda.memory_allocated(0) / 1024**2:.2f} MB")
print(f"Reserved Memory: {torch.cuda.memory_reserved(0) / 1024**2:.2f} MB")
# Clear the cache
torch.cuda.empty_cache()
```
Output:
```
------- PyTorch Information -------
PyTorch Version: 2.4.0+cu121
CUDA Available: True
CUDA Version: 12.1
cuDNN Version: 90100
GPU Name: NVIDIA GeForce GTX 1650
Available CUDA Devices: 1
Current Device: 0
Tensor Device: cpu
New Tensor Device: cuda:0
Allocated Memory: 0.00 MB
Reserved Memory: 2.00 MB
```
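To narrow down whether the model itself or the capture/display loop is the bottleneck, a simple timing sketch (assuming the same model and GPU as above):
```python
import time
import torch
from torchvision.models.detection import fasterrcnn_mobilenet_v3_large_fpn

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
model = fasterrcnn_mobilenet_v3_large_fpn(weights="DEFAULT").eval().to(device)
x = torch.rand(1, 3, 480, 640, device=device)  # roughly webcam-sized input

def sync():
    if device.type == "cuda":
        torch.cuda.synchronize()

with torch.no_grad():
    for _ in range(3):   # warm-up: first runs include CUDA init and autotuning
        model(x)
    sync()
    start = time.perf_counter()
    for _ in range(20):
        model(x)
    sync()
print("avg model time per frame:", (time.perf_counter() - start) / 20, "s")
```
If the average model time here is much lower than the end-to-end frame time of the webcam loop, the slowdown is likely outside the model (capture, preprocessing, or display).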
### Versions
PyTorch version: 2.4.0+cu121
Is debug build: False
CUDA used to build PyTorch: 12.1
ROCM used to build PyTorch: N/A
OS: Ubuntu 20.04.6 LTS (x86_64)
GCC version: (Ubuntu 9.4.0-1ubuntu1~20.04.2) 9.4.0
Clang version: Could not collect
CMake version: version 3.16.3
Libc version: glibc-2.31
Python version: 3.10.16 (main, Dec 4 2024, 08:53:37) [GCC 9.4.0] (64-bit runtime)
Python platform: Linux-5.15.0-130-generic-x86_64-with-glibc2.31
Is CUDA available: True
CUDA runtime version: 10.1.243
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration: GPU 0: NVIDIA GeForce GTX 1650
Nvidia driver version: 560.35.05
cuDNN version: Could not collect
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Byte Order: Little Endian
Address sizes: 39 bits physical, 48 bits virtual
CPU(s): 12
On-line CPU(s) list: 0-11
Thread(s) per core: 2
Core(s) per socket: 6
Socket(s): 1
NUMA node(s): 1
Vendor ID: GenuineIntel
CPU family: 6
Model: 141
Model name: 11th Gen Intel(R) Core(TM) i5-11400H @ 2.70GHz
Stepping: 1
CPU MHz: 2700.000
CPU max MHz: 4500.0000
CPU min MHz: 800.0000
BogoMIPS: 5376.00
Virtualization: VT-x
L1d cache: 288 KiB
L1i cache: 192 KiB
L2 cache: 7.5 MiB
L3 cache: 12 MiB
NUMA node0 CPU(s): 0-11
Vulnerability Gather data sampling: Mitigation; Microcode
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Not affected
Vulnerability Reg file data sampling: Not affected
Vulnerability Retbleed: Not affected
Vulnerability Spec rstack overflow: Not affected
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl and seccomp
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Enhanced / Automatic IBRS; IBPB conditional; RSB filling; PBRSB-eIBRS SW sequence; BHI SW loop, KVM SW loop
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc art arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc cpuid aperfmperf tsc_known_freq pni pclmulqdq dtes64 monitor ds_cpl vmx est tm2 ssse3 sdbg fma cx16 xtpr pdcm pcid sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm 3dnowprefetch cpuid_fault epb cat_l2 invpcid_single cdp_l2 ssbd ibrs ibpb stibp ibrs_enhanced tpr_shadow vnmi flexpriority ept vpid ept_ad fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid rdt_a avx512f avx512dq rdseed adx smap avx512ifma clflushopt clwb intel_pt avx512cd sha_ni avx512bw avx512vl xsaveopt xsavec xgetbv1 xsaves split_lock_detect dtherm ida arat pln pts hwp hwp_notify hwp_act_window hwp_epp hwp_pkg_req avx512vbmi umip pku ospke avx512_vbmi2 gfni vaes vpclmulqdq avx512_vnni avx512_bitalg avx512_vpopcntdq rdpid movdiri movdir64b fsrm avx512_vp2intersect md_clear flush_l1d arch_capabilities
Versions of relevant libraries:
[pip3] mypy-extensions==1.0.0
[pip3] numpy==2.1.0
[pip3] nvidia-cublas-cu12==12.1.3.1
[pip3] nvidia-cuda-cupti-cu12==12.1.105
[pip3] nvidia-cuda-nvrtc-cu12==12.1.105
[pip3] nvidia-cuda-runtime-cu12==12.1.105
[pip3] nvidia-cudnn-cu12==9.1.0.70
[pip3] nvidia-cufft-cu12==11.0.2.54
[pip3] nvidia-curand-cu12==10.3.2.106
[pip3] nvidia-cusolver-cu12==11.4.5.107
[pip3] nvidia-cusparse-cu12==12.1.0.106
[pip3] nvidia-nccl-cu12==2.20.5
[pip3] nvidia-nvjitlink-cu12==12.6.20
[pip3] nvidia-nvtx-cu12==12.1.105
[pip3] torch==2.4.0
[pip3] torchaudio==2.4.0
[pip3] torchvision==0.19.0
[pip3] triton==3.0.0
[conda] Could not collect
| true
|
2,794,260,981
|
[Easy] Replace paper description with link to make a concise description.
|
zeshengzong
|
closed
|
[
"triaged",
"open source",
"Merged",
"ciflow/trunk",
"topic: not user facing"
] | 16
|
CONTRIBUTOR
|
The descriptions on the [Transformer](https://pytorch.org/docs/main/generated/torch.nn.Transformer.html), [TransformerEncoderLayer](https://pytorch.org/docs/main/generated/torch.nn.TransformerEncoderLayer.html), and [TransformerDecoderLayer](https://pytorch.org/docs/main/generated/torch.nn.TransformerDecoderLayer.html) pages contain author and paper details that seem redundant for users who just want to know how to use the modules. Replace them with a link to the paper; users can follow it if they want to learn more.
**Test Result**
**Before**



**After**



| true
|
2,794,189,309
|
[draft export] count how many times a data-dep error shows up
|
pianpwk
|
closed
|
[
"fb-exported",
"Merged",
"ciflow/trunk",
"release notes: export"
] | 5
|
CONTRIBUTOR
|
Summary: maybe this is helpful?
Test Plan: draft_export
Differential Revision: D68303934
| true
|
2,794,171,803
|
Moved .all() checks for distributions to _is_all_true
|
Chillee
|
closed
|
[
"Merged",
"ciflow/trunk",
"topic: not user facing"
] | 6
|
COLLABORATOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #145024
* #145059
* __->__ #145029
| true
|
2,794,164,253
|
[aoti] Remove torch.ops.aten._assert_tensor_metadata.default in post_grad_pass
|
yushangdi
|
closed
|
[
"fb-exported",
"Merged",
"ciflow/trunk",
"topic: not user facing",
"module: inductor",
"ciflow/inductor"
] | 6
|
CONTRIBUTOR
|
Summary:
Remove torch.ops.aten._assert_tensor_metadata.default in the post_grad pass because this op blocks fusion.
This should not have any effect on the result, because the op would not show up in the final AOTI-compiled model anyway (the assertion has no effect).
A real example where this improves performance:
In the example below, the post-grad graph would contain `torch.ops.aten._assert_tensor_metadata.default` because of PR https://github.com/pytorch/pytorch/pull/142420. This op is added when functionalizing aten.to.
We want the `add` node from `linear` to be fused with the rest of the pointwise ops, instead of being fused with the `mm` from `linear`.
```
class Model(torch.nn.Module):
def __init__(self, input_dim, hidden_dim):
super(Model, self).__init__()
self.linear = nn.Linear(input_dim, hidden_dim).half()
self.rms_norm = nn.RMSNorm(hidden_dim)
def forward(self, x):
linear_458 = self.linear(x) # Linear layer with weights'
# mimic the torchtune rms norm: /torchtune/torchtune/modules/rms_norm.py
linear_458 = linear_458.to(torch.float32)
rms_norm_34 = self.rms_norm(linear_458) # RMS Normalization
sigmoid_168 = torch.sigmoid(rms_norm_34) # Sigmoid activation function
mul_168 = sigmoid_168 * rms_norm_34 # Element-wise multiplication
return mul_168
def main():
with torch.no_grad():
input_dim = 512
hidden_dim = 256
batch_size = 32
model = Model(input_dim, hidden_dim).to("cuda")
example_inputs = (
torch.randn(batch_size, input_dim).to("cuda").to(torch.float16),
)
ep = torch.export.export(model, example_inputs)
package_path = torch._inductor.aoti_compile_and_package(ep)
```
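For illustration, a minimal sketch of what stripping such assert nodes in a post-grad FX pass can look like (a hypothetical helper, not the exact code added in this diff):
```python
import torch
from torch.fx import GraphModule

def remove_assert_tensor_metadata(gm: GraphModule) -> GraphModule:
    # The assert op returns nothing and has no users, so erasing it is safe;
    # removing it lets the surrounding pointwise ops fuse across the boundary.
    target = torch.ops.aten._assert_tensor_metadata.default
    for node in list(gm.graph.nodes):
        if node.op == "call_function" and node.target is target:
            gm.graph.erase_node(node)
    gm.graph.lint()
    gm.recompile()
    return gm
```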
Test Plan:
CI
Differential Revision: D68303114
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @ColinPeppler @amjames @desertfire @chauhang @aakhundov
| true
|
2,794,160,648
|
Implement a storage reader and writer for HuggingFace
|
ankitageorge
|
closed
|
[
"oncall: distributed",
"fb-exported"
] | 5
|
CONTRIBUTOR
|
Summary: Currently, torchtune users have to download from and upload to Hugging Face manually, since our existing storage reader/writer only reads and writes locally. This new storage reader/writer can work with Hugging Face directly.
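A rough usage sketch of how such a reader/writer plugs into the existing DCP entry points (shown with the current filesystem classes; the Hugging Face-backed equivalents would slot into the same hooks):
```python
import torch.nn as nn
import torch.distributed.checkpoint as dcp
from torch.distributed.checkpoint import FileSystemWriter, FileSystemReader

model = nn.Linear(4, 4)
state_dict = {"model": model.state_dict()}

# Today: checkpoints go through a local path via the filesystem storage layer.
dcp.save(state_dict, storage_writer=FileSystemWriter("/tmp/ckpt"))
dcp.load(state_dict, storage_reader=FileSystemReader("/tmp/ckpt"))

# With this change, a Hugging Face storage writer/reader (names per the test,
# not a confirmed public API) would be passed via the same
# storage_writer/storage_reader arguments, removing the manual download/upload.
```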
Test Plan:
N6381603 shows the functionality of saving and loading a checkpoint working end to end with the Hugging Face reader and writer.
buck2 test fbcode//mode/opt fbcode//caffe2/test/distributed/checkpoint:test_hugging_face_storage
File changed: fbcode//caffe2/test/distributed/checkpoint/test_hugging_face_storage.py
Buck UI: https://www.internalfb.com/buck2/4f13eb8c-8171-47cc-bfcf-07694204ad49
Test UI: https://www.internalfb.com/intern/testinfra/testrun/844425328401897
Network: Up: 0B Down: 0B (reSessionID-694b7fb8-00cc-4902-b3ef-6402eba81677)
Executing actions. Remaining 0/2 0.1s exec time total
Command: test. Finished 1 local
Time elapsed: 33.1s
Tests finished: Pass 2. Fail 0. Fatal 0. Skip 0. Build failure 0
Differential Revision: D67407067
cc @H-Huang @awgu @kwen2501 @wanchaol @fegin @fduwjj @wz337 @wconstab @d4l3k @c-p-i-o @LucasLLC @MeetVadakkanchery @mhorowitz @pradeepfn @ekr0
| true
|