| id | title | user | state | labels | comments | author_association | body | is_title |
|---|---|---|---|---|---|---|---|---|
2,836,267,931 | [inductor][idea] Defer realize/inline decisions | jansel | open | [
"triaged",
"enhancement",
"oncall: pt2",
"module: inductor"
] | 0 | CONTRIBUTOR | ## Background
Currently, inductor lowering has the concept of a realized versus unrealized tensor. Suppose you have:
```py
def example(a, b):
    x = a + b
    y = torch.sin(x)
    return y
```
`x` will get mapped to:
```py
def inner_fn_x(index):
    tmp0 = ops.load("a", index[0])
    tmp1 = ops.load("b", index[0])
    tmp2 = ops.add(tmp0, tmp1)
    return tmp2
```
but `x` will be *unrealized*, so when we generate `y` we will inline the body of `x` into `y`:
```py
def inner_fn_y(index): # with x unrealized
    tmp0 = ops.load("a", index[0])
    tmp1 = ops.load("b", index[0])
    tmp2 = ops.add(tmp0, tmp1)
    tmp3 = ops.sin(tmp2)
    return tmp3
```
In contrast, if `x` is *realized* (either because it got too big, or because it is needed by an op that requires realized inputs), you will get a different IR:
```py
def inner_fn_y(index): # with x realized
    tmp0 = ops.load("x", index[0])
    tmp1 = ops.sin(tmp0)
    return tmp1
```
In the realized case, these two kernels will then get fused in the scheduler; in the unrealized case, we will never create an `ir.Buffer` for `x`, and the scheduler will not see `x`.
## Idea
Rather than making this realize/inline decision at lowering time (or changing our mind partway through lowering by calling `x.realize()`), we could defer it by always generating IR like:
```py
def inner_fn_y(index):
    tmp0 = ops.inline_or_load_from("x", index)
    tmp1 = ops.sin(tmp0)
    return tmp1
```
This allows us to change our mind about realize-vs-inline decisions later in the compile process. If `.realize()` is called after lowering, we could retroactively change the prior kernels to load from the realized buffer.
In the case where the load from `x` is preventing fusion (doesn't happen in this example, but could happen if there is slicing), we could decide to inline `x` at fusion time. @eellison said he saw an example of this.
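The deferral can be sketched as a toy model (plain Python, not inductor's real IR; all names here are illustrative): the producer's body is a thunk plus a late-bound `realized` flag, so the same `inline_or_load_from` call can resolve differently after lowering.

```python
class Deferred:
    """Toy stand-in for ops.inline_or_load_from: the realize/inline
    choice is a late-bound flag, not baked into the IR at lowering time."""

    def __init__(self, name, inner_fn):
        self.name = name
        self.inner_fn = inner_fn
        self.realized = False  # may be flipped after lowering

    def inline_or_load_from(self, index):
        if self.realized:
            # realized: consumers load from the named buffer
            return f'ops.load("{self.name}", {index})'
        # unrealized: inline the producer's body into the consumer
        return self.inner_fn(index)

x = Deferred("x", lambda i: f'ops.add(ops.load("a", {i}), ops.load("b", {i}))')
inlined = f"ops.sin({x.inline_or_load_from('index[0]')})"
x.realized = True  # change our mind after lowering
loaded = f"ops.sin({x.inline_or_load_from('index[0]')})"
```

With the flag unset, `inlined` matches the unrealized IR above; after flipping it, `loaded` matches the realized IR.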
cc @chauhang @penguinwu @voznesenskym @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @amjames @desertfire @aakhundov | true |
2,836,257,174 | Dynamo should consider tensor mutation when reconstructing generator | guilhermeleobas | open | [
"triaged",
"oncall: pt2",
"module: dynamo",
"dynamo-side-effects"
] | 0 | COLLABORATOR | In PR #145223, we added support for reconstructing a generator only when no side effects are present. However, we do not currently account for tensor mutations. This issue tracks the missing support for detecting tensor mutations.
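A plain-Python illustration of the missing case (a list stands in for a tensor; names are illustrative): advancing a generator can mutate its inputs in place, so reconstructing a partially consumed generator would also have to replay that mutation.

```python
def gen(buf):
    buf[0] += 1  # in-place mutation: the side effect that must be detected
    yield buf[0]
    buf[0] += 1
    yield buf[0]

buf = [0]
g = gen(buf)
first = next(g)  # consuming one item has already mutated `buf`
```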
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @chauhang @amjames | true |
2,836,236,891 | [BE]: Inline special functions for MPS | Skylion007 | closed | [
"open source",
"better-engineering",
"Merged",
"ciflow/trunk",
"topic: not user facing"
] | 6 | COLLABORATOR | These header functions should be inlined for consistency and to avoid translation unit / symbol issues. | true |
2,836,225,749 | [inductor] Improve type annotations in _inductor/pattern_matcher.py | rec | closed | [
"open source",
"Merged",
"ciflow/trunk",
"release notes: fx",
"topic: not user facing",
"fx",
"module: inductor",
"ciflow/inductor",
"suppress-api-compatibility-check",
"suppress-bc-linter"
] | 5 | COLLABORATOR | Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #146626
cc @ezyang @SherlockNoMad @EikanWang @jgong5 @wenzhe-nrv @voznesenskym @penguinwu @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @amjames @desertfire @chauhang @aakhundov | true |
2,836,213,182 | Move capture_provenance to make_node_impl | angelayi | closed | [
"Merged",
"ciflow/trunk",
"release notes: fx",
"fx",
"ciflow/inductor"
] | 15 | CONTRIBUTOR | Previously we were only logging `make_user_impl` implementations, which only get triggered for operations done on python SymInts, not cpp SymInts. Instead, `make_node_impl` will get triggered for both python and cpp SymInt operations.
cc @ezyang @SherlockNoMad @EikanWang @jgong5 @wenzhe-nrv | true |
2,836,167,700 | [Flex Attention] Cannot determine truth value of Relational | alexdremov | closed | [
"triaged",
"oncall: pt2",
"module: pt2-dispatcher",
"module: flex attention"
] | 2 | CONTRIBUTOR | ### 🐛 Describe the bug
Flex attention autotune causes `Cannot determine truth value of Relational`
To reproduce, run this benchmark: https://gist.github.com/alexdremov/0f143fd30168588b13ed07a2363c7cb4
### Versions
PyTorch version: 2.7.0.dev20250206+cu124
Is debug build: False
CUDA used to build PyTorch: 12.4
ROCM used to build PyTorch: N/A
OS: Ubuntu 22.04.4 LTS (x86_64)
GCC version: (Ubuntu 11.4.0-1ubuntu1~22.04) 11.4.0
Clang version: 14.0.0-1ubuntu1.1
CMake version: version 3.22.1
Libc version: glibc-2.35
Python version: 3.10.12 (main, Nov 20 2023, 15:14:05) [GCC 11.4.0] (64-bit runtime)
Python platform: Linux-5.4.210-39.1-x86_64-with-glibc2.35
Is CUDA available: True
CUDA runtime version: 11.5.119
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration:
GPU 0: NVIDIA A800-SXM4-80GB
Nvidia driver version: 550.54.14
cuDNN version: Could not collect
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
...
Versions of relevant libraries:
[pip3] mypy-extensions==1.0.0
[pip3] numpy==2.1.2
[pip3] nvidia-cublas-cu12==12.4.5.8
[pip3] nvidia-cuda-cupti-cu12==12.4.127
[pip3] nvidia-cuda-nvrtc-cu12==12.4.127
[pip3] nvidia-cuda-runtime-cu12==12.4.127
[pip3] nvidia-cudnn-cu12==9.1.0.70
[pip3] nvidia-cufft-cu12==11.2.1.3
[pip3] nvidia-curand-cu12==10.3.5.147
[pip3] nvidia-cusolver-cu12==11.6.1.9
[pip3] nvidia-cusparse-cu12==12.3.1.170
[pip3] nvidia-cusparselt-cu12==0.6.2
[pip3] nvidia-nccl-cu12==2.21.5
[pip3] nvidia-nvjitlink-cu12==12.4.127
[pip3] nvidia-nvtx-cu12==12.4.127
[pip3] pytorch-triton==3.2.0+git4b3bb1f8
[pip3] torch==2.7.0.dev20250206+cu124
[pip3] torchaudio==2.6.0.dev20250206+cu124
[pip3] torchvision==0.22.0.dev20250206+cu124
cc @chauhang @penguinwu @zou3519 @bdhirsh @yf225 @Chillee @drisspg @yanboliang @BoyuanFeng | true |
2,836,139,920 | bug fix: ensure 4d input in _scaled_dot_product_attention_math_mps | hellopahe | closed | [
"triaged",
"open source",
"Merged",
"topic: bug fixes",
"release notes: mps",
"ciflow/mps"
] | 7 | CONTRIBUTOR | This PR addresses an issue in the MPS backend where `_scaled_dot_product_attention_math_mps` cannot automatically treat a 3d input like (num_heads, seq_len, query_dim) as (1, num_heads, seq_len, query_dim), as CPU and CUDA can. The fix adds a util function that ensures a 4d shape.
The issue was found in https://github.com/hiyouga/LLaMA-Factory/issues/6835, in [transformers qwen2_vl](https://github.com/huggingface/transformers/blob/1590c664306766f32ba68c50e67f14d61b16925d/src/transformers/models/qwen2_vl/modeling_qwen2_vl.py#L373C14-L373C93), 3d q/k/v were passed into sdpa function, which lead to an error.
For consistency, since this pattern might pop up elsewhere in the transformers codebase, I think it makes sense to maintain the same behavior across all platforms.
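The promotion the fix performs can be sketched shapes-only (the actual util lives in the MPS backend C++ code; `ensure_4d` is a hypothetical name):

```python
def ensure_4d(shape):
    """Promote a 3d shape (H, S, D) to 4d (1, H, S, D); leave 4d shapes alone."""
    return (1, *shape) if len(shape) == 3 else tuple(shape)
```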
---
reproduce code:
```
import torch
import torch.nn.functional as F
head_num, seq_len, embed_dim = 16, 16, 80
bsz = 1
q = torch.randn(head_num, seq_len, embed_dim)
k = torch.randn(head_num, seq_len, embed_dim)
v = torch.randn(head_num, seq_len, embed_dim)
attention_mask = torch.ones(1, seq_len, seq_len)
oo_cpu = F.scaled_dot_product_attention(
    q.to("cpu"),
    k.to("cpu"),
    v.to("cpu"),
    attention_mask.to("cpu"),
    dropout_p=0.0
)
if torch.backends.mps.is_available():
    oo_mps = F.scaled_dot_product_attention(
        q.to("mps"),
        k.to("mps"),
        v.to("mps"),
        attention_mask.to("mps"),
        dropout_p=0.0
    )
    assert torch.allclose(oo_cpu, oo_mps.to("cpu"), atol=1e-5)
```
error outputs:
```
Traceback (most recent call last):
File "/opt/homebrew/Caskroom/miniconda/base/envs/torch-dev/lib/python3.10/site-packages/IPython/core/interactiveshell.py", line 3577, in run_code
exec(code_obj, self.user_global_ns, self.user_ns)
File "<ipython-input-2-5169b8d2c5dd>", line 21, in <module>
oo_mps = F.scaled_dot_product_attention(
IndexError: Dimension out of range (expected to be in range of [-3, 2], but got 3)
```
hardware and envs:
```
torch 2.6.0
apple m3 max
```
---
| true |
2,836,128,108 | Fix inductor non-stable argsort/sort test | nicholasw-gc | open | [
"triaged",
"open source",
"ciflow/trunk",
"topic: not user facing",
"module: inductor"
] | 15 | CONTRIBUTOR | - Prevent the inductor test for argsort/sort from wrongly failing when the `stable=False` argsort/sort output differs from eager PyTorch but is still a valid argsort output.
- Add functionality to allow alternative assert_equal functions in inductor tests for future cases.
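The notion of "differs from eager but still valid" can be sketched in pure Python (hypothetical helper, lists standing in for tensors): an unstable argsort result is acceptable as long as it is a permutation that gathers the values into sorted order.

```python
def is_valid_argsort(values, order):
    """True if `order` is a permutation of indices that sorts `values`."""
    is_permutation = sorted(order) == list(range(len(values)))
    gathered = [values[i] for i in order]
    return is_permutation and gathered == sorted(gathered)
```

For values with duplicates, several index orders pass this check even though only one matches eager's stable output.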
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @amjames @desertfire @chauhang @aakhundov | true |
2,836,119,518 | aten op full_like has kwarg that prepare_pt2e does not expect | Erik-Lundell | closed | [
"oncall: quantization"
] | 1 | NONE | ### 🐛 Describe the bug
When quantizing a `torch.full_like()` op, it gets stuck when calling `prepare_pt2e` in `_maybe_insert_input_observers_for_node`. There is an assert that checks that aten ops (except a few) don't have kwargs, but `aten.full_like` does. I therefore get the following error message:
```
# Clone has a memory_format kwarg, zeros_like has a pin_memory kwarg, and
# gelu has an approximate kwarg that persist in exported graph.
# This is just a work around for these.
assert (
    node.target == torch.ops.aten.clone.default
    or node.target == torch.ops.aten.zeros_like.default
    or node.target == torch.ops.aten.gelu.default
>   or len(node.kwargs) == 0
), " expecting kwargs for aten op IR to be empty"
E   AssertionError: expecting kwargs for aten op IR to be empty
```
I guess the easy solution would be to add `full_like` to the list; have I missed something to consider?
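The suggested change amounts to extending the allow-list rather than asserting that kwargs are empty. A sketch (op names as strings for illustration; the real check compares `node.target` against `torch.ops.aten.*` overloads):

```python
OPS_WITH_EXPECTED_KWARGS = {
    "aten.clone.default",       # memory_format
    "aten.zeros_like.default",  # pin_memory
    "aten.gelu.default",        # approximate
    "aten.full_like.default",   # proposed addition
}

def kwargs_ok(op_name, kwargs):
    """Allow kwargs only for ops known to carry them through export."""
    return op_name in OPS_WITH_EXPECTED_KWARGS or len(kwargs) == 0
```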
Minimal example to reproduce:
torch version: 2.7.0.dev20250131+cpu
torchao version: 0.8.0+git11333ba2
```
import torch
from torch.ao.quantization.quantize_pt2e import prepare_pt2e
from torch.ao.quantization.quantizer import Quantizer, QuantizationSpec
from torch.ao.quantization.quantizer.utils import (
_annotate_input_qspec_map,
_annotate_output_qspec,
)
from torch.ao.quantization.observer import HistogramObserver
class FullLike(torch.nn.Module):
    def forward(self, t: torch.Tensor):
        return torch.full_like(t, 1.)

class MyQuantizer(Quantizer):
    def annotate(self, model: torch.fx.GraphModule) -> torch.fx.GraphModule:
        qspec = QuantizationSpec(torch.int8, HistogramObserver, qscheme=torch.per_tensor_symmetric)
        for node in model.graph.nodes:
            for input_node in node.all_input_nodes:
                _annotate_input_qspec_map(node, input_node, qspec)
            _annotate_output_qspec(node, qspec)
        return model

    def validate(self, model: torch.fx.GraphModule) -> None:
        pass
model = FullLike()
exported_model = torch.export.export(model, (torch.randn(10),))
prepared_model = prepare_pt2e(exported_model.graph_module, MyQuantizer())
```
Thanks!
### Versions
[pip3] executorch==0.6.0a0+a9595f9
[pip3] flake8==6.1.0
[pip3] flake8-breakpoint==1.1.0
[pip3] flake8-bugbear==24.4.26
[pip3] flake8-comprehensions==3.14.0
[pip3] flake8-plugin-utils==1.3.3
[pip3] flake8-pyi==23.5.0
[pip3] mypy==1.14.1
[pip3] mypy-extensions==1.0.0
[pip3] numpy==2.2.2
[pip3] nvidia-cublas-cu12==12.4.5.8
[pip3] nvidia-cuda-cupti-cu12==12.4.127
[pip3] nvidia-cuda-nvrtc-cu12==12.4.127
[pip3] nvidia-cuda-runtime-cu12==12.4.127
[pip3] nvidia-cudnn-cu12==9.1.0.70
[pip3] nvidia-cufft-cu12==11.2.1.3
[pip3] nvidia-curand-cu12==10.3.5.147
[pip3] nvidia-cusolver-cu12==11.6.1.9
[pip3] nvidia-cusparse-cu12==12.3.1.170
[pip3] nvidia-cusparselt-cu12==0.6.2
[pip3] nvidia-nccl-cu12==2.21.5
[pip3] nvidia-nvjitlink-cu12==12.4.127
[pip3] nvidia-nvtx-cu12==12.4.127
[pip3] pytorch_models==0.0.1
[pip3] torch==2.7.0.dev20250131+cpu
[pip3] torchao==0.8.0+git11333ba2
[pip3] torchaudio==2.6.0.dev20250131+cpu
[pip3] torchsr==1.0.4
[pip3] torchvision==0.22.0.dev20250131+cpu
[pip3] triton==3.2.0
[conda] numpy 1.26.4 pypi_0 pypi
cc @jerryzh168 @jianyuh @raghuramank100 @jamesr66a @vkuzo @jgong5 @Xia-Weiwen @leslie-fang-intel @msaroufim | true |
2,836,087,855 | Enable qint8 and quint8 add for AArch64 using ACL directly | davsva01 | closed | [
"module: cpu",
"triaged",
"open source",
"release notes: quantization",
"release notes: releng",
"arm priority"
] | 5 | NONE | This enables qint8 and quint8 add for AArch64 through Arm Compute Library (ACL) directly.
It’s based on changes in PR #145942 which enables the use of ACL directly in ATen.
Relative performance improvement using OMP_NUM_THREADS=1 is ~15x; using OMP_NUM_THREADS=32 it is ~5.4x.
Script to benchmark quantised add performance:
```
import torch
import torch.profiler as profiler
a_f32 = torch.rand((400, 3456),dtype=torch.float)
b_f32 = torch.rand((400, 3456),dtype=torch.float)
a_q = torch.quantize_per_tensor(a_f32, 1.2, 0, torch.qint8)
b_q = torch.quantize_per_tensor(b_f32, 1.7, 5, torch.qint8)
with profiler.profile(with_stack=True, profile_memory=False, record_shapes=True) as prof:
    for i in range(1000):
        _ = torch.ops.quantized.add(a_q, b_q, 1.3, 2)
print(prof.key_averages(group_by_input_shape=True).table(sort_by='self_cpu_time_total', row_limit=50))
```
cc @jgong5 @mingfeima @XiaobingSuper @sanchitintel @ashokei @jingxu10 | true |
2,836,067,606 | [mps] Remove a stale comment. | dcci | closed | [
"Merged",
"ciflow/trunk",
"topic: not user facing",
"module: mps",
"ciflow/mps",
"module: inductor"
] | 6 | MEMBER | The implementation of the function was moved to a shader, but the comment was left there.
cc @kulinseth @albanD @malfet @DenisVieriu97 @jhavukainen @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @amjames @desertfire @chauhang @aakhundov | true |
2,836,035,338 | Fix for special.zeta nan handling - follow-up PR #138653 | vladimirrotariu | open | [
"triaged",
"module: special"
] | 0 | CONTRIBUTOR | ### 🚀 The feature, motivation and pitch
Continuing [PR #138653](https://github.com/pytorch/pytorch/pull/138653).
I hereby attach the suggestion of Albert Steppi (@steppi):
Now that we have this background out of the way. I think my preference in SciPy would be to change zeta(x, q) to be nan and to codify this as a recommendation in a special function array API extension as considered https://github.com/data-apis/array-api/issues/725. I'm not sure what the downstream implications of this change might be though, and would be happy to hear feedback.
If there's no interest for PyTorch extending zeta to x < 1, then having zeta(1, q) return +inf makes sense, and by the principle guiding such special cases in the C99 standard, having zeta(1, nan) return +inf also makes sense in my opinion. However, through the work we are doing in SciPy discussed here, https://github.com/scipy/xsf/issues/1, it would become straightforward for PyTorch to extend zeta if we extend it in SciPy, by using the xsf library as a shared dependency (although you would lose the ability to test against SciPy as an independent reference).
@rgommers @janeyx99 @mruberry
### Alternatives
_No response_
### Additional context
_No response_
cc @mruberry @kshitij12345 | true |
2,835,967,169 | Generate test reports for pytest when option is given | Flamefire | closed | [
"triaged",
"open source",
"topic: not user facing"
] | 2 | COLLABORATOR | The argument needs to be appended when test reports should be generated. `IS_CI` is not necessarily set, so check `TEST_SAVE_XML` instead, as is done in other places where test reports are conditionally enabled.
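The gating can be sketched as follows (hypothetical helper name; the real change lives in the test-runner scripts):

```python
import os

def pytest_args(test_files):
    """Build a pytest invocation, appending the junit-xml report flag
    only when TEST_SAVE_XML is set in the environment."""
    args = ["pytest", *test_files]
    report_dir = os.environ.get("TEST_SAVE_XML")
    if report_dir:  # reports requested: emit junit xml
        args.append(f"--junitxml={os.path.join(report_dir, 'report.xml')}")
    return args
```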
See also https://github.com/pytorch/pytorch/issues/126523 | true |
2,835,945,976 | [don't merge] test baseline | xuhancn | closed | [
"open source",
"topic: not user facing",
"ciflow/binaries_wheel",
"ciflow/xpu"
] | 7 | COLLABORATOR | Fixes #ISSUE_NUMBER
| true |
2,835,945,905 | [BE][Ez]: Enable ruff rule banning print in assert | Skylion007 | closed | [
"triaged",
"open source",
"better-engineering",
"Merged",
"ciflow/trunk",
"topic: not user facing",
"module: dynamo",
"ciflow/inductor"
] | 3 | COLLABORATOR | Enables a few ruff rules
* Ban print statements within asserts (likely bugs)
* ~Use string for Decimal literal to prevent loss of precision~
* ~Do not use default args for __post__init__ in dataclasses, they likely were meant to go into the factory method, the __init__, or somewhere else. The default values are useless here.~
Wait until ruff upgrade for the last 2
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @chauhang @amjames @StrongerXi | true |
2,835,921,583 | [CD] Add python 3.13t build for xpu | chuanqi129 | closed | [
"open source",
"Merged",
"ciflow/trunk",
"topic: not user facing",
"ciflow/binaries_wheel"
] | 9 | COLLABORATOR | Fixes #146451
| true |
2,835,899,790 | Description of `input` in `torch.addbmm()` should be a `Parameter` | ILCSFNO | closed | [
"module: docs",
"triaged",
"actionable",
"topic: docs",
"module: python frontend"
] | 1 | CONTRIBUTOR | ### 📚 The doc issue
The doc of [`torch.addbmm()`](https://pytorch.org/docs/stable/generated/torch.addbmm.html#torch-addbmm) shows its `Parameters` and `Kw Arguments` as below:
https://github.com/pytorch/pytorch/blob/8a4dd763b87478d01ae327ec439632212b8a3357/torch/_torch_docs.py#L409-L417
But for `input`, which is now a `kw argument` in doc, it should be a `parameter` instead.
Take some similar funcs as example:
<details><summary>Func Examples</summary>
#### [`torch.addmm()`](https://pytorch.org/docs/stable/generated/torch.addmm.html#torch-addmm)
https://github.com/pytorch/pytorch/blob/8a4dd763b87478d01ae327ec439632212b8a3357/torch/_torch_docs.py#L556-L564
#### [`torch.addmv()`](https://pytorch.org/docs/stable/generated/torch.addmv.html#torch-addmv)
https://github.com/pytorch/pytorch/blob/8a4dd763b87478d01ae327ec439632212b8a3357/torch/_torch_docs.py#L667-L675
#### [`torch.baddbmm()`](https://pytorch.org/docs/stable/generated/torch.baddbmm.html#torch-baddbmm)
https://github.com/pytorch/pytorch/blob/8a4dd763b87478d01ae327ec439632212b8a3357/torch/_torch_docs.py#L1334-L1342
</details>
### Suggest a potential alternative/fix
I suggest that the description of `input` may be moved to `Parameters`, instead of `Kw Arguments`. That is,
from:
```txt
Args:
batch1 (Tensor): the first batch of matrices to be multiplied
batch2 (Tensor): the second batch of matrices to be multiplied
Keyword args:
beta (Number, optional): multiplier for :attr:`input` (:math:`\beta`)
input (Tensor): matrix to be added
alpha (Number, optional): multiplier for `batch1 @ batch2` (:math:`\alpha`)
{out}
```
to:
```txt
Args:
input (Tensor): matrix to be added
batch1 (Tensor): the first batch of matrices to be multiplied
batch2 (Tensor): the second batch of matrices to be multiplied
Keyword args:
beta (Number, optional): multiplier for :attr:`input` (:math:`\beta`)
alpha (Number, optional): multiplier for `batch1 @ batch2` (:math:`\alpha`)
{out}
```
Thanks!
cc @svekars @brycebortree @sekyondaMeta @AlannaBurke @albanD | true |
2,835,860,360 | [WIP] BaseSubclass | IvanKobzarev | open | [
"Stale"
] | 3 | CONTRIBUTOR | Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #146612
| true |
2,835,854,469 | Fix ignore description in `torch.addbmm()`, `torch.addmm()`, `torch.addmv()` and `torch.baddbmm()` | ILCSFNO | closed | [
"module: docs",
"triaged",
"actionable",
"module: python frontend"
] | 0 | CONTRIBUTOR | ### 📚 The doc issue
Seen from #146399, I notice some similar situations in [`torch.addbmm()`](https://pytorch.org/docs/stable/generated/torch.addbmm.html#torch-addbmm), [`torch.addmm()`](https://pytorch.org/docs/stable/generated/torch.addmm.html#torch-addmm), [`torch.addmv()`](https://pytorch.org/docs/stable/generated/torch.addmv.html#torch-addmv) and [`torch.baddbmm()`](https://pytorch.org/docs/stable/generated/torch.baddbmm.html#torch-baddbmm)
<details><summary>Doc Details</summary>
#### torch.addbmm()
https://github.com/pytorch/pytorch/blob/ed309b9156bb5ef4b44a4d211cf5c30fbb1bc1e8/torch/_torch_docs.py#L398-L399
#### torch.addmm()
https://github.com/pytorch/pytorch/blob/ed309b9156bb5ef4b44a4d211cf5c30fbb1bc1e8/torch/_torch_docs.py#L539-L540
#### torch.addmv()
https://github.com/pytorch/pytorch/blob/8a4dd763b87478d01ae327ec439632212b8a3357/torch/_torch_docs.py#L660-L661
#### torch.baddbmm()
https://github.com/pytorch/pytorch/blob/8a4dd763b87478d01ae327ec439632212b8a3357/torch/_torch_docs.py#L1323-L1324
</details>
These details show that when `beta` is set to 0, `input` will be ignored, which means that whether `input` is a matrix of the expected size or not, no error should be raised for a misused `input`.
Instead, it should be ignored just as if `input` didn't exist.
But now, when `beta` is set to 0 with an `input` of unexpected size, an error like the one below is raised:
### Minified Repro
```python
import torch
import numpy as np
x1 = torch.tensor(np.random.randn(10, 10))
x2 = torch.tensor(np.random.randn(10))
vec1 = torch.tensor(np.random.randn(100, 3, 4))
vec2 = torch.tensor(np.random.randn(100, 4, 5))
vec3 = torch.tensor(np.random.randn(3, 4))
vec4 = torch.tensor(np.random.randn(4, 5))
vec5 = torch.tensor(np.random.randn(4))
## Below shows 4 usage in 4 funcs with: beta==0 && input of unexpected size
out1 = torch.addbmm(x1, vec1, vec2, beta=0) # (1) torch.addbmm()
# out2 = torch.baddbmm(x1, vec1, vec2, beta=0) # (2) torch.baddbmm()
# out3 = torch.addmm(x1, vec3, vec4, beta=0) # (3) torch.addmm()
# out4 = torch.addmv(x2, vec3, vec5, beta=0) # (4) torch.addmv()
```
### Output (A sample for torch.addbmm example)
```txt
RuntimeError: The expanded size of the tensor (5) must match the existing size (10) at non-singleton dimension 1. Target sizes: [3, 5]. Tensor sizes: [10, 10]
```
### Suggest a potential alternative/fix
I suggest that the docs be changed to say that the *content* of `input` is ignored, rather than `input` itself. That is,
from:
> If :attr:`beta` is 0, then :attr:`input` will be ignored, and `nan` and `inf` in it will not be propagated.
to:
> If :attr:`beta` is 0, then the content of :attr:`input` will be ignored, and `nan` and `inf` in it will not be propagated.
Thanks!
cc @svekars @brycebortree @sekyondaMeta @AlannaBurke @albanD | true |
2,835,853,813 | Remove some NOLINT | cyyever | closed | [
"oncall: distributed",
"open source",
"Merged",
"ciflow/trunk",
"release notes: distributed (c10d)"
] | 6 | COLLABORATOR | Fixes #ISSUE_NUMBER
cc @H-Huang @awgu @kwen2501 @wanchaol @fegin @fduwjj @wz337 @wconstab @d4l3k @c-p-i-o | true |
2,835,852,006 | [BE][Ez]: Enable some additional pylint ruff warnings | Skylion007 | closed | [
"open source",
"better-engineering",
"Merged",
"ciflow/trunk",
"topic: not user facing"
] | 6 | COLLABORATOR | Some additional code hardening with some pylint warnings in ruff that usually indicate bugs. All code currently conforms nicely to them, but this will ensure these errors can be detected statically before running / creating tests.
The following rules:
* Ban walrus operators where they would have no effect over regular assignment; making intention more clear.
* Statically check for the common error of forgetting to put parens after the `super` call, which will cause an attribute error
* Ban bad string literal args to builtins `open` | true |
2,835,728,588 | Gh/lucasllc/1/head | LucasLLC | closed | [
"oncall: distributed"
] | 1 | CONTRIBUTOR | Seeing how many errors I get when I delete this function
cc @H-Huang @awgu @kwen2501 @wanchaol @fegin @fduwjj @wz337 @wconstab @d4l3k @c-p-i-o | true |
2,835,629,358 | [ROCm][Windows] Remove external linkage from an anonymous namespace | m-gallus | closed | [
"oncall: jit",
"module: rocm",
"open source",
"Merged",
"ciflow/trunk",
"release notes: jit",
"topic: not user facing"
] | 7 | CONTRIBUTOR | Fixes a clang-cl compiler error caused by an attempt to export a symbol that doesn't have external linkage, since it is declared within a local anonymous namespace.
cc @EikanWang @jgong5 @wenzhe-nrv @sanchitintel @jeffdaily @sunway513 @jithunnair-amd @pruthvistony @ROCmSupport @dllehr-amd @jataylo @hongxiayang @naromero77amd | true |
2,835,612,881 | [ROCm][Windows] Fix unrecognized _BitScanReverse intrinsic | m-gallus | closed | [
"module: rocm",
"open source",
"Merged",
"ciflow/trunk",
"topic: not user facing"
] | 7 | CONTRIBUTOR | Since PyTorch with ROCm on Windows is built with clang-cl and not MSVC, the intrinsics used are different and hence an attempt to compile with `_BitScanReverse` fails. However, a call to `__builtin_clz` which follows in the subsequent preprocessor branch is correctly recognized by the clang-cl compiler.
cc @jeffdaily @sunway513 @jithunnair-amd @pruthvistony @ROCmSupport @dllehr-amd @jataylo @hongxiayang @naromero77amd | true |
2,835,594,615 | [ROCm][Windows] Fix isnan integer overload errors on MS STL | m-gallus | closed | [
"module: rocm",
"open source",
"Merged",
"ciflow/trunk",
"release notes: cuda",
"topic: not user facing"
] | 7 | CONTRIBUTOR | Microsoft's STL has a problem with integer overloads of std::fpclassify used by std::isnan and std::isinf. These functions need a cast to double to function correctly. Otherwise, the call fails with "ambiguous call to overloaded function" error.
cc @jeffdaily @sunway513 @jithunnair-amd @pruthvistony @ROCmSupport @dllehr-amd @jataylo @hongxiayang @naromero77amd | true |
2,835,576,394 | [Profiler] Enable CUPTI teardown to reduce profiler overhead | mgmtea | open | [
"triaged",
"open source",
"oncall: profiler",
"topic: not user facing"
] | 11 | NONE | The problem is that the profiler slowed down training by roughly 10-20% even after completion because `cuptiFinalize` was not called in Kineto due to TEARDOWN_CUPTI=0. Disabling CUPTI teardown was a workaround for crashes which occurred when CUDA graphs were used. This issue was fixed in CUDA 12.6. There is also no point in disabling CUPTI teardown if CUDA graphs are not used.
Fixes #144455
cc @robieta @chaekit @guotuofeng @guyang3532 @dzhulgakov @davidberard98 @briancoutinho @sraikund16 @sanrise | true |
2,835,570,712 | Regression: Multiple OpenMP runtimes linked to libtorch_cpu.so | vinithakv | closed | [
"module: performance",
"module: build",
"triaged",
"module: POWER"
] | 4 | CONTRIBUTOR | ### 🐛 Describe the bug
Hi,
Running the granite model on ppc64le Linux machine with latest PyTorch built from sources, shows a regression in performance.
Testing with OpenBLAS 3.29.
The libtorch_cpu.so seems to have picked up libomp.so and libgomp.so as dependencies, when compared to PyTorch-2.5.
With PyTorch 2.6 main
```
(torch26) $ ldd ./torch/lib/libtorch_cpu.so
linux-vdso64.so.1 (0x00007fff911e0000)
libc10.so => /home/ptuser/torch26/lib/python3.12/site-packages/./torch/lib/libc10.so (0x00007fff884d0000)
libgcc_s.so.1 => /lib64/libgcc_s.so.1 (0x00007fff91170000)
libopenblas.so.0 => /home/ptuser/torch26/lib64/python3.12/site-packages/openblas/lib/libopenblas.so.0 (0x00007fff86e00000)
**libomp.so => /lib64/libomp.so (0x00007fff86cc0000)**
libstdc++.so.6 => /lib64/libstdc++.so.6 (0x00007fff86800000)
libm.so.6 => /lib64/glibc-hwcaps/power10/libm.so.6 (0x00007fff86bc0000)
libc.so.6 => /lib64/glibc-hwcaps/power10/libc.so.6 (0x00007fff86400000)
/lib64/ld64.so.2 (0x00007fff911f0000)
libnuma.so.1 => /lib64/libnuma.so.1 (0x00007fff91140000)
**libgomp.so.1 => /lib64/libgomp.so.1 (0x00007fff88450000)**
libatomic.so.1 => /lib64/libatomic.so.1 (0x00007fff91110000)
```
In PyTorch 2.5
```
(torch25) $ ldd ./python3.12/site-packages/torch/lib/libtorch_cpu.so
linux-vdso64.so.1 (0x00007fffb8360000)
libprotobuf.so.25.3.0 => not found
libc10.so => /home/ptuser/torch25/lib/./python3.12/site-packages/torch/lib/libc10.so (0x00007fffb81f0000)
libopenblas.so.0 => not found
libgcc_s.so.1 => /lib64/libgcc_s.so.1 (0x00007fffb81b0000)
libm.so.6 => /lib64/glibc-hwcaps/power10/libm.so.6 (0x00007fffafb00000)
**libgomp.so.1 => /lib64/libgomp.so.1 (0x00007fffafa80000)**
libstdc++.so.6 => /lib64/libstdc++.so.6 (0x00007fffaf600000)
libc.so.6 => /lib64/glibc-hwcaps/power10/libc.so.6 (0x00007fffaf200000)
/lib64/ld64.so.2 (0x00007fffb8370000)
libnuma.so.1 => /lib64/libnuma.so.1 (0x00007fffb8180000)
libatomic.so.1 => /lib64/libatomic.so.1 (0x00007fffafa50000)
```
The profile shows libomp.so functions among the hot functions with the new PyTorch, and the regression is observed.
This seems to have started after this commit:
https://github.com/pytorch/pytorch/commit/0d5f0a81c5766c18970c9b3019e5d3165a2b05f4 .
When libomp.so is removed from the list of dependent libraries, the regression is not observed.
Regards,
Vinitha
### Versions
$ python collect_env.py
Collecting environment information...
PyTorch version: 2.6.0a0+git1eba9b3
Is debug build: False
CUDA used to build PyTorch: None
ROCM used to build PyTorch: N/A
OS: Red Hat Enterprise Linux 9.5 (Plow) (ppc64le)
GCC version: (GCC) 11.5.0 20240719 (Red Hat 11.5.0-2)
Clang version: 18.1.8 (Red Hat, Inc. 18.1.8-3.el9)
CMake version: version 3.31.4
Libc version: glibc-2.34
Python version: 3.12.5 (main, Dec 3 2024, 00:00:00) [GCC 11.5.0 20240719 (Red Hat 11.5.0-2)] (64-bit runtime)
Python platform: Linux-5.14.0-503.19.1.el9_5.ppc64le-ppc64le-with-glibc2.34
Is CUDA available: False
CUDA runtime version: No CUDA
CUDA_MODULE_LOADING set to: N/A
GPU models and configuration: No CUDA
Nvidia driver version: No CUDA
cuDNN version: No CUDA
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: False
CPU:
Architecture: ppc64le
Byte Order: Little Endian
CPU(s): 320
On-line CPU(s) list: 0-319
Model name: POWER10 (architected), altivec supported
Model: 2.0 (pvr 0080 0200)
Thread(s) per core: 8
Core(s) per socket: 10
Socket(s): 4
Hypervisor vendor: pHyp
Virtualization type: para
L1d cache: 2.5 MiB (80 instances)
L1i cache: 3.8 MiB (80 instances)
L2 cache: 80 MiB (80 instances)
L3 cache: 320 MiB (80 instances)
NUMA node(s): 4
NUMA node0 CPU(s): 0-79
NUMA node1 CPU(s): 80-159
NUMA node2 CPU(s): 160-239
NUMA node3 CPU(s): 240-319
Vulnerability Gather data sampling: Not affected
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Not affected
Vulnerability Reg file data sampling: Not affected
Vulnerability Retbleed: Not affected
Vulnerability Spec rstack overflow: Not affected
Vulnerability Spec store bypass: Not affected
Vulnerability Spectre v1: Mitigation; __user pointer sanitization, ori31 speculation barrier enabled
Vulnerability Spectre v2: Mitigation; Software count cache flush (hardware accelerated), Software link stack flush
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Versions of relevant libraries:
[pip3] numpy==2.2.1+opence
[pip3] torch==2.6.0a0+git1eba9b3
[conda] Could not collect
cc @msaroufim @malfet @seemethere | true |
2,835,448,596 | DISABLED test_tmp_not_defined_issue3_dynamic_shapes_cpu (__main__.DynamicShapesCpuTests) | pytorch-bot[bot] | closed | [
"module: rocm",
"triaged",
"module: flaky-tests",
"skipped",
"oncall: pt2",
"module: inductor"
] | 2 | NONE | Platforms: rocm
This test was disabled because it is failing in CI. See [recent examples](https://hud.pytorch.org/flakytest?name=test_tmp_not_defined_issue3_dynamic_shapes_cpu&suite=DynamicShapesCpuTests&limit=100) and the most recent trunk [workflow logs](https://github.com/pytorch/pytorch/runs/36770936980).
Over the past 3 hours, it has been determined flaky in 6 workflow(s) with 6 failures and 6 successes.
**Debugging instructions (after clicking on the recent samples link):**
DO NOT ASSUME THINGS ARE OKAY IF THE CI IS GREEN. We now shield flaky tests from developers so CI will thus be green but it will be harder to parse the logs.
To find relevant log snippets:
1. Click on the workflow logs linked above
2. Click on the Test step of the job so that it is expanded. Otherwise, the grepping will not work.
3. Grep for `test_tmp_not_defined_issue3_dynamic_shapes_cpu`
4. There should be several instances run (as flaky tests are rerun in CI) from which you can study the logs.
<details><summary>Sample error message</summary>
```
Traceback (most recent call last):
File "/var/lib/jenkins/pytorch/test/inductor/test_torchinductor.py", line 9766, in test_tmp_not_defined_issue3
self.common(forward, [], kwargs=kwargs)
File "/var/lib/jenkins/pytorch/test/inductor/test_torchinductor.py", line 418, in check_model
eager_result = model(*ref_inputs, **ref_kwargs)
File "/var/lib/jenkins/pytorch/test/inductor/test_torchinductor.py", line 9723, in forward
_embedding_bag = torch.ops.aten._embedding_bag.default(
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_ops.py", line 756, in __call__
return self._op(*args, **kwargs)
MemoryError: std::bad_alloc
To execute this test, run the following from the base repo dir:
PYTORCH_TEST_WITH_ROCM=1 python test/inductor/test_torchinductor_dynamic_shapes.py DynamicShapesCpuTests.test_tmp_not_defined_issue3_dynamic_shapes_cpu
This message can be suppressed by setting PYTORCH_PRINT_REPRO_ON_FAILURE=0
```
</details>
Test file path: `inductor/test_torchinductor_dynamic_shapes.py`
cc @jeffdaily @sunway513 @jithunnair-amd @pruthvistony @ROCmSupport @dllehr-amd @jataylo @hongxiayang @naromero77amd @clee2000 @wdvr @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @amjames @desertfire @chauhang @aakhundov | true |
2,835,251,733 | Enabling efficient model-level redistribution between FSPD-TP | SalmanMohammadi | open | [
"oncall: distributed"
] | 9 | CONTRIBUTOR | ### 🚀 The feature, motivation and pitch
The age of RL-based LLM finetuning is upon us. For many RL training paradigms, there exists a step where a (large) model is used for inference (i.e. autoregressive sampling under `no_grad`), followed by a training step with the same model. After this training step, the updated model weights are used once more for inference, and so on. This generation step is often a large bottleneck in the overall training procedure.
For single-device RL finetuning, this is fairly straightforward to accomplish. However, in distributed training settings, we would like to optimally parallelize the model for inference using e.g. PP/TP, whilst for training we'd like to leverage FSDP to train large models. In the ideal setting, we would like zero redundancy - we don't want to store two copies of the model at once, and we'd like this switch between inference-and-training modes to involve minimal overhead.
I've considered a few options:
- Gather the full state-dict on CPU, and re-initialize a new model instance with the desired parallel strategy. This seems prohibitively slow for large models, though I'm definitely open to suggestions as this may be the path of least resistance.
- Use `DTensor.redistribute` on the [weights of a model](https://github.com/SalmanMohammadi/torch-redistribute/blob/main/torch_redistribute/style.py#L40) sharded with `fully_shard` *and*:
- Supporting some un-wrapping API for a model sharded with `fully_shard` - temporarily removing the pre/post-forward hooks, and then re-sharding the model weights to the original desired placements. This would allow us to use the same model instance. Alternatively,
- Re-sharding the weights of a model sharded with `fully_shard` directly into another instance of the model initialized on meta device. I believe [veRL](https://github.com/volcengine/verl) may do something similar here (cc @eric-haibin-lin), but this results in 2x parameters in memory unless parameters are offloaded for both resharding operations (FSDP->TP, and TP->FSDP).
Any advice on the best way to accomplish this would be appreciated. Thanks for reading : )
(cc @weifengpy @ebsmothers @joecummings @H-Huang @awgu @kwen2501 @wanchaol @fegin @fduwjj @wz337 @wconstab @d4l3k @c-p-i-o )
### Alternatives
_No response_
### Additional context
_No response_ | true |
2,835,243,849 | Fixed a typo in dataset.py | Zhou32 | closed | [
"open source",
"Merged",
"ciflow/trunk",
"release notes: dataloader",
"topic: not user facing"
] | 7 | CONTRIBUTOR | Changed word 'Mult' to 'Multi'. | true |
2,835,243,688 | [Windows][ROCm] Fix c10 hip tests | m-gallus | closed | [
"module: rocm",
"open source",
"Merged",
"ciflow/trunk",
"topic: not user facing"
] | 7 | CONTRIBUTOR | - Solves a problem related to .hip source files being ignored by the build system when HIP language is not enabled in CMake.
- Also ensures that the test executables link to an appropriate CRT Runtime Library and hence have access to all the necessary symbols. Previously, there were many problems related to linkage errors.
- Moves part of Linux-related hipBLASLt changes in `LoadHIP.cmake` under the UNIX conditional branch, as these aren't supported on Windows yet.
cc @jeffdaily @sunway513 @jithunnair-amd @pruthvistony @ROCmSupport @dllehr-amd @jataylo @hongxiayang @naromero77amd | true |
2,835,230,255 | [t.compile][Functools] Cache decorator support for dynamo | mieshkiwrk | open | [
"high priority",
"triaged",
"actionable",
"module: correctness (silent)",
"oncall: pt2",
"module: dynamo",
"dynamo-triage-jan2025"
] | 2 | NONE | ### 🐛 Describe the bug
Given the example below, `eager` handles the cached function as expected, while `t.compile` treats it as a normal function, sets guards on z, and recompiles on each new call.
I haven't found any information about cache support in dynamo; with [this commit](https://github.com/pytorch/pytorch/commit/53fc921ce2bcfd29b0adc42b72b86a982a690e30) `functools` was removed from `BUILTIN_SKIPLIST`, so it is not explicitly unsupported.
Looking at dynamo's behavior it makes sense why it behaves like that, but is there a plan to support such caching in the future?
Or is this expected due to dynamo's architecture and will it never be supported?
[eager]
1. z=0
2. z=1
3. z=1
[t.compile]
1. z=0
2. z=1
3. z=2
```
[0/1] [__recompiles] triggered by the following guard failure(s):
[0/1] [__recompiles] - 0/0: G['__import___main__'].z == 0
(...)
[0/2] [__recompiles] triggered by the following guard failure(s):
[0/2] [__recompiles] - 0/1: G['__import___main__'].z == 1
[0/2] [__recompiles] - 0/0: G['__import___main__'].z == 0
```
```python
import torch
from functools import cache
z = 0
@cache
def cached_fn(x):
global z
z = z + 1
return x + 1
def test():
global z
cached_fn_call = torch.compile(cached_fn)
#cached_fn_call = cached_fn
t1 = torch.randn(2,2)
print(f'1. z={z}')
cached_fn_call (t1)
print(f'2. z={z}')
cached_fn_call (t1)
print(f'3. z={z}')
test()
```
### Versions
PT2.6
cc @ezyang @gchanan @zou3519 @kadeng @msaroufim @chauhang @penguinwu @voznesenskym @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @amjames | true |
2,835,220,077 | [ARM] Fix bug in _ref_test_helper in test_ops and fix failing test on Aarch64 | robert-hardwick | closed | [
"triaged",
"open source",
"module: arm",
"Merged",
"ciflow/trunk",
"topic: not user facing",
"arm priority"
] | 7 | COLLABORATOR | We have a failing unit test on Aarch64
```
Exception: Caused by reference input at index 34: SampleInput(input=Tensor[size=(5, 5, 4), device="cpu", dtype=torch.complex64, contiguous=False], args=(), kwargs={}, broadcasts_input=False, name='')
To execute this test, run the following from the base repo dir:
PYTORCH_OPINFO_SAMPLE_INPUT_INDEX=34 python test/test_ops.py TestCommonCPU.test_python_ref__refs_square_cpu_complex64
This message can be suppressed by setting PYTORCH_PRINT_REPRO_ON_FAILURE=0
```
After debugging it I found that the `ex` variable is not being reset to None on each loop iteration inside `_ref_test_helper`. After fixing that, another expectedFailure was highlighted for re-enabling - `nn.functional.hinge_embedding_loss` - which was incorrectly being skipped due to the same problem.
https://github.com/pytorch/pytorch/blob/4a545eb85d6ba06079787a83f8ab1a8c8f67c76f/test/test_ops.py#L546
The `ex` variable is not reset after this point before the next loop iteration.
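The bug pattern in miniature (a toy loop, not the actual test harness; only the per-iteration reset is the point):

```python
def collect_failures(samples):
    # Toy version of the bug: if `ex` were initialized once *outside*
    # the loop, an exception captured for one sample would leak into
    # the checks for every later sample.
    failures = []
    for s in samples:
        ex = None  # the fix: reset per iteration
        try:
            if s < 0:
                raise ValueError(s)
        except ValueError as err:
            ex = err
        if ex is not None:
            failures.append(s)
    return failures

assert collect_failures([1, -2, 3]) == [-2]
```

Without the `ex = None` reset inside the loop, the sample `3` would also be reported as failing, which is the analogue of the spurious expectedFailure described above.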
cc @malfet @snadampal @milpuz01 | true |
2,835,200,186 | separate f16 vectorized class from bf16 | Ryo-not-rio | closed | [
"module: cpu",
"triaged",
"open source",
"module: arm",
"Merged",
"ciflow/trunk",
"topic: not user facing",
"arm priority"
] | 16 | COLLABORATOR | Separating the f16 vectorized class into a different file from the bf16 vectorized class in order to be able to add a new bf16 SVE vectorized class in https://github.com/pytorch/pytorch/pull/143666. This is required as we would need to exclude the current bf16 class in order to use the sve bf16 class but still include the current f16 vectorized class.
cc @jgong5 @mingfeima @XiaobingSuper @sanchitintel @ashokei @jingxu10 @malfet @snadampal @milpuz01 | true |
2,835,174,041 | skip test_torch_dynamo_codegen_pow if CPU backend is not cpp | GeorgeWigley | closed | [
"triaged",
"open source",
"Merged",
"ciflow/trunk",
"topic: not user facing",
"module: dynamo"
] | 12 | CONTRIBUTOR | The test asserts that `aten.pow` is not present in the generated kernel code. When using a CPU backend other than cpp, the kernel contains comments referencing the aten ops that produced the kernel in this case `aten.pow`.
This PR skips that test case if the CPU backend is not cpp.
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @chauhang @amjames | true |
2,835,134,343 | torch.library.infer_schema should support list[...] in addition to typing.List[...] | lw | closed | [] | 2 | CONTRIBUTOR | ### 🚀 The feature, motivation and pitch
Since Python 3.9, using `typing.List` (and all other types of that kind) is deprecated and the built-in `list` type should just be used instead. See https://docs.python.org/3/library/stdtypes.html#types-genericalias and https://peps.python.org/pep-0585/.
When using Python 3.11 and PyTorch 2.6.0, however, we get this error:
```
ValueError: infer_schema(func): Parameter a_ has unsupported type list[torch.Tensor]. The valid types are: dict_keys([<class 'torch.Tensor'>, typing.Optional[torch.Tensor], typing.Sequence[torch.Tensor], typing.List[torch.Tensor], typing.Sequence[typing.Optional[torch.Tensor]], typing.List[typing.Optional[torch.Tensor]], <class 'int'>, typing.Optional[int], typing.Sequence[int], typing.List[int], typing.Optional[typing.Sequence[int]], typing.Optional[typing.List[int]], <class 'float'>, typing.Optional[float], typing.Sequence[float], typing.List[float], typing.Optional[typing.Sequence[float]], typing.Optional[typing.List[float]], <class 'bool'>, typing.Optional[bool], typing.Sequence[bool], typing.List[bool], typing.Optional[typing.Sequence[bool]], typing.Optional[typing.List[bool]], <class 'str'>, typing.Optional[str], typing.Union[int, float, bool], typing.Union[int, float, bool, NoneType], typing.Sequence[typing.Union[int, float, bool]], typing.List[typing.Union[int, float, bool]], <class 'torch.dtype'>, typing.Optional[torch.dtype], <class 'torch.device'>, typing.Optional[torch.device]]). Got func with signature (a_: list[torch.Tensor], b_: list[torch.Tensor], group_name: str, shard_dim: int) -> list[torch.Tensor])
```
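Note that the two spellings are structurally identical to Python's own introspection helpers, which is why normalizing one onto the other in schema inference should be mechanical. A small stdlib-only check:

```python
import typing

# PEP 585 built-in generics (list[int]) and the deprecated typing
# aliases (typing.List[int]) decompose to the same origin and args,
# so a type table keyed on typing.List[...] could accept list[...]
# by normalizing through get_origin/get_args.
new_style = list[int]
old_style = typing.List[int]

assert typing.get_origin(new_style) is typing.get_origin(old_style) is list
assert typing.get_args(new_style) == typing.get_args(old_style) == (int,)
```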
### Alternatives
We could use the deprecated types, but our CI would complain and we'd need to silence this.
We could also provide the schema explicitly but that is redundant, brittle, and defeats the purpose of the new custom_op library.
### Additional context
_No response_ | true |
2,835,127,166 | [NOT FOR LANDING] experimental NVSHMEM integration | yifuwang | open | [
"oncall: distributed",
"open source",
"release notes: distributed (c10d)",
"no-stale"
] | 1 | COLLABORATOR | Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #146593
* #146592
cc @H-Huang @awgu @kwen2501 @wanchaol @fegin @fduwjj @wz337 @wconstab @d4l3k @c-p-i-o | true |
2,835,127,000 | clang-format CUDASymmetricMemory.cu | yifuwang | open | [
"oncall: distributed",
"open source",
"Stale",
"release notes: distributed (c10d)"
] | 3 | COLLABORATOR | Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #146593
* __->__ #146592
cc @H-Huang @awgu @kwen2501 @wanchaol @fegin @fduwjj @wz337 @wconstab @d4l3k @c-p-i-o | true |
2,835,097,224 | Unable to export a large model to ONNX (exceeds 2GB limit) with custom attention layer | david-666-maker | closed | [
"module: onnx",
"triaged"
] | 5 | NONE | ### 🐛 Describe the bug
I’m encountering an issue while exporting a large text-encoder model to ONNX. The model is fairly large (over 2 GiB when serialized) and triggers the following error:
`RuntimeError: The serialized model is larger than the 2GiB limit imposed by the protobuf library. Therefore the output file must be a file path, so that the ONNX external data can be written to the same directory. Please specify the output file name.`
Although ONNX’s external data format should allow files larger than 2 GiB by splitting weights into separate .data files, the export still fails. I have confirmed that I’m passing an actual file path (not an in-memory buffer) to torch.onnx.export. Even explicitly setting use_external_data_format=True does not seem to help.
In addition, before reaching that error message, there is another issue related to shape inference:
```
# in torch/onnx/utils.py
if GLOBALS.onnx_shape_inference:
_C._jit_pass_onnx_graph_shape_type_inference(
graph, params_dict, GLOBALS.export_onnx_opset_version
)
```
I notice that I have to manually enable GLOBALS.onnx_shape_inference = True to proceed further. Otherwise, I get stuck at the shape inference stage.
Below is a simplified snippet of my code:
```
import torch
from diffusers import SanaPipeline
from torch import nn
# 1. Load the model
pipe = SanaPipeline.from_pretrained(
"Efficient-Large-Model/Sana_1600M_1024px_BF16_diffusers",
variant="bf16",
torch_dtype=torch.bfloat16,
).to("cuda")
text_encoder = pipe.text_encoder.eval().to("cuda")
tokenizer = pipe.tokenizer
prompt_text = ["star war, 8K, good"]
tokens = tokenizer(
prompt_text,
max_length=77,
padding="max_length",
truncation=True,
return_tensors="pt",
).input_ids.to("cuda")
# 2. Wrap the encoder
class TextEncoderWrapper(nn.Module):
def __init__(self, encoder):
super().__init__()
self.encoder = encoder
def forward(self, input_ids):
outputs = self.encoder(input_ids, return_dict=False)
# Return last_hidden_state
return outputs[0]
wrapped_encoder = TextEncoderWrapper(text_encoder).eval().to("cuda")
# 3. Export to ONNX
onnx_path = "/path/to/text_encoder.onnx"
torch.onnx.export(
wrapped_encoder,
(tokens,),
onnx_path,
do_constant_folding=True,
input_names=["input_ids"],
output_names=["last_hidden_state"],
# opset_version=...
# use_external_data_format=True, # tried enabling this as well
)
```
### Versions
I’ve traced the issue to a custom attention layer **(Gemma2Attention)** in **Gemma2DecoderLayer()** from the model’s internal code **(modeling_gemma2.py)**. When that attention code path is involved, the ONNX export graph grows significantly, and we seem unable to split it properly into external data.
Environment:
PyTorch version: 2.5.1+cu124
Python version: 3.10.9
Transformers version: 4.48.2
diffusers version: 0.33.0.dev0
OS: Ubuntu 20.04
| true |
2,834,991,057 | Signature should be extended for `torch.hamming_window()` | ILCSFNO | open | [
"triaged",
"module: python frontend"
] | 4 | CONTRIBUTOR | ### 🐛 Describe the bug
Seen from #145371, I notice some similar situations in [`torch.hamming_window()`](https://pytorch.org/docs/stable/generated/torch.hamming_window.html):
https://github.com/pytorch/pytorch/blob/4a545eb85d6ba06079787a83f8ab1a8c8f67c76f/torch/_torch_docs.py#L12380-L12381
### Minified Repro
```python
import torch
import random
window_length = random.randint(1, 100)
periodic = True ## choice: True, False
alpha = random.uniform(0, 1)
beta = 1 - alpha
## Below shows 4 possibilities of the combination of arguments
window = torch.hamming_window((window_length + 2), alpha=alpha, beta=beta) # (1) Error for: torch.hamming_window(window_length=int, alpha=float, beta=float)
# window = torch.hamming_window((window_length + 2), alpha=alpha) # (2) Error for: torch.hamming_window(window_length=int, alpha=float)
# window = torch.hamming_window((window_length + 2), beta=beta) # (3) Error for: torch.hamming_window(window_length=int, beta=float)
# window = torch.hamming_window((window_length + 2), periodic=periodic, beta=beta) # (4) Error for: torch.hamming_window(window_length=int, periodic=bool, beta=float)
```
### Output
```txt
TypeError: hamming_window() received an invalid combination of arguments - got (int, beta=float, alpha=float), but expected one of:
* (int window_length, *, torch.dtype dtype = None, torch.layout layout = None, torch.device device = None, bool pin_memory = False, bool requires_grad = False)
* (int window_length, bool periodic, *, torch.dtype dtype = None, torch.layout layout = None, torch.device device = None, bool pin_memory = False, bool requires_grad = False)
* (int window_length, bool periodic, float alpha, *, torch.dtype dtype = None, torch.layout layout = None, torch.device device = None, bool pin_memory = False, bool requires_grad = False)
* (int window_length, bool periodic, float alpha, float beta, *, torch.dtype dtype = None, torch.layout layout = None, torch.device device = None, bool pin_memory = False, bool requires_grad = False)
```
These four combinations should all be valid, yet each raises an exception.
### Suggestions
I suggest that the behavior with `periodic`, `alpha`, `beta` left unspecified and with `periodic=True`, `alpha=0.54`, `beta=0.46` given explicitly be identical. That is, as a fix, the signature
> (int window_length, *, torch.dtype dtype = None, torch.layout layout = None, torch.device device = None, bool pin_memory = False, bool requires_grad = False)
ought to be extended with `periodic=True`, `alpha=0.54` and `beta=0.46`,
with the other three signatures merged into it.
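For clarity, a pure-Python sketch of the window those defaults describe (the formula and the defaults `alpha=0.54`, `beta=0.46` come from the `torch.hamming_window` docs; the length-1 special case is an assumption):

```python
import math

def hamming_window(window_length, periodic=True, alpha=0.54, beta=0.46):
    # Documented formula: w[n] = alpha - beta * cos(2*pi*n / (N - 1)),
    # with the documented defaults baked in as keyword defaults.
    if window_length == 1:
        return [1.0]  # assumption: degenerate window, as in numpy.hamming(1)
    # A periodic window of length L is the first L points of a
    # symmetric window of length L + 1.
    n = window_length + 1 if periodic else window_length
    w = [alpha - beta * math.cos(2 * math.pi * i / (n - 1)) for i in range(n)]
    return w[:window_length]

# All four call styles from the repro are just keyword overrides of the
# same defaults, so they should resolve to one and the same signature:
assert hamming_window(8) == hamming_window(8, periodic=True, alpha=0.54, beta=0.46)
assert hamming_window(8, beta=0.46) == hamming_window(8, alpha=0.54)
```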
### Versions
pytorch==2.5.0
torchvision==0.20.0
torchaudio==2.5.0
pytorch-cuda=12.1
cc @albanD | true |
2,834,937,123 | [DDP] Use NCCL allocated memory for gradient bucket | kwen2501 | closed | [
"oncall: distributed",
"Merged",
"ciflow/trunk",
"release notes: distributed (c10d)",
"release notes: distributed (ddp)"
] | 20 | CONTRIBUTOR | Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #146589
So that NVLink SHARP comes with zero-copy on H100+ platforms, for DDP applications.
Less SM usage, less memory contention between NCCL kernel and compute kernels.
Added env `DDP_DISABLE_COMM_MEM` as a back-out option:
```
An environment variable to disable comm-optimized memory pool.
Default is 0, which means comm-optimized memory pool is enabled.
Users can set it to 1 in case of seeing regression or OOM (because this
comm MemPool may not share space with regular compute MemPool).
```
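A usage sketch of the back-out switch (the launcher command in the comment is illustrative, not prescribed by the PR):

```shell
# Back-out switch from the PR description: disable the comm-optimized
# memory pool before launching training if you hit OOM or a regression,
# e.g. DDP_DISABLE_COMM_MEM=1 torchrun --nproc-per-node=8 train.py
export DDP_DISABLE_COMM_MEM=1
echo "DDP_DISABLE_COMM_MEM=$DDP_DISABLE_COMM_MEM"
```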
cc @H-Huang @awgu @wanchaol @fegin @fduwjj @wz337 @wconstab @d4l3k @c-p-i-o
Differential Revision: [D69297766](https://our.internmc.facebook.com/intern/diff/D69297766) | true |
2,834,926,688 | [METAL] inline bfloat min/max | Isalia20 | closed | [
"open source",
"Merged",
"ciflow/trunk",
"topic: not user facing"
] | 7 | COLLABORATOR | After a recent commit 36c6e09528a7e071edecde083254da70cba26c95 , building from source with `python setup.py develop` leads to an error due to multiple symbols for min/max:
```
FAILED: caffe2/aten/src/ATen/kernels_bfloat.metallib /Users/Irakli_Salia/Desktop/pytorch/build/caffe2/aten/src/ATen/kernels_bfloat.metallib
cd /Users/Irakli_Salia/Desktop/pytorch/build/caffe2/aten/src/ATen && xcrun metallib -o kernels_bfloat.metallib BinaryKernel_31.air Bucketization_31.air CrossKernel_31.air FusedOptimizerOps_31.air Gamma_31.air HistogramKernel_31.air Im2Col_31.air Indexing_31.air LinearAlgebra_31.air Quantized_31.air RMSNorm_31.air RenormKernel_31.air Repeat_31.air SpecialOps_31.air TriangularOps_31.air UnaryKernel_31.air UnfoldBackward_31.air UpSample_31.air
LLVM ERROR: multiple symbols ('_ZN3c105metal3minIDF16bEEN5metal9enable_ifIXgssr5metalE19is_floating_point_vIT_EES4_E4typeES4_S4_')!
```
This PR fixes that.
@malfet | true |
2,834,906,851 | [Dynamo] Allow dynamo to handle `str.xxx()` | shink | closed | [
"open source",
"Merged",
"ciflow/trunk",
"topic: not user facing",
"module: dynamo"
] | 19 | CONTRIBUTOR |
Fixes #146350
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @chauhang @amjames | true |
2,834,850,314 | CI related issues | swgu98 | closed | [
"module: ci",
"triaged"
] | 2 | NONE | I would like to ask whether PyTorch's CI has a rerun mechanism: for example, if a workflow fails unexpectedly, does the developer need to rerun it manually?
Is there an exemption mechanism? For example, if a workflow fails but there is no time to wait, or it fails unexpectedly, can the change be merged directly?
cc @seemethere @malfet @pytorch/pytorch-dev-infra | true |
2,834,821,629 | Create aa | swgu98 | closed | [
"open source",
"topic: not user facing"
] | 12 | NONE | Fixes #ISSUE_NUMBER
| true |
2,834,805,789 | Create aa.yml | swgu98 | closed | [
"open source",
"topic: not user facing"
] | 1 | NONE | Fixes #ISSUE_NUMBER
| true |
2,834,790,185 | [symbolic shapes] Log symnode id | angelayi | closed | [
"Merged",
"ciflow/trunk",
"release notes: fx",
"fx",
"ciflow/inductor"
] | 9 | CONTRIBUTOR | We want to log the symnode id which will help us with provenance tracking between expressions created.
cc @ezyang @SherlockNoMad @EikanWang @jgong5 @wenzhe-nrv | true |
2,834,773,750 | [Partitioner] Reduce time consuming of partitions merger | lingzhiz1998 | closed | [
"triaged",
"open source",
"Merged",
"ciflow/trunk",
"release notes: fx",
"topic: not user facing",
"fx"
] | 19 | CONTRIBUTOR | This patch optimizes the `maybe_merge_partition` func in 3 ways:
Remove an unnecessary copy https://github.com/pytorch/pytorch/blob/main/torch/fx/passes/infra/partitioner.py#L99. The number of copied nodes is large if all of the graph's nodes can be merged into one partition.
Record the users of each partition to avoid duplicate iteration over nodes https://github.com/pytorch/pytorch/blob/main/torch/fx/passes/infra/partitioner.py#L133. The trip count of this loop may be very large.
The node counts of partitions may be unbalanced https://github.com/pytorch/pytorch/blob/main/torch/fx/passes/infra/partitioner.py#L145. We often hit the case where one partition has n nodes but the other has one node. Merging the smaller partition into the larger one helps reduce the time consumed.
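The third point is the classic merge-smaller-into-larger (union-by-size) heuristic; a toy sketch with a hypothetical `Partition` stand-in, not the actual fx class:

```python
class Partition:
    """Toy stand-in for an fx partition: just a set of node names."""
    def __init__(self, nodes):
        self.nodes = set(nodes)

def merge(a, b):
    # Merge the smaller partition into the larger one, so the cost of
    # each merge is bounded by the *smaller* side's node count rather
    # than by whichever side happened to be the merge target.
    if len(a.nodes) < len(b.nodes):
        a, b = b, a          # `a` is now the larger partition
    a.nodes.update(b.nodes)  # move only the smaller node set
    return a

big = Partition(f"n{i}" for i in range(1000))
small = Partition(["x"])
merged = merge(small, big)
assert merged is big          # one node moved, not a thousand
assert "x" in big.nodes
```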
cc @ezyang @SherlockNoMad @EikanWang @jgong5 @wenzhe-nrv | true |
2,834,771,925 | Clarify that compile(module) only affects the forward method | zeshengzong | open | [
"triaged",
"open source",
"Stale",
"topic: not user facing"
] | 6 | CONTRIBUTOR | Fixes #141616
## Changes
- Add `Note` to Clarify how compile works with `nn.Module`
- Optimize plain url address with clickable description
## Test Result
### Before


### After


| true |
2,834,767,288 | [Partitioner] Remove unnecessary upstream nodes in dependency viewer | lingzhiz1998 | closed | [
"triaged",
"open source",
"Merged",
"ciflow/trunk",
"release notes: fx",
"topic: not user facing",
"fx",
"keep-going"
] | 16 | CONTRIBUTOR | We iterate over upstream nodes to update the partition map, but this actually does nothing: because we iterate nodes in reversed topological order https://github.com/pytorch/pytorch/pull/136608/files#diff-f2f9dd3903fd99955732eb694941fea0cb7301a58d59554787f3311d417e5615L193, no upstream nodes ever appear in the assignment. Remove it to reduce for-loop overhead, which is up to O(N * N) complexity.
cc @ezyang @SherlockNoMad @EikanWang @jgong5 @wenzhe-nrv | true |
2,834,751,082 | Pytorch model (.pth format) not loading in KServe environment | VidhyaPandi | open | [
"needs reproduction",
"module: serialization",
"triaged"
] | 2 | NONE | ### 🐛 Describe the bug
I am performing binary classification on the Titanic dataset using a deep neural network. Here is the model-preprocessing and training code.
```python
import numpy as np
import pandas as pd
import seaborn as sns
import matplotlib.pyplot as plt
import mlflow  # missing from the original snippet; required by the tracking block below
from collections import Counter
from sklearn.utils import shuffle
from sklearn.preprocessing import StandardScaler
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

df_train = pd.read_csv('titanic/train.csv')
df_test = pd.read_csv('titanic/test.csv')
df_train.drop(['Name','Ticket','Cabin'],axis=1,inplace=True)
df_test.drop(['Name','Ticket','Cabin'],axis=1,inplace=True)
sex = pd.get_dummies(df_train['Sex'],drop_first=True)
embark = pd.get_dummies(df_train['Embarked'],drop_first=True)
df_train = pd.concat([df_train,sex,embark],axis=1)
df_train.drop(['Sex','Embarked'],axis=1,inplace=True)
sex = pd.get_dummies(df_test['Sex'],drop_first=True)
embark = pd.get_dummies(df_test['Embarked'],drop_first=True)
df_test = pd.concat([df_test,sex,embark],axis=1)
df_test.drop(['Sex','Embarked'],axis=1,inplace=True)
df_train.fillna(df_train.mean(),inplace=True)
df_test.fillna(df_test.mean(),inplace=True)

Scaler1 = StandardScaler()
Scaler2 = StandardScaler()
train_columns = df_train.columns
test_columns = df_test.columns
df_train = pd.DataFrame(Scaler1.fit_transform(df_train))
df_test = pd.DataFrame(Scaler2.fit_transform(df_test))
df_train.columns = train_columns
df_test.columns = test_columns

features = df_train.iloc[:,2:].columns.tolist()
target = df_train.loc[:, 'Survived'].name
X_train = df_train.iloc[:,2:].values
y_train = df_train.loc[:, 'Survived'].values

import torch
import torch.nn as nn
from torch.nn import functional as F
from torch.autograd import Variable

class Net(nn.Module):
    def __init__(self):
        super(Net, self).__init__()
        self.fc1 = nn.Linear(8, 512)
        self.fc2 = nn.Linear(512, 512)
        self.fc3 = nn.Linear(512, 2)
        self.dropout = nn.Dropout(0.2)

    def forward(self, x):
        x = F.relu(self.fc1(x))
        x = self.dropout(x)
        x = F.relu(self.fc2(x))
        x = self.dropout(x)
        x = self.fc3(x)
        return x

model = Net()
print(model)
criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
batch_size = 64
n_epochs = 500
batch_no = len(X_train) // batch_size
train_loss = 0
train_loss_min = np.Inf

# MLflow tracking
with mlflow.start_run():
    for epoch in range(n_epochs):
        for i in range(batch_no):
            start = i * batch_size
            end = start + batch_size
            x_var = Variable(torch.FloatTensor(X_train[start:end]))
            y_var = Variable(torch.LongTensor(y_train[start:end]))
            optimizer.zero_grad()
            output = model(x_var)
            loss = criterion(output,y_var)
            loss.backward()
            optimizer.step()
            values, labels = torch.max(output, 1)
            num_right = np.sum(labels.data.numpy() == y_train[start:end])
            train_loss += loss.item()*batch_size
        train_loss = train_loss / len(X_train)
        if train_loss <= train_loss_min:
            print("Validation loss decreased ({:6f} ===> {:6f}). Saving the model...".format(train_loss_min,train_loss))
            torch.save(model.state_dict(), "model.pth")
            train_loss_min = train_loss
        if epoch % 200 == 0:
            print('')
            print("Epoch: {} \tTrain Loss: {} \tTrain Accuracy: {}".format(epoch+1, train_loss,num_right / len(y_train[start:end]) ))
    # Log the model
    mlflow.pytorch.log_model(model, "model")

print('Training Ended! ')
```
**Here is the config.properties:**
```
inference_address=http://0.0.0.0:8085
management_address=http://0.0.0.0:8086
metrics_address=http://0.0.0.0:8082
grpc_inference_port=7070
grpc_management_port=7071
enable_envvars_config=true
install_py_dep_per_model=true
enable_metrics_api=true
metrics_mode=Prometheus
NUM_WORKERS=1
number_of_netty_threads=4
job_queue_size=10
model_store=/mnt/models/model-store
model_snapshot={"name":"startup.cfg","modelCount":1,"models":{"epytrochmodel1":{"1.0":{"defaultVersion":true,"marName":"epytrochmodel1.mar","minWorkers":1,"maxWorkers":3,"batchSize":1,"maxBatchDelay":200,"responseTimeout":300}}}}
```
**Command used to generate the MAR file:**
```
torch-model-archiver \
  --model-name epytrochmodel1 \
  --version 1.0 \
  --serialized-file data/model.pth \
  --model-file model.py \
  --handler handler.py \
  --extra-files "conda.yaml,MLmodel,data/pickle_module_info.txt,python_env.yaml,requirements.txt" \
  --export-path model-store
```
Model.py:
```python
# model.py
import torch
import torch.nn as nn
from torch.nn import functional as F

class Net(nn.Module):
    def __init__(self):
        super(Net, self).__init__()
        self.fc1 = nn.Linear(8, 512)
        self.fc2 = nn.Linear(512, 512)
        self.fc3 = nn.Linear(512, 2)
        self.dropout = nn.Dropout(0.2)

    def forward(self, x):
        x = F.relu(self.fc1(x))
        x = self.dropout(x)
        x = F.relu(self.fc2(x))
        x = self.dropout(x)
        x = self.fc3(x)
        return x
```
handler.py:
```python
import torch
import torch.nn.functional as F
import logging
import os
import subprocess  # Import subprocess for running shell commands
from ts.torch_handler.base_handler import BaseHandler
from model import Net  # Import your model

logger = logging.getLogger(__name__)

class CustomHandler(BaseHandler):
    def initialize(self, context):
        """Load the model, install requirements, and initialize the handler."""
        try:
            # Install dependencies from requirements.txt
            model_dir = context.system_properties.get("model_dir")
            requirements_path = os.path.join(model_dir, "requirements.txt")
            if os.path.exists(requirements_path):
                try:
                    logger.info(f"Installing dependencies from {requirements_path}...")
                    subprocess.run(
                        ["pip", "install", "cloudpickle==3.0.0"],
                        check=True
                    )
                    logger.info("Dependencies installed successfully.")
                except subprocess.CalledProcessError as e:
                    logger.error(f"Error installing dependencies: {e}")
                    raise e
            else:
                logger.info(f"No requirements.txt found at {requirements_path}. Skipping installation.")

            # Load the model
            logger.info(f"Model directory: {model_dir}")
            self.device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
            model_path = os.path.join(model_dir, "model.pth")
            logger.info(f"Model file path: {model_path}")
            if not os.path.exists(model_path):
                raise ValueError(f"Model file not found at {model_path}")
            logger.info("Initializing model...")
            self.model = Net()
            logger.info("Model architecture initialized.")
            logger.info(f"Model state_dict keys: {list(self.model.state_dict().keys())}")
            self.model.load_state_dict(torch.load(model_path, map_location=self.device))
            logger.info("Model weights loaded.")
            self.model.to(self.device)
            logger.info("Model moved to device.")
            self.model.eval()
            logger.info("Model loaded successfully.")
        except Exception as e:
            logger.error(f"Failed to initialize handler: {str(e)}")
            raise e

    def preprocess(self, data):
        """Process input data into a tensor."""
        logger.info("Inside preprocess")
        input_data = data[0]["body"]
        if not isinstance(input_data, list):
            raise ValueError(f"Expected list data, got {type(input_data)}")
        tensor = torch.tensor(input_data, dtype=torch.float32)
        return tensor.to(self.device)

    def inference(self, inputs):
        """Run inference on the model."""
        with torch.no_grad():
            outputs = self.model(inputs)
        return outputs

    def postprocess(self, inference_output):
        """Convert model output to a JSON-serializable format."""
        if isinstance(inference_output, torch.Tensor):
            return inference_output.cpu().numpy().tolist()
        else:
            raise ValueError(f"Expected tensor output, got {type(inference_output)}")
```
Now, once the MAR file is generated with the above files, I deploy the model in the KServe environment. During deployment we hit the issue below:
2025-02-05T05:55:23,882 [INFO ] W-9000-epytorchmodel1_1.0-stdout MODEL_LOG - Failed to initialize handler: code() takes at most 16 arguments (18 given)
2025-02-05T05:55:23,882 [INFO ] W-9000-epytorchmodel1_1.0-stdout MODEL_LOG - Backend worker process died.
2025-02-05T05:55:23,883 [INFO ] W-9000-epytorchmodel1_1.0-stdout MODEL_LOG - Traceback (most recent call last):
2025-02-05T05:55:23,883 [INFO ] W-9000-epytorchmodel1_1.0-stdout MODEL_LOG - File "/home/venv/lib/python3.9/site-packages/ts/model_service_worker.py", line 258, in <module>
2025-02-05T05:55:23,883 [INFO ] W-9000-epytorchmodel1_1.0-stdout MODEL_LOG - worker.run_server()
2025-02-05T05:55:23,883 [INFO ] W-9000-epytorchmodel1_1.0-stdout MODEL_LOG - File "/home/venv/lib/python3.9/site-packages/ts/model_service_worker.py", line 226, in run_server
2025-02-05T05:55:23,883 [INFO ] W-9000-epytorchmodel1_1.0-stdout MODEL_LOG - self.handle_connection(cl_socket)
2025-02-05T05:55:23,883 [INFO ] W-9000-epytorchmodel1_1.0-stdout MODEL_LOG - File "/home/venv/lib/python3.9/site-packages/ts/model_service_worker.py", line 189, in handle_connection
2025-02-05T05:55:23,883 [INFO ] W-9000-epytorchmodel1_1.0-stdout MODEL_LOG - service, result, code = self.load_model(msg)
2025-02-05T05:55:23,883 [INFO ] W-9000-epytorchmodel1_1.0-stdout MODEL_LOG - File "/home/venv/lib/python3.9/site-packages/ts/model_service_worker.py", line 131, in load_model
2025-02-05T05:55:23,884 [INFO ] W-9000-epytorchmodel1_1.0-stdout MODEL_LOG - service = model_loader.load(
2025-02-05T05:55:23,884 [INFO ] W-9000-epytorchmodel1_1.0-stdout MODEL_LOG - File "/home/venv/lib/python3.9/site-packages/ts/model_loader.py", line 135, in load
2025-02-05T05:55:23,884 [INFO ] W-9000-epytorchmodel1_1.0-stdout MODEL_LOG - initialize_fn(service.context)
2025-02-05T05:55:23,884 [INFO ] W-9000-epytorchmodel1_1.0-stdout MODEL_LOG - File "/home/model-server/tmp/models/3ce4806b087c40afae527fd10c5f816e/handler.py", line 54, in initialize
2025-02-05T05:55:23,884 [INFO ] W-9000-epytorchmodel1_1.0-stdout MODEL_LOG - raise e
2025-02-05T05:55:23,884 [INFO ] W-9000-epytorchmodel1_1.0-stdout MODEL_LOG - File "/home/model-server/tmp/models/3ce4806b087c40afae527fd10c5f816e/handler.py", line 45, in initialize
2025-02-05T05:55:23,884 [INFO ] W-9000-epytorchmodel1_1.0-stdout MODEL_LOG - self.model.load_state_dict(torch.load(model_path, map_location=self.device))
2025-02-05T05:55:23,885 [INFO ] W-9000-epytorchmodel1_1.0-stdout MODEL_LOG - File "/home/venv/lib/python3.9/site-packages/torch/serialization.py", line 1014, in load
2025-02-05T05:55:23,885 [INFO ] W-9000-epytorchmodel1_1.0-stdout MODEL_LOG - return _load(opened_zipfile,
2025-02-05T05:55:23,885 [INFO ] W-9000-epytorchmodel1_1.0-stdout MODEL_LOG - File "/home/venv/lib/python3.9/site-packages/torch/serialization.py", line 1422, in _load
2025-02-05T05:55:23,885 [INFO ] W-9000-epytorchmodel1_1.0-stdout MODEL_LOG - result = unpickler.load()
2025-02-05T05:55:23,885 [INFO ] W-9000-epytorchmodel1_1.0-stdout MODEL_LOG - TypeError: code() takes at most 16 arguments (18 given)
Here is the complete log from the container:
[logs.txt](https://github.com/user-attachments/files/18685871/logs.txt)
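Editor's note: this `TypeError` is raised while `unpickler.load()` reconstructs a pickled Python *code object*: `types.CodeType` takes 16 positional arguments on the Python 3.9 serving runtime but 18 on Python 3.11+, which matches "takes at most 16 arguments (18 given)". That strongly suggests `model.pth` was produced under a newer Python and contains more than plain tensors. Re-saving the checkpoint as a pure `state_dict` (and/or matching the Python minor version between training and serving) keeps code objects out of the pickle stream; a minimal sketch:

```python
import io

import torch

# A pure state_dict contains only tensors, so the pickle stream carries no
# Python code objects and stays loadable across Python versions.
model = torch.nn.Linear(4, 2)

checkpoint = io.BytesIO()  # stand-in for "model.pth"
torch.save(model.state_dict(), checkpoint)

checkpoint.seek(0)
state = torch.load(checkpoint, map_location="cpu")

restored = torch.nn.Linear(4, 2)
restored.load_state_dict(state)
```

If the checkpoint must stay as-is, the other way out is aligning the serving image's Python version with the one that wrote the file.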
### Versions
torch:2.1.0
torch-model-archiver: 0.12.0
TorchServe: 0.9.0
Kserve : 0.13
Knative: 1.15.2
Istio : 1.18.0
kubeadm version:1.29
Kubernetes version:v1.28.0
OS: centos
VERSION=8
cc @mruberry @mikaylagawarecki | true |
2,834,727,889 | add `torch.float4_e2m1fn_x2` to PyTorch | vkuzo | closed | [
"release notes: quantization"
] | 4 | CONTRIBUTOR | Summary:
Adds the `torch.float4_e2m1fn_x2` dtype to PyTorch, as detailed in
https://github.com/pytorch/pytorch/issues/146414. Please see the issue for a detailed definition of the format.
Note that I decided to keep the casts out of this to significantly simplify the code, as defining casting between packed and unpacked formats will be tricky using the existing casting machinery.
Example of basic functionality:
```python
import torch
# creation with empty
x0 = torch.empty(4, 4, dtype=torch.float4_e2m1fn_x2)
# printing, prints the uint8 representation of the stored values
print(x0)
# view as other dtype
x0.view(torch.uint8).view(torch.float4_e2m1fn_x2)
```
Done in this PR:
* tensor creation and tensor printing works (no other ops defined)
For future PRs:
* torch._scaled_mm
* PT2
* various cleanups (detailed in comments with issue numbers)
Test Plan:
```
pytest test/quantization/core/experimental/test_floatx.py -s
```
cc @yanbing-j @albanD @kadeng @penguinwu @jgong5 @mingfeima @XiaobingSuper @sanchitintel @ashokei @jingxu10 | true |
2,834,691,963 | Clean up op BC check list | houseroad | closed | [
"fb-exported",
"Merged",
"ciflow/trunk",
"topic: not user facing"
] | 12 | MEMBER | Summary: Remove the expired ones
Test Plan: ci
Differential Revision: D69226556
| true |
2,834,690,487 | Torch Dynamo Export Failed on RetinaNet from Torchvision | YixuanSeanZhou | open | [
"oncall: pt2",
"export-triaged",
"oncall: export"
] | 1 | NONE | ### 🐛 Describe the bug
Torch Dynamo export fails.
```python
import torch
import torchvision
from torch._export import capture_pre_autograd_graph
m = torchvision.models.detection.retinanet_resnet50_fpn(weights=torchvision.models.detection.RetinaNet_ResNet50_FPN_Weights.DEFAULT).eval()
args = ([torch.rand(3, 320, 320)],)  # example input; detection models take a list of image tensors
capture_pre_autograd_graph(m, args)  # fails with the AssertionError below
```
Stack trace:
```
File "/home/yixzhou/.cache/bazel/_bazel_yixzhou/70f13699ad2ff0eae5287e6db36af3df/execroot/nuro/bazel-out/k8-opt/bin/experimental/yixzhou/quantization/torchxla/quantize_torchvision_export.runfiles/nuro/base/py/app.py", line 125, in run
ret = main(remaining_args)
File "/home/yixzhou/.cache/bazel/_bazel_yixzhou/70f13699ad2ff0eae5287e6db36af3df/execroot/nuro/bazel-out/k8-opt/bin/experimental/yixzhou/quantization/torchxla/quantize_torchvision_export.runfiles/nuro/experimental/yixzhou/quantization/torchxla/quantize_torchvision_export.py", line 138, in main
quantize_and_export_model(model_name, workdir)
File "/home/yixzhou/.cache/bazel/_bazel_yixzhou/70f13699ad2ff0eae5287e6db36af3df/execroot/nuro/bazel-out/k8-opt/bin/experimental/yixzhou/quantization/torchxla/quantize_torchvision_export.runfiles/nuro/experimental/yixzhou/quantization/torchxla/quantize_torchvision_export.py", line 110, in quantize_and_export_model
m = capture_pre_autograd_graph(m, args)
File "/home/yixzhou/.cache/bazel/_bazel_yixzhou/70f13699ad2ff0eae5287e6db36af3df/execroot/nuro/bazel-out/k8-opt/bin/experimental/yixzhou/quantization/torchxla/quantize_torchvision_export.runfiles/nuro/external/pypi__torch_2_3_0_cu121_x86_64/torch/_export/__init__.py", line 151, in capture_pre_autograd_graph
m = torch._dynamo.export(
File "/home/yixzhou/.cache/bazel/_bazel_yixzhou/70f13699ad2ff0eae5287e6db36af3df/execroot/nuro/bazel-out/k8-opt/bin/experimental/yixzhou/quantization/torchxla/quantize_torchvision_export.runfiles/nuro/external/pypi__torch_2_3_0_cu121_x86_64/torch/_dynamo/eval_frame.py", line 1311, in inner
result_traced = opt_f(*args, **kwargs)
File "/home/yixzhou/.cache/bazel/_bazel_yixzhou/70f13699ad2ff0eae5287e6db36af3df/execroot/nuro/bazel-out/k8-opt/bin/experimental/yixzhou/quantization/torchxla/quantize_torchvision_export.runfiles/nuro/external/pypi__torch_2_3_0_cu121_x86_64/torch/nn/modules/module.py", line 1532, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
File "/home/yixzhou/.cache/bazel/_bazel_yixzhou/70f13699ad2ff0eae5287e6db36af3df/execroot/nuro/bazel-out/k8-opt/bin/experimental/yixzhou/quantization/torchxla/quantize_torchvision_export.runfiles/nuro/external/pypi__torch_2_3_0_cu121_x86_64/torch/nn/modules/module.py", line 1541, in _call_impl
return forward_call(*args, **kwargs)
File "/home/yixzhou/.cache/bazel/_bazel_yixzhou/70f13699ad2ff0eae5287e6db36af3df/execroot/nuro/bazel-out/k8-opt/bin/experimental/yixzhou/quantization/torchxla/quantize_torchvision_export.runfiles/nuro/external/pypi__torch_2_3_0_cu121_x86_64/torch/_dynamo/eval_frame.py", line 451, in _fn
return fn(*args, **kwargs)
File "/home/yixzhou/.cache/bazel/_bazel_yixzhou/70f13699ad2ff0eae5287e6db36af3df/execroot/nuro/bazel-out/k8-opt/bin/experimental/yixzhou/quantization/torchxla/quantize_torchvision_export.runfiles/nuro/external/pypi__torch_2_3_0_cu121_x86_64/torch/nn/modules/module.py", line 1532, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
File "/home/yixzhou/.cache/bazel/_bazel_yixzhou/70f13699ad2ff0eae5287e6db36af3df/execroot/nuro/bazel-out/k8-opt/bin/experimental/yixzhou/quantization/torchxla/quantize_torchvision_export.runfiles/nuro/external/pypi__torch_2_3_0_cu121_x86_64/torch/nn/modules/module.py", line 1541, in _call_impl
return forward_call(*args, **kwargs)
File "/home/yixzhou/.cache/bazel/_bazel_yixzhou/70f13699ad2ff0eae5287e6db36af3df/execroot/nuro/bazel-out/k8-opt/bin/experimental/yixzhou/quantization/torchxla/quantize_torchvision_export.runfiles/nuro/external/pypi__torch_2_3_0_cu121_x86_64/torch/_dynamo/convert_frame.py", line 921, in catch_errors
return callback(frame, cache_entry, hooks, frame_state, skip=1)
File "/home/yixzhou/.cache/bazel/_bazel_yixzhou/70f13699ad2ff0eae5287e6db36af3df/execroot/nuro/bazel-out/k8-opt/bin/experimental/yixzhou/quantization/torchxla/quantize_torchvision_export.runfiles/nuro/external/pypi__torch_2_3_0_cu121_x86_64/torch/_dynamo/convert_frame.py", line 400, in _convert_frame_assert
return _compile(
File "/home/yixzhou/.cache/bazel/_bazel_yixzhou/70f13699ad2ff0eae5287e6db36af3df/external/local_config_python/lib/python3.10/contextlib.py", line 79, in inner
return func(*args, **kwds)
File "/home/yixzhou/.cache/bazel/_bazel_yixzhou/70f13699ad2ff0eae5287e6db36af3df/execroot/nuro/bazel-out/k8-opt/bin/experimental/yixzhou/quantization/torchxla/quantize_torchvision_export.runfiles/nuro/external/pypi__torch_2_3_0_cu121_x86_64/torch/_dynamo/convert_frame.py", line 676, in _compile
guarded_code = compile_inner(code, one_graph, hooks, transform)
File "/home/yixzhou/.cache/bazel/_bazel_yixzhou/70f13699ad2ff0eae5287e6db36af3df/execroot/nuro/bazel-out/k8-opt/bin/experimental/yixzhou/quantization/torchxla/quantize_torchvision_export.runfiles/nuro/external/pypi__torch_2_3_0_cu121_x86_64/torch/_dynamo/utils.py", line 262, in time_wrapper
r = func(*args, **kwargs)
File "/home/yixzhou/.cache/bazel/_bazel_yixzhou/70f13699ad2ff0eae5287e6db36af3df/execroot/nuro/bazel-out/k8-opt/bin/experimental/yixzhou/quantization/torchxla/quantize_torchvision_export.runfiles/nuro/external/pypi__torch_2_3_0_cu121_x86_64/torch/_dynamo/convert_frame.py", line 535, in compile_inner
out_code = transform_code_object(code, transform)
File "/home/yixzhou/.cache/bazel/_bazel_yixzhou/70f13699ad2ff0eae5287e6db36af3df/execroot/nuro/bazel-out/k8-opt/bin/experimental/yixzhou/quantization/torchxla/quantize_torchvision_export.runfiles/nuro/external/pypi__torch_2_3_0_cu121_x86_64/torch/_dynamo/bytecode_transformation.py", line 1036, in transform_code_object
transformations(instructions, code_options)
File "/home/yixzhou/.cache/bazel/_bazel_yixzhou/70f13699ad2ff0eae5287e6db36af3df/execroot/nuro/bazel-out/k8-opt/bin/experimental/yixzhou/quantization/torchxla/quantize_torchvision_export.runfiles/nuro/external/pypi__torch_2_3_0_cu121_x86_64/torch/_dynamo/convert_frame.py", line 165, in _fn
return fn(*args, **kwargs)
File "/home/yixzhou/.cache/bazel/_bazel_yixzhou/70f13699ad2ff0eae5287e6db36af3df/execroot/nuro/bazel-out/k8-opt/bin/experimental/yixzhou/quantization/torchxla/quantize_torchvision_export.runfiles/nuro/external/pypi__torch_2_3_0_cu121_x86_64/torch/_dynamo/convert_frame.py", line 500, in transform
tracer.run()
File "/home/yixzhou/.cache/bazel/_bazel_yixzhou/70f13699ad2ff0eae5287e6db36af3df/execroot/nuro/bazel-out/k8-opt/bin/experimental/yixzhou/quantization/torchxla/quantize_torchvision_export.runfiles/nuro/external/pypi__torch_2_3_0_cu121_x86_64/torch/_dynamo/symbolic_convert.py", line 2149, in run
super().run()
File "/home/yixzhou/.cache/bazel/_bazel_yixzhou/70f13699ad2ff0eae5287e6db36af3df/execroot/nuro/bazel-out/k8-opt/bin/experimental/yixzhou/quantization/torchxla/quantize_torchvision_export.runfiles/nuro/external/pypi__torch_2_3_0_cu121_x86_64/torch/_dynamo/symbolic_convert.py", line 810, in run
and self.step()
File "/home/yixzhou/.cache/bazel/_bazel_yixzhou/70f13699ad2ff0eae5287e6db36af3df/execroot/nuro/bazel-out/k8-opt/bin/experimental/yixzhou/quantization/torchxla/quantize_torchvision_export.runfiles/nuro/external/pypi__torch_2_3_0_cu121_x86_64/torch/_dynamo/symbolic_convert.py", line 773, in step
getattr(self, inst.opname)(inst)
File "/home/yixzhou/.cache/bazel/_bazel_yixzhou/70f13699ad2ff0eae5287e6db36af3df/execroot/nuro/bazel-out/k8-opt/bin/experimental/yixzhou/quantization/torchxla/quantize_torchvision_export.runfiles/nuro/external/pypi__torch_2_3_0_cu121_x86_64/torch/_dynamo/symbolic_convert.py", line 489, in wrapper
return inner_fn(self, inst)
File "/home/yixzhou/.cache/bazel/_bazel_yixzhou/70f13699ad2ff0eae5287e6db36af3df/execroot/nuro/bazel-out/k8-opt/bin/experimental/yixzhou/quantization/torchxla/quantize_torchvision_export.runfiles/nuro/external/pypi__torch_2_3_0_cu121_x86_64/torch/_dynamo/symbolic_convert.py", line 1219, in CALL_FUNCTION
self.call_function(fn, args, {})
File "/home/yixzhou/.cache/bazel/_bazel_yixzhou/70f13699ad2ff0eae5287e6db36af3df/execroot/nuro/bazel-out/k8-opt/bin/experimental/yixzhou/quantization/torchxla/quantize_torchvision_export.runfiles/nuro/external/pypi__torch_2_3_0_cu121_x86_64/torch/_dynamo/symbolic_convert.py", line 674, in call_function
self.push(fn.call_function(self, args, kwargs))
File "/home/yixzhou/.cache/bazel/_bazel_yixzhou/70f13699ad2ff0eae5287e6db36af3df/execroot/nuro/bazel-out/k8-opt/bin/experimental/yixzhou/quantization/torchxla/quantize_torchvision_export.runfiles/nuro/external/pypi__torch_2_3_0_cu121_x86_64/torch/_dynamo/variables/nn_module.py", line 336, in call_function
return tx.inline_user_function_return(
File "/home/yixzhou/.cache/bazel/_bazel_yixzhou/70f13699ad2ff0eae5287e6db36af3df/execroot/nuro/bazel-out/k8-opt/bin/experimental/yixzhou/quantization/torchxla/quantize_torchvision_export.runfiles/nuro/external/pypi__torch_2_3_0_cu121_x86_64/torch/_dynamo/symbolic_convert.py", line 680, in inline_user_function_return
return InliningInstructionTranslator.inline_call(self, fn, args, kwargs)
File "/home/yixzhou/.cache/bazel/_bazel_yixzhou/70f13699ad2ff0eae5287e6db36af3df/execroot/nuro/bazel-out/k8-opt/bin/experimental/yixzhou/quantization/torchxla/quantize_torchvision_export.runfiles/nuro/external/pypi__torch_2_3_0_cu121_x86_64/torch/_dynamo/symbolic_convert.py", line 2285, in inline_call
return cls.inline_call_(parent, func, args, kwargs)
File "/home/yixzhou/.cache/bazel/_bazel_yixzhou/70f13699ad2ff0eae5287e6db36af3df/execroot/nuro/bazel-out/k8-opt/bin/experimental/yixzhou/quantization/torchxla/quantize_torchvision_export.runfiles/nuro/external/pypi__torch_2_3_0_cu121_x86_64/torch/_dynamo/symbolic_convert.py", line 2399, in inline_call_
tracer.run()
File "/home/yixzhou/.cache/bazel/_bazel_yixzhou/70f13699ad2ff0eae5287e6db36af3df/execroot/nuro/bazel-out/k8-opt/bin/experimental/yixzhou/quantization/torchxla/quantize_torchvision_export.runfiles/nuro/external/pypi__torch_2_3_0_cu121_x86_64/torch/_dynamo/symbolic_convert.py", line 810, in run
and self.step()
File "/home/yixzhou/.cache/bazel/_bazel_yixzhou/70f13699ad2ff0eae5287e6db36af3df/execroot/nuro/bazel-out/k8-opt/bin/experimental/yixzhou/quantization/torchxla/quantize_torchvision_export.runfiles/nuro/external/pypi__torch_2_3_0_cu121_x86_64/torch/_dynamo/symbolic_convert.py", line 773, in step
getattr(self, inst.opname)(inst)
File "/home/yixzhou/.cache/bazel/_bazel_yixzhou/70f13699ad2ff0eae5287e6db36af3df/execroot/nuro/bazel-out/k8-opt/bin/experimental/yixzhou/quantization/torchxla/quantize_torchvision_export.runfiles/nuro/external/pypi__torch_2_3_0_cu121_x86_64/torch/_dynamo/symbolic_convert.py", line 489, in wrapper
return inner_fn(self, inst)
File "/home/yixzhou/.cache/bazel/_bazel_yixzhou/70f13699ad2ff0eae5287e6db36af3df/execroot/nuro/bazel-out/k8-opt/bin/experimental/yixzhou/quantization/torchxla/quantize_torchvision_export.runfiles/nuro/external/pypi__torch_2_3_0_cu121_x86_64/torch/_dynamo/symbolic_convert.py", line 1260, in CALL_FUNCTION_EX
self.call_function(fn, argsvars.items, kwargsvars)
File "/home/yixzhou/.cache/bazel/_bazel_yixzhou/70f13699ad2ff0eae5287e6db36af3df/execroot/nuro/bazel-out/k8-opt/bin/experimental/yixzhou/quantization/torchxla/quantize_torchvision_export.runfiles/nuro/external/pypi__torch_2_3_0_cu121_x86_64/torch/_dynamo/symbolic_convert.py", line 674, in call_function
self.push(fn.call_function(self, args, kwargs))
File "/home/yixzhou/.cache/bazel/_bazel_yixzhou/70f13699ad2ff0eae5287e6db36af3df/execroot/nuro/bazel-out/k8-opt/bin/experimental/yixzhou/quantization/torchxla/quantize_torchvision_export.runfiles/nuro/external/pypi__torch_2_3_0_cu121_x86_64/torch/_dynamo/variables/functions.py", line 335, in call_function
return super().call_function(tx, args, kwargs)
File "/home/yixzhou/.cache/bazel/_bazel_yixzhou/70f13699ad2ff0eae5287e6db36af3df/execroot/nuro/bazel-out/k8-opt/bin/experimental/yixzhou/quantization/torchxla/quantize_torchvision_export.runfiles/nuro/external/pypi__torch_2_3_0_cu121_x86_64/torch/_dynamo/variables/functions.py", line 289, in call_function
return super().call_function(tx, args, kwargs)
File "/home/yixzhou/.cache/bazel/_bazel_yixzhou/70f13699ad2ff0eae5287e6db36af3df/execroot/nuro/bazel-out/k8-opt/bin/experimental/yixzhou/quantization/torchxla/quantize_torchvision_export.runfiles/nuro/external/pypi__torch_2_3_0_cu121_x86_64/torch/_dynamo/variables/functions.py", line 90, in call_function
return tx.inline_user_function_return(
File "/home/yixzhou/.cache/bazel/_bazel_yixzhou/70f13699ad2ff0eae5287e6db36af3df/execroot/nuro/bazel-out/k8-opt/bin/experimental/yixzhou/quantization/torchxla/quantize_torchvision_export.runfiles/nuro/external/pypi__torch_2_3_0_cu121_x86_64/torch/_dynamo/symbolic_convert.py", line 680, in inline_user_function_return
return InliningInstructionTranslator.inline_call(self, fn, args, kwargs)
File "/home/yixzhou/.cache/bazel/_bazel_yixzhou/70f13699ad2ff0eae5287e6db36af3df/execroot/nuro/bazel-out/k8-opt/bin/experimental/yixzhou/quantization/torchxla/quantize_torchvision_export.runfiles/nuro/external/pypi__torch_2_3_0_cu121_x86_64/torch/_dynamo/symbolic_convert.py", line 2285, in inline_call
return cls.inline_call_(parent, func, args, kwargs)
File "/home/yixzhou/.cache/bazel/_bazel_yixzhou/70f13699ad2ff0eae5287e6db36af3df/execroot/nuro/bazel-out/k8-opt/bin/experimental/yixzhou/quantization/torchxla/quantize_torchvision_export.runfiles/nuro/external/pypi__torch_2_3_0_cu121_x86_64/torch/_dynamo/symbolic_convert.py", line 2399, in inline_call_
tracer.run()
File "/home/yixzhou/.cache/bazel/_bazel_yixzhou/70f13699ad2ff0eae5287e6db36af3df/execroot/nuro/bazel-out/k8-opt/bin/experimental/yixzhou/quantization/torchxla/quantize_torchvision_export.runfiles/nuro/external/pypi__torch_2_3_0_cu121_x86_64/torch/_dynamo/symbolic_convert.py", line 810, in run
and self.step()
File "/home/yixzhou/.cache/bazel/_bazel_yixzhou/70f13699ad2ff0eae5287e6db36af3df/execroot/nuro/bazel-out/k8-opt/bin/experimental/yixzhou/quantization/torchxla/quantize_torchvision_export.runfiles/nuro/external/pypi__torch_2_3_0_cu121_x86_64/torch/_dynamo/symbolic_convert.py", line 773, in step
getattr(self, inst.opname)(inst)
File "/home/yixzhou/.cache/bazel/_bazel_yixzhou/70f13699ad2ff0eae5287e6db36af3df/execroot/nuro/bazel-out/k8-opt/bin/experimental/yixzhou/quantization/torchxla/quantize_torchvision_export.runfiles/nuro/external/pypi__torch_2_3_0_cu121_x86_64/torch/_dynamo/symbolic_convert.py", line 489, in wrapper
return inner_fn(self, inst)
File "/home/yixzhou/.cache/bazel/_bazel_yixzhou/70f13699ad2ff0eae5287e6db36af3df/execroot/nuro/bazel-out/k8-opt/bin/experimental/yixzhou/quantization/torchxla/quantize_torchvision_export.runfiles/nuro/external/pypi__torch_2_3_0_cu121_x86_64/torch/_dynamo/symbolic_convert.py", line 1219, in CALL_FUNCTION
self.call_function(fn, args, {})
File "/home/yixzhou/.cache/bazel/_bazel_yixzhou/70f13699ad2ff0eae5287e6db36af3df/execroot/nuro/bazel-out/k8-opt/bin/experimental/yixzhou/quantization/torchxla/quantize_torchvision_export.runfiles/nuro/external/pypi__torch_2_3_0_cu121_x86_64/torch/_dynamo/symbolic_convert.py", line 674, in call_function
self.push(fn.call_function(self, args, kwargs))
File "/home/yixzhou/.cache/bazel/_bazel_yixzhou/70f13699ad2ff0eae5287e6db36af3df/execroot/nuro/bazel-out/k8-opt/bin/experimental/yixzhou/quantization/torchxla/quantize_torchvision_export.runfiles/nuro/external/pypi__torch_2_3_0_cu121_x86_64/torch/_dynamo/variables/functions.py", line 335, in call_function
return super().call_function(tx, args, kwargs)
File "/home/yixzhou/.cache/bazel/_bazel_yixzhou/70f13699ad2ff0eae5287e6db36af3df/execroot/nuro/bazel-out/k8-opt/bin/experimental/yixzhou/quantization/torchxla/quantize_torchvision_export.runfiles/nuro/external/pypi__torch_2_3_0_cu121_x86_64/torch/_dynamo/variables/functions.py", line 289, in call_function
return super().call_function(tx, args, kwargs)
File "/home/yixzhou/.cache/bazel/_bazel_yixzhou/70f13699ad2ff0eae5287e6db36af3df/execroot/nuro/bazel-out/k8-opt/bin/experimental/yixzhou/quantization/torchxla/quantize_torchvision_export.runfiles/nuro/external/pypi__torch_2_3_0_cu121_x86_64/torch/_dynamo/variables/functions.py", line 90, in call_function
return tx.inline_user_function_return(
File "/home/yixzhou/.cache/bazel/_bazel_yixzhou/70f13699ad2ff0eae5287e6db36af3df/execroot/nuro/bazel-out/k8-opt/bin/experimental/yixzhou/quantization/torchxla/quantize_torchvision_export.runfiles/nuro/external/pypi__torch_2_3_0_cu121_x86_64/torch/_dynamo/symbolic_convert.py", line 680, in inline_user_function_return
return InliningInstructionTranslator.inline_call(self, fn, args, kwargs)
File "/home/yixzhou/.cache/bazel/_bazel_yixzhou/70f13699ad2ff0eae5287e6db36af3df/execroot/nuro/bazel-out/k8-opt/bin/experimental/yixzhou/quantization/torchxla/quantize_torchvision_export.runfiles/nuro/external/pypi__torch_2_3_0_cu121_x86_64/torch/_dynamo/symbolic_convert.py", line 2285, in inline_call
return cls.inline_call_(parent, func, args, kwargs)
File "/home/yixzhou/.cache/bazel/_bazel_yixzhou/70f13699ad2ff0eae5287e6db36af3df/execroot/nuro/bazel-out/k8-opt/bin/experimental/yixzhou/quantization/torchxla/quantize_torchvision_export.runfiles/nuro/external/pypi__torch_2_3_0_cu121_x86_64/torch/_dynamo/symbolic_convert.py", line 2399, in inline_call_
tracer.run()
File "/home/yixzhou/.cache/bazel/_bazel_yixzhou/70f13699ad2ff0eae5287e6db36af3df/execroot/nuro/bazel-out/k8-opt/bin/experimental/yixzhou/quantization/torchxla/quantize_torchvision_export.runfiles/nuro/external/pypi__torch_2_3_0_cu121_x86_64/torch/_dynamo/symbolic_convert.py", line 810, in run
and self.step()
File "/home/yixzhou/.cache/bazel/_bazel_yixzhou/70f13699ad2ff0eae5287e6db36af3df/execroot/nuro/bazel-out/k8-opt/bin/experimental/yixzhou/quantization/torchxla/quantize_torchvision_export.runfiles/nuro/external/pypi__torch_2_3_0_cu121_x86_64/torch/_dynamo/symbolic_convert.py", line 773, in step
getattr(self, inst.opname)(inst)
File "/home/yixzhou/.cache/bazel/_bazel_yixzhou/70f13699ad2ff0eae5287e6db36af3df/execroot/nuro/bazel-out/k8-opt/bin/experimental/yixzhou/quantization/torchxla/quantize_torchvision_export.runfiles/nuro/external/pypi__torch_2_3_0_cu121_x86_64/torch/_dynamo/symbolic_convert.py", line 1325, in STORE_ATTR
assert (
AssertionError: Mutating module attribute cell_anchors during export.
```
### Versions
It is a bit complicated to run this script because we use our own Bazel build system, but here are the relevant packages:
```
torch==2.3.0+cu121
torchvision==0.18.0+cu121
torchaudio==2.3.0+cu121
```
cc @chauhang @penguinwu @avikchaudhuri @gmagogsfm @zhxchen17 @tugsbayasgalan @angelayi @suo @ydwu4 | true |
2,834,635,373 | How to pip3 torch==2.1.0.dev20230822+cu118 | minhphi1712 | closed | [
"module: binaries",
"triaged"
] | 1 | NONE |
> I’ve tried installing this specific version multiple times, but the issue keeps occurring.
pip3 install torch==2.1.0.dev20230822+cu118
```
ERROR: Could not find a version that satisfies the requirement torch==2.1.0.dev20230822+cu118 (from versions: 1.13.0, 1.13.1, 2.0.0, 2.0.1, 2.1.0, 2.1.1, 2.1.2, 2.2.0, 2.2.1, 2.2.2, 2.3.0, 2.3.1, 2.4.0, 2.4.1, 2.5.0, 2.5.1, 2.6.0)
ERROR: No matching distribution found for torch==2.1.0.dev20230822+cu118
```
> Please help me with a guide to solve this issue <3
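Editor's note: dated dev builds are published only to the PyTorch nightly index, never to PyPI, and old nightlies are purged over time, so this exact build may already be gone even with the right index. The install command would be:

```shell
# Nightly wheels live on the PyTorch nightly index, not PyPI; dated builds
# are eventually removed, so a 2023 nightly may simply no longer exist.
pip3 install --pre "torch==2.1.0.dev20230822+cu118" \
    --index-url https://download.pytorch.org/whl/nightly/cu118
```

If the wheel has been purged, the practical fallback is a nearby stable release, e.g. `torch==2.1.0+cu118` from `https://download.pytorch.org/whl/cu118`, or a previously cached copy of the nightly.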
### Suggest a potential alternative/fix
_No response_
cc @seemethere @malfet @osalpekar @atalman | true |
2,834,630,334 | [ROCm][TunableOp] Close offline tuning results file when offline tuning is disabled. | naromero77amd | closed | [
"module: rocm",
"open source",
"Merged",
"ciflow/trunk",
"topic: not user facing",
"ciflow/rocm"
] | 9 | COLLABORATOR | This PR fixes a UT breakage that was reported internally and is considered high priority. When `tunable.record_untuned_enable(False)` is invoked, we now flush the results of the untuned GEMM file.
Offline tuning I/O currently has no member function to set the untuned-results filename or to write untuned results to file. When running back-to-back unit tests, the same ofstream therefore ends up being reused across UTs, which, given the way the UTs are executed, can lead to unexpected failures.
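As a rough Python analogue of the lifecycle being fixed (hypothetical sketch; the actual change lives in the C++ TunableOp I/O layer): disabling recording should flush and close the results stream, so the next test gets a fresh stream instead of a stale handle.

```python
import os
import tempfile

class UntunedResultsRecorder:
    """Toy model of the offline-tuning results stream lifecycle."""

    def __init__(self, path):
        self.path = path
        self._stream = None

    def record(self, line):
        if self._stream is None:  # lazily opened, like the ofstream
            self._stream = open(self.path, "a")
        self._stream.write(line + "\n")

    def set_enabled(self, enabled):
        # The fix: disabling recording flushes and closes the stream so
        # back-to-back unit tests never reuse a stale handle.
        if not enabled and self._stream is not None:
            self._stream.flush()
            self._stream.close()
            self._stream = None

path = os.path.join(tempfile.mkdtemp(), "untuned_gemm.csv")
rec = UntunedResultsRecorder(path)
rec.record("GemmTunableOp,nt,128x128x128")
rec.set_enabled(False)  # results are now safely on disk
```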
cc: @jfactory07
cc @jeffdaily @sunway513 @jithunnair-amd @pruthvistony @ROCmSupport @dllehr-amd @jataylo @hongxiayang | true |
2,834,622,398 | add python root bin to windows load path. | xuhancn | closed | [
"module: windows",
"open source",
"Merged",
"ciflow/trunk",
"topic: not user facing",
"ciflow/binaries_wheel",
"intel"
] | 28 | COLLABORATOR | This PR extends the Windows DLL load path list with the Python root `bin` directory.
It makes PyTorch more robust and compatible with more dependency libraries, such as `intel-pti`.
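Editor's note: a minimal sketch of the mechanism involved (assuming the standard `os.add_dll_directory` API; the PR's actual path computation may differ). On Windows, Python 3.8+ stopped searching `PATH` for DLL dependencies, so extra directories such as the interpreter root's `bin` must be registered explicitly:

```python
import os
import sys

# Hypothetical sketch: register the Python root "bin" directory (where
# packages such as intel-pti may drop DLLs) on the Windows DLL search path.
bin_dir = os.path.join(os.path.dirname(sys.executable), "bin")

if hasattr(os, "add_dll_directory") and os.path.isdir(bin_dir):
    # os.add_dll_directory only exists (and is only needed) on Windows.
    os.add_dll_directory(bin_dir)
```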
cc @peterjc123 @mszhanyi @skyline75489 @nbcsm @iremyux @Blackhex @jgong5 @mingfeima @XiaobingSuper @sanchitintel @ashokei @jingxu10 | true |
2,834,615,137 | RuntimeError: PyTorch is not linked with support for new_one devices | xiangxinhello | closed | [
"triaged",
"module: PrivateUse1"
] | 8 | NONE | ### 🐛 Describe the bug
```python
import torch
import pdb
class _OpenRegNewOne:
    pass
torch.utils.rename_privateuse1_backend("new_one")
torch._register_device_module('new_one', _OpenRegNewOne())
unsupported_dtype = [torch.quint8, torch.quint4x2, torch.quint2x4, torch.qint32, torch.qint8]
torch.utils.generate_methods_for_privateuse1_backend(for_tensor=True, for_module=True, for_storage=True,
                                                     unsupported_dtype=unsupported_dtype)
a1 = torch.Tensor(3,4).to("new_one")
```
```
Traceback (most recent call last):
  File "/workspace/mnt/storage/pytorch_2.5/test.py", line 13, in <module>
    a1 = torch.rand(1,4).to("new_one")
RuntimeError: PyTorch is not linked with support for new_one devices
```
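Editor's note: `rename_privateuse1_backend` plus `_register_device_module` only wire up the Python-side naming and generated helper methods. Actually moving tensors to the device requires a C++ extension that registers kernels (allocator, device guard, copy ops, ...) for the `PrivateUse1` dispatch key, like the `open_registration_extension` test extension in the PyTorch tree. Without one, the failure above is expected; a sketch of the pure-Python behavior:

```python
import torch

# Renaming the backend succeeds, but with no C++ kernels registered for the
# PrivateUse1 dispatch key, tensor movement to the device must raise.
torch.utils.rename_privateuse1_backend("new_one")

try:
    torch.rand(1, 4).to("new_one")
    raised = False
except RuntimeError as exc:
    raised = True
    message = str(exc)

assert raised
assert "new_one" in message
```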
### Versions
PyTorch version: 2.5.0a0+gita8d6afb
Is debug build: True
CUDA used to build PyTorch: 12.4
ROCM used to build PyTorch: N/A
OS: Ubuntu 22.04.4 LTS (x86_64)
GCC version: (Ubuntu 11.4.0-1ubuntu1~22.04) 11.4.0
Clang version: Could not collect
CMake version: version 3.31.2
Libc version: glibc-2.35
Python version: 3.10.16 (main, Dec 11 2024, 16:24:50) [GCC 11.2.0] (64-bit runtime)
Python platform: Linux-5.15.0-125-generic-x86_64-with-glibc2.35
Is CUDA available: True
CUDA runtime version: 12.4.131
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration: GPU 0: NVIDIA GeForce RTX 3090
Nvidia driver version: 550.120
cuDNN version: Could not collect
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 46 bits physical, 48 bits virtual
Byte Order: Little Endian
CPU(s): 96
On-line CPU(s) list: 0-95
Vendor ID: GenuineIntel
Model name: Intel(R) Xeon(R) Gold 6248R CPU @ 3.00GHz
CPU family: 6
Model: 85
Thread(s) per core: 2
Core(s) per socket: 24
Socket(s): 2
Stepping: 7
CPU max MHz: 4000.0000
CPU min MHz: 1200.0000
BogoMIPS: 6000.00
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc art arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc cpuid aperfmperf pni pclmulqdq dtes64 ds_cpl vmx smx est tm2 ssse3 sdbg fma cx16 xtpr pdcm pcid dca sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm 3dnowprefetch cpuid_fault epb cat_l3 cdp_l3 invpcid_single intel_ppin ssbd mba ibrs ibpb stibp ibrs_enhanced tpr_shadow vnmi flexpriority ept vpid ept_ad fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid cqm mpx rdt_a avx512f avx512dq rdseed adx smap clflushopt clwb intel_pt avx512cd avx512bw avx512vl xsaveopt xsavec xgetbv1 xsaves cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local dtherm ida arat pln pts pku ospke avx512_vnni md_clear flush_l1d arch_capabilities
Virtualization: VT-x
L1d cache: 1.5 MiB (48 instances)
L1i cache: 1.5 MiB (48 instances)
L2 cache: 48 MiB (48 instances)
L3 cache: 71.5 MiB (2 instances)
NUMA node(s): 2
NUMA node0 CPU(s): 0-23,48-71
NUMA node1 CPU(s): 24-47,72-95
Vulnerability Gather data sampling: Mitigation; Microcode
Vulnerability Itlb multihit: KVM: Mitigation: VMX disabled
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Mitigation; Clear CPU buffers; SMT vulnerable
Vulnerability Reg file data sampling: Not affected
Vulnerability Retbleed: Mitigation; Enhanced IBRS
Vulnerability Spec rstack overflow: Not affected
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl and seccomp
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Enhanced / Automatic IBRS; IBPB conditional; RSB filling; PBRSB-eIBRS SW sequence; BHI SW loop, KVM SW loop
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Mitigation; TSX disabled
Versions of relevant libraries:
[pip3] numpy==2.2.1
[pip3] optree==0.13.1
[pip3] torch==2.5.0a0+gita8d6afb
[conda] magma-cuda121 2.6.1 1 pytorch
[conda] mkl-include 2025.0.1 pypi_0 pypi
[conda] mkl-static 2025.0.1 pypi_0 pypi
[conda] numpy 2.2.1 pypi_0 pypi
[conda] optree 0.13.1 pypi_0 pypi
[conda] torch 2.5.0a0+gita8d6afb dev_0
cc @NmomoN @mengpenghui @fwenguang @cdzhan @1274085042 @PHLens | true |
2,834,554,283 | [Dynamo][autograd.Function] Relax backward speculation strict mode a bit | yanboliang | closed | [
"Merged",
"topic: not user facing",
"module: dynamo",
"ciflow/inductor"
] | 1 | CONTRIBUTOR | Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #146742
* #146741
* __->__ #146571
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @chauhang @amjames | true |
2,834,547,926 | inconsistency in geometric_ on CPU and GPU | alionapi | open | [
"triaged",
"module: edge cases"
] | 0 | NONE | ### 🐛 Describe the bug
Inconsistency in `geometric_` on CPU and GPU
```
import torch
self = torch.tensor([[[[float('inf')]]]], dtype=torch.float16)
generator = None
self_cuda = self.cuda()
p = 1e-8
result_cpu = self.geometric_(p)
result_gpu = self_cuda.geometric_(p)
print("CPU result:\n", result_cpu)
print("GPU result:\n", result_gpu)
inconsistent = not torch.allclose(result_cpu, result_gpu.cpu(), atol=1e-05, rtol=1e-08)
print(f"inconsistency with atol=1e-05 and rtol=1e-08: {inconsistent}")
```
Output:
```
CPU result:
tensor([[[[inf]]]], dtype=torch.float16)
GPU result:
tensor([[[[-inf]]]], device='cuda:0', dtype=torch.float16)
inconsistency with atol=1e-05 and rtol=1e-08: True
```
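Editor's note (context, not a fix): `geometric_` overwrites the tensor with random draws, so the initial `inf` value is irrelevant, and CPU and CUDA use independent RNG streams, meaning elementwise `allclose` across devices is not an expected invariant even for valid inputs. The `±inf` itself comes from `p=1e-8` giving samples around `1/p ≈ 1e8`, far above the float16 maximum (~65504); the sign disagreement on overflow is the genuinely suspicious part. A sketch of both points:

```python
import torch

# geometric_ ignores the tensor's prior contents and refills it; results are
# reproducible only for the same device with the same seed.
torch.manual_seed(0)
a = torch.empty(8).geometric_(0.5)
torch.manual_seed(0)
b = torch.empty(8).geometric_(0.5)
assert torch.equal(a, b)

# With p=1e-8, the typical sample magnitude ~1/p overflows float16, which is
# where the inf values in the report come from.
assert torch.tensor(1e8, dtype=torch.float16).isinf().item()
```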
### Versions
(executed on Google Colab)
PyTorch version: 2.5.1+cu124
Is debug build: False
CUDA used to build PyTorch: 12.4
ROCM used to build PyTorch: N/A
OS: Ubuntu 22.04.4 LTS (x86_64)
GCC version: (Ubuntu 11.4.0-1ubuntu1~22.04) 11.4.0
Clang version: 14.0.0-1ubuntu1.1
CMake version: version 3.31.4
Libc version: glibc-2.35
Python version: 3.11.11 (main, Dec 4 2024, 08:55:07) [GCC 11.4.0] (64-bit runtime)
Python platform: Linux-6.1.85+-x86_64-with-glibc2.35
Is CUDA available: True
CUDA runtime version: 12.5.82
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration: GPU 0: Tesla T4
Nvidia driver version: 550.54.15
cuDNN version: Probably one of the following:
/usr/lib/x86_64-linux-gnu/libcudnn.so.9.2.1
/usr/lib/x86_64-linux-gnu/libcudnn_adv.so.9.2.1
/usr/lib/x86_64-linux-gnu/libcudnn_cnn.so.9.2.1
/usr/lib/x86_64-linux-gnu/libcudnn_engines_precompiled.so.9.2.1
/usr/lib/x86_64-linux-gnu/libcudnn_engines_runtime_compiled.so.9.2.1
/usr/lib/x86_64-linux-gnu/libcudnn_graph.so.9.2.1
/usr/lib/x86_64-linux-gnu/libcudnn_heuristic.so.9.2.1
/usr/lib/x86_64-linux-gnu/libcudnn_ops.so.9.2.1
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 46 bits physical, 48 bits virtual
Byte Order: Little Endian
CPU(s): 2
On-line CPU(s) list: 0,1
Vendor ID: GenuineIntel
Model name: Intel(R) Xeon(R) CPU @ 2.00GHz
CPU family: 6
Model: 85
Thread(s) per core: 2
Core(s) per socket: 1
Socket(s): 1
Stepping: 3
BogoMIPS: 4000.42
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ss ht syscall nx pdpe1gb rdtscp lm constant_tsc rep_good nopl xtopology nonstop_tsc cpuid tsc_known_freq pni pclmulqdq ssse3 fma cx16 pcid sse4_1 sse4_2 x2apic movbe popcnt aes xsave avx f16c rdrand hypervisor lahf_lm abm 3dnowprefetch invpcid_single ssbd ibrs ibpb stibp fsgsbase tsc_adjust bmi1 hle avx2 smep bmi2 erms invpcid rtm mpx avx512f avx512dq rdseed adx smap clflushopt clwb avx512cd avx512bw avx512vl xsaveopt xsavec xgetbv1 xsaves arat md_clear arch_capabilities
Hypervisor vendor: KVM
Virtualization type: full
L1d cache: 32 KiB (1 instance)
L1i cache: 32 KiB (1 instance)
L2 cache: 1 MiB (1 instance)
L3 cache: 38.5 MiB (1 instance)
NUMA node(s): 1
NUMA node0 CPU(s): 0,1
Vulnerability Gather data sampling: Not affected
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Mitigation; PTE Inversion
Vulnerability Mds: Vulnerable; SMT Host state unknown
Vulnerability Meltdown: Vulnerable
Vulnerability Mmio stale data: Vulnerable
Vulnerability Reg file data sampling: Not affected
Vulnerability Retbleed: Vulnerable
Vulnerability Spec rstack overflow: Not affected
Vulnerability Spec store bypass: Vulnerable
Vulnerability Spectre v1: Vulnerable: __user pointer sanitization and usercopy barriers only; no swapgs barriers
Vulnerability Spectre v2: Vulnerable; IBPB: disabled; STIBP: disabled; PBRSB-eIBRS: Not affected; BHI: Vulnerable (Syscall hardening enabled)
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Vulnerable
Versions of relevant libraries:
[pip3] numpy==1.26.4
[pip3] nvidia-cublas-cu12==12.5.3.2
[pip3] nvidia-cuda-cupti-cu12==12.5.82
[pip3] nvidia-cuda-nvrtc-cu12==12.5.82
[pip3] nvidia-cuda-runtime-cu12==12.5.82
[pip3] nvidia-cudnn-cu12==9.3.0.75
[pip3] nvidia-cufft-cu12==11.2.3.61
[pip3] nvidia-curand-cu12==10.3.6.82
[pip3] nvidia-cusolver-cu12==11.6.3.83
[pip3] nvidia-cusparse-cu12==12.5.1.3
[pip3] nvidia-nccl-cu12==2.21.5
[pip3] nvidia-nvjitlink-cu12==12.5.82
[pip3] nvidia-nvtx-cu12==12.4.127
[pip3] nvtx==0.2.10
[pip3] optree==0.14.0
[pip3] pynvjitlink-cu12==0.5.0
[pip3] torch==2.5.1+cu124
[pip3] torchaudio==2.5.1+cu124
[pip3] torchsummary==1.5.1
[pip3] torchvision==0.20.1+cu124
[pip3] triton==3.1.0
[conda] Could not collect | true |
2,834,520,879 | torch.compile with fullgraph=True causes overwritten variable error in versions later than torch==2.5.1 | FurtherAI | open | [
"high priority",
"triaged",
"module: regression",
"oncall: pt2",
"module: inductor"
] | 5 | NONE | ### 🐛 Describe the bug
## Minimal Error
Basically, a single forward/backward pass through this simple module with `fullgraph=True` runs fine, but the second pass throws an error about an overwritten variable.
Here is the minimal reproducer:
```python
import torch
from torch import nn
class MLP(nn.Module):
def __init__(self):
super().__init__()
self.lin = nn.Linear(1024, 4 * 1024)
def forward(self, x):
out = self.lin(x)
return out
mlp = MLP().cuda().train()
mlp = torch.compile(mlp, fullgraph=True, backend="inductor", mode='reduce-overhead')
x = torch.rand((1, 1024), device='cuda')
o = mlp(x)
l1 = o.sum()
l1.backward()
x2 = torch.rand((1, 1024), device='cuda')
o2 = mlp(x2)
l2 = o2.sum()
l2.backward()
```
With `torch==2.7.0.dev20250205+cu124` or `torch==2.6.0` this throws an error,
but with `torch==2.5.1` it runs fine. Maybe the code isn't supposed to work, but I don't see why the error would occur here.
Adding `torch.compiler.cudagraph_mark_step_begin()`
```python
import torch
from torch import nn
class MLP(nn.Module):
def __init__(self):
super().__init__()
self.lin = nn.Linear(1024, 4 * 1024)
def forward(self, x):
out = self.lin(x)
return out
mlp = MLP().cuda().train()
mlp = torch.compile(mlp, fullgraph=True, backend="inductor", mode='reduce-overhead')
torch.compiler.cudagraph_mark_step_begin()
x = torch.rand((1, 1024), device='cuda')
o = mlp(x)
l1 = o.sum()
l1.backward()
torch.compiler.cudagraph_mark_step_begin()
x2 = torch.rand((1, 1024), device='cuda')
o2 = mlp(x2)
l2 = o2.sum()
l2.backward()
```
Doesn't change anything either.
## Ablations
Only fails with `backend="inductor"`. `dynamic=None` didn't change anything.
### Error logs
```
Traceback (most recent call last):
File "/data/rankness/rankness/min_error.py", line 24, in <module>
l2.backward()
File "/home/further/miniconda3/envs/rank/lib/python3.12/site-packages/torch/_tensor.py", line 648, in backward
torch.autograd.backward(
File "/home/further/miniconda3/envs/rank/lib/python3.12/site-packages/torch/autograd/__init__.py", line 353, in backward
_engine_run_backward(
File "/home/further/miniconda3/envs/rank/lib/python3.12/site-packages/torch/autograd/graph.py", line 824, in _engine_run_backward
return Variable._execution_engine.run_backward( # Calls into the C++ engine to run the backward pass
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
RuntimeError: Error: accessing tensor output of CUDAGraphs that has been overwritten by a subsequent run. Stack trace: File "/data/rankness/rankness/min_error.py", line 10, in forward
out = self.lin(x). To prevent overwriting, clone the tensor outside of torch.compile() or call torch.compiler.cudagraph_mark_step_begin() before each model invocation.
```
[dedicated_log_torch_trace_wocd9j2t.log](https://github.com/user-attachments/files/18683115/dedicated_log_torch_trace_wocd9j2t.log)
### Versions
`torch==2.7.0.dev20250205+cu124`
```
Collecting environment information...
PyTorch version: 2.7.0.dev20250205+cu124
Is debug build: False
CUDA used to build PyTorch: 12.4
ROCM used to build PyTorch: N/A
OS: Ubuntu 22.04.5 LTS (x86_64)
GCC version: (Ubuntu 11.4.0-1ubuntu1~22.04) 11.4.0
Clang version: Could not collect
CMake version: Could not collect
Libc version: glibc-2.35
Python version: 3.12.8 | packaged by Anaconda, Inc. | (main, Dec 11 2024, 16:31:09) [GCC 11.2.0] (64-bit runtime)
Python platform: Linux-6.8.0-52-generic-x86_64-with-glibc2.35
Is CUDA available: True
CUDA runtime version: 12.4.99
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration:
GPU 0: NVIDIA GeForce RTX 3090
GPU 1: NVIDIA GeForce RTX 3090
Nvidia driver version: 550.120
cuDNN version: Probably one of the following:
/usr/lib/x86_64-linux-gnu/libcudnn.so.9.2.1
/usr/lib/x86_64-linux-gnu/libcudnn_adv.so.9.2.1
/usr/lib/x86_64-linux-gnu/libcudnn_cnn.so.9.2.1
/usr/lib/x86_64-linux-gnu/libcudnn_engines_precompiled.so.9.2.1
/usr/lib/x86_64-linux-gnu/libcudnn_engines_runtime_compiled.so.9.2.1
/usr/lib/x86_64-linux-gnu/libcudnn_graph.so.9.2.1
/usr/lib/x86_64-linux-gnu/libcudnn_heuristic.so.9.2.1
/usr/lib/x86_64-linux-gnu/libcudnn_ops.so.9.2.1
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 39 bits physical, 48 bits virtual
Byte Order: Little Endian
CPU(s): 12
On-line CPU(s) list: 0-11
Vendor ID: GenuineIntel
Model name: 11th Gen Intel(R) Core(TM) i5-11600K @ 3.90GHz
CPU family: 6
Model: 167
Thread(s) per core: 2
Core(s) per socket: 6
Socket(s): 1
Stepping: 1
CPU max MHz: 4900.0000
CPU min MHz: 800.0000
BogoMIPS: 7824.00
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc art arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc cpuid aperfmperf tsc_known_freq pni pclmulqdq dtes64 monitor ds_cpl smx est tm2 ssse3 sdbg fma cx16 xtpr pdcm pcid sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm 3dnowprefetch cpuid_fault epb ssbd ibrs ibpb stibp ibrs_enhanced fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid mpx avx512f avx512dq rdseed adx smap avx512ifma clflushopt intel_pt avx512cd sha_ni avx512bw avx512vl xsaveopt xsavec xgetbv1 xsaves dtherm ida arat pln pts hwp hwp_notify hwp_act_window hwp_epp hwp_pkg_req avx512vbmi umip pku ospke avx512_vbmi2 gfni vaes vpclmulqdq avx512_vnni avx512_bitalg avx512_vpopcntdq rdpid fsrm md_clear flush_l1d arch_capabilities
L1d cache: 288 KiB (6 instances)
L1i cache: 192 KiB (6 instances)
L2 cache: 3 MiB (6 instances)
L3 cache: 12 MiB (1 instance)
NUMA node(s): 1
NUMA node0 CPU(s): 0-11
Vulnerability Gather data sampling: Mitigation; Microcode
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Mitigation; Clear CPU buffers; SMT vulnerable
Vulnerability Reg file data sampling: Not affected
Vulnerability Retbleed: Mitigation; Enhanced IBRS
Vulnerability Spec rstack overflow: Not affected
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Enhanced / Automatic IBRS; IBPB conditional; RSB filling; PBRSB-eIBRS SW sequence; BHI SW loop, KVM SW loop
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Versions of relevant libraries:
[pip3] numpy==2.2.2
[pip3] nvidia-cublas-cu12==12.4.5.8
[pip3] nvidia-cuda-cupti-cu12==12.4.127
[pip3] nvidia-cuda-nvrtc-cu12==12.4.127
[pip3] nvidia-cuda-runtime-cu12==12.4.127
[pip3] nvidia-cudnn-cu12==9.1.0.70
[pip3] nvidia-cufft-cu12==11.2.1.3
[pip3] nvidia-curand-cu12==10.3.5.147
[pip3] nvidia-cusolver-cu12==11.6.1.9
[pip3] nvidia-cusparse-cu12==12.3.1.170
[pip3] nvidia-cusparselt-cu12==0.6.2
[pip3] nvidia-nccl-cu12==2.21.5
[pip3] nvidia-nvjitlink-cu12==12.4.127
[pip3] nvidia-nvtx-cu12==12.4.127
[pip3] pytorch-triton==3.2.0+git4b3bb1f8
[pip3] torch==2.7.0.dev20250205+cu124
[pip3] torchaudio==2.6.0.dev20250205+cu124
[pip3] torchvision==0.22.0.dev20250205+cu124
[pip3] triton==3.1.0
[conda] numpy 2.2.2 pypi_0 pypi
[conda] nvidia-cublas-cu12 12.4.5.8 pypi_0 pypi
[conda] nvidia-cuda-cupti-cu12 12.4.127 pypi_0 pypi
[conda] nvidia-cuda-nvrtc-cu12 12.4.127 pypi_0 pypi
[conda] nvidia-cuda-runtime-cu12 12.4.127 pypi_0 pypi
[conda] nvidia-cudnn-cu12 9.1.0.70 pypi_0 pypi
[conda] nvidia-cufft-cu12 11.2.1.3 pypi_0 pypi
[conda] nvidia-curand-cu12 10.3.5.147 pypi_0 pypi
[conda] nvidia-cusolver-cu12 11.6.1.9 pypi_0 pypi
[conda] nvidia-cusparse-cu12 12.3.1.170 pypi_0 pypi
[conda] nvidia-cusparselt-cu12 0.6.2 pypi_0 pypi
[conda] nvidia-nccl-cu12 2.21.5 pypi_0 pypi
[conda] nvidia-nvjitlink-cu12 12.4.127 pypi_0 pypi
[conda] nvidia-nvtx-cu12 12.4.127 pypi_0 pypi
[conda] pytorch-triton 3.2.0+git4b3bb1f8 pypi_0 pypi
[conda] torch 2.7.0.dev20250205+cu124 pypi_0 pypi
[conda] torchaudio 2.6.0.dev20250205+cu124 pypi_0 pypi
[conda] torchvision 0.22.0.dev20250205+cu124 pypi_0 pypi
[conda] triton 3.1.0 pypi_0 pypi
```
`torch==2.5.1`
```
...
Versions of relevant libraries:
[pip3] numpy==2.2.2
[pip3] nvidia-cublas-cu12==12.4.5.8
[pip3] nvidia-cuda-cupti-cu12==12.4.127
[pip3] nvidia-cuda-nvrtc-cu12==12.4.127
[pip3] nvidia-cuda-runtime-cu12==12.4.127
[pip3] nvidia-cudnn-cu12==9.1.0.70
[pip3] nvidia-cufft-cu12==11.2.1.3
[pip3] nvidia-curand-cu12==10.3.5.147
[pip3] nvidia-cusolver-cu12==11.6.1.9
[pip3] nvidia-cusparse-cu12==12.3.1.170
[pip3] nvidia-cusparselt-cu12==0.6.2
[pip3] nvidia-nccl-cu12==2.21.5
[pip3] nvidia-nvjitlink-cu12==12.4.127
[pip3] nvidia-nvtx-cu12==12.4.127
[pip3] pytorch-triton==3.2.0+git4b3bb1f8
[pip3] torch==2.5.1
[pip3] triton==3.1.0
[conda] numpy 2.2.2 pypi_0 pypi
[conda] nvidia-cublas-cu12 12.4.5.8 pypi_0 pypi
[conda] nvidia-cuda-cupti-cu12 12.4.127 pypi_0 pypi
[conda] nvidia-cuda-nvrtc-cu12 12.4.127 pypi_0 pypi
[conda] nvidia-cuda-runtime-cu12 12.4.127 pypi_0 pypi
[conda] nvidia-cudnn-cu12 9.1.0.70 pypi_0 pypi
[conda] nvidia-cufft-cu12 11.2.1.3 pypi_0 pypi
[conda] nvidia-curand-cu12 10.3.5.147 pypi_0 pypi
[conda] nvidia-cusolver-cu12 11.6.1.9 pypi_0 pypi
[conda] nvidia-cusparse-cu12 12.3.1.170 pypi_0 pypi
[conda] nvidia-cusparselt-cu12 0.6.2 pypi_0 pypi
[conda] nvidia-nccl-cu12 2.21.5 pypi_0 pypi
[conda] nvidia-nvjitlink-cu12 12.4.127 pypi_0 pypi
[conda] nvidia-nvtx-cu12 12.4.127 pypi_0 pypi
[conda] pytorch-triton 3.2.0+git4b3bb1f8 pypi_0 pypi
[conda] torch 2.5.1 pypi_0 pypi
[conda] triton 3.1.0 pypi_0 pypi
```
cc @ezyang @gchanan @zou3519 @kadeng @msaroufim @chauhang @penguinwu @voznesenskym @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @muchulee8 @amjames @desertfire @aakhundov | true |
2,834,447,145 | Matmul Triton Template with epilogue fusion can not speed up on XPU. | etaf | open | [
"triaged",
"module: xpu"
] | 2 | COLLABORATOR | ### 🐛 Describe the bug
The matmul Triton template is designed and tuned for CUDA, and we found that with epilogue fusion (e.g. MM + ReLU) the generated fused Triton kernel never speeds up on XPU. The root cause is register spilling.
This is not reasonable; we're investigating a solution.
### Versions
PyTorch version: 2.7.0a0+git95b52f7
Is debug build: False
CUDA used to build PyTorch: None
ROCM used to build PyTorch: N/A
OS: Ubuntu 22.04.5 LTS (x86_64)
GCC version: (Ubuntu 11.4.0-1ubuntu1~22.04) 11.4.0
Clang version: Could not collect
CMake version: version 3.31.2
Libc version: glibc-2.35
cc @gujinghui @EikanWang @fengyuan14 @guangyey | true |
2,834,405,704 | [copy-for-import][inductor] Refactor op handlers part 2 | jansel | closed | [
"ciflow/trunk",
"topic: not user facing",
"module: inductor",
"ciflow/inductor"
] | 8 | CONTRIBUTOR | Copy of #146252 for import into fbcode testing
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @amjames @desertfire @chauhang @aakhundov | true |
2,834,401,401 | [DCP] Allow for rank-specific tensors with duplicate keys | cassanof | open | [
"triaged",
"oncall: distributed checkpointing"
] | 3 | CONTRIBUTOR | ### 🚀 The feature, motivation and pitch
My understanding of DCP is that it assumes either DTensor or fully replicated tensors in the state dict. I have a custom sharding implementation that doesn't use DTensor, and I needed to write a custom SavePlanner class that gathers the shards before saving.
The logic for loading is even uglier, as I need to modify the metadata object. For some other tensors, it's even worse because it's not clear how to gather them (e.g. torchao's `TorchAOBaseTensor`, used for AdamWFp8). I haven't found a workaround for this.
It would be great if there were an option to save a checkpoint in which some tensors are specific to some ranks and don't need to be gathered.
### Alternatives
_No response_
### Additional context
_No response_
cc @LucasLLC @pradeepfn @H-Huang @awgu @kwen2501 @wanchaol @fegin @fduwjj @wz337 @wconstab @d4l3k @c-p-i-o | true |
2,834,394,751 | use register_full_backward_hook with c10d in torch.compile and raise error | yangxiaorun | open | [
"oncall: distributed"
] | 0 | NONE | ### 🐛 Describe the bug
The following script triggers `torch._dynamo.exc.TorchRuntimeError: Failed running call_method copy_(*(FakeTensor(..., size=(1, 2), grad_fn=<AsStridedBackward0>), FakeTensor(..., size=(1, 2), grad_fn=<WarnNotImplemented>)), **{})`. I don't understand how `AsStridedBackward0` is generated in the backward pass, because I don't do anything in the hook (except a simple print). I think this may be a bug.
Some additional information:
1. I found that `register_forward_pre_hook`, `register_forward_hook`, and `tensor.register_hook` all work fine.
2. When I comment out `register_full_backward_hook`, the following warning is generated:
```
/home/yxr/miniconda3/lib/python3.12/site-packages/torch/autograd/graph.py:825: UserWarning: _c10d_functional::wait_tensor: an autograd kernel was not registered to the Autograd key(s) but we are trying to backprop through it. This may lead to silently incorrect behavior. This behavior is deprecated and will be removed in a future version of PyTorch. If your operator is differentiable, please ensure you have registered an autograd kernel to the correct Autograd key (e.g. DispatchKey::Autograd, DispatchKey::CompositeImplicitAutograd). If your operator is not differentiable, or to squash this warning and use the previous behavior, please register torch::CppFunction::makeFallthrough() to DispatchKey::Autograd. (Triggered internally at ../torch/csrc/autograd/autograd_not_implemented_fallback.cpp:62.)
return Variable._execution_engine.run_backward( # Calls into the C++ engine to run the backward pass
```
I know this is because `_c10d_functional` has no backward. But how should I think about this? Should I register an empty backward implementation for it, or should I not call the c10d API in the forward pass at all (and only call it in a hook)?
# Example
```
import os
import torch
import torch.nn as nn
class SimpleModel(nn.Module):
def __init__(self):
super(SimpleModel, self).__init__()
self.fc1 = nn.Linear(2, 2)
self.fc2 = nn.Linear(2, 1)
def forward(self, x):
x = self.fc1(x)
torch.distributed.all_reduce(x)
x = self.fc2(x)
return x
def fn(rank, world_size):
print(f"Running basic example on rank {rank}.")
torch.distributed.init_process_group(backend="gloo", rank=rank, world_size=world_size)
model = SimpleModel()
def hook_func(name, module, called_position):
if called_position == "hook_after_full_backward":
def hook_after_full_backward(module, grad_input, grad_output):
print(f"==== layer {name}, {called_position} grad_input {grad_input}, grad_output {grad_output} ===")
return hook_after_full_backward
else:
raise ValueError(f"called_position {called_position} is not supported")
for name, module in model.named_modules():
if name == "":
name = "model"
module.register_full_backward_hook(hook_func(f"{name}", module, "hook_after_full_backward"))
ddp_model = model
inputs = torch.randn(1, 2)
ddp_model = torch.compile(ddp_model, backend='eager', fullgraph=False)
for i in range(2):
outputs = ddp_model(inputs)
outputs.sum().backward()
def mp():
world_size = 2
torch.multiprocessing.spawn(fn, nprocs=world_size, args=(world_size,), join=True)
if __name__ == '__main__':
os.environ["MASTER_ADDR"] = "localhost"
os.environ["MASTER_PORT"] = "29000"
mp()
```
# ERROR message
```
~/sample/note-wiki$ /home/yxr/miniconda3/bin/python /home/yxr/sample/note-wiki/test_pytorch/pytorch_hook_hcom_report_bug.py
Running basic example on rank 1.
Running basic example on rank 0.
W0206 10:23:58.690000 26219 site-packages/torch/multiprocessing/spawn.py:160] Terminating process 26240 via signal SIGTERM
Traceback (most recent call last):
File "/home/yxr/sample/note-wiki/test_pytorch/pytorch_hook_hcom_report_bug.py", line 50, in <module>
mp()
File "/home/yxr/sample/note-wiki/test_pytorch/pytorch_hook_hcom_report_bug.py", line 44, in mp
torch.multiprocessing.spawn(fn, nprocs=world_size, args=(world_size,), join=True)
File "/home/yxr/miniconda3/lib/python3.12/site-packages/torch/multiprocessing/spawn.py", line 328, in spawn
return start_processes(fn, args, nprocs, join, daemon, start_method="spawn")
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/yxr/miniconda3/lib/python3.12/site-packages/torch/multiprocessing/spawn.py", line 284, in start_processes
while not context.join():
^^^^^^^^^^^^^^
File "/home/yxr/miniconda3/lib/python3.12/site-packages/torch/multiprocessing/spawn.py", line 203, in join
raise ProcessRaisedException(msg, error_index, failed_process.pid)
torch.multiprocessing.spawn.ProcessRaisedException:
-- Process 1 terminated with the following error:
Traceback (most recent call last):
File "/home/yxr/miniconda3/lib/python3.12/site-packages/torch/multiprocessing/spawn.py", line 90, in _wrap
fn(i, *args)
File "/home/yxr/sample/note-wiki/test_pytorch/pytorch_hook_hcom_report_bug.py", line 39, in fn
outputs = ddp_model(inputs)
^^^^^^^^^^^^^^^^^
File "/home/yxr/miniconda3/lib/python3.12/site-packages/torch/nn/modules/module.py", line 1736, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/yxr/miniconda3/lib/python3.12/site-packages/torch/nn/modules/module.py", line 1747, in _call_impl
return forward_call(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/yxr/miniconda3/lib/python3.12/site-packages/torch/_dynamo/eval_frame.py", line 465, in _fn
return fn(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^
File "/home/yxr/miniconda3/lib/python3.12/site-packages/torch/nn/modules/module.py", line 1736, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/yxr/miniconda3/lib/python3.12/site-packages/torch/nn/modules/module.py", line 1844, in _call_impl
return inner()
^^^^^^^
File "/home/yxr/miniconda3/lib/python3.12/site-packages/torch/nn/modules/module.py", line 1790, in inner
result = forward_call(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/yxr/sample/note-wiki/test_pytorch/pytorch_hook_hcom_report_bug.py", line 12, in forward
def forward(self, x):
File "/home/yxr/miniconda3/lib/python3.12/site-packages/torch/_dynamo/convert_frame.py", line 1269, in __call__
return self._torchdynamo_orig_callable(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/yxr/miniconda3/lib/python3.12/site-packages/torch/_dynamo/convert_frame.py", line 1064, in __call__
result = self._inner_convert(
^^^^^^^^^^^^^^^^^^^^
File "/home/yxr/miniconda3/lib/python3.12/site-packages/torch/_dynamo/convert_frame.py", line 526, in __call__
return _compile(
^^^^^^^^^
File "/home/yxr/miniconda3/lib/python3.12/site-packages/torch/_dynamo/convert_frame.py", line 924, in _compile
guarded_code = compile_inner(code, one_graph, hooks, transform)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/yxr/miniconda3/lib/python3.12/site-packages/torch/_dynamo/convert_frame.py", line 666, in compile_inner
return _compile_inner(code, one_graph, hooks, transform)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/yxr/miniconda3/lib/python3.12/site-packages/torch/_utils_internal.py", line 87, in wrapper_function
return function(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/yxr/miniconda3/lib/python3.12/site-packages/torch/_dynamo/convert_frame.py", line 699, in _compile_inner
out_code = transform_code_object(code, transform)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/yxr/miniconda3/lib/python3.12/site-packages/torch/_dynamo/bytecode_transformation.py", line 1322, in transform_code_object
transformations(instructions, code_options)
File "/home/yxr/miniconda3/lib/python3.12/site-packages/torch/_dynamo/convert_frame.py", line 219, in _fn
return fn(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^
File "/home/yxr/miniconda3/lib/python3.12/site-packages/torch/_dynamo/convert_frame.py", line 634, in transform
tracer.run()
File "/home/yxr/miniconda3/lib/python3.12/site-packages/torch/_dynamo/symbolic_convert.py", line 2796, in run
super().run()
File "/home/yxr/miniconda3/lib/python3.12/site-packages/torch/_dynamo/symbolic_convert.py", line 983, in run
while self.step():
^^^^^^^^^^^
File "/home/yxr/miniconda3/lib/python3.12/site-packages/torch/_dynamo/symbolic_convert.py", line 895, in step
self.dispatch_table[inst.opcode](self, inst)
File "/home/yxr/miniconda3/lib/python3.12/site-packages/torch/_dynamo/symbolic_convert.py", line 582, in wrapper
return inner_fn(self, inst)
^^^^^^^^^^^^^^^^^^^^
File "/home/yxr/miniconda3/lib/python3.12/site-packages/torch/_dynamo/symbolic_convert.py", line 2279, in CALL
self._call(inst)
File "/home/yxr/miniconda3/lib/python3.12/site-packages/torch/_dynamo/symbolic_convert.py", line 2273, in _call
self.call_function(fn, args, kwargs)
File "/home/yxr/miniconda3/lib/python3.12/site-packages/torch/_dynamo/symbolic_convert.py", line 830, in call_function
self.push(fn.call_function(self, args, kwargs)) # type: ignore[arg-type]
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/yxr/miniconda3/lib/python3.12/site-packages/torch/_dynamo/variables/functions.py", line 866, in call_function
return self.replacement_var.call_function(tx, args, kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/yxr/miniconda3/lib/python3.12/site-packages/torch/_dynamo/variables/functions.py", line 324, in call_function
return super().call_function(tx, args, kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/yxr/miniconda3/lib/python3.12/site-packages/torch/_dynamo/variables/functions.py", line 111, in call_function
return tx.inline_user_function_return(self, [*self.self_args(), *args], kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/yxr/miniconda3/lib/python3.12/site-packages/torch/_dynamo/symbolic_convert.py", line 836, in inline_user_function_return
return InliningInstructionTranslator.inline_call(self, fn, args, kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/yxr/miniconda3/lib/python3.12/site-packages/torch/_dynamo/symbolic_convert.py", line 3011, in inline_call
return cls.inline_call_(parent, func, args, kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/yxr/miniconda3/lib/python3.12/site-packages/torch/_dynamo/symbolic_convert.py", line 3139, in inline_call_
tracer.run()
File "/home/yxr/miniconda3/lib/python3.12/site-packages/torch/_dynamo/symbolic_convert.py", line 983, in run
while self.step():
^^^^^^^^^^^
File "/home/yxr/miniconda3/lib/python3.12/site-packages/torch/_dynamo/symbolic_convert.py", line 895, in step
self.dispatch_table[inst.opcode](self, inst)
File "/home/yxr/miniconda3/lib/python3.12/site-packages/torch/_dynamo/symbolic_convert.py", line 582, in wrapper
return inner_fn(self, inst)
^^^^^^^^^^^^^^^^^^^^
File "/home/yxr/miniconda3/lib/python3.12/site-packages/torch/_dynamo/symbolic_convert.py", line 2279, in CALL
self._call(inst)
File "/home/yxr/miniconda3/lib/python3.12/site-packages/torch/_dynamo/symbolic_convert.py", line 2273, in _call
self.call_function(fn, args, kwargs)
File "/home/yxr/miniconda3/lib/python3.12/site-packages/torch/_dynamo/symbolic_convert.py", line 830, in call_function
self.push(fn.call_function(self, args, kwargs)) # type: ignore[arg-type]
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/yxr/miniconda3/lib/python3.12/site-packages/torch/_dynamo/variables/misc.py", line 1024, in call_function
return self.obj.call_method(tx, self.name, args, kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/yxr/miniconda3/lib/python3.12/site-packages/torch/_dynamo/variables/tensor.py", line 535, in call_method
return wrap_fx_proxy(
^^^^^^^^^^^^^^
File "/home/yxr/miniconda3/lib/python3.12/site-packages/torch/_dynamo/variables/builder.py", line 2037, in wrap_fx_proxy
return wrap_fx_proxy_cls(target_cls=TensorVariable, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/yxr/miniconda3/lib/python3.12/site-packages/torch/_dynamo/variables/builder.py", line 2124, in wrap_fx_proxy_cls
example_value = get_fake_value(proxy.node, tx, allow_non_graph_fake=True)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/yxr/miniconda3/lib/python3.12/site-packages/torch/_dynamo/utils.py", line 2082, in get_fake_value
raise TorchRuntimeError(str(e)).with_traceback(e.__traceback__) from None
File "/home/yxr/miniconda3/lib/python3.12/site-packages/torch/_dynamo/utils.py", line 2017, in get_fake_value
ret_val = wrap_fake_exception(
^^^^^^^^^^^^^^^^^^^^
File "/home/yxr/miniconda3/lib/python3.12/site-packages/torch/_dynamo/utils.py", line 1574, in wrap_fake_exception
return fn()
^^^^
File "/home/yxr/miniconda3/lib/python3.12/site-packages/torch/_dynamo/utils.py", line 2018, in <lambda>
lambda: run_node(tx.output, node, args, kwargs, nnmodule)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/yxr/miniconda3/lib/python3.12/site-packages/torch/_dynamo/utils.py", line 2150, in run_node
raise RuntimeError(make_error_message(e)).with_traceback(
File "/home/yxr/miniconda3/lib/python3.12/site-packages/torch/_dynamo/utils.py", line 2134, in run_node
return getattr(args[0], node.target)(*args[1:], **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
torch._dynamo.exc.TorchRuntimeError: Failed running call_method copy_(*(FakeTensor(..., size=(1, 2), grad_fn=<AsStridedBackward0>), FakeTensor(..., size=(1, 2), grad_fn=<WarnNotImplemented>)), **{}):
Output 0 of AsStridedBackward0 is a view and is being modified inplace. This view was created inside a custom Function (or because an input was returned as-is) and the autograd logic to handle view+inplace would override the custom backward associated with the custom Function, leading to incorrect gradients. This behavior is forbidden. You can fix this by cloning the output of the custom Function.
from user code:
File "/home/yxr/sample/note-wiki/test_pytorch/pytorch_hook_hcom_report_bug.py", line 14, in torch_dynamo_resume_in_forward_at_13
torch.distributed.all_reduce(x)
File "/home/yxr/miniconda3/lib/python3.12/site-packages/torch/distributed/_functional_collectives.py", line 1068, in all_reduce_inplace
return tensor.copy_(all_reduce(tensor, op, group, tag))
Set TORCH_LOGS="+dynamo" and TORCHDYNAMO_VERBOSE=1 for more information
You can suppress this exception and fall back to eager by setting:
import torch._dynamo
torch._dynamo.config.suppress_errors = True
```
### Versions
```
Versions of relevant libraries:
[pip3] numpy==2.2.1
[pip3] nvidia-cublas-cu12==12.4.5.8
[pip3] nvidia-cuda-cupti-cu12==12.4.127
[pip3] nvidia-cuda-nvrtc-cu12==12.4.127
[pip3] nvidia-cuda-runtime-cu12==12.4.127
[pip3] nvidia-cudnn-cu12==9.1.0.70
[pip3] nvidia-cufft-cu12==11.2.1.3
[pip3] nvidia-curand-cu12==10.3.5.147
[pip3] nvidia-cusolver-cu12==11.6.1.9
[pip3] nvidia-cusparse-cu12==12.3.1.170
[pip3] nvidia-nccl-cu12==2.21.5
[pip3] nvidia-nvjitlink-cu12==12.4.127
[pip3] nvidia-nvtx-cu12==12.4.127
[pip3] torch==2.5.1
[pip3] triton-nightly==3.0.0.post20240716052845
[conda] numpy 2.2.1 pypi_0 pypi
[conda] nvidia-cublas-cu12 12.4.5.8 pypi_0 pypi
[conda] nvidia-cuda-cupti-cu12 12.4.127 pypi_0 pypi
[conda] nvidia-cuda-nvrtc-cu12 12.4.127 pypi_0 pypi
[conda] nvidia-cuda-runtime-cu12 12.4.127 pypi_0 pypi
[conda] nvidia-cudnn-cu12 9.1.0.70 pypi_0 pypi
[conda] nvidia-cufft-cu12 11.2.1.3 pypi_0 pypi
[conda] nvidia-curand-cu12 10.3.5.147 pypi_0 pypi
[conda] nvidia-cusolver-cu12 11.6.1.9 pypi_0 pypi
[conda] nvidia-cusparse-cu12 12.3.1.170 pypi_0 pypi
[conda] nvidia-nccl-cu12 2.21.5 pypi_0 pypi
[conda] nvidia-nvjitlink-cu12 12.4.127 pypi_0 pypi
[conda] nvidia-nvtx-cu12 12.4.127 pypi_0 pypi
[conda] torch 2.5.1 pypi_0 pypi
[conda] triton-nightly 3.0.0.post20240716052845 pypi_0 pypi
```
cc @H-Huang @awgu @kwen2501 @wanchaol @fegin @fduwjj @wz337 @wconstab @d4l3k @c-p-i-o | true |
2,834,352,508 | [inductor] Fix test error test_force_cutlass_backend_aoti_cexpr_codegen | jansel | closed | [
"fb-exported",
"Merged",
"ciflow/trunk",
"topic: not user facing",
"module: inductor",
"ciflow/inductor"
] | 5 | CONTRIBUTOR | Test Plan:
```
buck2 test 'fbcode//mode/opt' fbcode//caffe2/test/inductor:cutlass_backend -- --exact 'caffe2/test/inductor:cutlass_backend - test_force_cutlass_backend_aoti_cexpr_codegen (caffe2.test.inductor.test_cutlass_backend.TestCutlassBackend)'
```
Differential Revision: D69219873
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @amjames @desertfire @chauhang @aakhundov | true |
2,834,317,008 | Fix broken meta function for flex-attention backwards | drisspg | closed | [
"Merged",
"ciflow/trunk",
"release notes: nn",
"topic: bug fixes",
"module: inductor",
"ciflow/inductor",
"module: flex attention"
] | 14 | CONTRIBUTOR | Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #146563
# Summary
Fixes https://github.com/pytorch/pytorch/issues/146377
So what was the original problem: we were codegening a really weird epilogue:
```Python
# first compute broadcasted dk of shape [Bq, Hkv, KV_LEN, V_HEAD_DIM]
# then reduce to dk of shape [Bkv, Hkv, KV_LEN, V_HEAD_DIM]
xindex = index_k + 64*index_n + 64*off_hkv*ks2 + 128*off_zq*ks2
tl.store(out_ptr0 + (tl.broadcast_to(index_k + 64*index_n + off_hkv*ks1, dk.shape)), dk, mask)
x5 = (xindex % ks3)
tmp2 = tl.load(out_ptr0 + (x5 + ks1*off_hkv), mask, eviction_policy='evict_last')
tl.store(out_ptr1 + (tl.broadcast_to(xindex, dk.shape)), tmp2, mask)
```
This epilogue was writing and then reading from overlapping regions of memory causing a race condition.
### Why were we generating this epilogue
During the lowering we created a buffer with a different size/stride from the expected return strides. I think this added an implicit node (for doing the permutation of this wrongly strided output to the expected one from the meta func). The scheduler for some reason thought it was okay to fuse this into the epilogue; tbh I don't know why.
This fixes the broken meta func and the original repro. I will add a test, but the race is hard to trigger reliably, so this is better than nothing.
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @amjames @desertfire @chauhang @aakhundov @Chillee @yanboliang @BoyuanFeng | true |
2,834,312,859 | [pt2d] Add reorder_comms_preserving_peak_memory pass | wconstab | closed | [
"oncall: distributed",
"Merged",
"ciflow/trunk",
"module: inductor",
"ciflow/inductor",
"release notes: inductor"
] | 5 | CONTRIBUTOR | Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #146562
* #146561
* #152060
This is a new pass to replace the pre-existing passes. It has the same
basic goal, to achieve communication overlap (latency hiding), but also
constrains the solution to not increase peak memory.
The principles of operation are detailed in code comments, but
summarized here:
- never reorder collectives relative to each other (TBD if we should
relax this later)
- before performing reordering, push all comm and wait nodes as late as possible, respecting data dependencies
- estimate peak memory and current memory at each scheduler node
- move collective nodes forward one position at a time, if the move does
  not increase current memory beyond peak memory
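The principles above can be sketched as a toy greedy reordering (an illustrative assumption, not the actual inductor pass): move each collective one slot earlier at a time, stopping at a data dependency or when a running memory estimate would exceed the original peak.

```python
# Toy sketch of memory-constrained reordering (hypothetical helper, not
# the real pass). nodes: ordered list of ids; mem[n]: live-memory delta
# of node n; deps[n]: ids that must run before n; is_collective(n): flag.
def reorder(nodes, mem, is_collective, deps):
    def peak(order):
        cur = best = 0
        for n in order:
            cur += mem[n]
            best = max(best, cur)
        return best

    limit = peak(nodes)  # never allow the reordered schedule to exceed this
    order = list(nodes)
    for n in [n for n in nodes if is_collective(n)]:
        i = order.index(n)
        # try to swap the collective one position earlier at a time
        while i > 0 and order[i - 1] not in deps[n]:
            trial = order[:i - 1] + [n, order[i - 1]] + order[i + 1:]
            if peak(trial) > limit:
                break  # limiting factor: peak memory
            order, i = trial, i - 1
    return order

mem = {"a": 1, "b": 1, "c": -1}
deps = {"a": set(), "b": set(), "c": set()}
# "c" frees memory, so moving it earlier never raises the peak:
assert reorder(["a", "b", "c"], mem, lambda n: n == "c", deps) == ["c", "a", "b"]
```

The real pass tracks per-node memory estimates from the scheduler and logs the limiting factor ("data dependency", "peak memory", "prefetch limit") per collective, as in the table above.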
The pass logs a summary table for each graph to TORCH_LOGS=overlap.
e.g. (exact format may have been tweaked but this shows the idea).
```
rank0]:[rank0]:I0210 17:24:28.494000 2711253 torch/_inductor/comms.py:195] [0/0] [__overlap] Collective node initial exposed final exposed improvement limiting factor moves
[rank0]:[rank0]:I0210 17:24:28.494000 2711253 torch/_inductor/comms.py:195] [0/0] [__overlap] ----------------------------------------------------------------------------------------------------------------------------------------------------------- ----------------- --------------- ------------- ------------------- -------
[rank0]:[rank0]:I0210 17:24:28.494000 2711253 torch/_inductor/comms.py:195] [0/0] [__overlap] ExternKernelSchedulerNode(name='op2') (torch.ops._c10d_functional.all_gather_into_tensor.default) (size=[2256, 256], stride=[256, 1]) (buf2) (12142 ns) 12141.6 6514.53 5627.08 prefetch limit 75
[rank0]:[rank0]:I0210 17:24:28.494000 2711253 torch/_inductor/comms.py:195] [0/0] [__overlap] ExternKernelSchedulerNode(name='op6') (torch.ops._c10d_functional.reduce_scatter_tensor.default) (size=[282, 256], stride=[256, 1]) (buf7) (32266 ns) 32265.8 28429.2 3836.61 data dependency 78
[rank0]:[rank0]:I0210 17:24:28.494000 2711253 torch/_inductor/comms.py:195] [0/0] [__overlap] ExternKernelSchedulerNode(name='op9') (torch.ops._c10d_functional.all_gather_into_tensor.default) (size=[256], stride=[1]) (buf11) (10801 ns) 10800.6 10732.3 68.254 peak memory 1
[rank0]:[rank0]:I0210 17:24:28.494000 2711253 torch/_inductor/comms.py:195] [0/0] [__overlap] ExternKernelSchedulerNode(name='op14') (torch.ops._c10d_functional.reduce_scatter_tensor.default) (size=[32], stride=[1]) (buf17) (10810 ns) 10809.5 10809.5 0 data dependency 4
```
cc @H-Huang @awgu @wanchaol @fegin @fduwjj @wz337 @d4l3k @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov @yf225 | true |
2,834,312,455 | Include CollectiveKernel in inductor debug visualization | wconstab | closed | [
"Merged",
"ciflow/trunk",
"topic: not user facing",
"module: inductor",
"ciflow/inductor"
] | 6 | CONTRIBUTOR | Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #146558
* #146562
* __->__ #146561
* #152060
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov | true |
2,834,312,351 | enable reorder | wconstab | closed | [
"module: inductor",
"ciflow/inductor"
] | 2 | CONTRIBUTOR | Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #146562
* #146561
* __->__ #146560
* #146559
* #146558
| true |
2,834,312,235 | Apply changes from https://github.com/pytorch/pytorch/commit/211847de3c1c3d6cbd299e14a001b794eabf2a2d | wconstab | closed | [
"oncall: distributed",
"ciflow/inductor"
] | 2 | CONTRIBUTOR | Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #146562
* #146561
* #146560
* __->__ #146559
* #146558
| true |
2,834,312,010 | [not for land] temp changes to enable 'simple_fsdp' | wconstab | open | [
"oncall: distributed",
"ciflow/inductor"
] | 5 | CONTRIBUTOR | Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #146558
* #146562
* #146561
* #152060
Cherry-picked several unlanded changes from the simple-fsdp workstream
- [dtensor] support mixed precision for redistribute (#20)
- also Apply changes from https://github.com/pytorch/pytorch/commit/211847de3c1c3d6cbd299e14a001b794eabf2a2d | true |
2,834,276,608 | Add fqn_modifier at loading_state_dict and unit test | mori360 | closed | [
"oncall: distributed",
"Merged",
"ciflow/trunk",
"topic: not user facing"
] | 9 | CONTRIBUTOR | In a Fusion model, users might change the state_dict keys via a state_dict_hook.
The load_state_dict APIs here won't call model.state_dict(), so the hooks won't be called to change the keys, causing a mismatch between FQNs and state_dict keys.
This PR suggests users declare how they would change the state_dict key prefix (they can name it; here we call it "fqn_modifiers").
During state_dict loading, we apply the prefix change while resolving FQNs so that keys are processed the same way as they would be through the state_dict hook.
For example:
There's a state_dict_hook:
```
def _state_dict_hook(self, destination, prefix, keep_vars):
"""Remove "embedding" from the original embedding in the state_dict
        name. This keeps the original state dict name for the embedding
from before fusing with the FusionEmbedding.
[!Note] This update changes the order of the OrderedDict
"""
key = prefix + "embedding.weight"
new_key = prefix + "weight"
destination[new_key] = destination[key]
del destination[key]
```
In the dsd after this PR, we would skip "embedding." before "weight" if find the "fqn_modifiers" attribute at that module
```
def fqn_modifiers(self) -> Dict[str, str]:
return {
"weight": "embedding",
}
```
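A minimal pure-Python sketch of the key remapping this describes (the "fqn_modifiers" name comes from the PR text; the helper and dict below are illustrative, not the actual DCP implementation): drop the extra module component from matching keys, mirroring what the user's `_state_dict_hook` does at save time.

```python
# Hypothetical helper: rewrite state_dict keys according to an
# fqn_modifiers-style mapping {leaf_name: dropped_prefix_component}.
def apply_fqn_modifiers(state_dict, modifiers, prefix=""):
    out = {}
    for key, value in state_dict.items():
        for leaf, dropped in modifiers.items():
            full = f"{prefix}{dropped}.{leaf}"
            if key == full:
                key = f"{prefix}{leaf}"  # e.g. "embedding.weight" -> "weight"
        out[key] = value
    return out

sd = {"embedding.weight": [1.0], "bias": [0.0]}
remapped = apply_fqn_modifiers(sd, {"weight": "embedding"})
assert remapped == {"weight": [1.0], "bias": [0.0]}
```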
cc @H-Huang @awgu @kwen2501 @wanchaol @fegin @fduwjj @wz337 @wconstab @d4l3k @c-p-i-o @LucasLLC @MeetVadakkanchery @mhorowitz @pradeepfn @ekr0 | true |
2,834,259,984 | Add fqn_modifier at loading_state_dict and unit test | mori360 | closed | [
"oncall: distributed",
"topic: not user facing"
] | 1 | CONTRIBUTOR | In a Fusion model, users might change the state_dict keys via a state_dict_hook.
The load_state_dict APIs here won't call model.state_dict(), so the hooks won't be called to change the keys, causing a mismatch between FQNs and state_dict keys.
This PR suggests users declare how they would change the state_dict key prefix (they can name it; here we call it "fqn_modifiers").
During state_dict loading, we apply the prefix change while resolving FQNs so that keys are processed the same way as they would be through the state_dict hook.
For example:
There's a state_dict_hook:
```
def _state_dict_hook(self, destination, prefix, keep_vars):
"""Remove "embedding" from the original embedding in the state_dict
        name. This keeps the original state dict name for the embedding
from before fusing with the FusionEmbedding.
[!Note] This update changes the order of the OrderedDict
"""
key = prefix + "embedding.weight"
new_key = prefix + "weight"
destination[new_key] = destination[key]
del destination[key]
```
In the dsd after this PR, we would skip "embedding." before "weight" if find the "fqn_modifiers" attribute at that module
```
def fqn_modifiers(self) -> Dict[str, str]:
return {
"weight": "embedding",
}
```
cc @H-Huang @awgu @kwen2501 @wanchaol @fegin @fduwjj @wz337 @wconstab @d4l3k @c-p-i-o @LucasLLC @MeetVadakkanchery @mhorowitz @pradeepfn @ekr0 | true |
2,834,252,590 | distributed/serialization: add experimental streaming torch.save/load methods | d4l3k | closed | [
"oncall: distributed",
"Merged",
"ciflow/trunk",
"topic: not user facing"
] | 13 | MEMBER | Summary:
This is intended for use with torchft when we need to do a streaming state dict transfer. This is strictly superior to the prior streaming method in torchft as this supports all tensor subclasses such as DTensor.
This supports 100% of the inputs to torch.save/load but is not wire compatible nor intended to have any backwards compatibility.
Security-wise, this fully supports `weights_only` and defaults it to True. It does use pickle for some metadata, but that metadata is also loaded with `weights_only`.
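For context, the standard (non-streaming) round-trip through an in-memory stream already illustrates the `weights_only=True` safety default this PR adopts; the PR itself defines a different streaming format that is not wire compatible with this one.

```python
import io

import torch

# Plain torch.save/torch.load through an in-memory stream, shown only to
# illustrate the weights_only=True default; not the PR's streaming format.
state = {"w": torch.randn(4, 4), "step": torch.tensor(3)}

buf = io.BytesIO()
torch.save(state, buf)
buf.seek(0)

loaded = torch.load(buf, weights_only=True)  # rejects arbitrary pickled objects
assert torch.equal(loaded["w"], state["w"])
```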
Adapted from:
https://github.com/pytorch/torchft/pull/101
https://github.com/pytorch/torchft/pull/54
Test Plan:
pytest test/distributed/test_serialization.py
cc @H-Huang @awgu @kwen2501 @wanchaol @fegin @fduwjj @wz337 @wconstab @c-p-i-o | true |
2,834,251,670 | [cutlass backend] Set no fallback to aten, disabled a few broken tests, default to test on H100 | henrylhtsang | closed | [
"fb-exported",
"Merged",
"ciflow/trunk",
"topic: not user facing",
"module: inductor",
"ciflow/inductor"
] | 8 | CONTRIBUTOR | This PR does a few things:
* set fallback to ATen to False for most tests. Without this, a lot of tests would fail silently since they just use ATen
* Disable two subprocess-related broken tests. They crash in the subprocess; more investigation needed.
* remove/disable the tests on A100. Let me elaborate a bit more.
There are two types of A100 tests:
* normal tests that also test A100, e.g., mm, addmm, bmm. However, since the shift to cutlass 3x, they don't work anymore. GenerateSM80 would generate ops that use cutlass 2x, but they get filtered out since they are of GemmKind.Universal while only GemmKind.Universal3x is supported in the 3x template.
* tests for A100 only. The mixed mm and sparse semi-structured tests have been failing for a while with "TypeError: can't multiply sequence by non-int of type 'str'". Disabled them for now. Do let us know if you care about them @alexsamardzic
Differential Revision: D69209929
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @amjames @desertfire @chauhang @aakhundov | true |
2,834,233,373 | [dynamo] Remove the suggestion to use suppress_errors on compiler error | anijain2305 | closed | [
"Merged",
"ciflow/trunk",
"topic: not user facing",
"module: dynamo",
"ciflow/inductor"
] | 1 | CONTRIBUTOR | Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #146550
* __->__ #146553
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @chauhang @amjames | true |
2,834,225,496 | [MPSInductor] Fix min/max for bfloat16 | malfet | closed | [
"Merged",
"ciflow/trunk",
"topic: not user facing",
"ciflow/mps",
"module: inductor",
"ciflow/inductor"
] | 3 | CONTRIBUTOR | Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #146552
By introducing a full specialization that upcasts everything to float, as bfloat16 does not have a native min/max
Tested by running `test_min_max_reduction`
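The upcast-reduce-downcast pattern the fix uses inside the Metal kernel can be sketched in plain PyTorch (CPU shown for illustration only; the actual change lives in the shader):

```python
import torch

# Sketch of the workaround: no native bf16 max -> upcast to float32,
# reduce, then cast the result back to bfloat16.
x = torch.tensor([1.5, -2.0, 3.25], dtype=torch.bfloat16)

out = x.to(torch.float32).amax().to(torch.bfloat16)
assert out.dtype == torch.bfloat16
```

All three values above are exactly representable in bfloat16, so the round trip is lossless here.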
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @amjames @desertfire @chauhang @aakhundov | true |
2,834,223,149 | libtorch_cuda_linalg.so: undefined symbol: mkl_lapack_dsbrdbn on a source built PyTorch 2.6.0 with USE_STATIC_MKL=1 on CUDA platform | filbranden | open | [
"module: build",
"triaged",
"module: mkl",
"module: regression",
"actionable"
] | 4 | NONE | ### 🐛 Describe the bug
I'm seeing this error on a source built PyTorch 2.6.0 with USE_STATIC_MKL=1 on a CUDA platform
Using the following code snippet to reproduce the issue:
```python
import torch
A = torch.randn(2, 2, dtype=torch.complex128)
A = A + A.T.conj()
torch.cuda.init()
torch.linalg.eigh(A.to("cuda"))
```
The error is:
```
Traceback (most recent call last):
File "error_reproducer.py", line 12, in <module>
torch.linalg.eigh(A.to("cuda"))
RuntimeError: Error in dlopen: venv/lib/python3.12/site-packages/torch/lib/libtorch_cuda_linalg.so: undefined symbol: mkl_lapack_dsbrdbn
```
Looking at the symbols, I can see that the definition for `mkl_lapack_dsbrdbn` (and all other `mkl_lapack_*` symbols) is missing from `libtorch_cpu.so` which I believe is where they should be:
```
$ objdump -T lib/python3.12/site-packages/torch/lib/libtorch_cuda_linalg.so | grep mkl_lapack_ | head -5
0000000000000000 D *UND* 0000000000000000 Base mkl_lapack_dsbrdbn
0000000000000000 D *UND* 0000000000000000 Base mkl_lapack_zlaeh2
0000000000000000 D *UND* 0000000000000000 Base mkl_lapack_claeh2
0000000000000000 D *UND* 0000000000000000 Base mkl_lapack_slaeh2
0000000000000000 D *UND* 0000000000000000 Base mkl_lapack_zhbrdbn
$ objdump -T lib/python3.12/site-packages/torch/lib/libtorch_cpu.so | grep mkl_lapack_ | head -5
(empty)
```
On a separate setup with PyTorch 2.2.1, also built from source using `USE_STATIC_MKL=1`, that doesn't seem to be the case:
```
$ objdump -T lib/python3.10/site-packages/torch/lib/libtorch_cuda_linalg.so | grep mkl_lapack_ | head -5
0000000000000000 DF *UND* 0000000000000000 Base mkl_lapack_clarfb
0000000000000000 DF *UND* 0000000000000000 Base mkl_lapack_zungqr
0000000000000000 DF *UND* 0000000000000000 Base mkl_lapack_zlaset
0000000000000000 DF *UND* 0000000000000000 Base mkl_lapack_sgerdb
0000000000000000 DF *UND* 0000000000000000 Base mkl_lapack_zlacrm
$ objdump -T lib/python3.10/site-packages/torch/lib/libtorch_cpu.so | grep mkl_lapack_ | head -5
000000000a42f770 g DF .text 0000000000000160 Base mkl_lapack_ps_dgetrs_small
0000000006cf6750 g DF .text 0000000000000d10 Base mkl_lapack_dden2band
000000000f1ad020 g DF .text 0000000000001e30 Base mkl_lapack_ps_avx2_dsyr2_nb
0000000006c17fa0 g DF .text 0000000000000130 Base mkl_lapack_ps_spotrf_l_small
0000000006c6b5e0 g DF .text 0000000000000290 Base mkl_lapack_cdfirstval
$ objdump -T lib/python3.10/site-packages/torch/lib/libtorch_cpu.so | grep mkl_lapack_clarfb
0000000006d088a0 g DF .text 00000000000035a0 Base mkl_lapack_clarfb
0000000006d42410 g DF .text 00000000000021a0 Base mkl_lapack_clarfb_team
```
So not sure what kind of regression could have caused that.
Dropping `USE_STATIC_MKL=1` produces a wheel that mostly works: it needs `intel-oneapi-mkl-devel` installed at runtime, and I also needed to create a few additional symlinks, e.g. `ln -s libmkl_gnu_thread.so /usr/lib/x86_64-linux-gnu/libmkl_gnu_thread.so.2` (possibly because the packages are poorly set up for dynamic loading). In any case, it seems the static build is missing or not exporting some symbols.
I found a somewhat related issue #72653, though that's for an older version of PyTorch. Also I found this workaround https://github.com/pytorch/pytorch/blob/v2.6.0/caffe2/CMakeLists.txt#L1445-L1472 related to `mkl_lapack_*` symbols, so maybe my build needs a few more of those to be explicitly mentioned?
### Versions
PyTorch version: 2.6.0+cu12.filbranden1
Is debug build: False
CUDA used to build PyTorch: 12.5
ROCM used to build PyTorch: N/A
OS: Debian GNU/Linux 11 (bullseye) (x86_64)
GCC version: (Debian 10.2.1-6) 10.2.1 20210110
Clang version: Could not collect
CMake version: version 3.25.1
Libc version: glibc-2.31
Python version: 3.12.7 (main, Jan 31 2025, 18:19:02) [GCC 13.3.0] (64-bit runtime)
Python platform: Linux-6.6.43-x86_64-with-glibc2.31
Is CUDA available: True
CUDA runtime version: Could not collect
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration:
GPU 0: NVIDIA H100 80GB HBM3
Nvidia driver version: 555.42.02
cuDNN version: Probably one of the following:
/usr/lib/x86_64-linux-gnu/libcudnn.so.8.9.7
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
Vendor ID: GenuineIntel
Model name: Intel(R) Xeon(R) Platinum 8462Y+
Versions of relevant libraries:
[pip3] faiss==1.8.0+numpy2.1
[pip3] flake8==7.0.0
[pip3] gpytorch==1.13
[pip3] msgpack-numpy==0.4.8
[pip3] mypy==1.10.1
[pip3] mypy-extensions==1.0.0
[pip3] mypy-protobuf==3.6.0
[pip3] numpy==2.0.2
[pip3] numpyro==0.15.0
[pip3] pytorch-lightning==2.4.0
[pip3] torch==2.6.0+cu12.filbranden1
[pip3] torchmetrics==1.6.0
[pip3] triton==3.2.0
[conda] Could not collect
cc @malfet @seemethere | true |
2,834,217,824 | [dynamo] Actionable message on recompilations for fullgraph=True | anijain2305 | closed | [
"Merged",
"ciflow/trunk",
"topic: not user facing",
"module: dynamo",
"ciflow/inductor"
] | 9 | CONTRIBUTOR | Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #146550
* #146553
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @chauhang @amjames | true |
2,834,211,288 | [BE][Metal] Fix signed unsigned comparison warning | malfet | closed | [
"Merged",
"topic: not user facing",
"ciflow/mps"
] | 3 | CONTRIBUTOR | I wish I knew how to extract Metal warnings during JIT compilation, but https://developer.apple.com/documentation/metal/mtldevice/makelibrary(source:options:)?changes=_7&language=objc is a lie, as `error:` stays `nil` unless shader compilation fails. But when it does fail, the following warnings are thrown
```
program_source:666:26: warning: comparison of integers of different signs: 'int' and 'unsigned int' [-Wsign-compare]
for (auto idx = 1; idx < size; ++idx) {
~~~ ^ ~~~~
program_source:677:26: warning: comparison of integers of different signs: 'int' and 'unsigned int' [-Wsign-compare]
for (auto idx = 1; idx < size; ++idx) {
~~~ ^ ~~~~
program_source:688:26: warning: comparison of integers of different signs: 'int' and 'unsigned int' [-Wsign-compare]
for (auto idx = 1; idx < size; ++idx) {
~~~ ^ ~~~~
program_source:699:26: warning: comparison of integers of different signs: 'int' and 'unsigned int' [-Wsign-compare]
for (auto idx = 1; idx < size; ++idx) {
~~~ ^ ~~~~
program_source:710:26: warning: comparison of integers of different signs: 'int' and 'unsigned int' [-Wsign-compare]
for (auto idx = 1; idx < size; ++idx) {
~~~ ^ ~~~~
program_source:723:26: warning: comparison of integers of different signs: 'int' and 'unsigned int' [-Wsign-compare]
for (auto idx = 1; idx < size; ++idx) {
```
| true |
2,834,209,809 | [ROCm][TunableOp] Future proof TunableOp unit test. | naromero77amd | closed | [
"module: rocm",
"open source",
"Merged",
"ciflow/trunk",
"topic: not user facing",
"ciflow/rocm"
] | 3 | COLLABORATOR | TunableOp UT will fail because the regular expression in the test will not work for future versions of ROCm.
cc @jeffdaily @sunway513 @jithunnair-amd @pruthvistony @ROCmSupport @dllehr-amd @jataylo @hongxiayang | true |
2,834,205,687 | [BE][MPS]Reduce number BitwiseOps parameters to 1 | malfet | closed | [
"release notes: mps",
"ciflow/mps"
] | 3 | CONTRIBUTOR | Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #146547
* #146522
To mimic the behavior of CPU and CUDA
TODO: Add TensorIterator to cast to the same dtype if needed (but want to see if we have tests for it already) | true |
2,834,184,814 | [export] Fix tensor variants to scalar variants. | zhxchen17 | closed | [
"fb-exported",
"ciflow/trunk",
"release notes: quantization",
"release notes: export"
] | 3 | CONTRIBUTOR | Summary:
Ensure that when we construct an ExportedProgram, instead of having patterns like
```
torch.ops.aten.add.Tensor(tensor, scalar)
```
we will always fix it to become
```
torch.ops.aten.add.Scalar(tensor, scalar)
```
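Both overloads are callable directly and agree numerically; a quick check (illustrative, not from the PR's test plan):

```python
import torch

# The normalization only changes which overload the ExportedProgram
# records when the second argument is a Python scalar; results match.
t = torch.ones(3)
a = torch.ops.aten.add.Tensor(t, torch.tensor(2.0))  # Tensor overload
b = torch.ops.aten.add.Scalar(t, 2.0)                # Scalar overload
assert torch.equal(a, b)
```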
Test Plan: CI
Differential Revision: D69212362
| true |
2,834,181,942 | Update test.sh to run a greater set of unit tests on aarch64 | christinaburge | open | [
"module: ci",
"triaged",
"open source",
"Stale",
"topic: not user facing"
] | 4 | NONE | Expanded the set of unit tests that run on aarch64 to the entire set of tests that can be run by run_test.py.
cc @seemethere @malfet @pytorch/pytorch-dev-infra | true |
2,834,172,628 | [dynamo] fix dynamo_compile logging on RecompileLimitExceeded | xmfan | closed | [
"Merged",
"ciflow/trunk",
"topic: not user facing",
"module: dynamo",
"ciflow/inductor"
] | 6 | MEMBER | Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #146544
Logging branches based on RecompileLimitExceeded or not. If we exceed the limit, we fallback to eager before even trying to analyze the frame. We handle RecompileLimitExceeded outside of the try/catch/finally that edits the metrics context:
https://github.com/pytorch/pytorch/blob/72405b0c0f40a5427656038adfdd4b3efe50d028/torch/_dynamo/convert_frame.py#L908-L935.
dynamo_config and recompile_reason are both known before we raise the RecompileLimitExceeded, so we can add them with the rest of the "common" metrics, which are logged when the metrics-context decorator exits (and that exit path is always called).
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @chauhang @amjames | true |
2,834,172,282 | Update local_timer.py to improve queue handling | christinaburge | open | [
"oncall: distributed",
"triaged",
"open source",
"Stale",
"release notes: distributed (torchelastic)"
] | 3 | NONE | - Switched from `multiprocessing.Queue` to `torch.multiprocessing.Queue`
- Wrapped `qsize()` in `try-except` to prevent `NotImplementedError`
cc @H-Huang @awgu @kwen2501 @wanchaol @fegin @fduwjj @wz337 @wconstab @d4l3k @c-p-i-o | true |
2,834,172,227 | DeepSeek: mixed precision optimizers (BF16AdamW) | ngimel | open | [
"module: optimizer",
"triaged"
] | 6 | COLLABORATOR | DeepSeek mentions that they keep optimizer states in bf16, this is something that afaik our optimizers don't support.
Similarly, one could imagine computing/accumulating gradients at less than fp32 precision and adding them to fp32 params, something that's also not supported today, since `param` and `param.grad` are mandated to have the same dtype, and by default optimizers operate on `param` and `param.grad`. That can be worked around with functional invocation, but we don't have convenient constructs for it.
Insufficient optimizer flexibility might be one of the reasons a lot of projects start by copying optimizer wholesale and modifying it for their needs.
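A toy sketch of the pattern being asked for (an assumption, not a PyTorch API): an SGD-with-momentum step that keeps its momentum buffer in bfloat16 while the master params stay float32, with the actual arithmetic done in fp32.

```python
import torch

# Hypothetical low-precision-state optimizer step (illustrative only).
def sgd_step(param, grad, buf_bf16, lr=0.1, momentum=0.9):
    buf = buf_bf16.float().mul_(momentum).add_(grad)  # compute in fp32
    param.add_(buf, alpha=-lr)                        # fp32 master param
    return buf.to(torch.bfloat16)                     # store state in bf16

p = torch.zeros(2)
g = torch.ones(2)
buf = torch.zeros(2, dtype=torch.bfloat16)
buf = sgd_step(p, g, buf)
assert buf.dtype == torch.bfloat16
```

The point is that today's built-in optimizers don't expose this split between state dtype and param dtype, which is why projects copy and modify optimizers wholesale.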
cc @vincentqb @jbschlosser @albanD @janeyx99 @crcrpar | true |
2,834,171,573 | onnx.export: When a quantized model is exported using onnx.export, the convolution result has discrepency with the original quantized model. | ZiyaoAtAiZip | open | [
"module: onnx",
"triaged"
] | 6 | NONE | ### 🐛 Describe the bug
As the title says: when exporting a quantized convolution layer with onnx.export, the ONNX model's output differs by plus or minus one at some positions from the original quantized convolution layer's output.
The sample code below reliably reproduces the problem.
```python
import torch
import torch.nn as nn
import torch.quantization
import numpy as np
import onnxruntime
class SingleConvModel(nn.Module):
def __init__(self):
super(SingleConvModel, self).__init__()
# QuantStub will quantize the input.
self.quant = torch.quantization.QuantStub()
# A single 2D convolution layer.
self.conv = nn.Conv2d(in_channels=3, out_channels=3, kernel_size=3, padding=1)
# Note: We intentionally do NOT use a final DeQuantStub so that the output remains quantized.
def forward(self, x):
x = self.quant(x)
x = self.conv(x)
return x
model = SingleConvModel()
model.eval() # Set to evaluation mode
with torch.no_grad():
model.conv.weight.copy_(torch.randn_like(model.conv.weight))
if model.conv.bias is not None:
model.conv.bias.copy_(torch.randn_like(model.conv.bias))
print("Random weights assigned to the model.")
model.qconfig = torch.quantization.get_default_qconfig('fbgemm')
torch.quantization.prepare(model, inplace=True)
dummy_input = torch.randn(1, 3, 150, 150)
_ = model(dummy_input)
torch.quantization.convert(model, inplace=True)
print("Model has been quantized.")
torch.save(model.state_dict(), "quantized_model_weights.pth")
print("Quantized model weights saved as 'quantized_model_weights.pth'.")
model_loaded = SingleConvModel()
state_dict = torch.load("quantized_model_weights.pth")
new_state_dict = {}
for key, value in state_dict.items():
# Instead of torch.is_quantized(value), check via the instance attribute.
if hasattr(value, "is_quantized") and value.is_quantized:
new_state_dict[key] = value.dequantize() # convert quantized tensor to float
else:
new_state_dict[key] = value
missing_keys, unexpected_keys = model_loaded.load_state_dict(new_state_dict, strict=False)
print("Loaded state_dict. Missing keys:", missing_keys)
print("Unexpected keys:", unexpected_keys)
model_loaded.qconfig = torch.quantization.get_default_qconfig('fbgemm')
torch.quantization.prepare(model_loaded, inplace=True)
_ = model_loaded(dummy_input)
torch.quantization.convert(model_loaded, inplace=True)
print("Loaded model has been re-quantized.")
print("Loaded quantized model:")
print(model_loaded)
with torch.no_grad():
pytorch_output = model_loaded(dummy_input)
if pytorch_output.is_quantized:
pytorch_int_output = pytorch_output.int_repr().cpu().numpy()
else:
raise ValueError("Expected a quantized tensor output from the PyTorch model.")
# NOTE: the original report omits the export step that defines onnx_filename;
# something like torch.onnx.export(model_loaded, dummy_input, onnx_filename)
# is assumed to have run before this point.
session = onnxruntime.InferenceSession(onnx_filename)
ort_inputs = {session.get_inputs()[0].name: dummy_input.cpu().numpy()}
ort_output = session.run(None, ort_inputs)[0] # Expecting an integer array
diff = (pytorch_int_output.flatten().astype(int) - ort_output.flatten().astype(int))
print(diff.max())
print(diff.min())
np.testing.assert_allclose(pytorch_int_output, ort_output, rtol=1e-3, atol=1e-3)
print("Success: The quantized outputs from the loaded PyTorch model and the ONNX model match!")
```
The code should output:
```
1
-1
---------------------------------------------------------------------------
AssertionError                            Traceback (most recent call last)
Cell In[5], line 31
     21 print(diff.min())
     ...
---> 31 np.testing.assert_allclose(pytorch_int_output, ort_output, rtol=1e-3, atol=1e-3)
     32 print("Success: The quantized outputs from the loaded PyTorch model and the ONNX model match!")

[... skipping hidden 1 frame]

File ~/anaconda3/envs/onnx_test/lib/python3.11/site-packages/numpy/testing/_private/utils.py:885, in assert_array_compare(comparison, x, y, err_msg, verbose, header, precision, equal_nan, equal_inf, strict, names)
    880     err_msg += '\n' + '\n'.join(remarks)
    881     msg = build_err_msg([ox, oy], err_msg,
    882                         verbose=verbose, header=header,
    883                         names=names,
    884                         precision=precision)
--> 885     raise AssertionError(msg)
    886 except ValueError:
    887     import traceback
AssertionError:
Not equal to tolerance rtol=0.001, atol=0.001
Mismatched elements: 122 / 67500 (0.181%)
Max absolute difference among violations: 1
Max relative difference among violations: 0.03448276
ACTUAL: array([[[[ 74, 99, 96, ..., 54, 62, 81],
[ 61, 57, 55, ..., 83, 74, 87],
[ 75, 41, 69, ..., 55, 87, 83],...
DESIRED: array([[[[ 74, 99, 96, ..., 54, 62, 81],
[ 61, 57, 55, ..., 83, 74, 87],
[ 75, 41, 69, ..., 55, 87, 83],...
```
This error is small on its own, but when a convolution-based block, such as a GRU block, is called multiple times for a time-dependent task (video), the error accumulates exponentially.
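The compounding effect can be sketched in pure Python (an illustrative model of multiplicative error growth, not the reporter's actual network): a per-step relative error on the order of the 0.034 max relative difference above grows quickly once a block's output is fed back into itself.

```python
# Illustrative sketch (hypothetical model, not the reporter's GRU block):
# each pass through the block amplifies the relative error by a small factor.
per_step = 1.0 + 0.0345  # ~max relative difference reported above


def accumulated_error(steps):
    """Relative error after feeding the block's output back `steps` times."""
    err = 1.0
    for _ in range(steps):
        err *= per_step
    return err - 1.0  # fraction by which results have drifted


# After 1 step the drift is ~3.5%; after 30 steps it exceeds 170%.
print(accumulated_error(1), accumulated_error(30))
```

This is why a tolerance that passes for a single block can still fail badly over a video-length sequence.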
### Versions
Collecting environment information...
PyTorch version: 2.6.0+cu124
Is debug build: False
CUDA used to build PyTorch: 12.4
ROCM used to build PyTorch: N/A
OS: Ubuntu 22.04.2 LTS (x86_64)
GCC version: (Ubuntu 11.4.0-1ubuntu1~22.04) 11.4.0
Clang version: 14.0.0-1ubuntu1.1
CMake version: version 3.27.2
Libc version: glibc-2.35
Python version: 3.11.11 (main, Dec 11 2024, 16:28:39) [GCC 11.2.0] (64-bit runtime)
Python platform: Linux-5.15.0-113-generic-x86_64-with-glibc2.35
Is CUDA available: True
CUDA runtime version: 11.5.119
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration:
GPU 0: NVIDIA GeForce RTX 3090
GPU 1: NVIDIA GeForce RTX 3090
GPU 2: NVIDIA GeForce RTX 3090
GPU 3: NVIDIA GeForce RTX 3090
GPU 4: NVIDIA GeForce RTX 3090
GPU 5: NVIDIA GeForce RTX 3090
GPU 6: NVIDIA GeForce RTX 3090
Nvidia driver version: 535.183.01
cuDNN version: Could not collect
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 46 bits physical, 48 bits virtual
Byte Order: Little Endian
CPU(s): 80
On-line CPU(s) list: 0-79
Vendor ID: GenuineIntel
Model name: Intel(R) Xeon(R) CPU E5-2698 v4 @ 2.20GHz
CPU family: 6
Model: 79
Thread(s) per core: 2
Core(s) per socket: 20
Socket(s): 2
Stepping: 1
CPU max MHz: 3600.0000
CPU min MHz: 1200.0000
BogoMIPS: 4399.97
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc cpuid aperfmperf pni pclmulqdq dtes64 monitor ds_cpl vmx smx est tm2 ssse3 sdbg fma cx16 xtpr pdcm pcid dca sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm 3dnowprefetch cpuid_fault epb cat_l3 cdp_l3 invpcid_single pti intel_ppin ssbd ibrs ibpb stibp tpr_shadow vnmi flexpriority ept vpid ept_ad fsgsbase tsc_adjust bmi1 hle avx2 smep bmi2 erms invpcid rtm cqm rdt_a rdseed adx smap intel_pt xsaveopt cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local dtherm ida arat pln pts md_clear flush_l1d
Virtualization: VT-x
L1d cache: 1.3 MiB (40 instances)
L1i cache: 1.3 MiB (40 instances)
L2 cache: 10 MiB (40 instances)
L3 cache: 100 MiB (2 instances)
NUMA node(s): 2
NUMA node0 CPU(s): 0-19,40-59
NUMA node1 CPU(s): 20-39,60-79
Vulnerability Gather data sampling: Not affected
Vulnerability Itlb multihit: KVM: Mitigation: VMX disabled
Vulnerability L1tf: Mitigation; PTE Inversion; VMX conditional cache flushes, SMT vulnerable
Vulnerability Mds: Mitigation; Clear CPU buffers; SMT vulnerable
Vulnerability Meltdown: Mitigation; PTI
Vulnerability Mmio stale data: Mitigation; Clear CPU buffers; SMT vulnerable
Vulnerability Retbleed: Not affected
Vulnerability Spec rstack overflow: Not affected
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl and seccomp
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Retpolines; IBPB conditional; IBRS_FW; STIBP conditional; RSB filling; PBRSB-eIBRS Not affected; BHI Not affected
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Mitigation; Clear CPU buffers; SMT vulnerable
Versions of relevant libraries:
[pip3] numpy==2.2.2
[pip3] numpy==1.26.4
[pip3] nvidia-cublas-cu12==12.4.5.8
[pip3] nvidia-cuda-cupti-cu12==12.4.127
[pip3] nvidia-cuda-nvrtc-cu12==12.4.127
[pip3] nvidia-cuda-runtime-cu12==12.4.127
[pip3] nvidia-cudnn-cu12==9.1.0.70
[pip3] nvidia-cufft-cu12==11.2.1.3
[pip3] nvidia-curand-cu12==10.3.5.147
[pip3] nvidia-cusolver-cu12==11.6.1.9
[pip3] nvidia-cusparse-cu12==12.3.1.170
[pip3] nvidia-cusparselt-cu12==0.6.2
[pip3] nvidia-nccl-cu12==2.21.5
[pip3] nvidia-nvjitlink-cu12==12.4.127
[pip3] nvidia-nvtx-cu12==12.4.127
[pip3] onnx==1.17.0
[pip3] onnxruntime==1.20.1
[pip3] torch==2.6.0
[pip3] triton==3.2.0
[conda] numpy 1.26.4 pypi_0 pypi
[conda] nvidia-cublas-cu12 12.4.5.8 pypi_0 pypi
[conda] nvidia-cuda-cupti-cu12 12.4.127 pypi_0 pypi
[conda] nvidia-cuda-nvrtc-cu12 12.4.127 pypi_0 pypi
[conda] nvidia-cuda-runtime-cu12 12.4.127 pypi_0 pypi
[conda] nvidia-cudnn-cu12 9.1.0.70 pypi_0 pypi
[conda] nvidia-cufft-cu12 11.2.1.3 pypi_0 pypi
[conda] nvidia-curand-cu12 10.3.5.147 pypi_0 pypi
[conda] nvidia-cusolver-cu12 11.6.1.9 pypi_0 pypi
[conda] nvidia-cusparse-cu12 12.3.1.170 pypi_0 pypi
[conda] nvidia-cusparselt-cu12 0.6.2 pypi_0 pypi
[conda] nvidia-nccl-cu12 2.21.5 pypi_0 pypi
[conda] nvidia-nvjitlink-cu12 12.4.127 pypi_0 pypi
[conda] nvidia-nvtx-cu12 12.4.127 pypi_0 pypi
[conda] torch 2.6.0 pypi_0 pypi
[conda] triton 3.2.0 pypi_0 pypi | true |
2,834,162,827 | [export] Draft export custom streamer | angelayi | closed | [
"release notes: export"
] | 1 | CONTRIBUTOR | * Instead of using tlparse's StreamHandler, draft-export will use its own, which will capture the logs, filter them, and only output the relevant ones to the log file.
* To do this, the CaptureStructuredTrace logger will use a `LogRecord` which is basically a dictionary with a custom hash function based on what is being logged. This allows us to deduplicate logs which represent the same thing, such as:
* "missing_fake_kernel" logs with the same operator
* "mismatched_fake_kernel" logs with the same operator and reasoning
* "propagate_real_tensor", "create_unbacked_symbol", and "guard_added" logs occurring on lines with the same stacktrace | true |
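The deduplication described above can be sketched with a small hashable record (a sketch under assumed field names; the real class lives in draft-export and its schema may differ):

```python
import json


class LogRecord:
    """Structured log entry whose hash depends on what is being logged,
    so semantically-identical entries collapse when stored in a set."""

    def __init__(self, key, data):
        self.key = key
        self.data = data

    def __hash__(self):
        if self.key == "missing_fake_kernel":
            return hash((self.key, self.data["op"]))  # dedup by operator
        if self.key == "mismatched_fake_kernel":
            return hash((self.key, self.data["op"], self.data["reason"]))
        # guard/symbol logs dedup by the stacktrace they occurred on
        return hash((self.key, json.dumps(self.data["stack"])))

    def __eq__(self, other):
        # hash equality is good enough for this sketch
        return hash(self) == hash(other)


seen = set()
seen.add(LogRecord("missing_fake_kernel", {"op": "mylib::foo"}))
seen.add(LogRecord("missing_fake_kernel", {"op": "mylib::foo"}))  # duplicate
seen.add(LogRecord("missing_fake_kernel", {"op": "mylib::bar"}))
print(len(seen))  # 2 distinct records survive
```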
2,834,162,723 | [mps] Implement support for sinc() operator (inductor and eager). | dcci | closed | [
"Merged",
"module: mps",
"release notes: mps",
"ciflow/mps",
"module: inductor",
"ciflow/inductor"
] | 4 | MEMBER | cc @kulinseth @albanD @malfet @DenisVieriu97 @jhavukainen @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @amjames @desertfire @chauhang @aakhundov | true |
2,834,142,080 | Dynamo unsupported call_function BuiltinVariable(or_) [ConstDictVariable(), ConstDictVariable()] {} | zou3519 | closed | [
"triaged",
"oncall: pt2",
"module: dynamo",
"module: graph breaks",
"dynamo-dicts"
] | 0 | CONTRIBUTOR | ```py
import torch
# works
@torch.compile(fullgraph=True)
def f():
a = {"one": torch.ones(5)}
a.update({"two": torch.ones(4)})
return a
f()
# doesn't work, raises
# Unsupported: call_function BuiltinVariable(or_) [ConstDictVariable(), ConstDictVariable()] {}
@torch.compile(fullgraph=True)
def f():
return {"one": torch.ones(5)} | {"two": torch.ones(4)}
f()
```
This is common internally; it was also reported by a user at https://discuss.pytorch.org/t/why-does-dynamo-graph-break-on-or-operator-between-two-dicts/206591
cc @chauhang @penguinwu @voznesenskym @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @amjames | true |
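Until Dynamo supports `or_` between dicts, one workaround (untested against this exact repro) is dict unpacking, which expresses the same merge without the `or_` builtin; for plain dicts the two forms agree, with right-hand keys winning:

```python
# `a | b` and `{**a, **b}` build the same merged dict for plain dicts
# (Python 3.9+ for the `|` operator); the unpacking form avoids the
# or_ builtin that Dynamo cannot trace here.
a = {"one": 1, "shared": "left"}
b = {"two": 2, "shared": "right"}

assert (a | b) == {**a, **b}
assert {**a, **b}["shared"] == "right"  # right operand wins on key clashes
```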
2,834,139,950 | Log graph breaks | Raymo111 | closed | [
"Merged",
"ciflow/trunk",
"topic: not user facing",
"module: inductor",
"module: dynamo",
"ciflow/inductor",
"dynamo-logging"
] | 7 | MEMBER | Graph breaks currently aren't logged to dynamo_compile and pt2_compile_events. We want to log them.
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov | true |
2,834,135,934 | vmap x compile silently incorrect | zou3519 | closed | [
"high priority",
"triaged",
"actionable",
"module: correctness (silent)",
"oncall: pt2",
"module: functorch",
"dynamo-triage-jan2025"
] | 6 | CONTRIBUTOR | ```py
import torch
from torch import Tensor
lib = torch.library.Library('mylib', 'FRAGMENT')
@torch.library.custom_op('mylib::vquantile', mutates_args=())
def vquantile(x: Tensor, q: Tensor, dim: int = -1) -> Tensor:
return torch.quantile(x, q, dim)
@torch.library.register_fake('mylib::vquantile')
def _(x, q, dim=-1):
n = q.numel()
x = x.index_select(dim, torch.zeros(n, dtype=int)).squeeze(dim)
return torch.empty_like(x)
@torch.library.register_vmap('mylib::vquantile')
def quantile_vmap(info, in_dims, x, q, dim=-1):
x = vquantile(x.movedim(in_dims[0], -1), q, dim % (x.ndim - 1))
return x, x.ndim - 1
a = torch.arange(10.).reshape(2, 5)
q = torch.tensor([.2, .5, .8])
f1 = torch.vmap(lambda x: vquantile(x, q, -1))
f2 = torch.compile(f1)
r1 = f1(a)
r2 = f2(a)
print(f'a: \n{a}')
print(f'vmapped quantiles: \n{r1}')
print(f'compiled & vmapped quantiles: \n{r2}')
```
originally reported at https://discuss.pytorch.org/t/compile-and-vmap-in-custom-op-with-quantile/213389
cc @ezyang @gchanan @kadeng @msaroufim @chauhang @penguinwu @Chillee @samdow @kshitij12345 | true |
2,834,120,240 | [wip] disable decorator for ca | xmfan | closed | [
"Stale",
"module: dynamo",
"ciflow/inductor",
"module: compiled autograd"
] | 3 | MEMBER | Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #146535
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @chauhang @amjames @yf225 | true |
2,834,114,798 | [export] Add additional tlparse logging | angelayi | closed | [
"Merged",
"release notes: fx",
"fx",
"ciflow/inductor"
] | 1 | CONTRIBUTOR | Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #146939
* #146955
* #146859
* #146858
* __->__ #146534
* #146533
* #146532
Added some additional logging so we can also run tlparse on generic export errors
cc @ezyang @SherlockNoMad @EikanWang @jgong5 @wenzhe-nrv | true |
2,834,114,714 | [export] Use custom stream logger in draft-export | angelayi | closed | [
"Merged",
"ciflow/inductor",
"release notes: export"
] | 1 | CONTRIBUTOR | Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #146939
* #146955
* #146859
* #146858
* #146534
* __->__ #146533
* #146532
Using a custom logger so that we can store our own buffer to dedup logs that look the same. The schema for deduping is as follows:
```python
if key == "missing_fake_kernel":
return hash((key, data["op"])) # Same ops get deduped
elif key == "mismatched_fake_kernel":
return hash((key, data["op"], data["reason"])) # Same op and reason for errors get deduped
elif key == "propagate_real_tensors":
return hash((key, json.dumps(data["stack"]))) # Guards appearing on the same stacktrace get deduped
elif key == "create_unbacked_symbol":
return hash((key, json.dumps(data["stack"]))) # Unbacked symbols appearing on the same stacktrace get deduped
```
Notably, guards appearing on the same stacktrace get deduped. This is because there are some cases in PT2I models where a piece of code which creates a new unbacked symint + runs into a DDE gets called 800 times, causing 800 new symints to be created, and 800 propagate_real_tensor errors that are all the same expression. This is hard to look at, so we should just deduplicate this.
The con of this is that if multiple DDEs occur on the same stacktrace, we will only show the first issue.
2,834,114,628 | [symbolic shapes] Log SymNode id for provenance | angelayi | closed | [
"Merged",
"ciflow/trunk",
"release notes: fx",
"fx",
"ciflow/inductor"
] | 4 | CONTRIBUTOR | Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #146939
* #146955
* #146859
* #146858
* #146534
* #146533
* __->__ #146532
We can use the SymNode id to point us back to how previous expressions were created, and construct this nice tree in tlparse:
<img width="761" alt="image" src="https://github.com/user-attachments/assets/531b03e8-4398-4d0a-bd11-16078256041c" />
cc @ezyang @SherlockNoMad @EikanWang @jgong5 @wenzhe-nrv | true |
2,834,094,578 | [MPS] linalg solve implementation | Isalia20 | closed | [
"open source",
"Merged",
"topic: improvements",
"release notes: mps",
"ciflow/mps"
] | 4 | COLLABORATOR | Fixes #98222
| true |
2,834,087,624 | [aoti] Add a Tracing Context with FakeTensorMode to AOT Inductor Lowering | yushangdi | open | [
"fb-exported",
"Stale",
"ciflow/trunk",
"module: inductor",
"ciflow/inductor",
"release notes: export",
"module: aotinductor"
] | 12 | CONTRIBUTOR | Summary:
Fixes https://github.com/pytorch/pytorch/issues/118304
In the issue, we have a problem with unbacked symints because the fake tensor mode is not detected by AOTI when there are no inputs.
In the current implementation, AOTI can only detect the fake mode from node.meta["val"] on placeholder nodes, which is problematic when there are no inputs.
The solution is to also try to detect the fake mode from other nodes' node.meta["val"] or node.meta["example_value"], and to add a tracing context with the detected fake mode around the AOTI lowering path.
After adding the tracing context, we also need to make some changes so AOTI goes into the right branches, specifically for the freezing=True config. Previously we did not have a tracing context for AOTI, so we used the `tracing_context := torch._guards.TracingContext.try_get()` check to separate the AOT and JIT inductor paths when freezing the graph module. Now we change it to either explicitly guard on `V.aot_compilation` when `V` is available, or guard on `tracing_context.params_flat_unwrap_subclasses is not None` when `V` is not available.
To summarize, the fake mode detection logic in inductor is now like this:
- In AOT inductor, we get fake_mode from placeholder node meta, val or example_values. When there's no user inputs, we look at other nodes' meta to determine the fake_mode.
- In JIT inductor, we get fake_mode from tracing context. In addition, if we are in cpp_wrapper mode and trying to create fake_tensor, we need to convert inputs to use the same fake_mode as the tracing context fake mode. The inputs might have a different fake_mode than the tracing context fake_mode if they used the fake_mode [here](https://github.com/pytorch/pytorch/blob/v2.6.0/torch/_dynamo/output_graph.py#L1373-L1379) or [here](https://github.com/pytorch/pytorch/blob/v2.6.0/torch/fx/experimental/proxy_tensor.py#L412).
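The detection order above can be sketched roughly as follows (a hypothetical helper with stand-in node objects; the real logic lives in inductor and handles more cases):

```python
class Node:
    """Minimal stand-in for an FX node carrying a meta dict."""
    def __init__(self, op, meta):
        self.op = op
        self.meta = meta


class Val:
    """Stand-in for a fake tensor that knows its fake mode."""
    def __init__(self, fake_mode):
        self.fake_mode = fake_mode


def detect_fake_mode(nodes):
    """Prefer placeholder meta; fall back to any node's val/example_value."""
    def mode_of(node, key):
        v = node.meta.get(key)
        return getattr(v, "fake_mode", None) if v is not None else None

    # 1) placeholders first (the pre-existing behaviour)
    for node in nodes:
        if node.op == "placeholder":
            m = mode_of(node, "val")
            if m is not None:
                return m
    # 2) otherwise scan every node's val / example_value (the new fallback)
    for key in ("val", "example_value"):
        for node in nodes:
            m = mode_of(node, key)
            if m is not None:
                return m
    return None


mode = object()  # stand-in for a FakeTensorMode
graph = [Node("call_function", {"example_value": Val(mode)})]
assert detect_fake_mode(graph) is mode  # found even with no placeholders
```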
Test Plan:
```
buck run @fbcode//mode/dev-nosan //caffe2/test/inductor:test_aot_inductor -- -r unbacked_arg
buck run @fbcode//mode/dev-nosan //caffe2/test/inductor:test_aot_inductor -- -r test_buffer_mutation_and_force_mmap_weights_cpu
buck run @fbcode//mode/dev-nosan //caffe2/test/inductor:test_aot_inductor -- -r test_no_args
buck run @fbcode//mode/dev-nosan //caffe2/test/inductor:test_inductor -- -r test_functionalize_rng_wrappers_cpu
buck run @fbcode//mode/dev-nosan //caffe2/test/inductor:torchinductor_dynamic_shapes -- -r aoti_eager_with_scalar_dynamic_shapes
buck run @fbcode//mode/dev-nosan //caffe2/test/inductor:test_inductor -- -r test_aoti_eager
python test/inductor/test_cpu_cpp_wrapper.py TestCppWrapper.test_adding_tensor_offsets_cpu_cpp_wrapper
```
Differential Revision: D69158049
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov @desertfire @benjaminglass1 @yf225 | true |