| id | title | user | state | labels | comments | author_association | body | is_title |
|---|---|---|---|---|---|---|---|---|
2,936,020,314
|
Symintify transpose_
|
pytorchbot
|
closed
|
[
"open source"
] | 1
|
COLLABORATOR
|
Fixes https://github.com/pytorch/pytorch/issues/148702
| true
|
2,936,010,520
|
[Intel GPU][PT2E] bugfix: use zero-point to decide conv src zp mask
|
pytorchbot
|
closed
|
[
"module: cpu",
"open source"
] | 1
|
COLLABORATOR
|
# Motivation
This PR fixes a bug in how the zero-point mask is decided. Specifically, the code treated the zero-point as always non-zero because the scale, rather than the zero-point, was used for the check. Fortunately, the bug only affects performance; accuracy is not affected.
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #149473
cc @jgong5 @mingfeima @XiaobingSuper @sanchitintel @ashokei @jingxu10
| true
|
2,935,953,063
|
[cherry-pick] Update ExecuTorch pin update (#149539)
|
mergennachin
|
closed
|
[
"release notes: releng",
"ciflow/inductor"
] | 1
|
CONTRIBUTOR
|
Cherry-picking https://github.com/pytorch/pytorch/pull/149539
Fixes ExecuTorch CI in the release branch, so that subsequent cherry-picks into the release branch can test ExecuTorch CI successfully. https://hud.pytorch.org/hud/pytorch/pytorch/release%2F2.7/1?per_page=50&name_filter=executorch
| true
|
2,935,928,319
|
[test] Turn on StaticCudaLauncher
|
jamesjwu
|
closed
|
[
"topic: not user facing",
"module: inductor",
"ciflow/inductor"
] | 1
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #149629
* #149657
* #149054
Default flag to on (this lets me run benchmarks)
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
| true
|
2,935,919,553
|
DISABLED test_binary_op_with_scalar_self_support__foreach_pow_is_fastpath_True_cuda_int32 (__main__.TestForeachCUDA)
|
pytorch-bot[bot]
|
open
|
[
"triaged",
"module: flaky-tests",
"skipped",
"module: mta"
] | 5
|
NONE
|
Platforms: linux, slow
This test was disabled because it is failing in CI. See [recent examples](https://hud.pytorch.org/flakytest?name=test_binary_op_with_scalar_self_support__foreach_pow_is_fastpath_True_cuda_int32&suite=TestForeachCUDA&limit=100) and the most recent trunk [workflow logs](https://github.com/pytorch/pytorch/runs/39095932255).
Over the past 3 hours, it has been determined flaky in 3 workflow(s) with 6 failures and 3 successes.
**Debugging instructions (after clicking on the recent samples link):**
DO NOT ASSUME THINGS ARE OKAY IF THE CI IS GREEN. We now shield flaky tests from developers so CI will thus be green but it will be harder to parse the logs.
To find relevant log snippets:
1. Click on the workflow logs linked above
2. Click on the Test step of the job so that it is expanded. Otherwise, the grepping will not work.
3. Grep for `test_binary_op_with_scalar_self_support__foreach_pow_is_fastpath_True_cuda_int32`
4. There should be several instances run (as flaky tests are rerun in CI) from which you can study the logs.
<details><summary>Sample error message</summary>
```
Traceback (most recent call last):
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_device_type.py", line 1159, in test_wrapper
return test(*args, **kwargs)
File "/opt/conda/envs/py_3.10/lib/python3.10/unittest/mock.py", line 1833, in _inner
return f(*args, **kw)
File "/var/lib/jenkins/workspace/test/test_foreach.py", line 327, in test_binary_op_with_scalar_self_support
self._binary_test(
File "/var/lib/jenkins/workspace/test/test_foreach.py", line 263, in _binary_test
actual = op(inputs, self.is_cuda, is_fastpath)
File "/var/lib/jenkins/workspace/test/test_foreach.py", line 90, in __call__
assert mta_called == (expect_fastpath and (not zero_size)), (
AssertionError: mta_called=False, expect_fastpath=True, zero_size=False, self.func.__name__='_foreach_pow', keys=('aten::_foreach_pow', 'Unrecognized', 'aten::empty_strided', 'cudaLaunchKernel', 'Lazy Function Loading', 'cudaDeviceSynchronize')
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 3153, in wrapper
method(*args, **kwargs)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 3153, in wrapper
method(*args, **kwargs)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 3153, in wrapper
method(*args, **kwargs)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_device_type.py", line 454, in instantiated_test
result = test(self, **param_kwargs)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 1612, in wrapper
fn(*args, **kwargs)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_device_type.py", line 1171, in test_wrapper
raise e_tracked from e
Exception: Caused by sample input at index 1: SampleInput(input=TensorList[Tensor[size=(20, 20), device="cuda:0", dtype=torch.int32], Tensor[size=(19, 19), device="cuda:0", dtype=torch.int32], Tensor[size=(18, 18), device="cuda:0", dtype=torch.int32], Tensor[size=(17, 17), device="cuda:0", dtype=torch.int32], Tensor[size=(16, 16), device="cuda:0", dtype=torch.int32], Tensor[size=(15, 15), device="cuda:0", dtype=torch.int32], Tensor[size=(14, 14), device="cuda:0", dtype=torch.int32], Tensor[size=(13, 13), device="cuda:0", dtype=torch.int32], Tensor[size=(12, 12), device="cuda:0", dtype=torch.int32], Tensor[size=(11, 11), device="cuda:0", dtype=torch.int32], Tensor[size=(10, 10), device="cuda:0", dtype=torch.int32], Tensor[size=(9, 9), device="cuda:0", dtype=torch.int32], Tensor[size=(8, 8), device="cuda:0", dtype=torch.int32], Tensor[size=(7, 7), device="cuda:0", dtype=torch.int32], Tensor[size=(6, 6), device="cuda:0", dtype=torch.int32], Tensor[size=(5, 5), device="cuda:0", dtype=torch.int32], Tensor[size=(4, 4), device="cuda:0", dtype=torch.int32], Tensor[size=(3, 3), device="cuda:0", dtype=torch.int32], Tensor[size=(2, 2), device="cuda:0", dtype=torch.int32], Tensor[size=(1, 1), device="cuda:0", dtype=torch.int32]], args=(10), kwargs={}, broadcasts_input=False, name='')
To execute this test, run the following from the base repo dir:
PYTORCH_OPINFO_SAMPLE_INPUT_INDEX=1 PYTORCH_TEST_CUDA_MEM_LEAK_CHECK=1 PYTORCH_TEST_WITH_SLOW_GRADCHECK=1 python test/test_foreach.py TestForeachCUDA.test_binary_op_with_scalar_self_support__foreach_pow_is_fastpath_True_cuda_int32
This message can be suppressed by setting PYTORCH_PRINT_REPRO_ON_FAILURE=0
```
</details>
Test file path: `test_foreach.py`
cc @clee2000 @crcrpar @mcarilli @janeyx99
| true
|
2,935,919,414
|
DISABLED test_binary_op_with_scalar_self_support__foreach_pow_is_fastpath_True_cuda_int16 (__main__.TestForeachCUDA)
|
pytorch-bot[bot]
|
open
|
[
"triaged",
"module: flaky-tests",
"skipped",
"module: mta"
] | 5
|
NONE
|
Platforms: linux, slow
This test was disabled because it is failing in CI. See [recent examples](https://hud.pytorch.org/flakytest?name=test_binary_op_with_scalar_self_support__foreach_pow_is_fastpath_True_cuda_int16&suite=TestForeachCUDA&limit=100) and the most recent trunk [workflow logs](https://github.com/pytorch/pytorch/runs/39097709474).
Over the past 3 hours, it has been determined flaky in 3 workflow(s) with 6 failures and 3 successes.
**Debugging instructions (after clicking on the recent samples link):**
DO NOT ASSUME THINGS ARE OKAY IF THE CI IS GREEN. We now shield flaky tests from developers so CI will thus be green but it will be harder to parse the logs.
To find relevant log snippets:
1. Click on the workflow logs linked above
2. Click on the Test step of the job so that it is expanded. Otherwise, the grepping will not work.
3. Grep for `test_binary_op_with_scalar_self_support__foreach_pow_is_fastpath_True_cuda_int16`
4. There should be several instances run (as flaky tests are rerun in CI) from which you can study the logs.
Test file path: `test_foreach.py`
cc @clee2000 @crcrpar @mcarilli @janeyx99
| true
|
2,935,860,995
|
`Segmentation Fault` in `torch.lstm_cell`
|
vwrewsge
|
open
|
[
"module: crash",
"module: rnn",
"triaged",
"bug",
"module: empty tensor",
"topic: fuzzer"
] | 3
|
NONE
|
### 🐛 Describe the bug
The following code snippet causes a `segmentation fault` when running torch.lstm_cell:
```
import torch
inp = torch.full((0, 8), 0, dtype=torch.float)
hx = torch.full((0, 9), 0, dtype=torch.float)
cx = torch.full((0, 9), 0, dtype=torch.float)
w_ih = torch.full((1, 8), 1.251e+12, dtype=torch.float)
w_hh = torch.full((1, 9), 1.4013e-45, dtype=torch.float)
b_ih = None
b_hh = None
torch.lstm_cell(inp, (hx, cx), w_ih, w_hh, b_ih, b_hh)
```
### Versions
torch 2.6.0
cc @mikaylagawarecki
| true
|
2,935,760,679
|
Some `Improve Error Message` Bugs
|
vwrewsge
|
open
|
[
"module: cuda",
"module: error checking",
"triaged",
"better-engineering",
"actionable",
"module: fft"
] | 1
|
NONE
|
### 🐛 Describe the bug
# Bug 1
When calling `torch.fft.ihfft2` on the output of `torch.fft.rfft2`, the following error is raised:
```
RuntimeError: Expected self.is_floating_point() to be true, but got false. (Could this error message be improved? If so, please report an enhancement request to PyTorch.)
```
To Reproduce:
```
import torch
torch.manual_seed(0)
h, w = 32, 32
x = torch.randn(h, w, device='cuda', dtype=torch.float64)
fft_forward = torch.fft.rfft2(x)
ifft_result_1 = torch.fft.ihfft2(fft_forward, s=(h, w))
```
# Bug 2
When calling `torch.fft.ihfftn` on the output of `torch.fft.rfftn`, the following error is raised:
```
RuntimeError: Expected self.is_floating_point() to be true, but got false. (Could this error message be improved? If so, please report an enhancement request to PyTorch.)
```
To Reproduce:
```
import torch
device = "cpu"
dtype = torch.float64
norm_mode = "ortho"
shape = (6, 8, 10)
expect = torch.randn(*shape, device=device, dtype=dtype)
half_complex = torch.fft.rfftn(expect, dim=tuple(range(len(shape))), norm=norm_mode)
actual = torch.fft.ihfftn(half_complex, s=shape, dim=tuple(range(len(shape))), norm=norm_mode)
```
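For reference, and not part of the original report: `torch.fft.ihfft2`/`ihfftn` operate on real-valued input, which is why passing them the complex output of `rfft2`/`rfftn` trips the `is_floating_point()` check; whether that message should be clearer is the point of this report. A minimal sketch of an accepted input:
```python
import torch

x = torch.randn(32, 32, dtype=torch.float64)
out = torch.fft.ihfft2(x)        # real-valued input is the documented usage
print(out.dtype, out.shape)      # complex128, one-sided along the last dimension
```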
Here are some similar bugs.
# Bug 3
Code:
```
import sys
import subprocess
import signal
code = r'''
import torch
@torch.compile
def f(*args):
return torch.randn(*args)
f(-1, 3)
'''
proc = subprocess.run([sys.executable, "-c", code])
```
Output:
```
torch._dynamo.exc.TorchRuntimeError: Failed running call_function <built-in method randn of type object at 0x7fffef41ff00>(*(-1, 3), **{}):
Expected cond to be True, but got False. (Could this error message be improved? If so, please report an enhancement request to PyTorch.)
```
# Bug 4
Code:
```
import torch
def buggy_fn():
a = torch.ones(-1, 3)
return a.sum()
compiled_fn = torch.compile(buggy_fn)
result = compiled_fn()
```
Output:
```
torch._dynamo.exc.TorchRuntimeError: Failed running call_function <built-in method ones of type object at 0x7fffef41ff00>(*(-1, 3), **{}):
Expected cond to be True, but got False. (Could this error message be improved? If so, please report an enhancement request to PyTorch.)
```
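For contrast (not in the original report), the same call in eager mode fails with a message that names the offending dimension, which is roughly the level of detail the compiled error above loses:
```python
import torch

# Eager mode raises something along the lines of
# "RuntimeError: Trying to create tensor with negative dimension -1: [-1, 3]".
torch.randn(-1, 3)
```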
# Bug 5
Code:
```
import subprocess
import sys
def test_bug():
code = r"""
import torch
from torch.onnx.operators import reshape_from_tensor_shape
def func(x):
x.add_(1)
reshaped = reshape_from_tensor_shape(x, torch.randn(3, 4))
return reshaped
x = torch.randn(2, 3)
compiled_func = torch.compile(func)
compiled_func(x)
"""
try:
proc = subprocess.run([sys.executable, "-c", code],
capture_output=True, text=True)
if proc.returncode != 0:
stderr_lower = proc.stderr.lower()
if (proc.returncode < 0 or proc.returncode == 139):
print("1")
else:
print("Other error: " + proc.stderr.strip())
else:
print("No bug detected")
except Exception as e:
print(f"Other error: {str(e)}")
if __name__ == "__main__":
test_bug()
```
Output:
```
torch._dynamo.exc.TorchRuntimeError: Failed running call_function <built-in method _reshape_from_tensor of type object at 0x7fffef41ff00>(*(FakeTensor(..., size=(2, 3)), FakeTensor(..., size=(3, 4))), **{}):
Expected shape_tensor.dim() == 1 to be true, but got false. (Could this error message be improved? If so, please report an enhancement request to PyTorch.)
```
### Versions
torch 2.6.0
cc @ptrblck @msaroufim @eqy @malfet @mruberry
| true
|
2,935,750,567
|
[cherry-pick] Modify cuda aarch64 install for cudnn and nccl. Cleanup aarch64 cuda 12.6 docker #149540
|
atalman
|
closed
|
[
"topic: not user facing"
] | 1
|
CONTRIBUTOR
|
Cherry-Pick of https://github.com/pytorch/pytorch/pull/149540 to release branch
| true
|
2,935,703,015
|
`Segmentation Fault` When Using `@torch.jit.script` with a list Attribute in a Scripted Class
|
vwrewsge
|
open
|
[
"oncall: jit",
"module: crash"
] | 1
|
NONE
|
### 🐛 Describe the bug
When using `@torch.jit.script` for TorchScript compilation, defining a scripted class (e.g., `Ignored`) whose `__init__` method annotates an attribute with the bare dynamic Python type `list` causes a segmentation fault at runtime.
Code:
```
import torch
import torch.nn as nn
@torch.jit.script
class Ignored(object):
def __init__(self):
self.count: int = 0
self.items: list = []
```
Output:
```
Segmentation Fault
```
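As a point of comparison rather than a fix from the report: TorchScript normally expects container attributes to be annotated with a typed `List[...]` from `typing`. A minimal sketch that avoids the bare `list` annotation (the `Annotated` class name is made up here):
```python
import torch
from typing import List

@torch.jit.script
class Annotated(object):
    def __init__(self):
        self.count: int = 0
        self.items: List[int] = []  # element type spelled out for TorchScript
```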
### Versions
torch 2.6.0
cc @EikanWang @jgong5 @wenzhe-nrv @sanchitintel
| true
|
2,935,495,265
|
`Segmentation fault` in `torch.nn.utils.rnn.pad_packed_sequence` and `torch.nn.utils.rnn.unpack_sequence`
|
vwrewsge
|
open
|
[
"module: crash",
"module: error checking",
"triaged",
"module: empty tensor",
"topic: fuzzer"
] | 1
|
NONE
|
### 🐛 Describe the bug
Bug Code 1:
```
import torch
from torch.nn.utils.rnn import pad_packed_sequence, PackedSequence
empty_data = torch.randn(0, 5)
empty_batch_sizes = torch.tensor([], dtype=torch.int64)
empty_packed = PackedSequence(empty_data, empty_batch_sizes, None, None)
pad_packed_sequence(empty_packed, batch_first=True)
```
Output for Bug Code 1:
```
Segmentation fault
```
Bug Code 2:
```
import torch
from torch.nn.utils.rnn import PackedSequence, unpack_sequence
empty_data = torch.tensor([])
empty_batch_sizes = torch.tensor([], dtype=torch.int64)
packed = PackedSequence(data=empty_data, batch_sizes=empty_batch_sizes)
unpack_sequence(packed)
```
Output for Bug Code 2:
```
Segmentation fault
```
### Versions
torch 2.6.0
cc @malfet
| true
|
2,935,359,390
|
`torch.cuda.manual_seed` ignored
|
vwrewsge
|
open
|
[
"triaged",
"module: random",
"oncall: pt2"
] | 3
|
NONE
|
### 🐛 Describe the bug
When using torch.compile, torch.cuda.manual_seed/torch.cuda.manual_seed_all/torch.cuda.random.manual_seed do not seem to properly enforce reproducibility across multiple calls to a compiled function.
# torch.cuda.manual_seed
Code:
```python
import torch
import torch._inductor.config
torch._inductor.config.fallback_random = True
@torch.compile
def foo():
# Set the GPU seed
torch.cuda.manual_seed(3)
# Create a random tensor on the GPU.
# If a CUDA device is available, the tensor will be created on CUDA.
return torch.rand(4, device='cuda' if torch.cuda.is_available() else 'cpu')
# Call the compiled function twice
print("cuda.is_available:", torch.cuda.is_available())
result1 = foo()
result2 = foo()
print(result1)
print(result2)
```
Output:
```
cuda.is_available: True
tensor([0.2501, 0.4582, 0.8599, 0.0313], device='cuda:0')
tensor([0.3795, 0.0543, 0.4973, 0.4942], device='cuda:0')
```
# `torch.cuda.manual_seed_all`
Code:
```
import torch
import torch._inductor.config
torch._inductor.config.fallback_random = True
@torch.compile
def foo():
# Reset CUDA seeds
torch.cuda.manual_seed_all(3)
# Generate a random tensor on the GPU
return torch.rand(4, device='cuda')
# Call the compiled function twice
result1 = foo()
result2 = foo()
print(result1)
print(result2)
```
Output:
```
tensor([0.0901, 0.8324, 0.4412, 0.2539], device='cuda:0')
tensor([0.5561, 0.6098, 0.8558, 0.1980], device='cuda:0')
```
# torch.cuda.random.manual_seed
Code
```
import torch
import torch._inductor.config
torch._inductor.config.fallback_random = True
# Ensure a CUDA device is available.
if not torch.cuda.is_available():
print("CUDA is not available on this system.")
@torch.compile
def foo():
# Reset GPU random seed
torch.cuda.random.manual_seed(3)
# Generate a random tensor on GPU
return torch.rand(4, device='cuda')
# Call the compiled function twice
result1 = foo()
result2 = foo()
print(result1)
print(result2)
```
Output:
```
tensor([8.1055e-01, 4.8494e-01, 8.3937e-01, 6.7405e-04], device='cuda:0')
tensor([0.4365, 0.5669, 0.7746, 0.8702], device='cuda:0')
```
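A possible workaround, not verified in the report: hoist the seeding out of the compiled region so Dynamo does not have to trace it, and re-seed before each call. A sketch, assuming `fallback_random=True` as in the snippets above:
```python
import torch
import torch._inductor.config

torch._inductor.config.fallback_random = True

@torch.compile
def foo():
    return torch.rand(4, device="cuda")

# Seed outside the compiled function, immediately before each call.
torch.cuda.manual_seed(3)
result1 = foo()
torch.cuda.manual_seed(3)
result2 = foo()
print(torch.equal(result1, result2))  # expected: True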
### Versions
torch 2.6.0
cc @pbelevich @chauhang @penguinwu
| true
|
2,935,299,008
|
`vmap` not working on `torch.range`
|
vwrewsge
|
open
|
[
"triaged",
"module: vmap",
"module: functorch"
] | 0
|
NONE
|
### 🐛 Describe the bug
Code:
```
import torch
from functools import partial
batch_range = torch.vmap(partial(torch.range, step=1))
start = torch.tensor([1., 2., 3.])
end = torch.tensor([25., 26., 27.])
batch_range(start, end)
```
Output:
```
RuntimeError: vmap: It looks like you're calling .item() on a Tensor. We don't support vmap over calling .item() on a Tensor, please try to rewrite what you're doing with other operations. If error is occurring somewhere inside PyTorch internals, please file a bug report.
```
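`torch.range` needs `.item()` to size its output, which is what vmap rejects. A possible vmap-compatible rewrite, not from the report and only valid when every (start, end) pair yields the same number of steps (here 25, since `torch.range` is end-inclusive with step 1):
```python
import torch

def batched_range(start: torch.Tensor, n_steps: int, step: float = 1.0) -> torch.Tensor:
    # Fixed output length, so no data-dependent .item() is needed.
    return start + step * torch.arange(n_steps, dtype=start.dtype)

start = torch.tensor([1., 2., 3.])
out = torch.vmap(batched_range, in_dims=(0, None, None))(start, 25, 1.0)
print(out.shape)  # torch.Size([3, 25])
```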
### Versions
torch 2.6.0
cc @zou3519 @Chillee @samdow @kshitij12345
| true
|
2,935,264,280
|
Unexpected return type from `torch.split` under `@torch.jit.script` decorator
|
vwrewsge
|
open
|
[
"oncall: jit"
] | 0
|
NONE
|
### 🐛 Describe the bug
When using the torch.split function inside a TorchScript function (decorated with @torch.jit.script), the return type is unexpectedly a list, not a tuple. This deviates from the expected behavior where torch.split should return a tuple of tensors.
```
import torch
@torch.jit.script
def split_func(x):
return x.split(2, dim=1)
x = torch.rand(2, 8)
result = split_func(x)
print(type(result))
```
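For comparison (not part of the original snippet), the same call in eager mode returns the documented tuple:
```python
import torch

x = torch.rand(2, 8)
print(type(x.split(2, dim=1)))  # <class 'tuple'> in eager mode
```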
### Versions
torch 2.6.0
cc @EikanWang @jgong5 @wenzhe-nrv @sanchitintel
| true
|
2,935,141,692
|
Optimize `torch.equal` description
|
zeshengzong
|
closed
|
[
"open source",
"Merged",
"ciflow/trunk",
"release notes: python_frontend"
] | 4
|
CONTRIBUTOR
|
Fixes #149222
## Test Result

cc @zou3519
| true
|
2,935,012,852
|
Torch nightly `torch-2.8.0.dev20250320` breaks torchcodec
|
NicolasHug
|
closed
|
[
"module: custom-operators",
"bug",
"oncall: pt2",
"module: pt2-dispatcher"
] | 1
|
MEMBER
|
(copy/pasting https://github.com/pytorch/torchcodec/issues/579):
The TorchCodec main branch is [green](https://github.com/pytorch/torchcodec/commit/ae19a7882752823e5cd9a8c580f01150dbc6e3ec) and relies on `torch-2.8.0.dev20250319` (yesterday's nightly).
New PRs on TorchCodec relying on ``torch-2.8.0.dev20250320`` (today's nightlies) are [red](https://github.com/pytorch/torchcodec/pull/578) with a custom ops error:
```
Traceback (most recent call last):
File "/Users/runner/work/torchcodec/torchcodec/test/decoders/manual_smoke_test.py", line 14, in <module>
decoder = torchcodec.decoders._core.create_from_file(
File "/Users/runner/miniconda3/envs/test/lib/python3.9/site-packages/torch/_ops.py", line 756, in __call__
torchcodec.__version__ = '0.3.0.dev20250320'
return self._op(*args, **kwargs)
NotImplementedError: There were no tensor arguments to this function (e.g., you passed an empty list of Tensors), but no fallback function is registered for schema torchcodec_ns::create_from_file. This usually means that this function requires a non-empty list of Tensors, or that you (the operator writer) forgot to register a fallback function. Available functions are [MPS, Meta, BackendSelect, Python, FuncTorchDynamicLayerBackMode, Functionalize, Named, Conjugate, Negative, ZeroTensor, ADInplaceOrView, AutogradOther, AutogradCPU, AutogradCUDA, AutogradXLA, AutogradMPS, AutogradXPU, AutogradHPU, AutogradLazy, AutogradMTIA, AutogradMeta, Tracer, AutocastCPU, AutocastMTIA, AutocastXPU, AutocastMPS, AutocastCUDA, FuncTorchBatched, BatchedNestedTensor, FuncTorchVmapMode, Batched, VmapMode, FuncTorchGradWrapper, PythonTLSSnapshot, FuncTorchDynamicLayerFrontMode, PreDispatch, PythonDispatcher].
MPS: registered at /Users/runner/work/pytorch/pytorch/pytorch/aten/src/ATen/mps/MPSFallback.mm:78 [backend fallback]
Meta: registered at /dev/null:214 [kernel]
BackendSelect: fallthrough registered at /Users/runner/work/pytorch/pytorch/pytorch/aten/src/ATen/core/BackendSelectFallbackKernel.cpp:3 [backend fallback]
Python: registered at /Users/runner/work/pytorch/pytorch/pytorch/aten/src/ATen/core/PythonFallbackKernel.cpp:194 [backend fallback]
FuncTorchDynamicLayerBackMode: registered at /Users/runner/work/pytorch/pytorch/pytorch/aten/src/ATen/functorch/DynamicLayer.cpp:479 [backend fallback]
Functionalize: registered at /Users/runner/work/pytorch/pytorch/pytorch/aten/src/ATen/FunctionalizeFallbackKernel.cpp:349 [backend fallback]
Named: registered at /Users/runner/work/pytorch/pytorch/pytorch/aten/src/ATen/core/NamedRegistrations.cpp:7 [backend fallback]
Conjugate: registered at /Users/runner/work/pytorch/pytorch/pytorch/aten/src/ATen/ConjugateFallback.cpp:17 [backend fallback]
Negative: registered at /Users/runner/work/pytorch/pytorch/pytorch/aten/src/ATen/native/NegateFallback.cpp:18 [backend fallback]
ZeroTensor: registered at /Users/runner/work/pytorch/pytorch/pytorch/aten/src/ATen/ZeroTensorFallback.cpp:86 [backend fallback]
ADInplaceOrView: fallthrough registered at /Users/runner/work/pytorch/pytorch/pytorch/aten/src/ATen/core/VariableFallbackKernel.cpp:100 [backend fallback]
AutogradOther: registered at /Users/runner/work/pytorch/pytorch/pytorch/aten/src/ATen/core/VariableFallbackKernel.cpp:63 [backend fallback]
AutogradCPU: registered at /Users/runner/work/pytorch/pytorch/pytorch/aten/src/ATen/core/VariableFallbackKernel.cpp:67 [backend fallback]
AutogradCUDA: registered at /Users/runner/work/pytorch/pytorch/pytorch/aten/src/ATen/core/VariableFallbackKernel.cpp:75 [backend fallback]
AutogradXLA: registered at /Users/runner/work/pytorch/pytorch/pytorch/aten/src/ATen/core/VariableFallbackKernel.cpp:83 [backend fallback]
AutogradMPS: registered at /Users/runner/work/pytorch/pytorch/pytorch/aten/src/ATen/core/VariableFallbackKernel.cpp:91 [backend fallback]
AutogradXPU: registered at /Users/runner/work/pytorch/pytorch/pytorch/aten/src/ATen/core/VariableFallbackKernel.cpp:71 [backend fallback]
AutogradHPU: registered at /Users/runner/work/pytorch/pytorch/pytorch/aten/src/ATen/core/VariableFallbackKernel.cpp:104 [backend fallback]
AutogradLazy: registered at /Users/runner/work/pytorch/pytorch/pytorch/aten/src/ATen/core/VariableFallbackKernel.cpp:87 [backend fallback]
AutogradMTIA: registered at /Users/runner/work/pytorch/pytorch/pytorch/aten/src/ATen/core/VariableFallbackKernel.cpp:79 [backend fallback]
AutogradMeta: registered at /Users/runner/work/pytorch/pytorch/pytorch/aten/src/ATen/core/VariableFallbackKernel.cpp:95 [backend fallback]
Tracer: registered at /Users/runner/work/pytorch/pytorch/pytorch/torch/csrc/autograd/TraceTypeManual.cpp:294 [backend fallback]
AutocastCPU: fallthrough registered at /Users/runner/work/pytorch/pytorch/pytorch/aten/src/ATen/autocast_mode.cpp:322 [backend fallback]
AutocastMTIA: fallthrough registered at /Users/runner/work/pytorch/pytorch/pytorch/aten/src/ATen/autocast_mode.cpp:466 [backend fallback]
AutocastXPU: fallthrough registered at /Users/runner/work/pytorch/pytorch/pytorch/aten/src/ATen/autocast_mode.cpp:504 [backend fallback]
AutocastMPS: fallthrough registered at /Users/runner/work/pytorch/pytorch/pytorch/aten/src/ATen/autocast_mode.cpp:209 [backend fallback]
AutocastCUDA: fallthrough registered at /Users/runner/work/pytorch/pytorch/pytorch/aten/src/ATen/autocast_mode.cpp:165 [backend fallback]
FuncTorchBatched: registered at /Users/runner/work/pytorch/pytorch/pytorch/aten/src/ATen/functorch/LegacyBatchingRegistrations.cpp:731 [backend fallback]
BatchedNestedTensor: registered at /Users/runner/work/pytorch/pytorch/pytorch/aten/src/ATen/functorch/LegacyBatchingRegistrations.cpp:758 [backend fallback]
FuncTorchVmapMode: fallthrough registered at /Users/runner/work/pytorch/pytorch/pytorch/aten/src/ATen/functorch/VmapModeRegistrations.cpp:27 [backend fallback]
Batched: registered at /Users/runner/work/pytorch/pytorch/pytorch/aten/src/ATen/LegacyBatchingRegistrations.cpp:1075 [backend fallback]
VmapMode: fallthrough registered at /Users/runner/work/pytorch/pytorch/pytorch/aten/src/ATen/VmapModeRegistrations.cpp:33 [backend fallback]
FuncTorchGradWrapper: registered at /Users/runner/work/pytorch/pytorch/pytorch/aten/src/ATen/functorch/TensorWrapper.cpp:208 [backend fallback]
PythonTLSSnapshot: registered at /Users/runner/work/pytorch/pytorch/pytorch/aten/src/ATen/core/PythonFallbackKernel.cpp:202 [backend fallback]
FuncTorchDynamicLayerFrontMode: registered at /Users/runner/work/pytorch/pytorch/pytorch/aten/src/ATen/functorch/DynamicLayer.cpp:475 [backend fallback]
PreDispatch: registered at /Users/runner/work/pytorch/pytorch/pytorch/aten/src/ATen/core/PythonFallbackKernel.cpp:206 [backend fallback]
PythonDispatcher: registered at /Users/runner/work/pytorch/pytorch/pytorch/aten/src/ATen/core/PythonFallbackKernel.cpp:198 [backend fallback]
```
cc @chauhang @penguinwu @zou3519 @bdhirsh
| true
|
2,934,954,730
|
intermittent torch.compiler failures when running gemma model
|
taoye9
|
closed
|
[
"module: cpu",
"triaged",
"oncall: pt2",
"module: inductor"
] | 2
|
NONE
|
### 🐛 Describe the bug
Hi, I'm trying to fix an intermittent torch.compiler failure with the cpp wrapper when running the Gemma model, and I'm wondering if someone can help provide some clues for debugging or a minimal reproducer. The error is not specific to Arm; it is also reproducible on Intel machines.
```
TORCHINDUCTOR_CPP_WRAPPER=1 \
TORCHINDUCTOR_FREEZING=1 \
ONEDNN_DEFAUL_FPMATH_MODE=BF16 \
OMP_NUM_THREADS=16 \
IDEEP_CACHE_MATMUL_REORDERS=1 \
LRU_CACHE_CAPACITY=256

model.forward = torch.compile(model.forward, backend='inductor',
                              dynamic=True, fullgraph=False)
```
When compiling the generated C++ code, it fails because some variables are not declared:
```
error: ‘s9’ was not declared in this scope; did you mean ‘s1’?
2163 | const int64_t int_array_34[] = {1L, 4L, s9, 256L};
```
Here is what I tried while debugging:
Setting `TORCH_COMPILE_DEBUG=1`, there seems to be something wrong in the generated `torchinductor/model__2_inference_2.2/fx_graph_readable.py`. In short, the KV cache tensors are marked as `self._frozen_param` in the FX graph, and the corresponding C++ code for `torch.ops.aten.sym_size.int` is not generated in `output_code.py`.
The corresponding code in the Python file is:
```
if self.key_cache[layer_idx].device.type == "meta":
self.key_cache[layer_idx] = torch.zeros_like(self.key_cache[layer_idx], device=key_states.device)
self.value_cache[layer_idx] = torch.zeros_like(self.value_cache[layer_idx], device=value_states.device)
```
In `fx_graph_readable.py`:
```
# No stacktrace found for following nodes
arg0_1: "bf16[256000, 2304]" = self._frozen_param0
arg292_1: "bf16[1, 4, s9, 256]" = self._frozen_param292
arg296_1: "bf16[1, 4, s13, 256]" = self._frozen_param296
arg300_1: "bf16[1, 4, s17, 256]" = self._frozen_param300
arg304_1: "bf16[1, 4, s21, 256]" = self._frozen_param304
arg308_1: "bf16[1, 4, s25, 256]" = self._frozen_param308
arg312_1: "bf16[1, 4, s29, 256]" = self._frozen_param312
arg316_1: "bf16[1, 4, s33, 256]" = self._frozen_param316
arg320_1: "bf16[1, 4, s37, 256]" = self._frozen_param320
arg324_1: "bf16[1, 4, s41, 256]" = self._frozen_param324
arg328_1: "bf16[1, 4, s45, 256]" = self._frozen_param328
arg332_1: "bf16[1, 4, s49, 256]" = self._frozen_param332
arg336_1: "bf16[1, 4, s53, 256]" = self._frozen_param336
arg340_1: "bf16[1, 4, s57, 256]" = self._frozen_param340
# File: /home/ubuntu/workspace/torch_dev/lib/python3.10/site-packages/transformers/cache_utils.py:1736 in update, code: self.value_cache[layer_idx] = torch.zeros_like(self.value_cache[layer_idx], device=value_states.device)
sym_size_int_25: "Sym(s9)" = torch.ops.aten.sym_size.int(arg292_1, 2); arg292_1 = None
full_7: "bf16[1, 4, 40, 256]" = torch.ops.aten.full.default([1, 4, sym_size_int_25, 256], 0, dtype = torch.bfloat16, layout = torch.strided, device = device(type='cpu'), pin_memory = False)
```
### Error logs
```I0313 11:56:01.742000 5716 torch/_dynamo/convert_frame.py:1121] [0/2] run_gc_after_compile: running gc
Traceback (most recent call last):
File "/home/ubuntu/workspace/scratchs/torch_compiler/run_gemma.py", line 98, in <module>
e2e, no_output_tokens = measure_end_to_end_latency()
File "/home/ubuntu/workspace/scratchs/torch_compiler/run_gemma.py", line 65, in measure_end_to_end_latency
model.generate(model_inputs, do_sample=False, max_new_tokens=30, min_new_tokens=30)
File "/home/ubuntu/workspace/pytorch/torch/utils/_contextlib.py", line 116, in decorate_context
return func(*args, **kwargs)
File "/home/ubuntu/workspace/torch_dev/lib/python3.10/site-packages/transformers/generation/utils.py", line 2223, in generate
result = self._sample(
File "/home/ubuntu/workspace/torch_dev/lib/python3.10/site-packages/transformers/generation/utils.py", line 3211, in _sample
outputs = self(**model_inputs, return_dict=True)
File "/home/ubuntu/workspace/pytorch/torch/nn/modules/module.py", line 1751, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
File "/home/ubuntu/workspace/pytorch/torch/nn/modules/module.py", line 1762, in _call_impl
return forward_call(*args, **kwargs)
File "/home/ubuntu/workspace/pytorch/torch/_dynamo/eval_frame.py", line 663, in _fn
raise e.remove_dynamo_frames() from None # see TORCHDYNAMO_VERBOSE=1
File "/home/ubuntu/workspace/pytorch/torch/_dynamo/eval_frame.py", line 655, in _fn
return fn(*args, **kwargs)
File "/home/ubuntu/workspace/pytorch/torch/_dynamo/convert_frame.py", line 1432, in __call__
return self._torchdynamo_orig_callable(
File "/home/ubuntu/workspace/pytorch/torch/_dynamo/convert_frame.py", line 1213, in __call__
result = self._inner_convert(
File "/home/ubuntu/workspace/pytorch/torch/_dynamo/convert_frame.py", line 598, in __call__
return _compile(
File "/home/ubuntu/workspace/pytorch/torch/_dynamo/convert_frame.py", line 1059, in _compile
guarded_code = compile_inner(code, one_graph, hooks, transform)
File "/home/ubuntu/workspace/pytorch/torch/_utils_internal.py", line 97, in wrapper_function
return function(*args, **kwargs)
File "/home/ubuntu/workspace/pytorch/torch/_dynamo/convert_frame.py", line 761, in compile_inner
return _compile_inner(code, one_graph, hooks, transform)
File "/home/ubuntu/workspace/pytorch/torch/_dynamo/convert_frame.py", line 797, in _compile_inner
out_code = transform_code_object(code, transform)
File "/home/ubuntu/workspace/pytorch/torch/_dynamo/bytecode_transformation.py", line 1422, in transform_code_object
transformations(instructions, code_options)
File "/home/ubuntu/workspace/pytorch/torch/_dynamo/convert_frame.py", line 257, in _fn
return fn(*args, **kwargs)
File "/home/ubuntu/workspace/pytorch/torch/_dynamo/convert_frame.py", line 715, in transform
tracer.run()
File "/home/ubuntu/workspace/pytorch/torch/_dynamo/symbolic_convert.py", line 3500, in run
super().run()
File "/home/ubuntu/workspace/pytorch/torch/_dynamo/symbolic_convert.py", line 1337, in run
while self.step():
File "/home/ubuntu/workspace/pytorch/torch/_dynamo/symbolic_convert.py", line 1246, in step
self.dispatch_table[inst.opcode](self, inst)
File "/home/ubuntu/workspace/pytorch/torch/_dynamo/symbolic_convert.py", line 3701, in RETURN_VALUE
self._return(inst)
File "/home/ubuntu/workspace/pytorch/torch/_dynamo/symbolic_convert.py", line 3686, in _return
self.output.compile_subgraph(
File "/home/ubuntu/workspace/pytorch/torch/_dynamo/output_graph.py", line 1179, in compile_subgraph
self.compile_and_call_fx_graph(
File "/home/ubuntu/workspace/pytorch/torch/_dynamo/output_graph.py", line 1437, in compile_and_call_fx_graph
compiled_fn = self.call_user_compiler(gm)
File "/home/ubuntu/workspace/pytorch/torch/_dynamo/output_graph.py", line 1487, in call_user_compiler
return self._call_user_compiler(gm)
File "/home/ubuntu/workspace/pytorch/torch/_dynamo/output_graph.py", line 1544, in _call_user_compiler
raise BackendCompilerFailed(
File "/home/ubuntu/workspace/pytorch/torch/_dynamo/output_graph.py", line 1519, in _call_user_compiler
compiled_fn = compiler_fn(gm, self.example_inputs())
File "/home/ubuntu/workspace/pytorch/torch/_dynamo/repro/after_dynamo.py", line 150, in __call__
compiled_gm = compiler_fn(gm, example_inputs)
File "/home/ubuntu/workspace/pytorch/torch/__init__.py", line 2349, in __call__
return compile_fx(model_, inputs_, config_patches=self.config)
File "/home/ubuntu/workspace/pytorch/torch/_inductor/compile_fx.py", line 1777, in compile_fx
return compile_fx(
File "/home/ubuntu/workspace/pytorch/torch/_inductor/compile_fx.py", line 2089, in compile_fx
return aot_autograd(
File "/home/ubuntu/workspace/pytorch/torch/_dynamo/backends/common.py", line 101, in __call__
cg = aot_module_simplified(gm, example_inputs, **self.kwargs)
File "/home/ubuntu/workspace/pytorch/torch/_functorch/aot_autograd.py", line 1160, in aot_module_simplified
compiled_fn = AOTAutogradCache.load(
File "/home/ubuntu/workspace/pytorch/torch/_functorch/_aot_autograd/autograd_cache.py", line 775, in load
compiled_fn = dispatch_and_compile()
File "/home/ubuntu/workspace/pytorch/torch/_functorch/aot_autograd.py", line 1145, in dispatch_and_compile
compiled_fn, _ = create_aot_dispatcher_function(
File "/home/ubuntu/workspace/pytorch/torch/_functorch/aot_autograd.py", line 570, in create_aot_dispatcher_function
return _create_aot_dispatcher_function(
File "/home/ubuntu/workspace/pytorch/torch/_functorch/aot_autograd.py", line 820, in _create_aot_dispatcher_function
compiled_fn, fw_metadata = compiler_fn(
File "/home/ubuntu/workspace/pytorch/torch/_functorch/_aot_autograd/jit_compile_runtime_wrappers.py", line 219, in aot_dispatch_base
compiled_fw = compiler(fw_module, updated_flat_args)
File "/home/ubuntu/workspace/pytorch/torch/_inductor/compile_fx.py", line 1629, in fw_compiler_freezing
optimized_function = inner_compile(
File "/home/ubuntu/workspace/pytorch/torch/_inductor/compile_fx.py", line 628, in compile_fx_inner
return wrap_compiler_debug(_compile_fx_inner, compiler_name="inductor")(
File "/home/ubuntu/workspace/pytorch/torch/_dynamo/repro/after_aot.py", line 124, in debug_wrapper
inner_compiled_fn = compiler_fn(gm, example_inputs)
File "/home/ubuntu/workspace/pytorch/torch/_inductor/compile_fx.py", line 735, in _compile_fx_inner
mb_compiled_graph = fx_codegen_and_compile(
File "/home/ubuntu/workspace/pytorch/torch/_inductor/compile_fx.py", line 1295, in fx_codegen_and_compile
return scheme.codegen_and_compile(gm, example_inputs, inputs_to_check, graph_kwargs)
File "/home/ubuntu/workspace/pytorch/torch/_inductor/compile_fx.py", line 1197, in codegen_and_compile
compiled_fn = graph.compile_to_module().call
File "/home/ubuntu/workspace/pytorch/torch/_inductor/graph.py", line 2083, in compile_to_module
return self._compile_to_module()
File "/home/ubuntu/workspace/pytorch/torch/_inductor/graph.py", line 2130, in _compile_to_module
mod = PyCodeCache.load_by_key_path(
File "/home/ubuntu/workspace/pytorch/torch/_inductor/codecache.py", line 2747, in load_by_key_path
mod = _reload_python_module(key, path)
File "/home/ubuntu/workspace/pytorch/torch/_inductor/runtime/compile_tasks.py", line 36, in _reload_python_module
exec(code, mod.__dict__, mod.__dict__)
File "/tmp/torchinductor_ubuntu/g6/cg6xubpp3mgot5ujrcq7ns7f2kmcmg6agwntl66tbytrm3dtpaym.py", line 24158, in <module>
inductor_entry = CppWrapperCodeCache.load_pybinding(
File "/home/ubuntu/workspace/pytorch/torch/_inductor/codecache.py", line 2250, in load_pybinding
return cls.load_pybinding_async(*args, **kwargs)()
File "/home/ubuntu/workspace/pytorch/torch/_inductor/codecache.py", line 2242, in future
result = get_result()
File "/home/ubuntu/workspace/pytorch/torch/_inductor/codecache.py", line 2051, in load_fn
result = worker_fn()
File "/home/ubuntu/workspace/pytorch/torch/_inductor/codecache.py", line 2079, in _worker_compile_cpp
cpp_builder.build()
File "/home/ubuntu/workspace/pytorch/torch/_inductor/cpp_builder.py", line 1596, in build
run_compile_cmd(build_cmd, cwd=_build_tmp_dir)
File "/home/ubuntu/workspace/pytorch/torch/_inductor/cpp_builder.py", line 355, in run_compile_cmd
_run_compile_cmd(cmd_line, cwd)
File "/home/ubuntu/workspace/pytorch/torch/_inductor/cpp_builder.py", line 350, in _run_compile_cmd
raise exc.CppCompileError(cmd, output) from e
torch._dynamo.exc.BackendCompilerFailed: backend='inductor' raised:
CppCompileError: C++ compile error
Command:
g++ /tmp/torchinductor_ubuntu/nk/cnkyayvmejrdlywox7k463stqyf476wkr3bykahxynmbmpsy2bce.cpp -D TORCH_INDUCTOR_CPP_WRAPPER -D STANDALONE_TORCH_HEADER -D C10_USING_CUSTOM_GENERATED_MACROS -D CPU_CAPABILITY_SVE -D CPU_CAPABILITY_SVE256 -D AT_BUILD_ARM_VEC256_WITH_SLEEF -shared -fPIC -O3 -DNDEBUG -fno-trapping-math -funsafe-math-optimizations -ffinite-math-only -fno-signed-zeros -fno-math-errno -fexcess-precision=fast -fno-finite-math-only -fno-unsafe-math-optimizations -ffp-contract=off -fno-tree-loop-vectorize -march=native -Wall -std=c++17 -Wno-unused-variable -Wno-unknown-pragmas -fopenmp -I/usr/include/python3.10 -I/home/ubuntu/workspace/pytorch/torch/include -I/home/ubuntu/workspace/pytorch/torch/include/torch/csrc/api/include -march=armv8-a+sve -msve-vector-bits=256 -D_GLIBCXX_USE_CXX11_ABI=1 -ltorch -ltorch_cpu -ltorch_python -lgomp -L/usr/lib/aarch64-linux-gnu -L/home/ubuntu/workspace/pytorch/torch/lib -o /tmp/torchinductor_ubuntu/nk/cnkyayvmejrdlywox7k463stqyf476wkr3bykahxynmbmpsy2bce.so
Output:
/tmp/torchinductor_ubuntu/nk/cnkyayvmejrdlywox7k463stqyf476wkr3bykahxynmbmpsy2bce.cpp: In function ‘void inductor_entry_impl(AtenTensorOpaque**, AtenTensorOpaque**)’:
/tmp/torchinductor_ubuntu/nk/cnkyayvmejrdlywox7k463stqyf476wkr3bykahxynmbmpsy2bce.cpp:2163:45: error: ‘s9’ was not declared in this scope; did you mean ‘s1’?
2163 | const int64_t int_array_34[] = {1L, 4L, s9, 256L};
| ^~
| s1
/tmp/torchinductor_ubuntu/nk/cnkyayvmejrdlywox7k463stqyf476wkr3bykahxynmbmpsy2bce.cpp:2373:45: error: ‘s13’ was not declared in this scope; did you mean ‘s10’?
2373 | const int64_t int_array_38[] = {1L, 4L, s13, 256L};
| ^~~
| s10
/tmp/torchinductor_ubuntu/nk/cnkyayvmejrdlywox7k463stqyf476wkr3bykahxynmbmpsy2bce.cpp:2580:45: error: ‘s17’ was not declared in this scope; did you mean ‘s10’?
2580 | const int64_t int_array_41[] = {1L, 4L, s17, 256L};
| ^~~
| s10
/tmp/torchinductor_ubuntu/nk/cnkyayvmejrdlywox7k463stqyf476wkr3bykahxynmbmpsy2bce.cpp:2787:45: error: ‘s21’ was not declared in this scope; did you mean ‘s1’?
2787 | const int64_t int_array_44[] = {1L, 4L, s21, 256L};
| ^~~
| s1
/tmp/torchinductor_ubuntu/nk/cnkyayvmejrdlywox7k463stqyf476wkr3bykahxynmbmpsy2bce.cpp:2994:45: error: ‘s25’ was not declared in this scope
2994 | const int64_t int_array_47[] = {1L, 4L, s25, 256L};
| ^~~
/tmp/torchinductor_ubuntu/nk/cnkyayvmejrdlywox7k463stqyf476wkr3bykahxynmbmpsy2bce.cpp:3201:45: error: ‘s29’ was not declared in this scope
3201 | const int64_t int_array_50[] = {1L, 4L, s29, 256L};
| ^~~
/tmp/torchinductor_ubuntu/nk/cnkyayvmejrdlywox7k463stqyf476wkr3bykahxynmbmpsy2bce.cpp:3408:45: error: ‘s33’ was not declared in this scope
3408 | const int64_t int_array_53[] = {1L, 4L, s33, 256L};
| ^~~
/tmp/torchinductor_ubuntu/nk/cnkyayvmejrdlywox7k463stqyf476wkr3bykahxynmbmpsy2bce.cpp:3615:45: error: ‘s37’ was not declared in this scope
3615 | const int64_t int_array_56[] = {1L, 4L, s37, 256L};
| ^~~
/tmp/torchinductor_ubuntu/nk/cnkyayvmejrdlywox7k463stqyf476wkr3bykahxynmbmpsy2bce.cpp:3822:45: error: ‘s41’ was not declared in this scope; did you mean ‘s1’?
3822 | const int64_t int_array_59[] = {1L, 4L, s41, 256L};
| ^~~
| s1
/tmp/torchinductor_ubuntu/nk/cnkyayvmejrdlywox7k463stqyf476wkr3bykahxynmbmpsy2bce.cpp:4029:45: error: ‘s45’ was not declared in this scope
4029 | const int64_t int_array_62[] = {1L, 4L, s45, 256L};
| ^~~
/tmp/torchinductor_ubuntu/nk/cnkyayvmejrdlywox7k463stqyf476wkr3bykahxynmbmpsy2bce.cpp:4236:45: error: ‘s49’ was not declared in this scope
4236 | const int64_t int_array_65[] = {1L, 4L, s49, 256L};
| ^~~
/tmp/torchinductor_ubuntu/nk/cnkyayvmejrdlywox7k463stqyf476wkr3bykahxynmbmpsy2bce.cpp:4443:45: error: ‘s53’ was not declared in this scope
4443 | const int64_t int_array_68[] = {1L, 4L, s53, 256L};
| ^~~
/tmp/torchinductor_ubuntu/nk/cnkyayvmejrdlywox7k463stqyf476wkr3bykahxynmbmpsy2bce.cpp:4652:45: error: ‘s57’ was not declared in this scope
4652 | const int64_t int_array_71[] = {1L, 4L, s57, 256L};
|                                             ^~~
```
### Versions
Collecting environment information...
PyTorch version: 2.7.0a0+gitf349304
Is debug build: True
CUDA used to build PyTorch: None
ROCM used to build PyTorch: N/A
OS: Ubuntu 22.04.4 LTS (aarch64)
GCC version: (Ubuntu 11.4.0-1ubuntu1~22.04) 11.4.0
Clang version: 14.0.0-1ubuntu1.1
CMake version: version 3.31.4
Libc version: glibc-2.35
Python version: 3.10.12 (main, Feb 4 2025, 14:57:36) [GCC 11.4.0] (64-bit runtime)
Python platform: Linux-6.8.0-1024-aws-aarch64-with-glibc2.35
Is CUDA available: False
CUDA runtime version: No CUDA
CUDA_MODULE_LOADING set to: N/A
GPU models and configuration: No CUDA
Nvidia driver version: No CUDA
cuDNN version: No CUDA
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: aarch64
CPU op-mode(s): 32-bit, 64-bit
Byte Order: Little Endian
CPU(s): 48
On-line CPU(s) list: 0-47
Vendor ID: ARM
Model: 1
Thread(s) per core: 1
Core(s) per socket: 48
Socket(s): 1
Stepping: r1p1
BogoMIPS: 2100.00
Flags: fp asimd evtstrm aes pmull sha1 sha2 crc32 atomics fphp asimdhp cpuid asimdrdm jscvt fcma lrcpc dcpop sha3 sm3 sm4 asimddp sha512 sve asimdfhm dit uscat ilrcpc flagm ssbs paca pacg dcpodp svei8mm svebf16 i8mm bf16 dgh rng
L1d cache: 3 MiB (48 instances)
L1i cache: 3 MiB (48 instances)
L2 cache: 48 MiB (48 instances)
L3 cache: 32 MiB (1 instance)
NUMA node(s): 1
NUMA node0 CPU(s): 0-47
Vulnerability Gather data sampling: Not affected
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Not affected
Vulnerability Reg file data sampling: Not affected
Vulnerability Retbleed: Not affected
Vulnerability Spec rstack overflow: Not affected
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl
Vulnerability Spectre v1: Mitigation; __user pointer sanitization
Vulnerability Spectre v2: Mitigation; CSV2, BHB
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Versions of relevant libraries:
[pip3] numpy==1.26.4
[pip3] optree==0.14.0
[pip3] torch==2.7.0a0+gitf349304
[conda] Could not collect
cc @jgong5 @mingfeima @XiaobingSuper @sanchitintel @ashokei @jingxu10 @chauhang @penguinwu @voznesenskym @EikanWang @Guobing-Chen @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @chenyang78 @kadeng @muchulee8 @amjames @aakhundov
| true
|
2,934,932,589
|
[Inductor] Restrict block analysis to only match integer dims and strides
|
kundaMwiza
|
open
|
[
"triaged",
"open source",
"topic: not user facing",
"module: inductor"
] | 3
|
CONTRIBUTOR
|
Restrict block analysis to match only dimension sizes and strides that are integers. E.g. `sympy` can match index expressions like `(ModularIndexing(xindex, 4, 4)) + 4*(ModularIndexing(xindex, 32, 2))` with the candidate below, which yields an invalid match.
```python
match_expr = stride_mod0_*((xindex//(dim_mod1_*dim_mod2_*dim_mod3_*dim_mod4_))) + stride_mod1_*(ModularIndexing(xindex, dim_mod2_*dim_mod3_*dim_mod4_, dim_mod1_)) + stride_mod2_*(ModularIndexing(xindex, dim_mod3_*dim_mod4_, dim_mod2_)) + stride_mod3_*(ModularIndexing(xindex, dim_mod4_, dim_mod3_)) + stride_mod4_*(ModularIndexing(xindex, 1, dim_mod4_))
match={
dim_mod4_: 32, dim_mod3_: 2, stride_mod3_: 4, dim_mod2_: 1/16,
dim_mod1_: 4, stride_mod1_: 1, stride_mod4_: 0, stride_mod2_: 0, stride_mod0_: 0
}
```
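To illustrate the idea of the restriction (a standalone `sympy` sketch, not the Inductor code itself): a wildcard constrained to integers refuses the kind of `1/16` assignment shown above.
```python
import sympy

x = sympy.Symbol("x", positive=True, integer=True)
dim = sympy.Wild("dim")                                               # unconstrained
dim_int = sympy.Wild("dim_int", properties=[lambda e: e.is_integer])  # integers only

expr = sympy.Rational(1, 16) * x
print(expr.match(dim * x))      # {dim_: 1/16}, the invalid non-integer match
print(expr.match(dim_int * x))  # None, rejected once integrality is required
```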
Fixes #ISSUE_NUMBER
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
| true
|
2,934,928,625
|
[Inductor] optimize the heuristics of parallel reduction
|
jiayisunx
|
closed
|
[
"open source",
"Merged",
"ciflow/trunk",
"module: inductor",
"ciflow/inductor",
"release notes: inductor"
] | 4
|
COLLABORATOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #149614
Fix https://github.com/pytorch/pytorch/issues/148639.
Summary:
Optimize the heuristics of parallel reduction: when the number of steps of the first inner loop beyond the maximum parallel depth is much larger than the number of steps of all outer loops within the maximum parallel depth, change the starting depth of parallelism to that first inner loop and recalculate the maximum parallel depth. I ran the Inductor benchmark with this PR on CPU: the timm model poolformer_m36 (BF16) shows about a 25% performance improvement, and no performance regression is seen.
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @ipiszy @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
| true
|
2,934,869,088
|
Combine win and win-arm64 templates
|
iremyux
|
closed
|
[
"oncall: distributed",
"module: cpu",
"open source",
"ciflow/binaries",
"release notes: build",
"topic: not user facing",
"ciflow/mps",
"module: inductor",
"module: dynamo",
"ciflow/inductor",
"release notes: distributed (checkpoint)"
] | 2
|
COLLABORATOR
|
Fixes #148776
| true
|
2,934,862,722
|
[Build] Compile failure with torch2.5+debian10+clang11
|
kevint324
|
open
|
[
"module: build",
"triaged"
] | 0
|
CONTRIBUTOR
|
### 🐛 Describe the bug
## environment:
torch version : 2.5
os version: debian10
compiler: clang11
## problem
build script:
```
export CC=clang-11
export CXX=clang++-11
export USE_CUDA=0
export BUILD_TEST=0
export CXXFLAGS="-Wno-unused-command-line-argument"
export GLIBCXX_USE_CXX11_ABI=1
python3 setup.py install
```
The error is `undefined reference to std::filesystem::__cxx11::path::_M_split_cmpts()`:
```
FAILED: bin/torch_shm_manager
: && /usr/bin/clang++-11 -std=c++17 -lstdc++fs -Wno-unused-command-line-argument -D_GLIBCXX_USE_CXX11_ABI=1 -fvisibility-inlines-hidden -DUSE_PTHREADPOOL -DNDEBUG -DUSE_KINETO -DLIBKINETO_NOCUPTI -DLIBKINETO_NOROCTRACER -DLIBKINETO_NOXPUPTI=ON -DUSE_FBGEMM -DUSE_PYTORCH_QNNPACK -DUSE_XNNPACK -DSYMBOLICATE_MOBILE_DEBUG_HANDLE -O2 -fPIC -Wall -Wextra -Werror=return-type -Werror=non-virtual-dtor -Werror=braced-scalar-init -Werror=range-loop-construct -Werror=bool-operation -Wnarrowing -Wno-missing-field-initializers -Wno-type-limits -Wno-array-bounds -Wno-unknown-pragmas -Wno-unused-parameter -Wno-strict-overflow -Wno-strict-aliasing -Wvla-extension -Wsuggest-override -Wnewline-eof -Winconsistent-missing-override -Winconsistent-missing-destructor-override -Wno-pass-failed -Wno-error=old-style-cast -Wconstant-conversion -Wno-missing-braces -Qunused-arguments -fcolor-diagnostics -faligned-new -fno-math-errno -fno-trapping-math -Werror=format -DHAVE_AVX512_CPU_DEFINITION -DHAVE_AVX2_CPU_DEFINITION -O3 -DNDEBUG -DNDEBUG -lstdc++fs -rdynamic -Xlinker --no-as-needed caffe2/torch/lib/libshm/CMakeFiles/torch_shm_manager.dir/manager.cpp.o -o bin/torch_shm_manager -Wl,-rpath,/home/tonghengwen/code/torch_dev/pytorch/build/lib: lib/libshm.so -lstdc++fs -lrt lib/libc10.so -Wl,-rpath-link,/home/tonghengwen/code/torch_dev/pytorch/build/lib && /usr/bin/cmake -E __run_co_compile --lwyu="ldd;-u;-r" --source=bin/torch_shm_manager && :
/usr/bin/ld: /home/tonghengwen/code/torch_dev/pytorch/build/lib/libtorch_cpu.so: undefined reference to `std::filesystem::__cxx11::path::_M_split_cmpts()'
clang: error: linker command failed with exit code 1 (use -v to see invocation)
[29/32] Linking CXX shared library lib/libtorch_python.so
Warning: Unused direct dependencies:
/home/tonghengwen/code/torch_dev/pytorch/build/lib/libtorch.so
```
## Workaround
The compilation error can be fixed by adding a link-library rule for `stdc++fs`:
```
diff --git a/caffe2/CMakeLists.txt b/caffe2/CMakeLists.txt
index 9be7f3732f3..10e75f352aa 100644
--- a/caffe2/CMakeLists.txt
+++ b/caffe2/CMakeLists.txt
@@ -1421,6 +1421,7 @@ if($ENV{TH_BINARY_BUILD})
endif()
endif()
+target_link_libraries(torch_cpu PUBLIC stdc++fs)
target_link_libraries(torch_cpu PUBLIC c10)
target_link_libraries(torch_cpu PUBLIC ${Caffe2_PUBLIC_DEPENDENCY_LIBS})
target_link_libraries(torch_cpu PRIVATE ${Caffe2_DEPENDENCY_LIBS})
```
## Question
I'm not sure if this is the right way to fix this issue.
Is there a non-intrusive way, such as exporting an environment variable, to add a linker dependency to `torch_cpu`?
I tried
```
export LD_FLAGS=-lstdc++fs
```
but setup.py just forbids me from doing this.
Thanks
### Versions
```
$ python collect_env.py
Collecting environment information...
PyTorch version: N/A
Is debug build: N/A
CUDA used to build PyTorch: N/A
ROCM used to build PyTorch: N/A
OS: Debian GNU/Linux 10 (buster) (x86_64)
GCC version: (Debian 8.3.0-6) 8.3.0
Clang version: Could not collect
CMake version: version 3.22.1
Libc version: glibc-2.28
Python version: 3.9.18 (main, Nov 8 2024, 10:59:05) [GCC 8.3.0] (64-bit runtime)
Python platform: Linux-6.2.0-26-generic-x86_64-with-glibc2.28
Is CUDA available: N/A
CUDA runtime version: Could not collect
CUDA_MODULE_LOADING set to: N/A
GPU models and configuration: Could not collect
Nvidia driver version: Could not collect
cuDNN version: Could not collect
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: N/A
```
cc @malfet @seemethere
| true
|
2,934,651,750
|
[Inductor] Remove triton dtype patch which has landed
|
shink
|
closed
|
[
"triaged",
"open source",
"Merged",
"ciflow/trunk",
"topic: not user facing",
"module: inductor"
] | 11
|
CONTRIBUTOR
|
As this [pr][0] has already landed, we should remove its patch.
Having [mentioned][1] this before, I am making this change now to avoid omissions.
[0]: https://github.com/triton-lang/triton/pull/3342
[1]: https://github.com/pytorch/pytorch/pull/147583/files#r1970440062
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
| true
|
2,934,541,461
|
APL support in pytorch
|
meghaanaaa
|
closed
|
[
"module: build",
"module: windows",
"triaged",
"module: third_party",
"actionable",
"module: arm"
] | 6
|
NONE
|
I was going through the PyTorch source code and wanted to use APL for BLAS and LAPACK operations. However, in FindAPL.cmake there is a check for the library in the binary directory:
`FIND_PATH(APL_BIN_DIR NAMES armpl_lp64.dll libarmpl_lp64.a PATHS ${APL_BIN_SEARCH_PATHS})`
I have installed arm performance library using the below link:
[Arm Performance Libraries | Arm Learning Paths](https://learn.arm.com/install-guides/armpl/#:~:text=Windows,-On%20your%20Windows&text=Double%20click%20to%20open%20this,terms%20of%20this%20License%20Agreement'.&text=Click%20'Install'%20and%20then%20',Finish)’%20to%20complete%20the%20installation
I don't see any libraries in the bin directory. Is this particular check in PyTorch written for a specific APL version? I am currently using version 24.10 of the Arm Performance Libraries.
It would be really helpful if someone could give some intuition on why this is happening.
cc @malfet @seemethere @peterjc123 @mszhanyi @skyline75489 @nbcsm @iremyux @Blackhex @snadampal @milpuz01
| true
|
2,934,477,976
|
Batching Implementation for aten_nested_tensor_from_mask_left_aligned
|
itsd3
|
open
|
[
"triaged",
"module: nestedtensor",
"module: vmap",
"module: functorch"
] | 1
|
NONE
|
### 🐛 Describe the bug
Hi all,
I am trying to apply jacfwd/jacrev to a torch.nn module that implements the transformer architecture. It uses vmap as shown below:
```
compute_batch_jacobian = torch.vmap(torch.func.jacrev(model, argnums=(0,1)), in_dims=(0, 0, 0))
out = compute_batch_jacobian(*params)
```
I am getting this error:
```
RuntimeError: Batching rule not implemented for aten::_nested_tensor_from_mask_left_aligned. We could not generate a fallback.
```
Any idea if/when this will be implemented? I am running this on GPU.
### Versions
[pip3] torch==2.6.0
[pip3] nvidia-cublas-cu12==12.4.5.8
cc @cpuhrsch @jbschlosser @bhosmer @drisspg @soulitzer @davidberard98 @YuqingJ @zou3519 @Chillee @samdow @kshitij12345
| true
|
2,934,388,117
|
Runtime Error when using Memory-efficient Attention with `attn_mask`
|
vinhtq115
|
open
|
[
"module: nn",
"triaged",
"module: sdpa"
] | 0
|
NONE
|
### 🐛 Describe the bug
Code to produce error:
```
import torch
from torch.nn.attention import SDPBackend
from torch.nn.functional import scaled_dot_product_attention
q = torch.rand((16, 12, 1024, 64), device="cuda")
k = torch.rand((16, 12, 1024, 64), device="cuda")
v = torch.rand((16, 12, 1024, 64), device="cuda")
mask = torch.ones((16, 1024, 1024), device="cuda", dtype=torch.bool)
with torch.nn.attention.sdpa_kernel([SDPBackend.EFFICIENT_ATTENTION,]):
x = scaled_dot_product_attention(q, k, v, mask)
```
When I set `attn_mask` according to the [docs](https://pytorch.org/docs/stable/generated/torch.nn.functional.scaled_dot_product_attention.html#torch.nn.functional.scaled_dot_product_attention) with shape `(N, L, S)`, it always produces this error:
```
RuntimeError: The expanded size of the tensor (12) must match the existing size (16) at non-singleton dimension 1. Target sizes: [16, 12, 1024, 1024]. Tensor sizes: [16, 1024, 1024]
```
If I remove the batch dimension (`(L,S)`), it works.
```
with torch.nn.attention.sdpa_kernel([SDPBackend.EFFICIENT_ATTENTION,]):
x = scaled_dot_product_attention(q, k, v, mask[0])
```
However, doing this means that each batch will have the same mask, instead of different ones.
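A possible workaround, not verified in the report: `attn_mask` must be broadcastable to `(N, num_heads, L, S)`, so inserting a singleton head dimension keeps a distinct mask per batch element while still broadcasting over heads (whether the memory-efficient kernel accepts the broadcast mask may depend on the PyTorch version):
```python
import torch
from torch.nn.attention import SDPBackend
from torch.nn.functional import scaled_dot_product_attention

q = torch.rand((16, 12, 1024, 64), device="cuda")
k = torch.rand((16, 12, 1024, 64), device="cuda")
v = torch.rand((16, 12, 1024, 64), device="cuda")
mask = torch.ones((16, 1024, 1024), device="cuda", dtype=torch.bool)

with torch.nn.attention.sdpa_kernel([SDPBackend.EFFICIENT_ATTENTION]):
    # (N, 1, L, S) broadcasts across the head dimension.
    x = scaled_dot_product_attention(q, k, v, attn_mask=mask.unsqueeze(1))
```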
### Versions
```
Collecting environment information...
PyTorch version: 2.6.0+cu126
Is debug build: False
CUDA used to build PyTorch: 12.6
ROCM used to build PyTorch: N/A
OS: Ubuntu 24.04.2 LTS (x86_64)
GCC version: (Ubuntu 13.3.0-6ubuntu2~24.04) 13.3.0
Clang version: 18.1.3 (1ubuntu1)
CMake version: version 3.28.3
Libc version: glibc-2.39
Python version: 3.12.9 | packaged by Anaconda, Inc. | (main, Feb 6 2025, 18:56:27) [GCC 11.2.0] (64-bit runtime)
Python platform: Linux-6.8.0-55-generic-x86_64-with-glibc2.39
Is CUDA available: True
CUDA runtime version: 12.4.131
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration: GPU 0: NVIDIA GeForce RTX 4060 Ti
Nvidia driver version: 560.35.03
cuDNN version: Probably one of the following:
/usr/local/cuda-12.4/targets/x86_64-linux/lib/libcudnn.so.8.9.7
/usr/local/cuda-12.4/targets/x86_64-linux/lib/libcudnn_adv_infer.so.8.9.7
/usr/local/cuda-12.4/targets/x86_64-linux/lib/libcudnn_adv_train.so.8.9.7
/usr/local/cuda-12.4/targets/x86_64-linux/lib/libcudnn_cnn_infer.so.8.9.7
/usr/local/cuda-12.4/targets/x86_64-linux/lib/libcudnn_cnn_train.so.8.9.7
/usr/local/cuda-12.4/targets/x86_64-linux/lib/libcudnn_ops_infer.so.8.9.7
/usr/local/cuda-12.4/targets/x86_64-linux/lib/libcudnn_ops_train.so.8.9.7
/usr/local/cuda-12.6/targets/x86_64-linux/lib/libcudnn.so.8.9.7
/usr/local/cuda-12.6/targets/x86_64-linux/lib/libcudnn_adv_infer.so.8.9.7
/usr/local/cuda-12.6/targets/x86_64-linux/lib/libcudnn_adv_train.so.8.9.7
/usr/local/cuda-12.6/targets/x86_64-linux/lib/libcudnn_cnn_infer.so.8.9.7
/usr/local/cuda-12.6/targets/x86_64-linux/lib/libcudnn_cnn_train.so.8.9.7
/usr/local/cuda-12.6/targets/x86_64-linux/lib/libcudnn_ops_infer.so.8.9.7
/usr/local/cuda-12.6/targets/x86_64-linux/lib/libcudnn_ops_train.so.8.9.7
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 48 bits physical, 48 bits virtual
Byte Order: Little Endian
CPU(s): 24
On-line CPU(s) list: 0-23
Vendor ID: AuthenticAMD
Model name: AMD Ryzen 9 7900X 12-Core Processor
CPU family: 25
Model: 97
Thread(s) per core: 2
Core(s) per socket: 12
Socket(s): 1
Stepping: 2
CPU(s) scaling MHz: 31%
CPU max MHz: 5733.0000
CPU min MHz: 545.0000
BogoMIPS: 9382.43
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ht syscall nx mmxext fxsr_opt pdpe1gb rdtscp lm constant_tsc rep_good amd_lbr_v2 nopl nonstop_tsc cpuid extd_apicid aperfmperf rapl pni pclmulqdq monitor ssse3 fma cx16 sse4_1 sse4_2 movbe popcnt aes xsave avx f16c rdrand lahf_lm cmp_legacy svm extapic cr8_legacy abm sse4a misalignsse 3dnowprefetch osvw ibs skinit wdt tce topoext perfctr_core perfctr_nb bpext perfctr_llc mwaitx cpb cat_l3 cdp_l3 hw_pstate ssbd mba perfmon_v2 ibrs ibpb stibp ibrs_enhanced vmmcall fsgsbase bmi1 avx2 smep bmi2 erms invpcid cqm rdt_a avx512f avx512dq rdseed adx smap avx512ifma clflushopt clwb avx512cd sha_ni avx512bw avx512vl xsaveopt xsavec xgetbv1 xsaves cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local user_shstk avx512_bf16 clzero irperf xsaveerptr rdpru wbnoinvd cppc arat npt lbrv svm_lock nrip_save tsc_scale vmcb_clean flushbyasid decodeassists pausefilter pfthreshold avic v_vmsave_vmload vgif x2avic v_spec_ctrl vnmi avx512vbmi umip pku ospke avx512_vbmi2 gfni vaes vpclmulqdq avx512_vnni avx512_bitalg avx512_vpopcntdq rdpid overflow_recov succor smca fsrm flush_l1d
Virtualization: AMD-V
L1d cache: 384 KiB (12 instances)
L1i cache: 384 KiB (12 instances)
L2 cache: 12 MiB (12 instances)
L3 cache: 64 MiB (2 instances)
NUMA node(s): 1
NUMA node0 CPU(s): 0-23
Vulnerability Gather data sampling: Not affected
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Not affected
Vulnerability Reg file data sampling: Not affected
Vulnerability Retbleed: Not affected
Vulnerability Spec rstack overflow: Mitigation; Safe RET
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Enhanced / Automatic IBRS; IBPB conditional; STIBP always-on; RSB filling; PBRSB-eIBRS Not affected; BHI Not affected
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Versions of relevant libraries:
[pip3] ament-flake8==0.17.1
[pip3] numpy==2.1.2
[pip3] nvidia-cublas-cu12==12.6.4.1
[pip3] nvidia-cuda-cupti-cu12==12.6.80
[pip3] nvidia-cuda-nvrtc-cu12==12.6.77
[pip3] nvidia-cuda-runtime-cu12==12.6.77
[pip3] nvidia-cudnn-cu12==9.5.1.17
[pip3] nvidia-cufft-cu12==11.3.0.4
[pip3] nvidia-curand-cu12==10.3.7.77
[pip3] nvidia-cusolver-cu12==11.7.1.2
[pip3] nvidia-cusparse-cu12==12.5.4.2
[pip3] nvidia-cusparselt-cu12==0.6.3
[pip3] nvidia-nccl-cu12==2.21.5
[pip3] nvidia-nvjitlink-cu12==12.6.85
[pip3] nvidia-nvtx-cu12==12.6.77
[pip3] torch==2.6.0+cu126
[pip3] torchinfo==1.8.0
[pip3] torchvision==0.21.0+cu126
[pip3] triton==3.2.0
[conda] numpy 2.1.2 pypi_0 pypi
[conda] nvidia-cublas-cu12 12.6.4.1 pypi_0 pypi
[conda] nvidia-cuda-cupti-cu12 12.6.80 pypi_0 pypi
[conda] nvidia-cuda-nvrtc-cu12 12.6.77 pypi_0 pypi
[conda] nvidia-cuda-runtime-cu12 12.6.77 pypi_0 pypi
[conda] nvidia-cudnn-cu12 9.5.1.17 pypi_0 pypi
[conda] nvidia-cufft-cu12 11.3.0.4 pypi_0 pypi
[conda] nvidia-curand-cu12 10.3.7.77 pypi_0 pypi
[conda] nvidia-cusolver-cu12 11.7.1.2 pypi_0 pypi
[conda] nvidia-cusparse-cu12 12.5.4.2 pypi_0 pypi
[conda] nvidia-cusparselt-cu12 0.6.3 pypi_0 pypi
[conda] nvidia-nccl-cu12 2.21.5 pypi_0 pypi
[conda] nvidia-nvjitlink-cu12 12.6.85 pypi_0 pypi
[conda] nvidia-nvtx-cu12 12.6.77 pypi_0 pypi
[conda] torch 2.6.0+cu126 pypi_0 pypi
[conda] torchinfo 1.8.0 pypi_0 pypi
[conda] torchvision 0.21.0+cu126 pypi_0 pypi
[conda] triton 3.2.0 pypi_0 pypi
```
cc @albanD @mruberry @jbschlosser @walterddr @mikaylagawarecki
| true
|
2,934,377,073
|
Do not fetch NCCL when system NCCL is used
|
danieldk
|
closed
|
[
"open source",
"Merged",
"topic: not user facing"
] | 8
|
CONTRIBUTOR
|
We are compiling PyTorch in a sandbox without networking. Unconditionally fetching breaks the build and is not needed when a system NCCL is used.
| true
|
2,934,323,051
|
[AOTInductor] Fix skip cpp wrapper unit test
|
zoranzhao
|
closed
|
[
"fb-exported",
"Merged",
"ciflow/trunk",
"topic: not user facing",
"module: inductor",
"ciflow/inductor"
] | 5
|
MEMBER
|
Summary: as title
Test Plan:
```
buck2 test 'fbcode//mode/opt' fbcode//deeplearning/aot_inductor/cpu/test:cpu_lowering_utils_test -- --exact 'deeplearning/aot_inductor/cpu/test:cpu_lowering_utils_test - test_cpu_lower_aoti_ep_called (deeplearning.aot_inductor.cpu.test.test_lowering_utils.CPULoweringTest)'
```
```
buck test 'fbcode//mode/opt' fbcode//caffe2/test/inductor:cudagraph_trees_expandable_segments -- --exact 'caffe2/test/inductor:cudagraph_trees_expandable_segments - test_skip_cpp_wrapper (caffe2.test.inductor.test_cudagraph_trees.CudaGraphTreeTests)'
```
https://www.internalfb.com/phabricator/paste/view/P1758059197
Reviewed By: henryoier
Differential Revision: D71528281
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
| true
|
2,934,307,587
|
torch.onnx.export does not support nested tensor operations
|
xuantengh
|
open
|
[
"module: onnx",
"triaged"
] | 1
|
NONE
|
### 🚀 The feature, motivation and pitch
Currently, we cannot export models containing NT operations into ONNX in PyTorch:
```python
import torch
import torch.nn as nn
torch.cuda.init()
torch.set_default_device("cuda")
class ModuleWithNT(nn.Module):
def __init__(self):
super().__init__()
def forward(self, values, offsets):
return torch.nested.nested_tensor_from_jagged(values, offsets)
v = torch.randn(12, 5)
o = torch.tensor([0, 3, 5, 6, 10, 12])
model = ModuleWithNT()
# y = model(v, o)
# print(y.shape)
torch.onnx.export(model, (
v,
o,
), "/data/user/nt.onnx", dynamo=False)
```
When `dynamo=False` (by default), PyTorch reports:
```
File "/usr/local/lib/python3.10/site-packages/torch/onnx/__init__.py", line 383, in export
export(
File "/usr/local/lib/python3.10/site-packages/torch/onnx/utils.py", line 495, in export
_export(
File "/usr/local/lib/python3.10/site-packages/torch/onnx/utils.py", line 1428, in _export
graph, params_dict, torch_out = _model_to_graph(
File "/usr/local/lib/python3.10/site-packages/torch/onnx/utils.py", line 1053, in _model_to_graph
graph, params, torch_out, module = _create_jit_graph(model, args)
File "/usr/local/lib/python3.10/site-packages/torch/onnx/utils.py", line 937, in _create_jit_graph
graph, torch_out = _trace_and_get_graph_from_model(model, args)
File "/usr/local/lib/python3.10/site-packages/torch/onnx/utils.py", line 844, in _trace_and_get_graph_from_model
trace_graph, torch_out, inputs_states = torch.jit._get_trace_graph(
File "/usr/local/lib/python3.10/site-packages/torch/jit/_trace.py", line 1498, in _get_trace_graph
outs = ONNXTracedModule(
File "/usr/local/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1739, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
File "/usr/local/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1750, in _call_impl
return forward_call(*args, **kwargs)
File "/usr/local/lib/python3.10/site-packages/torch/jit/_trace.py", line 138, in forward
graph, _out = torch._C._create_graph_by_tracing(
File "/usr/local/lib/python3.10/site-packages/torch/jit/_trace.py", line 129, in wrapper
outs.append(self.inner(*trace_inputs))
File "/usr/local/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1739, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
File "/usr/local/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1750, in _call_impl
return forward_call(*args, **kwargs)
File "/usr/local/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1729, in _slow_forward
result = self.forward(*input, **kwargs)
File "/mnt/report_nt.py", line 14, in forward
return torch.nested.nested_tensor_from_jagged(values, offsets)
File "/usr/local/lib/python3.10/site-packages/torch/nested/__init__.py", line 414, in nested_tensor_from_jagged
return nested_view_from_values_offsets_lengths(
File "/usr/local/lib/python3.10/site-packages/torch/nested/_internal/nested_tensor.py", line 624, in nested_view_from_values_offsets_lengths
_nt_view_dummy(),
File "/usr/local/lib/python3.10/site-packages/torch/nested/_internal/nested_tensor.py", line 584, in _nt_view_dummy
).detach()
File "/usr/local/lib/python3.10/site-packages/torch/utils/_device.py", line 104, in __torch_function__
return func(*args, **kwargs)
File "/usr/local/lib/python3.10/site-packages/torch/nested/_internal/nested_tensor.py", line 353, in __torch_function__
return func(*args, **kwargs)
RuntimeError: Unsupported value kind: Tensor
```
When `dynamo=True`, it rather reports:
```
Traceback (most recent call last):
File "/usr/local/lib/python3.10/site-packages/torch/onnx/_internal/exporter/_capture_strategies.py", line 110, in __call__
exported_program = self._capture(model, args, kwargs, dynamic_shapes)
File "/usr/local/lib/python3.10/site-packages/torch/onnx/_internal/exporter/_capture_strategies.py", line 186, in _capture
return torch.export.export(
File "/usr/local/lib/python3.10/site-packages/torch/export/__init__.py", line 368, in export
return _export(
File "/usr/local/lib/python3.10/site-packages/torch/export/_trace.py", line 1035, in wrapper
raise e
File "/usr/local/lib/python3.10/site-packages/torch/export/_trace.py", line 1008, in wrapper
ep = fn(*args, **kwargs)
File "/usr/local/lib/python3.10/site-packages/torch/export/exported_program.py", line 128, in wrapper
return fn(*args, **kwargs)
File "/usr/local/lib/python3.10/site-packages/torch/export/_trace.py", line 1970, in _export
return _export_for_training(
File "/usr/local/lib/python3.10/site-packages/torch/export/_trace.py", line 1035, in wrapper
raise e
File "/usr/local/lib/python3.10/site-packages/torch/export/_trace.py", line 1008, in wrapper
ep = fn(*args, **kwargs)
File "/usr/local/lib/python3.10/site-packages/torch/export/exported_program.py", line 128, in wrapper
return fn(*args, **kwargs)
File "/usr/local/lib/python3.10/site-packages/torch/export/_trace.py", line 1834, in _export_for_training
export_artifact = export_func( # type: ignore[operator]
File "/usr/local/lib/python3.10/site-packages/torch/export/_trace.py", line 1772, in _non_strict_export
aten_export_artifact = _to_aten_func( # type: ignore[operator]
File "/usr/local/lib/python3.10/site-packages/torch/export/_trace.py", line 1564, in _export_to_aten_ir_make_fx
gm, graph_signature = transform(_make_fx_helper)(
File "/usr/local/lib/python3.10/site-packages/torch/export/_trace.py", line 1702, in _aot_export_non_strict
gm, sig = aot_export(wrapped_mod, args, kwargs=kwargs, **flags)
File "/usr/local/lib/python3.10/site-packages/torch/export/_trace.py", line 1485, in _make_fx_helper
gm = make_fx(
File "/usr/local/lib/python3.10/site-packages/torch/fx/experimental/proxy_tensor.py", line 2196, in wrapped
return make_fx_tracer.trace(f, *args)
File "/usr/local/lib/python3.10/site-packages/torch/fx/experimental/proxy_tensor.py", line 2134, in trace
return self._trace_inner(f, *args)
File "/usr/local/lib/python3.10/site-packages/torch/fx/experimental/proxy_tensor.py", line 2105, in _trace_inner
t = dispatch_trace(
File "/usr/local/lib/python3.10/site-packages/torch/_compile.py", line 32, in inner
return disable_fn(*args, **kwargs)
File "/usr/local/lib/python3.10/site-packages/torch/_dynamo/eval_frame.py", line 745, in _fn
return fn(*args, **kwargs)
File "/usr/local/lib/python3.10/site-packages/torch/fx/experimental/proxy_tensor.py", line 1138, in dispatch_trace
graph = tracer.trace(root, concrete_args) # type: ignore[arg-type]
File "/usr/local/lib/python3.10/site-packages/torch/fx/experimental/proxy_tensor.py", line 1694, in trace
res = super().trace(root, concrete_args)
File "/usr/local/lib/python3.10/site-packages/torch/fx/_symbolic_trace.py", line 843, in trace
(self.create_arg(fn(*args)),),
File "/usr/local/lib/python3.10/site-packages/torch/fx/experimental/proxy_tensor.py", line 1193, in wrapped
out = f(*tensors) # type:ignore[call-arg]
File "<string>", line 1, in <lambda>
File "/usr/local/lib/python3.10/site-packages/torch/export/_trace.py", line 1469, in wrapped_fn
return tuple(flat_fn(*args))
File "/usr/local/lib/python3.10/site-packages/torch/_functorch/_aot_autograd/utils.py", line 184, in flat_fn
tree_out = fn(*args, **kwargs)
File "/usr/local/lib/python3.10/site-packages/torch/_functorch/_aot_autograd/traced_function_transforms.py", line 879, in functional_call
out = mod(*args[params_len:], **kwargs)
File "/usr/local/lib/python3.10/site-packages/torch/fx/_symbolic_trace.py", line 821, in module_call_wrapper
return self.call_module(mod, forward, args, kwargs)
File "/usr/local/lib/python3.10/site-packages/torch/fx/experimental/proxy_tensor.py", line 1764, in call_module
return Tracer.call_module(self, m, forward, args, kwargs)
File "/usr/local/lib/python3.10/site-packages/torch/fx/_symbolic_trace.py", line 539, in call_module
ret_val = forward(*args, **kwargs)
File "/usr/local/lib/python3.10/site-packages/torch/fx/_symbolic_trace.py", line 814, in forward
return _orig_module_call(mod, *args, **kwargs)
File "/usr/local/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1739, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
File "/usr/local/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1750, in _call_impl
return forward_call(*args, **kwargs)
File "/usr/local/lib/python3.10/site-packages/torch/export/_trace.py", line 1689, in forward
tree_out = mod(*args, **kwargs)
File "/usr/local/lib/python3.10/site-packages/torch/fx/_symbolic_trace.py", line 821, in module_call_wrapper
return self.call_module(mod, forward, args, kwargs)
File "/usr/local/lib/python3.10/site-packages/torch/fx/experimental/proxy_tensor.py", line 1764, in call_module
return Tracer.call_module(self, m, forward, args, kwargs)
File "/usr/local/lib/python3.10/site-packages/torch/fx/_symbolic_trace.py", line 539, in call_module
ret_val = forward(*args, **kwargs)
File "/usr/local/lib/python3.10/site-packages/torch/fx/_symbolic_trace.py", line 814, in forward
return _orig_module_call(mod, *args, **kwargs)
File "/usr/local/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1739, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
File "/usr/local/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1750, in _call_impl
return forward_call(*args, **kwargs)
File "/mnt/report_nt.py", line 14, in forward
return torch.nested.nested_tensor_from_jagged(values, offsets)
File "/usr/local/lib/python3.10/site-packages/torch/nested/__init__.py", line 393, in nested_tensor_from_jagged
raise RuntimeError(
RuntimeError: torch.nested.nested_tensor_from_jagged does not support tracing with fx.symbolic_trace. Use fx.wrap to wrap the function that calls nested_tensor_from_jagged.
```
### Alternatives
_No response_
### Additional context
_No response_
| true
|
2,934,281,871
|
DISABLED test_linear (__main__.TestLazyModules)
|
pytorch-bot[bot]
|
closed
|
[
"module: nn",
"triaged",
"module: flaky-tests",
"skipped"
] | 2
|
NONE
|
Platforms: linux, rocm
This test was disabled because it is failing in CI. See [recent examples](https://hud.pytorch.org/flakytest?name=test_linear&suite=TestLazyModules&limit=100) and the most recent trunk [workflow logs](https://github.com/pytorch/pytorch/runs/39077249396).
Over the past 3 hours, it has been determined flaky in 6 workflow(s) with 8 failures and 6 successes.
**Debugging instructions (after clicking on the recent samples link):**
DO NOT ASSUME THINGS ARE OKAY IF THE CI IS GREEN. We now shield flaky tests from developers so CI will thus be green but it will be harder to parse the logs.
To find relevant log snippets:
1. Click on the workflow logs linked above
2. Click on the Test step of the job so that it is expanded. Otherwise, the grepping will not work.
3. Grep for `test_linear`
4. There should be several instances run (as flaky tests are rerun in CI) from which you can study the logs.
<details><summary>Sample error message</summary>
```
Traceback (most recent call last):
File "/var/lib/jenkins/workspace/test/nn/test_lazy_modules.py", line 126, in test_linear
self.assertTrue(
File "/opt/conda/envs/py_3.10/lib/python3.10/unittest/case.py", line 687, in assertTrue
raise self.failureException(msg)
AssertionError: False is not true
To execute this test, run the following from the base repo dir:
python test/nn/test_lazy_modules.py TestLazyModules.test_linear
This message can be suppressed by setting PYTORCH_PRINT_REPRO_ON_FAILURE=0
```
</details>
Test file path: `nn/test_lazy_modules.py`
cc @albanD @mruberry @jbschlosser @walterddr @mikaylagawarecki @clee2000
| true
|
2,934,252,833
|
[inductor] API for user-controlled fusion "escape-hatch" mechanism in inductor
|
SamGinzburg
|
open
|
[
"triaged",
"oncall: pt2",
"module: inductor"
] | 5
|
CONTRIBUTOR
|
### 🚀 The feature, motivation and pitch
There have been several situations where torch.compile-generated fusions are suboptimal. When this happens, it can be difficult for users to get the desired degree of control over what gets fused.
It would be nice to have some way of specifying rules that make Inductor fusion explicit, as a sort of "performance escape hatch", instead of relying entirely on heuristics/autotuning, since fusions can sometimes cause unexpected regressions.
Maybe some way of specifying rules along the lines of "Don't fuse [insert op here] with [insert other op/fused ops]" or "Ensure that operators X and Y are fused together if possible".
### Alternatives
One approach to preventing an unwanted fusion is to use a custom-op like this:
```python
x = pytorch_op(...)
x = no_op_custom_op(x)
...
```
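A more concrete version of the no-op custom op above might look like the following (a minimal sketch using `torch.library.custom_op` on recent PyTorch; the namespace, op name, and clone-based body are illustrative):
```python
import torch

# Opaque to Inductor, so fusions cannot be formed across this boundary.
@torch.library.custom_op("mylib::fusion_barrier", mutates_args=())
def fusion_barrier(x: torch.Tensor) -> torch.Tensor:
    return x.clone()

@fusion_barrier.register_fake
def _(x):
    return torch.empty_like(x)
```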
Another approach is to insert a graph break (definitely suboptimal). Lastly, users sometimes switch to a user-defined Triton kernel that explicitly fuses the ops together, but this can be annoying.
### Additional context
_No response_
cc @chauhang @penguinwu @voznesenskym @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @chenyang78 @kadeng @muchulee8 @amjames @aakhundov
| true
|
2,934,218,819
|
[codemod][lowrisk] Remove unused exception parameter from caffe2/aten/src/ATen/native/TensorAdvancedIndexingUtils.h
|
r-barnes
|
open
|
[
"fb-exported",
"ciflow/trunk",
"topic: not user facing"
] | 2
|
CONTRIBUTOR
|
Summary:
`-Wunused-exception-parameter` has identified an unused exception parameter. This diff removes it.
This:
```
try {
...
} catch (exception& e) {
// no use of e
}
```
should instead be written as
```
} catch (exception&) {
```
If the code compiles, this is safe to land.
Test Plan: Sandcastle
Reviewed By: meyering
Differential Revision: D71503154
| true
|
2,934,160,421
|
[WIP] Generalize AllocatorConfig to be device-agnostic
|
guangyey
|
open
|
[
"open source",
"release notes: cpp",
"topic: improvements"
] | 1
|
COLLABORATOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #151298
* #138222
* #150312
* __->__ #149601
# Motivation
This PR aims to generalize `AllocatorConfig` to be device-agnostic. Initially, I considered two approaches:
1. Per-device `AllocatorConfig`, where each device inherits from a base class and registers its own configuration.
2. A single `AllocatorConfig` for all devices, assuming that environment variables apply universally.
Option 1 offers greater flexibility, allowing each device to define its own configuration. However, Option 2 is simpler and promotes consistency by avoiding bias toward a specific device type, such as `CUDA`. For example, we can use a generic environment variable like `PYTORCH_ALLOC_CONF` instead of `PYTORCH_CUDA_ALLOC_CONF`.
After evaluating the trade-offs, I prefer Option 2, as it keeps the design straightforward while providing sufficient flexibility at this stage. And we could extend it to `AllocatorConfig& getAllocatorConfig(c10::DeviceType device_type=c10::kCUDA)` for per-device `AllocatorConfig` in the future if necessary.
| true
|
2,934,149,622
|
Fix ModularIndexing simplification
|
bobrenjc93
|
closed
|
[
"module: cpu",
"topic: not user facing",
"module: inductor",
"ciflow/inductor"
] | 1
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #149600
cc @jgong5 @mingfeima @XiaobingSuper @sanchitintel @ashokei @jingxu10 @voznesenskym @penguinwu @EikanWang @Guobing-Chen @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
| true
|
2,934,117,565
|
Newer conda versions require --update-deps to update dependencies such as libgcc-ng
|
jithunnair-amd
|
closed
|
[
"open source",
"Merged",
"ciflow/trunk",
"topic: not user facing"
] | 4
|
COLLABORATOR
|
* When we try to install [libstdcxx-ng 12.3.0 from conda-forge](https://github.com/pytorch/pytorch/blob/595293316d7e64e32d31716500beae58367409a2/.ci/docker/common/install_conda.sh#L65), conda 24.7.1 updates the dependencies of that package, including the libgcc-ng package, to the following: `libgcc-ng-14.2.0 | h69a702a_2 52 KB conda-forge`
* However, conda updated their installer script on Feb 6 2025 to version 25.1.1, which behaves differently from previous versions when installing conda packages.
* conda 25.1.1 does *not* update any dependencies in the above step, and hence the same installation of libgcc-ng from "defaults" channel is present: `libgcc-ng pkgs/main/linux-64::libgcc-ng-11.2.0-h1234567_1`
* Adding the "--update-deps" flag to the conda install command installs a newer libgcc-ng package from the "conda-forge" conda channel: `libgcc-ng-12.3.0 | h77fa898_13 762 KB conda-forge`, which is compatible with the libstdcxx-ng 12.3.0 package.
* Compare this [Feb 4 docker build](https://github.com/pytorch/pytorch/actions/runs/13148456164/job/36691412387#step:6:5179) to this [Feb 10 docker build](https://github.com/pytorch/pytorch/actions/runs/13247023578/job/36975931849#step:6:5451), which shows that the latter does *not* update libgcc-ng.
* This creates linking issues when trying to use a library that was built with a newer libgcc_s.so.1 (from the libgcc-ng package) in the PyTorch conda environment, e.g. ONNX-RT:
```
[0;93m2025-02-13 10:18:38.492434704 [W:onnxruntime:Default, migraphx_execution_provider.cc:167 get_flags_from_env]
[MIGraphX EP] MIGraphX ENV Override Variables Set:[m
[1;31m2025-02-13 10:18:38.628064251 [E:onnxruntime:Default, provider_bridge_ort.cc:2028 TryGetProviderInfo_ROCM] /onnxruntime/onnxruntime/core/session/provider_bridge_ort.cc:1636 onnxruntime::Provider& onnxruntime::ProviderLibrary::Get() [ONNXRuntimeError] : 1 : FAIL : Failed to load library libonnxruntime_providers_rocm.so with error: /opt/conda/envs/py_3.10/bin/../lib/libgcc_s.so.1: version `GCC_12.0.0' not found (required by /opt/conda/envs/py_3.10/lib/python3.10/site-packages/onnxruntime/capi/libonnxruntime_providers_rocm.so)
```
| true
|
2,934,040,425
|
[pt2_provenance_tracing] add combo kernel nodes post_grad nodes origin info
|
YUNQIUGUO
|
closed
|
[
"fb-exported",
"Merged",
"ciflow/trunk",
"topic: not user facing",
"module: inductor",
"ciflow/inductor"
] | 9
|
CONTRIBUTOR
|
Summary: Found this helpful when running a prod model with the combo_kernel feature enabled.
Test Plan: CI
Differential Revision: D71513304
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
| true
|
2,934,002,574
|
fix missing field initializer warning
|
ngimel
|
closed
|
[
"Merged",
"ciflow/trunk",
"release notes: cuda"
] | 3
|
COLLABORATOR
|
Per title
| true
|
2,933,917,813
|
[Inductor] Remove unnecessary initialization of self.available_buffer_names
|
FFFrog
|
closed
|
[
"open source",
"module: inductor",
"ciflow/inductor"
] | 2
|
COLLABORATOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #149596
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
| true
|
2,933,866,010
|
[Codemod][AddExplicitStrictExportForTrainingInferenceArg] caffe2/
|
gmagogsfm
|
closed
|
[
"oncall: distributed",
"fb-exported",
"Merged",
"ciflow/trunk",
"release notes: quantization",
"topic: not user facing",
"fx",
"module: inductor",
"ciflow/inductor"
] | 16
|
CONTRIBUTOR
|
internal diff: D71497480
cc @H-Huang @awgu @kwen2501 @wanchaol @fegin @fduwjj @wz337 @wconstab @d4l3k @c-p-i-o @ezyang @SherlockNoMad @EikanWang @jgong5 @wenzhe-nrv @voznesenskym @penguinwu @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @jiayisunx @ipiszy @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
| true
|
2,933,834,604
|
[PrivateUse1] Impl `isBuilt()` and `isAvailable()`
|
shink
|
closed
|
[
"triaged",
"open source",
"Merged",
"ciflow/trunk",
"topic: not user facing"
] | 11
|
CONTRIBUTOR
|
Follow-up: #146098
cc: @albanD @FFFrog
| true
|
2,933,825,455
|
[MPS/Inductor] Add support for modified_bessel_k0.
|
dcci
|
closed
|
[
"Merged",
"topic: not user facing",
"module: mps",
"ciflow/mps",
"module: inductor",
"ciflow/inductor"
] | 3
|
MEMBER
|
cc @kulinseth @albanD @malfet @DenisVieriu97 @jhavukainen @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
| true
|
2,933,808,053
|
wrap script in main() and fix string replacements
|
AshutoshDevpura
|
open
|
[
"triaged",
"open source",
"topic: not user facing"
] | 1
|
NONE
|
- Encapsulate CSV processing logic in a main() function.
- Update string.replace() calls to correctly assign the result back to the variable (see the sketch below).
- Use a clearer variable name for the commit hash.
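A minimal sketch of the `str.replace()` fix mentioned above (the variable names are illustrative, not the script's actual ones):
```python
raw_hash = '"abc123"\n'

# Bug: strings are immutable, so the return value is silently discarded.
raw_hash.replace('"', "")

# Fix: assign the result back to a variable.
commit_hash = raw_hash.replace('"', "").strip()
```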
| true
|
2,933,787,811
|
'false INTERNAL ASSERT FAILED' when calling torch.pinverse and using torch.float32 on Apple Silicon
|
PlayerCham
|
closed
|
[
"triaged",
"module: macos",
"module: linear algebra",
"module: arm"
] | 1
|
NONE
|
### 🐛 Describe the bug
When running torch.pinverse, the ‘false INTERNAL ASSERT FAILED’ error occurs only on an Apple Silicon device (mine is an M3 Pro) with torch.float32; it does not occur with torch.float64. On an x86_64 device (mine is an AMD Ryzen 9 5950X), the error does not occur with either torch.float32 or torch.float64.
When the error occurs, it happens very quickly after the call; otherwise the computation runs normally for a long time until it completes.
```python
import torch
def reproduce_bug():
m, n, k = 50000, 8193, 10
H_aug = torch.randn(n, m, dtype=torch.float32) #Change to torch.float64 here
T = torch.randn(n, k, dtype=torch.float32) #Change to torch.float64 here
W_aug = torch.pinverse(H_aug) @ T
print("W_aug shape:", W_aug.shape)
if __name__ == "__main__":
reproduce_bug()
```
```
(test) yuyao@cyys-MacBook-Pro-14-M3-Pro Downloads % python float32.py
** On entry to SGESDD, parameter number 12 had an illegal value
Traceback (most recent call last):
File "/Users/yuyao/Downloads/float32.py", line 11, in <module>
reproduce_bug()
File "/Users/yuyao/Downloads/float32.py", line 7, in reproduce_bug
W_aug = torch.pinverse(H_aug) @ T
^^^^^^^^^^^^^^^^^^^^^
RuntimeError: false INTERNAL ASSERT FAILED at "/Users/runner/work/pytorch/pytorch/pytorch/aten/src/ATen/native/BatchLinearAlgebra.cpp":1602, please report a bug to PyTorch. linalg.svd: Argument 12 has illegal value. Most certainly there is a bug in the implementation calling the backend library.
```
### Versions
```
(test) yuyao@cyys-MacBook-Pro-14-M3-Pro Downloads % python collect_env.py
Collecting environment information...
PyTorch version: 2.6.0
Is debug build: False
CUDA used to build PyTorch: None
ROCM used to build PyTorch: N/A
OS: macOS 15.3.2 (arm64)
GCC version: Could not collect
Clang version: 16.0.0 (clang-1600.0.26.6)
CMake version: Could not collect
Libc version: N/A
Python version: 3.12.9 | packaged by Anaconda, Inc. | (main, Feb 6 2025, 12:55:12) [Clang 14.0.6 ] (64-bit runtime)
Python platform: macOS-15.3.2-arm64-arm-64bit
Is CUDA available: False
CUDA runtime version: No CUDA
CUDA_MODULE_LOADING set to: N/A
GPU models and configuration: No CUDA
Nvidia driver version: No CUDA
cuDNN version: No CUDA
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Apple M3 Pro
Versions of relevant libraries:
[pip3] numpy==2.2.4
[pip3] torch==2.6.0
[pip3] torchvision==0.21.0
[conda] numpy 2.2.4 pypi_0 pypi
[conda] torch 2.6.0 pypi_0 pypi
[conda] torchvision 0.21.0 pypi_0 pypi
```
```
(test) C:\Users\yuyao\Downloads>python collect_env.py
Collecting environment information...
PyTorch version: 2.6.0+cpu
Is debug build: False
CUDA used to build PyTorch: Could not collect
ROCM used to build PyTorch: N/A
OS: Microsoft Windows 11 专业版 (10.0.26100 64 位)
GCC version: Could not collect
Clang version: Could not collect
CMake version: Could not collect
Libc version: N/A
Python version: 3.12.9 | packaged by Anaconda, Inc. | (main, Feb 6 2025, 18:49:16) [MSC v.1929 64 bit (AMD64)] (64-bit runtime)
Python platform: Windows-11-10.0.26100-SP0
Is CUDA available: False
CUDA runtime version: 12.1.66
CUDA_MODULE_LOADING set to: N/A
GPU models and configuration: GPU 0: NVIDIA GeForce RTX 3080 Ti
Nvidia driver version: 572.70
cuDNN version: C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v12.1\bin\cudnn_ops_train64_8.dll
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Name: AMD Ryzen 9 5950X 16-Core Processor
Manufacturer: AuthenticAMD
Family: 107
Architecture: 9
ProcessorType: 3
DeviceID: CPU0
CurrentClockSpeed: 3401
MaxClockSpeed: 3401
L2CacheSize: 8192
L2CacheSpeed: None
Revision: 8448
Versions of relevant libraries:
[pip3] numpy==2.2.4
[pip3] torch==2.6.0
[pip3] torchvision==0.21.0
[conda] numpy 2.2.4 pypi_0 pypi
[conda] torch 2.6.0 pypi_0 pypi
[conda] torchvision 0.21.0 pypi_0 pypi
```
cc @malfet @albanD @jianyuh @nikitaved @pearu @mruberry @walterddr @xwang233 @Lezcano @snadampal @milpuz01
| true
|
2,933,763,870
|
[export] min/max ranges for dim hints
|
pianpwk
|
closed
|
[
"fb-exported",
"Merged",
"ciflow/trunk",
"ciflow/inductor",
"release notes: export"
] | 9
|
CONTRIBUTOR
|
Differential Revision: D71522032
Adds min/max ranges to Dim.AUTO/DYNAMIC/STATIC, so users can do `Dim.AUTO(min=2, max=2048)`.
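A minimal usage sketch of the range form described above (the module, shapes, and constrained dimension are illustrative):
```python
import torch
from torch.export import Dim, export

class M(torch.nn.Module):
    def forward(self, x):
        return x * 2

# Constrain the automatically inferred batch dimension to lie in [2, 2048].
ep = export(M(), (torch.randn(8, 16),), dynamic_shapes=({0: Dim.AUTO(min=2, max=2048)},))
```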
| true
|
2,933,747,478
|
Cleanup ctx manager state management
|
zeshengzong
|
closed
|
[
"triaged",
"open source",
"ciflow/trunk",
"topic: not user facing",
"module: dynamo"
] | 7
|
CONTRIBUTOR
|
Fixes #149572
## Changes
- Move logic in `ContextManagerState` to `ContextWrappingVariable`
- Remove `ContextManagerState`
## Test Result
```bash
pytest test/dynamo/test_ctx_manager.py
```

cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @chauhang @amjames
| true
|
2,933,685,763
|
Fix spelling (#149277)
|
seemethere
|
closed
|
[
"release notes: releng"
] | 1
|
MEMBER
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #149588
Approved by: https://github.com/zou3519
Signed-off-by: Eli Uriegas <github@terriblecode.com>
| true
|
2,933,612,765
|
add some extra test oom skips for jetson due to lacking nvml support
|
Fuzzkatt
|
closed
|
[
"triaged",
"open source",
"Merged",
"ciflow/trunk",
"topic: not user facing"
] | 4
|
COLLABORATOR
|
Add a couple of Jetson skips for OOM tests in test/test_cuda.py due to failures in NVIDIA CI. Jetson not having full NVML support is a known issue, so this is mostly a test-side fix.
cc @eqy, @ptrblck, @nWEIdia
| true
|
2,933,561,177
|
UserWarning: Dynamo does not know how to trace the builtin `None.pybind11_object.__new__.`
|
cora-codes
|
open
|
[
"triaged",
"oncall: pt2",
"module: dynamo",
"module: higher order operators",
"module: compiled autograd",
"module: pt2-dispatcher",
"module: flex attention"
] | 11
|
NONE
|
### 🐛 Describe the bug
I'm filing an issue since this is a Python built-in (granted, the error message implies that it is not, since it references PyBind11), but I'm opening an issue anyway since it is caused by returning/using `None` in a compiled function.
### Versions
2.7.0a0+gitebd087e
cc @chauhang @penguinwu @voznesenskym @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @amjames @zou3519 @ydwu4 @xmfan @bdhirsh @Chillee @drisspg @yanboliang @BoyuanFeng
| true
|
2,933,539,249
|
[executorch hash update] update the pinned executorch hash
|
pytorchupdatebot
|
closed
|
[
"open source",
"Merged",
"ciflow/trunk",
"topic: not user facing",
"ciflow/inductor"
] | 9
|
COLLABORATOR
|
This PR is auto-generated nightly by [this action](https://github.com/pytorch/pytorch/blob/main/.github/workflows/nightly.yml).
Update the pinned executorch hash.
| true
|
2,933,535,466
|
Add triton as dependency to CUDA aarch64 build
|
atalman
|
closed
|
[
"Merged",
"ciflow/binaries",
"topic: not user facing"
] | 7
|
CONTRIBUTOR
|
The aarch64 Triton build was added by: https://github.com/pytorch/pytorch/pull/148705
Hence, add the proper constraint to the CUDA 12.8 aarch64 build.
Please note that we still want to use:
```platform_system == 'Linux' and platform_machine == 'x86_64'```
for all other builds, since these are prototype binaries only used by the CUDA 12.8 Linux aarch64 build, which we would like to serve from download.pytorch.org.
| true
|
2,933,525,914
|
Easydict support
|
xmfan
|
open
|
[
"triaged",
"oncall: pt2",
"module: dynamo"
] | 3
|
MEMBER
|
### 🐛 Describe the bug
the library looks to be one file: https://github.com/makinacorpus/easydict/blob/master/easydict/__init__.py
From UED TRELLIS:
```python
import torch
from easydict import EasyDict
@torch.compile(backend="eager", fullgraph=True)
def fn():
d = EasyDict()
d.asd = "asd"
d.kvj = 123
return d
fn()
```
Error:
```
Traceback (most recent call last):
File "/home/xmfan/core/a/pytorch/ed.py", line 11, in <module>
fn()
File "/home/xmfan/core/a/pytorch/torch/_dynamo/eval_frame.py", line 659, in _fn
raise e.with_traceback(None) from None
torch._dynamo.exc.Unsupported: Unsupported method call
Explanation: Dynamo does not know how to trace method `keys` of class `mappingproxy`
Hint: Avoid calling `mappingproxy.keys` in your code.
Hint: Please report an issue to PyTorch.
Developer debug context: call_method GetAttrVariable(UserDefinedClassVariable(<class 'easydict.EasyDict'>), __dict__) keys [] {}
from user code:
File "/home/xmfan/core/a/pytorch/ed.py", line 6, in fn
d = EasyDict()
File "/home/xmfan/core/a/pytorch/torch/_dynamo/polyfills/__init__.py", line 157, in instantiate_user_defined_class_object
obj.__init__(*args, **kwargs)
File "/home/xmfan/core/a/pytorch-env/lib/python3.12/site-packages/easydict/__init__.py", line 143, in __init__
for k in self.__class__.__dict__.keys():
Set TORCHDYNAMO_VERBOSE=1 for the internal stack trace (please do this especially if you're reporting a bug to PyTorch). For even more developer context, set TORCH_LOGS="+dynamo"
```
### Versions
main
cc @chauhang @penguinwu @voznesenskym @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @amjames
| true
|
2,933,525,587
|
DeepSeek-R1-Distill-Qwen-1.5B: torch.compile slower than AOTInductor
|
angelayi
|
open
|
[
"topic: performance",
"oncall: pt2",
"export-triaged",
"oncall: export",
"module: aotinductor"
] | 0
|
CONTRIBUTOR
|
### 🐛 Describe the bug
| | First iteration time (s) | Average time over 30 calls (s) |
| -- | -- | -- |
| Eager | 0.21249 | 0.021328 |
| torch.compile (tlparse) | 25.765 | 6.1492, fastest: 0.009088 |
| torch.compile with mark_dynamic (tlparse) | 63.959 | 4.6029 (one less recompilation), fastest: 0.008864 |
| AOTI compiled artifact w/ dynamic shapes | 0.033352 | 0.0034717 |
More info: https://docs.google.com/document/d/1XPtQ0XoPv-VxUkx-7H9G6i68w7jLPMeu6cdUQedUrug/edit?tab=t.0
```python
import time
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from torch.export import export, Dim
device = "cuda"
def test_model(model, tokenizer):
class Qwen2(torch.nn.Module):
def __init__(self) -> None:
super().__init__()
self.qwen = model
def forward(self, x):
result = self.qwen(x)
result.past_key_values = ()
return result
qwen2 = Qwen2().to(device)
prompt = "What are the benefits of using AI in healthcare?"
input_ids = tokenizer.encode(prompt, return_tensors="pt")
# Generate response from the model
input_ids = input_ids.to(device)
torch._dynamo.mark_dynamic(input_ids, 1)
torch._dynamo.reset()
torch._inductor.config.force_disable_caches = True
model = torch.compile(model, fullgraph=True)
# ep = export(qwen2, (torch.cat([input_ids, input_ids, input_ids]).to(device),), dynamic_shapes=({0: Dim.DYNAMIC, 1: Dim.DYNAMIC},))
# path = torch._inductor.aoti_compile_and_package(ep, package_path="deepseek_qwen2_aoti_dynamic.pt2")
# model = torch._inductor.aoti_load_package(path)
start = time.time()
output = model(input_ids)
end = time.time()
print(f"Initial time taken: {end - start}")
logits = output.logits
next_token_id = torch.argmax(logits[:, -1])
decoded_response = tokenizer.decode(next_token_id, skip_special_tokens=True)
print("Prompt:", prompt)
print("Response:", decoded_response)
def generate_response(model, input_ids, max_length=30):
times = 0
response = []
for i in range(max_length):
input_ids = input_ids.to(device)
start = time.time()
output = model(input_ids)
end = time.time()
print(f"Time on iteration {i}: {end - start}")
times += end - start
logits = output.logits
next_token_id = torch.argmax(logits[:, -1])
response.append(next_token_id.item())
input_ids = torch.cat([input_ids, next_token_id.unsqueeze(0).unsqueeze(0)], dim=-1)
print(f"Avg time per call: {times / max_length}")
return response
response_ids = generate_response(model, input_ids)
decoded_response = tokenizer.decode(response_ids, skip_special_tokens=True)
print("Prompt:", prompt)
print("Response:", decoded_response)
if __name__ == "__main__":
model_name = "deepseek-ai/DeepSeek-R1-Distill-Qwen-1.5B"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name).eval().to(device)
test_model(model, tokenizer)
```
### Versions
main
cc @chauhang @penguinwu @avikchaudhuri @gmagogsfm @zhxchen17 @tugsbayasgalan @suo @ydwu4 @desertfire @chenyang78 @yushangdi @benjaminglass1
| true
|
2,933,507,788
|
Fix clang-tidy errors
|
MatzeB
|
closed
|
[
"triaged",
"fb-exported",
"Merged",
"ciflow/trunk",
"topic: not user facing"
] | 13
|
CONTRIBUTOR
|
Summary: Cleanup clang-tidy complaints in `EmbeddingBag.cpp`: Avoid shadowed variables and unused parameters.
Test Plan: sandcastle
Differential Revision: D71512594
| true
|
2,933,477,926
|
Capture fx-graph-cache-key in PyTorch trace
|
shengfukevin
|
open
|
[
"fb-exported",
"topic: not user facing",
"module: inductor",
"ciflow/inductor"
] | 16
|
CONTRIBUTOR
|
Summary:
Since PT2-compiler-generated Triton kernels are generated on the fly, they are a black box for the workload performance simulator from the AI System Co-Design team, and it is impossible to get a performance projection for these kernels.
The first step toward getting insight into these Triton kernels is obtaining their source code. The FX graph remote cache contains the Triton kernel source code, so we want to collect the cache key in the PyTorch trace. These traces get processed in Durin; users can then retrieve the Triton source code from gpu_kernel_stats in Durin based on the cache key.
This diff passes the FX graph cache key in the Inductor metadata so that it is captured in the PyTorch trace.
Test Plan: buck2 run mode/opt caffe2/test/inductor:profiler -- caffe2.test.inductor.test_profiler.DynamoProfilerTests.test_pt2_triton_fx_graph_cache_key
Differential Revision: D71452637
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
| true
|
2,933,472,096
|
[torch/c10d] change class variable from private to protected (#149571)
|
GirasoleY
|
closed
|
[
"oncall: distributed",
"fb-exported",
"release notes: distributed (c10d)"
] | 3
|
CONTRIBUTOR
|
Summary:
Change class variable from private to protected in ProcessGroupNCCL
Test Plan: Existing UT Pass.
Reviewed By: kingchc, kwen2501
Differential Revision: D71373067
cc @H-Huang @awgu @kwen2501 @wanchaol @fegin @fduwjj @wz337 @wconstab @d4l3k @c-p-i-o
| true
|
2,933,471,889
|
test/test_cuda.py: rework TEST_PYNVML logic to make more sense, add not IS_JETSON condition
|
Fuzzkatt
|
closed
|
[
"open source",
"Merged",
"ciflow/trunk",
"topic: not user facing"
] | 4
|
COLLABORATOR
|
PYNVML-related tests in test/test_cuda.py are failing in NVIDIA internal CI for Jetson devices because Jetson devices don't fully support NVML (it exists as a stub library). In addition to skipping PYNVML tests for Jetson, this PR also reworks the TEST_PYNVML logic a bit to be more consistent with the rest of the TEST_{something} conditions in test/test_cuda.py.
cc @eqy @ptrblck @nWEIdia
| true
|
2,933,462,159
|
[CI] Fix log artifact not containing test logs?
|
clee2000
|
closed
|
[
"Merged",
"release notes: releng",
"topic: not user facing"
] | 5
|
CONTRIBUTOR
|
Sometimes I would find a log artifact that only has usage_logs.txt in it, even though there are other logs created by tests. I think this is somehow caused by output buffering with `find`. I don't understand how, but at the very least I can see that all the jobs on this PR have the logs from the test runs.
| true
|
2,933,456,550
|
Improve attr mismatch msg
|
tugsbayasgalan
|
closed
|
[
"Merged",
"ciflow/trunk",
"ciflow/inductor",
"release notes: AO frontend"
] | 7
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #149576
Differential Revision: [D71513041](https://our.internmc.facebook.com/intern/diff/D71513041)
| true
|
2,933,455,951
|
[Inductor] Fix combo_kernel logging error
|
YUNQIUGUO
|
closed
|
[
"fb-exported",
"Merged",
"ciflow/trunk",
"topic: not user facing",
"module: inductor",
"ciflow/inductor"
] | 4
|
CONTRIBUTOR
|
Summary:
Fix logging error like:
```
in combinable_nodes
log.debug(
Message: 'ComboKernels: %d template nodes are filtered'
Arguments: (OrderedSet([8]),)
--- Logging error ---
Traceback (most recent call last):
File "/usr/local/fbcode/platform010/lib/python3.10/logging/__init__.py", line 1100, in emit
msg = self.format(record)
File "/usr/local/fbcode/platform010/lib/python3.10/logging/__init__.py", line 943, in format
return fmt.format(record)
File "/data/users/guorachel/fbsource/buck-out/v2/gen/fbcode/854b9ed00d28c5c5/caffe2/torch/fb/model_transform/experimental/benchmark/__mts_gpu_benchmark__/mts_gpu_benchmark#link-tree/torch/_logging/_internal.py", line 818, in format
record.message = record.getMessage()
File "/usr/local/fbcode/platform010/lib/python3.10/logging/__init__.py", line 368, in getMessage
msg = msg % self.args
TypeError: %d format: a real number is required, not OrderedSet
```
encountered when running a prod model with the combo kernel feature enabled.
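For reference, the failure is the usual `%d`-with-a-non-number mismatch; a minimal sketch of the pattern (the variable names are illustrative, not the actual Inductor code):
```python
import logging

log = logging.getLogger(__name__)
filtered_nodes = {8}  # stand-in for the OrderedSet of filtered template nodes

# Passing the collection itself to a %d placeholder raises the TypeError above;
# logging its size (or switching to %s) avoids it.
log.debug("ComboKernels: %d template nodes are filtered", len(filtered_nodes))
```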
Test Plan: CI
Differential Revision: D71512220
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
| true
|
2,933,447,172
|
[ued][gemma3] HF + torch.compile - torch.compile on Gemma3
|
BoyuanFeng
|
open
|
[
"triaged",
"oncall: pt2",
"module: dynamo"
] | 0
|
CONTRIBUTOR
|
### 🐛 Describe the bug
I tried torch.compile on Gemma3 and found a few issues.
`torch.compile(fullgraph=True)` gives the error:
```
torch._dynamo.exc.Unsupported: Observed exception
Explanation: Dynamo found no exception handler at the top-level compiled function when encountering an exception. Exception will propagate outside the compiled region.
Hint: Dynamo has detected that tracing the code will result in an error when running in eager. Please double check that your code doesn't contain a similar error when actually running eager/uncompiled.
Hint: It may be possible to write Dynamo tracing rules for this code. Please report an issue to PyTorch if you encounter this graph break often and it is causing performance issues.
Developer debug context: raised exception ExceptionVariable(<class 'AttributeError'>)
from user code:
File "/data/users/boyuan/pytorch/torch/_dynamo/external_utils.py", line 70, in inner
return fn(*args, **kwargs)
Set TORCHDYNAMO_VERBOSE=1 for the internal stack trace (please do this especially if you're reporting a bug to PyTorch). For even more developer context, set TORCH_LOGS="+dynamo"
```
`torch.compile(fullgraph=False)` gives the error:
```
Traceback (most recent call last):
File "/home/boyuan/playground/gemma3_for_causal_lm.py", line 30, in <module>
generation = generate_fn(**inputs, max_new_tokens=100, do_sample=False)
File "/data/users/boyuan/pytorch/torch/_dynamo/eval_frame.py", line 655, in _fn
return fn(*args, **kwargs)
File "/data/users/boyuan/pytorch/torch/_dynamo/external_utils.py", line 70, in inner
return fn(*args, **kwargs)
File "/data/users/boyuan/pytorch/torch/utils/_contextlib.py", line 116, in decorate_context
return func(*args, **kwargs)
File "/home/boyuan/.conda/envs/pytorch-3.10/lib/python3.10/site-packages/transformers/generation/utils.py", line 2010, in generate
self._validate_model_kwargs(model_kwargs.copy())
File "/home/boyuan/.conda/envs/pytorch-3.10/lib/python3.10/site-packages/transformers/generation/utils.py", line 1420, in _validate_model_kwargs
raise ValueError(
ValueError: The following `model_kwargs` are not used by the model: ['max_new_tokens', 'do_sample'] (note: typos in the generate arguments will also show up in this list)
```
Removing `max_new_tokens=100, do_sample=False` makes torch.compile(fullgraph=False) work. But there are still many recompilations. Log: [P1760575842](https://www.internalfb.com/phabricator/paste/view/P1760575842)
Repro:
```python
import torch
from transformers import AutoProcessor, Gemma3ForConditionalGeneration
ckpt = "google/gemma-3-4b-it"
model = Gemma3ForConditionalGeneration.from_pretrained(
ckpt, device_map="auto", torch_dtype=torch.bfloat16,
)
processor = AutoProcessor.from_pretrained(ckpt)
messages = [
{
"role": "user",
"content": [
{"type": "image", "url": "https://huggingface.co/spaces/big-vision/paligemma-hf/resolve/main/examples/password.jpg"},
{"type": "text", "text": "What is the password?"}
]
}
]
inputs = processor.apply_chat_template(
messages, add_generation_prompt=True, tokenize=True,
return_dict=True, return_tensors="pt"
).to(model.device)
input_len = inputs["input_ids"].shape[-1]
# generate_fn = model.generate
generate_fn = torch.compile(model.generate, fullgraph=True)
generation = generate_fn(**inputs, max_new_tokens=100, do_sample=False)
generation = generation[0][input_len:]
decoded = processor.decode(generation, skip_special_tokens=True)
print(decoded)
```
More details: [doc](https://docs.google.com/document/d/1QIrkKedwnneNPTq5O7bpxvhatWxLTgESXObNRs62Q2M/edit?usp=sharing)
### Versions
PyTorch version: 2.8.0a0+git5b8cc47
Is debug build: False
CUDA used to build PyTorch: 12.0
ROCM used to build PyTorch: N/A
OS: CentOS Stream 9 (x86_64)
GCC version: (GCC) 11.5.0 20240719 (Red Hat 11.5.0-5)
Clang version: Could not collect
CMake version: version 3.26.4
Libc version: glibc-2.34
Python version: 3.10.15 (main, Oct 3 2024, 07:27:34) [GCC 11.2.0] (64-bit runtime)
Python platform: Linux-6.4.3-0_fbk15_hardened_2630_gf27365f948db-x86_64-with-glibc2.34
Is CUDA available: True
CUDA runtime version: 12.0.140
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration:
GPU 0: NVIDIA H100
GPU 1: NVIDIA H100
GPU 2: NVIDIA H100
GPU 3: NVIDIA H100
Nvidia driver version: 550.90.07
cuDNN version: Probably one of the following:
/usr/lib64/libcudnn.so.8.8.0
/usr/lib64/libcudnn_adv_infer.so.8.8.0
/usr/lib64/libcudnn_adv_train.so.8.8.0
/usr/lib64/libcudnn_cnn_infer.so.8.8.0
/usr/lib64/libcudnn_cnn_train.so.8.8.0
/usr/lib64/libcudnn_ops_infer.so.8.8.0
/usr/lib64/libcudnn_ops_train.so.8.8.0
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 52 bits physical, 57 bits virtual
Byte Order: Little Endian
CPU(s): 184
On-line CPU(s) list: 0-183
Vendor ID: AuthenticAMD
Model name: AMD EPYC 9654 96-Core Processor
CPU family: 25
Model: 17
Thread(s) per core: 1
Core(s) per socket: 184
Socket(s): 1
Stepping: 1
BogoMIPS: 4792.80
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ht syscall nx mmxext fxsr_opt pdpe1gb rdtscp lm rep_good nopl cpuid extd_apicid tsc_known_freq pni pclmulqdq ssse3 fma cx16 pcid sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand hypervisor lahf_lm cmp_legacy svm cr8_legacy abm sse4a misalignsse 3dnowprefetch osvw perfctr_core invpcid_single ssbd ibrs ibpb stibp vmmcall fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid avx512f avx512dq rdseed adx smap avx512ifma clflushopt clwb avx512cd sha_ni avx512bw avx512vl xsaveopt xsavec xgetbv1 xsaves avx512_bf16 clzero xsaveerptr wbnoinvd arat npt lbrv nrip_save tsc_scale vmcb_clean pausefilter pfthreshold v_vmsave_vmload vgif avx512vbmi umip pku ospke avx512_vbmi2 gfni vaes vpclmulqdq avx512_vnni avx512_bitalg avx512_vpopcntdq rdpid fsrm arch_capabilities
Virtualization: AMD-V
Hypervisor vendor: KVM
Virtualization type: full
L1d cache: 11.5 MiB (184 instances)
L1i cache: 11.5 MiB (184 instances)
L2 cache: 92 MiB (184 instances)
L3 cache: 2.9 GiB (184 instances)
NUMA node(s): 1
NUMA node0 CPU(s): 0-183
Vulnerability Gather data sampling: Not affected
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Not affected
Vulnerability Retbleed: Not affected
Vulnerability Spec store bypass: Vulnerable
Vulnerability Spectre v1: Vulnerable: __user pointer sanitization and usercopy barriers only; no swapgs barriers
Vulnerability Spectre v2: Vulnerable, IBPB: disabled, STIBP: disabled, PBRSB-eIBRS: Not affected
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Versions of relevant libraries:
[pip3] flake8==6.1.0
[pip3] flake8-bugbear==23.3.23
[pip3] flake8-comprehensions==3.15.0
[pip3] flake8-executable==2.1.3
[pip3] flake8-logging-format==0.9.0
[pip3] flake8-pyi==23.3.1
[pip3] flake8-simplify==0.19.3
[pip3] mypy==1.14.0
[pip3] mypy-extensions==1.0.0
[pip3] numpy==1.26.4
[pip3] onnx==1.16.1
[pip3] onnxscript==0.1.0.dev20240817
[pip3] optree==0.13.0
[pip3] pytorch-triton==3.3.0+git96316ce5
[pip3] torch==2.8.0a0+git5b8cc47
[conda] blas 1.0 mkl
[conda] magma-cuda121 2.6.1 1 pytorch
[conda] mkl 2023.1.0 h213fc3f_46344
[conda] mkl-include 2023.1.0 h06a4308_46344
[conda] mkl-service 2.4.0 py310h5eee18b_1
[conda] mkl_fft 1.3.10 py310h5eee18b_0
[conda] mkl_random 1.2.7 py310h1128e8f_0
[conda] numpy 1.26.4 py310h5f9d8c6_0
[conda] numpy-base 1.26.4 py310hb5e798b_0
[conda] optree 0.13.0 pypi_0 pypi
[conda] pytorch-triton 3.3.0+git96316ce5 pypi_0 pypi
[conda] torch 2.8.0a0+git5b8cc47 dev_0 <develop>
[conda] torchfix 0.4.0 pypi_0 pypi
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @chauhang @amjames
| true
|
2,933,422,171
|
Update gen_data.py
|
AshutoshDevpura
|
open
|
[
"triaged",
"open source",
"topic: not user facing"
] | 5
|
NONE
|
Minor refactor: update file I/O, use main() function, and improve CSV writing
- Moved the code inside the `if True:` block into a dedicated `main()` function, and added the `if __name__ == "__main__": main()` guard.
- Updated file I/O to use pathlib and context managers for safer resource handling.
- Replaced manual CSV string concatenation with `csv.writer` for proper CSV formatting (see the sketch below).
- Retained original functionality while enhancing readability and maintainability.
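A minimal sketch of the `csv.writer` pattern mentioned above (the file name and columns are illustrative, not the script's actual schema):
```python
import csv
from pathlib import Path

rows = [("model", "speedup"), ("resnet50", 1.23)]

# csv.writer handles quoting and delimiters, unlike manual string concatenation.
with Path("results.csv").open("w", newline="") as f:
    csv.writer(f).writerows(rows)
```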
`@pytorchbot label "topic: not user facing"`
Signed-off-by: Ashutosh Devpura <ashutoshdevpura@outlook.com>
| true
|
2,933,392,647
|
Cleanup ctx manager state management
|
mlazos
|
closed
|
[
"good first issue",
"triaged",
"actionable",
"oncall: pt2",
"module: dynamo"
] | 1
|
CONTRIBUTOR
|
### 🐛 Describe the bug
Today in https://github.com/pytorch/pytorch/blob/bc86b6c55a4f7e07548a92fe7c9b52ad2c88af35/torch/_dynamo/variables/ctx_manager.py#L58
We keep an indirect state object to work around the previous immutability requirement of VariableTrackers. Since they can now be mutated, we can store the cleanup logic directly on the ctx manager objects.
### Error logs
_No response_
### Versions
N/A
cc @chauhang @penguinwu @voznesenskym @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @amjames
| true
|
2,933,383,696
|
[torch/c10d] change class variable from private to protected
|
GirasoleY
|
closed
|
[
"oncall: distributed",
"fb-exported",
"release notes: distributed (c10d)"
] | 4
|
CONTRIBUTOR
|
Summary: Change class variable from private to protected in ProcessGroupNCCL
Test Plan: Existing UT Pass.
Reviewed By: kingchc, kwen2501
Differential Revision: D71373067
cc @H-Huang @awgu @kwen2501 @wanchaol @fegin @fduwjj @wz337 @wconstab @d4l3k @c-p-i-o
| true
|
2,933,362,497
|
[ued][kokoro] torch.compile fails in kokoro (both fullgraph=True and False)
|
yushangdi
|
open
|
[
"triaged",
"oncall: pt2",
"module: dynamic shapes",
"empathy-day"
] | 1
|
CONTRIBUTOR
|
### 🐛 Describe the bug
Repro:
```
conda create -y -n user-empathy python=3.11
conda activate user-empathy
pip install -q kokoro>=0.9.2 soundfile
pip3 install --pre torch torchvision torchaudio --index-url https://download.pytorch.org/whl/nightly/cu126
```
```
text = '''
PyTorch is an open-source machine learning library developed by Facebook's AI Research Lab (FAIR), providing a dynamic computation graph, autograd system, and modular architecture that allows for more flexibility and ease of use compared to other popular deep learning frameworks like TensorFlow. It features a Pythonic API, native support for NVIDIA GPUs, and is widely used in computer vision tasks such as image classification, object detection, segmentation, and generation, as well as natural language processing (NLP) tasks like language modeling, text classification, sentiment analysis, and machine translation. PyTorch's advantages include ease of use, flexibility, fast prototyping, and a large community, making it an ideal choice for researchers and developers working on a wide range of applications, from speech recognition and reinforcement learning to robotics and autonomous systems. With its extensive documentation, tutorials, and pre-built models, PyTorch is an excellent choice for anyone looking to get started with deep learning or take their existing projects to the next level, and can be easily integrated into various workflows, including research, development, and production environments.
'''
from kokoro import KPipeline
from kokoro import KModel
import soundfile as sf
import torch
import time
torch._dynamo.config.capture_scalar_outputs = True
device = "cuda"
model = KModel().to(device).eval()
pipeline = KPipeline(lang_code='a', model=model, device=device)
pack = pipeline.load_voice('af_heart')
# eager mode
@torch.compile(fullgraph=False) # or fullgraph=True
def forward_gpu(ps, ref_s):
return model(ps, ref_s, 1)
def run():
times = []
for _ in range(10):
audios = []
generator = pipeline(text, voice='af_heart')
start = time.time()
for (_, ps, _) in generator:
ref_s = pack[len(ps)-1]
audio = forward_gpu(ps, ref_s)
audios.append(audio)
end = time.time()
times.append(end-start)
print(times)
print(sum(times[2:])/len(times[2:]))
# for i, audio in enumerate(audios):
# # print(i, gs, ps)
# sf.write(f'{i}.wav', audio, 24000)
# print("done")
run()
```
Error msg for `fullgraph=True`:
```
WARNING: Defaulting repo_id to hexgrad/Kokoro-82M. Pass repo_id='hexgrad/Kokoro-82M' to suppress this warning.
/home/shangdiy/.conda/envs/user-empathy/lib/python3.11/site-packages/torch/nn/modules/rnn.py:123: UserWarning: dropout option adds dropout after all but last recurrent layer, so non-zero dropout expects num_layers greater than 1, but got dropout=0.2 and num_layers=1
warnings.warn(
/home/shangdiy/.conda/envs/user-empathy/lib/python3.11/site-packages/torch/nn/utils/weight_norm.py:143: FutureWarning: `torch.nn.utils.weight_norm` is deprecated in favor of `torch.nn.utils.parametrizations.weight_norm`.
WeightNorm.apply(module, name, dim)
WARNING: Defaulting repo_id to hexgrad/Kokoro-82M. Pass repo_id='hexgrad/Kokoro-82M' to suppress this warning.
W0319 13:42:00.363000 419191 .conda/envs/user-empathy/lib/python3.11/site-packages/torch/fx/experimental/symbolic_shapes.py:6679] [0/0] failed during evaluate_expr(Ne(Mod(310*Max(1, u0), 8), 0), hint=None, size_oblivious=False, forcing_spec=False
E0319 13:42:00.364000 419191 .conda/envs/user-empathy/lib/python3.11/site-packages/torch/fx/experimental/recording.py:299] [0/0] failed while running evaluate_expr(*(Ne(Mod(310*Max(1, u0), 8), 0), None, False, False), **{})
Traceback (most recent call last):
File "/home/shangdiy/test.py", line 46, in <module>
run_eager()
File "/home/shangdiy/test.py", line 34, in run_eager
audio = forward_gpu(ps, ref_s)
^^^^^^^^^^^^^^^^^^^^^^
File "/home/shangdiy/.conda/envs/user-empathy/lib/python3.11/site-packages/torch/_dynamo/eval_frame.py", line 659, in _fn
raise e.with_traceback(None) from None
torch._dynamo.exc.UserError: Could not guard on data-dependent expression Ne(Mod(310*Max(1, u0), 8), 0) (unhinted: Ne(Mod(310*Max(1, u0), 8), 0)). (Size-like symbols: u0)
Caused by: attention_output = torch.nn.functional.scaled_dot_product_attention( # transformers/models/albert/modeling_albert.py:404 in forward (_dynamo/utils.py:3285 in run_node)
For more information, run with TORCH_LOGS="dynamic"
For extended logs when we create symbols, also add TORCHDYNAMO_EXTENDED_DEBUG_CREATE_SYMBOL="u0"
If you suspect the guard was triggered from C++, add TORCHDYNAMO_EXTENDED_DEBUG_CPP=1
For more debugging help, see https://docs.google.com/document/d/1HSuTTVvYH1pTew89Rtpeu84Ht3nQEFTYhAX3Ypa_xJs/edit?usp=sharing
User Stack (most recent call last):
(snipped, see stack below for prefix)
File "/home/shangdiy/test.py", line 23, in forward_gpu
return model(ps, ref_s, 1)
File "/home/shangdiy/.conda/envs/user-empathy/lib/python3.11/site-packages/kokoro/model.py", line 133, in forward
audio, pred_dur = self.forward_with_tokens(input_ids, ref_s, speed)
File "/home/shangdiy/.conda/envs/user-empathy/lib/python3.11/site-packages/torch/utils/_contextlib.py", line 116, in decorate_context
return func(*args, **kwargs)
File "/home/shangdiy/.conda/envs/user-empathy/lib/python3.11/site-packages/kokoro/model.py", line 102, in forward_with_tokens
bert_dur = self.bert(input_ids, attention_mask=(~text_mask).int())
File "/home/shangdiy/.conda/envs/user-empathy/lib/python3.11/site-packages/kokoro/modules.py", line 182, in forward
outputs = super().forward(*args, **kwargs)
File "/home/shangdiy/.conda/envs/user-empathy/lib/python3.11/site-packages/transformers/models/albert/modeling_albert.py", line 804, in forward
encoder_outputs = self.encoder(
File "/home/shangdiy/.conda/envs/user-empathy/lib/python3.11/site-packages/transformers/models/albert/modeling_albert.py", line 535, in forward
layer_group_output = self.albert_layer_groups[group_idx](
File "/home/shangdiy/.conda/envs/user-empathy/lib/python3.11/site-packages/transformers/models/albert/modeling_albert.py", line 487, in forward
layer_output = albert_layer(hidden_states, attention_mask, head_mask[layer_index], output_attentions)
File "/home/shangdiy/.conda/envs/user-empathy/lib/python3.11/site-packages/transformers/models/albert/modeling_albert.py", line 450, in forward
attention_output = self.attention(hidden_states, attention_mask, head_mask, output_attentions)
File "/home/shangdiy/.conda/envs/user-empathy/lib/python3.11/site-packages/transformers/models/albert/modeling_albert.py", line 404, in forward
attention_output = torch.nn.functional.scaled_dot_product_attention(
For C++ stack trace, run with TORCHDYNAMO_EXTENDED_DEBUG_CPP=1
For more information about this error, see: https://pytorch.org/docs/main/generated/exportdb/index.html#constrain-as-size-example
from user code:
File "/home/shangdiy/test.py", line 23, in forward_gpu
return model(ps, ref_s, 1)
File "/home/shangdiy/.conda/envs/user-empathy/lib/python3.11/site-packages/kokoro/model.py", line 133, in forward
audio, pred_dur = self.forward_with_tokens(input_ids, ref_s, speed)
File "/home/shangdiy/.conda/envs/user-empathy/lib/python3.11/site-packages/torch/utils/_contextlib.py", line 116, in decorate_context
return func(*args, **kwargs)
File "/home/shangdiy/.conda/envs/user-empathy/lib/python3.11/site-packages/kokoro/model.py", line 102, in forward_with_tokens
bert_dur = self.bert(input_ids, attention_mask=(~text_mask).int())
File "/home/shangdiy/.conda/envs/user-empathy/lib/python3.11/site-packages/kokoro/modules.py", line 182, in forward
outputs = super().forward(*args, **kwargs)
File "/home/shangdiy/.conda/envs/user-empathy/lib/python3.11/site-packages/transformers/models/albert/modeling_albert.py", line 804, in forward
encoder_outputs = self.encoder(
File "/home/shangdiy/.conda/envs/user-empathy/lib/python3.11/site-packages/transformers/models/albert/modeling_albert.py", line 535, in forward
layer_group_output = self.albert_layer_groups[group_idx](
File "/home/shangdiy/.conda/envs/user-empathy/lib/python3.11/site-packages/transformers/models/albert/modeling_albert.py", line 487, in forward
layer_output = albert_layer(hidden_states, attention_mask, head_mask[layer_index], output_attentions)
File "/home/shangdiy/.conda/envs/user-empathy/lib/python3.11/site-packages/transformers/models/albert/modeling_albert.py", line 450, in forward
attention_output = self.attention(hidden_states, attention_mask, head_mask, output_attentions)
File "/home/shangdiy/.conda/envs/user-empathy/lib/python3.11/site-packages/transformers/models/albert/modeling_albert.py", line 404, in forward
attention_output = torch.nn.functional.scaled_dot_product_attention(
Set TORCHDYNAMO_VERBOSE=1 for the internal stack trace (please do this especially if you're reporting a bug to PyTorch). For even more developer context, set TORCH_LOGS="+dynamo"
```
Error msg for `fullgraph=False`:
```
Traceback (most recent call last):
File "/home/shangdiy/test.py", line 46, in <module>
run()
File "/home/shangdiy/test.py", line 34, in run
audio = forward_gpu(ps, ref_s)
^^^^^^^^^^^^^^^^^^^^^^
File "/home/shangdiy/.conda/envs/user-empathy/lib/python3.11/site-packages/torch/_dynamo/eval_frame.py", line 655, in _fn
return fn(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^
File "/home/shangdiy/test.py", line 23, in forward_gpu
return model(ps, ref_s, 1)
^^^^^^^^^^^^^^^^^^^
File "/home/shangdiy/.conda/envs/user-empathy/lib/python3.11/site-packages/torch/nn/modules/module.py", line 1751, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/shangdiy/.conda/envs/user-empathy/lib/python3.11/site-packages/torch/nn/modules/module.py", line 1762, in _call_impl
return forward_call(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/shangdiy/.conda/envs/user-empathy/lib/python3.11/site-packages/kokoro/model.py", line 133, in forward
audio, pred_dur = self.forward_with_tokens(input_ids, ref_s, speed)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/shangdiy/.conda/envs/user-empathy/lib/python3.11/site-packages/torch/utils/_contextlib.py", line 116, in decorate_context
return func(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^
File "/home/shangdiy/.conda/envs/user-empathy/lib/python3.11/site-packages/kokoro/model.py", line 86, in forward_with_tokens
@torch.no_grad()
File "/home/shangdiy/.conda/envs/user-empathy/lib/python3.11/site-packages/torch/_dynamo/eval_frame.py", line 838, in _fn
return fn(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^
File "/home/shangdiy/.conda/envs/user-empathy/lib/python3.11/site-packages/torch/_functorch/aot_autograd.py", line 1201, in forward
return compiled_fn(full_args)
^^^^^^^^^^^^^^^^^^^^^^
File "/home/shangdiy/.conda/envs/user-empathy/lib/python3.11/site-packages/torch/_functorch/_aot_autograd/runtime_wrappers.py", line 328, in runtime_wrapper
all_outs = call_func_at_runtime_with_args(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/shangdiy/.conda/envs/user-empathy/lib/python3.11/site-packages/torch/_functorch/_aot_autograd/utils.py", line 126, in call_func_at_runtime_with_args
out = normalize_as_list(f(args))
^^^^^^^
File "/home/shangdiy/.conda/envs/user-empathy/lib/python3.11/site-packages/torch/_functorch/_aot_autograd/runtime_wrappers.py", line 495, in wrapper
return compiled_fn(runtime_args)
^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/shangdiy/.conda/envs/user-empathy/lib/python3.11/site-packages/torch/_inductor/output_code.py", line 553, in __call__
return self.current_callable(inputs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/tmp/torchinductor_shangdiy/ee/cee3vxf5cyozzwpjjizc3knv674q2zfikzhti67aecnxiwn3dlpy.py", line 206, in call
triton_poi_fused__to_copy_add_gt_2.run(buf4, buf0, buf5, u0, stream=stream0)
File "/home/shangdiy/.conda/envs/user-empathy/lib/python3.11/site-packages/torch/_inductor/runtime/triton_heuristics.py", line 921, in run
self.autotune_to_one_config(*args, **kwargs)
File "/home/shangdiy/.conda/envs/user-empathy/lib/python3.11/site-packages/torch/_inductor/runtime/triton_heuristics.py", line 775, in autotune_to_one_config
timings = self.benchmark_all_configs(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/shangdiy/.conda/envs/user-empathy/lib/python3.11/site-packages/torch/_inductor/runtime/triton_heuristics.py", line 749, in benchmark_all_configs
timings = {
^
File "/home/shangdiy/.conda/envs/user-empathy/lib/python3.11/site-packages/torch/_inductor/runtime/triton_heuristics.py", line 750, in <dictcomp>
launcher: self.bench(launcher, *args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/shangdiy/.conda/envs/user-empathy/lib/python3.11/site-packages/torch/_inductor/runtime/triton_heuristics.py", line 627, in bench
return benchmarker.benchmark_gpu(kernel_call, rep=40)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/shangdiy/.conda/envs/user-empathy/lib/python3.11/site-packages/torch/_inductor/runtime/benchmarking.py", line 39, in wrapper
return fn(self, *args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/shangdiy/.conda/envs/user-empathy/lib/python3.11/site-packages/torch/_inductor/runtime/benchmarking.py", line 243, in benchmark_gpu
_callable()
File "/home/shangdiy/.conda/envs/user-empathy/lib/python3.11/site-packages/torch/_inductor/runtime/triton_heuristics.py", line 612, in kernel_call
launcher(
File "<string>", line 5, in launcher
File "/home/shangdiy/.conda/envs/user-empathy/lib/python3.11/site-packages/triton/backends/nvidia/driver.py", line 529, in __call__
self.launch(gridX, gridY, gridZ, stream, function, self.launch_cooperative_grid, global_scratch, *args)
ValueError: Pointer argument (at 0) cannot be accessed from Triton (cpu tensor?)
```
cc @chauhang @penguinwu @ezyang @bobrenjc93 @anijain2305
### Versions
pytorch nightly
| true
|
2,933,361,819
|
Please Ignore - Created for import only.
|
c00w
|
closed
|
[
"ciflow/trunk",
"module: inductor",
"ciflow/inductor"
] | 3
|
CONTRIBUTOR
|
Fixes #ISSUE_NUMBER
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
| true
|
2,933,354,192
|
Remove unused import
|
c00w
|
closed
|
[
"module: inductor",
"ciflow/inductor"
] | 1
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #149568
* #149567
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
| true
|
2,933,354,039
|
IGNORE - Introduce new template heuristic for triton autotune configs
|
c00w
|
closed
|
[
"module: inductor",
"ciflow/inductor"
] | 2
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #149568
* __->__ #149567
| true
|
2,933,347,982
|
User-torch._dynamo.disable annotations may be misleading
|
zou3519
|
open
|
[
"triaged",
"oncall: pt2",
"module: dynamo"
] | 0
|
CONTRIBUTOR
|

A dynamo developer didn't decide to add this `torch._dynamo.disable`; I (as the user) did. There should be some logic to distinguish user-added disables from ones added inside PyTorch itself.
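A minimal sketch (hypothetical function names) of the situation: the `disable` below is applied by the user, so a message blaming a PyTorch developer would be misleading.
```python
import torch

@torch._dynamo.disable
def log_stats(x):
    # User-added disable: intentionally run this helper outside the compiled graph.
    return float(x.sum())

@torch.compile
def f(x):
    log_stats(x)  # the resulting graph break comes from the user's own annotation
    return x + 1

f(torch.randn(4))
```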
cc @chauhang @penguinwu @voznesenskym @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @amjames
| true
|
2,933,346,876
|
Should (eventually) be recommending nonstrict_trace instead of allow_in_graph in error messages:
|
zou3519
|
open
|
[
"triaged",
"oncall: pt2",
"module: dynamo"
] | 1
|
CONTRIBUTOR
|

cc @chauhang @penguinwu @voznesenskym @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @amjames
| true
|
2,933,335,766
|
fix dynamic float when dynamic=True
|
bobrenjc93
|
closed
|
[
"Merged",
"ciflow/trunk",
"topic: not user facing",
"module: inductor",
"module: dynamo",
"ciflow/inductor"
] | 13
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #149564
Fixes https://github.com/pytorch/pytorch/issues/149406#issuecomment-2738111733. Previously we would only make floats dynamic via automatic dynamic; now, if you set dynamic=True, floats are made dynamic on the first compile.
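A minimal sketch (hypothetical function, assuming the behavior described above) of what the change means in practice:
```python
import torch

@torch.compile(dynamic=True)
def scale(x, alpha: float):
    return x * alpha

x = torch.randn(4)
scale(x, 0.5)
# With this change the float argument is already dynamic after the first compile,
# so a different value should not trigger a float-specialization recompile.
scale(x, 0.25)
```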
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
| true
|
2,933,304,776
|
[MPS] Add `modified_bessel_k0` support to eager.
|
dcci
|
closed
|
[
"Merged",
"topic: improvements",
"module: mps",
"release notes: mps",
"ciflow/mps"
] | 4
|
MEMBER
|
cc @kulinseth @albanD @malfet @DenisVieriu97 @jhavukainen
| true
|
2,933,299,990
|
avoid graph breaks on torch._C._nn._parse_to
|
ydwu4
|
open
|
[
"module: dynamo",
"ciflow/inductor"
] | 2
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #149562
* #149561
* #149560
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @chauhang @amjames
| true
|
2,933,292,243
|
Avoid graph break on torch.__future__.get_swap_module_params_on_conversion
|
ydwu4
|
open
|
[
"module: dynamo",
"ciflow/inductor"
] | 2
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #149562
* __->__ #149561
* #149560
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @chauhang @amjames
| true
|
2,933,292,145
|
specialize SymNodeVariable when used as list index
|
ydwu4
|
open
|
[
"module: dynamo",
"ciflow/inductor"
] | 2
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #149562
* #149561
* __->__ #149560
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @chauhang @amjames
| true
|
2,933,289,406
|
Improve handling for custom ops with no returns
|
bdhirsh
|
open
|
[
"triaged",
"module: custom-operators",
"oncall: pt2",
"module: pt2-dispatcher"
] | 4
|
CONTRIBUTOR
|
during user empathy day I tried turning this function into a custom op, from torchdiffeq: https://github.com/rtqichen/torchdiffeq/blob/master/torchdiffeq/_impl/misc.py#L376
The experience was actually pretty smooth - add a one-liner API, and add some type annotations:
```
@torch.library.custom_op("torchdiffeq::check_timelike", mutates_args=())
def _check_timelike(name: str, timelike: torch.Tensor, can_grad: bool) -> None:
```
two things I'm not sure about though are:
(1) this was actually not quite enough: compile still yelled at me to write a FakeTensor rule (which is just no-op boilerplate; we should be able to infer it from the `None` return type)
(2) I'm not sure if we should do anything more automatic re: effect tokens here. In theory, you could imagine the compiler reordering the custom op (which asserts some data-dependent properties of the input) so it runs after that input is actually used in downstream compute, which would be wrong. Effect tokens aren't quite enough, but I'm not sure how risky the status quo is (will we reorder this custom op in practice?)
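For (1), a minimal sketch (using a hypothetical `mylib` namespace, not torchdiffeq's actual code) of the no-op FakeTensor boilerplate that compile currently asks for:
```python
import torch

@torch.library.custom_op("mylib::check_timelike", mutates_args=())
def check_timelike(name: str, timelike: torch.Tensor, can_grad: bool) -> None:
    # Data-dependent assertion that should run eagerly at runtime.
    assert timelike.dim() == 1, f"{name} must be 1-dimensional"

@check_timelike.register_fake
def _(name, timelike, can_grad) -> None:
    # Pure boilerplate: the op returns None, so there is nothing to fake.
    return None

check_timelike("t", torch.arange(3.0), True)
```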
cc @chauhang @penguinwu @zou3519
| true
|
2,933,279,269
|
[test] trying to find a flaky test
|
clee2000
|
closed
|
[
"topic: not user facing"
] | 1
|
CONTRIBUTOR
|
Fixes #ISSUE_NUMBER
| true
|
2,933,271,857
|
[BE] Eliminate TODO for 2022
|
malfet
|
closed
|
[
"Merged",
"ciflow/trunk",
"release notes: cpp",
"topic: bc breaking"
] | 3
|
CONTRIBUTOR
|
Need to think a bit more about what types.h includes
Fixes #ISSUE_NUMBER
| true
|
2,933,259,446
|
[ued][f5-tts][dynamo] don't graph break on `torch.jit.isinstance`
|
bdhirsh
|
open
|
[
"triaged",
"oncall: pt2",
"module: dynamo"
] | 1
|
CONTRIBUTOR
|
This is a flavor of `isinstance()` that is meant to make TorchScript happy. From user empathy day, it looks like `torchaudio` uses this API pretty heavily. We should probably just handle it in dynamo (by mapping it to the builtin `isinstance`). Example: https://github.com/pytorch/audio/blob/main/src/torchaudio/functional/functional.py#L233
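A minimal sketch (hypothetical function, not torchaudio's actual code) of the pattern: in eager mode `torch.jit.isinstance` behaves like the builtin `isinstance` plus container element-type checks, which is why mapping it in dynamo seems reasonable.
```python
from typing import List

import torch

def pad_lengths(lengths):
    # In eager mode this is just a type check; under torch.compile it currently graph-breaks.
    if torch.jit.isinstance(lengths, List[int]):
        return [length + 1 for length in lengths]
    return lengths

print(pad_lengths([1, 2, 3]))  # [2, 3, 4]
```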
cc @chauhang @penguinwu @voznesenskym @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @amjames
| true
|
2,933,180,802
|
Fix NVTX functions compatibility with torch._dynamo
|
zsnoob
|
closed
|
[
"triaged",
"open source",
"release notes: cuda",
"module: dynamo",
"release notes: dynamo"
] | 4
|
NONE
|
## Problem Solved
This PR resolves the incompatibility between NVTX functions and torch._dynamo. When attempting to use NVTX profiling tools within code compiled with torch.compile(), the following error occurs:
```
torch._dynamo.exc.Unsupported: torch.* op returned non-Tensor int call_function <function range_push at 0x...>
```
The root cause is that torch._dynamo requires all function calls within a compiled graph to return tensor types, but NVTX functions return integers, objects, or None.
## Changes
- Added a global toggle system to enable/disable tensor returns for NVTX functions
- Implemented a decorator to handle type conversion automatically
- Enhanced all NVTX functions to support tensor return mode
- Added clear documentation and type annotations
- Maintained backward compatibility with existing code
## Impact on Existing Functionality
This change has **zero impact** on existing functionality when used normally. The default behavior remains unchanged, and all functions continue to return their original types.
Only when explicitly enabled via `torch.cuda.nvtx.enable_tensor_returns()` will the functions return tensor types instead. This opt-in approach ensures no disruption to existing code.
## Testing
- Added comprehensive unit tests that verify:
- Default behavior is preserved
- Tensor return mode correctly converts all return types to tensors
- Switching between modes works as expected
## Usage Example
```python
# Enable tensor returns for dynamo compatibility
torch.cuda.nvtx.enable_tensor_returns()
# Use NVTX functions in dynamo-compiled code
# All functions now return tensors
# with torch.compile context
with torch.cuda.nvtx.range("my_range"):
pass
# Disable tensor returns to restore original behavior
torch.cuda.nvtx.disable_tensor_returns()
```
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @chauhang @amjames
| true
|
2,933,113,911
|
Unify nccl versions for x86 and aarch64 builds
|
atalman
|
closed
|
[
"module: binaries",
"module: ci",
"triaged",
"topic: binaries"
] | 1
|
CONTRIBUTOR
|
### 🐛 Describe the bug
Please see:
https://github.com/pytorch/pytorch/pull/149540
After landing of: https://github.com/pytorch/pytorch/pull/149351
CUDA aarch64 builds were broken, since nccl was defined in ``.ci/docker/common/install_cuda_aarch64.sh`` as well as in the matrix.
Consider removing the installation of nccl in the Docker image.
Change this so there is only one source of truth for both builds.
### Versions
2.8.0
cc @seemethere @malfet @osalpekar @pytorch/pytorch-dev-infra
| true
|
2,933,091,488
|
[Inductor] Use real input to autotune user defined triton kernels
|
muchulee8
|
closed
|
[
"Merged",
"ciflow/trunk",
"topic: not user facing",
"module: inductor",
"ciflow/inductor"
] | 4
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #149553
Summary:
User-defined Triton kernels sometimes rely on real inputs to determine
the path of execution. We need real inputs to invoke the correct
behavior of user-defined Triton kernels (see the example in the test case,
where we have an early return for random inputs).
Test Plan:
Included in the commit.
python test/inductor/test_aot_inductor.py -k triton_autotuning
python test/inductor/test_aot_inductor.py -k triton_mutated_autotuning
Reviewers:
Subscribers:
Tasks:
Tags:
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @chenyang78 @kadeng @amjames @chauhang @aakhundov
| true
|
2,933,081,462
|
Fix nvtx incompatibility with dynamo
|
zsnoob
|
closed
|
[
"open source",
"release notes: cuda",
"module: dynamo",
"release notes: dynamo"
] | 7
|
NONE
|
## Problem Solved
This PR resolves the incompatibility between NVTX functions and torch._dynamo. When attempting to use NVTX profiling tools within code compiled with torch.compile(), the following error occurs:
```
torch._dynamo.exc.Unsupported: torch.* op returned non-Tensor int call_function <function range_push at 0x....>
```
The root cause is that torch._dynamo requires function calls within a compiled graph to return tensor types or None, but some NVTX functions return integers.
## Changes
- Added a global toggle system to enable/disable tensor returns for NVTX functions
- Implemented a decorator to handle type conversion automatically
- Enhanced all NVTX functions to support tensor return mode
- Added clear documentation and type annotations
- Maintained backward compatibility with existing code
## Impact on Existing Functionality
This change has **zero impact** on existing functionality when used normally. The default behavior remains unchanged, and all functions continue to return their original types.
Only when explicitly enabled via `torch.cuda.nvtx.enable_tensor_returns()` will the functions return tensor types instead. This opt-in approach ensures no disruption to existing code.
## Testing
- Added comprehensive unit tests that verify:
- Default behavior is preserved
- Tensor return mode correctly converts all return types to tensors
- Switching between modes works as expected
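For illustration, a rough sketch (not the PR's actual implementation) of the opt-in toggle plus decorator idea; `range_push_like` is a hypothetical stand-in for an NVTX call that returns an integer.
```python
import functools
import torch

_TENSOR_RETURNS = False

def enable_tensor_returns():
    global _TENSOR_RETURNS
    _TENSOR_RETURNS = True

def disable_tensor_returns():
    global _TENSOR_RETURNS
    _TENSOR_RETURNS = False

def _maybe_tensor_return(fn):
    @functools.wraps(fn)
    def wrapper(*args, **kwargs):
        out = fn(*args, **kwargs)
        if _TENSOR_RETURNS and isinstance(out, int):
            return torch.tensor(out)  # wrap the int so compiled graphs see a tensor
        return out
    return wrapper

@_maybe_tensor_return
def range_push_like(msg: str) -> int:
    return len(msg)  # stands in for the depth integer returned by nvtx range_push

enable_tensor_returns()
print(range_push_like("my_range"))  # tensor(8)
disable_tensor_returns()
```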
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @chauhang @amjames
| true
|
2,933,067,814
|
Remove PyTorch conda installation instructions from the documentation and tutorials
|
atalman
|
open
|
[
"module: docs",
"triaged",
"topic: docs"
] | 4
|
CONTRIBUTOR
|
### 🐛 Describe the bug
Please see: https://github.com/pytorch/pytorch/issues/138506
and https://dev-discuss.pytorch.org/t/pytorch-deprecation-of-conda-nightly-builds/2590
PyTorch has deprecated conda builds for release 2.6. We need to remove mentions of conda package installation instructions from the tutorials and documentation.
Examples:
* https://pytorch.org/tutorials/beginner/introyt/tensorboardyt_tutorial.html#before-you-start
* https://pytorch.org/audio/main/build.linux.html
* https://pytorch.org/audio/main/installation.html
* https://pytorch.org/audio/main/build.windows.html#install-pytorch
* https://pytorch.org/tutorials/recipes/recipes/tensorboard_with_pytorch.html
* https://pytorch.org/tutorials/beginner/introyt/captumyt.html
* https://pytorch.org/vision/main/training_references.html
* https://pytorch.org/tutorials/advanced/torch_script_custom_ops.html#environment-setup
* https://pytorch.org/docs/stable/notes/windows.html#package-not-found-in-win-32-channel
### Versions
2.7.0
cc @svekars @sekyondaMeta @AlannaBurke
| true
|
2,933,054,264
|
Remove pre-cxx11 from the documentation and tutorials
|
atalman
|
closed
|
[
"module: docs",
"triaged",
"topic: docs"
] | 0
|
CONTRIBUTOR
|
### 🐛 Describe the bug
Please see: https://github.com/pytorch/pytorch/issues/123649
and https://dev-discuss.pytorch.org/t/pytorch-linux-wheels-switching-to-new-wheel-build-platform-manylinux-2-28-on-november-12-2024/2581/2
PyTorch is built with -D_GLIBCXX_USE_CXX11_ABI=1 and manylinux 2.28.
Hence we should remove the usage of PRE_CXX11_ABI from the documentation.
Example: https://pytorch.org/cppdocs/installing.html#system-requirements
### Versions
2.7.0
cc @svekars @sekyondaMeta @AlannaBurke
| true
|
2,933,023,571
|
[dynamo] support Python 3.13t
|
williamwen42
|
closed
|
[
"Merged",
"ciflow/trunk",
"module: inductor",
"module: dynamo",
"ciflow/inductor",
"release notes: dynamo"
] | 3
|
MEMBER
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #149549
A few bug fixes to get Dynamo mostly working with 3.13 nogil. Dynamo encounters internal CPython assert errors in older versions of 3.13. The fix has been landed on [CPython's 3.13 branch](https://github.com/python/cpython/tree/3.13) and will be included in 3.13.3 (https://peps.python.org/pep-0719/ - april 8). If you wish to try `torch.compile` on the latest 3.13 branch, you can comment out the error checking (i.e. https://github.com/pytorch/pytorch/blob/70b6cd4e11fe61f8f9d6229b6da510c4d91a992b/torch/__init__.py#L2535 and https://github.com/pytorch/pytorch/blob/70b6cd4e11fe61f8f9d6229b6da510c4d91a992b/torch/_dynamo/eval_frame.py#L899).
We will work on getting PyTorch CI up for Dynamo/dynamo-wrapped/inductor once 3.13.3 is available.
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
| true
|
2,932,987,662
|
[ROCm] NLLLoss (torch.nll_loss) Performance Tuning by Dynamically Selecting # of GPU threads
|
apakbin
|
closed
|
[
"module: rocm",
"open source",
"Merged",
"ciflow/trunk",
"release notes: cuda",
"ciflow/rocm",
"ciflow/rocm-mi300"
] | 16
|
CONTRIBUTOR
|
Instead of fixing the number of GPU threads to 32 regardless of input size, this PR dynamically selects the number of threads based on the formula: clamp(2^round(log2(dim0/16)), min = 32, max = 1024). The experiments below were done on an MI300 machine for data type float32:


cc @jeffdaily @sunway513 @jithunnair-amd @pruthvistony @ROCmSupport @dllehr-amd @jataylo @hongxiayang @naromero77amd
| true
|
2,932,978,573
|
Remove `torch.nn` from `MOD_SKIPLIST`
|
guilhermeleobas
|
open
|
[
"open source",
"topic: not user facing",
"module: dynamo",
"ciflow/inductor"
] | 1
|
COLLABORATOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #149547
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @chauhang @amjames
| true
|
2,932,978,130
|
[cherry-pick] nccl: upgrade to 2.26.2 to avoid hang on ncclCommAbort (#149351)
|
atalman
|
closed
|
[
"topic: not user facing",
"ciflow/inductor"
] | 2
|
CONTRIBUTOR
|
cherry-pick of nccl: upgrade to 2.26.2 to avoid hang on ncclCommAbort (#149351) to the release
Generated by: export RELEASE_VERSION_TAG=2.7 ./regenerate.sh
| true
|
2,932,968,713
|
`torch.load(..., map_location="meta")` hangs indefinitely
|
Cyrilvallez
|
open
|
[
"module: serialization",
"triaged"
] | 0
|
NONE
|
### 🐛 Describe the bug
Hey! We have come across a checkpoint that cannot be loaded on the meta device, but loads on cpu with no problem.
Here is a min repro:
```python
from transformers.utils.hub import cached_file
import torch
# This will download the checkpoint
file = cached_file("MrLight/dse-qwen2-2b-mrl-v1", "pytorch_model.bin")
# This one hangs indefinitely
st = torch.load(file, map_location="meta", weights_only=True)
```
However, doing
```python
from transformers.utils.hub import cached_file
import torch
file = cached_file("MrLight/dse-qwen2-2b-mrl-v1", "pytorch_model.bin")
st = torch.load(file, map_location="cpu", weights_only=True)
```
works fine, and the weights seem to be correctly formed.
As the weights look fine, I'm not sure if it's an issue on your end, or if the checkpoint is corrupted in some weird way. Note that the weights were saved with "cuda:0" as the default device, but this should not be an issue.
You can check out https://github.com/huggingface/transformers/issues/36803 for more information!
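A possible workaround sketch (assuming the cpu load above behaves correctly, and that every value in the checkpoint is a tensor):
```python
from transformers.utils.hub import cached_file
import torch

file = cached_file("MrLight/dse-qwen2-2b-mrl-v1", "pytorch_model.bin")
sd = torch.load(file, map_location="cpu", weights_only=True)
# Move to meta afterwards instead of loading directly with map_location="meta".
sd_meta = {k: v.to("meta") for k, v in sd.items()}
```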
### Versions
torch==2.6.0
cc @mruberry @mikaylagawarecki
| true
|
2,932,915,544
|
[c10d][fr] Fix the start event get for a potential undefined behavior
|
fduwjj
|
closed
|
[
"oncall: distributed",
"ciflow/trunk",
"release notes: distributed (c10d)"
] | 1
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #149544
When `enableTiming_` is not set, we directly set `ncclStartEvent_` inside each work item to nullptr. Then, when sending to the flight recorder, `ncclStartEvent_.get()` could lead to undefined behavior, so we just make it safer.
cc @H-Huang @awgu @kwen2501 @wanchaol @fegin @wz337 @wconstab @d4l3k @c-p-i-o
| true
|
2,932,894,766
|
Improve error message when view of intermediate is returned from autograd.Function and marked dirty
|
soulitzer
|
closed
|
[
"Merged",
"ciflow/trunk",
"release notes: autograd",
"topic: improvements"
] | 6
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #149543
* #149220
Fixes https://github.com/pytorch/pytorch/issues/149252
| true
|
2,932,840,110
|
Fix a typo "trochrec" to "torchrec"
|
Microve
|
closed
|
[
"fb-exported",
"Merged",
"ciflow/trunk",
"topic: not user facing",
"module: dynamo",
"ciflow/inductor"
] | 7
|
CONTRIBUTOR
|
Summary: As titled, the path is incorrect due to the typo
Test Plan: CI
Differential Revision: D71490709
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @chauhang @amjames
| true
|
2,932,796,802
|
Add `is_batchedtensor` to dynamo builder
|
guilhermeleobas
|
closed
|
[
"open source",
"Merged",
"ciflow/trunk",
"topic: not user facing",
"module: dynamo",
"ciflow/inductor"
] | 3
|
COLLABORATOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #149541
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @chauhang @amjames
| true
|
2,932,769,398
|
Modify cuda aarch64 install for cudnn and nccl. Cleanup aarch64 cuda 12.6 docker
|
atalman
|
closed
|
[
"Merged",
"topic: not user facing"
] | 5
|
CONTRIBUTOR
|
1. Use NCCL_VERSION=v2.26.2-1. Fixes the nccl CUDA aarch64 failure seen here: https://github.com/pytorch/pytorch/actions/runs/13955856471/job/39066681549?pr=149443 (after landing https://github.com/pytorch/pytorch/pull/149351).
TODO: Follow-up required to unify NCCL definitions across the x86 and aarch64 builds
3. Cleanup: remove older CUDA versions for aarch64 builds. CUDA 12.6 was removed by: https://github.com/pytorch/pytorch/pull/148895
| true
|
2,932,769,322
|
Update ExecuTorch pin update
|
mergennachin
|
closed
|
[
"Merged",
"topic: not user facing",
"ciflow/inductor"
] | 6
|
CONTRIBUTOR
|
Latest commit in https://hud.pytorch.org/hud/pytorch/executorch/viable%2Fstrict/1?per_page=50
Follow-up to https://github.com/pytorch/pytorch/issues/144480#issuecomment-2731150636
Also, need to incorporate change from https://github.com/pytorch/executorch/pull/8817
Test Plan:
Monitor linux-jammy-py3-clang12-executorch test
| true
|
2,932,753,950
|
Specify the default PyTorch Distributed backend for MPS
|
wangkuiyi
|
closed
|
[
"oncall: distributed",
"open source",
"Merged",
"ciflow/trunk",
"release notes: distributed (c10d)"
] | 7
|
CONTRIBUTOR
|
Fixes #149537
cc @H-Huang @awgu @kwen2501 @wanchaol @fegin @fduwjj @wz337 @wconstab @d4l3k @c-p-i-o
| true
|
2,932,751,851
|
Need to specify default communication backend for MPS
|
wangkuiyi
|
closed
|
[
"oncall: distributed",
"module: mps"
] | 0
|
CONTRIBUTOR
|
### 🐛 Describe the bug
When I ran unit tests of TorchFT on my Mac Studio M2 Ultra ([steps](https://github.com/pytorch/torchft/issues/136#issuecomment-2734816067)), [this line](https://github.com/pytorch/torchft/blob/9d3834082012af8268202100fdcc16734f46b2cb/torchft/process_group_test.py#L675) in `test_device_mesh` triggered the following exception:
https://github.com/pytorch/pytorch/blob/94d761fbf084ee94c38674a0d693f2dc6850ce4b/torch/distributed/distributed_c10d.py#L373-L385
and gave me the following error messages:
> ValueError: We detected accelerator mps on your machine. But we don't know which communication backend to use for this accelerator. Please specify the `backend` argument in the `init_process_group` call.
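A minimal sketch of the workaround the error message asks for (a single-process group on an Apple-silicon machine, passing the backend explicitly instead of relying on device-based auto-detection):
```python
import os

import torch.distributed as dist

os.environ.setdefault("MASTER_ADDR", "127.0.0.1")
os.environ.setdefault("MASTER_PORT", "29500")

# Passing `backend` explicitly avoids the auto-detection path that raises for MPS.
dist.init_process_group(backend="gloo", rank=0, world_size=1)
print(dist.get_backend())  # gloo
dist.destroy_process_group()
```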
### Versions
2.6.0 installed from pip install in a Miniforge environment.
With more details from the above script:
```
Collecting environment information...
PyTorch version: N/A
Is debug build: N/A
CUDA used to build PyTorch: N/A
ROCM used to build PyTorch: N/A
OS: macOS 15.4 (arm64)
GCC version: Could not collect
Clang version: 16.0.0 (clang-1600.0.26.6)
CMake version: version 3.29.5
Libc version: N/A
Python version: 3.10.12 | packaged by conda-forge | (main, Jun 23 2023, 22:41:52) [Clang 15.0.7 ] (64-bit runtime)
Python platform: macOS-15.4-arm64-arm-64bit
Is CUDA available: N/A
CUDA runtime version: Could not collect
CUDA_MODULE_LOADING set to: N/A
GPU models and configuration: Could not collect
Nvidia driver version: Could not collect
cuDNN version: Could not collect
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: N/A
CPU:
Apple M1 Max
Versions of relevant libraries:
[pip3] mypy-extensions==1.0.0
[pip3] numpy==2.2.4
[pip3] torch==2.6.0
[pip3] torchft==0.1.1
[pip3] torchx==0.7.0
[conda] numpy 2.2.4 pypi_0 pypi
[conda] torch 2.6.0 pypi_0 pypi
[conda] torchft 0.1.1 pypi_0 pypi
[conda] torchx 0.7.0 pypi_0 pypi
```
cc @H-Huang @awgu @kwen2501 @wanchaol @fegin @fduwjj @wz337 @wconstab @d4l3k @c-p-i-o @kulinseth @albanD @malfet @DenisVieriu97 @jhavukainen
| true
|
2,932,731,692
|
[test] sccache docker build
|
clee2000
|
open
|
[
"topic: not user facing"
] | 1
|
CONTRIBUTOR
|
Fixes #ISSUE_NUMBER
cc @jgong5 @mingfeima @XiaobingSuper @sanchitintel @ashokei @jingxu10
| true
|
2,932,699,132
|
Update commitlist.py instructions for the GitHub repo regime
|
janeyx99
|
closed
|
[
"Merged",
"ciflow/trunk",
"topic: not user facing"
] | 3
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #149535
| true
|
2,932,648,255
|
Adding RoPE to pytorch core
|
manuelcandales
|
open
|
[
"module: nn",
"triaged",
"enhancement",
"needs design"
] | 2
|
CONTRIBUTOR
|
The RoPE python code is being copied and pasted over and over in multiple pytorch org repos. I propose we move the RoPE operation to pytorch core (e.g. under nn.functional) and also add a RotaryPositionalEmbeddings module. Some examples of code duplication:
pytorch/ao:
- https://github.com/pytorch/ao/blob/64bcf4c25755a783685ba7383000b3bf722523c1/torchao/_models/llama/model.py#L546-L558
pytorch/benchmark:
- https://github.com/pytorch/benchmark/blob/2c5bc4ad6ae2e78a943aff182ad7c3400a7bb879/torchbenchmark/models/simple_gpt/model.py#L441-L458
- https://github.com/pytorch/benchmark/blob/2c5bc4ad6ae2e78a943aff182ad7c3400a7bb879/torchbenchmark/models/simple_gpt_tp_manual/model.py#L400-L417
pytorch/torchchat:
- https://github.com/pytorch/torchchat/blob/4d8bab57ce5dca927402923c2b1ad83cd7e2f6ac/torchchat/model.py#L988-L1000
pytorch/torchtune:
- https://github.com/pytorch/torchtune/blob/c3703482bde72e572b535d3f7c43c81e94164ebc/torchtune/modules/position_embeddings.py#L99-L122
- https://github.com/pytorch/torchtune/blob/c3703482bde72e572b535d3f7c43c81e94164ebc/torchtune/models/llama3_1/_position_embeddings.py#L168-L191
pytorch/xla:
- https://github.com/pytorch/xla/blob/4190fc0e8e73598966cb019108aa871a92bae046/torchax/test/llama/llama_model.py#L296-L310
pytorch/pytorch:
- https://github.com/pytorch/pytorch/blob/518563d6efa6b76b7e9a04e04dd2d8587b62737c/benchmarks/gpt_fast/model.py#L280-L292
- https://github.com/pytorch/pytorch/blob/518563d6efa6b76b7e9a04e04dd2d8587b62737c/benchmarks/gpt_fast/mixtral_moe_model.py#L293-L305
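For reference, a minimal sketch (not any of the above repos' exact code) of the rotary-embedding pattern that keeps getting duplicated:
```python
import torch

def precompute_freqs_cis(head_dim: int, seq_len: int, base: float = 10000.0) -> torch.Tensor:
    # Complex rotation factors of shape (seq_len, head_dim // 2).
    freqs = 1.0 / (base ** (torch.arange(0, head_dim, 2).float() / head_dim))
    t = torch.arange(seq_len)
    return torch.polar(torch.ones(seq_len, head_dim // 2), torch.outer(t, freqs))

def apply_rotary_emb(x: torch.Tensor, freqs_cis: torch.Tensor) -> torch.Tensor:
    # x: (batch, seq, heads, head_dim); rotate consecutive pairs of head dimensions.
    x_ = torch.view_as_complex(x.float().reshape(*x.shape[:-1], -1, 2))
    freqs_cis = freqs_cis.view(1, x_.shape[1], 1, x_.shape[-1])
    x_out = torch.view_as_real(x_ * freqs_cis).flatten(3)
    return x_out.type_as(x)

q = torch.randn(2, 16, 8, 64)
freqs = precompute_freqs_cis(64, 16)
print(apply_rotary_emb(q, freqs).shape)  # torch.Size([2, 16, 8, 64])
```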
cc @albanD @mruberry @jbschlosser @walterddr @mikaylagawarecki
| true
|
2,932,637,266
|
UnsupportedOperatorException: aten._fft_r2c.default
|
ivyw-ts
|
closed
|
[
"module: onnx",
"triaged",
"oncall: pt2",
"oncall: export"
] | 7
|
NONE
|
### 🐛 Describe the bug
We ran into this error when trying to convert the <a href="https://huggingface.co/speechbrain/lang-id-voxlingua107-ecapa">VoxLingua107 ECAPA-TDNN Spoken Language Identification Model</a> to ONNX. Replication and error outputs are below; a more verbose logs file is attached as well.
[onnx_export_logs.md](https://github.com/user-attachments/files/19347887/onnx_export_logs.md)
### Steps to replicate the error (using Linux machine):
We followed the README for Linux to download and build PyTorch in a Conda environment, but checked out the commit at <a href="https://github.com/pytorch/pytorch/commit/f89309fb732f93a21b5a3e49124623949b20c7dc">f89309f</a>. The next steps detail how to replicate the error we encountered when exporting the VoxLingua model.
1. Install speechbrain dependencies:
```
pip install git+https://github.com/speechbrain/speechbrain.git@develop
```
2. Set up VoxLingua project in new Python file:
```
import torch
import torchaudio
from speechbrain.inference.classifiers import EncoderClassifier
import torch.onnx
language_id = EncoderClassifier.from_hparams(source="speechbrain/lang-id-voxlingua107-ecapa", savedir="tmp")
# Create dummy audio signal data
signal = torch.zeros(48000)
prediction = language_id.classify_batch(signal)
print(prediction)
```
3. Add torch.onnx command to end of Python file:
```
torch.onnx.export(language_id, signal, "langid.onnx", export_params=True,
                  do_constant_folding=True, input_names=['input'], output_names=['output'],
                  dynamic_axes={'input' : {0 : 'batch_size'}}, dynamo=True, report=True)
```
4. Run in conda environment:
```
python3 <FILENAME>.py
```
### Error message:
```
torch._subclasses.fake_tensor.UnsupportedOperatorException: aten._fft_r2c.default
```
### Stack trace:
```
Traceback (most recent call last):
File "/home/convertonnx.py", line 14, in <module>
torch.onnx.export(language_id, signal, "langid.onnx", export_params=True,
~~~~~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
do_constant_folding=True, input_names=['input'], output_names=['output'],
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
dynamic_axes={'input' : {0 : 'batch_size'}}, dynamo=True, report=True) #variable length axes
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/anaconda3/envs/pytorch_conda/lib/python3.13/site-packages/torch/onnx/__init__.py", line 351, in export
return _compat.export_compat(
~~~~~~~~~~~~~~~~~~~~~^
model,
^^^^^^
...<19 lines>...
fallback=fallback,
^^^^^^^^^^^^^^^^^^
)
^
File "/home/anaconda3/envs/pytorch_conda/lib/python3.13/site-packages/torch/onnx/_internal/exporter/_compat.py", line 304, in export_compat
onnx_program = _core.export(
model,
...<11 lines>...
verbose=verbose,
)
File "/home//anaconda3/envs/pytorch_conda/lib/python3.13/site-packages/torch/onnx/_internal/exporter/_core.py", line 1292, in export
raise _errors.TorchExportError(
...<7 lines>...
) from first_error
```
### Versions
```
PyTorch version: 2.6.0+cu124
Is debug build: False
CUDA used to build PyTorch: 12.4
ROCM used to build PyTorch: N/A
OS: Ubuntu 22.04.5 LTS (x86_64)
GCC version: (Ubuntu 11.4.0-1ubuntu1~22.04) 11.4.0
Clang version: 14.0.0-1ubuntu1.1
CMake version: version 3.31.6
Libc version: glibc-2.35
Python version: 3.13.2 | packaged by Anaconda, Inc. | (main, Feb 6 2025, 18:56:02) [GCC 11.2.0] (64-bit runtime)
Python platform: Linux-6.8.0-52-generic-x86_64-with-glibc2.35
Is CUDA available: False
CUDA runtime version: No CUDA
CUDA_MODULE_LOADING set to: N/A
GPU models and configuration: No CUDA
Nvidia driver version: No CUDA
cuDNN version: No CUDA
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 43 bits physical, 48 bits virtual
Byte Order: Little Endian
CPU(s): 8
On-line CPU(s) list: 0-7
Vendor ID: GenuineIntel
Model name: Intel(R) Xeon(R) CPU E5-2697A v4 @ 2.60GHz
CPU family: 6
Model: 79
Thread(s) per core: 1
Core(s) per socket: 1
Socket(s): 8
Stepping: 1
BogoMIPS: 5199.99
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ss syscall nx pdpe1gb rdtscp lm constant_tsc arch_perfmon nopl xtopology tsc_reliable nonstop_tsc cpuid tsc_known_freq pni pclmulqdq ssse3 fma cx16 pcid sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand hypervisor lahf_lm abm 3dnowprefetch cpuid_fault pti ssbd ibrs ibpb stibp fsgsbase tsc_adjust bmi1 hle avx2 smep bmi2 invpcid rtm rdseed adx smap xsaveopt arat md_clear flush_l1d arch_capabilities
Hypervisor vendor: VMware
Virtualization type: full
L1d cache: 256 KiB (8 instances)
L1i cache: 256 KiB (8 instances)
L2 cache: 2 MiB (8 instances)
L3 cache: 320 MiB (8 instances)
NUMA node(s): 1
NUMA node0 CPU(s): 0-7
Vulnerability Gather data sampling: Not affected
Vulnerability Itlb multihit: KVM: Mitigation: VMX unsupported
Vulnerability L1tf: Mitigation; PTE Inversion
Vulnerability Mds: Mitigation; Clear CPU buffers; SMT Host state unknown
Vulnerability Meltdown: Mitigation; PTI
Vulnerability Mmio stale data: Mitigation; Clear CPU buffers; SMT Host state unknown
Vulnerability Reg file data sampling: Not affected
Vulnerability Retbleed: Mitigation; IBRS
Vulnerability Spec rstack overflow: Not affected
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; IBRS; IBPB conditional; STIBP disabled; RSB filling; PBRSB-eIBRS Not affected; BHI SW loop, KVM SW loop
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Mitigation; Clear CPU buffers; SMT Host state unknown
Versions of relevant libraries:
[pip3] numpy==2.2.3
[pip3] nvidia-cublas-cu12==12.4.5.8
[pip3] nvidia-cuda-cupti-cu12==12.4.127
[pip3] nvidia-cuda-nvrtc-cu12==12.4.127
[pip3] nvidia-cuda-runtime-cu12==12.4.127
[pip3] nvidia-cudnn-cu12==9.1.0.70
[pip3] nvidia-cufft-cu12==11.2.1.3
[pip3] nvidia-curand-cu12==10.3.5.147
[pip3] nvidia-cusolver-cu12==11.6.1.9
[pip3] nvidia-cusparse-cu12==12.3.1.170
[pip3] nvidia-cusparselt-cu12==0.6.2
[pip3] nvidia-nccl-cu12==2.21.5
[pip3] nvidia-nvjitlink-cu12==12.4.127
[pip3] nvidia-nvtx-cu12==12.4.127
[pip3] onnx==1.17.0
[pip3] onnxscript==0.2.2
[pip3] optree==0.14.1
[pip3] torch==2.6.0
[pip3] torchaudio==2.6.0
[pip3] triton==3.2.0
[conda] mkl-include 2025.0.1 pypi_0 pypi
[conda] mkl-static 2025.0.1 pypi_0 pypi
[conda] numpy 2.2.3 pypi_0 pypi
[conda] nvidia-cublas-cu12 12.4.5.8 pypi_0 pypi
[conda] nvidia-cuda-cupti-cu12 12.4.127 pypi_0 pypi
[conda] nvidia-cuda-nvrtc-cu12 12.4.127 pypi_0 pypi
[conda] nvidia-cuda-runtime-cu12 12.4.127 pypi_0 pypi
[conda] nvidia-cudnn-cu12 9.1.0.70 pypi_0 pypi
[conda] nvidia-cufft-cu12 11.2.1.3 pypi_0 pypi
[conda] nvidia-curand-cu12 10.3.5.147 pypi_0 pypi
[conda] nvidia-cusolver-cu12 11.6.1.9 pypi_0 pypi
[conda] nvidia-cusparse-cu12 12.3.1.170 pypi_0 pypi
[conda] nvidia-cusparselt-cu12 0.6.2 pypi_0 pypi
[conda] nvidia-nccl-cu12 2.21.5 pypi_0 pypi
[conda] nvidia-nvjitlink-cu12 12.4.127 pypi_0 pypi
[conda] nvidia-nvtx-cu12 12.4.127 pypi_0 pypi
[conda] optree 0.14.1 pypi_0 pypi
[conda] torch 2.6.0 pypi_0 pypi
[conda] torchaudio 2.6.0 pypi_0 pypi
[conda] triton 3.2.0 pypi_0 pypi
```
cc @chauhang @penguinwu @avikchaudhuri @gmagogsfm @zhxchen17 @tugsbayasgalan @angelayi @suo @ydwu4
| true
|