| id | title | user | state | labels | comments | author_association | body | is_title |
|---|---|---|---|---|---|---|---|---|
2,993,658,007
|
[ROCm][CI/CD] create ROCm 6.4 images, part 1, skip magma tarball
|
jithunnair-amd
|
closed
|
[
"module: rocm",
"open source",
"Merged",
"topic: not user facing",
"ciflow/rocm"
] | 10
|
COLLABORATOR
|
cc @jeffdaily @sunway513 @pruthvistony @ROCmSupport @dllehr-amd @jataylo @hongxiayang @naromero77amd
| true
|
2,993,652,914
|
[inductor][take 2] Change minimum number of SMs to 58 to let L4 Ada use Triton GEMM backend
|
henrylhtsang
|
closed
|
[
"module: inductor",
"ciflow/inductor"
] | 2
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #148622
* __->__ #151235
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
| true
|
2,993,644,674
|
NCCL: Fix cmake file when cross compiling.
|
kumpera
|
open
|
[
"triaged",
"open source"
] | 4
|
CONTRIBUTOR
|
When cross-compiling PyTorch, we cannot execute compiled code.
FindNCCL uses try_run to check whether the header version matches the binary version.
This fixes cross-compilation scenarios by checking for CMAKE_CROSSCOMPILING before invoking try_run.
| true
|
2,993,585,344
|
Code Injection via `torch.jit.annotations.parse_type_line()`
|
paul-gerste-sonarsource
|
open
|
[
"oncall: jit",
"security"
] | 1
|
NONE
|
### 🐛 Describe the bug
We found the mitigation of https://github.com/pytorch/pytorch/issues/88868 to be incomplete. It is still possible to execute arbitrary Python code when controlling the input to `torch.jit.annotations.parse_type_line()` by bypassing the filter in `_eval_no_call()`.
Impact: An attacker that can control the first argument of `torch.jit.annotations.parse_type_line()` can execute arbitrary Python code.
You can find the details in our advisory: [PyTorch 2.6.0 Security Advisory.pdf](https://github.com/user-attachments/files/19739249/PyTorch.2.6.0.Security.Advisory.pdf)
PoC:
```py
import torch
torch.jit.annotations.parse_type_line('''# type: [help=='[print(1337),exit(0)]' for help.__class__.__eq__ in [eval]] -> 1337''', None, 1)
```
### Versions
Collecting environment information...
PyTorch version: 2.6.0
Is debug build: False
CUDA used to build PyTorch: None
ROCM used to build PyTorch: N/A
OS: macOS 15.3 (arm64)
GCC version: Could not collect
Clang version: 20.1.1
CMake version: version 3.31.3
Libc version: N/A
Python version: 3.13.2 (main, Feb 4 2025, 14:51:09) [Clang 16.0.0 (clang-1600.0.26.6)] (64-bit runtime)
Python platform: macOS-15.3-arm64-arm-64bit-Mach-O
Is CUDA available: False
CUDA runtime version: No CUDA
CUDA_MODULE_LOADING set to: N/A
GPU models and configuration: No CUDA
Nvidia driver version: No CUDA
cuDNN version: No CUDA
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Apple M1 Pro
Versions of relevant libraries:
[pip3] numpy==2.2.4
[pip3] torch==2.6.0
[pip3] torchaudio==2.6.0
[pip3] torchvision==0.21.0
[conda] Could not collect
cc @EikanWang @jgong5 @wenzhe-nrv @sanchitintel
| true
|
2,993,573,908
|
[logging] Separate cuda synchronize overhead in pad_mm
|
masnesral
|
closed
|
[
"topic: not user facing",
"module: inductor",
"ciflow/inductor"
] | 1
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #151232
Summary: In order to more accurately debug the overhead of pad_mm, explicitly do a cuda.synchronize before benchmarking.
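A rough sketch of the pattern described above (illustrative only, not the PR's actual change): synchronize before the timed region so previously queued kernels are not attributed to the benchmark, and record the synchronize overhead separately.
```python
import time
import torch

# Illustrative helper (hypothetical names): drain previously queued GPU work
# before timing, and report that synchronize overhead separately from the
# benchmarked region.
def benchmark_with_sync(fn, *args):
    t0 = time.perf_counter()
    torch.cuda.synchronize()           # wait for kernels queued by earlier code
    sync_overhead = time.perf_counter() - t0

    t1 = time.perf_counter()
    fn(*args)
    torch.cuda.synchronize()           # wait for the benchmarked kernels
    elapsed = time.perf_counter() - t1
    return sync_overhead, elapsed

a = torch.randn(1024, 1024, device="cuda")
print(benchmark_with_sync(torch.mm, a, a))
```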
Test Plan: See internal test plan here: https://fburl.com/f365xfcj
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
| true
|
2,993,571,566
|
[logging] Separate cuda synchronize overhead in autotuning
|
masnesral
|
closed
|
[
"topic: not user facing",
"module: inductor",
"ciflow/inductor"
] | 2
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #151231
Summary: In order to more accurately debug the overhead of autotuning, explicitly do a cuda.synchronize before benchmarking and time that.
Test Plan: See internal test plan here: https://fburl.com/f365xfcj
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
| true
|
2,993,554,760
|
[ROCM] Fix in-place aten sum with specialized templated kernels.
|
carlobertolli
|
closed
|
[
"module: rocm",
"open source",
"Merged",
"ciflow/trunk",
"release notes: cuda",
"ciflow/rocm"
] | 3
|
CONTRIBUTOR
|
We noticed a regression when doing aten.sum in place (a += b) where the dtype of the output is not the same as that of the functor.
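A hypothetical illustration (not taken from the PR) of the mixed-dtype, in-place reduction pattern described above:
```python
import torch

# Hypothetical shapes/dtypes: a half-precision reduction accumulated in place
# into a float32 buffer, so the output dtype differs from the functor's dtype.
b = torch.randn(128, 4, dtype=torch.float16, device="cuda")
a = torch.zeros(4, dtype=torch.float32, device="cuda")
a += b.sum(dim=0)   # "a += b"-style in-place accumulation across dtypes
print(a.dtype)      # torch.float32
```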
Co-authored by: Jerry Mannil <jerry.mannil@amd.com>
cc @jeffdaily @sunway513 @jithunnair-amd @pruthvistony @ROCmSupport @dllehr-amd @jataylo @hongxiayang @naromero77amd
| true
|
2,993,464,427
|
DISABLED test_foreach_reduce_large_input__foreach_max_w_empty_False_cuda_bool (__main__.TestForeachCUDA)
|
pytorch-bot[bot]
|
closed
|
[
"triaged",
"module: flaky-tests",
"skipped",
"module: mta"
] | 2
|
NONE
|
Platforms: linux, slow
This test was disabled because it is failing in CI. See [recent examples](https://hud.pytorch.org/flakytest?name=test_foreach_reduce_large_input__foreach_max_w_empty_False_cuda_bool&suite=TestForeachCUDA&limit=100) and the most recent trunk [workflow logs](https://github.com/pytorch/pytorch/runs/40495778601).
Over the past 3 hours, it has been determined flaky in 3 workflow(s) with 6 failures and 3 successes.
**Debugging instructions (after clicking on the recent samples link):**
DO NOT ASSUME THINGS ARE OKAY IF THE CI IS GREEN. We now shield flaky tests from developers so CI will thus be green but it will be harder to parse the logs.
To find relevant log snippets:
1. Click on the workflow logs linked above
2. Click on the Test step of the job so that it is expanded. Otherwise, the grepping will not work.
3. Grep for `test_foreach_reduce_large_input__foreach_max_w_empty_False_cuda_bool`
4. There should be several instances run (as flaky tests are rerun in CI) from which you can study the logs.
<details><summary>Sample error message</summary>
```
Traceback (most recent call last):
File "/var/lib/jenkins/workspace/test/test_foreach.py", line 1104, in test_foreach_reduce_large_input
wrapped_op(
File "/var/lib/jenkins/workspace/test/test_foreach.py", line 91, in __call__
assert mta_called == (expect_fastpath and (not zero_size)), (
AssertionError: mta_called=False, expect_fastpath=True, zero_size=False, self.func.__name__='_foreach_max', keys=('aten::_foreach_max', 'Unrecognized', 'aten::zeros', 'aten::empty', 'aten::zero_', 'aten::fill_', 'cudaLaunchKernel', 'Lazy Function Loading', 'cudaDeviceSynchronize')
To execute this test, run the following from the base repo dir:
PYTORCH_TEST_CUDA_MEM_LEAK_CHECK=1 python test/test_foreach.py TestForeachCUDA.test_foreach_reduce_large_input__foreach_max_w_empty_False_cuda_bool
This message can be suppressed by setting PYTORCH_PRINT_REPRO_ON_FAILURE=0
```
</details>
Test file path: `test_foreach.py`
cc @clee2000 @crcrpar @mcarilli @janeyx99
| true
|
2,993,464,426
|
DISABLED test_parity__foreach_add_fastpath_inplace_cuda_bfloat16 (__main__.TestForeachCUDA)
|
pytorch-bot[bot]
|
closed
|
[
"triaged",
"module: flaky-tests",
"skipped",
"module: mta"
] | 2
|
NONE
|
Platforms: linux
This test was disabled because it is failing in CI. See [recent examples](https://hud.pytorch.org/flakytest?name=test_parity__foreach_add_fastpath_inplace_cuda_bfloat16&suite=TestForeachCUDA&limit=100) and the most recent trunk [workflow logs](https://github.com/pytorch/pytorch/runs/40495820555).
Over the past 3 hours, it has been determined flaky in 3 workflow(s) with 6 failures and 3 successes.
**Debugging instructions (after clicking on the recent samples link):**
DO NOT ASSUME THINGS ARE OKAY IF THE CI IS GREEN. We now shield flaky tests from developers so CI will thus be green but it will be harder to parse the logs.
To find relevant log snippets:
1. Click on the workflow logs linked above
2. Click on the Test step of the job so that it is expanded. Otherwise, the grepping will not work.
3. Grep for `test_parity__foreach_add_fastpath_inplace_cuda_bfloat16`
4. There should be several instances run (as flaky tests are rerun in CI) from which you can study the logs.
<details><summary>Sample error message</summary>
```
Traceback (most recent call last):
File "/var/lib/jenkins/workspace/test/test_foreach.py", line 228, in test_parity
actual = func(
File "/var/lib/jenkins/workspace/test/test_foreach.py", line 91, in __call__
assert mta_called == (expect_fastpath and (not zero_size)), (
AssertionError: mta_called=False, expect_fastpath=True, zero_size=False, self.func.__name__='_foreach_add_', keys=('aten::_foreach_add_', 'Unrecognized', 'aten::result_type', 'cudaLaunchKernel', 'cudaDeviceSynchronize')
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_device_type.py", line 1161, in test_wrapper
return test(*args, **kwargs)
File "/opt/conda/envs/py_3.10/lib/python3.10/unittest/mock.py", line 1833, in _inner
return f(*args, **kw)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 1975, in wrap_fn
return fn(self, *args, **kwargs)
File "/var/lib/jenkins/workspace/test/test_foreach.py", line 235, in test_parity
with self.assertRaises(type(e)):
File "/opt/conda/envs/py_3.10/lib/python3.10/unittest/case.py", line 226, in __exit__
self._raiseFailure("{} not raised".format(exc_name))
File "/opt/conda/envs/py_3.10/lib/python3.10/unittest/case.py", line 163, in _raiseFailure
raise self.test_case.failureException(msg)
AssertionError: AssertionError not raised
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 3154, in wrapper
method(*args, **kwargs)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 3154, in wrapper
method(*args, **kwargs)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_device_type.py", line 454, in instantiated_test
result = test(self, **param_kwargs)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 1612, in wrapper
fn(*args, **kwargs)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_device_type.py", line 1173, in test_wrapper
raise e_tracked from e
Exception: Caused by sample input at index 0: SampleInput(input=TensorList[Tensor[size=(20, 20), device="cuda:0", dtype=torch.bfloat16], Tensor[size=(19, 19), device="cuda:0", dtype=torch.bfloat16], Tensor[size=(18, 18), device="cuda:0", dtype=torch.bfloat16], Tensor[size=(17, 17), device="cuda:0", dtype=torch.bfloat16], Tensor[size=(16, 16), device="cuda:0", dtype=torch.bfloat16], Tensor[size=(15, 15), device="cuda:0", dtype=torch.bfloat16], Tensor[size=(14, 14), device="cuda:0", dtype=torch.bfloat16], Tensor[size=(13, 13), device="cuda:0", dtype=torch.bfloat16], Tensor[size=(12, 12), device="cuda:0", dtype=torch.bfloat16], Tensor[size=(11, 11), device="cuda:0", dtype=torch.bfloat16], Tensor[size=(10, 10), device="cuda:0", dtype=torch.bfloat16], Tensor[size=(9, 9), device="cuda:0", dtype=torch.bfloat16], Tensor[size=(8, 8), device="cuda:0", dtype=torch.bfloat16], Tensor[size=(7, 7), device="cuda:0", dtype=torch.bfloat16], Tensor[size=(6, 6), device="cuda:0", dtype=torch.bfloat16], Tensor[size=(5, 5), device="cuda:0", dtype=torch.bfloat16], Tensor[size=(4, 4), device="cuda:0", dtype=torch.bfloat16], Tensor[size=(3, 3), device="cuda:0", dtype=torch.bfloat16], Tensor[size=(2, 2), device="cuda:0", dtype=torch.bfloat16], Tensor[size=(1, 1), device="cuda:0", dtype=torch.bfloat16]], args=(TensorList[Tensor[size=(20, 20), device="cuda:0", dtype=torch.bfloat16], Tensor[size=(19, 19), device="cuda:0", dtype=torch.bfloat16], Tensor[size=(18, 18), device="cuda:0", dtype=torch.bfloat16], Tensor[size=(17, 17), device="cuda:0", dtype=torch.bfloat16], Tensor[size=(16, 16), device="cuda:0", dtype=torch.bfloat16], Tensor[size=(15, 15), device="cuda:0", dtype=torch.bfloat16], Tensor[size=(14, 14), device="cuda:0", dtype=torch.bfloat16], Tensor[size=(13, 13), device="cuda:0", dtype=torch.bfloat16], Tensor[size=(12, 12), device="cuda:0", dtype=torch.bfloat16], Tensor[size=(11, 11), device="cuda:0", dtype=torch.bfloat16], Tensor[size=(10, 10), device="cuda:0", dtype=torch.bfloat16], Tensor[size=(9, 9), device="cuda:0", dtype=torch.bfloat16], Tensor[size=(8, 8), device="cuda:0", dtype=torch.bfloat16], Tensor[size=(7, 7), device="cuda:0", dtype=torch.bfloat16], Tensor[size=(6, 6), device="cuda:0", dtype=torch.bfloat16], Tensor[size=(5, 5), device="cuda:0", dtype=torch.bfloat16], Tensor[size=(4, 4), device="cuda:0", dtype=torch.bfloat16], Tensor[size=(3, 3), device="cuda:0", dtype=torch.bfloat16], Tensor[size=(2, 2), device="cuda:0", dtype=torch.bfloat16], Tensor[size=(1, 1), device="cuda:0", dtype=torch.bfloat16]]), kwargs={'alpha': '3.14'}, broadcasts_input=False, name='')
To execute this test, run the following from the base repo dir:
PYTORCH_OPINFO_SAMPLE_INPUT_INDEX=0 python test/test_foreach.py TestForeachCUDA.test_parity__foreach_add_fastpath_inplace_cuda_bfloat16
This message can be suppressed by setting PYTORCH_PRINT_REPRO_ON_FAILURE=0
```
</details>
Test file path: `test_foreach.py`
cc @clee2000 @crcrpar @mcarilli @janeyx99
| true
|
2,993,302,960
|
Add @requires_multicast_support to test_multimem_all_gather
|
pragupta
|
closed
|
[
"oncall: distributed",
"open source",
"Merged",
"ciflow/trunk",
"topic: not user facing",
"ciflow/periodic",
"ciflow/periodic-rocm-mi300"
] | 6
|
CONTRIBUTOR
|
Fixes #ISSUE_NUMBER
cc @H-Huang @awgu @wanchaol @fegin @fduwjj @wz337 @wconstab @d4l3k
| true
|
2,993,268,125
|
[Easy] The event_id of torch.cuda.Event and torch.xpu.Event always is 0
|
FFFrog
|
closed
|
[
"open source",
"Merged",
"Reverted",
"ciflow/trunk",
"topic: not user facing",
"ciflow/xpu",
"ci-no-td"
] | 21
|
COLLABORATOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #151226
* #151411
* #151221
* #151404
Although torch.cuda.Event and torch.xpu.Event have cuda_event and sycl_event fields respectively, the event_id exposed from the base class torch.Event is always 0, which can confuse users.
The memory of torch.Event is not actually used by torch.cuda.Event and torch.xpu.Event, but we still need to inherit from torch.Event because CPython checks for it.
Repro with cuda:
```
>>> import torch
>>> event = torch.cuda.Event()
>>> event.cuda_event
0
>>> event.event_id
0
>>> event.record()
>>> event.cuda_event
127982096
>>> event.event_id
0
```
| true
|
2,993,255,140
|
[dynamo] keep C++ symbolic shape guards disabled for benchmarks
|
isuruf
|
open
|
[
"open source",
"topic: not user facing",
"module: dynamo",
"ciflow/inductor"
] | 3
|
COLLABORATOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #140756
* __->__ #151225
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @chauhang @amjames
| true
|
2,993,139,653
|
[MPSInductor] Fix noop codegen
|
malfet
|
closed
|
[
"Merged",
"topic: not user facing",
"ciflow/mps",
"module: inductor",
"ciflow/inductor"
] | 3
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #151224
By adding `pass` in front of the comment for the fake set_device call.
This fixes `TestGPU.test_zero_element_mutation_mps`, which previously
failed with
```
torch._inductor.exc.InductorError: RuntimeError: Failed to import /var/folders/sc/2thx6_x95h7_h9qs8s48yh140000gn/T/tmp2emka_sx/7k/c7kmnwhb363ysalhewglr3cwtej6tiz3t4ppqa4bvhubaokmlprw.py
IndentationError: expected an indented block after 'with' statement on line 38 (c7kmnwhb363ysalhewglr3cwtej6tiz3t4ppqa4bvhubaokmlprw.py, line 40)
```
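A minimal sketch (standing in for the generated wrapper code, not copied from it) of why the missing `pass` matters: a `with` block whose body is only a comment is invalid Python, and adding `pass` makes it compile.
```python
# Illustrative stand-ins for the generated wrapper code; `ctx` is never
# executed because we only compile the source strings.
bad = (
    "with ctx:\n"
    "    # fake set_device call (comment only)\n"
    "x = 1\n"
)
good = (
    "with ctx:\n"
    "    pass  # fake set_device call (comment only)\n"
    "x = 1\n"
)
for name, src in (("without pass", bad), ("with pass", good)):
    try:
        compile(src, "<generated>", "exec")
        print(name, "-> compiles")
    except SyntaxError as e:  # IndentationError is a subclass of SyntaxError
        print(name, "->", e.msg)
```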
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
| true
|
2,993,064,703
|
[FSDP] Cannot writeback when the parameter shape changes
|
efsotr
|
open
|
[
"oncall: distributed",
"triaged"
] | 1
|
NONE
|
### 🐛 Describe the bug
```python
import os
import torch
from torch import nn
from torch.distributed.fsdp import FullyShardedDataParallel as FSDP
cmd = """
CUDA_VISIBLE_DEVICES=0 torchrun --nproc-per-node=1 mini_bug_reproduce.py # raise error
CUDA_VISIBLE_DEVICES=0,1 torchrun --nproc-per-node=2 mini_bug_reproduce.py # pass
"""
torch.distributed.init_process_group(backend='nccl')
world_size = int(os.environ["WORLD_SIZE"])
local_rank = int(os.environ["LOCAL_RANK"])
torch.set_default_device(local_rank)
device = torch.get_default_device()
class Test(nn.Module):
def __init__(self):
super().__init__()
self.e = nn.Embedding(5, 4)
def forward(self, x):
x = self.e(x)
return x.sum()
model = Test().half()
model = FSDP(
model,
use_orig_params=True,
)
## copied from https://github.com/huggingface/accelerate/blob/main/src/accelerate/accelerator.py#L1679-1719
upcasted_log = []
for module in FSDP.fsdp_modules(model):
if not module._has_params:
continue # skip if FSDP module not managing parameters
param = module._flat_param
if (
param.dtype != torch.float32
and param.device != torch.device("meta")
and param.requires_grad
):
# keep log of names_params that was upcasted
# NOTE: resorted to this because warnings.simplefilter("once") is somehow not working
name_param_log = (module.module.__class__.__name__, ", ".join(module._flat_param._fqns))
if name_param_log not in upcasted_log:
upcasted_log.append(name_param_log)
# this works because of FSDP's _runtime_utils.lazy_init.
# Have to be careful not to call anything before this that
# triggers lazy_init (e.g., _is_fsdp_root).
param.data = param.data.to(torch.float32) # upcasting
module._handle._orig_param_dtype = torch.float32 # update
x = torch.randint(0, 5, (20,), device=device)
model.eval()
with torch.no_grad():
loss = model(x)
```
1 GPU raises an error, 2 GPUs pass
```log
Expects torch.Size([20]) but got torch.Size([5, 4])
```
It seems that proper initialization does not occur when using only a single GPU.
### Versions
torch 2.6.0
cc @H-Huang @awgu @wanchaol @fegin @fduwjj @wz337 @wconstab @d4l3k
| true
|
2,992,942,401
|
[FSDP] Detail information about parameters when raising errors
|
efsotr
|
open
|
[
"oncall: distributed",
"triaged"
] | 0
|
NONE
|
### 🚀 The feature, motivation and pitch
When raising an error, please provide the fully qualified name of the parameter within the model hierarchy, such as `model.layers[0].self_attn.q_proj.weight`, `model.layers[0].mlp.gate_proj.weight`.
Relevant code, for example:
https://github.com/pytorch/pytorch/blob/d5a19e4525f49049f822930ed85fe32bb004589c/torch/distributed/fsdp/_flat_param.py#L2389-L2396
https://github.com/pytorch/pytorch/blob/d5a19e4525f49049f822930ed85fe32bb004589c/torch/distributed/fsdp/_flat_param.py#L784-L788
### Alternatives
_No response_
### Additional context
_No response_
cc @H-Huang @awgu @wanchaol @fegin @fduwjj @wz337 @wconstab @d4l3k
| true
|
2,992,915,351
|
[Easy] Fix the function signature of torch.Event
|
FFFrog
|
closed
|
[
"open source",
"Merged",
"Reverted",
"ciflow/trunk",
"topic: not user facing",
"ci-no-td"
] | 31
|
COLLABORATOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #151226
* #151411
* __->__ #151221
* #151404
As the title states.
There is a difference between the declaration and the implementation.
Declaration:
https://github.com/pytorch/pytorch/blob/d5a19e4525f49049f822930ed85fe32bb004589c/torch/_C/__init__.pyi.in#L157-L162
Implementation:
https://github.com/pytorch/pytorch/blob/d5a19e4525f49049f822930ed85fe32bb004589c/torch/csrc/Event.cpp#L30-L32
**Question**: Which one should we choose?
- Change enable_timing to False to be consistent with torch.cuda.Event
- Change enable_timing to True to avoid BC-break
| true
|
2,992,833,206
|
Rendezvous on dead node
|
georgkaleido
|
open
|
[
"oncall: distributed",
"triaged",
"open source",
"release notes: distributed (torchelastic)"
] | 2
|
NONE
|
This fixes #111646. If a participant in a completed (i.e., ongoing) rendezvous leaves, this does not trigger a re-rendezvous, even though the [docs state as much](https://github.com/pytorch/pytorch/blob/d5a19e4525f49049f822930ed85fe32bb004589c/torch/distributed/elastic/rendezvous/__init__.py#L69).
We force the agent to restart by adding a participant to the wait_list. This kicks off [all agents restarting and joining a new rendezvous](https://github.com/pytorch/pytorch/blob/142f0f86ce054f401d9d5145e4291629cafba45f/torch/distributed/elastic/agent/server/api.py#L906).
A test was also added to verify this behaviour.
An alternative would be to instead change the [API](https://github.com/pytorch/pytorch/blob/d5a19e4525f49049f822930ed85fe32bb004589c/torch/distributed/elastic/rendezvous/api.py#L144) so that a RendezvousBackend can expose a cleaner flag indicating the need for a restart.
Fixes #111646
cc @H-Huang @awgu @wanchaol @fegin @fduwjj @wz337 @wconstab @d4l3k
| true
|
2,992,754,383
|
Optimize typing in `lr_scheduler.py`
|
zeshengzong
|
closed
|
[
"open source",
"Merged",
"ciflow/trunk",
"release notes: optim"
] | 3
|
CONTRIBUTOR
|
## Changes
- Add typing annotation in `lr_scheduler.py`
## Test Result
```bash
pytest test/optim/test_lrscheduler.py -vv
```

| true
|
2,992,721,334
|
Implement MKLGenerator
|
michalowski-arm
|
open
|
[
"module: cpu",
"triaged",
"open source",
"topic: not user facing",
"module: dynamo",
"skip-url-lint"
] | 22
|
CONTRIBUTOR
|
This PR aims to fix the issue from #132395 by implementing a new `MKLGeneratorImpl` that stores a consistent, global `vslStream` for use in random number generation. This path was previously disabled due to a problem of repeating variates, caused by repeated reseeding of the MKL generator with variates from the `CPUGenerator`. This new implementation only seeds the `MKLGenerator` once using the `CPUGenerator`, and then keeps reusing the same `vslStream`, providing the full period of the RNG.
For the sake of reproducibility, the saving and restoring of the `MKLGenerator` has been linked to `CPUGenerator` state changes, and the former does not provide its own `get_state()` and `set_state()` functionality. The point was to keep the user experience identical to before -- they do not need to handle a separate `MKLGenerator` explicitly.
There already exists a test to check for repetition based on the script from #132395. It can be found in `test_distribution.py` as `test_multinomial_sequential_draw()`. For the old (reseeded) implementation of the MKL `vslStream`, this test showed 21 repetitions. For this new implementation, the test gives 0 repetitions as expected.
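A rough, hypothetical sketch of the kind of repetition check described above (the real test is `test_multinomial_sequential_draw()` in `test_distribution.py`): sequential multinomial draws from a healthy RNG stream should essentially never repeat exactly.
```python
import torch

# Hypothetical repetition check: draw many multinomial samples sequentially
# and count exact duplicates, which should be ~0 for a full-period RNG stream.
torch.manual_seed(0)
weights = torch.rand(1000)
seen, repetitions = set(), 0
for _ in range(5000):
    draw = tuple(torch.multinomial(weights, 10).tolist())
    if draw in seen:
        repetitions += 1
    seen.add(draw)
print("repetitions:", repetitions)
```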
cc @jgong5 @mingfeima @XiaobingSuper @sanchitintel @ashokei @jingxu10 @jerryzh168 @voznesenskym @penguinwu @EikanWang @Guobing-Chen @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @chauhang @amjames
| true
|
2,992,595,547
|
[Dynamo] add torch.Event && torch.Stream into _in_graph_classes of UserDefinedClassVariable
|
FFFrog
|
open
|
[
"open source",
"topic: not user facing",
"module: dynamo",
"ciflow/inductor"
] | 3
|
COLLABORATOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #151217
* #151213
* #151208
As the title states.
Repro code:
```Python
@torch.compile(backend="eager")
def func():
    stream = torch.Stream(device="cuda:0")
    event = torch.Event()
    event.record(stream)
    event.synchronize()
    return event.query()

print(func())
```
Before the change:
Return:
```Python
/root/Git.d/pytorch/pytorch/torch/_dynamo/variables/functions.py:1352: UserWarning: Dynamo does not know how to trace the builtin `None.Stream.__new__.` This function is either a Python builtin (e.g. _warnings.warn) or a third-party C/C++ Python extension (perhaps created with pybind).
If it is a Python builtin, please file an issue on GitHub so the PyTorch team can add support for it and see the next case for a workaround.
If it is a third-party C/C++ Python extension, please either wrap it into a PyTorch-understood custom operator (see https://pytorch.org/tutorials/advanced/custom_ops_landing_page.html for more details) or, if it is traceable, use `torch.compiler.allow_in_graph`.
torch._dynamo.utils.warn_once(explanation + "\n" + "\n".join(hints))
/root/Git.d/pytorch/pytorch/torch/_dynamo/variables/functions.py:1352: UserWarning: Dynamo does not know how to trace the builtin `None.Event.__new__.` This function is either a Python builtin (e.g. _warnings.warn) or a third-party C/C++ Python extension (perhaps created with pybind).
If it is a Python builtin, please file an issue on GitHub so the PyTorch team can add support for it and see the next case for a workaround.
If it is a third-party C/C++ Python extension, please either wrap it into a PyTorch-understood custom operator (see https://pytorch.org/tutorials/advanced/custom_ops_landing_page.html for more details) or, if it is traceable, use `torch.compiler.allow_in_graph`.
torch._dynamo.utils.warn_once(explanation + "\n" + "\n".join(hints))
True
```
Graph captured:
```Python
def forward(self):
stream = torch.Stream(stream_id = 3, device_index = 0, device_type = 1); stream = None
return ()
def forward(self):
get_user_object_from_id = torch__dynamo_utils_get_user_object_from_id(140287996703088)
stream = torch.Stream(stream_id = 3, device_index = 0, device_type = 1)
record = get_user_object_from_id.record(stream); stream = record = None
synchronize = get_user_object_from_id.synchronize(); synchronize = None
query = get_user_object_from_id.query(); get_user_object_from_id = query = None
return ()
```
After the change:
Return:
```Python
True
```
Graph captured:
```Python
def forward(self):
stream = torch.Stream(device = 'cuda:0')
event = torch.Event()
record = event.record(stream); stream = record = None
synchronize = event.synchronize(); synchronize = None
query = event.query(); event = query = None
return ()
```
| true
|
2,992,499,362
|
DISABLED test_queues (__main__.LibUvTCPStoreTest)
|
pytorch-bot[bot]
|
closed
|
[
"oncall: distributed",
"module: flaky-tests",
"skipped"
] | 3
|
NONE
|
Platforms: linux
This test was disabled because it is failing in CI. See [recent examples](https://hud.pytorch.org/flakytest?name=test_queues&suite=LibUvTCPStoreTest&limit=100) and the most recent trunk [workflow logs](https://github.com/pytorch/pytorch/runs/40480609862).
Over the past 3 hours, it has been determined flaky in 3 workflow(s) with 3 failures and 3 successes.
**Debugging instructions (after clicking on the recent samples link):**
DO NOT ASSUME THINGS ARE OKAY IF THE CI IS GREEN. We now shield flaky tests from developers so CI will thus be green but it will be harder to parse the logs.
To find relevant log snippets:
1. Click on the workflow logs linked above
2. Click on the Test step of the job so that it is expanded. Otherwise, the grepping will not work.
3. Grep for `test_queues`
4. There should be several instances run (as flaky tests are rerun in CI) from which you can study the logs.
<details><summary>Sample error message</summary>
```
Traceback (most recent call last):
File "/var/lib/jenkins/workspace/test/distributed/test_store.py", line 199, in test_queues
fut.result()
File "/opt/conda/envs/py_3.10/lib/python3.10/concurrent/futures/_base.py", line 458, in result
return self.__get_result()
File "/opt/conda/envs/py_3.10/lib/python3.10/concurrent/futures/_base.py", line 403, in __get_result
raise self._exception
File "/opt/conda/envs/py_3.10/lib/python3.10/concurrent/futures/thread.py", line 58, in run
result = self.fn(*self.args, **self.kwargs)
File "/var/lib/jenkins/workspace/test/distributed/test_store.py", line 184, in worker_a
self.assertEqual(local_store.queue_pop("b"), b"b1")
torch.distributed.DistStoreError: wait timeout after 10ms, keys: /b
To execute this test, run the following from the base repo dir:
python test/distributed/test_store.py LibUvTCPStoreTest.test_queues
This message can be suppressed by setting PYTORCH_PRINT_REPRO_ON_FAILURE=0
```
</details>
Test file path: `distributed/test_store.py`
cc @H-Huang @awgu @wanchaol @fegin @fduwjj @wz337 @wconstab @d4l3k @clee2000
| true
|
2,992,499,062
|
DISABLED test_queues (__main__.PrefixTCPStoreTest)
|
pytorch-bot[bot]
|
closed
|
[
"oncall: distributed",
"module: flaky-tests",
"skipped"
] | 3
|
NONE
|
Platforms: linux
This test was disabled because it is failing in CI. See [recent examples](https://hud.pytorch.org/flakytest?name=test_queues&suite=PrefixTCPStoreTest&limit=100) and the most recent trunk [workflow logs](https://github.com/pytorch/pytorch/runs/40480609862).
Over the past 3 hours, it has been determined flaky in 3 workflow(s) with 3 failures and 3 successes.
**Debugging instructions (after clicking on the recent samples link):**
DO NOT ASSUME THINGS ARE OKAY IF THE CI IS GREEN. We now shield flaky tests from developers so CI will thus be green but it will be harder to parse the logs.
To find relevant log snippets:
1. Click on the workflow logs linked above
2. Click on the Test step of the job so that it is expanded. Otherwise, the grepping will not work.
3. Grep for `test_queues`
4. There should be several instances run (as flaky tests are rerun in CI) from which you can study the logs.
<details><summary>Sample error message</summary>
```
Traceback (most recent call last):
File "/var/lib/jenkins/workspace/test/distributed/test_store.py", line 199, in test_queues
fut.result()
File "/opt/conda/envs/py_3.10/lib/python3.10/concurrent/futures/_base.py", line 458, in result
return self.__get_result()
File "/opt/conda/envs/py_3.10/lib/python3.10/concurrent/futures/_base.py", line 403, in __get_result
raise self._exception
File "/opt/conda/envs/py_3.10/lib/python3.10/concurrent/futures/thread.py", line 58, in run
result = self.fn(*self.args, **self.kwargs)
File "/var/lib/jenkins/workspace/test/distributed/test_store.py", line 184, in worker_a
self.assertEqual(local_store.queue_pop("b"), b"b1")
torch.distributed.DistStoreError: wait timeout after 10ms, keys: /test_prefix/b
To execute this test, run the following from the base repo dir:
python test/distributed/test_store.py PrefixTCPStoreTest.test_queues
This message can be suppressed by setting PYTORCH_PRINT_REPRO_ON_FAILURE=0
```
</details>
Test file path: `distributed/test_store.py`
cc @H-Huang @awgu @wanchaol @fegin @fduwjj @wz337 @wconstab @d4l3k @clee2000
| true
|
2,992,498,972
|
DISABLED test_parity__foreach_acos_fastpath_outplace_cuda_float64 (__main__.TestForeachCUDA)
|
pytorch-bot[bot]
|
open
|
[
"triaged",
"module: flaky-tests",
"skipped",
"module: mta"
] | 5
|
NONE
|
Platforms: linux
This test was disabled because it is failing in CI. See [recent examples](https://hud.pytorch.org/flakytest?name=test_parity__foreach_acos_fastpath_outplace_cuda_float64&suite=TestForeachCUDA&limit=100) and the most recent trunk [workflow logs](https://github.com/pytorch/pytorch/runs/40481457003).
Over the past 3 hours, it has been determined flaky in 3 workflow(s) with 6 failures and 3 successes.
**Debugging instructions (after clicking on the recent samples link):**
DO NOT ASSUME THINGS ARE OKAY IF THE CI IS GREEN. We now shield flaky tests from developers so CI will thus be green but it will be harder to parse the logs.
To find relevant log snippets:
1. Click on the workflow logs linked above
2. Click on the Test step of the job so that it is expanded. Otherwise, the grepping will not work.
3. Grep for `test_parity__foreach_acos_fastpath_outplace_cuda_float64`
4. There should be several instances run (as flaky tests are rerun in CI) from which you can study the logs.
<details><summary>Sample error message</summary>
```
Traceback (most recent call last):
File "/var/lib/jenkins/workspace/test/test_foreach.py", line 228, in test_parity
actual = func(
File "/var/lib/jenkins/workspace/test/test_foreach.py", line 91, in __call__
assert mta_called == (expect_fastpath and (not zero_size)), (
AssertionError: mta_called=False, expect_fastpath=True, zero_size=False, self.func.__name__='_foreach_acos', keys=('aten::_foreach_acos', 'Unrecognized', 'aten::empty_strided', 'cudaLaunchKernel', 'cudaDeviceSynchronize')
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_device_type.py", line 1161, in test_wrapper
return test(*args, **kwargs)
File "/opt/conda/envs/py_3.10/lib/python3.10/unittest/mock.py", line 1833, in _inner
return f(*args, **kw)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 1975, in wrap_fn
return fn(self, *args, **kwargs)
File "/var/lib/jenkins/workspace/test/test_foreach.py", line 235, in test_parity
with self.assertRaises(type(e)):
File "/opt/conda/envs/py_3.10/lib/python3.10/unittest/case.py", line 226, in __exit__
self._raiseFailure("{} not raised".format(exc_name))
File "/opt/conda/envs/py_3.10/lib/python3.10/unittest/case.py", line 163, in _raiseFailure
raise self.test_case.failureException(msg)
AssertionError: AssertionError not raised
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 3154, in wrapper
method(*args, **kwargs)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 3154, in wrapper
method(*args, **kwargs)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_device_type.py", line 454, in instantiated_test
result = test(self, **param_kwargs)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 1612, in wrapper
fn(*args, **kwargs)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_device_type.py", line 1173, in test_wrapper
raise e_tracked from e
Exception: Caused by sample input at index 0: SampleInput(input=TensorList[Tensor[size=(20, 20), device="cuda:0", dtype=torch.float64], Tensor[size=(19, 19), device="cuda:0", dtype=torch.float64], Tensor[size=(18, 18), device="cuda:0", dtype=torch.float64], Tensor[size=(17, 17), device="cuda:0", dtype=torch.float64], Tensor[size=(16, 16), device="cuda:0", dtype=torch.float64], Tensor[size=(15, 15), device="cuda:0", dtype=torch.float64], Tensor[size=(14, 14), device="cuda:0", dtype=torch.float64], Tensor[size=(13, 13), device="cuda:0", dtype=torch.float64], Tensor[size=(12, 12), device="cuda:0", dtype=torch.float64], Tensor[size=(11, 11), device="cuda:0", dtype=torch.float64], Tensor[size=(10, 10), device="cuda:0", dtype=torch.float64], Tensor[size=(9, 9), device="cuda:0", dtype=torch.float64], Tensor[size=(8, 8), device="cuda:0", dtype=torch.float64], Tensor[size=(7, 7), device="cuda:0", dtype=torch.float64], Tensor[size=(6, 6), device="cuda:0", dtype=torch.float64], Tensor[size=(5, 5), device="cuda:0", dtype=torch.float64], Tensor[size=(4, 4), device="cuda:0", dtype=torch.float64], Tensor[size=(3, 3), device="cuda:0", dtype=torch.float64], Tensor[size=(2, 2), device="cuda:0", dtype=torch.float64], Tensor[size=(1, 1), device="cuda:0", dtype=torch.float64]], args=(), kwargs={}, broadcasts_input=False, name='')
To execute this test, run the following from the base repo dir:
PYTORCH_OPINFO_SAMPLE_INPUT_INDEX=0 python test/test_foreach.py TestForeachCUDA.test_parity__foreach_acos_fastpath_outplace_cuda_float64
This message can be suppressed by setting PYTORCH_PRINT_REPRO_ON_FAILURE=0
```
</details>
Test file path: `test_foreach.py`
cc @clee2000 @crcrpar @mcarilli @janeyx99
| true
|
2,992,404,266
|
[Event] add weakref for torch.Event
|
FFFrog
|
open
|
[
"open source",
"topic: not user facing"
] | 3
|
COLLABORATOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #151217
* __->__ #151213
* #151208
**Background:**
`torch._dynamo.utils.store_user_object_weakref(value)` was introduced by this [PR](https://github.com/pytorch/pytorch/pull/133635/files#diff-9f0663783bcd93e948e0491ef61b48123bdc9977bcc632fd707da578df13bfa1R802) for `torch.xxx.Event`, but `torch.Event` doesn't support weakref.
So the code shown below will fail:
```Python
@torch.compile(backend="eager")
def fn():
    event = torch.cuda.Event()  # succeeds
    event = torch.Event()       # fails
```
**Optional solutions:**
- Use a Python class to wrap the current `torch.Event` class (Python classes not created via the C API support weakref by default)
- Add weakref capability via the Python C API (as this PR does)
**Question:**
For the test case: where should I put the tests (if necessary)?
| true
|
2,992,212,794
|
Super tiny fix typo
|
fzyzcjy
|
closed
|
[
"open source",
"Merged",
"topic: not user facing"
] | 3
|
CONTRIBUTOR
|
Fixes #ISSUE_NUMBER
| true
|
2,992,175,620
|
[xla hash update] update the pinned xla hash
|
pytorchupdatebot
|
closed
|
[
"open source",
"Merged",
"ciflow/trunk",
"topic: not user facing",
"ciflow/inductor"
] | 9
|
COLLABORATOR
|
This PR is auto-generated nightly by [this action](https://github.com/pytorch/pytorch/blob/main/.github/workflows/nightly.yml).
Update the pinned xla hash.
| true
|
2,992,175,519
|
[ONNX] exported nodes of Multi-head attention can be simplified
|
m23ayou2
|
open
|
[
"module: onnx",
"triaged"
] | 7
|
NONE
|
I am exporting the nn.MultiheadAttention layer from PyTorch to ONNX, and I see many new operations that are not expected.

Is it a bug or a feature?
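For context, a minimal sketch (assumed sizes, not the reporter's model) of exporting `nn.MultiheadAttention` to ONNX so the resulting graph and its operations can be inspected:
```python
import torch
import torch.nn as nn

# Assumed, illustrative configuration; inspect the exported graph (e.g. with
# Netron) to see how the attention layer is decomposed into many small ops.
mha = nn.MultiheadAttention(embed_dim=64, num_heads=4, batch_first=True)
mha.eval()
x = torch.randn(1, 10, 64)
torch.onnx.export(mha, (x, x, x), "mha.onnx", opset_version=17)
```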
| true
|
2,992,171,964
|
[Dynamo] Fix the unimplemented_v2 of EventVariable.call_method in ctx_manager.py
|
FFFrog
|
closed
|
[
"open source",
"Merged",
"ciflow/trunk",
"topic: not user facing",
"module: dynamo",
"ciflow/inductor"
] | 4
|
COLLABORATOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #151217
* #151213
* __->__ #151208
Changes:
- The `explanations` field should be `str` instead of `tuple`
- Not only `torch.cuda.Event` but also `torch.xpu.Event` can trigger this message.
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @chauhang @amjames
| true
|
2,992,171,404
|
Update slow tests
|
pytorchupdatebot
|
open
|
[
"open source",
"ciflow/trunk",
"topic: not user facing",
"ciflow/slow",
"ci-no-td"
] | 13
|
COLLABORATOR
|
This PR is auto-generated weekly by [this action](https://github.com/pytorch/pytorch/blob/main/.github/workflows/weekly.yml).
Update the list of slow tests.
| true
|
2,992,170,495
|
Fix corner case in `torch.arange()` where int64_t truncation leads to size 0
|
shink
|
open
|
[
"triaged",
"open source",
"topic: not user facing"
] | 3
|
CONTRIBUTOR
|
Fixes #149097
### Changes
This PR introduces a workaround for a corner case where casting start/end/step to int64_t may introduce precision loss. If all values are within the range that a double can represent exactly (i.e., [-2^53, 2^53]), we prefer double arithmetic for consistency across devices. Otherwise, we fall back to int64_t computation.
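A quick illustration of the precision boundary mentioned above: every integer up to 2^53 is exactly representable as a double, but beyond that consecutive integers start to collapse.
```python
# 2**53 is the last point where a double can represent every integer exactly.
print(float(2**53 - 1) == 2**53 - 1)      # True: still exact below the boundary
print(float(2**53) == float(2**53 + 1))   # True: 2**53 + 1 rounds back to 2**53
```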
### Tests
All results are the same as NumPy's.
```
python test/test_torch.py -k test_arange
```
cc: @albanD
| true
|
2,992,150,389
|
Fix `MaskedTensor` to device ignored mask
|
zeshengzong
|
open
|
[
"triaged",
"open source",
"topic: not user facing"
] | 3
|
CONTRIBUTOR
|
Fixes #147140
## Changes
- Add a `to` implementation in `MaskedTensor` to support moving `mask` to the target device
## Test Result
```python
In [1]: import torch
...: from torch.masked import as_masked_tensor
...: data = torch.tensor([1,2,3])
...: mask = torch.tensor([True,False,True])
...: mt = as_masked_tensor(data, mask).to('cuda')
...: mt.get_data().device, mt.get_mask().device
/home/zong/code/pytorch/torch/masked/maskedtensor/core.py:247: UserWarning: The PyTorch API of MaskedTensors is in prototype stage and will change in the near future. Please open a Github issue for features requests and see our documentation on the torch.masked module for further information about the project.
return MaskedTensor(data, mask)
/home/zong/code/pytorch/torch/masked/maskedtensor/_ops_refs.py:354: UserWarning: The PyTorch API of MaskedTensors is in prototype stage and will change in the near future. Please open a Github issue for features requests and see our documentation on the torch.masked module for further information about the project.
return MaskedTensor(new_data, _maybe_get_mask(args[0]))
Out[1]: (device(type='cuda', index=0), device(type='cuda', index=0))
In [2]: mt.sum(dim=0)
/home/zong/code/pytorch/torch/masked/maskedtensor/core.py:247: UserWarning: The PyTorch API of MaskedTensors is in prototype stage and will change in the near future. Please open a Github issue for features requests and see our documentation on the torch.masked module for further information about the project.
return MaskedTensor(data, mask)
Out[2]: MaskedTensor(4, True)
```
```bash
pytest test/test_maskedtensor.py -vv
```

| true
|
2,992,080,311
|
[Dynamo] Dynamo fails to trace reduce_scatter_v
|
yyp0
|
open
|
[
"oncall: distributed",
"triaged",
"module: c10d",
"oncall: pt2",
"module: dynamo"
] | 3
|
NONE
|
### 🐛 Describe the bug
When attempting to trace a module containing reduce_scatter operations with Dynamo (where input tensors have varying sizes), the following error occurs:
```
from user code:
File "<eval_with_key>.0", line 659, in forward
call_backward_2 = torch__dynamo_external_utils_call_backward(getitem_638, (getitem_8,), _softmax_backward_data); getitem_638 = _softmax_backward_data = None
File "/home/tiger/.pyenv/versions/3.11.2/lib/python3.11/site-packages/torch/_dynamo/external_utils.py", line 108, in call_backward
grads = fake._forward_cls.backward(fake, *args) # type: ignore[attr-defined]
File "/opt/tiger/mariana/janus/megatron/gate.py", line 567, in backward
torch.distributed.reduce_scatter(
File "/home/tiger/.pyenv/versions/3.11.2/lib/python3.11/site-packages/torch/distributed/c10d_logger.py", line 81, in wrapper
return func(*args, **kwargs)
File "/home/tiger/.pyenv/versions/3.11.2/lib/python3.11/site-packages/torch/distributed/distributed_c10d.py", line 4159, in reduce_scatter
opts = ReduceScatterOptions()
File "/home/tiger/.pyenv/versions/3.11.2/lib/python3.11/site-packages/torch/_dynamo/polyfills/__init__.py", line 173, in instantiate_user_defined_class_object
obj = cls.__new__(cls, *args, **kwargs)
```
It seems that Dynamo currently lacks support for tracing torch.distributed.reduce_scatter with non-uniform inputs. Are there any workarounds to enable Dynamo compatibility with variable-sized reduce_scatter?
@anijain2305 @zhuhaozhe @EikanWang
### Versions
Versions of relevant libraries:
[pip3] numpy==1.23.3
[pip3] optree==0.14.0
[pip3] torch==2.6.0a0+git037c3cc
[pip3] torchlibrosa==0.1.0
[pip3] torchvision==0.22.0a0+fab1188
[pip3] torchvision==0.22.0a0+fab1188
[pip3] triton==3.0.0
[conda] Could not collect
cc @H-Huang @awgu @wanchaol @fegin @fduwjj @wz337 @wconstab @d4l3k @chauhang @penguinwu @voznesenskym @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @amjames
| true
|
2,992,065,212
|
Turn off symm_mem when cuda version is <12.3
|
xw285cornell
|
closed
|
[
"oncall: distributed",
"fb-exported",
"Merged",
"ciflow/trunk",
"release notes: distributed (c10d)"
] | 8
|
CONTRIBUTOR
|
Summary: It looks like symmetric memory only supports CUDA 12.3+. We do have the definition for CUDA versions below 12.3, but we don't have an implementation, so it may be a good idea to disable the definition as well.
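A hedged sketch (assumed helper, not the PR's actual diff) of the kind of version gate being described, keyed off the CUDA version PyTorch was built with:
```python
import torch

# Hypothetical guard: only enable symmetric memory when the CUDA toolkit used
# to build PyTorch is at least 12.3; torch.version.cuda is None for CPU-only
# or ROCm builds.
def cuda_build_at_least(major: int, minor: int) -> bool:
    if torch.version.cuda is None:
        return False
    built = tuple(int(x) for x in torch.version.cuda.split(".")[:2])
    return built >= (major, minor)

symm_mem_enabled = cuda_build_at_least(12, 3)
print("symmetric memory enabled:", symm_mem_enabled)
```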
Test Plan: CI
Reviewed By: jianyuh, houseroad, ngimel, jiawenliu64
Differential Revision: D72936993
cc @H-Huang @awgu @wanchaol @fegin @fduwjj @wz337 @wconstab @d4l3k
| true
|
2,991,946,118
|
torch.compile doesn't respect `torch.set_default_device`
|
bobrenjc93
|
open
|
[
"triaged",
"oncall: pt2",
"module: dynamo"
] | 2
|
CONTRIBUTOR
|
```
import torch
torch.set_default_device('cuda')
def foo(x):
return x * torch.randn(1)
x = torch.randn(1)
foo(x) # eager ok
torch.compile(foo)(x) # compile not ok because torch.randn doesn't use the default device
```
gives the following compile error
```
(/home/bobren/local/a/pytorch-env) [22:39] devgpu035:/home/bobren/local/a/pytorch python r.py
Traceback (most recent call last):
File "/data/users/bobren/a/pytorch/r.py", line 10, in <module>
torch.compile(foo)(x) # not ok
File "/data/users/bobren/a/pytorch/torch/_dynamo/eval_frame.py", line 658, in _fn
return fn(*args, **kwargs)
File "/data/users/bobren/a/pytorch/torch/_dynamo/convert_frame.py", line 1452, in __call__
return self._torchdynamo_orig_callable(
File "/data/users/bobren/a/pytorch/torch/_dynamo/convert_frame.py", line 1233, in __call__
result = self._inner_convert(
File "/data/users/bobren/a/pytorch/torch/_dynamo/convert_frame.py", line 619, in __call__
return _compile(
File "/data/users/bobren/a/pytorch/torch/_dynamo/convert_frame.py", line 1079, in _compile
guarded_code = compile_inner(code, one_graph, hooks, transform)
File "/data/users/bobren/a/pytorch/torch/_utils_internal.py", line 97, in wrapper_function
return function(*args, **kwargs)
File "/data/users/bobren/a/pytorch/torch/_dynamo/convert_frame.py", line 779, in compile_inner
return _compile_inner(code, one_graph, hooks, transform)
File "/data/users/bobren/a/pytorch/torch/_dynamo/convert_frame.py", line 815, in _compile_inner
out_code = transform_code_object(code, transform)
File "/data/users/bobren/a/pytorch/torch/_dynamo/bytecode_transformation.py", line 1422, in transform_code_object
transformations(instructions, code_options)
File "/data/users/bobren/a/pytorch/torch/_dynamo/convert_frame.py", line 264, in _fn
return fn(*args, **kwargs)
File "/data/users/bobren/a/pytorch/torch/_dynamo/convert_frame.py", line 736, in transform
tracer.run()
File "/data/users/bobren/a/pytorch/torch/_dynamo/symbolic_convert.py", line 3519, in run
super().run()
File "/data/users/bobren/a/pytorch/torch/_dynamo/symbolic_convert.py", line 1345, in run
while self.step():
File "/data/users/bobren/a/pytorch/torch/_dynamo/symbolic_convert.py", line 1254, in step
self.dispatch_table[inst.opcode](self, inst)
File "/data/users/bobren/a/pytorch/torch/_dynamo/symbolic_convert.py", line 421, in impl
self.push(fn_var.call_function(self, self.popn(nargs), {}))
File "/data/users/bobren/a/pytorch/torch/_dynamo/variables/builtin.py", line 1113, in call_function
return handler(tx, args, kwargs)
File "/data/users/bobren/a/pytorch/torch/_dynamo/variables/builtin.py", line 791, in <lambda>
return lambda tx, args, kwargs: obj.call_function(
File "/data/users/bobren/a/pytorch/torch/_dynamo/variables/builtin.py", line 1113, in call_function
return handler(tx, args, kwargs)
File "/data/users/bobren/a/pytorch/torch/_dynamo/variables/builtin.py", line 991, in _handle_insert_op_in_graph
return dispatch_torch_function(tx, fn_var, args, kwargs)
File "/data/users/bobren/a/pytorch/torch/_dynamo/variables/torch_function.py", line 558, in dispatch_torch_function
res = tx.symbolic_torch_function_state.call_torch_function_mode(
File "/data/users/bobren/a/pytorch/torch/_dynamo/variables/torch_function.py", line 283, in call_torch_function_mode
return cur_mode.call_torch_function(tx, fn, types, args, kwargs)
File "/data/users/bobren/a/pytorch/torch/_dynamo/variables/torch_function.py", line 401, in call_torch_function
return call_torch_function(
File "/data/users/bobren/a/pytorch/torch/_dynamo/variables/torch_function.py", line 515, in call_torch_function
return tx.inline_user_function_return(torch_function_var, tf_args, {})
File "/data/users/bobren/a/pytorch/torch/_dynamo/symbolic_convert.py", line 1195, in inline_user_function_return
return InliningInstructionTranslator.inline_call(self, fn, args, kwargs)
File "/data/users/bobren/a/pytorch/torch/_dynamo/symbolic_convert.py", line 3745, in inline_call
return tracer.inline_call_()
File "/data/users/bobren/a/pytorch/torch/_dynamo/symbolic_convert.py", line 3928, in inline_call_
self.run()
File "/data/users/bobren/a/pytorch/torch/_dynamo/symbolic_convert.py", line 1345, in run
while self.step():
File "/data/users/bobren/a/pytorch/torch/_dynamo/symbolic_convert.py", line 1254, in step
self.dispatch_table[inst.opcode](self, inst)
File "/data/users/bobren/a/pytorch/torch/_dynamo/symbolic_convert.py", line 827, in wrapper
return inner_fn(self, inst)
File "/data/users/bobren/a/pytorch/torch/_dynamo/symbolic_convert.py", line 2272, in CALL_FUNCTION_EX
self.call_function(fn, argsvars.items, kwargsvars)
File "/data/users/bobren/a/pytorch/torch/_dynamo/symbolic_convert.py", line 1178, in call_function
self.push(fn.call_function(self, args, kwargs)) # type: ignore[arg-type]
File "/data/users/bobren/a/pytorch/torch/_dynamo/variables/torch.py", line 1205, in call_function
return self.call_tensor_method(tx, args, kwargs)
File "/data/users/bobren/a/pytorch/torch/_dynamo/variables/torch.py", line 1480, in call_tensor_method
return args[0].call_method(tx, self.get_function().__name__, args[1:], kwargs)
File "/data/users/bobren/a/pytorch/torch/_dynamo/variables/tensor.py", line 634, in call_method
return wrap_fx_proxy(
File "/data/users/bobren/a/pytorch/torch/_dynamo/variables/builder.py", line 2362, in wrap_fx_proxy
return wrap_fx_proxy_cls(target_cls=TensorVariable, **kwargs)
File "/data/users/bobren/a/pytorch/torch/_dynamo/variables/builder.py", line 2428, in wrap_fx_proxy_cls
return _wrap_fx_proxy(
File "/data/users/bobren/a/pytorch/torch/_dynamo/variables/builder.py", line 2526, in _wrap_fx_proxy
example_value = get_fake_value(proxy.node, tx, allow_non_graph_fake=True)
File "/data/users/bobren/a/pytorch/torch/_dynamo/utils.py", line 3267, in get_fake_value
raise TorchRuntimeError(str(e)).with_traceback(e.__traceback__) from None
File "/data/users/bobren/a/pytorch/torch/_dynamo/utils.py", line 3165, in get_fake_value
ret_val = wrap_fake_exception(
File "/data/users/bobren/a/pytorch/torch/_dynamo/utils.py", line 2679, in wrap_fake_exception
return fn()
File "/data/users/bobren/a/pytorch/torch/_dynamo/utils.py", line 3166, in <lambda>
lambda: run_node(tx.output, node, args, kwargs, nnmodule)
File "/data/users/bobren/a/pytorch/torch/_dynamo/utils.py", line 3363, in run_node
raise RuntimeError(make_error_message(e)).with_traceback(
File "/data/users/bobren/a/pytorch/torch/_dynamo/utils.py", line 3333, in run_node
return getattr(args[0], node.target)(*args[1:], **kwargs)
File "/data/users/bobren/a/pytorch/torch/utils/_stats.py", line 27, in wrapper
return fn(*args, **kwargs)
File "/data/users/bobren/a/pytorch/torch/_subclasses/fake_tensor.py", line 1311, in __torch_dispatch__
return self.dispatch(func, types, args, kwargs)
File "/data/users/bobren/a/pytorch/torch/_subclasses/fake_tensor.py", line 1932, in dispatch
return self._cached_dispatch_impl(func, types, args, kwargs)
File "/data/users/bobren/a/pytorch/torch/_subclasses/fake_tensor.py", line 1414, in _cached_dispatch_impl
output = self._dispatch_impl(func, types, args, kwargs)
File "/data/users/bobren/a/pytorch/torch/_subclasses/fake_tensor.py", line 2562, in _dispatch_impl
self.wrap_meta_outputs_with_default_device_logic(
File "/data/users/bobren/a/pytorch/torch/_subclasses/fake_tensor.py", line 2689, in wrap_meta_outputs_with_default_device_logic
return tree_map(wrap, r)
File "/data/users/bobren/a/pytorch/torch/utils/_pytree.py", line 1355, in tree_map
return treespec.unflatten(map(func, *flat_args))
File "/data/users/bobren/a/pytorch/torch/utils/_pytree.py", line 1192, in unflatten
leaves = list(leaves)
File "/data/users/bobren/a/pytorch/torch/_subclasses/fake_tensor.py", line 2667, in wrap
) = FakeTensor._find_common_device(func, flat_args)
File "/data/users/bobren/a/pytorch/torch/_subclasses/fake_tensor.py", line 923, in _find_common_device
merge_devices(arg)
File "/data/users/bobren/a/pytorch/torch/_subclasses/fake_tensor.py", line 918, in merge_devices
raise RuntimeError(
torch._dynamo.exc.TorchRuntimeError: Dynamo failed to run FX node with fake tensors: call_method mul(*(FakeTensor(..., device='cuda:0', size=(1,)), FakeTensor(..., size=(1,))), **{}): got RuntimeError('Unhandled FakeTensor Device Propagation for aten.mul.Tensor, found two different devices cuda:0, cpu')
from user code:
File "/data/users/bobren/a/pytorch/r.py", line 6, in foo
return x * torch.randn(1)
File "/data/users/bobren/a/pytorch/torch/utils/_device.py", line 104, in __torch_function__
return func(*args, **kwargs)
Set TORCHDYNAMO_VERBOSE=1 for the internal stack trace (please do this especially if you're reporting a bug to PyTorch). For even more developer context, set TORCH_LOGS="+dynamo"
```
cc @chauhang @penguinwu @voznesenskym @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @amjames
| true
|
2,991,873,813
|
I can't compile Pytorch 2.0.0 because ninja: error: build.ninja:10911: multiple rules generate caffe2/CMakeFiles/torch_cpu.dir/__/aten/src/ATen/UfuncCPUKernel_add.cpp.DEFAULT.cpp.o
|
gty1829
|
open
|
[
"needs reproduction",
"module: build",
"triaged"
] | 2
|
NONE
|
After I run `DEBUG=1 USE_DISTRIBUTED=0 USE_MKLDNN=0 USE_CUDA=0 BUILD_TEST=0 USE_FBGEMM=0 USE_NNPACK=0 USE_QNNPACK=0 USE_XNNPACK=0 python setup.py develop`, the compiler finally outputs the following:

I didn't change CMakeList.txt.
When I downloaded the source code from GitHub, I found that running `git submodule sync` or `git submodule update --init --recursive` produced no output (as shown below), but I can view the submodules via `cat .gitmodules`. So I manually downloaded the submodules, such as third_party/benchmark, from GitHub. I don't know whether this affects the compilation error mentioned above.

My device and envs:
Ubuntu 22.04
CMake 3.31.0
ninja 1.12.1
Python 3.10
cc @malfet @seemethere
| true
|
2,991,846,243
|
[torch.export] Exported model with LSTM has outputs c_n and h_n with wrong dimensions
|
alaa-ali
|
open
|
[
"oncall: pt2",
"oncall: export"
] | 1
|
NONE
|
### 🐛 Describe the bug
There is a bug in torch.export when exporting a model with an LSTM layer.
When running the following source code in Python, the two internal-state outputs of the LSTM layer (h_n, c_n) don't match the expected shapes: they are 4D with an additional, unnecessary singleton dimension.
```
import torch
import torch.nn as nn
class CustomModel(nn.Module):
def __init__(self, kwargs):
super(CustomModel, self).__init__()
self.lstm = nn.LSTM(input_size=kwargs['input_size'], hidden_size=kwargs['hidden_size'], num_layers=kwargs['num_layers'], bias=kwargs['bias'], batch_first=kwargs['batch_first'], dropout=kwargs['dropout'], bidirectional=kwargs['bidirectional'], proj_size=kwargs['proj_size'])
def forward(self, *args):
input = args[0]
output, (h_n, c_n) = self.lstm(input)
return output, h_n, c_n
model = CustomModel(kwargs={'input_size': 13,
'hidden_size': 20,
'num_layers': 1,
'bias': False,
'batch_first': True,
'dropout': 0.2,
'bidirectional': True,
'proj_size': 0})
sample_input = torch.rand(4, 10, 13)
exported_model = torch.export.export(model, (sample_input,))
print(exported_model)
```
The resulting exported model:
```
lstm = torch.ops.aten.lstm.input(args_0, [zeros, zeros_1], [p_lstm_weight_ih_l0, p_lstm_weight_hh_l0, p_lstm_weight_ih_l0_reverse, p_lstm_weight_hh_l0_reverse], False, 1, 0.2, True, True, True); args_0 = zeros = zeros_1 = p_lstm_weight_ih_l0 = p_lstm_weight_hh_l0 = p_lstm_weight_ih_l0_reverse = p_lstm_weight_hh_l0_reverse = None
getitem: "f32[4, 10, 40]" = lstm[0]
getitem_1: "f32[2, 1, 4, 20]" = lstm[1]
getitem_2: "f32[2, 1, 4, 20]" = lstm[2]; lstm = None
return (getitem, getitem_1, getitem_2)
```
Note that the last two outputs of the LSTM layer (h_n, c_n) are 4D with an additional singleton dimension, although they should be 3D, as documented here:
https://pytorch.org/docs/stable/generated/torch.nn.LSTM.html

### Versions
Python version: 3.11.2
Python platform: Linux-6.1.0-32-amd64-x86_64-with-glibc2.36
Is CUDA available: True
Versions of relevant libraries:
[pip3] numpy==2.2.4
[pip3] nvidia-cublas-cu12==12.4.5.8
[pip3] nvidia-cuda-cupti-cu12==12.4.127
[pip3] nvidia-cuda-nvrtc-cu12==12.4.127
[pip3] nvidia-cuda-runtime-cu12==12.4.127
[pip3] nvidia-cudnn-cu12==9.1.0.70
[pip3] nvidia-cufft-cu12==11.2.1.3
[pip3] nvidia-curand-cu12==10.3.5.147
[pip3] nvidia-cusolver-cu12==11.6.1.9
[pip3] nvidia-cusparse-cu12==12.3.1.170
[pip3] nvidia-cusparselt-cu12==0.6.2
[pip3] nvidia-nccl-cu12==2.21.5
[pip3] nvidia-nvjitlink-cu12==12.4.127
[pip3] nvidia-nvtx-cu12==12.4.127
[pip3] torch==2.6.0
[pip3] torchaudio==2.6.0
[pip3] torchvision==0.21.0
[pip3] triton==3.2.0
cc @chauhang @penguinwu @avikchaudhuri @gmagogsfm @zhxchen17 @tugsbayasgalan @angelayi @suo @ydwu4
| true
|
2,991,822,586
|
[CUDA Graph tree] Cannot capture buffer allocation on side CUDA Streams
|
lirundong
|
closed
|
[
"triaged",
"module: cuda graphs"
] | 7
|
NONE
|
### 🐛 Describe the bug
The Inductor CUDA Graph Tree implementation cannot capture multi-stream programs that contain buffer allocations on side streams.
## A minimal example
```python
import torch
from torch._inductor.cudagraph_trees import cudagraphify_impl
from torch._inductor.cudagraph_trees import reset_cudagraph_trees
def multi_stream_allocation(args):
main_stream = torch.cuda.current_stream()
side_stream = torch.cuda.Stream()
entry = main_stream.record_event()
with torch.cuda.stream(side_stream):
entry.wait(side_stream)
side_stream_buffer = torch.ones(*args, device="cuda:0", dtype=torch.float32)
side_exit = side_stream.record_event()
main_stream_buffer = torch.ones(*args, device="cuda:0", dtype=torch.float32)
side_exit.wait(main_stream)
if isinstance(args, list):
# Reflect the CUDA GraphTree warmup logic implemented in
# https://github.com/pytorch/pytorch/blob/81aee3c9/torch/_inductor/cudagraph_trees.py#L682
args.clear()
return main_stream_buffer, side_stream_buffer
if __name__ == "__main__":
torch._dynamo.reset()
reset_cudagraph_trees()
# Expect error message like
# RuntimeError: These storage data ptrs are not allocated in pool (0, 1) but should be {139780908122112}
graphed_multi_stream_func = cudagraphify_impl(
multi_stream_allocation,
inputs=[],
static_input_idxs=[],
is_backward=False,
is_inference=False,
device_index=0,
)
for i in range(3):
main_stream_buffer, side_stream_buffer = graphed_multi_stream_func([2, 3])
print(f"#{i}: {main_stream_buffer.norm()=}")
print(f"#{i}: {side_stream_buffer.norm()=}")
```
Output
```log
RuntimeError: These storage data ptrs are not allocated in pool (0, 1) but should be {139780908122112}
```
### Versions
[collect_env.log](https://github.com/user-attachments/files/19729417/collect_env.log)
cc @mcarilli @ezyang @eellison @penguinwu @BoyuanFeng
| true
|
2,991,770,840
|
[inductor] [silent incorrectness] `torch.nn.PairwiseDistance(p=2)` outputs incorrect results with eager
|
shaoyuyoung
|
open
|
[
"high priority",
"triaged",
"module: correctness (silent)",
"oncall: pt2",
"module: inductor",
"ubn"
] | 8
|
CONTRIBUTOR
|
### 🐛 Describe the bug
**symptom**: `torch.nn.PairwiseDistance(p=2)` outputs incorrect results
**device backend**: both triton and CPP
**note**: I have used `fp64` as baseline to compare the results
**repro**
```python
import torch
import torch.nn as nn
import torch.nn.functional as F
from torch._inductor import config
import os
config.fallback_random = True
torch.set_grad_enabled(False)
torch.manual_seed(0)
class Model(torch.nn.Module):
def __init__(self):
super(Model, self).__init__()
self.pad = torch.nn.ReflectionPad3d(1)
self.dist = torch.nn.PairwiseDistance(p=2)
def forward(self, x):
x = self.pad(x)
x = x.view(x.size(0), -1)
x = torch.chunk(x, 2, dim=1)
x = self.dist(x[0], x[1])
return x
model = Model().eval().cuda()
x = torch.randn(2, 3, 4, 4, 4).cuda()
inputs = [x]
def run_test(model, inputs, backend):
if backend != "eager":
model = torch.compile(model, backend=backend)
torch.manual_seed(0)
output = model(*inputs)
return output
output = run_test(model, inputs, 'eager')
c_output = run_test(model, inputs, 'inductor')
fp64 = run_test(model.to(dtype=torch.float64), [x.to(dtype=torch.float64)], 'eager')
print(output)
print(c_output)
print(fp64)
print(torch.allclose(output, c_output, 1e-3, 1e-3, equal_nan=True))
print(torch._dynamo.utils.same(output, c_output, fp64))
print(torch.max(torch.abs(output - c_output)))
```
### Error logs
```
tensor([22.9208, 22.6405], device='cuda:0')
tensor([23.1078, 21.4387], device='cuda:0')
tensor([22.9208, 22.6405], device='cuda:0', dtype=torch.float64)
False
E0414 11:20:47.280000 958741 site-packages/torch/_dynamo/utils.py:2930] RMSE (res-fp64): 0.86004, (ref-fp64): 0.00000 and shape=torch.Size([2]). res.dtype: torch.float32, multiplier: 3.000000, tol: 0.000100, use_larger_multiplier_for_smaller_tensor: 0
False
tensor(1.2018, device='cuda:0')
```
### Versions
nightly 20250414
cc @ezyang @gchanan @zou3519 @kadeng @msaroufim @chauhang @penguinwu @voznesenskym @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @chenyang78 @muchulee8 @amjames @aakhundov
| true
|
2,991,725,957
|
Fix `keepdim` param optional description
|
zeshengzong
|
closed
|
[
"triaged",
"open source",
"Merged",
"ciflow/trunk",
"release notes: python_frontend"
] | 7
|
CONTRIBUTOR
|
Fixes #151104
Fix optional description of `dim` and `keepdim`, except `torch.quantile` which already fixed in #146485
## Test Result
### Before

### After

cc @soulitzer
| true
|
2,991,645,017
|
compile of vmap of jacfwd fails
|
marikgoldstein
|
open
|
[
"triaged",
"oncall: pt2",
"module: functorch"
] | 0
|
NONE
|
### 🐛 Describe the bug
Hi Pytorch Compile developers,
Thanks so much for the great library and functionality.
I looked around but couldn't find the following as an existing issue.
In short:
calling vmap of jacfwd of a function works for me, but compiling it triggers an internal assert whose error message asks me to report the bug. jacrev works fine, but isn't optimal for my setup.
Please let me know what you think, and apologies if I'm doing anything silly.
Setup:
For batch size N, I have a network that takes in a batch of times of shape (N,) and an image of shape (N,C,H,W). The network also produces an image of (N,C,H,W). For example, a diffusion model.
I'd like the pixel-wise derivative of the output with respect to t. So the model is net(t, x) and I want (d/dt) net(t, x).
I'd like to do this in a forward pass so I can compute a loss function comparing the image of time derivatives to some value.
(Of course, let me know if there are also just some other better ways to do this, but still reporting the bug)
If I'm not mistaken:
- In general, the Jacobian here is (N, C, H, W, N), but it factors across datapoints, so it makes sense to differentiate a single-datapoint function and then vmap it, giving the desired (N, C, H, W) tensor containing d/dt of each output pixel.
- jacfwd makes more sense here than jacrev, since the output dimension is high (an image) and the input dimension is low (a scalar).
The error:
torch._dynamo.exc.TorchRuntimeError: Dynamo failed to run FX node with fake tensors: call_function <built-in function mul>(*(FakeTensor(..., device='cuda:0', size=(1,), requires_grad=True), GradTrackingTensor(lvl=3, value=
BatchedTensor(lvl=1, bdim=0, value=
FakeTensor(..., device='cuda:0', size=(16, 1, 1, 1, 1))
)
)), **{}): got RuntimeError('InferenceMode::is_enabled() && self.is_inference() INTERNAL ASSERT FAILED at "/pytorch/aten/src/ATen/native/VariableMethodStubs.cpp":66, please report a bug to PyTorch. Expected this method to only be reached in inference mode and when all the inputs are inference tensors. You should NOT call this method directly as native::_fw_primal. Please use the dispatcher, i.e., at::_fw_primal. Please file an issue if you come across this error otherwise.')
Versions/hardware:
- Python 3.9.19
- torch 2.8.0.dev20250412+cu126
- NVIDIA-SMI 560.28.03, Driver Version: 560.28.03, CUDA Version: 12.6
- NVIDIA A100 80GB PCIe
Here I include code to reproduce:
```
import torch
import torch.nn as nn
from torch.func import vmap, jacfwd, jacrev
import os
class Net(nn.Module):
# a simple network with 1 param that multiples it times t and x
def __init__(self,):
super().__init__()
self.param = nn.Parameter(
torch.randn(1), requires_grad=True,
)
def forward(self, t, x):
return self.param * t[:, None, None, None] * x
if __name__ == '__main__':
os.environ['TORCHDYNAMO_VERBOSE']='1'
device = torch.device('cuda')
net = Net()
net.to(device)
N = 16
t = torch.rand(N,).to(device)
x = torch.randn(N, 3, 32, 32).to(device)
net.train()
def f_single(t_i, x_i):
return net(t_i[None, ...], x_i[None, ...]).squeeze(0)
# jacobian on single datapoint method and then vmap
# so that jacobian is computed separately per datapoint
# yielding desired (N, C, H, W) instead of needless (N, C, H, W, N)
ddt_f_single = jacfwd(f_single, argnums=0)
ddt_f = vmap(ddt_f_single)
output = ddt_f(t, x)
print("output shape", output.shape)
# above works. but calling the compiled func fails.
ddt_f = torch.compile(ddt_f)
output2 = ddt_f(t, x)
```
### Error logs
[goldsm20@a100-4029 directory_name]$ python minimal.py
output shape torch.Size([16, 3, 32, 32])
Traceback (most recent call last):
File "/gpfs/data/ranganathlab/mark/flows_ddt/minimal.py", line 43, in <module>
output2 = ddt_f(t, x)
File "/gpfs/scratch/goldsm20/miniconda3/envs/ddc/lib/python3.9/site-packages/torch/_dynamo/eval_frame.py", line 658, in _fn
return fn(*args, **kwargs)
File "/gpfs/scratch/goldsm20/miniconda3/envs/ddc/lib/python3.9/site-packages/torch/_dynamo/convert_frame.py", line 1452, in __call__
return self._torchdynamo_orig_callable(
File "/gpfs/scratch/goldsm20/miniconda3/envs/ddc/lib/python3.9/site-packages/torch/_dynamo/convert_frame.py", line 1233, in __call__
result = self._inner_convert(
File "/gpfs/scratch/goldsm20/miniconda3/envs/ddc/lib/python3.9/site-packages/torch/_dynamo/convert_frame.py", line 619, in __call__
return _compile(
File "/gpfs/scratch/goldsm20/miniconda3/envs/ddc/lib/python3.9/site-packages/torch/_dynamo/convert_frame.py", line 1079, in _compile
guarded_code = compile_inner(code, one_graph, hooks, transform)
File "/gpfs/scratch/goldsm20/miniconda3/envs/ddc/lib/python3.9/site-packages/torch/_utils_internal.py", line 97, in wrapper_function
return function(*args, **kwargs)
File "/gpfs/scratch/goldsm20/miniconda3/envs/ddc/lib/python3.9/site-packages/torch/_dynamo/convert_frame.py", line 779, in compile_inner
return _compile_inner(code, one_graph, hooks, transform)
File "/gpfs/scratch/goldsm20/miniconda3/envs/ddc/lib/python3.9/site-packages/torch/_dynamo/convert_frame.py", line 815, in _compile_inner
out_code = transform_code_object(code, transform)
File "/gpfs/scratch/goldsm20/miniconda3/envs/ddc/lib/python3.9/site-packages/torch/_dynamo/bytecode_transformation.py", line 1422, in transform_code_object
transformations(instructions, code_options)
File "/gpfs/scratch/goldsm20/miniconda3/envs/ddc/lib/python3.9/site-packages/torch/_dynamo/convert_frame.py", line 264, in _fn
return fn(*args, **kwargs)
File "/gpfs/scratch/goldsm20/miniconda3/envs/ddc/lib/python3.9/site-packages/torch/_dynamo/convert_frame.py", line 736, in transform
tracer.run()
File "/gpfs/scratch/goldsm20/miniconda3/envs/ddc/lib/python3.9/site-packages/torch/_dynamo/symbolic_convert.py", line 3491, in run
super().run()
File "/gpfs/scratch/goldsm20/miniconda3/envs/ddc/lib/python3.9/site-packages/torch/_dynamo/symbolic_convert.py", line 1345, in run
while self.step():
File "/gpfs/scratch/goldsm20/miniconda3/envs/ddc/lib/python3.9/site-packages/torch/_dynamo/symbolic_convert.py", line 1254, in step
self.dispatch_table[inst.opcode](self, inst)
File "/gpfs/scratch/goldsm20/miniconda3/envs/ddc/lib/python3.9/site-packages/torch/_dynamo/symbolic_convert.py", line 827, in wrapper
return inner_fn(self, inst)
File "/gpfs/scratch/goldsm20/miniconda3/envs/ddc/lib/python3.9/site-packages/torch/_dynamo/symbolic_convert.py", line 2244, in CALL_FUNCTION_EX
self.call_function(fn, argsvars.items, kwargsvars)
File "/gpfs/scratch/goldsm20/miniconda3/envs/ddc/lib/python3.9/site-packages/torch/_dynamo/symbolic_convert.py", line 1178, in call_function
self.push(fn.call_function(self, args, kwargs)) # type: ignore[arg-type]
File "/gpfs/scratch/goldsm20/miniconda3/envs/ddc/lib/python3.9/site-packages/torch/_dynamo/variables/higher_order_ops.py", line 1812, in call_function
return super().call_function(tx, args, kwargs)
File "/gpfs/scratch/goldsm20/miniconda3/envs/ddc/lib/python3.9/site-packages/torch/_dynamo/variables/functions.py", line 405, in call_function
return super().call_function(tx, args, kwargs)
File "/gpfs/scratch/goldsm20/miniconda3/envs/ddc/lib/python3.9/site-packages/torch/_dynamo/variables/functions.py", line 186, in call_function
return tx.inline_user_function_return(self, [*self.self_args(), *args], kwargs)
File "/gpfs/scratch/goldsm20/miniconda3/envs/ddc/lib/python3.9/site-packages/torch/_dynamo/symbolic_convert.py", line 1195, in inline_user_function_return
return InliningInstructionTranslator.inline_call(self, fn, args, kwargs)
File "/gpfs/scratch/goldsm20/miniconda3/envs/ddc/lib/python3.9/site-packages/torch/_dynamo/symbolic_convert.py", line 3717, in inline_call
return tracer.inline_call_()
File "/gpfs/scratch/goldsm20/miniconda3/envs/ddc/lib/python3.9/site-packages/torch/_dynamo/symbolic_convert.py", line 3900, in inline_call_
self.run()
File "/gpfs/scratch/goldsm20/miniconda3/envs/ddc/lib/python3.9/site-packages/torch/_dynamo/symbolic_convert.py", line 1345, in run
while self.step():
File "/gpfs/scratch/goldsm20/miniconda3/envs/ddc/lib/python3.9/site-packages/torch/_dynamo/symbolic_convert.py", line 1254, in step
self.dispatch_table[inst.opcode](self, inst)
File "/gpfs/scratch/goldsm20/miniconda3/envs/ddc/lib/python3.9/site-packages/torch/_dynamo/symbolic_convert.py", line 827, in wrapper
return inner_fn(self, inst)
File "/gpfs/scratch/goldsm20/miniconda3/envs/ddc/lib/python3.9/site-packages/torch/_dynamo/symbolic_convert.py", line 2244, in CALL_FUNCTION_EX
self.call_function(fn, argsvars.items, kwargsvars)
File "/gpfs/scratch/goldsm20/miniconda3/envs/ddc/lib/python3.9/site-packages/torch/_dynamo/symbolic_convert.py", line 1178, in call_function
self.push(fn.call_function(self, args, kwargs)) # type: ignore[arg-type]
File "/gpfs/scratch/goldsm20/miniconda3/envs/ddc/lib/python3.9/site-packages/torch/_dynamo/variables/functions.py", line 405, in call_function
return super().call_function(tx, args, kwargs)
File "/gpfs/scratch/goldsm20/miniconda3/envs/ddc/lib/python3.9/site-packages/torch/_dynamo/variables/functions.py", line 186, in call_function
return tx.inline_user_function_return(self, [*self.self_args(), *args], kwargs)
File "/gpfs/scratch/goldsm20/miniconda3/envs/ddc/lib/python3.9/site-packages/torch/_dynamo/symbolic_convert.py", line 1195, in inline_user_function_return
return InliningInstructionTranslator.inline_call(self, fn, args, kwargs)
File "/gpfs/scratch/goldsm20/miniconda3/envs/ddc/lib/python3.9/site-packages/torch/_dynamo/symbolic_convert.py", line 3717, in inline_call
return tracer.inline_call_()
File "/gpfs/scratch/goldsm20/miniconda3/envs/ddc/lib/python3.9/site-packages/torch/_dynamo/symbolic_convert.py", line 3900, in inline_call_
self.run()
File "/gpfs/scratch/goldsm20/miniconda3/envs/ddc/lib/python3.9/site-packages/torch/_dynamo/symbolic_convert.py", line 1345, in run
while self.step():
File "/gpfs/scratch/goldsm20/miniconda3/envs/ddc/lib/python3.9/site-packages/torch/_dynamo/symbolic_convert.py", line 1254, in step
self.dispatch_table[inst.opcode](self, inst)
File "/gpfs/scratch/goldsm20/miniconda3/envs/ddc/lib/python3.9/site-packages/torch/_dynamo/symbolic_convert.py", line 827, in wrapper
return inner_fn(self, inst)
File "/gpfs/scratch/goldsm20/miniconda3/envs/ddc/lib/python3.9/site-packages/torch/_dynamo/symbolic_convert.py", line 2244, in CALL_FUNCTION_EX
self.call_function(fn, argsvars.items, kwargsvars)
File "/gpfs/scratch/goldsm20/miniconda3/envs/ddc/lib/python3.9/site-packages/torch/_dynamo/symbolic_convert.py", line 1178, in call_function
self.push(fn.call_function(self, args, kwargs)) # type: ignore[arg-type]
File "/gpfs/scratch/goldsm20/miniconda3/envs/ddc/lib/python3.9/site-packages/torch/_dynamo/variables/functions.py", line 405, in call_function
return super().call_function(tx, args, kwargs)
File "/gpfs/scratch/goldsm20/miniconda3/envs/ddc/lib/python3.9/site-packages/torch/_dynamo/variables/functions.py", line 186, in call_function
return tx.inline_user_function_return(self, [*self.self_args(), *args], kwargs)
File "/gpfs/scratch/goldsm20/miniconda3/envs/ddc/lib/python3.9/site-packages/torch/_dynamo/symbolic_convert.py", line 1195, in inline_user_function_return
return InliningInstructionTranslator.inline_call(self, fn, args, kwargs)
File "/gpfs/scratch/goldsm20/miniconda3/envs/ddc/lib/python3.9/site-packages/torch/_dynamo/symbolic_convert.py", line 3717, in inline_call
return tracer.inline_call_()
File "/gpfs/scratch/goldsm20/miniconda3/envs/ddc/lib/python3.9/site-packages/torch/_dynamo/symbolic_convert.py", line 3900, in inline_call_
self.run()
File "/gpfs/scratch/goldsm20/miniconda3/envs/ddc/lib/python3.9/site-packages/torch/_dynamo/symbolic_convert.py", line 1345, in run
while self.step():
File "/gpfs/scratch/goldsm20/miniconda3/envs/ddc/lib/python3.9/site-packages/torch/_dynamo/symbolic_convert.py", line 1254, in step
self.dispatch_table[inst.opcode](self, inst)
File "/gpfs/scratch/goldsm20/miniconda3/envs/ddc/lib/python3.9/site-packages/torch/_dynamo/symbolic_convert.py", line 827, in wrapper
return inner_fn(self, inst)
File "/gpfs/scratch/goldsm20/miniconda3/envs/ddc/lib/python3.9/site-packages/torch/_dynamo/symbolic_convert.py", line 2146, in CALL_FUNCTION
self.call_function(fn, args, {})
File "/gpfs/scratch/goldsm20/miniconda3/envs/ddc/lib/python3.9/site-packages/torch/_dynamo/symbolic_convert.py", line 1178, in call_function
self.push(fn.call_function(self, args, kwargs)) # type: ignore[arg-type]
File "/gpfs/scratch/goldsm20/miniconda3/envs/ddc/lib/python3.9/site-packages/torch/_dynamo/variables/functions.py", line 186, in call_function
return tx.inline_user_function_return(self, [*self.self_args(), *args], kwargs)
File "/gpfs/scratch/goldsm20/miniconda3/envs/ddc/lib/python3.9/site-packages/torch/_dynamo/symbolic_convert.py", line 1195, in inline_user_function_return
return InliningInstructionTranslator.inline_call(self, fn, args, kwargs)
File "/gpfs/scratch/goldsm20/miniconda3/envs/ddc/lib/python3.9/site-packages/torch/_dynamo/symbolic_convert.py", line 3717, in inline_call
return tracer.inline_call_()
File "/gpfs/scratch/goldsm20/miniconda3/envs/ddc/lib/python3.9/site-packages/torch/_dynamo/symbolic_convert.py", line 3900, in inline_call_
self.run()
File "/gpfs/scratch/goldsm20/miniconda3/envs/ddc/lib/python3.9/site-packages/torch/_dynamo/symbolic_convert.py", line 1345, in run
while self.step():
File "/gpfs/scratch/goldsm20/miniconda3/envs/ddc/lib/python3.9/site-packages/torch/_dynamo/symbolic_convert.py", line 1254, in step
self.dispatch_table[inst.opcode](self, inst)
File "/gpfs/scratch/goldsm20/miniconda3/envs/ddc/lib/python3.9/site-packages/torch/_dynamo/symbolic_convert.py", line 827, in wrapper
return inner_fn(self, inst)
File "/gpfs/scratch/goldsm20/miniconda3/envs/ddc/lib/python3.9/site-packages/torch/_dynamo/symbolic_convert.py", line 2244, in CALL_FUNCTION_EX
self.call_function(fn, argsvars.items, kwargsvars)
File "/gpfs/scratch/goldsm20/miniconda3/envs/ddc/lib/python3.9/site-packages/torch/_dynamo/symbolic_convert.py", line 1178, in call_function
self.push(fn.call_function(self, args, kwargs)) # type: ignore[arg-type]
File "/gpfs/scratch/goldsm20/miniconda3/envs/ddc/lib/python3.9/site-packages/torch/_dynamo/variables/higher_order_ops.py", line 1812, in call_function
return super().call_function(tx, args, kwargs)
File "/gpfs/scratch/goldsm20/miniconda3/envs/ddc/lib/python3.9/site-packages/torch/_dynamo/variables/functions.py", line 405, in call_function
return super().call_function(tx, args, kwargs)
File "/gpfs/scratch/goldsm20/miniconda3/envs/ddc/lib/python3.9/site-packages/torch/_dynamo/variables/functions.py", line 186, in call_function
return tx.inline_user_function_return(self, [*self.self_args(), *args], kwargs)
File "/gpfs/scratch/goldsm20/miniconda3/envs/ddc/lib/python3.9/site-packages/torch/_dynamo/symbolic_convert.py", line 1195, in inline_user_function_return
return InliningInstructionTranslator.inline_call(self, fn, args, kwargs)
File "/gpfs/scratch/goldsm20/miniconda3/envs/ddc/lib/python3.9/site-packages/torch/_dynamo/symbolic_convert.py", line 3717, in inline_call
return tracer.inline_call_()
File "/gpfs/scratch/goldsm20/miniconda3/envs/ddc/lib/python3.9/site-packages/torch/_dynamo/symbolic_convert.py", line 3900, in inline_call_
self.run()
File "/gpfs/scratch/goldsm20/miniconda3/envs/ddc/lib/python3.9/site-packages/torch/_dynamo/symbolic_convert.py", line 1345, in run
while self.step():
File "/gpfs/scratch/goldsm20/miniconda3/envs/ddc/lib/python3.9/site-packages/torch/_dynamo/symbolic_convert.py", line 1254, in step
self.dispatch_table[inst.opcode](self, inst)
File "/gpfs/scratch/goldsm20/miniconda3/envs/ddc/lib/python3.9/site-packages/torch/_dynamo/symbolic_convert.py", line 827, in wrapper
return inner_fn(self, inst)
File "/gpfs/scratch/goldsm20/miniconda3/envs/ddc/lib/python3.9/site-packages/torch/_dynamo/symbolic_convert.py", line 2244, in CALL_FUNCTION_EX
self.call_function(fn, argsvars.items, kwargsvars)
File "/gpfs/scratch/goldsm20/miniconda3/envs/ddc/lib/python3.9/site-packages/torch/_dynamo/symbolic_convert.py", line 1178, in call_function
self.push(fn.call_function(self, args, kwargs)) # type: ignore[arg-type]
File "/gpfs/scratch/goldsm20/miniconda3/envs/ddc/lib/python3.9/site-packages/torch/_dynamo/variables/functions.py", line 405, in call_function
return super().call_function(tx, args, kwargs)
File "/gpfs/scratch/goldsm20/miniconda3/envs/ddc/lib/python3.9/site-packages/torch/_dynamo/variables/functions.py", line 186, in call_function
return tx.inline_user_function_return(self, [*self.self_args(), *args], kwargs)
File "/gpfs/scratch/goldsm20/miniconda3/envs/ddc/lib/python3.9/site-packages/torch/_dynamo/symbolic_convert.py", line 1195, in inline_user_function_return
return InliningInstructionTranslator.inline_call(self, fn, args, kwargs)
File "/gpfs/scratch/goldsm20/miniconda3/envs/ddc/lib/python3.9/site-packages/torch/_dynamo/symbolic_convert.py", line 3717, in inline_call
return tracer.inline_call_()
File "/gpfs/scratch/goldsm20/miniconda3/envs/ddc/lib/python3.9/site-packages/torch/_dynamo/symbolic_convert.py", line 3900, in inline_call_
self.run()
File "/gpfs/scratch/goldsm20/miniconda3/envs/ddc/lib/python3.9/site-packages/torch/_dynamo/symbolic_convert.py", line 1345, in run
while self.step():
File "/gpfs/scratch/goldsm20/miniconda3/envs/ddc/lib/python3.9/site-packages/torch/_dynamo/symbolic_convert.py", line 1254, in step
self.dispatch_table[inst.opcode](self, inst)
File "/gpfs/scratch/goldsm20/miniconda3/envs/ddc/lib/python3.9/site-packages/torch/_dynamo/symbolic_convert.py", line 827, in wrapper
return inner_fn(self, inst)
File "/gpfs/scratch/goldsm20/miniconda3/envs/ddc/lib/python3.9/site-packages/torch/_dynamo/symbolic_convert.py", line 2244, in CALL_FUNCTION_EX
self.call_function(fn, argsvars.items, kwargsvars)
File "/gpfs/scratch/goldsm20/miniconda3/envs/ddc/lib/python3.9/site-packages/torch/_dynamo/symbolic_convert.py", line 1178, in call_function
self.push(fn.call_function(self, args, kwargs)) # type: ignore[arg-type]
File "/gpfs/scratch/goldsm20/miniconda3/envs/ddc/lib/python3.9/site-packages/torch/_dynamo/variables/functions.py", line 186, in call_function
return tx.inline_user_function_return(self, [*self.self_args(), *args], kwargs)
File "/gpfs/scratch/goldsm20/miniconda3/envs/ddc/lib/python3.9/site-packages/torch/_dynamo/symbolic_convert.py", line 1195, in inline_user_function_return
return InliningInstructionTranslator.inline_call(self, fn, args, kwargs)
File "/gpfs/scratch/goldsm20/miniconda3/envs/ddc/lib/python3.9/site-packages/torch/_dynamo/symbolic_convert.py", line 3717, in inline_call
return tracer.inline_call_()
File "/gpfs/scratch/goldsm20/miniconda3/envs/ddc/lib/python3.9/site-packages/torch/_dynamo/symbolic_convert.py", line 3900, in inline_call_
self.run()
File "/gpfs/scratch/goldsm20/miniconda3/envs/ddc/lib/python3.9/site-packages/torch/_dynamo/symbolic_convert.py", line 1345, in run
while self.step():
File "/gpfs/scratch/goldsm20/miniconda3/envs/ddc/lib/python3.9/site-packages/torch/_dynamo/symbolic_convert.py", line 1254, in step
self.dispatch_table[inst.opcode](self, inst)
File "/gpfs/scratch/goldsm20/miniconda3/envs/ddc/lib/python3.9/site-packages/torch/_dynamo/symbolic_convert.py", line 827, in wrapper
return inner_fn(self, inst)
File "/gpfs/scratch/goldsm20/miniconda3/envs/ddc/lib/python3.9/site-packages/torch/_dynamo/symbolic_convert.py", line 2256, in CALL_FUNCTION_KW
self.call_function(fn, args, kwargs)
File "/gpfs/scratch/goldsm20/miniconda3/envs/ddc/lib/python3.9/site-packages/torch/_dynamo/symbolic_convert.py", line 1178, in call_function
self.push(fn.call_function(self, args, kwargs)) # type: ignore[arg-type]
File "/gpfs/scratch/goldsm20/miniconda3/envs/ddc/lib/python3.9/site-packages/torch/_dynamo/variables/functions.py", line 405, in call_function
return super().call_function(tx, args, kwargs)
File "/gpfs/scratch/goldsm20/miniconda3/envs/ddc/lib/python3.9/site-packages/torch/_dynamo/variables/functions.py", line 186, in call_function
return tx.inline_user_function_return(self, [*self.self_args(), *args], kwargs)
File "/gpfs/scratch/goldsm20/miniconda3/envs/ddc/lib/python3.9/site-packages/torch/_dynamo/symbolic_convert.py", line 1195, in inline_user_function_return
return InliningInstructionTranslator.inline_call(self, fn, args, kwargs)
File "/gpfs/scratch/goldsm20/miniconda3/envs/ddc/lib/python3.9/site-packages/torch/_dynamo/symbolic_convert.py", line 3717, in inline_call
return tracer.inline_call_()
File "/gpfs/scratch/goldsm20/miniconda3/envs/ddc/lib/python3.9/site-packages/torch/_dynamo/symbolic_convert.py", line 3900, in inline_call_
self.run()
File "/gpfs/scratch/goldsm20/miniconda3/envs/ddc/lib/python3.9/site-packages/torch/_dynamo/symbolic_convert.py", line 1345, in run
while self.step():
File "/gpfs/scratch/goldsm20/miniconda3/envs/ddc/lib/python3.9/site-packages/torch/_dynamo/symbolic_convert.py", line 1254, in step
self.dispatch_table[inst.opcode](self, inst)
File "/gpfs/scratch/goldsm20/miniconda3/envs/ddc/lib/python3.9/site-packages/torch/_dynamo/symbolic_convert.py", line 827, in wrapper
return inner_fn(self, inst)
File "/gpfs/scratch/goldsm20/miniconda3/envs/ddc/lib/python3.9/site-packages/torch/_dynamo/symbolic_convert.py", line 2244, in CALL_FUNCTION_EX
self.call_function(fn, argsvars.items, kwargsvars)
File "/gpfs/scratch/goldsm20/miniconda3/envs/ddc/lib/python3.9/site-packages/torch/_dynamo/symbolic_convert.py", line 1178, in call_function
self.push(fn.call_function(self, args, kwargs)) # type: ignore[arg-type]
File "/gpfs/scratch/goldsm20/miniconda3/envs/ddc/lib/python3.9/site-packages/torch/_dynamo/variables/functions.py", line 405, in call_function
return super().call_function(tx, args, kwargs)
File "/gpfs/scratch/goldsm20/miniconda3/envs/ddc/lib/python3.9/site-packages/torch/_dynamo/variables/functions.py", line 186, in call_function
return tx.inline_user_function_return(self, [*self.self_args(), *args], kwargs)
File "/gpfs/scratch/goldsm20/miniconda3/envs/ddc/lib/python3.9/site-packages/torch/_dynamo/symbolic_convert.py", line 1195, in inline_user_function_return
return InliningInstructionTranslator.inline_call(self, fn, args, kwargs)
File "/gpfs/scratch/goldsm20/miniconda3/envs/ddc/lib/python3.9/site-packages/torch/_dynamo/symbolic_convert.py", line 3717, in inline_call
return tracer.inline_call_()
File "/gpfs/scratch/goldsm20/miniconda3/envs/ddc/lib/python3.9/site-packages/torch/_dynamo/symbolic_convert.py", line 3900, in inline_call_
self.run()
File "/gpfs/scratch/goldsm20/miniconda3/envs/ddc/lib/python3.9/site-packages/torch/_dynamo/symbolic_convert.py", line 1345, in run
while self.step():
File "/gpfs/scratch/goldsm20/miniconda3/envs/ddc/lib/python3.9/site-packages/torch/_dynamo/symbolic_convert.py", line 1254, in step
self.dispatch_table[inst.opcode](self, inst)
File "/gpfs/scratch/goldsm20/miniconda3/envs/ddc/lib/python3.9/site-packages/torch/_dynamo/symbolic_convert.py", line 827, in wrapper
return inner_fn(self, inst)
File "/gpfs/scratch/goldsm20/miniconda3/envs/ddc/lib/python3.9/site-packages/torch/_dynamo/symbolic_convert.py", line 2146, in CALL_FUNCTION
self.call_function(fn, args, {})
File "/gpfs/scratch/goldsm20/miniconda3/envs/ddc/lib/python3.9/site-packages/torch/_dynamo/symbolic_convert.py", line 1178, in call_function
self.push(fn.call_function(self, args, kwargs)) # type: ignore[arg-type]
File "/gpfs/scratch/goldsm20/miniconda3/envs/ddc/lib/python3.9/site-packages/torch/_dynamo/variables/nn_module.py", line 952, in call_function
return variables.UserFunctionVariable(fn, source=source).call_function(
File "/gpfs/scratch/goldsm20/miniconda3/envs/ddc/lib/python3.9/site-packages/torch/_dynamo/variables/functions.py", line 405, in call_function
return super().call_function(tx, args, kwargs)
File "/gpfs/scratch/goldsm20/miniconda3/envs/ddc/lib/python3.9/site-packages/torch/_dynamo/variables/functions.py", line 186, in call_function
return tx.inline_user_function_return(self, [*self.self_args(), *args], kwargs)
File "/gpfs/scratch/goldsm20/miniconda3/envs/ddc/lib/python3.9/site-packages/torch/_dynamo/symbolic_convert.py", line 1195, in inline_user_function_return
return InliningInstructionTranslator.inline_call(self, fn, args, kwargs)
File "/gpfs/scratch/goldsm20/miniconda3/envs/ddc/lib/python3.9/site-packages/torch/_dynamo/symbolic_convert.py", line 3717, in inline_call
return tracer.inline_call_()
File "/gpfs/scratch/goldsm20/miniconda3/envs/ddc/lib/python3.9/site-packages/torch/_dynamo/symbolic_convert.py", line 3900, in inline_call_
self.run()
File "/gpfs/scratch/goldsm20/miniconda3/envs/ddc/lib/python3.9/site-packages/torch/_dynamo/symbolic_convert.py", line 1345, in run
while self.step():
File "/gpfs/scratch/goldsm20/miniconda3/envs/ddc/lib/python3.9/site-packages/torch/_dynamo/symbolic_convert.py", line 1254, in step
self.dispatch_table[inst.opcode](self, inst)
File "/gpfs/scratch/goldsm20/miniconda3/envs/ddc/lib/python3.9/site-packages/torch/_dynamo/symbolic_convert.py", line 421, in impl
self.push(fn_var.call_function(self, self.popn(nargs), {}))
File "/gpfs/scratch/goldsm20/miniconda3/envs/ddc/lib/python3.9/site-packages/torch/_dynamo/variables/builtin.py", line 1114, in call_function
return handler(tx, args, kwargs)
File "/gpfs/scratch/goldsm20/miniconda3/envs/ddc/lib/python3.9/site-packages/torch/_dynamo/variables/builtin.py", line 792, in <lambda>
return lambda tx, args, kwargs: obj.call_function(
File "/gpfs/scratch/goldsm20/miniconda3/envs/ddc/lib/python3.9/site-packages/torch/_dynamo/variables/builtin.py", line 1114, in call_function
return handler(tx, args, kwargs)
File "/gpfs/scratch/goldsm20/miniconda3/envs/ddc/lib/python3.9/site-packages/torch/_dynamo/variables/builtin.py", line 1079, in _handle_insert_op_in_graph
return wrap_fx_proxy(tx, proxy)
File "/gpfs/scratch/goldsm20/miniconda3/envs/ddc/lib/python3.9/site-packages/torch/_dynamo/variables/builder.py", line 2362, in wrap_fx_proxy
return wrap_fx_proxy_cls(target_cls=TensorVariable, **kwargs)
File "/gpfs/scratch/goldsm20/miniconda3/envs/ddc/lib/python3.9/site-packages/torch/_dynamo/variables/builder.py", line 2428, in wrap_fx_proxy_cls
return _wrap_fx_proxy(
File "/gpfs/scratch/goldsm20/miniconda3/envs/ddc/lib/python3.9/site-packages/torch/_dynamo/variables/builder.py", line 2526, in _wrap_fx_proxy
example_value = get_fake_value(proxy.node, tx, allow_non_graph_fake=True)
File "/gpfs/scratch/goldsm20/miniconda3/envs/ddc/lib/python3.9/site-packages/torch/_dynamo/utils.py", line 3269, in get_fake_value
raise TorchRuntimeError(str(e)).with_traceback(e.__traceback__) from None
File "/gpfs/scratch/goldsm20/miniconda3/envs/ddc/lib/python3.9/site-packages/torch/_dynamo/utils.py", line 3167, in get_fake_value
ret_val = wrap_fake_exception(
File "/gpfs/scratch/goldsm20/miniconda3/envs/ddc/lib/python3.9/site-packages/torch/_dynamo/utils.py", line 2681, in wrap_fake_exception
return fn()
File "/gpfs/scratch/goldsm20/miniconda3/envs/ddc/lib/python3.9/site-packages/torch/_dynamo/utils.py", line 3168, in <lambda>
lambda: run_node(tx.output, node, args, kwargs, nnmodule)
File "/gpfs/scratch/goldsm20/miniconda3/envs/ddc/lib/python3.9/site-packages/torch/_dynamo/utils.py", line 3365, in run_node
raise RuntimeError(make_error_message(e)).with_traceback(
File "/gpfs/scratch/goldsm20/miniconda3/envs/ddc/lib/python3.9/site-packages/torch/_dynamo/utils.py", line 3324, in run_node
return node.target(*args, **kwargs)
File "/gpfs/scratch/goldsm20/miniconda3/envs/ddc/lib/python3.9/site-packages/torch/utils/_stats.py", line 27, in wrapper
return fn(*args, **kwargs)
File "/gpfs/scratch/goldsm20/miniconda3/envs/ddc/lib/python3.9/site-packages/torch/_subclasses/fake_tensor.py", line 1311, in __torch_dispatch__
return self.dispatch(func, types, args, kwargs)
File "/gpfs/scratch/goldsm20/miniconda3/envs/ddc/lib/python3.9/site-packages/torch/_subclasses/fake_tensor.py", line 1932, in dispatch
return self._cached_dispatch_impl(func, types, args, kwargs)
File "/gpfs/scratch/goldsm20/miniconda3/envs/ddc/lib/python3.9/site-packages/torch/_subclasses/fake_tensor.py", line 1423, in _cached_dispatch_impl
output = self._dispatch_impl(func, types, args, kwargs)
File "/gpfs/scratch/goldsm20/miniconda3/envs/ddc/lib/python3.9/site-packages/torch/_subclasses/fake_tensor.py", line 2211, in _dispatch_impl
(flat_args, flat_arg_fake_tensors) = self.validate_and_convert_non_fake_tensors(
File "/gpfs/scratch/goldsm20/miniconda3/envs/ddc/lib/python3.9/site-packages/torch/_subclasses/fake_tensor.py", line 2640, in validate_and_convert_non_fake_tensors
validated_args = [validate(a) for a in flat_args]
File "/gpfs/scratch/goldsm20/miniconda3/envs/ddc/lib/python3.9/site-packages/torch/_subclasses/fake_tensor.py", line 2640, in <listcomp>
validated_args = [validate(a) for a in flat_args]
File "/gpfs/scratch/goldsm20/miniconda3/envs/ddc/lib/python3.9/site-packages/torch/_subclasses/fake_tensor.py", line 2630, in validate
f"with 'allow_non_fake_inputs'. Found in {render_call(func, args, kwargs)}"
File "/gpfs/scratch/goldsm20/miniconda3/envs/ddc/lib/python3.9/site-packages/torch/_utils.py", line 694, in render_call
str_args.extend(repr(a) for a in args)
File "/gpfs/scratch/goldsm20/miniconda3/envs/ddc/lib/python3.9/site-packages/torch/_utils.py", line 694, in <genexpr>
str_args.extend(repr(a) for a in args)
File "/gpfs/scratch/goldsm20/miniconda3/envs/ddc/lib/python3.9/site-packages/torch/_tensor.py", line 590, in __repr__
return torch._tensor_str._str(self, tensor_contents=tensor_contents)
File "/gpfs/scratch/goldsm20/miniconda3/envs/ddc/lib/python3.9/site-packages/torch/_tensor_str.py", line 726, in _str
return _str_intern(self, tensor_contents=tensor_contents)
File "/gpfs/scratch/goldsm20/miniconda3/envs/ddc/lib/python3.9/site-packages/torch/_tensor_str.py", line 439, in _str_intern
self, tangent = torch.autograd.forward_ad.unpack_dual(inp)
File "/gpfs/scratch/goldsm20/miniconda3/envs/ddc/lib/python3.9/site-packages/torch/autograd/forward_ad.py", line 168, in unpack_dual
primal, dual = torch._VF._unpack_dual(tensor, level=level)
torch._dynamo.exc.TorchRuntimeError: Dynamo failed to run FX node with fake tensors: call_function <built-in function mul>(*(FakeTensor(..., device='cuda:0', size=(1,), requires_grad=True), GradTrackingTensor(lvl=3, value=
BatchedTensor(lvl=1, bdim=0, value=
FakeTensor(..., device='cuda:0', size=(16, 1, 1, 1, 1))
)
)), **{}): got RuntimeError('InferenceMode::is_enabled() && self.is_inference() INTERNAL ASSERT FAILED at "/pytorch/aten/src/ATen/native/VariableMethodStubs.cpp":66, please report a bug to PyTorch. Expected this method to only be reached in inference mode and when all the inputs are inference tensors. You should NOT call this method directly as native::_fw_primal. Please use the dispatcher, i.e., at::_fw_primal. Please file an issue if you come across this error otherwise.')
from user code:
File "/gpfs/scratch/goldsm20/miniconda3/envs/ddc/lib/python3.9/site-packages/torch/_functorch/apis.py", line 202, in wrapped
return vmap_impl(
File "/gpfs/scratch/goldsm20/miniconda3/envs/ddc/lib/python3.9/site-packages/torch/_functorch/vmap.py", line 334, in vmap_impl
return _flat_vmap(
File "/gpfs/scratch/goldsm20/miniconda3/envs/ddc/lib/python3.9/site-packages/torch/_functorch/vmap.py", line 484, in _flat_vmap
batched_outputs = func(*batched_inputs, **kwargs)
File "/gpfs/scratch/goldsm20/miniconda3/envs/ddc/lib/python3.9/site-packages/torch/_functorch/eager_transforms.py", line 1273, in wrapper_fn
results = vmap(push_jvp, randomness=randomness)(basis)
File "/gpfs/scratch/goldsm20/miniconda3/envs/ddc/lib/python3.9/site-packages/torch/_functorch/apis.py", line 202, in wrapped
return vmap_impl(
File "/gpfs/scratch/goldsm20/miniconda3/envs/ddc/lib/python3.9/site-packages/torch/_functorch/vmap.py", line 334, in vmap_impl
return _flat_vmap(
File "/gpfs/scratch/goldsm20/miniconda3/envs/ddc/lib/python3.9/site-packages/torch/_functorch/vmap.py", line 484, in _flat_vmap
batched_outputs = func(*batched_inputs, **kwargs)
File "/gpfs/scratch/goldsm20/miniconda3/envs/ddc/lib/python3.9/site-packages/torch/_functorch/eager_transforms.py", line 1262, in push_jvp
output = _jvp_with_argnums(
File "/gpfs/scratch/goldsm20/miniconda3/envs/ddc/lib/python3.9/site-packages/torch/_functorch/eager_transforms.py", line 1101, in _jvp_with_argnums
result_duals = func(*duals)
File "/gpfs/data/ranganathlab/mark/flows_ddt/minimal.py", line 31, in f_single
return net(t_i[None, ...], x_i[None, ...]).squeeze(0)
File "/gpfs/data/ranganathlab/mark/flows_ddt/minimal.py", line 14, in forward
return self.param * t[:, None, None, None] * x
### Versions
[goldsm20@a100-4029 directory_name]$ python3 collect_env.py
Collecting environment information...
PyTorch version: 2.8.0.dev20250412+cu126
Is debug build: False
CUDA used to build PyTorch: 12.6
ROCM used to build PyTorch: N/A
OS: Red Hat Enterprise Linux release 8.8 (Ootpa) (x86_64)
GCC version: (GCC) 8.5.0 20210514 (Red Hat 8.5.0-18)
Clang version: Could not collect
CMake version: Could not collect
Libc version: glibc-2.28
Python version: 3.9.19 (main, Mar 21 2024, 17:11:28) [GCC 11.2.0] (64-bit runtime)
Python platform: Linux-4.18.0-477.27.1.el8_8.x86_64-x86_64-with-glibc2.28
Is CUDA available: True
CUDA runtime version: 12.6.20
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration: GPU 0: NVIDIA A100 80GB PCIe
Nvidia driver version: 560.28.03
cuDNN version: Could not collect
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Byte Order: Little Endian
CPU(s): 48
On-line CPU(s) list: 0-47
Thread(s) per core: 1
Core(s) per socket: 24
Socket(s): 2
NUMA node(s): 4
Vendor ID: GenuineIntel
CPU family: 6
Model: 106
Model name: Intel(R) Xeon(R) Gold 6342 CPU @ 2.80GHz
Stepping: 6
CPU MHz: 3500.000
CPU max MHz: 3500.0000
CPU min MHz: 800.0000
BogoMIPS: 5600.00
L1d cache: 48K
L1i cache: 32K
L2 cache: 1280K
L3 cache: 36864K
NUMA node0 CPU(s): 0-11
NUMA node1 CPU(s): 12-23
NUMA node2 CPU(s): 24-35
NUMA node3 CPU(s): 36-47
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc art arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc cpuid aperfmperf pni pclmulqdq dtes64 monitor ds_cpl smx est tm2 ssse3 sdbg fma cx16 xtpr pdcm pcid dca sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm 3dnowprefetch cpuid_fault epb cat_l3 invpcid_single intel_ppin ssbd mba ibrs ibpb stibp ibrs_enhanced fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid cqm rdt_a avx512f avx512dq rdseed adx smap avx512ifma clflushopt clwb intel_pt avx512cd sha_ni avx512bw avx512vl xsaveopt xsavec xgetbv1 xsaves cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local split_lock_detect wbnoinvd dtherm ida arat pln pts avx512vbmi umip pku ospke avx512_vbmi2 gfni vaes vpclmulqdq avx512_vnni avx512_bitalg tme avx512_vpopcntdq la57 rdpid fsrm md_clear pconfig flush_l1d arch_capabilities
Versions of relevant libraries:
[pip3] numpy==1.26.3
[pip3] nvidia-cublas-cu11==11.11.3.6
[pip3] nvidia-cublas-cu12==12.6.4.1
[pip3] nvidia-cuda-cupti-cu11==11.8.87
[pip3] nvidia-cuda-cupti-cu12==12.6.80
[pip3] nvidia-cuda-nvrtc-cu11==11.8.89
[pip3] nvidia-cuda-nvrtc-cu12==12.6.77
[pip3] nvidia-cuda-runtime-cu11==11.8.89
[pip3] nvidia-cuda-runtime-cu12==12.6.77
[pip3] nvidia-cudnn-cu11==8.7.0.84
[pip3] nvidia-cudnn-cu12==9.5.1.17
[pip3] nvidia-cufft-cu11==10.9.0.58
[pip3] nvidia-cufft-cu12==11.3.0.4
[pip3] nvidia-curand-cu11==10.3.0.86
[pip3] nvidia-curand-cu12==10.3.7.77
[pip3] nvidia-cusolver-cu11==11.4.1.48
[pip3] nvidia-cusolver-cu12==11.7.1.2
[pip3] nvidia-cusparse-cu11==11.7.5.86
[pip3] nvidia-cusparse-cu12==12.5.4.2
[pip3] nvidia-cusparselt-cu12==0.6.3
[pip3] nvidia-nccl-cu11==2.20.5
[pip3] nvidia-nccl-cu12==2.26.2
[pip3] nvidia-nvjitlink-cu12==12.6.85
[pip3] nvidia-nvtx-cu11==11.8.86
[pip3] nvidia-nvtx-cu12==12.6.77
[pip3] pytorch-fid==0.3.0
[pip3] pytorch-triton==3.3.0+git96316ce5
[pip3] torch==2.8.0.dev20250412+cu126
[pip3] torchaudio==2.6.0.dev20250127+cu124
[pip3] torchdiffeq==0.2.3
[pip3] torchvision==0.22.0.dev20250127+cu124
[pip3] triton==2.3.0
[conda] numpy 1.26.4 pypi_0 pypi
[conda] nvidia-cublas-cu11 11.11.3.6 pypi_0 pypi
[conda] nvidia-cublas-cu12 12.6.4.1 pypi_0 pypi
[conda] nvidia-cuda-cupti-cu11 11.8.87 pypi_0 pypi
[conda] nvidia-cuda-cupti-cu12 12.6.80 pypi_0 pypi
[conda] nvidia-cuda-nvrtc-cu11 11.8.89 pypi_0 pypi
[conda] nvidia-cuda-nvrtc-cu12 12.6.77 pypi_0 pypi
[conda] nvidia-cuda-runtime-cu11 11.8.89 pypi_0 pypi
[conda] nvidia-cuda-runtime-cu12 12.6.77 pypi_0 pypi
[conda] nvidia-cudnn-cu11 9.1.0.70 pypi_0 pypi
[conda] nvidia-cudnn-cu12 9.5.1.17 pypi_0 pypi
[conda] nvidia-cufft-cu11 10.9.0.58 pypi_0 pypi
[conda] nvidia-cufft-cu12 11.3.0.4 pypi_0 pypi
[conda] nvidia-curand-cu11 10.3.0.86 pypi_0 pypi
[conda] nvidia-curand-cu12 10.3.7.77 pypi_0 pypi
[conda] nvidia-cusolver-cu11 11.4.1.48 pypi_0 pypi
[conda] nvidia-cusolver-cu12 11.7.1.2 pypi_0 pypi
[conda] nvidia-cusparse-cu11 11.7.5.86 pypi_0 pypi
[conda] nvidia-cusparse-cu12 12.5.4.2 pypi_0 pypi
[conda] nvidia-cusparselt-cu12 0.6.3 pypi_0 pypi
[conda] nvidia-nccl-cu11 2.20.5 pypi_0 pypi
[conda] nvidia-nccl-cu12 2.26.2 pypi_0 pypi
[conda] nvidia-nvjitlink-cu12 12.6.85 pypi_0 pypi
[conda] nvidia-nvtx-cu11 11.8.86 pypi_0 pypi
[conda] nvidia-nvtx-cu12 12.6.77 pypi_0 pypi
[conda] pytorch-fid 0.3.0 pypi_0 pypi
[conda] pytorch-triton 3.3.0+git96316ce5 pypi_0 pypi
[conda] torch 2.8.0.dev20250412+cu126 pypi_0 pypi
[conda] torchaudio 2.6.0.dev20250412+cu126 pypi_0 pypi
[conda] torchdiffeq 0.2.3 pypi_0 pypi
[conda] torchvision 0.22.0.dev20250127+cu124 pypi_0 pypi
[conda] triton 2.3.0 pypi_0 pypi
cc @chauhang @penguinwu @zou3519 @Chillee @samdow @kshitij12345
| true
|
2,991,585,470
|
[executorch hash update] update the pinned executorch hash
|
pytorchupdatebot
|
closed
|
[
"open source",
"Merged",
"ciflow/trunk",
"topic: not user facing",
"ciflow/inductor"
] | 3
|
COLLABORATOR
|
This PR is auto-generated nightly by [this action](https://github.com/pytorch/pytorch/blob/main/.github/workflows/nightly.yml).
Update the pinned executorch hash.
| true
|
2,991,567,644
|
Mark auto_functionalized HOPs as cacheable
|
zou3519
|
closed
|
[
"Merged",
"ciflow/trunk",
"module: inductor",
"module: dynamo",
"ciflow/inductor",
"release notes: inductor"
] | 6
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #151194
* #151193
Fixes #151188
Test Plan:
- new tests
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
| true
|
2,991,567,589
|
Improve sort with non-constant keys error message
|
zou3519
|
closed
|
[
"Merged",
"module: dynamo",
"ciflow/inductor",
"release notes: dynamo"
] | 2
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #151194
* __->__ #151193
Fixes https://github.com/pytorch/pytorch/issues/143505
Test Plan:
- new test
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @chauhang @amjames
| true
|
2,991,481,764
|
[ZCH vNext] Bucket offsets and sizes in torchrec shard metadata for bucket wise sharding
|
faran928
|
open
|
[
"oncall: distributed",
"fb-exported",
"release notes: distributed (sharded)"
] | 10
|
CONTRIBUTOR
|
Summary:
X-link: https://github.com/pytorch/torchrec/pull/2885
X-link: https://github.com/pytorch/torchrec/pull/2884
Bucket offsets and sizes in torchrec shard metadata for bucket wise sharding for ZCH v.Next
Test Plan: buck test torchrec/distributed/tests:test_sharding_plan
Differential Revision: D72921209
cc @H-Huang @awgu @wanchaol @fegin @fduwjj @wz337 @wconstab @d4l3k
| true
|
2,991,438,422
|
Fix DWConv in QNNPACK for aarch32
|
joseluisbf-kpsr
|
open
|
[
"module: cpu",
"triaged",
"open source",
"release notes: quantization"
] | 2
|
NONE
|
Some function arguments are stored below the stack pointer, that is, in memory the ABI treats as free. Anything that writes at or below the SP (e.g. an OS context switch) can corrupt these values before they are read back. We didn't hit this problem on Linux, but it raises an `_ARMV4_Exception_data_abort_default` in RTEMS.
cc @jgong5 @mingfeima @XiaobingSuper @sanchitintel @ashokei @jingxu10
| true
|
2,991,404,420
|
Clarify that x and dx are mutually exclusive in torch.trapezoid doc
|
aishwaryar12309
|
closed
|
[
"triaged",
"open source",
"Merged",
"ciflow/trunk",
"release notes: python_frontend"
] | 6
|
CONTRIBUTOR
|
This PR addresses [#151105](https://github.com/pytorch/pytorch/issues/151105) by stating that x and dx are mutually exclusive parameters in torch.trapezoid()
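For illustration, a minimal sketch of the two mutually exclusive call forms (not part of the doc change itself):
```python
import torch

y = torch.tensor([1.0, 2.0, 3.0])

# Either pass the sample points explicitly via `x` ...
x = torch.tensor([0.0, 0.5, 1.0])
print(torch.trapezoid(y, x=x))     # tensor(2.)

# ... or assume uniform spacing via `dx`, but never both at once.
print(torch.trapezoid(y, dx=0.5))  # tensor(2.)
```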
| true
|
2,991,389,434
|
NotImplementedError: The operator 'aten::_linalg_solve_ex.result' is not currently implemented for the MPS device. Available in nightly for CPU.
|
peterdn1
|
closed
|
[
"triaged",
"module: linear algebra",
"module: mps"
] | 1
|
NONE
|
### 🐛 Describe the bug
The HiDream-i1 model (currently among the most advanced open-source AI image generators) was released on Hugging Face this week (April 8, 2025). While the full model fails to run on an RTX 5090, I've successfully managed to load all components on my M1, with some minor code modifications to their open-source project, which I will contribute once I have a fully functional Mac implementation.
Initially, I ran into issues due to an unimplemented method—this has since been addressed in the latest nightly builds. However, when attempting to generate an image, I now encounter the following error:
NotImplementedError: The operator 'aten::_linalg_solve_ex.result' is not currently implemented for the MPS device. Available in nightly for CPU.
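As a possible interim workaround (a sketch only, not a fix: the unsupported op would run on the CPU, so it gives up some of the MPS speedup), the CPU fallback can be enabled before importing torch:
```python
import os
# Must be set before `import torch` to take effect.
os.environ["PYTORCH_ENABLE_MPS_FALLBACK"] = "1"

import torch
A = torch.randn(3, 3, device="mps")
b = torch.randn(3, 1, device="mps")
x = torch.linalg.solve(A, b)  # falls back to the CPU implementation
print(x.device)
```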
I would really like to get this working with MPS, as it would allow this model, as well as others, to run efficiently on Mac hardware.
I appreciate your efforts.
Regards,
Peter
### Versions
PyTorch version: 2.6.0
Is debug build: False
CUDA used to build PyTorch: None
ROCM used to build PyTorch: N/A
OS: macOS 15.3.2 (arm64)
GCC version: Could not collect
Clang version: 17.0.0 (clang-1700.0.13.3)
CMake version: version 3.29.5
Libc version: N/A
Python version: 3.10.17 | packaged by conda-forge | (main, Apr 10 2025, 22:23:34) [Clang 18.1.8 ] (64-bit runtime)
Python platform: macOS-15.3.2-arm64-arm-64bit
Is CUDA available: False
CUDA runtime version: No CUDA
CUDA_MODULE_LOADING set to: N/A
GPU models and configuration: No CUDA
Nvidia driver version: No CUDA
cuDNN version: No CUDA
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Apple M1 Ultra
Versions of relevant libraries:
[pip3] numpy==2.2.4
[pip3] torch==2.6.0
[pip3] torchaudio==2.6.0
[pip3] torchvision==0.21.0
[conda] numpy 2.2.4 pypi_0 pypi
[conda] torch 2.6.0 pypi_0 pypi
[conda] torchaudio 2.6.0 pypi_0 pypi
[conda] torchvision 0.21.0 pypi_0 pypi
cc @jianyuh @nikitaved @mruberry @walterddr @xwang233 @Lezcano @kulinseth @albanD @malfet @DenisVieriu97 @jhavukainen
| true
|
2,991,284,069
|
Make auto_functionalize HOPs cacheable
|
zou3519
|
closed
|
[
"oncall: pt2",
"module: higher order operators",
"module: pt2-dispatcher",
"dynamo-triage-jan2025"
] | 0
|
CONTRIBUTOR
|
I think this should go into 2.7.1. This was the reason that sglang had torch.compile caching issues and the fix is very simple.
cc @chauhang @penguinwu @ydwu4 @bdhirsh
| true
|
2,991,262,051
|
[aot] remove zip in remove_dupe_args
|
bobrenjc93
|
closed
|
[
"ciflow/inductor"
] | 2
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #151187
| true
|
2,991,254,107
|
autograd: Add VJP and JVP rules for aten::aminmax
|
vijayabhaskar-ev
|
open
|
[
"triaged",
"open source",
"release notes: autograd"
] | 5
|
NONE
|
Adds functionally correct backward (VJP) and forward (JVP) autograd rules for the aten::aminmax operator to derivatives.yaml using existing helper functions. This ensures correct eager mode differentiation.
Fixes #148808
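A minimal eager-mode sanity check (a sketch, not the PR's actual test plan) that exercises both the backward (VJP) and forward (JVP) rules:
```python
import torch

x = torch.randn(5, dtype=torch.double, requires_grad=True)

def f(t):
    mn, mx = torch.aminmax(t)
    return mn, mx

# Numerical vs. analytical checks for both differentiation modes.
torch.autograd.gradcheck(f, (x,))
torch.autograd.gradcheck(f, (x,), check_forward_ad=True)
```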
| true
|
2,991,167,688
|
fix sympy FloorToInt when compile
|
zhangheng408
|
open
|
[
"module: cpu",
"triaged",
"open source"
] | 4
|
NONE
|
Fixes the following error:
<img width="1903" alt="image" src="https://github.com/user-attachments/assets/2d967bc4-f884-4e5f-b1d4-d8cca2f281a7" />
cc @jgong5 @mingfeima @XiaobingSuper @sanchitintel @ashokei @jingxu10
| true
|
2,991,102,596
|
[dynamo] Prevent lazy variable realization on STORE_FAST
|
anijain2305
|
open
|
[
"ciflow/trunk",
"topic: not user facing",
"module: dynamo",
"ciflow/inductor",
"ciflow/pull"
] | 27
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #151184
Fixes https://github.com/pytorch/pytorch/issues/131893
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @chauhang @amjames
| true
|
2,991,093,443
|
[Inductor] Add Additional Configs for persistent+TMA version of Triton mm and addmm
|
NikhilAPatel
|
closed
|
[
"module: inductor",
"ciflow/inductor"
] | 2
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* (to be filled)
Summary:
This PR introduces additional autotuning configurations for the persistent+TMA version of Triton `mm` and `addmm` operations. The new configurations are as follows:
* `(128, 128, 64, 5, 8)`
* `(256, 128, 64, 4, 8)`
* `(128, 128, 64, 5, 4)`
These configurations were selected based on exhaustive autotuning performed on commonly used shapes from an internal foundational model.
While these new configs are generally more performant across the board, we see notable gains in a few specific cases:
* In scenarios where `n >> m, k`, the configurations `(128, 128, 64, 5, 8)` and `(256, 128, 64, 4, 8)` tend to produce an additional 5-10% speedup over the aten baseline compared to the original configurations.
* Similarly, the configuration `(128, 128, 64, 5, 4)` yields approximately an 8% improvement in scenarios where `k >> m, n`.
These enhancements are expected to provide performance benefits across diverse use cases, particularly when compared to the original set of configurations.
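For readers, a sketch of what these tuples encode, assuming the usual `(BLOCK_M, BLOCK_N, BLOCK_K, num_stages, num_warps)` layout of inductor's mm configs (the field order here is an assumption, not taken from the diff):
```python
# Illustrative only; field names/order are assumed, not quoted from the PR.
new_persistent_tma_configs = [
    dict(BLOCK_M=128, BLOCK_N=128, BLOCK_K=64, num_stages=5, num_warps=8),
    dict(BLOCK_M=256, BLOCK_N=128, BLOCK_K=64, num_stages=4, num_warps=8),
    dict(BLOCK_M=128, BLOCK_N=128, BLOCK_K=64, num_stages=5, num_warps=4),
]
```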
Test Plan:
contbuild & OSS CI
Reviewers: paulzhan
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
| true
|
2,991,077,077
|
A problem discovered when computing complex matrices in deep neural networks
|
DareikAndPutty
|
open
|
[
"triaged",
"module: complex",
"module: NaNs and Infs"
] | 4
|
NONE
|
### 🐛 Describe the bug
Previously, while working with the latest YOLO model provided by Ultralytics, I attempted an operation where I performed torch.fft.fft2() on the output feature maps of certain CSPBlocks to obtain their corresponding complex matrices. I then manipulated the modulus matrices of these complex matrices, multiplied the resulting matrices back into the complex matrices, and finally used torch.fft.ifft2() to obtain the processed output. At this point, a problem arose: during training, the loss value would suddenly become NaN.
I later tested the same operation on other simpler models, such as using ResNet for classification tasks or UNet for segmentation tasks, and found that adding the same operation did not cause this issue. I initially thought the problem lay in the design of my operator.
However, recently, when I continued testing on YOLO, I discovered that if I manipulated the modulus matrices of the complex matrices, multiplied the resulting matrices back into the modulus matrices, and then recombined them with the phase matrices to form the complex matrices again, the loss value did not suddenly become NaN. This puzzled me because, mathematically, these two operations should be equivalent.
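For concreteness, a standalone sketch of the two paths (the mask here is only illustrative); in isolation they agree to numerical precision, which is why the divergence during training puzzles me:
```python
import torch

x = torch.randn(2, 8, 16, 16)                    # stand-in for a CSPBlock feature map
spec = torch.fft.fft2(x)                         # complex spectrum
mask = torch.sigmoid(torch.randn(2, 8, 16, 16))  # some real-valued manipulation of the modulus

# Path A: multiply the mask back into the complex matrix directly.
out_a = torch.fft.ifft2(spec * mask).real

# Path B: scale the modulus, then recombine with the phase.
out_b = torch.fft.ifft2(torch.polar(spec.abs() * mask, spec.angle())).real

print(torch.allclose(out_a, out_b, atol=1e-5))   # True in float32
```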
### Versions
Several versions prior to torch 2.5 have been tested, on hardware such as the RTX 3090, RTX 4070 Ti Super, and TITAN, and on both Ubuntu and Windows.
cc @ezyang @anjali411 @dylanbespalko @mruberry @nikitaved @amjames
| true
|
2,990,935,528
|
Docs: Fix typos in the Symbolic Numbers docstrings
|
koyuki7w
|
closed
|
[
"open source",
"Merged",
"topic: not user facing"
] | 3
|
CONTRIBUTOR
| null | true
|
2,990,919,873
|
metamate attempt 0 multi graph
|
bobrenjc93
|
closed
|
[
"module: dynamo",
"ciflow/inductor"
] | 2
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #151180
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @chauhang @amjames
| true
|
2,990,919,836
|
[ez] remove unused arg in _create_wrapped_callback
|
bobrenjc93
|
closed
|
[
"Merged",
"ciflow/trunk",
"topic: not user facing",
"module: dynamo",
"ciflow/inductor"
] | 4
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #151180
* __->__ #151179
* #150828
* #150755
* #150754
* #150753
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @chauhang @amjames
| true
|
2,990,835,779
|
Optimize `cdist` param description
|
zeshengzong
|
closed
|
[
"open source",
"Merged",
"ciflow/trunk",
"release notes: python_frontend",
"topic: docs"
] | 3
|
CONTRIBUTOR
|
Fixes #151101
| true
|
2,990,808,924
|
[MPS] Get Vmap to work with mps backend
|
qqaatw
|
open
|
[
"open source",
"ciflow/trunk",
"release notes: mps",
"ciflow/mps"
] | 1
|
COLLABORATOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #151177
| true
|
2,990,808,901
|
[MPS] Fix where
|
qqaatw
|
closed
|
[
"open source",
"Merged",
"ciflow/trunk",
"release notes: mps",
"ciflow/mps"
] | 4
|
COLLABORATOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #151177
* __->__ #151176
Fixes #150967
| true
|
2,990,805,335
|
improve noop elimination for slice and slice_scatter
|
BoyuanFeng
|
closed
|
[
"Merged",
"ciflow/trunk",
"topic: not user facing",
"module: inductor",
"ciflow/inductor"
] | 3
|
CONTRIBUTOR
|
Improves noop elimination for `slice` and `slice_scatter`.
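A minimal sketch (illustrative only, not taken from the added tests) of the kind of patterns this targets: a slice covering the whole dimension and a `slice_scatter` writing over the full range are both no-ops that can be dropped from the graph.
```python
import torch

def f(x, y):
    a = x[0:x.size(0)]                                  # full-range slice -> no-op
    return torch.slice_scatter(a, y, 0, 0, y.size(0))   # full overwrite   -> just y

x = torch.randn(4, 3)
y = torch.ones(4, 3)
print(torch.equal(torch.compile(f)(x, y), f(x, y)))     # True
```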
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
| true
|
2,990,801,571
|
[MPS] Fix where
|
qqaatw
|
closed
|
[
"release notes: mps",
"ciflow/mps"
] | 1
|
COLLABORATOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* (to be filled)
| true
|
2,990,800,805
|
[MPS] Fix where
|
qqaatw
|
closed
|
[
"release notes: mps",
"ciflow/mps"
] | 1
|
COLLABORATOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* (to be filled)
| true
|
2,990,740,684
|
[wip test] (sizes[i] == 0)
|
laithsakka
|
closed
|
[] | 2
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #151172
* #151171
* #151170
| true
|
2,990,732,908
|
Fix: missing () in generated runtime assert c++ code
|
laithsakka
|
closed
|
[
"Merged",
"ciflow/trunk",
"topic: not user facing",
"module: inductor",
"ciflow/inductor"
] | 7
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #151171
* #151170
Addresses one of the issues in https://github.com/pytorch/pytorch/issues/151127 (specifically one raised in the comments of that issue).
The generated code used to be
`not a==5 or b==5`
but it should be
`not (a==5 or b==5)`
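A tiny Python illustration of the precedence bug (the generated C++ condition has the same issue without the parentheses):
```python
a, b = 3, 5

wrong = not a == 5 or b == 5    # parses as (not (a == 5)) or (b == 5) -> True
right = not (a == 5 or b == 5)  # the intended assertion condition     -> False
print(wrong, right)
```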
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
| true
|
2,990,728,489
|
Fix Issues in deferring runtime assertions.
|
laithsakka
|
closed
|
[
"Merged",
"topic: not user facing",
"module: inductor",
"module: dynamo",
"ciflow/inductor"
] | 3
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #151171
* __->__ #151170
This PR fixes two bugs:
1) Update self.bound_unbacked_symbols before emitting runtime asserts: set self.bound_unbacked_symbols before emitting runtime asserts so that runtime asserts depending on the current node are included.
2) In the pass that removes unused graph inputs, do not remove symbols that are used by runtime assertions.
Addresses some of the issues in https://github.com/pytorch/pytorch/issues/151127
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
| true
|
2,990,724,738
|
[dynamo][error message] Hint for dict_items as inputs to the compiled region
|
anijain2305
|
closed
|
[
"Merged",
"ciflow/trunk",
"topic: not user facing",
"module: dynamo",
"ciflow/inductor"
] | 6
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #151184
* __->__ #151169
* #151168
* #151164
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @chauhang @amjames
| true
|
2,990,711,965
|
[dynamo] Graph break fixes while tracing inspect module
|
anijain2305
|
closed
|
[
"Merged",
"ciflow/trunk",
"topic: not user facing",
"module: inductor",
"module: dynamo",
"ciflow/inductor"
] | 3
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #151184
* #151169
* __->__ #151168
* #151164
Fixes https://github.com/pytorch/pytorch/issues/139374
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
| true
|
2,990,704,071
|
[Testing] Skip `test_unspec_inputs_float64_mps`
|
malfet
|
closed
|
[
"Merged",
"topic: not user facing",
"ciflow/mps",
"module: inductor",
"ciflow/inductor"
] | 3
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #151155
* __->__ #151167
* #151166
As the backend does not support float64
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
| true
|
2,990,704,045
|
[CI] Fix `GPUTests.test_scheduler_vertical_fusion1`
|
malfet
|
closed
|
[
"Merged",
"topic: not user facing",
"ciflow/mps",
"module: inductor",
"ciflow/inductor"
] | 2
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* (to be filled)
By enabling the test_operators on MPS device
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
| true
|
2,990,703,147
|
Bug on running TorchScript on H100
|
mikeybydun1
|
open
|
[
"oncall: jit"
] | 0
|
NONE
|
Hello,
I have a TorchScript model built with torch 1.13.0 (CUDA version).
I compile PyTorch code into a .pt file and then run the model.
On every GPU it works well (an A100, for example),
but when I run the same code on an NVIDIA H100 the results just become NaN.
Do you have any idea why? Is it the PyTorch version? What do I need to configure?
Thanks!
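For reference, a minimal sketch of how I check for the NaNs (the model path and input shape below are placeholders, not my real setup):
```python
import torch

model = torch.jit.load("model.pt").cuda().eval()   # placeholder path
x = torch.randn(1, 3, 224, 224, device="cuda")     # placeholder input shape
with torch.no_grad():
    out = model(x)
print("any NaN in output:", torch.isnan(out).any().item())
```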
cc @EikanWang @jgong5 @wenzhe-nrv @sanchitintel
| true
|
2,990,680,681
|
[dynamo][nn_module] Use method.__self__ to find source for patched methods
|
anijain2305
|
closed
|
[
"Merged",
"ciflow/trunk",
"topic: not user facing",
"module: dynamo",
"ciflow/inductor"
] | 3
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #151169
* #151168
* __->__ #151164
Fixes https://github.com/pytorch/pytorch/issues/137476
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @chauhang @amjames
| true
|
2,990,633,295
|
[CUDA][cuBLAS][cuBLASLt] Opt-in unified cuBLAS + cuBLASLt workspaces
|
eqy
|
closed
|
[
"module: cuda",
"module: cublas",
"open source",
"Merged",
"ciflow/trunk",
"topic: not user facing",
"module: dynamo",
"ciflow/inductor"
] | 6
|
COLLABORATOR
|
opt-in version of https://github.com/pytorch/pytorch/pull/145130 as there was a lack of repro for the 70% forward issue
`TORCH_CUBLASLT_UNIFIED_WORKSPACE=1`
@izaitsevfb could you comment if it was repeatable per every forward pass, on startup, or something else?
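A minimal usage sketch, assuming the flag is read from the environment before the first matmul initializes the cuBLAS workspaces:
```python
import os
os.environ["TORCH_CUBLASLT_UNIFIED_WORKSPACE"] = "1"  # opt in before any CUDA matmul runs

import torch

a = torch.randn(1024, 1024, device="cuda", dtype=torch.float16)
b = torch.randn(1024, 1024, device="cuda", dtype=torch.float16)
c = a @ b  # uses the unified cuBLAS/cuBLASLt workspace if the opt-in is honored
```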
cc @ptrblck @msaroufim @jerryzh168 @csarofeen @xwang233 @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @chauhang @amjames
| true
|
2,990,578,308
|
add keepdim to cosine similarity(cpp-change)
|
Isalia20
|
open
|
[
"module: nn",
"triaged",
"open source",
"release notes: nn",
"topic: improvements"
] | 3
|
COLLABORATOR
|
Part of #149134 cpp changes. Not sure if anything else should be changed in this part of the PR. If I change the input argument then I need to change the native_functions.yaml as well
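For context, a small sketch of the behavior the new flag is meant to provide; since `keepdim` is the proposal itself, the last line only emulates it with `unsqueeze`:
```python
import torch
import torch.nn.functional as F

a, b = torch.randn(4, 8), torch.randn(4, 8)
out = F.cosine_similarity(a, b, dim=1)  # today: the reduced dim is dropped -> shape (4,)
out_keepdim = out.unsqueeze(1)          # what keepdim=True would return -> shape (4, 1)
print(out.shape, out_keepdim.shape)
```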
cc @albanD @mruberry @jbschlosser @walterddr @mikaylagawarecki
| true
|
2,990,540,068
|
Update ir.cpp: if it's not ROCm then it may be Vulkan
|
Efenstor
|
closed
|
[
"oncall: jit",
"module: rocm",
"open source",
"module: vulkan",
"release notes: jit"
] | 2
|
NONE
|
Fix build for USE_ROCM=OFF USE_VULKAN=ON
cc @EikanWang @jgong5 @wenzhe-nrv @sanchitintel @jeffdaily @sunway513 @jithunnair-amd @pruthvistony @ROCmSupport @dllehr-amd @jataylo @hongxiayang @naromero77amd
| true
|
2,990,483,037
|
[CI][CUDA] Disable scaled_gemm tests on blackwell
|
abhilash1910
|
closed
|
[
"triaged",
"open source",
"topic: not user facing"
] | 6
|
NONE
|
On SM100 or later, torch._scaled_mm is not supported;
it is only supported up to compute capability 9.0.
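A minimal sketch of the kind of guard this implies (the actual skip mechanism used in the tests is not shown here):
```python
import unittest
import torch

major, minor = torch.cuda.get_device_capability() if torch.cuda.is_available() else (0, 0)

@unittest.skipIf((major, minor) > (9, 0), "torch._scaled_mm not supported past compute capability 9.0")
class ScaledGemmTests(unittest.TestCase):
    def test_placeholder(self):
        pass  # the actual scaled GEMM tests would go here
```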
cc @nWEIdia @tinglvv @eqy
| true
|
2,990,359,956
|
Failed to destroy or init process group after calling _abort_process_group
|
Gong-air
|
open
|
[
"oncall: distributed",
"triaged"
] | 1
|
NONE
|
### 🐛 Describe the bug
In a distributed training scenario using PyTorch's torch.distributed module, I encountered an issue when attempting to destroy or reinitialize a process group after calling the internal function _abort_process_group. This issue prevents me from creating new process groups or reinitializing the default process group (WORLD) after the original group has been aborted.
```python
import os
import torch
import torch.distributed as dist
from torch.multiprocessing import Process
def run(rank, world_size):
    # Initialize the default process group
    dist.init_process_group(
        backend="nccl",
        init_method="env://",
        world_size=world_size,
        rank=rank
    )
    print(f"[Rank {rank}] Default process group initialized.")
    # Perform a simple all_reduce operation
    device = torch.device(f"cuda:{rank}")
    tensor = torch.tensor([rank + 1], dtype=torch.float32).to(device)
    dist.all_reduce(tensor, op=dist.ReduceOp.SUM)
    print(f"[Rank {rank}] After all_reduce: {tensor.item()}")
    # Abort the process group
    print(f"[Rank {rank}] Aborting the process group...")
    dist.distributed_c10d._abort_process_group()
    # Attempt to reinitialize the default process group
    try:
        print(f"[Rank {rank}] Re-initializing default process group...")
        # dist.destroy_process_group()  # another error occurs
        dist.init_process_group(
            backend="nccl",
            init_method="env://",
            world_size=world_size,
            rank=rank
        )
        print(f"[Rank {rank}] Default process group re-initialized successfully.")
    except Exception as e:
        print(f"[Rank {rank}] Failed to re-initialize default process group: {e}")

def main():
    world_size = 2  # Set to 2 ranks
    os.environ["MASTER_ADDR"] = "localhost"
    os.environ["MASTER_PORT"] = "29500"
    processes = []
    for rank in range(world_size):
        p = Process(target=run, args=(rank, world_size))
        p.start()
        processes.append(p)
    for p in processes:
        p.join()

if __name__ == "__main__":
    main()
```
### Versions
PyTorch version: 2.6.0+cu124
cc @H-Huang @awgu @wanchaol @fegin @fduwjj @wz337 @wconstab @d4l3k
| true
|
2,990,304,768
|
Fix license check for setuptools>=77
|
oraluben
|
closed
|
[
"open source",
"Merged",
"ciflow/trunk",
"topic: not user facing"
] | 6
|
CONTRIBUTOR
|
Fixes #151157
See issue for more information
| true
|
2,990,297,710
|
`test_distinfo_license` failed after `setuptools>=77`
|
oraluben
|
closed
|
[
"module: build",
"module: tests",
"triaged"
] | 0
|
CONTRIBUTOR
|
### 🐛 Describe the bug
`test_distinfo_license` checks whether a `LICENSE` file exists under `torch-<version>.dist-info/` in the wheel.
After https://github.com/pypa/setuptools/commit/ef9b8e5c5eec50853c4cd2ceeccbf5f963172560 ([setuptools v77.0](https://github.com/pypa/setuptools/releases/tag/v77.0.0)), `<pkg>.dist-info/{LICENSE,NOTICE}` have been renamed to `<pkg>.dist-info/licenses/{LICENSE,NOTICE}`, causing the test to fail.
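An illustrative check that accepts either layout (the wheel path below is an assumption):
```python
import glob
import zipfile

wheel = glob.glob("dist/torch-*.whl")[0]  # assumed location of the built wheel
names = zipfile.ZipFile(wheel).namelist()
has_license = any(
    name.endswith(("dist-info/LICENSE", "dist-info/licenses/LICENSE")) for name in names
)
print("LICENSE found:", has_license)
```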
### Versions
-
cc @malfet @seemethere @mruberry @ZainRizvi
| true
|
2,990,194,792
|
Inductor doesn't support tensor.view(dtype).copy_(...)
|
YouJiacheng
|
closed
|
[
"high priority",
"triaged",
"module: correctness (silent)",
"oncall: pt2",
"module: inductor"
] | 5
|
CONTRIBUTOR
|
### 🐛 Describe the bug
```python
import os
import torch
from torch import Tensor
os.environ["TORCHINDUCTOR_FX_GRAPH_CACHE"] = "0"
os.environ["TORCHINDUCTOR_UNIQUE_KERNEL_NAMES"] = "1"
os.environ["TORCHINDUCTOR_BENCHMARK_KERNEL"] = "1"
@torch.compile
def view_copy(target: Tensor, source: Tensor):
    assert target.dtype == torch.bfloat16
    assert source.dtype == torch.uint16
    target.view(torch.uint16).copy_(source)
target: Tensor = torch.ones(65536 * 1024, dtype=torch.bfloat16, device="cuda")
source = torch.full_like(target, 4, dtype=torch.uint16)
target.view(torch.uint16).copy_(source)
print(target[0]) # 3.6734e-40
view_copy(target, source)
print(target[0]) # 4.
```
### Error logs
The generated triton code is wrong
```python
@triton_heuristics.pointwise(
size_hints={'x': 67108864},
filename=__file__,
triton_meta={'signature': {'in_ptr0': '*u16', 'out_ptr0': '*bf16', 'xnumel': 'i32'}, 'device': DeviceProperties(type='cuda', index=0, multi_processor_count=132, cc=90, major=9, regs_per_multiprocessor=65536, max_threads_per_multi_processor=2048, warp_size=32), 'constants': {}, 'configs': [AttrsDescriptor.from_dict({'arg_properties': {'tt.divisibility': (0, 1, 2), 'tt.equal_to': ()}, 'cls': 'AttrsDescriptor'})]},
inductor_meta={'autotune_hints': set(), 'kernel_name': 'triton_poi_fused_0', 'mutated_arg_names': ['out_ptr0'], 'optimize_mem': True, 'no_x_dim': False, 'num_load': 1, 'num_reduction': 0, 'backend_hash': 'A0D3A2B50857E9501D843044B01F725922648D76E6D26323B14F8A4EA4473D1B', 'are_deterministic_algorithms_enabled': False, 'assert_indirect_indexing': True, 'autotune_local_cache': True, 'autotune_pointwise': True, 'autotune_remote_cache': None, 'force_disable_caches': False, 'dynamic_scale_rblock': True, 'max_autotune': False, 'max_autotune_pointwise': False, 'min_split_scan_rblock': 256, 'spill_threshold': 16, 'store_cubin': False, 'kernel_num_gb': 0.268435456},
min_elem_per_thread=0
)
@triton.jit
def triton_poi_fused_0(in_ptr0, out_ptr0, xnumel, XBLOCK : tl.constexpr):
    xnumel = 67108864
    xoffset = tl.program_id(0) * XBLOCK
    xindex = xoffset + tl.arange(0, XBLOCK)[:]
    xmask = tl.full([XBLOCK], True, tl.int1)
    x0 = xindex
    tmp0 = tl.load(in_ptr0 + (x0), None)
    tmp1 = tmp0.to(tl.float32, bitcast=False)
    tl.store(out_ptr0 + (x0), tmp1, None)
```
Full code:
```python
# AOT ID: ['0_inference']
from ctypes import c_void_p, c_long, c_int
import torch
import math
import random
import os
import tempfile
from math import inf, nan
from torch._inductor.hooks import run_intermediate_hooks
from torch._inductor.utils import maybe_profile
from torch._inductor.codegen.memory_planning import _align as align
from torch import device, empty_strided
from torch._inductor.async_compile import AsyncCompile
from torch._inductor.select_algorithm import extern_kernels
from torch._inductor.codegen.multi_kernel import MultiKernelCall
import triton
import triton.language as tl
from torch._inductor.runtime.triton_heuristics import (
grid,
split_scan_grid,
grid_combo_kernels,
start_graph,
end_graph,
cooperative_reduction_grid,
)
from torch._C import _cuda_getCurrentRawStream as get_raw_stream
from torch._C import _cuda_getCurrentRawStream as get_raw_stream
aten = torch.ops.aten
inductor_ops = torch.ops.inductor
_quantized = torch.ops._quantized
assert_size_stride = torch._C._dynamo.guards.assert_size_stride
empty_strided_cpu = torch._C._dynamo.guards._empty_strided_cpu
empty_strided_cuda = torch._C._dynamo.guards._empty_strided_cuda
empty_strided_xpu = torch._C._dynamo.guards._empty_strided_xpu
reinterpret_tensor = torch._C._dynamo.guards._reinterpret_tensor
alloc_from_pool = torch.ops.inductor._alloc_from_pool
async_compile = AsyncCompile()
empty_strided_p2p = torch._C._distributed_c10d._SymmetricMemory.empty_strided_p2p
# kernel path: /tmp/torchinductor_root/5m/c5mmszhvnmnuimm6a3j7emw2wh7vx6mwt6uar6eqk5mygz5jqgw4.py
# Topologically Sorted Source Nodes: [], Original ATen: []
# Source node to ATen node mapping:
# Graph fragment:
# %copy_ : [num_users=0] = call_function[target=torch.ops.aten.copy_.default](args = (%arg0_1, %view_2), kwargs = {})
triton_poi_fused_0 = async_compile.triton('triton_poi_fused_0', '''
import triton
import triton.language as tl
from triton.compiler.compiler import AttrsDescriptor
from torch._inductor.runtime import triton_helpers, triton_heuristics
from torch._inductor.runtime.triton_helpers import libdevice, math as tl_math
from torch._inductor.runtime.hints import AutotuneHint, ReductionHint, TileHint, DeviceProperties
triton_helpers.set_driver_to_gpu()
from torch._dynamo.testing import rand_strided
from torch._C import _cuda_getCurrentRawStream as get_raw_stream
import torch
from torch._inductor.runtime.triton_heuristics import grid, split_scan_grid
@triton_heuristics.pointwise(
size_hints={'x': 67108864},
filename=__file__,
triton_meta={'signature': {'in_ptr0': '*u16', 'out_ptr0': '*bf16', 'xnumel': 'i32'}, 'device': DeviceProperties(type='cuda', index=0, multi_processor_count=132, cc=90, major=9, regs_per_multiprocessor=65536, max_threads_per_multi_processor=2048, warp_size=32), 'constants': {}, 'configs': [AttrsDescriptor.from_dict({'arg_properties': {'tt.divisibility': (0, 1, 2), 'tt.equal_to': ()}, 'cls': 'AttrsDescriptor'})]},
inductor_meta={'autotune_hints': set(), 'kernel_name': 'triton_poi_fused_0', 'mutated_arg_names': ['out_ptr0'], 'optimize_mem': True, 'no_x_dim': False, 'num_load': 1, 'num_reduction': 0, 'backend_hash': 'A0D3A2B50857E9501D843044B01F725922648D76E6D26323B14F8A4EA4473D1B', 'are_deterministic_algorithms_enabled': False, 'assert_indirect_indexing': True, 'autotune_local_cache': True, 'autotune_pointwise': True, 'autotune_remote_cache': None, 'force_disable_caches': False, 'dynamic_scale_rblock': True, 'max_autotune': False, 'max_autotune_pointwise': False, 'min_split_scan_rblock': 256, 'spill_threshold': 16, 'store_cubin': False, 'kernel_num_gb': 0.268435456},
min_elem_per_thread=0
)
@triton.jit
def triton_poi_fused_0(in_ptr0, out_ptr0, xnumel, XBLOCK : tl.constexpr):
    xnumel = 67108864
    xoffset = tl.program_id(0) * XBLOCK
    xindex = xoffset + tl.arange(0, XBLOCK)[:]
    xmask = tl.full([XBLOCK], True, tl.int1)
    x0 = xindex
    tmp0 = tl.load(in_ptr0 + (x0), None)
    tmp1 = tmp0.to(tl.float32, bitcast=False)
    tl.store(out_ptr0 + (x0), tmp1, None)

def get_args():
    arg_0 = rand_strided((67108864,), (1,), device='cuda:0', dtype=torch.uint16)
    arg_1 = rand_strided((67108864,), (1,), device='cuda:0', dtype=torch.bfloat16)
    return arg_0, arg_1,

def call(args):
    with torch.cuda._DeviceGuard(0):
        torch.cuda.set_device(0)
        stream0 = get_raw_stream(0)
        triton_poi_fused_0.run(*args, 67108864, grid=grid(67108864), stream=stream0)

def benchmark_all_configs(args):
    with torch.cuda._DeviceGuard(0):
        torch.cuda.set_device(0)
        return triton_poi_fused_0.benchmark_all_configs(*args, 67108864, grid=grid(67108864))

if __name__ == '__main__':
    from torch._inductor.runtime.benchmarking import benchmarker
    args = get_args()
    ms = benchmarker.benchmark_gpu(lambda: call(args), rep=40)
    num_gb = 0.268435456
    gb_per_s = num_gb / (ms / 1e3)
    print(f"{ms:.3f}ms {num_gb:.3f}GB {gb_per_s:.2f}GB/s")
''', device_str='cuda')
async_compile.wait(globals())
del async_compile
def call(args):
    arg0_1, arg1_1 = args
    args.clear()
    assert_size_stride(arg0_1, (67108864, ), (1, ))
    assert_size_stride(arg1_1, (67108864, ), (1, ))
    with torch.cuda._DeviceGuard(0):
        torch.cuda.set_device(0)
        # Topologically Sorted Source Nodes: [], Original ATen: []
        stream0 = get_raw_stream(0)
        triton_poi_fused_0.run(arg1_1, arg0_1, 67108864, grid=grid(67108864), stream=stream0)
        del arg0_1
        del arg1_1
    return ()
def benchmark_compiled_module(times=10, repeat=10):
    from torch._dynamo.testing import rand_strided
    from torch._inductor.utils import print_performance
    arg0_1 = rand_strided((67108864, ), (1, ), device='cuda:0', dtype=torch.bfloat16)
    arg1_1 = rand_strided((67108864, ), (1, ), device='cuda:0', dtype=torch.uint16)
    fn = lambda: call([arg0_1, arg1_1])
    return print_performance(fn, times=times, repeat=repeat)

if __name__ == "__main__":
    from torch._inductor.wrapper_benchmark import compiled_module_main
    compiled_module_main('None', benchmark_compiled_module)
```
### Versions
Collecting environment information...
PyTorch version: 2.6.0+cu124
Is debug build: False
CUDA used to build PyTorch: 12.4
ROCM used to build PyTorch: N/A
OS: Ubuntu 22.04.4 LTS (x86_64)
GCC version: (Ubuntu 11.4.0-1ubuntu1~22.04) 11.4.0
Clang version: Could not collect
CMake version: Could not collect
Libc version: glibc-2.35
Python version: 3.12.9 (main, Mar 11 2025, 17:26:57) [Clang 20.1.0 ] (64-bit runtime)
Python platform: Linux-5.4.250-2-velinux1u1-amd64-x86_64-with-glibc2.35
Is CUDA available: True
CUDA runtime version: 12.4.131
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration:
GPU 0: NVIDIA H100 80GB HBM3
GPU 1: NVIDIA H100 80GB HBM3
GPU 2: NVIDIA H100 80GB HBM3
GPU 3: NVIDIA H100 80GB HBM3
Nvidia driver version: 535.129.03
cuDNN version: Could not collect
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 52 bits physical, 57 bits virtual
Byte Order: Little Endian
CPU(s): 168
On-line CPU(s) list: 0-74
Off-line CPU(s) list: 75-167
Vendor ID: GenuineIntel
Model name: Intel(R) Xeon(R) Platinum 8457C
CPU family: 6
Model: 143
Thread(s) per core: 2
Core(s) per socket: 42
Socket(s): 2
Stepping: 8
BogoMIPS: 5199.99
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ss ht syscall nx pdpe1gb rdtscp lm constant_tsc rep_good nopl xtopology nonstop_tsc cpuid pni pclmulqdq monitor ssse3 fma cx16 pcid sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand hypervisor lahf_lm abm 3dnowprefetch cpuid_fault invpcid_single ssbd ibrs ibpb stibp ibrs_enhanced fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid avx512f avx512dq rdseed adx smap avx512ifma clflushopt clwb avx512cd sha_ni avx512bw avx512vl xsaveopt xsavec xgetbv1 xsaves avx512_bf16 wbnoinvd arat avx512vbmi umip pku ospke waitpkg avx512_vbmi2 gfni vaes vpclmulqdq avx512_vnni avx512_bitalg avx512_vpopcntdq rdpid cldemote movdiri movdir64b md_clear arch_capabilities
Hypervisor vendor: KVM
Virtualization type: full
L1d cache: 3.9 MiB (84 instances)
L1i cache: 2.6 MiB (84 instances)
L2 cache: 168 MiB (84 instances)
L3 cache: 195 MiB (2 instances)
NUMA node(s): 2
NUMA node0 CPU(s): 0-83
NUMA node1 CPU(s): 84-167
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Unknown: No mitigations
Vulnerability Retbleed: Not affected
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl and seccomp
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Enhanced IBRS, IBPB conditional, RSB filling, PBRSB-eIBRS SW sequence
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Mitigation; TSX disabled
Versions of relevant libraries:
[pip3] numpy==2.2.4
[pip3] nvidia-cublas-cu12==12.4.5.8
[pip3] nvidia-cuda-cupti-cu12==12.4.127
[pip3] nvidia-cuda-nvrtc-cu12==12.4.127
[pip3] nvidia-cuda-runtime-cu12==12.4.127
[pip3] nvidia-cudnn-cu12==9.1.0.70
[pip3] nvidia-cufft-cu12==11.2.1.3
[pip3] nvidia-curand-cu12==10.3.5.147
[pip3] nvidia-cusolver-cu12==11.6.1.9
[pip3] nvidia-cusparse-cu12==12.3.1.170
[pip3] nvidia-cusparselt-cu12==0.6.2
[pip3] nvidia-nccl-cu12==2.21.5
[pip3] nvidia-nvjitlink-cu12==12.4.127
[pip3] nvidia-nvtx-cu12==12.4.127
[pip3] torch==2.6.0
[pip3] triton==3.2.0
[conda] Could not collect
cc @ezyang @gchanan @zou3519 @kadeng @msaroufim @chauhang @penguinwu @voznesenskym @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @chenyang78 @muchulee8 @amjames @aakhundov
| true
|
2,990,121,861
|
[MPS] Start benchmarking compile results
|
malfet
|
closed
|
[
"Merged",
"topic: not user facing"
] | 3
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #151155
To know passrate and speedup.
Modify workflow to run when `macos-test.sh` is modified.
Got some ridiculous speedup numbers, like 7x for resnet and 93x for yolo
| true
|
2,990,083,087
|
[dynamo][super variable] Fix bug to use correct source
|
anijain2305
|
closed
|
[
"Merged",
"ciflow/trunk",
"topic: not user facing",
"module: dynamo",
"ciflow/inductor",
"keep-going"
] | 12
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #151154
Fixes https://github.com/pytorch/pytorch/issues/150994
We should cherry-pick to 2.7 branch if possible, because this breaks torch.compile on some HF models. Look at the issue referenced here.
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @chauhang @amjames
| true
|
2,990,081,302
|
DISABLED test_allgather_stress_cuda (__main__.ProcessGroupGlooLazyInitTest)
|
jithunnair-amd
|
open
|
[
"oncall: distributed",
"module: rocm",
"triaged",
"skipped"
] | 2
|
COLLABORATOR
|
Platforms: rocm
This test was disabled because it failed on the MI300 runners in #150667: https://github.com/pytorch/pytorch/actions/runs/14372628446/job/40320881178
cc @H-Huang @awgu @wanchaol @fegin @fduwjj @wz337 @wconstab @d4l3k @jeffdaily @sunway513 @pruthvistony @ROCmSupport @dllehr-amd @jataylo @hongxiayang @naromero77amd
| true
|
2,990,028,958
|
[MPSInductor] Fix larger-than-threadgroup Welford reductions
|
malfet
|
closed
|
[
"Merged",
"Reverted",
"topic: improvements",
"release notes: mps",
"ciflow/mps",
"module: inductor",
"ciflow/inductor",
"ci-no-td"
] | 8
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #151152
* #151151
* #150824
* #151042
By using `welford_combine` primitive in the loop
This fixes `GPUTests.test_multilayer_var_lowp_mps`
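For reference, a plain-Python sketch of the combine step such a primitive performs (the Chan et al. parallel variance update); the Metal-side implementation is assumed, this only shows the math:
```python
def welford_combine(mean_a, m2_a, n_a, mean_b, m2_b, n_b):
    # Merge two partial (mean, M2, count) accumulators into one.
    n = n_a + n_b
    if n == 0:
        return 0.0, 0.0, 0
    delta = mean_b - mean_a
    mean = mean_a + delta * n_b / n
    m2 = m2_a + m2_b + delta * delta * n_a * n_b / n
    return mean, m2, n


data = list(range(10))
mean, m2, n = 0.0, 0.0, 0
for chunk in (data[:4], data[4:]):
    c_n = len(chunk)
    c_mean = sum(chunk) / c_n
    c_m2 = sum((v - c_mean) ** 2 for v in chunk)
    mean, m2, n = welford_combine(mean, m2, n, c_mean, c_m2, c_n)
print(mean, m2 / n)  # matches the full-data mean and population variance
```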
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
| true
|
2,990,028,923
|
[MPSInductor][BE] Implement reduction caching
|
malfet
|
closed
|
[
"Merged",
"topic: improvements",
"topic: not user facing",
"release notes: mps",
"ciflow/mps",
"module: inductor",
"ciflow/inductor"
] | 2
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #151155
* #151152
* __->__ #151151
* #150824
* #151042
That avoids double/triple invocation of Welford reductions when both
mean and deviation must be returned.
The code mirrors the Halide implementation:
https://github.com/pytorch/pytorch/blob/575f348965abe8ea428eba7098f67ec9764a7f9a/torch/_inductor/codegen/halide.py#L1189-L1191
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
| true
|
2,989,995,356
|
Fix TypeIndex.h signature extraction
|
r-barnes
|
open
|
[
"fb-exported",
"ciflow/trunk"
] | 3
|
CONTRIBUTOR
|
Summary:
Addresses [this post](https://fb.workplace.com/groups/1405155842844877/permalink/24043640001903139/).
[List of broken tests](https://l.workplace.com/l.php?u=https%3A%2F%2Fwww.internalfb.com%2Fomh%2Fview%2Fxrcia_encoder_pe%2Ftests%3Fpower_search_query%3D%257B%2522key%2522%253A%2522TEST_ISSUES_ROOT_AND%2522%252C%2522children%2522%253A%255B%257B%2522key%2522%253A%2522EQUALS_ANY_STATE%2522%252C%2522field%2522%253A%2522TEST_ISSUES_STATE%2522%252C%2522value%2522%253A%255B%2522OPEN%2522%255D%257D%252C%257B%2522key%2522%253A%2522EQUALS_ANY_ISSUE_TYPE%2522%252C%2522field%2522%253A%2522TEST_ISSUES_TYPE%2522%252C%2522value%2522%253A%255B%2522FAILURE%2522%255D%257D%255D%257D&h=AT13hFE24-RjG4JgIT3e0R6JEE3e3rMVR7SMy4qsoiE7moAN62ZtYAIfzDDIfq9G2ey0S8R0gMFYkDo_chXvH_QHVrAYx-bu-amC0wpMJRXWfNujB_dhOl6oSv95VsqAbPM2fBWZ&__tn__=R]-R&c[0]=AT1RnkuYHJ4JIyNo3wy5-JtYE4-FG4QNzQ_sOg3aFOwLKm24FBV8592S-AiDp0rFNrMMwNHIWmw7qj_gyKOS4-uTWmFQjsXDfU51BuoWeIyntSP7vVmeQ0RUfHsOdgiHGx3cO1NXBzBDKPmtGJuPwyY8guSiY-VXk1Y2iUeF9iM)
Test Plan:
Sandcastle
```
buck2 build --flagfile fbsource//arvr/mode/platform010/cuda12_8/dev fbsource//arvr/libraries/neural_net_inference:Backends_TorchScript_tests
```
Differential Revision: D72805179
| true
|
2,989,965,826
|
Update _ordered_set.py
|
ghost
|
closed
|
[
"triaged",
"open source",
"topic: not user facing"
] | 4
|
NONE
|
Fixes #150850
Subclass torch/utils/_ordered_set.py and error on update.
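A minimal sketch of what that could look like (the class and method names here are assumptions, not the actual patch):
```python
from torch.utils._ordered_set import OrderedSet


class FrozenOrderedSet(OrderedSet):
    """Hypothetical subclass that rejects in-place mutation after construction."""

    def update(self, *args, **kwargs):
        raise TypeError("FrozenOrderedSet does not support update()")


s = FrozenOrderedSet([1, 2, 3])
try:
    s.update([4])
except TypeError as e:
    print(e)
```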
@pytorchbot label "topic: not user facing"
| true
|
2,989,964,185
|
[fbgemm_gpu] Incorporate Torch DSA
|
q10
|
closed
|
[
"fb-exported",
"Merged",
"ciflow/trunk",
"topic: not user facing"
] | 11
|
CONTRIBUTOR
|
Summary:
X-link: https://github.com/facebookresearch/FBGEMM/pull/1035
X-link: https://github.com/pytorch/FBGEMM/pull/3950
- Incorporate the PyTorch DSA infrastructure into the FBGEMM kernel launcher utility
Test Plan:
```
# Nvidia
buck2 test 'fbcode//mode/opt' fbcode//deeplearning/fbgemm/fbgemm_gpu/test/utils:tensor_accessor_builder
buck2 test 'fbcode//mode/opt' fbcode//deeplearning/fbgemm/fbgemm_gpu/test/utils:tensor_accessor_builder_with_memcheck
buck2 run 'fbcode//mode/opt' -c fbcode.enable_gpu_sections=true -c fbcode.nvcc_arch=a100 -c fbcode.platform=platform010 fbcode//deeplearning/fbgemm/fbgemm_gpu/test/utils:kernel_launcher
# AMD
buck2 run mode/opt-amd-gpu -c fbcode.platform=platform010 fbcode//deeplearning/fbgemm/fbgemm_gpu/test/utils:tensor_accessor_builder_with_memcheck
buck2 run mode/opt-amd-gpu -c fbcode.platform=platform010 fbcode//deeplearning/fbgemm/fbgemm_gpu/test/utils:kernel_launcher
buck2 run mode/opt-amd-gpu -c fbcode.platform=platform010 fbcode//deeplearning/fbgemm/fbgemm_gpu/test/tbe:split_embeddings_utils
```
Differential Revision: D72759030
| true
|
2,989,962,042
|
move find_hop_schema into _higher_order_ops/schema.py
|
ydwu4
|
closed
|
[
"Merged",
"ciflow/trunk",
"topic: not user facing",
"module: dynamo",
"ciflow/inductor"
] | 9
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #151067
* __->__ #151147
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @chauhang @amjames
| true
|
2,989,962,009
|
[hop] Make base_hop share utils with control flow ops in backward
|
ydwu4
|
open
|
[
"ciflow/trunk",
"topic: not user facing"
] | 3
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #151067
* #151147
* __->__ #151146
| true
|
2,989,946,000
|
[dynamo] unimplemented -> unimplemented_v2 in variables/builtin.py
|
williamwen42
|
closed
|
[
"oncall: distributed",
"Merged",
"ciflow/trunk",
"topic: not user facing",
"module: dynamo",
"ciflow/inductor",
"keep-going",
"module: compile ux"
] | 3
|
MEMBER
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #151145
cc @H-Huang @awgu @wanchaol @fegin @fduwjj @wz337 @wconstab @d4l3k @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @chauhang @amjames
| true
|
2,989,904,078
|
[Inductor] Add utility to rewrite sympy expressions with FloorDiv
|
blaine-rister
|
closed
|
[
"module: cpu",
"topic: not user facing"
] | 3
|
CONTRIBUTOR
|
Feature used in https://github.com/pytorch/pytorch/pull/146942.
# Feature
This PR adds two new routines:
- Pattern match `floor(x/y)` and convert it to `x // y`. This is done by a new static method `FloorDiv.rewrite`, which is not part of sympy's general expression evaluation. The user has to specifically call this method to rewrite the expression.
- The inverse: expand `x // y` back into `floor(x/y)`. This is triggered by `expr.expand(floordiv=True)`.
The pattern match is useful in the parent PR because `FloorDiv` ops generate FX-friendly Python code, which we can directly embed into `SymInt`'s for things like the Triton launch grid.
It would be possible to directly call `FloorDiv` when the grid expression is first constructed, as opposed to this approach of pattern matching it after the fact. However, since these grid expressions generate and evaluate Python code on the fly, that is not completely straightforward. It seems nice to have a utility for "fixing" a sympy expression which wasn't originally constructed with `FloorDiv`.
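A hedged sympy sketch of the pattern match described above; `FloorDiv` here is a stand-in class for illustration, not Inductor's actual implementation:
```python
import sympy


class FloorDiv(sympy.Function):
    """Illustrative stand-in for Inductor's FloorDiv op."""
    nargs = (2,)


def rewrite(expr):
    # Pattern-match floor(numerator/denominator) and rebuild it as FloorDiv(numerator, denominator).
    def repl(e):
        num, den = sympy.fraction(sympy.together(e.args[0]))
        return FloorDiv(num, den)

    return expr.replace(lambda e: isinstance(e, sympy.floor), repl)


x, y = sympy.symbols("x y", positive=True)
print(rewrite(sympy.floor(x / y) + 1))  # FloorDiv(x, y) + 1
```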
# Test plan
Added some unit tests covering these features. Expansion allows us to check that the pattern matcher is sound. The parent PR also uses pattern matching to compute launch grids in some dynamic shape tests.
cc @jgong5 @mingfeima @XiaobingSuper @sanchitintel @ashokei @jingxu10
| true
|
2,989,902,093
|
add torch.distributed.get_local_rank() (!?!?!?!)
|
vxnuaj
|
closed
|
[
"oncall: distributed"
] | 2
|
NONE
|
### add torch.distributed.get_local_rank() (!?!?!?!)
hey!
was writing a function that involved the following,
```python
dist.init_process_group(backend = backend)
local_rank = dist.get_local_rank()
rank = dist.get_rank()
world_size = dist.get_world_size()
torch.cuda.set_device(rank)
device = torch.device(f'cuda:{rank}')
```
only to find out that `torch.distributed` doesn't offer a way to get the `local_rank` via `distributed.get_local_rank()`.
Of course, I could easily bypass this by running `os.environ.get('LOCAL_RANK')`, but if this feature is trivial to implement, it would be useful to have it to avoid confusion.
This was mentioned here as well: https://github.com/pytorch/pytorch/issues/122816
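A minimal sketch of the helper I have in mind (the fallback default is an assumption):
```python
import os


def get_local_rank() -> int:
    # Mirrors what torchrun/torch.distributed launchers export for each process.
    return int(os.environ.get("LOCAL_RANK", 0))
```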
### Alternatives
_No response_
### Additional context
_No response_
cc @H-Huang @awgu @wanchaol @fegin @fduwjj @wz337 @wconstab @d4l3k
| true
|
2,989,898,890
|
AOT Dispatcher converts a single `detach` call to multiple `aten.detach`
|
StrongerXi
|
open
|
[
"triaged",
"oncall: pt2",
"module: aotdispatch",
"module: pt2-dispatcher"
] | 3
|
CONTRIBUTOR
|
### 🐛 Describe the bug
I observed this in #150706, where a `detach` in Dynamo graph becomes multiple `aten.alias` in AOT graph, [tlparse](https://manifold.edge.x2p.facebook.net/v0/read/tree/logs/.tmpqAJWZf/-_0_0_0/compilation_metrics_13.html?bucketName=tlparse_reports&apiKey=tlparse_reports-key&withPayload=1&timeoutMsec=10000):

This isn't blocking #150706 because I have a separate fix to eliminate the `detach` in Dynamo graph, but I'm not sure if the AOTDispatcher behavior is intentional, and if it is, feel free to close this issue.
Minimal Repro:
```python
import torch
@torch.compile(backend="aot_eager", fullgraph=True)
def f(x):
    x = x.detach()
    res = x + 1
    return res
f(torch.ones(1))
```
### Error logs
Running with `TORCH_LOGS="graph_code, aot_graph"` gives:
```
V0411 16:03:36.439000 2423193 torch/fx/passes/runtime_assert.py:118] [0/0] [__graph_code] TRACED GRAPH
V0411 16:03:36.439000 2423193 torch/fx/passes/runtime_assert.py:118] [0/0] [__graph_code] ===== pre insert_deferred_runtime_asserts __compiled_fn_1 =====
V0411 16:03:36.439000 2423193 torch/fx/passes/runtime_assert.py:118] [0/0] [__graph_code] <eval_with_key>.0 class GraphModule(torch.nn.Module):
V0411 16:03:36.439000 2423193 torch/fx/passes/runtime_assert.py:118] [0/0] [__graph_code] def forward(self, L_x_: "f32[1]"):
V0411 16:03:36.439000 2423193 torch/fx/passes/runtime_assert.py:118] [0/0] [__graph_code] l_x_ = L_x_
V0411 16:03:36.439000 2423193 torch/fx/passes/runtime_assert.py:118] [0/0] [__graph_code]
V0411 16:03:36.439000 2423193 torch/fx/passes/runtime_assert.py:118] [0/0] [__graph_code] # File: /home/ryanguo99/scratch/compile-time.py:5 in f, code: x = x.detach()
V0411 16:03:36.439000 2423193 torch/fx/passes/runtime_assert.py:118] [0/0] [__graph_code] x: "f32[1]" = l_x_.detach(); l_x_ = None
V0411 16:03:36.439000 2423193 torch/fx/passes/runtime_assert.py:118] [0/0] [__graph_code]
V0411 16:03:36.439000 2423193 torch/fx/passes/runtime_assert.py:118] [0/0] [__graph_code] # File: /home/ryanguo99/scratch/compile-time.py:6 in f, code: res = x + 1
V0411 16:03:36.439000 2423193 torch/fx/passes/runtime_assert.py:118] [0/0] [__graph_code] res: "f32[1]" = x + 1; x = None
V0411 16:03:36.439000 2423193 torch/fx/passes/runtime_assert.py:118] [0/0] [__graph_code] return (res,)
V0411 16:03:36.439000 2423193 torch/fx/passes/runtime_assert.py:118] [0/0] [__graph_code]
V0411 16:03:36.439000 2423193 torch/fx/passes/runtime_assert.py:118] [0/0] [__graph_code]
V0411 16:03:36.440000 2423193 torch/_dynamo/output_graph.py:1431] [0/0] [__graph_code] TRACED GRAPH
V0411 16:03:36.440000 2423193 torch/_dynamo/output_graph.py:1431] [0/0] [__graph_code] ===== __compiled_fn_1 =====
V0411 16:03:36.440000 2423193 torch/_dynamo/output_graph.py:1431] [0/0] [__graph_code] /home/ryanguo99/repos/pytorch/torch/fx/_lazy_graph_module.py class GraphModule(torch.nn.Module):
V0411 16:03:36.440000 2423193 torch/_dynamo/output_graph.py:1431] [0/0] [__graph_code] def forward(self, L_x_: "f32[1][1]cpu"):
V0411 16:03:36.440000 2423193 torch/_dynamo/output_graph.py:1431] [0/0] [__graph_code] l_x_ = L_x_
V0411 16:03:36.440000 2423193 torch/_dynamo/output_graph.py:1431] [0/0] [__graph_code]
V0411 16:03:36.440000 2423193 torch/_dynamo/output_graph.py:1431] [0/0] [__graph_code] # File: /home/ryanguo99/scratch/compile-time.py:5 in f, code: x = x.detach()
V0411 16:03:36.440000 2423193 torch/_dynamo/output_graph.py:1431] [0/0] [__graph_code] x: "f32[1][1]cpu" = l_x_.detach(); l_x_ = None
V0411 16:03:36.440000 2423193 torch/_dynamo/output_graph.py:1431] [0/0] [__graph_code]
V0411 16:03:36.440000 2423193 torch/_dynamo/output_graph.py:1431] [0/0] [__graph_code] # File: /home/ryanguo99/scratch/compile-time.py:6 in f, code: res = x + 1
V0411 16:03:36.440000 2423193 torch/_dynamo/output_graph.py:1431] [0/0] [__graph_code] res: "f32[1][1]cpu" = x + 1; x = None
V0411 16:03:36.440000 2423193 torch/_dynamo/output_graph.py:1431] [0/0] [__graph_code] return (res,)
V0411 16:03:36.440000 2423193 torch/_dynamo/output_graph.py:1431] [0/0] [__graph_code]
V0411 16:03:36.440000 2423193 torch/_dynamo/output_graph.py:1431] [0/0] [__graph_code]
V0411 16:03:36.531000 2423193 torch/_functorch/_aot_autograd/dispatch_and_compile_graph.py:123] [0/0] [__aot_graphs] aot_config id: 0, fw_metadata=ViewAndMutationMeta(input_info=[InputAliasInfo(is_leaf=True, mutates_data=False, mutates_metadata=False, mutations_hidden_from_autograd=True, mutations_under_no_grad_or_inference_mode=False, mutation_inductor_storage_resize=False, mutates_storage_metadata=False, requires_grad=False, keep_input_mutations=True)], output_info=[OutputAliasInfo(output_type=<OutputType.non_alias: 1>, raw_type=<class 'torch._subclasses.functional_tensor.FunctionalTensor'>, base_idx=None, dynamic_dims=set(), requires_grad=False, functional_tensor=None)], num_intermediate_bases=0, keep_input_mutations=True, traced_tangents=[], subclass_inp_meta=[PlainTensorMeta(unwrapped_idx=0, memory_format=None)], subclass_fw_graph_out_meta=[PlainTensorMeta(unwrapped_idx=0, memory_format=None)], subclass_tangent_meta=[], is_train=False, traced_tangent_metas=None, num_symints_saved_for_bw=None, grad_enabled_mutation=None, deterministic=None, static_input_indices=[], tokens={}, indices_of_inputs_that_requires_grad_with_mutations_in_bw=[], bw_donated_idxs=None, num_backward_tokens=0, num_graphsafe_rng_states=0, graphsafe_rng_state_index=None),subclass_metadata=None
I0411 16:03:36.537000 2423193 torch/_functorch/_aot_autograd/dispatch_and_compile_graph.py:202] [0/0] [__aot_graphs] TRACED GRAPH
I0411 16:03:36.537000 2423193 torch/_functorch/_aot_autograd/dispatch_and_compile_graph.py:202] [0/0] [__aot_graphs] ===== Forward graph 0 =====
I0411 16:03:36.537000 2423193 torch/_functorch/_aot_autograd/dispatch_and_compile_graph.py:202] [0/0] [__aot_graphs] /home/ryanguo99/repos/pytorch/torch/fx/_lazy_graph_module.py class <lambda>(torch.nn.Module):
I0411 16:03:36.537000 2423193 torch/_functorch/_aot_autograd/dispatch_and_compile_graph.py:202] [0/0] [__aot_graphs] def forward(self, arg0_1: "f32[1][1]cpu"):
I0411 16:03:36.537000 2423193 torch/_functorch/_aot_autograd/dispatch_and_compile_graph.py:202] [0/0] [__aot_graphs] # File: /home/ryanguo99/scratch/compile-time.py:5 in f, code: x = x.detach()
I0411 16:03:36.537000 2423193 torch/_functorch/_aot_autograd/dispatch_and_compile_graph.py:202] [0/0] [__aot_graphs] detach: "f32[1][1]cpu" = torch.ops.aten.detach.default(arg0_1); arg0_1 = None
I0411 16:03:36.537000 2423193 torch/_functorch/_aot_autograd/dispatch_and_compile_graph.py:202] [0/0] [__aot_graphs] detach_1: "f32[1][1]cpu" = torch.ops.aten.detach.default(detach); detach = None
I0411 16:03:36.537000 2423193 torch/_functorch/_aot_autograd/dispatch_and_compile_graph.py:202] [0/0] [__aot_graphs] detach_2: "f32[1][1]cpu" = torch.ops.aten.detach.default(detach_1); detach_1 = None
I0411 16:03:36.537000 2423193 torch/_functorch/_aot_autograd/dispatch_and_compile_graph.py:202] [0/0] [__aot_graphs]
I0411 16:03:36.537000 2423193 torch/_functorch/_aot_autograd/dispatch_and_compile_graph.py:202] [0/0] [__aot_graphs] # File: /home/ryanguo99/scratch/compile-time.py:6 in f, code: res = x + 1
I0411 16:03:36.537000 2423193 torch/_functorch/_aot_autograd/dispatch_and_compile_graph.py:202] [0/0] [__aot_graphs] add: "f32[1][1]cpu" = torch.ops.aten.add.Tensor(detach_2, 1); detach_2 = None
I0411 16:03:36.537000 2423193 torch/_functorch/_aot_autograd/dispatch_and_compile_graph.py:202] [0/0] [__aot_graphs] return (add,)
I0411 16:03:36.537000 2423193 torch/_functorch/_aot_autograd/dispatch_and_compile_graph.py:202] [0/0] [__aot_graphs]
I0411 16:03:36.537000 2423193 torch/_functorch/_aot_autograd/dispatch_and_compile_graph.py:202] [0/0] [__aot_graphs]
V0411 16:03:36.537000 2423193 torch/fx/passes/_tensorify_python_scalars.py:364] [0/0] [__graph_code] TRACED GRAPH
V0411 16:03:36.537000 2423193 torch/fx/passes/_tensorify_python_scalars.py:364] [0/0] [__graph_code] ===== tensorify_python_scalars =====
V0411 16:03:36.537000 2423193 torch/fx/passes/_tensorify_python_scalars.py:364] [0/0] [__graph_code] /home/ryanguo99/repos/pytorch/torch/fx/_lazy_graph_module.py class <lambda>(torch.nn.Module):
V0411 16:03:36.537000 2423193 torch/fx/passes/_tensorify_python_scalars.py:364] [0/0] [__graph_code] def forward(self, arg0_1: "f32[1]"):
V0411 16:03:36.537000 2423193 torch/fx/passes/_tensorify_python_scalars.py:364] [0/0] [__graph_code] # File: /home/ryanguo99/scratch/compile-time.py:5 in f, code: x = x.detach()
V0411 16:03:36.537000 2423193 torch/fx/passes/_tensorify_python_scalars.py:364] [0/0] [__graph_code] detach: "f32[1]" = torch.ops.aten.detach.default(arg0_1); arg0_1 = None
V0411 16:03:36.537000 2423193 torch/fx/passes/_tensorify_python_scalars.py:364] [0/0] [__graph_code] detach_1: "f32[1]" = torch.ops.aten.detach.default(detach); detach = None
V0411 16:03:36.537000 2423193 torch/fx/passes/_tensorify_python_scalars.py:364] [0/0] [__graph_code] detach_2: "f32[1]" = torch.ops.aten.detach.default(detach_1); detach_1 = None
V0411 16:03:36.537000 2423193 torch/fx/passes/_tensorify_python_scalars.py:364] [0/0] [__graph_code]
V0411 16:03:36.537000 2423193 torch/fx/passes/_tensorify_python_scalars.py:364] [0/0] [__graph_code] # File: /home/ryanguo99/scratch/compile-time.py:6 in f, code: res = x + 1
V0411 16:03:36.537000 2423193 torch/fx/passes/_tensorify_python_scalars.py:364] [0/0] [__graph_code] add: "f32[1]" = torch.ops.aten.add.Tensor(detach_2, 1); detach_2 = None
V0411 16:03:36.537000 2423193 torch/fx/passes/_tensorify_python_scalars.py:364] [0/0] [__graph_code] return (add,)
```
### Versions
main 1a1a32ce5af, Python 3.11
cc @chauhang @penguinwu @zou3519 @bdhirsh
| true
|
2,989,879,508
|
[AOTInductor] Add Python interface for user managed buffer.
|
muchulee8
|
closed
|
[
"fb-exported",
"Merged",
"ciflow/trunk",
"topic: not user facing",
"module: inductor",
"ciflow/inductor"
] | 6
|
CONTRIBUTOR
|
Summary: Add pybind for user managed buffer in update_constants_buffer.
Test Plan:
Included in commit.
```
python test/inductor/test_aot_inductor.py -k user_managed
```
Differential Revision: D72892310
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @chenyang78 @kadeng @amjames @chauhang @aakhundov
| true
|
2,989,867,686
|
[doc fix] fix torch export docs for preserve_module_call_signature
|
supercharleszhu
|
closed
|
[
"triaged",
"open source",
"Merged",
"ciflow/trunk",
"release notes: export"
] | 5
|
CONTRIBUTOR
|
The preserve_module_call_signature explanation is missing in the __init__.py. Copying that from _trace.py
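For context, a hedged usage sketch of the argument being documented (the module names here are made up):
```python
import torch
from torch.export import export


class Sub(torch.nn.Module):
    def forward(self, x):
        return x + 1


class M(torch.nn.Module):
    def __init__(self):
        super().__init__()
        self.sub = Sub()

    def forward(self, x):
        return self.sub(x) * 2


# Keep `sub`'s original call signature so the exported program can be unflattened later.
ep = export(M(), (torch.randn(3),), preserve_module_call_signature=("sub",))
```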
| true
|
2,989,829,418
|
PyTorch algorithm optimization
|
mikeybydun1
|
closed
|
[] | 1
|
NONE
|
Hello,
I have a PyTorch algorithm model (compiled into a .pt file) that does torch.prod on a tensor of shape (1000, 400, 400, 144).
The algorithm takes 10 seconds on a strong NVIDIA GPU (for example, an A100).
I am trying to find a way to make it run faster.
For now, the only effective optimization was using BFloat16.
Do you have any suggestions for other optimizations?
Thanks!
| true
|
2,989,807,472
|
[ROCm][TunableOp] Support submatrices in offline tuning
|
naromero77amd
|
closed
|
[
"module: rocm",
"triaged",
"open source",
"Merged",
"ciflow/trunk",
"release notes: linalg_frontend",
"ciflow/rocm"
] | 4
|
COLLABORATOR
|
This PR adds support for submatrices in offline tuning for:
- GEMM
- GEMM and bias
- ScaledGEMM
- Batch Strided GEMM
New UTs cover submatrices. Submatrices for the strided batch API are not part of this PR and will be handled separately.
There is also a bug fix for offline tuning of the full matrix for GEMM and bias in the `NT` case. Offline and online UTs were updated to cover this corner case.
To improve code readability, the definitions of transA and transB were swapped.
cc @jeffdaily @sunway513 @jithunnair-amd @pruthvistony @ROCmSupport @dllehr-amd @jataylo @hongxiayang
| true
|
2,989,785,991
|
DISABLED test_cache_load_function_device_cuda_bfloat16_dynamic_False_bundle_triton_True_use_static_cuda_launcher_False_grad_False (__main__.TestFxGraphCache)
|
pytorch-bot[bot]
|
open
|
[
"module: rocm",
"triaged",
"module: flaky-tests",
"skipped",
"oncall: pt2",
"module: inductor"
] | 1
|
NONE
|
Platforms: rocm
This test was disabled because it is failing in CI. See [recent examples](https://hud.pytorch.org/flakytest?name=test_cache_load_function_device_cuda_bfloat16_dynamic_False_bundle_triton_True_use_static_cuda_launcher_False_grad_False&suite=TestFxGraphCache&limit=100) and the most recent trunk [workflow logs](https://github.com/pytorch/pytorch/runs/40409722784).
Over the past 3 hours, it has been determined flaky in 3 workflow(s) with 3 failures and 3 successes.
**Debugging instructions (after clicking on the recent samples link):**
DO NOT ASSUME THINGS ARE OKAY IF THE CI IS GREEN. We now shield flaky tests from developers so CI will thus be green but it will be harder to parse the logs.
To find relevant log snippets:
1. Click on the workflow logs linked above
2. Click on the Test step of the job so that it is expanded. Otherwise, the grepping will not work.
3. Grep for `test_cache_load_function_device_cuda_bfloat16_dynamic_False_bundle_triton_True_use_static_cuda_launcher_False_grad_False`
4. There should be several instances run (as flaky tests are rerun in CI) from which you can study the logs.
<details><summary>Sample error message</summary>
```
Traceback (most recent call last):
File "/var/lib/jenkins/pytorch/test/inductor/test_codecache.py", line 160, in test_cache_load_function
self.assertEqual(
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 4095, in assertEqual
raise error_metas.pop()[0].to_error( # type: ignore[index]
AssertionError: Scalars are not equal!
Expected 7 but got 14.
Absolute difference: 7
Relative difference: 1.0
To execute this test, run the following from the base repo dir:
PYTORCH_TEST_WITH_ROCM=1 python test/inductor/test_codecache.py TestFxGraphCache.test_cache_load_function_device_cuda_bfloat16_dynamic_False_bundle_triton_True_use_static_cuda_launcher_False_grad_False
This message can be suppressed by setting PYTORCH_PRINT_REPRO_ON_FAILURE=0
```
</details>
Test file path: `inductor/test_codecache.py`
cc @jeffdaily @sunway513 @jithunnair-amd @pruthvistony @ROCmSupport @dllehr-amd @jataylo @hongxiayang @naromero77amd @clee2000 @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
| true
|
2,989,785,951
|
DISABLED test_parity__foreach_acos_fastpath_outplace_cuda_float32 (__main__.TestForeachCUDA)
|
pytorch-bot[bot]
|
open
|
[
"triaged",
"module: flaky-tests",
"skipped",
"module: mta"
] | 4
|
NONE
|
Platforms: linux, slow
This test was disabled because it is failing in CI. See [recent examples](https://hud.pytorch.org/flakytest?name=test_parity__foreach_acos_fastpath_outplace_cuda_float32&suite=TestForeachCUDA&limit=100) and the most recent trunk [workflow logs](https://github.com/pytorch/pytorch/runs/40412598855).
Over the past 3 hours, it has been determined flaky in 8 workflow(s) with 16 failures and 8 successes.
**Debugging instructions (after clicking on the recent samples link):**
DO NOT ASSUME THINGS ARE OKAY IF THE CI IS GREEN. We now shield flaky tests from developers so CI will thus be green but it will be harder to parse the logs.
To find relevant log snippets:
1. Click on the workflow logs linked above
2. Click on the Test step of the job so that it is expanded. Otherwise, the grepping will not work.
3. Grep for `test_parity__foreach_acos_fastpath_outplace_cuda_float32`
4. There should be several instances run (as flaky tests are rerun in CI) from which you can study the logs.
Test file path: `test_foreach.py`
cc @clee2000 @crcrpar @mcarilli @janeyx99
| true
|