| id | title | user | state | labels | comments | author_association | body | is_title |
|---|---|---|---|---|---|---|---|---|
2,794,151,479
|
unexpected behaviour of `torch.chunk`
|
nitaifingerhut
|
closed
|
[] | 1
|
NONE
|
### 🐛 Describe the bug
Consider the following code:
```py
x = torch.rand(4, 3, 128, 128)
chunks_list = torch.chunk(x, 3, 0)
```
The resulting `chunks_list` is of length 2, each element with shape (2, 3, 128, 128).
According to the [docs](https://pytorch.org/docs/main/generated/torch.chunk.html):
_"If the tensor size along the given dimension dim is not divisible by chunks, all returned chunks will be the same size, except the last one."_ - so either the code is not correct or the docs aren't, as it returned only 2 chunks (not 3), both of the same shape even though stated otherwise above.
### Versions
```sh
Collecting environment information...
PyTorch version: 2.4.1
Is debug build: False
CUDA used to build PyTorch: None
ROCM used to build PyTorch: N/A
OS: macOS 15.2 (arm64)
GCC version: Could not collect
Clang version: 16.0.0 (clang-1600.0.26.3)
CMake version: version 3.31.1
Libc version: N/A
Python version: 3.10.15 | packaged by conda-forge | (main, Oct 16 2024, 01:24:20) [Clang 17.0.6 ] (64-bit runtime)
Python platform: macOS-15.2-arm64-arm-64bit
Is CUDA available: False
CUDA runtime version: No CUDA
CUDA_MODULE_LOADING set to: N/A
GPU models and configuration: No CUDA
Nvidia driver version: No CUDA
cuDNN version: No CUDA
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Apple M1 Pro
Versions of relevant libraries:
[pip3] mypy-extensions==1.0.0
[pip3] numpy==1.26.4
[pip3] onnx==1.14.1
[pip3] onnxoptimizer==0.3.13
[pip3] onnxruntime==1.16.0
[pip3] torch==2.4.1
[pip3] torch-tb-profiler==0.4.3
[pip3] torchvision==0.19.1
[conda] numpy 1.26.4 py310hd45542a_0 conda-forge
[conda] torch 2.4.1 pypi_0 pypi
[conda] torch-tb-profiler 0.4.3 pypi_0 pypi
[conda] torchvision 0.19.1 pypi_0 pypi
```
| true
|
2,794,146,255
|
[pytorch/ncclx] Remove Alltoallv specialization for PTD all_to_all
|
pavanbalaji
|
closed
|
[
"fb-exported",
"Stale"
] | 5
|
CONTRIBUTOR
|
Summary:
PTD all_to_all uses a list of tensors, while ncclAllToAllv (provided
by NCCLX and RCCL) assumes that a single contiguous buffer is used.
These are fundamentally mismatched. The list of tensors might not be
contiguous or even ordered (buffer addresses might not be in
increasing order).
This patch removes the ncclAllToAllv specialization for PTD
all_to_all, and instead lets it directly call ncclSend/ncclRecv.
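A rough Python-level sketch (not the actual patch, which is in C++) of how a list-of-tensors all_to_all can be expressed with point-to-point send/recv, which tolerates non-contiguous and unordered buffers; `all_to_all_via_p2p` is a hypothetical helper name:
```py
import torch.distributed as dist

def all_to_all_via_p2p(output_list, input_list, group=None):
    """Exchange one tensor with every peer using batched isend/irecv."""
    rank = dist.get_rank(group)
    world_size = dist.get_world_size(group)
    ops = []
    for peer in range(world_size):
        if peer == rank:
            output_list[peer].copy_(input_list[peer])  # local slot needs no comm
            continue
        ops.append(dist.P2POp(dist.isend, input_list[peer], peer, group))
        ops.append(dist.P2POp(dist.irecv, output_list[peer], peer, group))
    if ops:
        for work in dist.batch_isend_irecv(ops):
            work.wait()
```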
Test Plan: CI
Differential Revision: D68289467
| true
|
2,794,142,116
|
Made partitioning more(?) deterministic
|
Chillee
|
open
|
[
"open source",
"Stale",
"ciflow/inductor"
] | 3
|
COLLABORATOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #145024
* #145059
* #145029
| true
|
2,794,137,174
|
[MPSInductor] Add `Worker.current_device` method
|
malfet
|
closed
|
[
"Merged",
"topic: not user facing",
"ciflow/mps",
"module: inductor",
"module: dynamo",
"ciflow/inductor"
] | 3
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #145023
That just returns 0, as multi-gpu is not currently supported by MPS
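A minimal illustrative sketch of such a method (not the actual MPSInductor code):
```py
class Worker:
    @staticmethod
    def current_device() -> int:
        # MPS exposes a single device, so the "current" device index is always 0.
        return 0
```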
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @ColinPeppler @amjames @desertfire @chauhang @aakhundov
| true
|
2,794,101,148
|
[executorch hash update] update the pinned executorch hash
|
pytorchupdatebot
|
closed
|
[
"open source",
"Merged",
"ciflow/trunk",
"topic: not user facing",
"ciflow/inductor"
] | 3
|
COLLABORATOR
|
This PR is auto-generated nightly by [this action](https://github.com/pytorch/pytorch/blob/main/.github/workflows/nightly.yml).
Update the pinned executorch hash.
| true
|
2,794,082,086
|
[CD] Annotate linux/arm64 cuda wheels with consistent nvidia dependencies
|
tmm1
|
closed
|
[
"triaged",
"open source",
"Merged",
"topic: not user facing",
"ciflow/binaries_wheel"
] | 19
|
CONTRIBUTOR
|
This resolves issues installing torch nightly wheels into a `uv sync`-generated `.venv`.
The root cause is that the x64 and arm64 cuda nightly wheels have inconsistent metadata. This can be seen by comparing `generated-linux-aarch64-binary-manywheel-nightly.yml` and `generated-linux-binary-manywheel-nightly.yml`.
`uv` expects consistency:
https://github.com/astral-sh/uv/issues/10693
>Frankly, it's really not ideal that they change their dependencies from wheel to wheel.
>They could still put the dependencies there with the same platform markers they're using in the other wheel though... 🤷♀
https://github.com/astral-sh/uv/issues/10119#issuecomment-2559898792
>I think this is something that basically has to be solved by PyTorch. The issue is that the wheels for `2.6.0.dev20241222+cu126` don't have consistent metadata, and it's a fundamental assumption of uv that the metadata for a given version _is_ consistent.
To resolve this, I modified the arm64 nightly build workflow to add two new `PYTORCH_EXTRA_INSTALL_REQUIREMENTS` entries, under `manywheel-py3_11-cuda-aarch64-build` and `manywheel-py3_12-cuda-aarch64-build`. These are based on their equivalents in the x64 workflow for the corresponding python versions.
I used the cuda 12.6 dependencies versions for the nvidia packages, to match the `DOCKER_IMAGE: pytorch/manylinuxaarch64-builder:cuda12.6-main` being used by these jobs.
(The arm64 workflow file already had several `PYTORCH_EXTRA_INSTALL_REQUIREMENTS` entries, under various cpu wheels. I'm not sure why these are there, but I left them as-is.)
| true
|
2,794,077,447
|
Prevent legacy_load when weights_only=True (correctly)
|
mikaylagawarecki
|
closed
|
[
"Merged",
"ciflow/trunk",
"release notes: python_frontend",
"topic: improvements",
"ciflow/inductor"
] | 11
|
CONTRIBUTOR
|
Only prevent `legacy_load` (.tar format removed in https://github.com/pytorch/pytorch/pull/713), not the whole of `_legacy_load` (.tar format + _use_new_zipfile_serialization=False)
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #145020
Differential Revision: [D68301405](https://our.internmc.facebook.com/intern/diff/D68301405)
| true
|
2,794,055,572
|
DISABLED test_sparse_add_cuda_float64 (__main__.TestSparseCSRCUDA)
|
pytorch-bot[bot]
|
open
|
[
"module: sparse",
"module: rocm",
"triaged",
"module: flaky-tests",
"skipped"
] | 17
|
NONE
|
Platforms: rocm
This test was disabled because it is failing in CI. See [recent examples](https://hud.pytorch.org/flakytest?name=test_sparse_add_cuda_float64&suite=TestSparseCSRCUDA&limit=100) and the most recent trunk [workflow logs](https://github.com/pytorch/pytorch/runs/35738066324).
Over the past 3 hours, it has been determined flaky in 3 workflow(s) with 4 failures and 3 successes.
**Debugging instructions (after clicking on the recent samples link):**
DO NOT ASSUME THINGS ARE OKAY IF THE CI IS GREEN. We now shield flaky tests from developers so CI will thus be green but it will be harder to parse the logs.
To find relevant log snippets:
1. Click on the workflow logs linked above
2. Click on the Test step of the job so that it is expanded. Otherwise, the grepping will not work.
3. Grep for `test_sparse_add_cuda_float64`
4. There should be several instances run (as flaky tests are rerun in CI) from which you can study the logs.
<details><summary>Sample error message</summary>
```
Traceback (most recent call last):
File "/var/lib/jenkins/pytorch/test/test_sparse_csr.py", line 2338, in test_sparse_add
run_test(m, n, index_dtype)
File "/var/lib/jenkins/pytorch/test/test_sparse_csr.py", line 2330, in run_test
self.assertEqual(actual, expected)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 4036, in assertEqual
raise error_metas.pop()[0].to_error( # type: ignore[index]
AssertionError: Tensor-likes are not close!
Mismatched elements: 3 / 15 (20.0%)
Greatest absolute difference: 5.944452556227633 at index (4, 0) (up to 1e-07 allowed)
Greatest relative difference: inf at index (4, 1) (up to 1e-07 allowed)
To execute this test, run the following from the base repo dir:
PYTORCH_TEST_WITH_ROCM=1 python test/test_sparse_csr.py TestSparseCSRCUDA.test_sparse_add_cuda_float64
This message can be suppressed by setting PYTORCH_PRINT_REPRO_ON_FAILURE=0
```
</details>
Test file path: `test_sparse_csr.py`
cc @alexsamardzic @nikitaved @pearu @cpuhrsch @amjames @bhosmer @jcaip @jeffdaily @sunway513 @jithunnair-amd @pruthvistony @ROCmSupport @dllehr-amd @jataylo @hongxiayang @naromero77amd @clee2000 @wdvr
| true
|
2,794,055,554
|
DISABLED test_sparse_add_cuda_float64 (__main__.TestSparseCSRCUDA)
|
pytorch-bot[bot]
|
closed
|
[
"module: sparse",
"module: rocm",
"triaged",
"module: flaky-tests",
"skipped"
] | 2
|
NONE
|
Platforms: rocm
This test was disabled because it is failing in CI. See [recent examples](https://hud.pytorch.org/flakytest?name=test_sparse_add_cuda_float64&suite=TestSparseCSRCUDA&limit=100) and the most recent trunk [workflow logs](https://github.com/pytorch/pytorch/runs/35738066324).
Over the past 3 hours, it has been determined flaky in 3 workflow(s) with 4 failures and 3 successes.
**Debugging instructions (after clicking on the recent samples link):**
DO NOT ASSUME THINGS ARE OKAY IF THE CI IS GREEN. We now shield flaky tests from developers so CI will thus be green but it will be harder to parse the logs.
To find relevant log snippets:
1. Click on the workflow logs linked above
2. Click on the Test step of the job so that it is expanded. Otherwise, the grepping will not work.
3. Grep for `test_sparse_add_cuda_float64`
4. There should be several instances run (as flaky tests are rerun in CI) from which you can study the logs.
<details><summary>Sample error message</summary>
```
Traceback (most recent call last):
File "/var/lib/jenkins/pytorch/test/test_sparse_csr.py", line 2338, in test_sparse_add
run_test(m, n, index_dtype)
File "/var/lib/jenkins/pytorch/test/test_sparse_csr.py", line 2330, in run_test
self.assertEqual(actual, expected)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 4036, in assertEqual
raise error_metas.pop()[0].to_error( # type: ignore[index]
AssertionError: Tensor-likes are not close!
Mismatched elements: 3 / 15 (20.0%)
Greatest absolute difference: 5.944452556227633 at index (4, 0) (up to 1e-07 allowed)
Greatest relative difference: inf at index (4, 1) (up to 1e-07 allowed)
To execute this test, run the following from the base repo dir:
PYTORCH_TEST_WITH_ROCM=1 python test/test_sparse_csr.py TestSparseCSRCUDA.test_sparse_add_cuda_float64
This message can be suppressed by setting PYTORCH_PRINT_REPRO_ON_FAILURE=0
```
</details>
Test file path: `test_sparse_csr.py`
cc @alexsamardzic @nikitaved @pearu @cpuhrsch @amjames @bhosmer @jcaip @jeffdaily @sunway513 @jithunnair-amd @pruthvistony @ROCmSupport @dllehr-amd @jataylo @hongxiayang @naromero77amd @clee2000 @wdvr
| true
|
2,794,055,530
|
DISABLED test_equivalent_template_code (__main__.BenchmarkMultiTemplateFusionCudaTest)
|
pytorch-bot[bot]
|
open
|
[
"module: rocm",
"triaged",
"module: flaky-tests",
"skipped",
"oncall: pt2",
"module: inductor"
] | 18
|
NONE
|
Platforms: rocm
This test was disabled because it is failing in CI. See [recent examples](https://hud.pytorch.org/flakytest?name=test_equivalent_template_code&suite=BenchmarkMultiTemplateFusionCudaTest&limit=100) and the most recent trunk [workflow logs](https://github.com/pytorch/pytorch/runs/35738064722).
Over the past 3 hours, it has been determined flaky in 3 workflow(s) with 3 failures and 3 successes.
**Debugging instructions (after clicking on the recent samples link):**
DO NOT ASSUME THINGS ARE OKAY IF THE CI IS GREEN. We now shield flaky tests from developers so CI will thus be green but it will be harder to parse the logs.
To find relevant log snippets:
1. Click on the workflow logs linked above
2. Click on the Test step of the job so that it is expanded. Otherwise, the grepping will not work.
3. Grep for `test_equivalent_template_code`
4. There should be several instances run (as flaky tests are rerun in CI) from which you can study the logs.
<details><summary>Sample error message</summary>
```
Traceback (most recent call last):
File "/var/lib/jenkins/pytorch/test/inductor/test_benchmark_fusion.py", line 286, in test_equivalent_template_code
).run(
RuntimeError: Expected to find "triton_tem_fused_addmm_relu_0.run" but did not find it
Searched string:
with torch.cuda._DeviceGuard(0):
torch.cuda.set_device(0)
buf0 = empty_strided_cuda((256, 256), (256, 1), torch.float16)
# Topologically Sorted Source Nodes: [a], Original ATen: [aten.addmm]
stream0 = get_raw_stream(0)
triton_tem_fused_addmm_0.run(arg2_1, arg0_1, buf0, grid=torch._inductor.kernel.mm_common.mm_grid(256, 256, meta0), stream=stream0)
del arg0_1
del arg2_1
buf1 = buf0; del buf0 # reuse
# Topologically Sorted Source Nodes: [a, relu], Original ATen: [aten.addmm, aten.relu]
stream0 = get_raw_stream(0)
triton_poi_fused_addmm_relu_1.run(buf1, arg1_1, 65536, grid=grid(65536), stream=stream0)
del arg1_1
return (buf1, )
def benchmark_compiled_module(times=10, repeat=10):
from torch._dynamo.testing import rand_strided
from torch._inductor.utils import print_performance
arg0_1 = rand_strided((256, 256), (256, 1), device='cuda:0', dtype=torch.float16)
arg1_1 = rand_strided((256, ), (1, ), device='cuda:0', dtype=torch.float16)
arg2_1 = rand_strided((256, 256), (256, 1), device='cuda:0', dtype=torch.float16)
fn = lambda: call([arg0_1, arg1_1, arg2_1])
return print_performance(fn, times=times, repeat=repeat)
if __name__ == "__main__":
from torch._inductor.wrapper_benchmark import compiled_module_main
compiled_module_main('None', benchmark_compiled_module)
From CHECK: triton_tem_fused_addmm_relu_0.run
To execute this test, run the following from the base repo dir:
PYTORCH_TEST_WITH_ROCM=1 python test/inductor/test_benchmark_fusion.py BenchmarkMultiTemplateFusionCudaTest.test_equivalent_template_code
This message can be suppressed by setting PYTORCH_PRINT_REPRO_ON_FAILURE=0
```
</details>
Test file path: `inductor/test_benchmark_fusion.py`
cc @jeffdaily @sunway513 @jithunnair-amd @pruthvistony @ROCmSupport @dllehr-amd @jataylo @hongxiayang @naromero77amd @clee2000 @wdvr @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @ColinPeppler @amjames @desertfire @chauhang @aakhundov
| true
|
2,794,055,499
|
DISABLED test_equivalent_template_code (__main__.BenchmarkMultiTemplateFusionCudaTest)
|
pytorch-bot[bot]
|
closed
|
[
"module: rocm",
"triaged",
"module: flaky-tests",
"skipped",
"oncall: pt2",
"module: inductor"
] | 1
|
NONE
|
Platforms: rocm
This test was disabled because it is failing in CI. See [recent examples](https://hud.pytorch.org/flakytest?name=test_equivalent_template_code&suite=BenchmarkMultiTemplateFusionCudaTest&limit=100) and the most recent trunk [workflow logs](https://github.com/pytorch/pytorch/runs/35738064722).
Over the past 3 hours, it has been determined flaky in 3 workflow(s) with 3 failures and 3 successes.
**Debugging instructions (after clicking on the recent samples link):**
DO NOT ASSUME THINGS ARE OKAY IF THE CI IS GREEN. We now shield flaky tests from developers so CI will thus be green but it will be harder to parse the logs.
To find relevant log snippets:
1. Click on the workflow logs linked above
2. Click on the Test step of the job so that it is expanded. Otherwise, the grepping will not work.
3. Grep for `test_equivalent_template_code`
4. There should be several instances run (as flaky tests are rerun in CI) from which you can study the logs.
<details><summary>Sample error message</summary>
```
Traceback (most recent call last):
File "/var/lib/jenkins/pytorch/test/inductor/test_benchmark_fusion.py", line 286, in test_equivalent_template_code
).run(
RuntimeError: Expected to find "triton_tem_fused_addmm_relu_0.run" but did not find it
Searched string:
with torch.cuda._DeviceGuard(0):
torch.cuda.set_device(0)
buf0 = empty_strided_cuda((256, 256), (256, 1), torch.float16)
# Topologically Sorted Source Nodes: [a], Original ATen: [aten.addmm]
stream0 = get_raw_stream(0)
triton_tem_fused_addmm_0.run(arg2_1, arg0_1, buf0, grid=torch._inductor.kernel.mm_common.mm_grid(256, 256, meta0), stream=stream0)
del arg0_1
del arg2_1
buf1 = buf0; del buf0 # reuse
# Topologically Sorted Source Nodes: [a, relu], Original ATen: [aten.addmm, aten.relu]
stream0 = get_raw_stream(0)
triton_poi_fused_addmm_relu_1.run(buf1, arg1_1, 65536, grid=grid(65536), stream=stream0)
del arg1_1
return (buf1, )
def benchmark_compiled_module(times=10, repeat=10):
from torch._dynamo.testing import rand_strided
from torch._inductor.utils import print_performance
arg0_1 = rand_strided((256, 256), (256, 1), device='cuda:0', dtype=torch.float16)
arg1_1 = rand_strided((256, ), (1, ), device='cuda:0', dtype=torch.float16)
arg2_1 = rand_strided((256, 256), (256, 1), device='cuda:0', dtype=torch.float16)
fn = lambda: call([arg0_1, arg1_1, arg2_1])
return print_performance(fn, times=times, repeat=repeat)
if __name__ == "__main__":
from torch._inductor.wrapper_benchmark import compiled_module_main
compiled_module_main('None', benchmark_compiled_module)
From CHECK: triton_tem_fused_addmm_relu_0.run
To execute this test, run the following from the base repo dir:
PYTORCH_TEST_WITH_ROCM=1 python test/inductor/test_benchmark_fusion.py BenchmarkMultiTemplateFusionCudaTest.test_equivalent_template_code
This message can be suppressed by setting PYTORCH_PRINT_REPRO_ON_FAILURE=0
```
</details>
Test file path: `inductor/test_benchmark_fusion.py`
cc @jeffdaily @sunway513 @jithunnair-amd @pruthvistony @ROCmSupport @dllehr-amd @jataylo @hongxiayang @naromero77amd @clee2000 @wdvr @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @ColinPeppler @amjames @desertfire @chauhang @aakhundov
| true
|
2,793,971,487
|
[BE] Remove conda from scripts and build files Part 2
|
atalman
|
closed
|
[
"Merged",
"topic: not user facing"
] | 3
|
CONTRIBUTOR
|
Continuation of https://github.com/pytorch/pytorch/pull/144870
Remove conda logic from scripts:
1. Remove conda build from triton build script
2. Remove conda checks from setup.py
3. Remove conda from release scripts
4. Script read_conda_versions.sh is not used (checked via git grep)
Related to: https://github.com/pytorch/pytorch/issues/138506
| true
|
2,793,922,080
|
Add ATen functions in native_functions.yaml to torch_in_graph_functions list automatically
|
yanboliang
|
open
|
[
"triaged",
"module: dynamo"
] | 0
|
CONTRIBUTOR
|
### 🐛 Describe the bug
Currently, whenever a new native function is added, we must manually add an entry to `torch/_dynamo/trace_rules.torch_c_binding_in_graph_functions` to ensure that Dynamo includes it in the FX graph during tracing. If this step is missed, the pull request introducing the native function may encounter confusing CI failures (e.g. #132135). For example, a PR author intending to add a native function for eager mode might see numerous compile-related test failures, which can be extremely challenging for open-source contributors to diagnose and resolve.
To address this, we propose introducing a helper function that automatically adds all ATen functions from `native_functions.yaml` to the `torch_c_binding_in_graph_functions` list. Since changes to ATen functions or Torch-level APIs are the most common scenarios, this solution would cover the majority of use cases and significantly alleviate the current pain points.
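A rough sketch of the proposed helper, assuming the standard `native_functions.yaml` layout where each entry carries a `func` schema string (how the resulting names would be keyed into `torch_c_binding_in_graph_functions` is intentionally left out):
```py
import yaml

def aten_op_names(path="aten/src/ATen/native/native_functions.yaml"):
    """Collect base ATen op names from native_functions.yaml."""
    with open(path) as f:
        entries = yaml.safe_load(f)
    # "chunk(Tensor self, int chunks, int dim=0) -> Tensor[]" -> "chunk"
    return sorted({e["func"].split("(", 1)[0].split(".", 1)[0] for e in entries})
```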
### Versions
main
cc @chauhang @penguinwu @voznesenskym @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @amjames @albanD @anijain2305 @zou3519
| true
|
2,793,921,912
|
easy: Fix missing tab in test/dynamo/test_compile.py
|
c00w
|
closed
|
[
"Merged",
"topic: not user facing",
"module: dynamo",
"ciflow/inductor"
] | 3
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #145013
It turns out that if you request a merge on a PyTorch PR, then push a fix for a bad rebase, and the test is
relatively new, the merge will go through with the previous commit and not notice the test break.
Explicitly running the test now passes (it previously failed), and this is just the last missing commit from https://github.com/pytorch/pytorch/pull/144817
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @chauhang @amjames
| true
|
2,793,905,843
|
upgrade to sccache 0.9.1 - dealing with nvcc -E correctly
|
wdvr
|
closed
|
[
"Merged",
"ciflow/binaries",
"ciflow/trunk",
"topic: not user facing",
"ciflow/inductor"
] | 9
|
CONTRIBUTOR
|
sccache 0.9.1 should be dealing with `nvcc -E` correctly
see https://github.com/mozilla/sccache/pull/2300
If this works as expected, we can get rid of this code:
https://github.com/pytorch/pytorch/pull/142813/files
cc @malfet
| true
|
2,793,902,615
|
composability test cleanup
|
wconstab
|
closed
|
[
"oncall: distributed",
"Merged",
"ciflow/trunk",
"topic: not user facing",
"merging"
] | 4
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #145125
* #144834
* #145099
* __->__ #145011
* #145010
Minor changes to test the public PP API instead of the internal/private one, and
also to save a few lines of code for microbatch splitting in the process.
cc @H-Huang @awgu @kwen2501 @wanchaol @fegin @fduwjj @wz337 @d4l3k @c-p-i-o
| true
|
2,793,872,893
|
[Pipelining] Relax scale_grads assert
|
wconstab
|
closed
|
[
"oncall: distributed",
"Merged",
"ciflow/trunk",
"topic: not user facing"
] | 3
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #144834
* #145099
* #145011
* __->__ #145010
The assert felt morally valid: if no gradients are scaled, then something
is definitely wrong with the setup. In one instance, PP +
optimizer-in-backward (in torchtitan) resulted in grad=None after
running .backward() and before scaling grads.
On the other hand, the existing assert is too restrictive. It's
possible that a model used with pipelining would have some parameters
that do not receive gradients, and we shouldn't hard-error in these
cases. (E.g. if the parameter is literally not used, or is frozen).
In the extreme case, the whole stage could be frozen. So we do not
complain if no grads are scaled.
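A hypothetical sketch of the relaxed behavior described above (the structure is illustrative, not the actual pipelining code):
```py
def scale_grads(params, scale_factor):
    """Scale whatever gradients exist; tolerate frozen or unused parameters."""
    found_grad = False
    for p in params:
        if p.grad is not None:
            p.grad.mul_(scale_factor)
            found_grad = True
    # Previously: assert found_grad -- too strict, since an entire stage may be
    # frozen or simply not receive gradients. Now we just report what happened.
    return found_grad
```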
cc @H-Huang @awgu @kwen2501 @wanchaol @fegin @fduwjj @wz337 @d4l3k @c-p-i-o
| true
|
2,793,854,711
|
Ensuring Infiniband is setup for torch run trainings and debugging it
|
ArkashJ
|
closed
|
[] | 2
|
NONE
|
>
> In case of InfiniBand issues, here are the steps to make sure that IP over InfiniBand (IPoIB) is set up correctly:
> 1) Verify IPoIB Devices:
> `/etc/init.d/openibd status`
> Ensure that the devices are correctly listed (shows IPoIB devices)
>
> 2) Check Permissions: Make sure you have the required root permissions for /dev/infiniband/umad0 before running InfiniBand-specific tools like ibping.
>
> 3) Driver Verification: Ensure that the necessary Mellanox drivers are installed. `ofed_info -s`
>
> 4) Network Interfaces Configuration: Verify the `/etc/network/interfaces` file to ensure that your InfiniBand interfaces are correctly configured.
>
> 5) Connection Status: run `ip link show ibx` to verify that the InfiniBand link layer is still up
> List of useful pages
> - https://network.nvidia.com/products/infiniband-drivers/linux/mlnx_ofed/
> - https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/networking_guide/sec-configuring_the_base_rdma_subsystem#sec-Configuring_the_Base_RDMA_Subsystem
> - https://docs.oracle.com/cd/E19436-01/820-3522-10/ch4-linux.html
> - https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/8/html/configuring_infiniband_and_rdma_networks/configuring-ipoib_configuring-infiniband-and-rdma-networks#configuring-an-ipoib-connection-by-using-the-network-system-role_configuring-ipoib
>
_Originally posted by @ArkashJ in [#144779](https://github.com/pytorch/pytorch/issues/144779#issuecomment-2596971921)_
| true
|
2,793,853,777
|
[Pipelining] Relax scale_grads assert
|
wconstab
|
closed
|
[
"oncall: distributed"
] | 2
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* (to be filled)
The assert is morally valid: if no gradients are scaled, then something
is definitely wrong with the setup. In one instance, PP +
optimizer-in-backward (in torchtitan) resulted in grad=None after
running .backward() and before scaling grads. This is obviously not a
correct scenario.
On the other hand, the existing assert is too restrictive. It's
possible that a model used with pipelining would have some parameters
that do not receive gradients, and we shouldn't hard-error in these
cases. (E.g. if the parameter is literally not used, or is frozen).
cc @H-Huang @awgu @kwen2501 @wanchaol @fegin @fduwjj @wz337 @d4l3k @c-p-i-o
| true
|
2,793,848,519
|
[Pipelining] Relax scale_grads assert
|
wconstab
|
closed
|
[
"oncall: distributed"
] | 2
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* (to be filled)
The assert is morally valid: if no gradients are scaled, then something
is definitely wrong with the setup. In one instance, PP +
optimizer-in-backward (in torchtitan) resulted in grad=None after
running .backward() and before scaling grads. This is obviously not a
correct scenario.
On the other hand, the existing assert is too restrictive. It's
possible that a model used with pipelining would have some parameters
that do not receive gradients, and we shouldn't hard-error in these
cases. (E.g. if the parameter is literally not used, or is frozen).
cc @H-Huang @awgu @kwen2501 @wanchaol @fegin @fduwjj @wz337 @d4l3k @c-p-i-o
| true
|
2,793,848,089
|
[Pipelining] Improve shape inference debug logging
|
wconstab
|
closed
|
[
"oncall: distributed"
] | 3
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* (to be filled)
Remove the log that just said "running forward", since that is not very useful
on its own, and replace it with an equivalent log that reports both input
and output shapes after running forward.
Note: enabled by `TORCH_LOGS=+pp`
Example:
```
[rank0]:V0115 13:28:58.282000 3908366 torch/distributed/pipelining/stage.py:1400] Shape inference: stage 0 inputs (tensor(..., device='meta', size=(1, 64), dtype=torch.int64),), outputs (tensor(..., device='meta', size=(1, 64, 256), dtype=torch.bfloat16),)
```
cc @H-Huang @awgu @kwen2501 @wanchaol @fegin @fduwjj @wz337 @d4l3k @c-p-i-o
| true
|
2,793,830,121
|
DISABLED test_cow_input_masked_argmin_cuda_float32 (__main__.TestCompositeComplianceCUDA)
|
pytorch-bot[bot]
|
closed
|
[
"module: rocm",
"triaged",
"module: flaky-tests",
"skipped",
"module: unknown"
] | 2
|
NONE
|
Platforms: rocm
This test was disabled because it is failing in CI. See [recent examples](https://hud.pytorch.org/flakytest?name=test_cow_input_masked_argmin_cuda_float32&suite=TestCompositeComplianceCUDA&limit=100) and the most recent trunk [workflow logs](https://github.com/pytorch/pytorch/runs/35727153944).
Over the past 3 hours, it has been determined flaky in 3 workflow(s) with 3 failures and 3 successes.
**Debugging instructions (after clicking on the recent samples link):**
DO NOT ASSUME THINGS ARE OKAY IF THE CI IS GREEN. We now shield flaky tests from developers so CI will thus be green but it will be harder to parse the logs.
To find relevant log snippets:
1. Click on the workflow logs linked above
2. Click on the Test step of the job so that it is expanded. Otherwise, the grepping will not work.
3. Grep for `test_cow_input_masked_argmin_cuda_float32`
4. There should be several instances run (as flaky tests are rerun in CI) from which you can study the logs.
<details><summary>Sample error message</summary>
```
Traceback (most recent call last):
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_device_type.py", line 1167, in test_wrapper
return test(*args, **kwargs)
File "/var/lib/jenkins/pytorch/test/test_ops.py", line 1905, in test_cow_input
check_cow_input(arg, args_copy[idx], idx)
File "/var/lib/jenkins/pytorch/test/test_ops.py", line 1858, in check_cow_input
self.assertTrue(
File "/opt/conda/envs/py_3.10/lib/python3.10/unittest/case.py", line 687, in assertTrue
raise self.failureException(msg)
AssertionError: False is not true : Argument 0 during forward call avoided materialization, but the operation mutated its data.
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 3128, in wrapper
method(*args, **kwargs)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 3128, in wrapper
method(*args, **kwargs)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_device_type.py", line 465, in instantiated_test
result = test(self, **param_kwargs)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 1628, in wrapper
fn(*args, **kwargs)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_device_type.py", line 1179, in test_wrapper
raise e_tracked from e
Exception: Caused by sample input at index 24: SampleInput(input=Tensor[size=(3, 5), device="cuda:0", dtype=torch.float32], args=(), kwargs={'mask': 'Tensor[size=(3, 5), device="cuda:0", dtype=torch.bool]', 'dim': '-1', 'keepdim': 'False'}, broadcasts_input=False, name='')
To execute this test, run the following from the base repo dir:
PYTORCH_OPINFO_SAMPLE_INPUT_INDEX=24 PYTORCH_TEST_WITH_ROCM=1 python test/test_ops.py TestCompositeComplianceCUDA.test_cow_input_masked_argmin_cuda_float32
This message can be suppressed by setting PYTORCH_PRINT_REPRO_ON_FAILURE=0
```
</details>
Test file path: `test_ops.py`
cc @jeffdaily @sunway513 @jithunnair-amd @pruthvistony @ROCmSupport @dllehr-amd @jataylo @hongxiayang @naromero77amd @clee2000 @wdvr
| true
|
2,793,830,107
|
DISABLED test_list_clearing_dynamic_shapes_cuda (__main__.DynamicShapesGPUTests)
|
pytorch-bot[bot]
|
closed
|
[
"module: rocm",
"triaged",
"module: flaky-tests",
"skipped",
"oncall: pt2",
"module: inductor"
] | 22
|
NONE
|
Platforms: rocm
This test was disabled because it is failing in CI. See [recent examples](https://hud.pytorch.org/flakytest?name=test_list_clearing_dynamic_shapes_cuda&suite=DynamicShapesGPUTests&limit=100) and the most recent trunk [workflow logs](https://github.com/pytorch/pytorch/runs/35728356843).
Over the past 3 hours, it has been determined flaky in 8 workflow(s) with 8 failures and 8 successes.
**Debugging instructions (after clicking on the recent samples link):**
DO NOT ASSUME THINGS ARE OKAY IF THE CI IS GREEN. We now shield flaky tests from developers so CI will thus be green but it will be harder to parse the logs.
To find relevant log snippets:
1. Click on the workflow logs linked above
2. Click on the Test step of the job so that it is expanded. Otherwise, the grepping will not work.
3. Grep for `test_list_clearing_dynamic_shapes_cuda`
4. There should be several instances run (as flaky tests are rerun in CI) from which you can study the logs.
<details><summary>Sample error message</summary>
```
Traceback (most recent call last):
File "/var/lib/jenkins/pytorch/test/inductor/test_torchinductor.py", line 9819, in test_list_clearing
fn_compiled(inps)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_inductor/output_code.py", line 464, in __call__
return self.current_callable(inputs)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_inductor/compile_fx.py", line 1228, in run
return compiled_fn(new_inputs)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_inductor/cudagraph_trees.py", line 397, in deferred_cudagraphify
fn, out = cudagraphify(model, inputs, new_static_input_idxs, *args, **kwargs)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_inductor/cudagraph_trees.py", line 427, in cudagraphify
return manager.add_function(
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_inductor/cudagraph_trees.py", line 2255, in add_function
return fn, fn(inputs)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_inductor/cudagraph_trees.py", line 1949, in run
out = self._run(new_inputs, function_id)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_inductor/cudagraph_trees.py", line 2057, in _run
out = self.run_eager(new_inputs, function_id)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_inductor/cudagraph_trees.py", line 2221, in run_eager
return node.run(new_inputs)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_inductor/cudagraph_trees.py", line 635, in run
check_memory_pool(self.device_index, self.cuda_graphs_pool, refs)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_inductor/cudagraph_trees.py", line 1754, in check_memory_pool
if torch._C._cuda_checkPoolLiveAllocations(device, pool_id, unique_storages):
MemoryError: std::bad_alloc
To execute this test, run the following from the base repo dir:
PYTORCH_TEST_WITH_ROCM=1 python test/inductor/test_torchinductor_dynamic_shapes.py DynamicShapesGPUTests.test_list_clearing_dynamic_shapes_cuda
This message can be suppressed by setting PYTORCH_PRINT_REPRO_ON_FAILURE=0
```
</details>
Test file path: `inductor/test_torchinductor_dynamic_shapes.py`
cc @jeffdaily @sunway513 @jithunnair-amd @pruthvistony @ROCmSupport @dllehr-amd @jataylo @hongxiayang @naromero77amd @clee2000 @wdvr @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @ColinPeppler @amjames @desertfire @chauhang @aakhundov
| true
|
2,793,829,945
|
DISABLED test_aoti_eager_support_str_dynamic_shapes_cuda (__main__.DynamicShapesCodegenGPUTests)
|
pytorch-bot[bot]
|
closed
|
[
"module: rocm",
"triaged",
"module: flaky-tests",
"skipped",
"oncall: pt2",
"module: inductor"
] | 23
|
NONE
|
Platforms: rocm
This test was disabled because it is failing in CI. See [recent examples](https://hud.pytorch.org/flakytest?name=test_aoti_eager_support_str_dynamic_shapes_cuda&suite=DynamicShapesCodegenGPUTests&limit=100) and the most recent trunk [workflow logs](https://github.com/pytorch/pytorch/runs/35728355868).
Over the past 3 hours, it has been determined flaky in 8 workflow(s) with 8 failures and 8 successes.
**Debugging instructions (after clicking on the recent samples link):**
DO NOT ASSUME THINGS ARE OKAY IF THE CI IS GREEN. We now shield flaky tests from developers so CI will thus be green but it will be harder to parse the logs.
To find relevant log snippets:
1. Click on the workflow logs linked above
2. Click on the Test step of the job so that it is expanded. Otherwise, the grepping will not work.
3. Grep for `test_aoti_eager_support_str_dynamic_shapes_cuda`
4. There should be several instances run (as flaky tests are rerun in CI) from which you can study the logs.
<details><summary>Sample error message</summary>
```
Traceback (most recent call last):
File "/var/lib/jenkins/pytorch/test/inductor/test_torchinductor.py", line 1025, in test_aoti_eager_support_str
res_value = getattr(torch.ops.aten, op_name)(
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_ops.py", line 1158, in __call__
return self._op(*args, **(kwargs or {}))
RuntimeError: create_func_( &container_handle_, num_models, device_str.c_str(), cubin_dir.empty() ? nullptr : cubin_dir.c_str()) API call failed at /var/lib/jenkins/workspace/torch/csrc/inductor/aoti_runner/model_container_runner.cpp, line 81
To execute this test, run the following from the base repo dir:
PYTORCH_TEST_WITH_ROCM=1 python test/inductor/test_torchinductor_codegen_dynamic_shapes.py DynamicShapesCodegenGPUTests.test_aoti_eager_support_str_dynamic_shapes_cuda
This message can be suppressed by setting PYTORCH_PRINT_REPRO_ON_FAILURE=0
```
</details>
Test file path: `inductor/test_torchinductor_codegen_dynamic_shapes.py`
cc @jeffdaily @sunway513 @jithunnair-amd @pruthvistony @ROCmSupport @dllehr-amd @jataylo @hongxiayang @naromero77amd @clee2000 @wdvr @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @ColinPeppler @amjames @desertfire @chauhang @aakhundov
| true
|
2,793,829,872
|
DISABLED test_profiler_mark_wrapper_call_dynamic_shapes_cuda (__main__.DynamicShapesGPUTests)
|
pytorch-bot[bot]
|
closed
|
[
"module: rocm",
"triaged",
"module: flaky-tests",
"skipped",
"oncall: pt2",
"module: inductor"
] | 21
|
NONE
|
Platforms: rocm
This test was disabled because it is failing in CI. See [recent examples](https://hud.pytorch.org/flakytest?name=test_profiler_mark_wrapper_call_dynamic_shapes_cuda&suite=DynamicShapesGPUTests&limit=100) and the most recent trunk [workflow logs](https://github.com/pytorch/pytorch/runs/35732496276).
Over the past 3 hours, it has been determined flaky in 8 workflow(s) with 16 failures and 8 successes.
**Debugging instructions (after clicking on the recent samples link):**
DO NOT ASSUME THINGS ARE OKAY IF THE CI IS GREEN. We now shield flaky tests from developers so CI will thus be green but it will be harder to parse the logs.
To find relevant log snippets:
1. Click on the workflow logs linked above
2. Click on the Test step of the job so that it is expanded. Otherwise, the grepping will not work.
3. Grep for `test_profiler_mark_wrapper_call_dynamic_shapes_cuda`
4. There should be several instances run (as flaky tests are rerun in CI) from which you can study the logs.
Test file path: `inductor/test_torchinductor_dynamic_shapes.py`
cc @jeffdaily @sunway513 @jithunnair-amd @pruthvistony @ROCmSupport @dllehr-amd @jataylo @hongxiayang @naromero77amd @clee2000 @wdvr @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @ColinPeppler @amjames @desertfire @chauhang @aakhundov
| true
|
2,793,829,194
|
ARM GITHUB ACTIONS RUNNERS
|
johnnynunez
|
closed
|
[] | 1
|
CONTRIBUTOR
|
### 🚀 The feature, motivation and pitch
"""Hey everyone - we shipped linux arm64 runners today: """
https://github.blog/changelog/2025-01-16-linux-arm64-hosted-runners-now-available-for-free-in-public-repositories-public-preview/
### Alternatives
_No response_
### Additional context
_No response_
| true
|
2,793,824,597
|
Upgrade to DLPack 1.0.
|
ysiraichi
|
open
|
[
"open source",
"module: dlpack",
"release notes: python_frontend"
] | 8
|
COLLABORATOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #150691
* #150218
* #150217
* #150216
* __->__ #145000
This PR makes the necessary changes in order to upgrade PyTorch DLPack
support to version 1.0. In summary, we add support for the following:
- Support both `DLManagedTensor` and `DLManagedTensorVersioned` when
producing and consuming DLPack capsules
- New parameter for `__dlpack__` method: `max_version`
- Version checks:
- Fallback to old implementation if no `max_version` or if version
lower than 1.0
- Check that the to-be-consumed capsule is of version up to 1.X
In order to accommodate these new specifications, this PR adds the
following main changes:
- `torch._C._to_dlpack_versioned` Python API (Module.cpp): new Python
API for creating a versioned DLPack capsule (called by `__dlpack__`
method)
- `DLPackTraits<T>` class (DLConvertor.h): select the correct
traits (e.g. capsule name, conversion functions) depending on which
DLPack tensor class is being used
- `toDLPackImpl<T>` function (DLConvertor.cpp): populates the
common fields of both classes
- `fromDLPackImpl<T>` function (DLConvertor.cpp): constructs a tensor
from a DLPack capsule
- `fillVersion<T>` function (DLConvertor.cpp): populates the version
field for `DLManagedTensorVersioned` (no-op for `DLManagedTensor`)
- `tensor_fromDLPackImpl<T>` function (tensor_new.cpp): outer function
for constructing a tensor out of a DLPack capsule that also marks the
capsule as used
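A minimal sketch of the `max_version` negotiation outlined above; `_to_legacy_dlpack` and `dlpack_with_version` are hypothetical names, and the real `__dlpack__` also handles streams and other details:
```py
import torch

def _to_legacy_dlpack(tensor):
    # Stand-in for the pre-1.0 capsule producer (DLManagedTensor).
    return torch.utils.dlpack.to_dlpack(tensor)

def dlpack_with_version(tensor, *, max_version=None):
    # Consumers that do not advertise max_version, or that only understand
    # pre-1.0 DLPack, get the old capsule; otherwise produce the versioned one.
    if max_version is None or max_version[0] < 1:
        return _to_legacy_dlpack(tensor)
    return torch._C._to_dlpack_versioned(tensor)
```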
| true
|
2,793,811,702
|
[dynamo/export] call local_scalar_dense when full() value is scalar tensor
|
pianpwk
|
closed
|
[
"fb-exported",
"Merged",
"ciflow/trunk",
"module: dynamo",
"ciflow/inductor",
"release notes: export"
] | 6
|
CONTRIBUTOR
|
Fixes https://github.com/pytorch/pytorch/issues/144907
```
class Foo(torch.nn.Module):
def forward(self, val):
return torch.full((80, 2), val, dtype=torch.float32)
export(Foo(), args=(torch.tensor(1),))
```
When we have a `torch.full` call like above, where the fill value is a scalar Tensor and not a scalar value, the FX graph from `_dynamo.export()` contains a single node: the full op. We run into a `PendingUnbackedSymbolNotFound` error, because the `item()` call is implicit; the UnbackedSymInt is extracted but goes directly into the data of the output tensor value, and we're then unable to locate it when we try to compute unbacked bindings.
On the other hand, non-strict export doesn't face this, because an explicit `item()`, or `local_scalar_dense` node is inserted, and the unbacked binding is directly the example value of that node.
This adds a dynamo handler to imitate what happens in non-strict.
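A hypothetical sketch of that handling (names are illustrative; the real handler lives inside dynamo): when the fill value is a 0-dim tensor, make the `item()` call explicit so the unbacked symbol has a node of its own.
```py
import torch

def full_with_explicit_item(size, fill_value, **kwargs):
    if isinstance(fill_value, torch.Tensor) and fill_value.dim() == 0:
        # An explicit local_scalar_dense mirrors what non-strict export records.
        fill_value = torch.ops.aten._local_scalar_dense(fill_value)
    return torch.full(size, fill_value, **kwargs)
```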
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @chauhang @amjames
| true
|
2,793,780,947
|
Check bounds for index select to match CPU behavior
|
jhavukainen
|
closed
|
[
"open source",
"release notes: mps",
"ciflow/mps"
] | 1
|
COLLABORATOR
|
Fixes #144824
Checks that no element in the index tensor is less than zero or greater than the number of elements available on the axis. MPSGraph does not perform this check and instead silently returns zeros for out-of-bounds elements, which differs from the expectations set by the CPU behavior.
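A Python-level sketch of the kind of check described above (the actual fix lives in the MPS backend, so this is illustrative only):
```py
import torch

def check_index_bounds(index: torch.Tensor, dim_size: int) -> None:
    """Raise like the CPU backend does instead of silently returning zeros."""
    if index.numel() > 0 and bool((index < 0).any() or (index >= dim_size).any()):
        raise IndexError(f"index out of range for dimension of size {dim_size}")
```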
| true
|
2,793,779,494
|
[inductor] fix TORCH_LOGS="benchmarking"
|
ColinPeppler
|
closed
|
[
"Merged",
"ciflow/trunk",
"topic: not user facing",
"module: inductor",
"ciflow/inductor"
] | 3
|
CONTRIBUTOR
|
Saw this error with TORCH_LOGS="benchmarking"
```
File "/data/users/colinpeppler/pytorch/torch/_inductor/runtime/benchmarking.py", line 37, in wrapper
result = fn(*args, **kwargs)
File "/data/users/colinpeppler/pytorch/torch/_inductor/runtime/benchmarking.py", line 66, in wrapper
return fn(self, *args, **kwargs)
torch._inductor.exc.InductorError: TypeError: Benchmarker.benchmark() missing 1 required positional argument: 'fn_kwargs'
```
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #144997
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @amjames @desertfire @chauhang @aakhundov
| true
|
2,793,701,762
|
[Pipelining] Relax scale_grads assert
|
wconstab
|
closed
|
[
"oncall: distributed"
] | 2
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* (to be filled)
The assert is morally valid: if no gradients are scaled, then something
is definitely wrong with the setup. In one instance, PP +
optimizer-in-backward (in torchtitan) resulted in grad=None after
running .backward() and before scaling grads. This is obviously not a
correct scenario.
On the other hand, the existing assert is too restrictive. It's
possible that a model used with pipelining would have some parameters
that do not receive gradients, and we shouldn't hard-error in these
cases. (E.g. if the parameter is literally not used, or is frozen).
cc @H-Huang @awgu @kwen2501 @wanchaol @fegin @fduwjj @wz337 @d4l3k @c-p-i-o
| true
|
2,793,701,588
|
[Pipelining] Improve shape inference debug logging
|
wconstab
|
closed
|
[
"oncall: distributed"
] | 2
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* (to be filled)
Remove the log that just said "running forward", since that is not very useful
on its own, and replace it with an equivalent log that reports both input
and output shapes after running forward.
Note: enabled by `TORCH_LOGS=+pp`
Example:
```
[rank0]:V0115 13:28:58.282000 3908366 torch/distributed/pipelining/stage.py:1400] Shape inference: stage 0 inputs (tensor(..., device='meta', size=(1, 64), dtype=torch.int64),), outputs (tensor(..., device='meta', size=(1, 64, 256), dtype=torch.bfloat16),)
```
cc @H-Huang @awgu @kwen2501 @wanchaol @fegin @fduwjj @wz337 @d4l3k @c-p-i-o
| true
|
2,793,673,471
|
Use `typing.IO[bytes]` instead of `io.BytesIO` in annotations
|
randolf-scholz
|
closed
|
[
"oncall: distributed",
"module: cpu",
"open source",
"module: amp (automated mixed precision)",
"Merged",
"ciflow/trunk",
"release notes: quantization",
"fx",
"module: inductor",
"module: dynamo",
"release notes: export",
"suppress-bc-linter"
] | 16
|
CONTRIBUTOR
|
Fixes #144976
Using approach ① `IO[bytes]`, but could also try with a protocol.
## Notes:
- moved `torch.serialization.FILE_LIKE` to `torch.types.FileLike`
- Use `FileLike` annotation where it makes sense
- made sure those functions also support `os.PathLike`
- Replaced `isinstance(x, io.BytesIO)` with `isinstance(x, (io.IOBase, IO))` where appropriate.
- Replaced `BinaryIO` with `IO[bytes]` (the two ABCs are almost identical; the only difference is that `BinaryIO` allows `bytearray` input to `write`, whereas `IO[bytes]` allows only `bytes`)
- needed to make `torch.serialization._opener` generic to avoid LSP violations.
- skipped `torch/onnx/verification` for now (functions use `BytesIO.getvalue`, which is not part of the `IO[bytes]` ABC, but this seems somewhat redundant, as e.g. `onnx.load` supports `str | PathLike[str] | IO[bytes]` directly...)
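A minimal sketch of the annotation style described in these notes, assuming a `FileLike` alias along the lines of `torch.types.FileLike` (`save_blob` is a hypothetical example function):
```py
import os
from typing import IO, Union

FileLike = Union[str, os.PathLike, IO[bytes]]

def save_blob(data: bytes, f: FileLike) -> None:
    if isinstance(f, (str, os.PathLike)):
        with open(f, "wb") as fh:
            fh.write(data)
    else:
        # Accept any binary file-like object, not just io.BytesIO.
        f.write(data)
```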
cc @H-Huang @awgu @kwen2501 @wanchaol @fegin @fduwjj @wz337 @wconstab @d4l3k @c-p-i-o @jgong5 @mingfeima @XiaobingSuper @sanchitintel @ashokei @jingxu10 @mcarilli @ptrblck @leslie-fang-intel @ezyang @SherlockNoMad @EikanWang @wenzhe-nrv @voznesenskym @penguinwu @Guobing-Chen @zhuhaozhe @blzheng @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @ColinPeppler @amjames @desertfire @chauhang @aakhundov @LucasLLC @MeetVadakkanchery @mhorowitz @pradeepfn @ekr0
| true
|
2,793,650,822
|
Prevent _legacy_load with weights_only=True
|
pytorchbot
|
closed
|
[
"open source",
"release notes: quantization"
] | 1
|
COLLABORATOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #144914
| true
|
2,793,627,103
|
Enable fp16 linear layers in PyTorch via ACL
|
renato-arantes
|
open
|
[
"module: cpu",
"triaged",
"open source",
"module: arm",
"Merged",
"Reverted",
"Stale",
"ciflow/trunk",
"release notes: linalg_frontend",
"module: inductor",
"ciflow/inductor",
"ci-no-td"
] | 14
|
CONTRIBUTOR
|
This pull request aims to enable the use of linear layers with the fp16 data type through ACL (the Arm Compute Library).
On a Graviton3 instance running with 16 threads, `torch.randn(2048, 4096, dtype=torch.half)` will take 50+% less time to complete compared with `torch.randn(2048, 4096, dtype=torch.float32)`.
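A rough timing sketch (not the PR's actual benchmark) for comparing fp16 and fp32 linear layers on CPU:
```py
import time
import torch

def bench_linear(dtype, iters=50):
    layer = torch.nn.Linear(4096, 4096).to(dtype)
    x = torch.randn(2048, 4096, dtype=dtype)
    with torch.no_grad():
        layer(x)  # warm-up
        start = time.perf_counter()
        for _ in range(iters):
            layer(x)
    return (time.perf_counter() - start) / iters

print(bench_linear(torch.float32), bench_linear(torch.half))
```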
cc @jgong5 @mingfeima @XiaobingSuper @sanchitintel @ashokei @jingxu10 @jerryzh168 @malfet @snadampal @milpuz01 @aditew01 @nikhil-arm @fadara01 @voznesenskym @penguinwu @EikanWang @Guobing-Chen @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov @yf225 @ColinPeppler @desertfire
| true
|
2,793,571,525
|
[torchbench] stable_diffusion_unet compilation failure
|
IvanKobzarev
|
open
|
[
"triaged",
"oncall: pt2",
"oncall: export",
"module: aotinductor",
"pt2-pass-rate-regression"
] | 1
|
CONTRIBUTOR
|
### 🐛 Describe the bug
```
python benchmarks/dynamo/torchbench.py --only stable_diffusion_unet --performance --cold-start-latency --inference --bfloat16 --export-aot-inductor --disable-cudagraphs --device cuda
```
```
cuda eval stable_diffusion_unet
ERROR:common:Backend dynamo failed in warmup()
Traceback (most recent call last):
File "/data/users/ivankobzarev/a/pytorch/benchmarks/dynamo/common.py", line 3386, in warmup
fn(model, example_inputs)
File "/data/users/ivankobzarev/a/pytorch/benchmarks/dynamo/common.py", line 1635, in export_aot_inductor
optimized = AOTInductorModelCache.load(model, example_inputs)
File "/data/users/ivankobzarev/a/pytorch/benchmarks/dynamo/common.py", line 1596, in load
ep = torch.export.export(
File "/data/users/ivankobzarev/a/pytorch/torch/export/__init__.py", line 368, in export
return _export(
File "/data/users/ivankobzarev/a/pytorch/torch/export/_trace.py", line 1040, in wrapper
raise e
File "/data/users/ivankobzarev/a/pytorch/torch/export/_trace.py", line 1013, in wrapper
ep = fn(*args, **kwargs)
File "/data/users/ivankobzarev/a/pytorch/torch/export/exported_program.py", line 128, in wrapper
return fn(*args, **kwargs)
File "/data/users/ivankobzarev/a/pytorch/torch/export/_trace.py", line 2064, in _export
return _export_for_training(
File "/data/users/ivankobzarev/a/pytorch/torch/export/_trace.py", line 1040, in wrapper
raise e
File "/data/users/ivankobzarev/a/pytorch/torch/export/_trace.py", line 1013, in wrapper
ep = fn(*args, **kwargs)
File "/data/users/ivankobzarev/a/pytorch/torch/export/exported_program.py", line 128, in wrapper
return fn(*args, **kwargs)
File "/data/users/ivankobzarev/a/pytorch/torch/export/_trace.py", line 1929, in _export_for_training
export_artifact = export_func( # type: ignore[operator]
File "/data/users/ivankobzarev/a/pytorch/torch/export/_trace.py", line 1864, in _non_strict_export
aten_export_artifact = _to_aten_func( # type: ignore[operator]
File "/data/users/ivankobzarev/a/pytorch/torch/export/_trace.py", line 1650, in _export_to_aten_ir_make_fx
gm, graph_signature = transform(_make_fx_helper)(
File "/data/users/ivankobzarev/a/pytorch/torch/export/_trace.py", line 1794, in _aot_export_non_strict
gm, sig = aot_export(wrapped_mod, args, kwargs=kwargs, **flags)
File "/data/users/ivankobzarev/a/pytorch/torch/export/_trace.py", line 1570, in _make_fx_helper
gm = make_fx(
File "/data/users/ivankobzarev/a/pytorch/torch/fx/experimental/proxy_tensor.py", line 2200, in wrapped
return make_fx_tracer.trace(f, *args)
File "/data/users/ivankobzarev/a/pytorch/torch/fx/experimental/proxy_tensor.py", line 2138, in trace
return self._trace_inner(f, *args)
File "/data/users/ivankobzarev/a/pytorch/torch/fx/experimental/proxy_tensor.py", line 2109, in _trace_inner
t = dispatch_trace(
File "/data/users/ivankobzarev/a/pytorch/torch/_compile.py", line 51, in inner
return disable_fn(*args, **kwargs)
File "/data/users/ivankobzarev/a/pytorch/torch/_dynamo/eval_frame.py", line 755, in _fn
return fn(*args, **kwargs)
File "/data/users/ivankobzarev/a/pytorch/torch/fx/experimental/proxy_tensor.py", line 1142, in dispatch_trace
graph = tracer.trace(root, concrete_args) # type: ignore[arg-type]
File "/data/users/ivankobzarev/a/pytorch/torch/fx/experimental/proxy_tensor.py", line 1698, in trace
res = super().trace(root, concrete_args)
File "/data/users/ivankobzarev/a/pytorch/torch/fx/_symbolic_trace.py", line 843, in trace
(self.create_arg(fn(*args)),),
File "/data/users/ivankobzarev/a/pytorch/torch/fx/experimental/proxy_tensor.py", line 1197, in wrapped
out = f(*tensors) # type:ignore[call-arg]
File "<string>", line 1, in <lambda>
File "/data/users/ivankobzarev/a/pytorch/torch/export/_trace.py", line 1473, in wrapped_fn
return tuple(flat_fn(*args))
File "/data/users/ivankobzarev/a/pytorch/torch/_functorch/_aot_autograd/utils.py", line 184, in flat_fn
tree_out = fn(*args, **kwargs)
File "/data/users/ivankobzarev/a/pytorch/torch/_functorch/_aot_autograd/traced_function_transforms.py", line 879, in functional_call
out = mod(*args[params_len:], **kwargs)
File "/data/users/ivankobzarev/a/pytorch/torch/fx/_symbolic_trace.py", line 821, in module_call_wrapper
return self.call_module(mod, forward, args, kwargs)
File "/data/users/ivankobzarev/a/pytorch/torch/fx/experimental/proxy_tensor.py", line 1768, in call_module
return Tracer.call_module(self, m, forward, args, kwargs)
File "/data/users/ivankobzarev/a/pytorch/torch/fx/_symbolic_trace.py", line 539, in call_module
ret_val = forward(*args, **kwargs)
File "/data/users/ivankobzarev/a/pytorch/torch/fx/_symbolic_trace.py", line 814, in forward
return _orig_module_call(mod, *args, **kwargs)
File "/data/users/ivankobzarev/a/pytorch/torch/nn/modules/module.py", line 1749, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
File "/data/users/ivankobzarev/a/pytorch/torch/nn/modules/module.py", line 1760, in _call_impl
return forward_call(*args, **kwargs)
File "/data/users/ivankobzarev/a/pytorch/torch/export/_trace.py", line 1778, in forward
tree_out = mod(*args, **kwargs)
File "/data/users/ivankobzarev/a/pytorch/torch/fx/_symbolic_trace.py", line 821, in module_call_wrapper
return self.call_module(mod, forward, args, kwargs)
File "/data/users/ivankobzarev/a/pytorch/torch/fx/experimental/proxy_tensor.py", line 1768, in call_module
return Tracer.call_module(self, m, forward, args, kwargs)
File "/data/users/ivankobzarev/a/pytorch/torch/fx/_symbolic_trace.py", line 539, in call_module
ret_val = forward(*args, **kwargs)
File "/data/users/ivankobzarev/a/pytorch/torch/fx/_symbolic_trace.py", line 814, in forward
return _orig_module_call(mod, *args, **kwargs)
File "/data/users/ivankobzarev/a/pytorch/torch/nn/modules/module.py", line 1749, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
File "/data/users/ivankobzarev/a/pytorch/torch/nn/modules/module.py", line 1760, in _call_impl
return forward_call(*args, **kwargs)
File "/home/ivankobzarev/local/a/pytorch-env/lib/python3.10/site-packages/diffusers/models/unets/unet_2d_condition.py", line 1246, in forward
sample = self.mid_block(
File "/data/users/ivankobzarev/a/pytorch/torch/fx/_symbolic_trace.py", line 821, in module_call_wrapper
return self.call_module(mod, forward, args, kwargs)
File "/data/users/ivankobzarev/a/pytorch/torch/fx/experimental/proxy_tensor.py", line 1768, in call_module
return Tracer.call_module(self, m, forward, args, kwargs)
File "/data/users/ivankobzarev/a/pytorch/torch/fx/_symbolic_trace.py", line 539, in call_module
ret_val = forward(*args, **kwargs)
File "/data/users/ivankobzarev/a/pytorch/torch/fx/_symbolic_trace.py", line 814, in forward
return _orig_module_call(mod, *args, **kwargs)
File "/data/users/ivankobzarev/a/pytorch/torch/nn/modules/module.py", line 1749, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
File "/data/users/ivankobzarev/a/pytorch/torch/nn/modules/module.py", line 1760, in _call_impl
return forward_call(*args, **kwargs)
File "/home/ivankobzarev/local/a/pytorch-env/lib/python3.10/site-packages/diffusers/models/unets/unet_2d_blocks.py", line 884, in forward
for attn, resnet in zip(self.attentions, self.resnets[1:]):
File "/data/users/ivankobzarev/a/pytorch/torch/nn/modules/container.py", line 322, in __getitem__
return self.__class__(list(self._modules.values())[idx])
TypeError: _ModuleStackTracer.__init__.<locals>.AttrProxy.__init__() missing 1 required positional argument: 'path'
warmup_failed
```
### Error logs
_No response_
### Versions
master Jan 16
cc @chauhang @penguinwu @avikchaudhuri @gmagogsfm @zhxchen17 @tugsbayasgalan @angelayi @suo @ydwu4 @desertfire @chenyang78
| true
|
2,793,570,997
|
DO_NOT_MERGE test for pytorchmergebot
|
wdvr
|
closed
|
[
"topic: not user facing"
] | 6
|
CONTRIBUTOR
|
DO_NOT_MERGE test for pytorchmergebot
| true
|
2,793,533,135
|
[torchbench] Missing meta function for aten::_cudnn_rnn_flatten_weight
|
IvanKobzarev
|
closed
|
[
"triaged",
"oncall: pt2",
"pt2-pass-rate-regression"
] | 3
|
CONTRIBUTOR
|
### 🚀 The feature, motivation and pitch
Torchbench with export-aot-inductor for the tts_angular model fails due to a missing Meta function.
```
python benchmarks/dynamo/torchbench.py --only tts_angular --accuracy --no-translation-validation --inference --bfloat16 --export-aot-inductor --disable-cudagraphs --device cuda
```
```
NotImplementedError: aten::_cudnn_rnn_flatten_weight: attempted to run this operator with Meta tensors, but there was no fake impl or Meta kernel registered. You may have run into this message while using an operator with PT2 compilation APIs (torch.compile/torch.export); in order to use this operator with those APIs you'll need to add a fake impl. Please see the following for next steps: https://pytorch.org/tutorials/advanced/custom_ops_landing_page.html
TorchDynamo optimized model failed to run because of following error
fail_to_run
```
### Alternatives
_No response_
### Additional context
_No response_
cc @chauhang @penguinwu
| true
|
2,793,492,932
|
DISABLED test_allow_implicit_sharing (__main__.TestQuantizePT2E)
|
pytorch-bot[bot]
|
open
|
[
"oncall: quantization",
"triaged",
"module: flaky-tests",
"skipped"
] | 5
|
NONE
|
Platforms: mac, macos, rocm
This test was disabled because it is failing in CI. See [recent examples](https://hud.pytorch.org/flakytest?name=test_allow_implicit_sharing&suite=TestQuantizePT2E&limit=100) and the most recent trunk [workflow logs](https://github.com/pytorch/pytorch/runs/35717971181).
Over the past 3 hours, it has been determined flaky in 4 workflow(s) with 8 failures and 4 successes.
**Debugging instructions (after clicking on the recent samples link):**
DO NOT ASSUME THINGS ARE OKAY IF THE CI IS GREEN. We now shield flaky tests from developers so CI will thus be green but it will be harder to parse the logs.
To find relevant log snippets:
1. Click on the workflow logs linked above
2. Click on the Test step of the job so that it is expanded. Otherwise, the grepping will not work.
3. Grep for `test_allow_implicit_sharing`
4. There should be several instances run (as flaky tests are rerun in CI) from which you can study the logs.
Test file path: `test_quantization.py`
cc @jerryzh168 @jianyuh @raghuramank100 @jamesr66a @vkuzo @jgong5 @Xia-Weiwen @leslie-fang-intel @msaroufim @clee2000 @wdvr @malfet
| true
|
2,793,483,070
|
Fix `pt2-bug-report.yml` formatting
|
malfet
|
closed
|
[
"Merged",
"topic: not user facing"
] | 3
|
CONTRIBUTOR
|
This is a 2nd regression caused by https://github.com/pytorch/pytorch/pull/144574
Test plan: `python3 -c "import yaml; foo=yaml.safe_load(open('pt2-bug-report.yml'));print(foo['body'][0])"`
Before it printed
```
% python3 -c "import yaml; foo=yaml.safe_load(open('pt2-bug-report.yml'));print(foo['body'][0])"
{'type': 'markdown', 'attributes': {'value': ''}}
```
After
```
% python3 -c "import yaml; foo=yaml.safe_load(open('pt2-bug-report.yml'));print(foo['body'][0])"
{'type': 'markdown', 'attributes': {'value': '#### Note: Please write your bug report in English to ensure it can be understood and addressed by the development team.\n'}}
```
Fixes https://github.com/pytorch/pytorch/issues/144970
| true
|
2,793,446,303
|
[WIP] Add pre-allgather version for context parallelism
|
fegin
|
closed
|
[
"oncall: distributed",
"Stale",
"ciflow/inductor"
] | 3
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #144986
cc @H-Huang @awgu @kwen2501 @wanchaol @fduwjj @wz337 @wconstab @d4l3k @c-p-i-o
| true
|
2,793,428,042
|
Introduce new template heuristic for triton autotune configs
|
jataylo
|
closed
|
[
"open source",
"Merged",
"Reverted",
"ciflow/trunk",
"topic: not user facing",
"module: inductor",
"ciflow/inductor",
"ciflow/rocm",
"ci-no-td",
"ciflow/inductor-rocm",
"ciflow/inductor-periodic"
] | 49
|
COLLABORATOR
|
Initial PR to refactor the bulk of mm_common to allow better device-specific specialisation, e.g. in https://github.com/pytorch/pytorch/pull/143286 we need large conditional blocks to get ROCm-specific optimisations in.
This PR introduces a new file `torch/_inductor/template_heuristics.py` which implements device specific subclasses for autotune configs:
- CPUConfigHeuristic()
- CUDAConfigHeuristic()
- ROCmConfigHeuristic()
- XPUConfigHeuristic()
These subclasses are integrated as part of the `InductorChoices` class, which will be the interface for the kernel files to access the configs.
The mm_common, mm_plus_mm and conv configurations are implemented in this class; in the future we plan to bring in the flex attention configurations as well, so that all of the tuning config logic for templated triton kernels is handled in this file.
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @ColinPeppler @amjames @desertfire @chauhang @aakhundov
| true
|
2,793,404,250
|
Document decoupled_weight_decay for Adam for consistency with N/RAdam
|
janeyx99
|
closed
|
[
"Merged",
"ciflow/trunk",
"topic: docs",
"release notes: optim"
] | 6
|
CONTRIBUTOR
|
Followup from #144972 and #143710
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #144984
| true
|
2,793,384,511
|
Change how units are printed in Memory Viz
|
hjmeta
|
open
|
[
"triaged",
"open source",
"fb-exported",
"Stale",
"release notes: visualization"
] | 9
|
NONE
|
Summary:
In the PyTorch Memory Viz tool, memory units are printed as GiB rather than GB (e.g.: Total memory used after allocation: 11.9GiB (12763436160 bytes)).
This causes confusion because in other charts, like the Active Memory Timeline, the Y axis is in GBs.
Memory capacity of machines is also advertised in GBs, e.g.: A100 80GB.
Switch to printing GB etc. units instead of GiB.
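For reference, a minimal illustration of the two conventions using the byte count quoted above (the number is taken from the example in the summary; nothing here is from the tool's code):
```python
num_bytes = 12_763_436_160  # the example value from the summary above

print(f"{num_bytes / 2**30:.1f} GiB")  # 11.9 GiB (binary units, 1 GiB = 2**30 bytes)
print(f"{num_bytes / 10**9:.1f} GB")   # 12.8 GB  (decimal units, 1 GB = 10**9 bytes)
```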
Manually syncing changes from D68232933 to fbsource
Test Plan: Tested in www in D68232933
Differential Revision: D68277650
| true
|
2,793,366,607
|
[compiled autograd] It would be nice if the compiled autograd graph was actually runnable
|
zou3519
|
open
|
[
"triaged",
"oncall: pt2",
"module: compiled autograd"
] | 0
|
CONTRIBUTOR
|
It won't be runnable after the stack in https://github.com/pytorch/pytorch/pull/143296 lands.
cc @chauhang @penguinwu @xmfan @yf225
| true
|
2,793,354,876
|
[MPSInductor] More is_dtype_supported gating
|
malfet
|
closed
|
[
"Merged",
"topic: not user facing",
"ciflow/mps",
"module: inductor",
"ciflow/inductor"
] | 3
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #144981
* #144971
This makes `GPUTest.test_scalar_cpu_tensor_arg_mps` pass
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @ColinPeppler @amjames @desertfire @chauhang @aakhundov
| true
|
2,793,339,558
|
[inductor] Fix for pattern file contains 'getitem' fails during impor…
|
kareemshaik80
|
closed
|
[
"triaged",
"open source",
"Merged",
"ciflow/trunk",
"module: inductor",
"ciflow/inductor",
"release notes: inductor"
] | 25
|
CONTRIBUTOR
|
…t of the pattern module
For example, any pattern module that contains the following generated pattern fails to import because
the name `getitem` is undefined.
native_dropout_default = CallFunction(aten.native_dropout.default, div_Tensor_1, KeywordArg('dropout_p'), True, _users=2)
getitem = CallFunction(getitem, native_dropout_default, 0)
This fix resolves the error.
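For context, a minimal sketch of the kind of binding that makes such a generated pattern importable, assuming the generated file only needs the bare name `getitem` to resolve to the real callable (the actual change in this PR may differ):
```python
import operator

# Bind the bare name used by the generated pattern so that
# `CallFunction(getitem, native_dropout_default, 0)` can be evaluated at import time.
getitem = operator.getitem
```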
Fixes #144674
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @ColinPeppler @amjames @desertfire @chauhang @aakhundov
| true
|
2,793,331,478
|
[WIP] Dynamo trace through UserFnVar arg
|
IvanKobzarev
|
closed
|
[
"Stale",
"module: dynamo",
"ciflow/inductor"
] | 3
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #144979
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @chauhang @amjames
| true
|
2,793,327,260
|
Update ExecuTorch Pin
|
mergennachin
|
closed
|
[
"ciflow/trunk",
"topic: not user facing",
"ciflow/inductor"
] | 2
|
CONTRIBUTOR
|
- **Update scipy, numba, pandas, numpy pins in CI**
- **Update executorch pin**
| true
|
2,793,322,587
|
[test] fix unit test
|
c-p-i-o
|
open
|
[
"oncall: distributed",
"open source",
"Stale",
"topic: not user facing",
"release notes: distributed (checkpoint)"
] | 2
|
CONTRIBUTOR
|
Summary:
Fixes #143994
Test Plan:
github unit tests for AMD pass
cc @H-Huang @awgu @kwen2501 @wanchaol @fegin @fduwjj @wz337 @wconstab @d4l3k @LucasLLC @MeetVadakkanchery @mhorowitz @pradeepfn @ekr0
| true
|
2,793,304,625
|
Avoid `io.BytesIO` in function argument annotations.
|
randolf-scholz
|
closed
|
[
"module: typing",
"oncall: pt2",
"oncall: export"
] | 1
|
CONTRIBUTOR
|
### 🚀 The feature, motivation and pitch
This causes typing errors for library users, for example:
```python
from tempfile import TemporaryFile
import torch # v2.5.1
with TemporaryFile() as file:
model = torch.nn.Linear(3, 3)
exported_model = torch.export.export(model, args=(torch.randn(3),))
# "BufferedRandom" cannot be assigned to type "str | PathLike[Unknown] | BytesIO"
torch.export.save(exported_model, file) # ❌ typing error!
```
### Alternatives
1. Use `typing.IO[bytes]` / `typing.BinaryIO` which offer abstract classes, which `mypy` and `pyright` consider supertypes of `io.BytesIO`.
2. Rollout a custom `Protocol` type
3. Use some `Union` type including `IO[bytes]`
For example, `torch.serialization` exports [`FILE_LIKE: TypeAlias = Union[str, os.PathLike, BinaryIO, IO[bytes]]`](https://github.com/pytorch/pytorch/blob/727ae1331820bb3d83d70e9cd3c9d3cd4c79ff56/torch/serialization.py#L80), which is used in [`torch.serialization.save`](https://github.com/pytorch/pytorch/blob/727ae1331820bb3d83d70e9cd3c9d3cd4c79ff56/torch/serialization.py#L885-L892), but not in [`torch.export.save`](https://github.com/pytorch/pytorch/blob/727ae1331820bb3d83d70e9cd3c9d3cd4c79ff56/torch/export/__init__.py#L382-L389)
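A minimal sketch of alternative 3, using a union that includes `IO[bytes]`; `FileLike` and `save` are hypothetical names for illustration, not the actual `torch.export` signature:
```python
import os
from typing import IO, Union

FileLike = Union[str, os.PathLike, IO[bytes]]

def save(exported_program, f: FileLike) -> None:
    # Accepts paths as well as any binary file object (BytesIO, BufferedRandom, ...).
    ...
```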
### Additional context
- https://github.com/python/typing/discussions/829
- https://github.com/astral-sh/ruff/issues/15532
cc @ezyang @malfet @xuzhao9 @gramster @chauhang @penguinwu @avikchaudhuri @gmagogsfm @zhxchen17 @tugsbayasgalan @angelayi @suo @ydwu4
| true
|
2,793,285,155
|
Update scipy, numba, pandas, numpy pins in CI
|
mergennachin
|
closed
|
[
"Stale",
"ciflow/trunk",
"topic: not user facing",
"ciflow/inductor"
] | 2
|
CONTRIBUTOR
|
### Summary
Test with the following configuration
For python<3.10
- numba 0.58.1, numpy 1.26.2, pandas 2.0.3, scipy 1.12.0, scikit-image 0.22.0
For python>=3.10
- numba 0.60.0, numpy 2.0.2, pandas 2.2.3, scipy 1.13.1, scikit-image 0.24.0
### Test Plan
- Tried locally with various conda environments (python 3.9, 3.10, 3.11)
- Waiting for CI
| true
|
2,793,278,267
|
Add option to limit number of SMs used by matmul kernels
|
lw
|
closed
|
[
"Merged",
"Reverted",
"ciflow/trunk",
"release notes: cuda",
"topic: performance",
"module: dynamo",
"ciflow/inductor",
"ci-no-td"
] | 18
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #144974
Newer matmul kernels, e.g. those targeting Hopper GPUs, sometimes use a "persistent" schedule which consists of launching as many CUDA blocks as there are SMs on the GPU, with each such block then working on multiple output tiles in a row. This allows eliminating the overhead of starting and finishing each tile, effectively doing cross-tile pipelining. In previous generations these latencies could be hidden by having multiple CUDA blocks per SM but, with blocks becoming larger, only one can run at a time per SM and thus this needs to be taken care of in software.
Persistent kernels become an issue when other kernels are running concurrently. The classical example is a NCCL communication kernel running in the background. In such cases the matmul expects to be able to use all the SMs but is prevented from doing so because some of them are busy. This can lead to its blocks being scheduled as two separate waves on the available SMs. This "wave quantization" can double the latency of the matmul kernels.
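A toy back-of-the-envelope illustration of the wave-quantization effect described above (all numbers are made up for the sketch):
```python
import math

sms_total = 132        # SMs on the GPU (made-up figure)
sms_busy = 8           # SMs occupied by a background NCCL kernel
blocks = sms_total     # a persistent matmul launches one block per SM

waves = math.ceil(blocks / (sms_total - sms_busy))
print(waves)  # 2 -> the matmul runs in two waves instead of one, roughly doubling latency
```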
While we wait for smarter solutions, such as automatic load balancing among the blocks, an easy way to unblock ourselves is to tell the matmuls to only use a subset of the GPU's SMs. For this, I am introducing a global `sm_carveout` flag which can be used to specify how many SMs should be left available for other kernels.
For now I only change the cuBLAS kernels and the scaled-mm CUTLASS kernel. More kernels can be opted-in later.
I tested this change manually, by using the Kineto profiler to look up the grid size of a scaled-mm kernel with different values of `sm_carveout`, and making sure it changed. Suggestions are welcome for a more automated test.
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @chauhang @amjames
| true
|
2,793,274,218
|
Add flop formula for _scaled_mm
|
lw
|
closed
|
[
"Merged",
"ciflow/trunk",
"topic: not user facing",
"ciflow/rocm"
] | 3
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #144973
This will make it work correctly with the partitioner's AutoAC
| true
|
2,793,256,064
|
Fix loading older state_dict into AdamW after refactor
|
janeyx99
|
closed
|
[
"Merged",
"ciflow/trunk",
"topic: bug fixes",
"release notes: optim"
] | 6
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #144972
| true
|
2,793,240,677
|
[BE] Move `is_device_supported` to helper function
|
malfet
|
closed
|
[
"Merged",
"ciflow/trunk",
"topic: not user facing",
"ciflow/mps",
"module: inductor",
"ciflow/inductor"
] | 6
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #144981
* __->__ #144971
And extend `test_inf` to check half (explicitly instead of check_lowp) and bfloat16
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @ColinPeppler @amjames @desertfire @chauhang @aakhundov
| true
|
2,793,165,182
|
[bug report template] pt2 bug report is disabled
|
shaoyuyoung
|
closed
|
[
"high priority",
"oncall: pt2"
] | 3
|
CONTRIBUTOR
|
### 🐛 Describe the bug
Currently, the pt2 bug report template is disabled.
Previously, when the pt2 bug report template was used, the issue was automatically labeled "oncall: pt2".

### Versions
None
cc @ezyang @gchanan @zou3519 @kadeng @msaroufim @chauhang @penguinwu
| true
|
2,793,095,740
|
Support something similar to export dynamic dims for torch.compile with fullgraph=True
|
ezyang
|
open
|
[
"triaged",
"oncall: pt2",
"module: dynamic shapes"
] | 1
|
CONTRIBUTOR
|
### 🚀 The feature, motivation and pitch
In export, instead of using mark_dynamic on input Tensors, you can directly specify which inputs are dynamic or not
```
dynamic_shapes = (({0: Dim("dim")}, None, None, None),)
torch.export.export(
Slice(),
inp,
dynamic_shapes=dynamic_shapes,
)
```
We should support an analogous concept for torch.compile(fullgraph=True)
Note that we cannot easily support this when fullgraph=False, because a graph break inside the region will result in a bunch of intermediate tensors that won't have accurate dynamic/not dynamic annotations.
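For comparison, a minimal sketch of what is available today with the per-tensor API (`torch._dynamo.mark_dynamic` is an existing API; the module and shapes below are made up):
```python
import torch

inp = torch.randn(4, 8)
torch._dynamo.mark_dynamic(inp, 0)  # mark dim 0 of this particular input as dynamic

compiled = torch.compile(torch.nn.Linear(8, 8), fullgraph=True)
out = compiled(inp)
```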
### Alternatives
_No response_
### Additional context
_No response_
cc @chauhang @penguinwu @bobrenjc93
| true
|
2,793,022,768
|
Improve cleanup of cancelled jobs on s390x for tests too
|
AlekseiNikiforovIBM
|
closed
|
[
"open source",
"Merged",
"ciflow/trunk",
"topic: not user facing",
"ciflow/s390"
] | 3
|
COLLABORATOR
|
Follow up to https://github.com/pytorch/pytorch/pull/144149
| true
|
2,792,940,204
|
[XPU] Nightly binary builds for XPU Linux and Windows are failing since 01.11.2025
|
atalman
|
closed
|
[
"module: binaries",
"triaged",
"module: xpu"
] | 7
|
CONTRIBUTOR
|
### 🐛 Describe the bug
CD XPU build nightly failures since 01.11.2025
Linux XPU:
https://github.com/pytorch/pytorch/actions/runs/12722381963/job/35468425478
Windows XPU:
https://github.com/pytorch/pytorch/actions/runs/12722382018/job/35469528419
Linux failure log:
```
2025-01-11T10:43:02.3892539Z [778/780] Building CXX object src/sycl/CMakeFiles/dnnl_sycl.dir/sycl_utils.cpp.o
2025-01-11T10:43:02.3892801Z In file included from /pytorch/third_party/ideep/mkl-dnn/src/sycl/sycl_utils.cpp:17:
2025-01-11T10:43:02.3893069Z In file included from /pytorch/third_party/ideep/mkl-dnn/src/sycl/sycl_utils.hpp:23:
2025-01-11T10:43:02.3893812Z /pytorch/third_party/ideep/mkl-dnn/src/gpu/intel/ocl/ocl_gpu_engine.hpp:48:14: warning: 'dnnl::impl::gpu::intel::ocl::ocl_gpu_engine_t::create_stream' hides overloaded virtual function [-Woverloaded-virtual]
2025-01-11T10:43:02.3894013Z 48 | status_t create_stream(stream_t **stream, cl_command_queue queue);
2025-01-11T10:43:02.3894105Z | ^
2025-01-11T10:43:02.3895135Z /pytorch/third_party/ideep/mkl-dnn/src/common/engine.hpp:89:34: note: hidden overloaded virtual function 'dnnl_engine::create_stream' declared here: type mismatch at 2nd parameter ('dnnl::threadpool_interop::threadpool_iface *' vs 'cl_command_queue' (aka '_cl_command_queue *'))
2025-01-11T10:43:02.3895457Z 89 | virtual dnnl::impl::status_t create_stream(dnnl::impl::stream_t **stream,
2025-01-11T10:43:02.3895549Z | ^
2025-01-11T10:43:02.3895813Z In file included from /pytorch/third_party/ideep/mkl-dnn/src/sycl/sycl_utils.cpp:20:
2025-01-11T10:43:02.3896502Z /pytorch/third_party/ideep/mkl-dnn/src/sycl/sycl_engine_base.hpp:63:14: warning: 'dnnl::impl::sycl::sycl_engine_base_t::create_stream' hides overloaded virtual function [-Woverloaded-virtual]
2025-01-11T10:43:02.3896703Z 63 | status_t create_stream(stream_t **stream, ::sycl::queue &queue);
2025-01-11T10:43:02.3896783Z | ^
2025-01-11T10:43:02.3897753Z /pytorch/third_party/ideep/mkl-dnn/src/common/engine.hpp:89:34: note: hidden overloaded virtual function 'dnnl_engine::create_stream' declared here: type mismatch at 2nd parameter ('dnnl::threadpool_interop::threadpool_iface *' vs '::sycl::queue &')
2025-01-11T10:43:02.3897979Z 89 | virtual dnnl::impl::status_t create_stream(dnnl::impl::stream_t **stream,
2025-01-11T10:43:02.3898079Z | ^
2025-01-11T10:43:02.3898180Z 2 warnings generated.
2025-01-11T10:43:02.3898606Z [779/780] Building CXX object src/gpu/intel/jit/CMakeFiles/dnnl_gpu_intel_jit.dir/gemm/gen_gemm_kernel_generator.cpp.o
2025-01-11T10:43:02.3898759Z [780/780] Linking CXX static library src/libdnnl.a
2025-01-11T10:43:02.3898882Z ninja: build stopped: subcommand failed.
2025-01-11T10:43:02.3898889Z
2025-01-11T10:43:02.3898981Z real 16m54.645s
2025-01-11T10:43:02.3899085Z user 161m13.825s
2025-01-11T10:43:02.3899171Z sys 9m1.918s
2025-01-11T10:43:02.3909313Z ##[error]Process completed with exit code 1.
2025-01-11T10:43:02.3975857Z ##[group]Run pytorch/test-infra/.github/actions/teardown-linux@main
```
Windows failure log:
```
2025-01-11T12:42:07.9866038Z C:\actions-runner\_work\pytorch\pytorch\pytorch\third_party\ideep\mkl-dnn\src\common\engine.hpp(89,34): note: hidden overloaded virtual function 'dnnl_engine::create_stream' declared here: type mismatch at 2nd parameter ('dnnl::threadpool_interop::threadpool_iface *' vs '::sycl::queue &')
2025-01-11T12:42:07.9866059Z
2025-01-11T12:42:07.9866277Z 89 | virtual dnnl::impl::status_t create_stream(dnnl::impl::stream_t **stream,
2025-01-11T12:42:07.9866283Z
2025-01-11T12:42:07.9866369Z | ^
2025-01-11T12:42:07.9866373Z
2025-01-11T12:42:07.9866465Z 2 warnings generated.
2025-01-11T12:42:07.9866469Z
2025-01-11T12:42:07.9866906Z [780/781] Building CXX object src\gpu\intel\jit\CMakeFiles\dnnl_gpu_intel_jit.dir\gemm\gen_gemm_kernel_generator.cpp.obj
2025-01-11T12:42:07.9867039Z [781/781] Linking CXX static library src\dnnl.lib
2025-01-11T12:42:07.9867149Z ignoring unknown argument: -fsycl
2025-01-11T12:42:07.9867153Z
2025-01-11T12:42:07.9867291Z ignoring unknown argument: -Wno-unknown-argument
2025-01-11T12:42:07.9867300Z
2025-01-11T12:42:07.9867461Z ignoring unknown argument: -Qoption,link,/machine:x64
2025-01-11T12:42:07.9867465Z
2025-01-11T12:42:07.9867469Z
2025-01-11T12:42:07.9867578Z ninja: build stopped: subcommand failed.
2025-01-11T12:42:07.9867756Z -- Building version 2.7.0.dev20250111+xpu
2025-01-11T12:42:07.9871258Z cmake -GNinja -DBUILD_ENVIRONMENT=windows-binary-wheel -DBUILD_PYTHON=True -DBUILD_PYTHONLESS= -DBUILD_TEST=True -DBUILD_TYPE=release -DCMAKE_BUILD_TYPE=Release -DCMAKE_EXE_LINKER_FLAGS=/FORCE:MULTIPLE -DCMAKE_GENERATOR=Ninja -DCMAKE_INSTALL_PREFIX=C:\actions-runner\_work\pytorch\pytorch\pytorch\torch -DCMAKE_MODULE_LINKER_FLAGS=/FORCE:MULTIPLE -DCMAKE_PREFIX_PATH=C:\actions-runner\_work\pytorch\pytorch\pytorch\.ci\pytorch\windows\conda\envs\py310\Lib\site-packages;C:\Program Files (x86)\Intel\oneAPI\compiler\latest; -DCMAKE_SHARED_LINKER_FLAGS=/FORCE:MULTIPLE -DINSTALL_TEST=0 -DPython_EXECUTABLE=C:\actions-runner\_work\pytorch\pytorch\pytorch\.ci\pytorch\windows\conda\envs\py310\python.exe -DTORCH_BUILD_VERSION=2.7.0.dev20250111+xpu -DUSE_CUDA=0 -DUSE_FBGEMM=1 -DUSE_GLOO_WITH_OPENSSL=ON -DUSE_GOLD_LINKER=OFF -DUSE_INTEL_LLVM=0 -DUSE_NUMPY=True -DUSE_SCCACHE=0 -DUSE_SPLIT_BUILD= C:\actions-runner\_work\pytorch\pytorch\pytorch
```
### Versions
2.7.0
cc @seemethere @malfet @osalpekar @gujinghui @EikanWang @fengyuan14 @guangyey
| true
|
2,792,931,947
|
TORCH_PYTHON_API contains ABI breaking changes in same version 2.6.0a0
|
akshaver
|
closed
|
[
"module: binaries",
"module: cpp",
"module: abi",
"triaged"
] | 11
|
NONE
|
### 🐛 Describe the bug
The function signature THPVariable_Wrap changes between versions with the same major.minor.alpha. Specifically
* 2.6.0a0+ecf3bae40a -- TORCH_PYTHON_API PyObject\* THPVariable_Wrap(at::TensorBase var);
* 2.6.0a0+df5bbc09d1 -- TORCH_PYTHON_API PyObject\* THPVariable_Wrap(const at::TensorBase& var);
This causes breaking changes in any application that is linked against this API, because the name mangling for the method will change:
* 2.6.0a0+ecf3bae40a -- \_Z16THPVariable_WrapN2at10TensorBaseE
* 2.6.0a0+df5bbc09d1 -- \_Z16THPVariable_WrapRKN2at10TensorBaseE
If externs were used, this might not be an issue. However, that is not used for this API.
Impact is that any application compiled against 2.6.0a0 may not work with other versions at 2.6.0a0.
### Versions
2.6.0a0+ecf3bae40a
2.6.0a0+df5bbc09d1
cc @seemethere @malfet @osalpekar @atalman @jbschlosser
| true
|
2,792,872,130
|
RuntimeError "global alloc not supported yet" when using TorchScript optimization.
|
LucaBonfiglioli
|
open
|
[
"high priority",
"triage review",
"oncall: jit"
] | 2
|
NONE
|
### 🐛 Describe the bug
Calling the `forward` method on TorchScript models can, under some specific conditions, raise a `RuntimeError` with message: "Global alloc not supported yet.".
I think this is linked to an old issue: https://github.com/pytorch/pytorch/issues/69078, however, I managed to consistently reproduce this error.
The code that reproduces the bug is quite long and needs some explanation. It was taken from a very complex 3D pose estimation model with lots of pre and post processing whose code was largely generated by `torch.fx`. In the post-processing part, there is a class `ComputeCentroids` that computes the center of mass of queried instances in a batch of segmentation masks that appears to be causing the error.
The `ComputeCentroids` module was tested with both CPU and GPU devices, with and without `torch.jit.script`, and it appears to work as desired, even with empty input queries.
The error is raised only if **all** three of the following conditions apply:
- The inference device is set to "cuda".
- The torch jit optimization is turned on, as suggested by @OlofHarrysson in https://github.com/pytorch/pytorch/issues/69078
- The first inference is performed with an empty query. Maybe something goes wrong in the torchscript profiling executor?
```python
from typing import cast
import torch
import torch.nn as nn
from torch import Tensor
class ComputeCentroids(nn.Module):
def forward(self, b_idx: Tensor, i_idx: Tensor, segm: Tensor) -> Tensor:
dev = segm.device
B, H, W = segm.shape
N = int(segm.max()) + 1
hh, ww = torch.arange(H, device=dev), torch.arange(W, device=dev)
i, j = torch.meshgrid(hh, ww, indexing="ij")
xy = torch.stack([j, i], dim=-1).float().view(-1, 2).repeat(B, 1)
segm_f = (segm.view(B, -1) + torch.arange(B, device=dev)[:, None] * N).view(-1)
eq = segm_f[:, None] == (i_idx + b_idx * N)[None]
c_xy = (eq[..., None] * xy[:, None]).sum(0) / eq[..., None].sum(0)
c_xy.nan_to_num_(-1.0)
return c_xy
def zero_init() -> dict[str, Tensor]:
b_idx = torch.zeros(0, device=dev)
x_idx = torch.zeros(0, device=dev)
segm = torch.zeros(1, 256, 256, device=dev)
return {"b_idx": b_idx, "i_idx": x_idx, "segm": segm}
def random_init() -> dict[str, Tensor]:
b_idx = torch.tensor([0, 0, 0, 0], device=dev)
i_idx = torch.tensor([0, 1, 2, 3], device=dev)
segm = torch.randint(0, 10, (1, 256, 256), device=dev)
return {"b_idx": b_idx, "i_idx": i_idx, "segm": segm}
if __name__ == "__main__":
compute_cxy = cast(ComputeCentroids, torch.jit.script(ComputeCentroids()))
# Bug can be reproduced if all the following conditions are verified:
# - Device is set to "cuda".
# - Optimized execution is activated.
# - First inference pass is the result of zero_init().
dev = "cuda:0" # "cpu"
optimize = True # False
zero_init_first = True # False
with torch.jit.optimized_execution(optimize): # type: ignore
if zero_init_first:
compute_cxy(**zero_init())
for _ in range(5):
compute_cxy(**random_init())
```
### Versions
Collecting environment information...
PyTorch version: 2.5.1+cu118
Is debug build: False
CUDA used to build PyTorch: 11.8
ROCM used to build PyTorch: N/A
OS: Ubuntu 20.04.6 LTS (x86_64)
GCC version: (Ubuntu 11.4.0-2ubuntu1~20.04) 11.4.0
Clang version: Could not collect
CMake version: version 3.16.3
Libc version: glibc-2.31
Python version: 3.10.15 (main, Sep 7 2024, 18:35:33) [GCC 9.4.0] (64-bit runtime)
Python platform: Linux-5.15.0-130-generic-x86_64-with-glibc2.31
Is CUDA available: True
CUDA runtime version: 12.3.107
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration: GPU 0: NVIDIA GeForce RTX 2080 Ti
Nvidia driver version: 550.127.08
cuDNN version: Could not collect
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Byte Order: Little Endian
Address sizes: 39 bits physical, 48 bits virtual
CPU(s): 16
On-line CPU(s) list: 0-15
Thread(s) per core: 2
Core(s) per socket: 8
Socket(s): 1
NUMA node(s): 1
Vendor ID: GenuineIntel
CPU family: 6
Model: 158
Model name: Intel(R) Core(TM) i9-9900K CPU @ 3.60GHz
Stepping: 13
CPU MHz: 3600.000
CPU max MHz: 5000,0000
CPU min MHz: 800,0000
BogoMIPS: 7200.00
Virtualization: VT-x
L1d cache: 256 KiB
L1i cache: 256 KiB
L2 cache: 2 MiB
L3 cache: 16 MiB
NUMA node0 CPU(s): 0-15
Vulnerability Gather data sampling: Mitigation; Microcode
Vulnerability Itlb multihit: KVM: Mitigation: VMX disabled
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Mitigation; Clear CPU buffers; SMT vulnerable
Vulnerability Reg file data sampling: Not affected
Vulnerability Retbleed: Mitigation; Enhanced IBRS
Vulnerability Spec rstack overflow: Not affected
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl and seccomp
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Enhanced / Automatic IBRS; IBPB conditional; RSB filling; PBRSB-eIBRS SW sequence; BHI SW loop, KVM SW loop
Vulnerability Srbds: Mitigation; Microcode
Vulnerability Tsx async abort: Mitigation; TSX disabled
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc art arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc cpuid aperfmperf pni pclmulqdq dtes64 monitor ds_cpl vmx smx est tm2 ssse3 sdbg fma cx16 xtpr pdcm pcid sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm 3dnowprefetch cpuid_fault invpcid_single ssbd ibrs ibpb stibp ibrs_enhanced tpr_shadow vnmi flexpriority ept vpid ept_ad fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid mpx rdseed adx smap clflushopt intel_pt xsaveopt xsavec xgetbv1 xsaves dtherm ida arat pln pts hwp hwp_notify hwp_act_window hwp_epp md_clear flush_l1d arch_capabilities
Versions of relevant libraries:
[pip3] numpy==1.26.3
[pip3] nvidia-cublas-cu11==11.11.3.6
[pip3] nvidia-cuda-cupti-cu11==11.8.87
[pip3] nvidia-cuda-nvrtc-cu11==11.8.89
[pip3] nvidia-cuda-runtime-cu11==11.8.89
[pip3] nvidia-cudnn-cu11==9.1.0.70
[pip3] nvidia-cufft-cu11==10.9.0.58
[pip3] nvidia-curand-cu11==10.3.0.86
[pip3] nvidia-cusolver-cu11==11.4.1.48
[pip3] nvidia-cusparse-cu11==11.7.5.86
[pip3] nvidia-nccl-cu11==2.21.5
[pip3] nvidia-nvtx-cu11==11.8.86
[pip3] torch==2.5.1+cu118
[pip3] torchaudio==2.5.1+cu118
[pip3] torchvision==0.20.1+cu118
[pip3] triton==3.1.0
[conda] Could not collect
cc @ezyang @gchanan @zou3519 @kadeng @msaroufim @EikanWang @jgong5 @wenzhe-nrv @sanchitintel
| true
|
2,792,692,070
|
Flakybot fails to fetch test ownership information
|
malfet
|
closed
|
[
"triaged",
"module: regression",
"module: devx"
] | 5
|
CONTRIBUTOR
|
### 🐛 Describe the bug
See https://github.com/pytorch/pytorch/issues/144900 for an example that was reported about test_nestedtensor.py, but flakybot failed to apply the `module: nestedtensor` label despite clear ownership.
Same with https://github.com/pytorch/pytorch/issues/144963 - why doesn't it have the `oncall: quantization` label?
### Versions
CI
cc @ZainRizvi @kit1980 @huydhn @clee2000
| true
|
2,792,663,145
|
DISABLED test_run_decompositions_same_handle_id (__main__.TestNumericDebugger)
|
pytorch-bot[bot]
|
open
|
[
"oncall: quantization",
"triaged",
"module: flaky-tests",
"skipped"
] | 5
|
NONE
|
Platforms: mac, macos, rocm, linux
This test was disabled because it is failing in CI. See [recent examples](https://hud.pytorch.org/flakytest?name=test_run_decompositions_same_handle_id&suite=TestNumericDebugger&limit=100) and the most recent trunk [workflow logs](https://github.com/pytorch/pytorch/runs/35698516196).
Over the past 3 hours, it has been determined flaky in 3 workflow(s) with 6 failures and 3 successes.
**Debugging instructions (after clicking on the recent samples link):**
DO NOT ASSUME THINGS ARE OKAY IF THE CI IS GREEN. We now shield flaky tests from developers so CI will thus be green but it will be harder to parse the logs.
To find relevant log snippets:
1. Click on the workflow logs linked above
2. Click on the Test step of the job so that it is expanded. Otherwise, the grepping will not work.
3. Grep for `test_run_decompositions_same_handle_id`
4. There should be several instances run (as flaky tests are rerun in CI) from which you can study the logs.
Test file path: `test_quantization.py`
ConnectionTimeoutError: Connect timeout for 5000ms, GET https://raw.githubusercontent.com/pytorch/pytorch/main/test/test_quantization.py -2 (connected: false, keepalive socket: false, socketHandledRequests: 1, socketHandledResponses: 0)
headers: {}
cc @jerryzh168 @jianyuh @raghuramank100 @jamesr66a @vkuzo @jgong5 @Xia-Weiwen @leslie-fang-intel @msaroufim @clee2000 @wdvr @malfet
| true
|
2,792,608,409
|
[BE] Add missing throw of `std::runtime_error` in scrc/cuda/utils.cpp
|
rec
|
closed
|
[
"open source",
"better-engineering",
"Merged",
"ciflow/trunk",
"topic: not user facing"
] | 5
|
COLLABORATOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #144962
| true
|
2,792,537,803
|
CUDAGraph outputs will be overwritten by a subsequent run?
|
wbigat
|
open
|
[
"module: cuda",
"triaged",
"module: cuda graphs",
"oncall: pt2"
] | 8
|
CONTRIBUTOR
|
### 🐛 Describe the bug
Hello, I have some doubts about the following cudagraph case.
I submitted another issue, #144386
```
import torch
def test_cuda_graph_output_overwritten():
class MLP(torch.nn.Module):
def __init__(self):
super().__init__()
self.ln = torch.nn.LayerNorm(6)
def forward(self, input):
ln = self.ln(input)
return ln
model = MLP().cuda()
compiled_model = torch.compile(mode="reduce-overhead")(model)
compiled_model(torch.randn([2, 6], device="cuda"))
@torch.compile(mode="reduce-overhead")
def my_model(x):
y = torch.matmul(x, x)
return y
x = torch.randn(10, 10, device="cuda")
y1 = my_model(x)
y2 = my_model(x)
print(y1)
# RuntimeError: Error: accessing tensor output of CUDAGraphs that has been overwritten by a subsequent run.
test_cuda_graph_output_overwritten()
```
It was updated just the other day by the following PR
```
https://github.com/pytorch/pytorch/pull/144793/files
```
The case runs successfully, and the error described in the doc cannot be reproduced.
What I want to know is whether the CUDAGraph output will be overwritten by subsequent runs. I found that the doc did not match the actual test results. I don't know if the doc is wrong or the test case was designed incorrectly.
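Not an answer to the question above, but for reference, a sketch (reusing `my_model` and `x` from the repro) of the pattern the cudagraphs error message points to when an output must stay valid across replays; `torch.compiler.cudagraph_mark_step_begin()` is the other documented option:
```python
# Clone the output so it no longer aliases the cudagraph's static output buffer.
y1 = my_model(x).clone()
y2 = my_model(x)
print(y1)  # still valid after the second replay
```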
cc @ptrblck @msaroufim @eqy @mcarilli @ezyang @eellison @penguinwu @BoyuanFeng @chauhang @Edenzzzz
### Versions
torch 2.4.1
NVIDIA-SMI 560.35.05 Driver Version: 560.35.05 CUDA Version: 12.6
| true
|
2,792,415,398
|
Memory-efficient attention is not selected if the inputs' ndim != 4
|
gau-nernst
|
open
|
[
"triaged",
"module: sdpa"
] | 1
|
NONE
|
### 🐛 Describe the bug
```python
from contextlib import nullcontext
import torch
import torch.nn.functional as F
from torch.nn.attention import SDPBackend, sdpa_kernel
# shape used in FLUX's VAE
seq_len = 128 * 128
head_dim = 512
# ctx = sdpa_kernel([SDPBackend.EFFICIENT_ATTENTION])
ctx = nullcontext()
torch.cuda.reset_peak_memory_stats()
shape = (1, 1, 1, seq_len, head_dim)
q, k, v = [torch.randn(shape, dtype=torch.bfloat16, device="cuda") for _ in range(3)]
with ctx:
F.scaled_dot_product_attention(q, k, v)
print(f"{shape}: {torch.cuda.max_memory_allocated() / 1e9:.2f} GB")
torch.cuda.reset_peak_memory_stats()
shape = (1, 1, seq_len, head_dim)
q, k, v = [torch.randn(shape, dtype=torch.bfloat16, device="cuda") for _ in range(3)]
with ctx:
F.scaled_dot_product_attention(q, k, v)
print(f"{shape}: {torch.cuda.max_memory_allocated() / 1e9:.2f} GB")
torch.cuda.reset_peak_memory_stats()
shape = (1, seq_len, head_dim)
q, k, v = [torch.randn(shape, dtype=torch.bfloat16, device="cuda") for _ in range(3)]
with ctx:
F.scaled_dot_product_attention(q, k, v)
print(f"{shape}: {torch.cuda.max_memory_allocated() / 1e9:.2f} GB")
torch.cuda.reset_peak_memory_stats()
shape = (seq_len, head_dim)
q, k, v = [torch.randn(shape, dtype=torch.bfloat16, device="cuda") for _ in range(3)]
with ctx:
F.scaled_dot_product_attention(q, k, v)
print(f"{shape}: {torch.cuda.max_memory_allocated() / 1e9:.2f} GB")
```
```
(1, 1, 1, 16384, 512): 2.61 GB
(1, 1, 16384, 512): 0.11 GB
(1, 16384, 512): 2.61 GB
(16384, 512): 2.61 GB
```
If I use `ctx = sdpa_kernel([SDPBackend.EFFICIENT_ATTENTION])`, only the cases where `ndim == 4` do not error out.
**Expected behavior**: memory-efficient attention should be selected.
Possibly related: #127523 (but I don't use attention mask here)
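A possible workaround sketch (not a fix), assuming a CUDA device: reshape q/k/v to the 4D `(batch, heads, seq, head_dim)` layout before calling SDPA, then view the result back:
```python
import torch
import torch.nn.functional as F

seq_len, head_dim = 128 * 128, 512
q, k, v = [torch.randn(seq_len, head_dim, dtype=torch.bfloat16, device="cuda") for _ in range(3)]

out = F.scaled_dot_product_attention(
    q.view(1, 1, seq_len, head_dim),
    k.view(1, 1, seq_len, head_dim),
    v.view(1, 1, seq_len, head_dim),
).view(seq_len, head_dim)
```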
cc: @drisspg
### Versions
torch==2.7.0.dev20250105+cu126
cc @chauhang @penguinwu @zou3519 @bdhirsh @yf225 @Chillee @drisspg @yanboliang @BoyuanFeng
| true
|
2,792,415,036
|
Introduce a new API isAcceleratorExcluded
|
guangyey
|
closed
|
[
"open source",
"Merged",
"ciflow/trunk",
"release notes: cpp",
"module: accelerator"
] | 2
|
COLLABORATOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #143783
* __->__ #144959
cc @albanD @EikanWang
| true
|
2,792,335,389
|
Let `tensor_a.new_tensor()` be on `tensor_a.device` by default
|
oraluben
|
closed
|
[
"triaged",
"open source",
"Merged",
"ciflow/trunk",
"release notes: python_frontend",
"topic: bc breaking",
"module: python frontend"
] | 8
|
CONTRIBUTOR
|
Fixes #144957
Closes #73838 cc @albanD @ezyang
Currently, `tensor_a.new_tensor()` will return a CPU tensor no matter where `tensor_a` is. This differs from the documentation and is a side-effect of https://github.com/pytorch/pytorch/pull/41984.
See #144957 for how the current logic breaks dynamo.
This PR restores the documented behavior and adds tests for `new_tensor`.
| true
|
2,792,275,105
|
Inconsistency of `tensor.new_tensor(data)` between eager and dynamo
|
oraluben
|
closed
|
[
"triaged",
"module: numpy",
"oncall: pt2",
"module: dynamo"
] | 0
|
CONTRIBUTOR
|
### 🐛 Describe the bug
```python
import torch
import numpy
from contextlib import AbstractContextManager
c = torch.compile
dev = 'cuda:0'
class expected(AbstractContextManager):
def __init__(self, expected_exception_cls=None, subclass=False):
self.expected = expected_exception_cls
self.accept_subclass = subclass
def __exit__(self, exc_type, exc_value, traceback):
if self.expected is not None:
assert exc_type is not None, 'Expected exception not raised'
if issubclass(exc_type, self.expected) if self.accept_subclass else exc_type == self.expected:
return True
return False
def foo(a: numpy.ndarray, b: torch.Tensor):
a = b.new_tensor(a)
return torch.cat([a, b], dim=-1)
foo(
numpy.array([ 1 ]),
torch.randint(0, 10, [1], device=dev),
)
with expected(torch._dynamo.exc.TorchRuntimeError):
c(foo)(
numpy.array([ 1 ]),
torch.randint(0, 10, [1], device=dev),
)
with expected(RuntimeError):
foo(
torch.randint(0, 10, [1]),
torch.randint(0, 10, [1], device=dev),
)
with expected(torch._dynamo.exc.TorchRuntimeError):
c(foo)(
torch.randint(0, 10, [1]),
torch.randint(0, 10, [1], device=dev),
)
```
There are 4 calls here: {`tensor.new_tensor(ndarray)`,`tensor.new_tensor(tensor)`} with {`eager`/`dynamo`}, and only the first one works without raising an exception.
#73838 said that `tensor_a.new_tensor(tensor_b)` returning a tensor on `tensor_b.device` is a side-effect, not an intentional change.
With `torch.compile`, `ndarray`s are converted to `FakeTensor` and treated like tensors, not data. So the device will be the default value `cpu`, not `tensor_a.device`, causing the inconsistency here.
This causes an issue when we tried to compile an existing model that relies on the documented behaviour, i.e. `tensor_a.new_tensor(tensor_b)` should return a tensor on `tensor_a.device`.
While there are other possible fixes, such as handling this case in dynamo, I'd prefer to get back to the documented behaviour: #144958
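In the meantime, a workaround sketch (a hypothetical variant of `foo` from the repro above): pass `device=` explicitly so eager and compiled runs agree:
```python
def foo_workaround(a, b):
    # Being explicit about the device sidesteps the eager/dynamo divergence described above.
    a = b.new_tensor(a, device=b.device)
    return torch.cat([a, b], dim=-1)
```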
### Versions
verified on main, unrelated to envs.
cc @mruberry @rgommers @chauhang @penguinwu @voznesenskym @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @amjames
| true
|
2,792,201,575
|
[Dynamo] Allow `format()` to handle int
|
shink
|
closed
|
[
"triaged",
"open source",
"Merged",
"ciflow/trunk",
"topic: not user facing",
"module: dynamo"
] | 14
|
CONTRIBUTOR
|
Fixes #144830
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @chauhang @amjames
| true
|
2,792,186,646
|
ModuleNotFoundError: No module named 'torch.privateuseone'
|
xiangxinhello
|
open
|
[
"module: cpp",
"triaged",
"module: PrivateUse1"
] | 24
|
NONE
|
### 🐛 Describe the bug
When I add Backend::PrivateUse1, it throws the error `ModuleNotFoundError: No module named 'torch.privateuseone'`:
import torch
a = torch.ones((3,3), device="privateuseone")
std::vector<std::pair<Backend, ScalarType>> all_declared_types() {
std::vector<std::pair<Backend, ScalarType>> ret;
// NOTE: Do not add more types here. This list controls the creation
// of legacy tensor types e.g. torch.cuda.FloatTensor which are
// maintained for backwards-compatibility only.
auto backends = {
Backend::PrivateUse1, Backend::CPU, Backend::CUDA, Backend::SparseCPU, Backend::SparseCUDA};

### Versions
PyTorch version: 2.5.0a0+gita8d6afb
Is debug build: True
CUDA used to build PyTorch: 12.4
ROCM used to build PyTorch: N/A
OS: Ubuntu 22.04.4 LTS (x86_64)
GCC version: (Ubuntu 11.4.0-1ubuntu1~22.04) 11.4.0
Clang version: Could not collect
CMake version: version 3.31.2
Libc version: glibc-2.35
Python version: 3.10.16 (main, Dec 11 2024, 16:24:50) [GCC 11.2.0] (64-bit runtime)
Python platform: Linux-5.15.0-125-generic-x86_64-with-glibc2.35
Is CUDA available: True
CUDA runtime version: 12.4.131
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration: GPU 0: NVIDIA GeForce RTX 3090
Nvidia driver version: 550.120
cuDNN version: Could not collect
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 46 bits physical, 48 bits virtual
Byte Order: Little Endian
CPU(s): 96
On-line CPU(s) list: 0-95
Vendor ID: GenuineIntel
Model name: Intel(R) Xeon(R) Gold 6248R CPU @ 3.00GHz
CPU family: 6
Model: 85
Thread(s) per core: 2
Core(s) per socket: 24
Socket(s): 2
Stepping: 7
CPU max MHz: 4000.0000
CPU min MHz: 1200.0000
BogoMIPS: 6000.00
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc art arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc cpuid aperfmperf pni pclmulqdq dtes64 ds_cpl vmx smx est tm2 ssse3 sdbg fma cx16 xtpr pdcm pcid dca sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm 3dnowprefetch cpuid_fault epb cat_l3 cdp_l3 invpcid_single intel_ppin ssbd mba ibrs ibpb stibp ibrs_enhanced tpr_shadow vnmi flexpriority ept vpid ept_ad fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid cqm mpx rdt_a avx512f avx512dq rdseed adx smap clflushopt clwb intel_pt avx512cd avx512bw avx512vl xsaveopt xsavec xgetbv1 xsaves cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local dtherm ida arat pln pts pku ospke avx512_vnni md_clear flush_l1d arch_capabilities
Virtualization: VT-x
L1d cache: 1.5 MiB (48 instances)
L1i cache: 1.5 MiB (48 instances)
L2 cache: 48 MiB (48 instances)
L3 cache: 71.5 MiB (2 instances)
NUMA node(s): 2
NUMA node0 CPU(s): 0-23,48-71
NUMA node1 CPU(s): 24-47,72-95
Vulnerability Gather data sampling: Mitigation; Microcode
Vulnerability Itlb multihit: KVM: Mitigation: VMX disabled
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Mitigation; Clear CPU buffers; SMT vulnerable
Vulnerability Reg file data sampling: Not affected
Vulnerability Retbleed: Mitigation; Enhanced IBRS
Vulnerability Spec rstack overflow: Not affected
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl and seccomp
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Enhanced / Automatic IBRS; IBPB conditional; RSB filling; PBRSB-eIBRS SW sequence; BHI SW loop, KVM SW loop
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Mitigation; TSX disabled
Versions of relevant libraries:
[pip3] numpy==2.2.1
[pip3] optree==0.13.1
[pip3] torch==2.5.0a0+gita8d6afb
[conda] magma-cuda121 2.6.1 1 pytorch
[conda] mkl-include 2025.0.1 pypi_0 pypi
[conda] mkl-static 2025.0.1 pypi_0 pypi
[conda] numpy 2.2.1 pypi_0 pypi
[conda] optree 0.13.1 pypi_0 pypi
[conda] torch 2.5.0a0+gita8d6afb dev_0 <develop>
cc @jbschlosser @NmomoN @mengpenghui @fwenguang @cdzhan @1274085042 @PHLens
| true
|
2,792,117,280
|
[torch.export] _insert_copy_for_mutations can't generate proper copy nodes for pure inplace ops
|
GodHforever
|
open
|
[
"oncall: pt2",
"oncall: export"
] | 3
|
NONE
|
### 🐛 Describe the bug
I am calling torch.export on the simplest `nn.ReLU(inplace=True)` module, and the following error occurs:
```
RuntimeError: Could not find input in either buffer or input nodes
```
My test code is as follows:
```
def ori_test():
x = torch.rand(2,3)
m = torch.nn.ReLU(inplace=True).eval()
m = torch.export.export(m, (x,))
mm = m.module() # error occurs
```
After debugging, I realised that the root cause was that `torch.export.export` was modifying the graph in a way that didn't strictly correspond to the rules in `class Graph`.
The generated graph is as follows:
```
graph():
%arg0_1 : [num_users=1] = placeholder[target=arg0_1]
%relu : [num_users=1] = call_function[target=torch.ops.aten.relu.default](args = (%arg0_1,), kwargs = {})
return (relu, relu)
```
It's ok. However, in `placeholder_naming_pass`, it gets the original argument name "input" in aten and modifies the structure of the graph into this
```
graph():
%input : [num_users=1] = placeholder[target=input]
%relu : [num_users=1] = call_function[target=torch.ops.aten.relu.default](args = (%input,), kwargs = {})
return (relu, relu)
```
**str "input" is confilicted with `builtins.__dict__`**. Thus, when it comes into `_unlift_exported_program_lifted_states`, it calls "copy.deepcopy", which is a method of `Graph`. In the process of copying, `_is_illegal_name` checks for naming conflicts, resulting in the diagram being modified as follows:
```
graph():
%input_1 : [num_users=1] = placeholder[target=input]
%relu : [num_users=1] = call_function[target=torch.ops.aten.relu.default](args = (%input_1,), kwargs = {})
return (relu, relu)
```
**This ultimately causes `_insert_copy_for_mutations` to fail to insert the copy node properly due to `input_name_to_node` mismatch.**
If possible, I think the same appropriate check should be added to `placeholder_naming_pass` to avoid this, although it may not be fully consistent with the naming in the original function.
Can any of the team members give some advice?
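As a possible (untested) workaround sketch while waiting for advice: wrap the module so the exported forward argument is not named `input`, which is the name the analysis above identifies as colliding with the builtin:
```python
import torch

class ReluWrapper(torch.nn.Module):
    def __init__(self):
        super().__init__()
        self.m = torch.nn.ReLU(inplace=True)

    def forward(self, x):  # avoid the parameter name "input"
        return self.m(x)

ep = torch.export.export(ReluWrapper().eval(), (torch.rand(2, 3),))
mod = ep.module()
```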
### Versions
Collecting environment information...
PyTorch version: 2.5.1+cpu
Is debug build: False
CUDA used to build PyTorch: None
ROCM used to build PyTorch: N/A
OS: Ubuntu 20.04.6 LTS (x86_64)
GCC version: (Ubuntu 9.4.0-1ubuntu1~20.04.2) 9.4.0
Clang version: Could not collect
CMake version: version 3.28.4
Libc version: glibc-2.31
Python version: 3.11.11 (main, Dec 11 2024, 16:28:39) [GCC 11.2.0] (64-bit runtime)
Python platform: Linux-5.4.0-200-generic-x86_64-with-glibc2.31
Is CUDA available: False
CUDA runtime version: No CUDA
CUDA_MODULE_LOADING set to: N/A
GPU models and configuration: No CUDA
Nvidia driver version: No CUDA
cuDNN version: No CUDA
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Byte Order: Little Endian
Address sizes: 46 bits physical, 48 bits virtual
CPU(s): 96
On-line CPU(s) list: 0-95
Thread(s) per core: 2
Core(s) per socket: 24
Socket(s): 2
NUMA node(s): 2
Vendor ID: GenuineIntel
CPU family: 6
Model: 85
Model name: Intel(R) Xeon(R) Platinum 8369HB CPU @ 3.30GHz
Stepping: 11
CPU MHz: 3800.073
CPU max MHz: 4200.0000
CPU min MHz: 1200.0000
BogoMIPS: 6600.06
Hypervisor vendor: KVM
Virtualization type: full
L1d cache: 1.5 MiB
L1i cache: 1.5 MiB
L2 cache: 48 MiB
L3 cache: 66 MiB
NUMA node0 CPU(s): 0-47
NUMA node1 CPU(s): 48-95
Vulnerability Gather data sampling: Unknown: Dependent on hypervisor status
Vulnerability Itlb multihit: KVM: Vulnerable
Vulnerability L1tf: Mitigation; PTE Inversion
Vulnerability Mds: Vulnerable: Clear CPU buffers attempted, no microcode; SMT Host state unknown
Vulnerability Meltdown: Mitigation; PTI
Vulnerability Mmio stale data: Vulnerable: Clear CPU buffers attempted, no microcode; SMT Host state unknown
Vulnerability Retbleed: Vulnerable
Vulnerability Spec store bypass: Vulnerable
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Retpolines; STIBP disabled; RSB filling; PBRSB-eIBRS Not affected; BHI Retpoline
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Vulnerable: Clear CPU buffers attempted, no microcode; SMT Host state unknown
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ss ht syscall nx pdpe1gb rdtscp lm constant_tsc rep_good nopl xtopology nonstop_tsc cpuid aperfmperf tsc_known_freq pni pclmulqdq monitor ssse3 fma cx16 pcid sse4_1 sse4_2 x2apic movbe popcnt aes xsave avx f16c rdrand hypervisor lahf_lm abm 3dnowprefetch invpcid_single pti fsgsbase tsc_adjust bmi1 hle avx2 smep bmi2 erms invpcid rtm mpx avx512f avx512dq rdseed adx smap clflushopt clwb avx512cd avx512bw avx512vl xsaveopt xsavec xgetbv1 xsaves avx512_bf16 ida arat avx512_vnni
Versions of relevant libraries:
[pip3] intel_extension_for_pytorch==2.5.0
[pip3] numpy==1.26.3
[pip3] torch==2.5.1+cpu
[pip3] torchaudio==2.5.1+cpu
[pip3] torchvision==0.20.1+cpu
[conda] intel-extension-for-pytorch 2.5.0 pypi_0 pypi
[conda] mkl-include 2025.0.1 pypi_0 pypi
[conda] mkl-static 2025.0.1 pypi_0 pypi
[conda] numpy 1.26.3 pypi_0 pypi
[conda] torch 2.5.1+cpu pypi_0 pypi
[conda] torchaudio 2.5.1+cpu pypi_0 pypi
[conda] torchvision 0.20.1+cpu pypi_0 pypi
cc @chauhang @penguinwu @avikchaudhuri @gmagogsfm @zhxchen17 @tugsbayasgalan @angelayi @suo @ydwu4
| true
|
2,792,112,790
|
[Inductor UT] Refactor FlexAttention UT and add CPU tests
|
jianan-gu
|
closed
|
[
"triaged",
"open source",
"Merged",
"ciflow/trunk",
"topic: not user facing",
"ciflow/periodic",
"module: inductor",
"ciflow/inductor",
"no-stale"
] | 11
|
CONTRIBUTOR
|
This PR extends and refines all the remaining UTs for CPU and more devices in `test/inductor/test_flex_attention.py` and `test/inductor/test_flex_decoding.py`, as a follow-up to https://github.com/pytorch/pytorch/pull/141453
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @ColinPeppler @amjames @desertfire @chauhang @aakhundov
| true
|
2,791,986,137
|
Fuzzer Improvements
|
exclamaforte
|
closed
|
[
"Merged",
"ciflow/trunk",
"topic: not user facing",
"module: inductor",
"ciflow/inductor"
] | 6
|
CONTRIBUTOR
|
Added more tests and cleaned up.
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @ColinPeppler @amjames @desertfire @chauhang @aakhundov
| true
|
2,791,873,360
|
Use torch with statement in torch distributed module
|
guangyey
|
closed
|
[
"oncall: distributed",
"open source",
"Merged",
"ciflow/trunk",
"release notes: distributed (ddp)",
"topic: not user facing"
] | 4
|
COLLABORATOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #144951
# Motivation
In https://github.com/pytorch/pytorch/pull/137678, we used the device-agnostic APIs to generalize the distributed module. As this [comment](https://github.com/pytorch/pytorch/pull/137678#discussion_r1828645683) said, we will use the with statement of `torch.Stream` once https://github.com/pytorch/pytorch/pull/140138 is landed.
cc @H-Huang @awgu @kwen2501 @wanchaol @fegin @fduwjj @wz337 @wconstab @d4l3k @c-p-i-o
| true
|
2,791,853,174
|
[Intel CPU] Fix issue #143484.
|
RanTao123
|
closed
|
[
"triaged",
"open source",
"Stale",
"topic: not user facing"
] | 4
|
CONTRIBUTOR
|
Fix issue in https://github.com/pytorch/pytorch/issues/143484.
The first dimension of w_ih and w_hh must be greater than or equal to 3 in order to be chunked into 3 parts.
| true
|
2,791,837,624
|
DISABLED test_config_option_dont_assume_alignment_cudagraphs_dynamic_shapes_cuda (__main__.DynamicShapesGPUTests)
|
pytorch-bot[bot]
|
closed
|
[
"module: rocm",
"triaged",
"module: flaky-tests",
"skipped",
"oncall: pt2",
"module: inductor"
] | 6
|
NONE
|
Platforms: rocm
This test was disabled because it is failing in CI. See [recent examples](https://hud.pytorch.org/flakytest?name=test_config_option_dont_assume_alignment_cudagraphs_dynamic_shapes_cuda&suite=DynamicShapesGPUTests&limit=100) and the most recent trunk [workflow logs](https://github.com/pytorch/pytorch/runs/35693621761).
Over the past 3 hours, it has been determined flaky in 6 workflow(s) with 6 failures and 6 successes.
**Debugging instructions (after clicking on the recent samples link):**
DO NOT ASSUME THINGS ARE OKAY IF THE CI IS GREEN. We now shield flaky tests from developers so CI will thus be green but it will be harder to parse the logs.
To find relevant log snippets:
1. Click on the workflow logs linked above
2. Click on the Test step of the job so that it is expanded. Otherwise, the grepping will not work.
3. Grep for `test_config_option_dont_assume_alignment_cudagraphs_dynamic_shapes_cuda`
4. There should be several instances run (as flaky tests are rerun in CI) from which you can study the logs.
<details><summary>Sample error message</summary>
```
Traceback (most recent call last):
File "/var/lib/jenkins/pytorch/test/inductor/test_torchinductor.py", line 11166, in test_config_option_dont_assume_alignment_cudagraphs
res = fn_c(inp)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_dynamo/eval_frame.py", line 576, in _fn
return fn(*args, **kwargs)
File "/var/lib/jenkins/pytorch/test/inductor/test_torchinductor.py", line 11143, in fn
def fn(x):
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_dynamo/eval_frame.py", line 755, in _fn
return fn(*args, **kwargs)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_functorch/aot_autograd.py", line 1211, in forward
return compiled_fn(full_args)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_functorch/_aot_autograd/runtime_wrappers.py", line 309, in runtime_wrapper
all_outs = call_func_at_runtime_with_args(
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_functorch/_aot_autograd/utils.py", line 126, in call_func_at_runtime_with_args
out = normalize_as_list(f(args))
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_functorch/_aot_autograd/utils.py", line 100, in g
return f(*args)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/autograd/function.py", line 575, in apply
return super().apply(*args, **kwargs) # type: ignore[misc]
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_functorch/_aot_autograd/runtime_wrappers.py", line 1823, in forward
fw_outs = call_func_at_runtime_with_args(
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_functorch/_aot_autograd/utils.py", line 126, in call_func_at_runtime_with_args
out = normalize_as_list(f(args))
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_functorch/_aot_autograd/runtime_wrappers.py", line 489, in wrapper
return compiled_fn(runtime_args)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_functorch/_aot_autograd/runtime_wrappers.py", line 671, in inner_fn
outs = compiled_fn(args)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_inductor/output_code.py", line 464, in __call__
return self.current_callable(inputs)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_inductor/compile_fx.py", line 1228, in run
return compiled_fn(new_inputs)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_inductor/cudagraph_trees.py", line 397, in deferred_cudagraphify
fn, out = cudagraphify(model, inputs, new_static_input_idxs, *args, **kwargs)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_inductor/cudagraph_trees.py", line 427, in cudagraphify
return manager.add_function(
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_inductor/cudagraph_trees.py", line 2255, in add_function
return fn, fn(inputs)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_inductor/cudagraph_trees.py", line 1949, in run
out = self._run(new_inputs, function_id)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_inductor/cudagraph_trees.py", line 2057, in _run
out = self.run_eager(new_inputs, function_id)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_inductor/cudagraph_trees.py", line 2221, in run_eager
return node.run(new_inputs)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_inductor/cudagraph_trees.py", line 635, in run
check_memory_pool(self.device_index, self.cuda_graphs_pool, refs)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_inductor/cudagraph_trees.py", line 1754, in check_memory_pool
if torch._C._cuda_checkPoolLiveAllocations(device, pool_id, unique_storages):
MemoryError: std::bad_alloc
To execute this test, run the following from the base repo dir:
PYTORCH_TEST_WITH_ROCM=1 python test/inductor/test_torchinductor_dynamic_shapes.py DynamicShapesGPUTests.test_config_option_dont_assume_alignment_cudagraphs_dynamic_shapes_cuda
This message can be suppressed by setting PYTORCH_PRINT_REPRO_ON_FAILURE=0
```
</details>
Test file path: `inductor/test_torchinductor_dynamic_shapes.py`
cc @jeffdaily @sunway513 @jithunnair-amd @pruthvistony @ROCmSupport @dllehr-amd @jataylo @hongxiayang @naromero77amd @clee2000 @wdvr @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @ColinPeppler @amjames @desertfire @chauhang @aakhundov
| true
|
2,791,837,550
|
DISABLED test_dropout_dynamic_shapes_cuda (__main__.DynamicShapesGPUTests)
|
pytorch-bot[bot]
|
closed
|
[
"module: rocm",
"triaged",
"module: flaky-tests",
"skipped",
"oncall: pt2",
"module: inductor"
] | 12
|
NONE
|
Platforms: rocm
This test was disabled because it is failing in CI. See [recent examples](https://hud.pytorch.org/flakytest?name=test_dropout_dynamic_shapes_cuda&suite=DynamicShapesGPUTests&limit=100) and the most recent trunk [workflow logs](https://github.com/pytorch/pytorch/runs/35694427782).
Over the past 3 hours, it has been determined flaky in 7 workflow(s) with 7 failures and 7 successes.
**Debugging instructions (after clicking on the recent samples link):**
DO NOT ASSUME THINGS ARE OKAY IF THE CI IS GREEN. We now shield flaky tests from developers so CI will thus be green but it will be harder to parse the logs.
To find relevant log snippets:
1. Click on the workflow logs linked above
2. Click on the Test step of the job so that it is expanded. Otherwise, the grepping will not work.
3. Grep for `test_dropout_dynamic_shapes_cuda`
4. There should be several instances run (as flaky tests are rerun in CI) from which you can study the logs.
<details><summary>Sample error message</summary>
```
Traceback (most recent call last):
File "/var/lib/jenkins/pytorch/test/inductor/test_torchinductor.py", line 8288, in test_dropout
result1 = fn1(x)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_dynamo/eval_frame.py", line 576, in _fn
return fn(*args, **kwargs)
File "/var/lib/jenkins/pytorch/test/inductor/test_torchinductor.py", line 8283, in fn1
@torch.compile(backend="inductor")
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_dynamo/eval_frame.py", line 755, in _fn
return fn(*args, **kwargs)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_functorch/aot_autograd.py", line 1211, in forward
return compiled_fn(full_args)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_functorch/_aot_autograd/runtime_wrappers.py", line 322, in runtime_wrapper
all_outs = call_func_at_runtime_with_args(
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_functorch/_aot_autograd/utils.py", line 126, in call_func_at_runtime_with_args
out = normalize_as_list(f(args))
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_functorch/_aot_autograd/runtime_wrappers.py", line 671, in inner_fn
outs = compiled_fn(args)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_functorch/_aot_autograd/runtime_wrappers.py", line 489, in wrapper
return compiled_fn(runtime_args)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_inductor/output_code.py", line 464, in __call__
return self.current_callable(inputs)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_inductor/compile_fx.py", line 1228, in run
return compiled_fn(new_inputs)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_inductor/cudagraph_trees.py", line 397, in deferred_cudagraphify
fn, out = cudagraphify(model, inputs, new_static_input_idxs, *args, **kwargs)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_inductor/cudagraph_trees.py", line 427, in cudagraphify
return manager.add_function(
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_inductor/cudagraph_trees.py", line 2255, in add_function
return fn, fn(inputs)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_inductor/cudagraph_trees.py", line 1949, in run
out = self._run(new_inputs, function_id)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_inductor/cudagraph_trees.py", line 2057, in _run
out = self.run_eager(new_inputs, function_id)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_inductor/cudagraph_trees.py", line 2221, in run_eager
return node.run(new_inputs)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_inductor/cudagraph_trees.py", line 635, in run
check_memory_pool(self.device_index, self.cuda_graphs_pool, refs)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_inductor/cudagraph_trees.py", line 1754, in check_memory_pool
if torch._C._cuda_checkPoolLiveAllocations(device, pool_id, unique_storages):
MemoryError: std::bad_alloc
To execute this test, run the following from the base repo dir:
PYTORCH_TEST_WITH_ROCM=1 python test/inductor/test_torchinductor_dynamic_shapes.py DynamicShapesGPUTests.test_dropout_dynamic_shapes_cuda
This message can be suppressed by setting PYTORCH_PRINT_REPRO_ON_FAILURE=0
```
</details>
Test file path: `inductor/test_torchinductor_dynamic_shapes.py`
cc @jeffdaily @sunway513 @jithunnair-amd @pruthvistony @ROCmSupport @dllehr-amd @jataylo @hongxiayang @naromero77amd @clee2000 @wdvr @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @ColinPeppler @amjames @desertfire @chauhang @aakhundov
| true
|
2,791,837,465
|
DISABLED test_aoti_eager_support_out_dynamic_shapes_cuda (__main__.DynamicShapesCodegenGPUTests)
|
pytorch-bot[bot]
|
closed
|
[
"module: rocm",
"triaged",
"module: flaky-tests",
"skipped",
"oncall: pt2",
"module: inductor"
] | 10
|
NONE
|
Platforms: rocm
This test was disabled because it is failing in CI. See [recent examples](https://hud.pytorch.org/flakytest?name=test_aoti_eager_support_out_dynamic_shapes_cuda&suite=DynamicShapesCodegenGPUTests&limit=100) and the most recent trunk [workflow logs](https://github.com/pytorch/pytorch/runs/35694427297).
Over the past 3 hours, it has been determined flaky in 7 workflow(s) with 7 failures and 7 successes.
**Debugging instructions (after clicking on the recent samples link):**
DO NOT ASSUME THINGS ARE OKAY IF THE CI IS GREEN. We now shield flaky tests from developers so CI will thus be green but it will be harder to parse the logs.
To find relevant log snippets:
1. Click on the workflow logs linked above
2. Click on the Test step of the job so that it is expanded. Otherwise, the grepping will not work.
3. Grep for `test_aoti_eager_support_out_dynamic_shapes_cuda`
4. There should be several instances run (as flaky tests are rerun in CI) from which you can study the logs.
<details><summary>Sample error message</summary>
```
Traceback (most recent call last):
File "/var/lib/jenkins/pytorch/test/inductor/test_torchinductor.py", line 975, in test_aoti_eager_support_out
res_tensor = torch.clamp(
RuntimeError: create_func_( &container_handle_, num_models, device_str.c_str(), cubin_dir.empty() ? nullptr : cubin_dir.c_str()) API call failed at /var/lib/jenkins/workspace/torch/csrc/inductor/aoti_runner/model_container_runner.cpp, line 81
To execute this test, run the following from the base repo dir:
PYTORCH_TEST_WITH_ROCM=1 python test/inductor/test_torchinductor_codegen_dynamic_shapes.py DynamicShapesCodegenGPUTests.test_aoti_eager_support_out_dynamic_shapes_cuda
This message can be suppressed by setting PYTORCH_PRINT_REPRO_ON_FAILURE=0
```
</details>
Test file path: `inductor/test_torchinductor_codegen_dynamic_shapes.py`
cc @jeffdaily @sunway513 @jithunnair-amd @pruthvistony @ROCmSupport @dllehr-amd @jataylo @hongxiayang @naromero77amd @clee2000 @wdvr @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @ColinPeppler @amjames @desertfire @chauhang @aakhundov
| true
|
2,791,832,224
|
In PyTorch, do matrix multiplication functions internally call cublasGEMM?
|
Linus-Voss
|
closed
|
[] | 1
|
NONE
|
In PyTorch, do matrix multiplication functions such as torch.matmul and torch.nn.functional.linear internally call cublasGEMM? If so, where can I find these calls, or is there any documentation that explains these calls?
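One way to check this empirically is to profile a matmul and inspect the CUDA kernel names (a quick sketch; kernel names vary by GPU and cuBLAS version):
```
import torch
from torch.profiler import profile, ProfilerActivity

a = torch.randn(1024, 1024, device="cuda")
b = torch.randn(1024, 1024, device="cuda")
with profile(activities=[ProfilerActivity.CUDA]) as prof:
    torch.matmul(a, b)
# Kernel names containing "gemm" generally indicate the cuBLAS / cuBLASLt path.
print(prof.key_averages().table(sort_by="cuda_time_total", row_limit=10))
```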
| true
|
2,791,827,141
|
fix acquire pattern in topk
|
ngimel
|
closed
|
[
"Merged",
"ciflow/trunk",
"release notes: cuda"
] | 3
|
COLLABORATOR
|
Similar to #128455, topk needs another threadfence to complete the acquire pattern.
| true
|
2,791,806,686
|
Replacing explicit backend search with api call
|
AnantGulati
|
open
|
[
"oncall: distributed",
"triaged",
"open source",
"ciflow/trunk",
"topic: not user facing"
] | 34
|
CONTRIBUTOR
|
Following up on PR #138216.
In this PR we modify the internal structure of the common distributed class DistributedTestBase to use the built-in function get_default_backend_for_device (#140536).
We have also refactored the test file test_functional_api to remove all explicit calls to test_hpu. To add support for a new device in this file, a user only needs to add the device to the list of devices. This supports both in-tree and out-of-tree devices.
This PR also adds a setter for world size inside the multiprocess test case class. This allows test cases in the same class to redefine the world size and removes the need to add a new class simply to change properties like world size, as was previously done in test_functional_api.
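A minimal sketch of the backend lookup pattern described above (illustrative only; the import location of `get_default_backend_for_device` under `torch.distributed` is an assumption):
```
import torch.distributed as dist

def init_pg_for_device(device_type: str, rank: int, world_size: int) -> None:
    # Instead of hard-coding "nccl"/"gloo"/etc. per device, ask PyTorch for the
    # default backend registered for this device type; out-of-tree devices that
    # register their own backend are covered as well.
    backend = dist.get_default_backend_for_device(device_type)
    dist.init_process_group(backend=backend, rank=rank, world_size=world_size)
```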
cc @H-Huang @awgu @kwen2501 @wanchaol @fegin @fduwjj @wz337 @wconstab @d4l3k @c-p-i-o
| true
|
2,791,799,412
|
cache pattern matcher graphs
|
xmfan
|
closed
|
[
"Stale",
"module: inductor",
"ciflow/inductor"
] | 3
|
MEMBER
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #144943
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @ColinPeppler @amjames @desertfire @chauhang @aakhundov
| true
|
2,791,786,236
|
[ROCm][TunableOp] Improve identification of fastest solution
|
naromero77amd
|
closed
|
[
"module: rocm",
"open source",
"Merged",
"ciflow/trunk",
"topic: not user facing",
"ciflow/rocm"
] | 6
|
COLLABORATOR
|
This PR addresses some stability issues with identifying the fastest solution on AMD GPUs, particularly the MI300.
Changes include:
- An improved timer, StreamTimerNoSync
- More aggressive skipping of slow solutions
- Additional statistics that can be used for diagnostics with `PYTORCH_TUNABLEOP_VERBOSE=3` (see the example below)
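For example, the extra diagnostics can be collected roughly like this (a sketch; assumes a ROCm build of PyTorch with TunableOp available):
```
import os
# Set before the first tunable GEMM runs.
os.environ["PYTORCH_TUNABLEOP_ENABLED"] = "1"
os.environ["PYTORCH_TUNABLEOP_VERBOSE"] = "3"

import torch
a = torch.randn(1024, 1024, device="cuda", dtype=torch.float16)
b = torch.randn(1024, 1024, device="cuda", dtype=torch.float16)
torch.matmul(a, b)  # the GEMM gets tuned; per-solution statistics are printed
```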
cc @jeffdaily @sunway513 @jithunnair-amd @pruthvistony @ROCmSupport @dllehr-amd @jataylo @hongxiayang
| true
|
2,791,731,891
|
[RFC] Add CPP INT8 SDPA Template for Inductor CPU
|
Valentine233
|
closed
|
[
"triaged",
"oncall: pt2",
"module: inductor"
] | 18
|
COLLABORATOR
|
### 🚀 The feature, motivation and pitch
## Motivation
PyTorch templates are now a common, flexible way to implement target kernels. We are considering implementing the Int8 SDPA CPU kernel using a template in PyTorch. With this method, the kernel code is generated from the corresponding template at compile time, and no explicit new op needs to be added. In the future, the template would also make it easier to tune the kernel with different parallel strategies or block sizes through benchmarking. This RFC proposes the approach for the template-based implementation.
## Approaches
We propose a template-based method to implement the Int8 SDPA CPU kernel. Here is the design of the main components.
### Pattern Match
During the post-grad fusion pass, we register a lowering pattern for Int8 SDPA. If the corresponding pattern hits, it is replaced by the `int8_sdpa_lowering` function, which then lowers further into the template.
### CPP INT8 SDPA Template
We create a CPP Int8 SDPA template `CppInt8SdpaTemplate` by inheriting from the CPP flex attention template `CppFlexAttentionTemplate`. We intend to reuse the common parts of the flex attention template as much as possible. Note that the CPP Int8 SDPA template does not need the modification-related inputs or member functions, as Int8 SDPA only needs the default behavior (simply adding the attention mask) for now.
#### Inputs
- Besides the SDPA typical inputs like query/key/value, extra zero points and scales need to be added for the quantization case.
- The `score_mod` and `mask_mod` are not needed.
#### Member functions
- The functions `add_choices` and `render` are overridden to support the int8 specific case.
- A new function `select_strategy` is added to generate the kernel with various parallel loop strategies or block sizes, chosen heuristically based on the device info and input shapes.
- The modification-related functions like `apply_score_mod` are not needed.
#### Template codes
- Reuses the common codes in flex attention one.
- Adds more specific functions for data type int8, such as compensation functions.
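A minimal illustrative skeleton of the proposed class shape (not actual PyTorch source; the base-class import path and the method signatures are assumptions):
```
# Import path assumed from the current Inductor layout.
from torch._inductor.codegen.cpp_flex_attention_template import CppFlexAttentionTemplate

class CppInt8SdpaTemplate(CppFlexAttentionTemplate):
    def add_choices(self, choices, input_nodes, layout,
                    q_scale, q_zp, k_scale, k_zp, v_scale, v_zp):
        # Register int8 SDPA kernel candidates for this pattern, threading the
        # quantization scales/zero points through to codegen.
        ...

    def select_strategy(self, device_info, input_shapes):
        # Heuristically choose the parallel loop strategy and block sizes.
        ...

    def render(self, kernel, **kwargs):
        # Emit the C++ kernel from the template, reusing the common flex
        # attention pieces and adding int8-specific compensation helpers.
        ...
```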
### Alternatives
_No response_
### Additional context
_No response_
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @ColinPeppler @amjames @desertfire @chauhang @aakhundov
| true
|
2,791,725,668
|
[WIP] Move XNNPACKQuantizer from PyTorch to ExecuTorch
|
digantdesai
|
closed
|
[
"fb-exported",
"Merged",
"ciflow/trunk",
"release notes: quantization",
"skip-pr-sanity-checks",
"module: dynamo",
"ciflow/inductor"
] | 25
|
CONTRIBUTOR
|
Summary:
This replicates XNNPACKQuantizer from PyTorch to ExecuTorch.
Rationale:
The main motivation is to avoid a PyTorch pin update in OSS after each XNNPACKQuantizer update, which can be rather frequent.
Other impact and considerations:
The PT2E flow (which lives in PyTorch) relies heavily on XNNPACKQuantizer for an "example" quantizer implementation and, more importantly, for tests. For now, we will keep torch.ao.quantization.xnnpack_quantizer as is but mark it as not BC and deprecated, to discourage new dependencies on it.
Other OSS repositories using XNNPACKQuantizer from PyTorch now have to take an additional dependency on ExecuTorch.
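For downstream users, the dependency change looks roughly like this (the ExecuTorch import path below is an assumption):
```
# Before: the quantizer shipped with PyTorch (to be marked deprecated / not BC).
from torch.ao.quantization.quantizer.xnnpack_quantizer import XNNPACKQuantizer

# After: the replicated quantizer maintained in ExecuTorch.
from executorch.backends.xnnpack.quantizer.xnnpack_quantizer import XNNPACKQuantizer
```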
Differential Revision: D68191752
cc @albanD @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @chauhang @amjames
| true
|
2,791,706,510
|
[inductor] [bug fix] align `conv` with eager when handling `uint`
|
shaoyuyoung
|
closed
|
[
"open source",
"topic: not user facing",
"module: inductor"
] | 4
|
CONTRIBUTOR
|
Fixes #144314
Similar to #144313.
I added the error checking in `meta_registration`.
Unit test:
```
pytest -s -v test/inductor/test_torchinductor.py -k test_conv_errors_with_uint
```
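For reference, the check is conceptually along these lines (a sketch, not the actual diff; the exact dtypes and error message are assumptions):
```
import torch

def _check_conv_input_dtype(input: torch.Tensor, weight: torch.Tensor) -> None:
    # Mirror eager's behavior: reject unsigned integer dtypes that convolution
    # does not implement, instead of letting Inductor silently lower them.
    unsupported = (torch.uint16, torch.uint32, torch.uint64)
    torch._check(
        input.dtype not in unsupported and weight.dtype not in unsupported,
        lambda: f"convolution is not implemented for {input.dtype}",
    )
```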
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @ColinPeppler @amjames @desertfire @chauhang @aakhundov
| true
|
2,791,681,642
|
[Inductor][FlexAttention] Supports dynamic shapes with custom kernel options
|
yanboliang
|
closed
|
[
"Merged",
"ciflow/trunk",
"topic: not user facing",
"module: inductor",
"ciflow/inductor"
] | 3
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #144938
Fixes #144815
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @ColinPeppler @amjames @desertfire @chauhang @aakhundov
| true
|
2,791,625,992
|
[inductor][cpu]float32 dynamic shape maml_omniglot performance regression in 2025-01-13 nightly release
|
zxd1997066
|
open
|
[
"oncall: pt2",
"oncall: cpu inductor"
] | 4
|
CONTRIBUTOR
|
### 🐛 Describe the bug
dynamic shape default wrapper

| suite | name | thread | batch_size_new | speed_up_new | inductor_new | eager_new | compilation_latency_new | batch_size_old | speed_up_old | inductor_old | eager_old | compilation_latency_old | Ratio Speedup(New/old) | Eager Ratio(old/new) | Inductor Ratio(old/new) | Compilation_latency_Ratio(old/new) |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| torchbench | maml_omniglot | single | 5 | 1.991409 | 0.0011490320000000001 | 0.0022881926660880004 | 9.557705 | 5 | 2.569708 | 0.000891765 | 0.00229157565462 | 9.459977 | 0.77 | 1.0 | 0.78 | 0.99 |
dynamic shape cpp wrapper

| suite | name | thread | batch_size_new | speed_up_new | inductor_new | eager_new | compilation_latency_new | batch_size_old | speed_up_old | inductor_old | eager_old | compilation_latency_old | Ratio Speedup(New/old) | Eager Ratio(old/new) | Inductor Ratio(old/new) | Compilation_latency_Ratio(old/new) |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| torchbench | maml_omniglot | single | 5 | 2.090643 | 0.001101955 | 0.002303794507065 | 6.568006 | 5 | 2.732528 | 0.000844825 | 0.0023085079675999997 | 6.547288 | 0.77 | 1.0 | 0.77 | 1.0 |
the last good commit: f8fcb9e7d38b82844d72ae32c27d1592db27a8e2
```
/workspace/pytorch# bash inductor_single_run.sh single inference performance torchbench maml_omniglot float32 first dynamic
Testing with dynamic shapes.
Testing with inductor.
single-thread testing....
loading model: 0it [00:00, ?it/s]
cpu eval maml_omniglot
running benchmark: 100%|██████████████████████████████████████████████████████████████████| 50/50 [00:00<00:00, 249.55it/s]
1.778x
WARNING:common:Trying to call the empty_gpu_cache for device: cpu, which is not in list [cuda, xpu]
dev,name,batch_size,speedup,abs_latency,compilation_latency,compression_ratio,eager_peak_mem,dynamo_peak_mem,calls_captured,unique_graphs,graph_breaks,unique_graph_breaks,autograd_captures,autograd_compiles,cudagraph_skips
cpu,maml_omniglot,5,1.777746,1.250640,27.412123,0.846500,47.260877,55.830938,14,1,0,0,0,0,1
```
the bad commit: 28b4992e7a60bb3fbb07c591099fa810557b4e57
```
/workspace/pytorch# bash inductor_single_run.sh single inference performance torchbench maml_omniglot float32 first dynamic
Testing with dynamic shapes.
Testing with inductor.
single-thread testing....
loading model: 0it [00:00, ?it/s]
cpu eval maml_omniglot
running benchmark: 100%|██████████████████████████████████████████████████████████████████| 50/50 [00:00<00:00, 224.95it/s]
1.434x
WARNING:common:Trying to call the empty_gpu_cache for device: cpu, which is not in list [cuda, xpu]
dev,name,batch_size,speedup,abs_latency,compilation_latency,compression_ratio,eager_peak_mem,dynamo_peak_mem,calls_captured,unique_graphs,graph_breaks,unique_graph_breaks,autograd_captures,autograd_compiles,cudagraph_skips
cpu,maml_omniglot,5,1.433554,1.590300,30.843010,0.770410,47.418573,61.549773,14,1,0,0,0,0,1
```
### Versions
SW info

| name | target_branch | target_commit | refer_branch | refer_commit |
| --- | --- | --- | --- | --- |
| torchbench | main | 766a5e3a | main | 766a5e3a |
| torch | main | e0f67405a154e7f9ce1ca9533cbc1d156fe075d7 | main | f2d6cfa6775601df5a038f7a4d0b37da75a53ed9 |
| torchvision | main | 0.19.0a0+d23a6e1 | main | 0.19.0a0+d23a6e1 |
| torchtext | main | 0.16.0a0+b0ebddc | main | 0.16.0a0+b0ebddc |
| torchaudio | main | 2.6.0a0+b6d4675 | main | 2.6.0a0+b6d4675 |
| torchdata | main | 0.7.1a0+0790338 | main | 0.7.1a0+0790338 |
| dynamo_benchmarks | main | nightly | main | nightly |
Repro:
[inductor_single_run.sh](https://github.com/chuanqi129/inductor-tools/blob//main/scripts/modelbench/inductor_single_run.sh)
bash inductor_single_run.sh single inference performance torchbench maml_omniglot float32 first dynamic
Suspected guilty commit: 28b4992e7a60bb3fbb07c591099fa810557b4e57
[torchbench-maml_omniglot-inference-float32-dynamic-default-single-performance-drop_guilty_commit.log](https://github.com/user-attachments/files/18433106/torchbench-maml_omniglot-inference-float32-dynamic-default-single-performance-drop_guilty_commit.log)
cc @chauhang @penguinwu @chuanqi129
| true
|
2,791,623,115
|
DISABLED test_compile_forward_clone_cpu_float32 (__main__.TestNestedTensorOpInfoCPU)
|
pytorch-bot[bot]
|
closed
|
[
"triaged",
"module: flaky-tests",
"module: nestedtensor",
"skipped"
] | 3
|
NONE
|
Platforms: asan, linux, mac, macos, win, windows
This test was disabled because it is failing in CI. See [recent examples](https://hud.pytorch.org/flakytest?name=test_compile_forward_clone_cpu_float32&suite=TestNestedTensorOpInfoCPU&limit=100) and the most recent trunk [workflow logs](https://github.com/pytorch/pytorch/runs/35684033086).
Over the past 3 hours, it has been determined flaky in 43 workflow(s) with 0 failures and 43 successes.
**Debugging instructions (after clicking on the recent samples link):**
DO NOT ASSUME THINGS ARE OKAY IF THE CI IS GREEN. We now shield flaky tests from developers so CI will thus be green but it will be harder to parse the logs.
To find relevant log snippets:
1. Click on the workflow logs linked above
2. Click on the Test step of the job so that it is expanded. Otherwise, the grepping will not work.
3. Grep for `test_compile_forward_clone_cpu_float32`
4. There should be several instances run (as flaky tests are rerun in CI) from which you can study the logs.
Test file path: `test_nestedtensor.py`
cc @clee2000 @wdvr @cpuhrsch @jbschlosser @bhosmer @drisspg @soulitzer @davidberard98 @YuqingJ
| true
|
2,791,623,013
|
DISABLED test_compile_forward_select_cuda_float32 (__main__.TestNestedTensorOpInfoCUDA)
|
pytorch-bot[bot]
|
closed
|
[
"triaged",
"module: flaky-tests",
"module: nestedtensor",
"skipped"
] | 4
|
NONE
|
Platforms: linux, rocm, slow
This test was disabled because it is failing in CI. See [recent examples](https://hud.pytorch.org/flakytest?name=test_compile_forward_select_cuda_float32&suite=TestNestedTensorOpInfoCUDA&limit=100) and the most recent trunk [workflow logs](https://github.com/pytorch/pytorch/runs/35686125481).
Over the past 3 hours, it has been determined flaky in 20 workflow(s) with 0 failures and 20 successes.
**Debugging instructions (after clicking on the recent samples link):**
DO NOT ASSUME THINGS ARE OKAY IF THE CI IS GREEN. We now shield flaky tests from developers so CI will thus be green but it will be harder to parse the logs.
To find relevant log snippets:
1. Click on the workflow logs linked above
2. Click on the Test step of the job so that it is expanded. Otherwise, the grepping will not work.
3. Grep for `test_compile_forward_select_cuda_float32`
4. There should be several instances run (as flaky tests are rerun in CI) from which you can study the logs.
Test file path: `test_nestedtensor.py`
cc @clee2000 @wdvr @cpuhrsch @jbschlosser @bhosmer @drisspg @soulitzer @davidberard98 @YuqingJ
| true
|
2,791,622,966
|
DISABLED test_pt2_traceable_aot_eager_cpu_float8_e5m2 (__main__.TestFloat8DtypeCPUOnlyCPU)
|
pytorch-bot[bot]
|
closed
|
[
"oncall: quantization",
"module: flaky-tests",
"skipped"
] | 10
|
NONE
|
Platforms: asan, linux, mac, macos
This test was disabled because it is failing in CI. See [recent examples](https://hud.pytorch.org/flakytest?name=test_pt2_traceable_aot_eager_cpu_float8_e5m2&suite=TestFloat8DtypeCPUOnlyCPU&limit=100) and the most recent trunk [workflow logs](https://github.com/pytorch/pytorch/runs/35681075767).
Over the past 3 hours, it has been determined flaky in 5 workflow(s) with 10 failures and 5 successes.
**Debugging instructions (after clicking on the recent samples link):**
DO NOT ASSUME THINGS ARE OKAY IF THE CI IS GREEN. We now shield flaky tests from developers so CI will thus be green but it will be harder to parse the logs.
To find relevant log snippets:
1. Click on the workflow logs linked above
2. Click on the Test step of the job so that it is expanded. Otherwise, the grepping will not work.
3. Grep for `test_pt2_traceable_aot_eager_cpu_float8_e5m2`
4. There should be several instances run (as flaky tests are rerun in CI) from which you can study the logs.
Test file path: `test_quantization.py`
cc @jerryzh168 @jianyuh @raghuramank100 @jamesr66a @vkuzo @jgong5 @Xia-Weiwen @leslie-fang-intel @msaroufim @clee2000 @wdvr
| true
|
2,791,622,922
|
DISABLED test_run_decompositions_map_handle_to_new_nodes (__main__.TestNumericDebugger)
|
pytorch-bot[bot]
|
open
|
[
"oncall: quantization",
"triaged",
"module: flaky-tests",
"module: macos",
"skipped"
] | 2
|
NONE
|
Platforms: mac, macos
This test was disabled because it is failing in CI. See [recent examples](https://hud.pytorch.org/flakytest?name=test_run_decompositions_map_handle_to_new_nodes&suite=TestNumericDebugger&limit=100) and the most recent trunk [workflow logs](https://github.com/pytorch/pytorch/runs/35684218376).
Over the past 3 hours, it has been determined flaky in 8 workflow(s) with 16 failures and 8 successes.
**Debugging instructions (after clicking on the recent samples link):**
DO NOT ASSUME THINGS ARE OKAY IF THE CI IS GREEN. We now shield flaky tests from developers so CI will thus be green but it will be harder to parse the logs.
To find relevant log snippets:
1. Click on the workflow logs linked above
2. Click on the Test step of the job so that it is expanded. Otherwise, the grepping will not work.
3. Grep for `test_run_decompositions_map_handle_to_new_nodes`
4. There should be several instances run (as flaky tests are rerun in CI) from which you can study the logs.
Test file path: `test_quantization.py`
cc @jerryzh168 @jianyuh @raghuramank100 @jamesr66a @vkuzo @jgong5 @Xia-Weiwen @leslie-fang-intel @msaroufim @clee2000 @wdvr @malfet @albanD
| true
|
2,791,584,852
|
[inductor][cpu]amp fp16 llama dynamic shape cpp wrapper performance regression in 2025-01-07 nightly release
|
zxd1997066
|
closed
|
[
"oncall: pt2",
"oncall: cpu inductor"
] | 9
|
CONTRIBUTOR
|
### 🐛 Describe the bug
amp fp16 dynamic shape cpp wrapper

| suite | name | thread | batch_size_new | speed_up_new | inductor_new | eager_new | compilation_latency_new | batch_size_old | speed_up_old | inductor_old | eager_old | compilation_latency_old | Ratio Speedup(New/old) | Eager Ratio(old/new) | Inductor Ratio(old/new) | Compilation_latency_Ratio(old/new) |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| torchbench | llama | multiple | 32 | 2.211985 | 0.021930711999999998 | 0.048510405983319994 | 39.177836 | 32 | 2.507979 | 0.018847622 | 0.04726944017593801 | 41.306366 | 0.88 | 0.97 | 0.86 | 1.05 |
| torchbench | llama | single | 1 | 3.950647 | 0.01318508 | 0.05208959674676 | 37.938252 | 1 | 4.542274 | 0.011483397 | 0.05216073562477799 | 40.390422 | 0.87 | 1.0 | 0.87 | 1.06 |
the last good commit: e88d06f54eeb80669a8a97322cf55c4da0519f08
```
/workspace/pytorch# bash inductor_single_run.sh multiple inference performance torchbench llama amp_fp16 first dynamic cpp
Testing with dynamic shapes.
Testing with cpp wrapper.
Testing with inductor.
multi-threads testing....
loading model: 0it [00:00, ?it/s]
cpu eval llama
running benchmark: 100%|███████████████████████████████████████████████████████████████████| 50/50 [00:03<00:00, 14.90it/s]
2.818x
WARNING:common:Trying to call the empty_gpu_cache for device: cpu, which is not in list [cuda, xpu]
dev,name,batch_size,speedup,abs_latency,compilation_latency,compression_ratio,eager_peak_mem,dynamo_peak_mem,calls_captured,unique_graphs,graph_breaks,unique_graph_breaks,autograd_captures,autograd_compiles,cudagraph_skips
cpu,llama,32,2.817930,19.087040,33.137240,0.947519,340.724531,359.596442,531,1,0,0,0,0,0
```
the bad commit: b5b419d6276e5f0a9df623b45e9fb478f93ecc4b
```
/workspace/pytorch# bash inductor_single_run.sh multiple inference performance torchbench llama amp_fp16 first dynamic cpp
running benchmark: 100%|███████████████████████████████████████████████████████████████████| 50/50 [00:03<00:00, 13.42it/s]
2.532x
WARNING:common:Trying to call the empty_gpu_cache for device: cpu, which is not in list [cuda, xpu]
dev,name,batch_size,speedup,abs_latency,compilation_latency,compression_ratio,eager_peak_mem,dynamo_peak_mem,calls_captured,unique_graphs,graph_breaks,unique_graph_breaks,autograd_captures,autograd_compiles,cudagraph_skips
cpu,llama,32,2.532279,22.605699,37.389425,0.928050,340.260454,366.640333,531,1,0,0,0,0,0
```
### Versions
SW info

| name | target_branch | target_commit | refer_branch | refer_commit |
| --- | --- | --- | --- | --- |
| torchbench | main | 766a5e3a | main | 766a5e3a |
| torch | main | f2d6cfa6775601df5a038f7a4d0b37da75a53ed9 | main | cf0b72c4ab960a847758132cc501cf793926e070 |
| torchvision | main | 0.19.0a0+d23a6e1 | main | 0.19.0a0+d23a6e1 |
| torchtext | main | 0.16.0a0+b0ebddc | main | 0.16.0a0+b0ebddc |
| torchaudio | main | 2.6.0a0+b6d4675 | main | 2.6.0a0+b6d4675 |
| torchdata | main | 0.7.0a0+11bb5b8 | main | 0.7.0a0+11bb5b8 |
| dynamo_benchmarks | main | nightly | main | nightly |
Repro:
[inductor_single_run.sh](https://github.com/chuanqi129/inductor-tools/blob//main/scripts/modelbench/inductor_single_run.sh)
bash inductor_single_run.sh multiple inference performance torchbench llama amp_fp16 first dynamic cpp
Suspected guilty commit: b5b419d6276e5f0a9df623b45e9fb478f93ecc4b
[torchbench-llama-inference-amp_fp16-dynamic-cpp-multiple-performance-drop_guilty_commit.log](https://github.com/user-attachments/files/18432852/torchbench-llama-inference-amp_fp16-dynamic-cpp-multiple-performance-drop_guilty_commit.log)
cc @chauhang @penguinwu @chuanqi129 @CaoE
| true
|
2,791,574,555
|
Error loading "torch\lib\aoti_custom_ops.dll" or one of its dependencies, when importing Torch, when building from Source on Windows 11 with cuDNN.
|
Panchovix
|
open
|
[
"module: build",
"module: windows",
"triaged"
] | 15
|
NONE
|
### 🐛 Describe the bug
Hi there, thanks for the great work.
When I build from source on Windows 11 (CUDA 12.6, VS 2022) and specify cuDNN (either 9.5.1 or 9.6.0), importing torch gives the following error:
```
>> import torch
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "<frozen importlib._bootstrap>", line 1360, in _find_and_load
File "<frozen importlib._bootstrap>", line 1331, in _find_and_load_unlocked
File "<frozen importlib._bootstrap>", line 935, in _load_unlocked
File "<frozen importlib._bootstrap_external>", line 999, in exec_module
File "<frozen importlib._bootstrap>", line 488, in _call_with_frames_removed
File "C:\Users\User\Desktop\pytorch_compile\pytorch\Miniconda3\Lib\site-packages\torch\__init__.py", line 274, in <module>
_load_dll_libraries()
File "C:\Users\User\Desktop\pytorch_compile\pytorch\Miniconda3\Lib\site-packages\torch\__init__.py", line 270, in _load_dll_libraries
raise err
OSError: [WinError 126] No se puede encontrar el módulo especificado. Error loading "C:\Users\Pancho\Desktop\pytorch_compile\pytorch\Miniconda3\Lib\site-packages\torch\lib\aoti_custom_ops.dll" or one of its dependencies.
```
Dependencies doesn't report that any DLL is missing

And procmon shows

I did have to set this in the CMake file:
```
set(CUDNN_LIBRARY_PATH "C:/Program Files/NVIDIA/CUDNN/v9.6/lib/12.6/x64/cudnn64_9.lib")
set(CUDNN_INCLUDE_PATH "C:/Program Files/NVIDIA/CUDNN/v9.6/include/12.6")
```
Otherwise it wouldn't detect cuDNN, even with those environment variables set on the Path. Related: https://github.com/pytorch/pytorch/issues/114054
Paths are
[Paths.txt](https://github.com/user-attachments/files/18432859/Paths.txt)
Cmake config is
[CMakeCache.txt](https://github.com/user-attachments/files/18432788/CMakeCache.txt)
When not setting up cuDNN, torch works, albeit very slowly, for image diffusion pipelines.
The commit used was 834086c, and I mostly used the `.\.ci\pytorch\win-test-helpers\build_pytorch.bat` file.
### Versions
Not applicable (can't import torch)
cc @malfet @seemethere @peterjc123 @mszhanyi @skyline75489 @nbcsm @iremyux @Blackhex
| true
|
2,791,547,690
|
Profile compile_inner instead of _compile_inner
|
laithsakka
|
closed
|
[
"fb-exported",
"Merged",
"ciflow/trunk",
"topic: not user facing",
"module: dynamo",
"ciflow/inductor"
] | 4
|
CONTRIBUTOR
|
Summary: title
Test Plan: NA
Reviewed By: jamesjwu
Differential Revision: D67990492
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @chauhang @amjames
| true
|
2,791,484,504
|
[Pipelining] Improve shape inference debug logging
|
wconstab
|
closed
|
[
"oncall: distributed",
"Merged",
"ciflow/trunk",
"release notes: distributed (pipeline)"
] | 3
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #144834
* __->__ #144929
Remove the log that just said "running forward", since that is not very useful by itself, and replace it with a roughly equivalent log that reports both input and output shapes after running forward.
Note: enabled by `TORCH_LOGS=+pp`
Example:
```
[rank0]:V0115 13:28:58.282000 3908366 torch/distributed/pipelining/stage.py:1400] Shape inference: stage 0 inputs (tensor(..., device='meta', size=(1, 64), dtype=torch.int64),), outputs (tensor(..., device='meta', size=(1, 64, 256), dtype=torch.bfloat16),)
```
cc @H-Huang @awgu @kwen2501 @wanchaol @fegin @fduwjj @wz337 @d4l3k @c-p-i-o
| true
|
2,791,465,075
|
Add API to detect if activation checkpointing is enabled in the current region or not
|
danielvegamyhre
|
open
|
[
"module: activation checkpointing",
"triaged"
] | 0
|
CONTRIBUTOR
|
### 🚀 The feature, motivation and pitch
I've been developing an experimental feature for torchao and doing a [PoC integration](https://github.com/pytorch/torchtitan/pull/778) in torchtitan. The implementation is based on custom Triton kernels, and we need to execute different kernels at different points in forward/backward depending on if AC is enabled or not.
At a high level:
- If activation checkpointing is enabled, we may want to optimize for peak memory usage and not precompute + save certain tensors for backward.
- If activation checkpointing is not enabled, we may want to optimize for throughput and precompute some tensors for backward pass during the forward pass, if there is a way to do efficiently.
After searching for a way to do this online, and then checking with @soulitzer, I found that PyTorch currently provides no API to detect whether the current region is using activation checkpointing or not. This would be a very useful feature for use cases like the one above.
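A hypothetical usage sketch of the kind of API being requested (the query function below does not exist in PyTorch today and is stubbed out purely for illustration):
```
import torch
import torch.nn as nn

def is_checkpointing_enabled() -> bool:
    # Hypothetical stand-in for the requested PyTorch API; a real implementation
    # would query autograd / activation-checkpointing state for the current region.
    return False

class Fp8LinearLike(nn.Module):
    def __init__(self, in_features: int, out_features: int) -> None:
        super().__init__()
        self.linear = nn.Linear(in_features, out_features)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        if is_checkpointing_enabled():
            # AC region: optimize for peak memory and skip precomputing/saving
            # extra tensors for backward.
            return self.linear(x)
        # No AC: optimize for throughput and precompute reusable tensors here.
        return self.linear(x)
```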
### Alternatives
As an alternative/workaround, I implemented an explicit flag in my prototype code to indicate if we should optimize for peak memory usage in this particular FP8 linear layer or not, and [execute kernels conditionally based on that flag](https://github.com/pytorch/ao/blob/5e59b510b97d5a1cd08da59b1f6b2df6a1d8cdfd/torchao/prototype/float8nocompile/float8nocompile_linear.py#L72).
However, this is somewhat of a hack and hurts composability with AC. It relies on the user remembering to set this flag if they are using AC in this layer, and requires the user to implement [helper functions](https://github.com/pytorch/torchtitan/pull/778/files#diff-7792012777a5a91b75304ed92ff6414b2f414e1a92a20c7ce9f64b54fb3c7d4bR112-R119) for more advanced AC strategies like selective per layer AC.
### Additional context
_No response_
cc @soulitzer
| true
|
2,791,461,921
|
XFAIL test_save_load_checkpoint
|
huydhn
|
closed
|
[
"oncall: distributed",
"Merged",
"topic: not user facing"
] | 4
|
CONTRIBUTOR
|
Fixes https://github.com/pytorch/pytorch/issues/137771
The issue keeps showing up and rerun disable tests couldn't reproduce the issue. So, XFAIL it while waiting for distributed team to investigate.
cc @H-Huang @awgu @kwen2501 @wanchaol @fegin @fduwjj @wz337 @wconstab @d4l3k @c-p-i-o
| true
|