| id | title | user | state | labels | comments | author_association | body | is_title |
|---|---|---|---|---|---|---|---|---|
2,932,626,915
|
Lintrunner running on newly added files despite being explicitly excluded in .lintrunner.toml
|
TovlyFB
|
closed
|
[
"module: ci",
"module: lint",
"triaged",
"module: devx"
] | 1
|
CONTRIBUTOR
|
In my [PR 148936](https://github.com/pytorch/pytorch/pull/148936), lintrunner is [failing with CLANGTIDY](https://github.com/pytorch/pytorch/actions/runs/13927137669/job/38974556917?pr=148936) despite my adding the newly added files to the `exclude_patterns` of the CLANGTIDY rule in `.lintrunner.toml`. Per @malfet, these CUDA files should not be linted with CLANGTIDY, but I can't figure out a way to exclude them. It also seems like the errors may be occurring in generated or included files (the linter states the error is in `usr/include/c++/11/cmath`), which I think also shouldn't be linted. How can I resolve the linter error, or at least understand what's causing it?
# [lintrunner error](https://github.com/pytorch/pytorch/actions/runs/13927137669/job/38974556917?pr=148936)
One example is shown below; see the [failing job itself](https://github.com/pytorch/pytorch/actions/runs/13927137669/job/38974556917?pr=148936) for all the errors, including a bunch more similar ones.
```
>>> Lint for ../../usr/include/c++/11/cmath:
Error (CLANGTIDY) [clang-diagnostic-error]
constexpr function 'fpclassify' without __host__ or __device__ attributes
cannot overload __device__ function with the same signature; add a
__host__ attribute, or build with -fno-cuda-host-device-constexpr
534 |
535 |#ifndef __CORRECT_ISO_CPP11_MATH_H_PROTO_FP
536 | constexpr int
>>> 537 | fpclassify(float __x)
538 | { return __builtin_fpclassify(FP_NAN, FP_INFINITE, FP_NORMAL,
539 | FP_SUBNORMAL, FP_ZERO, __x); }
540 |
```
# What I've tried
1. I tried adding the new .cuh and .cu files from my PR to the exclude section of the CLANGTIDY rule in `.lintrunner.toml`, as shown in the PR.
2. I tried narrowing the scope of lintrunner in [PR 149345](https://github.com/pytorch/pytorch/pull/149345). However, this didn't work, as it stopped lintrunner from linting any files at all; e.g., I purposely added a lint error in that PR and it wasn't caught.
cc @seemethere @malfet @pytorch/pytorch-dev-infra @ZainRizvi @kit1980 @huydhn @clee2000
| true
|
2,932,603,851
|
[inductor] Add a helper for convert index_dtype to torch dtype
|
isuruf
|
closed
|
[
"open source",
"Merged",
"ciflow/trunk",
"module: inductor",
"ciflow/inductor",
"release notes: inductor"
] | 3
|
COLLABORATOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #149531
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
| true
|
2,932,595,881
|
[CI][docker] Remove vulkan and swiftshader from docker builds
|
clee2000
|
closed
|
[
"Merged",
"module: vulkan",
"topic: not user facing"
] | 3
|
CONTRIBUTOR
|
Probably should have been removed with https://github.com/pytorch/pytorch/pull/139354/files?
Should I also remove mentions of them from build.sh and test.sh?
| true
|
2,932,553,924
|
Fakify torchbind objects in compile_fx and add tests for SigridTransformsInstanceTorchBind
|
yushangdi
|
closed
|
[
"fb-exported",
"Merged",
"ciflow/trunk",
"topic: not user facing",
"module: inductor",
"ciflow/inductor"
] | 8
|
CONTRIBUTOR
|
Summary:
We need to properly fakify torchbind objects, including the ones in graph module attributes, so the registered fake implementation works properly.
- `_fakify_script_objects` in `compile_fx`
- Allow fake torchbind objects in `torchbind_constants`
Remove `node.meta["unbacked_bindings"]` for `aot_compile` in `compile_fx`. Otherwise `ShapeProp` will fail when trying to resolve the `unbacked_bindings` of `with_effect` tokens.
Update `sigrid_transforms_test` to use the latest `torch._inductor.aot_compile` API.
Add a test for `Fakify torchbind objects in compile_fx and add tests for SigridTransformsInstanceTorchBind` in `e2e_test`.
Test Plan:
```
buck run //caffe2/torch/fb/sparsenn:sigrid_test -- -r test_transform_torch_bind
buck run //sigmoid/inference/test:e2e_test_cpu -- -r SigridTransforms
buck2 run mode/dev-nosan sigmoid/inference/ts_migration:pt2i_readiness_main -- --model_id 545017754 --test_suite ads_all --mode test_preproc
```
Differential Revision: D70013257
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
| true
|
2,932,458,473
|
Fix dynamic shapes reordering bug
|
tugsbayasgalan
|
closed
|
[
"Merged",
"ciflow/trunk",
"release notes: export"
] | 6
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #149528
When we create constraints, we look at the ordering of kwargs according to the model signature. But when we trace, we use the ordering based on how the user passes in their kwargs. As a result, constraints and dynamic shapes end up having a different order, causing issues when they have different dynamic tensor specs.
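A minimal sketch (hypothetical module and names, not taken from this PR) of the situation described above: kwargs passed to `torch.export.export` in a different order than the model signature declares them, with per-kwarg dynamic shape specs.
```python
import torch
from torch.export import Dim, export

class M(torch.nn.Module):
    def forward(self, x, *, a, b):
        return x + a.sum() + b.sum()

dyn = Dim("dyn")
x, a, b = torch.randn(4, 4), torch.randn(3), torch.randn(5)

# dynamic_shapes is keyed following the signature order (x, a, b), while the
# call below passes the kwargs in the reverse order (b before a).
ep = export(
    M(),
    (x,),
    kwargs={"b": b, "a": a},
    dynamic_shapes={"x": None, "a": {0: dyn}, "b": None},
)
```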
Differential Revision: [D71478578](https://our.internmc.facebook.com/intern/diff/D71478578)
| true
|
2,932,455,816
|
GHA request labels should represent independent fleet of runners
|
jeanschmidt
|
open
|
[
"module: ci",
"triaged",
"enhancement",
"needs design"
] | 3
|
CONTRIBUTOR
|
We have identified that a few runners are provided by multiple vendors/organizations yet use the same label.
* linux.s390x
* linux.idc.xpu
* linux.rocm.gpu.2
* macos-m2-15 (and mac label standards)
We need to identify the labels that are reused across fleets and define a new standard that better reflects where the runners are hosted.
The reasoning for this is related to the SLO agreement: the monitoring tooling available to us is based on the label requested by jobs. AFAIK this limitation comes from the GH side, which only reports the requested label for a job in its job API.
We can automate the distribution of load across multiple organizations/providers/fleets by using experiments and the runner determinator.
| true
|
2,932,444,267
|
Add release branch push triggers to rocm-mi300.yml
|
pytorchbot
|
closed
|
[
"module: rocm",
"open source",
"topic: not user facing",
"ciflow/rocm",
"ciflow/rocm-mi300"
] | 1
|
COLLABORATOR
|
When we added the rocm-mi300.yml earlier this year, we had lower capacity and we were just pipecleaning the workflow, so we set the trigger to only respond to pushes to main branch. But now we have more stability as well as capacity, and we would really like to ensure that the release branch is being tested on MI300s as well.
cc @jeffdaily @sunway513 @jithunnair-amd @pruthvistony @ROCmSupport @dllehr-amd @jataylo @hongxiayang @naromero77amd
| true
|
2,932,374,513
|
Pin auditwheel to 6.2.0
|
pytorchbot
|
closed
|
[
"open source"
] | 1
|
COLLABORATOR
|
Observing aarch64 failure in nightly:
https://github.com/pytorch/pytorch/actions/runs/13917778961/job/38943911228
Similar to: https://github.com/pytorch/vision/pull/8982
```
2025-03-18T08:44:58.4128744Z Repairing Wheel with AuditWheel
2025-03-18T08:44:58.5440988Z INFO:auditwheel.main_repair:Repairing torch-2.8.0.dev20250318+cpu-cp39-cp39-linux_aarch64.whl
2025-03-18T08:45:20.3393288Z Traceback (most recent call last):
2025-03-18T08:45:20.3393732Z File "/opt/python/cp39-cp39/bin/auditwheel", line 8, in <module>
2025-03-18T08:45:20.3394115Z sys.exit(main())
2025-03-18T08:45:20.3394559Z File "/opt/_internal/cpython-3.9.21/lib/python3.9/site-packages/auditwheel/main.py", line 53, in main
2025-03-18T08:45:20.3395064Z result: int | None = args.func(args, p)
2025-03-18T08:45:20.3395626Z File "/opt/_internal/cpython-3.9.21/lib/python3.9/site-packages/auditwheel/main_repair.py", line 203, in execute
2025-03-18T08:45:20.3396163Z out_wheel = repair_wheel(
2025-03-18T08:45:20.3396657Z File "/opt/_internal/cpython-3.9.21/lib/python3.9/site-packages/auditwheel/repair.py", line 84, in repair_wheel
2025-03-18T08:45:20.3397184Z raise ValueError(msg)
2025-03-18T08:45:20.3397620Z ValueError: Cannot repair wheel, because required library "libarm_compute.so" could not be located
2025-03-18T08:45:20.3678843Z Traceback (most recent call last):
2025-03-18T08:45:20.3679267Z File "/pytorch/.ci/aarch64_linux/aarch64_wheel_ci_build.py", line 236, in <module>
2025-03-18T08:45:20.3680988Z pytorch_wheel_name = complete_wheel("/pytorch/")
2025-03-18T08:45:20.3681449Z File "/pytorch/.ci/aarch64_linux/aarch64_wheel_ci_build.py", line 141, in complete_wheel
2025-03-18T08:45:20.3681976Z check_call(["auditwheel", "repair", f"dist/{wheel_name}"], cwd=folder)
2025-03-18T08:45:20.3682860Z File "/opt/python/cp39-cp39/lib/python3.9/subprocess.py", line 373, in check_call
2025-03-18T08:45:20.3683308Z raise CalledProcessError(retcode, cmd)
2025-03-18T08:45:20.3684034Z subprocess.CalledProcessError: Command '['auditwheel', 'repair', 'dist/torch-2.8.0.dev20250318+cpu-cp39-cp39-linux_aarch64.whl']' returned non-zero exit status 1.
2025-03-18T08:45:20.3790063Z ##[error]Process completed with exit code 1.
2025-03-18T08:45:20.3862012Z ##[group]Run pytorch/test-infra/.github/actions/teardown-linux@main
2025-03-18T08:45:20.3862448Z with:
```
Please note aarch64 CUDA failures are related to: https://github.com/pytorch/pytorch/pull/149351
| true
|
2,932,293,420
|
[codemod] Fix clang-tidy command line doc comments
|
scramsby
|
closed
|
[
"oncall: jit",
"fb-exported",
"Merged",
"ciflow/trunk",
"topic: not user facing"
] | 4
|
CONTRIBUTOR
|
Summary:
Fixes the comments to match the latest updates to the checked-in tools.
Search/replace applied in this order:
* `# /fbsource/tools/lint/clangtidy/clang-tidy-platform010 -list-checks` -> `# ~/fbsource/tools/lint/clangtidy/clang-tidy-platform010-clang-17 -list-checks`
* `# ~/fbsource/tools/lint/clangtidy/clang-tidy-platform010 -list-checks` -> `# ~/fbsource/tools/lint/clangtidy/clang-tidy-platform010-clang-17 -list-checks`
* `fbsource/tools/lint/clangtidy/clang-tidy-platform010 -list-checks` -> `fbsource/tools/lint/clangtidy/clang-tidy-platform010-clang-17 -list-checks`
Test Plan: CI
Reviewed By: johnkearney
Differential Revision: D71431516
cc @EikanWang @jgong5 @wenzhe-nrv @sanchitintel
| true
|
2,932,279,253
|
DISABLED test_binary_op_with_scalar_self_support__foreach_pow_is_fastpath_True_cuda_float64 (__main__.TestForeachCUDA)
|
pytorch-bot[bot]
|
open
|
[
"triaged",
"module: flaky-tests",
"skipped",
"module: mta"
] | 5
|
NONE
|
Platforms: linux, slow
This test was disabled because it is failing in CI. See [recent examples](https://hud.pytorch.org/flakytest?name=test_binary_op_with_scalar_self_support__foreach_pow_is_fastpath_True_cuda_float64&suite=TestForeachCUDA&limit=100) and the most recent trunk [workflow logs](https://github.com/pytorch/pytorch/runs/39026440392).
Over the past 3 hours, it has been determined flaky in 3 workflow(s) with 6 failures and 3 successes.
**Debugging instructions (after clicking on the recent samples link):**
DO NOT ASSUME THINGS ARE OKAY IF THE CI IS GREEN. We now shield flaky tests from developers so CI will thus be green but it will be harder to parse the logs.
To find relevant log snippets:
1. Click on the workflow logs linked above
2. Click on the Test step of the job so that it is expanded. Otherwise, the grepping will not work.
3. Grep for `test_binary_op_with_scalar_self_support__foreach_pow_is_fastpath_True_cuda_float64`
4. There should be several instances run (as flaky tests are rerun in CI) from which you can study the logs.
<details><summary>Sample error message</summary>
```
Traceback (most recent call last):
File "/var/lib/jenkins/workspace/test/test_foreach.py", line 327, in test_binary_op_with_scalar_self_support
self._binary_test(
File "/var/lib/jenkins/workspace/test/test_foreach.py", line 263, in _binary_test
actual = op(inputs, self.is_cuda, is_fastpath)
File "/var/lib/jenkins/workspace/test/test_foreach.py", line 90, in __call__
assert mta_called == (expect_fastpath and (not zero_size)), (
AssertionError: mta_called=False, expect_fastpath=True, zero_size=False, self.func.__name__='_foreach_pow', keys=('aten::_foreach_pow', 'Unrecognized', 'aten::empty_strided', 'cudaLaunchKernel', 'Lazy Function Loading', 'cudaDeviceSynchronize')
To execute this test, run the following from the base repo dir:
PYTORCH_TEST_CUDA_MEM_LEAK_CHECK=1 PYTORCH_TEST_WITH_SLOW_GRADCHECK=1 python test/test_foreach.py TestForeachCUDA.test_binary_op_with_scalar_self_support__foreach_pow_is_fastpath_True_cuda_float64
This message can be suppressed by setting PYTORCH_PRINT_REPRO_ON_FAILURE=0
```
</details>
Test file path: `test_foreach.py`
cc @clee2000 @crcrpar @mcarilli @janeyx99
| true
|
2,932,279,096
|
DISABLED test_binary_op_with_scalar_self_support__foreach_pow_is_fastpath_True_cuda_float16 (__main__.TestForeachCUDA)
|
pytorch-bot[bot]
|
open
|
[
"triaged",
"module: flaky-tests",
"skipped",
"module: mta"
] | 5
|
NONE
|
Platforms: linux, slow
This test was disabled because it is failing in CI. See [recent examples](https://hud.pytorch.org/flakytest?name=test_binary_op_with_scalar_self_support__foreach_pow_is_fastpath_True_cuda_float16&suite=TestForeachCUDA&limit=100) and the most recent trunk [workflow logs](https://github.com/pytorch/pytorch/runs/39026442978).
Over the past 3 hours, it has been determined flaky in 3 workflow(s) with 6 failures and 3 successes.
**Debugging instructions (after clicking on the recent samples link):**
DO NOT ASSUME THINGS ARE OKAY IF THE CI IS GREEN. We now shield flaky tests from developers so CI will thus be green but it will be harder to parse the logs.
To find relevant log snippets:
1. Click on the workflow logs linked above
2. Click on the Test step of the job so that it is expanded. Otherwise, the grepping will not work.
3. Grep for `test_binary_op_with_scalar_self_support__foreach_pow_is_fastpath_True_cuda_float16`
4. There should be several instances run (as flaky tests are rerun in CI) from which you can study the logs.
<details><summary>Sample error message</summary>
```
Traceback (most recent call last):
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_device_type.py", line 1159, in test_wrapper
return test(*args, **kwargs)
File "/opt/conda/envs/py_3.10/lib/python3.10/unittest/mock.py", line 1833, in _inner
return f(*args, **kw)
File "/var/lib/jenkins/workspace/test/test_foreach.py", line 327, in test_binary_op_with_scalar_self_support
self._binary_test(
File "/var/lib/jenkins/workspace/test/test_foreach.py", line 263, in _binary_test
actual = op(inputs, self.is_cuda, is_fastpath)
File "/var/lib/jenkins/workspace/test/test_foreach.py", line 90, in __call__
assert mta_called == (expect_fastpath and (not zero_size)), (
AssertionError: mta_called=False, expect_fastpath=True, zero_size=False, self.func.__name__='_foreach_pow', keys=('aten::_foreach_pow', 'Unrecognized', 'aten::empty_strided', 'cudaLaunchKernel', 'Lazy Function Loading', 'cudaDeviceSynchronize')
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 3153, in wrapper
method(*args, **kwargs)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 3153, in wrapper
method(*args, **kwargs)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 3153, in wrapper
method(*args, **kwargs)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_device_type.py", line 454, in instantiated_test
result = test(self, **param_kwargs)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 1612, in wrapper
fn(*args, **kwargs)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_device_type.py", line 1171, in test_wrapper
raise e_tracked from e
Exception: Caused by sample input at index 1: SampleInput(input=TensorList[Tensor[size=(20, 20), device="cuda:0", dtype=torch.float16], Tensor[size=(19, 19), device="cuda:0", dtype=torch.float16], Tensor[size=(18, 18), device="cuda:0", dtype=torch.float16], Tensor[size=(17, 17), device="cuda:0", dtype=torch.float16], Tensor[size=(16, 16), device="cuda:0", dtype=torch.float16], Tensor[size=(15, 15), device="cuda:0", dtype=torch.float16], Tensor[size=(14, 14), device="cuda:0", dtype=torch.float16], Tensor[size=(13, 13), device="cuda:0", dtype=torch.float16], Tensor[size=(12, 12), device="cuda:0", dtype=torch.float16], Tensor[size=(11, 11), device="cuda:0", dtype=torch.float16], Tensor[size=(10, 10), device="cuda:0", dtype=torch.float16], Tensor[size=(9, 9), device="cuda:0", dtype=torch.float16], Tensor[size=(8, 8), device="cuda:0", dtype=torch.float16], Tensor[size=(7, 7), device="cuda:0", dtype=torch.float16], Tensor[size=(6, 6), device="cuda:0", dtype=torch.float16], Tensor[size=(5, 5), device="cuda:0", dtype=torch.float16], Tensor[size=(4, 4), device="cuda:0", dtype=torch.float16], Tensor[size=(3, 3), device="cuda:0", dtype=torch.float16], Tensor[size=(2, 2), device="cuda:0", dtype=torch.float16], Tensor[size=(1, 1), device="cuda:0", dtype=torch.float16]], args=(3), kwargs={}, broadcasts_input=False, name='')
To execute this test, run the following from the base repo dir:
PYTORCH_OPINFO_SAMPLE_INPUT_INDEX=1 PYTORCH_TEST_CUDA_MEM_LEAK_CHECK=1 PYTORCH_TEST_WITH_SLOW_GRADCHECK=1 python test/test_foreach.py TestForeachCUDA.test_binary_op_with_scalar_self_support__foreach_pow_is_fastpath_True_cuda_float16
This message can be suppressed by setting PYTORCH_PRINT_REPRO_ON_FAILURE=0
```
</details>
Test file path: `test_foreach.py`
cc @clee2000 @crcrpar @mcarilli @janeyx99
| true
|
2,932,278,773
|
DISABLED test_lazy_module4 (__main__.NNModuleTests)
|
pytorch-bot[bot]
|
closed
|
[
"triaged",
"module: flaky-tests",
"skipped",
"oncall: pt2",
"module: dynamo"
] | 2
|
NONE
|
Platforms: linux, rocm
This test was disabled because it is failing in CI. See [recent examples](https://hud.pytorch.org/flakytest?name=test_lazy_module4&suite=NNModuleTests&limit=100) and the most recent trunk [workflow logs](https://github.com/pytorch/pytorch/runs/39028613778).
Over the past 3 hours, it has been determined flaky in 3 workflow(s) with 5 failures and 3 successes.
**Debugging instructions (after clicking on the recent samples link):**
DO NOT ASSUME THINGS ARE OKAY IF THE CI IS GREEN. We now shield flaky tests from developers so CI will thus be green but it will be harder to parse the logs.
To find relevant log snippets:
1. Click on the workflow logs linked above
2. Click on the Test step of the job so that it is expanded. Otherwise, the grepping will not work.
3. Grep for `test_lazy_module4`
4. There should be several instances run (as flaky tests are rerun in CI) from which you can study the logs.
<details><summary>Sample error message</summary>
```
Traceback (most recent call last):
File "/var/lib/jenkins/workspace/test/dynamo/test_modules.py", line 1601, in test_lazy_module4
self.assertTrue(torch.allclose(ref, res))
File "/opt/conda/envs/py_3.9/lib/python3.9/unittest/case.py", line 688, in assertTrue
raise self.failureException(msg)
AssertionError: False is not true
To execute this test, run the following from the base repo dir:
python test/dynamo/test_modules.py NNModuleTests.test_lazy_module4
This message can be suppressed by setting PYTORCH_PRINT_REPRO_ON_FAILURE=0
```
</details>
Test file path: `dynamo/test_modules.py`
cc @clee2000 @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @chauhang @amjames
| true
|
2,932,278,665
|
DISABLED test_lazy_module2 (__main__.NNModuleTests)
|
pytorch-bot[bot]
|
closed
|
[
"triaged",
"module: flaky-tests",
"skipped",
"oncall: pt2",
"module: dynamo"
] | 3
|
NONE
|
Platforms: linux, rocm
This test was disabled because it is failing in CI. See [recent examples](https://hud.pytorch.org/flakytest?name=test_lazy_module2&suite=NNModuleTests&limit=100) and the most recent trunk [workflow logs](https://github.com/pytorch/pytorch/runs/39036676376).
Over the past 3 hours, it has been determined flaky in 3 workflow(s) with 3 failures and 3 successes.
**Debugging instructions (after clicking on the recent samples link):**
DO NOT ASSUME THINGS ARE OKAY IF THE CI IS GREEN. We now shield flaky tests from developers so CI will thus be green but it will be harder to parse the logs.
To find relevant log snippets:
1. Click on the workflow logs linked above
2. Click on the Test step of the job so that it is expanded. Otherwise, the grepping will not work.
3. Grep for `test_lazy_module2`
4. There should be several instances run (as flaky tests are rerun in CI) from which you can study the logs.
<details><summary>Sample error message</summary>
```
Traceback (most recent call last):
File "/var/lib/jenkins/pytorch/test/dynamo/test_modules.py", line 1589, in test_lazy_module2
self.assertTrue(torch.allclose(ref, res))
File "/opt/conda/envs/py_3.10/lib/python3.10/unittest/case.py", line 687, in assertTrue
raise self.failureException(msg)
AssertionError: False is not true
To execute this test, run the following from the base repo dir:
PYTORCH_TEST_WITH_ROCM=1 python test/dynamo/test_modules.py NNModuleTests.test_lazy_module2
This message can be suppressed by setting PYTORCH_PRINT_REPRO_ON_FAILURE=0
```
</details>
Test file path: `dynamo/test_modules.py`
cc @clee2000 @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @chauhang @amjames
| true
|
2,932,278,467
|
DISABLED test_lazy_module3_cuda (__main__.NNModuleTestsDeviceCUDA)
|
pytorch-bot[bot]
|
closed
|
[
"triaged",
"module: flaky-tests",
"skipped",
"oncall: pt2",
"module: dynamo"
] | 3
|
NONE
|
Platforms: linux, rocm
This test was disabled because it is failing in CI. See [recent examples](https://hud.pytorch.org/flakytest?name=test_lazy_module3_cuda&suite=NNModuleTestsDeviceCUDA&limit=100) and the most recent trunk [workflow logs](https://github.com/pytorch/pytorch/runs/39036676376).
Over the past 3 hours, it has been determined flaky in 3 workflow(s) with 3 failures and 3 successes.
**Debugging instructions (after clicking on the recent samples link):**
DO NOT ASSUME THINGS ARE OKAY IF THE CI IS GREEN. We now shield flaky tests from developers so CI will thus be green but it will be harder to parse the logs.
To find relevant log snippets:
1. Click on the workflow logs linked above
2. Click on the Test step of the job so that it is expanded. Otherwise, the grepping will not work.
3. Grep for `test_lazy_module3_cuda`
4. There should be several instances run (as flaky tests are rerun in CI) from which you can study the logs.
<details><summary>Sample error message</summary>
```
Traceback (most recent call last):
File "/var/lib/jenkins/pytorch/test/dynamo/test_modules.py", line 1797, in test_lazy_module3
self.assertTrue(torch.allclose(ref, res))
File "/opt/conda/envs/py_3.10/lib/python3.10/unittest/case.py", line 687, in assertTrue
raise self.failureException(msg)
AssertionError: False is not true
To execute this test, run the following from the base repo dir:
PYTORCH_TEST_WITH_ROCM=1 python test/dynamo/test_modules.py NNModuleTestsDeviceCUDA.test_lazy_module3_cuda
This message can be suppressed by setting PYTORCH_PRINT_REPRO_ON_FAILURE=0
```
</details>
Test file path: `dynamo/test_modules.py`
cc @clee2000 @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @chauhang @amjames
| true
|
2,932,201,749
|
Try to enforce signature ordering
|
tugsbayasgalan
|
open
|
[
"ciflow/inductor",
"release notes: export"
] | 1
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #149518
| true
|
2,932,112,603
|
Add release branch push triggers to rocm-mi300.yml
|
jithunnair-amd
|
closed
|
[
"module: rocm",
"open source",
"Merged",
"topic: not user facing",
"ciflow/rocm"
] | 8
|
COLLABORATOR
|
When we added the rocm-mi300.yml earlier this year, we had lower capacity and we were just pipecleaning the workflow, so we set the trigger to only respond to pushes to main branch. But now we have more stability as well as capacity, and we would really like to ensure that the release branch is being tested on MI300s as well.
cc @jeffdaily @sunway513 @pruthvistony @ROCmSupport @dllehr-amd @jataylo @hongxiayang @naromero77amd
| true
|
2,932,103,964
|
```StateDictOptions``` in combination with ```cpu_offload=True``` and ```strict=False``` not working
|
psinger
|
open
|
[
"oncall: distributed",
"triaged"
] | 2
|
NONE
|
### 🐛 Describe the bug
When running the following for distributed weight loading:
```
options = StateDictOptions(
full_state_dict=True,
broadcast_from_rank0=True,
strict=False,
cpu_offload=True,
)
set_model_state_dict(model=model, model_state_dict=weights, options=options)
```
I am getting a `KeyError` for keys that are not in the model.
I believe it has to do with `strict` not being checked at this point:
https://github.com/pytorch/pytorch/blob/main/torch/distributed/_state_dict_utils.py#L656
The check only appears to happen afterwards.
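For context, here is a hedged single-process repro sketch (illustrative names and setup, not the reporter's code; assumes gloo is available, since `broadcast_from_rank0` needs an initialized process group):
```python
import os
import torch
import torch.distributed as dist
from torch.distributed.checkpoint.state_dict import StateDictOptions, set_model_state_dict

os.environ.setdefault("MASTER_ADDR", "127.0.0.1")
os.environ.setdefault("MASTER_PORT", "29500")
dist.init_process_group("gloo", rank=0, world_size=1)

model = torch.nn.Linear(4, 4)
# Full state dict plus one key the model does not have; with strict=False the
# extra key should be ignored, but per the report it raises KeyError instead.
weights = {**model.state_dict(), "not_in_model.weight": torch.zeros(1)}

options = StateDictOptions(
    full_state_dict=True,
    broadcast_from_rank0=True,
    strict=False,
    cpu_offload=True,
)
set_model_state_dict(model=model, model_state_dict=weights, options=options)
dist.destroy_process_group()
```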
### Versions
Current main
cc @H-Huang @awgu @kwen2501 @wanchaol @fegin @fduwjj @wz337 @wconstab @d4l3k @c-p-i-o @LucasLLC @pradeepfn
| true
|
2,932,099,772
|
[Inductor Cutlass backend] Fix imports and compilation of Cutlass SM100 Kernels
|
kadeng
|
closed
|
[
"fb-exported",
"Merged",
"ciflow/trunk",
"topic: not user facing",
"module: inductor",
"ciflow/inductor"
] | 6
|
CONTRIBUTOR
|
Summary: Fixes the import and compilation of Cutlass SM100 Kernels.
Test Plan: Cutlass backend unit tests, running benchmarks/inductor_backends/cutlass.py
Differential Revision: D71196747
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @chenyang78 @muchulee8 @amjames @chauhang @aakhundov
| true
|
2,932,088,373
|
DTensor: more generically support CompositeImplicitAutograd ops under inference mode
|
bdhirsh
|
closed
|
[
"oncall: distributed",
"Merged",
"ciflow/inductor",
"release notes: distributed (dtensor)"
] | 3
|
CONTRIBUTOR
|
Today, if you run DTensor (or any tensor subclass) under inference_mode, you will start seeing `CompositeImplicitAutograd` ops show up in `__torch_dispatch__`.
"Handling" these ops is trivial: you can just tell them to decompose into their constituent ops. Normally this decomposing happens in autograd, above DTensor, but inference_mode turns autograd off, forcing the subclass to handle the op directly.
It looks like previously we manually added a few CompositeImplicitAutograd entries to DTensor (e.g. linear), but this PR tries to support these ops a bit more generically.
The main difference is that DTensor now needs to check if a given op is `CompositeImplicitAutograd` before attempting to run sharding prop. I ran a quick microbenchmark for the below code with `timeit`, which gave me overhead on the order of ~1us, which is hopefully not too bad for eager mode:
```
def fast_function():
return torch._C._dispatch_has_kernel_for_dispatch_key(op_call.name(), torch._C.DispatchKey.CompositeImplicitAutograd)
import timeit
time_taken = timeit.timeit(fast_function, number=1000)
# printed 0.12..., aka 1.2us
print(f'func={str(op_call)}, time={str(time_taken)}')
```
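As a rough illustration, the "just decompose it" handling looks roughly like the generic dispatch-mode sketch below (an assumption-laden sketch, not the DTensor implementation): check whether the incoming op has a `CompositeImplicitAutograd` kernel and, if so, decompose it instead of handling it directly.
```python
import torch
from torch.utils._python_dispatch import TorchDispatchMode

class DecomposeComposite(TorchDispatchMode):
    def __torch_dispatch__(self, func, types, args=(), kwargs=None):
        kwargs = kwargs or {}
        if torch._C._dispatch_has_kernel_for_dispatch_key(
            func.name(), torch._C.DispatchKey.CompositeImplicitAutograd
        ):
            out = func.decompose(*args, **kwargs)  # run the composite kernel
            if out is not NotImplemented:
                return out
        return func(*args, **kwargs)

with torch.inference_mode(), DecomposeComposite():
    # linear is CompositeImplicitAutograd; under inference_mode it reaches
    # __torch_dispatch__ directly and gets decomposed into its constituent ops.
    x, w = torch.randn(2, 3), torch.randn(4, 3)
    print(torch.nn.functional.linear(x, w).shape)
```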
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #149411
* #149652
* __->__ #149514
cc @H-Huang @awgu @kwen2501 @wanchaol @fegin @fduwjj @wz337 @wconstab @d4l3k @c-p-i-o
| true
|
2,932,052,552
|
[ROCm] Enable more inductor UTs
|
jataylo
|
closed
|
[
"module: rocm",
"open source",
"Merged",
"ciflow/trunk",
"topic: not user facing",
"module: inductor",
"module: dynamo",
"ciflow/inductor",
"ciflow/rocm",
"ciflow/inductor-rocm",
"ciflow/inductor-periodic",
"ciflow/rocm-mi300"
] | 7
|
COLLABORATOR
|
Primarily enable inductor fp8 tests; also enable other inductor tests.
cc @jeffdaily @sunway513 @jithunnair-amd @pruthvistony @ROCmSupport @dllehr-amd @hongxiayang @naromero77amd @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
| true
|
2,932,034,355
|
Added _fused_sdp_choice_stub dispatcher support for HPU device
|
pralay-das
|
closed
|
[
"triaged",
"open source",
"Merged",
"ciflow/trunk",
"topic: not user facing",
"module: sdpa"
] | 9
|
CONTRIBUTOR
|
Currently, the HPU device has no support for the `_fused_sdp_choice_stub` dispatcher function, so `scaled_dot_product_attention` selects the `MATH` backend by default via `_fused_sdp_choice_stub` on HPU. With this PR we have enabled support for the `_fused_sdp_choice_stub` dispatcher function, so that any backend (for example math, flash_attention, efficient_attention, cudnn_attention, overrideable) can be selected according to the user's choice on the HPU device.
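For reference, a small generic sketch (not HPU-specific) of how a caller can steer `scaled_dot_product_attention` toward a particular backend; without such a hint, the default backend is the kind of per-device choice `_fused_sdp_choice_stub` makes:
```python
import torch
from torch.nn.attention import SDPBackend, sdpa_kernel

q = torch.randn(2, 4, 128, 64)
k = torch.randn(2, 4, 128, 64)
v = torch.randn(2, 4, 128, 64)

# Restrict SDPA to the math backend inside this context.
with sdpa_kernel(SDPBackend.MATH):
    out = torch.nn.functional.scaled_dot_product_attention(q, k, v)
print(out.shape)  # torch.Size([2, 4, 128, 64])
```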
| true
|
2,932,016,919
|
[XPU] Update triton commit to fix level_zero not found by env var LEVEL_ZERO_V1_SDK_PATH.
|
etaf
|
closed
|
[
"open source",
"Merged",
"ciflow/trunk",
"topic: not user facing",
"ciflow/inductor",
"ciflow/xpu"
] | 5
|
COLLABORATOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #149511
| true
|
2,932,015,883
|
Fix with effect lowering for list return type
|
yushangdi
|
closed
|
[
"fb-exported",
"Merged",
"ciflow/trunk",
"topic: not user facing",
"module: inductor",
"ciflow/inductor"
] | 4
|
CONTRIBUTOR
|
Summary: For `torch.ops.higher_order.with_effects` lowering, we should not extract the items out of a list (i.e. `*result` vs `result`). The `get_attr` nodes consider the result to be in list format.
Test Plan:
```
buck run fbcode//mode/dev-nosan //caffe2/test/inductor:torchbind -- -r test_torchbind_aot_compile
buck run fbcode//mode/dev-nosan //caffe2/test/inductor:torchbind -- -r list_return
buck run //caffe2/torch/fb/sparsenn:sigrid_test -- -r test_transform_torch_bind # tested together with D70013257
buck run fbcode//mode/dev-nosan //caffe2/test:test_export -- -r test_custom_obj
```
Reviewed By: angelayi
Differential Revision: D71346024
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
| true
|
2,931,738,760
|
`torch.compile` has a graph break when one of the `out_dims` of `torch.vmap` is set to `None`
|
sses7757
|
closed
|
[
"triaged",
"oncall: pt2",
"module: dynamo",
"dynamo-triage-jan2025"
] | 2
|
NONE
|
### 🐛 Describe the bug
I want to `torch.compile` a vmapped function (`torch.vmap(..., in_dims=(None, 0), out_dims=(None, 0))`) with the default "inductor" backend and `fullgraph=True`; however, it fails due to a graph break caused by the `torch._C._functorch.is_batchedtensor` function, which is invoked by `torch.vmap`.
This problem seems to be caused by setting one of the `out_dims` to `None`, since the `is_batchedtensor` function is not invoked otherwise.
I have searched [the existing and past issues](https://github.com/pytorch/pytorch/issues); however, I failed to find any related to `torch.compile` and `is_batchedtensor`/`out_dims`.
## Minimal reproducer
```
import torch
def test(x: torch.Tensor, y: torch.Tensor):
return x, y * 2
vmap_test = torch.vmap(test, in_dims=(None, 0), out_dims=(None, 0))
compiled_vmap_test = torch.compile(vmap_test, fullgraph=True)
print(compiled_vmap_test(torch.rand(3), torch.rand(3, 4)))
```
## Ablation
I have tried all of the ablations in https://pytorch.org/docs/main/torch.compiler_troubleshooting.html#reporting-issues. However, I got the same error as long as `fullgraph=True`.
### Error logs
```
Traceback (most recent call last):
File "c:\Users\admin\Documents\python_tests\unit_test\problems\test.py", line 8, in <module>
print(compiled_vmap_test(torch.rand(3), torch.rand(3, 4)))
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\admin\Documents\python_tests\.venv\Lib\site-packages\torch\_dynamo\eval_frame.py", line 574, in _fn
return fn(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^
File "C:\Users\admin\Documents\python_tests\.venv\Lib\site-packages\torch\_dynamo\convert_frame.py", line 1380, in __call__
return self._torchdynamo_orig_callable(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\admin\Documents\python_tests\.venv\Lib\site-packages\torch\_dynamo\convert_frame.py", line 547, in __call__
return _compile(
^^^^^^^^^
File "C:\Users\admin\Documents\python_tests\.venv\Lib\site-packages\torch\_dynamo\convert_frame.py", line 986, in _compile
guarded_code = compile_inner(code, one_graph, hooks, transform)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\admin\Documents\python_tests\.venv\Lib\site-packages\torch\_dynamo\convert_frame.py", line 715, in compile_inner
return _compile_inner(code, one_graph, hooks, transform)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\admin\Documents\python_tests\.venv\Lib\site-packages\torch\_utils_internal.py", line 95, in wrapper_function
return function(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\admin\Documents\python_tests\.venv\Lib\site-packages\torch\_dynamo\convert_frame.py", line 750, in _compile_inner
out_code = transform_code_object(code, transform)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\admin\Documents\python_tests\.venv\Lib\site-packages\torch\_dynamo\bytecode_transformation.py", line 1361, in transform_code_object
transformations(instructions, code_options)
File "C:\Users\admin\Documents\python_tests\.venv\Lib\site-packages\torch\_dynamo\convert_frame.py", line 231, in _fn
return fn(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^
File "C:\Users\admin\Documents\python_tests\.venv\Lib\site-packages\torch\_dynamo\convert_frame.py", line 662, in transform
tracer.run()
File "C:\Users\admin\Documents\python_tests\.venv\Lib\site-packages\torch\_dynamo\symbolic_convert.py", line 2868, in run
super().run()
File "C:\Users\admin\Documents\python_tests\.venv\Lib\site-packages\torch\_dynamo\symbolic_convert.py", line 1052, in run
while self.step():
^^^^^^^^^^^
File "C:\Users\admin\Documents\python_tests\.venv\Lib\site-packages\torch\_dynamo\symbolic_convert.py", line 962, in step
self.dispatch_table[inst.opcode](self, inst)
File "C:\Users\admin\Documents\python_tests\.venv\Lib\site-packages\torch\_dynamo\symbolic_convert.py", line 659, in wrapper
return inner_fn(self, inst)
^^^^^^^^^^^^^^^^^^^^
File "C:\Users\admin\Documents\python_tests\.venv\Lib\site-packages\torch\_dynamo\symbolic_convert.py", line 1736, in CALL_FUNCTION_EX
self.call_function(fn, argsvars.items, kwargsvars)
File "C:\Users\admin\Documents\python_tests\.venv\Lib\site-packages\torch\_dynamo\symbolic_convert.py", line 897, in call_function
self.push(fn.call_function(self, args, kwargs)) # type: ignore[arg-type]
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\admin\Documents\python_tests\.venv\Lib\site-packages\torch\_dynamo\variables\higher_order_ops.py", line 1598, in call_function
return super().call_function(tx, args, kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\admin\Documents\python_tests\.venv\Lib\site-packages\torch\_dynamo\variables\functions.py", line 317, in call_function
return super().call_function(tx, args, kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\admin\Documents\python_tests\.venv\Lib\site-packages\torch\_dynamo\variables\functions.py", line 118, in call_function
return tx.inline_user_function_return(self, [*self.self_args(), *args], kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\admin\Documents\python_tests\.venv\Lib\site-packages\torch\_dynamo\symbolic_convert.py", line 903, in inline_user_function_return
return InliningInstructionTranslator.inline_call(self, fn, args, kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\admin\Documents\python_tests\.venv\Lib\site-packages\torch\_dynamo\symbolic_convert.py", line 3072, in inline_call
return cls.inline_call_(parent, func, args, kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\admin\Documents\python_tests\.venv\Lib\site-packages\torch\_dynamo\symbolic_convert.py", line 3198, in inline_call_
tracer.run()
File "C:\Users\admin\Documents\python_tests\.venv\Lib\site-packages\torch\_dynamo\symbolic_convert.py", line 1052, in run
while self.step():
^^^^^^^^^^^
File "C:\Users\admin\Documents\python_tests\.venv\Lib\site-packages\torch\_dynamo\symbolic_convert.py", line 962, in step
self.dispatch_table[inst.opcode](self, inst)
File "C:\Users\admin\Documents\python_tests\.venv\Lib\site-packages\torch\_dynamo\symbolic_convert.py", line 659, in wrapper
return inner_fn(self, inst)
^^^^^^^^^^^^^^^^^^^^
File "C:\Users\admin\Documents\python_tests\.venv\Lib\site-packages\torch\_dynamo\symbolic_convert.py", line 1736, in CALL_FUNCTION_EX
self.call_function(fn, argsvars.items, kwargsvars)
File "C:\Users\admin\Documents\python_tests\.venv\Lib\site-packages\torch\_dynamo\symbolic_convert.py", line 897, in call_function
self.push(fn.call_function(self, args, kwargs)) # type: ignore[arg-type]
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\admin\Documents\python_tests\.venv\Lib\site-packages\torch\_dynamo\variables\functions.py", line 317, in call_function
return super().call_function(tx, args, kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\admin\Documents\python_tests\.venv\Lib\site-packages\torch\_dynamo\variables\functions.py", line 118, in call_function
return tx.inline_user_function_return(self, [*self.self_args(), *args], kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\admin\Documents\python_tests\.venv\Lib\site-packages\torch\_dynamo\symbolic_convert.py", line 903, in inline_user_function_return
return InliningInstructionTranslator.inline_call(self, fn, args, kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\admin\Documents\python_tests\.venv\Lib\site-packages\torch\_dynamo\symbolic_convert.py", line 3072, in inline_call
return cls.inline_call_(parent, func, args, kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\admin\Documents\python_tests\.venv\Lib\site-packages\torch\_dynamo\symbolic_convert.py", line 3198, in inline_call_
tracer.run()
File "C:\Users\admin\Documents\python_tests\.venv\Lib\site-packages\torch\_dynamo\symbolic_convert.py", line 1052, in run
while self.step():
^^^^^^^^^^^
File "C:\Users\admin\Documents\python_tests\.venv\Lib\site-packages\torch\_dynamo\symbolic_convert.py", line 962, in step
self.dispatch_table[inst.opcode](self, inst)
File "C:\Users\admin\Documents\python_tests\.venv\Lib\site-packages\torch\_dynamo\symbolic_convert.py", line 659, in wrapper
return inner_fn(self, inst)
^^^^^^^^^^^^^^^^^^^^
File "C:\Users\admin\Documents\python_tests\.venv\Lib\site-packages\torch\_dynamo\symbolic_convert.py", line 2341, in CALL
self._call(inst)
File "C:\Users\admin\Documents\python_tests\.venv\Lib\site-packages\torch\_dynamo\symbolic_convert.py", line 2335, in _call
self.call_function(fn, args, kwargs)
File "C:\Users\admin\Documents\python_tests\.venv\Lib\site-packages\torch\_dynamo\symbolic_convert.py", line 897, in call_function
self.push(fn.call_function(self, args, kwargs)) # type: ignore[arg-type]
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\admin\Documents\python_tests\.venv\Lib\site-packages\torch\_dynamo\variables\functions.py", line 317, in call_function
return super().call_function(tx, args, kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\admin\Documents\python_tests\.venv\Lib\site-packages\torch\_dynamo\variables\functions.py", line 118, in call_function
return tx.inline_user_function_return(self, [*self.self_args(), *args], kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\admin\Documents\python_tests\.venv\Lib\site-packages\torch\_dynamo\symbolic_convert.py", line 903, in inline_user_function_return
return InliningInstructionTranslator.inline_call(self, fn, args, kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\admin\Documents\python_tests\.venv\Lib\site-packages\torch\_dynamo\symbolic_convert.py", line 3072, in inline_call
return cls.inline_call_(parent, func, args, kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\admin\Documents\python_tests\.venv\Lib\site-packages\torch\_dynamo\symbolic_convert.py", line 3198, in inline_call_
tracer.run()
File "C:\Users\admin\Documents\python_tests\.venv\Lib\site-packages\torch\_dynamo\symbolic_convert.py", line 1052, in run
while self.step():
^^^^^^^^^^^
File "C:\Users\admin\Documents\python_tests\.venv\Lib\site-packages\torch\_dynamo\symbolic_convert.py", line 962, in step
self.dispatch_table[inst.opcode](self, inst)
File "C:\Users\admin\Documents\python_tests\.venv\Lib\site-packages\torch\_dynamo\symbolic_convert.py", line 659, in wrapper
return inner_fn(self, inst)
^^^^^^^^^^^^^^^^^^^^
File "C:\Users\admin\Documents\python_tests\.venv\Lib\site-packages\torch\_dynamo\symbolic_convert.py", line 2341, in CALL
self._call(inst)
File "C:\Users\admin\Documents\python_tests\.venv\Lib\site-packages\torch\_dynamo\symbolic_convert.py", line 2335, in _call
self.call_function(fn, args, kwargs)
File "C:\Users\admin\Documents\python_tests\.venv\Lib\site-packages\torch\_dynamo\symbolic_convert.py", line 897, in call_function
self.push(fn.call_function(self, args, kwargs)) # type: ignore[arg-type]
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\admin\Documents\python_tests\.venv\Lib\site-packages\torch\_dynamo\variables\functions.py", line 317, in call_function
return super().call_function(tx, args, kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\admin\Documents\python_tests\.venv\Lib\site-packages\torch\_dynamo\variables\functions.py", line 118, in call_function
return tx.inline_user_function_return(self, [*self.self_args(), *args], kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\admin\Documents\python_tests\.venv\Lib\site-packages\torch\_dynamo\symbolic_convert.py", line 903, in inline_user_function_return
return InliningInstructionTranslator.inline_call(self, fn, args, kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\admin\Documents\python_tests\.venv\Lib\site-packages\torch\_dynamo\symbolic_convert.py", line 3072, in inline_call
return cls.inline_call_(parent, func, args, kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\admin\Documents\python_tests\.venv\Lib\site-packages\torch\_dynamo\symbolic_convert.py", line 3198, in inline_call_
tracer.run()
File "C:\Users\admin\Documents\python_tests\.venv\Lib\site-packages\torch\_dynamo\symbolic_convert.py", line 1052, in run
while self.step():
^^^^^^^^^^^
File "C:\Users\admin\Documents\python_tests\.venv\Lib\site-packages\torch\_dynamo\symbolic_convert.py", line 962, in step
self.dispatch_table[inst.opcode](self, inst)
File "C:\Users\admin\Documents\python_tests\.venv\Lib\site-packages\torch\_dynamo\symbolic_convert.py", line 659, in wrapper
return inner_fn(self, inst)
^^^^^^^^^^^^^^^^^^^^
File "C:\Users\admin\Documents\python_tests\.venv\Lib\site-packages\torch\_dynamo\symbolic_convert.py", line 2341, in CALL
self._call(inst)
File "C:\Users\admin\Documents\python_tests\.venv\Lib\site-packages\torch\_dynamo\symbolic_convert.py", line 2335, in _call
self.call_function(fn, args, kwargs)
File "C:\Users\admin\Documents\python_tests\.venv\Lib\site-packages\torch\_dynamo\symbolic_convert.py", line 897, in call_function
self.push(fn.call_function(self, args, kwargs)) # type: ignore[arg-type]
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\admin\Documents\python_tests\.venv\Lib\site-packages\torch\_dynamo\variables\torch.py", line 953, in call_function
tensor_variable = wrap_fx_proxy(
^^^^^^^^^^^^^^
File "C:\Users\admin\Documents\python_tests\.venv\Lib\site-packages\torch\_dynamo\variables\builder.py", line 2153, in wrap_fx_proxy
return wrap_fx_proxy_cls(target_cls=TensorVariable, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\admin\Documents\python_tests\.venv\Lib\site-packages\torch\_dynamo\variables\builder.py", line 2219, in wrap_fx_proxy_cls
return _wrap_fx_proxy(
^^^^^^^^^^^^^^^
File "C:\Users\admin\Documents\python_tests\.venv\Lib\site-packages\torch\_dynamo\variables\builder.py", line 2317, in _wrap_fx_proxy
return handle_traced_output(
^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\admin\Documents\python_tests\.venv\Lib\site-packages\torch\_dynamo\variables\builder.py", line 2517, in handle_traced_output
unimplemented(
File "C:\Users\admin\Documents\python_tests\.venv\Lib\site-packages\torch\_dynamo\exc.py", line 317, in unimplemented
raise Unsupported(msg, case_name=case_name)
torch._dynamo.exc.Unsupported: torch.* op returned non-Tensor bool call_function <built-in method is_batchedtensor of PyCapsule object at 0x000001AAFF3C9470>
from user code:
File "C:\Users\admin\Documents\python_tests\.venv\Lib\site-packages\torch\_functorch\apis.py", line 203, in wrapped
return vmap_impl(
File "C:\Users\admin\Documents\python_tests\.venv\Lib\site-packages\torch\_functorch\vmap.py", line 331, in vmap_impl
return _flat_vmap(
File "C:\Users\admin\Documents\python_tests\.venv\Lib\site-packages\torch\_functorch\vmap.py", line 480, in _flat_vmap
return _unwrap_batched(batched_outputs, out_dims, vmap_level, batch_size, func)
File "C:\Users\admin\Documents\python_tests\.venv\Lib\site-packages\torch\_functorch\vmap.py", line 222, in _unwrap_batched
_maybe_remove_batch_dim(
File "C:\Users\admin\Documents\python_tests\.venv\Lib\site-packages\torch\_functorch\vmap.py", line 167, in _maybe_remove_batch_dim
if isinstance(batched_output, torch.Tensor) and is_batchedtensor(
Set TORCH_LOGS="+dynamo" and TORCHDYNAMO_VERBOSE=1 for more information
You can suppress this exception and fall back to eager by setting:
import torch._dynamo
torch._dynamo.config.suppress_errors = True
```
### Versions
PyTorch version: 2.6.0+cu126
Is debug build: False
CUDA used to build PyTorch: 12.6
ROCM used to build PyTorch: N/A
OS: Microsoft Windows 11 Home Chinese Edition (10.0.26100, 64-bit)
GCC version: Could not collect
Clang version: Could not collect
CMake version: Could not collect
Libc version: N/A
Python version: 3.12.7 (tags/v3.12.7:0b05ead, Oct 1 2024, 03:06:41) [MSC v.1941 64 bit (AMD64)] (64-bit runtime)
Python platform: Windows-11-10.0.26100-SP0
Is CUDA available: True
CUDA runtime version: 12.6.68
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration: GPU 0: NVIDIA GeForce RTX 4090
Nvidia driver version: 566.36
cuDNN version: Could not collect
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Name: 13th Gen Intel(R) Core(TM) i9-13900K
Manufacturer: GenuineIntel
Family: 207
Architecture: 9
ProcessorType: 3
DeviceID: CPU0
CurrentClockSpeed: 3000
MaxClockSpeed: 3000
L2CacheSize: 32768
L2CacheSpeed: None
Revision: None
Versions of relevant libraries:
[pip3] numpy==2.1.2
[pip3] torch==2.6.0+cu126
[pip3] torchaudio==2.6.0+cu126
[pip3] torchvision==0.21.0+cu126
[pip3] triton==3.2.0
[conda] Could not collect
cc @chauhang @penguinwu @voznesenskym @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @amjames
| true
|
2,931,635,300
|
Adam optimizer ValueError: beta1 as a Tensor
|
Vetti420
|
open
|
[
"needs reproduction",
"module: optimizer",
"triaged"
] | 9
|
NONE
|
### 🐛 Describe the bug
I got this error if I set capturable=True:
ValueError: beta1 as a Tensor is not supported for capturable=False and foreach=True
But it worked with this:
config.optimizer =
{
"foreach": False,
"capturable": False
}
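A hedged minimal sketch (constructed from the error message above, not the reporter's code) of passing the betas as Tensors to Adam with `foreach=True`:
```python
import torch

model = torch.nn.Linear(4, 4)

# Per the report, Tensor betas together with foreach=True and capturable=False
# raise: ValueError: beta1 as a Tensor is not supported for capturable=False and foreach=True
opt = torch.optim.Adam(
    model.parameters(),
    betas=(torch.tensor(0.9), torch.tensor(0.999)),  # Tensor betas
    foreach=True,
    capturable=False,
)
loss = model(torch.randn(2, 4)).sum()
loss.backward()
opt.step()
```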
### Versions
v2.7.0
cc @vincentqb @jbschlosser @albanD @janeyx99 @crcrpar
| true
|
2,931,624,315
|
Switch s390x tests to blocklist
|
AlekseiNikiforovIBM
|
closed
|
[
"open source",
"Merged",
"ciflow/trunk",
"topic: not user facing",
"ciflow/s390"
] | 6
|
COLLABORATOR
|
Switch s390x tests to blocklist
| true
|
2,931,596,711
|
[ROCm] [Perf Testing] Remove num_warps restrictions on ROCm for perf
|
jataylo
|
open
|
[
"module: rocm",
"open source",
"module: inductor",
"ciflow/inductor",
"ciflow/rocm",
"ciflow/inductor-perf-test-nightly-rocm"
] | 2
|
COLLABORATOR
|
Perf testing
cc @jeffdaily @sunway513 @jithunnair-amd @pruthvistony @ROCmSupport @dllehr-amd @hongxiayang @naromero77amd @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
| true
|
2,931,573,933
|
Parallelize sort
|
annop-w
|
closed
|
[
"module: cpu",
"open source",
"Merged",
"Reverted",
"topic: not user facing",
"ciflow/inductor",
"ci-no-td"
] | 16
|
CONTRIBUTOR
|
PR #142391 erroneously used `USE_OMP` instead of `USE_OPENMP`.
cc @jgong5 @mingfeima @XiaobingSuper @sanchitintel @ashokei @jingxu10
| true
|
2,931,489,127
|
[Inductor] Adjust boundary checking of dimensions using YBLOCK
|
kundaMwiza
|
open
|
[
"triaged",
"open source",
"topic: not user facing",
"module: inductor"
] | 5
|
CONTRIBUTOR
|
Apply the same logic introduced in https://github.com/pytorch/pytorch/pull/139751 to triton kernels using block ptrs. Here, if ynumel / YBLOCK > max_y_grids, dimensions dependent on YBLOCK need to be boundary checked, even if the block shape in such dimensions is a multiple of an expression in YBLOCK. This is because ynumel / YBLOCK % get_max_y_grids() may not be zero, so redundant programs will be launched that will attempt to read / write OOB.
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
| true
|
2,931,482,682
|
fix ValueError issue
|
FlintWangacc
|
open
|
[
"triaged",
"open source",
"release notes: fx",
"fx"
] | 2
|
NONE
|
Fixes the following issue:
ValueError: code: co_varnames is too small
Fixes #149497
In `symbolic_trace`, it crashes with the following stack trace.
```shell
Traceback (most recent call last):
File "/home/hmsjwzb/work/models/QWEN/./qwen5.py", line 55, in <module>
traced_model = torch.fx.symbolic_trace(model)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/hmsjwzb/work/models/QWEN/qwen/lib/python3.11/site-packages/torch/fx/_symbolic_trace.py", line 1314, in symbolic_trace
graph = tracer.trace(root, concrete_args)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/hmsjwzb/work/models/QWEN/qwen/lib/python3.11/site-packages/torch/fx/_symbolic_trace.py", line 788, in trace
fn, args = self.create_args_for_root(
^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/hmsjwzb/work/models/QWEN/qwen/lib/python3.11/site-packages/torch/fx/_symbolic_trace.py", line 679, in create_args_for_root
root_fn = _patch_function(root_fn, len(args))
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/hmsjwzb/work/models/QWEN/qwen/lib/python3.11/site-packages/torch/fx/_symbolic_trace.py", line 184, in _patch_function
new_code = CodeType(*co_args) # type: ignore[arg-type]
^^^^^^^^^^^^^^^^^^
ValueError: code: co_varnames is too small
```
Here is the Python script that caused this crash.
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from torch_mlir import fx
model_name = "deepseek-ai/DeepSeek-R1-Distill-Qwen-1.5B"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)
prompt = "What are the benefits of using AI in healthcare?"
input_ids = tokenizer.encode(prompt, return_tensors="pt")
model.eval()
traced_model = torch.fx.symbolic_trace(model)
m = fx.export_and_import(traced_model, (input_ids,), enable_ir_printing=True,
enable_graph_printing=True)
with open("qwen1.5b_s.mlir", "w") as f:
f.write(str(m))
```
I think it is a mistake.
cc @ezyang @SherlockNoMad @EikanWang @jgong5 @wenzhe-nrv
| true
|
2,931,401,399
|
[dynamo] register_module_forward_pre_hook leads to compiled model producing wrong inference results
|
Cookiee235
|
closed
|
[
"high priority",
"triaged",
"actionable",
"module: correctness (silent)",
"oncall: pt2",
"module: dynamo",
"dynamo-triage-jan2025",
"ubn"
] | 5
|
CONTRIBUTOR
|
### 🐛 Describe the bug
Given the same inputs, the inference results of the compiled model were not equivalent to those of the original model before/after the execution of `register_module_forward_pre_hook(pre_hook)`.
Such results are bizarre!
```python
import torch
model = torch.nn.Sequential(
torch.nn.Linear(10, 5),
torch.nn.ReLU(),
torch.nn.Linear(5, 2)
)
inputs = torch.arange(10, dtype=torch.float32).unsqueeze(0)
res1 = model(inputs)
print(f"original inference results: {res1}")
def pre_hook(module, input):
modified_input = input[0] + 1.0
return (modified_input,)
handle = torch.nn.modules.module.register_module_forward_pre_hook(pre_hook)
res2 = model(inputs)
print(f"inference results after hook: {res2}")
#handle.remove()
compiled_model = torch.compile(model, backend='inductor')
with torch.no_grad():
compiled_out = compiled_model(inputs)
print(f"inference results with compiled model {compiled_out}")
torch.testing.assert_close(res2, compiled_out)
```
### Outputs
```
original inference results: tensor([[-0.8701, 0.1359]], grad_fn=<AddmmBackward0>)
inference results after hook: tensor([[-1.4718, 0.5898]], grad_fn=<AddmmBackward0>)
inference results with compiled model tensor([[-1.4539, 0.4481]])
Traceback (most recent call last):
File "/data/qshenaf/remote_pc/LLM4Converter/bugs/0319/torch.linalg.matrix_rank.py", line 23, in <module>
torch.testing.assert_close(res2, compiled_out)
~~~~~~~~~~~~~~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^
File "/data/qshenaf/miniconda3/envs/torch/lib/python3.13/site-packages/torch/testing/_comparison.py", line 1519, in assert_close
raise error_metas[0].to_error(msg)
AssertionError: Tensor-likes are not close!
Mismatched elements: 2 / 2 (100.0%)
Greatest absolute difference: 0.14176997542381287 at index (0, 1) (up to 1e-05 allowed)
Greatest relative difference: 0.3164082467556 at index (0, 1) (up to 1.3e-06 allowed)
```
### Error logs
_No response_
### Versions
PyTorch version: 2.7.0.dev20250308+cu126
Is debug build: False
CUDA used to build PyTorch: 12.6
ROCM used to build PyTorch: N/A
OS: AlmaLinux 9.4 (Seafoam Ocelot) (x86_64)
GCC version: (GCC) 11.4.1 20231218 (Red Hat 11.4.1-3)
Clang version: 17.0.6 (AlmaLinux OS Foundation 17.0.6-5.el9)
CMake version: version 3.26.5
Libc version: glibc-2.34
Python version: 3.13.0 | packaged by Anaconda, Inc. | (main, Oct 7 2024, 21:29:38) [GCC 11.2.0] (64-bit runtime)
Python platform: Linux-5.14.0-427.37.1.el9_4.x86_64-x86_64-with-glibc2.34
Is CUDA available: True
CUDA runtime version: 12.6.77
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration:
GPU 0: NVIDIA GeForce RTX 3090
GPU 1: NVIDIA GeForce RTX 3090
GPU 2: NVIDIA GeForce RTX 3090
GPU 3: NVIDIA GeForce RTX 3090
Nvidia driver version: 560.35.03
cuDNN version: Could not collect
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 43 bits physical, 48 bits virtual
Byte Order: Little Endian
CPU(s): 64
On-line CPU(s) list: 0-63
Vendor ID: AuthenticAMD
Model name: AMD Ryzen Threadripper PRO 3975WX 32-Cores
CPU family: 23
Model: 49
Thread(s) per core: 2
Core(s) per socket: 32
Socket(s): 1
Stepping: 0
Frequency boost: enabled
CPU(s) scaling MHz: 81%
CPU max MHz: 4368.1641
CPU min MHz: 2200.0000
BogoMIPS: 7000.16
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ht syscall nx mmxext fxsr_opt pdpe1gb rdtscp lm constant_tsc rep_good nopl nonstop_tsc cpuid extd_apicid aperfmperf rapl pni pclmulqdq monitor ssse3 fma cx16sse4_1 sse4_2 x2apic movbe popcnt aes xsave avx f16c rdrand lahf_lm cmp_legacy svm extapic cr8_legacy abm sse4a misalignsse 3dnowprefetch osvw ibs skinit wdt tce topoext perfctr_core perfctr_nb bpext perfctr_llc mwaitx cpb cat_l3 cdp_l3 hw_pstate ssbd mba ibpb stibp vmmcall fsgsbase bmi1 avx2 smep bmi2 cqm rdt_a rdseed adx smap clflushopt clwb sha_ni xsaveopt xsavec xgetbv1 xsaves cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local clzero irperf xsaveerptr rdpru wbnoinvd amd_ppin arat npt lbrv svm_lock nrip_save tsc_scale vmcb_clean flushbyasid decodeassists pausefilter pfthreshold avic v_vmsave_vmload vgif v_spec_ctrl umip rdpid overflow_recov succor smca sev sev_es
Virtualization: AMD-V
L1d cache: 1 MiB (32 instances)
L1i cache: 1 MiB (32 instances)
L2 cache: 16 MiB (32 instances)
L3 cache: 128 MiB (8 instances)
NUMA node(s): 1
NUMA node0 CPU(s): 0-63
Vulnerability Gather data sampling: Not affected
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Not affected
Vulnerability Retbleed: Mitigation; untrained return thunk; SMT enabled with STIBP protection
Vulnerability Spec rstack overflow: Mitigation; Safe RET
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Retpolines, IBPB conditional, STIBP always-on, RSB filling, PBRSB-eIBRS Not affected
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Versions of relevant libraries:
[pip3] numpy==2.2.3
[pip3] nvidia-cublas-cu12==12.6.4.1
[pip3] nvidia-cuda-cupti-cu12==12.6.80
[pip3] nvidia-cuda-nvrtc-cu12==12.6.77
[pip3] nvidia-cuda-runtime-cu12==12.6.77
[pip3] nvidia-cudnn-cu12==9.5.1.17
[pip3] nvidia-cufft-cu12==11.3.0.4
[pip3] nvidia-curand-cu12==10.3.7.77
[pip3] nvidia-cusolver-cu12==11.7.1.2
[pip3] nvidia-cusparse-cu12==12.5.4.2
[pip3] nvidia-cusparselt-cu12==0.6.3
[pip3] nvidia-nccl-cu12==2.25.1
[pip3] nvidia-nvjitlink-cu12==12.6.85
[pip3] nvidia-nvtx-cu12==12.6.77
[pip3] pytorch-triton==3.2.0+git4b3bb1f8
[pip3] torch==2.7.0.dev20250308+cu126
[pip3] torchaudio==2.6.0.dev20250308+cu126
[pip3] torchvision==0.22.0.dev20250308+cu126
[pip3] triton==3.2.0
[conda] numpy 2.2.3 pypi_0 pypi
[conda] nvidia-cublas-cu12 12.6.4.1 pypi_0 pypi
[conda] nvidia-cuda-cupti-cu12 12.6.80 pypi_0 pypi
[conda] nvidia-cuda-nvrtc-cu12 12.6.77 pypi_0 pypi
[conda] nvidia-cuda-runtime-cu12 12.6.77 pypi_0 pypi
[conda] nvidia-cudnn-cu12 9.5.1.17 pypi_0 pypi
[conda] nvidia-cufft-cu12 11.3.0.4 pypi_0 pypi
[conda] nvidia-curand-cu12 10.3.7.77 pypi_0 pypi
[conda] nvidia-cusolver-cu12 11.7.1.2 pypi_0 pypi
[conda] nvidia-cusparse-cu12 12.5.4.2 pypi_0 pypi
[conda] nvidia-cusparselt-cu12 0.6.3 pypi_0 pypi
[conda] nvidia-nccl-cu12 2.25.1 pypi_0 pypi
[conda] nvidia-nvjitlink-cu12 12.6.85 pypi_0 pypi
[conda] nvidia-nvtx-cu12 12.6.77 pypi_0 pypi
[conda] pytorch-triton 3.2.0+git4b3bb1f8 pypi_0 pypi
[conda] torch 2.7.0.dev20250308+cu126 pypi_0 pypi
[conda] torchaudio 2.6.0.dev20250308+cu126 pypi_0 pypi
[conda] torchvision 0.22.0.dev20250308+cu126 pypi_0 pypi
[conda] triton 3.2.0 pypi_0 pypi
cc @ezyang @gchanan @zou3519 @kadeng @msaroufim @chauhang @penguinwu @voznesenskym @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @amjames @bdhirsh
| true
|
2,931,304,833
|
Inductor produces significantly different inference results from the original model
|
Cookiee235
|
open
|
[
"oncall: pt2",
"oncall: cpu inductor"
] | 2
|
CONTRIBUTOR
|
### 🐛 Describe the bug
```python
import torch
class Model(torch.nn.Module):
def __init__(self):
super(Model, self).__init__()
self.linear = torch.nn.Linear(3, 3)
self.linear.weight = torch.nn.Parameter(torch.eye(3))
self.linear.bias = torch.nn.Parameter(torch.zeros(3))
def forward(self, x):
x = self.linear(x)
x = torch.nn.functional.tanh(x)
inv_x, info = torch.linalg.inv_ex(x, check_errors=True)
return inv_x
model = Model()
inputs = torch.tensor([[1.0, 2.0, 3.0], [4.0, 5.0, 6.0], [7.0, 8.0, 9.0]])
res = model(inputs)
res2 = model(inputs)
torch.testing.assert_close(res, res2)
compiled_model = torch.compile(model, backend='inductor')
with torch.no_grad():
compiled_out = compiled_model(inputs)
torch.testing.assert_close(res, compiled_out)
```
### Error logs
Traceback (most recent call last):
File "/data/qshenaf/remote_pc/LLM4Converter/bugs/0319/torch.nn.functional.tanh.py", line 25, in <module>
torch.testing.assert_close(res, compiled_out)
~~~~~~~~~~~~~~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^
File "/data/qshenaf/miniconda3/envs/torch/lib/python3.13/site-packages/torch/testing/_comparison.py", line 1519, in assert_close
raise error_metas[0].to_error(msg)
AssertionError: Tensor-likes are not close!
Mismatched elements: 9 / 9 (100.0%)
Greatest absolute difference: 640.109375 at index (1, 1) (up to 1e-05 allowed)
Greatest relative difference: 0.006599230691790581 at index (0, 0) (up to 1.3e-06 allowed)
### Versions
PyTorch version: 2.7.0.dev20250308+cu126
Is debug build: False
CUDA used to build PyTorch: 12.6
ROCM used to build PyTorch: N/A
OS: AlmaLinux 9.4 (Seafoam Ocelot) (x86_64)
GCC version: (GCC) 11.4.1 20231218 (Red Hat 11.4.1-3)
Clang version: 17.0.6 (AlmaLinux OS Foundation 17.0.6-5.el9)
CMake version: version 3.26.5
Libc version: glibc-2.34
Python version: 3.13.0 | packaged by Anaconda, Inc. | (main, Oct 7 2024, 21:29:38) [GCC 11.2.0] (64-bit runtime)
Python platform: Linux-5.14.0-427.37.1.el9_4.x86_64-x86_64-with-glibc2.34
Is CUDA available: True
CUDA runtime version: 12.6.77
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration:
GPU 0: NVIDIA GeForce RTX 3090
GPU 1: NVIDIA GeForce RTX 3090
GPU 2: NVIDIA GeForce RTX 3090
GPU 3: NVIDIA GeForce RTX 3090
Nvidia driver version: 560.35.03
cuDNN version: Could not collect
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 43 bits physical, 48 bits virtual
Byte Order: Little Endian
CPU(s): 64
On-line CPU(s) list: 0-63
Vendor ID: AuthenticAMD
Model name: AMD Ryzen Threadripper PRO 3975WX 32-Cores
CPU family: 23
Model: 49
Thread(s) per core: 2
Core(s) per socket: 32
Socket(s): 1
Stepping: 0
Frequency boost: enabled
CPU(s) scaling MHz: 81%
CPU max MHz: 4368.1641
CPU min MHz: 2200.0000
BogoMIPS: 7000.16
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ht syscall nx mmxext fxsr_opt pdpe1gb rdtscp lm constant_tsc rep_good nopl nonstop_tsc cpuid extd_apicid aperfmperf rapl pni pclmulqdq monitor ssse3 fma cx16 sse4_1 sse4_2 x2apic movbe popcnt aes xsave avx f16c rdrand lahf_lm cmp_legacy svm extapic cr8_legacy abm sse4a misalignsse 3dnowprefetch osvw ibs skinit wdt tce topoext perfctr_core perfctr_nb bpext perfctr_llc mwaitx cpb cat_l3 cdp_l3 hw_pstate ssbd mba ibpb stibp vmmcall fsgsbase bmi1 avx2 smep bmi2 cqm rdt_a rdseed adx smap clflushopt clwb sha_ni xsaveopt xsavec xgetbv1 xsaves cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local clzero irperf xsaveerptr rdpru wbnoinvd amd_ppin arat npt lbrv svm_lock nrip_save tsc_scale vmcb_clean flushbyasid decodeassists pausefilter pfthreshold avic v_vmsave_vmload vgif v_spec_ctrl umip rdpid overflow_recov succor smca sev sev_es
Virtualization: AMD-V
L1d cache: 1 MiB (32 instances)
L1i cache: 1 MiB (32 instances)
L2 cache: 16 MiB (32 instances)
L3 cache: 128 MiB (8 instances)
NUMA node(s): 1
NUMA node0 CPU(s): 0-63
Vulnerability Gather data sampling: Not affected
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Not affected
Vulnerability Retbleed: Mitigation; untrained return thunk; SMT enabled with STIBP protection
Vulnerability Spec rstack overflow: Mitigation; Safe RET
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Retpolines, IBPB conditional, STIBP always-on, RSB filling, PBRSB-eIBRS Not affected
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Versions of relevant libraries:
[pip3] numpy==2.2.3
[pip3] nvidia-cublas-cu12==12.6.4.1
[pip3] nvidia-cuda-cupti-cu12==12.6.80
[pip3] nvidia-cuda-nvrtc-cu12==12.6.77
[pip3] nvidia-cuda-runtime-cu12==12.6.77
[pip3] nvidia-cudnn-cu12==9.5.1.17
[pip3] nvidia-cufft-cu12==11.3.0.4
[pip3] nvidia-curand-cu12==10.3.7.77
[pip3] nvidia-cusolver-cu12==11.7.1.2
[pip3] nvidia-cusparse-cu12==12.5.4.2
[pip3] nvidia-cusparselt-cu12==0.6.3
[pip3] nvidia-nccl-cu12==2.25.1
[pip3] nvidia-nvjitlink-cu12==12.6.85
[pip3] nvidia-nvtx-cu12==12.6.77
[pip3] pytorch-triton==3.2.0+git4b3bb1f8
[pip3] torch==2.7.0.dev20250308+cu126
[pip3] torchaudio==2.6.0.dev20250308+cu126
[pip3] torchvision==0.22.0.dev20250308+cu126
[pip3] triton==3.2.0
[conda] numpy 2.2.3 pypi_0 pypi
[conda] nvidia-cublas-cu12 12.6.4.1 pypi_0 pypi
[conda] nvidia-cuda-cupti-cu12 12.6.80 pypi_0 pypi
[conda] nvidia-cuda-nvrtc-cu12 12.6.77 pypi_0 pypi
[conda] nvidia-cuda-runtime-cu12 12.6.77 pypi_0 pypi
[conda] nvidia-cudnn-cu12 9.5.1.17 pypi_0 pypi
[conda] nvidia-cufft-cu12 11.3.0.4 pypi_0 pypi
[conda] nvidia-curand-cu12 10.3.7.77 pypi_0 pypi
[conda] nvidia-cusolver-cu12 11.7.1.2 pypi_0 pypi
[conda] nvidia-cusparse-cu12 12.5.4.2 pypi_0 pypi
[conda] nvidia-cusparselt-cu12 0.6.3 pypi_0 pypi
[conda] nvidia-nccl-cu12 2.25.1 pypi_0 pypi
[conda] nvidia-nvjitlink-cu12 12.6.85 pypi_0 pypi
[conda] nvidia-nvtx-cu12 12.6.77 pypi_0 pypi
[conda] pytorch-triton 3.2.0+git4b3bb1f8 pypi_0 pypi
[conda] torch 2.7.0.dev20250308+cu126 pypi_0 pypi
[conda] torchaudio 2.6.0.dev20250308+cu126 pypi_0 pypi
[conda] torchvision 0.22.0.dev20250308+cu126 pypi_0 pypi
[conda] triton 3.2.0 pypi_0 pypi
cc @chauhang @penguinwu
| true
|
2,931,250,561
|
[RFC] : Dynamically Quantized 8-bit Matrix Multiplication support
|
nikhil-arm
|
open
|
[
"oncall: quantization",
"enhancement"
] | 6
|
COLLABORATOR
|
# Dynamically Quantized 8-bit Matrix Multiplication support
## Background
PyTorch currently supports 4-bit dynamic quantized matrix multiplication via two operations:
- **`torch.ops.aten._dyn_quant_pack_4bit_weight`**
Packs the quantized weights, scales, and (optional) bias for a Linear layer into a single tensor.
- **`torch.ops.aten._dyn_quant_matmul_4bit`**
Performs matrix multiplication using the packed 4-bit quantized weights and 8-bit dynamically quantized activations.
These operations enable efficient low-precision computation, reducing both memory usage and computational cost.
We intend to integrate the KleidiAI 8-bit dynamic quantized matmul kernels into PyTorch. The 8-bit kernel follows a similar paradigm to the 4-bit implementation, requiring the quantized weights, scales, and bias to be packed together before executing the matrix multiplication kernel.
This enhancement will offer additional quantization precision, providing users with more options to balance performance and accuracy.
## Problem Statement
Integrating 8-bit dynamic quantized matrix multiplication introduces some challenges:
1. **Separate Operators Approach:**
- **What it does:**
Introduces two new operators:
- `torch.ops.aten._dyn_quant_pack_8bit_weight`
- `torch.ops.aten._dyn_quant_matmul_8bit`
- **Challenge:**
While this approach provides a clear separation between the 4-bit and 8-bit implementations—allowing targeted optimizations for each—it does increase the number of operators in ATen. Historically, there has been a preference to minimize the operator set to reduce potential maintenance complexity.
2. **Unified n-Bit Operators Approach:**
- **What it does:**
Generalizes the current 4-bit operators by adding a parameter (e.g., `bit_width`) to support multiple bit-widths. This unified operator can dispatch to either the 4-bit or 8-bit kernel based on the value of the parameter.
- **Challenge:**
This approach requires modifying the existing operator signatures, which makes them more general and better suited for future expansion, but we need to be careful not to break backwards compatibility in existing integrations.
## Proposed Approaches
Below, we outline two potential approaches to integrate 8-bit dynamic quantized matrix multiplication:
---
### Approach 1: Separate 8-Bit Operators
This approach introduces dedicated operators for 8-bit dynamic quantization, keeping the 4-bit and 8-bit code paths distinct.
#### 1. `torch.ops.aten._dyn_quant_pack_8bit_weight`
**Description:**
Packs the 8-bit quantized weights, scales, and bias for a Linear layer into a compact format using 8-bit symmetric quantization.
**Parameters:**
- **`weight`** (`Tensor`): The original weights of the Linear layer.
- **`scales_and_zeros`** (`Tensor`): A tensor containing the quantization scales (and possibly zero-points) for each group.
- **`bias`** (`Tensor`, optional): The bias tensor for the Linear layer.
- **`groupsize`** (`int`): The number of channels per group. Must be a multiple of 32 or equal to `in_features`.
- **`in_features`** (`int`): The number of input features.
- **`out_features`** (`int`): The number of output features.
**Returns:**
A tensor representing the packed 8-bit weights and associated quantization parameters, ready for use in matrix multiplication.
---
#### 2. `torch.ops.aten._dyn_quant_matmul_8bit`
**Description:**
Performs matrix multiplication using the 8-bit quantized weights.
**Parameters:**
- **`input`** (`Tensor`): The input tensor for matrix multiplication, typically with shape `[batch_size, in_features]`.
- **`packed_weights`** (`Tensor`): The packed 8-bit weights, as produced by `torch.ops.aten._dyn_quant_pack_8bit_weight`.
- **`groupsize`** (`int`): The number of channels per group.
- **`in_features`** (`int`): The number of input features.
- **`out_features`** (`int`): The number of output features.
**Returns:**
A tensor containing the result of the matrix multiplication with shape `[batch_size, out_features]`.
**Pros:**
- Provides a clear separation between implementations, facilitating targeted optimization of each precision.
**Cons:**
- Increases the number of operators in ATen, which has historically been a point of caution due to concerns over API bloat and maintenance complexity.
---
### Approach 2: Unified n-Bit Operators
This approach extends the existing 4-bit operators to support both 4-bit and 8-bit precisions via a new `bit_width` parameter. The operator dispatches to the appropriate kernel based on the provided bit-width.
#### 1. `torch.ops.aten._dyn_quant_pack_weight`
**Description:**
A unified packing operation that supports both 4-bit and 8-bit quantization by accepting a `bit_width` parameter.
**Parameters:**
- **`weight`** (`Tensor`): The original weights of the Linear layer.
- **`scales_and_zeros`** (`Tensor`): A tensor containing quantization scales (and zero-points if necessary) for each group.
- **`bias`** (`Tensor`, optional): The bias tensor for the Linear layer.
- **`groupsize`** (`int`): The number of channels per group.
- **`in_features`** (`int`): The number of input features.
- **`out_features`** (`int`): The number of output features.
- **`bit_width`** (`int`): The quantization precision. Accepted values are 4 or 8 (defaults to 4 if not specified).
**Returns:**
A tensor representing the packed weights and quantization parameters in the specified precision.
---
#### 2. `torch.ops.aten._dyn_quant_matmul`
**Description:**
Performs matrix multiplication using the unified operator that supports both 4-bit and 8-bit quantized weights.
**Parameters:**
- **`input`** (`Tensor`): The input tensor for matrix multiplication, typically with shape `[batch_size, in_features]`.
- **`packed_weights`** (`Tensor`): The packed weights as produced by `torch.ops.aten._dyn_quant_pack_weight`.
- **`groupsize`** (`int`): The number of channels per group.
- **`in_features`** (`int`): The number of input features.
- **`out_features`** (`int`): The number of output features.
- **`bit_width`** (`int`): Specifies the bit-width of the quantization (4 or 8) and controls the kernel dispatch.
**Returns:**
A tensor containing the result of the matrix multiplication with shape `[batch_size, out_features]`.
**Pros:**
- Offers a more concise and unified API that minimizes code duplication.
- Given that the current 4-bit API is only limitedly used, modifying its signature poses a relatively low risk to the user base, while enabling a more scalable solution.
**Cons:**
- Requires modifications to existing operator signatures, which necessitates careful planning to ensure a smooth transition. Clear documentation and defaulting `bit_width` to 4 can mitigate any potential disruptions.
---
## API Usage Examples
### Using Separate 8-Bit Operators:
```python
# 8-bit weight packing
packed_weights_8bit = torch.ops.aten._dyn_quant_pack_8bit_weight(
weight, scales_and_zeros, bias, groupsize, in_features, out_features
)
# 8-bit matrix multiplication
output = torch.ops.aten._dyn_quant_matmul_8bit(
input, packed_weights_8bit, groupsize, in_features, out_features
)
```
### Using Unified n-Bit Operators:
```python
# Packing weights with a specified bit width (4 or 8)
packed_weights = torch.ops.aten._dyn_quant_pack_weight(
weight, scales_and_zeros, bias, groupsize, in_features, out_features, bit_width=8
)
# Matrix multiplication using the unified operator
output = torch.ops.aten._dyn_quant_matmul(
input, packed_weights, groupsize, in_features, out_features, bit_width=8
)
```
| true
|
2,931,228,436
|
Adapt test_misc.py for HPUs
|
amathewc
|
closed
|
[
"triaged",
"open source",
"Merged",
"ciflow/trunk",
"topic: not user facing",
"module: dynamo"
] | 12
|
CONTRIBUTOR
|
This PR is related to https://github.com/pytorch/pytorch/pull/145476. That PR had two files (test_functions.py and test_misc.py). test_functions.py was causing CI/rebase/merge issues and hence is removed for now. This PR contains only test_misc.py.
This is a continuation of https://github.com/pytorch/pytorch/pull/144387 .
## MOTIVATION
We recently integrated support for Intel Gaudi devices (identified as 'hpu') into the common_device_type framework via the pull request at https://github.com/pytorch/pytorch/pull/126970. This integration allows tests to be automatically instantiated for Gaudi devices upon loading the relevant library. Building on this development, the current pull request extends the utility of these hooks by adapting selected CUDA tests to operate on Gaudi devices. Additionally, we have confirmed that these modifications do not interfere with the existing tests on CUDA devices.
Other accelerators can also extend the functionality by adding the device in the devices list. ( For eg: xpu )
## CHANGES
- Create a separate class for test functions running on CUDA devices
- Extend the functionality of these tests to include HPUs
- Use instantiate_device_type_tests with targeted attributes to generate device-specific test instances within the new classes (see the sketch below)
- Apply skipIfHPU decorator to bypass tests that are not yet compatible with HPU devices
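A minimal sketch of that pattern (the test class, test body, and device list below are illustrative, not the actual tests in this PR):
```python
import torch
from torch.testing._internal.common_device_type import instantiate_device_type_tests
from torch.testing._internal.common_utils import TestCase, run_tests

devices = ("cuda", "hpu")  # other accelerators (e.g. "xpu") can be added here

class MiscTestsDevice(TestCase):
    # placeholder test; each generated variant receives its device string
    def test_add(self, device):
        x = torch.ones(4, device=device)
        self.assertEqual((x + x).sum().item(), 8.0)

# instantiate per-device test classes, restricted to the devices listed above
instantiate_device_type_tests(MiscTestsDevice, globals(), only_for=devices)

if __name__ == "__main__":
    run_tests()
```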
cc: @ankurneog , @EikanWang , @yanboliang , @guangyey
PS: Most of these changes were initially part of https://github.com/pytorch/pytorch/pull/147609 , but closed that PR due to merge conflicts. The review comments were handled in this PR.
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @chauhang @amjames
| true
|
2,931,200,518
|
Fix ValueError issue
|
FlintWangacc
|
closed
|
[
"module: cpu",
"open source",
"module: amp (automated mixed precision)",
"release notes: quantization",
"module: dynamo"
] | 4
|
NONE
|
Fixes #149497
cc @jgong5 @mingfeima @XiaobingSuper @sanchitintel @ashokei @jingxu10 @mcarilli @ptrblck @leslie-fang-intel @voznesenskym @penguinwu @EikanWang @Guobing-Chen @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @chauhang @amjames
| true
|
2,931,196,232
|
symbolic_trace failed on deepseek-ai/DeepSeek-R1-Distill-Qwen-1.5B
|
FlintWangacc
|
open
|
[
"module: fx",
"oncall: fx"
] | 1
|
NONE
|
### 🐛 Describe the bug
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from torch_mlir import fx
model_name = "deepseek-ai/DeepSeek-R1-Distill-Qwen-1.5B"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)
prompt = "What are the benefits of using AI in healthcare?"
input_ids = tokenizer.encode(prompt, return_tensors="pt")
model.eval()
traced_model = torch.fx.symbolic_trace(model)
m = fx.export_and_import(traced_model, (input_ids,), enable_ir_printing=True,
enable_graph_printing=True)
with open("qwen1.5b_s.mlir", "w") as f:
f.write(str(m))
```
```shell
Traceback (most recent call last):
File "/home/hmsjwzb/work/models/QWEN/./qwen5.py", line 55, in <module>
traced_model = torch.fx.symbolic_trace(model)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/hmsjwzb/work/models/QWEN/qwen/lib/python3.11/site-packages/torch/fx/_symbolic_trace.py", line 1314, in symbolic_trace
graph = tracer.trace(root, concrete_args)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/hmsjwzb/work/models/QWEN/qwen/lib/python3.11/site-packages/torch/fx/_symbolic_trace.py", line 788, in trace
fn, args = self.create_args_for_root(
^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/hmsjwzb/work/models/QWEN/qwen/lib/python3.11/site-packages/torch/fx/_symbolic_trace.py", line 679, in create_args_for_root
root_fn = _patch_function(root_fn, len(args))
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/hmsjwzb/work/models/QWEN/qwen/lib/python3.11/site-packages/torch/fx/_symbolic_trace.py", line 184, in _patch_function
new_code = CodeType(*co_args) # type: ignore[arg-type]
^^^^^^^^^^^^^^^^^^
ValueError: code: co_varnames is too small
```
### Versions
```shell
Collecting environment information...
PyTorch version: 2.7.0.dev20250310+cpu
Is debug build: False
CUDA used to build PyTorch: None
ROCM used to build PyTorch: N/A
OS: Ubuntu 22.04.5 LTS (x86_64)
GCC version: (Ubuntu 11.4.0-1ubuntu1~22.04) 11.4.0
Clang version: 19.1.7 (https://github.com/llvm/llvm-project.git cd708029e0b2869e80abe31ddb175f7c35361f90)
CMake version: version 3.31.6
Libc version: glibc-2.35
Python version: 3.11.11+local (heads/3.11-dirty:f0895aa9c1d, Dec 20 2024, 14:17:01) [GCC 11.4.0] (64-bit runtime)
Python platform: Linux-6.8.0-52-generic-x86_64-with-glibc2.35
Is CUDA available: False
CUDA runtime version: No CUDA
CUDA_MODULE_LOADING set to: N/A
GPU models and configuration: No CUDA
Nvidia driver version: No CUDA
cuDNN version: No CUDA
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 46 bits physical, 48 bits virtual
Byte Order: Little Endian
CPU(s): 32
On-line CPU(s) list: 0-31
Vendor ID: GenuineIntel
Model name: 13th Gen Intel(R) Core(TM) i9-13900
CPU family: 6
Model: 183
Thread(s) per core: 2
Core(s) per socket: 24
Socket(s): 1
Stepping: 1
CPU max MHz: 5600.0000
CPU min MHz: 800.0000
BogoMIPS: 3993.60
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc art arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc cpuid aperfmperf tsc_known_freq pni pclmulqdq dtes64 monitor ds_cpl vmx smx est tm2 ssse3 sdbg fma cx16 xtpr pdcm pcid sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm 3dnowprefetch cpuid_fault epb ssbd ibrs ibpb stibp ibrs_enhanced tpr_shadow flexpriority ept vpid ept_ad fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid rdseed adx smap clflushopt clwb intel_pt sha_ni xsaveopt xsavec xgetbv1 xsaves split_lock_detect user_shstk avx_vnni dtherm ida arat pln pts hwp hwp_notify hwp_act_window hwp_epp hwp_pkg_req hfi vnmi umip pku ospke waitpkg gfni vaes vpclmulqdq tme rdpid movdiri movdir64b fsrm md_clear serialize pconfig arch_lbr ibt flush_l1d arch_capabilities
Virtualization: VT-x
L1d cache: 896 KiB (24 instances)
L1i cache: 1.3 MiB (24 instances)
L2 cache: 32 MiB (12 instances)
L3 cache: 36 MiB (1 instance)
NUMA node(s): 1
NUMA node0 CPU(s): 0-31
Vulnerability Gather data sampling: Not affected
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Not affected
Vulnerability Reg file data sampling: Mitigation; Clear Register File
Vulnerability Retbleed: Not affected
Vulnerability Spec rstack overflow: Not affected
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Enhanced / Automatic IBRS; IBPB conditional; RSB filling; PBRSB-eIBRS SW sequence; BHI BHI_DIS_S
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Versions of relevant libraries:
[pip3] numpy==2.2.3
[pip3] nvidia-cublas-cu12==12.4.5.8
[pip3] nvidia-cuda-cupti-cu12==12.4.127
[pip3] nvidia-cuda-nvrtc-cu12==12.4.127
[pip3] nvidia-cuda-runtime-cu12==12.4.127
[pip3] nvidia-cudnn-cu12==9.1.0.70
[pip3] nvidia-cufft-cu12==11.2.1.3
[pip3] nvidia-curand-cu12==10.3.5.147
[pip3] nvidia-cusolver-cu12==11.6.1.9
[pip3] nvidia-cusparse-cu12==12.3.1.170
[pip3] nvidia-cusparselt-cu12==0.6.2
[pip3] nvidia-nccl-cu12==2.21.5
[pip3] nvidia-nvjitlink-cu12==12.4.127
[pip3] nvidia-nvtx-cu12==12.4.127
[pip3] onnx==1.16.1
[pip3] onnxscript==0.2.2
[pip3] optree==0.14.0
[pip3] torch==2.7.0.dev20250310+cpu
[pip3] torchvision==0.22.0.dev20250310+cpu
[pip3] triton==3.2.0
[conda] magma-cuda121 2.6.1
```
cc @ezyang @SherlockNoMad @EikanWang @jgong5 @wenzhe-nrv
| true
|
2,931,165,260
|
possible output mismatch with torch.compile
|
vpandya-quic
|
closed
|
[
"high priority",
"triage review",
"module: embedding",
"oncall: pt2"
] | 3
|
NONE
|
### 🐛 Describe the bug
I have following test
```python
def test_large_random_embedding():
# Define a simple embedding model
class SimpleEmbeddingModel(torch.nn.Module):
def __init__(self, num_embeddings=10_000, embedding_dim=512):
super(SimpleEmbeddingModel, self).__init__()
self.embedding = torch.nn.Embedding(num_embeddings, embedding_dim)
self.num_embeddings = num_embeddings
def forward(self, x):
return self.embedding(x % self.num_embeddings)
# Initialize model
model = SimpleEmbeddingModel()
compiled_model = SimpleEmbeddingModel()
# Generate large input covering 0 to max long using modulo
max_long = torch.iinfo(torch.long).max
input_data = torch.randint(0, max_long, (1024, 512), dtype=torch.long)
# Compile the model
compiled_model.forward = torch.compile(compiled_model.forward, backend="inductor", fullgraph=False)
# Run inference
output_before = model(input_data)
output_after = compiled_model(input_data)
# Calculate the absolute differences
differences = torch.abs(output_before - output_after)
# Find the maximum difference and its index
max_difference = torch.max(differences)
max_index = torch.argmax(differences)
# Convert the flat index to multi-dimensional index
max_index_multi = torch.unravel_index(max_index, differences.shape)
# Retrieve the values at max_index
value_before = output_before[max_index_multi]
value_after = output_after[max_index_multi]
print(f"Absolute maximum difference: {max_difference.item()}")
print(f"Index of maximum difference: {max_index_multi}")
print(f"Value at max_index in output_before: {value_before.item()}")
print(f"Value at max_index in output_after: {value_after.item()}")
# Validate outputs
assert torch.allclose(output_before, output_after, rtol=0.01, atol=0.01 ), "Outputs differ between original and compiled model."
print("Test passed! Outputs are identical.")
```
Should I expect the outputs to match between eager and compile mode, or is my test broken in some sense?
### Error logs
_No response_
### Versions
Collecting environment information...
PyTorch version: 2.4.1+cpu
Is debug build: False
CUDA used to build PyTorch: None
ROCM used to build PyTorch: N/A
OS: Ubuntu 22.04.5 LTS (x86_64)
GCC version: (Ubuntu 11.4.0-1ubuntu1~22.04) 11.4.0
Clang version: 19.1.7 (++20250114103320+cd708029e0b2-1~exp1~20250114103432.75)
CMake version: version 3.29.0
Libc version: glibc-2.35
Python version: 3.10.12 (main, Jan 17 2025, 14:35:34) [GCC 11.4.0] (64-bit runtime)
Python platform: Linux-5.15.0-122-generic-x86_64-with-glibc2.35
Is CUDA available: False
CUDA runtime version: No CUDA
CUDA_MODULE_LOADING set to: N/A
GPU models and configuration: No CUDA
Nvidia driver version: No CUDA
cuDNN version: No CUDA
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 52 bits physical, 57 bits virtual
Byte Order: Little Endian
CPU(s): 32
On-line CPU(s) list: 0-31
Vendor ID: AuthenticAMD
Model name: AMD EPYC 9124 16-Core Processor
CPU family: 25
Model: 17
Thread(s) per core: 1
Core(s) per socket: 16
Socket(s): 2
Stepping: 1
Frequency boost: disabled
CPU max MHz: 3711.9141
CPU min MHz: 1500.0000
BogoMIPS: 5990.55
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ht syscall nx mmxext fxsr_opt pdpe1gb rdtscp lm constant_tsc rep_good nopl nonstop_tsc cpuid extd_apicid aperfmperf rapl pni pclmulqdq monitor ssse3 fma cx16 pcid sse4_1 sse4_2 x2apic movbe popcnt aes xsave avx f16c rdrand lahf_lm cmp_legacy svm extapic cr8_legacy abm sse4a misalignsse 3dnowprefetch osvw ibs skinit wdt tce topoext perfctr_core perfctr_nb bpext perfctr_llc mwaitx cpb cat_l3 cdp_l3 invpcid_single hw_pstate ssbd mba ibrs ibpb stibp ibrs_enhanced vmmcall fsgsbase bmi1 avx2 smep bmi2 erms invpcid cqm rdt_a avx512f avx512dq rdseed adx smap avx512ifma clflushopt clwb avx512cd sha_ni avx512bw avx512vl xsaveopt xsavec xgetbv1 xsaves cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local avx512_bf16 clzero irperf xsaveerptr rdpru wbnoinvd amd_ppin cppc arat npt lbrv svm_lock nrip_save tsc_scale vmcb_clean flushbyasid decodeassists pausefilter pfthreshold avic v_vmsave_vmload vgif v_spec_ctrl avx512vbmi umip pku ospke avx512_vbmi2 gfni vaes vpclmulqdq avx512_vnni avx512_bitalg avx512_vpopcntdq la57 rdpid overflow_recov succor smca fsrm flush_l1d
Virtualization: AMD-V
L1d cache: 1 MiB (32 instances)
L1i cache: 1 MiB (32 instances)
L2 cache: 32 MiB (32 instances)
L3 cache: 128 MiB (8 instances)
NUMA node(s): 2
NUMA node0 CPU(s): 0-15
NUMA node1 CPU(s): 16-31
Vulnerability Gather data sampling: Not affected
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Not affected
Vulnerability Reg file data sampling: Not affected
Vulnerability Retbleed: Not affected
Vulnerability Spec rstack overflow: Mitigation; safe RET
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl and seccomp
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Enhanced / Automatic IBRS; IBPB conditional; STIBP disabled; RSB filling; PBRSB-eIBRS Not affected; BHI Not affected
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Versions of relevant libraries:
[pip3] mypy-extensions==1.0.0
[pip3] numpy==1.26.4
[pip3] onnx==1.16.0
[pip3] onnxruntime==1.16.3
[pip3] onnxscript==0.1.0.dev20240327
[pip3] torch==2.4.1+cpu
[pip3] torch_geometric==2.5.2
[pip3] torch_qaic==0.1.0
[pip3] torch-tb-profiler==0.4.3
[pip3] torchao==0.8.0
[conda] Could not collect
cc @ezyang @gchanan @zou3519 @kadeng @msaroufim @chauhang @penguinwu
| true
|
2,931,118,257
|
DISABLED AotInductorTest.FreeInactiveConstantBufferCuda (build.bin.test_aoti_inference)
|
pytorch-bot[bot]
|
open
|
[
"module: flaky-tests",
"skipped",
"oncall: pt2",
"export-triaged",
"oncall: export",
"module: aotinductor"
] | 95
|
NONE
|
Platforms: inductor
This test was disabled because it is failing in CI. See [recent examples](https://hud.pytorch.org/flakytest?name=AotInductorTest.FreeInactiveConstantBufferCuda&suite=build.bin.test_aoti_inference&limit=100) and the most recent trunk [workflow logs](https://github.com/pytorch/pytorch/runs/39012167561).
Over the past 3 hours, it has been determined flaky in 3 workflow(s) with 3 failures and 3 successes.
**Debugging instructions (after clicking on the recent samples link):**
DO NOT ASSUME THINGS ARE OKAY IF THE CI IS GREEN. We now shield flaky tests from developers so CI will thus be green but it will be harder to parse the logs.
To find relevant log snippets:
1. Click on the workflow logs linked above
2. Click on the Test step of the job so that it is expanded. Otherwise, the grepping will not work.
3. Grep for `AotInductorTest.FreeInactiveConstantBufferCuda`
4. There should be several instances run (as flaky tests are rerun in CI) from which you can study the logs.
<details><summary>Sample error message</summary>
```
Expected equality of these values:
initMemory - DATASIZE
Which is: 22508863488
updateMemory2
Which is: 22508797952
/var/lib/jenkins/workspace/test/cpp/aoti_inference/test.cpp:383: C++ failure
```
</details>
Test file path: `` or `test/run_test`
Error: Error retrieving : 400, test/run_test: 404
cc @clee2000 @chauhang @penguinwu @avikchaudhuri @gmagogsfm @zhxchen17 @tugsbayasgalan @angelayi @suo @ydwu4 @desertfire @chenyang78 @yushangdi @benjaminglass1
| true
|
2,930,915,425
|
Skip test if torchvision is not available
|
Flamefire
|
closed
|
[
"triaged",
"open source",
"Merged",
"ciflow/trunk",
"topic: not user facing",
"module: inductor",
"ciflow/inductor"
] | 9
|
COLLABORATOR
|
The test unconditionally imports torchvision and fails if it isn't installed.
Skip it in this case.
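One common way to express such a guard (a sketch of the pattern, with a hypothetical test class name, not necessarily the exact diff in this PR):
```python
import unittest

try:
    import torchvision  # noqa: F401
    HAS_TORCHVISION = True
except ImportError:
    HAS_TORCHVISION = False

skipIfNoTorchVision = unittest.skipIf(not HAS_TORCHVISION, "torchvision is not installed")

@skipIfNoTorchVision
class TestNeedsTorchVision(unittest.TestCase):
    def test_something(self):
        self.assertTrue(HAS_TORCHVISION)
```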
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
| true
|
2,930,775,781
|
DISABLED [WORKFLOW_NAME] / [PLATFORM_NAME] / [JOB_NAME]
|
Owner-DSH
|
closed
|
[
"module: ci"
] | 1
|
NONE
|
> For example, DISABLED pull / win-vs2022-cpu-py3 / test (default). Once
> created, the job will be disabled within 15 minutes. You can check the
> list of disabled jobs at https://ossci-metrics.s3.amazonaws.com/disabled-jobs.json
> If you need to get this out ASAP instead of waiting for 15 minutes,
> you can manually trigger the workflow at https://github.com/pytorch/test-infra/actions/workflows/update_disabled_tests.yml
> once the issue is created to update the above JSON list right away.
> Noted: you need to have write access to PyTorch repo to disable CI
> jobs. The issue will be rejected otherwise.
## Reason
*Provide a reason why this is needed and when this can be resolved*.
cc @seemethere @malfet @pytorch/pytorch-dev-infra
| true
|
2,930,775,739
|
Android sys
|
Owner-DSH
|
closed
|
[
"module: ci"
] | 0
|
NONE
|
> For example, DISABLED pull / win-vs2022-cpu-py3 / test (default). Once
> created, the job will be disabled within 15 minutes. You can check the
> list of disabled jobs at https://ossci-metrics.s3.amazonaws.com/disabled-jobs.json
> If you need to get this out ASAP instead of waiting for 15 minutes,
> you can manually trigger the workflow at https://github.com/pytorch/test-infra/actions/workflows/update_disabled_tests.yml
> once the issue is created to update the above JSON list right away.
> Noted: you need to have write access to PyTorch repo to disable CI
> jobs. The issue will be rejected otherwise.
## Reason
*Provide a reason why this is needed and when this can be resolved*.
cc @seemethere @malfet @pytorch/pytorch-dev-infra
| true
|
2,930,739,741
|
[Dynamo] Support the torch._C.DisableTorchFunction ctx manager
|
mlazos
|
closed
|
[
"Merged",
"ciflow/trunk",
"module: dynamo",
"ciflow/inductor",
"release notes: dynamo"
] | 3
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #149491
* #149490
* #149489
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @chauhang @amjames
| true
|
2,930,739,626
|
[Dynamo] add support for torch._C._is_torch_function_all_disabled
|
mlazos
|
closed
|
[
"Merged",
"module: dynamo",
"ciflow/inductor",
"release notes: dynamo"
] | 2
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #149491
* __->__ #149490
* #149489
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @chauhang @amjames
| true
|
2,930,739,528
|
[Dynamo] Refactor DisableTorchFunction ctx manager
|
mlazos
|
closed
|
[
"Merged",
"ciflow/trunk",
"module: dynamo",
"ciflow/inductor",
"release notes: dynamo"
] | 5
|
CONTRIBUTOR
|
Refactors the DisableTorchFunction ctx manager to properly model the eager code (no args to the context manager).
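A minimal sketch of the eager behavior being modeled (the subclass here is purely illustrative):
```python
import torch

class MyTensor(torch.Tensor):
    @classmethod
    def __torch_function__(cls, func, types, args=(), kwargs=None):
        kwargs = kwargs or {}
        return super().__torch_function__(func, types, args, kwargs)

x = torch.ones(2).as_subclass(MyTensor)

# the context manager takes no arguments; inside the block the
# __torch_function__ override above is not invoked
with torch._C.DisableTorchFunction():
    y = x + 1
```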
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #149491
* #149490
* __->__ #149489
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @chauhang @amjames
| true
|
2,930,706,167
|
[distributed] fix: use group rank instead of global rank when possible
|
zhc7
|
closed
|
[
"oncall: distributed",
"open source",
"Merged",
"ciflow/trunk",
"release notes: distributed (c10d)"
] | 11
|
CONTRIBUTOR
|
Fixes #149200
cc @H-Huang @awgu @kwen2501 @wanchaol @fegin @fduwjj @wz337 @wconstab @d4l3k @c-p-i-o
| true
|
2,930,598,356
|
add privateuse1 device type to pre forward hook of fsdp
|
garfield1997
|
closed
|
[
"oncall: distributed",
"open source",
"Merged",
"ciflow/trunk",
"release notes: distributed (fsdp)"
] | 17
|
CONTRIBUTOR
|
Add the privateuse1 device type to the pre-forward hook of FSDP.
cc @H-Huang @awgu @kwen2501 @wanchaol @fegin @fduwjj @wz337 @wconstab @d4l3k @c-p-i-o
| true
|
2,930,562,122
|
Fix index error for reorder_and_filter in gemm template
|
CaoE
|
open
|
[
"open source",
"ciflow/trunk",
"topic: not user facing",
"module: inductor",
"ciflow/inductor"
] | 1
|
COLLABORATOR
|
Fixes #149475
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
| true
|
2,930,553,604
|
fix et trace collection of all_to_all
|
sanshang-nv
|
closed
|
[
"oncall: distributed",
"triaged",
"open source",
"Merged",
"ciflow/trunk",
"release notes: distributed (c10d)"
] | 25
|
CONTRIBUTOR
|


Fix ET trace collection for all_to_all.
cc @H-Huang @awgu @kwen2501 @wanchaol @fegin @fduwjj @wz337 @wconstab @d4l3k @c-p-i-o
| true
|
2,930,517,349
|
[dynamo] Support tensor subclass with overriden tensor methods and properties
|
StrongerXi
|
closed
|
[
"Merged",
"Reverted",
"ciflow/trunk",
"module: dynamo",
"ciflow/inductor",
"release notes: dynamo",
"ci-no-td"
] | 11
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #149792
* __->__ #149484
* #149483
* #149482
This fixes most of the "torch.compile X tensor-subclass" issues
encountered in https://github.com/city96/ComfyUI-GGUF/issues/118. The
relevant tensor subclass definition is here:
https://github.com/city96/ComfyUI-GGUF/blob/298192ed60f8ca821c6fe5f8030cae23424cada5/ops.py#L18-L65.
A few things to note about the tensor subclass:
1. it overrides a lot of the `torch.Tensor` methods (e.g., `to`,
`clone`), so this patch updates `TensorWithTFOverrideVariable.var_getattr`
to support that.
2. it overrides the `shape` property, so this patch updates
`TensorWithTFOverrideVariable.var_getattr` to support property as well.
3. it has calls to `torch.Tensor.size`, which returns `torch.Size`,
which gets reconstructed in `torch.Tensor.__torch_function__`, so
this patch adds support for calling `torch.Size(...)` on non-constant
inputs.
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @chauhang @amjames
Differential Revision: [D71906137](https://our.internmc.facebook.com/intern/diff/D71906137)
| true
|
2,930,517,238
|
[dynamo] Support `torch.Tensor._make_subclass` and tracing through tensor subclass `__new__`
|
StrongerXi
|
closed
|
[
"Merged",
"Reverted",
"ciflow/trunk",
"module: dynamo",
"ciflow/inductor",
"release notes: dynamo",
"ci-no-td"
] | 10
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #149792
* #149484
* __->__ #149483
* #149482
This builds off the previous patch in the stack, and fully fixes
https://github.com/huggingface/diffusers/issues/10795.
Essentially, tensor subclass in the issue uses
`torch.Tensor._make_subclass`, which has a pretty simple shallow-copy
plus type change semantics, as far as Dynamo is concerned. So this patch
adds a polyfill for it.
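A rough sketch of those semantics (not the actual polyfill; `MyTensor` and the helper name are illustrative):
```python
import torch

class MyTensor(torch.Tensor):
    pass

def make_subclass_sketch(cls, data, require_grad=False):
    # shallow view of the same storage, with the Python type changed
    out = data.detach().as_subclass(cls)
    out.requires_grad_(require_grad)
    return out

t = make_subclass_sketch(MyTensor, torch.ones(2, 2))
assert isinstance(t, MyTensor)
```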
As a result, this allows us to trace through many user-defined `__new__`
in tensor subclasses (it's similar to how we trace through user-defined
`__new__` for `UserDefinedClassVariable`), so this patch also faithfully
traces through these `__new__` methods.
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @chauhang @amjames
Differential Revision: [D71906139](https://our.internmc.facebook.com/intern/diff/D71906139)
| true
|
2,930,517,130
|
[dynamo] Support Tensor subclass that has dynamic attributes or calls `Parameter.__torch_function__`
|
StrongerXi
|
closed
|
[
"Merged",
"Reverted",
"ciflow/trunk",
"module: dynamo",
"ciflow/inductor",
"release notes: dynamo",
"ci-no-td"
] | 17
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #149792
* #149484
* #149483
* __->__ #149482
This fixes most of https://github.com/huggingface/diffusers/issues/10795,
except for `torch.Tensor._make_subclass`, which will be fixed in a
subsequent patch.
The relevant tensor subclass from the aforementioned issue is defined
here: https://github.com/huggingface/diffusers/blob/fbf6b856cc61fd22ad8635547bff4aafe05723f3/src/diffusers/quantizers/gguf/utils.py#L398-L435.
There are a few things to note about the tensor subclass:
1. it calls `super().__torch_function__`, which is
`torch._C._disabled_torch_function_impl`, so this patch updates
`SuperVariable.call_method` to handle it (we can't do a simpler
polyfill due to some bug with `var_getattr` raising
`NotImplementedError`, which forgot to restore symbolic context).
2. it sets and reads attributes (`quant_type`), and
defines new methods (`as_data`), so this patch adds support for those.
3. it has a `__init__`, which Dynamo needs to trace through in
`TensorSubclassVariable.call_function`.
Differential Revision: [D71906140](https://our.internmc.facebook.com/intern/diff/D71906140)
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @chauhang @amjames
| true
|
2,930,517,013
|
[dynamo] fix calling torch function on newly constructed tensor subclass
|
StrongerXi
|
closed
|
[
"Merged",
"ciflow/trunk",
"topic: not user facing",
"module: dynamo",
"ciflow/inductor"
] | 6
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #149792
* #149484
* #149483
* #149482
* #149791
* __->__ #149481
This patch updates existing `test_return_..._subclass` tests in
`test/dynamo/test_subclasses.py`, so that they end up invoking the
`__torch_function__` method of the newly constructed tensor subclass
instances.
This exposes a bug in `TensorVariable.method_as_subclass`, where it
forgot to grab the `__func__` out of `__torch_function__`, which led to
an error down the line.
This patch fixes `TensorVariable.method_as_subclass` by centralizing how
we extract and wrap torch function, in `build_torch_function_fn`.
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @chauhang @amjames
| true
|
2,930,484,446
|
load_inline no_implicit_headers mode
|
msaroufim
|
closed
|
[
"module: cpp-extensions",
"Merged",
"ciflow/trunk",
"release notes: cpp"
] | 19
|
MEMBER
|
In the kernelBot leaderboard we support people competing with custom cuda extensions via `load_inline()`, however even on toy kernels this can result in cold starts of up to 90s - this problem is primarily responsible for us having to double our timeout values
I performed an investigation here https://github.com/msaroufim/load_inline_slow and the primary cause was that torch/extension.h and torch/types.h add in about 5,000 header files https://github.com/msaroufim/load_inline_slow/blob/main/header-analysis
So we introduce a mode `no_implicit_headers` which forces users to be explicit about exactly what they want to add.
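A minimal sketch of how the mode might be used (the keyword name is assumed from this PR; the include set is illustrative, explicitly pulling in torch/extension.h reproduces today's behavior, and the speedup comes from trimming it down to e.g. Tensor.h/TensorBase.h as in the gists linked below):
```python
from torch.utils.cpp_extension import load_inline

cpp_source = """
#include <torch/extension.h>  // nothing is added implicitly in this mode
torch::Tensor add_one(torch::Tensor x) { return x + 1; }
"""

mod = load_inline(
    name="add_one_ext",
    cpp_sources=cpp_source,
    functions=["add_one"],
    no_implicit_headers=True,  # assumption: exposed as a keyword argument
)
```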
Then there's still an open question around what's the most minimal example implementation we can provide. For the baseline kernel we're showing here, it takes about 1 min to compile
1. There's using TensorBase.h (finicky to get right but can get compilation times down to 5s)
2. Just using Tensor.h (down to 15s)
3. Using Shim.h (did not try yet since the syntax is verbose relative to cuda)
This is my take so far https://gist.github.com/msaroufim/079a8d08ffebd0f91a1c2247eb0ce9e0 for a minimal implementation at 15s but @malfet has a simpler one at only 5s https://gist.github.com/malfet/6f52de932aed35e046952f7e054294df
There's more things I'd like to try moving forward like nvrtc and fancier compilation flags. Typical advice around using precompiled headers does not apply to us because we are mostly interested in cold starts where we tear down the machine after running a kernel
Also, in a future PR I'd like to fix some issues I've noticed with load_inline:
1. It needs a force recompilation mode, I was using this quite a bit myself
2. The cache does not take into account changes in environment so the best way to force a recompilation is to change some string in the file
3. Instead of relying on pybind, can we use TORCH_LIBRARY instead
4. Should we refactor aten a bit to avoid pulling in a large number of headers unnecessarily
Big thank you to @drisspg, @janeyx99 and @albanD for sanity checking these results with me
cc @malfet @zou3519 @xmfan
| true
|
2,930,365,618
|
Remove Ubuntu 18.04 scripts
|
cyyever
|
closed
|
[
"module: rocm",
"open source",
"Merged",
"ciflow/trunk",
"release notes: releng"
] | 6
|
COLLABORATOR
|
Ubuntu 18.04 reached end of life on May 31, 2023. This code isn't used now.
cc @jeffdaily @sunway513 @jithunnair-amd @pruthvistony @ROCmSupport @dllehr-amd @jataylo @hongxiayang @naromero77amd
| true
|
2,930,325,306
|
[Distributed] Add `repr` methods for `ParallelStyle`s
|
shink
|
closed
|
[
"oncall: distributed",
"open source",
"Merged",
"ciflow/trunk",
"release notes: distributed (dtensor)"
] | 9
|
CONTRIBUTOR
|
Fixes #149470
cc @H-Huang @awgu @kwen2501 @wanchaol @fegin @fduwjj @wz337 @wconstab @d4l3k @c-p-i-o
| true
|
2,930,273,966
|
changing linear layer initialization formula in docs
|
karanjakhar
|
closed
|
[
"open source"
] | 4
|
NONE
|
Fixes #149474
| true
|
2,930,270,273
|
There is some discrepancy between the documented explanation and the code implementation of optim.SGD() when maximize=True is used
|
l1351868270
|
closed
|
[
"module: docs",
"module: optimizer",
"triaged",
"actionable"
] | 2
|
NONE
|
### 🐛 Describe the bug
In the documentation (https://pytorch.org/docs/stable/generated/torch.optim.SGD.html), the loop is:

It means:
1.get the original grad, $g_{t} = og_{t}$
2.weight_decay != 0, update the grad, $g_{t} = og_{t} + λθ_{t-1}$
3.momentum != 0, update the grad,
$b_{t} = μb_{t-1} + (1-τ)g_{t} = μb_{t-1} + (1-τ)(og_{t} + λθ_{t-1})$ ,
then
$g_{t} = b_{t} = μb_{t-1} + (1-τ)(og_{t} + λθ_{t-1})$
or
$g_{t} = g_{t} + μb_{t} = g_{t} + μ(μb_{t-1} + (1-τ)g_{t}) = μ^{2}b_{t-1} + (1 + μ(1-τ))g_{t} = μ^{2}b_{t-1} + (1 + μ(1-τ))(og_{t} + λθ_{t-1})$
4.if maximize == True, update the params,
$θ_{t} = θ_{t-1} + γ(μb_{t-1} + (1-τ)(og_{t} + λθ_{t-1})) = (1 + γ(1-τ)λ)θ_{t-1} + γμb_{t-1} + γ(1-τ)og_{t}$
or
$θ_{t} = θ_{t-1} + γ(μ^{2}b_{t-1} + (1 + μ(1-τ))(og_{t} + λθ_{t-1})) =(1 + γ(1 + μ(1-τ))λ)θ_{t-1} + γμ^{2}b_{t-1} + γ(1 + μ(1-τ))og_{t}$
In the code implementation, the loop is:
```
for i, param in enumerate(params):
grad = grads[i] if not maximize else -grads[i]
if weight_decay != 0:
# Nested if is necessary to bypass jitscript rules
if isinstance(weight_decay, Tensor):
if weight_decay.requires_grad:
# usually this is the differentiable path, which is why the param.clone() is needed
grad = grad.addcmul_(param.clone(), weight_decay)
else:
grad = grad.add(param, alpha=weight_decay)
else:
grad = grad.add(param, alpha=weight_decay)
if momentum != 0:
buf = momentum_buffer_list[i]
if buf is None:
buf = torch.clone(grad).detach()
momentum_buffer_list[i] = buf
else:
buf.mul_(momentum).add_(grad, alpha=1 - dampening)
if nesterov:
grad = grad.add(buf, alpha=momentum)
else:
grad = buf
# Nested if is necessary to bypass jitscript rules
if isinstance(lr, Tensor):
if lr.requires_grad:
param.addcmul_(grad, lr, value=-1)
else:
param.add_(grad, alpha=-lr)
else:
param.add_(grad, alpha=-lr)
```
It means:
1.get the original grad, $g_{t} = -og_{t}$
2.weight_decay != 0, update the grad, $g_{t} = -og_{t} + λθ_{t-1}$
3.momentum != 0, update the grad,
$b_{t} = μb_{t-1} + (1-τ)g_{t} = μb_{t-1} + (1-τ)(-og_{t} + λθ_{t-1})$ ,
then
$g_{t} = b_{t} = μb_{t-1} + (1-τ)(-og_{t} + λθ_{t-1})$
or
$g_{t} = g_{t} + μb_{t} = g_{t} + μ(μb_{t-1} + (1-τ)g_{t}) = μ^{2}b_{t-1} + (1 + μ (1-τ))g_{t} = μ^{2}b_{t-1} + (1 + μ (1-τ))(-og_{t} + λθ_{t-1})$
4.if maximize == True, update the params,
$θ_{t} = θ_{t-1} - γ(μb_{t-1} + (1-τ)(-og_{t} + λθ_{t-1})) = (1-γ(1-τ)λ)θ_{t-1} - γμb_{t-1} + γ(1-τ)og_{t}$
or
$θ_{t} = θ_{t-1} - γ(μ^{2}b_{t-1} + (1 + μ (1-τ))(-og_{t} + λθ_{t-1}))= (1-γ(1 + μ (1-τ))λ)θ_{t-1}- γμ^{2}b_{t-1} + γ(1 + μ (1-τ))og_{t}$
We can see that only the $og_{t}$ term is the same; the $θ_{t-1}$ and $b_{t-1}$ terms are all different.
I also implemented a version based on the documentation (https://github.com/l1351868270/ld_triton/blob/main/ld_triton/optim/sgd/naive_sgd.py); when maximize == True, its result differs from the PyTorch implementation.
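For illustration, here is a minimal one-step scalar sketch of the two orderings (the values are hypothetical; it only shows how the $θ_{t-1}$ and $b_{t-1}$ terms enter with opposite signs while the $og_{t}$ term matches):
```python
lr, wd, mu, tau = 0.1, 0.01, 0.9, 0.0   # γ, λ, μ, τ
theta, buf, og = 1.0, 0.5, 2.0          # θ_{t-1}, b_{t-1}, og_t

# documented ordering: negate only in the final parameter update (maximize)
g_doc = og + wd * theta
b_doc = mu * buf + (1 - tau) * g_doc
theta_doc = theta + lr * b_doc

# implemented ordering: negate the gradient before weight decay / momentum
g_code = -og + wd * theta
b_code = mu * buf + (1 - tau) * g_code
theta_code = theta - lr * b_code

print(theta_doc, theta_code)  # approximately 1.246 vs 1.154
```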
### Versions
master
cc @svekars @sekyondaMeta @AlannaBurke @vincentqb @jbschlosser @albanD @janeyx99 @crcrpar
| true
|
2,930,269,577
|
IndexError in linear_binary when X and Y are the same with max-autotune enabled
|
CaoE
|
open
|
[
"oncall: pt2",
"oncall: cpu inductor"
] | 0
|
COLLABORATOR
|
### 🐛 Describe the bug
When x and y in the inputs are the same tensor and max-autotune is enabled, an IndexError occurs.
Simple reproducer:
```
class Model(torch.nn.Module):
def __init__(self):
super().__init__()
self.linear = torch.nn.Linear(1024, 1024)
def forward(self, input):
out = self.linear(input)
out = out + input
return out
if __name__ == "__main__":
input = torch.randn(1024, 1024)
m = Model().eval()
dtype = torch.bfloat16
input = input.to(dtype)
with torch.autocast(enabled=True, device_type="cpu", dtype=dtype):
c_m = torch.compile(m, mode="max-autotune")
inductor_res = c_m(input)
```
```
Traceback (most recent call last):
File "pytorchs/test/test_linear.py", line 72, in <module>
inductor_res = c_m(input)
File "pytorchs/pytorch/torch/nn/modules/module.py", line 1751, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
File "pytorchs/pytorch/torch/nn/modules/module.py", line 1762, in _call_impl
return forward_call(*args, **kwargs)
File "pytorchs/pytorch/torch/_dynamo/eval_frame.py", line 663, in _fn
raise e.remove_dynamo_frames() from None # see TORCHDYNAMO_VERBOSE=1
File "pytorchs/pytorch/torch/_dynamo/output_graph.py", line 1544, in _call_user_compiler
raise BackendCompilerFailed(
File "pytorchs/pytorch/torch/_dynamo/output_graph.py", line 1519, in _call_user_compiler
compiled_fn = compiler_fn(gm, self.example_inputs())
File "pytorchs/pytorch/torch/_dynamo/repro/after_dynamo.py", line 150, in __call__
compiled_gm = compiler_fn(gm, example_inputs)
File "pytorchs/pytorch/torch/__init__.py", line 2349, in __call__
return compile_fx(model_, inputs_, config_patches=self.config)
File "pytorchs/pytorch/torch/_inductor/compile_fx.py", line 1745, in compile_fx
return compile_fx(
File "pytorchs/pytorch/torch/_inductor/compile_fx.py", line 2103, in compile_fx
return aot_autograd(
File "pytorchs/pytorch/torch/_dynamo/backends/common.py", line 101, in __call__
cg = aot_module_simplified(gm, example_inputs, **self.kwargs)
File "pytorchs/pytorch/torch/_functorch/aot_autograd.py", line 1160, in aot_module_simplified
compiled_fn = AOTAutogradCache.load(
File "pytorchs/pytorch/torch/_functorch/_aot_autograd/autograd_cache.py", line 775, in load
compiled_fn = dispatch_and_compile()
File "pytorchs/pytorch/torch/_functorch/aot_autograd.py", line 1145, in dispatch_and_compile
compiled_fn, _ = create_aot_dispatcher_function(
File "pytorchs/pytorch/torch/_functorch/aot_autograd.py", line 570, in create_aot_dispatcher_function
return _create_aot_dispatcher_function(
File "pytorchs/pytorch/torch/_functorch/aot_autograd.py", line 820, in _create_aot_dispatcher_function
compiled_fn, fw_metadata = compiler_fn(
File "pytorchs/pytorch/torch/_functorch/_aot_autograd/jit_compile_runtime_wrappers.py", line 219, in aot_dispatch_base
compiled_fw = compiler(fw_module, updated_flat_args)
File "pytorchs/pytorch/torch/_inductor/compile_fx.py", line 1643, in fw_compiler_freezing
optimized_function = inner_compile(
File "miniforge3/envs/ecao/lib/python3.10/contextlib.py", line 79, in inner
return func(*args, **kwds)
File "pytorchs/pytorch/torch/_inductor/compile_fx.py", line 628, in compile_fx_inner
return wrap_compiler_debug(_compile_fx_inner, compiler_name="inductor")(
File "pytorchs/pytorch/torch/_dynamo/repro/after_aot.py", line 124, in debug_wrapper
inner_compiled_fn = compiler_fn(gm, example_inputs)
File "pytorchs/pytorch/torch/_inductor/compile_fx.py", line 735, in _compile_fx_inner
mb_compiled_graph = fx_codegen_and_compile(
File "pytorchs/pytorch/torch/_inductor/compile_fx.py", line 1309, in fx_codegen_and_compile
return scheme.codegen_and_compile(gm, example_inputs, inputs_to_check, graph_kwargs)
File "pytorchs/pytorch/torch/_inductor/compile_fx.py", line 1128, in codegen_and_compile
graph.run(*example_inputs)
File "pytorchs/pytorch/torch/_inductor/graph.py", line 879, in run
return super().run(*args)
File "pytorchs/pytorch/torch/fx/interpreter.py", line 171, in run
self.env[node] = self.run_node(node)
File "pytorchs/pytorch/torch/_inductor/graph.py", line 1529, in run_node
result = super().run_node(n)
File "pytorchs/pytorch/torch/fx/interpreter.py", line 240, in run_node
return getattr(self, n.op)(n.target, args, kwargs)
File "pytorchs/pytorch/torch/_inductor/graph.py", line 1125, in call_function
return target(*args, **kwargs)
File "pytorchs/pytorch/torch/_inductor/fx_passes/mkldnn_fusion.py", line 620, in fn
return L[fusion_op](*computation_args)
File "pytorchs/pytorch/torch/_inductor/lowering.py", line 466, in wrapped
out = decomp_fn(*args, **kwargs)
File "pytorchs/pytorch/torch/_inductor/mkldnn_lowerings.py", line 349, in linear_binary
result = autotune_select_algorithm(
File "pytorchs/pytorch/torch/_inductor/select_algorithm.py", line 2345, in autotune_select_algorithm
return _ALGORITHM_SELECTOR_CACHE(*args, **kwargs)
File "pytorchs/pytorch/torch/_inductor/select_algorithm.py", line 1985, in __call__
timings = do_autotuning(precompile_fn)
File "pytorchs/pytorch/torch/_inductor/select_algorithm.py", line 1913, in do_autotuning
timings = self.lookup(
File "pytorchs/pytorch/torch/_inductor/codecache.py", line 321, in lookup
timings = benchmark(choices)
File "pytorchs/pytorch/torch/_inductor/select_algorithm.py", line 1893, in autotune
return make_benchmark_fn()(choices)
File "pytorchs/pytorch/torch/_inductor/select_algorithm.py", line 2119, in benchmark_in_current_process
raise e from None
File "pytorchs/pytorch/torch/_inductor/select_algorithm.py", line 2083, in benchmark_in_current_process
timing = benchmark_choice_in_current_process(choice, inputs)
File "pytorchs/pytorch/torch/_inductor/select_algorithm.py", line 2063, in benchmark_choice_in_current_process
result = choice.benchmark(*inpts, out=output)
File "pytorchs/pytorch/torch/_inductor/select_algorithm.py", line 1535, in benchmark
new_args, new_out = self._preprocessor(args, out)
File "pytorchs/pytorch/torch/_inductor/codegen/cpp_gemm_template.py", line 937, in preprocessor
*maybe_to_dense(*reorder_and_filter(inputs, layout))
File "pytorchs/pytorch/torch/_inductor/codegen/cpp_gemm_template.py", line 846, in reorder_and_filter
inputs[inp_idx],
torch._dynamo.exc.BackendCompilerFailed: backend='inductor' raised:
IndexError: tuple index out of range
```
### Versions
latest Pytorch.
cc @chauhang @penguinwu
| true
|
2,930,265,708
|
nn.Linear layer initialization formula wrong in docs
|
karanjakhar
|
closed
|
[
"module: docs",
"module: nn",
"triaged",
"actionable"
] | 2
|
NONE
|
### 📚 The doc issue

But in implementation it's:

### Suggest a potential alternative/fix
It should be:

cc @svekars @sekyondaMeta @AlannaBurke @albanD @mruberry @jbschlosser @walterddr @mikaylagawarecki
| true
|
2,930,265,257
|
[Intel GPU][PT2E] bugfix: use zero-point to decide conv src zp mask
|
ZhiweiYan-96
|
closed
|
[
"module: cpu",
"open source",
"Merged",
"ciflow/trunk",
"topic: not user facing",
"ciflow/xpu"
] | 8
|
COLLABORATOR
|
# Motivation
The PR fixes a bug in how the zero-point mask is decided. Specifically, the code deems the zero-point to always be non-zero because the scale, rather than the zero-point, is used for the check. Fortunately, the bug only affects performance; accuracy is not affected.
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #149473
cc @jgong5 @mingfeima @XiaobingSuper @sanchitintel @ashokei @jingxu10
| true
|
2,930,258,252
|
torch.compile(mode="max-autotune") produces different outputs from eager mode
|
tinywisdom
|
closed
|
[
"triaged",
"oncall: pt2",
"module: inductor",
"topic: fuzzer"
] | 4
|
NONE
|
### 🐛 Describe the bug
I'm encountering a result mismatch between eager mode and `torch.compile(mode="max-autotune")`.
The outputs differ beyond acceptable tolerances (e.g., `torch.allclose` fails), and this behavior persists in both stable and nightly builds.
### Related Discussion
I initially posted this issue on the PyTorch discussion forum, but have not received a resolution so far.
Here is the link to the original thread:
https://discuss.pytorch.org/t/torch-compile-mode-max-autotune-produces-different-inference-result-from-eager-mode-is-this-expected/217873
Since this appears to be a reproducible and version-independent issue, I'm now submitting it here as a formal GitHub issue.
### Versions
- PyTorch 2.5.1 (original test)
- PyTorch 2.6.0.dev20241112+cu121 (nightly)
- CUDA 12.1
- Platform: Ubuntu 22.04.4 LTS
### Output
=== Detailed comparison ===
- Total number of elements: 3,211,264
- Max absolute error: 0.00128412
- Mean absolute error: 0.000100889
- Max relative error: 23,868.7
- Mean relative error: 0.285904
- Number of elements exceeding tolerance: 98,102
- Percentage of out-of-tolerance elements: 3.05%
- Result of torch.allclose(output_eager, output_compiled, atol=1e-5): False
### Model
Here is my model:
```python
import torch.nn as nn
class BaseConv(nn.Module):
def __init__(self, in_channels, out_channels, kernel_size, stride, padding, conv_layer):
super().__init__()
self.conv = conv_layer(in_channels, out_channels, kernel_size=kernel_size, stride=stride, padding=padding, bias=False)
def forward(self, x):
return self.conv(x)
class ActivatedConv(BaseConv):
def __init__(self, in_channels, out_channels, kernel_size, stride, padding, conv_layer, activation):
super().__init__(in_channels, out_channels, kernel_size, stride, padding, conv_layer)
self.activation = activation
def forward(self, x):
return self.activation(self.conv(x))
class NormalizedConv(ActivatedConv):
def __init__(self, in_channels, out_channels, kernel_size, stride, padding, conv_layer, norm, activation):
super().__init__(in_channels, out_channels, kernel_size, stride, padding, conv_layer, activation)
self.norm = norm(out_channels)
def forward(self, x):
return self.activation(self.norm(self.conv(x)))
class Conv2DBNReLU(NormalizedConv):
def __init__(self, in_channels, out_channels, kernel_size, stride, padding):
super().__init__(in_channels, out_channels, kernel_size, stride, padding, nn.Conv2d, nn.BatchNorm2d, nn.ReLU())
class MyModel(nn.Module):
def __init__(self, in_channels=3, out_channels=64, kernel_size=3, stride=1, padding=1):
super().__init__()
self.conv1 = Conv2DBNReLU(in_channels, out_channels, kernel_size, stride, padding)
def forward(self, x):
return self.conv1(x)
def my_model_function(in_channels=3, out_channels=64, kernel_size=3, stride=1, padding=1):
return MyModel(in_channels, out_channels, kernel_size, stride, padding)
if __name__ == "__main__":
model = my_model_function()
print(model)
```
### Minimal Script
And this is a minimal script that reproduces the issue:
```python
import torch
import importlib.util
import os
def load_model_from_file(module_path, model_function_name="my_model_function"):
model_file = os.path.basename(module_path)[:-3]
spec = importlib.util.spec_from_file_location(model_file, module_path)
model_module = importlib.util.module_from_spec(spec)
spec.loader.exec_module(model_module)
model_function = getattr(model_module, model_function_name)
model = model_function()
return model
def compare_outputs(a: torch.Tensor, b: torch.Tensor, atol=1e-5, rtol=1e-3):
print("=== Output difference comparison ===")
diff = a - b
abs_diff = diff.abs()
rel_diff = abs_diff / (a.abs() + 1e-8)
total_elements = a.numel()
print(f"- Total elements: {total_elements}")
print(f"- Max absolute error: {abs_diff.max().item():.8f}")
print(f"- Mean absolute error: {abs_diff.mean().item():.8f}")
print(f"- Max relative error: {rel_diff.max().item():.8f}")
print(f"- Mean relative error: {rel_diff.mean().item():.8f}")
num_exceed = (~torch.isclose(a, b, atol=atol, rtol=rtol)).sum().item()
print(f"- Elements exceeding tolerance: {num_exceed}")
print(f"- Percentage exceeding tolerance: {100.0 * num_exceed / total_elements:.4f}%")
print(f"- torch.allclose: {torch.allclose(a, b, atol=atol, rtol=rtol)}")
if __name__ == "__main__":
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
input_tensor = torch.rand(1, 3, 224, 224, device=device)
model_path = "xxx/xxx/xxx/xxx.py"
model = load_model_from_file(model_path).to(device).eval()
with torch.no_grad():
output_eager = model(input_tensor)
compiled_model = torch.compile(model, mode="max-autotune")
with torch.no_grad():
output_compiled = compiled_model(input_tensor)
compare_outputs(output_eager, output_compiled)
```
### Versions
### Nightly
```
[pip3] numpy==2.1.2
[pip3] nvidia-cublas-cu12==12.1.3.1
[pip3] nvidia-cuda-cupti-cu12==12.1.105
[pip3] nvidia-cuda-nvrtc-cu12==12.1.105
[pip3] nvidia-cuda-runtime-cu12==12.1.105
[pip3] nvidia-cudnn-cu12==9.1.0.70
[pip3] nvidia-cufft-cu12==11.0.2.54
[pip3] nvidia-curand-cu12==10.3.2.106
[pip3] nvidia-cusolver-cu12==11.4.5.107
[pip3] nvidia-cusparse-cu12==12.1.0.106
[pip3] nvidia-cusparselt-cu12==0.6.2
[pip3] nvidia-nccl-cu12==2.21.5
[pip3] nvidia-nvjitlink-cu12==12.1.105
[pip3] nvidia-nvtx-cu12==12.1.105
[pip3] pytorch-triton==3.1.0+cf34004b8a
[pip3] torch==2.6.0.dev20241112+cu121
[pip3] torchaudio==2.5.0.dev20241112+cu121
[pip3] torchvision==0.20.0.dev20241112+cu121
[conda] numpy 2.1.2 pypi_0 pypi
[conda] nvidia-cublas-cu12 12.1.3.1 pypi_0 pypi
[conda] nvidia-cuda-cupti-cu12 12.1.105 pypi_0 pypi
[conda] nvidia-cuda-nvrtc-cu12 12.1.105 pypi_0 pypi
[conda] nvidia-cuda-runtime-cu12 12.1.105 pypi_0 pypi
[conda] nvidia-cudnn-cu12 9.1.0.70 pypi_0 pypi
[conda] nvidia-cufft-cu12 11.0.2.54 pypi_0 pypi
[conda] nvidia-curand-cu12 10.3.2.106 pypi_0 pypi
[conda] nvidia-cusolver-cu12 11.4.5.107 pypi_0 pypi
[conda] nvidia-cusparse-cu12 12.1.0.106 pypi_0 pypi
[conda] nvidia-cusparselt-cu12 0.6.2 pypi_0 pypi
[conda] nvidia-nccl-cu12 2.21.5 pypi_0 pypi
[conda] nvidia-nvjitlink-cu12 12.1.105 pypi_0 pypi
[conda] nvidia-nvtx-cu12 12.1.105 pypi_0 pypi
[conda] pytorch-triton 3.1.0+cf34004b8a pypi_0 pypi
[conda] torch 2.6.0.dev20241112+cu121 pypi_0 pypi
[conda] torchaudio 2.5.0.dev20241112+cu121 pypi_0 pypi
[conda] torchvision 0.20.0.dev20241112+cu121 pypi_0 pypi
```
### Original
```
[pip3] numpy==1.26.4
[pip3] nvidia-cublas-cu12==12.5.3.2
[pip3] nvidia-cuda-cupti-cu12==12.5.82
[pip3] nvidia-cuda-nvrtc-cu12==12.5.82
[pip3] nvidia-cuda-runtime-cu12==12.5.82
[pip3] nvidia-cudnn-cu12==9.3.0.75
[pip3] nvidia-cufft-cu12==11.2.3.61
[pip3] nvidia-curand-cu12==10.3.6.82
[pip3] nvidia-cusolver-cu12==11.6.3.83
[pip3] nvidia-cusparse-cu12==12.5.1.3
[pip3] nvidia-nccl-cu12==2.21.5
[pip3] nvidia-nvjitlink-cu12==12.5.82
[pip3] optree==0.13.1
[pip3] torch==2.5.1
[pip3] torchaudio==2.5.1
[pip3] torchdata==0.10.0
[pip3] torchvision==0.20.1
[pip3] triton==3.1.0
[conda] blas 1.0 mkl defaults
[conda] cuda-cudart 12.1.105 0 nvidia
[conda] cuda-cupti 12.1.105 0 nvidia
[conda] cuda-libraries 12.1.0 0 nvidia
[conda] cuda-nvrtc 12.1.105 0 nvidia
[conda] cuda-nvtx 12.1.105 0 nvidia
[conda] cuda-opencl 12.6.77 0 nvidia
[conda] cuda-runtime 12.1.0 0 nvidia
[conda] ffmpeg 4.3 hf484d3e_0 pytorch
[conda] libblas 3.9.0 16_linux64_mkl conda-forge
[conda] libcblas 3.9.0 16_linux64_mkl conda-forge
[conda] libcublas 12.1.0.26 0 nvidia
[conda] libcufft 11.0.2.4 0 nvidia
[conda] libcurand 10.3.7.77 0 nvidia
[conda] libcusolver 11.4.4.55 0 nvidia
[conda] libcusparse 12.0.2.55 0 nvidia
[conda] liblapack 3.9.0 16_linux64_mkl conda-forge
[conda] libnvjitlink 12.1.105 0 nvidia
[conda] mkl 2022.1.0 hc2b9512_224 defaults
[conda] numpy 1.26.4 py310hb13e2d6_0 conda-forge
[conda] nvidia-cublas-cu12 12.5.3.2 pypi_0 pypi
[conda] nvidia-cuda-cupti-cu12 12.5.82 pypi_0 pypi
[conda] nvidia-cuda-nvrtc-cu12 12.5.82 pypi_0 pypi
[conda] nvidia-cuda-runtime-cu12 12.5.82 pypi_0 pypi
[conda] nvidia-cudnn-cu12 9.3.0.75 pypi_0 pypi
[conda] nvidia-cufft-cu12 11.2.3.61 pypi_0 pypi
[conda] nvidia-curand-cu12 10.3.6.82 pypi_0 pypi
[conda] nvidia-cusolver-cu12 11.6.3.83 pypi_0 pypi
[conda] nvidia-cusparse-cu12 12.5.1.3 pypi_0 pypi
[conda] nvidia-nccl-cu12 2.21.5 pypi_0 pypi
[conda] nvidia-nvjitlink-cu12 12.5.82 pypi_0 pypi
[conda] optree 0.13.1 pypi_0 pypi
[conda] pytorch 2.5.1 py3.10_cuda12.1_cudnn9.1.0_0 pytorch
[conda] pytorch-cuda 12.1 ha16c6d3_6 pytorch
[conda] pytorch-mutex 1.0 cuda pytorch
[conda] torchaudio 2.5.1 py310_cu121 pytorch
[conda] torchdata 0.10.0 pypi_0 pypi
[conda] torchtriton 3.1.0 py310 pytorch
[conda] torchvision 0.20.1 py310_cu121 pytorch
```
cc @chauhang @penguinwu @voznesenskym @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @chenyang78 @kadeng @muchulee8 @amjames @aakhundov
| true
|
2,930,251,005
|
Pin auditwheel to 6.2.0
|
atalman
|
closed
|
[
"Merged",
"ciflow/binaries",
"topic: not user facing"
] | 5
|
CONTRIBUTOR
|
Observing aarch64 failure in nightly:
https://github.com/pytorch/pytorch/actions/runs/13917778961/job/38943911228
Similar to: https://github.com/pytorch/vision/pull/8982
```
2025-03-18T08:44:58.4128744Z Repairing Wheel with AuditWheel
2025-03-18T08:44:58.5440988Z INFO:auditwheel.main_repair:Repairing torch-2.8.0.dev20250318+cpu-cp39-cp39-linux_aarch64.whl
2025-03-18T08:45:20.3393288Z Traceback (most recent call last):
2025-03-18T08:45:20.3393732Z File "/opt/python/cp39-cp39/bin/auditwheel", line 8, in <module>
2025-03-18T08:45:20.3394115Z sys.exit(main())
2025-03-18T08:45:20.3394559Z File "/opt/_internal/cpython-3.9.21/lib/python3.9/site-packages/auditwheel/main.py", line 53, in main
2025-03-18T08:45:20.3395064Z result: int | None = args.func(args, p)
2025-03-18T08:45:20.3395626Z File "/opt/_internal/cpython-3.9.21/lib/python3.9/site-packages/auditwheel/main_repair.py", line 203, in execute
2025-03-18T08:45:20.3396163Z out_wheel = repair_wheel(
2025-03-18T08:45:20.3396657Z File "/opt/_internal/cpython-3.9.21/lib/python3.9/site-packages/auditwheel/repair.py", line 84, in repair_wheel
2025-03-18T08:45:20.3397184Z raise ValueError(msg)
2025-03-18T08:45:20.3397620Z ValueError: Cannot repair wheel, because required library "libarm_compute.so" could not be located
2025-03-18T08:45:20.3678843Z Traceback (most recent call last):
2025-03-18T08:45:20.3679267Z File "/pytorch/.ci/aarch64_linux/aarch64_wheel_ci_build.py", line 236, in <module>
2025-03-18T08:45:20.3680988Z pytorch_wheel_name = complete_wheel("/pytorch/")
2025-03-18T08:45:20.3681449Z File "/pytorch/.ci/aarch64_linux/aarch64_wheel_ci_build.py", line 141, in complete_wheel
2025-03-18T08:45:20.3681976Z check_call(["auditwheel", "repair", f"dist/{wheel_name}"], cwd=folder)
2025-03-18T08:45:20.3682860Z File "/opt/python/cp39-cp39/lib/python3.9/subprocess.py", line 373, in check_call
2025-03-18T08:45:20.3683308Z raise CalledProcessError(retcode, cmd)
2025-03-18T08:45:20.3684034Z subprocess.CalledProcessError: Command '['auditwheel', 'repair', 'dist/torch-2.8.0.dev20250318+cpu-cp39-cp39-linux_aarch64.whl']' returned non-zero exit status 1.
2025-03-18T08:45:20.3790063Z ##[error]Process completed with exit code 1.
2025-03-18T08:45:20.3862012Z ##[group]Run pytorch/test-infra/.github/actions/teardown-linux@main
2025-03-18T08:45:20.3862448Z with:
```
Please note aarch64 CUDA failures are related to: https://github.com/pytorch/pytorch/pull/149351
| true
|
2,930,229,896
|
`ParallelStyle`s (ColwiseParallel, etc) do not have a `__repr__()`
|
apaz-cli
|
closed
|
[
"oncall: distributed"
] | 2
|
NONE
|
### 🚀 The feature, motivation and pitch
I'm writing a TP plan for a new model, and it's not possible to print the plan dict in a copy/paste-able form, which makes debugging the parallelism strategies from torchtitan much harder. `Placement` objects already have a `__repr__`, so this should be easy to support.
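A minimal sketch of what this could look like (the attribute names below are assumptions about `ColwiseParallel` internals, not a confirmed API):
```python
from torch.distributed.tensor.parallel import ColwiseParallel

class ReprColwiseParallel(ColwiseParallel):
    def __repr__(self) -> str:
        # Placement objects already have useful reprs, so just reuse them.
        return (
            f"ColwiseParallel(input_layouts={self.input_layouts}, "
            f"output_layouts={self.output_layouts}, "
            f"use_local_output={self.use_local_output})"
        )

tp_plan = {"attention.wq": ReprColwiseParallel()}
print(tp_plan)  # now copy/paste-able
```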
### Alternatives
_No response_
### Additional context
_No response_
cc @H-Huang @awgu @kwen2501 @wanchaol @fegin @fduwjj @wz337 @wconstab @d4l3k @c-p-i-o
| true
|
2,930,209,865
|
Refactor `test/test_torch.py` by moving testcase to `test_indexing.py`
|
zeshengzong
|
open
|
[
"triaged",
"open source",
"topic: not user facing"
] | 7
|
CONTRIBUTOR
|
Following #148875
Fix `FIXME` in `test_torch.py` by moving test-cases to `test_indexing.py`
```python
# FIXME: move to test indexing
# FIXME: move to indexing test suite
```
- Move tests in `test/test_torch.py` to `test_indexing.py`
- Remove `FIXME` comments
## TestResult
```bash
pytest test/test_torch.py -k TestTorchDeviceType -vv
pytest test/test_indexing.py -k TestIndexing -vv
```


| true
|
2,930,194,738
|
torch.library.opcheck doesn't check strides for CPU Tensors
|
zou3519
|
open
|
[
"high priority",
"module: cpp-extensions",
"triaged",
"module: custom-operators",
"module: library",
"module: pt2-dispatcher"
] | 3
|
CONTRIBUTOR
|
Repro:
```py
import torch
from torchvision.transforms.functional import to_pil_image, pil_to_tensor
import PIL
def crop(pic, box):
img = to_pil_image(pic.cpu())
cropped_img = img.crop(box)
return pil_to_tensor(cropped_img).to(pic.device) / 255.
img = torch.ones(3, 64, 64)
img *= torch.linspace(0, 1, steps=64) * torch.linspace(0, 1, steps=64).unsqueeze(-1)
cropped_img = crop(img, (10, 10, 50, 50))
def f(img):
return crop(img, (10, 10, 50, 50))
cropped_img = f(img)
print(img.shape, img.stride())
print(cropped_img.shape, cropped_img.stride())
from typing import Sequence
@torch.library.custom_op("mylib::crop", mutates_args=())
def crop(pic: torch.Tensor, box: Sequence[int]) -> torch.Tensor:
img = to_pil_image(pic.cpu())
cropped_img = img.crop(box)
result = (pil_to_tensor(cropped_img) / 255.).to(pic.device, pic.dtype)
return result
@crop.register_fake
def _(pic, box):
channels = pic.shape[0]
x0, y0, x1, y1 = box
# result = pic.new_empty(y1 - y0, x1 - x0, channels).permute(2, 0, 1)
result = pic.new_empty(channels, y1 - y0, x1 - x0)
return result
result = torch.library.opcheck(crop, (img, (10, 10, 50, 50)))
print(result)
```
cc @ezyang @gchanan @kadeng @msaroufim @malfet @xmfan @anjali411 @chauhang @penguinwu @bdhirsh
| true
|
2,930,114,308
|
[audio hash update] update the pinned audio hash
|
pytorchupdatebot
|
closed
|
[
"open source",
"Merged",
"ciflow/trunk",
"topic: not user facing",
"ciflow/inductor"
] | 3
|
COLLABORATOR
|
This PR is auto-generated nightly by [this action](https://github.com/pytorch/pytorch/blob/main/.github/workflows/nightly.yml).
Update the pinned audio hash.
| true
|
2,930,102,826
|
[ROCm] support experimental CU carveout
|
jeffdaily
|
open
|
[
"module: rocm",
"triaged",
"open source",
"release notes: rocm",
"ciflow/rocm"
] | 1
|
COLLABORATOR
|
Fixes #149280. Follow up to #147966, but now available for ROCm.
Since hipblaslt does not support HIPBLASLT_MATMUL_DESC_CU_COUNT_TARGET, we instead create a hipStream that has a CU mask applied. We pass this masked stream to hipblaslt instead of pytorch's current stream. We ensure stream ordering between streams using hipEvents and stream synchronization.
cc @sunway513 @jithunnair-amd @pruthvistony @ROCmSupport @dllehr-amd @jataylo @hongxiayang @naromero77amd
| true
|
2,930,102,458
|
[export] Beef up guard_added logs
|
angelayi
|
closed
|
[
"Merged",
"ciflow/trunk",
"fx",
"ciflow/inductor",
"release notes: export"
] | 3
|
CONTRIBUTOR
|
cc @ezyang @SherlockNoMad @EikanWang @jgong5 @wenzhe-nrv
| true
|
2,930,094,489
|
Catch OSError in general when writing files
|
HollowMan6
|
closed
|
[
"triaged",
"open source",
"Merged",
"ciflow/trunk",
"topic: not user facing",
"module: dynamo"
] | 9
|
CONTRIBUTOR
|
Redundant exception types in `except (PermissionError, OSError):`. Write `except OSError:`, which catches exactly the same exceptions.
https://github.com/pytorch/pytorch/actions/runs/13935844871/job/39141062991
When hipifying files or writing cProfile files, catching only PermissionError is not enough: the file may live in a location that is not writable at all, or other OS errors may happen while writing.
This fix makes the code more robust.
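For illustration, a minimal before/after sketch of the pattern (the helper below is hypothetical, not code from this PR):
```python
def write_file(path: str, contents: str) -> None:
    with open(path, "w", encoding="utf-8") as f:
        f.write(contents)

# Before: PermissionError is redundant, since it is a subclass of OSError.
try:
    write_file("/read-only/out.txt", "hello")
except (PermissionError, OSError):
    print("write failed")

# After: exactly the same exceptions are caught, including a read-only
# file system ([Errno 30]) and any other OS-level failure while writing.
try:
    write_file("/read-only/out.txt", "hello")
except OSError:
    print("write failed")
```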
Example error log:
```log
File "deepspeed/ops/adam/fused_adam.py", line 94, in __init__
fused_adam_cuda = FusedAdamBuilder().load()
^^^^^^^^^^^^^^^^^^^^^^^^^
File "deepspeed/ops/op_builder/builder.py", line 540, in load
return self.jit_load(verbose)
^^^^^^^^^^^^^^^^^^^^^^
File "deepspeed/ops/op_builder/builder.py", line 587, in jit_load
op_module = load(name=self.name,
^^^^^^^^^^^^^^^^^^^^
File "torch/utils/cpp_extension.py", line 1597, in load
return _jit_compile(
^^^^^^^^^^^^^
File "torch/utils/cpp_extension.py", line 2031, in _jit_compile
hipify_result = hipify_python.hipify(
^^^^^^^^^^^^^^^^^^^^^
File "torch/utils/hipify/hipify_python.py", line 1167, in hipify
preprocess_file_and_save_result(output_directory, filepath, all_files, header_include_dirs,
File "torch/utils/hipify/hipify_python.py", line 213, in preprocess_file_and_save_result
result = preprocessor(output_directory, filepath, all_files, header_include_dirs, stats,
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "torch/utils/hipify/hipify_python.py", line 940, in preprocessor
output_source = RE_QUOTE_HEADER.sub(mk_repl('#include "{0}"', True), output_source)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "torch/utils/hipify/hipify_python.py", line 919, in repl
preprocess_file_and_save_result(output_directory,
File "torch/utils/hipify/hipify_python.py", line 213, in preprocess_file_and_save_result
result = preprocessor(output_directory, filepath, all_files, header_include_dirs, stats,
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "torch/utils/hipify/hipify_python.py", line 986, in preprocessor
with clean_ctx.open(fout_path, 'w', encoding='utf-8') as fout:
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "torch/utils/hipify/hipify_python.py", line 123, in open
return open(fn, *args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^
OSError: [Errno 30] Read-only file system: 'deepspeed/ops/csrc/adam/multi_tensor_apply_hip.cuh'
```
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @chauhang @amjames
| true
|
2,930,072,971
|
support multinomial for dynamic num_samples
|
avikchaudhuri
|
closed
|
[
"fb-exported",
"Merged",
"ciflow/trunk",
"release notes: export"
] | 7
|
CONTRIBUTOR
|
Test Plan: added test
Fixes #149048
Differential Revision: D71434914
| true
|
2,930,063,823
|
Avoid recompilation caused by is_mm_compute_bound
|
laithsakka
|
open
|
[
"triaged",
"oncall: pt2",
"module: dynamic shapes"
] | 0
|
CONTRIBUTOR
|
From @Elias Ellison
`is_mm_compute_bound` is just there to avoid benchmarking cases where it is reliably unprofitable,
so in the dynamic-shape case we should probably just keep benchmarking on and not add a guard.
Here is my proposal to address this:
benchmarking is on by default, and we disable it only if the relevant conditions are statically known to be true.
internal post
https://fb.workplace.com/groups/8940092306109185/permalink/9211657442286002/
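A hedged sketch of what that could look like (the arithmetic-intensity formula and threshold are illustrative assumptions; `statically_known_true` is the existing utility that avoids adding guards on symbolic shapes):
```python
from torch.fx.experimental.symbolic_shapes import statically_known_true

def should_benchmark(m, n, k, intensity_threshold: float = 64.0) -> bool:
    # m, n, k may be plain ints or SymInts. Rather than guarding on the
    # current hint values (which would force a recompile when they change),
    # only skip benchmarking when "reliably unprofitable" can be proven for
    # every possible value of the symbolic dimensions.
    arithmetic_intensity = (m * n * k) / (m * k + k * n + m * n)
    reliably_unprofitable = statically_known_true(
        arithmetic_intensity < intensity_threshold
    )
    return not reliably_unprofitable
```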
cc @chauhang @penguinwu @ezyang @bobrenjc93
| true
|
2,930,052,153
|
Use enum to select floating point format in FbgemmEmbedding APIs
|
MatzeB
|
closed
|
[
"fb-exported"
] | 3
|
CONTRIBUTOR
|
Summary:
X-link: https://github.com/pytorch/FBGEMM/pull/3847
Most FBGemmEmbedding APIs currently feature a `bool is_bf16_out` parameter to differentiate between the float16 and bfloat16 format when the output array has type `uint16_t`.
I am in the process of adding E5M2 and E4M3FN formats for output arrays with type `uint8_t`. Instead of adding another parameter, I would like to change the `bool is_bf16_out` parameter to `enum FloatFormat` to make it easier to add new formats:
```
enum class FloatFormat {
DEFAULT,
FLOAT16,
BFLOAT16,
FP8_E5M2,
FP8_E4M3FN,
};
```
Test Plan: sandcastle
Differential Revision: D71432836
| true
|
2,930,037,985
|
Remove test_get_model_state_dict_del_memory
|
mori360
|
closed
|
[
"oncall: distributed",
"Merged",
"ciflow/trunk",
"topic: not user facing",
"release notes: distributed (checkpoint)"
] | 7
|
CONTRIBUTOR
|
test_get_model_state_dict_del_memory observes unexpected memory usage, leading to test failures.
Remove the test for now to avoid blocking the others.
cc @H-Huang @awgu @kwen2501 @wanchaol @fegin @fduwjj @wz337 @wconstab @d4l3k @c-p-i-o
| true
|
2,930,021,565
|
[xpu] set aot device flags in cpp_extension
|
jingxu10
|
closed
|
[
"open source",
"Merged",
"topic: not user facing",
"ciflow/xpu"
] | 31
|
COLLABORATOR
|
If PyTorch is compiled with an AOT device list containing only entries that start with "dg2", the `_get_sycl_arch_list()` function will pass an empty string to the `-device` argument of `ocloc` and then cause a compilation crash.
| true
|
2,930,005,954
|
[Graph Partition] Support symbol inputs
|
BoyuanFeng
|
closed
|
[
"Merged",
"ciflow/trunk",
"module: inductor",
"ciflow/inductor",
"release notes: inductor"
] | 3
|
CONTRIBUTOR
|
This PR supports symbol inputs to graph partition functions. Before this PR, we relied on `node.read_writes` to get partition inputs. However, this does not cover symbol inputs.
In this PR, for each graph partition, we collect all symbol inputs that are required to be in scope to successfully perform codegen (a sketch follows the list below), including:
- free symbols used in partition nodes.
- free symbols in partition input/node shapes, strides, and offsets. This is needed for recording cudagraphs for tensors with dynamic shapes.
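A hedged sketch of the collection idea, written over plain sympy expressions rather than the actual inductor IR types (the helper name and the layout tuple format are assumptions):
```python
import sympy

def collect_symbol_inputs(node_exprs, layouts):
    """node_exprs: sympy expressions used inside the partition's nodes.
    layouts: (sizes, strides, offset) tuples of partition inputs/outputs."""
    symbols = set()
    for expr in node_exprs:
        symbols |= sympy.sympify(expr).free_symbols
    for sizes, strides, offset in layouts:
        for expr in (*sizes, *strides, offset):
            symbols |= sympy.sympify(expr).free_symbols
    return sorted(symbols, key=str)

s0, s1 = sympy.symbols("s0 s1")
# A partition whose node touches s0*s1 elements and whose input has
# shape (s0, s1), strides (s1, 1), offset 0 needs both s0 and s1 in scope:
print(collect_symbol_inputs([s0 * s1], [((s0, s1), (s1, 1), 0)]))  # [s0, s1]
```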
### Note1: MutationLayout
In this example, node.layout is MutationLayoutSHOULDREMOVE. The symint from index `n` does not appear in the size, offset, or strides of node.layout; it appears in node.layout.target, so we need extra handling for it.
```python
x = torch.zeros(7, device="cuda")
def fn(n, a):
a[n] = -1
return a
opt_fn = torch.compile(fn, fullgraph=True)
for n in range(2, x.shape[0]):
opt_fn(n, x)
```
### Note2: Composability with Padded Tensor Subclass
W/o graph partition, Padded Tensor subclass lifts outer shapes to input arguments (i.e., arg0_1 for s0, arg1_1 for s1) but does not lift inner shapes (i.e., s2 and s3). Since cudagraph cache relies on integer inputs, it will cache on outer shapes and ignore inner shapes, which is bad.
```
def call(args):
arg0_1, arg1_1, arg2_1, arg3_1, arg4_1, arg5_1 = args
args.clear()
s0 = arg0_1
s1 = arg1_1
arg2_1_size = arg2_1.size()
s2 = arg2_1_size[0]
s3 = arg2_1_size[1]
assert_size_stride(arg2_1, (s2, s3), (s3, 1))
with torch.cuda._DeviceGuard(0):
torch.cuda.set_device(0)
buf0 = empty_strided_cuda((s2, s3), (s3, 1), torch.float32)
# Topologically Sorted Source Nodes: [x1, mul], Original ATen: [aten.add, aten.mul]
triton_poi_fused_add_mul_0_xnumel = s2*s3
stream0 = get_raw_stream(0)
triton_poi_fused_add_mul_0.run(arg2_1, buf0, triton_poi_fused_add_mul_0_xnumel, stream=stream0)
del arg2_1
return (buf0, s0, s1, s1, )
```
w/ graph partition, the partition function only includes tensor and inner shapes as inputs, to make sure the cudagraph caching is correct. Full Comparison: [code](https://www.internalfb.com/intern/diffing/?paste_number=1761674743)
```python
def call(self, args):
arg0_1, arg1_1, arg2_1, arg3_1, arg4_1, arg5_1 = args
args.clear()
s0 = arg0_1
s1 = arg1_1
arg2_1_size = arg2_1.size()
s2 = arg2_1_size[0]
s3 = arg2_1_size[1]
assert_size_stride(arg2_1, (s2, s3), (s3, 1))
partition0_args = [arg2_1, s2, s3]
del arg2_1
(buf0,) = self.partitions[0](partition0_args)
del partition0_args
return (buf0, s0, s1, s1, )
```
The number of cudagraphs is validated below: (also added to test)
```python
import torch
from padded_tensor import PaddedTensor
# Turning off graph_partition leads to
# torch._inductor.cudagraph_trees.get_container(0).tree_manager.new_graph_id().id=6
# at the end, which is wrong.
# torch._inductor.config.graph_partition = False
# Turning on graph_partition leads to
# torch._inductor.cudagraph_trees.get_container(0).tree_manager.new_graph_id().id=4
# at the end, which is correct.
torch._inductor.config.graph_partition = True
def f(x):
x1 = x + 1
return x1 * 2
compiled_f = torch.compile(f, mode="reduce-overhead")
def run(shape):
x = torch.randn(*shape, device="cuda")
pad_x = PaddedTensor.from_tensor(x, multipliers={0:4, 1:4})
assert hasattr(pad_x, "multipliers"), breakpoint()
eager_out = f(pad_x)
for _ in range(3):
compiled_out = compiled_f(pad_x)
compiled_out = compiled_f(pad_x)
assert eager_out.shape == compiled_out.shape
assert eager_out.tensor.shape == compiled_out.tensor.shape
assert torch.allclose(eager_out.tensor, compiled_out.tensor)
# static shape. record a NEW cudagraph. 1 cudagraph in total now.
run((2,3))
# outer shape is dynamic, leading to a new dynamo graph
# this new dynamo graph forces a NEW cudagraph. 2 cudagraphs in total now
run((3,4))
# outer shape changed but inner shape does not change
# so NO new cudagraph is recorded
run((2,2))
# inner shape is dynamic now, leading to a new dynamo graph
# this new dynamo graph forces a NEW cudagraph. 3 cudagraphs in total now
run((5,6))
# does NOT record a new cudagraph
run((7,8))
# record a NEW cudagraph. 4 cudagraphs in total now
run((10,11))
assert torch._inductor.cudagraph_trees.get_container(0).tree_manager.new_graph_id().id == 4
```
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
| true
|
2,929,981,311
|
Error when tracing torch.func.functional_call inside of a HOP
|
angelayi
|
open
|
[
"triaged",
"oncall: pt2",
"module: dynamo",
"module: higher order operators",
"module: pt2-dispatcher"
] | 1
|
CONTRIBUTOR
|
### 🐛 Describe the bug
I'm trying to add support for scanning over multiple layers, and have verified it works with _fake_scan, so now I'm trying to replace that call with the real `scan`. However, I am running into an error in torch.compile when it tries to trace the call to torch.func.functional_call within my scan function. I think the error is `speculate_subgraph: while introspecting scan_combine_fn, we were unable to trace function FunctoolsPartialVariable into a single graph. This means that Dynamo was unable to prove safety for this API and will fall back to eager-mode PyTorch, which could lead to a slowdown.`, but I might be reading some things incorrectly.
Here is the example test case:
```python
def test_scan_mods(self):
class M1(torch.nn.Module):
def __init__(self):
super().__init__()
self.linear = torch.nn.Linear(3, 3)
def forward(self, x):
return self.linear(x)
from torch._higher_order_ops.scan import _fake_scan, scan
def scan_layers(layers, inp):
assert len(layers) != 0
params = []
buffers = []
for layer in layers:
params.append(dict(layer.named_parameters()))
buffers.append(dict(layer.named_buffers()))
stacked_params = pytree.tree_map(lambda *t: torch.stack(t, dim=0), *params)
stacked_buffers = pytree.tree_map(lambda *t: torch.stack(t, dim=0), *buffers)
layer = copy.deepcopy(layers[0])
def scan_fn(carry, weights):
res = torch.func.functional_call(layer, weights, carry)
return res, weights
return scan(scan_fn, inp, (stacked_params, stacked_buffers))
class M(torch.nn.Module):
def __init__(self):
super().__init__()
self.layers = torch.nn.ModuleList([M1() for _ in range(5)])
self.m = M1()
def forward(self, x):
raise RuntimeError("skip")
def forward_orig(self, x):
for layer in self.layers:
x = layer(x)
return x
def forward_scan(self, x):
return scan_layers(self.layers, x)[0]
inp = torch.randn(3)
m = M()
m.forward = types.MethodType(forward_orig , m)
ep = torch.export.export(m, (torch.zeros(3),))
orig_out = m(inp)
self.assertTrue(torch.allclose(orig_out, ep.module()(inp)))
print(ep)
m.forward = types.MethodType(forward_scan , m)
self.assertTrue(torch.allclose(orig_out, m(inp)))
ep = torch.export.export(m, (torch.zeros(3),))
self.assertTrue(torch.allclose(orig_out, ep.module()(inp)))
print(ep)
```
Here's the full log + error: P1759645059
I wrote a small test case with torch.func.functional_call and it seems like torch.compile can trace it successfully, so it's possible that there are some restrictions with scan, or that I wrote the code badly.
cc @chauhang @penguinwu @zou3519 @ydwu4 @bdhirsh @voznesenskym @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @amjames
### Versions
main
| true
|
2,929,966,235
|
Base version committed
|
jamesjwu
|
closed
|
[
"triaged"
] | 0
|
CONTRIBUTOR
| null | true
|
2,929,965,999
|
Support num_ctas > 1?
|
jamesjwu
|
open
|
[
"triaged",
"oncall: pt2"
] | 1
|
CONTRIBUTOR
|
cc @chauhang @penguinwu
| true
|
2,929,965,947
|
Support user defined triton kernels
|
jamesjwu
|
open
|
[
"triaged",
"oncall: pt2"
] | 0
|
CONTRIBUTOR
|
cc @chauhang @penguinwu
| true
|
2,929,965,883
|
Support launch_enter and launch_exit hooks
|
jamesjwu
|
open
|
[
"low priority",
"triaged",
"oncall: pt2"
] | 4
|
CONTRIBUTOR
|
cc @chauhang @penguinwu
| true
|
2,929,965,833
|
Support save_cubin (and therefore, support cpp_wrapper use cases)
|
jamesjwu
|
open
|
[
"triaged",
"oncall: pt2"
] | 0
|
CONTRIBUTOR
|
cc @chauhang @penguinwu
| true
|
2,929,965,779
|
Support sharedMem > 48 KB
|
jamesjwu
|
closed
|
[
"triaged",
"oncall: pt2"
] | 2
|
CONTRIBUTOR
|
See parent issue: support StaticCudaLauncher when triton kernels require more than 48 KB of shared memory
cc @chauhang @penguinwu
| true
|
2,929,965,501
|
Support any number of kernel arguments (fallback to heap allocation beyond N max arguments)
|
jamesjwu
|
closed
|
[
"triaged",
"oncall: pt2"
] | 0
|
CONTRIBUTOR
|
cc @chauhang @penguinwu
| true
|
2,929,965,192
|
Hook up statically compiled triton kernels to FXGraphCache's warm start
|
jamesjwu
|
closed
|
[
"triaged",
"oncall: pt2"
] | 0
|
CONTRIBUTOR
|
cc @chauhang @penguinwu
| true
|
2,929,965,044
|
Hook up StaticCudaLauncher to torch.compile
|
jamesjwu
|
closed
|
[
"triaged"
] | 0
|
CONTRIBUTOR
| null | true
|
2,929,963,551
|
DTensor slicing on sharded dimension leads to replication
|
garrett361
|
open
|
[
"oncall: distributed",
"triaged",
"module: dtensor"
] | 5
|
NONE
|
Slicing of sharded `DTensor`s currently results in differing placements depending on the axis over which the `DTensor` is sharded:
* Slicing a sharded dimension leads to replication over that dimension.
* Slicing a replicated dimension preserves all placements of the `DTensor`
**Expectation**: slicing will preserve the placements of the original `DTensor`.
Sketch of the behavior here:
```py
# Create sharded DTensor
x_dt = distribute_tensor(torch.randn(32, 32), mesh, (Shard(-1), ))
# Slice along the unsharded, zero dim. Slice is still sharded:
x_dt_slice0 = x_dt[:16]
x_dt_slice0.placement = (Shard(-1),)
# Slice along the sharded -1 dim. Slice is now replicated
x_dt_slice0 = x_dt[..., :16]
x_dt_slice0.placement = (Replicate(),)
```
[Minimal gist repro here.](https://gist.github.com/garrett361/f36b6c0b673cb1d777cb92f35438648c)
The asymmetric behavior seems undesirable, and seems liable to cause large memory spikes, e.g. if slicing a context-parallel sharded tensor. Preserving the sharding placement with `torch.chunk` semantics seems like a natural expectation, IMO.
More generally, I think a reasonable expectation is for `distribute_tensor` and torch ops to commute:
```py
# Expectations
x = torch.rand(256)
# Case 1) distribute then slice
x_dt_slice0 = distribute_tensor(x, mesh, (Shard(-1),))[64:128]
# Case 2): slice then distribute
x_dt_slice1 = distribute_tensor(x[64:128], mesh, (Shard(-1),))
# These currently fail
assert x_dt_slice0.to_local().numel() == x_dt_slice1.to_local().numel()
assert x_dt_slice0.placements == x_dt_slice1.placements
# Similar expectations for ops other than `torch.ops.aten.slice.Tensor`
```
cc @H-Huang @awgu @kwen2501 @wanchaol @fegin @fduwjj @wz337 @wconstab @d4l3k @c-p-i-o @tianyu-l @XilunWu
| true
|
2,929,952,885
|
[ROCm] skip test_RNN_dropout_state
|
dnikolaev-amd
|
closed
|
[
"module: rocm",
"open source",
"Merged",
"topic: not user facing",
"ciflow/rocm"
] | 3
|
CONTRIBUTOR
|
PR to skip test_nn.py::TestNN::test_RNN_dropout_state.
ROCm currently doesn't support a dropout value for RNN.
The PR to enable RNN dropout on ROCm is still in review and blocked: pytorch/pytorch#144572
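For reference, a hedged sketch of the kind of skip being added (the merged PR may use a different decorator or reason string):
```python
import unittest
from torch.testing._internal.common_utils import TEST_WITH_ROCM, TestCase, run_tests

class TestNN(TestCase):
    # Skip only on ROCm builds; the test keeps running on CUDA/CPU.
    @unittest.skipIf(TEST_WITH_ROCM, "ROCm does not yet support a dropout value for RNN")
    def test_RNN_dropout_state(self):
        ...

if __name__ == "__main__":
    run_tests()
```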
Fixes: https://github.com/pytorch/pytorch/issues/68849
cc: @jithunnair-amd @pruthvistony
cc @jeffdaily @sunway513 @jithunnair-amd @pruthvistony @ROCmSupport @dllehr-amd @jataylo @hongxiayang @naromero77amd
| true
|
2,929,950,466
|
[skip ci] benchmark stack vs heap libtorch_agnostic.my_ones_like
|
janeyx99
|
closed
|
[
"topic: not user facing"
] | 2
|
CONTRIBUTOR
|
This PR was for benchmarking purposes. For the stack_my_ones_like op in this PR to work, libtorch's shim_common.cpp's to_ivalue() cannot delete sivp (as sivp is now on the stack and not the heap).
original my_ones_like:
<img width="538" alt="image" src="https://github.com/user-attachments/assets/aebf2b3b-b4c9-464d-a732-ab8659b15652" />
avg runtime: 7.851 us
new stack_my_ones_like as in this PR:
<img width="546" alt="image" src="https://github.com/user-attachments/assets/fd5b1524-822d-4400-b9b1-0d5b63782dd0" />
avg runtime: 7.626 us
So this is a difference of 0.225 us for handling 4 std::optionals (a difference of 4 heap allocations)
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #149535
* __->__ #149445
| true
|
2,929,948,025
|
[export] Support python assertion with symints.
|
zhxchen17
|
closed
|
[
"fb-exported",
"Merged",
"ciflow/trunk",
"fx",
"module: dynamo",
"ciflow/inductor",
"release notes: export"
] | 10
|
CONTRIBUTOR
|
Summary: This diff ports a technique from torch.fx symbolic tracing to trace through Python asserts when we run into data-dependent symbolic shape assertions, so that we can achieve the same effect as torch dynamo and automatically turn asserts into torch._check() calls.
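A hedged illustration of the intended user-visible effect (the module below is illustrative; the exact rewriting mechanics inside export may differ):
```python
import torch

class M(torch.nn.Module):
    def forward(self, x):
        n = x.sum().to(torch.int64).item()  # data-dependent SymInt under export
        assert n > 0, "need a positive count"
        return torch.ones(n)

# Under this change, export treats the assert roughly as if it had been written:
class MChecked(torch.nn.Module):
    def forward(self, x):
        n = x.sum().to(torch.int64).item()
        torch._check(n > 0, lambda: "need a positive count")
        return torch.ones(n)
```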
Test Plan: buck test mode/opt caffe2/test:test_export -- -r test_python_asserts_with_sym_int
Differential Revision: D71425360
cc @ezyang @SherlockNoMad @EikanWang @jgong5 @wenzhe-nrv @voznesenskym @penguinwu @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @jiayisunx @chenyang78 @kadeng @chauhang @amjames
| true
|
2,929,944,334
|
ci: Remove mentions and usages of DESIRED_DEVTOOLSET and cxx11
|
seemethere
|
closed
|
[
"Merged",
"Reverted",
"ciflow/binaries",
"release notes: releng",
"skip-pr-sanity-checks",
"ci-no-td"
] | 17
|
MEMBER
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #149443
This is a remnant of our migration to manylinux2_28. We should remove these since all of our binary builds are now built with the cxx11 ABI.
Signed-off-by: Eli Uriegas <eliuriegas@meta.com>
cc @albanD
| true
|
2,929,943,157
|
[StaticCudaLauncher] Support any number of kernel arguments
|
jamesjwu
|
closed
|
[
"Merged",
"ciflow/trunk",
"topic: not user facing",
"module: inductor",
"ciflow/inductor"
] | 4
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #149629
* #149657
* #149054
* __->__ #149442
Fixes #149450
This PR adds fallback support on StaticCudaLauncher for any number of kernel arguments. Above MAX_ARGS, we can do a heap allocation/malloc instead.
For 0 arguments, triton technically invokes undefined behavior by allocating a 0-byte array and passing it to cuLaunchKernel. In reality, cuLaunchKernel never accesses the pointer if the signature of the cubin has no parameters, so we can just pass nullptr directly.
We could technically use `alloca` to stack allocate instead of heap allocate, though in my tests it didn't seem to affect runtime performance on benchmarks particularly impressively, and alloca has portability issues, so I'd rather just stick with something simpler for now.
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
| true
|
2,929,933,237
|
Batch Sampler Speedup
|
GalAvineri
|
open
|
[
"triaged",
"open source",
"release notes: dataloader"
] | 5
|
NONE
|
# Motivation
https://github.com/pytorch/pytorch/pull/147706 attempts to accelerate `BatchSampler` over `RandomSampler` by utilizing the fact that `RandomSampler` can construct all the epoch's indices before yielding them.
This PR generalizes this approach for all samplers that share this feature (e.g `SequentialSampler`).
# Content
This PR introduces a new sampler base class `ArrayableSampler` (a poor name perhaps, happy for suggestions!)
that has a function `to_array` which returns the entire sequence of indices, instead of yielding it.
`BatchSampler` is modified to call `to_array` if it is available, and then partition the indices into batches more efficiently.
`RandomSampler` and `SequentialSampler` are changed to inherit `ArrayableSampler` instead of `Sampler` and implement `to_array`.
I've also added unit tests for `BatchSampler`.
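A minimal sketch of the shape of the API described above (class and function names follow this description; the final code may differ):
```python
import numpy as np
from torch.utils.data import Sampler

class ArrayableSampler(Sampler[int]):
    """Sampler that can materialize all of the epoch's indices at once."""

    def to_array(self) -> np.ndarray:
        raise NotImplementedError

class SequentialArraySampler(ArrayableSampler):
    def __init__(self, data_source) -> None:
        self.data_source = data_source

    def to_array(self) -> np.ndarray:
        return np.arange(len(self.data_source))

    def __iter__(self):
        yield from self.to_array().tolist()

    def __len__(self) -> int:
        return len(self.data_source)

def batched_indices(sampler: ArrayableSampler, batch_size: int, drop_last: bool):
    # Partition the whole index array at once instead of looping index by index.
    idx = sampler.to_array()
    if drop_last:
        idx = idx[: len(idx) - len(idx) % batch_size]
    return np.array_split(idx, range(batch_size, len(idx), batch_size))
```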
# Results
These are the speedup results over `RandomSampler` and `SequentialSampler`
```
Random Sampler
original(avg) original(std) new(avg) new(std) speedup
batch_size drop_last replacement
4 True True 0.083266 0.001230 0.011100 0.000496 650.15%
False True 0.097272 0.001048 0.010956 0.000122 787.86%
True False 0.071846 0.001248 0.019380 0.000427 270.73%
False False 0.081651 0.000393 0.019177 0.000406 325.77%
8 True True 0.080392 0.000948 0.006527 0.000057 1131.65%
False True 0.089747 0.001443 0.006300 0.000141 1324.56%
True False 0.070335 0.000481 0.014993 0.000398 369.10%
False False 0.076151 0.001038 0.014292 0.000989 432.84%
16 True True 0.079936 0.001022 0.003918 0.000063 1940.24%
False True 0.088889 0.002255 0.003966 0.000034 2141.47%
True False 0.070394 0.002158 0.012234 0.000371 475.39%
False False 0.073136 0.000844 0.012345 0.000358 492.46%
32 True True 0.079251 0.001090 0.002816 0.000034 2714.11%
False True 0.086134 0.001740 0.002776 0.000021 3002.72%
True False 0.068372 0.000683 0.010850 0.000388 530.14%
False False 0.070534 0.000757 0.011073 0.000405 537.00%
64 True True 0.076503 0.000867 0.002152 0.000031 3455.23%
False True 0.080709 0.000728 0.002079 0.000033 3781.88%
True False 0.067604 0.000163 0.010141 0.000429 566.67%
False False 0.068694 0.000324 0.010150 0.000402 576.80%
256 True True 0.076467 0.000447 0.001673 0.000041 4471.89%
False True 0.079399 0.000464 0.001671 0.000036 4652.11%
True False 0.066305 0.000353 0.009784 0.000383 577.66%
False False 0.068494 0.000760 0.009861 0.000351 594.57%
1024 True True 0.077544 0.000437 0.001531 0.000028 4964.72%
False True 0.078970 0.000251 0.001532 0.000035 5055.80%
True False 0.066495 0.000693 0.009903 0.000433 571.45%
False False 0.068854 0.001016 0.009248 0.000885 644.53%
4096 True True 0.080214 0.000778 0.001599 0.000085 4915.45%
False True 0.080381 0.001041 0.001580 0.000045 4988.82%
True False 0.067910 0.000534 0.009977 0.000956 580.65%
False False 0.067867 0.000811 0.009625 0.000386 605.11%
8192 True True 0.079692 0.001228 0.001605 0.000042 4864.47%
False True 0.081308 0.001007 0.001569 0.000043 5082.90%
True False 0.067922 0.002579 0.009508 0.000522 614.35%
False False 0.067451 0.001880 0.009628 0.000383 600.59%
16384 True True 0.082358 0.001515 0.001587 0.000049 5088.30%
False True 0.079919 0.001728 0.001474 0.000034 5323.23%
True False 0.068146 0.000946 0.010022 0.000331 579.98%
False False 0.067269 0.000629 0.009658 0.000369 596.53%
Sequential Sampler
original(avg) original(std) new(avg) new(std) speedup
batch_size drop_last
4 True 0.011663 0.000044 0.009717 0.000065 20.04%
False 0.022071 0.000238 0.009743 0.000142 126.53%
8 True 0.009131 0.000133 0.005157 0.000044 77.08%
False 0.014645 0.000262 0.004918 0.000120 197.81%
16 True 0.008144 0.000128 0.002611 0.000016 211.87%
False 0.012597 0.000151 0.002699 0.000015 366.73%
32 True 0.007929 0.000087 0.001406 0.000020 463.90%
False 0.009932 0.000150 0.001423 0.000021 598.01%
64 True 0.006814 0.000077 0.000793 0.000014 759.12%
False 0.008856 0.000146 0.000789 0.000009 1022.34%
256 True 0.006819 0.000096 0.000358 0.000009 1804.35%
False 0.008643 0.000073 0.000357 0.000006 2324.08%
1024 True 0.007234 0.000107 0.000241 0.000006 2903.41%
False 0.008019 0.000117 0.000247 0.000007 3147.97%
4096 True 0.007520 0.000068 0.000263 0.000093 2761.04%
False 0.007552 0.000134 0.000258 0.000080 2830.46%
8192 True 0.007736 0.000096 0.000227 0.000017 3312.73%
False 0.007355 0.000107 0.000217 0.000007 3283.53%
16384 True 0.009124 0.000134 0.000211 0.000017 4215.36%
False 0.007744 0.000100 0.000228 0.000024 3303.29%
```
## Note
While `BatchSampler` previously yielded `List[int]`, it now yields `numpy` arrays instead.
Furthermore, `RandomSampler.to_array` uses a `numpy` generator instead of a `torch` generator.
I'll provide speed comparisons using alternative implementations:
1. Using `numpy` generator and yielding `List[int]`.
2. Using `torch` generator and yielding `Tensor`.
3. Using `torch` generator and yielding `List[int]`.
| true
|
2,929,927,619
|
[pt2] Support statically launching triton compiled cuda kernels
|
jamesjwu
|
open
|
[
"triaged",
"actionable",
"oncall: pt2",
"module: inductor",
"module: user triton"
] | 1
|
CONTRIBUTOR
|
This is a master issue describing progress for StaticCudaLauncher.
Overall, the goal here is to be able to statically launch cuda kernels generated by Triton from just the cubin file and various metadata, without having to ever call CompiledKernel.init_handles(). To do so, we need to:
- Implement the launcher itself
- Hook it up to torch.compile (cold start)
- Hook up statically compiled triton kernels to FXGraphCache's artifact (warm start)
- Implement feature parity with triton's own launcher: this is so that every feature that triton kernels can use, StaticCudaLauncher can also use.
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov @oulgen @davidberard98 @jansel
| true
|
2,929,925,787
|
[dynamo] recursive-only dont_skip_tracing with traceback approach
|
williamwen42
|
open
|
[
"module: dynamo",
"ciflow/inductor",
"release notes: dynamo",
"keep-going",
"module: compile ux"
] | 1
|
MEMBER
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #149439
Attempt #2 at https://github.com/pytorch/pytorch/pull/148736 using a traceback approach rather than a global variable approach.
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @chauhang @amjames
| true
|
2,929,914,230
|
Fix format string in ck_gemm_template.h for int64_t variables
|
izaitsevfb
|
closed
|
[
"fb-exported",
"Merged",
"ciflow/trunk",
"topic: not user facing"
] | 4
|
CONTRIBUTOR
|
Summary:
Change %d to %ld in printf format specifier to correctly handle int64_t variables n, m, k.
This fixes compilation errors in HIP builds where the format string didn't match the argument type.
forward fix for D71412006
```
In file included from fbcode/caffe2/aten/src/ATen/native/hip/ck_gemm_bfloat16.hip:4:
fbcode/caffe2/aten/src/ATen/native/hip/ck_gemm_template.h:386:28: error: format specifies type 'int' but the argument has type 'int64_t' (aka 'long') [-Werror,-Wformat]
385 | printf("error shape = %d %d %d TRANSA=%d TRANSB=%d \n",
| ~~
| %ld
386 | n, m, k,TRANSA, TRANSB);
| ^
fbcode/caffe2/aten/src/ATen/native/hip/ck_gemm_template.h:386:31: error: format specifies type 'int' but the argument has type 'int64_t' (aka 'long') [-Werror,-Wformat]
385 | printf("error shape = %d %d %d TRANSA=%d TRANSB=%d \n",
| ~~
| %ld
386 | n, m, k,TRANSA, TRANSB);
| ^
fbcode/caffe2/aten/src/ATen/native/hip/ck_gemm_template.h:386:25: error: format specifies type 'int' but the argument has type 'int64_t' (aka 'long') [-Werror,-Wformat]
385 | printf("error shape = %d %d %d TRANSA=%d TRANSB=%d \n",
| ~~
| %ld
386 | n, m, k,TRANSA, TRANSB);
| ^
```
Test Plan:
```
buck2 build --flagfile fbcode//mode/opt-amd-gpu fbcode//torchrec/sparse/tests:test_jagged_tensor_gpu
```
Differential Revision: D71418611
| true
|
2,929,880,500
|
[MPSInductor] Move threadfence at the right location
|
malfet
|
closed
|
[
"Merged",
"topic: not user facing",
"ciflow/mps"
] | 7
|
CONTRIBUTOR
|
Not sure how it worked in the past, but the fence should come before the first read from shared memory, not after it.
This bug was exposed by https://github.com/pytorch/pytorch/pull/148969 which removed unnecessary barrier before calling `threadgroup_reduce` functions
Test plan:
```
% python3 generate.py --checkpoint_path checkpoints/stories15M/model.pth --prompt "Once upon a time" --device mps --compile
```
Before that it produced gibberish, now it works fine
| true
|
2,929,863,013
|
[MTIA] Add _mtia_getCurrentRawStream to MTIA module
|
PatriceVignola
|
closed
|
[
"fb-exported",
"Merged",
"ciflow/trunk",
"topic: not user facing"
] | 5
|
CONTRIBUTOR
|
Summary: The FlexAttention path generates code that uses this function. Although streams are not used yet in Triton-MTIA, adding this now lets us avoid branching just for MTIA and generating different code.
Test Plan: CI
Reviewed By: chaos5958
Differential Revision: D70072057
| true
|
2,929,853,311
|
Enable fast path for qlinear (static/dynamic) and qadd for AArch64 though ACL directly.
|
fadara01
|
closed
|
[
"module: cpu",
"open source",
"module: arm",
"release notes: quantization",
"ciflow/linux-aarch64",
"arm priority"
] | 10
|
COLLABORATOR
|
This is a backport for the PRs enabling a fast path for eager mode static/dynamic quantized matmuls and quantized add for AArch64 through Arm Compute Library (ACL) directly - https://github.com/pytorch/pytorch/pull/148585, https://github.com/pytorch/pytorch/pull/148653.
PR https://github.com/pytorch/pytorch/pull/148584 is the base for all of the above and made its way to `release/2.7`, but we need the above two PRs to capitalize on it.
It would mean a lot for us to have these changes in v2.7. They directly enable business partners to adopt PyTorch on Arm as they accelerate MLPerf's recommender model by ~ **14x**
cc @jgong5 @mingfeima @XiaobingSuper @sanchitintel @ashokei @jingxu10 @malfet @snadampal @milpuz01
| true
|
2,929,823,041
|
Supporting non-tensor-data write_size in planner write items.
|
pradeepfn
|
closed
|
[
"oncall: distributed",
"fb-exported",
"Merged",
"Reverted",
"ciflow/trunk",
"release notes: distributed (checkpoint)",
"ci-no-td",
"oncall: distributed checkpointing"
] | 9
|
CONTRIBUTOR
|
Summary:
1\ The current write item structure does not contain the amount of data that needs to be written.
2\ The planner.item already has a size primitive, 'tensor_storage_size' (https://fburl.com/code/7a0gsmw7), but only for tensors.
3\ Right now, the only way the writer layer gets hold of this property (for non-tensor data) is to:
- first do a lookup into the actual tensor/bytes
- then calculate the nbytes.
This change introduces a way to capture the non-tensor data size within a write-plan item.
Reviewed By: daulet-askarov
Differential Revision: D70497442
cc @LucasLLC @H-Huang @awgu @kwen2501 @wanchaol @fegin @fduwjj @wz337 @wconstab @d4l3k @c-p-i-o
| true
|
2,929,812,139
|
[MTIA] Ensure correct stream behavior for input_buffer add autograd on MTIA
|
jvandebon
|
closed
|
[
"fb-exported",
"Merged",
"ciflow/trunk",
"topic: not user facing"
] | 6
|
CONTRIBUTOR
|
Test Plan: CI
Differential Revision: D71414498
| true
|