| id (int64, 2.74B-3.05B) | title (string, 1-255 chars) | user (string, 2-26 chars) | state (string, 2 classes) | labels (list, 0-24 items) | comments (int64, 0-206) | author_association (string, 4 classes) | body (string, 7-62.5k chars, nullable) | is_title (bool, 1 class) |
|---|---|---|---|---|---|---|---|---|
2,957,354,440
|
[c10d][fr] Allow multiple writer registration with warnings
|
fduwjj
|
closed
|
[
"oncall: distributed",
"Merged",
"ciflow/trunk",
"release notes: distributed (c10d)"
] | 10
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #150232
The lifespan of the writer is actually the whole program, which is sub-optimal, but it is a practical compromise so that writer registration can happen outside PG creation.
So we decided to allow multiple writer registrations with warnings.
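As a rough illustration of the intended behavior, here is a minimal Python sketch assuming a hypothetical `register_writer` helper (the actual c10d registration API may differ):
```python
import warnings

_registered_writer = None  # hypothetical module-level slot for the flight-recorder writer

def register_writer(writer):
    """Register a dump writer; warn (instead of erroring) if one already exists."""
    global _registered_writer
    if _registered_writer is not None:
        warnings.warn(
            "A flight-recorder writer is already registered; replacing it. "
            "Multiple registrations are allowed but usually unintentional."
        )
    _registered_writer = writer
```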
cc @H-Huang @awgu @kwen2501 @wanchaol @fegin @wz337 @wconstab @d4l3k @c-p-i-o
| true
|
2,957,346,701
|
support backed_size_oblivious in guard_or_false/guard_or_true
|
laithsakka
|
closed
|
[
"Merged",
"ciflow/trunk",
"release notes: fx",
"fx",
"ciflow/inductor"
] | 9
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #150231
cc @ezyang @SherlockNoMad @EikanWang @jgong5 @wenzhe-nrv
| true
|
2,957,340,640
|
gloo: use shared Stores
|
d4l3k
|
closed
|
[
"oncall: distributed",
"fb-exported",
"Merged",
"ciflow/trunk",
"release notes: distributed (c10d)"
] | 10
|
MEMBER
|
Summary:
X-link: https://github.com/facebookincubator/gloo/pull/423
This modifies `connectFullMesh` to take in a `shared_ptr<IStore>` instead of a reference. This is an API-breaking change but fairly easy to work around.
To keep PyTorch backwards compatible during the commit phase, we add a new ifdef `GLOO_SHARED_STORE`, which provides compatibility until we update the pinned Gloo version in the PyTorch OSS repo.
This also adds a new `wait_get` method to `IStore` which will allow us to do a more efficient operation in PyTorch TCPStore. PyTorch's `Store::get` automatically waits so we want to make sure we can avoid waiting twice to reduce network traffic.
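For context, PyTorch's Python-side `Store.get` already blocks until the key appears, which is why a combined wait-and-get avoids a second round trip; `wait_get` itself lives on the C++ `IStore` side. A minimal Python illustration of the blocking-get semantics (not the new API):
```python
from datetime import timedelta
from torch.distributed import TCPStore

# Single-process store just to illustrate the semantics.
store = TCPStore("127.0.0.1", 29500, world_size=1, is_master=True,
                 timeout=timedelta(seconds=10))
store.set("key", "value")

# get() blocks until "key" exists, so calling wait(["key"]) before get("key")
# would pay for the same readiness check twice over the network.
print(store.get("key"))  # b'value'
```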
This change will land simultaneously in PyTorch and Gloo repos.
Test Plan:
```
buck2 test //gloo/... //caffe2/caffe2/contrib/gloo:
```
Differential Revision: D72084111
cc @H-Huang @awgu @kwen2501 @wanchaol @fegin @fduwjj @wz337 @wconstab @c-p-i-o
| true
|
2,957,289,357
|
Refactor fake tensor caching
|
angelayi
|
open
|
[
"triaged",
"oncall: pt2",
"module: dynamo"
] | 1
|
CONTRIBUTOR
|
### 🐛 Describe the bug
After rebasing #149298, my test case was failing at [this part of the code](https://github.com/pytorch/pytorch/blob/d5a8bd068821ca0cd036400c13a561452d391b15/torch/_subclasses/fake_tensor.py#L1519-L1527) where torch._check/torch._check_is_size is being passed to `validate_cache_key`.
I chatted with @zou3519 and he mentioned:
> In general if you have a function that we're doing FakeTensor prop on and it has torch._check calls on it: torch._check is not an OpOverload so FakeTensor never sees it.
> Currently the code manually runs validate_cache_key on all nodes of the invoke_subgraph. This is not correct, we should morally be doing some tracing-based approach.
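For reference, a hypothetical example of the kind of function in question; the `torch._check*` calls below are plain Python helpers rather than `OpOverload`s, so FakeTensor propagation never dispatches them:
```python
import torch

def guarded_fn(x):
    n = x.item()
    torch._check_is_size(n)   # Python-level assertion helper, not an OpOverload
    torch._check(n < 1024)    # likewise never dispatched as an op during FakeTensor prop
    return x.new_zeros(n)

print(guarded_fn(torch.tensor(5)))
```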
### Versions
main
cc @chauhang @penguinwu @voznesenskym @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @amjames
| true
|
2,957,267,268
|
Move MacOS inductor tests to M2-15 runner
|
malfet
|
closed
|
[
"Merged",
"Reverted",
"topic: not user facing",
"ci-no-td"
] | 8
|
CONTRIBUTOR
|
To get more representative results (and to be able to run more tests eventually).
Also add a pull_request trigger, alongside workflow dispatch, if the yml file is modified.
| true
|
2,957,266,580
|
[ROCm] use correct workspace for hipblaslt, silence warning
|
ethanwee1
|
open
|
[
"module: rocm",
"open source",
"Merged",
"Reverted",
"ciflow/trunk",
"topic: not user facing",
"rocm",
"ci-no-td"
] | 15
|
CONTRIBUTOR
|
Follow-up to #145130. That PR caused a warning on ROCm to always be emitted the first time hipblaslt was called, for any workload.
Fixes #ISSUE_NUMBER
cc @jeffdaily @sunway513 @jithunnair-amd @pruthvistony @ROCmSupport @dllehr-amd @jataylo @hongxiayang @naromero77amd
| true
|
2,957,256,887
|
[CI] Use system nccl in build
|
clee2000
|
closed
|
[
"Merged",
"release notes: releng",
"topic: not user facing"
] | 5
|
CONTRIBUTOR
|
Install nccl in the docker image (which is already being done in some docker images), and use USE_SYSTEM_NCCL=1 in CI builds
It takes some time to build nccl and it doesn't happen in parallel, so there's less benefit in switching to a bigger runner and using more processes.
The other changes in this PR are because there is an install_cuda script and an install_cuda_aarch64 script, and they both build nccl from source and define their own pins for the nccl version. There is also a .ci/docker/nccl-cu11.txt and cu12.txt that define the pins; this is an attempt to unify them. Unfortunately this leads to a lot of files needing to be copied into the docker build.
Generally this seems to increase docker pull times by <1 min (P1768456379), but it's hard to tell what the real increase is.
15761 MiB -> 16221 MiB [linux-focal-cuda11.8-py3.10-gcc9 / test (distributed](https://github.com/pytorch/pytorch/actions/runs/14114171729/job/39545500161#logs)
`jq '[.layers[].size, .config.size] | add / 1024 / 1024'`
Example https://hud.pytorch.org/pytorch/pytorch/commit/6eb3c2e2822c50d8a87b43938a9cf7ef0561ede2#39520169577-box

TODO:
* Figure out a way to verify that nccl was built and works properly when it is expected (this time I just checked torch.distributed.is_nccl_available; see the sketch below)
* Merge the cusparse installation scripts
* Merge the cuda installation scripts
* Either always split the nccl, cuda, and cusparse installations, or always do them together in one bash script
distributed/test_distributed_spawn
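A minimal check along the lines of the first TODO item (verification only, not part of this PR):
```python
import torch.distributed as dist

# True only when this torch build was compiled against NCCL (system or bundled).
print("NCCL available:", dist.is_nccl_available())
```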
| true
|
2,957,232,856
|
[inductor] skip non-trivial tiling if unbacked symints are present
|
ColinPeppler
|
closed
|
[
"Merged",
"ciflow/trunk",
"topic: not user facing",
"module: inductor",
"ciflow/inductor"
] | 4
|
CONTRIBUTOR
|
Take two of https://github.com/pytorch/pytorch/pull/149994.
This time we just skip `convert_tiling_to_3d` and `candidate_tilings` if there exists unbacked symints.
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #150225
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
| true
|
2,957,226,937
|
CVE-2024-7804 still present in versions greater than 2.3.1
|
HammySession
|
closed
|
[
"oncall: distributed",
"triaged",
"security"
] | 3
|
NONE
|
### 🐛 Describe the bug
[CVE-2024-7804](https://nvd.nist.gov/vuln/detail/CVE-2024-7804) was published on 03/20/2025, the Proof of Concept for the vulnerability is at [Huntr](https://huntr.com/bounties/0e870eeb-f924-4054-8fac-d926b1fb7259).
The vulnerability in question is due to de-serialization at [Internal.py](https://github.com/pytorch/pytorch/blob/27a14405d3b996d572ba18339410e29ec005c775/torch/distributed/rpc/internal.py#L162)
The alert on the [PyTorch Repo](https://github.com/advisories/GHSA-4vmg-rw8f-92f9) specifies that the affected versions are <= 2.3.1, but upon investigation this attack is equally effective on the latest PyTorch platform 2.6.0 in my own tests.
As an additional note this appears to be a re-emergence of the disputed [CVE-2024-48063](https://www.cve.org/CVERecord?id=CVE-2024-48063), in that it takes advantage of the same pickle issue.
My question is:
1. If the CVE-2024-7804 is valid is there a fix planned for the next version of PyTorch
2. If the team discerns that it is the same as the disputed CVE-2024-48063, could the team reach out to MITRE and dispute the new CVE?
Thanks for your time
<img width="640" alt="Image" src="https://github.com/user-attachments/assets/24357c47-d63f-48de-a34c-794cdf3a302d" />
<img width="640" alt="Image" src="https://github.com/user-attachments/assets/11e1e10f-e127-467e-a950-febc335a0622" />
<img width="640" alt="Image" src="https://github.com/user-attachments/assets/878d0499-c78a-4357-9ab4-bbaf3a4d62e4" />
### Versions
Collecting environment information...
PyTorch version: 2.6.0
Is debug build: False
CUDA used to build PyTorch: None
ROCM used to build PyTorch: N/A
OS: macOS 15.2 (arm64)
GCC version: Could not collect
Clang version: 16.0.0 (clang-1600.0.26.6)
CMake version: Could not collect
Libc version: N/A
Python version: 3.10.16 (main, Dec 11 2024, 10:22:29) [Clang 14.0.6 ] (64-bit runtime)
Python platform: macOS-15.2-arm64-arm-64bit
Is CUDA available: False
CUDA runtime version: No CUDA
CUDA_MODULE_LOADING set to: N/A
GPU models and configuration: No CUDA
Nvidia driver version: No CUDA
cuDNN version: No CUDA
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Apple M2
Versions of relevant libraries:
[pip3] numpy==2.2.4
[pip3] torch==2.6.0
[conda] numpy 2.2.4 pypi_0 pypi
[conda] torch 2.6.0 pypi_0 pypi
cc @H-Huang @awgu @kwen2501 @wanchaol @fegin @fduwjj @wz337 @wconstab @d4l3k @c-p-i-o
| true
|
2,957,200,705
|
Pin cmake to 3.31.2 for windows conda install
|
pytorchbot
|
closed
|
[
"open source"
] | 1
|
COLLABORATOR
|
Trying to fix nightly failures
Cmake 4.0 update https://pypi.org/project/cmake/4.0.0/ broke nightly builds
You can see it here: https://hud.pytorch.org/hud/pytorch/pytorch/main/1?per_page=50&name_filter=cuda11_8-build
and here: https://hud.pytorch.org/hud/pytorch/pytorch/nightly/1?per_page=50&name_filter=
This fixes Windows builds. Linux and macOS were already fixed.
| true
|
2,957,177,647
|
[dynamic shapes] allow duck typing for 0/1
|
pianpwk
|
closed
|
[
"Merged",
"ciflow/trunk",
"release notes: fx",
"fx",
"module: dynamo",
"ciflow/inductor"
] | 3
|
CONTRIBUTOR
|
Fixes #150184
e.g. for config.backed_size_oblivious=True and compile
cc @ezyang @SherlockNoMad @EikanWang @jgong5 @wenzhe-nrv @voznesenskym @penguinwu @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @jiayisunx @chenyang78 @kadeng @chauhang @amjames
| true
|
2,957,166,983
|
[ROCm][CI] Increase wheel build timeout from 210 to 240
|
ethanwee1
|
closed
|
[
"module: rocm",
"open source",
"Merged",
"ciflow/binaries",
"ciflow/trunk",
"topic: not user facing",
"ciflow/nightly"
] | 5
|
CONTRIBUTOR
|
Fixes #150046. Increasing the timeout from 210 to 240.
cc @jeffdaily @sunway513 @jithunnair-amd @pruthvistony @ROCmSupport @dllehr-amd @jataylo @hongxiayang @naromero77amd
| true
|
2,957,157,740
|
[CI] Fix docker builds failing due to cmake update by setting CMAKE_POLICY_VERSION_MINIMUM
|
clee2000
|
closed
|
[
"Merged",
"Reverted",
"topic: not user facing",
"ci-no-td"
] | 11
|
CONTRIBUTOR
|
Set the CMAKE_POLICY_VERSION_MINIMUM env var to make executorch and halide docker builds pass (they install from those repos which don't have cmake pinned)
This can be removed if executorch and halide update their builds and we update the hash?
| true
|
2,957,075,619
|
Move dump location to avoid dumping twice
|
yushangdi
|
closed
|
[
"fb-exported",
"Merged",
"ciflow/trunk",
"topic: not user facing",
"module: inductor",
"ciflow/inductor"
] | 6
|
CONTRIBUTOR
|
Summary:
If we put the dumping code in codegen, we might get a separate node_mapping dump for the constant folded graph (https://github.com/pytorch/pytorch/blob/main/torch/_inductor/compile_fx.py#L1119).
We move it into compile_fx.py so there's only one node_mapping dump.
Test Plan: CI
Reviewed By: YUNQIUGUO
Differential Revision: D72068715
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
| true
|
2,957,028,151
|
[DLPack] Add support for missing keyword-arguments.
|
ysiraichi
|
open
|
[
"open source",
"module: dlpack",
"release notes: python_frontend"
] | 1
|
COLLABORATOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #150691
* __->__ #150218
* #150217
* #150216
* #145000
This PR introduces the rest of the keyword-arguments added in DLPack
version 2023.12: `dl_device` and `copy`.
In summary, we handle these arguments in the C++ implementation of
`to_dlpack(...)` at _torch/csrc/Module.cpp_, by calling the
`maybeCopyTensor` function at _aten/src/ATen/DLConvertor.cpp_. It also
introduces the following changes:
- Add a new Python API `torchDeviceToDLDevice()`, which is simply a
refactoring of the `getDLDevice()` function at
_aten/src/ATen/DLConvertor.cpp_.
- Add both keyword-arguments to the `from_dlpack()` function at
_torch/utils/dlpack.py_ and to the `Tensor.__dlpack__()` dunder
method.
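A minimal usage sketch, assuming the keyword-arguments land as described above (names follow the DLPack 2023.12 spec; defaults and accepted values are assumptions, not verified against this PR):
```python
import torch
from torch.utils.dlpack import from_dlpack

x = torch.arange(4)

# copy=True asks the producer for a copy rather than a view of the same memory;
# dl_device would request a target device as a (device_type, device_id) pair.
# Both default to None, which preserves the existing behavior.
capsule = x.__dlpack__(copy=True)
y = from_dlpack(capsule)
print(y)
```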
| true
|
2,957,028,030
|
Fix DLPack stream logic.
|
ysiraichi
|
open
|
[
"open source",
"module: dlpack",
"topic: not user facing"
] | 1
|
COLLABORATOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #150691
* #150218
* __->__ #150217
* #150216
* #145000
This PR fixes the logic for dealing with CUDA and ROCm streams whenever
we are trying to create a DLPack capsule from a tensor.
In summary, this PR:
- Uses the legacy default stream if `tensor.__dlpack__(stream=None)` is
called for a CUDA tensor.
- Errors if `tensor.__dlpack__(stream=2)` is called for a CUDA tensor:
PyTorch doesn't support the per-thread default stream.
- Errors if `tensor.__dlpack__(stream=stream)`, where `stream` is 1 or
2, is called for a CUDA tensor using ROCm.
For more details, see [the documentation][1].
[1]: https://data-apis.org/array-api/latest/API_specification/generated/array_api.array.__dlpack__.html
| true
|
2,957,027,921
|
[DLPack] add NumPy exchange tests.
|
ysiraichi
|
open
|
[
"open source",
"module: dlpack",
"topic: not user facing"
] | 1
|
COLLABORATOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #150691
* #150218
* #150217
* __->__ #150216
* #145000
This PR resolves an old TODO that requested NumPy DLPack exchange tests
once version 1.22 was required.
| true
|
2,957,026,565
|
TCPStoreLibUvBackend: support masterListenFd
|
d4l3k
|
closed
|
[
"oncall: distributed",
"Merged",
"ciflow/trunk",
"release notes: distributed (c10d)"
] | 4
|
MEMBER
|
This supports `masterListenFd` which is required for full compatibility with the non-libuv TCPStore. The code was just missing a `uv_listen` call and now it works just fine.
This is required to migrate the last remaining uses of TCPStore off of the non-libuv backend.
Test plan:
```
pytest -v test/distributed/test_store.py -k test_take_over_listen_socket
```
cc @H-Huang @awgu @kwen2501 @wanchaol @fegin @fduwjj @wz337 @wconstab @c-p-i-o
| true
|
2,957,020,339
|
Explicitly state that a test-infra branch cut is required
|
ZainRizvi
|
closed
|
[
"Merged",
"topic: not user facing"
] | 5
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #150214
* #150213
* #150211
* #150210
| true
|
2,957,001,746
|
Update reference for binary_build workflows
|
ZainRizvi
|
closed
|
[
"Merged",
"topic: not user facing"
] | 2
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #150214
* __->__ #150213
* #150211
* #150210
There hasn't been a circleci for a looooong time
| true
|
2,956,990,693
|
[ROCm] change preferred blas lib defaults
|
naromero77amd
|
closed
|
[
"module: rocm",
"open source",
"Merged",
"ciflow/trunk",
"topic: not user facing",
"ciflow/rocm"
] | 10
|
COLLABORATOR
|
Fixes #148883
Fixes #150155
Also adds `at::BlasBackend::Default`. Instinct cards prefer hipBLASLt; everything else prefers rocBLAS.
cc @jeffdaily @sunway513 @jithunnair-amd @pruthvistony @ROCmSupport @dllehr-amd @jataylo @hongxiayang
| true
|
2,956,964,838
|
Update referenced PRs for ecosystem library branch cut
|
ZainRizvi
|
closed
|
[
"Merged",
"topic: not user facing"
] | 2
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #150214
* #150213
* __->__ #150211
* #150210
The old PRs had a lot of extra changes in them which are no longer needed
| true
|
2,956,955,892
|
Mention the cherry-picker bot in the release docs
|
ZainRizvi
|
closed
|
[
"Merged",
"topic: not user facing"
] | 2
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #150214
* #150213
* #150211
* __->__ #150210
| true
|
2,956,922,136
|
[dynamo][hooks] use wrap_top_frame config for functions
|
yf225
|
closed
|
[
"Merged",
"topic: not user facing",
"module: dynamo",
"ciflow/inductor"
] | 3
|
CONTRIBUTOR
|
When torch.compile is applied to a module via `mod.compile(...)`, it's equivalent to `torch.compile(mod._call_impl)` which takes a different path than `OptimizedModule`. This PR ensures that the `wrap_top_frame` config can also take effect for the `torch.compile(mod._call_impl)` use case.
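For illustration, the two spellings mentioned above (a minimal sketch; `wrap_top_frame` is the dynamo config referenced in this PR, and whether it applies to the second path is exactly what this change adds):
```python
import torch

mod = torch.nn.Linear(4, 4)

# Path 1: wraps the module in an OptimizedModule.
opt_mod = torch.compile(mod)

# Path 2: equivalent to torch.compile(mod._call_impl), which previously
# bypassed the wrap_top_frame handling.
mod.compile()

x = torch.randn(2, 4)
print(opt_mod(x).shape, mod(x).shape)
```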
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #150209
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @chauhang @amjames
| true
|
2,956,920,684
|
DISABLED test_foreach_copy_with_multi_dtypes__foreach_copy_cuda_float32 (__main__.TestForeachCUDA)
|
pytorch-bot[bot]
|
open
|
[
"triaged",
"module: flaky-tests",
"skipped",
"module: mta"
] | 6
|
NONE
|
Platforms: linux
This test was disabled because it is failing in CI. See [recent examples](https://hud.pytorch.org/flakytest?name=test_foreach_copy_with_multi_dtypes__foreach_copy_cuda_float32&suite=TestForeachCUDA&limit=100) and the most recent trunk [workflow logs](https://github.com/pytorch/pytorch/runs/39596379608).
Over the past 3 hours, it has been determined flaky in 4 workflow(s) with 8 failures and 4 successes.
**Debugging instructions (after clicking on the recent samples link):**
DO NOT ASSUME THINGS ARE OKAY IF THE CI IS GREEN. We now shield flaky tests from developers so CI will thus be green but it will be harder to parse the logs.
To find relevant log snippets:
1. Click on the workflow logs linked above
2. Click on the Test step of the job so that it is expanded. Otherwise, the grepping will not work.
3. Grep for `test_foreach_copy_with_multi_dtypes__foreach_copy_cuda_float32`
4. There should be several instances run (as flaky tests are rerun in CI) from which you can study the logs.
<details><summary>Sample error message</summary>
```
Traceback (most recent call last):
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_device_type.py", line 1159, in test_wrapper
return test(*args, **kwargs)
File "/opt/conda/envs/py_3.10/lib/python3.10/unittest/mock.py", line 1833, in _inner
return f(*args, **kw)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_device_type.py", line 1420, in only_fn
return fn(slf, *args, **kwargs)
File "/var/lib/jenkins/workspace/test/test_foreach.py", line 1349, in test_foreach_copy_with_multi_dtypes
out = foreach_copy_(
File "/var/lib/jenkins/workspace/test/test_foreach.py", line 91, in __call__
assert mta_called == (expect_fastpath and (not zero_size)), (
AssertionError: mta_called=False, expect_fastpath=True, zero_size=False, self.func.__name__='_foreach_copy_', keys=('aten::_foreach_copy_', 'Unrecognized', 'cudaLaunchKernel', 'Lazy Function Loading', 'cudaDeviceSynchronize')
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 3153, in wrapper
method(*args, **kwargs)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 3153, in wrapper
method(*args, **kwargs)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_device_type.py", line 454, in instantiated_test
result = test(self, **param_kwargs)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 1612, in wrapper
fn(*args, **kwargs)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_device_type.py", line 1171, in test_wrapper
raise e_tracked from e
Exception: Caused by sample input at index 0: SampleInput(input=TensorList[Tensor[size=(20, 20), device="cuda:0", dtype=torch.float32], Tensor[size=(19, 19), device="cuda:0", dtype=torch.float32], Tensor[size=(18, 18), device="cuda:0", dtype=torch.float32], Tensor[size=(17, 17), device="cuda:0", dtype=torch.float32], Tensor[size=(16, 16), device="cuda:0", dtype=torch.float32], Tensor[size=(15, 15), device="cuda:0", dtype=torch.float32], Tensor[size=(14, 14), device="cuda:0", dtype=torch.float32], Tensor[size=(13, 13), device="cuda:0", dtype=torch.float32], Tensor[size=(12, 12), device="cuda:0", dtype=torch.float32], Tensor[size=(11, 11), device="cuda:0", dtype=torch.float32], Tensor[size=(10, 10), device="cuda:0", dtype=torch.float32], Tensor[size=(9, 9), device="cuda:0", dtype=torch.float32], Tensor[size=(8, 8), device="cuda:0", dtype=torch.float32], Tensor[size=(7, 7), device="cuda:0", dtype=torch.float32], Tensor[size=(6, 6), device="cuda:0", dtype=torch.float32], Tensor[size=(5, 5), device="cuda:0", dtype=torch.float32], Tensor[size=(4, 4), device="cuda:0", dtype=torch.float32], Tensor[size=(3, 3), device="cuda:0", dtype=torch.float32], Tensor[size=(2, 2), device="cuda:0", dtype=torch.float32], Tensor[size=(1, 1), device="cuda:0", dtype=torch.float32]], args=(TensorList[Tensor[size=(20, 20), device="cuda:0", dtype=torch.float32], Tensor[size=(19, 19), device="cuda:0", dtype=torch.float32], Tensor[size=(18, 18), device="cuda:0", dtype=torch.float32], Tensor[size=(17, 17), device="cuda:0", dtype=torch.float32], Tensor[size=(16, 16), device="cuda:0", dtype=torch.float32], Tensor[size=(15, 15), device="cuda:0", dtype=torch.float32], Tensor[size=(14, 14), device="cuda:0", dtype=torch.float32], Tensor[size=(13, 13), device="cuda:0", dtype=torch.float32], Tensor[size=(12, 12), device="cuda:0", dtype=torch.float32], Tensor[size=(11, 11), device="cuda:0", dtype=torch.float32], Tensor[size=(10, 10), device="cuda:0", dtype=torch.float32], Tensor[size=(9, 9), device="cuda:0", dtype=torch.float32], Tensor[size=(8, 8), device="cuda:0", dtype=torch.float32], Tensor[size=(7, 7), device="cuda:0", dtype=torch.float32], Tensor[size=(6, 6), device="cuda:0", dtype=torch.float32], Tensor[size=(5, 5), device="cuda:0", dtype=torch.float32], Tensor[size=(4, 4), device="cuda:0", dtype=torch.float32], Tensor[size=(3, 3), device="cuda:0", dtype=torch.float32], Tensor[size=(2, 2), device="cuda:0", dtype=torch.float32], Tensor[size=(1, 1), device="cuda:0", dtype=torch.float32]]), kwargs={}, broadcasts_input=False, name='')
To execute this test, run the following from the base repo dir:
PYTORCH_OPINFO_SAMPLE_INPUT_INDEX=0 python test/test_foreach.py TestForeachCUDA.test_foreach_copy_with_multi_dtypes__foreach_copy_cuda_float32
This message can be suppressed by setting PYTORCH_PRINT_REPRO_ON_FAILURE=0
```
</details>
Test file path: `test_foreach.py`
cc @clee2000 @crcrpar @mcarilli @janeyx99
| true
|
2,956,903,831
|
Missing decomp for ops.aten.linalg_vector_norm
|
guangy10
|
closed
|
[
"triaged",
"module: linear algebra",
"module: decompositions"
] | 13
|
CONTRIBUTOR
|
### 🐛 Describe the bug
This op is heavily used in many vision transformer models ([convnextv2](https://github.com/huggingface/optimum-executorch/actions/runs/14098303415/job/39489765027?pr=41), [swiftformer](https://github.com/huggingface/optimum-executorch/actions/runs/14098303415/job/39489784457?pr=41), [swinv2](https://github.com/huggingface/optimum-executorch/actions/runs/14098303415/job/39489765027?pr=41)); lowering those models to ExecuTorch fails because this op is neither in the core ATen opset nor covered by a decomp rule.
We should consider adding a decomp rule for it to unblock these models; a rough sketch follows below.
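A minimal sketch of what such a decomposition might compute, expressed with core ops only (illustrative; an actual rule would be registered through `torch._decomp` and handle the special cases):
```python
import torch

def vector_norm_decomp(x, ord=2.0, dim=None, keepdim=False, *, dtype=None):
    # p-norm written in terms of abs/pow/sum; special cases (ord=0, +/-inf,
    # complex dtypes, type promotion) are omitted for brevity.
    if dtype is not None:
        x = x.to(dtype)
    return (x.abs() ** ord).sum(dim=dim, keepdim=keepdim) ** (1.0 / ord)

t = torch.randn(4, 8)
assert torch.allclose(vector_norm_decomp(t), torch.linalg.vector_norm(t))
assert torch.allclose(vector_norm_decomp(t, dim=-1), torch.linalg.vector_norm(t, dim=-1))
```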
### Versions
trunk
cc @jianyuh @nikitaved @pearu @mruberry @walterddr @xwang233 @Lezcano @SherlockNoMad
| true
|
2,956,880,543
|
[ONNX] remove_assertion_nodes before decomp
|
justinchuby
|
closed
|
[
"open source",
"release notes: onnx"
] | 2
|
COLLABORATOR
|
Call remove_assertion_nodes before decomp because we see errors like `<class 'TypeError'>: cannot determine truth value of RelationalWhile executing %_assert_scalar_default : [num_users=0] = call_function[target=torch.ops.aten._assert_scalar.default](args = (%ge_2 Runtime assertion failed for expression u4 >= 0 on node 'ge_2') kwargs = {})`
| true
|
2,956,868,395
|
Fix documentation build errors caused by unsupported section titles
|
dscamiss
|
closed
|
[
"open source",
"Merged",
"ciflow/trunk",
"topic: not user facing"
] | 7
|
CONTRIBUTOR
|
Fixes #150134
Build with `make html` looks OK now:
```shell
reading sources... [100%] torch.compiler_get_started .. xpu
looking for now-outdated files... none found
pickling environment... done
checking consistency... done
preparing documents... done
writing output... [ 80%] generated/torch.nn.Softsign .. generated/torch.nn.modules.module.register_module_full_backward_writing output... [ 86%] generated/torch.nn.modules.module.register_module_module_registration_hook .. generated/torch.rwriting output... [100%] generated/torch.xpu.get_rng_state .. xpu
generating indices... genindex done
highlighting module code... [100%] typing
writing additional pages... search done
copying images... [100%] _static/img/torch_cuda_memory/allocator_state_history.png
copying static files... done
copying extra files... done
dumping search index in English (code: en)... done
dumping object inventory... done
build succeeded.
The HTML pages are in build/html.
```
New rendering looks like this:

| true
|
2,956,841,018
|
Add `Any` return annotation to `__getattr__` methods that return a union of types.
|
rchen152
|
closed
|
[
"Merged",
"ciflow/trunk",
"release notes: python_frontend",
"topic: improvements",
"ciflow/inductor"
] | 12
|
CONTRIBUTOR
|
Adds an `Any` return type annotation to `__getattr__` methods in `torch/_ops.py` that return a union of types. Attribute access returning a union of types can cause issues downstream because consumers would need to handle all of the possible types to make the type checker happy. This doesn't seem to matter today for mypy, presumably because `Any` is always inferred when a return type annotation is missing, but it still makes explicit what mypy is already doing implicitly.
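A minimal illustration of the pattern with a made-up class name (the actual edits are to the `__getattr__` methods in `torch/_ops.py`):
```python
from typing import Any, Callable, Union


class OpsNamespaceBefore:
    def __getattr__(self, name: str) -> Union["OpsNamespaceBefore", Callable[..., object]]:
        ...  # callers must narrow the union before using the attribute


class OpsNamespaceAfter:
    def __getattr__(self, name: str) -> Any:
        ...  # matches what mypy already infers when the annotation is missing
```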
| true
|
2,956,840,732
|
[Cmake] Make PyTorch buildable by CMake-4.x
|
malfet
|
closed
|
[
"module: cpu",
"Merged",
"release notes: quantization",
"release notes: build",
"topic: bug fixes",
"topic: build"
] | 8
|
CONTRIBUTOR
|
By turning on compatibility mode for protobuf, nnpack, PSimd and FP16, ittapi, TensorPipe and Gloo
Update CMake requirements
Revert 0ece461ccafe5649d2d0f058ff5477765fd56499 and b0901d62ae2c2e909f91401eacebf3731df20cbe to test that it actually works
TODO:
- Update/get rid of those libraries
Fixes https://github.com/pytorch/pytorch/issues/150149
cc @jgong5 @mingfeima @XiaobingSuper @sanchitintel @ashokei @jingxu10
| true
|
2,956,825,750
|
[ONNX] Support float4
|
justinchuby
|
open
|
[
"module: onnx",
"triaged"
] | 3
|
COLLABORATOR
|
float4 support was added in https://github.com/pytorch/pytorch/pull/148791. We should add it to onnx exporter as well.
1. https://github.com/pytorch/pytorch/blob/35ff5084e6a5bbf7c897840943ee3ac846ffaaf1/torch/onnx/_internal/exporter/_core.py#L116
2. https://github.com/pytorch/pytorch/blob/35ff5084e6a5bbf7c897840943ee3ac846ffaaf1/torch/onnx/_internal/exporter/_core.py#L52
| true
|
2,956,804,902
|
[fix] pin cmake to 3.31.6 in build requirements
|
k223kim
|
open
|
[
"triaged",
"open source",
"topic: not user facing"
] | 2
|
NONE
|
As per PR #150158, I believe that the build requirements should be updated.
| true
|
2,956,755,011
|
UNSTABLE s390 / linux-manylinux-2_28-py3-cpu-s390x / build
|
clee2000
|
closed
|
[
"module: ci",
"triaged",
"module: POWER",
"unstable"
] | 4
|
CONTRIBUTOR
|
> Please provide a brief reason on why you need to mark this job as unstable.
Failing on main branch: https://hud.pytorch.org/pytorch/pytorch/commit/d4da0e955e5a29942d631cddaf0fb7805bb27231#39568901452-box
AWS credentials have issues so sccache is failing?
```
2025-03-28T14:19:55.2818684Z ++ sccache --zero-stats
2025-03-28T14:19:56.5811986Z sccache: error: Server startup failed: cache storage failed to read: Unexpected (temporary) at read => loading credential to sign http request
2025-03-28T14:19:56.5821621Z
2025-03-28T14:19:56.5826095Z Context:
2025-03-28T14:19:56.5833750Z called: reqsign::LoadCredential
2025-03-28T14:19:56.5836942Z service: s3
2025-03-28T14:19:56.5839287Z path: .sccache_check
2025-03-28T14:19:56.5842551Z range: 0-
2025-03-28T14:19:56.5843936Z
2025-03-28T14:19:56.5844832Z Source:
2025-03-28T14:19:56.5850109Z error sending request for url (http://169.254.169.254/latest/api/token): operation timed out
```
cc @seemethere @malfet @pytorch/pytorch-dev-infra
| true
|
2,956,754,046
|
`asarray`: device does not propagate from input to output after `set_default_device`
|
crusaderky
|
open
|
[
"triaged",
"module: python array api",
"module: python frontend"
] | 9
|
NONE
|
### 🐛 Describe the bug
The documentation of `asarray` states:
> device ([torch.device](https://pytorch.org/docs/stable/tensor_attributes.html#torch.device), optional) – the device of the returned tensor. Default: None, **which causes the device of obj to be used**. Or, if obj is a Python sequence, the current default device will be used.
The described behaviour is coherent with the specification of the Array API standard.
This works as expected in practice: `asarray(x, device=None)` propagates the device of `x` to the output, _unless_ the user sets the default device (even if just to explicitly restate the current value). After that, `asarray(x, device=None)` disregards the device of `x` and converts everything to the default device.
```python
In [1]: import torch
In [2]: torch.get_default_device()
Out[2]: device(type='cpu')
In [3]: x = torch.asarray(0, device=torch.device('cuda'))
In [4]: torch.asarray(x).get_device()
Out[4]: 0 # OK
In [5]: torch.set_default_device('cpu')
In [6]: torch.asarray(x).get_device()
Out[6]: -1 # KO
```
### Versions
pytorch 2.6.0 conda-forge linux intel
cc @mruberry @rgommers @asmeurer @leofang @AnirudhDagar @asi1024 @emcastillo @kmaehashi @albanD
| true
|
2,956,683,316
|
[export] Symint support (nonstrict, Dim.DYNAMIC)
|
angelayi
|
closed
|
[
"Merged",
"ciflow/trunk",
"module: dynamo",
"ciflow/inductor",
"release notes: export"
] | 8
|
CONTRIBUTOR
|
Fixes https://github.com/pytorch/pytorch/issues/113682 only in the non-strict export case. Also we only support Dim.DYNAMIC/AUTO, not named-Dims
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @chauhang @amjames
| true
|
2,956,661,905
|
[AOTI] Always use oss schema for ExternKernelNodes serialization
|
yiming0416
|
closed
|
[
"fb-exported",
"Merged",
"ciflow/trunk",
"module: inductor",
"ciflow/inductor",
"release notes: export"
] | 15
|
CONTRIBUTOR
|
Summary: Added a field `protocol` to `ExternKernelNodes` and all the lowering pass will always use the oss schema to serialize external kernel nodes from now on.
Test Plan: CI
Differential Revision: D72020444
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
| true
|
2,956,661,873
|
[reland] Support torchbind in OSS proxy executor
|
yushangdi
|
closed
|
[
"fb-exported",
"Merged",
"ciflow/trunk",
"module: inductor",
"ciflow/inductor",
"release notes: export",
"ci-no-td"
] | 4
|
CONTRIBUTOR
|
Summary:
The original Diff D69500038 is reverted due to a false alarm on trunk health.
Implement torchbind support in OSSProxyExecutor.
Exactly the same as the implementation in FbProxyExecutor.
D69693697 - fbProxyExecutor
D69887230 - fbProxyExecutor but for torchbind method
D70746626 - Support None output type
Other changes:
- When generating the schema of the CallTorchBind HOP, the arg name of the torchbind object arg should be the same as the torchbind method's torchbind object arg (instead of `obj`).
- In `AOTIModelPackageLoader`, we extract everything in `data/constants` to the `tmp_dir/data/aot_inductor/<model>/` folder, so the torchbind objs exist in the same folder as the rest of the files (e.g. cpp, so). This is to be consistent with how files are packaged internally (more details in the internal Diff summary).
Note on using `filesystem`:
It seems there'll be [issues](https://github.com/pytorch/pytorch/pull/137209) with using the `filesystem` header on Linux, so here I use string manipulation instead of `filesystem::path`.
Test Plan:
```
test/inductor:torchbind -- -r torchbind_aoti
test/inductor:torchbind -- -r aot_compile
```
Differential Revision: D72063691
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
| true
|
2,956,639,063
|
Parallelize sort using libstdc++ parallel mode
|
annop-w
|
open
|
[
"module: cpu",
"triaged",
"open source",
"topic: not user facing"
] | 23
|
CONTRIBUTOR
|
Fixes #149977, #149979, #150094.
Previously, #149505 used libstdc++ parallel mode by enabling `-D_GLIBCXX_PARALLEL`. However, mixing source files compiled with and without parallel mode can lead to undefined behavior (see https://gcc.gnu.org/onlinedocs/libstdc++/manual/parallel_mode_using.html). We switch to using the specific parallel sort from `<parallel/algorithm>` when compiling with GCC. Note that using a `std::execution` policy would add a dependency on libtbb, so we decided to avoid that.
cc @jgong5 @mingfeima @XiaobingSuper @sanchitintel @ashokei @jingxu10
| true
|
2,956,581,466
|
Smoke Test - disable pypi package validation for binaries that package cuda libs
|
atalman
|
closed
|
[
"Merged",
"ciflow/binaries",
"topic: not user facing"
] | 3
|
CONTRIBUTOR
|
Smoke Test - disable pypi package validation for binaries that package cuda libs. These binaries do not install packages via pypi.
Should Resolve this from `linux-binary-manywheel / manywheel-py3_11-cuda12_6-full-test / test`:
```
Traceback (most recent call last):
File "/pytorch/.ci/pytorch/smoke_test/smoke_test.py", line 468, in <module>
main()
File "/pytorch/.ci/pytorch/smoke_test/smoke_test.py", line 462, in main
smoke_test_cuda(
File "/pytorch/.ci/pytorch/smoke_test/smoke_test.py", line 274, in smoke_test_cuda
compare_pypi_to_torch_versions(
File "/pytorch/.ci/pytorch/smoke_test/smoke_test.py", line 220, in compare_pypi_to_torch_versions
raise RuntimeError(f"Can't find {package} in PyPI for Torch: {torch_version}")
RuntimeError: Can't find cudnn in PyPI for Torch: 9.5.1
```
Link: https://github.com/pytorch/pytorch/actions/runs/14101221665/job/39505479587#step:15:982
| true
|
2,956,577,711
|
Pin cmake==3.31.6
|
pytorchbot
|
closed
|
[
"topic: not user facing"
] | 1
|
COLLABORATOR
|
I'm not sure if this is the right thing to do, but cmake 4.0.0 got released on pypi and our builds are failing with it.
Example:
https://hud.pytorch.org/pytorch/pytorch/commit/aa70d62041c28fe35c416aa932b32ef0e4d5bc33#39555975425-box
I guess we have to go change all the cmake_minimum_required to >=3.5?
Backwards compat is still failing because it builds against the base commit, which this PR can't really change until it gets merged, but at least the manywheel binary builds got past where they were originally failing.
Also pin the conda installation, but the most recent version on conda is 3.31.2
| true
|
2,956,522,492
|
Update torch.compile issue template
|
eellison
|
closed
|
[
"Merged",
"topic: not user facing"
] | 3
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #150192
* #149947
| true
|
2,956,505,021
|
LibTorch for Windows on Arm as Arm64EC variant?
|
1enn0
|
closed
|
[
"module: windows",
"module: cpp",
"triaged",
"module: arm"
] | 3
|
NONE
|
### 🐛 Describe the bug
Hi, is there any plan to support building LibTorch for Windows on Arm as Arm64EC at some point in the future?
I have tried building it locally but have given up. It seems many dependencies do not support it, either (e.g. `XNNPACK`, `cpuinfo`, ...).
### Versions
`25309a17f0f293573f3e02f61b56904f4f666979`
cc @peterjc123 @mszhanyi @skyline75489 @nbcsm @iremyux @Blackhex @jbschlosser @malfet @snadampal @milpuz01 @aditew01 @nikhil-arm @fadara01
| true
|
2,956,414,829
|
auto_functionalize HOPs are cacheable, also we should check that we allow for manual overrides for cacheability
|
zou3519
|
open
|
[
"triaged",
"compile-cache",
"vllm-compile"
] | 0
|
CONTRIBUTOR
|
https://github.com/vllm-project/vllm/blob/3b00ff91380044fa409612401309b9cb6a82685f/vllm/compilation/compiler_interface.py#L255-L262
cc @oulgen @jamesjwu @masnesral
| true
|
2,956,389,005
|
Update CPUAllocator.cpp
|
AhmedZekoCodes
|
closed
|
[
"open source"
] | 3
|
NONE
|
Fixes #ISSUE_NUMBER
| true
|
2,956,320,082
|
[AOTI] Emit Triton kernels as comment
|
desertfire
|
closed
|
[
"Merged",
"ciflow/trunk",
"topic: not user facing",
"module: inductor",
"ciflow/inductor"
] | 4
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #150188
Summary: Emit the corresponding Triton kernel code as a comment in each call_triton_ wrapper function, for easier debugging.
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
Differential Revision: [D72178907](https://our.internmc.facebook.com/intern/diff/D72178907)
| true
|
2,956,298,627
|
at::BlasBackend::Ck does not handle all ROCm BLAS gpus
|
trixirt
|
open
|
[
"high priority",
"module: rocm",
"triaged",
"module: regression"
] | 2
|
NONE
|
### 🐛 Describe the bug
The ck backend depends on composable_kernel; the set of GPUs it is known to work for is
https://github.com/ROCm/composable_kernel/blob/develop/CMakeLists.txt#L171
The generic blas backend depends on rocBLAS; the set of GPUs it is known to work for is
https://github.com/ROCm/rocBLAS/blob/develop/CMakeLists.txt#L115
These sets are different.
rocBLAS additionally has gfx1010, gfx1012, gfx1151.
The Linux distros Fedora and openSUSE support additional rocBLAS GPUs beyond the official AMD set:
gfx1031, gfx1035, gfx1103, gfx1150, gfx1152.
The use of ck is similar to hipblaslt in that it is not supported on all GPUs,
but unlike hipblaslt, where the support is part of the library and can be disabled at runtime here
https://github.com/pytorch/pytorch/blob/main/aten/src/ATen/Context.cpp#L333
ck is unconditionally part of the build, with the ck templates/headers used directly instead of through the ck library.
This shows up as build failures similar to those seen when trying to build the ck library for a GPU without support:
/usr/include/ck/utility/amd_buffer_addressing.hpp:32:48: error: use of undeclared identifier 'CK_BUFFER_RESOURCE_3RD_DWORD'
32 | wave_buffer_resource.config(Number<3>{}) = CK_BUFFER_RESOURCE_3RD_DWORD;
| ^
The user can set the GPUs to build for with PYTORCH_ROCM_ARCH; there needs to be a build check that disables building and using the Ck backend if the user's list is not compatible with ck.
### Versions
This is a build problem.
cc @ezyang @gchanan @zou3519 @kadeng @msaroufim @jeffdaily @sunway513 @jithunnair-amd @pruthvistony @ROCmSupport @dllehr-amd @jataylo @hongxiayang @naromero77amd
| true
|
2,956,287,490
|
`MaskedTensor` 10x slower than `Tensor` compared with `nn.functional.softmax`
|
famura
|
open
|
[
"module: performance",
"triaged",
"module: masked operators"
] | 1
|
NONE
|
## Context
I was reading the docs on the new `MaskedTensor` feature and wanted to get some insight into whether it might be useful in our transformer-based model. Thus, I started by pushing a demo input through the `softmax` function: once with a mask where certain values are set to `-inf` (typical for the attention mask in transformers), and once with the new `MaskedTensor` feature. My hope was that the masked tensor would be faster since the masked values are skipped, but the opposite is the case.
Am I overlooking something?
## MWE
Here is the code to reproduce my results
```py
import torch
from torch.nn.functional import softmax
from torch.masked import MaskedTensor
import torch.utils.benchmark as benchmark
# Define parameters
DEBUG = False
batch_size = 1 if DEBUG else 32
seq_len = 6 if DEBUG else 200
embed_dim = 4 if DEBUG else 64
# Set the seed
torch.manual_seed(0)
# Generate random input tensor
input_tensor = torch.randn(batch_size, seq_len, embed_dim)
# Generate a random mask
mask = torch.randint(0, 2, (batch_size, seq_len, embed_dim), dtype=torch.bool)
# Create the "normal" (tensor as it would be seen by our model)
input_with_minf = input_tensor.masked_fill(~mask, float("-inf"))
# Create masked tensor
masked_input = MaskedTensor(input_tensor, mask)
# Define the benchmarking tasks
t1 = benchmark.Timer(
stmt="softmax(input_with_minf, dim=-1, dtype=torch.float32)",
globals={"softmax": softmax, "input_with_minf": input_with_minf},
)
t2 = benchmark.Timer(
stmt="softmax(masked_input, dim=-1, dtype=torch.float32)",
globals={"softmax": softmax, "masked_input": masked_input},
)
# Run the benchmarks (run multiple iterations)
n_iter = 1 if DEBUG else 1000
normal_result = t1.timeit(n_iter)
masked_result = t2.timeit(n_iter)
# Check if outputs are the same
softmax_output_normal = softmax(input_with_minf, dim=-1, dtype=torch.float32)
softmax_output_masked = softmax(masked_input, dim=-1, dtype=torch.float32)
assert isinstance(softmax_output_masked, MaskedTensor)
softmax_output_masked = softmax_output_masked.to_tensor(value=0.0) # 0.0 is softmax-specific
outputs_match = torch.allclose(softmax_output_normal, softmax_output_masked, equal_nan=True)
# Print results
print(f"Normal Tensor median time: {normal_result.median:.6f} s")
print(f"Masked Tensor median time: {masked_result.median:.6f} s")
print(f"Outputs Match: {outputs_match}")
```
## My Output (except the warnings about torch.masked being in prototype phase)
```sh
Normal Tensor median time: 0.000589 s
Masked Tensor median time: 0.006134 s
Outputs Match: True
```
cc @msaroufim
| true
|
2,956,141,490
|
Pin cmake to 3.31.2 for windows conda install
|
atalman
|
closed
|
[
"Merged",
"ciflow/binaries",
"topic: not user facing"
] | 5
|
CONTRIBUTOR
|
Trying to fix nightly failures
Cmake 4.0 update https://pypi.org/project/cmake/4.0.0/ broke nightly builds
You can see it here: https://hud.pytorch.org/hud/pytorch/pytorch/main/1?per_page=50&name_filter=cuda11_8-build
and here: https://hud.pytorch.org/hud/pytorch/pytorch/nightly/1?per_page=50&name_filter=
This fixes Windows builds. Linux and macOS were already fixed.
| true
|
2,956,073,437
|
Backed size oblivious not working as expected
|
laithsakka
|
closed
|
[
"triaged",
"module: dynamic shapes"
] | 2
|
CONTRIBUTOR
|
```
with torch.fx.experimental._config.patch(backed_size_oblivious=True):
@torch.compile(dynamic=True, fullgraph=True)
def func3(a, b):
if guard_size_oblivious(a.size()[0]==1):
return b*10
else:
return b*20
# always go to the true branch.
print(func3(torch.tensor([1]), torch.tensor([1])))
self.assertEqual(func3(torch.tensor([1]), torch.tensor([1])), torch.tensor([20]))
```
I expect the output to be 20, but it's 10.
cc @chauhang @penguinwu @ezyang @bobrenjc93 @pianpwk
| true
|
2,956,019,392
|
`differentiable` leads an in-place error in `torch.optim.Adam()` and `torch.optim.AdamW()`
|
ILCSFNO
|
closed
|
[
"module: optimizer",
"triaged"
] | 2
|
CONTRIBUTOR
|
### 🐛 Describe the bug
The docs of [torch.optim.Adam()](https://pytorch.org/docs/stable/generated/torch.optim.Adam.html#torch.optim.Adam) and [torch.optim.AdamW()](https://pytorch.org/docs/stable/generated/torch.optim.AdamW.html#torch.optim.AdamW) show their shared description as below:
For `torch.optim.Adam()`:
https://github.com/pytorch/pytorch/blob/d4da0e955e5a29942d631cddaf0fb7805bb27231/torch/optim/adam.py#L332
For `torch.optim.AdamW()`:
https://github.com/pytorch/pytorch/blob/d4da0e955e5a29942d631cddaf0fb7805bb27231/torch/optim/adamw.py#L116
where `differentiable` is described here:
https://github.com/pytorch/pytorch/blob/d4da0e955e5a29942d631cddaf0fb7805bb27231/torch/optim/optimizer.py#L277-L281
It shows that argument `differentiable` allows gradient propagation through the optimizer step but doesn't inherently cause exceptions unless specific unsupported operations are attempted.
However, the code below raises an error:
### Repro
```python
import torch
import torch.nn as nn
model = nn.Sequential(nn.Linear(5, 10), nn.ReLU(), nn.Linear(10, 5), nn.ReLU())
optimizer = torch.optim.Adam(model.parameters(), lr=0.01, differentiable=True)
# optimizer = torch.optim.AdamW(model.parameters(), lr=0.01, differentiable=True)
x = torch.randn(100, 5)
y = torch.randn(100, 5)
criterion = nn.MSELoss()
optimizer.zero_grad()
loss = criterion(model(x), y)
loss.backward()
optimizer.step()
```
### Output
```txt
RuntimeError: a leaf Variable that requires grad is being used in an in-place operation.
```
By tracing, the error may be caused by an in-place operation in `_single_tensor_adam()`:
### Tracing
https://github.com/pytorch/pytorch/blob/d4da0e955e5a29942d631cddaf0fb7805bb27231/torch/optim/adam.py#L345
### Suggestions
To avoid the in-place error when `differentiable=True` is set, I suggest the following change:
* Replace the in-place operation in `_single_tensor_adam()` with a non-in-place operation
Thanks for noting!
### Versions
Nightly
cc @vincentqb @jbschlosser @albanD @janeyx99 @crcrpar
| true
|
2,955,950,023
|
New Sampler: DistributedWeightedRandomSampler
|
dani-capellan
|
open
|
[
"triaged",
"open source",
"release notes: dataloader"
] | 5
|
NONE
|
### Summary
This PR introduces `DistributedWeightedRandomSampler`, which combines the functionality of `WeightedRandomSampler `and `DistributedSampler`. This sampler enables weighted sampling across multiple distributed processes, ensuring each process receives a balanced subset of the weighted samples.
### Motivation
Currently, PyTorch does not provide a straightforward way to use `WeightedRandomSampler` in a distributed setting. This has been a common issue; see the discussions below:
- [PyTorch Forums: WeightedRandomSampler + DistributedSampler](https://discuss.pytorch.org/t/weightedrandomsampler-distributedsampler/52817)
- [PyTorch Lightning Issue #10946](https://github.com/Lightning-AI/pytorch-lightning/issues/10946)
### Expected Usage
```
import torch
from torch.utils.data import DataLoader
from torch.utils.data.distributed import DistributedWeightedRandomSampler
dataset = list(range(10))
weights = [0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9, 1.0]
sampler = DistributedWeightedRandomSampler(weights, num_samples=5, num_replicas=2, rank=0)
loader = DataLoader(dataset, sampler=sampler, batch_size=2)
start_epoch, n_epochs = 0, 3  # example values for this snippet
for epoch in range(start_epoch, n_epochs):
    sampler.set_epoch(epoch)  # reshuffle deterministically per epoch
    for batch in loader:
        print(batch)  # Each process gets a subset of the sampled indices
```
| true
|
2,955,943,691
|
`out` argument should in `no_grad` mode
|
ILCSFNO
|
closed
|
[
"module: docs",
"triaged",
"actionable",
"module: python frontend"
] | 6
|
CONTRIBUTOR
|
### 📚 The doc issue
As seen in #146676, the issue below also needs attention for the `out` argument.
This issue was found while testing `torch.add()` together with `torch.full()`, but the functions mentioned in #146676 remain affected by the issue below as well.
The docs of `torch.full()`, `torch.add()`, `torch.sub()`, `torch.mul()` and `torch.div()` show their descriptions as below:
<details><summary>Doc Details</summary>
#### torch.full()
https://github.com/pytorch/pytorch/blob/d4da0e955e5a29942d631cddaf0fb7805bb27231/torch/_torch_docs.py#L12153-L12179
#### torch.add()
https://github.com/pytorch/pytorch/blob/d4da0e955e5a29942d631cddaf0fb7805bb27231/torch/_torch_docs.py#L333-L379
#### torch.sub()
https://github.com/pytorch/pytorch/blob/d4da0e955e5a29942d631cddaf0fb7805bb27231/torch/_torch_docs.py#L10575-L10605
#### torch.mul()
https://github.com/pytorch/pytorch/blob/d4da0e955e5a29942d631cddaf0fb7805bb27231/torch/_torch_docs.py#L7524-L7570
#### torch.div()
https://github.com/pytorch/pytorch/blob/d4da0e955e5a29942d631cddaf0fb7805bb27231/torch/_torch_docs.py#L3826-L3892
</details>
And their shared argument `out` shows its description here:
https://github.com/pytorch/pytorch/blob/d4da0e955e5a29942d631cddaf0fb7805bb27231/torch/_torch_docs.py#L37
The code below shows an error I met:
### Minified Repro
```python
import torch
input_data = torch.randn(2, 3)
output = torch.full(size=(2, 3), fill_value=100, dtype=input_data.dtype, requires_grad=True)
torch.add(input_data, output, out=output)
# torch.sub(input_data, output, out=output)
# torch.mul(input_data, output, out=output)
# torch.div(input_data, output, out=output)
```
### Output
```txt
RuntimeError: add(): functions with out=... arguments don't support automatic differentiation, but one of the arguments requires grad.
```
In all, I accept that it should raise an error, but the documentation could say more about this.
That is, the parameter `out` needs a tensor that does not require grad (i.e., in `no_grad` mode), not one with `requires_grad=True`.
Suggestions are shown below in detail.
Thanks for noting!
### Suggest a potential alternative/fix
I suggest changing the description of `out`:
from:
https://github.com/pytorch/pytorch/blob/d4da0e955e5a29942d631cddaf0fb7805bb27231/torch/_torch_docs.py#L37
to:
```python
out (Tensor, optional): the output tensor, which should be in `no_grad` mode.
```
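For illustration, a working variant of the repro where `out` does not require grad (assumption behind the suggested wording: the only requirement is that `out` must not require grad):
```python
import torch

input_data = torch.randn(2, 3)
out = torch.full(size=(2, 3), fill_value=100.0, dtype=input_data.dtype)  # requires_grad defaults to False
torch.add(input_data, out, out=out)  # succeeds: `out` does not require grad
print(out)
```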
cc @svekars @sekyondaMeta @AlannaBurke @albanD
| true
|
2,955,901,556
|
[ROCm][Windows] Fix torchvision build with ROCm 6.4 on windows
|
tvukovic-amd
|
closed
|
[
"module: rocm",
"open source",
"Merged",
"ciflow/trunk",
"topic: not user facing"
] | 6
|
CONTRIBUTOR
|
Since hipcc files and calls are restructured in HIP SDK 6.4, a case for calling hipcc.exe is added for building torchvision with HIP SDK 6.4 on Windows.
cc @jeffdaily @sunway513 @jithunnair-amd @pruthvistony @ROCmSupport @dllehr-amd @jataylo @hongxiayang @naromero77amd
| true
|
2,955,887,800
|
Argument `pin_memory` works when cuda device in func `torch.empty_strided()`
|
ILCSFNO
|
closed
|
[
"module: docs",
"triaged",
"module: python frontend"
] | 2
|
CONTRIBUTOR
|
### 🐛 Describe the bug
The docs of [torch.empty_strided()](https://pytorch.org/docs/stable/generated/torch.empty_strided.html#torch-empty-strided) shows its description as below:
https://github.com/pytorch/pytorch/blob/d4da0e955e5a29942d631cddaf0fb7805bb27231/torch/_torch_docs.py#L12085
where pin_memory is defined here:
https://github.com/pytorch/pytorch/blob/d4da0e955e5a29942d631cddaf0fb7805bb27231/torch/_torch_docs.py#L112-L113
The API explicitly states that `pin_memory` works only for CPU tensors. If CUDA is available (device="cuda"), `pin_memory=True` should be silently ignored. If CPU is used, `pin_memory=True` is valid and works. In both cases, no exception should occur.
But here are some repros:
### Minified Repro
```python
import torch
input_size = (2, 3)
input_stride = (1, 2)
dtype = torch.float32
layout = torch.strided
device = torch.device('cuda:0')
requires_grad = False
pin_memory = True
result = torch.empty_strided(input_size, input_stride, dtype=dtype, layout=layout, device=device, requires_grad=requires_grad, pin_memory=pin_memory)
```
### Output
```txt
RuntimeError: Only dense CPU tensors can be pinned
```
So `pin_memory` is also honored for a CUDA device, but only in the sense that it leads to an error.
### Suggestions
There are two possible fixes; either would work:
* Change the description from:
https://github.com/pytorch/pytorch/blob/d4da0e955e5a29942d631cddaf0fb7805bb27231/torch/_torch_docs.py#L112-L113
to:
```txt
pin_memory (bool, optional): If set, returned tensor would be allocated in the pinned
memory. Works only for CPU tensors, otherwise do not set it to True. Default: ``False``.
```
* Fix the code to ignore the value of the `pin_memory` argument when the device is CUDA
Either would be fine. Thanks for noting!
### Versions
Nightly
cc @svekars @sekyondaMeta @AlannaBurke @albanD
| true
|
2,955,876,272
|
support guard or false/true in user code and add tests
|
laithsakka
|
closed
|
[
"Merged",
"ciflow/trunk",
"topic: not user facing",
"module: dynamo",
"ciflow/inductor"
] | 6
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #150231
* __->__ #150178
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @chauhang @amjames
| true
|
2,955,825,356
|
Aborted (core dumped)
|
Cookiee235
|
closed
|
[
"needs reproduction",
"module: crash",
"module: cpu",
"module: error checking",
"triaged",
"module: linear algebra"
] | 6
|
CONTRIBUTOR
|
### 🐛 Describe the bug
```python
import torch
class DeterministicModel(torch.nn.Module):
def __init__(self):
super().__init__()
self.linear = torch.nn.Linear(2, 2, bias=False)
with torch.no_grad():
self.linear.weight.copy_(torch.eye(2))
def forward(self, x):
block_diag = torch.block_diag(x, x)
LD = torch.tensor([[1.0, 0.0], [0.0, 1.0]])
pivots = torch.tensor([0, 0], dtype=torch.int32)
B = torch.tensor([[1.0], [1.0]])
solution = torch.linalg.ldl_solve(LD, pivots, B)
unique_solution = torch.unique_consecutive(solution.flatten())
return block_diag, solution, unique_solution
model = DeterministicModel()
inputs = torch.eye(2)
res = model(inputs)
```
free(): invalid next size (fast)
Aborted (core dumped)
### Versions
PyTorch version: 2.7.0.dev20250308+cu126
Is debug build: False
CUDA used to build PyTorch: 12.6
ROCM used to build PyTorch: N/A
OS: AlmaLinux 9.4 (Seafoam Ocelot) (x86_64)
GCC version: (GCC) 11.4.1 20231218 (Red Hat 11.4.1-3)
Clang version: 17.0.6 (AlmaLinux OS Foundation 17.0.6-5.el9)
CMake version: version 3.26.5
Libc version: glibc-2.34
Python version: 3.13.0 | packaged by Anaconda, Inc. | (main, Oct 7 2024, 21:29:38) [GCC 11.2.0] (64-bit runtime)
Python platform: Linux-5.14.0-427.37.1.el9_4.x86_64-x86_64-with-glibc2.34
Is CUDA available: True
CUDA runtime version: 12.6.77
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration:
GPU 0: NVIDIA GeForce RTX 3090
GPU 1: NVIDIA GeForce RTX 3090
GPU 2: NVIDIA GeForce RTX 3090
GPU 3: NVIDIA GeForce RTX 3090
Nvidia driver version: 560.35.03
cuDNN version: Could not collect
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 43 bits physical, 48 bits virtual
Byte Order: Little Endian
CPU(s): 64
On-line CPU(s) list: 0-63
Vendor ID: AuthenticAMD
Model name: AMD Ryzen Threadripper PRO 3975WX 32-Cores
CPU family: 23
Model: 49
Thread(s) per core: 2
Core(s) per socket: 32
Socket(s): 1
Stepping: 0
Frequency boost: enabled
CPU(s) scaling MHz: 80%
CPU max MHz: 4368.1641
CPU min MHz: 2200.0000
BogoMIPS: 7000.25
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ht syscall nx mmxext fxsr_opt pdpe1gb rdtscp lm constant_tsc rep_good nopl nonstop_tsc cpuid extd_apicid aperfmperf rapl pni pclmulqdq monitor ssse3 fma cx16 sse4_1 sse4_2 x2apic movbe popcnt aes xsave avx f16c rdrand lahf_lm cmp_legacy svm extapic cr8_legacy abm sse4a misalignsse 3dnowprefetch osvw ibs skinit wdt tce topoext perfctr_core perfctr_nb bpext perfctr_llc mwaitx cpb cat_l3 cdp_l3 hw_pstate ssbd mba ibpb stibp vmmcall fsgsbase bmi1 avx2 smep bmi2 cqm rdt_a rdseed adx smap clflushopt clwb sha_ni xsaveopt xsavec xgetbv1 xsaves cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local clzero irperf xsaveerptr rdpru wbnoinvd amd_ppin arat npt lbrv svm_lock nrip_save tsc_scale vmcb_clean flushbyasid decodeassists pausefilter pfthreshold avic v_vmsave_vmload vgif v_spec_ctrl umip rdpid overflow_recov succor smca sev sev_es
Virtualization: AMD-V
L1d cache: 1 MiB (32 instances)
L1i cache: 1 MiB (32 instances)
L2 cache: 16 MiB (32 instances)
L3 cache: 128 MiB (8 instances)
NUMA node(s): 1
NUMA node0 CPU(s): 0-63
Vulnerability Gather data sampling: Not affected
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Not affected
Vulnerability Retbleed: Mitigation; untrained return thunk; SMT enabled with STIBP protection
Vulnerability Spec rstack overflow: Mitigation; Safe RET
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Retpolines, IBPB conditional, STIBP always-on, RSB filling, PBRSB-eIBRS Not affected
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Versions of relevant libraries:
[pip3] numpy==2.2.3
[pip3] nvidia-cublas-cu12==12.6.4.1
[pip3] nvidia-cuda-cupti-cu12==12.6.80
[pip3] nvidia-cuda-nvrtc-cu12==12.6.77
[pip3] nvidia-cuda-runtime-cu12==12.6.77
[pip3] nvidia-cudnn-cu12==9.5.1.17
[pip3] nvidia-cufft-cu12==11.3.0.4
[pip3] nvidia-curand-cu12==10.3.7.77
[pip3] nvidia-cusolver-cu12==11.7.1.2
[pip3] nvidia-cusparse-cu12==12.5.4.2
[pip3] nvidia-cusparselt-cu12==0.6.3
[pip3] nvidia-nccl-cu12==2.25.1
[pip3] nvidia-nvjitlink-cu12==12.6.85
[pip3] nvidia-nvtx-cu12==12.6.77
[pip3] pytorch-triton==3.2.0+git4b3bb1f8
[pip3] torch==2.7.0.dev20250308+cu126
[pip3] torchaudio==2.6.0.dev20250308+cu126
[pip3] torchvision==0.22.0.dev20250308+cu126
[pip3] triton==3.2.0
[conda] numpy 2.2.3 pypi_0 pypi
[conda] nvidia-cublas-cu12 12.6.4.1 pypi_0 pypi
[conda] nvidia-cuda-cupti-cu12 12.6.80 pypi_0 pypi
[conda] nvidia-cuda-nvrtc-cu12 12.6.77 pypi_0 pypi
[conda] nvidia-cuda-runtime-cu12 12.6.77 pypi_0 pypi
[conda] nvidia-cudnn-cu12 9.5.1.17 pypi_0 pypi
[conda] nvidia-cufft-cu12 11.3.0.4 pypi_0 pypi
[conda] nvidia-curand-cu12 10.3.7.77 pypi_0 pypi
[conda] nvidia-cusolver-cu12 11.7.1.2 pypi_0 pypi
[conda] nvidia-cusparse-cu12 12.5.4.2 pypi_0 pypi
[conda] nvidia-cusparselt-cu12 0.6.3 pypi_0 pypi
[conda] nvidia-nccl-cu12 2.25.1 pypi_0 pypi
[conda] nvidia-nvjitlink-cu12 12.6.85 pypi_0 pypi
[conda] nvidia-nvtx-cu12 12.6.77 pypi_0 pypi
[conda] pytorch-triton 3.2.0+git4b3bb1f8 pypi_0 pypi
[conda] torch 2.7.0.dev20250308+cu126 pypi_0 pypi
[conda] torchaudio 2.6.0.dev20250308+cu126 pypi_0 pypi
[conda] torchvision 0.22.0.dev20250308+cu126 pypi_0 pypi
[conda] triton 3.2.0 pypi_0 pypi
cc @jgong5 @mingfeima @XiaobingSuper @sanchitintel @ashokei @jingxu10 @malfet @jianyuh @nikitaved @pearu @mruberry @walterddr @xwang233 @Lezcano
| true
|
2,955,720,716
|
Optimize SVE embedding performance
|
annop-w
|
closed
|
[
"caffe2",
"triaged",
"open source",
"module: arm",
"Merged",
"ciflow/trunk",
"release notes: python_frontend",
"topic: performance",
"skip-pr-sanity-checks"
] | 23
|
CONTRIBUTOR
|
Change the loop unrolling strategy. Previously, the script only unrolled the inner loop over block_size, and only when the block size is a multiple of the vector length. This version instead unrolls the outer loop, which reduces the number of loads/stores for accumulation into the output array and improves performance for cases where the block size is not a multiple of the vector length.
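To make the load/store argument concrete, here is a scalar Python sketch (illustrative only — the real kernel is vectorized SVE C++, and the sizes and `unroll` factor here are arbitrary assumptions) contrasting the two accumulation orders:
```python
# Illustrative-only sketch of the two accumulation orders for one EmbeddingBag output row.
import numpy as np

weights = np.random.rand(1000, 24).astype(np.float32)  # block_size=24, not a multiple of the vector length
indices = np.random.randint(0, 1000, size=100)

def inner_loop_unrolled(weights, indices):
    # Old strategy: sweep the whole block for every index, so the output
    # accumulator is read/written once per index per element.
    out = np.zeros(weights.shape[1], dtype=np.float32)
    for idx in indices:
        for d in range(weights.shape[1]):
            out[d] += weights[idx, d]
    return out

def outer_loop_unrolled(weights, indices, unroll=4):
    # New strategy: process several indices per pass over the block, so the
    # accumulator is loaded/stored roughly len(indices)/unroll times instead.
    out = np.zeros(weights.shape[1], dtype=np.float32)
    for i in range(0, len(indices), unroll):
        chunk = indices[i:i + unroll]
        for d in range(weights.shape[1]):
            acc = out[d]
            for idx in chunk:
                acc += weights[idx, d]
            out[d] = acc
    return out

assert np.allclose(inner_loop_unrolled(weights, indices),
                   outer_loop_unrolled(weights, indices), atol=1e-4)
```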
Benchmarking script:
```python
# SPDX-FileCopyrightText: Copyright 2025 Arm Limited and/or its affiliate <open-source-office@arm.com>
# SPDX-License-Identifier: BSD-3-Clause
import torch
import torch.nn as nn
import numpy as np
import time
import sys
np.random.seed(0)
torch.manual_seed(0)
num_embeddings = 400000
embedding_dim = int(sys.argv[1])
multi_hot = 100
batch_size = 400
nrun = 1000
class SimpleEmbeddingBagModel(nn.Module):
def __init__(self, num_embeddings, embedding_dim):
super(SimpleEmbeddingBagModel, self).__init__()
weights = torch.from_numpy((np.random.random_sample((num_embeddings, embedding_dim)) + 1).astype(np.float32)).to(torch.float16)
# Defining the EmbeddingBag layer
self.embedding_bag = torch.nn.EmbeddingBag(num_embeddings, embedding_dim, _weight=weights,
mode='sum', include_last_offset=True, dtype=torch.float32)
def forward(self, input, offsets):
# Forward pass through the EmbeddingBag layer
result32 = self.embedding_bag(input, offsets, per_sample_weights=None)
return result32
# Instantiate the model
model = SimpleEmbeddingBagModel(num_embeddings=num_embeddings, embedding_dim=embedding_dim)
model.eval()
# Example input
input_tensor = torch.randint(0, num_embeddings, (batch_size * multi_hot,), dtype=torch.long)
offsets = torch.tensor(range(0, batch_size * multi_hot + 1, multi_hot))
with torch.no_grad():
# warm up
output32 = model(input_tensor, offsets)
ti = time.time_ns()
for i in range(nrun):
_ = model(input_tensor, offsets)
tf = time.time_ns()
print("{:3d} {:.3E}".format(embedding_dim, (tf-ti)/nrun/1.e6))
```
Speedup on NEOVERSEV1 with 1 thread

cc @malfet @snadampal @milpuz01 @aditew01 @nikhil-arm @fadara01 @albanD
| true
|
2,955,648,923
|
Nightly Errors - 2025-03-28 - Windows
|
ozanMSFT
|
closed
|
[
"module: windows"
] | 1
|
COLLABORATOR
|
### 🐛 Describe the bug
Windows wheels are failing for `2025-03-28`:
https://github.com/pytorch/pytorch/actions/runs/14124155298/job/39569597736
---
Error:
```cmake
CMake Error at third_party/protobuf/cmake/CMakeLists.txt:2 (cmake_minimum_required):
Compatibility with CMake < 3.5 has been removed from CMake.
Update the VERSION argument <min> value. Or, use the <min>...<max> syntax
to tell CMake that the project requires at least <min> but has been updated
to work with policies introduced by <max> or earlier.
Or, add -DCMAKE_POLICY_VERSION_MINIMUM=3.5 to try configuring anyway.
-- Configuring incomplete, errors occurred!
```
### Versions
torch-2.8.0.dev20250328+cpu
cc @peterjc123 @mszhanyi @skyline75489 @nbcsm @iremyux @Blackhex
| true
|
2,955,567,782
|
GCC14 internal compiler error on aarch64-linux
|
prusnak
|
closed
|
[
"module: build",
"triaged",
"module: arm"
] | 5
|
NONE
|
### 🐛 Describe the bug
when trying to compile torch-2.6.0 on aarch64-linux:
```
[2289/5466] Building CXX object caffe2/CMakeFiles/torch_cpu.dir/__/aten/src/ATen/native/cpu/Activation.cpp.DEFAULT.cpp.oLT.cpp.o
FAILED: caffe2/CMakeFiles/torch_cpu.dir/__/aten/src/ATen/native/cpu/Activation.cpp.DEFAULT.cpp.o
/nix/store/6cpyrcg1zi4a89bg80vfqfdwb6nr81fr-gcc-wrapper-14-20241116/bin/g++ -DAT_BUILD_ARM_VEC256_WITH_SLEEF -DAT_PER_OPERATOR_HEADERS -DBUILD_ONEDNN_GRAPH -DCAFFE2_BUILD_MAIN_LIB -DCAFFE2_PERF_WITH_SVE=1 -DCPUINFO_SUPPORTED_PLATFORM=1 -DFLASHATTENTION_DISABLE_ALIBI -DFMT_HEADER_ONLY=1 -DFXDIV_USE_INLINE_ASSEMBLY=0 -DHAVE_MALLOC_USABLE_SIZE=1 -DHAVE_MMAP=1 -DHAVE_SHM_OPEN=1 -DHAVE_SHM_UNLINK=1 -DMINIZ_DISABLE_ZIP_READER_CRC32_CHECKS -DONNXIFI_ENABLE_EXT=1 -DONNX_ML=1 -DONNX_NAMESPACE=onnx_torch -DUSE_C10D_GLOO -DUSE_DISTRIBUTED -DUSE_EXTERNAL_MZCRC -DUSE_RPC -DUSE_TENSORPIPE -DXNN_LOG_LEVEL=0 -D_FILE_OFFSET_BITS=64 -Dtorch_cpu_EXPORTS -I/build/pytorch/build/aten/src -I/build/pytorch/aten/src -I/build/pytorch/build -I/build/pytorch -I/build/pytorch/third_party/onnx -I/build/pytorch/build/third_party/onnx -I/build/pytorch/nlohmann -I/build/pytorch/torch/csrc/api -I/build/pytorch/torch/csrc/api/include -I/build/pytorch/caffe2/aten/src/TH -I/build/pytorch/build/caffe2/aten/src/TH -I/build/pytorch/build/caffe2/aten/src -I/build/pytorch/build/caffe2/../aten/src -I/build/pytorch/torch/csrc -I/build/pytorch/third_party/miniz-3.0.2 -I/build/pytorch/third_party/kineto/libkineto/include -I/build/pytorch/third_party/kineto/libkineto/src -I/build/pytorch/third_party/cpp-httplib -I/build/pytorch/aten/src/ATen/.. -I/build/pytorch/third_party/FXdiv/include -I/build/pytorch/c10/.. -I/build/pytorch/third_party/pthreadpool/include -I/build/pytorch/third_party/cpuinfo/include -I/build/pytorch/aten/src/ATen/native/quantized/cpu/qnnpack/include -I/build/pytorch/aten/src/ATen/native/quantized/cpu/qnnpack/src -I/build/pytorch/aten/src/ATen/native/quantized/cpu/qnnpack/deps/clog/include -I/build/pytorch/third_party/FP16/include -I/build/pytorch/third_party/tensorpipe -I/build/pytorch/build/third_party/tensorpipe -I/build/pytorch/third_party/tensorpipe/third_party/libnop/include -I/build/pytorch/third_party/fmt/include -I/build/pytorch/build/third_party/ideep/mkl-dnn/include -I/build/pytorch/third_party/ideep/mkl-dnn/src/../include -I/build/pytorch/third_party/flatbuffers/include -isystem /build/pytorch/build/third_party/gloo -isystem /build/pytorch/cmake/../third_party/gloo -isystem /build/pytorch/cmake/../third_party/tensorpipe/third_party/libuv/include -isystem /build/pytorch/third_party/protobuf/src -isystem /build/pytorch/third_party/XNNPACK/include -isystem /build/pytorch/cmake/../third_party/eigen -isystem /build/pytorch/third_party/ideep/mkl-dnn/include/oneapi/dnnl -isystem /build/pytorch/third_party/ideep/include -isystem /build/pytorch/INTERFACE -isystem /build/pytorch/third_party/nlohmann/include -isystem /build/pytorch/build/include -D_GLIBCXX_USE_CXX11_ABI=1 -fvisibility-inlines-hidden -DUSE_PTHREADPOOL -DNDEBUG -DUSE_KINETO -DLIBKINETO_NOCUPTI -DLIBKINETO_NOROCTRACER -DLIBKINETO_NOXPUPTI=ON -DUSE_PYTORCH_QNNPACK -DAT_BUILD_ARM_VEC256_WITH_SLEEF -DUSE_XNNPACK -DSYMBOLICATE_MOBILE_DEBUG_HANDLE -O2 -fPIC -Wall -Wextra -Werror=return-type -Werror=non-virtual-dtor -Werror=range-loop-construct -Werror=bool-operation -Wnarrowing -Wno-missing-field-initializers -Wno-type-limits -Wno-array-bounds -Wno-unknown-pragmas -Wno-unused-parameter -Wno-strict-overflow -Wno-strict-aliasing -Wno-stringop-overflow -Wsuggest-override -Wno-psabi -Wno-error=old-style-cast -Wno-missing-braces -fdiagnostics-color=always -faligned-new -Wno-unused-but-set-variable -Wno-maybe-uninitialized -fno-math-errno -fno-trapping-math -Werror=format -Wno-error=dangling-reference -Wno-error=redundant-move -Wno-stringop-overflow 
-DHAVE_SVE_CPU_DEFINITION -DHAVE_SVE256_CPU_DEFINITION -O3 -DNDEBUG -DNDEBUG -std=gnu++17 -fPIC -DTORCH_USE_LIBUV -DCAFFE2_USE_GLOO -D__NEON__ -Wall -Wextra -Wdeprecated -Wno-unused-parameter -Wno-missing-field-initializers -Wno-array-bounds -Wno-unknown-pragmas -Wno-strict-overflow -Wno-strict-aliasing -Wunused-function -Wunused-variable -Wunused-but-set-variable -Wno-maybe-uninitialized -fvisibility=hidden -O2 -pthread -fopenmp -O3 -DCPU_CAPABILITY=DEFAULT -DCPU_CAPABILITY_DEFAULT -MD -MT caffe2/CMakeFiles/torch_cpu.dir/__/aten/src/ATen/native/cpu/Activation.cpp.DEFAULT.cpp.o -MF caffe2/CMakeFiles/torch_cpu.dir/__/aten/src/ATen/native/cpu/Activation.cpp.DEFAULT.cpp.o.d -o caffe2/CMakeFiles/torch_cpu.dir/__/aten/src/ATen/native/cpu/Activation.cpp.DEFAULT.cpp.o -c /build/pytorch/build/aten/src/ATen/native/cpu/Activation.cpp.DEFAULT.cpp
during RTL pass: expand
In file included from /build/pytorch/aten/src/ATen/native/cpu/Activation.cpp:12,
from /build/pytorch/build/aten/src/ATen/native/cpu/Activation.cpp.DEFAULT.cpp:1:
/build/pytorch/aten/src/ATen/native/cpu/Activation.cpp: In lambda function:
/build/pytorch/aten/src/ATen/native/cpu/Activation.cpp:89:7: internal compiler error: Segmentation fault
89 | });
| ^
/build/pytorch/aten/src/ATen/Dispatch.h:202:7: note: in definition of macro ‘AT_DISPATCH_SWITCH’
202 | __VA_ARGS__ \
| ^~~~~~~~~~~
/build/pytorch/aten/src/ATen/Dispatch.h:73:3: note: in expansion of macro ‘AT_PRIVATE_CASE_TYPE_USING_HINT’
73 | AT_PRIVATE_CASE_TYPE_USING_HINT(enum_type, scalar_t, __VA_ARGS__)
| ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
/build/pytorch/aten/src/ATen/Dispatch.h:215:3: note: in expansion of macro ‘AT_DISPATCH_CASE’
215 | AT_DISPATCH_CASE(at::ScalarType::Double, __VA_ARGS__) \
| ^~~~~~~~~~~~~~~~
/build/pytorch/aten/src/ATen/Dispatch.h:219:34: note: in expansion of macro ‘AT_DISPATCH_CASE_FLOATING_TYPES’
219 | AT_DISPATCH_SWITCH(TYPE, NAME, AT_DISPATCH_CASE_FLOATING_TYPES(__VA_ARGS__))
| ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
/build/pytorch/aten/src/ATen/native/cpu/Activation.cpp:65:5: note: in expansion of macro ‘AT_DISPATCH_FLOATING_TYPES’
65 | AT_DISPATCH_FLOATING_TYPES(input.scalar_type(), "log_sigmoid_cpu", [&] {
| ^~~~~~~~~~~~~~~~~~~~~~~~~~
0x1ec327f diagnostic_impl(rich_location*, diagnostic_metadata const*, int, char const*, std::__va_list*, diagnostic_t)
???:0
0x1ec3aaf internal_error(char const*, ...)
???:0
0xfa41d3 crash_signal(int)
???:0
0xdb3c6c mark_jump_label_1(rtx_def*, rtx_insn*, bool, bool)
???:0
0xdb3e7f mark_jump_label_1(rtx_def*, rtx_insn*, bool, bool)
???:0
0xdb3e7f mark_jump_label_1(rtx_def*, rtx_insn*, bool, bool)
???:0
0xdb4067 mark_all_labels(rtx_insn*)
???:0
0xdb4373 rebuild_jump_labels(rtx_insn*)
???:0
0xaadf7f (anonymous namespace)::pass_expand::execute(function*)
???:0
```
full log here: https://logs.ofborg.org/?key=nixos/nixpkgs.377785&attempt_id=1eb98b08-e9fc-47de-af18-fd52c1a37d77
### Versions
torch 2.6.0
gcc 14 snapshot 20241116
cc @malfet @seemethere @snadampal @milpuz01 @aditew01 @nikhil-arm @fadara01
| true
|
2,955,556,732
|
DISABLED test_foreach_copy_with_multi_dtypes__foreach_copy_cuda_float16 (__main__.TestForeachCUDA)
|
pytorch-bot[bot]
|
open
|
[
"triaged",
"module: flaky-tests",
"skipped",
"module: mta"
] | 6
|
NONE
|
Platforms: linux, slow
This test was disabled because it is failing in CI. See [recent examples](https://hud.pytorch.org/flakytest?name=test_foreach_copy_with_multi_dtypes__foreach_copy_cuda_float16&suite=TestForeachCUDA&limit=100) and the most recent trunk [workflow logs](https://github.com/pytorch/pytorch/runs/39564734447).
Over the past 3 hours, it has been determined flaky in 6 workflow(s) with 12 failures and 6 successes.
**Debugging instructions (after clicking on the recent samples link):**
DO NOT ASSUME THINGS ARE OKAY IF THE CI IS GREEN. We now shield flaky tests from developers so CI will thus be green but it will be harder to parse the logs.
To find relevant log snippets:
1. Click on the workflow logs linked above
2. Click on the Test step of the job so that it is expanded. Otherwise, the grepping will not work.
3. Grep for `test_foreach_copy_with_multi_dtypes__foreach_copy_cuda_float16`
4. There should be several instances run (as flaky tests are rerun in CI) from which you can study the logs.
<details><summary>Sample error message</summary>
```
Traceback (most recent call last):
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_device_type.py", line 1159, in test_wrapper
return test(*args, **kwargs)
File "/opt/conda/envs/py_3.10/lib/python3.10/unittest/mock.py", line 1833, in _inner
return f(*args, **kw)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_device_type.py", line 1420, in only_fn
return fn(slf, *args, **kwargs)
File "/var/lib/jenkins/workspace/test/test_foreach.py", line 1349, in test_foreach_copy_with_multi_dtypes
out = foreach_copy_(
File "/var/lib/jenkins/workspace/test/test_foreach.py", line 91, in __call__
assert mta_called == (expect_fastpath and (not zero_size)), (
AssertionError: mta_called=False, expect_fastpath=True, zero_size=False, self.func.__name__='_foreach_copy_', keys=('aten::_foreach_copy_', 'Unrecognized', 'cudaLaunchKernel', 'Lazy Function Loading', 'cudaDeviceSynchronize')
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 3153, in wrapper
method(*args, **kwargs)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 3153, in wrapper
method(*args, **kwargs)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_device_type.py", line 454, in instantiated_test
result = test(self, **param_kwargs)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 1612, in wrapper
fn(*args, **kwargs)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_device_type.py", line 1171, in test_wrapper
raise e_tracked from e
Exception: Caused by sample input at index 0: SampleInput(input=TensorList[Tensor[size=(20, 20), device="cuda:0", dtype=torch.float16], Tensor[size=(19, 19), device="cuda:0", dtype=torch.float16], Tensor[size=(18, 18), device="cuda:0", dtype=torch.float16], Tensor[size=(17, 17), device="cuda:0", dtype=torch.float16], Tensor[size=(16, 16), device="cuda:0", dtype=torch.float16], Tensor[size=(15, 15), device="cuda:0", dtype=torch.float16], Tensor[size=(14, 14), device="cuda:0", dtype=torch.float16], Tensor[size=(13, 13), device="cuda:0", dtype=torch.float16], Tensor[size=(12, 12), device="cuda:0", dtype=torch.float16], Tensor[size=(11, 11), device="cuda:0", dtype=torch.float16], Tensor[size=(10, 10), device="cuda:0", dtype=torch.float16], Tensor[size=(9, 9), device="cuda:0", dtype=torch.float16], Tensor[size=(8, 8), device="cuda:0", dtype=torch.float16], Tensor[size=(7, 7), device="cuda:0", dtype=torch.float16], Tensor[size=(6, 6), device="cuda:0", dtype=torch.float16], Tensor[size=(5, 5), device="cuda:0", dtype=torch.float16], Tensor[size=(4, 4), device="cuda:0", dtype=torch.float16], Tensor[size=(3, 3), device="cuda:0", dtype=torch.float16], Tensor[size=(2, 2), device="cuda:0", dtype=torch.float16], Tensor[size=(1, 1), device="cuda:0", dtype=torch.float16]], args=(TensorList[Tensor[size=(20, 20), device="cuda:0", dtype=torch.float16], Tensor[size=(19, 19), device="cuda:0", dtype=torch.float16], Tensor[size=(18, 18), device="cuda:0", dtype=torch.float16], Tensor[size=(17, 17), device="cuda:0", dtype=torch.float16], Tensor[size=(16, 16), device="cuda:0", dtype=torch.float16], Tensor[size=(15, 15), device="cuda:0", dtype=torch.float16], Tensor[size=(14, 14), device="cuda:0", dtype=torch.float16], Tensor[size=(13, 13), device="cuda:0", dtype=torch.float16], Tensor[size=(12, 12), device="cuda:0", dtype=torch.float16], Tensor[size=(11, 11), device="cuda:0", dtype=torch.float16], Tensor[size=(10, 10), device="cuda:0", dtype=torch.float16], Tensor[size=(9, 9), device="cuda:0", dtype=torch.float16], Tensor[size=(8, 8), device="cuda:0", dtype=torch.float16], Tensor[size=(7, 7), device="cuda:0", dtype=torch.float16], Tensor[size=(6, 6), device="cuda:0", dtype=torch.float16], Tensor[size=(5, 5), device="cuda:0", dtype=torch.float16], Tensor[size=(4, 4), device="cuda:0", dtype=torch.float16], Tensor[size=(3, 3), device="cuda:0", dtype=torch.float16], Tensor[size=(2, 2), device="cuda:0", dtype=torch.float16], Tensor[size=(1, 1), device="cuda:0", dtype=torch.float16]]), kwargs={}, broadcasts_input=False, name='')
To execute this test, run the following from the base repo dir:
PYTORCH_OPINFO_SAMPLE_INPUT_INDEX=0 python test/test_foreach.py TestForeachCUDA.test_foreach_copy_with_multi_dtypes__foreach_copy_cuda_float16
This message can be suppressed by setting PYTORCH_PRINT_REPRO_ON_FAILURE=0
```
</details>
Test file path: `test_foreach.py`
cc @clee2000 @crcrpar @mcarilli @janeyx99
| true
|
2,955,456,868
|
Relax njt x njt to dense matmul reduction checks
|
JCBrouwer
|
open
|
[
"triaged",
"open source",
"module: nestedtensor"
] | 3
|
NONE
|
Attempts to address [this comment](https://github.com/pytorch/pytorch/issues/145158#issuecomment-2712038919) from @jbschlosser.
The new forward matmul tests I added are passing except for the 'noncontig_holes' examples which are failing because they are not raising the expected torch._dynamo.exc.BackendCompilerFailed.
```
...
FAIL: test_forward_matmul_cuda_float32 (__main__.TestNestedTensorOpInfoCUDA.test_forward_matmul_cuda_float32) (sample='6D_noncontig_holes_without_seqlen_cache: (..., E, j1) x (..., j1, F)', idx=18)
----------------------------------------------------------------------
Traceback (most recent call last):
File "/HUGE/Code/pytorch/test/test_nestedtensor.py", line 8689, in test_forward
with subtest_ctx(self), skip_xfail_ctx(self):
AssertionError: (<class 'RuntimeError'>, <class 'torch._dynamo.exc.BackendCompilerFailed'>) not raised
...
```
The backward matmul tests are passing, although I think these aren't being run on the new SampleInputs as there are only two tests passing. How can I add new tests for the backwards pass?
All of the forward and backward compiled tests seem to be failing on:
```
torch._dynamo.exc.UserError: Could not guard on data-dependent expression Eq(36, u0) (unhinted: Eq(s34, u0)). (Size-like symbols: u0)
or
AssertionError: "not supported for nested tensors with holes" does not match "Could not guard on data-dependent expression Eq(36, u0) (unhinted: Eq(s34, u0)). (Size-like symbols: u0)
```
The second of these seems like it should be caught by the earlier assertion in the non-compiled tests...
Here's a verbose log for:
```
TORCH_LOGS="+dynamo,+dynamic" TORCHDYNAMO_VERBOSE=1 python test/test_nestedtensor.py -k matmul -vvv
```
[relax_njt_x_njt_to_dense_matmul_reduction_checks.txt](https://github.com/user-attachments/files/19501949/relax_njt_x_njt_to_dense_matmul_reduction_checks.txt)
I would appreciate any feedback or pointers on how to get things sorted!
cc @cpuhrsch @jbschlosser @bhosmer @drisspg @soulitzer @davidberard98 @YuqingJ
| true
|
2,955,438,849
|
Add plot for `torch.nn.Threshold` and `torch.nn.GLU`
|
zeshengzong
|
closed
|
[
"triaged",
"open source",
"Merged",
"ciflow/trunk",
"topic: not user facing"
] | 7
|
CONTRIBUTOR
|
Fixes #150170
## Changes
- Add plot for `torch.nn.Threshold` and `torch.nn.GLU`
- Add example output to make it easier for users to see the expected results
## Test Result


| true
|
2,955,377,464
|
`torch.nn.Threshold` and `torch.nn.GLU` missing plot
|
zeshengzong
|
closed
|
[
"module: docs",
"module: nn",
"triaged"
] | 0
|
CONTRIBUTOR
|
### 📚 The doc issue
[torch.nn.Threshold](https://pytorch.org/docs/stable/generated/torch.nn.Threshold.html#torch.nn.Threshold) and [torch.nn.GLU](https://pytorch.org/docs/stable/generated/torch.nn.GLU.html#torch.nn.GLU) do not have plots in the docs like the other activation functions do.
### ReLU has plot

### GLU

### Threshold

### Suggest a potential alternative/fix
_No response_
cc @svekars @sekyondaMeta @AlannaBurke @albanD @mruberry @jbschlosser @walterddr @mikaylagawarecki
| true
|
2,955,370,420
|
Fix CMake 4 version check
|
cyyever
|
closed
|
[
"triaged",
"open source",
"ciflow/trunk",
"topic: not user facing"
] | 6
|
COLLABORATOR
|
Add CMAKE_POLICY_VERSION_MINIMUM to avoid the CMake 4 error.
| true
|
2,955,278,775
|
[ROCm] PyTorch slow on TTS
|
winstonma
|
open
|
[
"module: rocm",
"triaged"
] | 16
|
NONE
|
### 🐛 Describe the bug
I installed [Kokoro TTS](https://github.com/hexgrad/kokoro) and PyTorch on my machine, which runs an AMD 6800U with a Radeon 680M.
```bash
# Installed PyTorch ROCm already
# Install Kokoro
pip install -q kokoro>=0.9.2 soundfile
apt-get -qq -y install espeak-ng > /dev/null 2>&1
# Run create a sample TTS
echo 'Hello! How are you today?' | kokoro -o output.wav
```
And then I got a lot of warning messages:
```bash
MIOpen(HIP): Warning [IsEnoughWorkspace] [GetSolutionsFallback WTI] Solver <GemmFwdRest>, workspace required: 11325440, provided ptr: 0 size: 0
MIOpen(HIP): Warning [IsEnoughWorkspace] [EvaluateInvokers] Solver <GemmFwdRest>, workspace required: 11325440, provided ptr: 0 size: 0
MIOpen(HIP): Warning [IsEnoughWorkspace] [GetSolutionsFallback WTI] Solver <GemmFwdRest>, workspace required: 11325440, provided ptr: 0 size: 0
MIOpen(HIP): Warning [IsEnoughWorkspace] [EvaluateInvokers] Solver <GemmFwdRest>, workspace required: 11325440, provided ptr: 0 size: 0
MIOpen(HIP): Warning [IsEnoughWorkspace] [GetSolutionsFallback WTI] Solver <GemmFwdRest>, workspace required: 4853760, provided ptr: 0 size: 0
MIOpen(HIP): Warning [IsEnoughWorkspace] [EvaluateInvokers] Solver <GemmFwdRest>, workspace required: 4853760, provided ptr: 0 size: 0
MIOpen(HIP): Warning [IsEnoughWorkspace] [GetSolutionsFallback WTI] Solver <GemmFwdRest>, workspace required: 4853760, provided ptr: 0 size: 0
MIOpen(HIP): Warning [IsEnoughWorkspace] [EvaluateInvokers] Solver <GemmFwdRest>, workspace required: 4853760, provided ptr: 0 size: 0
MIOpen(HIP): Warning [IsEnoughWorkspace] [GetSolutionsFallback WTI] Solver <GemmFwdRest>, workspace required: 17797120, provided ptr: 0 size: 0
MIOpen(HIP): Warning [IsEnoughWorkspace] [EvaluateInvokers] Solver <GemmFwdRest>, workspace required: 17797120, provided ptr: 0 size: 0
MIOpen(HIP): Warning [IsEnoughWorkspace] [GetSolutionsFallback WTI] Solver <GemmFwdRest>, workspace required: 17797120, provided ptr: 0 size: 0
MIOpen(HIP): Warning [IsEnoughWorkspace] [EvaluateInvokers] Solver <GemmFwdRest>, workspace required: 17797120, provided ptr: 0 size: 0
MIOpen(HIP): Warning [IsEnoughWorkspace] [GetSolutionsFallback WTI] Solver <GemmFwdRest>, workspace required: 53396992, provided ptr: 0 size: 0
MIOpen(HIP): Warning [IsEnoughWorkspace] [EvaluateInvokers] Solver <GemmFwdRest>, workspace required: 53396992, provided ptr: 0 size: 0
MIOpen(HIP): Warning [IsEnoughWorkspace] [GetSolutionsFallback WTI] Solver <GemmFwdRest>, workspace required: 53396992, provided ptr: 0 size: 0
MIOpen(HIP): Warning [IsEnoughWorkspace] [EvaluateInvokers] Solver <GemmFwdRest>, workspace required: 53396992, provided ptr: 0 size: 0
MIOpen(HIP): Warning [IsEnoughWorkspace] [GetSolutionsFallback WTI] Solver <GemmFwdRest>, workspace required: 14562816, provided ptr: 0 size: 0
MIOpen(HIP): Warning [IsEnoughWorkspace] [EvaluateInvokers] Solver <GemmFwdRest>, workspace required: 14562816, provided ptr: 0 size: 0
MIOpen(HIP): Warning [IsEnoughWorkspace] [GetSolutionsFallback WTI] Solver <GemmFwdRest>, workspace required: 14562816, provided ptr: 0 size: 0
MIOpen(HIP): Warning [IsEnoughWorkspace] [EvaluateInvokers] Solver <GemmFwdRest>, workspace required: 14562816, provided ptr: 0 size: 0
MIOpen(HIP): Warning [IsEnoughWorkspace] [GetSolutionsFallback WTI] Solver <GemmFwdRest>, workspace required: 33979904, provided ptr: 0 size: 0
MIOpen(HIP): Warning [IsEnoughWorkspace] [EvaluateInvokers] Solver <GemmFwdRest>, workspace required: 33979904, provided ptr: 0 size: 0
MIOpen(HIP): Warning [IsEnoughWorkspace] [GetSolutionsFallback WTI] Solver <GemmFwdRest>, workspace required: 33979904, provided ptr: 0 size: 0
MIOpen(HIP): Warning [IsEnoughWorkspace] [EvaluateInvokers] Solver <GemmFwdRest>, workspace required: 33979904, provided ptr: 0 size: 0
```
After reporting this to the [MIOpen team](https://github.com/ROCm/MIOpen/issues/2981), they suggested that I [tune the performance database](https://github.com/ROCm/MIOpen/blob/develop/docs/conceptual/tuningdb.rst). After tuning the database using a text file, I executed the TTS on that same text file (this should ensure that no new entries are needed and all results come from the database). However, on my machine the fully tuned PyTorch ROCm build and the PyTorch CPU build require the same amount of time to execute.
I ran the following command with the tuned performance database, using the text in [result.txt](https://github.com/user-attachments/files/19500860/result.txt):
```bash
# Installed PyTorch ROCm and Kokoro already
export MIOPEN_FIND_MODE=FAST
# Download a small paragraph
wget https://github.com/user-attachments/files/19500860/result.txt
# Create a sample TTS
kokoro -i result.txt -o output.wav
```
Also, since the CPU version does not require any tuning, I wonder whether tuning the performance database should be necessary at all.
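A minimal sanity-check/timing sketch (layer sizes are arbitrary assumptions, not taken from Kokoro) to confirm that the ROCm build is actually dispatching to the GPU:
```python
# Quick sanity check: verify ROCm dispatch and time a simple conv on GPU vs CPU.
import time
import torch

print(torch.__version__, torch.version.hip)  # hip should be non-None on a ROCm build
print(torch.cuda.is_available())              # ROCm devices show up through the cuda API

device = "cuda" if torch.cuda.is_available() else "cpu"
conv = torch.nn.Conv1d(256, 256, kernel_size=5, padding=2).to(device)
x = torch.randn(1, 256, 4096, device=device)

with torch.no_grad():
    conv(x)                                   # warm-up (triggers MIOpen find/tuning)
    if device == "cuda":
        torch.cuda.synchronize()
    t0 = time.time()
    for _ in range(50):
        conv(x)
    if device == "cuda":
        torch.cuda.synchronize()
    print(f"{device}: {(time.time() - t0) / 50 * 1e3:.2f} ms/iter")
```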
### Versions
PyTorch CPU version
```
Collecting environment information...
PyTorch version: 2.6.0+cpu
Is debug build: False
CUDA used to build PyTorch: None
ROCM used to build PyTorch: N/A
OS: Ubuntu 24.10 (x86_64)
GCC version: (Ubuntu 14.2.0-4ubuntu2) 14.2.0
Clang version: 19.1.7 (1ubuntu2~kisak~o)
CMake version: version 3.30.3
Libc version: glibc-2.40
Python version: 3.12.9 | packaged by Anaconda, Inc. | (main, Feb 6 2025, 18:56:27) [GCC 11.2.0] (64-bit runtime)
Python platform: Linux-6.14.0-061400-generic-x86_64-with-glibc2.40
Is CUDA available: False
CUDA runtime version: No CUDA
CUDA_MODULE_LOADING set to: N/A
GPU models and configuration: No CUDA
Nvidia driver version: No CUDA
cuDNN version: No CUDA
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 48 bits physical, 48 bits virtual
Byte Order: Little Endian
CPU(s): 16
On-line CPU(s) list: 0-15
Vendor ID: AuthenticAMD
Model name: AMD Ryzen 7 6800U with Radeon Graphics
CPU family: 25
Model: 68
Thread(s) per core: 2
Core(s) per socket: 8
Socket(s): 1
Stepping: 1
Frequency boost: enabled
CPU(s) scaling MHz: 39%
CPU max MHz: 4769.0000
CPU min MHz: 400.0000
BogoMIPS: 5390.09
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ht syscall nx mmxext fxsr_opt pdpe1gb rdtscp lm constant_tsc rep_good nopl xtopology nonstop_tsc cpuid extd_apicid aperfmperf rapl pni pclmulqdq monitor ssse3 fma cx16 sse4_1 sse4_2 x2apic movbe popcnt aes xsave avx f16c rdrand lahf_lm cmp_legacy svm extapic cr8_legacy abm sse4a misalignsse 3dnowprefetch osvw ibs skinit wdt tce topoext perfctr_core perfctr_nb bpext perfctr_llc mwaitx cpb cat_l3 cdp_l3 hw_pstate ssbd mba ibrs ibpb stibp vmmcall fsgsbase bmi1 avx2 smep bmi2 invpcid cqm rdt_a rdseed adx smap clflushopt clwb sha_ni xsaveopt xsavec xgetbv1 xsaves cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local user_shstk clzero irperf xsaveerptr rdpru wbnoinvd cppc arat npt lbrv svm_lock nrip_save tsc_scale vmcb_clean flushbyasid decodeassists pausefilter pfthreshold avic v_vmsave_vmload vgif v_spec_ctrl umip pku ospke vaes vpclmulqdq rdpid overflow_recov succor smca debug_swap
Virtualization: AMD-V
L1d cache: 256 KiB (8 instances)
L1i cache: 256 KiB (8 instances)
L2 cache: 4 MiB (8 instances)
L3 cache: 16 MiB (1 instance)
NUMA node(s): 1
NUMA node0 CPU(s): 0-15
Vulnerability Gather data sampling: Not affected
Vulnerability Ghostwrite: Not affected
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Not affected
Vulnerability Reg file data sampling: Not affected
Vulnerability Retbleed: Not affected
Vulnerability Spec rstack overflow: Mitigation; Safe RET
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Retpolines; IBPB conditional; IBRS_FW; STIBP always-on; RSB filling; PBRSB-eIBRS Not affected; BHI Not affected
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Versions of relevant libraries:
[pip3] flake8==7.1.1
[pip3] mypy==1.14.1
[pip3] mypy-extensions==1.0.0
[pip3] numpy==2.1.2
[pip3] numpydoc==1.7.0
[pip3] pytorch-triton-rocm==3.2.0
[pip3] torch==2.6.0+cpu
[pip3] torchaudio==2.6.0+cpu
[pip3] torchsde==0.2.6
[pip3] torchvision==0.21.0+cpu
[conda] _anaconda_depends 2025.03 py312_mkl_0
[conda] blas 1.0 mkl
[conda] mkl 2023.1.0 h213fc3f_46344
[conda] mkl-service 2.4.0 py312h5eee18b_2
[conda] mkl_fft 1.3.11 py312h5eee18b_0
[conda] mkl_random 1.2.8 py312h526ad5a_0
[conda] numpy 1.26.4 py312hc5e2394_0
[conda] numpy-base 1.26.4 py312h0da6c21_0
[conda] numpydoc 1.7.0 py312h06a4308_0
[conda] pytorch-triton-rocm 3.2.0 pypi_0 pypi
[conda] torch 2.6.0+cpu pypi_0 pypi
[conda] torchaudio 2.6.0+cpu pypi_0 pypi
[conda] torchvision 0.21.0+cpu pypi_0 pypi
```
cc @jeffdaily @sunway513 @jithunnair-amd @pruthvistony @ROCmSupport @dllehr-amd @jataylo @hongxiayang @naromero77amd
| true
|
2,955,195,452
|
PyTorch cmake configuration fails with CMake 4.0.0
|
WangYutao1995
|
closed
|
[
"module: build",
"triaged"
] | 5
|
NONE
|
### 🐛 Describe the bug
**Error message**
```
-- Found SYCL: (found version "20250200")
-- Building using own protobuf under third_party per request.
-- Use custom protobuf build.
CMake Error at third_party/protobuf/cmake/CMakeLists.txt:2 (cmake_minimum_required):
Compatibility with CMake < 3.5 has been removed from CMake.
Update the VERSION argument <min> value. Or, use the <min>...<max> syntax to tell CMake that the project requires at least <min> but has been updated to work with policies introduced by <max> or earlier.
Or, add -DCMAKE_POLICY_VERSION_MINIMUM=3.5 to try configuring anyway.
-- Configuring incomplete, errors occurred!
-- Building version 2.7.0a0+git504bc92
```
**Root cause**
CMake 4.0.0 was released to pypi.org today (March 28, 2025), but it is not compatible with the cmake_minimum_required versions defined in many third-party projects' CMakeLists.txt files, such as protobuf.
https://github.com/protocolbuffers/protobuf/blob/d1eca4e4b421cd2997495c4b4e65cea6be4e9b8a/cmake/CMakeLists.txt
https://cmake.org/cmake/help/latest/command/cmake_minimum_required.html
```
Changed in version 4.0: Compatibility with versions of CMake older than 3.5 is removed.
Calls to cmake_minimum_required(VERSION) or cmake_policy(VERSION) that do not specify at least 3.5 as their policy version (optionally via ...<max>) will produce an error in CMake 4.0 and above.
```
**Workaround**
Stick to CMake 3.31
```
python -m pip install cmake==3.31
```
### Versions
```
PyTorch: main branch
CMake: 4.0.0
```
cc @malfet @seemethere @chuanqi129
| true
|
2,955,151,835
|
refresh expected results again racing changes landing while test is unstable
|
laithsakka
|
open
|
[
"ciflow/trunk",
"topic: not user facing",
"module: dynamo",
"ciflow/inductor"
] | 7
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #150166
The test was marked unstable and now I am racing to make it stable again.
Diffs that were published while the test was unstable are landing without updating the expected results.
<img width="512" alt="Screenshot 2025-03-27 at 10 48 21 PM" src="https://github.com/user-attachments/assets/225bcad9-6e06-4a66-8438-8b70ff9dbeff" />
Two more PRs causing slight regressions:
1) the stack on https://github.com/pytorch/pytorch/pull/150036
2) https://github.com/pytorch/pytorch/pull/150137
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @chauhang @amjames @anijain2305 @eellison
@huydhn, when do you think we will have the APIs to access results in OSS storage available, so we do not have to worry about this race again?
Also, is there a way to get the test out of the unstable state faster after we land this?
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @chauhang @amjames
| true
|
2,955,138,481
|
Inconsistency inference results between eager and inductor compiler.
|
Cookiee235
|
closed
|
[
"high priority",
"triaged",
"module: correctness (silent)",
"oncall: pt2",
"module: inductor",
"oncall: cpu inductor"
] | 1
|
CONTRIBUTOR
|
### 🐛 Describe the bug
```python
import torch
class Model(torch.nn.Module):
def __init__(self):
super().__init__()
def forward(self, x):
x = torch.nn.functional.interpolate(x, size=(64, 64), mode='bilinear', align_corners=True)
x = torch.inverse(x)
return x
model = Model()
inputs = torch.randn(1, 3, 32, 32)
res = model(inputs)
compiled_model = torch.compile(model, backend='inductor')
with torch.no_grad():
compiled_out = compiled_model(inputs)
torch.testing.assert_close(res, compiled_out)
```
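One diagnostic (not a fix, and it does not by itself decide whether this is a compiler bug) is to check how ill-conditioned the interpolated matrices are, since `torch.inverse` amplifies tiny numerical differences on near-singular inputs:
```python
# Diagnostic sketch: upsampling 32x32 to 64x64 bilinearly makes every output row a
# linear combination of the 32 input rows, so each 64x64 matrix has rank <= 32 and
# is numerically singular; inverting it amplifies eager-vs-inductor differences.
import torch

x = torch.randn(1, 3, 32, 32)
up = torch.nn.functional.interpolate(x, size=(64, 64), mode='bilinear', align_corners=True)
print(torch.linalg.cond(up[0, 0]))  # expected to be extremely large
```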
### Error logs
```
Traceback (most recent call last):
File "/data/qshenaf/remote_pc/LLM4Converter/bugs/0328/1.py", line 22, in <module>
torch.testing.assert_close(res, compiled_out)
~~~~~~~~~~~~~~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^
File "/data/qshenaf/miniconda3/envs/torch/lib/python3.13/site-packages/torch/testing/_comparison.py", line 1519, in assert_close
raise error_metas[0].to_error(msg)
AssertionError: Tensor-likes are not close!
Mismatched elements: 12288 / 12288 (100.0%)
Greatest absolute difference: 2377490176.0 at index (0, 1, 7, 1) (up to 1e-05 allowed)
Greatest relative difference: 122582.9140625 at index (0, 0, 60, 10) (up to 1.3e-06 allowed)
```
### Versions
PyTorch version: 2.7.0.dev20250308+cu126
Is debug build: False
CUDA used to build PyTorch: 12.6
ROCM used to build PyTorch: N/A
OS: AlmaLinux 9.4 (Seafoam Ocelot) (x86_64)
GCC version: (GCC) 11.4.1 20231218 (Red Hat 11.4.1-3)
Clang version: 17.0.6 (AlmaLinux OS Foundation 17.0.6-5.el9)
CMake version: version 3.26.5
Libc version: glibc-2.34
Python version: 3.13.0 | packaged by Anaconda, Inc. | (main, Oct 7 2024, 21:29:38) [GCC 11.2.0] (64-bit runtime)
Python platform: Linux-5.14.0-427.37.1.el9_4.x86_64-x86_64-with-glibc2.34
Is CUDA available: True
CUDA runtime version: 12.6.77
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration:
GPU 0: NVIDIA GeForce RTX 3090
GPU 1: NVIDIA GeForce RTX 3090
GPU 2: NVIDIA GeForce RTX 3090
GPU 3: NVIDIA GeForce RTX 3090
Nvidia driver version: 560.35.03
cuDNN version: Could not collect
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 43 bits physical, 48 bits virtual
Byte Order: Little Endian
CPU(s): 64
On-line CPU(s) list: 0-63
Vendor ID: AuthenticAMD
Model name: AMD Ryzen Threadripper PRO 3975WX 32-Cores
CPU family: 23
Model: 49
Thread(s) per core: 2
Core(s) per socket: 32
Socket(s): 1
Stepping: 0
Frequency boost: enabled
CPU(s) scaling MHz: 81%
CPU max MHz: 4368.1641
CPU min MHz: 2200.0000
BogoMIPS: 7000.25
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ht syscall nx mmxext fxsr_opt pdpe1gb rdtscp lm constant_tsc rep_good nopl nonstop_tsc cpuid extd_apicid aperfmperf rapl pni pclmulqdq monitor ssse3 fma cx16 sse4_1 sse4_2 x2apic movbe popcnt aes xsave avx f16c rdrand lahf_lm cmp_legacy svm extapic cr8_legacy abm sse4a misalignsse 3dnowprefetch osvw ibs skinit wdt tce topoext perfctr_core perfctr_nb bpext perfctr_llc mwaitx cpb cat_l3 cdp_l3 hw_pstate ssbd mba ibpb stibp vmmcall fsgsbase bmi1 avx2 smep bmi2 cqm rdt_a rdseed adx smap clflushopt clwb sha_ni xsaveopt xsavec xgetbv1 xsaves cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local clzero irperf xsaveerptr rdpru wbnoinvd amd_ppin arat npt lbrv svm_lock nrip_save tsc_scale vmcb_clean flushbyasid decodeassists pausefilter pfthreshold avic v_vmsave_vmload vgif v_spec_ctrl umip rdpid overflow_recov succor smca sev sev_es
Virtualization: AMD-V
L1d cache: 1 MiB (32 instances)
L1i cache: 1 MiB (32 instances)
L2 cache: 16 MiB (32 instances)
L3 cache: 128 MiB (8 instances)
NUMA node(s): 1
NUMA node0 CPU(s): 0-63
Vulnerability Gather data sampling: Not affected
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Not affected
Vulnerability Retbleed: Mitigation; untrained return thunk; SMT enabled with STIBP protection
Vulnerability Spec rstack overflow: Mitigation; Safe RET
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Retpolines, IBPB conditional, STIBP always-on, RSB filling, PBRSB-eIBRS Not affected
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Versions of relevant libraries:
[pip3] numpy==2.2.3
[pip3] nvidia-cublas-cu12==12.6.4.1
[pip3] nvidia-cuda-cupti-cu12==12.6.80
[pip3] nvidia-cuda-nvrtc-cu12==12.6.77
[pip3] nvidia-cuda-runtime-cu12==12.6.77
[pip3] nvidia-cudnn-cu12==9.5.1.17
[pip3] nvidia-cufft-cu12==11.3.0.4
[pip3] nvidia-curand-cu12==10.3.7.77
[pip3] nvidia-cusolver-cu12==11.7.1.2
[pip3] nvidia-cusparse-cu12==12.5.4.2
[pip3] nvidia-cusparselt-cu12==0.6.3
[pip3] nvidia-nccl-cu12==2.25.1
[pip3] nvidia-nvjitlink-cu12==12.6.85
[pip3] nvidia-nvtx-cu12==12.6.77
[pip3] pytorch-triton==3.2.0+git4b3bb1f8
[pip3] torch==2.7.0.dev20250308+cu126
[pip3] torchaudio==2.6.0.dev20250308+cu126
[pip3] torchvision==0.22.0.dev20250308+cu126
[pip3] triton==3.2.0
cc @ezyang @gchanan @zou3519 @kadeng @msaroufim @chauhang @penguinwu @voznesenskym @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @chenyang78 @muchulee8 @amjames @aakhundov
| true
|
2,955,120,494
|
[WIP][dynamic shapes] rewrite should_swap with guard_or_false
|
pianpwk
|
open
|
[
"release notes: export"
] | 1
|
CONTRIBUTOR
|
Fixes #ISSUE_NUMBER
| true
|
2,955,077,155
|
[AOTInductor] Add function for users to extract constants in container
|
muchulee8
|
closed
|
[
"fb-exported",
"Merged",
"ciflow/trunk",
"topic: not user facing",
"module: inductor",
"ciflow/inductor"
] | 4
|
CONTRIBUTOR
|
Summary: Add extract_constant_map that allows users to inspect the constants being used by AOTInductor
Test Plan:
`python test/inductor/test_aot_inductor.py -k extract_constants_map`
`LD_LIBRARY_PATH=/data/users/$USER/pytorch/build/lib /data/users/$USER/pytorch/build/bin/test_aoti_inference`
Differential Revision: D72020400
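A rough usage sketch (hypothetical — the exact Python binding and its location are assumptions based on the test name, not confirmed by this summary):
```python
# Hypothetical sketch: inspect the constants held by an AOTInductor-compiled model.
# The `extract_constants_map()` call below is an assumed method name/location.
import torch

class M(torch.nn.Module):
    def __init__(self):
        super().__init__()
        self.w = torch.nn.Parameter(torch.randn(4, 4))

    def forward(self, x):
        return x @ self.w

ep = torch.export.export(M(), (torch.randn(4, 4),))
pkg = torch._inductor.aoti_compile_and_package(ep)
runner = torch._inductor.aoti_load_package(pkg)
constants = runner.extract_constants_map()  # hypothetical API surface
print({name: t.shape for name, t in constants.items()})
```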
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @chenyang78 @kadeng @amjames @chauhang @aakhundov
| true
|
2,955,062,644
|
[c10d] Add `_allgather_base` , `reduce_scatter` , and `_reduce_scatter_base` into ProcessGroupMPI to enable FSDP with MPI backend
|
nariaki3551
|
closed
|
[
"oncall: distributed",
"triaged",
"open source",
"Merged",
"ciflow/trunk",
"release notes: distributed (c10d)"
] | 15
|
CONTRIBUTOR
|
This PR implements _allgather_base, reduce_scatter, and _reduce_scatter_base in the MPI backend (ProcessGroupMPI), enabling support for Fully Sharded Data Parallel (FSDP) in environments that use MPI for distributed communication.
### Context
As noted in https://github.com/pytorch/pytorch/issues/85628, FSDP currently supports only the NCCL backend. Due to this limitation, FSDP cannot run on legacy HPC environments or clusters that rely on MPI.
By implementing just these three collective operations, we can enable FSDP to work with the MPI backend. These collectives are implemented in a similar manner to existing operations such as allgather.
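For context, here is a minimal FSDP-over-MPI sketch (the toy model, sizes, and launch command are placeholder assumptions) of the kind of setup these three collectives unblock:
```python
# Minimal sketch: FSDP on the MPI backend, launched with e.g. `mpirun -np 2 python train.py`.
# The toy model and tensor sizes are placeholder assumptions.
import torch
import torch.distributed as dist
from torch.distributed.fsdp import FullyShardedDataParallel as FSDP

dist.init_process_group(backend="mpi")  # rank/world size come from the MPI runtime

model = torch.nn.Sequential(
    torch.nn.Linear(1024, 1024),
    torch.nn.ReLU(),
    torch.nn.Linear(1024, 10),
)
fsdp_model = FSDP(model)  # sharding relies on _allgather_base / _reduce_scatter_base

out = fsdp_model(torch.randn(8, 1024))
out.sum().backward()

dist.destroy_process_group()
```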
### Testing
We validated this PR using pytorch/build/bin/ProcessGroupMPITest with OpenMPI, and all tests passed successfully.
cc @H-Huang @awgu @kwen2501 @wanchaol @fegin @fduwjj @wz337 @wconstab @d4l3k @c-p-i-o
| true
|
2,954,935,218
|
DISABLED test_foreach_copy_with_multi_dtypes__foreach_copy_cuda_complex64 (__main__.TestForeachCUDA)
|
pytorch-bot[bot]
|
open
|
[
"triaged",
"module: flaky-tests",
"skipped",
"module: mta"
] | 9
|
NONE
|
Platforms: linux, slow
This test was disabled because it is failing in CI. See [recent examples](https://hud.pytorch.org/flakytest?name=test_foreach_copy_with_multi_dtypes__foreach_copy_cuda_complex64&suite=TestForeachCUDA&limit=100) and the most recent trunk [workflow logs](https://github.com/pytorch/pytorch/runs/39554922122).
Over the past 3 hours, it has been determined flaky in 9 workflow(s) with 18 failures and 9 successes.
**Debugging instructions (after clicking on the recent samples link):**
DO NOT ASSUME THINGS ARE OKAY IF THE CI IS GREEN. We now shield flaky tests from developers so CI will thus be green but it will be harder to parse the logs.
To find relevant log snippets:
1. Click on the workflow logs linked above
2. Click on the Test step of the job so that it is expanded. Otherwise, the grepping will not work.
3. Grep for `test_foreach_copy_with_multi_dtypes__foreach_copy_cuda_complex64`
4. There should be several instances run (as flaky tests are rerun in CI) from which you can study the logs.
Test file path: `test_foreach.py`
cc @clee2000 @crcrpar @mcarilli @janeyx99
| true
|
2,954,933,939
|
[Release/2.7] Update torch-xpu-ops commit pin
|
xytintel
|
closed
|
[
"open source",
"topic: not user facing",
"ciflow/xpu",
"release notes: xpu"
] | 3
|
CONTRIBUTOR
|
Update the torch-xpu-ops commit pin to [9ff76ec6e8038afd6f4e4ecb929ee9f4aad9f233](https://github.com/intel/torch-xpu-ops/commit/9ff76ec6e8038afd6f4e4ecb929ee9f4aad9f233), which includes:
- A bug fix for a performance issue related to GRF configuration.
| true
|
2,954,928,562
|
build 2.0.1 failed
|
wanggei
|
closed
|
[] | 2
|
NONE
|
### 🐛 Describe the bug
General:
-- CMake version : 3.29.0
-- CMake command : /usr/local/bin/cmake
-- System : Linux
-- C++ compiler : /usr/bin/c++
-- C++ compiler id : GNU
-- C++ compiler version : 14.2.0
-- Using ccache if found : ON
-- Found ccache : CCACHE_PROGRAM-NOTFOUND
-- CXX flags : -D_GLIBCXX_USE_CXX11_ABI=1 -Wno-deprecated -fvisibility-inlines-hidden -DNDEBUG -DUSE_KINETO -DLIBKINETO_NOCUPTI -DLIBKINETO_NOROCTRACER -DSYMBOLICATE_MOBILE_DEBUG_HANDLE -O2 -fPIC -Wall -Wextra -Werror=return-type -Werror=non-virtual-dtor -Werror=range-loop-construct -Werror=bool-operation -Wnarrowing -Wno-missing-field-initializers -Wno-type-limits -Wno-array-bounds -Wno-unknown-pragmas -Wunused-local-typedefs -Wno-unused-parameter -Wno-unused-function -Wno-unused-result -Wno-strict-overflow -Wno-strict-aliasing -Wno-error=deprecated-declarations -Wno-stringop-overflow -Wno-psabi -Wno-error=pedantic -Wno-error=redundant-decls -Wno-error=old-style-cast -fdiagnostics-color=always -faligned-new -Wno-unused-but-set-variable -Wno-maybe-uninitialized -fno-math-errno -fno-trapping-math -Werror=format -Werror=cast-function-type -Wno-stringop-overflow
-- Build type : Release
-- Compile definitions : ONNX_ML=1;ONNXIFI_ENABLE_EXT=1;ONNX_NAMESPACE=onnx_torch;HAVE_MMAP=1;_FILE_OFFSET_BITS=64;HAVE_SHM_OPEN=1;HAVE_SHM_UNLINK=1;HAVE_MALLOC_USABLE_SIZE=1;USE_EXTERNAL_MZCRC;MINIZ_DISABLE_ZIP_READER_CRC32_CHECKS
-- CMAKE_PREFIX_PATH : /usr/local/lib/python3.11/site-packages
-- CMAKE_INSTALL_PREFIX : /home/pytorch-v2.0.1/torch
-- USE_GOLD_LINKER : OFF
--
-- TORCH_VERSION : 2.0.1
-- CAFFE2_VERSION : 2.0.1
-- BUILD_CAFFE2 : OFF
-- BUILD_CAFFE2_OPS : OFF
-- BUILD_STATIC_RUNTIME_BENCHMARK: OFF
-- BUILD_TENSOREXPR_BENCHMARK: OFF
-- BUILD_NVFUSER_BENCHMARK: OFF
-- BUILD_BINARY : OFF
-- BUILD_CUSTOM_PROTOBUF : ON
-- Link local protobuf : ON
-- BUILD_DOCS : OFF
-- BUILD_PYTHON : True
-- Python version : 3.11.8
-- Python executable : /usr/local/bin/python3
-- Pythonlibs version : 3.11.8
-- Python library : /usr/local/lib/libpython3.11.a
-- Python includes : /usr/local/include/python3.11
-- Python site-packages: lib/python3.11/site-packages
-- BUILD_SHARED_LIBS : ON
-- CAFFE2_USE_MSVC_STATIC_RUNTIME : OFF
-- BUILD_TEST : False
-- BUILD_JNI : OFF
-- BUILD_MOBILE_AUTOGRAD : OFF
-- BUILD_LITE_INTERPRETER: OFF
-- INTERN_BUILD_MOBILE :
-- TRACING_BASED : OFF
-- USE_BLAS : 1
-- BLAS : open
-- BLAS_HAS_SBGEMM :
-- USE_LAPACK : 1
-- LAPACK : open
-- USE_ASAN : OFF
-- USE_TSAN : OFF
-- USE_CPP_CODE_COVERAGE : OFF
-- USE_CUDA : 0
-- USE_ROCM : OFF
-- BUILD_NVFUSER : OFF
-- USE_EIGEN_FOR_BLAS :
-- USE_FBGEMM : OFF
-- USE_FAKELOWP : OFF
-- USE_KINETO : ON
-- USE_FFMPEG : OFF
-- USE_GFLAGS : OFF
-- USE_GLOG : OFF
-- USE_LEVELDB : OFF
-- USE_LITE_PROTO : OFF
-- USE_LMDB : OFF
-- USE_METAL : OFF
-- USE_PYTORCH_METAL : OFF
-- USE_PYTORCH_METAL_EXPORT : OFF
-- USE_MPS : OFF
-- USE_FFTW : OFF
-- USE_MKL :
-- USE_MKLDNN : OFF
-- USE_UCC : OFF
-- USE_ITT : OFF
-- USE_NCCL : OFF
-- USE_NNPACK : OFF
-- USE_NUMPY : ON
-- USE_OBSERVERS : ON
-- USE_OPENCL : OFF
-- USE_OPENCV : OFF
-- USE_OPENMP : ON
-- USE_TBB : OFF
-- USE_VULKAN : OFF
-- USE_PROF : OFF
-- USE_QNNPACK : OFF
-- USE_PYTORCH_QNNPACK : OFF
-- USE_XNNPACK : OFF
-- USE_REDIS : OFF
-- USE_ROCKSDB : OFF
-- USE_ZMQ : OFF
-- USE_DISTRIBUTED : 0
-- Public Dependencies :
-- Private Dependencies : Threads::Threads;/usr/lib/riscv64-linux-gnu/libopenblas.so;cpuinfo;fp16;caffe2::openmp;foxi_loader;rt;fmt::fmt-header-only;kineto;gcc_s;gcc;dl
-- USE_COREML_DELEGATE : OFF
-- BUILD_LAZY_TS_BACKEND : ON
-- TORCH_DISABLE_GPU_ASSERTS : ON
-- Configuring done (184.9s)
-- Generating done (4.1s)
-- Build files have been written to: /home/pytorch-v2.0.1/build
[1/1252] Building C object confu-deps/cpuinfo/CMakeFiles/cpuinfo.dir/src/api.c.o
FAILED: confu-deps/cpuinfo/CMakeFiles/cpuinfo.dir/src/api.c.o
/usr/bin/cc -DCPUINFO_LOG_LEVEL=2 -I/home/pytorch-v2.0.1/third_party/cpuinfo/src -I/home/pytorch-v2.0.1/third_party/cpuinfo/include -I/home/pytorch-v2.0.1/third_party/cpuinfo/deps/clog/include -isystem /home/pytorch-v2.0.1/third_party/protobuf/src -O3 -DNDEBUG -std=c99 -fPIC -MD -MT confu-deps/cpuinfo/CMakeFiles/cpuinfo.dir/src/api.c.o -MF confu-deps/cpuinfo/CMakeFiles/cpuinfo.dir/src/api.c.o.d -o confu-deps/cpuinfo/CMakeFiles/cpuinfo.dir/src/api.c.o -c /home/pytorch-v2.0.1/third_party/cpuinfo/src/api.c
In file included from /home/pytorch-v2.0.1/third_party/cpuinfo/src/cpuinfo/internal-api.h:11,
from /home/pytorch-v2.0.1/third_party/cpuinfo/src/api.c:6:
/home/pytorch-v2.0.1/third_party/cpuinfo/src/api.c: In function ‘cpuinfo_get_current_processor’:
/home/pytorch-v2.0.1/third_party/cpuinfo/src/api.c:318:37: error: implicit declaration of function ‘syscall’ [-Wimplicit-function-declaration]
318 | if CPUINFO_UNLIKELY(syscall(__NR_getcpu, &cpu, NULL, NULL) != 0) {
| ^~~~~~~
/home/pytorch-v2.0.1/third_party/cpuinfo/src/cpuinfo/common.h:16:66: note: in definition of macro ‘CPUINFO_UNLIKELY’
16 | #define CPUINFO_UNLIKELY(condition) (__builtin_expect(!!(condition), 0))
| ^~~~~~~~~
[2/1252] Building C object confu-deps/cpuinfo/CMakeFiles/cpuinfo_internals.dir/src/api.c.o
### Versions
2.0.1
| true
|
2,954,925,467
|
Pin cmake==3.31.6
|
clee2000
|
closed
|
[
"Merged",
"ciflow/trunk",
"topic: not user facing"
] | 11
|
CONTRIBUTOR
|
I'm not sure if this is the right thing to do, but CMake 4.0.0 was released on PyPI and our builds are failing with it.
Example:
https://hud.pytorch.org/pytorch/pytorch/commit/aa70d62041c28fe35c416aa932b32ef0e4d5bc33#39555975425-box
I guess we have to go change all the cmake_minimum_required calls to >=3.5?
Backwards compat is still failing because it builds with the base commit, which this PR can't really change until it gets merged, but at least the manywheel binary builds got past where they were originally failing.
This also pins the conda installation, although the most recent version available on conda is 3.31.2.
| true
|
2,954,922,755
|
[MPS] Fix dot/mm for conj_tensors
|
malfet
|
closed
|
[
"Merged",
"topic: bug fixes",
"release notes: mps",
"ciflow/mps"
] | 7
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #150157
- Distinguish between conjugated/non_conjugated inputs by appending conjugation to the operator key
- For matmul or dot, add `conjugateWithTensor:name:` calls before running the op
- Enable testing for conjugated ops by passing `include_conjugated_inputs` to opinfo
- Filter `include_conjugated_inputs` argument from `sample_inputs_window` (probably should have landed as separate PR)
- Preserve conj property when gathering the views, that fixes `cov` operator
Fixes https://github.com/pytorch/pytorch/issues/148156
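A minimal check for the conjugated paths (assuming an MPS-capable machine; sizes and tolerances are arbitrary):
```python
# Minimal sketch: compare conjugated dot/matmul on MPS against CPU.
import torch

a = torch.randn(8, dtype=torch.complex64)
b = torch.randn(8, dtype=torch.complex64)

cpu = torch.dot(a.conj(), b)
mps = torch.dot(a.to("mps").conj(), b.to("mps")).cpu()
torch.testing.assert_close(cpu, mps, rtol=1e-4, atol=1e-4)

m = torch.randn(4, 4, dtype=torch.complex64)
torch.testing.assert_close(m.conj().T @ m,
                           (m.to("mps").conj().T @ m.to("mps")).cpu(),
                           rtol=1e-4, atol=1e-4)
```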
| true
|
2,954,916,290
|
Fix docs format error in `torch.nn`
|
zeshengzong
|
closed
|
[
"open source",
"Merged",
"ciflow/trunk",
"release notes: nn",
"topic: bug fixes",
"topic: docs"
] | 7
|
CONTRIBUTOR
|
Fixes #150152
Fix format error in [torch.nn.CosineSimilarity](https://pytorch.org/docs/stable/generated/torch.nn.CosineSimilarity.html#torch.nn.CosineSimilarity), [torch.nn.KLDivLoss](https://pytorch.org/docs/stable/generated/torch.nn.KLDivLoss.html#torch.nn.KLDivLoss) and other pages.
## Test Result
### Before
#### torch.nn.CosineSimilarity

#### torch.nn.KLDivLoss

### After
#### torch.nn.CosineSimilarity

#### torch.nn.KLDivLoss

| true
|
2,954,908,533
|
Performance regression in aten::mm on gfx1100 (7900 XTX) with PyTorch 2.7
|
timpalpant
|
closed
|
[
"high priority",
"triage review",
"needs reproduction",
"module: performance",
"module: rocm"
] | 1
|
NONE
|
### 🐛 Describe the bug
With PyTorch 2.7 I observed a ~40% performance regression in my training loop on an AMD RX 7900 XTX (gfx1100). Profiling points to `aten::mm` and `aten::bmm` as the main contributors. I believe it may be due to enabling `hipblaslt` for this card in https://github.com/pytorch/pytorch/commit/da578495cac53708db456114f011cb5b1c8febe0
Reverting this commit on the tip of `main` restores performance to a similar level as PyTorch 2.6.
Possibly related issue: https://github.com/pytorch/pytorch/issues/148883
I think this can be reproduced with https://github.com/pytorch/benchmark , for example:
torch 2.6.0
```
------------------------------------------------ benchmark 'hub': 1 tests -----------------------------------------------
Name (time in ms) Min Max Mean StdDev Median IQR Outliers OPS Rounds Iterations
-------------------------------------------------------------------------------------------------------------------------
test_train[hf_GPT2-cuda] 126.6692 131.6201 129.7784 1.6370 130.2262 1.9213 3;0 7.7054 8 1
-------------------------------------------------------------------------------------------------------------------------
```
torch 2.8.0.dev20250326+rocm6.3
```
------------------------------------------------ benchmark 'hub': 1 tests -----------------------------------------------
Name (time in ms) Min Max Mean StdDev Median IQR Outliers OPS Rounds Iterations
-------------------------------------------------------------------------------------------------------------------------
test_train[hf_GPT2-cuda] 583.8108 596.9090 591.3713 5.1371 593.3056 7.2410 2;0 1.6910 5 1
-------------------------------------------------------------------------------------------------------------------------
```
torch 2.8.0 + revert of https://github.com/pytorch/pytorch/commit/da578495cac53708db456114f011cb5b1c8febe0
```
------------------------------------------------ benchmark 'hub': 1 tests -----------------------------------------------
Name (time in ms) Min Max Mean StdDev Median IQR Outliers OPS Rounds Iterations
-------------------------------------------------------------------------------------------------------------------------
test_train[hf_GPT2-cuda] 144.8958 151.4311 149.0116 2.3174 149.6700 3.5584 2;0 6.7109 8 1
-------------------------------------------------------------------------------------------------------------------------
```
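To isolate `aten::mm` itself, a micro-benchmark along these lines (matrix sizes and dtype are arbitrary assumptions) makes the comparison much cheaper than the full hf_GPT2 run:
```python
# Micro-benchmark sketch for aten::mm on the GPU; matrix sizes are arbitrary assumptions.
import torch

a = torch.randn(4096, 4096, device="cuda", dtype=torch.float16)
b = torch.randn(4096, 4096, device="cuda", dtype=torch.float16)

for _ in range(10):          # warm-up
    a @ b
torch.cuda.synchronize()

start, end = torch.cuda.Event(enable_timing=True), torch.cuda.Event(enable_timing=True)
start.record()
for _ in range(100):
    a @ b
end.record()
torch.cuda.synchronize()
print(f"{start.elapsed_time(end) / 100:.3f} ms per mm")
```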
### Versions
Collecting environment information...
PyTorch version: 2.8.0.dev20250326+rocm6.3
Is debug build: False
CUDA used to build PyTorch: N/A
ROCM used to build PyTorch: 6.3.42131-fa1d09cbd
OS: EndeavourOS Linux (x86_64)
GCC version: (GCC) 14.2.1 20250207
Clang version: 19.1.7
CMake version: version 3.31.6
Libc version: glibc-2.41
Python version: 3.11.11 | packaged by conda-forge | (main, Mar 3 2025, 20:43:55) [GCC 13.3.0] (64-bit runtime)
Python platform: Linux-6.12.20-1-lts-x86_64-with-glibc2.41
Is CUDA available: True
CUDA runtime version: Could not collect
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration: AMD Radeon RX 7900 XTX (gfx1100)
Nvidia driver version: Could not collect
cuDNN version: Could not collect
HIP runtime version: 6.3.42131
MIOpen runtime version: 3.3.0
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 46 bits physical, 48 bits virtual
Byte Order: Little Endian
CPU(s): 20
On-line CPU(s) list: 0-19
Vendor ID: GenuineIntel
Model name: 12th Gen Intel(R) Core(TM) i7-12700K
CPU family: 6
Model: 151
Thread(s) per core: 2
Core(s) per socket: 12
Socket(s): 1
Stepping: 2
CPU(s) scaling MHz: 24%
CPU max MHz: 5100.0000
CPU min MHz: 800.0000
BogoMIPS: 7222.00
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc art arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc cpuid aperfmperf tsc_known_freq pni pclmulqdq dtes64 monitor ds_cpl vmx smx est tm2 ssse3 sdbg fma cx16 xtpr pdcm pcid sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm 3dnowprefetch cpuid_fault epb ssbd ibrs ibpb stibp ibrs_enhanced tpr_shadow flexpriority ept vpid ept_ad fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid rdseed adx smap clflushopt clwb intel_pt sha_ni xsaveopt xsavec xgetbv1 xsaves split_lock_detect user_shstk avx_vnni dtherm ida arat pln pts hwp hwp_notify hwp_act_window hwp_epp hwp_pkg_req hfi vnmi umip pku ospke waitpkg gfni vaes vpclmulqdq rdpid movdiri movdir64b fsrm md_clear serialize pconfig arch_lbr ibt flush_l1d arch_capabilities
Virtualization: VT-x
L1d cache: 512 KiB (12 instances)
L1i cache: 512 KiB (12 instances)
L2 cache: 12 MiB (9 instances)
L3 cache: 25 MiB (1 instance)
NUMA node(s): 1
NUMA node0 CPU(s): 0-19
Vulnerability Gather data sampling: Not affected
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Not affected
Vulnerability Reg file data sampling: Mitigation; Clear Register File
Vulnerability Retbleed: Not affected
Vulnerability Spec rstack overflow: Not affected
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Enhanced / Automatic IBRS; IBPB conditional; RSB filling; PBRSB-eIBRS SW sequence; BHI BHI_DIS_S
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Versions of relevant libraries:
[pip3] bert_pytorch==0.0.1a4
[pip3] functorch==1.14.0a0+b71aa0b
[pip3] mypy==1.15.0
[pip3] mypy-extensions==1.0.0
[pip3] numpy==2.1.2
[pip3] onnx==1.17.0
[pip3] pytorch-labs-segment-anything-fast==0.2
[pip3] pytorch-triton-rocm==3.3.0+git96316ce5
[pip3] torch==2.8.0.dev20250326+rocm6.3
[pip3] torch_geometric==2.4.0
[pip3] torchao==0.9.0
[pip3] torchaudio==2.6.0.dev20250327+rocm6.3
[pip3] torchmultimodal==0.1.0b0
[pip3] torchvision==0.22.0.dev20250327+rocm6.3
[conda] Could not collect
cc @ezyang @gchanan @zou3519 @kadeng @msaroufim @jeffdaily @sunway513 @jithunnair-amd @pruthvistony @ROCmSupport @dllehr-amd @jataylo @hongxiayang @naromero77amd
| true
|
2,954,869,445
|
xpu: update filter out of dg2 AOT target
|
pytorchbot
|
open
|
[
"open source"
] | 1
|
COLLABORATOR
|
torch-xpu-ops has updated its list of AOT targets and now uses `dg2` instead of `dg2-g10`. This requires an update in cpp_extension.py, which currently filters out `dg2-`-prefixed AOT targets.
CC: @gujinghui @EikanWang @fengyuan14 @guangyey @jgong5
| true
|
2,954,849,410
|
enable out variant of 2-shot reduction
|
ngimel
|
closed
|
[
"oncall: distributed",
"Merged",
"Reverted",
"ciflow/trunk",
"release notes: distributed (c10d)",
"ci-no-td"
] | 12
|
COLLABORATOR
|
Per the title, this version uses the symm mem input both as the input source and as a work buffer, so the input is modified after the call (similar to what the fbgemm CAR reduction does). It is intended to be wrapped in an op that would first copy the real inputs into symm mem buffers that wouldn't be exposed.
cc @H-Huang @awgu @kwen2501 @wanchaol @fegin @fduwjj @wz337 @wconstab @d4l3k @c-p-i-o
| true
|
2,954,834,023
|
Format error in `torch.nn` pages
|
zeshengzong
|
closed
|
[
"module: docs",
"triaged",
"actionable",
"module: python frontend"
] | 0
|
CONTRIBUTOR
|
### 📚 The doc issue
Rendering errors in the `Input2` and `Example` sections of [torch.nn.CosineSimilarity](https://pytorch.org/docs/stable/generated/torch.nn.CosineSimilarity.html#torch.nn.CosineSimilarity) and [torch.nn.KLDivLoss](https://pytorch.org/docs/stable/generated/torch.nn.KLDivLoss.html#torch.nn.KLDivLoss).
### torch.nn.CosineSimilarity

### torch.nn.KLDivLoss

### Suggest a potential alternative/fix
_No response_
cc @svekars @sekyondaMeta @AlannaBurke @albanD
| true
|
2,954,831,357
|
Ensure cuda_dlink_post_cflags are quoted as well
|
saagarjha
|
closed
|
[
"triaged",
"open source",
"Merged",
"ciflow/trunk",
"topic: not user facing"
] | 11
|
CONTRIBUTOR
| null | true
|
2,954,823,351
|
Add RECORD_FUNCTION for AOTI
|
shiyang-weng
|
closed
|
[
"triaged",
"open source",
"Merged",
"ciflow/trunk",
"topic: not user facing",
"module: inductor"
] | 9
|
CONTRIBUTOR
|
This only adds RECORD_FUNCTION for shim_fn for now.
The next step is to add RECORD_FUNCTION for all the aoti_torch_* functions.
Fixes https://github.com/pytorch/pytorch/issues/148650
Some code generated by AOTI:
```c++
AtenTensorHandle buf1_handle;
AtenTensorHandle buf2_handle;
AtenTensorHandle buf3_handle;
AtenTensorHandle buf4_handle;
{RECORD_FUNCTION("aoti_torch_cpu__embedding_bag", c10::ArrayRef<c10::IValue>());AOTI_TORCH_ERROR_CODE_CHECK(aoti_torch_cpu__embedding_bag(L__self___sparse_arch_embedding_bag_collection_embedding_bags_t_cat_0_weight, arg80_1, arg81_1, 0, 0L, 0, nullptr, 1, -1L, &buf1_handle, &buf2_handle, &buf3_handle, &buf4_handle));}
RAIIAtenTensorHandle buf1(buf1_handle);
RAIIAtenTensorHandle buf2(buf2_handle);
RAIIAtenTensorHandle buf3(buf3_handle);
RAIIAtenTensorHandle buf4(buf4_handle);
arg80_1.reset();
arg81_1.reset();
```
On trace
```
{
  "name": "aoti_torch_cpu__embedding_bag",
  "ph": "X",
  "ts": 68874.450000,
  "dur": 361.291000,
  "tid": 2,
  "pid": "CPU Functions",
  "args": {}
},
```
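For reference, a minimal sketch of how such a trace can be collected and inspected with `torch.profiler`; the callable below is a plain stand-in, not the AOTI-compiled module from the snippet above:
```python
import torch
from torch.profiler import ProfilerActivity, profile

# Stand-in workload; in practice this would be the AOTI-compiled runner whose
# shim calls now emit RECORD_FUNCTION events such as "aoti_torch_cpu__embedding_bag".
def run_model():
    x = torch.randn(8, 16)
    return torch.nn.functional.relu(x)

with profile(activities=[ProfilerActivity.CPU]) as prof:
    run_model()

# RECORD_FUNCTION entries show up alongside regular ATen ops in the exported
# Chrome trace (viewable in chrome://tracing or Perfetto).
prof.export_chrome_trace("trace.json")
```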
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
| true
|
2,954,750,019
|
Build fails with cmake 4.0.0
|
saagarjha
|
closed
|
[
"high priority",
"triage review",
"module: build",
"module: third_party"
] | 2
|
CONTRIBUTOR
|
### 🐛 Describe the bug
I did a fresh build of PyTorch this morning, then later this afternoon. Surprisingly, the second one failed. After debugging it seems like the issue is that cmake just released version 4.0.0, which breaks the build for PyTorch:
```
CMake Warning at cmake/public/cuda.cmake:140 (message):
Failed to compute shorthash for libnvrtc.so
Call Stack (most recent call first):
cmake/Dependencies.cmake:44 (include)
CMakeLists.txt:856 (include)
-- Found nvtx3: /root/pytorch/third_party/NVTX/c/include
-- Could NOT find CUDNN (missing: CUDNN_LIBRARY_PATH CUDNN_INCLUDE_PATH)
CMake Warning at cmake/public/cuda.cmake:218 (message):
Cannot find cuDNN library. Turning the option off
Call Stack (most recent call first):
cmake/Dependencies.cmake:44 (include)
CMakeLists.txt:856 (include)
-- Could NOT find CUSPARSELT (missing: CUSPARSELT_LIBRARY_PATH CUSPARSELT_INCLUDE_PATH)
CMake Warning at cmake/public/cuda.cmake:243 (message):
Cannot find cuSPARSELt library. Turning the option off
Call Stack (most recent call first):
cmake/Dependencies.cmake:44 (include)
CMakeLists.txt:856 (include)
-- Could NOT find CUDSS (missing: CUDSS_LIBRARY_PATH CUDSS_INCLUDE_PATH)
CMake Warning at cmake/public/cuda.cmake:259 (message):
Cannot find CUDSS library. Turning the option off
Call Stack (most recent call first):
cmake/Dependencies.cmake:44 (include)
CMakeLists.txt:856 (include)
-- Autodetected CUDA architecture(s): 8.6
-- Added CUDA NVCC flags for: -gencode;arch=compute_86,code=sm_86
CMake Warning at cmake/Dependencies.cmake:95 (message):
Not compiling with XPU. Could NOT find SYCL.Suppress this warning with
-DUSE_XPU=OFF.
Call Stack (most recent call first):
CMakeLists.txt:856 (include)
-- Building using own protobuf under third_party per request.
-- Use custom protobuf build.
CMake Error at third_party/protobuf/cmake/CMakeLists.txt:2 (cmake_minimum_required):
Compatibility with CMake < 3.5 has been removed from CMake.
Update the VERSION argument <min> value. Or, use the <min>...<max> syntax
to tell CMake that the project requires at least <min> but has been updated
to work with policies introduced by <max> or earlier.
Or, add -DCMAKE_POLICY_VERSION_MINIMUM=3.5 to try configuring anyway.
-- Configuring incomplete, errors occurred!
```
### Versions
main (I don't have it installed yet)
cc @ezyang @gchanan @zou3519 @kadeng @msaroufim @malfet @seemethere
| true
|
2,954,694,894
|
[dynamic shapes] C++ bindings for guard_or_false/true
|
pianpwk
|
closed
|
[
"Merged",
"ciflow/trunk",
"release notes: fx",
"fx",
"ciflow/inductor"
] | 6
|
CONTRIBUTOR
|
C++ version. Would like to add it in one place to prove it works, but couldn't find one that doesn't expose a chain of data-dependent changes... so just gonna put up the base implementation
cc @ezyang @SherlockNoMad @EikanWang @jgong5 @wenzhe-nrv
| true
|
2,954,670,701
|
if blaslt fails, fall back to blas
|
jeffdaily
|
closed
|
[
"open source",
"Merged",
"Reverted",
"ciflow/trunk",
"topic: not user facing",
"ciflow/rocm",
"ci-no-td"
] | 22
|
COLLABORATOR
|
Fixes #150016.
This is implemented for both cublaslt and hipblaslt. gemm_and_bias on failure will fall back to unfused path. lt gemm on failure falls back to gemm even if gemm preference is set to lt.
| true
|
2,954,662,367
|
[DTensor] part 2 - fix strided sharding for uneven padding
|
wconstab
|
closed
|
[
"oncall: distributed",
"release notes: distributed (fsdp)",
"ciflow/inductor"
] | 2
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #150393
* __->__ #150146
* #148894
this builds on the previous PR and corrects the full_tensor
reconstruction to account for padding in the case of strided
sharding with uneven tensor shape and padding.
Example: (copied from _StridedShard._to_replicate_tensor docstring)
-------
mesh = (DP=2, TP=2)
original = torch.arange(5)
tp sharded tensor
-----------------
`tp = distribute_tensor(original, world_mesh['tp'], [Shard(0)])`
local_tensors:
rank0: [0,1,2] rank1: [3,4]
rank2: [0,1,2] rank3: [3,4]
fsdp+tp sharded tensor
----------------------
`dp_tp = ...` (the process of creating a strided-shard tensor is skipped over as it is hacky and complicated #TODO put an example somewhere and reference it)
dp_tp has placement (_StridedShard(0, split_factor=2), Shard(0))
local_tensors:
rank0: [0,1] rank1: [3]
rank2: [2] rank3: [4]
Now, say someone wants to reconstruct dp_tp's full tensor. This will invoke 'redistribute' to replicate.
redistribute will first replicate the "Shard(0)" placement on the rightmost mesh dim, then replicate the
StridedShard placement second, which is implemented by this function.
So our starting point (`local_tensor` arg) is the result of replicating the Shard(0) placement across the
TP dim, which looks like this.
Note the discrepancy with the 'tp sharded tensor' line above! We'll fix it by locally shuffling data.
local_tensors:
rank0: [0,1,3] rank1: [0,1,3]
rank2: [2,4] rank3: [2,4]
Step 1: replicate over the DP dimension. Afterwards, each rank can locally sort the values.
note: we need padding to do this allgather, and we'll need to keep track of the padding amount for later
local_tensors:
rank0: [0,1,3,2,4] rank1: [0,1,3,2,4]
rank2: [0,1,3,2,4] rank3: [0,1,3,2,4]
Step 2: chunk and shuffle values around to account for the wrong order of operations above
and get the original tensor content back
01324# <- our allgather includes padding, if padding was applied in step 1
01324 <- Remove the padding
013, 24 <- chunk once, 'undoing' the DP allgather
01, 3, 2, 4 <- chunk each chunk, 'undoing' the initial (wrong) TP allgather performed by Shard(0)->Replicate()
012, 34 <- interleave with stride=TP mesh dim size
01234 <- concatenate
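For concreteness, a minimal sketch of the "tp sharded tensor" step above, assuming a 4-rank (DP=2, TP=2) launch via torchrun and the public `torch.distributed.tensor` APIs available in recent PyTorch releases:
```python
# Run with: torchrun --nproc-per-node=4 example.py
import torch
import torch.distributed as dist
from torch.distributed.device_mesh import init_device_mesh
from torch.distributed.tensor import Shard, distribute_tensor

dist.init_process_group("gloo")
world_mesh = init_device_mesh("cpu", (2, 2), mesh_dim_names=("dp", "tp"))

original = torch.arange(5)
tp = distribute_tensor(original, world_mesh["tp"], [Shard(0)])

# ranks 0 and 2 hold [0, 1, 2]; ranks 1 and 3 hold [3, 4]
print(dist.get_rank(), tp.to_local())
```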
| true
|
2,954,650,602
|
Dont exclude constant_pad_nd in prologue fusion
|
pytorchbot
|
closed
|
[
"open source",
"module: inductor",
"ciflow/inductor"
] | 1
|
COLLABORATOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #149947
Originally, I excluded constant_pad_nd from fusing to be conservative on compilation time. But, on benchmarking, you do occasionally get speedups by fusing it. Also includes a fix for making a single, contiguous dep for prologues.
For instance, the following benchmark gets a 7% speedup by fusing in the constant_pad_nd.
```
import torch
import torch.nn.functional as F
torch._inductor.config.force_disable_caches = True
padded_N = 2048
n_pad_rows = 100
K, N = 2048, 4096
tensor1 = torch.randn(padded_N - n_pad_rows, 4096, device="cuda").to(torch.bfloat16)
tensor2 = torch.randn(4096, 4096, device="cuda").to(torch.bfloat16)
@torch.compile(mode='max-autotune-no-cudagraphs')
def masked_linear(input, weight, n_pad_input_rows):
"""
Linear layer with input padded by `n_pad_input_rows` rows
"""
# Use constant_pad_nd to pad with zeros for the invalid rows
padded_input = F.pad(tensor1, (0, 0, 0, n_pad_input_rows), "constant", 0)
return F.linear(padded_input, weight)
# Invoke the function
masked_linear(tensor1, tensor2, n_pad_rows)
```
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
| true
|
2,954,597,897
|
infer dynamic shapes through additional inputs
|
avikchaudhuri
|
closed
|
[
"fb-exported",
"Merged",
"ciflow/trunk",
"release notes: export"
] | 9
|
CONTRIBUTOR
|
Summary:
Instead of explicitly specifying dynamic shapes, it is possible to infer them from additional example inputs. Together with the example inputs provided to export, we can basically make any varying dim dynamic and keep any fixed dim static. This should be useful for prod scenarios that have access to tests and/or profiling data, yet are somewhat removed from the model authoring process.
However this alone is not satisfactory: the exported program by design has only one graph, representing one path through the model, and we cannot necessarily guarantee that this graph works for the additional example inputs because different guards might have been created if we had exported with them instead (corresponding to different traced paths). However, checking that the additional example inputs satisfy the guards created by the original export should be sufficient for generalization.
Now, while we don't preserve all guards in the exported program, we do check a subset of them as part of input matching. So we add a verification step at the end of export when such additional example inputs are provided. This should be enough for now.
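As an illustration of the approach described above (not the PR's actual API), a hedged sketch that infers a `dynamic_shapes` spec by comparing the export-time example inputs with one additional set of inputs; `infer_dynamic_shapes` is a hypothetical helper written for this example:
```python
import torch
from torch.export import Dim, export


def infer_dynamic_shapes(example_args, additional_args):
    # Mark a dim dynamic only where the two example sets disagree.
    specs = []
    for arg_idx, (a, b) in enumerate(zip(example_args, additional_args)):
        specs.append(
            {
                i: Dim(f"d{arg_idx}_{i}")
                for i, (s, t) in enumerate(zip(a.shape, b.shape))
                if s != t
            }
        )
    return tuple(specs)


class M(torch.nn.Module):
    def forward(self, x):
        return x.sin()


example = (torch.randn(4, 8),)
additional = (torch.randn(7, 8),)  # batch dim varies, feature dim stays fixed

ep = export(M(), example, dynamic_shapes=infer_dynamic_shapes(example, additional))
ep.module()(*additional)  # the additional inputs should satisfy the exported guards
```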
Test Plan: added test (positive and negative cases)
Differential Revision: D72001771
| true
|
2,954,593,426
|
cpp_wrapper: Miscellaneous fixups
|
benjaminglass1
|
closed
|
[
"open source",
"Merged",
"ciflow/trunk",
"topic: not user facing",
"module: inductor",
"ciflow/inductor"
] | 5
|
COLLABORATOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #148773
* #144293
* __->__ #150143
1. Revisit the preprocessing code in cpp_builder.py, removing a hack that channels it through stdout.
2. Fix ops that return None.
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
Differential Revision: [D72053414](https://our.internmc.facebook.com/intern/diff/D72053414)
| true
|
2,954,592,031
|
[ROCm][TunableOp] Stricter unit tests for online and offline tuning
|
naromero77amd
|
closed
|
[
"module: rocm",
"open source",
"Merged",
"ciflow/trunk",
"release notes: linalg_frontend",
"topic: not user facing"
] | 16
|
COLLABORATOR
|
Improvements to unit tests and warnings for unsupported cases in offline tuning. Here are more details:
- Previously we only compared the OpSig for the untuned vs. tuned entries. This was not strict enough, so we now compare OpSig+ParamSig.
- The main offline and online UTs are now stricter, to make sure we exercise the code paths for all four combinations of transA and transB (see the sketch below).
- Offline tuning does not support some tensor shapes; emit a warning and skip tuning.
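For illustration, a minimal sketch of how the four transA/transB combinations can be exercised from Python (TunableOp itself is enabled via the `PYTORCH_TUNABLEOP_ENABLED=1` environment variable; the shapes are arbitrary and the comments describe the intended mapping, not the test's exact code):
```python
import itertools
import torch

m, n, k = 64, 48, 32
for trans_a, trans_b in itertools.product([False, True], repeat=2):
    # A transposed view should reach the underlying GEMM with the
    # corresponding transA/transB flag set.
    a = (torch.randn(k, m, device="cuda", dtype=torch.float16).t()
         if trans_a else torch.randn(m, k, device="cuda", dtype=torch.float16))
    b = (torch.randn(n, k, device="cuda", dtype=torch.float16).t()
         if trans_b else torch.randn(k, n, device="cuda", dtype=torch.float16))
    c = a @ b  # each layout combination should produce a distinct OpSig+ParamSig entry
```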
cc @jeffdaily @sunway513 @jithunnair-amd @pruthvistony @ROCmSupport @dllehr-amd @jataylo @hongxiayang
| true
|
2,954,472,653
|
DISABLED test_foreach_copy_with_multi_dtypes__foreach_copy_cuda_complex128 (__main__.TestForeachCUDA)
|
pytorch-bot[bot]
|
open
|
[
"triaged",
"module: flaky-tests",
"skipped",
"module: mta"
] | 8
|
NONE
|
Platforms: linux, slow
This test was disabled because it is failing in CI. See [recent examples](https://hud.pytorch.org/flakytest?name=test_foreach_copy_with_multi_dtypes__foreach_copy_cuda_complex128&suite=TestForeachCUDA&limit=100) and the most recent trunk [workflow logs](https://github.com/pytorch/pytorch/runs/39537633264).
Over the past 3 hours, it has been determined flaky in 8 workflow(s) with 16 failures and 8 successes.
**Debugging instructions (after clicking on the recent samples link):**
DO NOT ASSUME THINGS ARE OKAY IF THE CI IS GREEN. We now shield flaky tests from developers so CI will thus be green but it will be harder to parse the logs.
To find relevant log snippets:
1. Click on the workflow logs linked above
2. Click on the Test step of the job so that it is expanded. Otherwise, the grepping will not work.
3. Grep for `test_foreach_copy_with_multi_dtypes__foreach_copy_cuda_complex128`
4. There should be several instances run (as flaky tests are rerun in CI) from which you can study the logs.
Test file path: `test_foreach.py`
cc @clee2000 @crcrpar @mcarilli @janeyx99
| true
|
2,954,469,025
|
FSDP2 issue with mp_policy, checkpoint() and float input
|
mori360
|
open
|
[
"oncall: distributed",
"triaged",
"module: fsdp"
] | 0
|
CONTRIBUTOR
|
### 🐛 Describe the bug
https://github.com/pytorch/pytorch/pull/150139
This bug happens when using FSDP2 with mp_policy set, together with checkpoint(model, input) where the input is a float tensor.
Here are the findings of the root-cause analysis:
FSDP1:
- _root_pre_forward
  - _cast_forward_inputs(state.mixed_precision.param_dtype)
- _pre_forward
  - return if the training state is HandleTrainingState.BACKWARD_PRE  # returns early on the second forward
FSDP2:
- _pre_forward
  - return if the training state is TrainingState.PRE_BACKWARD  # returns early on the second forward
- _root_pre_forward
  - cast_fn(self._mp_policy.param_dtype)
In FSDP2, the training-state check happens before the mixed-precision cast. Thus, when using `checkpoint(model, input)`, forward is called twice during `loss.backward()`, and the 2nd call skips `cast_fn`, causing a dtype error.
`cast_fn` uses `_cast_fp_tensor` to cast, so an input that is not a float tensor is not cast on the 1st call either; there is no issue when neither the 1st nor the 2nd call casts.
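A hedged single-rank sketch of this failure mode (not the unit test from the linked PR), assuming PyTorch >= 2.6 where `fully_shard` and `MixedPrecisionPolicy` are exposed under `torch.distributed.fsdp`; run under torchrun with one CUDA device:
```python
# torchrun --nproc-per-node=1 repro.py
import torch
import torch.distributed as dist
from torch.distributed.fsdp import MixedPrecisionPolicy, fully_shard
from torch.utils.checkpoint import checkpoint

dist.init_process_group("nccl")
torch.cuda.set_device(dist.get_rank())

model = torch.nn.Linear(8, 8, device="cuda")
fully_shard(model, mp_policy=MixedPrecisionPolicy(param_dtype=torch.bfloat16))

x = torch.randn(4, 8, device="cuda", requires_grad=True)  # float32 input
out = checkpoint(model, x, use_reentrant=False)
# Per the analysis above, the recomputed forward during backward skips the
# root cast_fn, so the float32 input meets bfloat16 params -> dtype mismatch.
out.sum().backward()
```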
### Versions
PyTorch version: 2.8.0a0+git49d7d66
Is debug build: False
CUDA used to build PyTorch: 12.6
ROCM used to build PyTorch: N/A
cc @H-Huang @awgu @kwen2501 @wanchaol @fegin @fduwjj @wz337 @wconstab @d4l3k @c-p-i-o @zhaojuanmao @mrshenli @rohan-varma @chauhang
| true
|
2,954,453,368
|
[issue][do not land] Add unit test to show composability issue between mp_policy and checkpoint()
|
mori360
|
open
|
[
"oncall: distributed",
"topic: not user facing"
] | 1
|
CONTRIBUTOR
|
cc @H-Huang @awgu @kwen2501 @wanchaol @fegin @fduwjj @wz337 @wconstab @d4l3k @c-p-i-o
| true
|
2,954,412,309
|
[CI] Remove the xpu env source for linux binary validate
|
chuanqi129
|
closed
|
[
"open source",
"Merged",
"ciflow/binaries",
"topic: not user facing"
] | 6
|
COLLABORATOR
|
Because the XPU runtime PyPI packages are now enabled as direct dependencies, sourcing the XPU env is no longer needed for Linux binary validation.
| true
|
2,954,402,711
|
Ignore meta ops in inductor
|
eellison
|
closed
|
[
"Merged",
"ciflow/trunk",
"topic: not user facing",
"module: inductor",
"ciflow/inductor"
] | 3
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #150137
Fix for https://github.com/pytorch/pytorch/issues/144607
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
| true
|
2,954,401,930
|
Ignore meta ops in inductor
|
eellison
|
closed
|
[
"module: inductor",
"ciflow/inductor"
] | 2
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* (to be filled)
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
| true
|
2,954,384,434
|
[TEST - st]
|
muchulee8
|
closed
|
[
"fb-exported",
"module: inductor",
"ciflow/inductor"
] | 4
|
CONTRIBUTOR
|
Differential Revision: D72000844
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @chenyang78 @kadeng @amjames @chauhang @aakhundov
| true
|
2,954,367,623
|
Documentation build errors caused by unsupported section titles
|
dscamiss
|
closed
|
[] | 1
|
CONTRIBUTOR
|
### 📚 The doc issue
The docstring for `torch.nn.attention.flex_attention.BlockMask` has `numpy` style sections with unsupported titles "Basics" and "Details".
This causes documentation build errors, for example `make html` gives
```bash
reading sources... [100%] torch.compiler_get_started .. xpu
/home/dscamiss/pytorch-fix-doc-build-error/torch/nn/attention/flex_attention.py:docstring of torch.nn.attention.flex_attention.BlockMask:5: CRITICAL: Unexpected section title.
Basics
------
/home/dscamiss/pytorch-fix-doc-build-error/torch/nn/attention/flex_attention.py:docstring of torch.nn.attention.flex_attention.BlockMask:39: CRITICAL: Unexpected section title.
Details
-------
```
The build finishes, but reports the errors:
```bash
build finished with problems, 2 warnings.
make: *** [Makefile:51: html] Error 1
```
Tested with PyTorch 2.8.0.
### Suggest a potential alternative/fix
Two possible fixes:
1. Add `napoleon_custom_sections = ["Basics", "Details"]` to the Sphinx config in `docs/source/conf.py`
2. Adjust the `BlockMask` docstring
In either case, I can make a PR.
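For reference, option 1 as it would look in `docs/source/conf.py` (assuming `sphinx.ext.napoleon` is already in the `extensions` list, which the Google/numpy-style docstrings imply):
```python
# docs/source/conf.py
# Teach napoleon to accept the extra section titles used by the BlockMask docstring.
napoleon_custom_sections = [
    "Basics",
    "Details",
]
```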
cc @svekars @sekyondaMeta @AlannaBurke @chauhang @penguinwu @zou3519 @ydwu4 @bdhirsh @Chillee @drisspg @yanboliang @BoyuanFeng
| true
|
2,954,348,585
|
[fr] Added protection against missing stack frames in fr cont.
|
VieEeEw
|
closed
|
[
"fb-exported",
"Merged",
"ciflow/trunk",
"topic: not user facing"
] | 6
|
CONTRIBUTOR
|
Summary: Previously we had D70358287, which didn't fully resolve the issue.
Test Plan:
# FR
`buck2 run @//mode/opt //caffe2/fb/flight_recorder:fr_trace -- --mast_job_id f710320638-TrainingApplication --mast_job_version 0 --mast_job_attempt 0 --bucket tlcm_log_blob --world_size 128 --dump_file_name_offset 0 --allow-incomplete-ranks`
Confirm no error
# FR analyzer
`buck2 run @//mode/opt //investigations/dr_patternson/analyzers/ai_observability:ai_observability-all-analyzers-cli -- flight_recorder_analyzer --mast_job_name f710320638-TrainingApplication --mast_job_version 0 --mast_job_attempt 0`
Confirm no error
Differential Revision: D71998980
| true
|