| id (int64, 2.74B-3.05B) | title (string, 1-255 chars) | user (string, 2-26 chars) | state (2 classes) | labels (list, 0-24 items) | comments (int64, 0-206) | author_association (4 classes) | body (string, 7-62.5k chars, nullable) | is_title (bool, 1 class) |
|---|---|---|---|---|---|---|---|---|
2,785,869,806
|
Handle meta tensors in FX quantization
|
kausv
|
closed
|
[
"fb-exported",
"Merged",
"ciflow/trunk",
"release notes: quantization",
"release notes: AO frontend"
] | 11
|
CONTRIBUTOR
|
Summary:
D66895899 got reverted in D67565250 because of pytorch OSS linter failure.
Adding it back with the format the linter suggested:
https://github.com/pytorch/pytorch/actions/runs/12443655335/job/34743090791
Test Plan: buck run fbcode//mode/dev-nosan fbcode//torchrec/fb/quant/tests:test_embedding_modules
Reviewed By: emlin
Differential Revision: D68132568
| true
|
2,785,808,532
|
update IS_JETSON check
|
Fuzzkatt
|
closed
|
[
"open source",
"Merged",
"ciflow/trunk",
"topic: not user facing"
] | 4
|
COLLABORATOR
|
update IS_JETSON check to include the latest SM
cc @eqy @tinglvv @nWEIdia
| true
|
2,785,783,528
|
ck: add explicit addmm test
|
coconutruben
|
closed
|
[
"fb-exported",
"Stale",
"topic: not user facing",
"module: inductor",
"ciflow/inductor"
] | 3
|
CONTRIBUTOR
|
Summary:
# Why
keep https://github.com/pytorch/pytorch/pull/144519 from regressing
# What
run addmm through CK only with a shape that previously caused a segfault
Test Plan:
```
buck2 test mode/dev-nosan-amd-gpu fbcode//caffe2/test/inductor:test_ck_backend -- --exact 'caffe2/test/inductor:test_ck_backend - test_addmm (caffe2.test.inductor.test_ck_backend.TestCKBackend)'
```
Differential Revision: D68119352
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @ColinPeppler @amjames @desertfire @chauhang @aakhundov @BoyuanFeng
| true
|
2,785,770,901
|
Drop unused num_elements variable
|
c-p-i-o
|
closed
|
[
"oncall: distributed",
"fb-exported",
"Merged",
"ciflow/trunk",
"release notes: distributed (c10d)"
] | 4
|
CONTRIBUTOR
|
Summary:
With the recent enforcement of unused variables as errors in D67329035, certain tests like
https://www.internalfb.com/intern/test/562950135258426?ref_report_id=0
can't build citing:
```
Action failed: fbcode//caffe2:libtorch_cuda (cfg:linux-x86_64-fbcode-platform010-clang17-no-san#2a7259832b2f5c67) (cxx_compile torch/csrc/distributed/c10d/ProcessGroupNCCL.cpp (pic))
Remote command returned non-zero exit code 1
Remote action, reproduce with: `frecli cas download-action a95a6625d2b071a782a7a8ea2882f4adccf103b023df5ccb596f48c506101754:145`
Stdout: <empty>
Stderr:
fbcode/caffe2/torch/csrc/distributed/c10d/ProcessGroupNCCL.cpp:3757:16: error: unused variable 'num_elements' [-Werror,-Wunused-variable]
3757 | size_t num_elements = output.numel();
| ^~~~~~~~~~~~
1 error generated.
```
This causes Sandcastle to turn off these tests, decreasing protection from other bad diffs. Clean up the unused variable to unblock.
Test Plan:
```
buck2 build --config hpc_comms.use_ncclx=dev --flagfile fbcode//mode/opt fbcode//ftar:ftar_py_e2e_test
```
https://www.internalfb.com/buck2/888dfc68-07eb-4ba1-add5-b38c12d52b33
Reviewed By: c-p-i-o
Differential Revision: D68126236
cc @H-Huang @awgu @kwen2501 @wanchaol @fegin @fduwjj @wz337 @wconstab @d4l3k
| true
|
2,785,760,479
|
[Intel GPU] Support SparseCsrXPU codegen
|
cfgfung
|
closed
|
[
"triaged",
"open source",
"Merged",
"ciflow/trunk",
"topic: not user facing",
"ciflow/xpu"
] | 24
|
CONTRIBUTOR
|
Adding a new dispatch key, `SparseCsrXPU`, to enable Intel GPU support for SparseCsr tensors.
Similar PR: https://github.com/pytorch/pytorch/pull/139267
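A minimal usage sketch of what this dispatch key enables; it assumes an XPU-enabled build and an available Intel GPU, so it is illustrative rather than a test from this PR.
```python
import torch

# Build a small 2x2 CSR tensor directly on the XPU device (assumes torch was
# built with XPU support and an Intel GPU is present).
crow_indices = torch.tensor([0, 2, 3])
col_indices = torch.tensor([0, 1, 1])
values = torch.tensor([1.0, 2.0, 3.0])
csr = torch.sparse_csr_tensor(crow_indices, col_indices, values,
                              size=(2, 2), device="xpu")
print(csr.layout)   # torch.sparse_csr
print(csr.device)   # xpu:0
```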
| true
|
2,785,748,597
|
Add tests for different dtypes with max autotune
|
exclamaforte
|
open
|
[
"Merged",
"Reverted",
"Stale",
"ciflow/trunk",
"topic: not user facing",
"module: inductor",
"ciflow/inductor",
"ci-no-td"
] | 15
|
CONTRIBUTOR
|
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @ColinPeppler @amjames @desertfire @chauhang @aakhundov @BoyuanFeng
| true
|
2,785,717,403
|
fixup top
|
soulitzer
|
closed
|
[] | 2
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #144720
* #141842
* #141841
* #144719
| true
|
2,785,717,315
|
Support FunctionalTensor subclass in is_fake and maybe_get_fake_mode
|
soulitzer
|
closed
|
[
"Merged",
"ciflow/trunk",
"topic: not user facing",
"ciflow/inductor"
] | 3
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #141842
* #141841
* __->__ #144719
| true
|
2,785,709,316
|
[xpu] Compilation of pytorch failed, unable to generate RegisterSparseXPU.cpp
|
jgtong
|
closed
|
[
"triaged",
"module: xpu"
] | 3
|
NONE
|
### 🐛 Describe the bug
Description: pytorch installation cannot generate the file `RegisterSparseXPU.cpp`
Back trace of the error:
```
[4/617] Generating ../../../xpu/ATen/XPUFunctions.h, ../../../xpu/ATen/RegisterXPU.cpp, ../../../xpu/ATe...xtend/c_shim_xpu.h, /home/jaytong/pytorch/torch/csrc/inductor/aoti_torch/generated/extend/c_shim_xpu.cpp
FAILED: xpu/ATen/XPUFunctions.h xpu/ATen/RegisterXPU.cpp xpu/ATen/RegisterSparseXPU.cpp /home/jaytong/pytorch/torch/csrc/inductor/aoti_torch/generated/extend/c_shim_xpu.h /home/jaytong/pytorch/torch/csrc/inductor/aoti_torch/generated/extend/c_shim_xpu.cpp /home/jaytong/pytorch/build/xpu/ATen/XPUFunctions.h /home/jaytong/pytorch/build/xpu/ATen/RegisterXPU.cpp /home/jaytong/pytorch/build/xpu/ATen/RegisterSparseXPU.cpp
cd /home/jaytong/pytorch && /home/jaytong/pyenv/pytorch_nightly_2/bin/python -m torchgen.gen --source-path /home/jaytong/pytorch/third_party/torch-xpu-ops/yaml/ --install-dir /home/jaytong/pytorch/build/xpu/ATen/ --per-operator-headers --static-dispatch-backend --backend-whitelist XPU SparseXPU --xpu --update-aoti-c-shim --extend-aoti-c-shim --aoti-install-dir=/home/jaytong/pytorch/torch/csrc/inductor/aoti_torch/generated/extend && cat /home/jaytong/pytorch/third_party/torch-xpu-ops/src/ATen/native/xpu/XPUFallback.template >> /home/jaytong/pytorch/build/xpu/ATen//RegisterXPU.cpp && /home/jaytong/pyenv/pytorch_nightly_2/bin/python /home/jaytong/pytorch/third_party/torch-xpu-ops/tools/codegen/remove_headers.py --register_xpu_path /home/jaytong/pytorch/build/xpu/ATen//RegisterXPU.cpp && /home/jaytong/pyenv/pytorch_nightly_2/bin/python /home/jaytong/pytorch/third_party/torch-xpu-ops/tools/codegen/remove_headers.py --register_xpu_path /home/jaytong/pytorch/build/xpu/ATen//RegisterSparseXPU.cpp
```
### Versions
Pytorch version: From `main` branch from commit: `c15d6508bdb82580803ea4899230043bf6ac2c04`
OS: Ubuntu 22.04.5 LTS
GCC: 11.4.0
cmake: 3.31.4
python: 3.10.12
cc @gujinghui @EikanWang @fengyuan14 @guangyey
| true
|
2,785,704,570
|
assert size/strides for fallback kernel
|
shunting314
|
closed
|
[
"high priority",
"good first issue",
"triaged",
"oncall: pt2",
"module: inductor"
] | 2
|
CONTRIBUTOR
|
### 🐛 Describe the bug
Inductor right now does not generate size/stride asserts for fallback kernels. This makes issues like [this](https://fb.workplace.com/groups/1075192433118967/posts/1567334737238065) very hard to debug (this is a meta internal link).
In ir.FallbackKernel, we already have the following code whose intention is to assert size/strides for fallback kernels:
https://github.com/pytorch/pytorch/blob/c15d6508bdb82580803ea4899230043bf6ac2c04/torch/_inductor/ir.py#L6669-L6670
However, a fallback kernel usually generates a node with MultiOutputLayout, which does not pass the if check.
A fix is to iterate through each item of the FallbackKernel (check self.outputs) and assert size/stride for each of them.
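A standalone sketch of the per-output check being proposed; the helper below is hypothetical and only illustrates the `assert_size_stride` call that inductor-generated code already uses, not the actual `ir.FallbackKernel` change.
```python
import torch
from torch._C._dynamo.guards import assert_size_stride

def assert_outputs_size_stride(outputs, expected):
    # expected: one (size, stride) pair per fallback-kernel output
    for out, (size, stride) in zip(outputs, expected):
        assert_size_stride(out, size, stride)

x = torch.randn(4, 5, 6).transpose(1, 2)   # shape (4, 6, 5), stride (30, 1, 6)
assert_outputs_size_stride([x], [((4, 6, 5), (30, 1, 6))])
```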
I use the following testing script:
```
import torch
import einops
from torch._inductor import config as inductor_config
from torch._dynamo.testing import rand_strided, reset_rng_state
inductor_config.fallback_random = True
image_latent = torch.randn((24, 16, 32, 32), device="cuda").to(memory_format=torch.channels_last).view(2, 12, 16, 32, 32)
def f(image_latent):
indices = torch.argsort(torch.rand(2, 12), dim=-1)[:, : 6]
tar_latent = image_latent[
torch.arange(2).unsqueeze(-1), indices[:, 3:]
]
tar_latent_rearranged = einops.rearrange(
tar_latent, "b n c h w -> (b n) c h w"
)
return {
"tar_latent": tar_latent,
"tar_latent_rearranged": tar_latent_rearranged,
}
reset_rng_state()
ref = f(image_latent)
opt_f = torch.compile(f)
reset_rng_state()
act = opt_f(image_latent)
print(f"max dif {(act['tar_latent'] - ref['tar_latent']).abs().max()}")
print(f"max dif {(act['tar_latent_rearranged'] - ref['tar_latent_rearranged']).abs().max()}")
```
The script may not be able to repro anymore once we fix the layout problem for index.Tensor .
### Versions
..
cc @ezyang @gchanan @zou3519 @kadeng @msaroufim @chauhang @penguinwu @voznesenskym @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @muchulee8 @ColinPeppler @amjames @desertfire @aakhundov @BoyuanFeng @eellison
| true
|
2,785,677,267
|
[BE] [CD] Remove pygit2 dep for aarch64_wheel build
|
malfet
|
closed
|
[
"Merged",
"release notes: releng",
"topic: improvements",
"ciflow/binaries_wheel"
] | 3
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #144698
* __->__ #144716
As it's incompatible with 3.13t and is only used to fetch the branch name, which can be done by running
```
git rev-parse --abbrev-ref HEAD
```
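For illustration, a minimal Python sketch of the same lookup via `subprocess` (assuming plain `git` is available on the runner); the actual build-script change is not shown here.
```python
import subprocess

def current_branch(repo_dir: str = ".") -> str:
    # Equivalent of `git rev-parse --abbrev-ref HEAD`, no pygit2 required.
    return subprocess.check_output(
        ["git", "rev-parse", "--abbrev-ref", "HEAD"], cwd=repo_dir, text=True
    ).strip()

print(current_branch())
```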
Also, remove yet another reference to long gone `master` branch.
Test plan:
Download `manywheel-py3_11-cpu-aarch64.zip` produced by this PR, install it inside a docker container and check its version
```
# pip install torch-2.7.0.dev20250113+cpu-cp311-cp311-manylinux_2_28_aarch64.whl
...
Installing collected packages: mpmath, typing-extensions, sympy, networkx, MarkupSafe, fsspec, filelock, jinja2, torch
Successfully installed MarkupSafe-3.0.2 filelock-3.16.1 fsspec-2024.12.0 jinja2-3.1.5 mpmath-1.3.0 networkx-3.4.2 sympy-1.13.1 torch-2.7.0.dev20250113+cpu typing-extensions-4.12.2
root@434f2540345e:/# python
Python 3.11.9 (main, Aug 1 2024, 23:33:10) [GCC 12.2.0] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> import torch
>>> torch.__version__
'2.7.0.dev20250113+cpu'
```
| true
|
2,785,624,580
|
[mps/inductor] Add support for `ceil`
|
dcci
|
closed
|
[
"Merged",
"topic: not user facing",
"module: mps",
"ciflow/mps",
"module: inductor",
"ciflow/inductor"
] | 4
|
MEMBER
|
inductor/test_index_dynamic_shapes passes after this change.
cc @kulinseth @albanD @malfet @DenisVieriu97 @jhavukainen @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @ColinPeppler @amjames @desertfire @chauhang @aakhundov @BoyuanFeng
| true
|
2,785,611,635
|
[cutlass backend] cexpr the arg before writing to cpp file
|
henrylhtsang
|
closed
|
[
"fb-exported",
"Merged",
"ciflow/trunk",
"topic: not user facing",
"module: inductor",
"ciflow/inductor"
] | 15
|
CONTRIBUTOR
|
Summary: The problem is that for certain shapes (see the unit test), one of the dimensions is an expression like `s0 // 2`. With the cutlass backend, that expression gets written to a C++ file as-is, which leads to a C++ compilation error.
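To illustrate the issue with plain sympy (not the inductor `cexpr` helper, so this is only an approximation), a symbolic dimension like `s0 // 2` prints in Python notation that a C++ compiler cannot parse and has to be converted to a C expression before being emitted:
```python
import sympy
from sympy import ccode

s0 = sympy.Symbol("s0", integer=True, positive=True)
dim = sympy.floor(s0 / 2)   # the symbolic dimension coming from dynamic shapes

print(dim)          # floor(s0/2) -- Python/sympy notation, not valid C++
print(ccode(dim))   # a C-printable equivalent that can go into a generated .cpp file
```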
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @ColinPeppler @amjames @desertfire @chauhang @aakhundov @BoyuanFeng
| true
|
2,785,608,446
|
[dynamo] Delete DictKeysVariable - already have DictKeySetVariable
|
anijain2305
|
closed
|
[
"module: dynamo",
"ciflow/inductor"
] | 2
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #144485
* __->__ #144713
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @chauhang @amjames
| true
|
2,785,586,534
|
[dynamo] skip frame recursively when no graph is traced
|
williamwen42
|
closed
|
[
"Stale",
"topic: not user facing",
"module: dynamo",
"ciflow/inductor"
] | 3
|
MEMBER
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #144712
Fixes https://github.com/pytorch/pytorch/issues/144360. cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @chauhang @amjames @ydwu4
I'm considering refactoring the code flag logic in eval_frame (i.e. SKIP_CODE, SKIP_CODE_RECURSIVE, cache_limit_hit_fag, skip_frame_recursive_flag) to make things better defined and to have a cleaner, more consistent implementation.
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @chauhang @amjames
| true
|
2,785,573,645
|
Tensor Stride Inconsistent (?) Behavior When One of the Dimension is 1
|
HanGuo97
|
open
|
[
"triaged",
"module: memory format"
] | 0
|
CONTRIBUTOR
|
Hi,
I noticed that when a Tensor has `1` in one of its dimensions, its `stride` exhibits inconsistent (?) behavior under transformations + `.contiguous()` compared to a new tensor initialized with the final shape.
Granted, since the dimension in question is `1`, we are never supposed to use an index other than `0`. That being said, this could cause custom (Triton) kernels that rely on a certain stride behavior to fail.
```python
import torch
print("--- n = 1 ---")
X = torch.randn(16, 2048, 1, 128, device="cuda")
print("shape: ", X.shape, "\t\tstride: ", X.contiguous().stride())
X = X.transpose(dim0=1, dim1=2).contiguous()
print("shape: ", X.shape, "\t\tstride: ", X.contiguous().stride())
X = torch.randn(16, 1, 2048, 128, device="cuda")
print("shape: ", X.shape, "\t\tstride: ", X.contiguous().stride())
print("--- n = 2 ---")
X = torch.randn(16, 2048, 2, 128, device="cuda")
print("shape: ", X.shape, "\t\tstride: ", X.contiguous().stride())
X = X.transpose(dim0=1, dim1=2).contiguous()
print("shape: ", X.shape, "\t\tstride: ", X.contiguous().stride())
X = torch.randn(16, 2, 2048, 128, device="cuda")
print("shape: ", X.shape, "\t\tstride: ", X.contiguous().stride())
```
The above code would print out:
```python
--- n = 1 ---
shape: torch.Size([16, 2048, 1, 128]) stride: (262144, 128, 128, 1)
shape: torch.Size([16, 1, 2048, 128]) stride: (262144, 128, 128, 1) # <--- different
shape: torch.Size([16, 1, 2048, 128]) stride: (262144, 262144, 128, 1) # <--- different
--- n = 2 ---
shape: torch.Size([16, 2048, 2, 128]) stride: (524288, 256, 128, 1)
shape: torch.Size([16, 2, 2048, 128]) stride: (524288, 262144, 128, 1) # <--- the same
shape: torch.Size([16, 2, 2048, 128]) stride: (524288, 262144, 128, 1) # <--- the same
```
cc @jamesr66a
| true
|
2,785,566,841
|
Allow ROCm runner to upload benchmark results if found
|
huydhn
|
closed
|
[
"module: rocm",
"Merged",
"ciflow/trunk",
"topic: not user facing",
"test-config/default",
"ciflow/rocm"
] | 3
|
CONTRIBUTOR
|
https://github.com/pytorch/pytorch/wiki/How-to-integrate-with-PyTorch-OSS-benchmark-database. This will unblock AMD when they try to run MI300 benchmarks on CI.
cc @jeffdaily @sunway513 @jithunnair-amd @pruthvistony @ROCmSupport @dllehr-amd @jataylo @hongxiayang @naromero77amd
| true
|
2,785,548,881
|
[aoti] Deduplicate "V.aot_compilation" and "V.graph.aot_mode" flags. [1/n]
|
zhxchen17
|
closed
|
[
"fb-exported",
"Merged",
"ciflow/trunk",
"topic: not user facing",
"module: inductor",
"ciflow/inductor"
] | 9
|
CONTRIBUTOR
|
Summary:
According to angelayi, these two flags indicated different things when we had two-pass codegen, but since we now keep the two flags in sync, we should merge them.
This prevents bugs (e.g. changing the value of aot_mode without also covering branches that check `V.aot_compilation`) when we later add code paths that tweak the value of aot_mode.
Test Plan: CI
Differential Revision: D68122536
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @ColinPeppler @amjames @desertfire @chauhang @aakhundov @BoyuanFeng
| true
|
2,785,533,964
|
[PT][PG] fix build error `unused variable 'num_elements'`
|
tianfengfrank
|
closed
|
[
"oncall: distributed",
"fb-exported",
"ciflow/trunk",
"release notes: distributed (c10d)"
] | 9
|
NONE
|
Summary: Fix the build error revealed in D68075676, exposed by the newly added `-Werror` flag (https://github.com/pytorch/pytorch/pull/136965).
Test Plan: CI
Differential Revision: D68120898
cc @H-Huang @awgu @kwen2501 @wanchaol @fegin @fduwjj @wz337 @wconstab @d4l3k @c-p-i-o
| true
|
2,785,463,492
|
functional compiled autograd
|
zou3519
|
closed
|
[
"fb-exported",
"Merged",
"ciflow/trunk",
"topic: not user facing",
"skip-pr-sanity-checks",
"module: inductor",
"module: dynamo",
"ciflow/inductor",
"module: compiled autograd"
] | 13
|
CONTRIBUTOR
|
This PR squashes together the following commits:
https://github.com/pytorch/pytorch/pull/144115
https://github.com/pytorch/pytorch/pull/143417
https://github.com/pytorch/pytorch/pull/143405
https://github.com/pytorch/pytorch/pull/143387
https://github.com/pytorch/pytorch/pull/143304
https://github.com/pytorch/pytorch/pull/143296
This is a refactor of compiled autograd to use "functional autograd". The end goal is that it gets compiled autograd's initial capture to stop specializing on Tensor metadata, therefore allowing compiled autograd to better handle Tensor subclasses.
For more information, please read the commit messages for each PR.
cc @albanD @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @ColinPeppler @amjames @desertfire @chauhang @aakhundov @xmfan
| true
|
2,785,456,089
|
Revert "Upload METADATA file with whl binaries (#143677)"
|
clee2000
|
closed
|
[
"Merged",
"ciflow/binaries",
"topic: not user facing"
] | 3
|
CONTRIBUTOR
|
This reverts commit 3eb3f4ed5580010a7961d996ccc6ee19c7ccbb5e.
Also reverts https://github.com/pytorch/pytorch/pull/144164
Manual revert because the above causes merge conflicts
Reverting in favor of https://github.com/pytorch/test-infra/pull/6159
| true
|
2,785,310,766
|
int_mm seems broken due to Triton upgrade
|
cpuhrsch
|
closed
|
[
"high priority",
"triaged",
"oncall: pt2",
"module: inductor",
"upstream triton"
] | 5
|
CONTRIBUTOR
|
### 🐛 Describe the bug
```python
import torch
from torch._higher_order_ops.out_dtype import out_dtype
def quantized_matmul(x_vals_int8, x_scales, w_vals_int8):
return out_dtype(torch.ops.aten.mm.default, torch.int32, x_vals_int8, w_vals_int8) * x_scales
x_vals_int8 = torch.randn(65536, 144).to(dtype=torch.int8).cuda()
x_scales = torch.randn(65536, 1).to(dtype=torch.float32).cuda()
w_vals_int8 = torch.randn(432, 144).to(dtype=torch.int8).cuda().t()
qcm = torch.compile(quantized_matmul, mode='max-autotune-no-cudagraphs')
qcm(x_vals_int8, x_scales, w_vals_int8)
```
produces
```
python: /root/.triton/llvm/llvm-86b69c31-almalinux-x64/include/llvm/Support/Casting.h:566: decltype(auto) llvm::cast(const From &) [To = mlir::FloatAttr, From = mlir::Attribute]: Assertion `isa<To>(Val) && "cast<Ty>() argument of incompatible type!"' failed.
Aborted (core dumped)
```
This works on `nightly20241126py312` with `pytorch-triton 3.1.0+cf34004b8a`. Can do more fine-grained bisection if needed.
### Versions
```
Versions of relevant libraries:
[pip3] numpy==2.1.2
[pip3] nvidia-cublas-cu12==12.4.5.8
[pip3] nvidia-cuda-cupti-cu12==12.4.127
[pip3] nvidia-cuda-nvrtc-cu12==12.4.127
[pip3] nvidia-cuda-runtime-cu12==12.4.127
[pip3] nvidia-cudnn-cu12==9.1.0.70
[pip3] nvidia-cufft-cu12==11.2.1.3
[pip3] nvidia-curand-cu12==10.3.5.147
[pip3] nvidia-cusolver-cu12==11.6.1.9
[pip3] nvidia-cusparse-cu12==12.3.1.170
[pip3] nvidia-cusparselt-cu12==0.6.2
[pip3] nvidia-nccl-cu12==2.21.5
[pip3] nvidia-nvjitlink-cu12==12.4.127
[pip3] nvidia-nvtx-cu12==12.4.127
[pip3] pytorch-triton==3.2.0+git0d4682f0
[pip3] torch==2.7.0.dev20250113+cu124
[pip3] torchaudio==2.6.0.dev20250113+cu124
[pip3] torchvision==0.22.0.dev20250113+cu124
[conda] numpy 2.1.2 pypi_0 pypi
[conda] nvidia-cublas-cu12 12.4.5.8 pypi_0 pypi
[conda] nvidia-cuda-cupti-cu12 12.4.127 pypi_0 pypi
[conda] nvidia-cuda-nvrtc-cu12 12.4.127 pypi_0 pypi
[conda] nvidia-cuda-runtime-cu12 12.4.127 pypi_0 pypi
[conda] nvidia-cudnn-cu12 9.1.0.70 pypi_0 pypi
[conda] nvidia-cufft-cu12 11.2.1.3 pypi_0 pypi
[conda] nvidia-curand-cu12 10.3.5.147 pypi_0 pypi
[conda] nvidia-cusolver-cu12 11.6.1.9 pypi_0 pypi
[conda] nvidia-cusparse-cu12 12.3.1.170 pypi_0 pypi
[conda] nvidia-cusparselt-cu12 0.6.2 pypi_0 pypi
[conda] nvidia-nccl-cu12 2.21.5 pypi_0 pypi
[conda] nvidia-nvjitlink-cu12 12.4.127 pypi_0 pypi
[conda] nvidia-nvtx-cu12 12.4.127 pypi_0 pypi
[conda] pytorch-triton 3.2.0+git0d4682f0 pypi_0 pypi
[conda] torch 2.7.0.dev20250113+cu124 pypi_0 pypi
[conda] torchaudio 2.6.0.dev20250113+cu124 pypi_0 pypi
[conda] torchvision 0.22.0.dev20250113+cu124 pypi_0 pypi
```
cc @ezyang @gchanan @zou3519 @kadeng @msaroufim @chauhang @penguinwu @voznesenskym @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @muchulee8 @ColinPeppler @amjames @desertfire @aakhundov @bertmaher @int3 @davidberard98 @nmacchioni @embg @peterbell10
| true
|
2,785,233,817
|
Leave SCCACHE_S3_KEY_PREFIX empty to share the cache among all build jobs
|
huydhn
|
closed
|
[
"Merged",
"ciflow/trunk",
"topic: not user facing"
] | 6
|
CONTRIBUTOR
|
This is a follow-up of https://github.com/pytorch/pytorch/pull/144112#pullrequestreview-2528451214. After leaving https://github.com/pytorch/pytorch/pull/144112 running for more than a week, all build jobs were fine, but I failed to see any improvement in build time.
So, let's try @malfet's suggestion of removing the prefix altogether to keep it simple. After this lands, I will circle back to see if there is any improvement. Otherwise, it's still a simple BE change I guess.
Here is the query I'm using to gather build time data for reference:
```
with jobs as (
select
id,
name,
DATE_DIFF('minute', created_at, completed_at) as duration,
DATE_TRUNC('week', created_at) as bucket
from
workflow_job
where
name like '%/ build'
and html_url like concat('%', {repo: String }, '%')
and conclusion = 'success'
and created_at >= (CURRENT_TIMESTAMP() - INTERVAL 6 MONTHS)
),
aggregated_jobs_in_bucket as (
select
--groupArray(duration) as durations,
--quantiles(0.9)(duration),
avg(duration),
bucket
from
jobs
group by
bucket
)
select
*
from
aggregated_jobs_in_bucket
order by
bucket desc
```
| true
|
2,785,105,673
|
[XPU] Fix AOTI Runner Syntax Error
|
ratnampa
|
closed
|
[
"open source",
"topic: not user facing",
"module: xpu"
] | 6
|
CONTRIBUTOR
|
Syntax error in xpu aoti runner from commit: https://github.com/pytorch/pytorch/pull/142213/ leads to XPU build failure.
cc @gujinghui @EikanWang @fengyuan14 @guangyey
| true
|
2,785,082,412
|
[PP] Don't allow for num_microbatches > num_stages for single stage schedules
|
H-Huang
|
closed
|
[
"oncall: distributed",
"Merged",
"ciflow/trunk",
"release notes: distributed (pipeline)"
] | 8
|
MEMBER
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #144702
There is an edge case where `Schedule1F1B` will hang when num_microbatches=1 (https://github.com/pytorch/torchtitan/issues/775). For validation, it makes sense to check that the number of stages is >= the number of microbatches; otherwise there will be an even larger bubble.
This can be removed when we have the single stage schedules to use an IR and updated to run with schedule runtime (issue tracker https://github.com/pytorch/pytorch/issues/144701)
cc @awgu @kwen2501 @wanchaol @fegin @fduwjj @wz337 @wconstab @d4l3k @c-p-i-o
| true
|
2,785,072,714
|
[Pipelining] Update all schedules to use _PipelineScheduleRuntime
|
H-Huang
|
open
|
[
"triaged",
"better-engineering",
"module: pipelining"
] | 0
|
MEMBER
|
We have a new runtime for pipeline schedules that the existing schedules should be transitioned to.
Things we need to do:
- Update the `_step_microbatches` for each Schedule class to call into the `_PipelineScheduleRuntime._step_microbatches()`
- Update the `Schedule1F1B` and `ScheduleGpipe` to generate the pipeline_order (IR).
- Handle the differences between `PipelineScheduleSingle` vs `PipelineScheduleMulti`
- Update `test_schedule_multiproc.py` and `test_schedule.py` to work as expected
| true
|
2,785,016,599
|
inductor_config_logging: Don't drop keys
|
c00w
|
closed
|
[
"Merged",
"ciflow/trunk",
"topic: not user facing",
"module: dynamo",
"ciflow/inductor"
] | 15
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #144700
This bit me while I was trying to debug some trace issues.
In general this config is already quite large when dumping, so adding
more fields doesn't make it significantly worse.
Also, a number of the items we are type checking for (except the test configs) don't even show up. Primarily this will help us when debugging rocm, halide, and trace configs.
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @chauhang @amjames
| true
|
2,785,011,820
|
inductor `full_like` decompositions give incorrect strides
|
bdhirsh
|
open
|
[
"high priority",
"triaged",
"actionable",
"module: correctness (silent)",
"oncall: pt2",
"module: inductor",
"ubn"
] | 14
|
CONTRIBUTOR
|
min repro:
```
import torch
def f(x):
return torch.full_like(x, 3)
x = torch.randn(4, 5, 6).transpose(1, -1)
out = f(x)
out_compiled = torch.compile(f, backend="aot_eager_decomp_partition")(x)
print(out.stride())
print(out_compiled.stride())
# prints
# (30, 1, 6)
# (30, 5, 1)
```
This seems like the root cause of an NJT compile crash that @jbschlosser was running into (see his [repro](https://www.internalfb.com/intern/paste/P1710266970), [njt_patch](https://www.internalfb.com/phabricator/paste/view/P1710266748) and [error](https://www.internalfb.com/phabricator/paste/view/P1710267237))
cc @ezyang @gchanan @zou3519 @kadeng @msaroufim @chauhang @penguinwu @voznesenskym @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @muchulee8 @ColinPeppler @amjames @desertfire @aakhundov
| true
|
2,784,983,615
|
[CD] Enable python3.13t builds for aarch64
|
malfet
|
closed
|
[
"Merged",
"release notes: releng",
"topic: improvements",
"ciflow/binaries_wheel",
"no-runner-experiments"
] | 3
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #144698
* #144716
But make sure that the right numpy version is picked (2.0.2 does not support 3.13).
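A minimal sketch of the kind of version gate described; the 2.1.2 pin for 3.13 is an assumption for illustration, and the actual CD scripts are not shown here.
```python
import sys

def pick_numpy_pin(py):
    # numpy 2.0.2 has no Python 3.13 wheels, so use a newer pin there (assumed 2.1.2).
    return "numpy==2.1.2" if py >= (3, 13) else "numpy==2.0.2"

print(pick_numpy_pin(sys.version_info[:2]))
```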
| true
|
2,784,975,450
|
[EZ] [CD] Add 3.13 to FULL_PYTHON_VERSIONS
|
malfet
|
closed
|
[
"Merged",
"topic: not user facing"
] | 3
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #144698
* __->__ #144697
* #144696
Separation was necessary for Conda codegen, but now it's gone
| true
|
2,784,974,975
|
[EZ] [CD] Eliminate stale TODO
|
malfet
|
closed
|
[
"Merged",
"topic: not user facing"
] | 1
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #144698
* #144697
* __->__ #144696
As 3.13 has been enabled across the board, which one can verify by running `./github/regenerate.sh` and observing that none of the configs have changed.
| true
|
2,784,957,840
|
Output of nonzero is transposed, fix fake tensor
|
ezyang
|
closed
|
[
"Merged",
"Reverted",
"ciflow/trunk",
"topic: bug fixes",
"module: inductor",
"module: dynamo",
"ciflow/inductor",
"release notes: dynamo",
"ci-no-td"
] | 18
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #144695
Needs this companion executorch PR: https://github.com/pytorch/executorch/pull/7657
Signed-off-by: Edward Z. Yang <ezyang@meta.com>
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @ColinPeppler @amjames @desertfire @chauhang @aakhundov
| true
|
2,784,942,012
|
Fix inductor periodic smoke test wrong artifact
|
huydhn
|
closed
|
[
"Merged",
"topic: not user facing",
"test-config/default",
"ciflow/inductor-periodic",
"test-config/inductor_torchbench_smoketest_perf"
] | 5
|
CONTRIBUTOR
|
I'm not entirely sure why this failure started to show up in periodic since Friday https://github.com/pytorch/pytorch/actions/runs/12716967189/job/35463656803. The artifact was uploaded to S3, but `use-gha: anything-non-empty-to-use-gh` was set and it was working. Maybe this is related to https://github.com/pytorch/pytorch/issues/144479
I also cleaned up the GCP/AWS A100 selection logic, as the GCP cluster doesn't exist anymore.
cc @mlazos
| true
|
2,784,926,778
|
[PagedAttention] Support different input position for each batch index
|
BoyuanFeng
|
closed
|
[
"Merged",
"ciflow/trunk",
"topic: not user facing",
"module: inductor",
"ciflow/inductor",
"module: flex attention"
] | 6
|
CONTRIBUTOR
|
In LLM inference, each request usually has a different prefill length, leading to a different input position for each batch index. This PR adds such support for paged attention.
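As a toy illustration of the setup (hypothetical shapes and names, not the flex-attention API itself): with per-request prefill lengths, the next-token position differs per batch index, so a single scalar input position is not enough.
```python
import torch

prefill_lens = torch.tensor([5, 17, 9, 32])   # prefill length of each request in the batch
input_pos = prefill_lens.clone()              # per-batch-index decode position
print(input_pos)                              # tensor([ 5, 17,  9, 32])
```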
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @ColinPeppler @amjames @desertfire @chauhang @aakhundov @Chillee @drisspg @yanboliang
| true
|
2,784,895,369
|
ROCm: Skip tests in elastic/utils/distributed_test
|
jagadish-amd
|
closed
|
[
"oncall: distributed",
"module: rocm",
"open source",
"Merged",
"topic: not user facing",
"ciflow/periodic"
] | 7
|
CONTRIBUTOR
|
The tests are failing on ROCm machines due to the error below: `The client socket has timed out after 1000ms while trying to connect to (gpu4f67.jax.cs.cpe.ice.amd.com, 0)`.
Disabling the tests.
cc @H-Huang @awgu @kwen2501 @wanchaol @fegin @fduwjj @wz337 @wconstab @d4l3k @c-p-i-o @jeffdaily @sunway513 @jithunnair-amd @pruthvistony @ROCmSupport @dllehr-amd @jataylo @hongxiayang @naromero77amd
| true
|
2,784,863,760
|
torch.cond + torch.non_zero does not work with torch.export.export
|
xadupre
|
closed
|
[
"oncall: pt2",
"oncall: export"
] | 14
|
COLLABORATOR
|
### 🐛 Describe the bug
I can't export the following model after rewriting the code with torch.cond. I tried with different configurations all listed below. None worked.
```python
import torch
class Model(torch.nn.Module):
def forward(
self,
input_ids,
image_features,
vocab_size,
):
if image_features.numel():
input_shape = input_ids.size()
input_ids = input_ids.view(-1, input_shape[-1])
# positions for image tokens
condition = (input_ids < 0) & (input_ids > -int(1e9))
positions = torch.where(condition)
# has_image = len(positions[0].tolist()) > 0
input_ids = input_ids.clamp_min(0).clamp_max(vocab_size)
return (input_ids, *positions)
return (input_ids, *torch.where(torch.zeros((1, 1), dtype=torch.bool)))
inputs = [
(
(torch.arange(24) - 8).reshape((2, -1)).to(torch.int64),
torch.arange(32).reshape((2, -1)).to(torch.float32),
1025,
),
(
(torch.arange(24) - 8).reshape((2, -1)).to(torch.int64),
torch.tensor([[], []], dtype=torch.float32),
1025,
),
]
model = Model()
expected = [model(*inp) for inp in inputs]
assert len(expected) == 2
assert len(expected[0]) == len(expected[1]) == 3
# Rewriting with torch.cond.
class Model2(torch.nn.Module):
def forward(self, input_ids, image_features, vocab_size):
def then_branch(input_ids, image_features, vocab_size):
input_shape = input_ids.size()
input_ids = input_ids.view(-1, input_shape[-1])
condition = (input_ids < 0) & (input_ids > -int(1e9))
positions = torch.nonzero(condition, as_tuple=True)
input_ids = input_ids.clamp_min(0).clamp_max(vocab_size)
return (input_ids, positions[0], positions[1])
def else_branch(input_ids, image_features, vocab_size):
r = torch.where(torch.zeros((1, 1), dtype=torch.bool))
return (input_ids, r[0], r[1])
a, b, c = torch.cond(
image_features.numel() > 0,
then_branch,
else_branch,
[input_ids, image_features, vocab_size],
)
return a, b, c
# Check that it is equivalent.
model2 = Model2()
new_out = [model2(*inp) for inp in inputs]
for i in range(2):
for j in range(3):
torch.testing.assert_close(expected[i][j], new_out[i][j])
batch = torch.export.Dim("batch")
seq_length = torch.export.Dim("seq_length")
dynamic_shapes = ({0: batch}, {0: batch, 1: seq_length}, None)
# We try to export with (tensor, tensor, int)
# ep = torch.export.export(model2, inputs[0], dynamic_shapes=dynamic_shapes, strict=False)
# fails with Expect operands to be a tuple of possibly nested dict/list/tuple that only consists of tensor leaves, but got [FakeTensor(..., size=(s1, 12), dtype=torch.int64), FakeTensor(..., size=(s2, s3)), 1025].
# print(ep)
# We try to export with (tensor, tensor, int)
new_inputs = (*inputs[0][:2], torch.tensor([1025], dtype=torch.int64))
# ep = torch.export.export(model2, new_inputs, dynamic_shapes=dynamic_shapes, strict=False)
# torch._dynamo.exc.Unsupported: dynamic shape operator: aten.nonzero.default; to enable, set torch._dynamo.config.capture_dynamic_output_shape_ops = True
# torch._dynamo.exc.UncapturedHigherOrderOpError: Cond doesn't work unless it is captured completely with torch.compile. Scroll up to find out what causes the graph break.
# print(ep)
torch._dynamo.config.capture_dynamic_output_shape_ops = True
ep = torch.export.export(model2, new_inputs, dynamic_shapes=dynamic_shapes, strict=False)
# torch._dynamo.exc.UncapturedHigherOrderOpError: Expected true_fn_output and false_fn_output to have same metadata but found:
# pair[1] differ in 'shape: torch.Size([u0]) vs torch.Size([u1])', where lhs is FakeTensor(..., size=(u0,), dtype=torch.int64) and rhs is FakeTensor(..., size=(u1,), dtype=torch.int64)
# pair[2] differ in 'shape: torch.Size([u0]) vs torch.Size([u1])', where lhs is FakeTensor(..., size=(u0,), dtype=torch.int64) and rhs is FakeTensor(..., size=(u1,), dtype=torch.int64)
print(ep)
```
### Versions
Collecting environment information...
PyTorch version: 2.7.0.dev20250113+cu126
Is debug build: False
CUDA used to build PyTorch: 12.6
ROCM used to build PyTorch: N/A
OS: Ubuntu 22.04.4 LTS (x86_64)
GCC version: (Ubuntu 11.4.0-1ubuntu1~22.04) 11.4.0
Clang version: Could not collect
CMake version: version 3.31.4
Libc version: glibc-2.35
Python version: 3.12.8 (main, Dec 4 2024, 08:54:12) [GCC 11.4.0] (64-bit runtime)
Python platform: Linux-5.15.167.4-microsoft-standard-WSL2-x86_64-with-glibc2.35
Is CUDA available: True
CUDA runtime version: 12.6.68
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration: GPU 0: NVIDIA GeForce RTX 4060 Laptop GPU
Nvidia driver version: 538.92
cuDNN version: Probably one of the following:
/usr/lib/x86_64-linux-gnu/libcudnn.so.9.3.0
/usr/lib/x86_64-linux-gnu/libcudnn_adv.so.9.3.0
/usr/lib/x86_64-linux-gnu/libcudnn_cnn.so.9.3.0
/usr/lib/x86_64-linux-gnu/libcudnn_engines_precompiled.so.9.3.0
/usr/lib/x86_64-linux-gnu/libcudnn_engines_runtime_compiled.so.9.3.0
/usr/lib/x86_64-linux-gnu/libcudnn_graph.so.9.3.0
/usr/lib/x86_64-linux-gnu/libcudnn_heuristic.so.9.3.0
/usr/lib/x86_64-linux-gnu/libcudnn_ops.so.9.3.0
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 46 bits physical, 48 bits virtual
Byte Order: Little Endian
CPU(s): 20
On-line CPU(s) list: 0-19
Vendor ID: GenuineIntel
Model name: 13th Gen Intel(R) Core(TM) i7-13800H
CPU family: 6
Model: 186
Thread(s) per core: 2
Core(s) per socket: 10
Socket(s): 1
Stepping: 2
BogoMIPS: 5836.79
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ss ht syscall nx pdpe1gb rdtscp lm constant_tsc rep_good nopl xtopology tsc_reliable nonstop_tsc cpuid pni pclmulqdq vmx ssse3 fma cx16 pcid sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand hypervisor lahf_lm abm 3dnowprefetch invpcid_single ssbd ibrs ibpb stibp ibrs_enhanced tpr_shadow vnmi ept vpid ept_ad fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid rdseed adx smap clflushopt clwb sha_ni xsaveopt xsavec xgetbv1 xsaves avx_vnni umip waitpkg gfni vaes vpclmulqdq rdpid movdiri movdir64b fsrm md_clear serialize flush_l1d arch_capabilities
Virtualization: VT-x
Hypervisor vendor: Microsoft
Virtualization type: full
L1d cache: 480 KiB (10 instances)
L1i cache: 320 KiB (10 instances)
L2 cache: 12.5 MiB (10 instances)
L3 cache: 24 MiB (1 instance)
Vulnerability Gather data sampling: Not affected
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Not affected
Vulnerability Reg file data sampling: Mitigation; Clear Register File
Vulnerability Retbleed: Mitigation; Enhanced IBRS
Vulnerability Spec rstack overflow: Not affected
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl and seccomp
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Enhanced / Automatic IBRS; IBPB conditional; RSB filling; PBRSB-eIBRS SW sequence; BHI BHI_DIS_S
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Versions of relevant libraries:
[pip3] mypy-extensions==1.0.0
[pip3] numpy==2.2.1
[pip3] nvidia-cublas-cu12==12.6.4.1
[pip3] nvidia-cuda-cupti-cu12==12.6.80
[pip3] nvidia-cuda-nvrtc-cu12==12.6.77
[pip3] nvidia-cuda-runtime-cu12==12.6.77
[pip3] nvidia-cudnn-cu12==9.5.1.17
[pip3] nvidia-cufft-cu12==11.3.0.4
[pip3] nvidia-curand-cu12==10.3.7.77
[pip3] nvidia-cusolver-cu12==11.7.1.2
[pip3] nvidia-cusparse-cu12==12.5.4.2
[pip3] nvidia-cusparselt-cu12==0.6.3
[pip3] nvidia-nccl-cu12==2.21.5
[pip3] nvidia-nvjitlink-cu12==12.6.85
[pip3] nvidia-nvtx-cu12==12.6.77
[pip3] onnx==1.18.0
[pip3] onnx-extended==0.3.0
[pip3] onnxruntime_extensions==0.13.0
[pip3] onnxruntime-training==1.21.0+cu126
[pip3] pytorch-triton==3.2.0+git0d4682f0
[pip3] torch==2.7.0.dev20250113+cu126
[pip3] torch_geometric==2.4.0
[pip3] torchaudio==2.6.0.dev20250113+cu126
[pip3] torchvision==0.22.0.dev20250113+cu126
[conda] Could not collect
cc @chauhang @penguinwu @avikchaudhuri @gmagogsfm @zhxchen17 @tugsbayasgalan @angelayi @suo @ydwu4
| true
|
2,784,701,156
|
Support new CUDA conda package layout natively in cpp_extension.CUDAExtension
|
vyasr
|
open
|
[
"module: cpp-extensions",
"module: cuda",
"triaged",
"enhancement"
] | 5
|
NONE
|
### 🚀 The feature, motivation and pitch
[`torch.utils.cpp_extension.CUDAExtension`](https://pytorch.org/docs/stable/cpp_extension.html#torch.utils.cpp_extension.CUDAExtension) is designed to simplify compiling extension modules that require CUDA. To facilitate CUDA usage, it adds library/include/etc paths and passes them along to setuptools. These paths are currently based on the standard layout for CUDA packages provided via standard package managers (e.g. for Linux distros). However, as of CUDA 12 this is not the layout when CUDA is installed via conda packages.
Recent updates to the CUDA infrastructure on conda-forge have added support for compiling CUDA code using compilers installed from conda (which was previously not possible). Since conda environments need to support cross-compilation, the packages are installed into a splayed layout where all files are placed into a `${PREFIX}/targets` directory and only a subset of them are symlinked directly into normal directories. In particular, shared libraries are symlinked into `${PREFIX}/lib`, but the includes are not linked into `${PREFIX}/include`; instead the nvcc compiler in conda is configured (via nvcc.profile and environment variables) to know where to search for includes. As mentioned above, supporting cross-compilation in conda environments was a key point in these decisions (some discussion started in https://github.com/conda-forge/cuda-nvcc-feedstock/issues/12, happy to point to more threads if needed).
It would be ideal for PyTorch to also support compilation in these environments. To do so, the extension would need to also start searching these additional directories.
### Alternatives
At the moment this issue may be worked around by setting [`CUDA_INC_PATH`](https://github.com/pytorch/pytorch/blob/main/torch/utils/cpp_extension.py#L1240), so this issue is primarily to document a nice-to-have feature as well as to have something to point to in case future users encounter confusion around building extensions with pytorch inside modern conda environments.
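A sketch of that workaround, assuming a conda environment with the splayed `targets` layout; the target-triple directory name is an assumption and may differ per platform.
```python
import os

prefix = os.environ.get("CONDA_PREFIX", "/opt/conda")
# Point cpp_extension at the splayed include directory before building.
os.environ["CUDA_INC_PATH"] = os.path.join(prefix, "targets", "x86_64-linux", "include")

# setup.py would then define the extension as usual, e.g.:
# from torch.utils.cpp_extension import CUDAExtension, BuildExtension
# ext_modules = [CUDAExtension(name="my_ext", sources=["my_ext.cu"])]
```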
### Additional context
_No response_
cc @malfet @zou3519 @xmfan @ptrblck @msaroufim @eqy
| true
|
2,784,695,373
|
DISABLED test_tcp (__main__.WorkerServerTest)
|
jeffdaily
|
open
|
[
"oncall: distributed",
"module: rocm",
"triaged",
"skipped"
] | 1
|
COLLABORATOR
|
Platforms: rocm
This test was disabled because it is failing on main branch ([recent examples](https://torch-ci.com/failure?failureCaptures=%5B%22distributed%2Felastic%2Ftest_control_plane.py%3A%3AWorkerServerTest%3A%3Atest_tcp%22%5D)).
There is some setup issue with the ROCm CI self-hosted runners that blocks this port. Need to investigate further, but disable for now to improve the CI signal.
cc @H-Huang @awgu @kwen2501 @wanchaol @fegin @fduwjj @wz337 @wconstab @d4l3k @c-p-i-o @sunway513 @jithunnair-amd @pruthvistony @ROCmSupport @dllehr-amd @jataylo @hongxiayang @naromero77amd
| true
|
2,784,672,782
|
topK for sparse Vectors
|
arthur-75
|
open
|
[
"module: sparse",
"triaged"
] | 1
|
NONE
|
### 🚀 The feature, motivation and pitch
Hello, thanks for this great package. Is it possible to have topk with sparse vectors?
Thanks
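A minimal workaround sketch in the meantime: rank only the stored (nonzero) values of a sparse COO vector, which is valid as long as k does not exceed the number of stored values.
```python
import torch

dense = torch.tensor([0.0, 3.0, 0.0, 1.0, 5.0, 0.0])
sp = dense.to_sparse().coalesce()

values = sp.values()          # stored nonzero values
indices = sp.indices()[0]     # their positions in the dense vector
top_vals, top_pos = values.topk(k=2)
print(top_vals, indices[top_pos])   # tensor([5., 3.]) tensor([4, 1])
```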
### Alternatives
_No response_
### Additional context
_No response_
cc @alexsamardzic @nikitaved @pearu @cpuhrsch @amjames @bhosmer @jcaip
| true
|
2,784,639,505
|
Loading sparse tensors in a `DataLoader` raises CUDA initialization error since `2.5.0`
|
douglas-boubert
|
closed
|
[
"module: sparse",
"module: dataloader",
"module: cuda",
"triaged",
"module: regression"
] | 16
|
NONE
|
### 🐛 Describe the bug
```python
import torch
from torch.utils.data import Dataset, DataLoader
def create_sparse_tensor():
tensor = torch.randn(5, 5)
sparse_tensor = tensor.to_sparse().to("cpu")
torch.save(sparse_tensor, "sparse_tensor.pth")
class OperatorDataset(Dataset):
def __init__(self):
self.files = ["sparse_tensor.pth"]
def __len__(self):
return len(self.files)
def __getitem__(self, idx):
_ = torch.load(self.files[idx], weights_only=True, map_location="cpu")
return None
if __name__ == '__main__':
print(torch.__version__)
create_sparse_tensor()
dataset = OperatorDataset()
dataloader = DataLoader(
dataset,
batch_size=None,
num_workers=1,
pin_memory=True,
)
for sparse_tensor in dataloader:
# Error raised here
pass
```
This code snippet succeeds on PyTorch 2.4.1 and fails on 2.5.0, 2.5.1 and the latest nightly:
```
2.5.1+cu124
Traceback (most recent call last):
File "/home/douglas/minimum_working_example.py", line 37, in <module>
for sparse_tensor in dataloader:
File "/home/douglas/miniconda3/envs/torch_sparse/lib/python3.10/site-packages/torch/utils/data/dataloader.py", line 701, in __next__
data = self._next_data()
File "/home/douglas/miniconda3/envs/torch_sparse/lib/python3.10/site-packages/torch/utils/data/dataloader.py", line 1465, in _next_data
return self._process_data(data)
File "/home/douglas/miniconda3/envs/torch_sparse/lib/python3.10/site-packages/torch/utils/data/dataloader.py", line 1491, in _process_data
data.reraise()
File "/home/douglas/miniconda3/envs/torch_sparse/lib/python3.10/site-packages/torch/_utils.py", line 715, in reraise
raise exception
RuntimeError: Caught RuntimeError in DataLoader worker process 0.
Original Traceback (most recent call last):
File "/home/douglas/miniconda3/envs/torch_sparse/lib/python3.10/site-packages/torch/utils/data/_utils/worker.py", line 351, in _worker_loop
data = fetcher.fetch(index) # type: ignore[possibly-undefined]
File "/home/douglas/miniconda3/envs/torch_sparse/lib/python3.10/site-packages/torch/utils/data/_utils/fetch.py", line 54, in fetch
data = self.dataset[possibly_batched_index]
File "/home/douglas/projects/gen11/research-lethe/minimum_working_example.py", line 19, in __getitem__
_ = torch.load(self.files[idx], weights_only=True, map_location="cpu")
File "/home/douglas/miniconda3/envs/torch_sparse/lib/python3.10/site-packages/torch/serialization.py", line 1351, in load
return _load(
File "/home/douglas/miniconda3/envs/torch_sparse/lib/python3.10/site-packages/torch/serialization.py", line 1851, in _load
torch._utils._validate_loaded_sparse_tensors()
File "/home/douglas/miniconda3/envs/torch_sparse/lib/python3.10/site-packages/torch/_utils.py", line 254, in _validate_loaded_sparse_tensors
torch._validate_sparse_coo_tensor_args(
RuntimeError: CUDA error: initialization error
CUDA kernel errors might be asynchronously reported at some other API call, so the stacktrace below might be incorrect.
For debugging consider passing CUDA_LAUNCH_BLOCKING=1
Compile with `TORCH_USE_CUDA_DSA` to enable device-side assertions.
```
### Versions
Collecting environment information...
PyTorch version: 2.5.1+cu124
Is debug build: False
CUDA used to build PyTorch: 12.4
ROCM used to build PyTorch: N/A
OS: Rocky Linux release 8.8 (Green Obsidian) (x86_64)
GCC version: (GCC) 8.5.0 20210514 (Red Hat 8.5.0-18)
Clang version: Could not collect
CMake version: version 3.20.2
Libc version: glibc-2.28
Python version: 3.10.16 (main, Dec 11 2024, 16:24:50) [GCC 11.2.0] (64-bit runtime)
Python platform: Linux-4.18.0-477.13.1.el8_8.x86_64-x86_64-with-glibc2.28
Is CUDA available: True
CUDA runtime version: Could not collect
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration:
GPU 0: NVIDIA A100 80GB PCIe
GPU 1: NVIDIA A100 80GB PCIe
GPU 2: NVIDIA A100 80GB PCIe
GPU 3: NVIDIA A100 80GB PCIe
Nvidia driver version: 530.30.02
cuDNN version: Could not collect
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Byte Order: Little Endian
CPU(s): 32
On-line CPU(s) list: 0-31
Thread(s) per core: 1
Core(s) per socket: 16
Socket(s): 2
NUMA node(s): 2
Vendor ID: GenuineIntel
CPU family: 6
Model: 106
Model name: Intel(R) Xeon(R) Gold 6326 CPU @ 2.90GHz
Stepping: 6
CPU MHz: 3500.000
CPU max MHz: 3500.0000
CPU min MHz: 800.0000
BogoMIPS: 5800.00
Virtualization: VT-x
L1d cache: 48K
L1i cache: 32K
L2 cache: 1280K
L3 cache: 24576K
NUMA node0 CPU(s): 0-15
NUMA node1 CPU(s): 16-31
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc art arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc cpuid aperfmperf pni pclmulqdq dtes64 monitor ds_cpl vmx smx est tm2 ssse3 sdbg fma cx16 xtpr pdcm pcid dca sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm 3dnowprefetch cpuid_fault epb cat_l3 invpcid_single ssbd mba ibrs ibpb stibp ibrs_enhanced tpr_shadow vnmi flexpriority ept vpid ept_ad fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid cqm rdt_a avx512f avx512dq rdseed adx smap avx512ifma clflushopt clwb intel_pt avx512cd sha_ni avx512bw avx512vl xsaveopt xsavec xgetbv1 xsaves cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local split_lock_detect wbnoinvd dtherm ida arat pln pts avx512vbmi umip pku ospke avx512_vbmi2 gfni vaes vpclmulqdq avx512_vnni avx512_bitalg tme avx512_vpopcntdq la57 rdpid fsrm md_clear pconfig flush_l1d arch_capabilities
Versions of relevant libraries:
[pip3] numpy==2.2.1
[pip3] nvidia-cublas-cu12==12.4.5.8
[pip3] nvidia-cuda-cupti-cu12==12.4.127
[pip3] nvidia-cuda-nvrtc-cu12==12.4.127
[pip3] nvidia-cuda-runtime-cu12==12.4.127
[pip3] nvidia-cudnn-cu12==9.1.0.70
[pip3] nvidia-cufft-cu12==11.2.1.3
[pip3] nvidia-curand-cu12==10.3.5.147
[pip3] nvidia-cusolver-cu12==11.6.1.9
[pip3] nvidia-cusparse-cu12==12.3.1.170
[pip3] nvidia-nccl-cu12==2.21.5
[pip3] nvidia-nvjitlink-cu12==12.4.127
[pip3] nvidia-nvtx-cu12==12.4.127
[pip3] torch==2.5.1
[pip3] torchaudio==2.5.1
[pip3] torchvision==0.20.1
[pip3] triton==3.1.0
[conda] numpy 2.2.1 pypi_0 pypi
[conda] nvidia-cublas-cu12 12.4.5.8 pypi_0 pypi
[conda] nvidia-cuda-cupti-cu12 12.4.127 pypi_0 pypi
[conda] nvidia-cuda-nvrtc-cu12 12.4.127 pypi_0 pypi
[conda] nvidia-cuda-runtime-cu12 12.4.127 pypi_0 pypi
[conda] nvidia-cudnn-cu12 9.1.0.70 pypi_0 pypi
[conda] nvidia-cufft-cu12 11.2.1.3 pypi_0 pypi
[conda] nvidia-curand-cu12 10.3.5.147 pypi_0 pypi
[conda] nvidia-cusolver-cu12 11.6.1.9 pypi_0 pypi
[conda] nvidia-cusparse-cu12 12.3.1.170 pypi_0 pypi
[conda] nvidia-nccl-cu12 2.21.5 pypi_0 pypi
[conda] nvidia-nvjitlink-cu12 12.4.127 pypi_0 pypi
[conda] nvidia-nvtx-cu12 12.4.127 pypi_0 pypi
[conda] torch 2.5.1 pypi_0 pypi
[conda] torchaudio 2.5.1 pypi_0 pypi
[conda] torchvision 0.20.1 pypi_0 pypi
[conda] triton 3.1.0 pypi_0 pypi
cc @alexsamardzic @nikitaved @pearu @cpuhrsch @amjames @bhosmer @jcaip @andrewkho @divyanshk @SsnL @VitalyFedyunin @dzhulgakov @ptrblck @msaroufim @eqy
| true
|
2,784,601,231
|
[export] Load side info about pos/kw argument kind for serialization.
|
zhxchen17
|
closed
|
[
"fb-exported",
"Merged",
"ciflow/trunk",
"ciflow/inductor",
"release notes: export"
] | 8
|
CONTRIBUTOR
|
Summary:
Fixing issue of nodes like
```
torch.ops.aten.linear.default(x, w, b)
```
being deserialized as
```
torch.ops.aten.linear.default(x, w, bias=b)
```
which breaks roundtripping.
Test Plan:
buck test mode/opt caffe2/test:test_export -- -r TestDeserialize
buck test mode/opt caffe2/test:test_export -- -r TestSerialize
Differential Revision: D67991410
| true
|
2,784,571,322
|
[RelEng] Add `--ami` option to build_aarch64
|
malfet
|
closed
|
[
"Merged",
"topic: not user facing"
] | 4
|
CONTRIBUTOR
|
Which should be mutually exclusive with OS.
For example, one can use the following to alloc one-off instance
```
./build_aarch64_wheel.py --alloc-instance --instance-type g5.4xlarge --key-name nshulga-key --ami ami-0f51103893c02957c --ebs-size 200
```
TODO:
- Figure out the EBS volume name depending on the AMI (for `ami-05576a079321f21f8` (al2023) it's `/dev/xvda`, but for `ami-0f51103893c02957c` (deep learning container) it's `/dev/sda1`)
| true
|
2,784,544,778
|
[export] Fix torchbind constant folding
|
yiming0416
|
closed
|
[
"fb-exported",
"Merged",
"ciflow/trunk",
"module: inductor",
"ciflow/inductor",
"release notes: export"
] | 6
|
CONTRIBUTOR
|
Summary: `CallTorchBind` should not be folded during constant folding
Test Plan:
```
buck2 run mode/dev-nosan sigmoid/inference/test:test_passes -- -r test_const_folding_torchbind
```
Reviewed By: henryoier
Differential Revision: D67721272
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @ColinPeppler @amjames @desertfire @chauhang @aakhundov @BoyuanFeng
| true
|
2,784,500,397
|
torch.export treats two of the same parameters as the same node
|
jackzhxng
|
open
|
[
"oncall: pt2",
"oncall: export"
] | 2
|
CONTRIBUTOR
|
### 🚀 The feature, motivation and pitch
`torch.export` where the same tensor is used for multiple args in the example inputs, e.g. (self.x, self.x), results in a confusing graph where the two parameters seem to be treated as the same node. As a basic example, after exporting with such example inputs, when I pass in something like ep.module()(torch.zeros(10), torch.ones(10)) and trace through the exported graph, for ops where I expect the arg to be the first parameter, torch.zeros(10), it takes the second parameter, torch.ones(10).
Discussed with @angelayi and we think that this should be expected behavior, since it is a valid use case that the two parameters passed in are references to the same object, in which case it would make sense for them to share a node in the graph. However, my case was that both are possible: they could refer to the same object in some scenarios and different objects in others. To capture this dual use case we would need to pass in the latter as example input instead of the former. We think it would be useful, though, to add a warning log about this behavior.
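A minimal sketch of the situation described, with a hypothetical module and shapes; exporting with the same tensor bound to both arguments lets the graph treat them as aliases, so calling the exported module with two different tensors afterwards may not match eager.
```python
import torch

class TwoArgs(torch.nn.Module):
    def forward(self, a, b):
        return a + 2 * b

m = TwoArgs()
t = torch.zeros(10)
ep = torch.export.export(m, (t, t))   # the same object is passed for both args

# Eager result vs. exported-graph result with *different* tensors:
print(m(torch.zeros(10), torch.ones(10)))
print(ep.module()(torch.zeros(10), torch.ones(10)))
```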
### Alternatives
_No response_
### Additional context
_No response_
cc @chauhang @penguinwu @avikchaudhuri @gmagogsfm @zhxchen17 @tugsbayasgalan @angelayi @suo @ydwu4
| true
|
2,784,393,208
|
[BE]: Improve typing inference with TypeIs
|
Skylion007
|
closed
|
[
"oncall: distributed",
"open source",
"better-engineering",
"Merged",
"ciflow/trunk",
"ciflow/inductor",
"release notes: export"
] | 4
|
COLLABORATOR
|
cc @H-Huang @awgu @kwen2501 @wanchaol @fegin @fduwjj @wz337 @wconstab @d4l3k @c-p-i-o @LucasLLC @MeetVadakkanchery @mhorowitz @pradeepfn @ekr0
| true
|
2,784,131,177
|
Add heuristic to fail block pointer match early
|
kundaMwiza
|
closed
|
[
"triaged",
"open source",
"Merged",
"ciflow/trunk",
"topic: not user facing",
"module: inductor"
] | 8
|
CONTRIBUTOR
|
This PR adds a heuristic to potentially fail the block pointer match early. Expressions like the one below take a long time to match using sympy (e.g. > 100 seconds):
```python
# torch._inductor.config.triton.use_block_ptr = True
# torch._inductor.config.triton.prefer_nd_tiling = True
# Expression from pytest -k test_max_pool2d1_dynamic_shapes_cuda:
((xindex//ps1))*((s2 - 3//2))**2 + 2*((xindex//ps1))*((s2 - 3//2)) + ((xindex//ps1)) + ((s2 - 3//2))*(ModularIndexing(xindex, ps0, ps0)) + (ModularIndexing(xindex, 1, ps0)) + (ModularIndexing(xindex, ps0, ps0))
```
Additionally, the heuristic for the number of dimensions based on the indexing expression is refined to only add dimensions for FloorDiv(index, denom) and ModularIndexing(index, denom, modulo) instead of including FloorDiv/ModularIndexing expressions that don't involve the index.
Fixes #ISSUE_NUMBER
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @ColinPeppler @amjames @desertfire @chauhang @aakhundov @blaine-rister
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @ColinPeppler @amjames @desertfire @chauhang @aakhundov
| true
|
2,783,989,203
|
[BE][Easy] improve submodule discovery for `torch.ao` type annotations
|
XuehaiPan
|
closed
|
[
"open source",
"Merged",
"ciflow/trunk",
"topic: not user facing",
"release notes: AO frontend"
] | 3
|
COLLABORATOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #144680
| true
|
2,783,926,028
|
Improve softmax's perf in cuda
|
ywq880611
|
closed
|
[
"triaged",
"open source",
"Merged",
"ciflow/trunk",
"release notes: cuda"
] | 8
|
CONTRIBUTOR
|
Fixes #144645
| true
|
2,783,810,200
|
torch.distributed. pipelining source code page is not accessible.
|
kyoungbinkim
|
closed
|
[
"module: docs",
"triaged"
] | 2
|
NONE
|
### 📚 The doc issue
https://github.com/pytorch/pytorch/blob/main/docs/source/distributed.pipelining.rst?plain=1#L424C1-L491C21
https://pytorch.org/docs/stable/distributed.pipelining.html#torch.distributed.pipelining.pipeline
When accessing the source code page, a 404 error appears.
thanks
<img width="1458" alt="Image" src="https://github.com/user-attachments/assets/11cbfc02-878e-4967-9bc0-1eb90eab15b9" />
### Suggest a potential alternative/fix
_No response_
cc @H-Huang @awgu @kwen2501 @wanchaol @fegin @fduwjj @wz337 @wconstab @d4l3k @c-p-i-o @svekars @brycebortree @sekyondaMeta @AlannaBurke
| true
|
2,783,582,709
|
Tensorboard SummaryWriter.add_hparams doesn't log hparam metrics dictionary
|
eren-ture
|
open
|
[
"triaged",
"module: tensorboard"
] | 1
|
NONE
|
### 🐛 Describe the bug
When trying to add_params, I cannot get the tensorboard to display the metrics.
```python
from torch.utils.tensorboard import SummaryWriter
import numpy as np
with SummaryWriter(r'.\runs\test_01') as writer:
for i in range(4, 7):
for j in range(3, 6):
batch_size, lr = 2**i, 10**(-j)
writer.add_hparams(
{
'batch_size': batch_size,
'learning_rate': lr
},
{
'accuracy': float(np.random.random())
},
run_name=f'{batch_size}_e-{j}'
)
```
The output doesn't have the accuracy metrics.

### Versions
```
PyTorch version: 2.4.1+cu124
Is debug build: False
CUDA used to build PyTorch: 12.4
ROCM used to build PyTorch: N/A
OS: Microsoft Windows 11 Enterprise (10.0.22631 64-bit)
GCC version: Could not collect
Clang version: Could not collect
CMake version: Could not collect
Libc version: N/A
Python version: 3.12.3 | packaged by conda-forge | (main, Apr 15 2024, 18:20:11) [MSC v.1938 64 bit (AMD64)] (64-bit runtime)
Python platform: Windows-11-10.0.22631-SP0
Is CUDA available: True
CUDA runtime version: 12.4.99
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration: GPU 0: NVIDIA RTX 2000 Ada Generation Laptop GPU
Nvidia driver version: 556.12
cuDNN version: Could not collect
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Name: 13th Gen Intel(R) Core(TM) i7-13700H
Manufacturer: GenuineIntel
Family: 198
Architecture: 9
ProcessorType: 3
DeviceID: CPU0
CurrentClockSpeed: 2400
MaxClockSpeed: 2400
L2CacheSize: 11776
L2CacheSpeed: None
Revision: None
Versions of relevant libraries:
[pip3] numpy==2.1.2
[pip3] torch==2.4.1+cu124
[pip3] torchaudio==2.4.1+cu124
[pip3] torchmetrics==1.6.1
[pip3] torchvision==0.19.1+cu124
[conda] numpy 2.1.2 pypi_0 pypi
[conda] torch 2.4.1+cu124 pypi_0 pypi
[conda] torchaudio 2.4.1+cu124 pypi_0 pypi
[conda] torchmetrics 1.6.1 pypi_0 pypi
[conda] torchvision 0.19.1+cu124 pypi_0 pypi
```
| true
|
2,783,576,231
|
Allow KGEModel.test to collect hits@k for several values of k at once
|
ACHinrichs
|
closed
|
[] | 1
|
NONE
|
### 🚀 The feature, motivation and pitch
Currently, `KGEModel.test` takes an optional integer parameter `k` to calculate hits@k. I would like to collect hits for different values of `k`, e.g. hits@1, hits@10 and hits@100, without having to re-run the model.
In the solution I envision, `KGEModel.test` would take a list of values for the parameter `k` and, instead of the current hits_at_k list, return a dict with the corresponding hits@k. To maintain backwards compatibility, the current behaviour (`int` as parameter, list as return) could be kept, giving `k` the type `int | List[int]`.
I would be happy to implement the changes myself, if they are indeed desired.
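A minimal sketch of the proposed behaviour (a hypothetical helper for illustration, not the actual `KGEModel.test` implementation):
```python
from typing import Dict, List, Union

def hits_at_k(ranks: List[int], k: Union[int, List[int]]) -> Union[float, Dict[int, float]]:
    # Hypothetical helper: given the ranks of the true entities, compute
    # hits@k for one or several k values in a single pass over the ranks.
    ks = [k] if isinstance(k, int) else list(k)
    result = {kk: sum(r <= kk for r in ranks) / len(ranks) for kk in ks}
    # An int `k` keeps a single value; a list returns a dict keyed by k.
    return result[k] if isinstance(k, int) else result

ranks = [1, 3, 12, 150, 7]
print(hits_at_k(ranks, 10))            # single value
print(hits_at_k(ranks, [1, 10, 100]))  # {1: 0.2, 10: 0.6, 100: 0.8}
```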
### Alternatives
run `KGEModel.test` multiple times with different values for the parameter `k`
### Additional context
_No response_
| true
|
2,783,401,918
|
Apply clang-format for ATen/core/boxing cpp files
|
zeshengzong
|
closed
|
[
"triaged",
"open source",
"Stale",
"topic: not user facing"
] | 3
|
CONTRIBUTOR
|
Code changed by adding a path config to the .lintrunner.toml file and running
```bash
lintrunner -a --take CLANGFORMAT --all-files
```
| true
|
2,783,322,243
|
[inductor][cpu] fused attention Inductor tests fails with an error " name 'getitem' is not defined "
|
kareemshaik80
|
closed
|
[
"oncall: pt2",
"oncall: cpu inductor"
] | 7
|
CONTRIBUTOR
|
### 🐛 Describe the bug
I have regenerated all the patterns with the flag PYTORCH_GEN_PATTERNS=1 while the config flag "fallback_random" is set to True. By default this flag is False.
After setting this flag to True, I ran the existing fused attention tests, but the test failed with the following error.
**CMD to run the test:**
1. Run once with the flag to regenerate patterns:
PYTORCH_GEN_PATTERNS=1 python -m pytest test_fused_attention.py -k test_sdpa_rewriter_1_cpu
2. Run again:
python -m pytest test_fused_attention.py -k test_sdpa_rewriter_1_cpu
**Error:**
NameError: name 'getitem' is not defined
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "/usr/lib/python3.10/unittest/case.py", line 59, in testPartExecutor
yield
File "/usr/lib/python3.10/unittest/case.py", line 591, in run
self._callTestMethod(testMethod)
File "/usr/lib/python3.10/unittest/case.py", line 549, in _callTestMethod
method()
### Versions
Collecting environment information...
PyTorch version: 2.5.0a0+gite84e33f
Is debug build: False
CUDA used to build PyTorch: None
ROCM used to build PyTorch: N/A
OS: Ubuntu 22.04.3 LTS (x86_64)
GCC version: (Ubuntu 11.4.0-1ubuntu1~22.04) 11.4.0
Clang version: 14.0.0-1ubuntu1.1
CMake version: version 3.22.1
Libc version: glibc-2.35
Python version: 3.10.12 (main, Nov 6 2024, 20:22:13) [GCC 11.4.0] (64-bit runtime)
Python platform: Linux-5.15.0-127-generic-x86_64-with-glibc2.35
Is CUDA available: False
CUDA runtime version: No CUDA
CUDA_MODULE_LOADING set to: N/A
GPU models and configuration: No CUDA
Nvidia driver version: No CUDA
cuDNN version: No CUDA
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 43 bits physical, 48 bits virtual
Byte Order: Little Endian
CPU(s): 12
On-line CPU(s) list: 0-11
Vendor ID: GenuineIntel
Model name: Intel(R) Xeon(R) Gold 5220R CPU @ 2.20GHz
CPU family: 6
Model: 85
Thread(s) per core: 1
Core(s) per socket: 6
Socket(s): 2
Stepping: 0
BogoMIPS: 4389.68
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ss ht syscall nx pdpe1gb rdtscp lm constant_tsc arch_perfmon nopl xtopology tsc_reliable nonstop_tsc cpuid tsc_known_freq pni pclmulqdq vmx ssse3 fma cx16 pcid sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand hypervisor lahf_lm abm 3dnowprefetch invpcid_single pti ssbd ibrs ibpb stibp tpr_shadow vnmi ept vpid ept_ad fsgsbase tsc_adjust bmi1 avx2 smep bmi2 invpcid avx512f avx512dq rdseed adx smap clflushopt clwb avx512cd avx512bw avx512vl xsaveopt xsavec xsaves arat pku ospke md_clear flush_l1d arch_capabilities
Virtualization: VT-x
Hypervisor vendor: VMware
Virtualization type: full
L1d cache: 384 KiB (12 instances)
L1i cache: 384 KiB (12 instances)
L2 cache: 12 MiB (12 instances)
L3 cache: 71.5 MiB (2 instances)
NUMA node(s): 1
NUMA node0 CPU(s): 0-11
Vulnerability Gather data sampling: Unknown: Dependent on hypervisor status
Vulnerability Itlb multihit: KVM: Mitigation: VMX disabled
Vulnerability L1tf: Mitigation; PTE Inversion; VMX flush not necessary, SMT disabled
Vulnerability Mds: Mitigation; Clear CPU buffers; SMT Host state unknown
Vulnerability Meltdown: Mitigation; PTI
Vulnerability Mmio stale data: Mitigation; Clear CPU buffers; SMT Host state unknown
Vulnerability Reg file data sampling: Not affected
Vulnerability Retbleed: Mitigation; IBRS
Vulnerability Spec rstack overflow: Not affected
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl and seccomp
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; IBRS; IBPB conditional; STIBP disabled; RSB filling; PBRSB-eIBRS Not affected; BHI SW loop, KVM SW loop
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Versions of relevant libraries:
[pip3] flake8==7.0.0
[pip3] intel_extension_for_pytorch==2.5.10+git1104b12
[pip3] numpy==1.26.4
[pip3] torch==2.5.0a0+gite84e33f
[pip3] torchaudio==2.1.0+6ea1133
[pip3] torchvision==0.16.0+fbb4cc5
[conda] Could not collect
cc @soulitzer @chauhang @penguinwu
| true
|
2,783,291,020
|
Thread-safe approach on temporarily changing the `set_default_type`
|
baluyotraf
|
open
|
[
"triaged",
"enhancement",
"module: python frontend"
] | 5
|
NONE
|
### 🚀 The feature, motivation and pitch
There are cases in our code in which we rely on torch to infer the dtype of the data. There are code sections in which we would like to use higher floating-point precision, and it would be nice to set the default types only in those code blocks.
### Alternatives
We are currently using a version that is not thread-safe. I don't think (?) there's a good way to do it given its global nature, but maybe I'm missing the right API for it.
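For reference, this is roughly what we use today (a minimal sketch, hypothetical helper name): a context manager around the global setter, which races if two threads enter it with different dtypes.
```python
import contextlib
import torch

@contextlib.contextmanager
def default_dtype(dtype: torch.dtype):
    # Sketch only: saves and restores the process-global default dtype.
    previous = torch.get_default_dtype()
    torch.set_default_dtype(dtype)
    try:
        yield
    finally:
        torch.set_default_dtype(previous)

with default_dtype(torch.float64):
    x = torch.zeros(3)                     # float64 inside the block
print(x.dtype, torch.zeros(3).dtype)       # torch.float64 torch.float32
```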
### Additional context
_No response_
cc @albanD
| true
|
2,783,246,761
|
[FlexAttention] Allow to pass mod_type info to `create_mask` via arg or keyword.
|
oraluben
|
closed
|
[
"triaged",
"open source",
"Stale",
"topic: not user facing"
] | 5
|
CONTRIBUTOR
|
`_get_mod_type` does not always work, e.g. `def some_score(score, *args)` in https://github.com/pytorch-labs/attention-gym/pull/103/commits/29f43391be8c952e86307c6cb1682022abf7de00. This PR allows users to specify the type (mask/score) via `create_mask(mod_fn, mod_type=_ModificationType.SCORE, ...)` or `create_mask(score_mod=mod_fn, ...)`.
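For context, my understanding of why the inference fails (an illustration with plain `inspect`, not the actual `_get_mod_type` code): the mod type is presumably guessed from the callable's positional arity, and a `*args` signature hides that information, so the caller needs a way to state the type explicitly.
```python
import inspect

def score_mod(score, b, h, q_idx, kv_idx):
    return score

def some_score(score, *args):
    return score

def positional_arity(fn):
    params = inspect.signature(fn).parameters.values()
    if any(p.kind is inspect.Parameter.VAR_POSITIONAL for p in params):
        return None  # arity is unknown -- cannot tell score_mod from mask_mod
    return sum(p.kind is inspect.Parameter.POSITIONAL_OR_KEYWORD for p in params)

print(positional_arity(score_mod))   # 5
print(positional_arity(some_score))  # None
```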
@drisspg
| true
|
2,783,246,531
|
torch.stack for sequences
|
yueyinqiu
|
open
|
[
"module: typing",
"triaged",
"enhancement",
"module: python frontend"
] | 6
|
NONE
|
### 🚀 The feature, motivation and pitch
I'm trying to give my tensors a static dimension count for type checking, like:
```python
import torch
import typing
Tensor2d = typing.NewType("Tensor2d", torch.Tensor)
def matmul(x: Tensor2d, y: Tensor2d) -> Tensor2d:
return Tensor2d(x.matmul(y))
# So that we won't pass any other tensors with wrong dims in accident.
```
However, I found that it is impossible to use functions like `torch.stack` on a `list[Tensor2d]`:
```python
import torch
import typing
Tensor2d = typing.NewType("Tensor2d", torch.Tensor)
my_list: list[Tensor2d] = []
torch.stack(my_list)
# Pylance for example, says:
# Argument of type "list[Tensor2d]" cannot be assigned to parameter "tensors" of type "Tuple[Tensor, ...] | List[Tensor]" in function "stack"
# Type "list[Tensor2d]" is not assignable to type "Tuple[Tensor, ...] | List[Tensor]"
# "list[Tensor2d]" is not assignable to "Tuple[Tensor, ...]"
# "list[Tensor2d]" is not assignable to "List[Tensor]"
# Type parameter "_T@list" is invariant, but "Tensor2d" is not the same as "Tensor"
# Consider switching from "list" to "Sequence" which is covariant
```
So I wonder if it is possible to switch the signature from
```python
def stack(tensors: Union[Tuple[Tensor, ...], List[Tensor]], dim: _int = 0, *, out: Optional[Tensor] = None) -> Tensor:
```
to
```python
def stack(tensors: Sequence[Tensor], dim: _int = 0, *, out: Optional[Tensor] = None) -> Tensor:
```
?
And then since `Sequence` is covariant, we could pass a `list[Tensor2d]` into it.
I guess there are some optimization measures that can only be applied to `tuple` and `list`, but is it possible to check the type at runtime? If it's something other than a `list` or `tuple`, we could also automatically convert it.
The same applies to some other functions like `cat`. Thanks in advance.
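For what it's worth, a user-side workaround under the current stubs (just a sketch with a hypothetical wrapper name, not a proposed change to torch) is to accept any `Sequence[Tensor]` in a small wrapper and convert to a plain list at runtime:
```python
from typing import Sequence
import typing

import torch

Tensor2d = typing.NewType("Tensor2d", torch.Tensor)

def stack_2d(tensors: Sequence[Tensor2d], dim: int = 0) -> torch.Tensor:
    # Sequence is covariant, so list[Tensor2d] type-checks here.
    return torch.stack(list(tensors), dim=dim)

batch: list[Tensor2d] = [Tensor2d(torch.zeros(2, 3)) for _ in range(4)]
print(stack_2d(batch).shape)   # torch.Size([4, 2, 3])
```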
### Alternatives
_No response_
### Additional context
_No response_
cc @ezyang @malfet @xuzhao9 @gramster @albanD
| true
|
2,783,237,681
|
Update slow tests
|
pytorchupdatebot
|
closed
|
[
"open source",
"Merged",
"ciflow/trunk",
"topic: not user facing",
"ciflow/slow",
"ci-no-td"
] | 3
|
COLLABORATOR
|
This PR is auto-generated weekly by [this action](https://github.com/pytorch/pytorch/blob/main/.github/workflows/weekly.yml).
Update the list of slow tests.
| true
|
2,783,226,964
|
Fix Throughputbenchmark issue
|
shiyang-weng
|
closed
|
[
"triaged",
"open source",
"Merged",
"ciflow/trunk",
"topic: bug fixes",
"topic: not user facing",
"module: dynamo"
] | 5
|
CONTRIBUTOR
|
Fixes [144461](https://github.com/pytorch/pytorch/issues/144461)
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @chauhang @amjames
| true
|
2,783,187,401
|
[Update torch-xpu-ops] Update torch-xpu-ops to resolve XPU build error introduced by #144364
|
etaf
|
closed
|
[
"open source",
"topic: not user facing",
"ciflow/xpu"
] | 1
|
COLLABORATOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #144668
* #144667
As title.
| true
|
2,783,159,480
|
[XPU build] Fix XPU build error caused by wrong code change introduced by #142213
|
etaf
|
closed
|
[
"open source",
"ciflow/trunk",
"topic: not user facing",
"ciflow/inductor"
] | 6
|
COLLABORATOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #144668
* __->__ #144667
PR #142213 changed the function parameter name from `inputs` to `input_handles` but still used `inputs` in the function body, which caused an XPU build failure. Since XPU CI did not gate that PR, we need to fix it here to unblock the XPU build.
| true
|
2,783,135,743
|
[mps/inductor] Add support for truncdiv().
|
dcci
|
closed
|
[
"Merged",
"topic: not user facing",
"module: mps",
"ciflow/mps",
"module: inductor",
"ciflow/inductor"
] | 3
|
MEMBER
|
Two other inductor tests pass after this change.
cc @kulinseth @albanD @malfet @DenisVieriu97 @jhavukainen @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @ColinPeppler @amjames @desertfire @chauhang @aakhundov
| true
|
2,783,106,932
|
[MPSInductor] Fix maximum/minimum for int types
|
malfet
|
closed
|
[
"Merged",
"topic: not user facing",
"module: inductor",
"ciflow/inductor"
] | 3
|
CONTRIBUTOR
|
`metal::isnan` is only defined for floats, so provide a generic wrapper
that is false for integral types
TODO: Figure out why type propagation is not working (or should it?)
Fixes #ISSUE_NUMBER
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @ColinPeppler @amjames @desertfire @chauhang @aakhundov
| true
|
2,783,084,636
|
Generalize poison fork logic for each device backend
|
guangyey
|
closed
|
[
"open source",
"Merged",
"Reverted",
"ciflow/trunk",
"topic: improvements",
"topic: not user facing",
"ciflow/periodic",
"ciflow/mps",
"ciflow/rocm",
"ciflow/xpu",
"ci-no-td",
"module: accelerator"
] | 39
|
COLLABORATOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #144664
# Motivation
Generalize the poison_fork code to make it reusable across different devices.
cc @albanD @EikanWang
| true
|
2,783,078,906
|
Support loading and executing a ExportedProgram from torch.export in C++ environment
|
supercharleszhu
|
open
|
[
"oncall: pt2",
"oncall: export"
] | 11
|
CONTRIBUTOR
|
### 🚀 The feature, motivation and pitch
Hi all, we are currently working on an online ML platform in the company, which requires us to
1. similar to TorchScript, export a PyTorch model graph and variables into an IR which can be executed in a C++ environment
2. update the model parameters when executing inference.
I did some doc and code searching, and [torch.export](https://pytorch.org/docs/stable/export.html) seems to be the closest way to achieve this, but there are some gaps (not sure if I missed anything):
1. torch.export can only export the forward pass and cannot export forward + backward + optimizer step all into the same graph. The backward graph is executed eagerly after being loaded back in the Python environment (checked the latest PyTorch 2.5 doc [here](https://pytorch.org/docs/stable/export.html)).
2. In order to run the graph in C++, we can only compile the graph with AOT Inductor into a .so file; there is no C++ API to load the exported graph and call it programmatically.
3. There is no way to call this compute graph while passing parameter updates to it.
Do we have any plans to extend torch export to support such functionalities?
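For context, this is the closest flow I have found so far (a sketch using the public `torch.export` API; the save/load path below is Python-only, which is exactly gap 2 above):
```python
import torch

class Tiny(torch.nn.Module):
    def __init__(self):
        super().__init__()
        self.linear = torch.nn.Linear(4, 2)

    def forward(self, x):
        return self.linear(x).relu()

# Export captures only the forward graph (gap 1).
ep = torch.export.export(Tiny(), (torch.randn(3, 4),))
torch.export.save(ep, "tiny.pt2")
ep2 = torch.export.load("tiny.pt2")
print(ep2.module()(torch.randn(3, 4)).shape)   # runs in Python, not C++
```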
### Alternatives
_No response_
### Additional context
same issue posted here https://discuss.pytorch.org/t/support-loading-and-executing-a-exportedprogram-from-torch-export-with-forward-backward-optimizer-step-in-c-environment/213024
cc @chauhang @penguinwu @avikchaudhuri @gmagogsfm @zhxchen17 @tugsbayasgalan @angelayi @suo @ydwu4
| true
|
2,782,916,424
|
[MPSInductor] Add support for sizevars
|
malfet
|
closed
|
[
"Merged",
"topic: not user facing",
"ciflow/mps",
"module: inductor",
"ciflow/inductor"
] | 3
|
CONTRIBUTOR
|
Just pass them as kernel arguments
After this change, `pytest test/inductor/test_torchinductor.py -v -k _mps` reports 330 failed, 429 passed (vs. 335 failed, 424 passed before).
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @ColinPeppler @amjames @desertfire @chauhang @aakhundov
| true
|
2,782,872,702
|
Fix torch.logsumexp dim description
|
zeshengzong
|
closed
|
[
"triaged",
"open source",
"Stale",
"release notes: python_frontend"
] | 3
|
CONTRIBUTOR
|
Fixes #144339
Remove the description of `dim` as optional from the `torch.logsumexp` doc.
**Test Result**
**Before**

**After**

| true
|
2,782,807,117
|
Implement the `mode` property for transformed distributions
|
hukz18
|
open
|
[
"module: distributions",
"triaged"
] | 0
|
NONE
|
### 🚀 The feature, motivation and pitch
Currently, the `TransformedDistribution` class in `torch.distributions.transformed_distribution` doesn't have a `mode` property. Calling `mode` on such an instance falls back to the `Distribution` base class and raises a `NotImplementedError`.
I understand that the `mode` of a transformed distribution is not necessarily equal to the result of just applying the transforms to the base distribution's `mode` value, as seen in the discussion [here](https://math.stackexchange.com/questions/2526473/modes-under-transformation). Is there a feasible way to get the actual mode of a transformed distribution?
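For illustration, the naive approach of pushing the base distribution's mode through the transforms (a sketch, not a proposed implementation) is exact for affine transforms but generally wrong for nonlinear ones because of the Jacobian term, which I assume is why it is left unimplemented:
```python
import torch
from torch.distributions import Normal, TransformedDistribution
from torch.distributions.transforms import AffineTransform, TanhTransform

def naive_mode(dist: TransformedDistribution) -> torch.Tensor:
    # Sketch: push the base mode through each transform in order.
    x = dist.base_dist.mode
    for t in dist.transforms:
        x = t(x)
    return x

base = Normal(torch.tensor(0.5), torch.tensor(1.0))
affine = TransformedDistribution(base, [AffineTransform(loc=1.0, scale=2.0)])
print(naive_mode(affine))    # 2.0 -- exact for affine transforms

squashed = TransformedDistribution(base, [TanhTransform()])
print(naive_mode(squashed))  # tanh(0.5) -- generally NOT the true mode
```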
cc @fritzo @neerajprad @alicanb @nikitaved
| true
|
2,782,762,476
|
Incorrect formula in docstring for torch.nn.modules.normalization.RMSNorm
|
enedil
|
closed
|
[] | 3
|
NONE
|
### 📚 The doc issue
Issue is at this line:
https://github.com/pytorch/pytorch/blob/9ae35b8bb13f3c35803355ce26fb9ee9954f1bdf/torch/nn/modules/normalization.py#L328
Assume for simplicity, that len(x.shape) == 1.
Formula is
```
y = x / sqrt(RMS[x] + epsilon) * gamma
```
RMS, as defined in the linked arXiv paper, is sqrt(mean(x^2)).
So, according to the docstring, the computed value would be:
```
y = x / sqrt(sqrt(mean(x^2)) + epsilon) * gamma
```
However, RMSNorm actually computes something else, namely
```
y = x / sqrt(mean(x^2) + epsilon) * gamma
```
This is apparently what RMSNorm should do, so the issue is in the docs.
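A quick numeric check (my own sketch, with an arbitrary eps) confirming which formula `nn.RMSNorm` implements:
```python
import torch

torch.manual_seed(0)
x = 3 * torch.randn(8)
eps = 1e-6
m = torch.nn.RMSNorm(8, eps=eps)   # weight (gamma) initialised to ones

docstring_formula = x / torch.sqrt(torch.sqrt(x.pow(2).mean()) + eps)
actual_formula = x / torch.sqrt(x.pow(2).mean() + eps)

print(torch.allclose(m(x), actual_formula))      # True
print(torch.allclose(m(x), docstring_formula))   # False
```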
### Suggest a potential alternative/fix
Change the formula so it does not refer to RMS[x] if we want epsilon to be included, or introduce a term like RMS[x, eps]? It is not clear to me how to make this legible and compact.
| true
|
2,782,740,342
|
Update CONTRIBUTING.md
|
kaykenho
|
closed
|
[
"open source",
"topic: not user facing"
] | 3
|
NONE
|
Update documentation for the contributing process
- Clarified the steps
- Minor fixes to grammar and formatting for clarity.
Fixes #ISSUE_NUMBER
| true
|
2,782,732,952
|
Async distributed checkpointing works incorrectly with tensors on CPU
|
dimdi-y
|
closed
|
[
"oncall: distributed",
"triaged",
"oncall: distributed checkpointing"
] | 5
|
NONE
|
### 🐛 Describe the bug
If an update to model CPU parameters happens before an async distributed checkpoint (via `torch.distributed.checkpoint.async_save`) is finished, the new value is written instead of the original one.
Moving the model to GPU or waiting on the returned future helps with the issue, but doing an optimizer update before waiting on the future seems to be the intended use-case (for example, the [official recipe](https://pytorch.org/tutorials/recipes/distributed_async_checkpoint_recipe.html) does this).
I think the intended behaviour for `torch.distributed.checkpoint.async_save` should be to only return after the CPU state has been copied into a separate buffer.
Here is a short reproduction script:
```python
import os
import torch
import torch.distributed as dist
import torch.distributed.checkpoint as dcp
from torch.distributed.checkpoint.state_dict import get_model_state_dict
import torch.nn as nn
class Net(nn.Module):
def __init__(self):
super().__init__()
self.weight = nn.Parameter(torch.ones(1, 1))
def forward(self, x):
return self.layer(x)
os.environ["MASTER_ADDR"] = "localhost"
os.environ["MASTER_PORT"] = "12345"
os.environ["WORLD_SIZE"] = "1"
os.environ["RANK"] = "0"
dist.init_process_group()
model = Net()
state_dict = get_model_state_dict(model)
pg = dist.new_group(backend="gloo")
try:
steps = [10, 20, 30, 40, 50]
future = None
for step in steps:
# simulate a training step, e.g. optimizer updating values
with torch.no_grad():
model.weight.data.fill_(step)
if future is not None:
future.result()
future = None
future = dcp.async_save(
state_dict,
checkpoint_id=f"outputs/{step}",
process_group=pg,
)
future.result()
for step in steps:
dcp.load(
state_dict,
checkpoint_id=f"outputs/{step}",
process_group=pg,
)
assert state_dict["weight"][0, 0] == step, f"got {state_dict['weight'][0, 0]=} on {step=}"
finally:
dist.destroy_process_group(pg)
dist.destroy_process_group()
```
which fails with the following error:
```
[rank0]: Traceback (most recent call last):
[rank0]: File "/home/ubuntu/dimdi-y/oss/torchtitan/../reproduce_cpu_dcp_save.py", line 55, in <module>
[rank0]: assert state_dict["weight"][0, 0] == step, f"got {state_dict['weight'][0, 0]=} on {step=}"
[rank0]: AssertionError: got state_dict['weight'][0, 0]=tensor(20.) on step=10
```
The script does several iterations of updating the model weights to be equal to the step id and then saving the model.
Each of the checkpoints (except for the last one) contains the state of the model that should only have been written in the subsequent checkpoint.
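The workaround we use for now (a sketch of our own code with a hypothetical helper name, not something from the docs) is to clone CPU tensors into a staging copy before handing the dict to `async_save`, so later in-place optimizer updates cannot leak into the pending checkpoint:
```python
import torch

def stage_for_async_save(state_dict):
    # Clone CPU tensors so the checkpoint writer sees a stable snapshot.
    staged = {}
    for key, value in state_dict.items():
        if isinstance(value, torch.Tensor) and value.device.type == "cpu":
            staged[key] = value.detach().clone()
        else:
            staged[key] = value
    return staged

# future = dcp.async_save(stage_for_async_save(state_dict), checkpoint_id=..., process_group=pg)
```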
### Versions
```
Collecting environment information...
PyTorch version: 2.7.0.dev20250110+cu126
Is debug build: False
CUDA used to build PyTorch: 12.6
ROCM used to build PyTorch: N/A
OS: Ubuntu 22.04.2 LTS (x86_64)
GCC version: (Ubuntu 11.4.0-1ubuntu1~22.04) 11.4.0
Clang version: Could not collect
CMake version: version 3.30.3
Libc version: glibc-2.35
Python version: 3.10.12 (main, Nov 6 2024, 20:22:13) [GCC 11.4.0] (64-bit runtime)
Python platform: Linux-5.15.0-1031-aws-x86_64-with-glibc2.35
Is CUDA available: True
CUDA runtime version: 12.6.85
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration:
GPU 0: NVIDIA H200
GPU 1: NVIDIA H200
GPU 2: NVIDIA H200
GPU 3: NVIDIA H200
GPU 4: NVIDIA H200
GPU 5: NVIDIA H200
GPU 6: NVIDIA H200
GPU 7: NVIDIA H200
Nvidia driver version: 550.90.07
cuDNN version: Could not collect
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 48 bits physical, 48 bits virtual
Byte Order: Little Endian
CPU(s): 192
On-line CPU(s) list: 0-191
Vendor ID: AuthenticAMD
Model name: AMD EPYC 7R13 Processor
CPU family: 25
Model: 1
Thread(s) per core: 2
Core(s) per socket: 48
Socket(s): 2
Stepping: 1
BogoMIPS: 5299.99
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ht syscall nx mmxext fxsr_opt pdpe1gb rdtscp lm constant_tsc rep_good nopl nonstop_tsc cpuid extd_apicid aperfmperf tsc_known_freq pni pclmulqdq monitor ssse3 fma cx16 pcid sse4_1 sse4_2 x2apic movbe popcnt aes xsave avx f16c rdrand hypervisor lahf_lm cmp_legacy cr8_legacy abm sse4a misalignsse 3dnowprefetch topoext perfctr_core invpcid_single ssbd ibrs ibpb stibp vmmcall fsgsbase bmi1 avx2 smep bmi2 invpcid rdseed adx smap clflushopt clwb sha_ni xsaveopt xsavec xgetbv1 clzero xsaveerptr rdpru wbnoinvd arat npt nrip_save vaes vpclmulqdq rdpid
Hypervisor vendor: KVM
Virtualization type: full
L1d cache: 3 MiB (96 instances)
L1i cache: 3 MiB (96 instances)
L2 cache: 48 MiB (96 instances)
L3 cache: 384 MiB (12 instances)
NUMA node(s): 4
NUMA node0 CPU(s): 0-23,96-119
NUMA node1 CPU(s): 24-47,120-143
NUMA node2 CPU(s): 48-71,144-167
NUMA node3 CPU(s): 72-95,168-191
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Not affected
Vulnerability Retbleed: Not affected
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl and seccomp
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Retpolines, IBPB conditional, IBRS_FW, STIBP always-on, RSB filling, PBRSB-eIBRS Not affected
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Versions of relevant libraries:
[pip3] numpy==2.2.0
[pip3] nvidia-cublas-cu12==12.6.4.1
[pip3] nvidia-cuda-cupti-cu12==12.6.80
[pip3] nvidia-cuda-nvrtc-cu12==12.6.77
[pip3] nvidia-cuda-runtime-cu12==12.6.77
[pip3] nvidia-cudnn-cu12==9.5.1.17
[pip3] nvidia-cufft-cu12==11.3.0.4
[pip3] nvidia-curand-cu12==10.3.7.77
[pip3] nvidia-cusolver-cu12==11.7.1.2
[pip3] nvidia-cusparse-cu12==12.5.4.2
[pip3] nvidia-cusparselt-cu12==0.6.3
[pip3] nvidia-nccl-cu12==2.21.5
[pip3] nvidia-nvjitlink-cu12==12.6.85
[pip3] nvidia-nvtx-cu12==12.6.77
[pip3] pytorch-triton==3.2.0+git0d4682f0
[pip3] torch==2.7.0.dev20250110+cu126
[pip3] torchaudio==2.6.0.dev20250110+cu126
[pip3] torchdata==0.10.1
[pip3] torchtitan==0.0.2
[pip3] torchvision==0.22.0.dev20250110+cu126
[conda] Could not collect
```
cc @H-Huang @awgu @kwen2501 @wanchaol @fegin @fduwjj @wz337 @wconstab @d4l3k @c-p-i-o @LucasLLC @pradeepfn
| true
|
2,782,728,512
|
remove allow-untyped-defs from torch/ao/nn/quantized/reference/modules/linear.py
|
bobrenjc93
|
closed
|
[
"Merged",
"ciflow/trunk",
"topic: not user facing",
"release notes: AO frontend"
] | 3
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #144656
| true
|
2,782,728,489
|
remove allow-untyped-defs from torch/_C/_dynamo/eval_frame.pyi
|
bobrenjc93
|
closed
|
[
"Merged",
"ciflow/trunk",
"topic: not user facing"
] | 4
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #144655
| true
|
2,782,728,470
|
remove allow-untyped-defs from torch/nn/parameter.pyi
|
bobrenjc93
|
closed
|
[
"Merged",
"ciflow/trunk",
"topic: not user facing"
] | 3
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #144656
* #144655
* __->__ #144654
| true
|
2,782,728,455
|
remove allow-untyped-defs from torch/distributed/checkpoint/api.py
|
bobrenjc93
|
closed
|
[
"oncall: distributed",
"Merged",
"ciflow/trunk",
"topic: not user facing"
] | 3
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #144656
* #144655
* #144654
* __->__ #144653
cc @H-Huang @awgu @kwen2501 @wanchaol @fegin @fduwjj @wz337 @wconstab @d4l3k @c-p-i-o @LucasLLC @MeetVadakkanchery @mhorowitz @pradeepfn @ekr0
| true
|
2,782,728,434
|
remove allow-untyped-defs from torch/ao/nn/intrinsic/__init__.py
|
bobrenjc93
|
closed
|
[
"Merged",
"ciflow/trunk",
"topic: not user facing",
"release notes: AO frontend"
] | 3
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #144652
| true
|
2,782,698,709
|
[MPS] lu factor ex implementation
|
Isalia20
|
closed
|
[
"triaged",
"open source",
"Merged",
"topic: improvements",
"release notes: mps",
"ciflow/mps"
] | 7
|
COLLABORATOR
|
Implements `torch.linalg.lu_factor_ex`
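A usage sketch (the `device` selection below is just an assumption about how the new MPS path would typically be exercised):
```python
import torch

device = "mps" if torch.backends.mps.is_available() else "cpu"
A = torch.randn(4, 3, 3, device=device)
LU, pivots, info = torch.linalg.lu_factor_ex(A)
print(LU.shape, pivots.shape, info)   # info is all zeros when factorization succeeded
```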
| true
|
2,782,604,784
|
[BE]: Update literal typing for torch/fx/graph nodelist
|
Skylion007
|
closed
|
[
"open source",
"better-engineering",
"Merged",
"ciflow/trunk",
"release notes: fx",
"fx"
] | 3
|
COLLABORATOR
|
Mentioned in discussion for #144631
cc @ezyang @SherlockNoMad @EikanWang @jgong5 @wenzhe-nrv
| true
|
2,782,592,023
|
[MPSInductor] Better error when kernel fails to compile
|
malfet
|
closed
|
[
"Merged",
"ciflow/trunk",
"topic: improvements",
"release notes: mps",
"module: inductor",
"ciflow/inductor"
] | 4
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #144649
* #144648
* #144647
Now the error message looks as follows:
```
% python ../test/inductor/test_torchinductor.py -v -k test_cat_unbacked_2d_mps
test_cat_unbacked_2d_mps (__main__.GPUTests) ... inline_call []
stats [('calls_captured', 6)]
inductor [('extern_calls', 2), ('fxgraph_cache_miss', 1)]
aot_autograd [('total', 1), ('autograd_cache_bypass', 1), ('not_ok', 1)]
ERROR
======================================================================
ERROR: test_cat_unbacked_2d_mps (__main__.GPUTests)
----------------------------------------------------------------------
Traceback (most recent call last):
File "/Users/malfet/git/pytorch/pytorch/torch/testing/_internal/common_utils.py", line 3126, in wrapper
method(*args, **kwargs)
File "/Users/malfet/git/pytorch/pytorch/build/../test/inductor/test_torchinductor.py", line 12254, in new_test
return value(self)
File "/Users/malfet/miniconda3/lib/python3.10/contextlib.py", line 79, in inner
return func(*args, **kwds)
File "/Users/malfet/git/pytorch/pytorch/build/../test/inductor/test_torchinductor.py", line 5885, in test_cat_unbacked_2d
self.common(
File "/Users/malfet/miniconda3/lib/python3.10/contextlib.py", line 79, in inner
return func(*args, **kwds)
File "/Users/malfet/git/pytorch/pytorch/build/../test/inductor/test_torchinductor.py", line 620, in check_model_gpu
check_model(
File "/Users/malfet/git/pytorch/pytorch/build/../test/inductor/test_torchinductor.py", line 461, in check_model
actual = run(*example_inputs, **kwargs)
File "/Users/malfet/git/pytorch/pytorch/torch/_dynamo/eval_frame.py", line 580, in _fn
raise e.remove_dynamo_frames() from None # see TORCHDYNAMO_VERBOSE=1
File "/Users/malfet/git/pytorch/pytorch/torch/_inductor/compile_fx.py", line 704, in _compile_fx_inner
raise InductorError(e, currentframe()).with_traceback(
File "/Users/malfet/git/pytorch/pytorch/torch/_inductor/compile_fx.py", line 689, in _compile_fx_inner
mb_compiled_graph = fx_codegen_and_compile(
File "/Users/malfet/git/pytorch/pytorch/torch/_inductor/compile_fx.py", line 1149, in fx_codegen_and_compile
return scheme.codegen_and_compile(gm, example_inputs, inputs_to_check, graph_kwargs)
File "/Users/malfet/git/pytorch/pytorch/torch/_inductor/compile_fx.py", line 1064, in codegen_and_compile
compiled_fn = graph.compile_to_module().call
File "/Users/malfet/git/pytorch/pytorch/torch/_inductor/graph.py", line 1977, in compile_to_module
return self._compile_to_module()
File "/Users/malfet/git/pytorch/pytorch/torch/_inductor/graph.py", line 2018, in _compile_to_module
mod = PyCodeCache.load_by_key_path(
File "/Users/malfet/git/pytorch/pytorch/torch/_inductor/codecache.py", line 2768, in load_by_key_path
mod = _reload_python_module(key, path)
File "/Users/malfet/git/pytorch/pytorch/torch/_inductor/runtime/compile_tasks.py", line 51, in _reload_python_module
exec(code, mod.__dict__, mod.__dict__)
File "/var/folders/sc/2thx6_x95h7_h9qs8s48yh140000gn/T/tmpmyfz2ju8/lt/cltm34ognlgcc6oxoe6bexvtbwcdtdfgnkjj5miz7vhkemitacp7.py", line 40, in <module>
File "/var/folders/sc/2thx6_x95h7_h9qs8s48yh140000gn/T/tmpmyfz2ju8/lt/cltm34ognlgcc6oxoe6bexvtbwcdtdfgnkjj5miz7vhkemitacp7.py", line 32, in _compile_mps_shader
torch._inductor.exc.InductorError: SyntaxError: failed to compile
kernel void generated_kernel(
device float* out_ptr0,
constant float* in_ptr0,
uint xindex [[thread_position_in_grid]]
) {
long x1 = (xindex) / (3);
auto tmp0 = x1;
auto tmp1 = static_cast<long>(tmp0);
auto tmp2 = 0;
auto tmp3 = tmp1 >= tmp2;
auto tmp4 = 2;
auto tmp5 = tmp1 < tmp4;
long x0 = (xindex) % (3);
auto tmp6 = in_ptr0[x0 + 3*(x1)];
auto tmp7 = tmp5 ? tmp6 : 0.0;
auto tmp8 = tmp1 >= tmp4;
auto tmp9 = 2 + ks0;
auto tmp10 = static_cast<long>(tmp9);
auto tmp11 = tmp1 < tmp10;
auto tmp12 = 1.0;
auto tmp13 = tmp8 ? tmp12 : 0.0;
auto tmp14 = tmp5 ? tmp7 : tmp13;
long x2 = xindex;
out_ptr0[x2] = static_cast<float>(tmp14);
}
with program_source:18:25: error: use of undeclared identifier 'ks0'
auto tmp9 = 2 + ks0;
^
Set TORCH_LOGS="+dynamo" and TORCHDYNAMO_VERBOSE=1 for more information
You can suppress this exception and fall back to eager by setting:
import torch._dynamo
torch._dynamo.config.suppress_errors = True
To execute this test, run the following from the base repo dir:
python test/inductor/test_torchinductor.py GPUTests.test_cat_unbacked_2d_mps
This message can be suppressed by setting PYTORCH_PRINT_REPRO_ON_FAILURE=0
----------------------------------------------------------------------
Ran 1 test in 0.472s
FAILED (errors=1)
```
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @ColinPeppler @amjames @desertfire @chauhang @aakhundov
| true
|
2,782,585,380
|
[MPS][BE] Surface syntax errors in shader compilation
|
malfet
|
closed
|
[
"better-engineering",
"Merged",
"release notes: mps",
"ciflow/mps"
] | 3
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #144649
* __->__ #144648
* #144647
Before this change
```python
>>> import torch
>>> torch.mps._compile_shader('What')
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/Users/malfet/miniconda3/envs/py311/lib/python3.11/site-packages/torch/mps/__init__.py", line 157, in _compile_shader
return torch._C._mps_compileShader(source)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
RuntimeError: Failed to create metal library, error: Error Domain=MTLLibraryErrorDomain Code=3 "program_source:1:1: error: unknown type name 'What'
What
^
program_source:1:5: error: expected unqualified-id
What
^
" UserInfo={NSLocalizedDescription=program_source:1:1: error: unknown type name 'What'
What
^
program_source:1:5: error: expected unqualified-id
What
^
}
```
After this change
```python
>>> import torch
>>> torch.mps._compile_shader('What')
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/Users/malfet/git/pytorch/pytorch/torch/mps/__init__.py", line 157, in _compile_shader
return torch._C._mps_compileShader(source)
SyntaxError: program_source:1:1: error: unknown type name 'What'
What
^
program_source:1:5: error: expected unqualified-id
What
^
```
| true
|
2,782,585,353
|
[BE] Introduce `c10::SyntaxError`
|
malfet
|
closed
|
[
"better-engineering",
"Merged",
"release notes: python_frontend",
"topic: improvements"
] | 3
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #144649
* #144648
* __->__ #144647
Which will be translated into Python's SyntaxError
| true
|
2,782,545,207
|
[Inductor] Unify Low Precision FP Legalization for to_dtype_bitcast & constant
|
DDEle
|
closed
|
[
"triaged",
"open source",
"Merged",
"ciflow/trunk",
"topic: not user facing",
"module: inductor",
"ciflow/inductor"
] | 10
|
CONTRIBUTOR
|
The upcast in `to_dtype_bitcast()` breaks following operations that only work with the target type (I use `bitwise_and` in the updated UT).

This PR fixes the problem. Let's check the CI results to make sure it doesn't introduce accuracy problems.
- Unified the type promotion of low-precision FP operations in the legalize function, grouping ops into sources (whose results may be promoted) and sinks (whose inputs may be cast back). (The terms _sink_ and _source_ are from [graph theory](https://en.wikipedia.org/wiki/Directed_graph#Indegree_and_outdegree).)
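For illustration, the kind of pattern this targets, as I understand it (an eager sketch, not the actual unit test): a bitcast from fp16 to int16 feeding an integer-only sink op; if legalization upcasts the bitcast result to fp32, the `bitwise_and` below is no longer valid.
```python
import torch

def fn(x: torch.Tensor) -> torch.Tensor:
    bits = x.view(torch.int16)   # to_dtype_bitcast
    return bits & 0x7FFF         # integer-only op on the bitcast result

x = torch.randn(16, dtype=torch.float16)
# Before the fix the compiled result could diverge from eager; with the fix
# both paths should agree.
print(torch.equal(fn(x), torch.compile(fn)(x)))
```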
## Test
```bash
pytest -vs test/inductor/test_torchinductor.py::CpuTests::test_float16_to_int16_cpu
pytest -vs test/inductor/test_torchinductor.py::CpuTests::test_bfloat16_to_int16_cpu
pytest -vs test/inductor/test_torchinductor.py::CpuTests::test_float32_to_int32_cpu
```
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @ColinPeppler @amjames @desertfire @chauhang @aakhundov @BoyuanFeng
| true
|
2,782,541,376
|
[RFC] Improve performance for softmax op for cuda in some specific size
|
ywq880611
|
closed
|
[
"module: performance",
"module: cuda",
"triaged"
] | 4
|
CONTRIBUTOR
|
### 🚀 The feature, motivation and pitch
In the Triton [tutorial](https://triton-lang.org/main/getting-started/tutorials/02-fused-softmax.html) for softmax, we can compare the performance of the PyTorch op and Triton. Here is the result on my local machine (RTX 3080):
### Comparison between Triton and Torch

We can see there is a dramatic perf drop after **N=1024** for torch.
### Mini repro
Here is my mini repro to test the perf around **N=1024**
```python
import torch
DEVICE=torch.device('cuda')
# Time cost for near 1024
for cnt in range(1020, 1030):
x = torch.randn(4096, cnt, device=DEVICE, dtype=torch.float32)
#x = torch.randn(M, N, device=DEVICE, dtype=torch.float32)
#warm up
need_warmup = True
round = 5
if need_warmup:
for _ in range(round):
output = torch.softmax(x, dim=-1)
torch.cuda.synchronize()
start_time = torch.cuda.Event(enable_timing=True)
end_time = torch.cuda.Event(enable_timing=True)
# Start time
start_time.record()
# Apply softmax
for _ in range(round):
output = torch.softmax(x, dim=-1)
# End time
end_time.record()
torch.cuda.synchronize()
# Calculate elapsed time
elapsed_time_ms = start_time.elapsed_time(end_time)
# print(f"CUDA Time: {elapsed_time_ms:.6f} ms")
gbps = lambda ms: round * 2 * x.numel() * x.element_size() * 1e-9 / (ms * 1e-3)
print(f"n as {cnt} of softmax: {gbps(elapsed_time_ms):.6f} gb/s")
```
Its output is:
```
n as 1020 of softmax: 645.059274 gb/s
n as 1021 of softmax: 653.439969 gb/s
n as 1022 of softmax: 644.096473 gb/s
n as 1023 of softmax: 649.523815 gb/s
n as 1024 of softmax: 656.015990 gb/s
n as 1025 of softmax: 209.183680 gb/s
n as 1026 of softmax: 208.490244 gb/s
n as 1027 of softmax: 201.126073 gb/s
n as 1028 of softmax: 278.307944 gb/s
n as 1029 of softmax: 205.510996 gb/s
```
We can see there is a ***>3x*** perf drop after **n=1024** (656 vs 209 GB/s).
### Investigation
#### Current state
Let's look at the below code snippet:
https://github.com/pytorch/pytorch/blob/1664033e13cc8831e2bb66e5c975ffb4dfc24eda/aten/src/ATen/native/cuda/SoftMax.cu#L846-L880
There are two kinds of softmax op in CUDA:
1. `dispatch_softmax_forward`
2. `cunn_SoftMaxForwardSmem` or `cunn_SoftMaxForward`
The implementation of `cunn_SoftMaxForwardSmem` or `cunn_SoftMaxForward` will be invoked if **N>1024**, but it's not efficient in the rough range 1025~2000.
#### Root cause
The reason `cunn_SoftMaxForwardSmem` and `cunn_SoftMaxForward` are not efficient is **too much global memory and shared memory access**: they don't cache the data each thread accesses in registers, so they may have to load data from memory many times.
### Alternatives
We could try to use registers to cache the data each thread needs.
Pros:
Improved performance by reducing memory access.
Cons:
Increased register pressure, but we could do this only for `N` roughly in the range 1025~2000, which may add only about `1~2` registers per thread (I guess that's acceptable).
I now have a draft implementation locally; it shows a **~50%** performance gain compared with current torch. For the above case, its output is:
```
n as 1020 of softmax: 613.605899 gb/s
n as 1021 of softmax: 628.307691 gb/s
n as 1022 of softmax: 627.565386 gb/s
n as 1023 of softmax: 658.589191 gb/s
n as 1024 of softmax: 632.968698 gb/s
n as 1025 of softmax: 297.640651 gb/s
n as 1026 of softmax: 297.408139 gb/s
n as 1027 of softmax: 298.221413 gb/s
n as 1028 of softmax: 279.757637 gb/s
n as 1029 of softmax: 298.785226 gb/s
```
I'm still working on making it better.
WDYT? Any insights or comments are appreciated!
### Additional context
A [doc](https://docs.google.com/document/d/1K030-wgNlzyYePBAPvwZ5W0Tm7jebFAjSat6QvmWoz4/edit?usp=sharing) contains some Nsight profiler screenshots.
cc @msaroufim @ptrblck @eqy
| true
|
2,782,524,234
|
remove Windows XPU build workaround.
|
xuhancn
|
closed
|
[
"module: windows",
"triaged",
"open source",
"Merged",
"ciflow/trunk",
"topic: not user facing",
"ciflow/binaries_wheel",
"intel",
"ciflow/xpu",
"module: xpu"
] | 34
|
COLLABORATOR
|
From the RFC: https://github.com/pytorch/pytorch/issues/141946
Fixes https://github.com/pytorch/pytorch/issues/134989
After we land these fixing PRs:
1. https://github.com/pytorch/pytorch/pull/142245
2. https://github.com/pytorch/pytorch/pull/141943
We can remove the Windows XPU workaround.
cc @peterjc123 @mszhanyi @skyline75489 @nbcsm @iremyux @Blackhex @jgong5 @mingfeima @XiaobingSuper @sanchitintel @ashokei @jingxu10 @gujinghui @EikanWang @fengyuan14 @guangyey
| true
|
2,782,469,720
|
nn.functional.interpolate doesn't work correctly with NCW tensors when output W exceeds 2^23
|
alexlyulkov
|
open
|
[
"module: nn",
"triaged",
"module: edge cases"
] | 0
|
NONE
|
### 🐛 Describe the bug
nn.functional.interpolate returns strange results for NCW tensors when output W exceeds 2^23.
**Reproduction:**
```
import torch
n = (1 << 22) + 4
x = []
for i in range(n // 2):
x.append(0.0)
x.append(1.0)
x = torch.tensor(x, device="cuda:0", dtype=torch.float)
x = x.view(1, 1, -1)
y = torch.nn.functional.interpolate(x, size = n * 2, mode="linear")
print(y[0, 0, :20])
print(y[0, 0, -20:])
```
**Output:**
```
tensor([0.0000, 0.2500, 0.7500, 0.7500, 0.2500, 0.2500, 0.7500, 0.7500, 0.2500,
0.2500, 0.7500, 0.7500, 0.2500, 0.2500, 0.7500, 0.7500, 0.2500, 0.2500,
0.7500, 0.7500], device='cuda:0')
tensor([0.2500, 0.2500, 0.7500, 0.7500, 0.2500, 0.2500, 0.7500, 0.7500, 0.2500,
0.2500, 0.7500, 0.7500, 0.5000, 0.5000, 0.5000, 0.5000, 0.5000, 0.5000,
0.5000, 1.0000], device='cuda:0')
```
Only the first 2^23 values are correct.
It reproduces with the float32 and float16 dtypes on both CPU and CUDA.
### Versions
PyTorch: 2.5.1 CUDA 12.4
Python: 3.12.3
OS: Ubuntu 24.04
cc @albanD @mruberry @jbschlosser @walterddr @mikaylagawarecki
| true
|
2,782,424,962
|
[Not4Land] test `optree` version compatibility
|
XuehaiPan
|
closed
|
[
"open source",
"ciflow/trunk",
"topic: not user facing",
"not4land",
"ciflow/inductor"
] | 1
|
COLLABORATOR
|
Fixes #ISSUE_NUMBER
| true
|
2,782,409,175
|
FSDP OOM error
|
blurmemo
|
closed
|
[] | 1
|
NONE
|
I use two 40GB A100 GPUs and one 80GB A100 GPU to fine-tune my model with LoRA and FSDP, with ShardingStrategy FULL_SHARD. When I start my job with the command `CUDA_VISIBLE_DEVICES=5,3,4 torchrun --standalone --nnodes=1 --nproc-per-node=3 finetuning.py`, I still get OOM errors on the two 40GB A100 GPUs. Watching the GPUs, I see that every GPU loads the full model weights when FullyShardedDataParallel initializes the model. I am confused about this and do not know how to fix it.
Bug logs
```
[rank2]: Traceback (most recent call last):
[rank2]: File "/data0/home/ening/NICA/cogmllm/src/cogmllm/tools/finetuning.py", line 438, in <module>
[rank2]: fire.Fire(main)
[rank2]: File "/data0/home/ening/software/miniconda3/envs/cogmllm/lib/python3.10/site-packages/fire/core.py", line 135, in Fire
[rank2]: component_trace = _Fire(component, args, parsed_flag_args, context, name)
[rank2]: File "/data0/home/ening/software/miniconda3/envs/cogmllm/lib/python3.10/site-packages/fire/core.py", line 468, in _Fire
[rank2]: component, remaining_args = _CallAndUpdateTrace(
[rank2]: File "/data0/home/ening/software/miniconda3/envs/cogmllm/lib/python3.10/site-packages/fire/core.py", line 684, in _CallAndUpdateTrace
[rank2]: component = fn(*varargs, **kwargs)
[rank2]: File "/data0/home/ening/NICA/cogmllm/src/cogmllm/tools/finetuning.py", line 281, in main
[rank2]: model = FSDP(
[rank2]: File "/data0/home/ening/software/miniconda3/envs/cogmllm/lib/python3.10/site-packages/torch/distributed/fsdp/fully_sharded_data_parallel.py", line 509, in __init__
[rank2]: _init_param_handle_from_module(
[rank2]: File "/data0/home/ening/software/miniconda3/envs/cogmllm/lib/python3.10/site-packages/torch/distributed/fsdp/_init_utils.py", line 636, in _init_param_handle_from_module
[rank2]: _init_param_handle_from_params(state, managed_params, fully_sharded_module)
[rank2]: File "/data0/home/ening/software/miniconda3/envs/cogmllm/lib/python3.10/site-packages/torch/distributed/fsdp/_init_utils.py", line 648, in _init_param_handle_from_params
[rank2]: handle = FlatParamHandle(
[rank2]: File "/data0/home/ening/software/miniconda3/envs/cogmllm/lib/python3.10/site-packages/torch/distributed/fsdp/_flat_param.py", line 584, in __init__
[rank2]: self._init_flat_param_and_metadata(
[rank2]: File "/data0/home/ening/software/miniconda3/envs/cogmllm/lib/python3.10/site-packages/torch/distributed/fsdp/_flat_param.py", line 739, in _init_flat_param_and_metadata
[rank2]: self.flat_param: FlatParameter = self.flatten_tensors_into_flat_param(
[rank2]: File "/data0/home/ening/software/miniconda3/envs/cogmllm/lib/python3.10/site-packages/torch/distributed/fsdp/_flat_param.py", line 852, in flatten_tensors_into_flat_param
[rank2]: flat_param_data = self.flatten_tensors(tensors, aligned_numel)
[rank2]: File "/data0/home/ening/software/miniconda3/envs/cogmllm/lib/python3.10/site-packages/torch/distributed/fsdp/_flat_param.py", line 844, in flatten_tensors
[rank2]: return torch.cat(flat_tensors, dim=0)
[rank2]: torch.OutOfMemoryError: CUDA out of memory. Tried to allocate 19.88 GiB. GPU 2 has a total capacity of 39.38 GiB of which 18.80 GiB is free. Including non-PyTorch memory, this process has 20.57 GiB memory in use. Of the allocated memory 19.89 GiB is allocated by PyTorch, and 208.63 MiB is reserved by PyTorch but unallocated. If reserved but unallocated memory is large try setting PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True to avoid fragmentation. See documentation for Memory Management (https://pytorch.org/docs/stable/notes/cuda.html#environment-variables)
```
| true
|
2,782,406,282
|
[FX] Refactor immutable collections implementation
|
XuehaiPan
|
closed
|
[
"open source",
"Merged",
"release notes: fx",
"fx",
"module: inductor",
"module: dynamo",
"ciflow/inductor"
] | 2
|
COLLABORATOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #147691
* __->__ #144640
* #147699
Get rid of dynamic class creation via `type(name, bases, ...)`. Convert it to classic static class definition for better readability and static analysis support.
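A generic illustration of the direction (not the actual FX code):
```python
# Before: class object built at runtime via type(); opaque to static analysis.
def _no_mutation(self, *args, **kwargs):
    raise NotImplementedError("immutable collection cannot be mutated")

ImmutableListDynamic = type(
    "immutable_list", (list,), {"append": _no_mutation, "__setitem__": _no_mutation}
)

# After: a plain class definition with the same behaviour, visible to type checkers.
class ImmutableListStatic(list):
    def append(self, *args, **kwargs):
        raise NotImplementedError("immutable collection cannot be mutated")

    def __setitem__(self, *args, **kwargs):
        raise NotImplementedError("immutable collection cannot be mutated")

for cls in (ImmutableListDynamic, ImmutableListStatic):
    try:
        cls([1, 2]).append(3)
    except NotImplementedError as e:
        print(cls.__name__, "->", e)
```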
cc @ezyang @SherlockNoMad @EikanWang @jgong5 @wenzhe-nrv @voznesenskym @penguinwu @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @ColinPeppler @amjames @desertfire @chauhang @aakhundov
| true
|
2,782,317,400
|
torch.vmap + autograd.Function + current_level bug
|
yanboliang
|
open
|
[
"module: autograd",
"triaged",
"module: vmap",
"module: functorch"
] | 0
|
CONTRIBUTOR
|
### 🐛 Describe the bug
If we call ```torch._C._functorch.current_level()``` inside an autograd function's ```setup_context``` method, and then apply ```torch.vmap``` on top of it, it errors out.
Repro:
* Checkout and apply https://github.com/pytorch/pytorch/pull/143811
* Replace ```key = id(Generated)``` with ```key = current_level()``` in ```setup_context```.
* Run the following example:
```
import torch
class LinearFunction(torch.autograd.Function):
generate_vmap_rule = True
# Note that forward, setup_context, and backward are @staticmethods
@staticmethod
def forward(input, weight, bias):
output = input.mm(weight.t())
if bias is not None:
output += bias.unsqueeze(0).expand_as(output)
return output
@staticmethod
# inputs is a Tuple of all of the inputs passed to forward.
# output is the output of the forward().
def setup_context(ctx, inputs, output):
input, weight, bias = inputs
ctx.save_for_backward(input, weight, bias)
# This function has only a single output, so it gets only one gradient
@staticmethod
def backward(ctx, grad_output):
input, weight, bias = ctx.saved_tensors
grad_input = grad_weight = grad_bias = None
if ctx.needs_input_grad[0]:
grad_input = grad_output.mm(weight)
if ctx.needs_input_grad[1]:
grad_weight = grad_output.t().mm(input)
if bias is not None and ctx.needs_input_grad[2]:
grad_bias = grad_output.sum(0)
return grad_input, grad_weight, grad_bias
def fn(input, weight, bias=None):
return torch.vmap(LinearFunction.apply)(input, weight, bias)
torch.manual_seed(124)
batch_input = torch.randn(4, 2, 2, dtype=torch.double, requires_grad=True)
batch_weight = torch.randn(4, 3, 2, dtype=torch.double, requires_grad=True)
batch_bias = torch.randn(4, 3, dtype=torch.double, requires_grad=True)
output = fn(batch_input, batch_weight, batch_bias)
print(output)
```
Then it errors out:
```
Traceback (most recent call last):
File "/data/users/ybliang/debug/debug7.py", line 44, in <module>
output = fn(batch_input, batch_weight, batch_bias)
File "/data/users/ybliang/debug/debug7.py", line 37, in fn
return torch.vmap(LinearFunction.apply)(input, weight, bias)
File "/home/ybliang/local/pytorch/torch/_functorch/apis.py", line 203, in wrapped
return vmap_impl(
File "/home/ybliang/local/pytorch/torch/_functorch/vmap.py", line 331, in vmap_impl
return _flat_vmap(
File "/home/ybliang/local/pytorch/torch/_functorch/vmap.py", line 481, in _flat_vmap
batched_outputs = func(*batched_inputs, **kwargs)
File "/home/ybliang/local/pytorch/torch/autograd/function.py", line 585, in apply
return custom_function_call(cls, *args, **kwargs)
File "/home/ybliang/local/pytorch/torch/_functorch/autograd_function.py", line 49, in __call__
return super().__call__(autograd_function, *args, **kwargs)
File "/home/ybliang/local/pytorch/torch/_ops.py", line 439, in __call__
return wrapper()
File "/home/ybliang/local/pytorch/torch/_dynamo/eval_frame.py", line 755, in _fn
return fn(*args, **kwargs)
File "/home/ybliang/local/pytorch/torch/_ops.py", line 435, in wrapper
return self.dispatch(
File "/home/ybliang/local/pytorch/torch/_ops.py", line 305, in dispatch
return dispatch_functorch(self, args, kwargs)
File "/home/ybliang/local/pytorch/torch/_functorch/pyfunctorch.py", line 294, in dispatch_functorch
return interpreter.process(op, args, kwargs)
File "/home/ybliang/local/pytorch/torch/_functorch/pyfunctorch.py", line 130, in process
return kernel(self, *args, **kwargs)
File "/home/ybliang/local/pytorch/torch/_functorch/autograd_function.py", line 300, in custom_function_call_vmap
return custom_function_call_vmap_generate_rule(
File "/home/ybliang/local/pytorch/torch/_functorch/autograd_function.py", line 384, in custom_function_call_vmap_generate_rule
outputs = custom_function_call(vmapped_function, *unwrapped_operands)
File "/home/ybliang/local/pytorch/torch/_functorch/autograd_function.py", line 50, in __call__
return autograd_function.apply(*args, **kwargs)
File "/home/ybliang/local/pytorch/torch/autograd/function.py", line 575, in apply
return super().apply(*args, **kwargs) # type: ignore[misc]
File "/home/ybliang/local/pytorch/torch/_functorch/autograd_function.py", line 410, in setup_context
key = current_level()
RuntimeError: maybe_layer.has_value() INTERNAL ASSERT FAILED at "/data/users/ybliang/pytorch/torch/csrc/functorch/init.cpp":370, please report a bug to PyTorch.
```
### Versions
main
cc @ezyang @albanD @gqchen @pearu @nikitaved @soulitzer @Varal7 @xmfan @zou3519 @Chillee @samdow @kshitij12345
| true
|
2,782,298,517
|
[MPSInductor] Implement bitcasts
|
malfet
|
closed
|
[
"Merged",
"topic: not user facing",
"ciflow/mps",
"module: inductor",
"ciflow/inductor"
] | 3
|
CONTRIBUTOR
|
That will be used to compile something like `torch.rand(32, device='mps').view(dtype=torch.int32)`
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @ColinPeppler @amjames @desertfire @chauhang @aakhundov
| true
|
2,782,277,604
|
CheckpointError with `torch.distributed.algorithms._checkpoint.checkpoint_wrapper` and `torch.compile`
|
eliphatfs
|
open
|
[
"module: activation checkpointing",
"triaged",
"oncall: pt2",
"module: inductor"
] | 2
|
CONTRIBUTOR
|
### 🐛 Describe the bug
```python
import functools
import torch
import torch.nn as nn
import torch.nn.functional as F
from torch.distributed.algorithms._checkpoint.checkpoint_wrapper import apply_activation_checkpointing, checkpoint_wrapper, CheckpointImpl
@torch.compile(mode='reduce-overhead')
class SelfAttention(nn.Module):
def __init__(self, num_heads: int, head_dim: int, norm_eps: float, causal: bool):
super().__init__()
self.num_heads = num_heads
self.head_dim = head_dim
self.causal = causal
total_dim = num_heads * head_dim
self.to_qkv = nn.Linear(total_dim, total_dim * 3, bias=False)
self.to_out = nn.Linear(total_dim, total_dim, bias=False)
self.q_norm = nn.RMSNorm(head_dim, eps=norm_eps)
self.k_norm = nn.RMSNorm(head_dim, eps=norm_eps)
def forward(self, x_btc: torch.Tensor):
states = x_btc
batch_size, sequence_length, _ = states.shape
proj: torch.Tensor = self.to_qkv(states)
proj = proj.view(batch_size, sequence_length, self.num_heads, 3, self.head_dim).transpose(1, 2)
query, key, value = proj.unbind(-2)
query: torch.Tensor = self.q_norm(query)
key: torch.Tensor = self.k_norm(key)
hidden_states = F.scaled_dot_product_attention(
query, key, value, is_causal=self.causal
)
hidden_states = hidden_states.transpose(1, 2).reshape(batch_size, sequence_length, self.num_heads * self.head_dim)
hidden_states = hidden_states.to(query.dtype)
return self.to_out(hidden_states)
class Block(nn.Module):
def __init__(self):
super().__init__()
self.attn = SelfAttention(1, 64, 1e-5, False)
def forward(self, x):
return x + self.attn(x)
class Transformer(nn.Module):
def __init__(self):
super().__init__()
self.blocks = nn.ModuleList([Block() for _ in range(4)])
def forward(self, x):
for block in self.blocks:
x = block(x)
return x
if __name__ == '__main__':
mod = Transformer().cuda()
non_reentrant_wrapper = functools.partial(
checkpoint_wrapper,
checkpoint_impl=CheckpointImpl.NO_REENTRANT,
)
apply_activation_checkpointing(
mod, checkpoint_wrapper_fn=non_reentrant_wrapper,
check_fn=lambda mod: isinstance(mod, Block)
)
mod(torch.randn(3, 77, 64).cuda()).sum().backward()
```
Output:
```
/opt/conda/lib/python3.11/site-packages/torch/_inductor/compile_fx.py:167: UserWarning: TensorFloat32 tensor cores for float32 matrix multiplication available but not enabled. Consider setting `torch.set_float32_matmul_precision('high')` for better performance.
warnings.warn(
Traceback (most recent call last):
File "/root/bug/repro.py", line 74, in <module>
mod(torch.randn(3, 77, 64).cuda()).sum().backward()
File "/opt/conda/lib/python3.11/site-packages/torch/_tensor.py", line 581, in backward
torch.autograd.backward(
File "/opt/conda/lib/python3.11/site-packages/torch/autograd/__init__.py", line 347, in backward
_engine_run_backward(
File "/opt/conda/lib/python3.11/site-packages/torch/autograd/graph.py", line 825, in _engine_run_backward
return Variable._execution_engine.run_backward( # Calls into the C++ engine to run the backward pass
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/conda/lib/python3.11/site-packages/torch/autograd/function.py", line 307, in apply
return user_fn(self, *args)
^^^^^^^^^^^^^^^^^^^^
File "/opt/conda/lib/python3.11/site-packages/torch/_functorch/_aot_autograd/runtime_wrappers.py", line 1740, in backward
ctx_saved_tensors = ctx.saved_tensors
^^^^^^^^^^^^^^^^^
File "/opt/conda/lib/python3.11/site-packages/torch/utils/checkpoint.py", line 1129, in unpack_hook
frame.check_recomputed_tensors_match(gid)
File "/opt/conda/lib/python3.11/site-packages/torch/utils/checkpoint.py", line 903, in check_recomputed_tensors_match
raise CheckpointError(
torch.utils.checkpoint.CheckpointError: torch.utils.checkpoint: Recomputed values for the following tensors have different metadata than during the forward pass.
tensor at position 12:
saved metadata: {'shape': torch.Size([]), 'dtype': torch.int64, 'device': device(type='cpu')}
recomputed metadata: {'shape': torch.Size([]), 'dtype': torch.int64, 'device': device(type='cuda', index=0)}
tensor at position 13:
saved metadata: {'shape': torch.Size([]), 'dtype': torch.int64, 'device': device(type='cpu')}
recomputed metadata: {'shape': torch.Size([]), 'dtype': torch.int64, 'device': device(type='cuda', index=0)}
```
### Versions
```
Collecting environment information...
PyTorch version: 2.5.1+cu121
Is debug build: False
CUDA used to build PyTorch: 12.1
ROCM used to build PyTorch: N/A
OS: Ubuntu 22.04.3 LTS (x86_64)
GCC version: (Ubuntu 11.4.0-1ubuntu1~22.04) 11.4.0
Clang version: Could not collect
CMake version: version 3.30.5
Libc version: glibc-2.35
Python version: 3.11.10 | packaged by conda-forge | (main, Oct 16 2024, 01:27:36) [GCC 13.3.0] (64-bit runtime)
Python platform: Linux-4.18.0-477.21.1.el8_8.x86_64-x86_64-with-glibc2.35
Is CUDA available: True
CUDA runtime version: 12.1.105
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration:
GPU 0: NVIDIA H100 80GB HBM3
GPU 1: NVIDIA H100 80GB HBM3
Nvidia driver version: 535.154.05
cuDNN version: Probably one of the following:
/usr/lib/x86_64-linux-gnu/libcudnn.so.8.9.7
/usr/lib/x86_64-linux-gnu/libcudnn_adv_infer.so.8.9.7
/usr/lib/x86_64-linux-gnu/libcudnn_adv_train.so.8.9.7
/usr/lib/x86_64-linux-gnu/libcudnn_cnn_infer.so.8.9.7
/usr/lib/x86_64-linux-gnu/libcudnn_cnn_train.so.8.9.7
/usr/lib/x86_64-linux-gnu/libcudnn_ops_infer.so.8.9.7
/usr/lib/x86_64-linux-gnu/libcudnn_ops_train.so.8.9.7
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 46 bits physical, 57 bits virtual
Byte Order: Little Endian
CPU(s): 224
On-line CPU(s) list: 0-223
Vendor ID: GenuineIntel
Model name: Intel(R) Xeon(R) Platinum 8480+
CPU family: 6
Model: 143
Thread(s) per core: 2
Core(s) per socket: 56
Socket(s): 2
Stepping: 8
CPU max MHz: 3800.0000
CPU min MHz: 800.0000
BogoMIPS: 4000.00
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc art arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc cpuid aperfmperf tsc_known_freq pni pclmulqdq dtes64 monitor ds_cpl vmx smx est tm2 ssse3 sdbg fma cx16 xtpr pdcm pcid dca sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm 3dnowprefetch cpuid_fault epb cat_l3 cat_l2 cdp_l3 invpcid_single intel_ppin cdp_l2 ssbd mba ibrs ibpb stibp ibrs_enhanced tpr_shadow vnmi flexpriority ept vpid ept_ad fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid cqm rdt_a avx512f avx512dq rdseed adx smap avx512ifma clflushopt clwb intel_pt avx512cd sha_ni avx512bw avx512vl xsaveopt xsavec xgetbv1 xsaves cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local split_lock_detect avx_vnni avx512_bf16 wbnoinvd dtherm ida arat pln pts hwp hwp_act_window hwp_epp hwp_pkg_req avx512vbmi umip pku ospke waitpkg avx512_vbmi2 gfni vaes vpclmulqdq avx512_vnni avx512_bitalg tme avx512_vpopcntdq la57 rdpid bus_lock_detect cldemote movdiri movdir64b enqcmd fsrm md_clear serialize tsxldtrk pconfig arch_lbr amx_bf16 avx512_fp16 amx_tile amx_int8 flush_l1d arch_capabilities
Virtualization: VT-x
L1d cache: 5.3 MiB (112 instances)
L1i cache: 3.5 MiB (112 instances)
L2 cache: 224 MiB (112 instances)
L3 cache: 210 MiB (2 instances)
NUMA node(s): 2
NUMA node0 CPU(s): 0-55,112-167
NUMA node1 CPU(s): 56-111,168-223
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Not affected
Vulnerability Retbleed: Not affected
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Enhanced IBRS, IBPB conditional, RSB filling, PBRSB-eIBRS SW sequence
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Versions of relevant libraries:
[pip3] numpy==1.26.4
[pip3] nvidia-cublas-cu11==11.11.3.6
[pip3] nvidia-cublas-cu12==12.1.3.1
[pip3] nvidia-cuda-cupti-cu11==11.8.87
[pip3] nvidia-cuda-cupti-cu12==12.1.105
[pip3] nvidia-cuda-nvrtc-cu11==11.8.89
[pip3] nvidia-cuda-nvrtc-cu12==12.1.105
[pip3] nvidia-cuda-runtime-cu11==11.8.89
[pip3] nvidia-cuda-runtime-cu12==12.1.105
[pip3] nvidia-cudnn-cu11==9.1.0.70
[pip3] nvidia-cudnn-cu12==9.1.0.70
[pip3] nvidia-cufft-cu11==10.9.0.58
[pip3] nvidia-cufft-cu12==11.0.2.54
[pip3] nvidia-curand-cu11==10.3.0.86
[pip3] nvidia-curand-cu12==10.3.2.106
[pip3] nvidia-cusolver-cu11==11.4.1.48
[pip3] nvidia-cusolver-cu12==11.4.5.107
[pip3] nvidia-cusparse-cu11==11.7.5.86
[pip3] nvidia-cusparse-cu12==12.1.0.106
[pip3] nvidia-nccl-cu11==2.21.5
[pip3] nvidia-nccl-cu12==2.21.5
[pip3] nvidia-nvjitlink-cu12==12.1.105
[pip3] nvidia-nvtx-cu11==11.8.86
[pip3] nvidia-nvtx-cu12==12.1.105
[pip3] onnx==1.17.0
[pip3] onnxruntime-gpu==1.20.1
[pip3] onnxscript==0.1.0.dev20250108
[pip3] optree==0.13.0
[pip3] pytorch-triton==3.2.0+git0d4682f0
[pip3] torch==2.5.1+cu121
[pip3] torch.redstone==0.0.6
[pip3] torchaudio==2.5.1+cu121
[pip3] torchdiffeq==0.2.5
[pip3] torchelastic==0.2.2
[pip3] torchprofile==0.0.4
[pip3] torchvision==0.20.1+cu121
[pip3] triton==3.1.0
[conda] numpy 1.26.4 pypi_0 pypi
[conda] nvidia-cublas-cu11 11.11.3.6 pypi_0 pypi
[conda] nvidia-cublas-cu12 12.1.3.1 pypi_0 pypi
[conda] nvidia-cuda-cupti-cu11 11.8.87 pypi_0 pypi
[conda] nvidia-cuda-cupti-cu12 12.1.105 pypi_0 pypi
[conda] nvidia-cuda-nvrtc-cu11 11.8.89 pypi_0 pypi
[conda] nvidia-cuda-nvrtc-cu12 12.1.105 pypi_0 pypi
[conda] nvidia-cuda-runtime-cu11 11.8.89 pypi_0 pypi
[conda] nvidia-cuda-runtime-cu12 12.1.105 pypi_0 pypi
[conda] nvidia-cudnn-cu11 9.1.0.70 pypi_0 pypi
[conda] nvidia-cudnn-cu12 9.1.0.70 pypi_0 pypi
[conda] nvidia-cufft-cu11 10.9.0.58 pypi_0 pypi
[conda] nvidia-cufft-cu12 11.0.2.54 pypi_0 pypi
[conda] nvidia-curand-cu11 10.3.0.86 pypi_0 pypi
[conda] nvidia-curand-cu12 10.3.2.106 pypi_0 pypi
[conda] nvidia-cusolver-cu11 11.4.1.48 pypi_0 pypi
[conda] nvidia-cusolver-cu12 11.4.5.107 pypi_0 pypi
[conda] nvidia-cusparse-cu11 11.7.5.86 pypi_0 pypi
[conda] nvidia-cusparse-cu12 12.1.0.106 pypi_0 pypi
[conda] nvidia-nccl-cu11 2.21.5 pypi_0 pypi
[conda] nvidia-nccl-cu12 2.21.5 pypi_0 pypi
[conda] nvidia-nvjitlink-cu12 12.1.105 pypi_0 pypi
[conda] nvidia-nvtx-cu11 11.8.86 pypi_0 pypi
[conda] nvidia-nvtx-cu12 12.1.105 pypi_0 pypi
[conda] optree 0.13.0 pypi_0 pypi
[conda] pytorch-triton 3.2.0+git0d4682f0 pypi_0 pypi
[conda] torch 2.5.1+cu121 pypi_0 pypi
[conda] torch-redstone 0.0.6 pypi_0 pypi
[conda] torchaudio 2.5.1+cu121 pypi_0 pypi
[conda] torchdiffeq 0.2.5 pypi_0 pypi
[conda] torchelastic 0.2.2 pypi_0 pypi
[conda] torchprofile 0.0.4 pypi_0 pypi
[conda] torchvision 0.20.1+cu121 pypi_0 pypi
[conda] triton 3.1.0 pypi_0 pypi
```
cc @soulitzer @chauhang @penguinwu @voznesenskym @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @amjames @aakhundov @H-Huang @awgu @kwen2501 @wanchaol @fegin @fduwjj @wz337 @wconstab @d4l3k @c-p-i-o @LucasLLC @pradeepfn
| true
|
2,782,277,352
|
Error loading "pytorch\torch\lib\shm.dll" or one of its dependencies when building from source (Windows 11)
|
Panchovix
|
closed
|
[] | 0
|
NONE
|
### 🐛 Describe the bug
I have built torch from source at commit https://github.com/pytorch/pytorch/commit/63569d9745b0530f8d66721e2a462c9f042e6b16, using:
Magma for CUDA 12.6 (2.5.4)
MKL 2025.0 and 2020.2
cudNN 9.6
cuSPARSELt 0.6
cuDSS 0.4
On VS 2022 with MSVC v143
Using
```
cmake .. -GNinja ^
-DCUDNN_LIBRARY_PATH="C:/Program Files/NVIDIA/CUDNN/v9.6/lib/12.6/x64/cudnn.lib" ^
-DCUDNN_INCLUDE_PATH="C:/Program Files/NVIDIA/CUDNN/v9.6/include/12.6" ^
-DCUSPARSELT_LIBRARY_PATH="C:/Program Files/NVIDIA cuSPARSELt/v0.6/lib/cusparseLt.lib" ^
-DCUSPARSELT_INCLUDE_PATH="C:/Program Files/NVIDIA cuSPARSELt/v0.6/include" ^
-DCUDSS_LIBRARY_PATH="C:/Program Files/NVIDIA cuDSS/v0.4/lib/12/cudss.lib" ^
-DCUDSS_INCLUDE_PATH="C:/Program Files/NVIDIA cuDSS/v0.4/include" ^
-DUSE_CUDA=ON ^
-DUSE_FLASH_ATTENTION=ON ^
-DUSE_CUDNN=ON ^
-DUSE_CUSPARSELT=ON ^
-DUSE_CUDSS=ON ^
-DCMAKE_BUILD_TYPE=Release ^
-DUSE_STATIC_DISPATCH=OFF ^
-DCMAKE_INSTALL_PREFIX=../torch
```
And then doing
```
cmake --build . --target install --config Release -j 6
cd ..
pip install -e .
```
When importing, I get
```
(py310) C:\Users\User\Desktop\pytorch_compile\pytorch\build>python
Python 3.10.16 | packaged by Anaconda, Inc. | (main, Dec 11 2024, 16:19:12) [MSC v.1929 64 bit (AMD64)] on win32
Type "help", "copyright", "credits" or "license" for more information.
>>> import torch
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "C:\Users\User\Desktop\pytorch_compile\pytorch\torch\__init__.py", line 274, in <module>
_load_dll_libraries()
File "C:\Users\User\Desktop\pytorch_compile\pytorch\torch\__init__.py", line 270, in _load_dll_libraries
raise err
OSError: [WinError 126] The specified module could not be found. Error loading "C:\Users\User\Desktop\pytorch_compile\pytorch\torch\lib\shm.dll" or one of its dependencies.
```
The issue happens with both Python 3.10 and 3.12.
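For reference, a minimal diagnostic sketch (assumption: one of shm.dll's dependent DLLs, e.g. the CUDA/cuDNN runtime, is simply not on the loader's search path; the `bin` directories below are guesses based on the install paths above, not verified locations):
```python
import os

# Hypothetical check: expose the CUDA/cuDNN runtime DLL directories to the Windows
# loader before importing torch. If the import then succeeds, a missing dependency
# of shm.dll (rather than shm.dll itself) was the culprit.
candidate_dirs = [
    r"C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v12.6\bin",
    r"C:\Program Files\NVIDIA\CUDNN\v9.6\bin\12.6",
]
for d in candidate_dirs:
    if os.path.isdir(d):
        os.add_dll_directory(d)

import torch
print(torch.__version__, torch.cuda.is_available())
```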
Complete log when running cmake is
```
(py310) C:\Users\User\Desktop\pytorch_compile\pytorch\build>cmake .. -GNinja ^
¿Más? -DCUDNN_LIBRARY_PATH="C:/Program Files/NVIDIA/CUDNN/v9.6/lib/12.6/x64/cudnn.lib" ^
¿Más? -DCUDNN_INCLUDE_PATH="C:/Program Files/NVIDIA/CUDNN/v9.6/include/12.6" ^
¿Más? -DCUSPARSELT_LIBRARY_PATH="C:/Program Files/NVIDIA cuSPARSELt/v0.6/lib/cusparseLt.lib" ^
¿Más? -DCUSPARSELT_INCLUDE_PATH="C:/Program Files/NVIDIA cuSPARSELt/v0.6/include" ^
¿Más? -DCUDSS_LIBRARY_PATH="C:/Program Files/NVIDIA cuDSS/v0.4/lib/12/cudss.lib" ^
¿Más? -DCUDSS_INCLUDE_PATH="C:/Program Files/NVIDIA cuDSS/v0.4/include" ^
¿Más? -DUSE_CUDA=ON ^
¿Más? -DUSE_FLASH_ATTENTION=ON ^
¿Más? -DUSE_CUDNN=ON ^
¿Más? -DUSE_CUSPARSELT=ON ^
¿Más? -DUSE_CUDSS=ON ^
¿Más? -DCMAKE_BUILD_TYPE=Release ^
¿Más? -DUSE_STATIC_DISPATCH=OFF ^
¿Más? -DCMAKE_INSTALL_PREFIX=../torch
-- The CXX compiler identification is MSVC 19.42.34435.0
-- The C compiler identification is MSVC 19.42.34435.0
-- Detecting CXX compiler ABI info
-- Detecting CXX compiler ABI info - done
-- Check for working CXX compiler: C:/Program Files/Microsoft Visual Studio/2022/Community/VC/Tools/MSVC/14.42.34433/bin/Hostx64/x64/cl.exe - skipped
-- Detecting CXX compile features
-- Detecting CXX compile features - done
-- Detecting C compiler ABI info
-- Detecting C compiler ABI info - done
-- Check for working C compiler: C:/Program Files/Microsoft Visual Studio/2022/Community/VC/Tools/MSVC/14.42.34433/bin/Hostx64/x64/cl.exe - skipped
-- Detecting C compile features
-- Detecting C compile features - done
-- Not forcing any particular BLAS to be found
CMake Warning at CMakeLists.txt:422 (message):
TensorPipe cannot be used on Windows. Set it to OFF
CMake Warning at CMakeLists.txt:424 (message):
KleidiAI cannot be used on Windows. Set it to OFF
-- Performing Test C_HAS_AVX_1
-- Performing Test C_HAS_AVX_1 - Success
-- Performing Test C_HAS_AVX2_1
-- Performing Test C_HAS_AVX2_1 - Success
-- Performing Test C_HAS_AVX512_1
-- Performing Test C_HAS_AVX512_1 - Success
-- Performing Test CXX_HAS_AVX_1
-- Performing Test CXX_HAS_AVX_1 - Success
-- Performing Test CXX_HAS_AVX2_1
-- Performing Test CXX_HAS_AVX2_1 - Success
-- Performing Test CXX_HAS_AVX512_1
-- Performing Test CXX_HAS_AVX512_1 - Success
-- Current compiler supports avx2 extension. Will build perfkernels.
-- Performing Test CAFFE2_COMPILER_SUPPORTS_AVX512_EXTENSIONS
-- Performing Test CAFFE2_COMPILER_SUPPORTS_AVX512_EXTENSIONS - Success
-- Current compiler supports avx512f extension. Will build fbgemm.
-- Performing Test COMPILER_SUPPORTS_HIDDEN_VISIBILITY
-- Performing Test COMPILER_SUPPORTS_HIDDEN_VISIBILITY - Failed
-- Performing Test COMPILER_SUPPORTS_HIDDEN_INLINE_VISIBILITY
-- Performing Test COMPILER_SUPPORTS_HIDDEN_INLINE_VISIBILITY - Failed
-- Could not find hardware support for NEON on this machine.
-- No OMAP3 processor on this machine.
-- No OMAP4 processor on this machine.
-- Compiler does not support SVE extension. Will not build perfkernels.
-- Performing Test HAS/UTF_8
-- Performing Test HAS/UTF_8 - Success
-- Found CUDA: C:/Program Files/NVIDIA GPU Computing Toolkit/CUDA/v12.6 (found version "12.6")
-- The CUDA compiler identification is NVIDIA 12.6.85
-- Detecting CUDA compiler ABI info
-- Detecting CUDA compiler ABI info - done
-- Check for working CUDA compiler: C:/Program Files/NVIDIA GPU Computing Toolkit/CUDA/v12.6/bin/nvcc.exe - skipped
-- Detecting CUDA compile features
-- Detecting CUDA compile features - done
-- Found CUDAToolkit: C:/Program Files/NVIDIA GPU Computing Toolkit/CUDA/v12.6/include (found version "12.6.85")
-- PyTorch: CUDA detected: 12.6
-- PyTorch: CUDA nvcc is: C:/Program Files/NVIDIA GPU Computing Toolkit/CUDA/v12.6/bin/nvcc.exe
-- PyTorch: CUDA toolkit directory: C:/Program Files/NVIDIA GPU Computing Toolkit/CUDA/v12.6
-- PyTorch: Header version is: 12.6
-- Found Python: C:/Users/User/anaconda3/envs/py310/python.exe (found version "3.10.16") found components: Interpreter
CMake Warning at cmake/public/cuda.cmake:140 (message):
Failed to compute shorthash for libnvrtc.so
Call Stack (most recent call first):
cmake/Dependencies.cmake:44 (include)
CMakeLists.txt:865 (include)
-- Found nvtx3: C:/Users/User/Desktop/pytorch_compile/pytorch/third_party/NVTX/c/include
-- Found CUDNN: C:/Program Files/NVIDIA/CUDNN/v9.6/lib/12.6/x64/cudnn.lib
-- Found CUSPARSELT: C:/Program Files/NVIDIA cuSPARSELt/v0.6/lib/cusparseLt.lib
-- Found CUDSS: C:/Program Files/NVIDIA cuDSS/v0.4/lib/12/cudss.lib
-- USE_CUFILE is set to 0. Compiling without cuFile support
-- Autodetected CUDA architecture(s): 8.9 8.9 8.6
-- Added CUDA NVCC flags for: -gencode;arch=compute_89,code=sm_89;-gencode;arch=compute_86,code=sm_86
CMake Warning at cmake/Dependencies.cmake:95 (message):
Not compiling with XPU. Could NOT find SYCL.Suppress this warning with
-DUSE_XPU=OFF.
Call Stack (most recent call first):
CMakeLists.txt:865 (include)
-- Building using own protobuf under third_party per request.
-- Use custom protobuf build.
CMake Deprecation Warning at third_party/protobuf/cmake/CMakeLists.txt:2 (cmake_minimum_required):
Compatibility with CMake < 3.5 will be removed from a future version of
CMake.
Update the VERSION argument <min> value or use a ...<max> suffix to tell
CMake that the project does not need compatibility with older versions.
--
-- 3.13.0.0
-- Performing Test CMAKE_HAVE_LIBC_PTHREAD
-- Performing Test CMAKE_HAVE_LIBC_PTHREAD - Failed
-- Looking for pthread_create in pthreads
-- Looking for pthread_create in pthreads - not found
-- Looking for pthread_create in pthread
-- Looking for pthread_create in pthread - not found
-- Found Threads: TRUE
-- Caffe2 protobuf include directory: $<BUILD_INTERFACE:C:/Users/User/Desktop/pytorch_compile/pytorch/third_party/protobuf/src>$<INSTALL_INTERFACE:include>
-- Trying to find preferred BLAS backend of choice: MKL
-- MKL_THREADING = OMP
-- Looking for sys/types.h
-- Looking for sys/types.h - found
-- Looking for stdint.h
-- Looking for stdint.h - found
-- Looking for stddef.h
-- Looking for stddef.h - found
-- Check size of void*
-- Check size of void* - done
-- Looking for cblas_sgemm
-- Looking for cblas_sgemm - found
-- Looking for cblas_gemm_bf16bf16f32
-- Looking for cblas_gemm_bf16bf16f32 - found
-- Looking for cblas_gemm_f16f16f32
-- Looking for cblas_gemm_f16f16f32 - found
-- MKL libraries: C:/Program Files (x86)/Intel/oneAPI/mkl/2025.0/lib/mkl_intel_lp64_dll.lib;C:/Program Files (x86)/Intel/oneAPI/mkl/2025.0/lib/mkl_intel_thread_dll.lib;C:/Program Files (x86)/Intel/oneAPI/mkl/2025.0/lib/mkl_core_dll.lib;C:/Users/User/anaconda3/envs/py310/Library/lib/libiomp5md.lib
-- MKL include directory: C:/Program Files (x86)/Intel/oneAPI/mkl/2025.0/include
-- MKL OpenMP type: Intel
-- MKL OpenMP library: C:/Users/User/anaconda3/envs/py310/Library/lib/libiomp5md.lib
-- The ASM compiler identification is MSVC
-- Found assembler: C:/Program Files/Microsoft Visual Studio/2022/Community/VC/Tools/MSVC/14.42.34433/bin/Hostx64/x64/cl.exe
-- Building for XNNPACK_TARGET_PROCESSOR: x86_64
-- Generating microkernels.cmake
Duplicate microkernel definition: src\qs8-qc4w-packw\gen\qs8-qc4w-packw-x8c8-gemm-goi-avx256vnni.c and src\qs8-qc4w-packw\gen\qs8-qc4w-packw-x8c8-gemm-goi-avxvnni.c (1th function)
Duplicate microkernel definition: src\qs8-qc4w-packw\gen\qs8-qc4w-packw-x8c8-gemm-goi-avxvnni.c and src\qs8-qc4w-packw\gen\qs8-qc4w-packw-x8c8-gemm-goi-scalar.c
No microkernel found in src\reference\binary-elementwise.cc
No microkernel found in src\reference\packing.cc
No microkernel found in src\reference\unary-elementwise.cc
CMake Warning (dev) at third_party/fbgemm/CMakeLists.txt:93 (find_package):
Policy CMP0148 is not set: The FindPythonInterp and FindPythonLibs modules
are removed. Run "cmake --help-policy CMP0148" for policy details. Use
the cmake_policy command to set the policy and suppress this warning.
This warning is for project developers. Use -Wno-dev to suppress it.
-- Found PythonInterp: C:/Users/User/anaconda3/envs/py310/python.exe (found version "3.10.16")
-- Performing Test COMPILER_SUPPORTS_AVX512
-- Performing Test COMPILER_SUPPORTS_AVX512 - Success
-- Check OMP with lib C:/Users/User/anaconda3/envs/py310/Library/lib/libiomp5md.lib and flags -openmp:experimental
-- Check OMP with lib C:/Users/User/anaconda3/envs/py310/Library/lib/libiomp5md.lib and flags -openmp:experimental
CMake Warning (dev) at C:/Users/User/anaconda3/envs/py310/Library/share/cmake-3.27/Modules/FindPackageHandleStandardArgs.cmake:438 (message):
The package name passed to `find_package_handle_standard_args` (OpenMP_C)
does not match the name of the calling package (OpenMP). This can lead to
problems in calling code that expects `find_package` result variables
(e.g., `_FOUND`) to follow a certain pattern.
Call Stack (most recent call first):
cmake/Modules/FindOpenMP.cmake:590 (find_package_handle_standard_args)
third_party/fbgemm/CMakeLists.txt:136 (find_package)
This warning is for project developers. Use -Wno-dev to suppress it.
-- Found OpenMP_C: -openmp:experimental
CMake Warning (dev) at C:/Users/User/anaconda3/envs/py310/Library/share/cmake-3.27/Modules/FindPackageHandleStandardArgs.cmake:438 (message):
The package name passed to `find_package_handle_standard_args` (OpenMP_CXX)
does not match the name of the calling package (OpenMP). This can lead to
problems in calling code that expects `find_package` result variables
(e.g., `_FOUND`) to follow a certain pattern.
Call Stack (most recent call first):
cmake/Modules/FindOpenMP.cmake:590 (find_package_handle_standard_args)
third_party/fbgemm/CMakeLists.txt:136 (find_package)
This warning is for project developers. Use -Wno-dev to suppress it.
-- Found OpenMP_CXX: -openmp:experimental
-- Found OpenMP: TRUE
CMake Warning at third_party/fbgemm/CMakeLists.txt:138 (message):
OpenMP found! OpenMP_C_INCLUDE_DIRS =
CMake Warning at third_party/fbgemm/CMakeLists.txt:232 (message):
==========
CMake Warning at third_party/fbgemm/CMakeLists.txt:233 (message):
CMAKE_BUILD_TYPE = Release
CMake Warning at third_party/fbgemm/CMakeLists.txt:234 (message):
CMAKE_CXX_FLAGS_DEBUG is /Z7 /Ob0 /Od /RTC1 /bigobj
CMake Warning at third_party/fbgemm/CMakeLists.txt:235 (message):
CMAKE_CXX_FLAGS_RELEASE is /O2 /Ob2 /DNDEBUG /bigobj
CMake Warning at third_party/fbgemm/CMakeLists.txt:236 (message):
==========
** AsmJit Summary **
ASMJIT_DIR=C:/Users/User/Desktop/pytorch_compile/pytorch/third_party/fbgemm/third_party/asmjit
ASMJIT_TEST=FALSE
ASMJIT_TARGET_TYPE=SHARED
ASMJIT_DEPS=
ASMJIT_LIBS=asmjit
ASMJIT_CFLAGS=
ASMJIT_PRIVATE_CFLAGS=-MP;-GF;-Zc:__cplusplus;-Zc:inline;-Zc:strictStrings;-Zc:threadSafeInit-;-W4
ASMJIT_PRIVATE_CFLAGS_DBG=-GS
ASMJIT_PRIVATE_CFLAGS_REL=-GS-;-O2;-Oi
CMake Deprecation Warning at third_party/ittapi/CMakeLists.txt:7 (cmake_minimum_required):
Compatibility with CMake < 3.5 will be removed from a future version of
CMake.
Update the VERSION argument <min> value or use a ...<max> suffix to tell
CMake that the project does not need compatibility with older versions.
CMake Deprecation Warning at third_party/FP16/CMakeLists.txt:1 (CMAKE_MINIMUM_REQUIRED):
Compatibility with CMake < 3.5 will be removed from a future version of
CMake.
Update the VERSION argument <min> value or use a ...<max> suffix to tell
CMake that the project does not need compatibility with older versions.
CMake Deprecation Warning at third_party/psimd/CMakeLists.txt:1 (CMAKE_MINIMUM_REQUIRED):
Compatibility with CMake < 3.5 will be removed from a future version of
CMake.
Update the VERSION argument <min> value or use a ...<max> suffix to tell
CMake that the project does not need compatibility with older versions.
-- Using third party subdirectory Eigen.
-- Found Python: C:/Users/User/anaconda3/envs/py310/python.exe (found version "3.10.16") found components: Interpreter Development.Module NumPy
-- Using third_party/pybind11.
-- pybind11 include dirs: C:/Users/User/Desktop/pytorch_compile/pytorch/cmake/../third_party/pybind11/include
-- Could NOT find OpenTelemetryApi (missing: OpenTelemetryApi_INCLUDE_DIRS)
-- Using third_party/opentelemetry-cpp.
-- opentelemetry api include dirs: C:/Users/User/Desktop/pytorch_compile/pytorch/cmake/../third_party/opentelemetry-cpp/api/include
-- Could NOT find MPI_C (missing: MPI_C_LIB_NAMES MPI_C_HEADER_DIR MPI_C_WORKS)
-- Could NOT find MPI_CXX (missing: MPI_CXX_LIB_NAMES MPI_CXX_HEADER_DIR MPI_CXX_WORKS)
-- Could NOT find MPI (missing: MPI_C_FOUND MPI_CXX_FOUND)
CMake Warning at cmake/Dependencies.cmake:945 (message):
Not compiling with MPI. Suppress this warning with -DUSE_MPI=OFF
Call Stack (most recent call first):
CMakeLists.txt:865 (include)
-- Adding OpenMP CXX_FLAGS: -openmp:experimental
-- Will link against OpenMP libraries: C:/Users/User/anaconda3/envs/py310/Library/lib/libiomp5md.lib
-- Found CUB: C:/Program Files/NVIDIA GPU Computing Toolkit/CUDA/v12.6/include
CMake Deprecation Warning at third_party/gloo/CMakeLists.txt:1 (cmake_minimum_required):
Compatibility with CMake < 3.5 will be removed from a future version of
CMake.
Update the VERSION argument <min> value or use a ...<max> suffix to tell
CMake that the project does not need compatibility with older versions.
CMake Warning (dev) at third_party/gloo/CMakeLists.txt:21 (option):
Policy CMP0077 is not set: option() honors normal variables. Run "cmake
--help-policy CMP0077" for policy details. Use the cmake_policy command to
set the policy and suppress this warning.
For compatibility with older versions of CMake, option is clearing the
normal variable 'BUILD_BENCHMARK'.
This warning is for project developers. Use -Wno-dev to suppress it.
CMake Warning (dev) at third_party/gloo/CMakeLists.txt:35 (option):
Policy CMP0077 is not set: option() honors normal variables. Run "cmake
--help-policy CMP0077" for policy details. Use the cmake_policy command to
set the policy and suppress this warning.
For compatibility with older versions of CMake, option is clearing the
normal variable 'USE_NCCL'.
This warning is for project developers. Use -Wno-dev to suppress it.
CMake Warning (dev) at third_party/gloo/CMakeLists.txt:36 (option):
Policy CMP0077 is not set: option() honors normal variables. Run "cmake
--help-policy CMP0077" for policy details. Use the cmake_policy command to
set the policy and suppress this warning.
For compatibility with older versions of CMake, option is clearing the
normal variable 'USE_RCCL'.
This warning is for project developers. Use -Wno-dev to suppress it.
-- MSVC detected
-- Set USE_REDIS OFF
-- Set USE_IBVERBS OFF
-- Set USE_NCCL OFF
-- Set USE_RCCL OFF
-- Set USE_LIBUV ON
-- Only USE_LIBUV is supported on Windows
-- Enabling sccache for CXX
-- Enabling sccache for C
-- Gloo build as SHARED library
CMake Warning (dev) at third_party/gloo/cmake/Cuda.cmake:109 (find_package):
Policy CMP0074 is not set: find_package uses <PackageName>_ROOT variables.
Run "cmake --help-policy CMP0074" for policy details. Use the cmake_policy
command to set the policy and suppress this warning.
CMake variable CUDAToolkit_ROOT is set to:
C:/Program Files/NVIDIA GPU Computing Toolkit/CUDA/v12.6
For compatibility, CMake is ignoring the variable.
Call Stack (most recent call first):
third_party/gloo/cmake/Dependencies.cmake:115 (include)
third_party/gloo/CMakeLists.txt:111 (include)
This warning is for project developers. Use -Wno-dev to suppress it.
-- Found CUDAToolkit: C:/Program Files/NVIDIA GPU Computing Toolkit/CUDA/v12.6/include (found suitable version "12.6.85", minimum required is "7.0")
-- CUDA detected: 12.6.85
CMake Warning (dev) at third_party/onnx/CMakeLists.txt:106 (find_package):
Policy CMP0148 is not set: The FindPythonInterp and FindPythonLibs modules
are removed. Run "cmake --help-policy CMP0148" for policy details. Use
the cmake_policy command to set the policy and suppress this warning.
This warning is for project developers. Use -Wno-dev to suppress it.
Generated: C:/Users/User/Desktop/pytorch_compile/pytorch/build/third_party/onnx/onnx/onnx_onnx_torch-ml.proto
Generated: C:/Users/User/Desktop/pytorch_compile/pytorch/build/third_party/onnx/onnx/onnx-operators_onnx_torch-ml.proto
Generated: C:/Users/User/Desktop/pytorch_compile/pytorch/build/third_party/onnx/onnx/onnx-data_onnx_torch.proto
--
-- ******** Summary ********
-- CMake version : 3.27.4
-- CMake command : C:/Users/User/anaconda3/envs/py310/Library/bin/cmake.exe
-- System : Windows
-- C++ compiler : C:/Program Files/Microsoft Visual Studio/2022/Community/VC/Tools/MSVC/14.42.34433/bin/Hostx64/x64/cl.exe
-- C++ compiler version : 19.42.34435.0
-- CXX flags : /DWIN32 /D_WINDOWS /GR /EHsc /Zc:__cplusplus /bigobj /FS /utf-8 -DUSE_PTHREADPOOL /EHsc /wd26812
-- Build type : Release
-- Compile definitions : ONNX_ML=1;ONNXIFI_ENABLE_EXT=1;__STDC_FORMAT_MACROS
-- CMAKE_PREFIX_PATH : C:/Program Files/NVIDIA GPU Computing Toolkit/CUDA/v12.6;C:/Program Files/NVIDIA GPU Computing Toolkit/CUDA/v12.6;C:/Program Files/NVIDIA GPU Computing Toolkit/CUDA/v12.6
-- CMAKE_INSTALL_PREFIX : C:/Users/User/Desktop/pytorch_compile/pytorch/torch
-- CMAKE_MODULE_PATH : C:/Users/User/Desktop/pytorch_compile/pytorch/cmake/Modules;C:/Users/User/Desktop/pytorch_compile/pytorch/cmake/public/../Modules_CUDA_fix
--
-- ONNX version : 1.17.0
-- ONNX NAMESPACE : onnx_torch
-- ONNX_USE_LITE_PROTO : OFF
-- USE_PROTOBUF_SHARED_LIBS : OFF
-- Protobuf_USE_STATIC_LIBS : ON
-- ONNX_DISABLE_EXCEPTIONS : OFF
-- ONNX_DISABLE_STATIC_REGISTRATION : OFF
-- ONNX_WERROR : OFF
-- ONNX_BUILD_TESTS : OFF
-- ONNX_BUILD_SHARED_LIBS :
-- BUILD_SHARED_LIBS : OFF
--
-- Protobuf compiler :
-- Protobuf includes :
-- Protobuf libraries :
-- BUILD_ONNX_PYTHON : OFF
-- Found CUDA with FP16 support, compiling with torch.cuda.HalfTensor
-- Adding -DNDEBUG to compile flags
-- Checking prototype magma_get_sgeqrf_nb for MAGMA_V2
-- Checking prototype magma_get_sgeqrf_nb for MAGMA_V2 - False
-- Compiling with MAGMA support
-- MAGMA INCLUDE DIRECTORIES: C:/magma_dir/include
-- MAGMA LIBRARIES: C:/magma_dir/lib/magma.lib
-- MAGMA V2 check: 0
-- Could not find hardware support for NEON on this machine.
-- No OMAP3 processor on this machine.
-- No OMAP4 processor on this machine.
-- Looking for sbgemm_
-- Looking for sbgemm_ - not found
-- Found a library with LAPACK API (mkl).
disabling ROCM because NOT USE_ROCM is set
-- MIOpen not found. Compiling without MIOpen support
-- Will build oneDNN UKERNEL
-- MKLDNN_CPU_RUNTIME = OMP
CMake Deprecation Warning at third_party/ideep/mkl-dnn/CMakeLists.txt:17 (cmake_minimum_required):
Compatibility with CMake < 3.5 will be removed from a future version of
CMake.
Update the VERSION argument <min> value or use a ...<max> suffix to tell
CMake that the project does not need compatibility with older versions.
-- DNNL_TARGET_ARCH: X64
-- DNNL_LIBRARY_NAME: dnnl
CMake Warning (dev) at C:/Users/User/anaconda3/envs/py310/Library/share/cmake-3.27/Modules/FindPackageHandleStandardArgs.cmake:438 (message):
The package name passed to `find_package_handle_standard_args` (OpenMP_C)
does not match the name of the calling package (OpenMP). This can lead to
problems in calling code that expects `find_package` result variables
(e.g., `_FOUND`) to follow a certain pattern.
Call Stack (most recent call first):
cmake/Modules/FindOpenMP.cmake:590 (find_package_handle_standard_args)
third_party/ideep/mkl-dnn/cmake/OpenMP.cmake:55 (find_package)
third_party/ideep/mkl-dnn/CMakeLists.txt:119 (include)
This warning is for project developers. Use -Wno-dev to suppress it.
-- Found OpenMP_C: -openmp:experimental
CMake Warning (dev) at C:/Users/User/anaconda3/envs/py310/Library/share/cmake-3.27/Modules/FindPackageHandleStandardArgs.cmake:438 (message):
The package name passed to `find_package_handle_standard_args` (OpenMP_CXX)
does not match the name of the calling package (OpenMP). This can lead to
problems in calling code that expects `find_package` result variables
(e.g., `_FOUND`) to follow a certain pattern.
Call Stack (most recent call first):
cmake/Modules/FindOpenMP.cmake:590 (find_package_handle_standard_args)
third_party/ideep/mkl-dnn/cmake/OpenMP.cmake:55 (find_package)
third_party/ideep/mkl-dnn/CMakeLists.txt:119 (include)
This warning is for project developers. Use -Wno-dev to suppress it.
-- Found OpenMP_CXX: -openmp:experimental
-- Found Git: C:/Program Files/Git/cmd/git.exe (found version "2.47.0.windows.1")
-- Enabled testing coverage: CI
-- Enabled workload: TRAINING
-- Enabled primitives: ALL
-- Enabled primitive CPU ISA: ALL
-- Enabled primitive GPU ISA: ALL
-- Enabled GeMM kernels ISA: ALL
-- Primitive cache is enabled
-- Experimental functionality for ukernels is enabled
-- The ASM_MASM compiler identification is MSVC
-- Found assembler: C:/Program Files/Microsoft Visual Studio/2022/Community/VC/Tools/MSVC/14.42.34433/bin/Hostx64/x64/ml64.exe
-- Graph component is enabled
-- Graph compiler backend is disabled.
-- Found MKL-DNN: TRUE
-- {fmt} version: 11.1.1
-- Build type: Release
-- Using CPU-only version of Kineto
-- Configuring Kineto dependency:
-- KINETO_SOURCE_DIR = C:/Users/User/Desktop/pytorch_compile/pytorch/third_party/kineto/libkineto
-- KINETO_BUILD_TESTS = OFF
-- KINETO_LIBRARY_TYPE = static
CMake Warning (dev) at third_party/kineto/libkineto/CMakeLists.txt:15 (find_package):
Policy CMP0148 is not set: The FindPythonInterp and FindPythonLibs modules
are removed. Run "cmake --help-policy CMP0148" for policy details. Use
the cmake_policy command to set the policy and suppress this warning.
This warning is for project developers. Use -Wno-dev to suppress it.
INFO CUDA_SOURCE_DIR =
INFO ROCM_SOURCE_DIR =
INFO CUPTI unavailable or disabled - not building GPU profilers
-- Kineto: FMT_SOURCE_DIR = C:/Users/User/Desktop/pytorch_compile/pytorch/third_party/fmt
-- Kineto: FMT_INCLUDE_DIR = C:/Users/User/Desktop/pytorch_compile/pytorch/third_party/fmt/include
INFO CUPTI_INCLUDE_DIR = /extras/CUPTI/include
INFO ROCTRACER_INCLUDE_DIR = /include/roctracer
INFO DYNOLOG_INCLUDE_DIR = C:/Users/User/Desktop/pytorch_compile/pytorch/third_party/kineto/libkineto/third_party/dynolog/
INFO IPCFABRIC_INCLUDE_DIR = C:/Users/User/Desktop/pytorch_compile/pytorch/third_party/kineto/libkineto/third_party/dynolog//dynolog/src/ipcfabric/
-- Configured Kineto (CPU)
-- Performing Test HAS/WD4624
-- Performing Test HAS/WD4624 - Success
-- Performing Test HAS/WD4068
-- Performing Test HAS/WD4068 - Success
-- Performing Test HAS/WD4067
-- Performing Test HAS/WD4067 - Success
-- Performing Test HAS/WD4267
-- Performing Test HAS/WD4267 - Success
-- Performing Test HAS/WD4661
-- Performing Test HAS/WD4661 - Success
-- Performing Test HAS/WD4717
-- Performing Test HAS/WD4717 - Success
-- Performing Test HAS/WD4244
-- Performing Test HAS/WD4244 - Success
-- Performing Test HAS/WD4804
-- Performing Test HAS/WD4804 - Success
-- Performing Test HAS/WD4273
-- Performing Test HAS/WD4273 - Success
-- Performing Test HAS_WNO_STRINGOP_OVERFLOW
-- Performing Test HAS_WNO_STRINGOP_OVERFLOW - Failed
--
-- Use the C++ compiler to compile (MI_USE_CXX=ON)
--
-- Library base name: mimalloc
-- Version : 1.8
-- Build type : release
-- C++ Compiler : C:/Program Files/Microsoft Visual Studio/2022/Community/VC/Tools/MSVC/14.42.34433/bin/Hostx64/x64/cl.exe
-- Compiler flags : /Zc:__cplusplus
-- Compiler defines :
-- Link libraries : psapi;shell32;user32;advapi32;bcrypt
-- Build targets : static
--
-- Performing Test HAS_WDEPRECATED
-- Performing Test HAS_WDEPRECATED - Failed
-- don't use NUMA
-- Looking for backtrace
-- Looking for backtrace - not found
-- Could NOT find Backtrace (missing: Backtrace_LIBRARY Backtrace_INCLUDE_DIR)
-- Autodetected CUDA architecture(s): 8.9 8.9 8.6
-- headers outputs:
-- sources outputs:
-- declarations_yaml outputs:
-- Performing Test COMPILER_SUPPORTS_NO_AVX256_SPLIT
-- Performing Test COMPILER_SUPPORTS_NO_AVX256_SPLIT - Failed
-- Using ATen parallel backend: OMP
-- Found OpenSSL: C:/Users/User/anaconda3/envs/py310/Library/lib/libcrypto.lib (found version "3.4.0")
-- Check size of long double
-- Check size of long double - done
-- Performing Test COMPILER_SUPPORTS_FLOAT128
-- Performing Test COMPILER_SUPPORTS_FLOAT128 - Failed
-- Performing Test COMPILER_SUPPORTS_SSE2
-- Performing Test COMPILER_SUPPORTS_SSE2 - Success
-- Performing Test COMPILER_SUPPORTS_SSE4
-- Performing Test COMPILER_SUPPORTS_SSE4 - Success
-- Performing Test COMPILER_SUPPORTS_AVX
-- Performing Test COMPILER_SUPPORTS_AVX - Success
-- Performing Test COMPILER_SUPPORTS_FMA4
-- Performing Test COMPILER_SUPPORTS_FMA4 - Success
-- Performing Test COMPILER_SUPPORTS_AVX2
-- Performing Test COMPILER_SUPPORTS_AVX2 - Success
-- Performing Test COMPILER_SUPPORTS_AVX512F
-- Performing Test COMPILER_SUPPORTS_AVX512F - Success
-- Found OpenMP_C: -openmp:experimental (found version "2.0")
-- Found OpenMP_CXX: -openmp:experimental (found version "2.0")
-- Found OpenMP: TRUE (found version "2.0")
-- Performing Test COMPILER_SUPPORTS_OPENMP
-- Performing Test COMPILER_SUPPORTS_OPENMP - Success
-- Performing Test COMPILER_SUPPORTS_OMP_SIMD
-- Performing Test COMPILER_SUPPORTS_OMP_SIMD - Failed
-- Performing Test COMPILER_SUPPORTS_WEAK_ALIASES
-- Performing Test COMPILER_SUPPORTS_WEAK_ALIASES - Failed
-- Performing Test COMPILER_SUPPORTS_BUILTIN_MATH
-- Performing Test COMPILER_SUPPORTS_BUILTIN_MATH - Failed
-- Performing Test COMPILER_SUPPORTS_SYS_GETRANDOM
-- Performing Test COMPILER_SUPPORTS_SYS_GETRANDOM - Failed
-- Configuring build for SLEEF-v3.7.0
Target system: Windows-10.0.26100
Target processor: AMD64
Host system: Windows-10.0.26100
Host processor: AMD64
Detected C compiler: MSVC @ C:/Program Files/Microsoft Visual Studio/2022/Community/VC/Tools/MSVC/14.42.34433/bin/Hostx64/x64/cl.exe
CMake: 3.27.4
Make program: C:/Users/User/anaconda3/envs/py310/Library/bin/ninja.exe
-- Using option `/D_CRT_SECURE_NO_WARNINGS /D_CRT_NONSTDC_NO_DEPRECATE ` to compile libsleef
-- Building shared libs : OFF
-- Building static test bins: OFF
-- MPFR : LIB_MPFR-NOTFOUND
-- GMP : LIBGMP-NOTFOUND
-- RT :
-- FFTW3 : LIBFFTW3-NOTFOUND
-- OPENSSL : 3.4.0
-- SDE : SDE_COMMAND-NOTFOUND
-- COMPILER_SUPPORTS_OPENMP : FALSE
AT_INSTALL_INCLUDE_DIR include/ATen/core
core header install: C:/Users/User/Desktop/pytorch_compile/pytorch/build/aten/src/ATen/core/TensorBody.h
core header install: C:/Users/User/Desktop/pytorch_compile/pytorch/build/aten/src/ATen/core/aten_interned_strings.h
core header install: C:/Users/User/Desktop/pytorch_compile/pytorch/build/aten/src/ATen/core/enum_tag.h
-- Autodetected CUDA architecture(s): 8.9 8.9 8.6
CMake Warning at CMakeLists.txt:1275 (message):
Generated cmake files are only fully tested if one builds with system glog,
gflags, and protobuf. Other settings may generate files that are not well
tested.
--
-- ******** Summary ********
-- General:
-- CMake version : 3.27.4
-- CMake command : C:/Users/User/anaconda3/envs/py310/Library/bin/cmake.exe
-- System : Windows
-- C++ compiler : C:/Program Files/Microsoft Visual Studio/2022/Community/VC/Tools/MSVC/14.42.34433/bin/Hostx64/x64/cl.exe
-- C++ compiler id : MSVC
-- C++ compiler version : 19.42.34435.0
-- Using ccache if found : OFF
-- CXX flags : /DWIN32 /D_WINDOWS /GR /EHsc /Zc:__cplusplus /bigobj /FS /utf-8 -DUSE_PTHREADPOOL -DNDEBUG -DUSE_KINETO -DLIBKINETO_NOCUPTI -DLIBKINETO_NOROCTRACER -DLIBKINETO_NOXPUPTI=ON -DUSE_FBGEMM -DUSE_XNNPACK -DSYMBOLICATE_MOBILE_DEBUG_HANDLE /wd4624 /wd4068 /wd4067 /wd4267 /wd4661 /wd4717 /wd4244 /wd4804 /wd4273
-- Shared LD flags : /machine:x64 /ignore:4049 /ignore:4217 /ignore:4099
-- Static LD flags : /machine:x64 /ignore:4049 /ignore:4217 /ignore:4099
-- Module LD flags : /machine:x64 /ignore:4049 /ignore:4217 /ignore:4099
-- Build type : Release
-- Compile definitions : ONNX_ML=1;ONNXIFI_ENABLE_EXT=1;ONNX_NAMESPACE=onnx_torch;_CRT_SECURE_NO_DEPRECATE=1;IDEEP_USE_MKL;USE_EXTERNAL_MZCRC;MINIZ_DISABLE_ZIP_READER_CRC32_CHECKS;FLASHATTENTION_DISABLE_ALIBI;WIN32_LEAN_AND_MEAN;_UCRT_LEGACY_INFINITY;NOMINMAX;USE_MIMALLOC
-- CMAKE_PREFIX_PATH : C:/Program Files/NVIDIA GPU Computing Toolkit/CUDA/v12.6;C:/Program Files/NVIDIA GPU Computing Toolkit/CUDA/v12.6;C:/Program Files/NVIDIA GPU Computing Toolkit/CUDA/v12.6
-- CMAKE_INSTALL_PREFIX : C:/Users/User/Desktop/pytorch_compile/pytorch/torch
-- USE_GOLD_LINKER : OFF
--
-- TORCH_VERSION : 2.7.0
-- BUILD_STATIC_RUNTIME_BENCHMARK: OFF
-- BUILD_BINARY : OFF
-- BUILD_CUSTOM_PROTOBUF : ON
-- Link local protobuf : ON
-- BUILD_PYTHON : ON
-- Python version : 3.10.16
-- Python executable : C:/Users/User/anaconda3/envs/py310/python.exe
-- Python library : C:/Users/User/anaconda3/envs/py310/libs/python310.lib
-- Python includes : C:/Users/User/anaconda3/envs/py310/include
-- Python site-package : C:\Users\User\anaconda3\envs\py310\Lib\site-packages
-- BUILD_SHARED_LIBS : ON
-- CAFFE2_USE_MSVC_STATIC_RUNTIME : OFF
-- BUILD_TEST : OFF
-- BUILD_JNI : OFF
-- BUILD_MOBILE_AUTOGRAD : OFF
-- BUILD_LITE_INTERPRETER: OFF
-- INTERN_BUILD_MOBILE :
-- TRACING_BASED : OFF
-- USE_BLAS : 1
-- BLAS : mkl
-- BLAS_HAS_SBGEMM :
-- USE_LAPACK : 1
-- LAPACK : mkl
-- USE_ASAN : OFF
-- USE_TSAN : OFF
-- USE_CPP_CODE_COVERAGE : OFF
-- USE_CUDA : ON
-- Split CUDA :
-- CUDA static link : OFF
-- USE_CUDNN : ON
-- USE_CUSPARSELT : ON
-- USE_CUDSS : ON
-- USE_CUFILE : OFF
-- CUDA version : 12.6
-- USE_FLASH_ATTENTION : OFF
-- USE_MEM_EFF_ATTENTION : ON
-- cuDNN version : 9.6.0
-- cuSPARSELt version : 0.6.3
-- CUDA root directory : C:/Program Files/NVIDIA GPU Computing Toolkit/CUDA/v12.6
-- CUDA library : C:/Program Files/NVIDIA GPU Computing Toolkit/CUDA/v12.6/lib/x64/cuda.lib
-- cudart library : C:/Program Files/NVIDIA GPU Computing Toolkit/CUDA/v12.6/lib/x64/cudart.lib
-- cublas library : C:/Program Files/NVIDIA GPU Computing Toolkit/CUDA/v12.6/lib/x64/cublas.lib
-- cufft library : C:/Program Files/NVIDIA GPU Computing Toolkit/CUDA/v12.6/lib/x64/cufft.lib
-- curand library : C:/Program Files/NVIDIA GPU Computing Toolkit/CUDA/v12.6/lib/x64/curand.lib
-- cusparse library : C:/Program Files/NVIDIA GPU Computing Toolkit/CUDA/v12.6/lib/x64/cusparse.lib
-- cuDNN library : C:/Program Files/NVIDIA/CUDNN/v9.6/lib/12.6/x64/cudnn.lib
-- cuSPARSELt library : C:/Program Files/NVIDIA cuSPARSELt/v0.6/lib/cusparseLt.lib
-- cuDSS library : C:/Program Files/NVIDIA cuDSS/v0.4/lib/12/cudss.lib
-- nvrtc : C:/Program Files/NVIDIA GPU Computing Toolkit/CUDA/v12.6/lib/x64/nvrtc.lib
-- CUDA include path : C:/Program Files/NVIDIA GPU Computing Toolkit/CUDA/v12.6/include
-- NVCC executable : C:/Program Files/NVIDIA GPU Computing Toolkit/CUDA/v12.6/bin/nvcc.exe
-- CUDA compiler : C:/Program Files/NVIDIA GPU Computing Toolkit/CUDA/v12.6/bin/nvcc.exe
-- CUDA flags : -DLIBCUDACXX_ENABLE_SIMPLIFIED_COMPLEX_OPERATIONS -Xcompiler /Zc:__cplusplus -Xcompiler /w -w -Xcompiler /FS -Xfatbin -compress-all -DONNX_NAMESPACE=onnx_torch --use-local-env -gencode arch=compute_89,code=sm_89 -gencode arch=compute_86,code=sm_86 -Xcudafe --diag_suppress=cc_clobber_ignored,--diag_suppress=field_without_dll_interface,--diag_suppress=base_class_has_different_dll_interface,--diag_suppress=dll_interface_conflict_none_assumed,--diag_suppress=dll_interface_conflict_dllexport_assumed,--diag_suppress=bad_friend_decl --Werror cross-execution-space-call --no-host-device-move-forward --expt-relaxed-constexpr --expt-extended-lambda -Xcompiler=/wd4819,/wd4503,/wd4190,/wd4244,/wd4251,/wd4275,/wd4522 -Wno-deprecated-gpu-targets --expt-extended-lambda -DCUB_WRAPPED_NAMESPACE=at_cuda_detail -DCUDA_HAS_FP16=1 -D__CUDA_NO_HALF_OPERATORS__ -D__CUDA_NO_HALF_CONVERSIONS__ -D__CUDA_NO_HALF2_OPERATORS__ -D__CUDA_NO_BFLOAT16_CONVERSIONS__
-- CUDA host compiler :
-- CUDA --device-c : OFF
-- USE_TENSORRT :
-- USE_XPU : OFF
-- USE_ROCM : OFF
-- BUILD_NVFUSER :
-- USE_EIGEN_FOR_BLAS :
-- USE_FBGEMM : ON
-- USE_FAKELOWP : OFF
-- USE_KINETO : ON
-- USE_GFLAGS : OFF
-- USE_GLOG : OFF
-- USE_LITE_PROTO : OFF
-- USE_PYTORCH_METAL : OFF
-- USE_PYTORCH_METAL_EXPORT : OFF
-- USE_MPS : OFF
-- CAN_COMPILE_METAL :
-- USE_MKL : ON
-- USE_STATIC_MKL : OFF
-- USE_MKLDNN : ON
-- USE_MKLDNN_ACL : OFF
-- USE_MKLDNN_CBLAS : OFF
-- USE_UCC : OFF
-- USE_ITT : ON
-- USE_NCCL : OFF
-- USE_NNPACK : OFF
-- USE_NUMPY : ON
-- USE_OBSERVERS : ON
-- USE_OPENCL : OFF
-- USE_OPENMP : ON
-- USE_MIMALLOC : ON
-- USE_MIMALLOC_ON_MKL : OFF
-- USE_VULKAN : OFF
-- USE_PROF : OFF
-- USE_PYTORCH_QNNPACK : OFF
-- USE_XNNPACK : ON
-- USE_DISTRIBUTED : ON
-- USE_MPI : OFF
-- USE_GLOO : ON
-- USE_GLOO_WITH_OPENSSL : OFF
-- USE_TENSORPIPE : OFF
-- Public Dependencies : caffe2::mkl
-- Private Dependencies : Threads::Threads;pthreadpool;cpuinfo;XNNPACK;microkernels-prod;fbgemm;ittnotify;fp16;caffe2::openmp;gloo;fmt::fmt-header-only;kineto
-- Public CUDA Deps. :
-- Private CUDA Deps. : caffe2::curand;caffe2::cufft;caffe2::cublas;torch::cudnn;torch::cusparselt;gloo_cuda;fmt::fmt-header-only;C:/Program Files (x86)/Intel/oneAPI/mkl/2025.0/lib/mkl_lapack95_lp64.lib;C:/Program Files (x86)/Intel/oneAPI/mkl/2025.0/lib/mkl_intel_lp64_dll.lib;C:/Program Files (x86)/Intel/oneAPI/mkl/2025.0/lib/mkl_intel_thread_dll.lib;C:/Program Files (x86)/Intel/oneAPI/mkl/2025.0/lib/mkl_core_dll.lib;C:/Users/User/anaconda3/envs/py310/Library/lib/libiomp5md.lib;C:/Program Files/NVIDIA GPU Computing Toolkit/CUDA/v12.6/lib/x64/cudart_static.lib;CUDA::cusparse;CUDA::cufft;CUDA::cusolver;torch::magma;ATEN_CUDA_FILES_GEN_LIB
-- USE_COREML_DELEGATE : OFF
-- BUILD_LAZY_TS_BACKEND : ON
-- USE_ROCM_KERNEL_ASSERT : OFF
-- Performing Test HAS_WMISSING_PROTOTYPES
-- Performing Test HAS_WMISSING_PROTOTYPES - Failed
-- Performing Test HAS_WERROR_MISSING_PROTOTYPES
-- Performing Test HAS_WERROR_MISSING_PROTOTYPES - Failed
-- Configuring done (71.9s)
-- Generating done (12.1s)
CMake Warning:
Manually-specified variables were not used by the project:
USE_STATIC_DISPATCH
-- Build files have been written to: C:/Users/User/Desktop/pytorch_compile/pytorch/build
```
### Versions
Not applicable (can't import torch)
| true
|
2,782,232,134
|
[MPSInductor] Implement `check_bounds`
|
malfet
|
closed
|
[
"Merged",
"ciflow/trunk",
"topic: not user facing",
"ciflow/mps",
"module: inductor",
"ciflow/inductor"
] | 3
|
CONTRIBUTOR
|
Although at the moment it returns rather than raises an assert, due to https://github.com/pytorch/pytorch/pull/144632
`pytest test/inductor/test_torchinductor.py -v -k _mps` score is `368
failed, 391 passed, 32 skipped`
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @ColinPeppler @amjames @desertfire @chauhang @aakhundov
| true
|
2,782,226,572
|
[MPS] `torch.mps.synchronize` hangs on error
|
malfet
|
open
|
[
"triaged",
"module: deadlock",
"module: mps"
] | 3
|
CONTRIBUTOR
|
### 🐛 Describe the bug
Consider the following code
```python
import torch
lib=torch.mps._compile_shader("kernel void foo(device float* x) {__builtin_trap();}")
lib.foo(torch.rand(3, device="mps"))
torch.mps.synchronize()
```
It will hang the process, and a few attempts to reproduce it resulted in a system hang.
### Versions
Nightly
cc @kulinseth @albanD @DenisVieriu97 @jhavukainen
| true
|
2,782,217,891
|
144x less efficient CPU usage when training NN past a certain width
|
dustinboswell
|
open
|
[
"module: performance",
"triaged",
"module: arm"
] | 2
|
NONE
|
### 🐛 Describe the bug
The code below is a minimal NN training loop with a fully connected NN of shape (10->width->width->10).
When width is 45, everything is fine: the code takes about 1 second and uses only 1 CPU.
When width is 46, the code takes 9 seconds and uses all 16 CPUs.
So it's 144x less efficient (what are all those cycles doing?)
I'm guessing it switches over to a multi-threaded implementation when multiplying matrices of a certain size. But something doesn't seem right.
I also tried the "mps" backend, which surprisingly has enough overhead that it isn't faster until the network is very wide.
```
import time
import os
import torch
from torch import nn, optim
print(f"{torch.__version__=}, {os.uname()=}")
batch_size = 128
all_inputs = torch.randn((batch_size * 100, 10))
all_targets = all_inputs + 0.01 * torch.randn((batch_size * 100, 10))
for device, omp_num_threads in [("cpu", None), ("cpu", 1), ("mps", 1)]:
if omp_num_threads is not None:
torch.set_num_threads(omp_num_threads)
for width in [32, 45, 46, 64, 128, 256, 512, 1024, 2048, 4096]:
if device == "cpu" and width > 256: break # too slow, don't bother
network = nn.Sequential(nn.Linear(10, width), nn.Linear(width, width), nn.Linear(width, 10)).to(device)
optimizer = optim.Adam(network.parameters(), lr=3e-4)
t_start = time.time()
for epoch in range(50):
for offset in range(0, len(all_inputs), batch_size):
inputs = all_inputs[offset:offset+batch_size].to(device)
targets = all_targets[offset:offset+batch_size].to(device)
optimizer.zero_grad()
((network(inputs) - targets) ** 2).mean().backward()
optimizer.step()
final_loss = ((network(all_inputs.to(device)) - all_targets.to(device)) ** 2).mean()
print(f"{torch.get_num_threads()=}, device={device}, nn_width={width}, final_loss={final_loss:2.5f}, took {time.time() - t_start:2.1f} secs")
```
output on my machine is:
```
torch.__version__='2.3.1.post100', os.uname()=posix.uname_result(sysname='Darwin', nodename='username.macbook.pro.m3.lan', release='23.5.0', version='Darwin Kernel Version 23.5.0: Wed May 1 20:17:33 PDT 2024; root:xnu-10063.121.3~5/RELEASE_ARM64_T6031', machine='arm64')
torch.get_num_threads()=16, device=cpu, nn_width=32, final_loss=0.00010, took 0.8 secs
torch.get_num_threads()=16, device=cpu, nn_width=45, final_loss=0.00010, took 1.0 secs
torch.get_num_threads()=16, device=cpu, nn_width=46, final_loss=0.00010, took 8.8 secs <---- 16 cpus, and 9x slower!
torch.get_num_threads()=16, device=cpu, nn_width=64, final_loss=0.00011, took 6.8 secs
torch.get_num_threads()=16, device=cpu, nn_width=128, final_loss=0.00012, took 19.9 secs
torch.get_num_threads()=16, device=cpu, nn_width=256, final_loss=0.00015, took 65.6 secs
# everything is way faster with just 1 thread (below)
torch.get_num_threads()=1, device=cpu, nn_width=32, final_loss=0.00010, took 0.9 secs
torch.get_num_threads()=1, device=cpu, nn_width=45, final_loss=0.00010, took 1.0 secs
torch.get_num_threads()=1, device=cpu, nn_width=46, final_loss=0.00010, took 2.5 secs <---- 1 cpu, and faster!
torch.get_num_threads()=1, device=cpu, nn_width=64, final_loss=0.00011, took 1.9 secs
torch.get_num_threads()=1, device=cpu, nn_width=128, final_loss=0.00012, took 2.5 secs
torch.get_num_threads()=1, device=cpu, nn_width=256, final_loss=0.00015, took 4.2 secs
# mps has a lot of overhead, but eventually is faster
torch.get_num_threads()=1, device=mps, nn_width=32, final_loss=0.00010, took 8.7 secs
torch.get_num_threads()=1, device=mps, nn_width=45, final_loss=0.00010, took 8.7 secs
torch.get_num_threads()=1, device=mps, nn_width=46, final_loss=0.00010, took 8.9 secs
torch.get_num_threads()=1, device=mps, nn_width=64, final_loss=0.00011, took 8.4 secs
torch.get_num_threads()=1, device=mps, nn_width=128, final_loss=0.00012, took 11.8 secs
torch.get_num_threads()=1, device=mps, nn_width=256, final_loss=0.00015, took 8.4 secs
torch.get_num_threads()=1, device=mps, nn_width=512, final_loss=0.00019, took 11.2 secs
torch.get_num_threads()=1, device=mps, nn_width=1024, final_loss=0.00027, took 9.2 secs
torch.get_num_threads()=1, device=mps, nn_width=2048, final_loss=0.00033, took 10.0 secs
torch.get_num_threads()=1, device=mps, nn_width=4096, final_loss=0.00032, took 27.9 secs. <-- quadratic runtime starts here as expected
```
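A minimal follow-up probe (assumed micro-benchmark, not part of the measurements above) that times just the matmul chain at the widths around the cliff, to check whether the threaded GEMM dispatch alone explains the jump from width 45 to 46:
```python
import time
import torch

# Assumed probe: time the (128 x 10) @ (10 x w) @ (w x w) @ (w x 10) chain used by
# the network above, at widths on both sides of the observed threshold.
x = torch.randn(128, 10)
for width in (44, 45, 46, 47, 64):
    w1 = torch.randn(10, width)
    w2 = torch.randn(width, width)
    w3 = torch.randn(width, 10)
    t0 = time.time()
    for _ in range(5000):
        ((x @ w1) @ w2) @ w3
    print(f"width={width} threads={torch.get_num_threads()} took {time.time() - t0:.2f}s")
```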
### Versions
```
Collecting environment information...
PyTorch version: 2.3.1.post100
Is debug build: False
CUDA used to build PyTorch: None
ROCM used to build PyTorch: N/A
OS: macOS 14.5 (arm64)
GCC version: Could not collect
Clang version: 15.0.0 (clang-1500.3.9.4)
CMake version: Could not collect
Libc version: N/A
Python version: 3.11.0 | packaged by conda-forge | (main, Jan 14 2023, 12:26:40) [Clang 14.0.6 ] (64-bit runtime)
Python platform: macOS-14.5-arm64-arm-64bit
Is CUDA available: False
CUDA runtime version: No CUDA
CUDA_MODULE_LOADING set to: N/A
GPU models and configuration: No CUDA
Nvidia driver version: No CUDA
cuDNN version: No CUDA
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Apple M3 Max
Versions of relevant libraries:
[pip3] numpy==1.26.4
[pip3] torch==2.3.1.post100
[conda] numpy 1.26.4 py311he598dae_0
[conda] numpy-base 1.26.4 py311hfbfe69c_0
[conda] pytorch 2.3.1 gpu_mps_py311h7b7e308_100
```
cc @msaroufim @malfet @snadampal @milpuz01
| true
|
2,782,208,242
|
[MPSInductor] Properly generate index expressions
|
malfet
|
closed
|
[
"Merged",
"topic: not user facing",
"ciflow/mps",
"module: inductor",
"ciflow/inductor"
] | 3
|
CONTRIBUTOR
|
Now test_slice_scatter4_mps passes.
Before this change, test_torchinductor.py reported 422 failed and 337 passed; after this change, 412 failed and 347 passed.
Fixes https://github.com/pytorch/pytorch/issues/144630
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @ColinPeppler @amjames @desertfire @chauhang @aakhundov
| true
|
2,782,173,571
|
Micro-optimization in Graph.nodes.__iter__
|
jansel
|
closed
|
[
"Merged",
"ciflow/trunk",
"release notes: fx",
"fx",
"ciflow/inductor"
] | 3
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #144631
This generates slightly better code (removing a generator frame) and
drops a redundant assert.
```py
>>> import timeit
>>> def a():
... yield from range(3)
...
>>> def b():
... return range(3)
...
>>> timeit.timeit(lambda: [*a()])
0.2714634328149259
>>> timeit.timeit(lambda: [*b()])
0.12076826114207506
>>>
```
cc @ezyang @SherlockNoMad @EikanWang @jgong5 @wenzhe-nrv
| true
|
2,782,090,892
|
[mps/inductor] Adjust lowering to not emit comments
|
dcci
|
closed
|
[
"module: mps",
"oncall: pt2"
] | 7
|
MEMBER
|
### 🐛 Describe the bug
**Edit**
The problem wasn't VLAs; rather, it was the fact that `//` is a comment in Metal but an operator in Python.
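For context, a minimal illustration of the `//` clash (assumed example, not taken from the actual lowering):
```python
# In Python, "//" is the floor-division operator:
assert 7 // 2 == 3

# In the Metal shading language (C++-like), "//" starts a line comment, so emitting a
# Python index expression such as "x0 // 4" verbatim would silently comment out the
# rest of the generated line. The codegen instead has to translate floor division into
# an explicit division, e.g. something along the lines of "x0 / 4" for non-negative
# integer indices.
```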
**Original report**
There are currently a fair number of inductor tests failing because the lowering emits VLAs, which aren't supported by the Metal shading language (according to https://developer.apple.com/metal/Metal-Shading-Language-Specification.pdf -- only fixed-size arrays are supported).
We will need to massage the codegen a little bit to not emit VLAs, or find some alternative solution. Filing this issue so that I don't forget.
cc: @malfet
cc @kulinseth @albanD @malfet @DenisVieriu97 @jhavukainen @chauhang @penguinwu
| true
|
2,782,086,391
|
[mps/inductor] Add support for trunc().
|
dcci
|
closed
|
[
"Merged",
"ciflow/trunk",
"topic: not user facing",
"module: mps",
"ciflow/mps",
"module: inductor",
"ciflow/inductor"
] | 6
|
MEMBER
|
inductor/test_div1 passes after this change.
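A quick way to exercise the new lowering (assumed smoke test, not the actual test case):
```python
import torch

# Assumed smoke test: run torch.trunc through torch.compile on the "mps" device,
# the kind of op this change adds a Metal lowering for.
fn = torch.compile(lambda x: torch.trunc(x / 3))
x = torch.randn(64, device="mps")
torch.testing.assert_close(fn(x), torch.trunc(x / 3))
```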
cc @kulinseth @albanD @malfet @DenisVieriu97 @jhavukainen @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @ColinPeppler @amjames @desertfire @chauhang @aakhundov
| true
|
2,782,074,987
|
Patch Weibull.mode
|
j-wilson
|
closed
|
[
"module: distributions",
"triaged",
"open source",
"Stale"
] | 4
|
CONTRIBUTOR
|
This PR fixes the Weibull distribution's `mode` property, which is currently incorrect for `concentration < 1`.
Before:
```
dist = Weibull(scale=torch.ones(4), concentration=torch.tensor([0.5, 0.75, 1.0, 1.25]))
dist.mode
> tensor([1.0000, nan, 0.0000, 0.2759])
```
After:
```
dist = Weibull(scale=torch.ones(4), concentration=torch.tensor([0.5, 0.75, 1.0, 1.25]))
dist.mode
> tensor([0.0000, 0.0000, 0.0000, 0.2759])
```
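For reference, the closed form behind the fix: the Weibull mode is `scale * ((k - 1) / k) ** (1 / k)` for concentration `k > 1` and `0` otherwise. A minimal sketch of one way to compute it (an assumption, not necessarily the exact patch in this PR):
```python
import torch

# Clamping (k - 1) at zero yields 0 for k <= 1 and the standard closed form for k > 1,
# avoiding the NaN that a negative base raised to a fractional power produces.
def weibull_mode(scale: torch.Tensor, concentration: torch.Tensor) -> torch.Tensor:
    k = concentration
    return scale * ((k - 1).clamp(min=0) / k) ** (1 / k)

scale = torch.ones(4)
k = torch.tensor([0.5, 0.75, 1.0, 1.25])
print(weibull_mode(scale, k))  # tensor([0.0000, 0.0000, 0.0000, 0.2759])
```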
cc @fritzo @neerajprad @alicanb @nikitaved
| true
|
2,782,054,624
|
remove allow-untyped-defs from torch/distributed/_checkpointable.py
|
bobrenjc93
|
closed
|
[
"oncall: distributed",
"Merged",
"ciflow/trunk",
"topic: not user facing"
] | 4
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #144627
cc @H-Huang @awgu @kwen2501 @wanchaol @fegin @fduwjj @wz337 @wconstab @d4l3k @c-p-i-o
| true
|