| id (int64) | title (string) | user (string) | state (string) | labels (list) | comments (int64) | author_association (string) | body (string) | is_title (bool) |
|---|---|---|---|---|---|---|---|---|
3,040,294,136
|
[precompile] [easy] Refactor FxGraphCache to add cache_hit_post_compile function
|
jamesjwu
|
closed
|
[
"Merged",
"ciflow/trunk",
"topic: not user facing",
"module: inductor",
"ciflow/inductor"
] | 3
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #152840
* __->__ #152839
* #152836
This PR refactors CompiledFxGraph by adding a new post_compile step that only runs on a cache hit. It moves a bunch of code from _lookup_graph into its own function so that we can use it in BundledAOTAutogradCacheEntry. No difference in behavior here.
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
| true
|
3,040,292,658
|
[ROCm] Fix SymmetricMemory build error on NAVI arch
|
pragupta
|
closed
|
[
"oncall: distributed",
"module: rocm",
"open source",
"Merged",
"ciflow/trunk",
"release notes: distributed (c10d)",
"ciflow/periodic",
"ciflow/rocm",
"ciflow/periodic-rocm-mi300"
] | 6
|
CONTRIBUTOR
|
The NAVI arch doesn't support `__builtin_amdgcn_s_memtime()`, so use `clock64()` instead, which works for both NAVI and MI archs.
Fixes #ISSUE_NUMBER
cc @H-Huang @awgu @wanchaol @fegin @fduwjj @wz337 @wconstab @d4l3k @jeffdaily @sunway513 @jithunnair-amd @pruthvistony @ROCmSupport @dllehr-amd @jataylo @hongxiayang @naromero77amd
| true
|
3,040,279,361
|
[nativert] Move MPMCQueue to torch/nativert.
|
zhxchen17
|
closed
|
[
"fb-exported",
"Merged",
"ciflow/trunk",
"topic: not user facing"
] | 21
|
CONTRIBUTOR
|
Summary:
Torch Native Runtime RFC: https://github.com/zhxchen17/rfcs/blob/master/RFC-0043-torch-native-runtime.md
To land the runtime into PyTorch core, we will gradually land logical parts of the code into GitHub and get each piece properly reviewed.
This diff adds a small library implementing a multi-producer multi-consumer queue which will be used to synchronize tasks for Torch Native Runtime.
Differential Revision: D74184245
| true
|
3,040,243,576
|
[precompile] Refactor AOTAutogradCacheEntry to be generic
|
jamesjwu
|
closed
|
[
"Merged",
"ciflow/trunk",
"topic: not user facing",
"module: inductor",
"ciflow/inductor"
] | 3
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #152840
* #152839
* __->__ #152836
The purpose of this stack is to create a new BundledAOTAutogradCacheEntry, which is an AOTAutogradCacheEntry that is self contained, i.e. it contains all of the CompiledFxGraph directly in the entry, instead of relying on FxGraphCache._lookup_graph.
Because this would balloon the size of the actual cache entry, our goal is not to use BundledAOTAutogradCacheEntry in cache scenarios: only for precompile use cases. Thus, it's important we make this whole setup generic, to be able to support these two workflows clearly.
This PR genericizes AOTAutogradCacheEntry considerably, so that it can take in different types of Forwards and Backwards.
Each GenericAOTAutogradCacheEntry is composed of two parts, a TForward and a TBackward. The forward and backward can be loaded in multiple ways, either via FxGraphCache._lookup_graph, or by saving the entire CompiledFxGraph.
For simplicity, this PR only implements the generic code refactors needed, but does not fully implement BundledAOTAutogradCacheEntry, which is an AOTAutogradCacheEntry that takes a full CompiledForward. We'll handle and implement BundledAOTAutogradCacheEntry in the PR above this, for easier review.
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
| true
|
3,040,225,776
|
[DRAFT] Test nccl
|
atalman
|
open
|
[
"ciflow/binaries"
] | 2
|
CONTRIBUTOR
|
Fixes #ISSUE_NUMBER
| true
|
3,040,174,808
|
[c10d] Fix extra CUDA context created by barrier
|
kwen2501
|
open
|
[
"oncall: distributed",
"release notes: distributed (c10d)"
] | 1
|
CONTRIBUTOR
|
Fixes #149119.
In ProcessGroup.hpp, we create a dummy tensor for dispatching. This requires a correct device index. This PR uses `device_id` given by user when calling `init_process_group`.
This PR also uses `torch._C._get_accelerator()` to determine the device type.
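As an illustration, a minimal sketch of passing `device_id` at initialization (assumes a recent PyTorch and that the usual rendezvous environment variables are already set, e.g. via torchrun; the local rank value is illustrative):
```python
import torch
import torch.distributed as dist

local_rank = 0  # normally read from the LOCAL_RANK environment variable

# Binding the process group to an explicit device lets c10d pick the right
# device index for its internal dummy tensor instead of defaulting to cuda:0.
dist.init_process_group(
    backend="nccl",
    device_id=torch.device(f"cuda:{local_rank}"),
)
dist.barrier()  # should not create an extra CUDA context on device 0
dist.destroy_process_group()
```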
ghstack-source-id: 96c32b9565794d995c26bd1794856d1ef7961652
Pull Request resolved: https://github.com/pytorch/pytorch/pull/149144
cc @H-Huang @awgu @wanchaol @fegin @fduwjj @wz337 @wconstab @d4l3k
| true
|
3,040,167,991
|
Document that dampening is skipped in SGD momentum first step
|
janeyx99
|
closed
|
[
"Merged",
"ciflow/trunk",
"topic: docs",
"release notes: optim"
] | 3
|
CONTRIBUTOR
|
Pointed out by https://x.com/hi_tysam/status/1917318692276174977/photo/2.
It would be BC breaking to change this behavior 7 years after it has been decided, so we are documenting it first at the very least.
<img width="642" alt="image" src="https://github.com/user-attachments/assets/3febcb07-e0ed-44a1-bd3b-a8e685711cb4" />
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #152833
| true
|
3,040,134,501
|
Allow to set custom PYTHONPATH for torch.inductor
|
gdippolito
|
open
|
[
"triaged",
"open source",
"oncall: pt2",
"module: inductor",
"release notes: inductor"
] | 4
|
NONE
|
When using Bazel, it’s common to encounter issues like [this](https://github.com/bazelbuild/bazel/issues/14640) and [this](https://github.com/bazel-contrib/rules_python/issues/792) where the `PYTHONPATH` environment variable becomes too long and results in an error such as `OSError: [Errno 7] Argument list too long`. To work around this, users often resort to custom logic to manipulate PYTHONPATH.
Currently, PyTorch Inductor constructs the PYTHONPATH for a subprocess using sys.path, which can lead to this issue in certain environments.
This PR introduces support for a new environment variable, `TORCH_CUSTOM_PYTHONPATH`, allowing users to override the default `PYTHONPATH` passed to the subprocess. This provides a clean way to avoid an exception when using PyTorch in Bazel.
Please let me know if I need to add some documentation to support this PR. I haven't found an open issue specific to this change, but I'm confident that this change (or a similar one) would be appreciated by a few.
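For illustration, a hypothetical usage sketch of the variable proposed here (the variable is introduced by this PR, and the path is made up):
```python
import os

# Point the Inductor compile subprocess at a short, curated path instead of
# letting it inherit a very long sys.path-derived PYTHONPATH.
os.environ["TORCH_CUSTOM_PYTHONPATH"] = "/opt/app/site-packages"
```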
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
| true
|
3,040,131,353
|
[pytorch][PR][inductor] Fix one instance of launch_enter_hook
|
devashishshankar
|
closed
|
[
"fb-exported",
"Merged",
"ciflow/trunk",
"topic: not user facing",
"module: inductor",
"ciflow/inductor"
] | 5
|
CONTRIBUTOR
|
Summary: One usage seems to have been missed in https://github.com/pytorch/pytorch/pull/152457
Test Plan: EMS local benchmark
Differential Revision: D74159749
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
| true
|
3,040,101,312
|
[BE]: Improve aten formatter with fmtlib
|
Skylion007
|
open
|
[
"open source"
] | 2
|
COLLABORATOR
|
Fixes #ISSUE_NUMBER
| true
|
3,040,027,504
|
Don't hardcoded support for DTensor to_local/from_local/redistribute into dynamo
|
bdhirsh
|
open
|
[
"oncall: distributed",
"triaged",
"oncall: pt2",
"module: dynamo"
] | 0
|
CONTRIBUTOR
|
There has been a long-standing hack in dynamo around support for DTensor - there are a few primitive functions (listed above) that accept opaque python types (`DTensorSpec/Placement/DeviceMesh`) and therefore cannot go in the dynamo graph, that have hardcoded support in dynamo.
This is bad for several reasons:
(1) it is brittle (these functions aren't supported in all cases - recent internal example where `.to_local()` on a model causes extra graph breaks / recompiles)
(2) it is an invariant violation (dynamo shouldn't really need to know anything about DTensor)
(3) it prevents @jamesjwu 's AOTDispatcher warm cache from kicking in (the hacks we use to handle these functions in dynamo are not easily pickleable by FX and we therefore cache miss on them). This will be even more critical if we want any sort of pre-compilation to work with distributed.
Now that we have a `flat_apply` HOP that can support non-tensor/symint primitives (thanks @StrongerXi and @zou3519), it should be possible to have dynamo support these functions more generically:
(1) these functions all desugar into a custom `autograd.Function`, which we support in dynamo
(2) the autograd.Function here accepts custom python types, which we can handle through the `flat_apply` HOP.
One difference that needs some figuring out, though, is that this flat_apply should "disappear" as part of AOTDispatcher tracing, since the DTensor subclass will desugar these arguments. We need to make sure this works properly.
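For concreteness, a minimal sketch of the call pattern under discussion (assumes PyTorch 2.5+ with an initialized process group; the function body is illustrative):
```python
import torch
from torch.distributed.device_mesh import init_device_mesh
from torch.distributed.tensor import Shard, distribute_tensor

mesh = init_device_mesh("cuda", (torch.cuda.device_count(),))
dtensor = distribute_tensor(torch.randn(8, 8), mesh, [Shard(0)])

@torch.compile
def f(x):
    # to_local/from_local/redistribute involve opaque python types
    # (Placement/DeviceMesh), which is why dynamo special-cases them today.
    return x.to_local() + 1

out = f(dtensor)
```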
cc @H-Huang @awgu @wanchaol @fegin @fduwjj @wz337 @wconstab @d4l3k @chauhang @penguinwu @voznesenskym @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @amjames
| true
|
3,040,016,632
|
[MSVC] Enable updated lambda processor by setting compiler flag /Zc:lambda globally
|
taras-janea
|
open
|
[
"module: build",
"module: windows",
"module: cpu",
"open source",
"topic: not user facing",
"skip-url-lint"
] | 1
|
COLLABORATOR
|
Fixes:
- https://github.com/pytorch/pytorch/issues/92600
[Enable updated lambda processor](https://learn.microsoft.com/en-us/cpp/build/reference/zc-lambda?view=msvc-170) by setting compiler flag `/Zc:lambda` globally.
cc @malfet @seemethere @peterjc123 @mszhanyi @skyline75489 @nbcsm @iremyux @Blackhex @jgong5 @mingfeima @XiaobingSuper @sanchitintel @ashokei @jingxu10 @jerryzh168
| true
|
3,039,923,454
|
Pipeline Parallelism Fails when stage input does not produce gradients in all stages.
|
man2machine
|
open
|
[
"oncall: distributed"
] | 0
|
NONE
|
### 🐛 Describe the bug
TLDR: Pipeline parallelism fails if stage input does not have gradients produced
Consider the case where an output from each pipeline stage is passed to the next stage, but whether or not that output is used for a particular batch is conditional (based on the code of the model). Hence, in many cases (such as conditional or mixture models), these weights may not be used for a particular stage, resulting in an error from `get_bwd_send_ops`: `"[{self.stage_index}] for chunk {bwd_chunk_id} has gradients {grad} and is expecting to send gradients to stage {grad_recv_stage}"`.
As a result, such a model fails when using pipeline parallelism, although with FSDP (or no parallelism) it has no issues. This happens because, for each input tensor to a stage, the error is raised whenever no gradient is produced for that tensor, even if the tensor would otherwise be passed on to a subsequent stage that would produce gradients.
Currently `dist.isend` is used to send the tensor, but in order to send None, a different asynchronous P2P communication operation is needed, one that can asynchronously send or recv objects (which may or may not be tensors).
It would be great if this could be implemented, as pipeline parallelism is critical to achieving high throughput in distributed execution, and conditional or mixture models are limited due to this bug.
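A minimal illustrative sketch of the failure pattern (not the reporter's model): a stage whose second input is only sometimes used, so that input may receive no gradient in backward and `get_bwd_send_ops` has nothing to send.
```python
import torch
import torch.nn as nn

class ConditionalStage(nn.Module):
    def __init__(self):
        super().__init__()
        self.proj = nn.Linear(16, 16)

    def forward(self, x, skip):
        out = self.proj(x)
        # Whether `skip` contributes to the output depends on the data; when it
        # does not, no gradient flows back to it, so there is nothing to send
        # to the previous stage for that input.
        if skip.abs().mean() > 1.0:
            out = out + skip
        return out
```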
### Versions
PyTorch 2.6+
cc @H-Huang @awgu @wanchaol @fegin @fduwjj @wz337 @wconstab @d4l3k
| true
|
3,039,919,509
|
Only do shallow clone when checkout nccl
|
YouJiacheng
|
closed
|
[
"open source",
"Merged",
"ciflow/trunk",
"topic: not user facing"
] | 9
|
CONTRIBUTOR
|
Note: `--depth` implies `--single-branch` since git 2.7.6
```sh
git clone https://github.com/NVIDIA/nccl.git
Cloning into 'nccl'...
remote: Enumerating objects: 4205, done.
remote: Counting objects: 100% (238/238), done.
remote: Compressing objects: 100% (122/122), done.
remote: Total 4205 (delta 144), reused 126 (delta 116), pack-reused 3967 (from 3)
Receiving objects: 100% (4205/4205), 4.22 MiB | 7.01 MiB/s, done.
Resolving deltas: 100% (2858/2858), done.
```
```sh
git clone --depth 1 --branch v2.25.1-1 https://github.com/NVIDIA/nccl.git
Cloning into 'nccl'...
remote: Enumerating objects: 249, done.
remote: Counting objects: 100% (249/249), done.
remote: Compressing objects: 100% (227/227), done.
remote: Total 249 (delta 31), reused 111 (delta 15), pack-reused 0 (from 0)
Receiving objects: 100% (249/249), 657.44 KiB | 2.14 MiB/s, done.
Resolving deltas: 100% (31/31), done.
Note: switching to '80f6bda4378b99d99e82b4d76a633791cc45fef0'.
```
| true
|
3,039,882,069
|
Use gcc13 in Manylinux 2.28 images
|
atalman
|
open
|
[
"ciflow/binaries",
"topic: not user facing"
] | 5
|
CONTRIBUTOR
|
Related to: https://github.com/pytorch/pytorch/issues/152426
| true
|
3,039,706,050
|
`mypy` stage of `lintrunner -a` has intermittent but continuing crashes
|
rec
|
open
|
[
"module: crash",
"module: lint",
"triaged",
"module: flaky-tests",
"bug"
] | 1
|
COLLABORATOR
|
### 🐛 Describe the bug
Sometimes (5-10% of the time?) when I run `lintrunner init && lintrunner -a` I get a Python traceback in the second step (listed below). Almost always this does not happen again when I rerun the command.
I've been sort of ignoring it for a long time but figured I should finally report it!
There's a similar but different traceback I get occasionally, also from mypy, when (my guess is that) `ruff` modifies a file it is linting; but in the case I am reporting here, `ruff` has not rewritten any files.
The error message notes that we are not running a release version, or I'd report this right to `mypy`, but perhaps I should anyway?
Thanks in advance! 🙂
```
Error (MYPY) command-failed
torch/_inductor/ir.py:7649: error: INTERNAL ERROR -- Please try using mypy
master on GitHub:
https://mypy.readthedocs.io/en/stable/common_issues.html#using-a-
development-mypy-build
Please report a bug at https://github.com/python/mypy/issues
version: 1.14.0
torch/_inductor/ir.py:7649: error: INTERNAL ERROR -- Please try using mypy
master on GitHub:
https://mypy.readthedocs.io/en/stable/common_issues.html#using-a-
development-mypy-build
Please report a bug at https://github.com/python/mypy/issues
version: 1.14.0
torch/_inductor/ir.py:7649: error: INTERNAL ERROR -- Please try using mypy
master on GitHub:
https://mypy.readthedocs.io/en/stable/common_issues.html#using-a-
development-mypy-build
Please report a bug at https://github.com/python/mypy/issues
version: 1.14.0
torch/_inductor/ir.py:7643: error: INTERNAL ERROR -- Please try using mypy
master on GitHub:
https://mypy.readthedocs.io/en/stable/common_issues.html#using-a-
development-mypy-build
Please report a bug at https://github.com/python/mypy/issues
version: 1.14.0
torch/_inductor/ir.py:7641: error: INTERNAL ERROR -- Please try using mypy
master on GitHub:
https://mypy.readthedocs.io/en/stable/common_issues.html#using-a-
development-mypy-build
Please report a bug at https://github.com/python/mypy/issues
version: 1.14.0
Daemon crashed!
Traceback (most recent call last):
File "mypy/dmypy_server.py", line 236, in serve
File "mypy/dmypy_server.py", line 285, in run_command
File "mypy/dmypy_server.py", line 353, in cmd_run
File "mypy/dmypy_server.py", line 432, in check
File "mypy/dmypy_server.py", line 700, in
fine_grained_increment_follow_imports
File "mypy/server/update.py", line 285, in update
File "mypy/errors.py", line 1288, in report_internal_error
File "mypy/checker.py", line 592, in accept
File "mypy/nodes.py", line 827, in accept
File "mypy/checker.py", line 1046, in visit_func_def
File "mypy/checker.py", line 1050, in _visit_func_def
File "mypy/checker.py", line 1084, in check_func_item
File "mypy/checker.py", line 1360, in check_func_def
File "mypy/checker.py", line 594, in accept
File "mypy/errors.py", line 1288, in report_internal_error
File "mypy/checker.py", line 592, in accept
File "mypy/nodes.py", line 1277, in accept
File "mypy/checker.py", line 2952, in visit_block
File "mypy/checker.py", line 594, in accept
File "mypy/errors.py", line 1288, in report_internal_error
File "mypy/checker.py", line 592, in accept
File "mypy/nodes.py", line 1364, in accept
File "mypy/checker.py", line 3001, in visit_assignment_stmt
File "mypy/checker.py", line 3218, in check_assignment
File "mypy/checkexpr.py", line 5892, in accept
File "mypy/errors.py", line 1288, in report_internal_error
File "mypy/checkexpr.py", line 5890, in accept
File "mypy/nodes.py", line 2415, in accept
File "mypy/checkexpr.py", line 5627, in visit_list_comprehension
File "mypy/checkexpr.py", line 5688, in check_generator_or_comprehension
File "mypy/checkexpr.py", line 1571, in check_call
File "mypy/checkexpr.py", line 1785, in check_callable_call
File "mypy/checkexpr.py", line 1959, in infer_arg_types_in_context
File "mypy/checkexpr.py", line 5892, in accept
File "mypy/errors.py", line 1288, in report_internal_error
File "mypy/checkexpr.py", line 5890, in accept
File "mypy/nodes.py", line 1984, in accept
File "mypy/checkexpr.py", line 484, in visit_call_expr
File "mypy/checkexpr.py", line 618, in visit_call_expr_inner
File "mypy/checkexpr.py", line 1475, in check_call_expr_with_callee_type
File "mypy/checkexpr.py", line 1571, in check_call
File "mypy/checkexpr.py", line 1798, in check_callable_call
File "mypy/checkexpr.py", line 2593, in check_argument_types
File "mypy/checkexpr.py", line 2630, in check_arg
File "mypy/messages.py", line 838, in incompatible_argument
File "mypy/messages.py", line 289, in fail
File "mypy/messages.py", line 264, in report
File "mypy/errors.py", line 463, in report
File "mypy/errors.py", line 529, in add_error_info
File "mypy/errors.py", line 466, in _add_error_info
AssertionError
```
### Versions
/usr/local/cuda-12.3.2/targets/x86_64-linux/lib/libcudnn.so.9
/usr/local/cuda-12.3.2/targets/x86_64-linux/lib/libcudnn_adv.so.9
/usr/local/cuda-12.3.2/targets/x86_64-linux/lib/libcudnn_cnn.so.9
/usr/local/cuda-12.3.2/targets/x86_64-linux/lib/libcudnn_engines_precompiled.so.9
/usr/local/cuda-12.3.2/targets/x86_64-linux/lib/libcudnn_engines_runtime_compiled.so.9
/usr/local/cuda-12.3.2/targets/x86_64-linux/lib/libcudnn_graph.so.9
/usr/local/cuda-12.3.2/targets/x86_64-linux/lib/libcudnn_heuristic.so.9
/usr/local/cuda-12.3.2/targets/x86_64-linux/lib/libcudnn_ops.so.9
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: N/A
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 43 bits physical, 48 bits virtual
Byte Order: Little Endian
CPU(s): 64
On-line CPU(s) list: 0-63
Vendor ID: AuthenticAMD
Model name: AMD Ryzen Threadripper 3970X 32-Core Processor
CPU family: 23
Model: 49
Thread(s) per core: 2
Core(s) per socket: 32
Socket(s): 1
Stepping: 0
Frequency boost: enabled
CPU(s) scaling MHz: 55%
CPU max MHz: 4549.1211
CPU min MHz: 2200.0000
BogoMIPS: 7400.32
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ht syscall nx mmxext fxsr_opt pdpe1gb rdtscp lm constant_tsc rep_good nopl nonstop_tsc cpuid extd_apicid aperfmperf rapl pni pclmulqdq monitor ssse3 fma cx16 sse4_1 sse4_2 movbe popcnt aes xsave avx f16c rdrand lahf_lm cmp_legacy svm extapic cr8_legacy abm sse4a misalignsse 3dnowprefetch osvw ibs skinit wdt tce topoext perfctr_core perfctr_nb bpext perfctr_llc mwaitx cpb cat_l3 cdp_l3 hw_pstate ssbd mba ibpb stibp vmmcall fsgsbase bmi1 avx2 smep bmi2 cqm rdt_a rdseed adx smap clflushopt clwb sha_ni xsaveopt xsavec xgetbv1 xsaves cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local clzero irperf xsaveerptr rdpru wbnoinvd amd_ppin arat npt lbrv svm_lock nrip_save tsc_scale vmcb_clean flushbyasid decodeassists pausefilter pfthreshold avic v_vmsave_vmload vgif v_spec_ctrl umip rdpid overflow_recov succor smca sev sev_es
Virtualization: AMD-V
L1d cache: 1 MiB (32 instances)
L1i cache: 1 MiB (32 instances)
L2 cache: 16 MiB (32 instances)
L3 cache: 128 MiB (8 instances)
NUMA node(s):
Versions of relevant libraries:
[pip3] flake8==6.1.0
[pip3] flake8-bugbear==23.3.23
[pip3] flake8-comprehensions==3.15.0
[pip3] flake8-executable==2.1.3
[pip3] flake8-logging-format==0.9.0
[pip3] flake8-pyi==23.3.1
[pip3] flake8-simplify==0.19.3
[pip3] mypy==1.14.0
[pip3] mypy-extensions==1.0.0
[pip3] numpy==1.26.4
[pip3] optree==0.13.0
[pip3] pytorch-triton==3.3.0+git96316ce5
[pip3] torch==2.8.0a0+gita6c8246
[conda] cuda-cudart 12.6.77 h5888daf_0 conda-forge
[conda] cuda-cudart-dev 12.6.77 h5888daf_0 conda-forge
[conda] cuda-cudart-dev_linux-64 12.6.77 h3f2d84a_0 conda-forge
[conda] cuda-cudart-static 12.6.77 h5888daf_0 conda-forge
[conda] cuda-cudart-static_linux-64 12.6.77 h3f2d84a_0 conda-forge
[conda] cuda-cudart_linux-64 12.6.77 h3f2d84a_0 conda-forge
[conda] cuda-cupti 12.6.80 hbd13f7d_0 conda-forge
[conda] cuda-cupti-dev 12.6.80 h5888daf_0 conda-forge
[conda] cuda-libraries-dev 12.6.3 ha770c72_0 conda-forge
[conda] cuda-nvrtc 12.6.85 hbd13f7d_0 conda-forge
[conda] cuda-nvrtc-dev 12.6.85 h5888daf_0 conda-forge
[conda] cuda-nvtx 12.6.77 hbd13f7d_0 conda-forge
[conda] cuda-nvtx-dev 12.6.77 ha770c72_0 conda-forge
[conda] cuda-opencl 12.6.77 hbd13f7d_0 conda-forge
[conda] cuda-opencl-dev 12.6.77 h5888daf_0 conda-forge
[conda] cudnn 9.8.0.87 h81d5506_0 conda-forge
[conda] libcublas 12.6.4.1 h5888daf_1 conda-forge
[conda] libcublas-dev 12.6.4.1 h5888daf_1 conda-forge
[conda] libcufft 11.3.0.4 hbd13f7d_0 conda-forge
[conda] libcufft-dev 11.3.0.4 h5888daf_0 conda-forge
[conda] libcurand 10.3.7.77 hbd13f7d_0 conda-forge
[conda] libcurand-dev 10.3.7.77 h5888daf_0 conda-forge
[conda] libcusolver 11.7.1.2 h5888daf_1 conda-forge
[conda] libcusolver-dev 11.7.1.2 h5888daf_1 conda-forge
[conda] libcusparse 12.5.4.2 hbd13f7d_0 conda-forge
[conda] libcusparse-dev 12.5.4.2 h5888daf_0 conda-forge
[conda] libmagma 2.9.0 h19665d7_1 conda-forge
[conda] libmagma_sparse 2.9.0 h19665d7_0 conda-forge
[conda] libnvjitlink 12.6.85 hbd13f7d_0 conda-forge
[conda] libnvjitlink-dev 12.6.85 h5888daf_0 conda-forge
[conda] magma 2.9.0 h3d470c8_0 conda-forge
[conda] mkl 2024.2.2 ha957f24_16 conda-forge
[conda] mkl-include 2025.0.1 hf2ce2f3_21 conda-forge
[conda] numpy 1.26.4 pypi_0 pypi
[conda] optree 0.13.0 pypi_0 pypi
[conda] pytorch-triton 3.3.0+git96316ce5 pypi_0 pypi
[conda] torch 2.8.0a0+gita6c8246 dev_0 <develop>
[conda] torchfix 0.4.0 pypi_0 pypi
cc @clee2000
| true
|
3,039,582,622
|
Performance Regression nightly 03/11→03/12, on nanogpt speedrun
|
YouJiacheng
|
open
|
[
"high priority",
"triaged",
"oncall: pt2",
"upstream triton",
"module: higher order operators",
"module: pt2-dispatcher",
"module: flex attention"
] | 9
|
CONTRIBUTOR
|
### 🐛 Describe the bug
code: https://gist.github.com/YouJiacheng/687efdab59a3c3b4ad89864804bd918a
I manually applied changes from #152641
03/10: 1469.0-1470.4s (3 runs)
03/11: 1469.4-1470.5s
03/12: 1486.0-1487.4s (a few runs)
03/15: ≈1487.5s (a single run)
FWD diffs (03/10 vs. 03/15): https://www.diffchecker.com/bLNEBIii/
BWD diffs (03/10 vs. 03/15): https://www.diffchecker.com/bbiVBsPU/
#### Bisection 03/12
runtime 1486.0-1487.4s (a few runs)
Inductor output code is identical to 03/15
#### Bisection 03/11
runtime 1469.4-1470.5s
Inductor output code:
BWD is identical to 03/10
FWD diffs (~no diffs): https://www.diffchecker.com/wQxaVYL3/
Optimizer diffs (~no diffs): https://www.diffchecker.com/Og8kGihp/ https://www.diffchecker.com/N2qJ4DyA/
### Versions 03/10
Collecting environment information...
PyTorch version: 2.7.0.dev20250310+cu126
Is debug build: False
CUDA used to build PyTorch: 12.6
ROCM used to build PyTorch: N/A
OS: Ubuntu 24.04.1 LTS (x86_64)
GCC version: (Ubuntu 13.3.0-6ubuntu2~24.04) 13.3.0
Clang version: Could not collect
CMake version: Could not collect
Libc version: glibc-2.39
Python version: 3.12.9 (main, Feb 5 2025, 19:10:45) [Clang 19.1.6 ] (64-bit runtime)
Python platform: Linux-5.4.250-2-velinux1u1-amd64-x86_64-with-glibc2.39
Is CUDA available: True
CUDA runtime version: 12.8.61
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration:
GPU 0: NVIDIA H100 80GB HBM3
GPU 1: NVIDIA H100 80GB HBM3
GPU 2: NVIDIA H100 80GB HBM3
GPU 3: NVIDIA H100 80GB HBM3
GPU 4: NVIDIA H100 80GB HBM3
GPU 5: NVIDIA H100 80GB HBM3
GPU 6: NVIDIA H100 80GB HBM3
GPU 7: NVIDIA H100 80GB HBM3
Nvidia driver version: 535.129.03
cuDNN version: Probably one of the following:
/usr/lib/x86_64-linux-gnu/libcudnn.so.9.7.0
/usr/lib/x86_64-linux-gnu/libcudnn_adv.so.9.7.0
/usr/lib/x86_64-linux-gnu/libcudnn_cnn.so.9.7.0
/usr/lib/x86_64-linux-gnu/libcudnn_engines_precompiled.so.9.7.0
/usr/lib/x86_64-linux-gnu/libcudnn_engines_runtime_compiled.so.9.7.0
/usr/lib/x86_64-linux-gnu/libcudnn_graph.so.9.7.0
/usr/lib/x86_64-linux-gnu/libcudnn_heuristic.so.9.7.0
/usr/lib/x86_64-linux-gnu/libcudnn_ops.so.9.7.0
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 52 bits physical, 57 bits virtual
Byte Order: Little Endian
CPU(s): 168
On-line CPU(s) list: 0-161
Off-line CPU(s) list: 162-167
Vendor ID: GenuineIntel
Model name: Intel(R) Xeon(R) Platinum 8457C
CPU family: 6
Model: 143
Thread(s) per core: 2
Core(s) per socket: 42
Socket(s): 2
Stepping: 8
BogoMIPS: 5199.79
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ss ht syscall nx pdpe1gb rdtscp lm constant_tsc rep_good nopl xtopology nonstop_tsc cpuid pni pclmulqdq monitor ssse3 fma cx16 pcid sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand hypervisor lahf_lm abm 3dnowprefetch cpuid_fault invpcid_single ssbd ibrs ibpb stibp ibrs_enhanced fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid avx512f avx512dq rdseed adx smap avx512ifma clflushopt clwb avx512cd sha_ni avx512bw avx512vl xsaveopt xsavec xgetbv1 xsaves avx512_bf16 wbnoinvd arat avx512vbmi umip pku ospke waitpkg avx512_vbmi2 gfni vaes vpclmulqdq avx512_vnni avx512_bitalg avx512_vpopcntdq rdpid cldemote movdiri movdir64b md_clear arch_capabilities
Hypervisor vendor: KVM
Virtualization type: full
L1d cache: 3.9 MiB (84 instances)
L1i cache: 2.6 MiB (84 instances)
L2 cache: 168 MiB (84 instances)
L3 cache: 195 MiB (2 instances)
NUMA node(s): 2
NUMA node0 CPU(s): 0-83
NUMA node1 CPU(s): 84-167
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Unknown: No mitigations
Vulnerability Retbleed: Not affected
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl and seccomp
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Enhanced IBRS, IBPB conditional, RSB filling, PBRSB-eIBRS SW sequence
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Mitigation; TSX disabled
Versions of relevant libraries:
[pip3] numpy==2.2.5
[pip3] nvidia-cublas-cu12==12.6.4.1
[pip3] nvidia-cuda-cupti-cu12==12.6.80
[pip3] nvidia-cuda-nvrtc-cu12==12.6.77
[pip3] nvidia-cuda-runtime-cu12==12.6.77
[pip3] nvidia-cudnn-cu12==9.5.1.17
[pip3] nvidia-cufft-cu12==11.3.0.4
[pip3] nvidia-curand-cu12==10.3.7.77
[pip3] nvidia-cusolver-cu12==11.7.1.2
[pip3] nvidia-cusparse-cu12==12.5.4.2
[pip3] nvidia-cusparselt-cu12==0.6.3
[pip3] nvidia-nccl-cu12==2.24.3
[pip3] nvidia-nvjitlink-cu12==12.6.85
[pip3] nvidia-nvtx-cu12==12.6.77
[pip3] pytorch-triton==3.2.0+git4b3bb1f8
[pip3] torch==2.7.0.dev20250310+cu126
[conda] Could not collect
### Versions 03/15
Collecting environment information...
PyTorch version: 2.8.0.dev20250315+cu126
[omitted]
Versions of relevant libraries:
[pip3] numpy==2.2.5
[pip3] nvidia-cublas-cu12==12.6.4.1
[pip3] nvidia-cuda-cupti-cu12==12.6.80
[pip3] nvidia-cuda-nvrtc-cu12==12.6.77
[pip3] nvidia-cuda-runtime-cu12==12.6.77
[pip3] nvidia-cudnn-cu12==9.5.1.17
[pip3] nvidia-cufft-cu12==11.3.0.4
[pip3] nvidia-curand-cu12==10.3.7.77
[pip3] nvidia-cusolver-cu12==11.7.1.2
[pip3] nvidia-cusparse-cu12==12.5.4.2
[pip3] nvidia-cusparselt-cu12==0.6.3
[pip3] nvidia-nccl-cu12==2.24.3
[pip3] nvidia-nvjitlink-cu12==12.6.85
[pip3] nvidia-nvtx-cu12==12.6.77
[pip3] pytorch-triton==3.3.0+git96316ce5
[pip3] torch==2.8.0.dev20250315+cu126
[conda] Could not collect
cc @ezyang @gchanan @zou3519 @kadeng @msaroufim @chauhang @penguinwu @bertmaher @int3 @davidberard98 @nmacchioni @chenyang78 @embg @peterbell10 @aakhundov @ydwu4 @bdhirsh @Chillee @drisspg @yanboliang @BoyuanFeng
| true
|
3,039,556,091
|
TorchRun: Option to specify which GPUs to run on
|
bjourne
|
open
|
[
"oncall: distributed"
] | 2
|
NONE
|
### 🚀 The feature, motivation and pitch
TorchRun has an `--nproc-per-node` option to specify how many processes/gpus to use. But it has no option for specifying *which* gpus to use. So if you run torchrun multiple times the same gpus will be used. You can get around that as follows:
CUDA_VISIBLE_DEVICES="2,4,7" torchrun --nnodes=1 --nproc-per-node=3
This works if you have a single-node setup (perhaps not if you have multiple nodes?), but it is not intuitive and is error-prone because you are passing some configuration in an environment variable and some in options. I think it would be better if torchrun had an option such as `--bind-devices=2,4,7` for this, supplanting/replacing `--nproc-per-node`.
### Alternatives
_No response_
### Additional context
_No response_
cc @H-Huang @awgu @wanchaol @fegin @fduwjj @wz337 @wconstab @d4l3k
| true
|
3,039,454,175
|
[Easy][Inductor] Adds safety checks in get_estimated_runtime
|
Aidyn-A
|
open
|
[
"triaged",
"open source",
"topic: not user facing",
"module: inductor",
"ciflow/inductor"
] | 11
|
COLLABORATOR
|
This PR adds checks on `gpu_memory_bandwidth` and `gpu_flops` in `get_estimated_runtime`. This will prevent division by zero and other potentially incorrect values:
https://github.com/pytorch/pytorch/blob/9210a98b9203c5ff42f39241304a8e38435110b8/torch/_inductor/scheduler.py#L864-L865
https://github.com/pytorch/pytorch/blob/9210a98b9203c5ff42f39241304a8e38435110b8/torch/_inductor/scheduler.py#L874
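A minimal sketch of the kind of guard being added (names are illustrative, not the actual scheduler code):
```python
def estimated_runtime_seconds(num_bytes: int, gpu_memory_bandwidth: float) -> float:
    # Guard against a zero (or negative) bandwidth estimate before dividing,
    # which is the failure mode the linked scheduler lines can hit.
    if gpu_memory_bandwidth <= 0:
        return 0.0
    return num_bytes / gpu_memory_bandwidth
```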
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
| true
|
3,039,435,245
|
[DO NOT MERGE] update build tools version
|
alinpahontu2912
|
open
|
[
"triaged",
"open source",
"ciflow/binaries_wheel"
] | 2
|
COLLABORATOR
|
Use the latest MSVC to build PyTorch and check if AVX512 instructions are correctly set.
| true
|
3,039,424,780
|
[TEST][Quantization] Skip test_learnable due to hypothesis
|
Aidyn-A
|
open
|
[
"triaged",
"open source",
"release notes: quantization",
"topic: not user facing"
] | 2
|
COLLABORATOR
|
As per comment in https://github.com/pytorch/pytorch/issues/111471#issuecomment-1866933243 the tests are failing due to hypothesis. This PR adds a skip to those tests.
| true
|
3,039,320,406
|
fix: correct typo in randomness/reproducibility documentation
|
nachodieez
|
closed
|
[
"open source",
"topic: not user facing"
] | 4
|
NONE
|
Fixes #152817 by using the correct word in the documentation file.
| true
|
3,039,309,286
|
Mention of nondeterministic index_add when deterministic implementation is being used
|
nachodieez
|
closed
|
[] | 1
|
NONE
|
### 📚 The doc issue
In [this documentation page](https://pytorch.org/docs/stable/notes/randomness.html#avoiding-nondeterministic-algorithms) it is mentioned that the nondeterministic CUDA implementation of `index_add` is being used, when in fact the one that is being used and raising the error is the deterministic version, since there is no implementation.
### Suggest a potential alternative/fix
Edit the documentation page
| true
|
3,039,170,171
|
Depthwise Separable Convolutions with Large Tensors (> 2**31 Elements) Fail Despite cuDNN 64-bit Indexing Support
|
lely475
|
open
|
[
"module: cudnn",
"module: cuda",
"module: convolution",
"triaged",
"module: 64-bit"
] | 3
|
NONE
|
### 🐛 Describe the bug
The forward pass on a 2D convolutional layer using grouped convolutions (e.g., depthwise separable convolutions) fails when operating on tensors with more than 2\**31 elements. This limitation persists even when cuDNN v9.7.1 is used, which should theoretically support 64-bit indexing for large tensors since [PR #134890](https://github.com/pytorch/pytorch/pull/134890) ([cuDNN][64-bit indexing] cuDNN v9.3+ supports non-batch-splittable convolutions with > 2\**31 elements). Below is a minimal example to reproduce the issue.
```python
import torch
import torch.nn as nn
device = torch.device("cuda")
# Define an extremely large input tensor (exceeding 2**31 elements for a single sample), use grouped (depthwise separable) convolutions
# For example: Batch size = 1, Channels = 2, Height = 32,800, Width = 32,800
# Total elements = 1 * 2 * 32,800 * 32,800 = 2,151,680,000 > 2**31 (2,147,483,648)
num_channels=2
input_tensor = torch.randn(1, num_channels, 32800, 32800, device=device)
# Define a convolution layer
conv_layer = nn.Conv2d(num_channels, num_channels, kernel_size=3, stride=1, padding=1, groups=num_channels).to(device)
# Perform the forward pass
try:
output_tensor = conv_layer(input_tensor)
print("Convolution operation completed successfully. Output shape:", output_tensor.shape)
except RuntimeError as e:
print("Error occurred:", e)
```
Running the above code produces the following error:
```
Error occurred: Expected canUse32BitIndexMath(input) && canUse32BitIndexMath(output) to be true, but got false. (Could this error message be improved? If so, please report an enhancement request to PyTorch.)
```
Additional Context:
- The issue specifically occurs when using **depthwise separable convolutions** (i.e., `groups > 1` in `nn.Conv2d`). Regular convolutions (`groups=1`) appear to work as expected with tensors exceeding \(2^{31}\) elements.
- This suggests that the fix in [PR #134890](https://github.com/pytorch/pytorch/pull/134890) does not fully account for grouped convolutions or depthwise separable convolutions.
- Splitting the tensor further along the batch or channel dimensions is not an option in this case due to the nature of the operation.
### Versions
PyTorch version: 2.7.0+cu128
Is debug build: False
CUDA used to build PyTorch: 12.8
ROCM used to build PyTorch: N/A
OS: Ubuntu 24.04.2 LTS (x86_64)
GCC version: (Ubuntu 13.3.0-6ubuntu2~24.04) 13.3.0
Clang version: Could not collect
CMake version: version 3.28.3
Libc version: glibc-2.39
Python version: 3.10.17 | packaged by conda-forge | (main, Apr 10 2025, 22:19:12) [GCC 13.3.0] (64-bit runtime)
Python platform: Linux-6.8.0-55-generic-x86_64-with-glibc2.39
Is CUDA available: True
CUDA runtime version: 12.8.93
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration:
GPU 0: NVIDIA H200
Nvidia driver version: 570.124.06
cuDNN version: Could not collect
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 52 bits physical, 57 bits virtual
Byte Order: Little Endian
CPU(s): 224
On-line CPU(s) list: 0-223
Vendor ID: GenuineIntel
Model name: Intel(R) Xeon(R) Platinum 8480C
CPU family: 6
Model: 143
Thread(s) per core: 2
Core(s) per socket: 56
Socket(s): 2
Stepping: 8
CPU(s) scaling MHz: 29%
CPU max MHz: 3800.0000
CPU min MHz: 800.0000
BogoMIPS: 4000.00
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc art arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc cpuid aperfmperf tsc_known_freq pni pclmulqdq dtes64 monitor ds_cpl vmx smx est tm2 ssse3 sdbg fma cx16 xtpr pdcm pcid dca sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm 3dnowprefetch cpuid_fault epb cat_l3 cat_l2 cdp_l3 intel_ppin cdp_l2 ssbd mba ibrs ibpb stibp ibrs_enhanced tpr_shadow flexpriority ept vpid ept_ad fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid cqm rdt_a avx512f avx512dq rdseed adx smap avx512ifma clflushopt clwb intel_pt avx512cd sha_ni avx512bw avx512vl xsaveopt xsavec xgetbv1 xsaves cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local split_lock_detect user_shstk avx_vnni avx512_bf16 wbnoinvd dtherm ida arat pln pts hwp hwp_act_window hwp_epp hwp_pkg_req vnmi avx512vbmi umip pku ospke waitpkg avx512_vbmi2 gfni vaes vpclmulqdq avx512_vnni avx512_bitalg tme avx512_vpopcntdq la57 rdpid bus_lock_detect cldemote movdiri movdir64b enqcmd fsrm md_clear serialize tsxldtrk pconfig arch_lbr ibt amx_bf16 avx512_fp16 amx_tile amx_int8 flush_l1d arch_capabilities
Virtualization: VT-x
L1d cache: 5.3 MiB (112 instances)
L1i cache: 3.5 MiB (112 instances)
L2 cache: 224 MiB (112 instances)
L3 cache: 210 MiB (2 instances)
NUMA node(s): 2
NUMA node0 CPU(s): 0-55,112-167
NUMA node1 CPU(s): 56-111,168-223
Vulnerability Gather data sampling: Not affected
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Not affected
Vulnerability Reg file data sampling: Not affected
Vulnerability Retbleed: Not affected
Vulnerability Spec rstack overflow: Not affected
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Enhanced / Automatic IBRS; IBPB conditional; RSB filling; PBRSB-eIBRS SW sequence; BHI BHI_DIS_S
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Versions of relevant libraries:
[pip3] numpy==2.2.5
[pip3] nvidia-cublas-cu12==12.8.3.14
[pip3] nvidia-cuda-cupti-cu12==12.8.57
[pip3] nvidia-cuda-nvrtc-cu12==12.8.61
[pip3] nvidia-cuda-runtime-cu12==12.8.57
[pip3] nvidia-cudnn-cu12==9.7.1.26
[pip3] nvidia-cufft-cu12==11.3.3.41
[pip3] nvidia-curand-cu12==10.3.9.55
[pip3] nvidia-cusolver-cu12==11.7.2.55
[pip3] nvidia-cusparse-cu12==12.5.7.53
[pip3] nvidia-cusparselt-cu12==0.6.3
[pip3] nvidia-nccl-cu12==2.26.2
[pip3] nvidia-nvjitlink-cu12==12.8.61
[pip3] nvidia-nvtx-cu12==12.8.55
[pip3] optree==0.15.0
[pip3] torch==2.7.0+cu128
[pip3] torchvision==0.21.0
[pip3] torchvision-extra-decoders==0.0.2
[pip3] triton==3.3.0
[conda] cuda-cudart 12.8.90 h5888daf_1 conda-forge
[conda] cuda-cudart_linux-64 12.8.90 h3f2d84a_1 conda-forge
[conda] cuda-cupti 12.8.90 h5888daf_1 conda-forge
[conda] cuda-nvrtc 12.8.93 h5888daf_1 conda-forge
[conda] cuda-nvtx 12.8.90 h5888daf_1 conda-forge
[conda] cudnn 9.8.0.87 h81d5506_1 conda-forge
[conda] libblas 3.9.0 31_hfdb39a5_mkl conda-forge
[conda] libcblas 3.9.0 31_h372d94f_mkl conda-forge
[conda] libcublas 12.8.4.1 h9ab20c4_1 conda-forge
[conda] libcufft 11.3.3.83 h5888daf_1 conda-forge
[conda] libcurand 10.3.9.90 h9ab20c4_1 conda-forge
[conda] libcusolver 11.7.3.90 h9ab20c4_1 conda-forge
[conda] libcusparse 12.5.8.93 h5888daf_1 conda-forge
[conda] liblapack 3.9.0 31_hc41d3b0_mkl conda-forge
[conda] libmagma 2.9.0 h19665d7_1 conda-forge
[conda] libnvjitlink 12.8.93 h5888daf_1 conda-forge
[conda] libtorch 2.6.0 cuda126_mkl_h99b69db_304 conda-forge
[conda] mkl 2024.2.2 ha957f24_16 conda-forge
[conda] nccl 2.26.2.1 ha44e49d_1 conda-forge
[conda] numpy 2.2.5 py310hefbff90_0 conda-forge
[conda] nvidia-cublas-cu12 12.8.3.14 pypi_0 pypi
[conda] nvidia-cuda-cupti-cu12 12.8.57 pypi_0 pypi
[conda] nvidia-cuda-nvrtc-cu12 12.8.61 pypi_0 pypi
[conda] nvidia-cuda-runtime-cu12 12.8.57 pypi_0 pypi
[conda] nvidia-cudnn-cu12 9.7.1.26 pypi_0 pypi
[conda] nvidia-cufft-cu12 11.3.3.41 pypi_0 pypi
[conda] nvidia-curand-cu12 10.3.9.55 pypi_0 pypi
[conda] nvidia-cusolver-cu12 11.7.2.55 pypi_0 pypi
[conda] nvidia-cusparse-cu12 12.5.7.53 pypi_0 pypi
[conda] nvidia-cusparselt-cu12 0.6.3 pypi_0 pypi
[conda] nvidia-nccl-cu12 2.26.2 pypi_0 pypi
[conda] nvidia-nvjitlink-cu12 12.8.61 pypi_0 pypi
[conda] nvidia-nvtx-cu12 12.8.55 pypi_0 pypi
[conda] optree 0.15.0 py310h3788b33_0 conda-forge
[conda] torch 2.7.0+cu128 pypi_0 pypi
[conda] torchvision 0.21.0 cuda126_py310_h4459643_1 conda-forge
[conda] torchvision-extra-decoders 0.0.2 py310h9a3ef1b_2 conda-forge
[conda] triton 3.3.0 pypi_0 pypi
cc @csarofeen @ptrblck @xwang233 @eqy @msaroufim @jerryzh168
| true
|
3,039,108,164
|
[Cutlass] E2E Tests for EVT
|
mlazos
|
open
|
[
"ciflow/trunk",
"topic: not user facing",
"module: inductor",
"ciflow/inductor"
] | 2
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #152815
* #150907
* #151406
* #150906
* #152733
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
| true
|
3,039,106,703
|
[TEST][ATen][CUDA] Skip row-wise scaled matrix multiplication tests on sm_120+
|
Aidyn-A
|
open
|
[
"module: cuda",
"triaged",
"open source",
"topic: not user facing"
] | 10
|
COLLABORATOR
|
The float8 row-wise scaled matmuls are not supported on Blackwell yet. This PR adds skips to those tests to decrease the noise on `sm_120+` machines.
cc @ptrblck @msaroufim @eqy @jerryzh168
| true
|
3,039,101,868
|
Mismatch in dynamic quantization performance for torchao and torch.quantization
|
PioneerAlexander
|
open
|
[
"oncall: quantization"
] | 0
|
NONE
|
Hi everyone!
Can someone explain why I get different performance when I apply torch.quantization.quantize_dynamic and torchao.quantize_?
More specifically, I have an LSTM model with two fully connected layers (one in the front and one in the back). In order to quantize it with torchao, I reimplemented an LSTM layer (and checked that it works like the nn.LSTM implementation).
Then I compare DynamicInt8ActivationInt8Weight quantization in both libraries:
```python
quantize_(model, Int8DynamicActivationInt8WeightConfig())

model = torch.quantization.quantize_dynamic(
    model, {nn.Linear}, dtype=torch.qint8
)
```
The first (torchao) solution was tested on GPU (NVIDIA A100 80GB PCIe, not MI300), nvcc version 12.1, cudnn 9.8, torch 2.5.1.
The metric value drops by 1%.
But when I run the second solution (on CPU, as GPU is not yet supported for torch.quantization), the metric value drops by 35%.
What could possibly be wrong?
cc @jerryzh168 @jianyuh @raghuramank100 @jamesr66a @vkuzo @jgong5 @Xia-Weiwen @leslie-fang-intel @msaroufim
| true
|
3,038,948,509
|
Fix typo on `test_multi_device_context_manager` for XPU
|
guangyey
|
closed
|
[
"open source",
"Merged",
"ciflow/trunk",
"topic: not user facing",
"ciflow/xpu"
] | 11
|
COLLABORATOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #152812
# Motivation
Align https://github.com/pytorch/pytorch/pull/152474, fix the typo on UT for XPU introduced by https://github.com/pytorch/pytorch/issues/148864
| true
|
3,038,926,403
|
[Quant][X86] add an op to compute uint8 batch norm 2d
|
Xia-Weiwen
|
open
|
[
"module: cpu",
"open source",
"release notes: quantization",
"intel"
] | 1
|
COLLABORATOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #152811
* #152411
**Summary**
This PR adds a new op, `onednn.qbatch_norm2d`, which accepts uint8 inputs on the CPU device (instead of QuantizedCPU).
The new op is implemented with AVX512 instructions and provides similar performance to its counterpart for the QuantizedCPU device, `quantized.batch_norm2d`.
The new op supports output dtypes other than uint8 (fp32, fp16 and bf16 are supported).
**Test plan**
```
pytest test/quantization/core/test_quantized_op.py -k test_int8_batch_norm_onednn
```
cc @jgong5 @mingfeima @XiaobingSuper @sanchitintel @ashokei @jingxu10 @jerryzh168
| true
|
3,038,863,226
|
Upgrade to NCCL 2.26.5 for CUDA 12
|
tinglvv
|
open
|
[
"open source",
"ciflow/binaries",
"ciflow/trunk",
"topic: not user facing",
"ciflow/inductor"
] | 19
|
COLLABORATOR
|
Upgrade NCCL to latest 2.26.5
cc @atalman @ptrblck @malfet @eqy @nWEIdia
| true
|
3,038,859,346
|
[xla hash update] update the pinned xla hash
|
pytorchupdatebot
|
closed
|
[
"open source",
"Merged",
"ciflow/trunk",
"topic: not user facing",
"ciflow/inductor"
] | 3
|
COLLABORATOR
|
This PR is auto-generated nightly by [this action](https://github.com/pytorch/pytorch/blob/main/.github/workflows/nightly.yml).
Update the pinned xla hash.
| true
|
3,038,821,202
|
another try
|
hl475
|
open
|
[
"module: cpu",
"fb-exported",
"release notes: quantization"
] | 2
|
CONTRIBUTOR
|
Differential Revision: D74161994
cc @jgong5 @mingfeima @XiaobingSuper @sanchitintel @ashokei @jingxu10 @jerryzh168
| true
|
3,038,800,154
|
wip
|
hl475
|
open
|
[
"module: cpu",
"fb-exported",
"release notes: quantization"
] | 2
|
CONTRIBUTOR
|
Differential Revision: D74161784
cc @jgong5 @mingfeima @XiaobingSuper @sanchitintel @ashokei @jingxu10 @jerryzh168
| true
|
3,038,780,293
|
[invoke_subgraph] Force the output stride to be same as eager
|
anijain2305
|
open
|
[
"ciflow/trunk",
"topic: not user facing",
"module: inductor",
"ciflow/inductor"
] | 2
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #152806
* #152675
* #152770
* #152772
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
| true
|
3,038,763,925
|
False INTERNAL ASSERT FAILED
|
noaft
|
closed
|
[
"needs reproduction",
"oncall: jit"
] | 3
|
NONE
|
### 🐛 Describe the bug
This is my code:
```python
import torch

# This is the model after conversion is done
quantized_model.eval()

# Convert to TorchScript
scripted_model = torch.jit.script(quantized_model)

# Save with TorchScript
scripted_model.save("resnet50_int8_scripted.pt")
```
I want to save my quantized model with JIT and got this error.
### Versions

This is my PyTorch version.
cc @EikanWang @jgong5 @wenzhe-nrv @sanchitintel
| true
|
3,038,592,496
|
Segmentation fault (core dumped) in torch.nn.functional.max_unpool2d
|
cx104906
|
closed
|
[
"triage review",
"module: crash",
"topic: fuzzer"
] | 3
|
NONE
|
### 🐛 Describe the bug
reproduce
```
curl -L -o 004-args "https://github.com/cx104906/poc/raw/main/pytorch/id%3A000004-args"
curl -L -o 004-kwargs "https://github.com/cx104906/poc/raw/main/pytorch/id%3A000004-kwargs"
python cxtest1.py
```
cxtest1.py
```
import torch
import pickle
print(torch.__version__)
mylist = torch.load("/home/cx/cxtemp/004-args",weights_only=True)
mydict = torch.load("/home/cx/cxtemp/004-kwargs",weights_only=True)
print("test.....")
torch.nn.functional.max_unpool2d(*mylist,**mydict)
```
output
```
2.8.0a0+gitcbcf677
test.....
Segmentation fault (core dumped)
```
### Versions
versions
```
Collecting environment information...
PyTorch version: 2.8.0a0+gitcbcf677
Is debug build: False
CUDA used to build PyTorch: None
ROCM used to build PyTorch: N/A
OS: Ubuntu 20.04.6 LTS (x86_64)
GCC version: (Ubuntu 9.4.0-1ubuntu1~20.04.2) 9.4.0
Clang version: Could not collect
CMake version: version 3.28.3
Libc version: glibc-2.31
Python version: 3.11.11 (main, Dec 11 2024, 16:28:39) [GCC 11.2.0] (64-bit runtime)
Python platform: Linux-5.4.0-144-generic-x86_64-with-glibc2.31
Is CUDA available: False
CUDA runtime version: No CUDA
CUDA_MODULE_LOADING set to: N/A
GPU models and configuration: No CUDA
Nvidia driver version: No CUDA
cuDNN version: No CUDA
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Byte Order: Little Endian
Address sizes: 46 bits physical, 48 bits virtual
CPU(s): 32
On-line CPU(s) list: 0-31
Thread(s) per core: 1
Core(s) per socket: 32
Socket(s): 1
NUMA node(s): 1
Vendor ID: GenuineIntel
CPU family: 6
Model: 85
Model name: Intel(R) Xeon(R) Gold 5218R CPU @ 2.10GHz
Stepping: 7
CPU MHz: 2095.076
BogoMIPS: 4190.15
Virtualization: VT-x
Hypervisor vendor: KVM
Virtualization type: full
L1d cache: 1 MiB
L1i cache: 1 MiB
L2 cache: 128 MiB
L3 cache: 16 MiB
NUMA node0 CPU(s): 0-31
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Vulnerable: Clear CPU buffers attempted, no microcode; SMT Host state unknown
Vulnerability Retbleed: Mitigation; Enhanced IBRS
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl and seccomp
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Enhanced IBRS, IBPB conditional, RSB filling, PBRSB-eIBRS SW sequence
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Mitigation; TSX disabled
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ss ht syscall nx pdpe1gb rdtscp lm constant_tsc arch_perfmon rep_good nopl xtopology cpuid tsc_known_freq pni pclmulqdq vmx ssse3 fma cx16 pdcm pcid sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand hypervisor lahf_lm abm 3dnowprefetch cpuid_fault invpcid_single ssbd ibrs ibpb stibp ibrs_enhanced tpr_shadow vnmi flexpriority ept vpid ept_ad fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid mpx avx512f avx512dq rdseed adx smap clflushopt clwb avx512cd avx512bw avx512vl xsaveopt xsavec xgetbv1 xsaves arat umip pku ospke avx512_vnni md_clear arch_capabilities
Versions of relevant libraries:
[pip3] numpy==2.2.5
[pip3] optree==0.15.0
[pip3] torch==2.8.0a0+gitcbcf677
[conda] numpy 2.2.5 pypi_0 pypi
[conda] optree 0.15.0 pypi_0 pypi
[conda] torch 2.8.0a0+gitcbcf677 dev_0 <develop>
```
| true
|
3,038,557,505
|
same test for guard_or_false 2
|
laithsakka
|
open
|
[] | 2
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #152803
* #152802
* #152784
* #152722
* #148872
| true
|
3,038,556,240
|
same test for guard_or_false 1
|
laithsakka
|
open
|
[] | 2
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #152803
* __->__ #152802
* #152784
* #152722
* #148872
| true
|
3,038,539,405
|
Thread through options so GraphPickler can allow all ops
|
aorenste
|
closed
|
[
"Merged",
"ciflow/trunk",
"release notes: fx",
"topic: not user facing",
"fx"
] | 6
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #152801
Fixes #151904
In #151904 we discussed the feasibility of including all ops in the GraphPickler. This PR changes it so we can filter which ops are allowed and which are blocked.
cc @ezyang @SherlockNoMad @EikanWang @jgong5 @wenzhe-nrv
| true
|
3,038,484,054
|
Add "#pragma once" to CachingHostAllocator.h
|
jhapradip
|
closed
|
[
"open source",
"Merged",
"ciflow/trunk",
"topic: not user facing"
] | 6
|
CONTRIBUTOR
| null | true
|
3,038,430,994
|
[float16]: Fast path for torch.dot with float16/bfloat16
|
f2013519
|
closed
|
[
"module: cpu",
"open source",
"Merged",
"Reverted",
"ciflow/trunk",
"release notes: linalg_frontend",
"topic: performance",
"ci-no-td"
] | 21
|
CONTRIBUTOR
|
Fixes #152798
Add the fast path for dot with contiguous tensors for float16/bfloat16 types.
Performance with patch (see issue for benchmark and current performance):

**We see up to 10x+ improvement in performance.**
cc @jgong5 @mingfeima @XiaobingSuper @sanchitintel @ashokei @jingxu10 @jerryzh168
| true
|
3,038,418,140
|
Poor performance of torch.dot with float16 & bfloat16
|
f2013519
|
closed
|
[
"triaged",
"module: bfloat16",
"module: half",
"module: linear algebra",
"topic: performance"
] | 0
|
CONTRIBUTOR
|
### 🐛 Describe the bug
torch.dot is an order of magnitude slower (or more) with float16/bfloat16 versus float32:
```python
import torch
import timeit
import sys
import platform
import matplotlib.pyplot as plt
import numpy as np
import warnings
import math
# --- Configuration ---
# Vector sizes (N) - Powers of 10 from 10^1 to 10^8
vector_sizes_log = range(1, 9) # 1 to 8
vector_sizes = [10**i for i in vector_sizes_log]
num_runs = 100 # Number of times to run each operation for timing
num_warmup = 5 # Number of warm-up runs
dtypes_to_test = [torch.float64, torch.float32, torch.bfloat16, torch.float16]
# --- End Configuration ---
# --- Setup ---
device = torch.device("cpu")
print(f"Forcing CPU device.")
print(f"PyTorch version: {torch.__version__}")
print(f"Platform: {platform.system()} {platform.release()} ({platform.machine()})")
print(f"CPU Threads: {torch.get_num_threads()}")
results = {dtype: [] for dtype in dtypes_to_test}
actual_sizes_run = {dtype: [] for dtype in dtypes_to_test}
# --- Benchmarking Loop ---
for N in vector_sizes:
print(f"\nBenchmarking vector size: {N:,}")
for dtype in dtypes_to_test:
print(f" Testing dtype: {dtype}...")
try:
# Create tensors on CPU
# Use .to(dtype) for float16/bfloat16 as randn doesn't directly support them on CPU sometimes
x = torch.randn(N, device=device).to(dtype=dtype)
y = torch.randn(N, device=device).to(dtype=dtype)
# --- Benchmark Function ---
def run_dot():
# Ensure computation happens
C = torch.dot(x, y)
# No sync needed for CPU
# --- Warm-up ---
# print(f" Performing {num_warmup} warm-up runs...")
for _ in range(num_warmup):
run_dot()
# print(" Warm-up complete.")
# --- Benchmarking ---
# print(f" Benchmarking {num_runs} runs...")
# Use timeit.Timer for more precise timing
timer = timeit.Timer(stmt='run_dot()', globals=globals())
total_time = timer.timeit(number=num_runs)
avg_time_ms = (total_time / num_runs) * 1000
print(f" Avg time: {avg_time_ms:.4f} ms")
results[dtype].append(avg_time_ms)
actual_sizes_run[dtype].append(N) # Record size for which the run was successful
except RuntimeError as e:
print(f" Error during benchmark for dtype {dtype} at size {N:,}: {e}")
print(f" Skipping this dtype for size {N:,} and subsequent sizes.")
# Add NaN for this and future sizes for this dtype to keep plot alignment if needed
num_remaining = len(vector_sizes) - vector_sizes.index(N)
results[dtype].extend([float('nan')] * num_remaining)
actual_sizes_run[dtype].extend([N] * num_remaining) # Keep track of attempted sizes
break # Stop testing this dtype if it failed (e.g., unsupported or out of memory)
except Exception as e:
print(f" Unexpected error for dtype {dtype} at size {N:,}: {e}")
results[dtype].append(float('nan')) # Mark as failed
actual_sizes_run[dtype].append(N)
# --- Plotting ---
print("\nPlotting results...")
plt.figure(figsize=(12, 7))
markers = {torch.float64: 'd', torch.float32: 'o', torch.bfloat16: 's', torch.float16: '^'}
colors = {torch.float64: 'purple', torch.float32: 'blue', torch.bfloat16: 'green', torch.float16: 'red'}
for dtype, times in results.items():
# Filter out NaN values and corresponding sizes for plotting
valid_indices = ~np.isnan(times)
plot_sizes = np.array(actual_sizes_run[dtype])[valid_indices]
plot_times = np.array(times)[valid_indices]
if len(plot_sizes) > 0:
plt.plot(
plot_sizes,
plot_times,
label=str(dtype).replace('torch.', ''),
marker=markers.get(dtype, 'x'),
color=colors.get(dtype, 'black'),
linestyle='-'
)
plt.xlabel("Vector Size (N)")
plt.ylabel("Average Time (ms)")
plt.title(f"CPU torch.dot Performance")
plt.legend()
plt.grid(True, which="both", ls="--")
plt.xscale('log') # Use log scale for vector size
plt.yscale('log') # Use log scale for time
# Add CPU info to the plot title or as text
cpu_info = f"Threads: {torch.get_num_threads()}"
plt.text(0.01, 0.01, cpu_info, transform=plt.gca().transAxes, fontsize=9, verticalalignment='bottom')
plt.tight_layout()
plt.show()
print("\nBenchmark complete.")
```

### Versions
PyTorch version: 2.8.0a0+git66eb9c8
Is debug build: False
CUDA used to build PyTorch: None
ROCM used to build PyTorch: N/A
OS: macOS 15.3 (arm64)
GCC version: Could not collect
Clang version: 14.0.6
CMake version: version 3.31.6
Libc version: N/A
Python version: 3.12.7 | packaged by Anaconda, Inc. | (main, Oct 4 2024, 08:22:19) [Clang 14.0.6 ] (64-bit runtime)
Python platform: macOS-15.3-arm64-arm-64bit
Is CUDA available: False
CUDA runtime version: No CUDA
CUDA_MODULE_LOADING set to: N/A
GPU models and configuration: No CUDA
Nvidia driver version: No CUDA
cuDNN version: No CUDA
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Apple M4
Versions of relevant libraries:
[pip3] flake8==7.0.0
[pip3] mypy==1.14.0
[pip3] mypy-extensions==1.0.0
[pip3] numpy==1.26.2
[pip3] numpydoc==1.7.0
[pip3] onnx==1.17.0
[pip3] onnxscript==0.2.2
[pip3] optree==0.13.0
[pip3] torch==2.7.0
[conda] numpy 1.26.2 pypi_0 pypi
[conda] numpydoc 1.7.0 py312hca03da5_0
[conda] optree 0.13.0 pypi_0 pypi
[conda] torch 2.7.0 pypi_0 pypi
cc @jianyuh @nikitaved @mruberry @walterddr @xwang233 @Lezcano
| true
|
3,038,373,933
|
DISABLED test_comprehensive_fliplr_cuda_float16 (__main__.TestInductorOpInfoCUDA)
|
pytorch-bot[bot]
|
open
|
[
"high priority",
"triaged",
"module: flaky-tests",
"skipped",
"oncall: pt2",
"module: inductor"
] | 2
|
NONE
|
Platforms: inductor
This test was disabled because it is failing in CI. See [recent examples](https://hud.pytorch.org/flakytest?name=test_comprehensive_fliplr_cuda_float16&suite=TestInductorOpInfoCUDA&limit=100) and the most recent trunk [workflow logs](https://github.com/pytorch/pytorch/runs/41618142892).
Over the past 3 hours, it has been determined flaky in 3 workflow(s) with 3 failures and 3 successes.
**Debugging instructions (after clicking on the recent samples link):**
DO NOT ASSUME THINGS ARE OKAY IF THE CI IS GREEN. We now shield flaky tests from developers so CI will thus be green but it will be harder to parse the logs.
To find relevant log snippets:
1. Click on the workflow logs linked above
2. Click on the Test step of the job so that it is expanded. Otherwise, the grepping will not work.
3. Grep for `test_comprehensive_fliplr_cuda_float16`
4. There should be several instances run (as flaky tests are rerun in CI) from which you can study the logs.
<details><summary>Sample error message</summary>
```
Traceback (most recent call last):
File "/opt/conda/envs/py_3.12/lib/python3.12/site-packages/torch/testing/_internal/common_device_type.py", line 1135, in test_wrapper
return test(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^
File "/opt/conda/envs/py_3.12/lib/python3.12/site-packages/torch/testing/_internal/common_device_type.py", line 1434, in only_fn
return fn(self, *args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/conda/envs/py_3.12/lib/python3.12/site-packages/torch/testing/_internal/common_utils.py", line 2291, in wrapper
fn(*args, **kwargs)
File "/opt/conda/envs/py_3.12/lib/python3.12/site-packages/torch/testing/_internal/common_device_type.py", line 1215, in dep_fn
return fn(slf, *args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/conda/envs/py_3.12/lib/python3.12/site-packages/torch/testing/_internal/common_device_type.py", line 1215, in dep_fn
return fn(slf, *args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/conda/envs/py_3.12/lib/python3.12/site-packages/torch/testing/_internal/common_device_type.py", line 1215, in dep_fn
return fn(slf, *args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/conda/envs/py_3.12/lib/python3.12/site-packages/torch/testing/_internal/common_utils.py", line 1612, in wrapper
fn(*args, **kwargs)
File "/opt/conda/envs/py_3.12/lib/python3.12/site-packages/torch/testing/_internal/common_utils.py", line 1534, in wrapper
fn(*args, **kwargs)
File "/opt/conda/envs/py_3.12/lib/python3.12/unittest/mock.py", line 1396, in patched
return func(*newargs, **newkeywargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/conda/envs/py_3.12/lib/python3.12/contextlib.py", line 81, in inner
return func(*args, **kwds)
^^^^^^^^^^^^^^^^^^^
File "/opt/conda/envs/py_3.12/lib/python3.12/contextlib.py", line 81, in inner
return func(*args, **kwds)
^^^^^^^^^^^^^^^^^^^
File "/opt/conda/envs/py_3.12/lib/python3.12/contextlib.py", line 81, in inner
return func(*args, **kwds)
^^^^^^^^^^^^^^^^^^^
File "/var/lib/jenkins/workspace/test/inductor/test_torchinductor_opinfo.py", line 962, in inner
raise e
File "/var/lib/jenkins/workspace/test/inductor/test_torchinductor_opinfo.py", line 954, in inner
fn(self, device, dtype, op)
File "/var/lib/jenkins/workspace/test/inductor/test_torchinductor_opinfo.py", line 1207, in test_comprehensive
raise e
File "/var/lib/jenkins/workspace/test/inductor/test_torchinductor_opinfo.py", line 1182, in test_comprehensive
self.check_model_gpu(
File "/opt/conda/envs/py_3.12/lib/python3.12/contextlib.py", line 81, in inner
return func(*args, **kwds)
^^^^^^^^^^^^^^^^^^^
File "/var/lib/jenkins/workspace/test/inductor/test_torchinductor.py", line 657, in check_model_gpu
check_model(
File "/var/lib/jenkins/workspace/test/inductor/test_torchinductor.py", line 498, in check_model
actual = run(*example_inputs, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/conda/envs/py_3.12/lib/python3.12/site-packages/torch/_dynamo/eval_frame.py", line 676, in _fn
raise e.remove_dynamo_frames() from None # see TORCHDYNAMO_VERBOSE=1
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/conda/envs/py_3.12/lib/python3.12/site-packages/torch/_inductor/compile_fx.py", line 895, in _compile_fx_inner
raise InductorError(e, currentframe()).with_traceback(
File "/opt/conda/envs/py_3.12/lib/python3.12/site-packages/torch/_inductor/compile_fx.py", line 879, in _compile_fx_inner
mb_compiled_graph = fx_codegen_and_compile(
^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/conda/envs/py_3.12/lib/python3.12/site-packages/torch/_inductor/compile_fx.py", line 1495, in fx_codegen_and_compile
return scheme.codegen_and_compile(gm, example_inputs, inputs_to_check, graph_kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/conda/envs/py_3.12/lib/python3.12/site-packages/torch/_inductor/compile_fx.py", line 1382, in codegen_and_compile
compiled_module = graph.compile_to_module()
^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/conda/envs/py_3.12/lib/python3.12/site-packages/torch/_inductor/graph.py", line 2234, in compile_to_module
return self._compile_to_module()
^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/conda/envs/py_3.12/lib/python3.12/site-packages/torch/_inductor/graph.py", line 2281, in _compile_to_module
mod = PyCodeCache.load_by_key_path(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/conda/envs/py_3.12/lib/python3.12/site-packages/torch/_inductor/codecache.py", line 3006, in load_by_key_path
mod = _reload_python_module(key, path, set_sys_modules=in_toplevel)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/conda/envs/py_3.12/lib/python3.12/site-packages/torch/_inductor/runtime/compile_tasks.py", line 31, in _reload_python_module
exec(code, mod.__dict__, mod.__dict__)
File "/tmp/tmpi6ao_3ns/v7/cv7xqtvkgaesdn45yqpwu37ajtgzz4ft7ceebmg7nsqljsrxokbl.py", line 75, in <module>
async_compile.wait(globals())
File "/opt/conda/envs/py_3.12/lib/python3.12/site-packages/torch/_inductor/async_compile.py", line 448, in wait
self._wait_futures(scope)
File "/opt/conda/envs/py_3.12/lib/python3.12/site-packages/torch/_inductor/async_compile.py", line 468, in _wait_futures
scope[key] = result.result()
^^^^^^^^^^^^^^^
File "/opt/conda/envs/py_3.12/lib/python3.12/site-packages/torch/_inductor/codecache.py", line 3508, in result
return self.result_fn()
^^^^^^^^^^^^^^^^
File "/opt/conda/envs/py_3.12/lib/python3.12/site-packages/torch/_inductor/async_compile.py", line 343, in get_result
kernel.precompile(
File "/opt/conda/envs/py_3.12/lib/python3.12/site-packages/torch/_inductor/runtime/triton_heuristics.py", line 323, in precompile
self._make_launchers()
File "/opt/conda/envs/py_3.12/lib/python3.12/site-packages/torch/_inductor/runtime/triton_heuristics.py", line 480, in _make_launchers
launchers.append(result.make_launcher())
^^^^^^^^^^^^^^^^^^^^^^
File "/opt/conda/envs/py_3.12/lib/python3.12/site-packages/torch/_inductor/runtime/triton_heuristics.py", line 1277, in make_launcher
self.reload_cubin_path()
File "/opt/conda/envs/py_3.12/lib/python3.12/site-packages/torch/_inductor/runtime/triton_heuristics.py", line 1269, in reload_cubin_path
raise RuntimeError(
torch._inductor.exc.InductorError: RuntimeError: ('Cubin file saved by TritonBundler not found at %s', '/tmp/tmpiaqyjy3o/triton/G6G5HH47IDRV6T6XU2RQH66DUCWI2S7FRG6TNM4JKX4D7MAQLZQA/triton_poi_fused_flip_0.cubin')
Set TORCHDYNAMO_VERBOSE=1 for the internal stack trace (please do this especially if you're reporting a bug to PyTorch). For even more developer context, set TORCH_LOGS="+dynamo"
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "/opt/conda/envs/py_3.12/lib/python3.12/site-packages/torch/testing/_internal/common_utils.py", line 3154, in wrapper
method(*args, **kwargs)
File "/opt/conda/envs/py_3.12/lib/python3.12/site-packages/torch/testing/_internal/common_utils.py", line 3154, in wrapper
method(*args, **kwargs)
File "/opt/conda/envs/py_3.12/lib/python3.12/site-packages/torch/testing/_internal/common_device_type.py", line 426, in instantiated_test
result = test(self, **param_kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/conda/envs/py_3.12/lib/python3.12/site-packages/torch/testing/_internal/common_utils.py", line 1612, in wrapper
fn(*args, **kwargs)
File "/opt/conda/envs/py_3.12/lib/python3.12/site-packages/torch/testing/_internal/common_device_type.py", line 1147, in test_wrapper
raise e_tracked from e
Exception: Caused by sample input at index 0: SampleInput(input=Tensor[size=(5, 10, 5), device="cuda:0", dtype=torch.float16], args=(), kwargs={}, broadcasts_input=False, name='')
To execute this test, run the following from the base repo dir:
PYTORCH_OPINFO_SAMPLE_INPUT_INDEX=0 python test/inductor/test_torchinductor_opinfo.py TestInductorOpInfoCUDA.test_comprehensive_fliplr_cuda_float16
This message can be suppressed by setting PYTORCH_PRINT_REPRO_ON_FAILURE=0
```
</details>
Test file path: `inductor/test_torchinductor_opinfo.py`
cc @ezyang @gchanan @zou3519 @kadeng @msaroufim @clee2000 @chauhang @penguinwu @voznesenskym @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @chenyang78 @muchulee8 @amjames @aakhundov
| true
|
3,038,373,932
|
DISABLED test_comprehensive_rot90_cuda_float32 (__main__.TestInductorOpInfoCUDA)
|
pytorch-bot[bot]
|
open
|
[
"high priority",
"triaged",
"module: flaky-tests",
"skipped",
"oncall: pt2",
"module: inductor"
] | 5
|
NONE
|
Platforms: inductor
This test was disabled because it is failing in CI. See [recent examples](https://hud.pytorch.org/flakytest?name=test_comprehensive_rot90_cuda_float32&suite=TestInductorOpInfoCUDA&limit=100) and the most recent trunk [workflow logs](https://github.com/pytorch/pytorch/runs/41618142889).
Over the past 3 hours, it has been determined flaky in 3 workflow(s) with 3 failures and 3 successes.
**Debugging instructions (after clicking on the recent samples link):**
DO NOT ASSUME THINGS ARE OKAY IF THE CI IS GREEN. We now shield flaky tests from developers so CI will thus be green but it will be harder to parse the logs.
To find relevant log snippets:
1. Click on the workflow logs linked above
2. Click on the Test step of the job so that it is expanded. Otherwise, the grepping will not work.
3. Grep for `test_comprehensive_rot90_cuda_float32`
4. There should be several instances run (as flaky tests are rerun in CI) from which you can study the logs.
<details><summary>Sample error message</summary>
```
Traceback (most recent call last):
File "/opt/conda/envs/py_3.12/lib/python3.12/site-packages/torch/testing/_internal/common_device_type.py", line 1135, in test_wrapper
return test(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^
File "/opt/conda/envs/py_3.12/lib/python3.12/site-packages/torch/testing/_internal/common_device_type.py", line 1434, in only_fn
return fn(self, *args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/conda/envs/py_3.12/lib/python3.12/site-packages/torch/testing/_internal/common_utils.py", line 2291, in wrapper
fn(*args, **kwargs)
File "/opt/conda/envs/py_3.12/lib/python3.12/site-packages/torch/testing/_internal/common_device_type.py", line 1215, in dep_fn
return fn(slf, *args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/conda/envs/py_3.12/lib/python3.12/site-packages/torch/testing/_internal/common_device_type.py", line 1215, in dep_fn
return fn(slf, *args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/conda/envs/py_3.12/lib/python3.12/site-packages/torch/testing/_internal/common_device_type.py", line 1215, in dep_fn
return fn(slf, *args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/conda/envs/py_3.12/lib/python3.12/site-packages/torch/testing/_internal/common_utils.py", line 1612, in wrapper
fn(*args, **kwargs)
File "/opt/conda/envs/py_3.12/lib/python3.12/site-packages/torch/testing/_internal/common_utils.py", line 1534, in wrapper
fn(*args, **kwargs)
File "/opt/conda/envs/py_3.12/lib/python3.12/unittest/mock.py", line 1396, in patched
return func(*newargs, **newkeywargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/conda/envs/py_3.12/lib/python3.12/contextlib.py", line 81, in inner
return func(*args, **kwds)
^^^^^^^^^^^^^^^^^^^
File "/opt/conda/envs/py_3.12/lib/python3.12/contextlib.py", line 81, in inner
return func(*args, **kwds)
^^^^^^^^^^^^^^^^^^^
File "/opt/conda/envs/py_3.12/lib/python3.12/contextlib.py", line 81, in inner
return func(*args, **kwds)
^^^^^^^^^^^^^^^^^^^
File "/var/lib/jenkins/workspace/test/inductor/test_torchinductor_opinfo.py", line 962, in inner
raise e
File "/var/lib/jenkins/workspace/test/inductor/test_torchinductor_opinfo.py", line 954, in inner
fn(self, device, dtype, op)
File "/var/lib/jenkins/workspace/test/inductor/test_torchinductor_opinfo.py", line 1207, in test_comprehensive
raise e
File "/var/lib/jenkins/workspace/test/inductor/test_torchinductor_opinfo.py", line 1182, in test_comprehensive
self.check_model_gpu(
File "/opt/conda/envs/py_3.12/lib/python3.12/contextlib.py", line 81, in inner
return func(*args, **kwds)
^^^^^^^^^^^^^^^^^^^
File "/var/lib/jenkins/workspace/test/inductor/test_torchinductor.py", line 657, in check_model_gpu
check_model(
File "/var/lib/jenkins/workspace/test/inductor/test_torchinductor.py", line 608, in check_model
actual_grad = compute_grads(example_inputs, kwargs, actual, grads)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/var/lib/jenkins/workspace/test/inductor/test_torchinductor.py", line 406, in compute_grads
return torch.autograd.grad(
^^^^^^^^^^^^^^^^^^^^
File "/opt/conda/envs/py_3.12/lib/python3.12/site-packages/torch/autograd/__init__.py", line 503, in grad
result = _engine_run_backward(
^^^^^^^^^^^^^^^^^^^^^
File "/opt/conda/envs/py_3.12/lib/python3.12/site-packages/torch/autograd/graph.py", line 824, in _engine_run_backward
return Variable._execution_engine.run_backward( # Calls into the C++ engine to run the backward pass
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/conda/envs/py_3.12/lib/python3.12/site-packages/torch/autograd/function.py", line 307, in apply
return user_fn(self, *args)
^^^^^^^^^^^^^^^^^^^^
File "/opt/conda/envs/py_3.12/lib/python3.12/site-packages/torch/_functorch/_aot_autograd/runtime_wrappers.py", line 2191, in backward
return impl_fn()
^^^^^^^^^
File "/opt/conda/envs/py_3.12/lib/python3.12/site-packages/torch/_functorch/_aot_autograd/runtime_wrappers.py", line 2177, in impl_fn
out = CompiledFunction._backward_impl(ctx, all_args)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/conda/envs/py_3.12/lib/python3.12/site-packages/torch/_functorch/_aot_autograd/runtime_wrappers.py", line 2272, in _backward_impl
CompiledFunction.compiled_bw = aot_config.bw_compiler(
^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/conda/envs/py_3.12/lib/python3.12/site-packages/torch/_functorch/aot_autograd.py", line 483, in __call__
return self.compiler_fn(gm, example_inputs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/conda/envs/py_3.12/lib/python3.12/site-packages/torch/_dynamo/backends/common.py", line 73, in _wrapped_bw_compiler
disable(
File "/opt/conda/envs/py_3.12/lib/python3.12/site-packages/torch/_dynamo/eval_frame.py", line 857, in _fn
return fn(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^
File "/opt/conda/envs/py_3.12/lib/python3.12/site-packages/torch/_utils_internal.py", line 97, in wrapper_function
return function(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/conda/envs/py_3.12/lib/python3.12/site-packages/torch/_inductor/compile_fx.py", line 2242, in bw_compiler
return inner_compile(
^^^^^^^^^^^^^^
File "/opt/conda/envs/py_3.12/lib/python3.12/site-packages/torch/_inductor/compile_fx.py", line 727, in compile_fx_inner
return wrap_compiler_debug(_compile_fx_inner, compiler_name="inductor")(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/conda/envs/py_3.12/lib/python3.12/site-packages/torch/_dynamo/repro/after_aot.py", line 124, in debug_wrapper
inner_compiled_fn = compiler_fn(gm, example_inputs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/conda/envs/py_3.12/lib/python3.12/site-packages/torch/_inductor/compile_fx.py", line 845, in _compile_fx_inner
mb_compiled_graph, cache_info = FxGraphCache.load_with_key(
^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/conda/envs/py_3.12/lib/python3.12/site-packages/torch/_inductor/codecache.py", line 1405, in load_with_key
compiled_graph, cache_info = FxGraphCache._lookup_graph(
^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/conda/envs/py_3.12/lib/python3.12/site-packages/torch/_inductor/codecache.py", line 1156, in _lookup_graph
artifact_path = graph.after_deserialization(constants)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/conda/envs/py_3.12/lib/python3.12/site-packages/torch/_inductor/output_code.py", line 709, in after_deserialization
code_cache = PyCodeCache.load_by_key_path(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/conda/envs/py_3.12/lib/python3.12/site-packages/torch/_inductor/codecache.py", line 3006, in load_by_key_path
mod = _reload_python_module(key, path, set_sys_modules=in_toplevel)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/conda/envs/py_3.12/lib/python3.12/site-packages/torch/_inductor/runtime/compile_tasks.py", line 31, in _reload_python_module
exec(code, mod.__dict__, mod.__dict__)
File "/tmp/tmpwlisbobx/xy/cxy6fehopxvinfubvfwaz4dk7rh2zk6ivtm5q4j6shyfkp6w7s7n.py", line 73, in <module>
async_compile.wait(globals())
File "/opt/conda/envs/py_3.12/lib/python3.12/site-packages/torch/_inductor/async_compile.py", line 448, in wait
self._wait_futures(scope)
File "/opt/conda/envs/py_3.12/lib/python3.12/site-packages/torch/_inductor/async_compile.py", line 468, in _wait_futures
scope[key] = result.result()
^^^^^^^^^^^^^^^
File "/opt/conda/envs/py_3.12/lib/python3.12/site-packages/torch/_inductor/codecache.py", line 3528, in result
self.static_autotuner.precompile( # type: ignore[union-attr]
File "/opt/conda/envs/py_3.12/lib/python3.12/site-packages/torch/_inductor/runtime/triton_heuristics.py", line 323, in precompile
self._make_launchers()
File "/opt/conda/envs/py_3.12/lib/python3.12/site-packages/torch/_inductor/runtime/triton_heuristics.py", line 480, in _make_launchers
launchers.append(result.make_launcher())
^^^^^^^^^^^^^^^^^^^^^^
File "/opt/conda/envs/py_3.12/lib/python3.12/site-packages/torch/_inductor/runtime/triton_heuristics.py", line 1277, in make_launcher
self.reload_cubin_path()
File "/opt/conda/envs/py_3.12/lib/python3.12/site-packages/torch/_inductor/runtime/triton_heuristics.py", line 1269, in reload_cubin_path
raise RuntimeError(
RuntimeError: ('Cubin file saved by TritonBundler not found at %s', '/tmp/tmpi__u7hno/triton/VMGVS3UAKQL4ALQOTSOGKZCQWKNLXOVV4AZQNM2MOXXQUVTFSYNQ/triton_poi_fused_rot90_0.cubin')
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "/opt/conda/envs/py_3.12/lib/python3.12/site-packages/torch/testing/_internal/common_utils.py", line 3154, in wrapper
method(*args, **kwargs)
File "/opt/conda/envs/py_3.12/lib/python3.12/site-packages/torch/testing/_internal/common_utils.py", line 3154, in wrapper
method(*args, **kwargs)
File "/opt/conda/envs/py_3.12/lib/python3.12/site-packages/torch/testing/_internal/common_device_type.py", line 426, in instantiated_test
result = test(self, **param_kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/conda/envs/py_3.12/lib/python3.12/site-packages/torch/testing/_internal/common_utils.py", line 1612, in wrapper
fn(*args, **kwargs)
File "/opt/conda/envs/py_3.12/lib/python3.12/site-packages/torch/testing/_internal/common_device_type.py", line 1147, in test_wrapper
raise e_tracked from e
Exception: Caused by sample input at index 26: SampleInput(input=Tensor[size=(5, 5, 5), device="cuda:0", dtype=torch.float32], args=(3,(1,2)), kwargs={}, broadcasts_input=False, name='')
To execute this test, run the following from the base repo dir:
PYTORCH_OPINFO_SAMPLE_INPUT_INDEX=26 python test/inductor/test_torchinductor_opinfo.py TestInductorOpInfoCUDA.test_comprehensive_rot90_cuda_float32
This message can be suppressed by setting PYTORCH_PRINT_REPRO_ON_FAILURE=0
```
</details>
Test file path: `inductor/test_torchinductor_opinfo.py`
cc @ezyang @gchanan @zou3519 @kadeng @msaroufim @clee2000 @chauhang @penguinwu @voznesenskym @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @chenyang78 @muchulee8 @amjames @aakhundov
| true
|
3,038,373,442
|
DISABLED test_comprehensive_unbind_copy_cuda_int32 (__main__.TestInductorOpInfoCUDA)
|
pytorch-bot[bot]
|
open
|
[
"high priority",
"triaged",
"module: flaky-tests",
"skipped",
"oncall: pt2",
"module: inductor"
] | 14
|
NONE
|
Platforms: inductor
This test was disabled because it is failing in CI. See [recent examples](https://hud.pytorch.org/flakytest?name=test_comprehensive_unbind_copy_cuda_int32&suite=TestInductorOpInfoCUDA&limit=100) and the most recent trunk [workflow logs](https://github.com/pytorch/pytorch/runs/41618142889).
Over the past 3 hours, it has been determined flaky in 3 workflow(s) with 3 failures and 3 successes.
**Debugging instructions (after clicking on the recent samples link):**
DO NOT ASSUME THINGS ARE OKAY IF THE CI IS GREEN. We now shield flaky tests from developers so CI will thus be green but it will be harder to parse the logs.
To find relevant log snippets:
1. Click on the workflow logs linked above
2. Click on the Test step of the job so that it is expanded. Otherwise, the grepping will not work.
3. Grep for `test_comprehensive_unbind_copy_cuda_int32`
4. There should be several instances run (as flaky tests are rerun in CI) from which you can study the logs.
<details><summary>Sample error message</summary>
```
Traceback (most recent call last):
File "/opt/conda/envs/py_3.12/lib/python3.12/site-packages/torch/testing/_internal/common_device_type.py", line 1135, in test_wrapper
return test(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^
File "/opt/conda/envs/py_3.12/lib/python3.12/site-packages/torch/testing/_internal/common_device_type.py", line 1434, in only_fn
return fn(self, *args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/conda/envs/py_3.12/lib/python3.12/site-packages/torch/testing/_internal/common_utils.py", line 2291, in wrapper
fn(*args, **kwargs)
File "/opt/conda/envs/py_3.12/lib/python3.12/site-packages/torch/testing/_internal/common_device_type.py", line 1215, in dep_fn
return fn(slf, *args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/conda/envs/py_3.12/lib/python3.12/site-packages/torch/testing/_internal/common_device_type.py", line 1215, in dep_fn
return fn(slf, *args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/conda/envs/py_3.12/lib/python3.12/site-packages/torch/testing/_internal/common_device_type.py", line 1215, in dep_fn
return fn(slf, *args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/conda/envs/py_3.12/lib/python3.12/site-packages/torch/testing/_internal/common_utils.py", line 1612, in wrapper
fn(*args, **kwargs)
File "/opt/conda/envs/py_3.12/lib/python3.12/site-packages/torch/testing/_internal/common_utils.py", line 1534, in wrapper
fn(*args, **kwargs)
File "/opt/conda/envs/py_3.12/lib/python3.12/unittest/mock.py", line 1396, in patched
return func(*newargs, **newkeywargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/conda/envs/py_3.12/lib/python3.12/contextlib.py", line 81, in inner
return func(*args, **kwds)
^^^^^^^^^^^^^^^^^^^
File "/opt/conda/envs/py_3.12/lib/python3.12/contextlib.py", line 81, in inner
return func(*args, **kwds)
^^^^^^^^^^^^^^^^^^^
File "/opt/conda/envs/py_3.12/lib/python3.12/contextlib.py", line 81, in inner
return func(*args, **kwds)
^^^^^^^^^^^^^^^^^^^
File "/var/lib/jenkins/workspace/test/inductor/test_torchinductor_opinfo.py", line 962, in inner
raise e
File "/var/lib/jenkins/workspace/test/inductor/test_torchinductor_opinfo.py", line 954, in inner
fn(self, device, dtype, op)
File "/var/lib/jenkins/workspace/test/inductor/test_torchinductor_opinfo.py", line 1207, in test_comprehensive
raise e
File "/var/lib/jenkins/workspace/test/inductor/test_torchinductor_opinfo.py", line 1182, in test_comprehensive
self.check_model_gpu(
File "/opt/conda/envs/py_3.12/lib/python3.12/contextlib.py", line 81, in inner
return func(*args, **kwds)
^^^^^^^^^^^^^^^^^^^
File "/var/lib/jenkins/workspace/test/inductor/test_torchinductor.py", line 657, in check_model_gpu
check_model(
File "/var/lib/jenkins/workspace/test/inductor/test_torchinductor.py", line 498, in check_model
actual = run(*example_inputs, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/conda/envs/py_3.12/lib/python3.12/site-packages/torch/_dynamo/eval_frame.py", line 676, in _fn
raise e.remove_dynamo_frames() from None # see TORCHDYNAMO_VERBOSE=1
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/conda/envs/py_3.12/lib/python3.12/site-packages/torch/_inductor/compile_fx.py", line 895, in _compile_fx_inner
raise InductorError(e, currentframe()).with_traceback(
File "/opt/conda/envs/py_3.12/lib/python3.12/site-packages/torch/_inductor/compile_fx.py", line 879, in _compile_fx_inner
mb_compiled_graph = fx_codegen_and_compile(
^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/conda/envs/py_3.12/lib/python3.12/site-packages/torch/_inductor/compile_fx.py", line 1495, in fx_codegen_and_compile
return scheme.codegen_and_compile(gm, example_inputs, inputs_to_check, graph_kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/conda/envs/py_3.12/lib/python3.12/site-packages/torch/_inductor/compile_fx.py", line 1382, in codegen_and_compile
compiled_module = graph.compile_to_module()
^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/conda/envs/py_3.12/lib/python3.12/site-packages/torch/_inductor/graph.py", line 2234, in compile_to_module
return self._compile_to_module()
^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/conda/envs/py_3.12/lib/python3.12/site-packages/torch/_inductor/graph.py", line 2281, in _compile_to_module
mod = PyCodeCache.load_by_key_path(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/conda/envs/py_3.12/lib/python3.12/site-packages/torch/_inductor/codecache.py", line 3006, in load_by_key_path
mod = _reload_python_module(key, path, set_sys_modules=in_toplevel)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/conda/envs/py_3.12/lib/python3.12/site-packages/torch/_inductor/runtime/compile_tasks.py", line 31, in _reload_python_module
exec(code, mod.__dict__, mod.__dict__)
File "/tmp/tmpohkjxnik/5x/c5xmgpdu3ww4nicoiksxq4gdfib6o4ayh5ylqr63lnwg2nieqiqi.py", line 217, in <module>
async_compile.wait(globals())
File "/opt/conda/envs/py_3.12/lib/python3.12/site-packages/torch/_inductor/async_compile.py", line 448, in wait
self._wait_futures(scope)
File "/opt/conda/envs/py_3.12/lib/python3.12/site-packages/torch/_inductor/async_compile.py", line 468, in _wait_futures
scope[key] = result.result()
^^^^^^^^^^^^^^^
File "/opt/conda/envs/py_3.12/lib/python3.12/site-packages/torch/_inductor/codecache.py", line 3508, in result
return self.result_fn()
^^^^^^^^^^^^^^^^
File "/opt/conda/envs/py_3.12/lib/python3.12/site-packages/torch/_inductor/async_compile.py", line 343, in get_result
kernel.precompile(
File "/opt/conda/envs/py_3.12/lib/python3.12/site-packages/torch/_inductor/runtime/triton_heuristics.py", line 323, in precompile
self._make_launchers()
File "/opt/conda/envs/py_3.12/lib/python3.12/site-packages/torch/_inductor/runtime/triton_heuristics.py", line 480, in _make_launchers
launchers.append(result.make_launcher())
^^^^^^^^^^^^^^^^^^^^^^
File "/opt/conda/envs/py_3.12/lib/python3.12/site-packages/torch/_inductor/runtime/triton_heuristics.py", line 1277, in make_launcher
self.reload_cubin_path()
File "/opt/conda/envs/py_3.12/lib/python3.12/site-packages/torch/_inductor/runtime/triton_heuristics.py", line 1269, in reload_cubin_path
raise RuntimeError(
torch._inductor.exc.InductorError: RuntimeError: ('Cubin file saved by TritonBundler not found at %s', '/tmp/tmpjluyy_jz/triton/YRN4HDZF6KRA3DC255SPSLVRBBJPH4HIRFM44HDPRAKPBFR3O2QA/triton_poi_fused_unbind_copy_0.cubin')
Set TORCHDYNAMO_VERBOSE=1 for the internal stack trace (please do this especially if you're reporting a bug to PyTorch). For even more developer context, set TORCH_LOGS="+dynamo"
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "/opt/conda/envs/py_3.12/lib/python3.12/site-packages/torch/testing/_internal/common_utils.py", line 3154, in wrapper
method(*args, **kwargs)
File "/opt/conda/envs/py_3.12/lib/python3.12/site-packages/torch/testing/_internal/common_utils.py", line 3154, in wrapper
method(*args, **kwargs)
File "/opt/conda/envs/py_3.12/lib/python3.12/site-packages/torch/testing/_internal/common_device_type.py", line 426, in instantiated_test
result = test(self, **param_kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/conda/envs/py_3.12/lib/python3.12/site-packages/torch/testing/_internal/common_utils.py", line 1612, in wrapper
fn(*args, **kwargs)
File "/opt/conda/envs/py_3.12/lib/python3.12/site-packages/torch/testing/_internal/common_device_type.py", line 1147, in test_wrapper
raise e_tracked from e
Exception: Caused by sample input at index 0: SampleInput(input=Tensor[size=(5,), device="cuda:0", dtype=torch.int32], args=(0), kwargs={}, broadcasts_input=False, name='')
To execute this test, run the following from the base repo dir:
PYTORCH_OPINFO_SAMPLE_INPUT_INDEX=0 python test/inductor/test_torchinductor_opinfo.py TestInductorOpInfoCUDA.test_comprehensive_unbind_copy_cuda_int32
This message can be suppressed by setting PYTORCH_PRINT_REPRO_ON_FAILURE=0
```
</details>
Test file path: `inductor/test_torchinductor_opinfo.py`
cc @ezyang @gchanan @zou3519 @kadeng @msaroufim @clee2000 @chauhang @penguinwu @voznesenskym @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @chenyang78 @muchulee8 @amjames @aakhundov
| true
|
3,038,373,416
|
DISABLED test_comprehensive_slice_scatter_cuda_bool (__main__.TestInductorOpInfoCUDA)
|
pytorch-bot[bot]
|
open
|
[
"high priority",
"triaged",
"module: flaky-tests",
"skipped",
"oncall: pt2",
"module: inductor"
] | 12
|
NONE
|
Platforms: inductor
This test was disabled because it is failing in CI. See [recent examples](https://hud.pytorch.org/flakytest?name=test_comprehensive_slice_scatter_cuda_bool&suite=TestInductorOpInfoCUDA&limit=100) and the most recent trunk [workflow logs](https://github.com/pytorch/pytorch/runs/41618142892).
Over the past 3 hours, it has been determined flaky in 3 workflow(s) with 3 failures and 3 successes.
**Debugging instructions (after clicking on the recent samples link):**
DO NOT ASSUME THINGS ARE OKAY IF THE CI IS GREEN. We now shield flaky tests from developers so CI will thus be green but it will be harder to parse the logs.
To find relevant log snippets:
1. Click on the workflow logs linked above
2. Click on the Test step of the job so that it is expanded. Otherwise, the grepping will not work.
3. Grep for `test_comprehensive_slice_scatter_cuda_bool`
4. There should be several instances run (as flaky tests are rerun in CI) from which you can study the logs.
<details><summary>Sample error message</summary>
```
Traceback (most recent call last):
File "/opt/conda/envs/py_3.12/lib/python3.12/site-packages/torch/testing/_internal/common_device_type.py", line 1135, in test_wrapper
return test(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^
File "/opt/conda/envs/py_3.12/lib/python3.12/site-packages/torch/testing/_internal/common_device_type.py", line 1434, in only_fn
return fn(self, *args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/conda/envs/py_3.12/lib/python3.12/site-packages/torch/testing/_internal/common_utils.py", line 2291, in wrapper
fn(*args, **kwargs)
File "/opt/conda/envs/py_3.12/lib/python3.12/site-packages/torch/testing/_internal/common_device_type.py", line 1215, in dep_fn
return fn(slf, *args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/conda/envs/py_3.12/lib/python3.12/site-packages/torch/testing/_internal/common_device_type.py", line 1215, in dep_fn
return fn(slf, *args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/conda/envs/py_3.12/lib/python3.12/site-packages/torch/testing/_internal/common_device_type.py", line 1215, in dep_fn
return fn(slf, *args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/conda/envs/py_3.12/lib/python3.12/site-packages/torch/testing/_internal/common_utils.py", line 1612, in wrapper
fn(*args, **kwargs)
File "/opt/conda/envs/py_3.12/lib/python3.12/site-packages/torch/testing/_internal/common_utils.py", line 1534, in wrapper
fn(*args, **kwargs)
File "/opt/conda/envs/py_3.12/lib/python3.12/unittest/mock.py", line 1396, in patched
return func(*newargs, **newkeywargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/conda/envs/py_3.12/lib/python3.12/contextlib.py", line 81, in inner
return func(*args, **kwds)
^^^^^^^^^^^^^^^^^^^
File "/opt/conda/envs/py_3.12/lib/python3.12/contextlib.py", line 81, in inner
return func(*args, **kwds)
^^^^^^^^^^^^^^^^^^^
File "/opt/conda/envs/py_3.12/lib/python3.12/contextlib.py", line 81, in inner
return func(*args, **kwds)
^^^^^^^^^^^^^^^^^^^
File "/var/lib/jenkins/workspace/test/inductor/test_torchinductor_opinfo.py", line 962, in inner
raise e
File "/var/lib/jenkins/workspace/test/inductor/test_torchinductor_opinfo.py", line 954, in inner
fn(self, device, dtype, op)
File "/var/lib/jenkins/workspace/test/inductor/test_torchinductor_opinfo.py", line 1207, in test_comprehensive
raise e
File "/var/lib/jenkins/workspace/test/inductor/test_torchinductor_opinfo.py", line 1182, in test_comprehensive
self.check_model_gpu(
File "/opt/conda/envs/py_3.12/lib/python3.12/contextlib.py", line 81, in inner
return func(*args, **kwds)
^^^^^^^^^^^^^^^^^^^
File "/var/lib/jenkins/workspace/test/inductor/test_torchinductor.py", line 657, in check_model_gpu
check_model(
File "/var/lib/jenkins/workspace/test/inductor/test_torchinductor.py", line 498, in check_model
actual = run(*example_inputs, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/conda/envs/py_3.12/lib/python3.12/site-packages/torch/_dynamo/eval_frame.py", line 676, in _fn
raise e.remove_dynamo_frames() from None # see TORCHDYNAMO_VERBOSE=1
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/conda/envs/py_3.12/lib/python3.12/site-packages/torch/_inductor/compile_fx.py", line 895, in _compile_fx_inner
raise InductorError(e, currentframe()).with_traceback(
File "/opt/conda/envs/py_3.12/lib/python3.12/site-packages/torch/_inductor/compile_fx.py", line 879, in _compile_fx_inner
mb_compiled_graph = fx_codegen_and_compile(
^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/conda/envs/py_3.12/lib/python3.12/site-packages/torch/_inductor/compile_fx.py", line 1495, in fx_codegen_and_compile
return scheme.codegen_and_compile(gm, example_inputs, inputs_to_check, graph_kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/conda/envs/py_3.12/lib/python3.12/site-packages/torch/_inductor/compile_fx.py", line 1382, in codegen_and_compile
compiled_module = graph.compile_to_module()
^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/conda/envs/py_3.12/lib/python3.12/site-packages/torch/_inductor/graph.py", line 2234, in compile_to_module
return self._compile_to_module()
^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/conda/envs/py_3.12/lib/python3.12/site-packages/torch/_inductor/graph.py", line 2281, in _compile_to_module
mod = PyCodeCache.load_by_key_path(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/conda/envs/py_3.12/lib/python3.12/site-packages/torch/_inductor/codecache.py", line 3006, in load_by_key_path
mod = _reload_python_module(key, path, set_sys_modules=in_toplevel)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/conda/envs/py_3.12/lib/python3.12/site-packages/torch/_inductor/runtime/compile_tasks.py", line 31, in _reload_python_module
exec(code, mod.__dict__, mod.__dict__)
File "/tmp/tmpyw4zwim5/ze/czewho5vkgkrqtofsza4fhkp5ekgi63wwmfq2gnsipwipcbbbwqx.py", line 71, in <module>
async_compile.wait(globals())
File "/opt/conda/envs/py_3.12/lib/python3.12/site-packages/torch/_inductor/async_compile.py", line 448, in wait
self._wait_futures(scope)
File "/opt/conda/envs/py_3.12/lib/python3.12/site-packages/torch/_inductor/async_compile.py", line 468, in _wait_futures
scope[key] = result.result()
^^^^^^^^^^^^^^^
File "/opt/conda/envs/py_3.12/lib/python3.12/site-packages/torch/_inductor/codecache.py", line 3508, in result
return self.result_fn()
^^^^^^^^^^^^^^^^
File "/opt/conda/envs/py_3.12/lib/python3.12/site-packages/torch/_inductor/async_compile.py", line 343, in get_result
kernel.precompile(
File "/opt/conda/envs/py_3.12/lib/python3.12/site-packages/torch/_inductor/runtime/triton_heuristics.py", line 323, in precompile
self._make_launchers()
File "/opt/conda/envs/py_3.12/lib/python3.12/site-packages/torch/_inductor/runtime/triton_heuristics.py", line 480, in _make_launchers
launchers.append(result.make_launcher())
^^^^^^^^^^^^^^^^^^^^^^
File "/opt/conda/envs/py_3.12/lib/python3.12/site-packages/torch/_inductor/runtime/triton_heuristics.py", line 1277, in make_launcher
self.reload_cubin_path()
File "/opt/conda/envs/py_3.12/lib/python3.12/site-packages/torch/_inductor/runtime/triton_heuristics.py", line 1269, in reload_cubin_path
raise RuntimeError(
torch._inductor.exc.InductorError: RuntimeError: ('Cubin file saved by TritonBundler not found at %s', '/tmp/tmp8a7m1g0u/triton/FI3E67QWFFTOXWHEZZOB7TYVFCQQZ5QIBA2JYP4XGAC7THDE34UA/triton_poi_fused_0.cubin')
Set TORCHDYNAMO_VERBOSE=1 for the internal stack trace (please do this especially if you're reporting a bug to PyTorch). For even more developer context, set TORCH_LOGS="+dynamo"
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "/opt/conda/envs/py_3.12/lib/python3.12/site-packages/torch/testing/_internal/common_utils.py", line 3154, in wrapper
method(*args, **kwargs)
File "/opt/conda/envs/py_3.12/lib/python3.12/site-packages/torch/testing/_internal/common_utils.py", line 3154, in wrapper
method(*args, **kwargs)
File "/opt/conda/envs/py_3.12/lib/python3.12/site-packages/torch/testing/_internal/common_device_type.py", line 426, in instantiated_test
result = test(self, **param_kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/conda/envs/py_3.12/lib/python3.12/site-packages/torch/testing/_internal/common_utils.py", line 1612, in wrapper
fn(*args, **kwargs)
File "/opt/conda/envs/py_3.12/lib/python3.12/site-packages/torch/testing/_internal/common_device_type.py", line 1147, in test_wrapper
raise e_tracked from e
Exception: Caused by sample input at index 0: SampleInput(input=Tensor[size=(20, 20, 20), device="cuda:0", dtype=torch.bool], args=(Tensor[size=(20, 20, 20), device="cuda:0", dtype=torch.bool],0,0,20,1), kwargs={}, broadcasts_input=False, name='')
To execute this test, run the following from the base repo dir:
PYTORCH_OPINFO_SAMPLE_INPUT_INDEX=0 python test/inductor/test_torchinductor_opinfo.py TestInductorOpInfoCUDA.test_comprehensive_slice_scatter_cuda_bool
This message can be suppressed by setting PYTORCH_PRINT_REPRO_ON_FAILURE=0
```
</details>
Test file path: `inductor/test_torchinductor_opinfo.py`
cc @ezyang @gchanan @zou3519 @kadeng @msaroufim @clee2000 @chauhang @penguinwu @voznesenskym @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @chenyang78 @muchulee8 @amjames @aakhundov
| true
|
3,038,373,413
|
DISABLED test_comprehensive_linalg_pinv_singular_cuda_float64 (__main__.TestInductorOpInfoCUDA)
|
pytorch-bot[bot]
|
open
|
[
"triaged",
"module: flaky-tests",
"skipped",
"oncall: pt2",
"module: inductor"
] | 1
|
NONE
|
Platforms: linux, slow
This test was disabled because it is failing in CI. See [recent examples](https://hud.pytorch.org/flakytest?name=test_comprehensive_linalg_pinv_singular_cuda_float64&suite=TestInductorOpInfoCUDA&limit=100) and the most recent trunk [workflow logs](https://github.com/pytorch/pytorch/runs/41618142832).
Over the past 3 hours, it has been determined flaky in 3 workflow(s) with 3 failures and 3 successes.
**Debugging instructions (after clicking on the recent samples link):**
DO NOT ASSUME THINGS ARE OKAY IF THE CI IS GREEN. We now shield flaky tests from developers so CI will thus be green but it will be harder to parse the logs.
To find relevant log snippets:
1. Click on the workflow logs linked above
2. Click on the Test step of the job so that it is expanded. Otherwise, the grepping will not work.
3. Grep for `test_comprehensive_linalg_pinv_singular_cuda_float64`
4. There should be several instances run (as flaky tests are rerun in CI) from which you can study the logs.
<details><summary>Sample error message</summary>
```
Traceback (most recent call last):
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_device_type.py", line 1135, in test_wrapper
return test(*args, **kwargs)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_device_type.py", line 1434, in only_fn
return fn(self, *args, **kwargs)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 2291, in wrapper
fn(*args, **kwargs)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_device_type.py", line 1215, in dep_fn
return fn(slf, *args, **kwargs)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_device_type.py", line 1215, in dep_fn
return fn(slf, *args, **kwargs)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_device_type.py", line 1215, in dep_fn
return fn(slf, *args, **kwargs)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 1612, in wrapper
fn(*args, **kwargs)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 1534, in wrapper
fn(*args, **kwargs)
File "/opt/conda/envs/py_3.10/lib/python3.10/unittest/mock.py", line 1379, in patched
return func(*newargs, **newkeywargs)
File "/opt/conda/envs/py_3.10/lib/python3.10/contextlib.py", line 79, in inner
return func(*args, **kwds)
File "/opt/conda/envs/py_3.10/lib/python3.10/contextlib.py", line 79, in inner
return func(*args, **kwds)
File "/opt/conda/envs/py_3.10/lib/python3.10/contextlib.py", line 79, in inner
return func(*args, **kwds)
File "/var/lib/jenkins/workspace/test/inductor/test_torchinductor_opinfo.py", line 962, in inner
raise e
File "/var/lib/jenkins/workspace/test/inductor/test_torchinductor_opinfo.py", line 954, in inner
fn(self, device, dtype, op)
File "/var/lib/jenkins/workspace/test/inductor/test_torchinductor_opinfo.py", line 1207, in test_comprehensive
raise e
File "/var/lib/jenkins/workspace/test/inductor/test_torchinductor_opinfo.py", line 1182, in test_comprehensive
self.check_model_gpu(
File "/opt/conda/envs/py_3.10/lib/python3.10/contextlib.py", line 79, in inner
return func(*args, **kwds)
File "/var/lib/jenkins/workspace/test/inductor/test_torchinductor.py", line 657, in check_model_gpu
check_model(
File "/var/lib/jenkins/workspace/test/inductor/test_torchinductor.py", line 608, in check_model
actual_grad = compute_grads(example_inputs, kwargs, actual, grads)
File "/var/lib/jenkins/workspace/test/inductor/test_torchinductor.py", line 406, in compute_grads
return torch.autograd.grad(
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/autograd/__init__.py", line 503, in grad
result = _engine_run_backward(
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/autograd/graph.py", line 824, in _engine_run_backward
return Variable._execution_engine.run_backward( # Calls into the C++ engine to run the backward pass
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/autograd/function.py", line 307, in apply
return user_fn(self, *args)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_functorch/_aot_autograd/runtime_wrappers.py", line 2191, in backward
return impl_fn()
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_functorch/_aot_autograd/runtime_wrappers.py", line 2177, in impl_fn
out = CompiledFunction._backward_impl(ctx, all_args)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_functorch/_aot_autograd/runtime_wrappers.py", line 2272, in _backward_impl
CompiledFunction.compiled_bw = aot_config.bw_compiler(
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_functorch/aot_autograd.py", line 483, in __call__
return self.compiler_fn(gm, example_inputs)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_dynamo/backends/common.py", line 73, in _wrapped_bw_compiler
disable(
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_dynamo/eval_frame.py", line 857, in _fn
return fn(*args, **kwargs)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_utils_internal.py", line 97, in wrapper_function
return function(*args, **kwargs)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_inductor/compile_fx.py", line 2242, in bw_compiler
return inner_compile(
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_inductor/compile_fx.py", line 727, in compile_fx_inner
return wrap_compiler_debug(_compile_fx_inner, compiler_name="inductor")(
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_dynamo/repro/after_aot.py", line 124, in debug_wrapper
inner_compiled_fn = compiler_fn(gm, example_inputs)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_inductor/compile_fx.py", line 895, in _compile_fx_inner
raise InductorError(e, currentframe()).with_traceback(
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_inductor/compile_fx.py", line 879, in _compile_fx_inner
mb_compiled_graph = fx_codegen_and_compile(
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_inductor/compile_fx.py", line 1495, in fx_codegen_and_compile
return scheme.codegen_and_compile(gm, example_inputs, inputs_to_check, graph_kwargs)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_inductor/compile_fx.py", line 1382, in codegen_and_compile
compiled_module = graph.compile_to_module()
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_inductor/graph.py", line 2234, in compile_to_module
return self._compile_to_module()
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_inductor/graph.py", line 2281, in _compile_to_module
mod = PyCodeCache.load_by_key_path(
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_inductor/codecache.py", line 3006, in load_by_key_path
mod = _reload_python_module(key, path, set_sys_modules=in_toplevel)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_inductor/runtime/compile_tasks.py", line 31, in _reload_python_module
exec(code, mod.__dict__, mod.__dict__)
File "/tmp/torchinductor_jenkins/om/comcbnipvgbb6fiijvservg6armihzkrk3dr3skq6gumgyax2pp6.py", line 135, in <module>
async_compile.wait(globals())
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_inductor/async_compile.py", line 448, in wait
self._wait_futures(scope)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_inductor/async_compile.py", line 468, in _wait_futures
scope[key] = result.result()
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_inductor/codecache.py", line 3508, in result
return self.result_fn()
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_inductor/async_compile.py", line 343, in get_result
kernel.precompile(
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_inductor/runtime/triton_heuristics.py", line 323, in precompile
self._make_launchers()
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_inductor/runtime/triton_heuristics.py", line 480, in _make_launchers
launchers.append(result.make_launcher())
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_inductor/runtime/triton_heuristics.py", line 1277, in make_launcher
self.reload_cubin_path()
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_inductor/runtime/triton_heuristics.py", line 1269, in reload_cubin_path
raise RuntimeError(
torch._inductor.exc.InductorError: RuntimeError: ('Cubin file saved by TritonBundler not found at %s', '/tmp/torchinductor_jenkins/triton/0/7JPCB3RLVLVTKKU632MTGXK4JII44YV6JDMMD5AKN6XD6QQYCQGQ/triton_poi_fused_sub_0.cubin')
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 3154, in wrapper
method(*args, **kwargs)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 3154, in wrapper
method(*args, **kwargs)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_device_type.py", line 426, in instantiated_test
result = test(self, **param_kwargs)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_device_type.py", line 1215, in dep_fn
return fn(slf, *args, **kwargs)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_device_type.py", line 1215, in dep_fn
return fn(slf, *args, **kwargs)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 2263, in wrapper
fn(*args, **kwargs)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 1612, in wrapper
fn(*args, **kwargs)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_device_type.py", line 1147, in test_wrapper
raise e_tracked from e
Exception: Caused by sample input at index 0: SampleInput(input=Tensor[size=(3, 0), device="cuda:0", dtype=torch.float64], args=TensorList[Tensor[size=(3, 0), device="cuda:0", dtype=torch.float64]], kwargs={}, broadcasts_input=False, name='')
To execute this test, run the following from the base repo dir:
PYTORCH_OPINFO_SAMPLE_INPUT_INDEX=0 PYTORCH_TEST_WITH_SLOW=1 PYTORCH_TEST_SKIP_FAST=1 python test/inductor/test_torchinductor_opinfo.py TestInductorOpInfoCUDA.test_comprehensive_linalg_pinv_singular_cuda_float64
This message can be suppressed by setting PYTORCH_PRINT_REPRO_ON_FAILURE=0
```
</details>
Test file path: `inductor/test_torchinductor_opinfo.py`
cc @clee2000 @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
| true
|
3,038,371,840
|
Pass UNINSTALL_DILL to docker build
|
cyyever
|
closed
|
[
"triaged",
"open source",
"Merged",
"ciflow/trunk",
"topic: not user facing"
] | 6
|
COLLABORATOR
|
`UNINSTALL_DILL` was not actually being passed through to the docker build before this change.
| true
|
3,038,326,479
|
Inconsistent export behavior for nonzero+grid_sample between CUDA and CPU/MPS backends
|
sachin-skyline
|
open
|
[
"oncall: pt2",
"oncall: export"
] | 1
|
NONE
|
### 🐛 Describe the bug
I am trying to `export` a model that contains a `nonzero` call followed by a `grid_sample` (for use in `aoti_compile_and_package`). When exporting for CPU or MPS, no error is raised, but when using CUDA, export fails with `torch._dynamo.exc.UserError: Could not guard on data-dependent expression Eq(2*u0, 0) (unhinted: Eq(2*u0, 0)). (Size-like symbols: u0)`.
Adding manual checks to the user model definition solves the issue, for example:
```python
torch._check(grid.shape[1] > 0)
torch._check(grid.shape[1] + (grid.shape[1] - 1) % grid.shape[1] < 2147483647)
```
but in actual model code, a couple more checks are required. I would expect the need for manual user checks to be consistent between the CPU/MPS and CUDA backends; in this case, requiring no checks would be the correct, or at least the ideal, behavior.
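For illustration, here is a minimal sketch of where such checks could sit, assuming they are placed in the model's `forward` immediately before the `grid_sample` call; the class name and exact placement are illustrative and mirror the reproducer below, not a confirmed fix for real model code:
```python
import torch
import torch.nn.functional as F


class TestWithChecks(torch.nn.Module):
    # Hypothetical variant of the reproducer with the manual checks added.
    def forward(self, x, pos):
        pos = torch.nonzero(pos[0, 0] > 0.5)
        grid = pos.to(x.dtype)[None, :, None, :]
        # Assumed placement: constrain the data-dependent dimension before
        # grid_sample so the exporter can resolve the Eq(2*u0, 0) guard.
        torch._check(grid.shape[1] > 0)
        torch._check(grid.shape[1] + (grid.shape[1] - 1) % grid.shape[1] < 2147483647)
        return F.grid_sample(x, grid, align_corners=False)
```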
Reproducer:
```python
import torch
class Test(torch.nn.Module):
def __init__(self):
super().__init__()
def forward(self, x, pos):
pos = torch.nonzero(pos[0, 0] > 0.5)
grid = pos.to(x.dtype)[None, :, None, :]
return torch.nn.functional.grid_sample(x, grid, align_corners=False)
device = "cuda"
torch.export.export(
Test(),
(
torch.randn(1, 1, 16, 16, device=device),
torch.randn(1, 1, 32, 32, device=device),
),
)
```
Running the unmodified reproducer with `TORCHDYNAMO_VERBOSE=1` produces:
```
W0504 23:29:04.745000 3388865 site-packages/torch/fx/experimental/symbolic_shapes.py:6679] [0/0] failed during evaluate_expr(Eq(2*u0, 0), hint=None, size_oblivious=False, forcing_spec=False
E0504 23:29:04.746000 3388865 site-packages/torch/fx/experimental/recording.py:299] [0/0] failed while running evaluate_expr(*(Eq(2*u0, 0), None, False, False), **{})
Traceback (most recent call last):
File "site-packages/torch/_dynamo/utils.py", line 3284, in run_node
return node.target(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "site-packages/torch/nn/functional.py", line 5023, in grid_sample
return torch.grid_sampler(input, grid, mode_enum, padding_mode_enum, align_corners)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "site-packages/torch/fx/experimental/sym_node.py", line 536, in guard_bool
r = self.evaluate()
^^^^^^^^^^^^^^^
File "site-packages/torch/fx/experimental/sym_node.py", line 510, in evaluate
return self.shape_env.evaluate_sym_node(self, size_oblivious)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "site-packages/torch/fx/experimental/symbolic_shapes.py", line 6655, in evaluate_sym_node
return self.evaluate_expr(
^^^^^^^^^^^^^^^^^^^
File "site-packages/torch/fx/experimental/recording.py", line 263, in wrapper
return retlog(fn(*args, **kwargs))
^^^^^^^^^^^^^^^^^^^
File "site-packages/torch/fx/experimental/symbolic_shapes.py", line 6671, in evaluate_expr
return self._evaluate_expr(
^^^^^^^^^^^^^^^^^^^^
File "site-packages/torch/fx/experimental/symbolic_shapes.py", line 6894, in _evaluate_expr
raise self._make_data_dependent_error(
torch.fx.experimental.symbolic_shapes.GuardOnDataDependentSymNode: Could not guard on data-dependent expression Eq(2*u0, 0) (unhinted: Eq(2*u0, 0)). (Size-like symbols: u0)
ATTENTION: guard_size_oblivious would fix the error, evaluating expression to False.
Maybe you need to add guard_size_oblivious to framework code, see doc below for more guidance.
Caused by: return torch.nn.functional.grid_sample(x, grid, align_corners=False) # repro.py:10 in forward (nn/functional.py:5023 in grid_sample)
For more information, run with TORCH_LOGS="dynamic"
For extended logs when we create symbols, also add TORCHDYNAMO_EXTENDED_DEBUG_CREATE_SYMBOL="u0"
If you suspect the guard was triggered from C++, add TORCHDYNAMO_EXTENDED_DEBUG_CPP=1
For more debugging help, see https://docs.google.com/document/d/1HSuTTVvYH1pTew89Rtpeu84Ht3nQEFTYhAX3Ypa_xJs/edit?usp=sharing
User Stack (most recent call last):
(snipped, see stack below for prefix)
File "repro.py", line 10, in forward
return torch.nn.functional.grid_sample(x, grid, align_corners=False)
For C++ stack trace, run with TORCHDYNAMO_EXTENDED_DEBUG_CPP=1
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "site-packages/torch/_dynamo/utils.py", line 3127, in get_fake_value
ret_val = wrap_fake_exception(
^^^^^^^^^^^^^^^^^^^^
File "site-packages/torch/_dynamo/utils.py", line 2641, in wrap_fake_exception
return fn()
^^^^
File "site-packages/torch/_dynamo/utils.py", line 3128, in <lambda>
lambda: run_node(tx.output, node, args, kwargs, nnmodule)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "site-packages/torch/_dynamo/utils.py", line 3325, in run_node
raise RuntimeError(make_error_message(e)).with_traceback(
File "site-packages/torch/_dynamo/utils.py", line 3284, in run_node
return node.target(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "site-packages/torch/nn/functional.py", line 5023, in grid_sample
return torch.grid_sampler(input, grid, mode_enum, padding_mode_enum, align_corners)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "site-packages/torch/fx/experimental/sym_node.py", line 536, in guard_bool
r = self.evaluate()
^^^^^^^^^^^^^^^
File "site-packages/torch/fx/experimental/sym_node.py", line 510, in evaluate
return self.shape_env.evaluate_sym_node(self, size_oblivious)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "site-packages/torch/fx/experimental/symbolic_shapes.py", line 6655, in evaluate_sym_node
return self.evaluate_expr(
^^^^^^^^^^^^^^^^^^^
File "site-packages/torch/fx/experimental/recording.py", line 263, in wrapper
return retlog(fn(*args, **kwargs))
^^^^^^^^^^^^^^^^^^^
File "site-packages/torch/fx/experimental/symbolic_shapes.py", line 6671, in evaluate_expr
return self._evaluate_expr(
^^^^^^^^^^^^^^^^^^^^
File "site-packages/torch/fx/experimental/symbolic_shapes.py", line 6894, in _evaluate_expr
raise self._make_data_dependent_error(
RuntimeError: Dynamo failed to run FX node with fake tensors: call_function <function grid_sample at 0x7a11819c1080>(*(FakeTensor(..., device='cuda:0', size=(1, 1, 16, 16)), FakeTensor(..., device='cuda:0', size=(1, u0, 1, 2))), **{'align_corners': False}): got GuardOnDataDependentSymNode('Could not guard on data-dependent expression Eq(2*u0, 0) (unhinted: Eq(2*u0, 0)). (Size-like symbols: u0)\n\nATTENTION: guard_size_oblivious would fix the error, evaluating expression to False.\nMaybe you need to add guard_size_oblivious to framework code, see doc below for more guidance.\n\nCaused by: return torch.nn.functional.grid_sample(x, grid, align_corners=False) # repro.py:10 in forward (nn/functional.py:5023 in grid_sample)\nFor more information, run with TORCH_LOGS="dynamic"\nFor extended logs when we create symbols, also add TORCHDYNAMO_EXTENDED_DEBUG_CREATE_SYMBOL="u0"\nIf you suspect the guard was triggered from C++, add TORCHDYNAMO_EXTENDED_DEBUG_CPP=1\nFor more debugging help, see https://docs.google.com/document/d/1HSuTTVvYH1pTew89Rtpeu84Ht3nQEFTYhAX3Ypa_xJs/edit?usp=sharing\n\nUser Stack (most recent call last):\n (snipped, see stack below for prefix)\n File "repro.py", line 10, in forward\n return torch.nn.functional.grid_sample(x, grid, align_corners=False)\n\nFor C++ stack trace, run with TORCHDYNAMO_EXTENDED_DEBUG_CPP=1')
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "repro.py", line 14, in <module>
torch.export.export(
File "site-packages/torch/export/__init__.py", line 360, in export
return _export(
^^^^^^^^
File "site-packages/torch/export/_trace.py", line 1092, in wrapper
raise e
File "site-packages/torch/export/_trace.py", line 1065, in wrapper
ep = fn(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^
File "site-packages/torch/export/exported_program.py", line 121, in wrapper
return fn(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^
File "site-packages/torch/export/_trace.py", line 2112, in _export
ep = _export_for_training(
^^^^^^^^^^^^^^^^^^^^^
File "site-packages/torch/export/_trace.py", line 1092, in wrapper
raise e
File "site-packages/torch/export/_trace.py", line 1065, in wrapper
ep = fn(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^
File "site-packages/torch/export/exported_program.py", line 121, in wrapper
return fn(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^
File "site-packages/torch/export/_trace.py", line 1975, in _export_for_training
export_artifact = export_func( # type: ignore[operator]
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "site-packages/torch/export/_trace.py", line 1344, in _strict_export_lower_to_aten_ir
gm_torch_level = _export_to_torch_ir(
^^^^^^^^^^^^^^^^^^^^
File "site-packages/torch/export/_trace.py", line 739, in _export_to_torch_ir
gm_torch_level, _ = torch._dynamo.export(
^^^^^^^^^^^^^^^^^^^^^
File "site-packages/torch/_dynamo/eval_frame.py", line 1677, in inner
result_traced = opt_f(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^
File "site-packages/torch/nn/modules/module.py", line 1751, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "site-packages/torch/nn/modules/module.py", line 1762, in _call_impl
return forward_call(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "site-packages/torch/_dynamo/eval_frame.py", line 655, in _fn
return fn(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^
File "site-packages/torch/nn/modules/module.py", line 1751, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "site-packages/torch/nn/modules/module.py", line 1762, in _call_impl
return forward_call(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "site-packages/torch/_dynamo/convert_frame.py", line 1432, in __call__
return self._torchdynamo_orig_callable(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "site-packages/torch/_dynamo/convert_frame.py", line 598, in __call__
return _compile(
^^^^^^^^^
File "site-packages/torch/_dynamo/convert_frame.py", line 1059, in _compile
guarded_code = compile_inner(code, one_graph, hooks, transform)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "site-packages/torch/_utils_internal.py", line 97, in wrapper_function
return function(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^
File "site-packages/torch/_dynamo/convert_frame.py", line 761, in compile_inner
return _compile_inner(code, one_graph, hooks, transform)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "site-packages/torch/_dynamo/convert_frame.py", line 797, in _compile_inner
out_code = transform_code_object(code, transform)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "site-packages/torch/_dynamo/bytecode_transformation.py", line 1422, in transform_code_object
transformations(instructions, code_options)
File "site-packages/torch/_dynamo/convert_frame.py", line 257, in _fn
return fn(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^
File "site-packages/torch/_dynamo/convert_frame.py", line 715, in transform
tracer.run()
File "site-packages/torch/_dynamo/symbolic_convert.py", line 3500, in run
super().run()
File "site-packages/torch/_dynamo/symbolic_convert.py", line 1337, in run
while self.step():
^^^^^^^^^^^
File "site-packages/torch/_dynamo/symbolic_convert.py", line 1246, in step
self.dispatch_table[inst.opcode](self, inst)
File "site-packages/torch/_dynamo/symbolic_convert.py", line 819, in wrapper
return inner_fn(self, inst)
^^^^^^^^^^^^^^^^^^^^
File "site-packages/torch/_dynamo/symbolic_convert.py", line 2933, in CALL
self._call(inst)
File "site-packages/torch/_dynamo/symbolic_convert.py", line 2927, in _call
self.call_function(fn, args, kwargs)
File "site-packages/torch/_dynamo/symbolic_convert.py", line 1170, in call_function
self.push(fn.call_function(self, args, kwargs)) # type: ignore[arg-type]
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "site-packages/torch/_dynamo/variables/torch.py", line 1181, in call_function
tensor_variable = wrap_fx_proxy(
^^^^^^^^^^^^^^
File "site-packages/torch/_dynamo/variables/builder.py", line 2302, in wrap_fx_proxy
return wrap_fx_proxy_cls(target_cls=TensorVariable, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "site-packages/torch/_dynamo/variables/builder.py", line 2368, in wrap_fx_proxy_cls
return _wrap_fx_proxy(
^^^^^^^^^^^^^^^
File "site-packages/torch/_dynamo/variables/builder.py", line 2464, in _wrap_fx_proxy
example_value = get_fake_value(proxy.node, tx, allow_non_graph_fake=True)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "site-packages/torch/_dynamo/utils.py", line 3214, in get_fake_value
raise UserError( # noqa: B904
torch._dynamo.exc.UserError: Could not guard on data-dependent expression Eq(2*u0, 0) (unhinted: Eq(2*u0, 0)). (Size-like symbols: u0)
ATTENTION: guard_size_oblivious would fix the error, evaluating expression to False.
Maybe you need to add guard_size_oblivious to framework code, see doc below for more guidance.
Caused by: return torch.nn.functional.grid_sample(x, grid, align_corners=False) # repro.py:10 in forward (nn/functional.py:5023 in grid_sample)
For more information, run with TORCH_LOGS="dynamic"
For extended logs when we create symbols, also add TORCHDYNAMO_EXTENDED_DEBUG_CREATE_SYMBOL="u0"
If you suspect the guard was triggered from C++, add TORCHDYNAMO_EXTENDED_DEBUG_CPP=1
For more debugging help, see https://docs.google.com/document/d/1HSuTTVvYH1pTew89Rtpeu84Ht3nQEFTYhAX3Ypa_xJs/edit?usp=sharing
User Stack (most recent call last):
(snipped, see stack below for prefix)
File "repro.py", line 10, in forward
return torch.nn.functional.grid_sample(x, grid, align_corners=False)
For C++ stack trace, run with TORCHDYNAMO_EXTENDED_DEBUG_CPP=1
For more information about this error, see: https://pytorch.org/docs/main/generated/exportdb/index.html#constrain-as-size-example
from user code:
File "repro.py", line 10, in forward
return torch.nn.functional.grid_sample(x, grid, align_corners=False)
```
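A possible user-side workaround (untested sketch; it assumes the second grid dimension is known to be non-empty at runtime) is to hint the unbacked symbol before the call so the `Eq(2*u0, 0)` guard can resolve to False:
```
import torch
import torch.nn.functional as F

class Model(torch.nn.Module):
    def forward(self, x, grid):
        # Assert the data-dependent grid dimension is non-empty so the
        # Eq(2*u0, 0) guard can be statically resolved to False.
        torch._check(grid.shape[1] > 0)
        return F.grid_sample(x, grid, align_corners=False)
```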
### Versions
```
Collecting environment information...
PyTorch version: 2.7.0+cu126
Is debug build: False
CUDA used to build PyTorch: 12.6
ROCM used to build PyTorch: N/A
OS: Ubuntu 24.04.1 LTS (x86_64)
GCC version: (Ubuntu 13.3.0-6ubuntu2~24.04) 13.3.0
Clang version: Could not collect
CMake version: version 3.28.3
Libc version: glibc-2.39
Python version: 3.12.3 (main, Feb 4 2025, 14:48:35) [GCC 13.3.0] (64-bit runtime)
Python platform: Linux-6.8.0-1027-gcp-x86_64-with-glibc2.39
Is CUDA available: True
CUDA runtime version: 12.0.140
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration: GPU 0: NVIDIA A100-SXM4-40GB
Nvidia driver version: 535.183.01
cuDNN version: Could not collect
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 46 bits physical, 48 bits virtual
Byte Order: Little Endian
CPU(s): 12
On-line CPU(s) list: 0-11
Vendor ID: GenuineIntel
Model name: Intel(R) Xeon(R) CPU @ 2.20GHz
CPU family: 6
Model: 85
Thread(s) per core: 2
Core(s) per socket: 6
Socket(s): 1
Stepping: 7
BogoMIPS: 4400.37
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ss ht syscall nx pdpe1gb rdtscp lm constant_tsc rep_good nopl xtopology nonstop_tsc cpuid tsc_known_freq pni pclmulqdq ssse3 fma cx16 pcid sse4_1 sse4_2 x2apic movbe popcnt aes xsave avx f16c rdrand hypervisor lahf_lm abm 3dnowprefetch ssbd ibrs ibpb stibp ibrs_enhanced fsgsbase tsc_adjust bmi1 hle avx2 smep bmi2 erms invpcid rtm mpx avx512f avx512dq rdseed adx smap clflushopt clwb avx512cd avx512bw avx512vl xsaveopt xsavec xgetbv1 xsaves arat avx512_vnni md_clear arch_capabilities
Hypervisor vendor: KVM
Virtualization type: full
L1d cache: 192 KiB (6 instances)
L1i cache: 192 KiB (6 instances)
L2 cache: 6 MiB (6 instances)
L3 cache: 38.5 MiB (1 instance)
NUMA node(s): 1
NUMA node0 CPU(s): 0-11
Vulnerability Gather data sampling: Not affected
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Vulnerable: Clear CPU buffers attempted, no microcode; SMT Host state unknown
Vulnerability Reg file data sampling: Not affected
Vulnerability Retbleed: Mitigation; Enhanced IBRS
Vulnerability Spec rstack overflow: Not affected
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Enhanced / Automatic IBRS; IBPB conditional; RSB filling; PBRSB-eIBRS SW sequence; BHI SW loop, KVM SW loop
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Vulnerable: Clear CPU buffers attempted, no microcode; SMT Host state unknown
Versions of relevant libraries:
[pip3] numpy==2.2.5
[pip3] nvidia-cublas-cu12==12.6.4.1
[pip3] nvidia-cuda-cupti-cu12==12.6.80
[pip3] nvidia-cuda-nvrtc-cu12==12.6.77
[pip3] nvidia-cuda-runtime-cu12==12.6.77
[pip3] nvidia-cudnn-cu12==9.5.1.17
[pip3] nvidia-cufft-cu12==11.3.0.4
[pip3] nvidia-curand-cu12==10.3.7.77
[pip3] nvidia-cusolver-cu12==11.7.1.2
[pip3] nvidia-cusparse-cu12==12.5.4.2
[pip3] nvidia-cusparselt-cu12==0.6.3
[pip3] nvidia-nccl-cu12==2.26.2
[pip3] nvidia-nvjitlink-cu12==12.6.85
[pip3] nvidia-nvtx-cu12==12.6.77
[pip3] torch==2.7.0
[pip3] triton==3.3.0
[conda] Could not collect
```
cc @chauhang @penguinwu @avikchaudhuri @gmagogsfm @zhxchen17 @tugsbayasgalan @angelayi @suo @ydwu4
| true
|
3,038,266,053
|
[CXX11ABI] torch 2.6.0-cu126 and cu124 have different exported symbols
|
vadimkantorov
|
open
|
[
"module: binaries",
"module: cuda",
"triaged"
] | 15
|
CONTRIBUTOR
|
### 🐛 Describe the bug
The symbol `_ZN3c105ErrorC2ENS_14SourceLocationESs` is exported in the cu124 build but missing in cu126; some `nm` outputs are in https://github.com/Dao-AILab/flash-attention/issues/1644
I understand that because of the missing symbols, flash_attention has stopped working with torch 2.7. But it was a bit surprising that the exported symbols differ between the cu124 and cu126 builds of the same release...
Also, a question: why did torch export `_ZN3c105ErrorC2ENS_14SourceLocationESs` in the first place, and why does flash_attention depend on it?
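For what it's worth, the symbol's presence can also be checked from Python without `nm` (a sketch, assuming the Linux wheel layout where `libc10.so` sits under `torch/lib`):
```
import ctypes, os, torch

# dlsym-style lookup of the mangled c10::Error constructor in the installed libc10.so
lib = ctypes.CDLL(os.path.join(os.path.dirname(torch.__file__), "lib", "libc10.so"))
sym = "_ZN3c105ErrorC2ENS_14SourceLocationESs"
try:
    getattr(lib, sym)          # raises AttributeError if the symbol is absent
    print(sym, "exported")
except AttributeError:
    print(sym, "missing")
```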
@malfet
### Versions
torch 2.6.0-cu126 and cu124
cc @ezyang @gchanan @zou3519 @kadeng @msaroufim @seemethere @malfet @osalpekar @atalman @ptrblck @eqy @jerryzh168
| true
|
3,038,259,121
|
Fixed rerr computation in lobpcg
|
ignasa007
|
open
|
[
"open source",
"release notes: linalg_frontend"
] | 15
|
NONE
|
Fixes #101075
This PR fixes an issue with the computation of residuals in the LOBPCG algorithm.
**Bug**: [Line 788](https://github.com/pytorch/pytorch/blob/8f54e56e62692bcebf218f2e4c1855a3be97baf2/torch/_lobpcg.py#L788) is supposed to compute the denominator in Equation 9 of [Duersch et al., 2018](https://arxiv.org/abs/1704.07458), as also suggested in [line 776](https://github.com/pytorch/pytorch/blob/8f54e56e62692bcebf218f2e4c1855a3be97baf2/torch/_lobpcg.py#L776), but it uses the raw eigenvalue-estimates instead of their absolute values.
**Consequence**: This made the algorithm's success sensitive to initialization of eigenvectors.
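For illustration, a minimal sketch of the corrected scaling described above (hypothetical variable names, not the exact code at line 788):
```
import torch

# Hypothetical names: R is the residual block, X the eigenvector block,
# E the current eigenvalue estimates. Using E.abs() keeps the denominator
# positive even when an eigenvalue estimate is negative.
def relative_residual_error(R, X, E):
    return torch.linalg.vector_norm(R, dim=0) / (
        torch.linalg.vector_norm(X, dim=0) * E.abs()
    )
```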
**Tests**:
- I have tested @jtorde's [script](https://github.com/pytorch/pytorch/issues/101075#issuecomment-1545349559), and I did NOT run into any assertion errors for a few minutes (as opposed to the original implementation, which fails after a few seconds).
- I have also tried @pearu's specific [test case](https://github.com/pytorch/pytorch/issues/101075#issuecomment-1548483685), which also executes successfully - the residuals remain positive, and the final output is the same as one returned by SciPy (with and without enforcing the use of LOBPCG).
- I extracted the relevant test cases from [test/test_autograd.py](https://github.com/pytorch/pytorch/blob/main/test/test_autograd.py) and [test/test_linalg.py](https://github.com/pytorch/pytorch/blob/main/test/test_linalg.py), and they ran successfully.
Let me know if further test cases or benchmarks are needed.
cc @jianyuh @nikitaved @mruberry @walterddr @xwang233 @Lezcano
| true
|
3,038,243,246
|
[MPSInductor] Fix `truncdiv` implementation
|
malfet
|
closed
|
[
"Merged",
"topic: bug fixes",
"release notes: mps",
"ciflow/mps",
"module: inductor",
"ciflow/inductor"
] | 3
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #152788
For integral dtypes, `truncdiv` should just be an alias for division.
Fixes `GPUTests.test_div7_mps`
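For reference, the integer semantics this aliases to (illustrated on CPU; the actual change is in the MPS/Metal codegen):
```
import torch

a = torch.tensor([7, -7], dtype=torch.int32)
b = torch.tensor([2, 2], dtype=torch.int32)
# Truncation toward zero, i.e. plain integer division in Metal:
print(torch.div(a, b, rounding_mode="trunc"))  # tensor([ 3, -3], dtype=torch.int32)
```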
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
| true
|
3,038,209,346
|
Implement DeviceType.h as header-only
|
desertfire
|
open
|
[
"oncall: jit",
"module: cpu",
"module: mkldnn",
"ciflow/trunk",
"release notes: quantization",
"ciflow/inductor",
"ciflow/linux-aarch64"
] | 3
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #152787
Summary: Move c10/core/DeviceType.h to a separate torch/csrc/header_only directory. Still keep a copy of c10/core/DeviceType.h for backward compatibility. More header files will be moved as a follow-up. CI to guard "header-only-ness" will be added later.
cc @EikanWang @jgong5 @wenzhe-nrv @sanchitintel @mingfeima @XiaobingSuper @ashokei @jingxu10 @jerryzh168 @gujinghui @PenghuiCheng @jianyuh @min-jean-cho @yanbing-j @Guobing-Chen @Xia-Weiwen @snadampal
Differential Revision: [D74184681](https://our.internmc.facebook.com/intern/diff/D74184681)
| true
|
3,038,188,163
|
Update CMakeLists.txt
|
gisp-cubicon
|
open
|
[
"triaged",
"open source",
"topic: not user facing"
] | 2
|
NONE
|
Fixes #ISSUE_NUMBER
| true
|
3,038,177,115
|
Fix negative dim issue in for parallel loss context manager
|
abhilash1910
|
open
|
[
"oncall: distributed",
"triaged",
"open source",
"topic: not user facing"
] | 6
|
NONE
|
Facing a similar issue as in #152016; added a fix as per @tianyu-l's solution.
Fixes #152016
Tagging @tianyu-l @atalman for review.
cc @H-Huang @awgu @wanchaol @fegin @fduwjj @wz337 @wconstab @d4l3k
| true
|
3,038,168,322
|
test that guard_or_true change can only make valid results null but does not change result or make invalid valid
|
laithsakka
|
open
|
[] | 2
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #152803
* #152802
* __->__ #152784
* #152722
* #148872
| true
|
3,038,134,742
|
undefined symbol: __nvJitLinkCreate_12_8, version libnvJitLink.so.12
|
FurkanGozukara
|
open
|
[
"triage review",
"module: binaries"
] | 3
|
NONE
|
I am trying to use Torch 2.7 with CUDA 12.8 on Linux with the Kohya trainer and I am getting this error.
Exactly the same installation and setup works on Windows.
I tried official Torch 2.7 and the latest Torch 2.8 nightly, all with CUDA 12.8, and got the same error.
```
╭───────────────────── Traceback (most recent call last) ──────────────────────╮
│ /home/Ubuntu/apps/kohya_ss/kohya_gui.py:12 in <module> │
│ │
│ 11 from kohya_gui.textual_inversion_gui import ti_tab │
│ ❱ 12 from kohya_gui.utilities import utilities_tab │
│ 13 from kohya_gui.lora_gui import lora_tab │
│ │
│ /home/Ubuntu/apps/kohya_ss/kohya_gui/utilities.py:6 in <module> │
│ │
│ 5 from .blip_caption_gui import gradio_blip_caption_gui_tab │
│ ❱ 6 from .blip2_caption_gui import gradio_blip2_caption_gui_tab │
│ 7 from .git_caption_gui import gradio_git_caption_gui_tab │
│ │
│ /home/Ubuntu/apps/kohya_ss/kohya_gui/blip2_caption_gui.py:2 in <module> │
│ │
│ 1 from PIL import Image │
│ ❱ 2 from transformers import Blip2Processor, Blip2ForConditionalGeneration │
│ 3 import torch │
│ │
│ /home/Ubuntu/apps/kohya_ss/venv/lib/python3.10/site-packages/transformers/__ │
│ init__.py:26 in <module> │
│ │
│ 25 # Check the dependencies satisfy the minimal versions required. │
│ ❱ 26 from . import dependency_versions_check │
│ 27 from .utils import ( │
│ │
│ /home/Ubuntu/apps/kohya_ss/venv/lib/python3.10/site-packages/transformers/de │
│ pendency_versions_check.py:16 in <module> │
│ │
│ 15 from .dependency_versions_table import deps │
│ ❱ 16 from .utils.versions import require_version, require_version_core │
│ 17 │
│ │
│ /home/Ubuntu/apps/kohya_ss/venv/lib/python3.10/site-packages/transformers/ut │
│ ils/__init__.py:34 in <module> │
│ │
│ 33 ) │
│ ❱ 34 from .generic import ( │
│ 35 ContextManagers, │
│ │
│ /home/Ubuntu/apps/kohya_ss/venv/lib/python3.10/site-packages/transformers/ut │
│ ils/generic.py:462 in <module> │
│ │
│ 461 if is_torch_available(): │
│ ❱ 462 import torch.utils._pytree as _torch_pytree │
│ 463 │
│ │
│ /home/Ubuntu/apps/kohya_ss/venv/lib/python3.10/site-packages/torch/__init__. │
│ py:418 in <module> │
│ │
│ 417 _load_global_deps() │
│ ❱ 418 from torch._C import * # noqa: F403 │
│ 419 │
╰──────────────────────────────────────────────────────────────────────────────╯
ImportError:
/home/Ubuntu/apps/kohya_ss/venv/lib/python3.10/site-packages/torch/lib/../../nvi
dia/cusparse/lib/libcusparse.so.12: undefined symbol: __nvJitLinkCreate_12_8,
version libnvJitLink.so.12
```
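A quick sanity check (sketch; the exact cause here is an assumption) is whether the venv's NVIDIA wheels match, since `libcusparse` 12.8 expects a matching `libnvJitLink`, and a stale one picked up from `LD_LIBRARY_PATH` is a commonly reported trigger for this error:
```
import importlib.metadata as md

# Print which NVIDIA wheels the venv actually provides
for pkg in ("nvidia-cusparse-cu12", "nvidia-nvjitlink-cu12"):
    try:
        print(pkg, md.version(pkg))
    except md.PackageNotFoundError:
        print(pkg, "not installed")
```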
cc @seemethere @malfet @osalpekar @atalman
| true
|
3,038,076,765
|
[BE]: Update cudnn to 9.9 for cu128
|
Skylion007
|
open
|
[
"open source",
"topic: not user facing",
"ciflow/inductor",
"ciflow/inductor-cu126"
] | 1
|
COLLABORATOR
|
Update cuDNN to 9.9 for better Blackwell support on cu128.
| true
|
3,038,073,282
|
[MPS] SDPA specialized kernels
|
Isalia20
|
closed
|
[
"triaged",
"open source",
"Merged",
"module: mps",
"release notes: mps",
"ciflow/mps",
"module: sdpa"
] | 8
|
COLLABORATOR
|
Partially fixes #139668 and #152550
Still a work in progress. The following needs to be addressed:
- [x] Some tests are failing; check why and fix
- [x] Benchmark the new kernels for varying sequence lengths and head dimensions (the ones that get dispatched to the kernels) and add results to this PR
- [x] Add tests to cover the specialized paths (if applicable)
- [x] Code cleanup
cc @kulinseth @albanD @malfet @DenisVieriu97 @jhavukainen
**Tested on Macbook M1 Pro**
### Vector Fast Path (q_len=1, k_len=256)
- Old: 0.378 ms
- New: 0.260 ms
- **31.2% speed improvement**
### Vector 2-pass (q_len=1, k_len=4096)
- Old: 0.627 ms
- New: 0.370 ms
- **41.0% speed improvement**
### Vector Fast Path (q_len=8, k_len=256)
- Old: 0.545 ms
- New: 0.322 ms
- **40.9% speed improvement**
### Vector 2-pass (q_len=8, k_len=4096)
- Old: 1.318 ms
- New: 1.057 ms
- **19.8% speed improvement**
Script to get perf:
```
import torch
import time
def benchmark_sdpa(config, iterations=100):
device = config.get("device", "cpu")
batch = config["batch"]
heads = config["heads"]
q_len = config["q_len"]
k_len = config["k_len"]
head_dim = config["head_dim"]
q = torch.randn(batch, heads, q_len, head_dim, device=device, dtype=torch.float32)
k = torch.randn(batch, heads, k_len, head_dim, device=device, dtype=torch.float32)
v = torch.randn(batch, heads, k_len, head_dim, device=device, dtype=torch.float32)
for _ in range(5):
_ = torch.nn.functional.scaled_dot_product_attention(q, k, v)
if device == "mps":
torch.mps.synchronize()
total_time = 0.0
for i in range(iterations):
start = time.perf_counter()
_ = torch.nn.functional.scaled_dot_product_attention(q, k, v)
if device == "mps":
torch.mps.synchronize()
end = time.perf_counter()
total_time += end - start
avg_time = total_time / iterations
print(f"[{config['name']}] Avg time per run: {avg_time * 1000:.3f} ms over {iterations} iterations")
return avg_time
def main():
device = "mps" if torch.backends.mps.is_available() else "cpu"
print(f"Running benchmarks on device: {device}")
benchmarks = [
{
"name": "Vector Fast - Small q_len & moderate k_len",
"batch": 1,
"heads": 8,
"q_len": 1, # small query sequence length triggers vector fast path
"k_len": 256, # moderate key length
"head_dim": 64,
"device": device,
},
{
"name": "Vector 2-pass - Small q_len & long k_len",
"batch": 1,
"heads": 8,
"q_len": 1, # small query sequence length
"k_len": 4096, # long key length triggers the 2-pass variant
"head_dim": 64,
"device": device,
},
# {
# "name": "Full Attention - Moderate q_len/k_len",
# "batch": 1,
# "heads": 8,
# "q_len": 128, # longer query sequence length
# "k_len": 8192, # matching key length for full attention paths
# "head_dim": 64,
# "device": device,
# },
# {
# "name": "Full Attention - Longer q_len/k_len",
# "batch": 1,
# "heads": 8,
# "q_len": 128, # very long sequence length
# "k_len": 8192,
# "head_dim": 64,
# "device": device,
# },
]
iterations = 100
for config in benchmarks:
benchmark_sdpa(config, iterations=iterations)
if __name__ == "__main__":
main()
```
| true
|
3,038,054,985
|
Error with nccl + multiple RTX5090 in ddp training. CUDA error: an illegal memory access was encountered
|
KohakuBlueleaf
|
closed
|
[
"oncall: distributed",
"triaged"
] | 3
|
NONE
|
### 🐛 Describe the bug
Related issues: https://github.com/Lightning-AI/pytorch-lightning/issues/20757
When I tried to run DDP training with multiple RTX 5090s, I encountered this error in NCCL.
I have seen this across different tasks/projects and different trainer implementations, and eventually reproduced the error with a native PyTorch implementation.
Minimal DDP training example script to reproduce: https://gist.github.com/KohakuBlueleaf/ec182a1e542905a5b0ec2fbdf3518e46
PyTorch 2.7.0+cu128 with multiple RTX 5090s cannot run this training script with the NCCL backend; gloo works fine.
The `_sync_module_states()` call within `DistributedDataParallel.__init__()` failed with the following error messages:
```
terminate called after throwing an instance of 'c10::Error'
[rank0]:[E504 22:44:08.389636235 ProcessGroupNCCL.cpp:1896] [PG ID 0 PG GUID 0(default_pg) Rank 0] Process group watchdog thread terminated with exception: CUDA error: an illegal memory access was encountered
Compile with `TORCH_USE_CUDA_DSA` to enable device-side assertions.
Exception raised from c10_cuda_check_implementation at /pytorch/c10/cuda/CUDAException.cpp:43 (most recent call first):
frame #0: c10::Error::Error(c10::SourceLocation, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> >) + 0x98 (0x749ef23785e8 in /home/kblueleaf/micromamba/lib/python3.12/site-packages/torch/lib/libc10.so)
frame #1: c10::detail::torchCheckFail(char const*, char const*, unsigned int, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&) + 0xe0 (0x749ef230d4a2 in /home/kblueleaf/micromamba/lib/python3.12/site-packages/torch/lib/libc10.so)
frame #2: c10::cuda::c10_cuda_check_implementation(int, char const*, char const*, int, bool) + 0x3c2 (0x749ef27b7422 in /home/kblueleaf/micromamba/lib/python3.12/site-packages/torch/lib/libc10_cuda.so)
frame #3: c10d::ProcessGroupNCCL::WorkNCCL::finishedGPUExecutionInternal() const + 0x56 (0x749e8208b456 in /home/kblueleaf/micromamba/lib/python3.12/site-packages/torch/lib/libtorch_cuda.so)
frame #4: c10d::ProcessGroupNCCL::WorkNCCL::isCompleted() + 0x70 (0x749e8209b6f0 in /home/kblueleaf/micromamba/lib/python3.12/site-packages/torch/lib/libtorch_cuda.so)
frame #5: c10d::ProcessGroupNCCL::watchdogHandler() + 0x782 (0x749e8209d282 in /home/kblueleaf/micromamba/lib/python3.12/site-packages/torch/lib/libtorch_cuda.so)
frame #6: c10d::ProcessGroupNCCL::ncclCommWatchdog() + 0x14d (0x749e8209ee8d in /home/kblueleaf/micromamba/lib/python3.12/site-packages/torch/lib/libtorch_cuda.so)
frame #7: <unknown function> + 0xecdb4 (0x749e72371db4 in /lib/x86_64-linux-gnu/libstdc++.so.6)
frame #8: <unknown function> + 0x9caa4 (0x749ef3ed6aa4 in /lib/x86_64-linux-gnu/libc.so.6)
frame #9: <unknown function> + 0x129c3c (0x749ef3f63c3c in /lib/x86_64-linux-gnu/libc.so.6)
terminate called recursively
terminate called after throwing an instance of 'c10::Error'
what(): CUDA error: an illegal memory access was encountered
Compile with `TORCH_USE_CUDA_DSA` to enable device-side assertions.
Exception raised from c10_cuda_check_implementation at /pytorch/c10/cuda/CUDAException.cpp:43 (most recent call first):
frame #0: c10::Error::Error(c10::SourceLocation, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> >) + 0x98 (0x7a55cc9785e8 in /home/kblueleaf/micromamba/lib/python3.12/site-packages/torch/lib/libc10.so)
frame #1: c10::detail::torchCheckFail(char const*, char const*, unsigned int, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&) + 0xe0 (0x7a55cc90d4a2 in /home/kblueleaf/micromamba/lib/python3.12/site-packages/torch/lib/libc10.so)
frame #2: c10::cuda::c10_cuda_check_implementation(int, char const*, char const*, int, bool) + 0x3c2 (0x7a55ccd22422 in /home/kblueleaf/micromamba/lib/python3.12/site-packages/torch/lib/libc10_cuda.so)
frame #3: <unknown function> + 0x1e79f (0x7a55cccea79f in /home/kblueleaf/micromamba/lib/python3.12/site-packages/torch/lib/libc10_cuda.so)
frame #4: <unknown function> + 0x20060 (0x7a55cccec060 in /home/kblueleaf/micromamba/lib/python3.12/site-packages/torch/lib/libc10_cuda.so)
frame #5: <unknown function> + 0x2028c (0x7a55cccec28c in /home/kblueleaf/micromamba/lib/python3.12/site-packages/torch/lib/libc10_cuda.so)
frame #6: <unknown function> + 0x44d142 (0x7a55bf5ab142 in /home/kblueleaf/micromamba/lib/python3.12/site-packages/torch/lib/libtorch_python.so)
frame #7: c10::TensorImpl::~TensorImpl() + 0x9 (0x7a55cc952f39 in /home/kblueleaf/micromamba/lib/python3.12/site-packages/torch/lib/libc10.so)
frame #8: <unknown function> + 0x1629e70 (0x7a55ac4bde70 in /home/kblueleaf/micromamba/lib/python3.12/site-packages/torch/lib/libtorch_cpu.so)
frame #9: <unknown function> + 0x13661f2 (0x7a55ac1fa1f2 in /home/kblueleaf/micromamba/lib/python3.12/site-packages/torch/lib/libtorch_cpu.so)
frame #10: <unknown function> + 0xc337a0 (0x7a55bfd917a0 in /home/kblueleaf/micromamba/lib/python3.12/site-packages/torch/lib/libtorch_python.so)
frame #11: <unknown function> + 0x37f17d (0x7a55bf4dd17d in /home/kblueleaf/micromamba/lib/python3.12/site-packages/torch/lib/libtorch_python.so)
frame #12: <unknown function> + 0x224918 (0x61ce7101a918 in /home/kblueleaf/micromamba/bin/python)
frame #13: _PyObject_MakeTpCall + 0x2c3 (0x61ce70ffac23 in /home/kblueleaf/micromamba/bin/python)
frame #14: <unknown function> + 0x1127e4 (0x61ce70f087e4 in /home/kblueleaf/micromamba/bin/python)
frame #15: _PyObject_FastCallDictTstate + 0x292 (0x61ce70ffd852 in /home/kblueleaf/micromamba/bin/python)
frame #16: <unknown function> + 0x23267c (0x61ce7102867c in /home/kblueleaf/micromamba/bin/python)
frame #17: _PyObject_MakeTpCall + 0x274 (0x61ce70ffabd4 in /home/kblueleaf/micromamba/bin/python)
frame #18: <unknown function> + 0x1127e4 (0x61ce70f087e4 in /home/kblueleaf/micromamba/bin/python)
frame #19: PyEval_EvalCode + 0xa1 (0x61ce710b1341 in /home/kblueleaf/micromamba/bin/python)
frame #20: <unknown function> + 0x2df9ba (0x61ce710d59ba in /home/kblueleaf/micromamba/bin/python)
frame #21: <unknown function> + 0x2da9c5 (0x61ce710d09c5 in /home/kblueleaf/micromamba/bin/python)
frame #22: PyRun_StringFlags + 0x62 (0x61ce710c1b82 in /home/kblueleaf/micromamba/bin/python)
frame #23: PyRun_SimpleStringFlags + 0x3c (0x61ce710c1abc in /home/kblueleaf/micromamba/bin/python)
frame #24: Py_RunMain + 0x45c (0x61ce710e19fc in /home/kblueleaf/micromamba/bin/python)
frame #25: Py_BytesMain + 0x37 (0x61ce7109b477 in /home/kblueleaf/micromamba/bin/python)
frame #26: <unknown function> + 0x2a1ca (0x7a55ce3cf1ca in /lib/x86_64-linux-gnu/libc.so.6)
frame #27: __libc_start_main + 0x8b (0x7a55ce3cf28b in /lib/x86_64-linux-gnu/libc.so.6)
frame #28: <unknown function> + 0x2a5321 (0x61ce7109b321 in /home/kblueleaf/micromamba/bin/python)
```
I have run nccl-tests from NVIDIA and it works well:
```
❯ ./build/all_reduce_perf -b 16M -e 128M -f 2 -g 4
# nThread 1 nGpus 4 minBytes 16777216 maxBytes 134217728 step: 2(factor) warmup iters: 5 iters: 20 agg iters: 1 validation: 1 graph: 0
#
# Using devices
# Rank 0 Group 0 Pid 836872 on Amphitrite device 0 [0000:01:00] NVIDIA GeForce RTX 5090
# Rank 1 Group 0 Pid 836872 on Amphitrite device 1 [0000:81:00] NVIDIA GeForce RTX 5090
# Rank 2 Group 0 Pid 836872 on Amphitrite device 2 [0000:82:00] NVIDIA GeForce RTX 5090
# Rank 3 Group 0 Pid 836872 on Amphitrite device 3 [0000:c1:00] NVIDIA GeForce RTX 5090
#
# out-of-place in-place
# size count type redop root time algbw busbw #wrong time algbw busbw #wrong
# (B) (elements) (us) (GB/s) (GB/s) (us) (GB/s) (GB/s)
16777216 4194304 float sum -1 4922.6 3.41 5.11 0 4855.1 3.46 5.18 0
33554432 8388608 float sum -1 9678.9 3.47 5.20 0 9689.4 3.46 5.19 0
67108864 16777216 float sum -1 20211 3.32 4.98 0 19824 3.39 5.08 0
134217728 33554432 float sum -1 39707 3.38 5.07 0 40415 3.32 4.98 0
# Out of bounds values : 0 OK
# Avg bus bandwidth : 5.10008
#
```
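One mitigation worth trying (untested sketch; it assumes NCCL is attempting a peer-to-peer path these GeForce boards do not support) is to disable P2P before the process group is initialized:
```
import os

# Must be set before torch.distributed.init_process_group() / DDP construction
os.environ["NCCL_P2P_DISABLE"] = "1"
```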
### Versions
```
Collecting environment information...
PyTorch version: 2.7.0+cu128
Is debug build: False
CUDA used to build PyTorch: 12.8
ROCM used to build PyTorch: N/A
OS: Ubuntu 24.04.2 LTS (x86_64)
GCC version: (Ubuntu 13.3.0-6ubuntu2~24.04) 13.3.0
Clang version: Could not collect
CMake version: version 3.28.3
Libc version: glibc-2.39
Python version: 3.12.9 | packaged by conda-forge | (main, Mar 4 2025, 22:48:41) [GCC 13.3.0] (64-bit runtime)
Python platform: Linux-6.11.0-24-generic-x86_64-with-glibc2.39
Is CUDA available: True
CUDA runtime version: 12.8.93
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration:
GPU 0: NVIDIA GeForce RTX 5090
GPU 1: NVIDIA GeForce RTX 5090
GPU 2: NVIDIA GeForce RTX 5090
GPU 3: NVIDIA GeForce RTX 5090
Nvidia driver version: 570.124.04
cuDNN version: Probably one of the following:
/usr/lib/x86_64-linux-gnu/libcudnn.so.9.8.0
/usr/lib/x86_64-linux-gnu/libcudnn_adv.so.9.8.0
/usr/lib/x86_64-linux-gnu/libcudnn_cnn.so.9.8.0
/usr/lib/x86_64-linux-gnu/libcudnn_engines_precompiled.so.9.8.0
/usr/lib/x86_64-linux-gnu/libcudnn_engines_runtime_compiled.so.9.8.0
/usr/lib/x86_64-linux-gnu/libcudnn_graph.so.9.8.0
/usr/lib/x86_64-linux-gnu/libcudnn_heuristic.so.9.8.0
/usr/lib/x86_64-linux-gnu/libcudnn_ops.so.9.8.0
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 48 bits physical, 48 bits virtual
Byte Order: Little Endian
CPU(s): 128
On-line CPU(s) list: 0-127
Vendor ID: AuthenticAMD
Model name: AMD Eng Sample: 100-000000314-04_30/16_N
CPU family: 25
Model: 1
Thread(s) per core: 2
Core(s) per socket: 64
Socket(s): 1
Stepping: 0
Frequency boost: enabled
CPU(s) scaling MHz: 43%
CPU max MHz: 3000.0000
CPU min MHz: 1200.0000
BogoMIPS: 3200.22
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ht syscall nx mmxext fxsr_opt pdpe1gb rdtscp lm constant_tsc rep_good nopl xtopology nonstop_tsc cpuid extd_apicid aperfmperf rapl pni pclmulqdq monitor ssse3 fma cx16 pcid sse4_1 sse4_2 movbe popcnt aes xsave avx f16c rdrand lahf_lm cmp_legacy svm extapic cr8_legacy abm sse4a misalignsse 3dnowprefetch osvw ibs skinit wdt tce topoext perfctr_core perfctr_nb bpext perfctr_llc mwaitx cpb cat_l3 cdp_l3 hw_pstate ssbd mba ibrs ibpb stibp vmmcall fsgsbase bmi1 avx2 smep bmi2 erms invpcid cqm rdt_a rdseed adx smap clflushopt clwb sha_ni xsaveopt xsavec xgetbv1 xsaves cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local user_shstk clzero irperf xsaveerptr rdpru wbnoinvd amd_ppin brs arat npt lbrv svm_lock nrip_save tsc_scale vmcb_clean flushbyasid decodeassists pausefilter pfthreshold v_vmsave_vmload vgif v_spec_ctrl umip pku ospke vaes vpclmulqdq rdpid overflow_recov succor smca fsrm debug_swap
Virtualization: AMD-V
L1d cache: 2 MiB (64 instances)
L1i cache: 2 MiB (64 instances)
L2 cache: 32 MiB (64 instances)
L3 cache: 256 MiB (8 instances)
NUMA node(s): 1
NUMA node0 CPU(s): 0-127
Vulnerability Gather data sampling: Not affected
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Not affected
Vulnerability Reg file data sampling: Not affected
Vulnerability Retbleed: Not affected
Vulnerability Spec rstack overflow: Mitigation; Safe RET
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Retpolines; IBPB conditional; IBRS_FW; STIBP always-on; RSB filling; PBRSB-eIBRS Not affected; BHI Not affected
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Versions of relevant libraries:
[pip3] clip-anytorch==2.6.0
[pip3] dctorch==0.1.2
[pip3] lion-pytorch==0.2.3
[pip3] mypy-extensions==1.0.0
[pip3] numpy==2.2.5
[pip3] nvidia-cublas-cu12==12.8.3.14
[pip3] nvidia-cuda-cupti-cu12==12.8.57
[pip3] nvidia-cuda-nvrtc-cu12==12.8.61
[pip3] nvidia-cuda-runtime-cu12==12.8.57
[pip3] nvidia-cudnn-cu12==9.7.1.26
[pip3] nvidia-cufft-cu12==11.3.3.41
[pip3] nvidia-curand-cu12==10.3.9.55
[pip3] nvidia-cusolver-cu12==11.7.2.55
[pip3] nvidia-cusparse-cu12==12.5.7.53
[pip3] nvidia-cusparselt-cu12==0.6.3
[pip3] nvidia-nccl-cu12==2.26.2
[pip3] nvidia-nvjitlink-cu12==12.8.61
[pip3] nvidia-nvtx-cu12==12.8.55
[pip3] open_clip_torch==2.32.0
[pip3] pytorch-lightning==2.5.1.post0
[pip3] torch==2.7.0+cu128
[pip3] torchaudio==2.7.0+cu128
[pip3] torchdata==0.11.0
[pip3] torchdiffeq==0.2.5
[pip3] torchmetrics==1.7.1
[pip3] torchsde==0.2.6
[pip3] torchvision==0.22.0+cu128
[pip3] triton==3.3.0
[conda] Could not collect
```
cc @H-Huang @awgu @wanchaol @fegin @fduwjj @wz337 @wconstab @d4l3k
| true
|
3,038,050,985
|
[BE]: Update cutlass submodule to 3.9.2
|
Skylion007
|
closed
|
[
"open source",
"Merged",
"ciflow/trunk",
"release notes: cuda",
"module: dynamo",
"ciflow/inductor"
] | 4
|
COLLABORATOR
|
A lot of last-minute bugfixes for CUTLASS Blackwell that we should upstream. It's a header-only library and a minor release, so this should strictly improve compiler support and fix some bugs. Needed to update some instruction counts in the torch.compile baselines for the new kernels.
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @chauhang @amjames
| true
|
3,038,048,736
|
[BE]: Update torch core lazy helpers with micropts
|
Skylion007
|
closed
|
[
"open source",
"better-engineering",
"Merged",
"ciflow/trunk",
"topic: not user facing"
] | 5
|
COLLABORATOR
|
Some minor nits I noticed: use `reserve()` where possible.
| true
|
3,037,894,802
|
Segmentation fault (core dumped) in torch.nn.functional.alpha_dropout
|
cx104906
|
open
|
[
"module: crash",
"oncall: quantization",
"module: error checking",
"triaged",
"module: empty tensor",
"topic: fuzzer"
] | 1
|
NONE
|
### 🐛 Describe the bug
reproduce
```
curl -L -o 003-args "https://github.com/cx104906/poc/raw/main/pytorch/id%3A000003-args"
curl -L -o 003-kwargs "https://github.com/cx104906/poc/raw/main/pytorch/id%3A000003-kwargs"
python cxtest1.py
```
cxtest1.py
```
import torch
import pickle
print(torch.__version__)
mylist = torch.load("/home/cx/cxtemp/003-args",weights_only=True)
mydict = torch.load("/home/cx/cxtemp/003-kwargs",weights_only=True)
print("test.....")
torch.nn.functional.alpha_dropout(*mylist,**mydict)
```
output
```
2.8.0a0+gitcbcf677
/home/cx/pytorch/torch/_utils.py:425: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly. To access UntypedStorage directly, use tensor.untyped_storage() instead of tensor.storage()
device=storage.device,
test.....
Segmentation fault (core dumped)
```
### Versions
versions
```
python collect_env.py
Collecting environment information...
PyTorch version: 2.8.0a0+gitcbcf677
Is debug build: False
CUDA used to build PyTorch: None
ROCM used to build PyTorch: N/A
OS: Ubuntu 20.04.6 LTS (x86_64)
GCC version: (Ubuntu 9.4.0-1ubuntu1~20.04.2) 9.4.0
Clang version: Could not collect
CMake version: version 3.28.3
Libc version: glibc-2.31
Python version: 3.11.11 (main, Dec 11 2024, 16:28:39) [GCC 11.2.0] (64-bit runtime)
Python platform: Linux-5.4.0-144-generic-x86_64-with-glibc2.31
Is CUDA available: False
CUDA runtime version: No CUDA
CUDA_MODULE_LOADING set to: N/A
GPU models and configuration: No CUDA
Nvidia driver version: No CUDA
cuDNN version: No CUDA
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Byte Order: Little Endian
Address sizes: 46 bits physical, 48 bits virtual
CPU(s): 32
On-line CPU(s) list: 0-31
Thread(s) per core: 1
Core(s) per socket: 32
Socket(s): 1
NUMA node(s): 1
Vendor ID: GenuineIntel
CPU family: 6
Model: 85
Model name: Intel(R) Xeon(R) Gold 5218R CPU @ 2.10GHz
Stepping: 7
CPU MHz: 2095.076
BogoMIPS: 4190.15
Virtualization: VT-x
Hypervisor vendor: KVM
Virtualization type: full
L1d cache: 1 MiB
L1i cache: 1 MiB
L2 cache: 128 MiB
L3 cache: 16 MiB
NUMA node0 CPU(s): 0-31
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Vulnerable: Clear CPU buffers attempted, no microcode; SMT Host state unknown
Vulnerability Retbleed: Mitigation; Enhanced IBRS
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl and seccomp
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Enhanced IBRS, IBPB conditional, RSB filling, PBRSB-eIBRS SW sequence
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Mitigation; TSX disabled
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ss ht syscall nx pdpe1gb rdtscp lm constant_tsc arch_perfmon rep_good nopl xtopology cpuid tsc_known_freq pni pclmulqdq vmx ssse3 fma cx16 pdcm pcid sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand hypervisor lahf_lm abm 3dnowprefetch cpuid_fault invpcid_single ssbd ibrs ibpb stibp ibrs_enhanced tpr_shadow vnmi flexpriority ept vpid ept_ad fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid mpx avx512f avx512dq rdseed adx smap clflushopt clwb avx512cd avx512bw avx512vl xsaveopt xsavec xgetbv1 xsaves arat umip pku ospke avx512_vnni md_clear arch_capabilities
Versions of relevant libraries:
[pip3] numpy==2.2.5
[pip3] optree==0.15.0
[pip3] torch==2.8.0a0+gitcbcf677
[conda] numpy 2.2.5 pypi_0 pypi
[conda] optree 0.15.0 pypi_0 pypi
[conda] torch 2.8.0a0+gitcbcf677 dev_0 <develop>
```
cc @jerryzh168 @jianyuh @raghuramank100 @jamesr66a @vkuzo @jgong5 @Xia-Weiwen @leslie-fang-intel @msaroufim @malfet
| true
|
3,037,832,335
|
[WIP] Pattern matcher support for mutable ops with view inputs
|
yf225
|
open
|
[
"module: inductor",
"ciflow/inductor"
] | 2
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #152776
* #152775
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
| true
|
3,037,808,077
|
[Inductor] Pattern matcher support for mutable ops with non-view inputs
|
yf225
|
open
|
[
"module: inductor",
"ciflow/inductor",
"release notes: inductor"
] | 2
|
CONTRIBUTOR
|
Fixes the non-view input use case in https://github.com/pytorch/pytorch/issues/152441.
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #152776
* __->__ #152775
Pull-Request-resolved: https://github.com/pytorch/pytorch/pull/152767
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
| true
|
3,037,780,186
|
[dynamo][super variable] Fix bug to use correct source
|
anijain2305
|
closed
|
[
"module: rocm",
"module: dynamo",
"ciflow/inductor"
] | 1
|
CONTRIBUTOR
|
Fixes #ISSUE_NUMBER
cc @jeffdaily @sunway513 @jithunnair-amd @pruthvistony @ROCmSupport @dllehr-amd @jataylo @hongxiayang @naromero77amd @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @chauhang @amjames
| true
|
3,037,779,755
|
RuntimeError: creation_meta == CreationMeta::DEFAULT INTERNAL ASSERT FAILED at "/build/pytorch/torch/csrc/autograd/variable.cpp":224, please report a bug to PyTorch.
|
ad8e
|
open
|
[
"high priority",
"triage review",
"module: autograd",
"triaged"
] | 4
|
CONTRIBUTOR
|
### 🐛 Describe the bug
Reproducer:
1. `git clone https://github.com/crowsonkb/k-diffusion.git`
2. `cd k-diffusion`
3. Use find in files: `q, k = scale_for_cosine_sim(q, k, self.scale[:, None], 1e-6)` (it'll be in image_transformer_v2.py). Comment it out.
4. Run `python train.py --config configs/config_oxford_flowers.json --name flowers_demo_001 --evaluate-n 0 --batch-size 32 --sample-n 36 --mixed-precision bf16`
```
[rank0]: File "/mnt/clusterstorage/workspace/kevin/kd2/k-diffusion/k_diffusion/models/image_transformer_v2.py", line 426, in forward
[rank0]: q = apply_rotary_emb_(q, theta)
[rank0]: File "/mnt/clusterstorage/workspace/kevin/kd2/k-diffusion/k_diffusion/models/image_transformer_v2.py", line 236, in apply_rotary_emb_
[rank0]: return ApplyRotaryEmbeddingInplace.apply(x, theta, False)
[rank0]: File "/usr/local/lib/python3.10/dist-packages/torch/autograd/function.py", line 575, in apply
[rank0]: return super().apply(*args, **kwargs) # type: ignore[misc]
[rank0]: RuntimeError: creation_meta == CreationMeta::DEFAULT INTERNAL ASSERT FAILED at "/build/pytorch/torch/csrc/autograd/variable.cpp":224, please report a bug to PyTorch.
[rank0]:[W504 04:57:17.126816218 ProcessGroupNCCL.cpp:1496] Warning: WARNING: destroy_process_group() was not called before program exit, which can leak resources. For more info, please see https://pytorch.org/docs/stable/distributed.html#shutdown (function operator())
```
### Versions
```
PyTorch version: 2.6.0
Is debug build: False
CUDA used to build PyTorch: 12.8
ROCM used to build PyTorch: N/A
OS: Ubuntu 22.04.5 LTS (x86_64)
GCC version: (Ubuntu 11.4.0-1ubuntu1~22.04) 11.4.0
Clang version: Could not collect
CMake version: version 3.31.6
Libc version: glibc-2.35
Python version: 3.10.12 (main, Feb 4 2025, 14:57:36) [GCC 11.4.0] (64-bit runtime)
Python platform: Linux-6.5.13-650-3434-22042-coreweave-1-x86_64-with-glibc2.35
Is CUDA available: True
CUDA runtime version: 12.8.93
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration: GPU 0: NVIDIA H100 80GB HBM3
Nvidia driver version: 535.216.01
cuDNN version: Probably one of the following:
/usr/lib/x86_64-linux-gnu/libcudnn.so.9.7.0
/usr/lib/x86_64-linux-gnu/libcudnn_adv.so.9.7.0
/usr/lib/x86_64-linux-gnu/libcudnn_cnn.so.9.7.0
/usr/lib/x86_64-linux-gnu/libcudnn_engines_precompiled.so.9.7.0
/usr/lib/x86_64-linux-gnu/libcudnn_engines_runtime_compiled.so.9.7.0
/usr/lib/x86_64-linux-gnu/libcudnn_graph.so.9.7.0
/usr/lib/x86_64-linux-gnu/libcudnn_heuristic.so.9.7.0
/usr/lib/x86_64-linux-gnu/libcudnn_ops.so.9.7.0
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 46 bits physical, 57 bits virtual
Byte Order: Little Endian
CPU(s): 128
On-line CPU(s) list: 0-127
Vendor ID: GenuineIntel
Model name: Intel(R) Xeon(R) Platinum 8462Y+
CPU family: 6
Model: 143
Thread(s) per core: 2
Core(s) per socket: 32
Socket(s): 2
Stepping: 8
CPU max MHz: 4100.0000
CPU min MHz: 800.0000
BogoMIPS: 5600.00
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc art arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc cpuid aperfmperf tsc_known_freq pni pclmulqdq dtes64 monitor ds_cpl vmx smx est tm2 ssse3 sdbg fma cx16 xtpr pdcm pcid dca sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm 3dnowprefetch cpuid_fault epb cat_l3 cat_l2 cdp_l3 invpcid_single cdp_l2 ssbd mba ibrs ibpb stibp ibrs_enhanced tpr_shadow flexpriority ept vpid ept_ad fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid cqm rdt_a avx512f avx512dq rdseed adx smap avx512ifma clflushopt clwb intel_pt avx512cd sha_ni avx512bw avx512vl xsaveopt xsavec xgetbv1 xsaves cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local split_lock_detect avx_vnni avx512_bf16 wbnoinvd dtherm ida arat pln pts hfi vnmi avx512vbmi umip pku ospke waitpkg avx512_vbmi2 gfni vaes vpclmulqdq avx512_vnni avx512_bitalg tme avx512_vpopcntdq la57 rdpid bus_lock_detect cldemote movdiri movdir64b enqcmd fsrm md_clear serialize tsxldtrk pconfig arch_lbr ibt amx_bf16 avx512_fp16 amx_tile amx_int8 flush_l1d arch_capabilities
Virtualization: VT-x
L1d cache: 3 MiB (64 instances)
L1i cache: 2 MiB (64 instances)
L2 cache: 128 MiB (64 instances)
L3 cache: 120 MiB (2 instances)
NUMA node(s): 2
NUMA node0 CPU(s): 0,2,4,6,8,10,12,14,16,18,20,22,24,26,28,30,32,34,36,38,40,42,44,46,48,50,52,54,56,58,60,62,64,66,68,70,72,74,76,78,80,82,84,86,88,90,92,94,96,98,100,102,104,106,108,110,112,114,116,118,120,122,124,126
NUMA node1 CPU(s): 1,3,5,7,9,11,13,15,17,19,21,23,25,27,29,31,33,35,37,39,41,43,45,47,49,51,53,55,57,59,61,63,65,67,69,71,73,75,77,79,81,83,85,87,89,91,93,95,97,99,101,103,105,107,109,111,113,115,117,119,121,123,125,127
Vulnerability Gather data sampling: Not affected
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Not affected
Vulnerability Retbleed: Not affected
Vulnerability Spec rstack overflow: Not affected
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Enhanced / Automatic IBRS; IBPB conditional; RSB filling; PBRSB-eIBRS SW sequence; BHI BHI_DIS_S
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Versions of relevant libraries:
[pip3] clip-anytorch==2.6.0
[pip3] dctorch==0.1.2
[pip3] DISTS-pytorch==0.1
[pip3] gpytorch==1.14
[pip3] lovely-numpy==0.2.13
[pip3] mypy-extensions==1.0.0
[pip3] natten==0.17.4+torch260cu128
[pip3] numpy==1.24.4
[pip3] torch==2.6.0
[pip3] torchaudio==2.6.0
[pip3] torchdiffeq==0.2.5
[pip3] torchsde==0.2.6
[pip3] torchvision==0.21.0
[pip3] triton==3.2.0+git35c6c7c6
[pip3] welford-torch==0.2.5
[conda] Could not collect
```
cc @ezyang @gchanan @zou3519 @kadeng @msaroufim @albanD @gqchen @nikitaved @soulitzer @Varal7 @xmfan
| true
|
3,037,774,797
|
[fx] Recursive DCE on subgraphs
|
anijain2305
|
closed
|
[
"Merged",
"ciflow/trunk",
"release notes: fx",
"topic: not user facing",
"fx",
"module: inductor",
"module: dynamo",
"ciflow/inductor",
"ci-no-td",
"ciflow/pull"
] | 3
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #152806
* #152675
* #152770
* __->__ #152772
cc @ezyang @SherlockNoMad @EikanWang @jgong5 @wenzhe-nrv @voznesenskym @penguinwu @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @jiayisunx @ipiszy @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
| true
|
3,037,772,983
|
[aoti] Add grid_sampler_3d to cshim
|
MaanasArora
|
open
|
[
"triaged",
"open source",
"module: inductor",
"release notes: inductor (aoti)"
] | 4
|
NONE
|
Fixes #147625.
Do we need any tests?
This is my first contribution. Thanks!
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov @angelayi @desertfire
| true
|
3,037,768,075
|
[inductor][refactor] Refactor the fetching of subgraph names
|
anijain2305
|
closed
|
[
"Merged",
"ciflow/trunk",
"topic: not user facing",
"module: inductor",
"module: dynamo",
"ciflow/inductor",
"ciflow/pull"
] | 3
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #152806
* #152675
* __->__ #152770
* #152772
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
| true
|
3,037,708,655
|
Set CMake 3.5 as minimum version in pytorch_android
|
cyyever
|
closed
|
[
"open source",
"Merged",
"ciflow/binaries",
"ciflow/trunk",
"topic: not user facing",
"ciflow/android"
] | 9
|
COLLABORATOR
|
I saw a pytorch_android failure in Docker image builds. This fix attempts to bypass the CMake 4 limitations by setting CMake 3.5 as the minimum version.
| true
|
3,037,694,873
|
[cudagraphs] Fix issue in collecting static_input_idxs
|
pytorchbot
|
closed
|
[
"open source",
"module: inductor",
"module: dynamo",
"ciflow/inductor",
"release notes: AO frontend"
] | 1
|
COLLABORATOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #152287
related to https://github.com/pytorch/pytorch/issues/152275
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
| true
|
3,037,692,794
|
[WIP] Pattern matcher support for custom op
|
yf225
|
closed
|
[
"module: inductor",
"ciflow/inductor"
] | 2
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #152767
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
| true
|
3,037,690,318
|
[caffe2] Support building for armv8.1
|
andrewjcg
|
closed
|
[
"module: cpu",
"fb-exported",
"Merged",
"ciflow/trunk",
"topic: not user facing"
] | 5
|
CONTRIBUTOR
|
Summary:
- Remove explicit `-march=` compiler flags, as they're already implied by
the toolchain:
https://www.internalfb.com/code/fbsource/[7f85b0565073]/fbcode/tools/build/buck/wrappers/defs.bzl?lines=819
- Gate non-8.1 compliant opcodes with `__ARM_FEATURE_*`.
Test Plan: CI
Reviewed By: rahulg
Differential Revision: D74023601
cc @jgong5 @mingfeima @XiaobingSuper @sanchitintel @ashokei @jingxu10 @jerryzh168
| true
|
3,037,687,461
|
[c10d] Fix unused `group` input argument in `new_subgroups()`
|
tsunghsienlee
|
closed
|
[
"oncall: distributed",
"fb-exported",
"Merged",
"ciflow/trunk",
"release notes: distributed (c10d)"
] | 10
|
CONTRIBUTOR
|
Summary: This diff fixes an unused input argument [`group`](https://github.com/pytorch/pytorch/blob/8faa22569519b8916dfa0334287cbb849704965f/torch/distributed/distributed_c10d.py#L5341) in the `new_subgroups()` function.
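For context, a usage sketch of the fixed path (hypothetical ranks and sizes; assumes the default process group is already initialized, and that with the fix the subgroups are carved out of the passed-in `group` rather than the default group):
```
import torch.distributed as dist

# Build a custom group and split it into subgroups of 4 ranks each
world = dist.new_group(ranks=list(range(8)))
cur_subgroup, subgroups = dist.new_subgroups(group_size=4, group=world)
```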
Test Plan: contbuild & OSS CI, see
Differential Revision: D74132537
cc @H-Huang @awgu @wanchaol @fegin @fduwjj @wz337 @wconstab @d4l3k
| true
|
3,037,685,086
|
[WIP] fix issue 151198
|
yf225
|
closed
|
[
"module: cpu",
"topic: not user facing",
"module: inductor",
"ciflow/inductor"
] | 1
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #152764
cc @jgong5 @mingfeima @XiaobingSuper @sanchitintel @ashokei @jingxu10 @jerryzh168 @voznesenskym @penguinwu @EikanWang @Guobing-Chen @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
| true
|
3,037,676,169
|
can't build torch on WSL
|
thot-experiment
|
closed
|
[
"module: build"
] | 5
|
NONE
|
### 🐛 Describe the bug
I'm on hour 5 of trying to get a version of torch built that supports sm_70 AND sm_120. For some reason the latest Linux release does not, while everything works fine for me under Windows, so I know it must be possible to do both somehow, but I'm at my wits' end. I've followed the instructions under both Arch Linux and Ubuntu 24.04 in WSL, and in every case I've tried I've gotten something like this: it's always tensorpipe, and it always fails with some error referencing `uint8_t` and `<cstdint>`, even when trying to build with `USE_CUDA=0`.
Two things: one, a plea for help if anyone can point me in *a* direction; and two, is there a reason why on Windows:
```
> python -c "import torch; print(torch.__version__); print(torch.cuda.get_arch_list()); print(torch.randn(1).cuda(0))"
2.7.0+cu128
['sm_50', 'sm_60', 'sm_61', 'sm_70', 'sm_75', 'sm_80', 'sm_86', 'sm_90', 'sm_100', 'sm_120']
tensor([-0.8565], device='cuda:0')
```
while under WSL
```
python -c "import torch; print(torch.__version__); print(torch.cuda.get_arch_list()); print(torch.randn(1).cuda(0))
"
2.7.0+cu128
['sm_75', 'sm_80', 'sm_86', 'sm_90', 'sm_100', 'sm_120', 'compute_120']
/home/errata/comfy/.venv/lib/python3.12/site-packages/torch/cuda/__init__.py:262: UserWarning:
Found GPU1 Quadro GV100 which is of cuda capability 7.0.
PyTorch no longer supports this GPU because it is too old.
The minimum cuda capability supported by this library is 7.5.
warnings.warn(
tensor([2.2525], device='cuda:0')
```
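For the arch-list difference: the prebuilt Linux wheel simply ships a narrower list than the Windows one, so a source build would need both targets selected explicitly (sketch; the variable has to be exported before the build starts):
```
import os

# Select both the GV100 (7.0) and the RTX 5070 Ti (12.0) SM targets for a source build
os.environ["TORCH_CUDA_ARCH_LIST"] = "7.0;12.0"
```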
And here's the build error I got the last time I tried. I didn't save the earlier ones, but I believe they were all in tensorpipe and related to `uint8_t`:
```
[4849/7498] Building CXX object third_party/tensorpipe/tensorpipe/CMakeFiles/tensorpipe.dir/common/shm_segment.cc.o
FAILED: third_party/tensorpipe/tensorpipe/CMakeFiles/tensorpipe.dir/common/shm_segment.cc.o
/usr/sbin/ccache /usr/sbin/c++ -I/home/thot/libs/pytorch/cmake/../third_party/benchmark/include -I/home/thot/libs/pytorch/third_party/tensorpipe -I/home/thot/libs/pytorch/build/third_party/tensorpipe -I/home/thot/libs/pytorch/third_party/tensorpipe/third_party/libnop/include -I/home/thot/libs/pytorch/third_party/tensorpipe/third_party/libuv/include -isystem /home/thot/libs/pytorch/cmake/../third_party/tensorpipe/third_party/libuv/include -isystem /home/thot/libs/pytorch/cmake/../third_party/googletest/googlemock/include -isystem /home/thot/libs/pytorch/cmake/../third_party/googletest/googletest/include -isystem /home/thot/libs/pytorch/third_party/protobuf/src -isystem /home/thot/.conda/envs/buildtorch/include -isystem /home/thot/libs/pytorch/third_party/XNNPACK/include -isystem /home/thot/libs/pytorch/third_party/ittapi/include -isystem /home/thot/libs/pytorch/cmake/../third_party/eigen -fvisibility-inlines-hidden -DUSE_PTHREADPOOL -O3 -DNDEBUG -std=gnu++14 -fPIC -DMKL_HAS_SBGEMM -DMKL_HAS_SHGEMM -DTORCH_USE_LIBUV -MD -MT third_party/tensorpipe/tensorpipe/CMakeFiles/tensorpipe.dir/common/shm_segment.cc.o -MF third_party/tensorpipe/tensorpipe/CMakeFiles/tensorpipe.dir/common/shm_segment.cc.o.d -o third_party/tensorpipe/tensorpipe/CMakeFiles/tensorpipe.dir/common/shm_segment.cc.o -c /home/thot/libs/pytorch/third_party/tensorpipe/tensorpipe/common/shm_segment.cc
In file included from /home/thot/libs/pytorch/third_party/tensorpipe/tensorpipe/common/shm_segment.h:18,
from /home/thot/libs/pytorch/third_party/tensorpipe/tensorpipe/common/shm_segment.cc:9:
/home/thot/libs/pytorch/third_party/tensorpipe/tensorpipe/common/memory.h:22:21: error: expected ‘)’ before ‘*’ token
22 | MmappedPtr(uint8_t* ptr, size_t length) {
| ~ ^
| )
/home/thot/libs/pytorch/third_party/tensorpipe/tensorpipe/common/memory.h:44:3: error: ‘uint8_t’ does not name a type
44 | uint8_t* ptr() {
| ^~~~~~~
/home/thot/libs/pytorch/third_party/tensorpipe/tensorpipe/common/memory.h:18:1: note: ‘uint8_t’ is defined in header ‘<cstdint>’; this is probably fixable by adding ‘#include <cstdint>’
17 | #include <tensorpipe/common/error_macros.h>
+++ |+#include <cstdint>
```
At this point I'm just trying to build the CPU version in a minimal env just to sanity-check that it can even be done.
### Versions
Collecting environment information...
PyTorch version: N/A
Is debug build: N/A
CUDA used to build PyTorch: N/A
ROCM used to build PyTorch: N/A
OS: Arch Linux (x86_64)
GCC version: (GCC) 15.1.1 20250425
Clang version: Could not collect
CMake version: version 4.0.0
Libc version: glibc-2.41
Python version: 3.12.9 | packaged by Anaconda, Inc. | (main, Feb 6 2025, 18:56:27) [GCC 11.2.0] (64-bit runtime)
Python platform: Linux-5.15.167.4-microsoft-standard-WSL2-x86_64-with-glibc2.41
Is CUDA available: N/A
CUDA runtime version: Could not collect
CUDA_MODULE_LOADING set to: N/A
GPU models and configuration:
GPU 0: NVIDIA GeForce RTX 5070 Ti
GPU 1: Quadro GV100
Nvidia driver version: 576.02
cuDNN version: Could not collect
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: N/A
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 39 bits physical, 48 bits virtual
Byte Order: Little Endian
CPU(s): 12
On-line CPU(s) list: 0-11
Vendor ID: GenuineIntel
Model name: Intel(R) Core(TM) i7-8700K CPU @ 3.70GHz
CPU family: 6
Model: 158
Thread(s) per core: 2
Core(s) per socket: 6
Socket(s): 1
Stepping: 10
BogoMIPS: 7392.00
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ss ht syscall nx pdpe1gb rdtscp lm constant_tsc arch_perfmon rep_good nopl xtopology cpuid pni pclmulqdq ssse3 fma cx16 pdcm pcid sse4_1 sse4_2 movbe popcnt aes xsave avx f16c rdrand hypervisor lahf_lm abm 3dnowprefetch invpcid_single pti ssbd ibrs ibpb stibp fsgsbase bmi1 hle avx2 smep bmi2 erms invpcid rtm rdseed adx smap clflushopt xsaveopt xsavec xgetbv1 xsaves md_clear flush_l1d arch_capabilities
Hypervisor vendor: Microsoft
Virtualization type: full
L1d cache: 192 KiB (6 instances)
L1i cache: 192 KiB (6 instances)
L2 cache: 1.5 MiB (6 instances)
L3 cache: 12 MiB (1 instance)
Vulnerability Gather data sampling: Unknown: Dependent on hypervisor status
Vulnerability Itlb multihit: KVM: Mitigation: VMX unsupported
Vulnerability L1tf: Mitigation; PTE Inversion
Vulnerability Mds: Mitigation; Clear CPU buffers; SMT Host state unknown
Vulnerability Meltdown: Mitigation; PTI
Vulnerability Mmio stale data: Mitigation; Clear CPU buffers; SMT Host state unknown
Vulnerability Reg file data sampling: Not affected
Vulnerability Retbleed: Mitigation; IBRS
Vulnerability Spec rstack overflow: Not affected
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl and seccomp
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; IBRS; IBPB conditional; STIBP conditional; RSB filling; PBRSB-eIBRS Not affected; BHI SW loop, KVM SW loop
Vulnerability Srbds: Unknown: Dependent on hypervisor status
Vulnerability Tsx async abort: Mitigation; Clear CPU buffers; SMT Host state unknown
Versions of relevant libraries:
[pip3] numpy==2.2.5
[pip3] nvidia-cublas-cu12==12.9.0.13
[pip3] nvidia-cudnn-cu12==9.9.0.52
[pip3] optree==0.15.0
[pip3] triton==3.3.0
[conda] mkl-include 2025.1.0 pypi_0 pypi
[conda] mkl-static 2025.1.0 pypi_0 pypi
[conda] numpy 2.2.5 pypi_0 pypi
[conda] nvidia-cublas-cu12 12.9.0.13 pypi_0 pypi
[conda] nvidia-cudnn-cu12 9.9.0.52 pypi_0 pypi
[conda] optree 0.15.0 pypi_0 pypi
[conda] triton 3.3.0 pypi_0 pypi
cc @malfet @seemethere
| true
|
3,037,564,690
|
added short integer for repeat_interleave_cpu, Fixes #151311
|
arjuanwall
|
open
|
[
"triaged",
"open source",
"topic: not user facing"
] | 5
|
NONE
|
- Fixes #151311 (repeat_interleave_cpu not implemented for "Char")
- Allows torch.repeat_interleave on CPU to accept int8, uint8, and int16 repeat‑count tensors
- In aten/src/ATen/native/Repeat.cpp, tiny integer dtypes (kChar, kByte, kShort) are up‑cast to kInt before the AT_DISPATCH_INDEX_TYPES macro, so they reach the existing int32/64 kernel
- No changes to the core algorithm or performance‑critical paths (int32/64 stay unchanged)
- test/test_repeat_interleave_smallint.py verifies correct results for int8 and int16 repeat counts (see the minimal example below)
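A minimal illustration of the call this change enables on CPU (a sketch; previously it failed with `repeat_interleave_cpu not implemented for 'Char'`):
```python
import torch

x = torch.tensor([1, 2, 3])
repeats = torch.tensor([1, 2, 3], dtype=torch.int8)  # small-int repeat counts now accepted on CPU
print(torch.repeat_interleave(x, repeats))  # tensor([1, 2, 2, 3, 3, 3])
```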
| true
|
3,037,563,460
|
Performance Regression nightly 02/14→02/15, on nanogpt speedrun
|
YouJiacheng
|
closed
|
[] | 4
|
CONTRIBUTOR
|
### 🐛 Describe the bug
I manually applied changes from #152641
02/09: 1469.8-1470.4s.
03/01: 1471.3-1472.5s.
#### Inductor output code
1. (02/09 + patch vs. 03/01 + patch)
Bwd diff:
https://www.diffchecker.com/p6TsbcIF/
Fwd diff (~no diff):
https://www.diffchecker.com/BaZVI86E/
#### Bisection
02/20 Bwd is identical to 03/01, Fwd ~no diff to both 02/09 & 03/01: https://www.diffchecker.com/hIHeG4HA/
02/15 Bwd is identical to 03/01, Fwd is identical to 02/09.
02/12 Bwd & Fwd is identical to 02/09
02/14 Bwd is identical to 02/09, Fwd ~no diff to both 02/09 & 03/01: https://www.diffchecker.com/TkhA3est/
#### Conclusion
the regression is 02/14→02/15, with this diff in Inductor output code: https://www.diffchecker.com/p6TsbcIF/
According to `nightly` branch:
2025-02-14 nightly release (https://github.com/pytorch/pytorch/commit/68c826639edb3ccd14ed198f8b53f384b4fed36d)
2025-02-15 nightly release (https://github.com/pytorch/pytorch/commit/4233a779603207f19033cd433d2961c93b932cb4)
`git log --oneline f95bdf5e6c8ea482ba6f64d655513b6a191ac142^..4233a779603207f19033cd433d2961c93b932cb4`
```
4233a77960 update kineto submodule to include fix for windows build (#147195)
c1fcba3648 [Inductor] Fix the lowering of squeeze when input is not contiguous (#146746)
bf0c89a72f [dynamo] fix error message when logging graph that contains hops (#147227)
933f921b36 [PT][FSDP] support custom all reduce hook across FSDP units (#147114)
a9ae3340ca Fix triton masked loading for non-block tl.loads (#144782)
49727bbc9d Turn on prologue fusion (#147008)
76f57e184a [dynamo] Make SliceVariable a subclass of VariableTracker (#147046)
a5c0dab900 [AOTInductor] Guard RAII_cpuMalloc with macro (#147150)
1224765286 [cond] make cond call fake kernel in dynamo (#147045)
85a82c5bc8 [cond] make cond re-dispatch in proxy mode (#146954)
eecee5863e Nccl update to 2.25.1 for cuda 12.4-12.8 (#146073)
d38db94689 [inductor][refactor] Move _compile_file to cpp_builder (#147202)
dd86491b35 [cutlass backend][BE] refactor tests to remove duplicate logic (#146743)
6f035d8462 [torch] Make amdsmi cdll hook private (#147207)
272ead7b5e Make fx.node.map_arg() and .map_aggregate() generic (#146248)
58f654b5ad [ONNX] Consolidate constants to a single location (#147166)
765bc30ab9 [ONNX] Set warning stacklevel so it appears at the torch.onnx call site (#147165)
9a1eac6704 [ONNX] Handle number of outputs in builder (#147164)
5517eb4398 Revert "[cutlass backend] Do not change dtype of GEMM template (#146877)"
aac5d1a289 Revert "Add torch._scaled_mm for CPU (#139975)"
20a9938069 try print stacktrace for error (#147061)
8b5ee275fb [MPS] Fix cholesky_ex for empty inputs (#147159)
0d16188c06 [CI] Use job name to index into test times json (#147154)
e8fbc86de0 Make torch.cuda.gds APIs public (#147120)
c3853d924f Introduce new template heuristic for triton autotune configs (#144985)
e06ee4aa9f Revert "Nccl update to 2.25.1 for cuda 12.4-12.8 (#146073)"
059dfe2081 Revert "update kineto submodule (#147015)"
06f4a5c0e5 Nccl update to 2.25.1 for cuda 12.4-12.8 (#146073)
cefd9805de Add `RAISE_VARARGS 0` (#146493)
134723ee1c Add `WITH_EXCEPT_START` opcode (#146492)
dbb86b78ad Add `sys.exc_info` and `sys.exception` (#146498)
ea188ac0c7 [export] Add meta for aten.bincount (#147129)
de26ddfbdc Update torch-xpu-ops commit pin (#146671)
bd019c0bb4 [Inductor][CPP] Fix node name for wgt delete (#147056)
10bc8f25b2 [MPS][BE] Migrate polar to use functor (#147184)
278ffd84fc [MPS][BE] Add copysign integral flavors as functor (#147183)
2ef51cfb9d [BE][MPS] Infer results of functor (#147182)
331d5cf560 [inductor] [cpp] Support vectorization for score and mask in FlexAttention CPU (#143638)
ce38bfd299 [executorch hash update] update the pinned executorch hash (#147157)
92f669e39c [BE] Use `c10::multiply_integers` in cholesky_impl (#147163)
2d089a5697 [dynamo] Remove unintended lru_cache (#147147)
6344ca1dd4 [BE][Ez]: Apply FURB188: use str remove(pre|suf)fix (#146997)
d473c212fd Remove code for Python < 3.9 (#147097)
880e176544 [inductor] Fix for pattern file contains 'getitem' fails during impor… (#144980)
0b84311842 [export] Generate printers/parsers for serialization enum values. (#147126)
05001f0459 Add Structured Tracing for Traced Graph Edge Details for AC Debugging (#146634)
486fc12d7e torch: Log a unified waitcounter for torch.compile and triton.autotune (#146723)
f0bdc27f74 Add torch._scaled_mm for CPU (#139975)
c5a9e4a6a0 [Inductor][CPP] Fix a CPP GEMM Template output data type issue (#146958)
d3524ecdd6 [Break XPU] Align meta calculation for fft_r2c with _fft_r2c_mkl (#146763)
ade5af9430 [XPU] Align XPU convolution_backward output layout between fake tensor and real output tensor. (#146880)
9befdf565a [Break XPU][Inductor UT] Set input tensors to corresponding device for test case in test_aot_indutor.py (#145248)
972e927134 [Break XPU][Inductor UT] Fix XPU Inductor UT failures introduced from community. (#146762)
6419076db9 [torch][amdsmi] Look for amdsmi in ROCM_HOME/ROCM_PATH before using rpath (#147117)
20a369aa3a [Intel GPU] Avoid copy when the input of Matmul is broadcasted (#143784)
057bcd3a45 [ca] eliminate duplicate getitem graph nodes for shape inputs (#146875)
76dacd5fc7 [ca] log graph before reodering passes (#146735)
cdbf677cdd Remove outdated comment in ATen/mkl/Sparse.h about lack of Windows support (#147125)
1f41ceb713 [BE][Ez]: Enable ruff rule banning print in assert (#146615)
5469e5c556 [export] Minor fix to locals (#146955)
7b4efb492b [inductor][refactor] Make _compile_file only used for fbcode (#147106)
2d3db4509a fix pt2e block wise quantization test (#147035)
b0553cee6b [Utilization] post-test-process workflow (#145310)
260b21b8bc [cutlass backend] Do not change dtype of GEMM template (#146877)
92d448ff62 Add self to CODEOWNERS for fx/proxy.py; warn against adding new node arg types (#147031)
9a883007a2 Revert "Implement cuda graphs implementation of torch.cond and torch.while_loop (#140979)"
65e8862b9a Revert "[cond] make cond re-dispatch in proxy mode (#146954)"
1f8ff6812d [Fix]: Disable KleidiAI if unsupported gcc/clang compiler is detected (#146836)
447a142de2 support input mutations on tangents in compile (#141131)
7077d0ac8c [DCP] Introduce modules metadata in the storage_meta (#146654)
938209fb6f Revert "Use 2022 as default VC_YEAR for windows builds (#147053)"
683178fabc [cuda] fix printing of num_gpus (#146838)
020232ec9f [Submodule]: Update KleidiAI submodule to v1.3.0 (#146480)
df776d64f7 chore: fix typos in error messages in FSDP (#146805)
345f556628 Fix `DispatchStub.cpp` compilation for gcc 14 (#146512)
7c3b2a29ec [subclass] testing WrapperSubclass respect outer_size, outer_stride (#146897)
e2479d7809 Update slow tests (#146822)
aeabbffe15 Disable test with dynamo for schema gen (#146865)
67c4c39b4f [docs] Minor fixes to export and aoti docs (#144513)
d1997b610f update kineto submodule (#147015)
8d94eb1e3b [BE]: Make OrderedSet reversible (#146904)
858bc0cea5 Use 2022 as default VC_YEAR for windows builds (#147053)
f95bdf5e6c Make GetCPUAllocatorMaybePinned to be Device-Agnostic (#146687)
```
### Versions
Collecting environment information...
PyTorch version: 2.7.0.dev20250209+cu126
Is debug build: False
CUDA used to build PyTorch: 12.6
ROCM used to build PyTorch: N/A
OS: Ubuntu 24.04.1 LTS (x86_64)
GCC version: (Ubuntu 13.3.0-6ubuntu2~24.04) 13.3.0
Clang version: Could not collect
CMake version: Could not collect
Libc version: glibc-2.39
Python version: 3.12.9 (main, Feb 5 2025, 19:10:45) [Clang 19.1.6 ] (64-bit runtime)
Python platform: Linux-5.4.250-2-velinux1u1-amd64-x86_64-with-glibc2.39
Is CUDA available: True
CUDA runtime version: 12.8.61
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration:
GPU 0: NVIDIA H100 80GB HBM3
GPU 1: NVIDIA H100 80GB HBM3
GPU 2: NVIDIA H100 80GB HBM3
GPU 3: NVIDIA H100 80GB HBM3
GPU 4: NVIDIA H100 80GB HBM3
GPU 5: NVIDIA H100 80GB HBM3
GPU 6: NVIDIA H100 80GB HBM3
GPU 7: NVIDIA H100 80GB HBM3
Nvidia driver version: 535.129.03
cuDNN version: Probably one of the following:
/usr/lib/x86_64-linux-gnu/libcudnn.so.9.7.0
/usr/lib/x86_64-linux-gnu/libcudnn_adv.so.9.7.0
/usr/lib/x86_64-linux-gnu/libcudnn_cnn.so.9.7.0
/usr/lib/x86_64-linux-gnu/libcudnn_engines_precompiled.so.9.7.0
/usr/lib/x86_64-linux-gnu/libcudnn_engines_runtime_compiled.so.9.7.0
/usr/lib/x86_64-linux-gnu/libcudnn_graph.so.9.7.0
/usr/lib/x86_64-linux-gnu/libcudnn_heuristic.so.9.7.0
/usr/lib/x86_64-linux-gnu/libcudnn_ops.so.9.7.0
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 52 bits physical, 57 bits virtual
Byte Order: Little Endian
CPU(s): 168
On-line CPU(s) list: 0-161
Off-line CPU(s) list: 162-167
Vendor ID: GenuineIntel
Model name: Intel(R) Xeon(R) Platinum 8457C
CPU family: 6
Model: 143
Thread(s) per core: 2
Core(s) per socket: 42
Socket(s): 2
Stepping: 8
BogoMIPS: 5199.53
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ss ht syscall nx pdpe1gb rdtscp lm constant_tsc rep_good nopl xtopology nonstop_tsc cpuid pni pclmulqdq monitor ssse3 fma cx16 pcid sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand hypervisor lahf_lm abm 3dnowprefetch cpuid_fault invpcid_single ssbd ibrs ibpb stibp ibrs_enhanced fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid avx512f avx512dq rdseed adx smap avx512ifma clflushopt clwb avx512cd sha_ni avx512bw avx512vl xsaveopt xsavec xgetbv1 xsaves avx512_bf16 wbnoinvd arat avx512vbmi umip pku ospke waitpkg avx512_vbmi2 gfni vaes vpclmulqdq avx512_vnni avx512_bitalg avx512_vpopcntdq rdpid cldemote movdiri movdir64b md_clear arch_capabilities
Hypervisor vendor: KVM
Virtualization type: full
L1d cache: 3.9 MiB (84 instances)
L1i cache: 2.6 MiB (84 instances)
L2 cache: 168 MiB (84 instances)
L3 cache: 195 MiB (2 instances)
NUMA node(s): 2
NUMA node0 CPU(s): 0-83
NUMA node1 CPU(s): 84-167
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Unknown: No mitigations
Vulnerability Retbleed: Not affected
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl and seccomp
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Enhanced IBRS, IBPB conditional, RSB filling, PBRSB-eIBRS SW sequence
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Mitigation; TSX disabled
Versions of relevant libraries:
[pip3] numpy==2.2.2
[pip3] nvidia-cublas-cu12==12.6.4.1
[pip3] nvidia-cuda-cupti-cu12==12.6.80
[pip3] nvidia-cuda-nvrtc-cu12==12.6.77
[pip3] nvidia-cuda-runtime-cu12==12.6.77
[pip3] nvidia-cudnn-cu12==9.5.1.17
[pip3] nvidia-cufft-cu12==11.3.0.4
[pip3] nvidia-curand-cu12==10.3.7.77
[pip3] nvidia-cusolver-cu12==11.7.1.2
[pip3] nvidia-cusparse-cu12==12.5.4.2
[pip3] nvidia-cusparselt-cu12==0.6.3
[pip3] nvidia-nccl-cu12==2.21.5
[pip3] nvidia-nvjitlink-cu12==12.6.85
[pip3] nvidia-nvtx-cu12==12.6.77
[pip3] pytorch-triton==3.2.0+git4b3bb1f8
[pip3] torch==2.7.0.dev20250209+cu126
[conda] Could not collect
| true
|
3,037,539,544
|
[Easy][BE] update recommended VS Code settings
|
XuehaiPan
|
open
|
[
"open source",
"better-engineering",
"topic: not user facing"
] | 1
|
COLLABORATOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #152760
Remove old invalid settings and replace with new settings.
| true
|
3,037,484,623
|
Allow ATen ops overloading
|
goldcoderZ
|
open
|
[
"fb-exported"
] | 4
|
CONTRIBUTOR
|
Summary: Allow ATen ops to be overloaded.
Test Plan: contbuild & OSS CI [pending]
Differential Revision: D74117257
| true
|
3,037,303,120
|
[MPS] Migrate div rounding modes
|
malfet
|
closed
|
[
"Merged",
"topic: improvements",
"release notes: mps",
"ciflow/mps",
"keep-going"
] | 4
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #152788
* __->__ #152758
By implementing `div_floor` and `div_trunc`. Do not mark `div_trunc` as OPMATH, to align the following output with CPU (if the division were performed in fp32, the result would be truncated to 25):
```
import torch
print(torch.tensor([[-7.4688, -3.1289]], dtype=torch.float16,device="cpu").div(torch.tensor([-0.2988, -0.8789], dtype=torch.bfloat16,device="cpu"), rounding_mode="trunc"))
tensor([[24., 3.]])
```
| true
|
3,037,220,364
|
wip
|
bobrenjc93
|
closed
|
[
"release notes: fx",
"fx",
"ciflow/inductor"
] | 1
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #152757
* #152601
* #152597
* #152596
cc @ezyang @SherlockNoMad @EikanWang @jgong5 @wenzhe-nrv
| true
|
3,037,212,290
|
Cuda-12.9 removed libnvToolsExt.so.* and is now purely header nvtx3
|
whitesscott
|
open
|
[
"module: cuda",
"triaged",
"actionable"
] | 3
|
NONE
|
### 🐛 Describe the bug
Nvidia released Cuda-12.9 on 05/01/25
Python 3.12.10 venv
Nvidia Jetson AGX Orin dev kit
CUDA 12.9 removed libnvToolsExt.so.* and is now purely header-based: /usr/local/cuda/include/nvtx3/*
torch/__init__.py attempts to load the now nonexistent library:
"nvtx": "libnvToolsExt.so.*[0-9]",
I compiled torch Version: 2.7.0a0+git119ea4a this morning.
Tried to compile vllm this afternoon. It fails no matter what I do with this error:
```
CMake Error at python3.12/site-packages/torch/share/cmake/Caffe2/public/cuda.cmake:187 (set_property):
The link interface of target "torch::nvtoolsext" contains:
CUDA::nvToolsExt
but the target was not found. Possible reasons include:
* There is a typo in the target name.
* A find_package call is missing for an IMPORTED target.
* An ALIAS target is missing.
```
I added the following prior to more compilation attempts:
```
export CMAKE_ARGS="-DUSE_SYSTEM_NVTX=ON \
-DCUDACXX=/usr/local/cuda/bin/nvcc \
-Dnvtx3_DIR=/usr/local/cuda/include/nvtx3"
```
and doing so shortened the error shown below to the one noted above:
```
-- Could NOT find nvtx3 (missing: nvtx3_dir)
CMake Warning at python3.12/site-packages/torch/share/cmake/Caffe2/public/cuda.cmake:184 (message):
Cannot find NVTX3, find old NVTX instead
Call Stack (most recent call first):
python3.12/site-packages/torch/share/cmake/Caffe2/Caffe2Config.cmake:86 (include)
python3.12/site-packages/torch/share/cmake/Torch/TorchConfig.cmake:68 (find_package)
CMakeLists.txt:80 (find_package)
CMake Error at python3.12/site-packages/torch/share/cmake/Caffe2/public/cuda.cmake:186 (set_property):
The link interface of target "torch::nvtoolsext" contains:
CUDA::nvToolsExt
but the target was not found. Possible reasons include:
CMake Generate step failed. Build files cannot be regenerated correctly.
```
pytorch/share/cmake/Caffe2/public/cuda.cmake fails to find nvtx3, falls through to the else branch, and errors out at the last line before endif().
```
# nvToolsExt
if(USE_SYSTEM_NVTX)
find_path(nvtx3_dir NAMES nvtx3 PATHS ${CUDA_INCLUDE_DIRS})
else()
find_path(nvtx3_dir NAMES nvtx3 PATHS "${PROJECT_SOURCE_DIR}/third_party/NVTX/c/include" NO_DEFAULT_PA>endif()
find_package_handle_standard_args(nvtx3 DEFAULT_MSG nvtx3_dir)
if(nvtx3_FOUND)
add_library(torch::nvtx3 INTERFACE IMPORTED)
target_include_directories(torch::nvtx3 INTERFACE "${nvtx3_dir}")
target_compile_definitions(torch::nvtx3 INTERFACE TORCH_CUDA_USE_NVTX3)
else()
message(WARNING "Cannot find NVTX3, find old NVTX instead")
add_library(torch::nvtoolsext INTERFACE IMPORTED)
set_property(TARGET torch::nvtoolsext PROPERTY INTERFACE_LINK_LIBRARIES CUDA::nvToolsExt)
endif()
```
### Versions
Collecting environment information...
PyTorch version: 2.7.0a0+git119ea4a
Is debug build: False
CUDA used to build PyTorch: 12.9
ROCM used to build PyTorch: N/A
OS: Ubuntu 22.04.5 LTS (aarch64)
GCC version: (Ubuntu 11.4.0-1ubuntu1~22.04) 11.4.0
Clang version: Could not collect
CMake version: version 3.31.6
Libc version: glibc-2.35
Python version: 3.12.10 (main, Apr 9 2025, 03:50:15) [GCC 6.3.0 20170516] (64-bit runtime)
Python platform: Linux-5.15.148-tegra-aarch64-with-glibc2.35
Is CUDA available: True
CUDA runtime version: 12.9.41
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration: GPU 0: Orin (nvgpu)
Nvidia driver version: 540.4.0
cuDNN version: Probably one of the following:
/usr/lib/aarch64-linux-gnu/libcudnn.so.9.9.0
/usr/lib/aarch64-linux-gnu/libcudnn_adv.so.9.9.0
/usr/lib/aarch64-linux-gnu/libcudnn_cnn.so.9.9.0
/usr/lib/aarch64-linux-gnu/libcudnn_engines_precompiled.so.9.9.0
/usr/lib/aarch64-linux-gnu/libcudnn_engines_runtime_compiled.so.9.9.0
/usr/lib/aarch64-linux-gnu/libcudnn_graph.so.9.9.0
/usr/lib/aarch64-linux-gnu/libcudnn_heuristic.so.9.9.0
/usr/lib/aarch64-linux-gnu/libcudnn_ops.so.9.9.0
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: aarch64
CPU op-mode(s): 32-bit, 64-bit
Byte Order: Little Endian
CPU(s): 12
On-line CPU(s) list: 0-11
Vendor ID: ARM
Model name: Cortex-A78AE
Model: 1
Thread(s) per core: 1
Core(s) per cluster: 4
Socket(s): -
Cluster(s): 3
Stepping: r0p1
CPU max MHz: 2201.6001
CPU min MHz: 115.2000
BogoMIPS: 62.50
Flags: fp asimd evtstrm aes pmull sha1 sha2 crc32 atomics fphp asimdhp cpuid asimdrdm lrcpc dcpop asimddp uscat ilrcpc flagm paca pacg
L1d cache: 768 KiB (12 instances)
L1i cache: 768 KiB (12 instances)
L2 cache: 3 MiB (12 instances)
L3 cache: 6 MiB (3 instances)
NUMA node(s): 1
NUMA node0 CPU(s): 0-11
Vulnerability Gather data sampling: Not affected
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Not affected
Vulnerability Retbleed: Not affected
Vulnerability Spec rstack overflow: Not affected
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl
Vulnerability Spectre v1: Mitigation; __user pointer sanitization
Vulnerability Spectre v2: Mitigation; CSV2, but not BHB
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Versions of relevant libraries:
[pip3] flake8==7.2.0
[pip3] mypy==1.15.0
[pip3] mypy_extensions==1.1.0
[pip3] numpy==2.2.5
[pip3] nvidia-cusparselt-cu12==0.7.1
[pip3] nvtx==0.2.11
[pip3] optree==0.15.0
[pip3] torch==2.7.0a0+git119ea4a
[pip3] torchaudio==2.7.0a0+654fee8
[pip3] torchvision==0.22.0+9eb57cd
[pip3] triton==3.3.0+gitc7fc1e38
[conda] Could not collect
cc @ptrblck @msaroufim @eqy @jerryzh168
| true
|
3,037,186,867
|
Inconsistent behavior between CPU and GPU implementations of `torch.Tensor.put_` method
|
SilentTester73
|
closed
|
[] | 1
|
NONE
|
### 🐛 Describe the bug
## Description
I've discovered a discrepancy in the behavior of the `put_` method between CPU and GPU tensors. When executing identical operations, CPU tensors maintain their original values while GPU tensors are incorrectly modified to zero.
## Reproduction Code
colab link: [https://colab.research.google.com/drive/1vU3uDTXgqKbD-kytlfbgW3OKJBKkTTrl?usp=sharing](https://colab.research.google.com/drive/1vU3uDTXgqKbD-kytlfbgW3OKJBKkTTrl?usp=sharing)
```python
import torch
cpu_tensor = torch.tensor(-6144)
indices = torch.zeros((5, 6), dtype=torch.int64)
values_data = [0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 19712, 0, -6144, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 19712, 0, -6144]
values = torch.tensor(values_data, dtype=torch.int64)
# CPU operation
result_cpu = cpu_tensor.put_(indices, values)
print("CPU result:")
print(result_cpu)
# Check if CUDA is available
if torch.cuda.is_available():
# Move tensors to GPU
gpu_tensor = cpu_tensor.clone().to('cuda')
gpu_indices = indices.to('cuda')
gpu_values = values.to('cuda')
# Perform operation on GPU
result_gpu = gpu_tensor.put_(gpu_indices, gpu_values)
# Move result back to CPU for comparison
result_gpu_cpu = result_gpu.to('cpu')
print("\nGPU result (moved back to CPU):")
print(result_gpu_cpu)
```
Output:
```
CPU result:
tensor(-6144)
GPU result (moved back to CPU):
tensor(0)
```
## Behavior
- CPU result: `tensor(-6144)` (correct)
- GPU result: `tensor(0)` (incorrect)
## Additional Information
The operation involves a scalar tensor with indices and values of different shapes. The CPU implementation correctly handles this case and maintains the original value, while the GPU implementation incorrectly changes the value to zero.
### Versions
```
Collecting environment information...
PyTorch version: 2.6.0+cu124
Is debug build: False
CUDA used to build PyTorch: 12.4
ROCM used to build PyTorch: N/A
OS: Ubuntu 22.04.4 LTS (x86_64)
GCC version: (Ubuntu 11.4.0-1ubuntu1~22.04) 11.4.0
Clang version: 17.0.6 (++20231209124227+6009708b4367-1~exp1~20231209124336.77)
CMake version: version 3.22.1
Libc version: glibc-2.35
Python version: 3.10.12 (main, Feb 4 2025, 14:57:36) [GCC 11.4.0] (64-bit runtime)
Python platform: Linux-6.8.0-58-generic-x86_64-with-glibc2.35
Is CUDA available: True
CUDA runtime version: 12.5.82
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration: GPU 0: NVIDIA GeForce RTX 4090
Nvidia driver version: 570.133.20
cuDNN version: Could not collect
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 39 bits physical, 48 bits virtual
Byte Order: Little Endian
CPU(s): 32
On-line CPU(s) list: 0-31
Vendor ID: GenuineIntel
Model name: 13th Gen Intel(R) Core(TM) i9-13900F
CPU family: 6
Model: 183
Thread(s) per core: 2
Core(s) per socket: 24
Socket(s): 1
Stepping: 1
CPU max MHz: 5600.0000
CPU min MHz: 800.0000
BogoMIPS: 3993.60
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc art arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc cpuid aperfmperf tsc_known_freq pni pclmulqdq dtes64 monitor ds_cpl vmx est tm2 ssse3 sdbg fma cx16 xtpr pdcm sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm 3dnowprefetch cpuid_fault epb ssbd ibrs ibpb stibp ibrs_enhanced tpr_shadow flexpriority ept vpid ept_ad fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid rdseed adx smap clflushopt clwb intel_pt sha_ni xsaveopt xsavec xgetbv1 xsaves split_lock_detect user_shstk avx_vnni dtherm ida arat pln pts hwp hwp_notify hwp_act_window hwp_epp hwp_pkg_req hfi vnmi umip pku ospke waitpkg gfni vaes vpclmulqdq rdpid movdiri movdir64b fsrm md_clear serialize arch_lbr ibt flush_l1d arch_capabilities
Virtualization: VT-x
L1d cache: 896 KiB (24 instances)
L1i cache: 1.3 MiB (24 instances)
L2 cache: 32 MiB (12 instances)
L3 cache: 36 MiB (1 instance)
NUMA node(s): 1
NUMA node0 CPU(s): 0-31
Vulnerability Gather data sampling: Not affected
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Not affected
Vulnerability Reg file data sampling: Mitigation; Clear Register File
Vulnerability Retbleed: Not affected
Vulnerability Spec rstack overflow: Not affected
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Enhanced / Automatic IBRS; IBPB conditional; RSB filling; PBRSB-eIBRS SW sequence; BHI BHI_DIS_S
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Versions of relevant libraries:
[pip3] numpy==2.2.4
[pip3] nvidia-cublas-cu12==12.4.5.8
[pip3] nvidia-cuda-cupti-cu12==12.4.127
[pip3] nvidia-cuda-nvrtc-cu12==12.4.127
[pip3] nvidia-cuda-runtime-cu12==12.4.127
[pip3] nvidia-cudnn-cu12==9.1.0.70
[pip3] nvidia-cufft-cu12==11.2.1.3
[pip3] nvidia-curand-cu12==10.3.5.147
[pip3] nvidia-cusolver-cu12==11.6.1.9
[pip3] nvidia-cusparse-cu12==12.3.1.170
[pip3] nvidia-cusparselt-cu12==0.6.2
[pip3] nvidia-nccl-cu12==2.21.5
[pip3] nvidia-nvjitlink-cu12==12.4.127
[pip3] nvidia-nvtx-cu12==12.4.127
[pip3] torch==2.6.0
[pip3] triton==3.2.0
[conda] Could not collect
```
| true
|
3,037,173,341
|
[nativert] move intrusive list to c10/util
|
dolpm
|
closed
|
[
"fb-exported",
"Merged",
"ciflow/trunk",
"topic: not user facing"
] | 13
|
CONTRIBUTOR
|
Summary:
nativert RFC: https://github.com/zhxchen17/rfcs/blob/master/RFC-0043-torch-native-runtime.md
To land the runtime into PyTorch core, we will gradually land logical parts of the code into the Github issue and get each piece properly reviewed.
This diff moves intrusive list to c10/util
Test Plan: CI
Differential Revision: D74104595
| true
|
3,037,152,263
|
Handle fewer functions than the number of segments
|
JacobHelwig
|
open
|
[
"triaged",
"open source",
"release notes: autograd"
] | 9
|
NONE
|
Fixes #152752
| true
|
3,037,151,754
|
Checkpoint sequential doesn't raise a clear error when segments is greater than the number of functions
|
JacobHelwig
|
open
|
[
"module: activation checkpointing",
"triaged"
] | 0
|
NONE
|
### 🐛 Describe the bug
When incorrectly specifying segments to be greater than the number of functions, the error message is not clear:
```
import torch
print(torch.__version__)
from torch.utils.checkpoint import checkpoint_sequential
lin = torch.nn.Linear(10, 10)
torch.nn.init.zeros_(lin.weight)
torch.nn.init.zeros_(lin.bias)
x = torch.zeros(1, 10)
functions = [lin]
x = checkpoint_sequential(
functions=functions,
segments=len(functions) + 1,
input=x,
use_reentrant=False,
)
```
Output:
```
2.1.0+cu121
Traceback (most recent call last):
File "/mnt/data/shared/jacob/ckpt.py", line 11, in <module>
x = checkpoint_sequential(
File "/data/jacob/anaconda3/lib/python3.9/site-packages/torch/utils/checkpoint.py", line 531, in checkpoint_sequential
for start in range(0, segment_size * (segments - 1), segment_size):
ValueError: range() arg 3 must not be zero
```
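A minimal sketch of the kind of up-front validation that would make this failure readable (illustrative only; the actual fix in the linked PR may differ):
```python
from torch.utils.checkpoint import checkpoint_sequential

def checkpoint_sequential_checked(functions, segments, input, **kwargs):
    # fail early with a clear message instead of the opaque range() error
    if segments > len(functions):
        raise ValueError(
            f"segments ({segments}) must not be greater than the number of "
            f"functions ({len(functions)})"
        )
    return checkpoint_sequential(functions, segments, input, **kwargs)
```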
### Versions
# For security purposes, please check the contents of collect_env.py before running it.
python collect_env.py
Collecting environment information...
PyTorch version: 2.1.0+cu121
Is debug build: False
CUDA used to build PyTorch: 12.1
ROCM used to build PyTorch: N/A
OS: Ubuntu 22.04.4 LTS (x86_64)
GCC version: (Ubuntu 11.4.0-1ubuntu1~22.04) 11.4.0
Clang version: Could not collect
CMake version: Could not collect
Libc version: glibc-2.35
Python version: 3.9.13 (main, Aug 25 2022, 23:26:10) [GCC 11.2.0] (64-bit runtime)
Python platform: Linux-6.8.0-48-generic-x86_64-with-glibc2.35
Is CUDA available: True
CUDA runtime version: Could not collect
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration:
GPU 0: NVIDIA GeForce RTX 2080 Ti
GPU 1: NVIDIA GeForce RTX 2080 Ti
GPU 2: NVIDIA GeForce RTX 2080 Ti
GPU 3: NVIDIA GeForce RTX 2080 Ti
GPU 4: NVIDIA GeForce RTX 2080 Ti
GPU 5: NVIDIA GeForce RTX 2080 Ti
GPU 6: NVIDIA GeForce RTX 2080 Ti
GPU 7: NVIDIA GeForce RTX 2080 Ti
GPU 8: NVIDIA GeForce RTX 2080 Ti
Nvidia driver version: 555.42.02
cuDNN version: Could not collect
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 46 bits physical, 48 bits virtual
Byte Order: Little Endian
CPU(s): 48
On-line CPU(s) list: 0-47
Vendor ID: GenuineIntel
Model name: Intel(R) Xeon(R) Silver 4116 CPU @ 2.10GHz
CPU family: 6
Model: 85
Thread(s) per core: 2
Core(s) per socket: 12
Socket(s): 2
Stepping: 4
CPU max MHz: 3000.0000
CPU min MHz: 800.0000
BogoMIPS: 4200.00
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc art arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc cpuid aperfmperf pni pclmulqdq dtes64 monitor ds_cpl vmx smx est tm2 ssse3 sdbg fma cx16 xtpr pdcm pcid dca sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm 3dnowprefetch cpuid_fault epb cat_l3 cdp_l3 pti ssbd mba ibrs ibpb stibp tpr_shadow flexpriority ept vpid ept_ad fsgsbase tsc_adjust bmi1 hle avx2 smep bmi2 erms invpcid rtm cqm mpx rdt_a avx512f avx512dq rdseed adx smap clflushopt clwb intel_pt avx512cd avx512bw avx512vl xsaveopt xsavec xgetbv1 xsaves cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local dtherm ida arat pln pts hwp hwp_act_window hwp_epp hwp_pkg_req vnmi pku ospke md_clear flush_l1d arch_capabilities
Virtualization: VT-x
L1d cache: 768 KiB (24 instances)
L1i cache: 768 KiB (24 instances)
L2 cache: 24 MiB (24 instances)
L3 cache: 33 MiB (2 instances)
NUMA node(s): 2
NUMA node0 CPU(s): 0-11,24-35
NUMA node1 CPU(s): 12-23,36-47
Vulnerability Gather data sampling: Mitigation; Microcode
Vulnerability Itlb multihit: KVM: Mitigation: VMX disabled
Vulnerability L1tf: Mitigation; PTE Inversion; VMX conditional cache flushes, SMT vulnerable
Vulnerability Mds: Mitigation; Clear CPU buffers; SMT vulnerable
Vulnerability Meltdown: Mitigation; PTI
Vulnerability Mmio stale data: Mitigation; Clear CPU buffers; SMT vulnerable
Vulnerability Reg file data sampling: Not affected
Vulnerability Retbleed: Mitigation; IBRS
Vulnerability Spec rstack overflow: Not affected
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; IBRS; IBPB conditional; STIBP conditional; RSB filling; PBRSB-eIBRS Not affected; BHI Not affected
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Mitigation; Clear CPU buffers; SMT vulnerable
Versions of relevant libraries:
[pip3] flake8==4.0.1
[pip3] mypy-extensions==0.4.3
[pip3] numpy==1.24.4
[pip3] numpydoc==1.4.0
[pip3] nvidia-cublas-cu12==12.1.3.1
[pip3] nvidia-cuda-cupti-cu12==12.1.105
[pip3] nvidia-cuda-nvrtc-cu12==12.1.105
[pip3] nvidia-cuda-runtime-cu12==12.1.105
[pip3] nvidia-cudnn-cu12==8.9.2.26
[pip3] nvidia-cufft-cu12==11.0.2.54
[pip3] nvidia-curand-cu12==10.3.2.106
[pip3] nvidia-cusolver-cu12==11.4.5.107
[pip3] nvidia-cusparse-cu12==12.1.0.106
[pip3] nvidia-nccl-cu12==2.18.1
[pip3] nvidia-nvjitlink-cu12==12.3.52
[pip3] nvidia-nvtx-cu12==12.1.105
[pip3] pytorch-lightning==1.9.5
[pip3] torch==2.1.0
[pip3] torchaudio==0.12.1
[pip3] torchcde==0.2.5
[pip3] torchcfm==1.0.6
[pip3] torchdata==0.7.0
[pip3] torchdiffeq==0.2.3
[pip3] torchdyn==1.0.6
[pip3] torchmetrics==1.2.0
[pip3] torchsde==0.2.6
[pip3] torchvision==0.13.1
[pip3] triton==2.1.0
[conda] blas 1.0 mkl
[conda] cudatoolkit 11.6.0 hecad31d_10 conda-forge
[conda] mkl 2021.4.0 h06a4308_640
[conda] mkl-service 2.4.0 py39h7f8727e_0
[conda] mkl_fft 1.3.1 py39hd3c417c_0
[conda] mkl_random 1.2.2 py39h51133e4_0
[conda] numpy 1.24.4 pypi_0 pypi
[conda] numpydoc 1.4.0 py39h06a4308_0
[conda] nvidia-cublas-cu12 12.1.3.1 pypi_0 pypi
[conda] nvidia-cuda-cupti-cu12 12.1.105 pypi_0 pypi
[conda] nvidia-cuda-nvrtc-cu12 12.1.105 pypi_0 pypi
[conda] nvidia-cuda-runtime-cu12 12.1.105 pypi_0 pypi
[conda] nvidia-cudnn-cu12 8.9.2.26 pypi_0 pypi
[conda] nvidia-cufft-cu12 11.0.2.54 pypi_0 pypi
[conda] nvidia-curand-cu12 10.3.2.106 pypi_0 pypi
[conda] nvidia-cusolver-cu12 11.4.5.107 pypi_0 pypi
[conda] nvidia-cusparse-cu12 12.1.0.106 pypi_0 pypi
[conda] nvidia-nccl-cu12 2.18.1 pypi_0 pypi
[conda] nvidia-nvjitlink-cu12 12.3.52 pypi_0 pypi
[conda] nvidia-nvtx-cu12 12.1.105 pypi_0 pypi
[conda] pytorch-lightning 1.9.5 pypi_0 pypi
[conda] pytorch-mutex 1.0 cuda pytorch
[conda] torch 2.1.0 pypi_0 pypi
[conda] torchaudio 0.12.1 py39_cu116 pytorch
[conda] torchcde 0.2.5 pypi_0 pypi
[conda] torchcfm 1.0.6 pypi_0 pypi
[conda] torchdata 0.7.0 pypi_0 pypi
[conda] torchdiffeq 0.2.3 pypi_0 pypi
[conda] torchdyn 1.0.6 pypi_0 pypi
[conda] torchmetrics 1.2.0 pypi_0 pypi
[conda] torchsde 0.2.6 pypi_0 pypi
[conda] torchvision 0.13.1 py39_cu116 pytorch
[conda] triton 2.1.0 pypi_0 pypi
cc @soulitzer
| true
|
3,037,106,704
|
Implement util function compute_global_tensor_shape for 1D device mesh
|
dharakk
|
closed
|
[
"oncall: distributed",
"Merged",
"ciflow/trunk",
"topic: not user facing",
"ciflow/inductor"
] | 3
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #152166
* __->__ #152751
### Summary
Recreating #151990 to mitigate easyCLA failure
The compute_global_tensor_shape util function takes in a local tensor shape, device mesh,
and placements. We all-gather the shapes from the shards and, according to the placement
type, construct the global shape.
Note: currently only implemented for placement types Shard and Replicate; TODO for StridedShard.
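A rough sketch of the idea for a 1D mesh (illustrative only, not the PR's actual implementation; the helper name is made up):
```python
import torch
import torch.distributed as dist

def compute_global_tensor_shape_sketch(local_shape, mesh, placements):
    placement = placements[0]  # 1D mesh -> a single placement
    group = mesh.get_group()
    gathered = [None] * dist.get_world_size(group)
    dist.all_gather_object(gathered, list(local_shape), group=group)  # collect every rank's local shape
    if placement.is_replicate():
        return torch.Size(gathered[0])
    # Shard(dim): sum the sharded dim across ranks, keep the other dims
    dim = placement.dim
    global_shape = list(gathered[0])
    global_shape[dim] = sum(shape[dim] for shape in gathered)
    return torch.Size(global_shape)
```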
### Test
`pytest test/distributed/tensor/test_utils.py`
cc @H-Huang @awgu @wanchaol @fegin @fduwjj @wz337 @wconstab @d4l3k
| true
|
3,037,070,633
|
Error on padding 0-sized tensors
|
roman-openai
|
open
|
[
"triaged",
"actionable",
"module: python frontend",
"module: edge cases"
] | 1
|
NONE
|
### 🐛 Describe the bug
```python
import torch
from torch.nn import functional
x = torch.ones((0, 1))
y = functional.pad(x, [1, 1, 0, 0])
```
raises
```python
---------------------------------------------------------------------------
RuntimeError Traceback (most recent call last)
Cell In[517], line 3
1 from torch.nn import functional
2 x = torch.ones((0, 1))
----> 3 y = functional.pad(x, [1, 1, 0, 0])
File ~/.pyenv/versions/3.11.8/lib/python3.11/site-packages/torch/nn/functional.py:5209, in pad(input, pad, mode, value)
5202 if mode == "replicate":
5203 # Use slow decomp whose backward will be in terms of index_put.
5204 # importlib is required because the import cannot be top level
5205 # (cycle) and cannot be nested (TS doesn't support)
5206 return importlib.import_module(
5207 "torch._decomp.decompositions"
5208 )._replication_pad(input, pad)
-> 5209 return torch._C._nn.pad(input, pad, mode, value)
RuntimeError: The input size 0, plus negative padding 0 and 0 resulted in a negative output size, which is invalid. Check dimension 0 of your input.
```
But the expected output is a tensor of shape `(0, 3)`.
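As a stop-gap, the padded result can be built manually for the degenerate case (a sketch assuming constant/zero padding of a 2D tensor; `pad_constant` is a made-up helper name):
```python
import torch
import torch.nn.functional as F

def pad_constant(x, pad, value=0.0):
    # pad follows F.pad order for a 2D input: (left, right, top, bottom)
    if x.numel() > 0:
        return F.pad(x, pad, value=value)
    left, right, top, bottom = pad
    out_shape = (x.shape[0] + top + bottom, x.shape[1] + left + right)
    return x.new_full(out_shape, value)

y = pad_constant(torch.ones((0, 1)), [1, 1, 0, 0])
print(y.shape)  # torch.Size([0, 3])
```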
### Versions
2.6.0
cc @albanD
| true
|
3,037,034,349
|
wip
|
bobrenjc93
|
closed
|
[
"release notes: fx",
"fx",
"module: dynamo",
"ciflow/inductor"
] | 1
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #152749
* #152670
* #152601
* #152597
* #152596
cc @ezyang @SherlockNoMad @EikanWang @jgong5 @wenzhe-nrv @voznesenskym @penguinwu @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @jiayisunx @chenyang78 @kadeng @chauhang @amjames
| true
|
3,037,027,389
|
Conditionally support experimental filesystem include in jit_opt_limit
|
aa6moham
|
open
|
[
"oncall: jit",
"fb-exported",
"ciflow/trunk",
"release notes: jit"
] | 11
|
NONE
|
Summary: some build modes rely on GCC toolchains older than 8.1 (the version where the official std::filesystem library was integrated into the standard library), so to support these older build modes (i.e. arvr/mode/embedded/linux/clang-aarch64-release) let's have a conditional on when to include the experimental filesystem library for older GCC versions
Test Plan: CI happy
Differential Revision: D74084646
cc @EikanWang @jgong5 @wenzhe-nrv @sanchitintel
| true
|
3,037,006,558
|
torch.compile causes stride mismatch in SDPA with non-contiguous query in torch 2.7
|
felix-lyx
|
open
|
[
"high priority",
"triaged",
"module: regression",
"oncall: pt2"
] | 0
|
NONE
|
### 🐛 Describe the bug
In PyTorch 2.7, when running a compiled attention block with a non-contiguous query input to `F.scaled_dot_product_attention` on CUDA, I get a stride mismatch error. The default mode for `torch.compile` is used. The non-contiguous query comes from transposing the sequence and head dimensions, which should be a standard operation.
This issue didn't exist in an earlier version of PyTorch (2.5.1+cu124) and now happens for all SDPA backends. One temporary fix I found is to force the transposed query to be contiguous before `F.scaled_dot_product_attention`, which incurs extra computational overhead.
Here's the code to reproduce the error:
```python
import torch
import torch.nn as nn
import torch.nn.functional as F
from torch.nn.attention import SDPBackend, sdpa_kernel
class MultiheadAttention(nn.Module):
def __init__(self, embed_dim, num_heads):
super().__init__()
self.embed_dim = embed_dim
self.num_heads = num_heads
self.head_dim = embed_dim // num_heads
assert self.head_dim * num_heads == self.embed_dim
def forward(self, q, k, v, is_causal=True):
bs, seq_len, _ = q.size()
q = q.view(bs, seq_len, self.num_heads, self.head_dim).transpose(1, 2)
k = k.view(bs, k.size(1), self.num_heads, self.head_dim).transpose(1, 2)
v = v.view(bs, v.size(1), self.num_heads, self.head_dim).transpose(1, 2)
# q = q.contiguous() # make torch.compile happy, otherwise get striding error
output = F.scaled_dot_product_attention(q, k, v, is_causal=is_causal)
output = output.transpose(1, 2).contiguous().view(bs, seq_len, -1)
return output
def test(backend):
b = 2
seq_len = 32
embed_dim = 128
num_heads = 8
dtype = torch.float16
attn = MultiheadAttention(embed_dim, num_heads).to("cuda", dtype=dtype)
attn_compile = torch.compile(attn)
q = torch.randn(b, seq_len, embed_dim).to("cuda", dtype=dtype)
k = torch.randn_like(q)
v = torch.randn_like(q)
with sdpa_kernel(backend):
print(f" test uncomplied attention ({backend}) ".center(100, "-"))
try:
attn(q, k, v)
print("all good")
except Exception as e:
print(e)
print(f" test complied attention ({backend}) ".center(100, "-"))
try:
attn_compile(q, k, v)
print("all good")
except Exception as e:
print(e)
print()
if __name__ == "__main__":
for backend in [
SDPBackend.CUDNN_ATTENTION,
SDPBackend.FLASH_ATTENTION,
SDPBackend.EFFICIENT_ATTENTION,
SDPBackend.MATH,
]:
test(backend)
```
and the output I got is
```
---------------------- test uncomplied attention (SDPBackend.CUDNN_ATTENTION) ----------------------
all good
----------------------- test complied attention (SDPBackend.CUDNN_ATTENTION) -----------------------
expected size 8==8, stride 16==512 at dim=1; expected size 32==32, stride 128==16 at dim=2
This error most often comes from a incorrect fake (aka meta) kernel for a custom op.
Use torch.library.opcheck to test your custom op.
See https://pytorch.org/docs/stable/library.html#torch.library.opcheck
---------------------- test uncomplied attention (SDPBackend.FLASH_ATTENTION) ----------------------
all good
----------------------- test complied attention (SDPBackend.FLASH_ATTENTION) -----------------------
expected size 8==8, stride 16==512 at dim=1; expected size 32==32, stride 128==16 at dim=2
This error most often comes from a incorrect fake (aka meta) kernel for a custom op.
Use torch.library.opcheck to test your custom op.
See https://pytorch.org/docs/stable/library.html#torch.library.opcheck
-------------------- test uncomplied attention (SDPBackend.EFFICIENT_ATTENTION) --------------------
all good
--------------------- test complied attention (SDPBackend.EFFICIENT_ATTENTION) ---------------------
expected size 8==8, stride 16==512 at dim=1; expected size 32==32, stride 128==16 at dim=2
This error most often comes from a incorrect fake (aka meta) kernel for a custom op.
Use torch.library.opcheck to test your custom op.
See https://pytorch.org/docs/stable/library.html#torch.library.opcheck
--------------------------- test uncomplied attention (SDPBackend.MATH) ----------------------------
all good
---------------------------- test complied attention (SDPBackend.MATH) -----------------------------
expected size 8==8, stride 16==512 at dim=1; expected size 32==32, stride 128==16 at dim=2
This error most often comes from a incorrect fake (aka meta) kernel for a custom op.
Use torch.library.opcheck to test your custom op.
See https://pytorch.org/docs/stable/library.html#torch.library.opcheck
```
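For reference, the temporary workaround mentioned above looks roughly like this (a sketch, not a fix for the underlying compile issue; the function name is made up):
```python
import torch
import torch.nn.functional as F

def sdpa_with_contiguous_q(q, k, v, num_heads, is_causal=True):
    bs, seq_len, embed_dim = q.shape
    head_dim = embed_dim // num_heads
    # .contiguous() after the transpose avoids the stride assertion under
    # torch.compile, at the cost of an extra copy of the query
    q = q.view(bs, seq_len, num_heads, head_dim).transpose(1, 2).contiguous()
    k = k.view(bs, k.size(1), num_heads, head_dim).transpose(1, 2)
    v = v.view(bs, v.size(1), num_heads, head_dim).transpose(1, 2)
    out = F.scaled_dot_product_attention(q, k, v, is_causal=is_causal)
    return out.transpose(1, 2).contiguous().view(bs, seq_len, -1)
```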
### Error logs
Here's the more detailed error traceback for cuDNN SDPA:
```
Traceback (most recent call last):
File "/home/yuxuan/code/test.py", line 72, in <module>
test(backend)
File "/home/yuxuan/code/test.py", line 56, in test
attn_compile(q, k, v)
File "/home/yuxuan/miniconda3/envs/test/lib/python3.11/site-packages/torch/nn/modules/module.py", line 1751, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/yuxuan/miniconda3/envs/test/lib/python3.11/site-packages/torch/nn/modules/module.py", line 1762, in _call_impl
return forward_call(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/yuxuan/miniconda3/envs/test/lib/python3.11/site-packages/torch/_dynamo/eval_frame.py", line 655, in _fn
return fn(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^
File "/home/yuxuan/miniconda3/envs/test/lib/python3.11/site-packages/torch/nn/modules/module.py", line 1751, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/yuxuan/miniconda3/envs/test/lib/python3.11/site-packages/torch/nn/modules/module.py", line 1762, in _call_impl
return forward_call(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/yuxuan/code/test.py", line 16, in forward
def forward(self, q, k, v, is_causal=True):
File "/home/yuxuan/miniconda3/envs/test/lib/python3.11/site-packages/torch/_dynamo/eval_frame.py", line 838, in _fn
return fn(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^
File "/home/yuxuan/miniconda3/envs/test/lib/python3.11/site-packages/torch/_functorch/aot_autograd.py", line 1201, in forward
return compiled_fn(full_args)
^^^^^^^^^^^^^^^^^^^^^^
File "/home/yuxuan/miniconda3/envs/test/lib/python3.11/site-packages/torch/_functorch/_aot_autograd/runtime_wrappers.py", line 328, in runtime_wrapper
all_outs = call_func_at_runtime_with_args(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/yuxuan/miniconda3/envs/test/lib/python3.11/site-packages/torch/_functorch/_aot_autograd/utils.py", line 126, in call_func_at_runtime_with_args
out = normalize_as_list(f(args))
^^^^^^^
File "/home/yuxuan/miniconda3/envs/test/lib/python3.11/site-packages/torch/_functorch/_aot_autograd/runtime_wrappers.py", line 495, in wrapper
return compiled_fn(runtime_args)
^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/yuxuan/miniconda3/envs/test/lib/python3.11/site-packages/torch/_inductor/output_code.py", line 460, in __call__
return self.current_callable(inputs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/yuxuan/miniconda3/envs/test/lib/python3.11/site-packages/torch/_inductor/utils.py", line 2404, in run
return model(new_inputs)
^^^^^^^^^^^^^^^^^
File "/tmp/torchinductor_yuxuan/zr/czrcvuednxue3zzwvyoinfjepwbwso3dkswmd7olh2x6uznjz5wt.py", line 91, in call
assert_size_stride(buf1, (2, 8, 32, 16), (4096, 512, 16, 1))
AssertionError: expected size 8==8, stride 16==512 at dim=1; expected size 32==32, stride 128==16 at dim=2
This error most often comes from a incorrect fake (aka meta) kernel for a custom op.
Use torch.library.opcheck to test your custom op.
See https://pytorch.org/docs/stable/library.html#torch.library.opcheck
```
### Versions
```
Collecting environment information...
PyTorch version: 2.7.0+cu128
Is debug build: False
CUDA used to build PyTorch: 12.8
ROCM used to build PyTorch: N/A
OS: Ubuntu 22.04.5 LTS (x86_64)
GCC version: (Ubuntu 11.4.0-1ubuntu1~22.04) 11.4.0
Clang version: Could not collect
CMake version: version 3.22.1
Libc version: glibc-2.35
Python version: 3.11.11 (main, Dec 11 2024, 16:28:39) [GCC 11.2.0] (64-bit runtime)
Python platform: Linux-5.15.0-130-generic-x86_64-with-glibc2.35
Is CUDA available: True
CUDA runtime version: 12.8.93
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration:
GPU 0: NVIDIA H100 NVL
GPU 1: NVIDIA H100 NVL
GPU 2: NVIDIA H100 NVL
GPU 3: NVIDIA H100 NVL
Nvidia driver version: 560.35.05
cuDNN version: Probably one of the following:
/usr/lib/x86_64-linux-gnu/libcudnn.so.9.6.0
/usr/lib/x86_64-linux-gnu/libcudnn_adv.so.9.6.0
/usr/lib/x86_64-linux-gnu/libcudnn_cnn.so.9.6.0
/usr/lib/x86_64-linux-gnu/libcudnn_engines_precompiled.so.9.6.0
/usr/lib/x86_64-linux-gnu/libcudnn_engines_runtime_compiled.so.9.6.0
/usr/lib/x86_64-linux-gnu/libcudnn_graph.so.9.6.0
/usr/lib/x86_64-linux-gnu/libcudnn_heuristic.so.9.6.0
/usr/lib/x86_64-linux-gnu/libcudnn_ops.so.9.6.0
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 52 bits physical, 57 bits virtual
Byte Order: Little Endian
CPU(s): 128
On-line CPU(s) list: 0-127
Vendor ID: AuthenticAMD
Model name: AMD EPYC 9354 32-Core Processor
CPU family: 25
Model: 17
Thread(s) per core: 2
Core(s) per socket: 32
Socket(s): 2
Stepping: 1
Frequency boost: enabled
CPU max MHz: 3799.0720
CPU min MHz: 1500.0000
BogoMIPS: 6499.98
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ht syscall nx mmxext fxsr_opt pdpe1gb rdtscp lm constant_tsc rep_good nopl nonstop_tsc cpuid extd_apicid aperfmperf rapl pni pclmulqdq monitor ssse3 fma cx16 pcid sse4_1 sse4_2 x2apic movbe popcnt aes xsave avx f16c rdrand lahf_lm cmp_legacy svm extapic cr8_legacy abm sse4a misalignsse 3dnowprefetch osvw ibs skinit wdt tce topoext perfctr_core perfctr_nb bpext perfctr_llc mwaitx cpb cat_l3 cdp_l3 invpcid_single hw_pstate ssbd mba ibrs ibpb stibp ibrs_enhanced vmmcall fsgsbase bmi1 avx2 smep bmi2 erms invpcid cqm rdt_a avx512f avx512dq rdseed adx smap avx512ifma clflushopt clwb avx512cd sha_ni avx512bw avx512vl xsaveopt xsavec xgetbv1 xsaves cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local avx512_bf16 clzero irperf xsaveerptr rdpru wbnoinvd amd_ppin cppc arat npt lbrv svm_lock nrip_save tsc_scale vmcb_clean flushbyasid decodeassists pausefilter pfthreshold avic v_vmsave_vmload vgif v_spec_ctrl avx512vbmi umip pku ospke avx512_vbmi2 gfni vaes vpclmulqdq avx512_vnni avx512_bitalg avx512_vpopcntdq la57 rdpid overflow_recov succor smca fsrm flush_l1d
Virtualization: AMD-V
L1d cache: 2 MiB (64 instances)
L1i cache: 2 MiB (64 instances)
L2 cache: 64 MiB (64 instances)
L3 cache: 512 MiB (16 instances)
NUMA node(s): 2
NUMA node0 CPU(s): 0-31,64-95
NUMA node1 CPU(s): 32-63,96-127
Vulnerability Gather data sampling: Not affected
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Not affected
Vulnerability Reg file data sampling: Not affected
Vulnerability Retbleed: Not affected
Vulnerability Spec rstack overflow: Mitigation; safe RET
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl and seccomp
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Enhanced / Automatic IBRS; IBPB conditional; STIBP always-on; RSB filling; PBRSB-eIBRS Not affected; BHI Not affected
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Versions of relevant libraries:
[pip3] numpy==2.1.2
[pip3] nvidia-cublas-cu12==12.8.3.14
[pip3] nvidia-cuda-cupti-cu12==12.8.57
[pip3] nvidia-cuda-nvrtc-cu12==12.8.61
[pip3] nvidia-cuda-runtime-cu12==12.8.57
[pip3] nvidia-cudnn-cu12==9.7.1.26
[pip3] nvidia-cufft-cu12==11.3.3.41
[pip3] nvidia-curand-cu12==10.3.9.55
[pip3] nvidia-cusolver-cu12==11.7.2.55
[pip3] nvidia-cusparse-cu12==12.5.7.53
[pip3] nvidia-cusparselt-cu12==0.6.3
[pip3] nvidia-nccl-cu12==2.26.2
[pip3] nvidia-nvjitlink-cu12==12.8.61
[pip3] nvidia-nvtx-cu12==12.8.55
[pip3] torch==2.7.0+cu128
[pip3] torchaudio==2.7.0+cu128
[pip3] torchvision==0.22.0+cu128
[pip3] triton==3.3.0
[conda] numpy 2.1.2 pypi_0 pypi
[conda] nvidia-cublas-cu12 12.8.3.14 pypi_0 pypi
[conda] nvidia-cuda-cupti-cu12 12.8.57 pypi_0 pypi
[conda] nvidia-cuda-nvrtc-cu12 12.8.61 pypi_0 pypi
[conda] nvidia-cuda-runtime-cu12 12.8.57 pypi_0 pypi
[conda] nvidia-cudnn-cu12 9.7.1.26 pypi_0 pypi
[conda] nvidia-cufft-cu12 11.3.3.41 pypi_0 pypi
[conda] nvidia-curand-cu12 10.3.9.55 pypi_0 pypi
[conda] nvidia-cusolver-cu12 11.7.2.55 pypi_0 pypi
[conda] nvidia-cusparse-cu12 12.5.7.53 pypi_0 pypi
[conda] nvidia-cusparselt-cu12 0.6.3 pypi_0 pypi
[conda] nvidia-nccl-cu12 2.26.2 pypi_0 pypi
[conda] nvidia-nvjitlink-cu12 12.8.61 pypi_0 pypi
[conda] nvidia-nvtx-cu12 12.8.55 pypi_0 pypi
[conda] torch 2.7.0+cu128 pypi_0 pypi
[conda] torchaudio 2.7.0+cu128 pypi_0 pypi
[conda] torchvision 0.22.0+cu128 pypi_0 pypi
[conda] triton 3.3.0 pypi_0 pypi
```
cc @ezyang @gchanan @zou3519 @kadeng @msaroufim @chauhang @penguinwu
| true
|
3,037,003,894
|
[FSDP2] fully_shard(mesh=(shard, shard)) for intra and inter node all-gathers
|
weifengpy
|
open
|
[
"oncall: distributed",
"triaged"
] | 3
|
CONTRIBUTOR
|
### 🚀 The feature, motivation and pitch
current status: `fully_shard(mesh=(shard))` does intra/inter node all-gather together by calling `torch.distributed.all_gather_into_tensor` once
what if we all-gather in 2 stages: do inter-node AG first, then intra-node AG
for recommendation workloads, we can have the following AG schedule
<img width="717" alt="Image" src="https://github.com/user-attachments/assets/30a3f1a5-ed7b-47c9-9452-f7294cccf8d9" />
* Intra-node AG takes advantage of RDMA
* Inter-node AG takes advantage of symmetric memory, or a memory pool, to use 1 SM
there are 2 considerations
* whether all-to-all and inter-node AG can be overlapped, since they are both network-heavy
* whether inter-node AG incurs too much memory overhead, as it unshards parameters before they are needed. With NVL72, parameters are already sharded into 1/72 intra-node
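A minimal sketch of what the 2-stage gather could look like with two process groups. The function name and the `inter_pg`/`intra_pg` handles are illustrative assumptions, not the FSDP2 implementation:
```python
import torch
import torch.distributed as dist

def two_stage_all_gather(shard: torch.Tensor, inter_pg, intra_pg) -> torch.Tensor:
    # Stage 1: gather shards across nodes (inter-node group).
    inter_out = torch.empty(
        shard.numel() * inter_pg.size(), dtype=shard.dtype, device=shard.device
    )
    dist.all_gather_into_tensor(inter_out, shard, group=inter_pg)
    # Stage 2: gather the partially-unsharded tensor within the node (intra-node group).
    intra_out = torch.empty(
        inter_out.numel() * intra_pg.size(), dtype=inter_out.dtype, device=inter_out.device
    )
    dist.all_gather_into_tensor(intra_out, inter_out, group=intra_pg)
    # NOTE: a real implementation would also need to reorder the gathered chunks
    # so the flattened-parameter layout matches the single-AG case.
    return intra_out
```
Whether the intermediate `inter_out` buffer is acceptable is exactly the memory-overhead question raised in the second consideration above.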
### Alternatives
_No response_
### Additional context
_No response_
cc @H-Huang @awgu @wanchaol @fegin @fduwjj @wz337 @wconstab @d4l3k
| true
|
3,037,002,135
|
[CUDA][cuDNN] Fix handling of `CPU` side input and target length tensors in `CTCLoss`
|
eqy
|
closed
|
[
"module: cudnn",
"module: cuda",
"open source",
"Merged",
"ciflow/trunk",
"topic: bug fixes",
"topic: not user facing"
] | 3
|
COLLABORATOR
|
https://github.com/pytorch/pytorch/pull/128271 migrated to the cuDNN V8 CTCLoss API, which expects the input and target length tensors to be on `CUDA` rather than `CPU`, but did not add the logic to account for the edge case of them being on `CPU`.
see also #152421
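For reference, a minimal user-side workaround (assuming the call is otherwise eligible for the cuDNN path) is simply to move the length tensors onto the same device as `log_probs`; this PR instead handles that case inside the op:
```python
import torch
import torch.nn.functional as F

log_probs = torch.randn(50, 4, 20, device="cuda").log_softmax(2)
targets = torch.randint(1, 20, (4, 10), dtype=torch.long, device="cuda")
input_lengths = torch.full((4,), 50, dtype=torch.long)   # CPU length tensors:
target_lengths = torch.full((4,), 10, dtype=torch.long)  # the edge case in question

# Workaround: hand the lengths to CTCLoss on the same device as log_probs.
loss = F.ctc_loss(
    log_probs,
    targets,
    input_lengths.to(log_probs.device),
    target_lengths.to(log_probs.device),
)
```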
cc @csarofeen @ptrblck @xwang233 @msaroufim @jerryzh168
| true
|
3,037,001,982
|
Ensure mxfp8 scaled_mm works w/ max-autotune
|
drisspg
|
closed
|
[
"Merged",
"ciflow/trunk",
"topic: not user facing",
"module: inductor",
"ciflow/inductor"
] | 3
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #152665
* __->__ #152744
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
| true
|
3,036,995,932
|
[MPS] Migrate `div` to Metal
|
malfet
|
closed
|
[
"Merged",
"topic: not user facing",
"release notes: mps",
"ciflow/mps",
"keep-going"
] | 4
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #152758
* __->__ #152743
TODOs:
- Verify accuracy of `metal::dot` vs `x.x*x.x + y.y*y.y`
| true
|
3,036,993,246
|
[export][cond] support merging constant ints as unbacked symint
|
ydwu4
|
open
|
[
"ciflow/trunk",
"topic: not user facing",
"module: dynamo",
"ciflow/inductor",
"release notes: export"
] | 2
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #152742
@pianpwk points out that this will be helpful to address several data-dependent issues in huggingface [models](https://github.com/huggingface/diffusers/blob/e23705e5577387872dd55ebf6db81bd59df928f1/src/diffusers/schedulers/scheduling_euler_ancestral_discrete.py#L332) with the following pattern:
```python
idx = 0 if u0 else 1
return x[idx]
```
We could preserve the conditional with a cond.
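As a rough illustration (hypothetical usage; the exact branch signatures are an assumption, not what this PR specifies), the rewrite with `torch.cond` would look something like:
```python
import torch

def f(x, u0):
    # u0 is a data-dependent boolean; cond keeps both branches in the graph
    # instead of forcing a guard on u0.
    idx = torch.cond(u0, lambda: torch.tensor(0), lambda: torch.tensor(1), ())
    return x[idx]
```
The point of this change is that the two constant branch outputs (0 and 1) could be merged into a single unbacked symint rather than requiring matching tensor outputs.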
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @chauhang @amjames
| true
|
3,036,992,230
|
[dynamo] Support `delattr` on result of `torch.compile(module)`
|
StrongerXi
|
closed
|
[
"Merged",
"ciflow/trunk",
"topic: not user facing",
"module: dynamo",
"ciflow/inductor"
] | 3
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #152741
* #152740
This is essentially a follow-up to #122098, where we added support for `getattr` and `setattr` on the result of `torch.compile(module)` but didn't add support for `delattr`.
Fixes #150711.
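A small usage sketch of the attribute forwarding this enables (assumed behavior, mirroring the existing `getattr`/`setattr` handling on the compiled-module wrapper):
```python
import torch

class M(torch.nn.Module):
    def forward(self, x):
        return x + 1

mod = M()
mod.some_flag = True

opt_mod = torch.compile(mod)
_ = opt_mod.some_flag   # getattr already forwards to the wrapped module
del opt_mod.some_flag   # with this change, delattr forwards too
assert not hasattr(mod, "some_flag")
```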
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @chauhang @amjames
| true
|
3,036,992,201
|
[dynamo] Avoid running `torch.nn.Module.__call__` twice under `torch.compile(mod)`
|
StrongerXi
|
closed
|
[
"Merged",
"topic: not user facing",
"module: dynamo",
"ciflow/inductor"
] | 2
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #152741
* __->__ #152740
When we do `torch.compile(mod)`, we eventually end up returning a new
module instance, whose `forward` method is the result of
`torch.compile(mod.__call__)`, meaning it already captures all the extra
logic (e.g., hook firing) from the default `torch.nn.Module.__call__`.
As a result we can't reuse the inherited default `__call__` as is,
because we'd end up running the logic twice.
This patch makes the returned `OptimizedModule` override the default
`__call__`, and directly calls into its compiled `forward` method.
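Schematically (a simplified sketch, not the literal implementation), the returned wrapper now behaves like:
```python
import torch

class OptimizedModule(torch.nn.Module):
    def __init__(self, orig_mod, compiled_call):
        super().__init__()
        self._orig_mod = orig_mod
        # compiled_call is torch.compile(orig_mod.__call__), so hook firing etc.
        # is already captured inside it.
        self.forward = compiled_call

    def __call__(self, *args, **kwargs):
        # Bypass nn.Module.__call__ so the hook logic does not run a second time.
        return self.forward(*args, **kwargs)
```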
Fixes #149502
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @chauhang @amjames
| true
|