| id (int64) | title (string) | user (string) | state (string) | labels (list) | comments (int64) | author_association (string) | body (string) | is_title (bool) |
|---|---|---|---|---|---|---|---|---|
2,846,380,659
|
Support QNX SDP 8.0 in Pytorch Mobile
|
eleir9268
|
open
|
[
"module: cpu",
"triaged",
"open source",
"oncall: mobile",
"Stale",
"release notes: quantization"
] | 3
|
NONE
|
See https://github.com/qnx-ports/build-files/tree/pytorch_qnx_main for build instructions.
Changes to third_party subprojects are administered by patches in the build repo while we work on getting them upstreamed. Once they are upstreamed, I'll submit another PR to update the refs so that this can go in without extra complications.
Currently, XNNPACK requires a one-line change to get it to build (cmake/Dependencies.cmake does not list it as supported on QNX, which is true at the moment, but we still need to allow it for the build to pass).
Test results:
[pyt_results.txt](https://github.com/user-attachments/files/18757964/pyt_results.txt)
One abort in TypeMetaTest.CtorDtorAndCopy
cc @jgong5 @mingfeima @XiaobingSuper @sanchitintel @ashokei @jingxu10 @jerryzh168
| true
|
2,846,369,794
|
cpp_wrapper: Precompile device-specific header files
|
benjaminglass1
|
closed
|
[
"open source",
"Merged",
"ciflow/trunk",
"topic: not user facing",
"module: inductor",
"ciflow/inductor"
] | 9
|
COLLABORATOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #148773
* #144293
* __->__ #146928
This saves us about a second per compilation, which is _massive_ for the OpInfo tests. Total OpInfo test runtime is down about 2x from this change alone.
Relands #144002, with changes needed by fbcode internals.
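For context, a minimal sketch of the precompiled-header idea this relies on (not inductor's actual build code; the header and file names here are made up):
```python
import subprocess

# Compile the heavy device-specific header once; g++ emits device_header.h.gch,
# which later compilations pick up automatically whenever device_header.h is included.
subprocess.run(["g++", "-x", "c++-header", "device_header.h", "-o", "device_header.h.gch"], check=True)

# Subsequent wrapper builds reuse the precompiled header instead of reparsing it,
# which is where the per-compilation second mentioned above goes.
subprocess.run(["g++", "-include", "device_header.h", "-fPIC", "-shared", "wrapper.cpp", "-o", "wrapper.so"], check=True)
```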
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
| true
|
2,846,360,753
|
Support static method of torchbind attributes in torch.compile with inductor backend
|
yushangdi
|
closed
|
[
"fb-exported",
"Merged",
"ciflow/trunk",
"module: inductor",
"module: dynamo",
"ciflow/inductor",
"release notes: inductor"
] | 12
|
CONTRIBUTOR
|
As title.
Many changes adapted from https://github.com/pytorch/pytorch/pull/129537.
Also, this diff only covers *static* methods of torchbind *attributes*. Some cases that are not supported/tested:
- dynamic torchbind objects
- torchbind objects as an input to the module.
Note that in JIT Inductor, the attributes are lifted as inputs. So even if we just have torchbind objects as attributes, they will show up as inputs in the graph.
Example generated python code in torch.compile with inductor backend for the test case in `inductor/test_torchbind.py` (P1730554370):
```python
async_compile.wait(globals())
del async_compile
def call(args):
    arg1_1, arg2_1, arg3_1 = args
    args.clear()
    assert_size_stride(arg1_1, (2, 3), (3, 1))
    assert_size_stride(arg2_1, (2, 3), (3, 1))
    buf2 = empty_strided_cpu((2, 3), (3, 1), torch.float32)
    cpp_fused_add_0(arg1_1, arg2_1, buf2)
    del arg1_1
    del arg2_1
    # Topologically Sorted Source Nodes: [x, takes_foo_tuple_return], Original ATen: [aten.add]
    buf3 = torch.ops._TorchScriptTesting.takes_foo_tuple_return.default(arg3_1, buf2)
    buf4 = buf3[0]
    assert_size_stride(buf4, (2, 3), (3, 1))
    buf5 = buf3[1]
    assert_size_stride(buf5, (2, 3), (3, 1))
    buf6 = buf4; del buf4  # reuse
    cpp_fused_add_1(buf6, buf5)
    del buf5
    # Topologically Sorted Source Nodes: [y, b], Original ATen: [aten.add]
    buf7 = torch.ops._TorchScriptTesting.takes_foo.default(arg3_1, buf6)
    del buf3
    del buf6
    buf8 = buf7
    assert_size_stride(buf8, (2, 3), (3, 1))
    # Topologically Sorted Source Nodes: [c], Original ATen: []
    buf9 = torch.ops.higher_order.call_torchbind(arg3_1, 'add_tensor', buf2)
    del arg3_1
    del buf7
    buf10 = buf9
    assert_size_stride(buf10, (2, 3), (3, 1))
    del buf9
    buf11 = buf2; del buf2  # reuse
    cpp_fused_add_2(buf11, buf8, buf10)
    return (buf11, )


def benchmark_compiled_module(times=10, repeat=10):
    from torch._dynamo.testing import rand_strided
    from torch._inductor.utils import print_performance
    arg1_1 = rand_strided((2, 3), (3, 1), device='cpu', dtype=torch.float32)
    arg2_1 = rand_strided((2, 3), (3, 1), device='cpu', dtype=torch.float32)
    import pickle
    global arg3_1
    arg3_1 = pickle.loads(b'\x80\x04\x95[\x00\x00\x00\x00\x00\x00\x00\x8c\x05torch\x94\x8c\x0cScriptObject\x94\x93\x94)\x81\x94]\x94(K\nK\x14e\x8c0__torch__.torch.classes._TorchScriptTesting._Foo\x94\x86\x94b.')
    fn = lambda: call([arg1_1, arg2_1, arg3_1])
    return print_performance(fn, times=times, repeat=repeat)


if __name__ == "__main__":
    from torch._inductor.wrapper_benchmark import compiled_module_main
    compiled_module_main('None', benchmark_compiled_module)
```
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
| true
|
2,846,354,460
|
[ONNX] Update CI transformers cache
|
titaiwangms
|
closed
|
[
"open source",
"Merged",
"ciflow/trunk",
"topic: not user facing"
] | 6
|
COLLABORATOR
|
The cached models are outdated because the related tests are all deleted.
| true
|
2,846,354,168
|
Clear CompiledTritonKernel cache after each inductor compile
|
jamesjwu
|
closed
|
[
"fb-exported",
"Merged",
"ciflow/trunk",
"topic: not user facing",
"module: inductor",
"ciflow/inductor"
] | 14
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #146925
* #146417
Fix a bug introduced by D69123174: because triton kernels are now returned directly by the worker, each future created by the triton kernel should only be used once per compile. Otherwise, a long running process that does something like the following:
```
compiled_1 = torch.compile(fn1, mode="max-autotune", fullgraph=True)
# run compiled_1
out_compiled = compiled_1
compiled_2 = torch.compile(fn2, mode="max-autotune", fullgraph=True)
```
where fn1 and fn2 are very similar (i.e., they would generate the same triton kernel source code), would result in us using the launcher from the first autotuning run, setting the launcher to None after running, and then reusing the same future/kernel without regenerating the launcher.
Found this bug testing internal inference models.
This does not remove the caching support for @eellison's caching for prologue benchmarking, because that happens under the same compile: https://github.com/pytorch/pytorch/pull/143408
Differential Revision: [D69476856](https://our.internmc.facebook.com/intern/diff/D69476856/)
**NOTE FOR REVIEWERS**: This PR has internal Meta-specific changes or comments, please review them on [Phabricator](https://our.internmc.facebook.com/intern/diff/D69476856/)!
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
| true
|
2,846,297,267
|
Support pin_memory() during CUDA stream capture.
|
galv
|
open
|
[
"open source",
"Stale",
"release notes: cuda"
] | 4
|
COLLABORATOR
|
This code previously did not work:
```python
import torch
def test():
    graph = torch.cuda.CUDAGraph()
    with torch.cuda.graph(graph, capture_error_mode="global"):
        data = torch.randn(8)
        data_gpu = torch.randn(8, device="cuda")
        data = data.pin_memory()
        data_gpu.to(data, non_blocking=True)

    graph2 = torch.cuda.CUDAGraph()
    with torch.cuda.graph(graph2, capture_error_mode="global"):
        data2 = torch.randn(8)
        data2_gpu = torch.randn(8, device="cuda")
        data2 = data2.pin_memory()
        data2_gpu.to(data2, non_blocking=True)

if __name__ == "__main__":
    test()
```
We use events to signal when a particular usage of a pinned host memory block has completed. Every time we call pin_memory(), cudaEventQuery() gets called to see if we can reuse existing blocks rather than allocating new ones. cudaEventQuery() is not allowed during stream capture unless we set the thread to relaxed capture mode. (This is safe in this case so long as we make sure that the pinned buffer stays live until the cuda graph is destroyed.)
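As a user-side illustration of the current constraint (a sketch only, assuming the pinned buffer outlives the graph), pinning the host memory before capture avoids the cudaEventQuery inside the capture region:
```python
import torch

data = torch.randn(8).pin_memory()          # pin outside of capture
data_gpu = torch.randn(8, device="cuda")

graph = torch.cuda.CUDAGraph()
with torch.cuda.graph(graph, capture_error_mode="global"):
    # only the async H2D copy from already-pinned memory is captured
    data_gpu.copy_(data, non_blocking=True)
```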
I haven't fully thought this through. I need to make sure that a pinned memory tensor does in fact stay live until its corresponding cuda graph is destroyed. (I haven't done this yet!)
Draft.
Discovered in https://github.com/pytorch/pytorch/pull/146145#issuecomment-2650061333
Not a high priority, but I wanted to start to figure out what proper support might look like.
| true
|
2,846,232,154
|
[ONNX][reland2] Create deprecation warning on dynamo_export
|
justinchuby
|
closed
|
[
"open source",
"Merged",
"ciflow/trunk",
"release notes: onnx",
"ci-no-td"
] | 5
|
COLLABORATOR
|
Reland two PRs
- https://github.com/pytorch/pytorch/pull/146425
- https://github.com/pytorch/pytorch/pull/146639
Fixed by removing the deprecation warning on a base class `ExportOptions`.
| true
|
2,846,226,673
|
Fix `ReferenceError: weakly-referenced object no longer exists` in cycle detector
|
zdevito
|
closed
|
[
"fb-exported",
"Merged",
"ciflow/trunk",
"topic: not user facing"
] | 13
|
CONTRIBUTOR
|
Summary: weakref.proxy objects will throw errors when they are dead. We just do not bother visualizing them. They are weak, so they aren't relevant to cycles anyway.
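A minimal sketch of the failure mode being guarded against (illustrative, not the cycle detector's actual code):
```python
import weakref

class Node:
    pass

obj = Node()
proxy = weakref.proxy(obj)
del obj  # the referent is gone, so the proxy is now dead

try:
    repr(proxy)  # touching a dead proxy raises ReferenceError
except ReferenceError:
    pass  # the fix: skip dead proxies instead of trying to visualize them
```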
Differential Revision: D69270429
| true
|
2,846,191,034
|
[Dynamo][pytree] handle `isinstance(...)` check for polyfilled class
|
XuehaiPan
|
closed
|
[
"open source",
"Stale",
"topic: not user facing",
"module: pytree",
"module: dynamo",
"ciflow/inductor"
] | 2
|
COLLABORATOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #146921
* #146984
Fixes https://github.com/pytorch/pytorch/pull/137398/files#r1951280557
Related:
- #146678
- https://github.com/pytorch/pytorch/pull/146678#discussion_r1948035430
cc @zou3519 @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @chauhang @amjames
| true
|
2,846,155,169
|
Revert commit that removed windows testing in VS2019-> update
|
Camyll
|
closed
|
[
"Merged",
"ciflow/trunk",
"topic: not user facing"
] | 15
|
CONTRIBUTOR
|
This reverts commit b57b38b52ede2af27d4eb1bf6ba63868a3ee7553.
That commit removed Windows testing for the VS build; the testing needs to be added back in with the updated VS2022 build.
Fixes #ISSUE_NUMBER
| true
|
2,846,152,202
|
Make export._trace._WrapperModule work in strict mode
|
yushangdi
|
closed
|
[
"fb-exported",
"Merged",
"Reverted",
"ciflow/trunk",
"topic: not user facing",
"module: dynamo",
"ciflow/inductor",
"ci-no-td"
] | 15
|
CONTRIBUTOR
|
Summary:
as title
`export._trace._WrapperModule` is used to wrap functions into a Module so we can export the function.
We add `export._wrapper_utils` to `dynamo`'s `MOD_INLINELIST` so dynamo traces into `_WrapperModule`
Fixes https://github.com/pytorch/pytorch/issues/146867
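For illustration, a minimal sketch of the wrapping pattern being fixed here (assuming a recent torch.export API; this stands in for, and is not, the internal class itself):
```python
import torch
from torch.export import export

class WrapperModule(torch.nn.Module):  # stand-in for export._trace._WrapperModule
    def __init__(self, fn):
        super().__init__()
        self.fn = fn

    def forward(self, *args, **kwargs):
        return self.fn(*args, **kwargs)

def f(x, y):
    return x + y

# strict=True routes tracing through dynamo, which is where inlining into the wrapper matters.
ep = export(WrapperModule(f), (torch.randn(2), torch.randn(2)), strict=True)
```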
Test Plan:
```
buck run fbcode//mode/dev-nosan //caffe2/test:test_export -- -r wrapper_module
```
Differential Revision: D69434316
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @chauhang @amjames
| true
|
2,846,151,298
|
don't use Config for compile job id since it only supports bools and not strings
|
bobrenjc93
|
closed
|
[
"topic: not user facing"
] | 3
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #146918
https://github.com/pytorch/pytorch/pull/143152 broke PGO testing locally because Config ignores all config values other than "0" or "1", since it assumes these are flags/bools: https://www.internalfb.com/code/fbsource/[a43a941905ac138a02b6d32a60c28853e2a71eec]/fbcode/caffe2/torch/utils/_config_module.py?lines=156%2C315%2C320
Maybe Config should support strings? Anyway, for now let's land this so at least we can test PGO locally.
Tested via
```
TORCH_LOGS="torch._dynamo.pgo" TORCH_COMPILE_JOB_ID=123 TORCH_DYNAMO_AUTOMATIC_DYNAMIC_LOCAL_PGO=1 tlp python r9.py
```
| true
|
2,846,144,528
|
[inductor] triton support port-#5512, update cpp wrapper for gpu
|
anmyachev
|
closed
|
[
"triaged",
"module: mkldnn",
"open source",
"Merged",
"ciflow/trunk",
"oncall: pt2",
"module: inductor",
"release notes: inductor",
"ciflow/xpu"
] | 27
|
COLLABORATOR
|
In short, this pull request enhances `constexprs` expression filtering.
Note: I tested the changes on xpu backend.
Part of https://github.com/pytorch/pytorch/issues/144103
cc @gujinghui @PenghuiCheng @XiaobingSuper @jianyuh @jgong5 @mingfeima @sanchitintel @ashokei @jingxu10 @min-jean-cho @yanbing-j @Guobing-Chen @Xia-Weiwen @snadampal @chauhang @penguinwu @voznesenskym @EikanWang @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @amjames @aakhundov @davidberard98
| true
|
2,846,100,538
|
[rocm6.4_internal_testing] [ROCm] [TunableOp] Future proof TunableOp unit test.
|
naromero77amd
|
closed
|
[
"oncall: distributed",
"module: rocm",
"release notes: releng",
"module: inductor",
"module: dynamo"
] | 2
|
COLLABORATOR
|
Cherry pick of upstream PR:
https://github.com/pytorch/pytorch/pull/146548
CI passed upstream.
cc @H-Huang @awgu @kwen2501 @wanchaol @fegin @fduwjj @wz337 @wconstab @d4l3k @c-p-i-o @jeffdaily @sunway513 @jithunnair-amd @pruthvistony @ROCmSupport @dllehr-amd @jataylo @hongxiayang @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
| true
|
2,846,067,050
|
Unit test failure on Aarch64 - TestSelectAlgorithmCPU.test_linear_with_in_out_buffer
|
robert-hardwick
|
open
|
[
"module: tests",
"module: arm",
"oncall: pt2",
"oncall: cpu inductor"
] | 0
|
COLLABORATOR
|
### 🐛 Describe the bug
We want to enable inductor/test_cpu_select_algorithm in CI for Aarch64; however, we are seeing this test failure. Marking it as expectedFailure for the time being so that we can get coverage of the other tests in CI.
```
AssertionError: Scalars are not equal!
Expected 2 but got 1.
Absolute difference: 1
Relative difference: 0.5
To execute this test, run the following from the base repo dir:
python test/inductor/test_cpu_select_algorithm.py TestSelectAlgorithmCPU.test_linear_with_in_out_buffer_batch_size_8_in_features_3_in_features2_192_image_size_224_out_features_64_bias_True_cpu_float32
This message can be suppressed by setting PYTORCH_PRINT_REPRO_ON_FAILURE=0
```
### Versions
Collecting environment information...
PyTorch version: 2.7.0a0+giteffc545
Is debug build: False
CUDA used to build PyTorch: None
ROCM used to build PyTorch: N/A
OS: Ubuntu 22.04.5 LTS (aarch64)
GCC version: (Ubuntu 11.4.0-1ubuntu1~22.04) 11.4.0
Clang version: Could not collect
CMake version: version 3.31.4
Libc version: glibc-2.35
Python version: 3.10.16 | packaged by conda-forge | (main, Dec 5 2024, 14:08:42) [GCC 13.3.0] (64-bit runtime)
Python platform: Linux-6.8.0-1021-aws-aarch64-with-glibc2.35
Is CUDA available: False
CUDA runtime version: No CUDA
CUDA_MODULE_LOADING set to: N/A
GPU models and configuration: No CUDA
Nvidia driver version: No CUDA
cuDNN version: No CUDA
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: aarch64
CPU op-mode(s): 32-bit, 64-bit
Byte Order: Little Endian
CPU(s): 48
On-line CPU(s) list: 0-47
Vendor ID: ARM
Model: 1
Thread(s) per core: 1
Core(s) per cluster: 48
Socket(s): -
Cluster(s): 1
Stepping: r1p1
BogoMIPS: 2100.00
Flags: fp asimd evtstrm aes pmull sha1 sha2 crc32 atomics fphp asimdhp cpuid asimdrdm jscvt fcma lrcpc dcpop sha3 sm3 sm4 asimddp sha512 sve asimdfhm dit uscat ilrcpc flagm ssbs paca pacg dcpodp svei8mm svebf16 i8mm bf16 dgh rng
L1d cache: 3 MiB (48 instances)
L1i cache: 3 MiB (48 instances)
L2 cache: 48 MiB (48 instances)
L3 cache: 32 MiB (1 instance)
NUMA node(s): 1
NUMA node0 CPU(s): 0-47
Vulnerability Gather data sampling: Not affected
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Not affected
Vulnerability Reg file data sampling: Not affected
Vulnerability Retbleed: Not affected
Vulnerability Spec rstack overflow: Not affected
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl
Vulnerability Spectre v1: Mitigation; __user pointer sanitization
Vulnerability Spectre v2: Mitigation; CSV2, BHB
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Versions of relevant libraries:
[pip3] mypy==1.13.0
[pip3] mypy-extensions==1.0.0
[pip3] numpy==1.22.4
[pip3] onnx==1.17.0
[pip3] onnxscript==0.1.0.dev20240817
[pip3] optree==0.13.0
[pip3] torch==2.7.0a0+giteffc545
[conda] No relevant package
cc @mruberry @ZainRizvi @malfet @snadampal @milpuz01 @chauhang @penguinwu
| true
|
2,846,058,678
|
TestSelectAlgorithmCPU.test_int8_woq_mm fails on Aarch64
|
robert-hardwick
|
open
|
[
"oncall: jit",
"module: tests",
"module: arm",
"oncall: cpu inductor"
] | 0
|
COLLABORATOR
|
### 🐛 Describe the bug
Test `test_int8_woq_mm` in `TestSelectAlgorithmCPU` fails with multiple parameterizations on Aarch64 platform.
```
AssertionError: Scalars are not equal!
Expected 1 but got 0.
Absolute difference: 1
Relative difference: 1.0
To execute this test, run the following from the base repo dir:
python test/inductor/test_cpu_select_algorithm.py TestSelectAlgorithmCPU.test_int8_woq_mm_batch_size_32_in_features_144_out_features_64_cpu_bfloat16
This message can be suppressed by setting PYTORCH_PRINT_REPRO_ON_FAILURE=0
```
Note: I will mark this test as expectedFailure on Aarch64 in order to enable the rest of the test suite on Aarch64.
### Versions
Collecting environment information...
PyTorch version: 2.7.0a0+giteffc545
Is debug build: False
CUDA used to build PyTorch: None
ROCM used to build PyTorch: N/A
OS: Ubuntu 22.04.5 LTS (aarch64)
GCC version: (Ubuntu 11.4.0-1ubuntu1~22.04) 11.4.0
Clang version: Could not collect
CMake version: version 3.31.4
Libc version: glibc-2.35
Python version: 3.10.16 | packaged by conda-forge | (main, Dec 5 2024, 14:08:42) [GCC 13.3.0] (64-bit runtime)
Python platform: Linux-6.8.0-1021-aws-aarch64-with-glibc2.35
Is CUDA available: False
CUDA runtime version: No CUDA
CUDA_MODULE_LOADING set to: N/A
GPU models and configuration: No CUDA
Nvidia driver version: No CUDA
cuDNN version: No CUDA
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: aarch64
CPU op-mode(s): 32-bit, 64-bit
Byte Order: Little Endian
CPU(s): 48
On-line CPU(s) list: 0-47
Vendor ID: ARM
Model: 1
Thread(s) per core: 1
Core(s) per cluster: 48
Socket(s): -
Cluster(s): 1
Stepping: r1p1
BogoMIPS: 2100.00
Flags: fp asimd evtstrm aes pmull sha1 sha2 crc32 atomics fphp asimdhp cpuid asimdrdm jscvt fcma lrcpc dcpop sha3 sm3 sm4 asimddp sha512 sve asimdfhm dit uscat ilrcpc flagm ssbs paca pacg dcpodp svei8mm svebf16 i8mm bf16 dgh rng
L1d cache: 3 MiB (48 instances)
L1i cache: 3 MiB (48 instances)
L2 cache: 48 MiB (48 instances)
L3 cache: 32 MiB (1 instance)
NUMA node(s): 1
NUMA node0 CPU(s): 0-47
Vulnerability Gather data sampling: Not affected
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Not affected
Vulnerability Reg file data sampling: Not affected
Vulnerability Retbleed: Not affected
Vulnerability Spec rstack overflow: Not affected
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl
Vulnerability Spectre v1: Mitigation; __user pointer sanitization
Vulnerability Spectre v2: Mitigation; CSV2, BHB
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Versions of relevant libraries:
[pip3] mypy==1.13.0
[pip3] mypy-extensions==1.0.0
[pip3] numpy==1.22.4
[pip3] onnx==1.17.0
[pip3] onnxscript==0.1.0.dev20240817
[pip3] optree==0.13.0
[pip3] torch==2.7.0a0+giteffc545
[conda] No relevant package
cc @EikanWang @jgong5 @wenzhe-nrv @sanchitintel @mruberry @ZainRizvi @malfet @snadampal @milpuz01 @aditew01 @nikhil-arm @fadara01 @chauhang @penguinwu
| true
|
2,846,024,342
|
Add a `TORCH_LOGS_RANK=0` env var that integrates with `TORCH_LOGS`
|
bdhirsh
|
open
|
[
"module: logging",
"triaged",
"oncall: pt2"
] | 2
|
CONTRIBUTOR
|
`TORCH_LOGS` is a very useful env var for inspecting PT2 state throughout a program, although it can be quite spammy:
(1) debug logs for dynamo + dynamic shapes together can be huge, especially when dealing with large models
(2) this is exacerbated in a distributed setting, where you have N ranks all printing logs
@laithsakka recently added a `TORCH_LOGS_TRACE_ID_FILTER="[0/0],[1/0_1]"` envvar to help with (1), by limiting TORCH_LOG output to a specific subgraph that is compiled
For (2), @wconstab told me that if you are using the `torchrun` CLI, you can pass in `--local_ranks_filter <...>` to filter logs to a certain rank.
Not everybody uses `torchrun` though - we should probably add another envvar like `TORCH_LOGS_RANK`, that integrates with `TORCH_LOGS` and only spews PT2 logs for the specified ranks
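A rough sketch of the proposed behavior (`TORCH_LOGS_RANK` is the hypothetical env var being requested here; it does not exist yet):
```python
import logging
import os

wanted = os.environ.get("TORCH_LOGS_RANK")
if wanted is not None:
    rank = int(os.environ.get("RANK", "0"))  # RANK is set by torchrun/launchers
    if rank not in {int(r) for r in wanted.split(",")}:
        # silence torch.* debug/artifact logs on ranks that were not selected
        logging.getLogger("torch").setLevel(logging.CRITICAL)
```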
cc @chauhang @penguinwu
| true
|
2,845,971,130
|
[export][ez] Update tag_ for union setters.
|
zhxchen17
|
closed
|
[
"fb-exported",
"Merged",
"ciflow/trunk",
"ciflow/inductor",
"release notes: export"
] | 10
|
CONTRIBUTOR
|
Summary: ez fix to set tag for union type fields.
Test Plan: CI
Differential Revision: D69467715
| true
|
2,845,896,519
|
Adjust TestInductorOpInfo to depend on backend, not device
|
matthewhagraphcore
|
closed
|
[
"triaged",
"open source",
"Merged",
"topic: not user facing",
"oncall: pt2",
"module: inductor"
] | 14
|
CONTRIBUTOR
|
As is the case with many inductor tests, this test adapts its test criteria based on device type, whereas it should adjust based on the backend registered for that device.
In this particular case, using the upstream triton CPU backend would lead to failures: reference_in_float would be true, because that is required for the C++/OpenMP backend, which does not have float16 support. However, most triton backends do support float16, and as such should be tested in float16. Similarly, a triton backend whose device is not described as a GPU would be skipped from testing entirely.
A more generic solution would be ideal, but this would require a lot of work across many tests.
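A hedged sketch of the direction (the lookup helper below is hypothetical, not an existing inductor API):
```python
# Decide test behavior from the registered inductor backend rather than the device string.
def reference_in_float_for(device: str) -> bool:
    backend = get_registered_backend_for_device(device)  # hypothetical lookup
    # The C++/OpenMP backend lacks float16 support, so references must be computed
    # in float32; most triton backends handle float16 directly.
    return backend == "cpp"
```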
Fixes #ISSUE_NUMBER
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
| true
|
2,845,896,179
|
INTERNAL ASSERT FAILED at "/pytorch/aten/src/ATen/native/sparse/SparseCsrTensorMath.cpp":817, please report a bug to PyTorch.
|
cybersupersoap
|
open
|
[
"module: sparse",
"triaged"
] | 2
|
NONE
|
### 🐛 Describe the bug
An INTERNAL ASSERT error will be raised when calling `add_` on a sparse compressed (BSC) tensor created on the meta device. The code is as follows:
```python
import torch
def generate_simple_inputs(layout,
device=None,
dtype=None,
index_dtype=None,
pin_memory=None,
members_pin_memory=None,
enable_batch=True,
enable_hybrid=True,
enable_zero_sized=True,
enable_non_contiguous_indices=True,
enable_non_contiguous_values=True,
enable_batch_variable_nse=False,
output_tensor=True,
patterns=None):
if index_dtype is None:
index_dtype = torch.int64
is_compressed_sparse_layout = layout in {torch.sparse_csr, torch.sparse_csc, torch.sparse_bsr, torch.sparse_bsc}
if output_tensor:
for args, kwargs in generate_simple_inputs(layout, device=device, dtype=dtype, index_dtype=index_dtype,
pin_memory=pin_memory,
enable_batch=enable_batch, enable_hybrid=enable_hybrid,
enable_zero_sized=enable_zero_sized,
enable_non_contiguous_indices=enable_non_contiguous_indices,
enable_non_contiguous_values=enable_non_contiguous_values,
enable_batch_variable_nse=enable_batch_variable_nse,
output_tensor=False):
if members_pin_memory:
args = tuple(a.pin_memory() for a in args)
if layout is torch.strided:
assert len(args) == 1
size = kwargs.pop('size', None) # to ensure that a zero-sized tensor has the desired shape
assert size is not None
if pin_memory:
yield args[0].reshape(size).pin_memory()
else:
yield args[0].reshape(size)
elif layout is torch.sparse_coo:
yield torch.sparse_coo_tensor(*args, **kwargs)
elif is_compressed_sparse_layout:
kwargs.update(layout=layout)
yield torch.sparse_compressed_tensor(*args, **kwargs)
else:
assert 0 # unreachable
return
def get_blockpattern(pattern, blocksize):
basesize = pattern.shape
assert basesize[0] % blocksize[0] == 0, (basesize, blocksize)
assert basesize[1] % blocksize[1] == 0, (basesize, blocksize)
blockpattern = pattern.reshape(-1,
blocksize[0],
basesize[1] // blocksize[1],
blocksize[1]).transpose(-3, -2).any(-1).any(-1)
block_ids = torch.arange(1, blockpattern.numel() + 1).reshape(blockpattern.shape)
return (blockpattern != 0) * block_ids
def get_sparse_data(pattern):
basesize = pattern.shape
assert len(basesize) == 2, basesize # pattern is expected to be a matrix
# We cannot use `torch.sparse_xyz_tensor(pattern)` to
# compute the sparse layout indices and values because
# generate_simple_inputs is used to generate the inputs to
# test `torch.sparse_xyz_tensor` factory functions, so
# we'll compute the indices and values independently of
# the factory functions.
indices = torch.where(pattern != 0)
coo_indices = torch.stack(indices)
crow_indices = torch.zeros(basesize[0] + 1, dtype=torch.int64)
crow_indices[1:] = torch.cumsum(coo_indices[0].bincount(minlength=basesize[0]), 0)
col_indices = coo_indices[1]
strided_values = torch.zeros(basesize, dtype=torch.int64)
# the property of `values == range(1, 1+nnz)` is used in
# get_sparse_data_with_block to relate BSR and BSC values,
# so, don't change the following line:
values = torch.arange(1, 1 + len(indices[0]), dtype=torch.int64)
strided_values[indices] = values
indices_T = torch.where(pattern.transpose(0, 1) != 0)
coo_indices_T = torch.stack(indices_T)
ccol_indices = torch.zeros(basesize[1] + 1, dtype=torch.int64)
ccol_indices[1:] = torch.cumsum(coo_indices_T[0].bincount(minlength=basesize[1]), 0)
row_indices = coo_indices_T[1]
csc_values = strided_values.transpose(0, 1)[indices_T]
return {torch.sparse_coo: (coo_indices, values),
torch.sparse_csr: (crow_indices, col_indices, values),
torch.sparse_csc: (ccol_indices, row_indices, csc_values),
torch.strided: (strided_values,)}
def get_sparse_data_with_block(pattern, blocksize):
nonblock_data = get_sparse_data(pattern)
blockpattern = get_blockpattern(pattern, blocksize)
block_data = get_sparse_data(blockpattern)
strided_values = nonblock_data[torch.strided][0]
block_indices = block_data[torch.sparse_coo][0]
bsr_values = torch.stack([strided_values[bi * blocksize[0]:(bi + 1) * blocksize[0],
bj * blocksize[1]:(bj + 1) * blocksize[1]]
for bi, bj in block_indices.transpose(0, 1)])
# here we use the property `values == range(1, 1+nnz)` and
# `values` relation to `csc_values` (see get_sparse_data)
# to get BSC blocks via reordering the BSR blocks:
bsc_values = bsr_values[block_data[torch.sparse_csc][2] - 1]
return {torch.sparse_bsr: (*block_data[torch.sparse_csr][:2], bsr_values),
torch.sparse_bsc: (*block_data[torch.sparse_csc][:2], bsc_values),
**nonblock_data}
def get_batch_sparse_data(pattern, blocksize):
size = pattern.shape
if len(size) <= 2: # non-batch
return get_sparse_data_with_block(pattern, blocksize)
# batch data is created recursively:
batch_data = {} # type: ignore[var-annotated]
for i, item in enumerate(pattern):
for layout, d in get_batch_sparse_data(item, blocksize).items():
target = batch_data.get(layout)
if layout is torch.sparse_coo:
# a "batch COO" means a COO with the leading
# sparse dimensions interpreted as batch
# dimensions
ext_coo_indices1 = torch.cat((torch.full((1, len(d[1])), i, dtype=torch.int64), d[0]))
if target is None:
target = batch_data[layout] = (ext_coo_indices1, d[1])
else:
target[0].set_(torch.cat((target[0], ext_coo_indices1), 1))
target[1].set_(torch.cat((target[1], d[1])))
else:
if target is None:
target = batch_data[layout] = tuple(d[j].unsqueeze(0) for j in range(len(d)))
else:
for j in range(len(d)):
target[j].set_(torch.cat((target[j], d[j].unsqueeze(0))))
return batch_data
def generate_values(base, densesize):
if not densesize:
return base
if not isinstance(base, int) and base.ndim > 0:
return torch.stack([generate_values(b, densesize) for b in base])
if base == 0:
return torch.zeros(densesize, dtype=torch.int64)
r = torch.arange(densesize[0], dtype=torch.int64)
for i, d in enumerate(densesize[1:]):
y = torch.arange(d, dtype=torch.int64) * (10 ** (i + 1))
r = r[..., None] + y[None, ...]
r.add_(base)
return r
if patterns is None:
patterns = [
# a simple 3 x 2 tensor: non-hybrid, hybrid with 1 and 2 dense dimensions
([[1, 2, 0],
[1, 0, 3]], [(2, 1), (1, 3)], [(), (2,), (4, 5)]),
# 2 x 3 batch of 3 x 2 tensors: non-hybrid and hybrid with 2 dense dimensions
([[[[1, 2, 0],
[1, 0, 3]],
[[1, 2, 3],
[1, 0, 0]],
[[1, 0, 0],
[1, 2, 3]]],
[[[0, 2, 0],
[1, 2, 3]],
[[1, 0, 3],
[1, 2, 0]],
[[1, 2, 3],
[0, 2, 0]]]], [(2, 1), (2, 3)], [(), (2,)]),
# tensor with non-trivial blocksize
([[0, 1, 0, 2, 0, 2],
[0, 1, 0, 0, 2, 0],
[3, 3, 3, 0, 0, 0],
[0, 0, 0, 0, 0, 0],
[0, 5, 0, 6, 6, 6],
[5, 0, 5, 6, 6, 6],
[0, 0, 0, 0, 8, 8],
[7, 7, 7, 0, 8, 8]], [(2, 3)], [(), (4, 5)]),
# batch tensor with variable NSE
# Requires https://github.com/pytorch/pytorch/pull/84843 or similar.
([[[1, 2],
[3, 4]],
[[1, 0],
[0, 0]]], [(1, 1)], ([()] if enable_batch_variable_nse else []))]
def non_contiguous_copy(t, dim=-1, offset=0):
# return a copy of t that is non-contiguous along the
# given dimension and with the given storage offset
if dim < 0:
dim = dim + t.ndim
assert dim >= 0 and dim < t.ndim
step = max(2, offset + 1)
tmp = torch.zeros((*t.shape[:dim], t.shape[dim] * step, *t.shape[dim + 1:]), dtype=t.dtype, device=t.device)
dim_slices = (*((slice(None),) * dim), slice(offset, None, step))
r = tmp[dim_slices].copy_(t)
return r
# the main loop of the method:
for pattern, blocksizes, densesizes in patterns:
if not enable_hybrid:
densesizes = [s for s in densesizes if not s]
if not (densesizes and blocksizes):
continue
pattern = torch.tensor(pattern, dtype=torch.int64)
if not enable_batch and pattern.ndim > 2:
continue
for blocksize in blocksizes:
data = get_batch_sparse_data(pattern, blocksize)[layout]
for densesize in densesizes:
indices = [a.to(device=device, dtype=index_dtype) for a in data[:-1]]
values = generate_values(data[-1], densesize).to(device=device, dtype=dtype)
kwargs = dict(device=device, dtype=dtype, size=pattern.shape + densesize)
if pin_memory is not None:
kwargs.update(pin_memory=pin_memory)
yield (*indices, values), kwargs.copy()
if enable_non_contiguous_indices and pattern.ndim > 2:
# sparse compressed indices can be sliced only along batch dimensions
for (dim, offset) in {(0, 1), (-2, 0)}:
indices_copy = [non_contiguous_copy(a, dim=dim, offset=offset) for a in indices]
yield (*indices_copy, values), kwargs.copy()
if enable_non_contiguous_values:
values_copy = non_contiguous_copy(values, dim=-1, offset=1)
yield (*indices_copy, values_copy), kwargs.copy()
if enable_non_contiguous_values:
values_copy = non_contiguous_copy(values, dim=-1, offset=1)
yield (*indices, values_copy), kwargs.copy()
# zero-sized tensor inputs, non-batch, non-hybrid/hybrid
if enable_zero_sized:
for basesize, blocksizes, densesizes in [
((2, 0), [(1, 2)], [(), (2,), (2, 3)] if enable_hybrid else [()]),
((0, 2), [(1, 2), (2, 1), (3, 2)], [()]),
((0, 0), [(1, 2)], [()]),
]:
for blocksize in blocksizes:
for densesize in densesizes:
if layout == torch.strided:
indices = () # type: ignore[assignment]
values = torch.empty((basesize + densesize), device=device, dtype=dtype)
elif layout == torch.sparse_coo:
indices = (torch.empty(len(basesize), 0, device=device, dtype=index_dtype),) # type: ignore[assignment]
values = torch.empty((0, *densesize), device=device, dtype=dtype)
elif layout == torch.sparse_csr:
crow_indices = torch.tensor([0] * (basesize[0] + 1), device=device, dtype=index_dtype)
col_indices = torch.empty(0, device=device, dtype=index_dtype)
indices = (crow_indices, col_indices) # type: ignore[assignment]
values = torch.empty((0, *densesize), device=device, dtype=dtype)
elif layout == torch.sparse_csc:
ccol_indices = torch.tensor([0] * (basesize[1] + 1), device=device, dtype=index_dtype)
row_indices = torch.empty(0, device=device, dtype=index_dtype)
indices = (ccol_indices, row_indices) # type: ignore[assignment]
values = torch.empty((0, *densesize), device=device, dtype=dtype)
elif layout == torch.sparse_bsr:
crow_indices = torch.tensor([0] * (basesize[0] // blocksize[0] + 1), device=device, dtype=index_dtype)
col_indices = torch.empty(0, device=device, dtype=index_dtype)
indices = (crow_indices, col_indices) # type: ignore[assignment]
values = torch.empty((0, *blocksize, *densesize), device=device, dtype=dtype)
elif layout == torch.sparse_bsc:
ccol_indices = torch.tensor([0] * (basesize[1] // blocksize[1] + 1), device=device, dtype=index_dtype)
row_indices = torch.empty(0, device=device, dtype=index_dtype)
indices = (ccol_indices, row_indices) # type: ignore[assignment]
values = torch.empty((0, *blocksize, *densesize), device=device, dtype=dtype)
else:
assert 0 # unreachable
kwargs = dict(device=device, dtype=dtype, size=basesize + densesize)
if pin_memory is not None:
kwargs.update(pin_memory=pin_memory)
yield (*indices, values), kwargs
index_dtype = torch.int64
layout = torch.sparse_bsc
dtype = torch.float64
device = 'cpu'
for t in generate_simple_inputs(layout=layout, device=device, dtype=dtype, index_dtype=index_dtype):
m = torch.zeros_like(t, device='meta')
tmp_var1 = m.device.type
tmp_var2 = 'meta'
tmp_var3 = m
tmp_var4 = t
tmp_var5 = 0
# Mutated line: setting bias tensor to zero
bias = torch.zeros(0, dtype=dtype, device=device)
m.add_(bias)
```
Error messages:
```
# RuntimeError Traceback (most recent call last)
# <ipython-input-24-604c8a38faf6> in <cell line: 0>()
# 297 # Mutated line: setting bias tensor to zero
# 298 bias = torch.zeros(0, dtype=dtype, device=device)
# --> 299 m.add_(bias)
# RuntimeError: src.layout() == kSparseCsr || src.layout() == kSparseCsc INTERNAL ASSERT FAILED at "/pytorch/aten/src/ATen/native/sparse/SparseCsrTensorMath.cpp":817, please report a bug to PyTorch.
```
The error is reproducible with the nightly-build version `2.7.0.dev20250208+cpu` .
Please find the [gist](https://colab.research.google.com/drive/1jA09O7VE27-4OoGJZP0nnHYIFtDtrrca?usp=sharing) here for reference.
### Versions
GPU 1: NVIDIA GeForce RTX 3090
Nvidia driver version: 525.116.03
cuDNN version: Could not collect
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
NUMA node1 CPU(s): 16-31,48-63
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl and seccomp
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Enhanced IBRS, IBPB conditional, RSB filling
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Versions of relevant libraries:
[pip3] numpy==1.26.4
[pip3] nvidia-cublas-cu11==11.11.3.6
[pip3] nvidia-cublas-cu12==12.4.5.8
[pip3] nvidia-cuda-cupti-cu11==11.8.87
[pip3] nvidia-cuda-cupti-cu12==12.4.127
[pip3] nvidia-cuda-nvrtc-cu11==11.8.89
[pip3] nvidia-cuda-nvrtc-cu12==12.4.127
[pip3] nvidia-cuda-runtime-cu11==11.8.89
[pip3] nvidia-cuda-runtime-cu12==12.4.127
[pip3] nvidia-cudnn-cu11==9.1.0.70
[pip3] nvidia-cudnn-cu12==9.1.0.70
[pip3] nvidia-cufft-cu11==10.9.0.58
[pip3] nvidia-cufft-cu12==11.2.1.3
[pip3] nvidia-curand-cu11==10.3.0.86
[pip3] nvidia-curand-cu12==10.3.5.147
[pip3] nvidia-cusolver-cu11==11.4.1.48
[pip3] nvidia-cusolver-cu12==11.6.1.9
[pip3] nvidia-cusparse-cu11==11.7.5.86
[pip3] nvidia-cusparse-cu12==12.3.1.170
[pip3] nvidia-cusparselt-cu12==0.6.2
[pip3] nvidia-nccl-cu11==2.20.5
[pip3] nvidia-nccl-cu12==2.21.5
[pip3] nvidia-nvjitlink-cu12==12.4.127
[pip3] nvidia-nvtx-cu11==11.8.86
[pip3] nvidia-nvtx-cu12==12.4.127
[pip3] optree==0.12.1
[pip3] pytorch-triton==3.2.0+git4b3bb1f8
[pip3] torch==2.7.0.dev20250208+cu124
[pip3] torchaudio==2.6.0.dev20250208+cu124
[pip3] torchelastic==0.2.2
[pip3] torchvision==0.22.0.dev20250208+cu124
[pip3] triton==3.0.0
[conda] No relevant packages
cc @alexsamardzic @nikitaved @pearu @cpuhrsch @amjames @bhosmer @jcaip
| true
|
2,845,890,961
|
whats the plan on replacing DDP with the replicate API?
|
mayank31398
|
closed
|
[
"oncall: distributed"
] | 12
|
CONTRIBUTOR
|
### 🚀 The feature, motivation and pitch
replicate seems to be a nicer API but it looks like it lacks mixed precision support
Ideally, if it can accept similar arguments to `fully_shard` from FSDP-2, that would be helpful.
cc @H-Huang @awgu @kwen2501 @wanchaol @fegin @fduwjj @wz337 @wconstab @d4l3k @c-p-i-o
| true
|
2,845,869,873
|
[AOTI][Draft] Extend torchgen to generate C shim with version number
|
desertfire
|
open
|
[
"Stale",
"topic: not user facing"
] | 2
|
CONTRIBUTOR
|
Summary: While it is ok to add a new arg with a default value to a fallback op in Python, it will be BC-breaking for the C shim. This PR adds an automatic approach to update C shim files when specifying a version number with a list of new args for the modified op. TO-BE-FILLED: there will be an example PR linked here later.
| true
|
2,845,853,491
|
INTERNAL ASSERT FAILED at "/pytorch/torch/csrc/distributed/c10d/reducer.cpp":2134, please report a bug to PyTorch.
|
cybersupersoap
|
open
|
[
"oncall: distributed"
] | 2
|
NONE
|
### 🐛 Describe the bug
An INTERNAL ASSERT error will be raised when using `torch.distributed._compute_bucket_assignment_by_size` . The code is as follows:
```python
import torch
import torch.distributed as dist
tensors = [torch.empty([50], dtype=torch.float), torch.empty([25], dtype=torch.double), torch.empty([50], dtype=torch.float), torch.empty([25], dtype=torch.double), torch.empty([50], dtype=torch.float), torch.empty([25], dtype=torch.double)]
labels = [0, 1, 2, 3, 4, 5, 6, 7, 8, 9]
result, per_bucket_size_limits = dist._compute_bucket_assignment_by_size(tensors, [200, 400], labels)
```
Error messages:
```
RuntimeError Traceback (most recent call last)
<ipython-input-1-258b56e7c689> in <cell line: 0>()
4
5 labels = [0, 1, 2, 3, 4, 5, 6, 7, 8, 9]
----> 6 result, per_bucket_size_limits = dist._compute_bucket_assignment_by_size(tensors, [200, 400], labels)
RuntimeError: expect_sparse_gradient.empty() || (tensors.size() == expect_sparse_gradient.size()) INTERNAL ASSERT FAILED at "/pytorch/torch/csrc/distributed/c10d/reducer.cpp":2134, please report a bug to PyTorch.
```
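The assert fires because the third argument (`expect_sparse_gradient`, per the message above) must either be empty or have one entry per tensor; here 10 entries are passed for 6 tensors. A sketch of a call that satisfies the check, reusing `tensors` from the repro:
```python
expect_sparse_gradient = [False] * len(tensors)  # one flag per tensor
result, per_bucket_size_limits = dist._compute_bucket_assignment_by_size(
    tensors, [200, 400], expect_sparse_gradient
)
```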
The error is reproducible with the nightly-build version `2.7.0.dev20250208+cpu` .
Please find the [gist](https://colab.research.google.com/drive/1uifFY2OjslwyChzvoE17Sp3Jm_ovWLxc?usp=sharing) here for reference.
### Versions
GPU 1: NVIDIA GeForce RTX 3090
Nvidia driver version: 525.116.03
cuDNN version: Could not collect
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
NUMA node1 CPU(s): 16-31,48-63
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl and seccomp
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Enhanced IBRS, IBPB conditional, RSB filling
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Versions of relevant libraries:
[pip3] numpy==1.26.4
[pip3] nvidia-cublas-cu11==11.11.3.6
[pip3] nvidia-cublas-cu12==12.4.5.8
[pip3] nvidia-cuda-cupti-cu11==11.8.87
[pip3] nvidia-cuda-cupti-cu12==12.4.127
[pip3] nvidia-cuda-nvrtc-cu11==11.8.89
[pip3] nvidia-cuda-nvrtc-cu12==12.4.127
[pip3] nvidia-cuda-runtime-cu11==11.8.89
[pip3] nvidia-cuda-runtime-cu12==12.4.127
[pip3] nvidia-cudnn-cu11==9.1.0.70
[pip3] nvidia-cudnn-cu12==9.1.0.70
[pip3] nvidia-cufft-cu11==10.9.0.58
[pip3] nvidia-cufft-cu12==11.2.1.3
[pip3] nvidia-curand-cu11==10.3.0.86
[pip3] nvidia-curand-cu12==10.3.5.147
[pip3] nvidia-cusolver-cu11==11.4.1.48
[pip3] nvidia-cusolver-cu12==11.6.1.9
[pip3] nvidia-cusparse-cu11==11.7.5.86
[pip3] nvidia-cusparse-cu12==12.3.1.170
[pip3] nvidia-cusparselt-cu12==0.6.2
[pip3] nvidia-nccl-cu11==2.20.5
[pip3] nvidia-nccl-cu12==2.21.5
[pip3] nvidia-nvjitlink-cu12==12.4.127
[pip3] nvidia-nvtx-cu11==11.8.86
[pip3] nvidia-nvtx-cu12==12.4.127
[pip3] optree==0.12.1
[pip3] pytorch-triton==3.2.0+git4b3bb1f8
[pip3] torch==2.7.0.dev20250208+cu124
[pip3] torchaudio==2.6.0.dev20250208+cu124
[pip3] torchelastic==0.2.2
[pip3] torchvision==0.22.0.dev20250208+cu124
[pip3] triton==3.0.0
[conda] No relevant packages
cc @H-Huang @awgu @kwen2501 @wanchaol @fegin @fduwjj @wz337 @wconstab @d4l3k @c-p-i-o
| true
|
2,845,843,564
|
Fix var CUDA_PATH_V128 in cuda128.bat file
|
atalman
|
closed
|
[
"Merged",
"topic: not user facing"
] | 4
|
CONTRIBUTOR
|
Followup after: https://github.com/pytorch/pytorch/pull/146653
This should fix upcoming CUDA 12.8 windows builds.
Issue found during pytorch-canary Windows AMI test.
| true
|
2,845,834,464
|
[BE][OpInfo] Introduce generic `dtypesIf`
|
malfet
|
closed
|
[
"Merged",
"ciflow/trunk",
"topic: not user facing",
"no-runner-experiments"
] | 6
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #146905
Use `__setattr__` and `__getattribute__` to wrap the existing `dtypesIfXYZ` attributes into it, which will allow for subsequent incremental elimination of those.
Also, the type annotation for OpInfo is a sham: it claims that `dtypes` and `dtypesIfXYZ` must be of type `_dispatch_dtypes`, but in reality they are converted to sets in post init.
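A minimal sketch of the wrapping idea (illustrative only, not the actual OpInfo code):
```python
import torch

class OpInfoLike:
    def __init__(self, dtypes, dtypes_if=None):
        self.dtypes = set(dtypes)
        self._dtypes_if = dict(dtypes_if or {})  # e.g. {"CUDA": {...}, "XPU": {...}}

    def __getattribute__(self, name):
        # Route dtypesIf<Device> lookups through a single per-device table,
        # falling back to the generic dtypes set.
        if name.startswith("dtypesIf"):
            table = object.__getattribute__(self, "_dtypes_if")
            default = object.__getattribute__(self, "dtypes")
            return table.get(name[len("dtypesIf"):], default)
        return object.__getattribute__(self, name)

op = OpInfoLike({torch.float32}, {"CUDA": {torch.float32, torch.float16}})
print(op.dtypesIfCUDA)  # {torch.float32, torch.float16}
print(op.dtypesIfXPU)   # falls back to {torch.float32}
```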
Test Plan:
- Check that `op_db[0].dtypesIfCUDA` and others shows the same values as before, by running the following script
```python
from torch.testing._internal.common_methods_invocations import op_db
print({name: getattr(op_db[0], f'dtypesIf{name}') for name in ['CUDA', 'ROCM', 'XPU', 'Hpu']})
```
- CI
| true
|
2,845,828,909
|
[BE]: Make OrderedSet reversible
|
Skylion007
|
closed
|
[
"open source",
"better-engineering",
"Merged",
"ciflow/trunk",
"topic: not user facing"
] | 6
|
COLLABORATOR
|
It's rather trivial to make OrderedSet reversible, so let's do it and unlock that additional functionality for downstream users.
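A minimal sketch of the change, assuming the set is backed by an insertion-ordered dict (plain dicts preserve insertion order since Python 3.7 and are reversible since 3.8):
```python
class OrderedSet:
    def __init__(self, iterable=()):
        self._dict = dict.fromkeys(iterable)

    def __contains__(self, item):
        return item in self._dict

    def __iter__(self):
        return iter(self._dict)

    def __reversed__(self):
        # dict views are reversible, so reversing the backing dict is enough
        return reversed(self._dict)

print(list(reversed(OrderedSet([3, 1, 2]))))  # [2, 1, 3]
```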
| true
|
2,845,810,435
|
[DO NOT MERGE] ROCm sandbox PR
|
amdfaa
|
open
|
[
"module: rocm",
"open source",
"Stale",
"ciflow/trunk",
"topic: not user facing",
"ciflow/rocm"
] | 5
|
CONTRIBUTOR
|
Used for testing upstream CI to help triage network issues being observed on the MI250 ROCm CI runners.
cc @jeffdaily @sunway513 @jithunnair-amd @pruthvistony @ROCmSupport @dllehr-amd @jataylo @hongxiayang @naromero77amd
| true
|
2,845,734,131
|
[BE][Ez]: Remove unnecessary type ignores from orderedset
|
Skylion007
|
closed
|
[
"open source",
"better-engineering",
"Merged",
"ciflow/trunk",
"topic: not user facing"
] | 8
|
COLLABORATOR
|
After #145783, we can remove some type ignores from the ordered set class
| true
|
2,845,684,726
|
Trying to use forward AD with _scaled_dot_product_efficient_attention
|
manukyutai
|
open
|
[
"triaged",
"module: forward ad",
"module: sdpa"
] | 0
|
NONE
|
### 🚀 The feature, motivation and pitch
It's not implemented yet
### Alternatives
_No response_
### Additional context
_No response_
| true
|
2,845,676,860
|
PyTorch Profiler Data Is Corrupted When Using with_stack=True
|
wdziurdz
|
open
|
[
"oncall: profiler"
] | 1
|
CONTRIBUTOR
|
### 🐛 Describe the bug
I have a problem when trying to profile with with_stack=True. When I add this flag, the Torch profiler writes corrupted data. I noticed that this happens when Kineto is available. However, when Kineto is disabled and the profiler in torch.autograd.profiler takes the else path (not saving Kineto traces), everything works fine:
```python
def export_chrome_trace(self, path):
    """
    Exports the collected trace in Chrome JSON format. If Kineto is enabled, only
    the last cycle in the schedule is exported.
    """
    if kineto_available():
        self.kineto_results.save(path)  # type: ignore[union-attr] <-- This probably corrupts the data
    else:
        self._ensure_function_events()
        return self._function_events.export_chrome_trace(path)  # type: ignore[union-attr] <-- This works fine
```
Code to Reproduce the Issue:
```python
import torch

prof = torch.profiler.profile(
    activities=[torch.profiler.ProfilerActivity.CPU],
    schedule=torch.profiler.schedule(wait=0, warmup=0, active=6, repeat=0),
    on_trace_ready=torch.profiler.tensorboard_trace_handler("cpu_traces"),
    profile_memory=True,
    with_stack=True,
    record_shapes=True,
)

input_shapes = [(6, 6), (8, 8), (10, 10), (10, 10), (10, 10), (10, 10)]

def raw_function(t1, t2, t3):
    out = torch.addr(t1, t2, t3)
    return out

compiled_fn = torch.compile(
    raw_function,
    backend="inductor",
    dynamic=None,
)

prof.start()
for s in input_shapes:
    prof.step()
    v = torch.randn(s[1])
    t = torch.randn(s)
    h_result = compiled_fn(t, v, v)
prof.stop()

# Validate profile files
trace_dir = "cpu_traces"
print("\nValidating JSON trace files in:", trace_dir)
import json
import os

for filename in os.listdir(trace_dir):
    if filename.endswith(".json"):
        file_path = os.path.join(trace_dir, filename)
        with open(file_path, "r", encoding="utf-8") as f:
            try:
                data = json.load(f)
                formatted = json.dumps(data)
                print(f"File '{filename}' is valid JSON.")
            except json.JSONDecodeError as e:
                print(f"File '{filename}' is invalid JSON: {e}")
```
### Versions
Collecting environment information...
PyTorch version: 2.6.0a0
Is debug build: False
CUDA used to build PyTorch: None
ROCM used to build PyTorch: N/A
OS: Ubuntu 22.04 LTS (x86_64)
GCC version: (Ubuntu 11.4.0-1ubuntu1~22.04) 11.4.0
Clang version: 14.0.5
CMake version: version 3.31.2
Libc version: glibc-2.35
Python version: 3.10.12 (main, Jan 17 2025, 14:35:34) [GCC 11.4.0] (64-bit runtime)
Python platform: Linux-5.15.0-127-generic-x86_64-with-glibc2.35
Is CUDA available: False
CUDA runtime version: No CUDA
CUDA_MODULE_LOADING set to: N/A
GPU models and configuration: No CUDA
Nvidia driver version: No CUDA
cuDNN version: No CUDA
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 43 bits physical, 48 bits virtual
Byte Order: Little Endian
CPU(s): 12
On-line CPU(s) list: 0-11
Vendor ID: GenuineIntel
Model name: Intel(R) Xeon(R) Gold 6132 CPU @ 2.60GHz
CPU family: 6
Model: 85
Thread(s) per core: 1
Core(s) per socket: 6
Socket(s): 2
Stepping: 0
BogoMIPS: 5187.81
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ss ht syscall nx pdpe1gb rdtscp lm constant_tsc arch_perfmon nopl xtopology tsc_reliable nonstop_tsc cpuid tsc_known_freq pni pclmulqdq vmx ssse3 fma cx16 pcid sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand hypervisor lahf_lm abm 3dnowprefetch invpcid_single pti ssbd ibrs ibpb stibp tpr_shadow vnmi ept vpid ept_ad fsgsbase tsc_adjust bmi1 avx2 smep bmi2 invpcid avx512f avx512dq rdseed adx smap clflushopt clwb avx512cd avx512bw avx512vl xsaveopt xsavec xsaves arat pku ospke md_clear flush_l1d arch_capabilities
Virtualization: VT-x
Hypervisor vendor: VMware
Virtualization type: full
L1d cache: 384 KiB (12 instances)
L1i cache: 384 KiB (12 instances)
L2 cache: 12 MiB (12 instances)
L3 cache: 38.5 MiB (2 instances)
NUMA node(s): 1
NUMA node0 CPU(s): 0-11
Vulnerability Gather data sampling: Unknown: Dependent on hypervisor status
Vulnerability Itlb multihit: KVM: Mitigation: VMX disabled
Vulnerability L1tf: Mitigation; PTE Inversion; VMX flush not necessary, SMT disabled
Vulnerability Mds: Mitigation; Clear CPU buffers; SMT Host state unknown
Vulnerability Meltdown: Mitigation; PTI
Vulnerability Mmio stale data: Mitigation; Clear CPU buffers; SMT Host state unknown
Vulnerability Reg file data sampling: Not affected
Vulnerability Retbleed: Mitigation; IBRS
Vulnerability Spec rstack overflow: Not affected
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl and seccomp
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; IBRS; IBPB conditional; STIBP disabled; RSB filling; PBRSB-eIBRS Not affected; BHI SW loop, KVM SW loop
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Versions of relevant libraries:
[pip3] mypy-extensions==1.0.0
[pip3] numpy==1.26.4
[pip3] nvidia-cublas-cu12==12.4.5.8
[pip3] nvidia-cuda-cupti-cu12==12.4.127
[pip3] nvidia-cuda-nvrtc-cu12==12.4.127
[pip3] nvidia-cuda-runtime-cu12==12.4.127
[pip3] nvidia-cudnn-cu12==9.1.0.70
[pip3] nvidia-cufft-cu12==11.2.1.3
[pip3] nvidia-curand-cu12==10.3.5.147
[pip3] nvidia-cusolver-cu12==11.6.1.9
[pip3] nvidia-cusparse-cu12==12.3.1.170
[pip3] nvidia-nccl-cu12==2.21.5
[pip3] nvidia-nvjitlink-cu12==12.4.127
[pip3] nvidia-nvtx-cu12==12.4.127
[pip3] qtorch==0.3.0
[pip3] torch==2.6.0a0+gitbfe59b9
[pip3] torch_tb_profiler==0.4.0
[pip3] torchvision==0.20.1
[pip3] triton==3.1.0
[conda] Could not collect
cc @robieta @chaekit @guotuofeng @guyang3532 @dzhulgakov @davidberard98 @briancoutinho @sraikund16 @sanrise
| true
|
2,845,667,456
|
Friendly handle mem_get_info's runtime error message
|
guangyey
|
closed
|
[
"open source",
"Merged",
"ciflow/trunk",
"ciflow/xpu",
"release notes: xpu"
] | 8
|
COLLABORATOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #146899
# Motivation
Handle the runtime error message in a friendly way if the device doesn't support querying the available free memory. See https://github.com/intel/torch-xpu-ops/issues/1352
| true
|
2,845,660,429
|
[RFC] Test Cases Enabling for Accelerators
|
EikanWang
|
open
|
[
"feature",
"module: tests",
"triaged"
] | 14
|
COLLABORATOR
|
# Motivation
PyTorch contains a large and continually growing set of test cases to ensure quality and performance across a variety of features. As more device backends (accelerators) are added, contributors frequently face the challenge of enabling those existing test cases for new devices:
1. Device Decorator
PyTorch test cases rely on device-specific decorators—such as onlyCUDA, dtypesIfCUDA, skipIfXLA, etc.—to skip or include tests for particular backends. Enabling a new device often requires adding or modifying these decorators in many tests in order to reuse them.
This makes the process intrusive, since broad modifications are required to integrate a device carefully without breaking other backends.
2. Diverse Device Capabilities
Each hardware vendor’s device, or even different generations of a single vendor’s device, may differ in capabilities—e.g., data type support, distributed training support, etc.
A one-size-fits-all approach for test decorators has proven cumbersome when, for instance, a new backend only partially supports a feature (like torch.compile), or only supports certain data types (e.g., no float64 but supports bfloat16).
Thus, we think it is meaningful to create a mechanism that can flexibly determine at runtime which tests to run, skip, or adapt, based on a device’s specific capabilities. This RFC tries to propose ideas to address these issues.
# Proposed Approach
The core idea of this RFC is to introduce device or “accelerator” abstractions that report device capabilities, and then use these capabilities to dynamically configure test inclusion or parameterization. In general, the idea is as follows:
- Define device abstraction to provide a unified interface for querying device capabilities for test case infrastructure
- Provide a registration mechanism allowing devices to dynamically declare their capabilities at runtime.
- Define test case device capability requirements
- Run test case automatically if a device meets the device capability requirement
- Skip test case automatically if a device does NOT meet the device capability requirement
## Device Abstraction
Create a unified interface—either via torch.accelerator or a specialized test infrastructure interface (similar to DeviceInterface in Dynamo)—to query device capabilities. For example:
```python
class DeviceInterface:
    def device_type(self) -> str:
        ...

    def supports_distributed(self) -> bool:
        ...

    def supported_dtypes(self) -> Set[torch.dtype]:
        ...

    # etc.
```
Alternatively, PyTorch’s existing `torch.accelerator` can be extended or leveraged to provide similar functionality.
Below is a non-exhaustive list of capabilities a device backend might declare, many of which match existing features tested in PyTorch:
- ATen operation
- whitelist/blacklist
- Deterministic
- (Other capabilities)
- Supports distributed (True/False)
- Supports torch.compile (True/False)
- Graph mode (CUDA Graph, XPU Graph, etc.)
- Data types (e.g. support for float64, bfloat16, fp8_e5m2, fp8_e4m3, etc.)
- Tensor types (sparse, quantized, etc.)
- Mixed precision
- Quantization features
- OS constraints or platform constraints (if needed)
- (Other specialized capabilities)
The exact set of fields is flexible, but the concept is that each device or backend can declare these capabilities at runtime, and the testing framework uses them to filter or adapt test cases appropriately.
## Registration of Device Interface
Allow backends to register the device interface to declare their capabilities at runtime. This ensures that:
- Out-of-tree backends (using PrivateUse1 or similar dispatch keys) can inform the testing framework of their features.
- Each device can selectively advertise what it supports (distributed, certain data types, mixed precision, etc.).
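A sketch of what such a registration hook could look like, using the DeviceInterface sketched above (the names here are illustrative, not an existing API):
```python
_DEVICE_INTERFACES: dict = {}

def register_device_interface(device_type: str, iface: "DeviceInterface") -> None:
    # Called by in-tree and out-of-tree backends at import/init time.
    _DEVICE_INTERFACES[device_type] = iface

def get_device_interface(device_type: str) -> "DeviceInterface":
    return _DEVICE_INTERFACES[device_type]
```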
## Capability-Based Decorators
### Require capability
In addition to `onlyCPU`, `onlyCUDA` , `dtypesIfCUDA`, etc., the idea proposes decorators that declare capability requirements:
```python
class requires_capabilities:
    def __init__(self, required_caps):
        self.required_caps = required_caps

    def __call__(self, fn):
        @wraps(fn)
        def wrapper(test_self, *args, **kwargs):
            device_iface = get_device_interface()  # Query current device
            if not device_iface.check_device_capability(self.required_caps):
                raise unittest.SkipTest(
                    f"Skipping because device does not meet {self.required_caps}"
                )
            return fn(test_self, *args, **kwargs)
        return wrapper

@requires_capabilities({"distributed": True})
def test_distributed_functionality():
    ...
```
This allows the test framework to decide the test scope (the list of test cases) at runtime.
- Runtime Skipping/Running: The test framework checks the device’s capabilities, automatically skipping tests that are not supported.
- Runtime Parameterization: Instead of statically declaring which `dtypes` are tested, a test can specify all relevant `dtypes`, and the framework intersects those with the device’s supported `dtypes` to generate the final test set.
### Intersect Capability
Some test cases may require multiple device capabilities, while a particular device may provide only some of them. We expect that the test case can still be instantiated. Data types are a good example.
For tests that use @dtypes(...) or a similar approach:
```python
@dtypes(torch.bfloat16, torch.float32, torch.float64)
def test_div_rounding_nonfinite(self, device, type):
    ...
```
- Suppose the device claims to support {torch.bfloat16, torch.float16, torch.float32}.
- The intersection with {torch.bfloat16, torch.float32, torch.float64} is {torch.bfloat16, torch.float32}.
- The framework thus only instantiates:
- test_div_rounding_nonfinite_accelerator_bfloat16
- test_div_rounding_nonfinite_accelerator_float32
# Summary
By introducing a unified device-capability abstraction, dynamic capability registration, and capability-based decorators, we expect the RFC can refine the test suite to handle multiple backends gradually.
- Minimize intrusion in existing tests when bringing up a new device.
- Provide robust coverage by automatically running all relevant tests for each device’s advertised capabilities.
- Reduce maintenance overhead by consolidating device-specific logic into a standardized abstraction.
# Context and Limitations
1. Not a One-Size-Fits-All Replacement
Some tests are inherently device-specific. For example, certain CUDA-specific tests rely on features like `cudaStream_t`, kernels that only exist on CUDA, or tooling limited to NVIDIA hardware. These tests should remain decorated with something like `onlyCUDA`.
2. Gradual Adoption
PyTorch has a large test suite with many specialized decorators. This proposal does not require that all tests migrate at once. Instead, new backends or test modules can adopt this approach incrementally.
3. Complementary to Existing Infrastructure
This RFC does not attempt to eliminate or replace all current decorators, nor does it cover every corner case. Instead, it provides a design that can be expanded to other devices or features over time.
cc @mruberry @ZainRizvi
| true
|
2,845,545,772
|
[subclass] testing WrapperSubclass respect outer_size, outer_stride
|
IvanKobzarev
|
closed
|
[
"Merged",
"ciflow/trunk",
"topic: not user facing"
] | 6
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #146897
| true
|
2,845,523,808
|
Assertion error in Flex Attention backward pass when indexing a parameter
|
otoomey
|
open
|
[
"triaged",
"oncall: pt2",
"module: higher order operators",
"module: pt2-dispatcher",
"module: flex attention"
] | 3
|
NONE
|
### 🐛 Describe the bug
Flex Attention raises an assertion error during the backward pass if the `score_mod` implementation indexes a unit-sized dimension.
For instance:
```python
import torch
import torch.nn as nn  # needed for nn.Parameter below
from torch.nn.attention.flex_attention import flex_attention
MAXLEN = 2
embeddings_table = nn.Parameter(torch.Tensor(2 * MAXLEN, 1)) # <-- note unit sized last dimension
query = torch.randn(1, 1, 1, 32)
key = torch.randn(1, 1, 1, 32)
value = torch.randn(1, 1, 1, 32)
dt = torch.randn(1, 1, 1)
def rpe(score, _1, _2, q_idx, kv_idx):
delta = q_idx - kv_idx
delta = torch.clamp(delta, -MAXLEN, MAXLEN - 1)
delta += MAXLEN
return score + embeddings_table[delta.int()][0] # <-- note the [0] here
attn_output = flex_attention(
query=query,
key=key,
value=value,
score_mod=rpe,
)
attn_output.sum().backward()
```
An assertion error is raised on the last line:
```
File /.../.venv/lib/python3.11/site-packages/torch/_dynamo/_trace_wrapped_higher_order_op.py:84, in _(info, indims, shape, indices, value)
81 expanded_indices.append(idx.expand(value.shape))
82 else:
83 # the index is being part of the vmap batch, it should be the same size as val
---> 84 assert idx.shape == value.shape
85 expanded_indices.append(idx)
87 out = torch.ops.flex_lib.zeros_and_scatter(
88 shape,
89 expanded_indices,
90 value,
91 )
AssertionError:
```
If the unit-sized dimension is removed from `embeddings_table`, the code works as expected.
### Versions
```
Collecting environment information...
PyTorch version: 2.7.0.dev20250211+cu124
Is debug build: False
CUDA used to build PyTorch: 12.4
ROCM used to build PyTorch: N/A
OS: AlmaLinux release 8.10 (Cerulean Leopard) (x86_64)
GCC version: (GCC) 8.5.0 20210514 (Red Hat 8.5.0-22)
Clang version: 17.0.6 (Red Hat 17.0.6-1.module_el8.10.0+3757+fc27b834)
CMake version: version 3.26.5
Libc version: glibc-2.28
Python version: 3.11.3 (main, Apr 19 2023, 23:54:32) [GCC 11.2.0] (64-bit runtime)
Python platform: Linux-4.18.0-553.8.1.el8_10.x86_64-x86_64-with-glibc2.28
Is CUDA available: True
CUDA runtime version: Could not collect
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration: GPU 0: NVIDIA GeForce RTX 4090
Nvidia driver version: 530.30.02
cuDNN version: Could not collect
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Byte Order: Little Endian
CPU(s): 24
On-line CPU(s) list: 0-23
Thread(s) per core: 2
Core(s) per socket: 12
Socket(s): 1
NUMA node(s): 1
Vendor ID: AuthenticAMD
CPU family: 25
Model: 97
Model name: AMD Ryzen 9 7900X 12-Core Processor
Stepping: 2
CPU MHz: 545.000
CPU max MHz: 5733.0000
CPU min MHz: 545.0000
BogoMIPS: 9399.80
Virtualization: AMD-V
L1d cache: 32K
L1i cache: 32K
L2 cache: 1024K
L3 cache: 32768K
NUMA node0 CPU(s): 0-23
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ht syscall nx mmxext fxsr_opt pdpe1gb rdtscp lm constant_tsc rep_good amd_lbr_v2 nopl nonstop_tsc cpuid extd_apicid aperfmperf pni pclmulqdq monitor ssse3 fma cx16 sse4_1 sse4_2 movbe popcnt aes xsave avx f16c rdrand lahf_lm cmp_legacy svm extapic cr8_legacy abm sse4a misalignsse 3dnowprefetch osvw ibs skinit wdt tce topoext perfctr_core perfctr_nb bpext perfctr_llc mwaitx cpb cat_l3 cdp_l3 hw_pstate ssbd mba perfmon_v2 ibrs ibpb stibp ibrs_enhanced vmmcall fsgsbase bmi1 avx2 smep bmi2 erms invpcid cqm rdt_a avx512f avx512dq rdseed adx smap avx512ifma clflushopt clwb avx512cd sha_ni avx512bw avx512vl xsaveopt xsavec xgetbv1 xsaves cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local avx512_bf16 clzero irperf xsaveerptr wbnoinvd cppc arat npt lbrv svm_lock nrip_save tsc_scale vmcb_clean flushbyasid decodeassists pausefilter pfthreshold avic v_vmsave_vmload vgif x2avic v_spec_ctrl avx512vbmi umip pku ospke avx512_vbmi2 gfni vaes vpclmulqdq avx512_vnni avx512_bitalg avx512_vpopcntdq rdpid overflow_recov succor smca fsrm flush_l1d
Versions of relevant libraries:
[pip3] numpy==1.26.3
[pip3] nvidia-cublas-cu11==11.11.3.6
[pip3] nvidia-cublas-cu12==12.4.5.8
[pip3] nvidia-cuda-cupti-cu11==11.8.87
[pip3] nvidia-cuda-cupti-cu12==12.4.127
[pip3] nvidia-cuda-nvrtc-cu11==11.8.89
[pip3] nvidia-cuda-nvrtc-cu12==12.4.127
[pip3] nvidia-cuda-runtime-cu11==11.8.89
[pip3] nvidia-cuda-runtime-cu12==12.4.127
[pip3] nvidia-cudnn-cu11==9.1.0.70
[pip3] nvidia-cudnn-cu12==9.1.0.70
[pip3] nvidia-cufft-cu11==10.9.0.58
[pip3] nvidia-cufft-cu12==11.2.1.3
[pip3] nvidia-curand-cu11==10.3.0.86
[pip3] nvidia-curand-cu12==10.3.5.147
[pip3] nvidia-cusolver-cu11==11.4.1.48
[pip3] nvidia-cusolver-cu12==11.6.1.9
[pip3] nvidia-cusparse-cu11==11.7.5.86
[pip3] nvidia-cusparse-cu12==12.3.1.170
[pip3] nvidia-cusparselt-cu12==0.6.2
[pip3] nvidia-nccl-cu11==2.20.5
[pip3] nvidia-nccl-cu12==2.21.5
[pip3] nvidia-nvjitlink-cu12==12.4.127
[pip3] nvidia-nvtx-cu11==11.8.86
[pip3] nvidia-nvtx-cu12==12.4.127
[pip3] onnx==1.16.2
[pip3] pytorch-triton==3.2.0+git4b3bb1f8
[pip3] torch==2.7.0.dev20250211+cu124
[pip3] torch-tb-profiler==0.4.3
[pip3] torchaudio==2.6.0.dev20250211+cu124
[pip3] torchinfo==1.8.0
[pip3] torchmetrics==1.6.1
[pip3] torchvision==0.22.0.dev20250211+cu124
[pip3] triton==3.1.0
[conda] blas 1.0 mkl
[conda] cuda-cudart 11.8.89 0 nvidia
[conda] cuda-cupti 11.8.87 0 nvidia
[conda] cuda-libraries 11.8.0 0 nvidia
[conda] cuda-nvrtc 11.8.89 0 nvidia
[conda] cuda-nvtx 11.8.86 0 nvidia
[conda] cuda-runtime 11.8.0 0 nvidia
[conda] cudatoolkit 11.8.0 h6a678d5_0
[conda] ffmpeg 4.3 hf484d3e_0 pytorch
[conda] libcublas 11.11.3.6 0 nvidia
[conda] libcufft 10.9.0.58 0 nvidia
[conda] libcurand 10.3.3.53 0 nvidia
[conda] libcusolver 11.4.1.48 0 nvidia
[conda] libcusparse 11.7.5.86 0 nvidia
[conda] mkl 2023.1.0 h6d00ec8_46342
[conda] mkl-service 2.4.0 py311h5eee18b_1
[conda] mkl_fft 1.3.6 py311ha02d727_1
[conda] mkl_random 1.2.2 py311ha02d727_1
[conda] numpy 1.24.3 py311h08b1b3b_1
[conda] numpy-base 1.24.3 py311hf175353_1
[conda] numpydoc 1.5.0 py311h06a4308_0
[conda] pytorch 2.0.1 py3.11_cuda11.8_cudnn8.7.0_0 pytorch
[conda] pytorch-cuda 11.8 h7e8668a_5 pytorch
[conda] pytorch-mutex 1.0 cuda pytorch
[conda] torchaudio 2.0.2 py311_cu118 pytorch
[conda] torchtriton 2.0.0 py311 pytorch
[conda] torchvision 0.15.2 py311_cu118 pytorch
```
cc @chauhang @penguinwu @zou3519 @ydwu4 @bdhirsh @yf225 @Chillee @drisspg @yanboliang @BoyuanFeng
| true
|
2,845,422,501
|
[ARM] Enable some additional Aarch64 unit tests
|
robert-hardwick
|
open
|
[
"triaged",
"open source",
"module: arm",
"Stale",
"topic: not user facing",
"module: inductor",
"ciflow/inductor",
"ciflow/linux-aarch64"
] | 20
|
COLLABORATOR
|
This PR adds some tests to the AArch64 CI, notably `nn/test_convolution` and `inductor/test_fused_attention`.
The reason for this is that there are additional regression test failures in the oneDNN 3.7 upgrade https://github.com/pytorch/pytorch/pull/138889 which have no visibility because these tests are not enabled.
I have marked `test_ConvTranspose2d_output_size_downsample_upsample` as skipped ( https://github.com/pytorch/pytorch/issues/146857 ) due to a segmentation fault, but the priority is to get visibility on the new oneDNN 3.7 test failures.
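For illustration only, the skip typically amounts to a standard `unittest` skip decorator on the affected test (the class name below is illustrative; the real test lives in `nn/test_convolution`):
```python
import unittest

class TestConvolutionAarch64(unittest.TestCase):
    @unittest.skip("Segfaults on AArch64, see gh-146857")
    def test_ConvTranspose2d_output_size_downsample_upsample(self):
        ...
```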
cc @malfet @snadampal @milpuz01 @aditew01 @nikhil-arm @fadara01 @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov @yf225
| true
|
2,845,405,878
|
Enable bitwise operators between scalars and Tensors (and fix #145838)
|
rec
|
open
|
[
"open source",
"Stale",
"release notes: fx",
"topic: not user facing",
"ciflow/inductor"
] | 2
|
COLLABORATOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #147372
* __->__ #146894
| true
|
2,845,374,973
|
Add flatten/unflatten utility `c10::utils::nested` for C++ nested containers
|
XuehaiPan
|
closed
|
[
"module: internals",
"module: cpp",
"needs research",
"open source",
"needs design",
"Stale",
"module: pytree"
] | 3
|
COLLABORATOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #146893
The interface is (design still needed):
```cpp
std::pair<List<IValue>, std::function<IValue(const List<IValue>&)>>
tree_flatten(const IValue& input);
```
Usage:
```cpp
auto [leaves, unflatten_func] = c10::utils::tree_flatten(ivalue);
auto reconstructed_ivalue = unflatten_func(leaves.copy());
```
Extend supported container type:
```cpp
namespace c10::utils::nested {
template<>
std::pair<List<IValue>, std::function<MyContainer(const List<IValue>&)>>
tree_flatten_one_level(const MyContainer& input) {
return {
input.toList(),
[](const List<IValue>& children) -> IValue {
return IValue(MyContainer(children.copy()));
}
};
}
} // namespace c10::utils::nested
```
cc @ezyang @bhosmer @smessmer @ljk53 @bdhirsh @jbschlosser @zou3519
| true
|
2,845,325,231
|
Update documentation to include insert and + methods to add layers in sequential
|
jobs-git
|
closed
|
[
"module: docs",
"module: nn",
"triaged",
"actionable"
] | 2
|
NONE
|
### 📚 The doc issue
The documentation does not yet reflect these methods (`insert` and `+`). Keras documents similar features for its Sequential model: https://keras.io/guides/sequential_model/
See: https://pytorch.org/docs/stable/generated/torch.nn.Sequential.html
Ref: https://github.com/pytorch/pytorch/issues/146829
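For reference, a brief sketch of the currently undocumented usage (assuming a recent PyTorch release where `insert`, `append`, and `+` are available on `nn.Sequential`):
```python
import torch.nn as nn

model = nn.Sequential(nn.Linear(4, 8), nn.ReLU())
model.append(nn.Linear(8, 2))      # add a layer at the end
model.insert(1, nn.LayerNorm(8))   # insert a layer at index 1
combined = model + nn.Sequential(nn.Sigmoid())  # `+` returns a new Sequential
```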
### Suggest a potential alternative/fix
_No response_
cc @svekars @brycebortree @sekyondaMeta @AlannaBurke @albanD @mruberry @jbschlosser @walterddr @mikaylagawarecki
| true
|
2,844,965,957
|
Strange Behavior when using Factory Methods with Inherited Tensor Class
|
anzr299
|
closed
|
[
"triaged",
"tensor subclass"
] | 2
|
NONE
|
When I try to inherit from the `torch.Tensor` class for my own use case, I get some unexpected errors. This occurs when calling `torch.tensor` again on an iterable containing an instance of the inherited class. I have also noticed other similar behaviors. How can I ensure that the inherited class interacts with `torch.tensor` appropriately?
Following is a simple reproducer for the problem:
```
import torch
class ChildTensor(torch.Tensor):
pass
m = torch.tensor(12)
abc = torch.tensor((m,)) # Works
m = ChildTensor(data=12)
abc = torch.tensor((m,)) # Error
```
Continuing this, the following code also produces an error when implementing the `__getitem__()` method in the subclass:
```
import torch
class ChildTensor(torch.Tensor):
def __getitem__(self, idx):
return super().__getitem__(idx)
abc = torch.tensor(12)
abc.__class__ = ChildTensor
torch.tensor((abc,))
```
cc @ezyang @albanD
| true
|
2,844,607,986
|
max(a, b) if a and b else b specialize on both a and b. can we be smarter?
|
laithsakka
|
closed
|
[
"oncall: pt2",
"module: dynamic shapes"
] | 1
|
CONTRIBUTOR
|
Pattern in OmniFm:
```python
import torch

def fun2(v):
    return torch.tensor([1]) * max(v, 100)

@torch.compile(dynamic=True)
def fun(a, b):
    return torch.tensor([1]) + fun2(v=max(a, b) if a and b else b)

fun(1000, 12)
fun(100, 12)
```
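One possible direction, offered only as a sketch: `torch.sym_max` keeps the maximum symbolic instead of specializing on the comparison, though the `if a and b` truthiness check can still introduce guards of its own:
```python
import torch

@torch.compile(dynamic=True)
def fun(a, b):
    # sym_max avoids guarding on which of a/b is larger
    v = torch.sym_max(a, b) if a and b else b
    return torch.tensor([1]) + torch.tensor([1]) * torch.sym_max(v, 100)

fun(1000, 12)
fun(100, 12)
```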
cc @chauhang @penguinwu @ezyang @bobrenjc93
| true
|
2,844,473,870
|
How to customize a torch.Tensor() method to access the underlying data structure of a PyTorch tensor.
|
xiangxinhello
|
open
|
[
"triaged",
"tensor subclass"
] | 2
|
NONE
|
### 🐛 Describe the bug
1. How can I customize a `torch.Tensor()` method and call PyTorch's `THPVariable_pynew` function to obtain the underlying data structure of the original Tensor?

`tensor = torch.Tensor(3, 4).to("new_one")` -> `initModule()` -> `Module.cpp`, which runs in
https://github.com/pytorch/pytorch/blob/32f585d9346e316e554c8d9bf7548af9f62141fc/torch/csrc/autograd/python_variable.cpp#L1891
2. This is my project: https://github.com/xiangxinhello/torch_new_tensor. It is based on modifications of https://github.com/pytorch/pytorch/tree/v2.5.0/test/cpp_extensions/open_registration_extension, but I was unable to get my modifications working.
3. I want to obtain the underlying data structure information of a PyTorch tensor through a custom torch.Tensor method.
### Versions
PyTorch version: 2.5.0a0+gita8d6afb
Is debug build: True
CUDA used to build PyTorch: 12.4
ROCM used to build PyTorch: N/A
OS: Ubuntu 22.04.4 LTS (x86_64)
GCC version: (Ubuntu 11.4.0-1ubuntu1~22.04) 11.4.0
Clang version: 14.0.0-1ubuntu1.1
CMake version: version 3.31.2
Libc version: glibc-2.35
Python version: 3.10.16 (main, Dec 11 2024, 16:24:50) [GCC 11.2.0] (64-bit runtime)
Python platform: Linux-5.15.0-125-generic-x86_64-with-glibc2.35
Is CUDA available: True
CUDA runtime version: 12.4.131
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration: GPU 0: NVIDIA GeForce RTX 3090
Nvidia driver version: 550.120
cuDNN version: Could not collect
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 46 bits physical, 48 bits virtual
Byte Order: Little Endian
CPU(s): 96
On-line CPU(s) list: 0-95
Vendor ID: GenuineIntel
Model name: Intel(R) Xeon(R) Gold 6248R CPU @ 3.00GHz
CPU family: 6
Model: 85
Thread(s) per core: 2
Core(s) per socket: 24
Socket(s): 2
Stepping: 7
CPU max MHz: 4000.0000
CPU min MHz: 1200.0000
BogoMIPS: 6000.00
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc art arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc cpuid aperfmperf pni pclmulqdq dtes64 ds_cpl vmx smx est tm2 ssse3 sdbg fma cx16 xtpr pdcm pcid dca sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm 3dnowprefetch cpuid_fault epb cat_l3 cdp_l3 invpcid_single intel_ppin ssbd mba ibrs ibpb stibp ibrs_enhanced tpr_shadow vnmi flexpriority ept vpid ept_ad fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid cqm mpx rdt_a avx512f avx512dq rdseed adx smap clflushopt clwb intel_pt avx512cd avx512bw avx512vl xsaveopt xsavec xgetbv1 xsaves cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local dtherm ida arat pln pts pku ospke avx512_vnni md_clear flush_l1d arch_capabilities
Virtualization: VT-x
L1d cache: 1.5 MiB (48 instances)
L1i cache: 1.5 MiB (48 instances)
L2 cache: 48 MiB (48 instances)
L3 cache: 71.5 MiB (2 instances)
NUMA node(s): 2
NUMA node0 CPU(s): 0-23,48-71
NUMA node1 CPU(s): 24-47,72-95
Vulnerability Gather data sampling: Mitigation; Microcode
Vulnerability Itlb multihit: KVM: Mitigation: VMX disabled
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Mitigation; Clear CPU buffers; SMT vulnerable
Vulnerability Reg file data sampling: Not affected
Vulnerability Retbleed: Mitigation; Enhanced IBRS
Vulnerability Spec rstack overflow: Not affected
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl and seccomp
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Enhanced / Automatic IBRS; IBPB conditional; RSB filling; PBRSB-eIBRS SW sequence; BHI SW loop, KVM SW loop
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Mitigation; TSX disabled
Versions of relevant libraries:
[pip3] numpy==2.2.1
[pip3] optree==0.13.1
[pip3] torch==2.5.0a0+gita8d6afb
[conda] magma-cuda121 2.6.1 1 pytorch
[conda] mkl-include 2025.0.1 pypi_0 pypi
[conda] mkl-static 2025.0.1 pypi_0 pypi
[conda] numpy 2.2.1 pypi_0 pypi
[conda] optree 0.13.1 pypi_0 pypi
[conda] torch 2.5.0a0+gita8d6afb dev_0 <develop>
cc @ezyang @albanD
| true
|
2,844,384,179
|
[Intel GPU] Fallback embedding_dense_backward on XPU
|
jianyizh
|
closed
|
[
"triaged",
"open source",
"topic: not user facing",
"module: inductor",
"ciflow/xpu"
] | 7
|
CONTRIBUTOR
|
Do not decompose `embedding_dense_backward` for torch.compile. Current XPU devices have hardware limitations on atomic ops, so we fall back to eager, where the op is implemented with a sort. hf_T5 amp bf16 training in torchbench gets a 2x improvement on Max 1550. I also align with CUDA on the gelu decomposition in `_addmm_activation`.
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
| true
|
2,844,353,619
|
[dynamo][source] Remove index_is_slice from GetItemSource
|
anijain2305
|
closed
|
[
"ciflow/trunk",
"topic: not user facing",
"module: dynamo",
"ciflow/inductor"
] | 2
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #146887
* #146819
Slicing on the data structures that go through GetItemSource always
results in temporary objects. So, there is never a case where we need a
source for a sliced data structure. I think it was a bug at one point in
time, and we incorrectly provided a source for sliced data structures.
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @chauhang @amjames
| true
|
2,844,268,438
|
torch.serialization.add_safe_globals should be module qualified
|
ezyang
|
closed
|
[
"module: serialization",
"triaged",
"topic: improvements"
] | 0
|
CONTRIBUTOR
|
### 🐛 Describe the bug
```
(2) Alternatively, to load with `weights_only=True` please check the recommended steps in the following error message.
WeightsUnpickler error: Unsupported global: GLOBAL numpy.dtype was not an allowed global by default. Please use `torch.serialization.add_safe_globals([dtype])` or the `torch.serialization.safe_globals([dtype])` context manager to allowlist this global if you trust this class/function.
```
Here, I would have preferred
```
(2) Alternatively, to load with `weights_only=True` please check the recommended steps in the following error message.
WeightsUnpickler error: Unsupported global: GLOBAL numpy.dtype was not an allowed global by default. Please use `torch.serialization.add_safe_globals([numpy.dtype])` or the `torch.serialization.safe_globals([numpy.dtype])` context manager to allowlist this global if you trust this class/function.
```
which would be much more clear
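A minimal sketch of how the qualified name could be derived from the offending global when building the message (the helper name is illustrative, not the actual serialization internals):
```python
import numpy

def _qualified_name(obj):
    module = getattr(obj, "__module__", None)
    name = getattr(obj, "__qualname__", getattr(obj, "__name__", repr(obj)))
    return f"{module}.{name}" if module else name

print(_qualified_name(numpy.dtype))  # "numpy.dtype"
```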
cc @mruberry @mikaylagawarecki
### Versions
main
| true
|
2,844,265,131
|
Add numpy.core.multiarray.scalar to default safe globals
|
ezyang
|
open
|
[
"module: serialization",
"triaged"
] | 1
|
CONTRIBUTOR
|
### 🐛 Describe the bug
We should audit that this one is safe, but I think it is, and it would be nice if it were in the safe set; specifically, Suno bark checkpoints need it https://github.com/suno-ai/bark
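For reference, the explicit allowlisting workaround available today looks roughly like this (a sketch; assumes a NumPy version that still exposes `numpy.core.multiarray`, and the checkpoint path is illustrative):
```python
import numpy
import torch

torch.serialization.add_safe_globals([numpy.core.multiarray.scalar])
ckpt = torch.load("bark_checkpoint.pt", weights_only=True)
```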
cc @mruberry @mikaylagawarecki
### Versions
main
| true
|
2,844,208,164
|
[DTensor]Operator aten.mT.default/aten.mv.default/aten.dot.default does not have a sharding strategy registered
|
zqwenn
|
open
|
[
"oncall: distributed",
"module: dtensor"
] | 3
|
CONTRIBUTOR
|
### 🐛 Describe the bug
It seems that after this commit was merged, matmul for dtensor no longer supports 1-dimensional tensors or other shapes.
[[DTensor] Support matmul in inference_mode](https://github.com/pytorch/pytorch/commit/8bdcdae73383da706f82877d76ee1756bf329cc2)
Testcase and Error report as below.
Testcase:
``` python
import torch
from torch.distributed._tensor import distribute_tensor, Replicate, Shard, DeviceMesh
from torch.testing._internal.common_utils import run_tests
from torch.testing._internal.distributed._tensor.common_dtensor import DTensorTestBase
from torch.testing._internal.distributed._tensor.common_dtensor import with_comms
class TestRegisterSharding(DTensorTestBase):
@with_comms
def test_ori_matmul(self):
device_mesh = DeviceMesh(self.device_type, list(range(self.world_size)))
dim = 128
x = torch.randn(8,8,dim)
A = torch.randn(dim)
#x = torch.randn(8, dim)
#A = torch.randn(dim, dim)
y = torch.matmul(x, A)
# Prepare DTensors
dx = distribute_tensor(x, device_mesh, [Replicate()])
dA = distribute_tensor(A, device_mesh, [Shard(0)])
# Use `inference_mode` to test DTensor's capability of decomposing
# `matmul` op
with torch.inference_mode():
dy = torch.matmul(dx, dA)
self.assertEqual(y, dy.full_tensor())
if __name__ == "__main__":
run_tests()
```
Error stack
```
Traceback (most recent call last):
File "/root/miniconda3/envs/zh2.6/lib/python3.9/site-packages/torch/testing/_internal/common_distributed.py", line 726, in run_test
getattr(self, test_name)()
File "/root/miniconda3/envs/zh2.6/lib/python3.9/site-packages/torch/testing/_internal/common_distributed.py", line 599, in wrapper
fn()
File "/root/miniconda3/envs/zh2.6/lib/python3.9/site-packages/torch/testing/_internal/common_utils.py", line 3108, in wrapper
method(*args, **kwargs)
File "/root/miniconda3/envs/zh2.6/lib/python3.9/site-packages/torch/testing/_internal/distributed/_tensor/common_dtensor.py", line 398, in wrapper
raise e
File "/root/miniconda3/envs/zh2.6/lib/python3.9/site-packages/torch/testing/_internal/distributed/_tensor/common_dtensor.py", line 395, in wrapper
func(self, *args, **kwargs) # type: ignore[misc]
File "/home/zqw/tmp_test/test_register_sharding_single.py", line 26, in test_ori_matmul
dy = torch.matmul(dx, dA)
File "/root/miniconda3/envs/zh2.6/lib/python3.9/site-packages/torch/_compile.py", line 32, in inner
return disable_fn(*args, **kwargs)
File "/root/miniconda3/envs/zh2.6/lib/python3.9/site-packages/torch/_dynamo/eval_frame.py", line 745, in _fn
return fn(*args, **kwargs)
File "/root/miniconda3/envs/zh2.6/lib/python3.9/site-packages/torch/distributed/tensor/_api.py", line 346, in __torch_dispatch__
return DTensor._op_dispatcher.dispatch(
File "/root/miniconda3/envs/zh2.6/lib/python3.9/site-packages/torch/distributed/tensor/_dispatch.py", line 164, in dispatch
return self._custom_op_handlers[op_call](op_call, args, kwargs) # type: ignore[operator]
File "/root/miniconda3/envs/zh2.6/lib/python3.9/site-packages/torch/distributed/tensor/_dispatch.py", line 53, in decompose_handler
r = op_call.decompose(*args, **kwargs)
File "/root/miniconda3/envs/zh2.6/lib/python3.9/site-packages/torch/_ops.py", line 764, in decompose
return self.py_kernels[dk](*args, **kwargs)
File "/root/miniconda3/envs/zh2.6/lib/python3.9/site-packages/torch/_prims_common/wrappers.py", line 289, in _fn
result = fn(*args, is_out=(out is not None), **kwargs) # type: ignore[arg-type]
File "/root/miniconda3/envs/zh2.6/lib/python3.9/site-packages/torch/_decomp/decompositions.py", line 4444, in matmul
return torch.ops.aten._unsafe_view(t1_folded.mv(t2), output_shape)
File "/root/miniconda3/envs/zh2.6/lib/python3.9/site-packages/torch/_compile.py", line 32, in inner
return disable_fn(*args, **kwargs)
File "/root/miniconda3/envs/zh2.6/lib/python3.9/site-packages/torch/_dynamo/eval_frame.py", line 745, in _fn
return fn(*args, **kwargs)
File "/root/miniconda3/envs/zh2.6/lib/python3.9/site-packages/torch/distributed/tensor/_api.py", line 346, in __torch_dispatch__
return DTensor._op_dispatcher.dispatch(
File "/root/miniconda3/envs/zh2.6/lib/python3.9/site-packages/torch/distributed/tensor/_dispatch.py", line 170, in dispatch
self.sharding_propagator.propagate(op_info)
File "/root/miniconda3/envs/zh2.6/lib/python3.9/site-packages/torch/distributed/tensor/_sharding_prop.py", line 206, in propagate
OutputSharding, self.propagate_op_sharding(op_info.schema)
File "/root/miniconda3/envs/zh2.6/lib/python3.9/site-packages/torch/distributed/tensor/_sharding_prop.py", line 46, in __call__
return self.cache(*args, **kwargs)
File "/root/miniconda3/envs/zh2.6/lib/python3.9/site-packages/torch/distributed/tensor/_sharding_prop.py", line 455, in propagate_op_sharding_non_cached
raise NotImplementedError(
NotImplementedError: Operator aten.mv.default does not have a sharding strategy registered.
```
I also tested different shape cases. Here are the relevant error messages.
```
NotImplementedError: Operator aten.mT.default does not have a sharding strategy registered.
NotImplementedError: Operator aten.mv.default does not have a sharding strategy registered.
NotImplementedError: Operator aten.dot.default does not have a sharding strategy registered.
```
plz cc @H-Huang @awgu @kwen2501 @wanchaol @fegin @fduwjj @wz337 @wconstab @d4l3k @c-p-i-o @tianyu-l @XilunWu @bdhirsh
### Versions
torch==2.6.0
| true
|
2,844,188,910
|
[XPU] The op `aten.nonzero` implemented by XPU has different output layout with fake tensor in torch.compile.
|
etaf
|
closed
|
[
"triaged",
"module: xpu"
] | 3
|
COLLABORATOR
|
### 🐛 Describe the bug
After https://github.com/pytorch/pytorch/pull/145904 add size-asserts for fallback ops in Inductor, we found XPU CI failures:
https://github.com/pytorch/pytorch/actions/runs/13217339535/job/36905991648
```
----------------------------- Captured stdout call -----------------------------
stats [('calls_captured', 10), ('unique_graphs', 1)]
inductor [('extern_calls', 4), ('fxgraph_cache_miss', 1)]
aot_autograd [('total', 1), ('autograd_cache_miss', 1), ('autograd_cache_saved', 1), ('ok', 1)]
________ TestUnbackedSymintsXPU.test_expand_ok_with_runtime_assert_xpu _________
Traceback (most recent call last):
File "/var/lib/jenkins/pytorch/test/inductor/test_unbacked_symints.py", line 50, in test_expand_ok_with_runtime_assert
torch.compile(fn, fullgraph=True)(x)
File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/_dynamo/eval_frame.py", line 570, in _fn
return fn(*args, **kwargs)
File "/var/lib/jenkins/pytorch/test/inductor/test_unbacked_symints.py", line 44, in fn
def fn(x):
File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/_dynamo/eval_frame.py", line 749, in _fn
return fn(*args, **kwargs)
File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/_functorch/aot_autograd.py", line 1199, in forward
return compiled_fn(full_args)
File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/_functorch/_aot_autograd/runtime_wrappers.py", line 325, in runtime_wrapper
all_outs = call_func_at_runtime_with_args(
File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/_functorch/_aot_autograd/utils.py", line 126, in call_func_at_runtime_with_args
out = normalize_as_list(f(args))
File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/_functorch/_aot_autograd/runtime_wrappers.py", line 686, in inner_fn
outs = compiled_fn(args)
File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/_functorch/_aot_autograd/runtime_wrappers.py", line 492, in wrapper
return compiled_fn(runtime_args)
File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/_inductor/output_code.py", line 460, in __call__
return self.current_callable(inputs)
File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/_inductor/utils.py", line 2239, in run
return model(new_inputs)
File "/tmp/tmpvyseyjtw/n7/cn7uqxu5djdoksj7mrnsipd2nvxgk742xn4vmkajzdte43ip3qve.py", line 45, in call
AssertionError: expected size 128==128, stride 2==1 at dim=0; expected size 2==2, stride 1==128 at dim=1
This error most often comes from a incorrect fake (aka meta) kernel for a custom op.
Use torch.library.opcheck to test your custom op.
See https://pytorch.org/docs/stable/library.html#torch.library.opcheck
To execute this test, run the following from the base repo dir:
python test/inductor/test_unbacked_symints.py TestUnbackedSymintsXPU.test_expand_ok_with_runtime_assert_xpu
This message can be suppressed by setting PYTORCH_PRINT_REPRO_ON_FAILURE=0
```
The root cause is that the XPU implementation of the op `aten.nonzero` produces a different output layout than the fake tensor in torch.compile.
The CPU/CUDA implementations do not have this issue; please check whether we should align the designs.
##### To Reproduce:
```
import torch
from torch._dynamo.testing import rand_strided
assert_size_stride = torch._C._dynamo.guards.assert_size_stride
devices = ["cpu", "xpu"]
for device in devices:
print("testing device: ", device)
arg0_1 = rand_strided((32, 4), (4, 1), device=device, dtype=torch.float32)
buf0 = torch.ops.aten.nonzero.default(arg0_1)
assert_size_stride(buf0, (128, 2), (1, 128))
```
Note: CUDA/CPU pass this case, but XPU does not.
### Versions
PyTorch version: 2.7.0a0+git9c78fb92
Is debug build: False
CUDA used to build PyTorch: None
ROCM used to build PyTorch: N/A
OS: Ubuntu 22.04.4 LTS (x86_64)
GCC version: (Ubuntu 11.4.0-1ubuntu1~22.04) 11.4.0
Clang version: Could not collect
CMake version: version 3.31.2
Libc version: glibc-2.35
Python version: 3.10.15 | packaged by conda-forge | (main, Oct 16 2024, 01:24:24) [GCC 13.3.0] (64-bit runtime)
Python platform: Linux-5.15.0-127-generic-x86_64-with-glibc2.35
Is CUDA available: False
CUDA runtime version: No CUDA
CUDA_MODULE_LOADING set to: N/A
GPU models and configuration: No CUDA
Nvidia driver version: No CUDA
cuDNN version: No CUDA
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
cc @gujinghui @EikanWang @fengyuan14 @guangyey
| true
|
2,844,172,411
|
Optimize transformer encoder/decoder init suggestion
|
zeshengzong
|
closed
|
[
"triaged",
"open source",
"Merged",
"ciflow/trunk",
"release notes: nn",
"topic: docs"
] | 14
|
CONTRIBUTOR
|
Fixes #72253
Add a hint message telling users to manually initialize parameters after the module is created.
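For context, the manual initialization the hint points users to might look like this (a sketch, not the exact wording added to the docs):
```python
import torch.nn as nn

encoder_layer = nn.TransformerEncoderLayer(d_model=512, nhead=8)
encoder = nn.TransformerEncoder(encoder_layer, num_layers=6)

# re-initialize parameters after construction, e.g. with Xavier uniform
for p in encoder.parameters():
    if p.dim() > 1:
        nn.init.xavier_uniform_(p)
```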
## Test Result
**Before**


**After**


cc @albanD
| true
|
2,844,121,927
|
[Feature][c10d] Allow flexible `cudaStreamWaitEvent` in PGNCCL
|
awgu
|
closed
|
[
"oncall: distributed"
] | 5
|
COLLABORATOR
|
### 🚀 The feature, motivation and pitch
Today, PGNCCL collectives have the internal NCCL stream wait on the current stream before issuing the NCCL kernel in the NCCL stream. However, waiting on the current stream can be over-synchronization and waiting on an earlier CUDA event could suffice for correctness.
I have a specific use case where it could be useful to pass in a user-specified CUDA event to a PGNCCL collective (in my case, P2P send and recv) and have the collective wait on that instead of recording a new event and waiting on that at collective call time.
The current design mostly assumes that the collective will be called immediately after the relevant kernels needed for that collective; however, when we have multidimensional parallelism, this may not always be the case anymore. There could be several collectives being issued for different PGs, possibly requiring non-collective ops (e.g. device copies) only related to those collectives -- we do not want later collectives to wait on those non-collective ops that have no relevance to them.
One possible API would be to add an `event: Optional[torch.Event]` arg to each c10d collective (though I am not sure how to rationalize this across non-NCCL backends), where if the user passes an event, then the PGNCCL impl should wait on that event instead of doing a stream sync with the current stream.
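A sketch of what that could look like from the caller's side; the `event` keyword argument is hypothetical and does not exist in c10d today:
```python
import torch
import torch.distributed as dist

# assumes a process group has already been initialized elsewhere
tensor = torch.ones(8, device="cuda")
event = torch.cuda.Event()
# ... enqueue only the kernels this send actually depends on ...
event.record()

# hypothetical: PGNCCL's internal stream would wait on `event`
# instead of synchronizing with the entire current stream
work = dist.isend(tensor, dst=1, event=event)
```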
### Alternatives
We can manually reorder the collectives to avoid over-synchronization, but it may be intrusive to the different parallelism components.
### Additional context
_No response_
cc @H-Huang @kwen2501 @wanchaol @fegin @fduwjj @wz337 @wconstab @d4l3k @c-p-i-o
| true
|
2,844,119,804
|
[XPU] Align XPU convolution_backward output layout between fake tensor and real output tensor.
|
etaf
|
closed
|
[
"module: cpu",
"open source",
"Merged",
"topic: not user facing"
] | 1
|
COLLABORATOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #146763
* __->__ #146880
* #145248
* #146762
Fix #146879
cc @jgong5 @mingfeima @XiaobingSuper @sanchitintel @ashokei @jingxu10
| true
|
2,844,082,572
|
[XPU Inductor]Inconsistent FP64 Convolution_Backward Output Layout Between Fake Tensor and Real output Tensor
|
etaf
|
closed
|
[
"triaged",
"module: xpu"
] | 0
|
COLLABORATOR
|
### 🐛 Describe the bug
After #145904 add size-asserts for fallback ops, we found XPU CI failures: https://github.com/pytorch/pytorch/actions/runs/13232888974/job/36933919996
```
=================================== FAILURES ===================================
___ TestFxGraphCache.test_cache_load_model_device_xpu_float64_dynamic_False ____
Traceback (most recent call last):
File "/var/lib/jenkins/pytorch/test/inductor/test_codecache.py", line 450, in test_cache_load_model
grads1 = compiled_fn(mod, inp)
File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/_dynamo/eval_frame.py", line 570, in _fn
return fn(*args, **kwargs)
File "/var/lib/jenkins/pytorch/test/inductor/test_codecache.py", line 440, in fn
mod(x).sum().backward()
File "/var/lib/jenkins/pytorch/test/inductor/test_codecache.py", line 440, in torch_dynamo_resume_in_fn_at_440
mod(x).sum().backward()
File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/_tensor.py", line 648, in backward
torch.autograd.backward(
File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/autograd/__init__.py", line 353, in backward
_engine_run_backward(
File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/autograd/graph.py", line 824, in _engine_run_backward
return Variable._execution_engine.run_backward( # Calls into the C++ engine to run the backward pass
File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/autograd/function.py", line 307, in apply
return user_fn(self, *args)
File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/_functorch/_aot_autograd/runtime_wrappers.py", line 1983, in backward
return impl_fn()
File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/_functorch/_aot_autograd/runtime_wrappers.py", line 1969, in impl_fn
out = CompiledFunction._backward_impl(ctx, all_args)
File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/_functorch/_aot_autograd/runtime_wrappers.py", line 2090, in _backward_impl
out = call_func_at_runtime_with_args(
File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/_functorch/_aot_autograd/utils.py", line 126, in call_func_at_runtime_with_args
out = normalize_as_list(f(args))
File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/_dynamo/eval_frame.py", line 749, in _fn
return fn(*args, **kwargs)
File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/_inductor/output_code.py", line 460, in __call__
return self.current_callable(inputs)
File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/_inductor/utils.py", line 2239, in run
return model(new_inputs)
File "/tmp/tmp46q7sylf/nu/cnunzgrtlb3zhh5mcpwvlyu7z3kmpi4wabi3g3qle4yjlxdtnooa.py", line 48, in call
AssertionError: expected size 512==512, stride 1==49 at dim=1; expected size 7==7, stride 3584==7 at dim=2; expected size 7==7, stride 512==1 at dim=3
This error most often comes from a incorrect fake (aka meta) kernel for a custom op.
Use torch.library.opcheck to test your custom op.
See https://pytorch.org/docs/stable/library.html#torch.library.opcheck
To execute this test, run the following from the base repo dir:
python test/inductor/test_codecache.py TestFxGraphCache.test_cache_load_model_device_xpu_float64_dynamic_False
This message can be suppressed by setting PYTORCH_PRINT_REPRO_ON_FAILURE=0
```
The root cause is inconsistent FP64 convolution_backward output layout between fake Tensor and real output tensor.
The layout of fake tensor is determined by https://github.com/pytorch/pytorch/blob/275c034b164dc51dc7827cabf13a54de43e7cf48/aten/src/ATen/native/ConvUtils.h#L422-L433
which does not allow channels-last for FP64.
But the real output tensor layout is determined by https://github.com/pytorch/pytorch/blob/275c034b164dc51dc7827cabf13a54de43e7cf48/aten/src/ATen/native/mkldnn/xpu/detail/Utils.cpp#L360-L376
which does allow channels-last for FP64.
We should uniformly adopt the second method.
### Versions
PyTorch version: 2.7.0a0+git9c78fb92
Is debug build: False
CUDA used to build PyTorch: None
ROCM used to build PyTorch: N/A
OS: Ubuntu 22.04.4 LTS (x86_64)
GCC version: (Ubuntu 11.4.0-1ubuntu1~22.04) 11.4.0
Clang version: Could not collect
CMake version: version 3.31.2
Libc version: glibc-2.35
Python version: 3.10.15 | packaged by conda-forge | (main, Oct 16 2024, 01:24:24) [GCC 13.3.0] (64-bit runtime)
Python platform: Linux-5.15.0-127-generic-x86_64-with-glibc2.35
Is CUDA available: False
CUDA runtime version: No CUDA
CUDA_MODULE_LOADING set to: N/A
GPU models and configuration: No CUDA
Nvidia driver version: No CUDA
cuDNN version: No CUDA
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
cc @gujinghui @EikanWang @fengyuan14 @guangyey
| true
|
2,844,080,867
|
Support torch.compile rng selective activation checkpointing with cudagraph
|
eellison
|
closed
|
[
"Merged",
"Reverted",
"ciflow/trunk",
"release notes: fx",
"fx",
"module: inductor",
"module: dynamo",
"ciflow/inductor",
"ci-no-td"
] | 29
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #146878
TODO:
- [x] Add handling for when forward is invoked multiple times without invoking backward, so that the fwd/backward states are out of sync
- [x] Update rng state initialization to take from correct device
- [x] Tests
- [x] handling of retain_graph
- [x] respect fallback random
Fix for https://github.com/pytorch/pytorch/issues/130123.
Updates the aot_eager and cudagraph compilation of `run_and_save_rng_state` to use the new mechanism added by https://github.com/pytorch/pytorch/pull/114068 for CUDAGraph safe rng states.
We have a pair of rng states for the fwd and backward respectively. In both forward and backward the rng op will get run with `graphsafe_run_with_rng_state`, which takes in an RNG state and hooks it onto the current RNG generator before running the operator. The rng states for fwd/backward are initialized with the same value. We ensure that for any given run of the forward, the corresponding backward run will have the same rng states for the op as were observed in the forward.
```
===== Forward graph 1 =====
/data/users/eellison/pytorch/torch/fx/_lazy_graph_module.py class GraphModule(torch.nn.Module):
def forward(self, primals_1: "f32[4, 4][4, 1]cuda:0", primals_2: "f32[4, 4][4, 1]cuda:0", fwd_rng_state_0):
sin: "f32[4, 4][4, 1]cuda:0" = torch.ops.aten.sin.default(primals_1)
# No stacktrace found for following nodes
graphsafe_run_with_rng_state = torch.ops.higher_order.graphsafe_run_with_rng_state(torch.ops.aten.rand.default, [4, 4], dtype = torch.float32, device = device(type='cuda', index=0), pin_memory = False, rng_state = fwd_rng_state_0); fwd_rng_state_0 = None
...
===== Backward graph 1 =====
def forward(self, primals_1: "f32[4, 4][4, 1]cuda:0", primals_2: "f32[4, 4][4, 1]cuda:0", tangents_1: "f32[4, 4][4, 1]cuda:0", bwd_rng_state_0):
sin: "f32[4, 4][4, 1]cuda:0" = torch.ops.aten.sin.default(primals_1)
# No stacktrace found for following nodes
graphsafe_run_with_rng_state = torch.ops.higher_order.graphsafe_run_with_rng_state(torch.ops.aten.rand.default, [4, 4], dtype = torch.float32, device = device(type='cuda', index=0), pin_memory = False, rng_state = bwd_rng_state_0); bwd_rng_state_0 = None
```
There is some extra complication when a user either calls backward with retain_graph, or calls the backward in a different order than they called the forward. If a user has states fwd_rng_state0, bwd_rng_state0 and calls:
- fwd0: fwd_rng_state0 -> fwd_rng_state1
- fwd1: fwd_rng_state1 -> fwd_rng_state2
- bwd1
- bwd0
Then naively, when bwd1 is invoked, the bwd rng states would not be equal to the states that were observed in fwd1. I added handling for this in the aot runtime wrappers to detect pending backward invocations and the current position of the bwd rng states, and to update when necessary.
Other notes:
Because nodes which appear later in the forward appear earlier in the backward, we need a separate rng state for each operator. If we reused the rng across ops, the forward and backward would be run with different rng states. I.e., not applied in the same order.
Questions for reviewers:
This does change numerics, because the rng of the op is now taken from the input rng state instead of whatever the rng would be midway through running the graph. Technically, we only need this for cuda graph. But, I'd prefer not to have an rng divergence just for cudagraph. I am making it respect `fallback_random`.
Edit: decided to apply to non cudagraphs as well, so long as fallback_random is not set
I'm initializing the rng states by cloning the current state. If you had something like 5 different rands in the model with the same shape, they'd all get the same value. This doesn't seem great. I could use some other initialization scheme, like taking the seed from graph position, etc. Not sure. Let me know your thoughts.
Edit: updated to be taken from randint()
Update: initializing rng states from torch.randint..
cc @ezyang @SherlockNoMad @EikanWang @jgong5 @wenzhe-nrv @voznesenskym @penguinwu @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
| true
|
2,844,072,631
|
[cutlass backend] Do not change dtype of GEMM template
|
henrylhtsang
|
closed
|
[
"fb-exported",
"Merged",
"Reverted",
"ciflow/trunk",
"topic: not user facing",
"module: inductor",
"ciflow/inductor",
"ci-no-td"
] | 18
|
CONTRIBUTOR
|
I think this is a change in the right direction.
Right now, when we try to find a cutlass gemm, we generate a bunch of gemm templates, and filter out those that don't fit. For example, if we are doing bf16 x bf16 matmul, the gemm template for fp32 x fp32 is generated and filtered out.
However, for the dtype of bias, we would attempt to modify the dtype of the gemm template. I think this is a bad idea, since (1) the usable template is also being generated, and (2) this messes with the configuration name of the template.
I tested this offline. There isn't much difference in performance. However, with instantiation level 2222, I noticed far fewer "C++ compile error"s. This is probably due to using the right template?
Follow-ups are needed:
1. benchmark and dashboard
2. check our logic for setting alignment
with my change
https://www.internalfb.com/intern/paste/P1729604119/
without my change
https://www.internalfb.com/intern/paste/P1729624806/
Differential Revision: D69085556
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
| true
|
2,844,049,790
|
[SkipFiles] remove some more stuff from MOD_SKIPLIST
|
zou3519
|
closed
|
[
"Merged",
"ciflow/trunk",
"topic: not user facing",
"module: dynamo",
"ciflow/inductor"
] | 3
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #146876
* #146854
Test Plan:
- tests
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @chauhang @amjames
| true
|
2,844,030,323
|
[ca] eliminate duplicate getitem graph nodes for shape inputs
|
xmfan
|
closed
|
[
"Merged",
"ciflow/trunk",
"topic: not user facing",
"module: dynamo",
"ciflow/inductor",
"module: compiled autograd"
] | 3
|
MEMBER
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #147021
* __->__ #146875
* #146735
should reuse existing proxies instead of creating new ones
before: https://manifold.edge.x2p.facebook.net/v0/read/tree/logs/.tmpL7hmHe/0_-_-_0/compiled_autograd_graph_3.txt?bucketName=tlparse_reports&apiKey=tlparse_reports-key&withPayload=1&timeoutMsec=100
```python
class CompiledAutograd0(torch.nn.Module):
def forward(self, inputs, sizes, scalars, hooks):
# No stacktrace found for following nodes
getitem = inputs[0]
getitem_1 = inputs[1]
getitem_2 = inputs[2]; inputs = None
getitem_3 = sizes[0]; getitem_3 = None
getitem_4 = sizes[1]; getitem_4 = None
getitem_5 = sizes[2]; getitem_5 = None
getitem_6 = sizes[3]; getitem_6 = None
getitem_7 = sizes[4]; getitem_7 = None
getitem_8 = sizes[5]; getitem_8 = None
getitem_9 = sizes[6]; getitem_9 = None
getitem_10 = sizes[7]; getitem_10 = None
getitem_11 = sizes[8]; getitem_11 = None
getitem_12 = sizes[9]; getitem_12 = None
getitem_13 = sizes[10]; getitem_13 = None
getitem_14 = sizes[11]; getitem_14 = None
getitem_15 = sizes[12]; getitem_15 = None
getitem_16 = sizes[13]; getitem_16 = None
getitem_17 = sizes[14]; getitem_17 = None
getitem_18 = sizes[15]; getitem_18 = None
getitem_19 = sizes[0]
getitem_20 = sizes[1]
getitem_21 = sizes[2]
getitem_22 = sizes[3]
getitem_23 = sizes[4]
getitem_24 = sizes[5]
getitem_25 = sizes[6]
getitem_26 = sizes[7]
getitem_27 = sizes[8]
getitem_28 = sizes[9]
getitem_29 = sizes[10]
getitem_30 = sizes[11]
getitem_31 = sizes[12]
getitem_32 = sizes[13]
getitem_33 = sizes[14]
getitem_34 = sizes[15]; sizes = None
```
after: https://manifold.edge.x2p.facebook.net/v0/read/tree/logs/.tmpCo5T6B/0_-_-_0/compiled_autograd_graph_1.txt?bucketName=tlparse_reports&apiKey=tlparse_reports-key&withPayload=1&timeoutMsec=100
```python
class CompiledAutograd0(torch.nn.Module):
def forward(self, inputs, sizes, scalars, hooks):
# No stacktrace found for following nodes
getitem = inputs[0]
getitem_1 = inputs[1]
getitem_2 = inputs[2]; inputs = None
getitem_3 = sizes[0]
getitem_4 = sizes[1]
getitem_5 = sizes[2]
getitem_6 = sizes[3]
getitem_7 = sizes[4]
getitem_8 = sizes[5]
getitem_9 = sizes[6]
getitem_10 = sizes[7]
getitem_11 = sizes[8]
getitem_12 = sizes[9]
getitem_13 = sizes[10]
getitem_14 = sizes[11]
getitem_15 = sizes[12]
getitem_16 = sizes[13]
getitem_17 = sizes[14]
getitem_18 = sizes[15]; sizes = None
```
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @chauhang @amjames @StrongerXi @yf225
| true
|
2,844,012,614
|
[Optimus][Inductor] Add full cat aten pattern
|
mengluy0125
|
open
|
[
"fb-exported",
"ciflow/trunk",
"module: inductor",
"ciflow/inductor",
"release notes: inductor",
"inductor_pattern_match"
] | 5
|
CONTRIBUTOR
|
Test Plan:
# how to add config
```
"post_grad_fusion_options": {
"normalization_aten_pass": {},
"full_slice_cat_aten_pass": {},
},
```
# unit test
```
buck2 test 'fbcode//mode/dev-nosan' fbcode//caffe2/test/inductor:split_cat_fx_aten_passes -- test_full_cat_post_grad
```
File changed: fbcode//caffe2/torch/_inductor/fx_passes/split_cat.py
Buck UI: https://www.internalfb.com/buck2/12c57cef-2f48-463b-99bc-9e801841590b
Test UI: https://www.internalfb.com/intern/testinfra/testrun/1970325105288963
Network: Up: 91KiB Down: 165KiB (reSessionID-0ed781c0-4126-4662-a5ba-5f43fe98b77d)
Analyzing targets. Remaining 0/79180
Executing actions. Remaining 0/480033 6.9s exec time total
Command: test. Finished 2 local
Time elapsed: 3:06.3s
Tests finished: Pass 2. Fail 0. Fatal 0. Skip 0. Build failure 0
# local reproduce
```
CUDA_VISIBLE_DEVICES=5 buck2 run mode/opt scripts/shuaiyang:test -- --optimus --flow_id 685212996 --use_synthetic_data 2>&1 | tee ~/wukong_685212996.txt
```
baseline trace: https://www.internalfb.com/intern/perfdoctor/trace_view?filepath=tree/traces/mengluy/2025-02-08-22-08-35/trace.json.gz&bucket=gpu_traces
proposal trace: https://www.internalfb.com/intern/perfdoctor/trace_view?filepath=tree/traces/mengluy/2025-02-10-16-06-37/trace.json.gz&bucket=gpu_traces
{F1975066983}
{F1975066984}
We reduced the compiledfunctionbackward region from ~17.2ms to ~8.4ms
Differential Revision: D69429984
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
| true
|
2,844,004,353
|
update types on dynamo configs
|
exclamaforte
|
closed
|
[
"Merged",
"ciflow/trunk",
"topic: not user facing",
"module: dynamo",
"ciflow/inductor",
"release notes: dynamo"
] | 12
|
CONTRIBUTOR
|
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @chauhang @amjames
| true
|
2,844,000,696
|
[FlexAttention] Bug fix broken flag
|
drisspg
|
closed
|
[
"Merged",
"ciflow/trunk",
"topic: not user facing",
"module: inductor",
"ciflow/inductor"
] | 9
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #146872
# Summary
I somehow broke this... I think claude was trippin
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
| true
|
2,843,998,875
|
[BE] Strip `#pragma once` when embedding the headers
|
malfet
|
closed
|
[
"Merged",
"topic: not user facing",
"ciflow/mps"
] | 3
|
CONTRIBUTOR
|
This eliminates compiler warnings, for example when compiling a Metal shader with embedded headers:
```
with program_source:6:9: warning: #pragma once in main file [-Wpragma-once-outside-header]
#pragma once
^
program_source:81:9: warning: #pragma once in main file [-Wpragma-once-outside-header]
#pragma once
^
program_source:588:9: warning: #pragma once in main file [-Wpragma-once-outside-header]
#pragma once
^
program_source:719:9: warning: #pragma once in main file [-Wpragma-once-outside-header]
#pragma once
^
program_source:829:29: error: use of undeclared identifier 'r0_2'
auto tmp8 = in_ptr2[r0_2 + 768*x0];
```
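The stripping itself is simple; a Python sketch of the idea (the actual change lives in the header-embedding build logic):
```python
def strip_pragma_once(header_src: str) -> str:
    # `#pragma once` is meaningless (and warns) once the header is pasted
    # into a single generated source, so drop those lines
    return "\n".join(
        line for line in header_src.splitlines()
        if line.strip() != "#pragma once"
    )
```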
| true
|
2,843,998,105
|
Don't look at TESTING_ONLY in fuzzer
|
exclamaforte
|
closed
|
[
"Merged",
"ciflow/trunk",
"topic: not user facing",
"module: inductor",
"ciflow/inductor",
"release notes: inductor"
] | 6
|
CONTRIBUTOR
|
Lots of configs aren't meant to be set by the fuzzer because they're testing-only.
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
| true
|
2,843,985,789
|
[dtensor] fix side-effect on dtype for _like ops
|
tianyu-l
|
closed
|
[
"oncall: distributed",
"Merged",
"ciflow/trunk",
"topic: not user facing",
"ciflow/inductor",
"release notes: distributed (dtensor)"
] | 3
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #146869
fixes https://github.com/pytorch/pytorch/issues/146749
cc @H-Huang @awgu @kwen2501 @wanchaol @fegin @fduwjj @wz337 @wconstab @d4l3k @c-p-i-o
| true
|
2,843,977,293
|
ci: Add h100 nightly perf testing
|
seemethere
|
closed
|
[
"Merged",
"ciflow/trunk",
"topic: not user facing",
"test-config/default"
] | 12
|
MEMBER
|
This infrastructure has been up for a while, so add a workflow to actually run things on it.
> [!IMPORTANT]
> We only have **14** linux.aws.h100 runners, so it might be beneficial for us to actually pare this list down.
> Will leave it up to the compiler team to comment on this PR on which tests are actually important vs. what is not.
| true
|
2,843,973,207
|
export._trace._WrapperModule doesn't work in strict mode
|
yushangdi
|
closed
|
[
"oncall: pt2",
"oncall: export"
] | 1
|
CONTRIBUTOR
|
### 🐛 Describe the bug
```python
import torch
from torch.export import _trace
def f(x):
return torch.abs(x)
model = _trace._WrapperModule(f)
ep = torch.export.export(model, (torch.randn(8,),))
```
The code errors, but works fine in non-strict mode.
```
Traceback (most recent call last):
File "/data/users/shangdiy/pytorch/test.py", line 8, in <module>
ep = torch.export.export(model, (torch.randn(8,),))
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/data/users/shangdiy/pytorch/torch/export/__init__.py", line 368, in export
return _export(
^^^^^^^^
File "/data/users/shangdiy/pytorch/torch/export/_trace.py", line 1048, in wrapper
raise e
File "/data/users/shangdiy/pytorch/torch/export/_trace.py", line 1021, in wrapper
ep = fn(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^
File "/data/users/shangdiy/pytorch/torch/export/exported_program.py", line 121, in wrapper
return fn(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^
File "/data/users/shangdiy/pytorch/torch/export/_trace.py", line 2067, in _export
return _export_for_training(
^^^^^^^^^^^^^^^^^^^^^
File "/data/users/shangdiy/pytorch/torch/export/_trace.py", line 1048, in wrapper
raise e
File "/data/users/shangdiy/pytorch/torch/export/_trace.py", line 1021, in wrapper
ep = fn(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^
File "/data/users/shangdiy/pytorch/torch/export/exported_program.py", line 121, in wrapper
return fn(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^
File "/data/users/shangdiy/pytorch/torch/export/_trace.py", line 1932, in _export_for_training
export_artifact = export_func( # type: ignore[operator]
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/data/users/shangdiy/pytorch/torch/export/_trace.py", line 1300, in _strict_export_lower_to_aten_ir
gm_torch_level = _export_to_torch_ir(
^^^^^^^^^^^^^^^^^^^^
File "/data/users/shangdiy/pytorch/torch/export/_trace.py", line 695, in _export_to_torch_ir
gm_torch_level, _ = torch._dynamo.export(
^^^^^^^^^^^^^^^^^^^^^
File "/data/users/shangdiy/pytorch/torch/_dynamo/eval_frame.py", line 1587, in inner
result_traced = opt_f(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^
File "/data/users/shangdiy/pytorch/torch/nn/modules/module.py", line 1751, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/data/users/shangdiy/pytorch/torch/nn/modules/module.py", line 1762, in _call_impl
return forward_call(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/data/users/shangdiy/pytorch/torch/_dynamo/eval_frame.py", line 570, in _fn
return fn(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^
File "/data/users/shangdiy/pytorch/torch/_dynamo/convert_frame.py", line 1372, in __call__
return self._torchdynamo_orig_callable(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/data/users/shangdiy/pytorch/torch/_dynamo/convert_frame.py", line 564, in __call__
return _compile(
^^^^^^^^^
File "/data/users/shangdiy/pytorch/torch/_dynamo/convert_frame.py", line 1000, in _compile
guarded_code = compile_inner(code, one_graph, hooks, transform)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/data/users/shangdiy/pytorch/torch/_utils_internal.py", line 95, in wrapper_function
return function(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^
File "/data/users/shangdiy/pytorch/torch/_dynamo/convert_frame.py", line 725, in compile_inner
return _compile_inner(code, one_graph, hooks, transform)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/data/users/shangdiy/pytorch/torch/_dynamo/convert_frame.py", line 759, in _compile_inner
out_code = transform_code_object(code, transform)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/data/users/shangdiy/pytorch/torch/_dynamo/bytecode_transformation.py", line 1404, in transform_code_object
transformations(instructions, code_options)
File "/data/users/shangdiy/pytorch/torch/_dynamo/convert_frame.py", line 235, in _fn
return fn(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^
File "/data/users/shangdiy/pytorch/torch/_dynamo/convert_frame.py", line 679, in transform
tracer.run()
File "/data/users/shangdiy/pytorch/torch/_dynamo/symbolic_convert.py", line 2984, in run
super().run()
File "/data/users/shangdiy/pytorch/torch/_dynamo/symbolic_convert.py", line 1118, in run
while self.step():
^^^^^^^^^^^
File "/data/users/shangdiy/pytorch/torch/_dynamo/symbolic_convert.py", line 1028, in step
self.dispatch_table[inst.opcode](self, inst)
File "/data/users/shangdiy/pytorch/torch/_dynamo/symbolic_convert.py", line 714, in wrapper
return inner_fn(self, inst)
^^^^^^^^^^^^^^^^^^^^
File "/data/users/shangdiy/pytorch/torch/_dynamo/symbolic_convert.py", line 1841, in CALL_FUNCTION_EX
self.call_function(fn, argsvars.items, kwargsvars)
File "/data/users/shangdiy/pytorch/torch/_dynamo/symbolic_convert.py", line 952, in call_function
self.push(fn.call_function(self, args, kwargs)) # type: ignore[arg-type]
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/data/users/shangdiy/pytorch/torch/_dynamo/variables/nn_module.py", line 449, in call_function
return tx.inline_user_function_return(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/data/users/shangdiy/pytorch/torch/_dynamo/symbolic_convert.py", line 969, in inline_user_function_return
return InliningInstructionTranslator.inline_call(self, fn, args, kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/data/users/shangdiy/pytorch/torch/_dynamo/symbolic_convert.py", line 3205, in inline_call
return tracer.inline_call_()
^^^^^^^^^^^^^^^^^^^^^
File "/data/users/shangdiy/pytorch/torch/_dynamo/symbolic_convert.py", line 3342, in inline_call_
self.run()
File "/data/users/shangdiy/pytorch/torch/_dynamo/symbolic_convert.py", line 1118, in run
while self.step():
^^^^^^^^^^^
File "/data/users/shangdiy/pytorch/torch/_dynamo/symbolic_convert.py", line 1028, in step
self.dispatch_table[inst.opcode](self, inst)
File "/data/users/shangdiy/pytorch/torch/_dynamo/symbolic_convert.py", line 714, in wrapper
return inner_fn(self, inst)
^^^^^^^^^^^^^^^^^^^^
File "/data/users/shangdiy/pytorch/torch/_dynamo/symbolic_convert.py", line 1841, in CALL_FUNCTION_EX
self.call_function(fn, argsvars.items, kwargsvars)
File "/data/users/shangdiy/pytorch/torch/_dynamo/symbolic_convert.py", line 952, in call_function
self.push(fn.call_function(self, args, kwargs)) # type: ignore[arg-type]
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/data/users/shangdiy/pytorch/torch/_dynamo/variables/functions.py", line 874, in call_function
return super().call_function(tx, args, kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/data/users/shangdiy/pytorch/torch/_dynamo/variables/functions.py", line 360, in call_function
return super().call_function(tx, args, kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/data/users/shangdiy/pytorch/torch/_dynamo/variables/functions.py", line 162, in call_function
return tx.inline_user_function_return(self, [*self.self_args(), *args], kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/data/users/shangdiy/pytorch/torch/_dynamo/symbolic_convert.py", line 969, in inline_user_function_return
return InliningInstructionTranslator.inline_call(self, fn, args, kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/data/users/shangdiy/pytorch/torch/_dynamo/symbolic_convert.py", line 3204, in inline_call
tracer = cls.build_inline_tracer(parent, func, args, kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/data/users/shangdiy/pytorch/torch/_dynamo/symbolic_convert.py", line 3257, in build_inline_tracer
result = InliningInstructionTranslator.check_inlineable(func)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/data/users/shangdiy/pytorch/torch/_dynamo/symbolic_convert.py", line 3226, in check_inlineable
unimplemented(
File "/data/users/shangdiy/pytorch/torch/_dynamo/exc.py", line 411, in unimplemented
raise Unsupported(msg, case_name=case_name)
torch._dynamo.exc.Unsupported: 'inline in skipfiles: _WrapperModule.forward | forward /data/users/shangdiy/pytorch/torch/export/_trace.py, skipped according trace_rules.lookup MOD_SKIPLIST'
from user code:
File "/data/users/shangdiy/pytorch/torch/_dynamo/external_utils.py", line 48, in inner
return fn(*args, **kwargs)
File "/data/users/shangdiy/pytorch/torch/nn/modules/module.py", line 1762, in _call_impl
return forward_call(*args, **kwargs)
Set TORCH_LOGS="+dynamo" and TORCHDYNAMO_VERBOSE=1 for more information
```
cc @chauhang @penguinwu @avikchaudhuri @gmagogsfm @zhxchen17 @tugsbayasgalan @angelayi @suo @ydwu4
### Versions
PyTorch version: 2.7.0a0+git6a30232
Is debug build: False
CUDA used to build PyTorch: 12.0
ROCM used to build PyTorch: N/A
OS: CentOS Stream 9 (x86_64)
GCC version: (GCC) 11.5.0 20240719 (Red Hat 11.5.0-2)
Clang version: Could not collect
CMake version: version 3.31.4
Libc version: glibc-2.34
Python version: 3.12.9 | packaged by Anaconda, Inc. | (main, Feb 6 2025, 18:56:27) [GCC 11.2.0] (64-bit runtime)
Python platform: Linux-6.4.3-0_fbk15_zion_2630_gf27365f948db-x86_64-with-glibc2.34
Is CUDA available: True
CUDA runtime version: 12.0.140
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration:
GPU 0: NVIDIA PG509-210
GPU 1: NVIDIA PG509-210
GPU 2: NVIDIA PG509-210
GPU 3: NVIDIA PG509-210
GPU 4: NVIDIA PG509-210
GPU 5: NVIDIA PG509-210
GPU 6: NVIDIA PG509-210
GPU 7: NVIDIA PG509-210
Nvidia driver version: 535.154.05
cuDNN version: Could not collect
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 46 bits physical, 48 bits virtual
Byte Order: Little Endian
CPU(s): 192
On-line CPU(s) list: 0-191
Vendor ID: GenuineIntel
Model name: Intel(R) Xeon(R) Platinum 8339HC CPU @ 1.80GHz
CPU family: 6
Model: 85
Thread(s) per core: 2
Core(s) per socket: 24
Socket(s): 4
Stepping: 11
Frequency boost: enabled
CPU(s) scaling MHz: 100%
CPU max MHz: 1801.0000
CPU min MHz: 800.0000
BogoMIPS: 3600.00
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc art arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc cpuid aperfmperf pni pclmulqdq dtes64 monitor ds_cpl vmx smx est tm2 ssse3 sdbg fma cx16 xtpr pdcm pcid dca sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm 3dnowprefetch cpuid_fault epb cat_l3 cdp_l3 invpcid_single intel_ppin ssbd mba ibrs ibpb stibp ibrs_enhanced tpr_shadow flexpriority ept vpid ept_ad fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid cqm mpx rdt_a avx512f avx512dq rdseed adx smap clflushopt clwb intel_pt avx512cd avx512bw avx512vl xsaveopt xsavec xgetbv1 xsaves cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local avx512_bf16 dtherm ida arat pln pts hwp hwp_act_window hwp_epp hwp_pkg_req vnmi pku ospke avx512_vnni md_clear flush_l1d arch_capabilities
Virtualization: VT-x
L1d cache: 3 MiB (96 instances)
L1i cache: 3 MiB (96 instances)
L2 cache: 96 MiB (96 instances)
L3 cache: 132 MiB (4 instances)
NUMA node(s): 4
NUMA node0 CPU(s): 0-23,96-119
NUMA node1 CPU(s): 24-47,120-143
NUMA node2 CPU(s): 48-71,144-167
NUMA node3 CPU(s): 72-95,168-191
Vulnerability Gather data sampling: Vulnerable: No microcode
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Mitigation; Clear CPU buffers; SMT vulnerable
Vulnerability Retbleed: Mitigation; Enhanced IBRS
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Vulnerable: eIBRS with unprivileged eBPF
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Versions of relevant libraries:
[pip3] mypy==1.13.0
[pip3] mypy-extensions==1.0.0
[pip3] numpy==1.26.2
[pip3] onnx==1.17.0
[pip3] onnxscript==0.1.0
[pip3] optree==0.13.0
[pip3] torch==2.7.0a0+git6a30232
[conda] magma-cuda121 2.6.1 1 pytorch
[conda] mkl-include 2025.0.1 pypi_0 pypi
[conda] mkl-static 2025.0.1 pypi_0 pypi
[conda] numpy 1.26.2 pypi_0 pypi
[conda] optree 0.13.0 pypi_0 pypi
[conda] torch 2.7.0a0+git6a30232 dev_0 <develop>
| true
|
2,843,971,653
|
Add support for no-op concat with padded output
|
nandesuka
|
closed
|
[
"fb-exported",
"Merged",
"ciflow/trunk",
"topic: not user facing",
"module: inductor",
"ciflow/inductor"
] | 14
|
CONTRIBUTOR
|
Add support for no-op concat with padded output
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
| true
|
2,843,910,301
|
Disable test with dynamo for schema gen
|
ydwu4
|
closed
|
[
"Merged",
"ciflow/trunk",
"topic: not user facing"
] | 6
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #146865
Fixes https://github.com/pytorch/pytorch/issues/141202.
1. We skip the schema gen tests under dynamo. https://github.com/pytorch/pytorch/issues/141202 fails in a weird way: it claims the node is an integer, even though we run isinstance checks [here](https://github.com/pytorch/pytorch/blob/main/torch/_library/utils.py#L234-L241). This is probably dynamo interfering with the stacks, and checking fx.Node isn't really what dynamo is designed for.
2. We move some of the legitimate cond tests out of schema gen and back into the control flow tests. Also rename _test_export to longer, more descriptive names.
| true
|
2,843,910,120
|
wip [ca] wrap AOTDispatcher tests
|
xmfan
|
open
|
[
"Stale",
"topic: not user facing",
"module: inductor",
"ciflow/inductor"
] | 2
|
MEMBER
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #146864
* #146735
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
| true
|
2,843,901,435
|
Back out "Exclude upsample_bilinear2d.vec and nearest2d.vec from default export decomposition table"
|
GregoryComer
|
closed
|
[
"fb-exported",
"ciflow/trunk",
"release notes: export"
] | 2
|
MEMBER
|
Summary:
Original commit changeset: c90cb1aa7f8f
Original Phabricator Diff: D66575454
Original diff breaks internal QNN tests.
Test Plan: CI
Differential Revision: D69423952
| true
|
2,843,885,300
|
[BE] hop_db tests should not be allowed to specify decorators/skips
|
zou3519
|
open
|
[
"triaged",
"better-engineering",
"module: testing",
"oncall: pt2",
"module: higher order operators",
"module: pt2-dispatcher"
] | 0
|
CONTRIBUTOR
|
Those should all go on the tests themselves...
cc @chauhang @penguinwu @ydwu4 @bdhirsh @yf225
| true
|
2,843,884,364
|
[BE] the majority of HOP OpInfo tests should be in one place
|
zou3519
|
open
|
[
"triaged",
"better-engineering",
"oncall: pt2",
"module: higher order operators",
"module: pt2-dispatcher"
] | 0
|
CONTRIBUTOR
|
Otherwise people will have to go hunting for the tests, which is annoying.
cc @chauhang @penguinwu @ydwu4 @bdhirsh @yf225
| true
|
2,843,883,381
|
[BE] Put all the HOP tests in one location
|
zou3519
|
open
|
[
"triaged",
"better-engineering",
"oncall: pt2",
"module: higher order operators",
"module: pt2-dispatcher"
] | 0
|
CONTRIBUTOR
|
There are test/test_hop_infra, test/higher_order_ops/*, test/dynamo/test_higher_order_ops, and more.
We should try to put the majority of HOP tests in the same location.
cc @chauhang @penguinwu @ydwu4 @bdhirsh @yf225
| true
|
2,843,807,226
|
[export] Dedup expression_created logs
|
angelayi
|
closed
|
[
"Merged",
"ciflow/trunk",
"release notes: export"
] | 3
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #146939
* #146955
* __->__ #146859
* #146858
* #146534
* #146533
* #146532
| true
|
2,843,807,093
|
[tlparse] Add stacktrace filter utility
|
angelayi
|
closed
|
[
"Merged",
"fx",
"ciflow/inductor",
"release notes: export"
] | 1
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #146939
* #146955
* #146859
* __->__ #146858
* #146534
* #146533
* #146532
Added a utility function for capturing the user stack and framework stacktrace.
cc @ezyang @SherlockNoMad @EikanWang @jgong5 @wenzhe-nrv
| true
|
2,843,799,614
|
Segmentation Fault nn/test_convolution.py test_ConvTranspose2d_output_size_downsample_upsample on Aarch64
|
robert-hardwick
|
open
|
[
"module: crash",
"triaged",
"module: arm"
] | 1
|
COLLABORATOR
|
### 🐛 Describe the bug
Raising this issue in order to skip the failing test while still enabling the `nn/test_convolution.py` suite of tests on AArch64.
Running on a Neoverse-V1, we get the following output:
```
[2025-02-10T16:14:20.552Z] nn/test_convolution.py::TestConvolutionNN::test_ConvTranspose2d_output_size_downsample_upsample Fatal Python error: Segmentation fault
[2025-02-10T16:14:20.552Z]
[2025-02-10T16:14:20.552Z] Current thread 0x0000f469c9196900 (most recent call first):
[2025-02-10T16:14:20.552Z] File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/nn/modules/conv.py", line 1162 in forward
[2025-02-10T16:14:20.552Z] File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1762 in _call_impl
[2025-02-10T16:14:20.552Z] File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1751 in _wrapped_call_impl
[2025-02-10T16:14:20.552Z] File "/var/lib/jenkins/workspace/test/nn/test_convolution.py", line 692 in test_ConvTranspose2d_output_size_downsample_upsample
[2025-02-10T16:14:20.552Z] File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 3130 in wrapper
[2025-02-10T16:14:20.552Z] File "/opt/conda/envs/py_3.10/lib/python3.10/unittest/case.py", line 549 in _callTestMethod
[2025-02-10T16:14:20.552Z] File "/opt/conda/envs/py_3.10/lib/python3.10/unittest/case.py", line 591 in run
[2025-02-10T16:14:20.552Z] File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 3270 in _run_custom
[2025-02-10T16:14:20.552Z] File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 3298 in run
[2025-02-10T16:14:20.552Z] File "/opt/conda/envs/py_3.10/lib/python3.10/unittest/case.py", line 650 in __call__
[2025-02-10T16:14:20.552Z] File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/_pytest/unittest.py", line 333 in runtest
[2025-02-10T16:14:20.552Z] File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/_pytest/runner.py", line 169 in pytest_runtest_call
[2025-02-10T16:14:20.552Z] File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/pluggy/_callers.py", line 103 in _multicall
[2025-02-10T16:14:20.552Z] File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/pluggy/_manager.py", line 120 in _hookexec
[2025-02-10T16:14:20.552Z] File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/pluggy/_hooks.py", line 513 in __call__
[2025-02-10T16:14:20.552Z] File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/_pytest/runner.py", line 262 in <lambda>
[2025-02-10T16:14:20.552Z] File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/_pytest/runner.py", line 341 in from_call
[2025-02-10T16:14:20.552Z] File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/_pytest/runner.py", line 261 in call_runtest_hook
[2025-02-10T16:14:20.552Z] File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/_pytest/runner.py", line 222 in call_and_report
[2025-02-10T16:14:20.552Z] File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/_pytest/runner.py", line 133 in runtestprotocol
[2025-02-10T16:14:20.552Z] File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/pytest_rerunfailures.py", line 549 in pytest_runtest_protocol
[2025-02-10T16:14:20.552Z] File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/pluggy/_callers.py", line 103 in _multicall
[2025-02-10T16:14:20.552Z] File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/pluggy/_manager.py", line 120 in _hookexec
[2025-02-10T16:14:20.552Z] File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/pluggy/_hooks.py", line 513 in __call__
[2025-02-10T16:14:20.552Z] File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/_pytest/main.py", line 348 in pytest_runtestloop
[2025-02-10T16:14:20.552Z] File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/pluggy/_callers.py", line 103 in _multicall
[2025-02-10T16:14:20.552Z] File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/pluggy/_manager.py", line 120 in _hookexec
[2025-02-10T16:14:20.552Z] File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/pluggy/_hooks.py", line 513 in __call__
[2025-02-10T16:14:20.552Z] File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/_pytest/main.py", line 323 in _main
[2025-02-10T16:14:20.552Z] File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/_pytest/main.py", line 269 in wrap_session
[2025-02-10T16:14:20.552Z] File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/_pytest/main.py", line 316 in pytest_cmdline_main
[2025-02-10T16:14:20.552Z] File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/pluggy/_callers.py", line 103 in _multicall
[2025-02-10T16:14:20.552Z] File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/pluggy/_manager.py", line 120 in _hookexec
[2025-02-10T16:14:20.552Z] File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/pluggy/_hooks.py", line 513 in __call__
[2025-02-10T16:14:20.552Z] File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/_pytest/config/__init__.py", line 166 in main
[2025-02-10T16:14:20.552Z] File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 1292 in run_tests
[2025-02-10T16:14:20.552Z] File "/var/lib/jenkins/workspace/test/nn/test_convolution.py", line 4076 in <module>
[2025-02-10T16:14:20.552Z]
[2025-02-10T16:14:20.552Z] Extension modules: numpy.core._multiarray_umath, numpy.core._multiarray_tests, numpy.linalg._umath_linalg, numpy.fft._pocketfft_internal, numpy.random._common, numpy.random.bit_generator, numpy.random._bounded_integers, numpy.random._mt19937, numpy.random.mtrand, numpy.random._philox, numpy.random._pcg64, numpy.random._sfc64, numpy.random._generator, torch._C, torch._C._dynamo.autograd_compiler, torch._C._dynamo.eval_frame, torch._C._dynamo.guards, torch._C._dynamo.utils, torch._C._fft, torch._C._linalg, torch._C._nested, torch._C._nn, torch._C._sparse, torch._C._special, multidict._multidict, yarl._quoting_c, propcache._helpers_c, aiohttp._http_writer, aiohttp._http_parser, aiohttp._websocket.mask, aiohttp._websocket.reader_c, frozenlist._frozenlist, charset_normalizer.md, requests.packages.charset_normalizer.md, requests.packages.chardet.md, thriftpy2.transport.cybase, thriftpy2.transport.buffered.cybuffered, thriftpy2.transport.framed.cyframed, thriftpy2.transport.memory.cymemory, thriftpy2.transport.sasl.cysasl, thriftpy2.protocol.cybin.cybin, cython.cimports.libc.math, Cython.Utils, Cython.Plex.Actions, Cython.Plex.Transitions, Cython.Plex.Machines, Cython.Plex.DFA, Cython.Plex.Scanners, Cython.Compiler.Scanning, Cython.StringIOTree, Cython.Compiler.Code, yaml._yaml, numba.core.typeconv._typeconv, numba._helperlib, numba._dynfunc, numba._dispatcher, numba.core.runtime._nrt_python, numba.np.ufunc._internal, scipy._lib._ccallback_c, numba.mviewbuf, psutil._psutil_linux, psutil._psutil_posix, scipy.ndimage._nd_image, scipy.special._ufuncs_cxx, scipy.special._ufuncs, scipy.special._specfun, scipy.special._comb, scipy.linalg._fblas, scipy.linalg._flapack, scipy.linalg._cythonized_array_utils, scipy.linalg._flinalg, scipy.linalg._solve_toeplitz, scipy.linalg._matfuncs_sqrtm_triu, scipy.linalg.cython_lapack, scipy.linalg.cython_blas, scipy.linalg._matfuncs_expm, scipy.linalg._decomp_update, scipy.sparse._sparsetools, _csparsetools, scipy.sparse._csparsetools, scipy.sparse.linalg._isolve._iterative, scipy.sparse.linalg._dsolve._superlu, scipy.sparse.linalg._eigen.arpack._arpack, scipy.sparse.csgraph._tools, scipy.sparse.csgraph._shortest_path, scipy.sparse.csgraph._traversal, scipy.sparse.csgraph._min_spanning_tree, scipy.sparse.csgraph._flow, scipy.sparse.csgraph._matching, scipy.sparse.csgraph._reordering, scipy.special._ellip_harm_2, _ni_label, scipy.ndimage._ni_label, scipy.signal._sigtools, scipy._lib._uarray._uarray, scipy.signal._max_len_seq_inner, scipy.signal._upfirdn_apply, scipy.signal._spline, scipy.optimize._minpack2, scipy.optimize._group_columns, scipy._lib.messagestream, scipy.optimize._trlib._trlib, numpy.linalg.lapack_lite, scipy.optimize._lbfgsb, _moduleTNC, scipy.optimize._moduleTNC, scipy.optimize._cobyla, scipy.optimize._slsqp, scipy.optimize._minpack, scipy.optimize._lsq.givens_elimination, scipy.optimize._zeros, scipy.optimize.__nnls, scipy.optimize._highs.cython.src._highs_wrapper, scipy.optimize._highs._highs_wrapper, scipy.optimize._highs.cython.src._highs_constants, scipy.optimize._highs._highs_constants, scipy.linalg._interpolative, scipy.optimize._bglu_dense, scipy.optimize._lsap, scipy.spatial._ckdtree, scipy.spatial._qhull, scipy.spatial._voronoi, scipy.spatial._distance_wrap, scipy.spatial._hausdorff, scipy.spatial.transform._rotation, scipy.optimize._direct, scipy.integrate._odepack, scipy.integrate._quadpack, scipy.integrate._vode, scipy.integrate._dop, scipy.integrate._lsoda, scipy.interpolate._fitpack, scipy.interpolate.dfitpack, 
scipy.interpolate._bspl, scipy.interpolate._ppoly, scipy.interpolate.interpnd, scipy.interpolate._rbfinterp_pythran, scipy.interpolate._rgi_cython, scipy.signal._sosfilt, scipy.signal._spectral, scipy.special.cython_special, scipy.stats._stats, scipy.stats.beta_ufunc, scipy.stats._boost.beta_ufunc, scipy.stats.binom_ufunc, scipy.stats._boost.binom_ufunc, scipy.stats.nbinom_ufunc, scipy.stats._boost.nbinom_ufunc, scipy.stats.hypergeom_ufunc, scipy.stats._boost.hypergeom_ufunc, scipy.stats.ncf_ufunc, scipy.stats._boost.ncf_ufunc, scipy.stats.ncx2_ufunc, scipy.stats._boost.ncx2_ufunc, scipy.stats.nct_ufunc, scipy.stats._boost.nct_ufunc, scipy.stats.skewnorm_ufunc, scipy.stats._boost.skewnorm_ufunc, scipy.stats.invgauss_ufunc, scipy.stats._boost.invgauss_ufunc, scipy.stats._biasedurn, scipy.stats._levy_stable.levyst, scipy.stats._stats_pythran, scipy.stats._statlib, scipy.stats._mvn, scipy.stats._sobol, scipy.stats._qmc_cy, scipy.stats._rcont.rcont, scipy.signal._peak_finding_utils (total: 169)
[2025-02-10T16:14:20.552Z] Got exit code -11 (SIGSEGV)
```
### Versions
Collecting environment information...
PyTorch version: 2.7.0a0+giteffc545
Is debug build: False
CUDA used to build PyTorch: None
ROCM used to build PyTorch: N/A
OS: Ubuntu 22.04.5 LTS (aarch64)
GCC version: (Ubuntu 11.4.0-1ubuntu1~22.04) 11.4.0
Clang version: Could not collect
CMake version: version 3.31.4
Libc version: glibc-2.35
Python version: 3.10.16 | packaged by conda-forge | (main, Dec 5 2024, 14:08:42) [GCC 13.3.0] (64-bit runtime)
Python platform: Linux-6.8.0-1021-aws-aarch64-with-glibc2.35
Is CUDA available: False
CUDA runtime version: No CUDA
CUDA_MODULE_LOADING set to: N/A
GPU models and configuration: No CUDA
Nvidia driver version: No CUDA
cuDNN version: No CUDA
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: aarch64
CPU op-mode(s): 32-bit, 64-bit
Byte Order: Little Endian
CPU(s): 48
On-line CPU(s) list: 0-47
Vendor ID: ARM
Model: 1
Thread(s) per core: 1
Core(s) per cluster: 48
Socket(s): -
Cluster(s): 1
Stepping: r1p1
BogoMIPS: 2100.00
Flags: fp asimd evtstrm aes pmull sha1 sha2 crc32 atomics fphp asimdhp cpuid asimdrdm jscvt fcma lrcpc dcpop sha3 sm3 sm4 asimddp sha512 sve asimdfhm dit uscat ilrcpc flagm ssbs paca pacg dcpodp svei8mm svebf16 i8mm bf16 dgh rng
L1d cache: 3 MiB (48 instances)
L1i cache: 3 MiB (48 instances)
L2 cache: 48 MiB (48 instances)
L3 cache: 32 MiB (1 instance)
NUMA node(s): 1
NUMA node0 CPU(s): 0-47
Vulnerability Gather data sampling: Not affected
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Not affected
Vulnerability Reg file data sampling: Not affected
Vulnerability Retbleed: Not affected
Vulnerability Spec rstack overflow: Not affected
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl
Vulnerability Spectre v1: Mitigation; __user pointer sanitization
Vulnerability Spectre v2: Mitigation; CSV2, BHB
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Versions of relevant libraries:
[pip3] mypy==1.13.0
[pip3] mypy-extensions==1.0.0
[pip3] numpy==1.22.4
[pip3] onnx==1.17.0
[pip3] onnxscript==0.1.0.dev20240817
[pip3] optree==0.13.0
[pip3] torch==2.7.0a0+giteffc545
[conda] No relevant package
cc @malfet @snadampal @milpuz01
| true
|
2,843,785,048
|
Update dynamo expected 20250210
|
huydhn
|
closed
|
[
"module: rocm",
"Merged",
"topic: not user facing",
"module: dynamo",
"ciflow/inductor",
"ciflow/inductor-periodic"
] | 3
|
CONTRIBUTOR
|
Update all the CI accuracy expected values to make trunk green.
cc @jeffdaily @sunway513 @jithunnair-amd @pruthvistony @ROCmSupport @dllehr-amd @jataylo @hongxiayang @naromero77amd @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @chauhang @amjames
| true
|
2,843,733,478
|
[StaticRuntime] Fix a bug that memory planner ignores subblocks (#146728)
|
coufon
|
closed
|
[
"oncall: jit",
"fb-exported",
"Merged",
"ciflow/trunk",
"release notes: jit"
] | 11
|
CONTRIBUTOR
|
Summary:
When a Static Runtime graph node has sub-blocks, the memory planner does not treat the sub-blocks' inputs as inputs of that node. As a result, the lifetimes of those inputs are computed incorrectly and the corresponding tensor memory is released earlier than required, which causes errors.
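For context, a minimal sketch (mine, not taken from this diff) of the kind of graph shape involved: a value consumed only inside a node's sub-block, which a planner that scans only top-level node inputs would consider dead too early.
```python
import torch

# Minimal sketch: `y` is used only inside the prim::If sub-block, so a planner
# that looks only at top-level node inputs would think `y` is dead before the
# If node runs and could reuse its memory too early.
class SubBlockUse(torch.nn.Module):
    def forward(self, x: torch.Tensor, flag: bool) -> torch.Tensor:
        y = x * 2
        if flag:
            out = y + 1  # `y` is an input of the sub-block, not of the outer node
        else:
            out = x - 1
        return out

scripted = torch.jit.script(SubBlockUse())
print(scripted.graph)  # the prim::If node carries two sub-blocks
```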
Differential Revision: D69195886
cc @EikanWang @jgong5 @wenzhe-nrv @sanchitintel
| true
|
2,843,705,210
|
[SkipFiles] remove some stuff from MOD_SKIPLIST
|
zou3519
|
closed
|
[
"Merged",
"ciflow/trunk",
"topic: not user facing",
"module: dynamo",
"ciflow/inductor"
] | 6
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #146876
* __->__ #146854
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @chauhang @amjames
| true
|
2,843,686,868
|
Remove torch._higher_order_ops from MOD_SKIPLIST
|
zou3519
|
closed
|
[
"Merged",
"ciflow/trunk",
"topic: not user facing",
"module: dynamo",
"ciflow/inductor",
"keep-going"
] | 3
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #146853
Test Plan:
- tests
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @chauhang @amjames
| true
|
2,843,589,490
|
Let _create_cpu_state_dict and _copy_state_dict support DTensor
|
fegin
|
closed
|
[
"oncall: distributed",
"Merged",
"ciflow/trunk",
"topic: not user facing"
] | 3
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #146852
cc @H-Huang @awgu @kwen2501 @wanchaol @fduwjj @wz337 @wconstab @d4l3k @c-p-i-o @LucasLLC @MeetVadakkanchery @mhorowitz @pradeepfn @ekr0
| true
|
2,843,561,323
|
DISABLED test_aot_cudagraphs_cuda (__main__.TestOptimizationsCUDA)
|
huydhn
|
open
|
[
"triaged",
"skipped",
"oncall: pt2"
] | 1
|
CONTRIBUTOR
|
Platforms: linux
This test was disabled because it is failing on main branch ([recent examples](https://torch-ci.com/failure?failureCaptures=%5B%22dynamo%2Ftest_backends.py%3A%3ATestOptimizationsCUDA%3A%3Atest_aot_cudagraphs_cuda%22%5D)).
This causes a memory leak and can be reproduced in main with `PYTORCH_TEST_CUDA_MEM_LEAK_CHECK=1 PYTORCH_TEST_WITH_SLOW_GRADCHECK=1 python test/dynamo/test_backends.py TestOptimizationsCUDA.test_aot_cudagraphs_cuda`
cc @chauhang @penguinwu
| true
|
2,843,514,009
|
STuff
|
tugsbayasgalan
|
open
|
[
"fb-exported",
"Stale",
"fx",
"ciflow/inductor",
"release notes: export"
] | 3
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #146850
Differential Revision: [D69413972](https://our.internmc.facebook.com/intern/diff/D69413972/)
cc @ezyang @SherlockNoMad @EikanWang @jgong5 @wenzhe-nrv
| true
|
2,843,512,392
|
[inductor] skip _test_insignificant_strides on rocm
|
shunting314
|
closed
|
[
"module: rocm",
"Merged",
"ciflow/trunk",
"topic: not user facing",
"module: inductor",
"ciflow/inductor",
"ciflow/rocm"
] | 6
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #146849
* #145904
See https://github.com/pytorch/pytorch/issues/146848: the ROCm kernel for _scaled_dot_product_attention does not match the meta kernel's output shape. The CUDA kernel is fine.
cc @jeffdaily @sunway513 @jithunnair-amd @pruthvistony @ROCmSupport @dllehr-amd @jataylo @hongxiayang @naromero77amd @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
| true
|
2,843,497,648
|
_scale_dot_product_efficient_attention eager/meta computation mismatch on output tensor shape
|
shunting314
|
open
|
[
"triaged",
"oncall: pt2",
"rocm",
"module: sdpa"
] | 6
|
CONTRIBUTOR
|
### 🐛 Describe the bug
Repro:
```
import torch
from torch._subclasses.fake_tensor import FakeTensorMode
empty_strided_cuda = torch._C._dynamo.guards._empty_strided_cuda
q = empty_strided_cuda((1, 32, 1, 128), (4096, 128, 128, 1), torch.float32)
k = empty_strided_cuda((1, 32, 1, 128), (4096, 128, 128, 1), torch.float32)
v = empty_strided_cuda((1, 32, 1, 128), (4096, 128, 128, 1), torch.float32)
bias = empty_strided_cuda((1, 32, 1, 1), (8, 0, 8, 1), torch.float32)
op = torch.ops.aten._scaled_dot_product_efficient_attention.default
out = op(q, k, v, bias, True)
print("Eager:", out[0].size(), out[1].size())
mode = FakeTensorMode()
with mode:
ft_args = [mode.from_tensor(t) for t in (q, k, v, bias)]
ft_out = op(*ft_args, True)
print("FakeTensor:", ft_out[0].size(), ft_out[1].size())
# output on A100:
# Eager: torch.Size([1, 32, 1, 128]) torch.Size([1, 32, 32])
# FakeTensor: torch.Size([1, 32, 1, 128]) torch.Size([1, 32, 32])
#
# output on MI300X:
# Eager: torch.Size([1, 32, 1, 128]) torch.Size([1, 32, 1])
# FakeTensor: torch.Size([1, 32, 1, 128]) torch.Size([1, 32, 32])
```
As the repro shows, it works fine on A100, but on MI300X the eager kernel returns a different shape for the second output.
### Versions
Trunk: 3822a88d211fe4a0ab4f1204c48c2588c8d8cfb4
cc @chauhang @penguinwu
| true
|
2,843,456,154
|
[FSDP2] Simplify shard_placement_fn in test
|
tsunghsienlee
|
closed
|
[
"oncall: distributed",
"fb-exported",
"Merged",
"ciflow/trunk",
"topic: not user facing"
] | 8
|
CONTRIBUTOR
|
Summary: Found this while checking `shard_placement_fn` for the Shampoo shard-independent implementation.
Test Plan: OSS CI & tests
Differential Revision: D69412878
cc @H-Huang @awgu @kwen2501 @wanchaol @fegin @fduwjj @wz337 @wconstab @d4l3k @c-p-i-o
| true
|
2,843,435,746
|
Fix lint
|
angelayi
|
closed
|
[
"Merged",
"release notes: fx",
"fx",
"ciflow/inductor"
] | 3
|
CONTRIBUTOR
|
Fixes #ISSUE_NUMBER
https://github.com/pytorch/pytorch/actions/runs/13248382636/job/36980294598
cc @ezyang @SherlockNoMad @EikanWang @jgong5 @wenzhe-nrv
| true
|
2,843,413,095
|
Fix non-bitwise type annotations for Tensor operators (see #145838)
|
rec
|
open
|
[
"oncall: distributed",
"module: typing",
"open source",
"Merged",
"Reverted",
"ciflow/trunk",
"topic: not user facing",
"ciflow/inductor",
"ci-no-td"
] | 13
|
COLLABORATOR
|
Fix https://github.com/pytorch/pytorch/issues/145838
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #146845
cc @H-Huang @awgu @kwen2501 @wanchaol @fegin @fduwjj @wz337 @wconstab @d4l3k @c-p-i-o @ezyang @malfet @xuzhao9 @gramster
| true
|
2,843,376,213
|
[PT2] Enable relu_nan_to_num preset in pre_grad by default
|
huxintong
|
open
|
[
"fb-exported",
"Stale",
"topic: not user facing",
"module: inductor",
"ciflow/inductor"
] | 4
|
CONTRIBUTOR
|
Summary: This is already enabled for AOTI_EP (https://fburl.com/code/27q2dnc2). For AOTI we also want to enable it by default, so set it directly in the config.
Test Plan: sandcastle
Differential Revision: D69409881
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
| true
|
2,843,372,920
|
[inductor][cpu] Move VNNI weight packing into AMX GEMM kernel for contiguous BMM weights
|
frost-intel
|
closed
|
[
"module: cpu",
"open source",
"Merged",
"ciflow/trunk",
"topic: not user facing",
"module: inductor",
"ciflow/inductor"
] | 13
|
COLLABORATOR
|
Currently, the bfloat16 microkernel that uses AMX vectorization requires the weights to be in an interleaved VNNI format. For GEMM code this hasn't been an issue, because GEMM currently only supports constant weights, so the VNNI weight packing is done at compile time and saved as a constant tensor in the graph. But for BMM ops, where weights are not required to be constant, the current code does an expensive reshape/VNNI packing for all BMM weights.
This PR removes the need for that reshape/packing for non-constant inputs by moving VNNI packing inside the AMX microkernel. A new `K * block_n` buffer is used to store the temporarily packed weights. Weight packing interleaves 2 rows of weights at a time.
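For illustration, a rough NumPy sketch of what the 2-row VNNI interleave does (float32 standing in for bf16; the shapes are illustrative, not the kernel's actual tile sizes):
```python
import numpy as np

K, N = 8, 4  # K must be even for the 2-row interleave
w = np.arange(K * N, dtype=np.float32).reshape(K, N)

# Element (k, n) moves to (k // 2, n, k % 2): pairs of consecutive K rows are
# interleaved into the contiguous [K/2, N, 2] layout the AMX dot product expects.
w_vnni = w.reshape(K // 2, 2, N).transpose(0, 2, 1).copy()

assert w_vnni.shape == (K // 2, N, 2)
assert w_vnni[0, 0, 0] == w[0, 0] and w_vnni[0, 0, 1] == w[1, 0]
```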
cc @jgong5 @mingfeima @XiaobingSuper @sanchitintel @ashokei @jingxu10 @voznesenskym @penguinwu @EikanWang @Guobing-Chen @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
| true
|
2,843,277,026
|
[PGNCCL] Associate tensor allocation support with NCCL version
|
kwen2501
|
closed
|
[
"oncall: distributed",
"Merged",
"ciflow/trunk",
"release notes: distributed (c10d)"
] | 6
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #146842
* #146589
This is a forward fix to #146589.
For NCCL versions lower than 2.19, the previous PR would hit `RuntimeError: NCCL mem allocator is not supported in this NCCL version`.
This PR gates the support by checking the link-time NCCL version via `ncclGetVersion`.
cc @H-Huang @awgu @wanchaol @fegin @fduwjj @wz337 @wconstab @d4l3k @c-p-i-o
| true
|
2,843,260,441
|
[cuda] fix printing of num_gpus
|
wconstab
|
closed
|
[] | 2
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #146841
Previously, on machines with fewer than 8 GPUs, the device==7 case would trigger the assert inside getDeviceProperties and print `num_gpus=BEL`, which is ASCII code 7.
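For reference, a tiny illustration of the symptom (not the actual C++ fix): formatting the integer 7 as a character yields the BEL control code.
```python
n = 7
print(repr(chr(n)))      # '\x07' -> rendered as "BEL" by many terminals/log viewers
print(f"num_gpus={n}")   # the intended, numeric output
```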
| true
|
2,843,244,550
|
Fix bazel job after #144489
|
huydhn
|
closed
|
[
"Merged",
"topic: not user facing",
"ciflow/periodic",
"test-config/default"
] | 3
|
CONTRIBUTOR
|
This is currently failing in trunk with the following error https://github.com/pytorch/pytorch/actions/runs/13246034191/job/36972742610
### Testing
Bazel job passing https://github.com/pytorch/pytorch/actions/runs/13247495161/job/36977571965
| true
|
2,842,907,729
|
[ROCm] Update periodic.yml to use 2GPU runners
|
amdfaa
|
closed
|
[
"module: rocm",
"open source",
"Merged",
"Reverted",
"topic: not user facing",
"ciflow/periodic",
"test-config/distributed",
"test-config/default",
"ciflow/rocm",
"ci-no-td"
] | 9
|
CONTRIBUTOR
|
Temporary fix for the ROCm workflow.
The 4-GPU runners have all been taken offline due to a network timeout issue, so we aren't able to run any periodic jobs.
cc @jeffdaily @sunway513 @jithunnair-amd @pruthvistony @ROCmSupport @dllehr-amd @jataylo @hongxiayang @naromero77amd
| true
|
2,842,672,759
|
[cuda] fix printing of num_gpus
|
wconstab
|
closed
|
[
"Merged",
"ciflow/trunk",
"release notes: cuda"
] | 15
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #146838
Previously, on machines with fewer than 8 GPUs, the device==7 case would trigger the assert inside getDeviceProperties and print `num_gpus=BEL`, which is ASCII code 7.
| true
|
2,842,516,680
|
Make cuda and xpu build coexist in same build
|
take-cheeze
|
open
|
[
"triaged",
"open source",
"Stale",
"topic: not user facing"
] | 4
|
CONTRIBUTOR
|
With https://github.com/pytorch/kineto/pull/1036
| true
|
2,842,452,500
|
[Fix]: Disable KleidiAI if unsupported gcc/clang compiler is detected
|
nikhil-arm
|
closed
|
[
"triaged",
"open source",
"Merged",
"ciflow/trunk",
"topic: not user facing"
] | 4
|
COLLABORATOR
|
Fixes: https://github.com/pytorch/pytorch/issues/146740
Description:
1. KleidiAI officially supports GCC >= 11 and Clang >= 11. Certain hardware features are tied to the compiler version, and KleidiAI compilation will fail in such cases.
Change-Id: Ib43d6b5bf66ef5ea48c481a2774801c573ec205c
| true
|
2,842,420,177
|
Export HuggingFace mamba to ONNX
|
AyoubMDL
|
open
|
[
"module: onnx",
"triaged"
] | 22
|
NONE
|
### 🐛 Describe the bug
Hi, I am trying to convert the Mamba model from Hugging Face to ONNX. However, I encountered the following error:
```python
<class 'RuntimeError'>: Found <class 'transformers.cache_utils.MambaCache'> in output, which is not a known type.
```
Here is the code:
```python
from transformers import MambaForCausalLM
import torch
model = MambaForCausalLM.from_pretrained("state-spaces/mamba-130m-hf")
dummy_inputs = torch.tensor([[33]], dtype=torch.int32)
with torch.no_grad():
onnx_program = torch.onnx.export(model,
dummy_inputs,
"mamba_hf.onnx",
input_names=["input"],
opset_version=15,
external_data=False,
dynamo=True)
onnx_program.optimize()
onnx_program.save("mamba_hf.onnx")
```
When I set `use_cache=False`, the export works without issues. However, the operations differ between the cached (`use_cache=True`) and non-cached (`use_cache=False`) cases; in particular, the non-cached path uses different operations (conv1d).
This code works:
```python
from transformers import MambaForCausalLM
import torch
# Disable caching
model = MambaForCausalLM.from_pretrained("state-spaces/mamba-130m-hf", use_cache=False)
dummy_inputs = torch.tensor([[33]], dtype=torch.int32)
with torch.no_grad():
onnx_program = torch.onnx.export(model,
dummy_inputs,
"mamba_hf.onnx",
input_names=["input"],
opset_version=15,
external_data=False,
dynamo=True)
onnx_program.optimize()
onnx_program.save("mamba_hf.onnx")
```
I am particularly interested in exporting the model with the operations that occur when the cache is available.
### Question:
How can I modify the export to ignore the `MambaCache` object but still retain the logic from this part of the code (with caching):
https://github.com/huggingface/transformers/blob/924f1c717a72261a4b9286a31f199d9512448dd0/src/transformers/models/mamba/modeling_mamba.py#L248
and not this part (without caching):
https://github.com/huggingface/transformers/blob/924f1c717a72261a4b9286a31f199d9512448dd0/src/transformers/models/mamba/modeling_mamba.py#L268
Any guidance on how to handle this would be greatly appreciated!
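For context, the rough direction I have in mind is a small wrapper so the exported graph only sees plain tensors. This is an untested sketch; the `cache_params`/`use_cache` kwargs and the idea of building the cache outside are assumptions based on the modeling code linked above and may need adjusting for the installed transformers version.
```python
import torch
from transformers import MambaForCausalLM


class MambaCachedStep(torch.nn.Module):
    """Untested sketch: run one step with a pre-built cache and drop the
    MambaCache object from the outputs so the exporter only sees tensors."""

    def __init__(self, model, cache):
        super().__init__()
        self.model = model
        # Built outside this module; MambaCache constructor args depend on the
        # transformers version, so they are not shown here.
        self.cache = cache

    def forward(self, input_ids):
        out = self.model(input_ids, cache_params=self.cache, use_cache=True)
        return out.logits  # keep the cached-path ops, return only tensors
```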
### Versions
PyTorch version: 2.6.0+cu124
Is debug build: False
CUDA used to build PyTorch: 12.4
ROCM used to build PyTorch: N/A
OS: Ubuntu 20.04.6 LTS (x86_64)
GCC version: (Ubuntu 9.4.0-1ubuntu1~20.04.2) 9.4.0
Clang version: Could not collect
CMake version: version 3.31.3
Libc version: glibc-2.31
Python version: 3.11.11 (main, Dec 4 2024, 08:55:08) [GCC 9.4.0] (64-bit runtime)
Python platform: Linux-5.15.0-130-generic-x86_64-with-glibc2.31
Is CUDA available: True
CUDA runtime version: Could not collect
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration: GPU 0: NVIDIA GeForce RTX 2080 Ti
Nvidia driver version: 550.107.02
cuDNN version: Could not collect
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Byte Order: Little Endian
Address sizes: 39 bits physical, 48 bits virtual
CPU(s): 16
On-line CPU(s) list: 0-15
Thread(s) per core: 2
Core(s) per socket: 8
Socket(s): 1
NUMA node(s): 1
Vendor ID: GenuineIntel
CPU family: 6
Model: 165
Model name: Intel(R) Core(TM) i7-10700 CPU @ 2.90GHz
Stepping: 5
CPU MHz: 4410.645
CPU max MHz: 4800,0000
CPU min MHz: 800,0000
BogoMIPS: 5799.77
Virtualization: VT-x
L1d cache: 256 KiB
L1i cache: 256 KiB
L2 cache: 2 MiB
L3 cache: 16 MiB
NUMA node0 CPU(s): 0-15
Vulnerability Gather data sampling: Mitigation; Microcode
Vulnerability Itlb multihit: KVM: Mitigation: VMX disabled
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Mitigation; Clear CPU buffers; SMT vulnerable
Vulnerability Reg file data sampling: Not affected
Vulnerability Retbleed: Mitigation; Enhanced IBRS
Vulnerability Spec rstack overflow: Not affected
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl and seccomp
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Enhanced / Automatic IBRS; IBPB conditional; RSB filling; PBRSB-eIBRS SW sequence; BHI SW loop, KVM SW loop
Vulnerability Srbds: Mitigation; Microcode
Vulnerability Tsx async abort: Not affected
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc art arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc cpuid aperfmperf pni pclmulqdq dtes64 monitor ds_cpl vmx smx est tm2 ssse3 sdbg fma cx16 xtpr pdcm pcid sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm 3dnowprefetch cpuid_fault invpcid_single ssbd ibrs ibpb stibp ibrs_enhanced tpr_shadow vnmi flexpriority ept vpid ept_ad fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid mpx rdseed adx smap clflushopt intel_pt xsaveopt xsavec xgetbv1 xsaves dtherm ida arat pln pts hwp hwp_notify hwp_act_window hwp_epp pku ospke md_clear flush_l1d arch_capabilities
Versions of relevant libraries:
[pip3] numpy==1.26.4
[pip3] nvidia-cublas-cu12==12.4.5.8
[pip3] nvidia-cuda-cupti-cu12==12.4.127
[pip3] nvidia-cuda-nvrtc-cu12==12.4.127
[pip3] nvidia-cuda-runtime-cu12==12.4.127
[pip3] nvidia-cudnn-cu12==9.1.0.70
[pip3] nvidia-cufft-cu12==11.2.1.3
[pip3] nvidia-curand-cu12==10.3.5.147
[pip3] nvidia-cusolver-cu12==11.6.1.9
[pip3] nvidia-cusparse-cu12==12.3.1.170
[pip3] nvidia-cusparselt-cu12==0.6.2
[pip3] nvidia-nccl-cu12==2.21.5
[pip3] nvidia-nvjitlink-cu12==12.4.127
[pip3] nvidia-nvtx-cu12==12.4.127
[pip3] onnx==1.16.1
[pip3] onnxconverter-common==1.16.0
[pip3] onnxmltools==1.13.0
[pip3] onnxruntime==1.19.0
[pip3] onnxruntime_extensions==0.13.0
[pip3] onnxscript==0.1.0.dev20250114
[pip3] pytorch-lightning==1.9.5
[pip3] torch==2.6.0
[pip3] torchao==0.8.0
[pip3] torchaudio==2.5.1
[pip3] torchmetrics==1.6.1
[pip3] torchtune==0.5.0
[pip3] torchvision==0.21.0
[pip3] triton==3.2.0
[conda] Could not collect
| true
|
2,842,262,341
|
implement Size.__radd__
|
khushi-411
|
open
|
[
"open source",
"topic: not user facing"
] | 2
|
CONTRIBUTOR
|
Fixes #144334
| true
|
2,842,260,625
|
Enable ruff E721
|
zeshengzong
|
open
|
[
"open source",
"Stale",
"topic: not user facing"
] | 4
|
CONTRIBUTOR
|
Enable [type-comparison (E721)](https://docs.astral.sh/ruff/rules/type-comparison/#type-comparison-e721)
| true
|
2,841,827,299
|
Improve Typing for Loss Functions to Fix VSCode Autocomplete
|
Julfried
|
open
|
[
"triaged",
"open source",
"Stale"
] | 6
|
NONE
|
Fixes #146831 by adding a type annotation to `_Loss.__call__`, as proposed in https://github.com/microsoft/pyright/issues/3249
| true
|
2,841,813,770
|
Missing Typing for Loss Functions Causes Poor Code Completion in VSCode
|
Julfried
|
open
|
[
"module: nn",
"module: typing",
"triaged"
] | 0
|
NONE
|
### 🐛 Describe the bug
Currently, all loss functions in torch.nn.modules.loss return a Tensor, but VSCode's IntelliSense and static type checkers (like Pyright) cannot infer this correctly. This makes code completion and function signatures less helpful when working with loss functions.
For example, in VSCode:
```python
import torch
loss_fn = torch.nn.CrossEntropyLoss()
loss = loss_fn(torch.tensor([0.1, 0.9]), torch.tensor([1.0, 0.0]))  # Expected: Tensor, but loss is inferred as Any
```
Because the return type is not properly inferred, IntelliSense does not suggest useful Tensor methods when working with loss, making development inconvenient.
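For now, a minimal caller-side workaround (just an explicit annotation; nothing PyTorch-specific):
```python
import torch
from torch import Tensor

loss_fn = torch.nn.CrossEntropyLoss()
pred = torch.tensor([[0.1, 0.9]])
target = torch.tensor([1])

# Annotating the result restores Tensor-typed completion until __call__ is
# annotated upstream.
loss: Tensor = loss_fn(pred, target)
print(loss.item())
```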
### Versions
Versions of relevant libraries:
[pip3] numpy==2.1.3
[pip3] numpydoc==1.8.0
[pip3] torch==2.5.1
[conda] numpy 2.1.3 pypi_0 pypi
[conda] numpydoc 1.8.0 pypi_0 pypi
[conda] torch 2.5.1 pypi_0 pypi
cc @albanD @mruberry @jbschlosser @walterddr @mikaylagawarecki @ezyang @malfet @xuzhao9 @gramster
| true
|
2,841,811,505
|
Make codegen dynamic shapes more device agnostic
|
matthewhagraphcore
|
closed
|
[
"triaged",
"open source",
"Merged",
"ciflow/trunk",
"topic: not user facing",
"module: inductor"
] | 16
|
CONTRIBUTOR
|
Currently, as in much of Inductor, devices are assumed to be one of:
- CPU with Cpp codegen, or
- GPU with Triton codegen
This is not always the case: a CPU backend may be using the Triton CPU backend, or some other codegen entirely. This PR goes some way towards fixing that, for the case where a CPU backend can use Triton scheduling.
A more general solution could be implemented, but this would need to be quite robust, and is probably best done more centrally and by someone who can do more testing with CUDA devices.
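As a rough sketch of what "a CPU backend using Triton scheduling" looks like from the outside: the registration helper and class names below are assumptions about current inductor internals and may differ between releases, so treat this as illustrative only.
```python
# Illustrative only: helper and class names are assumptions, check them against
# the inductor version in use.
from torch._inductor.codegen.common import register_backend_for_device
from torch._inductor.codegen.triton import TritonScheduling
from torch._inductor.codegen.wrapper import PythonWrapperCodegen

# Point the "cpu" device at Triton scheduling instead of the C++ scheduling.
register_backend_for_device("cpu", TritonScheduling, PythonWrapperCodegen)
```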
Fixes #ISSUE_NUMBER
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
| true
|