| id | title | user | state | labels | comments | author_association | body | is_title |
|---|---|---|---|---|---|---|---|---|
3,015,052,034
|
add basic unit tests and noop config
|
Lucaskabela
|
closed
|
[
"Merged",
"topic: not user facing",
"module: dynamo",
"ciflow/inductor"
] | 3
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #151134
* __->__ #152036
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @chauhang @amjames
| true
|
3,015,046,806
|
add basic unit tests and noop config
|
Lucaskabela
|
closed
|
[
"module: dynamo",
"ciflow/inductor"
] | 2
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* (to be filled)
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @chauhang @amjames
| true
|
3,015,005,728
|
RFC: Torch Native Runtime
|
zhxchen17
|
open
|
[
"triaged",
"oncall: pt2"
] | 0
|
CONTRIBUTOR
|
### 🚀 The feature, motivation and pitch
Requesting a review on a new RFC about Torch Native Runtime: https://github.com/pytorch/rfcs/pull/72
Pull Requests (for tracking purpose):
- [x] https://github.com/pytorch/pytorch/pull/151467
- [x] https://github.com/pytorch/pytorch/pull/152033
More tracked tasks (private link): https://www.internalfb.com/gsd/539269141088926/1198391541766396/list
### Alternatives
_No response_
### Additional context
_No response_
cc @chauhang @penguinwu
| true
|
3,014,997,154
|
[nativert] Add moodycamel/concurrentqueue as third-party dependency
|
zhxchen17
|
closed
|
[
"Merged",
"ciflow/trunk",
"topic: not user facing",
"skip-pr-sanity-checks"
] | 11
|
CONTRIBUTOR
|
nativert RFC: https://github.com/zhxchen17/rfcs/blob/master/RFC-0043-torch-native-runtime.md
moodycamel/concurrentqueue is a high-performance MPMC queue implementation that is single-header only. We want to add it to third_party so it can be used with the upcoming Torch Native Runtime.
The source code is imported from commit hash 2f09da73d22a47dc8a89cdd4fc4c3bfae07f4284 from https://github.com/cameron314/concurrentqueue
cc @albanD
| true
|
3,014,950,317
|
RFC: The State of Custom CUDA extensions in PyTorch
|
msaroufim
|
open
|
[
"module: cpp-extensions",
"module: cuda",
"triaged"
] | 22
|
MEMBER
|
## The State of Custom CUDA extensions in PyTorch
### Background
This is a document summarizing the state of custom CUDA extensions in PyTorch. There are many tools for this already, with different tradeoffs, and we will likely add even more tools with even more different tradeoffs, so it has become timely to describe the state of the world and conclude with some ideas on how to unify more of our tooling.
The simplest way to add a custom CUDA extension in PyTorch is `torch.utils.cpp_extension.load_inline()`: it expects both CUDA host and device code as strings and code-generates a cpp extension using `torch/extension.h`. From a UX perspective `load_inline()` is great, but customers have complained of long compile times, on the order of 90s for toy kernels, primarily because `torch/extension.h` essentially imports all of libtorch, about 18K header files.
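As a concrete illustration of this workflow, here is a minimal `load_inline()` sketch; the extension and function names are made up for the example.
```python
# A minimal sketch of the load_inline() flow, assuming a toy "add one" kernel.
import torch
from torch.utils.cpp_extension import load_inline

cuda_src = r"""
__global__ void add_one_kernel(const float* in, float* out, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) out[i] = in[i] + 1.0f;
}

at::Tensor add_one(at::Tensor x) {
    auto out = at::empty_like(x);
    int n = x.numel();
    int threads = 256;
    int blocks = (n + threads - 1) / threads;
    add_one_kernel<<<blocks, threads>>>(x.data_ptr<float>(), out.data_ptr<float>(), n);
    return out;
}
"""

# load_inline generates the pybind11 glue for the listed functions; only a
# declaration is needed on the C++ side.
ext = load_inline(
    name="add_one_ext",
    cpp_sources="at::Tensor add_one(at::Tensor x);",
    cuda_sources=cuda_src,
    functions=["add_one"],
)

x = torch.randn(1024, device="cuda")
print(torch.allclose(ext.add_one(x), x + 1))
```
The first call pays the nvcc + `torch/extension.h` cost discussed above; subsequent runs reuse the cached binary.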
It is possible to mitigate this problem by simply not automatically adding `torch/extension.h` in a new mode we call `no_implicit_header`; however, users then cannot use `at::Tensor` and must instead rely on `at::TensorBase`, which is an `at::Tensor` without its methods. The core problem here is that users still need to write some opaque code like `PyMODINIT_FUNC PyInit_noname(void)`, which is familiar to neither the typical PyTorch user nor the typical CUDA user. Granted, this does work, and it does reduce compilation times to around 5s for toy kernels.
5s is still a lot though, and the main reason for this overhead is that `load_inline()`, while having a JIT-like UX, actually uses `nvcc` under the hood, whereas NVIDIA recommends `nvrtc` for fast compilation.
A more recent PR https://github.com/pytorch/pytorch/pull/151484 is proposing a new function `torch.cuda._compile_kernel()` and by leveraging `nvrtc` under the hood it can bring down compilation times to 0.01s! This PR is in the same spirit as yet another PR doing the same for Metal https://github.com/pytorch/pytorch/pull/148972
Keep in mind that `load_inline()` was not conceived purely for CUDA extensions but for general cpp extensions, and instead of relying on pybind, the PyTorch custom ops documentation recommends using `TORCH_LIBRARY`; for most practical purposes, though, users still need to import `at::Tensor`, which again pulls in a large number of header files and causes slow compilation times.
Which is why Jane has also been working on an experimental API, `shim.h`, originally conceived for ABI compatibility but with the important side effect that it can dramatically reduce compilation times. Granted, authoring kernels for this API looks dramatically different from typical CUDA code, so while we do expect it to be workable for PyTorch core developers, it seems unlikely we'll convert general CUDA developers to use this API.
There are also some older code paths including `torch.cuda.jiterator` which is a PyTorch JIT for elementwise CUDA kernels. This API has remained in beta for many years, only works for elementwise kernels and requires the input string to have specific C++ templates to work https://pytorch.org/docs/stable/generated/torch.cuda.jiterator._create_multi_output_jit_fn.html
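For reference, documented jiterator usage looks roughly like this (adapted from the `_create_jit_fn` docs):
```python
import torch

# Elementwise-only: the C++ template string defines the scalar computation,
# and jiterator compiles it with NVRTC on first use and caches it.
code_string = "template <typename T> T my_kernel(T x, T y, T alpha) { return -x + alpha * y; }"
jitted_fn = torch.cuda.jiterator._create_jit_fn(code_string, alpha=1.0)

a = torch.rand(3, device="cuda")
b = torch.rand(3, device="cuda")
result = jitted_fn(a, b, alpha=3.14)  # computes -a + 3.14 * b elementwise
```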
Finally, this introduction doesn't even begin to capture composability with compilers or all the new DSLs for GPU programming like tilelang, cuTile, and Triton, but we will briefly discuss how hardware vendors moving more towards pythonic DSLs will affect our story.
### Summary of tradeoffs table
| Mechanism | Compile Speed | Kernel Launch Overhead | Setup / Dependencies | Dynamic Authoring (JIT) | Complex Data Structure Support | NVIDIA Tooling Used | Code Duplication / Maintenance |
|-----------|--------------|------------------------|----------------------|-----------------------|--------------------------------|---------------------|--------------------------------|
| **cpp_extension.load_inline** | Very slow initial compile (60-90s); With fewer includes: 5-15s. Subsequent runs use cached binaries. | Minimal runtime overhead. Only normal C++ op dispatch. First call incurs PyBind initialization. | Heavy requirements. Needs full CUDA toolkit (nvcc, ptxas) and C++ compiler toolchain. Relies on PyTorch's build tools. | Partial JIT. Supports dynamic compilation at runtime, but iteration is slow due to long compile times. Not ideal for interactive prototyping. | Full ATen API access. Can use at::Tensor and PyTorch C++ libraries, allowing complex ops, autograd support (with manual code), etc. | Uses nvcc + ptxas. Compiles PTX and host code with NVCC. No use of NVRTC at runtime. | Standard extension logic. Relies on PyBind11 and extension caching. Large header includes cause maintenance burden on compile times. |
| **ctypes + NVRTC (torch.cuda._compile_kernel)** | Fast JIT compile (~0.01s vs ~90s with load_inline). NVRTC compiles only CUDA device code in-memory, drastically reducing build time. | Low overhead per launch. Similar to regular CUDA launch, but requires specifying grid/block and arg pointers each time from Python. | Lightweight runtime deps. No nvcc or full toolchain needed – only CUDA driver and NVRTC library. PyTorch uses lazy loading of NVRTC/driver APIs. | Highly JIT-friendly. Designed for on-the-fly kernel generation in eager Python. Great for interactive benchmarking, dynamic codegen. | Limited abstraction. Works with raw device pointers, not at::Tensor. No built-in autograd or complex data type support (needs manual handling). | Uses NVRTC + CUDA Driver. Compiles PTX at runtime via NVRTC, loads via cuModuleLoad, launches with cuLaunchKernel. No nvcc/ptxas. | Duplicate NVRTC logic. New code path paralleling PyTorch's existing NVRTC integration. Issues with kernel argument passing in early prototypes. |
| **PyBind11 libtorch Extensions** | No runtime compile (ahead-of-time). Compile cost paid upfront. Development iteration slower, but no JIT in deployment. Minimal includes can improve build time. | No extra overhead at call. Just C++ function call through dispatcher (same as built-in ops). | Significant setup. Requires C++/CUDA code, build script/Makefile, compatible compiler and CUDA toolkit. Binary must match PyTorch version. | Not JIT at runtime. Static approach good for production, not for interactive prototyping. Code changes require recompilation. | Full flexibility. Can use entire PyTorch C++ API and CUDA libraries. Complex data structures, custom memory layouts, and autograd integration all possible. | Uses nvcc + PyTorch C++ API. Compiled with nvcc (CUDA) and C++ compiler (host code), linking against libtorch. No NVRTC. | Maintenance burden on user side. Users must update code with PyTorch releases (no stable ABI). Recent efforts like shim.h aim to provide stable, minimal ABI. |
| **Jiterator (Elementwise CUDA JIT)** | Fast compile for elementwise (milliseconds). Generates kernel from small template. Uses NVRTC under hood. Caches kernels for given code strings. | Very low overhead per call. Called like regular Python function. Uses PyTorch's TensorIterator internally. On par with fused elementwise kernel. | Built-in and easy. No external toolchain beyond CUDA PyTorch build. Uses PyTorch's NVRTC stubs. Users just provide code string. | Highly JIT/dynamic. Made for JIT'ing new elementwise kernels at runtime. Great for quick math formula experiments. Each use JITs or retrieves from cache. | Restricted to elementwise math. Only supports pointwise/broadcastable operations. Can't implement multi-stage algorithms or custom reductions. No higher-level PyTorch API usage inside kernel. | Uses NVRTC via PyTorch. Uses PyTorch's lazy NVRTC loader to compile code string to PTX. No nvcc needed. | Some overlap with NVRTC API. Implements logic for compiling code strings, handling types. Limited scope makes maintenance easier but less broadly useful than general solution. |
### Problems with `torch.cuda._compile_kernel`
While different solutions represent different tradeoffs, something I expect most users would want is `torch.cuda._compile_kernel()`: it gives many orders of magnitude speedups for compilation times, but the main caveat is that it will be device-only, so practically users need to fit all their code within a single `__global__`. There is some pythonic overhead to this because we need to set up an arg list through ctypes; the returned kernel expects raw arguments like device pointers and sizes, so users must package things into tuples and use `data_ptr()`, but the runtime will still be fast since we don't do anything silly like loop over all the elements in Python.
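To make that marshaling overhead concrete, here is a hedged sketch of the ctypes-side argument packing; the launch call is left as a commented placeholder because the exact shape of what `_compile_kernel()` returns is PR-specific.
```python
# Illustrative only: packaging tensor arguments for a driver-launched kernel.
import ctypes
import torch

x = torch.randn(4096, device="cuda")
out = torch.empty_like(x)
n = x.numel()

# The kernel only sees raw device pointers and scalars, so tensors have to be
# unpacked manually via data_ptr() and wrapped in ctypes types.
kernel_args = (
    ctypes.c_void_p(x.data_ptr()),
    ctypes.c_void_p(out.data_ptr()),
    ctypes.c_int(n),
)
grid = ((n + 255) // 256, 1, 1)
block = (256, 1, 1)
# kernel(grid, block, kernel_args)  # placeholder: the actual launch API comes from the PR
```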
PyTorch itself also does not link against nvrtc directly, to keep CPU-only builds possible; instead it uses a stub that calls `dlopen` on `libnvrtc.so`. There is now some code duplication because the most recent PRs rewrite this logic in Python. There's also a nuance in that code compiled using `compile_kernel()` expects a specific compute capability and as such will not run on newer GPUs without recompilation.
The support for kernel authoring is pretty excellent: fast compilation means more interactivity, and strings are a fairly powerful interface considering you can metaprogram with Jinja templates or t-strings, which are expected to land in Python 3.14. The API is really intended to be as easy to use as the traditional jiterator one, but more useful because it's more general.
However, this does not integrate super well with the rest of the PyTorch ecosystem; for example, we can't pass in arbitrary C++ objects, and it's up to users to implement autograd logic, so it's more likely this API will mostly be used for inference.
The most important limitation of nvrtc is that it compiles only device code and might not support all modern C++ features, but for most kernel code this is probably fine.
At a high level, the way this PR works is that it calls `nvrtcCompileProgram` to get PTX, then uses functions like `cuModuleLoadData` and `cuModuleGetFunction` to get a kernel handle, and `cuLaunchKernel` to run it. PyTorch has LazyNVRTC which does almost exactly the same thing, but the benefit of the pythonic approach is that we don't spawn a new process. There's already some subtlety biting us with passing in ctypes arguments, and a mistake tends to produce an opaque error. Longer term it makes sense to unify these two code paths.
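For illustration, here is a minimal sketch of the NVRTC half of that flow driven directly through `ctypes` and `dlopen`; the library name `libnvrtc.so.12` and the `compute_80` target are assumptions about the environment.
```python
import ctypes

# Load NVRTC the same way the stub approach does: dlopen the shared library directly.
nvrtc = ctypes.CDLL("libnvrtc.so.12")

src = b"""
extern "C" __global__ void add_one(const float* in, float* out, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) out[i] = in[i] + 1.0f;
}
"""

prog = ctypes.c_void_p()
assert nvrtc.nvrtcCreateProgram(ctypes.byref(prog), src, b"add_one.cu", 0, None, None) == 0

# The target architecture is baked in here, which is why kernels compiled this
# way need recompilation for newer GPUs.
opts = (ctypes.c_char_p * 1)(b"--gpu-architecture=compute_80")
assert nvrtc.nvrtcCompileProgram(prog, 1, opts) == 0

size = ctypes.c_size_t()
nvrtc.nvrtcGetPTXSize(prog, ctypes.byref(size))
ptx = ctypes.create_string_buffer(size.value)
nvrtc.nvrtcGetPTX(prog, ptx)
# ptx now holds what cuModuleLoadData/cuModuleGetFunction/cuLaunchKernel would consume.
```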
Static extensions like those using libtorch still have their place: they pay an upfront cost with no runtime cost, and their launch overhead is identical to native ops, but packaging and setting up the extension is an inherently heavyweight process that cannot compete with `compile_kernel()`. Libtorch extensions essentially add a maintenance burden on the custom kernel developer, and better tooling, including but not limited to ABI compatibility, might make static extensions popular again.
So do we need yet another public API to compile arbitrary CUDA kernels? The short answer is yes, because we don't have a solution for fast CUDA prototyping in eager. As long as the gaps are documented (no host code, no autograd, work with raw pointers) and it gains users until it becomes robust, there's a real possibility we'll generalize the existing jiterator code.
Eventually `shim.h` might provide a best-of-both-worlds experience with fast compilation times and a usable UX, but considering the API is entirely new and neither CUDA-ish nor PyTorch-ish, we'll likely have to either build higher-level APIs on top of it or implement a few common wrapper functions to make it more usable.
### Python is all you need?
Since the release of `torch.compile()` and Triton we've seen a large number of new pythonic DSLs for GPU programming emerge, including tilelang and Triton, and most notably NVIDIA itself made cuTile and cuda-python core focuses of GTC 2025. Most of these new languages have better ergonomics than C++ libraries like CUTLASS, and a good chunk of them are also vendor neutral; that, combined with their ease of use and ease of packaging (because they are typically JIT'd), makes it plausible to imagine a future version of PyTorch that relies very little on AOT or native kernels.
Triton, for example, is very "PyTorch-first", specifically in that Triton kernels natively operate over torch.Tensor and launch on the existing `torch.cuda.Stream`. It remains to be seen what the interop will look like with other libraries, but it is something that should be top of mind for us.
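As a small illustration of that point, a minimal Triton kernel takes `torch.Tensor`s directly and launches on the current CUDA stream (the kernel name is illustrative):
```python
import torch
import triton
import triton.language as tl

@triton.jit
def add_one_kernel(x_ptr, out_ptr, n_elements, BLOCK_SIZE: tl.constexpr):
    pid = tl.program_id(axis=0)
    offsets = pid * BLOCK_SIZE + tl.arange(0, BLOCK_SIZE)
    mask = offsets < n_elements
    x = tl.load(x_ptr + offsets, mask=mask)
    tl.store(out_ptr + offsets, x + 1.0, mask=mask)

x = torch.randn(4096, device="cuda")
out = torch.empty_like(x)
grid = (triton.cdiv(x.numel(), 1024),)
add_one_kernel[grid](x, out, x.numel(), BLOCK_SIZE=1024)  # tensors are passed as pointers implicitly
print(torch.allclose(out, x + 1))
```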
If users are willing to accept higher runtime overhead in return for smaller binaries then that might be one direction we slowly need to take PyTorch in as well.
`cuda-python`, however, is a bit different from most of the other tile-based DSLs previously mentioned: it provides the complete CUDA driver and runtime APIs, a runtime compiler via NVRTC, and a linker for device code.
So one could question why this code should be in core at all rather than relying on `cuda.nvrtc`, and I'll work on producing some good examples here as well so we can compare more concretely. My guess, however, is that this won't help us all that much, since we will likely still have to do ctypes shenanigans for interop, which is the most complex part of the compile_kernel PR; but concretely we'd likely delete things like the discovery of `libnvrtc.so`.
## Integration with AOTInductor
This is one area where I'm admittedly not an expert, so I'm posting things that might be incorrect in the hope that someone corrects me, but AOTInductor seems to have a few ways of integrating custom kernels:
1. `register_external_matmul` which will autotune a user matmul with the inductor ones https://github.com/pytorch/pytorch/pull/130774
2. `ExternKernel` which per my understanding is used to fallback to fast eager kernels
3. `triton_template_registry` which is one way of removing the user burden of when to call a custom kernel
4. More recently the team added support for packaging custom kernels using CMake https://github.com/pytorch/pytorch/issues/115965
None of these approaches are at odds with what we're doing, and conceptually `ExternKernel` would treat any new CUDA functions we compile as black boxes that it can package and ship, assuming the deployment machine has NVRTC available.
## What should we do next?
This is an opinionated take, but one I'd love to discuss more.
1. Merge `compile_kernel()` and see how users react and iterate until a public release
2. Unify nvrtc c++ and python code paths, potentially leveraging `cuda-python`
3. Get `load_inline()` to automatically try `compile_kernel()` and use `nvcc` in case it fails
4. Better tensor marshaling: a helper that can convert between a `torch.Tensor` and a (CUdeviceptr, size, stride) tuple could make `cuda-python` as ergonomic as `compile_kernel()`
5. Get `shim.h` used by non core developers, make it easier to write and closer to either CUDA or PyTorch in UX
6. Get more examples of custom kernels that are successfully shipped via AOT Inductor registration mechanisms. In particular, having some public API that takes a compiled cubin pointer would make integrations more first class.
## Some more updates since I posted this
1. NVIDIA folks reached out recommending I take a look at https://github.com/NVIDIA/cuda-python/blob/main/cuda_core/examples/saxpy.py as an alternative backend to compile_kernel, and overall this seems like it'd be more stable long term
2. Considering that the main limitation of compile_kernel, as pointed out by @ngimel, is going to be support for fast libraries like CUB, we're exploring having a fast scan algorithm be the first cuda-python kernel we support; details TBD
cc @malfet @zou3519 @xmfan @ptrblck @eqy @jerryzh168 @desertfire @albanD @ngimel @janeyx99 @drisspg @syed-ahmed @seemethere
| true
|
3,014,911,474
|
[pytorch] reland of [cutlass backend] delay construction of cutlass presets to when called (#151875)
|
henrylhtsang
|
closed
|
[
"fb-exported",
"Merged",
"ciflow/trunk",
"topic: not user facing",
"module: inductor",
"ciflow/inductor",
"ci-no-td"
] | 4
|
CONTRIBUTOR
|
Differential Revision: D73524978
reland of https://github.com/pytorch/pytorch/pull/151875
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov @aakhundovr
| true
|
3,014,886,477
|
[caffe2/c10/util/TypeIndex] Add '__CUDA_ARCH_LIST__' check
|
wenxin0319
|
closed
|
[
"fb-exported",
"Merged",
"ciflow/trunk",
"topic: not user facing"
] | 18
|
CONTRIBUTOR
|
Summary:
We suspect that switching the NVCC host compiler from GCC to Clang, while targeting multiple architectures, is causing issues because only `__CUDA_ARCH_LIST__` is being passed, without `__CUDA_ARCH__`.
To resolve this c10 compilation error, we should first fix the problem and then switch the NVCC host compiler from GCC to Clang. Once this is done, the errors no longer occur.
Test Plan: CI
Reviewed By: zhuhan0
Differential Revision: D73383236
| true
|
3,014,870,629
|
[export] improve error message for deserializing custom triton op
|
ydwu4
|
closed
|
[
"Merged",
"ciflow/trunk",
"ciflow/inductor",
"release notes: export"
] | 3
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #152029
In https://github.com/pytorch/pytorch/issues/151746, users ran into an error where a custom triton op could not be resolved into an operator from its string target. We improve the error message by reminding users to register the same custom operator at deserialization time.
Now the error looks like this:
```python
torch._export.serde.serialize.SerializeError: We failed to resolve torch.ops.triton_kernel.add.default to an operator. If it's a custom op/custom triton op, this is usally because the custom op is not registered when deserializing. Please import the custom op to register it before deserializing. Otherwise, please file an issue on github. Unsupported target type for node Node(target='torch.ops.triton_kernel.add.default', inputs=[NamedArgument(name='x', arg=Argument(as_tensor=TensorArgument(name='linear')), kind=1), NamedArgument(name='y', arg=Argument(as_tensor=TensorArgument(name='mul')), kind=1)], outputs=[Argument(as_tensor=TensorArgument(name='add'))], metadata={'stack_trace': 'File "/data/users/yidi/pytorch/test.py", line 50, in forward\n output = triton_add(dense_output, bias)', 'nn_module_stack': 'L__self__,,__main__.SimpleModel', 'torch_fn': 'add.default_1;OpOverload.add.default'}, is_hop_single_tensor_return=None): <class 'str'>.```
| true
|
3,014,821,932
|
[CUDA] MultiheadAttention with masks and dropout produces NaNs
|
YChienHung
|
open
|
[
"module: cuda",
"triaged",
"module: correctness (silent)",
"module: sdpa"
] | 0
|
NONE
|
### 🐛 Describe the bug
I found the MHA exception error with the following triggering condition:
1. attn_mask=None, need_weights=False, OK
2. attn_mask=tensor[bool], need_weights=True, OK
3. attn_mask=tensor[bool], need_weights=False, find NaNs
```python
self.read_from_pixel = CrossAttention(
self.embed_dim, self.num_heads, add_pe_to_qkv=this_cfg.read_from_pixel.add_pe_to_qkv
)
# bs*C*H*W -> bs*(H*W)*C
pixel_flat = pixel.flatten(2, 3).transpose(1, 2).contiguous()
# masked cross attention
x, q_weights = self.read_from_pixel(
x, pixel_flat, query_pe, pixel_pe, attn_mask=attn_mask, need_weights=need_weights
)
```
```python
# imports added so the snippet is self-contained
from typing import List

import torch
import torch.nn as nn

class CrossAttention(nn.Module):
def __init__(
self, dim: int, nhead: int, dropout: float = 0.0, batch_first: bool = True,
add_pe_to_qkv: List[bool] = [True, True, False], residual: bool = True, norm: bool = True
):
super().__init__()
self.cross_attn = nn.MultiheadAttention(dim, nhead, dropout=dropout, batch_first=batch_first)
if norm:
self.norm = nn.LayerNorm(dim)
else:
self.norm = nn.Identity()
self.dropout = nn.Dropout(dropout)
self.add_pe_to_qkv = add_pe_to_qkv
self.residual = residual
def forward(
self, x: torch.Tensor, mem: torch.Tensor, x_pe: torch.Tensor, mem_pe: torch.Tensor, attn_mask: bool = None,
*,
need_weights: bool = False
) -> (torch.Tensor, torch.Tensor):
x = self.norm(x)
if self.add_pe_to_qkv[0]:
q = x + x_pe
else:
q = x
if any(self.add_pe_to_qkv[1:]):
mem_with_pe = mem + mem_pe
k = mem_with_pe if self.add_pe_to_qkv[1] else mem
v = mem_with_pe if self.add_pe_to_qkv[2] else mem
else:
k = v = mem
r = x
x, weights = self.cross_attn(
q, k, v, attn_mask=attn_mask, need_weights=need_weights, average_attn_weights=False
)
if self.residual:
return r + self.dropout(x), weights
else:
return self.dropout(x), weights
```
### Versions
hardware: A100 GPU
os: ubuntu 22.04
cuda:12.4
python:3.11.11
numpy==2.2.4
nvidia-cublas-cu12==12.4.5.8
nvidia-cuda-cupti-cu12==12.4.127
nvidia-cuda-nvrtc-cu12==12.4.127
nvidia-cuda-runtime-cu12==12.4.127
nvidia-cudnn-cu12==9.1.0.70
nvidia-cufft-cu12==11.2.1.3
nvidia-curand-cu12==10.3.5.147
nvidia-cusolver-cu12==11.6.1.9
nvidia-cusparse-cu12==12.3.1.170
nvidia-cusparselt-cu12==0.6.2
nvidia-ml-py==12.570.86
nvidia-nccl-cu12==2.21.5
nvidia-nvjitlink-cu12==12.4.127
nvidia-nvtx-cu12==12.4.127
torch==2.5.0
torchaudio==2.5.0
torchvision==0.20.0
cc @ptrblck @msaroufim @eqy @jerryzh168
| true
|
3,014,714,073
|
Add rich support to torch.distributed.tensor.debug.visualize_sharding
|
wangkuiyi
|
closed
|
[
"oncall: distributed",
"triaged",
"open source",
"Merged",
"ciflow/trunk",
"release notes: distributed (dtensor)"
] | 11
|
CONTRIBUTOR
|
Fixes https://github.com/pytorch/pytorch/issues/151857
cc @H-Huang @awgu @wanchaol @fegin @fduwjj @wz337 @wconstab @d4l3k @jgong5 @mingfeima @XiaobingSuper @sanchitintel @ashokei @jingxu10 @jerryzh168 @voznesenskym @penguinwu @EikanWang @Guobing-Chen @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
Please verify this PR by running the following command on a computer with at least 4 GPUs.
```shell
torchrun --nproc_per_node=4 /w/pytorch/torch/distributed/tensor/examples/visualize_sharding_example.py
```
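For context, a minimal sketch of the kind of code such an example script exercises; the tensor shapes and helper calls here are assumptions, not the contents of `visualize_sharding_example.py`.
```python
import os
import torch
import torch.distributed as dist
from torch.distributed.device_mesh import init_device_mesh
from torch.distributed.tensor import Shard, distribute_tensor
from torch.distributed.tensor.debug import visualize_sharding

# run with: torchrun --nproc_per_node=4 this_script.py
torch.cuda.set_device(int(os.environ["LOCAL_RANK"]))
dist.init_process_group("nccl")
mesh = init_device_mesh("cuda", (dist.get_world_size(),))

# shard a small tensor along dim 0 and print a view of which rank owns which rows
t = torch.arange(16, dtype=torch.float32).reshape(4, 4)
dt = distribute_tensor(t, mesh, placements=[Shard(0)])
visualize_sharding(dt)

dist.destroy_process_group()
```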
| true
|
3,014,704,183
|
Expand cache logging
|
oulgen
|
open
|
[
"ciflow/trunk",
"topic: not user facing"
] | 1
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #152026
| true
|
3,014,632,023
|
[ONNX] Create a message to suggest users setting dynamo=True when exporting
|
justinchuby
|
closed
|
[
"module: onnx",
"triaged"
] | 3
|
COLLABORATOR
|
Add a message to nudge users to try the dynamo=True option.
| true
|
3,014,577,086
|
fbgemm packages are compiled in torchinductor torchbench tests
|
ngimel
|
open
|
[
"high priority",
"module: ci",
"triaged"
] | 4
|
COLLABORATOR
|
In torchinductor torchbench workflows we compile fbgemm package for 45 minutes:
```
2025-04-22T06:59:33.9864041Z Building wheels for collected packages: fbgemm-gpu
2025-04-22T07:44:40.6897548Z   Building wheel for fbgemm-gpu (setup.py) ... done
2025-04-22T07:44:40.8746033Z [?25h Created wheel for fbgemm-gpu: filename=fbgemm_gpu-0.4.1rc0.post421-cp310-cp310-linux_x86_64.whl size=253793293
```
we should explore whether we can instead use fbgemm wheels (if we only need fbgemm genai functionality, those wheels are built nightly; regular fbgemm is a different story), cache the builds, or build the wheels ourselves.
cc @ezyang @gchanan @zou3519 @kadeng @msaroufim @seemethere @malfet @pytorch/pytorch-dev-infra
| true
|
3,014,487,726
|
[BE] Throw different errors for CUDA exceptions
|
malfet
|
open
|
[] | 2
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #152023
* #151955
TODO: Maybe we just need a generic DeviceError class that preserves the error code?
| true
|
3,014,471,550
|
standalone_compile with training errors with no cache artifacts
|
bdhirsh
|
open
|
[
"triaged",
"oncall: pt2"
] | 2
|
CONTRIBUTOR
|
failure at save time:
```
Traceback (most recent call last):
File "/data/users/hirsheybar/checkout2/pytorch/tmp.py", line 36, in <module>
compiled_artifact.save(path=path, format=format)
File "/data/users/hirsheybar/checkout2/pytorch/torch/_inductor/standalone_compile.py", line 73, in save
assert len(cache_info.aot_autograd_artifacts) == 1, cache_info
AssertionError: CacheInfo(inductor_artifacts=['fahslwmuzkeyf4mngvks2bbdwwhwfgi73t32c4zfxjjhbun32exd'], autotune_artifacts=[], aot_autograd_artifacts=[], pgo_artifacts=[])
```
repro:
```python
import torch
def capture(fn):
def inner(*args):
gm = None
actual_args = None
kwargs = None
def backend(gm_, args_, **kwargs_):
nonlocal gm
nonlocal actual_args
nonlocal kwargs
gm = gm_
actual_args = args_
kwargs = kwargs_
return gm
_ = torch.compile(fn, fullgraph=True, backend=backend)(*args)
return gm, actual_args, kwargs
return inner
model = torch.nn.Linear(16, 16, device='cuda')
inp = torch.randn(16, 16, device='cuda')
from contextlib import nullcontext
with nullcontext():
gm, args, kwargs = capture(model)(inp)
assert not kwargs
compiled_artifact = torch._inductor.standalone_compile(gm, args)
path = 'tmp_cache_dir'
format = 'unpacked'
compiled_artifact.save(path=path, format=format)
loaded = torch._inductor.CompiledArtifact.load(path=path, format=format, model=model)
compiled_out = loaded(inp)
breakpoint()
print()
```
cc @chauhang @penguinwu
| true
|
3,014,435,532
|
standalone_compile load + save does not properly lift model params/buffers
|
bdhirsh
|
closed
|
[] | 2
|
CONTRIBUTOR
|
resulting crash:
```
Traceback (most recent call last):
File "/data/users/hirsheybar/checkout2/pytorch/tmp.py", line 39, in <module>
compiled_out = loaded(inp)
File "/data/users/hirsheybar/checkout2/pytorch/torch/_inductor/standalone_compile.py", line 62, in __call__
return self._compiled_fn(*args)
File "/data/users/hirsheybar/checkout2/pytorch/torch/_inductor/standalone_compile.py", line 177, in <lambda>
return CompiledArtifact(lambda *args: compiled_fn(list(args)), None)
File "/data/users/hirsheybar/checkout2/pytorch/torch/_functorch/_aot_autograd/runtime_wrappers.py", line 330, in runtime_wrapper
all_outs = call_func_at_runtime_with_args(
File "/data/users/hirsheybar/checkout2/pytorch/torch/_functorch/_aot_autograd/utils.py", line 126, in call_func_at_runtime_with_args
out = normalize_as_list(f(args))
File "/data/users/hirsheybar/checkout2/pytorch/torch/_functorch/_aot_autograd/runtime_wrappers.py", line 497, in wrapper
return compiled_fn(runtime_args)
File "/data/users/hirsheybar/checkout2/pytorch/torch/_inductor/output_code.py", line 569, in __call__
return self.current_callable(inputs)
File "/data/users/hirsheybar/checkout2/pytorch/torch/_inductor/utils.py", line 2531, in run
copy_misaligned_inputs(new_inputs, inputs_to_check)
File "/data/users/hirsheybar/checkout2/pytorch/torch/_inductor/utils.py", line 2558, in copy_misaligned_inputs
_inp = new_inputs[i]
IndexError: list index out of range
```
here's an example repro:
```python
import torch
def capture(fn):
def inner(*args):
gm = None
actual_args = None
kwargs = None
def backend(gm_, args_, **kwargs_):
nonlocal gm
nonlocal actual_args
nonlocal kwargs
gm = gm_
actual_args = args_
kwargs = kwargs_
return gm
_ = torch.compile(fn, fullgraph=True, backend=backend)(*args)
return gm, actual_args, kwargs
return inner
model = torch.nn.Linear(16, 16, device='cuda')
inp = torch.randn(16, 16, device='cuda')
with torch.no_grad():
gm, args, kwargs = capture(model)(inp)
assert not kwargs
compiled_artifact = torch._inductor.standalone_compile(gm, args)
path = 'tmp_cache_dir'
format = 'unpacked'
compiled_artifact.save(path=path, format=format)
loaded = torch._inductor.CompiledArtifact.load(path=path, format=format)
compiled_out = loaded(inp)
```
| true
|
3,014,429,909
|
[XPU] The updated torch-xpu-ops caused interpolate_bilinear accuracy error.
|
etaf
|
closed
|
[
"triaged",
"module: xpu"
] | 2
|
COLLABORATOR
|
### 🐛 Describe the bug
After the PR #150827 updated torch-xpu-ops, the following test case failed:
python test/inductor/test_torchinductor_opinfo.py TestInductorOpInfoXPU.test_comprehensive_nn_functional_interpolate_bilinear_xpu_float32
```
======================================================================
ERROR: test_comprehensive_nn_functional_interpolate_bilinear_xpu_float32 (__main__.TestInductorOpInfoXPU)
----------------------------------------------------------------------
Traceback (most recent call last):
File "/home/xinanlin/xinanlin/pytorch/torch/testing/_internal/common_device_type.py", line 1131, in test_wrapper
return test(*args, **kwargs)
File "/home/xinanlin/xinanlin/pytorch/torch/testing/_internal/common_device_type.py", line 1430, in only_fn
return fn(self, *args, **kwargs)
File "/home/xinanlin/xinanlin/pytorch/torch/testing/_internal/common_utils.py", line 2291, in wrapper
fn(*args, **kwargs)
File "/home/xinanlin/xinanlin/pytorch/torch/testing/_internal/common_device_type.py", line 1211, in dep_fn
return fn(slf, *args, **kwargs)
File "/home/xinanlin/xinanlin/pytorch/torch/testing/_internal/common_device_type.py", line 1211, in dep_fn
return fn(slf, *args, **kwargs)
File "/home/xinanlin/xinanlin/pytorch/torch/testing/_internal/common_device_type.py", line 1211, in dep_fn
return fn(slf, *args, **kwargs)
File "/home/xinanlin/xinanlin/pytorch/torch/testing/_internal/common_utils.py", line 1612, in wrapper
fn(*args, **kwargs)
File "/home/xinanlin/xinanlin/pytorch/torch/testing/_internal/common_utils.py", line 1534, in wrapper
fn(*args, **kwargs)
File "/home/xinanlin/xinanlin/miniforge3/lib/python3.10/unittest/mock.py", line 1379, in patched
return func(*newargs, **newkeywargs)
File "/home/xinanlin/xinanlin/miniforge3/lib/python3.10/contextlib.py", line 79, in inner
return func(*args, **kwds)
File "/home/xinanlin/xinanlin/miniforge3/lib/python3.10/contextlib.py", line 79, in inner
return func(*args, **kwds)
File "/home/xinanlin/xinanlin/miniforge3/lib/python3.10/contextlib.py", line 79, in inner
return func(*args, **kwds)
File "/home/xinanlin/xinanlin/pytorch/test/inductor/test_torchinductor_opinfo.py", line 957, in inner
raise e
File "/home/xinanlin/xinanlin/pytorch/test/inductor/test_torchinductor_opinfo.py", line 949, in inner
fn(self, device, dtype, op)
File "/home/xinanlin/xinanlin/pytorch/test/inductor/test_torchinductor_opinfo.py", line 1200, in test_comprehensive
File "/home/xinanlin/xinanlin/pytorch/test/inductor/test_torchinductor_opinfo.py", line 1175, in test_comprehensive
self.check_model_gpu(
File "/home/xinanlin/xinanlin/miniforge3/lib/python3.10/contextlib.py", line 79, in inner
return func(*args, **kwds)
File "/home/xinanlin/xinanlin/pytorch/test/inductor/test_torchinductor.py", line 648, in check_model_gpu
check_model(
File "/home/xinanlin/xinanlin/pytorch/test/inductor/test_torchinductor.py", line 606, in check_model
self.assertEqual(
File "/home/xinanlin/xinanlin/pytorch/torch/testing/_internal/common_utils.py", line 4095, in assertEqual
raise error_metas.pop()[0].to_error( # type: ignore[index]
AssertionError: Tensor-likes are not close!
Mismatched elements: 96 / 96 (100.0%)
Greatest absolute difference: 0.12265288829803467 at index (1, 0, 0, 2) (up to 1.5e-05 allowed)
Greatest relative difference: 200.74110412597656 at index (0, 1, 2, 1) (up to 1.3e-05 allowed)
```
### Versions
PyTorch version: 2.8.0a0+git776aa68
Is debug build: False
CUDA used to build PyTorch: None
ROCM used to build PyTorch: N/A
OS: Ubuntu 22.04.5 LTS (x86_64)
GCC version: (Ubuntu 11.4.0-1ubuntu1~22.04) 11.4.0
Clang version: Could not collect
CMake version: version 3.31.2
Libc version: glibc-2.35
Python version: 3.10.15 | packaged by conda-forge | (main, Oct 16 2024, 01:24:24) [GCC 13.3.0] (64-bit runtime)
Python platform: Linux-5.15.47+prerelease24.6.13-x86_64-with-glibc2.35
Is CUDA available: False
CUDA runtime version: No CUDA
CUDA_MODULE_LOADING set to: N/A
GPU models and configuration: No CUDA
Nvidia driver version: No CUDA
cuDNN version: No CUDA
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
cc @gujinghui @EikanWang @fengyuan14 @guangyey
| true
|
3,014,314,418
|
Ignore unused structured arguments in member functions
|
kundaMwiza
|
open
|
[
"triaged",
"open source",
"topic: not user facing"
] | 4
|
CONTRIBUTOR
|
Inplace structured overloads today do not use `strides` in the `set_output_raw_strided` member function. For example, `addmv` does not inherit from `TensorIterator`, so there is no `super` call in `set_output_raw_strided`, which can trigger a warning / error from the compiler:
```c++
}
struct structured_addmv_out_cpu_inplace final : public at::native::structured_addmv_out_cpu {
structured_addmv_out_cpu_inplace(Tensor& self) : outputs_{std::ref(self)} {}
void set_output_strided(
int64_t output_idx, IntArrayRef sizes, IntArrayRef strides,
TensorOptions options, DimnameList names
) override {
const auto& out = outputs_[output_idx].get();
check_inplace(out, sizes, options);
auto maybe_proxy = maybe_create_proxy(out, sizes, strides, options);
if (C10_UNLIKELY(maybe_proxy.has_value())) {
proxy_outputs_[output_idx] = std::move(maybe_proxy).value();
}
if (!names.empty()) {
namedinference::propagate_names(outputs_[output_idx], names);
}
// super must happen after, so that downstream can use maybe_get_output
// to retrieve the output
}
void set_output_raw_strided(
int64_t output_idx, IntArrayRef sizes, IntArrayRef strides,
TensorOptions options, DimnameList names
) override {
const auto& out = outputs_[output_idx].get();
check_inplace(out, sizes, options);
if (!names.empty()) {
namedinference::propagate_names(outputs_[output_idx], names);
}
// super must happen after, so that downstream can use maybe_get_output
// to retrieve the output
}
const Tensor& maybe_get_output(int64_t output_idx) override {
return proxy_outputs_[output_idx].has_value() ? *proxy_outputs_[output_idx] : outputs_[output_idx].get();
}
std::array<std::reference_wrapper<Tensor>, 1> outputs_;
std::array<::std::optional<Tensor>, 1> proxy_outputs_;
};
```
This PR marks the `strides` argument as unused for inplace structured overloads that do not subclass from `TensorIterator`.
| true
|
3,014,237,730
|
[ONNX] Exporting with `dynamo=True` and `Dim.DYNAMIC` in `dynamic_shapes` passes for `scaled_dot_product_attention`, but doesn't do anything
|
cyanic-selkie
|
closed
|
[
"module: onnx",
"triaged"
] | 2
|
NONE
|
### 🐛 Describe the bug
Basically, I'm trying to export a custom transformer model that uses the `scaled_dot_product_attention` op to implement the attention layers, but I can't export it properly. Like the title says, setting `dynamic_shapes` appears to work (i.e., nothing crashes), but the exported model's input/output dimensions are specialized.
In the example below, simply returning the embeddings instead of the SDPA output works fine. It also works fine if I'm not using dynamo.
```python
# imports added so the repro is self-contained
import torch
import torch.nn as nn
from torch.export import Dim

class Model(nn.Module):
def __init__(self):
super().__init__()
self.embeddings = nn.Embedding(1000, 512)
def forward(
self,
input_ids: torch.Tensor,
) -> torch.Tensor:
embeddings = self.embeddings(input_ids)
return torch.nn.functional.scaled_dot_product_attention(
embeddings,
embeddings,
embeddings,
)
model = Model()
torch.onnx.export(
model,
(torch.randint(0, 1000, (16, 128), dtype=torch.long),),
"decoder.onnx",
dynamo=True,
dynamic_shapes=((Dim.DYNAMIC, Dim.DYNAMIC),),
)
```
### Versions
PyTorch version: 2.6.0
Is debug build: False
CUDA used to build PyTorch: None
ROCM used to build PyTorch: N/A
OS: macOS 15.3.2 (arm64)
GCC version: Could not collect
Clang version: 17.0.0 (clang-1700.0.13.3)
CMake version: version 4.0.0
Libc version: N/A
Python version: 3.12.0 (main, Oct 2 2023, 20:56:14) [Clang 16.0.3 ] (64-bit runtime)
Python platform: macOS-15.3.2-arm64-arm-64bit
Is CUDA available: False
CUDA runtime version: No CUDA
CUDA_MODULE_LOADING set to: N/A
GPU models and configuration: No CUDA
Nvidia driver version: No CUDA
cuDNN version: No CUDA
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Apple M1 Pro
Versions of relevant libraries:
[pip3] Could not collect
[conda] Could not collect
(I am using `uv`, the `torch` version is 2.6.0, `onnx` is 1.17.0 and `onnxruntime` is 1.20.1)
| true
|
3,014,233,728
|
Floating Point Exception on NVidia H20, torch=2.1.0+cu121 on torch.matmul(torch.zeros(64, 14688, dtype=torch.float16, device='cuda'), torch.zeros(14688, 1536, dtype=torch.float16, device='cuda'))
|
danpovey
|
open
|
[
"module: cuda",
"triaged"
] | 6
|
CONTRIBUTOR
|
### 🐛 Describe the bug
```
import torch
torch.matmul(torch.zeros(64, 14688, dtype=torch.float16, device='cuda'), torch.zeros(14688, 1536, dtype=torch.float16, device='cuda'))
```
dies with
```Floating point exception (core dumped)```
Hardware is NVidia H20. Torch version is '2.1.0+cu121'. The args must be that exact size. This happened to me
during backprop and was very hard to debug as I could not catch the SIGFPE without making pytorch hang;
I had to bisect it using print statements added to specially added debugging modules.
### Versions
sorry I don't think I want to put all this info in the issue but the NVidia driver version is 560.35.03.
cc @ptrblck @msaroufim @eqy @jerryzh168
| true
|
3,014,192,132
|
Loss parallel's override of log_softmax doesn't support negative dims
|
lw
|
open
|
[
"oncall: distributed",
"triaged"
] | 4
|
CONTRIBUTOR
|
### 🐛 Describe the bug
It's allowed to invoke `F.log_softmax` with a negative dimension, such as -1. However, when enabling the loss parallel context manager, the log-softmax op gets overridden with a custom impl which seems to require that the dim be positive.
```
File "/my/repo/model.py", line 228, in cross_entropy
return F.nll_loss(F.log_softmax(pred, -1, dtype=torch.float32), labels, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/my_env/torch/nn/functional.py", line 2250, in log_softmax
ret = input.log_softmax(dim, dtype=dtype)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/my_env/torch/distributed/tensor/_api.py", line 350, in __torch_dispatch__
return DTensor._op_dispatcher.dispatch(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/my_env/torch/distributed/tensor/_dispatch.py", line 154, in dispatch
return self._custom_op_handlers[op_call](op_call, args, kwargs) # type: ignore[operator]
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/my_env/torch/distributed/tensor/parallel/loss.py", line 163, in _log_softmax_handler
mesh_dim = _find_all_reduce_mesh_dim(spec.placements, dim)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/my_env/torch/distributed/tensor/parallel/loss.py", line 86, in _find_all_reduce_mesh_dim
raise ValueError(
torch._dynamo.exc.TorchRuntimeError: Dynamo failed to run FX node with fake tensors: call_function <function log_softmax at 0x793acec45a80>(*(DTensor(local_tensor=FakeTensor(..., device='cuda:1', size=(16384, 64128), dtype=torch.bfloat16), device_mesh=DeviceMesh('cuda', [0, 1], mesh_dim_names=('tp',)), placements=(Shard(dim=1),)), -1), **{'dtype': torch.float32}): got ValueError('loss_parallel() should be enabled only when the input tensor is sharded on dimension -1.')
```
https://github.com/pytorch/pytorch/blob/b32b002a6ea879e506453b09a4b206632e530abf/torch/distributed/tensor/parallel/loss.py#L473
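A minimal sketch of a possible workaround under that constraint, assuming the caller can normalize the dim to a positive index before calling `log_softmax` (hypothetical helper, not a fix to the handler itself):
```python
import torch
import torch.nn.functional as F

def cross_entropy(pred: torch.Tensor, labels: torch.Tensor, **kwargs):
    # loss_parallel's log_softmax handler appears to require a non-negative dim,
    # so convert -1 into an explicit positive index before dispatching.
    dim = pred.ndim - 1
    return F.nll_loss(F.log_softmax(pred, dim, dtype=torch.float32), labels, **kwargs)
```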
### Versions
N/A
cc @H-Huang @awgu @wanchaol @fegin @fduwjj @wz337 @wconstab @d4l3k
| true
|
3,014,163,626
|
Add CPython complex tests
|
guilhermeleobas
|
open
|
[
"open source",
"module: dynamo",
"ciflow/inductor"
] | 2
|
COLLABORATOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #152015
* #150797
* #150796
* #150795
* #150794
* #150793
* #150791
* #150790
* #150789
* #150788
Tests:
* test_complex.py
| true
|
3,014,010,882
|
[ROCm][Windows] Fix HIP Caffe2 Tests
|
m-gallus
|
closed
|
[
"module: rocm",
"triaged",
"open source",
"Merged",
"ciflow/trunk",
"topic: not user facing"
] | 8
|
CONTRIBUTOR
|
Solves the following problems with building Caffe2 HIP tests on Windows:
1. HIP tests now use `hip_add_executable` so they are built with a custom command invoking the HIP compiler, due to the lack of CMake support for HIP in 3.18 (the version currently used).
2. Fixes failures with "Command line too long", which resulted from `hip_add_executable` adding the same flags over and over to `HIP_HIPCC_FLAGS` with every test added.
3. Disables the `HasSameArgTypes` test on Windows, as `at::native::modern::detail` is nowhere to be found in the codebase (I think it must be a legacy thing). Perhaps the whole test should be removed/rewritten?
cc @jeffdaily @sunway513 @jithunnair-amd @pruthvistony @ROCmSupport @dllehr-amd @jataylo @hongxiayang @naromero77amd
| true
|
3,013,930,717
|
Return ConstantVariable(None) from WithExitFunctionVariable.exit to prevent NoneType crash inside autocast exception path
|
wdziurdz
|
closed
|
[
"triaged",
"open source",
"topic: not user facing",
"module: dynamo"
] | 19
|
CONTRIBUTOR
|
Fixes #152012
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @chauhang @amjames
| true
|
3,013,901,996
|
[Dynamo] Exception raised inside torch.autocast causes crash AttributeError: 'NoneType' object has no attribute 'is_python_constant'
|
wdziurdz
|
open
|
[
"triaged",
"oncall: pt2",
"module: dynamo"
] | 0
|
CONTRIBUTOR
|
### 🐛 Describe the bug
Since commit [511d0dd](https://github.com/pytorch/pytorch/commit/511d0dd5587394a824c917a8efd91e755704fda9)
(PT 2.7 vs 2.6) Dynamo crashes when an exception is raised inside an autocast context-manager, emitting:
```python
V0423 13:34:49.292000 1586012 torch/_dynamo/symbolic_convert.py:1239] [0/0] [__trace_bytecode] TRACE RAISE_VARARGS 1 [ExceptionVariable(<class 'NotImplementedError'>)]
V0423 13:34:49.292000 1586012 torch/_dynamo/symbolic_convert.py:3908] [0/0] Observed exception DURING INLING <code object forward at 0x7f9b195392c0, file "src/test.py", line 6> : raised exception ExceptionVariable(<class 'NotImplementedError'>)
V0423 13:34:49.293000 1586012 torch/_dynamo/symbolic_convert.py:1216] [0/0] [__trace_source] TRACE starts_line test.py:20 in f (Repro.test_autocast_with_exception.f)
V0423 13:34:49.293000 1586012 torch/_dynamo/symbolic_convert.py:1216] [0/0] [__trace_source] with torch.autocast(device_type="cpu", dtype=None):
V0423 13:34:49.293000 1586012 torch/_dynamo/symbolic_convert.py:1239] [0/0] [__trace_bytecode] TRACE WITH_EXCEPT_START None [WithExitFunctionVariable(), ConstantVariable(NoneType: None), ConstantVariable(NoneType: None), ConstantVariable(NoneType: None), UnknownVariable(), ExceptionVariable(<class 'NotImplementedError'>), BuiltinVariable(NotImplementedError)]
V0423 13:34:49.293000 1586012 torch/_dynamo/symbolic_convert.py:1239] [0/0] [__trace_bytecode] TRACE POP_JUMP_IF_TRUE 52 [WithExitFunctionVariable(), ConstantVariable(NoneType: None), ConstantVariable(NoneType: None), ConstantVariable(NoneType: None), UnknownVariable(), ExceptionVariable(<class 'NotImplementedError'>), BuiltinVariable(NotImplementedError), None]
I0423 13:34:49.294000 1586012 torch/_dynamo/convert_frame.py:1121] [0/0] run_gc_after_compile: running gc
E
======================================================================
ERROR: test_autocast_with_exception (__main__.Repro)
----------------------------------------------------------------------
Traceback (most recent call last):
File "src/test.py", line 26, in test_autocast_with_exception
out = f(inp)
File "lib/python3.10/site-packages/torch/_dynamo/eval_frame.py", line 655, in _fn
return fn(*args, **kwargs)
File "lib/python3.10/site-packages/torch/_dynamo/convert_frame.py", line 1432, in __call__
return self._torchdynamo_orig_callable(
File "lib/python3.10/site-packages/torch/_dynamo/convert_frame.py", line 1213, in __call__
result = self._inner_convert(
File "lib/python3.10/site-packages/torch/_dynamo/convert_frame.py", line 598, in __call__
return _compile(
File "lib/python3.10/site-packages/torch/_dynamo/convert_frame.py", line 1110, in _compile
raise InternalTorchDynamoError(
File "lib/python3.10/site-packages/torch/_dynamo/convert_frame.py", line 1059, in _compile
guarded_code = compile_inner(code, one_graph, hooks, transform)
File "lib/python3.10/site-packages/torch/_utils_internal.py", line 97, in wrapper_function
return function(*args, **kwargs)
File "lib/python3.10/site-packages/torch/_dynamo/convert_frame.py", line 761, in compile_inner
return _compile_inner(code, one_graph, hooks, transform)
File "lib/python3.10/site-packages/torch/_dynamo/convert_frame.py", line 797, in _compile_inner
out_code = transform_code_object(code, transform)
File "lib/python3.10/site-packages/torch/_dynamo/bytecode_transformation.py", line 1422, in transform_code_object
transformations(instructions, code_options)
File "lib/python3.10/site-packages/torch/_dynamo/convert_frame.py", line 257, in _fn
return fn(*args, **kwargs)
File "lib/python3.10/site-packages/torch/_dynamo/convert_frame.py", line 715, in transform
tracer.run()
File "lib/python3.10/site-packages/torch/_dynamo/symbolic_convert.py", line 3500, in run
super().run()
File "lib/python3.10/site-packages/torch/_dynamo/symbolic_convert.py", line 1337, in run
while self.step():
File "lib/python3.10/site-packages/torch/_dynamo/symbolic_convert.py", line 1246, in step
self.dispatch_table[inst.opcode](self, inst)
File "lib/python3.10/site-packages/torch/_dynamo/symbolic_convert.py", line 698, in inner
if value.is_python_constant():
torch._dynamo.exc.InternalTorchDynamoError: AttributeError: 'NoneType' object has no attribute 'is_python_constant'
from user code:
File "src/test.py", line 20, in f
with torch.autocast(device_type="cpu", dtype=None):
```
This did not happen on 2.6, so it looks like a regression, or at least the commit uncovered an unhandled corner case.
Minimal repro (passes on 2.6, fails on 2.7) below:
```python
import torch
import unittest
class Boom(torch.autograd.Function):
def forward(ctx, x):
raise NotImplementedError("boom")
@staticmethod
def backward(ctx, grad_out):
return grad_out
class Repro(unittest.TestCase):
def test_autocast_with_exception(self):
@torch.compile
def f(x: torch.Tensor):
try:
with torch.autocast(device_type="cpu", dtype=None):
Boom.apply(x)
except NotImplementedError:
return x + 1
inp = torch.ones(3)
out = f(inp)
self.assertTrue(torch.equal(out, inp + 1))
if __name__ == "__main__":
unittest.main()
```
Root cause
- AutocastModeVariable.exit() returns raw Python None rather than a VariableTracker
- WithExitFunctionVariable.call_function() forwards that return value unchanged
- Dynamo assumes every stack element is a VariableTracker, so it calls is_python_constant() on the raw None, leading to the AttributeError
Code below
```python
....
class WithExitFunctionVariable(VariableTracker):
....
def call_function(
self,
tx: "InstructionTranslator",
args: "list[VariableTracker]",
kwargs: "dict[str, VariableTracker]",
) -> "VariableTracker":
assert not kwargs
return self.ctx.exit(tx, *args)
class AutocastModeVariable(ContextWrappingVariable):
....
def exit(self, tx: "InstructionTranslator", *args):
self.state.cleanup_assert()
tx.output.create_node(
"call_function", torch.amp._exit_autocast, (self.state.proxy,), {}
)
# return None value
```
Fix: make every `ContextWrappingVariable.exit()` return a `ConstantVariable` wrapper, exactly as other context-manager variables already do.
```python
....
class WithExitFunctionVariable(VariableTracker):
....
def call_function(
self,
tx: "InstructionTranslator",
args: "list[VariableTracker]",
kwargs: "dict[str, VariableTracker]",
) -> "VariableTracker":
assert not kwargs
return self.ctx.exit(tx, *args)
class AutocastModeVariable(ContextWrappingVariable):
....
def exit(self, tx: "InstructionTranslator", *args):
self.state.cleanup_assert()
tx.output.create_node(
"call_function", torch.amp._exit_autocast, (self.state.proxy,), {}
)
# NEW: wrap the None return value
return variables.ConstantVariable.create(None)
```
### Versions
PyTorch version: 2.7.0
Is debug build: False
CUDA used to build PyTorch: None
ROCM used to build PyTorch: N/A
OS: Ubuntu 22.04 LTS (x86_64)
GCC version: (Ubuntu 11.4.0-1ubuntu1~22.04) 11.4.0
Clang version: 14.0.5
CMake version: version 3.31.4
Libc version: glibc-2.35
Python version: 3.10.12 (main, Feb 4 2025, 14:57:36) [GCC 11.4.0] (64-bit runtime)
Python platform: Linux-5.15.0-134-generic-x86_64-with-glibc2.35
Is CUDA available: False
CUDA runtime version: No CUDA
CUDA_MODULE_LOADING set to: N/A
GPU models and configuration: No CUDA
Nvidia driver version: No CUDA
cuDNN version: No CUDA
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 43 bits physical, 48 bits virtual
Byte Order: Little Endian
CPU(s): 12
On-line CPU(s) list: 0-11
Vendor ID: GenuineIntel
Model name: Intel(R) Xeon(R) Gold 6132 CPU @ 2.60GHz
CPU family: 6
Model: 85
Thread(s) per core: 1
Core(s) per socket: 6
Socket(s): 2
Stepping: 0
BogoMIPS: 5187.81
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ss ht syscall nx pdpe1gb rdtscp lm constant_tsc arch_perfmon nopl xtopology tsc_reliable nonstop_tsc cpuid tsc_known_freq pni pclmulqdq vmx ssse3 fma cx16 pcid sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand hypervisor lahf_lm abm 3dnowprefetch invpcid_single pti ssbd ibrs ibpb stibp tpr_shadow vnmi ept vpid ept_ad fsgsbase tsc_adjust bmi1 avx2 smep bmi2 invpcid avx512f avx512dq rdseed adx smap clflushopt clwb avx512cd avx512bw avx512vl xsaveopt xsavec xsaves arat pku ospke md_clear flush_l1d arch_capabilities
Virtualization: VT-x
Hypervisor vendor: VMware
Virtualization type: full
L1d cache: 384 KiB (12 instances)
L1i cache: 384 KiB (12 instances)
L2 cache: 12 MiB (12 instances)
L3 cache: 38.5 MiB (2 instances)
NUMA node(s): 1
NUMA node0 CPU(s): 0-11
Vulnerability Gather data sampling: Unknown: Dependent on hypervisor status
Vulnerability Itlb multihit: KVM: Mitigation: VMX disabled
Vulnerability L1tf: Mitigation; PTE Inversion; VMX flush not necessary, SMT disabled
Vulnerability Mds: Mitigation; Clear CPU buffers; SMT Host state unknown
Vulnerability Meltdown: Mitigation; PTI
Vulnerability Mmio stale data: Mitigation; Clear CPU buffers; SMT Host state unknown
Vulnerability Reg file data sampling: Not affected
Vulnerability Retbleed: Mitigation; IBRS
Vulnerability Spec rstack overflow: Not affected
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl and seccomp
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; IBRS; IBPB conditional; STIBP disabled; RSB filling; PBRSB-eIBRS Not affected; BHI SW loop, KVM SW loop
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Versions of relevant libraries:
[pip3] mypy-extensions==1.0.0
[pip3] numpy==1.26.4
[pip3] optree==0.14.0
[pip3] torch==2.7.0
[pip3] torch-debug==2.7.0
[pip3] torch_tb_profiler==0.4.0
[pip3] torchvision==0.21.0
[pip3] triton==3.1.0
[conda] Could not collect
cc @chauhang @penguinwu @voznesenskym @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @amjames
| true
|
3,013,800,270
|
[WIP] fix reinplacing bug
|
zou3519
|
open
|
[
"module: inductor",
"ciflow/inductor"
] | 2
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #152011
There are two problems:
1) canonicalize_view_scatter_ops adds some new nodes into the graph.
These new nodes cause the alias info on the graph to be wrong. To fix
this, we try to run FakeTensorUpdater on the graph again.
2) FakeTensorUpdater's alias information is wrong. If the node was not
previously seen, we need to recursively update users of the node,
even if the meta["val"] looks like it is set correctly. The example
is if we have `x = foo(...); y = x.view(...)`. If the user replaces
`foo` with a new `bar` node and sets bar.meta["val"] correctly, then
FakeTensorUpdater still needs to update y's meta["val"] to be a view
of the new bar node.
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
| true
|
3,013,799,021
|
[MPS] layernorm forward kernel
|
Isalia20
|
closed
|
[
"triaged",
"open source",
"Merged",
"topic: improvements",
"module: mps",
"release notes: mps",
"ciflow/mps"
] | 3
|
COLLABORATOR
|
Implements layernorm forward pass as a metal kernel instead of MPSGraph ops. Speed ups are indicated on the chart below:

Script for generating the timings; build torch with the old/new codebase and run this, changing the output file name indicated at the end of the script.
```python
import csv
import time
import numpy as np
import torch
import torch.nn.functional as F
matrix_sizes = [32, 64, 128, 256, 512, 1024, 2048, 4096, 8192]
batch_sizes = [1]
elementwise_affine = [False, True]
num_runs = 50
warmup_runs = 3
def create_input_tensor(n, batch_size):
torch.manual_seed(42)
return torch.randn(batch_size, n, dtype=torch.float32)
def run_layer_norm(A, normalized_shape, elementwise_affine):
    # when requested, actually exercise the affine path (the flag was previously
    # accepted but unused)
    if elementwise_affine:
        weight = torch.ones(normalized_shape, device=A.device)
        bias = torch.zeros(normalized_shape, device=A.device)
    else:
        weight = bias = None
    torch.mps.synchronize()
    start = time.perf_counter()
    out = F.layer_norm(A, normalized_shape, weight, bias)
    torch.mps.synchronize()
    end = time.perf_counter()
    return out, end - start
results = {"N": [], "elementwise_affine": [], "batch_size": [], "mean_time": [], "std_time": []}
for el_aff in elementwise_affine:
for n in matrix_sizes:
for batch_size in batch_sizes:
print(f"\nBenchmarking LayerNorm for input size N={n}, batch_size={batch_size}, elementwise_affine={el_aff}")
try:
A_cpu = create_input_tensor(n, batch_size)
A_mps = A_cpu.to("mps")
normalized_shape = (n,)
for _ in range(warmup_runs):
_, _ = run_layer_norm(A_mps, normalized_shape, el_aff)
times = []
for _ in range(num_runs):
_, t = run_layer_norm(A_mps, normalized_shape, el_aff)
times.append(t)
mean_time = np.mean(times)
std_time = np.std(times)
results["N"].append(n)
results["elementwise_affine"].append(el_aff)
results["batch_size"].append(batch_size)
results["mean_time"].append(mean_time)
results["std_time"].append(std_time)
print(f"Mean time: {mean_time:.4f}s ± {std_time:.4f}s")
except RuntimeError as e:
print(f"Error for N={n}, batch_size={batch_size}: {e}")
continue
with open("layernorm_benchmark_times_new.csv", "w", newline="") as f:
writer = csv.writer(f)
writer.writerow(["N", "elementwise_affine", "batch_size", "mean_time", "std_time"])
for i in range(len(results["N"])):
writer.writerow(
[
results["N"][i],
results["elementwise_affine"][i],
results["batch_size"][i],
results["mean_time"][i],
results["std_time"][i],
]
)
```
cc @kulinseth @albanD @malfet @DenisVieriu97 @jhavukainen
| true
|
3,013,670,568
|
THESE PACKAGES DO NOT MATCH THE HASHES FROM THE REQUIREMENTS FILE
|
wzgrx
|
closed
|
[] | 0
|
NONE
|
### 🐛 Describe the bug
pip3 install --pre torch torchvision torchaudio --index-url https://download.pytorch.org/whl/nightly/cu128
(venv) PS E:\Ai\lora-scripts> pip3 install --pre torch torchvision torchaudio --index-url https://download.pytorch.org/whl/nightly/cu128
Looking in indexes: https://download.pytorch.org/whl/nightly/cu128
Collecting torch
Using cached https://download.pytorch.org/whl/nightly/cu128/torch-2.8.0.dev20250422%2Bcu128-cp312-cp312-win_amd64.whl.metadata (28 kB)
Collecting torchvision
Using cached https://download.pytorch.org/whl/nightly/cu128/torchvision-0.22.0.dev20250422%2Bcu128-cp312-cp312-win_amd64.whl.metadata (6.3 kB)
ERROR: THESE PACKAGES DO NOT MATCH THE HASHES FROM THE REQUIREMENTS FILE. If you have updated the package versions, please update the hashes. Otherwise, examine the package contents carefully; someone may have tampered with them.
torchvision from https://download.pytorch.org/whl/nightly/cu128/torchvision-0.22.0.dev20250422%2Bcu128-cp312-cp312-win_amd64.whl:
Expected sha256 8ee0514d2cddf2b615ce0a85415d9c98f99e98cbd185c22d2a23e2d849dce06e
Got 0bf6ad5fc142196eb0559a44a47b9787e97a7777df9140f74474f262aa17fe9f
### Versions
(venv) PS E:\Ai\lora-scripts> pip3 install --pre torch torchvision torchaudio --index-url https://download.pytorch.org/whl/nightly/cu128
Looking in indexes: https://download.pytorch.org/whl/nightly/cu128
Collecting torch
Using cached https://download.pytorch.org/whl/nightly/cu128/torch-2.8.0.dev20250422%2Bcu128-cp312-cp312-win_amd64.whl.metadata (28 kB)
Collecting torchvision
Using cached https://download.pytorch.org/whl/nightly/cu128/torchvision-0.22.0.dev20250422%2Bcu128-cp312-cp312-win_amd64.whl.metadata (6.3 kB)
ERROR: THESE PACKAGES DO NOT MATCH THE HASHES FROM THE REQUIREMENTS FILE. If you have updated the package versions, please update the hashes. Otherwise, examine the package contents carefully; someone may have tampered with them.
torchvision from https://download.pytorch.org/whl/nightly/cu128/torchvision-0.22.0.dev20250422%2Bcu128-cp312-cp312-win_amd64.whl:
Expected sha256 8ee0514d2cddf2b615ce0a85415d9c98f99e98cbd185c22d2a23e2d849dce06e
Got 0bf6ad5fc142196eb0559a44a47b9787e97a7777df9140f74474f262aa17fe9f
| true
|
3,013,664,506
|
pinned_use_background_threads will cause a coredump
|
1274085042
|
open
|
[
"module: crash",
"triaged",
"module: CUDACachingAllocator"
] | 1
|
CONTRIBUTOR
|
### 🐛 Describe the bug
When I enable pinned_use_background_threads, the following script produces a coredump. Is this due to incorrect usage on my part, or is it a bug in PyTorch itself?
```python
# test_host_caching_allocator.py
import torch
src = torch.randn(1000000, pin_memory=True)
dst = src.to("cuda", non_blocking=True)
```
```
root@notebook-host-caching-allocator-1j09ij5-launcher-0:/workspace# export PYTORCH_CUDA_ALLOC_CONF=pinned_use_background_threads:True
root@notebook-host-caching-allocator-1j09ij5-launcher-0:/workspace# python test_host_caching_allocator.py
Segmentation fault (core dumped)
```
@banitag1 @zyan0 @eqy
### Versions
```
# python collect_env.py
Collecting environment information...
PyTorch version: 2.8.0a0+git62b5649
Is debug build: True
CUDA used to build PyTorch: 11.8
ROCM used to build PyTorch: N/A
OS: Ubuntu 20.04.6 LTS (x86_64)
GCC version: (Ubuntu 9.4.0-1ubuntu1~20.04.1) 9.4.0
Clang version: Could not collect
CMake version: version 3.26.4
Libc version: glibc-2.31
Python version: 3.10.13 (main, Sep 11 2023, 13:44:35) [GCC 11.2.0] (64-bit runtime)
Python platform: Linux-3.10.0-957.27.2.el7.x86_64-x86_64-with-glibc2.31
Is CUDA available: True
CUDA runtime version: 11.8.89
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration:
GPU 0: Tesla T4
GPU 1: Tesla T4
GPU 2: Tesla T4
GPU 3: Tesla T4
Nvidia driver version: 525.85.12
cuDNN version: Probably one of the following:
/usr/lib/x86_64-linux-gnu/libcudnn.so.8.9.0
/usr/lib/x86_64-linux-gnu/libcudnn_adv_infer.so.8.9.0
/usr/lib/x86_64-linux-gnu/libcudnn_adv_train.so.8.9.0
/usr/lib/x86_64-linux-gnu/libcudnn_cnn_infer.so.8.9.0
/usr/lib/x86_64-linux-gnu/libcudnn_cnn_train.so.8.9.0
/usr/lib/x86_64-linux-gnu/libcudnn_ops_infer.so.8.9.0
/usr/lib/x86_64-linux-gnu/libcudnn_ops_train.so.8.9.0
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Byte Order: Little Endian
Address sizes: 46 bits physical, 48 bits virtual
CPU(s): 80
On-line CPU(s) list: 0-79
Thread(s) per core: 2
Core(s) per socket: 20
Socket(s): 2
NUMA node(s): 2
Vendor ID: GenuineIntel
CPU family: 6
Model: 85
Model name: Intel(R) Xeon(R) Gold 5218R CPU @ 2.10GHz
Stepping: 7
CPU MHz: 2100.000
BogoMIPS: 4200.00
Virtualization: VT-x
L1d cache: 1.3 MiB
L1i cache: 1.3 MiB
L2 cache: 40 MiB
L3 cache: 55 MiB
NUMA node0 CPU(s): 0,2,4,6,8,10,12,14,16,18,20,22,24,26,28,30,32,34,36,38,40,42,44,46,48,50,52,54,56,58,60,62,64,66,68,70,72,74,76,78
NUMA node1 CPU(s): 1,3,5,7,9,11,13,15,17,19,21,23,25,27,29,31,33,35,37,39,41,43,45,47,49,51,53,55,57,59,61,63,65,67,69,71,73,75,77,79
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl and seccomp
Vulnerability Spectre v1: Mitigation; Load fences, __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Enhanced IBRS, IBPB
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc art arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc aperfmperf eagerfpu pni pclmulqdq dtes64 monitor ds_cpl vmx smx est tm2 ssse3 sdbg fma cx16 xtpr pdcm pcid dca sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm 3dnowprefetch epb cat_l3 cdp_l3 intel_ppin intel_pt ssbd mba ibrs ibpb stibp ibrs_enhanced tpr_shadow vnmi flexpriority ept vpid fsgsbase tsc_adjust bmi1 hle avx2 smep bmi2 erms invpcid rtm cqm mpx rdt_a avx512f avx512dq rdseed adx smap clflushopt clwb avx512cd avx512bw avx512vl xsaveopt xsavec xgetbv1 cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local dtherm ida arat pln pts pku ospke avx512_vnni md_clear spec_ctrl intel_stibp flush_l1d arch_capabilities
Versions of relevant libraries:
[pip3] flake8==6.1.0
[pip3] flake8-bugbear==23.3.23
[pip3] flake8-comprehensions==3.15.0
[pip3] flake8-executable==2.1.3
[pip3] flake8-logging-format==0.9.0
[pip3] flake8-pyi==23.3.1
[pip3] flake8-simplify==0.19.3
[pip3] mypy==1.13.0
[pip3] mypy-extensions==1.0.0
[pip3] numpy==1.26.4
[pip3] nvidia-cublas-cu11==11.11.3.6
[pip3] nvidia-cuda-cupti-cu11==11.8.87
[pip3] nvidia-cuda-nvrtc-cu11==11.8.89
[pip3] nvidia-cuda-runtime-cu11==11.8.89
[pip3] nvidia-cudnn-cu11==8.7.0.84
[pip3] nvidia-cufft-cu11==10.9.0.58
[pip3] nvidia-curand-cu11==10.3.0.86
[pip3] nvidia-cusolver-cu11==11.4.1.48
[pip3] nvidia-cusparse-cu11==11.7.5.86
[pip3] nvidia-nccl-cu11==2.20.5
[pip3] nvidia-nvtx-cu11==11.8.86
[pip3] optree==0.13.0
[pip3] pytorch-triton==3.0.0+dedb7bdf33
[pip3] torch==2.8.0a0+git62b5649
[pip3] torchaudio==2.1.0
[pip3] torchelastic==0.2.2
[pip3] torchvision==0.22.0a0+fab1188
[conda] blas 1.0 mkl
[conda] cuda-cudart 11.8.89 0 nvidia
[conda] cuda-cupti 11.8.87 0 nvidia
[conda] cuda-libraries 11.8.0 0 nvidia
[conda] cuda-nvrtc 11.8.89 0 nvidia
[conda] cuda-nvtx 11.8.86 0 nvidia
[conda] cuda-runtime 11.8.0 0 nvidia
[conda] ffmpeg 4.3 hf484d3e_0 pytorch
[conda] libcublas 11.11.3.6 0 nvidia
[conda] libcufft 10.9.0.58 0 nvidia
[conda] libcurand 10.3.3.141 0 nvidia
[conda] libcusolver 11.4.1.48 0 nvidia
[conda] libcusparse 11.7.5.86 0 nvidia
[conda] libjpeg-turbo 2.0.0 h9bf148f_0 pytorch
[conda] mkl 2023.1.0 h213fc3f_46343
[conda] mkl-include 2024.2.1 pypi_0 pypi
[conda] mkl-service 2.4.0 py310h5eee18b_1
[conda] mkl-static 2024.2.1 pypi_0 pypi
[conda] mkl_fft 1.3.8 py310h5eee18b_0
[conda] mkl_random 1.2.4 py310hdb19cb5_0
[conda] numpy 1.26.0 py310h5f9d8c6_0
[conda] numpy-base 1.26.0 py310hb5e798b_0
[conda] nvidia-cublas-cu11 11.11.3.6 pypi_0 pypi
[conda] nvidia-cuda-cupti-cu11 11.8.87 pypi_0 pypi
[conda] nvidia-cuda-nvrtc-cu11 11.8.89 pypi_0 pypi
[conda] nvidia-cuda-runtime-cu11 11.8.89 pypi_0 pypi
[conda] nvidia-cudnn-cu11 8.7.0.84 pypi_0 pypi
[conda] nvidia-cufft-cu11 10.9.0.58 pypi_0 pypi
[conda] nvidia-curand-cu11 10.3.0.86 pypi_0 pypi
[conda] nvidia-cusolver-cu11 11.4.1.48 pypi_0 pypi
[conda] nvidia-cusparse-cu11 11.7.5.86 pypi_0 pypi
[conda] nvidia-nccl-cu11 2.20.5 pypi_0 pypi
[conda] nvidia-nvtx-cu11 11.8.86 pypi_0 pypi
[conda] optree 0.13.0 pypi_0 pypi
[conda] pytorch-cuda 11.8 h7e8668a_5 pytorch
[conda] pytorch-mutex 1.0 cuda pytorch
[conda] pytorch-triton 3.0.0+dedb7bdf33 pypi_0 pypi
[conda] torch 2.8.0a0+git62b5649 pypi_0 pypi
[conda] torchaudio 2.1.0 py310_cu118 pytorch
[conda] torchelastic 0.2.2 pypi_0 pypi
[conda] torchvision 0.22.0a0+fab1188 pypi_0 pypi
```
| true
|
3,013,432,692
|
[Kineto] Upgrade the kineto commit to fb36cce
|
zejun-chen
|
open
|
[
"triaged",
"open source",
"topic: not user facing",
"ciflow/binaries_wheel",
"keep-going",
"ciflow/xpu",
"release notes: xpu"
] | 30
|
CONTRIBUTOR
|
XPU intends to upgrade the oneAPI version (https://github.com/pytorch/pytorch/issues/151097) to support torch Distributed. However, the PTI shipped with the new oneAPI introduces breaking changes: it changed the signatures of the following APIs.
- ptiViewEnableRuntimeApi
- ptiViewGetApiIdName
To avoid breakage from these upcoming non-backward-compatible PTI changes, we refined the XPU PTI integration in Kineto: we check the PTI version and invoke the corresponding PTI API accordingly. This lets the Kineto commit used by this PR work with the upcoming oneAPI 2025.1 despite the non-backward-compatible change.
| true
|
3,013,424,860
|
How can the RCE vulnerability in torch.load(weights_only=True) be fixed?
|
south-ocean
|
closed
|
[
"oncall: releng",
"security"
] | 1
|
NONE
|
A security vulnerability has been discovered in PyTorch where torch.load with weights_only=True can still lead to remote code execution (RCE). It is stated that this issue has been fixed in version 2.6 and above.
I'd like to ask whether this fix was applied automatically or through a specific commit. If it was fixed via a commit, could you point me to the exact one? Additionally, is it possible for us to backport this fix to PyTorch 2.5 or 2.4?
Looking forward to your response. Thank you!
| true
|
3,013,383,536
|
DISABLED test_builtin_score_mods_float32_score_mod6_cuda_float32 (__main__.TestFlexAttentionCUDA)
|
pytorch-bot[bot]
|
closed
|
[
"triaged",
"module: flaky-tests",
"skipped",
"oncall: pt2",
"module: inductor"
] | 1
|
NONE
|
Platforms: linux
This test was disabled because it is failing in CI. See [recent examples](https://hud.pytorch.org/flakytest?name=test_builtin_score_mods_float32_score_mod6_cuda_float32&suite=TestFlexAttentionCUDA&limit=100) and the most recent trunk [workflow logs](https://github.com/pytorch/pytorch/runs/40988192284).
Over the past 3 hours, it has been determined flaky in 4 workflow(s) with 8 failures and 4 successes.
**Debugging instructions (after clicking on the recent samples link):**
DO NOT ASSUME THINGS ARE OKAY IF THE CI IS GREEN. We now shield flaky tests from developers so CI will thus be green but it will be harder to parse the logs.
To find relevant log snippets:
1. Click on the workflow logs linked above
2. Click on the Test step of the job so that it is expanded. Otherwise, the grepping will not work.
3. Grep for `test_builtin_score_mods_float32_score_mod6_cuda_float32`
4. There should be several instances run (as flaky tests are rerun in CI) from which you can study the logs.
Test file path: `inductor/test_flex_attention.py`
cc @clee2000 @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
| true
|
3,013,380,955
|
DISABLED test_doc_mask_sparse_cuda (__main__.TestFlexAttentionCUDA)
|
pytorch-bot[bot]
|
closed
|
[
"triaged",
"module: flaky-tests",
"skipped",
"oncall: pt2",
"module: inductor"
] | 1
|
NONE
|
Platforms: linux
This test was disabled because it is failing in CI. See [recent examples](https://hud.pytorch.org/flakytest?name=test_doc_mask_sparse_cuda&suite=TestFlexAttentionCUDA&limit=100) and the most recent trunk [workflow logs](https://github.com/pytorch/pytorch/runs/40986297636).
Over the past 3 hours, it has been determined flaky in 4 workflow(s) with 8 failures and 4 successes.
**Debugging instructions (after clicking on the recent samples link):**
DO NOT ASSUME THINGS ARE OKAY IF THE CI IS GREEN. We now shield flaky tests from developers so CI will thus be green but it will be harder to parse the logs.
To find relevant log snippets:
1. Click on the workflow logs linked above
2. Click on the Test step of the job so that it is expanded. Otherwise, the grepping will not work.
3. Grep for `test_doc_mask_sparse_cuda`
4. There should be several instances run (as flaky tests are rerun in CI) from which you can study the logs.
Test file path: `inductor/test_flex_attention.py`
cc @clee2000 @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
| true
|
3,013,380,748
|
DISABLED test_builtin_score_mods_float32_score_mod1_cuda_float32 (__main__.TestFlexAttentionCUDA)
|
pytorch-bot[bot]
|
closed
|
[
"triaged",
"module: flaky-tests",
"skipped",
"oncall: pt2",
"module: inductor"
] | 1
|
NONE
|
Platforms: linux
This test was disabled because it is failing in CI. See [recent examples](https://hud.pytorch.org/flakytest?name=test_builtin_score_mods_float32_score_mod1_cuda_float32&suite=TestFlexAttentionCUDA&limit=100) and the most recent trunk [workflow logs](https://github.com/pytorch/pytorch/runs/40986323576).
Over the past 3 hours, it has been determined flaky in 4 workflow(s) with 8 failures and 4 successes.
**Debugging instructions (after clicking on the recent samples link):**
DO NOT ASSUME THINGS ARE OKAY IF THE CI IS GREEN. We now shield flaky tests from developers so CI will thus be green but it will be harder to parse the logs.
To find relevant log snippets:
1. Click on the workflow logs linked above
2. Click on the Test step of the job so that it is expanded. Otherwise, the grepping will not work.
3. Grep for `test_builtin_score_mods_float32_score_mod1_cuda_float32`
4. There should be several instances run (as flaky tests are rerun in CI) from which you can study the logs.
Test file path: `inductor/test_flex_attention.py`
cc @clee2000 @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
| true
|
3,013,380,746
|
DISABLED test_builtin_score_mods_float16_score_mod7_cuda_float16 (__main__.TestFlexAttentionCUDA)
|
pytorch-bot[bot]
|
closed
|
[
"triaged",
"module: flaky-tests",
"skipped",
"oncall: pt2",
"module: inductor"
] | 1
|
NONE
|
Platforms: linux
This test was disabled because it is failing in CI. See [recent examples](https://hud.pytorch.org/flakytest?name=test_builtin_score_mods_float16_score_mod7_cuda_float16&suite=TestFlexAttentionCUDA&limit=100) and the most recent trunk [workflow logs](https://github.com/pytorch/pytorch/runs/40986323576).
Over the past 3 hours, it has been determined flaky in 4 workflow(s) with 8 failures and 4 successes.
**Debugging instructions (after clicking on the recent samples link):**
DO NOT ASSUME THINGS ARE OKAY IF THE CI IS GREEN. We now shield flaky tests from developers so CI will thus be green but it will be harder to parse the logs.
To find relevant log snippets:
1. Click on the workflow logs linked above
2. Click on the Test step of the job so that it is expanded. Otherwise, the grepping will not work.
3. Grep for `test_builtin_score_mods_float16_score_mod7_cuda_float16`
4. There should be several instances run (as flaky tests are rerun in CI) from which you can study the logs.
Test file path: `inductor/test_flex_attention.py`
cc @clee2000 @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
| true
|
3,013,345,497
|
[Don't merge] Upgrade oneDNN to v3.8-rc for XPU build
|
bjarzemb
|
open
|
[
"triaged",
"module: mkldnn",
"open source",
"topic: not user facing",
"ciflow/xpu",
"module: xpu",
"ciflow/linux-aarch64"
] | 8
|
NONE
|
Fixes #ISSUE_NUMBER
cc @gujinghui @PenghuiCheng @XiaobingSuper @jianyuh @jgong5 @mingfeima @sanchitintel @ashokei @jingxu10 @min-jean-cho @yanbing-j @Guobing-Chen @Xia-Weiwen @snadampal @EikanWang @fengyuan14 @guangyey
| true
|
3,013,103,156
|
[Inductor][CPP] Optimize the epilogue for int8 GEMM Template
|
leslie-fang-intel
|
closed
|
[
"open source",
"Merged",
"ciflow/trunk",
"topic: not user facing",
"module: inductor",
"ciflow/inductor"
] | 3
|
COLLABORATOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #152000
**Summary**
For the int8 GEMM template, the micro GEMM computes in u8s8s32 and we apply the scale/zero-point compensation in the epilogue. In general, it is calculated as:
```
temp = micro_gemm_output * x_scale * w_scale
temp = temp - (x_scale * w_scale * x_zp) * sum(w, 0)
```
For the case when `x_scale, w_scale, x_zp` are constant, we can pre-compute the compensation term and skip that work at runtime, as sketched below.
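A minimal sketch of the idea in plain PyTorch (illustrative shapes and made-up scale values, not the generated C++ template):
```python
import torch

# Toy int8 GEMM in u8s8s32 with the scale/zero-point epilogue. The compensation
# term depends only on the constant scales, zero point and weights, so it can be
# computed once ahead of time instead of in every epilogue.
M, K, N = 197, 768, 768
x_q = torch.randint(0, 256, (M, K), dtype=torch.uint8)    # u8 activation
w_q = torch.randint(-128, 128, (K, N), dtype=torch.int8)  # s8 weight
x_scale, w_scale, x_zp = 0.02, 0.01, 128                   # assumed constants

acc = x_q.to(torch.int32) @ w_q.to(torch.int32)            # micro GEMM accumulates in s32

# Epilogue computed entirely at runtime (before this change):
ref = acc.float() * (x_scale * w_scale) - (x_scale * w_scale * x_zp) * w_q.float().sum(0)

# Compensation pre-computed once when x_scale/w_scale/x_zp are constant:
compensation = (x_scale * w_scale * x_zp) * w_q.float().sum(0)   # shape (N,)
out = acc.float() * (x_scale * w_scale) - compensation

torch.testing.assert_close(out, ref)
```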
**Performance**
Test with 4 cores of XEON-5 and shapes from VIT model
Before
```
GEMM(M=197,N=768,K=768) compile: 0.0939 ms (2.48 TOPS, 18.13 GB/s)
GEMM(M=197,N=3072,K=768) compile: 0.4275 ms (2.17 TOPS, 13.90 GB/s)
GEMM(M=197,N=768,K=3072) compile: 0.2677 ms (3.47 TOPS, 22.20 GB/s)
GEMM(M=1,N=1000,K=768) compile: 0.0148 ms (0.10 TOPS, 99.10 GB/s)
```
After
```
GEMM(M=197,N=768,K=768) compile: 0.0597 ms (3.90 TOPS, 28.53 GB/s)
GEMM(M=197,N=3072,K=768) compile: 0.2126 ms (4.37 TOPS, 27.95 GB/s)
GEMM(M=197,N=768,K=3072) compile: 0.2282 ms (4.07 TOPS, 26.04 GB/s)
GEMM(M=1,N=1000,K=768) compile: 0.0149 ms (0.10 TOPS, 98.71 GB/s)
```
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
| true
|
3,013,004,718
|
[Intel GPU] Enable safe softmax for XPU SDPA
|
LuFinch
|
open
|
[
"module: cpu",
"open source",
"release notes: xpu",
"module: xpu"
] | 7
|
CONTRIBUTOR
|
Fix https://github.com/intel/torch-xpu-ops/issues/1432#event-16899653975
When an entire row of the Q*K attention score is masked with `-inf`, `softmax(score)` outputs `NaN` for the whole row, which corrupts the model output.
With this new flag, it outputs `0` for the whole row instead, matching PyTorch's CPU/CUDA behavior (illustrated below).
Pending the oneDNN v3.8 upgrade.
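A small eager-mode sketch of the behavior in question (plain PyTorch for illustration, not the oneDNN/XPU kernel):
```python
import torch
import torch.nn.functional as F

# A fully masked row makes plain softmax emit NaN (exp(-inf) sums to 0, so 0/0);
# the "safe" variant used by CPU/CUDA SDPA emits zeros for that row instead.
score = torch.tensor([[1.0, 2.0, 3.0],
                      [float("-inf"), float("-inf"), float("-inf")]])

print(F.softmax(score, dim=-1))  # second row is all NaN

def safe_softmax(x, dim=-1):
    out = F.softmax(x, dim=dim)
    fully_masked = (x == float("-inf")).all(dim=dim, keepdim=True)
    return out.masked_fill(fully_masked, 0.0)

print(safe_softmax(score))       # second row is all zeros
```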
cc @jgong5 @mingfeima @XiaobingSuper @sanchitintel @ashokei @jingxu10 @jerryzh168 @gujinghui @EikanWang @fengyuan14 @guangyey
| true
|
3,012,950,309
|
[UniformValueConstantFolder] deduce value on CPU rather than on device
|
xwu-intel
|
open
|
[
"triaged",
"open source",
"topic: not user facing",
"module: inductor"
] | 4
|
NONE
|
Deduce the constant value for the joint graph on CPU rather than on the device, to reduce host-device syncs (see the sketch below).
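A rough illustration of the motivation (a toy example with assumed values, not the constant-folding pass itself; the device path only runs when CUDA is available):
```python
import torch

# Reading a uniform value back from a device tensor (e.g. via .item()) forces a
# device-to-host copy and a stream synchronization; deducing the same constant
# on CPU involves no device round trip at all.
if torch.cuda.is_available():
    t = torch.full((1 << 20,), 2.0, device="cuda")
    value_from_device = t[0].item()            # device -> host transfer, synchronizes
else:
    value_from_device = 2.0

value_on_cpu = torch.full((1,), 2.0).item()    # computed on the host, no sync needed
print(value_from_device, value_on_cpu)
```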
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
| true
|
3,012,907,746
|
torch.compile does not support indexing with a tensor
|
crispto
|
open
|
[
"triaged",
"oncall: pt2",
"module: dynamic shapes"
] | 2
|
NONE
|
### 🚀 The feature, motivation and pitch
I get an error when indexing with another tensor inside a torch.compile scope. I think this is a quite normal scenario; is there any way to achieve this?
```
from user code:
File "/home/richard/coding/POP/decoder/unp_marginal.py", line 641, in torch_compile_compatible_marginal_auto_regressive
dec_output = self.transformer_decoder(
File "/home/richard/coding/POP/decoder/factorized.py", line 706, in forward
token_input = dec_layer.step(
File "/home/richard/coding/POP/decoder/factorized.py", line 291, in step
ret = self.step_opt(
File "/home/richard/coding/POP/decoder/factorized.py", line 236, in step_opt
q_input_tight = q_input[agent_mask_step_1d]
Set TORCH_LOGS="+dynamo" and TORCHDYNAMO_VERBOSE=1 for more information
```
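For context, a minimal sketch of the pattern (hypothetical shapes, not the original model). Boolean-mask indexing has a data-dependent output size, so torch.compile typically graph-breaks on it and falls back to eager for that part instead of capturing it in a single graph:
```python
import torch

@torch.compile  # without fullgraph=True the data-dependent index falls back via a graph break
def select(q_input, mask):
    return q_input[mask]

q_input = torch.randn(8, 16)
agent_mask = torch.tensor([True, False, True, True, False, False, True, False])
print(select(q_input, agent_mask).shape)  # output size depends on the mask contents
```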
### Alternatives
_No response_
### Additional context
_No response_
cc @chauhang @penguinwu @ezyang @bobrenjc93
| true
|
3,012,892,387
|
docs: add torch.e and torch.pi to constants table (#134964)
|
KaaustaaubShankar
|
open
|
[
"module: docs",
"triaged",
"open source",
"topic: docs",
"topic: not user facing"
] | 4
|
NONE
|
Fixes #134964
Before
<img width="850" alt="Screenshot 2025-04-23 at 2 55 20 AM" src="https://github.com/user-attachments/assets/539f93c8-1e68-40b6-9eda-cd4d4646266c" />
After
<img width="864" alt="Screenshot 2025-04-23 at 2 55 34 AM" src="https://github.com/user-attachments/assets/16338c07-41b2-4de6-a576-bed8a326b421" />
cc @svekars @sekyondaMeta @AlannaBurke
| true
|
3,012,876,381
|
peak memory is lower for subsequent fresh runs compared to the first run of a torch.compiled model
|
neeldani
|
closed
|
[
"oncall: pt2"
] | 3
|
NONE
|
### 🐛 Describe the bug
I am using torch.compile for Llama-2-7B from Hugging Face. When I run the program for the first time, the peak memory usage is higher than in all subsequent fresh runs. (By fresh I mean I run the program again after it has exited.) In fact, the peak memory usage is stable from the second run onwards.
I see this behaviour every time. I wanted to check: (1) is this expected? I am not entirely sure whether torch.compile reuses its cache **across runs**. (2) Could this be happening at the inductor/dynamo level?
I have some data points:
1. For seq len = 2048, First run peak memory = 7.13GB, Second run peak memory = 7.06GB
2. For seq len = 8196, First peak memory = 16.84GB, Second run peak memory = 16.086GB
I am training `meta-llama/Llama-2-7b-hf` on 2 A100 GPUs for a few iterations. If the above behaviour is not expected, I can help with a repro.
### Error logs
I am looking at the logs generated by dynamo:
For the first run, I see in the logs that the guards are created:
```
TRACED GRAPH
[__graph_code] ===== __compiled_fn_29 =====
[rank1]:V0422 22:32:34.757000 659971 site-packages/torch/_dynamo/output_graph.py:1340] [11/0] GraphModule(torch.nn.Module):
[__graph_code] def forward(self, L_cos_: "bf16[1, 8192, 128][1048576, 128, 1]cuda:1", L_sin_: "bf16[1, 8192, 128][1048576, 128, 1]cuda:1", L_q_: "bf16[1, 32, 8192, 128][33554432, 128, 4096, 1]cuda:1", L_k_: "bf16[1, 32, 8192, 128][33554432, 128, 4096, 1]cuda:1"):
[__graph_code] l_cos_ = L_cos_
[__graph_code] l_sin_ = L_sin_
[__graph_code] l_q_ = L_q_
[__graph_code] l_k_ = L_k_
```
For the second fresh run, I see some difference in the logs (missing stride, device). Could this imply that the guard is validated instead of being created:
```
TRACED GRAPH
===== pre insert_deferred_runtime_asserts __compiled_fn_29 =====
[__graph_code] <eval_with_key>.10 class GraphModule(torch.nn.Module):
[__graph_code] def forward(self, L_cos_: "bf16[1, 8192, 128]", L_sin_: "bf16[1, 8192, 128]", L_q_: "bf16[1, 32, 8192, 128]", L_k_: "bf16[1, 32, 8192, 128]"):
[__graph_code] l_cos_ = L_cos_
[__graph_code] l_sin_ = L_sin_
[__graph_code] l_q_ = L_q_
[__graph_code] l_k_ = L_k_
```
### Versions
PyTorch version: 2.5.1+cu121
Is debug build: False
CUDA used to build PyTorch: 12.1
ROCM used to build PyTorch: N/A
OS: Red Hat Enterprise Linux release 8.8 (Ootpa) (x86_64)
GCC version: (Spack GCC) 11.4.0
Clang version: Could not collect
CMake version: version 3.20.2
Libc version: glibc-2.28
Python version: 3.10.12 (main, Jul 5 2023, 18:54:27) [GCC 11.2.0] (64-bit runtime)
Python platform: Linux-4.18.0-477.86.1.el8_8.x86_64-x86_64-with-glibc2.28
Is CUDA available: True
CUDA runtime version: 12.3.52
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration:
GPU 0: NVIDIA A40
GPU 1: NVIDIA A40
Nvidia driver version: 550.144.03
cuDNN version: Could not collect
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Byte Order: Little Endian
CPU(s): 64
On-line CPU(s) list: 0-63
Thread(s) per core: 1
Core(s) per socket: 64
Socket(s): 1
NUMA node(s): 4
Vendor ID: AuthenticAMD
CPU family: 25
Model: 1
Model name: AMD EPYC 7763 64-Core Processor
Stepping: 1
CPU MHz: 2445.490
BogoMIPS: 4890.98
Virtualization: AMD-V
L1d cache: 32K
L1i cache: 32K
L2 cache: 512K
L3 cache: 32768K
NUMA node0 CPU(s): 0-15
NUMA node1 CPU(s): 16-31
NUMA node2 CPU(s): 32-47
NUMA node3 CPU(s): 48-63
cc @chauhang @penguinwu
| true
|
3,012,796,562
|
tape
|
bobrenjc93
|
closed
|
[
"topic: not user facing"
] | 1
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #151994
| true
|
3,012,795,091
|
[SymmMem] Use cub's BlockScan instead of in-house impl for offset calculation
|
kwen2501
|
closed
|
[
"oncall: distributed",
"Merged",
"ciflow/trunk",
"release notes: distributed (c10d)"
] | 8
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #151993
* #151819
* #151498
* #151261
cc @H-Huang @awgu @wanchaol @fegin @fduwjj @wz337 @wconstab @d4l3k
| true
|
3,012,776,212
|
[XPU][Qwen][Arc770] Bad Performance for Qwen2.5-0.5B on Arc770 with OS Ubuntu 24.10
|
ZhaoqiongZ
|
closed
|
[
"triaged",
"module: xpu"
] | 2
|
CONTRIBUTOR
|
### 🐛 Describe the bug
model: Qwen2.5-0.5B
device: Arc770
OS: ubuntu24.10
script to reproduce
```
from transformers import AutoModelForCausalLM, AutoTokenizer
from torch.profiler import profile, ProfilerActivity  # needed for the profile block below
model_name = "Qwen/Qwen2.5-0.5B"
model = AutoModelForCausalLM.from_pretrained(
model_name,
torch_dtype="auto",
device_map="xpu"
)
tokenizer = AutoTokenizer.from_pretrained(model_name)
prompt = "Give me a short introduction to large language model."
messages = [
{"role": "system", "content": "You are Qwen, created by Alibaba Cloud. You are a helpful assistant."},
{"role": "user", "content": prompt},
]
text = tokenizer.apply_chat_template(
messages,
tokenize=False,
add_generation_prompt=True,
)
model_inputs = tokenizer([text], return_tensors="pt").to("xpu")
with profile(activities=[ProfilerActivity.CPU,ProfilerActivity.XPU]) as prof:
generated_ids = model.generate(
**model_inputs,
max_new_tokens=512,
)
generated_ids = [
output_ids[len(input_ids):] for input_ids, output_ids in zip(model_inputs.input_ids, generated_ids)
]
response = tokenizer.batch_decode(generated_ids, skip_special_tokens=True)[0]
print(prof.key_averages().table(sort_by="cpu_time_total", row_limit=10))
```
Performance is bad in both eager and `torch.compile` mode.
It takes ~300s for the above input prompt with max_new_tokens=512, of which more than 270s is spent on the CPU.
The performance bottleneck is on the CPU: the function `scale_dot_product_fused_attention_overrideable` takes most of the time.
### Versions
```
Collecting environment information...
PyTorch version: 2.7.0+xpu
Is debug build: False
CUDA used to build PyTorch: None
ROCM used to build PyTorch: N/A
OS: Ubuntu 22.04.5 LTS (x86_64)
GCC version: (Ubuntu 11.4.0-1ubuntu1~22.04) 11.4.0
Clang version: 14.0.0-1ubuntu1.1
CMake version: version 3.26.0
Libc version: glibc-2.35
Python version: 3.10.16 | packaged by conda-forge | (main, Apr 8 2025, 20:53:32) [GCC 13.3.0] (64-bit runtime)
Python platform: Linux-6.8.0-52-generic-x86_64-with-glibc2.35
Is CUDA available: False
CUDA runtime version: No CUDA
CUDA_MODULE_LOADING set to: N/A
GPU models and configuration: No CUDA
Nvidia driver version: No CUDA
cuDNN version: No CUDA
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 46 bits physical, 48 bits virtual
Byte Order: Little Endian
CPU(s): 24
On-line CPU(s) list: 0-23
Vendor ID: GenuineIntel
Model name: 13th Gen Intel(R) Core(TM) i7-13700K
CPU family: 6
Model: 183
Thread(s) per core: 2
Core(s) per socket: 16
Socket(s): 1
Stepping: 1
CPU max MHz: 5400.0000
CPU min MHz: 800.0000
BogoMIPS: 6835.20
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc art arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc cpuid aperfmperf tsc_known_freq pni pclmulqdq dtes64 monitor ds_cpl vmx smx est tm2 ssse3 sdbg fma cx16 xtpr pdcm pcid sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm 3dnowprefetch cpuid_fault ssbd ibrs ibpb stibp ibrs_enhanced tpr_shadow flexpriority ept vpid ept_ad fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid rdseed adx smap clflushopt clwb intel_pt sha_ni xsaveopt xsavec xgetbv1 xsaves split_lock_detect user_shstk avx_vnni dtherm ida arat pln pts hwp hwp_notify hwp_act_window hwp_epp hwp_pkg_req hfi vnmi umip pku ospke waitpkg gfni vaes vpclmulqdq tme rdpid movdiri movdir64b fsrm md_clear serialize pconfig arch_lbr ibt flush_l1d arch_capabilities
Virtualization: VT-x
L1d cache: 640 KiB (16 instances)
L1i cache: 768 KiB (16 instances)
L2 cache: 24 MiB (10 instances)
L3 cache: 30 MiB (1 instance)
NUMA node(s): 1
NUMA node0 CPU(s): 0-23
Vulnerability Gather data sampling: Not affected
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Not affected
Vulnerability Reg file data sampling: Mitigation; Clear Register File
Vulnerability Retbleed: Not affected
Vulnerability Spec rstack overflow: Not affected
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Enhanced / Automatic IBRS; IBPB conditional; RSB filling; PBRSB-eIBRS SW sequence; BHI BHI_DIS_S
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Versions of relevant libraries:
[pip3] numpy==2.0.2
[pip3] nvidia-cublas-cu12==12.4.5.8
[pip3] nvidia-cuda-cupti-cu12==12.4.127
[pip3] nvidia-cuda-nvrtc-cu12==12.4.127
[pip3] nvidia-cuda-runtime-cu12==12.4.127
[pip3] nvidia-cudnn-cu12==9.1.0.70
[pip3] nvidia-cufft-cu12==11.2.1.3
[pip3] nvidia-curand-cu12==10.3.5.147
[pip3] nvidia-cusolver-cu12==11.6.1.9
[pip3] nvidia-cusparse-cu12==12.3.1.170
[pip3] nvidia-cusparselt-cu12==0.6.2
[pip3] nvidia-nccl-cu12==2.21.5
[pip3] nvidia-nvjitlink-cu12==12.4.127
[pip3] nvidia-nvtx-cu12==12.4.127
[pip3] pytorch-triton-xpu==3.3.0
[pip3] torch==2.7.0+xpu
[pip3] torch-stoi==0.2.3
[pip3] torchaudio==2.7.0+xpu
[pip3] torchvision==0.22.0+xpu
[pip3] triton==3.1.0
[conda] numpy 2.0.2 pypi_0 pypi
[conda] nvidia-cublas-cu12 12.4.5.8 pypi_0 pypi
[conda] nvidia-cuda-cupti-cu12 12.4.127 pypi_0 pypi
[conda] nvidia-cuda-nvrtc-cu12 12.4.127 pypi_0 pypi
[conda] nvidia-cuda-runtime-cu12 12.4.127 pypi_0 pypi
[conda] nvidia-cudnn-cu12 9.1.0.70 pypi_0 pypi
[conda] nvidia-cufft-cu12 11.2.1.3 pypi_0 pypi
[conda] nvidia-curand-cu12 10.3.5.147 pypi_0 pypi
[conda] nvidia-cusolver-cu12 11.6.1.9 pypi_0 pypi
[conda] nvidia-cusparse-cu12 12.3.1.170 pypi_0 pypi
[conda] nvidia-cusparselt-cu12 0.6.2 pypi_0 pypi
[conda] nvidia-nccl-cu12 2.21.5 pypi_0 pypi
[conda] nvidia-nvjitlink-cu12 12.4.127 pypi_0 pypi
[conda] nvidia-nvtx-cu12 12.4.127 pypi_0 pypi
[conda] pytorch-triton-xpu 3.3.0 pypi_0 pypi
[conda] torch 2.7.0+xpu pypi_0 pypi
[conda] torch-stoi 0.2.3 pypi_0 pypi
[conda] torchaudio 2.7.0+xpu pypi_0 pypi
[conda] torchvision 0.22.0+xpu pypi_0 pypi
[conda] triton 3.1.0 pypi_0 pypi
```
cc @gujinghui @EikanWang @fengyuan14 @guangyey
| true
|
3,012,694,498
|
Fix unnecessary formatting change
|
dharakk
|
closed
|
[
"oncall: distributed",
"topic: not user facing",
"ciflow/inductor"
] | 4
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #151991
* #151990
Summary:
Test Plan:
Reviewers:
Subscribers:
Tasks:
Tags:
cc @H-Huang @awgu @wanchaol @fegin @fduwjj @wz337 @wconstab @d4l3k
| true
|
3,012,665,158
|
Implement util function compute_global_tensor_shape for 1D device mesh
|
dharakk
|
closed
|
[
"oncall: distributed",
"ciflow/trunk",
"topic: not user facing",
"ciflow/inductor"
] | 6
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #152166
* __->__ #151990
### Summary
The compute_global_tensor_shape util function takes in a local tensor shape, a device mesh,
and placements. We all-gather the shapes from the shards and construct the global shape
according to the placement type, roughly as sketched below.
Note: currently only implemented for the placement types Shard and Replicate; StridedShared is a TODO.
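A rough standalone sketch of the idea (assumed helper name and simplified placement handling, not the PR's implementation; runnable with a single-rank gloo group):
```python
import os
import torch
import torch.distributed as dist

def compute_global_shape_sketch(local_shape, is_shard, shard_dim=0):
    # All-gather each rank's local shape, then combine according to the placement.
    world = dist.get_world_size()
    gathered = [None] * world
    dist.all_gather_object(gathered, tuple(local_shape))
    if not is_shard:                      # Replicate: every rank holds the full tensor
        return torch.Size(gathered[0])
    global_shape = list(gathered[0])      # Shard: sizes concatenate along the sharded dim
    global_shape[shard_dim] = sum(s[shard_dim] for s in gathered)
    return torch.Size(global_shape)

if __name__ == "__main__":
    os.environ.setdefault("MASTER_ADDR", "127.0.0.1")
    os.environ.setdefault("MASTER_PORT", "29501")
    dist.init_process_group("gloo", rank=0, world_size=1)
    print(compute_global_shape_sketch((4, 8), is_shard=True))  # torch.Size([4, 8]) with one rank
    dist.destroy_process_group()
```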
### Test
`pytest test/distributed/tensor/test_utils.py`
cc @H-Huang @awgu @wanchaol @fegin @fduwjj @wz337 @wconstab @d4l3k
| true
|
3,012,641,274
|
[inductor] Remove usage of autotune_fallback_to_aten inside inductor code
|
henrylhtsang
|
closed
|
[
"fb-exported",
"topic: not user facing",
"module: inductor",
"ciflow/inductor"
] | 4
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #151989
* #151988
Differential Revision: [D73478753](https://our.internmc.facebook.com/intern/diff/D73478753/)
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
| true
|
3,012,641,210
|
[inductor] Remove usage of autotune_fallback_to_aten outside inductor code
|
henrylhtsang
|
closed
|
[
"fb-exported",
"topic: not user facing",
"module: inductor",
"ciflow/inductor"
] | 4
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #151988
Differential Revision: [D73477120](https://our.internmc.facebook.com/intern/diff/D73477120/)
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
| true
|
3,012,625,934
|
Import Error (Build from source): Aborted (core dumped)
|
SonicZun
|
closed
|
[
"needs reproduction",
"module: build"
] | 2
|
NONE
|
Hi!
source: branch v2.6.0
env: centos7 glibc2.17 gcc9.3.1
gpu: A100
I followed the instructions to build from source. `import torch` triggers the following exception:
```
terminate called after throwing an instance of 'std::runtime_error'
what(): Internal error while parsing type signature (2)
Aborted (core dumped)
```
cc @malfet @seemethere
| true
|
3,012,624,410
|
Add torchcheck for replication_pad3d_backward
|
cz2h
|
open
|
[
"triaged",
"open source",
"topic: not user facing"
] | 4
|
CONTRIBUTOR
|
Fixes #142833
Adds a check on the channel dimension, with the same logic as the CUDA implementation: https://github.com/pytorch/pytorch/blob/78bbb468c66fe56e389bf73bf626302b8e2b4cf4/aten/src/ATen/native/cuda/ReplicationPadding.cu#L347
cc @mikaylagawarecki
| true
|
3,012,622,240
|
[whisper][Arc770][Win]XPU performance is worse than CPU
|
yinghu5
|
open
|
[
"module: performance",
"triaged",
"module: xpu"
] | 1
|
NONE
|
### 🐛 Describe the bug
Trying [openai/whisper-tiny](https://huggingface.co/openai/whisper-tiny/tree/main)
on a desktop machine with an Arc 770 running Windows 11,
I find that XPU performance is worse than CPU, which is not expected.
reproduce step:
1. install windows 11 on the target machine
2. install XPU driver: https://www.intel.com/content/www/us/en/download/785597/intel-arc-iris-xe-graphics-windows.html => the version is 32.0.101.6734 WHQL Certified 4/8/2025
after installing and restarting, check that the XPU shows up in Task Manager
3. install Python from https://www.python.org/downloads/release/python-31210/, using Python 3.12.10.
4. install pytorch environment:
open CMD, run C:\Users\gta\AppData\Local\Programs\Python.exe
python -m venv venv_py27_xpu
venv_py27_xpu\Scripts\activate
(venv_py27_xpu) C:\Users\gta>pip3 install torch==2.7.0 torchvision torchaudio --index-url https://download.pytorch.org/whl/test/xpu
python -c "import torch; print(torch.xpu.is_available())"
5. install whisper dependency
pip install transformers
pip install datasets
pip install librosa
6. python test_whisper
when run, XPU performance is worse than CPU
```
import torch
from transformers import WhisperProcessor, WhisperForConditionalGeneration
from datasets import load_dataset
## comment the two line for CPU run
#device = "cpu"
device = torch.device("xpu" if torch.xpu.is_available() else "cpu")
print(f"Using device: {device}")
import time
start_time = time.time()
print(f"Starting voice conversion at {time.strftime('%Y-%m-%d %H:%M:%S')}")
from torch.profiler import profile, ProfilerActivity
# load model and processor
print("\nLoading Whisper Model...")
whisper_start_time = time.time()
processor = WhisperProcessor.from_pretrained("openai/whisper-tiny")
model = WhisperForConditionalGeneration.from_pretrained("openai/whisper-tiny").to(device)
forced_decoder_ids = processor.get_decoder_prompt_ids(language="en", task="transcribe")
whisper_end_time = time.time()
print(f"Loading Whisper Model time taken: {whisper_end_time - whisper_start_time:.2f} seconds")
# load dummy dataset and read audio files
ds = load_dataset("hf-internal-testing/librispeech_asr_dummy", "clean", split="validation")
sample = ds[2]["audio"]
input_features = processor(sample["array"], sampling_rate=sample["sampling_rate"], return_tensors="pt",
return_attention_mask=True, # Critical for reliable results
padding=True # Required if batching multiple audios
).input_features.to(device)
# generate token ids
print("\n Runing Whisper Model...")
whisper_start_time = time.time()
#with profile(activities=[ProfilerActivity.CPU,
# ProfilerActivity.XPU]) as prof:
predicted_ids = model.generate(input_features,forced_decoder_ids=forced_decoder_ids)
#print(prof.key_averages().table(sort_by="xpu_time_total"))
# decode token ids to text
#transcription = processor.batch_decode(predicted_ids, skip_special_tokens=False)
transcription = processor.batch_decode(predicted_ids, skip_special_tokens=True)
whisper_end_time = time.time()
print(f"Runing Whisper Model time taken: {whisper_end_time - whisper_start_time:.2f} seconds")
print(transcription)
end_time = time.time()
print(f"Voice conversion completed at {time.strftime('%Y-%m-%d %H:%M:%S')}")
print(f"Total time taken: {end_time - start_time:.2f} seconds")
```
"openai/whisper-tiny | CPU (s) | XPU(s) | Torch.compile | 12s mp3
-- | -- | -- | -- | --
Load | 1.57 | 2.34 | 2.67 | hf-internal-testing/librispeech_asr_dummy · Datasets at Hugging Face [2]
Run Generate | 0.57 | 3.58 | 3.58 |
Total | 7.33 | 10.63 | 11.12 |
### Versions
pip3 install torch==2.7.0 torchvision torchaudio --index-url https://download.pytorch.org/whl/test/xpu
cc @msaroufim @jerryzh168 @gujinghui @EikanWang @fengyuan14 @guangyey
| true
|
3,012,607,105
|
DISABLED test_builtin_score_mods_different_block_size_float32_score_mod4_BLOCK_SIZE_256_cuda_float32 (__main__.TestFlexAttentionCUDA)
|
pytorch-bot[bot]
|
closed
|
[
"triaged",
"module: flaky-tests",
"skipped",
"oncall: pt2",
"module: inductor"
] | 2
|
NONE
|
Platforms: linux
This test was disabled because it is failing in CI. See [recent examples](https://hud.pytorch.org/flakytest?name=test_builtin_score_mods_different_block_size_float32_score_mod4_BLOCK_SIZE_256_cuda_float32&suite=TestFlexAttentionCUDA&limit=100) and the most recent trunk [workflow logs](https://github.com/pytorch/pytorch/runs/40976207492).
Over the past 3 hours, it has been determined flaky in 12 workflow(s) with 24 failures and 12 successes.
**Debugging instructions (after clicking on the recent samples link):**
DO NOT ASSUME THINGS ARE OKAY IF THE CI IS GREEN. We now shield flaky tests from developers so CI will thus be green but it will be harder to parse the logs.
To find relevant log snippets:
1. Click on the workflow logs linked above
2. Click on the Test step of the job so that it is expanded. Otherwise, the grepping will not work.
3. Grep for `test_builtin_score_mods_different_block_size_float32_score_mod4_BLOCK_SIZE_256_cuda_float32`
4. There should be several instances run (as flaky tests are rerun in CI) from which you can study the logs.
Test file path: `inductor/test_flex_attention.py`
cc @clee2000 @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
| true
|
3,012,607,063
|
DISABLED test_builtin_score_mods_float16_score_mod4_cuda_float16 (__main__.TestFlexAttentionCUDA)
|
pytorch-bot[bot]
|
closed
|
[
"triaged",
"module: flaky-tests",
"skipped",
"oncall: pt2",
"module: inductor"
] | 1
|
NONE
|
Platforms: linux
This test was disabled because it is failing in CI. See [recent examples](https://hud.pytorch.org/flakytest?name=test_builtin_score_mods_float16_score_mod4_cuda_float16&suite=TestFlexAttentionCUDA&limit=100) and the most recent trunk [workflow logs](https://github.com/pytorch/pytorch/runs/40974992610).
Over the past 3 hours, it has been determined flaky in 12 workflow(s) with 24 failures and 12 successes.
**Debugging instructions (after clicking on the recent samples link):**
DO NOT ASSUME THINGS ARE OKAY IF THE CI IS GREEN. We now shield flaky tests from developers so CI will thus be green but it will be harder to parse the logs.
To find relevant log snippets:
1. Click on the workflow logs linked above
2. Click on the Test step of the job so that it is expanded. Otherwise, the grepping will not work.
3. Grep for `test_builtin_score_mods_float16_score_mod4_cuda_float16`
4. There should be several instances run (as flaky tests are rerun in CI) from which you can study the logs.
<details><summary>Sample error message</summary>
```
Traceback (most recent call last):
File "/var/lib/jenkins/workspace/test/inductor/test_flex_attention.py", line 1127, in test_builtin_score_mods
self.run_test(score_mod, dtype, device=device)
File "/var/lib/jenkins/workspace/test/inductor/test_flex_attention.py", line 509, in run_test
golden_out.backward(backward_grad.to(torch.float64))
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_tensor.py", line 648, in backward
torch.autograd.backward(
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/autograd/__init__.py", line 354, in backward
_engine_run_backward(
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/autograd/graph.py", line 824, in _engine_run_backward
return Variable._execution_engine.run_backward( # Calls into the C++ engine to run the backward pass
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/autograd/function.py", line 307, in apply
return user_fn(self, *args)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_higher_order_ops/flex_attention.py", line 679, in backward
) = flex_attention_backward(
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_higher_order_ops/flex_attention.py", line 132, in __call__
return super().__call__(
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_ops.py", line 501, in __call__
return wrapper()
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_ops.py", line 497, in wrapper
return self.dispatch(
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_ops.py", line 357, in dispatch
return kernel(*args, **kwargs)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_ops.py", line 320, in maybe_run_autograd
return self(*args, **kwargs)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_higher_order_ops/flex_attention.py", line 132, in __call__
return super().__call__(
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_ops.py", line 501, in __call__
return wrapper()
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_ops.py", line 497, in wrapper
return self.dispatch(
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_ops.py", line 357, in dispatch
return kernel(*args, **kwargs)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_higher_order_ops/flex_attention.py", line 847, in sdpa_dense_backward
grad_softmax_scores - sum_scores + grad_logsumexp.unsqueeze(-1)
torch.OutOfMemoryError: CUDA out of memory. Tried to allocate 1024.00 MiB. GPU 0 has a total capacity of 22.05 GiB of which 420.12 MiB is free. Including non-PyTorch memory, this process has 21.63 GiB memory in use. Of the allocated memory 5.73 GiB is allocated by PyTorch, and 15.64 GiB is reserved by PyTorch but unallocated. If reserved but unallocated memory is large try setting PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True to avoid fragmentation. See documentation for Memory Management (https://pytorch.org/docs/stable/notes/cuda.html#environment-variables)
To execute this test, run the following from the base repo dir:
python test/inductor/test_flex_attention.py TestFlexAttentionCUDA.test_builtin_score_mods_float16_score_mod4_cuda_float16
This message can be suppressed by setting PYTORCH_PRINT_REPRO_ON_FAILURE=0
```
</details>
Test file path: `inductor/test_flex_attention.py`
cc @clee2000 @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
| true
|
3,012,607,035
|
DISABLED test_dependent_causal_bidirectional_float16_cuda_float16 (__main__.TestFlexAttentionCUDA)
|
pytorch-bot[bot]
|
closed
|
[
"triaged",
"module: flaky-tests",
"skipped",
"oncall: pt2",
"module: inductor"
] | 1
|
NONE
|
Platforms: linux
This test was disabled because it is failing in CI. See [recent examples](https://hud.pytorch.org/flakytest?name=test_dependent_causal_bidirectional_float16_cuda_float16&suite=TestFlexAttentionCUDA&limit=100) and the most recent trunk [workflow logs](https://github.com/pytorch/pytorch/runs/40977426043).
Over the past 3 hours, it has been determined flaky in 12 workflow(s) with 24 failures and 12 successes.
**Debugging instructions (after clicking on the recent samples link):**
DO NOT ASSUME THINGS ARE OKAY IF THE CI IS GREEN. We now shield flaky tests from developers so CI will thus be green but it will be harder to parse the logs.
To find relevant log snippets:
1. Click on the workflow logs linked above
2. Click on the Test step of the job so that it is expanded. Otherwise, the grepping will not work.
3. Grep for `test_dependent_causal_bidirectional_float16_cuda_float16`
4. There should be several instances run (as flaky tests are rerun in CI) from which you can study the logs.
<details><summary>Sample error message</summary>
```
Traceback (most recent call last):
File "/var/lib/jenkins/workspace/test/inductor/test_flex_attention.py", line 1643, in test_dependent_causal_bidirectional
self.run_test(bias_mod, dtype, device=device)
File "/var/lib/jenkins/workspace/test/inductor/test_flex_attention.py", line 509, in run_test
golden_out.backward(backward_grad.to(torch.float64))
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_tensor.py", line 648, in backward
torch.autograd.backward(
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/autograd/__init__.py", line 354, in backward
_engine_run_backward(
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/autograd/graph.py", line 824, in _engine_run_backward
return Variable._execution_engine.run_backward( # Calls into the C++ engine to run the backward pass
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/autograd/function.py", line 307, in apply
return user_fn(self, *args)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_higher_order_ops/flex_attention.py", line 679, in backward
) = flex_attention_backward(
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_higher_order_ops/flex_attention.py", line 132, in __call__
return super().__call__(
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_ops.py", line 501, in __call__
return wrapper()
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_ops.py", line 497, in wrapper
return self.dispatch(
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_ops.py", line 357, in dispatch
return kernel(*args, **kwargs)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_ops.py", line 320, in maybe_run_autograd
return self(*args, **kwargs)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_higher_order_ops/flex_attention.py", line 132, in __call__
return super().__call__(
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_ops.py", line 501, in __call__
return wrapper()
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_ops.py", line 497, in wrapper
return self.dispatch(
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_ops.py", line 357, in dispatch
return kernel(*args, **kwargs)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_higher_order_ops/flex_attention.py", line 847, in sdpa_dense_backward
grad_softmax_scores - sum_scores + grad_logsumexp.unsqueeze(-1)
torch.OutOfMemoryError: CUDA out of memory. Tried to allocate 1024.00 MiB. GPU 0 has a total capacity of 22.05 GiB of which 284.12 MiB is free. Including non-PyTorch memory, this process has 21.76 GiB memory in use. Of the allocated memory 5.73 GiB is allocated by PyTorch, and 15.77 GiB is reserved by PyTorch but unallocated. If reserved but unallocated memory is large try setting PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True to avoid fragmentation. See documentation for Memory Management (https://pytorch.org/docs/stable/notes/cuda.html#environment-variables)
To execute this test, run the following from the base repo dir:
python test/inductor/test_flex_attention.py TestFlexAttentionCUDA.test_dependent_causal_bidirectional_float16_cuda_float16
This message can be suppressed by setting PYTORCH_PRINT_REPRO_ON_FAILURE=0
```
</details>
Test file path: `inductor/test_flex_attention.py`
cc @clee2000 @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
| true
|
3,012,606,972
|
DISABLED test_builtin_score_mods_dynamic_float16_score_mask_mod6_cuda_float16 (__main__.TestFlexAttentionCUDA)
|
pytorch-bot[bot]
|
closed
|
[
"triaged",
"module: flaky-tests",
"skipped",
"oncall: pt2",
"module: inductor"
] | 2
|
NONE
|
Platforms: linux
This test was disabled because it is failing in CI. See [recent examples](https://hud.pytorch.org/flakytest?name=test_builtin_score_mods_dynamic_float16_score_mask_mod6_cuda_float16&suite=TestFlexAttentionCUDA&limit=100) and the most recent trunk [workflow logs](https://github.com/pytorch/pytorch/runs/40975726940).
Over the past 3 hours, it has been determined flaky in 12 workflow(s) with 24 failures and 12 successes.
**Debugging instructions (after clicking on the recent samples link):**
DO NOT ASSUME THINGS ARE OKAY IF THE CI IS GREEN. We now shield flaky tests from developers so CI will thus be green but it will be harder to parse the logs.
To find relevant log snippets:
1. Click on the workflow logs linked above
2. Click on the Test step of the job so that it is expanded. Otherwise, the grepping will not work.
3. Grep for `test_builtin_score_mods_dynamic_float16_score_mask_mod6_cuda_float16`
4. There should be several instances run (as flaky tests are rerun in CI) from which you can study the logs.
Test file path: `inductor/test_flex_attention.py`
cc @clee2000 @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
| true
|
3,012,589,101
|
[torch.compile][export] `PendingUnbackedSymbolNotFound` for `torch.full`
|
syheliel
|
open
|
[
"triaged",
"oncall: pt2",
"export-triaged",
"oncall: export"
] | 1
|
NONE
|
### 🐛 Describe the bug
```
import torch
from torch.export import export
import torch.nn as nn
class FullOp(nn.Module):
def __init__(self):
super(FullOp, self).__init__()
def forward(self, x):
return torch.full((3, 3), x)
# Example usage
if __name__ == "__main__":
model = FullOp()
example_args = (torch.tensor(5.0, dtype=torch.float32),)
exported_program = export(model, example_args)
```
### Error logs
```
XXX/lib/python3.11/site-packages/torch/cuda/__init__.py:734: UserWarning: Can't initialize NVML
warnings.warn("Can't initialize NVML")
Traceback (most recent call last):
File "XXX/torch_full_0.py", line 18, in <module>
exported_program = export(model, example_args)
^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "XXX/lib/python3.11/site-packages/torch/export/__init__.py", line 368, in export
return _export(
^^^^^^^^
File "XXX/lib/python3.11/site-packages/torch/export/_trace.py", line 1035, in wrapper
raise e
File "XXX/lib/python3.11/site-packages/torch/export/_trace.py", line 1008, in wrapper
ep = fn(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^
File "XXX/lib/python3.11/site-packages/torch/export/exported_program.py", line 128, in wrapper
return fn(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^
File "XXX/lib/python3.11/site-packages/torch/export/_trace.py", line 1970, in _export
return _export_for_training(
^^^^^^^^^^^^^^^^^^^^^
File "XXX/lib/python3.11/site-packages/torch/export/_trace.py", line 1035, in wrapper
raise e
File "XXX/lib/python3.11/site-packages/torch/export/_trace.py", line 1008, in wrapper
ep = fn(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^
File "XXX/lib/python3.11/site-packages/torch/export/exported_program.py", line 128, in wrapper
return fn(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^
File "XXX/lib/python3.11/site-packages/torch/export/_trace.py", line 1834, in _export_for_training
export_artifact = export_func( # type: ignore[operator]
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "XXX/lib/python3.11/site-packages/torch/export/_trace.py", line 1283, in _strict_export_lower_to_aten_ir
gm_torch_level = _export_to_torch_ir(
^^^^^^^^^^^^^^^^^^^^
File "XXX/lib/python3.11/site-packages/torch/export/_trace.py", line 662, in _export_to_torch_ir
gm_torch_level, _ = torch._dynamo.export(
^^^^^^^^^^^^^^^^^^^^^
File "XXX/lib/python3.11/site-packages/torch/_dynamo/eval_frame.py", line 1569, in inner
result_traced = opt_f(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^
File "XXX/lib/python3.11/site-packages/torch/nn/modules/module.py", line 1739, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "XXX/lib/python3.11/site-packages/torch/nn/modules/module.py", line 1750, in _call_impl
return forward_call(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "XXX/lib/python3.11/site-packages/torch/_dynamo/eval_frame.py", line 574, in _fn
return fn(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^
File "XXX/lib/python3.11/site-packages/torch/nn/modules/module.py", line 1739, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "XXX/lib/python3.11/site-packages/torch/nn/modules/module.py", line 1750, in _call_impl
return forward_call(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "XXX/lib/python3.11/site-packages/torch/_dynamo/convert_frame.py", line 1380, in __call__
return self._torchdynamo_orig_callable(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "XXX/lib/python3.11/site-packages/torch/_dynamo/convert_frame.py", line 547, in __call__
return _compile(
^^^^^^^^^
File "XXX/lib/python3.11/site-packages/torch/_dynamo/convert_frame.py", line 1036, in _compile
raise InternalTorchDynamoError(
File "XXX/lib/python3.11/site-packages/torch/_dynamo/convert_frame.py", line 986, in _compile
guarded_code = compile_inner(code, one_graph, hooks, transform)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "XXX/lib/python3.11/site-packages/torch/_dynamo/convert_frame.py", line 715, in compile_inner
return _compile_inner(code, one_graph, hooks, transform)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "XXX/lib/python3.11/site-packages/torch/_utils_internal.py", line 95, in wrapper_function
return function(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^
File "XXX/lib/python3.11/site-packages/torch/_dynamo/convert_frame.py", line 750, in _compile_inner
out_code = transform_code_object(code, transform)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "XXX/lib/python3.11/site-packages/torch/_dynamo/bytecode_transformation.py", line 1361, in transform_code_object
transformations(instructions, code_options)
File "XXX/lib/python3.11/site-packages/torch/_dynamo/convert_frame.py", line 231, in _fn
return fn(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^
File "XXX/lib/python3.11/site-packages/torch/_dynamo/convert_frame.py", line 662, in transform
tracer.run()
File "XXX/lib/python3.11/site-packages/torch/_dynamo/symbolic_convert.py", line 2868, in run
super().run()
File "XXX/lib/python3.11/site-packages/torch/_dynamo/symbolic_convert.py", line 1052, in run
while self.step():
^^^^^^^^^^^
File "XXX/lib/python3.11/site-packages/torch/_dynamo/symbolic_convert.py", line 962, in step
self.dispatch_table[inst.opcode](self, inst)
File "XXX/lib/python3.11/site-packages/torch/_dynamo/symbolic_convert.py", line 659, in wrapper
return inner_fn(self, inst)
^^^^^^^^^^^^^^^^^^^^
File "XXX/lib/python3.11/site-packages/torch/_dynamo/symbolic_convert.py", line 2341, in CALL
self._call(inst)
File "XXX/lib/python3.11/site-packages/torch/_dynamo/symbolic_convert.py", line 2335, in _call
self.call_function(fn, args, kwargs)
File "XXX/lib/python3.11/site-packages/torch/_dynamo/symbolic_convert.py", line 897, in call_function
self.push(fn.call_function(self, args, kwargs)) # type: ignore[arg-type]
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "XXX/lib/python3.11/site-packages/torch/_dynamo/variables/torch.py", line 953, in call_function
tensor_variable = wrap_fx_proxy(
^^^^^^^^^^^^^^
File "XXX/lib/python3.11/site-packages/torch/_dynamo/variables/builder.py", line 2153, in wrap_fx_proxy
return wrap_fx_proxy_cls(target_cls=TensorVariable, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "XXX/lib/python3.11/site-packages/torch/_dynamo/variables/builder.py", line 2219, in wrap_fx_proxy_cls
return _wrap_fx_proxy(
^^^^^^^^^^^^^^^
File "XXX/lib/python3.11/site-packages/torch/_dynamo/variables/builder.py", line 2317, in _wrap_fx_proxy
return handle_traced_output(
^^^^^^^^^^^^^^^^^^^^^
File "XXX/lib/python3.11/site-packages/torch/_dynamo/variables/builder.py", line 2336, in handle_traced_output
set_example_value(proxy.node, example_value)
File "XXX/lib/python3.11/site-packages/torch/_dynamo/utils.py", line 1896, in set_example_value
if symbol_to_path := torch.fx.experimental.symbolic_shapes.compute_unbacked_bindings(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "XXX/lib/python3.11/site-packages/torch/fx/experimental/symbolic_shapes.py", line 1082, in compute_unbacked_bindings
raise PendingUnbackedSymbolNotFound(
torch._dynamo.exc.InternalTorchDynamoError: PendingUnbackedSymbolNotFound: Pending unbacked symbols {zuf0} not in returned outputs FakeTensor(..., size=(3, 3)) ((3, 1), 0).
Did you accidentally call new_dynamic_size() or item() more times than you needed to in your fake implementation?
For more help, see https://docs.google.com/document/d/1RWrH-3wLEpzR9kCS6gGBNen_-Fs-8PVbWWFE5AcgeWE/edit
from user code:
File "XXX/torch_full_0.py", line 12, in forward
return torch.full((3, 3), x)
Set TORCH_LOGS="+dynamo" and TORCHDYNAMO_VERBOSE=1 for more information
```
### Versions
```
Collecting environment information...
PyTorch version: 2.6.0+cu124
Is debug build: False
CUDA used to build PyTorch: 12.4
ROCM used to build PyTorch: N/A
OS: Ubuntu 22.04.3 LTS (x86_64)
GCC version: (Ubuntu 11.4.0-1ubuntu1~22.04) 11.4.0
Clang version: 14.0.0-1ubuntu1.1
CMake version: version 3.22.1
Libc version: glibc-2.35
Python version: 3.11.11 (main, Dec 11 2024, 16:28:39) [GCC 11.2.0] (64-bit runtime)
Python platform: Linux-5.15.167.4-microsoft-standard-WSL2-x86_64-with-glibc2.35
Is CUDA available: False
CUDA runtime version: No CUDA
CUDA_MODULE_LOADING set to: N/A
GPU models and configuration: No CUDA
Nvidia driver version: No CUDA
cuDNN version: No CUDA
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 46 bits physical, 48 bits virtual
Byte Order: Little Endian
CPU(s): 18
On-line CPU(s) list: 0-17
Vendor ID: GenuineIntel
Model name: Intel(R) Core(TM) Ultra 5 125H
CPU family: 6
Model: 170
Thread(s) per core: 2
Core(s) per socket: 9
Socket(s): 1
Stepping: 4
BogoMIPS: 5990.39
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ss ht syscall nx pdpe1gb rdtscp lm constant_tsc rep_good nopl xtopology tsc_reliable nonstop_tsc cpuid pni pclmulqdq vmx ssse3 fma cx16 pcid sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand hypervisor lahf_lm abm 3dnowprefetch invpcid_single ssbd ibrs ibpb stibp ibrs_enhanced tpr_shadow vnmi ept vpid ept_ad fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid rdseed adx smap clflushopt clwb sha_ni xsaveopt xsavec xgetbv1 xsaves avx_vnni umip waitpkg gfni vaes vpclmulqdq rdpid movdiri movdir64b fsrm md_clear serialize flush_l1d arch_capabilities
Virtualization: VT-x
Hypervisor vendor: Microsoft
Virtualization type: full
L1d cache: 432 KiB (9 instances)
L1i cache: 576 KiB (9 instances)
L2 cache: 18 MiB (9 instances)
L3 cache: 18 MiB (1 instance)
Vulnerability Gather data sampling: Not affected
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Not affected
Vulnerability Reg file data sampling: Not affected
Vulnerability Retbleed: Mitigation; Enhanced IBRS
Vulnerability Spec rstack overflow: Not affected
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl and seccomp
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Enhanced / Automatic IBRS; IBPB conditional; RSB filling; PBRSB-eIBRS SW sequence; BHI BHI_DIS_S
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Versions of relevant libraries:
[pip3] numpy==1.26.4
[pip3] nvidia-cublas-cu11==11.10.3.66
[pip3] nvidia-cublas-cu12==12.4.5.8
[pip3] nvidia-cuda-cupti-cu12==12.4.127
[pip3] nvidia-cuda-nvrtc-cu11==11.7.99
[pip3] nvidia-cuda-nvrtc-cu12==12.4.127
[pip3] nvidia-cuda-runtime-cu11==11.7.99
[pip3] nvidia-cuda-runtime-cu12==12.4.127
[pip3] nvidia-cudnn-cu11==8.5.0.96
[pip3] nvidia-cudnn-cu12==9.1.0.70
[pip3] nvidia-cufft-cu12==11.2.1.3
[pip3] nvidia-curand-cu12==10.3.5.147
[pip3] nvidia-cusolver-cu12==11.6.1.9
[pip3] nvidia-cusparse-cu12==12.3.1.170
[pip3] nvidia-cusparselt-cu12==0.6.2
[pip3] nvidia-nccl-cu12==2.21.5
[pip3] nvidia-nvjitlink-cu12==12.4.127
[pip3] nvidia-nvtx-cu12==12.4.127
[pip3] onnx==1.17.0
[pip3] torch==2.6.0
[pip3] triton==3.2.0
[conda] numpy 1.26.4 pypi_0 pypi
[conda] nvidia-cublas-cu11 11.10.3.66 pypi_0 pypi
[conda] nvidia-cublas-cu12 12.4.5.8 pypi_0 pypi
[conda] nvidia-cuda-cupti-cu12 12.4.127 pypi_0 pypi
[conda] nvidia-cuda-nvrtc-cu11 11.7.99 pypi_0 pypi
[conda] nvidia-cuda-nvrtc-cu12 12.4.127 pypi_0 pypi
[conda] nvidia-cuda-runtime-cu11 11.7.99 pypi_0 pypi
[conda] nvidia-cuda-runtime-cu12 12.4.127 pypi_0 pypi
[conda] nvidia-cudnn-cu11 8.5.0.96 pypi_0 pypi
[conda] nvidia-cudnn-cu12 9.1.0.70 pypi_0 pypi
[conda] nvidia-cufft-cu12 11.2.1.3 pypi_0 pypi
[conda] nvidia-curand-cu12 10.3.5.147 pypi_0 pypi
[conda] nvidia-cusolver-cu12 11.6.1.9 pypi_0 pypi
[conda] nvidia-cusparse-cu12 12.3.1.170 pypi_0 pypi
[conda] nvidia-cusparselt-cu12 0.6.2 pypi_0 pypi
[conda] nvidia-nccl-cu12 2.21.5 pypi_0 pypi
[conda] nvidia-nvjitlink-cu12 12.4.127 pypi_0 pypi
[conda] nvidia-nvtx-cu12 12.4.127 pypi_0 pypi
[conda] torch 2.6.0 pypi_0 pypi
[conda] triton 3.2.0 pypi_0 pypi
```
cc @chauhang @penguinwu @avikchaudhuri @gmagogsfm @zhxchen17 @tugsbayasgalan @angelayi @suo @ydwu4
| true
|
3,012,556,221
|
Fix typos in meta.rst
|
Stonesjtu
|
closed
|
[
"open source",
"Merged",
"ciflow/trunk",
"topic: not user facing"
] | 7
|
CONTRIBUTOR
|
### Fixes made:
- "allow you to the module" → corrected to "allows you to move the module"
- "allow" → changed to "allows" to agree with the singular subject "method"
| true
|
3,012,535,469
|
torch.gather returns incorrect output on cuda after unsqueezing and expanding with double precision
|
sdaulton
|
open
|
[
"triaged",
"module: scatter & gather ops"
] | 0
|
NONE
|
### 🐛 Describe the bug
The following code snippet returns the same element for each batch (of which there are three). The issue only occurs with double precision and with cuda, and a call to `contiguous` fixes the issue.
```python
import torch
torch.manual_seed(0)
X = torch.rand(3, 5, 2, dtype=torch.float64)
device = torch.device("cuda")
idcs = torch.zeros(3, 1, dtype=torch.long, device=device)
idcs = idcs.unsqueeze(-1).expand(3, 1, 2) # .contiguous() fixes the issue
torch.gather(X.to(device=device), -2, idcs)
```
### Versions
PyTorch version: 2.8.0a0+fb
Is debug build: False
CUDA used to build PyTorch: 12.4.0
ROCM used to build PyTorch: N/A
OS: CentOS Stream 9 (x86_64)
GCC version: (GCC) 11.5.0 20240719 (Red Hat 11.5.0-5)
| true
|
3,012,531,992
|
[ROCm][CI] Enabled fp8 distributed tests in test_micro_pipeline_tp.py for MI300
|
akashveramd
|
closed
|
[
"oncall: distributed",
"module: rocm",
"triaged",
"open source",
"Merged",
"ciflow/trunk",
"topic: not user facing",
"ciflow/rocm",
"ciflow/rocm-mi300"
] | 6
|
CONTRIBUTOR
|
This PR enables fp8 distributed tests on MI300.
To verify the change, the distributed.tensor.parallel.test_micro_pipeline_tp suite was run; all tests passed successfully and none were skipped.
cc @H-Huang @awgu @wanchaol @fegin @fduwjj @wz337 @wconstab @d4l3k @jeffdaily @sunway513 @jithunnair-amd @pruthvistony @ROCmSupport @dllehr-amd @jataylo @hongxiayang @naromero77amd
| true
|
3,012,527,548
|
[Intel GPU] undo broadcast on zero stride tensor for SDPA
|
LuFinch
|
open
|
[
"module: cpu",
"triaged",
"open source",
"keep-going",
"ciflow/xpu",
"release notes: xpu",
"module: xpu"
] | 19
|
CONTRIBUTOR
|
Fix https://github.com/pytorch/pytorch/issues/152290.
The model **hubert** uses aten::expand to build its attention mask by broadcasting. PyTorch uses strides[d]=0 to represent a broadcast dimension, which is not supported by oneDNN. This PR handles that scenario.
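As a minimal illustration of the zero-stride pattern being handled (the shapes below are made up for the example, not taken from hubert):
```python
import torch

mask = torch.zeros(1, 1, 128)        # un-broadcast attention mask
expanded = mask.expand(8, 16, 128)   # no copy: broadcast dims get stride 0
print(expanded.stride())             # (0, 0, 1)

# A backend that cannot consume zero strides needs the broadcast undone,
# e.g. by materializing it ...
materialized = expanded.contiguous() # real memory, strides (2048, 128, 1)
# ... or by passing the original un-expanded tensor and letting the backend
# re-apply the broadcast itself.
```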
cc @jgong5 @mingfeima @XiaobingSuper @sanchitintel @ashokei @jingxu10 @jerryzh168 @gujinghui @EikanWang @fengyuan14 @guangyey
| true
|
3,012,520,599
|
[inductor] Remove usage of autotune_fallback_to_aten outside inductor code
|
henrylhtsang
|
closed
|
[
"module: inductor",
"ciflow/inductor"
] | 2
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* (to be filled)
Differential Revision: [D73477120](https://our.internmc.facebook.com/intern/diff/D73477120/)
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
| true
|
3,012,510,679
|
BM FM FlashAttention Test
|
czhao8863
|
closed
|
[
"fb-exported",
"Merged",
"ciflow/trunk",
"topic: not user facing"
] | 10
|
CONTRIBUTOR
|
Reviewed By: joebos
Differential Revision: D72880307
| true
|
3,012,485,197
|
Add lr_lambda type check in MultiplicativeLR
|
zeshengzong
|
closed
|
[
"triaged",
"open source",
"Merged",
"ciflow/trunk",
"release notes: optim"
] | 8
|
CONTRIBUTOR
|
Fixes #81554
## TestResult
### Before
```python
In [3]: import torch
...: class SimpleLinearModel(torch.nn.Module):
...: def __init__(self):
...: super(SimpleLinearModel, self).__init__()
...: self.linear = torch.nn.Linear(10, 1)
...:
...: def forward(self, x):
...: return self.linear(x)
...:
...: net = SimpleLinearModel()
...: optimizer = torch.optim.Adam(net.parameters(), lr=0.01)
...: scheduler = torch.optim.lr_scheduler.MultiplicativeLR(optimizer, 0.95)
...: for i in range(10):
...: print(i, scheduler.get_last_lr())
...: scheduler.step()
TypeError: 'float' object is not callable
```
### After
```python
...: scheduler = torch.optim.lr_scheduler.MultiplicativeLR(optimizer, 0.95)
TypeError: lr_lambda should be a function, but got float
```
| true
|
3,012,437,149
|
[1/N] Deprecate c10::string_view and at::string
|
cyyever
|
closed
|
[
"oncall: distributed",
"oncall: jit",
"module: cpu",
"triaged",
"open source",
"Merged",
"ciflow/binaries",
"ciflow/trunk",
"release notes: quantization",
"release notes: distributed (c10d)",
"ciflow/periodic",
"ciflow/mps",
"ciflow/inductor",
"release notes: inductor (aoti)"
] | 10
|
COLLABORATOR
|
Calls to `c10::string_view` in the code base are replaced by `std::string_view`, and calls to `at::string` are replaced by `std::string`.
cc @H-Huang @awgu @wanchaol @fegin @fduwjj @wz337 @wconstab @d4l3k @EikanWang @jgong5 @wenzhe-nrv @sanchitintel @mingfeima @XiaobingSuper @ashokei @jingxu10 @jerryzh168
| true
|
3,012,427,044
|
[WIP][recompiles] verbose logging for tensor guard checks
|
pianpwk
|
open
|
[
"module: dynamo",
"ciflow/inductor"
] | 2
|
CONTRIBUTOR
|
Fixes #ISSUE_NUMBER
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @chauhang @amjames
| true
|
3,012,408,354
|
Fix additional inputs to error on inconsistent constants
|
angelayi
|
closed
|
[
"Merged",
"ciflow/trunk",
"ciflow/inductor",
"release notes: export",
"skip-url-lint"
] | 3
|
CONTRIBUTOR
|
Fixes #ISSUE_NUMBER
| true
|
3,012,388,663
|
Unrecognized in PyTorch traces w/ Cuda 12.6
|
drisspg
|
open
|
[
"oncall: profiler"
] | 0
|
CONTRIBUTOR
|
Repro to generate trace w/ unrecognized names:
```Py
import torch
import torch.nn as nn
import copy
from transformer_nuggets.utils.benchmark import profiler
from torchao.float8 import (
Float8LinearConfig,
convert_to_float8_training,
)
# Set up parameters for a single iteration
M_val, K_val, N_val = 1024, 1024, 1024
float8_recipe_name = "tensorwise"
device = torch.device("cuda")
# Create model, input, and gradient
m_orig = nn.Sequential(nn.Linear(K_val, N_val, bias=False)).cuda().bfloat16()
x = torch.randn(M_val, K_val, dtype=torch.bfloat16, device=device).requires_grad_()
grad_output = torch.randn(M_val, N_val, dtype=torch.bfloat16, device=device)
# Float8 dynamic scaling
torch._dynamo.reset()
config = Float8LinearConfig.from_recipe_name(float8_recipe_name)
m_fp8 = convert_to_float8_training(copy.deepcopy(m_orig), config=config)
with profiler("fp8.json", with_stack = True):
# Forward pass
y_fp8 = m_fp8(x)
# Backward pass
y_fp8.backward(grad_output)
torch.cuda.synchronize()
print("Float8 iteration completed")
print("Single iteration benchmark complete")
```
Trace: https://fburl.com/sn8zh330
<img width="934" alt="Image" src="https://github.com/user-attachments/assets/44ce3826-8064-4f70-853c-dde168c696aa" />
cc @robieta @chaekit @guotuofeng @guyang3532 @dzhulgakov @davidberard98 @briancoutinho @sraikund16 @sanrise
| true
|
3,012,375,088
|
[Graph Partition] reorder for minimal number of partitions
|
BoyuanFeng
|
closed
|
[
"Merged",
"ciflow/trunk",
"topic: not user facing",
"module: inductor",
"ciflow/inductor"
] | 3
|
CONTRIBUTOR
|
This PR adds an optimal reordering for minimizing #partitions.
## Optimal reordering for minimizing #partitions
A bfs could minimize #partitions (ignore peak memory for now):
1. For each node, compute node_to_indegree: dict[node, int].
2. Maintain 2 queues: cudagraphable_nodes, and non_cudagraphable_nodes. Iterate through all nodes and add nodes to one of these 2 queues if node_to_indegree[node] == 0.
3. While non_cudagraphable_nodes is not empty: Pop 1 node, schedule it, update the indegree of all its successors, and add its successor nodes to one of the queues if node_to_indegree[successor] == 0.
4. While cudagraphable_nodes is not empty: Pop 1 node, schedule it, update the indegree of all its successors, and add its successor nodes to one of the queues if node_to_indegree[successor] == 0.
5. Repeat step 3 & 4 until all nodes have been scheduled.
We call this strategy `reorder_for_minimizing_partition`.
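A minimal Python sketch of this two-queue BFS (the `successors` and `is_cudagraphable` inputs are illustrative stand-ins, not the actual scheduler API):
```python
from collections import deque

def reorder_for_minimizing_partition(nodes, successors, is_cudagraphable):
    # successors: dict[node, list[node]]; is_cudagraphable: dict[node, bool]
    indegree = {n: 0 for n in nodes}
    for n in nodes:
        for s in successors[n]:
            indegree[s] += 1

    cg, non_cg = deque(), deque()
    for n in nodes:
        if indegree[n] == 0:
            (cg if is_cudagraphable[n] else non_cg).append(n)

    order = []

    def schedule(n):
        order.append(n)
        for s in successors[n]:
            indegree[s] -= 1
            if indegree[s] == 0:
                (cg if is_cudagraphable[s] else non_cg).append(s)

    while cg or non_cg:
        while non_cg:   # step 3: drain every ready non-cudagraphable node
            schedule(non_cg.popleft())
        while cg:       # step 4: drain every ready cudagraphable node
            schedule(cg.popleft())
    return order
```
Each switch between the two inner loops starts a new partition, so scheduling all ready nodes of one kind before switching keeps the number of partitions as small as possible.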
**Q: Why is this optimal?**
Suppose this is not optimal; then there is a counter example with 2 non_cudagraphable regions:
```
[non_cudagraphable1, cudagraphable2, non_cudagraphable3]
```
where we can reorder to only 1 non_cudagraphable region:
```
[non_cudagraphable1, non_cudagraphable3, cudagraphable2]
```
This reorder means non_cudagraphable3 does not depend on cudagraphable2. So after non_cudagraphable1 is scheduled, both non_cudagraphable3 and cudagraphable2 have an in-degree of 0. If that were the case, step 3 would already have scheduled non_cudagraphable3 before cudagraphable2, so the counter example cannot exist.
This shows no such counter example can be found, so the BFS is optimal at minimizing #partitions.
## Minimize peak memory
`reorder_for_peak_memory` currently uses topological_sort_dfs, topological_sort_lpmf, and topological_sort_bfs, where the latter two are BFS. ILP brings small benefits and can hardly scale beyond 100 nodes, according to @xuanzhang816, so ILP is not used for the peak-memory reorder in inductor.
Heuristics strategy:
- Conduct reorder_for_peak_memory as the default order
- Conduct reorder_for_minimal_partitions and get results as list[tuple[partition, bool]], where partition is a list[BaseSchedulerNode] and the bool indicates whether the partition is cudagraphable.
- If the reorder increases peak memory too much, we use the default order.
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
| true
|
3,012,345,286
|
Make `aten.embedding` do not wrap negative index
|
YouJiacheng
|
open
|
[
"triaged",
"open source",
"topic: not user facing",
"module: inductor"
] | 4
|
CONTRIBUTOR
|
Fixes #151918
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
| true
|
3,012,342,220
|
[MPS] Fix test_neg_index_mps
|
dcci
|
closed
|
[
"Merged",
"Reverted",
"topic: not user facing",
"module: mps",
"ciflow/mps",
"module: inductor",
"ciflow/inductor",
"ci-no-td"
] | 10
|
MEMBER
|
cc @kulinseth @albanD @malfet @DenisVieriu97 @jhavukainen @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
| true
|
3,012,330,370
|
[rfc][c10d] RDMA APIs (read/write, rkey)
|
d4l3k
|
open
|
[
"oncall: distributed",
"triaged",
"module: c10d"
] | 7
|
MEMBER
|
There has been recent interest in, and requests for, RDMA-like APIs that can be used without requiring both sides to initiate the communication. This is useful in a number of places such as distributed inference, checkpointing, tensorstores, async training and torchft.
Libraries such as https://github.com/ai-dynamo/nixl/blob/main/docs/nixl.md have also been created.
Given the interest, it seems like it makes sense to provide these types of APIs out of the box in PyTorch for advanced users.
For a more concrete example of how this could be used in PyTorch, see https://github.com/pytorch/pytorch/pull/151631
# Design Options
## 1. ProcessGroup API
This would leverage the ProcessGroup API to allow registering tensors and performing operations on them. Usage would be fairly manual.
```py
t = torch.tensor(...)
handle = pg.register_tensor(t)
# send to remote
store.set("my_handle", bytes(t))
# recv on remote
handle = pg.restore_handle(store.get("my_handle"))
# read/write operations with the handle
pg.read(handle, t)
pg.write(handle, t)
```
## 2. "Ghost" Tensor via Subclass
We could also abstract this away from ProcessGroups and instead provide a mechanism in PyTorch to register and exchange tensors, while providing the main API via tensor subclasses.
```py
from torch.distributed import rdma
t = rdma.register_tensor(torch.tensor(...))
handle = t.handle()
t2 = rdma.from_handle(handle)
# use tensor dispatch to automatically write to the remote tensor
t2.copy_(...)
# access only part of a tensor by fetching from remote tensor on access
t2[100:200] = ...
y = x * t2[10]
# autograd to remote embedding table?
t2[1].sum().backward()
# integrate with cuda kernels
# ?????
```
# MVP
For the initial implementation, we want to see if we can use Gloo's ib backend to provide these APIs, though we could also try to provide nixl as a sub-library in PyTorch.
cc @H-Huang @awgu @wanchaol @fegin @fduwjj @wz337 @wconstab
| true
|
3,012,323,064
|
[MPS] Implement _print_Trunc_to_Int
|
dcci
|
closed
|
[
"Merged",
"topic: not user facing",
"module: mps",
"ciflow/mps",
"module: inductor",
"ciflow/inductor"
] | 3
|
MEMBER
|
Fixes `test_device_assert_mps`
cc @kulinseth @albanD @malfet @DenisVieriu97 @jhavukainen @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
| true
|
3,012,319,291
|
[MPSInductor] Warn-cast double as floats
|
malfet
|
closed
|
[
"Merged",
"topic: not user facing",
"ciflow/mps",
"module: inductor",
"ciflow/inductor"
] | 3
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #151963
* #151872
* #151871
* #151869
To support sqrt over dynamic shapes, i.e. make something like:
```python
torch.compile(dynamic=True)(lambda x: x * math.sqrt(x.size(0)))
```
compilable into
```metal
// Source node to ATen node mapping:
// Graph fragment:
// %scalar_tensor_default : [num_users=1] = call_function[target=torch.ops.aten.scalar_tensor.default](args = (%arg0_1,), kwargs = {})
// %convert_element_type_default : [num_users=1] = call_function[target=torch.ops.prims.convert_element_type.default](args = (%scalar_tensor_default, torch.float64), kwargs = {})
// %sqrt_default : [num_users=1] = call_function[target=torch.ops.aten.sqrt.default](args = (%convert_element_type_default,), kwargs = {})
// %convert_element_type_default_1 : [num_users=1] = call_function[target=torch.ops.prims.convert_element_type.default](args = (%sqrt_default, torch.float32), kwargs = {})
// %mul_tensor : [num_users=1] = call_function[target=torch.ops.aten.mul.Tensor](args = (%arg1_1, %convert_element_type_default_1), kwargs = {})
kernel void generated_kernel(
device float* out_ptr0,
constant float* in_ptr0,
constant long& ks0,
uint xindex [[thread_position_in_grid]]
) {
int x0 = xindex;
auto tmp0 = in_ptr0[x0];
auto tmp1 = ks0;
auto tmp2 = static_cast<float>(tmp1);
auto tmp3 = metal::sqrt(tmp2);
auto tmp4 = static_cast<float>(tmp3);
auto tmp5 = tmp0 * tmp4;
out_ptr0[x0] = static_cast<float>(tmp5);
}
```
TODO:
- Figure out if this could be tweaked in fx-passes, but overhead is probably too high
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
| true
|
3,012,305,861
|
[ca] hide unused scalar int sizes from dynamo
|
xmfan
|
open
|
[
"Merged",
"Reverted",
"module: inductor",
"module: dynamo",
"ciflow/inductor",
"release notes: dynamo",
"module: compiled autograd",
"ci-no-td"
] | 5
|
MEMBER
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #151860
* #152119
* __->__ #151962
* #151731
together with https://github.com/pytorch/pytorch/pull/151731, FIXES https://github.com/pytorch/pytorch/issues/113129 https://github.com/pytorch/pytorch/issues/146168
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
| true
|
3,012,296,181
|
[fake tensor] Cache None, integer and SymInts in the output
|
anijain2305
|
closed
|
[
"Merged",
"ciflow/trunk",
"topic: not user facing",
"ciflow/inductor"
] | 6
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #152062
* __->__ #151961
* #151957
* #151477
* #151633
* #151409
| true
|
3,012,291,456
|
[rfc][c10d] First Class Object-Oriented ProcessGroup and Store APIs
|
d4l3k
|
open
|
[
"oncall: distributed",
"triaged",
"module: c10d"
] | 2
|
MEMBER
|
Most users of PyTorch distributed currently use the global methods such as `dist.init_process_group` and `dist.all_reduce`. These methods are showing their age when it comes to new paradigms where there may be multiple "worlds", such as in reinforcement learning, checkpointing, or fault-tolerant scenarios such as those implemented by torchft.
This is a proposal to expose methods that allow users to directly instantiate ProcessGroups without requiring subworlds (i.e. splitting) and to use them in an object-oriented way by calling methods directly on the ProcessGroup object.
# Proposed ProcessGroup API Changes
* `dist.new_group(..., store=...)` add a new `store=` parameter to new_group that allows for constructing a new ProcessGroup without having to have a global process group to split from. We want to have the same API as `init_process_group` but allow for initializing non-default groups.
* `dist.get_default_group` expose the current `_get_default_group` API so users can directly use the object
* `dist.resolve_process_group` expose `_resolve_process_group` to get a group by name
* `ProcessGroup.register_backend` / `ProcessGroup.set_default_backend` (optional) make the internal methods public so it's possible to instantiate a ProcessGroup object directly. This may not be necessary with the changes to `dist.new_group` above. Ex: https://github.com/pytorch/torchft/blob/main/torchft/process_group.py#L532
# Proposed Store API Changes
* `dist.get_user_store()` -- returns a new `.clone()`ed copy of the default store w/ a `/user/` prefix. This will make it easier for more advanced users to instantiate PGs directly as well as for users to use things like in #150943
* `dist.store_from_uri(...)` and `Store.uri()` create persistent URIs that can be exchanged to rehydrate stores and share access across the network. I.e. `tcpstore://1.2.3.4:1234?/some/prefix` or `filestore:///foo/bar?/foo/bar`. This is useful for doing multi-world PG operations such as in torchft.
cc @H-Huang @awgu @wanchaol @fegin @fduwjj @wz337 @wconstab
| true
|
3,012,284,695
|
[FlexAttention] Remove Old Constraint on lastdim strides
|
drisspg
|
closed
|
[
"module: nn",
"Merged",
"ciflow/trunk",
"release notes: nn",
"topic: bug fixes",
"module: inductor",
"ciflow/inductor"
] | 13
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #151959
* #151846
Fixes: #148827
cc @albanD @mruberry @jbschlosser @walterddr @mikaylagawarecki @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
| true
|
3,012,280,551
|
Inductor Tiling Rewrite
|
eellison
|
open
|
[
"topic: not user facing",
"module: inductor",
"module: dynamo",
"ciflow/inductor",
"keep-going"
] | 2
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #151958
Fix for https://github.com/pytorch/pytorch/issues/149982.
Summary:
This PR does two main things:
1. Rewrites the tiling heuristics. The previous tiling heuristic would have each dependency generate a tiling. Then, we sum up the score for each generated tiling, preferring any 2d tiling over the default. The new tiling heuristic scores each tiling by its global coalesced memory. This gives both a potentially better tiling (especially for more complicated, 3d patterns) and information we can use in generating block sizes.
2. Analyses memory dependencies for accesses that would be coalesced with additional tiling. The motivating kernel is in https://github.com/pytorch/pytorch/issues/149982 which is a 32 element reduction. A smaller version of it is [here](https://gist.github.com/eellison/0fa9396f5479eb4dba09756e3bf6ff2a). We need to run this kernel once in the forward per linear layer on a contiguous tensor, and once in the backward on a transposed tensor.
While the contiguous kernel has coalesced accesses and is performant on master, the transposed version accesses uncoalesced memory on main and is ~2.8x slower. See this [full log](https://gist.github.com/eellison/fa644bfd9d0ae11dadb62e17a5d48a83) from the above repro. Now, with this PR, it is only ~1.15x slower. See the [updated log](https://gist.github.com/eellison/0b2b653309494d28cf7b48929a022075).
We analyse memory addresses that are not coalesced by any iteration variable. For the following dependency:
`(((32*n0 + n1)//2048)) + 4096*(ModularIndexing(32*n0 + n1, 1, 2048))` we infer that tiling `n0` by 64 makes the first term coalesced.
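To sketch the arithmetic behind that inference (my reading of the expression, assuming `n1` ranges over `[0, 32)` as the 32-element reduction suggests; not text from the PR): write `n0 = 64*q + r` with `0 <= r < 64`. Then `32*n0 + n1 = 2048*q + (32*r + n1)` with `0 <= 32*r + n1 < 2048`, so `((32*n0 + n1)//2048) = q` and `ModularIndexing(32*n0 + n1, 1, 2048) = 32*r + n1`. The dependency becomes `q + 4096*(32*r + n1)`, and the new tile coordinate `q` now walks consecutive addresses with stride 1, i.e. the first term is coalesced.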
I'm sure there are still some CI failures to debug..
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov @vkuzo
| true
|
3,012,258,064
|
[invoke_subgraph] Cache fake tensor if no unbacked symint in the output
|
anijain2305
|
closed
|
[
"Merged",
"ciflow/trunk",
"topic: not user facing",
"ciflow/inductor"
] | 12
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #152062
* #151961
* __->__ #151957
* #151477
* #151633
* #151409
| true
|
3,012,243,868
|
[inductor][profiler] lazily import things in standalone_compile
|
davidberard98
|
open
|
[
"module: inductor",
"ciflow/inductor"
] | 2
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #151956
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
| true
|
3,012,224,820
|
[CI] Update sleef submodule to v3.8
|
malfet
|
closed
|
[
"Merged",
"ciflow/trunk",
"topic: not user facing"
] | 7
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #151955
Should help with RISC-V cross-compilation.
3.9.0 migration is blocked by sleef project switching to C++20
| true
|
3,012,190,822
|
DISABLED test_builtin_score_mods_float16_score_mod2_cuda_float16 (__main__.TestFlexAttentionCUDA)
|
pytorch-bot[bot]
|
closed
|
[
"triaged",
"module: flaky-tests",
"skipped",
"oncall: pt2",
"module: inductor"
] | 4
|
NONE
|
Platforms: linux
This test was disabled because it is failing in CI. See [recent examples](https://hud.pytorch.org/flakytest?name=test_builtin_score_mods_float16_score_mod2_cuda_float16&suite=TestFlexAttentionCUDA&limit=100) and the most recent trunk [workflow logs](https://github.com/pytorch/pytorch/runs/40960447293).
Over the past 3 hours, it has been determined flaky in 8 workflow(s) with 16 failures and 8 successes.
**Debugging instructions (after clicking on the recent samples link):**
DO NOT ASSUME THINGS ARE OKAY IF THE CI IS GREEN. We now shield flaky tests from developers so CI will thus be green but it will be harder to parse the logs.
To find relevant log snippets:
1. Click on the workflow logs linked above
2. Click on the Test step of the job so that it is expanded. Otherwise, the grepping will not work.
3. Grep for `test_builtin_score_mods_float16_score_mod2_cuda_float16`
4. There should be several instances run (as flaky tests are rerun in CI) from which you can study the logs.
<details><summary>Sample error message</summary>
```
Traceback (most recent call last):
File "/var/lib/jenkins/workspace/test/inductor/test_flex_attention.py", line 1127, in test_builtin_score_mods
self.run_test(score_mod, dtype, device=device)
File "/var/lib/jenkins/workspace/test/inductor/test_flex_attention.py", line 509, in run_test
golden_out.backward(backward_grad.to(torch.float64))
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_tensor.py", line 648, in backward
torch.autograd.backward(
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/autograd/__init__.py", line 354, in backward
_engine_run_backward(
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/autograd/graph.py", line 824, in _engine_run_backward
return Variable._execution_engine.run_backward( # Calls into the C++ engine to run the backward pass
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/autograd/function.py", line 307, in apply
return user_fn(self, *args)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_higher_order_ops/flex_attention.py", line 679, in backward
) = flex_attention_backward(
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_higher_order_ops/flex_attention.py", line 132, in __call__
return super().__call__(
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_ops.py", line 501, in __call__
return wrapper()
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_ops.py", line 497, in wrapper
return self.dispatch(
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_ops.py", line 357, in dispatch
return kernel(*args, **kwargs)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_ops.py", line 320, in maybe_run_autograd
return self(*args, **kwargs)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_higher_order_ops/flex_attention.py", line 132, in __call__
return super().__call__(
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_ops.py", line 501, in __call__
return wrapper()
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_ops.py", line 497, in wrapper
return self.dispatch(
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_ops.py", line 357, in dispatch
return kernel(*args, **kwargs)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_higher_order_ops/flex_attention.py", line 847, in sdpa_dense_backward
grad_softmax_scores - sum_scores + grad_logsumexp.unsqueeze(-1)
torch.OutOfMemoryError: CUDA out of memory. Tried to allocate 1024.00 MiB. GPU 0 has a total capacity of 22.05 GiB of which 420.12 MiB is free. Including non-PyTorch memory, this process has 21.63 GiB memory in use. Of the allocated memory 5.73 GiB is allocated by PyTorch, and 15.64 GiB is reserved by PyTorch but unallocated. If reserved but unallocated memory is large try setting PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True to avoid fragmentation. See documentation for Memory Management (https://pytorch.org/docs/stable/notes/cuda.html#environment-variables)
To execute this test, run the following from the base repo dir:
python test/inductor/test_flex_attention.py TestFlexAttentionCUDA.test_builtin_score_mods_float16_score_mod2_cuda_float16
This message can be suppressed by setting PYTORCH_PRINT_REPRO_ON_FAILURE=0
```
</details>
Test file path: `inductor/test_flex_attention.py`
cc @clee2000 @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
| true
|
3,012,190,753
|
DISABLED test_triton_kernel_to_post_grad_tracing_extern_kernel (__main__.TestProvenanceTracingArtifact)
|
pytorch-bot[bot]
|
open
|
[
"triaged",
"module: flaky-tests",
"skipped",
"oncall: pt2",
"module: inductor"
] | 4
|
NONE
|
Platforms: linux, rocm, slow
This test was disabled because it is failing in CI. See [recent examples](https://hud.pytorch.org/flakytest?name=test_triton_kernel_to_post_grad_tracing_extern_kernel&suite=TestProvenanceTracingArtifact&limit=100) and the most recent trunk [workflow logs](https://github.com/pytorch/pytorch/runs/40957718166).
Over the past 3 hours, it has been determined flaky in 23 workflow(s) with 46 failures and 23 successes.
**Debugging instructions (after clicking on the recent samples link):**
DO NOT ASSUME THINGS ARE OKAY IF THE CI IS GREEN. We now shield flaky tests from developers so CI will thus be green but it will be harder to parse the logs.
To find relevant log snippets:
1. Click on the workflow logs linked above
2. Click on the Test step of the job so that it is expanded. Otherwise, the grepping will not work.
3. Grep for `test_triton_kernel_to_post_grad_tracing_extern_kernel`
4. There should be several instances run (as flaky tests are rerun in CI) from which you can study the logs.
Test file path: `inductor/test_provenance_tracing.py`
cc @clee2000 @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
| true
|
3,012,190,566
|
DISABLED test_causal_block_non_divisible_cuda (__main__.TestFlexAttentionCUDA)
|
pytorch-bot[bot]
|
closed
|
[
"triaged",
"module: flaky-tests",
"skipped",
"oncall: pt2",
"module: inductor"
] | 2
|
NONE
|
Platforms: linux
This test was disabled because it is failing in CI. See [recent examples](https://hud.pytorch.org/flakytest?name=test_causal_block_non_divisible_cuda&suite=TestFlexAttentionCUDA&limit=100) and the most recent trunk [workflow logs](https://github.com/pytorch/pytorch/runs/40959408066).
Over the past 3 hours, it has been determined flaky in 8 workflow(s) with 16 failures and 8 successes.
**Debugging instructions (after clicking on the recent samples link):**
DO NOT ASSUME THINGS ARE OKAY IF THE CI IS GREEN. We now shield flaky tests from developers so CI will thus be green but it will be harder to parse the logs.
To find relevant log snippets:
1. Click on the workflow logs linked above
2. Click on the Test step of the job so that it is expanded. Otherwise, the grepping will not work.
3. Grep for `test_causal_block_non_divisible_cuda`
4. There should be several instances run (as flaky tests are rerun in CI) from which you can study the logs.
Test file path: `inductor/test_flex_attention.py`
cc @clee2000 @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
| true
|
3,012,190,393
|
DISABLED test_builtin_score_mods_dynamic_float16_score_mask_mod5_cuda_float16 (__main__.TestFlexAttentionCUDA)
|
pytorch-bot[bot]
|
closed
|
[
"triaged",
"module: flaky-tests",
"skipped",
"oncall: pt2",
"module: inductor"
] | 2
|
NONE
|
Platforms: linux
This test was disabled because it is failing in CI. See [recent examples](https://hud.pytorch.org/flakytest?name=test_builtin_score_mods_dynamic_float16_score_mask_mod5_cuda_float16&suite=TestFlexAttentionCUDA&limit=100) and the most recent trunk [workflow logs](https://github.com/pytorch/pytorch/runs/40960209355).
Over the past 3 hours, it has been determined flaky in 8 workflow(s) with 16 failures and 8 successes.
**Debugging instructions (after clicking on the recent samples link):**
DO NOT ASSUME THINGS ARE OKAY IF THE CI IS GREEN. We now shield flaky tests from developers so CI will thus be green but it will be harder to parse the logs.
To find relevant log snippets:
1. Click on the workflow logs linked above
2. Click on the Test step of the job so that it is expanded. Otherwise, the grepping will not work.
3. Grep for `test_builtin_score_mods_dynamic_float16_score_mask_mod5_cuda_float16`
4. There should be several instances run (as flaky tests are rerun in CI) from which you can study the logs.
<details><summary>Sample error message</summary>
```
Traceback (most recent call last):
File "/var/lib/jenkins/workspace/test/inductor/test_flex_attention.py", line 1174, in test_builtin_score_mods_dynamic
self.run_dynamic_test(score_mask_mod, dtype, device=device)
File "/var/lib/jenkins/workspace/test/inductor/test_flex_attention.py", line 849, in run_dynamic_test
golden_out1.backward(backward_grad1.to(torch.float64))
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_tensor.py", line 648, in backward
torch.autograd.backward(
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/autograd/__init__.py", line 354, in backward
_engine_run_backward(
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/autograd/graph.py", line 824, in _engine_run_backward
return Variable._execution_engine.run_backward( # Calls into the C++ engine to run the backward pass
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/autograd/function.py", line 307, in apply
return user_fn(self, *args)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_higher_order_ops/flex_attention.py", line 679, in backward
) = flex_attention_backward(
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_higher_order_ops/flex_attention.py", line 132, in __call__
return super().__call__(
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_ops.py", line 501, in __call__
return wrapper()
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_ops.py", line 497, in wrapper
return self.dispatch(
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_ops.py", line 357, in dispatch
return kernel(*args, **kwargs)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_ops.py", line 320, in maybe_run_autograd
return self(*args, **kwargs)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_higher_order_ops/flex_attention.py", line 132, in __call__
return super().__call__(
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_ops.py", line 501, in __call__
return wrapper()
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_ops.py", line 497, in wrapper
return self.dispatch(
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_ops.py", line 357, in dispatch
return kernel(*args, **kwargs)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_higher_order_ops/flex_attention.py", line 870, in sdpa_dense_backward
grad_scores, _, _, _, _, *grad_score_mod_captured = joint_score_mod(
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_functorch/apis.py", line 202, in wrapped
return vmap_impl(
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_functorch/vmap.py", line 334, in vmap_impl
return _flat_vmap(
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_functorch/vmap.py", line 484, in _flat_vmap
batched_outputs = func(*batched_inputs, **kwargs)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_functorch/apis.py", line 202, in wrapped
return vmap_impl(
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_functorch/vmap.py", line 334, in vmap_impl
return _flat_vmap(
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_functorch/vmap.py", line 484, in _flat_vmap
batched_outputs = func(*batched_inputs, **kwargs)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_functorch/apis.py", line 202, in wrapped
return vmap_impl(
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_functorch/vmap.py", line 334, in vmap_impl
return _flat_vmap(
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_functorch/vmap.py", line 484, in _flat_vmap
batched_outputs = func(*batched_inputs, **kwargs)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_functorch/apis.py", line 202, in wrapped
return vmap_impl(
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_functorch/vmap.py", line 334, in vmap_impl
return _flat_vmap(
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_functorch/vmap.py", line 484, in _flat_vmap
batched_outputs = func(*batched_inputs, **kwargs)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/fx/graph_module.py", line 833, in call_wrapped
return self._wrapped_call(self, *args, **kwargs)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/fx/graph_module.py", line 409, in __call__
raise e
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/fx/graph_module.py", line 396, in __call__
return super(self.cls, obj).__call__(*args, **kwargs) # type: ignore[misc]
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1751, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1762, in _call_impl
return forward_call(*args, **kwargs)
File "<eval_with_key>.1331 from /opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/fx/experimental/proxy_tensor.py:1265 in wrapped", line 6, in forward
add = torch.ops.aten.add.Tensor(arg0_1, sub); arg0_1 = sub = add = None
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_ops.py", line 806, in __call__
return self._op(*args, **kwargs)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_dynamo/_trace_wrapped_higher_order_op.py", line 142, in __torch_function__
return func(*args, **(kwargs or {}))
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_ops.py", line 806, in __call__
return self._op(*args, **kwargs)
torch.OutOfMemoryError: CUDA out of memory. Tried to allocate 1024.00 MiB. GPU 0 has a total capacity of 22.05 GiB of which 274.12 MiB is free. Including non-PyTorch memory, this process has 21.77 GiB memory in use. Of the allocated memory 5.83 GiB is allocated by PyTorch, and 15.67 GiB is reserved by PyTorch but unallocated. If reserved but unallocated memory is large try setting PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True to avoid fragmentation. See documentation for Memory Management (https://pytorch.org/docs/stable/notes/cuda.html#environment-variables)
To execute this test, run the following from the base repo dir:
python test/inductor/test_flex_attention.py TestFlexAttentionCUDA.test_builtin_score_mods_dynamic_float16_score_mask_mod5_cuda_float16
This message can be suppressed by setting PYTORCH_PRINT_REPRO_ON_FAILURE=0
```
</details>
Test file path: `inductor/test_flex_attention.py`
cc @clee2000 @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
| true
|
3,012,190,321
|
DISABLED test_builtin_score_mods_different_block_size_float32_score_mod1_BLOCK_SIZE3_cuda_float32 (__main__.TestFlexAttentionCUDA)
|
pytorch-bot[bot]
|
closed
|
[
"triaged",
"module: flaky-tests",
"skipped",
"oncall: pt2",
"module: inductor"
] | 2
|
NONE
|
Platforms: linux
This test was disabled because it is failing in CI. See [recent examples](https://hud.pytorch.org/flakytest?name=test_builtin_score_mods_different_block_size_float32_score_mod1_BLOCK_SIZE3_cuda_float32&suite=TestFlexAttentionCUDA&limit=100) and the most recent trunk [workflow logs](https://github.com/pytorch/pytorch/runs/40960209355).
Over the past 3 hours, it has been determined flaky in 8 workflow(s) with 16 failures and 8 successes.
**Debugging instructions (after clicking on the recent samples link):**
DO NOT ASSUME THINGS ARE OKAY IF THE CI IS GREEN. We now shield flaky tests from developers so CI will thus be green but it will be harder to parse the logs.
To find relevant log snippets:
1. Click on the workflow logs linked above
2. Click on the Test step of the job so that it is expanded. Otherwise, the grepping will not work.
3. Grep for `test_builtin_score_mods_different_block_size_float32_score_mod1_BLOCK_SIZE3_cuda_float32`
4. There should be several instances run (as flaky tests are rerun in CI) from which you can study the logs.
Test file path: `inductor/test_flex_attention.py`
cc @clee2000 @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
| true
|
3,012,182,617
|
[inductor][profiler] move `standalone_compile` import to TYPE_CHECKING
|
davidberard98
|
closed
|
[
"topic: not user facing",
"module: inductor",
"ciflow/inductor"
] | 2
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #151949
Fixes #151829
Allows .standalone_compile imports to be lazy, which prevents unnecessary execution when importing torch._inductor.
This matters for the profiler, because the profiler imports torch._inductor just to check the config (to see if cuda graphs is used).
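As a generic sketch of the lazy-import pattern being applied here (using pandas as a stand-in heavy dependency; the actual change targets the standalone_compile imports in inductor):
```python
from typing import TYPE_CHECKING

if TYPE_CHECKING:
    # seen only by the type checker; no import cost at runtime
    import pandas as pd

def to_dataframe(rows: "list[dict]") -> "pd.DataFrame":
    import pandas as pd  # deferred until the function is actually called
    return pd.DataFrame(rows)
```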
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
| true
|
3,012,175,303
|
add tlpare logs
|
xuanzhang816
|
open
|
[
"ciflow/inductor"
] | 2
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #151948
| true
|
3,012,172,942
|
Make `torch.jit.Error` inherit from Exception
|
alanhdu
|
closed
|
[
"oncall: jit",
"module: typing",
"triaged",
"open source",
"fb-exported",
"Merged",
"ciflow/trunk",
"topic: not user facing"
] | 8
|
CONTRIBUTOR
|
Summary:
I can confirm that `torch.jit.Error.mro()` contains `Exception` in the inheritance hierarchy.
This avoids a bunch of `pyre-ignore`s in D73352417.
Test Plan: Sandcastle
Differential Revision: D73464544
cc @EikanWang @jgong5 @wenzhe-nrv @sanchitintel @ezyang @malfet @xuzhao9 @gramster
| true
|
3,012,163,215
|
[profiler] use inspect.getattr_static to avoid importing inductor
|
davidberard98
|
open
|
[
"topic: not user facing"
] | 2
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #151946
`hasattr(torch, "_inductor")` causes inductor to be imported, which is part of the issue in #151829. We can use `inspect.getattr_static` to do this check instead, to avoid actually importing inductor. The `getattr_static`-based check will return False until inductor gets imported, and it seems safe to assume that cudagraphs (the main reason we're checking for inductor) isn't going to be enabled unless we're actually using inductor (in which case, inductor would already be imported).
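A small sketch of the difference (not the actual patch):
```python
import inspect
import torch

# hasattr() triggers the lazy import of torch._inductor as a side effect,
# while getattr_static() only inspects what is already in torch.__dict__,
# so this check stays False until something else has imported inductor.
_sentinel = object()
inductor_imported = inspect.getattr_static(torch, "_inductor", _sentinel) is not _sentinel
```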
| true
|
3,012,148,443
|
standalone compile FakeTensor from_graph detection with tensor subclass outputs
|
zou3519
|
open
|
[
"triaged",
"oncall: pt2"
] | 0
|
CONTRIBUTOR
|
See https://github.com/pytorch/pytorch/pull/151788#discussion_r2054755043
cc @chauhang @penguinwu
| true
|
3,012,144,707
|
[torchbind] fix error message when attr is a real tensor.
|
ydwu4
|
closed
|
[
"fb-exported",
"Merged",
"ciflow/trunk",
"topic: not user facing"
] | 4
|
CONTRIBUTOR
|
Summary: Previously, when attr is defined, "if attr" would try to evaluate the data of attr, which is not intended, and we would get an ugly error stack if the attr is not evaluable (like a fake tensor) before the callable(attr) check.
Test Plan: Existing tests.
Reviewed By: yushangdi, henryoier
Differential Revision: D73460905
| true
|
3,012,140,729
|
[c10d] Allow split_group to work with non nccl backends
|
deepshah133
|
closed
|
[
"oncall: distributed",
"fb-exported",
"ciflow/trunk",
"release notes: distributed (c10d)"
] | 2
|
CONTRIBUTOR
|
Summary: Currently things are hardcoded to only work with the NCCL backend. Extend it to allow NCCL + custom plugins.
Differential Revision: D73399889
cc @H-Huang @awgu @wanchaol @fegin @fduwjj @wz337 @wconstab @d4l3k
| true
|
3,012,139,108
|
[PT2] - Allowlist should have precedence
|
flaviotruzzi
|
closed
|
[
"fb-exported",
"Merged",
"ciflow/trunk",
"topic: not user facing",
"module: dynamo",
"ciflow/inductor"
] | 20
|
CONTRIBUTOR
|
Summary: When working on List[List[int]], the ints were being considered Constants regardless of their inclusion on the allowlist.
Test Plan:
CI + new test
https://www.internalfb.com/intern/testinfra/testrun/5066549856504774
Differential Revision: D73137631
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @chauhang @amjames
| true
|
3,012,120,658
|
[WIP][dynamic shapes] whitelist at dim-level
|
pianpwk
|
open
|
[
"module: dynamo",
"ciflow/inductor"
] | 2
|
CONTRIBUTOR
|
Fixes #ISSUE_NUMBER
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @chauhang @amjames
| true
|
3,012,119,985
|
Add check for 2-dim mask to COO mask computation
|
aartbik
|
closed
|
[
"triaged",
"open source",
"Merged",
"ciflow/trunk",
"topic: not user facing"
] | 4
|
COLLABORATOR
|
Follow-up on the discussion in https://github.com/pytorch/pytorch/pull/151794. Related to all fixes for https://github.com/pytorch/pytorch/issues/151351.
| true
|
3,012,106,714
|
Fix circular imports
|
oulgen
|
closed
|
[
"Merged",
"ciflow/trunk",
"topic: not user facing",
"module: inductor",
"module: dynamo",
"ciflow/inductor"
] | 3
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #151939
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
| true
|
3,012,073,010
|
[pytorch] use a mutex in initialize_torch_libraries
|
rmaz
|
closed
|
[
"fb-exported",
"Merged",
"ciflow/trunk",
"topic: not user facing"
] | 7
|
CONTRIBUTOR
|
Summary: The TORCH_LIBRARY_THREAD_UNSAFE_LAZY_INIT feature is thread-unsafe when calling the initializers, but we want the deferred initializer call to be safe from multiple threads. Add a mutex to ensure thread-safe construction of the libraries post launch.
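A language-agnostic sketch of the pattern, in Python for brevity (the real change is in the C++ library-initialization path; names here are placeholders):
```python
import threading

_init_lock = threading.Lock()
_initialized = False

def initialize_torch_libraries():
    """Deferred initializer that may be reached from several threads."""
    global _initialized
    with _init_lock:          # the added mutex: only one thread runs the initializers
        if _initialized:
            return
        _run_deferred_initializers()  # placeholder for the recorded library init calls
        _initialized = True

def _run_deferred_initializers():
    pass  # stand-in for the real registration work
```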
Differential Revision: D73457714
| true
|
3,012,071,883
|
[Inductor UT] Generalize device-bias code in `test_flex_attention.py`
|
anmyachev
|
closed
|
[
"triaged",
"open source",
"Merged",
"Reverted",
"topic: not user facing",
"module: inductor",
"ci-no-td"
] | 25
|
COLLABORATOR
|
@EikanWang @etaf @guangyey please take a look
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
| true
|