id int64 2.74B 3.05B | title stringlengths 1 255 | user stringlengths 2 26 | state stringclasses 2 values | labels listlengths 0 24 | comments int64 0 206 | author_association stringclasses 4 values | body stringlengths 7 62.5k ⌀ | is_title bool 1 class |
|---|---|---|---|---|---|---|---|---|
2,830,972,735 | [Metal] Small speedup for `sum`/`prod` | malfet | closed | [
"Merged",
"topic: not user facing",
"ciflow/mps"
] | 3 | CONTRIBUTOR | Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #146436
* #146429
* __->__ #146428
* #146423
As they cannot really be invoked over empty arrays | true |
2,830,957,369 | add the `torch.float8_e8m0fnu` dtype to PyTorch | vkuzo | closed | [
"module: cpu",
"release notes: quantization",
"module: float8"
] | 10 | CONTRIBUTOR | Summary:
Adds the `torch.float8_e8m0fnu` dtype to PyTorch, as detailed in
https://github.com/pytorch/pytorch/issues/146414 . Please see the issue for a detailed definition of the format. Example of basic functionality:
```python
import torch
# round trip
x0 = torch.randn(4, 4, dtype=torch.float32)
x1 = x0.to(torch.float8_e8m0fnu) # RNE rounding
x2 = x1.to(torch.float32) # 2 ** exponent
# creation with empty
x0 = torch.empty(4, 4, dtype=torch.float8_e8m0fnu)
# printing
print(x0)
```
Done in this PR:
* numerical correctness
* op coverage (except for `torch._scaled_mm`): create tensor, cast to/from float32
* printing a tensor works
For future PRs:
* performance optimizations for casting
* torch._scaled_mm
* PT2
* various cleanups (detailed in comments with issue numbers)
Test Plan:
```
pytest test/quantization/core/experimental/test_float8.py -s
```
Reviewers:
Subscribers:
Tasks:
Tags:
cc @yanbing-j @albanD @kadeng @penguinwu @jgong5 @mingfeima @XiaobingSuper @sanchitintel @ashokei @jingxu10 | true |
2,830,956,700 | Test typing of arithmetic operators on Tensor (see #145838) | rec | closed | [
"open source",
"Merged",
"ciflow/trunk",
"topic: not user facing"
] | 12 | COLLABORATOR | See #145838
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #146426
| true |
2,830,953,854 | [ONNX] Create deprecation warning on dynamo_export | justinchuby | closed | [
"module: onnx",
"open source",
"Merged",
"Reverted",
"ciflow/trunk",
"release notes: onnx",
"topic: deprecation",
"ci-no-td"
] | 28 | COLLABORATOR | Reland #146003
Deprecation of `torch.onnx.dynamo_export`:
* [`torch/onnx/_internal/_exporter_legacy.py`]: Added deprecation warnings to the `OnnxRegistry`, `ExportOptions`, `ONNXRuntimeOptions`, and `dynamo_export` functions, indicating that `torch.onnx.dynamo_export` is deprecated since version 2.6.0 and should be replaced with `torch.onnx.export(..., dynamo=True)`.
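The deprecation warnings follow the usual Python pattern; a minimal sketch (the stub body and message wording are assumptions for illustration, not the actual torch.onnx code):

```python
import warnings

def dynamo_export(*args, **kwargs):
    # Illustrative sketch of the deprecation pattern this PR adds;
    # the real function goes on to perform the export after warning.
    warnings.warn(
        "torch.onnx.dynamo_export is deprecated since 2.6.0; "
        "use torch.onnx.export(..., dynamo=True) instead",
        DeprecationWarning,
        stacklevel=2,
    )
```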
| true |
2,830,951,751 | cpp_wrapper: fix test_torchinductor* tests | benjaminglass1 | closed | [
"module: cpu",
"open source",
"Merged",
"topic: not user facing",
"module: inductor",
"ciflow/inductor"
] | 4 | COLLABORATOR | Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #147225
* #146706
* #147403
* #146991
* #147215
* __->__ #146424
* #146109
cc @jgong5 @mingfeima @XiaobingSuper @sanchitintel @ashokei @jingxu10 @voznesenskym @penguinwu @EikanWang @Guobing-Chen @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @amjames @desertfire @chauhang @aakhundov | true |
2,830,948,781 | [Metal][BE] Add `#pragma once` to all headers | malfet | closed | [
"Merged",
"ciflow/trunk",
"topic: not user facing",
"ciflow/mps"
] | 4 | CONTRIBUTOR | Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #146436
* #146429
* #146428
* __->__ #146423
| true |
2,830,904,568 | [metal] Add a missing cast to make the call to copysign unambiguous. | dcci | closed | [
"Merged",
"topic: not user facing",
"module: mps",
"ciflow/mps"
] | 3 | MEMBER | cc @kulinseth @albanD @malfet @DenisVieriu97 @jhavukainen | true |
2,830,885,330 | experimental specialization logging | bobrenjc93 | closed | [
"Stale",
"ciflow/trunk",
"release notes: fx",
"fx",
"ciflow/inductor"
] | 3 | CONTRIBUTOR | Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #146421
cc @ezyang @SherlockNoMad @EikanWang @jgong5 @wenzhe-nrv
Differential Revision: [D69163120](https://our.internmc.facebook.com/intern/diff/D69163120) | true |
2,830,866,031 | [ROCm] Optimize the stride one indexing backwards kernel | doru1004 | closed | [
"module: rocm",
"triaged",
"open source",
"Merged",
"ciflow/trunk",
"topic: not user facing",
"ciflow/periodic",
"rocm",
"ciflow/rocm"
] | 28 | CONTRIBUTOR | This patch makes several changes to the stride 1 backwards indexing kernel as follows:
- enables the computation across the `sorted_indices` array to happen in parallel by all the lanes in the warp, this means that the accesses to `sorted_indices` are now fully coalesced.
- the duplicate counting now happens in parallel: each lane in the warp counts the duplicates of a different `idx`.
- enable skipping during the duplicate count: this optimization ensures that for a large number of duplicates we can skip 32 values at a time to speed up the count.
- for a low number of duplicates, i.e. fewer than `warp-size` duplicates, just perform the tail reduction, which avoids the wasteful parallel reduction across the warp for this case (it would only add zero values).
- for a high number of duplicates, i.e. when there are more than `warp-size` duplicates, we still use the full warp of lanes to compute the reduced value with as much parallelism as possible. This is done by making sure that all lanes stick around and cooperatively execute the reduction in case there is a single `idx` with a large number of duplicates (i.e. a duplicate spike). For this to happen, we use shared memory to pass the duplicate count computed in parallel in the first part of the kernel to the cooperative reduction part of the kernel.
Benefits on examples extracted from workloads show a 3.6x to 10x speed-up.
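A serial pure-Python sketch of the skipping duplicate count described above (the real kernel performs this with all lanes of a warp in parallel; names are illustrative):

```python
WARP_SIZE = 32  # number of lanes cooperating in the real kernel

def count_duplicates(sorted_indices, start):
    """Count how many entries equal sorted_indices[start].

    Fast path: while a whole WARP_SIZE-wide block still matches,
    leap over it at once; then count the (< WARP_SIZE) tail one by one.
    """
    idx = sorted_indices[start]
    n, pos = len(sorted_indices), start
    # skip WARP_SIZE matching entries at a time
    while pos + WARP_SIZE <= n and sorted_indices[pos + WARP_SIZE - 1] == idx:
        pos += WARP_SIZE
    # tail reduction for the remaining duplicates
    while pos < n and sorted_indices[pos] == idx:
        pos += 1
    return pos - start
```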
co-author: Hashem Hashemi <Hashem.Hashemi@amd.com>
cc @jeffdaily @sunway513 @jithunnair-amd @pruthvistony @ROCmSupport @dllehr-amd @jataylo @hongxiayang @naromero77amd | true |
2,830,850,956 | The value of requires_grad is not set when creating the tensor using TensorMaker | irshadcc | closed | [
"module: internals",
"module: cpp",
"triaged"
] | 2 | CONTRIBUTOR | ### 🐛 Describe the bug
I was trying to create a weight tensor using at::from_blob and I set the requires_grad flag in tensor options. After I created the tensor and checked the requires_grad value, I found that the requires_grad flag is not set.
```C++
#include <iostream>
#include <ATen/ops/embedding.h>
#include <ATen/ops/from_blob.h>
#include <ATen/ops/ones.h>
#include <c10/core/ScalarType.h>
#include <c10/core/TensorOptions.h>
#include <ATen/Tensor.h>
#include <c10/core/TensorImpl.h>
/**
* Create a 2-D Tensor with shape (v,d).
*/
at::Tensor create_weight(int v, int d) {
long numel = v*d;
float* data = new float[numel];
for (long i = 0 ; i < numel; i++) {
data[i] = static_cast<float>(i);
}
auto options = at::TensorOptions()
.requires_grad(true)
.dtype(c10::ScalarType::Float);
auto tensor = at::from_blob(data, {v,d}, options);
if (tensor.requires_grad()) {
std::cout << "requires_grad is true " << "\n";
} else {
std::cout << "requires_grad is false" << "\n";
}
return tensor;
}
int main() {
auto options = at::TensorOptions().dtype(c10::ScalarType::Long);
at::Tensor t = at::ones({ 0}, options);
at::Tensor weight = create_weight(3, 8);
return 0 ;
}
```
### Versions
[env_info.txt](https://github.com/user-attachments/files/18660854/env_info.txt)
cc @ezyang @bhosmer @smessmer @ljk53 @bdhirsh @jbschlosser | true |
2,830,786,352 | [BE]: Add TypeVarTuple to RNN Args for better type inference | Skylion007 | closed | [
"open source",
"Stale"
] | 3 | COLLABORATOR | Fixes #ISSUE_NUMBER
| true |
2,830,728,419 | Only call triton in worker process, kick off worker processes earlier, during inductor codegen | jamesjwu | closed | [
"fb-exported",
"Merged",
"ciflow/trunk",
"topic: not user facing",
"module: inductor",
"ciflow/inductor"
] | 41 | CONTRIBUTOR | Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #146417
### Big idea
This PR extends https://github.com/pytorch/pytorch/pull/144288 by combining calling triton in worker processes with the future cache: we kick off triton compilation in the worker processes earlier, during inductor codegen. Basically instead of calling async_compile.triton for the first time only after the entire code has been generated, we start compiling as soon as we know we'll need to compile the kernel. Then, when loading the generated inductor code, we can simply read from our in memory future cache, considerably increasing the parallelism.
### Implementation Overview
In total, the diff does the following:
- Converts TritonFuture to LambdaFuture, only calling triton.compile on worker processes
- Now that triton.compile() isn't called on the main process, we call TritonBundler on all compiled kernels when we get them back from workers
- Extend @eellison's future cache to a class, mostly as a refactor
- Finally, call async_compile.triton ahead of time in Scheduler.codegen if workers are warmed up. This causes the subsequent
async_compile.triton call that occurs after codegen to cache hit on cold start.
In the diffs after this, I will add more to CompiledTritonKernels so that TritonBundler, on a warm start, automatically populates the in memory cache on warm start with the existing triton kernels, avoiding calling triton altogether on warm starts.
Because LambdaFutures are much faster to kick off than TritonFutures, due to not needing to load from TritonCodeCache at all, the time spent kicking off these worker jobs is pretty minimal for inductor codegen.
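A minimal sketch of the future-cache idea, using a thread pool to stand in for the worker processes (class and method names are illustrative, not the actual inductor API):

```python
from concurrent.futures import Future, ThreadPoolExecutor

class CompiledKernelCache:
    """In-memory future cache: the first request for a kernel source
    kicks off compilation on a worker and stores the Future; later
    requests (e.g. when the generated code is loaded) hit the cache
    and simply wait on the existing Future."""

    def __init__(self, workers: int = 4):
        self._pool = ThreadPoolExecutor(max_workers=workers)
        self._futures: dict[str, Future] = {}

    def compile(self, source: str, compile_fn) -> Future:
        fut = self._futures.get(source)
        if fut is None:  # cold: start compiling now, during codegen
            fut = self._pool.submit(compile_fn, source)
            self._futures[source] = fut
        return fut  # warm: the same Future, no recompilation
```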
Differential Revision: [D69123174](https://our.internmc.facebook.com/intern/diff/D69123174/)
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @amjames @desertfire @chauhang @aakhundov | true |
2,830,722,495 | Silent correctness bug in Inductor when fusing transpose into other ops | lw | closed | [
"high priority",
"triage review",
"oncall: distributed",
"triaged",
"module: correctness (silent)",
"oncall: pt2",
"module: inductor"
] | 2 | CONTRIBUTOR | ### 🐛 Describe the bug
```py
import os
import torch
import torch.distributed._functional_collectives as funcol
os.environ["RANK"] = "0"
os.environ["WORLD_SIZE"] = "1"
os.environ["MASTER_ADDR"] = "127.0.0.1"
os.environ["MASTER_PORT"] = "2743"
torch.distributed.init_process_group(backend="nccl")
def scale(t):
scale = torch.finfo(torch.float8_e4m3fn).max / t.abs().amax(dim=-1, keepdim=True).float()
t = t.mul(scale).to(torch.float8_e4m3fn)
return t, scale
def fp8_rowwise_backward(in_, w, out_grad):
out_grad_fp8, scale_out_grad = scale(out_grad)
w_fp8, scale_w = scale(w.t().contiguous())
out_grad_fp8 = funcol.all_gather_tensor(
out_grad_fp8, gather_dim=0, group=torch.distributed.group.WORLD
)
scale_out_grad = funcol.all_gather_tensor(
scale_out_grad, gather_dim=0, group=torch.distributed.group.WORLD
)
in_grad = torch._scaled_mm(
out_grad_fp8, w_fp8.t(), scale_a=scale_out_grad, scale_b=scale_w.t(), out_dtype=torch.bfloat16
)
out_grad = funcol.all_gather_tensor(
out_grad.t().contiguous(), gather_dim=0, group=torch.distributed.group.WORLD
)
w_grad = out_grad @ in_
return in_grad, w_grad
in_ = torch.randn((3072, 4096), device="cuda", dtype=torch.bfloat16)
w = torch.randn((4096, 4096), device="cuda", dtype=torch.bfloat16)
out_grad = torch.randn((3072, 4096), device="cuda", dtype=torch.bfloat16)
eager_in_grad, eager_w_grad = fp8_rowwise_backward(in_, w, out_grad)
compile_in_grad, compile_w_grad = torch.compile(fp8_rowwise_backward)(in_, w, out_grad)
torch.testing.assert_close(compile_w_grad, eager_w_grad)  # raises on mismatch
```
The issue seems to come from Inductor trying to fuse the `scale(out_grad)` operation (which is a row-wise reduction + pointwises) with the `out_grad.t().contiguous()` step, probably in order to load `out_grad` only once. However, these ops aren't easy to fuse (one works on rows, the other one on blocks), and indeed Inductor ends up messing up the transposition. The result it produces for the transposition is in fact the same storage "reinterpreted" as a transposed shape.
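The difference between a real transpose and a storage reinterpretation can be seen with a small pure-Python sketch of a 2x3 row-major buffer (independent of Inductor, for illustration only):

```python
# A 2x3 row-major matrix and its underlying storage
storage = [0, 1, 2, 3, 4, 5]
rows, cols = 2, 3

# true transpose: element (c, r) of the result is element (r, c) of the input
transpose = [[storage[r * cols + c] for r in range(rows)] for c in range(cols)]

# what the bad fusion effectively produced: the same storage merely
# reinterpreted with the transposed shape (3x2), with no data movement
reinterpret = [storage[i * rows:(i + 1) * rows] for i in range(cols)]
```

A correct `t().contiguous()` must move data; reading the same bytes back under the transposed shape yields different elements, which is why nearly every element mismatches.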
### Error logs
```
AssertionError: Tensor-likes are not close!
Mismatched elements: 16690833 / 16777216 (99.5%)
Greatest absolute difference: 422.0 at index (2854, 3714) (up to 1e-05 allowed)
Greatest relative difference: 15335424.0 at index (803, 2794) (up to 0.016 allowed)
```
### Versions
PyTorch nightly `2.7.0.dev20250120+cu126`
cc @ezyang @gchanan @zou3519 @kadeng @msaroufim @H-Huang @awgu @kwen2501 @wanchaol @fegin @fduwjj @wz337 @wconstab @d4l3k @c-p-i-o @chauhang @penguinwu @voznesenskym @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @muchulee8 @amjames @desertfire @aakhundov | true |
2,830,715,038 | Only call triton in worker process, ahead of time compile | jamesjwu | closed | [
"fb-exported",
"Stale",
"module: inductor",
"ciflow/inductor"
] | 4 | CONTRIBUTOR | Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #146415
# Big idea
This PR extends https://github.com/pytorch/pytorch/pull/144288 by combining calling triton in worker processes with the future cache: we kick off triton compilation in the worker processes earlier, during inductor codegen. Basically instead of calling async_compile.triton for the first time only after the entire code has been generated, we start compiling as soon as we know we'll need to compile the kernel. Then, when loading the generated inductor code, we can simply read from our in memory future cache, considerably increasing the parallelism.
# Implementation Overview
In total, the diff does the following:
- Converts TritonFuture to LambdaFuture, only calling triton.compile on worker processes
- Now that triton.compile() isn't called on the main process, we call TritonBundler on all compiled kernels when we get them back from workers
- Extend @eellison's future cache to a class, mostly as a refactor
- Finally, call async_compile.triton ahead of time in Scheduler.codegen if workers are warmed up. This causes the subsequent
async_compile.triton call that occurs after codegen to cache hit on cold start.
In the diffs after this, I will add more to CompiledTritonKernels so that TritonBundler, on a warm start, automatically populates the in memory cache on warm start with the existing triton kernels, avoiding calling triton altogether on warm starts.
Because LambdaFutures are much faster to kick off than TritonFutures, due to not needing to load from TritonCodeCache at all, the time spent kicking off these worker jobs is pretty minimal for inductor codegen.
### Can we split the diff for easier review?
It's best if this diff lands atomically with all of these changes, as doing the ahead of time codegen compile is only performant if we replace TritonFuture with LambdaFuture(as we don't need to load the triton kernel on the main process). However, I've made a diff stack for easier reviewing here:
- D69070048 - Run async_compile.triton ahead of time in Scheduler.codegen
- D68633454 - Only call triton in worker process
Differential Revision: [D69070616](https://our.internmc.facebook.com/intern/diff/D69070616/)
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @amjames @desertfire @chauhang @aakhundov | true |
2,830,683,665 | MX basic dtypes in pytorch/pytorch | vkuzo | open | [
"triaged",
"enhancement",
"needs research",
"module: python frontend"
] | 10 | CONTRIBUTOR | ### 🚀 The feature, motivation and pitch
# Overview
The Open Compute Project introduced the [MicroScaling formats (MX)](https://www.opencompute.org/documents/ocp-microscaling-formats-mx-v1-0-spec-final-pdf) in Sep 2023, defining block-scaled dtypes with E8M0 block scales and FP8|FP6|FP4|INT8 block elements. We propose adding the E8M0 and FP4 dtypes from the MX spec to pytorch/pytorch. We expect the derived dtype logic (combining scales + data) and the modeling logic for converting a high precision model to use MX to live outside of pytorch/pytorch.
Proposed timeline for adding MX basic dtypes, as of 2025-02-03:
| MX basic dtype | already in pytorch/pytorch? | proposed timeline to add to pytorch/pytorch |
|---|---|---|
| E8M0 | no | now (2025Q1) |
| FP8 (E5M2) | yes | n/a |
| FP8 (E4M3) | yes | n/a |
| FP6 (E3M2) | no | revisit later |
| FP6 (E2M3) | no | revisit later |
| FP4 (E2M1) | no | now (2025Q1) |
| INT8 | yes | n/a |
Next, we motivate the inclusion of these dtypes to pytorch/pytorch, detail the proposed implementations of E8M0 and FP4(E2M1), and enumerate open questions about FP6.
# Motivation for adding MX dtypes to pytorch/pytorch
1. We expect MX dtypes to be widely used, and we want to make them easy to use in the PyTorch ecosystem.
2. These dtypes meet the two major [criteria](https://dev-discuss.pytorch.org/t/supporting-new-dtypes-in-pytorch/1833#h-3-what-is-the-criteria-for-adding-a-new-dtype-to-pytorch-core-4) we look for when adding a new dtype to PyTorch:
a. predicted wide usage, as evidenced by silicon support in major accelerators
b. the dtype must be meaningful without any extra metadata (the derived dtype logic of combining the E8M0 scales with the FP8|FP6|FP4|INT8 elements will live elsewhere)
3. These dtypes have been added to ONNX ([1](https://github.com/onnx/onnx/pull/6318)), MLIR ([1](https://github.com/llvm/llvm-project/pull/111028), [2](https://github.com/llvm/llvm-project/pull/108877)) and Jax ml-dtypes ([1](https://github.com/jax-ml/ml_dtypes/pull/166), [2](https://github.com/jax-ml/ml_dtypes/pull/181)), and are under consideration in XLA ([RFC](https://github.com/openxla/xla/discussions/18085)). We would like for PyTorch to be consistent with other frameworks where applicable.
An alternative to adding these basic dtypes to pytorch/pytorch would be to build an out-of-core extension point, similar to [out-of-core device](https://pytorch.org/tutorials/advanced/privateuseone.html). Given (1) the expected wide usage of MX dtypes and (2) the engineering cost/risk to build such an extension point, we propose to add these dtypes directly to pytorch/pytorch at this point. We may revisit an out-of-core extension point at a future time.
# Shell dtypes in PyTorch
Dtypes such as `torch.float32` and `torch.float16` in PyTorch provide broad op coverage across multiple backends for a wide range of use cases. For recently popularized low precision dtypes (such as float8), the hardware support is sporadic and the # of proven use cases is smaller:
* low precision gemm (example: hardware accelerated float8 and int8 gemm, but only on some hardware)
* tensor compression (examples: low precision all-to-all, w4a16 gemm)
We propose to formalize the support of low precision dtypes in PyTorch under the name of **shell dtype**, and clearly enumerate the expected op and backend support for shell dtypes:
* **shell dtype**: a specialized dtype in pytorch/pytorch where only a small subset of ops and backends are expected to be supported
op coverage
* **shall support**:
* tensor creation (empty, fill, zeros, etc)
* tensor operations which do not peek inside or create new data elements (cat, view, reshape, etc)
* **might support**: tensor operations which do peek inside or create new data elements, on a case-by-case basis.
* key considerations include real world importance, presence of hardware accelerated kernels in widely adopted accelerators, maturity
* examples: casting, low precision gemm, nan/inf checks
* counter-examples: +,-,*,/ (lack of widely adopted use case)
* there is **no expectation of wide op coverage**
* backend coverage - case by case
Existing low precision dtypes in pytorch/pytorch such as torch.float8_e4m3fn and torch.float8_e5m2 as well as the new dtypes proposed in this RFC can be categorized as shell dtypes.
# Demystifying dtype naming suffixes
The "fn" and "fnuz" suffixes found in non-IEEE dtype names across PyTorch, MLIR and LLVM mean:
* "f" - finite value encodings only, no infinity
* "n" - nan value encodings differ from the IEEE spec
* "uz" - "unsigned zero" only, i.e. no negative zero encoding
Sources: ([1](https://github.com/openxla/stablehlo/blob/main/rfcs/20230321-fp8_fnuz.md)), ([2](https://discourse.llvm.org/t/rfc-add-apfloat-and-mlir-type-support-for-fp8-e5m2/65279/15))
# E8M0 detailed proposal
<img width="423" alt="Image" src="https://github.com/user-attachments/assets/206ac948-2de3-48dc-8995-47b6f4cafe85" />
_(image source: https://www.opencompute.org/documents/ocp-microscaling-formats-mx-v1-0-spec-final-pdf , Table 7)_
* proposed name in PyTorch: `torch.float8_e8m0fnu`
* float8 prefix to clearly mark that this is a floating point dtype with 8 bits, aligned with naming from LLVM ([PR](https://github.com/llvm/llvm-project/pull/107127)).
* e8m0 suffix for the EM bits
* f suffix is for finite values only (no infinity)
* n suffix is for non-standard NaN encoding
* u suffix is for unsigned
* encoding semantics: match the [OCP MX spec, Section 5.4.1](https://www.opencompute.org/documents/ocp-microscaling-formats-mx-v1-0-spec-final-pdf) exactly
* expected op support
* float32 -> e8m0 cast
* rounding: RNE, matching the IEEE 754 default rounding mode.
* note that this does not match the rounding to e8m0 in the MX spec (round towards zero), or the rounding to e8m0 in CUDA 12.8 (round towards zero or round towards positive infinity). Alternative rounding modes can be provided out of core or with a separate in-core API, which is out of scope for this document.
* saturation: set to NaN
* e8m0 -> float32 cast
* pow(2, e8m0_val - e8m0_bias), with accounting for NaN
* tensor creation and operations which do not peek inside or create data elements (cat, reshape, etc)
* scaled gemm via torch._scaled_mm (unofficial and no BC guarantees)
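The proposed encode/decode semantics can be sketched in pure Python (simplifications are mine: the cast below rounds half-up on the exponent rather than RNE, and the treatment of non-positive inputs is an assumption):

```python
import math

E8M0_BIAS = 127
E8M0_NAN = 0xFF  # the only NaN encoding; there is no zero or infinity

def e8m0_to_float32(bits: int) -> float:
    """Decode: pow(2, bits - bias), with 0xFF decoding to NaN."""
    if bits == E8M0_NAN:
        return math.nan
    return math.ldexp(1.0, bits - E8M0_BIAS)

def float32_to_e8m0(x: float) -> int:
    """Cast to a nearby power of two, clamped to the representable range;
    non-finite (and, as a simplification here, non-positive) inputs map
    to the NaN encoding."""
    if not math.isfinite(x) or x <= 0.0:
        return E8M0_NAN
    frac, exp = math.frexp(x)   # x = frac * 2**exp with frac in [0.5, 1)
    e = exp - 1                 # so x = (2*frac) * 2**e with 2*frac in [1, 2)
    if 2 * frac > 1.5:          # simplified rounding (the proposal uses RNE)
        e += 1
    return max(0, min(254, e + E8M0_BIAS))
```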
# FP4 (E2M1) detailed proposal
<img width="514" alt="Image" src="https://github.com/user-attachments/assets/e7da6e27-b842-49ca-a9d6-707996f0b27b" />
_(image source: https://www.opencompute.org/documents/ocp-microscaling-formats-mx-v1-0-spec-final-pdf, Table 5)_
* proposed name: `torch.float4_e2m1fn_x2`
* e2m1 for the EM bits (sign is implied)
* f suffix for finite values only (no infinity)
* _x2 suffix for packed representation of two float4_e2m1f values into one byte
* encoding semantics: match the [OCP MX spec, Section 5.3.3](https://www.opencompute.org/documents/ocp-microscaling-formats-mx-v1-0-spec-final-pdf) exactly
* expected op support
* tensor creation and operations which do not peek inside or create data elements (cat, reshape, etc)
* float32|bfloat16 -> float4_e2m1fn_x2 cast
* rounding mode: RNE (matching MX spec default)
* overflow behavior: saturate (matching MX spec default)
* float4_e2m1fn_x2 -> float32|bfloat16 cast
* scaled gemm via torch._scaled_mm (unofficial and no BC guarantees)
# FP6 (E3M2 and E2M3)
It is not immediately clear whether FP6 warrants addition to pytorch/pytorch, due to:
1. lack of known silicon support in hardware that is expected to be widely adopted
2. unclear packing semantics we would want in pytorch/pytorch
We can revisit adding FP6 at a later time.
# Expected PyTorch low level modeling code for a MX-compliant scaled gemm
This is what we expect the code for preparing high precision tensors for an MX-compliant low precision scaled gemm to look like with the new dtypes.
```python
M, K, N = 128, 256, 512
a_bf16 = torch.randn(M, K, device="cuda", dtype=torch.bfloat16)
b_bf16 = torch.randn(K, N, device="cuda", dtype=torch.bfloat16)
# calculate 32x1 blocked scale in float32 - not shown, this can be done with
# in-core operations
a_32x1_blocked_scale_fp32 = calculate_blocked_scale(a_bf16, block_size=32)
# cast the blocked float32 scale to e8m0 with RNE rounding (NEW)
# note: other rounding methods may be implemented out-of-core
a_32x1_blocked_scale_e8m0 = a_32x1_blocked_scale_fp32.to(torch.float8_e8m0fnu)
# cast the scale back to float32 (NEW)
a_32x1_blocked_scale_e8m0_fp32 = a_32x1_blocked_scale_e8m0.to(torch.float32)
# scale the original tensor
a_bf16_scaled = a_bf16 / a_32x1_blocked_scale_e8m0_fp32
# cast bf16 to float8|float6|float4 (NEW)
a_float4_e2m1f_x2 = a_bf16_scaled.to(torch.float4_e2m1fn_x2)
# the logic for b is the same as a and is skipped for brevity
b_float4_e2m1f_x2, b_32x1_blocked_scale_e8m0 = ...
# call the scaled gemm kernel
# note that this is a private API with no BC guarantees, for now
c_bf16 = torch._scaled_mm(
a_float4_e2m1f_x2,
b_float4_e2m1f_x2,
scale_a=a_32x1_blocked_scale_e8m0,
scale_b=b_32x1_blocked_scale_e8m0,
out_dtype=torch.bfloat16,
)
```
# Proposed future integrations
## triton
We hope to work together with the triton team on consistent naming and functionality of basic MX dtypes across PyTorch and triton.
## PyTorch 2.0
We plan to plumb E8M0 and FP4 throughout the PT2 stack, similar to what was done with float8.
## PyTorch export
If there is a strong need, we are open to eventually making the MX basic dtypes exportable.
## e2e training / inference flows
We expect the higher-level abstractions around MX to live outside of pytorch/pytorch (for example, in [torchao](https://github.com/pytorch/ao)), and be implemented similarly to the higher-level abstractions that exist today for float8 training and inference. We plan to evolve torchao's current [emulation-only MX training/inference prototype](https://github.com/pytorch/ao/tree/main/torchao/prototype/mx_formats) to use PyTorch core dtypes and hardware accelerated MX gemms.
### Alternatives
_No response_
### Additional context
_No response_
cc @albanD | true |
2,830,668,570 | add support for capturing provenance of unary operations | bobrenjc93 | closed | [
"Merged",
"Reverted",
"ciflow/trunk",
"release notes: fx",
"topic: not user facing",
"fx",
"ciflow/inductor",
"ci-no-td"
] | 17 | CONTRIBUTOR | Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #146413
* #145848
cc @ezyang @SherlockNoMad @EikanWang @jgong5 @wenzhe-nrv | true |
2,830,668,448 | use DTRACE_ENV_VAR as the trace logs directory of set | bobrenjc93 | closed | [
"Merged",
"ciflow/trunk",
"topic: not user facing"
] | 10 | CONTRIBUTOR | Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #146413
* __->__ #146412
* #145848
```
(/home/bobren/local/a/pytorch-env) [7:47] devgpu035:/home/bobren/local/a/pytorch TORCH_DTRACE=/tmp/bb python r1.py
``` | true |
2,830,650,321 | Enable ruff and other linters on ipynb notebooks in PyTorch too | Skylion007 | open | [
"module: lint",
"triaged"
] | 0 | COLLABORATOR | ### 🚀 The feature, motivation and pitch
Enable the ruff linter on ipynb notebooks in the PyTorch repo. We also have various formatters that support ipynb notebooks in the repo and should consider enabling them. Might be relevant to @justinchuby @aorenste
### Alternatives
_No response_
### Additional context
_No response_ | true |
2,830,643,881 | [BE][Ez]: Enable ruff rule E731. use `def` instead of anonymous lambda | Skylion007 | closed | [
"open source",
"Merged",
"ciflow/trunk",
"topic: not user facing"
] | 6 | COLLABORATOR | Not sure why this isn't enabled; only one fix is needed, and it supports autofixes. | true |
2,830,642,787 | ROCM Infra failures during checkout of PyTorch | atalman | closed | [
"high priority",
"module: rocm",
"module: ci",
"triaged"
] | 4 | CONTRIBUTOR | ## Current Status
ongoing
## Error looks like
Error during checkout pytorch: https://github.com/pytorch/pytorch/actions/runs/13130864428/job/36636990502
```
6m 59s
Run pytorch/pytorch/.github/actions/checkout-pytorch@main
Run echo "IN_CONTAINER_RUNNER=$(if [ -f /.inarc ] || [ -f /.incontainer ]; then echo true ; else echo false; fi)" >> "$GITHUB_OUTPUT"
Run retry () {
/var/home/pytorchci/actions-runner/_work/pytorch/pytorch
Run actions/checkout@v4
Syncing repository: pytorch/pytorch
Getting Git version info
Temporarily overriding HOME='/var/home/pytorchci/actions-runner/_work/_temp/9b4bfb3f-3d55-473b-9826-ed3ce52fcbab' before making global git config changes
Adding repository directory to the temporary git global config as a safe directory
/usr/bin/git config --global --add safe.directory /var/home/pytorchci/actions-runner/_work/pytorch/pytorch
Deleting the contents of '/var/home/pytorchci/actions-runner/_work/pytorch/pytorch'
Initializing the repository
Disabling automatic garbage collection
Setting up auth
/usr/bin/git config --local --name-only --get-regexp core\.sshCommand
/usr/bin/git submodule foreach --recursive sh -c "git config --local --name-only --get-regexp 'core\.sshCommand' && git config --local --unset-all 'core.sshCommand' || :"
/usr/bin/git config --local --name-only --get-regexp http\.https\:\/\/github\.com\/\.extraheader
/usr/bin/git submodule foreach --recursive sh -c "git config --local --name-only --get-regexp 'http\.https\:\/\/github\.com\/\.extraheader' && git config --local --unset-all 'http.https://github.com/.extraheader' || :"
/usr/bin/git config --local http.https://github.com/.extraheader AUTHORIZATION: basic ***
Fetching the repository
/usr/bin/git -c protocol.version=2 fetch --prune --no-recurse-submodules origin +refs/heads/*:refs/remotes/origin/* +refs/tags/*:refs/tags/*
Error: error: RPC failed; curl 92 HTTP/2 stream 0 was not closed cleanly: CANCEL (err 8)
Error: error: 7257 bytes of body are still expected
fetch-pack: unexpected disconnect while reading sideband packet
Error: fatal: early EOF
Error: fatal: fetch-pack: invalid index-pack output
The process '/usr/bin/git' failed with exit code 128
Waiting 16 seconds before trying again
/usr/bin/git -c protocol.version=2 fetch --prune --no-recurse-submodules origin +refs/heads/*:refs/remotes/origin/* +refs/tags/*:refs/tags/*
Error: error: RPC failed; curl 92 HTTP/2 stream 0 was not closed cleanly: CANCEL (err 8)
Error: error: 2928 bytes of body are still expected
fetch-pack: unexpected disconnect while reading sideband packet
Error: fatal: early EOF
Error: fatal: fetch-pack: invalid index-pack output
The process '/usr/bin/git' failed with exit code 128
Waiting 16 seconds before trying again
/usr/bin/git -c protocol.version=2 fetch --prune --no-recurse-submodules origin +refs/heads/*:refs/remotes/origin/* +refs/tags/*:refs/tags/*
Error: error: RPC failed; curl 92 HTTP/2 stream 0 was not closed cleanly: CANCEL (err 8)
Error: error: 3320 bytes of body are still expected
fetch-pack: unexpected disconnect while reading sideband packet
Error: fatal: early EOF
Error: fatal: fetch-pack: invalid index-pack output
Error: The process '/usr/bin/git' failed with exit code 128
```
## Incident timeline (all times pacific)
8PM Monday Feb 3
5AM Tuesday Feb 4, notified AMD team
## User impact
Multiple Rocm workflows are failing during checkout
## Root cause
*What was the root cause of this issue?*
## Mitigation
Notified AMD team on Feb 4, 5AM
## Prevention/followups
*How do we prevent issues like this in the future?*
cc @ezyang @gchanan @zou3519 @kadeng @msaroufim @jeffdaily @sunway513 @jithunnair-amd @pruthvistony @ROCmSupport @dllehr-amd @jataylo @hongxiayang @naromero77amd @seemethere @malfet @pytorch/pytorch-dev-infra | true |
2,830,559,499 | [BE][Ez]: ISC001 Auto concatenate implicit one line strings | Skylion007 | closed | [
"oncall: distributed",
"oncall: jit",
"open source",
"better-engineering",
"Merged",
"ciflow/trunk",
"release notes: distributed (fsdp)",
"fx",
"module: dynamo",
"ciflow/inductor"
] | 8 | COLLABORATOR | Apply ruff rule about implicit string concatenation, this autofixes strings that are all the same type and on the same line. These lines are broken up likely as the result of autoformatters in the past. All fixes are automated using the autofixes in ISC001.
cc @H-Huang @awgu @kwen2501 @wanchaol @fegin @fduwjj @wz337 @wconstab @d4l3k @c-p-i-o @EikanWang @jgong5 @wenzhe-nrv @sanchitintel @ezyang @SherlockNoMad @voznesenskym @penguinwu @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @jiayisunx @chenyang78 @kadeng @chauhang @amjames | true |
2,830,395,844 | [ROCm] Unskip std:bad_alloc failures | jataylo | closed | [
"module: rocm",
"triaged",
"open source",
"Merged",
"ciflow/trunk",
"topic: not user facing",
"module: inductor",
"ciflow/inductor",
"ciflow/rocm"
] | 6 | COLLABORATOR | Flaky MI300 issue related to memory usage should now be resolved after https://github.com/pytorch/pytorch/actions/runs/13007160888?pr=145829.
cc @jeffdaily @sunway513 @jithunnair-amd @pruthvistony @ROCmSupport @dllehr-amd @hongxiayang @naromero77amd @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @amjames @desertfire @chauhang @aakhundov | true |
2,830,284,433 | Only enable aotriton on x86_64 and aarch64 | Xeonacid | closed | [
"triaged",
"open source",
"Stale",
"topic: not user facing"
] | 5 | NONE | Make `USE_FLASH_ATTENTION` and `USE_MEM_EFF_ATTENTION` depend on `CPU_INTEL OR CPU_AARCH64`.
[aotriton pre-built](https://github.com/ROCm/aotriton/releases) is only available on x86_64.
Although `AOTRITON_INSTALL_FROM_SOURCE` can be specified to build from source, building aotriton requires CUDA, so on architectures without CUDA support (like riscv64), it still needs to be disabled. | true |
2,830,228,799 | Small improvements to NJT matrix multiplies | michael-diggin | closed | [
"triaged",
"open source",
"Merged",
"ciflow/trunk",
"topic: bug fixes",
"release notes: nested tensor"
] | 6 | CONTRIBUTOR | Fixes #146404
Adds changes to the matmul and matmul_backward operations for nested jagged tensors to support backpropagation when the output is a regular strided tensor.
This required supporting the nested matmul operation when the nested tensor isn't `self`, i.e.
`A@B` where `A` isn't nested but `B` is.
The operation schemas had to be updated to reflect that either input (and the gradient) can be a strided tensor instead, so an extra assertion is added for the edge case where neither input is nested.
Unit tests are also added.
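As a hedged illustration (plain dense NumPy, not the NJT kernels), the backward rule being extended here is the standard one for `C = A @ B`: the upstream gradient `dC` can be an ordinary strided array even when `A` or `B` came from a jagged layout, since only shapes enter these two products.

```python
import numpy as np

# For C = A @ B with upstream gradient dC:
#   dA = dC @ B.T   and   dB = A.T @ dC
A = np.arange(6.0).reshape(2, 3)
B = np.arange(12.0).reshape(3, 4)
dC = np.ones((2, 4))  # gradient of sum(A @ B): a plain strided tensor

dA = dC @ B.T  # row i of dA is the row-sums of B
dB = A.T @ dC  # column j of dB is the column-sums of A
```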
| true |
2,830,216,130 | Can't back prop through NJT matrix multiplication when output is strided tensor | michael-diggin | closed | [
"triaged",
"module: nestedtensor",
"actionable"
] | 0 | CONTRIBUTOR | ### 🐛 Describe the bug
When performing a matmul between two NJTs such that the output is strided, it currently fails on the backward pass.
This is on latest nightly.
Repro:
```python
import torch
nt0 = torch.nested.nested_tensor([torch.rand(2, 6), torch.rand(3, 6)], layout=torch.jagged, requires_grad=True)
nt1 = torch.nested.nested_tensor_from_jagged(torch.rand(5, 6), offsets=nt0.offsets()).requires_grad_(True)
out = torch.matmul(nt0.transpose(-2, -1), nt1)
out.sum().backward()
```
Gives the following error:
```
NestedTensor matmul_backward_default(grad: jt_all, self: jt_all, other: any, mask: any): expected grad to be a jagged layout NestedTensor
```
This is because `out` is not a nested tensor, and the ops in https://github.com/pytorch/pytorch/blob/main/torch/nested/_internal/ops.py#L2550 assert that the input gradient must be nested (as the error suggests).
### Versions
<details>
<summary>Versions</summary>
<br>
On-line CPU(s) list: 0-11
Vendor ID: GenuineIntel
Model name: Intel(R) Xeon(R) CPU @ 2.20GHz
CPU family: 6
Model: 85
Thread(s) per core: 2
Core(s) per socket: 6
Socket(s): 1
Stepping: 7
BogoMIPS: 4400.53
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ss ht syscall nx pdpe1gb rdtscp lm constant_tsc rep_good nopl xtopology nonstop_tsc cpuid tsc_known_freq pni pclmulqdq ssse3 fma cx16 pcid sse4_1 sse4_2 x2apic movbe popcnt aes xsave avx f16c rdrand hypervisor lahf_lm abm 3dnowprefetch invpcid_single ssbd ibrs ibpb stibp ibrs_enhanced fsgsbase tsc_adjust bmi1 hle avx2 smep bmi2 erms invpcid rtm mpx avx512f avx512dq rdseed adx smap clflushopt clwb avx512cd avx512bw avx512vl xsaveopt xsavec xgetbv1 xsaves arat avx512_vnni md_clear arch_capabilities
Hypervisor vendor: KVM
Virtualization type: full
L1d cache: 192 KiB (6 instances)
L1i cache: 192 KiB (6 instances)
L2 cache: 6 MiB (6 instances)
L3 cache: 38.5 MiB (1 instance)
NUMA node(s): 1
NUMA node0 CPU(s): 0-11
Vulnerability Gather data sampling: Not affected
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Vulnerable
Vulnerability Reg file data sampling: Not affected
Vulnerability Retbleed: Vulnerable
Vulnerability Spec rstack overflow: Not affected
Vulnerability Spec store bypass: Vulnerable
Vulnerability Spectre v1: Vulnerable: __user pointer sanitization and usercopy barriers only; no swapgs barriers
Vulnerability Spectre v2: Vulnerable; IBPB: disabled; STIBP: disabled; PBRSB-eIBRS: Vulnerable; BHI: Vulnerable (Syscall hardening enabled)
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Vulnerable
Versions of relevant libraries:
[pip3] numpy==1.26.4
[pip3] nvidia-cublas-cu12==12.4.5.8
[pip3] nvidia-cuda-cupti-cu12==12.4.127
[pip3] nvidia-cuda-nvrtc-cu12==12.4.127
[pip3] nvidia-cuda-runtime-cu12==12.4.127
[pip3] nvidia-cudnn-cu12==9.1.0.70
[pip3] nvidia-cufft-cu12==11.2.1.3
[pip3] nvidia-curand-cu12==10.3.5.147
[pip3] nvidia-cusolver-cu12==11.6.1.9
[pip3] nvidia-cusparse-cu12==12.3.1.170
[pip3] nvidia-cusparselt-cu12==0.6.2
[pip3] nvidia-nccl-cu12==2.21.5
[pip3] nvidia-nvjitlink-cu12==12.4.127
[pip3] nvidia-nvtx-cu12==12.4.127
[pip3] nvtx==0.2.10
[pip3] optree==0.13.1
[pip3] pynvjitlink-cu12==0.4.0
[pip3] pytorch-triton==3.2.0+gitb2684bf3
[pip3] torch==2.7.0.dev20250204+cu124
[pip3] torchaudio==2.5.1+cu121
[pip3] torchsummary==1.5.1
[pip3] torchvision==0.20.1+cu121
[conda] Could not collect
</details>
cc @cpuhrsch @jbschlosser @bhosmer @drisspg @soulitzer @davidberard98 @YuqingJ | true |
2,830,185,748 | [1/N] Use std::string_view in torchgen | cyyever | closed | [
"triaged",
"open source",
"Merged",
"ciflow/trunk",
"topic: not user facing",
"ciflow/periodic",
"module: aotinductor"
] | 21 | COLLABORATOR | Moves remaining `c10::string_view` usages to `std::string_view`.
cc @desertfire @chenyang78 @penguinwu @yushangdi @benjaminglass1 | true |
2,830,052,032 | [2/N] Remove NOLINT suppressions | cyyever | closed | [
"oncall: jit",
"triaged",
"open source",
"Merged",
"release notes: cpp"
] | 3 | COLLABORATOR | Fixes #ISSUE_NUMBER
cc @EikanWang @jgong5 @wenzhe-nrv @sanchitintel | true |
2,829,945,230 | [ARM] Unit test failure - FreezingCpuTests.test_linear_binary_folding_cpu | robert-hardwick | open | [
"module: tests",
"triaged",
"module: arm"
] | 0 | COLLABORATOR | ### 🐛 Describe the bug
This test is not currently enabled in CI and has been failing for an unknown period of time.
```
Traceback (most recent call last):
File "/var/lib/jenkins/workspace/test/inductor/test_binary_folding.py", line 302, in test_linear_binary_folding
test_linear_fusion(
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/utils/_contextlib.py", line 116, in decorate_context
return func(*args, **kwargs)
File "/var/lib/jenkins/workspace/test/inductor/test_binary_folding.py", line 255, in test_linear_fusion
out_optimized = out_optimized(inp)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1749, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1760, in _call_impl
return forward_call(*args, **kwargs)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_dynamo/eval_frame.py", line 574, in _fn
raise e.remove_dynamo_frames() from None # see TORCHDYNAMO_VERBOSE=1
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_dynamo/output_graph.py", line 1485, in _call_user_compiler
raise BackendCompilerFailed(
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_dynamo/output_graph.py", line 1464, in _call_user_compiler
compiled_fn = compiler_fn(gm, self.example_inputs())
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_dynamo/repro/after_dynamo.py", line 131, in __call__
compiled_gm = compiler_fn(gm, example_inputs)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/__init__.py", line 2339, in __call__
return compile_fx(model_, inputs_, config_patches=self.config)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_inductor/compile_fx.py", line 1906, in compile_fx
return aot_autograd(
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_dynamo/backends/common.py", line 83, in __call__
cg = aot_module_simplified(gm, example_inputs, **self.kwargs)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_functorch/aot_autograd.py", line 1158, in aot_module_simplified
compiled_fn = AOTAutogradCache.load(
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_functorch/_aot_autograd/autograd_cache.py", line 765, in load
compiled_fn = dispatch_and_compile()
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_functorch/aot_autograd.py", line 1143, in dispatch_and_compile
compiled_fn, _ = create_aot_dispatcher_function(
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_functorch/aot_autograd.py", line 570, in create_aot_dispatcher_function
return _create_aot_dispatcher_function(
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_functorch/aot_autograd.py", line 820, in _create_aot_dispatcher_function
compiled_fn, fw_metadata = compiler_fn(
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_functorch/_aot_autograd/jit_compile_runtime_wrappers.py", line 205, in aot_dispatch_base
compiled_fw = compiler(fw_module, updated_flat_args)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_inductor/compile_fx.py", line 1424, in fw_compiler_freezing
opt_model, preserved_arg_indices = freeze(
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_inductor/freezing.py", line 110, in freeze
freezing_passes(aot_autograd_gm, aot_example_inputs)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_inductor/fx_passes/freezing_patterns.py", line 73, in freezing_passes
pattern.apply(gm.graph) # type: ignore[arg-type]
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_inductor/pattern_matcher.py", line 1859, in apply
if is_match(m) and entry.extra_check(m):
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_inductor/fx_passes/mkldnn_fusion.py", line 938, in is_linear_add_bias
assert weight_meta.dtype in (
torch._dynamo.exc.BackendCompilerFailed: backend='inductor' raised:
AssertionError:
Set TORCH_LOGS="+dynamo" and TORCHDYNAMO_VERBOSE=1 for more information
You can suppress this exception and fall back to eager by setting:
import torch._dynamo
torch._dynamo.config.suppress_errors = True
To execute this test, run the following from the base repo dir:
python test/inductor/test_binary_folding.py FreezingCpuTests.test_linear_binary_folding_cpu
This message can be suppressed by setting PYTORCH_PRINT_REPRO_ON_FAILURE=0
```
@zhuhaozhe hoping that you might be able to help identify the issue as it appears you touched this code in https://github.com/pytorch/pytorch/pull/129138
We hit this assertion because `weight_meta.dtype` is `torch.float32`:
https://github.com/pytorch/pytorch/blob/e0f22e54e8f8b6b0281d627cac117ef36f9db603/torch/_inductor/fx_passes/mkldnn_fusion.py#L938-L941
### Versions
Collecting environment information...
PyTorch version: 2.7.0a0+git8feb7c9
Is debug build: False
CUDA used to build PyTorch: None
ROCM used to build PyTorch: N/A
OS: Ubuntu 22.04.5 LTS (aarch64)
GCC version: (Ubuntu 11.4.0-1ubuntu1~22.04) 11.4.0
Clang version: Could not collect
CMake version: version 3.31.4
Libc version: glibc-2.35
Python version: 3.10.16 | packaged by conda-forge | (main, Dec 5 2024, 14:08:42) [GCC 13.3.0] (64-bit runtime)
Python platform: Linux-6.8.0-1021-aws-aarch64-with-glibc2.35
Is CUDA available: False
CUDA runtime version: No CUDA
CUDA_MODULE_LOADING set to: N/A
GPU models and configuration: No CUDA
Nvidia driver version: No CUDA
cuDNN version: No CUDA
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: aarch64
CPU op-mode(s): 32-bit, 64-bit
Byte Order: Little Endian
CPU(s): 48
On-line CPU(s) list: 0-47
Vendor ID: ARM
Model: 1
Thread(s) per core: 1
Core(s) per cluster: 48
Socket(s): -
Cluster(s): 1
Stepping: r1p1
BogoMIPS: 2100.00
Flags: fp asimd evtstrm aes pmull sha1 sha2 crc32 atomics fphp asimdhp cpuid asimdrdm jscvt fcma lrcpc dcpop sha3 sm3 sm4 asimddp sha512 sve asimdfhm dit uscat ilrcpc flagm ssbs paca pacg dcpodp svei8mm svebf16 i8mm bf16 dgh rng
L1d cache: 3 MiB (48 instances)
L1i cache: 3 MiB (48 instances)
L2 cache: 48 MiB (48 instances)
L3 cache: 32 MiB (1 instance)
NUMA node(s): 1
NUMA node0 CPU(s): 0-47
Vulnerability Gather data sampling: Not affected
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Not affected
Vulnerability Reg file data sampling: Not affected
Vulnerability Retbleed: Not affected
Vulnerability Spec rstack overflow: Not affected
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl
Vulnerability Spectre v1: Mitigation; __user pointer sanitization
Vulnerability Spectre v2: Mitigation; CSV2, BHB
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Versions of relevant libraries:
[pip3] mypy==1.13.0
[pip3] mypy-extensions==1.0.0
[pip3] numpy==1.22.4
[pip3] onnx==1.17.0
[pip3] onnxscript==0.1.0.dev20240817
[pip3] optree==0.13.0
[pip3] torch==2.7.0a0+git8feb7c9
[conda] No relevant packages
cc @mruberry @ZainRizvi @malfet @snadampal @milpuz01 | true |
2,829,624,297 | DISABLED test_ddp_comm_hook_sparse_gradients (__main__.DistributedDataParallelTest) | pytorch-bot[bot] | open | [
"oncall: distributed",
"module: flaky-tests",
"skipped"
] | 1 | NONE | Platforms: linux
This test was disabled because it is failing in CI. See [recent examples](https://hud.pytorch.org/flakytest?name=test_ddp_comm_hook_sparse_gradients&suite=DistributedDataParallelTest&limit=100) and the most recent trunk [workflow logs](https://github.com/pytorch/pytorch/runs/36629187366).
Over the past 3 hours, it has been determined flaky in 3 workflow(s) with 3 failures and 3 successes.
**Debugging instructions (after clicking on the recent samples link):**
DO NOT ASSUME THINGS ARE OKAY IF THE CI IS GREEN. We now shield flaky tests from developers so CI will thus be green but it will be harder to parse the logs.
To find relevant log snippets:
1. Click on the workflow logs linked above
2. Click on the Test step of the job so that it is expanded. Otherwise, the grepping will not work.
3. Grep for `test_ddp_comm_hook_sparse_gradients`
4. There should be several instances run (as flaky tests are rerun in CI) from which you can study the logs.
<details><summary>Sample error message</summary>
```
Traceback (most recent call last):
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_distributed.py", line 597, in wrapper
self._join_processes(fn)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_distributed.py", line 837, in _join_processes
self._check_return_codes(elapsed_time)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_distributed.py", line 891, in _check_return_codes
raise RuntimeError(
RuntimeError: Process 1 terminated or timed out after 300.030312538147 seconds
```
</details>
Test file path: `distributed/test_c10d_gloo.py`
cc @H-Huang @awgu @kwen2501 @wanchaol @fegin @fduwjj @wz337 @wconstab @d4l3k @c-p-i-o @clee2000 @wdvr | true |
2,829,608,019 | Input ignore not execute in `torch.addr()` | ILCSFNO | closed | [
"triaged",
"module: linear algebra",
"module: python frontend"
] | 6 | CONTRIBUTOR | ### 🐛 Describe the bug
The doc of [`torch.addr()`](https://pytorch.org/docs/stable/generated/torch.addr.html#torch-addr) shows its description as below:
https://github.com/pytorch/pytorch/blob/1c16cf70c37652dde7950ca174278b425af03611/torch/_torch_docs.py#L702-L703
It states that when `beta` is 0, `input` is ignored, which means that even if `input` has an unexpected shape, no error should be raised for it.
Instead, the op should behave as if `input` didn't exist, that is,
```txt
# formula with all parameters
out = β · input + α · (vec1 ⊗ vec2)
# formula with β == 0
out = α · (vec1 ⊗ vec2)
```
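The documented `β == 0` semantics reduce to a scaled outer product, which can be sketched with NumPy (a hypothetical illustration of the expected result, not the PyTorch kernel):

```python
import numpy as np

# With beta == 0, addr should compute out = alpha * (vec1 outer vec2),
# so the shape of `input` should not matter in that case.
rng = np.random.default_rng(0)
vec1 = rng.standard_normal(3)
vec2 = rng.standard_normal(3)
alpha = 2.0
expected = alpha * np.outer(vec1, vec2)  # shape (3, 3), independent of input
```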
But now when `beta` is set to 0, with input of unexpected size, it will raise error like below:
### Minified Repro
```python
import torch
import numpy as np
x = torch.tensor(np.random.randn(10, 10)) # input with unexpected size
vec1 = torch.tensor(np.random.randn(3))
vec2 = torch.tensor(np.random.randn(3))
out = torch.addr(x, vec1, vec2, beta=0) # expected behavior: ignore input and just calc between vec1 & vec2
```
### Output
```txt
RuntimeError: The expanded size of the tensor (3) must match the existing size (10) at non-singleton dimension 1. Target sizes: [3, 3]. Tensor sizes: [10, 10]
```
### Versions
pytorch==2.5.0
torchvision==0.20.0
torchaudio==2.5.0
pytorch-cuda=12.1
cc @jianyuh @nikitaved @pearu @mruberry @walterddr @xwang233 @Lezcano @albanD | true |
2,829,557,136 | [TEST][Sparse] Force CUTLASS backend in TestSparseSemiStructuredCUTLASS | Aidyn-A | closed | [
"module: sparse",
"module: cuda",
"module: tests",
"open source",
"Merged",
"ciflow/trunk",
"topic: not user facing"
] | 6 | COLLABORATOR | We have noticed a discrepancy between the ways `test_sparse_semi_structured.py` is invoked. In some of them the tests falsely fail because they attempt to run on the wrong backend, all because `SparseSemiStructuredTensor._FORCE_CUTLASS = True` was never set in the setup of `TestSparseSemiStructuredCUTLASS` as it was in its `TestSparseSemiStructuredCUSPARSELT` counterpart https://github.com/pytorch/pytorch/blob/8444fe019a9c8b0a6ede01891efe9f0ca2c760a8/test/test_sparse_semi_structured.py#L1039-L1046
When I run tests via pytest, just by sheer luck it calls `test_values_backend_cutlass_cuda`, which sets the backend to CUTLASS https://github.com/pytorch/pytorch/blob/bb4bd5f00b35eaaecb47d17caddfbd69e1f733df/test/test_sparse_semi_structured.py#L475 before `test_conversions_all_patterns_cuda_*`:
```
test/test_sparse_semi_structured.py::TestSparseSemiStructuredCUDA::test_values_backend_cutlass_cuda PASSED [0.0071s] [ 72%]
test/test_sparse_semi_structured.py::TestSparseSemiStructuredCUTLASSCUDA::test_conversions_all_patterns_cuda_bfloat16 PASSED [0.0484s] [ 73%]
test/test_sparse_semi_structured.py::TestSparseSemiStructuredCUTLASSCUDA::test_conversions_all_patterns_cuda_float16 PASSED [0.0041s] [ 73%]
test/test_sparse_semi_structured.py::TestSparseSemiStructuredCUTLASSCUDA::test_conversions_all_patterns_cuda_int8 PASSED [0.0079s] [ 73%]
```
In this scenario everything is good.
But when run as `python test/test_sparse_semi_structured.py -v -k cuda`, the order of the tests is not the same, and the cuSPARSELt backend is set just before running `test_conversions_all_patterns_cuda_*`, which causes failures:
```
test_cusparselt_backend_cuda (__main__.TestSparseSemiStructuredCUSPARSELTCUDA.test_cusparselt_backend_cuda) ... ok
...
test_conversions_all_patterns_cuda_bfloat16 (__main__.TestSparseSemiStructuredCUTLASSCUDA.test_conversions_all_patterns_cuda_bfloat16) ... FAIL
test_conversions_all_patterns_cuda_float16 (__main__.TestSparseSemiStructuredCUTLASSCUDA.test_conversions_all_patterns_cuda_float16) ... FAIL
test_conversions_all_patterns_cuda_int8 (__main__.TestSparseSemiStructuredCUTLASSCUDA.test_conversions_all_patterns_cuda_int8) ... ERROR
```
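The fix pattern can be sketched with a minimal, self-contained example (the class and flag names here are hypothetical stand-ins, not the real test suite): pinning shared global state in `setUp` makes each test class independent of which other test happened to mutate the flag first.

```python
import unittest

class FakeBackend:
    FORCE_CUTLASS = False  # module-level state shared across test classes

class CutlassTests(unittest.TestCase):
    def setUp(self):
        # Without this line, the outcome depends on test execution order.
        FakeBackend.FORCE_CUTLASS = True

    def test_backend_pinned(self):
        self.assertTrue(FakeBackend.FORCE_CUTLASS)

result = unittest.TextTestRunner(verbosity=0).run(
    unittest.TestLoader().loadTestsFromTestCase(CutlassTests)
)
```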
cc @alexsamardzic @nikitaved @pearu @cpuhrsch @amjames @bhosmer @jcaip @ptrblck @msaroufim @eqy @mruberry @ZainRizvi | true |
2,829,459,447 | [2/N][cp][example] flex attention in context parallel (backward pass) | XilunWu | closed | [
"oncall: distributed",
"Merged",
"ciflow/trunk",
"topic: not user facing",
"ciflow/inductor",
"module: context parallel"
] | 7 | CONTRIBUTOR | Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #146397
* #145896
cc @H-Huang @awgu @kwen2501 @wanchaol @fegin @fduwjj @wz337 @wconstab @d4l3k @c-p-i-o | true |
2,829,244,133 | [MPSInductor] Implement `prod` reduction | malfet | closed | [
"Merged",
"topic: not user facing",
"ciflow/mps",
"module: inductor",
"ciflow/inductor"
] | 6 | CONTRIBUTOR | Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #146396
* #146389
* #146380
Mostly reusing `sum` reduction logic
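A hedged sketch of why the reuse is natural: a product reduction has the same loop structure as sum, differing only in the combine op and the identity element.

```python
# Product reduction: same shape as a sum reduction, but the accumulator
# starts at the multiplicative identity (1.0 instead of 0.0).
def prod_reduce(values):
    acc = 1.0
    for v in values:
        acc *= v
    return acc
```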
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @amjames @desertfire @chauhang @aakhundov | true |
2,829,183,378 | [dynamo][builtin-skipfile-cleanup] Remove random | anijain2305 | closed | [
"Stale",
"module: dynamo",
"ciflow/inductor",
"keep-going"
] | 3 | CONTRIBUTOR | Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #146395
* #146339
* #146116
* #146322
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @chauhang @amjames | true |
2,829,116,864 | [cpp_builder] refactor to reduce libcudart_static logs | henrylhtsang | closed | [
"fb-exported",
"Merged",
"ciflow/trunk",
"topic: not user facing",
"module: inductor",
"ciflow/inductor"
] | 7 | CONTRIBUTOR | Want to reduce logs from `log_msg = f'"libcudart_static.a" not found under {path}'`, which was added in https://github.com/pytorch/pytorch/pull/142175
Differential Revision: D69096354
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @amjames @desertfire @chauhang @aakhundov | true |
2,829,086,969 | PEP585: More fixes 2 | aorenste | closed | [
"oncall: distributed",
"oncall: jit",
"release notes: quantization",
"fx",
"ciflow/inductor",
"release notes: AO frontend"
] | 1 | CONTRIBUTOR | Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #146393
* #146392
* #146391
cc @H-Huang @awgu @kwen2501 @wanchaol @fegin @fduwjj @wz337 @wconstab @d4l3k @c-p-i-o @EikanWang @jgong5 @wenzhe-nrv @sanchitintel @ezyang @SherlockNoMad | true |
2,829,086,893 | PEP585: More UP006 fixes | aorenste | closed | [
"oncall: distributed",
"Merged",
"ciflow/trunk",
"release notes: onnx",
"release notes: quantization",
"topic: not user facing",
"module: inductor",
"module: dynamo",
"ciflow/inductor"
] | 3 | CONTRIBUTOR | This should be the final PR before we can enable RUFF UP006.
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #146392
cc @H-Huang @awgu @kwen2501 @wanchaol @fegin @fduwjj @wz337 @wconstab @d4l3k @c-p-i-o @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov @LucasLLC @MeetVadakkanchery @mhorowitz @pradeepfn @ekr0 | true |
2,829,086,816 | PEP585: Add noqa to necessary tests | aorenste | closed | [
"Merged",
"ciflow/trunk",
"topic: not user facing"
] | 5 | CONTRIBUTOR | Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #146391
| true |
2,829,072,165 | CPU-specific Inductor Error with `view` on `torch.nn.Embedding` output | cw-tan | closed | [
"triaged",
"oncall: pt2",
"oncall: cpu inductor"
] | 4 | NONE | ### 🐛 Describe the bug
The following minimal example runs with `device="cuda"` but fails with `device="cpu"` with latest torch 2.6. The error is specific to doing an operation on the `view` of the output of `torch.nn.Embedding` (error does not appear if we just do elementwise multiplication on the `torch.nn.Embedding` output for example).
```python
import torch
class Model(torch.nn.Module):
def __init__(self, num_classes, num_channels):
super().__init__()
self.embed = torch.nn.Embedding(num_classes, num_channels * num_channels)
self.num_channels = num_channels
def forward(self, x, classes):
x.requires_grad_()
weights = self.embed(classes).view(-1, self.num_channels, self.num_channels)
aux = torch.bmm(weights, x.unsqueeze(-1)).square().sum()
grad = torch.autograd.grad(aux, [x])[0]
return grad
device = "cpu" # "cuda"
num_batch = 512
num_channels = 256
num_classes = 3
x = torch.randn(num_batch, num_channels, dtype=torch.float32, device=device)
classes = torch.randint(0, num_classes, (num_batch,), dtype=torch.int64, device=device)
model = Model(num_classes, num_channels).to(device=device)
eager_out = model(x, classes)
print(eager_out)
batch_dim = torch.export.Dim("batch", min=1, max=1024)
exported = torch.export.export(
model,
(
x,
classes,
),
strict=False,
dynamic_shapes={"x": {0: batch_dim}, "classes": {0: batch_dim}},
)
model = torch.compile(exported.module())
out = model(x, classes)
loss = out.square().mean()
loss.backward()
print(model.embed.weight.grad)
```
### Error logs
```
C0203 22:59:51.318000 3540 site-packages/torch/_inductor/scheduler.py:1059] [0/0] Error in codegen for ComputedBuffer(name='buf6', layout=MutationLayoutSHOULDREMOVE('cpu', torch.float32, size=[3, 65536], stride=[65536, 1]), data=Scatter(device=device(type='cpu'), dtype=torch.float32, inner_fn=<function make_pointwise.<locals>.inner.<locals>.inner_fn at 0x7fe5f08be160>, ranges=[512, 65536], output_indexer=<function index_output_size_and_inner_fn.<locals>.fn at 0x7fe5f08b2ac0>, scatter_mode='atomic_add'))
```
### Versions
```
Collecting environment information...
PyTorch version: 2.6.0+cu124
Is debug build: False
CUDA used to build PyTorch: 12.4
ROCM used to build PyTorch: N/A
OS: Ubuntu 20.04.6 LTS (x86_64)
GCC version: (Ubuntu 9.4.0-1ubuntu1~20.04.1) 9.4.0
Clang version: Could not collect
CMake version: version 3.25.1
Libc version: glibc-2.31
Python version: 3.11.11 (main, Dec 11 2024, 16:28:39) [GCC 11.2.0] (64-bit runtime)
Python platform: Linux-5.15.167.4-microsoft-standard-WSL2-x86_64-with-glibc2.31
Is CUDA available: True
CUDA runtime version: 10.1.243
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration: GPU 0: NVIDIA GeForce RTX 3070 Laptop GPU
Nvidia driver version: 527.56
cuDNN version: Could not collect
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Byte Order: Little Endian
Address sizes: 39 bits physical, 48 bits virtual
CPU(s): 16
On-line CPU(s) list: 0-15
Thread(s) per core: 2
Core(s) per socket: 8
Socket(s): 1
Vendor ID: GenuineIntel
CPU family: 6
Model: 141
Model name: 11th Gen Intel(R) Core(TM) i7-11800H @ 2.30GHz
Stepping: 1
CPU MHz: 2303.999
BogoMIPS: 4607.99
Virtualization: VT-x
Hypervisor vendor: Microsoft
Virtualization type: full
L1d cache: 384 KiB
L1i cache: 256 KiB
L2 cache: 10 MiB
L3 cache: 24 MiB
Vulnerability Gather data sampling: Unknown: Dependent on hypervisor status
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Not affected
Vulnerability Reg file data sampling: Not affected
Vulnerability Retbleed: Mitigation; Enhanced IBRS
Vulnerability Spec rstack overflow: Not affected
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl and seccomp
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Enhanced / Automatic IBRS; IBPB conditional; RSB filling; PBRSB-eIBRS SW sequence; BHI SW loop, KVM SW loop
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ss ht syscall nx pdpe1gb rdtscp lm constant_tsc arch_perfmon rep_good nopl xtopology tsc_reliable nonstop_tsc cpuid pni pclmulqdq vmx ssse3 fma cx16 pdcm pcid sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand hypervisor lahf_lm abm 3dnowprefetch invpcid_single ssbd ibrs ibpb stibp ibrs_enhanced tpr_shadow vnmi ept vpid ept_ad fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid avx512f avx512dq rdseed adx smap avx512ifma clflushopt clwb avx512cd sha_ni avx512bw avx512vl xsaveopt xsavec xgetbv1 xsaves avx512vbmi umip avx512_vbmi2 gfni vaes vpclmulqdq avx512_vnni avx512_bitalg avx512_vpopcntdq rdpid movdiri movdir64b fsrm avx512_vp2intersect md_clear flush_l1d arch_capabilities
Versions of relevant libraries:
[pip3] numpy==2.2.2
[pip3] nvidia-cublas-cu12==12.4.5.8
[pip3] nvidia-cuda-cupti-cu12==12.4.127
[pip3] nvidia-cuda-nvrtc-cu12==12.4.127
[pip3] nvidia-cuda-runtime-cu12==12.4.127
[pip3] nvidia-cudnn-cu12==9.1.0.70
[pip3] nvidia-cufft-cu12==11.2.1.3
[pip3] nvidia-curand-cu12==10.3.5.147
[pip3] nvidia-cusolver-cu12==11.6.1.9
[pip3] nvidia-cusparse-cu12==12.3.1.170
[pip3] nvidia-cusparselt-cu12==0.6.2
[pip3] nvidia-nccl-cu12==2.21.5
[pip3] nvidia-nvjitlink-cu12==12.4.127
[pip3] nvidia-nvtx-cu12==12.4.127
[pip3] torch==2.6.0
[pip3] triton==3.2.0
[conda] numpy 2.2.2 pypi_0 pypi
[conda] nvidia-cublas-cu12 12.4.5.8 pypi_0 pypi
[conda] nvidia-cuda-cupti-cu12 12.4.127 pypi_0 pypi
[conda] nvidia-cuda-nvrtc-cu12 12.4.127 pypi_0 pypi
[conda] nvidia-cuda-runtime-cu12 12.4.127 pypi_0 pypi
[conda] nvidia-cudnn-cu12 9.1.0.70 pypi_0 pypi
[conda] nvidia-cufft-cu12 11.2.1.3 pypi_0 pypi
[conda] nvidia-curand-cu12 10.3.5.147 pypi_0 pypi
[conda] nvidia-cusolver-cu12 11.6.1.9 pypi_0 pypi
[conda] nvidia-cusparse-cu12 12.3.1.170 pypi_0 pypi
[conda] nvidia-cusparselt-cu12 0.6.2 pypi_0 pypi
[conda] nvidia-nccl-cu12 2.21.5 pypi_0 pypi
[conda] nvidia-nvjitlink-cu12 12.4.127 pypi_0 pypi
[conda] nvidia-nvtx-cu12 12.4.127 pypi_0 pypi
[conda] torch 2.6.0 pypi_0 pypi
[conda] triton 3.2.0 pypi_0 pypi
```
cc @chauhang @penguinwu | true |
2,829,017,152 | [MPSInductor] Implement `min` and `max` reductions | malfet | closed | [
"Merged",
"topic: not user facing",
"ciflow/mps",
"module: inductor",
"ciflow/inductor"
] | 3 | CONTRIBUTOR | Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #146396
* __->__ #146389
* #146380
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @amjames @desertfire @chauhang @aakhundov | true |
2,829,002,299 | [WIP][CUDA][cuDNN] Experimental `cudnn_rms_norm` | eqy | open | [
"module: cudnn",
"module: cuda",
"open source",
"module: norms and normalization",
"Stale",
"topic: not user facing"
] | 6 | COLLABORATOR | Opt-in for now behind two new native functions; the plan would be to eventually add it as the `CUDA:` backend to `rms_norm`.
Initial experiments show a ~4-5x forward speedup and a ~3x fwd+bwd speedup
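For reference, the op being accelerated is a small computation; a NumPy sketch of RMSNorm (not the cuDNN kernel) — scale by the root-mean-square over the last dim, no mean subtraction or bias, then apply a learned per-channel weight:

```python
import numpy as np

def rms_norm(x, weight, eps=1e-6):
    # Root-mean-square over the last dimension, stabilized by eps.
    rms = np.sqrt(np.mean(x * x, axis=-1, keepdims=True) + eps)
    return x / rms * weight

x = np.array([[3.0, 4.0]])
out = rms_norm(x, np.ones(2))
```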
cc @csarofeen @ptrblck @xwang233 @msaroufim | true |
2,828,995,143 | [ROCm] TopK optimizations for AMD GPUs | apakbin | closed | [
"module: rocm",
"open source",
"Merged",
"ciflow/trunk",
"topic: not user facing",
"rocm",
"rocm priority",
"ciflow/rocm",
"ciflow/inductor-rocm"
] | 25 | CONTRIBUTOR | TopK on ROCm performs better across the test suite with the default config.
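For context, a plain reference for the op under test (a NumPy sketch of the semantics, not the ROCm kernel): return the k largest values and their indices, largest first, like `torch.topk`'s default.

```python
import numpy as np

def topk(x, k):
    # Sort descending, keep the first k positions.
    idx = np.argsort(x)[::-1][:k]
    return x[idx], idx

values, indices = topk(np.array([1.0, 5.0, 3.0, 2.0]), 2)
```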
cc @jeffdaily @sunway513 @jithunnair-amd @pruthvistony @ROCmSupport @dllehr-amd @jataylo @hongxiayang @naromero77amd | true |
2,828,987,917 | [ca] refactor compile reasons and log to tlparse | xmfan | closed | [
"Merged",
"ciflow/trunk",
"module: inductor",
"module: dynamo",
"ciflow/inductor",
"release notes: dynamo",
"module: compiled autograd"
] | 3 | MEMBER | Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #146386
* #146229
This PR accumulates compile reasons inside each CacheNode, and logs them to tlparse on each CA compile. This defines a compile as an autograd structure change, and a recompile as a dynamic shape change.
sample tlparse: https://manifold.edge.x2p.facebook.net/v0/read/tree/logs/.tmpdbo7gt/index.html?bucketName=tlparse_reports&apiKey=tlparse_reports-key&withPayload=1&timeoutMsec=100
for compiles:
```python
[
"!0: Cache miss due to new autograd node: torch::autograd::GraphRoot (NodeCall 0) with key size 39, previous key sizes=[]"
]
```
for recompiles:
```python
[
"!0: Cache miss due to new autograd node: torch::autograd::GraphRoot (NodeCall 0) with key size 39, previous key sizes=[]",
"!1: Cache miss due to 7 changed tensor shapes (total of 7): sizes[0], sizes[1], sizes[2], sizes[3], sizes[4], sizes[5], sizes[6]"
]
```
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @amjames @desertfire @chauhang @aakhundov | true |
2,828,974,284 | [WIP] Confirm XPU Regression | EikanWang | closed | [
"triaged",
"open source",
"topic: not user facing",
"ciflow/xpu"
] | 1 | COLLABORATOR | Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #146385
| true |
2,828,954,346 | [Experiment] Fix an unaligned memory access issue in mm_template | desertfire | closed | [
"topic: not user facing",
"module: inductor",
"ciflow/inductor-rocm"
] | 1 | CONTRIBUTOR | Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #146384
Summary:
Fixes a corner case in the Triton MM template, where the dimension M (dynamic size) being smaller than BLOCK_M (and similarly for the N dimension) can trigger an unaligned memory access error.
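The corner case can be sketched in plain Python: when M is not a multiple of BLOCK_M, the last tile covers rows past the end of the tensor and must be masked (names below are illustrative, not the actual template code):

```python
def tiled_row_sums(a, m, block_m=4):
    # a: m x n matrix as a list of rows; iterate in tiles of block_m rows.
    out = [0.0] * m
    for tile_start in range(0, m, block_m):
        for i in range(block_m):
            row = tile_start + i
            if row >= m:       # mask: without this bound check, the last tile
                continue       # would read past the end of the buffer
            out[row] = sum(a[row])
    return out

# m=3 with block_m=4: the single tile spans rows 0..3, row 3 must be masked.
sums = tiled_row_sums([[1, 2], [3, 4], [5, 6]], m=3, block_m=4)
```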
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov | true |
2,828,937,589 | Error about vscode c++ configuration libtorch | mikeallen39 | closed | [
"module: windows",
"module: cpp"
] | 1 | NONE | ### 🐛 Describe the bug
<pre>
{
"configurations": [
{
"name": "Win32",
"includePath": [
"${workspaceFolder}/**",
"D:/dependencies/libtorchcu116/libtorch/include",
"D:/dependencies/libtorchcu116/libtorch/include/torch/csrc/api/include"
],
"defines": [
"_DEBUG",
"UNICODE",
"_UNICODE"
],
"compilerPath": "D:/dependencies/mingw64/bin/g++.exe",
"cStandard": "c11",
"cppStandard": "c++17"
}
],
"version": 4
}
</pre>
I downloaded "libtorch-win-shared-with-deps-1.13.1+cu116 (1).zip", and configured as described in the above code.
But I encountered the following problem: `#include error detected. Please update includePath. Squiggle curves have been disabled for this translation unit (D:\github actual project\gptq\quant_cuda.cpp). C/C++(1696)`
I really can't find where the problem is, can anyone help me please?
cc @peterjc123 @mszhanyi @skyline75489 @nbcsm @iremyux @Blackhex @jbschlosser | true |
2,828,934,871 | [Metal][BE] Fix the arguments of `polygamma` | dcci | closed | [
"Merged",
"topic: not user facing",
"module: mps",
"ciflow/mps",
"module: inductor",
"ciflow/inductor"
] | 3 | MEMBER | In the public API, order comes before input, while here they're
reversed. Match for consistency (and make this less error prone).
cc @kulinseth @albanD @malfet @DenisVieriu97 @jhavukainen @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @amjames @desertfire @chauhang @aakhundov | true |
2,828,933,966 | [dynamic shapes][real tensor tracing] propagate unbacked hint when creating mod replacement | pianpwk | closed | [
"Merged",
"ciflow/trunk",
"release notes: fx",
"fx",
"ciflow/inductor"
] | 10 | CONTRIBUTOR | Fixes data-dependent errors for 2 PT2I models in draft export
cc @ezyang @SherlockNoMad @EikanWang @jgong5 @wenzhe-nrv | true |
2,828,923,299 | [MPSInductor] Add support for `sum` reduction | malfet | closed | [
"Merged",
"topic: not user facing",
"ciflow/mps",
"module: inductor",
"ciflow/inductor"
] | 3 | CONTRIBUTOR | Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #146396
* #146389
* __->__ #146380
- Add `threadgroup_sum` template to `c10/metal/reduction_utils.h` that so far uses barrier to compute the reductions
TODOs:
- Implement efficient reduction using cooperative functions such as `simd_shuffle_down`
- Figure out how to merge several sum reduction together
- Implement `reduction_store` that will only write results from the first thread
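For illustration, the `simd_shuffle_down`-style tree reduction listed in the TODOs can be simulated in plain Python (a sketch of the algorithm only; the real implementation would be Metal):

```python
def tree_reduce_sum(vals):
    # Simulate a power-of-two tree reduction: at each step, lane i adds the
    # value from lane i + offset (what simd_shuffle_down would deliver).
    vals = list(vals)
    n = len(vals)      # assumed to be a power of two, as in a SIMD group
    offset = n // 2
    while offset > 0:
        for i in range(offset):
            vals[i] += vals[i + offset]
        offset //= 2
    return vals[0]     # lane 0 holds the full sum

total = tree_reduce_sum([1.0, 2.0, 3.0, 4.0, 5.0, 6.0, 7.0, 8.0])
```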
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @amjames @desertfire @chauhang @aakhundov | true |
2,828,891,387 | Re-add stft option to align window for center = false | jackzhxng | closed | [
"Merged",
"ciflow/trunk",
"release notes: onnx",
"ciflow/slow"
] | 19 | CONTRIBUTOR | Skips advancing the fc window on https://github.com/pytorch/pytorch/pull/145437, since I just found that there were non-trivial efforts to do so a while ago that eventually was reverted: https://github.com/pytorch/pytorch/pull/73434
Works around the issue by keeping the stft sans center overload
| true |
2,828,887,519 | [aarch64] CUDA 12.8 aarch64 builds to nightly binaries | tinglvv | closed | [
"open source",
"Merged",
"ciflow/binaries",
"ciflow/trunk",
"topic: not user facing"
] | 9 | COLLABORATOR | https://github.com/pytorch/pytorch/issues/145570
Adding Cuda 12.8 and keeping 12.6 for the sbsa build, supported CUDA_ARCH: 9.0, 10.0, 12.0
Refactor the binaries matrix for cuda sbsa build. Previously cuda-aarch64 was hardcoded to cuda 12.6. Now reads 12.6 and 12.8, new build naming example [manywheel-py3_9-cuda-aarch64-12_8-build](https://github.com/pytorch/pytorch/actions/runs/13132625006/job/36640885079?pr=146378#logs)
TODO: once 12.8 is stable, remove 12.6 in sbsa
cc @atalman @malfet @ptrblck @nWEIdia
| true |
2,828,873,926 | FlexAttention compiled backward gives garbage data in certain stride situations for K.grad | leijurv | closed | [
"high priority",
"triaged",
"module: correctness (silent)",
"oncall: pt2",
"module: dynamic shapes",
"module: inductor",
"module: flex attention"
] | 11 | NONE | ### 🐛 Describe the bug
```python
import torch
import torch.nn.attention.flex_attention
torch.set_default_device("cuda")
print(torch.__version__)
flex_compiled = torch.compile(torch.nn.attention.flex_attention.flex_attention)
for fix_issue in [False, True]:
for i in range(10):
torch.manual_seed(0)
shape = (1, 16, 4096, 64)
Q = torch.randn(shape, requires_grad=True)
K = torch.randn(shape, requires_grad=True)
V = torch.randn(shape, requires_grad=True)
flex_compiled(Q, K, V) # why does this line have to be here??
K_sliced = K[:, :, :-128]
V_sliced = V[:, :, :-128]
if fix_issue:
K_sliced = K_sliced.clone()
flex_compiled(Q, K_sliced, V_sliced).sum().backward()
print("Q", Q.grad.mean(), "K", K.grad.mean(), "V", V.grad.mean(), K_sliced.is_contiguous())
```
When `K_sliced.is_contiguous()` is true, there is no issue. When `K_sliced.is_contiguous()` is false, the `K.grad` contains garbage data.
See:
```
2.6.0+cu124
Q tensor(0.0002, device='cuda:0') K tensor(4.3783e-07, device='cuda:0') V tensor(1., device='cuda:0') False
Q tensor(0.0002, device='cuda:0') K tensor(4.9176e-07, device='cuda:0') V tensor(1., device='cuda:0') False
Q tensor(0.0002, device='cuda:0') K tensor(-1.0043e-06, device='cuda:0') V tensor(1., device='cuda:0') False
Q tensor(0.0002, device='cuda:0') K tensor(5.5709e-07, device='cuda:0') V tensor(1., device='cuda:0') False
Q tensor(0.0002, device='cuda:0') K tensor(-2.0452e-06, device='cuda:0') V tensor(1., device='cuda:0') False
Q tensor(0.0002, device='cuda:0') K tensor(-2.7107e-07, device='cuda:0') V tensor(1., device='cuda:0') False
Q tensor(0.0002, device='cuda:0') K tensor(-3.6084e-07, device='cuda:0') V tensor(1., device='cuda:0') False
Q tensor(0.0002, device='cuda:0') K tensor(3.0617e-06, device='cuda:0') V tensor(1., device='cuda:0') False
Q tensor(0.0002, device='cuda:0') K tensor(1.6151e-08, device='cuda:0') V tensor(1., device='cuda:0') False
Q tensor(0.0002, device='cuda:0') K tensor(7.5523e-07, device='cuda:0') V tensor(1., device='cuda:0') False
Q tensor(0.0002, device='cuda:0') K tensor(3.0923e-11, device='cuda:0') V tensor(1., device='cuda:0') True
Q tensor(0.0002, device='cuda:0') K tensor(3.0923e-11, device='cuda:0') V tensor(1., device='cuda:0') True
Q tensor(0.0002, device='cuda:0') K tensor(3.0923e-11, device='cuda:0') V tensor(1., device='cuda:0') True
Q tensor(0.0002, device='cuda:0') K tensor(3.0923e-11, device='cuda:0') V tensor(1., device='cuda:0') True
Q tensor(0.0002, device='cuda:0') K tensor(3.0923e-11, device='cuda:0') V tensor(1., device='cuda:0') True
Q tensor(0.0002, device='cuda:0') K tensor(3.0923e-11, device='cuda:0') V tensor(1., device='cuda:0') True
Q tensor(0.0002, device='cuda:0') K tensor(3.0923e-11, device='cuda:0') V tensor(1., device='cuda:0') True
Q tensor(0.0002, device='cuda:0') K tensor(3.0923e-11, device='cuda:0') V tensor(1., device='cuda:0') True
Q tensor(0.0002, device='cuda:0') K tensor(3.0923e-11, device='cuda:0') V tensor(1., device='cuda:0') True
Q tensor(0.0002, device='cuda:0') K tensor(3.0923e-11, device='cuda:0') V tensor(1., device='cuda:0') True
```
Same on nightly:
```
2.7.0.dev20250203+cu124
Q tensor(0.0002, device='cuda:0') K tensor(-1.8570e-06, device='cuda:0') V tensor(1., device='cuda:0') False
Q tensor(0.0002, device='cuda:0') K tensor(1.0961e-06, device='cuda:0') V tensor(1., device='cuda:0') False
Q tensor(0.0002, device='cuda:0') K tensor(7.7451e-06, device='cuda:0') V tensor(1., device='cuda:0') False
Q tensor(0.0002, device='cuda:0') K tensor(-1.1176e-05, device='cuda:0') V tensor(1., device='cuda:0') False
Q tensor(0.0002, device='cuda:0') K tensor(-6.7828e-07, device='cuda:0') V tensor(1., device='cuda:0') False
Q tensor(0.0002, device='cuda:0') K tensor(-1.1892e-05, device='cuda:0') V tensor(1., device='cuda:0') False
Q tensor(0.0002, device='cuda:0') K tensor(2.9753e-06, device='cuda:0') V tensor(1., device='cuda:0') False
Q tensor(0.0002, device='cuda:0') K tensor(2.1702e-05, device='cuda:0') V tensor(1., device='cuda:0') False
Q tensor(0.0002, device='cuda:0') K tensor(9.9964e-06, device='cuda:0') V tensor(1., device='cuda:0') False
Q tensor(0.0002, device='cuda:0') K tensor(2.9386e-07, device='cuda:0') V tensor(1., device='cuda:0') False
Q tensor(0.0002, device='cuda:0') K tensor(3.0923e-11, device='cuda:0') V tensor(1., device='cuda:0') True
Q tensor(0.0002, device='cuda:0') K tensor(3.0923e-11, device='cuda:0') V tensor(1., device='cuda:0') True
Q tensor(0.0002, device='cuda:0') K tensor(3.0923e-11, device='cuda:0') V tensor(1., device='cuda:0') True
Q tensor(0.0002, device='cuda:0') K tensor(3.0923e-11, device='cuda:0') V tensor(1., device='cuda:0') True
Q tensor(0.0002, device='cuda:0') K tensor(3.0923e-11, device='cuda:0') V tensor(1., device='cuda:0') True
Q tensor(0.0002, device='cuda:0') K tensor(3.0923e-11, device='cuda:0') V tensor(1., device='cuda:0') True
Q tensor(0.0002, device='cuda:0') K tensor(3.0923e-11, device='cuda:0') V tensor(1., device='cuda:0') True
Q tensor(0.0002, device='cuda:0') K tensor(3.0923e-11, device='cuda:0') V tensor(1., device='cuda:0') True
Q tensor(0.0002, device='cuda:0') K tensor(3.0923e-11, device='cuda:0') V tensor(1., device='cuda:0') True
Q tensor(0.0002, device='cuda:0') K tensor(3.0923e-11, device='cuda:0') V tensor(1., device='cuda:0') True
```
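The contiguity difference comes from slicing a middle dimension: the view keeps the original strides while the sizes shrink, so the strides no longer match those of a freshly allocated tensor. A plain-Python sketch of that arithmetic:

```python
def contiguous_strides(sizes):
    # Row-major strides for a freshly allocated tensor of these sizes.
    strides, acc = [], 1
    for s in reversed(sizes):
        strides.append(acc)
        acc *= s
    return strides[::-1]

full_strides = contiguous_strides([1, 16, 4096, 64])
# K[:, :, :-128] shrinks dim 2 to 3968 but keeps the original strides,
# so the view's strides differ from what a contiguous tensor would have:
is_contig = full_strides == contiguous_strides([1, 16, 4096 - 128, 64])
```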
This is an *extremely* weird issue that was really hard to pin down. At first I thought "uninitialized memory" but that's not fully satisfactory, because then you would expect NaN from time to time, and I've never seen it NaN, and additionally compute-sanitizer insists that's not happening. But at the same time, I found that seemingly unrelated changes would drastically affect the loss, for example, adding an unused allocation like `unused_tensor = torch.zeros_like(flex_out)` could dramatically change the final loss. I was able to reduce all the weird coincidences down to one: in this reproduction, you can see there is this line: `flex_compiled(Q, K, V)`. For some reason, the bug does not occur if you comment that out. (!!!)
### Versions
<details>
<summary>Env</summary>
```
Collecting environment information...
PyTorch version: 2.7.0.dev20250203+cu124
Is debug build: False
CUDA used to build PyTorch: 12.4
ROCM used to build PyTorch: N/A
OS: Ubuntu 22.04.5 LTS (x86_64)
GCC version: (Ubuntu 11.4.0-1ubuntu1~22.04) 11.4.0
Clang version: Could not collect
CMake version: version 3.22.1
Libc version: glibc-2.35
Python version: 3.10.12 (main, Jan 17 2025, 14:35:34) [GCC 11.4.0] (64-bit runtime)
Python platform: Linux-6.8.0-51-generic-x86_64-with-glibc2.35
Is CUDA available: True
CUDA runtime version: Could not collect
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration: GPU 0: NVIDIA H100 PCIe
Nvidia driver version: 565.57.01
cuDNN version: Could not collect
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 46 bits physical, 57 bits virtual
Byte Order: Little Endian
CPU(s): 26
On-line CPU(s) list: 0-25
Vendor ID: GenuineIntel
Model name: Intel(R) Xeon(R) Platinum 8480+
CPU family: 6
Model: 143
Thread(s) per core: 1
Core(s) per socket: 1
Socket(s): 26
Stepping: 8
BogoMIPS: 4000.00
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ss syscall nx pdpe1gb rdtscp lm constant_tsc arch_perfmon rep_good nopl xtopology cpuid tsc_known_freq pni pclmulqdq vmx ssse3 fma cx16 pdcm pcid sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand hypervisor lahf_lm abm 3dnowprefetch cpuid_fault ssbd ibrs ibpb stibp ibrs_enhanced tpr_shadow flexpriority ept vpid ept_ad fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid avx512f avx512dq rdseed adx smap avx512ifma clflushopt clwb avx512cd sha_ni avx512bw avx512vl xsaveopt xsavec xgetbv1 xsaves avx_vnni avx512_bf16 wbnoinvd arat vnmi avx512vbmi umip pku ospke waitpkg avx512_vbmi2 gfni vaes vpclmulqdq avx512_vnni avx512_bitalg avx512_vpopcntdq la57 rdpid bus_lock_detect cldemote movdiri movdir64b fsrm md_clear serialize tsxldtrk avx512_fp16 arch_capabilities
Virtualization: VT-x
Hypervisor vendor: KVM
Virtualization type: full
L1d cache: 832 KiB (26 instances)
L1i cache: 832 KiB (26 instances)
L2 cache: 104 MiB (26 instances)
L3 cache: 416 MiB (26 instances)
NUMA node(s): 1
NUMA node0 CPU(s): 0-25
Vulnerability Gather data sampling: Not affected
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Unknown: No mitigations
Vulnerability Reg file data sampling: Not affected
Vulnerability Retbleed: Not affected
Vulnerability Spec rstack overflow: Not affected
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Enhanced / Automatic IBRS; IBPB conditional; RSB filling; PBRSB-eIBRS SW sequence; BHI SW loop, KVM SW loop
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Mitigation; TSX disabled
Versions of relevant libraries:
[pip3] flake8==4.0.1
[pip3] numpy==1.21.5
[pip3] nvidia-cublas-cu12==12.4.5.8
[pip3] nvidia-cuda-cupti-cu12==12.4.127
[pip3] nvidia-cuda-nvrtc-cu12==12.4.127
[pip3] nvidia-cuda-runtime-cu12==12.4.127
[pip3] nvidia-cudnn-cu12==9.1.0.70
[pip3] nvidia-cufft-cu12==11.2.1.3
[pip3] nvidia-curand-cu12==10.3.5.147
[pip3] nvidia-cusolver-cu12==11.6.1.9
[pip3] nvidia-cusparse-cu12==12.3.1.170
[pip3] nvidia-cusparselt-cu12==0.6.2
[pip3] nvidia-nccl-cu12==2.21.5
[pip3] nvidia-nvjitlink-cu12==12.4.127
[pip3] nvidia-nvtx-cu12==12.4.127
[pip3] optree==0.13.1
[pip3] pytorch-triton==3.2.0+gitb2684bf3
[pip3] torch==2.7.0.dev20250203+cu124
[pip3] triton==3.2.0
[conda] Could not collect
```
</details>
cc @ezyang @gchanan @zou3519 @kadeng @msaroufim @chauhang @penguinwu @bobrenjc93 @voznesenskym @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @muchulee8 @amjames @desertfire @aakhundov @Chillee @drisspg @yanboliang @BoyuanFeng @ydwu4 @bdhirsh | true |
2,828,872,624 | PyWork: preserve Python reference counting when used in functional collectives | d4l3k | closed | [
"oncall: distributed",
"Merged",
"ciflow/trunk",
"release notes: distributed (c10d)"
] | 27 | MEMBER | @fegin found an issue where torchft is not compatible with functional collectives.
Found in https://github.com/pytorch/torchtitan/pull/806
The root cause is that PyProcessGroup/PyWork are not compatible with functional collectives, due to a nasty ownership bug.
PyWork relies on a pybind trampoline to propagate requests to Python. Unfortunately, the way pybind works, the Python object owns the C++ object rather than there being some form of shared ownership. Thus the PyWork Python object will be collected when returned to C++ from the PyProcessGroup, while the C++ PyWork object still exists. When the PyWork object is then used, this causes a deadlock, as the corresponding Python object no longer exists.
To solve this, we introduce a new `PyWorkHolder` class which holds a reference to the `py::object` as well as the trampoline class. This resolves any dependency issues since we can now hold ownership in C++ to both the Python and C++ objects.
To make this cleaner we introduce a `WORK_OVERRIDE` macro which is a patched version of `PYBIND11_OVERRIDE` that returns a `PyWorkHolder` rather than just `PyWork` and use for all collectives in PyProcessGroup.
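The ownership bug and the holder fix can be illustrated with a pure-Python analogue using weakrefs (illustrative only; the real code is C++/pybind11):

```python
import gc
import weakref

class PyWorkLike:
    pass  # stand-in for the Python-side Work object

# Bug analogue: when the only strong reference is dropped on the way back
# to "C++", the Python object is collected and the C++ side dangles.
obj = PyWorkLike()
ref = weakref.ref(obj)
del obj
gc.collect()
collected = ref() is None           # True: using it now would deadlock

# Fix analogue: a holder keeps a strong reference alongside the C++ object,
# which is what PyWorkHolder does for the py::object.
work = PyWorkLike()
work_ref = weakref.ref(work)
holder = {"work": work}             # strong reference held by the holder
del work
gc.collect()
alive = work_ref() is not None      # True: still reachable via the holder
```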
cc @H-Huang @awgu @kwen2501 @wanchaol @fegin @fduwjj @wz337 @wconstab @c-p-i-o
Test plan:
```
cd pytorch
pytest test/distributed/test_c10d_functional_native.py
```
```
cd torchft
pytest torchft/process_group_test.py -k functional -v -x -s
```
| true |
2,828,866,216 | [inductor] Remove SimplifyIndexing pass in codegen | jansel | closed | [
"topic: not user facing",
"module: inductor",
"ciflow/inductor"
] | 2 | CONTRIBUTOR | Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #146375
I'm not convinced this does anything since we simplify again later on.
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @amjames @desertfire @chauhang @aakhundov | true |
2,828,864,225 | [Dynamo] Fix spammy optimizer warning | mlazos | closed | [
"Merged",
"ciflow/trunk",
"module: dynamo",
"ciflow/inductor",
"release notes: dynamo"
] | 6 | CONTRIBUTOR | Fixes https://discuss.pytorch.org/t/torch-compile-optimizer-step-generates-excessive-warning-messages/216067/7
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @chauhang @amjames | true |
2,828,861,684 | [inductor] Pre-populate cache for simplify_with_ranges return value | jansel | closed | [
"Merged",
"Reverted",
"ciflow/trunk",
"topic: not user facing",
"module: inductor",
"ciflow/inductor",
"ci-no-td"
] | 14 | CONTRIBUTOR | Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #146373
* #146297
* #146282
* #146257
* #146255
* #146254
* #146252
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @amjames @desertfire @chauhang @aakhundov | true |
2,828,853,898 | [Submodule] Turning flash-attention integration into 3rd party submod (#144120) | drisspg | closed | [
"fb-exported",
"Merged",
"ciflow/trunk",
"topic: not user facing",
"skip-pr-sanity-checks",
"ciflow/inductor",
"suppress-bc-linter",
"ci-no-td",
"module: sdpa"
] | 29 | CONTRIBUTOR | Summary:
# Summary
### Sticky points
Cuda-graph rng handling has changed / deviated from original implementation. We will be left with a dangling 'offset' val and confusing naming due to BC
## Dependencies
- Flash PR: https://github.com/Dao-AILab/flash-attention/pull/1419
### Other Points
- The BC linter is complaining about losing generate.py and its functions which is not real BC surface
cc albanD
imported-using-ghimport
Test Plan:
Imported from OSS
Building in dev
`buck build @//mode/dev-nosan -c fbcode.nvcc_arch=h100a //caffe2:ATen-cu --show-full-output `
Running nm on the .so, I do see that the flash symbols are correctly named:
```
0000000001c3dfb0 t pytorch_flash::run_mha_bwd(pytorch_flash::Flash_bwd_params&, CUstream_st*)::$_0::operator()() const::{lambda()#1}::operator()() const::{lambda()#1}::operator()() const::{lambda()#7}::operator()() const
0000000001c36080 t pytorch_flash::run_mha_fwd(pytorch_flash::Flash_fwd_params&, CUstream_st*, bool)::$_0::operator()() const::{lambda()#2}::operator()() const::{lambda()#1}::operator()() const::{lambda()#6}::operator()() const
0000000001c360e0 t pytorch_flash::run_mha_fwd(pytorch_flash::Flash_fwd_params&, CUstream_st*, bool)::$_0::operator()() const::{lambda()#2}::operator()() const::{lambda()#1}::operator()() const::{lambda()#7}::operator()() const
0000000001c35fc0 t pytorch_flash::run_mha_fwd(pytorch_flash::Flash_fwd_params&, CUstream_st*, bool)::$_0::operator()() const::{lambda()#1}::operator()() const::{lambda()#1}::operator()() const::{lambda()#6}::operator()() const
0000000001c36020 t pytorch_flash::run_mha_fwd(pytorch_flash::Flash_fwd_params&, CUstream_st*, bool)::$_0::operator()() const::{lambda()#1}::operator()() const::{lambda()#1}::operator()() const::{lambda()#7}::operator()() const
```
Reviewed By: vkuzo
Differential Revision: D68502879
Pulled By: drisspg
cc @albanD | true |
2,828,852,412 | Torchrun does not handle worker failure gracefully | shravan-achar | open | [
"oncall: distributed"
] | 1 | NONE | ### 🐛 Describe the bug
```
def train_func():
import os
import torch.distributed as dist
import time
import sys
dist.init_process_group(backend="nccl")
ws = dist.get_world_size()
rank = dist.get_rank()
endpoint = os.getenv("PET_RDZV_ENDPOINT")
print(f"WS: {ws}, RANK: {rank}")
print(endpoint)
dist.barrier()
for i in range(40):
print(i)
time.sleep(1)
if i > 10 and rank == 1:
sys.exit(1)
```
This function is converted into a script. This is the command used,
```
torchrun <converted_script.py>
Env variables:
PET_RDZV_ENDPOINT= test-torch-restart-master-0:23456
PYTHONUNBUFFERED= 1
MASTER_PORT=23456
PET_MASTER_PORT=23456
MASTER_ADDR=test-torch-restart-master-0
PET_MASTER_ADDR=test-torch-restart-master-0
WORLD_SIZE=3
RANK=0
PET_NODE_RANK=0
PET_NPROC_PER_NODE=1
PET_NNODES=3
```
We expected that when the worker process with rank 1 failed, all of the workers would be restarted by torchrun. We are not seeing that. What we see is that the worker just fails while the other workers continue to run (including the master). Is that consistent with how torchrun is expected to handle worker failures? Also, the failed worker does not get restarted.
We are running on Vanilla Kubernetes (Linux and x86_64)
### Versions
torchrun version 2.1.2
cc @H-Huang @awgu @kwen2501 @wanchaol @fegin @fduwjj @wz337 @wconstab @d4l3k @c-p-i-o | true |
2,828,847,555 | [MPSInductor] Add support for any reduction | malfet | closed | [
"Merged",
"topic: not user facing",
"ciflow/mps",
"module: inductor",
"ciflow/inductor"
] | 3 | CONTRIBUTOR | Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #146380
* __->__ #146370
* #146369
- Add `_new_accvar` function that creates a threadgroup variable
- As threadgroup variables can not be initialized in place, add explicit initialization for reduction var
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @amjames @desertfire @chauhang @aakhundov | true |
2,828,847,470 | [MPSInductor] Prep change for reduction support | malfet | closed | [
"Merged",
"topic: not user facing",
"ciflow/mps",
"module: inductor",
"ciflow/inductor"
] | 3 | CONTRIBUTOR | Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #146380
* #146370
* __->__ #146369
Add `group_pos` parameter as well as set `group_size` when invoking reduction kernels
Separates loads and stores and inserts a threadgroup barrier if the reduction is in place
Should be a no-op right now
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @amjames @desertfire @chauhang @aakhundov | true |
2,828,837,568 | DeepSeek: block quantization | ngimel | open | [
"oncall: quantization"
] | 2 | COLLABORATOR | DeepSeek is using 128x1 and 128x128 quantization. Currently _scaled_mm supports row-wise quantization (although for some sizes performance of `fast_accum=False` leaves a lot to be desired), but there's no support for 128x1 and 128x128. There's some work for block quantization support for mx format for blackwell, but it likely won't work on Hopper.
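For context, a minimal pure-Python sketch of what 1x128 (per-group) absmax block quantization computes: one scale per contiguous group of 128 values, each group quantized to int8 against its own scale (illustrative only; this is not the `_scaled_mm` kernel):

```python
def block_quantize(x, block=128):
    # One absmax-derived scale per block of `block` values.
    scales, q = [], []
    for start in range(0, len(x), block):
        chunk = x[start:start + block]
        amax = max(abs(v) for v in chunk) or 1.0  # avoid div-by-zero blocks
        scale = amax / 127.0
        scales.append(scale)
        q.extend(round(v / scale) for v in chunk)
    return q, scales

def block_dequantize(q, scales, block=128):
    return [q[i] * scales[i // block] for i in range(len(q))]

vals = [float(i) for i in range(256)]
q, s = block_quantize(vals)
deq = block_dequantize(q, s)
```

The per-block error is bounded by half the block's scale, which is the accuracy benefit over a single tensor-wide scale.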
We need to decide if this support on Hopper is important and implement it if so. cc @jerryzh168 @jianyuh @raghuramank100 @jamesr66a @vkuzo @jgong5 @Xia-Weiwen @leslie-fang-intel @msaroufim @drisspg | true |
2,828,819,891 | [dynamo] Initial support for `nonstrict_trace` | StrongerXi | closed | [
"Merged",
"module: dynamo",
"ciflow/inductor",
"release notes: dynamo"
] | 3 | CONTRIBUTOR | Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #147572
* #147571
* #146950
* __->__ #146367
* #146714
## Context
> **Note:** `mark_traceable` got renamed to `nonstrict_trace` after
> offline discussion. The reasons are (1) it aligns with `torch.export`'s
> `nonstrict` notion, and (2) it's more definitive in behavior suggestion.
1. [Overall Design](https://docs.google.com/document/d/1O-dR2ZQaJQVt_v67AVcDCw2yJLtqgkZFwoXK0buEWRg/edit?tab=t.0)
2. [Dynamo graph representation with `torch._higher_order_ops.flat_apply`](https://docs.google.com/document/d/1YHl5nPTJvYeCPE5TO9uA18DPWNgUYGE4gCn6bFvXcBM/edit?tab=t.0#heading=h.xtw3hhbro4gn)
## Summary
This patch adds a `torch._dynamo.nonstrict_trace` decorator, which
currently is an enhanced version of `torch._dynamo.allow_in_graph` (see
docstring for their differences). Specifically, this patch focuses on
the UI and functionality prototyping/plumbing.
The main enhancement is supporting more input types, and the
implementation challenge lies in reconstructing the input objects from
Dynamo `VariableTracker` (while accounting for buffered side-effects and
guards). This patch takes a middle-ground (simple implementation with a
bit of user labor), by
1. asking the user to provide pytree registration for non-proxy-able
input types,
2. letting Dynamo trace through `pytree_flatten` (which accounts for
buffered side-effects and guards automatically),
3. and passing in the TreeSpec as a graph attribute constant into
`torch._higher_order_ops.flat_apply` (which unflattens the inputs and
invokes the underlying function).
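The flatten / TreeSpec / unflatten mechanism in steps 2-3 can be sketched in plain Python (a toy analogue of `torch.utils._pytree`, not the actual implementation):

```python
def flatten(obj):
    # Flatten nested lists/dicts into (leaves, spec); spec records structure.
    if isinstance(obj, list):
        leaves, specs = [], []
        for item in obj:
            l, s = flatten(item)
            leaves.extend(l)
            specs.append(s)
        return leaves, ("list", specs)
    if isinstance(obj, dict):
        leaves, specs = [], []
        for k in sorted(obj):
            l, s = flatten(obj[k])
            leaves.extend(l)
            specs.append((k, s))
        return leaves, ("dict", specs)
    return [obj], "leaf"

def unflatten(leaves, spec):
    it = iter(leaves)
    def build(s):
        if s == "leaf":
            return next(it)
        kind, children = s
        if kind == "list":
            return [build(c) for c in children]
        return {k: build(c) for k, c in children}
    return build(spec)

nested = {"a": [1, 2], "b": {"c": 3}}
leaves, spec = flatten(nested)      # leaves are proxy-able; spec is a constant
rebuilt = unflatten(leaves, spec)   # what flat_apply does before invoking fn
```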
## Next Steps
In subsequent patches, we will try to support the following:
- annotating on class method
- reads to global tensors
- inputs that contains `pytree.register_constant`-ed instances.
- function as input
- more output types (e.g., any pytree-registered type)
- `torch.nn.Module` as inputs
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @chauhang @amjames | true |
2,828,790,426 | [mps/inductor] Adjust more tests that expect float64 as input. | dcci | closed | [
"Merged",
"topic: not user facing",
"module: mps",
"ciflow/mps",
"module: inductor",
"ciflow/inductor"
] | 3 | MEMBER | cc @kulinseth @albanD @malfet @DenisVieriu97 @jhavukainen @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @amjames @desertfire @chauhang @aakhundov | true |
2,828,779,624 | [RFC][LOGS] Add options to show cutlass logs | henrylhtsang | closed | [
"fb-exported",
"topic: not user facing",
"module: dynamo",
"ciflow/inductor"
] | 10 | CONTRIBUTOR | Summary:
Use `os.environ["TORCH_LOGS"] = "+cutlass"` to see CUTLASS backend logs.
For example,
```
cutlass-3/python/cutlass_library/manifest.py:731] [0/0] Culled cutlass_tensorop_i168256xorgemm_b1_256x64_1024x4_tn_align128 from manifest
...
cutlass_library/generator.py:58] [0/0] *** CreateConvOperator3x
cutlass_library/generator.py:58] [0/0] *** conv_kind: 1
cutlass_library/generator.py:58] [0/0] *** ConvOperation3x::init: conv_kind: 1
cutlass_library.library.TileDescription object at 0x7feea280c640>
```
Test Plan: tested offline.
Differential Revision: D69082809
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @chauhang @amjames | true |
2,828,735,413 | [DeviceMesh] Add some documentation for `from_group` API and add a 2D test | wz337 | closed | [
"oncall: distributed",
"Merged",
"ciflow/trunk",
"module: dtensor",
"release notes: distributed (dtensor)"
] | 9 | CONTRIBUTOR | Fixes #ISSUE_NUMBER
cc @H-Huang @awgu @kwen2501 @wanchaol @fegin @fduwjj @wconstab @d4l3k @c-p-i-o @tianyu-l @XilunWu | true |
2,828,729,573 | print out partial fx graph for all data-dependent errors | bobrenjc93 | closed | [
"Merged",
"ciflow/trunk",
"release notes: fx",
"fx",
"module: dynamo",
"ciflow/inductor"
] | 6 | CONTRIBUTOR | Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #146363
* #146296
* #146298
The previous implementation didn't catch the following type of errors
```
torch.fx.experimental.symbolic_shapes.GuardOnDataDependentSymNode: Could not extract specialized integer from data-dependent expression u2 (unhinted: u2). (Size-like symbols: none)
```
cc @ezyang @SherlockNoMad @EikanWang @jgong5 @wenzhe-nrv @voznesenskym @penguinwu @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @jiayisunx @chenyang78 @kadeng @chauhang @amjames | true |
2,828,715,688 | Simple tensor parallel example forces all inputs/outputs to be replicated | bdhirsh | closed | [
"oncall: distributed",
"module: dtensor"
] | 5 | CONTRIBUTOR | Creating a fresh issue from the comment [here](https://github.com/pytorch/pytorch/issues/108840#issuecomment-2631806300):
Running this repro and printing all inputs/outputs, they all appear to have `device_mesh=DeviceMesh('cuda', [0, 1]), placements=(Replicate(),))`. Is that expected?
```
import os
import torch
import torch.distributed as dist
import torch.multiprocessing as mp
from torch.distributed.tensor.parallel import parallelize_module, ColwiseParallel, RowwiseParallel
from torch.distributed.device_mesh import DeviceMesh
from torch import nn
def run(rank, world_size):
# Set environment variables for distributed setup
os.environ['MASTER_ADDR'] = '127.0.0.1'
os.environ['MASTER_PORT'] = '29500'
# Initialize the process group (using NCCL for GPUs)
dist.init_process_group("nccl", rank=rank, world_size=world_size)
torch.cuda.set_device(rank)
# Create a device mesh across all GPUs in the process group.
# This mesh will be identical on every process.
device_ids = list(range(world_size)) # e.g., [0, 1] for a 2-GPU setup
mesh = DeviceMesh("cuda", device_ids)
# Define a simple MLP model.
class ToyModel(nn.Module):
def __init__(self):
super(ToyModel, self).__init__()
self.linear1 = nn.Linear(8, 8)
self.relu = nn.ReLU()
self.linear2 = nn.Linear(8, 8)
def forward(self, x):
return self.linear2(self.relu(self.linear1(x)))
model = ToyModel().cuda()
# Define a parallelization plan:
# - Partition the weights of "linear1" column-wise (i.e. split columns across devices).
# - Partition the weights of "linear2" row-wise (i.e. split rows across devices).
parallelize_plan = {
"linear1": ColwiseParallel(),
"linear2": RowwiseParallel(),
}
# Apply tensor parallelism according to the plan.
tp_model = parallelize_module(model, parallelize_plan=parallelize_plan, device_mesh=mesh)
# Print the parameter shapes and sample values to verify sharding.
print(f"Rank {rank} weight distributions:")
for name, param in tp_model.named_parameters():
print(f" {name}: shape = {param.shape}, device = {param.device}")
# Print a snippet of the flattened parameter values.
print(f" sample values: {param.view(-1)[:4]}")
# Compile the model using torch.compile with the "inductor" backend.
compiled_model = torch.compile(tp_model, backend="inductor")
# Dummy input for a forward pass.
input_tensor = torch.ones(4, 8).cuda()
# Run the forward pass.
output = compiled_model(input_tensor)
print(f"Rank {rank} output: {output}")
# Clean up the process group.
dist.destroy_process_group()
if __name__ == "__main__":
world_size = 2 # Adjust based on the number of GPUs you have.
mp.spawn(run, args=(world_size,), nprocs=world_size, join=True)
```
cc @H-Huang @awgu @kwen2501 @wanchaol @fegin @fduwjj @wz337 @wconstab @d4l3k @c-p-i-o @tianyu-l @XilunWu | true |
2,828,699,083 | Jz/test old stft | jackzhxng | closed | [
"release notes: onnx",
"ciflow/slow"
] | 2 | CONTRIBUTOR | Fixes #ISSUE_NUMBER
| true |
2,828,668,514 | Dynamo Unsupported: call_method UserDefinedObjectVariable(zip) __next__ [] {} | zou3519 | open | [
"triaged",
"oncall: pt2",
"module: dynamo",
"module: graph breaks"
] | 1 | CONTRIBUTOR | Repro:
```py
@torch.compile(backend="eager", fullgraph=True)
def f(x, z):
next(z)
return x.sin()
x = torch.randn(3)
z = zip([0, 1], [2, 3])
f(x, z)
```
cc @chauhang @penguinwu @voznesenskym @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @amjames | true |
2,828,668,457 | Dynamo Unsupported: call_method UserDefinedObjectVariable(zip) __iter__ () {} | zou3519 | open | [
"triaged",
"oncall: pt2",
"module: dynamo",
"module: graph breaks"
] | 0 | CONTRIBUTOR | Repro:
```py
@torch.compile(backend="eager", fullgraph=True)
def f(x, z):
iter(z)
return x.sin()
x = torch.randn(3)
z = zip([0, 1], [2, 3])
f(x, z)
```
cc @chauhang @penguinwu @voznesenskym @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @amjames | true |
2,828,660,091 | [ROCm][TunableOp] Support leading dimensions in TunableOp signature. | naromero77amd | closed | [
"module: rocm",
"open source",
"Merged",
"ciflow/trunk",
"topic: not user facing",
"ciflow/rocm"
] | 3 | COLLABORATOR | This is a feature enhancement that:
- May improve performance by distinguishing GEMMs with different leading dimensions.
- Fixes correctness issues reported by users.
cc @jeffdaily @sunway513 @jithunnair-amd @pruthvistony @ROCmSupport @dllehr-amd @jataylo @hongxiayang | true |
2,828,648,030 | [Dynamo] Better unsupported message for Fake Tensor Exception | zou3519 | closed | [
"Merged",
"ciflow/trunk",
"topic: not user facing",
"module: dynamo",
"ciflow/inductor"
] | 6 | CONTRIBUTOR | Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #146357
I cannot repro this. But this line shows up in internal logs, and I want
to know what the exception is and the context inside it. All of the
exceptions_allowed_to_be_fallback are dataclasses, so they should print
nicely.
Test Plan:
- code reading
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @chauhang @amjames | true |
2,828,633,280 | [cutlass backend] fix bug for accuminator dtype | henrylhtsang | closed | [
"Merged",
"Reverted",
"ciflow/trunk",
"topic: not user facing",
"module: inductor",
"ciflow/inductor",
"ci-no-td"
] | 12 | CONTRIBUTOR | Will add unit tests for accuracy.
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #146743
* __->__ #146356
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @amjames @desertfire @chauhang @aakhundov | true |
2,828,629,889 | [dynamo] replace hardcoded eval frame control flags skip_code_recursive_flag/cache_limit_hit_flag | williamwen42 | closed | [
"Merged",
"ciflow/trunk",
"topic: not user facing",
"module: dynamo",
"ciflow/inductor"
] | 9 | MEMBER | Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #146355
* #145603
This PR and the previous:
- Moves parts of `eval_frame.c` to C++.
- Reduces code duplication in `dynamo__custom_eval_frame` and makes the control flow more clear.
- Enables `convert_frame` to signal to `eval_frame.cpp` in a general manner how to evaluate this frame, recursive frames, and future frames with the same code object (default/compile, skip, run-only). e.g. this will allow us to change skipping/cache limit hit eval_frame behavior directly from convert_frame without requiring changes to C/C++.
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @chauhang @amjames | true |
2,828,624,396 | Remove fp16 accumulation default from inductor cutlass backend | Chillee | closed | [
"module: inductor",
"ciflow/inductor"
] | 2 | COLLABORATOR | Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #146354
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @amjames @desertfire @chauhang @aakhundov | true |
2,828,593,875 | Fix assertion failure in gemm template lowering | dmpots | closed | [
"fb-exported",
"Merged",
"ciflow/trunk",
"topic: not user facing",
"module: inductor",
"ciflow/inductor"
] | 15 | CONTRIBUTOR | Summary:
This commit fixes a crash in the gemm template lowering caused by hitting an [assert](https://github.com/pytorch/pytorch/blob/fd515e4f59bfa0ac9faa5185b7a02f3222c4cd08/torch/_inductor/codegen/common.py#L1181) that a buffer was previously removed.
The assert triggers because in the first gemm lowering we use a local accumulation buffer, which causes the original buffer name to be added to the `removed_buffers` set. Then in the next gemm lowering we use the global buffer for accumulation, but that buffer name is already in the `removed_buffers` set.
The fix is to add a unique suffix to the buffer name to avoid triggering the assert from different gemm lowerings.
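A minimal sketch of that idea; the helper and counter names below are hypothetical, not the actual Inductor code:
```python
import itertools

_acc_counter = itertools.count()
removed_buffers: set[str] = set()  # buffer names the compiler has decided to drop

def accumulation_buffer_name(base: str) -> str:
    # Appending a unique suffix means a second gemm lowering that accumulates
    # into the same logical buffer never reuses a name already recorded in
    # removed_buffers, so the "buffer was previously removed" assert can't fire.
    return f"{base}_acc{next(_acc_counter)}"
```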
Differential Revision: D68814625
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @amjames @desertfire @chauhang @aakhundov | true |
2,828,551,389 | Build a storage reader/writer to write checkpoints in HF format | ankitageorge | closed | [
"oncall: distributed",
"fb-exported",
"Merged",
"Reverted",
"ciflow/trunk",
"topic: new features",
"topic: not user facing",
"ci-no-td"
] | 18 | CONTRIBUTOR | Summary: Title - we want to write checkpoints in HF format with DCP; this diff enables that for the non-distributed use case.
Test Plan:
buck2 test 'fbcode//mode/dev-nosan' fbcode//caffe2/test/distributed/checkpoint:test_hf_torchtune_storage
N6476188 --> able to save and load tensor in hf format
Differential Revision: D68444967
cc @H-Huang @awgu @kwen2501 @wanchaol @fegin @fduwjj @wz337 @wconstab @d4l3k @c-p-i-o @LucasLLC @MeetVadakkanchery @mhorowitz @pradeepfn @ekr0 | true |
2,828,539,503 | [export] Fix requires_grad deserialization | angelayi | closed | [
"fb-exported",
"Merged",
"ciflow/trunk",
"ciflow/inductor",
"release notes: export"
] | 6 | CONTRIBUTOR | Test Plan: CI
Differential Revision: D69072095
| true |
2,828,529,070 | Dynamo Unsupported: call_method BuiltinVariable(str) isalnum [LazyVariableTracker()] {} | zou3519 | closed | [
"triaged",
"oncall: pt2",
"module: dynamo",
"module: graph breaks"
] | 0 | CONTRIBUTOR | Repro:
```py
@torch.compile(backend="eager", fullgraph=True)
def f(x, c):
str.isalnum(c)
return x.sin()
x = torch.randn(3)
f(x, "foobar")
```
Kinda weird, but OK. Should also just support all the str methods while we're at it.
cc @chauhang @penguinwu @voznesenskym @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @amjames | true |
2,828,526,138 | Dynamo Unsupported: call_method UserDefinedObjectVariable(dict_itemiterator) __next__ [] {} | zou3519 | open | [
"triaged",
"oncall: pt2",
"module: dynamo",
"module: graph breaks"
] | 0 | CONTRIBUTOR | Repro:
```py
@torch.compile(backend="eager", fullgraph=True)
def f(x, it):
next(it)
return x.sin()
x = torch.randn(3)
dct = {'a': 3, 'b': 3}
f(x, iter(dct.items()))
```
cc @chauhang @penguinwu @voznesenskym @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @amjames | true |
2,828,520,472 | Dynamo Unsupported call_method UserDefinedObjectVariable(generator) __iter__ () {} | zou3519 | open | [
"triaged",
"oncall: pt2",
"module: dynamo",
"module: graph breaks"
] | 0 | CONTRIBUTOR | Repro:
```py
@torch.compile(backend="eager", fullgraph=True)
def f(x, it):
iter(it)
return x.sin()
def get_gen(n):
for i in range(n):
yield i
x = torch.randn(3)
gen = get_gen(10)
f(x, gen)
```
Not clear to me how supportable this is. `__next__` is also an issue.
cc @chauhang @penguinwu @voznesenskym @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @amjames | true |
2,828,514,678 | Dynamo Unsupported call_method UserDefinedObjectVariable(enumerate) __iter__ () {} | zou3519 | open | [
"triaged",
"oncall: pt2",
"module: dynamo",
"module: graph breaks"
] | 0 | CONTRIBUTOR | Repro:
```py
@torch.compile(backend="eager", fullgraph=True)
def f(x, it):
iter(it)
return x.sin()
x = torch.randn(3)
f(x, enumerate(range(0, 3)))
```
cc @chauhang @penguinwu @voznesenskym @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @amjames | true |
2,828,511,553 | Dynamo Unsupported: call_method UserDefinedObjectVariable(enumerate) __next__ [] {} | zou3519 | open | [
"triaged",
"oncall: pt2",
"module: m1",
"module: dynamo",
"module: graph breaks"
] | 0 | CONTRIBUTOR | Repro:
```py
@torch.compile(backend="eager", fullgraph=True)
def f(x, it):
next(it)
return x.sin()
x = torch.randn(3)
f(x, enumerate(range(0, 3)))
```
cc @chauhang @penguinwu @voznesenskym @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @amjames | true |
2,828,505,302 | Only call triton in worker process, ahead of time compile | jamesjwu | closed | [
"fb-exported",
"ciflow/trunk",
"topic: not user facing",
"module: inductor",
"ciflow/inductor"
] | 3 | CONTRIBUTOR | Summary:
### Big idea
This PR extends https://github.com/pytorch/pytorch/pull/144288 by combining calling triton in worker processes with the future cache: we kick off triton compilation in the worker processes earlier, during inductor codegen. Basically instead of calling async_compile.triton for the first time only after the entire code has been generated, we start compiling as soon as we know we'll need to compile the kernel. Then, when loading the generated inductor code, we can simply read from our in memory future cache, considerably increasing the parallelism.
### Implementation Overview
In total, the diff does the following:
- Converts TritonFuture to LambdaFuture, only calling triton.compile on worker processes
- Now that triton.compile() isn't called on the main process, we call TritonBundler on all compiled kernels when we get them back from workers
- Extend eellison's future cache to a class, mostly as a refactor
- Finally, call async_compile.triton ahead of time in Scheduler.codegen if workers are warmed up. This causes the subsequent
async_compile.triton call that occurs after codegen to cache hit on cold start.
In the diffs after this, I will add more to CompiledTritonKernels so that TritonBundler, on a warm start, automatically populates the in memory cache on warm start with the existing triton kernels, avoiding calling triton altogether on warm starts.
Because LambdaFutures are much faster to kick off than TritonFutures, due to not needing to load from TritonCodeCache at all, the time spent kicking off these worker jobs is pretty minimal for inductor codegen.
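Roughly, such an in-memory future cache can be sketched as follows (the class and method names here are illustrative assumptions, not the actual Inductor API):
```python
from concurrent.futures import Future, ThreadPoolExecutor

class CompiledKernelFutures:
    """Maps a kernel-source key to the Future of its compilation."""

    def __init__(self, pool: ThreadPoolExecutor) -> None:
        self.pool = pool
        self.cache: dict[str, Future] = {}

    def submit(self, key: str, compile_fn, source: str) -> Future:
        # The first call, made during codegen, kicks off compilation on a
        # worker; the later call while loading the generated code hits the
        # cache and simply waits on the already in-flight future.
        if key not in self.cache:
            self.cache[key] = self.pool.submit(compile_fn, source)
        return self.cache[key]
```
Calling `submit` twice with the same key returns the same future, so the second caller never pays for compilation again.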
### Can we split the diff for easier review?
It's best if this diff lands atomically with all of these changes, as doing the ahead of time codegen compile is only performant if we replace TritonFuture with LambdaFuture(as we don't need to load the triton kernel on the main process). However, I've made a diff stack for easier reviewing here:
- D69070048 - Run async_compile.triton ahead of time in Scheduler.codegen
- D68633454 - Only call triton in worker process
Test Plan:
Compile times look overall good across the board:
https://hud.pytorch.org/benchmark/compilers?dashboard=torchinductor&startTime=Mon%2C%2027%20Jan%202025%2017%3A44%3A28%20GMT&stopTime=Mon%2C%2003%20Feb%202025%2017%3A44%3A28%20GMT&granularity=hour&mode=training&dtype=amp&lDeviceName=cuda%20(a100)&rDeviceName=cuda%20(a100)&lBranch=gh/jamesjwu/100/head&lCommit=a2bf134869bcc237e0c9ec5196331e282f826804&rBranch=gh/jamesjwu/100/base&rCommit=8b2932150f884427bf64235c7dd3b0e9f1727da1
FB FM V4 servicelab shows about a significant cold start improvement:
https://fburl.com/scuba/pt2_compile_events/035nnz0f
There's one model that got considerably slower with dynamic shapes, debugging now
Differential Revision: D69070616
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @amjames @desertfire @chauhang @aakhundov | true |
2,828,504,634 | Dynamo Unsupported: call_method UserDefinedObjectVariable(dict_valueiterator) __next__ [] {} | zou3519 | open | [
"triaged",
"oncall: pt2",
"module: dynamo",
"module: graph breaks"
] | 0 | CONTRIBUTOR | Repro:
```py
@torch.compile(backend="eager", fullgraph=True)
def f(x, it):
next(it)
return x.sin()
x = torch.randn(3)
dct = {"a": 3, "b": 3}
f(x, iter(dct.values()))
```
cc @chauhang @penguinwu @voznesenskym @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @amjames | true |
2,828,501,499 | Only call triton in worker process, ahead of time compile | jamesjwu | closed | [
"module: inductor",
"ciflow/inductor"
] | 2 | CONTRIBUTOR | Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* (to be filled)
# Big idea
This PR extends https://github.com/pytorch/pytorch/pull/144288 by combining calling triton in worker processes with the future cache: we kick off triton compilation in the worker processes earlier, during inductor codegen. Basically instead of calling async_compile.triton for the first time only after the entire code has been generated, we start compiling as soon as we know we'll need to compile the kernel. Then, when loading the generated inductor code, we can simply read from our in memory future cache, considerably increasing the parallelism.
# Implementation Overview
In total, the diff does the following:
- Converts TritonFuture to LambdaFuture, only calling triton.compile on worker processes
- Now that triton.compile() isn't called on the main process, we call TritonBundler on all compiled kernels when we get them back from workers
- Extend @eellison's future cache to a class, mostly as a refactor
- Finally, call async_compile.triton ahead of time in Scheduler.codegen if workers are warmed up. This causes the subsequent
async_compile.triton call that occurs after codegen to cache hit on cold start.
In the diffs after this, I will add more to CompiledTritonKernels so that TritonBundler, on a warm start, automatically populates the in memory cache on warm start with the existing triton kernels, avoiding calling triton altogether on warm starts.
Because LambdaFutures are much faster to kick off than TritonFutures, due to not needing to load from TritonCodeCache at all, the time spent kicking off these worker jobs is pretty minimal for inductor codegen.
### Can we split the diff for easier review?
It's best if this diff lands atomically with all of these changes, as doing the ahead of time codegen compile is only performant if we replace TritonFuture with LambdaFuture(as we don't need to load the triton kernel on the main process). However, I've made a diff stack for easier reviewing here:
- D69070048 - Run async_compile.triton ahead of time in Scheduler.codegen
- D68633454 - Only call triton in worker process
Differential Revision: [D69070616](https://our.internmc.facebook.com/intern/diff/D69070616/)
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @amjames @desertfire @chauhang @aakhundov | true |
2,828,494,844 | Only call triton in worker process; run async_compile.triton ahead of time in Scheduler.codegen | jamesjwu | closed | [
"module: inductor",
"ciflow/inductor"
] | 2 | CONTRIBUTOR | Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* (to be filled)
### Big idea
This PR extends https://github.com/pytorch/pytorch/pull/144288 by combining calling triton in worker processes with the future cache: we kick off triton compilation in the worker processes earlier, during inductor codegen. Basically instead of calling async_compile.triton for the first time only after the entire code has been generated, we start compiling as soon as we know we'll need to compile the kernel. Then, when loading the generated inductor code, we can simply read from our in memory future cache, considerably increasing the parallelism.
### Implementation Overview
In total, the diff does the following:
- Converts TritonFuture to LambdaFuture, only calling triton.compile on worker processes
- Now that triton.compile() isn't called on the main process, we call TritonBundler on all compiled kernels when we get them back from workers
- Extend @eellison's future cache to a class, mostly as a refactor
- Finally, call async_compile.triton ahead of time in Scheduler.codegen if workers are warmed up. This causes the subsequent async_compile.triton call that occurs after codegen to cache hit on cold start.
In the diffs after this, I will add more to CompiledTritonKernels so that TritonBundler, on a warm start, automatically populates the in memory cache on warm start with the existing triton kernels, avoiding calling triton altogether on warm starts.
Because LambdaFutures are much faster to kick off than TritonFutures, due to not needing to load from TritonCodeCache at all, the time spent kicking off these worker jobs is pretty minimal for inductor codegen.
### Can we split the diff for easier review?
It's best if this diff lands atomically with all of these changes, as doing the ahead of time codegen compile is only performant if we replace TritonFuture with LambdaFuture(as we don't need to load the triton kernel on the main process). However, I've made a diff stack for easier reviewing here:
D69070048 - Run async_compile.triton ahead of time in Scheduler.codegen
D68633454 - Only call triton in worker process
Differential Revision: [D69013710](https://our.internmc.facebook.com/intern/diff/D69013710/)
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @amjames @desertfire @chauhang @aakhundov | true |
2,828,492,191 | Only call triton in worker process; run async_compile.triton ahead of time in Scheduler.codegen | jamesjwu | closed | [
"Stale",
"module: inductor",
"ciflow/inductor"
] | 3 | CONTRIBUTOR | Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* (to be filled)
### Big idea
This PR extends https://github.com/pytorch/pytorch/pull/144288 by combining calling triton in worker processes with the future cache: we kick off triton compilation in the worker processes earlier, during inductor codegen. Basically instead of calling async_compile.triton for the first time only after the entire code has been generated, we start compiling as soon as we know we'll need to compile the kernel. Then, when loading the generated inductor code, we can simply read from our in memory future cache, considerably increasing the parallelism.
### Implementation Overview
In total, the diff does the following:
- Converts TritonFuture to LambdaFuture, only calling triton.compile on worker processes
- Now that triton.compile() isn't called on the main process, we call TritonBundler on all compiled kernels when we get them back from workers
- Extend @eellison's future cache to a class, mostly as a refactor
- Finally, call async_compile.triton ahead of time in Scheduler.codegen if workers are warmed up. This causes the subsequent async_compile.triton call that occurs after codegen to cache hit on cold start.
In the diffs after this, I will add more to CompiledTritonKernels so that TritonBundler, on a warm start, automatically populates the in memory cache on warm start with the existing triton kernels, avoiding calling triton altogether on warm starts.
Because LambdaFutures are much faster to kick off than TritonFutures, due to not needing to load from TritonCodeCache at all, the time spent kicking off these worker jobs is pretty minimal for inductor codegen.
### Can we split the diff for easier review?
It's best if this diff lands atomically with all of these changes, as doing the ahead of time codegen compile is only performant if we replace TritonFuture with LambdaFuture(as we don't need to load the triton kernel on the main process). However, I've made a diff stack for easier reviewing here:
D69070048 - Run async_compile.triton ahead of time in Scheduler.codegen
D68633454 - Only call triton in worker process
Differential Revision: [D69013710](https://our.internmc.facebook.com/intern/diff/D69013710/)
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @amjames @desertfire @chauhang @aakhundov | true |
2,828,490,066 | Dynamo graph break on `call_method UserDefinedObjectVariable(list_iterator) __next__ [] {}` | zou3519 | open | [
"triaged",
"oncall: pt2",
"module: dynamo",
"module: graph breaks"
] | 0 | CONTRIBUTOR | Repro:
```py
@torch.compile(backend="eager", fullgraph=True)
def f(x, it):
next(it)
return x.sin()
x = torch.randn(3)
it = iter([1, 2, 3])
f(x, it)
```
cc @chauhang @penguinwu @voznesenskym @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @amjames | true |
2,828,458,016 | [dynamo] Support functools.partial variables through inspect.signature | anijain2305 | closed | [
"Merged",
"ciflow/trunk",
"topic: not user facing",
"module: dynamo",
"ciflow/inductor"
] | 3 | CONTRIBUTOR | Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #146339
* #146116
* #146322
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @chauhang @amjames | true |
2,828,452,937 | __pow__ operator on cfloat z=0 on mps produces `nan` | jcampbell | open | [
"triaged",
"module: complex",
"module: correctness (silent)",
"module: mps"
] | 4 | NONE | ### 🐛 Describe the bug
Using the `**` operator on a complex tensor with value zero returns `nan` when using the `mps` device
```
import torch
device = torch.device("mps")
t = torch.tensor(0 + 0j, dtype=torch.cfloat).to(device) # only the scalar zero appears to cause this issue
t = t ** 2
print(t)
```
Returns:
```
tensor(nan+nanj, device='mps:0')
```
### Versions
python collect_env.py
Collecting environment information...
PyTorch version: 2.5.1
Is debug build: False
CUDA used to build PyTorch: None
ROCM used to build PyTorch: N/A
OS: macOS 15.3 (arm64)
GCC version: Could not collect
Clang version: 16.0.0 (clang-1600.0.26.6)
CMake version: Could not collect
Libc version: N/A
Python version: 3.12.8 (main, Jan 14 2025, 23:36:58) [Clang 19.1.6 ] (64-bit runtime)
Python platform: macOS-15.3-arm64-arm-64bit
Is CUDA available: False
CUDA runtime version: No CUDA
CUDA_MODULE_LOADING set to: N/A
GPU models and configuration: No CUDA
Nvidia driver version: No CUDA
cuDNN version: No CUDA
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Apple M2 Pro
Versions of relevant libraries:
[pip3] No relevant packages
[conda] Could not collect
cc @ezyang @anjali411 @dylanbespalko @mruberry @nikitaved @amjames @kulinseth @albanD @malfet @DenisVieriu97 @jhavukainen | true |
2,828,449,213 | Honor Dr.CI classification results on auto commit hash update | huydhn | closed | [
"Merged",
"topic: not user facing",
"test-config/default"
] | 3 | CONTRIBUTOR | Disabling `ignore_flaky_failures` was the safer choice, but this option doesn't seem to work with the current state of the CI. For example, https://github.com/pytorch/pytorch/pull/125806 hasn't been merged since May because there is always a failure of one type or another. This effectively disables the automation mechanism.
My proposal here is to relax this rule and allow the bot to merge auto commit hash updates with `@pytorchbot merge` like a regular PR. Then we will at least have something working. If this causes issues, we can revert it and try the longer route of improving CI reliability. | true |
2,828,405,077 | [ONNX] Fix torchlib function errors | justinchuby | open | [
"module: onnx",
"triaged"
] | 4 | COLLABORATOR | Tracking issue for new function errors from the torchlib migration.
- [x] unflatten (https://github.com/microsoft/onnxscript/pull/2070)
- [ ] embedding bag
- [ ] as_strided
- [x] unfold (https://github.com/microsoft/onnxscript/pull/2067) | true |
2,828,398,757 | [WIP][dynamic shapes] mark backed size symbols as size-like | pianpwk | open | [
"Stale",
"ciflow/trunk",
"release notes: fx",
"fx",
"module: inductor",
"module: dynamo",
"ciflow/inductor"
] | 5 | CONTRIBUTOR | experimental, to apply upper-bound / maxsize size-oblivious semantics to backed symbols
cc @ezyang @SherlockNoMad @EikanWang @jgong5 @wenzhe-nrv @voznesenskym @penguinwu @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov | true |
2,828,392,136 | Only call triton in worker process, ahead of time compile | jamesjwu | closed | [
"fb-exported",
"module: inductor",
"ciflow/inductor"
] | 3 | CONTRIBUTOR | Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #146334
This PR extends https://github.com/pytorch/pytorch/pull/144288 by combining calling triton in worker processes with the future cache: we kick off triton compilation in the worker processes earlier, during inductor codegen. Basically instead of calling async_compile.triton for the first time only after the entire code has been generated, we start compiling as soon as we know we'll need to compile the kernel. Then, when loading the generated inductor code, we can simply read from our in memory future cache, considerably increasing the parallelism.
Because LambdaFutures are much faster to kick off than TritonFutures, due to not needing to load from TritonCodeCache at all, the time spent kicking off these worker jobs is pretty minimal for inductor codegen.
Differential Revision: [D69013710](https://our.internmc.facebook.com/intern/diff/D69013710/) | true |
2,828,383,088 | Add optional generator to distribution sampler/rsample methods. | vladoovtcharov | open | [
"module: distributions",
"triaged",
"open source",
"topic: not user facing"
] | 5 | NONE | Fixes part of #45115 and #11340
Adds a generator parameter to all the sample/rsample methods of torch distribution classes
cc @fritzo @neerajprad @alicanb @nikitaved | true |
2,828,378,673 | DeepSeek: fine-grained overlap | ngimel | open | [
"oncall: distributed"
] | 3 | COLLABORATOR | DeepSeek implements fine-grained concurrency, using the computation of one batch to hide the communication of another (see picture). Currently we have no convenient mechanism to express this kind of parallelism, given that model code usually specifies the series of computations and communications for a single microbatch and puts a loop over microbatches on top. It's quite possible that this sort of parallelism should live in torchtitan, or maybe we should rely on torch.compile to come up with suitable schedules. The design space is pretty large here.
cc @H-Huang @awgu @kwen2501 @wanchaol @fegin @fduwjj @wz337 @wconstab @d4l3k @c-p-i-o @Chillee @janeyx99
<img width="793" alt="Image" src="https://github.com/user-attachments/assets/2f885a6d-0d8c-4da9-a031-18b4a4a18fcc" /> | true |
2,828,377,716 | DeepSeek: hierarchical a2a | ngimel | open | [
"oncall: distributed"
] | 0 | COLLABORATOR | DeepSeek implements node-limited hierarchical routing to reduce cross-node traffic, where each token is sent to a preset number of nodes. The routing is done in 2 pipelined stages: first the token is sent to a peer GPU on another node, and then that GPU routes it to the correct GPUs within its node in the dispatch stage. For DeepSeek's published parameters, with EP=64 all the experts are located on 8 nodes, and each token is sent to 8 experts. With the naive implementation, a token would likely go to all 7 other nodes, with cross-node traffic of `model_dim * 7 * data_type_size`. With node-limited routing (and proper deduplication; the implementation should be careful about this), the token is sent to at most 3 other nodes, resulting in cross-node traffic of `model_dim * 3 * data_type_size`, cutting the cross-node traffic by more than half.
For the combine all2all (the all2all that happens after the MOE computation or MOE gradient computation), the contributions to this token from GPUs on a node are summed within the node (an op that can roughly be described as ReduceScatterAll2All) and then returned with a cross-node send to the GPU where the token lived originally, with a similar reduction in cross-node traffic.
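The traffic arithmetic above can be sanity-checked with a small helper; the `model_dim` and dtype size below are assumed values for illustration only:
```python
def cross_node_bytes(model_dim: int, nodes_touched: int, dtype_size: int) -> int:
    # Per-token cross-node traffic: one copy of the hidden state per remote node.
    return model_dim * nodes_touched * dtype_size

# Naive routing: a token may reach all 7 other nodes of an 8-node EP=64 setup.
naive = cross_node_bytes(model_dim=7168, nodes_touched=7, dtype_size=2)
# Node-limited routing: at most 3 other nodes after intra-node deduplication.
limited = cross_node_bytes(model_dim=7168, nodes_touched=3, dtype_size=2)
assert limited * 7 == naive * 3  # ratio is 3/7, i.e. cut by more than half
```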
cc @H-Huang @awgu @kwen2501 @wanchaol @fegin @fduwjj @wz337 @wconstab @d4l3k @c-p-i-o @yifuwang | true |
2,828,376,655 | DeepSeek: MLA attention | ngimel | open | [
"triaged",
"module: sdpa"
] | 6 | COLLABORATOR | DeepSeek uses MLA attention, which currently doesn't have an efficient implementation in PyTorch. cc @drisspg | true |
2,828,375,627 | DeepSeek: a2a communication with metadata on the GPU | ngimel | open | [
"oncall: distributed"
] | 4 | COLLABORATOR | For e2e routing the routing data is computed on the GPU, and to avoid CPU synchronization the all2all op itself should be implemented in such a way that it reads the necessary metadata from the GPU. The wrinkle here is that with the splits data on the GPU we won't know what size output we need to allocate; the typical way to get around this is to allocate a fixed-size output (N_max_tokens) and come up with an algorithm to drop extra tokens if more than the limit would get routed to an expert.
Additionally, with this token-choice routing we would need to preface the actual all2all with an allgather comm to get the correct offsets where each token should end up, and we should make sure the latency of this allgather is small (it communicates only token ids, so the size is very small).
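A minimal host-side sketch of the fixed-size output with token dropping (the capacity value, the drop policy, and all names here are illustrative assumptions, not the actual kernel):
```python
def dispatch_fixed_capacity(token_ids, expert_ids, num_experts, capacity):
    # The output shape is fixed up front (num_experts x capacity), so the
    # buffer can be allocated before the split sizes are known on the host;
    # -1 marks empty slots, and tokens past an expert's capacity are dropped.
    out = [[-1] * capacity for _ in range(num_experts)]
    fill = [0] * num_experts
    dropped = []
    for tok, exp in zip(token_ids, expert_ids):
        if fill[exp] < capacity:
            out[exp][fill[exp]] = tok
            fill[exp] += 1
        else:
            dropped.append(tok)
    return out, dropped
```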
cc @H-Huang @awgu @kwen2501 @wanchaol @fegin @fduwjj @wz337 @wconstab @d4l3k @c-p-i-o @yifuwang, @kwen2501 | true |