| id (int64, 2.74B–3.05B) | title (string, 1–255 chars) | user (string, 2–26 chars) | state (string, 2 classes) | labels (list, 0–24 items) | comments (int64, 0–206) | author_association (string, 4 classes) | body (string, 7–62.5k chars, nullable) | is_title (bool, 1 class) |
|---|---|---|---|---|---|---|---|---|
2,888,319,388
|
Add Structured Knowledge Accumulation (SKA) Layer to PyTorch
|
BouarfaMahi
|
open
|
[
"module: nn",
"triaged"
] | 0
|
NONE
|
### 🚀 The feature, motivation and pitch
### Description
We propose adding **Structured Knowledge Accumulation (SKA)** layers as native subclasses of `torch.nn.Module` in PyTorch, introducing **forward-only, entropy-driven learning** without backpropagation. SKA enables self-organizing neural networks that learn **without a loss function**, unlocking efficient, biologically plausible AI and a paradigm shift for scalable neural networks.
### Why SKA?
- **No Backpropagation**: Forward-only learning eliminates gradient computation overhead.
- **Entropy Minimization**: Layers structure knowledge progressively via local updates.
- **Autonomous Learning**: Decision shifts drive learning without a loss function.
- **Scalability**: Suited for distributed AI and edge devices with low memory needs.
### Proposed Implementation
1. **Create `SKALinear` and `SKAConv2d` Layers**
- Subclass `torch.nn.Module` for SKA-based updates using entropy shifts.
- Example:
```python
import torch
import torch.nn as nn

class SKALinear(nn.Module):
    def __init__(self, in_features, out_features, eta=0.01):
        super().__init__()
        self.weight = nn.Parameter(torch.randn(out_features, in_features) * 0.01)
        self.bias = nn.Parameter(torch.zeros(out_features))
        self.prev_D = None  # decision probabilities from the previous forward pass
        self.eta = eta

    def forward(self, x):
        z = x @ self.weight.t() + self.bias
        D = torch.sigmoid(z)
        if self.prev_D is not None:
            delta_D = D - self.prev_D
            # Entropy-driven local update (no backpropagation)
            grad_update = -(z * D * (1 - D) + delta_D) / torch.log(torch.tensor(2.0))
            self.weight.data -= self.eta * torch.matmul(grad_update.t(), x) / x.shape[0]
            self.bias.data -= self.eta * grad_update.mean(dim=0)
        self.prev_D = D.detach()
        return D
```
2. **No Explicit Loss Function**
- SKA learns autonomously, requiring no supervision or loss minimization.
- Entropy decreases naturally over forward passes, structuring knowledge.
3. **Track Learning via Entropy Reduction & Alignment**
- Measure layer-wise entropy:
$$H^{(l)} = -\frac{1}{\ln 2} \sum_{k=1}^{K} \mathbf{z}^{(l)}_k \cdot \Delta \mathbf{D}^{(l)}_k$$
(summed over K forward passes)
- Log cosine similarity between $z$ and $ΔD$ for interpretability, e.g., via a `metrics` property.
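A minimal sketch of how this could be tracked, assuming the `SKALinear` definition above; the helper name, the `batches` iterable, and the manual recomputation of `z` are illustrative only:
```python
import torch
import torch.nn.functional as F

# Illustrative only: accumulate the entropy sum H^(l) and the z/ΔD cosine
# alignment for one SKALinear layer over K forward passes.
def track_ska_metrics(layer, batches):
    H, alignments = 0.0, []
    prev_D = None
    ln2 = torch.log(torch.tensor(2.0))
    for x in batches:  # K forward passes
        z = x @ layer.weight.t() + layer.bias
        D = torch.sigmoid(z)
        if prev_D is not None:
            delta_D = D - prev_D
            H -= ((z * delta_D).sum() / ln2).item()
            alignments.append(
                F.cosine_similarity(z.flatten(), delta_D.flatten(), dim=0).item()
            )
        prev_D = D.detach()
    return H, alignments
```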
### Benefits of SKA Integration in PyTorch
🔹 **Drop-in Replacement**: Swap `nn.Linear` with `SKALinear`.
🔹 **Efficient Training**: No backward passes, GPU-friendly tensor ops.
🔹 **Memory-Efficient**: Eliminates gradient storage.
🔹 **Interpretability**: Entropy heatmaps replace loss curves.
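As a usage sketch (assuming the `SKALinear` class from the example above; the layer sizes are arbitrary):
```python
import torch
import torch.nn as nn

# Backprop-trained baseline:
#   model = nn.Sequential(nn.Linear(784, 256), nn.ReLU(), nn.Linear(256, 10))
# Forward-only SKA variant:
model = nn.Sequential(SKALinear(784, 256), SKALinear(256, 10))

x = torch.randn(32, 784)
for _ in range(10):   # repeated forward passes; no loss and no backward()
    out = model(x)
```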
---
### References & Resources
- SKA Research Paper:
- [2503.13942v1](https://arxiv.org/abs/2503.13942) - Theoretical Foundation
- [2504.03214v1](https://arxiv.org/abs/2504.03214) - Theoretical Extension
- [SKA GitHub](https://github.com/quantiota/Arxiv) - Initial codebase
- [Kaggle SKA Competition](https://www.kaggle.com/competitions/structured-knowledge-accumulation) - Community benchmark
### Alternatives
_No response_
### Additional context
_No response_
cc @albanD @mruberry @jbschlosser @walterddr @mikaylagawarecki
| true
|
2,888,313,637
|
[export] Fix logging so that it doesn't result in max recursion error
|
angelayi
|
closed
|
[
"fb-exported",
"Merged",
"ciflow/trunk",
"fx",
"ciflow/inductor",
"release notes: export"
] | 7
|
CONTRIBUTOR
|
Test Plan:
buck2 run mode/dev-nosan sigmoid/inference/ts_migration:pt2i_readiness_main -- --model_id=487493491 --test_suite ads_all --mode test_full_model
Produces https://manifold.edge.x2p.facebook.net/v0/read/tree/logs/.tmp2wsjQH/index.html?bucketName=tlparse_reports&apiKey=tlparse_reports-key&withPayload=1&timeoutMsec=100
Differential Revision: D70416613
cc @ezyang @SherlockNoMad @EikanWang @jgong5 @wenzhe-nrv
| true
|
2,888,293,720
|
[triton 3.3] Fix inductor/test_profiler.py test
|
davidberard98
|
closed
|
[
"Merged",
"ciflow/trunk",
"topic: not user facing",
"module: inductor",
"ciflow/inductor"
] | 3
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #148230
test_inductor_profiling_kernel_names_pointwise is checking that the profiler correctly records the input shapes to the kernel. After triton 3.3, we get a different number of args (because the constexpr args are passed in, from the python perspective). This just patches the test to pass in either case.
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
| true
|
2,888,291,030
|
[cutlass backend] Add main tests for mm, addmm and bmm - step 1
|
henrylhtsang
|
closed
|
[
"Merged",
"ciflow/trunk",
"topic: not user facing",
"module: inductor",
"ciflow/inductor"
] | 6
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #148236
* #148234
* #148233
* __->__ #148229
This adds very good coverage for normal mm tests {aoti x torch.compile} x {default, dynamic}.
There are some parts that are less tested. For example:
* different layout combo
* shapes that are less aligned
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
| true
|
2,888,259,491
|
ROCm: Disable torch check for Multiplication of two Float8_e5m2 matrices
|
jagadish-amd
|
closed
|
[
"module: rocm",
"open source",
"Merged",
"ciflow/trunk",
"topic: not user facing",
"rocm",
"ciflow/rocm"
] | 5
|
CONTRIBUTOR
|
ROCm supports Multiplication of two Float8_e5m2 matrices.
Hence disabling the torch check for ROCm.
Test command (on ROCm h/w supporting fp8)
python test/test_matmul_cuda.py TestFP8MatmulCudaCUDA.test_float8_basics_cuda -v
cc @jeffdaily @sunway513 @jithunnair-amd @pruthvistony @ROCmSupport @dllehr-amd @jataylo @hongxiayang @naromero77amd
| true
|
2,888,240,713
|
Significantly speed up save_cache_artifacts
|
oulgen
|
closed
|
[
"Merged",
"ciflow/trunk",
"topic: not user facing",
"module: inductor",
"ciflow/inductor"
] | 3
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #148227
* #148226
While using save_cache_artifacts on internal workloads, we have noticed that repeatedly calling this function after every batch is incredibly expensive. This PR significantly speeds up this function call by opting out of pickle and redesigning the serialization algorithm.
Essentially what we want is to be able to call serialize many times without incurring costs from scratch.
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
| true
|
2,888,240,628
|
Add AppendingByteSerializer class
|
oulgen
|
closed
|
[
"Merged",
"ciflow/trunk",
"topic: not user facing"
] | 3
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #148227
* __->__ #148226
This PR adds a new util class that enables efficient appending of sequential byte data with custom serialization and deserialization.
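This is not the actual implementation, but a minimal sketch of the general pattern (length-prefixed records appended to a growing buffer, so serialization can be called repeatedly without re-encoding earlier entries); all names here are made up:
```python
import struct
from typing import Callable, List, TypeVar

T = TypeVar("T")

class SimpleAppendingByteSerializer:
    """Illustrative only: append custom-serialized records to one byte buffer."""

    def __init__(self, serialize_fn: Callable[[T], bytes]):
        self._serialize_fn = serialize_fn
        self._buf = bytearray()

    def append(self, item: T) -> None:
        data = self._serialize_fn(item)
        self._buf += struct.pack("<Q", len(data))  # 8-byte length prefix
        self._buf += data

    def to_bytes(self) -> bytes:
        return bytes(self._buf)

    @staticmethod
    def read(data: bytes, deserialize_fn: Callable[[bytes], T]) -> List[T]:
        out, off = [], 0
        while off < len(data):
            (n,) = struct.unpack_from("<Q", data, off)
            off += 8
            out.append(deserialize_fn(data[off:off + n]))
            off += n
        return out
```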
| true
|
2,888,218,329
|
[Inductor-CPU] ATen SDPA kernel runtime is not captured in profiling results
|
sanchitintel
|
closed
|
[
"oncall: cpu inductor"
] | 3
|
COLLABORATOR
|
### 🐛 Describe the bug
With `torch.compile`, SDPA op runtime is not being captured in PyTorch profiling results.
The profiling results may have an item such as `graph_0_cpp_fused__scaled_dot_product_flash_attentio.....`, but it doesn't correspond to SDPA, and instead corresponds to a kernel whose output may be input to SDPA.
This issue manifests regardless of whether CPP wrapper is enabled in Inductor config.
It also happens regardless of whether max-autotune is enabled (when autotuning is enabled, flex-attention is used for next-token generation, but the ATen SDPA op's Flash Attention kernel is used for first-token generation).
The following Inductor config was enabled, but its use is not necessary:
```
inductor_config.profiler_mark_wrapper_call = True
inductor_config.cpp.enable_kernel_profile = True
inductor_config.cpp.descriptive_names = "inductor_node"
```
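For reference, a minimal profiling setup of the kind used to look for the SDPA kernel in the results (illustrative; `model` and `example_inputs` are placeholders, not taken from the report):
```python
import torch
from torch.profiler import profile, ProfilerActivity

compiled = torch.compile(model)          # placeholder model (e.g. GPT-J)
with profile(activities=[ProfilerActivity.CPU]) as prof:
    compiled(*example_inputs)            # placeholder inputs
print(prof.key_averages().table(sort_by="cpu_time_total", row_limit=20))
```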
### Snippet of generated code from GPT-J to elaborate
<details>
```c++
#include <ATen/record_function.h>
#include "/tmp/torchinductor_user/3b/c3bi5gk6mslf6u4iaqafhxm64z6u65e3eain4xlary5blqnvv6xx.h"
extern "C" void cpp_fused__scaled_dot_product_flash_attention_for_cpu_1_convert_element_type_43_convert_element_type_42_index_put_2_index_put_3_16(const float* in_ptr0,
                       const float* in_ptr1,
                       const int64_t* in_ptr2,
                       const int64_t* in_ptr3,
                       const bfloat16* in_ptr4,
                       bfloat16* out_ptr0,
                       bfloat16* out_ptr1,
                       bfloat16* out_ptr2,
                       bfloat16* out_ptr3)
{
    RECORD_FUNCTION("graph_0_cpp_fused__scaled_dot_product_flash_attention_for_cpu_1_convert_element_type_43_convert_element_type_42_index_put_2_index_put_3_16", c10::ArrayRef<c10::IValue>({}));
    #pragma omp parallel num_threads(32)
    {
        int tid = omp_get_thread_num();
        {
            #pragma omp for
            for(int64_t x0=static_cast<int64_t>(0L); x0<static_cast<int64_t>(4194304L); x0+=static_cast<int64_t>(32L))
            {
                {
                    if(C10_LIKELY(x0 >= static_cast<int64_t>(0) && x0 < static_cast<int64_t>(4194304L)))
                    {
                        auto tmp0 = at::vec::VectorizedN<float,2>::loadu(in_ptr0 + static_cast<int64_t>(x0), static_cast<int64_t>(32));
                        auto tmp1 = at::vec::convert<bfloat16,1,float,2>(tmp0);
                        tmp1.store(out_ptr0 + static_cast<int64_t>(x0), static_cast<int64_t>(32));
                    }
                }
            }
        }
        {
            #pragma omp for
            for(int64_t x0=static_cast<int64_t>(0L); x0<static_cast<int64_t>(1024L); x0+=static_cast<int64_t>(1L))
            {
                #pragma GCC ivdep
                for(int64_t x1=static_cast<int64_t>(0L); x1<static_cast<int64_t>(16L); x1+=static_cast<int64_t>(1L))
                {
                    for(int64_t x2=static_cast<int64_t>(0L); x2<static_cast<int64_t>(256L); x2+=static_cast<int64_t>(32L))
                    {
                        {
                            if(C10_LIKELY(x2 >= static_cast<int64_t>(0) && x2 < static_cast<int64_t>(256L)))
                            {
                                auto tmp0 = at::vec::VectorizedN<float,2>::loadu(in_ptr1 + static_cast<int64_t>(x2 + 256L*x1 + 4096L*x0), static_cast<int64_t>(32));
                                auto tmp2 = in_ptr2[static_cast<int64_t>(x0)];
                                auto tmp36 = at::vec::Vectorized<bfloat16>::loadu(in_ptr4 + static_cast<int64_t>(x2 + 256L*x1 + 4096L*x0), static_cast<int64_t>(32));
                                auto tmp1 = at::vec::convert<bfloat16,1,float,2>(tmp0);
                                auto tmp3 = static_cast<int64_t>(64);
                                auto tmp4 = ((tmp2 < 0) != (tmp3 < 0) ? (tmp2 % tmp3 != 0 ? tmp2 / tmp3 - 1 : tmp2 / tmp3) : tmp2 / tmp3);
                                auto tmp5 = 32L;
                                auto tmp6 = c10::convert<int64_t>(tmp5);
                                auto tmp7 = decltype(tmp4)(tmp4 + tmp6);
                                auto tmp8 = tmp4 < 0;
                                auto tmp9 = tmp8 ? tmp7 : tmp4;
                                auto tmp10 = tmp9;
                                auto tmp11 = c10::convert<int64_t>(tmp10);
                                TORCH_CHECK((0 <= tmp11) & (tmp11 < 32L), "index out of bounds: 0 <= tmp11 < 32L");
                                auto tmp13 = in_ptr3[static_cast<int64_t>(tmp9)];
                                auto tmp14 = c10::convert<int32_t>(tmp13);
                                auto tmp15 = static_cast<int32_t>(64);
                                auto tmp16 = decltype(tmp14)(tmp14 * tmp15);
                                auto tmp17 = c10::convert<int64_t>(tmp16);
                                auto tmp18 = mod(tmp2, tmp3);
                                auto tmp19 = static_cast<int32_t>(0);
                                auto tmp20 = tmp18 != tmp19;
                                auto tmp21 = std::signbit(tmp18);
                                auto tmp22 = std::signbit(tmp3);
                                auto tmp23 = tmp21 != tmp22;
                                auto tmp24 = tmp20 & tmp23;
                                auto tmp25 = decltype(tmp18)(tmp18 + tmp3);
                                auto tmp26 = tmp24 ? tmp25 : tmp18;
                                auto tmp27 = decltype(tmp17)(tmp17 + tmp26);
                                auto tmp28 = 2048L;
                                auto tmp29 = c10::convert<int64_t>(tmp28);
                                auto tmp30 = decltype(tmp27)(tmp27 + tmp29);
                                auto tmp31 = tmp27 < 0;
                                auto tmp32 = tmp31 ? tmp30 : tmp27;
                                auto tmp33 = tmp32;
                                auto tmp34 = c10::convert<int64_t>(tmp33);
                                TORCH_CHECK((0 <= tmp34) & (tmp34 < 2048L), "index out of bounds: 0 <= tmp34 < 2048L");
                                tmp1.store(out_ptr1 + static_cast<int64_t>(x2 + 256L*x1 + 4096L*x0), static_cast<int64_t>(32));
                                tmp1.store(out_ptr2 + static_cast<int64_t>(x2 + 256L*tmp32 + 524288L*x1), static_cast<int64_t>(32));
                                tmp36.store(out_ptr3 + static_cast<int64_t>(x2 + 256L*tmp32 + 524288L*x1), static_cast<int64_t>(32));
                            }
                        }
                    }
                }
            }
        }
    }
}
```
This kernel's outputs are input to the SDPA op
```python
cpp_fused__scaled_dot_product_flash_attention_for_cpu_1_convert_element_type_43_convert_element_type_42_index_put_2_index_put_3_16((const float*)(buf42.data_ptr()), (const float*)(buf49.data_ptr()), (const int64_t*)(arg736_1.data_ptr()), (const int64_t*)(_frozen_param626.data_ptr()), (const bfloat16*)(buf50.data_ptr()), (bfloat16*)(buf51.data_ptr()), (bfloat16*)(buf52.data_ptr()), (bfloat16*)(arg628_1.data_ptr()), (bfloat16*)(arg629_1.data_ptr()));
arg628_1.reset();
arg629_1.reset();
// Topologically Sorted Source Nodes: [scaled_dot_product_attention_1], Original ATen: [aten._to_copy, aten._scaled_dot_product_flash_attention_for_cpu]
auto tmp_tensor_handle_17 = reinterpret_tensor_wrapper(buf50, 4, int_array_5, int_array_6, 0L);
RAIIAtenTensorHandle tmp_tensor_handle_17_raii(tmp_tensor_handle_17);
AtenTensorHandle var_1 = buf19.get();
AtenTensorHandle buf54_handle;
AtenTensorHandle buf55_handle;
AOTI_TORCH_ERROR_CODE_CHECK(aoti_torch_cpu__scaled_dot_product_flash_attention_for_cpu(buf51, buf52, tmp_tensor_handle_17_raii, 0.0, 0, &var_1, 0, &buf54_handle, &buf55_handle));
```
</details>
### Potential solutions?
Tried [this patch](https://github.com/pytorch/pytorch/commit/734f940f527a53bde1334b8a8819062c78029f2f#diff-b60511be1e7fafc2c45e7c0cb3e769ad48b2a1060a69759f58979ffc33b38a79) from Chunyuan and ported it to the main branch in #148290, which should help with profiling GPT-J, the motivating example for this issue.
### Versions
Main branch
cc @leslie-fang-intel @jgong5
| true
|
2,888,209,250
|
[EZ][BE] Increase tolerances for interpolate op
|
malfet
|
closed
|
[
"Merged",
"topic: not user facing",
"ciflow/mps"
] | 3
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #148224
* #148211
* #148187
* #148154
Not sure why the tolerances were set like that; this logic was added in https://github.com/pytorch/pytorch/pull/104181 without much explanation.
But if I were to guess, it's likely due to the inaccuracy of the bilinear op, which has since been replaced by a shader.
| true
|
2,888,190,711
|
[inductor][ck] add kBatch_sweep to config.rocm
|
coconutruben
|
closed
|
[
"module: rocm",
"fb-exported",
"Merged",
"ciflow/trunk",
"topic: not user facing",
"module: inductor",
"ciflow/inductor",
"ciflow/rocm"
] | 6
|
CONTRIBUTOR
|
Summary:
# Why
Enable tests and users to specify a set of kBatch values to try, rather than relying on our hand-written heuristic.
# What
Add `rocm.kBatch_sweep` as a list of kBatch values to try out. These generate a product of CK instances, one per kBatch for each existing op, though they are often filtered out if they are likely to fail at runtime.
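A hypothetical usage sketch (the attribute path is assumed from the description above, not verified):
```python
from torch._inductor import config as inductor_config

# Assumed spelling of the new knob: try these kBatch values when generating
# CK gemm instances, instead of relying only on the built-in heuristic.
inductor_config.rocm.kBatch_sweep = [1, 2, 4, 8]
```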
Test Plan: n/a
Reviewed By: chenyang78
Differential Revision: D70226055
cc @jeffdaily @sunway513 @jithunnair-amd @pruthvistony @ROCmSupport @dllehr-amd @jataylo @hongxiayang @naromero77amd @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
| true
|
2,888,181,103
|
make saved_tensor_hooks work better in compile for doing activation compression
|
bdhirsh
|
open
|
[
"module: activation checkpointing",
"module: autograd",
"triaged",
"oncall: pt2",
"module: pt2-dispatcher"
] | 4
|
CONTRIBUTOR
|
One potential use case of quantized dtype (e.g. float8) is in compressing activations: you run the model forward, autograd saves some activations for backward. Rather than saving these activations in higher precision, you may want to compress them (say to float8), and decompress them when they are later needed in the backward, to reduce peak memory.
The usual eager API for doing something like this is `saved_tensor_hooks` - you specify a pack function that autograd invokes when it saves an activation (to compress), and an unpack function that autograd calls when the activation is used in the backward (to decompress).
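In eager mode that looks roughly like the sketch below. `torch.autograd.graph.saved_tensors_hooks` is the real context manager; the float8 round trip is illustrative and assumes a build with `torch.float8_e4m3fn`:
```python
import torch

def pack(t: torch.Tensor):
    # Compress the saved activation (illustrative: float8 plus original dtype)
    return t.to(torch.float8_e4m3fn), t.dtype

def unpack(packed):
    compressed, orig_dtype = packed
    return compressed.to(orig_dtype)

x = torch.randn(1024, 1024, requires_grad=True)
with torch.autograd.graph.saved_tensors_hooks(pack, unpack):
    y = (x.relu() * 3).sum()
y.backward()
```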
One can try to use `saved_tensor_hooks` in compile for this purpose, but there are a few issues today:
(1) the unpack hooks will all be called **at the same time for all activations**. Some context: `torch.compile` handles training by constructing a mega `autograd.Function`, containing a compiled forward and backward graph. Today, `saved_tensor_hooks` integrate with compile by getting turned on at runtime, on the resulting `autograd.Function`. This means that autograd is obligated to run your unpack hooks on every tensor that is saved for backward. This has the downside that you won't actually save any memory: we will unpack all of your activations to the larger size at the same time. Instead, what you really want is for this unpacking to happen **inside** of the compiled backward graph, so each activation can delay being unpacked until it is actually used.
(2) the pack hooks will not be compiled either. This is not necessarily a memory problem, but if you are quantizing your activations in the forward, you may want this quantization routine to be compiled as part of your forward graph. For the same reasons above, this will not happen today.
After some brainstorming with @xmfan and @zou3519 , we have a potential solution, that requires tweaking the way compile handles `saved_tensor_hooks`. It comes in a few pieces:
(1) Allow compile to inline pack/unpack hooks into the corresponding forward and backward graphs. Concretely: if there is an ambient set of saved_tensor_hooks around a compiled region, we can trace these hooks into a graph at compile time. First, grab the FakeTensor activations that we traced in AOTAutograd. Then grab the ambient pack and unpack hook, and run `make_fx()` on them with each activation to produce a pack and unpack graph. Finally, after partitioning, stitch the pack graph onto the end of the forward graph (once per activation), and the unpack graph onto the beginning of the backward graph (once per activation). We can potentially be a bit smarter here: instead of stitching the unpack graphs onto the beginning of the backward, we can embed them into the location where each activation is first used in the backward graph
We need some way for the user to specify that their pack/unpack hooks are valid for us to inline at compile time, since this might not be the case in general (the hooks must be naively traceable). We could potentially require them to use `mark_traceable` for this.
(2) Another constraint today is that we don't support `saved_tensor_hooks()` being entered inside a compiled region. We could consider lifting this restriction. One way to do this would be for dynamo/AOT to annotate which pack/unpack hooks a given set of intermediate nodes in the graph map to, so we know which activations should map to which hooks in the compiler.
Here is also a partial paste for using saved_tensor_hooks to quantize/dequantize activations: https://www.internalfb.com/phabricator/paste/view/P1743484780. It works in eager, but won't work in compile without trying to fix (1) above. (This paste also tries to lazily dequantize in the unpack hooks, which we may not actually want)
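As a rough illustration of the tracing step sketched in (1): `make_fx` can turn a pack/unpack hook into an FX graph for a given example activation, which could then be stitched into the forward/backward graphs. This is illustrative only, not the proposed implementation:
```python
import torch
from torch.fx.experimental.proxy_tensor import make_fx

def pack(t):
    return t.to(torch.float8_e4m3fn)      # compress (illustrative)

def unpack(t):
    return t.to(torch.float32)            # decompress (illustrative)

example_activation = torch.randn(8, 8)    # stand-in for a saved activation
pack_graph = make_fx(pack)(example_activation)
unpack_graph = make_fx(unpack)(pack(example_activation))
print(pack_graph.graph)
```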
cc @ezyang @albanD @gqchen @pearu @nikitaved @soulitzer @Varal7 @xmfan @chauhang @penguinwu @zou3519
| true
|
2,888,144,994
|
[ci] disable cudagraph for tts_angular on dashboard
|
BoyuanFeng
|
closed
|
[
"Merged",
"ciflow/trunk",
"topic: not user facing",
"module: dynamo",
"ciflow/inductor"
] | 6
|
CONTRIBUTOR
|
tts_angular with cudagraph is flaky: its speedup varies from 0.05 to 1.01. This PR disables cudagraph for tts_angular to avoid the noise. Since tts_angular shows ~1x speedup while other torchbench models show ~2x speedup, skipping tts_angular entirely would wrongly bump the cudagraph speedup, so this PR only disables cudagraph for tts_angular instead of skipping it.
[Dashboard ](https://github.com/pytorch/pytorch/actions/runs/13597394087)
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @chauhang @amjames
| true
|
2,888,140,251
|
[dynamo] rename test_graph_break_messages -> test_error_messages
|
williamwen42
|
closed
|
[
"Merged",
"ciflow/trunk",
"topic: not user facing",
"module: dynamo",
"ciflow/inductor"
] | 3
|
MEMBER
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #148401
* __->__ #148220
* #148205
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @chauhang @amjames
| true
|
2,888,128,252
|
MPS vs Metal vs CPU performance comparison
|
manuelcandales
|
open
|
[
"module: performance",
"triaged",
"module: mps"
] | 0
|
CONTRIBUTOR
|
The following numbers are averages over 1000 runs, produced on an M1 Pro (16GB RAM), using the script at the bottom of this issue.
exp, tanh and erfinv are operations currently implemented as Metal shaders
Things to notice:
- On the MPS backend, multiplying a 1-element tensor by 1 is 3 to 5 times more expensive than computing exp, tanh or erfinv on that same 1-element tensor. Multiplying a 10,000-element tensor by 1 is 2 times more expensive than computing exp, tanh or erfinv on that same tensor.
- Contrast the performance of exp, tanh, erfinv vs sqrt, log, sin, cos (all unary operations)
- Dispatch time of MPS ops trashes their performance for small to medium size inputs.
- Ops implemented as Metal shaders don't have that dispatch overhead.
- For small sized inputs: CPU wins over Metal and MPS.
- For medium to large sized inputs: Metal wins over MPS and CPU.
```
Tiny tensors (1-element):
-------------------------
Metal Performance:
exp : 7.6 us [1.4 dispatch] vs 0.9 us on CPU
tanh : 4.6 us [1.2 dispatch] vs 0.9 us on CPU
erfinv : 5.0 us [1.3 dispatch] vs 1.1 us on CPU
MPS Performance:
sqrt : 21.0 us [20.7 dispatch] vs 0.9 us on CPU
log : 19.9 us [19.6 dispatch] vs 0.9 us on CPU
sin : 20.4 us [20.0 dispatch] vs 0.9 us on CPU
cos : 19.6 us [19.2 dispatch] vs 0.9 us on CPU
mul.Scalar : 24.1 us [23.7 dispatch] vs 2.1 us on CPU
add.Scalar : 23.6 us [23.2 dispatch] vs 2.1 us on CPU
sub.Scalar : 24.2 us [23.8 dispatch] vs 2.2 us on CPU
div.Scalar : 23.7 us [23.4 dispatch] vs 2.2 us on CPU
mul : 24.5 us [24.2 dispatch] vs 0.9 us on CPU
add : 23.7 us [23.4 dispatch] vs 1.0 us on CPU
sub : 24.7 us [24.3 dispatch] vs 1.0 us on CPU
div : 23.3 us [22.9 dispatch] vs 0.9 us on CPU
linear : 25.5 us [23.5 dispatch] vs 1.6 us on CPU
Medium tensors (10,000 elements):
---------------------------------
Metal Performance:
exp : 9.6 us [1.2 dispatch] vs 37.8 us on CPU
tanh : 12.0 us [1.3 dispatch] vs 49.1 us on CPU
erfinv : 12.0 us [1.3 dispatch] vs 108.9 us on CPU
MPS Performance:
sqrt : 20.4 us [20.1 dispatch] vs 33.0 us on CPU
log : 20.6 us [20.3 dispatch] vs 42.3 us on CPU
sin : 20.6 us [20.2 dispatch] vs 41.4 us on CPU
cos : 20.1 us [19.8 dispatch] vs 43.6 us on CPU
mul.Scalar : 24.4 us [24.1 dispatch] vs 2.9 us on CPU
add.Scalar : 24.7 us [24.5 dispatch] vs 3.0 us on CPU
sub.Scalar : 24.7 us [24.4 dispatch] vs 3.0 us on CPU
div.Scalar : 24.5 us [24.3 dispatch] vs 2.9 us on CPU
mul : 24.5 us [24.3 dispatch] vs 1.7 us on CPU
add : 24.7 us [24.4 dispatch] vs 1.9 us on CPU
sub : 24.9 us [24.1 dispatch] vs 1.9 us on CPU
div : 27.0 us [24.3 dispatch] vs 1.8 us on CPU
linear : 28.5 us [24.7 dispatch] vs 6.6 us on CPU
Large tensors (1000,000 elements):
----------------------------------
Metal Performance:
exp : 32.6 us [1.3 dispatch] vs 300.7 us on CPU
tanh : 59.9 us [1.2 dispatch] vs 687.8 us on CPU
erfinv : 80.6 us [1.2 dispatch] vs 4154.6 us on CPU
MPS Performance:
sqrt : 38.4 us [26.1 dispatch] vs 202.0 us on CPU
log : 49.6 us [33.1 dispatch] vs 415.7 us on CPU
sin : 57.6 us [33.6 dispatch] vs 371.6 us on CPU
cos : 55.0 us [33.7 dispatch] vs 402.7 us on CPU
mul.Scalar : 55.4 us [33.9 dispatch] vs 195.2 us on CPU
add.Scalar : 49.7 us [28.8 dispatch] vs 197.7 us on CPU
sub.Scalar : 60.8 us [40.8 dispatch] vs 195.7 us on CPU
div.Scalar : 68.6 us [40.0 dispatch] vs 197.9 us on CPU
mul : 45.8 us [26.7 dispatch] vs 189.9 us on CPU
add : 54.5 us [35.4 dispatch] vs 203.6 us on CPU
sub : 47.0 us [28.5 dispatch] vs 193.2 us on CPU
div : 43.0 us [25.1 dispatch] vs 185.5 us on CPU
linear : 560.9 us [311.7 dispatch] vs 1370.6 us on CPU
```
Benchmarking script:
```python
import torch
import time

torch.set_grad_enabled(False)

def benchmark_op(op, *args, num_iters=1000):
    # Warm-up
    for _ in range(10):
        op(*args)
    torch.mps.synchronize()
    # Benchmark
    start = time.time()
    for _ in range(num_iters):
        op(*args)
    end_dispatch = time.time()
    torch.mps.synchronize()
    end_execution = time.time()
    execution_time = 1000000 * (end_execution - start) / num_iters
    dispatch_time = 1000000 * (end_dispatch - start) / num_iters
    return execution_time, dispatch_time

def benchmark_vs_cpu(op, *args, num_iters=1000):
    e_time, d_time = benchmark_op(op, *args, num_iters=1000)
    cpu_args = [arg.cpu() if isinstance(arg, torch.Tensor) else arg for arg in args]
    cpu_time, _ = benchmark_op(op, *cpu_args, num_iters=1000)
    spaces = " " * (20 - len(op.__name__))
    print(f" {op.__name__}{spaces}: {e_time:.1f} us [{d_time:.1f} dispatch] vs {cpu_time:.1f} us on CPU")

def bench_metal_ops(x, y):
    print("\n Metal Performance:")
    # unary
    benchmark_vs_cpu(torch.ops.aten.exp, x)
    benchmark_vs_cpu(torch.ops.aten.tanh, x)
    benchmark_vs_cpu(torch.ops.aten.erfinv, x)

def bench_mps_ops(x, y):
    print("\n MPS Performance:")
    # unary
    benchmark_vs_cpu(torch.ops.aten.sqrt, x)
    benchmark_vs_cpu(torch.ops.aten.log, x)
    benchmark_vs_cpu(torch.ops.aten.sin, x)
    benchmark_vs_cpu(torch.ops.aten.cos, x)
    # binary with scalar
    benchmark_vs_cpu(torch.ops.aten.mul.Scalar, x, 1)
    benchmark_vs_cpu(torch.ops.aten.add.Scalar, x, 1)
    benchmark_vs_cpu(torch.ops.aten.sub.Scalar, x, 1)
    benchmark_vs_cpu(torch.ops.aten.div.Scalar, x, 1)
    # binary
    benchmark_vs_cpu(torch.ops.aten.mul, x, y)
    benchmark_vs_cpu(torch.ops.aten.add, x, y)
    benchmark_vs_cpu(torch.ops.aten.sub, x, y)
    benchmark_vs_cpu(torch.ops.aten.div, x, y)
    # linear
    benchmark_vs_cpu(torch.ops.aten.linear, x, y, None)

def bench_with_shape(*shape):
    x = torch.randn(*shape, requires_grad=False, device="mps", dtype=torch.float32)
    y = torch.randn(*shape, requires_grad=False, device="mps", dtype=torch.float32)
    bench_metal_ops(x, y)
    bench_mps_ops(x, y)

print("\nTiny tensors (1-element):")
print("-------------------------")
bench_with_shape(1, 1)

print("\nMedium tensors (10,000 elements):")
print("---------------------------------")
bench_with_shape(100, 100)

print("\nLarge tensors (1000,000 elements):")
print("----------------------------------")
bench_with_shape(1000, 1000)
```
cc @msaroufim @kulinseth @albanD @malfet @DenisVieriu97 @jhavukainen
| true
|
2,888,106,454
|
DISABLED test_dynamic_sources_dynamic_override (__main__.MiscTests)
|
pytorch-bot[bot]
|
closed
|
[
"triaged",
"module: flaky-tests",
"skipped",
"oncall: pt2",
"module: dynamo"
] | 15
|
NONE
|
Platforms: asan, linux, mac, macos, rocm, slow
This test was disabled because it is failing in CI. See [recent examples](https://hud.pytorch.org/flakytest?name=test_dynamic_sources_dynamic_override&suite=MiscTests&limit=100) and the most recent trunk [workflow logs](https://github.com/pytorch/pytorch/runs/37999979503).
Over the past 3 hours, it has been determined flaky in 45 workflow(s) with 92 failures and 45 successes.
**Debugging instructions (after clicking on the recent samples link):**
DO NOT ASSUME THINGS ARE OKAY IF THE CI IS GREEN. We now shield flaky tests from developers so CI will thus be green but it will be harder to parse the logs.
To find relevant log snippets:
1. Click on the workflow logs linked above
2. Click on the Test step of the job so that it is expanded. Otherwise, the grepping will not work.
3. Grep for `test_dynamic_sources_dynamic_override`
4. There should be several instances run (as flaky tests are rerun in CI) from which you can study the logs.
<details><summary>Sample error message</summary>
```
Traceback (most recent call last):
File "/Users/ec2-user/runner/_work/pytorch/pytorch/test/dynamo/test_misc.py", line 7877, in test_dynamic_sources_dynamic_override
self.assertEqual(counter.frame_count, 1)
File "/Users/ec2-user/runner/_work/_temp/conda_environment_13592535338/lib/python3.9/site-packages/torch/testing/_internal/common_utils.py", line 4092, in assertEqual
raise error_metas.pop()[0].to_error( # type: ignore[index]
AssertionError: Scalars are not equal!
Expected 1 but got 3.
Absolute difference: 2
Relative difference: 2.0
To execute this test, run the following from the base repo dir:
python test/dynamo/test_misc.py MiscTests.test_dynamic_sources_dynamic_override
This message can be suppressed by setting PYTORCH_PRINT_REPRO_ON_FAILURE=0
```
</details>
Test file path: `dynamo/test_misc.py`
cc @clee2000 @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @chauhang @amjames
| true
|
2,888,106,281
|
DISABLED test_guard_failure_fn2 (__main__.MiscTests)
|
pytorch-bot[bot]
|
closed
|
[
"triaged",
"module: flaky-tests",
"skipped",
"oncall: pt2",
"module: dynamo"
] | 2
|
NONE
|
Platforms: asan, linux, mac, macos, rocm, slow
This test was disabled because it is failing in CI. See [recent examples](https://hud.pytorch.org/flakytest?name=test_guard_failure_fn2&suite=MiscTests&limit=100) and the most recent trunk [workflow logs](https://github.com/pytorch/pytorch/runs/37999438720).
Over the past 3 hours, it has been determined flaky in 45 workflow(s) with 92 failures and 45 successes.
**Debugging instructions (after clicking on the recent samples link):**
DO NOT ASSUME THINGS ARE OKAY IF THE CI IS GREEN. We now shield flaky tests from developers so CI will thus be green but it will be harder to parse the logs.
To find relevant log snippets:
1. Click on the workflow logs linked above
2. Click on the Test step of the job so that it is expanded. Otherwise, the grepping will not work.
3. Grep for `test_guard_failure_fn2`
4. There should be several instances run (as flaky tests are rerun in CI) from which you can study the logs.
<details><summary>Sample error message</summary>
```
Traceback (most recent call last):
File "/Users/ec2-user/runner/_work/pytorch/pytorch/test/dynamo/test_misc.py", line 6694, in test_guard_failure_fn2
guard_failure[0],
TypeError: 'NoneType' object is not subscriptable
To execute this test, run the following from the base repo dir:
python test/dynamo/test_misc.py MiscTests.test_guard_failure_fn2
This message can be suppressed by setting PYTORCH_PRINT_REPRO_ON_FAILURE=0
```
</details>
Test file path: `dynamo/test_misc.py`
cc @clee2000 @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @chauhang @amjames
| true
|
2,888,106,195
|
DISABLED test_guard_failure_fn_shape_control_dynamic_shapes (__main__.DynamicShapesMiscTests)
|
pytorch-bot[bot]
|
closed
|
[
"triaged",
"module: flaky-tests",
"skipped",
"oncall: pt2",
"module: dynamo"
] | 4
|
NONE
|
Platforms: mac, macos, linux, slow
This test was disabled because it is failing in CI. See [recent examples](https://hud.pytorch.org/flakytest?name=test_guard_failure_fn_shape_control_dynamic_shapes&suite=DynamicShapesMiscTests&limit=100) and the most recent trunk [workflow logs](https://github.com/pytorch/pytorch/runs/38003139493).
Over the past 3 hours, it has been determined flaky in 5 workflow(s) with 12 failures and 5 successes.
**Debugging instructions (after clicking on the recent samples link):**
DO NOT ASSUME THINGS ARE OKAY IF THE CI IS GREEN. We now shield flaky tests from developers so CI will thus be green but it will be harder to parse the logs.
To find relevant log snippets:
1. Click on the workflow logs linked above
2. Click on the Test step of the job so that it is expanded. Otherwise, the grepping will not work.
3. Grep for `test_guard_failure_fn_shape_control_dynamic_shapes`
4. There should be several instances run (as flaky tests are rerun in CI) from which you can study the logs.
Test file path: `dynamo/test_dynamic_shapes.py`
cc @clee2000 @malfet @albanD @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @chauhang @amjames
| true
|
2,888,106,144
|
DISABLED test_mark_unbacked_strict_dynamic_shapes (__main__.DynamicShapesMiscTests)
|
pytorch-bot[bot]
|
closed
|
[
"triaged",
"module: flaky-tests",
"skipped",
"oncall: pt2",
"module: dynamo"
] | 3
|
NONE
|
Platforms: asan, linux, rocm, slow, mac, macos
This test was disabled because it is failing in CI. See [recent examples](https://hud.pytorch.org/flakytest?name=test_mark_unbacked_strict_dynamic_shapes&suite=DynamicShapesMiscTests&limit=100) and the most recent trunk [workflow logs](https://github.com/pytorch/pytorch/runs/38001883288).
Over the past 3 hours, it has been determined flaky in 40 workflow(s) with 80 failures and 40 successes.
**Debugging instructions (after clicking on the recent samples link):**
DO NOT ASSUME THINGS ARE OKAY IF THE CI IS GREEN. We now shield flaky tests from developers so CI will thus be green but it will be harder to parse the logs.
To find relevant log snippets:
1. Click on the workflow logs linked above
2. Click on the Test step of the job so that it is expanded. Otherwise, the grepping will not work.
3. Grep for `test_mark_unbacked_strict_dynamic_shapes`
4. There should be several instances run (as flaky tests are rerun in CI) from which you can study the logs.
<details><summary>Sample error message</summary>
```
Traceback (most recent call last):
File "/var/lib/jenkins/workspace/test/dynamo/test_misc.py", line 10877, in test_mark_unbacked_strict
with self.assertRaisesRegex(RuntimeError, "RelaxedUnspecConstraint"):
File "/opt/conda/envs/py_3.10/lib/python3.10/unittest/case.py", line 226, in __exit__
self._raiseFailure("{} not raised".format(exc_name))
File "/opt/conda/envs/py_3.10/lib/python3.10/unittest/case.py", line 163, in _raiseFailure
raise self.test_case.failureException(msg)
AssertionError: RuntimeError not raised
To execute this test, run the following from the base repo dir:
PYTORCH_TEST_WITH_ASAN=1 PYTORCH_TEST_WITH_UBSAN=1 python test/dynamo/test_dynamic_shapes.py DynamicShapesMiscTests.test_mark_unbacked_strict_dynamic_shapes
This message can be suppressed by setting PYTORCH_PRINT_REPRO_ON_FAILURE=0
```
</details>
Test file path: `dynamo/test_dynamic_shapes.py`
cc @clee2000 @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @chauhang @amjames
| true
|
2,888,106,073
|
DISABLED test_dynamic_sources_dynamic_override_dynamic_shapes (__main__.DynamicShapesMiscTests)
|
pytorch-bot[bot]
|
closed
|
[
"triaged",
"module: flaky-tests",
"skipped",
"oncall: pt2",
"module: dynamo"
] | 13
|
NONE
|
Platforms: asan, linux, mac, macos, rocm, slow
This test was disabled because it is failing in CI. See [recent examples](https://hud.pytorch.org/flakytest?name=test_dynamic_sources_dynamic_override_dynamic_shapes&suite=DynamicShapesMiscTests&limit=100) and the most recent trunk [workflow logs](https://github.com/pytorch/pytorch/runs/38002538615).
Over the past 3 hours, it has been determined flaky in 45 workflow(s) with 92 failures and 45 successes.
**Debugging instructions (after clicking on the recent samples link):**
DO NOT ASSUME THINGS ARE OKAY IF THE CI IS GREEN. We now shield flaky tests from developers so CI will thus be green but it will be harder to parse the logs.
To find relevant log snippets:
1. Click on the workflow logs linked above
2. Click on the Test step of the job so that it is expanded. Otherwise, the grepping will not work.
3. Grep for `test_dynamic_sources_dynamic_override_dynamic_shapes`
4. There should be several instances run (as flaky tests are rerun in CI) from which you can study the logs.
Test file path: `dynamo/test_dynamic_shapes.py`
cc @clee2000 @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @chauhang @amjames
| true
|
2,888,086,555
|
Expose functions used in custom backend in torch_python dll
|
wschin
|
closed
|
[
"triaged",
"open source",
"Merged",
"topic: bug fixes",
"topic: not user facing"
] | 4
|
COLLABORATOR
|
Fixes #148208. There are solutions for exposing symbols implicitly required by inline functions (i.e., inline function A calls non-inline function B in foo.h; code that includes foo.h has to see the symbol B in the DLL).
Solution 1: tag the entire struct whose member functions are defined inline with TORCH_PYTHON_API --- this PR does this for python_arg_parser.h. An alternative solution exists but would slow down dispatching a lot --- drop the inline keyword and move the implementation to the .cc file.
Solution 2: tag individual functions with TORCH_PYTHON_API. This PR does this for python_tensor.h.
Related discussion about hiding torch_python symbols: https://github.com/pytorch/pytorch/pull/142214
| true
|
2,888,043,173
|
[not for merge] [AOTI] selectively build code at O1
|
benjaminglass1
|
closed
|
[
"open source",
"topic: not user facing",
"module: inductor",
"ciflow/inductor"
] | 2
|
COLLABORATOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #148212
* #144349
* #144293
* #146928
Still TODO:
1. Port these improvements to `cpp_wrapper` mode, if the speedup is worth it.
2. Remove the now-unneeded `cpp_prefix` include from the shared `cpu.h` AOTI header.
3. Fix CMake packaging, even if it's a bodge job "put it all in one file instead of two if we CMake package" style fix.
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
| true
|
2,888,014,821
|
[MPS][BE] Combine two `upsample_kernel_out_template` into one
|
malfet
|
closed
|
[
"Merged",
"release notes: mps",
"ciflow/mps"
] | 3
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #148224
* __->__ #148211
* #148187
* #148154
- First, stop inverting sizes and strides, i.e. pass them as is, but read them in inverse order in the shader, as the 1st stride of a 4D tensor is the one used for batches, the 2nd for channels, and the 3rd and 4th for spatial coordinates
- Pass `scales` as float2 even for a linear tensor
The above allows one to combine the two flavors of `upsample_kernel_out_template` into one
| true
|
2,887,991,646
|
[inductor] Lowerings for max_pool3d
|
isuruf
|
closed
|
[
"open source",
"Merged",
"ciflow/trunk",
"ciflow/mps",
"module: inductor",
"ciflow/inductor",
"release notes: inductor"
] | 9
|
COLLABORATOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #148210
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
| true
|
2,887,991,512
|
[inductor] support dilation in max_pool2d lowering
|
isuruf
|
closed
|
[
"open source",
"Merged",
"ciflow/trunk",
"ciflow/mps",
"module: inductor",
"ciflow/inductor",
"release notes: inductor"
] | 9
|
COLLABORATOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #148209
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
| true
|
2,887,981,853
|
Regression: Missing Symbols in PyTorch DLL (torch_python)
|
wschin
|
closed
|
[
"module: cpp-extensions",
"module: cpp",
"triaged",
"actionable"
] | 2
|
COLLABORATOR
|
### 🐛 Describe the bug
We use some functions in python_arg_parser.h for our backend and those symbols are gone after #136743. In python_arg_parser.h, you will see inline implementations such as
```cpp
inline at::Tensor PythonArgs::tensor(int i) {
  if (args[i] && THPVariable_CheckExact(args[i])) {
    return THPVariable_Unpack(args[i]);
  }
  return tensor_slow(i);
}
```
so functions like `tensor_slow` need to be exposed by tagging them with `TORCH_PYTHON_API`. Alternatively, we can try to remove `inline` and move the implementation into the cpp file.
### Versions
latest main branch can repro.
cc @malfet @zou3519 @xmfan @jbschlosser
| true
|
2,887,958,210
|
Add option to shut down idle async_compile workers after timeout
|
jamesjwu
|
open
|
[
"triaged",
"actionable",
"oncall: pt2",
"module: inductor"
] | 0
|
CONTRIBUTOR
|
We've seen internally that running 32 compile threads across large jobs can lead to significant memory pressure over time, as the workers last for the entire training session. PT2 does not know when we may need to compile again, so we may need the workers at any point, but it should be possible to add a config option that kills workers after N minutes of inactivity. If we end up needing the workers again, we can always call `AsyncCompile.use_process_pool()` again to warm up a new set of workers.
Making this configurable can lead to CPU memory wins in large jobs.
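A minimal sketch of the general pattern (not PyTorch's actual AsyncCompile code; all names here are made up): reset an inactivity timer on every submission and tear the pool down when it fires.
```python
import threading
from concurrent.futures import ProcessPoolExecutor

class IdleShutdownPool:
    """Illustrative only: shut workers down after idle_timeout_s of inactivity."""

    def __init__(self, max_workers: int, idle_timeout_s: float):
        self._max_workers = max_workers
        self._timeout = idle_timeout_s
        self._pool = None
        self._timer = None
        self._lock = threading.Lock()

    def submit(self, fn, *args, **kwargs):
        with self._lock:
            if self._pool is None:
                # Lazily recreate workers if they were shut down while idle.
                self._pool = ProcessPoolExecutor(self._max_workers)
            if self._timer is not None:
                self._timer.cancel()
            self._timer = threading.Timer(self._timeout, self._shutdown_idle)
            self._timer.daemon = True
            self._timer.start()
            return self._pool.submit(fn, *args, **kwargs)

    def _shutdown_idle(self):
        with self._lock:
            if self._pool is not None:
                self._pool.shutdown(wait=False)
                self._pool = None
```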
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
| true
|
2,887,954,654
|
[cond] support output the same unbacked symbol from two branches
|
ydwu4
|
open
|
[
"ciflow/trunk",
"topic: not user facing",
"module: inductor",
"ciflow/inductor"
] | 7
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #148206
Previously, we didn't track the unbacked symbols leaked out of true_branch and false_branch if they had the same shape expr. This caused the fake output of the cond operator itself to not set up its unbacked_bindings meta properly (because they were ignored).
In this PR, we also check whether there are leaked unbacked symbols, create new unbacked symbols for them, and track them as outputs of cond.
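For context, a sketch of the kind of program this targets (illustrative only; both branches produce a data-dependent, i.e. unbacked, output size with the same shape expression):
```python
import torch

def true_fn(x):
    return x.nonzero()   # data-dependent output size -> unbacked symbol

def false_fn(x):
    return x.nonzero()   # same shape expression leaks from both branches

@torch.compile(fullgraph=True)
def f(pred, x):
    return torch.cond(pred, true_fn, false_fn, (x,))

# e.g. f(torch.tensor(True), torch.randn(8))
```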
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
| true
|
2,887,936,065
|
[dynamo] remove internal stack trace for fullgraph=True graph breaks
|
williamwen42
|
closed
|
[
"Merged",
"topic: not user facing",
"module: dynamo",
"ciflow/inductor",
"module: compile ux"
] | 4
|
MEMBER
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #148401
* #148220
* __->__ #148205
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @chauhang @amjames
| true
|
2,887,921,732
|
[debug] 'No available kernel' error for cudnn on A100
|
XilunWu
|
closed
|
[
"oncall: distributed",
"topic: not user facing"
] | 1
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #148357
* __->__ #148204
* #148125
cc @H-Huang @awgu @kwen2501 @wanchaol @fegin @fduwjj @wz337 @wconstab @d4l3k @c-p-i-o
| true
|
2,887,906,031
|
[fr] Added protection against missing stack frames in fr
|
VieEeEw
|
closed
|
[
"fb-exported",
"Merged",
"ciflow/trunk",
"topic: not user facing"
] | 9
|
CONTRIBUTOR
|
Summary: We have had quite a few failures due to this unprotected access. https://fburl.com/scuba/ai_rca_debug_tracing/qtnb63qf
Test Plan:
Reviewed By: fduwjj
Differential Revision: D70358287
| true
|
2,887,879,842
|
Move estimate runtime and pick loop order heuristics into choices.py
|
exclamaforte
|
open
|
[
"topic: not user facing",
"module: inductor",
"ciflow/inductor"
] | 1
|
CONTRIBUTOR
|
Just a warmup; there are several more of these in the scheduler that I'll move to choices in a follow-up PR.
Test Plan:
Existing tests should cover refactor.
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
| true
|
2,887,760,901
|
Compile breaks flex-attention with jagged tensors
|
lgienapp
|
open
|
[
"triaged",
"oncall: pt2",
"module: dynamo",
"module: higher order operators",
"module: pt2-dispatcher",
"module: flex attention"
] | 1
|
NONE
|
### 🐛 Describe the bug
I was playing around with the code from this tutorial: https://pytorch.org/tutorials/intermediate/transformer_building_blocks.html, but ran into the error below when combining jagged tensors, flex-attention, and torch.compile.
Error: `torch._dynamo.exc.InternalTorchDynamoError: AttributeError: 'NestedIntNode' object has no attribute 'expr'`
MWE:
```python
import torch
from torch.nn.attention.flex_attention import flex_attention
seqlen = 32
num_heads = 8
head_dim = 128
device = "cpu" # same on "cuda:0"
flex_attention_compiled = torch.compile(flex_attention)
q, k, v = torch.nested.nested_tensor(
    [torch.rand(length, num_heads, 3 * head_dim) for length in torch.randint(low=6, high=25, size=(seqlen,))],
    layout=torch.jagged,
    device=device,
).chunk(3, axis=-1)
flex_attention(q, k, v) # works
flex_attention_compiled(q, k, v) # fails
```
### Error logs
Stacktrace (with `TORCH_LOGS="+dynamo" TORCHDYNAMO_VERBOSE=1`):
```python
W0228 18:36:49.213000 6496 /.venv/lib/python3.12/site-packages/torch/fx/experimental/symbolic_shapes.py:6307] [1/0] failed during evaluate_expr(s1, hint=j1, size_oblivious=False, forcing_spec=False
E0228 18:36:49.213000 6496 /.venv/lib/python3.12/site-packages/torch/fx/experimental/recording.py:299] [1/0] failed while running evaluate_expr(*(s1, j1), **{'fx_node': False})
Traceback (most recent call last):
File "debug.py", line 18, in <module>
flex_attention_compiled(q, k, v) # fails
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/.venv/lib/python3.12/site-packages/torch/_dynamo/eval_frame.py", line 574, in _fn
return fn(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^
File "/.venv/lib/python3.12/site-packages/torch/_dynamo/convert_frame.py", line 1380, in __call__
return self._torchdynamo_orig_callable(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/.venv/lib/python3.12/site-packages/torch/_dynamo/convert_frame.py", line 1164, in __call__
result = self._inner_convert(
^^^^^^^^^^^^^^^^^^^^
File "/.venv/lib/python3.12/site-packages/torch/_dynamo/convert_frame.py", line 547, in __call__
return _compile(
^^^^^^^^^
File "/.venv/lib/python3.12/site-packages/torch/_dynamo/convert_frame.py", line 1036, in _compile
raise InternalTorchDynamoError(
File "/.venv/lib/python3.12/site-packages/torch/_dynamo/convert_frame.py", line 986, in _compile
guarded_code = compile_inner(code, one_graph, hooks, transform)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/.venv/lib/python3.12/site-packages/torch/_dynamo/convert_frame.py", line 715, in compile_inner
return _compile_inner(code, one_graph, hooks, transform)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/.venv/lib/python3.12/site-packages/torch/_utils_internal.py", line 95, in wrapper_function
return function(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^
File "/.venv/lib/python3.12/site-packages/torch/_dynamo/convert_frame.py", line 750, in _compile_inner
out_code = transform_code_object(code, transform)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/.venv/lib/python3.12/site-packages/torch/_dynamo/bytecode_transformation.py", line 1361, in transform_code_object
transformations(instructions, code_options)
File "/.venv/lib/python3.12/site-packages/torch/_dynamo/convert_frame.py", line 231, in _fn
return fn(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^
File "/.venv/lib/python3.12/site-packages/torch/_dynamo/convert_frame.py", line 662, in transform
tracer.run()
File "/.venv/lib/python3.12/site-packages/torch/_dynamo/symbolic_convert.py", line 2868, in run
super().run()
File "/.venv/lib/python3.12/site-packages/torch/_dynamo/symbolic_convert.py", line 1052, in run
while self.step():
^^^^^^^^^^^
File "/.venv/lib/python3.12/site-packages/torch/_dynamo/symbolic_convert.py", line 962, in step
self.dispatch_table[inst.opcode](self, inst)
File "/.venv/lib/python3.12/site-packages/torch/_dynamo/symbolic_convert.py", line 659, in wrapper
return inner_fn(self, inst)
^^^^^^^^^^^^^^^^^^^^
File "/.venv/lib/python3.12/site-packages/torch/_dynamo/symbolic_convert.py", line 2341, in CALL
self._call(inst)
File "/.venv/lib/python3.12/site-packages/torch/_dynamo/symbolic_convert.py", line 2335, in _call
self.call_function(fn, args, kwargs)
File "/.venv/lib/python3.12/site-packages/torch/_dynamo/symbolic_convert.py", line 897, in call_function
self.push(fn.call_function(self, args, kwargs)) # type: ignore[arg-type]
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/.venv/lib/python3.12/site-packages/torch/_dynamo/variables/functions.py", line 317, in call_function
return super().call_function(tx, args, kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/.venv/lib/python3.12/site-packages/torch/_dynamo/variables/functions.py", line 118, in call_function
return tx.inline_user_function_return(self, [*self.self_args(), *args], kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/.venv/lib/python3.12/site-packages/torch/_dynamo/symbolic_convert.py", line 903, in inline_user_function_return
return InliningInstructionTranslator.inline_call(self, fn, args, kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/.venv/lib/python3.12/site-packages/torch/_dynamo/symbolic_convert.py", line 3072, in inline_call
return cls.inline_call_(parent, func, args, kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/.venv/lib/python3.12/site-packages/torch/_dynamo/symbolic_convert.py", line 3198, in inline_call_
tracer.run()
File "/.venv/lib/python3.12/site-packages/torch/_dynamo/symbolic_convert.py", line 1052, in run
while self.step():
^^^^^^^^^^^
File "/.venv/lib/python3.12/site-packages/torch/_dynamo/symbolic_convert.py", line 962, in step
self.dispatch_table[inst.opcode](self, inst)
File "/.venv/lib/python3.12/site-packages/torch/_dynamo/symbolic_convert.py", line 659, in wrapper
return inner_fn(self, inst)
^^^^^^^^^^^^^^^^^^^^
File "/.venv/lib/python3.12/site-packages/torch/_dynamo/symbolic_convert.py", line 2341, in CALL
self._call(inst)
File "/.venv/lib/python3.12/site-packages/torch/_dynamo/symbolic_convert.py", line 2335, in _call
self.call_function(fn, args, kwargs)
File "/.venv/lib/python3.12/site-packages/torch/_dynamo/symbolic_convert.py", line 897, in call_function
self.push(fn.call_function(self, args, kwargs)) # type: ignore[arg-type]
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/.venv/lib/python3.12/site-packages/torch/_dynamo/variables/functions.py", line 317, in call_function
return super().call_function(tx, args, kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/.venv/lib/python3.12/site-packages/torch/_dynamo/variables/functions.py", line 118, in call_function
return tx.inline_user_function_return(self, [*self.self_args(), *args], kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/.venv/lib/python3.12/site-packages/torch/_dynamo/symbolic_convert.py", line 903, in inline_user_function_return
return InliningInstructionTranslator.inline_call(self, fn, args, kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/.venv/lib/python3.12/site-packages/torch/_dynamo/symbolic_convert.py", line 3072, in inline_call
return cls.inline_call_(parent, func, args, kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/.venv/lib/python3.12/site-packages/torch/_dynamo/symbolic_convert.py", line 3198, in inline_call_
tracer.run()
File "/.venv/lib/python3.12/site-packages/torch/_dynamo/symbolic_convert.py", line 1052, in run
while self.step():
^^^^^^^^^^^
File "/.venv/lib/python3.12/site-packages/torch/_dynamo/symbolic_convert.py", line 962, in step
self.dispatch_table[inst.opcode](self, inst)
File "/.venv/lib/python3.12/site-packages/torch/_dynamo/symbolic_convert.py", line 659, in wrapper
return inner_fn(self, inst)
^^^^^^^^^^^^^^^^^^^^
File "/.venv/lib/python3.12/site-packages/torch/_dynamo/symbolic_convert.py", line 2341, in CALL
self._call(inst)
File "/.venv/lib/python3.12/site-packages/torch/_dynamo/symbolic_convert.py", line 2335, in _call
self.call_function(fn, args, kwargs)
File "/.venv/lib/python3.12/site-packages/torch/_dynamo/symbolic_convert.py", line 897, in call_function
self.push(fn.call_function(self, args, kwargs)) # type: ignore[arg-type]
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/.venv/lib/python3.12/site-packages/torch/_dynamo/variables/misc.py", line 325, in call_function
func(ComptimeContext(tx))
File "/.venv/lib/python3.12/site-packages/torch/_dynamo/comptime.py", line 362, in <lambda>
comptime(lambda ctx: ctx.get_local("val").force_static())
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/.venv/lib/python3.12/site-packages/torch/_dynamo/comptime.py", line 113, in force_static
self.__variable.evaluate_expr()
File "/.venv/lib/python3.12/site-packages/torch/_dynamo/variables/tensor.py", line 1201, in evaluate_expr
return guard_scalar(self.sym_num)
^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/.venv/lib/python3.12/site-packages/torch/fx/experimental/symbolic_shapes.py", line 1213, in guard_scalar
return guard_int(a)
^^^^^^^^^^^^
File "/.venv/lib/python3.12/site-packages/torch/fx/experimental/symbolic_shapes.py", line 1405, in guard_int
return a.node.guard_int("", 0) # NB: uses Python backtrace
^^^^^^^^^^^^^^^^^^^^^^^
File "/.venv/lib/python3.12/site-packages/torch/fx/experimental/sym_node.py", line 492, in guard_int
r = self.shape_env.evaluate_expr(self.expr, self.hint, fx_node=self.fx_node)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/.venv/lib/python3.12/site-packages/torch/fx/experimental/recording.py", line 263, in wrapper
return retlog(fn(*args, **kwargs))
^^^^^^^^^^^^^^^^^^^
File "/.venv/lib/python3.12/site-packages/torch/fx/experimental/symbolic_shapes.py", line 6303, in evaluate_expr
return self._evaluate_expr(
^^^^^^^^^^^^^^^^^^^^
File "/.venv/lib/python3.12/site-packages/torch/fx/experimental/symbolic_shapes.py", line 6502, in _evaluate_expr
concrete_val = compute_concrete_val()
^^^^^^^^^^^^^^^^^^^^^^
File "/.venv/lib/python3.12/site-packages/torch/fx/experimental/symbolic_shapes.py", line 6347, in compute_concrete_val
return sympy.sympify(hint)
^^^^^^^^^^^^^^^^^^^
File "/.venv/lib/python3.12/site-packages/sympy/core/sympify.py", line 417, in sympify
return a._sympy_()
^^^^^^^^^^^
File "/.venv/lib/python3.12/site-packages/torch/__init__.py", line 576, in _sympy_
return self.node.expr
^^^^^^^^^^^^^^
torch._dynamo.exc.InternalTorchDynamoError: AttributeError: 'NestedIntNode' object has no attribute 'expr'
from user code:
File "/.venv/lib/python3.12/site-packages/torch/nn/attention/flex_attention.py", line 1320, in flex_attention
torch._dynamo.mark_static(x, -3)
File "/.venv/lib/python3.12/site-packages/torch/_dynamo/decorators.py", line 560, in mark_static
comptime.force_static(t.size(index))
File "/.venv/lib/python3.12/site-packages/torch/_dynamo/comptime.py", line 362, in force_static
comptime(lambda ctx: ctx.get_local("val").force_static())
```
### Versions
PyTorch version: 2.6.0+cu124
Is debug build: False
CUDA used to build PyTorch: 12.1
ROCM used to build PyTorch: N/A
OS: Ubuntu 22.04.4 LTS (x86_64)
GCC version: (Ubuntu 11.4.0-1ubuntu1~22.04) 11.4.0
Clang version: Could not collect
CMake version: version 3.30.5
Libc version: glibc-2.35
Python version: 3.11.10 | packaged by conda-forge | (main, Oct 16 2024, 01:27:36) [GCC 13.3.0] (64-bit runtime)
Python platform: Linux-6.8.0-1019-nvidia-x86_64-with-glibc2.35
Is CUDA available: True
CUDA runtime version: 12.4.131
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration: GPU 0: NVIDIA H100 NVL
Nvidia driver version: 550.127.08
cuDNN version: Could not collect
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 52 bits physical, 57 bits virtual
Byte Order: Little Endian
CPU(s): 128
On-line CPU(s) list: 0-127
Vendor ID: AuthenticAMD
Model name: AMD EPYC 9534 64-Core Processor
CPU family: 25
Model: 17
Thread(s) per core: 2
Core(s) per socket: 64
Socket(s): 1
Stepping: 1
Frequency boost: enabled
CPU max MHz: 3718.0659
CPU min MHz: 1500.0000
BogoMIPS: 4892.67
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ht syscall nx mmxext fxsr_opt pdpe1gb rdtscp lm constant_tsc rep_good amd_lbr_v2 nopl nonstop_tsc cpuid extd_apicid aperfmperf rapl pni pclmulqdq monitor ssse3 fma cx16 pcid sse4_1 sse4_2 x2apic movbe popcnt aes xsave avx f16c rdrand lahf_lm cmp_legacy svm extapic cr8_legacy abm sse4a misalignsse 3dnowprefetch osvw ibs skinit wdt tce topoext perfctr_core perfctr_nb bpext perfctr_llc mwaitx cpb cat_l3 cdp_l3 hw_pstate ssbd mba perfmon_v2 ibrs ibpb stibp ibrs_enhanced vmmcall fsgsbase bmi1 avx2 smep bmi2 erms invpcid cqm rdt_a avx512f avx512dq rdseed adx smap avx512ifma clflushopt clwb avx512cd sha_ni avx512bw avx512vl xsaveopt xsavec xgetbv1 xsaves cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local user_shstk avx512_bf16 clzero irperf xsaveerptr rdpru wbnoinvd amd_ppin cppc arat npt lbrv svm_lock nrip_save tsc_scale vmcb_clean flushbyasid decodeassists pausefilter pfthreshold avic v_vmsave_vmload vgif x2avic v_spec_ctrl vnmi avx512vbmi umip pku ospke avx512_vbmi2 gfni vaes vpclmulqdq avx512_vnni avx512_bitalg avx512_vpopcntdq la57 rdpid overflow_recov succor smca fsrm flush_l1d debug_swap
Virtualization: AMD-V
L1d cache: 2 MiB (64 instances)
L1i cache: 2 MiB (64 instances)
L2 cache: 64 MiB (64 instances)
L3 cache: 256 MiB (8 instances)
NUMA node(s): 1
NUMA node0 CPU(s): 0-127
Vulnerability Gather data sampling: Not affected
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Not affected
Vulnerability Reg file data sampling: Not affected
Vulnerability Retbleed: Not affected
Vulnerability Spec rstack overflow: Mitigation; Safe RET
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Enhanced / Automatic IBRS; IBPB conditional; STIBP always-on; RSB filling; PBRSB-eIBRS Not affected; BHI Not affected
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Versions of relevant libraries:
[conda] numpy 2.1.2 py311h71ddf71_0 conda-forge
[conda] nvidia-cublas-cu12 12.4.5.8 pypi_0 pypi
[conda] nvidia-cuda-cupti-cu12 12.4.127 pypi_0 pypi
[conda] nvidia-cuda-nvrtc-cu12 12.4.127 pypi_0 pypi
[conda] nvidia-cuda-runtime-cu12 12.4.127 pypi_0 pypi
[conda] nvidia-cudnn-cu12 9.1.0.70 pypi_0 pypi
[conda] nvidia-cufft-cu12 11.2.1.3 pypi_0 pypi
[conda] nvidia-curand-cu12 10.3.5.147 pypi_0 pypi
[conda] nvidia-cusolver-cu12 11.6.1.9 pypi_0 pypi
[conda] nvidia-cusparse-cu12 12.3.1.170 pypi_0 pypi
[conda] nvidia-cusparselt-cu12 0.6.2 pypi_0 pypi
[conda] nvidia-nccl-cu12 2.21.5 pypi_0 pypi
[conda] nvidia-nvjitlink-cu12 12.4.127 pypi_0 pypi
[conda] nvidia-nvtx-cu12 12.4.127 pypi_0 pypi
[conda] optree 0.13.0 pypi_0 pypi
[conda] torch 2.6.0 pypi_0 pypi
[conda] torchaudio 2.5.1+cu124 pypi_0 pypi
[conda] torchelastic 0.2.2 pypi_0 pypi
[conda] torchvision 0.20.1+cu124 pypi_0 pypi
[conda] triton 3.2.0 pypi_0 pypi
EDIT: updated the Versions output since it misreported results for the `uv`-based environment.
cc @chauhang @penguinwu @voznesenskym @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @amjames @zou3519 @ydwu4 @bdhirsh @Chillee @drisspg @yanboliang @BoyuanFeng
| true
|
2,887,741,487
|
Fix recompile reason logging
|
bobrenjc93
|
closed
|
[
"Merged",
"ciflow/trunk",
"topic: not user facing",
"module: dynamo",
"ciflow/inductor"
] | 3
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #148200
for the following test case
```
@torch.compile(dynamic=False, backend=cnts)
def fn(x, y, z):
return x * y * z[0]
fn(1, torch.randn(1), {0: torch.randn(1)})
fn(2, torch.randn(2), {0: torch.randn(2)})
fn(3, torch.randn(3), {0: torch.randn(3)})
fn(4, torch.randn(4), {0: torch.randn(4)})
fn(5, torch.randn(5), {0: torch.randn(5)})
```
previously we would log
```
0/0: L['x'] == 1
0/0: L['x'] == 1
0/0: L['x'] == 1
0/0: L['x'] == 1
```
but after this change we now log
```
0/0: L['x'] == 1
0/1: L['x'] == 2
0/2: L['x'] == 3
0/3: L['x'] == 4
```
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @chauhang @amjames
| true
|
2,887,715,749
|
[Inductor-CPU] qlinear_binary output may have undefined strides with dynamic shape support
|
sanchitintel
|
open
|
[
"triaged",
"oncall: pt2",
"module: dynamic shapes",
"module: dynamo",
"oncall: cpu inductor"
] | 1
|
COLLABORATOR
|
### 🐛 Describe the bug
When the `M` dimension of the `qlinear_binary` activation is 1 and dynamic shape support is enabled with `torch.compile`, the stride of the outermost dimension of the `qlinear_binary` op's output may be a symbolic value that is undefined. The issue may be related to Dynamo.
### Code to reproduce the issue
https://gist.github.com/leslie-fang-intel/f9686fa15b181d82294861aac1d5abad
### Error
NameError: name 's1' is not defined
### Versions
Main branch
cc @chauhang @penguinwu @ezyang @bobrenjc93 @voznesenskym @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @amjames
| true
|
2,887,706,842
|
[XPU] Fix graph partition tests
|
benjaminglass1
|
closed
|
[
"open source",
"ciflow/trunk",
"topic: not user facing",
"module: inductor",
"ciflow/inductor",
"ciflow/xpu"
] | 7
|
COLLABORATOR
|
These tests are currently broken in ciflow/xpu because they explicitly construct CUDA tensors.
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
| true
|
2,887,700,180
|
Enable oneDNN dispatch for gemm bf16bf16->bf16
|
aditew01
|
closed
|
[
"triaged",
"open source",
"module: arm",
"Merged",
"topic: not user facing",
"ciflow/binaries_wheel"
] | 11
|
COLLABORATOR
|
Currently, `linear` layers using BF16 are dispatched to OpenBLAS, provided that sbgemm_ is available.
However, profiling on AArch64 shows that dispatching to oneDNN results in a significant speedup. This PR updates the dispatch logic to leverage oneDNN for improved performance.
Attaching some benchmark results. Instance: Neoverse V1, on 16 threads.
<img width="482" alt="Screenshot 2025-02-28 at 17 18 38" src="https://github.com/user-attachments/assets/b84e7455-af6e-417f-920d-bdd2bec2e8f9" />
cc @malfet @snadampal @milpuz01
| true
|
2,887,691,103
|
[inductor][triton] Decide how to deprecate "old triton versions"
|
davidberard98
|
open
|
[
"triaged",
"oncall: pt2",
"module: inductor"
] | 0
|
CONTRIBUTOR
|
### 🚀 The feature, motivation and pitch
Right now we have a mess of at least 3 "versions" of Triton - i.e. commit ranges that we are compatible with.
Supporting multiple versions is beneficial for a few reasons:
* Ability to bisect old versions of Triton
* Compatibility with users who have different (i.e. old) versions of Triton installed - also fbcode/oss mismatches,
* Possibly other Triton forks for different hardware, which may be based off of old versions of Triton
But it has some downsides - mainly messy code trying to handle the various versions of Triton. Also, we don't test the old versions, so there's nothing ensuring that these old code paths are actually still correct. We should probably decide on a policy or a way to determine when we can clean up handling for an old version of Triton.
### Alternatives
_No response_
### Additional context
_No response_
cc @chauhang @penguinwu @voznesenskym @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @amjames @aakhundov
| true
|
2,887,674,334
|
ci: move xpu triton build to manylinux 2.28
|
chuanqi129
|
closed
|
[
"open source",
"Merged",
"ciflow/trunk",
"topic: not user facing"
] | 9
|
COLLABORATOR
|
Follows PR #148129 in removing the manylinux builds for triton xpu.
| true
|
2,887,655,430
|
PyTorch nightly MPS SDPA op is unusable
|
malfet
|
closed
|
[
"high priority",
"triage review",
"module: regression",
"module: mps"
] | 0
|
CONTRIBUTOR
|
### 🐛 Describe the bug
Attempting to run https://github.com/malfet/llm_experiments/blob/main/run_llama.py results in a crash:
```
% python run_llama.py --device mps
Loaded stories15M.pt in 0.20 seconds
Once upon a time/AppleInternal/Library/BuildRoots/d187755d-b9a3-11ef-83e5-aabfac210453/Library/Caches/com.apple.xbs/Sources/MetalPerformanceShadersGraph/mpsgraph/MetalPerformanceShadersGraph/Runtimes/MPSRuntime/MPSRuntime.mm:202: failed assertion `Unsupported MPS operation mps.placeholder'
zsh: abort python run_llama.py --device mps
```
And reverting https://github.com/pytorch/pytorch/pull/147545 fixes the problem
### Versions
nightly
cc @ezyang @gchanan @zou3519 @kadeng @msaroufim @kulinseth @albanD @DenisVieriu97 @jhavukainen
| true
|
2,887,558,434
|
[ONNX] torch.matmul() breaks dynamic shapes during export
|
morozovve
|
closed
|
[
"module: onnx",
"triaged",
"oncall: pt2",
"module: dynamic shapes"
] | 2
|
NONE
|
### 🐛 Describe the bug
When exporting a model that
1) contains a torch.matmul() operation on N-d matrices
2) has a dynamically-shaped input
3) needs shapes to be broadcast during matmul()
the exporter decides that the possible range of dimension sizes is just one number -- the one the input value has.
When broadcasting (unsqueeze + repeat) is performed manually and torch.bmm() is used instead, everything works as expected.
Here's a snippet:
```
import onnx
import torch
import torch.nn as nn
# x: (batch_size, N, 3, 3)
# y: (batch_size, 10_000, 3)
# output: (batch_size, N, 10_000, 3)
class MatMulModule(nn.Module):
'''
Version 1: using auto-broadcasting with torch.matmul().
Dynamic shapes are not working properly.
'''
def __init__(self):
super(MatMulModule, self).__init__()
def forward(self, x, y):
x = x[:, :, None, :, :]
y = y[:, None, :, :, None]
return torch.matmul(x, y)
class BatchMatMulModule(nn.Module):
'''
Version 2: using torch.bmm() with manual broadcasting.
Dynamic shapes are working properly.
'''
def __init__(self):
super(BatchMatMulModule, self).__init__()
def forward(self, x, y):
bs = x.shape[0]
N = x.shape[1]
N2 = y.shape[1]
# Manually broadcast & reshape for torch.bmm()
x = x[:, :, None, :, :].repeat(1, 1, N2, 1, 1).reshape(bs * N * N2, 3, 3)
y = y[:, None, :, :, None].repeat(1, N, 1, 1, 1).reshape(bs * N * N2, 3, 1)
return torch.bmm(x, y).reshape(bs, N, N2, 3)
# Create model instance
# model = BatchMatMulModule() # Works fine
model = MatMulModule() # Doesn't work
# Example inputs
x = torch.randn(1, 5, 3, 3)
y = torch.randn(1, 10_000, 3)
# Dynamic shapes
dyn_name = 'dim1'
dyn_min = 1
dyn_max = 10
dynamic_shapes = {}
dynamic_shapes['x'] = {1: torch.export.Dim(dyn_name, min=dyn_min, max=dyn_max)}
dynamic_shapes['y'] = {0: torch.export.Dim.STATIC}
# Export
ep = torch.export.export(model, args=tuple(), kwargs={'x': x, 'y': y}, dynamic_shapes=dynamic_shapes)
onnx_program = torch.onnx.export(
ep,
dynamo=True,
optimize=True,
)
onnx_program.save('matmul.onnx')
# Check input shapes
print("Input shapes for first arg:")
m = onnx.load('matmul.onnx')
print(m.graph.input[0])
```
During the export of `MatMulModule()` with the `TORCH_LOGS="+dynamic"` flag, the following lines appear:
```
[torch.onnx] Run decomposition...
torch/fx/experimental/symbolic_shapes.py:5802] _update_var_to_range s0 = VR[5, 5] (update)
torch/fx/experimental/symbolic_shapes.py:5963] set_replacement s0 = 5 (range_refined_to_singleton) VR[5, 5]
torch/fx/experimental/symbolic_shapes.py:6281] eval Eq(s0, 5) [guard added] (utils/_stats.py:21 in wrapper), for more info run with TORCHDYNAMO_EXTENDED_DEBUG_GUARD_ADDED="Eq(s0, 5)"
torch/fx/experimental/symbolic_shapes.py:6401] eval 5 [trivial]
```
At the end, the snippet shows how the first input is defined in the ONNX model; for `MatMulModule()` its shape is
```
shape {
dim {dim_value: 1}
dim {dim_param: "5"}
dim {dim_value: 3}
dim {dim_value: 3}
}
```
And for `BatchMatMulModule()` the shape is
```
shape {
dim {dim_value: 1}
dim {dim_param: "s0"}
dim {dim_value: 3}
dim {dim_value: 3}
}
```
which suggests that the latter didn't reduce the dynamic input range to just one number.
### Versions
Collecting environment information...
PyTorch version: 2.6.0+cu126
Is debug build: False
CUDA used to build PyTorch: 12.6
ROCM used to build PyTorch: N/A
OS: Ubuntu 22.04.5 LTS (x86_64)
GCC version: (Ubuntu 11.4.0-1ubuntu1~22.04) 11.4.0
Clang version: 14.0.0-1ubuntu1.1
CMake version: version 3.31.5
Libc version: glibc-2.35
Python version: 3.10.12 (main, Feb 4 2025, 14:57:36) [GCC 11.4.0] (64-bit runtime)
Python platform: Linux-6.8.0-52-generic-x86_64-with-glibc2.35
Is CUDA available: True
CUDA runtime version: 12.4.131
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration:
GPU 0: NVIDIA RTX A6000
GPU 1: NVIDIA RTX A6000
Nvidia driver version: 565.57.01
cuDNN version: Probably one of the following:
/usr/lib/x86_64-linux-gnu/libcudnn.so.8.9.7
/usr/lib/x86_64-linux-gnu/libcudnn_adv_infer.so.8.9.7
/usr/lib/x86_64-linux-gnu/libcudnn_adv_train.so.8.9.7
/usr/lib/x86_64-linux-gnu/libcudnn_cnn_infer.so.8.9.7
/usr/lib/x86_64-linux-gnu/libcudnn_cnn_train.so.8.9.7
/usr/lib/x86_64-linux-gnu/libcudnn_ops_infer.so.8.9.7
/usr/lib/x86_64-linux-gnu/libcudnn_ops_train.so.8.9.7
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 48 bits physical, 48 bits virtual
Byte Order: Little Endian
CPU(s): 128
On-line CPU(s) list: 0-127
Vendor ID: AuthenticAMD
Model name: AMD EPYC 7713P 64-Core Processor
CPU family: 25
Model: 1
Thread(s) per core: 2
Core(s) per socket: 64
Socket(s): 1
Stepping: 1
Frequency boost: enabled
CPU max MHz: 3720.7029
CPU min MHz: 1500.0000
BogoMIPS: 4000.00
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ht syscall nx mmxext fxsr_opt pdpe1gb rdtscp lm constant_tsc rep_good nopl nonstop_tsc cpuid extd_apicid aperfmperf rapl pni pclmulqdq monitor ssse3 fma cx16 pcid sse4_1 sse4_2 movbe popcnt aes xsave avx f16c rdrand lahf_lm cmp_legacy svm extapic cr8_legacy abm sse4a misalignsse 3dnowprefetch osvw ibs skinit wdt tce topoext perfctr_core perfctr_nb bpext perfctr_llc mwaitx cpb cat_l3 cdp_l3 hw_pstate ssbd mba ibrs ibpb stibp vmmcall fsgsbase bmi1 avx2 smep bmi2 invpcid cqm rdt_a rdseed adx smap clflushopt clwb sha_ni xsaveopt xsavec xgetbv1 xsaves cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local user_shstk clzero irperf xsaveerptr rdpru wbnoinvd amd_ppin brs arat npt lbrv svm_lock nrip_save tsc_scale vmcb_clean flushbyasid decodeassists pausefilter pfthreshold v_vmsave_vmload vgif v_spec_ctrl umip pku ospke vaes vpclmulqdq rdpid overflow_recov succor smca debug_swap
Virtualization: AMD-V
L1d cache: 2 MiB (64 instances)
L1i cache: 2 MiB (64 instances)
L2 cache: 32 MiB (64 instances)
L3 cache: 256 MiB (8 instances)
NUMA node(s): 1
NUMA node0 CPU(s): 0-127
Vulnerability Gather data sampling: Not affected
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Not affected
Vulnerability Reg file data sampling: Not affected
Vulnerability Retbleed: Not affected
Vulnerability Spec rstack overflow: Mitigation; Safe RET
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Retpolines; IBPB conditional; IBRS_FW; STIBP always-on; RSB filling; PBRSB-eIBRS Not affected; BHI Not affected
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Versions of relevant libraries:
[pip3] ament-flake8==0.14.4
[pip3] flake8==6.0.0
[pip3] mypy-extensions==1.0.0
[pip3] numpy==1.26.4
[pip3] nvidia-cublas-cu12==12.6.4.1
[pip3] nvidia-cuda-cupti-cu12==12.6.80
[pip3] nvidia-cuda-nvrtc-cu12==12.6.77
[pip3] nvidia-cuda-runtime-cu12==12.6.77
[pip3] nvidia-cudnn-cu12==9.5.1.17
[pip3] nvidia-cufft-cu12==11.3.0.4
[pip3] nvidia-curand-cu12==10.3.7.77
[pip3] nvidia-cusolver-cu12==11.7.1.2
[pip3] nvidia-cusparse-cu12==12.5.4.2
[pip3] nvidia-cusparselt-cu12==0.6.3
[pip3] nvidia-nccl-cu12==2.21.5
[pip3] nvidia-nvjitlink-cu12==12.6.85
[pip3] nvidia-nvtx-cu12==12.6.77
[pip3] onnx==1.16.2
[pip3] onnxruntime==1.20.1
[pip3] onnxruntime_extensions==0.13.0
[pip3] onnxscript==0.2.0
[pip3] pytorch-triton==3.0.0+45fff310c8
[pip3] torch==2.6.0+cu126
[pip3] torchaudio==2.6.0+cu126
[pip3] torchvision==0.21.0+cu126
[pip3] triton==3.2.0
[conda] Could not collect
cc @chauhang @penguinwu @ezyang @bobrenjc93
| true
|
2,887,549,991
|
Add cache size to error message
|
kevmo314
|
closed
|
[
"triaged",
"open source",
"module: dynamo",
"release notes: dynamo"
] | 5
|
CONTRIBUTOR
|
Adds the configured limit to the cache size limit error message.
Right now, when the limit is hit, the message only tells you that the cache size limit has been reached, not what the limit is. That's not too helpful if you want to bump the limit.
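For context, a hypothetical sketch (not part of this PR) of how a user bumps the limit once the message reports its value:
```python
import torch
import torch._dynamo

# Hypothetical sketch: raise the recompile cache size limit before compiling,
# using the value now reported in the error message as a starting point.
torch._dynamo.config.cache_size_limit = 64

@torch.compile
def f(x):
    return x * 2

print(f(torch.randn(3)))
```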
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @chauhang @amjames
| true
|
2,887,545,435
|
[BE][Ez]: Use itertools.chain.from_iterable when possible
|
Skylion007
|
closed
|
[
"oncall: distributed",
"triaged",
"open source",
"better-engineering",
"Merged",
"ciflow/trunk",
"release notes: quantization",
"topic: not user facing",
"fx",
"module: inductor",
"module: dynamo",
"ciflow/inductor",
"release notes: AO frontend"
] | 3
|
COLLABORATOR
|
This often makes the code more readable and more efficient, and adds support for infinite iterables.
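For reference, a minimal sketch of the difference (not taken from this PR's diff):
```python
import itertools

nested = [[1, 2], [3, 4], [5]]

# chain(*nested) unpacks the outer list eagerly, which cannot work for
# infinite iterables; chain.from_iterable consumes it lazily instead.
flat = list(itertools.chain.from_iterable(nested))
print(flat)  # [1, 2, 3, 4, 5]
```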
cc @H-Huang @awgu @kwen2501 @wanchaol @fegin @fduwjj @wz337 @wconstab @d4l3k @c-p-i-o @ezyang @SherlockNoMad @EikanWang @jgong5 @wenzhe-nrv @voznesenskym @penguinwu @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
| true
|
2,887,492,014
|
Support huggingface reading and writing for multi rank case
|
ankitageorge
|
closed
|
[
"oncall: distributed",
"fb-exported",
"Merged",
"ciflow/trunk",
"topic: not user facing",
"release notes: distributed (checkpoint)",
"oncall: distributed checkpointing"
] | 12
|
CONTRIBUTOR
|
Summary: This diff adds the ability for HF reader/writer to read/write in a distributed way. We do this by sending all the tensors meant for the same file to the same rank.
Test Plan:
ensure existing tests pass
I also ran a full end-to-end test on my devserver to read/write from my HF repo.
Differential Revision: D70096439
cc @H-Huang @awgu @kwen2501 @wanchaol @fegin @fduwjj @wz337 @wconstab @d4l3k @c-p-i-o @LucasLLC @MeetVadakkanchery @mhorowitz @pradeepfn @ekr0
| true
|
2,887,474,790
|
[inductor][cpu] Fix error with FlexibleLayout weights in BMM
|
frost-intel
|
closed
|
[
"triaged",
"open source",
"Merged",
"ciflow/trunk",
"topic: not user facing",
"module: inductor"
] | 21
|
COLLABORATOR
|
Fixes #148074
When node A is reshaped (is a `ReinterpretView`) and node B has a `FlexibleLayout`, then the layout of node B *may* be changed during the `kernel.select(options["W"], 0, self.b_index)` call, which could cause the assertion in `kernel.select` to fail.
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
| true
|
2,887,470,990
|
[MPS][BE][EZ] Aggregate macros
|
malfet
|
closed
|
[
"Merged",
"topic: not user facing",
"release notes: mps",
"ciflow/mps"
] | 3
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #148211
* __->__ #148187
* #148154
Refactor `INSTANTIATE_UPSAMPLE_BILINEAR2D(DTYPE)`, `INSTANTIATE_UPSAMPLE_BICUBIC2D(DTYPE)` and `INSTANTIATE_UPSAMPLE_BILINEAR2DAA(DTYPE)` to use a common `INSTANTIATE_UPSAMPLE2D`.
Then combine the multiple invocations into `INSTANTIATE_UPSAMPLE_ALL`.
I.e. functionally it's a no-op, but it achieves the same result with fewer lines of code.
| true
|
2,887,464,637
|
[BE][PYFMT] migrate PYFMT for `test/inductor/` to `ruff format`
|
XuehaiPan
|
open
|
[
"open source",
"topic: not user facing",
"module: inductor",
"ciflow/inductor"
] | 1
|
COLLABORATOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #144556
* __->__ #148186
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
| true
|
2,887,463,917
|
[BE][PYFMT] migrate PYFMT for `torch/ao/` to `ruff format`
|
XuehaiPan
|
open
|
[
"open source",
"Stale",
"release notes: quantization",
"topic: not user facing",
"fx",
"release notes: AO frontend",
"no-stale"
] | 2
|
COLLABORATOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #148185
cc @ezyang @SherlockNoMad @EikanWang @jgong5 @wenzhe-nrv
| true
|
2,887,448,318
|
Add cuda 11.8 guard for cufile preload
|
atalman
|
closed
|
[
"Merged",
"ciflow/trunk",
"topic: not user facing"
] | 3
|
CONTRIBUTOR
|
Follow up after https://github.com/pytorch/pytorch/pull/148137
Make sure we don't try to load cufile on CUDA 11.8
Test:
```
>>> import torch
/usr/local/lib64/python3.9/site-packages/torch/_subclasses/functional_tensor.py:276: UserWarning: Failed to initialize NumPy: No module named 'numpy' (Triggered internally at /pytorch/torch/csrc/utils/tensor_numpy.cpp:81.)
cpu = _conversion_method_template(device=torch.device("cpu"))
>>> torch.__version__
'2.7.0.dev20250227+cu118'
>>>
```
| true
|
2,887,030,399
|
Implement batching rule for masked_fill_
|
LeanderK
|
open
|
[
"triaged",
"module: functorch"
] | 0
|
NONE
|
### 🚀 The feature, motivation and pitch
Very simple: I need a batching rule for masked_fill_, and the warning encourages me to file an issue:
UserWarning: There is a performance drop because we have not yet implemented the batching rule for aten::masked_fill_.Tensor. Please file us an issue on GitHub so that we can prioritize its implementation. (Triggered internally at ../aten/src/ATen/functorch/BatchedFallback.cpp:81.)
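Below is a hypothetical minimal sketch (not from my code) of the kind of `torch.vmap` call that triggers this warning:
```python
import torch

x = torch.randn(4, 3)
mask = torch.zeros(4, 3, dtype=torch.bool)
mask[:, 0] = True
fill_value = torch.tensor(0.0)

def fill_row(row, m):
    # In-place masked_fill_ on a batched tensor hits the slow vmap fallback
    # because aten::masked_fill_.Tensor has no batching rule yet.
    return row.clone().masked_fill_(m, fill_value)

out = torch.vmap(fill_row)(x, mask)
print(out.shape)  # torch.Size([4, 3])
```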
cc @zou3519 @Chillee @samdow @kshitij12345
| true
|
2,886,998,885
|
[Profiler] Add profiler activity for HPU devices
|
wdziurdz
|
closed
|
[
"open source",
"Merged",
"ciflow/trunk",
"topic: not user facing"
] | 5
|
CONTRIBUTOR
|
Fixes #148181
| true
|
2,886,998,175
|
[Profiler] Add profiler activity for HPU devices
|
wdziurdz
|
closed
|
[
"feature",
"oncall: profiler"
] | 1
|
CONTRIBUTOR
|
### 🚀 The feature, motivation and pitch
It is necessary to separate HPU devices from other devices in order to profile them correctly. Sometimes only traces from HPU devices need to be collected; without this capability, profiling becomes very difficult.
### Alternatives
_No response_
### Additional context
_No response_
cc @robieta @chaekit @guotuofeng @guyang3532 @dzhulgakov @davidberard98 @briancoutinho @sraikund16 @sanrise
| true
|
2,886,832,964
|
[pytree] add another simplified pytree module `torch.pytree`
|
XuehaiPan
|
open
|
[
"open source",
"ciflow/trunk",
"topic: not user facing",
"module: pytree",
"ci-test-showlocals"
] | 1
|
COLLABORATOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #148328
* __->__ #148180
* #137400
* #152624
Differences between `torch.pytree` and `torch.utils.pytree`:
1. APIs in `torch.utils.pytree` have a `tree_` prefix:
```python
leaves, treespec = torch.utils.pytree.tree_flatten(tree)
new_tree = torch.utils.pytree.tree_map(func, tree)
leaves, treespec = torch.pytree.flatten(tree)
new_tree = torch.pytree.map(func, tree)
```
This is similar to the JAX pytree API: `jax.tree_util.tree_*` vs. `jax.tree.*`.
2. The argument order of `unflatten` is reversed for better `functools.partial` support:
```python
tree = torch.utils.pytree.tree_unflatten(leaves, treespec)
tree = torch.pytree.unflatten(treespec, leaves)
unflatten_fn = functools.partial(torch.pytree.unflatten, treespec)
tree1 = unflatten_fn(leaves1)
tree2 = unflatten_fn(leaves2)
```
This is also aligned with the JAX pytree API: `jax.tree.unflatten(treedef, leaves)`.
Because we are adding a completely new module, there are no BC issues.
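For reference, a runnable sketch using today's private `torch.utils._pytree` module, whose operations the proposed `torch.pytree` would expose under shorter names:
```python
import torch.utils._pytree as pytree  # existing private module

tree = {"a": [1, 2], "b": (3, 4)}
leaves, treespec = pytree.tree_flatten(tree)   # leaves: [1, 2, 3, 4]
new_tree = pytree.tree_unflatten([x * 10 for x in leaves], treespec)
print(new_tree)  # {'a': [10, 20], 'b': (30, 40)}
```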
cc @zou3519
| true
|
2,886,599,403
|
Dynamo failure on handling list comparisons
|
CaoE
|
closed
|
[
"high priority",
"triaged",
"oncall: pt2",
"module: dynamo",
"dynamo-polyfill"
] | 3
|
COLLABORATOR
|
### 🐛 Describe the bug
This should be caused by https://github.com/pytorch/pytorch/pull/144485.
When Dynamo processes a list comparison via `torch.export.export_for_training`:
`if m.f != -1: ` https://github.com/WongKinYiu/yolov7/blob/main/models/yolo.py#L604
Possible values of `m.f`: [-1, -2, -3, -4, -5, -6] or -1 or 75, etc.
It seems that when `m.f` is a list, the following error occurs:
```
...
File "pytorchs/pytorch/torch/export/_trace.py", line 695, in _export_to_torch_ir
gm_torch_level, _ = torch._dynamo.export(
File "pytorchs/pytorch/torch/_dynamo/eval_frame.py", line 1579, in inner
result_traced = opt_f(*args, **kwargs)
File "pytorchs/pytorch/torch/nn/modules/module.py", line 1751, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
File "pytorchs/pytorch/torch/nn/modules/module.py", line 1762, in _call_impl
return forward_call(*args, **kwargs)
File "pytorchs/pytorch/torch/_dynamo/eval_frame.py", line 570, in _fn
return fn(*args, **kwargs)
File "pytorchs/pytorch/torch/nn/modules/module.py", line 1751, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
File "pytorchs/pytorch/torch/nn/modules/module.py", line 1762, in _call_impl
return forward_call(*args, **kwargs)
File "pytorchs/pytorch/torch/_dynamo/convert_frame.py", line 1365, in __call__
return self._torchdynamo_orig_callable(
File "pytorchs/pytorch/torch/_dynamo/convert_frame.py", line 564, in __call__
return _compile(
File "pytorchs/pytorch/torch/_dynamo/convert_frame.py", line 993, in _compile
guarded_code = compile_inner(code, one_graph, hooks, transform)
File "pytorchs/pytorch/torch/_utils_internal.py", line 95, in wrapper_function
return function(*args, **kwargs)
File "pytorchs/pytorch/torch/_dynamo/convert_frame.py", line 725, in compile_inner
return _compile_inner(code, one_graph, hooks, transform)
File "pytorchs/pytorch/torch/_dynamo/convert_frame.py", line 759, in _compile_inner
out_code = transform_code_object(code, transform)
File "pytorchs/pytorch/torch/_dynamo/bytecode_transformation.py", line 1404, in transform_code_object
transformations(instructions, code_options)
File "pytorchs/pytorch/torch/_dynamo/convert_frame.py", line 235, in _fn
return fn(*args, **kwargs)
File "pytorchs/pytorch/torch/_dynamo/convert_frame.py", line 679, in transform
tracer.run()
File "pytorchs/pytorch/torch/_dynamo/symbolic_convert.py", line 2925, in run
super().run()
File "pytorchs/pytorch/torch/_dynamo/symbolic_convert.py", line 1078, in run
while self.step():
File "pytorchs/pytorch/torch/_dynamo/symbolic_convert.py", line 988, in step
self.dispatch_table[inst.opcode](self, inst)
File "pytorchs/pytorch/torch/_dynamo/symbolic_convert.py", line 685, in wrapper
return inner_fn(self, inst)
File "pytorchs/pytorch/torch/_dynamo/symbolic_convert.py", line 1706, in CALL_FUNCTION
self.call_function(fn, args, {})
File "pytorchs/pytorch/torch/_dynamo/symbolic_convert.py", line 923, in call_function
self.push(fn.call_function(self, args, kwargs)) # type: ignore[arg-type]
File "pytorchs/pytorch/torch/_dynamo/variables/functions.py", line 496, in call_function
return super().call_function(tx, args, kwargs)
File "pytorchs/pytorch/torch/_dynamo/variables/functions.py", line 347, in call_function
return super().call_function(tx, args, kwargs)
File "pytorchs/pytorch/torch/_dynamo/variables/functions.py", line 149, in call_function
return tx.inline_user_function_return(self, [*self.self_args(), *args], kwargs)
File "pytorchs/pytorch/torch/_dynamo/symbolic_convert.py", line 929, in inline_user_function_return
return InliningInstructionTranslator.inline_call(self, fn, args, kwargs)
File "pytorchs/pytorch/torch/_dynamo/symbolic_convert.py", line 3131, in inline_call
return tracer.inline_call_()
File "pytorchs/pytorch/torch/_dynamo/symbolic_convert.py", line 3268, in inline_call_
self.run()
File "pytorchs/pytorch/torch/_dynamo/symbolic_convert.py", line 1078, in run
while self.step():
File "pytorchs/pytorch/torch/_dynamo/symbolic_convert.py", line 988, in step
self.dispatch_table[inst.opcode](self, inst)
File "pytorchs/pytorch/torch/_dynamo/symbolic_convert.py", line 1697, in COMPARE_OP
self.push(compare_op_handlers[inst.argval](self, self.popn(2), {}))
File "pytorchs/pytorch/torch/_dynamo/variables/builtin.py", line 1070, in call_function
return handler(tx, args, kwargs)
File "pytorchs/pytorch/torch/_dynamo/variables/builtin.py", line 907, in builtin_dispatch
rv = fn(tx, args, kwargs)
File "pytorchs/pytorch/torch/_dynamo/variables/builtin.py", line 811, in <lambda>
handlers.append(lambda tx, args, _: binop_handler(tx, *args))
File "pytorchs/pytorch/torch/_dynamo/variables/builtin.py", line 558, in handler
return tx.inline_user_function_return(
File "pytorchs/pytorch/torch/_dynamo/symbolic_convert.py", line 929, in inline_user_function_return
return InliningInstructionTranslator.inline_call(self, fn, args, kwargs)
File "pytorchs/pytorch/torch/_dynamo/symbolic_convert.py", line 3131, in inline_call
return tracer.inline_call_()
File "pytorchs/pytorch/torch/_dynamo/symbolic_convert.py", line 3268, in inline_call_
self.run()
File "pytorchs/pytorch/torch/_dynamo/symbolic_convert.py", line 1078, in run
while self.step():
File "pytorchs/pytorch/torch/_dynamo/symbolic_convert.py", line 988, in step
self.dispatch_table[inst.opcode](self, inst)
File "pytorchs/pytorch/torch/_dynamo/symbolic_convert.py", line 685, in wrapper
return inner_fn(self, inst)
File "pytorchs/pytorch/torch/_dynamo/symbolic_convert.py", line 1706, in CALL_FUNCTION
self.call_function(fn, args, {})
File "pytorchs/pytorch/torch/_dynamo/symbolic_convert.py", line 923, in call_function
self.push(fn.call_function(self, args, kwargs)) # type: ignore[arg-type]
File "pytorchs/pytorch/torch/_dynamo/variables/functions.py", line 347, in call_function
return super().call_function(tx, args, kwargs)
File "pytorchs/pytorch/torch/_dynamo/variables/functions.py", line 149, in call_function
return tx.inline_user_function_return(self, [*self.self_args(), *args], kwargs)
File "pytorchs/pytorch/torch/_dynamo/symbolic_convert.py", line 929, in inline_user_function_return
return InliningInstructionTranslator.inline_call(self, fn, args, kwargs)
File "pytorchs/pytorch/torch/_dynamo/symbolic_convert.py", line 3131, in inline_call
return tracer.inline_call_()
File "pytorchs/pytorch/torch/_dynamo/symbolic_convert.py", line 3268, in inline_call_
self.run()
File "pytorchs/pytorch/torch/_dynamo/symbolic_convert.py", line 1078, in run
while self.step():
File "pytorchs/pytorch/torch/_dynamo/symbolic_convert.py", line 988, in step
self.dispatch_table[inst.opcode](self, inst)
File "pytorchs/pytorch/torch/_dynamo/symbolic_convert.py", line 685, in wrapper
return inner_fn(self, inst)
File "pytorchs/pytorch/torch/_dynamo/symbolic_convert.py", line 1706, in CALL_FUNCTION
self.call_function(fn, args, {})
File "pytorchs/pytorch/torch/_dynamo/symbolic_convert.py", line 923, in call_function
self.push(fn.call_function(self, args, kwargs)) # type: ignore[arg-type]
File "pytorchs/pytorch/torch/_dynamo/variables/misc.py", line 765, in call_function
return self.obj.call_method(tx, self.name, args, kwargs)
File "pytorchs/pytorch/torch/_dynamo/variables/lists.py", line 480, in call_method
return super().call_method(tx, name, args, kwargs)
File "pytorchs/pytorch/torch/_dynamo/variables/lists.py", line 400, in call_method
return super().call_method(tx, name, args, kwargs)
File "pytorchs/pytorch/torch/_dynamo/variables/lists.py", line 147, in call_method
return variables.UserFunctionVariable(polyfills.list_cmp).call_function(
File "pytorchs/pytorch/torch/_dynamo/variables/functions.py", line 347, in call_function
return super().call_function(tx, args, kwargs)
File "pytorchs/pytorch/torch/_dynamo/variables/functions.py", line 149, in call_function
return tx.inline_user_function_return(self, [*self.self_args(), *args], kwargs)
File "pytorchs/pytorch/torch/_dynamo/symbolic_convert.py", line 929, in inline_user_function_return
return InliningInstructionTranslator.inline_call(self, fn, args, kwargs)
File "pytorchs/pytorch/torch/_dynamo/symbolic_convert.py", line 3131, in inline_call
return tracer.inline_call_()
File "pytorchs/pytorch/torch/_dynamo/symbolic_convert.py", line 3268, in inline_call_
self.run()
File "pytorchs/pytorch/torch/_dynamo/symbolic_convert.py", line 1078, in run
while self.step():
File "pytorchs/pytorch/torch/_dynamo/symbolic_convert.py", line 988, in step
self.dispatch_table[inst.opcode](self, inst)
File "pytorchs/pytorch/torch/_dynamo/symbolic_convert.py", line 1442, in FOR_ITER
val = it.next_variable(self)
File "pytorchs/pytorch/torch/_dynamo/variables/iter.py", line 385, in next_variable
args.append(get_item(it))
File "pytorchs/pytorch/torch/_dynamo/variables/iter.py", line 381, in get_item
return it.next_variable(tx)
File "pytorchs/pytorch/torch/_dynamo/variables/base.py", line 456, in next_variable
unimplemented(f"next({self})")
File "pytorchs/pytorch/torch/_dynamo/exc.py", line 380, in unimplemented
raise Unsupported(msg, case_name=case_name)
torch._dynamo.exc.Unsupported: next(ConstantVariable(int: -1))
from user code:
File "utils/model_zoo/models_v2/pytorch/yolov7/inference/cpu/yolov7/models/yolo.py", line 610, in forward
return self.forward_once(x, profile) # single-scale inference, train
File "utils/model_zoo/models_v2/pytorch/yolov7/inference/cpu/yolov7/models/yolo.py", line 615, in forward_once
if m.f != -1: # if not from previous layer
File "pytorchs/pytorch/torch/_dynamo/polyfills/__init__.py", line 244, in cmp_ne
return not cmp_eq(a, b)
File "pytorchs/pytorch/torch/_dynamo/polyfills/__init__.py", line 234, in cmp_eq
result = a.__eq__(b)
File "pytorchs/pytorch/torch/_dynamo/polyfills/__init__.py", line 89, in list_cmp
for a, b in zip(left, right):
```
### Reproducer
```
import torch
class Model(torch.nn.Module):
def __init__(self, K, N):
super().__init__()
self.linear = torch.nn.Linear(K, N)
def forward(self, input):
inputs = []
mf = [1, 2]
if mf != -1:
inputs.append(input)
return self.linear(inputs[0])
if __name__ == "__main__":
with torch.no_grad():
M = 1024
K = 1024
N = 1024
input = torch.randn(M, K)
m = Model(K, N).eval()
example_inputs = (input,)
exported_model = torch.export.export_for_training(
m,
example_inputs,
)
c_m = exported_model.module()
res = c_m(input)
```
### Versions
latest torch.
cc @ezyang @gchanan @zou3519 @kadeng @msaroufim @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @chauhang @amjames
| true
|
2,886,596,580
|
[Break XPU][Inductor] Generalize device-bias code and fix test_graph_partition for XPU
|
etaf
|
closed
|
[
"open source",
"Merged",
"ciflow/trunk",
"topic: not user facing",
"module: inductor",
"ciflow/inductor",
"keep-going",
"ciflow/xpu"
] | 6
|
COLLABORATOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #147727
* __->__ #148178
* #148155
This PR generalizes the device-bias code introduced by #147038 and aligns the behavior between XPU and CUDA on the add + mm + pointwise pattern (for XPU, from addmm + pointwise to mm + fused_add_pointwise), which fixes the failing test case `test_graph_partiton` on XPU.
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
| true
|
2,886,578,173
|
[Dynamo] Replace `unimplemented` with `unimplemented_v2` in `torch/_dynamo/variables/base.py`
|
shink
|
closed
|
[
"triaged",
"open source",
"Merged",
"ciflow/trunk",
"topic: not user facing",
"module: dynamo"
] | 11
|
CONTRIBUTOR
|
Part of #147913
Replace `unimplemented` with `unimplemented_v2` in `torch/_dynamo/variables/base.py`
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @chauhang @amjames
| true
|
2,886,560,117
|
Fix addbmm & addmv & baddbmm out dtype check
|
FFFrog
|
closed
|
[
"open source",
"Merged",
"ciflow/trunk",
"topic: not user facing",
"ciflow/inductor"
] | 9
|
COLLABORATOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #148176
* #148174
----
- torch.addbmm
- torch.addmv
- torch.baddbmm
Related issue:
https://github.com/pytorch/pytorch/issues/138399
| true
|
2,886,512,098
|
The unevenness of torch.randint() during large range(3e9) sampling.
|
nerymrjr
|
closed
|
[
"module: random"
] | 1
|
NONE
|
### 🐛 Describe the bug
When the sample range is around 3,000,000,000, the output of torch.randint() becomes very unevenly distributed. Specifically, the probability of a sample falling in the first 2/5 of the range is roughly twice that of it falling in the last 1/2:
```python
>>> from collections import defaultdict
>>> n = 3000000000
>>> size_ = 1000000
>>> x = torch.randint(0, n, (size_,), dtype=torch.int64)
>>> d = defaultdict(int)
>>> for ix in x.tolist():
... l = ix // (n // 10)
... d[l] += 1
...
>>> print(d)
defaultdict(<class 'int'>, {4: 92172, 7: 69152, 9: 69731, 1: 139636, 2: 140027, 6: 70105, 3: 139854, 8: 70072, 5: 69788, 0: 139463})
```
However, when the sample range is increased further to 10,000,000,000, this phenomenon disappears:
```python
>>> from collections import defaultdict
>>> n = 10000000000
>>> size_ = 1000000
>>> x = torch.randint(0, n, (size_,), dtype=torch.int64)
>>> d = defaultdict(int)
>>>
>>> for ix in x.tolist():
... l = ix // (n // 10)
... d[l] += 1
...
>>> print(d)
defaultdict(<class 'int'>, {1: 99160, 8: 100007, 3: 99880, 7: 100033, 5: 99780, 4: 100346, 0: 100200, 6: 100093, 9: 100563, 2: 99938})
```
### Versions
[pip3] numpy==2.0.2
[pip3] optree==0.13.1
[pip3] torch==2.6.0
[pip3] triton==3.2.0
cc @pbelevich
| true
|
2,886,510,116
|
Fix torch.matmul related out dtype check
|
FFFrog
|
closed
|
[
"open source",
"Merged",
"ciflow/trunk",
"topic: not user facing",
"ciflow/inductor"
] | 7
|
COLLABORATOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #148174
----
- torch.matmul -> CompositeImplicitAutograd -> dot_out (when left_dim == 1 & right_dim == 1)
-> mv_out (when left_dim == 2 & right_dim == 1)
-> mm_out (when left_dim == 1 & right_dim == 2)
-> ...
- torch.dot
- torch.vdot
- torch.mm
- torch.mv
Related issue:
https://github.com/pytorch/pytorch/issues/138399
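For illustration, a hypothetical sketch of the kind of `out=` dtype mismatch these checks are meant to reject (the exact error text and behavior depend on the fix):
```python
import torch

a = torch.randn(2, 3)
b = torch.randn(3, 4)
# `out` deliberately has a dtype that cannot hold the float result.
out = torch.empty(2, 4, dtype=torch.int64)
try:
    torch.mm(a, b, out=out)
except RuntimeError as e:
    print(e)  # expected: a dtype-mismatch error rather than silent misbehavior
```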
| true
|
2,886,505,615
|
[Don't merge]Upgrade submodule oneDNN to v3.7 (#147498)(ZI)
|
xuhancn
|
open
|
[
"module: mkldnn",
"open source",
"Stale",
"ciflow/binaries",
"ciflow/trunk",
"topic: not user facing",
"module: inductor",
"ciflow/inductor",
"ci-no-td",
"ciflow/linux-aarch64"
] | 2
|
COLLABORATOR
|
Fixes #ISSUE_NUMBER
cc @gujinghui @PenghuiCheng @XiaobingSuper @jianyuh @jgong5 @mingfeima @sanchitintel @ashokei @jingxu10 @min-jean-cho @yanbing-j @Guobing-Chen @Xia-Weiwen @snadampal @voznesenskym @penguinwu @EikanWang @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
| true
|
2,886,472,245
|
[Inductor] Layout created with non-sympy.Expr sizes
|
DDEle
|
open
|
[
"triaged",
"oncall: pt2"
] | 2
|
CONTRIBUTOR
|
### 🐛 Describe the bug
It seems that many variables type-hinted as `sympy.Expr` are actually not `sympy.Expr` in torch inductor. To expose the problem easily, just add a runtime assert like the one below:
```diff
diff --git a/torch/_inductor/ir.py b/torch/_inductor/ir.py
index 17f896d8f1c..8fc703fb2d5 100644
--- a/torch/_inductor/ir.py
+++ b/torch/_inductor/ir.py
@@ -3194,6 +3194,9 @@ class Layout(OutputSpec):
stride: Optional[list[Expr]] = None,
offset: Expr = Integer(0),
) -> None:
+ if not all(isinstance(s, sympy.Expr) for s in size):
+ print("!")
+ assert False
if stride is None:
stride = FlexibleLayout.contiguous_strides(size)
self.device = device
```
Then run some random inductor UTs, e.g. `pytest -vs test/inductor/test_torchinductor_opinfo.py` (Ctrl-C after ~20 min to save time...), and I got
```
282 failed, 2726 passed, 540 skipped, 45 xfailed in 1270.44s (0:21:10)
```
I think the problem is in generic Inductor code rather than being CPU-specific, since it happens at a very early stage.
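For illustration (not taken from the repro), the mismatch the assert catches is plain Python ints appearing where `sympy.Expr` (e.g. `sympy.Integer`) is expected:
```python
import sympy

sizes = [3, 5]  # plain Python ints, as seen in the failing lowering path
print(all(isinstance(s, sympy.Expr) for s in sizes))       # False
normalized = [sympy.sympify(s) for s in sizes]             # -> sympy.Integer
print(all(isinstance(s, sympy.Expr) for s in normalized))  # True
```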
### Error logs
Taking `pytest -vs test/inductor/test_torchinductor_opinfo.py::TestInductorOpInfoCPU::test_comprehensive_argmax_cpu_int32` as an example. With the patch above, it fails with:
```
$ pytest -vs test/inductor/test_torchinductor_opinfo.py::TestInductorOpInfoCPU::test_comprehensive_argmax_cpu_int32
========================================================================== test session starts ==========================================================================
platform linux -- Python 3.9.19, pytest-8.3.4, pluggy-1.5.0 -- /home/gta/miniforge3/envs/yi/bin/python3.9
cachedir: .pytest_cache
hypothesis profile 'default' -> database=DirectoryBasedExampleDatabase(PosixPath('/home/gta/pytorch/.hypothesis/examples'))
benchmark: 5.1.0 (defaults: timer=time.perf_counter disable_gc=False min_rounds=5 min_time=0.000005 max_time=1.0 calibration_precision=10 warmup=False warmup_iterations=100000)
rootdir: /home/gta/pytorch
configfile: pytest.ini
plugins: hypothesis-6.122.3, hydra-core-1.3.2, benchmark-5.1.0, xdist-3.6.1, typeguard-4.3.0
collected 1 item
Running 1 items in this shard
test/inductor/test_torchinductor_opinfo.py::TestInductorOpInfoCPU::test_comprehensive_argmax_cpu_int32 !
FAILED [8.0479s]
=============================================================================== FAILURES ================================================================================
_______________________________________________________ TestInductorOpInfoCPU.test_comprehensive_argmax_cpu_int32 _______________________________________________________
Traceback (most recent call last):
File "/home/gta/miniforge3/envs/yi/lib/python3.9/unittest/case.py", line 59, in testPartExecutor
yield
File "/home/gta/miniforge3/envs/yi/lib/python3.9/unittest/case.py", line 592, in run
self._callTestMethod(testMethod)
File "/home/gta/miniforge3/envs/yi/lib/python3.9/unittest/case.py", line 550, in _callTestMethod
method()
File "/home/gta/pytorch/torch/testing/_internal/common_utils.py", line 3155, in wrapper
method(*args, **kwargs)
File "/home/gta/pytorch/torch/testing/_internal/common_device_type.py", line 474, in instantiated_test
raise rte
File "/home/gta/pytorch/torch/testing/_internal/common_device_type.py", line 454, in instantiated_test
result = test(self, **param_kwargs)
File "/home/gta/pytorch/torch/testing/_internal/common_utils.py", line 1616, in wrapper
fn(*args, **kwargs)
File "/home/gta/pytorch/torch/testing/_internal/common_device_type.py", line 1172, in test_wrapper
raise e
File "/home/gta/pytorch/torch/testing/_internal/common_device_type.py", line 1159, in test_wrapper
return test(*args, **kwargs)
File "/home/gta/pytorch/torch/testing/_internal/common_device_type.py", line 1447, in only_fn
return fn(self, *args, **kwargs)
File "/home/gta/pytorch/torch/testing/_internal/common_utils.py", line 2293, in wrapper
fn(*args, **kwargs)
File "/home/gta/pytorch/torch/testing/_internal/common_device_type.py", line 1239, in dep_fn
return fn(slf, *args, **kwargs)
File "/home/gta/pytorch/torch/testing/_internal/common_device_type.py", line 1239, in dep_fn
return fn(slf, *args, **kwargs)
File "/home/gta/pytorch/torch/testing/_internal/common_device_type.py", line 1239, in dep_fn
return fn(slf, *args, **kwargs)
File "/home/gta/pytorch/torch/testing/_internal/common_utils.py", line 1616, in wrapper
fn(*args, **kwargs)
File "/home/gta/pytorch/torch/testing/_internal/common_utils.py", line 1538, in wrapper
fn(*args, **kwargs)
File "/home/gta/miniforge3/envs/yi/lib/python3.9/unittest/mock.py", line 1336, in patched
return func(*newargs, **newkeywargs)
File "/home/gta/miniforge3/envs/yi/lib/python3.9/contextlib.py", line 79, in inner
return func(*args, **kwds)
File "/home/gta/miniforge3/envs/yi/lib/python3.9/contextlib.py", line 79, in inner
return func(*args, **kwds)
File "/home/gta/pytorch/test/inductor/test_torchinductor_opinfo.py", line 886, in inner
raise e
File "/home/gta/pytorch/test/inductor/test_torchinductor_opinfo.py", line 878, in inner
fn(self, device, dtype, op)
File "/home/gta/pytorch/test/inductor/test_torchinductor_opinfo.py", line 1127, in test_comprehensive
raise e
File "/home/gta/pytorch/test/inductor/test_torchinductor_opinfo.py", line 1109, in test_comprehensive
self.check_model(
File "/home/gta/pytorch/test/inductor/test_torchinductor.py", line 467, in check_model
actual = run(*example_inputs, **kwargs)
File "/home/gta/pytorch/torch/_dynamo/eval_frame.py", line 589, in _fn
raise e.remove_dynamo_frames() from None # see TORCHDYNAMO_VERBOSE=1
File "/home/gta/pytorch/torch/_inductor/compile_fx.py", line 748, in _compile_fx_inner
raise InductorError(e, currentframe()).with_traceback(
File "/home/gta/pytorch/torch/_inductor/compile_fx.py", line 733, in _compile_fx_inner
mb_compiled_graph = fx_codegen_and_compile(
File "/home/gta/pytorch/torch/_inductor/compile_fx.py", line 1405, in fx_codegen_and_compile
return scheme.codegen_and_compile(gm, example_inputs, inputs_to_check, graph_kwargs)
File "/home/gta/pytorch/torch/_inductor/compile_fx.py", line 1056, in codegen_and_compile
graph.run(*example_inputs)
File "/home/gta/pytorch/torch/_inductor/graph.py", line 874, in run
return super().run(*args)
File "/home/gta/pytorch/torch/fx/interpreter.py", line 171, in run
self.env[node] = self.run_node(node)
File "/home/gta/pytorch/torch/_inductor/graph.py", line 1478, in run_node
result = super().run_node(n)
File "/home/gta/pytorch/torch/fx/interpreter.py", line 236, in run_node
return getattr(self, n.op)(n.target, args, kwargs)
File "/home/gta/pytorch/torch/_inductor/graph.py", line 1167, in call_function
raise LoweringException(e, target, args, kwargs).with_traceback(
File "/home/gta/pytorch/torch/_inductor/graph.py", line 1157, in call_function
out = lowerings[target](*args, **kwargs) # type: ignore[index]
File "/home/gta/pytorch/torch/_inductor/lowering.py", line 462, in wrapped
out = decomp_fn(*args, **kwargs)
File "/home/gta/pytorch/torch/_inductor/lowering.py", line 5684, in inner
result = Reduction.create(reduction_type=reduction_type, input_node=x, **kwargs)
File "/home/gta/pytorch/torch/_inductor/ir.py", line 1397, in create
inner_fn=cls._unroll_reduction_fn(
File "/home/gta/pytorch/torch/_inductor/ir.py", line 1297, in _unroll_reduction_fn
flatten_index = FixedLayout(
File "/home/gta/pytorch/torch/_inductor/ir.py", line 3199, in __init__
assert False
torch._inductor.exc.InductorError: LoweringException: AssertionError:
target: aten.argmax.default
args[0]: TensorBox(StorageBox(
InputBuffer(name='arg0_1', layout=FixedLayout('cpu', torch.int32, size=[3, 5], stride=[5, 1]))
))
args[1]: 0
args[2]: True
Set TORCH_LOGS="+dynamo" and TORCHDYNAMO_VERBOSE=1 for more information
To execute this test, run the following from the base repo dir:
python test/inductor/test_torchinductor_opinfo.py TestInductorOpInfoCPU.test_comprehensive_argmax_cpu_int32
This message can be suppressed by setting PYTORCH_PRINT_REPRO_ON_FAILURE=0
======================================================================== short test summary info ========================================================================
FAILED [8.0479s] test/inductor/test_torchinductor_opinfo.py::TestInductorOpInfoCPU::test_comprehensive_argmax_cpu_int32 - torch._inductor.exc.InductorError: LoweringException: AssertionError:
========================================================================== 1 failed in 19.25s ===========================================================================
```
### Versions
```
Collecting environment information...
PyTorch version: 2.7.0a0+gitc0d067f
Is debug build: False
CUDA used to build PyTorch: None
ROCM used to build PyTorch: N/A
OS: Ubuntu 24.04.1 LTS (x86_64)
GCC version: (Ubuntu 13.3.0-6ubuntu2~24.04) 13.3.0
Clang version: Could not collect
CMake version: version 3.31.2
Libc version: glibc-2.39
Python version: 3.9.19 | packaged by conda-forge | (main, Mar 20 2024, 12:50:21) [GCC 12.3.0] (64-bit runtime)
Python platform: Linux-6.8.0-52-generic-x86_64-with-glibc2.39
Is CUDA available: False
CUDA runtime version: No CUDA
CUDA_MODULE_LOADING set to: N/A
GPU models and configuration: No CUDA
Nvidia driver version: No CUDA
cuDNN version: No CUDA
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 52 bits physical, 57 bits virtual
Byte Order: Little Endian
CPU(s): 208
On-line CPU(s) list: 0-207
Vendor ID: GenuineIntel
Model name: Intel(R) Xeon(R) Platinum 8469 CPU @2.00GHz
CPU family: 6
Model: 143
Thread(s) per core: 2
Core(s) per socket: 52
Socket(s): 2
Stepping: 5
CPU(s) scaling MHz: 23%
CPU max MHz: 3800.0000
CPU min MHz: 800.0000
BogoMIPS: 4000.00
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc art arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc cpuid aperfmperf tsc_known_freq pni pclmulqdq dtes64 monitor ds_cpl vmx smx est tm2 ssse3 sdbg fma cx16 xtpr pdcm pcid dca sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm 3dnowprefetch cpuid_fault epb cat_l3 cat_l2 cdp_l3 intel_ppin cdp_l2 ssbd mba ibrs ibpb stibp ibrs_enhanced tpr_shadow flexpriority ept vpid ept_ad fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid cqm rdt_a avx512f avx512dq rdseed adx smap avx512ifma clflushopt clwb intel_pt avx512cd sha_ni avx512bw avx512vl xsaveopt xsavec xgetbv1 xsaves cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local split_lock_detect user_shstk avx_vnni avx512_bf16 wbnoinvd dtherm ida arat pln pts hwp hwp_act_window hwp_epp hwp_pkg_req vnmi avx512vbmi umip pku ospke waitpkg avx512_vbmi2 gfni vaes vpclmulqdq avx512_vnni avx512_bitalg tme avx512_vpopcntdq la57 rdpid bus_lock_detect cldemote movdiri movdir64b enqcmd fsrm md_clear serialize tsxldtrk arch_lbr ibt amx_bf16 avx512_fp16 amx_tile amx_int8 flush_l1d arch_capabilities
Virtualization: VT-x
L1d cache: 4.9 MiB (104 instances)
L1i cache: 3.3 MiB (104 instances)
L2 cache: 208 MiB (104 instances)
L3 cache: 210 MiB (2 instances)
NUMA node(s): 2
NUMA node0 CPU(s): 0-51,104-155
NUMA node1 CPU(s): 52-103,156-207
Vulnerability Gather data sampling: Not affected
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Not affected
Vulnerability Reg file data sampling: Not affected
Vulnerability Retbleed: Not affected
Vulnerability Spec rstack overflow: Not affected
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Enhanced / Automatic IBRS; IBPB conditional; RSB filling; PBRSB-eIBRS SW sequence; BHI BHI_DIS_S
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Versions of relevant libraries:
[pip3] bert_pytorch==0.0.1a4
[pip3] flake8==6.1.0
[pip3] flake8-bugbear==23.3.23
[pip3] flake8-comprehensions==3.15.0
[pip3] flake8-executable==2.1.3
[pip3] flake8-logging-format==0.9.0
[pip3] flake8-pyi==23.3.1
[pip3] flake8-simplify==0.19.3
[pip3] mypy==1.13.0
[pip3] mypy-extensions==1.0.0
[pip3] numpy==1.26.4
[pip3] onnx==1.15.0
[pip3] optree==0.13.0
[pip3] pytorch-triton==3.2.0+git35c6c7c6
[pip3] pytorch-triton-xpu==3.2.0+gite98b6fcb
[pip3] torch==2.7.0a0+gitc0d067f
[pip3] torch_geometric==2.4.0
[pip3] torchaudio==2.5.0.dev20241211+xpu
[pip3] torchmetrics==1.0.3
[pip3] torchrec==1.1.0.dev20241126+cpu
[pip3] torchvision==0.19.0a0+d23a6e1
[conda] bert-pytorch 0.0.1a4 dev_0 <develop>
[conda] numpy 1.26.4 pypi_0 pypi
[conda] optree 0.13.0 pypi_0 pypi
[conda] pytorch-triton 3.2.0+git35c6c7c6 pypi_0 pypi
[conda] pytorch-triton-xpu 3.2.0+gite98b6fcb pypi_0 pypi
[conda] torch 2.7.0a0+gitc0d067f dev_0 <develop>
[conda] torch-geometric 2.4.0 pypi_0 pypi
[conda] torchaudio 2.5.0.dev20241211+xpu pypi_0 pypi
[conda] torchfix 0.4.0 pypi_0 pypi
[conda] torchmetrics 1.0.3 pypi_0 pypi
[conda] torchrec 1.1.0.dev20241126+cpu pypi_0 pypi
[conda] torchvision 0.19.0a0+d23a6e1 pypi_0 pypi
```
cc @chauhang @penguinwu
| true
|
2,886,448,919
|
Inference llama after Export PTQ
|
mhs4670go
|
open
|
[
"oncall: quantization"
] | 0
|
NONE
|
### 🐛 Describe the bug
Hello. I'm trying to run the Llama 3.2 1B model after [Export PTQ](https://pytorch.org/tutorials/prototype/pt2e_quant_ptq.html). This means the model is already exported when it's run. The reason I do this is that I want to evaluate the quantized model with lm-evaluation-harness or something similar.
Here's the code I tried to run.
```python
def prepare_inputs(text, tokenizer, max_length=256):
encoding = tokenizer(
text,
max_length=max_length,
padding="max_length",
truncation=True,
return_tensors="pt",
)
return encoding
model_path='/home/seongwoo/Llama-3.2-1B'
tokenizer = AutoTokenizer.from_pretrained(model_path)
tokenizer.pad_token = tokenizer.eos_token
model = AutoModelForCausalLM.from_pretrained(model_path)
# Set attributes for lm_eval
dev = model.device
conf = model.config
name_or_path = model.name_or_path
# Generate tokens
prompt = "how are you?"
inputs = prepare_inputs(prompt, tokenizer)
model.eval()
"""
Note that just running the model after calling torch.export.export() works well.
model = torch.export.export(model, (inputs.input_ids,))
model.module()(inputs.input_ids) # works well
"""
# The codes are from guides.
model = torch.export.export_for_training(model, args=(inputs.input_ids,)).module()
from torch.ao.quantization.quantizer.xnnpack_quantizer import (
XNNPACKQuantizer,
get_symmetric_quantization_config,
)
# 2. Register observers in each nodes
quantizer = XNNPACKQuantizer().set_global(get_symmetric_quantization_config())
model = prepare_pt2e(model, quantizer)
prompts = [
"how are you?",
"what is your name?",
"how's the wether today?",
"Happy birthday!",
"what's your hobby?",
]
# 3. Calibration
for p in prompts:
inputs = prepare_inputs(p, tokenizer)
model(inputs.input_ids)
# 4. Quantization
model = convert_pt2e(model)
model(inputs.input_ids) ########### The error happened here.
import lm_eval
from lm_eval.utils import make_table
eval_model = lm_eval.models.huggingface.HFLM(pretrained=model)
results = lm_eval.simple_evaluate(
eval_model,
tasks='wikitext',
batch_size='auto',
)
print (make_table(results))
```
Here are the error messages.
```bash
Traceback (most recent call last):
File "/home/sw4670.chae/test/llama3.py", line 87, in <module>
print(model(inputs.input_ids))
File "/home/sw4670.chae/circle-exir/.venv/lib/python3.10/site-packages/torch/fx/graph_module.py", line 822, in call_wrapped
return self._wrapped_call(self, *args, **kwargs)
File "/home/sw4670.chae/circle-exir/.venv/lib/python3.10/site-packages/torch/fx/graph_module.py", line 400, in __call__
raise e
File "/home/sw4670.chae/circle-exir/.venv/lib/python3.10/site-packages/torch/fx/graph_module.py", line 387, in __call__
return super(self.cls, obj).__call__(*args, **kwargs) # type: ignore[misc]
File "/home/sw4670.chae/circle-exir/.venv/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1739, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
File "/home/sw4670.chae/circle-exir/.venv/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1750, in _call_impl
return forward_call(*args, **kwargs)
File "<eval_with_key>.43", line 315, in forward
File "/home/sw4670.chae/circle-exir/.venv/lib/python3.10/site-packages/torch/_higher_order_ops/wrap.py", line 55, in __call__
return wrapper()
File "/home/sw4670.chae/circle-exir/.venv/lib/python3.10/site-packages/torch/_dynamo/eval_frame.py", line 745, in _fn
return fn(*args, **kwargs)
File "/home/sw4670.chae/circle-exir/.venv/lib/python3.10/site-packages/torch/_higher_order_ops/wrap.py", line 53, in wrapper
return wrapped_func(*args, **kwargs)
File "/home/sw4670.chae/circle-exir/.venv/lib/python3.10/site-packages/torch/fx/graph_module.py", line 822, in call_wrapped
return self._wrapped_call(self, *args, **kwargs)
File "/home/sw4670.chae/circle-exir/.venv/lib/python3.10/site-packages/torch/fx/graph_module.py", line 400, in __call__
raise e
File "/home/sw4670.chae/circle-exir/.venv/lib/python3.10/site-packages/torch/fx/graph_module.py", line 387, in __call__
return super(self.cls, obj).__call__(*args, **kwargs) # type: ignore[misc]
File "/home/sw4670.chae/circle-exir/.venv/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1739, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
File "/home/sw4670.chae/circle-exir/.venv/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1750, in _call_impl
return forward_call(*args, **kwargs)
File "<eval_with_key>.31", line 14, in forward
File "/home/sw4670.chae/circle-exir/.venv/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1928, in __getattr__
raise AttributeError(
AttributeError: 'GraphModule' object has no attribute 'submod_1'
```
### Versions
Versions of relevant libraries:
[pip3] flake8==6.0.0
[pip3] flake8-breakpoint==1.1.0
[pip3] flake8-bugbear==23.6.5
[pip3] flake8-comprehensions==3.12.0
[pip3] flake8-import-order==0.18.2
[pip3] flake8-plugin-utils==1.3.3
[pip3] flake8-pyi==23.5.0
[pip3] mypy==1.9.0
[pip3] mypy-extensions==1.0.0
[pip3] numpy==2.2.3
[pip3] nvidia-cublas-cu12==12.4.5.8
[pip3] nvidia-cuda-cupti-cu12==12.4.127
[pip3] nvidia-cuda-nvrtc-cu12==12.4.127
[pip3] nvidia-cuda-runtime-cu12==12.4.127
[pip3] nvidia-cudnn-cu12==9.1.0.70
[pip3] nvidia-cufft-cu12==11.2.1.3
[pip3] nvidia-curand-cu12==10.3.5.147
[pip3] nvidia-cusolver-cu12==11.6.1.9
[pip3] nvidia-cusparse-cu12==12.3.1.170
[pip3] nvidia-cusparselt-cu12==0.6.2
[pip3] nvidia-nccl-cu12==2.21.5
[pip3] nvidia-nvjitlink-cu12==12.4.127
[pip3] nvidia-nvtx-cu12==12.4.127
[pip3] torch==2.6.0
[pip3] torchvision==0.21.0
[pip3] triton==3.2.0
[conda] Could not collect
cc @jerryzh168 @jianyuh @raghuramank100 @jamesr66a @vkuzo @jgong5 @Xia-Weiwen @leslie-fang-intel @msaroufim
| true
|
2,886,428,777
|
Improvement with comprehensive docstrings and implementation of class method for the code.
|
kaushik701
|
open
|
[
"open source",
"Stale"
] | 4
|
NONE
|
The code was improved by adding comprehensive docstrings, using proper type annotations, implementing a class method for context retrieval, removing redundant checks, and enhancing overall code organization while maintaining all existing functionality.
Fixes #ISSUE_NUMBER
| true
|
2,886,402,170
|
PyTorch's nightly version no longer includes the CU118, CU124, and CU121 versions
|
1556900941lizerui
|
open
|
[
"needs reproduction",
"module: binaries",
"module: cuda",
"triaged"
] | 4
|
NONE
|
I tried to download the CUDA nightly builds of PyTorch for CU124 and CU121, but was told that no matching version could be found. I can only download the CU126 build, but that build only supports CUDA compute capability 9.0 and is not suitable for my graphics card. Is there a way to download the previous packages?
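For reference, the install attempt looks like this (wrapped in Python here as a sketch; the index URLs follow the usual nightly pattern):
```python
import subprocess
import sys

# Attempt to install a cu124 nightly wheel. For cu124/cu121 this now ends with
# pip reporting that no matching distribution can be found, while the cu126
# index still resolves.
subprocess.run(
    [sys.executable, "-m", "pip", "install", "--pre", "torch",
     "--index-url", "https://download.pytorch.org/whl/nightly/cu124"],
    check=False,
)
```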
cc @seemethere @malfet @osalpekar @atalman @ptrblck @msaroufim @eqy
| true
|
2,886,367,799
|
Add note to get start xpu
|
ZhaoqiongZ
|
closed
|
[
"triaged",
"open source",
"Merged",
"ciflow/trunk",
"topic: not user facing"
] | 6
|
CONTRIBUTOR
|
Installing PyTorch from binaries automatically installs the runtime packages of Intel® Deep Learning Essentials. If a standalone installation of Intel® Deep Learning Essentials is then activated on top of that, an environment conflict occurs. Therefore, add a note to remind users to avoid this situation.
| true
|
2,886,363,153
|
Build pytorch for rocm failed
|
FlintWangacc
|
open
|
[
"module: build",
"module: rocm",
"triaged"
] | 15
|
NONE
|
### 🐛 Describe the bug
Build pytorch for rocm failed.
```shell
[6907/7754] Building HIPCC object caffe2/CMakeFiles/torch_hip.dir/__/aten/src/ATen/native/hip/torch_hip_generated_AdaptiveAveragePooling.hip.o
FAILED: caffe2/CMakeFiles/torch_hip.dir/__/aten/src/ATen/native/hip/torch_hip_generated_AdaptiveAveragePooling.hip.o /home/hmsjwzb/work/framework/pytorch/build/caffe2/CMakeFiles/torch_hip.dir/__/aten/src/ATen/native/hip/torch_hip_generated_AdaptiveAveragePooling.hip.o
cd /home/hmsjwzb/work/framework/pytorch/build/caffe2/CMakeFiles/torch_hip.dir/__/aten/src/ATen/native/hip && /usr/bin/cmake -E make_directory /home/hmsjwzb/work/framework/pytorch/build/caffe2/CMakeFiles/torch_hip.dir/__/aten/src/ATen/native/hip/. && /usr/bin/cmake -D verbose:BOOL=OFF -D build_configuration:STRING=RELEASE -D generated_file:STRING=/home/hmsjwzb/work/framework/pytorch/build/caffe2/CMakeFiles/torch_hip.dir/__/aten/src/ATen/native/hip/./torch_hip_generated_AdaptiveAveragePooling.hip.o -P /home/hmsjwzb/work/framework/pytorch/build/caffe2/CMakeFiles/torch_hip.dir/__/aten/src/ATen/native/hip/torch_hip_generated_AdaptiveAveragePooling.hip.o.cmake
clang++: warning: argument unused during compilation: '--offload-compress' [-Wunused-command-line-argument]
In file included from /home/hmsjwzb/work/framework/pytorch/aten/src/ATen/native/hip/AdaptiveAveragePooling.hip:4:
In file included from /home/hmsjwzb/work/framework/pytorch/aten/src/ATen/core/Tensor.h:3:
In file included from /home/hmsjwzb/work/framework/pytorch/build/aten/src/ATen/core/TensorBody.h:16:
In file included from /home/hmsjwzb/work/framework/pytorch/c10/core/Scalar.h:9:
In file included from /home/hmsjwzb/work/framework/pytorch/c10/core/ScalarType.h:13:
In file included from /home/hmsjwzb/work/framework/pytorch/c10/util/complex.h:9:
In file included from /usr/include/thrust/complex.h:1030:
In file included from /usr/include/thrust/detail/complex/complex.inl:22:
In file included from /usr/include/thrust/type_traits/is_trivially_relocatable.h:19:
In file included from /usr/include/thrust/type_traits/is_contiguous_iterator.h:27:
In file included from /usr/include/thrust/detail/type_traits/pointer_traits.h:23:
In file included from /usr/include/thrust/iterator/iterator_traits.h:62:
/usr/include/thrust/iterator/detail/device_system_tag.h:23:10: fatal error: 'thrust/system/__THRUST_DEVICE_SYSTEM_NAMESPACE/detail/execution_policy.h' file not found
   23 | #include __THRUST_DEVICE_SYSTEM_TAG_HEADER
      |          ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
/usr/include/thrust/iterator/detail/device_system_tag.h:22:43: note: expanded from macro '__THRUST_DEVICE_SYSTEM_TAG_HEADER'
   22 | #define __THRUST_DEVICE_SYSTEM_TAG_HEADER <__THRUST_DEVICE_SYSTEM_ROOT/detail/execution_policy.h>
      |                                           ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
<scratch space>:142:1: note: expanded from here
  142 | <thrust/system/__THRUST_DEVICE_SYSTEM_NAMESPACE/detail/execution_policy.h>
      | ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
1 error generated when compiling for host.
```
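In case it helps, the failing include path `/usr/include/thrust` looks like a standalone (CUDA-oriented) Thrust package rather than rocThrust under `/opt/rocm`. A small diagnostic I can run (my own sketch, not part of the build system) to see which Thrust headers are present:
```python
import os

# Check which Thrust installations are visible; the assumption here is that a
# ROCm build should pick up rocThrust from /opt/rocm, not /usr/include/thrust.
candidates = [
    "/usr/include/thrust/version.h",       # standalone / CUDA Thrust package
    "/opt/rocm/include/thrust/version.h",  # rocThrust headers
]
for path in candidates:
    print(f"{path}: {'found' if os.path.exists(path) else 'missing'}")
```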
### Versions
```shell
Collecting environment information... PyTorch version: N/A Is debug build: N/A CUDA used to build PyTorch: N/A ROCM used to build PyTorch: N/A OS: Ubuntu 22.04.5 LTS (x86_64) GCC version: (Ubuntu 11.4.0-1ubuntu1~22.04) 11.4.0 Clang version: 19.1.7 (https://github.com/llvm/llvm-project.git cd708029e0b2869e80abe31ddb175f7c35361f90) CMake version: version 3.22.1 Libc version: glibc-2.35 Python version: 3.11.11 (main, Dec 11 2024, 16:28:39) [GCC 11.2.0] (64-bit runtime) Python platform: Linux-6.8.0-52-generic-x86_64-with-glibc2.35 Is CUDA available: N/A CUDA runtime version: Could not collect CUDA_MODULE_LOADING set to: N/A GPU models and configuration: Could not collect Nvidia driver version: Could not collect cuDNN version: Could not collect HIP runtime version: N/A MIOpen runtime version: N/A Is XNNPACK available: N/A CPU: Architecture: x86_64 CPU op-mode(s): 32-bit, 64-bit Address sizes: 46 bits physical, 48 bits virtual Byte Order: Little Endian CPU(s): 32 On-line CPU(s) list: 0-31 Vendor ID: GenuineIntel Model name: 13th Gen Intel(R) Core(TM) i9-13900 CPU family: 6 Model: 183 Thread(s) per core: 2 Core(s) per socket: 24 Socket(s): 1 Stepping: 1 CPU max MHz: 5600.0000 CPU min MHz: 800.0000 BogoMIPS: 3993.60 Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc art arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc cpuid aperfmperf tsc_known_freq pni pclmulqdq dtes64 monitor ds_cpl vmx smx est tm2 ssse3 sdbg fma cx16 xtpr pdcm pcid sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm 3dnowprefetch cpuid_fault epb ssbd ibrs ibpb stibp ibrs_enhanced tpr_shadow flexpriority ept vpid ept_ad fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid rdseed adx smap clflushopt clwb intel_pt sha_ni xsaveopt xsavec xgetbv1 xsaves split_lock_detect user_shstk avx_vnni dtherm ida arat pln pts hwp hwp_notify hwp_act_window hwp_epp hwp_pkg_req hfi vnmi umip pku ospke waitpkg gfni vaes vpclmulqdq tme rdpid movdiri movdir64b fsrm md_clear serialize pconfig arch_lbr ibt flush_l1d arch_capabilities Virtualization: VT-x L1d cache: 896 KiB (24 instances) L1i cache: 1.3 MiB (24 instances) L2 cache: 32 MiB (12 instances) L3 cache: 36 MiB (1 instance) NUMA node(s): 1 NUMA node0 CPU(s): 0-31 Vulnerability Gather data sampling: Not affected Vulnerability Itlb multihit: Not affected Vulnerability L1tf: Not affected Vulnerability Mds: Not affected Vulnerability Meltdown: Not affected Vulnerability Mmio stale data: Not affected Vulnerability Reg file data sampling: Mitigation; Clear Register File Vulnerability Retbleed: Not affected Vulnerability Spec rstack overflow: Not affected Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization Vulnerability Spectre v2: Mitigation; Enhanced / Automatic IBRS; IBPB conditional; RSB filling; PBRSB-eIBRS SW sequence; BHI BHI_DIS_S Vulnerability Srbds: Not affected Vulnerability Tsx async abort: Not affected
Versions of relevant libraries:
[pip3] No relevant packages
[conda] magma-cuda121 2.6.1 1 pytorch
[conda] mkl-include 2025.0.1 pypi_0 pypi
[conda] mkl-static 2025.0.1 pypi_0 pypi
```
cc @malfet @seemethere @jeffdaily @sunway513 @jithunnair-amd @pruthvistony @ROCmSupport @dllehr-amd @jataylo @hongxiayang @naromero77amd
| true
|
2,887,587,631
|
[Request Help] “torch._dynamo.exc.UserError: Dynamic control flow is not supported at the moment.” “torch._dynamo.exc.UncapturedHigherOrderOpError: Cond doesn't work unless it is captured completely with torch.compile.”
|
liye0626
|
open
|
[
"triaged",
"oncall: pt2",
"oncall: export"
] | 4
|
NONE
|
Background: I tried to integrate EAGLE-2 into ExecuTorch, but encountered some errors.
Related code:
```python
if i not in noleaf_index: # An error occurred at this branch
cid = i
depth = position_ids_list[i]
for j in reversed(range(depth + 1)):
retrieve_indices[rid][j] = cid
cid = mask_index_list[cid - 1]
rid += 1
```
Specifically, I got an error after adding a branch to the model's forward
- code:
```python
if A_Tensor: # In forward, a Boolean value or bool tensor calculated using real-time data
...
```
- error:
torch._dynamo.exc.UserError: Dynamic control flow is not supported at the moment. Please use torch.cond to explicitly capture the control flow. For more information about this error, see: https://pytorch.org/docs/main/generated/exportdb/index.html#cond-operands
I referred to the above page and added the module "CondBranchNestedFunction", but encountered a new error
- code:
```python
from functorch.experimental.control_flow import cond
@torch.compile(dynamic=True, fullgraph=True) # dynamic setting True and False will both result in an error
class CondBranchNestedFunction(torch.nn.Module):
@torch.compile(dynamic=True, fullgraph=True)
def forward(self, tmpA, i):
def true_fn(i):
i+=1
return None
def false_fn(i):
return None
return cond(tmpA, true_fn, false_fn, [i])
self.condition_func = CondBranchNestedFunction()
self.condition_func = torch.compile(self.condition_func, mode='max-autotune')
tmpA = True # or bool Tensor. In forward, a Boolean value or bool tensor calculated using real-time data
tmpB = torch.tensor(i)
self.condition_func(tmpA, tmpB)
```
- error:
"torch._dynamo.exc.UncapturedHigherOrderOpError: Cond doesn't work unless it is captured completely with torch.compile."
I have tried applying torch.compile in several locations, but the error did not change. I would like to ask which direction I should take to solve this problem.
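For concreteness, this is the kind of minimal usage I am aiming for, based on my reading of the cond documentation (a toy sketch, not the actual EAGLE-2 code): the predicate is a boolean scalar tensor, both branches take the same operands, return tensors with matching shape/dtype, and neither branch mutates its inputs or returns None.
```python
import torch


class CondExample(torch.nn.Module):
    def forward(self, pred: torch.Tensor, i: torch.Tensor) -> torch.Tensor:
        def true_fn(i):
            return i + 1   # return a new tensor instead of mutating i in place

        def false_fn(i):
            return i - 1   # same shape/dtype as the true branch

        # torch.cond is the same op as functorch.experimental.control_flow.cond
        return torch.cond(pred, true_fn, false_fn, (i,))


m = torch.compile(CondExample(), fullgraph=True)
out = m(torch.tensor(True), torch.tensor(0))
```
Is this roughly the shape the branch functions need to have, or is there a supported way to express a branch that only performs in-place index updates like mine?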
cc @chauhang @penguinwu @avikchaudhuri @gmagogsfm @zhxchen17 @tugsbayasgalan @angelayi @suo @ydwu4
| true
|
2,886,345,345
|
DISABLED test_nonstrict_trace_pre_existing_custom_class (__main__.DecoratorTests)
|
pytorch-bot[bot]
|
closed
|
[
"triaged",
"module: flaky-tests",
"skipped",
"oncall: pt2",
"module: dynamo"
] | 4
|
NONE
|
Platforms: linux, mac, macos
This test was disabled because it is failing in CI. See [recent examples](https://hud.pytorch.org/flakytest?name=test_nonstrict_trace_pre_existing_custom_class&suite=DecoratorTests&limit=100) and the most recent trunk [workflow logs](https://github.com/pytorch/pytorch/runs/37962003305).
Over the past 3 hours, it has been determined flaky in 8 workflow(s) with 8 failures and 8 successes.
**Debugging instructions (after clicking on the recent samples link):**
DO NOT ASSUME THINGS ARE OKAY IF THE CI IS GREEN. We now shield flaky tests from developers so CI will thus be green but it will be harder to parse the logs.
To find relevant log snippets:
1. Click on the workflow logs linked above
2. Click on the Test step of the job so that it is expanded. Otherwise, the grepping will not work.
3. Grep for `test_nonstrict_trace_pre_existing_custom_class`
4. There should be several instances run (as flaky tests are rerun in CI) from which you can study the logs.
<details><summary>Sample error message</summary>
Truncated for length
```
dler(tx, *args, **kwargs)
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_dynamo/variables/builtin.py", line 1797, in call_getattr
name = name_var.as_python_constant()
torch._dynamo.exc.InternalTorchDynamoError: RecursionError: maximum recursion depth exceeded
from user code:
File "/var/lib/jenkins/workspace/test/dynamo/test_decorators.py", line 312, in fn
res = trace_me(p)
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/utils/_pytree.py", line 1020, in tree_flatten
treespec = helper(tree, leaves)
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/utils/_pytree.py", line 1016, in helper
subspecs = [helper(child, leaves) for child in children]
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/utils/_pytree.py", line 1016, in helper
subspecs = [helper(child, leaves) for child in children]
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/utils/_pytree.py", line 1013, in helper
children, context = flatten_fn(node)
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/utils/_pytree.py", line 1020, in tree_flatten
treespec = helper(tree, leaves)
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/utils/_pytree.py", line 1016, in helper
subspecs = [helper(child, leaves) for child in children]
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/utils/_pytree.py", line 1016, in helper
subspecs = [helper(child, leaves) for child in children]
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/utils/_pytree.py", line 1013, in helper
children, context = flatten_fn(node)
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/utils/_pytree.py", line 1020, in tree_flatten
treespec = helper(tree, leaves)
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/utils/_pytree.py", line 1016, in helper
subspecs = [helper(child, leaves) for child in children]
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/utils/_pytree.py", line 1016, in helper
subspecs = [helper(child, leaves) for child in children]
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/utils/_pytree.py", line 1013, in helper
children, context = flatten_fn(node)
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/utils/_pytree.py", line 1020, in tree_flatten
treespec = helper(tree, leaves)
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/utils/_pytree.py", line 1016, in helper
subspecs = [helper(child, leaves) for child in children]
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/utils/_pytree.py", line 1016, in helper
subspecs = [helper(child, leaves) for child in children]
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/utils/_pytree.py", line 1013, in helper
children, context = flatten_fn(node)
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/utils/_pytree.py", line 1020, in tree_flatten
treespec = helper(tree, leaves)
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/utils/_pytree.py", line 1016, in helper
subspecs = [helper(child, leaves) for child in children]
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/utils/_pytree.py", line 1016, in helper
subspecs = [helper(child, leaves) for child in children]
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/utils/_pytree.py", line 1013, in helper
children, context = flatten_fn(node)
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/utils/_pytree.py", line 1020, in tree_flatten
treespec = helper(tree, leaves)
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/utils/_pytree.py", line 1016, in helper
subspecs = [helper(child, leaves) for child in children]
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/utils/_pytree.py", line 1016, in helper
subspecs = [helper(child, leaves) for child in children]
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/utils/_pytree.py", line 1013, in helper
children, context = flatten_fn(node)
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/utils/_pytree.py", line 1020, in tree_flatten
treespec = helper(tree, leaves)
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/utils/_pytree.py", line 1016, in helper
subspecs = [helper(child, leaves) for child in children]
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/utils/_pytree.py", line 1016, in helper
subspecs = [helper(child, leaves) for child in children]
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/utils/_pytree.py", line 1013, in helper
children, context = flatten_fn(node)
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/utils/_pytree.py", line 1020, in tree_flatten
treespec = helper(tree, leaves)
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/utils/_pytree.py", line 1016, in helper
subspecs = [helper(child, leaves) for child in children]
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/utils/_pytree.py", line 1016, in helper
subspecs = [helper(child, leaves) for child in children]
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/utils/_pytree.py", line 1013, in helper
children, context = flatten_fn(node)
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/utils/_pytree.py", line 1020, in tree_flatten
treespec = helper(tree, leaves)
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/utils/_pytree.py", line 1016, in helper
subspecs = [helper(child, leaves) for child in children]
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/utils/_pytree.py", line 1016, in helper
subspecs = [helper(child, leaves) for child in children]
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/utils/_pytree.py", line 1013, in helper
children, context = flatten_fn(node)
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/utils/_pytree.py", line 1020, in tree_flatten
treespec = helper(tree, leaves)
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/utils/_pytree.py", line 1016, in helper
subspecs = [helper(child, leaves) for child in children]
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/utils/_pytree.py", line 1016, in helper
subspecs = [helper(child, leaves) for child in children]
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/utils/_pytree.py", line 1013, in helper
children, context = flatten_fn(node)
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/utils/_pytree.py", line 1020, in tree_flatten
treespec = helper(tree, leaves)
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/utils/_pytree.py", line 1016, in helper
subspecs = [helper(child, leaves) for child in children]
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/utils/_pytree.py", line 1016, in helper
subspecs = [helper(child, leaves) for child in children]
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/utils/_pytree.py", line 1013, in helper
children, context = flatten_fn(node)
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/utils/_pytree.py", line 1020, in tree_flatten
treespec = helper(tree, leaves)
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/utils/_pytree.py", line 1016, in helper
subspecs = [helper(child, leaves) for child in children]
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/utils/_pytree.py", line 1016, in helper
subspecs = [helper(child, leaves) for child in children]
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/utils/_pytree.py", line 1013, in helper
children, context = flatten_fn(node)
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/utils/_pytree.py", line 1020, in tree_flatten
treespec = helper(tree, leaves)
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/utils/_pytree.py", line 1016, in helper
subspecs = [helper(child, leaves) for child in children]
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/utils/_pytree.py", line 1016, in helper
subspecs = [helper(child, leaves) for child in children]
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/utils/_pytree.py", line 1013, in helper
children, context = flatten_fn(node)
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/utils/_pytree.py", line 1020, in tree_flatten
treespec = helper(tree, leaves)
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/utils/_pytree.py", line 1016, in helper
subspecs = [helper(child, leaves) for child in children]
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/utils/_pytree.py", line 1016, in helper
subspecs = [helper(child, leaves) for child in children]
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/utils/_pytree.py", line 1013, in helper
children, context = flatten_fn(node)
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/utils/_pytree.py", line 1020, in tree_flatten
treespec = helper(tree, leaves)
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/utils/_pytree.py", line 1016, in helper
subspecs = [helper(child, leaves) for child in children]
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/utils/_pytree.py", line 1016, in helper
subspecs = [helper(child, leaves) for child in children]
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/utils/_pytree.py", line 1013, in helper
children, context = flatten_fn(node)
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/utils/_pytree.py", line 1020, in tree_flatten
treespec = helper(tree, leaves)
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/utils/_pytree.py", line 1016, in helper
subspecs = [helper(child, leaves) for child in children]
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/utils/_pytree.py", line 1016, in helper
subspecs = [helper(child, leaves) for child in children]
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/utils/_pytree.py", line 1013, in helper
children, context = flatten_fn(node)
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/utils/_pytree.py", line 1020, in tree_flatten
treespec = helper(tree, leaves)
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/utils/_pytree.py", line 1016, in helper
subspecs = [helper(child, leaves) for child in children]
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/utils/_pytree.py", line 1016, in helper
subspecs = [helper(child, leaves) for child in children]
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/utils/_pytree.py", line 1013, in helper
children, context = flatten_fn(node)
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/utils/_pytree.py", line 1020, in tree_flatten
treespec = helper(tree, leaves)
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/utils/_pytree.py", line 1016, in helper
subspecs = [helper(child, leaves) for child in children]
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/utils/_pytree.py", line 1016, in helper
subspecs = [helper(child, leaves) for child in children]
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/utils/_pytree.py", line 1013, in helper
children, context = flatten_fn(node)
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/utils/_pytree.py", line 1020, in tree_flatten
treespec = helper(tree, leaves)
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/utils/_pytree.py", line 1016, in helper
subspecs = [helper(child, leaves) for child in children]
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/utils/_pytree.py", line 1016, in helper
subspecs = [helper(child, leaves) for child in children]
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/utils/_pytree.py", line 1013, in helper
children, context = flatten_fn(node)
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/utils/_pytree.py", line 1020, in tree_flatten
treespec = helper(tree, leaves)
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/utils/_pytree.py", line 1016, in helper
subspecs = [helper(child, leaves) for child in children]
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/utils/_pytree.py", line 1016, in helper
subspecs = [helper(child, leaves) for child in children]
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/utils/_pytree.py", line 1013, in helper
children, context = flatten_fn(node)
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/utils/_pytree.py", line 1020, in tree_flatten
treespec = helper(tree, leaves)
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/utils/_pytree.py", line 1016, in helper
subspecs = [helper(child, leaves) for child in children]
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/utils/_pytree.py", line 1016, in helper
subspecs = [helper(child, leaves) for child in children]
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/utils/_pytree.py", line 1013, in helper
children, context = flatten_fn(node)
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/utils/_pytree.py", line 1020, in tree_flatten
treespec = helper(tree, leaves)
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/utils/_pytree.py", line 1016, in helper
subspecs = [helper(child, leaves) for child in children]
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/utils/_pytree.py", line 1016, in helper
subspecs = [helper(child, leaves) for child in children]
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/utils/_pytree.py", line 1013, in helper
children, context = flatten_fn(node)
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/utils/_pytree.py", line 1020, in tree_flatten
treespec = helper(tree, leaves)
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/utils/_pytree.py", line 1016, in helper
subspecs = [helper(child, leaves) for child in children]
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/utils/_pytree.py", line 1016, in helper
subspecs = [helper(child, leaves) for child in children]
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/utils/_pytree.py", line 1013, in helper
children, context = flatten_fn(node)
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/utils/_pytree.py", line 1020, in tree_flatten
treespec = helper(tree, leaves)
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/utils/_pytree.py", line 1016, in helper
subspecs = [helper(child, leaves) for child in children]
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/utils/_pytree.py", line 1016, in helper
subspecs = [helper(child, leaves) for child in children]
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/utils/_pytree.py", line 1013, in helper
children, context = flatten_fn(node)
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/utils/_pytree.py", line 1020, in tree_flatten
treespec = helper(tree, leaves)
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/utils/_pytree.py", line 1016, in helper
subspecs = [helper(child, leaves) for child in children]
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/utils/_pytree.py", line 1016, in helper
subspecs = [helper(child, leaves) for child in children]
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/utils/_pytree.py", line 1013, in helper
children, context = flatten_fn(node)
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/utils/_pytree.py", line 1020, in tree_flatten
treespec = helper(tree, leaves)
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/utils/_pytree.py", line 1016, in helper
subspecs = [helper(child, leaves) for child in children]
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/utils/_pytree.py", line 1016, in helper
subspecs = [helper(child, leaves) for child in children]
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/utils/_pytree.py", line 1013, in helper
children, context = flatten_fn(node)
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/utils/_pytree.py", line 1020, in tree_flatten
treespec = helper(tree, leaves)
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/utils/_pytree.py", line 1016, in helper
subspecs = [helper(child, leaves) for child in children]
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/utils/_pytree.py", line 1016, in helper
subspecs = [helper(child, leaves) for child in children]
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/utils/_pytree.py", line 1013, in helper
children, context = flatten_fn(node)
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/utils/_pytree.py", line 1020, in tree_flatten
treespec = helper(tree, leaves)
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/utils/_pytree.py", line 1016, in helper
subspecs = [helper(child, leaves) for child in children]
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/utils/_pytree.py", line 1016, in helper
subspecs = [helper(child, leaves) for child in children]
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/utils/_pytree.py", line 1013, in helper
children, context = flatten_fn(node)
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/utils/_pytree.py", line 1020, in tree_flatten
treespec = helper(tree, leaves)
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/utils/_pytree.py", line 1016, in helper
subspecs = [helper(child, leaves) for child in children]
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/utils/_pytree.py", line 1016, in helper
subspecs = [helper(child, leaves) for child in children]
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/utils/_pytree.py", line 1013, in helper
children, context = flatten_fn(node)
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/utils/_pytree.py", line 1020, in tree_flatten
treespec = helper(tree, leaves)
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/utils/_pytree.py", line 1016, in helper
subspecs = [helper(child, leaves) for child in children]
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/utils/_pytree.py", line 1016, in helper
subspecs = [helper(child, leaves) for child in children]
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/utils/_pytree.py", line 1013, in helper
children, context = flatten_fn(node)
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/utils/_pytree.py", line 1020, in tree_flatten
treespec = helper(tree, leaves)
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/utils/_pytree.py", line 1016, in helper
subspecs = [helper(child, leaves) for child in children]
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/utils/_pytree.py", line 1016, in helper
subspecs = [helper(child, leaves) for child in children]
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/utils/_pytree.py", line 1013, in helper
children, context = flatten_fn(node)
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/utils/_pytree.py", line 1020, in tree_flatten
treespec = helper(tree, leaves)
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/utils/_pytree.py", line 1016, in helper
subspecs = [helper(child, leaves) for child in children]
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/utils/_pytree.py", line 1016, in helper
subspecs = [helper(child, leaves) for child in children]
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/utils/_pytree.py", line 1013, in helper
children, context = flatten_fn(node)
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/utils/_pytree.py", line 1020, in tree_flatten
treespec = helper(tree, leaves)
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/utils/_pytree.py", line 1016, in helper
subspecs = [helper(child, leaves) for child in children]
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/utils/_pytree.py", line 1016, in helper
subspecs = [helper(child, leaves) for child in children]
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/utils/_pytree.py", line 1013, in helper
children, context = flatten_fn(node)
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/utils/_pytree.py", line 1020, in tree_flatten
treespec = helper(tree, leaves)
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/utils/_pytree.py", line 1016, in helper
subspecs = [helper(child, leaves) for child in children]
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/utils/_pytree.py", line 1016, in helper
subspecs = [helper(child, leaves) for child in children]
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/utils/_pytree.py", line 1013, in helper
children, context = flatten_fn(node)
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/utils/_pytree.py", line 1020, in tree_flatten
treespec = helper(tree, leaves)
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/utils/_pytree.py", line 1016, in helper
subspecs = [helper(child, leaves) for child in children]
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/utils/_pytree.py", line 1016, in helper
subspecs = [helper(child, leaves) for child in children]
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/utils/_pytree.py", line 1013, in helper
children, context = flatten_fn(node)
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/utils/_pytree.py", line 1020, in tree_flatten
treespec = helper(tree, leaves)
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/utils/_pytree.py", line 1016, in helper
subspecs = [helper(child, leaves) for child in children]
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/utils/_pytree.py", line 1016, in helper
subspecs = [helper(child, leaves) for child in children]
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/utils/_pytree.py", line 1013, in helper
children, context = flatten_fn(node)
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/utils/_pytree.py", line 1020, in tree_flatten
treespec = helper(tree, leaves)
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/utils/_pytree.py", line 1016, in helper
subspecs = [helper(child, leaves) for child in children]
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/utils/_pytree.py", line 1016, in helper
subspecs = [helper(child, leaves) for child in children]
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/utils/_pytree.py", line 1013, in helper
children, context = flatten_fn(node)
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/utils/_pytree.py", line 1020, in tree_flatten
treespec = helper(tree, leaves)
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/utils/_pytree.py", line 1016, in helper
subspecs = [helper(child, leaves) for child in children]
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/utils/_pytree.py", line 1016, in helper
subspecs = [helper(child, leaves) for child in children]
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/utils/_pytree.py", line 1013, in helper
children, context = flatten_fn(node)
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/utils/_pytree.py", line 1020, in tree_flatten
treespec = helper(tree, leaves)
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/utils/_pytree.py", line 1016, in helper
subspecs = [helper(child, leaves) for child in children]
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/utils/_pytree.py", line 1016, in helper
subspecs = [helper(child, leaves) for child in children]
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/utils/_pytree.py", line 1013, in helper
children, context = flatten_fn(node)
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/utils/_pytree.py", line 1020, in tree_flatten
treespec = helper(tree, leaves)
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/utils/_pytree.py", line 1016, in helper
subspecs = [helper(child, leaves) for child in children]
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/utils/_pytree.py", line 1016, in helper
subspecs = [helper(child, leaves) for child in children]
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/utils/_pytree.py", line 1013, in helper
children, context = flatten_fn(node)
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/utils/_pytree.py", line 1020, in tree_flatten
treespec = helper(tree, leaves)
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/utils/_pytree.py", line 1016, in helper
subspecs = [helper(child, leaves) for child in children]
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/utils/_pytree.py", line 1016, in helper
subspecs = [helper(child, leaves) for child in children]
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/utils/_pytree.py", line 1013, in helper
children, context = flatten_fn(node)
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/utils/_pytree.py", line 1020, in tree_flatten
treespec = helper(tree, leaves)
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/utils/_pytree.py", line 1016, in helper
subspecs = [helper(child, leaves) for child in children]
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/utils/_pytree.py", line 1016, in helper
subspecs = [helper(child, leaves) for child in children]
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/utils/_pytree.py", line 1013, in helper
children, context = flatten_fn(node)
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/utils/_pytree.py", line 1020, in tree_flatten
treespec = helper(tree, leaves)
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/utils/_pytree.py", line 1016, in helper
subspecs = [helper(child, leaves) for child in children]
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/utils/_pytree.py", line 1016, in helper
subspecs = [helper(child, leaves) for child in children]
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/utils/_pytree.py", line 1013, in helper
children, context = flatten_fn(node)
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/utils/_pytree.py", line 1020, in tree_flatten
treespec = helper(tree, leaves)
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/utils/_pytree.py", line 1016, in helper
subspecs = [helper(child, leaves) for child in children]
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/utils/_pytree.py", line 1016, in helper
subspecs = [helper(child, leaves) for child in children]
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/utils/_pytree.py", line 1013, in helper
children, context = flatten_fn(node)
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/utils/_pytree.py", line 1020, in tree_flatten
treespec = helper(tree, leaves)
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/utils/_pytree.py", line 1016, in helper
subspecs = [helper(child, leaves) for child in children]
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/utils/_pytree.py", line 1016, in helper
subspecs = [helper(child, leaves) for child in children]
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/utils/_pytree.py", line 1007, in helper
if _is_leaf(node, is_leaf=is_leaf):
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/utils/_pytree.py", line 802, in _is_leaf
return (is_leaf is not None and is_leaf(tree)) or _get_node_type(
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/utils/_pytree.py", line 795, in _get_node_type
if _is_namedtuple_instance(tree):
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/utils/_pytree.py", line 786, in _is_namedtuple_instance
if len(bases) != 1 or bases[0] != tuple:
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_dynamo/polyfills/__init__.py", line 244, in cmp_ne
return not cmp_eq(a, b)
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_dynamo/polyfills/__init__.py", line 234, in cmp_eq
result = a.__eq__(b)
Set TORCH_LOGS="+dynamo" and TORCHDYNAMO_VERBOSE=1 for more information
To execute this test, run the following from the base repo dir:
python test/dynamo/test_decorators.py DecoratorTests.test_nonstrict_trace_pre_existing_custom_class
This message can be suppressed by setting PYTORCH_PRINT_REPRO_ON_FAILURE=0
```
</details>
Test file path: `dynamo/test_decorators.py`
cc @clee2000 @chauhang @penguinwu @voznesenskym @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @amjames
| true
|
2,886,289,057
|
Excessive memory usage during compilation start up for (atleast some) in place ops
|
amouldon
|
closed
|
[
"triaged",
"oncall: pt2"
] | 4
|
NONE
|
### 🐛 Describe the bug
When using torch.compile with this (no gradients/backwards involved):
```
class MyModule(nn.Module):
def __init__(self):
super().__init__()
self.accum_tensor = torch.zeros((64,2048, 2048), device='cuda')
def forward(self, x):
self.accum_tensor.mul_(torch.rand_like(self.accum_tensor)) #.add_() behaves the same
return self.accum_tensor
```
memory usage is successfully halved IF the compilation goes through, but I encounter OOM errors even when accum_tensor is fairly small. With 24 GB of VRAM I cannot even use the above size of 1 GB without OOMing on current 2.6 and nightly. It was significantly better on 2.3.1, where I could use size (256, 2048, 2048) before OOMing, but that is still not great considering eager lets me spend nearly half of my VRAM on the tensor itself. (Though for my use case, being able to use the sizes that worked in 2.3.1 would be adequate.)
I tested both .add_ and .mul_ and the results were the same. With .copy_, the threshold for OOMing rises to size (128, 2048, 2048) for me, and once again eager is much more forgiving.
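For completeness, this is roughly how the module is driven (a paraphrase of my script, not the exact code; the cudagraph frames in the traceback suggest cudagraphs are enabled, e.g. via `mode="reduce-overhead"`):
```python
import torch

# Rough driver sketch: MyModule is the class defined above.
model = MyModule()
compiled = torch.compile(model, mode="reduce-overhead")

x = torch.randn(8, device="cuda")  # forward() ignores x, so any tensor works
with torch.no_grad():
    out = compiled(x)  # the OOM fires during this first call, while Inductor
                       # autotunes the fused mul_/rand_like kernel
```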
### Error logs
```
def forward(self, x):
File "/home/amouldon/venvlinux1/lib/python3.10/site-packages/torch/_dynamo/eval_frame.py", line 745, in _fn
return fn(*args, **kwargs)
File "/home/amouldon/venvlinux1/lib/python3.10/site-packages/torch/_functorch/aot_autograd.py", line 1184, in forward
return compiled_fn(full_args)
File "/home/amouldon/venvlinux1/lib/python3.10/site-packages/torch/_functorch/_aot_autograd/runtime_wrappers.py", line 323, in runtime_wrapper
all_outs = call_func_at_runtime_with_args(
File "/home/amouldon/venvlinux1/lib/python3.10/site-packages/torch/_functorch/_aot_autograd/utils.py", line 126, in call_func_at_runtime_with_args
out = normalize_as_list(f(args))
File "/home/amouldon/venvlinux1/lib/python3.10/site-packages/torch/_functorch/_aot_autograd/runtime_wrappers.py", line 672, in inner_fn
outs = compiled_fn(args)
File "/home/amouldon/venvlinux1/lib/python3.10/site-packages/torch/_functorch/_aot_autograd/runtime_wrappers.py", line 490, in wrapper
return compiled_fn(runtime_args)
File "/home/amouldon/venvlinux1/lib/python3.10/site-packages/torch/_inductor/output_code.py", line 466, in __call__
return self.current_callable(inputs)
File "/home/amouldon/venvlinux1/lib/python3.10/site-packages/torch/_inductor/compile_fx.py", line 1208, in run
return compiled_fn(new_inputs)
File "/home/amouldon/venvlinux1/lib/python3.10/site-packages/torch/_inductor/cudagraph_trees.py", line 398, in deferred_cudagraphify
fn, out = cudagraphify(model, inputs, new_static_input_idxs, *args, **kwargs)
File "/home/amouldon/venvlinux1/lib/python3.10/site-packages/torch/_inductor/cudagraph_trees.py", line 428, in cudagraphify
return manager.add_function(
File "/home/amouldon/venvlinux1/lib/python3.10/site-packages/torch/_inductor/cudagraph_trees.py", line 2253, in add_function
return fn, fn(inputs)
File "/home/amouldon/venvlinux1/lib/python3.10/site-packages/torch/_inductor/cudagraph_trees.py", line 1947, in run
out = self._run(new_inputs, function_id)
File "/home/amouldon/venvlinux1/lib/python3.10/site-packages/torch/_inductor/cudagraph_trees.py", line 2055, in _run
out = self.run_eager(new_inputs, function_id)
File "/home/amouldon/venvlinux1/lib/python3.10/site-packages/torch/_inductor/cudagraph_trees.py", line 2219, in run_eager
return node.run(new_inputs)
File "/home/amouldon/venvlinux1/lib/python3.10/site-packages/torch/_inductor/cudagraph_trees.py", line 643, in run
out = self.wrapped_function.model(new_inputs)
File "/tmp/torchinductor_amouldon/bk/cbko7vp5b7w7xmc7kzxawhzx6nrzjei5e2eou5rcpbmanyx6s3at.py", line 99, in call
triton_poi_fused_add_rand_like_0.run(buf0, arg0_1, arg0_1, 0, 268435456, grid=grid(268435456), stream=stream0)
File "/home/amouldon/venvlinux1/lib/python3.10/site-packages/torch/_inductor/runtime/triton_heuristics.py", line 1034, in run
self.autotune_to_one_config(*args, grid=grid, **kwargs)
File "/home/amouldon/venvlinux1/lib/python3.10/site-packages/torch/_inductor/runtime/triton_heuristics.py", line 911, in autotune_to_one_config
timings = self.benchmark_all_configs(*args, **kwargs)
File "/home/amouldon/venvlinux1/lib/python3.10/site-packages/torch/_inductor/runtime/triton_heuristics.py", line 885, in benchmark_all_configs
timings = {
File "/home/amouldon/venvlinux1/lib/python3.10/site-packages/torch/_inductor/runtime/triton_heuristics.py", line 886, in <dictcomp>
launcher: self.bench(launcher, *args, **kwargs)
File "/home/amouldon/venvlinux1/lib/python3.10/site-packages/torch/_inductor/runtime/triton_heuristics.py", line 763, in bench
cpu_copies = self.copy_args_to_cpu_if_needed(*args, **kwargs)
File "/home/amouldon/venvlinux1/lib/python3.10/site-packages/torch/_inductor/runtime/triton_heuristics.py", line 822, in copy_args_to_cpu_if_needed
maybe_copy(self.fn.arg_names[i], arg)
File "/home/amouldon/venvlinux1/lib/python3.10/site-packages/torch/_inductor/runtime/triton_heuristics.py", line 809, in maybe_copy
cpu_arg = torch.empty_strided(
File "/home/amouldon/venvlinux1/lib/python3.10/site-packages/torch/utils/_device.py", line 104, in __torch_function__
return func(*args, **kwargs)
RuntimeError: CUDA error: out of memory
```
### Versions
PyTorch version: 2.6.0+cu126
Is debug build: False
CUDA used to build PyTorch: 12.6
ROCM used to build PyTorch: N/A
OS: Ubuntu 22.04.3 LTS (x86_64)
GCC version: (Ubuntu 11.4.0-1ubuntu1~22.04) 11.4.0
Clang version: Could not collect
CMake version: Could not collect
Libc version: glibc-2.35
Python version: 3.10.12 (main, Feb 4 2025, 14:57:36) [GCC 11.4.0] (64-bit runtime)
Python platform: Linux-5.15.167.4-microsoft-standard-WSL2-x86_64-with-glibc2.35
Is CUDA available: True
CUDA runtime version: Could not collect
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration:
GPU 0: NVIDIA GeForce RTX 3090
GPU 1: NVIDIA GeForce RTX 3090
Nvidia driver version: 560.94
cuDNN version: Could not collect
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 48 bits physical, 48 bits virtual
Byte Order: Little Endian
CPU(s): 12
On-line CPU(s) list: 0-11
Vendor ID: AuthenticAMD
Model name: AMD Ryzen 5 5600X 6-Core Processor
CPU family: 25
Model: 33
Thread(s) per core: 2
Core(s) per socket: 6
Socket(s): 1
Stepping: 0
BogoMIPS: 7399.97
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ht syscall nx mmxext fxsr_opt pdpe1gb rdtscp lm constant_tsc rep_good nopl tsc_reliable nonstop_tsc cpuid extd_apicid pni pclmulqdq ssse3 fma cx16 sse4_1 sse4_2 movbe popcnt aes xsave avx f16c rdrand hypervisor lahf_lm cmp_legacy cr8_legacy abm sse4a misalignsse 3dnowprefetch osvw topoext ibrs ibpb stibp
vmmcall fsgsbase bmi1 avx2 smep bmi2 erms rdseed adx smap clflushopt clwb sha_ni xsaveopt xsavec xgetbv1 xsaves clzero xsaveerptr arat umip vaes vpclmulqdq rdpid fsrm
Hypervisor vendor: Microsoft
Virtualization type: full
L1d cache: 192 KiB (6 instances)
L1i cache: 192 KiB (6 instances)
L2 cache: 3 MiB (6 instances)
L3 cache: 32 MiB (1 instance)
Vulnerability Gather data sampling: Not affected
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Not affected
Vulnerability Reg file data sampling: Not affected
Vulnerability Retbleed: Not affected
Vulnerability Spec rstack overflow: Mitigation; safe RET, no microcode
Vulnerability Spec store bypass: Vulnerable
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Retpolines; IBPB conditional; IBRS_FW; STIBP conditional; RSB filling; PBRSB-eIBRS Not affected; BHI Not affected
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Versions of relevant libraries:
[pip3] numpy==1.26.4
[pip3] nvidia-cublas-cu12==12.6.4.1
[pip3] nvidia-cuda-cupti-cu12==12.6.80
[pip3] nvidia-cuda-nvrtc-cu12==12.6.77
[pip3] nvidia-cuda-runtime-cu12==12.6.77
[pip3] nvidia-cudnn-cu12==9.5.1.17
[pip3] nvidia-cufft-cu12==11.3.0.4
[pip3] nvidia-curand-cu12==10.3.7.77
[pip3] nvidia-cusolver-cu12==11.7.1.2
[pip3] nvidia-cusparse-cu12==12.5.4.2
[pip3] nvidia-cusparselt-cu12==0.6.3
[pip3] nvidia-nccl-cu12==2.21.5
[pip3] nvidia-nvjitlink-cu12==12.6.85
[pip3] nvidia-nvtx-cu12==12.6.77
[pip3] pytorch-lightning==2.4.0
[pip3] pytorch-triton==3.2.0+git4b3bb1f8
[pip3] torch==2.6.0+cu126
[pip3] torchaudio==2.6.0+cu126
[pip3] torchmetrics==1.4.1
[pip3] torchtune==0.2.0
[pip3] torchvision==0.21.0+cu126
[pip3] torchviz==0.0.2
[pip3] triton==3.2.0
cc @chauhang @penguinwu
| true
|
2,886,279,974
|
Revert D70262395
|
wdvr
|
closed
|
[
"oncall: distributed",
"fb-exported",
"Merged",
"ciflow/trunk",
"release notes: distributed (c10d)",
"module: dynamo",
"ciflow/inductor",
"module: compiled autograd"
] | 7
|
CONTRIBUTOR
|
Summary:
This reverts #147804 due to internal revert.
---
This diff reverts D70262395
Reviewed By: RossMcKenzie
Differential Revision: D70318024
@diff-train-skip-merge
cc @H-Huang @awgu @kwen2501 @wanchaol @fegin @fduwjj @wz337 @wconstab @d4l3k @c-p-i-o @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @chauhang @amjames @xmfan
| true
|
2,886,279,237
|
[Don't merge]Upgrade submodule oneDNN to v3.7 (#147498)(Z7)
|
xuhancn
|
open
|
[
"module: mkldnn",
"open source",
"Stale",
"ciflow/binaries",
"ciflow/trunk",
"topic: not user facing",
"module: inductor",
"ciflow/inductor",
"ci-no-td",
"ciflow/linux-aarch64"
] | 2
|
COLLABORATOR
|
Fixes #ISSUE_NUMBER
cc @gujinghui @PenghuiCheng @XiaobingSuper @jianyuh @jgong5 @mingfeima @sanchitintel @ashokei @jingxu10 @min-jean-cho @yanbing-j @Guobing-Chen @Xia-Weiwen @snadampal @voznesenskym @penguinwu @EikanWang @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
| true
|
2,886,248,356
|
Specifying device_id in init_process_group causes tensor parallel + pipeline parallel to fail
|
seanxwzhang
|
open
|
[
"oncall: distributed",
"triaged"
] | 2
|
NONE
|
### 🐛 Describe the bug
When specifying `device_id` in `init_process_group`, my distributed training script (which uses tensor parallelism and pipeline parallelism) either hangs indefinitely or fails without a meaningful error message.
```python
import os
from typing import Optional
import torch
import torch.nn.functional as F
import gc
import torch.distributed as dist
import torch.distributed.nn.functional as dist_F
from torch.distributed.pipelining.stage import PipelineStage
from torch.distributed.pipelining.microbatch import TensorChunkSpec, sum_reducer, _Replicate
from torch.distributed.pipelining.schedules import ScheduleGPipe
from torch.distributed.device_mesh import init_device_mesh
class MyLinear(torch.nn.Module):
def __init__(self, group: dist.ProcessGroup):
super().__init__()
self.linear = torch.nn.Linear(10, 10)
self.group = group
def forward(self, x):
o = self.linear(x)
o = dist_F.all_reduce(o, group=self.group)
return o
def main():
local_rank = int(os.environ.get("LOCAL_RANK", "0"))
# will hang if specifying device_id
dist.init_process_group(backend='nccl', device_id=torch.device(f'cuda:{local_rank}'))
# this is completely fine
# dist.init_process_group(backend='nccl')
rank = dist.get_rank()
world_size = dist.get_world_size()
pp_size = 2
tp_size = world_size // pp_size
device_mesh = init_device_mesh("cuda", (1, pp_size, tp_size), mesh_dim_names=("dp", "pp", "tp"))
torch.cuda.set_device(rank) # world_size is smaller than 8, so this works
pp_mesh = device_mesh["pp"]
tp_mesh = device_mesh["tp"]
tp_rank = tp_mesh.get_rank() % tp_size
pp_rank = pp_mesh.get_rank() // tp_size
print(f"rank {rank} tp_rank {tp_rank} pp_rank {pp_rank}, pp_size {pp_size}, tp_size {tp_size}")
with torch.device("cuda"):
model = MyLinear(tp_mesh.get_group("tp"))
stage = PipelineStage(
model,
stage_index=pp_rank,
num_stages=pp_size,
device=torch.device(f"cuda:{rank}"),
group=pp_mesh.get_group("pp")
)
schedule = ScheduleGPipe(
stage,
1,
loss_fn=lambda x, y: ((x - y)**2).sum(),
)
torch.manual_seed(52)
x = torch.randn((8, 10), device='cuda')
y = torch.randn((8, 10), device='cuda')
targets, losses = (y, []) if pp_rank == pp_size - 1 else (None, None)
if pp_rank == 0:
schedule.step(x, target=targets, losses=losses)
else:
schedule.step(target=targets, losses=losses)
print(f"rank {rank} losses {losses}")
dist.destroy_process_group()
if __name__ == "__main__":
main()
```
Running the above script with the following launch command on a node with 8 H100 GPUs
```
torchrun --nnodes=1 --nproc-per-node=4 test.py
```
gives
```
W0228 05:31:00.634000 326201 site-packages/torch/distributed/run.py:792]
W0228 05:31:00.634000 326201 site-packages/torch/distributed/run.py:792] *****************************************
W0228 05:31:00.634000 326201 site-packages/torch/distributed/run.py:792] Setting OMP_NUM_THREADS environment variable for each process to be 1 in default, to avoid your system being overloaded, please further tune the variable for optimal performance in your application as needed.
W0228 05:31:00.634000 326201 site-packages/torch/distributed/run.py:792] *****************************************
rank 2 tp_rank 0 pp_rank 1, pp_size 2, tp_size 2
rank 3 tp_rank 1 pp_rank 1, pp_size 2, tp_size 2
rank 1 tp_rank 1 pp_rank 0, pp_size 2, tp_size 2
rank 0 tp_rank 0 pp_rank 0, pp_size 2, tp_size 2
```
And it hangs indefinitely. Note that as soon as `device_id` is not specified, everything works. I'm not sure where the issue is, but this behavior doesn't seem expected.
Turning on `NCCL_DEBUG=INFO` gives [this](https://gist.github.com/seanxwzhang/a73e1c834e8aa5b32b390ffea4dacfc6)
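For now the workaround I use is simply to drop `device_id` (a sketch trimmed from the script above; `pp_size`/`tp_size` are defined there):
```python
import os

import torch
import torch.distributed as dist
from torch.distributed.device_mesh import init_device_mesh

# Workaround sketch: skip device_id so the default process group is not
# eagerly bound to a single device, and pin the CUDA device manually before
# building the mesh.
local_rank = int(os.environ.get("LOCAL_RANK", "0"))
torch.cuda.set_device(local_rank)
dist.init_process_group(backend="nccl")  # no device_id -> no hang for me
device_mesh = init_device_mesh(
    "cuda", (1, pp_size, tp_size), mesh_dim_names=("dp", "pp", "tp")
)
```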
### Versions
Collecting environment information...
PyTorch version: 2.6.0+cu124
Is debug build: False
CUDA used to build PyTorch: 12.4
ROCM used to build PyTorch: N/A
OS: Ubuntu 22.04.4 LTS (x86_64)
GCC version: (Ubuntu 11.4.0-1ubuntu1~22.04) 11.4.0
Clang version: Could not collect
CMake version: version 3.30.5
Libc version: glibc-2.35
Python version: 3.11.10 | packaged by conda-forge | (main, Oct 16 2024, 01:27:36) [GCC 13.3.0] (64-bit runtime)
Python platform: Linux-5.15.0-131-generic-x86_64-with-glibc2.35
Is CUDA available: True
CUDA runtime version: 12.4.131
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration:
GPU 0: NVIDIA H100 80GB HBM3
GPU 1: NVIDIA H100 80GB HBM3
GPU 2: NVIDIA H100 80GB HBM3
GPU 3: NVIDIA H100 80GB HBM3
GPU 4: NVIDIA H100 80GB HBM3
GPU 5: NVIDIA H100 80GB HBM3
GPU 6: NVIDIA H100 80GB HBM3
GPU 7: NVIDIA H100 80GB HBM3
Nvidia driver version: 550.144.03
cuDNN version: Could not collect
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 46 bits physical, 57 bits virtual
Byte Order: Little Endian
CPU(s): 64
On-line CPU(s) list: 0-63
Vendor ID: GenuineIntel
Model name: Intel(R) Xeon(R) Platinum 8462Y+
CPU family: 6
Model: 143
Thread(s) per core: 1
Core(s) per socket: 32
Socket(s): 2
Stepping: 8
Frequency boost: enabled
CPU max MHz: 2801.0000
CPU min MHz: 800.0000
BogoMIPS: 5600.00
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc art arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc cpuid aperfmperf tsc_known_freq pni pclmulqdq dtes64 ds_cpl vmx smx est tm2 ssse3 sdbg fma cx16 xtpr pdcm pcid dca sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm 3dnowprefetch cpuid_fault epb cat_l3 cat_l2 cdp_l3 invpcid_single intel_ppin cdp_l2 ssbd mba ibrs ibpb stibp ibrs_enhanced tpr_shadow vnmi flexpriority ept vpid ept_ad fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid cqm rdt_a avx512f avx512dq rdseed adx smap avx512ifma clflushopt clwb intel_pt avx512cd sha_ni avx512bw avx512vl xsaveopt xsavec xgetbv1 xsaves cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local split_lock_detect avx_vnni avx512_bf16 wbnoinvd dtherm ida arat pln pts avx512vbmi umip pku ospke waitpkg avx512_vbmi2 gfni vaes vpclmulqdq avx512_vnni avx512_bitalg tme avx512_vpopcntdq la57 rdpid bus_lock_detect cldemote movdiri movdir64b enqcmd fsrm md_clear serialize tsxldtrk pconfig arch_lbr amx_bf16 avx512_fp16 amx_tile amx_int8 flush_l1d arch_capabilities
Virtualization: VT-x
L1d cache: 3 MiB (64 instances)
L1i cache: 2 MiB (64 instances)
L2 cache: 128 MiB (64 instances)
L3 cache: 120 MiB (2 instances)
NUMA node(s): 2
NUMA node0 CPU(s): 0-31
NUMA node1 CPU(s): 32-63
Vulnerability Gather data sampling: Not affected
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Not affected
Vulnerability Reg file data sampling: Not affected
Vulnerability Retbleed: Not affected
Vulnerability Spec rstack overflow: Not affected
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl and seccomp
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Enhanced / Automatic IBRS; IBPB conditional; RSB filling; PBRSB-eIBRS SW sequence; BHI BHI_DIS_S
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Versions of relevant libraries:
[pip3] numpy==2.1.2
[pip3] nvidia-cublas-cu12==12.4.5.8
[pip3] nvidia-cuda-cupti-cu12==12.4.127
[pip3] nvidia-cuda-nvrtc-cu12==12.4.127
[pip3] nvidia-cuda-runtime-cu12==12.4.127
[pip3] nvidia-cudnn-cu12==9.1.0.70
[pip3] nvidia-cufft-cu12==11.2.1.3
[pip3] nvidia-curand-cu12==10.3.5.147
[pip3] nvidia-cusolver-cu12==11.6.1.9
[pip3] nvidia-cusparse-cu12==12.3.1.170
[pip3] nvidia-cusparselt-cu12==0.6.2
[pip3] nvidia-nccl-cu12==2.21.5
[pip3] nvidia-nvjitlink-cu12==12.4.127
[pip3] nvidia-nvtx-cu12==12.4.127
[pip3] optree==0.13.0
[pip3] torch==2.6.0
[pip3] torchaudio==2.6.0
[pip3] torchdata==0.11.0
[pip3] torchelastic==0.2.2
[pip3] torchvision==0.21.0
[pip3] triton==3.2.0
[conda] numpy 2.1.2 py311h71ddf71_0 conda-forge
[conda] nvidia-cublas-cu12 12.4.5.8 pypi_0 pypi
[conda] nvidia-cuda-cupti-cu12 12.4.127 pypi_0 pypi
[conda] nvidia-cuda-nvrtc-cu12 12.4.127 pypi_0 pypi
[conda] nvidia-cuda-runtime-cu12 12.4.127 pypi_0 pypi
[conda] nvidia-cudnn-cu12 9.1.0.70 pypi_0 pypi
[conda] nvidia-cufft-cu12 11.2.1.3 pypi_0 pypi
[conda] nvidia-curand-cu12 10.3.5.147 pypi_0 pypi
[conda] nvidia-cusolver-cu12 11.6.1.9 pypi_0 pypi
[conda] nvidia-cusparse-cu12 12.3.1.170 pypi_0 pypi
[conda] nvidia-cusparselt-cu12 0.6.2 pypi_0 pypi
[conda] nvidia-nccl-cu12 2.21.5 pypi_0 pypi
[conda] nvidia-nvjitlink-cu12 12.4.127 pypi_0 pypi
[conda] nvidia-nvtx-cu12 12.4.127 pypi_0 pypi
[conda] optree 0.13.0 pypi_0 pypi
[conda] torch 2.6.0 pypi_0 pypi
[conda] torchaudio 2.6.0 pypi_0 pypi
[conda] torchdata 0.11.0 pypi_0 pypi
[conda] torchelastic 0.2.2 pypi_0 pypi
[conda] torchvision 0.21.0 pypi_0 pypi
[conda] triton 3.2.0 pypi_0 pypi
cc @H-Huang @awgu @kwen2501 @wanchaol @fegin @fduwjj @wz337 @wconstab @d4l3k @c-p-i-o
| true
|
2,886,225,041
|
[inductor][cutlass] Environment variables for allow/denylist
|
bertmaher
|
closed
|
[
"Merged",
"ciflow/trunk",
"topic: not user facing",
"module: inductor",
"ciflow/inductor"
] | 3
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #148161
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
| true
|
2,886,194,107
|
draft
|
pianpwk
|
open
|
[
"Stale",
"release notes: fx",
"fx",
"module: inductor",
"ciflow/inductor"
] | 2
|
CONTRIBUTOR
|
Fixes #ISSUE_NUMBER
cc @ezyang @SherlockNoMad @EikanWang @jgong5 @wenzhe-nrv @voznesenskym @penguinwu @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
| true
|
2,886,190,695
|
Remove unnecessary tensor clone
|
cyyever
|
closed
|
[
"oncall: distributed",
"oncall: jit",
"open source",
"Merged",
"NNC",
"ciflow/trunk",
"release notes: distributed (c10d)",
"ciflow/mps"
] | 7
|
COLLABORATOR
|
Fixes #ISSUE_NUMBER
cc @H-Huang @awgu @kwen2501 @wanchaol @fegin @fduwjj @wz337 @wconstab @d4l3k @c-p-i-o @EikanWang @jgong5 @wenzhe-nrv @sanchitintel
| true
|
2,886,139,258
|
Replace unimplemented with unimplemented_v2 for dynamo
|
FFFrog
|
closed
|
[
"open source",
"Merged",
"ciflow/trunk",
"topic: not user facing",
"module: dynamo",
"ciflow/inductor"
] | 4
|
COLLABORATOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #148158
torch/_dynamo/variables/constant.py
https://github.com/pytorch/pytorch/issues/147913
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @chauhang @amjames
| true
|
2,886,135,400
|
Use Python 3.9 typing
|
cyyever
|
closed
|
[
"oncall: distributed",
"triaged",
"open source",
"Merged",
"ciflow/trunk",
"release notes: dataloader",
"topic: not user facing",
"ciflow/inductor",
"release notes: distributed (checkpoint)",
"oncall: distributed checkpointing"
] | 3
|
COLLABORATOR
|
Fixes #ISSUE_NUMBER
cc @H-Huang @awgu @kwen2501 @wanchaol @fegin @fduwjj @wz337 @wconstab @d4l3k @c-p-i-o @LucasLLC @MeetVadakkanchery @mhorowitz @pradeepfn @ekr0
| true
|
2,886,114,938
|
[MPS][Complex] Conjugations are broken
|
KexinFeng
|
closed
|
[
"triaged",
"module: complex",
"module: correctness (silent)",
"module: mps"
] | 3
|
NONE
|
## Intro
Basically, this issue is about an incorrect result when computing with the complex unit `1j`. It may be a trivial bug, but it **completely breaks** complex number support on MPS. It would be great to have it solved.
### 🐛 Describe the bug
The CPU result is real and correct, while the Apple MPS result is complex, which is wrong.
```python
import torch
# Create a random matrix on MPS
R = torch.tensor([[ 0.6047+1.1093j]], device='mps')
R_cpu = R.to("cpu") # Copy to CPU
# Compute R^H R on both devices
mps_result = R.T.conj() @ R
cpu_result = R_cpu.T.conj() @ R_cpu
print("MPS Result:", mps_result)
print("CPU Result:", cpu_result)
torch.testing.assert_close(mps_result.to('cpu'), cpu_result)
```
# Possible reason
The following investigation shows that conjugation of the complex unit `1j` is not handled correctly: on MPS, `conj(1j) * 1j` evaluates to `-1` instead of `1`, as if the conjugation were ignored.
```python
import torch

# Create a random matrix on MPS
R = torch.tensor([[ 1j]], device='mps')
R_cpu = R.to("cpu") # Copy to CPU
# Compute R^H R on both devices
mps_result = R.T.conj() @ R
cpu_result = R_cpu.T.conj() @ R_cpu
print("MPS Result:", mps_result)
print("CPU Result:", cpu_result)
```
which yields
```
MPS Result: tensor([[-1.+0.j]], device='mps:0')
CPU Result: tensor([[1.+0.j]])
```
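A minimal check that isolates the conjugation itself (a sketch written for this report, not taken from it):
```python
import torch

a = torch.tensor([1j], device="mps")
print(a.conj().cpu())        # should be -1j
print((a.conj() * a).cpu())  # should be 1+0j; the report above shows -1 for the equivalent 1x1 matmul on MPS
```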
### Versions
```
Collecting environment information...
PyTorch version: 2.6.0
Is debug build: False
CUDA used to build PyTorch: None
ROCM used to build PyTorch: N/A
OS: macOS 15.3.1 (arm64)
GCC version: Could not collect
Clang version: 16.0.0 (clang-1600.0.26.6)
CMake version: Could not collect
Libc version: N/A
Python version: 3.9.6 (default, Nov 11 2024, 03:15:38) [Clang 16.0.0 (clang-1600.0.26.6)] (64-bit runtime)
Python platform: macOS-15.3.1-arm64-arm-64bit
Is CUDA available: False
CUDA runtime version: No CUDA
CUDA_MODULE_LOADING set to: N/A
GPU models and configuration: No CUDA
Nvidia driver version: No CUDA
cuDNN version: No CUDA
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Apple M4 Pro
Versions of relevant libraries:
[pip3] numpy==2.0.2
[pip3] torch==2.6.0
[pip3] torchaudio==2.6.0
[pip3] torchvision==0.21.0
[conda] Could not collect
```
cc @ezyang @anjali411 @dylanbespalko @mruberry @nikitaved @amjames @kulinseth @albanD @malfet @DenisVieriu97 @jhavukainen
| true
|
2,886,089,665
|
[Break XPU][Inductor UT] Avoid custom op registration conflicts in test_auto_functionalize.py.
|
etaf
|
closed
|
[
"open source",
"Merged",
"topic: not user facing",
"module: inductor",
"ciflow/inductor"
] | 2
|
COLLABORATOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #147727
* #148178
* __->__ #148155
Fix #148148
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
| true
|
2,886,085,960
|
[MPS] Implement linear1d as shader
|
malfet
|
closed
|
[
"Merged",
"topic: bug fixes",
"topic: performance",
"release notes: mps",
"ciflow/mps"
] | 3
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #148187
* __->__ #148154
And get rid of the MPSGraph call, as for some reason the implementation via the MPSGraph
API is 100x+ slower than the Metal shader, at least according to
the following benchmark
```python
import torch
import time
import subprocess
def benchmark(device, dtype):
# Create example inputs
x = torch.testing.make_tensor(3, 5, 65536, device=device, dtype=dtype)
sf = .5
# Check output
y = torch.nn.functional.interpolate(x, scale_factor=sf, mode="linear")
z = torch.nn.functional.interpolate(x.cpu(), scale_factor=sf, mode="linear")
outputs_match = torch.allclose(y.cpu(), z)
if not outputs_match:
atol = (y.cpu() - z).abs().max()
rtol = ((y.cpu() - z)[z!=0]/z[z!=0]).abs().max()
print(f"atol={atol} rtol={rtol}")
# Measure time manually
start_time = time.time() * 1000
for _ in range(1000):
y = torch.nn.functional.interpolate(x, scale_factor=sf, mode="linear")
torch.mps.synchronize
end_time = time.time() * 1000
manual_delta = (end_time - start_time)
average_time = f"{manual_delta:6.1f}"
return "True " if outputs_match else "False", average_time
outputs_match_list = []
average_time_list = []
for device in ["mps", "cpu"]:
for dtype in [torch.float32, torch.float16, torch.bfloat16]:
outputs_match, average_time = benchmark(device, dtype)
outputs_match_list.append(str(outputs_match))
average_time_list.append(average_time)
brand_string = subprocess.check_output(['sysctl', '-n', 'machdep.cpu.brand_string']).decode("utf-8").strip()
print(f"\nBenchmarking Results (collected on {brand_string}):")
print("-"*40)
print("Device : MPS | CPU")
print("Dtype : FP32 | FP16 | BF16 | FP32 | FP16 | BF16 ")
print(f"Outputs Match : ", " | ".join(outputs_match_list))
print(f"Average Time (us) :", " |".join(average_time_list))
```
Benchmark results after the change
```
Benchmarking Results (collected on Apple M2 Pro):
----------------------------------------
Device : MPS | CPU
Dtype : FP32 | FP16 | BF16 | FP32 | FP16 | BF16
Outputs Match : True | True | True | True | True | True
Average Time (us) : 2.5 | 2.1 | 2.2 | 161.4 | 115.0 | 161.1
```
And before the change
```
Benchmarking Results (collected on Apple M2 Pro):
----------------------------------------
Device : MPS | CPU
Dtype : FP32 | FP16 | BF16 | FP32 | FP16 | BF16
Outputs Match : True | True | True | True | True | True
Average Time (us) : 354.0 | 336.0 | 332.4 | 145.5 | 114.7 | 148.3
```
Fixes https://github.com/pytorch/pytorch/issues/144245
| true
|
2,886,075,789
|
[inductor] [cuda] [MultiheadAttention] `nn.MultiheadAttention-torch.reciprocal` outputs a big difference with eager
|
shaoyuyoung
|
closed
|
[
"triaged",
"oncall: pt2",
"module: inductor"
] | 3
|
CONTRIBUTOR
|
### 🐛 Describe the bug
**symptom description**: the CUDA backend produces a much larger difference from eager than the CPU backend does.
**device backend**: only triton
**repro**
```python
import torch
import torch.nn as nn
import torch.nn.functional as F
from torch._inductor import config
config.fallback_random = True
torch.set_grad_enabled(False)
torch.manual_seed(0)
class Model(torch.nn.Module):
def __init__(self):
super().__init__()
self.attention = torch.nn.MultiheadAttention(embed_dim=512, num_heads=8)
def forward(self, x):
x, _ = self.attention(x, x, x)
x = torch.reciprocal(x)
return x
model = Model().eval().cuda()
x = torch.randn(10, 1, 512).cuda()
inputs = [x]
def run_test(model, inputs, backend):
if backend != "eager":
model = torch.compile(model, backend=backend)
torch.manual_seed(0)
output = model(*inputs)
return output
output = run_test(model, inputs, 'eager')
c_output = run_test(model, inputs, 'inductor')
print(torch.allclose(output, c_output, 1e-3, 1e-3, equal_nan=True))
print(torch.max(torch.abs(output - c_output)))
```
### Error logs
**triton**
```
False
tensor(475.4219, device='cuda:0')
```
**CPP**
```
True
tensor(16.4922)
```
### Versions
nightly 20250225
cc @chauhang @penguinwu @voznesenskym @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @amjames @aakhundov
| true
|
2,886,053,122
|
[inductor] `F.fractional_max_pool2d` throws `LoweringException: ZeroDivisionError: division by zero` while eager passes the check
|
shaoyuyoung
|
closed
|
[
"triaged",
"oncall: pt2",
"module: inductor"
] | 0
|
CONTRIBUTOR
|
### 🐛 Describe the bug
**symptom**: `F.fractional_max_pool2d` throws `LoweringException: ZeroDivisionError: division by zero` while eager passes the check.
**device backend**: both triton and CPP
**repro**
```python
import torch
import torch.nn as nn
import torch.nn.functional as F
from torch._inductor import config
config.fallback_random = True
torch.set_grad_enabled(False)
class Model(torch.nn.Module):
def __init__(self):
super(Model, self).__init__()
def forward(self, x):
x = F.fractional_max_pool2d(x, kernel_size=3, output_size=(1, 1))
return x
model = Model()
x = torch.randn(1, 1, 6, 6).cuda()
inputs = [x]
def run_test(model, inputs, backend):
torch.manual_seed(0)
if backend != "eager":
model = torch.compile(model, backend=backend)
try:
output = model(*inputs)
print(output)
print(f"succeed on {backend}")
except Exception as e:
print(e)
run_test(model, inputs, 'eager')
run_test(model, inputs, 'inductor')
```
### Error logs
```
tensor([[[[1.4218]]]], device='cuda:0')
succeed on eager
LoweringException: ZeroDivisionError: division by zero
target: aten.fractional_max_pool2d.default
args[0]: TensorBox(StorageBox(
InputBuffer(name='arg0_1', layout=FixedLayout('cuda:0', torch.float32, size=[1, 1, 6, 6], stride=[36, 36, 6, 1]))
))
args[1]: [3, 3]
args[2]: [1, 1]
args[3]: TensorBox(StorageBox(
MultiOutput(
python_kernel_name=None,
name=buf1,
layout=FixedLayout('cuda:0', torch.float32, size=[1, 1, 2], stride=[2, 2, 1]),
inputs=[FallbackKernel(
python_kernel_name='torch.ops.aten.rand.default',
name=buf0,
layout=MultiOutputLayout(device=device(type='cuda', index=0)),
inputs=[],
constant_args=(1, 1, 2, torch.float32, device(type='cuda', index=0), False),
kwargs={},
output_view=None,
python_kernel_name=torch.ops.aten.rand.default,
cpp_kernel_name=None,
ordered_kwargs_for_cpp_kernel=['dtype', 'layout', 'device', 'pin_memory'],
op_overload=aten.rand.default,
arg_properties=[{'name': 'size', 'type': List[int], 'default_value': None}],
kwarg_properties=None,
unbacked_bindings=None,
mutation_outputs=[],
origin_node=rand,
origins=OrderedSet([rand])
)],
constant_args=(),
kwargs={},
output_view=None,
python_kernel_name=None,
cpp_kernel_name=None,
ordered_kwargs_for_cpp_kernel=(),
op_overload=None,
arg_properties=[{}],
kwarg_properties=None,
unbacked_bindings={},
mutation_outputs=[],
origin_node=rand,
origins=OrderedSet([rand])
)
))
```
### Versions
nightly 20250225
cc @chauhang @penguinwu @voznesenskym @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @amjames @aakhundov
| true
|
2,886,030,801
|
[pytree][fwAD] make `UnpackedDualTensor` a true namedtuple
|
XuehaiPan
|
closed
|
[
"open source",
"ciflow/trunk",
"topic: not user facing"
] | 1
|
COLLABORATOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #148151
* #113258
* #113257
* #147880
| true
|
2,886,014,767
|
[pytorch elastic] [RHEL] multiple processes call to dist.destroy_process_group() cause an RuntimeError: CUDA error: CUDA-capable device(s) is/are busy or unavailable
|
bbelgodere
|
open
|
[
"oncall: distributed",
"triaged"
] | 1
|
NONE
|
### 🐛 Describe the bug
While working on another piece of code, we've run into an issue that causes a `RuntimeError: CUDA error: CUDA-capable device(s) is/are busy or unavailable` at the end of a program when `dist.destroy_process_group()` is called by multiple processes.
Whichever rank calls it first completes without issue, but the processes that call it afterwards hit the CUDA error. I was able to replicate the bug using the [https://pytorch.org/tutorials/intermediate/ddp_tutorial.html#initialize-ddp-with-torch-distributed-run-torchrun](https://pytorch.org/tutorials/intermediate/ddp_tutorial.html#initialize-ddp-with-torch-distributed-run-torchrun) example.
From talking to another developer, this seems to run fine on Ubuntu, but we are seeing this error on RHEL 9.4.
I modified the toy example to add a print statement after `dist.destroy_process_group()` to highlight the issue. From the log below, you can see that only rank 0 printed `Finished destroying process group basic on rank 0.`
```
import os
import torch
import torch.distributed as dist
import torch.nn as nn
import torch.optim as optim
from torch.nn.parallel import DistributedDataParallel as DDP
class ToyModel(nn.Module):
def __init__(self):
super(ToyModel, self).__init__()
self.net1 = nn.Linear(10, 10)
self.relu = nn.ReLU()
self.net2 = nn.Linear(10, 5)
def forward(self, x):
return self.net2(self.relu(self.net1(x)))
def demo_basic():
torch.cuda.set_device(int(os.environ["LOCAL_RANK"]))
dist.init_process_group("nccl")
rank = dist.get_rank()
print(f"Start running basic DDP example on rank {rank}.")
# create model and move it to GPU with id rank
device_id = rank % torch.cuda.device_count()
model = ToyModel().to(device_id)
ddp_model = DDP(model, device_ids=[device_id])
loss_fn = nn.MSELoss()
optimizer = optim.SGD(ddp_model.parameters(), lr=0.001)
optimizer.zero_grad()
outputs = ddp_model(torch.randn(20, 10))
labels = torch.randn(20, 5).to(device_id)
loss_fn(outputs, labels).backward()
optimizer.step()
print(f"Finished running basic DDP example on rank {rank}.")
dist.destroy_process_group()
print(f"Finished destroying process group basic on rank {rank}. ")
if __name__ == "__main__":
demo_basic()
```
## Output
```
torchrun --nproc_per_node=8 --rdzv_id=100 --rdzv_backend=c10d --rdzv_endpoint=$HOSTNAME:29400 elastic_ddp.py
W0228 02:09:36.142000 839387 site-packages/torch/distributed/run.py:792]
W0228 02:09:36.142000 839387 site-packages/torch/distributed/run.py:792] *****************************************
W0228 02:09:36.142000 839387 site-packages/torch/distributed/run.py:792] Setting OMP_NUM_THREADS environment variable for each process to be 1 in default, to avoid your system being overloaded, please further tune the variable for optimal performance in your application as needed.
W0228 02:09:36.142000 839387 site-packages/torch/distributed/run.py:792] *****************************************
Start running basic DDP example on rank 0.
Start running basic DDP example on rank 1.
Start running basic DDP example on rank 7.
Start running basic DDP example on rank 3.
Start running basic DDP example on rank 2.
Start running basic DDP example on rank 6.
Start running basic DDP example on rank 5.
Start running basic DDP example on rank 4.
Finished running basic DDP example on rank 1.
Finished running basic DDP example on rank 4.Finished running basic DDP example on rank 7.
Finished running basic DDP example on rank 3.Finished running basic DDP example on rank 6.
Finished running basic DDP example on rank 2.Finished running basic DDP example on rank 5.
Finished running basic DDP example on rank 0.
[rank5]: Traceback (most recent call last):
[rank5]: File "/u/bmbelgod/elastic_ddp.py", line 42, in <module>
[rank5]: demo_basic()
[rank5]: File "/u/bmbelgod/elastic_ddp.py", line 38, in demo_basic
[rank5]: dist.destroy_process_group()
[rank5]: File "/u/bmbelgod/.conda/envs/nvidia-resiliency-ext/lib/python3.11/site-packages/torch/distributed/distributed_c10d.py", line 2146, in destroy_process_group
[rank5]: _shutdown_backend(pg_to_shutdown)
[rank5]: File "/u/bmbelgod/.conda/envs/nvidia-resiliency-ext/lib/python3.11/site-packages/torch/distributed/distributed_c10d.py", line 1815, in _shutdown_backend
[rank5]: backend._shutdown()
[rank5]: torch.distributed.DistBackendError: NCCL error in: /pytorch/torch/csrc/distributed/c10d/NCCLUtils.cpp:133, unhandled cuda error (run with NCCL_DEBUG=INFO for details), NCCL version 2.21.5
[rank5]: ncclUnhandledCudaError: Call to CUDA function failed.
[rank5]: Last error:
[rank5]: Cuda failure 'CUDA-capable device(s) is/are busy or unavailable'
[rank1]: Traceback (most recent call last):
[rank1]: File "/u/bmbelgod/elastic_ddp.py", line 42, in <module>
[rank1]: demo_basic()
[rank1]: File "/u/bmbelgod/elastic_ddp.py", line 38, in demo_basic
[rank1]: dist.destroy_process_group()
[rank1]: File "/u/bmbelgod/.conda/envs/nvidia-resiliency-ext/lib/python3.11/site-packages/torch/distributed/distributed_c10d.py", line 2146, in destroy_process_group
[rank1]: _shutdown_backend(pg_to_shutdown)
[rank1]: File "/u/bmbelgod/.conda/envs/nvidia-resiliency-ext/lib/python3.11/site-packages/torch/distributed/distributed_c10d.py", line 1815, in _shutdown_backend
[rank1]: backend._shutdown()
[rank1]: torch.distributed.DistBackendError: NCCL error in: /pytorch/torch/csrc/distributed/c10d/NCCLUtils.cpp:133, unhandled cuda error (run with NCCL_DEBUG=INFO for details), NCCL version 2.21.5
[rank1]: ncclUnhandledCudaError: Call to CUDA function failed.
[rank1]: Last error:
[rank1]: Cuda failure 'CUDA-capable device(s) is/are busy or unavailable'
[rank7]: Traceback (most recent call last):
[rank7]: File "/u/bmbelgod/elastic_ddp.py", line 42, in <module>
[rank7]: demo_basic()
[rank7]: File "/u/bmbelgod/elastic_ddp.py", line 38, in demo_basic
[rank7]: dist.destroy_process_group()
[rank7]: File "/u/bmbelgod/.conda/envs/nvidia-resiliency-ext/lib/python3.11/site-packages/torch/distributed/distributed_c10d.py", line 2146, in destroy_process_group
[rank7]: _shutdown_backend(pg_to_shutdown)
[rank7]: File "/u/bmbelgod/.conda/envs/nvidia-resiliency-ext/lib/python3.11/site-packages/torch/distributed/distributed_c10d.py", line 1815, in _shutdown_backend
[rank7]: backend._shutdown()
[rank7]: torch.distributed.DistBackendError: NCCL error in: /pytorch/torch/csrc/distributed/c10d/NCCLUtils.cpp:133, unhandled cuda error (run with NCCL_DEBUG=INFO for details), NCCL version 2.21.5
[rank7]: ncclUnhandledCudaError: Call to CUDA function failed.
[rank7]: Last error:
[rank7]: Cuda failure 'CUDA-capable device(s) is/are busy or unavailable'
[rank3]: Traceback (most recent call last):
[rank3]: File "/u/bmbelgod/elastic_ddp.py", line 42, in <module>
[rank3]: demo_basic()
[rank3]: File "/u/bmbelgod/elastic_ddp.py", line 38, in demo_basic
[rank3]: dist.destroy_process_group()
[rank3]: File "/u/bmbelgod/.conda/envs/nvidia-resiliency-ext/lib/python3.11/site-packages/torch/distributed/distributed_c10d.py", line 2146, in destroy_process_group
[rank3]: _shutdown_backend(pg_to_shutdown)
[rank3]: File "/u/bmbelgod/.conda/envs/nvidia-resiliency-ext/lib/python3.11/site-packages/torch/distributed/distributed_c10d.py", line 1815, in _shutdown_backend
[rank3]: backend._shutdown()
[rank3]: torch.distributed.DistBackendError: NCCL error in: /pytorch/torch/csrc/distributed/c10d/NCCLUtils.cpp:133, unhandled cuda error (run with NCCL_DEBUG=INFO for details), NCCL version 2.21.5
[rank3]: ncclUnhandledCudaError: Call to CUDA function failed.
[rank3]: Last error:
[rank3]: Cuda failure 'CUDA-capable device(s) is/are busy or unavailable'
[rank2]: Traceback (most recent call last):
[rank2]: File "/u/bmbelgod/elastic_ddp.py", line 42, in <module>
[rank2]: demo_basic()
[rank2]: File "/u/bmbelgod/elastic_ddp.py", line 38, in demo_basic
[rank2]: dist.destroy_process_group()
[rank2]: File "/u/bmbelgod/.conda/envs/nvidia-resiliency-ext/lib/python3.11/site-packages/torch/distributed/distributed_c10d.py", line 2146, in destroy_process_group
[rank2]: _shutdown_backend(pg_to_shutdown)
[rank2]: File "/u/bmbelgod/.conda/envs/nvidia-resiliency-ext/lib/python3.11/site-packages/torch/distributed/distributed_c10d.py", line 1815, in _shutdown_backend
[rank2]: backend._shutdown()
[rank2]: torch.distributed.DistBackendError: NCCL error in: /pytorch/torch/csrc/distributed/c10d/NCCLUtils.cpp:133, unhandled cuda error (run with NCCL_DEBUG=INFO for details), NCCL version 2.21.5
[rank2]: ncclUnhandledCudaError: Call to CUDA function failed.
[rank2]: Last error:
[rank2]: Cuda failure 'CUDA-capable device(s) is/are busy or unavailable'
[rank6]: Traceback (most recent call last):
[rank6]: File "/u/bmbelgod/elastic_ddp.py", line 42, in <module>
[rank6]: demo_basic()
[rank6]: File "/u/bmbelgod/elastic_ddp.py", line 38, in demo_basic
[rank6]: dist.destroy_process_group()
[rank6]: File "/u/bmbelgod/.conda/envs/nvidia-resiliency-ext/lib/python3.11/site-packages/torch/distributed/distributed_c10d.py", line 2146, in destroy_process_group
[rank6]: _shutdown_backend(pg_to_shutdown)
[rank6]: File "/u/bmbelgod/.conda/envs/nvidia-resiliency-ext/lib/python3.11/site-packages/torch/distributed/distributed_c10d.py", line 1815, in _shutdown_backend
[rank6]: backend._shutdown()
[rank6]: torch.distributed.DistBackendError: NCCL error in: /pytorch/torch/csrc/distributed/c10d/NCCLUtils.cpp:133, unhandled cuda error (run with NCCL_DEBUG=INFO for details), NCCL version 2.21.5
[rank6]: ncclUnhandledCudaError: Call to CUDA function failed.
[rank6]: Last error:
[rank6]: Cuda failure 'CUDA-capable device(s) is/are busy or unavailable'
[rank4]: Traceback (most recent call last):
[rank4]: File "/u/bmbelgod/elastic_ddp.py", line 42, in <module>
[rank4]: demo_basic()
[rank4]: File "/u/bmbelgod/elastic_ddp.py", line 38, in demo_basic
[rank4]: dist.destroy_process_group()
[rank4]: File "/u/bmbelgod/.conda/envs/nvidia-resiliency-ext/lib/python3.11/site-packages/torch/distributed/distributed_c10d.py", line 2146, in destroy_process_group
[rank4]: _shutdown_backend(pg_to_shutdown)
[rank4]: File "/u/bmbelgod/.conda/envs/nvidia-resiliency-ext/lib/python3.11/site-packages/torch/distributed/distributed_c10d.py", line 1815, in _shutdown_backend
[rank4]: backend._shutdown()
[rank4]: torch.distributed.DistBackendError: NCCL error in: /pytorch/torch/csrc/distributed/c10d/NCCLUtils.cpp:133, unhandled cuda error (run with NCCL_DEBUG=INFO for details), NCCL version 2.21.5
[rank4]: ncclUnhandledCudaError: Call to CUDA function failed.
[rank4]: Last error:
[rank4]: Cuda failure 'CUDA-capable device(s) is/are busy or unavailable'
Finished destroying process group basic on rank 0.
W0228 02:10:00.970000 839387 site-packages/torch/distributed/elastic/multiprocessing/api.py:897] Sending process 839461 closing signal SIGTERM
W0228 02:10:00.970000 839387 site-packages/torch/distributed/elastic/multiprocessing/api.py:897] Sending process 839462 closing signal SIGTERM
W0228 02:10:00.970000 839387 site-packages/torch/distributed/elastic/multiprocessing/api.py:897] Sending process 839463 closing signal SIGTERM
W0228 02:10:00.970000 839387 site-packages/torch/distributed/elastic/multiprocessing/api.py:897] Sending process 839464 closing signal SIGTERM
W0228 02:10:00.970000 839387 site-packages/torch/distributed/elastic/multiprocessing/api.py:897] Sending process 839466 closing signal SIGTERM
W0228 02:10:00.970000 839387 site-packages/torch/distributed/elastic/multiprocessing/api.py:897] Sending process 839467 closing signal SIGTERM
E0228 02:10:03.477000 839387 site-packages/torch/distributed/elastic/multiprocessing/api.py:869] failed (exitcode: 1) local_rank: 5 (pid: 839465) of binary: /u/bmbelgod/.conda/envs/nvidia-resiliency-ext/bin/python
Traceback (most recent call last):
File "/u/bmbelgod/.conda/envs/nvidia-resiliency-ext/bin/torchrun", line 8, in <module>
sys.exit(main())
^^^^^^
File "/u/bmbelgod/.conda/envs/nvidia-resiliency-ext/lib/python3.11/site-packages/torch/distributed/elastic/multiprocessing/errors/__init__.py", line 355, in wrapper
return f(*args, **kwargs)
^^^^^^^^^^^^^^^^^^
File "/u/bmbelgod/.conda/envs/nvidia-resiliency-ext/lib/python3.11/site-packages/torch/distributed/run.py", line 918, in main
run(args)
File "/u/bmbelgod/.conda/envs/nvidia-resiliency-ext/lib/python3.11/site-packages/torch/distributed/run.py", line 909, in run
elastic_launch(
File "/u/bmbelgod/.conda/envs/nvidia-resiliency-ext/lib/python3.11/site-packages/torch/distributed/launcher/api.py", line 138, in __call__
return launch_agent(self._config, self._entrypoint, list(args))
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/u/bmbelgod/.conda/envs/nvidia-resiliency-ext/lib/python3.11/site-packages/torch/distributed/launcher/api.py", line 269, in launch_agent
raise ChildFailedError(
torch.distributed.elastic.multiprocessing.errors.ChildFailedError:
============================================================
elastic_ddp.py FAILED
------------------------------------------------------------
Failures:
<NO_OTHER_FAILURES>
------------------------------------------------------------
Root Cause (first observed failure):
[0]:
time : 2025-02-28_02:10:00
host : **********************
rank : 5 (local_rank: 5)
exitcode : 1 (pid: 839465)
error_file: <N/A>
traceback : To enable traceback see: https://pytorch.org/docs/stable/elastic/errors.html
============================================================
```
### Versions
## output of Pytorch collect env
```
Collecting environment information...
PyTorch version: 2.6.0+cu126
Is debug build: False
CUDA used to build PyTorch: 12.6
ROCM used to build PyTorch: N/A
OS: Red Hat Enterprise Linux 9.4 (Plow) (x86_64)
GCC version: (GCC) 11.4.1 20231218 (Red Hat 11.4.1-3)
Clang version: Could not collect
CMake version: Could not collect
Libc version: glibc-2.34
Python version: 3.11.11 (main, Dec 11 2024, 16:28:39) [GCC 11.2.0] (64-bit runtime)
Python platform: Linux-5.14.0-427.42.1.el9_4.x86_64-x86_64-with-glibc2.34
Is CUDA available: True
CUDA runtime version: Could not collect
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration:
GPU 0: NVIDIA H100 80GB HBM3
GPU 1: NVIDIA H100 80GB HBM3
GPU 2: NVIDIA H100 80GB HBM3
GPU 3: NVIDIA H100 80GB HBM3
GPU 4: NVIDIA H100 80GB HBM3
GPU 5: NVIDIA H100 80GB HBM3
GPU 6: NVIDIA H100 80GB HBM3
GPU 7: NVIDIA H100 80GB HBM3
Nvidia driver version: 560.35.03
cuDNN version: Could not collect
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 46 bits physical, 57 bits virtual
Byte Order: Little Endian
CPU(s): 96
On-line CPU(s) list: 0-95
Vendor ID: GenuineIntel
Model name: Intel(R) Xeon(R) Platinum 8468
CPU family: 6
Model: 143
Thread(s) per core: 1
Core(s) per socket: 48
Socket(s): 2
Stepping: 8
CPU(s) scaling MHz: 100%
CPU max MHz: 3800.0000
CPU min MHz: 800.0000
BogoMIPS: 4200.00
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc art arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc cpuid aperfmperf tsc_known_freq pni pclmulqdq dtes64 monitor ds_cpl smx est tm2 ssse3 sdbg fma cx16 xtpr pdcm pcid dca sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm 3dnowprefetch cpuid_fault epb cat_l3 cat_l2 cdp_l3 cdp_l2 ssbd mba ibrs ibpb stibp ibrs_enhanced fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid cqm rdt_a avx512f avx512dq rdseed adx smap avx512ifma clflushopt clwb intel_pt avx512cd sha_ni avx512bw avx512vl xsaveopt xsavec xgetbv1 xsaves cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local split_lock_detect avx_vnni avx512_bf16 wbnoinvd dtherm ida arat pln pts avx512vbmi umip pku ospke waitpkg avx512_vbmi2 gfni vaes vpclmulqdq avx512_vnni avx512_bitalg tme avx512_vpopcntdq la57 rdpid bus_lock_detect cldemote movdiri movdir64b enqcmd fsrm md_clear serialize tsxldtrk pconfig arch_lbr ibt amx_bf16 avx512_fp16 amx_tile amx_int8 flush_l1d arch_capabilities
L1d cache: 4.5 MiB (96 instances)
L1i cache: 3 MiB (96 instances)
L2 cache: 192 MiB (96 instances)
L3 cache: 210 MiB (2 instances)
NUMA node(s): 2
NUMA node0 CPU(s): 0-47
NUMA node1 CPU(s): 48-95
Vulnerability Gather data sampling: Not affected
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Not affected
Vulnerability Reg file data sampling: Not affected
Vulnerability Retbleed: Not affected
Vulnerability Spec rstack overflow: Not affected
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Enhanced / Automatic IBRS; IBPB conditional; RSB filling; PBRSB-eIBRS SW sequence; BHI BHI_DIS_S
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Versions of relevant libraries:
[pip3] numpy==2.1.2
[pip3] nvidia-cublas-cu12==12.6.4.1
[pip3] nvidia-cuda-cupti-cu12==12.6.80
[pip3] nvidia-cuda-nvrtc-cu12==12.6.77
[pip3] nvidia-cuda-runtime-cu12==12.6.77
[pip3] nvidia-cudnn-cu12==9.5.1.17
[pip3] nvidia-cufft-cu12==11.3.0.4
[pip3] nvidia-curand-cu12==10.3.7.77
[pip3] nvidia-cusolver-cu12==11.7.1.2
[pip3] nvidia-cusparse-cu12==12.5.4.2
[pip3] nvidia-cusparselt-cu12==0.6.3
[pip3] nvidia-nccl-cu12==2.21.5
[pip3] nvidia-nvjitlink-cu12==12.6.85
[pip3] nvidia-nvtx-cu12==12.6.77
[pip3] torch==2.6.0+cu126
[pip3] torchvision==0.21.0+cu126
[pip3] triton==3.2.0
[conda] numpy 2.1.2 pypi_0 pypi
[conda] nvidia-cublas-cu12 12.6.4.1 pypi_0 pypi
[conda] nvidia-cuda-cupti-cu12 12.6.80 pypi_0 pypi
[conda] nvidia-cuda-nvrtc-cu12 12.6.77 pypi_0 pypi
[conda] nvidia-cuda-runtime-cu12 12.6.77 pypi_0 pypi
[conda] nvidia-cudnn-cu12 9.5.1.17 pypi_0 pypi
[conda] nvidia-cufft-cu12 11.3.0.4 pypi_0 pypi
[conda] nvidia-curand-cu12 10.3.7.77 pypi_0 pypi
[conda] nvidia-cusolver-cu12 11.7.1.2 pypi_0 pypi
[conda] nvidia-cusparse-cu12 12.5.4.2 pypi_0 pypi
[conda] nvidia-cusparselt-cu12 0.6.3 pypi_0 pypi
[conda] nvidia-nccl-cu12 2.21.5 pypi_0 pypi
[conda] nvidia-nvjitlink-cu12 12.6.85 pypi_0 pypi
[conda] nvidia-nvtx-cu12 12.6.77 pypi_0 pypi
[conda] torch 2.6.0+cu126 pypi_0 pypi
[conda] torchvision 0.21.0+cu126 pypi_0 pypi
[conda] triton 3.2.0 pypi_0 pypi
```
cc @H-Huang @awgu @kwen2501 @wanchaol @fegin @fduwjj @wz337 @wconstab @d4l3k @c-p-i-o
| true
|
2,886,010,081
|
Enable kineto for XPU
|
xuhancn
|
closed
|
[
"open source",
"ciflow/binaries",
"ciflow/trunk",
"topic: not user facing",
"ciflow/xpu"
] | 1
|
COLLABORATOR
|
Fixes #ISSUE_NUMBER
| true
|
2,885,979,402
|
[Inductor UT] RuntimeError: Tried to register an operator with the same name and overload name multiple times.
|
etaf
|
closed
|
[] | 0
|
COLLABORATOR
|
### 🐛 Describe the bug
When running `python test/inductor/test_auto_functionalize.py` on the latest PyTorch main (6ccbff1450bb3936636377d3910906f5666ddcfa), we see several test-case failures like:
```
======================================================================
ERROR: test_recompile (__main__.AutoFunctionalizeTests)
----------------------------------------------------------------------
Traceback (most recent call last):
File "/home/xinanlin/xinanlin/pytorch/torch/testing/_internal/common_utils.py", line 3155, in wrapper
method(*args, **kwargs)
File "/home/xinanlin/xinanlin/miniforge3/lib/python3.10/contextlib.py", line 79, in inner
return func(*args, **kwds)
File "/home/xinanlin/xinanlin/pytorch/test/inductor/test_auto_functionalize.py", line 1566, in test_recompile
torch.library.define(
File "/home/xinanlin/xinanlin/miniforge3/lib/python3.10/functools.py", line 889, in wrapper
return dispatch(args[0].__class__)(*args, **kw)
File "/home/xinanlin/xinanlin/pytorch/torch/library.py", line 531, in define
lib.define(name + schema, alias_analysis="", tags=tags)
File "/home/xinanlin/xinanlin/pytorch/torch/library.py", line 172, in define
result = self.m.define(schema, alias_analysis, tuple(tags))
RuntimeError: Tried to register an operator (mylib::foo(Tensor(a!) x, Tensor(b!) y) -> ()) with the same name and overload name multiple times. Each overload's schema should only be registered with a single call to def(). Duplicate registration: registered at /dev/null:135. Original registration: registered at /dev/null:203
To execute this test, run the following from the base repo dir:
python test/inductor/test_auto_functionalize.py AutoFunctionalizeTests.test_recompile
This message can be suppressed by setting PYTORCH_PRINT_REPRO_ON_FAILURE=0
======================================================================
ERROR: test_slice (__main__.AutoFunctionalizeTests)
----------------------------------------------------------------------
Traceback (most recent call last):
File "/home/xinanlin/xinanlin/pytorch/torch/testing/_internal/common_utils.py", line 3155, in wrapper
method(*args, **kwargs)
File "/home/xinanlin/xinanlin/miniforge3/lib/python3.10/contextlib.py", line 79, in inner
return func(*args, **kwds)
File "/home/xinanlin/xinanlin/pytorch/test/inductor/test_auto_functionalize.py", line 1254, in test_slice
torch.library.define(
File "/home/xinanlin/xinanlin/miniforge3/lib/python3.10/functools.py", line 889, in wrapper
return dispatch(args[0].__class__)(*args, **kw)
File "/home/xinanlin/xinanlin/pytorch/torch/library.py", line 531, in define
lib.define(name + schema, alias_analysis="", tags=tags)
File "/home/xinanlin/xinanlin/pytorch/torch/library.py", line 172, in define
result = self.m.define(schema, alias_analysis, tuple(tags))
RuntimeError: Tried to register an operator (mylib::foo(Tensor(a!) x, Tensor(b!) y) -> ()) with the same name and overload name multiple times. Each overload's schema should only be registered with a single call to def(). Duplicate registration: registered at /dev/null:135. Original registration: registered at /dev/null:203
To execute this test, run the following from the base repo dir:
python test/inductor/test_auto_functionalize.py AutoFunctionalizeTests.test_slice
This message can be suppressed by setting PYTORCH_PRINT_REPRO_ON_FAILURE=0
======================================================================
ERROR: test_slice_dynamic (__main__.AutoFunctionalizeTests)
----------------------------------------------------------------------
Traceback (most recent call last):
File "/home/xinanlin/xinanlin/pytorch/torch/testing/_internal/common_utils.py", line 3155, in wrapper
method(*args, **kwargs)
File "/home/xinanlin/xinanlin/miniforge3/lib/python3.10/contextlib.py", line 79, in inner
return func(*args, **kwds)
File "/home/xinanlin/xinanlin/pytorch/test/inductor/test_auto_functionalize.py", line 1357, in test_slice_dynamic
self.test_slice(_dynamic=True)
File "/home/xinanlin/xinanlin/miniforge3/lib/python3.10/contextlib.py", line 79, in inner
return func(*args, **kwds)
File "/home/xinanlin/xinanlin/pytorch/test/inductor/test_auto_functionalize.py", line 1254, in test_slice
torch.library.define(
File "/home/xinanlin/xinanlin/miniforge3/lib/python3.10/functools.py", line 889, in wrapper
return dispatch(args[0].__class__)(*args, **kw)
File "/home/xinanlin/xinanlin/pytorch/torch/library.py", line 531, in define
lib.define(name + schema, alias_analysis="", tags=tags)
File "/home/xinanlin/xinanlin/pytorch/torch/library.py", line 172, in define
result = self.m.define(schema, alias_analysis, tuple(tags))
RuntimeError: Tried to register an operator (mylib::foo(Tensor(a!) x, Tensor(b!) y) -> ()) with the same name and overload name multiple times. Each overload's schema should only be registered with a single call to def(). Duplicate registration: registered at /dev/null:135. Original registration: registered at /dev/null:203
To execute this test, run the following from the base repo dir:
python test/inductor/test_auto_functionalize.py AutoFunctionalizeTests.test_slice_dynamic
This message can be suppressed by setting PYTORCH_PRINT_REPRO_ON_FAILURE=0
======================================================================
ERROR: test_split (__main__.AutoFunctionalizeTests)
----------------------------------------------------------------------
Traceback (most recent call last):
File "/home/xinanlin/xinanlin/pytorch/torch/testing/_internal/common_utils.py", line 3155, in wrapper
method(*args, **kwargs)
File "/home/xinanlin/xinanlin/miniforge3/lib/python3.10/contextlib.py", line 79, in inner
return func(*args, **kwds)
File "/home/xinanlin/xinanlin/pytorch/test/inductor/test_auto_functionalize.py", line 1136, in test_split
torch.library.define(
File "/home/xinanlin/xinanlin/miniforge3/lib/python3.10/functools.py", line 889, in wrapper
return dispatch(args[0].__class__)(*args, **kw)
File "/home/xinanlin/xinanlin/pytorch/torch/library.py", line 531, in define
lib.define(name + schema, alias_analysis="", tags=tags)
File "/home/xinanlin/xinanlin/pytorch/torch/library.py", line 172, in define
result = self.m.define(schema, alias_analysis, tuple(tags))
RuntimeError: Tried to register an operator (mylib::foo(Tensor(a!) x, Tensor(b!) y) -> ()) with the same name and overload name multiple times. Each overload's schema should only be registered with a single call to def(). Duplicate registration: registered at /dev/null:135. Original registration: registered at /dev/null:203
To execute this test, run the following from the base repo dir:
python test/inductor/test_auto_functionalize.py AutoFunctionalizeTests.test_split
This message can be suppressed by setting PYTORCH_PRINT_REPRO_ON_FAILURE=0
======================================================================
ERROR: test_split_dynamic (__main__.AutoFunctionalizeTests)
----------------------------------------------------------------------
Traceback (most recent call last):
File "/home/xinanlin/xinanlin/pytorch/torch/testing/_internal/common_utils.py", line 3155, in wrapper
method(*args, **kwargs)
File "/home/xinanlin/xinanlin/miniforge3/lib/python3.10/contextlib.py", line 79, in inner
return func(*args, **kwds)
File "/home/xinanlin/xinanlin/pytorch/test/inductor/test_auto_functionalize.py", line 1248, in test_split_dynamic
self.test_split(_dynamic=True)
File "/home/xinanlin/xinanlin/miniforge3/lib/python3.10/contextlib.py", line 79, in inner
return func(*args, **kwds)
File "/home/xinanlin/xinanlin/pytorch/test/inductor/test_auto_functionalize.py", line 1136, in test_split
torch.library.define(
File "/home/xinanlin/xinanlin/miniforge3/lib/python3.10/functools.py", line 889, in wrapper
return dispatch(args[0].__class__)(*args, **kw)
File "/home/xinanlin/xinanlin/pytorch/torch/library.py", line 531, in define
lib.define(name + schema, alias_analysis="", tags=tags)
File "/home/xinanlin/xinanlin/pytorch/torch/library.py", line 172, in define
result = self.m.define(schema, alias_analysis, tuple(tags))
RuntimeError: Tried to register an operator (mylib::foo(Tensor(a!) x, Tensor(b!) y) -> ()) with the same name and overload name multiple times. Each overload's schema should only be registered with a single call to def(). Duplicate registration: registered at /dev/null:135. Original registration: registered at /dev/null:203
To execute this test, run the following from the base repo dir:
python test/inductor/test_auto_functionalize.py AutoFunctionalizeTests.test_split_dynamic
This message can be suppressed by setting PYTORCH_PRINT_REPRO_ON_FAILURE=0
----------------------------------------------------------------------
Ran 38 tests in 22.537s
FAILED (errors=5, skipped=1)
```
#### Root Cause
The failure was introduced by #147925, which added `@torch.library.custom_op("mylib::foo", ...)`.
Unlike `torch.library._scoped_library("mylib", "FRAGMENT")`, `@torch.library.custom_op("mylib::foo", ...)` registers into a global library. So once the global library "mylib" has been created and "mylib::foo" has been defined, the `foo` defined via `torch.library._scoped_library("mylib", ...)` conflicts with the global one.
I have encountered this type of issue multiple times. I recommend using a scoped library in tests, or ensuring a unique name when using `custom_op`.
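A minimal sketch of the recommended scoped-library pattern (the library name, op schema, and implementation below are illustrative, not taken from the test file):
```python
import torch

def test_foo_scoped():
    # The registration lives only inside this scope, so repeated runs and
    # other tests reusing the same op name do not conflict.
    with torch.library._scoped_library("mylib_test_only", "FRAGMENT") as lib:
        lib.define("foo(Tensor(a!) x, Tensor(b!) y) -> ()")

        def foo_impl(x, y):
            x.add_(1)
            y.add_(2)

        lib.impl("foo", foo_impl, "CompositeExplicitAutograd")
        x, y = torch.zeros(3), torch.zeros(3)
        torch.ops.mylib_test_only.foo(x, y)
```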
### Versions
Collecting environment information...
PyTorch version: 2.7.0a0+git6ccbff1
Is debug build: False
CUDA used to build PyTorch: None
ROCM used to build PyTorch: N/A
OS: Ubuntu 22.04.5 LTS (x86_64)
GCC version: (Ubuntu 11.4.0-1ubuntu1~22.04) 11.4.0
Clang version: Could not collect
CMake version: version 3.31.2
Libc version: glibc-2.35
Python version: 3.10.15 | packaged by conda-forge | (main, Oct 16 2024, 01:24:24) [GCC 13.3.0] (64-bit runtime)
Python platform: Linux-5.15.47+prerelease24.6.13-x86_64-with-glibc2.35
Is CUDA available: False
CUDA runtime version: No CUDA
CUDA_MODULE_LOADING set to: N/A
GPU models and configuration: No CUDA
Nvidia driver version: No CUDA
cuDNN version: No CUDA
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
| true
|
2,885,969,735
|
[Submodule][FlashAttention] Bump to 2.7.4
|
drisspg
|
closed
|
[
"Merged",
"ciflow/trunk",
"topic: not user facing"
] | 4
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #148147
# Summary
This makes me happy
| true
|
2,885,966,027
|
Unify OpOverload._get_dispatch and HigherOrderOperator.dispatch
|
zou3519
|
open
|
[
"triaged",
"oncall: pt2",
"module: aotdispatch"
] | 0
|
CONTRIBUTOR
|
It's not clear to me why these have diverged
cc @chauhang @penguinwu @bdhirsh
| true
|
2,885,948,693
|
Fix code descriptions in the test package.
|
threewebcode
|
closed
|
[
"open source",
"Merged",
"ciflow/trunk",
"topic: not user facing"
] | 15
|
CONTRIBUTOR
|
Some parameter and function descriptions were incorrect; this PR corrects them.
| true
|
2,885,942,166
|
Torch export does not preserve original edges between nodes
|
corehalt
|
open
|
[
"triaged",
"oncall: pt2",
"module: dynamo",
"oncall: export"
] | 5
|
NONE
|
### 🐛 Describe the bug
Please refer to this issue on the ExecuTorch side, where the problem was originally reported:
https://github.com/pytorch/executorch/issues/8758
I thought the problem might be on the ExecuTorch side, but it actually happens on the torch.export side.
For quick reference, this code:
```
class SPPF(nn.Module):
"""Spatial Pyramid Pooling - Fast (SPPF) layer for YOLOv5 by Glenn Jocher."""
def __init__(self, c1, c2, k=5):
"""
Initializes the SPPF layer with given input/output channels and kernel size.
This module is equivalent to SPP(k=(5, 9, 13)).
"""
super().__init__()
c_ = c1 // 2 # hidden channels
self.cv1 = Conv(c1, c_, 1, 1)
self.cv2 = Conv(c_ * 4, c2, 1, 1)
self.m = nn.MaxPool2d(kernel_size=k, stride=1, padding=k // 2)
def forward(self, x):
"""Forward pass through Ghost Convolution block."""
y = [self.cv1(x)]
y.extend(self.m(y[-1]) for _ in range(3)) # connecting each maxpool with the previous one as in the images
return self.cv2(torch.cat(y, 1))
```
is exported by TorchScript as intended:

but when exported with torch.export we get:
```
silu__25: "f32[1, 128, 20, 20]" = torch.ops.aten.silu_.default(conv2d_25); conv2d_25 = None
max_pool2d: "f32[1, 128, 20, 20]" = torch.ops.aten.max_pool2d.default(silu__25, [5, 5], [1, 1], [2, 2])
max_pool2d_1: "f32[1, 128, 20, 20]" = torch.ops.aten.max_pool2d.default(silu__25, [5, 5], [1, 1], [2, 2])
max_pool2d_2: "f32[1, 128, 20, 20]" = torch.ops.aten.max_pool2d.default(silu__25, [5, 5], [1, 1], [2, 2])
```
As you can see, all the max-pool operators use the same tensor as input, which is not what the code intends (TorchScript, however, reflects the logic correctly).
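For reference, an explicit rewrite of the intended chaining (a sketch, not the original model code; `cv1`, `cv2`, and `m` are the modules defined in the snippet above):
```python
# Drop-in replacement for SPPF.forward that makes the chaining explicit.
def forward(self, x):
    y = [self.cv1(x)]
    for _ in range(3):
        y.append(self.m(y[-1]))  # each max pool consumes the previous max pool's output
    return self.cv2(torch.cat(y, 1))
```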
### Versions
```
Versions of relevant libraries:
[pip3] executorch==0.6.0a0+4e3a8bd
[pip3] numpy==2.0.0
[pip3] onnx==1.17.0
[pip3] onnxruntime==1.20.1
[pip3] onnxslim==0.1.48
[pip3] torch==2.7.0.dev20250131+cpu
[pip3] torchao==0.8.0+git11333ba2
[pip3] torchaudio==2.6.0.dev20250131+cpu
[pip3] torchsr==1.0.4
[pip3] torchvision==0.22.0.dev20250131+cpu
```
cc @chauhang @penguinwu @voznesenskym @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @amjames @avikchaudhuri @gmagogsfm @zhxchen17 @tugsbayasgalan @angelayi @suo @ydwu4
| true
|
2,885,929,747
|
test_reference_numerics_normal fails with certain versions of numpy/scipy
|
nWEIdia
|
closed
|
[
"module: tests",
"triaged",
"module: numpy",
"module: scipy compatibility"
] | 1
|
COLLABORATOR
|
### 🐛 Describe the bug
We would encounter errors like the following:
```
 File "/opt/pytorch/pytorch/test/test_unary_ufuncs.py", line 247, in _test_reference_numeric
    expected = op.ref(a, **numpy_kwargs)
    ^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/local/lib/python3.12/dist-packages/torch/testing/_internal/opinfo/definitions/sp
    ref=lambda x: scipy.special.spherical_jn(0, x) if TEST_SCIPY else None,
    ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/local/lib/python3.12/dist-packages/scipy/special/_spherical_bessel.py", line 31,
    return _lazywhere(z.real >= 0, (n, z),
    ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/local/lib/python3.12/dist-packages/scipy/_lib/_util.py", line 153, in _lazywhere
    temp2 = xp.asarray(f2(*(arr[ncond] for arr in arrays)))
    ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/local/lib/python3.12/dist-packages/scipy/special/_spherical_bessel.py", line 33,
    f2=lambda n, z: f2(n, z, derivative))[()]
    ^^^^^^^^^^^^^^^^^^^^
  File "/usr/local/lib/python3.12/dist-packages/scipy/special/_spherical_bessel.py", line 21,
    return fun(n, -z, derivative) * sign
    ^^
TypeError: The numpy boolean negative, the `-` operator, is not supported, use the `~` operat
To execute this test, run the following from the base repo dir:
python test/test_unary_ufuncs.py TestUnaryUfuncsCPU.test_reference_numerics_normal_speci
```
### Versions
A nightly build should reproduce this with numpy 1.26.4 and scipy 1.15.2.
To reproduce (e.g. on a machine with an H100):
pytest -v test/test_unary_ufuncs.py -k test_reference_numerics_normal_special_spherical_bessel_j0_cuda_bool
cc @mruberry @ZainRizvi @rgommers
| true
|
2,885,916,296
|
Remove outdated CUDA version check
|
cyyever
|
closed
|
[
"oncall: distributed",
"triaged",
"open source",
"Merged",
"ciflow/trunk",
"release notes: distributed (c10d)"
] | 6
|
COLLABORATOR
|
Since Torch requires CUDA>=11, some checks can be removed.
cc @H-Huang @awgu @kwen2501 @wanchaol @fegin @fduwjj @wz337 @wconstab @d4l3k @c-p-i-o
| true
|
2,885,876,344
|
Applying online softmax patterns on joint_graph cause 1.2x peak memory regression for TB hf_T5_base model
|
shunting314
|
open
|
[
"triaged",
"oncall: pt2"
] | 0
|
CONTRIBUTOR
|
### 🐛 Describe the bug
Command
```time python benchmarks/dynamo/torchbench.py --backend inductor --amp --performance --only hf_T5_base --training```
tlparse for joint-graph pattern: https://manifold.edge.x2p.facebook.net/v0/read/tree/logs/.tmpk7zTmz/index.html?bucketName=tlparse_reports&apiKey=tlparse_reports-key&withPayload=1&timeoutMsec=100 . Peak memory 22.2GB.
tlparse for post-grad graph pattern: https://manifold.edge.x2p.facebook.net/v0/read/tree/logs/.tmp9eCEvc/index.html?bucketName=tlparse_reports&apiKey=tlparse_reports-key&withPayload=1&timeoutMsec=100 . Peak memory 18.6GB.
It seems like the partitioner makes worse decisions if the pattern is applied before rather than after the partitioning.
This is related to PR https://github.com/pytorch/pytorch/pull/127011 . cc @chauhang @penguinwu @jansel @eellison I'll move the pattern to post grad to avoid the regression. Ultimately we probably should improve the partitioner. cc @Chillee as well
### Error logs
_No response_
### Versions
.
cc @chauhang @penguinwu
| true
|
2,885,864,466
|
Disable cudnn to avoid creating guards that denies exporting
|
yushangdi
|
open
|
[
"fb-exported",
"Stale",
"ciflow/trunk",
"topic: not user facing"
] | 7
|
CONTRIBUTOR
|
Summary:
Fixes https://github.com/pytorch/pytorch/issues/147623
This code https://github.com/pytorch/pytorch/blob/main/aten/src/ATen/native/Normalization.cpp#L504-L518 produces guards that raise a ConstraintViolation error in the batchnorm op.
We disable cudnn during export tracing to avoid creating such guards.
Dependency: We need to land https://github.com/microsoft/onnxscript/pull/2085 first in onnxscript, and then bump the onnxscript version in https://github.com/pytorch/pytorch/pull/148388
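A minimal sketch of the idea (illustrative model; the actual change lives inside export tracing, not in user code):
```python
import torch

class M(torch.nn.Module):
    def __init__(self):
        super().__init__()
        self.bn = torch.nn.BatchNorm2d(3)

    def forward(self, x):
        return self.bn(x)

x = torch.randn(2, 3, 8, 8)
# Disabling cudnn keeps batch_norm off the cudnn-specific path that creates the guards.
with torch.backends.cudnn.flags(enabled=False):
    ep = torch.export.export(M().eval(), (x,))
```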
Test Plan:
```
buck2 run mode/dev-nosan //caffe2/test:test_export -- -r bn_dynamic_shapes
```
Differential Revision: D70357703
| true
|
2,885,864,445
|
[canary] force_nn_module_property_static_shapes=False
|
bobrenjc93
|
closed
|
[
"topic: not user facing",
"module: dynamo",
"ciflow/inductor"
] | 3
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #148139
As part of the dynamic shapes roadmap this half, we want to reduce the number of flags that have not yet been rolled out. This is one that limits dynamism and doesn't seem to affect compile time or correctness. Let's flip it to False by default.
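For users who want to try the new behavior before the default changes, a hedged sketch (the flag name comes from the PR title; treat the exact config module path as an assumption):
```python
import torch._dynamo.config as dynamo_config

# Stop forcing nn.Module property tensors to static shapes, allowing them to be dynamic.
dynamo_config.force_nn_module_property_static_shapes = False
```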
| true
|
2,885,864,391
|
[canary] force_parameter_static_shapes=False
|
bobrenjc93
|
closed
|
[
"topic: not user facing",
"module: dynamo",
"ciflow/inductor"
] | 2
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #148139
* __->__ #148138
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @chauhang @amjames
| true
|
2,885,858,134
|
Add cufile to list of libraries to preload
|
atalman
|
closed
|
[
"Merged",
"release notes: releng",
"topic: bug fixes"
] | 3
|
CONTRIBUTOR
|
Fixes: https://github.com/pytorch/pytorch/issues/148120
Test with almalinux/9-base:latest :
```
>>> import torch
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/usr/local/lib64/python3.9/site-packages/torch/__init__.py", line 401, in <module>
from torch._C import * # noqa: F403
ImportError: libcufile.so.0: cannot open shared object file: No such file or directory
>>> exit()
[root@18b37257e416 /]# vi /usr/local/lib64/python3.9/site-packages/torch/__init__.py
[root@18b37257e416 /]# python3
Python 3.9.19 (main, Sep 11 2024, 00:00:00)
[GCC 11.5.0 20240719 (Red Hat 11.5.0-2)] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> import torch
/usr/local/lib64/python3.9/site-packages/torch/_subclasses/functional_tensor.py:276: UserWarning: Failed to initialize NumPy: No module named 'numpy' (Triggered internally at /pytorch/torch/csrc/utils/tensor_numpy.cpp:81.)
cpu = _conversion_method_template(device=torch.device("cpu"))
>>> torch.__version__
'2.7.0.dev20250227+cu126'
```
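For users hitting this before the fix is available, a rough workaround sketch (the soname and the manual-preload approach are assumptions, not part of this PR's change):
```python
# Hypothetical workaround: preload libcufile before importing torch.
# The exact soname/path may differ per system.
import ctypes

ctypes.CDLL("libcufile.so.0", mode=ctypes.RTLD_GLOBAL)

import torch
print(torch.__version__)
```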
| true
|
2,885,850,636
|
Checks kv pair indexing in OrderedPreservingDictTest.test_range_insert
|
redwrasse
|
open
|
[
"triaged",
"open source",
"ciflow/trunk",
"topic: not user facing",
"module: inductor"
] | 12
|
CONTRIBUTOR
|
`OrderedPreservingDictTest.test_range_insert` has an [unused loop variable `j`](https://github.com/pytorch/pytorch/blob/main/c10/test/util/ordered_preserving_dict_test.cpp#L186), which I think was taken from the range-insert test case of the [project it was inspired by](https://github.com/pytorch/pytorch/blob/main/c10/test/util/ordered_preserving_dict_test.cpp#L165), where that test [checks key/value pair indexing and order](https://github.com/Tessil/ordered-map/blob/master/tests/ordered_map_tests.cpp#L136) for the ordered dict.
This PR just adds that check to the test case.
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
| true
|
2,885,836,943
|
Remove manylinux 2014 artifacts
|
atalman
|
closed
|
[
"Merged",
"ciflow/trunk",
"topic: not user facing"
] | 6
|
CONTRIBUTOR
|
1. Switch Magma build to Manylinux 2.28 base
2. Use manylinux 2.28 as default in populate_binary_env.sh
3. Remove manylinux 2014 docker builds
| true
|
2,885,831,144
|
add skips to test_notifies_oom and test_set_per_process_memory_fraction
|
Fuzzkatt
|
closed
|
[
"open source",
"Merged",
"ciflow/trunk",
"topic: not user facing"
] | 3
|
COLLABORATOR
|
Tests fail in NVIDIA internal CI since we do not support nvml on Jetson, but nvml is required for OOM reporting to work properly, so we are skipping the failing tests for now.
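A sketch of the kind of skip being added (the decorator placement and the `IS_JETSON` helper are assumptions about the test utilities, not a quote of the PR):
```python
import unittest
from torch.testing._internal.common_utils import IS_JETSON  # assumed helper flag

class TestCudaOOM(unittest.TestCase):
    @unittest.skipIf(IS_JETSON, "nvml is unsupported on Jetson, so OOM reporting cannot be verified")
    def test_notifies_oom(self):
        ...  # original test body unchanged
```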
cc @nWEIdia @eqy
| true
|
2,885,790,600
|
[MPS] fix empty place holder error for smooth l1 loss
|
Isalia20
|
closed
|
[
"open source",
"Merged",
"topic: bug fixes",
"module: mps",
"release notes: mps"
] | 6
|
COLLABORATOR
|
Fixes #123171
And parametrizes the tests for it
cc @kulinseth @albanD @malfet @DenisVieriu97 @jhavukainen
| true
|