| id (int64, 2.74B–3.05B) | title (string, length 1–255) | user (string, length 2–26) | state (string, 2 classes) | labels (list, length 0–24) | comments (int64, 0–206) | author_association (string, 4 classes) | body (string, length 7–62.5k, nullable ⌀) | is_title (bool, 1 class) |
|---|---|---|---|---|---|---|---|---|
2,807,880,256
|
Fix IndentationError of code example
|
pytorchbot
|
closed
|
[
"open source"
] | 3
|
COLLABORATOR
|
I found there is an IndentationError when trying to copy-paste the example of inference with torch.compile.
This PR fixes the formatting.
| true
|
2,807,870,263
|
Only include RMSNorm.h in layer_norm.cpp for MPS
|
manuelcandales
|
closed
|
[
"fb-exported",
"Merged",
"ciflow/trunk",
"topic: not user facing"
] | 4
|
CONTRIBUTOR
|
Test Plan: CI
Differential Revision: D68578213
| true
|
2,807,867,465
|
inductor.config.descriptive_names = False is not actually supported
|
exclamaforte
|
closed
|
[
"Merged",
"Reverted",
"Stale",
"ciflow/trunk",
"topic: deprecation",
"module: inductor",
"ciflow/inductor",
"release notes: inductor",
"ci-no-td"
] | 14
|
CONTRIBUTOR
|
Summary:
This config is not supported (it throws an error when set), and doesn't really make sense imo.
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @ColinPeppler @amjames @desertfire @chauhang @aakhundov
| true
|
2,807,855,167
|
use_const_ref_for_mutable_tensors doesn't work with out= overloads
|
ezyang
|
open
|
[
"triaged",
"module: pt2-dispatcher"
] | 0
|
CONTRIBUTOR
|
### 🐛 Describe the bug
Repro:
```
import torch
from torch._subclasses.fake_tensor import FakeTensorMode
from torch._dispatch.python import enable_python_dispatcher
from torch.fx.experimental.symbolic_shapes import ShapeEnv, DimDynamic, SymNode
from torch import SymInt
import torch._dynamo
def f():
    shape_env = ShapeEnv()
    s0 = 5
    s1 = 6
    s2 = 7
    s3 = 3
    s4 = 10
    s5 = 2
    x = torch.randn(s0, s1, s2)
    out = torch.randn(s0, s3, s4)
    kwargs = {
        's': (s3, s4),
        'dim': (1, s5),
        'norm': 'ortho',
    }
    from torch.fx.experimental.proxy_tensor import make_fx
    r = torch._C._fft.fft_hfft2(x, **kwargs, out=out)
    assert r.shape == out.shape, r.shape

#print("real")
#f()
print("fake")
with FakeTensorMode():
    f()
```
First, we observe that fft_hfft2 uses `use_const_ref_for_mutable_tensors`:
```
- func: fft_hfft2.out(Tensor self, SymInt[1]? s=None, int[1] dim=[-2,-1], str? norm=None, *, Tensor(a!) out) -> Tensor(a!)
  use_const_ref_for_mutable_tensors: True
  python_module: fft
  variants: function
  dispatch:
    CompositeImplicitAutograd: fft_hfft2_symint_out
```
This means the signature of fft_hfft2_symint_out is:
```
const Tensor& fft_hfft2_symint_out(
    const Tensor& self, at::OptionalSymIntArrayRef s, IntArrayRef dim,
    std::optional<std::string_view> norm, const Tensor& out) {
```
Now, consider how we implement this signature in aten/src/ATen/core/boxing/impl/boxing.h :
```
// 3.5. In-process migration to make in-place ops take and return
// const references instead.
template <class... OtherArgs>
struct BoxedKernelWrapper<
    const at::Tensor&(const at::Tensor&, OtherArgs...),
    std::enable_if_t<can_box_all<OtherArgs...>::value, void>> {
  static const at::Tensor& call(
      const BoxedKernel& boxed_kernel_func,
      const OperatorHandle& opHandle,
      DispatchKeySet dispatchKeySet,
      const at::Tensor& outArg,
      OtherArgs... otherArgs) {
    torch::jit::Stack stack = boxArgs(outArg, otherArgs...);
    boxed_kernel_func.callBoxed(opHandle, dispatchKeySet, &stack);
    TORCH_INTERNAL_ASSERT_DEBUG_ONLY(
        stack.size() == 1,
        "Boxed kernel was expected to return a single value on the stack, ",
        "but instead returned ",
        stack.size(),
        " values.");
    return outArg;
  }
};
```
Our signature will match the boxing rule for inplace arguments, because it returns a const Tensor& and the first argument is a const Tensor&. This means we will return self, rather than out, as the return value.
I think it is fundamentally impossible to solve this problem as-is: there is an ambiguity as to whether the function signature is an out= signature or an inplace one, and we don't specify this in the function signature. A dedicated "Out" struct that wraps the tensor reference could help. But an easy workaround is to stop using the modern const Tensor& style for these out= functions.
### Versions
main
cc @chauhang @penguinwu @zou3519 @bdhirsh @yf225
| true
|
2,807,822,889
|
General Changes for multi accelerators
|
rahulsingh-intel
|
open
|
[
"oncall: distributed",
"triaged",
"open source",
"release notes: distributed (fsdp)"
] | 36
|
CONTRIBUTOR
|
The intent is to generalize the framework for multiple accelerators.
Major changes include:
> Add TEST_CUDA & TEST_HPU conditions for generalization in the common code.
> Move ".cuda()" calls to ".to(device_type)" (a sketch of the pattern follows below).
cc @H-Huang @awgu @kwen2501 @wanchaol @fegin @fduwjj @wz337 @wconstab @d4l3k @c-p-i-o
| true
|
2,807,811,876
|
Fix allow_mutation_on_saved_tensors for inplace foreach
|
soulitzer
|
closed
|
[
"Merged",
"release notes: autograd",
"topic: bug fixes"
] | 1
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #145399
* #145533
* #145531
* __->__ #145520
| true
|
2,807,806,834
|
[dynamo][trace-rules-cleanup] Remove functools from the Builtins skiplist
|
anijain2305
|
closed
|
[
"Merged",
"ciflow/trunk",
"topic: not user facing",
"module: inductor",
"module: dynamo",
"ciflow/inductor"
] | 3
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #145559
* #145558
* #145547
* __->__ #145519
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @ColinPeppler @amjames @desertfire @chauhang @aakhundov
| true
|
2,807,787,494
|
[ROCm] MI300 skips for flaky unit tests
|
jataylo
|
closed
|
[
"module: rocm",
"open source",
"topic: not user facing",
"module: inductor",
"module: dynamo",
"ciflow/inductor",
"ciflow/rocm",
"ciflow/inductor-rocm"
] | 6
|
COLLABORATOR
|
Temporary skips to deal with the current CI disable issues due to MI300; this will allow us to run these tests again on MI200 CI.
cc @jeffdaily @sunway513 @jithunnair-amd @pruthvistony @ROCmSupport @dllehr-amd @hongxiayang @naromero77amd @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @ColinPeppler @amjames @desertfire @chauhang @aakhundov
| true
|
2,807,766,045
|
Document dispatch trace build flag
|
albanD
|
closed
|
[
"Merged",
"ciflow/trunk",
"topic: not user facing"
] | 3
|
COLLABORATOR
|
Ok, the build flag seems to have been broken for a while since the function it calls doesn't exist anymore.
Repurposed it to enable dispatcher printing (which requires a full (and slow) debug build otherwise).
| true
|
2,807,725,961
|
Make debugging flaky tests easier by having relevant logs in one place
|
janeyx99
|
closed
|
[
"triaged",
"module: devx"
] | 2
|
CONTRIBUTOR
|
## Problem
@mikaylagawarecki and I were looking into https://github.com/pytorch/pytorch/issues/81732, a test that has been flaky for a while.
Following the "recent examples" of the issue body leads us to https://hud.pytorch.org/flakytest?name=test_non_contiguous_tensors_nn_ConvTranspose1d_cuda_complex32&suite=TestModuleCUDA&file=test_modules.py
where thankfully there are many examples of the test being flaky.
The intuitive next step is to search for the test failure in the logs, so I search for the test name, to get this awful loop of this test name ~50 times in one line:
<img width="1701" alt="Image" src="https://github.com/user-attachments/assets/cc12052b-c8f1-45ab-a6dd-8d539cc9da6a" />
Where are the logs? Where are the failures?
I ask @clee2000 who tells me that I have to look into the artifacts, which requires the following steps:
1. Scrolling above that line to see what the artifact name was:
<img width="1621" alt="Image" src="https://github.com/user-attachments/assets/d030d47e-4165-4039-91bc-3197b27d7848" />
2. Remember that somewhere. Now click on the "Commit" link under the workflow to get to the commit page, which has the artifacts.
<img width="181" alt="Image" src="https://github.com/user-attachments/assets/28dddb6c-57f2-4836-a24a-8d216de283a6" />
3. Oh shoot, did you forget to remember what job the artifacts were saved under? Go back to that tab and remember the job name: `linux-focal-cuda11.8-py3.10-gcc9-debug / test (default, 5, 7, linux.4xlarge.nvidia.gpu, oncall:debug-build, rerun_disabled_tests)`. Then I searched for that on the commit page.
<img width="539" alt="Image" src="https://github.com/user-attachments/assets/781cd7b5-b408-4369-afb9-817946abed77" />
4. Conveniently, I already knew about the "Show artifacts" button which revealed the above artifacts, of which I download the logs, as those were what we wanted.
5. Open the logfile name from earlier and then start reading the logs.
## Solution:
How come the original logviewer had no logs for the failures and reruns like other jobs normally would have? If the logs were in the logviewer on the flaky test page, steps 1-5 above would not be necessary.
cc @ZainRizvi @kit1980 @huydhn @clee2000
| true
|
2,807,703,779
|
[inductor][5/N] triton support post-#5512, fix 1 and None handling
|
davidberard98
|
closed
|
[
"Merged",
"ciflow/trunk",
"topic: bug fixes",
"module: inductor",
"ciflow/inductor",
"release notes: inductor"
] | 15
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #145515
This fixes handling for "1" and "None" args with new Triton versions. TL;DR: triton_meta["constants"] (which is passed to ASTSource) should be a map of {"kwarg_name": constant_value} for values which are tl.constexpr, or have a value of 1 or None (i.e. "specialized" constants). For constant args, triton_meta["signature"][arg_name] should be "constexpr" (even for specialized constants).
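For reference, a hand-written illustration of the layout described above for a hypothetical kernel `kernel(x_ptr, n_elements, BLOCK: tl.constexpr)` launched with `n_elements=1` (argument names are made up; the real dict is produced by inductor's codegen):
```python
# Illustrative only: "specialized" constants (value 1 or None) and tl.constexpr
# args both appear in "constants", and their signature entry is "constexpr".
triton_meta = {
    "signature": {
        "x_ptr": "*fp32",
        "n_elements": "constexpr",  # specialized because its runtime value is 1
        "BLOCK": "constexpr",       # an actual tl.constexpr argument
    },
    "constants": {
        "n_elements": 1,
        "BLOCK": 128,
    },
}
```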
Note: This adds support for Triton versions after 5512; but not for versions in between 5220 and 5512 (i.e. `TritonAttrsDescriptorVersion.V3_BACKENDS_TUPLE`). There's a completely different format for constants/signature in the commit range in between.
To test: I ran `test_torchinductor.py` and `test_triton_kernels.py` with the main branch of triton (~jan 27). The only failing tests are aoti-related tests (which need to be fixed as a follow-up), and test_mutable_custom_op_fixed_layout2_cuda (which is failing with or without the new triton version on my machine); additionally, the split-scan/split-reduction kernels rely on https://github.com/triton-lang/triton/pull/5723.
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @ColinPeppler @amjames @desertfire @chauhang @aakhundov
| true
|
2,807,668,789
|
Can't properly implement backward method for custom op in C++ when the op takes List of tensors as argument
|
borisfom
|
closed
|
[
"module: cpp",
"triaged",
"module: custom-operators"
] | 1
|
CONTRIBUTOR
|
### 🐛 Describe the bug
We have to define our custom ops in C++ because I need to run them in C++ (see https://github.com/pytorch/pytorch/issues/143773).
I was able to port most of our custom ops from Python to C++ successfully, except for one case where the operation takes its inputs as a list of tensors. Below is a short repro illustrating the problem. It crashes with:
terminate called after throwing an instance of 'c10::Error'
what(): element 0 of tensors does not require grad and does not have a grad_fn
Is there a way around it in C++, or do I have to change my custom op to take an unrolled list as args? The list has an optional entry, too.
I was following Perplexity's advice that the nested gradient list should be returned unrolled in this case (it also wrote the code below).
But it looks like we do not even get to backward being called in the first place (no printout for needs_grad).
Is there another way to do that? @zou3519 @xmfan @yushangdi @desertfire
Same operation defined in Python, works fine with backward returning nested list of gradients:
```
#include <torch/torch.h>
#include <vector>
#include <iostream>

class CustomFunction : public torch::autograd::Function<CustomFunction> {
 public:
  // Forward pass
  static torch::Tensor forward(torch::autograd::AutogradContext *ctx,
                               const std::vector<torch::Tensor>& inputs) {
    // Save input tensors for backward computation
    ctx->save_for_backward(inputs);
    // Example operation: sum all tensors in the list
    torch::Tensor output = torch::zeros_like(inputs[0]);
    for (const auto& tensor : inputs) {
      output += tensor;
    }
    return output;
  }

  // Backward pass
  static torch::autograd::variable_list backward(torch::autograd::AutogradContext *ctx,
                                                 torch::autograd::variable_list grad_outputs) {
    // Retrieve saved tensors
    auto saved_inputs = ctx->get_saved_variables();
    std::cerr<<"Needs grad: "<<ctx->needs_input_grad(0)<<", "<<ctx->needs_input_grad(1)<<std::endl;
    // Compute gradients for each input tensor
    torch::Tensor grad_output = grad_outputs[0];
    torch::autograd::variable_list grad_inputs;
    for (const auto& tensor : saved_inputs) {
      grad_inputs.push_back(grad_output.clone());
    }
    return grad_inputs; // Return as a flat variable_list
  }
};

int main() {
  // Create input tensors
  auto t1 = torch::tensor({1.0}, torch::requires_grad());
  auto t2 = torch::tensor({2.0}, torch::requires_grad());
  std::vector<torch::Tensor> inputs = {t1, t2};
  // Forward pass
  auto output = CustomFunction::apply(inputs);
  // Backward pass
  output.backward();
  // Print gradients
  for (size_t i = 0; i < inputs.size(); ++i) {
    std::cout << "Gradient for input " << i << ": " << inputs[i].grad() << std::endl;
  }
  return 0;
}
```
### Versions
Pytorch nightly.
cc @jbschlosser @zou3519 @bdhirsh @penguinwu @yf225 @chauhang
| true
|
2,807,662,721
|
Don't fail if fresh_inductor_cache fails to clean up its tmp dir.
|
masnesral
|
closed
|
[
"Merged",
"ciflow/trunk",
"topic: not user facing",
"module: inductor",
"ciflow/inductor"
] | 3
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #145513
Summary: I see we have a test failure due to an error removing the tmp dir: https://github.com/pytorch/pytorch/issues/141761. Seems like we should not raise an exception for this case in general. Also, let's clean up the exception handling related to windows. The comment makes it sound like we want to specifically ignore failures cleaning up, but the current impl is swallowing all exceptions.
Fixes #141761
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @ColinPeppler @amjames @desertfire @chauhang @aakhundov
| true
|
2,807,649,485
|
Add missing autoreleasepool around runUniqueGraph to prevent leaks
|
jhavukainen
|
closed
|
[
"module: memory usage",
"triaged",
"open source",
"Merged",
"module: mps",
"release notes: mps",
"ciflow/mps"
] | 5
|
COLLABORATOR
|
References were held onto longer than needed. Added autoreleasepool around the runUniqueGraph to allow the memory to be freed.
Fixes #145151
cc @kulinseth @albanD @malfet @DenisVieriu97
| true
|
2,807,646,776
|
Activation Checkpointing composability with split backward computation
|
H-Huang
|
open
|
[
"oncall: distributed",
"module: activation checkpointing",
"triaged",
"module: pipelining"
] | 1
|
MEMBER
|
Activation checkpointing avoids saving intermediate tensors in order to save memory. It does so by recomputing the forward pass on demand to obtain the intermediate values required for gradient computation during backward.
For pipelining, we are splitting up the backward computation into `stage_backward_input` and `stage_backward_weight` (https://github.com/pytorch/pytorch/blob/main/torch/distributed/pipelining/_backward.py) which represent the derivative with respect to input and the derivative with respect to weights, respectively. This is mentioned in more detail in the [ZeroBubble paper](https://arxiv.org/pdf/2401.10241). We are computing backwards for each parameter group which leads to repeated forward computation that is impacting performance.
## Minimal example
```python
import torch
from torch.utils.checkpoint import checkpoint
a = torch.tensor(1., requires_grad=True)
b = torch.tensor(1., requires_grad=True)
def fn(x, z):
    print("computing forward")
    y = x.sin().exp() * z
    return y, y.sin().exp()
# compute forward under AC
y, out = checkpoint(fn, a, b, use_reentrant=False)
# 1. compute wrt y and x
da, dy = torch.autograd.grad(out, inputs=(a, y), retain_graph=True)
# 2. compute wrt z using y's grad
db = torch.autograd.grad((y,), inputs=(b,), grad_outputs=(dy,))
```
Output:
```
computing forward
computing forward
computing forward
```
## torch.distributed.pipelining minimal example
```python
import torch
from torch.distributed.pipelining._backward import stage_backward_input, stage_backward_weight
import torch.nn as nn
from torch.distributed.algorithms._checkpoint.checkpoint_wrapper import checkpoint_wrapper
def example(checkpoint):
    torch.manual_seed(0)
    x = torch.tensor((1.,), requires_grad=True)

    class SimpleModel(nn.Module):
        def __init__(self, num_layers=5):
            super().__init__()
            self.layers = nn.Sequential(*[nn.Linear(1, 1) for _ in range(num_layers)])

        def forward(self, x):
            print("computing forward")
            return self.layers(x)

    model = SimpleModel()
    if checkpoint:
        model = checkpoint_wrapper(model)
    y = model(x)
    loss = y.mean()
    dx, param_groups = stage_backward_input([loss], output_grads=None, input_values=[x], weights=model.parameters())
    print(f"{len(param_groups)=}")
    stage_backward_weight(model.parameters(), param_groups=param_groups)

print("without AC:", "=" * 25)
example(checkpoint=False)
print("with AC:", "=" * 25)
example(checkpoint=True)
```
Output:
```
without AC: =========================
computing forward
len(param_groups)=5
with AC: =========================
computing forward
computing forward
len(param_groups)=5
computing forward
computing forward
computing forward
computing forward
computing forward
```
## Potential solution
@soulitzer mentioned potentially adding a flag to AC to cache the recomputed intermediates. We still need to discuss what the API would look like for that and how we could explicitly free the intermediates once we decide we are done.
For pipelining, saving the intermediates would keep them alive in memory between `backward_input` and `backward_weight`, but this should still help peak memory. Ideally we could have a situation where forward only needs to be called once for `backward_input` and once for `backward_weight`.
cc @awgu @kwen2501 @wanchaol @fegin @fduwjj @wz337 @wconstab @d4l3k @c-p-i-o @soulitzer
| true
|
2,807,637,955
|
Add istft option to align window for center = false
|
jackzhxng
|
closed
|
[
"Stale",
"release notes: onnx"
] | 2
|
CONTRIBUTOR
|
Following up from https://github.com/pytorch/pytorch/pull/145324, also add the `align_to_window` parameter for the inverse short-time Fourier transform op.
_PENDING: stft round trip tests for center = false and align_window = true_
Pr chain:
- [Advance past fc window for stft center #145437](https://github.com/pytorch/pytorch/pull/145437)
- [Add stft option to align window for center = false #145324](https://github.com/pytorch/pytorch/pull/145324)
- -> [Add istft option to align window for center = false](https://github.com/pytorch/pytorch/pull/145510)
| true
|
2,807,601,072
|
[dynamo][guards] Log guard latency to tlparse
|
anijain2305
|
closed
|
[
"Merged",
"ciflow/trunk",
"topic: not user facing",
"module: dynamo",
"ciflow/inductor"
] | 6
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #145132
* __->__ #145509
Example

cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @chauhang @amjames
| true
|
2,807,587,301
|
[ROCm] Bump AOTriton to 0.8.2b
|
xinyazhang
|
closed
|
[
"module: rocm",
"open source",
"Merged",
"ciflow/trunk",
"topic: not user facing",
"keep-going",
"ciflow/rocm"
] | 6
|
COLLABORATOR
|
We received reports that AOTriton kernels mishandle the bias pointer, causing NaNs when fine-tuning the llama3.2-11b vision model. This PR fixes the problem.
Note: AOTriton 0.8.1b adds head dimension 512 support and thus the binary size increases, but this support is considered experimental and will not be enabled right now.
cc @jeffdaily @sunway513 @jithunnair-amd @pruthvistony @ROCmSupport @dllehr-amd @jataylo @hongxiayang @naromero77amd
| true
|
2,807,578,078
|
[c10d] fix memory leak on shutdown
|
c-p-i-o
|
closed
|
[
"oncall: distributed",
"fb-exported",
"Merged",
"ciflow/trunk",
"release notes: distributed (c10d)"
] | 6
|
CONTRIBUTOR
|
Summary:
Fix memory leak on shutdown when socket is closed.
We still need to free the buffer to make valgrind happy.
Test Plan:
Use `mtiavm`.
Repro steps provided by cristianlume.
on window 1:
```
vm ssh --vm=0 -- $(buck run @//neteng/ai/rdma_gen/mode/owl //neteng/ai/rdma_gen:rdma_gen --emit-shell) --rdma_mode=mtiav1 --num_ranks=2
```
on window 2:
```
vm ssh --vm=1 -- $(buck run @//neteng/ai/rdma_gen/mode/owl //neteng/ai/rdma_gen:rdma_gen --emit-shell) --rdma_mode=mtiav1 --num_ranks=2 --rank=1 --store_host=172.16.1.1
```
without the fix:
```
==8766==ERROR: LeakSanitizer: detected memory leaks
```
With fix, no leak
Differential Revision: D68566104
cc @H-Huang @awgu @kwen2501 @wanchaol @fegin @fduwjj @wz337 @wconstab @d4l3k
| true
|
2,807,575,914
|
serde and_ operator
|
avikchaudhuri
|
closed
|
[
"fb-exported",
"Merged",
"ciflow/trunk",
"ciflow/inductor",
"release notes: export"
] | 4
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #145506
Differential Revision: [D68565887](https://our.internmc.facebook.com/intern/diff/D68565887/)
| true
|
2,807,550,069
|
Revert "Reverting the PR adding Kleidiai-based int4 kernels (#145392)"
|
nikhil-arm
|
closed
|
[
"module: cpu",
"open source",
"Merged",
"ciflow/trunk",
"release notes: linalg_frontend",
"module: inductor",
"module: dynamo",
"ciflow/inductor",
"arm priority"
] | 4
|
COLLABORATOR
|
https://github.com/pytorch/pytorch/pull/134124 was reverted by https://github.com/pytorch/pytorch/pull/145392 due to a KleidiAI clone issue.
1. This reverts commit 0940eb6d44f3cf69dd840db990245cbe1f78e770 (https://github.com/pytorch/pytorch/pull/145392) and fixes the KleidiAI mirror issue.
2. KleidiAI is now cloned from the GitHub mirror instead of Arm's GitLab.
Change-Id: I7d6eee7214cd117d3057d615936fcc3ee6052fa2
Fixes https://github.com/pytorch/pytorch/issues/145273
cc @jgong5 @mingfeima @XiaobingSuper @sanchitintel @ashokei @jingxu10 @voznesenskym @penguinwu @EikanWang @Guobing-Chen @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @ColinPeppler @amjames @desertfire @chauhang @aakhundov
| true
|
2,807,528,915
|
Resolve zip file permission issue when uploading artifacts on ROCm MI300 CI runners
|
amdfaa
|
closed
|
[
"module: rocm",
"open source",
"Merged",
"topic: not user facing",
"ciflow/unstable",
"ciflow/rocm",
"ci-no-td"
] | 10
|
CONTRIBUTOR
|
E.g.: https://github.com/pytorch/pytorch/actions/runs/13500418791/job/37719437613#step:19:120
```
Beginning upload of artifact content to blob storage
Error: An error has occurred while creating the zip file for upload
Error: EACCES: permission denied, open '/home/runner/_work/pytorch/pytorch/test/test-reports/backends.xeon.test_launch_1.1_22ba1133f3fcd140_.log'
/home/runner/_work/_actions/actions/upload-artifact/v4/dist/upload/index.js:3459
throw new Error('An error has occurred during zip creation for the artifact');
^
```
cc @jeffdaily @sunway513 @jithunnair-amd @pruthvistony @ROCmSupport @dllehr-amd @jataylo @hongxiayang @naromero77amd
| true
|
2,807,528,896
|
Missing autorelease in lstm_mps caused a ton of leaked memory
|
jhavukainen
|
closed
|
[
"open source",
"Merged",
"release notes: mps",
"ciflow/mps"
] | 4
|
COLLABORATOR
|
The dictionary held onto the new MPSGraphTensorData objects and MPSNDArrays. Regression caused by https://github.com/pytorch/pytorch/pull/95137
Fixes #145374
| true
|
2,807,522,519
|
[Submodule] Add flash as third-party submodule [Prep for later PRs]
|
drisspg
|
closed
|
[
"module: cuda",
"Merged",
"ciflow/trunk",
"topic: not user facing"
] | 12
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #145502
# Context
Prototyped in https://github.com/pytorch/pytorch/pull/144120; we are going to make flash-attention a third-party submodule. We will then use its C++ sources and include them in our build of libtorch.so.
This requires various changes, both external and internal. Since internal changes are required we need to co-dev, and in the co-dev environment I haven't found a way to sync submodule changes together with internal-only changes.
This is unused for now.
cc @ptrblck @msaroufim @eqy
| true
|
2,807,520,186
|
pip failure when trying to download nightly whl from download.pytorch.org: ERROR: THESE PACKAGES DO NOT MATCH THE HASHES FROM THE REQUIREMENTS FILE
|
atalman
|
open
|
[
"module: binaries",
"triaged"
] | 2
|
CONTRIBUTOR
|
### 🐛 Describe the bug
This is the failure on Jan 23, 2025:
https://github.com/pytorch/pytorch/actions/runs/12926060185/job/36048291540#step:4:124
Log:
```
Downloading https://download.pytorch.org/whl/nightly/cpu/torch-2.7.0.dev20250123%2Bcpu-cp39-cp39-manylinux_2_28_x86_64.whl (176.2 MB)
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 176.2/176.2 MB 206.5 MB/s eta 0:00:00
ERROR: THESE PACKAGES DO NOT MATCH THE HASHES FROM THE REQUIREMENTS FILE. If you have updated the package versions, please update the hashes. Otherwise, examine the package contents carefully; someone may have tampered with them.
unknown package:
Expected sha256 2c2c2e6fcafccf6ff8bf91da2508be85dc35102eabbff47f03c1932f37a2c8d4
Got 70186982b4dce3957a6a40ef722ba861220996e37a63d4b11511549b09228ba7
```
Looks like it's caused by a conflict with the sha256 value that we publish in the index:
```
<a href="/whl/nightly/cpu/torch-2.7.0.dev20250123%2Bcpu-cp39-cp39-manylinux_2_28_x86_64.whl#sha256=2c2c2e6fcafccf6ff8bf91da2508be85dc35102eabbff47f03c1932f37a2c8d4" data-dist-info-metadata="sha256=072835b8d02e10e4e450c524497d0217711bc0880c33343c0f2779204cf4da8b" data-core-metadata="sha256=072835b8d02e10e4e450c524497d0217711bc0880c33343c0f2779204cf4da8b">torch-2.7.0.dev20250123+cpu-cp39-cp39-manylinux_2_28_x86_64.whl</a>
```
Looks like the real file sha256 is 70186982b4dce3957a6a40ef722ba86, however the sha256 included in our index is: 2c2c2e6fcafccf6ff8bf91da2508be85dc35102eabbff47f03c1932f37a2c8d4
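For reference, a generic way to check which sha256 a downloaded wheel actually has (a sketch using the standard library, not part of the CI tooling; the path is a placeholder):
```python
import hashlib

def sha256_of(path: str, chunk_size: int = 1 << 20) -> str:
    # Compute the sha256 of a local file in chunks to avoid loading it all at once.
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            h.update(chunk)
    return h.hexdigest()

print(sha256_of("torch-2.7.0.dev20250123+cpu-cp39-cp39-manylinux_2_28_x86_64.whl"))
```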
This was mitigated by reverting
https://github.com/pytorch/pytorch/pull/144887 and https://github.com/pytorch/test-infra/pull/6172
Landing PR to exclude sha256 from nightly index:
https://github.com/pytorch/test-infra/pull/6213
The following additional tests are required before we reland this change and before release 2.6:
1. Follow up and make sure we can reintroduce the sha256 index calculation correctly
2. Test the current production configuration before release 2.6 and make sure we don't introduce breakage (since sha256 information is included in production)
### Versions
2.7.0
cc @seemethere @malfet @osalpekar
| true
|
2,807,503,343
|
`torch._inductor.aoti_compile_and_package` fails when using dynamic shapes (PyTorch 2.6.0 RC)
|
dt-synth
|
closed
|
[
"high priority",
"triaged",
"oncall: pt2",
"module: dynamic shapes",
"oncall: export",
"module: aotinductor"
] | 7
|
NONE
|
### 🐛 Describe the bug
When I try to aot compile a simple example with dynamic shapes, it fails due to invalid generated C++ code.
Example:
```python
import torch
class Model(torch.nn.Module):
    def forward(self, x):
        return x + 1

model = Model().eval()
input = torch.randn(10)
dim = torch.export.Dim("dim_0")
dim_even = 2 * dim
exported_program = torch.export.export(
    model,
    args=(input,),
    dynamic_shapes=({0: dim_even},),
)
torch._inductor.aoti_compile_and_package(exported_program)
```
Error message:
```
/tmp/torchinductor_danielthul/ctokdncd3lqh3uklcegfj2fanld6jbwqtpmk7s2lpcf74fkadoy4/c3tltwsgny45t3ykmpa6l7hab3gk6ekgaylyxcimtihuvyuynwut.cpp: In member function ‘void torch::aot_inductor::AOTInductorModel::run_impl(AtenTensorOpaque**, AtenTensorOpaque**, torch::aot_inductor::DeviceStreamType, AOTIProxyExecutorHandle)’:
/tmp/torchinductor_danielthul/ctokdncd3lqh3uklcegfj2fanld6jbwqtpmk7s2lpcf74fkadoy4/c3tltwsgny45t3ykmpa6l7hab3gk6ekgaylyxcimtihuvyuynwut.cpp:470:39: error: ‘s1’ was not declared in this scope; did you mean ‘y1’?
470 | const int64_t int_array_0[] = {2L*s1, };
| ^~
| y1
[...]
torch._inductor.exc.CppCompileError: C++ compile error
Command:
g++ /tmp/torchinductor_danielthul/ctokdncd3lqh3uklcegfj2fanld6jbwqtpmk7s2lpcf74fkadoy4/c3tltwsgny45t3ykmpa6l7hab3gk6ekgaylyxcimtihuvyuynwut.cpp -D TORCH_INDUCTOR_CPP_WRAPPER -D STANDALONE_TORCH_HEADER -D C10_USING_CUSTOM_GENERATED_MACROS -D CPU_CAPABILITY_AVX2 -fPIC -O3 -DNDEBUG -fno-trapping-math -funsafe-math-optimizations -ffinite-math-only -fno-signed-zeros -fno-math-errno -fexcess-precision=fast -fno-finite-math-only -fno-unsafe-math-optimizations -ffp-contract=off -fno-tree-loop-vectorize -march=native -Wall -std=c++17 -Wno-unused-variable -Wno-unknown-pragmas -fopenmp -I/home/danielthul/.local/share/uv/python/cpython-3.11.10-linux-x86_64-gnu/include/python3.11 -I/home/danielthul/dev/example-project/.venv/lib/python3.11/site-packages/torch/include -I/home/danielthul/dev/example-project/.venv/lib/python3.11/site-packages/torch/include/torch/csrc/api/include -I/home/danielthul/dev/example-project/.venv/lib/python3.11/site-packages/torch/include/TH -I/home/danielthul/dev/example-project/.venv/lib/python3.11/site-packages/torch/include/THC -mavx2 -mfma -mf16c -D_GLIBCXX_USE_CXX11_ABI=0 -c -o /tmp/torchinductor_danielthul/ctokdncd3lqh3uklcegfj2fanld6jbwqtpmk7s2lpcf74fkadoy4/c3tltwsgny45t3ykmpa6l7hab3gk6ekgaylyxcimtihuvyuynwut.o
```
This is the auto-generated C++ function that fails to compile:
```c++
void AOTInductorModel::run_impl(
    AtenTensorHandle*
        input_handles, // array of input AtenTensorHandle; handles
                       // are stolen; the array itself is borrowed
    AtenTensorHandle*
        output_handles, // array for writing output AtenTensorHandle; handles
                        // will be stolen by the caller; the array itself is
                        // borrowed
    DeviceStreamType stream,
    AOTIProxyExecutorHandle proxy_executor
) {
    auto inputs = steal_from_raw_handles_to_raii_handles(input_handles, 1);
    auto arg0_1 = std::move(inputs[0]);
    inputs.clear();
    auto& kernels = static_cast<AOTInductorModelKernels&>(*this->kernels_.get());

    const int64_t int_array_0[] = {2L*s1, };
    static constexpr int64_t int_array_1[] = {1L, };
    AtenTensorHandle buf0_handle;
    AOTI_TORCH_ERROR_CODE_CHECK(aoti_torch_empty_strided(1, int_array_0, int_array_1, cached_torch_dtype_float32, cached_torch_device_type_cpu, this->device_idx_, &buf0_handle));
    RAIIAtenTensorHandle buf0(buf0_handle);
    cpp_fused_add_0((const float*)(arg0_1.data_ptr()), (float*)(buf0.data_ptr()), s1);
    arg0_1.reset();
    output_handles[0] = buf0.release();
} // AOTInductorModel::run_impl
```
It tries to use the dynamic shape `s1` (which corresponds to `dim` from the Python code), but `s1` is never declared in the C++ code.
I'm using the latest build of PyTorch 2.6 (installed using `uv pip install torch==2.6.0+cu124 --index-url https://download.pytorch.org/whl/test`)
Update: also fails on PyTorch 2.7.0.dev20250123+cu124
### Versions
Collecting environment information...
PyTorch version: 2.6.0+cu124
Is debug build: False
CUDA used to build PyTorch: 12.4
ROCM used to build PyTorch: N/A
OS: Ubuntu 22.04.5 LTS (x86_64)
GCC version: (Ubuntu 11.4.0-1ubuntu1~22.04) 11.4.0
Clang version: Could not collect
CMake version: version 3.22.1
Libc version: glibc-2.35
Python version: 3.11.10 (main, Oct 16 2024, 04:38:48) [Clang 18.1.8 ] (64-bit runtime)
Python platform: Linux-6.5.0-1024-aws-x86_64-with-glibc2.35
Is CUDA available: True
CUDA runtime version: Could not collect
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration: GPU 0: NVIDIA L40S
Nvidia driver version: 550.90.07
cuDNN version: Could not collect
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 48 bits physical, 48 bits virtual
Byte Order: Little Endian
CPU(s): 16
On-line CPU(s) list: 0-15
Vendor ID: AuthenticAMD
Model name: AMD EPYC 7R13 Processor
CPU family: 25
Model: 1
Thread(s) per core: 2
Core(s) per socket: 8
Socket(s): 1
Stepping: 1
BogoMIPS: 5299.99
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ht syscall nx mmxext fxsr_opt pdpe1gb rdtscp lm constant_tsc rep_good nopl nonstop_tsc cpuid extd_apicid aperfmperf tsc_known_freq pni pclmulqdq ssse3 fma cx16 pcid sse4_1 sse4_2 x2apic movbe popcnt aes xsave avx f16c rdrand hypervisor lahf_lm cmp_legacy cr8_legacy abm sse4a misalignsse 3dnowprefetch topoext invpcid_single ssbd ibrs ibpb stibp vmmcall fsgsbase bmi1 avx2 smep bmi2 invpcid rdseed adx smap clflushopt clwb sha_ni xsaveopt xsavec xgetbv1 clzero xsaveerptr rdpru wbnoinvd arat npt nrip_save vaes vpclmulqdq rdpid
Hypervisor vendor: KVM
Virtualization type: full
L1d cache: 256 KiB (8 instances)
L1i cache: 256 KiB (8 instances)
L2 cache: 4 MiB (8 instances)
L3 cache: 32 MiB (1 instance)
NUMA node(s): 1
NUMA node0 CPU(s): 0-15
Vulnerability Gather data sampling: Not affected
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Not affected
Vulnerability Retbleed: Not affected
Vulnerability Spec rstack overflow: Vulnerable: Safe RET, no microcode
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Retpolines; IBPB conditional; IBRS_FW; STIBP always-on; RSB filling; PBRSB-eIBRS Not affected; BHI Not affected
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Versions of relevant libraries:
[pip3] numpy==1.26.4
[pip3] nvidia-cublas-cu12==12.4.5.8
[pip3] nvidia-cuda-cupti-cu12==12.4.127
[pip3] nvidia-cuda-nvrtc-cu12==12.4.127
[pip3] nvidia-cuda-runtime-cu12==12.4.127
[pip3] nvidia-cudnn-cu12==9.1.0.70
[pip3] nvidia-cufft-cu12==11.2.1.3
[pip3] nvidia-curand-cu12==10.3.5.147
[pip3] nvidia-cusolver-cu12==11.6.1.9
[pip3] nvidia-cusparse-cu12==12.3.1.170
[pip3] nvidia-cusparselt-cu12==0.6.2
[pip3] nvidia-nccl-cu12==2.21.5
[pip3] nvidia-nvjitlink-cu12==12.4.127
[pip3] nvidia-nvtx-cu12==12.4.127
[pip3] torch==2.6.0+cu124
[pip3] torch-tb-profiler==0.4.3
[pip3] torchaudio==2.5.1
[pip3] torchvision==0.21.0+cu124
[pip3] triton==3.2.0
[conda] Could not collect
cc @ezyang @gchanan @zou3519 @kadeng @msaroufim @chauhang @penguinwu @bobrenjc93 @avikchaudhuri @gmagogsfm @zhxchen17 @tugsbayasgalan @angelayi @suo @ydwu4 @desertfire @chenyang78
| true
|
2,807,461,442
|
Remove truncated normal initialization for 16-bit (and lower) tensors
|
hjlee1371
|
closed
|
[
"module: nn",
"triaged",
"open source",
"Stale"
] | 4
|
NONE
|
Fixes #145498
cc @albanD @mruberry @jbschlosser @walterddr @mikaylagawarecki
| true
|
2,807,460,897
|
Unexpected behavior of `torch.nn.init.trunc_normal` with bf16 tensors
|
hjlee1371
|
open
|
[
"module: distributions",
"module: nn",
"module: cpu",
"triaged",
"module: random"
] | 5
|
NONE
|
### 🐛 Describe the bug
Hi, I found that `torch.nn.init.trunc_normal` produces unexpected output for bf16 tensors due to the precision issue. When running the following code,
```python
import torch
fp32_tensor = torch.empty(100000)
bf16_tensor = torch.empty(100000).bfloat16()
torch.nn.init.trunc_normal_(fp32_tensor, std=0.01)
torch.nn.init.trunc_normal_(bf16_tensor, std=0.01)
casted_bf16_tensor = fp32_tensor.bfloat16()
print("# mean=0, std=0.01, a=-2, b=2")
print("fp32")
print(fp32_tensor.mean())
print(fp32_tensor.std())
print("bf16")
print(bf16_tensor.mean())
print(bf16_tensor.std())
print("casted bf16")
print(casted_bf16_tensor.mean())
print(casted_bf16_tensor.std())
```
it prints as follows.
```
# mean=0, std=0.01, a=-2, b=2
fp32
tensor(-6.6384e-07)
tensor(0.0100)
bf16
tensor(-0.0081, dtype=torch.bfloat16)
tensor(0.1270, dtype=torch.bfloat16)
casted bf16
tensor(-6.8918e-07, dtype=torch.bfloat16)
tensor(0.0100, dtype=torch.bfloat16)
```
https://github.com/pytorch/pytorch/blob/bf4f8919df8ee88e356b407bb84ed818ebfb407b/torch/nn/init.py#L38-L59
I found that this issue originates from the bf16 uniform sampling. For sufficiently small stds (which are typical for recent large models), `l` and `u` become 0 and 1. In this case, `trunc_normal_` draws bf16 numbers from a uniform distribution (0, 1). However, because bf16 has only 7 mantissa bits, a nontrivial portion of them becomes exactly zero, which produces -inf after `erfinv` and leads to truncation to `a`, shifting the distribution. You can confirm this through the following code. When using a large negative `a`, the shift becomes significantly problematic.
```python
import torch
fp32_tensor = torch.empty(100000)
bf16_tensor = torch.empty(100000).bfloat16()
torch.nn.init.trunc_normal_(fp32_tensor, std=0.01, a=-100, b=100)
torch.nn.init.trunc_normal_(bf16_tensor, std=0.01, a=-100, b=100)
casted_bf16_tensor = fp32_tensor.bfloat16()
print("# mean=0, std=0.01, a=-100, b=100")
print("fp32")
print(fp32_tensor.mean())
print(fp32_tensor.std())
print("bf16")
print(bf16_tensor.mean())
print(bf16_tensor.std())
print("casted bf16")
print(casted_bf16_tensor.mean())
print(casted_bf16_tensor.std())
torch.nn.init.uniform_(bf16_tensor)
print("\n# ratio of zeros for bf16 tensor w/ uniform sampling")
print(bf16_tensor.eq(0).sum() / bf16_tensor.numel())
```
```
# mean=0, std=0.01, a=-100, b=100
fp32
tensor(7.5750e-05)
tensor(0.0100)
bf16
tensor(-0.3789, dtype=torch.bfloat16)
tensor(6.1250, dtype=torch.bfloat16)
casted bf16
tensor(7.5817e-05, dtype=torch.bfloat16)
tensor(0.0100, dtype=torch.bfloat16)
# ratio of zeros for bf16 tensor w/ uniform sampling
tensor(0.0040)
```
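A practical workaround consistent with the "casted bf16" numbers above is to run the initialization in fp32 and copy the result into the low-precision tensor. A minimal sketch (the helper name is made up, not an existing API):
```python
import torch

def trunc_normal_lowp_(tensor, mean=0.0, std=1.0, a=-2.0, b=2.0):
    # Sketch: sample in fp32, then copy (and thereby cast) into the
    # bf16/fp16 tensor, avoiding the bf16 uniform/erfinv path entirely.
    with torch.no_grad():
        tmp = torch.empty(tensor.shape, dtype=torch.float32, device=tensor.device)
        torch.nn.init.trunc_normal_(tmp, mean=mean, std=std, a=a, b=b)
        tensor.copy_(tmp)
    return tensor

bf16_tensor = torch.empty(100000, dtype=torch.bfloat16)
trunc_normal_lowp_(bf16_tensor, std=0.01)
print(bf16_tensor.mean(), bf16_tensor.std())  # close to the fp32 statistics
```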
Note that CUDA implementation of bf16 `erfinv` is removed in https://github.com/pytorch/pytorch/issues/57707 due to precision issue. See also https://github.com/jax-ml/jax/discussions/13798.
### Versions
PyTorch version: 2.4.1+cu124
Is debug build: False
CUDA used to build PyTorch: 12.4
ROCM used to build PyTorch: N/A
OS: Ubuntu 22.04.5 LTS (x86_64)
GCC version: (Ubuntu 11.4.0-1ubuntu1~22.04) 11.4.0
Clang version: Could not collect
CMake version: Could not collect
Libc version: glibc-2.35
Python version: 3.11.10 (main, Sep 7 2024, 18:35:41) [GCC 11.4.0] (64-bit runtime)
Python platform: Linux-6.8.0-49-generic-x86_64-with-glibc2.35
Is CUDA available: True
CUDA runtime version: 12.4.131
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration: GPU 0: NVIDIA GeForce RTX 4090
Nvidia driver version: 565.57.01
cuDNN version: Could not collect
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 48 bits physical, 48 bits virtual
Byte Order: Little Endian
CPU(s): 32
On-line CPU(s) list: 0-31
Vendor ID: AuthenticAMD
Model name: AMD Ryzen 9 7950X 16-Core Processor
CPU family: 25
Model: 97
Thread(s) per core: 2
Core(s) per socket: 16
Socket(s): 1
Stepping: 2
CPU max MHz: 5881.0000
CPU min MHz: 400.0000
BogoMIPS: 8982.90
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ht syscall nx mmxext fxsr_opt pdpe1gb rdtscp lm constant_tsc rep_good amd_lbr_v2 nopl nonstop_tsc cpuid extd_apicid aperfmperf rapl pni pclmulqdq monitor ssse3 fma cx16 sse4_1 sse4_2 movbe popcnt aes xsave avx f16c rdrand lahf_lm cmp_legacy svm extapic cr8_legacy abm sse4a misalignsse 3dnowprefetch osvw ibs skinit wdt tce topoext perfctr_core perfctr_nb bpext perfctr_llc mwaitx cpb cat_l3 cdp_l3 hw_pstate ssbd mba perfmon_v2 ibrs ibpb stibp ibrs_enhanced vmmcall fsgsbase bmi1 avx2 smep bmi2 erms invpcid cqm rdt_a avx512f avx512dq rdseed adx smap avx512ifma clflushopt clwb avx512cd sha_ni avx512bw avx512vl xsaveopt xsavec xgetbv1 xsaves cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local user_shstk avx512_bf16 clzero irperf xsaveerptr rdpru wbnoinvd cppc arat npt lbrv svm_lock nrip_save tsc_scale vmcb_clean flushbyasid decodeassists pausefilter pfthreshold avic v_vmsave_vmload vgif x2avic v_spec_ctrl vnmi avx512vbmi umip pku ospke avx512_vbmi2 gfni vaes vpclmulqdq avx512_vnni avx512_bitalg avx512_vpopcntdq rdpid overflow_recov succor smca fsrm flush_l1d
Virtualization: AMD-V
L1d cache: 512 KiB (16 instances)
L1i cache: 512 KiB (16 instances)
L2 cache: 16 MiB (16 instances)
L3 cache: 64 MiB (2 instances)
NUMA node(s): 1
NUMA node0 CPU(s): 0-31
Vulnerability Gather data sampling: Not affected
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Not affected
Vulnerability Reg file data sampling: Not affected
Vulnerability Retbleed: Not affected
Vulnerability Spec rstack overflow: Mitigation; Safe RET
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Enhanced / Automatic IBRS; IBPB conditional; STIBP always-on; RSB filling; PBRSB-eIBRS Not affected; BHI Not affected
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Versions of relevant libraries:
[pip3] numpy==1.26.3
[pip3] nvidia-cublas-cu12==12.4.2.65
[pip3] nvidia-cuda-cupti-cu12==12.4.99
[pip3] nvidia-cuda-nvrtc-cu12==12.4.99
[pip3] nvidia-cuda-runtime-cu12==12.4.99
[pip3] nvidia-cudnn-cu12==9.1.0.70
[pip3] nvidia-cufft-cu12==11.2.0.44
[pip3] nvidia-curand-cu12==10.3.5.119
[pip3] nvidia-cusolver-cu12==11.6.0.99
[pip3] nvidia-cusparse-cu12==12.3.0.142
[pip3] nvidia-nccl-cu12==2.20.5
[pip3] nvidia-nvjitlink-cu12==12.4.99
[pip3] nvidia-nvtx-cu12==12.4.99
[pip3] torch==2.4.1+cu124
[pip3] torchaudio==2.4.1+cu124
[pip3] torchvision==0.19.1+cu124
[pip3] triton==3.0.0
[conda] Could not collect
cc @fritzo @neerajprad @alicanb @nikitaved @albanD @mruberry @jbschlosser @walterddr @mikaylagawarecki @jgong5 @mingfeima @XiaobingSuper @sanchitintel @ashokei @jingxu10 @pbelevich
| true
|
2,807,452,155
|
qmul.cpp:34:10: error: redefinition of 'xnn_binary_params' 34 | struct xnn_binary_params { | ^
|
dbl001
|
closed
|
[
"module: build",
"topic: build"
] | 1
|
NONE
|
### 🐛 Describe the bug
I am getting errors building PyTorch on macOS 15.2 on a 2021 27" iMac:
qmul.cpp:34:10: error: redefinition of 'xnn_binary_params'
   34 | struct xnn_binary_params {
      |        ^
I got these errors before and made changes to work around the issue(s) in qmul.cpp, AveragePooling.cpp, and Activation.cpp.
```
export CLANG=/usr/bin/clang
export CC=/usr/bin/clang
export OBJC=/usr/bin/clang
export CC_FOR_BUILD=/usr/bin/clang
export OBJC_FOR_BUILD=/usr/bin/clang
export CXX=/usr/bin/clang++
```
We tracked the changes to:
" qmul.cpp lost its binary operation handling."
$ git show cca34be584467a622a984a0421d886fb26f7dda7 -- aten/src/ATen/native/quantized/cpu/qmul.cpp
Here's the previous version:
git show c5cc5684bc0:aten/src/ATen/native/quantized/cpu/qmul.cpp
This is what was added to: qmul.cpp, AveragePooling.cpp, Activation.cpp, to get the code to compile
qmul.cpp
```
#define TORCH_ASSERT_ONLY_METHOD_OPERATORS
#include <ATen/core/Tensor.h>
// Keep all existing includes

extern "C" {
struct xnn_binary_params {
  double output_min;
  double output_max;
};

enum xnn_binary_operator {
  xnn_binary_invalid = -1,
  xnn_binary_add,
  xnn_binary_subtract,
  xnn_binary_multiply,
  xnn_binary_divide
};

enum xnn_status xnn_define_binary(
    xnn_subgraph_t subgraph,
    enum xnn_binary_operator type,
    const struct xnn_binary_params* params,
    uint32_t input1_id,
    uint32_t input2_id,
    uint32_t output_id,
    uint32_t flags);
}

namespace at::native {
```
Activation.cpp
```
#ifdef USE_XNNPACK

#include <ATen/native/xnnpack/Common.h>
#include <ATen/native/xnnpack/Engine.h>
#include <ATen/native/utils/Factory.h>
#include <xnnpack.h>

extern "C" {
enum xnn_unary_operator {
  xnn_unary_invalid = -1,
  xnn_unary_hardswish
};

enum xnn_status xnn_define_unary(
    xnn_subgraph_t subgraph,
    enum xnn_unary_operator type,
    const void* parameters,
    uint32_t input_id,
    uint32_t output_id,
    uint32_t flags);
}

namespace at::native::xnnpack {
```
AveragePooling.cpp
```
#ifdef USE_XNNPACK

#include <ATen/native/utils/Factory.h>
#include <ATen/native/xnnpack/Common.h>
#include <ATen/native/xnnpack/Engine.h>
#include <ATen/native/xnnpack/Pooling.h>
#include <xnnpack.h>

extern "C" {
enum xnn_reduce_operator {
  xnn_reduce_invalid = -1,
  xnn_reduce_sum,
  xnn_reduce_mean
};

enum xnn_status xnn_define_static_reduce(
    xnn_subgraph_t subgraph,
    enum xnn_reduce_operator reduce_operator_type,
    size_t num_reduction_axes,
    const int64_t* reduction_axes,
    uint32_t input_id,
    uint32_t output_id,
    uint32_t flags);
}

namespace at::native::xnnpack {
```
### Error logs
```
FAILED: caffe2/CMakeFiles/torch_cpu.dir/__/aten/src/ATen/native/quantized/cpu/qmul.cpp.o
/Applications/Xcode.app/Contents/Developer/Toolchains/XcodeDefault.xctoolchain/usr/bin/clang++ -DAT_PER_OPERATOR_HEADERS -DCAFFE2_BUILD_MAIN_LIB -DCPUINFO_SUPPORTED_PLATFORM=1 -DFLASHATTENTION_DISABLE_ALIBI -DFMT_HEADER_ONLY=1 -DFXDIV_USE_INLINE_ASSEMBLY=0 -DHAVE_MMAP=1 -DHAVE_SHM_OPEN=1 -DHAVE_SHM_UNLINK=1 -DMINIZ_DISABLE_ZIP_READER_CRC32_CHECKS -DNNP_CONVOLUTION_ONLY=0 -DNNP_INFERENCE_ONLY=0 -DONNXIFI_ENABLE_EXT=1 -DONNX_ML=1 -DONNX_NAMESPACE=onnx_torch -DUSE_C10D_GLOO -DUSE_DISTRIBUTED -DUSE_EXTERNAL_MZCRC -DUSE_RPC -DUSE_TENSORPIPE -DXNN_LOG_LEVEL=0 -D_FILE_OFFSET_BITS=64 -Dtorch_cpu_EXPORTS -I/Users/davidlaxer/pytorch/build/aten/src -I/Users/davidlaxer/pytorch/aten/src -I/Users/davidlaxer/pytorch/build -I/Users/davidlaxer/pytorch -I/Users/davidlaxer/pytorch/cmake/../third_party/benchmark/include -I/Users/davidlaxer/pytorch/third_party/onnx -I/Users/davidlaxer/pytorch/build/third_party/onnx -I/Users/davidlaxer/pytorch/nlohmann -I/Users/davidlaxer/pytorch/torch/csrc/api -I/Users/davidlaxer/pytorch/torch/csrc/api/include -I/Users/davidlaxer/pytorch/caffe2/aten/src/TH -I/Users/davidlaxer/pytorch/build/caffe2/aten/src/TH -I/Users/davidlaxer/pytorch/build/caffe2/aten/src -I/Users/davidlaxer/pytorch/build/caffe2/../aten/src -I/Users/davidlaxer/pytorch/torch/csrc -I/Users/davidlaxer/pytorch/third_party/miniz-3.0.2 -I/Users/davidlaxer/pytorch/third_party/kineto/libkineto/include -I/Users/davidlaxer/pytorch/third_party/kineto/libkineto/src -I/Users/davidlaxer/pytorch/third_party/cpp-httplib -I/Users/davidlaxer/pytorch/aten/src/ATen/.. -I/Users/davidlaxer/pytorch/third_party/FXdiv/include -I/Users/davidlaxer/pytorch/c10/.. -I/Users/davidlaxer/pytorch/third_party/pthreadpool/include -I/Users/davidlaxer/pytorch/third_party/cpuinfo/include -I/Users/davidlaxer/pytorch/aten/src/ATen/native/quantized/cpu/qnnpack/include -I/Users/davidlaxer/pytorch/aten/src/ATen/native/quantized/cpu/qnnpack/src -I/Users/davidlaxer/pytorch/aten/src/ATen/native/quantized/cpu/qnnpack/deps/clog/include -I/Users/davidlaxer/pytorch/third_party/NNPACK/include -I/Users/davidlaxer/pytorch/third_party/ittapi/src/ittnotify -I/Users/davidlaxer/pytorch/third_party/FP16/include -I/Users/davidlaxer/pytorch/third_party/tensorpipe -I/Users/davidlaxer/pytorch/build/third_party/tensorpipe -I/Users/davidlaxer/pytorch/third_party/tensorpipe/third_party/libnop/include -I/Users/davidlaxer/pytorch/third_party/fmt/include -I/Users/davidlaxer/pytorch/build/third_party/ideep/mkl-dnn/include -I/Users/davidlaxer/pytorch/third_party/ideep/mkl-dnn/src/../include -I/Users/davidlaxer/pytorch/third_party/flatbuffers/include -isystem /Users/davidlaxer/pytorch/build/third_party/gloo -isystem /Users/davidlaxer/pytorch/cmake/../third_party/gloo -isystem /Users/davidlaxer/pytorch/cmake/../third_party/tensorpipe/third_party/libuv/include -isystem /Users/davidlaxer/pytorch/cmake/../third_party/googletest/googlemock/include -isystem /Users/davidlaxer/pytorch/cmake/../third_party/googletest/googletest/include -isystem /Users/davidlaxer/pytorch/torch/include -isystem /Users/davidlaxer/pytorch/third_party/XNNPACK/include -isystem /Users/davidlaxer/pytorch/third_party/ittapi/include -isystem /Users/davidlaxer/pytorch/cmake/../third_party/eigen -isystem /usr/local/include -isystem /Users/davidlaxer/pytorch/third_party/ideep/include -isystem /Users/davidlaxer/pytorch/INTERFACE -isystem /Users/davidlaxer/pytorch/third_party/nlohmann/include -isystem /Users/davidlaxer/pytorch/build/include -march=core2 -mtune=haswell -mssse3 -ftree-vectorize -fPIC -fPIE 
-fstack-protector-strong -O2 -pipe -isystem /Users/davidlaxer/anaconda3/envs/AI-Feynman/include -fvisibility-inlines-hidden -DUSE_PTHREADPOOL -DNDEBUG -DUSE_KINETO -DLIBKINETO_NOCUPTI -DLIBKINETO_NOROCTRACER -DLIBKINETO_NOXPUPTI=ON -DUSE_PYTORCH_QNNPACK -DUSE_XNNPACK -DSYMBOLICATE_MOBILE_DEBUG_HANDLE -O2 -fPIC -Wall -Wextra -Werror=return-type -Werror=non-virtual-dtor -Werror=braced-scalar-init -Werror=range-loop-construct -Werror=bool-operation -Wnarrowing -Wno-missing-field-initializers -Wno-unknown-pragmas -Wno-unused-parameter -Wno-strict-overflow -Wno-strict-aliasing -Wvla-extension -Wsuggest-override -Wnewline-eof -Winconsistent-missing-override -Winconsistent-missing-destructor-override -Wno-pass-failed -Wno-error=old-style-cast -Wconstant-conversion -Qunused-arguments -fcolor-diagnostics -faligned-new -fno-math-errno -fno-trapping-math -Werror=format -DUSE_MPS -Wno-missing-braces -DHAVE_AVX512_CPU_DEFINITION -DHAVE_AVX2_CPU_DEFINITION -O3 -DNDEBUG -DNDEBUG -std=gnu++17 -isysroot /Applications/Xcode.app/Contents/Developer/Platforms/MacOSX.platform/Developer/SDKs/MacOSX.sdk -fPIC -DTORCH_USE_LIBUV -DCAFFE2_USE_GLOO -Wall -Wextra -Wdeprecated -Wno-unused-parameter -Wno-missing-field-initializers -Wno-array-bounds -Wno-unknown-pragmas -Wno-strict-overflow -Wno-strict-aliasing -Wunused-function -Wunused-variable -Wunused-private-field -Wextra-semi -Wno-error=extra-semi -fvisibility=hidden -O2 -Wmissing-prototypes -Werror=missing-prototypes -Xpreprocessor -fopenmp -MD -MT caffe2/CMakeFiles/torch_cpu.dir/__/aten/src/ATen/native/quantized/cpu/qmul.cpp.o -MF caffe2/CMakeFiles/torch_cpu.dir/__/aten/src/ATen/native/quantized/cpu/qmul.cpp.o.d -o caffe2/CMakeFiles/torch_cpu.dir/__/aten/src/ATen/native/quantized/cpu/qmul.cpp.o -c /Users/davidlaxer/pytorch/aten/src/ATen/native/quantized/cpu/qmul.cpp
/Users/davidlaxer/pytorch/aten/src/ATen/native/quantized/cpu/qmul.cpp:34:10: error: redefinition of 'xnn_binary_params'
34 | struct xnn_binary_params {
| ^
/Users/davidlaxer/pytorch/torch/include/xnnpack.h:1086:8: note: previous definition is here
1086 | struct xnn_binary_params {
| ^
/Users/davidlaxer/pytorch/aten/src/ATen/native/quantized/cpu/qmul.cpp:38:8: error: redefinition of 'xnn_binary_operator'
38 | enum xnn_binary_operator {
| ^
/Users/davidlaxer/pytorch/torch/include/xnnpack.h:1063:6: note: previous definition is here
1063 | enum xnn_binary_operator {
| ^
2 errors generated.
```
### Versions
```
% python3 collect_env.py
Collecting environment information...
PyTorch version: N/A
Is debug build: N/A
CUDA used to build PyTorch: N/A
ROCM used to build PyTorch: N/A
OS: macOS 15.2 (x86_64)
GCC version: Could not collect
Clang version: 14.0.6
CMake version: version 3.29.4
Libc version: N/A
Python version: 3.10.14 | packaged by conda-forge | (main, Mar 20 2024, 12:53:34) [Clang 16.0.6 ] (64-bit runtime)
Python platform: macOS-15.2-x86_64-i386-64bit
Is CUDA available: N/A
CUDA runtime version: Could not collect
CUDA_MODULE_LOADING set to: N/A
GPU models and configuration: Could not collect
Nvidia driver version: Could not collect
cuDNN version: Could not collect
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: N/A
CPU:
Intel(R) Core(TM) i7-10700K CPU @ 3.80GHz
Versions of relevant libraries:
[pip3] configmypy==0.2.0
[pip3] flake8==7.1.1
[pip3] mypy==1.14.1
[pip3] mypy_extensions==1.0.0
[pip3] numpy==1.26.4
[pip3] onnx==1.14.0
[pip3] onnxruntime==1.20.1
[pip3] optree==0.14.0
[pip3] torch==2.6.0a0+git161425f
[pip3] torch-struct==0.5
[pip3] torchaudio==2.5.0a0+332760d
[pip3] torchtext==0.17.2
[pip3] torchvision==0.22.0a0+d3beb52
[conda] _tflow_select 2.3.0 mkl
[conda] mkl 2023.2.2 pypi_0 pypi
[conda] mkl-service 2.4.0 py310h968bf8e_1 conda-forge
[conda] mkl_fft 1.3.11 py310hc65c713_0 conda-forge
[conda] mkl_random 1.2.8 py310hf23be7c_1 conda-forge
[conda] numpy 1.26.4 py310h4bfa8fc_0 conda-forge
[conda] optree 0.14.0 py310hf166250_0 conda-forge
[conda] torch 2.6.0a0+git161425f pypi_0 pypi
[conda] torch-struct 0.5 pypi_0 pypi
[conda] torchaudio 2.5.0a0+332760d pypi_0 pypi
[conda] torchtext 0.17.2 pypi_0 pypi
[conda] torchvision 0.22.0a0+d3beb52 pypi_0 pypi
```
cc @malfet @seemethere @chauhang @penguinwu
| true
|
2,807,370,630
|
Link to `third_party/eigen` git submodule is broken
|
BwL1289
|
closed
|
[
"module: third_party"
] | 1
|
NONE
|
### 🐛 Describe the bug
The link to the `third_party/eigen @ 3147391` git submodule is [broken](https://github.com/pytorch/pytorch/tree/bf4f8919df8ee88e356b407bb84ed818ebfb407b/third_party/eigen).
This raises a larger question: if building `PyTorch` from source, is `eigen` really required if it's not being used as a `BLAS` backend?
This [block](https://github.com/pytorch/pytorch/tree/bf4f8919df8ee88e356b407bb84ed818ebfb407b/cmake/Dependencies.cmake#L821C1-L837C50) in `Dependencies.cmake` suggests that it's built unconditionally, but I don't see any references to eigen header files throughout the repo except for ones linked from `pybind11`, and, as mentioned, the link PyTorch uses to vendor it is broken.
Let me know if I'm missing something obvious. I would prefer not to build `eigen` at all if possible.

### Versions
N/A
| true
|
2,807,355,377
|
[compiled_autograd] Rename interface to pyinterface
|
zou3519
|
closed
|
[
"fb-exported",
"Merged",
"Reverted",
"ciflow/trunk",
"topic: not user facing",
"ci-no-td"
] | 10
|
CONTRIBUTOR
|
Summary: interface is a reserved word in some MSVC variants.
Test Plan: build
Differential Revision: D68561379
| true
|
2,807,335,613
|
[inductor] Make triton kernel autotune config defaults backward-compatible
|
bertmaher
|
closed
|
[
"fb-exported",
"Merged",
"ciflow/trunk",
"topic: not user facing"
] | 14
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #145494
If a model was torch.packaged using triton<=3.1, any user-defined
autotuned kernels will have reps/warmups burned in with the old defaults
(100/25). If this model is loaded with triton>=3.2, inductor's checks for
unsupported non-default autotune args will fail, because triton.Autotuner's
defaults for these parameters have changed to `None`. Let's explicitly support
those values for backward compatibility with these older models.
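A rough sketch of the kind of backward-compatible check described (function and attribute names here are hypothetical, not the actual inductor code):
```python
# Accept both the old triton<=3.1 burned-in defaults (warmup=25, rep=100)
# and the new triton>=3.2 default of None when deciding whether a
# user-defined Autotuner uses unsupported non-default settings.
OLD_DEFAULT_WARMUP = 25
OLD_DEFAULT_REP = 100

def uses_non_default_autotune_args(autotuner) -> bool:
    warmup = getattr(autotuner, "warmup", None)
    rep = getattr(autotuner, "rep", None)
    warmup_ok = warmup in (None, OLD_DEFAULT_WARMUP)
    rep_ok = rep in (None, OLD_DEFAULT_REP)
    return not (warmup_ok and rep_ok)
```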
Differential Revision: [D68561014](https://our.internmc.facebook.com/intern/diff/D68561014/)
| true
|
2,807,331,172
|
Add a lint rule to avoid the word `interface` in C++
|
zou3519
|
open
|
[
"module: cpp",
"module: lint",
"triaged"
] | 2
|
CONTRIBUTOR
|
This is a reserved word in some MSVC implementations internally.
cc @jbschlosser
| true
|
2,807,263,087
|
[BE]: Fix OrderedSet equality oversight
|
Skylion007
|
closed
|
[
"open source",
"topic: bug fixes"
] | 2
|
COLLABORATOR
|
Test to see if #145489 even causes a behavior difference in the test suite.
| true
|
2,807,256,201
|
Cannot print symbolic tensors from C++
|
ezyang
|
open
|
[
"module: cpp",
"triaged",
"oncall: pt2",
"module: dynamic shapes"
] | 1
|
CONTRIBUTOR
|
### 🐛 Describe the bug
Steps to reproduce:
```
std::cerr << tensor;
```
where tensor is a Tensor with symbolic sizes (e.g., a fake tensor with a ShapeEnv and symbolic shapes).
It fails with:
```
RuntimeError: Cannot call numel() on tensor with symbolic sizes/strides
Exception raised from throw_cannot_call_with_symbolic at /data/users/ezyang/a/pytorch/c10/core/TensorImpl.cpp:291 (most recent call first):
frame #0: c10::Error::Error(c10::SourceLocation, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> >) + 0x98 (0x7f43c298b6a8 in /data/users/ezyang/a/pytorch/torch/lib/libc10.so)
frame #1: c10::TensorImpl::throw_cannot_call_with_symbolic(char const*) const + 0x8d (0x7f43c291f651 in /data/users/ezyang/a/pytorch/torch/lib/libc10.so)
frame #2: <unknown function> + 0x7253f (0x7f43c296453f in /data/users/ezyang/a/pytorch/torch/lib/libc10.so)
frame #3: at::print(std::ostream&, at::Tensor const&, long) + 0x16c0 (0x7f43b15c7ad0 in /data/users/ezyang/a/pytorch/torch/lib/libtorch_cpu.so)
frame #4: <unknown function> + 0x1a1fbd0 (0x7f43b1a1fbd0 in /data/users/ezyang/a/pytorch/torch/lib/libtorch_cpu.so)
frame #5: <unknown function> + 0x1a25b55 (0x7f43b1a25b55 in /data/users/ezyang/a/pytorch/torch/lib/libtorch_cpu.so)
frame #6: at::native::fft_hfftn_symint_out(at::Tensor const&, c10::OptionalArrayRef<c10::SymInt>, c10::OptionalArrayRef<long>, std::optional<std::basic_string_view<char, std::char_traits<char> > >, at::Tensor const&) + 0x44 (0x7f43b1
```
### Versions
main
cc @jbschlosser @chauhang @penguinwu @bobrenjc93
| true
|
2,807,255,061
|
cpp_wrapper: Move #includes to per-device header files
|
benjaminglass1
|
closed
|
[
"open source",
"ciflow/trunk",
"topic: not user facing",
"module: inductor",
"ciflow/inductor",
"ciflow/rocm",
"ciflow/xpu"
] | 8
|
COLLABORATOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #145490
This prepares us for the next PR in the stack, where we introduce pre-compiled per-device header files to save compilation time.
Re-lands #145083 after merge conflicts.
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @ColinPeppler @amjames @desertfire @chauhang @aakhundov
| true
|
2,807,244,641
|
OrderedSet is backed by normal Dict, does not check ordering in equality
|
Skylion007
|
closed
|
[
"triaged",
"oncall: pt2",
"module: inductor"
] | 2
|
COLLABORATOR
|
### 🐛 Describe the bug
One concerning thing I noticed is that `OrderedSet` is backed by a normal dictionary and not an `OrderedDict`. In recent versions of Python these are effectively the same, with one important difference: `OrderedDict` equality requires that the elements are in the same order, while our `OrderedSet` does not. I am not sure which is the preferred behavior, but I thought I should bring this to folks' attention. If we want it to match the semantics of `OrderedDict`, it should also check that the order matches when comparing two ordered sets.
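A quick illustration of the difference, using plain Python containers rather than the inductor `OrderedSet` itself:
```python
from collections import OrderedDict

a = OrderedDict([(1, None), (2, None)])
b = OrderedDict([(2, None), (1, None)])

# OrderedDict equality is order-sensitive:
print(a == b)  # False

# A set backed by a plain dict compares like dict keys, ignoring insertion order:
print(dict(a).keys() == dict(b).keys())  # True
```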
### Versions
Latest on Master as of 2025/1/23
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @ColinPeppler @amjames @desertfire @chauhang @aakhundov
| true
|
2,807,214,200
|
[torchbench] Add meta function for _cudnn_rnn_flatten_weight
|
IvanKobzarev
|
closed
|
[
"Merged",
"ciflow/trunk",
"topic: not user facing"
] | 3
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #145488
https://github.com/pytorch/pytorch/issues/144989
This fixes the tts_angular model on torchbench for `--export-aot-inductor`.
I put the meta function in C++, since the shape calculation requires cuDNN API calls.
I've extracted the shape calculation so it can be reused in the implementation, as this logic has some non-trivial steps and comments.
```
└─ $ python benchmarks/dynamo/torchbench.py --only tts_angular --accuracy --no-translation-validation --inference --bfloat16 --export-aot-inductor --disable-cudagraphs --device cuda
loading model: 0it [00:00, ?it/s]WARNING:common:Model tts_angular does not support bfloat16, running with amp instead
loading model: 0it [00:01, ?it/s]
WARNING:common:Model tts_angular does not support bfloat16, running with amp instead
cuda eval tts_angular
WARNING:common:Model tts_angular does not support bfloat16, running with amp instead
pass
```
| true
|
2,807,171,298
|
Remove unnecessary "special linking" for `BLAS_LIBRARIES`
|
mgorny
|
closed
|
[
"module: build",
"triaged",
"open source",
"Merged",
"ciflow/trunk",
"topic: not user facing"
] | 11
|
CONTRIBUTOR
|
Remove the "special linking" that involves listing `BLAS_LIBRARIES` thrice if `TH_BINARY_BUILD` is set, as it should not be any different from listing it just once.
The code seems to date back to commit cfcf2af95f91a88ec61cbcac8b30a718e7332aa5. The original code already listed `BLAS_LIBRARIES` thrice, but it provided no explanation for doing that — and without `TH_BINARY_BUILD`, BLAS was not linked at all. The current version seems to originate in d6a8d28d6529a4f0b80a8c046ca9c36ca6c8b347 — and it already provided an `ELSE` clause listing `BLAS_LIBRARIES` only once. From this, I suspect that it is probably an unnecessary leftover.
cc @malfet @seemethere
| true
|
2,807,155,059
|
feat: add SVE dispatch for non-FBGEMM qembeddingbag
|
Sqvid
|
closed
|
[
"module: cpu",
"triaged",
"open source",
"module: arm",
"release notes: quantization"
] | 6
|
CONTRIBUTOR
|
Adds an accelerated kernel for `quantized::embedding_bag_byte` and integrates it with the dispatch mechanism.
The bulk of the SVE code has already been seen before and can be found here: https://github.com/pytorch/pytorch/pull/139753.
cc @jgong5 @mingfeima @XiaobingSuper @sanchitintel @ashokei @jingxu10 @malfet @snadampal @milpuz01
| true
|
2,806,994,623
|
torch.jit.trace wrong function mapping: > maps to aten::lt
|
vkovinicTT
|
closed
|
[] | 2
|
NONE
|
### 🐛 Bug description
When running `torch.jit.trace()` on an `nn.Module`, the sign '>' gets mapped to the operation `aten::lt` instead of `aten::gt`.
```python
import torch
input_tensor = torch.zeros(1, 18, dtype=torch.float32)
class GT(torch.nn.Module):
def __init__(self):
super().__init__()
self.threshold = torch.nn.Parameter(torch.tensor(0.5), requires_grad=False)
def forward(self, x):
# Create simple mask
mask = x > self.threshold # there is a bug where sign `>` is mapped to `aten::lt` instead of `aten::gt`
return mask
model = GT()
traced_model = torch.jit.trace(model, input_tensor)
print(traced_model.graph)
```
And here is the output:
```
graph(%self : __torch__.___torch_mangle_5.GT,
%x : Float(1, 18, strides=[18, 1], requires_grad=0, device=cpu)):
%threshold : Tensor = prim::GetAttr[name="threshold"](%self)
%5 : Bool(1, 18, strides=[18, 1], requires_grad=0, device=cpu) = aten::lt(%threshold, %x) # <ipython-input-11-0e91e45efa83>:12:0
return (%5)
```
### Versions
```
PyTorch version: 2.5.1+cu121
Is debug build: False
CUDA used to build PyTorch: 12.1
ROCM used to build PyTorch: N/A
OS: Ubuntu 22.04.3 LTS (x86_64)
GCC version: (Ubuntu 11.4.0-1ubuntu1~22.04) 11.4.0
Clang version: 14.0.0-1ubuntu1.1
CMake version: version 3.31.2
Libc version: glibc-2.35
Python version: 3.11.11 (main, Dec 4 2024, 08:55:07) [GCC 11.4.0] (64-bit runtime)
Python platform: Linux-6.1.85+-x86_64-with-glibc2.35
Is CUDA available: False
CUDA runtime version: 12.2.140
CUDA_MODULE_LOADING set to: N/A
GPU models and configuration: Could not collect
Nvidia driver version: Could not collect
cuDNN version: Probably one of the following:
/usr/lib/x86_64-linux-gnu/libcudnn.so.8.9.6
/usr/lib/x86_64-linux-gnu/libcudnn_adv_infer.so.8.9.6
/usr/lib/x86_64-linux-gnu/libcudnn_adv_train.so.8.9.6
/usr/lib/x86_64-linux-gnu/libcudnn_cnn_infer.so.8.9.6
/usr/lib/x86_64-linux-gnu/libcudnn_cnn_train.so.8.9.6
/usr/lib/x86_64-linux-gnu/libcudnn_ops_infer.so.8.9.6
/usr/lib/x86_64-linux-gnu/libcudnn_ops_train.so.8.9.6
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 46 bits physical, 48 bits virtual
Byte Order: Little Endian
CPU(s): 2
On-line CPU(s) list: 0,1
Vendor ID: GenuineIntel
Model name: Intel(R) Xeon(R) CPU @ 2.20GHz
CPU family: 6
Model: 79
Thread(s) per core: 2
Core(s) per socket: 1
Socket(s): 1
Stepping: 0
BogoMIPS: 4400.24
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ss ht syscall nx pdpe1gb rdtscp lm constant_tsc rep_good nopl xtopology nonstop_tsc cpuid tsc_known_freq pni pclmulqdq ssse3 fma cx16 pcid sse4_1 sse4_2 x2apic movbe popcnt aes xsave avx f16c rdrand hypervisor lahf_lm abm 3dnowprefetch invpcid_single ssbd ibrs ibpb stibp fsgsbase tsc_adjust bmi1 hle avx2 smep bmi2 erms invpcid rtm rdseed adx smap xsaveopt arat md_clear arch_capabilities
Hypervisor vendor: KVM
Virtualization type: full
L1d cache: 32 KiB (1 instance)
L1i cache: 32 KiB (1 instance)
L2 cache: 256 KiB (1 instance)
L3 cache: 55 MiB (1 instance)
NUMA node(s): 1
NUMA node0 CPU(s): 0,1
Vulnerability Gather data sampling: Not affected
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Mitigation; PTE Inversion
Vulnerability Mds: Vulnerable; SMT Host state unknown
Vulnerability Meltdown: Vulnerable
Vulnerability Mmio stale data: Vulnerable
Vulnerability Reg file data sampling: Not affected
Vulnerability Retbleed: Vulnerable
Vulnerability Spec rstack overflow: Not affected
Vulnerability Spec store bypass: Vulnerable
Vulnerability Spectre v1: Vulnerable: __user pointer sanitization and usercopy barriers only; no swapgs barriers
Vulnerability Spectre v2: Vulnerable; IBPB: disabled; STIBP: disabled; PBRSB-eIBRS: Not affected; BHI: Vulnerable (Syscall hardening enabled)
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Vulnerable
Versions of relevant libraries:
[pip3] numpy==1.26.4
[pip3] nvidia-cublas-cu12==12.1.3.1
[pip3] nvidia-cuda-cupti-cu12==12.1.105
[pip3] nvidia-cuda-nvrtc-cu12==12.1.105
[pip3] nvidia-cuda-runtime-cu12==12.1.105
[pip3] nvidia-cudnn-cu12==9.1.0.70
[pip3] nvidia-cufft-cu12==11.0.2.54
[pip3] nvidia-curand-cu12==10.3.2.106
[pip3] nvidia-cusolver-cu12==11.4.5.107
[pip3] nvidia-cusparse-cu12==12.1.0.106
[pip3] nvidia-nccl-cu12==2.21.5
[pip3] nvidia-nvjitlink-cu12==12.6.85
[pip3] nvidia-nvtx-cu12==12.1.105
[pip3] nvtx==0.2.10
[pip3] optree==0.13.1
[pip3] pynvjitlink-cu12==0.4.0
[pip3] torch==2.5.1+cu121
[pip3] torchaudio==2.5.1+cu121
[pip3] torchsummary==1.5.1
[pip3] torchvision==0.20.1+cu121
[pip3] triton==3.1.0
[conda] Could not collect
```
| true
|
2,806,917,728
|
Add a test for onnx exporter: export in a file
|
xadupre
|
closed
|
[
"open source",
"topic: not user facing"
] | 2
|
COLLABORATOR
| null | true
|
2,806,906,943
|
emphasized two words in the code
|
shaymolcho
|
closed
|
[
"open source",
"topic: not user facing"
] | 3
|
NONE
|
I emphasized two words in the code to maintain a consistent text format.
Fixes #ISSUE_NUMBER
| true
|
2,806,874,593
|
[ca][bug_fix] Fix ref counting of objects in the set_autograd_compiler function.
|
BartlomiejStemborowski
|
closed
|
[
"open source",
"Merged",
"ciflow/trunk",
"topic: not user facing",
"module: dynamo",
"module: compiled autograd"
] | 6
|
CONTRIBUTOR
|
PR #141153 exposed the option to collect sizes as dynamic. After this
change, the function set_autograd_compiler returns a PyTuple object which
is populated using the PyTuple_SET_ITEM function. However, that function steals a
reference to the object and doesn't INCREF it. So currently we are
missing an INCREF on prior_compiler when it is Py_None and an INCREF on
prior_dynamic, which is either Py_False or Py_True. This bug may lead to
memory corruption.
@xmfan @jansel @albanD
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @chauhang @amjames @xmfan @yf225
| true
|
2,806,829,033
|
[custom ops] [2.7 nightly] custom ops with typing.List breaks when importing annotations from future
|
dabeschte
|
closed
|
[
"high priority",
"triaged",
"module: custom-operators",
"oncall: pt2",
"module: pt2-dispatcher"
] | 3
|
NONE
|
### 🐛 Describe the bug
The latest nightly version does not support `typing.List` in combination with `from __future__ import annotations`, but requires the use of `list`. While I agree that using `list` is better and more modern, this introduces a breaking change that makes it difficult to keep the custom ops working both on older and newer pytorch versions.
This change was introduced 5 days ago with this commit: https://github.com/pytorch/pytorch/commit/a79100ab11e09473d7456bc02de36de629d9db62#diff-09dd038b7467c407d31c7feb905906037d68d4a419a714bb1f56cee93976de39
My problem with that is: Using List[T] as annotation now gives me the following error:
```
Traceback (most recent call last):
File "/weka/mathias/kuma/.venv/lib/python3.12/site-packages/torch/_library/infer_schema.py", line 65, in convert_type_string
return eval(annotation_type)
^^^^^^^^^^^^^^^^^^^^^
File "<string>", line 1, in <module>
NameError: name 'List' is not defined. Did you mean: 'list'?
```
I could simply change all annotations to `list`, but then it doesn't work anymore with older PyTorch versions, requiring me to create a complete clone of the custom op module.
I already tried using typing aliases (`List = list` if the version is new enough, else `List = typing.List`), but that doesn't work either, because the types are evaluated from a raw string; using a capital L at the beginning doesn't work.
**Turns out that typing.List only fails when importing future annotations.**
So I'm not sure if you want to fix this, but since it did work before and would only require reverting one line of code, I wanted to open this issue to discuss whether you want to support this case or not.
My workaround for now was to remove `from __future__ import annotations` from these files and change some types.
The line that needs to be changed:
https://github.com/pytorch/pytorch/blob/a79100ab11e09473d7456bc02de36de629d9db62/torch/_library/infer_schema.py#L6
back to its state before this commit:
https://github.com/pytorch/pytorch/commit/a79100ab11e09473d7456bc02de36de629d9db62#diff-09dd038b7467c407d31c7feb905906037d68d4a419a714bb1f56cee93976de39
Use this script to reproduce the error:
```
from __future__ import annotations
import torch
from typing import List
@torch.library.custom_op("test::custom_op_list", mutates_args=())
def my_custom_op(
x: torch.Tensor,
) -> List[torch.Tensor]:
return [torch.randn_like(x)]
```
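For reference, the same op reportedly works once the `__future__` import is dropped; a sketch of the passing variant, based on the description above:
```python
import torch
from typing import List

@torch.library.custom_op("test::custom_op_list_no_future", mutates_args=())
def my_custom_op_no_future(x: torch.Tensor) -> List[torch.Tensor]:
    return [torch.randn_like(x)]
```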
### Versions
Pytorch 2.7.0.dev20250122+cu124
Python 3.12.6
Ubuntu 20.04
(collect_env.py crashes when using uv)
cc @ezyang @gchanan @zou3519 @kadeng @msaroufim @chauhang @penguinwu @bdhirsh @yf225
| true
|
2,806,701,573
|
simplify torch.utils.cpp_extension.include_paths; use it in cpp_builder
|
h-vetinari
|
closed
|
[
"triaged",
"open source",
"Merged",
"ciflow/trunk",
"topic: not user facing",
"module: inductor"
] | 13
|
CONTRIBUTOR
|
While working on conda-forge integration, I needed to look at the way the include paths are calculated, and noticed an avoidable duplication between `torch/utils/cpp_extension.py` and `torch/_inductor/cpp_builder.py`. The latter already imports the former anyway, so simply reuse the same function.
Furthermore, remove long-obsolete include-paths. AFAICT, the `/TH` headers have not existed since pytorch 1.11.
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @ColinPeppler @amjames @desertfire @chauhang @aakhundov
| true
|
2,806,657,364
|
[Dynamo] Fix names collisions with foreach decomps
|
mlazos
|
closed
|
[
"Merged",
"ciflow/trunk",
"module: dynamo",
"ciflow/inductor",
"release notes: dynamo"
] | 6
|
CONTRIBUTOR
|
Fixes https://github.com/pytorch/pytorch/issues/138698
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @chauhang @amjames
| true
|
2,806,648,118
|
Modify enable logic of COLLECTIVE_COMM profiler activity type
|
jushg
|
closed
|
[
"fb-exported",
"Merged",
"ciflow/trunk",
"topic: not user facing"
] | 10
|
CONTRIBUTOR
|
Summary:
Since the `KINETO_NCCL_PROFILER` flag is not used anymore (we are moving from linking the profiler at compile time to loading it dynamically), we change the logic for enabling the profiler to use the `TORCH_PROFILER_ENABLE_COLLECTIVE_PROFILING` environment variable for the NCCL Collective Communication Profiler.
For HCCL, we keep the same logic.
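A sketch of how the new gating would be exercised (the environment variable name comes from the summary above; the exact point at which Kineto reads it is an assumption):
```python
import os
os.environ["TORCH_PROFILER_ENABLE_COLLECTIVE_PROFILING"] = "1"  # set before profiling starts

import torch
from torch.profiler import profile, ProfilerActivity

with profile(activities=[ProfilerActivity.CPU, ProfilerActivity.CUDA]) as prof:
    ...  # run a distributed workload that issues NCCL collectives
```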
Test Plan: See https://interncache-all.fbcdn.net/manifold/perfetto-artifacts/tree/ui/index.html#!/?url=https://interncache-all.fbcdn.net/manifold/gpu_traces/tree/traces/clientAPI/0/1737579474/devvm29927.cln0/nccl_activities_2387985.json.gz for sample trace on nccl-profiler
Differential Revision: D68515945
| true
|
2,806,546,902
|
[XPU] torch 2.7.0.dev20250121+xpu Import Error
|
qwebug
|
closed
|
[
"triaged",
"module: xpu"
] | 7
|
NONE
|
### 🐛 Describe the bug
It raises an import error after updating the PyTorch version from 2.5.1+xpu to 2.7.0.dev20250121+xpu.
```python
import torch
torch.xpu.is_available()
```
### **Error Logs**
```bash
Traceback (most recent call last):
File "/xxx/test.py", line 1, in <module>
import torch
File "/home/xxx/anaconda3/envs/intel-gpu-pt/lib/python3.10/site-packages/torch/__init__.py", line 404, in <module>
from torch._C import * # noqa: F403
ImportError: /home/xxx/anaconda3/envs/intel-gpu-pt/lib/python3.10/site-packages/torch/lib/../../../../libsycl.so.8: undefined symbol: urBindlessImagesImportExternalMemoryExp, version LIBUR_LOADER_0.10
```
### Versions
PyTorch version: N/A
Is debug build: N/A
CUDA used to build PyTorch: N/A
ROCM used to build PyTorch: N/A
OS: Ubuntu 24.04.1 LTS (x86_64)
GCC version: (Ubuntu 13.3.0-6ubuntu2~24.04) 13.3.0
Clang version: Could not collect
CMake version: Could not collect
Libc version: glibc-2.39
Python version: 3.10.0 (default, Mar 3 2022, 09:58:08) [GCC 7.5.0] (64-bit runtime)
Python platform: Linux-5.15.167.4-microsoft-standard-WSL2-x86_64-with-glibc2.39
Is CUDA available: N/A
CUDA runtime version: Could not collect
CUDA_MODULE_LOADING set to: N/A
GPU models and configuration: Could not collect
Nvidia driver version: Could not collect
cuDNN version: Could not collect
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: N/A
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 46 bits physical, 48 bits virtual
Byte Order: Little Endian
CPU(s): 18
On-line CPU(s) list: 0-17
Vendor ID: GenuineIntel
Model name: Intel(R) Core(TM) Ultra 5 125H
CPU family: 6
Model: 170
Thread(s) per core: 2
Core(s) per socket: 9
Socket(s): 1
Stepping: 4
BogoMIPS: 5990.39
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ss ht syscall nx pdpe1gb rdtscp lm constant_tsc rep_good nopl xtop
ology tsc_reliable nonstop_tsc cpuid pni pclmulqdq vmx ssse3 fma cx16 pcid sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand hypervisor lahf_lm abm 3dnowprefetch invpcid_sin
gle ssbd ibrs ibpb stibp ibrs_enhanced tpr_shadow vnmi ept vpid ept_ad fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid rdseed adx smap clflushopt clwb sha_ni xsaveopt xsavec xgetbv1 xsaves avx_vnni umip waitpkg gfni vaes vpclmulqdq rdpid movdiri movdir64b fsrm md_clear serialize flush_l1d arch_capabilities
Virtualization: VT-x
Hypervisor vendor: Microsoft
Virtualization type: full
L1d cache: 432 KiB (9 instances)
L1i cache: 576 KiB (9 instances)
L2 cache: 18 MiB (9 instances)
L3 cache: 18 MiB (1 instance)
Vulnerability Gather data sampling: Not affected
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Not affected
Vulnerability Reg file data sampling: Not affected
Vulnerability Retbleed: Mitigation; Enhanced IBRS
Vulnerability Spec rstack overflow: Not affected
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl and seccomp
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Enhanced / Automatic IBRS; IBPB conditional; RSB filling; PBRSB-eIBRS SW sequence; BHI BHI_DIS_S
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Versions of relevant libraries:
[pip3] numpy==2.1.3
[pip3] pytorch-triton-xpu==3.2.0+gite98b6fcb
[pip3] torch==2.7.0.dev20250121+xpu
[pip3] torchaudio==2.6.0.dev20250122+xpu
[pip3] torchvision==0.22.0.dev20250122+xpu
[conda] numpy 2.1.3 pypi_0 pypi
[conda] pytorch-triton-xpu 3.2.0+gite98b6fcb pypi_0 pypi
[conda] torch 2.7.0.dev20250121+xpu pypi_0 pypi
[conda] torchaudio 2.6.0.dev20250122+xpu pypi_0 pypi
[conda] torchvision 0.22.0.dev20250122+xpu pypi_0 pypi
cc @gujinghui @EikanWang @fengyuan14 @guangyey
| true
|
2,806,500,917
|
Adapt Dynamo Tests to HPUs
|
amathewc
|
closed
|
[
"triaged",
"open source",
"topic: not user facing",
"module: dynamo"
] | 16
|
CONTRIBUTOR
|
This PR is a continuation of https://github.com/pytorch/pytorch/pull/144387. It adapts two more files with the approach described below.
# Motivation
We recently integrated support for Intel Gaudi devices (identified as 'hpu') into the common_device_type framework via the pull request at https://github.com/pytorch/pytorch/pull/126970. This integration allows tests to be automatically instantiated for Gaudi devices upon loading the relevant library. Building on this development, the current pull request extends the utility of these hooks by adapting selected CUDA tests to operate on Gaudi devices. Additionally, we have confirmed that these modifications do not interfere with the existing tests on CUDA devices.
Other accelerators can also extend the functionality by adding the device to the devices list (e.g. xpu).
# Changes
- Create a separate class for test functions running on CUDA devices
- Extend the functionality of these tests to include HPUs
- Use instantiate_device_type_tests with targeted attributes to generate device-specific test instances within the new classes (a minimal sketch is shown after this list)
- Apply the skipIfHPU decorator to bypass tests that are not yet compatible with HPU devices
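A minimal sketch of this pattern (the class and test names are hypothetical; only the instantiation helper mirrors the real one):
```python
import torch
from torch.testing._internal.common_device_type import instantiate_device_type_tests
from torch.testing._internal.common_utils import TestCase, run_tests

class MiscTestsDevices(TestCase):
    def test_add(self, device):
        x = torch.ones(2, device=device)
        self.assertEqual((x + x).sum().item(), 4.0)

# Other accelerators (e.g. "xpu") can be appended to this list.
devices = ["cuda", "hpu"]
instantiate_device_type_tests(MiscTestsDevices, globals(), only_for=devices)

if __name__ == "__main__":
    run_tests()
```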
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @chauhang @amjames
| true
|
2,806,469,580
|
[dynamo] added support to trace torch.cuda.is_current_stream_capturing
|
chenyang78
|
closed
|
[
"Stale",
"topic: not user facing",
"module: dynamo",
"ciflow/inductor",
"ciflow/rocm"
] | 3
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #145475
enabled tracing torch.cuda.is_current_stream_capturing
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @kadeng @chauhang @amjames
| true
|
2,806,456,805
|
fix test_convolution error when use cudnn.flags
|
mengph
|
closed
|
[
"module: cudnn",
"module: convolution",
"triaged",
"open source",
"topic: not user facing"
] | 2
|
CONTRIBUTOR
|
Fixes #145473
cc @csarofeen @ptrblck @xwang233 @eqy
| true
|
2,806,434,387
|
torch.backends.cudnn.flags use error when test
|
mengph
|
closed
|
[
"module: cudnn",
"module: convolution",
"triaged"
] | 0
|
CONTRIBUTOR
|
### 🐛 Describe the bug
https://github.com/pytorch/pytorch/blob/main/test/nn/test_convolution.py#L3311
There is a problem with the use of cudnn.flags() here. The original purpose was to test the accuracy of allow_tf32 when it was turned on and off, but the call to cudnn.flags() causes allow_tf32 to always be True.
```python
@onlyCUDA
@tf32_on_and_off(0.005)
def test_ConvTranspose2d_size_1_kernel(self, device):
x_cpu = torch.randn(2, 3, 5, 5)
conv_cpu = torch.nn.ConvTranspose2d(3, 3, kernel_size=1)
y_cpu = conv_cpu(x_cpu)
y = torch.rand_like(y_cpu)
y_cpu.backward(y)
with cudnn.flags(enabled=False):
conv_cuda = torch.nn.ConvTranspose2d(3, 3, kernel_size=1).to(device)
conv_cuda.bias.data.copy_(conv_cpu.bias.data)
conv_cuda.weight.data.copy_(conv_cpu.weight.data)
y_cuda = conv_cuda(x_cpu.to(device))
y_cuda.backward(y.to(device))
self.assertEqual(y_cpu, y_cuda, atol=1e-5, rtol=0, exact_device=False)
```
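One possible fix direction, sketched under the assumption that the decorator's setting should win (the linked PR may do this differently): forward the current allow_tf32 value instead of letting the context manager reset it to its default of True.
```python
import torch
from torch.backends import cudnn

# Preserve whatever tf32_on_and_off configured, rather than the default allow_tf32=True:
with cudnn.flags(enabled=False, allow_tf32=torch.backends.cudnn.allow_tf32):
    ...  # run the transposed convolution under the intended tf32 setting
```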
### Versions
Regardless of the environment
cc @csarofeen @ptrblck @xwang233 @eqy
| true
|
2,806,431,221
|
torch.backends.cudnn.flags use error when test
|
mengph
|
closed
|
[] | 0
|
CONTRIBUTOR
|
https://github.com/pytorch/pytorch/blob/main/test/nn/test_convolution.py#L3311
There is a problem with the use of cudnn.flags() here. The original purpose was to test the accuracy of allow_tf32 when it was turned on and off, but the call to cudnn.flags() causes allow_tf32 to always be True.
| true
|
2,806,384,481
|
Is there a PyTorch version that can work properly on the Thor platform based on the Blackwell architecture?
|
wangxianggang1997
|
open
|
[
"module: cuda",
"triaged"
] | 0
|
NONE
|
Is there a PyTorch version that can work properly on the Thor platform based on the Blackwell architecture? I encounter many errors when compiling the PyTorch source code on the Thor platform and don't know how to solve them. Is there any expert who can help me?
cc @ptrblck @msaroufim @eqy
| true
|
2,806,254,541
|
[inductor] [silence] `nn.LazyLinear-F.gumbel_softmax` returns inconsistent results compared with eager
|
shaoyuyoung
|
closed
|
[
"oncall: pt2"
] | 5
|
CONTRIBUTOR
|
### 🐛 Describe the bug
**symptom:** for `nn.LazyLinear-F.gumbel_softmax`, if `hard=True`, we can see an explicit difference. If `hard=False`, the inconsistency still exists and the outputs can't pass the `allclose` check.
**device backend**: both triton and CPP
```python
import torch
import torch.nn as nn
import torch.nn.functional as F
from torch._inductor import config
config.fallback_random = True
torch.set_grad_enabled(False)
torch.manual_seed(0)
class Model(torch.nn.Module):
def __init__(self):
super(Model, self).__init__()
self.lazy_linear = torch.nn.LazyLinear(out_features=10)
def forward(self, x):
x = self.lazy_linear(x)
x = F.gumbel_softmax(x, tau=0.5, hard=True)
return x
model = Model().eval().cuda()
x = torch.randn(1, 10).cuda()
inputs = [x]
def run_test(model, inputs, backend):
if backend != "eager":
model = torch.compile(model, backend=backend)
torch.manual_seed(0)
output = model(*inputs)
print(output)
return output
output = run_test(model, inputs, 'eager')
c_output = run_test(model, inputs, 'inductor')
print(torch.allclose(output, c_output, 1e-3, 1e-3, equal_nan=True))
print(torch.max(torch.abs(output - c_output)))
```
### Error logs
```
tensor([[0., 1., 0., 0., 0., 0., 0., 0., 0., 0.]], device='cuda:0')
tensor([[0., 0., 0., 1., 0., 0., 0., 0., 0., 0.]], device='cuda:0')
False
tensor(1., device='cuda:0')
```
### Versions
nightly 20250225
cc @chauhang @penguinwu
| true
|
2,806,253,912
|
libtorch_python.dylib not getting symlinked correctly in OSX 13 with pytorch-cpu
|
stefdoerr
|
closed
|
[] | 2
|
NONE
|
### 🐛 Describe the bug
Hi, I am installing the `pytorch-cpu` and `libtorch` packages on OSX 13 x86_64 and on OSX 14 Arm64 Github Action runners.
I noticed that on the OSX 14 Arm64 installation it installs the following packages:
```
pytorch-2.5.1 |cpu_generic_py310_h3256795_9 21.8 MB conda-forge
pytorch-cpu-2.5.1 |cpu_generic_py310_h510b526_9 36 KB conda-forge
libtorch-2.5.1 |cpu_generic_h266890c_9 27.0 MB conda-forge
```
while on OSX 13 it installs
```
pytorch-2.5.1 |cpu_openblas_py310hf01ac55_1 33.2 MB
pytorch-cpu-2.5.1 |cpu_openblas_h1234567_1 28 KB
libtorch-2.5.1 |cpu_openblas_hf9ef3f7_1 49.5 MB
```
Now if I `ls` the `/Users/runner/miniconda3/envs/test/lib/python3.10/site-packages/torch/lib` directory on both machines I see
on OSX 14 Arm64
```
lrwxr-xr-x 1 runner staff 24 Jan 23 07:33 libc10.dylib -> ../../../../libc10.dylib
lrwxr-xr-x 1 runner staff 24 Jan 23 07:33 libshm.dylib -> ../../../../libshm.dylib
lrwxr-xr-x 1 runner staff 26 Jan 23 07:33 libtorch.dylib -> ../../../../libtorch.dylib
lrwxr-xr-x 1 runner staff 30 Jan 23 07:33 libtorch_cpu.dylib -> ../../../../libtorch_cpu.dylib
lrwxr-xr-x 1 runner staff 38 Jan 23 07:33 libtorch_global_deps.dylib -> ../../../../libtorch_global_deps.dylib
lrwxr-xr-x 1 runner staff 33 Jan 23 07:33 libtorch_python.dylib -> ../../../../libtorch_python.dylib
```
and on OSX 13
```
lrwxr-xr-x 1 runner staff 24 Jan 23 07:35 libc10.dylib -> ../../../../libc10.dylib
lrwxr-xr-x 1 runner staff 24 Jan 23 07:35 libshm.dylib -> ../../../../libshm.dylib
lrwxr-xr-x 1 runner staff 26 Jan 23 07:35 libtorch.dylib -> ../../../../libtorch.dylib
lrwxr-xr-x 1 runner staff 30 Jan 23 07:35 libtorch_cpu.dylib -> ../../../../libtorch_cpu.dylib
lrwxr-xr-x 1 runner staff 38 Jan 23 07:35 libtorch_global_deps.dylib -> ../../../../libtorch_global_deps.dylib
```
As you can see, on OSX 13 it's missing the `libtorch_python.dylib` symlink. The library is installed correctly, if I check in `/Users/runner/miniconda3/envs/test/lib/` it's there, but it's not getting symlinked into the `torch/lib` directory correctly.
This is causing some compilation issues in my library which includes pytorch extensions. I assume that it's a bug.
### Versions
OSX 14 Arm64
```
Collecting environment information...
PyTorch version: 2.5.1
Is debug build: False
CUDA used to build PyTorch: None
ROCM used to build PyTorch: N/A
OS: macOS 14.7.2 (arm64)
GCC version: Could not collect
Clang version: 19.1.7
CMake version: version 3.31.4
Libc version: N/A
Python version: 3.10.16 | packaged by conda-forge | (main, Dec 5 2024, 14:20:01) [Clang 18.1.8 ] (64-bit runtime)
Python platform: macOS-14.7.2-arm64-arm-64bit
Is CUDA available: False
CUDA runtime version: No CUDA
CUDA_MODULE_LOADING set to: N/A
GPU models and configuration: No CUDA
Nvidia driver version: No CUDA
cuDNN version: No CUDA
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Apple M1 (Virtual)
Versions of relevant libraries:
[pip3] flake8==7.1.1
[pip3] numpy==2.0.2
[pip3] pytorch-lightning==2.5.0.post0
[pip3] torch==2.5.1
[pip3] torch_geometric==2.5.3
[pip3] torchani==2.2.4
[pip3] torchmetrics==1.6.1
[conda] libtorch 2.5.1 cpu_generic_h266890c_9 conda-forge
[conda] nomkl 1.0 h5ca1d4c_0 conda-forge
[conda] numpy 2.0.2 py310h530be0a_1 conda-forge
[conda] pytorch 2.5.1 cpu_generic_py310_h3256795_9 conda-forge
[conda] pytorch-cpu 2.5.1 cpu_generic_py310_h510b526_9 conda-forge
[conda] pytorch-lightning 2.5.0.post0 pyh101cb37_0 conda-forge
[conda] pytorch_geometric 2.5.3 pyhd8ed1ab_0 conda-forge
[conda] torchani 2.2.4 cpu_py310hc79e986_11 conda-forge
[conda] torchmetrics 1.6.1 pyhd8ed1ab_0 conda-forge
```
OSX 13
```
Collecting environment information...
PyTorch version: 2.5.1
Is debug build: False
CUDA used to build PyTorch: None
ROCM used to build PyTorch: N/A
OS: macOS 13.7.2 (x86_64)
GCC version: Could not collect
Clang version: 19.1.7
CMake version: version 3.31.4
Libc version: N/A
Python version: 3.10.16 | packaged by conda-forge | (main, Dec 5 2024, 14:12:04) [Clang 18.1.8 ] (64-bit runtime)
Python platform: macOS-13.7.2-x86_64-i386-64bit
Is CUDA available: False
CUDA runtime version: No CUDA
CUDA_MODULE_LOADING set to: N/A
GPU models and configuration: No CUDA
Nvidia driver version: No CUDA
cuDNN version: No CUDA
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Intel(R) Core(TM) i7-8700B CPU @ 3.20GHz
Versions of relevant libraries:
[pip3] flake8==7.1.1
[pip3] numpy==2.0.2
[pip3] pytorch-lightning==2.5.0.post0
[pip3] torch==2.5.1
[pip3] torch-geometric==2.6.1
[pip3] torchani==2.2.4
[pip3] torchmetrics==1.6.1
[conda] libtorch 2.5.1 cpu_openblas_hf9ef3f7_1
[conda] nomkl 1.0 h5ca1d4c_0 conda-forge
[conda] numpy 2.0.2 py310hdf3e1fd_1 conda-forge
[conda] pytorch 2.5.1 cpu_openblas_py310hf01ac55_1
[conda] pytorch-cpu 2.5.1 cpu_openblas_h1234567_1
[conda] pytorch-lightning 2.5.0.post0 pyh101cb37_0 conda-forge
[conda] pytorch_geometric 2.6.1 pyhd8ed1ab_1 conda-forge
[conda] torchani 2.2.4 cpu_py310h5402c11_11 conda-forge
[conda] torchmetrics 1.6.1 pyhd8ed1ab_0 conda-forge
```
| true
|
2,806,238,913
|
[inductor][torchbench] Unsupported operator issue when running the torch_multimodal_clip model with batch size 4.
|
LifengWang
|
open
|
[
"module: nn",
"triaged",
"oncall: pt2",
"module: sdpa"
] | 2
|
CONTRIBUTOR
|
### 🐛 Describe the bug
I'm conducting an accuracy test for the `torch_multimodal_clip` model. I noticed that the graph break count is 0 when using a batch size of 1. However, when increasing the batch size to 4, the graph break count rises to 3. Upon printing the graph break details, I discovered an unsupported operator issue occurring with batch size 4.
```shell
python benchmarks/dynamo/torchbench.py --accuracy --amp -dcpu --inference -n50 --inductor --timeout 9000 --dynamic-shapes --dynamic-batch-only --freezing --only torch_multimodal_clip --print-graph-breaks --explain --batch-size 4
```
```
loading model: 0it [00:02, ?it/s]
cpu eval torch_multimodal_clip
WARNING:common:Trying to call the empty_gpu_cache for device: cpu, which is not in list [cuda, xpu]
WARNING:common:Trying to call the empty_gpu_cache for device: cpu, which is not in list [cuda, xpu]
WARNING:common:Trying to call the empty_gpu_cache for device: cpu, which is not in list [cuda, xpu]
V0123 16:00:42.112000 10589 torch/_dynamo/symbolic_convert.py:460] [0/0] [__graph_breaks] Graph break in user code at /workspace/pytorch/torch/nn/modules/activation.py:1313
V0123 16:00:42.112000 10589 torch/_dynamo/symbolic_convert.py:460] [0/0] [__graph_breaks] Reason: Unsupported: unsupported operator: aten._native_multi_head_attention.default (see https://docs.google.com/document/d/1GgvOe7C8_NVOMLOCwDaYV1mXXyHMXY7ExoewHqooxrs/edit#heading=h.64r4npvq0w0 for how to fix)
V0123 16:00:42.112000 10589 torch/_dynamo/symbolic_convert.py:460] [0/0] [__graph_breaks] User code traceback:
V0123 16:00:42.112000 10589 torch/_dynamo/symbolic_convert.py:460] [0/0] [__graph_breaks] File "/workspace/pytorch/benchmarks/dynamo/common.py", line 2782, in run_n_iterations
V0123 16:00:42.112000 10589 torch/_dynamo/symbolic_convert.py:460] [0/0] [__graph_breaks] self.model_iter_fn(mod, inputs, collect_outputs=False)
V0123 16:00:42.112000 10589 torch/_dynamo/symbolic_convert.py:460] [0/0] [__graph_breaks] File "/workspace/pytorch/benchmarks/dynamo/torchbench.py", line 444, in forward_pass
V0123 16:00:42.112000 10589 torch/_dynamo/symbolic_convert.py:460] [0/0] [__graph_breaks] return mod(*inputs)
V0123 16:00:42.112000 10589 torch/_dynamo/symbolic_convert.py:460] [0/0] [__graph_breaks] File "/opt/conda/lib/python3.10/site-packages/torchmultimodal/models/clip/model.py", line 71, in forward
V0123 16:00:42.112000 10589 torch/_dynamo/symbolic_convert.py:460] [0/0] [__graph_breaks] embeddings_a = self.encoder_a(features_a)
V0123 16:00:42.112000 10589 torch/_dynamo/symbolic_convert.py:460] [0/0] [__graph_breaks] File "/opt/conda/lib/python3.10/site-packages/torchmultimodal/models/clip/image_encoder.py", line 108, in forward
V0123 16:00:42.112000 10589 torch/_dynamo/symbolic_convert.py:460] [0/0] [__graph_breaks] x = self.encoder(x)
V0123 16:00:42.112000 10589 torch/_dynamo/symbolic_convert.py:460] [0/0] [__graph_breaks] File "/workspace/pytorch/torch/nn/modules/transformer.py", line 517, in forward
V0123 16:00:42.112000 10589 torch/_dynamo/symbolic_convert.py:460] [0/0] [__graph_breaks] output = mod(
V0123 16:00:42.112000 10589 torch/_dynamo/symbolic_convert.py:460] [0/0] [__graph_breaks] File "/workspace/pytorch/torch/nn/modules/transformer.py", line 913, in forward
V0123 16:00:42.112000 10589 torch/_dynamo/symbolic_convert.py:460] [0/0] [__graph_breaks] x = x + self._sa_block(
V0123 16:00:42.112000 10589 torch/_dynamo/symbolic_convert.py:460] [0/0] [__graph_breaks] File "/workspace/pytorch/torch/nn/modules/transformer.py", line 934, in _sa_block
V0123 16:00:42.112000 10589 torch/_dynamo/symbolic_convert.py:460] [0/0] [__graph_breaks] x = self.self_attn(
V0123 16:00:42.112000 10589 torch/_dynamo/symbolic_convert.py:460] [0/0] [__graph_breaks] File "/workspace/pytorch/torch/nn/modules/activation.py", line 1313, in forward
V0123 16:00:42.112000 10589 torch/_dynamo/symbolic_convert.py:460] [0/0] [__graph_breaks] return torch._native_multi_head_attention(
V0123 16:00:42.112000 10589 torch/_dynamo/symbolic_convert.py:460] [0/0] [__graph_breaks]
V0123 16:00:42.211000 10589 torch/_dynamo/symbolic_convert.py:467] [1/0] [__graph_breaks] Graph break (details suppressed) in user code at /workspace/pytorch/torch/nn/modules/activation.py:1313
V0123 16:00:42.211000 10589 torch/_dynamo/symbolic_convert.py:467] [1/0] [__graph_breaks] Reason: Unsupported: unsupported operator: aten._native_multi_head_attention.default (see https://docs.google.com/document/d/1GgvOe7C8_NVOMLOCwDaYV1mXXyHMXY7ExoewHqooxrs/edit#heading=h.64r4npvq0w0 for how to fix)
V0123 16:00:42.466000 10589 torch/_dynamo/symbolic_convert.py:467] [2/0] [__graph_breaks] Graph break (details suppressed) in user code at /workspace/pytorch/torch/nn/modules/activation.py:1313
V0123 16:00:42.466000 10589 torch/_dynamo/symbolic_convert.py:467] [2/0] [__graph_breaks] Reason: Unsupported: unsupported operator: aten._native_multi_head_attention.default (see https://docs.google.com/document/d/1GgvOe7C8_NVOMLOCwDaYV1mXXyHMXY7ExoewHqooxrs/edit#heading=h.64r4npvq0w0 for how to fix)
V0123 16:00:42.569000 10589 torch/_dynamo/symbolic_convert.py:467] [3/0] [__graph_breaks] Graph break (details suppressed) in user code at /workspace/pytorch/torch/nn/modules/activation.py:1313
V0123 16:00:42.569000 10589 torch/_dynamo/symbolic_convert.py:467] [3/0] [__graph_breaks] Reason: Unsupported: unsupported operator: aten._native_multi_head_attention.default (see https://docs.google.com/document/d/1GgvOe7C8_NVOMLOCwDaYV1mXXyHMXY7ExoewHqooxrs/edit#heading=h.64r4npvq0w0 for how to fix)
pass
WARNING:common:Trying to call the empty_gpu_cache for device: cpu, which is not in list [cuda, xpu]
Dynamo produced 6 graphs covering 209 ops with 3 graph breaks (1 unique)
```
For reference, below is the log of batch size 1
```shell
python benchmarks/dynamo/torchbench.py --accuracy --amp -dcpu --inference -n50 --inductor --timeout 9000 --dynamic-shapes --dynamic-batch-only --freezing --only torch_multimodal_clip --print-graph-breaks --explain --batch-size 1
```
```
loading model: 0it [00:02, ?it/s]
cpu eval torch_multimodal_clip
WARNING:common:Trying to call the empty_gpu_cache for device: cpu, which is not in list [cuda, xpu]
WARNING:common:Trying to call the empty_gpu_cache for device: cpu, which is not in list [cuda, xpu]
WARNING:common:Trying to call the empty_gpu_cache for device: cpu, which is not in list [cuda, xpu]
pass
WARNING:common:Trying to call the empty_gpu_cache for device: cpu, which is not in list [cuda, xpu]
Dynamo produced 1 graphs covering 744 ops with 0 graph breaks (0 unique)
```
### Versions
PyTorch version: 2.7.0a0+git9ae35b8
Is debug build: False
CUDA used to build PyTorch: None
ROCM used to build PyTorch: N/A
OS: Ubuntu 22.04.5 LTS (x86_64)
GCC version: (Ubuntu 11.4.0-1ubuntu1~22.04) 11.4.0
Clang version: Could not collect
CMake version: version 3.26.4
Libc version: glibc-2.35
Python version: 3.10.16 | packaged by conda-forge | (main, Dec 5 2024, 14:16:10) [GCC 13.3.0] (64-bit runtime)
Python platform: Linux-5.15.0-130-generic-x86_64-with-glibc2.35
Is CUDA available: False
CUDA runtime version: No CUDA
CUDA_MODULE_LOADING set to: N/A
GPU models and configuration: No CUDA
Nvidia driver version: No CUDA
cuDNN version: No CUDA
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 52 bits physical, 57 bits virtual
Byte Order: Little Endian
CPU(s): 384
On-line CPU(s) list: 0-383
Vendor ID: GenuineIntel
Model name: Intel(R) Xeon(R) 6972P
CPU family: 6
Model: 173
Thread(s) per core: 2
Core(s) per socket: 96
Socket(s): 2
Stepping: 1
CPU max MHz: 3900.0000
CPU min MHz: 800.0000
BogoMIPS: 4800.00
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc art arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc cpuid aperfmperf tsc_known_freq pni pclmulqdq dtes64 monitor ds_cpl vmx smx est tm2 ssse3 sdbg fma cx16 xtpr pdcm pcid dca sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm 3dnowprefetch cpuid_fault epb cat_l3 cat_l2 cdp_l3 invpcid_single cdp_l2 ssbd mba ibrs ibpb stibp ibrs_enhanced tpr_shadow vnmi flexpriority ept vpid ept_ad fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid cqm rdt_a avx512f avx512dq rdseed adx smap avx512ifma clflushopt clwb intel_pt avx512cd sha_ni avx512bw avx512vl xsaveopt xsavec xgetbv1 xsaves cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local split_lock_detect avx_vnni avx512_bf16 wbnoinvd dtherm ida arat pln pts hwp hwp_act_window hwp_epp hwp_pkg_req avx512vbmi umip pku ospke waitpkg avx512_vbmi2 gfni vaes vpclmulqdq avx512_vnni avx512_bitalg tme avx512_vpopcntdq la57 rdpid bus_lock_detect cldemote movdiri movdir64b enqcmd fsrm md_clear serialize tsxldtrk pconfig arch_lbr amx_bf16 avx512_fp16 amx_tile amx_int8 flush_l1d arch_capabilities
Virtualization: VT-x
L1d cache: 9 MiB (192 instances)
L1i cache: 12 MiB (192 instances)
L2 cache: 384 MiB (192 instances)
L3 cache: 960 MiB (2 instances)
NUMA node(s): 2
NUMA node0 CPU(s): 0-95,192-287
NUMA node1 CPU(s): 96-191,288-383
Vulnerability Gather data sampling: Not affected
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Not affected
Vulnerability Reg file data sampling: Not affected
Vulnerability Retbleed: Not affected
Vulnerability Spec rstack overflow: Not affected
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl and seccomp
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Enhanced / Automatic IBRS; IBPB conditional; RSB filling; PBRSB-eIBRS Not affected; BHI BHI_DIS_S
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Versions of relevant libraries:
[pip3] bert_pytorch==0.0.1a4
[pip3] functorch==1.14.0a0+b71aa0b
[pip3] mypy==1.14.1
[pip3] mypy-extensions==1.0.0
[pip3] numpy==1.26.4
[pip3] onnx==1.17.0
[pip3] torch==2.7.0a0+git9ae35b8
[pip3] torch_geometric==2.4.0
[pip3] torchaudio==2.6.0a0+b6d4675
[pip3] torchdata==0.7.0a0+11bb5b8
[pip3] torchmultimodal==0.1.0b0
[pip3] torchtext==0.16.0a0+b0ebddc
[pip3] torchvision==0.19.0a0+d23a6e1
[conda] bert-pytorch 0.0.1a4 dev_0 <develop>
[conda] functorch 1.14.0a0+b71aa0b pypi_0 pypi
[conda] mkl 2025.0.0 h901ac74_941 conda-forge
[conda] mkl-include 2025.0.0 hf2ce2f3_941 conda-forge
[conda] numpy 1.26.4 pypi_0 pypi
[conda] torch 2.7.0a0+git9ae35b8 dev_0 <develop>
[conda] torch-geometric 2.4.0 pypi_0 pypi
[conda] torchaudio 2.6.0a0+b6d4675 pypi_0 pypi
[conda] torchdata 0.7.0a0+11bb5b8 pypi_0 pypi
[conda] torchmultimodal 0.1.0b0 pypi_0 pypi
[conda] torchtext 0.16.0a0+b0ebddc pypi_0 pypi
[conda] torchvision 0.19.0a0+d23a6e1 pypi_0 pypi
cc @albanD @mruberry @jbschlosser @walterddr @mikaylagawarecki @chauhang @penguinwu @cpuhrsch @bhosmer @drisspg @soulitzer @davidberard98 @YuqingJ
| true
|
2,806,195,387
|
[Intel GPU] Add TORCH_API macro to export symbol NestedTensor_to_mask for libtorch_xpu
|
min-jean-cho
|
closed
|
[
"open source",
"Merged",
"ciflow/trunk",
"topic: not user facing"
] | 12
|
COLLABORATOR
|
Part of https://github.com/intel/torch-xpu-ops/issues/1141.
The `TORCH_API` macro is added to export the symbol `NestedTensor_to_mask`, which is needed by libtroch_xpu for `NestedTensor_softmax_dropout_xpu`.
| true
|
2,806,170,426
|
torch.sin/cos/tan+torch.floor/round may bring wrong results with torch.compile
|
qwqdlt
|
closed
|
[
"oncall: pt2"
] | 3
|
NONE
|
### 🐛 Describe the bug
torch.cos + torch.floor may produce wrong results with torch.compile on x86 CPU.
I find that torch.sin/cos/tan + torch.floor/round may also produce wrong results.
```python
import torch
class Model(torch.nn.Module):
def __init__(self):
super().__init__()
def forward(self, *args):
cos = torch.cos(args[0])
floor = torch.floor(cos)
return floor
m = Model()
inp = torch.randn((32, 32), dtype=torch.float16) # dtype=torch.float32 may also produce wrong results
m_out = m(inp.to('cpu'))
opt = torch.compile(m)
opt_out = opt(inp.to('cpu'))
torch.testing.assert_close(m_out, opt_out)
```
### **Error Logs**
```bash
AssertionError: Tensor-likes are not close!
Mismatched elements: 13 / 1024 (1.3%)
Greatest absolute difference: 1.0 at index (0, 11) (up to 1e-05 allowed)
Greatest relative difference: inf at index (0, 11) (up to 0.001 allowed)
```
### Versions
PyTorch version: 2.7.0.dev20250122+cpu
Is debug build: False
CUDA used to build PyTorch: None
ROCM used to build PyTorch: N/A
OS: Ubuntu 24.04.1 LTS (x86_64)
GCC version: (Ubuntu 13.3.0-6ubuntu2~24.04) 13.3.0
Clang version: Could not collect
CMake version: Could not collect
Libc version: glibc-2.39
Python version: 3.10.0 (default, Mar 3 2022, 09:58:08) [GCC 7.5.0] (64-bit runtime)
Python platform: Linux-5.15.167.4-microsoft-standard-WSL2-x86_64-with-glibc2.39
Is CUDA available: False
CUDA runtime version: No CUDA
CUDA_MODULE_LOADING set to: N/A
GPU models and configuration: No CUDA
Nvidia driver version: No CUDA
cuDNN version: No CUDA
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 46 bits physical, 48 bits virtual
Byte Order: Little Endian
CPU(s): 18
On-line CPU(s) list: 0-17
Vendor ID: GenuineIntel
Model name: Intel(R) Core(TM) Ultra 5 125H
CPU family: 6
Model: 170
Thread(s) per core: 2
Virtualization: VT-x
Hypervisor vendor: Microsoft
Virtualization type: full
L1d cache: 432 KiB (9 instances)
L1i cache: 576 KiB (9 instances)
L2 cache: 18 MiB (9 instances)
L3 cache: 18 MiB (1 instance)
Vulnerability Gather data sampling: Not affected
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Not affected
Vulnerability Reg file data sampling: Not affected
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Versions of relevant libraries:
[pip3] numpy==2.2.2
[pip3] torch==2.7.0.dev20250122+cpu
[pip3] torchaudio==2.6.0.dev20250122+cpu
[pip3] torchvision==0.22.0.dev20250122+cpu
[conda] numpy 2.2.2 pypi_0 pypi
[conda] torch 2.7.0.dev20250122+cpu pypi_0 pypi
[conda] torchaudio 2.6.0.dev20250122+cpu pypi_0 pypi
[conda] torchvision 0.22.0.dev20250122+cpu pypi_0 pypi
cc @chauhang @penguinwu
| true
|
2,806,086,713
|
OpenReg: Refactor impl_registry
|
Zhenbin-8
|
closed
|
[
"open source",
"Merged",
"ciflow/trunk",
"topic: not user facing"
] | 11
|
CONTRIBUTOR
|
Refactor impl_registry to use `driver.exec` as fallback.
cc @albanD
| true
|
2,806,071,913
|
[fx] low overhead checking of nondeterministic_seeded
|
xmfan
|
closed
|
[
"release notes: fx",
"fx"
] | 1
|
MEMBER
|
FIXES https://github.com/pytorch/pytorch/issues/144775
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #145464
```
dev,name,batch_size,accuracy,calls_captured,unique_graphs,graph_breaks,unique_graph_breaks,autograd_captures,autograd_compiles,cudagraph_skips,compilation_latency
cuda,BartForConditionalGeneration,1,pass,1794,1,0,0,0,0,0,47.229793 # after this PR
cuda,BartForConditionalGeneration,1,pass,1794,1,0,0,0,0,0,56.271855 # before this PR
cuda,BartForConditionalGeneration,1,pass,1794,1,0,0,0,0,0,46.973284 # before https://github.com/pytorch/pytorch/pull/144319
```
cc @ezyang @SherlockNoMad @EikanWang @jgong5 @wenzhe-nrv
| true
|
2,806,059,279
|
Mark Dynamic does not work for nn module constructor inputs
|
laithsakka
|
closed
|
[
"triaged",
"module: dynamic shapes"
] | 2
|
CONTRIBUTOR
|
If we compile the following code, we get the graph shown below.
Code:
```
import torch
import torch._dynamo.config
from torch.utils.checkpoint import _is_compiling
@torch.compile()
class Y(torch.nn.Module):
def __init__(self):
super().__init__()
def forward(self, x):
return x.view(-1, self._N_input, 192)
@torch.compile(dynamic=True)
class M(torch.nn.Module):
def __init__(self, input):
super().__init__()
self._N_input = input.size()[0]
def forward(self, x, z):
# input is [B, n * d]
x = x*2
x = x*5
x = x *3
y = Y()
y._N_input = self._N_input
x = y.forward(x)# [B, n, d]
x = x*20
x = x*30
x = x*43
return x
x = torch.randn(5, 3210, 192) # Shape [B=4, n*d=12]
num_inputs = torch.randn(3210)
m = M(num_inputs)
y1, z1 = 3210, 192 # y1 * z1 == 12
output1 = m(x, z1)
print(output1)
```
Graph (note that L_self_N_input is Sym(s1)):
```
V0122 22:39:29.878000 2775135 torch/_dynamo/output_graph.py:1353] [0/0] [__graph_code] TRACED GRAPH
V0122 22:39:29.878000 2775135 torch/_dynamo/output_graph.py:1353] [0/0] [__graph_code] ===== __compiled_fn_1 =====
V0122 22:39:29.878000 2775135 torch/_dynamo/output_graph.py:1353] [0/0] [__graph_code] /home/lsakka/pytorch/torch/fx/_lazy_graph_module.py class GraphModule(torch.nn.Module):
V0122 22:39:29.878000 2775135 torch/_dynamo/output_graph.py:1353] [0/0] [__graph_code] def forward(self, s0: "Sym(s0)", s1: "Sym(s1)", L_x_: "f32[s0, s1, 192][192*s1, 192, 1]cpu", L_self_N_input: "Sym(s1)"):
V0122 22:39:29.878000 2775135 torch/_dynamo/output_graph.py:1353] [0/0] [__graph_code] l_x_ = L_x_
V0122 22:39:29.878000 2775135 torch/_dynamo/output_graph.py:1353] [0/0] [__graph_code] l_self_n_input = L_self_N_input
V0122 22:39:29.878000 2775135 torch/_dynamo/output_graph.py:1353] [0/0] [__graph_code]
V0122 22:39:29.878000 2775135 torch/_dynamo/output_graph.py:1353] [0/0] [__graph_code] # File: /home/lsakka/pytorch/example.py:751 in forward, code: x = x*2
V0122 22:39:29.878000 2775135 torch/_dynamo/output_graph.py:1353] [0/0] [__graph_code] x: "f32[s0, s1, 192][192*s1, 192, 1]cpu" = l_x_ * 2; l_x_ = None
V0122 22:39:29.878000 2775135 torch/_dynamo/output_graph.py:1353] [0/0] [__graph_code]
V0122 22:39:29.878000 2775135 torch/_dynamo/output_graph.py:1353] [0/0] [__graph_code] # File: /home/lsakka/pytorch/example.py:752 in forward, code: x = x*5
V0122 22:39:29.878000 2775135 torch/_dynamo/output_graph.py:1353] [0/0] [__graph_code] x_1: "f32[s0, s1, 192][192*s1, 192, 1]cpu" = x * 5; x = None
V0122 22:39:29.878000 2775135 torch/_dynamo/output_graph.py:1353] [0/0] [__graph_code]
V0122 22:39:29.878000 2775135 torch/_dynamo/output_graph.py:1353] [0/0] [__graph_code] # File: /home/lsakka/pytorch/example.py:753 in forward, code: x = x *3
V0122 22:39:29.878000 2775135 torch/_dynamo/output_graph.py:1353] [0/0] [__graph_code] x_2: "f32[s0, s1, 192][192*s1, 192, 1]cpu" = x_1 * 3; x_1 = None
V0122 22:39:29.878000 2775135 torch/_dynamo/output_graph.py:1353] [0/0] [__graph_code]
V0122 22:39:29.878000 2775135 torch/_dynamo/output_graph.py:1353] [0/0] [__graph_code] # File: /home/lsakka/pytorch/example.py:737 in __init__, code: super().__init__()
V0122 22:39:29.878000 2775135 torch/_dynamo/output_graph.py:1353] [0/0] [__graph_code] _log_api_usage_once = torch._C._log_api_usage_once('python.nn_module'); _log_api_usage_once = None
V0122 22:39:29.878000 2775135 torch/_dynamo/output_graph.py:1353] [0/0] [__graph_code]
V0122 22:39:29.878000 2775135 torch/_dynamo/output_graph.py:1353] [0/0] [__graph_code] # File: /home/lsakka/pytorch/example.py:740 in forward, code: return x.view(-1, self._N_input, 192)
V0122 22:39:29.878000 2775135 torch/_dynamo/output_graph.py:1353] [0/0] [__graph_code] x_3: "f32[s0, s1, 192][192*s1, 192, 1]cpu" = x_2.view(-1, l_self_n_input, 192); x_2 = l_self_n_input = None
V0122 22:39:29.878000 2775135 torch/_dynamo/output_graph.py:1353] [0/0] [__graph_code]
V0122 22:39:29.878000 2775135 torch/_dynamo/output_graph.py:1353] [0/0] [__graph_code] # File: /home/lsakka/pytorch/example.py:759 in forward, code: x = x*20
V0122 22:39:29.878000 2775135 torch/_dynamo/output_graph.py:1353] [0/0] [__graph_code] x_4: "f32[s0, s1, 192][192*s1, 192, 1]cpu" = x_3 * 20; x_3 = None
V0122 22:39:29.878000 2775135 torch/_dynamo/output_graph.py:1353] [0/0] [__graph_code]
V0122 22:39:29.878000 2775135 torch/_dynamo/output_graph.py:1353] [0/0] [__graph_code] # File: /home/lsakka/pytorch/example.py:760 in forward, code: x = x*30
V0122 22:39:29.878000 2775135 torch/_dynamo/output_graph.py:1353] [0/0] [__graph_code] x_5: "f32[s0, s1, 192][192*s1, 192, 1]cpu" = x_4 * 30; x_4 = None
V0122 22:39:29.878000 2775135 torch/_dynamo/output_graph.py:1353] [0/0] [__graph_code]
V0122 22:39:29.878000 2775135 torch/_dynamo/output_graph.py:1353] [0/0] [__graph_code] # File: /home/lsakka/pytorch/example.py:761 in forward, code: x = x*43
V0122 22:39:29.878000 2775135 torch/_dynamo/output_graph.py:1353] [0/0] [__graph_code] x_6: "f32[s0, s1, 192][192*s1, 192, 1]cpu" = x_5 * 43; x_5 = None
V0122 22:39:29.878000 2775135 torch/_dynamo/output_graph.py:1353] [0/0] [__graph_code] return (x_6,)
V0122 22:39:29.878000 2775135 torch/_dynamo/output_graph.py:1353] [0/0] [__graph_code]
V0122 22:39:29.878000 2775135 torch/_dynamo/output_graph.py:1353] [0/0] [__graph_code]
```
Now if we want to use mark_dynamic instead, I:
1) remove `dynamic=True`
2) call mark_dynamic as follows:
```
import torch
import torch._dynamo.config
from torch.utils.checkpoint import _is_compiling
@torch.compile()
class Y(torch.nn.Module):
def __init__(self):
super().__init__()
def forward(self, x):
return x.view(-1, self._N_input, 192)
@torch.compile()
class M(torch.nn.Module):
def __init__(self, input):
super().__init__()
self._N_input = input.size()[0]
def forward(self, x, z):
# input is [B, n * d]
x = x*2
x = x*5
x = x *3
y = Y()
y._N_input = self._N_input
x = y.forward(x)# [B, n, d]
x = x*20
x = x*30
x = x*43
return x
x = torch.randn(5, 3210, 192) # Shape [B=4, n*d=12]
num_inputs = torch.randn(3210)
torch._dynamo.decorators.mark_dynamic(x, 1)
torch._dynamo.decorators.mark_dynamic(num_inputs, 0)
m = M(num_inputs)
y1, z1 = 3210, 192 # y1 * z1 == 12
output1 = m(x, z1)
print(output1)
```
but this does not produce a dynamic graph; it looks like mark_dynamic did not work on num_inputs. The run reports:
```
raise ConstraintViolationError(
torch.fx.experimental.symbolic_shapes.ConstraintViolationError: Constraints violated (L['x'].size()[1])! For more information, run with TORCH_LOGS="+dynamic".
- Not all values of RelaxedUnspecConstraint(L['x'].size()[1]) are valid because L['x'].size()[1] was inferred to be a constant (3210).
```
and the traced graph is:
```
V0122 22:42:15.613000 2829692 torch/_dynamo/output_graph.py:1353] [0/0] [__graph_code] TRACED GRAPH
V0122 22:42:15.613000 2829692 torch/_dynamo/output_graph.py:1353] [0/0] [__graph_code] ===== __compiled_fn_1 =====
V0122 22:42:15.613000 2829692 torch/_dynamo/output_graph.py:1353] [0/0] [__graph_code] /home/lsakka/pytorch/torch/fx/_lazy_graph_module.py class GraphModule(torch.nn.Module):
V0122 22:42:15.613000 2829692 torch/_dynamo/output_graph.py:1353] [0/0] [__graph_code] def forward(self, L_x_: "f32[5, 3210, 192][616320, 192, 1]cpu"):
V0122 22:42:15.613000 2829692 torch/_dynamo/output_graph.py:1353] [0/0] [__graph_code] l_x_ = L_x_
V0122 22:42:15.613000 2829692 torch/_dynamo/output_graph.py:1353] [0/0] [__graph_code]
V0122 22:42:15.613000 2829692 torch/_dynamo/output_graph.py:1353] [0/0] [__graph_code] # File: /home/lsakka/pytorch/example.py:751 in forward, code: x = x*2
V0122 22:42:15.613000 2829692 torch/_dynamo/output_graph.py:1353] [0/0] [__graph_code] x: "f32[5, 3210, 192][616320, 192, 1]cpu" = l_x_ * 2; l_x_ = None
V0122 22:42:15.613000 2829692 torch/_dynamo/output_graph.py:1353] [0/0] [__graph_code]
V0122 22:42:15.613000 2829692 torch/_dynamo/output_graph.py:1353] [0/0] [__graph_code] # File: /home/lsakka/pytorch/example.py:752 in forward, code: x = x*5
V0122 22:42:15.613000 2829692 torch/_dynamo/output_graph.py:1353] [0/0] [__graph_code] x_1: "f32[5, 3210, 192][616320, 192, 1]cpu" = x * 5; x = None
V0122 22:42:15.613000 2829692 torch/_dynamo/output_graph.py:1353] [0/0] [__graph_code]
V0122 22:42:15.613000 2829692 torch/_dynamo/output_graph.py:1353] [0/0] [__graph_code] # File: /home/lsakka/pytorch/example.py:753 in forward, code: x = x *3
V0122 22:42:15.613000 2829692 torch/_dynamo/output_graph.py:1353] [0/0] [__graph_code] x_2: "f32[5, 3210, 192][616320, 192, 1]cpu" = x_1 * 3; x_1 = None
V0122 22:42:15.613000 2829692 torch/_dynamo/output_graph.py:1353] [0/0] [__graph_code]
V0122 22:42:15.613000 2829692 torch/_dynamo/output_graph.py:1353] [0/0] [__graph_code] # File: /home/lsakka/pytorch/example.py:737 in __init__, code: super().__init__()
V0122 22:42:15.613000 2829692 torch/_dynamo/output_graph.py:1353] [0/0] [__graph_code] _log_api_usage_once = torch._C._log_api_usage_once('python.nn_module'); _log_api_usage_once = None
V0122 22:42:15.613000 2829692 torch/_dynamo/output_graph.py:1353] [0/0] [__graph_code]
V0122 22:42:15.613000 2829692 torch/_dynamo/output_graph.py:1353] [0/0] [__graph_code] # File: /home/lsakka/pytorch/example.py:740 in forward, code: return x.view(-1, self._N_input, 192)
V0122 22:42:15.613000 2829692 torch/_dynamo/output_graph.py:1353] [0/0] [__graph_code] x_3: "f32[5, 3210, 192][616320, 192, 1]cpu" = x_2.view(-1, 3210, 192); x_2 = None
V0122 22:42:15.613000 2829692 torch/_dynamo/output_graph.py:1353] [0/0] [__graph_code]
V0122 22:42:15.613000 2829692 torch/_dynamo/output_graph.py:1353] [0/0] [__graph_code] # File: /home/lsakka/pytorch/example.py:759 in forward, code: x = x*20
V0122 22:42:15.613000 2829692 torch/_dynamo/output_graph.py:1353] [0/0] [__graph_code] x_4: "f32[5, 3210, 192][616320, 192, 1]cpu" = x_3 * 20; x_3 = None
V0122 22:42:15.613000 2829692 torch/_dynamo/output_graph.py:1353] [0/0] [__graph_code]
V0122 22:42:15.613000 2829692 torch/_dynamo/output_graph.py:1353] [0/0] [__graph_code] # File: /home/lsakka/pytorch/example.py:760 in forward, code: x = x*30
V0122 22:42:15.613000 2829692 torch/_dynamo/output_graph.py:1353] [0/0] [__graph_code] x_5: "f32[5, 3210, 192][616320, 192, 1]cpu" = x_4 * 30; x_4 = None
V0122 22:42:15.613000 2829692 torch/_dynamo/output_graph.py:1353] [0/0] [__graph_code]
V0122 22:42:15.613000 2829692 torch/_dynamo/output_graph.py:1353] [0/0] [__graph_code] # File: /home/lsakka/pytorch/example.py:761 in forward, code: x = x*43
V0122 22:42:15.613000 2829692 torch/_dynamo/output_graph.py:1353] [0/0] [__graph_code] x_6: "f32[5, 3210, 192][616320, 192, 1]cpu" = x_5 * 43; x_5 = None
V0122 22:42:15.613000 2829692 torch/_dynamo/output_graph.py:1353] [0/0] [__graph_code] return (x_6,)
V0122 22:42:15.613000 2829692 torch/_dynamo/output_graph.py:1353] [0/0] [__graph_code]
V0122 22:42:15.613000 2829692 torch/_dynamo/output_graph.py:1353] [0/0] [__graph_code]
```
cc @chauhang @penguinwu @ezyang @bobrenjc93
| true
|
2,805,969,991
|
[TEST ONLY] Conv with `oc = 0`
|
chunyuan-w
|
closed
|
[
"open source",
"ciflow/trunk",
"topic: not user facing"
] | 1
|
COLLABORATOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #145462
| true
|
2,805,951,390
|
Incomplete check of LR as a tensor in Optimizer
|
Tony-Y
|
closed
|
[
"module: optimizer",
"triaged",
"actionable"
] | 4
|
CONTRIBUTOR
|
### 🐛 Describe the bug
The tutorial "[Running the compiled optimizer with an LR Scheduler](https://pytorch.org/tutorials/recipes/compiling_optimizer_lr_scheduler.html)" shows how to use the LR as a tensor. According to this tutorial, we should use a 0-dim tensor for the LR. However, Optimizer also accepts a 1-dim tensor of size 1, and using one leads to a runtime error.
```
optimizer = torch.optim.SGD(model.parameters(), lr=torch.tensor([0.01]).cuda())
scheduler = torch.optim.lr_scheduler.LinearLR(optimizer, total_iters=5)
```
```
RuntimeError: fill_ only supports 0-dimension value tensor but got tensor with 1 dimensions.
```
Currently, Optimizer checks the tensor size:
https://github.com/pytorch/pytorch/blob/faa10faa2cad1cf6eef95a3e5bda255be6bc4c87/torch/optim/adam.py#L50-L56
I think that Optimizer should check the tensor dimension:
```
if lr.dim() != 0:
    raise ValueError("Tensor lr must be 0-dimension")
```
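For comparison, the 0-dim form the tutorial recommends works with the scheduler; a minimal sketch (the tiny Linear model here is only for illustration):
```python
import torch

model = torch.nn.Linear(8, 8, device="cuda")
# 0-dim LR tensor, as the tutorial recommends
optimizer = torch.optim.SGD(model.parameters(), lr=torch.tensor(0.01, device="cuda"))
# fill_ works on a 0-dim tensor, so constructing the scheduler does not raise
scheduler = torch.optim.lr_scheduler.LinearLR(optimizer, total_iters=5)
```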
The whole code I used:
```
import torch

print(torch.__version__)

# exit cleanly if we are on a device that doesn't support ``torch.compile``
if torch.cuda.get_device_capability() < (7, 0):
    print("Exiting because torch.compile is not supported on this device.")
    import sys
    sys.exit(0)

# Setup random seed
torch.manual_seed(12345)

# Create simple model
model = torch.nn.Sequential(
    *[torch.nn.Linear(1024, 1024, False, device="cuda") for _ in range(10)]
)
input = torch.rand(1024, device="cuda")

# run forward pass
output = model(input)

# run backward to populate the grads for our optimizer below
output.sum().backward()

optimizer = torch.optim.SGD(model.parameters(), lr=torch.tensor([0.01]).cuda())
scheduler = torch.optim.lr_scheduler.LinearLR(optimizer, total_iters=5)

def fn():
    optimizer.step()
    scheduler.step()

opt_fn = torch.compile(fn)

for i in range(5):
    opt_fn()
    print(i, optimizer.param_groups[0]['lr'].item())
```
### Error logs
```
---------------------------------------------------------------------------
RuntimeError Traceback (most recent call last)
<ipython-input-2-67dacbf46b2f> in <cell line: 26>()
24
25 optimizer = torch.optim.SGD(model.parameters(), lr=torch.tensor([0.01]).cuda())
---> 26 scheduler = torch.optim.lr_scheduler.LinearLR(optimizer, total_iters=5)
27
28 def fn():
/usr/local/lib/python3.10/dist-packages/torch/optim/lr_scheduler.py in __init__(self, optimizer, start_factor, end_factor, total_iters, last_epoch, verbose)
746 self.end_factor = end_factor
747 self.total_iters = total_iters
--> 748 super().__init__(optimizer, last_epoch, verbose)
749
750 def get_lr(self):
/usr/local/lib/python3.10/dist-packages/torch/optim/lr_scheduler.py in __init__(self, optimizer, last_epoch, verbose)
144 patch_track_step_called(self.optimizer)
145 self.verbose = _check_verbose_deprecated_warning(verbose)
--> 146 self._initial_step()
147
148 def _initial_step(self):
/usr/local/lib/python3.10/dist-packages/torch/optim/lr_scheduler.py in _initial_step(self)
149 """Initialize step counts and perform a step."""
150 self._step_count = 0
--> 151 self.step()
152
153 def state_dict(self):
/usr/local/lib/python3.10/dist-packages/torch/optim/lr_scheduler.py in step(self, epoch)
248 param_group, lr = data
249 if isinstance(param_group["lr"], Tensor):
--> 250 param_group["lr"].fill_(lr)
251 else:
252 param_group["lr"] = lr
RuntimeError: fill_ only supports 0-dimension value tensor but got tensor with 1 dimensions.
```
### Versions
PyTorch version: 2.5.1+cu121
Is debug build: False
CUDA used to build PyTorch: 12.1
ROCM used to build PyTorch: N/A
OS: Ubuntu 22.04.3 LTS (x86_64)
GCC version: (Ubuntu 11.4.0-1ubuntu1~22.04) 11.4.0
Clang version: 14.0.0-1ubuntu1.1
CMake version: version 3.31.2
Libc version: glibc-2.35
Python version: 3.10.12 (main, Nov 6 2024, 20:22:13) [GCC 11.4.0] (64-bit runtime)
Python platform: Linux-6.6.56+-x86_64-with-glibc2.35
Is CUDA available: True
CUDA runtime version: 12.2.140
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration:
GPU 0: Tesla T4
GPU 1: Tesla T4
Nvidia driver version: 560.35.03
cuDNN version: Probably one of the following:
/usr/lib/x86_64-linux-gnu/libcudnn.so.8.9.6
/usr/lib/x86_64-linux-gnu/libcudnn_adv_infer.so.8.9.6
/usr/lib/x86_64-linux-gnu/libcudnn_adv_train.so.8.9.6
/usr/lib/x86_64-linux-gnu/libcudnn_cnn_infer.so.8.9.6
/usr/lib/x86_64-linux-gnu/libcudnn_cnn_train.so.8.9.6
/usr/lib/x86_64-linux-gnu/libcudnn_ops_infer.so.8.9.6
/usr/lib/x86_64-linux-gnu/libcudnn_ops_train.so.8.9.6
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 46 bits physical, 48 bits virtual
Byte Order: Little Endian
CPU(s): 4
On-line CPU(s) list: 0-3
Vendor ID: GenuineIntel
Model name: Intel(R) Xeon(R) CPU @ 2.00GHz
CPU family: 6
Model: 85
Thread(s) per core: 2
Core(s) per socket: 2
Socket(s): 1
Stepping: 3
BogoMIPS: 4000.30
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ss ht syscall nx pdpe1gb rdtscp lm constant_tsc rep_good nopl xtopology nonstop_tsc cpuid tsc_known_freq pni pclmulqdq ssse3 fma cx16 pcid sse4_1 sse4_2 x2apic movbe popcnt aes xsave avx f16c rdrand hypervisor lahf_lm abm 3dnowprefetch pti ssbd ibrs ibpb stibp fsgsbase tsc_adjust bmi1 hle avx2 smep bmi2 erms invpcid rtm mpx avx512f avx512dq rdseed adx smap clflushopt clwb avx512cd avx512bw avx512vl xsaveopt xsavec xgetbv1 xsaves arat md_clear arch_capabilities
Hypervisor vendor: KVM
Virtualization type: full
L1d cache: 64 KiB (2 instances)
L1i cache: 64 KiB (2 instances)
L2 cache: 2 MiB (2 instances)
L3 cache: 38.5 MiB (1 instance)
NUMA node(s): 1
NUMA node0 CPU(s): 0-3
Vulnerability Gather data sampling: Not affected
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Mitigation; PTE Inversion
Vulnerability Mds: Mitigation; Clear CPU buffers; SMT Host state unknown
Vulnerability Meltdown: Mitigation; PTI
Vulnerability Mmio stale data: Vulnerable: Clear CPU buffers attempted, no microcode; SMT Host state unknown
Vulnerability Reg file data sampling: Not affected
Vulnerability Retbleed: Mitigation; IBRS
Vulnerability Spec rstack overflow: Not affected
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; IBRS; IBPB conditional; STIBP conditional; RSB filling; PBRSB-eIBRS Not affected; BHI SW loop, KVM SW loop
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Mitigation; Clear CPU buffers; SMT Host state unknown
Versions of relevant libraries:
[pip3] mypy-extensions==1.0.0
[pip3] numpy==1.26.4
[pip3] nvidia-cublas-cu12==12.6.4.1
[pip3] nvidia-cuda-cupti-cu12==12.6.80
[pip3] nvidia-cuda-runtime-cu12==12.6.77
[pip3] nvidia-cudnn-cu12==9.6.0.74
[pip3] nvidia-cufft-cu12==11.3.0.4
[pip3] nvidia-curand-cu12==10.3.7.77
[pip3] nvidia-cusolver-cu12==11.7.1.2
[pip3] nvidia-cusparse-cu12==12.5.4.2
[pip3] nvidia-nccl-cu12==2.23.4
[pip3] nvidia-nvjitlink-cu12==12.6.85
[pip3] nvtx==0.2.10
[pip3] onnx==1.17.0
[pip3] optree==0.13.1
[pip3] pynvjitlink-cu12==0.4.0
[pip3] pytorch-ignite==0.5.1
[pip3] pytorch-lightning==2.5.0.post0
[pip3] torch==2.5.1+cu121
[pip3] torchaudio==2.5.1+cu121
[pip3] torchinfo==1.8.0
[pip3] torchmetrics==1.6.1
[pip3] torchsummary==1.5.1
[pip3] torchtune==0.5.0
[pip3] torchvision==0.20.1+cu121
[conda] Could not collect
cc @vincentqb @jbschlosser @albanD @janeyx99 @crcrpar @chauhang @penguinwu
| true
|
2,805,889,854
|
Flex Attention not support score_mod with gradients
|
LoserCheems
|
closed
|
[
"triaged",
"oncall: pt2",
"module: higher order operators",
"module: pt2-dispatcher",
"module: flex attention"
] | 2
|
NONE
|
### 🐛 Describe the bug
`Flex Attention` does not support a `score_mod` with gradients, making it impossible to define a learnable `score_mod` for dynamic mask attention variants.
```python
import torch
from torch.nn.attention.flex_attention import flex_attention

A = torch.nn.Parameter(torch.zeros(1))
query = torch.randn(1, 1, 1, 32)
key = torch.randn(1, 1, 1, 32)
value = torch.randn(1, 1, 1, 32)
dt = torch.randn(1, 1, 1)

dynamic_mask = torch.exp(A * dt).transpose(-1, -2)
attn_mask = dynamic_mask[:, :, None, :]

def dynamic_mod(score, batch, head, q_idx, kv_idx):
    score = score + attn_mask[batch][head][0][kv_idx]
    return score

attn_output = flex_attention(
    query=query,
    key=key,
    value=value,
    score_mod=dynamic_mod,
)
print(attn_output)
```
```
Traceback (most recent call last):
File "e:\Doge\bug.py", line 22, in <module>
attn_output = flex_attention(
File "E:\conda\envs\doge\lib\site-packages\torch\nn\attention\flex_attention.py", line 1038, in flex_attention
out, lse = torch.compile(
File "E:\conda\envs\doge\lib\site-packages\torch\_dynamo\eval_frame.py", line 465, in _fn
return fn(*args, **kwargs)
File "E:\conda\envs\doge\lib\site-packages\torch\nn\attention\flex_attention.py", line 1032, in _flex_attention_hop_wrapper
def _flex_attention_hop_wrapper(*args, **kwargs):
File "E:\conda\envs\doge\lib\site-packages\torch\_dynamo\eval_frame.py", line 632, in _fn
return fn(*args, **kwargs)
File "<eval_with_key>.3", line 24, in forward
File "E:\conda\envs\doge\lib\site-packages\torch\_higher_order_ops\flex_attention.py", line 109, in __call__
return super().__call__(
File "E:\conda\envs\doge\lib\site-packages\torch\_ops.py", line 433, in __call__
return wrapper()
File "E:\conda\envs\doge\lib\site-packages\torch\_dynamo\eval_frame.py", line 632, in _fn
return fn(*args, **kwargs)
File "E:\conda\envs\doge\lib\site-packages\torch\_ops.py", line 429, in wrapper
return self.dispatch(
File "E:\conda\envs\doge\lib\site-packages\torch\_ops.py", line 412, in dispatch
return kernel(*args, **kwargs)
File "E:\conda\envs\doge\lib\site-packages\torch\_higher_order_ops\flex_attention.py", line 703, in flex_attention_autograd
out, logsumexp = FlexAttentionAutogradOp.apply(
File "E:\conda\envs\doge\lib\site-packages\torch\autograd\function.py", line 575, in apply
return super().apply(*args, **kwargs) # type: ignore[misc]
File "E:\conda\envs\doge\lib\site-packages\torch\_higher_order_ops\flex_attention.py", line 578, in forward
assert (
AssertionError: Captured buffers that require grad are not yet supported.
```
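For what it's worth, a minimal sketch (not an official workaround, and not verified here) showing that the same `score_mod` runs once the captured buffer no longer requires grad, which of course defeats the purpose of making `A` learnable:
```python
import torch
from torch.nn.attention.flex_attention import flex_attention

A = torch.nn.Parameter(torch.zeros(1))
query = torch.randn(1, 1, 1, 32)
key = torch.randn(1, 1, 1, 32)
value = torch.randn(1, 1, 1, 32)
dt = torch.randn(1, 1, 1)

# Detached buffer: no grad flows back to A, so this only sidesteps the assertion.
attn_mask = torch.exp(A * dt).transpose(-1, -2).detach()[:, :, None, :]

def dynamic_mod(score, batch, head, q_idx, kv_idx):
    return score + attn_mask[batch][head][0][kv_idx]

attn_output = flex_attention(query=query, key=key, value=value, score_mod=dynamic_mod)
```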
### Versions
Collecting environment information...
PyTorch version: 2.6.0a0+df5bbc09d1.nv24.12
Is debug build: False
CUDA used to build PyTorch: 12.6
ROCM used to build PyTorch: N/A
OS: Ubuntu 24.04.1 LTS (x86_64)
GCC version: (Ubuntu 13.2.0-23ubuntu4) 13.2.0
Clang version: Could not collect
CMake version: version 3.31.1
Libc version: glibc-2.39
Python version: 3.12.3 (main, Nov 6 2024, 18:32:19) [GCC 13.2.0] (64-bit runtime)
Python platform: Linux-5.15.153.1-microsoft-standard-WSL2-x86_64-with-glibc2.39
Is CUDA available: True
CUDA runtime version: 12.6.85
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration: GPU 0: NVIDIA GeForce RTX 4090
Nvidia driver version: 566.36
cuDNN version: Probably one of the following:
/usr/lib/x86_64-linux-gnu/libcudnn.so.9.6.0
/usr/lib/x86_64-linux-gnu/libcudnn_adv.so.9.6.0
/usr/lib/x86_64-linux-gnu/libcudnn_cnn.so.9.6.0
/usr/lib/x86_64-linux-gnu/libcudnn_engines_precompiled.so.9.6.0
/usr/lib/x86_64-linux-gnu/libcudnn_engines_runtime_compiled.so.9.6.0
/usr/lib/x86_64-linux-gnu/libcudnn_graph.so.9.6.0
/usr/lib/x86_64-linux-gnu/libcudnn_heuristic.so.9.6.0
/usr/lib/x86_64-linux-gnu/libcudnn_ops.so.9.6.0
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 46 bits physical, 48 bits virtual
Byte Order: Little Endian
CPU(s): 32
On-line CPU(s) list: 0-31
Vendor ID: GenuineIntel
Model name: 13th Gen Intel(R) Core(TM) i9-13900KS
CPU family: 6
Model: 183
Thread(s) per core: 2
Core(s) per socket: 16
Socket(s): 1
Stepping: 1
BogoMIPS: 6374.42
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ss ht syscall nx pdpe1gb rdtscp lm constant_tsc rep_good nopl xtopology tsc_reliable nonstop_tsc cpuid pni pclmulqdq vmx ssse3 fma cx16 sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand hypervisor lahf_lm abm 3dnowprefetch ssbd ibrs ibpb stibp ibrs_enhanced tpr_shadow vnmi ept vpid ept_ad fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid rdseed adx smap clflushopt clwb sha_ni xsaveopt xsavec xgetbv1 xsaves avx_vnni umip waitpkg gfni vaes vpclmulqdq rdpid movdiri movdir64b fsrm md_clear serialize flush_l1d arch_capabilities
Virtualization: VT-x
Hypervisor vendor: Microsoft
Virtualization type: full
L1d cache: 768 KiB (16 instances)
L1i cache: 512 KiB (16 instances)
L2 cache: 32 MiB (16 instances)
L3 cache: 36 MiB (1 instance)
Vulnerability Gather data sampling: Not affected
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Not affected
Vulnerability Retbleed: Mitigation; Enhanced IBRS
Vulnerability Spec rstack overflow: Not affected
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl and seccomp
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Enhanced IBRS, IBPB conditional, RSB filling, PBRSB-eIBRS SW sequence
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Versions of relevant libraries:
[pip3] mypy-extensions==1.0.0
[pip3] numpy==1.26.4
[pip3] nvidia-cudnn-frontend==1.8.0
[pip3] nvtx==0.2.5
[pip3] onnx==1.17.0
[pip3] optree==0.13.1
[pip3] pynvjitlink==0.3.0
[pip3] pytorch-triton==3.0.0+72734f086
[pip3] torch==2.6.0a0+df5bbc09d1.nv24.12
[pip3] torch_tensorrt==2.6.0a0
[pip3] torchprofile==0.0.4
[pip3] torchvision==0.20.0a0
[conda] Could not collect
cc @chauhang @penguinwu @zou3519 @ydwu4 @bdhirsh @yf225 @Chillee @drisspg @yanboliang @BoyuanFeng
| true
|
2,805,888,568
|
[AOTInductor] Align behavior between CPU and GPU
|
muchulee8
|
closed
|
[
"fb-exported",
"Merged",
"ciflow/trunk",
"topic: not user facing",
"ciflow/inductor",
"merging"
] | 29
|
CONTRIBUTOR
|
Summary:
(1) Make sure CPU and GPU don't have different implementations and behavior when called from the same path and API. The only difference between CPU and GPU after this PR should be the running hardware.
(2) This PR fixes the invalid memory access when `it == constants_map.end()`
(3) This PR resolves T179437596
Test Plan: buck2 run mode/dev sigmoid/inference/test:e2e_test_cpu
Differential Revision: D68540744
| true
|
2,805,873,069
|
[c10d] Flush file in file recorder
|
c-p-i-o
|
closed
|
[
"oncall: distributed",
"Merged",
"ciflow/trunk",
"release notes: distributed (c10d)"
] | 12
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #145458
Summary:
Flushing file to hopefully prevent file corruptions as reported in
https://github.com/pytorch/pytorch/pull/145125
Test Plan:
Couldn't get file corruption to occur in my tests.
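The general pattern described is just flushing (and optionally fsync-ing) after writing the dump; a rough Python sketch with a hypothetical file name, not the actual recorder code:
```python
import os

payload = b"flight recorder entries..."  # placeholder contents
with open("nccl_trace_dump.bin", "wb") as f:  # hypothetical path
    f.write(payload)
    f.flush()              # push Python's buffer to the OS
    os.fsync(f.fileno())   # ask the OS to write it to disk
```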
cc @H-Huang @awgu @kwen2501 @wanchaol @fegin @fduwjj @wz337 @wconstab @d4l3k
| true
|
2,805,829,310
|
DISABLED test_tensor_subclass_basic (__main__.TestCompiledAutograd)
|
pytorch-bot[bot]
|
closed
|
[
"triaged",
"module: flaky-tests",
"skipped",
"oncall: pt2",
"module: inductor"
] | 4
|
NONE
|
Platforms: asan, linux, rocm, mac, macos, slow
This test was disabled because it is failing in CI. See [recent examples](https://hud.pytorch.org/flakytest?name=test_tensor_subclass_basic&suite=TestCompiledAutograd&limit=100) and the most recent trunk [workflow logs](https://github.com/pytorch/pytorch/runs/36028557832).
Over the past 3 hours, it has been determined flaky in 20 workflow(s) with 40 failures and 20 successes.
**Debugging instructions (after clicking on the recent samples link):**
DO NOT ASSUME THINGS ARE OKAY IF THE CI IS GREEN. We now shield flaky tests from developers so CI will thus be green but it will be harder to parse the logs.
To find relevant log snippets:
1. Click on the workflow logs linked above
2. Click on the Test step of the job so that it is expanded. Otherwise, the grepping will not work.
3. Grep for `test_tensor_subclass_basic`
4. There should be several instances run (as flaky tests are rerun in CI) from which you can study the logs.
Test file path: `inductor/test_compiled_autograd.py`
cc @clee2000 @wdvr @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @ColinPeppler @amjames @desertfire @chauhang @aakhundov
| true
|
2,805,822,844
|
torch._dynamo.exc.Unsupported: Graph break due to unsupported builtin torch._C._dynamo.eval_frame.set_eval_frame.
|
taowenqi
|
open
|
[
"oncall: pt2",
"export-triaged",
"oncall: export"
] | 2
|
NONE
|
### 🐛 Describe the bug
Part of an example as the following:
```python
from torch_mlir import fx

module = fx.export_and_import(
    model,
    input_tensor,
    output_type="tosa",
    func_name="forward",
)
```
When I use PyInstaller to pack the fx graph export and import into a binary file, it hits the following problem. When I run the unpacked Python code, it works fine.

### Versions
`torch==2.6.0.dev20241216+cpu`
cc @chauhang @penguinwu @avikchaudhuri @gmagogsfm @zhxchen17 @tugsbayasgalan @angelayi @suo @ydwu4
| true
|
2,805,818,451
|
Update TorchBench commit to main
|
huydhn
|
closed
|
[
"Merged",
"topic: not user facing",
"test-config/default",
"module: dynamo",
"ciflow/inductor",
"ciflow/inductor-perf-compare",
"no-runner-experiments",
"ciflow/inductor-periodic"
] | 7
|
CONTRIBUTOR
|
I'm adding sam2 to TorchBench https://github.com/pytorch/benchmark/issues/2566, so, as part of that, I'm updating PyTorch CI to use latest TorchBench commit.
The corresponding change from TorchBench is https://github.com/pytorch/benchmark/pull/2584
The main thing to call out is that the newer transformers version added by https://github.com/pytorch/benchmark/pull/2488 is regressing several models. This needs to be investigated further, so I pin the version to unblock this change.
* `hf_Roberta_base` a new model added by https://github.com/pytorch/benchmark/pull/2279, not sure why it fails accuracy on A10G, but it works fine on A100
* `speech_transformer` failures are pre-existing trunk failures, i.e. https://github.com/pytorch/pytorch/actions/runs/13040114684/job/36380989702#step:22:2408
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @chauhang @amjames
| true
|
2,805,815,556
|
Add check that envvar configs are boolean
|
Raymo111
|
closed
|
[
"Merged",
"ciflow/trunk",
"topic: not user facing"
] | 12
|
MEMBER
|
So we don't get unexpected behavior when higher-typed values are passed in.
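A rough sketch of the kind of check meant here (the names and structure are hypothetical, not the actual PyTorch config API):
```python
import os

def get_env_config(name: str, default) -> bool:
    # Only boolean defaults are allowed for env-var driven configs,
    # so a stray int/str default can't silently change behavior.
    if not isinstance(default, bool):
        raise AssertionError(
            f"env config {name!r} must have a bool default, got {type(default).__name__}"
        )
    value = os.environ.get(name)
    return default if value is None else value == "1"
```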
| true
|
2,805,814,674
|
[CUDA] Illegal Memory Access with `AdaptiveMaxPool2d`
|
jwnhy
|
open
|
[
"module: nn",
"module: cuda",
"triaged",
"module: edge cases"
] | 0
|
NONE
|
### 🐛 Describe the bug
Found by a fuzzer; the parameters used to trigger this do not look like a particularly extreme corner case.
@eqy
```python
import torch
m1 = torch.randn(8812, 1, 2).cuda()
model = torch.nn.AdaptiveMaxPool2d(output_size=[262143, 1]).cuda()
model(m1)
```
```bash
compute-sanitizer python3 poc7.py
```
compute-sanitizer log.
```python
========= Invalid __global__ write of size 8 bytes
========= at void at::native::<unnamed>::adaptivemaxpool<float>(const T1 *, T1 *, long *, int, int, int, int, long, long, long)+0x1b80
========= by thread (0,4,0) in block (8195,0,0)
========= Address 0x79d8025f0008 is out of bounds
========= and is 17,173,643,256 bytes before the nearest allocation at 0x79dc02000000 of size 18,480,103,424 bytes
========= Saved host backtrace up to driver entry point at kernel launch time
========= Host Frame: [0x2dfbef]
========= in /lib/x86_64-linux-gnu/libcuda.so.1
========= Host Frame: [0x15803]
========= in /home/jwnhy/miniconda3/envs/gpu-torch/lib/python3.12/site-packages/torch/lib/../../../../libcudart.so.12
========= Host Frame:cudaLaunchKernel [0x75230]
========= in /home/jwnhy/miniconda3/envs/gpu-torch/lib/python3.12/site-packages/torch/lib/../../../../libcudart.so.12
========= Host Frame:at::native::structured_adaptive_max_pool2d_out_cuda::impl(at::Tensor const&, c10::ArrayRef<long>, at::Tensor const&, at::Tensor const&)::{lambda()#1}::operator()() const [0x160853b]
========= in /home/jwnhy/miniconda3/envs/gpu-torch/lib/python3.12/site-packages/torch/lib/libtorch_cuda.so
========= Host Frame:at::native::structured_adaptive_max_pool2d_out_cuda::impl(at::Tensor const&, c10::ArrayRef<long>, at::Tensor const&, at::Tensor const&) [0x160a8f4]
========= in /home/jwnhy/miniconda3/envs/gpu-torch/lib/python3.12/site-packages/torch/lib/libtorch_cuda.so
========= Host Frame:at::(anonymous namespace)::wrapper_CUDA_adaptive_max_pool2d(at::Tensor const&, c10::ArrayRef<long>) [0x3606f1e]
========= in /home/jwnhy/miniconda3/envs/gpu-torch/lib/python3.12/site-packages/torch/lib/libtorch_cuda.so
========= Host Frame:c10::impl::wrap_kernel_functor_unboxed_<c10::impl::detail::WrapFunctionIntoFunctor_<c10::CompileTimeFunctionPointer<std::tuple<at::Tensor, at::Tensor> (at::Tensor const&, c10::ArrayRef<long>), &at::(anonymous namespace)::wrapper_CUDA_adaptive_max_p
ool2d>, std::tuple<at::Tensor, at::Tensor>, c10::guts::typelist::typelist<at::Tensor const&, c10::ArrayRef<long> > >, std::tuple<at::Tensor, at::Tensor> (at::Tensor const&, c10::ArrayRef<long>)>::call(c10::OperatorKernel*, c10::DispatchKeySet, at::Tensor const&, c10::Array
Ref<long>) [0x3606fe2]
========= in /home/jwnhy/miniconda3/envs/gpu-torch/lib/python3.12/site-packages/torch/lib/libtorch_cuda.so
========= Host Frame:at::_ops::adaptive_max_pool2d::redispatch(c10::DispatchKeySet, at::Tensor const&, c10::ArrayRef<long>) [0x27406bd]
========= in /home/jwnhy/miniconda3/envs/gpu-torch/lib/python3.12/site-packages/torch/lib/libtorch_cpu.so
========= Host Frame:torch::autograd::VariableType::(anonymous namespace)::adaptive_max_pool2d(c10::DispatchKeySet, at::Tensor const&, c10::ArrayRef<long>) [0x49ee7d4]
========= in /home/jwnhy/miniconda3/envs/gpu-torch/lib/python3.12/site-packages/torch/lib/libtorch_cpu.so
========= Host Frame:c10::impl::wrap_kernel_functor_unboxed_<c10::impl::detail::WrapFunctionIntoFunctor_<c10::CompileTimeFunctionPointer<std::tuple<at::Tensor, at::Tensor> (c10::DispatchKeySet, at::Tensor const&, c10::ArrayRef<long>), &torch::autograd::VariableType::(a
nonymous namespace)::adaptive_max_pool2d>, std::tuple<at::Tensor, at::Tensor>, c10::guts::typelist::typelist<c10::DispatchKeySet, at::Tensor const&, c10::ArrayRef<long> > >, std::tuple<at::Tensor, at::Tensor> (c10::DispatchKeySet, at::Tensor const&, c10::ArrayRef<long>)>::
call(c10::OperatorKernel*, c10::DispatchKeySet, at::Tensor const&, c10::ArrayRef<long>) [0x49eef95]
========= in /home/jwnhy/miniconda3/envs/gpu-torch/lib/python3.12/site-packages/torch/lib/libtorch_cpu.so
========= Host Frame:at::_ops::adaptive_max_pool2d::call(at::Tensor const&, c10::ArrayRef<long>) [0x27cb9db]
========= in /home/jwnhy/miniconda3/envs/gpu-torch/lib/python3.12/site-packages/torch/lib/libtorch_cpu.so
========= Host Frame:torch::autograd::THPVariable_adaptive_max_pool2d(_object*, _object*, _object*) [0x765397]
========= in /home/jwnhy/miniconda3/envs/gpu-torch/lib/python3.12/site-packages/torch/lib/libtorch_python.so
========= Host Frame:cfunction_call in /usr/local/src/conda/python-3.12.7/Objects/methodobject.c:537 [0x149d53]
========= in /home/jwnhy/miniconda3/envs/gpu-torch/bin/python3
========= Host Frame:_PyObject_MakeTpCall in /usr/local/src/conda/python-3.12.7/Objects/call.c:240 [0x11af9a]
========= in /home/jwnhy/miniconda3/envs/gpu-torch/bin/python3
========= Host Frame:_PyEval_EvalFrameDefault in Python/bytecodes.c:2715 [0x125902]
========= in /home/jwnhy/miniconda3/envs/gpu-torch/bin/python3
========= Host Frame:method_vectorcall in /usr/local/src/conda/python-3.12.7/Objects/classobject.c:91 [0x175716]
========= in /home/jwnhy/miniconda3/envs/gpu-torch/bin/python3
========= Host Frame:_PyEval_EvalFrameDefault in Python/bytecodes.c:3263 [0x12acac]
========= in /home/jwnhy/miniconda3/envs/gpu-torch/bin/python3
========= Host Frame:method_vectorcall in /usr/local/src/conda/python-3.12.7/Objects/classobject.c:91 [0x175716]
========= in /home/jwnhy/miniconda3/envs/gpu-torch/bin/python3
========= Host Frame:_PyEval_EvalFrameDefault in Python/bytecodes.c:3263 [0x12acac]
========= in /home/jwnhy/miniconda3/envs/gpu-torch/bin/python3
========= Host Frame:_PyObject_FastCallDictTstate in /usr/local/src/conda/python-3.12.7/Objects/call.c:133 [0x11db06]
========= in /home/jwnhy/miniconda3/envs/gpu-torch/bin/python3
========= Host Frame:_PyObject_Call_Prepend in /usr/local/src/conda/python-3.12.7/Objects/call.c:508 [0x157e55]
========= in /home/jwnhy/miniconda3/envs/gpu-torch/bin/python3
========= Host Frame:slot_tp_call in /usr/local/src/conda/python-3.12.7/Objects/typeobject.c:8782 [0x22f065]
========= in /home/jwnhy/miniconda3/envs/gpu-torch/bin/python3
========= Host Frame:_PyObject_MakeTpCall in /usr/local/src/conda/python-3.12.7/Objects/call.c:240 [0x11af9a]
========= in /home/jwnhy/miniconda3/envs/gpu-torch/bin/python3
========= Host Frame:_PyEval_EvalFrameDefault in Python/bytecodes.c:2715 [0x125902]
========= in /home/jwnhy/miniconda3/envs/gpu-torch/bin/python3
========= Host Frame:PyEval_EvalCode in /usr/local/src/conda/python-3.12.7/Python/ceval.c:578 [0x1e3c6d]
========= in /home/jwnhy/miniconda3/envs/gpu-torch/bin/python3
========= Host Frame:run_eval_code_obj in /usr/local/src/conda/python-3.12.7/Python/pythonrun.c:1722 [0x20a0b6]
========= in /home/jwnhy/miniconda3/envs/gpu-torch/bin/python3
========= Host Frame:run_mod in /usr/local/src/conda/python-3.12.7/Python/pythonrun.c:1743 [0x2056d6]
========= in /home/jwnhy/miniconda3/envs/gpu-torch/bin/python3
========= Host Frame:pyrun_file in /usr/local/src/conda/python-3.12.7/Python/pythonrun.c:1643 [0x21d601]
========= in /home/jwnhy/miniconda3/envs/gpu-torch/bin/python3
========= Host Frame:_PyRun_SimpleFileObject in /usr/local/src/conda/python-3.12.7/Python/pythonrun.c:433 [0x21cf3f]
========= in /home/jwnhy/miniconda3/envs/gpu-torch/bin/python3
========= Host Frame:_PyRun_AnyFileObject in /usr/local/src/conda/python-3.12.7/Python/pythonrun.c:78 [0x21cd32]
========= in /home/jwnhy/miniconda3/envs/gpu-torch/bin/python3
========= Host Frame:Py_RunMain in /usr/local/src/conda/python-3.12.7/Modules/main.c:713 [0x215dc2]
[
```
### Versions
```
Collecting environment information...
PyTorch version: 2.5.1
Is debug build: False
CUDA used to build PyTorch: 12.4
ROCM used to build PyTorch: N/A
OS: Ubuntu 24.04.1 LTS (x86_64)
GCC version: (Ubuntu 13.3.0-6ubuntu2~24.04) 13.3.0
Clang version: Could not collect
CMake version: Could not collect
Libc version: glibc-2.39
Python version: 3.12.7 | packaged by Anaconda, Inc. | (main, Oct 4 2024, 13:27:36) [GCC 11.2.0] (64-bit runtime)
Python platform: Linux-6.11.0-1007-oem-x86_64-with-glibc2.39
Is CUDA available: True
CUDA runtime version: 12.6.85
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration: GPU 0: NVIDIA H100 PCIe
Nvidia driver version: 560.35.05
cuDNN version: Could not collect
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 46 bits physical, 57 bits virtual
Byte Order: Little Endian
CPU(s): 48
On-line CPU(s) list: 0-47
Vendor ID: GenuineIntel
Model name: INTEL(R) XEON(R) SILVER 4510
CPU family: 6
Model: 143
Thread(s) per core: 2
Core(s) per socket: 12
Socket(s): 2
Stepping: 8
CPU(s) scaling MHz: 37%
CPU max MHz: 4100.0000
CPU min MHz: 800.0000
BogoMIPS: 4800.00
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc art arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc cpuid aperfmperf tsc_known_freq pni pclmulqdq dtes64 monitor ds_cpl vmx smx est tm2 ssse3 sdbg fma cx16 xtpr pdcm pcid dca sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm 3dnowprefetch cpuid_fault epb cat_l3 cat_l2 cdp_l3 cdp_l2 ssbd mba ibrs ibpb stibp ibrs_enhanced tpr_shadow flexpriority ept vpid ept_ad fsgsbase tsc_adjust sgx bmi1 avx2 smep bmi2 erms invpcid cqm rdt_a avx512f avx512dq rdseed adx smap avx512ifma clflushopt clwb intel_pt avx512cd sha_ni avx512bw avx512vl xsaveopt xsavec xgetbv1 xsaves cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local split_lock_detect user_shstk avx_vnni avx512_bf16 wbnoinvd dtherm ida arat pln pts vnmi avx512vbmi umip pku ospke waitpkg avx512_vbmi2 gfni vaes vpclmulqdq avx512_vnni avx512_bitalg tme avx512_vpopcntdq la57 rdpid bus_lock_detect cldemote movdiri movdir64b enqcmd sgx_lc fsrm md_clear serialize tsxldtrk pconfig arch_lbr ibt amx_bf16 avx512_fp16 amx_tile amx_int8 flush_l1d arch_capabilities
Virtualization: VT-x
L1d cache: 1.1 MiB (24 instances)
L1i cache: 768 KiB (24 instances)
L2 cache: 48 MiB (24 instances)
L3 cache: 60 MiB (2 instances)
NUMA node(s): 2
NUMA node0 CPU(s): 0-11,24-35
NUMA node1 CPU(s): 12-23,36-47
Vulnerability Gather data sampling: Not affected
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Not affected
Vulnerability Reg file data sampling: Not affected
Vulnerability Retbleed: Not affected
Vulnerability Spec rstack overflow: Not affected
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Enhanced / Automatic IBRS; IBPB conditional; RSB filling; PBRSB-eIBRS SW sequence; BHI BHI_DIS_S
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Versions of relevant libraries:
[pip3] numpy==2.1.3
[pip3] nvidia-cublas-cu12==12.6.4.1
[pip3] nvidia-cudnn-cu12==9.5.1.17
[pip3] torch==2.5.1
[pip3] torchaudio==2.5.1
[pip3] torchvision==0.20.1
[pip3] triton==3.1.0
[conda] blas 1.0 mkl
[conda] cuda-cudart 12.4.127 0 nvidia
[conda] cuda-cupti 12.4.127 0 nvidia
[conda] cuda-libraries 12.4.1 0 nvidia
[conda] cuda-nvrtc 12.4.127 0 nvidia
[conda] cuda-nvtx 12.4.127 0 nvidia
[conda] cuda-opencl 12.6.77 0 nvidia
[conda] cuda-runtime 12.4.1 0 nvidia
[conda] ffmpeg 4.3 hf484d3e_0 pytorch
[conda] libcublas 12.4.5.8 0 nvidia
[conda] libcufft 11.2.1.3 0 nvidia
[conda] libcurand 10.3.7.77 0 nvidia
[conda] libcusolver 11.6.1.9 0 nvidia
[conda] libcusparse 12.3.1.170 0 nvidia
[conda] libjpeg-turbo 2.0.0 h9bf148f_0 pytorch
[conda] libnvjitlink 12.4.127 0 nvidia
[conda] mkl 2023.1.0 h213fc3f_46344
[conda] mkl-service 2.4.0 py312h5eee18b_1
[conda] mkl_fft 1.3.11 py312h5eee18b_0
[conda] mkl_random 1.2.8 py312h526ad5a_0
[conda] numpy 2.1.3 py312hc5e2394_0
[conda] numpy-base 2.1.3 py312h0da6c21_0
[conda] nvidia-cublas-cu12 12.6.4.1 pypi_0 pypi
[conda] nvidia-cudnn-cu12 9.5.1.17 pypi_0 pypi
[conda] pytorch 2.5.1 py3.12_cuda12.4_cudnn9.1.0_0 pytorch
[conda] pytorch-cuda 12.4 hc786d27_7 pytorch
[conda] pytorch-mutex 1.0 cuda pytorch
[conda] torchaudio 2.5.1 py312_cu124 pytorch
[conda] torchtriton 3.1.0 py312 pytorch
[conda] torchvision 0.20.1 py312_cu124 pytorch
```
cc @albanD @mruberry @jbschlosser @walterddr @mikaylagawarecki @ptrblck @msaroufim @eqy
| true
|
2,805,782,588
|
Update TorchBench commit to main
|
huydhn
|
closed
|
[
"topic: not user facing",
"ciflow/inductor"
] | 1
|
CONTRIBUTOR
|
WIP, no need to review yet
| true
|
2,805,773,579
|
[Dynamo]while_loop raise an exception
|
zhejiangxiaomai
|
open
|
[
"triaged",
"oncall: pt2",
"module: higher order operators",
"module: pt2-dispatcher"
] | 3
|
NONE
|
### 🐛 Describe the bug
while_loop raises an exception.
The reason is that the input tensor has been sliced:
```
tensor strided info: shape: torch.Size([192]), original_tshape: [64], slice:slice(None, None, 3)
```
This creates a tensor with stride 3, but once body_fn executes `tensor + 1` the stride is reset to 1, and this causes the error.
If a tensor is strided due to a slice before being passed to while_loop, PyTorch doesn't accept it.
mini reproducer:
```
import torch

class OpWrapperModule(torch.nn.Module):
    def __init__(self):
        super().__init__()

    def forward(self, ifm, op_inputs_dict):
        result = torch.while_loop(**op_inputs_dict)
        return result

torch.manual_seed(7616)
ifm_t = torch.randn([64])
ifm = ifm_t[slice(None, None, 3)]
iterations = torch.tensor(50)

def cond_fn(inputs, iterations):
    return iterations > 0

def body_fn(inputs, iterations):
    return [inputs + 1, iterations - 1]

params = {
    "cond_fn": cond_fn,
    "body_fn": body_fn,
    "carried_inputs": (ifm, iterations),
}

model = OpWrapperModule()
result = model(ifm, params)
```
ERROR log and trace:
Traceback (most recent call last):
File "/usr/local/lib/python3.10/dist-packages/torch/_dynamo/variables/higher_order_ops.py", line 55, in graph_break_as_hard_error
return fn(*args, **kwargs)
File "/usr/local/lib/python3.10/dist-packages/torch/_dynamo/variables/higher_order_ops.py", line 1115, in call_function
unimplemented(
File "/usr/local/lib/python3.10/dist-packages/torch/_dynamo/exc.py", line 317, in unimplemented
raise Unsupported(msg, case_name=case_name)
torch._dynamo.exc.Unsupported: Expected carried_inputs and body outputs return tensors with same metadata but find:
pair[0] differ in 'stride: (3,) vs (1,)', where lhs is TensorMetadata(shape=torch.Size([22]), dtype=torch.float32, requires_grad=False, stride=(3,), memory_format=None, is_quantized=False, qparams={}) and rhs is TensorMetadata(shape=torch.Size([22]), dtype=torch.float32, requires_grad=False, stride=(1,), memory_format=None, is_quantized=False, qparams={})
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "/home/zhenzhao/qnpu/dynamo/src/pytorch-training-tests/tests/torch_feature_val/single_op/repro_while_loop.py", line 28, in <module>
result = model(ifm, params)
File "/usr/local/lib/python3.10/dist-packages/torch/nn/modules/module.py", line 1742, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
File "/usr/local/lib/python3.10/dist-packages/torch/nn/modules/module.py", line 1753, in _call_impl
return forward_call(*args, **kwargs)
File "/home/zhenzhao/qnpu/dynamo/src/pytorch-training-tests/tests/torch_feature_val/single_op/repro_while_loop.py", line 8, in forward
result = torch.while_loop(**op_inputs_dict)
File "/usr/local/lib/python3.10/dist-packages/torch/_higher_order_ops/while_loop.py", line 165, in while_loop
return torch.compile(
File "/usr/local/lib/python3.10/dist-packages/torch/_dynamo/eval_frame.py", line 574, in _fn
return fn(*args, **kwargs)
File "/usr/local/lib/python3.10/dist-packages/torch/_dynamo/convert_frame.py", line 1380, in __call__
return self._torchdynamo_orig_callable(
File "/usr/local/lib/python3.10/dist-packages/torch/_dynamo/convert_frame.py", line 547, in __call__
return _compile(
File "/usr/local/lib/python3.10/dist-packages/torch/_dynamo/convert_frame.py", line 986, in _compile
guarded_code = compile_inner(code, one_graph, hooks, transform)
File "/usr/local/lib/python3.10/dist-packages/torch/_dynamo/convert_frame.py", line 715, in compile_inner
return _compile_inner(code, one_graph, hooks, transform)
File "/usr/local/lib/python3.10/dist-packages/torch/_utils_internal.py", line 95, in wrapper_function
return function(*args, **kwargs)
File "/usr/local/lib/python3.10/dist-packages/torch/_dynamo/convert_frame.py", line 750, in _compile_inner
out_code = transform_code_object(code, transform)
File "/usr/local/lib/python3.10/dist-packages/torch/_dynamo/bytecode_transformation.py", line 1361, in transform_code_object
transformations(instructions, code_options)
File "/usr/local/lib/python3.10/dist-packages/torch/_dynamo/convert_frame.py", line 231, in _fn
return fn(*args, **kwargs)
File "/usr/local/lib/python3.10/dist-packages/torch/_dynamo/convert_frame.py", line 662, in transform
tracer.run()
File "/usr/local/lib/python3.10/dist-packages/torch/_dynamo/symbolic_convert.py", line 2868, in run
super().run()
File "/usr/local/lib/python3.10/dist-packages/torch/_dynamo/symbolic_convert.py", line 1052, in run
while self.step():
File "/usr/local/lib/python3.10/dist-packages/torch/_dynamo/symbolic_convert.py", line 962, in step
self.dispatch_table[inst.opcode](self, inst)
File "/usr/local/lib/python3.10/dist-packages/torch/_dynamo/symbolic_convert.py", line 659, in wrapper
return inner_fn(self, inst)
File "/usr/local/lib/python3.10/dist-packages/torch/_dynamo/symbolic_convert.py", line 1736, in CALL_FUNCTION_EX
self.call_function(fn, argsvars.items, kwargsvars)
File "/usr/local/lib/python3.10/dist-packages/torch/_dynamo/symbolic_convert.py", line 897, in call_function
self.push(fn.call_function(self, args, kwargs)) # type: ignore[arg-type]
File "/usr/local/lib/python3.10/dist-packages/torch/_dynamo/variables/higher_order_ops.py", line 58, in graph_break_as_hard_error
raise UncapturedHigherOrderOpError(reason + msg) from e
torch._dynamo.exc.UncapturedHigherOrderOpError: while_loop doesn't work unless it is captured completely with torch.compile. Scroll up to find out what causes the graph break.
from user code:
File "/usr/local/lib/python3.10/dist-packages/torch/_higher_order_ops/while_loop.py", line 156, in _while_loop_op_wrapper
return while_loop_op(*args, **kwargs)
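Continuing from the reproducer above, a minimal sketch (unverified against this exact build) suggesting that making the carried input contiguous before the call avoids the stride mismatch, at the cost of a copy:
```python
ifm_contig = ifm.contiguous()          # stride becomes (1,), matching what body_fn produces
params["carried_inputs"] = (ifm_contig, iterations)
result = model(ifm_contig, params)
```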
### Versions
PyTorch version: 2.6.0a0+git30ac7fd
Is debug build: False
CUDA used to build PyTorch: None
ROCM used to build PyTorch: N/A
OS: Ubuntu 22.04 LTS (x86_64)
GCC version: (Ubuntu 11.4.0-1ubuntu1~22.04) 11.4.0
Clang version: 14.0.5 (ssh://[git@github.com](mailto:git@github.com)/habana-internal/tpc_llvm10 150d2d7c6a8ff8abf0d8ce194d3fac3986b078e6)
CMake version: version 3.28.1
Libc version: glibc-2.35
Python version: 3.10.12 (main, Nov 6 2024, 20:22:13) [GCC 11.4.0] (64-bit runtime)
Python platform: Linux-5.15.0-127-generic-x86_64-with-glibc2.35
Is CUDA available: False
CUDA runtime version: No CUDA
CUDA_MODULE_LOADING set to: N/A
GPU models and configuration: No CUDA
Nvidia driver version: No CUDA
cuDNN version: No CUDA
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 43 bits physical, 48 bits virtual
Byte Order: Little Endian
CPU(s): 12
On-line CPU(s) list: 0-11
Vendor ID: GenuineIntel
Model name: Intel(R) Xeon(R) Gold 6132 CPU @ 2.60GHz
CPU family: 6
Model: 85
Thread(s) per core: 1
Core(s) per socket: 6
Socket(s): 2
Stepping: 0
BogoMIPS: 5187.81
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ss ht syscall nx pdpe1gb rdtscp lm constant_tsc arch_perfmon nopl xtopology tsc_reliable nonstop_tsc cpuid tsc_known_freq pni pclmulqdq vmx ssse3 fma cx16 pcid sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand hypervisor lahf_lm abm 3dnowprefetch invpcid_single pti ssbd ibrs ibpb stibp tpr_shadow vnmi ept vpid ept_ad fsgsbase tsc_adjust bmi1 avx2 smep bmi2 invpcid avx512f avx512dq rdseed adx smap clflushopt clwb avx512cd avx512bw avx512vl xsaveopt xsavec xsaves arat pku ospke md_clear flush_l1d arch_capabilities
Virtualization: VT-x
Hypervisor vendor: VMware
Virtualization type: full
L1d cache: 384 KiB (12 instances)
L1i cache: 384 KiB (12 instances)
L2 cache: 12 MiB (12 instances)
L3 cache: 38.5 MiB (2 instances)
NUMA node(s): 1
NUMA node0 CPU(s): 0-11
Vulnerability Gather data sampling: Unknown: Dependent on hypervisor status
Vulnerability Itlb multihit: KVM: Mitigation: VMX disabled
Vulnerability L1tf: Mitigation; PTE Inversion; VMX flush not necessary, SMT disabled
Vulnerability Mds: Mitigation; Clear CPU buffers; SMT Host state unknown
Vulnerability Meltdown: Mitigation; PTI
Vulnerability Mmio stale data: Mitigation; Clear CPU buffers; SMT Host state unknown
Vulnerability Reg file data sampling: Not affected
Vulnerability Retbleed: Mitigation; IBRS
Vulnerability Spec rstack overflow: Not affected
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl and seccomp
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; IBRS; IBPB conditional; STIBP disabled; RSB filling; PBRSB-eIBRS Not affected; BHI SW loop, KVM SW loop
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
cc @chauhang @penguinwu @zou3519 @ydwu4 @bdhirsh @yf225
| true
|
2,805,753,868
|
Replace is_same with is_same_v for concise syntax
|
zeshengzong
|
closed
|
[
"module: cpu",
"open source",
"Merged",
"Reverted",
"ciflow/trunk",
"release notes: sparse",
"ci-no-td"
] | 28
|
CONTRIBUTOR
|
Replace `std::is_same<T, U>::value` with `std::is_same_v` for concise and consistent syntax with other code.
cc @jgong5 @mingfeima @XiaobingSuper @sanchitintel @ashokei @jingxu10
| true
|
2,805,747,931
|
Fix incorrect type comparison
|
aorenste
|
closed
|
[
"fb-exported",
"Merged",
"ciflow/trunk",
"release notes: fx",
"fx"
] | 6
|
CONTRIBUTOR
|
Summary: This change was incorrectly made as part of #145166
Differential Revision: D68536221
cc @ezyang @SherlockNoMad @EikanWang @jgong5 @wenzhe-nrv
| true
|
2,805,701,539
|
Record inputs at time of tracing, constrain to them for triton fn
|
eellison
|
closed
|
[
"Merged",
"Reverted",
"ciflow/trunk",
"release notes: fx",
"fx",
"module: inductor",
"ciflow/inductor",
"ci-no-td"
] | 17
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #145448
* #145953
Record input fake tensors at the time of tracing and store them in the node meta. Inductor passes can change strides, so it is safer to record the strides of the inputs at tracing time. See https://github.com/pytorch/pytorch/issues/137979 for more context.
We can also extend this to custom ops, and user-visible outputs. If this ends up being compilation time sensitive we can just record strides (and maybe storage offset, per @zou3519) instead of the complete fake tensor.
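As a rough illustration of the idea (not the PR's actual implementation): fake tensors recorded at trace time already live in `node.meta["val"]`, and their strides can be read off there.
```python
import torch
from torch.fx.experimental.proxy_tensor import make_fx

def f(x):
    return x.t().cos()

gm = make_fx(f, tracing_mode="fake")(torch.randn(4, 8))
for node in gm.graph.nodes:
    val = node.meta.get("val")
    if isinstance(val, torch.Tensor):
        # strides as seen at tracing time, before any Inductor pass runs
        print(node.name, tuple(val.stride()))
```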
cc @ezyang @SherlockNoMad @EikanWang @jgong5 @wenzhe-nrv @voznesenskym @penguinwu @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @ColinPeppler @amjames @desertfire @chauhang @aakhundov
| true
|
2,805,696,593
|
[dynamo][benchmarks] update compile time benchmarks to dump compile times to stdout and csv
|
xmfan
|
closed
|
[
"Merged",
"ciflow/trunk",
"topic: not user facing",
"module: dynamo",
"ciflow/inductor"
] | 3
|
MEMBER
|
```python
# inductor.csv
dev,name,batch_size,accuracy,calls_captured,unique_graphs,graph_breaks,unique_graph_breaks,autograd_captures,autograd_compiles,cudagraph_skips,compilation_latency
cuda,cait_m36_384,8,pass,2510,1,0,0,0,0,0,87.705186
```
```python
loading model: 0it [01:27, ?it/s]
cuda eval cait_m36_384
Compilation time (from dynamo_timed): 87.705186276 # <----------------
pass
TIMING: _recursive_pre_grad_passes:0.11023 pad_mm_benchmark:0.50341 _recursive_joint_graph_passes:3.88557 _recursive_post_grad_passes:6.71182 async_compile.wait:4.16914 code_gen:17.57586 inductor_compile:42.55769 backend_compile:72.47122 entire_frame_compile:87.70519 gc:0.00112 total_wall_time:87.70519
STATS: call_* op count: 2510 | FakeTensorMode.__torch_dispatch__:101743 | FakeTensor.__torch_dispatch__:12959 | ProxyTorchDispatchMode.__torch_dispatch__:41079
Dynamo produced 1 graphs covering 2510 ops with 0 graph breaks (0 unique)
```
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #145447
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @chauhang @amjames
| true
|
2,805,694,771
|
[dynamo] fix graph break on random.random
|
williamwen42
|
open
|
[
"triaged",
"module: regression",
"oncall: pt2",
"module: dynamo"
] | 2
|
MEMBER
|
We used to not graph break on `random.random`, but we do now.
```python
import random
import torch

@torch.compile(backend="eager", fullgraph=True)
def fn(x):
    return x + random.random()

fn(torch.ones(5, 5))
```
This does not happen to other supported random functions - `randint`, `randrange`, `uniform`, listed in https://github.com/pytorch/pytorch/blob/d95a6babcc581ff06d1b914ee9f92c81b2e850e2/torch/_dynamo/variables/user_defined.py#L743.
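For contrast, a minimal sketch using one of the functions listed above (assuming that support list is accurate) compiles without a graph break:
```python
import random
import torch

@torch.compile(backend="eager", fullgraph=True)
def fn(x):
    return x + random.uniform(0, 1)

fn(torch.ones(5, 5))
```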
cc @chauhang @penguinwu @voznesenskym @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @amjames
| true
|
2,805,693,005
|
[dynamo] `random.Random` gives wrong result on second call
|
williamwen42
|
open
|
[
"triaged",
"oncall: pt2",
"module: dynamo",
"dynamo-triage-jan2025"
] | 0
|
MEMBER
|
`random.Random` calls are not correct on the second call onward. This is because Dynamo generates RNG on `random.Random` objects by creating a new object and applying the RNG state at the time of tracing, which is not correct in general.
Example failing test:
```python
def test_random_object_repeat(self):
    def fn(x, rng):
        return x + rng.randint(1, 100)

    inp = torch.randn(3, 3)
    opt_fn = torch.compile(fn, backend="eager", fullgraph=True)
    rng1 = random.Random(0)
    rng2 = random.Random(0)
    self.assertEqual(fn(inp, rng1), opt_fn(inp, rng2))
    with torch.compiler.set_stance("fail_on_recompile"):
        self.assertEqual(fn(inp, rng1), opt_fn(inp, rng2))
        self.assertEqual(fn(inp, rng1), opt_fn(inp, rng2))
    self.assertEqual(rng1.getstate(), rng2.getstate())
```
cc @chauhang @penguinwu @voznesenskym @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @amjames
| true
|
2,805,691,790
|
[Docs] Add clarification for target types in CrossEntropyLoss doc
|
spzala
|
closed
|
[
"module: nn",
"triaged",
"open source",
"Merged",
"ciflow/trunk",
"topic: not user facing"
] | 6
|
CONTRIBUTOR
|
The CrossEntropyLoss function requires that targets given as class indices be provided as a long dtype and targets given as class probabilities be provided as a float dtype.
The CrossEntropyLoss function distinguishes the two scenarios (indices and probabilities) by comparing shapes: when the input and target shapes are the same, the target is treated as class probabilities; otherwise it is treated as class indices, as already covered in the doc. The related code is here:
https://github.com/pytorch/pytorch/blob/main/aten/src/ATen/native/LossNLL.cpp#L624
I think the current documentation is great, but it seems it can confuse users about the types, as reported in the issues, so this PR adds a bit more clarification.
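A short illustration of the two target conventions being clarified (using the public CrossEntropyLoss API):
```python
import torch

loss = torch.nn.CrossEntropyLoss()
input = torch.randn(3, 5, requires_grad=True)

# Class indices: target shape (N,), dtype long
target_idx = torch.randint(5, (3,), dtype=torch.long)
loss(input, target_idx)

# Class probabilities: target shape matches input, dtype float
target_prob = torch.randn(3, 5).softmax(dim=1)
loss(input, target_prob)
```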
Fixes #137188
cc @albanD @mruberry @jbschlosser @walterddr @mikaylagawarecki
| true
|
2,805,684,813
|
[draft_export] add LOC for data-dep error logging
|
pianpwk
|
closed
|
[
"fb-exported",
"Merged",
"ciflow/trunk",
"release notes: export"
] | 8
|
CONTRIBUTOR
|
Summary:
Maybe this is too much info, but it's difficult to go through old draft export reports where the stack trace is out of sync with the current codebase. Data-dependent errors now look like:
```
2. Data dependent error.
When exporting, we were unable to evaluate the value of `u306`.
This occurred at the following stacktrace:
File /data/users/pianpwk/fbsource/buck-out/v2/gen/fbcode/78204cab86e8a0fb/sigmoid/inference/ts_migration/__pt2i_readiness_main__/pt2i_readiness_main#link-tree/caffe2/torch/fb/training_toolkit/common/proxy_module_thrift/embedding_bag_proxy.py, lineno 109, in _forward_impl:
`if offsets[-1] > len(input):`
As a result, it was specialized to evaluate to `261`, and asserts were inserted into the graph.
Please add `torch._check(...)` to the original code to assert this data-dependent assumption.
Please refer to https://docs.google.com/document/d/1kZ_BbB3JnoLbUZleDT6635dHs88ZVYId8jT-yTFgf3A/edit#heading=h.boi2xurpqa0o for more details.
```
This would be even more helpful for reports on torch-packaged models, but that requires some more work on PT2I-specific stack trace processing
Test Plan: .
Differential Revision: D68534017
| true
|
2,805,671,221
|
[ONNX] Remove LegacyDynamoStrategy
|
justinchuby
|
closed
|
[
"module: onnx",
"open source",
"Merged",
"ciflow/trunk",
"release notes: onnx",
"topic: improvements"
] | 6
|
COLLABORATOR
|
It's legacy, so remove it. This shouldn't affect anything and will facilitate cleaning up our legacy code.
| true
|
2,805,664,674
|
seg fault in aot_inductor_package on arm GPU with 2.6.0 RC
|
tinglvv
|
closed
|
[
"high priority",
"module: crash",
"triaged",
"module: regression",
"oncall: pt2",
"oncall: cpu inductor"
] | 16
|
COLLABORATOR
|
### 🐛 Describe the bug
When running internal tests for the 2.6.0 RC ARM wheels (https://download.pytorch.org/whl/test/torch/) on Grace Hopper with 1 GPU, we get seg faults/bus errors that alternate on the test below.
The errors reproduce on both the CUDA and CPU wheels.
```python test/inductor/test_aot_inductor_package.py -k test_add -k cpu```
Error:
```
Running only test/inductor/test_aot_inductor_package.py::TestAOTInductorPackage_cpu::test_add
Running 1 items in this shard
Fatal Python error: Segmentation fault
```
Backtrace:
```
(gdb) bt
#0 0x0000ef67c4019c54 in std::_Sp_counted_base<(__gnu_cxx::_Lock_policy)2>::_M_release() ()
from /tmp/KFSXvp/data/aotinductor/model/cnxl5jak4cmycxqpoiy3wdbyygqqgbph4tl5wjzolu24zpiqo25v.so
#1 0x0000ef67c40312f4 in torch::aot_inductor::AOTInductorModelContainer::AOTInductorModelContainer(unsigned long, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&, std::optional<std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > > const&) ()
from /tmp/KFSXvp/data/aotinductor/model/cnxl5jak4cmycxqpoiy3wdbyygqqgbph4tl5wjzolu24zpiqo25v.so
#2 0x0000ef67c4017744 in AOTInductorModelContainerCreateWithDevice ()
from /tmp/KFSXvp/data/aotinductor/model/cnxl5jak4cmycxqpoiy3wdbyygqqgbph4tl5wjzolu24zpiqo25v.so
#3 0x0000ef6b33cbc464 in torch::inductor::AOTIModelContainerRunner::AOTIModelContainerRunner(std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&, unsigned long, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&) ()
from /usr/local/lib/python3.12/dist-packages/torch/lib/libtorch_cpu.so
#4 0x0000ef6b33cbcf80 in torch::inductor::AOTIModelContainerRunnerCpu::AOTIModelContainerRunnerCpu(std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&, unsigned long) () from /usr/local/lib/python3.12/dist-packages/torch/lib/libtorch_cpu.so
#5 0x0000ef6b33cbd038 in torch::inductor::(anonymous namespace)::create_aoti_runner_cpu(std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&, unsigned long, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&) ()
from /usr/local/lib/python3.12/dist-packages/torch/lib/libtorch_cpu.so
```
Output from our nightly CI with cuda-gdb
```
Program terminated with signal SIGBUS, Bus error.
#0 0x0000eca2a34b7628 in ?? () from /usr/lib/aarch64-linux-gnu/libc.so.6
[Current thread is 1 (Thread 0xeca2a37b4580 (LWP 706))]
(cuda-gdb) where
#0 0x0000eca2a34b7628 in ?? () from /usr/lib/aarch64-linux-gnu/libc.so.6
#1 0x0000eca2a346cb3c in raise () from /usr/lib/aarch64-linux-gnu/libc.so.6
#2 <signal handler called>
#3 0x0000ec9e30089c54 in std::_Sp_counted_base<(__gnu_cxx::_Lock_policy)2>::_M_release() () from /tmp/6cr6fe/data/aotinductor/model/csovzderskxoxbxohbxsgppmjvvjbbnermfydfa4ubnngqepcq2c.so
#4 0x0000ec9e300a12f4 in torch::aot_inductor::AOTInductorModelContainer::AOTInductorModelContainer(unsigned long, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&, std::optional<std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > > const&) () from /tmp/6cr6fe/data/aotinductor/model/csovzderskxoxbxohbxsgppmjvvjbbnermfydfa4ubnngqepcq2c.so
#5 0x0000ec9e30087744 in AOTInductorModelContainerCreateWithDevice () from /tmp/6cr6fe/data/aotinductor/model/csovzderskxoxbxohbxsgppmjvvjbbnermfydfa4ubnngqepcq2c.so
#6 0x0000eca292c1c7e8 in torch::inductor::AOTIModelContainerRunner::AOTIModelContainerRunner(std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&, unsigned long, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&) ()
from /usr/local/lib/python3.12/dist-packages/torch/lib/libtorch_cpu.so
#7 0x0000eca292c1d2b8 in torch::inductor::AOTIModelContainerRunnerCpu::AOTIModelContainerRunnerCpu(std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&, unsigned long) ()
from /usr/local/lib/python3.12/dist-packages/torch/lib/libtorch_cpu.so
#8 0x0000eca292c1d368 in torch::inductor::(anonymous namespace)::create_aoti_runner_cpu(std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&, unsigned long, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&) ()
from /usr/local/lib/python3.12/dist-packages/torch/lib/libtorch_cpu.so
#9 0x0000eca292c19f2c in torch::inductor::AOTIModelPackageLoader::AOTIModelPackageLoader(std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&) () from /usr/local/lib/python3.12/dist-packages/torch/lib/libtorch_cpu.so
#10 0x0000eca2976da644 in pybind11::cpp_function::initialize<pybind11::detail::initimpl::constructor<std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&>::execute<pybind11::class_<torch::inductor::AOTIModelPackageLoader>, , 0>(pybind11::class_<torch::inductor::AOTIModelPackageLoader>&)::{lambda(pybind11::detail::value_and_holder&, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&)#1}, void, pybind11::detail::value_and_holder&, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&, pybind11::name, pybind11::is_method, pybind11::sibling, pybind11::detail::is_new_style_constructor>(pybind11::class_<torch::inductor::AOTIModelPackageLoader>&&, void (*)(pybind11::detail::value_and_holder&, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&), pybind11::name const&, pybind11::is_method const&, pybind11::sibling const&, pybind11::detail::is_new_style_constructor const&)::{lambda(pybind11::detail::function_call&)#3}::_FUN(pybind11::detail::function_call&) ()
from /usr/local/lib/python3.12/dist-packages/torch/lib/libtorch_python.so
#11 0x0000eca29719e430 in pybind11::cpp_function::dispatcher(_object*, _object*, _object*) () from /usr/local/lib/python3.12/dist-packages/torch/lib/libtorch_python.so
#12 0x00000000005041c4 in ?? ()
```
cc @ezyang @gchanan @zou3519 @kadeng @msaroufim @chauhang @penguinwu @avikchaudhuri @gmagogsfm @zhxchen17 @tugsbayasgalan @angelayi @suo @ydwu4 @desertfire @chenyang78 @atalman @malfet @ptrblck @nWEIdia @xwang233
### Versions
Reproduced in a plain Ubuntu 24.04 container with the 2.6.0 RC wheel
| true
|
2,805,663,942
|
reset dynamo cache for Inductor tests under test_ops_gradients.py
|
shunting314
|
closed
|
[
"ciflow/trunk",
"ciflow/inductor",
"keep-going"
] | 2
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #145440
| true
|
2,805,662,057
|
Add check that envvar configs are boolean
|
Raymo111
|
closed
|
[
"topic: not user facing"
] | 1
|
MEMBER
|
So we don't get unexpected behavior when higher-typed values are passed in.
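A minimal sketch of the kind of check this adds (the helper name and env handling below are hypothetical, not the actual diff):
```python
import os

def env_bool_config(name: str, default: bool) -> bool:
    # Hypothetical helper: reject non-boolean defaults up front so values like
    # 0/1 or strings can't silently change behavior for env-backed configs.
    if not isinstance(default, bool):
        raise TypeError(
            f"env config {name!r} must have a bool default, got {type(default).__name__}"
        )
    raw = os.environ.get(name)
    if raw is None:
        return default
    return raw.strip().lower() not in ("0", "false", "")
```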
| true
|
2,805,649,878
|
Backout PEP585 use of Iterable
|
aorenste
|
closed
|
[
"fb-exported",
"Merged",
"ciflow/trunk",
"release notes: dataloader",
"topic: not user facing"
] | 4
|
CONTRIBUTOR
|
Summary:
Importing Iterable from collections.abc here causes an internal product to fail MRO discovery, creating a collision between Iterable and Generic.
This fixes the failure on D68461304
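For illustration only, a minimal sketch of the pattern being backed out (the class and module are hypothetical, not the real diff, and the internal MRO failure is specific to that product):
```python
from typing import Generic, Iterable, TypeVar  # instead of: from collections.abc import Iterable

T = TypeVar("T")

class _IterableBatch(Iterable[T], Generic[T]):  # hypothetical downstream class
    def __init__(self, items: list[T]) -> None:
        self._items = list(items)

    def __iter__(self):
        return iter(self._items)
```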
Differential Revision: D68531443
| true
|
2,805,644,973
|
Advance past fc window for stft center
|
jackzhxng
|
closed
|
[
"Merged",
"Reverted",
"ciflow/trunk",
"topic: not user facing",
"ciflow/slow",
"ci-no-td"
] | 14
|
CONTRIBUTOR
|
Long overdue follow-up on https://github.com/pytorch/pytorch/pull/73432/files#diff-5f3d4caa0693a716fc46fd7f6339312f1b5f0bf89e3a3ff58e9dc13a9486b17aR719
ONNX stft doesn't support centering, [and all of the existing tests are for center = False](https://github.com/pytorch/pytorch/blob/main/test/onnx/test_pytorch_onnx_onnxruntime.py#L8026). I will open a follow-up issue to address this; this PR is just a nice-to-have.
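For context, a minimal export-friendly call shape (my sketch, not taken from this PR), using `center=False` since ONNX STFT has no centering:
```python
import torch

signal = torch.randn(1, 16000)
window = torch.hann_window(400)
# center=False skips the reflection padding that ONNX's STFT cannot express.
spec = torch.stft(signal, n_fft=400, hop_length=160, win_length=400,
                  window=window, center=False, return_complex=True)
print(spec.shape)  # (batch, n_fft // 2 + 1, n_frames)
```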
PR chain:
- -> [Advance past fc window for stft center #145437](https://github.com/pytorch/pytorch/pull/145437)
- [Add stft option to align window for center = false #145324](https://github.com/pytorch/pytorch/pull/145324)
- [Add istft option to align window for center = false](https://github.com/pytorch/pytorch/pull/145510)
| true
|
2,805,642,948
|
[NVIDIA] Full Family Blackwell Support codegen
|
johnnynunez
|
closed
|
[
"module: cuda",
"open source",
"Merged",
"ciflow/trunk",
"release notes: build"
] | 23
|
CONTRIBUTOR
|
cc @ptrblck @msaroufim @eqy @Fuzzkatt
More references:
https://github.com/NVIDIA/nccl
| true
|
2,805,637,186
|
[dynamo] save/restore system random state more carefully
|
williamwen42
|
closed
|
[
"ciflow/trunk",
"topic: bug fixes",
"module: dynamo",
"ciflow/inductor",
"release notes: dynamo"
] | 10
|
MEMBER
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #145435
Fixes https://github.com/pytorch/pytorch/issues/145329.
We need to save/restore the system `random` state in 2 places (a minimal sketch of the pattern follows the list):
- in `eval_frame.py`, we need to make sure that wrapper code between the user call to a `torch.compile`d function and the actual function call (intercepted by eval_frame.c) doesn't modify random state (https://github.com/pytorch/pytorch/blob/b2c89bc115123aea8e075e882ee121537ec92f89/torch/_dynamo/eval_frame.py#L532)
- in `eval_frame.c`, we need to make sure that guard eval and calling convert_frame don't modify random state (https://github.com/pytorch/pytorch/blob/b2c89bc115123aea8e075e882ee121537ec92f89/torch/csrc/dynamo/eval_frame.c#L575)
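A minimal sketch of the save/restore pattern from the bullets above (an illustration only, not the actual eval_frame code):
```python
import random

def run_preserving_random_state(fn, *args, **kwargs):
    saved = random.getstate()       # snapshot Python's global RNG state
    try:
        return fn(*args, **kwargs)  # wrapper / guard-eval work happens here
    finally:
        random.setstate(saved)      # user code sees an untouched RNG stream
```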
Followup - perhaps more global state from `convert_frame.py:preserve_global_state` can be moved to `eval_frame.py/c`.
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @chauhang @amjames
Differential Revision: [D68532640](https://our.internmc.facebook.com/intern/diff/D68532640)
| true
|
2,805,621,015
|
Crash in wrapper_benchmark.py with --profile enabled
|
mgraczyk
|
closed
|
[
"triaged",
"oncall: pt2",
"module: inductor"
] | 4
|
CONTRIBUTOR
|
### 🐛 Describe the bug
Generated code crashes when compiled output code is run with profiling.
## Steps to reproduce
Run the following with the latest version of pytorch
```
# Generate the output code
$ TORCH_LOGS_FORMAT="" TORCH_LOGS=output_code python 2>/tmp/output.py <<EOF
import torch, sys
print("# torch version = ", torch.__version__)
print("# python version = ", sys.version)
@torch.compile
def f(x):
return x + 1
x = torch.randn(1, device="cpu")
f(x)
EOF
# torch version = 2.5.1
# python version = 3.12.8 (main, Dec 3 2024, 18:42:41) [Clang 15.0.0 (clang-1500.1.0.2.5)]
```
Then run
```
# Remove extra lines from output
$ sed -i '' '/Output code/d' /tmp/output.py
```
Finally
```
# Run the generated output code with profiling enabled
$ python /tmp/output.py --profile
0.000001
0.000001
Profiling result for a compiled module of benchmark None:
Chrome trace for the profile is written to /var/folders/fk/4_6j3_d57_1bqqttnp9l07lc0000gn/T/compiled_module_profile.json
----------------------------- ------------ ------------ ------------ ------------ ------------ ------------ --------------------------------
Name Self CPU % Self CPU CPU total % CPU total CPU time avg # of Calls Input Shapes
----------------------------- ------------ ------------ ------------ ------------ ------------ ------------ --------------------------------
aten::randn 33.67% 10.995us 42.21% 13.786us 13.786us 1 [[], [], [], [], []]
aten::empty 4.85% 1.584us 4.85% 1.584us 0.528us 3 [[], [], [], [], [], []]
aten::normal_ 6.38% 2.082us 6.38% 2.082us 2.082us 1 [[1], [], [], []]
aten::as_strided 5.87% 1.916us 5.87% 1.916us 1.916us 1 [[1], [], [], []]
aten::to 0.64% 0.208us 0.64% 0.208us 0.208us 1 [[10], [], [], [], [], []]
aten::lift_fresh 0.13% 0.042us 0.13% 0.042us 0.042us 1 [[10]]
aten::detach_ 2.42% 0.791us 2.93% 0.957us 0.957us 1 [[10]]
detach_ 0.51% 0.166us 0.51% 0.166us 0.166us 1 [[10]]
aten::median 6.89% 2.250us 20.15% 6.582us 6.582us 1 [[10]]
aten::clone 6.00% 1.959us 12.12% 3.957us 3.957us 1 [[10], []]
----------------------------- ------------ ------------ ------------ ------------ ------------ ------------ --------------------------------
Self CPU time total: 32.657us
Traceback (most recent call last):
File "/tmp/output.py", line 67, in <module>
compiled_module_main('None', benchmark_compiled_module)
File "/Users/michael/dev/rfp-tool/.venv/lib/python3.12/site-packages/torch/_inductor/wrapper_benchmark.py", line 313, in compiled_module_main
parse_profile_event_list(
File "/Users/michael/dev/rfp-tool/.venv/lib/python3.12/site-packages/torch/_inductor/wrapper_benchmark.py", line 264, in parse_profile_event_list
report()
File "/Users/michael/dev/rfp-tool/.venv/lib/python3.12/site-packages/torch/_inductor/wrapper_benchmark.py", line 246, in report
f"\nPercent of time when {device_name.upper()} is busy: {device_busy_percent}"
^^^^^^^^^^^^^^^^^
AttributeError: 'NoneType' object has no attribute 'upper'
```
## Expected results
Profiling output with no crash
## Actual results
Crash because `device_name` is None. It appears to always be None when running on CPU.
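A minimal sketch of a possible guard (an assumption on my side, not a proposed patch): default the label when no device name is available.
```python
def report_busy_percent(device_name, device_busy_percent):
    # Hypothetical guard: device_name is None for CPU-only runs, so fall back
    # to "cpu" instead of calling .upper() on None.
    label = (device_name or "cpu").upper()
    print(f"Percent of time when {label} is busy: {device_busy_percent}")

report_busy_percent(None, "12.3%")  # prints a CPU line instead of raising AttributeError
```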
### Versions
```
Collecting environment information...
Traceback (most recent call last):
File "/Users/michael/dev/rfp-tool/optim/collect_env.py", line 692, in <module>
main()
File "/Users/michael/dev/rfp-tool/optim/collect_env.py", line 675, in main
output = get_pretty_env_info()
^^^^^^^^^^^^^^^^^^^^^
File "/Users/michael/dev/rfp-tool/optim/collect_env.py", line 670, in get_pretty_env_info
return pretty_str(get_env_info())
^^^^^^^^^^^^^^
File "/Users/michael/dev/rfp-tool/optim/collect_env.py", line 495, in get_env_info
pip_version, pip_list_output = get_pip_packages(run_lambda)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/michael/dev/rfp-tool/optim/collect_env.py", line 450, in get_pip_packages
for line in out.splitlines()
^^^^^^^^^^^^^^
AttributeError: 'NoneType' object has no attribute 'splitlines'
```
I don't use pip
cc @chauhang @penguinwu @voznesenskym @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @ColinPeppler @amjames @desertfire @aakhundov
| true
|
2,805,612,037
|
XPU - UserWarning: Failed to initialize XPU devices. when run on the host without Intel GPU Driver
|
atalman
|
closed
|
[
"triaged",
"module: xpu"
] | 6
|
CONTRIBUTOR
|
### 🐛 Describe the bug
Based on the following issue: https://github.com/pytorch/pytorch/issues/145290#issuecomment-2606374858
Tested on Linux x86 without the Intel GPU driver installed:
```
pip install --pre torch --index-url https://download.pytorch.org/whl/test/xpu
Looking in indexes: https://download.pytorch.org/whl/test/xpu
Collecting torch
Obtaining dependency information for torch from https://download.pytorch.org/whl/test/xpu/torch-2.6.0%2Bxpu-cp310-cp310-linux_x86_64.whl.metadata
Downloading https://download.pytorch.org/whl/test/xpu/torch-2.6.0%2Bxpu-cp310-cp310-linux_x86_64.whl.metadata (27 kB)
Collecting jinja2
Downloading https://download.pytorch.org/whl/test/Jinja2-3.1.4-py3-none-any.whl (133 kB)
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 133.3/133.3 kB 28.6 MB/s eta 0:00:00
Collecting typing-extensions>=4.10.0
Downloading https://download.pytorch.org/whl/test/typing_extensions-4.12.2-py3-none-any.whl (37 kB)
Collecting networkx
Downloading https://download.pytorch.org/whl/test/networkx-3.3-py3-none-any.whl (1.7 MB)
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 1.7/1.7 MB 114.9 MB/s eta 0:00:00
Collecting intel-cmplr-lib-rt==2025.0.2
Downloading https://download.pytorch.org/whl/test/xpu/intel_cmplr_lib_rt-2025.0.2-py2.py3-none-manylinux_2_28_x86_64.whl (45.9 MB)
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 45.9/45.9 MB 57.1 MB/s eta 0:00:00
Collecting tcmlib==1.2.0
Downloading https://download.pytorch.org/whl/test/xpu/tcmlib-1.2.0-py2.py3-none-manylinux_2_28_x86_64.whl (4.2 MB)
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 4.2/4.2 MB 117.4 MB/s eta 0:00:00
Collecting pytorch-triton-xpu==3.2.0
Obtaining dependency information for pytorch-triton-xpu==3.2.0 from https://download.pytorch.org/whl/test/pytorch_triton_xpu-3.2.0-cp310-cp310-linux_x86_64.whl.metadata
Downloading https://download.pytorch.org/whl/test/pytorch_triton_xpu-3.2.0-cp310-cp310-linux_x86_64.whl.metadata (1.3 kB)
Collecting sympy==1.13.1
Downloading https://download.pytorch.org/whl/test/sympy-1.13.1-py3-none-any.whl (6.2 MB)
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 6.2/6.2 MB 134.6 MB/s eta 0:00:00
Collecting fsspec
Downloading https://download.pytorch.org/whl/test/fsspec-2024.6.1-py3-none-any.whl (177 kB)
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 177.6/177.6 kB 47.3 MB/s eta 0:00:00
Collecting filelock
Downloading https://download.pytorch.org/whl/test/filelock-3.13.1-py3-none-any.whl (11 kB)
Collecting intel-cmplr-lib-ur==2025.0.2
Downloading https://download.pytorch.org/whl/test/xpu/intel_cmplr_lib_ur-2025.0.2-py2.py3-none-manylinux_2_28_x86_64.whl (25.1 MB)
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 25.1/25.1 MB 88.2 MB/s eta 0:00:00
Collecting umf==0.9.1
Downloading https://download.pytorch.org/whl/test/xpu/umf-0.9.1-py2.py3-none-manylinux_2_28_x86_64.whl (161 kB)
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 161.6/161.6 kB 43.3 MB/s eta 0:00:00
Collecting intel-pti==0.10.0
Downloading https://download.pytorch.org/whl/test/xpu/intel_pti-0.10.0-py2.py3-none-manylinux_2_28_x86_64.whl (651 kB)
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 651.8/651.8 kB 104.1 MB/s eta 0:00:00
Collecting intel-cmplr-lic-rt==2025.0.2
Downloading https://download.pytorch.org/whl/test/xpu/intel_cmplr_lic_rt-2025.0.2-py2.py3-none-manylinux_2_28_x86_64.whl (18 kB)
Collecting intel-sycl-rt==2025.0.2
Downloading https://download.pytorch.org/whl/test/xpu/intel_sycl_rt-2025.0.2-py2.py3-none-manylinux_2_28_x86_64.whl (12.4 MB)
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 12.4/12.4 MB 135.9 MB/s eta 0:00:00
Collecting packaging
Downloading https://download.pytorch.org/whl/test/packaging-22.0-py3-none-any.whl (42 kB)
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 42.6/42.6 kB 14.9 MB/s eta 0:00:00
Collecting mpmath<1.4,>=1.1.0
Downloading https://download.pytorch.org/whl/test/mpmath-1.3.0-py3-none-any.whl (536 kB)
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 536.2/536.2 kB 47.2 MB/s eta 0:00:00
Collecting MarkupSafe>=2.0
Downloading https://download.pytorch.org/whl/test/MarkupSafe-2.1.5-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl (25 kB)
Downloading https://download.pytorch.org/whl/test/xpu/torch-2.6.0%2Bxpu-cp310-cp310-linux_x86_64.whl (1029.5 MB)
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 1.0/1.0 GB 694.9 kB/s eta 0:00:00
Downloading https://download.pytorch.org/whl/test/pytorch_triton_xpu-3.2.0-cp310-cp310-linux_x86_64.whl (348.4 MB)
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 348.4/348.4 MB 3.0 MB/s eta 0:00:00
Using cached https://download.pytorch.org/whl/test/xpu/torch-2.6.0%2Bxpu-cp310-cp310-linux_x86_64.whl (1029.5 MB)
Using cached https://download.pytorch.org/whl/test/pytorch_triton_xpu-3.2.0-cp310-cp310-linux_x86_64.whl (348.4 MB)
Installing collected packages: tcmlib, mpmath, intel-pti, intel-cmplr-lic-rt, intel-cmplr-lib-rt, umf, typing-extensions, sympy, packaging, networkx, MarkupSafe, fsspec, filelock, pytorch-triton-xpu, jinja2, intel-cmplr-lib-ur, intel-sycl-rt, torch
Successfully installed MarkupSafe-2.1.5 filelock-3.13.1 fsspec-2024.6.1 intel-cmplr-lib-rt-2025.0.2 intel-cmplr-lib-ur-2025.0.2 intel-cmplr-lic-rt-2025.0.2 intel-pti-0.10.0 intel-sycl-rt-2025.0.2 jinja2-3.1.4 mpmath-1.3.0 networkx-3.3 packaging-22.0 pytorch-triton-xpu-3.2.0 sympy-1.13.1 tcmlib-1.2.0 torch-2.6.0+xpu typing-extensions-4.12.2 umf-0.9.1
WARNING: Running pip as the 'root' user can result in broken permissions and conflicting behaviour with the system package manager. It is recommended to use a virtual environment instead: https://pip.pypa.io/warnings/venv
root@b90276700e5d:/# python
Python 3.10.16 (main, Jan 14 2025, 05:29:27) [GCC 12.2.0] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> import torch
/usr/local/lib/python3.10/site-packages/torch/_subclasses/functional_tensor.py:275: UserWarning: Failed to initialize NumPy: No module named 'numpy' (Triggered internally at /pytorch/torch/csrc/utils/tensor_numpy.cpp:81.)
cpu = _conversion_method_template(device=torch.device("cpu"))
>>> torch.__version__
'2.6.0+xpu'
>>> exit()
root@b90276700e5d:/# git clone https://github.com/pytorch/pytorch.git
Cloning into 'pytorch'...
remote: Enumerating objects: 1047258, done.
remote: Counting objects: 100% (959/959), done.
remote: Compressing objects: 100% (445/445), done.
remote: Total 1047258 (delta 717), reused 565 (delta 514), pack-reused 1046299 (from 3)
Receiving objects: 100% (1047258/1047258), 955.33 MiB | 47.36 MiB/s, done.
Resolving deltas: 100% (838919/838919), done.
Updating files: 100% (17938/17938), done.
root@b90276700e5d:/# cd pytorch/.ci/pytorch/
root@b90276700e5d:/pytorch/.ci/pytorch# cd smoke_test/
root@b90276700e5d:/pytorch/.ci/pytorch/smoke_test# pip install numpy
Collecting numpy
Downloading numpy-2.2.2-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl (16.4 MB)
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 16.4/16.4 MB 103.5 MB/s eta 0:00:00
Installing collected packages: numpy
Successfully installed numpy-2.2.2
WARNING: Running pip as the 'root' user can result in broken permissions and conflicting behaviour with the system package manager. It is recommended to use a virtual environment instead: https://pip.pypa.io/warnings/venv
[notice] A new release of pip is available: 23.0.1 -> 24.3.1
[notice] To update, run: pip install --upgrade pip
root@b90276700e5d:/pytorch/.ci/pytorch/smoke_test# python smoke_test.py --package torchonly
torch: 2.6.0+xpu
ATen/Parallel:
at::get_num_threads() : 4
at::get_num_interop_threads() : 4
OpenMP 201511 (a.k.a. OpenMP 4.5)
omp_get_max_threads() : 4
Intel(R) oneAPI Math Kernel Library Version 2025.0.1-Product Build 20241031 for Intel(R) 64 architecture applications
mkl_get_max_threads() : 4
Intel(R) MKL-DNN v3.5.3 (Git Hash 66f0cb9eb66affd2da3bf5f8d897376f04aae6af)
std::thread::hardware_concurrency() : 8
Environment variables:
OMP_NUM_THREADS : [not set]
MKL_NUM_THREADS : [not set]
ATen parallel backend: OpenMP
Skip version check for channel None as stable version is None
Testing smoke_test_conv2d
Testing smoke_test_linalg on cpu
Testing smoke_test_compile for cpu and torch.float16
/usr/local/lib/python3.10/site-packages/torch/xpu/__init__.py:60: UserWarning: Failed to initialize XPU devices. The driver may not be installed, installed incorrectly, or incompatible with the current setup. Please refer to the guideline (https://github.com/pytorch/pytorch?tab=readme-ov-file#intel-gpu-support) for proper installation and configuration. (Triggered internally at /pytorch/c10/xpu/XPUFunctions.cpp:54.)
return torch._C._xpu_getDeviceCount()
/usr/local/lib/python3.10/site-packages/torch/xpu/__init__.py:60: UserWarning: Failed to initialize XPU devices. The driver may not be installed, installed incorrectly, or incompatible with the current setup. Please refer to the guideline (https://github.com/pytorch/pytorch?tab=readme-ov-file#intel-gpu-support) for proper installation and configuration. (Triggered internally at /pytorch/c10/xpu/XPUFunctions.cpp:54.)
return torch._C._xpu_getDeviceCount()
Testing smoke_test_compile for cpu and torch.float32
Testing smoke_test_compile for cpu and torch.float64
Picked CPU ISA VecAVX512 bit width 512
Testing smoke_test_compile with mode 'max-autotune' for torch.float32
```
This creates the following warning when testing torch.compile:
```
Testing smoke_test_compile for cpu and torch.float16
/usr/local/lib/python3.10/site-packages/torch/xpu/__init__.py:60: UserWarning: Failed to initialize XPU devices. The driver may not be installed, installed incorrectly, or incompatible with the current setup. Please refer to the guideline (https://github.com/pytorch/pytorch?tab=readme-ov-file#intel-gpu-support) for proper installation and configuration. (Triggered internally at /pytorch/c10/xpu/XPUFunctions.cpp:54.)
```
Instead of generating the warning every time, we could perhaps display it once during torch import, and have the torch.compile call fall back to CPU?
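A minimal sketch of the "warn once" idea (an assumption, not current torch.xpu behavior): memoize the device-count probe so the warning is emitted at most once per process.
```python
import functools
import torch

@functools.lru_cache(maxsize=1)
def xpu_device_count_once() -> int:
    # The underlying probe (and its UserWarning) only runs on the first call;
    # later callers reuse the cached result.
    return torch.xpu.device_count()

device = "xpu" if xpu_device_count_once() > 0 else "cpu"
```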
### Versions
2.6.0 and 2.7.0
cc @gujinghui @EikanWang @fengyuan14 @guangyey
| true
|
2,805,608,959
|
[dynamo][dicts] Insert LENTGH guard on an if condition on dict
|
anijain2305
|
closed
|
[
"Merged",
"ciflow/trunk",
"topic: not user facing",
"module: dynamo",
"ciflow/inductor"
] | 3
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #145432
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @chauhang @amjames
| true
|
2,805,592,336
|
[dynamo] Re-enable `test_torch_name_rule_map_updated`
|
StrongerXi
|
closed
|
[
"topic: not user facing",
"module: dynamo",
"ciflow/inductor"
] | 2
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #145431
This patch re-enables `test_torch_name_rule_map_updated` and adds
relevant fixes for the failures.
Fixes #114831.
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @chauhang @amjames
| true
|
2,805,589,977
|
[c10/metal] Add a vectype variant for `short`/`int`/`long`
|
dcci
|
closed
|
[
"Merged",
"topic: not user facing",
"module: mps",
"ciflow/mps",
"module: inductor"
] | 4
|
MEMBER
|
Some of the kernels (exp_complex/atan_complex) need the specialization.
cc @kulinseth @albanD @malfet @DenisVieriu97 @jhavukainen @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @ColinPeppler @amjames @desertfire @chauhang @aakhundov
| true
|
2,805,586,380
|
[ca][hop] test CA on all HOPs
|
xmfan
|
closed
|
[
"Merged",
"ciflow/trunk",
"topic: not user facing",
"module: inductor",
"ciflow/inductor"
] | 6
|
MEMBER
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #145429
* #145422
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @ColinPeppler @amjames @desertfire @chauhang @aakhundov
| true
|
2,805,583,543
|
torch._neg_view correctness
|
FindHao
|
closed
|
[
"module: correctness (silent)",
"topic: not user facing"
] | 1
|
MEMBER
|
### 🐛 Describe the bug
```
import torch
device = "cuda"
dtype = torch.float64
# Basic test for negative view
x = torch.randn(20, 20, device=device, dtype=dtype, requires_grad=False)
physical_neg = torch.neg(x)
view_neg = torch._neg_view(x)
assert torch.is_neg(view_neg), "view_neg should be negative"
assert not torch.is_neg(x), "x should not be negative"
assert torch.allclose(
physical_neg, view_neg
), "physical_neg and view_neg should be equal"
# Test in-place operations on negative view
x = torch.randn(20, 20, device=device, dtype=dtype, requires_grad=False)
neg_x = torch._neg_view(x)
neg_x.add_(1.0)
assert torch.is_neg(neg_x), "neg_x should still be negative after in-place operation"
expected = -x + 1.0
assert torch.allclose(neg_x, expected), "neg_x should match expected result"
```
The output of the above tests is
```
% TORCH_LOGS=+inductor,dynamo python test_neg_view.py
Traceback (most recent call last):
File "test_neg_view.py", line 23, in <module>
assert torch.allclose(neg_x, expected), "neg_x should match expected result"
AssertionError: neg_x should match expected result
```
I'm curious whether this is the expected result?
Possibly related failing CI tests: https://github.com/pytorch/pytorch/pull/145127.
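For what it's worth, a minimal sketch of a comparison I'd expect to hold, under my assumption that `add_` writes through the view and therefore also mutates `x`:
```python
import torch

x = torch.randn(20, 20, dtype=torch.float64)
x_before = x.clone()        # snapshot before the in-place op mutates shared storage
neg_x = torch._neg_view(x)
neg_x.add_(1.0)             # writes through the view, so x itself changes too
expected = -x_before + 1.0  # compare against the pre-mutation values of x
assert torch.allclose(neg_x, expected)
```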
cc @shunting314 @eellison @zou3519 @masnesral
### Versions
nightly
| true
|
2,805,580,914
|
add pt2 callbacks for backward pass
|
burak-turk
|
closed
|
[
"fb-exported",
"topic: not user facing",
"module: inductor",
"ciflow/inductor"
] | 6
|
CONTRIBUTOR
|
Summary: This change adds callbacks for lazy backwards compilation.
Differential Revision: D68515699
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @ColinPeppler @amjames @desertfire @chauhang @aakhundov
| true
|
2,805,574,923
|
Fix aot inductor intermediate debug printing
|
exclamaforte
|
closed
|
[
"ciflow/trunk",
"topic: bug fixes",
"module: inductor",
"ciflow/inductor",
"release notes: inductor",
"module: aotinductor"
] | 13
|
CONTRIBUTOR
|
Fixes https://github.com/pytorch/pytorch/issues/145425
The other way to fix this would be to change `_print_debugging_tensor_value_info` to handle constants.
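A hypothetical sketch of that alternative (the real helper lives in inductor's debug printer; the signature and formatting here are assumed, not the actual API):
```python
import torch

def _print_debugging_tensor_value_info(msg, value):
    if not isinstance(value, torch.Tensor):
        # Constants folded into the graph are plain Python scalars; print them
        # directly instead of querying tensor metadata.
        print(f"{msg}: constant {value!r}")
        return
    print(f"{msg}: shape={tuple(value.shape)} dtype={value.dtype} "
          f"mean={value.float().mean().item():.6f}")
```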
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov @desertfire @yushangdi @ColinPeppler
| true
|