| id (int64, 2.74B–3.05B) | title (string, 1–255 chars) | user (string, 2–26 chars) | state (string, 2 classes) | labels (list, 0–24 items) | comments (int64, 0–206) | author_association (string, 4 classes) | body (string, 7–62.5k chars, nullable ⌀) | is_title (bool, 1 class) |
|---|---|---|---|---|---|---|---|---|
2,989,714,486
|
Add export specific tests for dynamo and export
|
Lucaskabela
|
closed
|
[
"topic: not user facing",
"module: dynamo",
"ciflow/inductor"
] | 3
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #151588
* __->__ #151135
* #151134
* #151133
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @chauhang @amjames
| true
|
2,989,714,401
|
Add functionality for installing free variables
|
Lucaskabela
|
closed
|
[
"Merged",
"ciflow/trunk",
"topic: not user facing",
"module: dynamo",
"ciflow/inductor"
] | 5
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #151134
* #152036
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @chauhang @amjames
| true
|
2,989,714,274
|
add basic unit tests and noop config
|
Lucaskabela
|
closed
|
[
"topic: not user facing",
"module: dynamo",
"ciflow/inductor"
] | 4
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #151588
* #151135
* #151134
* __->__ #151133
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @chauhang @amjames
| true
|
2,989,684,535
|
[AMD] Block mem efficient attention for FP32 in CK backend
|
xw285cornell
|
closed
|
[
"fb-exported",
"Merged",
"ciflow/trunk",
"topic: not user facing"
] | 7
|
CONTRIBUTOR
|
Summary: CK doesn't support FP32 attention, but aotriton does. If we prefer CK and the input dtype is FP32, we would still select mem-efficient attention even though CK can't run it. So we exclude mem-efficient attention and pick the math backend instead.
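For illustration only (this diff changes the C++ backend selection internally): the user-visible effect is roughly equivalent to steering SDPA to the math backend for FP32 inputs, e.g.:
```python
# Illustrative sketch, not code from this diff: route FP32 SDPA to the math
# backend, mirroring the "exclude mem-efficient attention for FP32" behavior.
import torch
from torch.nn.attention import SDPBackend, sdpa_kernel

q = torch.randn(2, 8, 128, 64, dtype=torch.float32)
k = torch.randn(2, 8, 128, 64, dtype=torch.float32)
v = torch.randn(2, 8, 128, 64, dtype=torch.float32)

if q.dtype == torch.float32:
    # FP32: mem-efficient (CK) path is unsupported, so fall back to math.
    with sdpa_kernel([SDPBackend.MATH]):
        out = torch.nn.functional.scaled_dot_product_attention(q, k, v)
else:
    out = torch.nn.functional.scaled_dot_product_attention(q, k, v)
```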
Differential Revision: D72880985
| true
|
2,989,611,599
|
[dynamo] Use sentinel value for guard filter.
|
zhxchen17
|
closed
|
[
"fb-exported",
"Merged",
"ciflow/trunk",
"topic: not user facing",
"module: dynamo",
"ciflow/inductor"
] | 4
|
CONTRIBUTOR
|
Summary: `None` can collide with the real values in the scope, so we should use a separate value. Also added "has_value" to the struct so that it's more clear whether the value is absent or not.
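For context, a hedged sketch of the sentinel-plus-`has_value` pattern described here (names are illustrative, not the actual struct):
```python
# Use a unique sentinel object instead of None so a legitimate None in the
# user's scope is not mistaken for "value absent". Names here are hypothetical.
_MISSING = object()  # sentinel that cannot collide with any real value

class GuardFilterEntry:
    def __init__(self, name, value=_MISSING):
        self.name = name
        self.has_value = value is not _MISSING  # explicit presence flag
        self.value = None if value is _MISSING else value

present_none = GuardFilterEntry("x", None)   # a real None value
absent = GuardFilterEntry("y")               # no value at all
assert present_none.has_value and not absent.has_value
```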
Test Plan: CI
Differential Revision: D72881300
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @chauhang @amjames
| true
|
2,989,526,907
|
Using hasattr for `_boxed_call` is asking for trouble
|
aorenste
|
closed
|
[
"fb-exported",
"Merged",
"ciflow/trunk",
"topic: not user facing",
"module: dynamo",
"ciflow/inductor"
] | 4
|
CONTRIBUTOR
|
Summary:
There are a number of places in the code checking for the existence of `_boxed_call` instead of checking for a `True` value. This is somewhat dangerous because one would assume that setting it to `None` or `False` would be the same as not setting it (output_code.py does this, for example).
Change `hasattr()` to `getattr(..., False)` for these cases.
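A small illustration of why the two spellings differ (not code from this diff):
```python
# hasattr() only checks existence, so an attribute explicitly set to a falsy
# value (None/False) still looks "boxed"; getattr(..., False) checks truthiness.
class CompiledFn:
    pass

fn = CompiledFn()
fn._boxed_call = None  # e.g. output_code.py setting it to a falsy value

assert hasattr(fn, "_boxed_call") is True            # existence check: "boxed"
assert bool(getattr(fn, "_boxed_call", False)) is False  # truthiness check: not boxed
```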
Test Plan: unit tests pass
Differential Revision: D72806693
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @chauhang @amjames
| true
|
2,989,499,174
|
Fix setUpClass() / tearDownClass() for device-specific tests
|
jbschlosser
|
closed
|
[
"Merged",
"ciflow/trunk",
"topic: not user facing",
"module: inductor",
"ciflow/inductor",
"ci-no-td"
] | 22
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #151129
Finishes up the work started in #121686 + adds test
Update: this was not as straightforward as I originally imagined. Context below.
**TL;DR:** `TestFoo{CPU, CUDA}` now actually derive from `TestFoo`! Also, `{CPU, CUDA}TestBase` setup / teardown logic is now always called (it is required to set the primary device), regardless of whether `super().setUpClass()` / `super().tearDownClass()` are called or not.
**Background:** The typical way to get device-specific tests is to write a generic `TestFoo` and call `instantiate_device_type_tests(TestFoo, locals())` to get `TestFooCPU`, `TestFooCUDA`, etc. After this, generic tests (e.g. `TestFoo.test_bar()`) become `TestFooCPU.test_bar_cpu()` / `TestFooCUDA.test_bar_cuda()`.
Behind the scenes, this was historically accomplished by creating a `TestFooCUDA` that derives from both a `CUDATestBase` and an *empty class* called `TestFoo_base`. This `TestFoo_base` has the same bases as `TestFoo`, but none of the test functions (e.g. `test_bar()`). The documented reason for this is to avoid things like a derived `TestFooCUDA.test_bar()` being discovered in addition to the real device-specific test `TestFooCUDA.test_bar_cuda()`.
(1) A reason this matters is because it should be possible to call e.g. `super().setUpClass()` from a custom setup / teardown classmethod. If the generated TestFooCUDA does not derive from TestFoo, but instead derives from the empty class described above, this syntax does not work; in fact there is no way to form a proper `super()` call that works across the device-specific test variants. Here's an example that breaks in the OpInfo tests:
https://github.com/pytorch/pytorch/blob/070f3897453ea71a5505e4fcfaccb44e1e0cfa78/test/test_ops.py#L218-L221
(2) Further, there is some precedent within a custom `setUpClass()` impl for storing things on the `cls` object to be accessed at test time. This must be the device-specific test class (`TestFooCUDA`) and not `TestFoo` for this to work. As an example, the open device registration tests load a module during setup and use it in the test logic:
https://github.com/pytorch/pytorch/blob/070f3897453ea71a5505e4fcfaccb44e1e0cfa78/test/test_cpp_extensions_open_device_registration.py#L63-L77
https://github.com/pytorch/pytorch/blob/070f3897453ea71a5505e4fcfaccb44e1e0cfa78/test/test_cpp_extensions_open_device_registration.py#L79-L80
To accomplish both (1) and (2) at the same time, I decided to revisit the idea of utilizing a proper inheritance hierarchy for `TestFoo` -> `{TestFooCPU, TestFooCUDA}`. That is: have TestFooCPU / TestFooCUDA **actually** derive from `TestFoo`. This achieves both (1) and (2). The only thing left is to make sure the generic tests (e.g. `TestFoo.test_bar()`) are not discoverable, as was the stated reason for diverging from this in the first place. It turns out we can simply `delattr()` these generic tests from `TestFoo` once `TestFooCPU` / `TestFooCUDA` have been setup with the device-specific variants, and all works well. The `instantiate_device_type_tests(...)` logic already deletes `TestFoo` from scope, so I don't see a problem with deleting generic tests from this base class as well (CI will prove me right or wrong ofc).
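A minimal sketch of the pattern this enables (class and test names are made up; only the structure matters):
```python
# With TestFooCPU/TestFooCUDA now truly deriving from TestFoo, super().setUpClass()
# resolves correctly and cls-level state lands on the device-specific class.
from torch.testing._internal.common_device_type import instantiate_device_type_tests
from torch.testing._internal.common_utils import TestCase, run_tests

class TestFoo(TestCase):
    @classmethod
    def setUpClass(cls):
        super().setUpClass()            # now reaches the device test base setup as expected
        cls.shared_resource = object()  # stored on TestFooCPU / TestFooCUDA

    def test_bar(self, device):
        self.assertIsNotNone(self.shared_resource)

# Generates TestFooCPU.test_bar_cpu, TestFooCUDA.test_bar_cuda, ...;
# the generic TestFoo.test_bar is deleted so it is not discovered twice.
instantiate_device_type_tests(TestFoo, globals())

if __name__ == "__main__":
    run_tests()
```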
**Side note:** I was encountering a weird race condition where sometimes the custom `setUpClass()` / `tearDownClass()` defined & swapped in [here](https://github.com/pytorch/pytorch/blob/4a47dd9b3f5d3aa587a8f909ce53f3a5eddcde6d/torch/testing/_internal/common_device_type.py#L940-L955) would be used, and sometimes it wouldn't. This non-deterministic behavior was called out previously by @ngimel here:
https://github.com/pytorch/pytorch/blob/4a47dd9b3f5d3aa587a8f909ce53f3a5eddcde6d/test/inductor/test_torchinductor_dynamic_shapes.py#L128-L130
To address this, I moved this block of logic to before the first call to `instantiate_test()`, as that method queries for the primary device, and the primary device identification logic may manually invoke `setUpClass()` (see [here](https://github.com/pytorch/pytorch/blob/4a47dd9b3f5d3aa587a8f909ce53f3a5eddcde6d/torch/testing/_internal/common_device_type.py#L381-L384)). Goal: define the `setUpClass()` / `tearDownClass()` we want for correctness before they're ever called. This seems to work and the behavior is deterministic now AFAICT.
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
| true
|
2,989,478,881
|
Reapply "Support tuning of _scaled_grouped_mm (#150421)"
|
bertmaher
|
closed
|
[
"module: inductor",
"module: dynamo",
"ciflow/inductor"
] | 2
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #151128
This reverts commit 6a65f2c4feb91f6dcc8b2879962b7f8badc3eac6.
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
| true
|
2,989,472,948
|
Wrong handling of deferred == and != runtime checks in torch.compile!
|
laithsakka
|
open
|
[
"triaged",
"oncall: pt2",
"module: dynamic shapes",
"module: inductor"
] | 0
|
CONTRIBUTOR
|
Two problems:
1) For the following, we do not recompile, but we also do not perform runtime assertions. The calls
func(torch.tensor([100]), torch.tensor([1,2]))
func(torch.tensor([1]), torch.tensor([1,2]))
should fail, but they do not! We do not generate the runtime assertion because it is true in the graph.
```
import torch
from torch._inductor.utils import fresh_inductor_cache

torch._dynamo.config.capture_scalar_outputs = True

@torch.compile(fullgraph=True)
def func(a, b):
    b = a.item()
    torch._check(b==5)
    # torch._check(b<=5)
    return b*10

with fresh_inductor_cache():
    func(torch.tensor([5]), torch.tensor([1,2]))
    func(torch.tensor([100]), torch.tensor([1,2]))
    func(torch.tensor([1]), torch.tensor([1,2]))
```
These calls should fail! The generated graph and compiled code show why they don't:
```
===== Forward graph 0 =====
/home/lsakka/pytorch/torch/fx/_lazy_graph_module.py class <lambda>(torch.nn.Module):
def forward(self, arg0_1: "i64[1][1]cpu"):
# File: /home/lsakka/pytorch/example8.py:123 in func, code: b = a.item()
_local_scalar_dense: "Sym(5)" = torch.ops.aten._local_scalar_dense.default(arg0_1); arg0_1 = None
ge: "Sym(True)" = _local_scalar_dense >= 5
_assert_scalar = torch.ops.aten._assert_scalar.default(ge, "Runtime assertion failed for expression u0 >= 5 on node 'ge'"); ge = _assert_scalar = None
le: "Sym(True)" = _local_scalar_dense <= 5
_assert_scalar_1 = torch.ops.aten._assert_scalar.default(le, "Runtime assertion failed for expression u0 <= 5 on node 'le'"); le = _assert_scalar_1 = None
# No stacktrace found for following nodes
eq: "Sym(True)" = _local_scalar_dense == 5
_assert_scalar_2 = torch.ops.aten._assert_scalar.default(eq, "Runtime assertion failed for expression Eq(u0, 5) on node 'eq_1'"); eq = _assert_scalar_2 = None
# File: /home/lsakka/pytorch/example8.py:126 in func, code: return b*10
mul: "Sym(50)" = _local_scalar_dense * 10; _local_scalar_dense = None
return (mul,)
async_compile.wait(globals())
del async_compile
def call(args):
arg0_1, = args
args.clear()
assert_size_stride(arg0_1, (1, ), (1, ))
return (50, )
def benchmark_compiled_module(times=10, repeat=10):
from torch._dynamo.testing import rand_strided
from torch._inductor.utils import print_performance
arg0_1 = rand_strided((1, ), (1, ), device='cpu', dtype=torch.int64)
fn = lambda: call([arg0_1])
return print_performance(fn, times=times, repeat=repeat)
```
2) For the following, we also do not generate runtime asserts in Inductor; the calls
func(torch.tensor([100]), torch.tensor([1,2]))
func(torch.tensor([1]), torch.tensor([1,2]))
should fail in Inductor.
```
import torch
from torch._inductor.utils import fresh_inductor_cache

torch._dynamo.config.capture_scalar_outputs = True

@torch.compile(fullgraph=True)
def func(a, b):
    b = a.item()
    torch._check(b!=5)
    # torch._check(b<=5)
    return b*10

with fresh_inductor_cache():
    func(torch.tensor([5]), torch.tensor([1,2]))
    func(torch.tensor([100]), torch.tensor([1,2]))
    func(torch.tensor([1]), torch.tensor([1,2]))
```
```
TRACED GRAPH
===== Forward graph 0 =====
/home/lsakka/pytorch/torch/fx/_lazy_graph_module.py class <lambda>(torch.nn.Module):
def forward(self, arg0_1: "i64[1][1]cpu"):
# File: /home/lsakka/pytorch/example8.py:123 in func, code: b = a.item()
_local_scalar_dense: "Sym(u0)" = torch.ops.aten._local_scalar_dense.default(arg0_1); arg0_1 = None
# No stacktrace found for following nodes
eq: "Sym(Eq(u0, 5))" = _local_scalar_dense == 5
sym_not: "Sym(Ne(u0, 5))" = torch.sym_not(eq); eq = None
_assert_scalar = torch.ops.aten._assert_scalar.default(sym_not, "Runtime assertion failed for expression Ne(u0, 5) on node 'sym_not'"); sym_not = _assert_scalar = None
# File: /home/lsakka/pytorch/example8.py:126 in func, code: return b*10
mul: "Sym(10*u0)" = _local_scalar_dense * 10; _local_scalar_dense = None
return (mul,)
async_compile.wait(globals())
del async_compile
def call(args):
arg0_1, = args
args.clear()
assert_size_stride(arg0_1, (1, ), (1, ))
u0 = arg0_1.item()
buf0 = None
del arg0_1
return (10*u0, )
```
### what works lol
This works!
```
import torch
from torch._inductor.utils import fresh_inductor_cache

torch._dynamo.config.capture_scalar_outputs = True

@torch.compile(fullgraph=True)
def func(a, b):
    b = a.item()
    torch._check(b<=5)
    return b*10

with fresh_inductor_cache():
    func(torch.tensor([5]), torch.tensor([1,2]))
    func(torch.tensor([100]), torch.tensor([1,2]))
    func(torch.tensor([1]), torch.tensor([1,2]))
```
```
def call(args):
arg0_1, = args
args.clear()
assert_size_stride(arg0_1, (1, ), (1, ))
u0 = arg0_1.item()
buf0 = None
del arg0_1
if not u0 <= 5:
raise RuntimeError('u0 <= 5')
buf1 = None
return (10*u0, )
```
Why would someone need to debug this!
cc @chauhang @penguinwu @ezyang @bobrenjc93 @voznesenskym @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @chenyang78 @kadeng @muchulee8 @amjames @aakhundov
| true
|
2,989,472,392
|
[aarch64] Fixes to build with ArmPL's cblas.h
|
andrewjcg
|
closed
|
[
"fb-exported",
"Merged",
"ciflow/trunk",
"topic: not user facing"
] | 6
|
CONTRIBUTOR
|
Summary:
Various fixes to make fbcode work w/ ArmPL's cblas header:
1) Avoid re-declaring prototypes for internal blas methods which ArmPL already declares.
2) Fix `std::complex` conversion when using these methods.
3) Drop `extern "C"` around the include of `cblas.h`.
Test Plan: CI
Differential Revision: D72808561
| true
|
2,989,344,962
|
Guard additional use of DriverAPI
|
pganssle-google
|
open
|
[
"oncall: distributed",
"open source",
"ciflow/trunk",
"release notes: distributed (c10d)",
"topic: not user facing"
] | 10
|
CONTRIBUTOR
|
Most uses of DriverAPI are guarded by `PYTORCH_C10_DRIVER_API_SUPPORTED`, but this use is not, which causes compilation errors when building without C10 DriverAPI support.
cc @H-Huang @awgu @wanchaol @fegin @fduwjj @wz337 @wconstab @d4l3k
| true
|
2,989,246,489
|
[profiler][retry] don't disable CUPTI_LAZY_REINIT for cuda >= 12.6
|
davidberard98
|
closed
|
[
"fb-exported",
"Merged",
"ciflow/trunk",
"release notes: profiler",
"module: inductor",
"ciflow/inductor"
] | 9
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #151124
Retry of https://github.com/pytorch/pytorch/pull/150957, which was reverted due to internal meta failures
Credit to @mgmtea who wrote the initial version of this PR: https://github.com/pytorch/pytorch/pull/146604
Context: CUPTI is the NVIDIA library that Kineto uses for collecting GPU-side info during profiling. The intended usage is to register a callback while you want profiling to occur, and then unregister the callback when you want profiling to stop. But a bug would cause crashes if CUPTI callbacks were de-registered when used with cudagraphs. The workaround was to disable "CUPTI_LAZY_REINIT" and "CUPTI_TEARDOWN" in Kineto - which prevents crashes, but can result in slower execution after profiling has occurred and completed.
This bug is believed to be fixed in CUDA >= 12.6, so this PR qualifies that DISABLE_CUPTI_LAZY_REINIT=1 and CUPTI_TEARDOWN=0 should only be applied if CUDA < 12.6. Additionally, `profiler_allow_cudagraph_cupti_lazy_reinit_cuda12()` is added as an escape hatch so that we can add a killswitch in case we see more crashes related to this.
Differential Revision: [D72842114](https://our.internmc.facebook.com/intern/diff/D72842114/)
**NOTE FOR REVIEWERS**: This PR has internal Meta-specific changes or comments, please review them on [Phabricator](https://our.internmc.facebook.com/intern/diff/D72842114/)!
Differential Revision: [D72842114](https://our.internmc.facebook.com/intern/diff/D72842114)
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
| true
|
2,989,240,677
|
Fix tensor_constant name collision in aot_export_module
|
yushangdi
|
closed
|
[
"fb-exported",
"Merged",
"ciflow/trunk",
"release notes: fx",
"fx",
"module: inductor",
"ciflow/inductor"
] | 11
|
CONTRIBUTOR
|
Summary:
When we have an exported program that looks like this:
```
ExportedProgram:
class GraphModule(torch.nn.Module):
def forward(self, b__tensor_constant0: "f32[1]", ... c_lifted_tensor_0: "i64[925]", …. , tupleized_input_0_0: "f32[10, 2139]",
clone: "i64[925]" = torch.ops.aten.clone.default(c_lifted_tensor_0); c_lifted_tensor_0 = None
index_select: "f32[10, 925]" = torch.ops.aten.index_select.default(tupleized_input_0_0, 1, clone); clone = None
```
The graph after `aot_export_module` could have a name collision: notice that the `_tensor_constant0` arg of `clone` is different from the `_tensor_constant0` in the input module.
```
def forward(self):
arg9_1: "f32[10, 2139]"
_tensor_constant0: "f32[1]" = self._tensor_constant0 # this should be int64, conflicted with the original _tensor_constant0, had a clone on this constant before lifting
index: "f32[10, 925]" = torch.ops.aten.index.Tensor(arg9_1, [None, _tensor_constant0]); _tensor_constant0 = None
```
This caused the `tensors used as indices must binary, int...` AOTI error on the PT2I dashboard, because `clone` was later used as an index.
We hit this error because we created a new `_tensor_constant0` [here](https://github.com/pytorch/pytorch/blob/main/torch/fx/_symbolic_trace.py#L403-L412), and the new `_tensor_constant0` overrides the original `_tensor_constant0` on the input Module in `_unlift_graph`. The `arg` for `clone` is created at `create_proxy` in `proxy.py`.
To fix this, we do a graph pass before we unlift the graph inputs to avoid the name collision.
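Roughly, such a pass could look like the following sketch (illustrative only, not the actual implementation; it only handles top-level constants):
```python
# Hedged sketch of a rename pass: before unlifting, give traced constants whose
# name would collide with an existing attribute on the original module a fresh name.
import torch
import torch.fx as fx

def rename_colliding_constants(gm: fx.GraphModule, orig_mod: torch.nn.Module) -> fx.GraphModule:
    for node in list(gm.graph.nodes):
        if node.op != "get_attr" or "." in node.target:
            continue  # only top-level constants in this sketch
        target = node.target
        if not hasattr(orig_mod, target):
            continue
        if getattr(orig_mod, target) is getattr(gm, target):
            continue  # same object, no collision
        # Pick an unused name and re-point the node at it.
        i = 0
        new_name = f"{target}_renamed{i}"
        while hasattr(orig_mod, new_name) or hasattr(gm, new_name):
            i += 1
            new_name = f"{target}_renamed{i}"
        setattr(gm, new_name, getattr(gm, target))
        with gm.graph.inserting_before(node):
            new_node = gm.graph.get_attr(new_name)
        node.replace_all_uses_with(new_node)
        gm.graph.erase_node(node)
    gm.recompile()
    return gm
```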
Test Plan:
```
buck run fbcode//mode/dev-nosan //caffe2/test/inductor:torchbind -- -r aot_compile_constant_folding
buck2 run mode/dev-nosan caffe2/test/inductor:test_aot_inductor -- -r aoti_constant_tensor_name_collision
```
Differential Revision: D72761937
cc @ezyang @SherlockNoMad @EikanWang @jgong5 @wenzhe-nrv @voznesenskym @penguinwu @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @jiayisunx @ipiszy @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
| true
|
2,989,219,287
|
Avoid guarding on ids of optional dictionary tensor values:
|
laithsakka
|
closed
|
[
"dynamo-dicts"
] | 2
|
CONTRIBUTOR
|
Is it possible to avoid recompilation here? Instead of guarding on the object id, guard on whether the value is None or not.
```
import torch

@torch.compile()
def func(x):
    for k, v in x.items():
        if v is None:
            return v*100
        else:
            return v*200

func({10: torch.tensor([1])})
func({20: torch.tensor([100])})
```
This generates two graphs.
```
I0411 10:00:52.560000 2324482 torch/_functorch/_aot_autograd/dispatch_and_compile_graph.py:202] [0/0] [__aot_graphs] TRACED GRAPH
I0411 10:00:52.560000 2324482 torch/_functorch/_aot_autograd/dispatch_and_compile_graph.py:202] [0/0] [__aot_graphs] ===== Forward graph 0 =====
I0411 10:00:52.560000 2324482 torch/_functorch/_aot_autograd/dispatch_and_compile_graph.py:202] [0/0] [__aot_graphs] /home/lsakka/pytorch/torch/fx/_lazy_graph_module.py class <lambda>(torch.nn.Module):
I0411 10:00:52.560000 2324482 torch/_functorch/_aot_autograd/dispatch_and_compile_graph.py:202] [0/0] [__aot_graphs] def forward(self, arg0_1: "i64[1][1]cpu"):
I0411 10:00:52.560000 2324482 torch/_functorch/_aot_autograd/dispatch_and_compile_graph.py:202] [0/0] [__aot_graphs] # File: /home/lsakka/pytorch/example8.py:113 in func, code: return v*200
I0411 10:00:52.560000 2324482 torch/_functorch/_aot_autograd/dispatch_and_compile_graph.py:202] [0/0] [__aot_graphs] mul: "i64[1][1]cpu" = torch.ops.aten.mul.Tensor(arg0_1, 200); arg0_1 = None
I0411 10:00:52.560000 2324482 torch/_functorch/_aot_autograd/dispatch_and_compile_graph.py:202] [0/0] [__aot_graphs] return (mul,)
I0411 10:00:52.560000 2324482 torch/_functorch/_aot_autograd/dispatch_and_compile_graph.py:202] [0/0] [__aot_graphs]
I0411 10:00:52.560000 2324482 torch/_functorch/_aot_autograd/dispatch_and_compile_graph.py:202] [0/0] [__aot_graphs]
```
```
I0411 10:01:12.246000 2324482 torch/_functorch/_aot_autograd/dispatch_and_compile_graph.py:202] [0/1] [__aot_graphs] TRACED GRAPH
I0411 10:01:12.246000 2324482 torch/_functorch/_aot_autograd/dispatch_and_compile_graph.py:202] [0/1] [__aot_graphs] ===== Forward graph 1 =====
I0411 10:01:12.246000 2324482 torch/_functorch/_aot_autograd/dispatch_and_compile_graph.py:202] [0/1] [__aot_graphs] /home/lsakka/pytorch/torch/fx/_lazy_graph_module.py class <lambda>(torch.nn.Module):
I0411 10:01:12.246000 2324482 torch/_functorch/_aot_autograd/dispatch_and_compile_graph.py:202] [0/1] [__aot_graphs] def forward(self, arg0_1: "i64[1][1]cpu"):
I0411 10:01:12.246000 2324482 torch/_functorch/_aot_autograd/dispatch_and_compile_graph.py:202] [0/1] [__aot_graphs] # File: /home/lsakka/pytorch/example8.py:113 in func, code: return v*200
I0411 10:01:12.246000 2324482 torch/_functorch/_aot_autograd/dispatch_and_compile_graph.py:202] [0/1] [__aot_graphs] mul: "i64[1][1]cpu" = torch.ops.aten.mul.Tensor(arg0_1, 200); arg0_1 = None
I0411 10:01:12.246000 2324482 torch/_functorch/_aot_autograd/dispatch_and_compile_graph.py:202] [0/1] [__aot_graphs] return (mul,)
I0411 10:01:12.246000 2324482 torch/_functorch/_aot_autograd/dispatch_and_compile_graph.py:202] [0/1] [__aot_graphs]
I0411 10:01:12.246000 2324482 torch/_functorch/_aot_autograd/dispatch_and_compile_graph.py:202] [0/1] [__aot_graphs]
```
internal cross ref
https://docs.google.com/document/d/12LyeEpY20m0L7h5eMZ9A4-NGshOrVQxW7oeW0ooEKno/edit?tab=t.0#heading=h.dukshhde9mj3
| true
|
2,989,215,736
|
[ONNX] Fix bfloat16 support in onnx_program callable
|
justinchuby
|
closed
|
[
"module: onnx",
"open source",
"Merged",
"ciflow/trunk",
"release notes: onnx",
"topic: bug fixes"
] | 3
|
COLLABORATOR
|
- Added a test to guard bfloat16. The optimizer incorrectly turns bfloat16 initializers into uint16, but this is not relevant to export logic.
- Fix bfloat16 support in onnx_program callable
Tested with the following with cuda
```py
import torch

class BfloatModel(torch.nn.Module):
    def __init__(self):
        super().__init__()
        self.param = torch.nn.Parameter(torch.tensor(2.0, dtype=torch.bfloat16))

    def forward(self, x):
        return x * torch.tensor(1.0, dtype=torch.bfloat16) * self.param

input = torch.randn(1, 10, dtype=torch.bfloat16)
model = BfloatModel()
onnx_program = torch.onnx.export(model, (input,), dynamo=True, optimize=False, verify=True)
```
| true
|
2,989,178,938
|
Reland prologue transposed changes
|
eellison
|
open
|
[
"topic: not user facing",
"module: inductor",
"ciflow/inductor",
"ci-no-td"
] | 2
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #151120
* #151013
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
| true
|
2,989,156,323
|
hack to try to fix not empty triton dir
|
bertmaher
|
closed
|
[
"fb-exported",
"Merged",
"ciflow/trunk",
"topic: not user facing",
"module: inductor",
"ciflow/inductor"
] | 12
|
CONTRIBUTOR
|
Differential Revision: D72741938
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
| true
|
2,989,137,073
|
Tune linalg_eigh_cusolver: better heuristic for syevj_batched selection on cuda
|
MauriceDHanisch
|
open
|
[
"triaged",
"open source",
"topic: not user facing"
] | 4
|
NONE
|
This change is not tied to an open issue.
**Summary**
This PR updates the heuristics in linalg_eigh_cusolver for batched matrix diagonalization. The current logic only applies syevj_batched for matrix sizes ≤ 32, which is too conservative. In the favorable regions, this heuristic improves performance by more than 10× compared to the previous default.
**Benchmark**
Benchmarked all three solver variants (`syevd`, `syevj`, `syevj_batched`) across a grid of matrix sizes and batch sizes, for both `float32` and `float64` on CUDA. The results are summarized below:
(See attached 2D heatmaps of absolute and relative timings)
→ Left column shows runtime of syevj_batched
→ Middle and right columns show speedup of syevj / syevd relative to syevj_batched

All benchmarks were run on an NVIDIA RTX 4090 (CUDA 12.2).
The full benchmark setup (script, results, and plotting notebook) is available here:
📂 [benchmarks/batched_eigh](https://github.com/MauriceDHanisch/pytorch/tree/benchmark-cusolver-eigh-methods-c/benchmarks/batched_eigh)
🌿 Branch: [benchmark-cusolver-eigh-methods-c](https://github.com/MauriceDHanisch/pytorch/tree/benchmark-cusolver-eigh-methods-c)
It includes:
- A grid-based benchmark across matrix and batch sizes
- Results saved as .json
- A notebook that produces the 2D plots attached above
**Code Change**
New logic
For `float32`:
- Use `syevj_batched` if:
- `batch > 15 && matrix_size < 512`, or
- `batch ≤ 15 && matrix_size < 100`
- Otherwise fall back to `syevd` for the batched case, or use the existing `syevj` heuristic in the unbatched case.
For `float64`:
- Use `syevj_batched` if `batch > 15 && matrix_size < 256`
- Otherwise use `syevd` for batched
- Unbatched logic unchanged (default to `syevd`)
All uses of `syevj_batched` remain gated behind `use_cusolver_syevj_batched_`.
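For clarity, the proposed selection logic restated as a Python sketch (the real change lives in the C++ cuSOLVER dispatch; the strings below are stand-ins for the kernels):
```python
# Illustrative restatement of the heuristic above; not the actual C++ code.
import torch

def pick_batched_eigh_kernel(batch: int, n: int, dtype: torch.dtype,
                             use_syevj_batched: bool = True) -> str:
    if not use_syevj_batched:
        return "syevd"
    if dtype == torch.float32:
        if (batch > 15 and n < 512) or (batch <= 15 and n < 100):
            return "syevj_batched"
        return "syevd"
    if dtype == torch.float64:
        if batch > 15 and n < 256:
            return "syevj_batched"
        return "syevd"
    return "syevd"

print(pick_batched_eigh_kernel(64, 128, torch.float32))  # -> "syevj_batched"
```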
**Further Notes**
As seen in the plots, the transition boundary is not linear and depends on batch size, matrix size, and dtype. For future flexibility, it may be worth exposing the solver choice to users via an explicit flag or context override.
| true
|
2,989,113,227
|
Remove ls from filesystem base
|
ankitageorge
|
closed
|
[
"oncall: distributed",
"fb-exported",
"Merged",
"ciflow/trunk",
"topic: not user facing",
"release notes: distributed (checkpoint)"
] | 4
|
CONTRIBUTOR
|
Summary: A user reported an issue where they inherit from `filesystembase` but don't have the `ls` method, which was added in the PR https://github.com/pytorch/pytorch/pull/150701#discussion_r2039840129. Remove the method from the base class but keep it in the derived class.
Test Plan: buck test 'fbcode//mode/opt' fbcode//caffe2/test/distributed/checkpoint:test_hf_storage
Differential Revision: D72867722
cc @H-Huang @awgu @wanchaol @fegin @fduwjj @wz337 @wconstab @d4l3k
| true
|
2,989,053,429
|
TESTING: IGNORE
|
zxiiro
|
open
|
[
"open source",
"topic: not user facing"
] | 1
|
COLLABORATOR
|
Fixes #ISSUE_NUMBER
| true
|
2,989,050,735
|
[reland][AOTI] Add protocol field to OSS schema of ExternKernelNodes
|
yiming0416
|
open
|
[
"fb-exported",
"module: inductor",
"ciflow/inductor",
"release notes: export",
"ci-no-td"
] | 2
|
CONTRIBUTOR
|
Summary:
This diff adds a "protocol" field to `ExternKernelNodes` in the OSS AOTI schema.
Test Plan: CI
Differential Revision: D72804878
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
| true
|
2,989,025,497
|
DISABLED test_parity__foreach_acos_fastpath_outplace_cuda_float16 (__main__.TestForeachCUDA)
|
pytorch-bot[bot]
|
open
|
[
"triaged",
"module: flaky-tests",
"skipped",
"module: mta"
] | 3
|
NONE
|
Platforms: linux
This test was disabled because it is failing in CI. See [recent examples](https://hud.pytorch.org/flakytest?name=test_parity__foreach_acos_fastpath_outplace_cuda_float16&suite=TestForeachCUDA&limit=100) and the most recent trunk [workflow logs](https://github.com/pytorch/pytorch/runs/40386187509).
Over the past 3 hours, it has been determined flaky in 3 workflow(s) with 6 failures and 3 successes.
**Debugging instructions (after clicking on the recent samples link):**
DO NOT ASSUME THINGS ARE OKAY IF THE CI IS GREEN. We now shield flaky tests from developers so CI will thus be green but it will be harder to parse the logs.
To find relevant log snippets:
1. Click on the workflow logs linked above
2. Click on the Test step of the job so that it is expanded. Otherwise, the grepping will not work.
3. Grep for `test_parity__foreach_acos_fastpath_outplace_cuda_float16`
4. There should be several instances run (as flaky tests are rerun in CI) from which you can study the logs.
Test file path: `test_foreach.py`
cc @clee2000 @crcrpar @mcarilli @janeyx99
| true
|
2,988,899,885
|
Failure in test_vmap_autograd_grad_nn_functional_conv2d_cpu_float32
|
Flamefire
|
open
|
[
"module: tests",
"triaged",
"module: vmap",
"module: functorch"
] | 3
|
COLLABORATOR
|
### 🐛 Describe the bug
When running the test with `PYTORCH_OPINFO_SAMPLE_INPUT_INDEX=16 python functorch/test_ops.py TestOperatorsCPU.test_vmap_autograd_grad_nn_functional_conv2d_cpu_float32`
it fails with:
```
AssertionError: Tensor-likes are not close!
Mismatched elements: 2 / 144 (1.4%)
Greatest absolute difference: 1.1444091796875e-05 at index (0, 4, 0, 0, 2) (up to 1e-05 allowed)
Greatest relative difference: 2.064850013994146e-05 at index (0, 4, 0, 0, 2) (up to 1.3e-06 allowed)
The failure occurred for item [1]
[...]
Exception: Caused by sample input at index 16: SampleInput(input=Tensor[size=(2, 4, 6, 6), device="cpu", dtype=torch.float32], args=TensorList[Tensor[size=(8, 1, 3, 3), device="cpu", dtype=torch.float32], Tensor[size=(8,), device="cpu", dtype=torch.float32]], kwargs={'groups': '4'}, broadcasts_input=False, name='')
```
This is on an AMD EPYC 7702 64-Core Processor and consistent, i.e. the difference is always the same.
For CUDA the tolerance is already raised to 5e-5, so I'd suggest to do that here too.
Seen on PyTorch 2.6.0 and nightly 2.8.0.dev20250411+cpu
### Versions
PyTorch version: 2.8.0.dev20250411+cpu
Is debug build: False
CUDA used to build PyTorch: None
ROCM used to build PyTorch: N/A
OS: Rocky Linux release 8.9 (Green Obsidian) (x86_64)
GCC version: (GCC) 13.2.0
Clang version: Could not collect
CMake version: version 3.27.6
Libc version: glibc-2.28
Python version: 3.11.5 (main, Mar 27 2024, 15:51:24) [GCC 13.2.0] (64-bit runtime)
Python platform: Linux-4.18.0-513.24.1.el8_9.x86_64-x86_64-with-glibc2.28
Is CUDA available: False
CUDA runtime version: No CUDA
CUDA_MODULE_LOADING set to: N/A
GPU models and configuration: No CUDA
Nvidia driver version: No CUDA
cuDNN version: No CUDA
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architektur: x86_64
CPU Operationsmodus: 32-bit, 64-bit
Adressgrößen: 43 bits physical, 48 bits virtual
Byte-Reihenfolge: Little Endian
CPU(s): 256
Liste der Online-CPU(s): 0-255
Anbieterkennung: AuthenticAMD
Modellname: AMD EPYC 7702 64-Core Processor
Prozessorfamilie: 23
Modell: 49
Thread(s) pro Kern: 2
Kern(e) pro Sockel: 64
Sockel: 2
Stepping: 0
Übertaktung: aktiviert
Skalierung der CPU(s): 69%
Maximale Taktfrequenz der CPU: 2183,5930
Minimale Taktfrequenz der CPU: 1500,0000
BogoMIPS: 4000,22
Markierungen: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ht syscall nx mmxext fxsr_opt pdpe1gb rdtscp lm constant_tsc rep_good nopl nonstop_tsc cpuid extd_apicid aperfmperf pni pclmulqdq monitor ssse3 fma cx16 sse4_1 sse4_2 x2apic movbe popcnt aes xsave avx f16c rdrand lahf_lm cmp_legacy svm extapic cr8_legacy abm sse4a misalignsse 3dnowprefetch osvw ibs skinit wdt tce topoext perfctr_core perfctr_nb bpext perfctr_llc mwaitx cpb cat_l3 cdp_l3 hw_pstate ssbd mba ibrs ibpb stibp vmmcall fsgsbase bmi1 avx2 smep bmi2 cqm rdt_a rdseed adx smap clflushopt clwb sha_ni xsaveopt xsavec xgetbv1 xsaves cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local clzero irperf xsaveerptr wbnoinvd arat npt lbrv svm_lock nrip_save tsc_scale vmcb_clean flushbyasid decodeassists pausefilter pfthreshold avic v_vmsave_vmload vgif v_spec_ctrl umip rdpid overflow_recov succor smca sme sev sev_es
Virtualisierung: AMD-V
L1d Cache: 4 MiB (128 Instanzen)
L1i Cache: 4 MiB (128 Instanzen)
L2 Cache: 64 MiB (128 Instanzen)
L3 Cache: 512 MiB (32 Instanzen)
NUMA-Knoten: 8
NUMA-Knoten0 CPU(s): 0-15,128-143
NUMA-Knoten1 CPU(s): 16-31,144-159
NUMA-Knoten2 CPU(s): 32-47,160-175
NUMA-Knoten3 CPU(s): 48-63,176-191
NUMA-Knoten4 CPU(s): 64-79,192-207
NUMA-Knoten5 CPU(s): 80-95,208-223
NUMA-Knoten6 CPU(s): 96-111,224-239
NUMA-Knoten7 CPU(s): 112-127,240-255
Schwachstelle Gather data sampling: Not affected
Schwachstelle Itlb multihit: Not affected
Schwachstelle L1tf: Not affected
Schwachstelle Mds: Not affected
Schwachstelle Meltdown: Not affected
Schwachstelle Mmio stale data: Not affected
Schwachstelle Retbleed: Mitigation; untrained return thunk; SMT enabled with STIBP protection
Schwachstelle Spec rstack overflow: Mitigation; Safe RET
Schwachstelle Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl
Schwachstelle Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Schwachstelle Spectre v2: Mitigation; Retpolines, IBPB conditional, STIBP always-on, RSB filling, PBRSB-eIBRS Not affected
Schwachstelle Srbds: Not affected
Schwachstelle Tsx async abort: Not affected
Versions of relevant libraries:
[pip3] numpy==1.26.2
[pip3] nvidia-cublas-cu12==12.4.5.8
[pip3] nvidia-cuda-cupti-cu12==12.4.127
[pip3] nvidia-cuda-nvrtc-cu12==12.4.127
[pip3] nvidia-cuda-runtime-cu12==12.4.127
[pip3] nvidia-cudnn-cu12==9.1.0.70
[pip3] nvidia-cufft-cu12==11.2.1.3
[pip3] nvidia-curand-cu12==10.3.5.147
[pip3] nvidia-cusolver-cu12==11.6.1.9
[pip3] nvidia-cusparse-cu12==12.3.1.170
[pip3] nvidia-cusparselt-cu12==0.6.2
[pip3] nvidia-nccl-cu12==2.21.5
[pip3] nvidia-nvjitlink-cu12==12.4.127
[pip3] nvidia-nvtx-cu12==12.4.127
[pip3] optree==0.13.0
[pip3] torch==2.8.0.dev20250411+cpu
[pip3] triton==3.2.0
[conda] Could not collect
cc @mruberry @ZainRizvi @zou3519 @Chillee @samdow @kshitij12345
| true
|
2,988,894,730
|
[Quant][X86] add an op to compute uint8 pointwise mul
|
Xia-Weiwen
|
closed
|
[
"module: cpu",
"open source",
"Merged",
"ciflow/trunk",
"release notes: quantization",
"intel"
] | 5
|
COLLABORATOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #151112
**Summary**
Add a new op, `onednn.qmul.tensor`, for int8 elementwise mul, which accepts inputs on CPU device (instead of QuantizedCPU).
The new op is implemented by AVX512 instructions and it provides similar or better performance, depending on shape, than its counterpart for QuantizedCPU device `quantized.mul`.
The new op supports output dtypes other than uint8 (fp32, fp16 and bf16 are supported).
**Test plan**
```
pytest test/quantization/core/test_quantized_op.py -k test_int8_mul_onednn
```
cc @jgong5 @mingfeima @XiaobingSuper @sanchitintel @ashokei @jingxu10 @jerryzh168
| true
|
2,988,863,344
|
[Intel GPU] skip a cuda api call in amp to save some host overhead on xpu
|
jianyizh
|
closed
|
[
"triaged",
"open source",
"module: amp (automated mixed precision)",
"Merged",
"ciflow/trunk",
"topic: not user facing"
] | 17
|
CONTRIBUTOR
|
This can save ~0.2 ms on non-CUDA devices by skipping the call to `amp_definitely_not_available()`. It improves small models in torchbench, like lennard_jones, by 10% on xpu for both eager and inductor in the dynamo benchmarks.
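A rough way to see the host-side overhead being trimmed (a machine-dependent micro-benchmark, illustrative only):
```python
# Entering/exiting autocast on a non-CUDA device is pure host-side dispatch cost,
# so shaving a CUDA-availability query off that path helps small models most.
# The numbers printed here depend entirely on the machine.
import time
import torch

x = torch.randn(8, 8)

start = time.perf_counter()
for _ in range(10_000):
    with torch.autocast(device_type="cpu", dtype=torch.bfloat16):
        y = x @ x
elapsed = time.perf_counter() - start
print(f"avg per-iteration time: {elapsed / 10_000 * 1e6:.2f} us")
```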
cc @mcarilli @ptrblck @leslie-fang-intel @jgong5
| true
|
2,988,837,503
|
Update epsilon logic to improve numerical stability
|
albanD
|
open
|
[
"module: numerical-stability",
"module: bc-breaking",
"module: autograd",
"triaged"
] | 5
|
COLLABORATOR
|
Many operations in PyTorch use a variety of "epsilon" to ensure numerical stability or avoid infinite value.
This is used in particular for normalization functions batch/rms/layer norm and optimizers.
These epsilons are usually added following paper formulas or historical techniques. To improve uniformity, I would suggest we enforce the following rules:
- the epsilon type in user API should always be `Optional[float]`/`std::optional<double>`.
- the default epsilon value should always be based off `torch.finfo(torch_tp).eps`/`std::numeric_limits<tp>::epsilon()` with tp being the type of the Tensor that needs to be guaranteed to be `>0`, to ensure we have an epsilon of appropriate scale.
- epsilon should be applied via `max( . , eps)` and NOT `+ eps`. This has two main advantages: it ensures no numerical change when not close to 0, and it ensures that the result can never be `0` (see below).
- For operations with implicit autograd formula, this max() should not stop the gradients from flowing and propagate gradients as if `+ eps` was used.
Curious in particular @rgommers if you have opinions on this?
> it ensures that the result can never be `0`
This can sound like a surprising statement, as the value being added is always mathematically guaranteed to be positive. Unfortunately, we've had a couple of examples where something mathematically true can be broken by floating point (https://github.com/pytorch/pytorch/issues/29442 would be the example we looked into in the most detail).
We've also observed very rare NaNs in some very rare cases with some normalization functions which I suspect are also related.
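A minimal sketch of the proposed max-based clamping versus the additive epsilon, in an RMS-norm-like denominator (illustrative only; it does not model the gradient-propagation point above):
```python
import torch

def rms_denominator_add(x: torch.Tensor, eps: float = 1e-6) -> torch.Tensor:
    # Conventional form: always perturbs the result, even far from zero.
    return torch.sqrt(x.pow(2).mean(dim=-1, keepdim=True) + eps)

def rms_denominator_max(x: torch.Tensor, eps=None) -> torch.Tensor:
    if eps is None:
        eps = torch.finfo(x.dtype).eps  # scale-appropriate default, as suggested above
    # Proposed form: only kicks in when the mean of squares drops below eps.
    return torch.sqrt(torch.clamp_min(x.pow(2).mean(dim=-1, keepdim=True), eps))

x = torch.randn(4, 16)
exact = torch.sqrt(x.pow(2).mean(dim=-1, keepdim=True))
print(torch.equal(rms_denominator_max(x), exact))  # True: bit-exact away from zero
print(torch.equal(rms_denominator_add(x), exact))  # typically False: +eps always perturbs
```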
| true
|
2,988,820,582
|
[AOTI][reland] Remove typedef for half and bfloat16
|
desertfire
|
open
|
[
"Merged",
"Reverted",
"ciflow/trunk",
"topic: not user facing",
"module: inductor",
"ciflow/inductor",
"ci-no-td",
"release notes: inductor (aoti)"
] | 11
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #151109
Summary: Reland https://github.com/pytorch/pytorch/pull/150657
typedef is prone to name collision. Explicitly spell out the actual aten types, needed for the libtorch-free codegen.
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
Differential Revision: [D72878456](https://our.internmc.facebook.com/intern/diff/D72878456)
| true
|
2,988,624,263
|
add sbgemv dispatch in torch cpu flash attention
|
taoye9
|
open
|
[
"triaged",
"open source",
"module: arm",
"topic: not user facing"
] | 10
|
NONE
|
# Summary
This PR introduces a dispatch to the OpenBLAS sbgemv kernel in PyTorch CPU Flash Attention kernel when the query sequence length is 1.
# Motivation
During the decoding phase in transformer models (e.g., for autoregressive inference), the shape of the query tensor often has sequence length = 1. Currently, this leads to dispatching A(m, k) * B(k, n) into the general sbgemm kernel, even when the operation is effectively a matrix-vector multiplication. This PR optimizes such cases by dispatching to sbgemv, which is better suited and shows measurable performance improvements.
# Heuristic Consideration
Our heuristic ensures that the matmul is dispatched to sbgemv only when matrix A is multiplied by a vector B, which is the intended use case for GEMV operations. Also we limit the dispatch to transb == NoTranspose because when transb == Transpose, the leading dimension (lda) might not be 1. This causes the sbgemv kernel to handle non-contiguous memory, which performs poorly.
# Benchmark result
Benchmarked using `torch.nn.functional.scaled_dot_product_attention` on **Neoverse™ V1**.
**Configuration:**
- `OMP_NUM_THREADS=16`
- Tensor shapes:
- Query: `[1, 16, 1, 32]`
- Key: `[1, 16, 1500, 32]`
- Value: `[1, 16, 1500, 32]`
**Results:**
| Kernel | Latency (µs) | Speedup |
|----------|--------------|---------|
| `sbgemm` | 121.700 | — |
| `sbgemv` | 104.663 | ~16% |
# Benchmark script
```
import torch
import time
import numpy as np
import math
from torch.profiler import profile, record_function, ProfilerActivity

class SimpleAttentionModel(torch.nn.Module):
    def __init__(self, query, key, value):
        super(SimpleAttentionModel, self).__init__()
        self.query = query
        self.key = key
        self.value = value

    def forward(self, attn_mask=None):
        torch.nn.functional.scaled_dot_product_attention(
            self.query,
            self.key,
            self.value,
            attn_mask=attn_mask)

# implementation run for BertSdpaSelfAttention
def bench_sdpa(batch_size=1, num_attention_heads=16, sequence_length=142, query_sequence_length=142, hidden_size=1024, precision=torch.float32):
    with torch.no_grad():
        attention_head_size = int(hidden_size / num_attention_heads)
        query = torch.rand(size=(batch_size, num_attention_heads, query_sequence_length, attention_head_size), dtype=precision)
        key = torch.rand(size=(batch_size, num_attention_heads, sequence_length, attention_head_size), dtype=precision)
        value = torch.rand(size=(batch_size, num_attention_heads, sequence_length, attention_head_size), dtype=precision)
        model = SimpleAttentionModel(query, key, value)
        model.eval()
        #model = torch.nn.utils.pack_linear.pack_linear_weights(model)
        for _ in range(100):
            model()

        times = []
        n_iters = 10000
        for _ in range(n_iters):
            s = time.time_ns()
            model()
            times.append((time.time_ns() - s) / 1e3)
        min_times = np.min(times)
        mean_times = np.mean(times)
        print(f"Min Times = {min_times} us")
        print(f"Mean Times = {mean_times} us")
        # print("Times = ", times)

if __name__ == "__main__":
    batch_size = 1
    num_attention_heads = 16
    sequence_length = 1500
    query_sequence_length = 1
    hidden_size = 512
    print("BF16 mode:")
    with profile(activities=[ProfilerActivity.CPU], record_shapes=True) as prof:
        with record_function("model_inference"):
            bench_sdpa(batch_size=batch_size, num_attention_heads=num_attention_heads, sequence_length=sequence_length, query_sequence_length=query_sequence_length, hidden_size=hidden_size, precision=torch.bfloat16)
    profile_data = prof.key_averages(group_by_input_shape=True).table(sort_by="cpu_time_total")
    print(profile_data)
```
cc @malfet @snadampal @milpuz01 @aditew01 @nikhil-arm @fadara01
| true
|
2,988,469,718
|
[HOP] Reworked DispatchKey.Autograd
|
bohnstingl
|
closed
|
[
"open source",
"Merged",
"ciflow/trunk",
"topic: not user facing",
"module: dynamo"
] | 7
|
COLLABORATOR
|
This PR intends to rework the dispatching of the autograd key.
That is, currently the DispatchKey.Autograd of the HOPs was triggered even if none of the operands of the HOP have `requires_grad=True`. With this rework, autograd is bypassed if none of the operands require gradients and is only invoked if any of the operands require gradients.
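A hedged sketch of the dispatch idea (not the actual HOP machinery):
```python
# Only route through the autograd implementation when at least one tensor
# operand requires grad; otherwise the autograd key can be bypassed.
import torch
import torch.utils._pytree as pytree

def needs_autograd(*args, **kwargs) -> bool:
    leaves, _ = pytree.tree_flatten((args, kwargs))
    return any(isinstance(t, torch.Tensor) and t.requires_grad for t in leaves)

a = torch.randn(3)
b = torch.randn(3, requires_grad=True)
print(needs_autograd(a, a))  # False -> bypass the autograd key
print(needs_autograd(a, b))  # True  -> invoke the autograd implementation
```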
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @chauhang @amjames @ydwu4
| true
|
2,988,446,905
|
distributed/tensor/_op_schema has_symints does not check args_schema
|
IvanKobzarev
|
open
|
[
"oncall: distributed",
"triaged",
"oncall: pt2",
"module: dynamic shapes",
"module: dtensor"
] | 1
|
CONTRIBUTOR
|
### 🐛 Describe the bug
distributed/tensor/_op_schema has_symints does not check args_schema
Then `hash()`, which reduces over the args, will fail with the error `TypeError('unhashable type: non-nested SymInt')`.
Potential fix:
```
└─ $ git diff
diff --git a/torch/distributed/tensor/_op_schema.py b/torch/distributed/tensor/_op_schema.py
index 27672206a8d..c841d6a7520 100644
--- a/torch/distributed/tensor/_op_schema.py
+++ b/torch/distributed/tensor/_op_schema.py
@@ -306,11 +306,15 @@ class OpSchema:
     def __post_init__(self) -> None:
         has_symints = False
-        for a in self.args_schema:
+        from torch.types import py_sym_types
+        for a in tree_leaves(self.args_schema):
             if isinstance(a, DTensorSpec) and a.tensor_meta is not None:
                 if any(isinstance(s, torch.SymInt) for s in a.tensor_meta.shape):
                     has_symints = True
                     break
+            elif isinstance(a, py_sym_types):
+                has_symints = True
+                break
         self.has_symints = has_symints
```
### Error logs
```
[rank0]: File "/data/users/ivankobzarev/a/pytorch/torch/_dynamo/symbolic_convert.py", line 2284, in CALL_FUNCTION_KW
[rank0]: self.call_function(fn, args, kwargs)
[rank0]: File "/data/users/ivankobzarev/a/pytorch/torch/_dynamo/symbolic_convert.py", line 1178, in call_function
[rank0]: self.push(fn.call_function(self, args, kwargs)) # type: ignore[arg-type]
[rank0]: File "/data/users/ivankobzarev/a/pytorch/torch/_dynamo/variables/torch.py", line 1258, in call_function
[rank0]: tensor_variable = wrap_fx_proxy(
[rank0]: File "/data/users/ivankobzarev/a/pytorch/torch/_dynamo/variables/builder.py", line 2362, in wrap_fx_proxy
[rank0]: return wrap_fx_proxy_cls(target_cls=TensorVariable, **kwargs)
[rank0]: File "/data/users/ivankobzarev/a/pytorch/torch/_dynamo/variables/builder.py", line 2428, in wrap_fx_proxy_cls
[rank0]: return _wrap_fx_proxy(
[rank0]: File "/data/users/ivankobzarev/a/pytorch/torch/_dynamo/variables/builder.py", line 2526, in _wrap_fx_proxy
[rank0]: example_value = get_fake_value(proxy.node, tx, allow_non_graph_fake=True)
[rank0]: File "/data/users/ivankobzarev/a/pytorch/torch/_dynamo/utils.py", line 3269, in get_fake_value
[rank0]: raise TorchRuntimeError(str(e)).with_traceback(e.__traceback__) from None
[rank0]: File "/data/users/ivankobzarev/a/pytorch/torch/_dynamo/utils.py", line 3167, in get_fake_value
[rank0]: ret_val = wrap_fake_exception(
[rank0]: File "/data/users/ivankobzarev/a/pytorch/torch/_dynamo/utils.py", line 2681, in wrap_fake_exception
[rank0]: return fn()
[rank0]: File "/data/users/ivankobzarev/a/pytorch/torch/_dynamo/utils.py", line 3168, in <lambda>
[rank0]: lambda: run_node(tx.output, node, args, kwargs, nnmodule)
[rank0]: File "/data/users/ivankobzarev/a/pytorch/torch/_dynamo/utils.py", line 3365, in run_node
[rank0]: raise RuntimeError(make_error_message(e)).with_traceback(
[rank0]: File "/data/users/ivankobzarev/a/pytorch/torch/_dynamo/utils.py", line 3324, in run_node
[rank0]: return node.target(*args, **kwargs)
[rank0]: File "/data/users/ivankobzarev/a/pytorch/torch/functional.py", line 222, in split
[rank0]: return tensor.split(split_size_or_sections, dim)
[rank0]: File "/data/users/ivankobzarev/a/pytorch/torch/_tensor.py", line 1053, in split
[rank0]: return torch._VF.split_with_sizes(self, split_size, dim)
[rank0]: File "/data/users/ivankobzarev/a/pytorch/torch/_compile.py", line 51, in inner
[rank0]: return disable_fn(*args, **kwargs)
[rank0]: File "/data/users/ivankobzarev/a/pytorch/torch/_dynamo/eval_frame.py", line 850, in _fn
[rank0]: return fn(*args, **kwargs)
[rank0]: File "/data/users/ivankobzarev/a/pytorch/torch/distributed/tensor/_api.py", line 350, in __torch_dispatch__
[rank0]: return DTensor._op_dispatcher.dispatch(
[rank0]: File "/data/users/ivankobzarev/a/pytorch/torch/distributed/tensor/_dispatch.py", line 160, in dispatch
[rank0]: self.sharding_propagator.propagate(op_info)
[rank0]: File "/data/users/ivankobzarev/a/pytorch/torch/distributed/tensor/_sharding_prop.py", line 266, in propagate
[rank0]: OutputSharding, self.propagate_op_sharding(op_info.schema)
[rank0]: File "/data/users/ivankobzarev/a/pytorch/torch/distributed/tensor/_sharding_prop.py", line 45, in __call__
[rank0]: return self.cache(*args, **kwargs)
[rank0]: File "/data/users/ivankobzarev/a/pytorch/torch/distributed/tensor/_op_schema.py", line 399, in __hash__
[rank0]: return hash((self.op, args_to_hash))
[rank0]: File "/data/users/ivankobzarev/a/pytorch/torch/__init__.py", line 595, in __hash__
[rank0]: raise TypeError("unhashable type: non-nested SymInt")
[rank0]: torch._dynamo.exc.TorchRuntimeError: Dynamo failed to run FX node with fake tensors: call_function <function split at 0x7f98a6c5ee60>(*(DTensor(local_tensor=FakeTensor(..., device='cuda:0', size=(118, 5120), dtype=torch.bfloat16,
[rank0]: requires_grad=True), device_mesh=DeviceMesh('cuda', [0, 1], mesh_dim_names=('tp',)), placements=(Replicate(),)),), **{'split_size_or_sections': [u0, u1, u2, u3, u4, u5, u6, u7, u8, u9, u10, u11, u12, u13, u14, u15], 'dim': 0}): got TypeError('unhashable type: non-nested SymInt')
[rank0]: from user code:
[rank0]: File "/home/ivankobzarev/github/torchtune/torchtune/modules/transformer.py", line 135, in torch_dynamo_resume_in_forward_at_130
[rank0]: mlp_out = self.mlp(self.mlp_norm(h))
[rank0]: File "/data/users/ivankobzarev/a/pytorch/torch/nn/modules/module.py", line 1854, in _call_impl
[rank0]: return inner()
[rank0]: File "/data/users/ivankobzarev/a/pytorch/torch/nn/modules/module.py", line 1805, in inner
[rank0]: result = forward_call(*args, **kwargs)
[rank0]: File "/home/ivankobzarev/github/torchtune/torchtune/modules/moe/moe.py", line 137, in forward
[rank0]: routed_output = self.experts(routed_input, num_tokens_per_expert)
[rank0]: File "/data/users/ivankobzarev/a/pytorch/torch/nn/modules/module.py", line 1854, in _call_impl
[rank0]: return inner()
[rank0]: File "/data/users/ivankobzarev/a/pytorch/torch/nn/modules/module.py", line 1805, in inner
[rank0]: result = forward_call(*args, **kwargs)
[rank0]: File "/data/users/ivankobzarev/a/pytorch/torch/utils/_contextlib.py", line 116, in decorate_context
[rank0]: return func(*args, **kwargs)
[rank0]: File "/home/ivankobzarev/github/torchtune/torchtune/modules/moe/experts.py", line 61, in forward
[rank0]: x = torch.split(
[rank0]: Set TORCHDYNAMO_VERBOSE=1 for the internal stack trace (please do this especially if you're reporting a bug to PyTorch). For even more developer context, set TORCH_LOGS="+dynamo"
```
### Versions
pytorch main apr 11
cc @H-Huang @awgu @wanchaol @fegin @fduwjj @wz337 @wconstab @d4l3k @chauhang @penguinwu @ezyang @bobrenjc93 @tianyu-l @XilunWu
| true
|
2,988,351,869
|
Whether `x` and `dx` can be used together in `torch.trapezoid()`?
|
ILCSFNO
|
closed
|
[
"module: docs",
"triaged",
"actionable",
"module: python frontend"
] | 4
|
CONTRIBUTOR
|
### 🐛 Describe the bug
The doc of [torch.trapezoid()](https://pytorch.org/docs/stable/generated/torch.trapezoid.html#torch-trapezoid) shows its description as below:
https://github.com/pytorch/pytorch/blob/d94cc0e9942087180305061dd693adff93448d2e/torch/_torch_docs.py#L12698-L12807
From the document, I can't find any conflict between `x` and `dx`, and its definition also shows that:
```text
trapezoid(y, x=None, *, dx=None, dim=-1) -> Tensor
```
But in Repro:
### Repro
```python
import torch
x = torch.randn(5, 5)
y = torch.randn(5, 5)
result = torch.trapezoid(y, x=x, dx=0.01)
```
### Result
```text
TypeError: trapezoid() received an invalid combination of arguments - got (Tensor, dx=float, x=Tensor), but expected one of:
* (Tensor y, Tensor x, *, int dim = -1)
* (Tensor y, *, Number dx = 1, int dim = -1)
```
The signature shows that `x` and `dx` can't be used together.
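For reference, the two overloads in the error message do run fine when used separately:
```python
# The two accepted call forms, matching the overloads in the error message.
import torch

y = torch.randn(5, 5)
x = torch.randn(5, 5)

torch.trapezoid(y, x=x)      # sample points given explicitly
torch.trapezoid(y, dx=0.01)  # uniform spacing given instead
```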
Which one is correct? I wonder.
Thanks for noting!
### Versions
Nightly
cc @svekars @sekyondaMeta @AlannaBurke @albanD
| true
|
2,988,326,730
|
Optional tag like `keepdim` may have been removed in several funcs
|
ILCSFNO
|
closed
|
[
"module: docs",
"triaged",
"actionable",
"module: python frontend"
] | 1
|
CONTRIBUTOR
|
### 📚 The doc issue
As seen in #146156 and its PR [pull#146485](https://github.com/pytorch/pytorch/pull/146485), some other functions may also have had their optional tags removed.
What I encountered is `torch.any()`, but I suggest fixing this by changing the description of `keepdim` and any other similar arguments, since there may be more functions with the same problem.
Note that there may be some functions where `keepdim` or other similar arguments are not optional, so changing their descriptions carries some risk.
https://github.com/pytorch/pytorch/blob/d94cc0e9942087180305061dd693adff93448d2e/torch/_torch_docs.py#L852-L880
While `keepdim` is showed here:
https://github.com/pytorch/pytorch/blob/d94cc0e9942087180305061dd693adff93448d2e/torch/_torch_docs.py#L55-L61
https://github.com/pytorch/pytorch/blob/d94cc0e9942087180305061dd693adff93448d2e/torch/_torch_docs.py#L43-L52
Some similar arguments may include `dim`, etc.
Suggest to solve it together with #146156
### Suggest a potential alternative/fix
I suggest either option below:
* Change the shared description of `keepdim` and other similar arguments at its origin (risky)
* Change the description of `keepdim` and other similar arguments one by one (takes a long time to fix)
Thanks for noting!
cc @svekars @sekyondaMeta @AlannaBurke @albanD
| true
|
2,988,266,905
|
Whether `recompute_scale_factor=True` needs `scale_factor` passed in or not in `torch.nn.Upsample()`?
|
ILCSFNO
|
closed
|
[
"module: nn",
"triaged"
] | 3
|
CONTRIBUTOR
|
### 🐛 Describe the bug
The doc of [torch.nn.Upsample()](https://pytorch.org/docs/stable/generated/torch.nn.Upsample.html#upsample) shows its argument as below:
https://github.com/pytorch/pytorch/blob/d94cc0e9942087180305061dd693adff93448d2e/torch/nn/modules/upsampling.py#L41-L48
But in repro:
### Minified Repro
```python
import torch
m = torch.nn.Upsample(size=(2, 2), recompute_scale_factor=True)
print(m)
```
### Output
```text
Upsample(size=(2, 2), mode='nearest')
```
In this case, `recompute_scale_factor` is set to `True` while `scale_factor` is not passed in, yet it runs fine.
This contradicts:
```text
If `recompute_scale_factor` is ``True``, then `scale_factor` must be passed in and `scale_factor` is used to compute the output `size`.
```
I wonder whether passing `scale_factor` is really necessary when `recompute_scale_factor` is set to `True`.
This may come down to either:
* the doc not expressing the requirement properly, or
* the code not following the doc.
### Versions
Nightly
cc @albanD @mruberry @jbschlosser @walterddr @mikaylagawarecki
| true
|
2,988,227,690
|
[BE] detect CXX pytree requirement with `TorchVersion`
|
XuehaiPan
|
closed
|
[
"open source",
"better-engineering",
"Merged",
"ciflow/trunk",
"topic: not user facing"
] | 4
|
COLLABORATOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #148328
* #148180
* #137400
* #152624
* __->__ #151102
| true
|
2,988,193,590
|
The `size` of `x` can have more dims in `torch.cdist()`
|
ILCSFNO
|
closed
|
[
"module: docs",
"triaged",
"actionable",
"module: python frontend"
] | 2
|
CONTRIBUTOR
|
### 🐛 Describe the bug
The doc of [torch.cdist()](https://pytorch.org/docs/stable/generated/torch.cdist.html) shows its description as below:
https://github.com/pytorch/pytorch/blob/d94cc0e9942087180305061dd693adff93448d2e/torch/functional.py#L1462-L1464
But the repro below runs fine, even though the `size` of x2 contradicts the documentation above:
### Repro 1
```python
import torch
B, P, R, M = 1, 10, 10, 5
x1 = torch.randn(B, P, M, dtype=torch.float32)
x2 = torch.randn(B, R, M, R, M, dtype=torch.float32)
dist = torch.cdist(x1, x2)
print(x1.shape, x2.shape, dist.shape)
```
### Output 1
```text
torch.Size([1, 10, 5]) torch.Size([1, 10, 5, 10, 5]) torch.Size([1, 10, 5, 10, 10])
```
In this case, x2 can have not only 3 dimensions but also 5 or even more.
I wonder whether:
* this is unexpected behavior,
* the size mismatch is insufficiently checked, or
* the description in the document is inaccurate.
Further, note that both `x1` and `x2` can have more dimensions in practice; see the repro below:
### Repro 2
```python
import torch
B, P, R, M = 1, 10, 10, 5
x1 = torch.randn(B, P, M, R, M, dtype=torch.float32)
x2 = torch.randn(B, R, M, R, M, dtype=torch.float32)
dist = torch.cdist(x1, x2)
print(x1.shape, x2.shape, dist.shape)
```
### Output 2
```text
torch.Size([1, 10, 5, 10, 5]) torch.Size([1, 10, 5, 10, 5]) torch.Size([1, 10, 5, 10, 10])
```
Thanks for noting.
### Versions
Nightly
cc @svekars @sekyondaMeta @AlannaBurke @albanD
| true
|
2,988,185,250
|
Turn MemPool into a C++ custom class
|
lw
|
open
|
[] | 2
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #150564
* __->__ #151100
* #150684
* #150683
| true
|
2,988,133,285
|
DISABLED test_parity__foreach_acos_fastpath_outplace_cuda_complex64 (__main__.TestForeachCUDA)
|
pytorch-bot[bot]
|
open
|
[
"triaged",
"module: flaky-tests",
"skipped",
"module: mta"
] | 4
|
NONE
|
Platforms: linux
This test was disabled because it is failing in CI. See [recent examples](https://hud.pytorch.org/flakytest?name=test_parity__foreach_acos_fastpath_outplace_cuda_complex64&suite=TestForeachCUDA&limit=100) and the most recent trunk [workflow logs](https://github.com/pytorch/pytorch/runs/40370997282).
Over the past 3 hours, it has been determined flaky in 8 workflow(s) with 16 failures and 8 successes.
**Debugging instructions (after clicking on the recent samples link):**
DO NOT ASSUME THINGS ARE OKAY IF THE CI IS GREEN. We now shield flaky tests from developers so CI will thus be green but it will be harder to parse the logs.
To find relevant log snippets:
1. Click on the workflow logs linked above
2. Click on the Test step of the job so that it is expanded. Otherwise, the grepping will not work.
3. Grep for `test_parity__foreach_acos_fastpath_outplace_cuda_complex64`
4. There should be several instances run (as flaky tests are rerun in CI) from which you can study the logs.
Test file path: `test_foreach.py`
cc @clee2000 @crcrpar @mcarilli @janeyx99
| true
|
2,988,027,356
|
precision error for attention-like operation
|
syheliel
|
open
|
[
"high priority",
"triaged",
"oncall: pt2",
"module: inductor"
] | 5
|
NONE
|
### 🐛 Describe the bug
tlparse file: [dedicated_log_torch_trace_ve9nm2rv.log](https://github.com/user-attachments/files/19702029/dedicated_log_torch_trace_ve9nm2rv.log)
The results before and after torch.compile differ significantly:
```
Maximum error between out_normal and out_opt: 2.276662826538086 # first try
Maximum error between out_normal and out_opt: 2.2177655696868896 # second try
Maximum error between out_normal and out_opt: 2.2525100708007812 # third try
```
Here is the source; it may be caused by the inductor/sfdp optimization:
```python
import torch
class Model(torch.nn.Module):
def __init__(self):
super().__init__()
self.inv_scale = 1.0 / 8 ** 0.5
self.query = torch.nn.Linear(64, 64)
self.key = torch.nn.Linear(64, 64)
self.value = torch.nn.Linear(64, 64)
def forward(self, x1, attn_mask):
q = self.query(x1).permute([0, 2, 1, 3])
k = self.key(x1).permute([0, 2, 1, 3])
v = self.value(x1).permute([0, 2, 1, 3])
t1 = torch.matmul(q, k.transpose(-2, -1))
t2 = t1.div(self.inv_scale)
t3 = t2 + attn_mask
t4 = t3.softmax(dim=-1)
t5 = t4.matmul(v)
return t5
func = Model().to('cpu')
x1 = torch.randn(1, 16, 64, 64)
attn_mask = torch.zeros(1, 1, 16, 16)
test_inputs = [x1, attn_mask]
opt = torch.compile(func)
out_normal = func(x1, attn_mask)
out_opt = opt(x1, attn_mask)
# Calculate maximum error between out_normal and out_opt
max_error = torch.max(torch.abs(out_normal - out_opt))
print(f"Maximum error between out_normal and out_opt: {max_error.item()}")
```
### Error logs
_No response_
### Versions
```
Collecting environment information...
PyTorch version: 2.6.0+cu124
Is debug build: False
CUDA used to build PyTorch: 12.4
ROCM used to build PyTorch: N/A
OS: Ubuntu 22.04.3 LTS (x86_64)
GCC version: (Ubuntu 11.4.0-1ubuntu1~22.04) 11.4.0
Clang version: 14.0.0-1ubuntu1.1
CMake version: version 3.22.1
Libc version: glibc-2.35
Python version: 3.10.12 (main, Feb 4 2025, 14:57:36) [GCC 11.4.0] (64-bit runtime)
Python platform: Linux-5.15.167.4-microsoft-standard-WSL2-x86_64-with-glibc2.35
Is CUDA available: False
CUDA runtime version: No CUDA
CUDA_MODULE_LOADING set to: N/A
GPU models and configuration: No CUDA
Nvidia driver version: No CUDA
cuDNN version: No CUDA
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 46 bits physical, 48 bits virtual
Byte Order: Little Endian
CPU(s): 18
On-line CPU(s) list: 0-17
Vendor ID: GenuineIntel
Model name: Intel(R) Core(TM) Ultra 5 125H
CPU family: 6
Model: 170
Thread(s) per core: 2
Core(s) per socket: 9
Socket(s): 1
Stepping: 4
BogoMIPS: 5990.39
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ss ht syscall nx pdpe1gb rdtscp lm constant_tsc rep_good nopl xtopology tsc_reliable nonstop_tsc cpuid pni pclmulqdq vmx ssse3 fma cx16 pcid sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand hypervisor lahf_lm abm 3dnowprefetch invpcid_single ssbd ibrs ibpb stibp ibrs_enhanced tpr_shadow vnmi ept vpid ept_ad fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid rdseed adx smap clflushopt clwb sha_ni xsaveopt xsavec xgetbv1 xsaves avx_vnni umip waitpkg gfni vaes vpclmulqdq rdpid movdiri movdir64b fsrm md_clear serialize flush_l1d arch_capabilities
Virtualization: VT-x
Hypervisor vendor: Microsoft
Virtualization type: full
L1d cache: 432 KiB (9 instances)
L1i cache: 576 KiB (9 instances)
L2 cache: 18 MiB (9 instances)
L3 cache: 18 MiB (1 instance)
Vulnerability Gather data sampling: Not affected
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Not affected
Vulnerability Reg file data sampling: Not affected
Vulnerability Retbleed: Mitigation; Enhanced IBRS
Vulnerability Spec rstack overflow: Not affected
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl and seccomp
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Enhanced / Automatic IBRS; IBPB conditional; RSB filling; PBRSB-eIBRS SW sequence; BHI BHI_DIS_S
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Versions of relevant libraries:
[pip3] intel_extension_for_pytorch==2.6.0
[pip3] mypy-extensions==1.0.0
[pip3] numpy==2.2.4
[pip3] nvidia-cublas-cu12==12.4.5.8
[pip3] nvidia-cuda-cupti-cu12==12.4.127
[pip3] nvidia-cuda-nvrtc-cu12==12.4.127
[pip3] nvidia-cuda-runtime-cu12==12.4.127
[pip3] nvidia-cudnn-cu12==9.1.0.70
[pip3] nvidia-cufft-cu12==11.2.1.3
[pip3] nvidia-curand-cu12==10.3.5.147
[pip3] nvidia-cusolver-cu12==11.6.1.9
[pip3] nvidia-cusparse-cu12==12.3.1.170
[pip3] nvidia-cusparselt-cu12==0.6.2
[pip3] nvidia-nccl-cu12==2.21.5
[pip3] nvidia-nvjitlink-cu12==12.4.127
[pip3] nvidia-nvtx-cu12==12.4.127
[pip3] onnxruntime==1.21.0
[pip3] torch==2.6.0
[pip3] triton==3.2.0
[conda] Could not collect
```
cc @ezyang @gchanan @zou3519 @kadeng @msaroufim @chauhang @penguinwu @voznesenskym @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @chenyang78 @muchulee8 @amjames @aakhundov
| true
|
2,987,951,350
|
[XPU] Upgrade the XPU support packages version to 2025.1 in CI/CD
|
chuanqi129
|
open
|
[
"triaged",
"module: xpu"
] | 0
|
COLLABORATOR
|
The XPU support package [deep learning essential 2025.1](https://www.intel.com/content/www/us/en/developer/tools/oneapi/base-toolkit-download.html?packages=dl-essentials&dl-essentials-os=linux&dl-lin=offline) has been publicly released, so we should upgrade CI/CD to it.
- [ ] **Dependencies**
- - [x] Land https://github.com/pytorch/kineto/pull/1066 for PTI 0.12.0 new interface in 2025.1.1
- - [ ] Update Kineto submodule version to include this change #152007
- - [ ] Enable XCCL build in CI by https://github.com/pytorch/pytorch/pull/150927
- - [ ] Enable oneMKL build for XPU in CI
- [ ] **Step 1.** Upgrade 2025.1.1
- - [ ] Upgrade 2025.1.0 CI test both for Linux & Windows and add new docker image `pytorch-linux-jammy-xpu-2025.1-py3` for Linux #151899
- - [ ] Upgrade CD build with 2025.1.0 and add new runtime Pypi packages dependencies both for Linux & Windows CD whls #151899
- - [ ] pytorch/test-infra repo update https://github.com/pytorch/test-infra/pull/6553
- - [ ] pytorch libs repo update if need
cc @gujinghui @EikanWang @fengyuan14 @guangyey
| true
|
2,987,640,355
|
[Intel GPU][Windows] test_xpu.py::TestXpuXPU::test_lazy_init_xpu - subprocess.CalledProcessError
|
LuFinch
|
open
|
[
"triaged",
"module: xpu"
] | 0
|
CONTRIBUTOR
|
### 🐛 Describe the bug
When running UT test/test_xpu.py::TestXpuXPU::test_lazy_init_xpu on Windows, it fails with
```
File "C:\ProgramData\miniforge3\envs\lfq\lib\multiprocessing\spawn.py", line 116, in spawn_main
exitcode = _main(fd, parent_sentinel)
File "C:\ProgramData\miniforge3\envs\lfq\lib\multiprocessing\spawn.py", line 126, in _main
self = reduction.pickle.load(from_parent)
AttributeError: Can't get attribute 'run_model' on <module '__main__' (built-in)>
Traceback (most recent call last):
File "<string>", line 25, in <module>
File "<string>", line 16, in test_multi_process
AssertionError
```
### Versions
pytest test/test_xpu.py -k test_lazy_init
cc @gujinghui @EikanWang @fengyuan14 @guangyey
| true
|
2,987,612,466
|
improve noop elimination for view
|
BoyuanFeng
|
closed
|
[
"oncall: distributed",
"Merged",
"ciflow/trunk",
"topic: not user facing",
"module: inductor",
"ciflow/inductor"
] | 3
|
CONTRIBUTOR
|
This PR improves noop elimination.
### View Noop
```python
>>> torch.Size([1,2,3]) == [1,2,3]
False
>>> torch.Size([1,2,3]) == (1,2,3)
True
```
So we add `tuple(size)` in `view_noop`.
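A minimal sketch of the comparison the noop check needs (hypothetical helper name, not the actual Inductor code):
```python
import torch

def is_view_noop(input_size: torch.Size, target_size) -> bool:
    # A view/reshape is a noop when the target shape equals the input shape.
    # torch.Size compares equal to a tuple but not to a list, so normalize first.
    return input_size == tuple(target_size)

assert is_view_noop(torch.Size([1, 2, 3]), [1, 2, 3])  # True once we wrap in tuple()
assert is_view_noop(torch.Size([1, 2, 3]), (1, 2, 3))
```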
Example:
```python
import torch
@torch.compile()
def f(x):
batch_size = x.shape[0]
x = x.transpose(1, 2) # (batch_size, 2, 3)
x = x.reshape(batch_size, 2, 3) # noop
return x
x = torch.randn((2,3,2))
f(x)
x = torch.randn((4,3,2))
f(x)
```
Before:

After:

cc @H-Huang @awgu @wanchaol @fegin @fduwjj @wz337 @wconstab @d4l3k @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
| true
|
2,987,508,165
|
[torch.export] pytorch 2.7.0 torch.export failed and the error message is very confusing
|
shykoe
|
closed
|
[
"oncall: pt2",
"oncall: export"
] | 2
|
NONE
|
### 🐛 Describe the bug
here is my code
```python
import torch.nn as nn
import torch
class Smelu(nn.Module):
def __init__(self, beta: float = 1.0):
super(Smelu, self).__init__()
self.beta = beta
def forward(self, x):
return torch.where(torch.abs(x) <= self.beta, ((x + self.beta) ** 2) / (4 * self.beta), nn.functional.relu(x))
class GateNN(nn.Module):
def __init__(self, input_dim, hidden_unit, output_unit):
super(GateNN, self).__init__()
self.hidden_layer = nn.Sequential(
nn.Linear(input_dim, hidden_unit),
nn.ReLU()
)
self.output_layer = nn.Sequential(
nn.Linear(hidden_unit, output_unit),
nn.Sigmoid()
)
def forward(self, inputs, training=True):
hidden = self.hidden_layer(inputs)
output = 2 * self.output_layer(hidden)
return output
class LHUC(nn.Module):
def __init__(self, input_dim, hidden_units, gate_units):
super(LHUC, self).__init__()
self.hidden_units = hidden_units
self.gate_units = gate_units
self.gate_nn = nn.ModuleList()
self.dense_layers = nn.ModuleList()
self.input_dim = input_dim
# initialize dense layers
dense_input = self.input_dim
for i in range(len(self.hidden_units) - 1):
layer = nn.Sequential(
nn.Linear(dense_input, self.hidden_units[i]),
Smelu()
)
dense_input = self.hidden_units[i]
self.dense_layers.append(layer)
layer = nn.Linear(self.hidden_units[-2], self.hidden_units[-1])
self.dense_layers.append(layer)
# 870, 400, 870
self.gate_nn.append(GateNN(self.input_dim, self.gate_units, self.input_dim))
input_dim = self.input_dim
for i, unit_num in enumerate(self.hidden_units[:-1]):
self.gate_nn.append(GateNN(self.input_dim, self.gate_units, unit_num))
def forward(self, inputs):
#2560, 870
origin_embedding, auxiliary_embedding = inputs
hidden = origin_embedding
for i in range(len(self.hidden_units)):
gate = self.gate_nn[i](auxiliary_embedding)
hidden = hidden * gate
hidden = self.dense_layers[i](hidden)
output = hidden
return output
batch_size = 2560
class Net(nn.Module):
def __init__(self):
super(Net, self).__init__()
self.dht_bn = nn.BatchNorm1d(870)
self.bias_bn = nn.BatchNorm1d(80)
self.criterion = torch.nn.CrossEntropyLoss()
self.dht_tower = LHUC(
input_dim=870,
hidden_units=[1024, 512, 256, 64, 1],
gate_units=512)
# [96, 64, 1]
self.bias_mlp = nn.Sequential()
bias_input_dim = 80
bias_list = [96, 64, 1]
for dim in bias_list:
self.bias_mlp.append(nn.Linear(bias_input_dim, dim))
self.bias_mlp.append(nn.BatchNorm1d(dim))
self.bias_mlp.append(nn.Dropout(0.1))
bias_input_dim = dim
def forward(self, inputs, training=True):
# 2560, 87, 10
dht_table = inputs["dht_table"]
# 2560, 870
dht_table = dht_table.reshape([batch_size, -1])
dht_table = self.dht_bn(dht_table)
# 2560, 8, 10
bias_table = inputs["bias_table"]
bias_table = bias_table.reshape([batch_size, -1])
bias_table = self.bias_bn(bias_table)
features = [dht_table, dht_table]
main_logits = self.dht_tower(features)
main_pred = nn.functional.sigmoid(main_logits)
bias_logits = self.bias_mlp(bias_table)
bias_pred = nn.functional.sigmoid(bias_logits)
pred = main_pred * bias_pred
return {"combine_ctr_pred": pred, "dht_ctr_pred": main_pred}
def compute_loss_and_metrics(self, labels, model_outputs, sample_weights) -> tuple[torch.Tensor, dict]:
loss = self.criterion(model_outputs["combine_ctr_pred"].reshape(-1), labels["label"].reshape(-1))
return loss, {"auc": [labels["label"], model_outputs["combine_ctr_pred"]], "loss": loss}
if __name__ == "__main__":
device = "cuda" if torch.cuda.is_available() else "cpu"
model = Net().to(device=device)
batch_dim = torch.export.Dim("batch", min=1, max=65536)
example_inputs={"lhuc_table":torch.rand(2560, 15, 10).to('cuda'), "bias_table":torch.rand(2560, 8, 10).to('cuda'), "dht_table":torch.rand(2560, 87, 10).to('cuda')}
inputs = {'inputs':{'lhuc_table':{0:batch_dim,1:torch.export.Dim.AUTO,2:torch.export.Dim.AUTO},'bias_table':{0:batch_dim,1:torch.export.Dim.AUTO,2:torch.export.Dim.AUTO}, 'dht_table':{0:batch_dim,1:torch.export.Dim.AUTO,2:torch.export.Dim.AUTO}}}
example_tuple = tuple([example_inputs[x] for x in example_inputs.keys()])
exported = torch.export.export(model, (example_inputs,), dynamic_shapes=inputs)
```
I got this error message:
```
- Not all values of batch = L['inputs']['lhuc_table'].size()[0] in the specified range batch <= 65536 are valid because batch was inferred to be a constant (2560).
- Not all values of batch = L['inputs']['bias_table'].size()[0] in the specified range batch <= 65536 are valid because batch was inferred to be a constant (2560).
- Not all values of batch = L['inputs']['dht_table'].size()[0] in the specified range batch <= 65536 are valid because batch was inferred to be a constant (2560).
```
The error message is very confusing.
But when I change the dynamic_shapes to
```
inputs = {'inputs':{'lhuc_table':{0:torch.export.Dim.AUTO,1:torch.export.Dim.AUTO,2:torch.export.Dim.AUTO},'bias_table':{0:torch.export.Dim.AUTO,1:torch.export.Dim.AUTO,2:torch.export.Dim.AUTO}, 'dht_table':{0:torch.export.Dim.AUTO,1:torch.export.Dim.AUTO,2:torch.export.Dim.AUTO}}}
exported = torch.export.export(model, (example_inputs,), dynamic_shapes=inputs)
```
it worked fine.
However, I want to convert the model with AOT Inductor, and I'm worried about performance if I use AUTO instead of an explicit dynamic dim.
### Versions
Collecting environment information...
PyTorch version: 2.7.0.dev20250218+cu118
Is debug build: False
CUDA used to build PyTorch: 11.8
ROCM used to build PyTorch: N/A
OS: TencentOS Linux 3.2 (Final) (x86_64)
GCC version: (GCC) 10.3.1 20210422 (Red Hat 10.3.1-1)
Clang version: 9.0.1 (Red Hat 9.0.1-2.module_el8.2.0+309+0c7b6b03)
CMake version: version 3.19.0
Libc version: glibc-2.28
Python version: 3.9.16 (main, Dec 11 2024, 20:47:20) [GCC 8.3.1 20191121 (Red Hat 8.3.1-5)] (64-bit runtime)
Python platform: Linux-5.4.119-1-tlinux4-0010.3-x86_64-with-glibc2.28
Is CUDA available: True
CUDA runtime version: 11.8.89
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration:
GPU 0: NVIDIA A10
GPU 1: NVIDIA A10
GPU 2: NVIDIA A10
GPU 3: NVIDIA A10
GPU 4: NVIDIA A10
GPU 5: NVIDIA A10
GPU 6: NVIDIA A10
GPU 7: NVIDIA A10
GPU 8: NVIDIA A10
GPU 9: NVIDIA A10
GPU 10: NVIDIA A10
GPU 11: NVIDIA A10
GPU 12: NVIDIA A10
GPU 13: NVIDIA A10
GPU 14: NVIDIA A10
GPU 15: NVIDIA A10
Nvidia driver version: 525.116.04
cuDNN version: Probably one of the following:
/usr/lib/libcudnn.so.8.9.7
/usr/lib/libcudnn_adv_infer.so.8.9.7
/usr/lib/libcudnn_adv_train.so.8.9.7
/usr/lib/libcudnn_cnn_infer.so.8.9.7
/usr/lib/libcudnn_cnn_train.so.8.9.7
/usr/lib/libcudnn_ops_infer.so.8.9.7
/usr/lib/libcudnn_ops_train.so.8.9.7
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Byte Order: Little Endian
CPU(s): 208
On-line CPU(s) list: 0-207
Thread(s) per core: 2
Core(s) per socket: 26
Socket(s): 4
NUMA node(s): 4
Vendor ID: GenuineIntel
CPU family: 6
Model: 85
Model name: Intel(R) Xeon(R) Platinum 8372HC CPU @ 3.40GHz
Stepping: 11
CPU MHz: 3799.983
CPU max MHz: 3401.0000
CPU min MHz: 1200.0000
BogoMIPS: 6800.00
Virtualization: VT-x
L1d cache: 32K
L1i cache: 32K
L2 cache: 1024K
L3 cache: 36608K
NUMA node0 CPU(s): 0-25,104-129
NUMA node1 CPU(s): 26-51,130-155
NUMA node2 CPU(s): 52-77,156-181
NUMA node3 CPU(s): 78-103,182-207
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc art arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc cpuid aperfmperf pni pclmulqdq dtes64 monitor ds_cpl vmx smx est tm2 ssse3 sdbg fma cx16 xtpr pdcm pcid dca sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm 3dnowprefetch cpuid_fault epb cat_l3 cdp_l3 invpcid_single intel_ppin ssbd mba ibrs ibpb stibp ibrs_enhanced tpr_shadow vnmi flexpriority ept vpid ept_ad fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid cqm mpx rdt_a avx512f avx512dq rdseed adx smap clflushopt clwb intel_pt avx512cd avx512bw avx512vl xsaveopt xsavec xgetbv1 xsaves cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local avx512_bf16 dtherm ida arat pln pts pku ospke avx512_vnni md_clear flush_l1d arch_capabilities
Versions of relevant libraries:
[pip3] numpy==1.23.5
[pip3] nvidia-cublas-cu11==11.11.3.6
[pip3] nvidia-cuda-cupti-cu11==11.8.87
[pip3] nvidia-cuda-nvrtc-cu11==11.8.89
[pip3] nvidia-cuda-runtime-cu11==11.8.89
[pip3] nvidia-cudnn-cu11==9.1.0.70
[pip3] nvidia-cufft-cu11==10.9.0.58
[pip3] nvidia-curand-cu11==10.3.0.86
[pip3] nvidia-cusolver-cu11==11.4.1.48
[pip3] nvidia-cusparse-cu11==11.7.5.86
[pip3] nvidia-nccl-cu11==2.21.5
[pip3] nvidia-nvtx-cu11==11.8.86
[pip3] onnx==1.17.0
[pip3] onnxruntime==1.19.2
[pip3] onnxscript==0.2.0
[pip3] onnxsim==0.4.36
[pip3] tf2onnx==1.9.3
[pip3] torch==2.7.0.dev20250218+cu118
[pip3] triton==3.1.0
[conda] Could not collect
cc @chauhang @penguinwu @avikchaudhuri @gmagogsfm @zhxchen17 @tugsbayasgalan @angelayi @suo @ydwu4
| true
|
2,987,470,553
|
DISABLED test_parity__foreach_acos_fastpath_outplace_cuda_complex128 (__main__.TestForeachCUDA)
|
pytorch-bot[bot]
|
open
|
[
"triaged",
"module: flaky-tests",
"skipped",
"module: mta"
] | 5
|
NONE
|
Platforms: linux, slow
This test was disabled because it is failing in CI. See [recent examples](https://hud.pytorch.org/flakytest?name=test_parity__foreach_acos_fastpath_outplace_cuda_complex128&suite=TestForeachCUDA&limit=100) and the most recent trunk [workflow logs](https://github.com/pytorch/pytorch/runs/40359724773).
Over the past 3 hours, it has been determined flaky in 8 workflow(s) with 16 failures and 8 successes.
**Debugging instructions (after clicking on the recent samples link):**
DO NOT ASSUME THINGS ARE OKAY IF THE CI IS GREEN. We now shield flaky tests from developers so CI will thus be green but it will be harder to parse the logs.
To find relevant log snippets:
1. Click on the workflow logs linked above
2. Click on the Test step of the job so that it is expanded. Otherwise, the grepping will not work.
3. Grep for `test_parity__foreach_acos_fastpath_outplace_cuda_complex128`
4. There should be several instances run (as flaky tests are rerun in CI) from which you can study the logs.
<details><summary>Sample error message</summary>
```
Traceback (most recent call last):
File "/var/lib/jenkins/workspace/test/test_foreach.py", line 228, in test_parity
actual = func(
File "/var/lib/jenkins/workspace/test/test_foreach.py", line 91, in __call__
assert mta_called == (expect_fastpath and (not zero_size)), (
AssertionError: mta_called=False, expect_fastpath=True, zero_size=False, self.func.__name__='_foreach_acos', keys=('aten::_foreach_acos', 'Unrecognized', 'aten::empty_strided', 'cudaLaunchKernel', 'cudaDeviceSynchronize')
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_device_type.py", line 1159, in test_wrapper
return test(*args, **kwargs)
File "/opt/conda/envs/py_3.10/lib/python3.10/unittest/mock.py", line 1833, in _inner
return f(*args, **kw)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 1975, in wrap_fn
return fn(self, *args, **kwargs)
File "/var/lib/jenkins/workspace/test/test_foreach.py", line 235, in test_parity
with self.assertRaises(type(e)):
File "/opt/conda/envs/py_3.10/lib/python3.10/unittest/case.py", line 226, in __exit__
self._raiseFailure("{} not raised".format(exc_name))
File "/opt/conda/envs/py_3.10/lib/python3.10/unittest/case.py", line 163, in _raiseFailure
raise self.test_case.failureException(msg)
AssertionError: AssertionError not raised
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 3154, in wrapper
method(*args, **kwargs)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 3154, in wrapper
method(*args, **kwargs)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_device_type.py", line 454, in instantiated_test
result = test(self, **param_kwargs)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 1612, in wrapper
fn(*args, **kwargs)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_device_type.py", line 1171, in test_wrapper
raise e_tracked from e
Exception: Caused by sample input at index 0: SampleInput(input=TensorList[Tensor[size=(20, 20), device="cuda:0", dtype=torch.complex128], Tensor[size=(19, 19), device="cuda:0", dtype=torch.complex128], Tensor[size=(18, 18), device="cuda:0", dtype=torch.complex128], Tensor[size=(17, 17), device="cuda:0", dtype=torch.complex128], Tensor[size=(16, 16), device="cuda:0", dtype=torch.complex128], Tensor[size=(15, 15), device="cuda:0", dtype=torch.complex128], Tensor[size=(14, 14), device="cuda:0", dtype=torch.complex128], Tensor[size=(13, 13), device="cuda:0", dtype=torch.complex128], Tensor[size=(12, 12), device="cuda:0", dtype=torch.complex128], Tensor[size=(11, 11), device="cuda:0", dtype=torch.complex128], Tensor[size=(10, 10), device="cuda:0", dtype=torch.complex128], Tensor[size=(9, 9), device="cuda:0", dtype=torch.complex128], Tensor[size=(8, 8), device="cuda:0", dtype=torch.complex128], Tensor[size=(7, 7), device="cuda:0", dtype=torch.complex128], Tensor[size=(6, 6), device="cuda:0", dtype=torch.complex128], Tensor[size=(5, 5), device="cuda:0", dtype=torch.complex128], Tensor[size=(4, 4), device="cuda:0", dtype=torch.complex128], Tensor[size=(3, 3), device="cuda:0", dtype=torch.complex128], Tensor[size=(2, 2), device="cuda:0", dtype=torch.complex128], Tensor[size=(1, 1), device="cuda:0", dtype=torch.complex128]], args=(), kwargs={}, broadcasts_input=False, name='')
To execute this test, run the following from the base repo dir:
PYTORCH_OPINFO_SAMPLE_INPUT_INDEX=0 python test/test_foreach.py TestForeachCUDA.test_parity__foreach_acos_fastpath_outplace_cuda_complex128
This message can be suppressed by setting PYTORCH_PRINT_REPRO_ON_FAILURE=0
```
</details>
Test file path: `test_foreach.py`
cc @clee2000 @crcrpar @mcarilli @janeyx99
| true
|
2,987,451,684
|
[Intel GPU][PT2E] Register qconv impls to general qconv_pointwise schema
|
ZhiweiYan-96
|
closed
|
[
"module: cpu",
"open source",
"Merged",
"ciflow/trunk",
"topic: not user facing",
"ciflow/inductor",
"keep-going",
"ciflow/xpu"
] | 4
|
COLLABORATOR
|
# Motivation
Refer to https://github.com/pytorch/pytorch/pull/150751: a general schema for `qconv_pointwise` was added and `qconv2d_pointwise` was removed from callers. This PR registers the XPU backend implementations for this operator.
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #151092
cc @jgong5 @mingfeima @XiaobingSuper @sanchitintel @ashokei @jingxu10
| true
|
2,987,451,563
|
[Openreg][PrivateUse1] Fix releasing tensor issue when using pin_memory
|
FFFrog
|
closed
|
[
"open source",
"Merged",
"Reverted",
"ciflow/trunk",
"topic: not user facing",
"ci-no-td"
] | 24
|
COLLABORATOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #151091
* #151007
As the title stated.
Related PR: https://github.com/pytorch/pytorch/pull/147066
Co-authored-by: Zhenbin Lin <lin-zhenbin@qq.com>
| true
|
2,987,429,099
|
DISABLED test_pp_fsdp_dp_type_FSDP_MP_ScheduleClass3 (__main__.ComposabilityTest)
|
jithunnair-amd
|
closed
|
[
"module: rocm",
"triaged",
"skipped"
] | 2
|
COLLABORATOR
|
Platforms: rocm
This test was disabled because it failed on the MI300 runners in https://github.com/pytorch/pytorch/pull/150667: https://github.com/pytorch/pytorch/actions/runs/14372628446/job/40320881178
cc @jeffdaily @sunway513 @pruthvistony @ROCmSupport @dllehr-amd @jataylo @hongxiayang @naromero77amd
| true
|
2,987,427,886
|
DISABLED test_pp_fsdp_dp_type_FSDP_ScheduleClass3 (__main__.ComposabilityTest)
|
jithunnair-amd
|
closed
|
[
"module: rocm",
"triaged",
"skipped"
] | 2
|
COLLABORATOR
|
Platforms: rocm
This test was disabled because it failed on the MI300 runners in https://github.com/pytorch/pytorch/pull/150667: https://github.com/pytorch/pytorch/actions/runs/14372628446/job/40320881178
cc @jeffdaily @sunway513 @pruthvistony @ROCmSupport @dllehr-amd @jataylo @hongxiayang @naromero77amd
| true
|
2,987,426,968
|
DISABLED test_pp_fsdp_dp_type_FSDP_ScheduleClass2 (__main__.ComposabilityTest)
|
jithunnair-amd
|
closed
|
[
"module: rocm",
"triaged",
"skipped"
] | 2
|
COLLABORATOR
|
Platforms: rocm
This test was disabled because it failed on the MI300 runners in https://github.com/pytorch/pytorch/pull/150667: https://github.com/pytorch/pytorch/actions/runs/14372628446/job/40320881178
cc @jeffdaily @sunway513 @pruthvistony @ROCmSupport @dllehr-amd @jataylo @hongxiayang @naromero77amd
| true
|
2,987,426,381
|
DISABLED test_pp_fsdp_dp_type_FSDP_ScheduleClass1 (__main__.ComposabilityTest)
|
jithunnair-amd
|
closed
|
[
"module: rocm",
"triaged",
"skipped"
] | 2
|
COLLABORATOR
|
Platforms: rocm
This test was disabled because it failed on the MI300 runners in https://github.com/pytorch/pytorch/pull/150667: https://github.com/pytorch/pytorch/actions/runs/14372628446/job/40320881178
cc @jeffdaily @sunway513 @pruthvistony @ROCmSupport @dllehr-amd @jataylo @hongxiayang @naromero77amd
| true
|
2,987,425,034
|
DISABLED test_pp_fsdp_dp_type_FSDP_ScheduleClass0 (__main__.ComposabilityTest)
|
jithunnair-amd
|
closed
|
[
"module: rocm",
"triaged",
"skipped"
] | 2
|
COLLABORATOR
|
Platforms: rocm
This test was disabled because it failed on the MI300 runners in https://github.com/pytorch/pytorch/pull/150667: https://github.com/pytorch/pytorch/actions/runs/14372628446/job/40320881178
cc @jeffdaily @sunway513 @pruthvistony @ROCmSupport @dllehr-amd @jataylo @hongxiayang @naromero77amd
| true
|
2,987,423,097
|
DISABLED test_pp_fsdp_dp_type_FSDP_MP_ScheduleClass2 (__main__.ComposabilityTest)
|
jithunnair-amd
|
closed
|
[
"module: rocm",
"triaged",
"skipped"
] | 2
|
COLLABORATOR
|
Platforms: rocm
This test was disabled because it failed on the MI300 runners in https://github.com/pytorch/pytorch/pull/150667: https://github.com/pytorch/pytorch/actions/runs/14372628446/job/40320881178
cc @jeffdaily @sunway513 @pruthvistony @ROCmSupport @dllehr-amd @jataylo @hongxiayang @naromero77amd
| true
|
2,987,421,767
|
DISABLED test_pp_fsdp_dp_type_FSDP_MP_ScheduleClass1 (__main__.ComposabilityTest)
|
jithunnair-amd
|
closed
|
[
"module: rocm",
"triaged",
"skipped"
] | 2
|
COLLABORATOR
|
Platforms: rocm
This test was disabled because it failed on the MI300 runners in https://github.com/pytorch/pytorch/pull/150667: https://github.com/pytorch/pytorch/actions/runs/14372628446/job/40320881178
cc @jeffdaily @sunway513 @pruthvistony @ROCmSupport @dllehr-amd @jataylo @hongxiayang @naromero77amd
| true
|
2,987,420,265
|
DISABLED test_pp_fsdp_dp_type_FSDP_MP_ScheduleClass0 (__main__.ComposabilityTest)
|
jithunnair-amd
|
closed
|
[
"module: rocm",
"triaged",
"skipped"
] | 2
|
COLLABORATOR
|
Platforms: rocm
This test was disabled because it failed on the MI300 runners in https://github.com/pytorch/pytorch/pull/150667: https://github.com/pytorch/pytorch/actions/runs/14372628446/job/40320881178
cc @jeffdaily @sunway513 @pruthvistony @ROCmSupport @dllehr-amd @jataylo @hongxiayang @naromero77amd
| true
|
2,987,418,562
|
DISABLED test_pp_ddp_ScheduleClass2 (__main__.ComposabilityTest)
|
jithunnair-amd
|
closed
|
[
"module: rocm",
"triaged",
"skipped"
] | 2
|
COLLABORATOR
|
Platforms: rocm
This test was disabled because it failed on the MI300 runners in https://github.com/pytorch/pytorch/pull/150667: https://github.com/pytorch/pytorch/actions/runs/14372628446/job/40320881178
cc @jeffdaily @sunway513 @pruthvistony @ROCmSupport @dllehr-amd @jataylo @hongxiayang @naromero77amd
| true
|
2,987,417,254
|
DISABLED test_pp_ddp_ScheduleClass1 (__main__.ComposabilityTest)
|
jithunnair-amd
|
closed
|
[
"module: rocm",
"triaged",
"skipped"
] | 2
|
COLLABORATOR
|
Platforms: rocm
This test was disabled because it failed on the MI300 runners in https://github.com/pytorch/pytorch/pull/150667: https://github.com/pytorch/pytorch/actions/runs/14372628446/job/40320881178
cc @jeffdaily @sunway513 @pruthvistony @ROCmSupport @dllehr-amd @jataylo @hongxiayang @naromero77amd
| true
|
2,987,402,407
|
docs: allow empty targets tensor in ctc_loss
|
jPorterDosch
|
closed
|
[
"open source",
"Merged",
"topic: not user facing"
] | 16
|
CONTRIBUTOR
|
docs: allow an empty targets tensor in ctc_loss when target_lengths are zero, as described in the linked issue.
Fixes #150995
| true
|
2,987,396,667
|
Rewrite autograd producer consumer stream sync logic
|
soulitzer
|
open
|
[
"oncall: distributed",
"ciflow/trunk",
"release notes: autograd"
] | 5
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #151079
Also see previous work https://github.com/pytorch/pytorch/pull/142097
cc @H-Huang @awgu @wanchaol @fegin @fduwjj @wz337 @wconstab @d4l3k
| true
|
2,987,393,265
|
DISABLED test_pp_ddp_ScheduleClass0 (__main__.ComposabilityTest)
|
jithunnair-amd
|
closed
|
[
"module: rocm",
"triaged",
"skipped"
] | 2
|
COLLABORATOR
|
Platforms: rocm
This test was disabled because it failed on the MI300 runners in https://github.com/pytorch/pytorch/pull/150667: https://github.com/pytorch/pytorch/actions/runs/14372628446/job/40320881178
cc @jeffdaily @sunway513 @pruthvistony @ROCmSupport @dllehr-amd @jataylo @hongxiayang @naromero77amd
| true
|
2,987,391,585
|
DISABLED test_allgather_stress_cuda (__main__.ProcessGroupGlooTest)
|
jithunnair-amd
|
open
|
[
"module: rocm",
"triaged",
"skipped"
] | 1
|
COLLABORATOR
|
Platforms: rocm
This test was disabled because it failed on the MI300 runners in https://github.com/pytorch/pytorch/pull/150667: https://github.com/pytorch/pytorch/actions/runs/14372628446/job/40320881178
cc @jeffdaily @sunway513 @pruthvistony @ROCmSupport @dllehr-amd @jataylo @hongxiayang @naromero77amd
| true
|
2,987,380,934
|
When can the 50 series be installed with a lower version of torch? Now the pytorch version is too high and many things can't run
|
jhluaa
|
closed
|
[] | 1
|
NONE
|
### 🚀 The feature, motivation and pitch
We don't just need torch 2.6; we also need lower versions of torch to be compatible with CUDA 12.8.
### Alternatives
_No response_
### Additional context
_No response_
| true
|
2,987,303,477
|
don't return logits for benchmark script
|
shunting314
|
closed
|
[
"Merged",
"ciflow/trunk",
"topic: not user facing",
"module: dynamo",
"ciflow/inductor"
] | 7
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #151075
PT2 benchmark scripts have a pattern like:
```
def forward_and_backward_pass(self, mod, inputs, collect_outputs=True):
cloned_inputs = clone_inputs(inputs)
self.optimizer_zero_grad(mod)
with self.autocast(**self.autocast_arg):
pred = mod(**cloned_inputs)
loss = self.compute_loss(pred)
self.grad_scaler.scale(loss).backward()
self.optimizer_step()
if collect_outputs:
return collect_results(mod, pred, loss, cloned_inputs)
return None
```
for training.
The `collect_outputs` argument is True only for accuracy testing and False for performance testing.
For the HF benchmark suite, a model usually returns a tuple (loss, logits). For performance testing, even though the logits are never used anywhere, dynamo has to keep them due to the control flow.
A few bad things happen if we keep the logits here:
1. peak memory will be higher, since the logits are large and we cannot release their memory earlier.
2. we cannot do optimizations like chunking for the logits, because the tensor needs to be returned from the pre-grad graph.
Actually I think it's fine to not return the logits at all:
- For training cases, checking loss and gradients for accuracy is good enough. It's hard to imagine two runs having mismatched logits but matching loss/gradients.
- Also, discarding the logits as soon as possible makes perf benchmarking fairer for us (a small sketch of the idea follows below).
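A minimal sketch of that idea (hypothetical helper, not the exact benchmark-runner change): compute the loss inside the measured region and return only the loss, so the logits never become graph outputs.
```python
def loss_only_forward(mod, cloned_inputs, compute_loss):
    # Perf-mode sketch: only the loss leaves this function, so the compiled
    # graph does not have to keep the large logits tensor alive as an output.
    pred = mod(**cloned_inputs)  # HF models typically return (loss, logits)
    return compute_loss(pred)
```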
On the other hand, it may be interesting to let dynamo support something like dynamo.constexpr (similar to tl.constexpr). A variable annotated as dynamo.constexpr will be specialized at compile time and we can do more optimization (DCE e.g.) at compile time. (A small [repro](https://gist.github.com/shunting314/0912a8947028a904c34f361021b8024d))
Benchmark results here [link](https://hud.pytorch.org/benchmark/compilers?dashboard=torchinductor&startTime=Fri%2C%2004%20Apr%202025%2018%3A03%3A26%20GMT&stopTime=Fri%2C%2011%20Apr%202025%2018%3A03%3A26%20GMT&granularity=hour&mode=training&dtype=amp&deviceName=cuda%20(h100)&lBranch=gh/shunting314/204/head&lCommit=fe25dab3f65e1b0e9db0af03f7664af70fcc9c66&rBranch=main&rCommit=55e62ff74ad5614faf80b060c7bfc551e3b7af5a)
- HF 15% (1.51 -> 1.66 compression ratio) peak memory improvement
- I also see a 5% (2.74x -> 2.79x) perf win for HF. It could be real: we may generate more efficient kernels since we don't need to keep the logits and return them from the pre-grad graph. But I'll double-check.
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @chauhang @amjames
| true
|
2,987,290,381
|
DISABLED test_remote_cache_load_function_device_cuda_float32_dynamic_False_bundle_triton_False_use_static_cuda_launcher_False (__main__.TestFxGraphCache)
|
pytorch-bot[bot]
|
open
|
[
"module: rocm",
"triaged",
"module: flaky-tests",
"skipped",
"oncall: pt2",
"module: inductor"
] | 1
|
NONE
|
Platforms: rocm
This test was disabled because it is failing in CI. See [recent examples](https://hud.pytorch.org/flakytest?name=test_remote_cache_load_function_device_cuda_float32_dynamic_False_bundle_triton_False_use_static_cuda_launcher_False&suite=TestFxGraphCache&limit=100) and the most recent trunk [workflow logs](https://github.com/pytorch/pytorch/runs/40358528573).
Over the past 3 hours, it has been determined flaky in 3 workflow(s) with 4 failures and 3 successes.
**Debugging instructions (after clicking on the recent samples link):**
DO NOT ASSUME THINGS ARE OKAY IF THE CI IS GREEN. We now shield flaky tests from developers so CI will thus be green but it will be harder to parse the logs.
To find relevant log snippets:
1. Click on the workflow logs linked above
2. Click on the Test step of the job so that it is expanded. Otherwise, the grepping will not work.
3. Grep for `test_remote_cache_load_function_device_cuda_float32_dynamic_False_bundle_triton_False_use_static_cuda_launcher_False`
4. There should be several instances run (as flaky tests are rerun in CI) from which you can study the logs.
<details><summary>Sample error message</summary>
```
Traceback (most recent call last):
File "/var/lib/jenkins/pytorch/test/inductor/test_codecache.py", line 319, in test_remote_cache_load_function
self.assertEqual(global_stats.fx_graph, Stats(1, 3, 1))
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 4095, in assertEqual
raise error_metas.pop()[0].to_error( # type: ignore[index]
AssertionError: Object comparison failed: _GlobalItemStats(num_put=2, num_get_hit=2, num_get_miss=2) != Stats(num_put=1, num_get_hit=3, num_get_miss=1)
To execute this test, run the following from the base repo dir:
PYTORCH_TEST_WITH_ROCM=1 python test/inductor/test_codecache.py TestFxGraphCache.test_remote_cache_load_function_device_cuda_float32_dynamic_False_bundle_triton_False_use_static_cuda_launcher_False
This message can be suppressed by setting PYTORCH_PRINT_REPRO_ON_FAILURE=0
```
</details>
Test file path: `inductor/test_codecache.py`
cc @jeffdaily @sunway513 @jithunnair-amd @pruthvistony @ROCmSupport @dllehr-amd @jataylo @hongxiayang @naromero77amd @clee2000 @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
| true
|
2,987,282,566
|
[Not for land] save Q,K,V tensor at start of flash attention fwd and bwd
|
danielvegamyhre
|
closed
|
[] | 2
|
CONTRIBUTOR
| null | true
|
2,987,240,689
|
[dtensor] add op support for torch._grouped_mm
|
tianyu-l
|
closed
|
[
"oncall: distributed",
"Merged",
"ciflow/trunk",
"ciflow/inductor",
"release notes: distributed (dtensor)"
] | 6
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #151072
This PR would make TP work with Grouped MM in MoE implementations like https://github.com/pytorch/torchtitan/pull/1084
cc @H-Huang @awgu @wanchaol @fegin @fduwjj @wz337 @wconstab @d4l3k
| true
|
2,987,240,637
|
[dtensor] add op support for torch.cumsum
|
tianyu-l
|
closed
|
[
"oncall: distributed",
"Merged",
"ciflow/trunk",
"ciflow/inductor",
"release notes: distributed (dtensor)"
] | 3
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #151072
* __->__ #151071
For `torch.cumsum`, any sharding placement should propagate through if the cumsum `dim` is not sharded; otherwise the input needs to be replicated first.
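A minimal usage sketch (assumes this PR's op support, a 2-GPU launch via torchrun, and the public `torch.distributed.tensor` APIs):
```python
# torchrun --nproc-per-node=2 cumsum_dtensor.py
import torch
from torch.distributed.device_mesh import init_device_mesh
from torch.distributed.tensor import Shard, distribute_tensor

mesh = init_device_mesh("cuda", (2,))
x = torch.randn(4, 8)
dx = distribute_tensor(x, mesh, [Shard(0)])  # sharded along dim 0
y = torch.cumsum(dx, dim=1)  # cumsum along the unsharded dim 1, so Shard(0) propagates
```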
cc @H-Huang @awgu @wanchaol @fegin @fduwjj @wz337 @wconstab @d4l3k
| true
|
2,987,238,537
|
[2/N] Use internal linkage in aten C++ files
|
cyyever
|
closed
|
[
"module: cpu",
"triaged",
"open source",
"oncall: mobile",
"Merged",
"ciflow/trunk",
"release notes: quantization",
"release notes: linalg_frontend",
"ciflow/periodic",
"ciflow/android"
] | 8
|
COLLABORATOR
|
Turn functions and variables static if they are not used outside the ten cpp files. In some cases, missing header inclusions are added; in other cases, unused functions are removed.
cc @jgong5 @mingfeima @XiaobingSuper @sanchitintel @ashokei @jingxu10
| true
|
2,987,230,179
|
[ONNX] Support float4
|
justinchuby
|
open
|
[
"module: onnx",
"triaged",
"open source",
"ciflow/trunk",
"release notes: onnx",
"topic: new features"
] | 3
|
COLLABORATOR
|
- Support exporting float4 models (note: currently we use IR version 10 universally in the exporter, which does not include float4 support. Eventually, when onnxruntime and the ecosystem move to support the new IR version 11, we should bump our version to 11 in the exporter as well)
- The shape of the type is set according to https://github.com/pytorch/pytorch/pull/148791#discussion_r2038704986 (added last dim with size 2)
- Use ml_dtypes types when converting to numpy for consistency with ONNX IR
Fix https://github.com/pytorch/pytorch/issues/150202
| true
|
2,987,207,422
|
[Profiler/Easy] Remove temp flag for on-demand Memory Snapshot
|
sraikund16
|
closed
|
[
"fb-exported",
"Merged",
"ciflow/trunk",
"release notes: profiler",
"topic: not user facing"
] | 6
|
CONTRIBUTOR
|
Summary: Now that we have the profiler impl in, we don't need the temporary flag anymore. Submodule update too.
Test Plan: CI
Reviewed By: sanrise
Differential Revision: D72672186
cc @EikanWang @jgong5 @wenzhe-nrv @sanchitintel
| true
|
2,987,178,908
|
auto functionalize base_hop
|
ydwu4
|
open
|
[
"release notes: fx",
"fx",
"module: inductor",
"module: dynamo",
"ciflow/inductor"
] | 1
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #152984
* #152974
* __->__ #151067
cc @ezyang @SherlockNoMad @EikanWang @jgong5 @wenzhe-nrv @voznesenskym @penguinwu @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @jiayisunx @ipiszy @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
| true
|
2,987,173,494
|
[ROCm] QR decomposition is much slower on MI300x than A100
|
WenboGong
|
open
|
[
"module: performance",
"module: rocm",
"triaged"
] | 6
|
NONE
|
### 🐛 Describe the bug
Performing QR decomposition on a large matrix is significantly slower on MI300X than on A100; the average time is about 5x higher.
### Sample code
```python
import torch
import time
dim = 2048
n= 300
device = torch.device("cuda")
total_time = 0
for _ in range(n):
M= torch.randn(dim, dim, device=device)
start = time.time()
Q = torch.linalg.qr(M)[0]
end = time.time()
total_time += end - start
print(f"Time taken: {total_time / n} seconds")
```
We get
`Time taken: 0.01596961180369059 seconds` on A100 GPU
`Time taken: 0.09405646562576293 seconds` on MI300X
Is this expected, or are there any suggestions to improve the speed of QR on MI300X?
### Versions
Collecting environment information...
PyTorch version: 2.8.0a0+gitb6929ae
Is debug build: False
CUDA used to build PyTorch: N/A
ROCM used to build PyTorch: 6.3.42131-fa1d09cbd
OS: Ubuntu 20.04.6 LTS (x86_64)
GCC version: (Ubuntu 9.4.0-1ubuntu1~20.04.2) 9.4.0
Clang version: 18.0.0git (https://github.com/RadeonOpenCompute/llvm-project roc-6.3.0 24455 f24aa3b4a91f6ee2fcd15629ba0b49fa545d8d6b)
CMake version: version 3.31.2
Libc version: glibc-2.31
Python version: 3.10.16 (main, Dec 11 2024, 16:24:50) [GCC 11.2.0] (64-bit runtime)
Python platform: Linux-6.8.0-1026-azure-x86_64-with-glibc2.31
Is CUDA available: True
CUDA runtime version: Could not collect
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration: AMD Instinct MI300X VF (gfx942:sramecc+:xnack-)
Nvidia driver version: Could not collect
cuDNN version: Could not collect
HIP runtime version: 6.3.42134
MIOpen runtime version: 3.3.0
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Byte Order: Little Endian
Address sizes: 46 bits physical, 57 bits virtual
CPU(s): 96
On-line CPU(s) list: 0-95
Thread(s) per core: 1
Core(s) per socket: 48
Socket(s): 2
NUMA node(s): 2
Vendor ID: GenuineIntel
CPU family: 6
Model: 143
Model name: Intel(R) Xeon(R) Platinum 8480C
Stepping: 8
CPU MHz: 2000.000
CPU max MHz: 2000.0000
CPU min MHz: 800.0000
BogoMIPS: 4000.00
Hypervisor vendor: Microsoft
Virtualization type: full
L1d cache: 4.5 MiB
L1i cache: 3 MiB
L2 cache: 192 MiB
L3 cache: 210 MiB
NUMA node0 CPU(s): 0-47
NUMA node1 CPU(s): 48-95
Vulnerability Gather data sampling: Not affected
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Unknown: No mitigations
Vulnerability Reg file data sampling: Not affected
Vulnerability Retbleed: Vulnerable
Vulnerability Spec rstack overflow: Not affected
Vulnerability Spec store bypass: Vulnerable
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Retpolines; STIBP disabled; RSB filling; PBRSB-eIBRS Not affected; BHI Retpoline
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ss ht syscall nx pdpe1gb rdtscp lm constant_tsc arch_perfmon rep_good nopl xtopology tsc_reliable nonstop_tsc cpuid aperfmperf tsc_known_freq pni pclmulqdq ssse3 fma cx16 pdcm pcid sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand hypervisor lahf_lm abm 3dnowprefetch fsgsbase tsc_adjust bmi1 hle avx2 smep bmi2 erms invpcid rtm avx512f avx512dq rdseed adx smap avx512ifma clflushopt clwb avx512cd sha_ni avx512bw avx512vl xsaveopt xsavec xgetbv1 xsaves user_shstk avx_vnni avx512_bf16 avx512vbmi umip waitpkg avx512_vbmi2 gfni vaes vpclmulqdq avx512_vnni avx512_bitalg avx512_vpopcntdq la57 rdpid cldemote movdiri movdir64b fsrm serialize ibt amx_bf16 avx512_fp16 amx_tile amx_int8 arch_capabilities
Versions of relevant libraries:
[pip3] mypy==1.14.0
[pip3] mypy-extensions==1.0.0
[pip3] numpy==1.22.4
[pip3] onnx==1.17.0
[pip3] onnxscript==0.2.2
[pip3] optree==0.13.0
[pip3] torch==2.8.0a0+gitb6929ae
[pip3] torchvision==0.22.0a0+ef4718a
[pip3] triton==3.3.0+git96316ce5
[conda] mkl 2021.4.0 h06a4308_640
[conda] mkl-include 2021.4.0 h06a4308_640
[conda] numpy 1.22.4 pypi_0 pypi
[conda] optree 0.13.0 pypi_0 pypi
[conda] torch 2.8.0a0+gitb6929ae pypi_0 pypi
[conda] torchvision 0.22.0a0+ef4718a pypi_0 pypi
[conda] triton 3.3.0+git96316ce5 pypi_0 pypi
cc @msaroufim @jeffdaily @sunway513 @jithunnair-amd @pruthvistony @ROCmSupport @dllehr-amd @jataylo @hongxiayang @naromero77amd
| true
|
2,987,154,203
|
[export] Add draft-export to error msg
|
angelayi
|
closed
|
[
"Merged",
"ciflow/trunk",
"release notes: export"
] | 3
|
CONTRIBUTOR
|
Given an exception in torch.export, I want to try/catch it to add the message "hey try out draft-export!". Currently I only add this message for errors that draft-export is known to fix, like DataDependentErrors, ConstraintViolationErrors, and no fake impl.
Originally the error message looks like:
```
File "/data/users/angelayi/pytorch/torch/_library/custom_ops.py", line 626, in fake_impl
raise RuntimeError(
RuntimeError: There was no fake impl registered for <CustomOpDef(mylib::foo2)>. This is necessary for torch.compile/export/fx tracing to work. Please use `foo2_impl.register_fake` to add an fake impl.
```
Now the error msg looks something like:
```
File "/data/users/angelayi/pytorch/torch/_library/custom_ops.py", line 626, in fake_impl
raise RuntimeError(
RuntimeError: There was no fake impl registered for <CustomOpDef(mylib::foo2)>. This is necessary for torch.compile/export/fx tracing to work. Please use `foo2_impl.register_fake` to add an fake impl.
The error above occurred when calling torch.export.export. If you would like to view some more information about this error, and get a list of all other errors that may occur in your export call, you can rerun your program with the `DRAFT_EXPORT=1` envvar, or replace your `export()` call with `draft_export()`.
```
In Python versions >= 3.11, we can use `exception.add_note` to add to the error message. For earlier versions, however, I did a hack that modifies `e.args`.
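A small sketch of that version-dependent fallback (illustrative only, not the exact code in the PR):
```python
import sys

def append_hint(e: BaseException, msg: str) -> None:
    if sys.version_info >= (3, 11):
        # Notes are rendered after the exception message in the traceback.
        e.add_note(msg)
    elif e.args:
        # Older Pythons: fold the hint into the first positional arg.
        e.args = (f"{e.args[0]}\n{msg}",) + e.args[1:]
    else:
        e.args = (msg,)

try:
    raise RuntimeError("There was no fake impl registered for <CustomOpDef(mylib::foo2)>.")
except RuntimeError as err:
    append_hint(err, "Consider rerunning with draft_export() for more details.")
    raise  # the re-raised error carries the hint (as a note on >= 3.11)
```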
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #151050
* __->__ #151065
* #151051
| true
|
2,987,134,257
|
[ONNX] Use dlpack to transfer tensors when onnxruntime implements proper support
|
justinchuby
|
open
|
[
"module: onnx",
"triaged"
] | 0
|
COLLABORATOR
|
Dependent on https://github.com/microsoft/onnxruntime/issues/24071
| true
|
2,987,126,003
|
[c10d][fr] Fix the false positive in the dtype check in fr analysis script
|
fduwjj
|
closed
|
[
"oncall: distributed",
"Merged",
"ciflow/trunk",
"topic: not user facing"
] | 4
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #151063
When checking dtype in the FR analysis script, we should only check it when the input or output numel is larger than zero. For gather or scatter, the output/input size will be an empty list on non-src or non-dst ranks, in which case we should just skip the check.
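A minimal sketch of the intended condition (the shape of the recorded size lists is an assumption, not the actual script code):
```python
import math

def recorded_numel(sizes):
    # `sizes` is assumed to be a list of per-tensor shapes recorded by Flight
    # Recorder, e.g. [[1024]]; gather/scatter on non-src/non-dst ranks records [].
    return sum(math.prod(shape) for shape in sizes)

def should_check_dtype(input_sizes, output_sizes) -> bool:
    # Only compare dtypes when both sides actually carry elements.
    return recorded_numel(input_sizes) > 0 and recorded_numel(output_sizes) > 0
```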
cc @H-Huang @awgu @wanchaol @fegin @wz337 @wconstab @d4l3k
Differential Revision: [D72826823](https://our.internmc.facebook.com/intern/diff/D72826823)
| true
|
2,987,122,695
|
[dynamo] Remove `traceable_tensor_subclasses`-related code
|
StrongerXi
|
closed
|
[
"Merged",
"ciflow/trunk",
"topic: not user facing",
"module: dynamo",
"ciflow/inductor"
] | 3
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #151062
* #151061
* #151060
Since #149792 deprecates `traceable_tensor_subclasses` and it's been
landed for over a week, we can safely remove all the old code that uses
`traceable_tensor_subclasses` (they were primarily for testing purposes
and are equivalent to no-ops now).
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @chauhang @amjames
| true
|
2,987,122,333
|
[dynamo] handle tensor subclass with non-classmethod `__torch_function__`
|
StrongerXi
|
closed
|
[
"Merged",
"module: dynamo",
"ciflow/inductor",
"release notes: dynamo"
] | 2
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #151062
* __->__ #151061
* #151060
As title, this patch fixes bugs in
1. emulating `has_torch_function`
2. emulating calling `__torch_function__`
3. building a callable VT for non-classmethod `__torch_function__`
Fixes #120799, #150265, #150848.
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @chauhang @amjames
| true
|
2,987,122,285
|
[dynamo] Properly handle `super().some_classmethod(...)`
|
StrongerXi
|
closed
|
[
"Merged",
"topic: not user facing",
"module: dynamo",
"ciflow/inductor"
] | 2
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #151062
* #151061
* __->__ #151060
Previously we were passing in the instance as first argument to a
`super().some_classmethod(...)` call, but we should've passed in the
type object instead, per semantics of `@classmethod`.
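For reference, the plain-Python behavior being matched (minimal example):
```python
class Base:
    @classmethod
    def some_classmethod(cls, x):
        return cls, x

class Child(Base):
    def call_it(self, x):
        # Per @classmethod semantics, `cls` must be the type object (Child),
        # not the instance -- this is the behavior the fix emulates.
        return super().some_classmethod(x)

received_cls, _ = Child().call_it(1)
assert received_cls is Child  # the type object, not the Child() instance
```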
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @chauhang @amjames
| true
|
2,987,089,602
|
[AOTI] Add _weight_int4pack_mm to the C shim fallback list
|
desertfire
|
closed
|
[
"Merged",
"ciflow/trunk",
"topic: improvements",
"module: inductor",
"ciflow/inductor",
"release notes: inductor",
"ciflow/rocm"
] | 4
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #151059
Summary: As title
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
| true
|
2,987,082,413
|
Remove C10_DEPRECATED
|
desertfire
|
open
|
[
"oncall: distributed",
"fb-exported",
"module: amp (automated mixed precision)",
"release notes: sparse"
] | 2
|
CONTRIBUTOR
|
Summary:
Revive https://github.com/pytorch/pytorch/pull/138406. In addition to the original code, this fixes internal test failures and completely removes c10/util/Deprecated.h.
Summary from the original PR,
```
Looking in the code I see
// NB: __cplusplus doesn't work for MSVC, so for now MSVC always uses
// the "__declspec(deprecated)" implementation and not the C++14
// "[[deprecated]]" attribute. We tried enabling "[[deprecated]]" for C++14 on
// MSVC, but ran into issues with some older MSVC versions.
But looking at the MSVC C++ support table I see that the [[deprecated]] attribute is supported as of MSVC 2015 and that the vast majority of C++17 features became supported in MSVC 2015 or later.
Since PyTorch is C++17 now, I infer that PyTorch must not support versions of MSVC earlier than MSVC 2015, so the versions of MSVC supported by PyTorch must support [[deprecated]].
Therefore, since we are finished deprecating old MSVCs we can deprecate C10_DEPRECATED.
```
Test Plan: CI
Differential Revision: D72762767
cc @H-Huang @awgu @wanchaol @fegin @fduwjj @wz337 @wconstab @d4l3k @mcarilli @ptrblck @leslie-fang-intel @jgong5
| true
|
2,987,072,894
|
Cache the value of torch_key in subproc
|
oulgen
|
closed
|
[
"Merged",
"Reverted",
"ciflow/trunk",
"topic: not user facing",
"module: inductor",
"ciflow/inductor",
"ci-no-td"
] | 21
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #151057
No need to recalculate torch_key in subprocs; let's pass it from the main process.
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
| true
|
2,987,034,868
|
[dynamo, nested graph breaks] pack resume function stack + locals into a list
|
williamwen42
|
open
|
[
"topic: not user facing",
"module: dynamo",
"ciflow/inductor"
] | 2
|
MEMBER
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #151056
* #144516
We need to be able to pass frame stack+locals in lists to hand off to nested functions in the future, so we implement this part first.
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @chauhang @amjames
| true
|
2,986,987,213
|
segfault when TORCH_LOGS=invalid_arg
|
BoyuanFeng
|
closed
|
[
"high priority",
"module: logging",
"triaged",
"oncall: pt2"
] | 2
|
CONTRIBUTOR
|
`TORCH_LOGS=aot_g python3 reshape.py >torch_logs_error 2>&1` gives a segfault.
Error message: [P1782497957](https://www.internalfb.com/phabricator/paste/view/P1782497957)
cc @ezyang @gchanan @zou3519 @kadeng @msaroufim @chauhang @penguinwu
| true
|
2,986,964,382
|
DISABLED test_parity__foreach_acos_fastpath_outplace_cuda_bfloat16 (__main__.TestForeachCUDA)
|
pytorch-bot[bot]
|
open
|
[
"triaged",
"module: flaky-tests",
"skipped",
"module: mta"
] | 5
|
NONE
|
Platforms: linux
This test was disabled because it is failing in CI. See [recent examples](https://hud.pytorch.org/flakytest?name=test_parity__foreach_acos_fastpath_outplace_cuda_bfloat16&suite=TestForeachCUDA&limit=100) and the most recent trunk [workflow logs](https://github.com/pytorch/pytorch/runs/40345930112).
Over the past 3 hours, it has been determined flaky in 3 workflow(s) with 6 failures and 3 successes.
**Debugging instructions (after clicking on the recent samples link):**
DO NOT ASSUME THINGS ARE OKAY IF THE CI IS GREEN. We now shield flaky tests from developers so CI will thus be green but it will be harder to parse the logs.
To find relevant log snippets:
1. Click on the workflow logs linked above
2. Click on the Test step of the job so that it is expanded. Otherwise, the grepping will not work.
3. Grep for `test_parity__foreach_acos_fastpath_outplace_cuda_bfloat16`
4. There should be several instances run (as flaky tests are rerun in CI) from which you can study the logs.
<details><summary>Sample error message</summary>
```
Traceback (most recent call last):
File "/var/lib/jenkins/workspace/test/test_foreach.py", line 228, in test_parity
actual = func(
File "/var/lib/jenkins/workspace/test/test_foreach.py", line 91, in __call__
assert mta_called == (expect_fastpath and (not zero_size)), (
AssertionError: mta_called=False, expect_fastpath=True, zero_size=False, self.func.__name__='_foreach_acos', keys=('aten::_foreach_acos', 'Unrecognized', 'aten::empty_strided', 'cudaLaunchKernel', 'Lazy Function Loading', 'cudaDeviceSynchronize')
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_device_type.py", line 1159, in test_wrapper
return test(*args, **kwargs)
File "/opt/conda/envs/py_3.10/lib/python3.10/unittest/mock.py", line 1833, in _inner
return f(*args, **kw)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 1975, in wrap_fn
return fn(self, *args, **kwargs)
File "/var/lib/jenkins/workspace/test/test_foreach.py", line 235, in test_parity
with self.assertRaises(type(e)):
File "/opt/conda/envs/py_3.10/lib/python3.10/unittest/case.py", line 226, in __exit__
self._raiseFailure("{} not raised".format(exc_name))
File "/opt/conda/envs/py_3.10/lib/python3.10/unittest/case.py", line 163, in _raiseFailure
raise self.test_case.failureException(msg)
AssertionError: AssertionError not raised
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 3154, in wrapper
method(*args, **kwargs)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 3154, in wrapper
method(*args, **kwargs)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_device_type.py", line 454, in instantiated_test
result = test(self, **param_kwargs)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 1612, in wrapper
fn(*args, **kwargs)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_device_type.py", line 1171, in test_wrapper
raise e_tracked from e
Exception: Caused by sample input at index 0: SampleInput(input=TensorList[Tensor[size=(20, 20), device="cuda:0", dtype=torch.bfloat16], Tensor[size=(19, 19), device="cuda:0", dtype=torch.bfloat16], Tensor[size=(18, 18), device="cuda:0", dtype=torch.bfloat16], Tensor[size=(17, 17), device="cuda:0", dtype=torch.bfloat16], Tensor[size=(16, 16), device="cuda:0", dtype=torch.bfloat16], Tensor[size=(15, 15), device="cuda:0", dtype=torch.bfloat16], Tensor[size=(14, 14), device="cuda:0", dtype=torch.bfloat16], Tensor[size=(13, 13), device="cuda:0", dtype=torch.bfloat16], Tensor[size=(12, 12), device="cuda:0", dtype=torch.bfloat16], Tensor[size=(11, 11), device="cuda:0", dtype=torch.bfloat16], Tensor[size=(10, 10), device="cuda:0", dtype=torch.bfloat16], Tensor[size=(9, 9), device="cuda:0", dtype=torch.bfloat16], Tensor[size=(8, 8), device="cuda:0", dtype=torch.bfloat16], Tensor[size=(7, 7), device="cuda:0", dtype=torch.bfloat16], Tensor[size=(6, 6), device="cuda:0", dtype=torch.bfloat16], Tensor[size=(5, 5), device="cuda:0", dtype=torch.bfloat16], Tensor[size=(4, 4), device="cuda:0", dtype=torch.bfloat16], Tensor[size=(3, 3), device="cuda:0", dtype=torch.bfloat16], Tensor[size=(2, 2), device="cuda:0", dtype=torch.bfloat16], Tensor[size=(1, 1), device="cuda:0", dtype=torch.bfloat16]], args=(), kwargs={}, broadcasts_input=False, name='')
To execute this test, run the following from the base repo dir:
PYTORCH_OPINFO_SAMPLE_INPUT_INDEX=0 python test/test_foreach.py TestForeachCUDA.test_parity__foreach_acos_fastpath_outplace_cuda_bfloat16
This message can be suppressed by setting PYTORCH_PRINT_REPRO_ON_FAILURE=0
```
</details>
Test file path: `test_foreach.py`
cc @clee2000 @crcrpar @mcarilli @janeyx99
| true
|
2,986,935,937
|
Don't log benchmarking event to Scuba
|
jamesjwu
|
closed
|
[
"Merged",
"ciflow/trunk",
"topic: not user facing",
"module: inductor",
"ciflow/inductor"
] | 3
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #151053
These two events are really common and make up a huge portion (~70%) of the logs we get internally in PT2 Compile Events. I don't think it's actually that useful to aggregate them, so instead of logging them to PT2 Compile Events, let's just log them only to chromium.
These two events will still be visible from tlparse: they just won't be in our internal tables. Please let me know if folks disagree.
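A minimal sketch of the pattern involved, assuming the relevant knob is the `log_pt2_compile_event` flag on `dynamo_timed` (the wrapper function and event name below are illustrative, not the actual call sites):
```python
from torch._dynamo.utils import dynamo_timed

def timed_benchmark(fn, *args, **kwargs):
    # The event still shows up as a chromium event (and therefore in tlparse),
    # but with log_pt2_compile_event=False it is not aggregated internally.
    with dynamo_timed("benchmarking", log_pt2_compile_event=False):
        return fn(*args, **kwargs)
```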
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
| true
|
2,986,903,340
|
[c10d][libuv] Add back correct EOF case check
|
fduwjj
|
closed
|
[
"oncall: distributed",
"Merged",
"ciflow/trunk",
"release notes: distributed (c10d)"
] | 7
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #151052
We removed the wrong EOF case in https://github.com/pytorch/pytorch/pull/150987, and we add the correct one back in this PR. Since https://github.com/pytorch/pytorch/pull/150987 is a fix, we merged that PR first and use this PR as a follow-up to further make the logic more complete.
cc @H-Huang @awgu @wanchaol @fegin @wz337 @wconstab @d4l3k
| true
|
2,986,900,664
|
[export] Make draft-export predispatch=True by default
|
angelayi
|
closed
|
[
"Merged",
"release notes: export"
] | 2
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #151050
* #151065
* __->__ #151051
| true
|
2,986,900,559
|
[export] Add DRAFT_EXPORT envvar
|
angelayi
|
closed
|
[
"release notes: export"
] | 2
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #151050
* #151065
* #151051
| true
|
2,986,891,275
|
[ROCm] Support torch compile graph mode for custom ops - triggered by vLLM V1 and aiter
|
hongxiayang
|
closed
|
[
"module: rocm",
"triaged",
"enhancement"
] | 1
|
COLLABORATOR
|
### 🚀 The feature, motivation and pitch
To make vLLM able to work with aiter in V1 + graph mode.
Example:
rocm/vllm-dev:llama4-20250409
model
```
export LLAMA_DIR=/data/Llama-4-Scout-17B-16E
export VLLM_ROCM_USE_AITER=1
export VLLM_ROCM_USE_AITER_RMSNORM=0
export VLLM_ROCM_USE_AITER_MOE=1
VLLM_USE_V1=1 VLLM_WORKER_MULTIPROC_METHOD=spawn SAFETENSORS_FAST_GPU=1 python basic_llama4.py
```
basic_llama4.py:
```
import os

from vllm import LLM, SamplingParams

def test():
    # Sample prompts.
    prompts = [
        "The color of the sky is blue but sometimes it can also be",
        "The capital of France is",
    ]
    # Create a sampling params object.
    sampling_params = SamplingParams(temperature=0.8,
                                     top_p=0.95,
                                     max_tokens=256)
    # Create an LLM.
    llm = LLM(
        model=os.environ.get("LLAMA_DIR",
                             "ll-re/Llama-4-Scout-17B-16E-Instruct"),
        enforce_eager=False,
        tensor_parallel_size=8,
        max_model_len=32768,
    )
    # Generate texts from the prompts.
    outputs = llm.generate(prompts, sampling_params)
    for output in outputs:
        prompt = output.prompt
        generated_text = output.outputs[0].text
        print(f"Prompt: {prompt!r}, Generated text: {generated_text!r}")

if __name__ == "__main__":
    test()
```
Errors look like:
```
ERROR 04-10 20:35:11 [core.py:386] Set TORCHDYNAMO_VERBOSE=1 for the internal stack trace (please do this especially if you're reporting a bug to PyTorch). For even more developer context, set TORCH_LOGS="+dynamo"
ERROR 04-10 20:35:11 [core.py:386]
ERROR 04-10 20:35:11 [core.py:386] Traceback (most recent call last):
ERROR 04-10 20:35:11 [core.py:386] File "/app/vllm/vllm/v1/executor/multiproc_executor.py", line 376, in worker_busy_loop
ERROR 04-10 20:35:11 [core.py:386] output = func(*args, **kwargs)
ERROR 04-10 20:35:11 [core.py:386] ^^^^^^^^^^^^^^^^^^^^^
ERROR 04-10 20:35:11 [core.py:386] File "/usr/local/lib/python3.12/dist-packages/torch/utils/_contextlib.py", line 116, in decorate_context
ERROR 04-10 20:35:11 [core.py:386] return func(*args, **kwargs)
ERROR 04-10 20:35:11 [core.py:386] ^^^^^^^^^^^^^^^^^^^^^
ERROR 04-10 20:35:11 [core.py:386] File "/app/vllm/vllm/v1/worker/gpu_worker.py", line 157, in determine_available_memory
ERROR 04-10 20:35:11 [core.py:386] self.model_runner.profile_run()
ERROR 04-10 20:35:11 [core.py:386] File "/app/vllm/vllm/v1/worker/gpu_model_runner.py", line 1573, in profile_run
ERROR 04-10 20:35:11 [core.py:386] hidden_states = self._dummy_run(self.max_num_tokens)
ERROR 04-10 20:35:11 [core.py:386] ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
ERROR 04-10 20:35:11 [core.py:386] File "/usr/local/lib/python3.12/dist-packages/torch/utils/_contextlib.py", line 116, in decorate_context
ERROR 04-10 20:35:11 [core.py:386] return func(*args, **kwargs)
ERROR 04-10 20:35:11 [core.py:386] ^^^^^^^^^^^^^^^^^^^^^
ERROR 04-10 20:35:11 [core.py:386] File "/app/vllm/vllm/v1/worker/gpu_model_runner.py", line 1423, in _dummy_run
ERROR 04-10 20:35:11 [core.py:386] hidden_states = model(
ERROR 04-10 20:35:11 [core.py:386] ^^^^^^
ERROR 04-10 20:35:11 [core.py:386] File "/usr/local/lib/python3.12/dist-packages/torch/nn/modules/module.py", line 1751, in _wrapped_call_impl
ERROR 04-10 20:35:11 [core.py:386] return self._call_impl(*args, **kwargs)
ERROR 04-10 20:35:11 [core.py:386] ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
ERROR 04-10 20:35:11 [core.py:386] File "/usr/local/lib/python3.12/dist-packages/torch/nn/modules/module.py", line 1762, in _call_impl
ERROR 04-10 20:35:11 [core.py:386] return forward_call(*args, **kwargs)
ERROR 04-10 20:35:11 [core.py:386] ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
ERROR 04-10 20:35:11 [core.py:386] File "/app/vllm/vllm/model_executor/models/mllama4.py", line 821, in forward
ERROR 04-10 20:35:11 [core.py:386] return self.language_model(input_ids, positions, intermediate_tensors,
ERROR 04-10 20:35:11 [core.py:386] ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
ERROR 04-10 20:35:11 [core.py:386] File "/usr/local/lib/python3.12/dist-packages/torch/nn/modules/module.py", line 1751, in _wrapped_call_impl
ERROR 04-10 20:35:11 [core.py:386] return self._call_impl(*args, **kwargs)
ERROR 04-10 20:35:11 [core.py:386] ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
ERROR 04-10 20:35:11 [core.py:386] File "/usr/local/lib/python3.12/dist-packages/torch/nn/modules/module.py", line 1762, in _call_impl
ERROR 04-10 20:35:11 [core.py:386] return forward_call(*args, **kwargs)
ERROR 04-10 20:35:11 [core.py:386] ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
ERROR 04-10 20:35:11 [core.py:386] File "/app/vllm/vllm/model_executor/models/llama.py", line 541, in forward
ERROR 04-10 20:35:11 [core.py:386] model_output = self.model(input_ids, positions, intermediate_tensors,
ERROR 04-10 20:35:11 [core.py:386] ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
ERROR 04-10 20:35:11 [core.py:386] File "/app/vllm/vllm/compilation/decorators.py", line 238, in __call__
ERROR 04-10 20:35:11 [core.py:386] output = self.compiled_callable(*args, **kwargs)
ERROR 04-10 20:35:11 [core.py:386] ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
ERROR 04-10 20:35:11 [core.py:386] File "/usr/local/lib/python3.12/dist-packages/torch/_dynamo/eval_frame.py", line 659, in _fn
ERROR 04-10 20:35:11 [core.py:386] raise e.with_traceback(None) from None
ERROR 04-10 20:35:11 [core.py:386] torch._dynamo.exc.Unsupported: Attempted to call function marked as skipped
ERROR 04-10 20:35:11 [core.py:386] Explanation: Dynamo does not know how to trace the builtin `aiter.jit.aiter_.PyCapsule.ck_moe.` This function is either a Python builtin (e.g. _warnings.warn) or a third-party C/C++ Python extension (perhaps created with pybind).
ERROR 04-10 20:35:11 [core.py:386] Hint: If it is a Python builtin, please file an issue on GitHub so the PyTorch team can add support for it and see the next case for a workaround.
ERROR 04-10 20:35:11 [core.py:386] Hint: If it is a third-party C/C++ Python extension, please either wrap it into a PyTorch-understood custom operator (see https://pytorch.org/tutorials/advanced/custom_ops_landing_page.html for more details) or, if it is traceable, use `torch.compiler.allow_in_graph`.
ERROR 04-10 20:35:11 [core.py:386]
ERROR 04-10 20:35:11 [core.py:386] Developer debug context: module: aiter.jit.aiter_, qualname: PyCapsule.ck_moe, skip reason: <missing reason>
ERROR 04-10 20:35:11 [core.py:386]
```
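As the hint in the log suggests, one workaround on the vLLM/aiter side is to wrap the extension call in a custom operator with a fake (meta) implementation so Dynamo can keep it in the graph. A minimal sketch, where the op name, argument list, and output shape are assumptions rather than the real `ck_moe` signature:
```python
import torch
from torch.library import custom_op

@custom_op("aiter_wrappers::ck_moe", mutates_args=())
def ck_moe(hidden_states: torch.Tensor, w1: torch.Tensor, w2: torch.Tensor,
           topk_weights: torch.Tensor, topk_ids: torch.Tensor) -> torch.Tensor:
    import aiter  # third-party C++ extension; the exact call is an assumption
    return aiter.ck_moe(hidden_states, w1, w2, topk_weights, topk_ids)

@ck_moe.register_fake
def _(hidden_states, w1, w2, topk_weights, topk_ids):
    # Assume the output has the same shape/dtype as the activations.
    return torch.empty_like(hidden_states)
```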
### Alternatives
_No response_
### Additional context
_No response_
cc @jeffdaily @sunway513 @jithunnair-amd @pruthvistony @ROCmSupport @dllehr-amd @jataylo @naromero77amd
| true
|
2,986,878,692
|
[c10d][fr] Add logging of nccl_version into fr and its dump
|
fduwjj
|
closed
|
[
"oncall: distributed",
"Merged",
"ciflow/trunk",
"release notes: distributed (c10d)"
] | 14
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #151048
Users also want to see the nccl version in the FR dump, so let's add it to FR. We only add it per rank per PG nccl comm, so this really only adds a couple of bytes to FR memory.
cc @H-Huang @awgu @wanchaol @fegin @wz337 @wconstab @d4l3k
| true
|
2,986,818,068
|
[DRAFT] Initial version of sticky export
|
tugsbayasgalan
|
closed
|
[
"fb-exported",
"Merged",
"ciflow/trunk",
"release notes: export"
] | 15
|
CONTRIBUTOR
|
Summary: This is to make torchnative demos and benchmarking real models simpler by not requiring people to find example inputs first.
Test Plan: CI
Differential Revision: D72815584
| true
|
2,986,816,689
|
remove MTIA from the check of duplicate flow events
|
fenypatel99
|
closed
|
[
"fb-exported",
"ciflow/trunk",
"topic: not user facing"
] | 6
|
MEMBER
|
Summary: For MTIA, there can be more than one event with the same correlation id, so we need to omit this check.
Test Plan: CIs
Differential Revision: D72815463
| true
|
2,986,811,207
|
c10d/Store: add clone feature (#150966) (#150966)
|
d4l3k
|
closed
|
[
"oncall: distributed",
"fb-exported",
"Merged",
"ciflow/trunk",
"release notes: distributed (c10d)"
] | 5
|
MEMBER
|
Summary:
This adds a new `clone()` method to Store which will return a new Store instance that can be used from a different thread.
This is intended to better support multiple threads with stores such as when ProcessGroupNCCL needs a store to do error propagation.
Related issue: https://github.com/pytorch/pytorch/issues/150943
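A minimal usage sketch, assuming `clone()` lands as described (the TCPStore arguments and the watchdog thread are illustrative):
```python
import threading
import torch.distributed as dist

store = dist.TCPStore("localhost", 29500, world_size=1, is_master=True)

def watchdog(base_store):
    # clone() gives this thread its own handle instead of sharing the
    # main thread's store connection.
    local = base_store.clone()
    local.set("watchdog_status", "ok")

t = threading.Thread(target=watchdog, args=(store,))
t.start()
t.join()
print(store.get("watchdog_status"))  # b'ok'
```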
Approved by: https://github.com/fduwjj
Test Plan:
contbuild & OSS CI, see https://hud.pytorch.org/commit/pytorch/pytorch/205881ea4a451574c3a3de87c42484043a955d6e
Test plan from GitHub:
```
pytest test/distributed/test_store.py -k PythonStore
pytest test/distributed/test_store.py -k clone
```
Differential Revision: D72789690
cc @H-Huang @awgu @wanchaol @fegin @fduwjj @wz337 @wconstab
| true
|
2,986,802,859
|
[dynamo] unimplemented -> unimplemented_v2 in variables/builder.py
|
williamwen42
|
closed
|
[
"Merged",
"ciflow/trunk",
"topic: not user facing",
"module: dynamo",
"ciflow/inductor",
"keep-going",
"module: compile ux"
] | 3
|
MEMBER
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #151044
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @chauhang @amjames
| true
|
2,986,799,967
|
FlexAttention: create_block_mask is considerably slow when using flattened 1D sequences (document masking / jagged tensors)
|
mikkelfo
|
open
|
[
"triaged",
"oncall: pt2",
"module: higher order operators",
"module: pt2-dispatcher",
"module: flex attention"
] | 8
|
NONE
|
### 🐛 Describe the bug
FlexAttention seems considerably slow for 1D-vector approaches where batch_size and seq_len are collapsed. This scenario is pretty common for any document-masking or jagged-tensor setup and is even part of the [FlexAttention blog examples](https://pytorch.org/blog/flexattention/#document-maskingjagged-sequences). It is especially prevalent when chaining together multiple `mask_mod` functions: `create_block_mask` then runs significantly slower than I would expect. On various systems, the code below takes between 0.15-0.30s per call, which seems quite slow. For combined `mask_mod` chains (e.g. padding, causal, and document mask), it takes ~0.35s.
Do I have an unreasonable expectation of the performance of a 1D `(batch_size*seq_len)` FlexAttention implementation that has to re-compile the mask every batch (document masking/jagged tensor setups)?
```Py
import time
import torch
from torch.nn.attention.flex_attention import create_block_mask, and_masks

# Random wrapper function to mimic document_masking setup and to force recompilation
def wrapper(randn):
    def overzero(b, h, q_idx, kv_idx):
        return randn[q_idx] >= 0
    return overzero

def causal(b, h, q_idx, kv_idx):
    return q_idx >= kv_idx

batch_size, seq_len = 512, 512

# Warmup
randn = torch.randn(batch_size*seq_len, device=torch.device("cuda"))
block_mask = create_block_mask(and_masks(wrapper(randn), causal), None, None, batch_size*seq_len, batch_size*seq_len, _compile=True)

total = 0
n = 50
for _ in range(n):
    randn = torch.randn(batch_size*seq_len, device=torch.device("cuda"))  # Forces recompilation of mask
    start = time.perf_counter()
    block_mask = create_block_mask(and_masks(wrapper(randn), causal), None, None, batch_size*seq_len, batch_size*seq_len, _compile=True)
    end = time.perf_counter()
    total += end - start

print(total/n)
```
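For reference, the document-masking variant from the blog post reduces to a `mask_mod` like the sketch below; the `document_id` layout (512 documents of 512 tokens, flattened into one sequence) is an assumption for illustration:
```Py
import torch
from torch.nn.attention.flex_attention import create_block_mask

seq_len = 512 * 512
document_id = torch.arange(512, device="cuda").repeat_interleave(512)

def document_causal(b, h, q_idx, kv_idx):
    # Attend only within the same document, causally.
    return (document_id[q_idx] == document_id[kv_idx]) & (q_idx >= kv_idx)

block_mask = create_block_mask(document_causal, None, None,
                               seq_len, seq_len, _compile=True)
```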
### Versions
Seen on both 2.5.1 and 2.6
cc @chauhang @penguinwu @zou3519 @ydwu4 @bdhirsh @Chillee @drisspg @yanboliang @BoyuanFeng
| true
|
2,986,798,922
|
[MPS] Fix `determine_backend_memory_format` logic
|
malfet
|
closed
|
[
"module: cpu",
"Merged",
"topic: bug fixes",
"release notes: mps",
"ciflow/mps",
"module: inductor",
"ciflow/inductor"
] | 3
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #150824
* __->__ #151042
If the input is channels-last, then MPS will return a channels-last output.
This fixes `GPUTests.test_convolution_4_mps` from test_torchinductor.py,
which previously failed with
```
AssertionError: expected size 3==3, stride 1==192 at dim=1; expected size 12==12, stride 48==16 at dim=2; expected size 16==16, stride 3==1 at dim=3
```
This was because the FakeTensor implementation of conv returned a `Contiguous`, rather than a `ChannelLast`, layout on MacOS-15 or later.
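A minimal sketch of the eager behavior the meta function now has to match (shapes follow the failing test's assertion but are otherwise arbitrary; per the above, on MacOS-15 or later this is expected to print `True`):
```python
import torch

x = torch.randn(2, 3, 12, 16, device="mps").to(memory_format=torch.channels_last)
w = torch.randn(3, 3, 3, 3, device="mps")
y = torch.nn.functional.conv2d(x, w, padding=1)
# Channels-last in -> channels-last out on MPS.
print(y.is_contiguous(memory_format=torch.channels_last))
```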
This doesn't seem to be very well documented, so here is the call path for the `ExternKernel` invocation of `aten::convolution`:
- First, the inductor decomp defined here is called:
https://github.com/pytorch/pytorch/blob/c93e4b829072c96e64f5d85f8f71c10f17771c06/torch/_inductor/kernel/conv.py#L424-L425
- Then it goes through the FakeTensor decomposition implemented here:
https://github.com/pytorch/pytorch/blob/320914f1b6ce7303548f84ea1bdc3d3ce5cb6e55/torch/_subclasses/fake_impls.py#L739-L740
- Finally it goes down to convolution meta registrations implemented here
https://github.com/pytorch/pytorch/blob/320914f1b6ce7303548f84ea1bdc3d3ce5cb6e55/torch/_meta_registrations.py#L2416-L2417
cc @jgong5 @mingfeima @XiaobingSuper @sanchitintel @ashokei @jingxu10 @voznesenskym @penguinwu @EikanWang @Guobing-Chen @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
| true
|
2,986,792,127
|
Log information about suppressed data dependent errors
|
laithsakka
|
closed
|
[
"Merged",
"Reverted",
"ciflow/trunk",
"release notes: fx",
"topic: not user facing",
"fx",
"ciflow/inductor",
"ci-no-td"
] | 9
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #151041
cc @ezyang @SherlockNoMad @EikanWang @jgong5 @wenzhe-nrv
| true
|
2,986,786,052
|
[ROCm] Improve behavior of get_torch_rocm_version helper function on non-ROCm systems.
|
naromero77amd
|
closed
|
[
"module: rocm",
"triaged",
"open source",
"Merged",
"ciflow/trunk",
"topic: not user facing",
"ciflow/rocm"
] | 3
|
COLLABORATOR
|
Fixes #150041
Return a zero tuple when ROCm is _not_ supported, similar to what is done for the CUDA version of this function.
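A minimal sketch of the intended behavior (the helper name follows the title; its actual location under `torch.testing._internal` may differ):
```python
import torch

def get_torch_rocm_version():
    if not torch.version.hip:
        # Non-ROCm build: mirror the CUDA helper and return a zero tuple.
        return (0, 0)
    return tuple(int(x) for x in torch.version.hip.split(".")[:2])
```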
cc @jeffdaily @sunway513 @jithunnair-amd @pruthvistony @ROCmSupport @dllehr-amd @jataylo @hongxiayang
| true
|
2,986,730,656
|
torch.gather for float64 is still slow on AMD after float32 has been fixed
|
hfhongzy
|
closed
|
[
"module: performance",
"module: rocm",
"triaged"
] | 3
|
NONE
|
## 🐛 Describe the bug
`torch.scatter_add` for `float64` can be very slow running on AMD GPU compared with NVIDIA GPU.
According to the profiling results, the problem comes from the kernel `_scatter_gather_elementwise_kernel`. A previous issue [torch.gather can be slow on AMD with duplicated index](https://github.com/pytorch/pytorch/issues/128631) has already mentioned this problem, but the fix only resolved the issue for float32; float64 performance is still slow.
To reproduce, run the following script on an AMD GPU. It takes about 4s for float64 but only 0.005s for float32.
```python
import time
import torch

c = torch.tensor([i + 0.1 for i in range(1000_0000)], dtype=torch.float64, device='cuda')
a = torch.tensor([i % 4 for i in range(1000_0000)], device='cuda')
b = torch.zeros(4, dtype=c.dtype, device='cuda')
print(c.dtype, a.dtype)
t1 = time.time()
b.scatter_add(0, a, c)
torch.cuda.synchronize()
t2 = time.time()
print("time = ", t2 - t1)
```
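For a cleaner measurement, a variant that warms up first and synchronizes before starting the timer (sketch only; dtype and sizes mirror the script above):
```python
import time
import torch

def bench(dtype, n=10_000_000, iters=10):
    c = torch.arange(n, dtype=dtype, device='cuda') + 0.1
    a = torch.arange(n, device='cuda') % 4
    b = torch.zeros(4, dtype=dtype, device='cuda')
    b.scatter_add(0, a, c)           # warm-up
    torch.cuda.synchronize()
    t0 = time.time()
    for _ in range(iters):
        b.scatter_add(0, a, c)
    torch.cuda.synchronize()
    return (time.time() - t0) / iters

print("float32:", bench(torch.float32))
print("float64:", bench(torch.float64))
```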
Profiling Results
```
------------------------------------------------------- ------------ ------------ ------------ ------------ ------------ ------------ ------------ ------------ ------------ ------------
Name Self CPU % Self CPU CPU total % CPU total CPU time avg Self CUDA Self CUDA % CUDA total CUDA time avg # of Calls
------------------------------------------------------- ------------ ------------ ------------ ------------ ------------ ------------ ------------ ------------ ------------ ------------
aten::scatter_add_ 0.00% 193.818us 0.00% 2.191ms 199.219us 73.900s 99.97% 73.900s 6.718s 11
void at::native::_scatter_gather_elementwise_kernel<... 0.00% 0.000us 0.00% 0.000us 0.000us 73.890s 99.96% 73.890s 10.556s 7
aten::copy_ 0.04% 29.496ms 0.43% 322.901ms 3.799ms 10.752ms 0.01% 10.752ms 126.496us 85
aten::to 0.00% 167.464us 0.43% 317.778ms 2.542ms 0.000us 0.00% 10.736ms 85.888us 125
aten::_to_copy 0.00% 481.008us 0.43% 317.610ms 4.411ms 0.000us 0.00% 10.736ms 149.111us 72
Memcpy HtoD (Host -> Device) 0.00% 0.000us 0.00% 0.000us 0.000us 10.387ms 0.01% 10.387ms 1.484ms 7
void at::native::_scatter_gather_elementwise_kernel<... 0.00% 0.000us 0.00% 0.000us 0.000us 7.263ms 0.01% 7.263ms 2.421ms 3
aten::unique_dim 0.00% 308.995us 0.15% 115.258ms 38.419ms 4.117ms 0.01% 4.212ms 1.404ms 3
void at::native::_scatter_gather_elementwise_kernel<... 0.00% 0.000us 0.00% 0.000us 0.000us 2.416ms 0.00% 2.416ms 2.416ms 1
aten::index 0.00% 165.703us 0.02% 11.353ms 1.622ms 806.234us 0.00% 1.577ms 225.226us 7
------------------------------------------------------- ------------ ------------ ------------ ------------ ------------ ------------ ------------ ------------ ------------ ------------
```
## Versions
Latest PyTorch.
I would appreciate it if previous contributors could help fix it.
@smalltalkman @mhalk @jithunnair-amd @hongxiayang @jerrymannil @Yuzhen11
cc @msaroufim @jeffdaily @sunway513 @jithunnair-amd @pruthvistony @ROCmSupport @dllehr-amd @jataylo @hongxiayang @naromero77amd
| true
|
2,986,726,033
|
Do not log exception when recording is disabled or already recording
|
laithsakka
|
closed
|
[
"Merged",
"Reverted",
"ciflow/trunk",
"release notes: fx",
"topic: not user facing",
"fx",
"ciflow/inductor",
"ci-no-td"
] | 15
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #151038
I am not sure why we log all exceptions here and re-raise them, but at least when recording is disabled this should be transparent; in particular, logging DDEs could be spamming.
before:
<img width="995" alt="Screenshot 2025-04-10 at 12 47 31 PM" src="https://github.com/user-attachments/assets/f90d4557-d958-4558-a917-0d687366cad1" />
cc @ezyang @SherlockNoMad @EikanWang @jgong5 @wenzhe-nrv
| true
|
2,986,722,074
|
[Inductor UT] Generalize device-bias code in `test_flex_attention.py`
|
anmyachev
|
closed
|
[
"triaged",
"open source",
"topic: not user facing",
"module: inductor"
] | 4
|
COLLABORATOR
|
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
Part of https://github.com/pytorch/pytorch/pull/143553
@etaf @davidberard98 @hoshibara @guangyey could you take a look?
| true
|
2,986,663,219
|
Remove conda usage in windows binary builds
|
atalman
|
closed
|
[
"Merged",
"ciflow/trunk",
"topic: not user facing",
"ciflow/binaries_wheel"
] | 4
|
CONTRIBUTOR
|
This is related to: https://github.com/pytorch/pytorch/issues/146048
This removes conda from Windows binary builds. At this point we are only removing conda and replacing it with Python builds, not rewriting all batch files as Python or bash.
Additionally, it cleans up unused files:
```
.ci/pytorch/windows/internal/static_lib_test.bat
.ci/pytorch/windows/internal/env_fix.bat
.ci/pytorch/windows/internal/vs_install.bat
```
| true
|