| id (int64, 2.74B-3.05B) | title (string, 1-255 chars) | user (string, 2-26 chars) | state (string, 2 classes) | labels (list, 0-24 items) | comments (int64, 0-206) | author_association (string, 4 classes) | body (string, 7-62.5k chars, nullable) | is_title (bool, 1 class) |
|---|---|---|---|---|---|---|---|---|
2,961,473,536
|
[PP] Add schedule visualizer
|
H-Huang
|
closed
|
[
"oncall: distributed",
"release notes: distributed (pipeline)",
"module: pipelining"
] | 1
|
MEMBER
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #150333
Added a new private file (`_schedule_visualizer.py`) with some helper methods that can be used to visualize the operations of a schedule and plot them with matplotlib.
Interleaved1F1B (pp_group=4, microbatches=8):

cc @awgu @kwen2501 @wanchaol @fegin @fduwjj @wz337 @wconstab @d4l3k @c-p-i-o
| true
|
2,961,471,730
|
Stash tensors for reduce_scatter_v and all_gather_v
|
kwen2501
|
closed
|
[
"oncall: distributed",
"release notes: distributed (c10d)"
] | 1
|
CONTRIBUTOR
|
Mirror of #149753 for 2.7 release.
Fix 1 of 3 for https://github.com/pytorch/pytorch/pull/148590
https://github.com/pytorch/pytorch/pull/148590 removed record_stream. Since the previous AVOID_RECORD flag does not cover reduce_scatter_v and all_gather_v, which are in coalescing form, these two ops were missed, causing TorchRec's Variable Length Embedding to fail.
This PR adds a vector to stash tensors while coalescing is in flight. At the end of coalescing, it hands the tensors over to Work.
cc @H-Huang @awgu @wanchaol @fegin @fduwjj @wz337 @wconstab @d4l3k @c-p-i-o
| true
|
2,961,460,155
|
[FlexAttention] Don't load invalid values from mask mod
|
drisspg
|
open
|
[
"module: inductor",
"ciflow/inductor"
] | 3
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #150331
## Summary
See https://github.com/pytorch/pytorch/issues/150321 for more details
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
| true
|
2,961,460,009
|
[FlexAttention] Allow dispatch to SAC for flex
|
drisspg
|
closed
|
[] | 2
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #150331
* __->__ #150330
| true
|
2,961,456,398
|
Testing binary builds
|
malfet
|
closed
|
[
"topic: not user facing",
"ciflow/binaries_wheel"
] | 1
|
CONTRIBUTOR
|
Fixes #ISSUE_NUMBER
| true
|
2,961,449,189
|
☂️ Update submodule dependencies to supported version of Cmake
|
malfet
|
open
|
[
"module: build",
"triaged"
] | 0
|
CONTRIBUTOR
|
### 🐛 Describe the bug
Recent cmake-4.0.0 uncovered that a number of PyTorch projects depend on pretty old (and sometimes archived) repositories which are no longer compatible and should be cleaned up.
Below is the list of submodules that need to be updated/removed (in no particular order):
- [ ] protobuf
- [x] gloo: Updated by https://github.com/pytorch/pytorch/pull/150320
- [ ] FP16
- [ ] PSimd
- [ ] TensorPipe
- [ ] hiprtc
### Versions
CI
cc @seemethere
| true
|
2,961,429,184
|
[cherry-pick] [CI] Disable some tests that are failing in periodic #150059
|
atalman
|
closed
|
[
"topic: not user facing",
"ciflow/periodic",
"module: inductor",
"ciflow/inductor"
] | 1
|
CONTRIBUTOR
|
Disabling some tests to restore periodic
nogpu avx512 timeout:
https://hud.pytorch.org/pytorch/pytorch/commit/59f14d19aea4091c65cca2417c509e3dbf60c0ed#38492953496-box
profiler failure: https://hud.pytorch.org/pytorch/pytorch/commit/7ae0ce6360b6e4f944906502d20da24c04debee5#38461255009-box
test_accelerator failure:
https://hud.pytorch.org/pytorch/pytorch/commit/87bfd66c3c7061db6d36d8daa62f08f507f90e39#39476723746-box origin: 146098
test_overrides failure:
https://hud.pytorch.org/pytorch/pytorch/commit/bf752c36da08871d76a66fd52ad09f87e66fc770#39484562957-box origin: 146098
inductor cpu repro:
https://hud.pytorch.org/pytorch/pytorch/commit/bb9c4260249ea0c57e87395eff5271fb479efb6a#38447525659-box
functorch eager transforms:
https://hud.pytorch.org/pytorch/pytorch/commit/8f858e226ba81fde41d39aa34f1fd4cb4a4ecc51#39488068620-box https://hud.pytorch.org/pytorch/pytorch/commit/f2cea01f7195e59abd154b5551213ee3e38fa40d#39555064878 https://hud.pytorch.org/pytorch/pytorch/commit/b5281a4a1806c978e34c5cfa0befd298e469b7fd#39599355600 either 148288 or 148261?
https://hud.pytorch.org/hud/pytorch/pytorch/2ec9aceaeb77176c4bdeb2d008a34cba0cd57e3c/1?per_page=100&name_filter=periodic&mergeLF=true
Pull Request resolved: https://github.com/pytorch/pytorch/pull/150059
Approved by: https://github.com/ZainRizvi, https://github.com/atalman, https://github.com/malfet
Fixes #ISSUE_NUMBER
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
| true
|
2,961,424,430
|
Add reverse engineered code to iOS build
|
JohnDaWalka
|
open
|
[
"triaged",
"open source",
"topic: not user facing"
] | 2
|
NONE
|
Add reverse engineered code to the repository and update relevant scripts and documentation.
* **Build Process**
- Include reverse engineered code in the build process in `.circleci/scripts/binary_ios_build.sh` and `scripts/build_ios.sh`.
- Update `scripts/xcode_build.rb` to include reverse engineered code in the Xcode build process.
* **Testing**
- Add a step to test the reverse engineered code in `.circleci/scripts/binary_ios_test.sh`.
* **Upload Process**
- Include reverse engineered code in the upload process in `.circleci/scripts/binary_ios_upload.sh`.
* **Dependencies**
- Add additional dependencies required for the reverse engineered code in `.github/requirements/pip-requirements-iOS.txt`.
* **Documentation**
- Update `README.md` to reflect the addition of the reverse engineered code and provide instructions on how to use it.
| true
|
2,961,305,050
|
Avoid circular imports in tracing_state_functions
|
justinchuby
|
closed
|
[
"triaged",
"open source",
"Merged",
"ciflow/trunk",
"topic: not user facing",
"module: dynamo",
"ciflow/inductor"
] | 12
|
COLLABORATOR
|
`tracing_state_functions` references some torch functions from submodules, like `torch.onnx.is_in_onnx_export`, that could trigger module initialization and circular imports. I turned the mapping into a function so that the dictionary is not initialized at torch import.
(discovered in https://github.com/pytorch/pytorch/pull/149646)
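A minimal sketch of the lazy-mapping pattern described above (the keys and values shown here are illustrative, not the actual contents of `tracing_state_functions`):
```python
import torch

def _tracing_state_functions():
    # Built on first call instead of at `import torch`, so referencing
    # torch.onnx / torch.jit here cannot trigger import-time circular imports.
    return {
        torch.onnx.is_in_onnx_export: False,
        torch.jit.is_scripting: False,
        torch.jit.is_tracing: False,
    }
```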
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @chauhang @amjames
| true
|
2,961,299,108
|
[ROCm] cmake 4 workaround for hiprtc
|
amdfaa
|
closed
|
[
"module: rocm",
"open source",
"Merged",
"ciflow/trunk",
"topic: not user facing"
] | 9
|
CONTRIBUTOR
|
cc @jeffdaily @sunway513 @jithunnair-amd @pruthvistony @ROCmSupport @dllehr-amd @jataylo @hongxiayang @naromero77amd
| true
|
2,961,227,011
|
[dynamo] add error message for unsupported LOAD_BUILD_CLASS
|
williamwen42
|
closed
|
[
"Merged",
"ciflow/trunk",
"topic: not user facing",
"module: dynamo",
"ciflow/inductor",
"module: compile ux"
] | 3
|
MEMBER
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #150341
* __->__ #150323
Improved error message for https://github.com/pytorch/pytorch/issues/128942
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @chauhang @amjames
| true
|
2,961,217,925
|
[logging] Add pgo remote get/put timings to dynamo_compile
|
masnesral
|
closed
|
[
"Merged",
"ciflow/trunk",
"topic: not user facing",
"module: dynamo",
"ciflow/inductor"
] | 3
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #150322
Test Plan: https://fburl.com/scuba/dynamo_compile/sandbox/xf950tw8
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @chauhang @amjames
| true
|
2,961,191,886
|
Masking out Loads from mask_modded out regions
|
drisspg
|
open
|
[
"triaged",
"module: flex attention"
] | 0
|
CONTRIBUTOR
|
# Summary
There is a very common gotcha/footgun that users run into when composing score_mods w/ blockmasks.
```Py
import torch
from torch.nn.attention.flex_attention import create_block_mask, flex_attention
B, H, SEQ_LEN, HEAD_DIM = 1, 1, 128, 16
MAX_LEN = 127 # 1 less than the possible values
buffer = torch.arange(MAX_LEN).to(device="cuda")
def make_tensor():
return torch.randn((B, SEQ_LEN, H, HEAD_DIM), device="cuda").permute(0,2,1,3)
q, k, v = make_tensor(), make_tensor(), make_tensor()
def causal(b, h, q, kv):
return q >= kv
def mask_mod(b, h, q, kv):
upper = kv < MAX_LEN
return upper & causal(b, h, q, kv)
def score_mod(score, b, h, q, kv):
# Users expect that we will only apply where we have masked out values
# However for the last block and last row we will attempt to read from the buffer
return score + buffer[kv]
bm = create_block_mask(mask_mod, None, None, SEQ_LEN, SEQ_LEN)
out = flex_attention(q, k, v, score_mod=score_mod, block_mask=bm)
print(out)
```
Above is a minimal repro that highlights this issue. The crux of the issue has to do w/ the overlapping semantics of mask_mods and score_mods. If a user is reading from a buffer and utilizing both a score_mod and mask_mod, they typically ensure that any read is globally correct by using the mask_mod to "mask out" any invalid locations.
In this example the line doing that is: `upper = kv < MAX_LEN`. They then write their score_mod in a mask-oblivious way, which feels natural.
If you run this example w/ compute-sanitizer:
```Shell
========= at triton_tem_fused_0+0xa90 in /tmp/torchinductor_drisspg/iu/ciuwwipvcnwchdwps3ekuddezubfuty76o2ynvjrinyvcw4l4qkv.py:398
========= by thread (15,0,0) in block (1,0,0)
========= Address 0x7f434de003f0 is out of bounds
========= and is inside the nearest allocation at 0x7f434de00000 of size 1,016 bytes
========= Saved host backtrace up to driver entry point at kernel launch time
========= Host Frame: [0x331d4f]
========= in /lib64/libcuda.so.1
========= Host Frame:launch [0x2513]
```
You can see that the IMA is from a read OOB on the passed in buffer:
```Py
tmp0 = (qk)
tmp1 = (n)
tmp2 = tl.load(in_ptr8 + tmp1) # <--------------
```
If we were free to read OOB this would actually work correctly (and in fact this can happen since we have the cuda-caching allocator)
```Py
# ~~~~~~~~~~~~~~~~~~~ Apply score modification ~~~~~~~~~~~~~~~~~~~
# If this is the last block of a non divisible seqlen, we still need to load [BLOCK_M, BLOCK_N] elements,
# which is larger than the actual number of elements. To avoid access memory out of bound,
# we need to mask out the elements that are out of Q_LEN & KV_LEN.
m = get_bounded_indices(offs_m, Q_LEN if CHECK_BLOCK_BOUNDARY else None)
n = get_bounded_indices(offs_n, KV_LEN if CHECK_BLOCK_BOUNDARY else None)
tmp0 = (qk)
tmp1 = (n)
tmp2 = tl.load(in_ptr8 + tmp1)
tmp3 = tmp2.to(tl.float32)
tmp4 = tmp0 + tmp3
post_mod_scores = tmp4
if CHECK_BLOCK_BOUNDARY:
mask_mod_output = tl.where(offs_n < KV_LEN, mask_mod_output, False)
# apply mask for partially unmasked blocks
post_mod_scores = tl.where(mask_mod_output, post_mod_scores, float("-inf"))
```
This is because the mask_mod is applied after we generate the scores and indeed masks out these invalid scores.
### Possible solutions
1. We can update the graph we generate for score_mods so that they include the masking from mask_mod. We would like to only apply this masking on non FULL_BLOCKS and only when we read from input buffers. This would likely look something like
```Py
valid_load_mask = mask_mod(...) if needs_masking else tl.full(index.shape, True, tl.int1)
inpt = tl.load(buf + index, mask=valid_load_mask, other=0.0)
```
This is pretty trivial for users to do on their end (see the sketch below), but a little trickier to do for arbitrary score_mod captures, and not trivial to restrict to only the non FULL BLOCKS.
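For reference, a hedged sketch of that user-side workaround, reusing the names from the repro above: clamping keeps the buffer load in bounds, and the mask_mod already discards those positions, so the clamped value is never actually used.
```python
def safe_score_mod(score, b, h, q, kv):
    # Keep the index in bounds; kv >= MAX_LEN is masked out by mask_mod anyway.
    idx = torch.clamp(kv, max=MAX_LEN - 1)
    return score + buffer[idx]

out = flex_attention(q, k, v, score_mod=safe_score_mod, block_mask=bm)
```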
cc @chauhang @penguinwu @zou3519 @ydwu4 @bdhirsh @Chillee @yanboliang @BoyuanFeng
| true
|
2,961,145,000
|
Update gloo submodule
|
malfet
|
closed
|
[
"Merged",
"ciflow/trunk",
"topic: not user facing"
] | 3
|
CONTRIBUTOR
|
That updates its CMake minimum version (via https://github.com/facebookincubator/gloo/pull/424) and removes the cmake-4.0.0 workarounds for gloo.
| true
|
2,961,142,030
|
[dynamo] Bad error message when trying to compile async functions due to bad bytecode jump targets
|
williamwen42
|
open
|
[
"triaged",
"oncall: pt2",
"module: dynamo",
"module: compile ux"
] | 0
|
MEMBER
|
Encountered this when trying to compile unsupported async bytecodes.
Repro:
```python
import torch
async def fn():
return 1
torch.compile(fn, backend="eager", fullgraph=True)()
```
Output:
```
(/data/users/williamwen/py312-env) [williamwen@devgpu020.odn1 /data/users/williamwen/pytorch (84684e93)]$ python playground.py
Traceback (most recent call last):
File "/data/users/williamwen/pytorch/playground.py", line 8, in <module>
torch.compile(fn, backend="eager", fullgraph=True)()
File "/data/users/williamwen/pytorch/torch/_dynamo/eval_frame.py", line 658, in _fn
return fn(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^
File "/data/users/williamwen/pytorch/torch/_dynamo/convert_frame.py", line 1453, in __call__
return self._torchdynamo_orig_callable(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/data/users/williamwen/pytorch/torch/_dynamo/convert_frame.py", line 619, in __call__
return _compile(
^^^^^^^^^
File "/data/users/williamwen/pytorch/torch/_dynamo/convert_frame.py", line 1131, in _compile
raise InternalTorchDynamoError(
File "/data/users/williamwen/pytorch/torch/_dynamo/convert_frame.py", line 1080, in _compile
guarded_code = compile_inner(code, one_graph, hooks, transform)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/data/users/williamwen/pytorch/torch/_utils_internal.py", line 97, in wrapper_function
return function(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^
File "/data/users/williamwen/pytorch/torch/_dynamo/convert_frame.py", line 782, in compile_inner
return _compile_inner(code, one_graph, hooks, transform)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/data/users/williamwen/pytorch/torch/_dynamo/convert_frame.py", line 818, in _compile_inner
out_code = transform_code_object(code, transform)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/data/users/williamwen/pytorch/torch/_dynamo/bytecode_transformation.py", line 1422, in transform_code_object
transformations(instructions, code_options)
File "/data/users/williamwen/pytorch/torch/_dynamo/convert_frame.py", line 264, in _fn
return fn(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^
File "/data/users/williamwen/pytorch/torch/_dynamo/convert_frame.py", line 758, in transform
propagate_inst_exn_table_entries(instructions)
File "/data/users/williamwen/pytorch/torch/_dynamo/bytecode_transformation.py", line 965, in propagate_inst_exn_table_entries
indexof[inst.exn_tab_entry.end],
~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^
torch._dynamo.exc.InternalTorchDynamoError: KeyError: Instruction(opcode=121, opname='RETURN_CONST', arg=1, argval=1, offset=6, starts_line=5, is_jump_target=False, positions=Positions(lineno=5, end_lineno=5, col_offset=11, end_col_offset=12), target=None, exn_tab_entry=InstructionExnTabEntry(start=Instruction(opname=RESUME, offset=4), end=Instruction(opname=RETURN_CONST, offset=6), target=Instruction(opname=CALL_INTRINSIC_1, offset=8), depth=0, lasti=True), argrepr=None)
```
cc @chauhang @penguinwu @voznesenskym @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @amjames
| true
|
2,961,135,303
|
[cuDNN][SDPA][WIP] cuDNN >= 9.8.0 should support seqlen 1
|
eqy
|
closed
|
[
"module: cudnn",
"module: cuda",
"open source",
"ciflow/trunk",
"topic: not user facing",
"module: sdpa"
] | 2
|
COLLABORATOR
|
Still testing edge cases
cc @csarofeen @ptrblck @xwang233 @msaroufim
| true
|
2,961,130,405
|
[WIP] standalone torch.inductor.compile API
|
zou3519
|
closed
|
[
"module: dynamo",
"ciflow/inductor"
] | 2
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #150317
[no-ci]
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @chauhang @amjames
| true
|
2,961,064,782
|
assert on all_reduce_event only if it's not CPU device.
|
Ritesh1905
|
closed
|
[
"oncall: distributed",
"fb-exported",
"Merged",
"ciflow/trunk",
"release notes: distributed (fsdp)",
"ciflow/inductor"
] | 9
|
CONTRIBUTOR
|
Summary: For CPU-based runs, `all_reduce_event` would be None, since it is the result of `all_reduce_stream.record_event()`, which does not do much other than return None when the device type is CPU.
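A minimal sketch of the guard described in the summary (variable and function names here are illustrative, not the actual FSDP code):
```python
import torch

def maybe_sync(all_reduce_event, device: torch.device) -> None:
    # On CPU, record_event() on the all-reduce "stream" effectively yields None,
    # so only assert on / synchronize the event for accelerator devices.
    if device.type == "cpu":
        return
    assert all_reduce_event is not None
    all_reduce_event.synchronize()
```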
Test Plan: CI
Differential Revision: D72176406
cc @H-Huang @awgu @kwen2501 @wanchaol @fegin @fduwjj @wz337 @wconstab @d4l3k @c-p-i-o
| true
|
2,961,019,783
|
Update periodic.yml to test NVIDIA GPU hosted runner
|
zhe-thoughts
|
closed
|
[
"open source",
"topic: not user facing"
] | 2
|
NONE
|
Update periodic.yml to test NVIDIA GPU hosted runner
Fixes #ISSUE_NUMBER
| true
|
2,960,998,471
|
[PT2][cutlass backend] No suitable Cutlass GEMM configs for max-autotune mode
|
efsotr
|
closed
|
[
"triaged",
"oncall: pt2",
"module: inductor"
] | 2
|
NONE
|
### 🐛 Describe the bug
```python
import os
os.environ["TORCHINDUCTOR_MAX_AUTOTUNE_GEMM_BACKENDS"] = "ATEN,TRITON,CPP,CUTLASS"
# os.environ["TORCH_LOGS"] = "+dynamo"
import logging
import torch
import torch._inductor.config as config
import torch._inductor.codegen.cuda.gemm_template as gemm_template
gemm_template.log.setLevel(logging.INFO)
config.cuda.cutlass_dir = "../../cutlass"
config.debug = True
x = torch.randn(8192, 2048, dtype=torch.bfloat16, device="cuda")
y = torch.randn(2048, 2048, dtype=torch.bfloat16, device="cuda")
def gemm(a, b):
return torch.nn.functional.linear(a, b)
compiled_gemm = torch.compile(gemm, mode="max-autotune")
z = compiled_gemm(x, y)
```
```log
W0401 00:02:00.853000 3516130 site-packages/torch/_inductor/codegen/cuda/gemm_template.py:518] [0/0] No suitable Cutlass GEMM configs found, fallbacks used ( len(ops)=0, output_layout=FixedLayout('cuda', torch.bfloat16, size=[8192, 2048], stride=[2048, 1]), input_layouts=[FixedLayout('cuda', torch.bfloat16, size=[8192, 2048], stride=[2048, 1]), FixedLayout('cuda', torch.bfloat16, size=[2048, 2048], stride=[1, 2048])], input_strides=[[2048, 1], [1, 2048]] )
```
First, `CUTLASSArgs` was missing the `exclude_kernels` and `instantiation_level` attributes. After I found the code and filled in the missing parts, this issue appeared.
Then, through further investigation, I found that when sm < 90 there are no CUTLASS 3.x GEMM templates available, only CUTLASS 2.x GEMM templates, and it just so happens that max-autotune only uses CUTLASS 3.x GEMM templates when tuning mm.
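For context, a quick way to check whether the GPU can use CUTLASS 3.x templates at all (a hedged sketch; the sm >= 90 threshold is taken from the investigation above, not from an official API):
```python
import torch

major, minor = torch.cuda.get_device_capability()
if major < 9:
    print(f"sm_{major}{minor}: no CUTLASS 3.x GEMM templates available; "
          "max-autotune falls back to ATen/Triton for mm")
```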
### Versions
```
torch 2.5.1+cu124
```
cc @chauhang @penguinwu @voznesenskym @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @chenyang78 @kadeng @muchulee8 @amjames @aakhundov
| true
|
2,960,978,614
|
Lint rule for always using std::optional?
|
clee2000
|
closed
|
[
"module: ci",
"module: lint",
"triaged"
] | 2
|
CONTRIBUTOR
|
There were a couple of PRs to use std::optional instead of c10::optional and a few other similar functions. Is there a lint rule we can make to prevent people from continuing to use c10::optional?
Some internal failures might have been caught if this lint rule did exist: https://github.com/pytorch/pytorch/pull/150129#issuecomment-2766739155
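A minimal sketch of what such a check could look like as a plain grep-style script (illustrative only; PyTorch's actual lintrunner adapters are structured differently):
```python
import pathlib
import re
import sys

# Ban c10::optional and friends in favor of the std:: equivalents.
PATTERN = re.compile(r"\bc10::(optional|nullopt|make_optional)\b")

def check_file(path: pathlib.Path) -> int:
    errors = 0
    for lineno, line in enumerate(path.read_text(errors="ignore").splitlines(), 1):
        if PATTERN.search(line):
            print(f"{path}:{lineno}: use std::optional instead of c10::optional")
            errors += 1
    return errors

if __name__ == "__main__":
    total = sum(check_file(pathlib.Path(p)) for p in sys.argv[1:])
    sys.exit(1 if total else 0)
```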
cc @seemethere @malfet @pytorch/pytorch-dev-infra @r-barnes since I think they were the one to do some of the changes
| true
|
2,960,946,019
|
[WIP] Refactor CUDAAllocatorConfig to reuse AllocatorConfig
|
guangyey
|
open
|
[
"open source",
"release notes: cpp",
"topic: not user facing"
] | 2
|
COLLABORATOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #151298
* #138222
* __->__ #150312
* #149601
| true
|
2,960,933,781
|
[Inductor] Synchronize type annotations between torch and triton
|
penguin-wwy
|
open
|
[
"triaged",
"open source",
"topic: not user facing",
"module: inductor"
] | 13
|
CONTRIBUTOR
|
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
| true
|
2,960,923,800
|
cd: Fix naming for windows arm64 libtorch builds
|
seemethere
|
closed
|
[
"Merged",
"ciflow/trunk",
"topic: not user facing"
] | 4
|
MEMBER
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #150310
Apparently the magical incantation to name these correctly lies in the build_variant variable; otherwise it silently does nothing.
Signed-off-by: Eli Uriegas <eliuriegas@meta.com>
| true
|
2,960,874,595
|
DISABLED test_foreach_copy_with_multi_dtypes__foreach_copy_cuda_int16 (__main__.TestForeachCUDA)
|
pytorch-bot[bot]
|
open
|
[
"triaged",
"module: flaky-tests",
"skipped",
"module: mta"
] | 7
|
NONE
|
Platforms: linux, slow
This test was disabled because it is failing in CI. See [recent examples](https://hud.pytorch.org/flakytest?name=test_foreach_copy_with_multi_dtypes__foreach_copy_cuda_int16&suite=TestForeachCUDA&limit=100) and the most recent trunk [workflow logs](https://github.com/pytorch/pytorch/runs/39687880634).
Over the past 3 hours, it has been determined flaky in 3 workflow(s) with 6 failures and 3 successes.
**Debugging instructions (after clicking on the recent samples link):**
DO NOT ASSUME THINGS ARE OKAY IF THE CI IS GREEN. We now shield flaky tests from developers so CI will thus be green but it will be harder to parse the logs.
To find relevant log snippets:
1. Click on the workflow logs linked above
2. Click on the Test step of the job so that it is expanded. Otherwise, the grepping will not work.
3. Grep for `test_foreach_copy_with_multi_dtypes__foreach_copy_cuda_int16`
4. There should be several instances run (as flaky tests are rerun in CI) from which you can study the logs.
Test file path: `test_foreach.py`
cc @clee2000 @crcrpar @mcarilli @janeyx99
| true
|
2,960,856,869
|
Update ExecuTorch pin to latest viable/strict 3/28/2025
|
mergennachin
|
closed
|
[
"Merged",
"ciflow/trunk",
"topic: not user facing",
"ciflow/inductor"
] | 9
|
CONTRIBUTOR
|
From latest viable/strict: https://hud.pytorch.org/hud/pytorch/executorch/viable%2Fstrict/1?per_page=50
Fixes https://github.com/pytorch/pytorch/issues/144480
This commit has important CI stability fixes, such as https://github.com/pytorch/executorch/pull/9561 and https://github.com/pytorch/executorch/pull/9634
| true
|
2,960,834,655
|
test dynamo
|
Sunnie912
|
open
|
[
"fb-exported",
"topic: not user facing"
] | 2
|
CONTRIBUTOR
|
Summary: Testing failures on GitHub
Differential Revision: D72172374
| true
|
2,960,808,257
|
[Hierarchical Compile] Apply deduplication after output node creation
|
mlazos
|
closed
|
[
"Merged",
"ciflow/trunk",
"module: dynamo",
"ciflow/inductor",
"release notes: dynamo"
] | 3
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #150306
* #150305
* #150304
* #150303
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @chauhang @amjames
| true
|
2,960,808,085
|
[Hierarchical Compile] Add cycle detection to graph region expansion
|
mlazos
|
closed
|
[
"Merged",
"module: dynamo",
"ciflow/inductor",
"release notes: dynamo"
] | 2
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #150306
* __->__ #150305
* #150304
* #150303
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @chauhang @amjames
| true
|
2,960,807,909
|
[Hierarchical Compile] Add cycle detection function for debug
|
mlazos
|
closed
|
[
"Merged",
"topic: not user facing",
"module: dynamo",
"ciflow/inductor"
] | 2
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #150306
* #150305
* __->__ #150304
* #150303
Remove print
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @chauhang @amjames
| true
|
2,960,807,748
|
[Hierarchical Compile] Remove spammy debug log
|
mlazos
|
closed
|
[
"Merged",
"topic: not user facing",
"module: dynamo",
"ciflow/inductor"
] | 2
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #150306
* #150305
* #150304
* __->__ #150303
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @chauhang @amjames
| true
|
2,960,768,915
|
allow collectives to be DCEd during collective optimizations, fix bad partitioner save decision
|
bdhirsh
|
open
|
[
"oncall: distributed",
"module: inductor",
"ciflow/inductor",
"release notes: AO frontend"
] | 1
|
CONTRIBUTOR
|
Fixes https://github.com/pytorch/pytorch/issues/146693. There are more details in the issue discussion.
I tested the repro and confirmed that we no longer save the allgather'd tensor for backward. Going to move a (smaller version of) that repro into a test and update soon.
Here's a tlparse pair of the generated graphs in the test case I wrote, with and without the config:
without config https://manifold.edge.x2p.facebook.net/v0/read/tree/logs/hirsheybar/86f0cbaf-ba47-4896-941b-d3ea75ac85f0/custom/rank_0/index.html?bucketName=tlparse_reports&apiKey=tlparse_reports-key&withPayload=1&timeoutMsec=10000
with config: https://manifold.edge.x2p.facebook.net/v0/read/tree/logs/hirsheybar/0ad9a0ea-6413-4195-9373-572977896e29/custom/rank_0/index.html?bucketName=tlparse_reports&apiKey=tlparse_reports-key&withPayload=1&timeoutMsec=10000
You can see that without the config, we save `wait_tensor_7` for backward. With it, we save `primals_1` for backward and compute the allgather directly in the bw graph
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #150355
* __->__ #150302
cc @H-Huang @awgu @kwen2501 @wanchaol @fegin @fduwjj @wz337 @wconstab @d4l3k @c-p-i-o @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
| true
|
2,960,590,390
|
[Build] Fix XPU builds inside venv
|
pytorchbot
|
closed
|
[
"open source",
"release notes: build",
"topic: bug fixes",
"topic: not user facing",
"ciflow/xpu"
] | 2
|
COLLABORATOR
|
Update the torch-xpu-ops commit to [3ee2bd2f13e1ed17a685986ff667a58bed5f2aa5](https://github.com/intel/torch-xpu-ops/commit/3ee2bd2f13e1ed17a685986ff667a58bed5f2aa5)
- Fix the build error when users build torch xpu inside a Python virtual environment. It was caused by torch-xpu-ops using `${PYTHON_EXECUTABLE}` to get the Python path; however, `${PYTHON_EXECUTABLE}` is the system Python path, while the PyTorch root cmake uses `Python_EXECUTABLE` ([Here](https://github.com/pytorch/pytorch/blob/420a9be743f8dd5d6296a32a1351c1baced12f1f/tools/setup_helpers/cmake.py#L310)). https://github.com/intel/torch-xpu-ops/issues/1461
- code diff (https://github.com/intel/torch-xpu-ops/compare/026b2c8c7c92a7b2cec5d26334006e3423251cc6..3ee2bd2f13e1ed17a685986ff667a58bed5f2aa5)
- base commit: 026b2c8c7c92a7b2cec5d26334006e3423251cc6
- new commit: 3ee2bd2f13e1ed17a685986ff667a58bed5f2aa5
| true
|
2,960,456,392
|
Update torch-xpu-ops commit pin to 3ee2bd2
|
chuanqi129
|
closed
|
[
"open source",
"Merged",
"topic: not user facing"
] | 10
|
COLLABORATOR
|
Update the torch-xpu-ops commit to [3ee2bd2f13e1ed17a685986ff667a58bed5f2aa5](https://github.com/intel/torch-xpu-ops/commit/3ee2bd2f13e1ed17a685986ff667a58bed5f2aa5)
| true
|
2,960,416,808
|
DISABLED AotInductorTest.FreeInactiveConstantBufferRuntimeConstantFoldingCuda (build.bin.test_aoti_inference)
|
pytorch-bot[bot]
|
open
|
[
"triaged",
"module: flaky-tests",
"skipped",
"oncall: pt2",
"oncall: export",
"module: aotinductor"
] | 28
|
NONE
|
Platforms: inductor
This test was disabled because it is failing in CI. See [recent examples](https://hud.pytorch.org/flakytest?name=AotInductorTest.FreeInactiveConstantBufferRuntimeConstantFoldingCuda&suite=build.bin.test_aoti_inference&limit=100) and the most recent trunk [workflow logs](https://github.com/pytorch/pytorch/runs/39682713246).
Over the past 3 hours, it has been determined flaky in 5 workflow(s) with 5 failures and 5 successes.
**Debugging instructions (after clicking on the recent samples link):**
DO NOT ASSUME THINGS ARE OKAY IF THE CI IS GREEN. We now shield flaky tests from developers so CI will thus be green but it will be harder to parse the logs.
To find relevant log snippets:
1. Click on the workflow logs linked above
2. Click on the Test step of the job so that it is expanded. Otherwise, the grepping will not work.
3. Grep for `AotInductorTest.FreeInactiveConstantBufferRuntimeConstantFoldingCuda`
4. There should be several instances run (as flaky tests are rerun in CI) from which you can study the logs.
<details><summary>Sample error message</summary>
```
Expected equality of these values:
initMemory - DATASIZE - 2 * FOLDEDDATASIZE
Which is: 21856321536
updateMemory1
Which is: 22390374400
/var/lib/jenkins/workspace/test/cpp/aoti_inference/test.cpp:544: C++ failure
```
</details>
Test file path: `` or `test/run_test`
Error: Error retrieving : 400, test/run_test: 404
cc @clee2000 @chauhang @penguinwu @avikchaudhuri @gmagogsfm @zhxchen17 @tugsbayasgalan @angelayi @suo @ydwu4 @desertfire @chenyang78 @yushangdi @benjaminglass1
| true
|
2,960,416,709
|
DISABLED test_foreach_copy_with_multi_dtypes__foreach_copy_cuda_float64 (__main__.TestForeachCUDA)
|
pytorch-bot[bot]
|
open
|
[
"triaged",
"module: flaky-tests",
"skipped",
"module: mta"
] | 7
|
NONE
|
Platforms: linux, slow
This test was disabled because it is failing in CI. See [recent examples](https://hud.pytorch.org/flakytest?name=test_foreach_copy_with_multi_dtypes__foreach_copy_cuda_float64&suite=TestForeachCUDA&limit=100) and the most recent trunk [workflow logs](https://github.com/pytorch/pytorch/runs/39679712735).
Over the past 3 hours, it has been determined flaky in 4 workflow(s) with 8 failures and 4 successes.
**Debugging instructions (after clicking on the recent samples link):**
DO NOT ASSUME THINGS ARE OKAY IF THE CI IS GREEN. We now shield flaky tests from developers so CI will thus be green but it will be harder to parse the logs.
To find relevant log snippets:
1. Click on the workflow logs linked above
2. Click on the Test step of the job so that it is expanded. Otherwise, the grepping will not work.
3. Grep for `test_foreach_copy_with_multi_dtypes__foreach_copy_cuda_float64`
4. There should be several instances run (as flaky tests are rerun in CI) from which you can study the logs.
Test file path: `test_foreach.py`
cc @clee2000 @crcrpar @mcarilli @janeyx99
| true
|
2,960,410,195
|
Disable cache and utilization stats uploading steps on s390x
|
AlekseiNikiforovIBM
|
open
|
[
"triaged",
"open source",
"ciflow/trunk",
"topic: not user facing",
"ciflow/s390"
] | 10
|
COLLABORATOR
|
There are no AWS credentials available on s390x runners, and these steps are already failing because of that.
| true
|
2,960,400,404
|
[RFC] zentorch Integration
|
naveenthangudu
|
open
|
[
"triaged",
"oncall: pt2",
"module: inductor"
] | 8
|
NONE
|
# zentorch Integration
## Table of Contents
- [1. Authors](#1-authors)
- [2. Summary](#2-summary)
- [3. Highlights](#3-highlights)
- [4. Motivation](#4-motivation)
- [4.1. Benchmarking Configuration](#41-benchmarking-configuration)
- [4.2. Single Core Performance Summary](#42-single-core-performance-summary)
- [4.3. Multi Core Performance Summary](#43-multi-core-performance-summary)
- [5. Proposal](#5-proposal)
- [5.1. User Flow](#51-user-flow)
- [5.2. Implementation](#52-implementation)
- [5.2.1. ZenDNN Library](#521-zendnn-library)
- [5.2.2. Graph Optimizations](#522-graph-optimizations)
- [5.2.3. Plugging into Torch 2.x Flow](#523-plugging-into-torch-2x-flow)
- [6. Plan](#6-plan)
- [6.1. Phase 1 - Optimizations of Recommender Systems for BF16 and FP32](#61-phase-1---optimizations-of-recommender-systems-for-bf16-and-fp32)
- [6.2. Phase 2 - Migration to New ZenDNN Library API and Architecture](#62-phase-2---migration-to-new-zendnn-library-api-and-architecture)
- [6.3. Phase 3 - Optimizations of Recommender Systems for INT8 Linear and woq_embedding_bag](#63-phase-3---optimizations-of-recommender-systems-for-int8-linear-and-woq-embedding_bag)
- [6.4. Phase 4 - Optimizations of NLPs and Generative LLM Workloads](#64-phase-4---optimizations-of-nlps-and-generative-llm-workloads)
## 1. Authors
@naveenthangudu, Sudarshan, @amukho, and Avinash
## 2. Summary
This document proposes an approach for integrating the ZenDNN library and Zentorch optimizations into PyTorch. This integration will enable inference optimizations for deep learning workloads on AMD EPYC™ CPUs. It will provide AMD EPYC™-focused optimizations in both the AOT Inductor (Torch Export) and the Inductor (Torch Compile) paths.
The integration will be carried out in three phases for Recommender Systems targeting [Phase 1](#61-phase-1---optimizations-of-recommender-systems-for-bf16-and-fp32) in PT 2.8 and others in 2.9:
* [Phase 1](#61-phase-1---optimizations-of-recommender-systems-for-bf16-and-fp32): Optimizations of Recommender Systems for BF16 and FP32
* [Phase 2](#62-phase-2---migration-to-new-zendnn-library-api-and-architecture): Migration to the new ZenDNN Library API and Architecture
* [Phase 3](#63-phase-3---optimizations-of-recommender-systems-for-int8-linear-and-woq-embedding_bag): Optimizations of Recommender Systems for INT8 linear and woq_embedding_bag
We will then add support for [NLP and LLM operators and optimizations](#64-phase-4---optimizations-of-nlps-and-generative-llm-workloads) in subsequent phases.
## 3. Highlights
* Can be optionally enabled by the end user through runtime flags.
* Optimized kernels from AOCL BLIS and beyond via the ZenDNN library.
* [ZenDNN Library](https://github.com/amd/ZenDNN) consists of kernels specifically optimized for AMD CPUs and includes [AOCL BLIS](https://github.com/amd/blis) as a submodule.
* ZenDNN Library also has the following dependencies, which will be reused from Torch:
* FBGEMM
* oneDNN
* Zentorch graph optimizations are added as FX passes on frozen graphs through the `zentorch_optimize` API.
## 4. Motivation
The main goal of this integration is to enable optimal inference performance on AMD EPYC™ CPUs for PyTorch. This will be achieved through Torch FX graph optimizations, which rewrite FX graphs intended for both the TorchInductor and AOTInductor backends to use AMD EPYC™-optimized ops from the [ZenDNN Library](https://github.com/amd/ZenDNN). To highlight the potential uplift using zentorch compared to PyTorch's Inductor on AMD EPYC™ CPUs, we benchmarked our Zentorch plugin using the PyTorch Mar 15th, 2025 Nightly build on a Genoa CPU (4th Generation AMD EPYC™ Processors) with scripts from the Torch Inductor performance dashboard.
### 4.1. Benchmarking Configuration
| Configuration | Value |
|-----------------------------|-------|
| PyTorch Version | '2025-03-15 Nightly' |
| CPU | 4th Generation AMD EPYC™ Genoa CPU |
| Number of Physical Cores | 96 |
| NUMA Nodes per Socket (NPS) | 1 |
| Model Data Type | FP32 |
### 4.2. Single Core Performance Summary
#### 4.2.1. Performance Results (2025-03-15 Nightly Release)
##### 4.2.1.1. Geometric Mean Speedup
| Compiler | torchbench | huggingface | timm_models |
|-----------|-----------:|------------:|------------:|
| inductor | 1.92x | 1.01x | 1.14x |
| zentorch | 1.58x | 1.05x | 2.04x |
##### 4.2.1.2. Mean Compilation Time (seconds)
| Compiler | torchbench | huggingface | timm_models |
|-----------|-----------:|------------:|------------:|
| inductor | 40.52 | 26.49 | 57.99 |
| zentorch | 23.97 | 19.99 | 46.41 |
##### 4.2.1.3. Peak Memory Footprint Compression Ratio (Higher Is Better)
| Compiler | torchbench | huggingface | timm_models |
|-----------|-----------:|------------:|------------:|
| inductor | 0.89x | 0.93x | 0.89x |
| zentorch | 0.76x | 0.89x | 0.90x |
### 4.3. Multi Core Performance Summary
#### 4.3.1. Performance Results (2025-03-15 Nightly Release)
##### 4.3.1.1. Geometric Mean Speedup
| Compiler | torchbench | huggingface | timm_models |
|-----------|-----------:|------------:|------------:|
| inductor | 1.09x | 0.97x | 1.07x |
| zentorch | 1.05x | 0.98x | 2.04x |
##### 4.3.1.2. Mean Compilation Time (seconds)
| Compiler | torchbench | huggingface | timm_models |
|-----------|-----------:|------------:|------------:|
| inductor | 20.08 | 25.92 | 46.31 |
| zentorch | 19.74 | 24.59 | 4.13 |
##### 4.3.1.3. Peak Memory Footprint Compression Ratio (Higher Is Better)
| Compiler | torchbench | huggingface | timm_models |
|-----------|-----------:|------------:|------------:|
| inductor | 0.87x | 0.93x | 0.90x |
| zentorch | 0.83x | 0.94x | 0.93x |
## 5. Proposal
### 5.1. User Flow
Two approaches are proposed for enabling the zentorch optimizations:
#### 5.1.1. Explicit Enablement via Configuration
Users can explicitly enable the zentorch optimizations through a configuration flag.
##### 5.1.1.1. With Inductor (torch.compile path)
The zentorch optimizations will be applied in both the model freezing path and within `torch.no_grad()` contexts:
```python
import torch
# Explicitly enable zentorch optimizations
torch._inductor.config.enable_zentorch = True
# Compile the model in evaluation mode
model = torch.compile(model.eval())
out = model(inp)
```
##### 5.1.1.2. With AOT Inductor (torch.export path)
```python
import torch
# Explicitly enable zentorch optimizations
torch._inductor.config.enable_zentorch = True
# Export and compile the model
exported_model = torch.export.export(model.eval(), (sample_input,))
so_model = torch._export.aot_compile(exported_model, (sample_input,))
```
#### 5.1.2. Automatic Detection Based on CPU Architecture
The zentorch optimizations can be automatically enabled when running on compatible AMD EPYC™ CPUs (Zen4 and Zen5 architectures). Users retain the ability to override this default behavior through explicit configuration.
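A minimal sketch of what such an automatic-detection hook could look like (the vendor-string heuristic is an assumption for illustration; the real zentorch check may differ):
```python
import platform

def _running_on_amd_cpu() -> bool:
    # Simple heuristic: look for the AMD vendor string in /proc/cpuinfo,
    # falling back to platform.processor() on non-Linux systems.
    try:
        with open("/proc/cpuinfo") as f:
            return "AuthenticAMD" in f.read()
    except OSError:
        return "AMD" in platform.processor()
```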
##### 5.1.2.1. With Inductor (torch compile path)
The zentorch optimizations will be applied in both the model freezing path and within `torch.no_grad()` contexts:
```python
import torch
# Explicitly enable or disable zentorch optimizations
#torch._inductor.config.enable_zentorch = True/False
model = torch.compile(model.eval())
out = model(inp)
```
##### 5.1.2.2. With AOT Inductor (torch export path)
```python
import torch
# Explicitly enable or disable zentorch optimizations
#torch._inductor.config.enable_zentorch = True/False
exported_model = torch.export.export(model.eval(), (sample_input,))
so_model = torch._export.aot_compile(exported_model, (sample_input,))
```
### 5.2. Implementation
#### 5.2.1. ZenDNN Library
##### 5.2.1.1. Overview
ZenDNN is a library of accelerated primitives for deep learning inference workloads on AMD EPYC™ class CPUs. The library can be interfaced with and can be used as a backend by any deep learning framework.
##### 5.2.1.2. Design
The ZenDNN library interfaces with PyTorch to run accelerated primitives on AMD EPYC™ platforms.
Various components of the library are as follows:
###### 5.2.1.2.1 Operators
An operator (or primitive) implements accelerated computation of a node, a fused computation, or a computation of a block of a DNN. For example, an operator may implement matrix-matrix multiplication for a linear graph node, a matrix multiplication followed by a series of elementwise computations (such as ReLU) as a fused computation, or a complete attention block for LLMs. In general, an operator implements multiple computational kernels. A computational kernel is selected based on various parameters such as input size, quantization level, supported ISA of the hardware, etc.
The ZenDNN library includes those operators for which there is an additive performance boost with optimizations targeted at EPYC. For all other operators, the stack falls back to x86-optimized primitives or operators available in the native framework, either through aten operators or the oneDNN library.
ZenDNN also interfaces with compute libraries such as AOCL and FBGEMM to utilize their compute kernels in its primitives.
###### 5.2.1.2.2 Runtime
The ZenDNN runtime can create command queues that schedule operators on different cores or groups of cores. This feature is used to support concurrent primitive execution. It also includes an auto-tuner that, given an operator, collects performance data for the various kernels and selects the compute kernel with the best performance.
###### 5.2.1.2.3 Profiler
The ZenDNN profiler measures performance of an operator kernel. ZenDNN supports time-based profiling of an operator. Profiling data is written in profiler logs.
#### 5.2.2. Graph Optimizations
* Performant op replacements:
* [linear](#61-phase-1---optimizations-of-recommender-systems-for-bf16-and-fp32)
* [matmul](#61-phase-1---optimizations-of-recommender-systems-for-bf16-and-fp32)
* [bmm](#61-phase-1---optimizations-of-recommender-systems-for-bf16-and-fp32)
* [embedding_bag](#61-phase-1---optimizations-of-recommender-systems-for-bf16-and-fp32)
* [qlinear](#63-phase-3---optimizations-of-recommender-systems-for-int8-linear-and-woq-embedding_bag)
* [woq_embedding_bag](#63-phase-3---optimizations-of-recommender-systems-for-int8-linear-and-woq-embedding_bag)
* Elementwise op fusions:
* Fusion of linear(addmm_1dbias) or linear_nobias(mm) with
* [relu, sigmoid, tanh, add and mul](#61-phase-1---optimizations-of-recommender-systems-for-bf16-and-fp32)
* [gelu, silu and silu+mul](#64-phase-4---optimizations-of-nlps-and-generative-llm-workloads)
* Fusion of qlinear with
* [relu, sigmoid, tanh, add and mul](#63-phase-3---optimizations-of-recommender-systems-for-int8-linear-and-woq-embedding_bag)
* [gelu, silu and silu+mul](#64-phase-4---optimizations-of-nlps-and-generative-llm-workloads)
* Horizontal or parallel op fusions
* [Fusion of parallel embedding_bag ops](#61-phase-1---optimizations-of-recommender-systems-for-bf16-and-fp32)
* [Fusion of parallel woq_embedding_bag ops](#63-phase-3---optimizations-of-recommender-systems-for-int8-linear-and-woq-embedding_bag)
* Concat folding
* By switching the ops that feed a concat to their out variants and handing them sliced views of the concat output, the concat is performed without an additional memcopy (see the sketch after this list).
* [Folding of concat into embedding_bag and linear variants](#61-phase-1---optimizations-of-recommender-systems-for-bf16-and-fp32)
* [Folding of concat into woq_embedding_bag and qlinear variants](#63-phase-3---optimizations-of-recommender-systems-for-int8-linear-and-woq-embedding_bag)
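A minimal eager-mode sketch of the concat-folding idea referenced above (toy shapes; whether the extra copy is truly elided depends on how the out= kernel handles the strided view, which is exactly what the graph pass is meant to guarantee):
```python
import torch

a_in, b_in = torch.randn(4, 8), torch.randn(4, 8)
w_a, w_b = torch.randn(16, 8), torch.randn(16, 8)

# Pre-allocate the concatenated output and let each producer write its slice
# through its out= variant instead of materializing separate tensors + cat.
out = torch.empty(4, 32)
torch.mm(a_in, w_a.t(), out=out[:, :16])
torch.mm(b_in, w_b.t(), out=out[:, 16:])

ref = torch.cat([a_in @ w_a.t(), b_in @ w_b.t()], dim=1)
assert torch.allclose(out, ref)
```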
#### 5.2.3. Plugging into Torch 2.x Flow
##### 5.2.3.1. Approach 1
We are currently using this same approach in our zentorch plugin, which will enable a smooth integration path. This approach has two parts:
* Controlling the decompositions of ops
* Decompositions of ops such as gelu and linear are controlled as needed for zentorch.
* The zentorch graph optimizations are performed after AOT Autograd and before Inductor or AOT Inductor.

##### 5.2.3.2. Approach 2
Registering the zentorch graph optimizations in the post-grad pass stage will enable zentorch optimizations in both the Inductor (torch compile) and AOT Inductor (torch export) flows (a minimal sketch follows below).
* Integration with PyTorch's existing optimization pipeline
* Register zentorch optimizations as post-grad pass plugins
* Leverage PyTorch's existing pass infrastructure for better compatibility
* Benefits compared to Approach 1
* More seamless integration with both JIT and AOT compilation paths
* Reduced code duplication across Inductor and AOT Inductor flows
* Better maintainability as PyTorch's compiler architecture evolves

## 6. Plan
### 6.1. Phase 1 - Optimizations of Recommender Systems for BF16 and FP32
* Validate Approach 1
* Torch Compile path: Test Inductor integration feasibility
* Torch Export path: Test AOT Inductor path feasibility
* Validate Approach 2
* Test the feasibility of registering zentorch graph optimizations in post-grad pass
* PR1 - linear op and its unary post-op fusions
* Basic infrastructure
* Add ZenDNN library repo as a submodule into PyTorch build infrastructure
* Integrate graph ops support for linear, linear_relu, linear_sigmoid and linear_tanh
* Create op level unit tests
* Integrate zentorch_optimize function for op replacement and unary fusion
* Implement CPU vendor and architecture-based checks
* Add optional build flag to enable zentorch optimizations
* Add user override option for zentorch optimizations
* Integrate the zentorch_optimize into the torch 2.x flows
* Add unit tests for the graph passes
* PR2 - linear with mul and add
* Integrate graph ops support for linear_mul, linear_add and linear_mul_add
* Add the graph pass for binary elementwise fusions
* Create unit tests for the ops and fusions
* PR3 - bmm op and matmul op
* Integrate graph ops support for bmm and matmul
* Graph pass update for op replacement
* Create unit tests for the ops and op replacement
* PR4 - embedding_bag op, embedding_bag_group op and concat folding
* Integrate graph ops support for embedding_bag, embedding_bag_group
* Update linear ops for supporting folding of concat
* Integrate the graph pass for grouping of horizontal or parallel embedding bag ops
* Integrate the concat folding graph pass
* Create unit tests for the ops and fusions
* PR5 - Tuning zentorch Integration If Required
* Benchmark and tune the PyTorch integration to match the performance of the zentorch plugin for [DLRM model](https://github.com/facebookresearch/dlrm)
### 6.2. Phase 2 - Migration to new ZenDNN Library API and Architecture
* Update ZenDNN library integration to align with the new API and internal architecture.
* ZenDNN library becomes thin and lightweight by using oneDNN as a third-party library instead of managing it under the hood.
* Refactor the code added in [Phase 1](#61-phase-1---optimizations-of-recommender-systems-for-bf16-and-fp32) to ensure compatibility with the new ZenDNN library API.
* Validate the migration with unit tests and benchmarks.
* Ensure performance parity or improvements compared to the previous ZenDNN library integration.
* PR1 - Migration Changes
* Implement all required changes for the new ZenDNN library API and architecture in a single PR.
* Add unit tests and benchmarks to validate the migration.
### 6.3. Phase 3 - Optimizations of Recommender Systems for INT8 linear and woq_embedding_bag
* PR1 - qlinear and woq_embedding_bag ops support
* Integrate graph ops support for qlinear (Quantized Linear), woq_embedding_bag (Weight-Only Quantized embedding_bag)
* Integrate PyTorch 2.0 Export (PT2E) graph to the zentorch Qops conversion pass
* Create op and conversion pass unit tests
* PR2 - qlinear op and its fusions
* Integrate elementwise fusions of qlinear ops such as relu, sigmoid, tanh, mul and add
* Integrate the graph pass for unary and binary elementwise fusions
* Integrate requant optimization pass
* Create unit tests for the ops and fusions
* PR3 - woq_embedding_bag op, group_woq_embedding_bag op and concat folding
* Implement graph ops support for woq_embedding_bag and group_woq_embedding_bag
* Develop the graph pass for grouping of horizontal or parallel embedding bag ops
* Create unit tests for the ops and fusions
* PR4 - Tuning of zentorch integration if required
* Benchmark and tune the PyTorch integration to match the performance of the zentorch plugin for Quantized DLRMv2 model
### 6.4. Phase 4 - Optimizations of NLPs and Generative LLM Workloads
* The optimizations targeting NLPs (Natural Language Processing models) and Generative LLM (Large Language Model) workloads will be covered in a separate RFC.
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
| true
|
2,960,393,544
|
Add overload for __getitem__ of Sequential to fix type hint
|
FFFrog
|
closed
|
[
"open source",
"topic: not user facing"
] | 2
|
COLLABORATOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #150295
As the title states.
Related issue:
https://github.com/pytorch/pytorch/issues/150257
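A hedged sketch of the kind of typed overloads the title describes, shown on a standalone subclass for illustration (the actual PR's annotations may differ): an `int` index yields a single `nn.Module`, while a `slice` yields another `Sequential`.
```python
from typing import Union, overload

import torch.nn as nn

class MySequential(nn.Sequential):
    @overload
    def __getitem__(self, idx: int) -> nn.Module: ...
    @overload
    def __getitem__(self, idx: slice) -> "MySequential": ...

    def __getitem__(self, idx: Union[int, slice]):
        # Runtime behavior is unchanged; the overloads only inform type checkers.
        return super().__getitem__(idx)
```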
| true
|
2,960,251,472
|
Make PyTorch buildable by CMake-4.x on s390x
|
AlekseiNikiforovIBM
|
closed
|
[
"open source",
"Merged",
"topic: not user facing",
"ciflow/binaries_wheel"
] | 7
|
COLLABORATOR
|
This is a continuation of
https://github.com/pytorch/pytorch/pull/150203
that fixes the nightly build on s390x.
| true
|
2,960,144,601
|
Unify on dynamo_compile as the overall wait counter
|
ppanchalia
|
closed
|
[
"fb-exported",
"Merged",
"ciflow/trunk",
"topic: not user facing",
"module: inductor",
"module: dynamo",
"ciflow/inductor"
] | 10
|
CONTRIBUTOR
|
Summary:
dynamo_compile has for the most part been accounting for compile time, except autotuning.
all_compilation_types had earlier been injected on fx_codegen_and_compile, which was incorrect.
Add autotuning to dynamo and deprecate the all_compilation_types counter.
Differential Revision: D72145447
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
| true
|
2,960,004,959
|
[clang-tidy] Get rid of dangerous clang-tidy option
|
ivanmurashko
|
open
|
[
"fb-exported",
"topic: not user facing"
] | 3
|
CONTRIBUTOR
|
Summary: WarningsAsErrors is a special option that should be applied carefully. In our case, clang-tidy checks have a lot of false positives, and as a result WarningsAsErrors will prevent the corresponding diffs from landing.
Test Plan: No special test required here
Differential Revision: D72159625
| true
|
2,959,971,510
|
Add differentiable ops hint message in Module docs
|
zeshengzong
|
open
|
[
"triaged",
"open source",
"topic: not user facing"
] | 3
|
CONTRIBUTOR
|
Fixes #101934
## Test Result
### Before

### After

| true
|
2,959,951,442
|
Remove torch functions that do not support device arguments from _device_constructor
|
FFFrog
|
closed
|
[
"open source",
"Merged",
"ciflow/trunk",
"topic: not user facing"
] | 11
|
COLLABORATOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #150290
As the title states.
In addition:
- I have checked all the functions in `_device_constructor` and found that ``torch.vander`` also doesn't support a device argument
- Removed duplicated functions such as torch.ones and torch.asarray
Related issue: https://github.com/pytorch/pytorch/issues/150284
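A minimal sketch of a quick check for whether a factory function accepts a `device` keyword (illustrative only, not how the PR audits `_device_constructor`):
```python
import torch

def accepts_device_kwarg(fn, *args) -> bool:
    # Probe the function with device="cpu"; C-bound torch functions raise
    # TypeError for unexpected keyword arguments.
    try:
        fn(*args, device="cpu")
        return True
    except TypeError:
        return False

print(accepts_device_kwarg(torch.ones, 2))                   # True
print(accepts_device_kwarg(torch.vander, torch.arange(3)))   # False
```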
| true
|
2,959,932,100
|
[Dynamo][Misc] Apply typing hints for `codegen`
|
shink
|
closed
|
[
"open source",
"Merged",
"ciflow/trunk",
"topic: not user facing",
"module: dynamo"
] | 12
|
CONTRIBUTOR
|
Fixes #ISSUE_NUMBER
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @chauhang @amjames
| true
|
2,959,845,593
|
[Intel GPU] Allow XPU backend in Quantize operators
|
yucai-intel
|
closed
|
[
"open source",
"Merged",
"ciflow/trunk",
"release notes: quantization"
] | 14
|
CONTRIBUTOR
|
This modification is to support torch.quantize_per_channel() on XPU; otherwise it causes a segmentation fault.
| true
|
2,959,817,103
|
[Inductor XPU] Support mkldnn fusion for XPU.
|
etaf
|
open
|
[
"open source",
"module: inductor",
"ciflow/inductor"
] | 2
|
COLLABORATOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #150287
* #150286
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
| true
|
2,959,816,979
|
[Inductor XPU] Refine `test_mkldnn_pattern_matcher.py` to be reusable for XPU.
|
etaf
|
closed
|
[
"open source",
"Merged",
"ciflow/trunk",
"topic: not user facing",
"module: inductor",
"ciflow/inductor"
] | 5
|
COLLABORATOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #150287
* __->__ #150286
This PR extracts some test cases from TestPatternMatcher into a newly created TestPatternMatcherGeneric, and uses instantiate_device_type_tests to make them reusable across multiple devices.
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
| true
|
2,959,786,360
|
CuSparse doesn't work on sparse tensor
|
qwerty10086
|
open
|
[
"module: sparse",
"triaged"
] | 1
|
NONE
|
### 🐛 Describe the bug
I need to compute very large sparse matrix multiplications in my project, and I encountered the same issue as [#103820](https://github.com/pytorch/pytorch/issues/103820). It's mentioned that CUDA 12 provides two new algorithms with lower memory occupation. I tested them on my matrices and they worked well, so now I'm writing a cpp extension to invoke `CUSPARSE_SPGEMM_ALG3`. However, I find it can't directly work on sparse tensors. To be specific, only the first `cusparseSpGEMM_workEstimation` works correctly. But if I read the data of `crow_indices`, `col_indices`, and `values` to CPU and copy them to GPU again, it works. To enable the copy operations, just modify the macro `COPY_MATRIX_DATA` to 1. Moreover, even if I invoke `CUSPARSE_SPGEMM_ALG1`, the issue still exists.
```cpp
#include <cusparse.h>
#include <torch/torch.h>
#define COPY_MATRIX_DATA 0
#define CHECK_CUSPARSE(func) \
{ \
cusparseStatus_t status = (func); \
if (status != CUSPARSE_STATUS_SUCCESS) { \
printf("CUSPARSE API failed at line %d with error: %s (%d)\n", \
__LINE__, cusparseGetErrorString(status), status); \
return 0; \
} \
}
template<typename T>
std::vector<T> toVector(const torch::Tensor& vTensor)
{
auto TensorCpu = vTensor.cpu();
auto NumElem = TensorCpu.numel();
std::vector<T> Data;
Data.reserve(NumElem);
for (int i = 0;i < NumElem;++i)
Data.emplace_back(TensorCpu[i].item<T>());
return Data;
}
int toCuSparseCsrMat(cusparseSpMatDescr_t* vCsrMat, const torch::Tensor& vCsrTensor)
{
#if COPY_MATRIX_DATA
auto RowOffsets = torch::from_blob(toVector<int>(vCsrTensor.crow_indices()).data(), {vCsrTensor.size(0) + 1}, torch::dtype(torch::kInt32)).cuda();
auto Columns = torch::from_blob(toVector<int>(vCsrTensor.col_indices()).data(), { vCsrTensor._nnz() }, torch::dtype(torch::kInt32)).cuda();
auto Values = torch::from_blob(toVector<float>(vCsrTensor.values()).data(), { vCsrTensor._nnz() }, torch::dtype(torch::kFloat32)).cuda();
CHECK_CUSPARSE(cusparseCreateCsr(
vCsrMat, vCsrTensor.size(0), vCsrTensor.size(1), vCsrTensor._nnz(),
RowOffsets.data_ptr(), Columns.data_ptr(), Values.data_ptr(),
CUSPARSE_INDEX_32I, CUSPARSE_INDEX_32I, CUSPARSE_INDEX_BASE_ZERO, CUDA_R_32F
));
#else
CHECK_CUSPARSE(cusparseCreateCsr(
vCsrMat, vCsrTensor.size(0), vCsrTensor.size(1), vCsrTensor._nnz(),
vCsrTensor.crow_indices().data_ptr(), vCsrTensor.col_indices().data_ptr(), vCsrTensor.values().data_ptr(),
CUSPARSE_INDEX_32I, CUSPARSE_INDEX_32I, CUSPARSE_INDEX_BASE_ZERO, CUDA_R_32F
));
#endif
return 1;
}
int computeCsrMatMul(const torch::Tensor& vA, const torch::Tensor& vB, torch::Tensor& voC, float vChunkFrac = 1.f)
{
torch::cuda::synchronize();
cusparseHandle_t Handle = nullptr;
CHECK_CUSPARSE(cusparseCreate(&Handle));
cusparseSpMatDescr_t MatA = nullptr, MatB = nullptr, MatC = nullptr;
if (!toCuSparseCsrMat(&MatA, vA))
return 0;
if (!toCuSparseCsrMat(&MatB, vB))
return 0;
CHECK_CUSPARSE(cusparseCreateCsr(
&MatC, vA.size(0), vB.size(1), 0,
nullptr, nullptr, nullptr,
CUSPARSE_INDEX_32I, CUSPARSE_INDEX_32I, CUSPARSE_INDEX_BASE_ZERO, CUDA_R_32F
));
cusparseSpGEMMDescr_t SpGEMM = nullptr;
CHECK_CUSPARSE(cusparseSpGEMM_createDescr(&SpGEMM));
float Alpha = 1.f;
float Beta = 0.f;
auto OpA = CUSPARSE_OPERATION_NON_TRANSPOSE;
auto OpB = CUSPARSE_OPERATION_NON_TRANSPOSE;
auto ComputeType = CUDA_R_32F;
auto Alg = CUSPARSE_SPGEMM_ALG3;
size_t BufferSize1 = 0;
CHECK_CUSPARSE(cusparseSpGEMM_workEstimation(
Handle, OpA, OpB,
&Alpha, MatA, MatB, &Beta, MatC,
ComputeType, Alg, SpGEMM,
&BufferSize1, nullptr
));
auto WorkBuffer1 = torch::zeros({ int64_t(BufferSize1) }, torch::dtype(torch::kInt8).device(torch::kCUDA));
CHECK_CUSPARSE(cusparseSpGEMM_workEstimation(
Handle, OpA, OpB,
&Alpha, MatA, MatB, &Beta, MatC,
ComputeType, Alg, SpGEMM,
&BufferSize1, WorkBuffer1.data_ptr()
));
int64_t NumProd = 0;
CHECK_CUSPARSE(cusparseSpGEMM_getNumProducts(SpGEMM, &NumProd));
std::cout << NumProd << std::endl;
size_t BufferSize3 = 0;
CHECK_CUSPARSE(cusparseSpGEMM_estimateMemory(
Handle, OpA, OpB,
&Alpha, MatA, MatB, &Beta, MatC,
ComputeType, Alg, SpGEMM, vChunkFrac,
&BufferSize3, nullptr, nullptr
));
auto WorkBuffer3 = torch::empty({ int64_t(BufferSize3) }, torch::dtype(torch::kInt8).device(torch::kCUDA));
size_t BufferSize2 = 0;
CHECK_CUSPARSE(cusparseSpGEMM_estimateMemory(
Handle, OpA, OpB,
&Alpha, MatA, MatB, &Beta, MatC,
ComputeType, Alg, SpGEMM, vChunkFrac,
&BufferSize3, WorkBuffer3.data_ptr(), &BufferSize2
));
WorkBuffer3 = torch::Tensor();
auto WorkBuffer2 = torch::empty({ int64_t(BufferSize2) }, torch::dtype(torch::kInt8).device(torch::kCUDA));
std::cout << BufferSize1 << " " << BufferSize2 << " " << BufferSize3 << std::endl;
CHECK_CUSPARSE(cusparseSpGEMM_compute(
Handle, OpA, OpB,
&Alpha, MatA, MatB, &Beta, MatC,
ComputeType, Alg, SpGEMM,
&BufferSize2, WorkBuffer2.data_ptr()
));
int64_t NumRowC = 0, NumColC = 0, NnzC = 0;
CHECK_CUSPARSE(cusparseSpMatGetSize(
MatC, &NumRowC, &NumColC, &NnzC
));
auto CrowIndicesC = torch::empty({ NumRowC + 1 }, torch::dtype(torch::kInt32).device(torch::kCUDA));
auto ColIndicesC = torch::empty({ NnzC }, torch::dtype(torch::kInt32).device(torch::kCUDA));
auto ValuesC = torch::empty({ NnzC }, torch::dtype(torch::kFloat32).device(torch::kCUDA));
CHECK_CUSPARSE(cusparseCsrSetPointers(
MatC, CrowIndicesC.data_ptr(), ColIndicesC.data_ptr(), ValuesC.data_ptr()
));
CHECK_CUSPARSE(cusparseSpGEMM_copy(
Handle, OpA, OpB,
&Alpha, MatA, MatB, &Beta, MatC,
ComputeType, Alg, SpGEMM
));
voC = torch::sparse_csr_tensor(CrowIndicesC, ColIndicesC, ValuesC, { NumRowC, NumColC }, torch::dtype(torch::kFloat).device(torch::kCUDA));
CHECK_CUSPARSE(cusparseSpGEMM_destroyDescr(SpGEMM));
CHECK_CUSPARSE(cusparseDestroySpMat(MatA));
CHECK_CUSPARSE(cusparseDestroySpMat(MatB));
CHECK_CUSPARSE(cusparseDestroySpMat(MatC));
CHECK_CUSPARSE(cusparseDestroy(Handle));
return 1;
}
int main()
{
auto A = torch::tensor(
{
{0, 0, 1, 2},
{3, 0, 4, 0},
{5, 6, 7, 0}
},
torch::dtype(torch::kFloat32).device(torch::kCUDA)).to_sparse_csr();
auto At = A.t().to_sparse_csr();
torch::cuda::synchronize();
torch::Tensor AtA;
if (!computeCsrMatMul(At, A, AtA))
return 1;
std::cout << AtA.crow_indices() << std::endl;
std::cout << AtA.col_indices() << std::endl;
std::cout << AtA.values() << std::endl;
return 0;
}
```
### Versions
PyTorch version: 2.6.0+cu124
Is debug build: False
CUDA used to build PyTorch: 12.4
ROCM used to build PyTorch: N/A
OS: Microsoft Windows 11 专业版 (10.0.22631 64 位)
GCC version: Could not collect
Clang version: 18.1.8
CMake version: version 3.31.5
Libc version: N/A
Python version: 3.10.0rc2 (tags/v3.10.0rc2:839d789, Sep 7 2021, 18:51:45) [MSC v.1929 64 bit (AMD64)] (64-bit runtime)
Python platform: Windows-10-10.0.22631-SP0
Is CUDA available: True
CUDA runtime version: 12.4.131
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration: GPU 0: NVIDIA GeForce RTX 3060
Nvidia driver version: 572.16
cuDNN version: C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v12.4\bin\cudnn_ops_train64_8.dll
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Name: Intel(R) Core(TM) i7-10700 CPU @ 2.90GHz
Manufacturer: GenuineIntel
Family: 198
Architecture: 9
ProcessorType: 3
DeviceID: CPU0
CurrentClockSpeed: 2904
MaxClockSpeed: 2904
L2CacheSize: 2048
L2CacheSpeed: None
Revision: None
Versions of relevant libraries:
[pip3] numpy==1.26.4
[pip3] torch==2.6.0+cu124
[pip3] torchaudio==2.6.0+cu124
[pip3] torchvision==0.21.0+cu124
[conda] Could not collect
cc @alexsamardzic @nikitaved @pearu @cpuhrsch @amjames @bhosmer @jcaip
| true
|
2,959,759,778
|
torch.fill bug
|
0x45f
|
open
|
[
"triaged",
"module: python frontend"
] | 2
|
NONE
|
### 🐛 Describe the bug
```python
import torch
import numpy as np
torch.set_default_device("cuda")
x = torch.randn(4, 5)
# y = torch.randn(4, 5)
out = torch.fill(x, 1)
print(out)
```
raise error:
```
Traceback (most recent call last):
File "/home/wangzhen/gems-ops/FlagGems/build/run.py", line 11, in <module>
out = torch.fill(x, 1)
File "/root/miniconda3/envs/gems-ops/lib/python3.10/site-packages/torch/utils/_device.py", line 104, in __torch_function__
return func(*args, **kwargs)
TypeError: fill() received an invalid combination of arguments - got (Tensor, int, device=torch.device), but expected one of:
* (Tensor input, Tensor value)
* (Tensor input, Number value)
```
### Versions
```
Collecting environment information...
PyTorch version: 2.6.0+cu124
Is debug build: False
CUDA used to build PyTorch: 12.4
ROCM used to build PyTorch: N/A
OS: Ubuntu 22.04.4 LTS (x86_64)
GCC version: (Ubuntu 11.4.0-1ubuntu1~22.04) 11.4.0
Clang version: Could not collect
CMake version: version 3.29.2
Libc version: glibc-2.35
Python version: 3.10.16 (main, Dec 11 2024, 16:24:50) [GCC 11.2.0] (64-bit runtime)
Python platform: Linux-5.4.0-113-generic-x86_64-with-glibc2.35
Is CUDA available: True
CUDA runtime version: 12.4.131
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration: GPU 0: NVIDIA A100-SXM4-40GB
Nvidia driver version: 470.129.06
cuDNN version: Probably one of the following:
/usr/lib/x86_64-linux-gnu/libcudnn.so.9.1.0
/usr/lib/x86_64-linux-gnu/libcudnn_adv.so.9.1.0
/usr/lib/x86_64-linux-gnu/libcudnn_cnn.so.9.1.0
/usr/lib/x86_64-linux-gnu/libcudnn_engines_precompiled.so.9.1.0
/usr/lib/x86_64-linux-gnu/libcudnn_engines_runtime_compiled.so.9.1.0
/usr/lib/x86_64-linux-gnu/libcudnn_graph.so.9.1.0
/usr/lib/x86_64-linux-gnu/libcudnn_heuristic.so.9.1.0
/usr/lib/x86_64-linux-gnu/libcudnn_ops.so.9.1.0
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 43 bits physical, 48 bits virtual
Byte Order: Little Endian
CPU(s): 256
On-line CPU(s) list: 0-255
Vendor ID: AuthenticAMD
Model name: AMD EPYC 7742 64-Core Processor
CPU family: 23
Model: 49
Thread(s) per core: 2
Core(s) per socket: 64
Socket(s): 2
Stepping: 0
Frequency boost: enabled
CPU max MHz: 2250.0000
CPU min MHz: 1500.0000
BogoMIPS: 4491.72
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ht syscall nx mmxext fxsr_opt pdpe1gb rdtscp lm constant_tsc rep_good nopl nonstop_tsc cpuid extd_apicid aperfmperf pni pclmulqdq monitor ssse3 fma cx16 sse4_1 sse4_2 x2apic movbe popcnt aes xsave avx f16c rdrand lahf_lm cmp_legacy svm extapic cr8_legacy abm sse4a misalignsse 3dnowprefetch osvw ibs skinit wdt tce topoext perfctr_core perfctr_nb bpext perfctr_llc mwaitx cpb cat_l3 cdp_l3 hw_pstate sme ssbd mba sev ibrs ibpb stibp vmmcall fsgsbase bmi1 avx2 smep bmi2 cqm rdt_a rdseed adx smap clflushopt clwb sha_ni xsaveopt xsavec xgetbv1 xsaves cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local clzero irperf xsaveerptr wbnoinvd arat npt lbrv svm_lock nrip_save tsc_scale vmcb_clean flushbyasid decodeassists pausefilter pfthreshold avic v_vmsave_vmload vgif umip rdpid overflow_recov succor smca
Virtualization: AMD-V
L1d cache: 4 MiB (128 instances)
L1i cache: 4 MiB (128 instances)
L2 cache: 64 MiB (128 instances)
L3 cache: 512 MiB (32 instances)
NUMA node(s): 8
NUMA node0 CPU(s): 0-15,128-143
NUMA node1 CPU(s): 16-31,144-159
NUMA node2 CPU(s): 32-47,160-175
NUMA node3 CPU(s): 48-63,176-191
NUMA node4 CPU(s): 64-79,192-207
NUMA node5 CPU(s): 80-95,208-223
NUMA node6 CPU(s): 96-111,224-239
NUMA node7 CPU(s): 112-127,240-255
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl and seccomp
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Retpolines, IBPB conditional, IBRS_FW, STIBP conditional, RSB filling
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Versions of relevant libraries:
[pip3] numpy==2.2.1
[pip3] nvidia-cublas-cu12==12.4.5.8
[pip3] nvidia-cuda-cupti-cu12==12.4.127
[pip3] nvidia-cuda-nvrtc-cu12==12.4.127
[pip3] nvidia-cuda-runtime-cu12==12.4.127
[pip3] nvidia-cudnn-cu12==9.1.0.70
[pip3] nvidia-cufft-cu12==11.2.1.3
[pip3] nvidia-curand-cu12==10.3.5.147
[pip3] nvidia-cusolver-cu12==11.6.1.9
[pip3] nvidia-cusparse-cu12==12.3.1.170
[pip3] nvidia-cusparselt-cu12==0.6.2
[pip3] nvidia-nccl-cu12==2.21.5
[pip3] nvidia-nvjitlink-cu12==12.4.127
[pip3] nvidia-nvtx-cu12==12.4.127
[pip3] torch==2.6.0
[pip3] torchaudio==2.6.0
[pip3] torchvision==0.21.0
[pip3] triton==3.1.0
[conda] numpy 2.2.1 pypi_0 pypi
[conda] nvidia-cublas-cu12 12.4.5.8 pypi_0 pypi
[conda] nvidia-cuda-cupti-cu12 12.4.127 pypi_0 pypi
[conda] nvidia-cuda-nvrtc-cu12 12.4.127 pypi_0 pypi
[conda] nvidia-cuda-runtime-cu12 12.4.127 pypi_0 pypi
[conda] nvidia-cudnn-cu12 9.1.0.70 pypi_0 pypi
[conda] nvidia-cufft-cu12 11.2.1.3 pypi_0 pypi
[conda] nvidia-curand-cu12 10.3.5.147 pypi_0 pypi
[conda] nvidia-cusolver-cu12 11.6.1.9 pypi_0 pypi
[conda] nvidia-cusparse-cu12 12.3.1.170 pypi_0 pypi
[conda] nvidia-cusparselt-cu12 0.6.2 pypi_0 pypi
[conda] nvidia-nccl-cu12 2.21.5 pypi_0 pypi
[conda] nvidia-nvjitlink-cu12 12.4.127 pypi_0 pypi
[conda] nvidia-nvtx-cu12 12.4.127 pypi_0 pypi
[conda] torch 2.6.0 pypi_0 pypi
[conda] torchaudio 2.6.0 pypi_0 pypi
[conda] torchvision 0.21.0 pypi_0 pypi
[conda] triton 3.1.0 pypi_0 pypi
```
cc @albanD
| true
|
2,959,706,801
|
Update slow tests
|
pytorchupdatebot
|
closed
|
[
"open source",
"Merged",
"ciflow/trunk",
"topic: not user facing",
"ciflow/slow",
"ci-no-td"
] | 6
|
COLLABORATOR
|
This PR is auto-generated weekly by [this action](https://github.com/pytorch/pytorch/blob/main/.github/workflows/weekly.yml).
Update the list of slow tests.
| true
|
2,959,603,294
|
Add `label_smoothing` param in `nn.BCELoss` and `nn.BCEWithLogitsLoss`
|
zeshengzong
|
open
|
[
"open source",
"release notes: nn"
] | 4
|
CONTRIBUTOR
|
Fixes #91545
## Changes
- Add `label_smoothing` param and docs (see the usage sketch below)
- Add test case for `label_smoothing`
- Remove duplicate description in `nn.BCELoss` and `nn.BCEWithLogitsLoss`
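For context, a minimal sketch of what smoothed binary targets could look like, assuming the common convention of interpolating labels toward 0.5; the exact semantics are defined by this PR, not by this sketch:

```python
import torch
import torch.nn.functional as F

targets = torch.tensor([0.0, 1.0, 1.0, 0.0])
label_smoothing = 0.1

# Common convention for binary targets: move each label toward 0.5 by `label_smoothing`.
smoothed = targets * (1 - label_smoothing) + 0.5 * label_smoothing  # [0.05, 0.95, 0.95, 0.05]

logits = torch.randn(4)
loss = F.binary_cross_entropy_with_logits(logits, smoothed)
```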
## Test Result
```bash
pytest -s test/test_nn.py -k test_bce
```



| true
|
2,959,418,248
|
rename_privateuse1_backend arise CUPTI_ERROR_NOT_INITIALIZED error.
|
kstreee-furiosa
|
closed
|
[
"oncall: profiler",
"module: PrivateUse1"
] | 2
|
NONE
|
### 🐛 Describe the bug
The following code raises CUDA profiling warnings and an error. If I install torch==2.5.1+cpu instead, it works fine without any warnings or errors.
```python
import torch
# If comment out `torch.utils.rename_privateuse1_backend("test")`, works fine.
torch.utils.rename_privateuse1_backend("test")
with torch.profiler.profile(
activities=[torch.profiler.ProfilerActivity.CPU],
) as p:
pass
```
```
WARNING:2025-03-31 04:07:16 3245742:3245742 init.cpp:178] function cbapi->getCuptiStatus() failed with error CUPTI_ERROR_NOT_INITIALIZED (15)
WARNING:2025-03-31 04:07:16 3245742:3245742 init.cpp:179] CUPTI initialization failed - CUDA profiler activities will be missing
INFO:2025-03-31 04:07:16 3245742:3245742 init.cpp:181] If you see CUPTI_ERROR_INSUFFICIENT_PRIVILEGES, refer to https://developer.nvidia.com/nvidia-development-tools-solutions-err-nvgpuctrperm-cupti
```
### Versions
PyTorch version: 2.5.1+cu124
Is debug build: False
CUDA used to build PyTorch: 12.4
ROCM used to build PyTorch: N/A
OS: Ubuntu 22.04.5 LTS (x86_64)
GCC version: (Ubuntu 11.4.0-1ubuntu1~22.04) 11.4.0
Clang version: 14.0.0-1ubuntu1.1
CMake version: version 3.22.1
Libc version: glibc-2.35
Python version: 3.10.12 (main, Jan 17 2025, 14:35:34) [GCC 11.4.0] (64-bit runtime)
Python platform: Linux-5.15.0-134-generic-x86_64-with-glibc2.35
Is CUDA available: False
CUDA runtime version: No CUDA
CUDA_MODULE_LOADING set to: N/A
GPU models and configuration: No CUDA
Nvidia driver version: No CUDA
cuDNN version: No CUDA
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 46 bits physical, 48 bits virtual
Byte Order: Little Endian
CPU(s): 192
On-line CPU(s) list: 0-191
Vendor ID: GenuineIntel
Model name: Intel(R) Xeon(R) Gold 6348H CPU @ 2.30GHz
CPU family: 6
Model: 85
Thread(s) per core: 2
Core(s) per socket: 24
Socket(s): 4
Stepping: 11
CPU max MHz: 4200.0000
CPU min MHz: 1000.0000
BogoMIPS: 4600.00
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc art arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc cpuid aperfmperf pni pclmulqdq dtes64 monitor ds_cpl vmx smx est tm2 ssse3 sdbg fma cx16 xtpr pdcm pcid dca sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm 3dnowprefetch cpuid_fault epb cat_l3 cdp_l3 invpcid_single intel_ppin ssbd mba ibrs ibpb stibp ibrs_enhanced tpr_shadow vnmi flexpriority ept vpid ept_ad fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid cqm mpx rdt_a avx512f avx512dq rdseed adx smap clflushopt clwb intel_pt avx512cd avx512bw avx512vl xsaveopt xsavec xgetbv1 xsaves cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local avx512_bf16 dtherm ida arat pln pts pku ospke avx512_vnni md_clear flush_l1d arch_capabilities
Virtualization: VT-x
L1d cache: 3 MiB (96 instances)
L1i cache: 3 MiB (96 instances)
L2 cache: 96 MiB (96 instances)
L3 cache: 132 MiB (4 instances)
NUMA node(s): 4
NUMA node0 CPU(s): 0-23,96-119
NUMA node1 CPU(s): 24-47,120-143
NUMA node2 CPU(s): 48-71,144-167
NUMA node3 CPU(s): 72-95,168-191
Vulnerability Gather data sampling: Vulnerable: No microcode
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Vulnerable: Clear CPU buffers attempted, no microcode; SMT vulnerable
Vulnerability Reg file data sampling: Not affected
Vulnerability Retbleed: Mitigation; Enhanced IBRS
Vulnerability Spec rstack overflow: Not affected
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl and seccomp
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Enhanced / Automatic IBRS; IBPB conditional; RSB filling; PBRSB-eIBRS SW sequence; BHI SW loop, KVM SW loop
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Versions of relevant libraries:
[pip3] numpy==1.26.4
[pip3] nvidia-cublas-cu12==12.4.5.8
[pip3] nvidia-cuda-cupti-cu12==12.4.127
[pip3] nvidia-cuda-nvrtc-cu12==12.4.127
[pip3] nvidia-cuda-runtime-cu12==12.4.127
[pip3] nvidia-cudnn-cu12==9.1.0.70
[pip3] nvidia-cufft-cu12==11.2.1.3
[pip3] nvidia-curand-cu12==10.3.5.147
[pip3] nvidia-cusolver-cu12==11.6.1.9
[pip3] nvidia-cusparse-cu12==12.3.1.170
[pip3] nvidia-nccl-cu12==2.21.5
[pip3] nvidia-nvjitlink-cu12==12.4.127
[pip3] nvidia-nvtx-cu12==12.4.127
[pip3] torch==2.5.1
[pip3] triton==3.1.0
cc @robieta @chaekit @guotuofeng @guyang3532 @dzhulgakov @davidberard98 @briancoutinho @sraikund16 @sanrise @NmomoN @mengpenghui @fwenguang @cdzhan @1274085042 @PHLens @albanD
| true
|
2,959,392,506
|
Discrepancy between eager and torch.compile (mode='max-autotune-no-cudagraphs') outputs under strict tolerance
|
tinywisdom
|
open
|
[
"triaged",
"oncall: pt2",
"module: inductor"
] | 1
|
NONE
|
### 🐛 Describe the bug
I'm observing **output inconsistencies** between Eager mode and `torch.compile` when using `mode="max-autotune-no-cudagraphs"`, and this behavior persists in both stable and nightly builds (2.6.0.dev20241112+cu121).
In particular, the outputs differ beyond acceptable tolerances for some models, even when using **strict error thresholds** in `torch.allclose`:
```python
rtol=1e-2
atol=1e-3
```
This threshold is already quite tight, and I'd like to understand whether such discrepancies are expected in this compilation mode — or if this may indicate potential numerical instability in some backend optimizations.
### Reproducible Script
```python
import torch
import importlib.util
import os
def load_model_from_file(module_path, model_function_name="my_model_function"):
model_file = os.path.basename(module_path)[:-3]
spec = importlib.util.spec_from_file_location(model_file, module_path)
model_module = importlib.util.module_from_spec(spec)
spec.loader.exec_module(model_module)
model_function = getattr(model_module, model_function_name)
model = model_function()
return model
def compare_outputs(a: torch.Tensor, b: torch.Tensor, rtol=1e-2, atol=1e-3):
print("=== Output difference comparison ===")
if a.shape != b.shape:
print(f"[error] Tensor shapes do not match: {a.shape} vs {b.shape}")
return
diff = a - b
abs_diff = diff.abs()
rel_diff = abs_diff / (a.abs() + 1e-8)
total_elements = a.numel()
max_abs_err = abs_diff.max().item()
mean_abs_err = abs_diff.mean().item()
max_rel_err = rel_diff.max().item()
mean_rel_err = rel_diff.mean().item()
is_close = torch.isclose(a, b, atol=atol, rtol=rtol)
num_not_close = (~is_close).sum().item()
percent_not_close = 100.0 * num_not_close / total_elements
all_close = torch.allclose(a, b, atol=atol, rtol=rtol)
print(f"- Total elements: {total_elements}")
print(f"- Max absolute error : {max_abs_err:.8f}")
print(f"- Mean absolute error : {mean_abs_err:.8f}")
print(f"- Max relative error : {max_rel_err:.8f}")
print(f"- Mean relative error : {mean_rel_err:.8f}")
print(f"- Elements not close : {num_not_close}")
print(f"- Percentage not close : {percent_not_close:.4f}%")
print(f"- torch.allclose : {all_close}")
if __name__ == "__main__":
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
input_tensor = torch.rand(1, 3, 224, 224, dtype=torch.float32, device=device)
model_path = "xxx.py"  # placeholder: path to the model file shown below
model = load_model_from_file(model_path).to(device).eval()
with torch.no_grad():
output_eager = model(input_tensor)
compiled_model = torch.compile(model, mode="max-autotune-no-cudagraphs")
with torch.no_grad():
output_compiled = compiled_model(input_tensor)
compare_outputs(output_eager, output_compiled)
```
### Model File
```python
# torch.rand(1, 3, 224, 224, dtype=input_dtype)
import torch
import torch.nn as nn
class LN(nn.Module):
def __init__(self, eps):
super().__init__()
self.eps = eps
def forward(self, x):
mean = x.mean(dim=-1, keepdim=True)
std = x.std(dim=-1, keepdim=True)
return (x - mean) / (std + self.eps)
class Foo(nn.Module):
def __init__(self):
super().__init__()
def forward(self, x):
return x * 2
class MyModel(nn.Module):
def __init__(
self,
in_channels=3,
out_channels=64,
kernel_size=3,
stride=1,
padding=1,
dilation=1,
groups=1,
bias=True,
output_padding=None,
num_layers=0,
param1=None,
param2=None,
eps=1e-5
):
super().__init__()
# Core layers
self.conv_layers = nn.ModuleList([
nn.Conv2d(in_channels, out_channels, kernel_size, stride, padding, dilation, groups, bias)
])
self.activation_layers = nn.ModuleList([
nn.ReLU()
])
self.pooling_layers = nn.ModuleList([
nn.MaxPool2d(kernel_size, stride, padding, dilation)
])
self.bn_layers = nn.ModuleList([
nn.BatchNorm2d(out_channels)
])
self.transposed_conv_layers = nn.ModuleList()
if output_padding is not None:
self.transposed_conv_layers.append(
nn.ConvTranspose2d(out_channels, in_channels, kernel_size, stride, padding, output_padding, dilation, groups, bias)
)
self.additional_layers = nn.ModuleList([nn.Identity() for _ in range(num_layers)])
self.custom_layer = LN(eps)
self.fc1 = nn.Linear(16, 16)
self.fc2 = nn.Linear(16, 16)
self.foo = Foo()
def forward(self, x):
for layer in self.conv_layers:
x = layer(x)
for act in self.activation_layers:
x = act(x)
for pool in self.pooling_layers:
x = pool(x)
for bn in self.bn_layers:
x = bn(x)
for tconv in self.transposed_conv_layers:
x = tconv(x)
for layer in self.additional_layers:
x = layer(x)
x = self.custom_layer(x)
x = x.view(-1, 16)
x = self.fc1(x)
x = self.foo(x)
x = self.fc2(x)
return x
def my_model_function():
return MyModel()
if __name__ == "__main__":
model = my_model_function()
input_tensor = torch.randn(1, 3, 224, 224)
output = model(input_tensor)
print(output.shape)
```
### Output Comparison
```text
=== Output difference comparison ===
- Total elements: 3211264
- Max absolute error : 0.45826077
- Mean absolute error : 0.00039007
- Max relative error : 2887.75512695
- Mean relative error : 0.00781209
- Elements not close : 8526
- Percentage not close : 0.2655%
- torch.allclose : False
```
### Versions
```text
PyTorch version: 2.5.0a0+gita8d6afb
Is debug build: True
CUDA used to build PyTorch: 12.1
ROCM used to build PyTorch: N/A
OS: Ubuntu 22.04.4 LTS (x86_64)
GCC version: (Ubuntu 12.3.0-1ubuntu1~22.04) 12.3.0
Clang version: 17.0.6 (https://github.com/llvm/llvm-project 6009708b4367171ccdbf4b5905cb6a803753fe18)
CMake version: version 3.31.2
Libc version: glibc-2.35
Python version: 3.10.16 (main, Dec 11 2024, 16:24:50) [GCC 11.2.0] (64-bit runtime)
Python platform: Linux-6.5.0-18-generic-x86_64-with-glibc2.35
Is CUDA available: True
CUDA runtime version: 12.1.105
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration:
GPU 0: NVIDIA RTX A6000
GPU 1: NVIDIA RTX A6000
GPU 2: NVIDIA RTX A6000
GPU 3: NVIDIA RTX A6000
Nvidia driver version: 550.78
cuDNN version: Probably one of the following:
/usr/local/cuda-12.1/targets/x86_64-linux/lib/libcudnn.so.8.9.7
/usr/local/cuda-12.1/targets/x86_64-linux/lib/libcudnn_adv_infer.so.8.9.7
/usr/local/cuda-12.1/targets/x86_64-linux/lib/libcudnn_adv_train.so.8.9.7
/usr/local/cuda-12.1/targets/x86_64-linux/lib/libcudnn_cnn_infer.so.8.9.7
/usr/local/cuda-12.1/targets/x86_64-linux/lib/libcudnn_cnn_train.so.8.9.7
/usr/local/cuda-12.1/targets/x86_64-linux/lib/libcudnn_ops_infer.so.8.9.7
/usr/local/cuda-12.1/targets/x86_64-linux/lib/libcudnn_ops_train.so.8.9.7
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 52 bits physical, 57 bits virtual
Byte Order: Little Endian
CPU(s): 64
On-line CPU(s) list: 0-63
Vendor ID: GenuineIntel
Model name: Intel(R) Xeon(R) Gold 6426Y
CPU family: 6
Model: 143
Thread(s) per core: 2
Core(s) per socket: 16
Socket(s): 2
Stepping: 8
CPU max MHz: 4100.0000
CPU min MHz: 800.0000
BogoMIPS: 5000.00
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc art arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc cpuid aperfmperf tsc_known_freq pni pclmulqdq dtes64 monitor ds_cpl vmx smx est tm2 ssse3 sdbg fma cx16 xtpr pdcm pcid dca sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm 3dnowprefetch cpuid_fault epb cat_l3 cat_l2 cdp_l3 invpcid_single intel_ppin cdp_l2 ssbd mba ibrs ibpb stibp ibrs_enhanced tpr_shadow flexpriority ept vpid ept_ad fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid cqm rdt_a avx512f avx512dq rdseed adx smap avx512ifma clflushopt clwb intel_pt avx512cd sha_ni avx512bw avx512vl xsaveopt xsavec xgetbv1 xsaves cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local split_lock_detect avx_vnni avx512_bf16 wbnoinvd dtherm ida arat pln pts hwp hwp_act_window hwp_epp hwp_pkg_req hfi vnmi avx512vbmi umip pku ospke waitpkg avx512_vbmi2 gfni vaes vpclmulqdq avx512_vnni avx512_bitalg tme avx512_vpopcntdq la57 rdpid bus_lock_detect cldemote movdiri movdir64b enqcmd fsrm md_clear serialize tsxldtrk pconfig arch_lbr ibt amx_bf16 avx512_fp16 amx_tile amx_int8 flush_l1d arch_capabilities
Virtualization: VT-x
L1d cache: 1.5 MiB (32 instances)
L1i cache: 1 MiB (32 instances)
L2 cache: 64 MiB (32 instances)
L3 cache: 75 MiB (2 instances)
NUMA node(s): 2
NUMA node0 CPU(s): 0-15,32-47
NUMA node1 CPU(s): 16-31,48-63
Vulnerability Gather data sampling: Not affected
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Not affected
Vulnerability Retbleed: Not affected
Vulnerability Spec rstack overflow: Not affected
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Enhanced / Automatic IBRS, IBPB conditional, RSB filling, PBRSB-eIBRS SW sequence
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Versions of relevant libraries:
[pip3] numpy==2.2.2
[pip3] optree==0.14.1
[pip3] pytorch-triton==3.1.0+5fe38ffd73
[pip3] torch==2.5.0a0+gita8d6afb
[conda] blas 1.0 mkl defaults
[conda] magma-cuda121 2.6.1 1 pytorch
[conda] mkl 2023.1.0 h213fc3f_46344 defaults
[conda] mkl-include 2025.1.0 pypi_0 pypi
[conda] mkl-service 2.4.0 py310h5eee18b_2 defaults
[conda] mkl-static 2025.1.0 pypi_0 pypi
[conda] mkl_fft 1.3.11 py310h5eee18b_0 defaults
[conda] mkl_random 1.2.8 py310h1128e8f_0 defaults
[conda] numpy 2.2.4 pypi_0 pypi
[conda] numpy-base 2.2.2 py310hb5e798b_0 defaults
[conda] optree 0.14.1 pypi_0 pypi
[conda] pytorch-triton 3.1.0+5fe38ffd73 pypi_0 pypi
[conda] torch 2.5.0a0+gita8d6afb dev_0 <develop>
```
cc @chauhang @penguinwu @voznesenskym @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @chenyang78 @kadeng @muchulee8 @amjames @aakhundov
| true
|
2,959,392,351
|
[MPS] Add support for hermite_polynomial_h.
|
dcci
|
closed
|
[
"Merged",
"release notes: mps",
"ciflow/mps"
] | 4
|
MEMBER
| null | true
|
2,959,381,652
|
softmax: add device check for xpu with half_to_float
|
weishi-deng
|
open
|
[
"open source",
"release notes: xpu",
"module: xpu"
] | 11
|
CONTRIBUTOR
|
To support "half_to_float" functionality on xpu devices, we add the device checks for xpu devices here.
cc @gujinghui @EikanWang @fengyuan14 @guangyey
| true
|
2,959,332,725
|
Sparse tensor indexing not implemented, but partially supported by using index_select
|
zbh2047
|
open
|
[
"module: sparse",
"triaged"
] | 0
|
NONE
|
### 🚀 The feature, motivation and pitch
I want to index a sparse tensor with a dense index tensor `idx`. This setting is quite useful for extracting sub-matrices or obtaining samples from a sparse tensor dataset. However, I found this operation is not implemented, and only partially supported in an inconsistent way.
First, if the input sparse tensor `a` is a SparseCOO tensor and `idx` is 1-dimensional, the operation can be achieved via `torch.index_select(a, index=idx, dim=0)`. However, direct indexing `a[idx]` is not supported. Since they should be equivalent, the latter should be supported as well.
Second, if the input sparse tensor `a` is a SparseCOO tensor and `idx` is multi-dimensional, `index_select` cannot be used. However, the inner mechanism should be highly similar to indexing with a 1-dimensional tensor, so I think this case should be very easy to implement.
Third, if the input sparse tensor `a` is a SparseCSR tensor, then whether `idx` is 1-dimensional or multi-dimensional, neither `index_select` nor direct indexing is supported yet. However, I think indexing along `dim=0` should be easy to implement and is a very important use case. For example, the SparseCSR format can represent a sparse dataset where each row is a sample, and indexing along `dim=0` then corresponds to forming a batch.
In summary, I think the above three points are worth implementing, as they are basic usages and implementing them is straightforward.
### Alternatives
The only workaround is `index_select`, but it only works for the SparseCOO format with a 1-dimensional index.
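A minimal sketch of the current situation (the tensors here are just illustrative):

```python
import torch

a = torch.eye(4).to_sparse_coo()
idx = torch.tensor([0, 2])

# Case 1: works today for a 1-D index on a COO tensor...
rows = torch.index_select(a, dim=0, index=idx)

# ...but the equivalent direct indexing is not implemented:
# a[idx]  # raises "not supported"

# Case 2: multi-dimensional index -- index_select cannot express this.
idx2d = torch.tensor([[0, 1], [2, 3]])

# Case 3: CSR input -- neither index_select nor a[idx] is supported,
# even though dim=0 indexing would correspond to batching rows.
a_csr = torch.eye(4).to_sparse_csr()
```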
### Additional context
PyTorch version: 2.6
cc @alexsamardzic @nikitaved @pearu @cpuhrsch @amjames @bhosmer @jcaip
| true
|
2,959,238,895
|
[AOTInductor] Add User Managed buffer for AOTI constant buffer.
|
muchulee8
|
closed
|
[
"Merged",
"ciflow/trunk",
"topic: not user facing",
"module: inductor",
"ciflow/inductor"
] | 8
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #150276
Summary:
We add the functionality to allow users to directly pass an `at::Tensor` into
AOTInductor to be used as the constant.
This user-managed buffer skips the copying step in AOTInductor and lets
users directly manage the memory themselves.
Test Plan:
LD_LIBRARY_PATH=/data/users/$USER/pytorch/build/lib
/data/users/$USER/pytorch/build/bin/test_aoti_inference
Reviewers:
Subscribers:
Tasks:
Tags:
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @chenyang78 @kadeng @amjames @chauhang @aakhundov
Differential Revision: [D72589514](https://our.internmc.facebook.com/intern/diff/D72589514)
| true
|
2,959,238,814
|
[AOTInductor] Introduce MaybeOwningAtenTensorHandle for ConstantMap
|
muchulee8
|
closed
|
[
"Merged",
"ciflow/trunk",
"topic: not user facing",
"ciflow/inductor"
] | 4
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #150275
Summary:
We used RAIIAtenTensorHandle for ConstantMap, where RAIIAtenTensorHandle
is a unique_ptr, meaning that all memory handling is done by
AOTInductor internally.
In this PR, we introduce ConstantAtenTensorHandle, which replaces
RAIIAtenTensorHandle. This class holds a raw AtenTensorHandle, and also
owns a RAIIAtenTensorHandle if the user decides to delegate memory
management to AOTInductor.
This is a prerequisite for user-managed buffers; this PR, however, only
introduces the class and makes sure it works with the existing AOTInductor,
with default behavior identical to using RAIIAtenTensorHandle.
Test Plan:
Existing tests. No change should be introduced within this PR.
Reviewers:
Subscribers:
Tasks:
Tags:
| true
|
2,959,237,630
|
[AOTInductor] Free tensors in test
|
muchulee8
|
closed
|
[
"Merged",
"ciflow/trunk",
"topic: not user facing",
"ciflow/inductor"
] | 8
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #150274
Summary:
This PR frees tensors that were new-ed within the test itself to prevent
memory leaks.
Test Plan:
Fixing the tests themselves.
Reviewers:
Subscribers:
Tasks:
Tags:
| true
|
2,959,233,392
|
DISABLED test_tensor_with_grad_to_scalar_warning (__main__.TestTorch)
|
pytorch-bot[bot]
|
open
|
[
"module: autograd",
"triaged",
"module: flaky-tests",
"skipped",
"module: python frontend"
] | 1
|
NONE
|
Platforms: asan, linux, mac, macos, rocm, win, windows
This test was disabled because it is failing in CI. See [recent examples](https://hud.pytorch.org/flakytest?name=test_tensor_with_grad_to_scalar_warning&suite=TestTorch&limit=100) and the most recent trunk [workflow logs](https://github.com/pytorch/pytorch/runs/39662402641).
Over the past 3 hours, it has been determined flaky in 19 workflow(s) with 38 failures and 19 successes.
**Debugging instructions (after clicking on the recent samples link):**
DO NOT ASSUME THINGS ARE OKAY IF THE CI IS GREEN. We now shield flaky tests from developers so CI will thus be green but it will be harder to parse the logs.
To find relevant log snippets:
1. Click on the workflow logs linked above
2. Click on the Test step of the job so that it is expanded. Otherwise, the grepping will not work.
3. Grep for `test_tensor_with_grad_to_scalar_warning`
4. There should be several instances run (as flaky tests are rerun in CI) from which you can study the logs.
<details><summary>Sample error message</summary>
```
Traceback (most recent call last):
File "/var/lib/jenkins/workspace/test/test_torch.py", line 10840, in test_tensor_with_grad_to_scalar_warning
self.assertEqual(len(w), 1)
File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/testing/_internal/common_utils.py", line 4094, in assertEqual
raise error_metas.pop()[0].to_error( # type: ignore[index]
AssertionError: Scalars are not equal!
Expected 1 but got 0.
Absolute difference: 1
Relative difference: 1.0
To execute this test, run the following from the base repo dir:
python test/test_torch.py TestTorch.test_tensor_with_grad_to_scalar_warning
This message can be suppressed by setting PYTORCH_PRINT_REPRO_ON_FAILURE=0
```
</details>
Test file path: `test_torch.py`
cc @ezyang @albanD @gqchen @pearu @nikitaved @soulitzer @Varal7 @xmfan @clee2000
| true
|
2,959,184,428
|
Update Doc for Intel XPU Profiling
|
pytorchbot
|
closed
|
[
"open source"
] | 1
|
COLLABORATOR
|
Updated the two pages below for Intel XPU:
https://pytorch.org/docs/stable/torch.compiler_profiling_torch_compile.html
https://pytorch.org/docs/stable/profiler.html
cc @svekars @brycebortree @sekyondaMeta @AlannaBurke @tstatler
| true
|
2,959,182,157
|
[fbcode]Removing `@NoIntBaseDeprecated` annotation in `evaluation.thrift` file
|
Sunnie912
|
closed
|
[
"fb-exported",
"Merged",
"ciflow/trunk",
"topic: not user facing"
] | 10
|
CONTRIBUTOR
|
Summary: #buildall
Test Plan:
```
buck test 'fbcode//mode/opt' fbcode//caffe2/torch/fb/training_toolkit/applications/bulk_eval/tests:evaluator_test -- --exact 'caffe2/torch/fb/training_toolkit/applications/bulk_eval/tests:evaluator_test - test_setup_evaluation_utils (caffe2.torch.fb.training_toolkit.applications.bulk_eval.tests.evaluator_test.EvaluatorTest)'
```
Differential Revision: D72028940
| true
|
2,959,169,648
|
Add error check for out variant of tensordot function with requries_grad tensor
|
cz2h
|
closed
|
[
"module: bc-breaking",
"triaged",
"open source",
"Merged",
"ciflow/trunk",
"release notes: autograd",
"release notes: linalg_frontend",
"topic: bc breaking"
] | 13
|
CONTRIBUTOR
|
Fixes #147846. Previously, the out variant of `tensordot` did not error out when `requires_grad=True`. This can cause potential issues when the out tensor is part of a computation graph.
Enforces that the out variant of tensordot runs without setting `requires_grad=True`. The change is the same as in #117067.
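A minimal sketch of the call pattern affected by this change (the exact error message comes from the PR, not from this sketch):

```python
import torch

a = torch.randn(3, 4, requires_grad=True)
b = torch.randn(4, 5)
out = torch.empty(3, 5)

# Previously this silently wrote into `out` even though `a` requires grad;
# after this change it is expected to raise instead of producing a result
# that cannot participate correctly in the autograd graph.
torch.tensordot(a, b, dims=([1], [0]), out=out)
```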
cc @ezyang @gchanan
| true
|
2,959,132,004
|
[AOTInductor] Modify test for Memory tracking for memory-related
|
muchulee8
|
closed
|
[
"Merged",
"ciflow/trunk",
"topic: not user facing",
"ciflow/inductor"
] | 14
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #150269
operations
Summary:
Fix the test for memory tracking. This PR does:
(1) Add tracking before and after for all memory-related operations.
Make sure the operations do indeed capture the consumed memory both in CUDA
and in torch's CUDACachingAllocator.
(2) Keep track of memory being reserved by the CUDACachingAllocator in
torch and its relationship with global CUDA memory consumption.
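As a rough illustration of the kind of bookkeeping described above (not the actual test code), one can compare the allocator-level counters with the device-level numbers around an allocation:

```python
import torch

def snapshot():
    free, total = torch.cuda.mem_get_info()          # device-level view
    return {
        "allocated": torch.cuda.memory_allocated(),  # bytes held by live tensors
        "reserved": torch.cuda.memory_reserved(),    # bytes held by the CUDACachingAllocator
        "device_used": total - free,                 # global CUDA memory consumption
    }

before = snapshot()
x = torch.empty(1024, 1024, device="cuda")
after = snapshot()
print({k: after[k] - before[k] for k in before})
```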
Test Plan:
This PR is adding tests.
Reviewers:
Subscribers:
Tasks:
Tags:
| true
|
2,959,031,738
|
Add a test for checking that the CUDA stubs directory is not in libcaffe2_nvrts.so's RPATH or RUNPATH
|
Flamefire
|
open
|
[
"triaged",
"open source",
"topic: not user facing"
] | 1
|
COLLABORATOR
|
The CUDA stub directory must not appear in the RPATH or RUNPATH of any library, as that would make it unusable at runtime. This should no longer happen (it did before, see the previous PR), but we had better check that it stays that way. See the referenced issue https://github.com/pytorch/pytorch/issues/35418
The test verifies this.
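A rough sketch of how such a check could be done (illustrative only, not the test added by this PR):

```python
import subprocess

def has_stubs_in_runpath(lib_path: str) -> bool:
    # RPATH/RUNPATH entries show up in the ELF dynamic section.
    out = subprocess.run(["readelf", "-d", lib_path],
                         capture_output=True, text=True).stdout
    return any(
        ("RPATH" in line or "RUNPATH" in line) and "stubs" in line
        for line in out.splitlines()
    )

# Example usage: assert not has_stubs_in_runpath("path/to/the/library.so")
```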
Closes https://github.com/pytorch/pytorch/issues/35418
See also https://github.com/pytorch/pytorch/pull/134669
| true
|
2,958,962,575
|
Gradient update with `differentiable=True` is slightly different from the default
|
dilithjay
|
closed
|
[
"module: autograd",
"module: optimizer",
"triaged"
] | 3
|
NONE
|
### 🐛 Describe the bug
I observe a slight difference in the weights produced when `differentiable=True` as opposed to the default, specifically for a higher number of updates.
The following is the code to reproduce:
```python
import torch
from learn2learn import clone_module
lr = 0.01
n_updates = 100
# ------------------ Torch Differentiable ------------------
torch.manual_seed(1)
model = torch.nn.Linear(3, 1)
model_clone = clone_module(model)
for param in model_clone.parameters():
param.retain_grad()
optim = torch.optim.Adam(model_clone.parameters(), lr=lr)
x = torch.rand((n_updates, 3), requires_grad=True)
for i in range(n_updates):
b_x = x[i]
y = torch.rand((1,), requires_grad=True)
out = model_clone(b_x)
loss = ((out - y) ** 2).sum()
optim.zero_grad()
loss.backward(retain_graph=True)
optim.step()
params_1 = next(model_clone.parameters()).detach()
# ------------------ Torch ------------------
torch.manual_seed(1)
model = torch.nn.Linear(3, 1)
optim = torch.optim.Adam(model.parameters(), lr=lr)
x = torch.rand((n_updates, 3), requires_grad=True)
for i in range(n_updates):
b_x = x[i]
y = torch.rand((1,), requires_grad=True)
out = model(b_x)
loss = ((out - y) ** 2).sum()
optim.zero_grad()
loss.backward()
optim.step()
params_2 = next(model.parameters()).detach()
print("All close:", torch.allclose(params_1, params_2))
print("Difference:", params_1 - params_2)
```
The following is the observed output:
```
All close: False
Difference: tensor([[ 1.2219e-06, -7.2643e-07, -3.6880e-07]])
```
This difference is exaggerated for higher `lr` or higher `n_updates`.
Note that I'm using the `clone_module` function from the `learn2learn` library, but I don't believe this is the cause of the difference because when I set `differentiable=False` in the first case, the difference is 0.
### Versions
```
Collecting environment information...
PyTorch version: 2.6.0
Is debug build: False
CUDA used to build PyTorch: 12.2
ROCM used to build PyTorch: N/A
OS: Gentoo Linux (x86_64)
GCC version: (Gentoo 12.3.1_p20230526 p2) 12.3.1 20230526
Clang version: Could not collect
CMake version: version 3.27.7
Libc version: glibc-2.37
Python version: 3.10.13 (main, Sep 18 2023, 17:18:13) [GCC 12.3.1 20230526] (64-bit runtime)
Python platform: Linux-5.14.0-362.24.2.el9_3.x86_64-x86_64-Intel-R-_Xeon-R-_CPU_E5-2683_v4_@_2.10GHz-with-glibc2.37
Is CUDA available: False
CUDA runtime version: No CUDA
CUDA_MODULE_LOADING set to: N/A
GPU models and configuration: No CUDA
Nvidia driver version: No CUDA
cuDNN version: No CUDA
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 46 bits physical, 48 bits virtual
Byte Order: Little Endian
CPU(s): 64
On-line CPU(s) list: 0-63
Vendor ID: GenuineIntel
Model name: Intel(R) Xeon(R) CPU E5-2683 v4 @ 2.10GHz
CPU family: 6
Model: 79
Thread(s) per core: 2
Core(s) per socket: 16
Socket(s): 2
Stepping: 1
CPU(s) scaling MHz: 59%
CPU max MHz: 3000.0000
CPU min MHz: 1200.0000
BogoMIPS: 4190.33
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc cpuid aperfmperf pni pclmulqdq dtes64 monitor ds_cpl smx est tm2 ssse3 sdbg fma cx16 xtpr pdcm pcid dca sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm 3dnowprefetch cpuid_fault epb cat_l3 cdp_l3 invpcid_single pti ssbd ibrs ibpb stibp fsgsbase tsc_adjust bmi1 hle avx2 smep bmi2 erms invpcid rtm cqm rdt_a rdseed adx smap intel_pt xsaveopt cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local dtherm ida arat pln pts md_clear flush_l1d
L1d cache: 1 MiB (32 instances)
L1i cache: 1 MiB (32 instances)
L2 cache: 8 MiB (32 instances)
L3 cache: 80 MiB (2 instances)
NUMA node(s): 2
NUMA node0 CPU(s): 0,2,4,6,8,10,12,14,16,18,20,22,24,26,28,30,32,34,36,38,40,42,44,46,48,50,52,54,56,58,60,62
NUMA node1 CPU(s): 1,3,5,7,9,11,13,15,17,19,21,23,25,27,29,31,33,35,37,39,41,43,45,47,49,51,53,55,57,59,61,63
Vulnerability Gather data sampling: Not affected
Vulnerability Itlb multihit: KVM: Mitigation: VMX unsupported
Vulnerability L1tf: Mitigation; PTE Inversion
Vulnerability Mds: Mitigation; Clear CPU buffers; SMT vulnerable
Vulnerability Meltdown: Mitigation; PTI
Vulnerability Mmio stale data: Mitigation; Clear CPU buffers; SMT vulnerable
Vulnerability Retbleed: Not affected
Vulnerability Spec rstack overflow: Not affected
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Retpolines, IBPB conditional, IBRS_FW, STIBP conditional, RSB filling, PBRSB-eIBRS Not affected
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Mitigation; Clear CPU buffers; SMT vulnerable
Versions of relevant libraries:
[pip3] numpy==1.26.4+computecanada
[pip3] optree==0.14.1
[pip3] torch==2.6.0+computecanada
[pip3] torch-tb-profiler==0.4.3
[pip3] torchopt==0.7.3
[pip3] torchvision==0.21.0+computecanada
[conda] Could not collect
```
cc @ezyang @albanD @gqchen @pearu @nikitaved @soulitzer @Varal7 @xmfan @vincentqb @jbschlosser @janeyx99 @crcrpar
| true
|
2,958,912,075
|
Pytorch nightly Cuda 12.8 - 'too many resources requested for launch' with multiple layers of LayerNorm after strided Conv1d
|
Pyr-000
|
closed
|
[
"high priority",
"module: cuda",
"triaged",
"Blackwell"
] | 19
|
NONE
|
### 🐛 Describe the bug
The current PyTorch nightly build for CUDA 12.8 (`2.8.0.dev20250327+cu128`) yields the following error:
`RuntimeError: CUDA error: too many resources requested for launch`
when running a backward pass through a model with multiple strided Conv1d modules followed by LayerNorm modules.
The error does not seem to occur when the Conv1d modules have their stride set to 1, nor when only one strided Conv1d followed by one LayerNorm is used.
(The issue does not occur when setting the device to CPU or replacing the LayerNorm module with e.g. RMSNorm)
Running the following example script:
```python
import torch
class PermuteModule(torch.nn.Module):
def __init__(self, permutation):
super(PermuteModule, self).__init__()
self.permutation = permutation
def forward(self, x:torch.Tensor) -> torch.Tensor:
assert len(x.shape) == len(self.permutation), f"Dimension mismatch! Unable to permute {len(x.shape)} dim input with a {len(self.permutation)} dim permutation!"
return x.permute(*self.permutation)
def test(n_layers:int, conv_stride:int):
_sequence = []
for _ in range(n_layers):
# Conv1d inputs are (N x C x L), LayerNorm expects (* x C). Dims must be permuted between modules.
_sequence += [
PermuteModule((0,2,1)),
torch.nn.Conv1d(in_channels=512, out_channels=512, groups=1, kernel_size=9, dilation=1, stride=conv_stride, padding=0, bias=False),
PermuteModule((0,2,1)),
torch.nn.LayerNorm(512),
torch.nn.ReLU()
]
model = torch.nn.Sequential(*_sequence).to(device="cuda")
data = torch.randn((100,2048,512), device="cuda")
out = model(data)
loss = torch.nn.functional.mse_loss(out, torch.rand_like(out))
loss.backward()
torch.autograd.set_detect_anomaly(True)
print(f"Torch version: {torch.__version__}")
print(f"layers=1, stride=1")
test(n_layers=1, conv_stride=1)
print(f"layers=2, stride=1")
test(n_layers=2, conv_stride=1)
print(f"layers=1, stride=2")
test(n_layers=1, conv_stride=2)
print(f"layers=2, stride=2")
test(n_layers=2, conv_stride=2)
# we will not reach this print statement.
print("DONE.")
```
Yields the output:
```bash
Torch version: 2.8.0.dev20250327+cu128
layers=1, stride=1
layers=2, stride=1
layers=1, stride=2
layers=2, stride=2
C:\Applications\Python\Python312\Lib\site-packages\torch\autograd\graph.py:824: UserWarning: Error detected in NativeLayerNormBackward0. Traceback of forward call that caused the error:
File "C:\TWD\sample.py", line 38, in <module>
test(n_layers=2, conv_stride=2)
File "C:\TWD\sample.py", line 24, in test
out = model(data)
File "C:\Applications\Python\Python312\Lib\site-packages\torch\nn\modules\module.py", line 1751, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
File "C:\Applications\Python\Python312\Lib\site-packages\torch\nn\modules\module.py", line 1762, in _call_impl
return forward_call(*args, **kwargs)
File "C:\Applications\Python\Python312\Lib\site-packages\torch\nn\modules\container.py", line 240, in forward
input = module(input)
File "C:\Applications\Python\Python312\Lib\site-packages\torch\nn\modules\module.py", line 1751, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
File "C:\Applications\Python\Python312\Lib\site-packages\torch\nn\modules\module.py", line 1762, in _call_impl
return forward_call(*args, **kwargs)
File "C:\Applications\Python\Python312\Lib\site-packages\torch\nn\modules\normalization.py", line 217, in forward
return F.layer_norm(
File "C:\Applications\Python\Python312\Lib\site-packages\torch\nn\functional.py", line 2910, in layer_norm
return torch.layer_norm(
(Triggered internally at C:\actions-runner\_work\pytorch\pytorch\pytorch\torch\csrc\autograd\python_anomaly_mode.cpp:127.)
return Variable._execution_engine.run_backward( # Calls into the C++ engine to run the backward pass
Traceback (most recent call last):
File "C:\TWD\sample.py", line 38, in <module>
test(n_layers=2, conv_stride=2)
File "C:\TWD\sample.py", line 26, in test
loss.backward()
File "C:\Applications\Python\Python312\Lib\site-packages\torch\_tensor.py", line 648, in backward
torch.autograd.backward(
File "C:\Applications\Python\Python312\Lib\site-packages\torch\autograd\__init__.py", line 353, in backward
_engine_run_backward(
File "C:\Applications\Python\Python312\Lib\site-packages\torch\autograd\graph.py", line 824, in _engine_run_backward
return Variable._execution_engine.run_backward( # Calls into the C++ engine to run the backward pass
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
RuntimeError: CUDA error: too many resources requested for launch
CUDA kernel errors might be asynchronously reported at some other API call, so the stacktrace below might be incorrect.
For debugging consider passing CUDA_LAUNCH_BLOCKING=1
Compile with `TORCH_USE_CUDA_DSA` to enable device-side assertions.
```
### Versions
PyTorch version: 2.8.0.dev20250327+cu128
Is debug build: False
CUDA used to build PyTorch: 12.8
ROCM used to build PyTorch: N/A
OS: Microsoft Windows 11 Pro for Workstations (10.0.26100 64-bit)
GCC version: Could not collect
Clang version: Could not collect
CMake version: Could not collect
Libc version: N/A
Python version: 3.12.7 (tags/v3.12.7:0b05ead, Oct 1 2024, 03:06:41) [MSC v.1941 64 bit (AMD64)] (64-bit runtime)
Python platform: Windows-11-10.0.26100-SP0
Is CUDA available: True
CUDA runtime version: 12.8.93
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration: GPU 0: NVIDIA GeForce RTX 5090
Nvidia driver version: 572.83
cuDNN version: Could not collect
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Name: AMD Ryzen 9 7950X3D 16-Core Processor
Manufacturer: AuthenticAMD
Family: 107
Architecture: 9
ProcessorType: 3
DeviceID: CPU0
CurrentClockSpeed: 4201
MaxClockSpeed: 4201
L2CacheSize: 16384
L2CacheSpeed: None
Revision: 24834
Versions of relevant libraries:
[pip3] numpy==2.2.0
[pip3] torch==2.8.0.dev20250327+cu128
[pip3] torchaudio==2.6.0.dev20250330+cu128
[pip3] torchvision==0.22.0.dev20250330+cu128
[conda] Could not collect
cc @ezyang @gchanan @zou3519 @kadeng @msaroufim @ptrblck @eqy
| true
|
2,958,748,550
|
Graph break on Tensor._make_subclass
|
KareemMusleh
|
closed
|
[
"triaged",
"oncall: pt2",
"module: dynamo"
] | 12
|
NONE
|
### 🐛 Describe the bug
I am having the following problem
```python
from torch import nn
import torch
torch_compile_options = {
"epilogue_fusion" : True,
"max_autotune" : True,
"shape_padding" : True,
"trace.enabled" : True,
"triton.cudagraphs" : False,
}
class a(nn.Linear):
def __init__(self, b):
super().__init__(128, 128)
self.b = b
class b(nn.Parameter):
def __new__(cls, data):
self = torch.Tensor._make_subclass(cls, data)
return self
A = a(b(torch.randn(12, 12)))
@torch.compile(fullgraph = True, dynamic = True, options = torch_compile_options)
def test():
out = 3 * A.b
return out
test()
```
Throws the following error `Unsupported: call_method UserDefinedObjectVariable(b) __rmul__ [ConstantVariable(int: 3)] {} `
Is there a way around it?
### Versions
Using torch 2.6 with CUDA 12.4 in Colab. I also tried the nightly with CUDA 12.6 and it fails as well.
Collecting environment information...
PyTorch version: 2.6.0+cu124
Is debug build: False
CUDA used to build PyTorch: 12.4
ROCM used to build PyTorch: N/A
OS: Ubuntu 22.04.4 LTS (x86_64)
GCC version: (Ubuntu 11.4.0-1ubuntu1~22.04) 11.4.0
Clang version: 14.0.0-1ubuntu1.1
CMake version: version 3.31.6
Libc version: glibc-2.35
Python version: 3.11.11 (main, Dec 4 2024, 08:55:07) [GCC 11.4.0] (64-bit runtime)
Python platform: Linux-6.1.85+-x86_64-with-glibc2.35
Is CUDA available: True
CUDA runtime version: 12.5.82
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration: GPU 0: Tesla T4
Nvidia driver version: 550.54.15
cuDNN version: Probably one of the following:
/usr/lib/x86_64-linux-gnu/libcudnn.so.9.2.1
/usr/lib/x86_64-linux-gnu/libcudnn_adv.so.9.2.1
/usr/lib/x86_64-linux-gnu/libcudnn_cnn.so.9.2.1
/usr/lib/x86_64-linux-gnu/libcudnn_engines_precompiled.so.9.2.1
/usr/lib/x86_64-linux-gnu/libcudnn_engines_runtime_compiled.so.9.2.1
/usr/lib/x86_64-linux-gnu/libcudnn_graph.so.9.2.1
/usr/lib/x86_64-linux-gnu/libcudnn_heuristic.so.9.2.1
/usr/lib/x86_64-linux-gnu/libcudnn_ops.so.9.2.1
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 46 bits physical, 48 bits virtual
Byte Order: Little Endian
CPU(s): 2
On-line CPU(s) list: 0,1
Vendor ID: GenuineIntel
Model name: Intel(R) Xeon(R) CPU @ 2.00GHz
CPU family: 6
Model: 85
Thread(s) per core: 2
Core(s) per socket: 1
Socket(s): 1
Stepping: 3
BogoMIPS: 4000.28
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ss ht syscall nx pdpe1gb rdtscp lm constant_tsc rep_good nopl xtopology nonstop_tsc cpuid tsc_known_freq pni pclmulqdq ssse3 fma cx16 pcid sse4_1 sse4_2 x2apic movbe popcnt aes xsave avx f16c rdrand hypervisor lahf_lm abm 3dnowprefetch invpcid_single ssbd ibrs ibpb stibp fsgsbase tsc_adjust bmi1 hle avx2 smep bmi2 erms invpcid rtm mpx avx512f avx512dq rdseed adx smap clflushopt clwb avx512cd avx512bw avx512vl xsaveopt xsavec xgetbv1 xsaves arat md_clear arch_capabilities
Hypervisor vendor: KVM
Virtualization type: full
L1d cache: 32 KiB (1 instance)
L1i cache: 32 KiB (1 instance)
L2 cache: 1 MiB (1 instance)
L3 cache: 38.5 MiB (1 instance)
NUMA node(s): 1
NUMA node0 CPU(s): 0,1
Vulnerability Gather data sampling: Not affected
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Mitigation; PTE Inversion
Vulnerability Mds: Vulnerable; SMT Host state unknown
Vulnerability Meltdown: Vulnerable
Vulnerability Mmio stale data: Vulnerable
Vulnerability Reg file data sampling: Not affected
Vulnerability Retbleed: Vulnerable
Vulnerability Spec rstack overflow: Not affected
Vulnerability Spec store bypass: Vulnerable
Vulnerability Spectre v1: Vulnerable: __user pointer sanitization and usercopy barriers only; no swapgs barriers
Vulnerability Spectre v2: Vulnerable; IBPB: disabled; STIBP: disabled; PBRSB-eIBRS: Not affected; BHI: Vulnerable (Syscall hardening enabled)
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Vulnerable
Versions of relevant libraries:
[pip3] numpy==2.0.2
[pip3] nvidia-cublas-cu12==12.4.5.8
[pip3] nvidia-cuda-cupti-cu12==12.4.127
[pip3] nvidia-cuda-nvrtc-cu12==12.4.127
[pip3] nvidia-cuda-runtime-cu12==12.4.127
[pip3] nvidia-cudnn-cu12==9.1.0.70
[pip3] nvidia-cufft-cu12==11.2.1.3
[pip3] nvidia-curand-cu12==10.3.5.147
[pip3] nvidia-cusolver-cu12==11.6.1.9
[pip3] nvidia-cusparse-cu12==12.3.1.170
[pip3] nvidia-cusparselt-cu12==0.6.2
[pip3] nvidia-nccl-cu12==2.21.5
[pip3] nvidia-nvjitlink-cu12==12.4.127
[pip3] nvidia-nvtx-cu12==12.4.127
[pip3] nvtx==0.2.11
[pip3] optree==0.14.1
[pip3] pynvjitlink-cu12==0.5.2
[pip3] torch==2.6.0+cu124
[pip3] torchaudio==2.6.0+cu124
[pip3] torchsummary==1.5.1
[pip3] torchvision==0.21.0+cu124
[pip3] triton==3.2.0
[conda] Could not collect
cc @chauhang @penguinwu @voznesenskym @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @amjames
| true
|
2,958,742,336
|
Refresh expected results.
|
laithsakka
|
closed
|
[
"Merged",
"ciflow/trunk",
"topic: not user facing",
"module: dynamo",
"ciflow/inductor"
] | 10
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #150264
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @chauhang @amjames
| true
|
2,958,689,615
|
[submodule] Bump ITTAPI to 3.25.5
|
cyyever
|
closed
|
[
"triaged",
"open source",
"oncall: profiler",
"Merged",
"ciflow/trunk",
"topic: not user facing",
"topic: build"
] | 18
|
COLLABORATOR
|
It hasn't been updated for 3 years. This also removes the CMake 4 workaround.
cc @robieta @chaekit @guotuofeng @guyang3532 @dzhulgakov @davidberard98 @briancoutinho @sraikund16 @sanrise
| true
|
2,958,668,229
|
Fake mode mismatch when doing nested compile + tensor subclass
|
gau-nernst
|
open
|
[
"triaged",
"oncall: pt2",
"module: fakeTensor",
"module: pt2-dispatcher"
] | 2
|
NONE
|
### 🐛 Describe the bug
```python
import torch
import torch.nn.functional as F
from torch import Tensor, nn
class MyEmbedding(nn.Module):
def __init__(self, num_embeds, embed_dim):
super().__init__()
self.weight = nn.Parameter(torch.randn(num_embeds, embed_dim))
def forward(self, x):
return F.embedding(x, self.weight)
@torch.compile
def _dequantize(int4_data: Tensor, scales: Tensor) -> Tensor:
# int4_data: (..., N / 2), in int8
# scales: (..., N / block_size)
int8_data = torch.stack([int4_data << 4 >> 4, int4_data >> 4], dim=-1)
fp32_data = int8_data.float().view(*scales.shape, -1) * scales.unsqueeze(-1)
return fp32_data.flatten(-2).to(scales.dtype)
class Int4Tensor(Tensor):
@staticmethod
def __new__(cls, int4_data: Tensor, scales: Tensor):
shape = int4_data.shape
return Tensor._make_wrapper_subclass(
cls,
shape[:-1] + (shape[-1] * 2,),
dtype=scales.dtype,
device=scales.device,
)
def __init__(self, int4_data: Tensor, scales: Tensor) -> None:
self.int4_data = int4_data
self.scales = scales
def __tensor_flatten__(self):
return ["int4_data", "scales"], []
@classmethod
def __tensor_unflatten__(cls, tensor_data_dict, tensor_attributes, outer_size=None, outer_stride=None):
return cls(*tensor_data_dict.values(), *tensor_attributes)
def __repr__(self):
fields = dict(
shape=tuple(self.shape),
block_size=self.int4_data.shape[-1] * 2 // self.scales.shape[-1],
device=self.device,
)
fields_str = ", ".join(f"{k}={v}" for k, v in fields.items())
return f"{self.__class__.__name__}({fields_str})"
@classmethod
def __torch_function__(cls, func, types, args=(), kwargs=None):
kwargs = kwargs or dict()
if func is F.embedding:
input: Tensor = args[0]
weight: Int4Tensor = args[1]
return _dequantize(F.embedding(input, weight.int4_data), F.embedding(input, weight.scales))
with torch._C.DisableTorchFunctionSubclass():
return func(*args, **kwargs)
@classmethod
def __torch_dispatch__(cls, func, types, args, kwargs):
aten = torch.ops.aten
if func is aten.detach.default:
x: Int4Tensor = args[0]
return Int4Tensor(x.int4_data, x.scales)
msg = f"{cls.__name__} dispatch: {func} is not implemented"
for i, arg in enumerate(args):
msg += f"\n- args[{i}]={arg}"
for k, v in kwargs.items():
msg += f"\n- {k}={v}"
raise NotImplementedError(msg)
if __name__ == "__main__":
embedding = MyEmbedding(100, 32).cuda() # this doesn't work
# embedding = nn.Embedding(100, 32).cuda() # this works
embedding.weight = nn.Parameter(
Int4Tensor(
torch.randint(-128, 127, size=(100, 16), dtype=torch.int8, device="cuda"),
torch.randn(100, 1, device="cuda"),
)
)
embedding.compile()
embedding(torch.randint(100, size=(2,), device="cuda"))
# this also works
model = nn.Sequential(
nn.Embedding(100, 32),
nn.ReLU(),
).cuda()
model[0].weight = nn.Parameter(
Int4Tensor(
torch.randint(-128, 127, size=(100, 16), dtype=torch.int8, device="cuda"),
torch.randn(100, 1, device="cuda"),
)
)
model.compile()
model(torch.randint(100, size=(2,), device="cuda"))
```
I'm implementing int4 quantization via a tensor subclass. I want to get reasonable perf without full-model compile by compiling the dequant function only, since full-model compile might not always be possible, or might be too slow during debugging. I also want it to be compatible with full-model compile, so I can squeeze out all the perf when I can get full-model compile working.
In most cases, nested torch.compile works well. However, under certain circumstances, like the snippet above, I'm getting a fake mode mismatch. I don't understand the internals of torch.compile well enough to know why this error is happening. `MyEmbedding` is just there to create an MRE; the actual model code is a bit more complicated.
Some non-ideal alternatives:
- Don't decorate dequant function with torch.compile -> bad "eager" perf (non full-model compile to be exact)
- Wrap the compiled dequant function in a custom op (see the sketch below) -> works, but the compiler loses some optimization opportunity
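For concreteness, the custom-op alternative looks roughly like this (a minimal sketch; the `quant::int4_dequant` name is made up, and `torch.library.custom_op` needs a recent PyTorch):
```python
# Sketch: hide the compiled dequant behind a custom op. Dynamo treats the op as
# opaque, which avoids the nested-compile interaction but also hides it from fusion.
import torch
from torch import Tensor

@torch.library.custom_op("quant::int4_dequant", mutates_args=())
def int4_dequant(int4_data: Tensor, scales: Tensor) -> Tensor:
    return _dequantize(int4_data, scales)  # the torch.compile-d function above

@int4_dequant.register_fake
def _(int4_data: Tensor, scales: Tensor) -> Tensor:
    out_shape = int4_data.shape[:-1] + (int4_data.shape[-1] * 2,)
    return int4_data.new_empty(out_shape, dtype=scales.dtype)
```
With this, `__torch_function__` would call `int4_dequant(...)` instead of `_dequantize(...)`.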
### Error logs
<details>
<summary>Error message</summary>
```
torch._dynamo.exc.TorchRuntimeError: Dynamo failed to run FX node with fake tensors: call_function <function embedding at 0x7b5102b39d00>(*(FakeTensor(..., device='cuda:0', size=(2,), dtype=torch.int64), Int4Tensor(shape=(100, 32), block_size=32, device=cuda:0)), **{}): got BackendCompilerFailed('backend='inductor' raised:
AssertionError: fake mode (<torch._subclasses.fake_tensor.FakeTensorMode object at 0x7b505185cb90>) from tracing context 0 doesn't match mode (<torch._subclasses.fake_tensor.FakeTensorMode object at 0x7b50518b3490>) from fake tensor input 0
fake mode from tracing context 0 allocated at:
File "/home/thien/code/gemma3-int4/debug.py", line 93, in <module>
embedding(torch.randint(100, size=(2,), device="cuda"))
File "/home/thien/uv_envs/dev2.7_nightly/lib/python3.11/site-packages/torch/nn/modules/module.py", line 1749, in _wrapped_call_impl
return self._compiled_call_impl(*args, **kwargs) # type: ignore[misc]
File "/home/thien/uv_envs/dev2.7_nightly/lib/python3.11/site-packages/torch/_dynamo/eval_frame.py", line 658, in _fn
return fn(*args, **kwargs)
File "/home/thien/uv_envs/dev2.7_nightly/lib/python3.11/site-packages/torch/nn/modules/module.py", line 1762, in _call_impl
return forward_call(*args, **kwargs)
File "/home/thien/uv_envs/dev2.7_nightly/lib/python3.11/site-packages/torch/_dynamo/convert_frame.py", line 1453, in __call__
return self._torchdynamo_orig_callable(
File "/home/thien/uv_envs/dev2.7_nightly/lib/python3.11/site-packages/torch/_dynamo/convert_frame.py", line 1234, in __call__
result = self._inner_convert(
File "/home/thien/uv_envs/dev2.7_nightly/lib/python3.11/site-packages/torch/_dynamo/convert_frame.py", line 619, in __call__
return _compile(
File "/home/thien/uv_envs/dev2.7_nightly/lib/python3.11/site-packages/torch/_dynamo/convert_frame.py", line 1080, in _compile
guarded_code = compile_inner(code, one_graph, hooks, transform)
File "/home/thien/uv_envs/dev2.7_nightly/lib/python3.11/site-packages/torch/_utils_internal.py", line 97, in wrapper_function
return function(*args, **kwargs)
File "/home/thien/uv_envs/dev2.7_nightly/lib/python3.11/site-packages/torch/_dynamo/convert_frame.py", line 782, in compile_inner
return _compile_inner(code, one_graph, hooks, transform)
File "/home/thien/uv_envs/dev2.7_nightly/lib/python3.11/site-packages/torch/_dynamo/convert_frame.py", line 818, in _compile_inner
out_code = transform_code_object(code, transform)
File "/home/thien/uv_envs/dev2.7_nightly/lib/python3.11/site-packages/torch/_dynamo/bytecode_transformation.py", line 1422, in transform_code_object
transformations(instructions, code_options)
File "/home/thien/uv_envs/dev2.7_nightly/lib/python3.11/site-packages/torch/_dynamo/convert_frame.py", line 264, in _fn
return fn(*args, **kwargs)
File "/home/thien/uv_envs/dev2.7_nightly/lib/python3.11/site-packages/torch/_dynamo/convert_frame.py", line 736, in transform
tracer.run()
File "/home/thien/uv_envs/dev2.7_nightly/lib/python3.11/site-packages/torch/_dynamo/symbolic_convert.py", line 3500, in run
super().run()
File "/home/thien/uv_envs/dev2.7_nightly/lib/python3.11/site-packages/torch/_dynamo/symbolic_convert.py", line 1337, in run
while self.step():
File "/home/thien/uv_envs/dev2.7_nightly/lib/python3.11/site-packages/torch/_dynamo/symbolic_convert.py", line 1246, in step
self.dispatch_table[inst.opcode](self, inst)
File "/home/thien/uv_envs/dev2.7_nightly/lib/python3.11/site-packages/torch/_dynamo/symbolic_convert.py", line 819, in wrapper
return inner_fn(self, inst)
File "/home/thien/uv_envs/dev2.7_nightly/lib/python3.11/site-packages/torch/_dynamo/symbolic_convert.py", line 2933, in CALL
self._call(inst)
File "/home/thien/uv_envs/dev2.7_nightly/lib/python3.11/site-packages/torch/_dynamo/symbolic_convert.py", line 2927, in _call
self.call_function(fn, args, kwargs)
File "/home/thien/uv_envs/dev2.7_nightly/lib/python3.11/site-packages/torch/_dynamo/symbolic_convert.py", line 1170, in call_function
self.push(fn.call_function(self, args, kwargs)) # type: ignore[arg-type]
File "/home/thien/uv_envs/dev2.7_nightly/lib/python3.11/site-packages/torch/_dynamo/variables/torch.py", line 1213, in call_function
tensor_variable = wrap_fx_proxy(
File "/home/thien/uv_envs/dev2.7_nightly/lib/python3.11/site-packages/torch/_dynamo/variables/builder.py", line 2325, in wrap_fx_proxy
return wrap_fx_proxy_cls(target_cls=TensorVariable, **kwargs)
File "/home/thien/uv_envs/dev2.7_nightly/lib/python3.11/site-packages/torch/_dynamo/variables/builder.py", line 2391, in wrap_fx_proxy_cls
return _wrap_fx_proxy(
File "/home/thien/uv_envs/dev2.7_nightly/lib/python3.11/site-packages/torch/_dynamo/variables/builder.py", line 2487, in _wrap_fx_proxy
example_value = get_fake_value(proxy.node, tx, allow_non_graph_fake=True)
File "/home/thien/uv_envs/dev2.7_nightly/lib/python3.11/site-packages/torch/_dynamo/utils.py", line 3130, in get_fake_value
ret_val = wrap_fake_exception(
File "/home/thien/uv_envs/dev2.7_nightly/lib/python3.11/site-packages/torch/_dynamo/utils.py", line 2644, in wrap_fake_exception
return fn()
File "/home/thien/uv_envs/dev2.7_nightly/lib/python3.11/site-packages/torch/_dynamo/utils.py", line 3131, in <lambda>
lambda: run_node(tx.output, node, args, kwargs, nnmodule)
File "/home/thien/uv_envs/dev2.7_nightly/lib/python3.11/site-packages/torch/_dynamo/utils.py", line 3287, in run_node
return node.target(*args, **kwargs)
File "/home/thien/uv_envs/dev2.7_nightly/lib/python3.11/site-packages/torch/nn/functional.py", line 2516, in embedding
return handle_torch_function(
File "/home/thien/uv_envs/dev2.7_nightly/lib/python3.11/site-packages/torch/overrides.py", line 1743, in handle_torch_function
result = torch_func_method(public_api, types, args, kwargs)
File "/home/thien/code/gemma3-int4/debug.py", line 62, in __torch_function__
return _dequantize(F.embedding(input, weight.int4_data), F.embedding(input, weight.scales))
File "/home/thien/uv_envs/dev2.7_nightly/lib/python3.11/site-packages/torch/_dynamo/eval_frame.py", line 658, in _fn
return fn(*args, **kwargs)
File "/home/thien/uv_envs/dev2.7_nightly/lib/python3.11/site-packages/torch/_dynamo/convert_frame.py", line 1453, in __call__
return self._torchdynamo_orig_callable(
File "/home/thien/uv_envs/dev2.7_nightly/lib/python3.11/site-packages/torch/_dynamo/convert_frame.py", line 1234, in __call__
result = self._inner_convert(
File "/home/thien/uv_envs/dev2.7_nightly/lib/python3.11/site-packages/torch/_dynamo/convert_frame.py", line 619, in __call__
return _compile(
File "/home/thien/uv_envs/dev2.7_nightly/lib/python3.11/site-packages/torch/_dynamo/convert_frame.py", line 1080, in _compile
guarded_code = compile_inner(code, one_graph, hooks, transform)
File "/home/thien/uv_envs/dev2.7_nightly/lib/python3.11/site-packages/torch/_utils_internal.py", line 97, in wrapper_function
return function(*args, **kwargs)
File "/home/thien/uv_envs/dev2.7_nightly/lib/python3.11/site-packages/torch/_dynamo/convert_frame.py", line 782, in compile_inner
return _compile_inner(code, one_graph, hooks, transform)
File "/home/thien/uv_envs/dev2.7_nightly/lib/python3.11/site-packages/torch/_dynamo/convert_frame.py", line 818, in _compile_inner
out_code = transform_code_object(code, transform)
File "/home/thien/uv_envs/dev2.7_nightly/lib/python3.11/site-packages/torch/_dynamo/bytecode_transformation.py", line 1422, in transform_code_object
transformations(instructions, code_options)
File "/home/thien/uv_envs/dev2.7_nightly/lib/python3.11/site-packages/torch/_dynamo/convert_frame.py", line 264, in _fn
return fn(*args, **kwargs)
File "/home/thien/uv_envs/dev2.7_nightly/lib/python3.11/site-packages/torch/_dynamo/convert_frame.py", line 736, in transform
tracer.run()
File "/home/thien/uv_envs/dev2.7_nightly/lib/python3.11/site-packages/torch/_dynamo/symbolic_convert.py", line 3500, in run
super().run()
File "/home/thien/uv_envs/dev2.7_nightly/lib/python3.11/site-packages/torch/_dynamo/symbolic_convert.py", line 1337, in run
while self.step():
File "/home/thien/uv_envs/dev2.7_nightly/lib/python3.11/site-packages/torch/_dynamo/symbolic_convert.py", line 1246, in step
self.dispatch_table[inst.opcode](self, inst)
File "/home/thien/uv_envs/dev2.7_nightly/lib/python3.11/site-packages/torch/_dynamo/symbolic_convert.py", line 3701, in RETURN_VALUE
self._return(inst)
File "/home/thien/uv_envs/dev2.7_nightly/lib/python3.11/site-packages/torch/_dynamo/symbolic_convert.py", line 3686, in _return
self.output.compile_subgraph(
File "/home/thien/uv_envs/dev2.7_nightly/lib/python3.11/site-packages/torch/_dynamo/output_graph.py", line 1145, in compile_subgraph
self.compile_and_call_fx_graph(
File "/home/thien/uv_envs/dev2.7_nightly/lib/python3.11/site-packages/torch/_dynamo/output_graph.py", line 1429, in compile_and_call_fx_graph
backend_fake_mode = torch._subclasses.FakeTensorMode(
File "/home/thien/uv_envs/dev2.7_nightly/lib/python3.11/site-packages/torch/_subclasses/fake_tensor.py", line 1228, in __init__
self._stack_trace = traceback.extract_stack()
fake mode from fake tensor input 0 allocated at:
File "/home/thien/code/gemma3-int4/debug.py", line 93, in <module>
embedding(torch.randint(100, size=(2,), device="cuda"))
File "/home/thien/uv_envs/dev2.7_nightly/lib/python3.11/site-packages/torch/nn/modules/module.py", line 1749, in _wrapped_call_impl
return self._compiled_call_impl(*args, **kwargs) # type: ignore[misc]
File "/home/thien/uv_envs/dev2.7_nightly/lib/python3.11/site-packages/torch/_dynamo/eval_frame.py", line 658, in _fn
return fn(*args, **kwargs)
File "/home/thien/uv_envs/dev2.7_nightly/lib/python3.11/site-packages/torch/nn/modules/module.py", line 1762, in _call_impl
return forward_call(*args, **kwargs)
File "/home/thien/uv_envs/dev2.7_nightly/lib/python3.11/site-packages/torch/_dynamo/convert_frame.py", line 1453, in __call__
return self._torchdynamo_orig_callable(
File "/home/thien/uv_envs/dev2.7_nightly/lib/python3.11/site-packages/torch/_dynamo/convert_frame.py", line 1234, in __call__
result = self._inner_convert(
File "/home/thien/uv_envs/dev2.7_nightly/lib/python3.11/site-packages/torch/_dynamo/convert_frame.py", line 619, in __call__
return _compile(
File "/home/thien/uv_envs/dev2.7_nightly/lib/python3.11/site-packages/torch/_dynamo/convert_frame.py", line 1080, in _compile
guarded_code = compile_inner(code, one_graph, hooks, transform)
File "/home/thien/uv_envs/dev2.7_nightly/lib/python3.11/site-packages/torch/_utils_internal.py", line 97, in wrapper_function
return function(*args, **kwargs)
File "/home/thien/uv_envs/dev2.7_nightly/lib/python3.11/site-packages/torch/_dynamo/convert_frame.py", line 782, in compile_inner
return _compile_inner(code, one_graph, hooks, transform)
File "/home/thien/uv_envs/dev2.7_nightly/lib/python3.11/site-packages/torch/_dynamo/convert_frame.py", line 818, in _compile_inner
out_code = transform_code_object(code, transform)
File "/home/thien/uv_envs/dev2.7_nightly/lib/python3.11/site-packages/torch/_dynamo/bytecode_transformation.py", line 1422, in transform_code_object
transformations(instructions, code_options)
File "/home/thien/uv_envs/dev2.7_nightly/lib/python3.11/site-packages/torch/_dynamo/convert_frame.py", line 264, in _fn
return fn(*args, **kwargs)
File "/home/thien/uv_envs/dev2.7_nightly/lib/python3.11/site-packages/torch/_dynamo/convert_frame.py", line 715, in transform
tracer = InstructionTranslator(
File "/home/thien/uv_envs/dev2.7_nightly/lib/python3.11/site-packages/torch/_dynamo/symbolic_convert.py", line 3329, in __init__
output=OutputGraph(
File "/home/thien/uv_envs/dev2.7_nightly/lib/python3.11/site-packages/torch/_dynamo/output_graph.py", line 349, in __init__
fake_mode = torch._subclasses.FakeTensorMode(
File "/home/thien/uv_envs/dev2.7_nightly/lib/python3.11/site-packages/torch/_subclasses/fake_tensor.py", line 1228, in __init__
self._stack_trace = traceback.extract_stack()
Set TORCHDYNAMO_VERBOSE=1 for the internal stack trace (please do this especially if you're reporting a bug to PyTorch). For even more developer context, set TORCH_LOGS="+dynamo"
')
```
</details>
### Versions
2.8.0.dev20250327+cu128
2.6.0 also has the same error
cc @chauhang @penguinwu @eellison @zou3519 @bdhirsh
| true
|
2,958,636,790
|
UNSTABLE pull / linux-jammy-py3-clang12-executorch / build
|
clee2000
|
closed
|
[
"module: ci",
"unstable"
] | 2
|
CONTRIBUTOR
|
> Please provide a brief reason on why you need to mark this job as unstable.
executorch is not yet compatible with cmake 4.0.0 but doesn't pin cmake, similar to https://github.com/pytorch/pytorch/pull/150158.
Marking this as unstable until executorch updates.
cc @seemethere @malfet @pytorch/pytorch-dev-infra
| true
|
2,958,593,616
|
[XPU] `torch.xpu.is_available()` fails on Intel Arc A770 on latest nightly.
|
simonlui
|
closed
|
[
"needs reproduction",
"triaged",
"module: regression",
"module: xpu"
] | 3
|
NONE
|
### 🐛 Describe the bug
Using the latest nightly 2.8.0.dev20250327+xpu build with my Intel Arc A770, I get the following:
```
Python 3.12.5 | Intel Corporation | (main, Sep 9 2024, 23:35:37) [GCC 14.1.0] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> import torch
>>> torch.xpu.is_available()
False
```
Using torch 2.8.0.dev20250321+xpu
```
Python 3.12.5 | Intel Corporation | (main, Sep 9 2024, 23:35:37) [GCC 14.1.0] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> import torch
>>> torch.xpu.is_available()
True
```
I am not sure what changed between torch 2.8.0.dev20250321+xpu and torch-2.8.0.dev20250325+xpu, which seems to be the first build where it broke. There didn't seem to be any core dependency changes, so I am confused as to what in the build difference would break the availability reporting. The versions listed below are from the non-working configuration.
### Versions
```
PyTorch version: N/A
Is debug build: N/A
CUDA used to build PyTorch: N/A
ROCM used to build PyTorch: N/A
OS: Fedora Linux 40 (Workstation Edition) (x86_64)
GCC version: (GCC) 14.2.1 20240912 (Red Hat 14.2.1-3)
Clang version: 18.1.8 (Fedora 18.1.8-2.fc40)
CMake version: version 3.31.2
Libc version: glibc-2.39
Python version: 3.12.5 | Intel Corporation | (main, Sep 9 2024, 23:35:37) [GCC 14.1.0] (64-bit runtime)
Python platform: Linux-6.13.5-100.fc40.x86_64-x86_64-with-glibc2.39
Is CUDA available: N/A
CUDA runtime version: Could not collect
CUDA_MODULE_LOADING set to: N/A
GPU models and configuration: Could not collect
Nvidia driver version: Could not collect
cuDNN version: Could not collect
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: N/A
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 48 bits physical, 48 bits virtual
Byte Order: Little Endian
CPU(s): 32
On-line CPU(s) list: 0-31
Vendor ID: AuthenticAMD
Model name: AMD Ryzen 9 5950X 16-Core Processor
CPU family: 25
Model: 33
Thread(s) per core: 2
Core(s) per socket: 16
Socket(s): 1
Stepping: 0
Frequency boost: enabled
CPU(s) scaling MHz: 55%
CPU max MHz: 5084.0000
CPU min MHz: 550.0000
BogoMIPS: 6800.38
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ht syscall nx mmxext fxsr_opt pdpe1gb rdtscp lm constant_tsc rep_good nopl xtopology nonstop_tsc cpuid extd_apicid aperfmperf rapl pni pclmulqdq monitor ssse3 fma cx16 sse4_1 sse4_2 x2apic movbe popcnt aes xsave avx f16c rdrand lahf_lm cmp_legacy svm extapic cr8_legacy abm sse4a misalignsse 3dnowprefetch osvw ibs skinit wdt tce topoext perfctr_core perfctr_nb bpext perfctr_llc mwaitx cpb cat_l3 cdp_l3 hw_pstate ssbd mba ibrs ibpb stibp vmmcall fsgsbase bmi1 avx2 smep bmi2 erms invpcid cqm rdt_a rdseed adx smap clflushopt clwb sha_ni xsaveopt xsavec xgetbv1 xsaves cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local user_shstk clzero irperf xsaveerptr rdpru wbnoinvd arat npt lbrv svm_lock nrip_save tsc_scale vmcb_clean flushbyasid decodeassists pausefilter pfthreshold avic v_vmsave_vmload vgif v_spec_ctrl umip pku ospke vaes vpclmulqdq rdpid overflow_recov succor smca fsrm debug_swap
Virtualization: AMD-V
L1d cache: 512 KiB (16 instances)
L1i cache: 512 KiB (16 instances)
L2 cache: 8 MiB (16 instances)
L3 cache: 64 MiB (2 instances)
NUMA node(s): 1
NUMA node0 CPU(s): 0-31
Vulnerability Gather data sampling: Not affected
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Not affected
Vulnerability Reg file data sampling: Not affected
Vulnerability Retbleed: Not affected
Vulnerability Spec rstack overflow: Mitigation; Safe RET
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Retpolines; IBPB conditional; IBRS_FW; STIBP always-on; RSB filling; PBRSB-eIBRS Not affected; BHI Not affected
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Versions of relevant libraries:
[pip3] numpy==2.1.2
[pip3] onnx==1.17.0
[pip3] optree==0.13.1
[pip3] pytorch-triton-xpu==3.3.0+git0bcc8265
[pip3] torch==2.8.0.dev20250327+xpu
[pip3] torchaudio==2.6.0.dev20250329+xpu
[pip3] torchdiffeq==0.2.5
[pip3] torchsde==0.2.6
[pip3] torchvision==0.22.0.dev20250329+xpu
[conda] mkl 2025.0.1 pypi_0 pypi
[conda] mkl-dpcpp 2025.0.1 pypi_0 pypi
[conda] mkl-service 2.4.2 py312_0 https://software.repos.intel.com/python/conda
[conda] mkl_fft 1.3.11 py312h3948073_81 https://software.repos.intel.com/python/conda
[conda] mkl_random 1.2.8 py312hd605fbb_101 https://software.repos.intel.com/python/conda
[conda] mkl_umath 0.1.2 py312h481091c_111 https://software.repos.intel.com/python/conda
[conda] numpy 2.1.2 pypi_0 pypi
[conda] onemkl-sycl-blas 2025.0.1 pypi_0 pypi
[conda] onemkl-sycl-datafitting 2025.0.1 pypi_0 pypi
[conda] onemkl-sycl-dft 2025.0.1 pypi_0 pypi
[conda] onemkl-sycl-lapack 2025.0.1 pypi_0 pypi
[conda] onemkl-sycl-rng 2025.0.1 pypi_0 pypi
[conda] onemkl-sycl-sparse 2025.0.1 pypi_0 pypi
[conda] onemkl-sycl-stats 2025.0.1 pypi_0 pypi
[conda] onemkl-sycl-vm 2025.0.1 pypi_0 pypi
[conda] pytorch-triton-xpu 3.3.0+git0bcc8265 pypi_0 pypi
[conda] torch 2.8.0.dev20250327+xpu pypi_0 pypi
[conda] torchaudio 2.6.0.dev20250329+xpu pypi_0 pypi
[conda] torchdiffeq 0.2.5 pypi_0 pypi
[conda] torchsde 0.2.6 pypi_0 pypi
[conda] torchvision 0.22.0.dev20250329+xpu pypi_0 pypi
```
cc @gujinghui @EikanWang @fengyuan14 @guangyey
| true
|
2,958,509,083
|
[inductor][comms] fix node_summary for composite scheduler nodes
|
xmfan
|
closed
|
[
"Merged",
"topic: not user facing",
"module: inductor",
"ciflow/inductor"
] | 2
|
MEMBER
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #150074
* __->__ #150258
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
| true
|
2,958,154,573
|
Type hint bug report relating to Sequential
|
erlebach
|
open
|
[
"module: typing",
"triaged"
] | 0
|
NONE
|
### 🐛 Describe the bug
Consider the following code in Cursor (also in VSCode):
```python
from torch import nn
from torch.nn import Module, Sequential
dim = 10
heads = 10
num_kv_per_token = 10
to_adaptive_step = Sequential(
    nn.Linear(dim, heads * num_kv_per_token),
    nn.Linear(dim, heads * num_kv_per_token),
)
a = to_adaptive_step[0]
```
The variable `a` registers as `Sequential` when hovering, rather than `nn.Linear`. This might lead to errors since `Sequential` supports indexing, while `nn.Linear` doesn't. A developer would not expect `a` to be of type Sequential.
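Until the stubs are fixed, an explicit `cast` works around the incorrect hover/inference (a workaround sketch):
```python
from typing import cast

from torch import nn

# Tell the type checker what this index actually returns.
a = cast(nn.Linear, to_adaptive_step[0])
```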
### Versions
Python version 3.10.16
```text
`python collect_env.py`: (did not work on my mac):
python collect_env.py
Collecting environment information...
Traceback (most recent call last):
File "/Users/erlebach/src/2024/titans-pytorch/gordon_memory_models/collect_env.py", line 694, in <module>
main()
File "/Users/erlebach/src/2024/titans-pytorch/gordon_memory_models/collect_env.py", line 677, in main
output = get_pretty_env_info()
File "/Users/erlebach/src/2024/titans-pytorch/gordon_memory_models/collect_env.py", line 672, in get_pretty_env_info
return pretty_str(get_env_info())
File "/Users/erlebach/src/2024/titans-pytorch/gordon_memory_models/collect_env.py", line 497, in get_env_info
pip_version, pip_list_output = get_pip_packages(run_lambda)
File "/Users/erlebach/src/2024/titans-pytorch/gordon_memory_models/collect_env.py", line 450, in get_pip_packages
for line in out.splitlines()
AttributeError: 'NoneType' object has no attribute 'splitlines'
```
pyproject.toml: created with uv. Attached along with lock file (both files are converted to txt)
[pyproject.toml.txt](https://github.com/user-attachments/files/19521864/pyproject.toml.txt)
[uv.lock.txt](https://github.com/user-attachments/files/19521865/uv.lock.txt)
cc @ezyang @malfet @xuzhao9 @gramster
| true
|
2,958,084,238
|
[inductor] Fix inductor windows linker error
|
jansel
|
closed
|
[
"module: build",
"module: windows",
"Merged",
"ciflow/trunk",
"module: inductor",
"ciflow/inductor",
"release notes: inductor"
] | 13
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #150256
Fixes #149889
cc @malfet @seemethere @peterjc123 @mszhanyi @skyline75489 @nbcsm @iremyux @Blackhex @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
| true
|
2,957,945,267
|
[MPS] grad scaler
|
Isalia20
|
closed
|
[
"triaged",
"open source",
"module: amp (automated mixed precision)",
"Merged",
"ciflow/trunk",
"module: mps",
"release notes: mps"
] | 14
|
COLLABORATOR
|
Fixes #142397
Basic implementation is done. What's left:
- [x] Different dtype/device tensors in the TensorList
- [x] fast path for grouping the foreach kernel
- [x] Tests
Regarding tests, I found some tests in `test/test_torch.py` for GradScaler, but I couldn't figure out the best way to enable them for the MPS device.
Removing `@onlyNativeDeviceTypes` enables the tests for MPS, but it also enables tests for all other devices that are not included in the native device types. If I put:
`instantiate_device_type_tests(TestTorchDeviceType, globals(), allow_mps=True)`
this enables lots of tests in that class for MPS which were not(?) being tested before. This part needs some clarification.
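One option (a sketch, not what this PR currently does) is to put the new GradScaler tests into their own class and instantiate that class only for the devices of interest, so `@onlyNativeDeviceTypes` elsewhere stays untouched; class and test names below are illustrative:
```python
# Sketch: a separate device-typed test class scoped to MPS only.
import torch
from torch.testing._internal.common_device_type import instantiate_device_type_tests
from torch.testing._internal.common_utils import TestCase, run_tests

class TestGradScalerMPS(TestCase):
    def test_grad_scaler_smoke(self, device):
        scaler = torch.amp.GradScaler(device=device)
        self.assertTrue(scaler.is_enabled())

instantiate_device_type_tests(TestGradScalerMPS, globals(), only_for="mps", allow_mps=True)

if __name__ == "__main__":
    run_tests()
```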
cc @mcarilli @ptrblck @leslie-fang-intel @jgong5 @kulinseth @albanD @malfet @DenisVieriu97 @jhavukainen
| true
|
2,957,864,225
|
RNN training is very slow on Intel xpu
|
JimmysAIPG
|
open
|
[
"triaged",
"module: xpu"
] | 7
|
NONE
|
### 🐛 Describe the bug
Hello, I am a newbie and I just switched from the CUDA environment to the XPU environment for learning. I found that when I use a GRU or LSTM to train a model, training is very slow in the XPU environment. Is there a problem?
Please refer to the attachment for the code
[imdb.txt](https://github.com/user-attachments/files/19516292/imdb.txt)
### Versions
I am using the docker environment:
REPOSITORY TAG IMAGE ID CREATED SIZE
intel/intel-extension-for-pytorch 2.6.10-xpu ed87f8f3c7e0 3 weeks ago 15.1GB
root@97f8d2344ae3:/home# python collect_env.py
[W329 08:10:54.485931320 OperatorEntry.cpp:154] Warning: Warning only once for all operators, other operators may also be overridden.
Overriding a previously registered kernel for the same operator and the same dispatch key
operator: aten::_validate_compressed_sparse_indices(bool is_crow, Tensor compressed_idx, Tensor plain_idx, int cdim, int dim, int nnz) -> ()
registered at /pytorch/build/aten/src/ATen/RegisterSchema.cpp:6
dispatch key: XPU
previous kernel: registered at /pytorch/build/aten/src/ATen/RegisterCPU.cpp:30477
new kernel: registered at /build/intel-pytorch-extension/build/Release/csrc/gpu/csrc/aten/generated/ATen/RegisterXPU.cpp:468 (function operator())
Collecting environment information...
PyTorch version: 2.6.0+xpu
Is debug build: False
CUDA used to build PyTorch: None
ROCM used to build PyTorch: N/A
OS: Ubuntu 22.04.5 LTS (x86_64)
GCC version: (Ubuntu 11.4.0-1ubuntu1~22.04) 11.4.0
Clang version: Could not collect
CMake version: Could not collect
Libc version: glibc-2.35
Python version: 3.10.12 (main, Feb 4 2025, 14:57:36) [GCC 11.4.0] (64-bit runtime)
Python platform: Linux-6.12.20-061220-generic-x86_64-with-glibc2.35
Is CUDA available: False
CUDA runtime version: No CUDA
CUDA_MODULE_LOADING set to: N/A
GPU models and configuration: No CUDA
Nvidia driver version: No CUDA
cuDNN version: No CUDA
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 39 bits physical, 48 bits virtual
Byte Order: Little Endian
CPU(s): 8
On-line CPU(s) list: 0-7
Vendor ID: GenuineIntel
Model name: Intel(R) Core(TM) i3-10100F CPU @ 3.60GHz
CPU family: 6
Model: 165
Thread(s) per core: 2
Core(s) per socket: 4
Socket(s): 1
Stepping: 3
CPU max MHz: 4300.0000
CPU min MHz: 800.0000
BogoMIPS: 7200.00
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc art arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc cpuid aperfmperf pni pclmulqdq dtes64 monitor ds_cpl vmx est tm2 ssse3 sdbg fma cx16 xtpr pdcm pcid sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm 3dnowprefetch cpuid_fault epb ssbd ibrs ibpb stibp ibrs_enhanced tpr_shadow flexpriority ept vpid ept_ad fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid mpx rdseed adx smap clflushopt intel_pt xsaveopt xsavec xgetbv1 xsaves dtherm ida arat pln pts vnmi md_clear flush_l1d arch_capabilities
Virtualization: VT-x
L1d cache: 128 KiB (4 instances)
L1i cache: 128 KiB (4 instances)
L2 cache: 1 MiB (4 instances)
L3 cache: 6 MiB (1 instance)
NUMA node(s): 1
NUMA node0 CPU(s): 0-7
Vulnerability Gather data sampling: Vulnerable
Vulnerability Itlb multihit: KVM: Mitigation: Split huge pages
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Mitigation; Clear CPU buffers; SMT vulnerable
Vulnerability Reg file data sampling: Not affected
Vulnerability Retbleed: Mitigation; Enhanced IBRS
Vulnerability Spec rstack overflow: Not affected
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Enhanced / Automatic IBRS; IBPB conditional; RSB filling; PBRSB-eIBRS SW sequence; BHI SW loop, KVM SW loop
Vulnerability Srbds: Mitigation; Microcode
Vulnerability Tsx async abort: Not affected
Versions of relevant libraries:
[pip3] intel_extension_for_pytorch==2.6.10+xpu
[pip3] numpy==2.2.3
[pip3] pytorch-triton-xpu==3.2.0
[pip3] torch==2.6.0+xpu
[pip3] torchaudio==2.6.0+xpu
[pip3] torchvision==0.21.0+xpu
[conda] Could not collect
root@97f8d2344ae3:/home# [W329 08:10:57.987819736 OperatorEntry.cpp:154] Warning: Warning only once for all operators, other operators may also be overridden.
Overriding a previously registered kernel for the same operator and the same dispatch key
operator: aten::_validate_compressed_sparse_indices(bool is_crow, Tensor compressed_idx, Tensor plain_idx, int cdim, int dim, int nnz) -> ()
registered at /pytorch/build/aten/src/ATen/RegisterSchema.cpp:6
dispatch key: XPU
previous kernel: registered at /pytorch/build/aten/src/ATen/RegisterCPU.cpp:30477
new kernel: registered at /build/intel-pytorch-extension/build/Release/csrc/gpu/csrc/aten/generated/ATen/RegisterXPU.cpp:468 (function operator())
cc @gujinghui @EikanWang @fengyuan14 @guangyey
| true
|
2,957,821,440
|
pytorch for NVIDIA-5090
|
Reginald-L
|
open
|
[
"needs reproduction",
"module: binaries",
"module: cuda",
"triaged"
] | 4
|
NONE
|
I am installing the pytorch gpu version on an RTX5090 device, but I am getting an error:

here is my torch version:
Name: torch
Version: 2.8.0.dev20250327+cu128
Summary: Tensors and Dynamic neural networks in Python with strong GPU acceleration
Home-page: https://pytorch.org/
Author: PyTorch Team
Author-email: packages@pytorch.org
License: BSD-3-Clause
Location: /home/air/anaconda3/envs/kohya/lib/python3.12/site-packages
Requires: filelock, fsspec, jinja2, networkx, nvidia-cublas-cu12, nvidia-cuda-cupti-cu12, nvidia-cuda-nvrtc-cu12, nvidia-cuda-runtime-cu12, nvidia-cudnn-cu12, nvidia-cufft-cu12, nvidia-cufile-cu12, nvidia-curand-cu12, nvidia-cusolver-cu12, nvidia-cusparse-cu12, nvidia-cusparselt-cu12, nvidia-nccl-cu12, nvidia-nvjitlink-cu12, nvidia-nvtx-cu12, pytorch-triton, setuptools, sympy, typing-extensions
Required-by: torchaudio, torchvision
here is my os info:

cc @seemethere @malfet @osalpekar @atalman @ptrblck @msaroufim @eqy
| true
|
2,957,777,398
|
RuntimeError: Subtracting Reconstructed Jagged Nested Tensor Fails with Shape Mismatch
|
xsgxlz
|
open
|
[
"module: docs",
"triaged",
"module: nestedtensor"
] | 2
|
NONE
|
### 🐛 Describe the bug
When a `torch.nested.nested_tensor` with `layout=torch.jagged` is converted to a padded tensor using `to_padded_tensor()` and then reconstructed back into a jagged `nested_tensor` using its offsets, performing a binary operation (like subtraction) between the original and the reconstructed nested tensor results in a `RuntimeError`.
The error message `cannot call binary pointwise function sub.Tensor with inputs of shapes (N, j1) and (N, j2)` suggests that although the tensors represent the same logical jagged structure and have the same number of top-level elements (N=2 in the example), their internal jagged shape representations are treated as incompatible by the binary operation dispatcher for nested tensors.
Expected behavior: The subtraction `nt - reconstructed_nt` should succeed, ideally resulting in a nested tensor of zeros, or raise a more specific error if subtraction between jagged tensors reconstructed this way isn't supported, rather than a generic shape mismatch error.
```python
import torch
# Sample code to reproduce the problem
tensor_list = [torch.zeros(3), torch.ones(2)]
nt = torch.nested.nested_tensor(tensor_list, layout=torch.jagged)
nt_dense = nt.to_padded_tensor(padding=0)
offset = nt.offsets()
# Reconstruct the list of tensors from the padded tensor using offsets
reconstructed_tensor_list = [nt_dense[i, :offset[i+1]-offset[i]] for i in range(len(offset)-1)]
# Create a new jagged nested tensor from the reconstructed list
reconstructed_nt = torch.nested.nested_tensor(reconstructed_tensor_list, layout=torch.jagged)
# This subtraction fails
print("Original Nested Tensor:", nt)
print("Reconstructed Nested Tensor:", reconstructed_nt)
result = nt - reconstructed_nt
print(result) # This line is not reached
```
```
Original Nested Tensor: NestedTensor(size=(2, j1), offsets=tensor([0, 3, 5]), contiguous=True)
Reconstructed Nested Tensor: NestedTensor(size=(2, j2), offsets=tensor([0, 3, 5]), contiguous=True)
---------------------------------------------------------------------------
RuntimeError Traceback (most recent call last)
Cell In[1], line 13
10 reconstructed_tensor_list = [nt_dense[i, :offset[i+1]-offset[i]] for i in range(len(offset)-1)]
11 # Create a new jagged nested tensor from the reconstructed list
12 reconstructed_nt = torch.nested.nested_tensor(reconstructed_tensor_list, layout=torch.jagged)
---> 13 result = nt - reconstructed_nt
14 print(result) # This line is not reached
File /path/to/your/env/lib/python3.x/site-packages/torch/nested/_internal/nested_tensor.py:353, in NestedTensor.__torch_function__(cls, func, types, args, kwargs)
351 pass
352 with torch._C.DisableTorchFunctionSubclass():
--> 353 return func(*args, **kwargs)
File /path/to/your/env/lib/python3.x/site-packages/torch/nested/_internal/nested_tensor.py:325, in NestedTensor.__torch_dispatch__(cls, func, types, args, kwargs)
323 fn = lookup_jagged(func, *args, **kwargs)
324 if fn is not None:
--> 325 return fn(*args, **kwargs)
327 # Poor man's redispatch for composite ops. This becomes relevant under inference
328 # mode, where disabling autograd key dispatch prevents decomposition.
329 dk = torch._C.DispatchKey.CompositeImplicitAutogradNestedTensor
File /path/to/your/env/lib/python3.x/site-packages/torch/nested/_internal/ops.py:307, in jagged_binary_pointwise(func, *args, **kwargs)
303 if raggedness_matches(a, b._size):
304 return NestedTensor(
305 func(a._values, b._values, *args[2:], **kwargs), **extract_kwargs(a)
306 )
--> 307 raise RuntimeError(mismatch_error_msg.format(func.__name__, a._size, b._size))
308 # either a is NT or b is NT at this point
309 a_is_nt = isinstance(a, NestedTensor)
RuntimeError: cannot call binary pointwise function sub.Tensor with inputs of shapes (2, j1) and (2, j2)
```
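For what it's worth, rebuilding the second tensor directly from the flat values and the original `offsets` tensor appears to keep the ragged dimensions compatible, so the subtraction goes through (a workaround sketch, not a fix for the underlying behavior):
```python
# Reuse the original offsets tensor so both nested tensors share the same ragged dim.
values = torch.cat(reconstructed_tensor_list)  # flat values, shape (5,)
reconstructed_nt2 = torch.nested.nested_tensor_from_jagged(values, offsets=offset)
print(nt - reconstructed_nt2)  # works: both report size (2, j1)
```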
### Versions
```
PyTorch version: 2.6.0+cu126
Is debug build: False
CUDA used to build PyTorch: 12.6
ROCM used to build PyTorch: N/A
OS: Ubuntu 22.04.5 LTS (x86_64)
GCC version: (Ubuntu 11.4.0-1ubuntu1~22.04) 11.4.0
Clang version: 14.0.0-1ubuntu1.1
CMake version: version 3.22.1
Libc version: glibc-2.35
Python version: 3.13.2 | packaged by conda-forge | (main, Feb 17 2025, 14:10:22) [GCC 13.3.0] (64-bit runtime)
Python platform: Linux-5.15.0-101-generic-x86_64-with-glibc2.35
Is CUDA available: True
CUDA runtime version: 12.6.85
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration: GPU 0: NVIDIA A100 80GB PCIe
Nvidia driver version: 535.104.05
cuDNN version: Probably one of the following:
/usr/lib/x86_64-linux-gnu/libcudnn.so.8.9.7
/usr/lib/x86_64-linux-gnu/libcudnn_adv_infer.so.8.9.7
/usr/lib/x86_64-linux-gnu/libcudnn_adv_train.so.8.9.7
/usr/lib/x86_64-linux-gnu/libcudnn_cnn_infer.so.8.9.7
/usr/lib/x86_64-linux-gnu/libcudnn_cnn_train.so.8.9.7
/usr/lib/x86_64-linux-gnu/libcudnn_ops_infer.so.8.9.7
/usr/lib/x86_64-linux-gnu/libcudnn_ops_train.so.8.9.7
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 48 bits physical, 48 bits virtual
Byte Order: Little Endian
CPU(s): 96
On-line CPU(s) list: 0-95
Vendor ID: AuthenticAMD
Model name: AMD EPYC 7643 48-Core Processor
CPU family: 25
Model: 1
Thread(s) per core: 1
Core(s) per socket: 48
Socket(s): 2
Stepping: 1
Frequency boost: enabled
CPU max MHz: 3640.9170
CPU min MHz: 1500.0000
BogoMIPS: 4599.97
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ht syscall nx mmxext fxsr_opt pdpe1gb rdtscp lm constant_tsc rep_good nopl nonstop_tsc cpuid extd_apicid aperfmperf rapl pni pclmulqdq monitor ssse3 fma cx16 pcid sse4_1 sse4_2 movbe popcnt aes xsave avx f16c rdrand lahf_lm cmp_legacy svm extapic cr8_legacy abm sse4a misalignsse 3dnowprefetch osvw ibs skinit wdt tce topoext perfctr_core perfctr_nb bpext perfctr_llc mwaitx cpb cat_l3 cdp_l3 invpcid_single hw_pstate ssbd mba ibrs ibpb stibp vmmcall fsgsbase bmi1 avx2 smep bmi2 erms invpcid cqm rdt_a rdseed adx smap clflushopt clwb sha_ni xsaveopt xsavec xgetbv1 xsaves cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local clzero irperf xsaveerptr rdpru wbnoinvd amd_ppin arat npt lbrv svm_lock nrip_save tsc_scale vmcb_clean flushbyasid decodeassists pausefilter pfthreshold v_vmsave_vmload vgif v_spec_ctrl umip pku ospke vaes vpclmulqdq rdpid overflow_recov succor smca fsrm
Virtualization: AMD-V
L1d cache: 3 MiB (96 instances)
L1i cache: 3 MiB (96 instances)
L2 cache: 48 MiB (96 instances)
L3 cache: 512 MiB (16 instances)
NUMA node(s): 2
NUMA node0 CPU(s): 0-47
NUMA node1 CPU(s): 48-95
Vulnerability Gather data sampling: Not affected
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Not affected
Vulnerability Retbleed: Not affected
Vulnerability Spec rstack overflow: Mitigation; safe RET
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl and seccomp
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Retpolines, IBPB conditional, IBRS_FW, STIBP disabled, RSB filling, PBRSB-eIBRS Not affected
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Versions of relevant libraries:
[pip3] numpy==2.2.4
[pip3] nvidia-cublas-cu12==12.6.4.1
[pip3] nvidia-cuda-cupti-cu12==12.6.80
[pip3] nvidia-cuda-nvrtc-cu12==12.6.77
[pip3] nvidia-cuda-runtime-cu12==12.6.77
[pip3] nvidia-cudnn-cu12==9.5.1.17
[pip3] nvidia-cufft-cu12==11.3.0.4
[pip3] nvidia-curand-cu12==10.3.7.77
[pip3] nvidia-cusolver-cu12==11.7.1.2
[pip3] nvidia-cusparse-cu12==12.5.4.2
[pip3] nvidia-cusparselt-cu12==0.6.3
[pip3] nvidia-nccl-cu12==2.21.5
[pip3] nvidia-nvjitlink-cu12==12.6.85
[pip3] nvidia-nvtx-cu12==12.6.77
[pip3] torch==2.6.0+cu126
[pip3] torchaudio==2.6.0+cu126
[pip3] torchode==1.0.0
[pip3] torchtyping==0.1.5
[pip3] torchvision==0.21.0+cu126
[pip3] triton==3.2.0
[conda] cuda-cudart 12.6.77 0 nvidia
[conda] cuda-cudart-dev 12.6.77 0 nvidia
[conda] cuda-cudart-dev_linux-64 12.6.77 0 nvidia
[conda] cuda-cudart-static 12.6.77 0 nvidia
[conda] cuda-cudart-static_linux-64 12.6.77 0 nvidia
[conda] cuda-cudart_linux-64 12.6.77 0 nvidia
[conda] cuda-cupti 12.6.80 0 nvidia
[conda] cuda-cupti-dev 12.6.80 0 nvidia
[conda] cuda-libraries 12.6.3 0 nvidia
[conda] cuda-libraries-dev 12.6.3 0 nvidia
[conda] cuda-nvrtc 12.6.85 0 nvidia
[conda] cuda-nvrtc-dev 12.6.85 0 nvidia
[conda] cuda-nvtx 12.6.77 0 nvidia
[conda] cuda-opencl 12.6.77 0 nvidia
[conda] cuda-opencl-dev 12.6.77 0 nvidia
[conda] cuda-runtime 12.6.3 0 nvidia
[conda] libcublas 12.6.4.1 0 nvidia
[conda] libcublas-dev 12.6.4.1 0 nvidia
[conda] libcufft 11.3.0.4 0 nvidia
[conda] libcufft-dev 11.3.0.4 0 nvidia
[conda] libcurand 10.3.7.77 0 nvidia
[conda] libcurand-dev 10.3.7.77 0 nvidia
[conda] libcusolver 11.7.1.2 0 nvidia
[conda] libcusolver-dev 11.7.1.2 0 nvidia
[conda] libcusparse 12.5.4.2 0 nvidia
[conda] libcusparse-dev 12.5.4.2 0 nvidia
[conda] libnvjitlink 12.6.85 0 nvidia
[conda] libnvjitlink-dev 12.6.85 0 nvidia
[conda] numpy 2.2.4 pypi_0 pypi
[conda] nvidia-cublas-cu12 12.6.4.1 pypi_0 pypi
[conda] nvidia-cuda-cupti-cu12 12.6.80 pypi_0 pypi
[conda] nvidia-cuda-nvrtc-cu12 12.6.77 pypi_0 pypi
[conda] nvidia-cuda-runtime-cu12 12.6.77 pypi_0 pypi
[conda] nvidia-cudnn-cu12 9.5.1.17 pypi_0 pypi
[conda] nvidia-cufft-cu12 11.3.0.4 pypi_0 pypi
[conda] nvidia-curand-cu12 10.3.7.77 pypi_0 pypi
[conda] nvidia-cusolver-cu12 11.7.1.2 pypi_0 pypi
[conda] nvidia-cusparse-cu12 12.5.4.2 pypi_0 pypi
[conda] nvidia-cusparselt-cu12 0.6.3 pypi_0 pypi
[conda] nvidia-nccl-cu12 2.21.5 pypi_0 pypi
[conda] nvidia-nvjitlink-cu12 12.6.85 pypi_0 pypi
[conda] nvidia-nvtx-cu12 12.6.77 pypi_0 pypi
[conda] torch 2.6.0+cu126 pypi_0 pypi
[conda] torchaudio 2.6.0+cu126 pypi_0 pypi
[conda] torchode 1.0.0 pypi_0 pypi
[conda] torchtyping 0.1.5 pypi_0 pypi
[conda] torchvision 0.21.0+cu126 pypi_0 pypi
[conda] triton 3.2.0 pypi_0 pypi
```
cc @svekars @sekyondaMeta @AlannaBurke @cpuhrsch @jbschlosser @bhosmer @drisspg @soulitzer @davidberard98 @YuqingJ
| true
|
2,957,731,885
|
[Release/2.7] Update torch-xpu-ops commit pin (For CI test)
|
xytintel
|
closed
|
[
"open source",
"topic: not user facing",
"ciflow/xpu",
"release notes: xpu"
] | 6
|
CONTRIBUTOR
|
Update the torch-xpu-ops commit to [b18528c455d0297b89b255e93b86ff668069459f](https://github.com/intel/torch-xpu-ops/commit/b18528c455d0297b89b255e93b86ff668069459f), which includes:
- Bugfix for a performance issue related to the GRF configuration.
| true
|
2,957,723,095
|
ROCm: Add trailing comma for consistency in gfx architecture list
|
jagadish-amd
|
closed
|
[
"module: rocm",
"open source",
"Merged",
"ciflow/trunk",
"topic: not user facing"
] | 6
|
CONTRIBUTOR
|
Adding trailing comma for consistency.
cc @jeffdaily @sunway513 @jithunnair-amd @pruthvistony @ROCmSupport @dllehr-amd @jataylo @hongxiayang @naromero77amd
| true
|
2,957,720,209
|
[ROCm] change preferred blas lib defaults
|
pytorchbot
|
closed
|
[
"module: rocm",
"open source",
"ciflow/rocm"
] | 1
|
COLLABORATOR
|
Fixes #148883
Fixes #150155
Also adds `at::BlasBackend::Default`. Instinct cards prefer hipBLASLt; everything else prefers rocBLAS.
cc @jeffdaily @sunway513 @jithunnair-amd @pruthvistony @ROCmSupport @dllehr-amd @jataylo @hongxiayang @naromero77amd
| true
|
2,957,712,319
|
dist.all_reduce should check if all tensors are same data type when using nccl
|
hyleepp
|
closed
|
[
"oncall: distributed"
] | 2
|
NONE
|
### 🐛 Describe the bug
Hello, I find that when I use NCCL as the backend, if the data types of a tensor on different devices do not match, then all_reduce gives a very strange value (it looks like overflow).
```python
import torch
import torch.distributed as dist
import torch.multiprocessing as mp
import os
def setup(rank, world_size):
    os.environ['MASTER_ADDR'] = 'localhost'
    os.environ['MASTER_PORT'] = '12355'
    dist.init_process_group(backend='nccl', rank=rank, world_size=world_size)

def cleanup():
    dist.destroy_process_group()

def all_reduce_demo(rank, world_size):
    setup(rank, world_size)
    torch.cuda.set_device(rank)
    dtype = torch.float32 if rank != 0 else torch.bfloat16
    # dtype = torch.float16
    tensor = torch.tensor([1e-1], dtype=dtype).cuda()
    print(f"Rank {rank} before all_reduce: {tensor}")
    dist.all_reduce(tensor, op=dist.ReduceOp.SUM)
    print(f"Rank {rank} after all_reduce: {tensor}")
    cleanup()

if __name__ == "__main__":
    world_size = 3
    mp.spawn(all_reduce_demo, args=(world_size,), nprocs=world_size, join=True)
```
Here we get
<img width="783" alt="Image" src="https://github.com/user-attachments/assets/29cedb22-816a-4b57-8d29-af86fbfc6cdf" />
And if we switch from nccl to gloo, it throws an error as expected
<img width="774" alt="Image" src="https://github.com/user-attachments/assets/1eecac9a-43b7-448f-af91-4394cf099072" />
### Versions
Collecting environment information...
PyTorch version: 2.5.1+cu124
Is debug build: False
CUDA used to build PyTorch: 12.4
ROCM used to build PyTorch: N/A
OS: Ubuntu 20.04.5 LTS (x86_64)
GCC version: (GCC) 12.2.0
Clang version: 3.8.0 (tags/RELEASE_380/final)
CMake version: Could not collect
Libc version: glibc-2.31
Python version: 3.10.11 (main, Apr 5 2023, 14:15:10) [GCC 9.4.0] (64-bit runtime)
Python platform: Linux-5.10.0-1.0.0.26-x86_64-with-glibc2.31
Is CUDA available: True
CUDA runtime version: 12.4.131
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration:
GPU 0: NVIDIA A800-SXM4-80GB
GPU 1: NVIDIA A800-SXM4-80GB
GPU 2: NVIDIA A800-SXM4-80GB
GPU 3: NVIDIA A800-SXM4-80GB
GPU 4: NVIDIA A800-SXM4-80GB
GPU 5: NVIDIA A800-SXM4-80GB
GPU 6: NVIDIA A800-SXM4-80GB
GPU 7: NVIDIA A800-SXM4-80GB
Nvidia driver version: 525.125.06
cuDNN version: Probably one of the following:
/usr/lib/x86_64-linux-gnu/libcudnn.so.9.0.0
/usr/lib/x86_64-linux-gnu/libcudnn_adv.so.9.0.0
/usr/lib/x86_64-linux-gnu/libcudnn_cnn.so.9.0.0
/usr/lib/x86_64-linux-gnu/libcudnn_engines_precompiled.so.9.0.0
/usr/lib/x86_64-linux-gnu/libcudnn_engines_runtime_compiled.so.9.0.0
/usr/lib/x86_64-linux-gnu/libcudnn_graph.so.9.0.0
/usr/lib/x86_64-linux-gnu/libcudnn_heuristic.so.9.0.0
/usr/lib/x86_64-linux-gnu/libcudnn_ops.so.9.0.0
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Byte Order: Little Endian
Address sizes: 46 bits physical, 57 bits virtual
CPU(s): 128
On-line CPU(s) list: 0-127
Thread(s) per core: 2
Core(s) per socket: 32
Socket(s): 2
NUMA node(s): 2
Vendor ID: GenuineIntel
CPU family: 6
Model: 106
Model name: Intel(R) Xeon(R) Platinum 8350C CPU @ 2.60GHz
Stepping: 6
Frequency boost: enabled
CPU MHz: 3100.000
CPU max MHz: 3500.0000
CPU min MHz: 800.0000
BogoMIPS: 5200.00
Virtualization: VT-x
L1d cache: 3 MiB
L1i cache: 2 MiB
L2 cache: 80 MiB
L3 cache: 96 MiB
NUMA node0 CPU(s): 0-31,64-95
NUMA node1 CPU(s): 32-63,96-127
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Spec store bypass: Vulnerable
Vulnerability Spectre v1: Vulnerable: __user pointer sanitization and usercopy barriers only; no swapgs barriers
Vulnerability Spectre v2: Vulnerable, IBPB: disabled, STIBP: disabled
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc art arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc cpuid aperfmperf pni pclmulqdq dtes64 monitor ds_cpl vmx smx est tm2 ssse3 sdbg fma cx16 xtpr pdcm pcid dca sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm 3dnowprefetch cpuid_fault epb cat_l3 invpcid_single intel_ppin ssbd mba ibrs ibpb stibp ibrs_enhanced tpr_shadow vnmi flexpriority ept vpid ept_ad fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid cqm rdt_a avx512f avx512dq rdseed adx smap avx512ifma clflushopt clwb intel_pt avx512cd sha_ni avx512bw avx512vl xsaveopt xsavec xgetbv1 xsaves cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local split_lock_detect wbnoinvd dtherm ida arat pln pts avx512vbmi umip pku avx512_vbmi2 gfni vaes vpclmulqdq avx512_vnni avx512_bitalg tme avx512_vpopcntdq rdpid fsrm md_clear pconfig flush_l1d arch_capabilities
Versions of relevant libraries:
[pip3.10] numpy 1.24.3
[pip3.10] nvidia-cublas-cu12 12.4.5.8
[pip3.10] nvidia-cuda-cupti-cu12 12.4.127
[pip3.10] nvidia-cuda-nvrtc-cu12 12.4.127
[pip3.10] nvidia-cuda-runtime-cu12 12.4.127
[pip3.10] nvidia-cudnn-cu12 9.1.0.70
[pip3.10] nvidia-cufft-cu12 11.2.1.3
[pip3.10] nvidia-curand-cu12 10.3.5.147
[pip3.10] nvidia-cusolver-cu12 11.6.1.9
[pip3.10] nvidia-cusparse-cu12 12.3.1.170
[pip3.10] nvidia-nccl-cu12 2.21.5
[pip3.10] nvidia-nvjitlink-cu12 12.4.127
[pip3.10] nvidia-nvtx-cu12 12.4.127
[pip3.10] torch 2.5.1
[pip3.10] torchaudio 2.5.1
[pip3.10] torchvision 0.20.1
[pip3.10] transformer-engine-torch 1.13.0
[pip3.10] triton 3.1.0
[conda] Could not collect
cc @H-Huang @awgu @kwen2501 @wanchaol @fegin @fduwjj @wz337 @wconstab @d4l3k @c-p-i-o
| true
|
2,957,707,161
|
[MPSInductor] Specify `max_total_threads_per_threadgroup`
|
malfet
|
closed
|
[
"Merged",
"topic: bug fixes",
"release notes: mps",
"ciflow/mps",
"module: inductor",
"ciflow/inductor"
] | 3
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #150247
When generating a reduction kernel, specify `max_total_threads_per_threadgroup`; otherwise the compiler can unroll loops so much that the kernel cannot be launched with the intended threadgroup size.
Extend `c10::metal::max` to accept different dtypes
Together this fixes `test_large_broadcast_reduction`
TODO:
- Explore different threadgroup_sizes for best perf
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
| true
|
2,957,707,117
|
[BE] Fix signed/unsigned comparison warning
|
malfet
|
closed
|
[
"Merged",
"topic: not user facing",
"release notes: mps"
] | 3
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #150247
* __->__ #150246
One will see them only if compilation fails, but still
| true
|
2,957,640,076
|
Add cmake variable USE_ROCM_CK
|
trixirt
|
open
|
[
"module: rocm",
"open source"
] | 6
|
NONE
|
Adds a CMake variable to control the use of the ROCm Composable Kernel (CK).
CK is not compatible with all rocBLAS GPUs, so the user must explicitly opt in to CK.
Fixes #150187
cc @jeffdaily @sunway513 @jithunnair-amd @pruthvistony @ROCmSupport @dllehr-amd @jataylo @hongxiayang @naromero77amd
| true
|
2,957,575,223
|
Update type of `create_block_mask` to more accurately reflect things
|
Chillee
|
closed
|
[
"open source",
"Merged",
"ciflow/trunk",
"topic: not user facing",
"suppress-api-compatibility-check",
"suppress-bc-linter"
] | 4
|
COLLABORATOR
|
Fixes some mypy issues
| true
|
2,957,569,926
|
if blaslt fails, fall back to blas
|
jeffdaily
|
closed
|
[
"open source"
] | 2
|
COLLABORATOR
|
Fixes #150016.
This is implemented for both cublaslt and hipblaslt. On failure, gemm_and_bias falls back to the unfused path. On failure, an lt gemm falls back to a plain gemm even if the gemm preference is set to lt.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/150147
Approved by: https://github.com/malfet
| true
|
2,957,503,285
|
Feature Request: Enhance Nested Tensor Operations for Direct RoPE Application
|
xsgxlz
|
open
|
[
"triaged",
"module: nestedtensor"
] | 2
|
NONE
|
### 🚀 The feature, motivation and pitch
**Feature Proposal:**
We propose enhancing the operations available for `torch.NestedTensor` to facilitate the direct and efficient application of position-dependent transformations, specifically Rotary Positional Embeddings (RoPE), without extra unnecessary memory operations.
This could involve a better broadcasting mechanism and more supported `torch.NestedTensor` operations. Applying RoPE to nested tensors is currently not straightforward because each sequence has a different length, so you have to repeat the RoPE cache `batch_size` times to apply it properly.
As all other parts of a standard transformer can be elegantly implemented with `torch.NestedTensor`, we believe it's time to include RoPE.
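For context, the workaround used today operates on the flat values and offsets directly, roughly as below (a sketch assuming a jagged-layout `nt` of shape `(B, j, dim)` and precomputed `rope_cos`/`rope_sin` caches of shape `(max_len, dim)`; all names are illustrative):
```python
# Apply RoPE on the flat values of a jagged nested tensor.
offsets = nt.offsets()
lengths = (offsets[1:] - offsets[:-1]).tolist()
# Per-token position within each sequence, concatenated over the batch.
positions = torch.cat([torch.arange(n, device=nt.device) for n in lengths])
cos, sin = rope_cos[positions], rope_sin[positions]  # (total_tokens, dim)

values = nt.values()                                  # (total_tokens, dim)
x1, x2 = values.chunk(2, dim=-1)
rotated = torch.cat([-x2, x1], dim=-1)                # "rotate half"
out = torch.nested.nested_tensor_from_jagged(values * cos + rotated * sin, offsets=offsets)
```
A broadcasting rule that lets a dense `(max_len, dim)` cache apply per sequence would remove the explicit `positions` gather above.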
### Alternatives
If heavily improved `torch.NestedTensor`'s API is infeasible, we can implement a RoPE Module that supports both dense and nested tensors.
### Additional context
_No response_
cc @cpuhrsch @jbschlosser @bhosmer @drisspg @soulitzer @davidberard98 @YuqingJ
| true
|
2,957,500,581
|
[decomps] Add decomposition for linalg_vector_norm
|
SS-JIA
|
open
|
[
"ciflow/inductor"
] | 8
|
CONTRIBUTOR
|
Summary:
## Context
`linalg_vector_norm` is required to run Vision Transformer models in ExecuTorch. Currently, model export + inference fails because ExecuTorch doesn't have a kernel for `linalg_vector_norm`.
However, there is a decomposition for the operator. Add the decomposition to the core decomp table to unblock ExecuTorch.
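For reference, the decomposition expresses the norm in terms of primitives ExecuTorch already supports; conceptually it is the textbook p-norm (a simplified sketch for finite, nonzero `ord`, not the exact reference decomposition):
```python
import torch

def vector_norm_decomp(x: torch.Tensor, ord: float = 2.0, dim=None, keepdim: bool = False) -> torch.Tensor:
    # sum(|x|^ord) over dim, then the ord-th root
    return torch.sum(torch.abs(x) ** ord, dim=dim, keepdim=keepdim) ** (1.0 / ord)
```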
| true
|
2,957,457,596
|
Fix NVTX functions compatibility with torch.compile(fullgraph=True)
|
zsnoob
|
open
|
[
"triaged",
"open source",
"release notes: cuda",
"module: dynamo",
"release notes: dynamo"
] | 4
|
NONE
|
## Problem Solved
This PR resolves the incompatibility between NVTX functions and torch._dynamo. When attempting to use NVTX profiling tools within code compiled with `torch.compile(fullgraph=True)`, the following error occurs:
```
torch._dynamo.exc.Unsupported: torch.* op returned non-Tensor int call_function <function range_push at 0x...>
```
The root cause is that `torch._dynamo` requires all function calls within a compiled graph to return tensor types, but NVTX functions return integers, objects, or None.
## Changes
- Added a global toggle system to enable/disable tensor returns for NVTX functions
- Implemented a decorator to handle type conversion automatically
- Enhanced all NVTX functions to support tensor return mode
- Added clear documentation and type annotations
- Maintained backward compatibility with existing code
## Impact on Existing Functionality
This change has **zero impact** on existing functionality when used normally. The default behavior remains unchanged, and all functions continue to return their original types.
Only when explicitly enabled via `torch.cuda.nvtx.enable_tensor_returns()` will the functions return tensor types instead. This opt-in approach ensures no disruption to existing code.
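Conceptually, the opt-in conversion decorator looks like this (an illustrative sketch, not the exact implementation in this PR):
```python
import functools
import torch

_TENSOR_RETURNS = False  # toggled by enable_tensor_returns()/disable_tensor_returns()

def _maybe_return_tensor(fn):
    @functools.wraps(fn)
    def wrapper(*args, **kwargs):
        result = fn(*args, **kwargs)
        if not _TENSOR_RETURNS:
            return result              # default: original int/None/object return
        if isinstance(result, int):
            return torch.tensor(result)
        return torch.tensor(0)         # None / handle objects become a dummy tensor
    return wrapper
```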
## Testing
- Added comprehensive unit tests that verify:
- Default behavior is preserved
- Tensor return mode correctly converts all return types to tensors
- Switching between modes works as expected
- Example when use `torch.compile(fullgraph=True)' and non-tensor-return function
## Usage Example
```python
# Enable tensor returns for dynamo compatibility
torch.cuda.nvtx.enable_tensor_returns()
# Use NVTX functions in dynamo-compiled code
# All functions now return tensors
# with torch.compile context
with torch.cuda.nvtx.range("my_range"):
    pass
# Disable tensor returns to restore original behavior
torch.cuda.nvtx.disable_tensor_returns()
```
## Within Similar Issues
Many issues have been reported regarding non-tensor returns when using `torch.compile(fullgraph=True)`. Many other projects (e.g. vLLM) use `fullgraph=True` transparently but get blocked by this compatibility issue. This compatibility problem affects numerous libraries and tools that integrate with PyTorch's compilation system. Based on the existing implementation, a more robust decorator could be designed that:
* Handles conversion transparently: Converts all return types (int, object, None) to appropriate tensor representations without manual intervention.
* Maintains type consistency: Uses PyTorch's native tensor metadata to ensure type information is preserved.
* Provides extensibility: Allows third-party libraries to register their non-tensor returning functions to receive the same treatment.
https://github.com/pytorch/pytorch/issues/123041
https://github.com/pytorch/pytorch/issues/122692
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @chauhang @amjames @janeyx99 @eqy @syed-ahmed @IlyasMoutawwakil @ezyang @YangQun1
| true
|
2,957,455,316
|
[state dict] add strict check when there are more keys in global than local state
|
mori360
|
open
|
[
"oncall: distributed",
"release notes: distributed (checkpoint)"
] | 4
|
CONTRIBUTOR
|
Fixes https://github.com/pytorch/pytorch/issues/149516
In broadcast_from_rank0, there is currently no strict check when loading the global state dict into the local one.
If there are any keys that are in the global state dict but not in the local one, they are added to the local state_dict regardless of whether strict is True or False.
Here is the logic after this PR:
Global state dict
rank0: {"0.weight":..., "1.weight":...}
rank1: None
Local state dict
rank0: {"0.weight":...}
rank1: {"0.weight":...}
-> set_model_state_dict(options=StateDictOptions(broadcast_from_rank0=True)) ->
if strict is True
loaded model state dict:
rank0: {"0.weight":...}
rank1: {"0.weight":...}
if strict is False
loaded model state dict:
rank0: {"0.weight":..., "1.weight":...}
rank1: {"0.weight":...}
cc @H-Huang @awgu @kwen2501 @wanchaol @fegin @fduwjj @wz337 @wconstab @d4l3k @c-p-i-o
| true
|
2,957,426,301
|
remove guard _size_oblivious from expand and make it more resilient to data dependent errors.
|
laithsakka
|
closed
|
[
"module: dynamo",
"ciflow/inductor"
] | 3
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #150238
* #148809
When we do not know that requested_length == x, we do not have to fail; we can turn it into a runtime assert with sym_or.
Addresses #150235 and https://github.com/pytorch/pytorch/issues/128645
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @chauhang @amjames
| true
|
2,957,408,362
|
[MPS] Fix dot/mm for conj_tensors
|
pytorchbot
|
closed
|
[
"open source",
"release notes: mps",
"ciflow/mps"
] | 1
|
COLLABORATOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #150157
- Distinguish between conjugated/non_conjugated inputs by appending conjugation to the operator key
- For matmul or dot, add `conjugateWithTensor:name:` calls before running the op
- Enable testing for conjugated ops by passing `include_conjugated_inputs` to opinfo
- Filter `include_conjugated_inputs` argument from `sample_inputs_window` (probably should have landed as separate PR)
- Preserve conj property when gathering the views, that fixes `cov` operator
Fixes https://github.com/pytorch/pytorch/issues/148156
| true
|
2,957,404,986
|
[dynamic shapes] rewrite expand with guard_or_false
|
pianpwk
|
closed
|
[
"Merged",
"ciflow/trunk",
"module: dynamo",
"ciflow/inductor",
"release notes: export"
] | 4
|
CONTRIBUTOR
|
Rewrites the expand decomposition to avoid unbacked errors, assuming the general path where `input shape == output shape or input shape == 1`.
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @chauhang @amjames
| true
|
2,957,383,626
|
Fix _Waitcounter decorator and dd backward pass wait counter
|
ppanchalia
|
closed
|
[
"fb-exported",
"Merged",
"ciflow/trunk",
"topic: not user facing",
"module: dynamo",
"ciflow/inductor"
] | 13
|
CONTRIBUTOR
|
Summary:
This will log a wait counter for backward compile and fixes weirdness with nested context managers.
Since the old wait counters added through dynamo_timed were never created with the nesting issue, I am also changing the key nomenclature from `pytorch.dynamo_timed` to `pytorch.wait_counter`. We want to use the same nomenclature everywhere so the keys are easy to find.
Reviewed By: jamesjwu
Differential Revision: D72032055
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @chauhang @amjames
| true
|
2,957,377,947
|
[CI] Fix log artifact not containing test logs attempt 2
|
clee2000
|
closed
|
[
"Merged",
"topic: not user facing"
] | 3
|
CONTRIBUTOR
|
Fixes #ISSUE_NUMBER
Take two of https://github.com/pytorch/pytorch/pull/149577 since it didn't work
| true
|
2,957,373,430
|
[aten] 8 bytes aligned vector loads for bf16 and fp16 dtypes in torch.cat
|
zhaozhul
|
closed
|
[
"Merged",
"ciflow/trunk",
"release notes: cuda",
"topic: performance"
] | 8
|
CONTRIBUTOR
|
Enable aligned vector loading for 2-byte datatypes in torch.cat. Specifically:
1. reduce the vector length to 8 bytes for 2-byte types (fp16, bf16, etc.)
2. enable it through a conditional template
Why 8-byte vector loading was chosen for fp16 and bf16:
a 16-byte load results in heavier register overhead (i.e. 4 registers per load for fp32 -> 8 registers per load for fp16). Therefore, to keep the benefits of vectorized loading without that register pressure, we reduced ALIGNED_VEC_LOAD_BYTES to 8 for fp16 and bf16.
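Back-of-the-envelope register math behind that choice (a simplification that assumes each unpacked element ends up in one 32-bit register):
```python
def regs_per_vector_load(vec_bytes: int, elem_bytes: int) -> int:
    # One 32-bit register per unpacked element (simplifying assumption).
    return vec_bytes // elem_bytes


print(regs_per_vector_load(16, 4))  # fp32, 16-byte load -> 4 registers
print(regs_per_vector_load(16, 2))  # fp16, 16-byte load -> 8 registers
print(regs_per_vector_load(8, 2))   # fp16,  8-byte load -> 4 registers
```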
### perf testing:
before:
```
torch-cat-D1-30108-D2-624-D3-772-dtype-torch.float32:
B pt_eager copy
0 100.0 0.022621 0.036162
1 1000.0 0.133616 0.207051
2 10000.0 1.326848 1.848768
3 20000.0 2.744544 3.692128
torch-cat-D1-30108-D2-624-D3-772-dtype-torch.bfloat16:
B pt_eager copy
0 100.0 0.022434 0.035477
1 1000.0 0.140608 0.144518
2 10000.0 1.303792 1.229584
3 20000.0 2.668288 2.436160
```
after:
```
torch-cat-D1-30108-D2-624-D3-772-dtype-torch.float32:
B pt_eager copy
0 100.0 0.022608 0.036328
1 1000.0 0.133861 0.207399
2 10000.0 1.325120 1.847136
3 20000.0 2.726528 3.693184
torch-cat-D1-30108-D2-624-D3-772-dtype-torch.bfloat16:
B pt_eager copy
0 100.0 0.019942 0.035482
1 1000.0 0.084858 0.144544
2 10000.0 0.924384 1.230672
3 20000.0 1.944448 2.436480
```
### bw analysis:
Bandwidth for fp16/bf16 increased by 40%-50% for large tensors.
before:
```
Bandwidth (GB/s) for ((16384, 16384), 1) int8;fp16;fp32;int32;fp64;long|869.87|1382.74|1956.46|1952.73|1969.03|1963.66
Bandwidth (GB/s) for ((4194304,), 0) int8;fp16;fp32;int32;fp64;long|568.43|926.53|1589.20|1567.52|1771.54|1783.68
Bandwidth (GB/s) for ((16777216,), 0) int8;fp16;fp32;int32;fp64;long|752.07|1269.50|1894.86|1900.85|1954.10|1955.08
Bandwidth (GB/s) for ((33554432,), 0) int8;fp16;fp32;int32;fp64;long|807.08|1354.69|1960.48|1962.45|1972.73|1973.85
Bandwidth (GB/s) for ((134217728,), 0) int8;fp16;fp32;int32;fp64;long|864.02|1398.02|1963.43|1955.32|1963.37|1969.96
```
after:
```
Bandwidth (GB/s) for ((16384, 16384), 1) int8;fp16;fp32;int32;fp64;long|873.08|1892.16|1954.35|1962.51|1962.03|1965.98
Bandwidth (GB/s) for ((4194304,), 0) int8;fp16;fp32;int32;fp64;long|575.13|1242.45|1576.37|1571.30|1769.94|1790.22
Bandwidth (GB/s) for ((16777216,), 0) int8;fp16;fp32;int32;fp64;long|742.92|1734.57|1887.99|1897.62|1940.99|1959.25
Bandwidth (GB/s) for ((33554432,), 0) int8;fp16;fp32;int32;fp64;long|802.60|1865.45|1952.64|1947.53|1974.47|1973.48
Bandwidth (GB/s) for ((134217728,), 0) int8;fp16;fp32;int32;fp64;long|865.32|1939.07|1965.72|1963.25|1969.06|1968.72
```
### Perf testing code:
```
# pyre-strict
from typing import List, Optional, Tuple

import click
import pandas as pd
import torch

# @manual=//triton:triton
import triton


# CUDA_VISIBLE_DEVICEs=7 buck2 run @mode/opt //scripts/zhaozhu:cat_bench
@click.command()
@click.option("--data-type", type=str, default="bf16")
@click.option("--return-result", type=bool, default=False)
def main(
    data_type: str,
    return_result: bool,
) -> Optional[Tuple[List[triton.testing.Benchmark], List[pd.DataFrame]]]:
    torch.backends.cudnn.allow_tf32 = True
    torch.backends.cuda.matmul.allow_tf32 = True

    if data_type == "fp32":
        dtype = torch.float32
    elif data_type == "fp16":
        dtype = torch.float16
    elif data_type == "bf16":
        dtype = torch.bfloat16
    else:
        raise ValueError(f"Unsupported data type: {data_type}.")

    D1 = int(torch.randint(low=10000, high=50000, size=(1,)).item())
    D2 = int(torch.randint(low=100, high=1000, size=(1,)).item())
    D3 = int(torch.randint(low=500, high=1000, size=(1,)).item())

    configs: List[triton.testing.Benchmark] = [
        triton.testing.Benchmark(
            x_names=["B"],
            x_vals=[100, 1000, 10000, 20000],
            line_arg="provider",
            line_vals=["pt_eager", "copy"],
            line_names=["pt_eager", "copy"],
            styles=[("blue", "-"), ("green", "-"), ("red", "-")],
            ylabel="ms",
            plot_name=f"torch-cat-D1-{D1}-D2-{D2}-D3-{D3}-dtype-{dtype}",
            args={
                "D1": D1,
                "D2": D2,
                "D3": D3,
                "dtype": dtype,
            },
        )
    ]

    @triton.testing.perf_report(configs)
    def bench_cat(
        B: int,
        D1: int,
        D2: int,
        D3: int,
        dtype: torch.dtype,
        provider: str,
    ) -> float:
        warmup = 10
        rep = 3
        tensors = []
        a = torch.empty(
            # (B, 30108),
            (B, D1),
            dtype=dtype,
            device=torch.device("cuda"),
        ).uniform_(-1.0, 1.0)
        b = torch.empty(
            # (B, 624),
            (B, D2),
            dtype=dtype,
            device=torch.device("cuda"),
        ).uniform_(-1.0, 1.0)
        c = torch.empty(
            # (B, 772),
            (B, D3),
            dtype=dtype,
            device=torch.device("cuda"),
        ).uniform_(-1.0, 1.0)
        tensors = [a, b, c]
        total_cols: int = int(a.shape[1] + b.shape[1] + c.shape[1])

        def torch_copy(
            tensors: List[torch.Tensor], is_inplace: bool = True
        ) -> torch.Tensor:
            f = torch.zeros([B, total_cols], dtype=dtype, device=torch.device("cuda"))
            col_idx = 0
            for t in tensors:
                temp = f[:, col_idx : col_idx + t.shape[1]]
                if is_inplace:
                    temp.copy_(t)
                else:
                    f[:, col_idx : col_idx + t.shape[1]] = t
                col_idx += t.shape[1]
            return f

        def torch_cat(tensors: List[torch.Tensor]) -> torch.Tensor:
            return torch.cat(tensors, dim=1)

        ref = torch_cat(tensors)
        real = torch_copy(tensors, is_inplace=False)
        torch.testing.assert_allclose(ref, real)

        if provider == "pt_eager":
            fn = lambda: torch_cat(tensors)  # noqa E731
            ms = triton.testing.do_bench(fn, warmup=warmup, rep=rep)
            return ms
        elif provider == "stack":

            def torch_stack(tensors: List[torch.Tensor]) -> torch.Tensor:
                return torch.stack(tensors, dim=1).view(-1, total_cols)

            fn = lambda: torch_stack(tensors)
            ms = triton.testing.do_bench(fn, warmup=warmup, rep=rep)
            return ms
        elif provider == "copy":
            fn = lambda: torch_copy(tensors)
            ms = triton.testing.do_bench(fn, warmup=warmup, rep=rep)
            return ms
        else:
            raise ValueError(f"unsupported provider: {provider}")

    df = bench_cat.run(print_data=True, return_df=return_result)
    if return_result:
        return configs, df


if __name__ == "__main__":
    main()
```
and bw analysis code is from: https://github.com/pytorch/pytorch/pull/102815?fbclid=IwZXh0bgNhZW0CMTEAAR1Rwclp_O1fknl1Litpm9GeY0ZZZovdCv8_kQfGf6Zy8LaoP9JhO0ZsutM_aem_BPCZEZda5OOMnzI9Mrlapg#issue-1737409146
| true
|