| id (int64) | title (string) | user (string) | state (string) | labels (list) | comments (int64) | author_association (string) | body (string) | is_title (bool) |
|---|---|---|---|---|---|---|---|---|
2,809,867,383 | Additional operators in operator benchmark | apakbin | closed | [
"open source",
"Merged",
"ciflow/trunk",
"release notes: benchmark"
] | 10 | CONTRIBUTOR | The list of added operators:
add_, addcmul, arange, baddbmm, bmm, clamp, div, div_, gelu, index_add, logical_and, mul_, sub_, topk, where
This pull request is identical to a previous one, https://github.com/pytorch/pytorch/pull/145121, which was inadvertently deleted while merging.
| true |
2,809,858,479 | Local config flags for torch.compile | zou3519 | open | [
"triaged",
"oncall: pt2"
] | 1 | CONTRIBUTOR | internal x-post: https://fb.workplace.com/groups/1075192433118967/posts/1589701705001368/
Pitch: a new decorator to allow user code to change configs midway through tracing.
cc @chauhang @penguinwu | true |
2,809,805,707 | [torchbench] Inductor freezing bfloat16 conv folding needs high tolerance | IvanKobzarev | closed | [
"Merged",
"ciflow/trunk",
"topic: not user facing",
"module: inductor",
"module: dynamo",
"ciflow/inductor"
] | 3 | CONTRIBUTOR | Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #145623
Issue:
https://github.com/pytorch/pytorch/issues/144888
Torchbench for the timm lcnet_050 model fails the accuracy check with `--freezing` `--inference` `--bfloat16`:
`res_error==0.12`
With inductor convolution constant folding turned off: `res_error==0.016`
`float16 error ~ 0.00669`
`float16 without conv folding ~ 0.0018`
Convolution folding increases the error by almost an order of magnitude.
I think we should revisit this and try to improve the accuracy of conv folding, e.g. by doing conv folding at compilation time in float64.
For now, I am adding counters to identify whether convolution folding happened; when bfloat16 and conv folding are both in play, the tolerance multiplier is raised to the max level (10) to pass the accuracy test.
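As a quick check on the "almost an order of magnitude" claim, the degradation ratios can be computed directly from the numbers quoted above:

```python
# Measured errors quoted above (bfloat16 and float16 accuracy runs).
bf16_with_folding = 0.12
bf16_without_folding = 0.016
fp16_with_folding = 0.00669
fp16_without_folding = 0.0018

# Conv constant folding makes bfloat16 ~7.5x worse and float16 ~3.7x worse.
bf16_ratio = bf16_with_folding / bf16_without_folding
fp16_ratio = fp16_with_folding / fp16_without_folding
print(bf16_ratio, fp16_ratio)
```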
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @ColinPeppler @amjames @desertfire @chauhang @aakhundov | true |
2,809,755,404 | [AOTI] Update test runner to use the new APIs | desertfire | closed | [
"oncall: distributed",
"ciflow/trunk",
"topic: not user facing",
"module: inductor",
"ciflow/inductor",
"keep-going"
] | 3 | CONTRIBUTOR | Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #145622
Summary: Switch to the newer aoti_compile_and_package APIs. Some tests still use the legacy APIs; internal test refactoring will follow up.
cc @H-Huang @awgu @kwen2501 @wanchaol @fegin @fduwjj @wz337 @wconstab @d4l3k @c-p-i-o @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov @ColinPeppler
Differential Revision: [D69306100](https://our.internmc.facebook.com/intern/diff/D69306100) | true |
2,809,730,971 | [ROCm] Create inductor-rocm-mi300 | amdfaa | closed | [
"module: rocm",
"open source",
"Merged",
"topic: not user facing"
] | 3 | CONTRIBUTOR | - Adds an mi300 inductor workflow to main.
cc @jeffdaily @sunway513 @jithunnair-amd @pruthvistony @ROCmSupport @dllehr-amd @jataylo @hongxiayang @naromero77amd | true |
2,809,719,763 | Remove `public_allowlist` from `TestPublicBindings.test_correct_module_names` and ensure private_allowlist-ed things are actually private | mikaylagawarecki | closed | [
"Merged",
"ciflow/trunk",
"topic: not user facing"
] | 6 | CONTRIBUTOR | This passes locally, also sanity checked importing these modules on [colab](https://colab.research.google.com/drive/1edynWX1mlQNZIBxtb3g81_ZeTpAqWi19?usp=sharing)
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #145620
| true |
2,809,698,827 | [BE][Ez]: FURB148 - remove useless enumerate calls | Skylion007 | closed | [
"oncall: distributed",
"oncall: jit",
"open source",
"Merged",
"ciflow/trunk",
"release notes: distributed (c10d)",
"topic: not user facing",
"module: inductor",
"ciflow/inductor"
] | 3 | COLLABORATOR | Remove useless enumerate calls
cc @H-Huang @awgu @kwen2501 @wanchaol @fegin @fduwjj @wz337 @wconstab @d4l3k @c-p-i-o @EikanWang @jgong5 @wenzhe-nrv @sanchitintel @voznesenskym @penguinwu @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @ColinPeppler @amjames @desertfire @chauhang @aakhundov | true |
2,809,681,423 | torch.compiler.disable should have the option to raise an informative exception (other than `torch._dynamo.exc.Unsupported`) | vmoens | closed | [
"triaged",
"oncall: pt2",
"module: dynamo",
"dynamo-logging"
] | 3 | CONTRIBUTOR | ### 🚀 The feature, motivation and pitch
## Context
I'm working on making distributions compatible with compile. We expect that the validate step will never be compilable.
Ideally, users who run into that graph break (whether they use fullgraph=False with TORCH_LOGS, or fullgraph=True and inspect the exception) should be guided to using `torch.distributions.Distribution.set_default_validate_args(False)` or similar to deactivate the validation step.
That's a reasonable thing to ask users to do if we tell them the pros and cons of doing that.
Currently, the error message just says "if statements are not supported" and if we `disable()` it, we'll get even less context.
In general, if there's a way out of a graph break and we can tell people about it, we should have the tools to provide informative error messages about the macro-context.
## Feature request
An option we talked about with @bdhirsh is to have some way to customize the Unsupported error message, or to provide a custom error (e.g., ValueError or whatever) that tells people how to avoid the graph break.
Something like `validate = torch.compiler.disable(validate, custom_error=RuntimeError(msg))`
Before:
```
torch._dynamo.exc.UserError: Dynamic control flow is not supported at the moment.
```
After
```
RuntimeError: You are attempting to compile a distribution constructor with validate_args=True (default). To compile this without graph breaks, make sure to turn validate_args to False through the constructor or distributions.Distribution.set_default_validate_args.
```
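A torch-free sketch of what this could look like. The wrapper name, the `custom_error` parameter, and the use of `NotImplementedError` as a stand-in for `torch._dynamo.exc.Unsupported` are all assumptions for illustration, not the real API; a real integration would live inside `torch.compiler.disable`:

```python
import functools

def disable_with_custom_error(fn, custom_error):
    # Hypothetical wrapper: re-raise a generic "unsupported" failure as a
    # user-supplied, more informative exception.
    @functools.wraps(fn)
    def wrapper(*args, **kwargs):
        try:
            return fn(*args, **kwargs)
        except NotImplementedError:
            raise custom_error from None
    return wrapper

def validate(x):
    # Stand-in for the distributions validation step that graph-breaks.
    raise NotImplementedError("if statements are not supported")

msg = ("You are attempting to compile a distribution constructor with "
       "validate_args=True (default). Turn validate_args to False through "
       "the constructor or set_default_validate_args to avoid the break.")
validate = disable_with_custom_error(validate, RuntimeError(msg))
```

Calling `validate(...)` then surfaces the actionable `RuntimeError` instead of the generic failure.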
### Alternatives
_No response_
### Additional context
_No response_
cc @chauhang @penguinwu @voznesenskym @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @amjames | true |
2,809,679,029 | Remove tensorboard from public_allowlist in test_modules_can_be_imported | mikaylagawarecki | closed | [
"topic: not user facing"
] | 1 | CONTRIBUTOR | Testing, since removing these does not fail locally.
As a sanity check, also tested on colab that all these can be imported other than the two fixed by https://github.com/pytorch/pytorch/pull/145396
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #145620
* __->__ #145617
| true |
2,809,648,651 | [Release-Only] Remove ptx from Linux CUDA 12.6 binary builds | atalman | closed | [] | 1 | CONTRIBUTOR | These increase the binary size of manywheel builds by about 200 MB.
Todo: add a release-runbook step to strip ``+PTX`` from all CUDA builds as a release-only change.
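A minimal sketch of the release-only change described above, assuming the arch list is carried as a `TORCH_CUDA_ARCH_LIST`-style string (the exact value here is illustrative):

```python
# Hypothetical nightly arch list; "+PTX" requests embedding PTX for
# forward compatibility, which is what inflates the binary size.
nightly_arch_list = "8.0;8.6;9.0+PTX"

# Release-only change: drop every "+PTX" suffix.
release_arch_list = nightly_arch_list.replace("+PTX", "")
print(release_arch_list)  # 8.0;8.6;9.0
```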
Please note: We want to keep +PTX in our nightly builds but we don't want to have this functionality in release because of the significant binary size increase. | true |
2,809,592,541 | Back out "Fix triton masked loading for non-block tl.loads (#144782)" | ezyang | closed | [
"fb-exported",
"module: inductor",
"ciflow/inductor"
] | 7 | CONTRIBUTOR | Summary:
Original commit changeset: 77b1206ddb61
Original Phabricator Diff: D68509889
Bisect found this diff, it regresses compile time for Ads CMF model by 20%:
https://www.internalfb.com/lab/attribution/session/888f7684-91cd-4b8f-99ea-a71ce6db1ad6#jobid=45036005935867349
Test Plan:
Metric recovered on backout diff:
https://www.internalfb.com/servicelab/experiment/4700407575/
Reviewed By: ezyang
Differential Revision: D68605114
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @ColinPeppler @amjames @desertfire @chauhang @aakhundov | true |
2,809,574,175 | [CD] Fix slim-wheel cuda_nvrtc import problem | pytorchbot | closed | [
"open source"
] | 6 | COLLABORATOR | Similar fix as: https://github.com/pytorch/pytorch/pull/144816
Fixes: https://github.com/pytorch/pytorch/issues/145580
Found during testing of https://github.com/pytorch/pytorch/issues/138340
Please note that both nvrtc and nvjitlink exist for CUDA 11.8, 12.4, and 12.6, hence we can safely remove the if statement. Preloading can apply to all supported CUDA versions.
CUDA 11.8 path:
```
(.venv) root@b4ffe5c8ac8c:/pytorch/.ci/pytorch/smoke_test# ls /.venv/lib/python3.12/site-packages/torch/lib/../../nvidia/cuda_nvrtc/lib
__init__.py __pycache__ libnvrtc-builtins.so.11.8 libnvrtc-builtins.so.12.4 libnvrtc.so.11.2 libnvrtc.so.12
(.venv) root@b4ffe5c8ac8c:/pytorch/.ci/pytorch/smoke_test# ls /.venv/lib/python3.12/site-packages/torch/lib/../../nvidia/nvjitlink/lib
__init__.py __pycache__ libnvJitLink.so.12
```
Test with rc 2.6 and CUDA 11.8:
```
python cudnn_test.py
2.6.0+cu118
---------------------------------------------SDPA-Flash---------------------------------------------
ALL GOOD
---------------------------------------------SDPA-CuDNN---------------------------------------------
ALL GOOD
```
Thank you @nWEIdia for discovering this issue
cc @seemethere @malfet @osalpekar | true |
2,809,508,846 | [Inductor][Triton] Change propagated dtype for fp16/bf16 unwrapped 0d tensors | kundaMwiza | closed | [
"triaged",
"open source",
"Merged",
"ciflow/trunk",
"topic: not user facing",
"module: inductor",
"ciflow/inductor"
] | 4 | CONTRIBUTOR | Fixes TestInductorOpInfoCPU.test_comprehensive_max_binary_cpu_float16 and related tests for Triton CPU. TestInductorOpInfoCPU is currently not run in the CI. See https://github.com/pytorch/pytorch/pull/144389#issuecomment-2608050755 for some additional context.
Fixes #ISSUE_NUMBER
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @ColinPeppler @amjames @desertfire @chauhang @aakhundov | true |
2,809,502,764 | [BE] Bump huggingface pin | desertfire | closed | [
"Stale",
"topic: not user facing",
"ciflow/inductor",
"ciflow/inductor-periodic"
] | 2 | CONTRIBUTOR | null | true |
2,809,492,101 | Torch device backend autoload fix | kundaMwiza | closed | [
"triaged",
"open source",
"Merged",
"ciflow/trunk",
"topic: not user facing"
] | 4 | CONTRIBUTOR | This causes an import failure if an external backend imports a module that uses `torch._as_tensor_fullprec` when it is being loaded.
Fixes #ISSUE_NUMBER
| true |
2,809,471,982 | mmap fails on 64k page aarch64 systems for AOTI model loading | lukalt | open | [
"module: crash",
"module: arm",
"oncall: pt2",
"export-triaged",
"oncall: export",
"module: aotinductor"
] | 8 | NONE | ### 🐛 Describe the bug
The AOTI loader mmap's a file with an offset `weights_offset`. An assertion ensures that `weights_offset` is a multiple of 16k. However, the offset in an `mmap` syscall needs to be a multiple of the page size, causing this mmap to fail on kernels with 64k pages but not on 4k pages.
https://github.com/pytorch/pytorch/blob/629840e038ee623911bedc8fef1ab84acce5ba39/torch/csrc/inductor/aoti_runtime/model.h#L601
**Failing syscall with 64k page kernel:**
```
newfstatat(AT_FDCWD, "/pytorch-llama/torchchat/exportedModels/llama3.1.json", 0xffffec3d0838, 0) = -1 ENOENT (No such file or directory)
openat(AT_FDCWD, "/pytorch-llama/torchchat/exportedModels/llama3.1.so", O_RDONLY) = 3
mmap(NULL, 6144375816, PROT_READ|PROT_WRITE, MAP_PRIVATE, 3, 0xd8000) = -1 EINVAL (Invalid argument) # 0xd8000 is not a multiple of 65536
```
**Same syscalls with 4k page kernel:**
```
newfstatat(AT_FDCWD, "/pytorch-llama/torchchat/exportedModels/llama3.1.json", 0xfffff418a778, 0) = -1 ENOENT (No such file or directory)
openat(AT_FDCWD, "/pytorch-llama/torchchat/exportedModels/llama3.1.so", O_RDONLY) = 3
mmap(NULL, 6144375816, PROT_READ|PROT_WRITE, MAP_PRIVATE, 3, 0xd8000) = 0xfff9d10a7000 # 0xd8000 is a multiple of 4096
```
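A sketch of the likely fix direction, in pure Python for illustration (the helper name is made up; the real change would be in `model.h`): round the offset down to the nearest page boundary, map from there, and advance the returned pointer by the remainder.

```python
def align_for_mmap(offset, page_size):
    # mmap() requires `offset` to be a multiple of the page size. Map from
    # the aligned offset and advance the returned pointer by `delta`.
    aligned = offset - (offset % page_size)
    delta = offset - aligned
    return aligned, delta

# weights_offset from the failing syscall above.
weights_offset = 0xD8000
print(align_for_mmap(weights_offset, 4096))    # already aligned on 4k pages
print(align_for_mmap(weights_offset, 65536))   # needs adjustment on 64k pages
```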
**How to reproduce:**
Follow these steps https://learn.arm.com/learning-paths/servers-and-cloud-computing/pytorch-llama/pytorch-llama/ on an aarch64 system with a 64k-page-size kernel. The error occurs during model loading when running `torchchat.py` (last step).
**Error message:**
```
PyTorch version 2.5.0.dev20240828+cpu available.
Warning: checkpoint path ignored because an exported DSO or PTE path specified
Warning: checkpoint path ignored because an exported DSO or PTE path specified
Using device=cpu
Loading model...
Time to load model: 0.05 seconds
Error: mmap() failed
Traceback (most recent call last):
File "/pytorch-llama/torchchat/build/builder.py", line 480, in _initialize_model
model.forward = torch._export.aot_load(
File "/usr/local/lib/python3.10/dist-packages/torch/_export/__init__.py", line 300, in aot_load
runner = torch._C._aoti.AOTIModelContainerRunnerCpu(so_path, 1) # type: ignore[call-arg]
RuntimeError: create_func_( &container_handle_, num_models, device_str.c_str(), cubin_dir.empty() ? nullptr : cubin_dir.c_str()) API call failed at /pytorch/torch/csrc/inductor/aoti_runner/model_container_runner.cpp, line 70
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/pytorch-llama/torchchat/torchchat.py", line 88, in <module>
generate_main(args)
File "/pytorch-llama/torchchat/generate.py", line 901, in main
gen = Generator(
File "/pytorch-llama/torchchat/generate.py", line 253, in __init__
self.model = _initialize_model(self.builder_args, self.quantize, self.tokenizer)
File "/pytorch-llama/torchchat/build/builder.py", line 484, in _initialize_model
raise RuntimeError(f"Failed to load AOTI compiled {builder_args.dso_path}")
RuntimeError: Failed to load AOTI compiled exportedModels/llama3.1.so
```
**GDB backtrace:**
```
Catchpoint 6 (call to syscall mmap), __GI___mmap64 (offset=884736, fd=3, flags=2, prot=3, len=6144375816, addr=<optimized out>) at ../sysdeps/unix/sysv/linux/mmap64.c:58
58 in ../sysdeps/unix/sysv/linux/mmap64.c
(gdb) bt
#0 __GI___mmap64 (offset=884736, fd=3, flags=2, prot=3, len=6144375816, addr=<optimized out>)
at ../sysdeps/unix/sysv/linux/mmap64.c:58
#1 __GI___mmap64 (addr=<optimized out>, len=6144375816, prot=3, flags=2, fd=3, offset=884736)
at ../sysdeps/unix/sysv/linux/mmap64.c:46
#2 0x0000e286e96d2fe8 in torch::aot_inductor::AOTInductorModelBase<torch::aot_inductor::AOTInductorModel>::load_constants() () from /pytorch-llama/torchchat/exportedModels/llama3.1.so
#3 0x0000e286e96ee31c in torch::aot_inductor::AOTInductorModelContainer::AOTInductorModelContainer(unsigned long, std::string const&, std::optional<std::string> const&) ()
from /pytorch-llama/torchchat/exportedModels/llama3.1.so
#4 0x0000e286e96cd0e8 in AOTInductorModelContainerCreateWithDevice ()
from /pytorch-llama/torchchat/exportedModels/llama3.1.so
#5 0x0000e287174d8888 in torch::inductor::AOTIModelContainerRunner::AOTIModelContainerRunner(std::string const&, unsigned long, std::string const&, std::string const&) ()
from /usr/local/lib/python3.10/dist-packages/torch/lib/libtorch_cpu.so
#6 0x0000e287174d97bc in torch::inductor::AOTIModelContainerRunnerCpu::AOTIModelContainerRunnerCpu(std::string const&, unsigned long) () from /usr/local/lib/python3.10/dist-packages/torch/lib/libtorch_cpu.so
#7 0x0000e2871c841618 in pybind11::cpp_function::initialize<pybind11::detail::initimpl::constructor<std::string const&, int>::execute<pybind11::class_<torch::inductor::AOTIModelContainerRunnerCpu>, , 0>(pybind11::class_<torch::inductor::AOTIModelContainerRunnerCpu>&)::{lambda(pybind11::detail::value_and_holder&, std::string const&, int)#1}, void, pybind11::detail::value_and_holder&, std::string const&, int, pybind11::name, pybind11::is_method, pybind11::sibling, pybind11::detail::is_new_style_constructor>(pybind11::class_<torch::inductor::AOTIModelContainerRunnerCpu>&&, void (*)(pybind11::detail::value_and_holder&, std::string const&, int), pybind11::name const&, pybind11::is_method const&, pybind11::sibling const&, pybind11::detail::is_new_style_constructor const&)::{lambda(pybind11::detail::function_call&)#3}::_FUN(pybind11::detail::function_call&) ()
from /usr/local/lib/python3.10/dist-packages/torch/lib/libtorch_python.so
#8 0x0000e2871c2c0814 in pybind11::cpp_function::dispatcher(_object*, _object*, _object*) ()
from /usr/local/lib/python3.10/dist-packages/torch/lib/libtorch_python.so
#9 0x0000b02a971d4b54 in ?? ()
#10 0x0000b02a971cb100 in _PyObject_MakeTpCall ()
#11 0x0000b02a971e58c4 in ?? ()
#12 0x0000b02a971e1a28 in ?? ()
#13 0x0000b02a971cb568 in ?? ()
#14 0x0000e2871c2be5dc in pybind11_meta_call ()
from /usr/local/lib/python3.10/dist-packages/torch/lib/libtorch_python.so
#15 0x0000b02a971cb100 in _PyObject_MakeTpCall ()
#16 0x0000b02a971c2334 in _PyEval_EvalFrameDefault ()
#17 0x0000b02a971d57e8 in _PyFunction_Vectorcall ()
#18 0x0000b02a971c1bcc in _PyEval_EvalFrameDefault ()
#19 0x0000b02a971d57e8 in _PyFunction_Vectorcall ()
#20 0x0000b02a971bd764 in _PyEval_EvalFrameDefault ()
#21 0x0000b02a971ca164 in _PyObject_FastCallDictTstate ()
#22 0x0000b02a971e162c in ?? ()
#23 0x0000b02a971cb078 in _PyObject_MakeTpCall ()
#24 0x0000b02a971c1f68 in _PyEval_EvalFrameDefault ()
#25 0x0000b02a971d57e8 in _PyFunction_Vectorcall ()
#26 0x0000b02a971bd764 in _PyEval_EvalFrameDefault ()
#27 0x0000b02a972be070 in ?? ()
#28 0x0000b02a972bdef4 in PyEval_EvalCode ()
#29 0x0000b02a972f151c in ?? ()
#30 0x0000b02a972e9c38 in ?? ()
#31 0x0000b02a972f11cc in ?? ()
```
### Versions
PyTorch version: 2.5.0.dev20240820+cpu
Is debug build: False
CUDA used to build PyTorch: None
ROCM used to build PyTorch: N/A
OS: Ubuntu 22.04.5 LTS (aarch64)
GCC version: (Ubuntu 11.4.0-1ubuntu1~22.04) 11.4.0
Clang version: Could not collect
CMake version: version 3.31.4
Libc version: glibc-2.35
Python version: 3.10.12 (main, Jan 17 2025, 14:35:34) [GCC 11.4.0] (64-bit runtime)
Python platform: Linux-6.8.0-<....>
Is CUDA available: False
CUDA runtime version: No CUDA
CUDA_MODULE_LOADING set to: N/A
GPU models and configuration: No CUDA
Nvidia driver version: No CUDA
cuDNN version: No CUDA
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: aarch64
CPU op-mode(s): 64-bit
Byte Order: Little Endian
CPU(s): 72
On-line CPU(s) list: 0-71
Vendor ID: ARM
Model name: Neoverse-V2
Model: 0
Thread(s) per core: 1
Core(s) per cluster: 72
Socket(s): -
Cluster(s): 1
Stepping: r0p0
Frequency boost: disabled
CPU max MHz: 3447.0000
CPU min MHz: 81.0000
BogoMIPS: 2000.00
Flags: fp asimd evtstrm aes pmull sha1 sha2 crc32 atomics fphp asimdhp cpuid asimdrdm jscvt fcma lrcpc dcpop sha3 sm3 sm4 asimddp sha512 sve asimdfhm dit uscat ilrcpc flagm ssbs sb paca pacg dcpodp sve2 sveaes svepmull svebitperm svesha3 svesm4 flagm2 frint svei8mm svebf16 i8mm bf16 dgh bti
L1d cache: 4.5 MiB (72 instances)
L1i cache: 4.5 MiB (72 instances)
L2 cache: 72 MiB (72 instances)
L3 cache: 114 MiB (1 instance)
NUMA node(s): 1
NUMA node0 CPU(s): 0-71
Vulnerability Gather data sampling: Not affected
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Not affected
Vulnerability Reg file data sampling: Not affected
Vulnerability Retbleed: Not affected
Vulnerability Spec rstack overflow: Not affected
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl
Vulnerability Spectre v1: Mitigation; __user pointer sanitization
Vulnerability Spectre v2: Not affected
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Versions of relevant libraries:
[pip3] numpy==1.26.4
[pip3] torch==2.5.0.dev20240828+cpu
[pip3] torchao==0.4.0+git174e630a
[conda] Could not collect
cc @malfet @snadampal @milpuz01 @chauhang @penguinwu @avikchaudhuri @gmagogsfm @zhxchen17 @tugsbayasgalan @angelayi @suo @ydwu4 @desertfire @chenyang78 | true |
2,809,368,895 | Kw argument `requires_grad` may not necessary in `torch.randint()`, `torch.randint_like()`, `torch.randperm()` | ILCSFNO | closed | [
"triaged",
"module: random"
] | 1 | CONTRIBUTOR | ### 🐛 Describe the bug
The docs of [`torch.randint()`](https://pytorch.org/docs/stable/generated/torch.randint.html#torch-randint), [`torch.randint_like()`](https://pytorch.org/docs/stable/generated/torch.randint_like.html#torch-randint-like), [`torch.randperm()`](https://pytorch.org/docs/stable/generated/torch.randperm.html#torch-randperm) show their returns as below:
> torch.randint: Returns a tensor filled with random integers generated uniformly between low (inclusive) and high (exclusive).
> torch.randint_like: Returns a tensor with the same shape as Tensor input filled with random integers generated uniformly between low (inclusive) and high (exclusive).
> torch.randperm: Returns a random permutation of integers from 0 to n - 1.
They share the same keyword argument `requires_grad`, described as below:
> requires_grad (bool, optional) – If autograd should record operations on the returned tensor. Default: False.
The docs state that the returned values are integers. But since only tensors of floating point and complex dtype can require gradients, the keyword argument `requires_grad` may not be necessary here.
### Minified repro
```python
import torch
randint_result = torch.randint(0, 10, (100, 100), requires_grad=True) # failed
# randint_like_result = torch.randint_like(torch.randint(0, 10, (100, 100)), 0, 10, requires_grad=True) # failed
# randperm_result = torch.randperm(10, requires_grad=True) # failed
```
### Outputs
```txt
RuntimeError: Only Tensors of floating point and complex dtype can require gradients
```
### Versions
pytorch==2.5.0
torchvision==0.20.0
torchaudio==2.5.0
pytorch-cuda=12.1
cc @pbelevich @vincentqb @jbschlosser @albanD @janeyx99 @crcrpar @svekars @brycebortree @sekyondaMeta @mruberry @walterddr @mikaylagawarecki | true |
2,809,352,899 | ROCm+gcc 15 asserts | trixirt | open | [
"module: rocm",
"triaged"
] | 10 | NONE | ### 🐛 Describe the bug
Fedora 42 will have gcc 15.
Gcc 15's libstdc++ asserts in multiple places in the ROCm build.
The errors look like this:
/usr/lib/gcc/x86_64-redhat-linux/15/../../../../include/c++/15/array:210:2: error: reference to __host__ function '__glibcxx_assert_fail' in __host__ __device__ function
210 | __glibcxx_requires_subscript(__n);
| ^
/usr/lib/gcc/x86_64-redhat-linux/15/../../../../include/c++/15/debug/assertions.h:39:3: note: expanded from macro '__glibcxx_requires_subscript'
39 | __glibcxx_assert(_N < this->size())
| ^
/usr/lib/gcc/x86_64-redhat-linux/15/../../../../include/c++/15/x86_64-redhat-linux/bits/c++config.h:2553:12: note: expanded from macro '__glibcxx_assert'
2553 | std::__glibcxx_assert_fail(); \
| ^
/home/trix/ai/pytorch/aten/src/ATen/hip/detail/OffsetCalculator.cuh:89:7: note: called by 'get'
89 | offsets[arg] = linear_idx;
| ^
/home/trix/ai/pytorch/aten/src/ATen/native/hip/MemoryAccess.cuh:213:45: note: called by 'load<std::tuple<double, double>>'
213 | auto offset = input_offset_calculator.get(linear_idx);
| ^
/home/trix/ai/pytorch/aten/src/ATen/native/hip/Loops.cuh:59:10: note: called by 'elementwise_kernel_helper<(lambda at /home/trix/ai/pytorch/aten/src/ATen/native/hip/ActivationHardtanhKernel.hip:27:3), at::native::memory::policies::unroll<std::array<char *, 3>, TrivialOffsetCalculator<2>, TrivialOffsetCalculator<1>, at::native::memory::LoadWithoutCast, at::native::memory::StoreWithoutCast, 4>>'
59 | policy.load(args, idx);
| ^
/home/trix/ai/pytorch/aten/src/ATen/native/hip/HIPLoops.cuh:148:5: note: called by 'vectorized_elementwise_kernel<16, (lambda at /home/trix/ai/pytorch/aten/src/ATen/native/hip/ActivationHardtanhKernel.hip:27:3), std::array<char *, 3>>'
148 | elementwise_kernel_helper(f, policy);
| ^
/usr/lib/gcc/x86_64-redhat-linux/15/../../../../include/c++/15/x86_64-redhat-linux/bits/c++config.h:2547:3: note: '__glibcxx_assert_fail' declared here
2547 | __glibcxx_assert_fail()
### Versions
Collecting environment information...
PyTorch version: 2.5.0a0+git446bca5
Is debug build: False
CUDA used to build PyTorch: N/A
ROCM used to build PyTorch: 6.3.42133-0
OS: Fedora Linux 42 (Workstation Edition Prerelease) (x86_64)
GCC version: (GCC) 15.0.1 20250114 (Red Hat 15.0.1-0)
Clang version: 19.1.6 (Fedora 19.1.6-2.fc42)
CMake version: version 3.31.4
Libc version: glibc-2.40.9000
Python version: 3.13.1 (main, Dec 9 2024, 00:00:00) [GCC 14.2.1 20241104 (Red Hat 14.2.1-6)] (64-bit runtime)
Python platform: Linux-6.13.0-0.rc7.20250114gitc45323b7560e.56.fc42.x86_64-x86_64-with-glibc2.40.9000
Is CUDA available: True
CUDA runtime version: Could not collect
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration: AMD Radeon 780M (gfx1103)
Nvidia driver version: Could not collect
cuDNN version: Could not collect
HIP runtime version: 6.3.42133
MIOpen runtime version: 3.3.0
Is XNNPACK available: False
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 48 bits physical, 48 bits virtual
Byte Order: Little Endian
CPU(s): 16
On-line CPU(s) list: 0-15
Vendor ID: AuthenticAMD
Model name: AMD Ryzen 9 7940HS w/ Radeon 780M Graphics
CPU family: 25
Model: 116
Thread(s) per core: 2
Core(s) per socket: 8
Socket(s): 1
Stepping: 1
Frequency boost: enabled
CPU(s) scaling MHz: 53%
CPU max MHz: 5263.0000
CPU min MHz: 400.0000
BogoMIPS: 7984.65
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ht syscall nx mmxext fxsr_opt pdpe1gb rdtscp lm constant_tsc rep_good amd_lbr_v2 nopl xtopology nonstop_tsc cpuid extd_apicid aperfmperf rapl pni pclmulqdq monitor ssse3 fma cx16 sse4_1 sse4_2 x2apic movbe popcnt aes xsave avx f16c rdrand lahf_lm cmp_legacy svm extapic cr8_legacy abm sse4a misalignsse 3dnowprefetch osvw ibs skinit wdt tce topoext perfctr_core perfctr_nb bpext perfctr_llc mwaitx cpb cat_l3 cdp_l3 hw_pstate ssbd mba perfmon_v2 ibrs ibpb stibp ibrs_enhanced vmmcall fsgsbase bmi1 avx2 smep bmi2 erms invpcid cqm rdt_a avx512f avx512dq rdseed adx smap avx512ifma clflushopt clwb avx512cd sha_ni avx512bw avx512vl xsaveopt xsavec xgetbv1 xsaves cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local user_shstk avx512_bf16 clzero irperf xsaveerptr rdpru wbnoinvd cppc arat npt lbrv svm_lock nrip_save tsc_scale vmcb_clean flushbyasid decodeassists pausefilter pfthreshold vgif x2avic v_spec_ctrl vnmi avx512vbmi umip pku ospke avx512_vbmi2 gfni vaes vpclmulqdq avx512_vnni avx512_bitalg avx512_vpopcntdq rdpid overflow_recov succor smca flush_l1d amd_lbr_pmc_freeze
Virtualization: AMD-V
L1d cache: 256 KiB (8 instances)
L1i cache: 256 KiB (8 instances)
L2 cache: 8 MiB (8 instances)
L3 cache: 16 MiB (1 instance)
NUMA node(s): 1
NUMA node0 CPU(s): 0-15
Vulnerability Gather data sampling: Not affected
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Not affected
Vulnerability Reg file data sampling: Not affected
Vulnerability Retbleed: Not affected
Vulnerability Spec rstack overflow: Mitigation; Safe RET
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Enhanced / Automatic IBRS; IBPB conditional; STIBP always-on; RSB filling; PBRSB-eIBRS Not affected; BHI Not affected
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Versions of relevant libraries:
[pip3] numpy==2.2.1
[pip3] torch==2.5.0a0+git446bca5
[conda] Could not collect
cc @jeffdaily @sunway513 @jithunnair-amd @pruthvistony @ROCmSupport @dllehr-amd @jataylo @hongxiayang @naromero77amd | true |
2,809,342,200 | Kw argument `dtype` less relative with the functions themselves | ILCSFNO | closed | [
"triaged",
"module: random",
"module: edge cases"
] | 0 | CONTRIBUTOR | ### 🐛 Describe the bug
The docs of [`torch.randint()`](https://pytorch.org/docs/stable/generated/torch.randint.html#torch-randint), [`torch.randint_like()`](https://pytorch.org/docs/stable/generated/torch.randint_like.html#torch-randint-like), [`torch.randperm()`](https://pytorch.org/docs/stable/generated/torch.randperm.html#torch-randperm) show their returns as below:
> torch.randint: Returns a tensor filled with random integers generated uniformly between low (inclusive) and high (exclusive).
> torch.randint_like: Returns a tensor with the same shape as Tensor input filled with random integers generated uniformly between low (inclusive) and high (exclusive).
> torch.randperm: Returns a random permutation of integers from 0 to n - 1.
All of them state that the return should consist of integers. They share the same keyword argument `dtype`, but when `dtype` is a floating-point type they all run without error (a TypeError would be expected).
### Minified repro
```python
import torch
dtype = torch.double # choice: torch.double, torch.half, torch.float
randint_result = torch.randint(0, 10, (100, 100), dtype=dtype)
randint_like_result = torch.randint_like(randint_result, 0, 10, dtype=dtype)
randperm_result = torch.randperm(10, dtype=dtype)
print('randint_result:', randint_result)
print('randint_like_result:', randint_like_result)
print('randperm_result:', randperm_result)
```
### Outputs
```txt
randint_result: tensor([[0., 4., 5., ..., 3., 3., 4.],
[6., 0., 2., ..., 0., 1., 4.],
[8., 3., 0., ..., 5., 6., 9.],
...,
[8., 9., 5., ..., 2., 8., 6.],
[2., 5., 4., ..., 5., 8., 4.],
[6., 1., 5., ..., 3., 8., 6.]], dtype=torch.float64)
randint_like_result: tensor([[5., 7., 6., ..., 0., 0., 0.],
[1., 0., 7., ..., 1., 6., 0.],
[1., 7., 0., ..., 9., 7., 8.],
...,
[9., 1., 4., ..., 5., 5., 5.],
[2., 1., 8., ..., 7., 9., 8.],
[8., 9., 1., ..., 1., 5., 3.]], dtype=torch.float64)
randperm_result: tensor([0., 2., 1., 5., 7., 6., 3., 9., 8., 4.], dtype=torch.float64)
```
### Versions
pytorch==2.5.0
torchvision==0.20.0
torchaudio==2.5.0
pytorch-cuda=12.1
cc @pbelevich @vincentqb @jbschlosser @albanD @janeyx99 @crcrpar @svekars @brycebortree @sekyondaMeta @mruberry @walterddr @mikaylagawarecki | true |
2,809,298,473 | [BE][CI] bump ruff to 0.9.8 | XuehaiPan | closed | [
"open source",
"Merged",
"ciflow/trunk",
"topic: not user facing"
] | 20 | COLLABORATOR | Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #145606
* #144546
| true |
2,809,243,357 | WIP error_prop sc | IvanKobzarev | closed | [
"Stale",
"module: dynamo",
"ciflow/inductor"
] | 3 | CONTRIBUTOR | Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #145605
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @chauhang @amjames | true |
2,809,214,360 | `torch.ops.aten.copy` causes SIGSEGV when handling sparse CSR tensors with invalid metadata | WLFJ | open | [
"module: sparse",
"module: crash",
"triaged"
] | 2 | NONE | ### 🐛 Describe the bug
Using torch.ops.aten.copy with sparse CSR tensors can cause a segmentation fault (SIGSEGV). The issue appears to stem from a lack of validation for the sparse tensor metadata (crow_indices, col_indices, and values). When the metadata contains invalid or uninitialized data (e.g., due to torch.randn generating sparse CSR tensors with incomplete initialization), torch.ops.aten.copy attempts to access this data directly, leading to undefined behavior and a crash.
example:
```python
import torch
print(torch.__version__)
# Create sparse CSR tensors with torch.randn
sym_0 = (5, 5)
sym_1 = torch.sparse_csr
var_1 = torch.randn(size=sym_0, layout=sym_1) # Generates sparse CSR tensor
var_2 = torch.randn(size=sym_0, layout=sym_1)
# Attempt to copy data
res = torch.ops.aten.copy(var_1, var_2)
print(res)
```
Observed behavior:
```
2.7.0.dev20250116+cu124
/home/user/test.py:8: UserWarning: Sparse CSR tensor support is in beta state.
If you miss a functionality in the sparse tensor support, please submit a feature request to https://github.com/pytorch/pytorch/issues.
(Triggered internally at /pytorch/aten/src/ATen/SparseCsrTensorImpl.cpp:53.)
var_1 = torch.randn(size=sym_0, layout=sym_1)
fish: Job 2, 'python3 test.py' terminated by signal SIGSEGV (Address boundary error)
```
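The missing metadata validation described above can be sketched in pure Python. This is illustrative only: `validate_csr` and its exact checks are hypothetical, not PyTorch code, but they show the invariants (`crow_indices` length, endpoints, monotonicity, `col_indices` bounds) that would need to hold before `copy` dereferences the buffers:

```python
# Hypothetical sketch of CSR metadata checks; names are illustrative, not PyTorch's API.
def validate_csr(crow_indices, col_indices, values, shape):
    nrows, ncols = shape
    if len(crow_indices) != nrows + 1:
        return False
    if crow_indices[0] != 0 or crow_indices[-1] != len(values):
        return False
    if any(a > b for a, b in zip(crow_indices, crow_indices[1:])):
        return False  # row pointers must be non-decreasing
    if len(col_indices) != len(values):
        return False
    return all(0 <= c < ncols for c in col_indices)

# A well-formed 2x3 CSR layout for [[1, 0, 2], [0, 3, 0]]
assert validate_csr([0, 2, 3], [0, 2, 1], [1.0, 2.0, 3.0], (2, 3))
# Garbage/uninitialized metadata is rejected instead of being dereferenced.
assert not validate_csr([0, 5, 2], [0, 9, 1], [1.0, 2.0, 3.0], (2, 3))
```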
### Versions
PyTorch version: 2.7.0.dev20250116+cu124
Is debug build: False
CUDA used to build PyTorch: 12.4
ROCM used to build PyTorch: N/A
OS: Manjaro Linux (x86_64)
GCC version: (GCC) 14.2.1 20240805
Clang version: 18.1.8
CMake version: version 3.30.2
Libc version: glibc-2.40
cc @alexsamardzic @nikitaved @pearu @cpuhrsch @amjames @bhosmer @jcaip | true |
2,809,193,129 | [dynamo] refactor dynamo__custom_eval_frame to C++, refactor SKIP_CODE[_RECURSIVE] | williamwen42 | closed | [
"Merged",
"topic: not user facing",
"module: dynamo",
"ciflow/inductor"
] | 2 | MEMBER | Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #146355
* __->__ #145603
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @chauhang @amjames | true |
2,809,044,809 | [ATen][CUDA][Transformers] Add Blackwell support to SDPA | Aidyn-A | closed | [
"module: cuda",
"open source",
"Merged",
"ciflow/trunk",
"topic: new features",
"topic: not user facing",
"module: core aten"
] | 4 | COLLABORATOR | This PR adds sm_100 and sm_120 archs to support SDPA (Flash Attention and Memory Efficient Attention) on Blackwell machines.
Special thanks to @Fuzzkatt for co-authoring these changes!
cc @ptrblck @msaroufim @eqy @manuelcandales @SherlockNoMad @angelayi @drisspg | true |
2,809,028,303 | Dmonakhov/mpi backend enablement | dmitry-monakhov | closed | [
"module: rocm",
"release notes: quantization",
"release notes: releng",
"fx",
"module: inductor",
"module: dynamo"
] | 2 | NONE | The PR consists of two commits:
- 8582d202b7 Switch to custom MPI-aware build image
  Major change: switch from pytorch/manylinux-builder to pytorch/manylinux2_28-builder.
  Since pytorch/manylinux2_28-builder is now the standard for mainline pytorch, we should be safe.
  The custom image has already been built and uploaded manually.
- 505376580e Add custom build environment
  In order to enable the MPI backend we need MPI libraries in our build environment.
  To make an MPI-aware build environment we use the official pytorch/manylinux2_28-builder
  as a base and install EFA/openmpi.
But it seems I do not have permissions to modify workflows for this repo
```
https://github.com/poolsideai/pytorch
! [remote rejected] HEAD -> dmonakhov/mpi-backend-enablement (refusing to allow a Personal Access Token to create or update workflow `.github/workflows/poolside-nightly-build.yaml` without `workflow` scope)
error: failed to push some refs to 'https://github.com/poolsideai/pytorch'
```
So second patch attached below:
[0001-Switch-to-custom-MPI-aware-build-image.patch.txt](https://github.com/user-attachments/files/18533668/0001-Switch-to-custom-MPI-aware-build-image.patch.txt)
cc @jeffdaily @sunway513 @jithunnair-amd @pruthvistony @ROCmSupport @dllehr-amd @jataylo @hongxiayang @naromero77amd @ezyang @SherlockNoMad @EikanWang @jgong5 @wenzhe-nrv @voznesenskym @penguinwu @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @ColinPeppler @amjames @desertfire @chauhang @aakhundov | true |
2,808,964,522 | Add device support for chunk_cat, all_gather_copy_in, and split_with_… | chen8491 | closed | [
"oncall: distributed",
"triaged",
"open source",
"ciflow/trunk",
"release notes: distributed (fsdp)"
] | 5 | NONE | …sizes_copy in _fsdp_collectives.py
cc @H-Huang @awgu @kwen2501 @wanchaol @fegin @fduwjj @wz337 @wconstab @d4l3k @c-p-i-o | true |
2,808,953,599 | [NFC] Fix some minor typos. | c8ef | closed | [
"oncall: distributed",
"oncall: jit",
"open source",
"Merged",
"release notes: jit",
"topic: not user facing",
"module: inductor"
] | 4 | CONTRIBUTOR | cc @H-Huang @awgu @kwen2501 @wanchaol @fegin @fduwjj @wz337 @wconstab @d4l3k @c-p-i-o @EikanWang @jgong5 @wenzhe-nrv @sanchitintel @voznesenskym @penguinwu @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @ColinPeppler @amjames @desertfire @chauhang @aakhundov | true |
2,808,923,715 | add scalar inputs with out causes error in torch.compile | jthakurH | open | [
"triaged",
"oncall: pt2",
"module: decompositions",
"module: aotdispatch",
"module: pt2-dispatcher"
] | 1 | NONE | ### 🐛 Describe the bug
```
import torch
def add_fn(params):
res = torch.add(**params)
return res
if __name__ == "__main__":
add_fn = torch.compile(add_fn)
params = {'other': 1.1, 'alpha': 0.4, 'input': 2, 'out': torch.tensor(1.)}
res = add_fn(params)
print(res)
```
It causes the following error:
```
E0124 11:04:10.160000 286274 torch/_subclasses/fake_tensor.py:2020] [0/0] failed while attempting to run meta for aten.add.out
E0124 11:04:10.160000 286274 torch/_subclasses/fake_tensor.py:2020] [0/0] Traceback (most recent call last):
E0124 11:04:10.160000 286274 torch/_subclasses/fake_tensor.py:2020] [0/0] File "/tmp/lib/python3.10/site-packages/torch/_subclasses/fake_tensor.py", line 2016, in _dispatch_impl
E0124 11:04:10.160000 286274 torch/_subclasses/fake_tensor.py:2020] [0/0] r = func(*args, **kwargs)
E0124 11:04:10.160000 286274 torch/_subclasses/fake_tensor.py:2020] [0/0] File "/tmp/lib/python3.10/site-packages/torch/_ops.py", line 716, in __call__
E0124 11:04:10.160000 286274 torch/_subclasses/fake_tensor.py:2020] [0/0] return self._op(*args, **kwargs)
E0124 11:04:10.160000 286274 torch/_subclasses/fake_tensor.py:2020] [0/0] File "/tmp/lib/python3.10/site-packages/torch/_prims_common/wrappers.py", line 273, in _fn
E0124 11:04:10.160000 286274 torch/_subclasses/fake_tensor.py:2020] [0/0] result = fn(*args, **kwargs)
E0124 11:04:10.160000 286274 torch/_subclasses/fake_tensor.py:2020] [0/0] File "/tmp/lib/python3.10/site-packages/torch/_prims_common/wrappers.py", line 141, in _fn
E0124 11:04:10.160000 286274 torch/_subclasses/fake_tensor.py:2020] [0/0] result = fn(**bound.arguments)
E0124 11:04:10.160000 286274 torch/_subclasses/fake_tensor.py:2020] [0/0] File "/tmp/lib/python3.10/site-packages/torch/_refs/__init__.py", line 1087, in add
E0124 11:04:10.160000 286274 torch/_subclasses/fake_tensor.py:2020] [0/0] dtype = a.dtype if isinstance(a, TensorLike) else b.dtype # type: ignore[union-attr]
E0124 11:04:10.160000 286274 torch/_subclasses/fake_tensor.py:2020] [0/0] AttributeError: 'float' object has no attribute 'dtype'
Traceback (most recent call last):
File "/tmp/val.py", line 12, in <module>
res = add_fn(params)
File "/tmp/lib/python3.10/site-packages/torch/_dynamo/eval_frame.py", line 465, in _fn
return fn(*args, **kwargs)
File "/tmp/lib/python3.10/site-packages/torch/_dynamo/convert_frame.py", line 1269, in __call__
return self._torchdynamo_orig_callable(
File "/tmp/lib/python3.10/site-packages/torch/_dynamo/convert_frame.py", line 1064, in __call__
result = self._inner_convert(
File "/tmp/lib/python3.10/site-packages/torch/_dynamo/convert_frame.py", line 526, in __call__
return _compile(
File "/tmp/lib/python3.10/site-packages/torch/_dynamo/convert_frame.py", line 924, in _compile
guarded_code = compile_inner(code, one_graph, hooks, transform)
File "/tmp/lib/python3.10/site-packages/torch/_dynamo/convert_frame.py", line 666, in compile_inner
return _compile_inner(code, one_graph, hooks, transform)
File "/tmp/lib/python3.10/site-packages/torch/_utils_internal.py", line 87, in wrapper_function
return function(*args, **kwargs)
File "/tmp/lib/python3.10/site-packages/torch/_dynamo/convert_frame.py", line 699, in _compile_inner
out_code = transform_code_object(code, transform)
File "/tmp/lib/python3.10/site-packages/torch/_dynamo/bytecode_transformation.py", line 1322, in transform_code_object
transformations(instructions, code_options)
File "/tmp/lib/python3.10/site-packages/torch/_dynamo/convert_frame.py", line 219, in _fn
return fn(*args, **kwargs)
File "/tmp/lib/python3.10/site-packages/torch/_dynamo/convert_frame.py", line 634, in transform
tracer.run()
File "/tmp/lib/python3.10/site-packages/torch/_dynamo/symbolic_convert.py", line 2796, in run
super().run()
File "/tmp/lib/python3.10/site-packages/torch/_dynamo/symbolic_convert.py", line 983, in run
while self.step():
File "/tmp/lib/python3.10/site-packages/torch/_dynamo/symbolic_convert.py", line 895, in step
self.dispatch_table[inst.opcode](self, inst)
File "/tmp/lib/python3.10/site-packages/torch/_dynamo/symbolic_convert.py", line 582, in wrapper
return inner_fn(self, inst)
File "/tmp/lib/python3.10/site-packages/torch/_dynamo/symbolic_convert.py", line 1680, in CALL_FUNCTION_EX
self.call_function(fn, argsvars.items, kwargsvars)
File "/tmp/lib/python3.10/site-packages/torch/_dynamo/symbolic_convert.py", line 830, in call_function
self.push(fn.call_function(self, args, kwargs)) # type: ignore[arg-type]
File "/tmp/lib/python3.10/site-packages/torch/_dynamo/variables/torch.py", line 897, in call_function
tensor_variable = wrap_fx_proxy(
File "/tmp/lib/python3.10/site-packages/torch/_dynamo/variables/builder.py", line 2037, in wrap_fx_proxy
return wrap_fx_proxy_cls(target_cls=TensorVariable, **kwargs)
File "/tmp/lib/python3.10/site-packages/torch/_dynamo/variables/builder.py", line 2124, in wrap_fx_proxy_cls
example_value = get_fake_value(proxy.node, tx, allow_non_graph_fake=True)
File "/tmp/lib/python3.10/site-packages/torch/_dynamo/utils.py", line 2082, in get_fake_value
raise TorchRuntimeError(str(e)).with_traceback(e.__traceback__) from None
File "/tmp/lib/python3.10/site-packages/torch/_dynamo/utils.py", line 2017, in get_fake_value
ret_val = wrap_fake_exception(
File "/tmp/lib/python3.10/site-packages/torch/_dynamo/utils.py", line 1574, in wrap_fake_exception
return fn()
File "/tmp/lib/python3.10/site-packages/torch/_dynamo/utils.py", line 2018, in <lambda>
lambda: run_node(tx.output, node, args, kwargs, nnmodule)
File "/tmp/lib/python3.10/site-packages/torch/_dynamo/utils.py", line 2150, in run_node
raise RuntimeError(make_error_message(e)).with_traceback(
File "/tmp/lib/python3.10/site-packages/torch/_dynamo/utils.py", line 2132, in run_node
return node.target(*args, **kwargs)
File "/tmp/lib/python3.10/site-packages/torch/utils/_stats.py", line 21, in wrapper
return fn(*args, **kwargs)
File "/tmp/lib/python3.10/site-packages/torch/_subclasses/fake_tensor.py", line 1241, in __torch_dispatch__
return self.dispatch(func, types, args, kwargs)
File "/tmp/lib/python3.10/site-packages/torch/_subclasses/fake_tensor.py", line 1695, in dispatch
return self._cached_dispatch_impl(func, types, args, kwargs)
File "/tmp/lib/python3.10/site-packages/torch/_subclasses/fake_tensor.py", line 1342, in _cached_dispatch_impl
output = self._dispatch_impl(func, types, args, kwargs)
File "/tmp/lib/python3.10/site-packages/torch/_subclasses/fake_tensor.py", line 2016, in _dispatch_impl
r = func(*args, **kwargs)
File "/tmp/lib/python3.10/site-packages/torch/_ops.py", line 716, in __call__
return self._op(*args, **kwargs)
File "/tmp/lib/python3.10/site-packages/torch/_prims_common/wrappers.py", line 273, in _fn
result = fn(*args, **kwargs)
File "/tmp/lib/python3.10/site-packages/torch/_prims_common/wrappers.py", line 141, in _fn
result = fn(**bound.arguments)
File "/tmp/lib/python3.10/site-packages/torch/_refs/__init__.py", line 1087, in add
dtype = a.dtype if isinstance(a, TensorLike) else b.dtype # type: ignore[union-attr]
torch._dynamo.exc.TorchRuntimeError: Failed running call_function <built-in method add of type object at 0x7f10670a3a20>(*(), **{'other': 1.1, 'alpha': 0.4, 'input': 2, 'out': FakeTensor(..., size=())}):
'float' object has no attribute 'dtype'
from user code:
File "/tmp/val.py", line 5, in add_fn
res = torch.add(**params)
Set TORCH_LOGS="+dynamo" and TORCHDYNAMO_VERBOSE=1 for more information
You can suppress this exception and fall back to eager by setting:
import torch._dynamo
torch._dynamo.config.suppress_errors = True
```
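The failing line in the traceback (`dtype = a.dtype if isinstance(a, TensorLike) else b.dtype`) can be reproduced in pure Python. The `TensorLike` stand-in class below is hypothetical, not torch's; the point is that the branch assumes at least one operand is tensor-like, which breaks when both `input` and `other` are plain Python numbers as in the repro:

```python
# Stand-in for torch's TensorLike check; names here are illustrative.
class TensorLike:
    dtype = "float32"

def infer_dtype(a, b):
    # Mirrors the failing line from torch/_refs/__init__.py in the traceback:
    # it assumes at least one operand is tensor-like.
    return a.dtype if isinstance(a, TensorLike) else b.dtype

assert infer_dtype(TensorLike(), 1.1) == "float32"  # usual tensor/scalar mix

try:
    infer_dtype(2, 1.1)  # both operands plain Python numbers, as in the repro
except AttributeError as e:
    print(e)  # 'float' object has no attribute 'dtype'
```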
### Versions
PyTorch version: 2.5.1
cc @chauhang @penguinwu @SherlockNoMad @zou3519 @bdhirsh @yf225 | true |
2,808,916,693 | Add `torch._foreach_copy_` doc | zeshengzong | closed | [
"triaged",
"open source",
"topic: not user facing"
] | 3 | CONTRIBUTOR | Fixes #107162 #145355
Add docs for `torch._foreach_copy_`
**Test Result**

cc @janeyx99
| true |
2,808,831,401 | [Dynamo] compile torch.logit with different data types | leslie-fang-intel | open | [
"triaged",
"oncall: pt2",
"module: dynamic shapes"
] | 2 | COLLABORATOR | ### 🐛 Describe the bug
When fixing https://github.com/pytorch/pytorch/issues/145379, I hit a failure which seems related to dynamo. When tested with `torch.float64`, the example below works well. However, it fails with `torch.float32` with the following error:
```
File "/home/leslie/community/pytorch/torch/_dynamo/variables/builder.py", line 2167, in wrap_fx_proxy
return wrap_fx_proxy_cls(target_cls=TensorVariable, **kwargs)
File "/home/leslie/community/pytorch/torch/_dynamo/variables/builder.py", line 2233, in wrap_fx_proxy_cls
return _wrap_fx_proxy(
File "/home/leslie/community/pytorch/torch/_dynamo/variables/builder.py", line 2329, in _wrap_fx_proxy
example_value = get_fake_value(proxy.node, tx, allow_non_graph_fake=True)
File "/home/leslie/community/pytorch/torch/_dynamo/utils.py", line 2965, in get_fake_value
unimplemented(
File "/home/leslie/community/pytorch/torch/_dynamo/exc.py", line 361, in unimplemented
raise Unsupported(msg, case_name=case_name)
torch._dynamo.exc.Unsupported: data dependent operator: aten._local_scalar_dense.default; to enable, set torch._dynamo.config.capture_scalar_outputs = True
from user code:
File "/home/leslie/community/pytorch/torch/_dynamo/external_utils.py", line 48, in inner
return fn(*args, **kwargs)
Set TORCH_LOGS="+dynamo" and TORCHDYNAMO_VERBOSE=1 for more information
You can suppress this exception and fall back to eager by setting:
import torch._dynamo
torch._dynamo.config.suppress_errors = True
```
Repro:
```
import torch
# dtype = torch.float64 # Pass
dtype = torch.float32 # Fail
input = torch.tensor(0.3, dtype=dtype)
eps = torch.tensor(0.9, dtype=dtype)
compiled = torch.compile(torch.logit, fullgraph=True)
print(compiled(input, eps))
```
### Versions
```
[pip3] flake8==6.1.0
[pip3] flake8-bugbear==23.3.23
[pip3] flake8-comprehensions==3.15.0
[pip3] flake8-executable==2.1.3
[pip3] flake8-logging-format==0.9.0
[pip3] flake8-pyi==23.3.1
[pip3] flake8-simplify==0.19.3
[pip3] mypy==1.11.2
[pip3] mypy-extensions==1.0.0
[pip3] numpy==1.26.0
[pip3] onnx==1.17.0
[pip3] optree==0.13.0
[pip3] torch==2.7.0a0+git95a92b5
[pip3] torchaudio==2.5.0a0+a95cfa8
[pip3] torchdata==0.10.0a0+2631c38
[pip3] torchmultimodal==0.1.0b0
[pip3] torchtext==0.17.0a0+1d4ce73
[pip3] torchvision==0.20.0a0+945bdad
[conda] mkl 2024.2.2 ha957f24_15 conda-forge
[conda] mkl-include 2024.2.2 ha957f24_15 conda-forge
[conda] numpy 1.26.0 pypi_0 pypi
[conda] optree 0.13.0 pypi_0 pypi
[conda] torch 2.7.0a0+git95a92b5 dev_0 <develop>
[conda] torchaudio 2.5.0a0+a95cfa8 dev_0 <develop>
[conda] torchdata 0.10.0a0+2631c38 dev_0 <develop>
[conda] torchfix 0.4.0 pypi_0 pypi
[conda] torchmultimodal 0.1.0b0 pypi_0 pypi
[conda] torchtext 0.17.0a0+1d4ce73 dev_0 <develop>
[conda] torchvision 0.20.0a0+945bdad dev_0 <develop>
```
cc @chauhang @penguinwu @ezyang @bobrenjc93 | true |
2,808,775,174 | [micro_pipeline_tp] support pattern matching row-wise scaled_mm with sharded scale | yifuwang | open | [
"oncall: distributed",
"open source",
"Stale",
"release notes: distributed (pipeline)",
"module: inductor",
"ciflow/inductor"
] | 2 | COLLABORATOR | Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #145595
* #145594
cc @H-Huang @awgu @kwen2501 @wanchaol @fegin @fduwjj @wz337 @wconstab @d4l3k @c-p-i-o @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @ColinPeppler @amjames @desertfire @chauhang @aakhundov | true |
2,808,775,073 | [micro_pipeline_tp] add logging for all-gather-matmul fusion | yifuwang | open | [
"open source",
"Stale",
"module: inductor",
"ciflow/inductor"
] | 3 | COLLABORATOR | Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* (to be filled)
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @ColinPeppler @amjames @desertfire @chauhang @aakhundov | true |
2,808,614,012 | Fix constants with non-functional operators | tugsbayasgalan | closed | [
"Merged",
"ciflow/trunk",
"fx",
"ciflow/inductor",
"release notes: export"
] | 12 | CONTRIBUTOR | Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #145593
Previously, in the non-strict path, we always errored when trying to in-place update a constant tensor because those constant tensors are not actually wrapped by functional tensors. This is correct behaviour in torch.compile, because dynamo turns all constant tensors into buffers and AOTDispatcher just lifts them and wraps them in functional tensors. However, in non-strict, there is no such step that registers constants as buffers, so AOTDispatcher panics when it sees these dangling constant tensors while functionalizing.
Due to a recent change in the IR, this is no longer an issue in the non-strict path because we don't call AOTDispatcher at the training IR level, but it is now a problem for both strict and non-strict when we lower to inference (lowering to inference is very similar to non-strict tracing). As a result, we have at least one external issue (https://github.com/pytorch/pytorch/issues/141336) and internal issues reported due to this difference.
To fix this, there are two ways:
1. Make functionalization aware of constant tensors and map them to functional tensors on the fly. This makes the functionalization invariant uglier and could potentially open the gate to more nasty bugs.
2. Special-case this in export. This seems more aligned with what dynamo does today, so I think we should do it this way. I think the current state could benefit from more refactors to make run_decompositions more similar to strict export (because both of them now handle this constant-registering logic), but it is a bit complicated to do now because the strict export version of this logic is also incomplete (it doesn't take the export graph renaming pass into account, etc.). I will follow up with more refactors after this PR (T213466691) to unblock users faster.
For future reference:
Why are we not "turning constants into non-persistent buffers and never de-registering"? The reason is that some internal models rely on module.to reliably moving params/buffers to the correct device. As a result, buffers are moved while constants are not. In the composability meeting, we agreed that export won't do device-agnostic tracing going forward (it will provide a way to specify a FakeTensor on CPU that can be configured to run on GPU), so after that is done, we can always turn constants into non-persistent buffers, which will simplify export's constant handling.
cc @ezyang @SherlockNoMad @EikanWang @jgong5 @wenzhe-nrv
Differential Revision: [D68610739](https://our.internmc.facebook.com/intern/diff/D68610739) | true |
2,808,612,495 | _pickle.UnpicklingError: invalid load key, ''. | sankexin | open | [
"oncall: distributed",
"triaged"
] | 1 | NONE | ### 🐛 Describe the bug
```python
import pickle
import torch
import io

_pickler = pickle.Pickler
_unpickler = pickle.Unpickler

tensor = torch.tensor([126, 188, 133, 30, 60, 138, 188], dtype=torch.uint8)
buf = tensor.numpy().tobytes()[:3]
_unpickler(io.BytesIO(buf)).load()
```
Unpickling the `tobytes` output raises this error:
```
[rank14]: File "/usr/local/lib/python3.10/site-packages/torch/distributed/distributed_c10d.py", line 2362, in _tensor_to_object
[rank14]: return _unpickler(io.BytesIO(buf)).load()
[rank14]: _pickle.UnpicklingError: invalid load key, '~'.
```
The fix is to change `tensor.numpy().tobytes()` to `pickle.dumps(tensor.numpy())`.
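The failure mode can be sketched with plain `pickle` (no torch needed): raw tensor bytes are not a valid pickle stream, so the first byte is interpreted as an opcode and fails, while `pickle.dumps` output round-trips:

```python
import io
import pickle

payload = [126, 188, 133, 30, 60, 138, 188]

# Raw bytes are not a pickle stream: the first byte, 126 (ASCII '~'),
# is not a valid pickle opcode, matching the "invalid load key, '~'." error.
try:
    pickle.Unpickler(io.BytesIO(bytes(payload))).load()
except pickle.UnpicklingError as e:
    print(e)  # invalid load key, '~'.

# Serializing with pickle.dumps first produces a stream that round-trips.
buf = pickle.dumps(payload)
restored = pickle.Unpickler(io.BytesIO(buf)).load()
assert restored == payload
```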
### Versions
python3.10
torch2.3.0
ubuntu20.04
cc @H-Huang @awgu @kwen2501 @wanchaol @fegin @fduwjj @wz337 @wconstab @d4l3k @c-p-i-o | true |
2,808,595,587 | [CCA] remove TODO for hardware_destructive_interference_size | 1274085042 | closed | [
"triaged",
"open source",
"Merged",
"ciflow/trunk",
"topic: not user facing"
] | 12 | CONTRIBUTOR | @zyan0 @albanD @houseroad
| true |
2,808,593,345 | [dynamo][benchmarks] Stop benchmarking compile time of dead code | xmfan | closed | [
"Merged",
"ciflow/trunk",
"topic: not user facing",
"module: dynamo",
"ciflow/inductor"
] | 5 | MEMBER | FIXES https://github.com/pytorch/pytorch/issues/144775 frfr
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #145590
See details on the problem: https://github.com/pytorch/pytorch/issues/144775#issuecomment-2611699385
We fixed some silent incorrectness, but it results in fewer nodes being DCE'd. The benchmark iteration loop had some dead code which could contain side-effect ops that aren't safe to DCE. The regression is expected.
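As a toy illustration (hypothetical node representation, not dynamo's IR), dead-code elimination must keep a node with side effects even when its output is unused:

```python
def dce(nodes, live_outputs):
    """Drop nodes whose results are unused, unless they have side effects."""
    used = set(live_outputs)
    kept = []
    for name, inputs, has_side_effect in reversed(nodes):
        if name in used or has_side_effect:
            kept.append((name, inputs, has_side_effect))
            used.update(inputs)  # inputs of a kept node become live
    return kept[::-1]

nodes = [
    ("a", [], False),
    ("b", ["a"], False),  # unused and pure: safe to eliminate
    ("c", ["a"], True),   # unused output, but mutates state: must be kept
]
assert dce(nodes, ["a"]) == [("a", [], False), ("c", ["a"], True)]
```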
This PR removes the compile time benchmarking of the dead code, which should reduce the noise of the benchmark and aligns with the benchmarking used by performance tests
New benchmark results:
```python
dev,name,batch_size,accuracy,calls_captured,unique_graphs,graph_breaks,unique_graph_breaks,autograd_captures,autograd_compiles,cudagraph_skips,compilation_latency
cuda,BartForConditionalGeneration,1,pass,897,1,0,0,0,0,0,39.322364 # after https://github.com/pytorch/pytorch/pull/144319
cuda,BartForConditionalGeneration,1,pass,897,1,0,0,0,0,0,38.972257 # before https://github.com/pytorch/pytorch/pull/144319
```
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @chauhang @amjames | true |
2,808,564,725 | Open up PT UTs to cover additional devices | ankurneog | open | [
"triaged",
"open source",
"Stale",
"topic: not user facing",
"module: inductor"
] | 15 | CONTRIBUTOR | This is a follow-up of https://github.com/pytorch/pytorch/pull/128584, covering additional files for execution.
Based on further discussion with the reviewers, it was decided to remove the ```onlyNativeDeviceTypes``` decorator to open these tests up to all devices.
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @ColinPeppler @amjames @desertfire @chauhang @aakhundov | true |
2,808,546,403 | [Custom Ops] Add a new API to allow users to register an autocast for the custom op | yanboliang | closed | [
"module: custom-operators",
"Merged",
"ciflow/trunk",
"topic: not user facing"
] | 1 | CONTRIBUTOR | Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #145673
* __->__ #145588
Fixes #137033
| true |
2,808,480,832 | [dynamo] mark_dynamic not working as intended with input shapes | shreyansh26 | closed | [
"triaged",
"oncall: pt2",
"module: dynamic shapes",
"module: aotdispatch"
] | 3 | NONE | ### 🐛 Describe the bug
```python
import torch
def fn(a, b):
return a.shape[0] * a * b
arg1 = torch.randn(4, 3)
torch._dynamo.mark_dynamic(arg1, 0, min=2, max=10)
compiled_fn = torch.compile(fn)
out = compiled_fn(arg1, torch.randn(4, 3))
new_arg1 = torch.randn(8, 3)
out = compiled_fn(new_arg1, torch.randn(8, 3))
```
Here, even after marking the first dimension as dynamic, running the script gives an error showing that the symbolic int was replaced with a constant (= 4). The issue also occurs with `torch.compile(fn, backend="eager")`.
### Error logs
```
I0123 20:39:08.545000 2426289 site-packages/torch/fx/experimental/symbolic_shapes.py:3557] [0/0] create_symbol s0 = 4 for L['a'].size()[0] [2, 10] at nt/ssd1/shreyansh/home_dir/misc_experiments/pytorch_internals/dynamo/mark_dynamic.py:6 in fn (_dynamo/variables/builder.py:2710 in <lambda>), for more info run with TORCHDYNAMO_EXTENDED_DEBUG_CREATE_SYMBOL="s0"
I0123 20:39:08.556000 2426289 site-packages/torch/fx/experimental/symbolic_shapes.py:4857] [0/0] set_replacement s0 = 4 (range_refined_to_singleton) VR[4, 4]
I0123 20:39:08.557000 2426289 site-packages/torch/fx/experimental/symbolic_shapes.py:5106] [0/0] eval Eq(s0, 4) [guard added] at nt/ssd1/shreyansh/pytorch_internals/dynamo/mark_dynamic.py:6 in fn (_subclasses/fake_impls.py:785 in infer_size), for more info run with TORCHDYNAMO_EXTENDED_DEBUG_GUARD_ADDED="Eq(s0, 4)"
I0123 20:39:11.898000 2426289 site-packages/torch/fx/experimental/symbolic_shapes.py:3646] [0/0] produce_guards
E0123 20:39:11.899000 2426289 site-packages/torch/_guards.py:283] [0/0] Error while creating guard:
E0123 20:39:11.899000 2426289 site-packages/torch/_guards.py:283] [0/0] Name: ''
E0123 20:39:11.899000 2426289 site-packages/torch/_guards.py:283] [0/0] Source: shape_env
E0123 20:39:11.899000 2426289 site-packages/torch/_guards.py:283] [0/0] Create Function: SHAPE_ENV
E0123 20:39:11.899000 2426289 site-packages/torch/_guards.py:283] [0/0] Guard Types: None
E0123 20:39:11.899000 2426289 site-packages/torch/_guards.py:283] [0/0] Code List: None
E0123 20:39:11.899000 2426289 site-packages/torch/_guards.py:283] [0/0] Object Weakref: None
E0123 20:39:11.899000 2426289 site-packages/torch/_guards.py:283] [0/0] Guarded Class Weakref: None
E0123 20:39:11.899000 2426289 site-packages/torch/_guards.py:283] [0/0] Traceback (most recent call last):
E0123 20:39:11.899000 2426289 site-packages/torch/_guards.py:283] [0/0] File "/home/shreyansh/miniconda3/envs/shreyansh-env-py10-torch25/lib/python3.10/site-packages/torch/_guards.py", line 281, in create
E0123 20:39:11.899000 2426289 site-packages/torch/_guards.py:283] [0/0] return self.create_fn(builder, self)
E0123 20:39:11.899000 2426289 site-packages/torch/_guards.py:283] [0/0] File "/home/shreyansh/miniconda3/envs/shreyansh-env-py10-torch25/lib/python3.10/site-packages/torch/_dynamo/guards.py", line 1836, in SHAPE_ENV
E0123 20:39:11.899000 2426289 site-packages/torch/_guards.py:283] [0/0] guards = output_graph.shape_env.produce_guards(
E0123 20:39:11.899000 2426289 site-packages/torch/_guards.py:283] [0/0] File "/home/shreyansh/miniconda3/envs/shreyansh-env-py10-torch25/lib/python3.10/site-packages/torch/fx/experimental/symbolic_shapes.py", line 4178, in produce_guards
E0123 20:39:11.899000 2426289 site-packages/torch/_guards.py:283] [0/0] raise ConstraintViolationError(
E0123 20:39:11.899000 2426289 site-packages/torch/_guards.py:283] [0/0] torch.fx.experimental.symbolic_shapes.ConstraintViolationError: Constraints violated (L['a'].size()[0])! For more information, run with TORCH_LOGS="+dynamic".
E0123 20:39:11.899000 2426289 site-packages/torch/_guards.py:283] [0/0] - Not all values of L['a'].size()[0] = L['a'].size()[0] in the specified range L['a'].size()[0] <= 10 are valid because L['a'].size()[0] was inferred to be a constant (4).
E0123 20:39:11.900000 2426289 site-packages/torch/_guards.py:285] [0/0] Created at:
E0123 20:39:11.900000 2426289 site-packages/torch/_guards.py:285] [0/0] File "/home/shreyansh/miniconda3/envs/shreyansh-env-py10-torch25/lib/python3.10/site-packages/torch/_dynamo/convert_frame.py", line 615, in transform
E0123 20:39:11.900000 2426289 site-packages/torch/_guards.py:285] [0/0] tracer = InstructionTranslator(
E0123 20:39:11.900000 2426289 site-packages/torch/_guards.py:285] [0/0] File "/home/shreyansh/miniconda3/envs/shreyansh-env-py10-torch25/lib/python3.10/site-packages/torch/_dynamo/symbolic_convert.py", line 2670, in __init__
E0123 20:39:11.900000 2426289 site-packages/torch/_guards.py:285] [0/0] output=OutputGraph(
E0123 20:39:11.900000 2426289 site-packages/torch/_guards.py:285] [0/0] File "/home/shreyansh/miniconda3/envs/shreyansh-env-py10-torch25/lib/python3.10/site-packages/torch/_dynamo/output_graph.py", line 317, in __init__
E0123 20:39:11.900000 2426289 site-packages/torch/_guards.py:285] [0/0] self.init_ambient_guards()
E0123 20:39:11.900000 2426289 site-packages/torch/_guards.py:285] [0/0] File "/home/shreyansh/miniconda3/envs/shreyansh-env-py10-torch25/lib/python3.10/site-packages/torch/_dynamo/output_graph.py", line 463, in init_ambient_guards
E0123 20:39:11.900000 2426289 site-packages/torch/_guards.py:285] [0/0] self.guards.add(ShapeEnvSource().make_guard(GuardBuilder.SHAPE_ENV))
Traceback (most recent call last):
File "/mnt/ssd1/shreyansh/home_dir/misc_experiments/pytorch_internals/dynamo/mark_dynamic.py", line 12, in <module>
out = compiled_fn(arg1, torch.randn(4, 3))
File "/home/shreyansh/miniconda3/envs/shreyansh-env-py10-torch25/lib/python3.10/site-packages/torch/_dynamo/eval_frame.py", line 465, in _fn
return fn(*args, **kwargs)
File "/home/shreyansh/miniconda3/envs/shreyansh-env-py10-torch25/lib/python3.10/site-packages/torch/_dynamo/convert_frame.py", line 1269, in __call__
return self._torchdynamo_orig_callable(
File "/home/shreyansh/miniconda3/envs/shreyansh-env-py10-torch25/lib/python3.10/site-packages/torch/_dynamo/convert_frame.py", line 1064, in __call__
result = self._inner_convert(
File "/home/shreyansh/miniconda3/envs/shreyansh-env-py10-torch25/lib/python3.10/site-packages/torch/_dynamo/convert_frame.py", line 526, in __call__
return _compile(
File "/home/shreyansh/miniconda3/envs/shreyansh-env-py10-torch25/lib/python3.10/site-packages/torch/_dynamo/convert_frame.py", line 924, in _compile
guarded_code = compile_inner(code, one_graph, hooks, transform)
File "/home/shreyansh/miniconda3/envs/shreyansh-env-py10-torch25/lib/python3.10/site-packages/torch/_dynamo/convert_frame.py", line 666, in compile_inner
return _compile_inner(code, one_graph, hooks, transform)
File "/home/shreyansh/miniconda3/envs/shreyansh-env-py10-torch25/lib/python3.10/site-packages/torch/_utils_internal.py", line 87, in wrapper_function
return function(*args, **kwargs)
File "/home/shreyansh/miniconda3/envs/shreyansh-env-py10-torch25/lib/python3.10/site-packages/torch/_dynamo/convert_frame.py", line 796, in _compile_inner
check_fn = CheckFunctionManager(
File "/home/shreyansh/miniconda3/envs/shreyansh-env-py10-torch25/lib/python3.10/site-packages/torch/_dynamo/guards.py", line 2261, in __init__
guard.create(builder)
File "/home/shreyansh/miniconda3/envs/shreyansh-env-py10-torch25/lib/python3.10/site-packages/torch/_guards.py", line 281, in create
return self.create_fn(builder, self)
File "/home/shreyansh/miniconda3/envs/shreyansh-env-py10-torch25/lib/python3.10/site-packages/torch/_dynamo/guards.py", line 1836, in SHAPE_ENV
guards = output_graph.shape_env.produce_guards(
File "/home/shreyansh/miniconda3/envs/shreyansh-env-py10-torch25/lib/python3.10/site-packages/torch/fx/experimental/symbolic_shapes.py", line 4178, in produce_guards
raise ConstraintViolationError(
torch.fx.experimental.symbolic_shapes.ConstraintViolationError: Constraints violated (L['a'].size()[0])! For more information, run with TORCH_LOGS="+dynamic".
- Not all values of L['a'].size()[0] = L['a'].size()[0] in the specified range L['a'].size()[0] <= 10 are valid because L['a'].size()[0] was inferred to be a constant (4).
You can suppress this exception and fall back to eager by setting:
import torch._dynamo
torch._dynamo.config.suppress_errors = True
```
### Versions
Collecting environment information...
PyTorch version: 2.5.1+cu124
Is debug build: False
CUDA used to build PyTorch: 12.4
ROCM used to build PyTorch: N/A
OS: Ubuntu 22.04.4 LTS (x86_64)
GCC version: (Ubuntu 11.4.0-1ubuntu1~22.04) 11.4.0
Clang version: Could not collect
CMake version: version 3.22.1
Libc version: glibc-2.35
Python version: 3.10.15 (main, Oct 3 2024, 07:27:34) [GCC 11.2.0] (64-bit runtime)
Python platform: Linux-5.15.0-125-generic-x86_64-with-glibc2.35
Is CUDA available: True
CUDA runtime version: 12.2.140
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration:
GPU 0: NVIDIA H100 80GB HBM3
GPU 1: NVIDIA H100 80GB HBM3
GPU 2: NVIDIA H100 80GB HBM3
GPU 3: NVIDIA H100 80GB HBM3
GPU 4: NVIDIA H100 80GB HBM3
GPU 5: NVIDIA H100 80GB HBM3
GPU 6: NVIDIA H100 80GB HBM3
GPU 7: NVIDIA H100 80GB HBM3
Nvidia driver version: 535.161.08
cuDNN version: Could not collect
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 52 bits physical, 57 bits virtual
Byte Order: Little Endian
CPU(s): 384
On-line CPU(s) list: 0-383
Vendor ID: AuthenticAMD
Model name: AMD EPYC 9654 96-Core Processor
CPU family: 25
Model: 17
Thread(s) per core: 2
Core(s) per socket: 96
Socket(s): 2
Stepping: 1
Frequency boost: enabled
CPU max MHz: 3707.8120
CPU min MHz: 1500.0000
BogoMIPS: 4799.99
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ht syscall nx mmxext fxsr_opt pdpe1gb rdtscp lm constant_tsc rep_good nopl nonstop_tsc cpuid extd_apicid aperfmperf rapl pni pclmulqdq monitor ssse3 fma cx16 pcid sse4_1 sse4_2 x2apic movbe popcnt aes xsave avx f16c rdrand lahf_lm cmp_legacy svm extapic cr8_legacy abm sse4a misalignsse 3dnowprefetch osvw ibs skinit wdt tce topoext perfctr_core perfctr_nb bpext perfctr_llc mwaitx cpb cat_l3 cdp_l3 invpcid_single hw_pstate ssbd mba ibrs ibpb stibp ibrs_enhanced vmmcall fsgsbase bmi1 avx2 smep bmi2 erms invpcid cqm rdt_a avx512f avx512dq rdseed adx smap avx512ifma clflushopt clwb avx512cd sha_ni avx512bw avx512vl xsaveopt xsavec xgetbv1 xsaves cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local avx512_bf16 clzero irperf xsaveerptr rdpru wbnoinvd amd_ppin cppc arat npt lbrv svm_lock nrip_save tsc_scale vmcb_clean flushbyasid decodeassists pausefilter pfthreshold avic v_vmsave_vmload vgif v_spec_ctrl avx512vbmi umip pku ospke avx512_vbmi2 gfni vaes vpclmulqdq avx512_vnni avx512_bitalg avx512_vpopcntdq la57 rdpid overflow_recov succor smca fsrm flush_l1d
Virtualization: AMD-V
L1d cache: 6 MiB (192 instances)
L1i cache: 6 MiB (192 instances)
L2 cache: 192 MiB (192 instances)
L3 cache: 768 MiB (24 instances)
NUMA node(s): 2
NUMA node0 CPU(s): 0-95,192-287
NUMA node1 CPU(s): 96-191,288-383
Vulnerability Gather data sampling: Not affected
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Not affected
Vulnerability Reg file data sampling: Not affected
Vulnerability Retbleed: Not affected
Vulnerability Spec rstack overflow: Mitigation; safe RET
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl and seccomp
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Enhanced / Automatic IBRS; IBPB conditional; STIBP always-on; RSB filling; PBRSB-eIBRS Not affected; BHI Not affected
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Versions of relevant libraries:
[pip3] numpy==2.1.2
[pip3] nvidia-cublas-cu12==12.4.5.8
[pip3] nvidia-cuda-cupti-cu12==12.4.127
[pip3] nvidia-cuda-nvrtc-cu12==12.4.127
[pip3] nvidia-cuda-runtime-cu12==12.4.127
[pip3] nvidia-cudnn-cu12==9.1.0.70
[pip3] nvidia-cufft-cu12==11.2.1.3
[pip3] nvidia-curand-cu12==10.3.5.147
[pip3] nvidia-cusolver-cu12==11.6.1.9
[pip3] nvidia-cusparse-cu12==12.3.1.170
[pip3] nvidia-nccl-cu12==2.21.5
[pip3] nvidia-nvjitlink-cu12==12.4.127
[pip3] nvidia-nvtx-cu12==12.4.127
[pip3] torch==2.5.1
[pip3] triton==3.1.0
[conda] numpy 2.1.2 pypi_0 pypi
[conda] nvidia-cublas-cu12 12.4.5.8 pypi_0 pypi
[conda] nvidia-cuda-cupti-cu12 12.4.127 pypi_0 pypi
[conda] nvidia-cuda-nvrtc-cu12 12.4.127 pypi_0 pypi
[conda] nvidia-cuda-runtime-cu12 12.4.127 pypi_0 pypi
[conda] nvidia-cudnn-cu12 9.1.0.70 pypi_0 pypi
[conda] nvidia-cufft-cu12 11.2.1.3 pypi_0 pypi
[conda] nvidia-curand-cu12 10.3.5.147 pypi_0 pypi
[conda] nvidia-cusolver-cu12 11.6.1.9 pypi_0 pypi
[conda] nvidia-cusparse-cu12 12.3.1.170 pypi_0 pypi
[conda] nvidia-nccl-cu12 2.21.5 pypi_0 pypi
[conda] nvidia-nvjitlink-cu12 12.4.127 pypi_0 pypi
[conda] nvidia-nvtx-cu12 12.4.127 pypi_0 pypi
[conda] torch 2.5.1 pypi_0 pypi
[conda] triton 3.1.0 pypi_0 pypi
cc @chauhang @penguinwu @ezyang @bobrenjc93 @zou3519 @bdhirsh @yf225 | true |
2,808,430,027 | Revert "[compiled autograd] support Tensor Subclasses in AOTBackward (#144115)" | zou3519 | closed | [
"module: inductor",
"module: dynamo",
"ciflow/inductor",
"module: compiled autograd"
] | 1 | CONTRIBUTOR | Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* (to be filled)
This reverts commit 082c28c3c655984ce65c13336cff822db95ee470.
Reverted https://github.com/pytorch/pytorch/pull/144115 on behalf of https://github.com/izaitsevfb due to breaking internal tests T213390054 ([comment](https://github.com/pytorch/pytorch/pull/143296#issuecomment-2611224926))
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @ColinPeppler @amjames @desertfire @chauhang @aakhundov @xmfan | true |
2,808,429,978 | Revert "[compiled_autograd] Rename interface to pyinterface (#145495)" | zou3519 | closed | [] | 1 | CONTRIBUTOR | Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* (to be filled)
This reverts commit e1407f5aeb658c8c959d33158f465e975799a3d0.
Reverted https://github.com/pytorch/pytorch/pull/145495 on behalf of https://github.com/izaitsevfb due to reverted internally ([comment](https://github.com/pytorch/pytorch/pull/145495#issuecomment-2611194932)) | true |
2,808,421,962 | layer_norm_kernel.cu eliminate the need for divisions for default vector size | doru1004 | closed | [
"module: rocm",
"triaged",
"open source",
"Stale",
"release notes: cuda",
"ciflow/rocm"
] | 4 | CONTRIBUTOR | Eliminate the need for divisions in layernorm for default vector size.
The divisions performed in the online-sum section can be replaced with immediate values, as the vector size used is always 4. We special-case this and leave the alternative in place in case other vector sizes are explored in the future.
For the combine step, the divisor is always a power of two as long as the vector size used is a power of two, so we can use shuffles to perform the division.
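The power-of-two restriction is what makes the trick possible; as a rough pure-Python sketch of the underlying identity (names here are mine, not from the PR), dividing a non-negative integer by a power-of-two divisor reduces to a right shift:

```python
def div_pow2(n: int, divisor: int) -> int:
    """Divide a non-negative int by a power-of-two divisor via a shift."""
    # A power of two has exactly one bit set, so divisor & (divisor - 1) == 0.
    assert divisor > 0 and divisor & (divisor - 1) == 0, "divisor must be a power of two"
    shift = divisor.bit_length() - 1  # log2(divisor)
    return n >> shift

# Matches integer division for non-negative inputs:
assert all(div_pow2(n, 8) == n // 8 for n in range(256))
```

This is why keeping the vector size a power of two lets the kernel avoid hardware division entirely.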
cc @jeffdaily @sunway513 @jithunnair-amd @pruthvistony @ROCmSupport @dllehr-amd @jataylo @hongxiayang @naromero77amd | true |
2,808,366,233 | [inductor][4/N] triton support post-#5512, fix constexpr signatures | davidberard98 | closed | [
"Merged",
"ciflow/trunk",
"topic: bug fixes",
"module: inductor",
"ciflow/inductor",
"release notes: inductor"
] | 3 | CONTRIBUTOR | Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #145515
* __->__ #145583
Prior to this PR, constexprs were appearing in signatures as `{.. "XBLOCK : tl.constexpr": "constexpr"}` when they really should appear as `{.. "XBLOCK": "constexpr"}`.
This PR represents the argument names as ArgName objects, which can optionally be marked as constexpr.
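A minimal sketch of the idea (the names `ArgName`/`signature_entry` here are illustrative, not the exact inductor API):

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class ArgName:
    """An argument name that knows whether it is a tl.constexpr parameter."""
    name: str
    is_constexpr: bool = False

def signature_entry(arg: ArgName, arg_type: str) -> tuple[str, str]:
    # The signature key must be the bare name -- not "NAME : tl.constexpr".
    return arg.name, "constexpr" if arg.is_constexpr else arg_type

args = [ArgName("in_ptr0"), ArgName("XBLOCK", is_constexpr=True)]
sig = dict(signature_entry(a, "*fp32") for a in args)
assert sig == {"in_ptr0": "*fp32", "XBLOCK": "constexpr"}
```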
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @ColinPeppler @amjames @desertfire @chauhang @aakhundov | true |
2,808,364,301 | [CD] Fix slim-wheel cuda_nvrtc import problem | atalman | closed | [
"module: binaries",
"Merged",
"topic: not user facing",
"ciflow/binaries_wheel",
"topic: binaries"
] | 5 | CONTRIBUTOR | Similar fix as: https://github.com/pytorch/pytorch/pull/144816
Fixes: https://github.com/pytorch/pytorch/issues/145580
Found during testing of https://github.com/pytorch/pytorch/issues/138340
Please note both nvrtc and nvjitlink exist for cuda 11.8, 12.4 and 12.6, hence we can safely remove the if statement. Preloading can apply to all supported CUDA versions.
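The preloading can be sketched roughly as a best-effort `ctypes` load over candidate sonames (an illustration of the approach, not the actual `torch/__init__.py` code):

```python
import ctypes

def try_preload(sonames):
    """Return a handle for the first soname that loads, else None.

    RTLD_GLOBAL makes the symbols visible to libraries loaded later
    (e.g. cuDNN looking up libnvrtc); on a machine without CUDA every
    candidate simply fails to load and we fall through to None.
    """
    for soname in sonames:
        try:
            return ctypes.CDLL(soname, mode=ctypes.RTLD_GLOBAL)
        except OSError:
            continue
    return None

# On a CUDA machine one of these should resolve; elsewhere this returns None.
handle = try_preload(["libnvrtc.so.12", "libnvrtc.so.11.2", "libnvrtc.so"])
```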
CUDA 11.8 path:
```
(.venv) root@b4ffe5c8ac8c:/pytorch/.ci/pytorch/smoke_test# ls /.venv/lib/python3.12/site-packages/torch/lib/../../nvidia/cuda_nvrtc/lib
__init__.py __pycache__ libnvrtc-builtins.so.11.8 libnvrtc-builtins.so.12.4 libnvrtc.so.11.2 libnvrtc.so.12
(.venv) root@b4ffe5c8ac8c:/pytorch/.ci/pytorch/smoke_test# ls /.venv/lib/python3.12/site-packages/torch/lib/../../nvidia/nvjitlink/lib
__init__.py __pycache__ libnvJitLink.so.12
```
Test with rc 2.6 and CUDA 11.8:
```
python cudnn_test.py
2.6.0+cu118
---------------------------------------------SDPA-Flash---------------------------------------------
ALL GOOD
---------------------------------------------SDPA-CuDNN---------------------------------------------
ALL GOOD
```
Thank you @nWEIdia for discovering this issue
cc @seemethere @malfet @osalpekar | true |
2,808,362,403 | [MPS][BE] Implement bilineard2d as shader | malfet | closed | [
"Merged",
"topic: improvements",
"release notes: mps",
"ciflow/mps"
] | 3 | CONTRIBUTOR | Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #145581
This significantly improves performance and addresses a correctness problem (to the extent permitted by reducing the precision of the scale-factor computation to float32). The uint8 scaling algorithm mimics the CPU/Pillow implementation
https://github.com/python-pillow/Pillow/blob/569b785371aa717a004adb0166feb565bbb01b7b/src/libImaging/Resample.c#L306-L309
I.e., it uses fixed-precision integer arithmetic and rounds the results of the horizontal interpolation back to integers before performing the vertical one, which is technically less accurate.
But even with those changes, `atol`, `rtol` must be tweaked to `1, 0` when scale factor is `1/3` or `2/3` because of the difference of representation of those values as floats and doubles.
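A pure-Python sketch of the fixed-point scheme described above (`PREC` here is illustrative, not Pillow's actual precision constant):

```python
PREC = 8  # illustrative fixed-point precision

def lerp_u8(a: int, b: int, frac: float) -> int:
    """Fixed-point linear interpolation between two uint8 values.

    Weights are quantized to 1/2**PREC steps and the accumulator is
    rounded back to an integer, mirroring how the horizontal pass
    rounds before the vertical pass runs.
    """
    w1 = round(frac * (1 << PREC))
    w0 = (1 << PREC) - w1
    acc = a * w0 + b * w1 + (1 << (PREC - 1))  # +half for round-to-nearest
    return acc >> PREC

assert lerp_u8(0, 255, 0.0) == 0
assert lerp_u8(0, 255, 1.0) == 255
```

Rounding between the two passes is where the extra (intentional, Pillow-compatible) inaccuracy comes from.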
Changes in the performance could be measured using the following script
```python
import torch
import time
import subprocess
def benchmark(device, dtype):
# Create example inputs
x = torch.testing.make_tensor(1, 1, 2048, 2048, device=device, dtype=dtype)
sf = .5
# Check output
y = torch.nn.functional.interpolate(x, scale_factor=sf, mode="bilinear")
z = torch.nn.functional.interpolate(x.cpu(), scale_factor=sf, mode="bilinear")
outputs_match = torch.allclose(y.cpu(), z)
if not outputs_match:
atol = (y.cpu() - z).abs().max()
rtol = ((y.cpu() - z)[z!=0]/z[z!=0]).abs().max()
print(f"atol={atol} rtol={rtol}")
# Measure time manually
start_time = time.time() * 1000
for _ in range(1000):
y = torch.nn.functional.interpolate(x, scale_factor=sf, mode="bilinear")
        torch.mps.synchronize()
end_time = time.time() * 1000
manual_delta = (end_time - start_time)
average_time = f"{manual_delta:6.1f}"
return "True " if outputs_match else "False", average_time
outputs_match_list = []
average_time_list = []
for device in ["mps", "cpu"]:
for dtype in [torch.float32, torch.float16, torch.bfloat16, torch.uint8]:
outputs_match, average_time = benchmark(device, dtype)
outputs_match_list.append(str(outputs_match))
average_time_list.append(average_time)
brand_string = subprocess.check_output(['sysctl', '-n', 'machdep.cpu.brand_string']).decode("utf-8").strip()
print(f"\nBenchmarking Results (collected on {brand_string}):")
print("-"*40)
print("Device : MPS | CPU")
print("Dtype : FP32 | FP16 | BF16 | U8 | FP32 | FP16 | BF16 | U8")
print(f"Outputs Match : ", " | ".join(outputs_match_list))
print(f"Average Time (us) :", " |".join(average_time_list))
```
Benchmark results before
```
Benchmarking Results (collected on Apple M4 Pro):
----------------------------------------
Device : MPS | CPU
Dtype : FP32 | FP16 | BF16 | U8 | FP32 | FP16 | BF16 | U8
Outputs Match : True | True | True | False | True | True | True | True
Average Time (us) : 277.3 | 197.2 | 188.0 | 163.5 | 302.8 | 248.1 | 308.7 | 650.9
```
After (almost **100x** perf gain):
```
Benchmarking Results (collected on Apple M4 Pro):
----------------------------------------
Device : MPS | CPU
Dtype : FP32 | FP16 | BF16 | U8 | FP32 | FP16 | BF16 | U8
Outputs Match : True | True | True | True | True | True | True | True
Average Time (us) : 1.7 | 1.5 | 1.7 | 1.5 | 296.5 | 236.0 | 310.8 | 642.6
``` | true |
2,808,360,780 | torch crashes on ubuntu:24.04 during SDPA-CuDNN test | atalman | closed | [
"module: cudnn",
"triaged",
"module: sdpa"
] | 2 | CONTRIBUTOR | ### 🐛 Describe the bug
Please see issue https://github.com/pytorch/pytorch/issues/138340; this happens with the cu118, cu124 and cu126 binaries.
Test:
```
import torch
import torch.nn as nn
import torch.nn.functional as F
from dataclasses import dataclass
from torch.nn.attention import bias, sdpa_kernel, SDPBackend
@dataclass
class Config:
n_embd: int = 512
n_head: int = 8
n_layer: int = 6
n_ctx: int = 2048
bias: bool = False
class CausalSelfAttention(nn.Module):
def __init__(self, config):
super().__init__()
assert config.n_embd % config.n_head == 0
# key, query, value projections for all heads, but in a batch
self.c_attn = nn.Linear(config.n_embd, 3 * config.n_embd, bias=config.bias)
# output projection
self.c_proj = nn.Linear(config.n_embd, config.n_embd, bias=config.bias)
self.n_head = config.n_head
self.n_embd = config.n_embd
def forward(self, x):
B, T, C = x.size() # batch size, sequence length, embedding dimensionality (n_embd)
q, k, v = self.c_attn(x).split(self.n_embd, dim=2)
k = k.view(B, T, self.n_head, C // self.n_head).transpose(1, 2) # (B, nh, T, hs)
q = q.view(B, T, self.n_head, C // self.n_head).transpose(1, 2) # (B, nh, T, hs)
v = v.view(B, T, self.n_head, C // self.n_head).transpose(1, 2) # (B, nh, T, hs)
y = F.scaled_dot_product_attention(q, k, v, attn_mask=None, is_causal=True)
# HERE, WE NEED THIS CONTIGUOUS TO BE A NO-OP
# y = y.transpose(1, 2).contiguous().view(B, T, C)
y = y.transpose(1, 2).view(B, T, C)
y = self.c_proj(y)
return y
def test_attention(backend: SDPBackend):
config = Config()
Attention = CausalSelfAttention(config).to("cuda", dtype=torch.float16)
sample_input = torch.randn(1, 2048, config.n_embd, device="cuda", dtype = torch.float16)
with sdpa_kernel(backend):
try:
out = Attention(sample_input)
print("ALL GOOD")
except RuntimeError as e:
print("❗ NOT GOOD ❗")
print(e)
if __name__ == "__main__":
width = 100
print("SDPA-Flash".center(width, "-"))
test_attention(SDPBackend.FLASH_ATTENTION)
print("SDPA-CuDNN".center(width, "-"))
test_attention(SDPBackend.CUDNN_ATTENTION)
```
Observing crash like this:
```
(.venv) root@b4ffe5c8ac8c:/pytorch/.ci/pytorch/smoke_test# python3 cudnn_test.py
---------------------------------------------SDPA-Flash---------------------------------------------
ALL GOOD
---------------------------------------------SDPA-CuDNN---------------------------------------------
Could not load library libnvrtc.so.12. Error: libnvrtc.so.12: cannot open shared object file: No such file or directory
Could not load library libnvrtc.so. Error: libnvrtc.so: cannot open shared object file: No such file or directory
Could not load library libnvrtc.so.12. Error: libnvrtc.so.12: cannot open shared object file: No such file or directory
Could not load library libnvrtc.so. Error: libnvrtc.so: cannot open shared object file: No such file or directory
Could not load library libnvrtc.so.12. Error: libnvrtc.so.12: cannot open shared object file: No such file or directory
Could not load library libnvrtc.so. Error: libnvrtc.so: cannot open shared object file: No such file or directory
❗ NOT GOOD ❗
cuDNN Frontend error: No valid engine configs for Matmul_MUL_GEN_INDEX_GEN_INDEX_CMP_GE_BINARY_SELECT_Reduction_SUB_EXP_Reduction_LOG_ADD_DIV_Matmul_
```
### Versions
2.6.0
cc @csarofeen @ptrblck @xwang233 @eqy | true |
2,808,341,937 | Spruce up docs for emulate_precision_casts | ezyang | closed | [
"Merged",
"ciflow/trunk",
"topic: not user facing",
"module: inductor",
"ciflow/inductor"
] | 3 | CONTRIBUTOR | Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #145579
Signed-off-by: Edward Z. Yang <ezyang@meta.com>
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @ColinPeppler @amjames @desertfire @chauhang @aakhundov | true |
2,808,324,103 | [MPS][BE] Turn `bicubic2d` into generic metal template | malfet | closed | [
"Merged",
"topic: not user facing",
"release notes: mps",
"ciflow/mps"
] | 3 | CONTRIBUTOR | Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #145581
* __->__ #145578
In preparation for more metal shaders to come | true |
2,808,316,857 | [inductor] Fix duplicate detection in _dynamic_scale_rblock | jansel | closed | [
"Merged",
"ciflow/trunk",
"topic: not user facing",
"module: inductor",
"ciflow/inductor"
] | 6 | CONTRIBUTOR | Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #142295
* #145671
* __->__ #145577
* #142026
Before this, the code was doing nothing because `Config` doesn't define `__hash__` or `__eq__` (so deduplication was based on object identity).
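The failure mode is plain Python behavior; a minimal sketch of why identity-based hashing defeats set/dict deduplication:

```python
from dataclasses import dataclass

class PlainConfig:
    """No __eq__/__hash__: hashing falls back to object identity."""
    def __init__(self, xblock):
        self.xblock = xblock

@dataclass(frozen=True)
class HashableConfig:
    """Frozen dataclass: __eq__ and __hash__ are generated from the fields."""
    xblock: int

# Two structurally identical configs are NOT deduplicated by a set...
assert len({PlainConfig(64), PlainConfig(64)}) == 2
# ...until equality/hash are defined over the config's contents.
assert len({HashableConfig(64), HashableConfig(64)}) == 1
```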
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @ColinPeppler @amjames @desertfire @chauhang @aakhundov | true |
2,808,298,623 | [Inductor][CPP] fix torch logit decomposition | leslie-fang-intel | closed | [
"open source",
"Merged",
"ciflow/trunk",
"topic: not user facing",
"module: inductor",
"ciflow/inductor"
] | 3 | COLLABORATOR | Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #145576
**Summary**
Fix issue https://github.com/pytorch/pytorch/issues/145379: the current decomposition uses `self = torch.clamp(self, lo, hi)`, which gives a wrong result when `lo` is larger than `hi` compared to the eager implementation: https://github.com/pytorch/pytorch/blob/cd68d549111a8c5d0e056bbb2922e6b37bf88841/aten/src/ATen/native/cpu/UnaryOpsKernel.cpp#L165
Align their behavior in this PR.
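In pure Python, the gap between the two clamp orderings when `lo > hi` can be sketched like this (an illustration of the semantics difference, not the actual kernels):

```python
def clamp_min_first(x, lo, hi):
    # min(max(x, lo), hi): when lo > hi the upper bound wins
    return min(max(x, lo), hi)

def clamp_max_first(x, lo, hi):
    # max(min(x, hi), lo): when lo > hi the lower bound wins
    return max(min(x, hi), lo)

# Well-ordered range: both agree.
assert clamp_min_first(0.5, 0.2, 0.8) == clamp_max_first(0.5, 0.2, 0.8) == 0.5
# lo > hi: the two orderings diverge -- exactly the disagreement being fixed.
assert clamp_min_first(0.5, 0.8, 0.2) == 0.2
assert clamp_max_first(0.5, 0.8, 0.2) == 0.8
```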
**Test Plan**
```
python -u -m pytest -s -v test/inductor/test_cpu_repro.py -k test_torch_logit
```
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @ColinPeppler @amjames @desertfire @chauhang @aakhundov | true |
2,808,297,040 | [inductor][3/N] triton support post-#5512, tt.divisibility format | davidberard98 | closed | [
"Merged",
"ciflow/trunk",
"topic: bug fixes",
"module: inductor",
"ciflow/inductor",
"release notes: inductor"
] | 8 | CONTRIBUTOR | Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #145515
* #145583
* __->__ #145575
1. Fix the tt.divisibility format in hints.py. Previously, it was `{((0,), (1,)): [["tt.divisibility", 16]]}`. Now it is `{(0,): [["tt.divisibility", 16]], (1,): [["tt.divisibility", 16]]}`. This was an oversight in the first PR I added. I've verified that we now get `{ tt.divisibility = 16 }` in the generated TTGIR.
2. Update the test_codegen_triton.py test to work with multiple triton versions (and test this divisibility format in the new triton version)
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @ColinPeppler @amjames @desertfire @chauhang @aakhundov | true |
2,808,291,099 | [compile / strict export] torch._dynamo.exc.Unsupported: CollectiveFunctionRewriteVariable can't support async_op=True for <function all_reduce at 0x7f40be5724d0> | henrylhtsang | open | [
"oncall: pt2",
"export-triaged",
"oncall: export"
] | 1 | CONTRIBUTOR | ### 🐛 Describe the bug
Ran into this problem when trying `all_reduce` with `async_op=True`.
```
import logging
import os
import torch
import torch.distributed as dist
import torch.multiprocessing as mp
from torch.export import export
def run(rank, size):
mod = Foo()
inps = (torch.randn(4, 4),)
mod(*inps)
# comment torch.compile to check export
torch.compile(mod, fullgraph=True)(*inps)
if rank == 0:
ep = export(mod, inps)
print(ep)
def init_process(rank, size, fn, backend="gloo"):
"""Initialize the distributed environment."""
os.environ["MASTER_ADDR"] = "127.0.0.1"
os.environ["MASTER_PORT"] = "29500"
torch.cuda.set_device(rank)
dist.init_process_group(backend, rank=rank, world_size=size)
fn(rank, size)
class Foo(torch.nn.Module):
def __init__(self):
super().__init__()
self.linear = torch.nn.Linear(4, 3)
def forward(self, x):
y = self.linear(x).abs().clamp(max=1.0) * 2
work = dist.all_reduce(y, async_op=True)
work.wait()
return y
def main() -> None:
world_size = 2
processes = []
mp.set_start_method("spawn")
for rank in range(world_size):
p = mp.Process(target=init_process, args=(rank, world_size, run))
p.start()
processes.append(p)
for p in processes:
p.join()
if __name__ == "__main__":
main()
```
Thanks @pianpwk for his example.
### Versions
trunk
cc @chauhang @penguinwu @avikchaudhuri @gmagogsfm @zhxchen17 @tugsbayasgalan @angelayi @suo @ydwu4 | true |
2,808,288,498 | [inductor/profiler] add kernel kwargs instrumentation | briancoutinho | closed | [
"Merged",
"Reverted",
"ciflow/trunk",
"topic: not user facing",
"module: inductor",
"ciflow/inductor",
"ci-no-td"
] | 20 | CONTRIBUTOR | ## About
As above, record the kernel launch kwargs. These tend to be constexpr arguments to Triton kernels, like block size etc.
## Test program
Note, install triton before proceeding (pip install triton)
triton_test.py>>>
```
import torch
from torch.profiler import profile, ProfilerActivity
def foo(x, y):
a = torch.sin(x)
b = torch.cos(y)
return a + b
def main():
x = torch.randn(10, 10).cuda()
y = torch.randn(10, 10).cuda()
opt_foo = torch.compile(foo)
z = opt_foo(x, y)
# Profile the kernel function on the GPU
with profile(
activities=[ProfilerActivity.CPU, ProfilerActivity.CUDA], record_shapes=True
) as prof:
z = opt_foo(x, y)
# Export the trace to a file
prof.export_chrome_trace("my_kernel_trace.json")
if __name__ == "__main__":
main()
```
Run it and we should get a trace file my_kernel_trace.json
The output has a triton event with the `kernel_kwargs` attribute.
```
{
"ph": "X", "cat": "cpu_op", "name": "triton_poi_fused_add_cos_sin_0", "pid": 2480815, "tid": 2480815,
"ts": 2045246693014.959, "dur": 75.662,
"args": {
...
"kernel_backend": "triton",
"num_warps": 4,
"kernel_kwargs": "XBLOCK=128", "num_stages": 1, "grid": "grid(100,)",
"kernel_file": "/tmp/torchinductor_bcoutinho/ow/cowpmkdpla4qfqj6jupnq4d7og7iz7eeb5wergubivubxd4xapor.py",
"kernel_hash": "cowpmkdpla4qfqj6jupnq4d7og7iz7eeb5wergubivubxd4xapor"
}
},
```
## Unit Test
Updated unit test:
```
pytest test/inductor/test_profiler.py -k test_pt2_triton_attributes
```
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @ColinPeppler @amjames @desertfire @chauhang @aakhundov | true |
2,808,272,775 | [BE] mv test/inductor_skips/* to test/inductor_expected_failures/ | masnesral | closed | [
"Merged",
"ciflow/trunk",
"topic: not user facing",
"ciflow/inductor"
] | 3 | CONTRIBUTOR | Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #145572
Summary: I think skipping these tests is suboptimal. If we categorize as expected failures, then we'll see test failures when they start passing, which means they're more likely to be removed. As a skip, they quietly continue to skip. | true |
2,808,262,045 | [BE] Automate update stable_cuda version so that we can set it when introducing new cuda version | atalman | closed | [
"module: cuda",
"module: ci",
"triaged",
"topic: binaries"
] | 1 | CONTRIBUTOR | ### 🐛 Describe the bug
Currently we pin the stable CUDA version here:
https://github.com/pytorch/pytorch/blob/main/.github/scripts/generate_binary_build_matrix.py#L419
https://github.com/pytorch/pytorch/blob/main/.github/scripts/generate_binary_build_matrix.py#L376
https://github.com/pytorch/pytorch/blob/main/.github/workflows/docker-release.yml#L156
etc...
I would like to add a `CUDA_STABLE` variable to https://github.com/pytorch/pytorch/blob/main/.github/scripts/generate_binary_build_matrix.py so that it can be set once during a new CUDA version update and reused across the project.
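A minimal sketch of the proposal (the variable and helper names here are hypothetical, not the actual script's API):

```python
# Set once per CUDA version bump...
CUDA_STABLE = "12.4"
CUDA_ARCHES = ["11.8", "12.4", "12.6"]

def docker_image(cuda_version: str) -> str:
    """...and reused wherever the stable version is special-cased."""
    suffix = "" if cuda_version == CUDA_STABLE else f"-cuda{cuda_version}"
    return f"pytorch/pytorch{suffix}"

assert docker_image("12.4") == "pytorch/pytorch"
assert docker_image("12.6") == "pytorch/pytorch-cuda12.6"
```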
### Versions
2.7.0
cc @ptrblck @msaroufim @eqy @seemethere @malfet @pytorch/pytorch-dev-infra | true |
2,808,250,448 | Enable CUDA 12.8.0, Disable CUDA 12.4 | tinglvv | open | [
"module: cuda",
"triaged"
] | 13 | COLLABORATOR | ### 🚀 The feature, motivation and pitch
CUDA 12.8.0 is out; adding it to CI/CD.
Docker Images & Windows AMI Update
- [x] https://github.com/pytorch/pytorch/pull/145567
- [x] https://github.com/pytorch/pytorch/pull/145789
- [x] Magma build - https://github.com/pytorch/pytorch/pull/145765
- [x] https://github.com/pytorch/pytorch/pull/146019
- [x] Windows AMI - https://github.com/pytorch/test-infra/pull/6243
- [x] Windows magma build - https://github.com/pytorch/pytorch/pull/146653
- [x] https://github.com/pytorch/pytorch/pull/146906
CD Update
- [x] https://github.com/pytorch/pytorch/pull/145792
- [x] https://github.com/pytorch/pytorch/pull/147037
- [x] https://github.com/pytorch/pytorch/pull/146265
- [x] https://github.com/pytorch/test-infra/pull/6244
- [x] https://github.com/pytorch/pytorch/pull/146378
- [x] https://github.com/pytorch/test-infra/pull/6257
- [x] https://github.com/pytorch/test-infra/pull/6273
- [x] https://github.com/pytorch/pytorch/pull/146957
- [x] https://github.com/pytorch/pytorch/pull/146073
- [x] https://github.com/pytorch/test-infra/pull/6308
- [x] https://github.com/pytorch/pytorch/pull/147607
- [x] https://github.com/pytorch/pytorch/pull/148465
- [x] https://github.com/pytorch/pytorch/pull/148963
- [ ] https://github.com/pytorch/audio/issues/3877
CUDA 12.4 deprecation and CUDA 12.6 CI benchmarks
- [x] https://github.com/pytorch/test-infra/pull/6333
- [x] https://github.com/pytorch/pytorch/pull/148895
- [x] https://github.com/pytorch/pytorch/pull/148602
- [x] https://github.com/pytorch/pytorch/pull/148612
- [ ] https://github.com/pytorch/pytorch/issues/148699
### Alternatives
_No response_
### Additional context
_No response_
cc @atalman @malfet @ptrblck @msaroufim @eqy @nWEIdia | true |
2,808,245,753 | [aotinductor] update unbacked symint runtime assertion msg | ColinPeppler | closed | [
"Merged",
"ciflow/trunk",
"topic: not user facing",
"module: inductor",
"ciflow/inductor"
] | 6 | CONTRIBUTOR | Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #145569
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @amjames @desertfire @chauhang @aakhundov | true |
2,808,244,087 | [mps] Hoist erfinv logic out of the kernel in preparation for moving. | dcci | closed | [
"Merged",
"topic: not user facing",
"module: mps",
"ciflow/mps",
"module: inductor"
] | 4 | MEMBER | Will be used in inductor.
cc @kulinseth @albanD @malfet @DenisVieriu97 @jhavukainen @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @ColinPeppler @amjames @desertfire @chauhang @aakhundov | true |
2,808,240,467 | Add CUDA 12.8 installation and manylinux-cuda12.8 | tinglvv | closed | [
"triaged",
"open source",
"Merged",
"ciflow/trunk",
"topic: not user facing"
] | 19 | COLLABORATOR | Breaking https://github.com/pytorch/pytorch/pull/145557 into two parts.
Need to have manylinux-cuda12.8 in order to build magma.
Issue: https://github.com/pytorch/pytorch/issues/145570
cc @atalman @malfet @ptrblck @nWEIdia
| true |
2,808,233,453 | Advance docker release latest version to cuda 12.4 | atalman | closed | [
"Merged",
"topic: not user facing"
] | 3 | CONTRIBUTOR | Fixed the `latest` tag on ghcr.io to point to the CUDA 12.4 docker image. TODO: add this to https://github.com/pytorch/builder/blob/main/CUDA_UPGRADE_GUIDE.MD
Will need to check whether we can automate this by introducing a `cuda_stable` variable or something similar. | true |
2,808,233,357 | Refactor fuzzer and add support for Dynamo | exclamaforte | closed | [
"Merged",
"ciflow/trunk",
"topic: improvements",
"module: inductor",
"ciflow/inductor",
"release notes: inductor"
] | 16 | CONTRIBUTOR | ## Summary:
Dynamo now works with config fuzzer.
For BE week, we also found and fixed 5 different bugs (in inductor):
- https://github.com/pytorch/pytorch/pull/145426
- https://github.com/pytorch/pytorch/pull/145523
- https://github.com/pytorch/pytorch/pull/145527
- https://github.com/pytorch/pytorch/pull/145532
- https://github.com/pytorch/pytorch/pull/145538
## Test Plan:
New Dynamo Unit tests
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @ColinPeppler @amjames @desertfire @chauhang @aakhundov | true |
2,808,233,008 | [dynamo] Dynamo doesn't prune dead input cell object | StrongerXi | closed | [
"triaged",
"oncall: pt2",
"dynamo-triage-jan2025"
] | 0 | CONTRIBUTOR | ### 🐛 Describe the bug
As title. This is a minimal repro for something @yifuwang ran into.
```python
import torch
@torch.compile(fullgraph=True, backend="eager")
def f(x):
x = x.cos()
def inner():
return x.sin()
return inner()
f(torch.ones(10))
```
Running the above with `TORCH_LOGS="graph_code"` gives
```
V0123 16:23:43.105702 1322915 torch/_dynamo/output_graph.py:1353] [0/0] [__graph_code] TRACED GRAPH
V0123 16:23:43.105702 1322915 torch/_dynamo/output_graph.py:1353] [0/0] [__graph_code] ===== __compiled_fn_1 =====
V0123 16:23:43.105702 1322915 torch/_dynamo/output_graph.py:1353] [0/0] [__graph_code] /home/ryanguo99/repos/pytorch-39/torch/fx/_lazy_graph_module.py class GraphModule(torch.nn.Module):
V0123 16:23:43.105702 1322915 torch/_dynamo/output_graph.py:1353] [0/0] [__graph_code] def forward(self, L_x_: "f32[10][1]cpu"):
V0123 16:23:43.105702 1322915 torch/_dynamo/output_graph.py:1353] [0/0] [__graph_code] l_x_ = L_x_
V0123 16:23:43.105702 1322915 torch/_dynamo/output_graph.py:1353] [0/0] [__graph_code]
V0123 16:23:43.105702 1322915 torch/_dynamo/output_graph.py:1353] [0/0] [__graph_code] # File: /home/ryanguo99/scratch/test-dict-cond.py:5 in f, code: x = x.cos()
V0123 16:23:43.105702 1322915 torch/_dynamo/output_graph.py:1353] [0/0] [__graph_code] x: "f32[10][1]cpu" = l_x_.cos(); l_x_ = None
V0123 16:23:43.105702 1322915 torch/_dynamo/output_graph.py:1353] [0/0] [__graph_code]
V0123 16:23:43.105702 1322915 torch/_dynamo/output_graph.py:1353] [0/0] [__graph_code] # File: /home/ryanguo99/scratch/test-dict-cond.py:7 in inner, code: return x.sin()
V0123 16:23:43.105702 1322915 torch/_dynamo/output_graph.py:1353] [0/0] [__graph_code] sin: "f32[10][1]cpu" = x.sin()
V0123 16:23:43.105702 1322915 torch/_dynamo/output_graph.py:1353] [0/0] [__graph_code] return (sin, x)
V0123 16:23:43.105702 1322915 torch/_dynamo/output_graph.py:1353] [0/0] [__graph_code]
V0123 16:23:43.105702 1322915 torch/_dynamo/output_graph.py:1353] [0/0] [__graph_code]
```
Note that `x` doesn't really need to be returned; it's returned because Dynamo conservatively thinks `x` could still be used somewhere else. This is a very special case that only shows up under the interaction of root-frame input, cell objects, and mutation to the cell objects.
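The conservatism is about lifetime: while the closure cell is alive, anything it captures stays reachable. A small sketch (plain CPython behavior, independent of Dynamo):

```python
import weakref

class Payload:
    pass

def make_closure():
    x = Payload()
    def inner():
        return x  # x is captured in a closure cell
    return inner, weakref.ref(x)

inner, ref = make_closure()
assert ref() is not None                            # the cell keeps x alive
assert inner.__closure__[0].cell_contents is ref()  # via inner.__closure__
del inner                                           # dropping the closure...
assert ref() is None                                # ...lets x be collected
```

Dynamo returning `x` mirrors this: until it can prove the cell is dead, the captured value must be kept live.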
### Error logs
_No response_
### Versions
Python 3.12.5, main 5fd881a5b67
cc @chauhang @penguinwu | true |
2,808,202,573 | Unable to build pytorch after #143806 | zou3519 | closed | [
"module: ci",
"module: tests",
"triaged",
"module: infra",
"module: testing"
] | 2 | CONTRIBUTOR | cc @seemethere @malfet @pytorch/pytorch-dev-infra @mruberry @ZainRizvi @cyyever @kwen2501
Getting the following build error after #143806
```
/home/rzou/dev/ocu11/pt-ocu11/torch/csrc/distributed/c10d/FileStore.cpp:103:9: error: ‘c10d::{anonymous}::Lock& c10d::{anonymous}::Lock::operator=(const c10d::{anonymous}::Lock&)’ cannot be overloaded with ‘c10d::{anonymous}::Lock& c10d::{anonymous}::Lock::operator=(const c10d::{anonymous}::Lock&)’
  103 | Lock& operator=(const Lock& other) = delete;
      |       ^~~~~~~~
/home/rzou/dev/ocu11/pt-ocu11/torch/csrc/distributed/c10d/FileStore.cpp:101:9: note: previous declaration ‘c10d::{anonymous}::Lock& c10d::{anonymous}::Lock::operator=(const c10d::{anonymous}::Lock&)’
  101 | Lock& operator=(const Lock&) = delete;
      |       ^~~~~~~~
[4116/6740] Building CXX object caffe2/CMakeFiles/to...cpu.dir/__/torch/csrc/jit/serialization/export.cpp.o
```
I'm running gcc 11.5.0, which seems recent enough. | true |
2,808,201,571 | [Not for land] hacking up mx | drisspg | closed | [
"ciflow/inductor"
] | 3 | CONTRIBUTOR | Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #145562
https://www.internalfb.com/intern/paste/P1717686991/
| true |
2,808,199,250 | Confusing as_storage_and_layout(x, want_contiguous=True) behavior | eellison | closed | [
"triaged",
"oncall: pt2",
"module: inductor",
"internal ramp-up task"
] | 0 | CONTRIBUTOR | ### 🐛 Describe the bug
The following two invocations are not equivalent:
```
x = ExternKernel.require_contiguous(x)
storage, old_layout = as_storage_and_layout(x, want_contiguous=True)
```
and
```
x = ExternKernel.realize_input(x)
storage, old_layout = as_storage_and_layout(x, want_contiguous=True)
```
This is because `as_storage_and_layout(x, want_contiguous=True)` will not behave well with `ReinterpretView`.
See:
```
# making the base of x contiguous or stride_ordered will not necessarily make
# the ReinterpretView either, so don't pass along those arguments
```
The comment comes from me (oops). This was two years ago and I forget some of the context, but I think we were making unnecessary clones.
We should be able to translate what strides the base of `x` would require for the output ReinterpretView to be contiguous and then fix the layout.
### Versions
master
cc @chauhang @penguinwu @voznesenskym @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @ColinPeppler @amjames @desertfire @aakhundov | true |
2,808,191,048 | Error `RuntimeError: CUDA error: no kernel image is available for execution on the device` when doing `!=` operation on Jetson orin agx. | nickeisenberg | open | [
"module: cuda",
"triaged"
] | 2 | NONE | ### 🐛 Describe the bug
I am using Jetpack 6.2 with cuda12.4 on a Jetson orin agx developer kit. I am able to put tensors to the device, but for some reason I am getting an error when trying to do the `!=` operation.
I am using python 3.11 and I installed torch with the following
```bash
pip3 install --pre torch --index-url https://download.pytorch.org/whl/nightly/cu124
```
If I run `nvcc --version` I get the following
```bash
eisenbnt@ubuntu:~$ nvcc --version
nvcc: NVIDIA (R) Cuda compiler driver
Copyright (c) 2005-2024 NVIDIA Corporation
Built on Tue_Feb_27_16:18:46_PST_2024
Cuda compilation tools, release 12.4, V12.4.99
Build cuda_12.4.r12.4/compiler.33961263_0
```
Here is some python that gives the error.
```python
>>> import torch
>>> x = torch.tensor([0, 1, 1]).to(0)
>>> print(x.device)
cuda:0
>>> x != 0
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
RuntimeError: CUDA error: no kernel image is available for execution on the device
CUDA kernel errors might be asynchronously reported at some other API call, so the stacktrace below might be incorrect.
For debugging consider passing CUDA_LAUNCH_BLOCKING=1
Compile with `TORCH_USE_CUDA_DSA` to enable device-side assertions.
>>> x != torch.tensor(0).to(0)
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
RuntimeError: CUDA error: no kernel image is available for execution on the device
CUDA kernel errors might be asynchronously reported at some other API call, so the stacktrace below might be incorrect.
For debugging consider passing CUDA_LAUNCH_BLOCKING=1
Compile with `TORCH_USE_CUDA_DSA` to enable device-side assertions.
>>> x != torch.tensor([0, 0, 0]).to(0)
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
RuntimeError: CUDA error: no kernel image is available for execution on the device
CUDA kernel errors might be asynchronously reported at some other API call, so the stacktrace below might be incorrect.
For debugging consider passing CUDA_LAUNCH_BLOCKING=1
Compile with `TORCH_USE_CUDA_DSA` to enable device-side assertions.
>>> torch.version.cuda
'12.4'
>>> torch.cuda.is_available()
True
```
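One way to confirm a mismatch between the wheel's compiled architectures and the GPU (a diagnostic sketch, not a fix; Orin AGX is sm_87):

```python
import torch

# Compare the archs this build was compiled for against the device's compute
# capability; a missing/incompatible arch yields
# "no kernel image is available for execution on the device".
print(torch.cuda.get_arch_list())                 # archs baked into this wheel
if torch.cuda.is_available():
    print(torch.cuda.get_device_capability(0))    # (8, 7) on Orin AGX
```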
cc @ptrblck @msaroufim @eqy | true |
2,808,167,068 | [dynamo][builtin-skipfile-cleanup] Remove collections | anijain2305 | closed | [
"Stale",
"ciflow/trunk",
"topic: not user facing",
"module: dynamo",
"ciflow/inductor",
"keep-going"
] | 2 | CONTRIBUTOR | Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #145559
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @chauhang @amjames | true |
2,808,165,711 | [dynamo][builtin-skipfile-cleanup] Support tuple.__new__ | anijain2305 | closed | [
"Merged",
"ciflow/trunk",
"topic: not user facing",
"module: dynamo",
"ciflow/inductor"
] | 2 | CONTRIBUTOR | Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #145559
* #145753
* #145744
* #145723
* __->__ #145558
* #145547
* #145519
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @chauhang @amjames | true |
2,808,164,601 | Add CUDA 12.8 installation and Linux CD Docker images | tinglvv | closed | [
"triaged",
"open source",
"topic: not user facing"
] | 4 | COLLABORATOR | Add CUDA 12.8 installation script
Add Magma for CUDA 12.8
Add to Linux sbsa/x86 manywheel and libtorch Dockers
TODO: Update cuDNN to 9.7.0 once it is available
| true |
2,808,122,439 | Docs fonts are bold on Mac, in 2.7 | ad8e | closed | [
"module: docs",
"triaged"
] | 2 | CONTRIBUTOR | ### 📚 The doc issue
Fonts have changed. The all-bold text is a bit distracting.
Fonts look normal on Linux and on 2.5, but are bold on Mac + 2.7.
The blue highlight demonstrates the active font associated to the text.
Firefox Macbook, Stable (2.5):
<img width="1393" alt="Image" src="https://github.com/user-attachments/assets/e0d278a4-eacc-4d2b-afcf-f0f0f94bf5e8" />
Firefox Macbook, Main (2.7):
<img width="1393" alt="Image" src="https://github.com/user-attachments/assets/a8e339c4-48f0-44ab-9c9b-6034a60e7435" />
Firefox Linux, Main (2.7):

### Suggest a potential alternative/fix
Up to you if the bold text is an intentional design choice.
cc @svekars @brycebortree @sekyondaMeta @AlannaBurke | true |
2,808,112,142 | need to document `FlopCounterMode` | stas00 | open | [
"module: docs",
"triaged"
] | 1 | CONTRIBUTOR | ### 📚 The doc issue
`FlopCounterMode` as demo'ed here https://gist.github.com/Chillee/07b36672a0ca2d1280e42b8d10f23174 needs to be documented please.
And thank you!
cc: @Chillee
cc @svekars @brycebortree @sekyondaMeta @AlannaBurke | true |
2,808,108,991 | Fix dynamo use of `list[int]` in graph break | aorenste | closed | [
"Merged",
"ciflow/trunk",
"topic: not user facing",
"module: dynamo",
"ciflow/inductor"
] | 6 | CONTRIBUTOR | This reintroduces the change backed out by #145393 and fixes the underlying problem.
Although using a BuiltinVariable was better than nothing when we saw a GenericAlias, it had problems if there was a graph break and we had to reconstruct the original Python code, which BuiltinVariable did as a simple `list` instead of a `list[int]`.
This changes it to use a TypingVariable instead and then teaches TypingVariable how to reconstruct.
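For context, `list[int]` at runtime is a `types.GenericAlias` object distinct from the plain `list` builtin, which is why reconstructing it as a bare `list` load changes behavior (a minimal stdlib illustration, not Dynamo code):

```python
import types

alias = list[int]
# list[int] is a GenericAlias, not the `list` builtin itself
assert isinstance(alias, types.GenericAlias)
assert alias is not list
# the element type survives on the alias and is lost on a bare `list`
assert alias.__origin__ is list and alias.__args__ == (int,)
```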
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #145554
* #145553
* #145552
* #145551
Original commit changeset: 77b9193acb23
python test/dynamo/test_repros.py ReproTests.test_graph_break_on_jit_isinstance
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @chauhang @amjames
| true |
2,808,108,850 | Fix call to create_load_global | aorenste | closed | [
"Merged",
"ciflow/trunk",
"topic: not user facing",
"module: dynamo",
"ciflow/inductor"
] | 2 | CONTRIBUTOR | There is no version of create_load_global() that takes three parameters - any use of this function will fail. I think this is probably the correct fix.
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #145554
* __->__ #145553
* #145552
* #145551
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @chauhang @amjames
| true |
2,808,108,744 | Turn on mypy for _dynamo/variables/builtin.py | aorenste | closed | [
"oncall: distributed",
"Merged",
"ciflow/trunk",
"topic: not user facing",
"module: dynamo",
"ciflow/inductor"
] | 2 | CONTRIBUTOR | The fact that mypy errors were ignored was hiding several bugs in builtin.py (for example the previous diff's incorrect override and use of `call_getattr`)
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #145554
* #145553
* __->__ #145552
* #145551
cc @H-Huang @awgu @kwen2501 @wanchaol @fegin @fduwjj @wz337 @wconstab @d4l3k @c-p-i-o @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @chauhang @amjames
| true |
2,808,108,654 | Remove incorrect BuiltinVariable.call_hasattr() | aorenste | closed | [
"Merged",
"ciflow/trunk",
"topic: not user facing",
"module: dynamo",
"ciflow/inductor"
] | 9 | CONTRIBUTOR | BuiltinVariable.call_hasattr() overrides the base class - but actually behaves differently. The base is `obj.call_hasattr(tx, attr)` but BuiltinVariable's version is `<unused>.call_hasattr(tx, obj, attr)`.
The BuiltinVariable version is used as a pattern from `call_self_handler()` for `BuiltinVariable(hasattr)`. I think the other version is just used for internal `hasattr(obj, name)` so I renamed that one to `call_obj_hasattr`.
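A stripped-down illustration of the arity mismatch (class and method names mirror the description above; the bodies are placeholders):

```python
import inspect

class VariableTracker:
    def call_hasattr(self, tx, attr):          # base: obj.call_hasattr(tx, attr)
        return NotImplemented

class BuiltinVariable(VariableTracker):
    def call_hasattr(self, tx, obj, attr):     # override takes an extra `obj`
        return NotImplemented

base_params = list(inspect.signature(VariableTracker.call_hasattr).parameters)
override_params = list(inspect.signature(BuiltinVariable.call_hasattr).parameters)
assert base_params == ["self", "tx", "attr"]
assert override_params == ["self", "tx", "obj", "attr"]  # incompatible override
```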
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #145554
* #145553
* #145552
* __->__ #145551
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @chauhang @amjames
| true |
2,808,108,488 | If mypy fails it should report the error back to lintrunner | aorenste | closed | [
"Merged",
"ciflow/trunk",
"topic: not user facing"
] | 5 | CONTRIBUTOR | This happened to me because I had a bad LD_LIBRARY_PATH and mypy was failing to run (.so load error) - but lintrunner was silent about the underlying problem.
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #145554
* #145553
* #145552
* #145551
* __->__ #145550
Differential Revision: [D68593081](https://our.internmc.facebook.com/intern/diff/D68593081) | true |
2,808,093,389 | [ca] add test_reset for 2.6 release validation | xmfan | closed | [
"Merged",
"ciflow/trunk",
"topic: not user facing",
"module: inductor",
"ciflow/inductor"
] | 21 | MEMBER | Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #145549
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @ColinPeppler @amjames @desertfire @chauhang @aakhundov | true |
2,808,069,509 | fix unbacked + view incorrectness | eellison | closed | [
"Merged",
"ciflow/trunk",
"topic: not user facing",
"module: inductor",
"ciflow/inductor"
] | 22 | CONTRIBUTOR | Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #145548
fix for https://github.com/pytorch/pytorch/issues/143498
We were incorrectly using contiguous strides for a non-contiguous tensor. There are two separate causes:
1. https://github.com/pytorch/pytorch/pull/110520 made it so we turn Views contiguous with unbacked symints because
`dynamic_reshape_indexer below will fail due to the size_hint's inability to process unbacked SymInts`. Seems like we should fix that. Regardless, it will make the input contiguous if the input is unbacked to work around this.
2. We weren't actually making it contiguous! I filed an issue for this here: https://github.com/pytorch/pytorch/issues/145561.
This is still worth landing as a fix, even though we should also fix those issues.
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @ColinPeppler @amjames @desertfire @chauhang @aakhundov | true |
2,808,044,805 | [dynamo][refactor] Move collections.namedtuple out of SkipFunctionVariable | anijain2305 | closed | [
"Merged",
"ciflow/trunk",
"topic: not user facing",
"module: dynamo",
"ciflow/inductor"
] | 3 | CONTRIBUTOR | Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #145559
* #145558
* __->__ #145547
* #145519
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @chauhang @amjames | true |
2,808,035,320 | Testing #144594 | huydhn | closed | [
"module: rocm",
"release notes: releng",
"module: dynamo",
"ciflow/inductor"
] | 1 | CONTRIBUTOR | Run perf benchmark to test https://github.com/pytorch/pytorch/pull/144594. The commit on top is https://github.com/pytorch/pytorch/pull/145546/commits/a05e0ecee5c92d207fb509e25532a2a9635a3bb7
cc @jeffdaily @sunway513 @jithunnair-amd @pruthvistony @ROCmSupport @dllehr-amd @jataylo @hongxiayang @naromero77amd @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @chauhang @amjames | true |
2,808,035,160 | [cutlass backend tests] Manually clear cache, test more tests in fbcode and limit configs in some tests | henrylhtsang | closed | [
"fb-exported",
"Merged",
"ciflow/trunk",
"topic: not user facing",
"module: inductor",
"ciflow/inductor"
] | 8 | CONTRIBUTOR | Summary:
Manually clear cache:
You want to clear the cache in most tests. Otherwise the link command won't work: you end up with multiple .o files and get something like `ld.lld: error: duplicate symbol: cuda_fused_0`.
test more tests in fbcode:
A few tests have been skipped in fbcode. Unskip them.
limit configs in some tests:
to reduce time spent on each test
Differential Revision: D68584071
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @ColinPeppler @amjames @desertfire @chauhang @aakhundov | true |
2,808,020,092 | [RFC] Cuda support matrix for Release 2.7 | atalman | closed | [
"oncall: releng",
"triaged"
] | 13 | CONTRIBUTOR | Similar to : https://github.com/pytorch/pytorch/issues/138609
Opening this RFC to discuss CUDA version support for future PyTorch release 2.7:
Migration to CUDA 12.8 is planned for PyTorch Release 2.7.
Option 1 - CUDA 11.8 and CUDA 12.6 and 12.8
CUDA 11.8, CUDNN 9.1.0.70 - Same as Previous Release 2.6. No changes to CUDA 11.8 - Legacy version
CUDA 12.6 CUDNN 9.x - Version Released to Pypi - Stable version
CUDA 12.8 CUDNN 9.x - New Experimental version
Option 2
CUDA 12.6 CUDNN 9.x - Version Released to Pypi - Stable version
CUDA 12.8 CUDNN 9.x - New Experimental version
cc @ptrblck @msaroufim @malfet @nWEIdia @tinglvv @Skylion007 @albanD @ngimel
### Versions
2.7.0 | true |
2,808,011,083 | Testing #144594 | huydhn | closed | [
"module: rocm",
"release notes: releng",
"module: dynamo",
"ciflow/inductor"
] | 1 | CONTRIBUTOR | Run perf benchmark on ROCm to test https://github.com/pytorch/pytorch/pull/144594, the commit on top of that PR is https://github.com/pytorch/pytorch/pull/145546/commits/a05e0ecee5c92d207fb509e25532a2a9635a3bb7
cc @jeffdaily @sunway513 @jithunnair-amd @pruthvistony @ROCmSupport @dllehr-amd @jataylo @hongxiayang @naromero77amd @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @chauhang @amjames @BLOrange-AMD | true |
2,807,995,198 | [BE] Type annotate wrapper_benchmark.py and cuda_combined_scheduling.py | BoyuanFeng | closed | [
"Merged",
"ciflow/trunk",
"topic: not user facing",
"module: inductor",
"ciflow/inductor"
] | 3 | CONTRIBUTOR | cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @ColinPeppler @amjames @desertfire @chauhang @aakhundov | true |
2,807,991,618 | Add sm10a to cpp extensions | drisspg | closed | [
"module: cpp-extensions",
"topic: not user facing"
] | 3 | CONTRIBUTOR | Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #145541
cc @malfet @zou3519 @xmfan | true |
2,807,990,919 | [BE][hop] make it easier to use speculate_subgraph | ydwu4 | closed | [
"Stale",
"topic: not user facing",
"module: dynamo",
"ciflow/inductor"
] | 6 | CONTRIBUTOR | Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #145540
Previously, it was tricky to construct the required inputs for speculate_subgraph, as discussed in https://github.com/pytorch/pytorch/issues/144805, because we had to program against variable trackers. This PR turns speculate_subgraph into a hop and uses _make_inlined to get the subgraphs. Now HOP developers only need to program against normal tensors and use the hop speculate_subgraph to trace the subgraph.
Using the flex attention hop as an example, now we can write the sub graph tracing code with following code:
```python
@_make_inlinable
def _create_scalar(query: torch.Tensor):
return query.new_empty([], dtype=torch.int32)
@_make_inlinable
def _create_scalars(query: torch.Tensor):
return (
_create_scalar(query),
_create_scalar(query),
_create_scalar(query),
_create_scalar(query),
)
# A normal python function that works on tensors and operators
@_make_inlinable
def _fn(query: torch.Tensor, fn: Callable):
# since these return tensors are created in speculate_subgraph
# it will not affect the current graph.
score, *_ = torch.ops.higher_order.speculate_subgraph(
_create_scalar, (query,)
)
(b, h, m, n), *_ = torch.ops.higher_order.speculate_subgraph(
_create_scalars, (query,)
)
return torch.ops.higher_order.speculate_subgraph(
fn, (score, b, h, m, n)
)
# This is the driving logic, essentially, it inlines into _fn and returns whatever _fn returns
# in this case, higher_order.speculate_subgraph's output is
# (tensor_var, tree_spec_var, UserdefinedObject(fx.Graph), UserdefinedObject(parent_proxy_to_child_proxy_map))
with TransformGetItemToIndex():
(
_body_output,
_body_tree_spec,
body_graph_var,
body_lifted_freevars_var,
) = _make_inlined(tx, _fn)(query, fn).unpack_var_sequence(tx)
```
The other benefit is that we can avoid putting unnecessary tensor calls before the hop, because they can be put in speculate_subgraph, which also addresses the issue in https://github.com/pytorch/pytorch/issues/144803
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @chauhang @amjames | true |
2,807,989,092 | Add accuracy issue support in AOTI Minifier | yushangdi | closed | [
"fb-exported",
"Merged",
"ciflow/trunk",
"module: inductor",
"module: dynamo",
"ciflow/inductor",
"release notes: export"
] | 6 | CONTRIBUTOR | Summary:
Add three more repro levels for AOTI minifier (level 2 already exists). They are the same as the existing dynamo minifier repro levels.
Now the AOTI minifier can minify and repro programs that have numerical accuracy issues as well.
1: Dumps the original graph out to repro.py if compilation fails
2: Dumps a minifier_launcher.py if aoti fails.
3: Always dumps a minifier_launcher.py. Good for segfaults.
4: Dumps a minifier_launcher.py if the accuracy fails.
Refactor AOTI minifier unit tests to be cleaner and better re-use the existing minifier testing code. We do not need to manually patch {"aot_inductor.dump_aoti_minifier": True} to each test now, this config is generated in the test code.
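A sketch of enabling the dump via the config key quoted above (set directly here; in the tests it is applied as a config patch):

```python
import torch._inductor.config as inductor_config

# Turn on minifier_launcher.py dumping for AOTI failures
# (config key as quoted in the summary above).
inductor_config.aot_inductor.dump_aoti_minifier = True
```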
Differential Revision: D68294638
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @ColinPeppler @amjames @desertfire @chauhang @aakhundov | true |
2,807,977,586 | Make sure not using cpp wrapper when setting nvtx training annotation | exclamaforte | closed | [
"Merged",
"ciflow/trunk",
"topic: bug fixes",
"module: inductor",
"ciflow/inductor",
"release notes: inductor"
] | 9 | CONTRIBUTOR | Longer term would be good to add as a feature to cpp_wrapper, but this makes sure it doesn't fail on main.
Not sure if this needs a test because it's not meant to compose, but will add one if necessary.
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @ColinPeppler @amjames @desertfire @chauhang @aakhundov | true |
2,807,971,539 | [dynamo] Properly model torch profiler context objects | StrongerXi | closed | [
"Merged",
"ciflow/trunk",
"topic: not user facing",
"module: dynamo",
"ciflow/inductor",
"keep-going"
] | 8 | CONTRIBUTOR | Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #145537
Prior to this patch, Dynamo conveniently modelled torch profiler context
objects (e.g., `torch.profiler.profile`) as `NullContextVariable`
because `torch.compile` ignore the effect of these profiler contexts.
However, the semantics of these profiler contexts diverges from
`contextlib.nullcontext` in the `__enter__` function, where the former
returns `self` and the latter returns `None`. This causes subtle error
as observed in #125021.
This patch adds back a `ProfilerContextVariable`, which addresses the
aforementioned semantic discrepency.
Fixes #125021.
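The discrepancy boils down to what `__enter__` returns, which this plain-Python sketch (with a stand-in class, not the real profiler) illustrates:

```python
import contextlib

# nullcontext().__enter__() yields its enter_result (None by default), while
# profiler-style contexts yield the context object itself.
class ProfilerLike:          # stand-in for torch.profiler.profile
    def __enter__(self):
        return self
    def __exit__(self, *exc_info):
        return None

with contextlib.nullcontext() as a, ProfilerLike() as b:
    assert a is None                     # nullcontext returns None
    assert isinstance(b, ProfilerLike)   # profiler-style context returns self
```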
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @chauhang @amjames | true |
2,807,965,148 | Removes threadfence from topk kernel to improve AMD performance | ngimel | closed | [
"Merged",
"ciflow/trunk",
"release notes: cuda"
] | 5 | COLLABORATOR | Also marginally improves cuda perf
| true |
2,807,959,086 | [BE/mps] Mark input args as `constant` to prevent incorrect usage. | dcci | closed | [
"Merged",
"topic: not user facing",
"module: mps",
"ciflow/mps",
"module: inductor"
] | 3 | MEMBER |
cc @kulinseth @albanD @malfet @DenisVieriu97 @jhavukainen @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @ColinPeppler @amjames @desertfire @chauhang @aakhundov | true |
2,807,951,420 | Increase the number of perf benchmark shards | huydhn | closed | [
"Merged",
"topic: not user facing",
"test-config/default"
] | 7 | CONTRIBUTOR | Per the discussion on https://github.com/pytorch/pytorch/issues/140332#issuecomment-2610805551, this adds 2 more shards for HF, 2 more for TorchBench, and 1 more for TIMM.
| true |
2,807,919,138 | Remove det_singular OpInfo | soulitzer | closed | [
"oncall: distributed",
"Merged",
"ciflow/trunk",
"topic: not user facing",
"ciflow/mps",
"module: inductor",
"ciflow/inductor"
] | 8 | CONTRIBUTOR | Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #145399
* __->__ #145533
* #145531
* #145520
Fixes https://github.com/pytorch/pytorch/issues/93045 https://github.com/pytorch/pytorch/issues/93044
From previous discussion https://github.com/pytorch/pytorch/issues/93045#issuecomment-1477674083 the resolution is that we're okay with removing this.
Some older attempts:
- https://github.com/pytorch/pytorch/pull/102581
- https://github.com/pytorch/pytorch/pull/109249
cc @H-Huang @awgu @kwen2501 @wanchaol @fegin @fduwjj @wz337 @wconstab @d4l3k @c-p-i-o @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @ColinPeppler @amjames @desertfire @chauhang @aakhundov | true |
2,807,909,472 | Make sure that benchmark_harness is set before running | exclamaforte | closed | [
"Merged",
"ciflow/trunk",
"topic: bug fixes",
"module: inductor",
"ciflow/inductor",
"release notes: inductor"
] | 3 | CONTRIBUTOR | Running torch compile with these options causes an error, because the benchmark code isn't generated but is still called.
```
options={'profile_bandwidth_output': 'foo', 'benchmark_harness': False}
```
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @ColinPeppler @amjames @desertfire @chauhang @aakhundov | true |
2,807,895,111 | Disable slow gradcheck for nn.Transformer ModuleInfo | soulitzer | closed | [
"Merged",
"ciflow/trunk",
"topic: not user facing"
] | 4 | CONTRIBUTOR | Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #145399
* #145533
* __->__ #145531
* #145520
Fixes https://github.com/pytorch/pytorch/issues/117140
| true |
2,807,891,041 | Work around buggy use_const_ref_for_mutable_tensors | ezyang | closed | [
"Merged",
"ciflow/trunk",
"release notes: python_frontend",
"topic: bug fixes"
] | 13 | CONTRIBUTOR | Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #145080
* __->__ #145530
See https://github.com/pytorch/pytorch/issues/145522 for context
This doesn't fix the problem with use_const_ref_for_mutable_tensors and the boxed wrapper, instead it just gets all of our out kernels off of this flag so that the mutable matching pattern works correctly. I also add a check in torchgen to prevent people from making this mistake in the future.
Signed-off-by: Edward Z. Yang <ezyang@meta.com> | true |
2,807,890,661 | Module.to() fail in dynamo when swap_module_params_on_conversion is true | shunting314 | open | [
"triaged",
"oncall: pt2",
"module: dynamo"
] | 0 | CONTRIBUTOR | ### 🐛 Describe the bug
Repro:
```
import torch
from torch import nn
torch.__future__.set_swap_module_params_on_conversion(True)
@torch.compile
def use_emb():
emb = nn.Embedding(16, 8)
emb.weight.grad = torch.randn_like(emb.weight)
emb.to(dtype=torch.bfloat16)
# emb.weight.to(dtype=torch.bfloat16)
# emb.weight.grad.to(dtype=torch.bfloat16)
use_emb()
```
Error messages:
```
(pytorch) [shunting@devgpu011.cln5 ~/ws/pytorch (reset-dynamo-cache)]$ python ~/x.py
Compiled module path: /tmp/torchinductor_shunting/23/c23zg53biko6duc2zqhiic2qwydrsnct3cjd6izouxtkfa4xathn.py
Traceback (most recent call last):
File "/home/shunting/ws/pytorch/torch/nn/modules/module.py", line 958, in _apply
torch.utils.swap_tensors(param, param_applied)
File "/home/shunting/ws/pytorch/torch/utils/__init__.py", line 51, in swap_tensors
raise RuntimeError("Cannot swap t1 because it has weakref associated with it")
RuntimeError: Cannot swap t1 because it has weakref associated with it
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "/home/shunting/x.py", line 15, in <module>
use_emb()
File "/home/shunting/ws/pytorch/torch/_dynamo/eval_frame.py", line 566, in _fn
return fn(*args, **kwargs)
File "/home/shunting/x.py", line 8, in use_emb
emb = nn.Embedding(16, 8)
File "/home/shunting/x.py", line 11, in torch_dynamo_resume_in_use_emb_at_8
emb.to(dtype=torch.bfloat16)
File "/home/shunting/ws/pytorch/torch/nn/modules/module.py", line 1353, in to
return self._apply(convert)
File "/home/shunting/ws/pytorch/torch/nn/modules/module.py", line 962, in _apply
raise RuntimeError(
RuntimeError: _apply(): Couldn't swap Embedding.weight
```
Most test failures exposed by https://github.com/pytorch/pytorch/pull/145306/files should have similar error stack.
Here is an even simpler repro without nn.Module:
```
import torch
from torch import nn
import os
x = torch.randn(5, requires_grad=True)
if os.environ.get("DBG") == "1":
import weakref
g_id_x = id(x)
orig_ref = weakref.ref
class MyRef(orig_ref):
def __init__(self, obj, *args, **kwargs):
if id(obj) == g_id_x: breakpoint()
super().__init__(obj, *args, **kwargs)
weakref.ref = MyRef
@torch.compile
def f(x):
y = x.to(dtype=torch.bfloat16)
torch.utils.swap_tensors(x, y)
f(x)
```
Dynamo keeps weakrefs to tensors when interpreting bytecode or creating guards. `torch.utils.swap_tensors` will fail if there are weakrefs to input tensors.
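A plain-object sketch of that failure mode (no torch needed; `swap_tensors` performs essentially this weakref check before swapping):

```python
import weakref

class FakeTensor:  # stand-in for a graph-input tensor
    pass

t = FakeTensor()
guard_ref = weakref.ref(t)   # what a Dynamo guard or the bytecode tracer holds

# torch.utils.swap_tensors rejects objects with live weakrefs, raising
# "Cannot swap t1 because it has weakref associated with it".
assert weakref.getweakrefcount(t) == 1
```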
### Versions
.
cc @chauhang @penguinwu @voznesenskym @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @amjames | true |
2,807,886,424 | [utils] add try_import method for importing optional modules | mhorowitz | closed | [
"Merged",
"topic: not user facing"
] | 1 | CONTRIBUTOR | Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #143360
* __->__ #145528
| true |
2,807,881,984 | fix intermediate debug information with cpp_wrapper | exclamaforte | closed | [
"Merged",
"ciflow/trunk",
"topic: bug fixes",
"module: inductor",
"ciflow/inductor",
"release notes: inductor"
] | 21 | CONTRIBUTOR | Summary: before fix, code like:
```cpp
aoti_torch_print_tensor_handle(buf0, "after_launch - triton_poi_fused_randn_0 - buf0");
aoti_torch_print_tensor_handle(buf1, "after_launch - triton_poi_fused_randn_0 - buf1");
printf("[ after_launch - triton_poi_fused_randn_0 - 0: %ld ]", 0); printf("
");
printf("[ after_launch - triton_poi_fused_randn_0 - 1228800L: %ld ]", 1228800L); printf("
");
```
was generated, which is a syntax error (note the literal newline emitted inside the `printf` string).
Test Plan:
New unit test.
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @ColinPeppler @amjames @desertfire @chauhang @aakhundov | true |
2,807,880,779 | [MPS] Add bilineard2d_aa implementation | malfet | closed | [
"Merged",
"topic: improvements",
"release notes: mps",
"ciflow/mps"
] | 5 | CONTRIBUTOR | Interesting quirk of the algorithm, that is not very well documented, is that value of align_corners is ignored in antialias mode, see arguments of
https://github.com/pytorch/pytorch/blob/e8304f08fedc802a90f9361c30861f8c5aab946e/aten/src/ATen/native/cpu/UpSampleKernel.cpp#L747-L751
Error out on the uint8 implementation (as it relies on very fragile integer arithmetic), since it's not implemented on any other accelerator devices at the moment. | true |