| id (int64) | title (string) | user (string) | state (string) | labels (list) | comments (int64) | author_association (string) | body (string) | is_title (bool) |
|---|---|---|---|---|---|---|---|---|
2,917,834,772
|
scaled_dot_product_attention crashes on apple silicon
|
jjh42
|
closed
|
[
"module: crash",
"triaged",
"module: mps",
"module: sdpa"
] | 2
|
CONTRIBUTOR
|
### 🐛 Describe the bug
The following Python code fails and terminates the process on macOS 15.3.1 (M1 Pro).
```python
import torch
import torch.nn.functional as F
print(torch.__version__)
device = torch.device('mps')
B=2
T=3
n_kv_head = 2
n_q_head = 4
dim = 8
attn_mask = torch.ones((T, T)).to(device)
q = torch.rand(B, n_q_head, T, dim).to(device)
k = torch.rand(B, n_kv_head, T, dim).to(device)
v = torch.rand(B, n_kv_head, T, dim).to(device)
F.scaled_dot_product_attention(q, k, v, attn_mask=attn_mask, enable_gqa=True)
```
with the following logs:
```
2.7.0.dev20250311
loc("mps_matmul"("(mpsFileLoc): /AppleInternal/Library/BuildRoots/d187755d-b9a3-11ef-83e5-aabfac210453/Library/Caches/com.apple.xbs/Sources/MetalPerformanceShadersGraph/mpsgraph/MetalPerformanceShadersGraph/Core/Files/MPSGraphUtilities.mm":43:0)): error: incompatible dimensions
loc("mps_matmul"("(mpsFileLoc): /AppleInternal/Library/BuildRoots/d187755d-b9a3-11ef-83e5-aabfac210453/Library/Caches/com.apple.xbs/Sources/MetalPerformanceShadersGraph/mpsgraph/MetalPerformanceShadersGraph/Core/Files/MPSGraphUtilities.mm":43:0)): error: invalid shape
LLVM ERROR: Failed to infer result type(s).
```
Changing the device to CPU makes it work fine. Setting n_kv_head to 4 also resolves the issue.
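A possible workaround sketch (added here for illustration, not part of the original report and not verified on this hardware): expand the KV heads manually with `repeat_interleave` so the head counts match, then call SDPA without `enable_gqa`, which is mathematically equivalent to grouped-query attention.
```python
import torch
import torch.nn.functional as F

device = torch.device('mps')
B, T, n_kv_head, n_q_head, dim = 2, 3, 2, 4, 8
attn_mask = torch.ones((T, T)).to(device)
q = torch.rand(B, n_q_head, T, dim).to(device)
k = torch.rand(B, n_kv_head, T, dim).to(device)
v = torch.rand(B, n_kv_head, T, dim).to(device)

# Repeat each KV head so the head counts match, then call SDPA without enable_gqa.
repeat = n_q_head // n_kv_head
k_exp = k.repeat_interleave(repeat, dim=1)
v_exp = v.repeat_interleave(repeat, dim=1)
out = F.scaled_dot_product_attention(q, k_exp, v_exp, attn_mask=attn_mask)
```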
### Versions
I'm using uv; the version script fails.
I've tested with PyTorch 2.6.0 and the 2025-03-11 nightly.
cc @kulinseth @albanD @malfet @DenisVieriu97 @jhavukainen
| true
|
2,917,805,444
|
nn.GaussianNLLLoss and F.gaussian_nll_loss do not work with scalar `var`
|
connor-krill
|
closed
|
[
"module: loss",
"triaged",
"module: python frontend"
] | 3
|
NONE
|
### 🐛 Describe the bug
The documentation for [nn.GaussianNLLLoss](https://pytorch.org/docs/stable/generated/torch.nn.GaussianNLLLoss.html) states that the `var` input can be a scalar value, but an error occurs if a float is used. Similarly, the documentation for the functional version [nn.functional.gaussian_nll_loss](https://pytorch.org/docs/stable/generated/torch.nn.functional.gaussian_nll_loss.html) says `var` can be a scalar, but the function throws an error if a float is passed.
# nn.GaussianNLLLoss
```python
import torch
import torch.nn as nn
loss = nn.GaussianNLLLoss()
input = torch.randn(5, 2, requires_grad=True)
target = torch.randn(5, 2)
var = 1.0
output = loss(input, target, var)
```
```
Traceback (most recent call last):
File "/Users/connorkrill/PycharmProjects/natural_hazards/burgers/scratch/torch_bug.py", line 8, in <module>
output = loss(input, target, var)
File "/opt/anaconda3/envs/natural_hazards/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1511, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
File "/opt/anaconda3/envs/natural_hazards/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1520, in _call_impl
return forward_call(*args, **kwargs)
File "/opt/anaconda3/envs/natural_hazards/lib/python3.10/site-packages/torch/nn/modules/loss.py", line 377, in forward
return F.gaussian_nll_loss(input, target, var, full=self.full, eps=self.eps, reduction=self.reduction)
File "/opt/anaconda3/envs/natural_hazards/lib/python3.10/site-packages/torch/nn/functional.py", line 2858, in gaussian_nll_loss
raise ValueError("var is of incorrect size")
ValueError: var is of incorrect size
```
# nn.functional.gaussian_nll_loss
```python
import torch
import torch.nn.functional as F
input = torch.randn(5, 2, requires_grad=True)
target = torch.randn(5, 2)
var = 1.0
output = F.gaussian_nll_loss(input, target, var)
```
```
Traceback (most recent call last):
File "/Users/connorkrill/PycharmProjects/natural_hazards/burgers/scratch/torch_bug.py", line 16, in <module>
output = F.gaussian_nll_loss(input, target, var)
File "/opt/anaconda3/envs/natural_hazards/lib/python3.10/site-packages/torch/nn/functional.py", line 2841, in gaussian_nll_loss
if var.size() != input.size():
AttributeError: 'float' object has no attribute 'size'
```
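A workaround sketch (added for illustration, not from the original report; it assumes a tensor-valued `var` is acceptable for the use case): pass `var` as a tensor broadcastable to `input` instead of a Python float.
```python
import torch
import torch.nn.functional as F

input = torch.randn(5, 2, requires_grad=True)
target = torch.randn(5, 2)
var = torch.full_like(input, 1.0)  # tensor-valued variance instead of a float scalar
output = F.gaussian_nll_loss(input, target, var)
print(output)
```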
### Versions
PyTorch version: 2.2.2
Is debug build: False
CUDA used to build PyTorch: None
ROCM used to build PyTorch: N/A
OS: macOS 14.7.3 (x86_64)
GCC version: Could not collect
Clang version: 15.0.0 (clang-1500.3.9.4)
CMake version: Could not collect
Libc version: N/A
Python version: 3.10.14 (main, May 6 2024, 14:47:20) [Clang 14.0.6 ] (64-bit runtime)
Python platform: macOS-10.16-x86_64-i386-64bit
Is CUDA available: False
CUDA runtime version: No CUDA
CUDA_MODULE_LOADING set to: N/A
GPU models and configuration: No CUDA
Nvidia driver version: No CUDA
cuDNN version: No CUDA
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Apple M2
Versions of relevant libraries:
[pip3] hamiltorch==0.4.1
[pip3] numpy==1.26.4
[pip3] torch==2.2.2
[pip3] torchdiffeq==0.2.4
[pip3] torchinfo==1.8.0
[pip3] torchsummary==1.5.1
[pip3] torchvision==0.17.2
[conda] hamiltorch 0.4.1 pypi_0 pypi
[conda] numpy 1.26.4 pypi_0 pypi
[conda] torch 2.2.2 pypi_0 pypi
[conda] torchdiffeq 0.2.4 pypi_0 pypi
[conda] torchinfo 1.8.0 pypi_0 pypi
[conda] torchsummary 1.5.1 pypi_0 pypi
[conda] torchvision 0.17.2 pypi_0 pypi
cc @albanD
| true
|
2,917,735,045
|
Deterministic support for adaptive_avg_pool2d_backward_cuda
|
gill179
|
open
|
[
"module: cuda",
"triaged",
"module: determinism",
"module: python frontend"
] | 0
|
NONE
|
### 🚀 The feature, motivation and pitch
UserWarning: adaptive_avg_pool2d_backward_cuda does not have a deterministic implementation, but you set 'torch.use_deterministic_algorithms(True, warn_only=True)'. You can file an issue at https://github.com/pytorch/pytorch/issues to help us prioritize adding deterministic support for this operation. (Triggered internally at ..\aten\src\ATen\Context.cpp:83.)
Kindly add support for this operation.
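A minimal sketch that should reproduce the warning (added for illustration; it assumes a CUDA device is available):
```python
import torch
import torch.nn as nn

torch.use_deterministic_algorithms(True, warn_only=True)

x = torch.randn(1, 3, 8, 8, device="cuda", requires_grad=True)
y = nn.AdaptiveAvgPool2d((3, 3))(x)
y.sum().backward()  # expected to emit the adaptive_avg_pool2d_backward_cuda warning
```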
### Alternatives
_No response_
### Additional context
_No response_
cc @ptrblck @msaroufim @eqy @mruberry @kurtamohler @albanD
| true
|
2,917,647,320
|
[cherry-pick] [CI] Don't clean workspace when fetching repo (#147994)
|
atalman
|
closed
|
[
"topic: not user facing"
] | 2
|
CONTRIBUTOR
|
Cherry-Pick the revert: [CI] Don't clean workspace when fetching repo (#147994)
| true
|
2,917,605,740
|
"asinh" operator is supported in ONNX, but conversion to ONNX fails?
|
yuecheng-ma
|
closed
|
[
"module: onnx",
"triaged",
"onnx-triaged"
] | 3
|
NONE
|
### 🐛 Describe the bug
According to the ONNX operator docs, this operator has been supported since version 9. But when exporting my PyTorch model to ONNX with opset version explicitly set to 20, I still get an 'unsupported operator' error. What could be the reason?

**my code:**

traceback:

### Versions
ENV:
ubuntu20.04
torch version: 2.6.0+cu124
onnx version: 1.17.0
| true
|
2,917,519,768
|
added fake tensor support for foreach_copy
|
pralay-das
|
closed
|
[
"triaged",
"open source",
"Merged",
"ciflow/trunk",
"topic: not user facing",
"module: inductor"
] | 22
|
CONTRIBUTOR
|
Fixes #149111
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
| true
|
2,917,509,315
|
SubsetRandomSampler - changed iteration over tensor to iteration over list
|
arn4
|
closed
|
[
"open source",
"Merged",
"ciflow/trunk",
"release notes: dataloader"
] | 7
|
CONTRIBUTOR
|
Digging further into the problem at https://github.com/UKPLab/sentence-transformers/pull/3261, it boils down to an expensive loop over a torch tensor. Looping over a list, as in RandomSampler, solves the issue.
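A rough illustration of the cost difference (added sketch, not the PR's benchmark): iterating a tensor yields one 0-d tensor per element, which is far slower than iterating a plain Python list.
```python
import time
import torch

indices = torch.randperm(1_000_000)

start = time.time()
for _ in indices:           # yields a 0-d tensor per element
    pass
print(f"tensor loop: {time.time() - start:.2f}s")

start = time.time()
for _ in indices.tolist():  # plain Python ints
    pass
print(f"list loop:   {time.time() - start:.2f}s")
```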
| true
|
2,917,495,887
|
Remove runtime dependency on packaging
|
pytorchbot
|
closed
|
[
"open source",
"module: inductor",
"ciflow/inductor"
] | 1
|
COLLABORATOR
|
Looks like after https://github.com/pytorch/pytorch/pull/148924 we are seeing this error in the nightly test:
https://github.com/pytorch/pytorch/actions/runs/13806023728/job/38616861623
```
File "/Users/runner/work/_temp/anaconda/envs/test_conda_env/lib/python3.13/site-packages/torch/_inductor/pattern_matcher.py", line 79, in <module>
from .lowering import fallback_node_due_to_unsupported_type
File "/Users/runner/work/_temp/anaconda/envs/test_conda_env/lib/python3.13/site-packages/torch/_inductor/lowering.py", line 7024, in <module>
from . import kernel
File "/Users/runner/work/_temp/anaconda/envs/test_conda_env/lib/python3.13/site-packages/torch/_inductor/kernel/__init__.py", line 1, in <module>
from . import mm, mm_common, mm_plus_mm
File "/Users/runner/work/_temp/anaconda/envs/test_conda_env/lib/python3.13/site-packages/torch/_inductor/kernel/mm.py", line 6, in <module>
from packaging.version import Version
ModuleNotFoundError: No module named 'packaging'
```
Hence, removing the runtime dependency on packaging, since it may not be installed by default.
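One way to drop such a dependency, sketched here purely as an illustration rather than as what the PR actually does, is to compare version tuples parsed directly from the version string:
```python
def version_tuple(version: str) -> tuple:
    # Keep only the leading numeric components, e.g. "2.6.0+cu124" -> (2, 6, 0).
    core = version.split("+")[0]
    parts = []
    for piece in core.split("."):
        if piece.isdigit():
            parts.append(int(piece))
        else:
            break
    return tuple(parts)

assert version_tuple("2.6.0+cu124") >= (2, 5)
assert version_tuple("2.7.0.dev20250311") >= (2, 7)
```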
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
| true
|
2,917,350,105
|
`torch.multinomial` fails under multi-worker DataLoader with a CUDA error: `Assertion cumdist[size - 1] > 0` failed
|
yewentao256
|
closed
|
[
"triaged",
"module: data",
"module: python frontend"
] | 2
|
CONTRIBUTOR
|
### 🐛 Describe the bug
When using `torch.multinomial` in a Dataset/IterableDataset within a `DataLoader` that has multiple workers (num_workers > 0), an assertion error is thrown from a CUDA kernel:
```bash
pytorch\aten\src\ATen\native\cuda\MultinomialKernel.cu:112: block: [0,0,0], thread: [0,0,0] Assertion `cumdist[size - 1] > static_cast<scalar_t>(0)` failed.
```
A minimal reproducible example is below. Observe that if you switch `num_workers=2` to `num_workers=0`, the code runs successfully:
```py
import torch
import nltk
import nltk
import torch
from torch.utils.data import IterableDataset, DataLoader
class SimpleDataset(IterableDataset):
    def __init__(self, data, word2idx, window_size, num_neg_samples, neg_sampling_dist):
        self.data = data
        self.word2idx = word2idx
        self.window_size = window_size
        self.num_neg_samples = num_neg_samples
        self.neg_sampling_dist = neg_sampling_dist.to("cuda")

    def __iter__(self):
        for line in self.data:
            tokens = nltk.word_tokenize(line)
            token_ids = [
                self.word2idx.get(token, self.word2idx["<unk>"]) for token in tokens
            ]
            # neg_sampling_dist = self.neg_sampling_dist.to("cuda")
            # torch.cuda.synchronize()
            neg_sampling_dist = self.neg_sampling_dist.clone().detach().to('cuda')
            torch.cuda.synchronize()
            for i, center in enumerate(token_ids):
                start = max(0, i - self.window_size)
                end = min(len(token_ids), i + self.window_size + 1)
                for j in range(start, end):
                    if i != j:
                        context = token_ids[j]
                        negative_context = torch.multinomial(
                            neg_sampling_dist,
                            self.num_neg_samples,
                            replacement=True,
                        )
                        yield center, context, negative_context

if __name__ == "__main__":
    vocab = {"<unk>": 1, "word": 2, "example": 3, "test": 4}
    word2idx = {word: idx for idx, word in enumerate(vocab.keys())}
    vocab_size = len(word2idx)
    freq_arr = torch.zeros(vocab_size, dtype=torch.float32)
    for word, idx in word2idx.items():
        freq_arr[idx] = vocab[word]
    neg_sampling_dist = freq_arr / freq_arr.sum()
    data = ["This is a test sentence.", "Another example of a sentence."]
    dataset = SimpleDataset(
        data,
        word2idx,
        window_size=1,
        num_neg_samples=2,
        neg_sampling_dist=neg_sampling_dist,
    )
    dataloader = DataLoader(dataset, batch_size=2, num_workers=2)
    for batch in dataloader:
        print(batch)
```
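A possible workaround sketch (added for illustration, not verified against this exact setup): keep the sampling distribution on CPU inside the DataLoader workers and move sampled indices to CUDA only in the main process, since passing CUDA tensors into worker processes is error-prone.
```python
import torch
from torch.utils.data import IterableDataset, DataLoader

class CPUSamplingDataset(IterableDataset):
    def __init__(self, dist: torch.Tensor, n: int):
        self.dist = dist.cpu()  # keep the distribution on CPU in the workers
        self.n = n

    def __iter__(self):
        for _ in range(self.n):
            yield torch.multinomial(self.dist, 2, replacement=True)

if __name__ == "__main__":
    dist = torch.tensor([0.1, 0.2, 0.3, 0.4])
    loader = DataLoader(CPUSamplingDataset(dist, 8), batch_size=2, num_workers=2)
    for batch in loader:
        if torch.cuda.is_available():
            batch = batch.to("cuda")  # move to GPU only in the main process
        print(batch)
```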
### Versions
PyTorch version: 2.4.1+cu118
Is debug build: False
CUDA used to build PyTorch: 11.8
ROCM used to build PyTorch: N/A
OS: Microsoft Windows 10 (10.0.19045 64 位)
GCC version: (GCC) 12.2.0
Clang version: Could not collect
CMake version: version 3.25.2
Libc version: N/A
Python version: 3.11.5 (tags/v3.11.5:cce6ba9, Aug 24 2023, 14:38:34) [MSC v.1936 64 bit (AMD64)] (64-bit runtime)
Python platform: Windows-10-10.0.19045-SP0
Is CUDA available: True
CUDA runtime version: 12.6.77
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration: GPU 0: NVIDIA GeForce RTX 2070
Nvidia driver version: 561.19
cuDNN version: Could not collect
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Name: Intel(R) Core(TM) i7-10750H CPU @ 2.60GHz
Manufacturer: GenuineIntel
Family: 198
Architecture: 9
ProcessorType: 3
DeviceID: CPU0
CurrentClockSpeed: 2592
MaxClockSpeed: 2592
L2CacheSize: 1536
L2CacheSpeed: None
Revision: None
Versions of relevant libraries:
[pip3] microtorch==0.5.0
[pip3] minitorch==0.5
[pip3] mypy-extensions==1.0.0
[pip3] numpy==1.23.5
[pip3] onnx==1.17.0
[pip3] onnx-tf==1.9.0
[pip3] optree==0.14.0
[pip3] pytorch-fid==0.3.0
[pip3] torch==2.4.1+cu118
[pip3] torch-fidelity==0.3.0
[pip3] torch-tb-profiler==0.4.3
[pip3] torchaudio==2.4.1+cu118
[pip3] torcheval==0.0.7
[pip3] torchmetrics==1.6.0
[pip3] torchvision==0.19.1+cu118
cc @andrewkho @divyanshk @VitalyFedyunin @dzhulgakov @albanD
| true
|
2,917,337,808
|
[MPS] Add `torch.special.bessel_[jy][01]` implementations
|
malfet
|
closed
|
[
"Merged",
"topic: improvements",
"release notes: mps",
"ciflow/mps"
] | 5
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #149123
By copy-n-pasting functions from
https://github.com/pytorch/pytorch/blob/f59064f2b700860a16db1930c30a4691ab663401/aten/src/ATen/native/cuda/Math.cuh#L1463
With an ugly workaround for `bessel_y[01]` to avoid internal compiler exception on M1/M2 machines (see FB16863363 / https://gist.github.com/malfet/e7785e4b572e7740887a83a2386ef769 )
| true
|
2,917,155,986
|
Update the heuristic for AArch64 bmm/baddbmm
|
michalowski-arm
|
open
|
[
"module: cpu",
"triaged",
"open source",
"module: arm",
"release notes: linalg_frontend"
] | 6
|
CONTRIBUTOR
|
Updates heuristic for bmm/baddbmm and consolidates all heuristic logic in a single location
- The goal of the consolidation is to improve maintainability and readability of the heuristic logic. Instead of different parts scattered across two files, this patch centralizes everything inside `Matmul.cpp`, where there already exists heuristic-based selection for mkldnn.
- The logic of the check itself doesn't change (existing code is reused where possible), but a separate heuristic threshold for bmm/baddbmm is introduced based on newer benchmarking data. Use the script below to see the performance improvement for bmm from the new heuristic:
```python
import torch
import time

# Set below to True to use cases selected by only one of the heuristics.
USE_ONLY_DIVERGENT_TEST_CASES = True
BATCH_SIZES = [ 1, 8, 32, 64, 128, 256 ]
M_DIMS = [ 4, 8, 16, 32, 64, 256, 512 ]
N_DIMS = [ 4, 8, 16, 32, 64, 256, 512 ]
K_DIMS = [ 4, 8, 16, 32, 64, 256, 512 ]
ITERS = 50

def old_heuristic(m, n, k):
    is_above_min_dims = m > 8 and n > 8 and k > 8
    is_above_min_size = m*n*k > 8_192
    return is_above_min_dims and is_above_min_size

def new_heuristic(b, m, n, k):
    return b*b*m*n*k >= 4_194_304

def generate_test_cases():
    test_cases = []
    for b in BATCH_SIZES:
        for m in M_DIMS:
            for n in N_DIMS:
                for k in K_DIMS:
                    if USE_ONLY_DIVERGENT_TEST_CASES:
                        if old_heuristic(m, n, k) != new_heuristic(b, m, n, k):
                            test_cases.append([b, m, n, k])
                    else:
                        test_cases.append([b, m, n, k])
    return test_cases

def test(x, y):
    for _ in range(5):
        torch.bmm(x, y)
    perf = 0.0
    for _ in range(ITERS):
        start = time.time()
        torch.bmm(x, y)
        end = time.time()
        perf += (end - start) / ITERS
    return perf

def main():
    print(f"{'b':<10}{'m':<10}{'n':<10}{'k':<10}{'time (s)':10}")
    cumulative_mean_time = 0.0
    for b, m, n, k in generate_test_cases():
        mean_time = test(torch.rand(b, m, n), torch.rand(b, n, k))
        cumulative_mean_time += mean_time
        print(f"{b:<10}{m:<10}{n:<10}{k:<10}{mean_time:10.3e}")
    print(f"Cumulative mean time = {cumulative_mean_time:.4f} s")

if __name__ == "__main__":
    main()
```
From the script we see that cumulative mean time from all test cases (at 16 threads) is:
- 1.6195 s for the old heuristic
- 0.7012 s for the new heuristic
cc @jgong5 @mingfeima @XiaobingSuper @sanchitintel @ashokei @jingxu10 @jerryzh168 @malfet @snadampal @milpuz01 @aditew01 @nikhil-arm @fadara01
| true
|
2,917,058,303
|
DISABLED test_compile_body_aliasing_contents_backend_aot_eager (__main__.TestCompileTorchbind)
|
pytorch-bot[bot]
|
closed
|
[
"module: flaky-tests",
"skipped",
"oncall: pt2",
"oncall: export"
] | 10
|
NONE
|
Platforms: asan, linux, rocm, win, windows, slow
This test was disabled because it is failing in CI. See [recent examples](https://hud.pytorch.org/flakytest?name=test_compile_body_aliasing_contents_backend_aot_eager&suite=TestCompileTorchbind&limit=100) and the most recent trunk [workflow logs](https://github.com/pytorch/pytorch/runs/38691506611).
Over the past 3 hours, it has been determined flaky in 33 workflow(s) with 66 failures and 33 successes.
**Debugging instructions (after clicking on the recent samples link):**
DO NOT ASSUME THINGS ARE OKAY IF THE CI IS GREEN. We now shield flaky tests from developers so CI will thus be green but it will be harder to parse the logs.
To find relevant log snippets:
1. Click on the workflow logs linked above
2. Click on the Test step of the job so that it is expanded. Otherwise, the grepping will not work.
3. Grep for `test_compile_body_aliasing_contents_backend_aot_eager`
4. There should be several instances run (as flaky tests are rerun in CI) from which you can study the logs.
Test file path: `export/test_torchbind.py`
cc @clee2000 @chauhang @penguinwu @avikchaudhuri @gmagogsfm @zhxchen17 @tugsbayasgalan @angelayi @suo @ydwu4
| true
|
2,916,941,531
|
Add `keepdim` parameter for `torch.nn.functional.cosine_similarity`
|
ringohoffman
|
open
|
[
"module: nn",
"triaged",
"actionable"
] | 2
|
CONTRIBUTOR
|
### 🚀 The feature, motivation and pitch
A lot of aggregation functions computed over specific dimensions have a `keepdim` parameter so that you don't have to unsqueeze the output back into its original dimensionality. I think it would be nice if `cosine_similarity` did too.
https://github.com/pytorch/pytorch/blob/bdf57fb8f7ca51027488f6aabe85c350161e7acc/aten/src/ATen/native/Distance.cpp#L274
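For illustration (added sketch; the `keepdim` argument mentioned in the comment is the proposed parameter, not an existing one):
```python
import torch
import torch.nn.functional as F

a = torch.randn(4, 8)
b = torch.randn(4, 8)

sim = F.cosine_similarity(a, b, dim=1)  # shape (4,)
sim = sim.unsqueeze(1)                  # manual step needed today to get shape (4, 1)

# Proposed: F.cosine_similarity(a, b, dim=1, keepdim=True) would return (4, 1) directly.
```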
### Alternatives
_No response_
### Additional context
_No response_
cc @albanD @mruberry @jbschlosser @walterddr @mikaylagawarecki
| true
|
2,916,821,226
|
The device_id parameter of distributed.init_process_group will cause each process to occupy video memory on the first accessible GPU
|
Staten-Wang
|
closed
|
[
"oncall: distributed",
"triaged",
"bug"
] | 1
|
NONE
|
### 🐛 Describe the bug
The device_id parameter of distributed.init_process_group causes each process to occupy GPU memory on the first accessible GPU.
For example, I set the environment variable "CUDA_VISIBLE_DEVICES" to "0,1". After init_process_group is executed, rank 1 also occupies some GPU memory on GPU 0. This is obviously not what I expected.
Before, I used torch.cuda.set_device(local_rank) to set the GPU and never used the device_id parameter. But when I updated PyTorch, it gave me this warning: '[rank0]:[W313 18:22:43.453826616 ProcessGroupNCCL.cpp:4561] [PG ID 0 PG GUID 0 Rank 0] using GPU 0 to perform barrier as devices used by this process are currently unknown. This can potentially cause a hang if this rank to GPU mapping is incorrect. Specify device_ids in barrier() to force use of a particular device, or call init_process_group() with a device_id.'
The code is as follows:
```python
import os
import torch
import torch.multiprocessing as mp
from torch import distributed
def proc_main(local_rank):
    torch.cuda.set_device(local_rank)
    backend = 'nccl' if distributed.is_nccl_available() else 'gloo'
    print(f'backend is {backend}')
    dev = torch.device('cuda', local_rank)
    distributed.init_process_group(
        backend=backend,
        init_method='env://',
        world_size=torch.cuda.device_count(),
        rank=local_rank,
        device_id=dev
    )
    distributed.barrier()
    distributed.destroy_process_group()

def main():
    if distributed.is_available():
        os.environ['MASTER_ADDR'] = '127.0.0.1'
        os.environ['MASTER_PORT'] = '9987'
        mp.spawn(proc_main, nprocs=torch.cuda.device_count())

if __name__ == '__main__':
    main()
```
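A possible workaround sketch (added for illustration only, not from the original report and not verified on this setup): restrict each spawned process to its own GPU via CUDA_VISIBLE_DEVICES before any CUDA initialization, so rank 1 cannot allocate anything on physical GPU 0.
```python
import os
import torch
import torch.multiprocessing as mp
from torch import distributed

def proc_main(local_rank, world_size):
    # Must run before any CUDA initialization in this process.
    os.environ['CUDA_VISIBLE_DEVICES'] = str(local_rank)
    dev = torch.device('cuda', 0)  # device 0 now maps to this rank's physical GPU
    distributed.init_process_group(
        backend='nccl',
        init_method='env://',
        world_size=world_size,
        rank=local_rank,
        device_id=dev,
    )
    distributed.barrier()
    distributed.destroy_process_group()

if __name__ == '__main__':
    os.environ['MASTER_ADDR'] = '127.0.0.1'
    os.environ['MASTER_PORT'] = '9987'
    world_size = torch.cuda.device_count()
    mp.spawn(proc_main, args=(world_size,), nprocs=world_size)
```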
### Versions
PyTorch version: 2.6.0+cu124
Is debug build: False
CUDA used to build PyTorch: 12.4
ROCM used to build PyTorch: N/A
OS: Ubuntu 22.04.5 LTS (x86_64)
GCC version: (Ubuntu 11.4.0-1ubuntu1~22.04) 11.4.0
Clang version: Could not collect
CMake version: Could not collect
Libc version: glibc-2.35
Python version: 3.12.9 | packaged by Anaconda, Inc. | (main, Feb 6 2025, 18:56:27) [GCC 11.2.0] (64-bit runtime)
Python platform: Linux-5.15.0-134-generic-x86_64-with-glibc2.35
Is CUDA available: True
CUDA runtime version: Could not collect
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration:
GPU 0: NVIDIA GeForce RTX 3090 Ti
GPU 1: NVIDIA GeForce RTX 3090 Ti
GPU 2: NVIDIA GeForce RTX 3090 Ti
GPU 3: NVIDIA GeForce RTX 3090 Ti
Nvidia driver version: 550.120
cuDNN version: Could not collect
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 46 bits physical, 48 bits virtual
Byte Order: Little Endian
CPU(s): 16
On-line CPU(s) list: 0-15
Vendor ID: GenuineIntel
Model name: Intel(R) Xeon(R) Gold 6244 CPU @ 3.60GHz
CPU family: 6
Model: 85
Thread(s) per core: 1
Core(s) per socket: 8
Socket(s): 2
Stepping: 7
CPU max MHz: 4400.0000
CPU min MHz: 1200.0000
BogoMIPS: 7200.00
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc art arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc cpuid aperfmperf pni pclmulqdq dtes64 monitor ds_cpl vmx smx est tm2 ssse3 sdbg fma cx16 xtpr pdcm pcid dca sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm 3dnowprefetch cpuid_fault epb cat_l3 cdp_l3 invpcid_single intel_ppin ssbd mba ibrs ibpb stibp ibrs_enhanced tpr_shadow vnmi flexpriority ept vpid ept_ad fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid cqm mpx rdt_a avx512f avx512dq rdseed adx smap clflushopt clwb intel_pt avx512cd avx512bw avx512vl xsaveopt xsavec xgetbv1 xsaves cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local dtherm ida arat pln pts hwp hwp_act_window hwp_epp hwp_pkg_req pku ospke avx512_vnni md_clear flush_l1d arch_capabilities
Virtualization: VT-x
L1d cache: 512 KiB (16 instances)
L1i cache: 512 KiB (16 instances)
L2 cache: 16 MiB (16 instances)
L3 cache: 49.5 MiB (2 instances)
NUMA node(s): 2
NUMA node0 CPU(s): 0,2,4,6,8,10,12,14
NUMA node1 CPU(s): 1,3,5,7,9,11,13,15
Vulnerability Gather data sampling: Mitigation; Microcode
Vulnerability Itlb multihit: KVM: Mitigation: VMX disabled
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Mitigation; Clear CPU buffers; SMT disabled
Vulnerability Reg file data sampling: Not affected
Vulnerability Retbleed: Mitigation; Enhanced IBRS
Vulnerability Spec rstack overflow: Not affected
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl and seccomp
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Enhanced / Automatic IBRS; IBPB conditional; RSB filling; PBRSB-eIBRS SW sequence; BHI SW loop, KVM SW loop
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Mitigation; TSX disabled
Versions of relevant libraries:
[pip3] numpy==2.2.3
[pip3] nvidia-cublas-cu12==12.4.5.8
[pip3] nvidia-cuda-cupti-cu12==12.4.127
[pip3] nvidia-cuda-nvrtc-cu12==12.4.127
[pip3] nvidia-cuda-runtime-cu12==12.4.127
[pip3] nvidia-cudnn-cu12==9.1.0.70
[pip3] nvidia-cufft-cu12==11.2.1.3
[pip3] nvidia-curand-cu12==10.3.5.147
[pip3] nvidia-cusolver-cu12==11.6.1.9
[pip3] nvidia-cusparse-cu12==12.3.1.170
[pip3] nvidia-cusparselt-cu12==0.6.2
[pip3] nvidia-nccl-cu12==2.21.5
[pip3] nvidia-nvjitlink-cu12==12.4.127
[pip3] nvidia-nvtx-cu12==12.4.127
[pip3] torch==2.6.0
[pip3] torchaudio==2.6.0
[pip3] torchvision==0.21.0
[pip3] triton==3.2.0
[conda] numpy 2.2.3 pypi_0 pypi
[conda] nvidia-cublas-cu12 12.4.5.8 pypi_0 pypi
[conda] nvidia-cuda-cupti-cu12 12.4.127 pypi_0 pypi
[conda] nvidia-cuda-nvrtc-cu12 12.4.127 pypi_0 pypi
[conda] nvidia-cuda-runtime-cu12 12.4.127 pypi_0 pypi
[conda] nvidia-cudnn-cu12 9.1.0.70 pypi_0 pypi
[conda] nvidia-cufft-cu12 11.2.1.3 pypi_0 pypi
[conda] nvidia-curand-cu12 10.3.5.147 pypi_0 pypi
[conda] nvidia-cusolver-cu12 11.6.1.9 pypi_0 pypi
[conda] nvidia-cusparse-cu12 12.3.1.170 pypi_0 pypi
[conda] nvidia-cusparselt-cu12 0.6.2 pypi_0 pypi
[conda] nvidia-nccl-cu12 2.21.5 pypi_0 pypi
[conda] nvidia-nvjitlink-cu12 12.4.127 pypi_0 pypi
[conda] nvidia-nvtx-cu12 12.4.127 pypi_0 pypi
[conda] torch 2.6.0 pypi_0 pypi
[conda] torchaudio 2.6.0 pypi_0 pypi
[conda] torchvision 0.21.0 pypi_0 pypi
[conda] triton 3.2.0 pypi_0 pypi
cc @H-Huang @awgu @kwen2501 @wanchaol @fegin @fduwjj @wz337 @wconstab @d4l3k @c-p-i-o
| true
|
2,916,821,224
|
Add dim parameter to torch.bucketize
|
Aure20
|
open
|
[
"triaged",
"needs design",
"module: python frontend"
] | 1
|
NONE
|
### 🚀 The feature, motivation and pitch
Currently I need to bucketize a 2D tensor using different boundaries for each row. What I do now is use a list comprehension, bucketizing each row one by one and then stacking the list again (see example). It would be convenient to have a dim parameter, so I could just write:
bucketized = torch.bucketize(values, boundaries, dim=1)
where the boundary tensor's shape matches the input along the dimension we are operating on.
I think calling contiguous() just once beforehand should be enough to get a good memory layout for the boundary tensor as well, as in the 1D case.
### Alternatives
Example of the current solution I use:
values = torch.tensor([[-0.2, 1.5, 3.2], [2.1, 4.3, 0.5]])
boundaries = torch.tensor([[0.0, 1.0, 2.0, 3.0], [2.0, 3.0, 4.0, 5.0]])
bucketized = torch.stack([torch.bucketize(values[i], boundaries[i]) for i in range(values.shape[0])])
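As a possible alternative (added note, not part of the original request): torch.searchsorted already accepts a batched `sorted_sequence` whose leading dimensions match the input, which gives per-row boundaries without a Python loop and should match torch.bucketize's default (right=False) semantics.
```python
import torch

values = torch.tensor([[-0.2, 1.5, 3.2], [2.1, 4.3, 0.5]])
boundaries = torch.tensor([[0.0, 1.0, 2.0, 3.0], [2.0, 3.0, 4.0, 5.0]])

looped = torch.stack([torch.bucketize(values[i], boundaries[i]) for i in range(values.shape[0])])
batched = torch.searchsorted(boundaries, values)  # per-row boundaries, no loop
print(torch.equal(looped, batched))  # expected: True
```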
### Additional context
_No response_
cc @albanD
| true
|
2,916,593,394
|
Seeking minimal example to use `register_replacement` to inject kernels for both training and inference
|
mayank31398
|
closed
|
[
"module: docs",
"module: autograd",
"triaged",
"oncall: pt2",
"module: inductor"
] | 3
|
CONTRIBUTOR
|
### 📚 The doc issue
Hi, it would be awesome if we could add a minimal example for this.
Let's say I want to replace:
```python
def forward(x: torch.Tensor, y: torch.Tensor) -> torch.Tensor:
    x = x * 3
    z = x * F.silu(y)
    return z
```
with a custom autograd function:
```python
class MyFunc(torch.autograd.Function):
    def forward(ctx, x, y):
        ...
    def backward(ctx, z_grad):
        ...
```
thanks!
cc @svekars @sekyondaMeta @AlannaBurke @ezyang @albanD @gqchen @pearu @nikitaved @soulitzer @Varal7 @xmfan @chauhang @penguinwu @voznesenskym @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @chenyang78 @kadeng @muchulee8 @amjames @aakhundov
| true
|
2,916,516,388
|
[inductor][cpu]performance regression in 2025-03-10 nightly release
|
zxd1997066
|
open
|
[
"oncall: pt2",
"oncall: cpu inductor"
] | 1
|
CONTRIBUTOR
|
### 🐛 Describe the bug
<p>fp32 static shape cpp wrapper </p><table border="1" class="dataframe table">
<thead>
<tr style="text-align: right;">
<th>suite</th>
<th>name</th>
<th>thread</th>
<th>batch_size_new</th>
<th>speed_up_new</th>
<th>inductor_new</th>
<th>eager_new</th>
<th>compilation_latency_new</th>
<th>batch_size_old</th>
<th>speed_up_old</th>
<th>inductor_old</th>
<th>eager_old</th>
<th>compilation_latency_old</th>
<th>Ratio Speedup(New/old)</th>
<th>Eager Ratio(old/new)</th>
<th>Inductor Ratio(old/new)</th>
<th>Compilation_latency_Ratio(old/new)</th>
</tr>
</thead>
<tbody>
<tr>
<td>torchbench</td>
<td>lennard_jones</td>
<td>multiple</td>
<td>1000</td>
<td>0.899138</td>
<td>0.000328722</td>
<td>0.000295566441636</td>
<td>4.907769</td>
<td>1000</td>
<td>1.311658</td>
<td>0.000220639</td>
<td>0.00028940290946200003</td>
<td>4.916</td>
<td>0.69</td>
<td>0.98</td>
<td>0.67</td>
<td>1.0</td>
</tr>
</tbody>
</table>
<p>fp32 dynamic shape cpp wrapper </p><table border="1" class="dataframe table">
<thead>
<tr style="text-align: right;">
<th>suite</th>
<th>name</th>
<th>thread</th>
<th>batch_size_new</th>
<th>speed_up_new</th>
<th>inductor_new</th>
<th>eager_new</th>
<th>compilation_latency_new</th>
<th>batch_size_old</th>
<th>speed_up_old</th>
<th>inductor_old</th>
<th>eager_old</th>
<th>compilation_latency_old</th>
<th>Ratio Speedup(New/old)</th>
<th>Eager Ratio(old/new)</th>
<th>Inductor Ratio(old/new)</th>
<th>Compilation_latency_Ratio(old/new)</th>
</tr>
</thead>
<tbody>
<tr>
<td>torchbench</td>
<td>lennard_jones</td>
<td>multiple</td>
<td>1000</td>
<td>0.7305</td>
<td>0.000399273</td>
<td>0.0002916689265</td>
<td>4.941489</td>
<td>1000</td>
<td>1.041987</td>
<td>0.000281327</td>
<td>0.000293139076749</td>
<td>4.938169</td>
<td>0.7</td>
<td>1.01</td>
<td>0.7</td>
<td>1.0</td>
</tr>
</tbody>
</table>
<p>amp static shape default wrapper max autotune</p><table border="1" class="dataframe table">
<thead>
<tr style="text-align: right;">
<th>suite</th>
<th>name</th>
<th>thread</th>
<th>batch_size_new</th>
<th>speed_up_new</th>
<th>inductor_new</th>
<th>eager_new</th>
<th>compilation_latency_new</th>
<th>batch_size_old</th>
<th>speed_up_old</th>
<th>inductor_old</th>
<th>eager_old</th>
<th>compilation_latency_old</th>
<th>Ratio Speedup(New/old)</th>
<th>Eager Ratio(old/new)</th>
<th>Inductor Ratio(old/new)</th>
<th>Compilation_latency_Ratio(old/new)</th>
</tr>
</thead>
<tbody>
<tr>
<td>torchbench</td>
<td>hf_Albert</td>
<td>single</td>
<td>1</td>
<td>1.308119</td>
<td>0.431678501</td>
<td>0.564686849049619</td>
<td>95.22961</td>
<td>1</td>
<td>1.597903</td>
<td>0.35871472</td>
<td>0.57319132723216</td>
<td>95.290552</td>
<td>0.82</td>
<td>1.02</td>
<td>0.83</td>
<td>1.0</td>
</tr>
<tr>
<td>torchbench</td>
<td>hf_GPT2_large</td>
<td>single</td>
<td>1</td>
<td>1.433638</td>
<td>6.326373821</td>
<td>9.069729911990798</td>
<td>141.727643</td>
<td>1</td>
<td>1.631708</td>
<td>5.627456199</td>
<td>9.182365299557892</td>
<td>129.7057</td>
<td>0.88</td>
<td>1.01</td>
<td>0.89</td>
<td>0.92</td>
</tr>
</table>
<p>amp static shape default wrapper</p><table border="1" class="dataframe table">
<thead>
<tr style="text-align: right;">
<th>suite</th>
<th>name</th>
<th>thread</th>
<th>batch_size_new</th>
<th>speed_up_new</th>
<th>inductor_new</th>
<th>eager_new</th>
<th>compilation_latency_new</th>
<th>batch_size_old</th>
<th>speed_up_old</th>
<th>inductor_old</th>
<th>eager_old</th>
<th>compilation_latency_old</th>
<th>Ratio Speedup(New/old)</th>
<th>Eager Ratio(old/new)</th>
<th>Inductor Ratio(old/new)</th>
<th>Compilation_latency_Ratio(old/new)</th>
</tr>
</thead>
<tbody>
<tr>
<td>torchbench</td>
<td>hf_Albert</td>
<td>single</td>
<td>1</td>
<td>1.388928</td>
<td>0.412458401</td>
<td>0.572875021984128</td>
<td>54.402956</td>
<td>1</td>
<td>1.544128</td>
<td>0.36844618100000004</td>
<td>0.5689280645751681</td>
<td>53.764985</td>
<td>0.9</td>
<td>0.99</td>
<td>0.89</td>
<td>0.99</td>
</tr>
</table>
the bad commit: 165e33531c489c92c994a02f3e55ce3261c794e5
```
/workspace/pytorch# bash inductor_single_run.sh single inference performance torchbench hf_Albert amp
Testing with inductor.
single-thread testing....
loading model: 0it [00:01, ?it/s]
cpu eval hf_Albert
running benchmark: 100%|█████████████████████████████████████████████████████████████████████████████████████████| 50/50 [00:47<00:00, 1.04it/s]
1.361x
WARNING:common:Trying to call the empty_gpu_cache for device: cpu, which is not in list [cuda, xpu]
dev,name,batch_size,speedup,abs_latency,compilation_latency,compression_ratio,eager_peak_mem,dynamo_peak_mem,calls_captured,unique_graphs,graph_breaks,unique_graph_breaks,autograd_captures,autograd_compiles,cudagraph_skips
cpu,hf_Albert,1,1.361182,405.056800,41.416172,0.852280,111.418163,130.729574,438,1,0,0,0,0,1
```
the last good commit: 118a165ac58865ea0a42bc1dd6fe3e13c28af8a9
```
/workspace/pytorch# bash inductor_single_run.sh single inference performance torchbench hf_Albert amp
Testing with inductor.
single-thread testing....
loading model: 0it [00:01, ?it/s]
cpu eval hf_Albert
running benchmark: 100%|█████████████████████████████████████████████████████████████████████████████████████████| 50/50 [00:45<00:00, 1.09it/s]
1.513x
WARNING:common:Trying to call the empty_gpu_cache for device: cpu, which is not in list [cuda, xpu]
dev,name,batch_size,speedup,abs_latency,compilation_latency,compression_ratio,eager_peak_mem,dynamo_peak_mem,calls_captured,unique_graphs,graph_breaks,unique_graph_breaks,autograd_captures,autograd_compiles,cudagraph_skips
cpu,hf_Albert,1,1.512885,364.084261,45.160686,0.905931,111.499264,123.077018,438,1,0,0,0,0,1
```
### Versions
<p>SW info</p><table border="1" class="dataframe table">
<thead>
<tr style="text-align: right;">
<th>name</th>
<th>target_branch</th>
<th>target_commit</th>
<th>refer_branch</th>
<th>refer_commit</th>
</tr>
</thead>
<tbody>
<tr>
<td>torchbench</td>
<td>main</td>
<td>373ffb19</td>
<td>main</td>
<td>373ffb19</td>
</tr>
<tr>
<td>torch</td>
<td>main</td>
<td>5245304f1ecd4e78bd11f5a5efa8ce12f3b52826</td>
<td>main</td>
<td>ce2f680e0009550ef0dc594f375d542662fcb7e5</td>
</tr>
<tr>
<td>torchvision</td>
<td>main</td>
<td>0.19.0a0+d23a6e1</td>
<td>main</td>
<td>0.19.0a0+d23a6e1</td>
</tr>
<tr>
<td>torchtext</td>
<td>main</td>
<td>0.16.0a0+b0ebddc</td>
<td>main</td>
<td>0.16.0a0+b0ebddc</td>
</tr>
<tr>
<td>torchaudio</td>
<td>main</td>
<td>2.6.0a0+c670ad8</td>
<td>main</td>
<td>2.6.0a0+c670ad8</td>
</tr>
<tr>
<td>torchdata</td>
<td>main</td>
<td>0.7.0a0+11bb5b8</td>
<td>main</td>
<td>0.7.0a0+11bb5b8</td>
</tr>
<tr>
<td>dynamo_benchmarks</td>
<td>main</td>
<td>nightly</td>
<td>main</td>
<td>nightly</td>
</tr>
</tbody>
</table>
Repro:
[inductor_single_run.sh](https://github.com/chuanqi129/inductor-tools/blob//main/scripts/modelbench/inductor_single_run.sh)
bash inductor_single_run.sh single inference performance torchbench hf_Albert amp
Suspected guilty commit: https://github.com/pytorch/pytorch/commit/165e33531c489c92c994a02f3e55ce3261c794e5
[torchbench-hf_Albert-inference-amp-static-default-single-performance-drop_guilty_commit.log](https://github.com/user-attachments/files/19226356/torchbench-hf_Albert-inference-amp-static-default-single-performance-drop_guilty_commit.log)
cc @chauhang @penguinwu @chuanqi129 @leslie-fang-intel
| true
|
2,916,498,055
|
[Profiler][HPU] Fix incorrect availabilities for HPU
|
pytorchbot
|
closed
|
[
"open source"
] | 1
|
COLLABORATOR
|
Fixes #148661
| true
|
2,916,392,783
|
[Intel GPU] Allow XPU backend in Depthwise_conv2d&3d operators
|
yucai-intel
|
open
|
[
"module: cpu",
"open source",
"Merged",
"Reverted",
"ciflow/trunk",
"ciflow/xpu",
"release notes: xpu",
"module: xpu",
"ci-no-td"
] | 19
|
CONTRIBUTOR
|
This modification adds XPU kernel support for depthwise_conv2d and depthwise_conv3d.
Currently, when running depthwise_conv on XPU devices, it is computed with MKLDNN via the ConvBackend::Overrideable path.
After this modification, depthwise_conv is computed directly using XpuDepthwise3d when the MKLDNN backend is disabled.
cc @jgong5 @mingfeima @XiaobingSuper @sanchitintel @ashokei @jingxu10 @jerryzh168 @gujinghui @EikanWang @fengyuan14 @guangyey
| true
|
2,916,245,741
|
[CI] Increase shards number for XPU ci UT tests
|
chuanqi129
|
open
|
[
"triaged",
"open source",
"ciflow/trunk",
"topic: not user facing",
"ciflow/xpu"
] | 1
|
COLLABORATOR
|
To reduce the CI time cost.
| true
|
2,916,119,766
|
Failed to install PyTorch 2.7 based on python 3.13t(free-threaded) on Windows OS
|
jameszhouyi
|
closed
|
[] | 0
|
NONE
|
### 🐛 Describe the bug
**Reproduce steps:**
conda create -n nogil2 --override-channels -c conda-forge python-freethreading
conda activate nogil2
pip install torch torchvision torchaudio --pre --index-url https://download.pytorch.org/whl/nightly/cu128
ERROR: Cannot install torchvision==0.22.0.dev20250226+cu128, torchvision==0.22.0.dev20250227+cu128, torchvision==0.22.0.dev20250228+cu128, torchvision==0.22.0.dev20250301+cu128, torchvision==0.22.0.dev20250302+cu128, torchvision==0.22.0.dev20250303+cu128, torchvision==0.22.0.dev20250304+cu128, torchvision==0.22.0.dev20250306+cu128, torchvision==0.22.0.dev20250307+cu128, torchvision==0.22.0.dev20250308+cu128, torchvision==0.22.0.dev20250309+cu128, torchvision==0.22.0.dev20250310+cu128, torchvision==0.22.0.dev20250311+cu128 and torchvision==0.22.0.dev20250312+cu128 because these package versions have conflicting dependencies.
The conflict is caused by:
torchvision 0.22.0.dev20250312+cu128 depends on numpy
torchvision 0.22.0.dev20250311+cu128 depends on numpy
torchvision 0.22.0.dev20250310+cu128 depends on numpy
torchvision 0.22.0.dev20250309+cu128 depends on numpy
torchvision 0.22.0.dev20250308+cu128 depends on numpy
torchvision 0.22.0.dev20250307+cu128 depends on numpy
torchvision 0.22.0.dev20250306+cu128 depends on numpy
torchvision 0.22.0.dev20250304+cu128 depends on numpy
torchvision 0.22.0.dev20250303+cu128 depends on numpy
torchvision 0.22.0.dev20250302+cu128 depends on numpy
torchvision 0.22.0.dev20250301+cu128 depends on numpy
torchvision 0.22.0.dev20250228+cu128 depends on numpy
torchvision 0.22.0.dev20250227+cu128 depends on numpy
torchvision 0.22.0.dev20250226+cu128 depends on numpy
To fix this you could try to:
1. loosen the range of package versions you've specified
2. remove package versions to allow pip to attempt to solve the dependency conflict
ERROR: ResolutionImpossible: for help visit https://pip.pypa.io/en/latest/topics/dependency-resolution/#dealing-with-dependency-conflicts
### Versions
Collecting environment information...
PyTorch version: N/A
Is debug build: N/A
CUDA used to build PyTorch: N/A
ROCM used to build PyTorch: N/A
OS: Microsoft Windows Server 2022 Datacenter Evaluation (10.0.20348 64-bit)
GCC version: Could not collect
Clang version: Could not collect
CMake version: Could not collect
Libc version: N/A
Python version: 3.13.2 experimental free-threading build | packaged by conda-forge | (main, Feb 17 2025, 13:52:36) [MSC v.1942 64 bit (AMD64)] (64-bit runtime)
Python platform: Windows-2022Server-10.0.20348-SP0
Is CUDA available: N/A
CUDA runtime version: Could not collect
CUDA_MODULE_LOADING set to: N/A
GPU models and configuration: Could not collect
Nvidia driver version: Could not collect
cuDNN version: Could not collect
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: N/A
CPU:
----------------------
Name: Intel(R) Xeon(R) Gold 6240 CPU @ 2.60GHz
Manufacturer: GenuineIntel
Family: 179
Architecture: 9
ProcessorType: 3
DeviceID: CPU0
CurrentClockSpeed: 2594
MaxClockSpeed: 2594
L2CacheSize: 18432
L2CacheSpeed: None
Revision: 21767
----------------------
Name: Intel(R) Xeon(R) Gold 6240 CPU @ 2.60GHz
Manufacturer: GenuineIntel
Family: 179
Architecture: 9
ProcessorType: 3
DeviceID: CPU1
CurrentClockSpeed: 2594
MaxClockSpeed: 2594
L2CacheSize: 18432
L2CacheSpeed: None
Revision: 21767
Versions of relevant libraries:
[pip3] No relevant packages
[conda] No relevant packages
| true
|
2,916,084,683
|
_foreach_copy_ doesn't support copy data between different devices (like cpu-cuda) in compile mode
|
pralay-das
|
closed
|
[
"triaged",
"module: mta",
"oncall: pt2"
] | 1
|
CONTRIBUTOR
|
### 🐛 Describe the bug
Currently, for the _foreach_copy op, if the self and src tensor lists have different device types (or vice versa), we get an error in compile mode, whereas in eager mode it works fine.
<img width="899" alt="Image" src="https://github.com/user-attachments/assets/2cd89f95-4650-427c-aeb6-3567ec4e9082" />
my test case
```python
import torch
def test_foreach_copy():
    h1 = [torch.randn(1,2), torch.randn(1,3)]
    h2 = [torch.randn(1,2), torch.randn(1,3)]

    def fn(h1, h2):
        return torch.ops.aten._foreach_copy(h1, h2)

    cpu_result = fn(h1, h2)
    print(cpu_result)

    fn = torch.compile(fn)
    h1[0] = h1[0].to('cuda')
    h1[1] = h1[1].to('cuda')
    test_cuda = fn(h1, h2)
    print("cuda result ", test_cuda[0])
    print("cuda result ", test_cuda[1])

test_foreach_copy()
```
### Versions
related issue: https://github.com/pytorch/pytorch/issues/111351
cc @crcrpar @mcarilli @janeyx99 @chauhang @penguinwu
| true
|
2,916,071,553
|
[XPU] Enable Windows CI/CD test for XPU
|
chuanqi129
|
open
|
[
"module: ci",
"triaged",
"enhancement",
"module: xpu"
] | 3
|
COLLABORATOR
|
According to https://github.com/pytorch/pytorch/issues/114850, the XPU Linux CI/CD build and tests have been set up. Currently, the XPU Windows CI/CD only covers the torch build and some basic smoke tests; no real XPU test cases are covered in CI/CD due to the lack of XPU Windows GHA runners. We're working on the runner solution and created this issue to track its progress.
cc @seemethere @malfet @pytorch/pytorch-dev-infra @gujinghui @EikanWang @fengyuan14 @guangyey
| true
|
2,916,024,532
|
Super tiny fix typo
|
fzyzcjy
|
closed
|
[
"open source",
"Merged",
"topic: not user facing",
"module: dynamo"
] | 6
|
CONTRIBUTOR
|
Fixes #ISSUE_NUMBER
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @chauhang @amjames
| true
|
2,915,985,341
|
Support return values in generators
|
fzyzcjy
|
open
|
[
"triaged",
"open source",
"module: dynamo",
"release notes: dynamo"
] | 4
|
CONTRIBUTOR
|
Fixes #149037
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @chauhang @amjames
| true
|
2,915,925,616
|
Update the baseline for max_autotune ci workflow
|
LifengWang
|
closed
|
[
"triaged",
"open source",
"Merged",
"ciflow/trunk",
"release notes: releng",
"module: dynamo"
] | 27
|
CONTRIBUTOR
|
Since issue https://github.com/pytorch/pytorch/issues/148535 was fixed by PR https://github.com/pytorch/pytorch/pull/148923, update the baseline for the max_autotune CI workflow.
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @chauhang @amjames
| true
|
2,915,853,340
|
Migrate aten.split.Tensor from using Sharding Rule to Sharding Strategy
|
mrmiywj
|
closed
|
[
"oncall: distributed",
"Merged",
"ciflow/trunk",
"topic: not user facing",
"ciflow/inductor",
"module: dtensor"
] | 6
|
CONTRIBUTOR
|
Summary:
Use Sharding Strategy for aten.split.Tensor instead of sharding rule
Test Plan:
pytest test/distributed/tensor/test_dtensor_ops.py -s -k split
Reviewers:
xilunwu
Subscribers:
Tasks:
Tags:
Fixes #ISSUE_NUMBER
cc @H-Huang @awgu @kwen2501 @wanchaol @fegin @fduwjj @wz337 @wconstab @d4l3k @c-p-i-o @tianyu-l @XilunWu
| true
|
2,915,822,009
|
[inductor] post grad graph with scatter_upon_const_tensor lowering is not runnable
|
xmfan
|
closed
|
[
"triaged",
"oncall: pt2",
"module: aotdispatch",
"module: pt2-dispatcher"
] | 2
|
MEMBER
|
### 🐛 Describe the bug
tlparse: https://fburl.com/fxa5v5rk
See this post_grad_graph: https://fburl.com/k6zd56mh
If we directly execute this graph, it will error here:
```python
# post-grad graph
where_2: "i64[512*s0, 1][1, 1]cuda:0" = torch.ops.aten.where.self(ne_257, unsqueeze_3, full_default_1); unsqueeze_3 = full_default_1 = None
scatter_upon_const_tensor: "f32[512*s0, 30000][30000, 1]cuda:0" = torch__inductor_fx_passes_post_grad_scatter_upon_const_tensor(shape = [mul_37, 30000], background_val = 0, dtype = torch.float32, dim = 1, selector = where_2, val = -1.0); where_2 = None
# post_grad.py
def scatter_upon_const_tensor(
    match: Match, shape, background_val, dtype, dim, selector, val
):
    """
    Match the pattern of full+scatter into a pointwise.

    TODO: Right now the scatter value must be a scalar. But we could support it
    when it is a tensor as well.
    """
    from torch._inductor import metrics

    metrics.num_matches_for_scatter_upon_const_tensor += 1

    selector_loader = selector.make_loader()  # <-- errors because selector is a Tensor in this case
```
An implication of it not being runnable is that we can't trace it under compiled autograd. It affects most cudagraphs_dynamic HF models.
Error message:
```python
File "/home/xmfan/core/a/pytorch/torch/_dynamo/compiled_autograd.py", line 859, in runtime_wrapper
return compiled_fn(inputs, sizes, scalars, hooks, packed_inputs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/xmfan/core/a/pytorch/torch/nn/modules/module.py", line 1751, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/xmfan/core/a/pytorch/torch/nn/modules/module.py", line 1762, in _call_impl
return forward_call(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/xmfan/core/a/pytorch/torch/_dynamo/eval_frame.py", line 655, in _fn
return fn(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^
File "/home/xmfan/core/a/pytorch/torch/fx/graph_module.py", line 830, in call_wrapped
return self._wrapped_call(self, *args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/xmfan/core/a/pytorch/torch/fx/graph_module.py", line 406, in __call__
raise e
File "/home/xmfan/core/a/pytorch/torch/fx/graph_module.py", line 393, in __call__
return super(self.cls, obj).__call__(*args, **kwargs) # type: ignore[misc]
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/xmfan/core/a/pytorch/torch/nn/modules/module.py", line 1751, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/xmfan/core/a/pytorch/torch/nn/modules/module.py", line 1762, in _call_impl
return forward_call(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "<eval_with_key>.22", line 4, in forward
def forward(self, inputs, sizes, scalars, hooks, packed_data):
File "/home/xmfan/core/a/pytorch/torch/_inductor/fx_passes/post_grad.py", line 380, in scatter_upon_const_tensor
selector_loader = selector.make_loader()
^^^^^^^^^^^^^^^^^^^^
AttributeError: 'Tensor' object has no attribute 'make_loader'
```
### Versions
main
cc @chauhang @penguinwu @zou3519 @bdhirsh @voznesenskym @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @chenyang78 @kadeng @muchulee8 @amjames @aakhundov
| true
|
2,915,820,743
|
ci: Fix check_binary gcc abi check
|
seemethere
|
closed
|
[
"Merged",
"topic: not user facing"
] | 3
|
MEMBER
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #149102
* __->__ #149104
All of our binaries should be built with the cxx11-abi now, so let's fix this check to reflect reality.
I also noticed that this particular script is not used widely since this
issue should've been caught in nightlies a long time ago.
Maybe worth an investigation to just remove this script if it's not
actually being used.
Signed-off-by: Eli Uriegas <github@terriblecode.com>
| true
|
2,915,803,241
|
[FSDP2] Add set_reshard_after_forward
|
mori360
|
closed
|
[
"oncall: distributed",
"Merged",
"ciflow/trunk",
"release notes: distributed (fsdp)",
"ciflow/inductor"
] | 12
|
CONTRIBUTOR
|
Fixes https://github.com/pytorch/pytorch/issues/149029
Add `set_reshard_after_forward` to set `post_forward_mesh_info`, which decides `_reshard_after_forward`.
Add a unit test similar to `test_fully_shard_communication_count`: the FSDPModule behaves as if `_reshard_after_forward=True` after calling `set_reshard_after_forward(True)`, and likewise when setting it to False.
cc @H-Huang @awgu @kwen2501 @wanchaol @fegin @fduwjj @wz337 @wconstab @d4l3k @c-p-i-o
| true
|
2,915,782,037
|
ci: Update linux_job references to v2
|
seemethere
|
closed
|
[
"Merged",
"ciflow/trunk",
"topic: not user facing"
] | 3
|
MEMBER
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #149102
* #149104
This is probably a bit overdue, but I'm trying to update these so we can finally get rid of all the remnants that rely on non-manylinux2_28 stuff and conda stuff.
Signed-off-by: Eli Uriegas <github@terriblecode.com>
| true
|
2,915,747,530
|
DISABLED test_donated_buffer1_dynamic_shapes (__main__.DynamicShapesAotAutogradFallbackTests)
|
pytorch-bot[bot]
|
closed
|
[
"triaged",
"module: flaky-tests",
"skipped",
"oncall: pt2",
"module: dynamo"
] | 5
|
NONE
|
Platforms: linux, mac, macos
This test was disabled because it is failing in CI. See [recent examples](https://hud.pytorch.org/flakytest?name=test_donated_buffer1_dynamic_shapes&suite=DynamicShapesAotAutogradFallbackTests&limit=100) and the most recent trunk [workflow logs](https://github.com/pytorch/pytorch/runs/38675144507).
Over the past 3 hours, it has been determined flaky in 7 workflow(s) with 14 failures and 7 successes.
**Debugging instructions (after clicking on the recent samples link):**
DO NOT ASSUME THINGS ARE OKAY IF THE CI IS GREEN. We now shield flaky tests from developers so CI will thus be green but it will be harder to parse the logs.
To find relevant log snippets:
1. Click on the workflow logs linked above
2. Click on the Test step of the job so that it is expanded. Otherwise, the grepping will not work.
3. Grep for `test_donated_buffer1_dynamic_shapes`
4. There should be several instances run (as flaky tests are rerun in CI) from which you can study the logs.
Test file path: `dynamo/test_dynamic_shapes.py`
cc @clee2000 @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @chauhang @amjames
| true
|
2,915,737,456
|
Memory leak when using get_model_state_dict with FSDP-sharded models
|
mertyg
|
open
|
[
"oncall: distributed",
"triaged",
"module: fsdp"
] | 13
|
NONE
|
### 🐛 Describe the bug
I'm attempting to use the FSDP2 API to shard a model, extract its state dictionary (for potential future use), and then completely remove the model from memory. Extracting the state dict somehow leaves references to the underlying model around, resulting in a memory leak. Below I'll reuse the test [here](https://github.com/pytorch/pytorch/blob/420a9be743f8dd5d6296a32a1351c1baced12f1f/test/distributed/_composable/fsdp/test_fully_shard_memory.py#L198) to demonstrate the issue.
When I add the step of using get_model_state_dict to extract the state dictionary (marked by `DIFF STARTS HERE` below), the model continues to occupy memory even after both the model and the state dictionary are explicitly deleted. This differs from the behavior in the original test, where memory is properly released.
This functionality is important especially in cases where we'd like to iteratively load a model, perform computation, offload it to CPU, and then reload it when necessary. If this procedure is repeated, it blows up the GPU memory.
Below is the code snippet to reproduce the behavior; you will see that the test fails as is, but does not fail if you simply comment out the part marked `DIFF STARTS HERE`.
```python
import gc
import torch
from torch.distributed.fsdp import fully_shard
from torch.testing._internal.common_fsdp import FSDPTest
from torch.testing._internal.common_utils import run_tests
from torch.testing._internal.distributed._tensor.common_dtensor import (
ModelArgs,
Transformer,
TransformerBlock,
)
import os
import torch
import gc
from torch.distributed import init_process_group
from datetime import timedelta
from torch.distributed.checkpoint.state_dict import get_model_state_dict, StateDictOptions
class TestFullyShardMemory(FSDPTest):
    @property
    def world_size(self) -> int:
        return min(2, torch.cuda.device_count())

    def _get_peak_active_memory_mb(self) -> int:
        mem_stats = torch.cuda.memory_stats()
        return round(mem_stats["active_bytes.all.peak"] / 1e6)

    def _get_curr_active_memory_mb(self) -> int:
        mem_stats = torch.cuda.memory_stats()
        return round(mem_stats["active_bytes.all.current"] / 1e6)

    def test_fully_shard_del_memory(self):
        base_mem_mb = self._get_peak_active_memory_mb()
        vocab_size = 32
        model_args = ModelArgs(
            vocab_size=vocab_size, n_layers=3, dim=768, n_heads=12, weight_tying=False
        )
        model = Transformer(model_args)
        # Initializing the model on CPU should not change the GPU memory usage
        post_model_init_mem_mb = self._get_peak_active_memory_mb()
        self.assertEqual(base_mem_mb, post_model_init_mem_mb)
        for module in model.modules():
            if isinstance(module, TransformerBlock):
                fully_shard(module)
        fully_shard(model)
        unsharded_numel = sum(p.numel() for p in model.parameters())
        sharded_numel = unsharded_numel // self.world_size
        buffer_mb = 4
        mem_mb = self._get_curr_active_memory_mb()
        expected_mb = sharded_numel * 4 / 1e6 + buffer_mb
        self.assertLessEqual(mem_mb - base_mem_mb, expected_mb)
        ### DIFF STARTS HERE ###
        sdo = StateDictOptions(full_state_dict=True, cpu_offload=True, broadcast_from_rank0=True)
        state_dict = get_model_state_dict(model, options=sdo)
        del state_dict
        ### DIFF ENDS HERE ###
        # Deleting the model should free all of the FSDP-managed GPU memory
        del model
        # Manually call garbage collection since there are ref cycles in FSDP
        gc.collect()
        torch.cuda.empty_cache()
        mem_mb = self._get_curr_active_memory_mb()
        print(f"Mem MB: {mem_mb}")
        print(f"Base Mem MB: {base_mem_mb}")
        self.assertEqual(mem_mb, base_mem_mb)

if __name__ == "__main__":
    init_process_group(backend="nccl", timeout=timedelta(hours=24))
    dst_rank = int(os.environ['RANK'])
    dst_local_rank = int(os.environ['LOCAL_RANK'])
    dst_world_size = int(os.environ['WORLD_SIZE'])
    device = f'cuda:{dst_local_rank}'
    run_tests()
```
### Versions
PyTorch version: 2.6.0+cu124
Is debug build: False
CUDA used to build PyTorch: 12.4
ROCM used to build PyTorch: N/A
OS: Ubuntu 20.04.6 LTS (x86_64)
GCC version: (Ubuntu 9.4.0-1ubuntu1~20.04.2) 9.4.0
Clang version: Could not collect
CMake version: version 3.16.3
Libc version: glibc-2.31
Python version: 3.10.16 (main, Dec 11 2024, 16:24:50) [GCC 11.2.0] (64-bit runtime)
Python platform: Linux-5.4.0-163-generic-x86_64-with-glibc2.31
Is CUDA available: True
CUDA runtime version: Could not collect
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration:
GPU 0: NVIDIA A100-SXM4-80GB
GPU 1: NVIDIA A100-SXM4-80GB
GPU 2: NVIDIA A100-SXM4-80GB
GPU 3: NVIDIA A100-SXM4-80GB
GPU 4: NVIDIA A100-SXM4-80GB
GPU 5: NVIDIA A100-SXM4-80GB
GPU 6: NVIDIA A100-SXM4-80GB
GPU 7: NVIDIA A100-SXM4-80GB
Nvidia driver version: 535.54.03
cuDNN version: Probably one of the following:
/usr/local/cuda-12.1/targets/x86_64-linux/lib/libcudnn.so.8.9.2
/usr/local/cuda-12.1/targets/x86_64-linux/lib/libcudnn_adv_infer.so.8.9.2
/usr/local/cuda-12.1/targets/x86_64-linux/lib/libcudnn_adv_train.so.8.9.2
/usr/local/cuda-12.1/targets/x86_64-linux/lib/libcudnn_cnn_infer.so.8.9.2
/usr/local/cuda-12.1/targets/x86_64-linux/lib/libcudnn_cnn_train.so.8.9.2
/usr/local/cuda-12.1/targets/x86_64-linux/lib/libcudnn_ops_infer.so.8.9.2
/usr/local/cuda-12.1/targets/x86_64-linux/lib/libcudnn_ops_train.so.8.9.2
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Byte Order: Little Endian
Address sizes: 48 bits physical, 48 bits virtual
CPU(s): 128
On-line CPU(s) list: 0-127
Thread(s) per core: 2
Core(s) per socket: 32
Socket(s): 2
NUMA node(s): 2
Vendor ID: AuthenticAMD
CPU family: 25
Model: 1
Model name: AMD EPYC 7543 32-Core Processor
Stepping: 1
Frequency boost: enabled
CPU MHz: 1499.953
CPU max MHz: 2800.0000
CPU min MHz: 1500.0000
BogoMIPS: 5600.18
Virtualization: AMD-V
L1d cache: 2 MiB
L1i cache: 2 MiB
L2 cache: 32 MiB
L3 cache: 512 MiB
NUMA node0 CPU(s): 0-31,64-95
NUMA node1 CPU(s): 32-63,96-127
Vulnerability Gather data sampling: Not affected
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Not affected
Vulnerability Retbleed: Not affected
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl and seccomp
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Retpolines, IBPB conditional, IBRS_FW, STIBP always-on, RSB filling, PBRSB-eIBRS Not affected
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ht syscall nx mmxext fxsr_opt pdpe1gb rdtscp lm constant_tsc rep_good nopl nonstop_tsc cpuid extd_apicid aperfmperf pni pclmulqdq monitor ssse3 fma cx16 pcid sse4_1 sse4_2 movbe popcnt aes xsave avx f16c rdrand lahf_lm cmp_legacy svm extapic cr8_legacy abm sse4a misalignsse 3dnowprefetch osvw ibs skinit wdt tce topoext perfctr_core perfctr_nb bpext perfctr_llc mwaitx cpb cat_l3 cdp_l3 invpcid_single hw_pstate ssbd mba ibrs ibpb stibp vmmcall fsgsbase bmi1 avx2 smep bmi2 erms invpcid cqm rdt_a rdseed adx smap clflushopt clwb sha_ni xsaveopt xsavec xgetbv1 xsaves cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local clzero irperf xsaveerptr wbnoinvd arat npt lbrv svm_lock nrip_save tsc_scale vmcb_clean flushbyasid decodeassists pausefilter pfthreshold v_vmsave_vmload vgif umip pku ospke vaes vpclmulqdq rdpid overflow_recov succor smca
Versions of relevant libraries:
[pip3] numpy==1.26.4
[pip3] nvidia-cublas-cu12==12.4.5.8
[pip3] nvidia-cuda-cupti-cu12==12.4.127
[pip3] nvidia-cuda-nvrtc-cu12==12.4.127
[pip3] nvidia-cuda-runtime-cu12==12.4.127
[pip3] nvidia-cudnn-cu12==9.1.0.70
[pip3] nvidia-cufft-cu12==11.2.1.3
[pip3] nvidia-curand-cu12==10.3.5.147
[pip3] nvidia-cusolver-cu12==11.6.1.9
[pip3] nvidia-cusparse-cu12==12.3.1.170
[pip3] nvidia-cusparselt-cu12==0.6.2
[pip3] nvidia-nccl-cu12==2.21.5
[pip3] nvidia-nvjitlink-cu12==12.4.127
[pip3] nvidia-nvtx-cu12==12.4.127
[pip3] torch==2.6.0
[pip3] torchaudio==2.5.1
[pip3] triton==3.2.0
[conda] numpy 1.26.4 pypi_0 pypi
[conda] nvidia-cublas-cu12 12.4.5.8 pypi_0 pypi
[conda] nvidia-cuda-cupti-cu12 12.4.127 pypi_0 pypi
[conda] nvidia-cuda-nvrtc-cu12 12.4.127 pypi_0 pypi
[conda] nvidia-cuda-runtime-cu12 12.4.127 pypi_0 pypi
[conda] nvidia-cudnn-cu12 9.1.0.70 pypi_0 pypi
[conda] nvidia-cufft-cu12 11.2.1.3 pypi_0 pypi
[conda] nvidia-curand-cu12 10.3.5.147 pypi_0 pypi
[conda] nvidia-cusolver-cu12 11.6.1.9 pypi_0 pypi
[conda] nvidia-cusparse-cu12 12.3.1.170 pypi_0 pypi
[conda] nvidia-cusparselt-cu12 0.6.2 pypi_0 pypi
[conda] nvidia-nccl-cu12 2.21.5 pypi_0 pypi
[conda] nvidia-nvjitlink-cu12 12.4.127 pypi_0 pypi
[conda] nvidia-nvtx-cu12 12.4.127 pypi_0 pypi
[conda] torch 2.6.0 pypi_0 pypi
[conda] torchaudio 2.5.1 pypi_0 pypi
[conda] triton 3.2.0 pypi_0 pypi
cc @H-Huang @awgu @wanchaol @fegin @fduwjj @wz337 @wconstab @d4l3k @zhaojuanmao @mrshenli @rohan-varma @chauhang @mori360 @kwen2501 @c-p-i-o
| true
|
2,915,721,044
|
[CI] Move ASAN jobs to clang-18
|
cyyever
|
open
|
[
"open source",
"topic: not user facing"
] | 4
|
COLLABORATOR
|
Use clang-18 for ASAN jobs.
FBGEMM has to be disabled because of the following error:
```
AddressSanitizer:DEADLYSIGNAL
#0 0x7f2c21dadef6 in fbgemm::EmbeddingSpMDMKernelSignature<float, long, long, float>::Type fbgemm::GenerateEmbeddingSpMDMWithStrides<float, long, long, float, false>(long, bool, bool, int, bool, bool, long, long, bool, bool, bool, bool)::'lambda1'(long, long, long, float const*, long const*, long const*, float const*, float*)::operator()(long, long, long, float const*, long const*, long const*, float const*, float*) const /var/lib/jenkins/workspace/third_party/fbgemm/src/EmbeddingSpMDM.cc:1146:14
#1 0x7f2c21dadef6 in float std::__invoke_impl<bool, fbgemm::EmbeddingSpMDMKernelSignature<float, long, long, float>::Type fbgemm::GenerateEmbeddingSpMDMWithStrides<float, long, long, float, false>(long, bool, bool, int, bool, bool, long, long, bool, bool, bool, bool)::'lambda1'(long, long, long, float const*, long const*, long const*, float const*, float*)&, long, long, long, float const*, long const*, long const*, float const*, float*>(std::__invoke_other, long&&, long&&, long&&, long&&, float const*&&, long const*&&, long const*&&, float const*&&, float*&&) /usr/bin/../lib/gcc/x86_64-linux-gnu/11/../../../../include/c++/11/bits/invoke.h:61:14
#2 0x7f2c21dadef6 in std::enable_if<is_invocable_r_v<float, long, long, long, long, float const*, long const*, long const*, float const*, float*>, float>::type std::__invoke_r<bool, fbgemm::EmbeddingSpMDMKernelSignature<float, long, long, float>::Type fbgemm::GenerateEmbeddingSpMDMWithStrides<float, long, long, float, false>(long, bool, bool, int, bool, bool, long, long, bool, bool, bool, bool)::'lambda1'(long, long, long, float const*, long const*, long const*, float const*, float*)&, long, long, long, float const*, long const*, long const*, float const*, float*>(long&&, long&&, long&&, long&&, float const*&&, long const*&&, long const*&&, float const*&&, float*&&) /usr/bin/../lib/gcc/x86_64-linux-gnu/11/../../../../include/c++/11/bits/invoke.h:114:9
#3 0x7f2c21dadef6 in std::_Function_handler<bool (long, long, long, float const*, long const*, long const*, float const*, float*), fbgemm::EmbeddingSpMDMKernelSignature<float, long, long, float>::Type fbgemm::GenerateEmbeddingSpMDMWithStrides<float, long, long, float, false>(long, bool, bool, int, bool, bool, long, long, bool, bool, bool, bool)::'lambda1'(long, long, long, float const*, long const*, long const*, float const*, float*)>::_M_invoke(std::_Any_data const&, long&&, long&&, long&&, float const*&&, long const*&&, long const*&&, float const*&&, float*&&) /usr/bin/../lib/gcc/x86_64-linux-gnu/11/../../../../include/c++/11/bits/std_function.h:290:9
#4 0x7f2c125f68c0 in std::function<bool (long, long, long, float const*, long const*, long const*, float const*, float*)>::operator()(long, long, long, float const*, long const*, long const*, float const*, float*) const /usr/bin/../lib/gcc/x86_64-linux-gnu/11/../../../../include/c++/11/bits/std_function.h:590:9
#5 0x7f2c125cb49b in std::enable_if<std::is_same_v<float, float>, void>::type at::native::(anonymous namespace)::index_select_add<float, long>(at::Tensor const&, at::Tensor const&, at::Tensor const&, at::Tensor&, at::Tensor const&, bool, at::Tensor&, long, at::native::_EmbeddingBagKernelCacheImpl<at::native::_CallbackAndBlockSize<true, int, float>, at::native::_CallbackAndBlockSize<false, int, float>, at::native::_CallbackAndBlockSize<true, long, float>, at::native::_CallbackAndBlockSize<false, long, float>, at::native::_CallbackAndBlockSize<true, int, unsigned short>, at::native::_CallbackAndBlockSize<false, int, unsigned short>, at::native::_CallbackAndBlockSize<true, long, unsigned short>, at::native::_CallbackAndBlockSize<false, long, unsigned short>>*)::'lambda'(long, long)::operator()(long, long) const /var/lib/jenkins/workspace/aten/src/ATen/native/EmbeddingBag.cpp:420:26
#6 0x7f2c125cbcb5 in void at::parallel_for<std::enable_if<std::is_same_v<float, float>, void>::type at::native::(anonymous namespace)::index_select_add<float, long>(at::Tensor const&, at::Tensor const&, at::Tensor const&, at::Tensor&, at::Tensor const&, bool, at::Tensor&, long, at::native::_EmbeddingBagKernelCacheImpl<at::native::_CallbackAndBlockSize<true, int, float>, at::native::_CallbackAndBlockSize<false, int, float>, at::native::_CallbackAndBlockSize<true, long, float>, at::native::_CallbackAndBlockSize<false, long, float>, at::native::_CallbackAndBlockSize<true, int, unsigned short>, at::native::_CallbackAndBlockSize<false, int, unsigned short>, at::native::_CallbackAndBlockSize<true, long, unsigned short>, at::native::_CallbackAndBlockSize<false, long, unsigned short>>*)::'lambda'(long, long)>(long, long, long, float const&)::'lambda'(long, long)::operator()(long, long) const /var/lib/jenkins/workspace/aten/src/ATen/Parallel-inl.h:36:9
#7 0x7f2c125cbcb5 in void at::internal::invoke_parallel<void at::parallel_for<std::enable_if<std::is_same_v<float, float>, void>::type at::native::(anonymous namespace)::index_select_add<float, long>(at::Tensor const&, at::Tensor const&, at::Tensor const&, at::Tensor&, at::Tensor const&, bool, at::Tensor&, long, at::native::_EmbeddingBagKernelCacheImpl<at::native::_CallbackAndBlockSize<true, int, float>, at::native::_CallbackAndBlockSize<false, int, float>, at::native::_CallbackAndBlockSize<true, long, float>, at::native::_CallbackAndBlockSize<false, long, float>, at::native::_CallbackAndBlockSize<true, int, unsigned short>, at::native::_CallbackAndBlockSize<false, int, unsigned short>, at::native::_CallbackAndBlockSize<true, long, unsigned short>, at::native::_CallbackAndBlockSize<false, long, unsigned short>>*)::'lambda'(long, long)>(long, long, long, float const&)::'lambda'(long, long)>(long, long, long, float const&) (.omp_outlined_debug__) /var/lib/jenkins/workspace/aten/src/ATen/ParallelOpenMP.h:41:9
#8 0x7f2c125cbcb5 in void at::internal::invoke_parallel<void at::parallel_for<std::enable_if<std::is_same_v<float, float>, void>::type at::native::(anonymous namespace)::index_select_add<float, long>(at::Tensor const&, at::Tensor const&, at::Tensor const&, at::Tensor&, at::Tensor const&, bool, at::Tensor&, long, at::native::_EmbeddingBagKernelCacheImpl<at::native::_CallbackAndBlockSize<true, int, float>, at::native::_CallbackAndBlockSize<false, int, float>, at::native::_CallbackAndBlockSize<true, long, float>, at::native::_CallbackAndBlockSize<false, long, float>, at::native::_CallbackAndBlockSize<true, int, unsigned short>, at::native::_CallbackAndBlockSize<false, int, unsigned short>, at::native::_CallbackAndBlockSize<true, long, unsigned short>, at::native::_CallbackAndBlockSize<false, long, unsigned short>>*)::'lambda'(long, long)>(long, long, long, float const&)::'lambda'(long, long)>(long, long, long, float const&) (.omp_outlined) /var/lib/jenkins/workspace/aten/src/ATen/ParallelOpenMP.h:25:1
#9 0x7f2c0293e052 in __kmp_invoke_microtask (/opt/conda/envs/py_3.10/lib/libiomp5.so+0x13e052) (BuildId: aeaedfeaee46a49fe8cb6a29e78b876ae77a7c20)
#10 0x7f2c028ba352 in __kmp_invoke_task_func (/opt/conda/envs/py_3.10/lib/libiomp5.so+0xba352) (BuildId: aeaedfeaee46a49fe8cb6a29e78b876ae77a7c20)
#11 0x7f2c028b9361 in __kmp_launch_thread (/opt/conda/envs/py_3.10/lib/libiomp5.so+0xb9361) (BuildId: aeaedfeaee46a49fe8cb6a29e78b876ae77a7c20)
#12 0x7f2c0293ecdb in _INTERNALdb99f3be::__kmp_launch_worker(void*) (/opt/conda/envs/py_3.10/lib/libiomp5.so+0x13ecdb) (BuildId: aeaedfeaee46a49fe8cb6a29e78b876ae77a7c20)
#13 0x7f2c3d4bd7b8 (/usr/lib/llvm-18/lib/clang/18/lib/linux/libclang_rt.asan-x86_64.so+0xf47b8) (BuildId: 4cd39e6608b20f2f5a148a941cd434e0cadcd3dc)
#14 0x7f2c3d12eac2 in start_thread nptl/pthread_create.c:442:8
#15 0x7f2c3d1c084f misc/../sysdeps/unix/sysv/linux/x86_64/clone3.S:81
AddressSanitizer can not provide additional info.
SUMMARY: AddressSanitizer: SEGV /var/lib/jenkins/workspace/third_party/fbgemm/src/EmbeddingSpMDM.cc:1146:14 in fbgemm::EmbeddingSpMDMKernelSignature<float, long, long, float>::Type fbgemm::GenerateEmbeddingSpMDMWithStrides<float, long, long, float, false>(long, bool, bool, int, bool, bool, long, long, bool, bool, bool, bool)::'lambda1'(long, long, long, float const*, long const*, long const*, float const*, float*)::operator()(long, long, long, float const*, long const*, long const*, float const*, float*) const
Thread T17 created by T0 here:
#0 0x7f2c3d4a5741 in pthread_create (/usr/lib/llvm-18/lib/clang/18/lib/linux/libclang_rt.asan-x86_64.so+0xdc741) (BuildId: 4cd39e6608b20f2f5a148a941cd434e0cadcd3dc)
#1 0x7f2c0293f61c in __kmp_create_worker (/opt/conda/envs/py_3.10/lib/libiomp5.so+0x13f61c) (BuildId: aeaedfeaee46a49fe8cb6a29e78b876ae77a7c20)
```
However, it's not easy to update the third-party FBGEMM.
| true
|
2,915,718,445
|
Add meta function for out variants of ones,zeros,empty
|
cz2h
|
closed
|
[
"triaged",
"open source",
"Merged",
"ciflow/trunk",
"topic: not user facing",
"module: dynamo"
] | 9
|
CONTRIBUTOR
|
Open another PR to fix merge conflicts. Fixes https://github.com/pytorch/pytorch/issues/135832
For aten.ones, aten.zeros, followed this [link](https://docs.google.com/document/d/1GgvOe7C8_NVOMLOCwDaYV1mXXyHMXY7ExoewHqooxrs/edit?tab=t.0#heading=h.64r4npvq0w0) to register meta functions.
For aten.empty.out, followed this [part](https://docs.google.com/document/d/1GgvOe7C8_NVOMLOCwDaYV1mXXyHMXY7ExoewHqooxrs/edit?tab=t.0#heading=h.iy9lxhxhtl5v) to register a decomp for empty that handles the FakeTensor input.
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @chauhang @amjames
| true
|
2,915,702,722
|
Aten arange behavior when dtype is int64 and step size is greater than range
|
satheeshhab
|
open
|
[
"triaged",
"actionable",
"module: python frontend"
] | 3
|
CONTRIBUTOR
|
### 🐛 Describe the bug
While testing corner cases on torch.arange, I see the following behavior when dtype is int64 and the step size is greater than the range.
On CPU, I get the following behavior for arange:
```
>>> a = torch.arange(0, 0.5, 1, dtype=torch.int64)
>>> a
tensor([], dtype=torch.int64)
>>> a = torch.arange(0, 0.5, 1, dtype=torch.int32)
>>> a
tensor([0], dtype=torch.int32)
```
Why is it that the size of `a` is 0 when dtype is int64, whereas it is 1 for int32? Logically speaking, the first element is 0 either way, so the size should be 1 even for the int64 type, shouldn't it?
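For completeness, a minimal standalone repro sketch of the comparison above (it only prints both results; the inline comments restate the behavior observed above):
```python
import torch

# Identical arguments; only the dtype differs.
a64 = torch.arange(0, 0.5, 1, dtype=torch.int64)
a32 = torch.arange(0, 0.5, 1, dtype=torch.int32)

print(a64, a64.numel())  # observed above: tensor([], dtype=torch.int64), numel 0
print(a32, a32.numel())  # observed above: tensor([0], dtype=torch.int32), numel 1
```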
### Versions
2025-03-13 05:10:24 (2.62 MB/s) - ‘collect_env.py’ saved [24353/24353]
Collecting environment information...
PyTorch version: 2.6.0+hpu_1.21.0-202.git603340c
Is debug build: False
CUDA used to build PyTorch: None
ROCM used to build PyTorch: N/A
OS: Ubuntu 22.04 LTS (x86_64)
GCC version: (Ubuntu 11.4.0-1ubuntu1~22.04) 11.4.0
Clang version: 14.0.5 (ssh://git@github.com/habana-internal/tpc_llvm10 6423f90703886aa37631daf63eaf24f24df9ba3d)
CMake version: version 3.29.5
Libc version: glibc-2.35
Python version: 3.10.12 (main, Feb 4 2025, 14:57:36) [GCC 11.4.0] (64-bit runtime)
Python platform: Linux-5.15.0-131-generic-x86_64-with-glibc2.35
Is CUDA available: False
CUDA runtime version: No CUDA
CUDA_MODULE_LOADING set to: N/A
GPU models and configuration: No CUDA
Nvidia driver version: No CUDA
cuDNN version: No CUDA
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 43 bits physical, 48 bits virtual
Byte Order: Little Endian
CPU(s): 12
On-line CPU(s) list: 0-11
Vendor ID: GenuineIntel
Model name: Intel(R) Xeon(R) Gold 6132 CPU @ 2.60GHz
CPU family: 6
Model: 85
Thread(s) per core: 1
Core(s) per socket: 6
Socket(s): 2
Stepping: 0
BogoMIPS: 5187.81
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ss ht syscall nx pdpe1gb rdtscp lm constant_tsc arch_perfmon nopl xtopology tsc_reliable nonstop_tsc cpuid tsc_known_freq pni pclmulqdq vmx ssse3 fma cx16 pcid sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand hypervisor lahf_lm abm 3dnowprefetch invpcid_single pti ssbd ibrs ibpb stibp tpr_shadow vnmi ept vpid ept_ad fsgsbase tsc_adjust bmi1 avx2 smep bmi2 invpcid avx512f avx512dq rdseed adx smap clflushopt clwb avx512cd avx512bw avx512vl xsaveopt xsavec xsaves arat pku ospke md_clear flush_l1d arch_capabilities
Virtualization: VT-x
Hypervisor vendor: VMware
Virtualization type: full
L1d cache: 384 KiB (12 instances)
L1i cache: 384 KiB (12 instances)
L2 cache: 12 MiB (12 instances)
L3 cache: 38.5 MiB (2 instances)
NUMA node(s): 1
NUMA node0 CPU(s): 0-11
Vulnerability Gather data sampling: Unknown: Dependent on hypervisor status
Vulnerability Itlb multihit: KVM: Mitigation: VMX disabled
Vulnerability L1tf: Mitigation; PTE Inversion; VMX flush not necessary, SMT disabled
Vulnerability Mds: Mitigation; Clear CPU buffers; SMT Host state unknown
Vulnerability Meltdown: Mitigation; PTI
Vulnerability Mmio stale data: Mitigation; Clear CPU buffers; SMT Host state unknown
Vulnerability Reg file data sampling: Not affected
Vulnerability Retbleed: Mitigation; IBRS
Vulnerability Spec rstack overflow: Not affected
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl and seccomp
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; IBRS; IBPB conditional; STIBP disabled; RSB filling; PBRSB-eIBRS Not affected; BHI SW loop, KVM SW loop
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Versions of relevant libraries:
[pip3] flake8==7.0.0
[pip3] habana-torch-dataloader==1.21.0+git9d09025dd
[pip3] habana-torch-plugin==1.21.0+git9d09025dd
[pip3] mypy-extensions==1.0.0
[pip3] numpy==1.26.4
[pip3] torch==2.6.0+hpu.1.21.0.202.git603340c
[pip3] torch_tb_profiler==0.4.0
[pip3] torchaudio==2.5.1a0+1661daf
[pip3] torchdata==0.9.0+d4bb3e6
[pip3] torchmetrics==1.2.1
[pip3] torchtext==0.18.0a0+9bed85d
[pip3] torchvision==0.20.1a0+3ac97aa
[conda] Could not collect
cc @albanD
| true
|
2,915,649,453
|
How to determine which part of torch.compile undergoes recompiling after caching
|
janak2
|
open
|
[
"triaged",
"oncall: pt2"
] | 2
|
NONE
|
### 🐛 Describe the bug
Thanks for the helpful blog: https://dev-discuss.pytorch.org/t/how-to-bring-compile-time-down-to-zero-our-plans-and-direction-may-14th-edition/2089
I am currently caching all 3 stages of the compiler but only seeing ~50% reduction in compile time.
How do I determine which part of the compilation is not being properly cached and so is recompiled every time?
P.S. I am interested in finding which part of the process recompiles and any techniques to avoid recompilation not mentioned here: https://pytorch.org/docs/stable/torch.compiler_troubleshooting.html#dealing-with-recompilations
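A minimal sketch of the recompile-logging artifact that seems relevant here (assuming the `torch._logging.set_logs` flags available in recent 2.x releases):
```python
import torch

# Log every Dynamo recompilation (with the guard that failed) and every graph break.
torch._logging.set_logs(recompiles=True, graph_breaks=True)
# Equivalently from the shell: TORCH_LOGS="recompiles,graph_breaks" python train.py

@torch.compile
def f(x):
    return x * 2

f(torch.randn(4))
f(torch.randn(8))  # a shape change may trigger a recompile, which is then logged with its reason
```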
### Error logs
_No response_
### Versions
torch 2.5
CUDA 12.4
GPU = A10G
cc @chauhang @penguinwu
| true
|
2,915,621,727
|
Unrestrict some onlyCPU tests
|
cyyever
|
open
|
[
"open source",
"topic: not user facing"
] | 4
|
COLLABORATOR
|
Run these tests on all devices to avoid divergent behaviour.
| true
|
2,915,618,684
|
How to skip backward specific steps in torch.compile
|
janak2
|
open
|
[
"triaged",
"oncall: pt2"
] | 3
|
NONE
|
### 🐛 Describe the bug
I couldn't find much documentation on how to skip backward-specific steps in torch.compile/AOT autograd.
Some info would be helpful.
### Error logs
_No response_
### Versions
NA
cc @chauhang @penguinwu
| true
|
2,915,600,607
|
[Distributed] Treat third-party devices with `set_rng_state()` and `get_rng_state` as CUDA-like devices when calling `manual_seed()`
|
shink
|
closed
|
[
"oncall: distributed",
"triaged",
"open source",
"topic: not user facing",
"module: dtensor",
"module: accelerator"
] | 25
|
CONTRIBUTOR
|
Fixes #148858
cc @H-Huang @awgu @kwen2501 @wanchaol @fegin @fduwjj @wz337 @wconstab @d4l3k @c-p-i-o @tianyu-l @XilunWu @albanD @guangyey @EikanWang
| true
|
2,915,596,294
|
Remove runtime dependency on packaging
|
atalman
|
closed
|
[
"Merged",
"topic: not user facing",
"module: inductor",
"ciflow/inductor"
] | 6
|
CONTRIBUTOR
|
Looks like after https://github.com/pytorch/pytorch/pull/148924
We are seeing this error in nightly test:
https://github.com/pytorch/pytorch/actions/runs/13806023728/job/38616861623
```
File "/Users/runner/work/_temp/anaconda/envs/test_conda_env/lib/python3.13/site-packages/torch/_inductor/pattern_matcher.py", line 79, in <module>
from .lowering import fallback_node_due_to_unsupported_type
File "/Users/runner/work/_temp/anaconda/envs/test_conda_env/lib/python3.13/site-packages/torch/_inductor/lowering.py", line 7024, in <module>
from . import kernel
File "/Users/runner/work/_temp/anaconda/envs/test_conda_env/lib/python3.13/site-packages/torch/_inductor/kernel/__init__.py", line 1, in <module>
from . import mm, mm_common, mm_plus_mm
File "/Users/runner/work/_temp/anaconda/envs/test_conda_env/lib/python3.13/site-packages/torch/_inductor/kernel/mm.py", line 6, in <module>
from packaging.version import Version
ModuleNotFoundError: No module named 'packaging'
```
Hence, removing the runtime dependency on `packaging`, since it may not be installed by default.
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
| true
|
2,915,569,982
|
Ignore missing-field-initializers warnings of Gemm::Arguments constructors
|
cyyever
|
closed
|
[
"open source",
"release notes: cuda",
"ciflow/periodic"
] | 2
|
COLLABORATOR
|
Fixes #ISSUE_NUMBER
| true
|
2,915,569,518
|
Better warning once for `cuDNN/MIOpen` not enabled
|
zeshengzong
|
open
|
[
"module: cudnn",
"module: tests",
"triaged"
] | 0
|
CONTRIBUTOR
|
### 🐛 Describe the bug
While running some tests, there are repeated warning messages about `cuDNN/MIOpen` not being enabled; it may be better to warn only once for users.
```bash
pytest test/test_dataloader.py
```

### Versions
Collecting environment information...
PyTorch version: 2.7.0a0+gitdb6fca9
Is debug build: True
CUDA used to build PyTorch: 12.6
ROCM used to build PyTorch: N/A
OS: Ubuntu 22.04.5 LTS (x86_64)
GCC version: (Ubuntu 11.4.0-1ubuntu1~22.04) 11.4.0
Clang version: 14.0.0-1ubuntu1.1
CMake version: version 3.31.4
Libc version: glibc-2.35
Python version: 3.12.0 | packaged by Anaconda, Inc. | (main, Oct 2 2023, 17:29:18) [GCC 11.2.0] (64-bit runtime)
Python platform: Linux-5.15.0-76-generic-x86_64-with-glibc2.35
Is CUDA available: True
CUDA runtime version: 12.6.68
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration:
GPU 0: Tesla T4
GPU 1: Tesla T4
Nvidia driver version: 560.35.03
cuDNN version: Could not collect
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 46 bits physical, 48 bits virtual
Byte Order: Little Endian
CPU(s): 16
On-line CPU(s) list: 0-15
Vendor ID: GenuineIntel
Model name: Intel(R) Xeon(R) Gold 6151 CPU @ 3.00GHz
CPU family: 6
Model: 85
Thread(s) per core: 2
Core(s) per socket: 8
Socket(s): 1
Stepping: 4
BogoMIPS: 6000.00
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ss ht syscall nx pdpe1gb rdtscp lm constant_tsc rep_good nopl xtopology nonstop_tsc cpuid tsc_known_freq pni pclmulqdq ssse3 fma cx16 pcid sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand hypervisor lahf_lm abm 3dnowprefetch invpcid_single pti ssbd ibrs ibpb stibp fsgsbase tsc_adjust bmi1 hle avx2 smep bmi2 erms invpcid rtm mpx avx512f avx512dq rdseed adx smap clflushopt clwb avx512cd avx512bw avx512vl xsaveopt xsavec xgetbv1 arat md_clear flush_l1d arch_capabilities
Hypervisor vendor: KVM
Virtualization type: full
L1d cache: 256 KiB (8 instances)
L1i cache: 256 KiB (8 instances)
L2 cache: 8 MiB (8 instances)
L3 cache: 24.8 MiB (1 instance)
NUMA node(s): 1
NUMA node0 CPU(s): 0-15
Vulnerability Itlb multihit: KVM: Mitigation: VMX unsupported
Vulnerability L1tf: Mitigation; PTE Inversion
Vulnerability Mds: Mitigation; Clear CPU buffers; SMT Host state unknown
Vulnerability Meltdown: Mitigation; PTI
Vulnerability Mmio stale data: Mitigation; Clear CPU buffers; SMT Host state unknown
Vulnerability Retbleed: Mitigation; IBRS
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl and seccomp
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; IBRS, IBPB conditional, STIBP conditional, RSB filling, PBRSB-eIBRS Not affected
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Mitigation; Clear CPU buffers; SMT Host state unknown
Versions of relevant libraries:
[pip3] flake8==6.1.0
[pip3] flake8-bugbear==23.3.23
[pip3] flake8-comprehensions==3.15.0
[pip3] flake8-executable==2.1.3
[pip3] flake8-logging-format==0.9.0
[pip3] flake8-pyi==23.3.1
[pip3] flake8-simplify==0.19.3
[pip3] mypy==1.14.0
[pip3] mypy-extensions==1.0.0
[pip3] numpy==2.1.0
[pip3] optree==0.13.0
[pip3] pytorch_openreg==1.0
[pip3] torch==2.7.0a0+gitdb6fca9
[pip3] triton==3.1.0
[conda] mkl-include 2024.2.2 pypi_0 pypi
[conda] mkl-static 2024.2.2 pypi_0 pypi
[conda] numpy 2.1.0 pypi_0 pypi
[conda] optree 0.13.0 pypi_0 pypi
[conda] pytorch-openreg 1.0 dev_0 <develop>
[conda] torch 2.7.0a0+gitdb6fca9 dev_0 <develop>
[conda] torchfix 0.4.0 pypi_0 pypi
[conda] triton 3.1.0 pypi_0 pypi
cc @csarofeen @ptrblck @xwang233 @eqy @mruberry @ZainRizvi
| true
|
2,915,502,100
|
Update Kineto Submodule
|
sraikund16
|
closed
|
[
"fb-exported",
"Merged",
"ciflow/trunk",
"topic: not user facing"
] | 4
|
CONTRIBUTOR
|
Summary: We have made a lot of changes in Kineto this month. It is a good idea to update the submodule now, especially since the roctracer-sdk change will be very large.
Test Plan: CI
Differential Revision: D71082829
| true
|
2,915,429,277
|
[ROCm][TunableOp] More TF32 support.
|
naromero77amd
|
closed
|
[
"module: rocm",
"triaged",
"open source",
"Merged",
"ciflow/trunk",
"topic: not user facing",
"ciflow/rocm",
"ciflow/rocm-mi300"
] | 5
|
COLLABORATOR
|
This PR includes additional enhancements to TF32 support in TunableOp.
- OpSignature now differentiates between float32 and tf32 data types.
- Offline tuning now supports TF32.
- Unit tests for online and offline tuning of TF32.
cc @jeffdaily @sunway513 @jithunnair-amd @pruthvistony @ROCmSupport @dllehr-amd @jataylo @hongxiayang
| true
|
2,915,429,092
|
[invoke_subgraph] Fake tensor prop caching
|
anijain2305
|
closed
|
[
"Merged",
"topic: not user facing",
"ciflow/inductor"
] | 2
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #148953
* #150036
* #149667
* __->__ #149087
Redoing https://github.com/pytorch/pytorch/pull/137808
| true
|
2,915,419,295
|
Fix B018 Useless Expressions in Multiple Files (#106571)
|
rocordemu
|
closed
|
[
"oncall: distributed",
"triaged",
"open source",
"module: inductor",
"module: dynamo",
"release notes: distributed (checkpoint)"
] | 5
|
NONE
|
### Description
This PR addresses `flake8-bugbear` `B018` warnings ("Found useless expression") by removing unused tuple and constant expressions in three files. These fixes clean up the codebase, reducing potential confusion and aligning with the linting goals of #106571. As a first-time contributor (coming from Node.js and learning Python), I’m excited to help improve PyTorch’s code quality!
### Changes
- **`torch/_dynamo/variables/ctx_manager.py`**
- **Issue**: `Found useless Tuple expression. Consider either assigning it to a variable or removing it.`
- **Fix**: Removed unnecessary tuple wrapper `(...,)` around a statement, keeping the side-effecting call intact.
- **`torch/_inductor/cudagraph_trees.py`**
- **Issue**: `Found useless Tuple expression. Consider either assigning it to a variable or removing it.`
- **Fix**: Removed unnecessary tuple wrapper `(...,)` around a statement, keeping the side-effecting call intact.
- **`torch/distributed/checkpoint/default_planner.py`**
- **Issue**: `Found useless Constant expression. Consider either assigning it to a variable or removing it.`
- **Fix**: Added a `return` statement before the standalone `True` expression, making it a meaningful return value.
### Details
- **Related Issue**: Fixes #106571
- **Linting Tool**: Verified with `flake8` and `flake8-bugbear`.
- **Testing**: Ran `pytest` locally to ensure no functional changes—only cleanup.
### Notes
Thanks to `@spzala`, `@Skylion007`, and `@zou3519` for maintaining this awesome project! Any feedback on my fixes or PR process is welcome—I’m here to learn and contribute.
---
cc @H-Huang @awgu @kwen2501 @wanchaol @fegin @fduwjj @wz337 @wconstab @d4l3k @c-p-i-o @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
| true
|
2,915,389,040
|
[AOTI] Re-enable AOTI cpp unit test
|
desertfire
|
closed
|
[
"Merged",
"topic: not user facing",
"ciflow/inductor"
] | 3
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #149085
Summary: test_inductor_aoti was removed by accident previously. Add it back.
| true
|
2,915,388,660
|
[WIP][dynamic shapes] use statically_known_true for _reshape_view_helper
|
pianpwk
|
closed
|
[
"fb-exported",
"release notes: fx",
"ciflow/inductor"
] | 2
|
CONTRIBUTOR
|
Differential Revision: D71081192
| true
|
2,915,263,113
|
Support missing bitwise onnx ops (__rshift__, __lshift__)
|
nlgranger
|
closed
|
[
"module: onnx",
"triaged"
] | 1
|
NONE
|
### 🚀 The feature, motivation and pitch
Some bitwise operations are not supported by onnx export (with or without dynamo).
So far I identified `__rshift__` and `__lshift__` -> [BitShift](https://github.com/onnx/onnx/blob/main/docs/Operators.md#BitShift)
Here is an MRE (minimal reproducible example) of the failed export:
```py
import math
import torch
class Gray(torch.nn.Module):
nbits: int = 32
def forward(self, gray: torch.Tensor):
shifts = [(0x1 << i) for i in range((math.ceil(math.log(self.nbits, 2)) - 1), -1, -1)]
for shift in shifts:
gray ^= gray >> shift
return gray
torch.onnx.export(
Gray(), # model to export
(torch.randint(0, 100, [100], dtype=torch.long)), # inputs of the model,
"my_model.onnx", # filename of the ONNX model
dynamo=True, # True or False to select the exporter to use
verbose=False,
)
```
### Alternatives
- Try to manually declare each missing op via the `custom_translation_table` argument of [torch.onnx.export](https://pytorch.org/docs/stable/onnx_torchscript.html#torch.onnx.export) as [documented in the tutorial](https://pytorch.org/tutorials/beginner/onnx/onnx_registry_tutorial.html). But it is cumbersome, error messages are cryptic and I cannot seem to work around it.
- Use other supported functions; for bit shift this is easy with `torch.bitwise_right_shift` (see the sketch below), though I don't know whether other ops are missing.
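A minimal sketch of that workaround applied to the module above, assuming the functional bit ops (`torch.bitwise_right_shift`, `^`) lower to supported ONNX ops in this exporter version:
```python
import math
import torch

class GrayWorkaround(torch.nn.Module):
    nbits: int = 32

    def forward(self, gray: torch.Tensor):
        shifts = [(0x1 << i) for i in range((math.ceil(math.log(self.nbits, 2)) - 1), -1, -1)]
        for shift in shifts:
            # Functional form instead of the >> operator.
            gray = gray ^ torch.bitwise_right_shift(gray, shift)
        return gray

torch.onnx.export(
    GrayWorkaround(),
    (torch.randint(0, 100, [100], dtype=torch.long),),
    "gray_workaround.onnx",
    dynamo=True,
)
```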
### Additional context
- https://github.com/pytorch/pytorch/issues/126194
- https://github.com/pytorch/pytorch/pull/84496/files
| true
|
2,915,238,748
|
BC fix for AOTIModelPackageLoader() constructor defaults
|
jbschlosser
|
closed
|
[
"Merged",
"ciflow/trunk",
"topic: bug fixes",
"ciflow/inductor",
"release notes: inductor",
"module: aotinductor"
] | 11
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #149082
The default value for `run_single_threaded` was wrongly specified in the .cpp file instead of the header, breaking C++-side instantiation of `AOTIModelPackageLoader` with no arguments. This PR fixes this and adds a test for the use case of running with `AOTIModelPackageLoader` instead of `AOTIModelContainerRunner` on the C++ side.
cc @desertfire @chenyang78 @penguinwu @yushangdi @benjaminglass1
| true
|
2,915,235,946
|
DISABLED test_destruct_before_terminate_pg (__main__.ProcessGroupNCCLGroupTest)
|
pytorch-bot[bot]
|
closed
|
[
"oncall: distributed",
"module: flaky-tests",
"skipped"
] | 2
|
NONE
|
Platforms: linux
This test was disabled because it is failing in CI. See [recent examples](https://hud.pytorch.org/flakytest?name=test_destruct_before_terminate_pg&suite=ProcessGroupNCCLGroupTest&limit=100) and the most recent trunk [workflow logs](https://github.com/pytorch/pytorch/runs/38650096870).
Over the past 3 hours, it has been determined flaky in 3 workflow(s) with 3 failures and 3 successes.
**Debugging instructions (after clicking on the recent samples link):**
DO NOT ASSUME THINGS ARE OKAY IF THE CI IS GREEN. We now shield flaky tests from developers so CI will thus be green but it will be harder to parse the logs.
To find relevant log snippets:
1. Click on the workflow logs linked above
2. Click on the Test step of the job so that it is expanded. Otherwise, the grepping will not work.
3. Grep for `test_destruct_before_terminate_pg`
4. There should be several instances run (as flaky tests are rerun in CI) from which you can study the logs.
<details><summary>Sample error message</summary>
```
Traceback (most recent call last):
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_distributed.py", line 605, in wrapper
self._join_processes(fn)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_distributed.py", line 845, in _join_processes
self._check_return_codes(elapsed_time)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_distributed.py", line 902, in _check_return_codes
self.assertEqual(
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 4094, in assertEqual
raise error_metas.pop()[0].to_error( # type: ignore[index]
AssertionError: Scalars are not equal!
Expected 0 but got -6.
Absolute difference: 6
Relative difference: inf
Expect process 1 exit code to match Process 0 exit code of 0, but got -6
```
</details>
Test file path: `distributed/test_c10d_nccl.py`
cc @H-Huang @awgu @kwen2501 @wanchaol @fegin @fduwjj @wz337 @wconstab @d4l3k @c-p-i-o @clee2000
| true
|
2,915,235,934
|
DISABLED test_aoti_debug_printer_codegen_cuda (__main__.AOTInductorTestABICompatibleGpu)
|
pytorch-bot[bot]
|
open
|
[
"triaged",
"module: flaky-tests",
"skipped",
"oncall: pt2",
"module: inductor"
] | 3
|
NONE
|
Platforms: inductor, rocm
This test was disabled because it is failing in CI. See [recent examples](https://hud.pytorch.org/flakytest?name=test_aoti_debug_printer_codegen_cuda&suite=AOTInductorTestABICompatibleGpu&limit=100) and the most recent trunk [workflow logs](https://github.com/pytorch/pytorch/runs/38660750148).
Over the past 3 hours, it has been determined flaky in 3 workflow(s) with 3 failures and 3 successes.
**Debugging instructions (after clicking on the recent samples link):**
DO NOT ASSUME THINGS ARE OKAY IF THE CI IS GREEN. We now shield flaky tests from developers so CI will thus be green but it will be harder to parse the logs.
To find relevant log snippets:
1. Click on the workflow logs linked above
2. Click on the Test step of the job so that it is expanded. Otherwise, the grepping will not work.
3. Grep for `test_aoti_debug_printer_codegen_cuda`
4. There should be several instances run (as flaky tests are rerun in CI) from which you can study the logs.
<details><summary>Sample error message</summary>
```
Traceback (most recent call last):
File "/var/lib/jenkins/pytorch/test/inductor/test_torchinductor.py", line 12849, in new_test
return value(self)
File "/var/lib/jenkins/pytorch/test/inductor/test_aot_inductor.py", line 3854, in test_aoti_debug_printer_codegen
).run(code)
RuntimeError: Expected to find "before_launch - triton_poi_fused_0" but did not find it
Searched string:
Auto-tuning code written to /tmp/tmpq9ijp3kx/tmpw9diey4o.py
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ <--- HERE
Output wrapper code:
From CHECK-COUNT-1: before_launch - triton_poi_fused_0
To execute this test, run the following from the base repo dir:
PYTORCH_TEST_WITH_ROCM=1 python test/inductor/test_aot_inductor.py AOTInductorTestABICompatibleGpu.test_aoti_debug_printer_codegen_cuda
This message can be suppressed by setting PYTORCH_PRINT_REPRO_ON_FAILURE=0
```
</details>
Test file path: `inductor/test_aot_inductor.py`
cc @clee2000 @chauhang @penguinwu @voznesenskym @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @chenyang78 @kadeng @muchulee8 @amjames @aakhundov
| true
|
2,915,235,826
|
DISABLED test_wrap_pytree_kwargs_dynamic_shapes (__main__.DynamicShapesHigherOrderOpTests)
|
pytorch-bot[bot]
|
closed
|
[
"triaged",
"module: flaky-tests",
"skipped",
"oncall: pt2",
"module: dynamo"
] | 4
|
NONE
|
Platforms: linux, slow, mac, macos
This test was disabled because it is failing in CI. See [recent examples](https://hud.pytorch.org/flakytest?name=test_wrap_pytree_kwargs_dynamic_shapes&suite=DynamicShapesHigherOrderOpTests&limit=100) and the most recent trunk [workflow logs](https://github.com/pytorch/pytorch/runs/38613577279).
Over the past 3 hours, it has been determined flaky in 8 workflow(s) with 16 failures and 8 successes.
**Debugging instructions (after clicking on the recent samples link):**
DO NOT ASSUME THINGS ARE OKAY IF THE CI IS GREEN. We now shield flaky tests from developers so CI will thus be green but it will be harder to parse the logs.
To find relevant log snippets:
1. Click on the workflow logs linked above
2. Click on the Test step of the job so that it is expanded. Otherwise, the grepping will not work.
3. Grep for `test_wrap_pytree_kwargs_dynamic_shapes`
4. There should be several instances run (as flaky tests are rerun in CI) from which you can study the logs.
<details><summary>Sample error message</summary>
```
Traceback (most recent call last):
File "/var/lib/jenkins/workspace/test/dynamo/test_higher_order_ops.py", line 453, in test_wrap_pytree_kwargs
self._test_wrap_simple(f, my_args_generator((x, y, (x, y))), arg_count)
File "/var/lib/jenkins/workspace/test/dynamo/test_higher_order_ops.py", line 191, in _test_wrap_simple
self.assertEqual(len(wrap_node.args), expected_num_wrap_args)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 4094, in assertEqual
raise error_metas.pop()[0].to_error( # type: ignore[index]
AssertionError: Scalars are not equal!
Expected 4 but got 7.
Absolute difference: 3
Relative difference: 0.75
To execute this test, run the following from the base repo dir:
python test/dynamo/test_dynamic_shapes.py DynamicShapesHigherOrderOpTests.test_wrap_pytree_kwargs_dynamic_shapes
This message can be suppressed by setting PYTORCH_PRINT_REPRO_ON_FAILURE=0
```
</details>
Test file path: `dynamo/test_dynamic_shapes.py`
cc @clee2000 @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @chauhang @amjames
| true
|
2,915,208,884
|
Remove torch.export.export_for_inference
|
gmagogsfm
|
closed
|
[
"module: bc-breaking",
"fb-exported",
"Merged",
"ciflow/trunk",
"topic: bc breaking",
"release notes: export"
] | 9
|
CONTRIBUTOR
|
Summary: Remove torch.export.export_for_inference; it is redundant and can always be replaced with torch.export.export_for_training() + run_decompositions().
Test Plan: unit tests
Differential Revision: D71069057
cc @ezyang @gchanan
| true
|
2,915,205,359
|
Fix outdated docstring of torch.export.export regarding strict flag
|
gmagogsfm
|
closed
|
[
"fb-exported",
"Merged",
"ciflow/trunk",
"release notes: export"
] | 6
|
CONTRIBUTOR
|
Summary: Fix outdated docstring of torch.export.export regarding strict flag
Test Plan: None, doc only change
Differential Revision: D71068215
| true
|
2,915,153,450
|
[ROCm] Improve softmax performance
|
doru1004
|
closed
|
[
"module: rocm",
"triaged",
"open source",
"Merged",
"ciflow/trunk",
"release notes: cuda",
"ciflow/rocm"
] | 6
|
CONTRIBUTOR
|
This patch improves the performance of softmax for 2D tensors by:
- using a softmax calculation which eliminates the increase of shared memory usage with the size of the tensor and relies on global memory accesses for the tensor data while still using shared memory for the actual reduction step (the shared memory used for the reduction is constant and does not increase with tensor size);
- for the final computation, replacing the division by the sum with multiplication by 1/sum, where 1/sum is computed as the last step of the warp reduction;
- replacing the use of the exp function with the __expf function.
The impact on numerical accuracy is within 1e-5 for half precision and 1e-7 for full precision.
The impact on performance on MI300X is an improvement of between 22% and 50% over current runtimes.
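A small Python-level sketch (not the CUDA kernel) of the reciprocal-multiply rewrite described above, just to illustrate the accuracy comparison:
```python
import torch

x = torch.randn(8, 4096, dtype=torch.float32)
e = torch.exp(x)

ref = e / e.sum(dim=-1, keepdim=True)                 # divide by the sum
opt = e * e.sum(dim=-1, keepdim=True).reciprocal()    # multiply by 1/sum instead

print((ref - opt).abs().max())  # expected to be tiny, on the order of 1e-7 for fp32
```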
cc @jeffdaily @sunway513 @jithunnair-amd @pruthvistony @ROCmSupport @dllehr-amd @jataylo @hongxiayang @naromero77amd
| true
|
2,915,135,434
|
xpu: target torch::xpurt not found linking with libtorch installed from XPU wheels
|
dvrogozh
|
closed
|
[
"module: cpp",
"triaged",
"module: xpu"
] | 6
|
CONTRIBUTOR
|
Consider that PyTorch XPU is installed on a newly configured system with:
```
# pip3 install --pre torch --index-url https://download.pytorch.org/whl/nightly/xpu
# pip3 list | grep torch
torch 2.7.0.dev20250312+xpu
```
Further, consider the use case where someone works on a C++ library/executable and wants to link with libtorch:
```
# touch sample.cpp
# cat CMakeLists.txt
cmake_minimum_required(VERSION 3.18)
project(sample)
find_package(Torch REQUIRED)
add_library(sample SHARED sample.cpp)
target_link_libraries(sample PUBLIC ${TORCH_LIBRARIES})
```
Trying to configure the above cmake script will lead to the following error:
```
$ cmake -DTorch_DIR=$(python3 -c "import torch; print(torch.utils.cmake_prefix_path)")/Torch .
CMake Warning at /home/dvrogozh/pytorch.xpu/lib/python3.12/site-packages/torch/share/cmake/Torch/TorchConfig.cmake:22 (message):
static library kineto_LIBRARY-NOTFOUND not found.
Call Stack (most recent call first):
/home/dvrogozh/pytorch.xpu/lib/python3.12/site-packages/torch/share/cmake/Torch/TorchConfig.cmake:125 (append_torchlib_if_found)
CMakeLists.txt:3 (find_package)
-- Configuring done (0.0s)
CMake Error at /home/dvrogozh/pytorch.xpu/lib/python3.12/site-packages/torch/share/cmake/Caffe2/Caffe2Targets.cmake:61 (set_target_properties):
The link interface of target "c10_xpu" contains:
torch::xpurt
but the target was not found. Possible reasons include:
* There is a typo in the target name.
* A find_package call is missing for an IMPORTED target.
* An ALIAS target is missing.
Call Stack (most recent call first):
/home/dvrogozh/pytorch.xpu/lib/python3.12/site-packages/torch/share/cmake/Caffe2/Caffe2Config.cmake:114 (include)
/home/dvrogozh/pytorch.xpu/lib/python3.12/site-packages/torch/share/cmake/Torch/TorchConfig.cmake:68 (find_package)
CMakeLists.txt:3 (find_package)
```
The reason for the failure is that in the above scenario the oneAPI environment was not installed and sourced. As a result, [FindSYCLToolkit.cmake](https://github.com/pytorch/pytorch/blob/891ba2ec8a3e2e71137fab4a8e91940a19c8272b/cmake/Modules/FindSYCLToolkit.cmake) fails to find SYCL (due to the way it's configured). Note also that after installing PyTorch XPU, the SYCL environment is actually available under the pypi installation:
```
$ find ~/pytorch.xpu/ -iname sycl
/home/dvrogozh/pytorch.xpu/include/sycl
/home/dvrogozh/pytorch.xpu/include/sycl/CL/sycl
$ find ~/pytorch.xpu/ -iname sycl*.hpp
/home/dvrogozh/pytorch.xpu/include/syclcompat/syclcompat.hpp
/home/dvrogozh/pytorch.xpu/include/sycl/sycl_span.hpp
/home/dvrogozh/pytorch.xpu/include/sycl/detail/sycl_mem_obj_allocator.hpp
/home/dvrogozh/pytorch.xpu/include/sycl/ext/intel/esimd/detail/sycl_util.hpp
/home/dvrogozh/pytorch.xpu/include/sycl/sycl.hpp
/home/dvrogozh/pytorch.xpu/include/sycl/CL/sycl.hpp
/home/dvrogozh/pytorch.xpu/include/syclcompat.hpp
$ find ~/pytorch.xpu/ -iname libsycl*.so
/home/dvrogozh/pytorch.xpu/lib/libsycl.so
/home/dvrogozh/pytorch.xpu/lib/libsycl_ur_trace_collector.so
/home/dvrogozh/pytorch.xpu/lib/libsycl-preview.so
```
Thus, for use cases where the DPC++ compiler is not needed, it should technically be possible to use the XPU environment installed from wheels for the build. **Do we need to fix FindSYCLToolkit.cmake to make such builds possible?**
CC: @gujinghui @EikanWang @fengyuan14 @guangyey @jgong5
cc @jbschlosser @gujinghui @EikanWang @fengyuan14 @guangyey
| true
|
2,915,114,371
|
[FSDP2] Update ignored_params docstring and add unit test
|
mori360
|
closed
|
[
"oncall: distributed",
"Merged",
"ciflow/trunk",
"release notes: distributed (fsdp)",
"ciflow/inductor"
] | 4
|
CONTRIBUTOR
|
Fixes https://github.com/pytorch/pytorch/issues/148242
ignored_params won't be moved to devices in full_shard(); update the docstring accordingly.
Add unit test `test_move_states_to_device_ignored_param_device` to show that ignored_params won't be moved during full_shard(), but will be after `model.cuda()`.
cc @H-Huang @awgu @kwen2501 @wanchaol @fegin @fduwjj @wz337 @wconstab @d4l3k @c-p-i-o
| true
|
2,915,108,119
|
[AOTI][refactor] Split MiniArrayRef into a separate header
|
desertfire
|
closed
|
[
"Merged",
"ciflow/trunk",
"topic: not user facing",
"ciflow/inductor"
] | 7
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #149073
Summary: MiniArrayRef is a common utility and will be used by the libtorch-free AOTI.
Differential Revision: [D71064657](https://our.internmc.facebook.com/intern/diff/D71064657)
| true
|
2,915,102,098
|
[compile] Switch off inference_mode for fake prop while compiling
|
anijain2305
|
closed
|
[
"oncall: distributed",
"topic: not user facing",
"module: dynamo",
"ciflow/inductor",
"keep-going"
] | 3
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #148953
* __->__ #149072
cc @H-Huang @awgu @kwen2501 @wanchaol @fegin @fduwjj @wz337 @wconstab @d4l3k @c-p-i-o @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @chauhang @amjames
| true
|
2,915,092,170
|
[DTensor] Fix`slice_backward` strategy, add `select_int`, `select_backward` strategies
|
awgu
|
closed
|
[
"oncall: distributed",
"open source",
"release notes: distributed (dtensor)"
] | 2
|
COLLABORATOR
|
For `slice_backward`:
1. `slice_backward` was missing `schema_info`, leading to a caching bug
2. We do not need to redistribute to replicate if a shard dim differs from the slice `dim`
For `select_int` and `select_backward`, we add strategies.
For `select_backward` and `slice_backward`, we need to specify that their concrete `input_sizes` arg (arg index 1) needs to be modified by adding an entry in `self.op_to_shape_and_stride_idx`.
cc @H-Huang @kwen2501 @wanchaol @fegin @fduwjj @wz337 @wconstab @d4l3k @c-p-i-o
| true
|
2,915,088,508
|
[DTensor] Fix `local_map` with multi-threading
|
awgu
|
closed
|
[
"oncall: distributed",
"open source",
"Merged",
"ciflow/trunk",
"release notes: distributed (dtensor)"
] | 6
|
COLLABORATOR
|
Using `nonlocal device_mesh` is not safe with multi-threading
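A generic illustration (not the DTensor code itself) of why mutating a captured variable via `nonlocal` is racy when the closure runs on multiple threads:
```python
import threading

def make_closure():
    captured = None  # stands in for the captured device_mesh

    def run(value, out, idx):
        nonlocal captured
        captured = value      # another thread can overwrite this...
        out[idx] = captured   # ...before this thread reads it back here

    return run

run = make_closure()
out = [None, None]
threads = [
    threading.Thread(target=run, args=("mesh_0", out, 0)),
    threading.Thread(target=run, args=("mesh_1", out, 1)),
]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(out)  # not guaranteed to be ["mesh_0", "mesh_1"]; the shared state can interleave
```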
cc @H-Huang @kwen2501 @wanchaol @fegin @fduwjj @wz337 @wconstab @d4l3k @c-p-i-o
| true
|
2,915,025,635
|
[FlexAttention] Allow caching of backwards func
|
drisspg
|
open
|
[
"ciflow/trunk",
"topic: not user facing",
"module: inductor",
"ciflow/inductor",
"module: flex attention"
] | 4
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #149069
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov @Chillee @yanboliang @BoyuanFeng
| true
|
2,915,015,838
|
[do-not-land] add tests
|
xmfan
|
open
|
[
"topic: not user facing",
"module: dynamo",
"ciflow/inductor"
] | 1
|
MEMBER
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #149068
* #149067
* #149066
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @chauhang @amjames
| true
|
2,915,015,718
|
[do-not-land] test decorator changes
|
xmfan
|
open
|
[
"module: dynamo",
"ciflow/inductor"
] | 2
|
MEMBER
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* (to be filled)
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @chauhang @amjames
| true
|
2,915,015,595
|
[do-not-land] test eval_frame changes
|
xmfan
|
open
|
[
"module: dynamo",
"ciflow/inductor"
] | 2
|
MEMBER
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #149068
* #149067
* __->__ #149066
| true
|
2,914,898,833
|
[ONNX Export] dynamic_shapes ignored during model export.
|
spkgyk
|
closed
|
[
"module: onnx",
"triaged"
] | 9
|
NONE
|
### 🐛 Describe the bug
```python
from torch.export import Dim
from pathlib import Path
import onnx
import onnxruntime
import torch
model = model
model.load_state_dict(checkpoint.get("state_dict"), strict=True)
model.eval()
with torch.no_grad():
data = torch.randn(1, 3, 256, 256)
torch_outputs = model(data)
example_inputs = (data.cuda(),)
batch_dim = Dim("batch_size", min=1, max=16)
onnx_program = torch.onnx.export(
model=model.cuda(),
args=example_inputs,
dynamo=True,
input_names=["images"],
output_names=["logits"],
opset_version=20,
dynamic_shapes=({0: batch_dim},),
)
onnx_program.optimize()
onnx_program.save(str(ONNX_MODEL))
del onnx_program
del model
onnx_model = onnx.load(str(ONNX_MODEL))
onnx.checker.check_model(onnx_model)
num_nodes = len(onnx_model.graph.node)
print(f"Number of nodes in the ONNX model: {num_nodes}")
# Inspect inputs
print("Model Inputs:")
for inp in onnx_model.graph.input:
dims = [dim.dim_value if dim.HasField("dim_value") else dim.dim_param for dim in inp.type.tensor_type.shape.dim]
print(f"{inp.name}: {dims}")
# Inspect outputs
print("\nModel Outputs:")
for out in onnx_model.graph.output:
dims = [dim.dim_value if dim.HasField("dim_value") else dim.dim_param for dim in out.type.tensor_type.shape.dim]
print(f"{out.name}: {dims}")
del onnx_model
onnx_inputs = [tensor.numpy(force=True) for tensor in example_inputs]
ort_session = onnxruntime.InferenceSession(str(ONNX_MODEL), providers=["CPUExecutionProvider"])
onnxruntime_input = {input_arg.name: input_value for input_arg, input_value in zip(ort_session.get_inputs(), onnx_inputs)}
# ONNX Runtime returns a list of outputs
onnxruntime_outputs = ort_session.run(None, onnxruntime_input)[0]
assert len(torch_outputs) == len(onnxruntime_outputs)
for torch_output, onnxruntime_output in zip(torch_outputs, onnxruntime_outputs):
torch.testing.assert_close(torch_output.cpu(), torch.tensor(onnxruntime_output))
print("All tests passed")
```
Code runs with the output:
```
FutureWarning: 'onnxscript.values.Op.param_schemas' is deprecated in version 0.1 and will be removed in the future. Please use '.op_signature' instead.
param_schemas = callee.param_schemas()
FutureWarning: 'onnxscript.values.OnnxFunction.param_schemas' is deprecated in version 0.1 and will be removed in the future. Please use '.op_signature' instead.
param_schemas = callee.param_schemas()
[torch.onnx] Obtain model graph for `CellSamWrapper([...]` with `torch.export.export(..., strict=False)`...
[torch.onnx] Obtain model graph for `CellSamWrapper([...]` with `torch.export.export(..., strict=False)`... ✅
[torch.onnx] Run decomposition...
[torch.onnx] Run decomposition... ✅
[torch.onnx] Translate the graph into ONNX...
[torch.onnx] Translate the graph into ONNX... ✅
Applied 112 of general pattern rewrite rules.
Number of nodes in the ONNX model: 1059
Model Inputs:
images: [1, 3, 256, 256]
Model Outputs:
logits: [1, 3, 256, 256]
All tests passed
```
I then try and test with a batch size of 4:
```python
from pathlib import Path
import numpy
import onnx
import onnxruntime
ROOT = Path(__file__).resolve().parent.parent
ONNX_MODEL = ROOT / "model.onnx"
onnx_model = onnx.load(str(ONNX_MODEL))
onnx_inputs = [numpy.random.randn(4, 3, 256, 256).astype(numpy.float32)]
ort_session = onnxruntime.InferenceSession(str(ONNX_MODEL), providers=["CPUExecutionProvider"])
onnxruntime_input = {input_arg.name: input_value for input_arg, input_value in zip(ort_session.get_inputs(), onnx_inputs)}
# ONNX Runtime returns a list of outputs
onnxruntime_outputs = ort_session.run(None, onnxruntime_input)[0]
```
which produces the error:
```
Traceback (most recent call last):
File "onnx_test.py", line 19, in <module>
onnxruntime_outputs = ort_session.run(None, onnxruntime_input)[0]
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File ".venv/lib/python3.12/site-packages/onnxruntime/capi/onnxruntime_inference_collection.py", line 270, in run
return self._sess.run(output_names, input_feed, run_options)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
onnxruntime.capi.onnxruntime_pybind11_state.InvalidArgument: [ONNXRuntimeError] : 2 : INVALID_ARGUMENT : Got invalid dimensions for input: images for the following indices
index: 0 Got: 4 Expected: 1
Please fix either the inputs/outputs or the model.
```
I have tried this on both Torch 2.6 and the nightly version. Am I doing something wrong?
### Versions
PyTorch version: 2.6.0+cu124
Is debug build: False
CUDA used to build PyTorch: 12.4
ROCM used to build PyTorch: N/A
OS: Ubuntu 24.04.2 LTS (x86_64)
GCC version: (Ubuntu 13.3.0-6ubuntu2~24.04) 13.3.0
Clang version: Could not collect
CMake version: Could not collect
Libc version: glibc-2.39
Python version: 3.12.9 (main, Feb 12 2025, 14:50:50) [Clang 19.1.6 ] (64-bit runtime)
Python platform: Linux-6.8.0-1021-gcp-x86_64-with-glibc2.39
Is CUDA available: True
CUDA runtime version: 12.8.61
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration: GPU 0: NVIDIA L4
Nvidia driver version: 570.86.10
cuDNN version: Probably one of the following:
/usr/lib/x86_64-linux-gnu/libcudnn.so.9.7.1
/usr/lib/x86_64-linux-gnu/libcudnn_adv.so.9.7.1
/usr/lib/x86_64-linux-gnu/libcudnn_cnn.so.9.7.1
/usr/lib/x86_64-linux-gnu/libcudnn_engines_precompiled.so.9.7.1
/usr/lib/x86_64-linux-gnu/libcudnn_engines_runtime_compiled.so.9.7.1
/usr/lib/x86_64-linux-gnu/libcudnn_graph.so.9.7.1
/usr/lib/x86_64-linux-gnu/libcudnn_heuristic.so.9.7.1
/usr/lib/x86_64-linux-gnu/libcudnn_ops.so.9.7.1
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 46 bits physical, 48 bits virtual
Byte Order: Little Endian
CPU(s): 8
On-line CPU(s) list: 0-7
Vendor ID: GenuineIntel
Model name: Intel(R) Xeon(R) CPU @ 2.20GHz
CPU family: 6
Model: 85
Thread(s) per core: 2
Core(s) per socket: 4
Socket(s): 1
Stepping: 7
BogoMIPS: 4400.41
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ss ht syscall nx pdpe1gb rdtscp lm constant_tsc rep_good nopl xtopology nonstop_tsc cpuid tsc_known_freq pni pclmulqdq ssse3 fma cx16 pcid sse4_1 sse4_2 x2apic movbe popcnt aes xsave avx f16c rdrand hypervisor lahf_lm abm 3dnowprefetch ssbd ibrs ibpb stibp ibrs_enhanced fsgsbase tsc_adjust bmi1 hle avx2 smep bmi2 erms invpcid rtm mpx avx512f avx512dq rdseed adx smap clflushopt clwb avx512cd avx512bw avx512vl xsaveopt xsavec xgetbv1 xsaves arat avx512_vnni md_clear arch_capabilities
Hypervisor vendor: KVM
Virtualization type: full
L1d cache: 128 KiB (4 instances)
L1i cache: 128 KiB (4 instances)
L2 cache: 4 MiB (4 instances)
L3 cache: 38.5 MiB (1 instance)
NUMA node(s): 1
NUMA node0 CPU(s): 0-7
Vulnerability Gather data sampling: Not affected
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Vulnerable: Clear CPU buffers attempted, no microcode; SMT Host state unknown
Vulnerability Reg file data sampling: Not affected
Vulnerability Retbleed: Mitigation; Enhanced IBRS
Vulnerability Spec rstack overflow: Not affected
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Enhanced / Automatic IBRS; IBPB conditional; RSB filling; PBRSB-eIBRS SW sequence; BHI SW loop, KVM SW loop
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Vulnerable: Clear CPU buffers attempted, no microcode; SMT Host state unknown
Versions of relevant libraries:
[pip3] numpy==1.26.4
[pip3] nvidia-cublas-cu12==12.4.5.8
[pip3] nvidia-cuda-cupti-cu12==12.4.127
[pip3] nvidia-cuda-nvrtc-cu12==12.4.127
[pip3] nvidia-cuda-runtime-cu12==12.4.127
[pip3] nvidia-cudnn-cu12==9.1.0.70
[pip3] nvidia-cufft-cu12==11.2.1.3
[pip3] nvidia-curand-cu12==10.3.5.147
[pip3] nvidia-cusolver-cu12==11.6.1.9
[pip3] nvidia-cusparse-cu12==12.3.1.170
[pip3] nvidia-cusparselt-cu12==0.6.2
[pip3] nvidia-nccl-cu12==2.21.5
[pip3] nvidia-nvjitlink-cu12==12.4.127
[pip3] nvidia-nvtx-cu12==12.4.127
[pip3] onnx==1.17.0
[pip3] onnx_graphsurgeon==0.5.6
[pip3] onnxruntime-gpu==1.21.0
[pip3] onnxscript==0.2.2
[pip3] pytorch-ignite==0.5.1
[pip3] torch==2.6.0
[pip3] torchvision==0.21.0
[pip3] triton==3.2.0
[pip3] tritonclient==2.55.0
| true
|
2,914,889,687
|
[ca] clean up aot node deduping
|
xmfan
|
closed
|
[
"Merged",
"ciflow/trunk",
"topic: not user facing",
"module: dynamo",
"ciflow/inductor",
"module: compiled autograd"
] | 5
|
MEMBER
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #149229
* #149014
* __->__ #149064
Rename the AOT nodes as we copy-paste them into the CA graph.
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @chauhang @amjames
| true
|
2,914,774,055
|
Consolidate torchbind fake class registration
|
yushangdi
|
closed
|
[
"fb-exported",
"Merged",
"ciflow/trunk",
"topic: not user facing"
] | 7
|
CONTRIBUTOR
|
Summary: Remove duplicated fake class registration
Test Plan: CI
Differential Revision: D71052419
| true
|
2,914,730,532
|
Reserve customized modules in torch.compile's dynamo tracer
|
trajepl
|
open
|
[
"triaged",
"oncall: pt2",
"module: dynamo"
] | 2
|
NONE
|
### 🚀 The feature, motivation and pitch
## Description
Hi PyTorch team,
I’ve encountered an issue with torch.compile when working with customized modules. Specifically, torch.compile tends to step into customized modules and decompose them into built-in functions and modules. This leads to the loss of the original module information, which makes it difficult to replace the modules or analyze the computation graph in torch.fx.
torch.fx.symbolic_trace has a similar issue, but it provides the `is_leaf_module` interface ([docs](https://pytorch.org/docs/stable/fx.html#torch.fx.Tracer.is_leaf_module)) that allows developers to treat certain modules as leaf nodes and prevent them from being decomposed. This makes it possible to retain the module identity within the traced graph.
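For reference, a minimal sketch of that existing fx-side mechanism (a custom Tracer overriding `is_leaf_module`; the module names are made up for illustration):
```python
import torch
import torch.fx as fx
import torch.nn as nn

class MyOp(nn.Module):
    def forward(self, x):
        return nn.functional.relu(x) + 1

class KeepMyOpTracer(fx.Tracer):
    def is_leaf_module(self, m: nn.Module, module_qualified_name: str) -> bool:
        # Treat MyOp as an opaque leaf so it appears as a single call_module node.
        return isinstance(m, MyOp) or super().is_leaf_module(m, module_qualified_name)

class Net(nn.Module):
    def __init__(self):
        super().__init__()
        self.op = MyOp()

    def forward(self, x):
        return self.op(x)

net = Net()
graph = KeepMyOpTracer().trace(net)
print(fx.GraphModule(net, graph).graph)  # shows a call_module node targeting 'op'
```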
However, in my case, torch.fx.symbolic_trace doesn't work well because my model contains many custom operators that are not compatible with symbolic tracing.
## Feature Request
Would it be possible to introduce a similar `is_leaf_module` interface for torch.compile? This would give developers more control over how torch.compile treats custom modules, helping preserve module structure and improving the flexibility of graph transformations.
## Example
Here's a minimal example to illustrate the issue:
```python
import torch
import torch.nn as nn
class CustomModule(nn.Module):
def __init__(self):
super().__init__()
def forward(self, x):
return nn.functional.linear(x, torch.randn(10, 10))
class Model(nn.Module):
def __init__(self):
super().__init__()
self.custom = CustomModule()
def forward(self, x):
return self.custom(x)
model = Model()
compiled_model = torch.compile(model)
input = torch.randn(1, 10)
output = compiled_model(input)
print(output)
```
### Current Behavior
Now, the fx graph produced by torch.compile looks like:
```
opcode name target args kwargs
------------- ------ -------------------------------------------------------- ------------- --------
placeholder l_x_ L_x_ () {}
call_function randn <built-in method randn of type object at 0x7f87b3f94380> (10, 10) {}
call_function linear <built-in function linear> (l_x_, randn) {}
output output output ((linear,),) {}
```
### Expected Behavior
If torch.compile had an `is_leaf_module`-like interface, we could define a custom function to treat CustomModule as a leaf. For example
```python
def my_leaf_module_check_func(module):
return isinstance(module, CustomModule)
compiled_model = torch.compile(model, options={'is_leaf_module': my_leaf_module_check_func})
```
The expected graph would preserve the CustomModule identity:
```
opcode name target args kwargs
----------- ---------------- ---------------- ---------------------- --------
placeholder l_x_ L_x_ () {}
call_module l__self___custom L__self___custom (l_x_,) {}
output output output ((l__self___custom,),) {}
```
### Alternatives
_No response_
### Additional context
_No response_
cc @chauhang @penguinwu @voznesenskym @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @amjames
| true
|
2,914,704,783
|
Best way to disable "fx graph cache hit for key"?
|
henrylhtsang
|
closed
|
[
"triaged",
"module: fx.passes",
"module: inductor"
] | 1
|
CONTRIBUTOR
|
I have a possibly niche use case:
* I might rerun the same run a few times
* So I will run into "fx graph cache hit for key"
* I want to see precompilation and autotuning in the logs
* So I want to bypass fx graph cache
* Want to avoid having to C++ compile the kernel again (codecache does that), since C++ compile is long
* So I can't force disable caches
If I run the same run twice, I will see “fx graph cache hit for key” from the second run onwards. I tried disabling all the caches in the configs (autotune_local_cache, autotune_remote_cache, bundled_autotune_remote_cache), but that didn't work.
I can get around it with something like
```
torch._inductor.config.cuda.cutlass_op_denylist_regex = uuid.uuid4().hex
```
since I know that config doesn’t take effect on my run.
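A minimal sketch of the same idea wrapped as a helper; the choice of `cutlass_op_denylist_regex` as the salt is taken from the hack above and assumes that config feeds the fx graph cache key but is otherwise unused in the run:
```python
import uuid

import torch
import torch._inductor.config

def salt_fx_graph_cache_key() -> None:
    # Perturb a config value that participates in the fx graph cache key but
    # does not affect this particular run, so the graph cache misses while the
    # kernel-level caches (C++ compile, Triton) still hit.
    torch._inductor.config.cuda.cutlass_op_denylist_regex = uuid.uuid4().hex

salt_fx_graph_cache_key()
# compiled = torch.compile(model)  # `model` defined elsewhere
```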
Question:
Is there a better way to do it?
Is there any point in adding a knob to control it? Or am I better off sticking to my hack?
cc @chauhang @penguinwu @voznesenskym @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @chenyang78 @kadeng @muchulee8 @amjames @aakhundov
| true
|
2,914,675,068
|
aot autograd cache causes TORCH_LOGS=aot to not print out the aot_graphs
|
zou3519
|
open
|
[
"module: logging",
"triaged",
"oncall: pt2",
"module: aotdispatch",
"compile-cache"
] | 4
|
CONTRIBUTOR
|
I think the main problem is that I don't know how to disable the AOTAutograd cache, but we should have some sort of recommended workflow for seeing the aot graphs in this case.
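A hedged sketch of one possible workflow (assumptions: forcing misses across all PT2 compile caches is acceptable while debugging, and `TORCHINDUCTOR_FORCE_DISABLE_CACHES` covers the AOTAutograd cache):
```python
import os

# Must be set before the first torch.compile invocation in the process.
os.environ["TORCHINDUCTOR_FORCE_DISABLE_CACHES"] = "1"

import torch

torch._logging.set_logs(aot_graphs=True)  # same artifact as TORCH_LOGS=aot_graphs

@torch.compile
def f(x):
    return x.sin().cos()

f(torch.randn(4))  # aot graphs should now be printed on every run
```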
cc @oulgen @jamesjwu @masnesral @chauhang @penguinwu @bdhirsh
| true
|
2,914,659,921
|
[inductor] Fix profiler tests with latest Triton
|
pytorchbot
|
closed
|
[
"open source",
"topic: not user facing",
"module: inductor",
"ciflow/inductor"
] | 1
|
COLLABORATOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #149025
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
| true
|
2,914,655,899
|
[pytorch] Fix duplicated Malloc/Free insertion when using IRBuilderBase::CreateMalloc/CreateFree in LLVM 18+
|
HighW4y2H3ll
|
closed
|
[
"oncall: jit",
"fb-exported",
"Merged",
"NNC",
"ciflow/trunk",
"release notes: jit"
] | 4
|
CONTRIBUTOR
|
Summary:
A PyTorch unit test hangs when jitting the Tensor kernel. The problem exists for LLVM versions >= 18 due to this upstream change: https://github.com/llvm/llvm-project/commit/45bb45f2ae89df6c0e54ead2258764ec91f5f5f5
`IRBuilderBase::CreateCall` will insert the instruction into the BasicBlock by default. And we don't need to explicitly insert the instruction when compiling the tensor kernel.
Test Plan:
## Test with the release toolchain
```
buck test 'mode/dev' //caffe2/test:jit -- --exact 'caffe2/test:jit - test_concat_invariant (test_jit_fuser_te.TestTEFuserDynamic)'
```
## Test with the Buckified toolchain
Apply this D71046097 to select the LLVM libraries.
```
# Build tests
buck build 'mode/dev-asan' //caffe2/test:jit --show-output
```
```
# Run test (Change HASH and paths accordingly)
HASH="b755f1c435832a1e"
ENABLE_FLATBUFFER=0 FB_OVERRIDE_PYBIND11_GIL_INCREF_DECREF_CHECK=1 MKL_NUM_THREADS=1 NO_MULTIPROCESSING_SPAWN=0 OMP_NUM_THREADS=1 PYTORCH_TEST=1 PYTORCH_TEST_FBCODE=1 PYTORCH_TEST_WITH_ASAN=1 PYTORCH_TEST_WITH_DEV_DBG_ASAN=1 PYTORCH_TEST_WITH_TSAN=0 PYTORCH_TEST_WITH_UBSAN=1 SKIP_TEST_BOTTLENECK=1 TENSORPIPE_TLS_DATACENTER=test_dc TEST_PILOT=True TPX_IS_TEST_EXECUTION=true TPX_TIMEOUT_SEC=6000 \
buck-out/v2/gen/$HASH/caffe2/test/__jit__/jit.par --test-filter test_jit_fuser_te.TestTEFuserDynamic.test_concat_invariant
```
Differential Revision: D71046799
cc @EikanWang @jgong5 @wenzhe-nrv @sanchitintel
| true
|
2,914,631,285
|
Symintify transpose_
|
angelayi
|
closed
|
[
"Merged",
"ciflow/trunk",
"topic: not user facing"
] | 7
|
CONTRIBUTOR
|
Fixes https://github.com/pytorch/pytorch/issues/148702
| true
|
2,914,558,296
|
[RELEASE ONLY CHANGES] Apply release only changes to release 2.7
|
atalman
|
closed
|
[
"module: rocm",
"release notes: releng",
"ciflow/inductor"
] | 1
|
CONTRIBUTOR
|
Same as: https://github.com/pytorch/pytorch/pull/143085
Generated by: ``scripts/release/apply-release-changes.sh``
cc @jeffdaily @sunway513 @jithunnair-amd @pruthvistony @ROCmSupport @dllehr-amd @jataylo @hongxiayang @naromero77amd
| true
|
2,914,546,559
|
fix untyped decorator lints
|
aorenste
|
open
|
[
"oncall: distributed",
"oncall: jit",
"release notes: quantization",
"fx",
"module: inductor",
"module: dynamo",
"ciflow/inductor",
"release notes: distributed (checkpoint)",
"release notes: export"
] | 1
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #149055
cc @H-Huang @awgu @kwen2501 @wanchaol @fegin @fduwjj @wz337 @wconstab @d4l3k @c-p-i-o @EikanWang @jgong5 @wenzhe-nrv @sanchitintel @ezyang @SherlockNoMad @voznesenskym @penguinwu @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @jiayisunx @ipiszy @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
| true
|
2,914,469,598
|
Store statically launchable CachingAutotuners inside CompiledFXGraph.triton_bundle
|
jamesjwu
|
closed
|
[
"Merged",
"Reverted",
"ciflow/trunk",
"topic: not user facing",
"module: inductor",
"ciflow/inductor",
"ci-no-td"
] | 24
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #149054
This PR adds CachingAutotuners that are statically launchable to FXGraphCache's cache entry.
Regular CachingAutotuners, with triton kernels attached to them, are not very good to cache: they are very large, and take huge amounts of space since they track all of the various binary files, along with various metadata. We could probably figure out what information we could delete from the kernel and have it still work, but with StaticCudaLauncher, we no longer have to. Instead, we can cache every compiled triton kernel that is statically launchable.
Because StaticTritonCompileResult is serializable, and designed to have a very small memory footprint, we can save it into FXGraphCache without increasing the cache size significantly. We store it as a part of CompiledFxGraph.triton_bundle.
Then, on load, we repopulate the CachingAutotuner into our CompiledTritonKernel cache.
The upsides of this are many:
- We no longer need to call into a separate process on cache hit
- We can *guarantee* that the triton kernel we got from our cache entry is the one we use to launch again, so no worries about triton's own caching logic
- Once we achieve feature parity and all torch.compiled triton kernels are statically launchable, we can clean up a bunch of TritonBundler code and simplify the cache hit logic.
Fixes #149449
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
| true
|
2,914,455,680
|
[CI] Fix xpu linux test permission issue and add ci docker image pull
|
chuanqi129
|
closed
|
[
"open source",
"Merged",
"ciflow/trunk",
"topic: not user facing"
] | 9
|
COLLABORATOR
|
Fixes #ISSUE_NUMBER
| true
|
2,914,444,493
|
Use schema as source of truth + support ones_like/empty_like
|
janeyx99
|
closed
|
[
"Merged",
"release notes: cpp",
"ciflow/inductor"
] | 5
|
CONTRIBUTOR
|
This change does 2 important things:
(a) Instead of relying on the IValue type as the source of truth, we use the schema as the source of truth, which is important as IValue types are overloaded and can ambiguously convert incorrectly. For example, a MemoryFormat will look like an int and get converted to an int64_t instead of a MemoryFormat!
(b) This PR expands support for many more types to encompass way more schemas, e.g., Optional, Device, dtype, etc. The main win from this PR is the ability for aoti_torch_call_dispatcher to call TensorFactory ops like ones_like/empty_like!
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #149230
* __->__ #149052
| true
|
2,914,405,440
|
Fix checkout on xpu?
|
clee2000
|
closed
|
[] | 2
|
CONTRIBUTOR
|
Fixes #ISSUE_NUMBER
| true
|
2,914,292,708
|
DISABLED test_cond_autograd_zeros_unused_branch_complex_compile_mode_compile (__main__.TestControlFlow)
|
jithunnair-amd
|
closed
|
[
"module: rocm",
"triaged",
"skipped"
] | 2
|
COLLABORATOR
|
Re-attempting skip from https://github.com/pytorch/pytorch/issues/148308
Platforms: rocm
This test was disabled because it is failing on main branch ([recent examples](https://hud.pytorch.org/failure?name=rocm-mi300%20%2F%20linux-focal-rocm6.3-py3.10%20%2F%20test%20(default%2C%206%2C%206%2C%20linux.rocm.gpu.mi300.2)&jobName=undefined&failureCaptures=functorch%2Ftest_control_flow.py%3A%3ATestControlFlow%3A%3Atest_cond_autograd_zeros_unused_branch_complex_compile_mode_compile)).
Appears to be a flaky failure for ROCm.
cc @jeffdaily @sunway513 @pruthvistony @ROCmSupport @dllehr-amd @jataylo @hongxiayang @naromero77amd
| true
|
2,914,289,934
|
`torch.distributed` should support "meta" device tensors
|
slitvinov
|
open
|
[
"oncall: distributed",
"triaged"
] | 0
|
NONE
|
The example below fails. Supporting meta tensors here would be a great debugging tool for checking the metadata of tensors, especially given how hard distributed programs are to debug.
https://pytorch.org/docs/stable/meta.html
```
$ cat meta.py
import torch
import torch.distributed as dist
import torch.distributed.elastic.multiprocessing.errors
@dist.elastic.multiprocessing.errors.record
def main():
dist.init_process_group()
rank = dist.get_rank()
size = dist.get_world_size()
if rank == 0:
x = torch.tensor(123, device="meta")
dist.send(x, 1)
elif rank == 1:
x = torch.tensor(0, device="meta")
dist.recv(x, 0)
else:
x = None
for i in range(size):
if rank == i:
print(f"{x=}")
dist.barrier()
dist.destroy_process_group()
if __name__ == '__main__':
main()
$ OMP_NUM_THREADS=1 torchrun --nproc-per-node 2 meta.py
[rank1]: Traceback (most recent call last):
[rank1]: File "/home/lisergey/deepseek/meta.py", line 25, in <module>
[rank1]: main()
[rank1]: File "/home/lisergey/.local/lib/python3.10/site-packages/torch/distributed/elastic/multiprocessing/errors/__init__.py", line 355, in wrapper
[rank1]: return f(*args, **kwargs)
[rank1]: File "/home/lisergey/deepseek/meta.py", line 15, in main
[rank1]: dist.recv(x, 0)
[rank1]: File "/home/lisergey/.local/lib/python3.10/site-packages/torch/distributed/c10d_logger.py", line 81, in wrapper
[rank1]: return func(*args, **kwargs)
[rank1]: File "/home/lisergey/.local/lib/python3.10/site-packages/torch/distributed/distributed_c10d.py", line 2482, in recv
[rank1]: work = irecv(tensor, src=src, group=group, tag=tag, group_src=group_src)
[rank1]: File "/home/lisergey/.local/lib/python3.10/site-packages/torch/distributed/distributed_c10d.py", line 2420, in irecv
[rank1]: return group.recv([tensor], group_src, tag)
[rank1]: NotImplementedError: c10d::recv_: attempted to run this operator with Meta tensors, but there was no fake impl or Meta kernel registered. You may have run into this message while using an operator with PT2 compilation APIs (torch.compile/torch.export); in order to use this operator with those APIs you'll need to add a fake impl. Please see the following for next steps: https://pytorch.org/tutorials/advanced/custom_ops_landing_page.html
[rank0]: Traceback (most recent call last):
[rank0]: File "/home/lisergey/deepseek/meta.py", line 25, in <module>
[rank0]: main()
[rank0]: File "/home/lisergey/.local/lib/python3.10/site-packages/torch/distributed/elastic/multiprocessing/errors/__init__.py", line 355, in wrapper
[rank0]: return f(*args, **kwargs)
[rank0]: File "/home/lisergey/deepseek/meta.py", line 12, in main
[rank0]: dist.send(x, 1)
[rank0]: File "/home/lisergey/.local/lib/python3.10/site-packages/torch/distributed/c10d_logger.py", line 81, in wrapper
[rank0]: return func(*args, **kwargs)
[rank0]: File "/home/lisergey/.local/lib/python3.10/site-packages/torch/distributed/distributed_c10d.py", line 2450, in send
[rank0]: work = isend(tensor, group=group, tag=tag, group_dst=group_dst)
[rank0]: File "/home/lisergey/.local/lib/python3.10/site-packages/torch/distributed/distributed_c10d.py", line 2375, in isend
[rank0]: return group.send([tensor], group_dst, tag)
[rank0]: NotImplementedError: c10d::send: attempted to run this operator with Meta tensors, but there was no fake impl or Meta kernel registered. You may have run into this message while using an operator with PT2 compilation APIs (torch.compile/torch.export); in order to use this operator with those APIs you'll need to add a fake impl. Please see the following for next steps: https://pytorch.org/tutorials/advanced/custom_ops_landing_page.html
E0312 16:03:11.658000 158380 torch/distributed/elastic/multiprocessing/api.py:869] failed (exitcode: 1) local_rank: 0 (pid: 158382) of binary: /usr/bin/python
E0312 16:03:11.660000 158380 torch/distributed/elastic/multiprocessing/errors/error_handler.py:141] no error file defined for parent, to copy child error file (/tmp/torchelastic_q99anvvv/none_npf1ie59/attempt_0/0/error.json)
Traceback (most recent call last):
File "/home/lisergey/.local/bin/torchrun", line 8, in <module>
sys.exit(main())
File "/home/lisergey/.local/lib/python3.10/site-packages/torch/distributed/elastic/multiprocessing/errors/__init__.py", line 355, in wrapper
return f(*args, **kwargs)
File "/home/lisergey/.local/lib/python3.10/site-packages/torch/distributed/run.py", line 918, in main
run(args)
File "/home/lisergey/.local/lib/python3.10/site-packages/torch/distributed/run.py", line 909, in run
elastic_launch(
File "/home/lisergey/.local/lib/python3.10/site-packages/torch/distributed/launcher/api.py", line 138, in __call__
return launch_agent(self._config, self._entrypoint, list(args))
File "/home/lisergey/.local/lib/python3.10/site-packages/torch/distributed/launcher/api.py", line 269, in launch_agent
raise ChildFailedError(
torch.distributed.elastic.multiprocessing.errors.ChildFailedError:
============================================================
meta.py FAILED
------------------------------------------------------------
Failures:
[1]:
time : 2025-03-12_16:03:11
host : home
rank : 1 (local_rank: 1)
exitcode : 1 (pid: 158383)
error_file: /tmp/torchelastic_q99anvvv/none_npf1ie59/attempt_0/1/error.json
traceback : Traceback (most recent call last):
File "/home/lisergey/.local/lib/python3.10/site-packages/torch/distributed/elastic/multiprocessing/errors/__init__.py", line 355, in wrapper
return f(*args, **kwargs)
File "/home/lisergey/deepseek/meta.py", line 15, in main
dist.recv(x, 0)
File "/home/lisergey/.local/lib/python3.10/site-packages/torch/distributed/c10d_logger.py", line 81, in wrapper
return func(*args, **kwargs)
File "/home/lisergey/.local/lib/python3.10/site-packages/torch/distributed/distributed_c10d.py", line 2482, in recv
work = irecv(tensor, src=src, group=group, tag=tag, group_src=group_src)
File "/home/lisergey/.local/lib/python3.10/site-packages/torch/distributed/distributed_c10d.py", line 2420, in irecv
return group.recv([tensor], group_src, tag)
NotImplementedError: c10d::recv_: attempted to run this operator with Meta tensors, but there was no fake impl or Meta kernel registered. You may have run into this message while using an operator with PT2 compilation APIs (torch.compile/torch.export); in order to use this operator with those APIs you'll need to add a fake impl. Please see the following for next steps: https://pytorch.org/tutorials/advanced/custom_ops_landing_page.html
------------------------------------------------------------
Root Cause (first observed failure):
[0]:
time : 2025-03-12_16:03:11
host : home
rank : 0 (local_rank: 0)
exitcode : 1 (pid: 158382)
error_file: /tmp/torchelastic_q99anvvv/none_npf1ie59/attempt_0/0/error.json
traceback : Traceback (most recent call last):
File "/home/lisergey/.local/lib/python3.10/site-packages/torch/distributed/elastic/multiprocessing/errors/__init__.py", line 355, in wrapper
return f(*args, **kwargs)
File "/home/lisergey/deepseek/meta.py", line 12, in main
dist.send(x, 1)
File "/home/lisergey/.local/lib/python3.10/site-packages/torch/distributed/c10d_logger.py", line 81, in wrapper
return func(*args, **kwargs)
File "/home/lisergey/.local/lib/python3.10/site-packages/torch/distributed/distributed_c10d.py", line 2450, in send
work = isend(tensor, group=group, tag=tag, group_dst=group_dst)
File "/home/lisergey/.local/lib/python3.10/site-packages/torch/distributed/distributed_c10d.py", line 2375, in isend
return group.send([tensor], group_dst, tag)
NotImplementedError: c10d::send: attempted to run this operator with Meta tensors, but there was no fake impl or Meta kernel registered. You may have run into this message while using an operator with PT2 compilation APIs (torch.compile/torch.export); in order to use this operator with those APIs you'll need to add a fake impl. Please see the following for next steps: https://pytorch.org/tutorials/advanced/custom_ops_landing_page.html
============================================================
```
cc @H-Huang @awgu @kwen2501 @wanchaol @fegin @fduwjj @wz337 @wconstab @d4l3k @c-p-i-o
| true
|
2,914,280,814
|
multinomial does not preserve dynamic dimension
|
xadupre
|
closed
|
[
"oncall: pt2",
"oncall: export"
] | 0
|
COLLABORATOR
|
### 🐛 Describe the bug
multinomial expects a fixed dimension for the number of samples. It should be dynamic.
```python
import torch
class Model(torch.nn.Module):
def forward(self, x, y):
return torch.multinomial(x, y.shape[0])
model = Model()
inputs = (
torch.tensor([[4, 5],[6,7]], dtype=torch.float32),
torch.tensor([0, 5], dtype=torch.int64),
)
model(*inputs)
DYNAMIC = torch.export.Dim.DYNAMIC
ep = torch.export.export(
model, inputs, dynamic_shapes={"x": {0: DYNAMIC, 1: DYNAMIC}, "y": {0: DYNAMIC}}
)
print(ep)
```
Raises an error:
```
- Not all values of RelaxedUnspecConstraint(L['y'].size()[0]) are valid because L['y'].size()[0] was inferred to be a constant (2).
```
### Versions
```
PyTorch version: 2.7.0.dev20250311+cu126
Is debug build: False
CUDA used to build PyTorch: 12.6
ROCM used to build PyTorch: N/A
OS: Ubuntu 22.04.4 LTS (x86_64)
GCC version: (Ubuntu 11.4.0-1ubuntu1~22.04) 11.4.0
Clang version: Could not collect
CMake version: version 3.31.6
Libc version: glibc-2.35
Python version: 3.12.8 (main, Dec 4 2024, 08:54:12) [GCC 11.4.0] (64-bit runtime)
Python platform: Linux-5.15.167.4-microsoft-standard-WSL2-x86_64-with-glibc2.35
Is CUDA available: True
CUDA runtime version: 12.6.68
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration: GPU 0: NVIDIA GeForce RTX 4060 Laptop GPU
Nvidia driver version: 538.92
cuDNN version: Probably one of the following:
/usr/lib/x86_64-linux-gnu/libcudnn.so.9.3.0
/usr/lib/x86_64-linux-gnu/libcudnn_adv.so.9.3.0
/usr/lib/x86_64-linux-gnu/libcudnn_cnn.so.9.3.0
/usr/lib/x86_64-linux-gnu/libcudnn_engines_precompiled.so.9.3.0
/usr/lib/x86_64-linux-gnu/libcudnn_engines_runtime_compiled.so.9.3.0
/usr/lib/x86_64-linux-gnu/libcudnn_graph.so.9.3.0
/usr/lib/x86_64-linux-gnu/libcudnn_heuristic.so.9.3.0
/usr/lib/x86_64-linux-gnu/libcudnn_ops.so.9.3.0
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 46 bits physical, 48 bits virtual
Byte Order: Little Endian
CPU(s): 20
On-line CPU(s) list: 0-19
Vendor ID: GenuineIntel
Model name: 13th Gen Intel(R) Core(TM) i7-13800H
CPU family: 6
Model: 186
Thread(s) per core: 2
Core(s) per socket: 10
Socket(s): 1
Stepping: 2
BogoMIPS: 5836.80
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ss ht syscall nx pdpe1gb rdtscp lm constant_tsc rep_good nopl xtopology tsc_reliable nonstop_tsc cpuid pni pclmulqdq vmx ssse3 fma cx16 pcid sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand hypervisor lahf_lm abm 3dnowprefetch invpcid_single ssbd ibrs ibpb stibp ibrs_enhanced tpr_shadow vnmi ept vpid ept_ad fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid rdseed adx smap clflushopt clwb sha_ni xsaveopt xsavec xgetbv1 xsaves avx_vnni umip waitpkg gfni vaes vpclmulqdq rdpid movdiri movdir64b fsrm md_clear serialize flush_l1d arch_capabilities
Virtualization: VT-x
Hypervisor vendor: Microsoft
Virtualization type: full
L1d cache: 480 KiB (10 instances)
L1i cache: 320 KiB (10 instances)
L2 cache: 12.5 MiB (10 instances)
L3 cache: 24 MiB (1 instance)
Vulnerability Gather data sampling: Not affected
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Not affected
Vulnerability Reg file data sampling: Mitigation; Clear Register File
Vulnerability Retbleed: Mitigation; Enhanced IBRS
Vulnerability Spec rstack overflow: Not affected
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl and seccomp
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Enhanced / Automatic IBRS; IBPB conditional; RSB filling; PBRSB-eIBRS SW sequence; BHI BHI_DIS_S
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Versions of relevant libraries:
[pip3] model-explorer-onnx==0.3.4
[pip3] mypy-extensions==1.0.0
[pip3] numpy==2.2.3
[pip3] nvidia-cublas-cu12==12.6.4.1
[pip3] nvidia-cuda-cupti-cu12==12.6.80
[pip3] nvidia-cuda-nvrtc-cu12==12.6.77
[pip3] nvidia-cuda-runtime-cu12==12.6.77
[pip3] nvidia-cudnn-cu12==9.5.1.17
[pip3] nvidia-cufft-cu12==11.3.0.4
[pip3] nvidia-curand-cu12==10.3.7.77
[pip3] nvidia-cusolver-cu12==11.7.1.2
[pip3] nvidia-cusparse-cu12==12.5.4.2
[pip3] nvidia-cusparselt-cu12==0.6.3
[pip3] nvidia-nccl-cu12==2.25.1
[pip3] nvidia-nvjitlink-cu12==12.6.85
[pip3] nvidia-nvtx-cu12==12.6.77
[pip3] onnx==1.18.0
[pip3] onnx-array-api==0.3.0
[pip3] onnx-extended==0.3.0
[pip3] onnxruntime_extensions==0.13.0
[pip3] onnxruntime-genai-cuda==0.6.0
[pip3] onnxruntime-training==1.21.0+cu126
[pip3] onnxscript==0.3.0.dev20250301
[pip3] optree==0.14.0
[pip3] pytorch-triton==3.2.0+git4b3bb1f8
[pip3] torch==2.7.0.dev20250311+cu126
[pip3] torch_geometric==2.4.0
[pip3] torchaudio==2.6.0.dev20250311+cu126
[pip3] torchmetrics==1.6.2
[pip3] torchvision==0.22.0.dev20250311+cu126
[conda] Could not collect
```
cc @chauhang @penguinwu @avikchaudhuri @gmagogsfm @zhxchen17 @tugsbayasgalan @angelayi @suo @ydwu4
| true
|
2,914,273,279
|
Unsupported: call_method NNModuleVariable() register_forward_hook [NestedUserFunctionVariable()] {}
|
bhack
|
open
|
[
"triaged",
"oncall: pt2",
"module: dynamo"
] | 4
|
CONTRIBUTOR
|
### 🐛 Describe the bug
While AOTI-compiling and exporting https://github.com/cvlab-kaist/Chrono, I hit this issue in the log related to `register_forward_hook`.
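A minimal, hypothetical repro sketch of the pattern the error message points at (not taken from the Chrono code itself): registering a forward hook with a nested function inside the traced region.
```python
import torch
import torch.nn as nn

class Wrapper(nn.Module):
    def __init__(self):
        super().__init__()
        self.inner = nn.Linear(4, 4)

    def forward(self, x):
        feats = {}

        def hook(module, args, output):  # the NestedUserFunctionVariable
            feats["inner"] = output

        handle = self.inner.register_forward_hook(hook)  # unsupported under tracing
        out = self.inner(x)
        handle.remove()
        return out, feats

# Raises Unsupported with fullgraph=True; graph-breaks without it.
torch.compile(Wrapper(), fullgraph=True)(torch.randn(2, 4))
```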
### Error logs
[error.log](https://github.com/user-attachments/files/19212384/error.log)
### Versions
2.6.0 and nightly
cc @chauhang @penguinwu @voznesenskym @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @amjames
| true
|
2,914,246,224
|
Enable modernize-use-default-member-init
|
cyyever
|
closed
|
[
"module: cpu",
"triaged",
"module: mkldnn",
"open source",
"Merged",
"ciflow/trunk",
"release notes: quantization",
"topic: not user facing",
"module: inductor",
"ciflow/inductor",
"ciflow/linux-aarch64"
] | 10
|
COLLABORATOR
|
``modernize-use-default-member-init`` prefers in-class member initialisation, which makes more ``= default`` constructors possible. Some violations of other modernize rules have been fixed as well.
cc @jgong5 @mingfeima @XiaobingSuper @sanchitintel @ashokei @jingxu10 @gujinghui @PenghuiCheng @jianyuh @min-jean-cho @yanbing-j @Guobing-Chen @Xia-Weiwen @snadampal @voznesenskym @penguinwu @EikanWang @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
| true
|
2,914,234,459
|
NotImplementedError: aten::_log_softmax_backward_data with SparseCUDA backend
|
rangehow
|
open
|
[
"module: sparse",
"triaged"
] | 1
|
NONE
|
### 🐛 Describe the bug
```python
class NDPTrainer(Trainer):
def compute_loss(self, model, inputs, return_outputs=False, num_items_in_batch=None):
input_ids = inputs.pop("input_ids")
attention_mask = inputs.pop("attention_mask")
cnt_list = inputs.pop(
"cnt_list"
)
labels= inputs.pop("label")
result = model(
input_ids=input_ids,
attention_mask=attention_mask,
)
model_logits = result.logits # bsz x seqlen x dim
ce_loss = CrossEntropyLoss(ignore_index=-100)
loss = ce_loss(model_logits, labels)
if return_outputs:
return loss, {"logits": model_logits}
else:
return loss
```
```bash
Traceback (most recent call last):
File "<frozen runpy>", line 198, in _run_module_as_main
File "<frozen runpy>", line 88, in _run_code
File "/mnt/dolphinfs/hdd_pool/docker/user/hadoop-aipnlp/INS/ruanjunhao04/ruanjunhao/ndp/new_version/train.py", line 107, in <module>
trainer.train()
File "/mnt/dolphinfs/hdd_pool/docker/user/hadoop-aipnlp/INS/ruanjunhao04/env/rjh/lib/python3.12/site-packages/transformers/trainer.py", line 2241, in train
return inner_training_loop(
^^^^^^^^^^^^^^^^^^^^
File "/mnt/dolphinfs/hdd_pool/docker/user/hadoop-aipnlp/INS/ruanjunhao04/env/rjh/lib/python3.12/site-packages/transformers/trainer.py", line 2548, in _inner_training_loop
tr_loss_step = self.training_step(model, inputs, num_items_in_batch)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/mnt/dolphinfs/hdd_pool/docker/user/hadoop-aipnlp/INS/ruanjunhao04/env/rjh/lib/python3.12/site-packages/transformers/trainer.py", line 3740, in training_step
self.accelerator.backward(loss, **kwargs)
File "/mnt/dolphinfs/hdd_pool/docker/user/hadoop-aipnlp/INS/ruanjunhao04/env/rjh/lib/python3.12/site-packages/accelerate/accelerator.py", line 2329, in backward
loss.backward(**kwargs)
File "/mnt/dolphinfs/hdd_pool/docker/user/hadoop-aipnlp/INS/ruanjunhao04/env/rjh/lib/python3.12/site-packages/torch/_tensor.py", line 581, in backward
torch.autograd.backward(
File "/mnt/dolphinfs/hdd_pool/docker/user/hadoop-aipnlp/INS/ruanjunhao04/env/rjh/lib/python3.12/site-packages/torch/autograd/__init__.py", line 347, in backward
_engine_run_backward(
File "/mnt/dolphinfs/hdd_pool/docker/user/hadoop-aipnlp/INS/ruanjunhao04/env/rjh/lib/python3.12/site-packages/torch/autograd/graph.py", line 825, in _engine_run_backward
return Variable._execution_engine.run_backward( # Calls into the C++ engine to run the backward pass
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
NotImplementedError: Could not run 'aten::_log_softmax_backward_data' with arguments from the 'SparseCUDA' backend. This could be because the operator doesn't exist for this backend, or was omitted during the selective/custom build process (if using custom build). If you are a Facebook employee using PyTorch on mobile, please visit https://fburl.com/ptmfixes for possible resolutions. 'aten::_log_softmax_backward_data' is only available for these backends: [CPU, CUDA, HIP, MPS, IPU, XPU, HPU, VE, MTIA, PrivateUse1, PrivateUse2, PrivateUse3, Meta, FPGA, MAIA, Vulkan, Metal, QuantizedCPU, QuantizedCUDA, QuantizedHIP, QuantizedMPS, QuantizedIPU, QuantizedXPU, QuantizedHPU, QuantizedVE, QuantizedMTIA, QuantizedPrivateUse1, QuantizedPrivateUse2, QuantizedPrivateUse3, QuantizedMeta, CustomRNGKeyId, MkldnnCPU, SparseCsrCPU, SparseCsrCUDA, SparseCsrHIP, SparseCsrMPS, SparseCsrIPU, SparseCsrXPU, SparseCsrHPU, SparseCsrVE, SparseCsrMTIA, SparseCsrPrivateUse1, SparseCsrPrivateUse2, SparseCsrPrivateUse3, SparseCsrMeta, BackendSelect, Python, FuncTorchDynamicLayerBackMode, Functionalize, Named, Conjugate, Negative, ZeroTensor, ADInplaceOrView, AutogradOther, AutogradCPU, AutogradCUDA, AutogradHIP, AutogradXLA, AutogradMPS, AutogradIPU, AutogradXPU, AutogradHPU, AutogradVE, AutogradLazy, AutogradMTIA, AutogradPrivateUse1, AutogradPrivateUse2, AutogradPrivateUse3, AutogradMeta, AutogradNestedTensor, Tracer, AutocastCPU, AutocastXPU, AutocastMPS, AutocastCUDA, FuncTorchBatched, BatchedNestedTensor, FuncTorchVmapMode, Batched, VmapMode, FuncTorchGradWrapper, PythonTLSSnapshot, FuncTorchDynamicLayerFrontMode, PreDispatch, PythonDispatcher].
```
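A hedged workaround sketch (assumption: one of the tensors reaching `CrossEntropyLoss` is a sparse CUDA tensor, which is what hits the missing `_log_softmax_backward_data` sparse kernel in the backward pass): densify before computing the loss. This is a fragment meant to slot into `compute_loss` above, reusing its names.
```python
# Inside compute_loss, before building the loss:
if model_logits.is_sparse:
    model_logits = model_logits.to_dense()
if labels.is_sparse:
    labels = labels.to_dense()
ce_loss = CrossEntropyLoss(ignore_index=-100)
loss = ce_loss(model_logits, labels)
```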
### Versions
torch 2.5.1+cu121
cc @alexsamardzic @nikitaved @pearu @cpuhrsch @amjames @bhosmer @jcaip
| true
|
2,914,220,977
|
[v.2.7.0] Release Tracker
|
atalman
|
closed
|
[
"module: ci",
"triaged",
"release tracker"
] | 93
|
CONTRIBUTOR
|
We cut a [release branch](https://github.com/pytorch/pytorch/tree/release/2.7) for the 2.7.0 release.
Our plan from this point is roughly:
* Phase 1 (until 3/31/25): work on finalizing the release branch
* Phase 2 (after 3/31/25): perform extended integration/stability/performance testing based on Release Candidate builds.
This issue is for tracking cherry-picks to the release branch.
## Cherry-Pick Criteria
**Phase 1 (until 3/31/25):**
Only low-risk changes may be cherry-picked from main:
1. Fixes to regressions against the most recent minor release (e.g. 2.6.x for this release; see [module: regression issue list](https://github.com/pytorch/pytorch/issues?q=is%3Aissue+is%3Aopen+label%3A%22module%3A+regression%22+))
2. Critical fixes for: [silent correctness](https://github.com/pytorch/pytorch/issues?q=is%3Aissue+is%3Aopen+label%3A%22topic%3A+correctness+%28silent%29%22), [backwards compatibility](https://github.com/pytorch/pytorch/issues?q=is%3Aissue+is%3Aopen+label%3A%22topic%3A+bc-breaking%22+), [crashes](https://github.com/pytorch/pytorch/issues?q=is%3Aissue+is%3Aopen+label%3A%22topic%3A+crash%22+), [deadlocks](https://github.com/pytorch/pytorch/issues?q=is%3Aissue+is%3Aopen+label%3A%22topic%3A+deadlock%22+), (large) [memory leaks](https://github.com/pytorch/pytorch/issues?q=is%3Aissue+is%3Aopen+label%3A%22topic%3A+memory+usage%22+)
3. Critical fixes to new features introduced in the most recent minor release (e.g. 2.6.x for this release)
4. Test/CI fixes
5. Documentation improvements
6. Compilation fixes or ifdefs required for different versions of the compilers or third-party libraries
7. Release branch specific changes (e.g. change version identifiers)
Any other change requires special dispensation from the release managers (currently @atalman, @malfet, @ZainRizvi ). If this applies to your change please write "Special Dispensation" in the "Criteria Category:" template below and explain.
**Phase 2 (after 3/31/25):**
Note that changes here require us to rebuild a Release Candidate and restart extended testing (likely delaying the release). Therefore, the only accepted changes are **Release-blocking** critical fixes for: [silent correctness](https://github.com/pytorch/pytorch/issues?q=is%3Aissue+is%3Aopen+label%3A%22topic%3A+correctness+%28silent%29%22), [backwards compatibility](https://github.com/pytorch/pytorch/issues?q=is%3Aissue+is%3Aopen+label%3A%22topic%3A+bc-breaking%22+), [crashes](https://github.com/pytorch/pytorch/issues?q=is%3Aissue+is%3Aopen+label%3A%22topic%3A+crash%22+), [deadlocks](https://github.com/pytorch/pytorch/issues?q=is%3Aissue+is%3Aopen+label%3A%22topic%3A+deadlock%22+), (large) [memory leaks](https://github.com/pytorch/pytorch/issues?q=is%3Aissue+is%3Aopen+label%3A%22topic%3A+memory+usage%22+)
Changes will likely require a discussion with the larger release team over VC or Slack.
## Cherry-Pick Process
1. Ensure your PR has landed in master. This does not apply for release-branch specific changes (see Phase 1 criteria).
2. Create (but do not land) a PR against the [release branch](https://github.com/pytorch/pytorch/tree/release/2.7).
<details>
```bash
# Find the hash of the commit you want to cherry pick
# (for example, abcdef12345)
git log
git fetch origin release/2.7
git checkout release/2.7
git cherry-pick -x abcdef12345
# Submit a PR based against 'release/2.7' either:
# via the GitHub UI
git push my-fork
# via the GitHub CLI
gh pr create --base release/2.7
```
You can also use the `@pytorchbot cherry-pick` command to cherry-pick your PR. To do this, just add a comment in your merged PR. For example:
```
@pytorchbot cherry-pick --onto release/2.7 -c docs
```
(`-c docs` - is the category of your changes - adjust accordingly):
For more information, see [pytorchbot cherry-pick docs](https://github.com/pytorch/pytorch/wiki/Bot-commands#cherry-pick).
</details>
3. Make a request below with the following format:
```
Link to landed trunk PR (if applicable):
*
Link to release branch PR:
*
Criteria Category:
*
```
1. Someone from the release team will reply with approved / denied or ask for more information.
2. If approved, someone from the release team will merge your PR once the tests pass. **Do not land the release branch PR yourself.**
**NOTE: Our normal tools (ghstack / ghimport, etc.) do not work on the release branch.**
Please note HUD Link with branch CI status and link to the HUD to be provided here.
[HUD](https://hud.pytorch.org/hud/pytorch/pytorch/release%2F2.7)
## Download instructions for testing release candidate
### PIP CPU
Windows/Linux/MacOS:
``pip3 install torch==2.7.0 torchvision torchaudio --index-url https://download.pytorch.org/whl/test/cpu``
### PIP CUDA 11.8, 12.6, 12.8
Windows/Linux:
``pip3 install torch==2.7.0 torchvision torchaudio --index-url https://download.pytorch.org/whl/test/cu118``
``pip3 install torch==2.7.0 torchvision torchaudio --index-url https://download.pytorch.org/whl/test/cu126``
``pip3 install torch==2.7.0 torchvision torchaudio --index-url https://download.pytorch.org/whl/test/cu128``
### PIP Linux Aarch64
``pip3 install torch==2.7.0 torchvision torchaudio --extra-index-url https://download.pytorch.org/whl/test/cpu``
### PIP Linux CUDA 12.8 Aarch64
``pip3 install torch==2.7.0 torchvision torchaudio --extra-index-url https://download.pytorch.org/whl/test/cu128``
### PIP ROCM 6.2.4
``pip3 install torch==2.7.0 torchvision torchaudio --index-url https://download.pytorch.org/whl/test/rocm6.2.4``
### PIP ROCM 6.3
``pip3 install torch==2.7.0 torchvision torchaudio --index-url https://download.pytorch.org/whl/test/rocm6.3``
### PIP XPU
Linux/Windows:
``pip3 install torch==2.7.0 torchvision torchaudio --index-url https://download.pytorch.org/whl/test/xpu``
### Libtorch
CPU Linux:
(cxx11 ABI): https://download.pytorch.org/libtorch/test/cpu/libtorch-cxx11-abi-shared-with-deps-latest.zip
CPU Windows:
Download here (Release version): https://download.pytorch.org/libtorch/test/cpu/libtorch-win-shared-with-deps-latest.zip
Download here (Debug version): https://download.pytorch.org/libtorch/test/cpu/libtorch-win-shared-with-deps-debug-latest.zip
CPU MacOS Arm64:
https://download.pytorch.org/libtorch/nightly/cpu/libtorch-macos-arm64-latest.zip
CUDA 11.8, 12.6, 12.8 Linux
https://download.pytorch.org/libtorch/test/cu118/libtorch-cxx11-abi-shared-with-deps-latest.zip
https://download.pytorch.org/libtorch/test/cu126/libtorch-cxx11-abi-shared-with-deps-latest.zip
https://download.pytorch.org/libtorch/test/cu128/libtorch-cxx11-abi-shared-with-deps-latest.zip
Windows CUDA 11.8, 12.6, 12.8
(Release version):
https://download.pytorch.org/libtorch/test/cu118/libtorch-win-shared-with-deps-latest.zip
https://download.pytorch.org/libtorch/test/cu126/libtorch-win-shared-with-deps-latest.zip
https://download.pytorch.org/libtorch/test/cu128/libtorch-win-shared-with-deps-latest.zip
(Debug version):
https://download.pytorch.org/libtorch/test/cu118/libtorch-win-shared-with-deps-debug-latest.zip
https://download.pytorch.org/libtorch/test/cu126/libtorch-win-shared-with-deps-debug-latest.zip
https://download.pytorch.org/libtorch/test/cu128/libtorch-win-shared-with-deps-debug-latest.zip
### Docker images CUDA 11.8, 12.6, 12.8
Devel:
``docker pull ghcr.io/pytorch/pytorch-test:2.7.0-cuda11.8-cudnn9-devel``
``docker pull ghcr.io/pytorch/pytorch-test:2.7.0-cuda12.6-cudnn9-devel``
``docker pull ghcr.io/pytorch/pytorch-test:2.7.0-cuda12.8-cudnn9-devel``
Runtime:
``docker pull ghcr.io/pytorch/pytorch-test:2.7.0-cuda11.8-cudnn9-runtime``
``docker pull ghcr.io/pytorch/pytorch-test:2.7.0-cuda12.6-cudnn9-runtime``
``docker pull ghcr.io/pytorch/pytorch-test:2.7.0-cuda12.8-cudnn9-runtime``
cc @seemethere @malfet @pytorch/pytorch-dev-infra
### Versions
2.7.0
| true
|
2,914,161,788
|
[FIX] remove the duplicate key in DEFAULT_STATIC_QUANT_MODULE_MAPPINGS
|
hackty
|
closed
|
[
"open source",
"Merged",
"ciflow/trunk",
"release notes: quantization",
"release notes: AO frontend"
] | 8
|
CONTRIBUTOR
|
nn.Dropout appeared at line 81
| true
|
2,914,085,000
|
Github Actions API is unstable - High queue times for GHA
|
jeanschmidt
|
closed
|
[
"ci: sev",
"ci: sev-infra.thirdparty"
] | 1
|
CONTRIBUTOR
|
## Current Status
Mitigated on github side - recovering queue of jobs
## Error looks like
Queued jobs, failing to pick up runners
## Incident timeline (all times pacific)
* 04:00 Started
* 06:56 Identified
* 07:12 GH API seems to be starting to recover
## User impact
* queued jobs
* increased TTS on CI
## Root cause
* https://www.githubstatus.com/incidents/nhcpszxtqxtm - Actions API is unstable
## Mitigation
Once this CI:SEV is resolved, please cancel and re-run your CI job if there are queued jobs > 30mins
## Prevention/followups
*How do we prevent issues like this in the future?*
| true
|
2,914,081,606
|
[DO NOT MERGE] [TRITON] Test enablement of buffer ops in AMD triton
|
jataylo
|
open
|
[
"open source",
"ciflow/trunk",
"topic: not user facing",
"ciflow/inductor",
"keep-going",
"ciflow/rocm",
"ciflow/inductor-rocm"
] | 4
|
COLLABORATOR
|
Fixes #ISSUE_NUMBER
| true
|
2,914,068,317
|
Only 2D, 3D, 4D, 5D padding with non-constant padding are supported for now
|
fallen-leaves-web
|
open
|
[
"module: nn",
"triaged"
] | 1
|
NONE
|
### 🐛 Describe the bug
Hello, thanks for sharing the work.
I encountered an issue while running my ESPnet-based TTS script on Windows. Here is the error message I got:
G:\code> & g:/University_documents_over_four_years/AI语音/.conda/python.exe g:/code/tts1.py
Failed to import Flash Attention, using ESPnet default: No module named 'flash_attn'
Loaded spembs for speaker: test
emedding shape:(1, 1, 512)
Fbank feature shape: torch.Size([1, 512, 80])
Traceback (most recent call last):
File "g:\code\tts1.py", line 95, in <module>
result = text2speech(text_input, speech=speech_tensor, spembs=embedding)
File "G:\University_documents_over_four_years\AI语音\.conda\lib\site-packages\torch\utils\_contextlib.py", line 116, in decorate_context
return func(*args, **kwargs)
File "G:\University_documents_over_four_years\AI语音\.conda\lib\site-packages\espnet2\bin\tts_inference.py", line 196, in __call__
output_dict = self.model.inference(**batch, **cfg)
File "G:\University_documents_over_four_years\AI语音\.conda\lib\site-packages\espnet2\tts\espnet_model.py", line 256, in inference
feats = self.feats_extract(speech[None])[0][0]
File "G:\University_documents_over_four_years\AI语音\.conda\lib\site-packages\torch\nn\modules\module.py", line 1739, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
File "G:\University_documents_over_four_years\AI语音\.conda\lib\site-packages\torch\nn\modules\module.py", line 1750, in _call_impl
return forward_call(*args, **kwargs)
File "G:\University_documents_over_four_years\AI语音\.conda\lib\site-packages\espnet2\tts\feats_extract\log_mel_fbank.py", line 88, in forward
input_stft, feats_lens = self.stft(input, input_lengths)
File "G:\University_documents_over_four_years\AI语音\.conda\lib\site-packages\torch\nn\modules\module.py", line 1739, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
File "G:\University_documents_over_four_years\AI语音\.conda\lib\site-packages\torch\nn\modules\module.py", line 1750, in _call_impl
return forward_call(*args, **kwargs)
File "G:\University_documents_over_four_years\AI语音\.conda\lib\site-packages\espnet2\layers\stft.py", line 105, in forward
output = torch.stft(input.float(), **stft_kwargs)
File "G:\University_documents_over_four_years\AI语音\.conda\lib\site-packages\torch\functional.py", line 707, in stft
input = F.pad(input.view(extended_shape), [pad, pad], pad_mode)
File "G:\University_documents_over_four_years\AI语音\.conda\lib\site-packages\torch\nn\functional.py", line 5209, in pad
return torch._C._nn.pad(input, pad, mode, value)
NotImplementedError: Only 2D, 3D, 4D, 5D padding with non-constant padding are supported for now
Has anyone encountered this issue before? How can I fix it?
Thanks in advance! 🙏
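A hedged debugging sketch (assumption: the tensor handed to the STFT layer has an unexpected number of dimensions, which is why the non-constant padding inside `torch.stft` rejects it); `torch.stft` itself expects a 1D `(samples,)` or 2D `(batch, samples)` signal:
```python
import torch

# First check what actually reaches the feature extractor in the script above:
# print(speech_tensor.shape, speech_tensor.dim())

# A shape torch.stft accepts with the default centered reflect padding:
sig = torch.randn(1, 16000)  # (batch, samples)
spec = torch.stft(
    sig,
    n_fft=512,
    hop_length=128,
    window=torch.hann_window(512),
    return_complex=True,
)
print(spec.shape)  # (1, 257, n_frames)
```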
### Versions
PyTorch version: 2.3.1+cpu
Is debug build: False
CUDA used to build PyTorch: Could not collect
ROCM used to build PyTorch: N/A
OS: Microsoft Windows 11 家庭中文版 (10.0.26100 64 位)
GCC version: Could not collect
Clang version: Could not collect
CMake version: Could not collect
Libc version: N/A
Python version: 3.10.16 | packaged by Anaconda, Inc. | (main, Dec 11 2024, 16:19:12) [MSC v.1929 64 bit (AMD64)] (64-bit runtime)
Python platform: Windows-10-10.0.26100-SP0
Is CUDA available: False
CUDA runtime version: Could not collect
CUDA_MODULE_LOADING set to: N/A
GPU models and configuration: GPU 0: NVIDIA GeForce RTX 3060 Laptop GPU
Nvidia driver version: 566.07
cuDNN version: Could not collect
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Name: 12th Gen Intel(R) Core(TM) i7-12700H
Manufacturer: GenuineIntel
Family: 198
Architecture: 9
ProcessorType: 3
DeviceID: CPU0
CurrentClockSpeed: 2300
MaxClockSpeed: 2300
L2CacheSize: 11776
L2CacheSpeed: None
Revision: None
Versions of relevant libraries:
[pip3] numpy==1.22.4
[pip3] pytorch-lightning==2.5.0.post0
[pip3] torch==2.3.1
[pip3] torch-complex==0.4.4
[pip3] torchaudio==2.3.1
[pip3] torchmetrics==1.6.1
[pip3] torchvision==0.18.1
[conda] blas 1.0 mkl
[conda] cpuonly 2.0 0 pytorch
[conda] mkl 2021.4.0 pypi_0 pypi
[conda] mkl-service 2.4.0 py310h827c3e9_2
[conda] mkl_fft 1.3.11 py310h827c3e9_0
[conda] mkl_random 1.2.8 py310hc64d2fc_0
[conda] numpy 1.22.4 pypi_0 pypi
[conda] pytorch 2.3.1 cpu_py310h0ce1571_0
[conda] pytorch-lightning 2.5.0.post0 pypi_0 pypi
[conda] pytorch-mutex 1.0 cpu pytorch
[conda] torch-complex 0.4.4
cc @albanD @mruberry @jbschlosser @walterddr @mikaylagawarecki
| true
|
2,913,783,273
|
[ROCm] [Testing] enable NHWC convolutions by default on CDNA arch
|
jataylo
|
open
|
[
"module: rocm",
"open source",
"release notes: rocm",
"module: inductor",
"module: dynamo",
"ciflow/inductor",
"ciflow/rocm",
"ciflow/inductor-perf-test-nightly-rocm"
] | 4
|
COLLABORATOR
|
Also enabled layout optimisation by default on ROCm so Inductor models will see the benefit
cc @jeffdaily @sunway513 @jithunnair-amd @pruthvistony @ROCmSupport @dllehr-amd @hongxiayang @naromero77amd @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
| true
|
2,913,778,503
|
Update nightly PyTorch version to 2.8.0
|
atalman
|
closed
|
[
"Merged",
"ciflow/trunk",
"topic: not user facing"
] | 6
|
CONTRIBUTOR
|
Branch for 2.7: https://github.com/pytorch/pytorch/tree/release/2.7
Same as https://github.com/pytorch/pytorch/pull/135916
| true
|
2,913,773,552
|
(Will PR if OK) Support generators returning values
|
fzyzcjy
|
open
|
[
"triaged",
"oncall: pt2"
] | 6
|
CONTRIBUTOR
|
### 🐛 Describe the bug
Hi thanks for the library! It would be great if generators returning values could be supported. I will make a PR if this feature looks OK.
For example:
```python
import torch
def exhaust_generator(g):
ans = []
while True:
try:
ans.append(next(g))
except StopIteration as e:
ans.append(e.value)
break
return ans
def outer():
x = torch.tensor([1000])
output_from_inner = yield from inner(x)
yield output_from_inner
yield x + 10
yield x + 20
return x + 30 # DOES NOT WORK
def inner(x):
yield x + 1
yield x + 2
return x + 3 # DOES NOT WORK
print(exhaust_generator(outer()))
print(torch.compile(lambda: exhaust_generator(outer()))())
```
It prints the following (note the two `None`s):
```
[tensor([1001]), tensor([1002]), tensor([1003]), tensor([1010]), tensor([1020]), tensor([1030])]
[tensor([1001]), tensor([1002]), None, tensor([1010]), tensor([1020]), None]
```
In other words, the `return` in generator functions is silently removed.
### Error logs
(see above)
### Versions
Latest master
c c @guilhermeleobas who made the generator support :)
(below is not done by me but done by GitHub auto template; it seems the bot wants to change my cc above... so try "c c")
cc @chauhang @penguinwu
| true
|
2,913,757,823
|
[AOTInductor] support specifying which outputs should be captured
|
zzq96
|
open
|
[
"triaged",
"module: aotinductor"
] | 1
|
NONE
|
### 🚀 The feature, motivation and pitch
During training, the model forward may return extra outputs for computing the loss, e.g. `return {"logits": logits, "rpr": rpr}`,
but at inference time we only need some of them, e.g. `return {"logits": logits}`, so torch could simplify the graph and drop the nodes that only feed `rpr`.
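A hedged workaround sketch in the meantime: wrap the trained model so only the wanted outputs reach the exported graph, letting dead-code elimination drop the nodes that only feed `rpr`. The names follow the example above; the wrapper itself is hypothetical.
```python
import torch

class InferenceWrapper(torch.nn.Module):
    def __init__(self, model):
        super().__init__()
        self.model = model

    def forward(self, *args, **kwargs):
        out = self.model(*args, **kwargs)
        # Keep only what inference needs; "rpr" never escapes the wrapper, so
        # the nodes that only produce it can be eliminated from the graph.
        return {"logits": out["logits"]}
```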
### Alternatives
_No response_
### Additional context
_No response_
cc @chauhang @penguinwu @avikchaudhuri @gmagogsfm @zhxchen17 @tugsbayasgalan @angelayi @suo @ydwu4 @desertfire @chenyang78 @yushangdi @benjaminglass1 @zou3519 @bdhirsh
| true
|
2,913,271,530
|
Force build to conform to the C++ standard on Windows by adding /permissive- flag
|
Stonepia
|
closed
|
[
"module: windows",
"open source",
"Merged",
"ciflow/trunk",
"release notes: jit",
"topic: improvements",
"module: xpu"
] | 7
|
CONTRIBUTOR
|
Fixes #147366
1. Add `/permissive-` to the `torch_compile_options` for the build to conform to the C++ standard.
2. Fix the error when trying to assign a string literal to a non-const ptr.
The `/permissive-` flag can be found at https://learn.microsoft.com/en-us/cpp/build/reference/permissive-standards-conformance?view=msvc-170
From the above [doc](https://learn.microsoft.com/en-us/cpp/build/reference/permissive-standards-conformance?view=msvc-170#remarks),
> By default, the /permissive- option is set in new projects created by Visual Studio 2017 version 15.5 and later versions.
> The /permissive- option is implicitly set by the /std:c++latest option starting in Visual Studio 2019 version 16.8, and in version 16.11 by the /std:c++20 option.
Thus, it is reasonable to add this flag to the existing project.
cc @EikanWang @jgong5 @wenzhe-nrv @sanchitintel @peterjc123 @mszhanyi @skyline75489 @nbcsm @iremyux @Blackhex @gujinghui @fengyuan14 @guangyey
| true
|
2,913,164,681
|
Avoid regenerating template_kernels each time tuned_mm is called with tensors of the same shape.
|
laithsakka
|
open
|
[
"triaged",
"oncall: pt2",
"module: inductor"
] | 0
|
CONTRIBUTOR
|
When tuned_mm is called with tensors of the same shape, I expect the autotuning selection to always be the same. However, in one of the models I am working on, we noticed that the call to
```
mm_template.maybe_append_choice(
choices,
input_nodes=(mat1, mat2),
layout=layout,
**mm_options(config, m, n, k, layout),
)
```
takes around 30% of the time, roughly 15 minutes!
Looking at the model, there are a lot of mms with the same sizes passed (potentially with different symbols for the ones that are dynamic).
This should hopefully be avoidable!
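A hedged sketch of the memoisation idea (the helper and key below are hypothetical, not Inductor's actual API): cache the generated template choices on everything that can change the kernels, so repeated mms with the same shapes and layout reuse them.
```python
_template_choice_cache: dict = {}

def cached_mm_template_choices(size1, size2, layout_key, config_keys, build_choices):
    # size1/size2: tuples of (possibly symbolic) dims; layout_key/config_keys:
    # strings describing the layout and the tuning configs; build_choices: a
    # callable that runs mm_template.maybe_append_choice for every config.
    key = (
        tuple(map(str, size1)),
        tuple(map(str, size2)),
        layout_key,
        tuple(config_keys),
    )
    if key not in _template_choice_cache:
        _template_choice_cache[key] = build_choices()
    return _template_choice_cache[key]
```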
Repro:
```
import torch
@torch.compile()
def func(x, y, z, m):
a= torch.ops.aten.mm.default(x, y)
b= torch.ops.aten.mm.default(z, m)
return a, b
x = torch.rand(256, 32,device="cuda")
y = torch.rand(32, 256,device="cuda")
z = torch.rand(256, 32,device="cuda")
m = torch.rand(32, 256,device="cuda")
func(x,y, z,m)
```
Add prints like these to `tuned_mm`:
```
@register_lowering(aten.mm, type_promotion_kind=None)
def tuned_mm(mat1, mat2, *, layout=None):
print("I am called")
print(mat1)
print(mat2)
...
print(choices)
return autotune_select_algorithm(name, choices, [mat1, mat2], layout)
except NoValidChoicesError:
```
and make sure use_triton_template(layout) returns True.
Looking at the logs, you will see:
```
I am called
TensorBox(StorageBox(
InputBuffer(name='arg1_1', layout=FixedLayout('cuda:0', torch.float32, size=[256, 32], stride=[32, 1]))
))
TensorBox(StorageBox(
InputBuffer(name='arg0_1', layout=FixedLayout('cuda:0', torch.float32, size=[32, 256], stride=[256, 1]))
))
[<torch._inductor.select_algorithm.ExternKernelCaller object at 0x7f80fdd31990>, <torch._inductor.select_algorithm.TritonTemplateCaller object at 0x7f80fdd31de0>, <torch._inductor.select_algorithm.TritonTemplateCaller object at 0x7f80fdd33df0>, <torch._inductor.select_algorithm.TritonTemplateCaller object at 0x7f80fdc7b9a0>, <torch._inductor.select_algorithm.TritonTemplateCaller object at 0x7f80fdc7ba60>, <torch._inductor.select_algorithm.TritonTemplateCaller object at 0x7f80fdb08250>, <torch._inductor.select_algorithm.TritonTemplateCaller object at 0x7f80fdcc8520>, <torch._inductor.select_algorithm.TritonTemplateCaller object at 0x7f80fdb3ae60>, <torch._inductor.select_algorithm.TritonTemplateCaller object at 0x7f80fdcc8a90>, <torch._inductor.select_algorithm.TritonTemplateCaller object at 0x7f80fdcc8160>, <torch._inductor.select_algorithm.TritonTemplateCaller object at 0x7f80fdcc9f30>, <torch._inductor.select_algorithm.TritonTemplateCaller object at 0x7f80fdb089d0>, <torch._inductor.select_algorithm.TritonTemplateCaller object at 0x7f80fdb09e10>, <torch._inductor.select_algorithm.TritonTemplateCaller object at 0x7f80fdb63100>, <torch._inductor.select_algorithm.TritonTemplateCaller object at 0x7f80fdb7bc10>, <torch._inductor.select_algorithm.TritonTemplateCaller object at 0x7f80fdb605e0>, <torch._inductor.select_algorithm.TritonTemplateCaller object at 0x7f80fdcf1840>, <torch._inductor.select_algorithm.TritonTemplateCaller object at 0x7f80fdce8550>]
I am called
TensorBox(StorageBox(
InputBuffer(name='arg3_1', layout=FixedLayout('cuda:0', torch.float32, size=[256, 32], stride=[32, 1]))
))
TensorBox(StorageBox(
InputBuffer(name='arg2_1', layout=FixedLayout('cuda:0', torch.float32, size=[32, 256], stride=[256, 1]))
))
[<torch._inductor.select_algorithm.ExternKernelCaller object at 0x7f80fdb7b790>, <torch._inductor.select_algorithm.TritonTemplateCaller object at 0x7f80fdb786a0>, <torch._inductor.select_algorithm.TritonTemplateCaller object at 0x7f80fdce8370>, <torch._inductor.select_algorithm.TritonTemplateCaller object at 0x7f80fdceae90>, <torch._inductor.select_algorithm.TritonTemplateCaller object at 0x7f80fdbfd900>, <torch._inductor.select_algorithm.TritonTemplateCaller object at 0x7f80fc211090>, <torch._inductor.select_algorithm.TritonTemplateCaller object at 0x7f80fdb09750>, <torch._inductor.select_algorithm.TritonTemplateCaller object at 0x7f80fc2180d0>, <torch._inductor.select_algorithm.TritonTemplateCaller object at 0x7f80fc22ae60>, <torch._inductor.select_algorithm.TritonTemplateCaller object at 0x7f80fc21b130>, <torch._inductor.select_algorithm.TritonTemplateCaller object at 0x7f80fc22a8f0>, <torch._inductor.select_algorithm.TritonTemplateCaller object at 0x7f80fdb52590>, <torch._inductor.select_algorithm.TritonTemplateCaller object at 0x7f80fdb508b0>, <torch._inductor.select_algorithm.TritonTemplateCaller object at 0x7f80fdbfd7e0>, <torch._inductor.select_algorithm.TritonTemplateCaller object at 0x7f80fdb7a530>, <torch._inductor.select_algorithm.TritonTemplateCaller object at 0x7f80fc213bb0>, <torch._inductor.select_algorithm.TritonTemplateCaller object at 0x7f80fc21b430>, <torch._inductor.select_algorithm.TritonTemplateCaller object at 0x7f80fc212440>]
(myenv) [lsakka@devgpu005.nha1 ~/pytorch (wor_for_x_z)]$
```
cc @chauhang @penguinwu @voznesenskym @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @chenyang78 @kadeng @muchulee8 @amjames @aakhundov
| true
|
2,913,031,558
|
[regression] Fix pin_memory() when it is called before device lazy initialization.
|
BartlomiejStemborowski
|
closed
|
[
"module: regression",
"open source",
"Merged",
"ciflow/trunk",
"topic: not user facing"
] | 19
|
CONTRIBUTOR
|
PR #145752 added a check in isPinnedPtr to verify that a device is initialized before checking whether the tensor is pinned. That PR also added a lazy-initialization trigger when at::empty is called with the pinned param set to true. However, when a tensor is created first and then pinned in a separate pin_memory() call, lazy device init is not triggered, so is_pinned always returns false.
With this PR, the lazy initialization is moved to the getPinnedMemoryAllocator function, which ensures the device is initialized before we pin a tensor.
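A minimal repro sketch of the regression this fixes (assuming a CUDA-capable build, per #149032): pinning in a separate call after creation rather than via the pinned allocator at construction time.
```python
import torch

t = torch.randn(8)     # plain CPU tensor; no device initialization has happened yet
p = t.pin_memory()     # before this fix, the lazy device init was skipped on this path
print(p.is_pinned())   # expected True; the regression made this report False
```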
Fixes #149032
@ngimel @albanD
| true
|